Aaron Whitehouse | 18 Feb 18:13 2015

Use of --include-filelist-stdin and --exclude-filelist-stdin

Hello all,

Does anybody on the list use --include-filelist-stdin or
--exclude-filelist-stdin?

If so, would you mind please giving me an example of how you use them?
It isn't a feature that I use, but I want to make sure that I don't
break the behaviour with some changes I am making in the select module.
I could just make something up, but it would be nice to know that my
test at least represented one person's use-case.
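
In case it helps, a hypothetical invocation of the kind being asked about might look like the following (the paths and backup URL are placeholders, not a real user's setup); --exclude-filelist-stdin simply reads the exclude filelist, one path or glob per line, from standard input:

    # generate an exclude list on the fly and feed it to duplicity on stdin
    find /home -type d -name '.cache' | \
        duplicity --exclude-filelist-stdin /home scp://user@backup.example.com/home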

A little more info is here:
http://answers.launchpad.net/duplicity/+question/261866

Many thanks,

Aaron
Rupert Levene | 13 Feb 13:06 2015

pydrive backend: how to use storage of a user account instead of a service account?

Hi,

I have successfully made and verified a backup using the pydrive
backend written by Yigal Asnis. As per the man page, I made a service
account in the Google developers console; I did this while logged into
my user account.

My problem is that I want the backup to go into my user account's
Google drive space, but instead it ends up in the service account's
Google drive space. I know this because the files don't show up in my
user account's Google drive, but they do show up in the service
account's drive (I can check this with API calls).

This has two drawbacks:

(1) I would like to use the unlimited quota in my user account, but
the quota in the service account is restricted to 15 GB; and

(2) as far as I know the service account has no password, so I can't
see the backup through the usual Google drive web interface, android
apps etc.

Could the pydrive backend be modified to allow backups to the drive
storage of a user account rather than just a service account? I guess
the authentication process may be a bit more involved, but I believe
it can be done.

For example, gauth could be made in the following way, without using a
service account at all:

(Continue reading)
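
For readers unfamiliar with PyDrive, a user-account flow along the lines suggested above might look roughly like this; this is a sketch only, not the code from the truncated message, and the credentials file name is a placeholder:

    # minimal sketch of a user-account OAuth flow with PyDrive (not duplicity code);
    # assumes a client_secrets.json from the Google developers console is present
    from pydrive.auth import GoogleAuth
    from pydrive.drive import GoogleDrive

    gauth = GoogleAuth()
    gauth.LoadCredentialsFile("pydrive_creds.txt")   # placeholder credentials cache
    if gauth.credentials is None:
        # opens a browser so the user account grants access; no service account involved
        gauth.LocalWebserverAuth()
    elif gauth.access_token_expired:
        gauth.Refresh()
    else:
        gauth.Authorize()
    gauth.SaveCredentialsFile("pydrive_creds.txt")

    drive = GoogleDrive(gauth)   # files created via this object land in the user's Drive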

Mikko Ohtamaa | 13 Feb 03:46 2015

Failed to upload to S3 bucket (EU)

Hi,

I just had failed attempts to use Duplicity with S3 EU regions (Ireland, Frankfurt).

I tried both with

--s3-use-new-style --s3-european-buckets

... and with no command line options. I was using the URL:

s3://s3-eu-west-1.amazonaws.com/bucket/$SITENAME

This resulted in a Forbidden exception from boto. But using boto manually from a Python prompt still worked, so there is probably something Duplicity did not set up properly when creating boto connections.

However, after changing to the s3-us-west-2.amazonaws.com region everything started working like a charm, using the s3 url:

s3://s3-us-west-2.amazonaws.com/bucket/$SITENAME

Just for your information. Duplicity 0.7.1, boto 2.36.0.

If you come across this issue I recommend dropping in these lines:

    import boto
    boto.set_stream_logger('boto')

into get_connection() in _boto_single.py. This way you get some meaningful output from boto; otherwise Duplicity only gives a backend failure exception without a meaningful message payload.
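
For what it's worth, the manual check from a Python prompt described above, combined with the logging suggestion, can be done roughly like this (the bucket name and prefix are placeholders, and AWS credentials are assumed to be available in the environment):

    import boto

    # make boto log its requests/responses so a 403 shows the real S3 error body
    boto.set_stream_logger('boto')

    # connect explicitly against the EU (Ireland) endpoint, as in the s3:// URL above
    conn = boto.connect_s3(host='s3-eu-west-1.amazonaws.com')

    # if duplicity's Forbidden error is reproducible outside duplicity,
    # this get_bucket() call should raise S3ResponseError: 403 Forbidden
    bucket = conn.get_bucket('bucket')            # placeholder bucket name
    for key in bucket.list(prefix='SITENAME/'):   # placeholder prefix
        print(key.name)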

Mikko Ohtamaa | 12 Feb 23:44 2015

Duplicity + Amazon Glacier, 2015 edition

Hi,

I was thinking of trying out the Duplicity + Glacier combo. Mr. Internet found me various tutorials dating back several years. Apparently a lot of patching has happened in Duplicity since then.

What is the current approach to getting Duplicity backups flowing into Glacier (cheapish) storage?

Does old backup purging work in a similar manner to the way it works for other backends?
Jeffrey Yunes | 11 Feb 23:24 2015

2 incomplete backup sets

Hi duplicity experts,

I have a couple questions and suggestions.

Do you know why I would ever have "2 incomplete backup sets"? That is, why did duplicity start a new backup set before completing the first one? Is there a way I can continue an incomplete backup?

Also, can I restore a file from an incomplete backup set?

Finally, when my disk ran out of space, I eventually got an error that 
sounded like there was a network problem. I think it would be nice to 
have a more descriptive error message.

Thanks!
-Jeff

> sudo duplicity -v 9 list-current-files 
> sftp://user <at> s.mydomain.com//media/jeff/Part1/backup.duplicity/
>
> Warning, found incomplete backup sets, probably left from aborted 
> session
> Last full backup date: none
> Collection Status
> -----------------
> Connecting with backend: SSHParamikoBackend
> Archive dir: 
> /Users/jeff/.cache/duplicity/8adb7199de998b91885f18324fed345a
>
> Found 0 secondary backup chains.
> No backup chains with active signatures found
> Also found 0 backup sets not part of any chain,
> and 2 incomplete backup sets.
> These may be deleted by running duplicity with the "cleanup" command.
> Releasing lockfile <LinkLockFile: 
>
'/Users/jeff/.cache/duplicity/8adb7199de998b91885f18324fed345a/jeff-laptop-2013.local-7ee79300.2718-1486341856114572708' 
> -- 
> '/Users/jeff/.cache/duplicity/8adb7199de998b91885f18324fed345a/lockfile'>
> Using temporary directory /tmp/duplicity-jbWkiF-tempdir
> Traceback (most recent call last):
> File "/usr/local/Cellar/duplicity/0.6.25/libexec/bin/duplicity", line 
> 1509, in <module>
>   with_tempdir(main)
> File "/usr/local/Cellar/duplicity/0.6.25/libexec/bin/duplicity", line 
> 1503, in with_tempdir
>   fn()
> File "/usr/local/Cellar/duplicity/0.6.25/libexec/bin/duplicity", line 
> 1352, in main
>   do_backup(action)
> File "/usr/local/Cellar/duplicity/0.6.25/libexec/bin/duplicity", line 
> 1441, in do_backup
>   list_current(col_stats)
> File "/usr/local/Cellar/duplicity/0.6.25/libexec/bin/duplicity", line 
> 667, in list_current
>   sig_chain = col_stats.get_signature_chain_at_time(time)
> File 
>
"/usr/local/Cellar/duplicity/0.6.25/libexec/lib/python2.7/site-packages/duplicity/collections.py", 
> line 977, in get_signature_chain_at_time
>   raise CollectionsError("No signature chains found")
> CollectionsError: No signature chains found
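
(For reference, the "cleanup" command the output refers to is run against the same backend URL; a hypothetical invocation, reusing the URL from the quoted command, would be something like the line below. Without --force, duplicity only lists the extraneous files it would remove.)

    duplicity cleanup --force -v 9 sftp://user <at> s.mydomain.com//media/jeff/Part1/backup.duplicity/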
Rupert Levene | 8 Feb 15:05 2015

verify fails on 0.6.18 and 0.7.01

Hi,

I have made a large backup to gdocs using 0.6.18 from Ubuntu 12.04. A 2GB
file duplicity-full-signatures.20150205T170035Z.sigtar.gpg was
created, and duplicity chokes on this. It pauses for a while (I guess
while it downloads the file) and then says

Attempt 1 failed: BackendException: Failed to download file
'duplicity-full-signatures.20150205T170035Z.sigtar.gpg' in remote
folder 'i7-bigstore': join() result is too long for a Python string
Backtrace of previous error: Traceback (innermost last):
  File "/usr/lib/python2.7/dist-packages/duplicity/backend.py", line
311, in iterate
    return fn(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/duplicity/backends/gdocsbackend.py",
line 144, in get
    % (remote_filename, self.folder.title.text, str(e)), raise_errors)
  File "/usr/lib/python2.7/dist-packages/duplicity/backends/gdocsbackend.py",
line 182, in __handle_error
    raise BackendException(message)
 BackendException: Failed to download file
'duplicity-full-signatures.20150205T170035Z.sigtar.gpg' in remote
folder 'i7-bigstore': join() result is too long for a Python string

It'll then try again, and give the same error, at which point I hit control-C.

Version 0.7.01 fails more quickly:

$ apt-cache policy duplicity
duplicity:
  Installed: 0.7.01-0ubuntu0ppa1063~ubuntu12.04.1
  Candidate: 0.7.01-0ubuntu0ppa1063~ubuntu12.04.1
  Version table:
 *** 0.7.01-0ubuntu0ppa1063~ubuntu12.04.1 0
        500 http://ppa.launchpad.net/duplicity-team/ppa/ubuntu/
precise/main i386 Packages
        100 /var/lib/dpkg/status
     0.6.18-0ubuntu3.5 0
        500 http://ie.archive.ubuntu.com/ubuntu/ precise-updates/main
i386 Packages
     0.6.18-0ubuntu3 0
        500 http://ie.archive.ubuntu.com/ubuntu/ precise/main i386 Packages
$ duplicity -V
duplicity 0.7.01
$ duplicity verify --verbosity '8' --exclude-globbing-filelist
/home/rupert/.duply/i7-bigstore/exclude
gdocs://rupert.levene <at> ucd.ie/backups/duplicity/i7-bigstore
/mnt/bigstore
Using archive dir: /home/rupert/.cache/duplicity/xxxxxxxxxxxxxxxxxxxxx
Using backup name: xxxxxxxxxxxxxxxxxxxxx
Import of duplicity.backends.azurebackend Succeeded
Import of duplicity.backends.botobackend Succeeded
Import of duplicity.backends.cfbackend Succeeded
Import of duplicity.backends.copycombackend Succeeded
Import of duplicity.backends.dpbxbackend Succeeded
Import of duplicity.backends.gdocsbackend Succeeded
Import of duplicity.backends.giobackend Succeeded
Import of duplicity.backends.hsibackend Succeeded
Import of duplicity.backends.hubicbackend Succeeded
Import of duplicity.backends.imapbackend Succeeded
Import of duplicity.backends.lftpbackend Succeeded
Import of duplicity.backends.localbackend Succeeded
Import of duplicity.backends.megabackend Succeeded
Import of duplicity.backends.ncftpbackend Succeeded
Import of duplicity.backends.onedrivebackend Failed: No module named requests
Import of duplicity.backends.par2backend Succeeded
Import of duplicity.backends.pydrivebackend Succeeded
Import of duplicity.backends.rsyncbackend Succeeded
Import of duplicity.backends.ssh_paramiko_backend Succeeded
Import of duplicity.backends.ssh_pexpect_backend Succeeded
Import of duplicity.backends.swiftbackend Succeeded
Import of duplicity.backends.sxbackend Succeeded
Import of duplicity.backends.tahoebackend Succeeded
Import of duplicity.backends.webdavbackend Succeeded
Using temporary directory /tmp/duplicity-lCUBfI-tempdir
Traceback (most recent call last):
  File "/usr/bin/duplicity", line 1497, in <module>
    with_tempdir(main)
  File "/usr/bin/duplicity", line 1491, in with_tempdir
    fn()
  File "/usr/bin/duplicity", line 1324, in main
    action = commandline.ProcessCommandLine(sys.argv[1:])
  File "/usr/lib/python2.7/dist-packages/duplicity/commandline.py",
line 1056, in ProcessCommandLine
    backup, local_pathname = set_backend(args[0], args[1])
  File "/usr/lib/python2.7/dist-packages/duplicity/commandline.py",
line 949, in set_backend
    globals.backend = backend.get_backend(bend)
  File "/usr/lib/python2.7/dist-packages/duplicity/backend.py", line
221, in get_backend
    obj = get_backend_object(url_string)
  File "/usr/lib/python2.7/dist-packages/duplicity/backend.py", line
207, in get_backend_object
    return factory(pu)
  File "/usr/lib/python2.7/dist-packages/duplicity/backends/gdocsbackend.py",
line 61, in __init__
    entries = self._fetch_entries(parent_folder_id, 'folder', folder_name)
  File "/usr/lib/python2.7/dist-packages/duplicity/backends/gdocsbackend.py",
line 161, in _fetch_entries
    entries = self.client.get_all_resources(uri=uri)
AttributeError: 'DocsClient' object has no attribute 'get_all_resources'

Are these two independent bugs? Is there a workaround or fix?

Rupert
Christian Saga | 1 Feb 23:35 2015

Version 4 authorisation for Amazon S3 buckets

Hi there,
I am getting the error "The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256 duplicity" on a duplicity backup to Amazon S3.

This seems to be related to the bucket being in the Frankfurt region of Amazon S3, as this region only supports V4 of the authorisation scheme.
A bug was opened for this under: https://bugs.launchpad.net/duplicity/+bug/1407966

Is there any indication of if/when this bug will be solved?
I want to start a new backup, which is quite time-consuming. If I know this will take a while, I will do the backup in one of the other EU regions; otherwise I will wait.

Regards
  Christian

mailinglist | 1 Feb 13:24 2015

BackendException: Boto requires a bucket name.

Dear List,

I had been running duplicity 0.6.24 with S3 fine for quite some time on FreeBSD 8.4. I think I upgraded py-boto and python a few weeks ago, and since then I cannot get duplicity working again.

Now I have py-boto 2.35.1, python 2.7.9 and duplicity 0.7.01 and I'm 
getting the following error with the following connection setting: 
"--s3-european-buckets --s3-use-new-style s3+http://[bucket_name]".

Backend error detail: Traceback (most recent call last):
   File "/usr/local/bin/duplicity", line 1500, in <module>
     with_tempdir(main)
   File "/usr/local/bin/duplicity", line 1494, in with_tempdir
     fn()
   File "/usr/local/bin/duplicity", line 1327, in main
     action = commandline.ProcessCommandLine(sys.argv[1:])
   File 
"/usr/local/lib/python2.7/site-packages/duplicity/commandline.py", line 
1047, in ProcessCommandLine
     globals.backend = backend.get_backend(args[0])
   File "/usr/local/lib/python2.7/site-packages/duplicity/backend.py", 
line 221, in get_backend
     obj = get_backend_object(url_string)
   File "/usr/local/lib/python2.7/site-packages/duplicity/backend.py", 
line 207, in get_backend_object
     return factory(pu)
   File 
"/usr/local/lib/python2.7/site-packages/duplicity/backends/_boto_single.py", 
line 145, in __init__
     raise BackendException('Boto requires a bucket name.')
BackendException: Boto requires a bucket name.

BackendException: Boto requires a bucket name.

According to the documentation this URL format is valid. What could be 
the cause of this problem?
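
For comparison, the two S3 URL styles the man page describes are roughly the following (bucket name and prefix here are placeholders; whether 0.7.01 now parses them differently is exactly the question):

    # s3+http:// form: the bucket name appears in place of the hostname
    duplicity /some/dir --s3-use-new-style --s3-european-buckets s3+http://my-bucket/some-prefix

    # s3:// form: explicit Amazon host, bucket as the first path component
    duplicity /some/dir s3://s3-eu-west-1.amazonaws.com/my-bucket/some-prefix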

Best regards,
Mate
Remy van Elst | 25 Jan 18:33 2015

Large amount of small changing files

I have a few servers with a huge number (4+ million) of small (< 5 MB) .TIFF files which change weekly (a custom geo-data mapping application).

I want to back this up to an OpenStack Swift object store. The command line I use is:

duplicity --asynchronous-upload --verbosity 5 --log-file
/var/log/duplicity.log --volsize 250 --tempdir="/root/tmp"
--file-prefix="fs2." --name="fs2." --exclude-device-files
--exclude-globbing-filelist=/etc/duplcity/exclude.conf
--full-if-older-than="14D" --no-encryption  / swift://fs2-backup

The Duplicity version is 0.7; however, 0.6.24 also had this problem.

The files change while the backup is running.

The backup takes multiple days to complete; it varies between 7 and 13. The chain is never complete, so only full backups are made. The swift container does have data, albeit with a huge signatures file. Sometimes the backup breaks because the signatures file is > 5 GB (the swift limit), but I also have broken backup chains with a 700 MB signature file.

Broken means that restoring from the backup always fails.

LVM snapshots are not possible on this machine, since some idiot chose to install it with one big ext4 partition without LVM.

I want to have a chain of 14 daily incremental backups and 1 full backup, keeping 4 full backups. The cronjob runs at 4 AM.

What can I do to get regular, working duplicity backups in this situation?
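
For the retention side (14-day chains, keeping 4 fulls), the usual combination is --full-if-older-than as already used above, plus a periodic purge of old chains; a hypothetical purge invocation, reusing the name/prefix and target URL from the command above, would be something like:

    # keep the 4 most recent full backups (and their incrementals), delete older chains
    duplicity remove-all-but-n-full 4 --force --file-prefix="fs2." --name="fs2." swift://fs2-backup
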
Attachment (0x1B7F88DC.asc): application/pgp-keys, 4753 bytes
Attachment (smime.p7s): application/pkcs7-signature, 5032 bytes
Martin Pool | 23 Jan 17:43 2015

new librsync release

Hi,

I've just made a new librsync release <https://github.com/librsync/librsync/releases/tag/v1.0.0>.

This includes a fix for a security bug reported by therealmik that's relevant to Duplicity, <https://github.com/librsync/librsync/issues/5>.

Unfortunately the fix for this necessitates a change in the signature file format, and so probably also a new version of the Duplicity archive format. librsync can still read the old format, but writing it is deprecated.

Regards,
--
Martin
Josh Triplett | 20 Jan 16:37 2015

Support for current Google Drive API?

[Please CC me on replies.]

The current Duplicity "gdocs" backend uses the deprecated "Google
Documents List Data API"
(https://developers.google.com/google-apps/documents-list/), which says:

> Warning: The deprecation period for Version 3 of the Google Documents
> List API is nearly at an end. On April 20, 2015, we will discontinue
> service for this API. This means that service calls to the API are no
> longer supported, and features implemented using this API will not
> function after April 20, 2015. You must migrate to the Drive API as
> soon as possible to avoid disruptions to your application.

Any plans to migrate to the Drive API?

Also, switching to the new API would avoid the need to supply a Google
username and password; instead, Duplicity could obtain an access token
with the appropriate permission (drive.file) to access only files
created with that token.
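
As a rough illustration of what such a token-based flow looks like with the then-current google-api-python-client and oauth2client libraries (the client ID and secret are placeholders; this is an illustration, not proposed duplicity code):

    import httplib2
    from apiclient.discovery import build
    from oauth2client.client import OAuth2WebServerFlow

    # request only the drive.file scope: access is limited to files this app creates
    flow = OAuth2WebServerFlow(client_id='CLIENT_ID',          # placeholder
                               client_secret='CLIENT_SECRET',  # placeholder
                               scope='https://www.googleapis.com/auth/drive.file',
                               redirect_uri='urn:ietf:wg:oauth:2.0:oob')

    print(flow.step1_get_authorize_url())    # the user visits this URL and grants access
    code = raw_input('Paste the authorization code: ')
    credentials = flow.step2_exchange(code)

    # no Google username or password ever passes through the application
    drive = build('drive', 'v2', http=credentials.authorize(httplib2.Http()))
    print(drive.files().list().execute())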

- Josh Triplett
