Merge daily backups?

Hi Duplicity,

My use-case is similar to this thread:

But my backups go over the internet to a remote host, so my full 
backup will take weeks.

And I want to do daily incrementals.

According to the above thread, that would be a bad idea.  Ghozlane 
recommended the remove-older-than and remove-all-inc-of-but-n-full options.

Does remove-all-inc-of-but-n-full merge the incrementals into one set?

Ideally I'd like to have my daily backups result in just a few 
incremental sets (varying between 1 and n depending on how often I 
merge).  The idea of thousands of incremental sets, reliant on each 
other for the backup's integrity, is scary.
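
For reference, a sketch of how the two options Ghozlane mentioned are usually invoked (the target URL is a placeholder, and as far as I can tell neither option merges incrementals; they only delete sets that are no longer needed):

```shell
# Keep the incrementals of the newest full backup only; delete the
# incrementals belonging to all older fulls (the fulls themselves remain).
duplicity remove-all-inc-of-but-n-full 1 --force sftp://user@remote.example/backup

# Alternatively, delete every complete backup chain older than six months.
duplicity remove-older-than 6M --force sftp://user@remote.example/backup
```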

Am I off in the weeds here?


Duplicity process killed


Since the recent upgrade to Duplicity 0.7.08 I'm having problems with
the process being killed spontaneously.

I start Duplicity as follows:

nice /usr/bin/duplicity --verbosity 9 --full-if-older-than 1M
--archive-dir=/root/Backup/cache/ --include-globbing-filelist
/root/Backup/backup.list --exclude '**' / gs://bobo-backup/

The output is as follows:

Using archive dir: /root/Backup/cache/01f81df3eb37282f76c93fb89a3708f5
Using backup name: 01f81df3eb37282f76c93fb89a3708f5
Import of duplicity.backends.acdclibackend Succeeded
Import of duplicity.backends.azurebackend Succeeded
Import of duplicity.backends.b2backend Succeeded
Import of duplicity.backends.botobackend Succeeded
Import of duplicity.backends.cfbackend Succeeded
Import of duplicity.backends.copycombackend Succeeded
Import of duplicity.backends.dpbxbackend Failed: No module named dropbox
Import of duplicity.backends.gdocsbackend Succeeded
Import of duplicity.backends.giobackend Succeeded
Import of duplicity.backends.hsibackend Succeeded
Import of duplicity.backends.hubicbackend Succeeded
Import of duplicity.backends.imapbackend Succeeded
Import of duplicity.backends.lftpbackend Succeeded
Import of duplicity.backends.localbackend Succeeded
Import of duplicity.backends.mediafirebackend Succeeded

Adam M. via Duplicity-talk | 10 Jul 20:38 2016

ReadError: unexpected end of data


I've been using duplicity (0.6.something, and now with Python  
2.7.12) via duply.
After one full and several incremental backups, all successful, it  
suddenly stopped working with a "ReadError: unexpected end of data".
I now get this error every time.

The end of a run with verbosity=9 looks like:

Selection: examining path /path/profile_name/XXX/XXX/XXX.XXX
Selection:     result: None from function: Command-line exclude glob: /YYY/YYY
Selection:     + including file
Selecting /path/profile_name/XXX/XXX/XXX.XXX

Releasing lockfile /XXX/XXX/lockfile.lock
Removing still remembered temporary file /tmp/XXX/XXX
Removing still remembered temporary file /tmp/YYY/YYY

Traceback (most recent call last):
   File "/usr/bin/duplicity", line 1544, in <module>
   File "/usr/bin/duplicity", line 1538, in with_tempdir
   File "/usr/bin/duplicity", line 1392, in main
   File "/usr/bin/duplicity", line 1520, in do_backup

Duplicity 0.7.08 Released


It's been a while.  Thanks to the contributors for all the hard work!

Full details of the release and the tarball can be found at Milestone 0.7.08

Duplicity-talk mailing list
Duplicity-talk <at>

duplicity 0.7 slowness


We recently upgraded a couple of machines to 0.7 and it feels like everything is way slower than on
0.6. We use duply and run `duply $profile status` through icinga/nrpe. On 0.6.x it took under 30 seconds
to return the status info; on 0.7 it takes around 5-10 minutes.

I also have a feeling that backups take longer, but that is hard to measure because the file server is
pretty busy at times and there are too many things that could slow it down. Collection-status
shouldn't do much network work anyway, should it?

Did something change significantly in 0.7? I saw that an fsync issue was fixed, but that doesn't seem to make a real difference.



Detecting renames


I am considering adding a flag that looks for files that have been
renamed or hard-linked. I want this so that I can back up my emails more
efficiently; I frequently move files to different folders, but I never
change the contents of the files. I am curious as to whether this has
been tried before and whether anyone else would use it.

I imagine that I would maintain a mapping from inode and
hash-of-file-contents to a set of file names that currently point to the
inode. When duplicity is run with this flag, it would first walk the
filesystem and mark what has changed. I presently only really care about
renames, not multiple hard links to the same file, so I think I would
just update the globals.rename dict (as the --rename flag does) for
simple renames; hard links would still be treated as copies.
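
A minimal Python sketch of the index described above (the function names and the choice of sha256 are my own illustration, not anything in duplicity):

```python
import hashlib
import os


def snapshot(root):
    """Map (inode, content-hash) -> relative path for every file under root."""
    index = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            index[(st.st_ino, digest)] = os.path.relpath(path, root)
    return index


def detect_renames(old_index, new_index):
    """Return {old_path: new_path} for files whose (inode, hash) key is
    unchanged between snapshots but whose path differs: simple renames."""
    renames = {}
    for key, old_path in old_index.items():
        new_path = new_index.get(key)
        if new_path is not None and new_path != old_path:
            renames[old_path] = new_path
    return renames
```

The resulting {old: new} mapping is exactly the shape that could be fed into a globals.rename-style dict before the main walk.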

If there's a good reason why this won't work, maybe I won't do it, and
if other people think they would use it, maybe I will do it.


Compression Levels

Currently I am doing an encrypted backup and can't find any documentation on how to set the compression level.  I have seen mention of a COMPRESSION_LEVEL variable and --gpg-options, but no details.

The drive I am backing up is around 17.4GB and the backup is around 14GB, so I guess I am saving around 20%, but I am wondering if there is a way to save more.
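
For what it's worth, when the backup is encrypted it is gpg that does the compressing, and gpg's level can be passed through --gpg-options; a hypothetical invocation (paths and URL are placeholders):

```shell
# gpg compresses before encrypting; levels run 1 (fastest) to 9 (smallest),
# with 6 as the default.
duplicity --gpg-options '--compress-level=9' /source sftp://user@host/target
```

Note that already-compressed data (video, JPEG, archives) barely shrinks at any level, which may explain a modest ratio.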

Ben Edwards, Video Editor and Cameraman mobile:07773 02 44 82 skype:funkytwig twitter: <at> funkytwig
iContact Community Video | Bristol Community Channel

(no subject)

Hi there,

so I am back on version 1.9.1-1 and the exception quoted below is gone (the globbing-file warning is back, though). It seems the combination I tried below produces the exceptions. Does anyone have a clue why, or whether it has any relevance for later versions?
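
For context, the NoAuthHandlerFound error quoted below generally means boto could not find any S3 credentials; duplicity's S3 backend picks them up from the environment, roughly like this (key values are placeholders):

```shell
# boto looks these up when duplicity opens the s3:// backend
export AWS_ACCESS_KEY_ID='AKIAXXXXXXXX'
export AWS_SECRET_ACCESS_KEY='XXXXXXXXXXXX'
duplicity /source 's3://bucket/path'
```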



On 11.06.2016 23:31, list-christian--- via Duplicity-talk wrote:

Hi there,

I switched to the duply version from the unstable branch and now have 1.11-1 running. However, at the regular backup time I got the error below. I am not sure whether just the version switch caused this, so I went back to the old version to see what happens. I will report back as soon as I know.

For reference again the other versions:
python-boto 2.40.0-1



Traceback (most recent call last):
  File "/usr/bin/duplicity", line 1539, in <module>
  File "/usr/bin/duplicity", line 1533, in with_tempdir
  File "/usr/bin/duplicity", line 1371, in main
    action = commandline.ProcessCommandLine(sys.argv[1:])
  File "/usr/lib/python2.7/dist-packages/duplicity/", line 1116, in ProcessCommandLine
    backup, local_pathname = set_backend(args[0], args[1])
  File "/usr/lib/python2.7/dist-packages/duplicity/", line 1005, in set_backend
    globals.backend = backend.get_backend(bend)
  File "/usr/lib/python2.7/dist-packages/duplicity/", line 223, in get_backend
    obj = get_backend_object(url_string)
  File "/usr/lib/python2.7/dist-packages/duplicity/", line 209, in get_backend_object
    return factory(pu)
  File "/usr/lib/python2.7/dist-packages/duplicity/backends/", line 161, in __init__
  File "/usr/lib/python2.7/dist-packages/duplicity/backends/", line 183, in resetConnection
    self.conn = get_connection(self.scheme, self.parsed_url, self.storage_uri)
  File "/usr/lib/python2.7/dist-packages/duplicity/backends/", line 99, in get_connection
    is_secure=(not globals.s3_unencrypted_connection))
  File "/usr/lib/python2.7/dist-packages/boto/", line 117, in connect
  File "/usr/lib/python2.7/dist-packages/boto/s3/", line 191, in __init__
    validate_certs=validate_certs, profile_name=profile_name)
  File "/usr/lib/python2.7/dist-packages/boto/", line 569, in __init__
    host, config, self.provider, self._required_auth_capability())
  File "/usr/lib/python2.7/dist-packages/boto/", line 989, in get_auth_handler
    'Check your credentials' % (len(names), str(names)))
NoAuthHandlerFound: No handler was ready to authenticate. 1 handlers were checked. ['S3HmacAuthV4Handler'] Check your crede$
03:30:03.659 Task 'BKP' failed with exit code '30'.
On 10.06.2016 12:34, edgar.soldin--- via Duplicity-talk wrote:
On 10.06.2016 12:09, Christian via Duplicity-talk wrote:
Hi, for the versions: duplicity, duply 1.5.10-1 <- Yes, I am using duply, sorry for the confusion. Is there an extra mailing list?
nope, duply has no own ml
The command line I generate is:

duplicity --name 'duply_profile' --encrypt-key 40XXXXXX --sign-key 40XXXXXX --verbosity '4' --include-filelist /etc/duply/profile/filelist.txt --s3-use-new-style --s3-european-buckets --full-if-older-than 6M --volsize 100 --exclude-globbing-filelist '/etc/duply/profile/exclude' '/' 's3://'

You can see that it uses the --exclude-globbing-filelist. However, it is not activated in the config.
it's activated by default, even when empty. Simply update to the latest duply, 1.11.3, and you should be set. ..ede/


How often should I verify

Verify seems to be a lot slower than the actual backup.  I was planning to run one after each backup but am now thinking of doing it once a day, at night.  Is this advisable?


Six-monthly rolling backups

Hi, I'm new here and have set up a duplicity backup for an ownCloud server.  So far I have done one full backup and am running incremental backups every 30 minutes.  I guess whatever I do I will at some point have to keep two full backups, but I would like to avoid even that if possible.

So I guess I need to do a full backup every six months, and I think I then need to tell duplicity to delete anything over six months old.  Or is there a better way of doing it?  I am backing up onto a NAS (we have two buildings on the LAN, so we are off-siting a couple of doors down).  The ownCloud volume is 1TB and the backup drive is 3TB (all RAID 1).

I am a little worried about what happens if one of the incrementals gets corrupted.  Will the verify command tell me?  If one is corrupted, will I lose everything, or is there a way of recovering some of the backup?

Any help with command-line examples would be great.  I think I can tell duplicity to delete everything older than six months on the incremental backup command line, but I will have to schedule the full backups with cron.  Will duplicity delete the incremental and full backups for me?
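
A hypothetical crontab along those lines (paths, times, and the ownCloud data directory are placeholders to adapt):

```shell
# Every 30 minutes: incremental backup of the ownCloud data directory.
*/30 * * * *  duplicity incremental /var/www/owncloud file:///mnt/nas/backup
# At 03:00 on 1 January and 1 July: start a new full backup.
0 3 1 1,7 *   duplicity full /var/www/owncloud file:///mnt/nas/backup
# At 05:00 the same days: delete chains older than six months.
0 5 1 1,7 *   duplicity remove-older-than 6M --force file:///mnt/nas/backup
```

As far as I know, remove-older-than only deletes complete chains (a full plus its incrementals), so it should never remove a full backup that newer incrementals still depend on.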



Re: [Duplicity-team] Python 2.7+ for 0.8 series?

On 14.06.2016 11:46, Aaron wrote:
> Hello all,
> Does anybody have any objections to dropping Python 2.6 support in the 0.8 series and making the
requirements 2.7+? Even Ubuntu Precise (12.04) is running Python 2.7.3.

sounds reasonable, at least for 0.8.. ede/