Rubin Abdi | 23 Jul 03:57 2014

Re: Backing up desktop virtual machines?

Laurence Perkins (OE) wrote on 2014-07-22 09:52:
> My attempts to reply to the list don't seem to be getting through, so
> I'll send you my best guess as to the problem directly and you can
> forward it on to the list if I am correct.

Maybe the list admin is around and will see this.

> Duplicity only backs up changed parts of the file, but must scan the
> entire file to determine which parts have changed.  If your .VDIs for
> your VMs are large, this will take a while.  Try snapshotting the VMs.
> This will cause all changes to the disks to be written to a separate,
> potentially much smaller file.  If you snapshot after each incremental,
> then all the changes for the VM will be in their own, small file that
> duplicity can just pick up and add to the archive without having to scan
> the big, unchanged root VDI.

Thanks to both you and Edgar for pointing that out. I had no idea.
That's kind of awesome. It now makes sense that the slowdown comes
simply from Duplicity having to scan a large file in order to find the
diffs.

> Then, before running your next full backup, merge all the snapshots back
> into the main image.
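The snapshot dance described above can be scripted; here is a hedged
sketch (VM name, paths, target URL and snapshot names are invented, and
the VBoxManage invocations should be checked against your VirtualBox
version):

```shell
VM="dev-box"
TARGET="sftp://user@host/backups/vms"

# Before each incremental: freeze the big base VDI. Subsequent guest
# writes go to a small differencing image, which is all duplicity has
# to scan and upload.
VBoxManage snapshot "$VM" take "pre-backup-$(date +%Y%m%d)"
duplicity incremental ~/"VirtualBox VMs" "$TARGET"

# Before the next full backup: merge snapshots back into the base image
# (in VirtualBox, deleting a snapshot merges it into its parent).
VBoxManage snapshot "$VM" delete "pre-backup-20140722"
duplicity full ~/"VirtualBox VMs" "$TARGET"
```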

I'll try using VirtualBox with snapshots and see how that goes. I've
only been using Duplicity for a month. How often should I do a full
backup if I'm running it for my whole laptop?



Aaron Whitehouse | 21 Jul 23:06 2014

Re: Behaviour/Man page of --verify and --compare-restore (was Big bandwidth bill from Rackspace)

Sending this again as it doesn't seem the first one made it to the list.
From: edgar.soldin <at>
> On 10.06.2013 21:56, James Patterson wrote:
>> Thanks. I would suggest: "Enter verify mode instead of restore. This
>> will restore each file from the latest backup and compare it to the
>> local copy. If the --file-to-restore option is given, restrict verify
>> to that file or directory. duplicity will exit with a non-zero error
>> level if any files are different. On verbosity level 4 or higher, log
>> a message for each file that has changed."
>
> It's taken a year, but good things etc. Here is the change's branch:

That does seem a better description of how I understand the behaviour to
work, but I am still not convinced that description is completely
correct, or even that it reflects what verify should actually be doing.
Kenneth explained the design of verify here (comment #13):

"Duplicity does verify the contents of the archives *as they were*, it
does not do a comparison with the contents on the filesystem. Verify is
done by comparing the archive contents with the stored signatures, i.e.
the original file with its hash value."

So on that basis, saying that it will restore each file and compare it
to the local copy is a little misleading, though the latest manpage does
clarify that you need the --compare-data option to enable data
comparison. I have suggested some text below.

This has also reminded me of a discussion that we have had a few times
now. In my view, by default verify should not be concerned with the
current contents of the filesystem at all, whether that is actual file
contents or timestamps. This is particularly the case now that we have
the --compare-data option that people can use if they want this
functionality. If you agree, feel free to fast-forward to the end and
let me know - the rest of this traverses the various comments we have
had on this topic to date, so that we don't go over the same ground
again. As per Kenneth (again, comment #13):

"The assumption is that the filesystem will probably change shortly
after backup. What you look for in a verify is a check to see if the
backup is stored properly and can be restored. If you want a comparison
function, you'll need to restore and compare the original with the
restored files, or provide a direct comparison function for us to
integrate into duplicity. If you want to test verify, backup to a local
file system, hexedit one of the archives and try to verify. It will
fail to verify. You can modify the original files at will, and verify
will succeed, as it is designed to do."

I agree with that design decision, though I don't believe verify will in
fact succeed if one modifies the original files - even though the
original file contents are not checked (unless the new --compare-data
option is used), the timestamps are. As per Edgar (comment #1):

"we really should remove the functionality that verify, in addition to
checking the backup's integrity, is comparing dates/modtimes with the
backup's source. here a citation from the mailing list lately: -->

2. Why we get 'Difference found: File etc/resolv.conf has mtime Wed Jan
19 09:49:14 2011, expected Wed Jan 19 00:21:25 2011' lines on this
process?

confusing, isn't it. For reasons not transparent to me, additionally to
verifying the backed up data, verify also compares the date with the
source. This should be removed from my point of view. It could be part
of a new command, compare, which actually really compares backup with
source. <--"

Peter Schuller echoed this sentiment (comment #16):

"If the intent of verify is just to verify internal integrity, why is a
file system even involved in the process (i.e., why even compare a file
system hierarchy at all)?"

As I mentioned here, this causes me issues because duplicity errors when
the file-system changes shortly after a backup (Kenneth's "assumption"
mentioned above). We now have that new separate option (--compare-data).
Consistent with the various comments to date, I therefore propose that
the comparison of dates/modtimes is only carried out if --compare-data
is used. On that basis, verify would not give an error if the
file-system changes after the backup, so long as it can restore the
files and they match the signatures from the time of the backup.

If we are all agreed conceptually, I will file a bug and have a go at
making this work. I would also then suggest the above man page text
read:

"Enter verify mode instead of restore. Verify tests the integrity of the
backup archives at the remote location by checking that each file can
restore and that the restored file matches the signature of that file
stored in the backup, i.e. it compares the archived file with its hash
value from archival time. Verify does not actually restore and will not
overwrite any local files. If the --file-to-restore option is given, it
will restrict verify to that file or directory. The --time option allows
the selection of a specific backup to verify. Duplicity will exit with a
non-zero error level if any files do not match the signature stored in
the archive for that file. On verbosity level 4 or higher, it will log a
message for each file that differs from the stored signature. Files must
be downloaded to the local machine in order to compare them. Verify does
not compare the backed-up version of the file to the current local copy
of the files unless the --compare-data option is used (see below)."

Aaron
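For readers following along, the practical difference can be sketched as
below (the target URL and source directory are placeholders, and
--compare-data requires a duplicity release that includes it):

```shell
# Integrity-only: download the archives and check each file against the
# signatures stored at backup time; current local file contents are not
# compared, and nothing local is overwritten.
duplicity verify sftp://user@host/backups /home/user

# Additionally compare the restored data against the current local files.
duplicity verify --compare-data sftp://user@host/backups /home/user
```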
Duplicity-talk mailing list
Duplicity-talk <at>
Rubin Abdi | 21 Jul 07:50 2014

Backing up desktop virtual machines?


So I've been using Duplicity for the last week (wrapped with duply) and
it's been great. My only current sore spot is that if I touch any of my
virtual machines (through VirtualBox), a backup jumps from 15 minutes
to over 2 hours. And so I have three questions.

For the first one I'm pretty sure the answer is no: is there any way to
back up only the changes within the VDI volume container file to my
Duplicity session?

Given that the answer to the first question is no, is there any sane way
of ignoring my virtual machine directory for incremental backups until I
decide it's time to also include the new changes to the virtual
machines?

And so, is there any way to have Duplicity only maintain two versions of
files in a particular directory while still having that be included in a
full system backup?

If the answer to question two is either no, or too much of a pain, I'm
guessing my only real solution is to have a separate Duplicity session
for the virtual machines that I run once in a while, keeping only two
revisions.
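If it helps, that separate-session idea maps onto duplicity's own
retention commands; a hedged sketch with invented paths and target URL:

```shell
TARGET="sftp://user@host/backups/vms"

# Dedicated session for the VM directory, run only when you decide the
# new VM state is worth keeping.
duplicity full ~/"VirtualBox VMs" "$TARGET"

# Keep only the two most recent full backups in that session.
duplicity remove-all-but-n-full 2 --force "$TARGET"
```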

Am I thinking correctly about all this? Thanks!


rubin <at>

Erik Romijn | 20 Jul 19:39 2014

duplicity collection status slowness

Hello all,

I'm using duplicity to run a few backups for my servers, and have
generally found it to work very well. However, although my data is
incredibly tiny, duplicity has become incredibly slow, which I think
I've narrowed down to the collection-status step.

My source file size is only 20MB, but running this backup takes about 7
minutes and is almost completely CPU-bound. Running collection-status
alone takes nearly the same amount of time, so it would seem that this
is where the slowness comes from.

I make incremental backups every 15 minutes, with a full backup after 23
hours, so 92 sets per day. I currently have 19 backup chains according
to collection-status, and there are no orphaned or incomplete sets. In
total the destination volume is now 154MB. Running verify confirms that
the backups are correct.

These numbers are for the backups of my /var/log, but I have another backup of an unrelated directory of
about 300MB on the same backup schema, which shows similar numbers for collection status.

One workaround would be for me to move older files away from the
duplicity destination, so that the total collection appears smaller. But
that leaves me to wonder: why does collection-status take so much time,
particularly considering it's CPU-bound?
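One possible angle: with 19 chains of up to 92 sets each,
collection-status has on the order of 1,700 volumes' worth of manifests
to account for, so pruning old chains should shrink the work roughly
proportionally. A hedged sketch (the target URL is a placeholder and the
retention count is arbitrary):

```shell
TARGET="sftp://user@host/backups/log"

# Drop all but the three most recent full chains, then re-time the status.
duplicity remove-all-but-n-full 3 --force "$TARGET"
time duplicity collection-status "$TARGET"
```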

I'm running duplicity 0.6.23 with python 2.7.6 on an Ubuntu 14.04 VPS.

The full duplicity command line I use is:
/usr/bin/duplicity --full-if-older-than 23h --encrypt-sign-key [...] --verbosity info
--ssh-options=-oIdentityFile=/root/.ssh/backup_rsa --exclude-globbing-filelist
/root/duplicity_log_exclude_filelist.txt /var/log sftp://[...] <at> [...]/[...]/backups/log

Can anyone here provide insights into what might be the issue, and what would be the best approach to tackle this?

Erik | 3 Jul 22:32 2014

local_path.exists() fails when testing a new backend

Hi guys,

as you probably know, I'm working on the Duplicity backend for the
Skylable service. I'm basing my work on your "devel" branch (0.7.x) and
you can find my work in progress here:

Now, let's get to the issue. It looks like I'm having problems
implementing the _get method of my backend.

At a high level, this is the error I get when I execute a command like this:

PYTHONPATH=. ./bin/duplicity -v9 sx:// ./test-bkp

output here:

Debugging my code with ipdb I can see this:

> /home/andrea/Downloads/duplicity/duplicity/backends/
     38         commandline = "sxcp {0} {1}".format(remote_path,
---> 39         self.subprocess_popen(commandline)

ipdb> n
> /home/andrea/Downloads/duplicity/duplicity/
    540             self.backend._get(remote_filename, local_path)
--> 541             if not local_path.exists():
    542                 raise BackendException(_("File %s not found
locally after get "

ipdb> local_path.exists()
ipdb> local_path
(() /tmp/duplicity-V8v6rE-tempdir/mktemp-0tsmWi-2 None)

as you can see, the .exists() method doesn't return anything, while
inspecting the object shows a file that I've verified does exist (the
only two things that I don't understand are: is that empty tuple at the
beginning ok? Is the None at the end ok?):

andrea-Inspiron-660:duplicity andrea [master] $ ls -al
-rw-rw-r-- 1 andrea andrea 4455795 lug  3 21:20

The code of course fails because the exists() check fails:

> /home/andrea/Downloads/duplicity/duplicity/
    541             if not local_path.exists():
--> 542                 raise BackendException(_("File %s not found
locally after get "
    543                                          "from backend") %

Now my question is: why does .exists() fail if the path exists? What's
wrong with my code?
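A common cause of this symptom is downloading to a name derived from the
remote file rather than to the exact temporary path duplicity passes in.
A minimal stand-alone sketch of the expected pattern, with `cp` standing
in for `sxcp` and all names hypothetical:

```python
import os
import subprocess
import tempfile

def fetch(remote_file, local_path):
    # Stand-in for the sxcp call: the crucial detail is writing the
    # downloaded data to the exact local_path the caller supplies,
    # not to a name derived from the remote file.
    subprocess.check_call(["cp", remote_file, local_path])

# Simulate a "remote" file and a duplicity-style temp destination.
src = tempfile.NamedTemporaryFile(delete=False)
src.write(b"backup volume data")
src.close()
dst = src.name + "-local"

fetch(src.name, dst)
print(os.path.exists(dst))  # the check duplicity performs after _get
```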

Thank you so much. Cheers.


Andrea Grandi -  Software Engineer / Qt Ambassador / Nokia Developer Champion
Benoit Tigeot | 24 Jun 16:10 2014

File listed but impossible to restore


I'm trying to restore a config file but have a few problems. I'm using
Duplicity through the duplicity-backup script from Zertrin. When I list
the files:

~/duplicity-backup# ./ -c duplicity-backup.conf --list-current-files | grep default.cfg

Tue May 20 14:59:05 2014 home/martin/.willie/default.cfg

So OK, I've got the file.

When I try to do a restore: ./ -c duplicity-backup.conf --restore-file
home/martin/.willie/default.cfg /home/martin/.willierestore/

>> RESTORE: home/martin/.willie/default.cfg
>> TO: /home/martin/.willierestore/

Are you sure you want to do that ('yes' to continue)?
Restoring now ...

But .willierestore is empty. If I do ./ -c duplicity-backup.conf --restore-dir
home/martin/.willie /home/martin/.willierestore/
I get lots of files, but not 'default.cfg'.

What's happening?

Henri Salo | 19 Jun 17:21 2014

CVE-2014-3495 duplicity: improper verification of SSL certificates

Eric Christensen of Red Hat Product Security reported [1] that Duplicity
did not handle wildcard certificates properly. If Duplicity connected to
a remote host that used a wildcard certificate, and the hostname did not
match the wildcard, it would still consider the connection valid.


Why is that upstream bug report still embargoed? Is there a fix for this
security issue already? If yes - what version or source control revision?


Henri Salo
Radomir Cernoch | 19 Jun 10:00 2014

Restart duplicity without private key

Dear all,

I would like to use duplicity (v0.6.23 from Debian) without saving the
private key or its passphrase on the computer. After having abandoned
signing and started using encryption only, there is still an issue.

If a backup is interrupted, the local metadata go out of sync with the
remote metadata. A subsequent backup needs to synchronize them and
decrypt them from the remote server. However, this procedure fails
because of the missing private key. The result is undesirable: every
interrupted backup requires human intervention.

As a compromise, I hoped to use "cleanup" before every backup, which
would have enforced a full backup after an interrupted one.
Nevertheless, even the "cleanup" command requires the passphrase.

Why is that the case? A cleanup should only delete remote files,
shouldn't it? Is there a way to escape the "interrupted backup" trap?

Thanks in advance for your advice and help,
Radek Černoch

Fedechicco | 15 Jun 21:45 2014

Very long backup times, maybe needs for a distributed backup system


I'm in a bit of a situation here at work, and I wonder if there is a good way to solve my problem with duplicity.

We wanted to backup our whole raid/nas system holding data for all our /home/ directories, and we wanted to do this with duplicity, because it interacts nicely with Ubuntu.
After a few hiccups we succeeded in launching everything, but the whole process lasts about 3-4 days for a full backup (about 4TB of data).
The backup system now in place works like this: on a system with a lot of disks in RAID 5 we mount our /home/ directories read-only over NFS, and we back up from the NFS mount to the RAID 5.
Sadly, I cannot install duplicity directly on the NAS.

I was wondering: is 3-4 days a normal duration for this much data and this many files, or is NFS holding us back too much?
Is there a way to parallelize, or even distribute, the backup process with duplicity? We have a lot of workstations mounting the same NFS share, and I'd like to make them contribute to the backup to achieve a faster backup time.
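As far as I know, a single duplicity run is not parallelized, but a
common workaround is to split the tree into independent sessions and run
several at once. A hedged sketch (paths, target host and concurrency are
invented):

```shell
# One duplicity session per top-level home directory, four at a time.
ls /home | xargs -P 4 -I {} \
    duplicity /home/{} "sftp://backuphost//backups/homes/{}"
```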

edgar.soldin | 14 Jun 15:02 2014

Re: Duplicity response status 200 with reason 'OK'

On 13.06.2014 18:37, Andreas Vogler wrote:
> Zitat von edgar.soldin <at>
>> 1. did it work before?
> No, not on this server

is there perhaps an enforced web proxy between the machine and the WebDAV server?

> So I changed the correct .py file. I don't know Python very well, but
> I deleted the .pyc file; it was generated again.
> Still no output.
> I have no clue why it doesn't output something.

hmm.. let's leave that for now. I registered a free account with and
everything works fine. I even checked with volsize 50MB and it worked
out.

I think I've got it: you have an incomplete, interrupted backup lying on
the backend. The backup resume fails because PUT is overwriting a file
rather than just creating one, so the HTTP response code is 200.

try adding 200 to the valid status codes in line 404 of . It works for
me. Don't forget to verify your backup afterwards.
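To make the suggested edit concrete, this is roughly the shape of the
check (a simplified sketch, not the exact duplicity source): PUT returns
201/204 when creating a file, but 200 OK when overwriting an existing
volume during a resume, so 200 needs to be accepted too.

```python
# Simplified model of the WebDAV backend's status check: accept 200 as
# well, since an overwriting PUT (backup resume) legitimately returns
# 200 OK rather than 201 Created / 204 No Content.
VALID_PUT_STATUSES = (200, 201, 204)

def check_put_status(status):
    if status not in VALID_PUT_STATUSES:
        raise RuntimeError("Bad status code %s" % status)

check_put_status(200)  # no exception once 200 is in the valid list
```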

edgar.soldin | 13 Jun 13:34 2014

Re: Duplicity response status 200 with reason 'OK'

On 13.06.2014 13:10, Andreas Vogler wrote:
> Hello,
> Here is the logfile:

two questions.

1. did it work before?

2. is the WebDAV space full, by any chance?

try adding some debug output to 'duplicity/backends/' around line 403:

if response.status not in [201, 204]:
    status = response.status
    reason = response.reason
    document = response.read()       <---- add this
    log.Debug("%s" % (document,))    <---- and this
    raise BackendException("Bad status code %s reason %s." % (status, reason))

make sure the indentation matches the surrounding lines.
run again with -v9 and send the output.