Remy van Elst | 21 Apr 11:38 2015

multi-backend

I saw this commit on Launchpad:

 1085. By Kenneth Loafman on 2015-04-12

    * Merge in lp:~stynor/duplicity/multi-backend
      - A new backend that allows use of more than one backend store
        (e.g. to combine the available space from more than one cloud
        provider to make a larger store available to duplicity).

Would this also allow using multiple backends for the same backup, as
in, for redundancy? I define both an SFTP backend and an Amazon
backend, the backup goes to both, and when restoring, if one is down,
the other is tried?
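For context, the merge description suggests the new backend combines
several stores listed in a small config file. A rough sketch of what
that might look like; the JSON format, the file path and the multi://
URL syntax are assumptions drawn from later documentation, and whether
a mirrored/redundant mode exists at all is exactly the open question
here:

$ cat /etc/duplicity/multi.json    # hypothetical path; format assumed
[
    { "description": "sftp store", "url": "sftp://backupuser@myhost//srv/backups" },
    { "description": "cloud store", "url": "s3+http://my-bucket/duplicity" }
]
$ duplicity /home/me multi:///etc/duplicity/multi.json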
Philip Jocks | 20 Apr 18:22 2015

duply profile directory

Hej,

when I use

    GPG_KEY=11111111
    GPG_KEYS_ENC=22222222

duply copies gpgkey.11111111.pub.asc to the profile directory, but not
a key file for 22222222. When I move that directory to a different
server to have a convenient way to restore, the other server asks for
the public key of 22222222 as well.
Is it intended behavior not to store the 22222222 key file along with
the 11111111 one?
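A possible workaround, assuming standard gpg tooling and using the
placeholder key IDs from above: export the missing public key yourself
and import it on the restore host.

$ gpg --armor --export 22222222 > gpgkey.22222222.pub.asc   # on the source machine
$ gpg --import gpgkey.22222222.pub.asc                      # on the restore server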

Cheers,

Philip
Kenneth Loafman | 17 Apr 14:35 2015

Re: Create full backup from incremental

It's way off the roadmap for now, but a separate utility to accomplish this might be workable.  I can't help but think that the space and network requirements are going to be the same as those of a full backup or restore.

Rather than doing that, why not implement a backup mode where, instead of doing an incremental based on the current chain, you reset the incremental process and start a second chain off the same full? I have no name for that, and actually just thought of it, but it would solve some of the problems people seem to have with backups. That way you reuse your base full backup and still have incrementals.

...Ken


On Fri, Apr 17, 2015 at 4:07 AM, Edgar Soldin <edgar <at> soldin.de> wrote:
Ken? s.b. ..ede


-------- Forwarded Message --------
Subject: Re: [Duplicity-talk] Create full backup from incremental
Date: Thu, 16 Apr 2015 18:16:18 -0600
From: Eric O'Connor <eric <at> oco.nnor.org>
Reply-To: Discussion of the backup program duplicity <duplicity-talk <at> nongnu.org>
To: duplicity-talk <at> nongnu.org

Yeah, I think it would take some work to make this happen, but I don't
think duplicity's approach is incompatible. The trickiest part would be
allowing incremental backups based on a syn-full, as you mention in
your second bullet point.

I wouldn't consider the current backup chain (full + incrementals) to
have similar properties to a "full" backup, synthetic or otherwise.
Recovering the most recent state takes a bunch of processing time and
extra storage/bandwidth, both of which grow with the length of the chain.

Would you be interested in patches to implement this, or is it too far
off the roadmap?

Eric

On 04/16/2015 02:59 AM, edgar.soldin <at> web.de wrote:
> thx Eric, unfortunately the current duplicity design is such that
>  - it bundles changes to different files in a volume until max size
>    and then continues in the next volume
>  - the changes are rsync diffs that have to be applied in a row,
>    e.g. the first state is restored, the first rsync diff is applied,
>    the second, etc., until the latest state is restored
>
> if i understood your explanation correctly then this would mean that
> currently our "synthetic full" is essentially our complete backup
> chain.
>
> ..ede/duply.net
>
> On 15.04.2015 23:24, Eric O'Connor wrote:
>> For this feature, the remote doesn't really need to have access to
>> the data, or be very smart at all (dumb file servers work just
>> fine). It is true that Duplicity does not support it yet.
>>
>> Doing a synthetic full backup requires only that you be able to
>> (locally) keep track of where on the dumb server the latest
>> version of each file is stored, and which files are recently
>> modified. Then a full backup is the set of archive files containing
>> unmodified files (likely a large percentage) + new archives
>> containing files modified since the last syn-full. So you upload
>> the new archives, and an index pointing to all the relevant data
>> chunks.
>>
>> It's a true "full backup" because it directly contains every file
>> needed to do a restore.
>>
>> When an archive file contains 1 file, there is no additional data
>> storage overhead to this -- you just upload a new index and all the
>> modified files. If archive files contain more than 1 file, a full
>> backup will have some storage overhead -- some files in the
>> archives will be irrelevant older copies. The backup program can
>> pick some overhead maximum and upload enough new data to reduce
>> the overhead to acceptable levels.
>>
>> This can also be spread out over the course of the full backup
>> period -- i.e. every day upload an incremental backup along with
>> 5% of the modified files. You could also occasionally re-upload
>> unmodified files such that it's more likely a single archive
>> corruption is recoverable. It may even be possible to ditch the
>> full/incr schedule entirely if the length of an incremental chain
>> for a file has an upper bound.
>>
>> Anyway, sorry for being pedantic, unhelpful? I've been thinking
>> about building something like this for a while but haven't gotten
>> around to it yet. Also, duplicity works well enough -- so thanks
>> for that :)
>>
>> Eric
>>
>> On 2015-04-15 13:48, edgar.soldin <at> web.de wrote:
>>> good point.. why would you need encrypted backup if you trust the
>>> backend?
>>>
>>> thx Scott.. ede/duply.net
>>>
>>> On 15.04.2015 19:30, Scott Hannahs wrote:
>>>> Note that to do this, you need to be able to decrypt locally on
>>>> the server.  Duplicity assumes an insecure server model.
>>>> Collapsing incremental backups onto a full backup means that all
>>>> your data is exposed to the level of security of the remote
>>>> server.
>>>>
>>>> The duplicity model assumes that once the data goes out over
>>>> the wire it is subject to unknown security.
>>>>
>>>> For any commercial remote storage, you might as well just use a
>>>> commercial backup system without encryption.
>>>>
>>>> -Scott
>>>>
>>>> On Apr 15, 2015, at 07:21, edgar.soldin <at> web.de wrote:
>>>>
>>>>> On 15.04.2015 12:56, Ulrik Rasmussen wrote:
>>>>>> On Wed, 15 Apr 2015 12:00:00 +0200 edgar.soldin <at> web.de
>>>>>> wrote:
>>>>>>
>>>>>>> On 15.04.2015 09:54, Ulrik Rasmussen wrote:
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> I just started using duplicity for backing up my work
>>>>>>>> to a VPS. It is my understanding that it is wise to do
>>>>>>>> a full backup about once a month, to enable deletion of
>>>>>>>> old backups and faster restoration. However, when doing
>>>>>>>> a full backup, duplicity seems to transfer everything
>>>>>>>> over the wire again, which takes a long time if I'm on
>>>>>>>> a slow connection and also costs me bandwidth. Since
>>>>>>>> the server already has all my data, this really
>>>>>>>> shouldn't be necessary.
>>>>>>>>
>>>>>>>> Is there a way to do a full backup on the server side?
>>>>>>>> More precisely, can I tell duplicity to create a new
>>>>>>>> backup chain based on the contents of the current
>>>>>>>> chain?
>>>>>>>>
>>>>>>>
>>>>>>> no.
>>>>>>>
>>>>>>> duplicity deals with "dumb" backends and solely uses them
>>>>>>> for file storage. with this design, creating a synthetic
>>>>>>> full would mean transferring all the data over the wire
>>>>>>> again anyway.
>>>>>>>
>>>>>>> however, it'd be possible to implement that for the rare
>>>>>>> cases where users have shell access to their backends
>>>>>>> and can have a duplicity instance running locally there.
>>>>>>>
>>>>>>> see also
>>>>>>> https://answers.launchpad.net/duplicity/+question/257348
>>>>>>>
>>>>>>> ..ede/duply.net
>>>>>>
>>>>>> I see, thanks for clarifying. That makes sense, considering
>>>>>> most backends don't imply shell access. Since I _do_ have
>>>>>> shell access to the server and plenty of disk storage, I
>>>>>> guess I can accomplish the task by just restoring the
>>>>>> incremental backup on the server and doing a full backup
>>>>>> from that using the file system backend.
>>>>>>
>>>>>
>>>>> right you are.. make sure to have identical users/numeric ids
>>>>> and restore as root, if you want to keep those.
>>>>>
>>>>> alternatively you can hackishly "reuse" the old full by
>>>>> copying it and updating the filenames with a proper newer
>>>>> timestamp. depending on your data's turnover you might be
>>>>> doing that for a while until your first incremental grows
>>>>> too big.
>>>>>
>>>>> ..ede/duply.net
Maximilian Bloch | 17 Apr 12:46 2015

failed to upload before termination, restarting, restore SHA1 hash mismatch

Hi there,

I'm new here, nice to meet everyone. First off, I am pretty happy with
duplicity, which has been working for me for the past two years.

I have a current issue though and no solution. I searched through the
mailing-list archive and couldn't find anything to help me. Sorry if
this has been asked before.

I am doing automated daily backups:

duplicity \
    --full-if-older-than 14D \
    --encrypt-key ${GPG_KEY} \
    --sign-key ${GPG_KEY} \
    --exclude-regexp '\/te?mp\/' \
    --exclude-regexp '\/cache\/' \
    --include '/var/www' \
    --exclude '**' \
    / s3+http://xxxx-var-www

My last full backup(s) seem to have failed (the backup taking longer
than 24h and conflicting with the next scheduled run?), then resumed
the next day with further errors, and resumed/restarted on the third
day with no errors. Restoring the backup fails due to a corrupted
volume (SHA1 hash mismatch for file ..vol907.difftar.gpg). I have
noticed similar issues with my last two full backups. My backup
consists of 1043 volumes of 25MB each.

Does anybody have a suggestion to get the next backups to work
properly? Bigger volume sizes? More time before the script is run
again?
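For what it's worth, a sketch of both ideas combined, assuming the
backup runs from a plain shell script: --volsize raises the volume
size (in MB), and flock(1) keeps a second run from starting while the
previous one is still uploading. The lock file path and the volume
size are arbitrary placeholders, not recommendations:

flock -n /var/lock/duplicity-var-www.lock \
    duplicity \
        --full-if-older-than 14D \
        --volsize 100 \
        --encrypt-key ${GPG_KEY} \
        --sign-key ${GPG_KEY} \
        --exclude-regexp '\/te?mp\/' \
        --exclude-regexp '\/cache\/' \
        --include '/var/www' \
        --exclude '**' \
        / s3+http://xxxx-var-www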

Here are some details of duplicity output:

Full backup take 1:
Sun Apr 12. Sorry, no output; I must have trashed it because it
contained no errors.

Full backup take 2:
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Sun Apr 12 07:12:11 2015
No old backup sets found, nothing deleted.
Local and Remote metadata are synchronized, no sync needed.
Last full backup left a partial set, restarting.
Last full backup date: Sun Apr 12 07:12:11 2015
Reuse configured PASSPHRASE as SIGN_PASSPHRASE
RESTART: Volumes 907 to 908 failed to upload before termination.
         Restarting backup at volume 907.
Restarting after volume 906, file var/www/xxxx.png, block 13
File duplicity-full.20150412T051211Z.vol914.difftar.gpg was corrupted
during upload.

Full backup take 3 (note it took longer than 24h):

Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Sat Mar 28 07:09:23 2015
No old backup sets found, nothing deleted.
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Sat Mar 28 07:09:23 2015
Last full backup is too old, forcing full backup
Reuse configured PASSPHRASE as SIGN_PASSPHRASE
--------------[ Backup Statistics ]--------------
StartTime 1428815542.46 (Sun Apr 12 07:12:22 2015)
EndTime 1428911965.18 (Mon Apr 13 09:59:25 2015)
ElapsedTime 96422.72 (26 hours 47 minutes 2.72 seconds)
SourceFiles 150833
SourceFileSize 28580154097 (26.6 GB)
NewFiles 150833
NewFileSize 28580154097 (26.6 GB)
DeletedFiles 0
ChangedFiles 0
ChangedFileSize 0 (0 bytes)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 150833
RawDeltaSize 28490687563 (26.5 GB)
TotalDestinationSizeChange 27385520785 (25.5 GB)
Errors 0

Restoring attempt:
[...]
Processed volume 908 of 1048
Invalid data - SHA1 hash mismatch for file:
 duplicity-full.20150412T051211Z.vol907.difftar.gpg
 Calculated hash: 3b332923613018d50d4b73d0b2831c72c36f4d74
 Manifest hash: f9f8fa5e8bd5a07bb17228c83e13cf9f6f2adcdb

^ the restore script hangs here

Here is some output from my second-to-last full backup attempt, which
also failed:

Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Sat Mar 14 06:29:20 2015
No old backup sets found, nothing deleted.
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Sat Mar 14 06:29:20 2015
Last full backup is too old, forcing full backup
Reuse configured PASSPHRASE as SIGN_PASSPHRASE
Upload 's3+http://xxxx-var-www/duplicity-full-signatures.20150328T060923Z.sigtar.gpg'
failed (attempt #1, reason: S3ResponseError: S3ResponseError: 400 Bad Request
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>BadDigest</Code><Message>The Content-MD5 you specified did not match what we received.</Message><ExpectedDigest>oKyY+wDSBIKvBXzNFupGIQ==</ExpectedDigest><CalculatedDigest>zHtMZsbKINhRo9flUUjXPA==</CalculatedDigest><RequestId>4177190AB50613BA</RequestId><HostId>+hN7JpYX6auLQ63LegVOHtpxBk9ZHFfYU/92HucGXhFuIRKyScdWruXujslIFIb5dh1tfoeFcp8=</HostId></Error>)
Traceback (most recent call last):
  File "/usr/bin/duplicity", line 1404, in <module>
    with_tempdir(main)
  File "/usr/bin/duplicity", line 1397, in with_tempdir
    fn()
  File "/usr/bin/duplicity", line 1367, in main
    full_backup(col_stats)
  File "/usr/bin/duplicity", line 506, in full_backup
    sig_outfp.to_remote()
  File "/usr/lib/python2.7/dist-packages/duplicity/dup_temp.py", line 184, in to_remote
    globals.backend.move(tgt) # <at> UndefinedVariable
  File "/usr/lib/python2.7/dist-packages/duplicity/backend.py", line 364, in move
    source_path.delete()
  File "/usr/lib/python2.7/dist-packages/duplicity/path.py", line 567, in delete
    util.ignore_missing(os.unlink, self.name)
  File "/usr/lib/python2.7/dist-packages/duplicity/util.py", line 116, in ignore_missing
    fn(filename)
OSError: [Errno 2] No such file or directory:
'/root/.cache/duplicity/16e0b913754120e86533128ea399d494/duplicity-full-signatures.20150328T060923Z.sigtar.gpg'

Thanks for any help.

Best,
Max

Ulrik Rasmussen | 15 Apr 09:54 2015

Create full backup from incremental

Hi,

I just started using duplicity for backing up my work to a VPS. It is
my understanding that it is wise to do a full backup about once a
month, to enable deletion of old backups and faster restoration.
However, when doing a full backup, duplicity seems to transfer
everything over the wire again, which takes a long time if I'm on a
slow connection and also costs me bandwidth. Since the server already
has all my data, this really shouldn't be necessary.

Is there a way to do a full backup on the server side? More precisely,
can I tell duplicity to create a new backup chain based on the contents
of the current chain?
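The closest workaround today, discussed elsewhere in this thread,
needs shell access on the server: restore the latest state there and
start a fresh full from it. A rough sketch, assuming the repository
lives in /srv/duplicity-repo, that the gpg keys are available on the
server, and that there is enough scratch space (the key ID is a
placeholder):

# restore the latest state from the on-server copy of the repository
duplicity restore file:///srv/duplicity-repo /srv/restore-tmp

# start a new full chain from the restored tree
duplicity full --encrypt-key MYKEYID /srv/restore-tmp file:///srv/duplicity-repo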

Thanks!

/Ulrik
Kevin Broderick | 15 Apr 04:08 2015

Duplicity seems to stall while preparing backup

After I successfully tested duplicity on one of our workstations, it seemed likely to be a good fit for off-site backups of our OS X server machine. However, despite having seemingly gotten it installed, I've been unable to run a backup successfully; in general, the runs seem to stall while generating deltas, as below with -v9, and I'm not sure where to look next. Any pointers would be appreciated; my Google-fu has not turned up anything particularly helpful. I did anonymize the arguments (swapped valid arguments for bogus ones after pasting), but I don't think I changed any of the strings in a way that should affect execution.

Thanks,
Kevin

duplicity 0.7.02 (March 11, 2015)

Args: /usr/local/bin/duplicity -v9 --full-if-older-than 14D --s3-use-multiprocessing --s3-use-new-style --encrypt-key=63E5CD88 --sign-key=46559EC7 --include=/Volumes/.../OUR FILES/OUR MARKETS/COOS COUNTY, NH/ --exclude=** /Volumes/.../OUR FILES/OUR MARKETS s3+http://s3path.fqdn.com/server-bak/

Darwin server.discoverymap.com 13.4.0 Darwin Kernel Version 13.4.0: Wed Dec 17 19:05:52 PST 2014; root:xnu-2422.115.10~1/RELEASE_X86_64 x86_64 i386

/usr/bin/python 2.7.5 (default, Mar  9 2014, 22:15:05) 

[GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)]

================================================================================

Using temporary directory /var/folders/1k/g25fvmqd4m9dd8028gvmgtfr0000gn/T/duplicity-nuACnC-tempdir

Registering (mkstemp) temporary file /var/folders/1k/g25fvmqd4m9dd8028gvmgtfr0000gn/T/duplicity-nuACnC-tempdir/mkstemp-LW5yfi-1

Temp has 592369008640 available, backup will use approx 34078720.

Local and Remote metadata are synchronized, no sync needed.

0 files exist on backend

2 files exist in cache

Extracting backup chains from list of files: []

Last full backup date: none

Last full backup is too old, forcing full backup

Collection Status

-----------------

Connecting with backend: BackendWrapper

Archive dir: /Users/administrator/.cache/duplicity/3c2f71ac2b6a6596c2ce67ab107b03c7


Found 0 secondary backup chains.

No backup chains with active signatures found

No orphaned or incomplete backup sets found.

PASSPHRASE variable not set, asking user.

GnuPG passphrase for signing key: 

Using temporary directory /Users/administrator/.cache/duplicity/3c2f71ac2b6a6596c2ce67ab107b03c7/duplicity-SIDYK3-tempdir

Registering (mktemp) temporary file /Users/administrator/.cache/duplicity/3c2f71ac2b6a6596c2ce67ab107b03c7/duplicity-SIDYK3-tempdir/mktemp-PNVWOG-1

Using temporary directory /Users/administrator/.cache/duplicity/3c2f71ac2b6a6596c2ce67ab107b03c7/duplicity-FvTsKj-tempdir

Registering (mktemp) temporary file /Users/administrator/.cache/duplicity/3c2f71ac2b6a6596c2ce67ab107b03c7/duplicity-FvTsKj-tempdir/mktemp-iQNRNf-1

AsyncScheduler: instantiating at concurrency 0

Registering (mktemp) temporary file /var/folders/1k/g25fvmqd4m9dd8028gvmgtfr0000gn/T/duplicity-nuACnC-tempdir/mktemp-FXh03c-2

Selecting /Volumes/.../OUR FILES/SELECTED MARKETS

Comparing . and None

Getting delta of (. dir) and None

A .


-- 
Kevin Broderick

Rob Dupuis | 13 Apr 20:10 2015

Stable release recommendation for Ubuntu 14.04 Trusty / Multiple Swift backups

Hi

I would like to use duplicity to provide rsync-style backups of 20GB of data from 3 directories (about 50k files). I am backing up to an OpenStack Swift-compatible server. I think duplicity might be exactly what I need, and I just wanted to confirm which version I should use.

The downloads page recommends 0.6.25, but the stable ppa[1] has 0.7.02-0ubuntu0ppa1080~ubuntu14.04.1.

I've been trying 0.7.02 and it seems to work OK, but I wanted to check: is it stable enough for production use?

The other question I have is: can duplicity make multiple backups to the same Swift container? I tried creating a directory in my Swift container and using that in the swift URL when I back up (swift://container/directory), but I see the following error:

Attempt 1 failed. JSONDecodeError: Expecting value: line 1 column 1 (char 0)

Alternatively, is there a way to name the backup tar.gpg files that duplicity creates so that multiple backups don't collide? I tried the --name option, but that only sets the client-side name.
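In case it helps, a sketch of two possible ways around the collision;
the container names are made up, and the availability of --file-prefix
in your build is an assumption (check the man page):

# simplest: one container per backup job, so file names never collide
duplicity --name www-backup /srv/www swift://backups-www
duplicity --name db-backup  /srv/db  swift://backups-db

# alternatively, if your duplicity supports it, prefix the archive
# file names so several jobs can share one container
duplicity --name www-backup --file-prefix www- /srv/www swift://backups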

Any advice would be much appreciated. 

Thanks,
Rob

Steve Tynor | 11 Apr 22:18 2015

Re: pydrive backend: how to use storage of a user account instead of a service account?

On Fri, 13 Feb 2015 12:06:11 +0000, Rupert Levene wrote:
...
> This has two drawbacks:
>
> (1) I would like to use the unlimited quota in my user account, but
> the quota in the service account is restricted to 15 GB; and
>
> (2) as far as I know the service account has no password, so I can't
> see the backup through the usual Google drive web interface, android
> apps etc.
>
> Could the pydrive backend be modified to allow backups to the drive
> storage of a user account rather than just a service account? I guess
> the authentication process may be a bit more involved, but I believe
> it can be done.
>
> For example, gauth could be made in the following way, without using a
> service account at all:
...

Nudge.  I'm having the same problem with the new pydrive backend.  I've been using the gdocs backend, but based on the Google warnings about deprecating its authentication method, I'm trying to transition to pydrive - but as Rupert points out, it stores to the service account's drive, which isn't the same one that gdocs used (and which I can browse interactively via the drive web interface...)

Steve


Marc Evans | 2 Apr 15:58 2015

can gs use ipv6?

Hello,

I am using duplicity successfully to back up to Google Cloud Storage.
However, I am finding that although the target host for such storage
has an IPv6 address:

$ host storage.googleapis.com
storage.googleapis.com is an alias for storage-ugc.l.googleusercontent.com.
storage-ugc.l.googleusercontent.com has address 216.58.217.129
storage-ugc.l.googleusercontent.com has IPv6 address 2607:f8b0:4004:80d::2001

the software always connects to the IPv4 address. I have verified that I
have transit to the IPv6 destination:

$ telnet storage.googleapis.com 443
Trying 2607:f8b0:4004:80d::2001...
Connected to storage-ugc.l.googleusercontent.com.
Escape character is '^]'.

Does anyone know if there is a way to force duplicity to use IPv6
(a -6 option doesn't seem to exist)? Any advice appreciated.

- Marc
Thomas Hartmann | 29 Mar 11:18 2015

does duplicity check for missing difftar files on backup?

hi there,
i am running duplicity 0.6.24.
i back up to a remote location. let's say one of the difftar files
gets deleted there; does duplicity handle this situation on the next
backup run?

thanks a lot,
thomas
Norbert Kéri | 28 Mar 13:13 2015

Encrypt without the private key?

Hey,

I'm trying to set up an unattended backup to S3, with the following command:

duplicity --progress --name mystuff --full-if-older-than 6M --s3-unencrypted-connection --encrypt-key A6ACD7BF ./myfolder s3://s3.eu-central-1.amazonaws.com/bucket/folder

However, if I rerun the above command, I get:

Local and Remote metadata are synchronized, no sync needed.
Last inc backup left a partial set, restarting.
Last full backup date: Sun Mar 22 16:54:42 2015

Then it pops up a pinentry dialog asking for the passphrase for my private key. This surprised me, because I was expecting to be asked for a passphrase only when restoring files from the backup. What's more, if I just cancel the pinentry dialog, the backup still finishes successfully, so is it even using the key?

So what's happening here? Does duplicity need to decrypt some parts of the previous backup, and is that why it's asking for a key? And why does it continue if I cancel the dialog? I was thinking maybe it's trying to sign the backups, but I'm not using any of the signing switches, and it doesn't do that by default, does it?

Is this still a problem?
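If the goal is a fully unattended run, one hedged workaround is to
hand the passphrase to duplicity via the PASSPHRASE environment
variable instead of relying on pinentry; whether the restart of the
partial set genuinely needs the private key is an assumption, not
something confirmed here:

# supply the gpg passphrase via the environment so no pinentry dialog
# can block an unattended run (keep the script's permissions tight)
export PASSPHRASE='your-gpg-passphrase'
duplicity --progress --name mystuff --full-if-older-than 6M \
    --s3-unencrypted-connection --encrypt-key A6ACD7BF \
    ./myfolder s3://s3.eu-central-1.amazonaws.com/bucket/folder
unset PASSPHRASE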