Tiernan OToole | 29 Jul 14:26 2015

Pushing older archives from S3 to Glacier


Morning all.

Reading the previous messages about S3 and Glacier, I have a question:
Is it possible to get Duplicity to push older archives (say, 1 month
old) to Glacier and delete them from S3, potentially saving money? (Not
a massive amount, mind, given S3 is 2-3c/GB and Glacier is 1c/GB:
50-66% cheaper!)

For example, at the moment, I do a backup with the following flags:

--full-if-older-than 1M

and then I run a cleanup with

remove-older-than 6M

But I am wondering: if the last full backup is older than 1 month, how
do we archive it off to Glacier? Would this be an external tool, or
could Duplicity do it?
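
One idea, as a rough and untested sketch: an S3 bucket lifecycle rule
can do the transition server-side, outside Duplicity entirely. Assuming
the AWS CLI, a bucket named "mybucket" and a key prefix "myserver/"
(both placeholders). One caveat: lifecycle rules match by key prefix
only, so the manifest and signature files would be transitioned along
with the volumes, and later restores and status checks would then have
to wait on Glacier retrievals.

cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "duplicity-to-glacier",
      "Prefix": "myserver/",
      "Status": "Enabled",
      "Transitions": [ { "Days": 30, "StorageClass": "GLACIER" } ]
    }
  ]
}
EOF
aws s3api put-bucket-lifecycle-configuration \
    --bucket mybucket --lifecycle-configuration file://lifecycle.json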

Thanks.

--Tiernan
mailinglist | 29 Jul 11:05 2015

cannot restore encrypted backup

Hi,

I know it might be a GnuPG issue but I sincerely hope that someone on 
this list might have a solution for me. :)

So I am trying to restore an encrypted backup and receive the following
error:

GPGError: GPG Failed, see log below:
===== Begin GnuPG log =====
gpg: encrypted with 4096-bit RSA key, ID xxxxx, created 2011-02-24
"xxxx <xxx <at> xxx>"
gpg: public key decryption failed: Invalid IPC response
gpg: decryption failed: No secret key
===== End GnuPG log =====

The key is in the user's keyring; at least gpg --list-secret-keys 
shows the ID. The passphrase seems to be right because I am using this 
key to encrypt my weekly backups. Does anyone have any suggestions? I 
am not really familiar with GnuPG.
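
In case it helps anyone diagnose this, my understanding (which may be
wrong) is that "Invalid IPC response" usually points at gpg-agent
rather than the key material itself. A minimal way to test outside
Duplicity (the key ID is a placeholder; gpgconf is GnuPG 2.x):

# round-trip a small message with the same key
echo test | gpg --encrypt -r xxxxx | gpg --decrypt
# if that fails the same way, restart the agent and retry
gpgconf --kill gpg-agent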

Best regards,
Mate
Benjamin Henrion | 29 Jul 10:46 2015

Restoring from Glacier to S3

Hi,

I am restoring some files with duply, and it seems some files ended up
in Glacier instead of S3. When I do the restore:

===============================
$ duply mybackup list

[...]

Listing s3://s3.amazonaws.com/mydata/myserver/
Synchronizing remote metadata to local cache...
File duplicity-full-signatures.20150626T073747Z.sigtar.gz is in
Glacier storage, restoring to S3
File duplicity-full-signatures.20150628T070002Z.sigtar.gz is in
Glacier storage, restoring to S3
File duplicity-full.20150626T073747Z.manifest is in Glacier storage,
restoring to S3
File duplicity-full.20150628T070002Z.manifest is in Glacier storage,
restoring to S3
Copying duplicity-full-signatures.20150626T073747Z.sigtar.gz to local cache.
Using temporary directory /tmp/duplicity-yReHpi-tempdir
Waiting for file duplicity-full-signatures.20150626T073747Z.sigtar.gz
to restore from Glacier

===============================

Any idea how to avoid files being copied to Glacier?
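
My hedged guess at where to look: objects normally only move to Glacier
via a lifecycle rule on the bucket, so inspecting (and, if unwanted,
deleting) that rule should stop it. With the AWS CLI, against the
bucket from the listing above:

aws s3api get-bucket-lifecycle-configuration --bucket mydata
aws s3api delete-bucket-lifecycle --bucket mydata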
(Continue reading)

Tiernan OToole | 28 Jul 12:29 2015

HubiC Backend: Multi Threaded uploads


Morning all.

I am using Duplicity and HubiC to back up some machines and noticed
that when uploading, it only uses one of my internet connections
(24 Mb/s up). I have 2 pipes at the same speed, and was wondering how
(if possible) I can get Duplicity to upload multiple files to the
HubiC backend at the same time?

The connection is load balanced; in my tests I can get the full
48 Mb/s up if I use multiple rsync connections (rsync with GNU
parallel).
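
As far as I can tell Duplicity uploads volumes one at a time, so the
only workaround I can think of (a rough sketch; the paths and container
names are placeholders, and the hubic URL scheme is from my reading of
the manpage, so it may differ by version) is to split the source into
disjoint jobs and run them concurrently:

duplicity --include /data/a --exclude '**' /data cf+hubic://backup-a &
duplicity --include /data/b --exclude '**' /data cf+hubic://backup-b &
wait

Each job keeps its own archive cache and target container, so the two
uploads could ride separate connections.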

Any ideas?

Thanks.

--Tiernan
rsync.net | 21 Jul 18:37 2015

duplicity HOWTO rewrite ... request for comments ...


Friends,

We created this general-purpose duplicity HOWTO in 2007 (IIRC) and it 
appears to be holding up well:

http://www.rsync.net/resources/howto/duplicity.html

However, I'd like to rewrite/freshen it and make sure that it is 
accurate AND reflects current best practices for using duplicity.

It would be much appreciated if anyone could skim over it and let us know 
if there is anything glaring that we should revise/rewrite.

The HOWTO is *not* geared specifically to rsync.net cloud storage, so if 
there's anything we can do to make it even more general and universal 
those suggestions would be well taken.

Thanks.

John Kozubik
rsync.net, Inc.
rsync.net | 21 Jul 18:19 2015

rsync.net "duplicity pricing" and (modest) contribution to duplicity development


Friends,

duplicity has been a fantastic tool for our customers at rsync.net who 
want encryption at rest in addition to the in-flight encryption they get 
from SSH.  I suspect we have 500+ customers who use duplicity as their 
primary backup mechanism with rsync.net cloud storage.

We'd like to serve more customers like that - clueful, unix-based, and 
security-minded - so we are offering a "duplicity friends" rate for our 
cloud storage of 4 cents per gigabyte-month.[1][2][3]

-----

Unrelated to this, we would also like to repeat our original $500 
contribution to the duplicity project that we made in 2007.  At that time, 
there was no maintainer and new versions were very slow to be released. 
Since then, Kenneth Loafman has been maintaining the project and 
development has been regular and healthy.  We trust Ken to distribute, or 
use, these funds to further this development.

Thanks,

John Kozubik
rsync.net, Inc.

[1] https://www.rsync.net/signup/signup_offer.html?code=e82d28

[2] Yes, slightly more expensive than S3, but also with no 
usage/bandwidth/transfer charges *and* with 7 days of filesystem snapshots 
(Continue reading)

Aaron Whitehouse | 21 Jul 10:45 2015

--exclude-if-present

Hello all,

What is the use case for --exclude-if-present?

I've never seen a normal exclude throw an error if the file is missing,
so I'm struggling to figure out what this adds over a simple --exclude.
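
(My current best reading of the manpage, which may well be wrong, is
that --exclude-if-present FILENAME excludes any *directory* containing
a file of the given name, i.e. a per-directory opt-out marker rather
than a path match. Something like this, with invented paths:

touch /home/user/cache/.nobackup
duplicity --exclude-if-present .nobackup /home/user scp://user <at> host//backup/

But I may be misreading it, hence the question.)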

Kind regards,

Aaron
Jeff Rizzo | 21 Jul 01:16 2015

Private key management, signature verification

First off, a brief description of what I'm trying to accomplish:

I'd like to have a "master" key which can decrypt backups, but whose 
private key doesn't live on any backed-up host.   I seem to have ALMOST 
achieved what I want with the script "duply" and duplicity 0.6.24/0.6.25 
(it's what I currently have available; I'd be willing to move to a 
newer version if it actually fixes things).

If I create a config file for "duply" that looks like this:

ulimit -n 2048
GPG_KEYS_ENC='EA2F12BE,FA174E5B'
GPG_KEY_SIGN='EA2F12BE'
GPG_PW='redacted'
TARGET='ssh://backups <at> backuphost.lan//backups/duplicity/duply3'
SOURCE='/var'
VOLSIZE=500
DUPL_PARAMS="$DUPL_PARAMS --volsize $VOLSIZE "
DUPL_PARAMS="${DUPL_PARAMS} 
--ssh-options='-oIdentityFile=/root/.ssh/id-backups' "
DUPL_PARAMS="${DUPL_PARAMS} --ssh-backend=pexpect "

I can successfully create backups from the original host using "duply".  
This host has the private key for EA2F12BE on it.
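
(For clarity on why I expect this to work at all: as I understand it,
encrypting only needs the recipients' public keys, and data encrypted
to both recipients can be decrypted with either secret key. A bare-gpg
sketch of the idea, outside duplicity:

# backup host: encrypt to both keys; only the public halves are needed
gpg --encrypt -r EA2F12BE -r FA174E5B somefile
# restore host: FA174E5B's secret key alone is enough to decrypt
gpg --decrypt somefile.gpg
)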

On the "restore" host, which has the key FA174E5B but not EA2F12BE, I 
create a config file that looks like this:

ulimit -n 2048
#GPG_KEYS_ENC='EA2F12BE,FA174E5B'
(Continue reading)

Andrew Beverley | 19 Jul 13:51 2015

Automatically creating buckets with S3 storage

Hi,

Is it possible for Duplicity to create buckets automatically with S3 storage?

The manual implies that it does, but I get the error "BackendException: The
specified bucket does not exist" when using a non-existent bucket.

Looking at the code I can't find anything that calls the create_bucket()
function in storage_uri.py (well, apart from the CopyBot and SDBConverter
classes, but I'm not sure how these are used). Certainly create_bucket is
not being called when I run a normal backup.
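
A workaround, of course, is to create the bucket by hand before the
first run, e.g. with the AWS CLI (bucket name and region below are
placeholders):

aws s3 mb s3://my-duplicity-bucket --region eu-west-1

But I'd still like to know whether Duplicity is meant to do this itself.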

Am I doing something stupid?

Thanks,

Andy
Peter.Hine | 10 Jul 06:46 2015

exclude lost+found . [SEC=UNCLASSIFIED]


Hi,

Can someone advise what the correct format for excluding lost+found
might be, please?

I keep getting 'Error accessing possibly locked file /backup/lost+found'
and thus the backup has 'Errors: 1' in the summary.

currently trying:
duplicity --dry-run --encrypt-key=backup '--exclude=/backup/lost+found' /backup scp://backup <at> backup1:54321//backup/

Output :
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Fri Jul 10 14:27:01 2015
Error accessing possibly locked file /backup/lost+found
--------------[ Backup Statistics ]--------------
StartTime 1436503247.49 (Fri Jul 10 14:40:47 2015)
EndTime 1436503247.64 (Fri Jul 10 14:40:47 2015)
ElapsedTime 0.14 (0.14 seconds)
SourceFiles 214
SourceFileSize 28016193 (26.7 MB)
NewFiles 5
NewFileSize 9800 (9.57 KB)
DeletedFiles 6
ChangedFiles 2
ChangedFileSize 600 (600 bytes)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 13
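
For reference, two other spellings of the exclusion I could try, in
case the quoting above is the problem (same target URL; the '**' glob
form is from my reading of the duplicity manpage):

duplicity --dry-run --encrypt-key=backup --exclude '/backup/lost+found' /backup scp://backup <at> backup1:54321//backup/
duplicity --dry-run --encrypt-key=backup --exclude '**/lost+found' /backup scp://backup <at> backup1:54321//backup/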
(Continue reading)

Roy Waldspurger | 9 Jul 05:29 2015

Trouble installing duplicity from development branch on OS X 10.9.5

Forgive me, this is the first time I've tried to build the development branch code.

> bzr branch lp:duplicity
> cd duplicity
> python setup.py install

The following is the output to the shell; it seems not to get past building the duplicity _librsync extension. Excerpt included below.

Notably, I had to create a symbolic link to get gcc to run, as the build was looking for gcc-4.0.
> ln -s gcc gcc-4.0

Also, I grabbed librsync from here:
https://code.google.com/p/rudix/downloads/detail?name=librsync-0.9.7-9.pkg

Any pointers appreciated.

-- roy

=======================================

building 'duplicity._librsync' extension
creating build/temp.macosx-10.3-fat-2.7
creating build/temp.macosx-10.3-fat-2.7/duplicity
gcc-4.0 -fno-strict-aliasing -fno-common -dynamic -arch ppc -arch i386 -g -O2 -DNDEBUG -g -O3 -I/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c duplicity/_librsyncmodule.c -o build/temp.macosx-10.3-fat-2.7/duplicity/_librsyncmodule.o
In file included from duplicity/_librsyncmodule.c:25:
In file included from /Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/Python.h:19:
In file included from /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/6.0/include/limits.h:38:
In file included from /usr/include/limits.h:63:
/usr/include/sys/cdefs.h:658:2: error: Unsupported architecture
#error Unsupported architecture
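
(A possible lead, judging by the "-arch ppc" in the compile line above:
the python.org 2.7 framework records old architecture flags, and modern
Xcode can no longer build the ppc slice. A common fix is to override
the flags and compiler at build time; untested here:

ARCHFLAGS="-arch x86_64" CC=clang python setup.py build
sudo ARCHFLAGS="-arch x86_64" CC=clang python setup.py install
)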