Gordon Messmer | 22 Nov 19:10 2014

Snapshots (lvm and more)

Last year, I wrote a "snapshot" utility for backups.  There are a bunch 
of how-tos on the internet that describe how to make snapshots of your 
filesystems so that you don't need to dump server data (like SQL 
databases) to text files for backup. There isn't, however, a standard 
interface for doing so.

I want to change that.

snapshot provides a standard interface to making your data consistent, 
taking snapshots, making them available for backup, and cleaning up the 
snapshots on completion.
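
For anyone who hasn't seen those how-tos, the manual sequence they describe boils down to something like this (the volume group and mount point names below are made up); the point of snapshot is to put one consistent interface around these steps and the application-specific quiescing:

    # quiesce the application first (e.g. FLUSH TABLES WITH READ LOCK for MySQL)
    lvcreate --snapshot --size 1G --name backup-snap /dev/vg0/data
    # release the application lock, then expose the snapshot read-only
    mkdir -p /mnt/backup-snap
    mount -o ro /dev/vg0/backup-snap /mnt/backup-snap
    # ... point rsnapshot/rsync/Bacula at /mnt/backup-snap ...
    umount /mnt/backup-snap
    lvremove -f /dev/vg0/backup-snap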

I've written integration for rsnapshot, since that's what I use in a lot 
of places, and I'm going to work on Bacula next.

I'd like to get feedback, testing, feature requests, etc from people 
interested in getting good, usable backups of their data.

https://bitbucket.org/gordonmessmer/dragonsdawn-snapshot
https://github.com/DrHyde/rsnapshot/pull/44

------------------------------------------------------------------------------
Timmy O'Mahony | 13 Nov 15:41 2014

sync_first and sync command is backing up every file every time (not just changed files)

I am performing SSH backups from my laptop to my home server. Everything is configured and working.
 
I've enabled sync_first 1, which I understand means that when I want to perform an hourly/daily/weekly backup, I first need to run rsnapshot sync to do the actual copying of remote files into a .sync folder. Only then do I run the rsnapshot hourly/daily/weekly command, which simply rotates the hourly.0, daily.0, etc. folders.
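
For reference, my setup boils down to something like this (retention counts, times and paths here are illustrative, not my exact values):

    # rsnapshot.conf (fields are tab-separated)
    sync_first      1
    retain  hourly  6
    retain  daily   7

    # crontab: sync does the actual copying into .sync; the hourly/daily
    # runs afterwards only rotate the snapshot directories
    0 */4 * * *   /usr/bin/rsnapshot sync && /usr/bin/rsnapshot hourly
    30 23 * * *   /usr/bin/rsnapshot daily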
 
My problem is that every time I run sync, my entire laptop is backed up again, so every backup takes a huge amount of time. I would have assumed that the sync command took into account what has changed since the last sync and only grabbed those particular files. Is this the correct behaviour, or am I misusing the command?
 
Thanks,
 
Timmy
 
 
 
 
------------------------------------------------------------------------------
Chris Tebb | 7 Nov 10:21 2014

Missed backup, ok to run again?

Hi venerable rsnapshot experts!

I’ve got a pretty standard setup here with these options at their defaults:

#sync_first 0
#use_lazy_deletes 0

The backup runs every morning at 1 AM.

This morning the connection went down, and all but a couple of the backups returned 255 errors:

[07/Nov/2014:08:35:47] /usr/bin/rsnapshot daily: ERROR: /usr/bin/rsync returned 255 while processing xxxxxxxx

Am I ok just to run rsnapshot daily again to get it to take today's copy?
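
For what it's worth, all I'd be doing is running it again by hand and checking the exit status, along these lines (stock config path assumed):

    /usr/bin/rsnapshot -v daily
    echo $?    # 0 = all OK, 1 = fatal error, 2 = completed but with warnings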

Thanks,
Chris.

------------------------------------------------------------------------------
Eildert Groeneveld | 5 Nov 17:40 2014

central configuration

Dear All

I have been using various backup packages including rsnapshot and
related packages like rs-backup.

The backup problem is a rather small one: a network of Linux boxes with
multiple users, including laptops that may or may not be switched on,
and a DreamPlug (or similar) with a big disk serving as a backup server
running 24/7.

There are a number of wikis that serve as guidelines for configuration.
While this is no rocket science, keeping configuration on each machine,
and possibly for each user, is anything but convenient, nor is it a good
way to document what has actually been configured.

I wonder if someone is aware of a centralized, one-stop configuration
for a whole network of rsnapshot machines, something that could simply
generate a set of files to be copied out to the machines in the network.
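
Even something as crude as one template plus a host list would do; a rough sketch (host names and file names below are made up):

    # hosts.txt: one hostname per line; rsnapshot.conf.template is a stock
    # config with @HOST@ placeholders -- everything here is made up
    mkdir -p conf
    for h in $(cat hosts.txt); do
        sed "s/@HOST@/$h/g" rsnapshot.conf.template > "conf/$h.conf"
        scp "conf/$h.conf" "root@$h:/etc/rsnapshot.conf"
    done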

Usually, if something is useful in Linux systems management, it has
already been done.

Maybe I'll be lucky.

Thanks in advance

Redil

------------------------------------------------------------------------------
Konrád Lőrinczi | 4 Nov 17:50 2014

rsnapreport.pl more timing detail?

I would like to get more timing detail from rsnapreport.pl.

I know the hourly backup is run at 16:00 and the report email arrived at
16:41, so about 41 minutes elapsed from the start of the backup.

But rsnapreport.pl only gives me the following, which says nothing about
how long the backup as a whole lasted:
SOURCE      TOTAL FILES   FILES TRANS   TOTAL MB   MB TRANS   LIST GEN TIME   FILE XFER TIME
---------------------------------------------------------------------------------------------
/var/lib          38694           437   41223.25    2737.91   0.001 seconds    0.000 seconds

Any idea how to get more detailed timing info?
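
The crudest workaround I can think of is timing the whole run from cron, e.g. (assuming GNU time is installed as /usr/bin/time; paths and times are just examples):

    # crontab: GNU time appends wall-clock and CPU figures for the whole run;
    # rsnapreport.pl itself only ever sees rsync's own --stats timings
    0 16 * * *   /usr/bin/time -a -o /var/log/rsnapshot-timing.log /usr/bin/rsnapshot hourly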

Thanks,
      Konrad Lorinczi

------------------------------------------------------------------------------
Julien Tessier | 25 Oct 07:24 2014

Skipping daily.0 (ie. not saving today's increment)

Dear all,

I have a use case where we back up our cPanel server to /backup and, once that has finished, we use rsnapshot to copy it into /rsnapshot.

After rsnapshot is run, /rsnapshot/daily.0 and /backup hold the exact same content.

Is there a way not to create /rsnapshot/daily.0 and switch directly to /rsnapshot/daily.1?

Since /backup is itself a backup, it won't change during the day :)
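
For context, the relevant part of my rsnapshot.conf is essentially just this (the retention count and destination name are illustrative):

    snapshot_root   /rsnapshot/
    retain  daily   7
    backup  /backup/        localhost/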

Let me know if this is unclear.

Tropically yours,

Julien Tessier


------------------------------------------------------------------------------
Thierry Lavallee | 22 Oct 05:22 2014

Evaluating a backup repository

Hi,
I did a 'rsnapshot du' on my repository to validate it and get an
overview. Interesting... but I have a few questions:

1-I would have expected all these numbers in REVERSE (monthly.x being
the biggest and the newest daily.0 containing just the incremental part,
the additions). Wrong thinking?

2-Because of a connection bug that left daily.{0,1,2,3,4,5} empty, I
recently had to cp -al daily.6 to daily.{0,1,2,3,4,5}. I am not sure I
understand why they take 769M each :/ I would have expected just a few
KB for the hard links (see the check sketched after the du output
below). Wrong thinking?

3-And my weekly.1 looks out of sync with the rest at 127G. Any clue how 
to investigate this?!

Thanks!

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
My command and result
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
root <at> server:# rsnapshot -c /root/scripts/backup/rsnapshot.wscpb_cpbackup.conf du
132G    /media/backup02/wscpb/daily.0/
769M    /media/backup02/wscpb/daily.1/
769M    /media/backup02/wscpb/daily.2/
768M    /media/backup02/wscpb/daily.3/
768M    /media/backup02/wscpb/daily.4/
768M    /media/backup02/wscpb/daily.5/
768M    /media/backup02/wscpb/daily.6/
8,7G    /media/backup02/wscpb/weekly.0/
127G    /media/backup02/wscpb/weekly.1/
15G    /media/backup02/wscpb/weekly.2/
11G    /media/backup02/wscpb/weekly.3/
9,5G    /media/backup02/wscpb/monthly.0/
306G    total
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
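
For question 2, this is a quick way I know of to check whether the copies really do share hard links (the file path below is just a placeholder):

    # du charges each hard-linked file only to the first tree named on the
    # command line, so the second figure shows how little is NOT shared
    du -sh /media/backup02/wscpb/daily.6 /media/backup02/wscpb/daily.5
    # the same file should report the same inode number in both snapshots
    ls -i /media/backup02/wscpb/daily.6/some/file /media/backup02/wscpb/daily.5/some/file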

------------------------------------------------------------------------------
Thierry Lavallee | 21 Oct 18:50 2014

Real disk usage report

Hi,
Is there a way to get a report of the real disk usage of individual snapshots across a WHOLE repository, taking hard links and everything else into account?

Eg:
~~~~~~~~~~~~~~
daily.0 contains 120 incremental files changed since daily.1 (Total 1.07 GB)
daily.1 contains 137 incremental files changed since daily.2 (Total 0.85 GB)
...
Your whole repository contains: 738 GB of unique files
~~~~~~~~~~~~~~

I tried rsnapshot du /media/mydirectory/myfiles/myrepositary/
but I get "ERROR: Full paths are not allowed".

And I'm not sure it would help anyway.
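
The closest I've got so far is plain du run from inside the snapshot root, since du charges each hard-linked file only to the first directory listed (directory names below just follow my earlier example):

    cd /media/mydirectory/myfiles/myrepositary/
    # -s summarizes each snapshot, -c adds a grand total; every line after the
    # first is roughly "data unique to that snapshot relative to the ones before it"
    du -csh daily.0 daily.1 daily.2 weekly.0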

Thanks!
- Thierry


------------------------------------------------------------------------------
Thierry Lavallee | 21 Oct 14:44 2014

Re: User lost access to remote dir - Rebuilding local repository

Well, 132 GB over the internet can take quite a while, and doing this has the same value in terms of data retention as what you are suggesting.

Would my solution work?
Thanks




On 2014-10-21, at 0:06, Ken Woods <kenwoods <at> gmail.com> wrote:

Ya know, you can run it manually.  Why not just do that once to get the changes, then run it 4 more times to get everything back in line?

It's 132 GB.  It's not a lot of data.


On Oct 20, 2014, at 18:20, Thierry Lavallee <thierry <at> 8p-design.com> wrote:

Thanks to both of you for your replies...

 FYI, yes, daily.{0,1,2,3,4} are really empty.
# du -hs daily.0
12K    daily.0
# du -hs daily.1
12K    daily.1
# du -hs daily.2
12K    daily.2
# du -hs daily.3
12K    daily.3
# du -hs daily.4
24K    daily.4
# du -hs daily.5
132G    daily.5

Ideally I would like to duplicate daily.5 into daily.{0,1,2,3,4}
and run the next rsnapshot from there, just as if nothing had happened on the remote machine for those last 5 days.
It would save bandwidth too: 132G!

If this is viable, would I simply run the following commands, then give access back and run my rsnapshot?

cp -al daily.5 daily.4
cp -al daily.5 daily.3
cp -al daily.5 daily.2
cp -al daily.5 daily.1
cp -al daily.5 daily.0

Normally this should not take more space, as the hard links are kept and all the same... Then the next run should just pick up from there? What do you all think? Thanks!
-- Thierry




On 2014-10-20 9:01 PM, djk <at> cyber.com.au wrote:
On Oct 21 2014, Thierry Lavallee wrote:

We had rsnapshot running to back up a remote server.
On the remote server, the SSH user lost privileges over the directory it was supposed to snapshot.

Hence, for the last few days, daily.0, daily.1, daily.2 and daily.4 are EMPTY.

I take it there are no files AT ALL in daily.{0,1,2,4} (because your ssh couldn't access any files on those days, and IIUC your setup presumably uses link_dest, which would explain why you ended up with empty backup directories rather than exact copies of the previous backup when rsync failed), not just that a sub-directory in the backup is empty.
I also assume some other backups have files in them.

1-We'll give back the access to the /backup directory on the remote server, _but how do you recommend that we proceed?_

 * delete the daily.0, daily.1, daily.2 and daily.4 directories?!

If you delete those directories (after double-checking they really are empty), then the other backup directories (eg daily.{3,5,6}) will be kept for longer, because they won't be cycled out in favour of those empty backup directories.

This could be a plus or a minus depending on how you look at it (the minus I was thinking of is that some backups end up on a different schedule than usual, so some of your weekly backups will be more than 7 days apart).

 * Just give back the access and Rsnapshot will move on?

Assuming you are using link_dest, it would probably be a good idea (at least temporarily) to set up daily.0 (or .sync or whatever is expected to have your most recent backup) with a fairly complete backup.

This should save network bandwidth (rsync can link with the previous backup rather than fetching the whole file across the network) and maximise chances of unchanged files being hard linked together between daily.X and daily.Y (saving disk space in your backup area).

This is only a consideration for the first backup after the underlying issue is fixed.

2-ALSO, for the future, is there a way to ensure that an error is returned if the remote dir is not there? We would have seen this.

I would expect something on stderr from rsync (e.g. Permission denied), which should have flowed through to stderr of rsnapshot. If you run from cron, stdout and stderr of rsnapshot are generally emailed to the user running the cron job, unless you have made other arrangements. Assuming email works.
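
For example, an /etc/cron.d entry along these lines (address and schedule are placeholders) gets anything rsnapshot prints on stderr mailed to you:

    # /etc/cron.d/rsnapshot
    MAILTO=backups@example.com
    0 1 * * *   root   /usr/bin/rsnapshot daily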

Thanks for your support!






------------------------------------------------------------------------------
rsbrux | 21 Oct 14:37 2014

How to shorten destination path for mounted drive?

In rsnapshot.conf I have:
>>
backup  /opt/etc/               localhost/
<<
for a local directory.
This backs up the content of /opt/etc to:
>>
rsnapshot/hourly.0/localhost/opt/etc/
<<
which is what I want.

However, my USB drive is mounted as:
>>
/volumeUSB1/usbshare/
<<
so if rsnapshot.conf contains:
>>
backup  /volumeUSB1/usbshare/   usbshare1/
<<
for the mounted USB drive, the content is backed up to:
>>
rsnapshot/hourly.0/localhost/usbshare1/volumeUSB1/usbshare
<<

This is logical and consistent, but it is not what I want.
What I would like is to back up the USB drive contents without adding the
mount point path to the destination.
In other words, I would like the backed up files to go to:
>>
rsnapshot/hourly.0/localhost/usbshare1/
<<
How can I achieve this (without screwing up the behaviour of my local
backup)?

My rsync arguments are:
>>
rsync_short_args        -amORuvvX
rsync_long_args --stats --modify-window=1       --fake-super    --safe-links
<<
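
For what it's worth, plain rsync can do what I'm after by putting a '/./' marker in the source path when --relative is used; what I haven't figured out is how to express the same thing in rsnapshot.conf without affecting the local backup (the destination path below is just an example):

    # with --relative, everything left of the '/./' marker is dropped from the
    # path recreated under the destination
    rsync -a --relative /volumeUSB1/./usbshare/ /some/dest/usbshare1/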

Thanks in advance for any tips!

------------------------------------------------------------------------------
