Winkel, Richard J. | 3 Jun 16:49 2015

relinking (deduping) disconnected rsnapshot trees

Because of an undetected disk overflow I have fragmented copies of partial rsnapshot backups on a RAID array.
I'd rather not just go back to the last intact backup, but find a way to merge the new data with the existing tree. In other words, scan directories A and B, and if files A/subpathK/fileX and B/subpathK/fileX both exist and are identical, hard-link them together; otherwise do nothing.
Rsync (3.1.1) doesn't seem to be the tool for this, or at least I can't figure out how to make it do it.
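For concreteness, the kind of relinking pass I have in mind looks roughly like this (a rough bash sketch only; the /raid paths are placeholders, both trees must be on the same filesystem, and it ignores ownership/permission differences):

    A=/raid/snapshots.old        # intact tree
    B=/raid/snapshots.partial    # fragmented tree
    cd "$B" || exit 1
    # for every file in B that also exists at the same relative path in A
    # with identical content, replace B's copy with a hard link to A's file
    find . -type f -print0 | while IFS= read -r -d '' f; do
        if [ -f "$A/$f" ] && cmp -s "$A/$f" "$f"; then
            ln -f "$A/$f" "$f"
        fi
    done

Tools like hardlink or rdfind seem to do something along these lines as well, but I have not tried them on rsnapshot trees.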
Has anyone else run across this issue and how did you resolve it?

Thanks,
Rich
------------------------------------------------------------------------------
Eingedi | 3 Jun 10:46 2015

Is crontab -e needed?

Hi,
rsnapshot has its own cron job at '/etc/cron.d/rsnapshot', which you can edit to set up the schedule, much like crontab -e.
So... is crontab -e needed? Can't I just edit '/etc/cron.d/rsnapshot' exactly as I'd set up the crontab itself?
Kind of confused here... which one should I edit/enable?
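To make sure I understand the difference, here is the same made-up schedule written both ways; as far as I can tell the only difference is the extra user field in the cron.d file:

    # /etc/cron.d/rsnapshot style (system cron job, has a user column):
    0 */4   * * *   root    /usr/bin/rsnapshot hourly
    30 3    * * *   root    /usr/bin/rsnapshot daily

    # the same schedule entered with "crontab -e" as root (no user column):
    0 */4   * * *   /usr/bin/rsnapshot hourly
    30 3    * * *   /usr/bin/rsnapshot daily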

Thanks, 
Barak.


------------------------------------------------------------------------------
Laurens V. | 2 Jun 21:11 2015

Rsnapshot rotating but no smaller interval ready

Hello,

I've used rsnapshot for a while now, and I noticed that rsnapshot rotates an interval even when the lower interval's last snapshot is not present. I believe this is a bug, or at least produces results that are less favorable than my suggestion below.

- What I try to run:
rsnapshot weekly

- What are the circumstances:
I have a backup dir with daily.0 up to daily.5. There is also a weekly.0 present. (last run moved the daily.6 to weekly.0) - all is fine.

- Expected output:
/home/sync/daily.6 not present (yet), nothing to copy

- Instead I get:
mv /home/sync/weekly.0/ /home/sync/weekly.1/
/home/sync/daily.6 not present (yet), nothing to copy


- The problem:
This leaves me with a weekly.1 directory, possibly ready to be rotated even further to weekly.2, 3, etc... up until the moment a fresh daily.6 is made and a new weekly.0 can be created. This means we could end up with weekly.0, weekly.2, weekly.4... rather messy and I'd say unneeded rotations.


- My suggestion:
Shouldn't the "nothing to copy" check (looking for the existence of lower_interval.max) happen BEFORE rsnapshot rotates all the other directories? That would be less destructive than going ahead with the rotation and only THEN finding out that there is no daily.6 to move to weekly.0.

This will also prevent unnecessary gaps between weeklies and keep the directory structure cleaner.
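In pseudo-shell, the ordering I am suggesting would be roughly the following (a sketch of the logic only, not the actual rsnapshot code, using my paths from above):

    # proposed: look for the lower interval's last snapshot FIRST
    if [ ! -d /home/sync/daily.6 ]; then
        echo "/home/sync/daily.6 not present (yet), nothing to copy"
        exit 0                              # weekly.* stays completely untouched
    fi
    # only when daily.6 exists, rotate and promote
    mv /home/sync/weekly.0 /home/sync/weekly.1
    mv /home/sync/daily.6  /home/sync/weekly.0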

- What you could be asking yourselves:
Why on earth would you run weekly before daily.6 is made? Answer: it can just happen... in my case I use the "rsnapshot-once" wrapper, which makes sure that rsnapshot rolls back if it fails. This is ideal for my laptop: sometimes I shut it down while on the move, which kills rsnapshot, and the wrapper makes sure such dirty backups get cleaned up. It also means I need a cron job that runs multiple times - e.g. my weekly job runs every day, and the wrapper checks the timestamp and only runs rsnapshot when the last weekly is more than a week old. I hope this explains my situation a little.


Kind regards,
Laurens
------------------------------------------------------------------------------
Eingedi | 2 Jun 09:15 2015

How to copy to different directories?

Hi, 

I want to make rsnapshot copy data from different servers to different directories on the same host.

What I did was "cp /etc/rsnapshot.conf /etc/rsnapshot2.conf" and so on, up to number 5 (the number of different directories).
Then I edited the config files to set each one up for its different destination, and checked each one with "rsnapshot configtest" - syntax ok.
Now I went to "/etc/cron.d/rsnapshot" and set it up like this:

# 0 */4         * * *           root    /usr/bin/rsnapshot hourly -c /etc/rsnapshot2.conf -c /etc/rsnapshot3.conf -c /etc/rsnapshot4.conf -c /etc/rsnapshot5.conf
# 30 3          * * *           root    /usr/bin/rsnapshot daily -c /etc/rsnapshot2.conf -c /etc/rsnapshot3.conf -c /etc/rsnapshot4.conf -c /etc/rsnapshot5.conf
# 0  3          * * 1           root    /usr/bin/rsnapshot weekly -c /etc/rsnapshot2.conf -c /etc/rsnapshot3.conf -c /etc/rsnapshot4.conf -c /etc/rsnapshot5.conf
# 30 2          1 * *           root    /usr/bin/rsnapshot monthly -c /etc/rsnapshot2.conf -c /etc/rsnapshot3.conf -c /etc/rsnapshot4.conf -c /etc/rsnapshot5.conf

I set the times with crontab -e and left it for the day, to check how it worked after the jobs ran.
The result was bad: only the first rsnapshot worked - only the hourly run for the first directory - and nothing else.

How do I fix this, please? What am I doing wrong?
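One thing I am wondering about: does every config file need its own cron line with a single -c option, something like the sketch below (hourly lines only; this is a guess on my part, not tested)?

    0 */4   * * *   root    /usr/bin/rsnapshot -c /etc/rsnapshot.conf  hourly
    0 */4   * * *   root    /usr/bin/rsnapshot -c /etc/rsnapshot2.conf hourly
    0 */4   * * *   root    /usr/bin/rsnapshot -c /etc/rsnapshot3.conf hourly
    0 */4   * * *   root    /usr/bin/rsnapshot -c /etc/rsnapshot4.conf hourly
    0 */4   * * *   root    /usr/bin/rsnapshot -c /etc/rsnapshot5.conf hourly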

Thanks,
Barak.


------------------------------------------------------------------------------
Eingedi | 28 May 13:42 2015

rsnapshot doesn't copy data to remote host

Hello,
I am using rsnapshot with ssh to copy data to a remote host. 
Everything looks good; I run 'rsnapshot configtest' and it says syntax OK.
But when I type 'rsnapshot hourly' to check if it's working, it says it completed, yet when I check the remote host I see no files, just an empty folder.

This is what I use for the remote host path.

'backup  root <at> remote-host:/path/to/folder/    remote-host/'

What am I doing wrong?


------------------------------------------------------------------------------
Keith Ellis | 19 May 13:57 2015

Fwd: Running rsnapshot

Thanks for this. Just so I understand: my keys are not password protected, but because I use sudo to launch rsnapshot, root does not use my keys. Are you saying I can use keychain to open my keys from ~/.ssh for my general user account?
Regards, Keith Ellis

On 19 May, 2015,at 12:27 PM, Nico Kadel-Garcia <nkadel <at> gmail.com> wrote:

On Tue, May 19, 2015 at 4:56 AM, Keith Ellis <keith.ellis <at> mac.com> wrote:
> Ok, I have managed to install rsnapshot. It is on a Mac machine which has
> all my backup drives attached. I want to back up a Linux machine over ssh.
>
> I can ssh into my Linux box using keys under my normal username, but since
> rsnapshot needs to run as root (to use the lock file), the root user does not
> have the Linux public key available. I get an error message saying public
> key permission failed. Macs by default do not have a root user home directory,
> so I am not sure where to put the public key file. Can anyone give me some
> direction?
>
> Alternatively, am I able to run rsnapshot as a general user?
>
> Regards,
> Keith Ellis

ssh-agent, especially with the 'keychain' script, can be written into your cron job that runs rsnapshot. See

http://sourceforge.net/p/rsnapshot/mailman/message/6585224/

You still have an unlocked SSH key, but it's only unlocked in memory, not in a passphrase-free file, and it should ideally be a key that is *not* the backup user's default key. Even if you don't use keychain, as long as you save the `eval ssh-agent` output into a configuration file readable only by the rsnapshot backup user, that user can source a file that only it has access to.

Do *not* use the default key location for this: you can specify the
key location in the SSH options for rsnapshot.
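As a rough illustration of where this ends up (the schedule, user and key path below are made up; keychain drops a sourceable file under ~/.keychain/ named after the host):

    # backup user's crontab: load the agent environment, then run rsnapshot
    30 2 * * *   . "$HOME/.keychain/$(hostname)-sh"; /usr/bin/rsnapshot daily

    # rsnapshot.conf (fields are tab-separated): use a dedicated, non-default key
    ssh_args	-i /home/backup/.ssh/id_rsa_rsnapshot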
------------------------------------------------------------------------------
Keith Ellis | 19 May 10:56 2015

Running rsnapshot

Ok, I have managed to install rsnapshot. It is on a Mac machine which has all my backup drives attached. I want to back up a Linux machine over ssh.

I can ssh into my Linux box using keys under my normal username, but since rsnapshot needs to run as root (to use the lock file), the root user does not have the Linux public key available. I get an error message saying public key permission failed. Macs by default do not have a root user home directory, so I am not sure where to put the public key file. Can anyone give me some direction?

Alternatively, am I able to run rsnapshot as a general user? 
Regards, Keith Ellis
------------------------------------------------------------------------------
Keith Ellis | 18 May 22:05 2015

Re: Mac install

Thanks David, that helped. All installed now.

Cheers

Keith Ellis

> On 18 May 2015, at 19:07, rsnapshot-discuss-request <at> lists.sourceforge.net wrote:
> 
> Re: [rsnapshot-discuss] Mac install
------------------------------------------------------------------------------
Keith Ellis | 18 May 15:19 2015

Mac install

Hi,

I'm trying to install rsnapshot on a Mac so I can back up my Raspberry Pi ownCloud server. I have cloned the git repository and followed the instructions at https://github.com/rsnapshot/rsnapshot/blob/master/INSTALL.md, but the first command
   ./autogen.sh

throws an error: it cannot find the command 'autoreconf', which does not seem to be in the repository.
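My guess is that I am simply missing the GNU autotools on the Mac; perhaps something like the following would get me past this step (assuming Homebrew, which I have not yet tried for this):

    brew install autoconf automake
    ./autogen.sh    # then carry on with the remaining INSTALL.md steps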

Any help on how I can get rsnapshot installed on a Mac would be much appreciated.
Regards, Keith Ellis
------------------------------------------------------------------------------
John Lewis | 16 May 05:07 2015

Sudo based backup account?

Are newer versions of rsnapshot capable of initiating a backup with an account that has password-protected sudo, so that I can disable remote root login on the Internet-facing servers that need to be backed up and easily limit the privileged commands that can be used?
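What I am picturing is roughly the following (an untested sketch; "backupuser", the host name and paths are placeholders, and I realise a NOPASSWD rule limited to rsync may end up being the practical compromise):

    # rsnapshot.conf on the backup host (fields are tab-separated):
    # log in as an unprivileged user, run rsync on the far end under sudo
    rsync_long_args	--delete --numeric-ids --relative --delete-excluded --rsync-path="sudo rsync"
    backup	backupuser@server.example.com:/etc/	server/

    # /etc/sudoers (via visudo) on the server being backed up:
    backupuser ALL=(root) NOPASSWD: /usr/bin/rsync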

------------------------------------------------------------------------------
Cal Sawyer | 1 May 13:56 2015

Re: Collapsing a rsnapshot tree?

Hi

You fellows are, of course, absolutely right. What I discovered in retrospect was that the snapshots I had varied widely in date, so there were huge deltas - generally a pretty rotten example.

This morning I collapsed a 7-day tree to 5 days by deleting daily.6 and daily.5. Those deletes took barely any time at all, and the subsequent snapshot was similarly very fast.

Thanks very much to everyone who pitched in.

Cal Sawyer | Systems Engineer | BlueBolt Ltd
15-16 Margaret Street | London W1W 8RW
+44 (0)20 7637 5575 | www.blue-bolt.com
From: David Keegel <djk <at> cyber.com.au>
Subject: Re: [rsnapshot-discuss] Collapsing a rsnapshot tree?
To: Cal Sawyer <cal-s <at> blue-bolt.com>
Cc: rsnapshot discussion list <rsnapshot-discuss <at> lists.sourceforge.net>

I agree with Scott (and most of the other comments on this thread).

On Fri, Apr 24, 2015 at 10:34:49AM +0100, Cal Sawyer wrote:
> Thanks. The reverse is actually what happened, I think. I had a daily 7
> setup that I had reconfigured to daily 3. When I run rsnapshot -v
> manually, I can see at what level it's doing its initial rm's and cp's,
> and daily.4-6 are no longer being touched, although they held data - a
> lot of it.

These data are either:
(1) files which are no longer current (not present in daily.0) - these use up disk space that can be freed by deleting them (assuming you don't have any weekly or similar backups), but deleting them will have no effect on the next day's backup; or
(2) files which do exist in daily.0, in which case having a hard link to them in daily.3-6 makes no difference to the disk space used; if daily.3-6 are deleted, the link count for those files in daily.0 goes down but no disk space is freed. If the files still exist in daily.0, deleting the old directories has no effect on the next day's backup, because linking is only done against daily.0.

> When I (naively) deleted daily.3-daily.6, I lost a few TB of data which

Deleting daily.3-6 makes sense and is what I would have done. I don't think deleting daily.3-daily.6 was naive at all.

> lost a few TB of data which then had to be picked up again in the
> latest daily (which took quite a while to complete).

This bit I don't understand. If you measured "lost a few TB of data" using df, then the difference would be old files which are no longer current, and hence not relevant to how long the rsnapshot run will take. To explain the "took quite a while to complete", I'd be looking at whether there were many differences between daily.0 and your source file systems. For example, something messed with daily.0, or lots of files in the source directory appear to have changed (for example the mtime differs between files in the source and daily.0). Or something changed in your rsnapshot config that means it is not making hard links properly any more (in which case I'd expect each rsnapshot run to take quite a while until that is fixed).

> So the question remains, I think - how best to condense/collapse older
> daily snapshots when reducing the retention time?

No, I think the question is why your rsnapshot runs started taking longer, and I think the answer is *not* because you deleted the old daily.3-6 (which are no longer used nor relevant to rsnapshot).

> ignoring 3-6, or should one just leave the older snapshots alone and
> live with it?
>
> - cal
>
> On 23/04/15 17:49, Scott Hess wrote:
>> On Thu, Apr 23, 2015 at 1:20 AM, Cal Sawyer <cal-s <at> blue-bolt.com> wrote:
>>> On occasion, I've wanted to collapse a weekly or multi-day snapshot
>>> tree and redefine it to have fewer days' retention. If I modify a
>>> config from, say, 7-day retention to 3 days, rsnapshot will ignore
>>> days 3-6 and act only on day 2 and below, effectively "orphaning" the
>>> older snapshots, which remain in place untouched forever. If I delete
>>> those older snapshots, (predictably?) I end up with a very long sync
>>> run to re-acquire the files missing in the older snapshots. Is there
>>> an accepted method of collapsing a deep snapshot tree into a less
>>> deep one, minimising the amount of resyncing needed?
>> I'm not sure I understand your question. If you wish to go from
>> daily.[0123456] to daily.[012], then rsnapshot will not have created
>> any hardlinks which span between daily.0 and daily.6 without also
>> being present in daily.[12345]. The only time you have to be careful
>> is when you're manually messing with the most-recent snapshot.
>> -scott

-- 
David Keegel <djk <at> cyber.com.au>    Cyber IT Solutions Pty. Ltd.
http://www.cyber.com.au/~djk/       Linux & Unix Systems Administration
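For anyone following along, a quick way to see point (2) above in practice (the file path below is made up; in GNU stat, %h is the hard-link count and %i the inode number):

    stat -c '%h %i %n' daily.0/etc/fstab daily.3/etc/fstab   # same inode shared across snapshots
    rm -rf daily.3 daily.4 daily.5 daily.6
    stat -c '%h %i %n' daily.0/etc/fstab                     # link count drops...
    df -h /path/to/snapshot_root                             # ...but space is only freed for files absent from daily.0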
------------------------------------------------------------------------------