Thierry Lavallee | 29 Sep 16:02 2015

Moving to a delete dir rather than rm

hi,
Is there a way for rsnapshot to mv rather than rm here?
[27/Sep/2015:04:00:48] /bin/rm -rf /media/backupServer/home/daily.6/

I would prefer rsnapshot to act quickly by simply moving the expired snapshot
to a /to_delete directory; I would then run a regular cron job to empty that
out. At present my monthly or weekly runs are sometimes unable to start
because the daily run is still going.
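
A sketch of that idea (not an rsnapshot feature; the function name and paths are made up for illustration): a mv within one filesystem is just a rename and returns almost instantly, so the slow rm -rf can be deferred to cron. Note that recent rsnapshot versions may already offer something similar via the use_lazy_deletes option; check your man page.

```shell
#!/bin/sh
# Sketch: instead of an immediate "rm -rf", rename the expired snapshot
# into a holding directory and purge it later from cron.
defer_delete() {
    snapshot="$1"   # directory rsnapshot would otherwise rm -rf
    trash="$2"      # holding directory, e.g. /media/backupServer/to_delete
    mkdir -p "$trash"
    # Timestamp suffix avoids collisions when the same name expires again.
    mv "$snapshot" "$trash/$(basename "$snapshot").$(date +%s)"
}

# Example cron entry (illustrative) to empty the holding directory nightly:
# 30 4 * * * /bin/rm -rf /media/backupServer/to_delete/*
```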

Thanks



------------------------------------------------------------------------------
_______________________________________________
rsnapshot-discuss mailing list
rsnapshot-discuss <at> lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/rsnapshot-discuss
Benedikt Heine | 17 Aug 19:46 2015

rsnapshot 1.4.1

Welcome to a new release of rsnapshot: 1.4.1

This is a small bugfix-release.

You can download the release:
http://rsnapshot.org/downloads/
http://rsnapshot.org/downloads/rsnapshot-1.4.1.tar.gz

Changes:
- rsnapshot handled the exit code of rsync incorrectly: it always
assumed that rsync had exited successfully.

We encourage every user of 1.4.0 to update.

Sincerely,
Bene

------------------------------------------------------------------------------
Thierry Lavallee | 13 Aug 14:59 2015

Performance comparison and process

Hi,
Here is a look at my log. I am wondering if this is normal and if there
is a way to optimize it. I find all this copying and removing quite
lengthy, and I am wondering whether this is the kind of performance you
guys get and what I should expect for a 260 GB remote source directory.
Should I post my configuration?
Thanks!

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[12/Aug/2015:03:00:03] /usr/bin/rsnapshot -c 
/root/scripts/backup/rsnapshot.Mediaserver02_cpbackup.conf sync: started
[12/Aug/2015:03:00:03] Setting locale to POSIX "C"
[12/Aug/2015:03:00:03] echo 4493 > /var/run/rsnapshot_Mediaserver02new.pid
[12/Aug/2015:03:00:03] /usr/bin/rsync -avx --delete --numeric-ids 
--delete-excluded --rsh="/usr/bin/ssh -i /root/.ssh/id_rsa" 
root <at> Mediaserver02.media-hosting.com:/backup/cpbackup/daily/ 
/media/backupMediaserver02b/home/.sync/

3 hours here to get the diff files.

[12/Aug/2015:06:12:41] touch /media/backupMediaserver02b/home/.sync/
[12/Aug/2015:06:12:45] rm -f /var/run/rsnapshot_Mediaserver02new.pid
[12/Aug/2015:06:12:45] /usr/bin/logger -i -p user.info -t rsnapshot 
/usr/bin/rsnapshot -c 
/root/scripts/backup/rsnapshot.Mediaserver02_cpbackup.conf sync: 
completed successfully
[12/Aug/2015:06:12:45] /usr/bin/rsnapshot -c 
/root/scripts/backup/rsnapshot.Mediaserver02_cpbackup.conf sync: 
completed successfully
[12/Aug/2015:06:12:46] /usr/bin/rsnapshot -c 
/root/scripts/backup/rsnapshot.Mediaserver02_cpbackup.conf daily: started
[12/Aug/2015:06:12:46] Setting locale to POSIX "C"
[12/Aug/2015:06:12:46] echo 15058 > /var/run/rsnapshot_Mediaserver02new.pid
[12/Aug/2015:06:12:48] /bin/rm -rf /media/backupMediaserver02b/home/daily.6/

6 hours here to clean out daily.6

[12/Aug/2015:12:08:55] mv /media/backupMediaserver02b/home/daily.5/ 
/media/backupMediaserver02b/home/daily.6/
[12/Aug/2015:12:08:55] mv /media/backupMediaserver02b/home/daily.4/ 
/media/backupMediaserver02b/home/daily.5/
[12/Aug/2015:12:08:56] mv /media/backupMediaserver02b/home/daily.3/ 
/media/backupMediaserver02b/home/daily.4/
[12/Aug/2015:12:08:56] mv /media/backupMediaserver02b/home/daily.2/ 
/media/backupMediaserver02b/home/daily.3/
[12/Aug/2015:12:08:56] mv /media/backupMediaserver02b/home/daily.1/ 
/media/backupMediaserver02b/home/daily.2/
[12/Aug/2015:12:08:56] mv /media/backupMediaserver02b/home/daily.0/ 
/media/backupMediaserver02b/home/daily.1/
[12/Aug/2015:12:08:56] /bin/cp -al 
/media/backupMediaserver02b/home/.sync 
/media/backupMediaserver02b/home/daily.0

10 hours here to copy from .sync to daily.0

[12/Aug/2015:22:03:50] rm -f /var/run/rsnapshot_Mediaserver02new.pid
[12/Aug/2015:22:03:50] /usr/bin/logger -i -p user.info -t rsnapshot 
/usr/bin/rsnapshot -c 
/root/scripts/backup/rsnapshot.Mediaserver02_cpbackup.conf daily: 
completed successfully
[12/Aug/2015:22:03:50] /usr/bin/rsnapshot -c 
/root/scripts/backup/rsnapshot.Mediaserver02_cpbackup.conf daily: 
completed successfully

All done 19 hours later...
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
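
For context on the 10-hour step above: cp -al walks the whole tree creating hard links, so the time goes into creating millions of directory entries, not copying file data. A throwaway illustration (temporary paths; GNU cp and stat assumed):

```shell
# Demonstrate what "cp -al" does: the "copy" shares inodes with the
# original, so no file contents are duplicated, but every directory
# entry still has to be created one by one.
src=$(mktemp -d)
echo data > "$src/f"
cp -al "$src" "$src.copy"
# Each regular file now has a hard-link count of 2: one name per tree.
stat -c %h "$src/f"    # prints 2
```

If your rsnapshot version supports the link_dest option, enabling it lets rsync create the hard links itself via --link-dest and skips the separate cp -al pass; see the man page for the trade-offs.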

------------------------------------------------------------------------------
jungle Boogie | 27 Jul 17:11 2015

exclude file assistance

Hello All,

To be fair, you can tell me to read the rsync manual and I won't have
a problem with that; however, this may be a quick question for you to
answer.

My rsnapshot.conf has this:
exclude_file    /usr/home/jungle/exclude.txt

And the exclude file, sitting exactly where it's listed above, contains this:
/usr/home/jungle/fossil-repos/sqlite3/
/usr/home/jungle/fossil-repos/sqlite.fossil
/usr/home/jungle/fossil-repos/fossil/
/usr/home/jungle/fossil-repos/fossil.fossil
/usr/home/jungle/fossil-repos/check-in-edit/
/usr/home/jungle/mgs/api/Newman/
/usr/home/jungle/.node-gyp/
/usr/home/jungle/.npm/
/usr/home/jungle/bin/
/usr/home/jungle/.cache/
/usr/local/etc/fonts/

Do I need to prefix those with a minus sign, or since this is _already_
the exclude file, is it assumed those items are excluded?
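
For what it's worth, a quick local experiment can answer this without touching the backups. Plain patterns in a file handed to rsync's --exclude-from (which is what rsnapshot's exclude_file becomes) are treated as excludes; +/- prefixes belong to filter/merge rules. A throwaway demonstration, assuming rsync is installed (all paths are temporary):

```shell
# Show that plain patterns in an --exclude-from file are excludes by
# default, with no "-" prefix required.
src=$(mktemp -d); dst=$(mktemp -d); excl=$(mktemp)
mkdir -p "$src/keep" "$src/skipme"
touch "$src/keep/a.txt" "$src/skipme/b.txt"
printf 'skipme/\n' > "$excl"     # note: no "-" prefix
rsync -a --exclude-from="$excl" "$src/" "$dst/"
ls "$dst"                        # only "keep" was transferred
```

One caveat: a leading "/" in an exclude pattern anchors it at the root of the transfer, not the filesystem root, so whether absolute paths like those above match depends on the backup point in rsnapshot.conf.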

Thanks!

--
-------
inum: 883510009027723
sip: jungleboogie <at> sip2sip.info
xmpp: jungle-boogie <at> jit.si

------------------------------------------------------------------------------
Mathieu Chateau | 26 Jul 12:43 2015

3 rsync process ?

Hello,

When rsnapshot starts, I can find this in the log (paths replaced for security):

[25/Jul/2015:21:48:33] /usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded --link-dest=/desFolder/hourly.1/customers/ /sourceFolder /desFolder/hourly.0/customers/

So far, so good.

But when I look at the processes on the server, I find 3 rsync processes (paths replaced for security):

[root <at> myserver ~]# ps auwx | grep rsync
root      4567  0.2  0.1 142096 26956 ?        S    Jul25   2:12 /usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded --link-dest=/desFolder/hourly.1/customers/ /sourceFolder /desFolder/hourly.0/customers/

root      4568  0.0  0.1 142000 26292 ?        S    Jul25   0:28 /usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded --link-dest=/desFolder/hourly.1/customers/ /sourceFolder /desFolder/hourly.0/customers/

root      4569  0.3  0.1 142288 26480 ?        S    Jul25   2:59 /usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded --link-dest=/desFolder/hourly.1/customers/ /sourceFolder /desFolder/hourly.0/customers/

Is this really normal?
I tried strace on 2 of them; they look like they are working on the same folders.




Regards,
Mathieu CHATEAU
http://www.lotp.fr
------------------------------------------------------------------------------
lukshuntim | 24 Jul 15:22 2015

Rollback during sync with sync_first and link_dest enabled

Hi,

There's a recent thread mentioning a lengthy rollback from lowest.0 to 
.sync when rsync fails in the sync step. Both link_dest and sync_first 
are enabled.

As sync_first is enabled, no snapshots are rotated until the sync is 
completed. If rsync fails, the existing backups will still be intact. So 
apparently there's no need for a rollback from lowest.0 to .sync, which 
can take a long time with a large archive.

Is this reasoning sound, or am I overlooking something? A response would 
be much appreciated.

Regards,
ST
--
------------------------------------------------------------------------------
Mathieu Chateau | 22 Jul 23:02 2015

how to use newest snapshot instead of oldest in rotation?

Hello,

When using a backup tool, we generally want something like:
- 1 backup per day (let's say 7 kept, for 1 week)
- 1 backup per month (first or last day of the month, your choice of date)

As far as I understand, rsnapshot rotates the oldest daily backup to make the newest monthly backup. So I won't have a monthly backup that matches what was present on the 1st day of the month (or whatever day I run rsnapshot with the monthly argument).

I could use 2 separate config files so they ignore each other, but then I lose the benefit of hardlinks between the two (and so basically store the full data twice).

Is there any way to achieve this?
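
For reference, the usual single-config layout looks like this (illustrative values; rsnapshot requires tabs, not spaces, between fields). Rotation promotes the oldest daily into monthly.0 whenever the monthly interval runs, which is exactly the behaviour described above:

```
# rsnapshot.conf fragment (illustrative; fields must be TAB-separated)
retain	daily	7
retain	monthly	12

# crontab entries (illustrative): the monthly run rotates the oldest
# daily.6 into monthly.0, so schedule it before the daily run
30 3 1 * *	/usr/bin/rsnapshot monthly
0 4 * * *	/usr/bin/rsnapshot daily
```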

Thanks in advance,

Mathieu Chateau

http://www.lotp.fr
------------------------------------------------------------------------------
Gordon Messmer | 15 Jul 19:39 2015

Re: Returned 255 while processing - Rolling back is lengthy

On 07/15/2015 10:35 AM, Thierry Lavallee wrote:
>
>> You can use "sync_first" to accomplish that.
> Seems like I don't have any mention of "sync_first" in my conf file. :/ 

You probably wouldn't, normally.  Check the man page for rsnapshot, and 
add the directive if that's how you want your backups to work.
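
For reference, a minimal sketch of the directive once added (illustrative path; rsnapshot config fields are tab-separated). With sync_first enabled, data is fetched into .sync by a separate "sync" run, and the interval run only rotates:

```
# /etc/rsnapshot.conf fragment (illustrative)
sync_first	1

# then, typically from cron:
#   rsnapshot sync && rsnapshot daily
```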

------------------------------------------------------------------------------
Don't Limit Your Business. Reach for the Cloud.
GigeNET's Cloud Solutions provide you with the tools and support that
you need to offload your IT needs and focus on growing your business.
Configured For All Businesses. Start Your Cloud Today.
https://www.gigenetcloud.com/
Thierry Lavallee | 15 Jul 19:13 2015

Returned 255 while processing - Rolling back is lengthy

Hi,

I am getting these errors. Looks like the connection with the remote server might have died.

root <at> fileserver:~# tail -f /var/log/rsnapshot/rsnapshot_Media02_cpbackup.log

[15/Jul/2015:08:04:02] /usr/bin/rsnapshot -c /root/scripts/backup/rsnapshot.Media02_cpbackup.conf daily: started
[15/Jul/2015:08:04:02] echo 10645 > /var/run/rsnapshot_Media02new.pid
[15/Jul/2015:08:04:02] mv /media/backupMedia02b/home/daily.2/ /media/backupMedia02b/home/daily.3/
[15/Jul/2015:08:04:02] mv /media/backupMedia02b/home/daily.1/ /media/backupMedia02b/home/daily.2/
[15/Jul/2015:08:04:02] mv /media/backupMedia02b/home/daily.0/ /media/backupMedia02b/home/daily.1/
[15/Jul/2015:08:04:02] mkdir -m 0755 -p /media/backupMedia02b/home/daily.0/
[15/Jul/2015:08:04:02] /usr/bin/rsync -ax --delete --numeric-ids --delete-excluded --rsh="/usr/bin/ssh -i /root/.ssh/id_rsa" --link-dest=/media/backupMedia02b/home/daily.1/ root <at> Media02.domain.com:/backup/cpbackup/daily/ /media/backupMedia02b/home/daily.0/
[15/Jul/2015:08:52:32] /usr/bin/rsnapshot -c /root/scripts/backup/rsnapshot.Media02_cpbackup.conf daily: ERROR: /usr/bin/rsync returned 255 while processing root <at> Media02.domain.com:/backup/cpbackup/daily/
[15/Jul/2015:08:52:32] Rolling back ""
[15/Jul/2015:08:52:32] /bin/rm -rf /media/backupMedia02b/home/daily.0/


Yet "Rolling back" takes more than 8 hours to process. It is a 400 GB repository.

From the log:
[15/Jul/2015:08:52:32] /bin/rm -rf /media/backupMedia02b/home/daily.0/
[15/Jul/2015:11:32:18] /bin/cp -al /media/backupMedia02b/home/daily.1 /media/backupMedia02b/home/daily.0

It looks like the SSH connection should be tested BEFORE the whole process of moving the repository is done. That would avoid this whole loss of time.

Also, it seems the quick mv done when initiating the backup could also be used for the rollback.

Anyway, is there any way to make things better on my end?
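
One workaround on your end, as a sketch (this is a cron wrapper, not an rsnapshot feature; the host, key, and config paths are illustrative): confirm the remote end answers over SSH before rsnapshot starts rotating directories, so a dead connection fails cheaply up front.

```
# crontab fragment (illustrative)
0 4 * * * /usr/bin/ssh -i /root/.ssh/id_rsa -o BatchMode=yes -o ConnectTimeout=30 root@Media02.domain.com true && /usr/bin/rsnapshot -c /root/scripts/backup/rsnapshot.Media02_cpbackup.conf daily
```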
thanks
------------------------------------------------------------------------------
Gordon Messmer | 15 Jul 18:43 2015

Re: Source directory doesn't exist - should this be fatal?

On 07/14/2015 06:50 PM, Ken Woods wrote:
>> >I don't see why that would scale less well in a single configuration
>> >file rather than individual files per host (or per volume).
> Math.  It's hard, yeah?

First, admittedly, I totally garbled my reply.  I apologize for that.

However, I don't really see the point that you're trying to make. David 
suggested that a failure in one retain line should be treated as a 
warning rather than an overall failure.  Nico's reply suggested that he 
separate different hosts or different services into separate 
configuration files so that a failure would affect only the backup of 
that host or service.

That's a perfectly reasonable suggestion, and I don't see how it creates 
scalability concerns.

Especially when you have a very large number of hosts or directories to 
back up, separating them into their own configuration files becomes more 
desirable.  Each host (or however you decide to partition your 
configurations, but I'll use host for example) can be generated from a 
template, as Nico suggested.  That makes maintenance much easier.  When 
you're dealing with hundreds or thousands of entries, you don't want to 
manage the configuration by hand.  Adding or removing items is much more 
reliable when you script your maintenance tasks.

With individual files, you'll probably use a short script similar to 
"run-parts" to run rsnapshot with the interval specified for each of the 
configuration files present.  rsnapshot processes each of the "retain" 
entries in the configuration files in series.  If you break up your 
configuration files and process them in a loop, that remains true.  
However, you can choose to add logic to the loop to run several 
rsnapshot instances in parallel if your backup disk is fast enough to 
not be the bottleneck in your backup system.  In that case, individual 
files scale better than a single file.

Finally, the exit status of rsnapshot is the only really reliable means 
it has of indicating whether the backup process was successful or not.  
You could, as I did, use the exit status and the name of the 
configuration file run to feed a monitoring system so that alerts can be 
generated when backups fail, and provide a dashboard for monitoring a 
large set of backups.  If individual "retain" lines are demoted to a 
warning from a failure, then the ability to reliably communicate failure 
is lost.  rsnapshot only gets one exit code, and it should definitely be 
used to indicate the most severe error that it encountered during a 
backup run.

So, for reliability and scalability, individual files really look like 
best practice to me.  If you think I'm wrong, I'm open to criticism.
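
A minimal sketch of the run-parts-style loop described above; the /etc/rsnapshot.d layout, the variable names, and the log line format are assumptions for illustration, not rsnapshot conventions:

```shell
#!/bin/sh
# Run rsnapshot once per config file, recording each exit status so a
# monitoring system can alert on failures. Returns the worst status seen.
run_backups() {
    interval="${1:-daily}"
    rsnapshot_bin="${RSNAPSHOT:-/usr/bin/rsnapshot}"
    conf_dir="${CONF_DIR:-/etc/rsnapshot.d}"
    worst=0
    for conf in "$conf_dir"/*.conf; do
        [ -e "$conf" ] || continue
        "$rsnapshot_bin" -c "$conf" "$interval"
        status=$?
        # One line per config: feed this to your monitoring of choice.
        echo "$conf $interval exit=$status"
        [ "$status" -gt "$worst" ] && worst=$status
    done
    return $worst
}
```

Parallelism is then a small change: background each rsnapshot invocation and wait, if the backup disk can keep up.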

------------------------------------------------------------------------------
David Cantrell | 13 Jul 16:25 2015

Source directory doesn't exist - should this be fatal?

I had a disk die a few days ago, and rsnapshot is saying ...

> ----------------------------------------------------------------------------
> rsnapshot encountered an error! The program was invoked with these options:
> /usr/local/bin/rsnapshot daily 
> ----------------------------------------------------------------------------
> ERROR: /usr/local/etc/rsnapshot.conf on line 188:
> ERROR: backup /Volumes/Vault/ Vault/ - Source directory "/Volumes/Vault/" \
>          doesn't exist 
> ERROR: ---------------------------------------------------------------------
> ERROR: Errors were found in /usr/local/etc/rsnapshot.conf,
> ERROR: rsnapshot can not continue. If you think an entry looks right, make
> ERROR: sure you don't have spaces where only tabs should be.

This prevents all my other targets from being backed up.

I think this should emit a warning, but be non-fatal. Anyone disagree?

--
David Cantrell | top google result for "internet beard fetish club"

  Longum iter est per praecepta, breve et efficax per exempla.

------------------------------------------------------------------------------