r1100gspd | 14 Jun 10:39 2015

Fw: Re: rsync_long_args --delete alternative

--- On Fri, 12/6/15, anonymouse wrote to me:

> if several "delete-before" sets fail, then you end up with no backup.
> 

I don't think this would occur with my setup because I have 
sync_first      1
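
With sync_first enabled, rsync only ever writes into the .sync/ working directory, and the numbered snapshots are rotated only after a sync has completed, so a failed transfer never touches the existing backups. A minimal sketch of the combination (the rsync flags and the retain name are placeholders, not quoted from my actual config):

sync_first         1
rsync_long_args    --delete-before --numeric-ids --relative --delete-excluded

# cron then chains the two stages:
#   rsnapshot sync && rsnapshot alpha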

------------------------------------------------------------------------------
r1100gspd | 12 Jun 14:44 2015

rsync_long_args --delete alternative

A backup to an external hard drive recently filled up, and this wasn't detected for some days.
I found that even after progressively deleting ALL the Daily and Weekly backups I still couldn't
run a sync, because the disk was too full. I suppose this is because, with --delete set, rsync
was adding files to the sync folder before the directories containing deletable files had been
processed.

The backup data size was 3.3TB and the backup drive was 3.8TB, so there should have been enough room for a normal backup.
I could have just deleted the sync folder and started again, but instead I changed rsync_long_args
  from:  --delete
  to:    --delete-before

This then allowed the backup to complete successfully.

I think it is sensible to leave the rsync_long_args parameter as --delete-before, as it should help avoid
out-of-disk-space conditions. But I am sure there is a reason why it is not the default. How do others set
the --delete parameter, and is there any harm in leaving it set to --delete-before?
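
For reference, the change is a one-line edit in rsnapshot.conf. With --delete-before, rsync deletes extraneous files from the whole destination before it copies anything, so space is freed up front (the flags besides the delete option are rsnapshot's documented defaults):

# default:
rsync_long_args    --delete --numeric-ids --relative --delete-excluded
# delete first, freeing space before any new files are copied:
rsync_long_args    --delete-before --numeric-ids --relative --delete-excluded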

- Gunter

------------------------------------------------------------------------------
Hervé Werner | 10 Jun 14:22 2015

LVM thin support

Hello

Rsnapshot is able to create temporary LVM snapshots when backing up data, so that it has a consistent view of the whole volume, and that's great. Today LVM also offers thin provisioning (for more information: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Logical_Volume_Manager_Administration/thinly-provisioned_snapshot_volumes.html). For rsnapshot this would make it possible:
  • to simplify the lvcreate command (and the work of the user configuring rsnapshot) by dropping the size option
  • to create snapshots of snapshots. E.g. I use thin snapshots to quickly back up my data daily, and then rely on rsnapshot to back up the latest snapshot to an external disk weekly

Please see attached a quick & dirty patch I wrote for my needs. To use it, you only need to put this in rsnapshot.conf:
linux_lvm_snapshotsize = 'lvmthin'             # (lvmthin instead of an LVM size)
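
To illustrate the difference at the LVM level (volume group and volume names below are placeholders): a classic snapshot must reserve copy-on-write space up front, while a thin snapshot allocates from its pool on demand, so no size option is needed:

# classic snapshot: a COW size must be reserved
lvcreate --snapshot --size 2G --name rsnapshot_tmp /dev/vg0/data

# thin snapshot of a thin volume: no size option
lvcreate --snapshot --name rsnapshot_tmp vg0/thin_data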


If you are interested in adding this feature to rsnapshot but would like the patch reworked first, I'd be glad to help.

Regards

dud

------------------------------------------------------------------------------
Hervé Werner | 8 Jun 13:26 2015

Stale lock file

Hello

I found out that in some cases rsnapshot's lockfile is not removed, which makes it issue a warning the next time it runs.

Here is a demonstration. I'm using the following configuration file:

config_version    1.2
snapshot_root    /media/LV_BACKUP/rsnapshot/
cmd_cp        /bin/cp
cmd_rm        /bin/rm
cmd_rsync    /usr/bin/rsync
cmd_logger    /usr/bin/logger
retain    alpha    3
retain    beta    2
retain    gamma    2
verbose        5
loglevel    3
lockfile    /var/run/rsnapshot.pid
one_fs        1
link_dest    1
use_lazy_deletes    1
backup    /tmp/test    localhost


I then execute rsnapshot twice:

$ sudo rsnapshot alpha
require Lchown
Lchown module not found
Setting locale to POSIX "C"
echo 32014 > /var/run/rsnapshot.pid
mkdir -m 0755 -p /media/LV_BACKUP/rsnapshot/alpha.0/
/usr/bin/rsync -avx --delete --numeric-ids --relative --delete-excluded \
    /tmp/test/ /media/LV_BACKUP/rsnapshot/alpha.0/localhost
sending incremental file list
created directory /media/LV_BACKUP/rsnapshot/alpha.0/localhost
/tmp/
/tmp/test/
/tmp/test/env.txt
/tmp/test/install-dep.txt
/tmp/test/install-dep.txt2

sent 38,800 bytes  received 152 bytes  77,904.00 bytes/sec
total size is 38,490  speedup is 0.99
rsync succeeded
touch /media/LV_BACKUP/rsnapshot/alpha.0/
/usr/bin/logger -p user.info -t rsnapshot[32014] /usr/local/bin/rsnapshot \
    alpha: completed successfully


$ sudo rsnapshot alpha
require Lchown
Lchown module not found
Setting locale to POSIX "C"
WARNING: Removing stale lockfile /var/run/rsnapshot.pid
/usr/bin/logger -p user.err -t rsnapshot[32020] WARNING: Removing stale \
    lockfile /var/run/rsnapshot.pid
WARNING: About to remove lockfile /var/run/rsnapshot.pid which belongs to a different process: 32014 (this is OK if it's a stale lock)
rm -f /var/run/rsnapshot.pid
echo 32020 > /var/run/rsnapshot.pid
mv /media/LV_BACKUP/rsnapshot/alpha.0/ /media/LV_BACKUP/rsnapshot/alpha.1/
mkdir -m 0755 -p /media/LV_BACKUP/rsnapshot/alpha.0/
/usr/bin/rsync -avx --delete --numeric-ids --relative --delete-excluded \
    --link-dest=/media/LV_BACKUP/rsnapshot/alpha.1/localhost \
    /tmp/test/ /media/LV_BACKUP/rsnapshot/alpha.0/localhost
sending incremental file list
created directory /media/LV_BACKUP/rsnapshot/alpha.0/localhost

sent 171 bytes  received 85 bytes  512.00 bytes/sec
total size is 38,490  speedup is 150.35
rsync succeeded
touch /media/LV_BACKUP/rsnapshot/alpha.0/
/usr/bin/logger -p user.err -t rsnapshot[32020] WARNING: \
    /usr/local/bin/rsnapshot alpha: completed, but with some warnings


Looking at the source code, this behaviour actually occurs when the use_lazy_deletes feature is enabled, as the lockfile only seems to be removed when a _delete directory exists:

## code sub handle_interval
     
        # if use_lazy_delete is on, delete the _delete.$$ directory
        # we just check for the directory, it will have been created or not depending on the value of use_lazy_delete
        if (-d "$config_vars{'snapshot_root'}/_delete.$$") {

                # this is the last thing to do here, and it can take quite a while.
                # we remove the lockfile here since this delete shouldn't block other rsnapshot jobs from running
                remove_lockfile();


So I think rsnapshot should either create a dummy _delete directory when there is no previous backup to clean up, or add something like remove_lockfile() if ($use_lazy_deletes); for that case.
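
To sketch the second option in the style of the quoted code (an illustration of the idea only, not a tested patch; the variable name comes from my suggestion above):

        } elsif ($use_lazy_deletes) {
                # lazy deletes are enabled but there was no _delete directory,
                # so the lockfile still has to be released here
                remove_lockfile();
        }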


dud
------------------------------------------------------------------------------
Keith Ellis | 8 Jun 22:59 2015

Duplicate mount points when logged out

I've been running rsnapshot on a Mac, backing up an ownCloud server, for a couple of weeks, and it has
been working great. The Mac had been left logged in while I was checking that all was working fine.
However, when I logged out and left it running for a couple of days, I noticed that the external HD I was
backing up to had been mounted as "/Volumes/backup 1" instead of "/Volumes/backup", and as a result the
snapshots had not been copied to the external drive. This might not be an rsnapshot problem as such, but
has anyone come across this before, or can anyone help me correct this error?

Thanks
Keith.
------------------------------------------------------------------------------
Mike Threesi | 8 Jun 14:41 2015

Fwd: Ubuntu 14.04.2 - rsnapshot: Memory Leak?

Now I should be on the list...

I use rsnapshot on an Ubuntu server with 8 GB of RAM. After about 2 days it has consumed all real memory. No other extra apps are running on that box. Any thoughts? These are the commands I use in cron:

/usr/bin/rsnapshot -c /home/rsnapshot/samba/rsnapshot.conf sync && /usr/bin/rsnapshot -c /home/rsnapshot/samba/rsnapshot.conf hourly  

/usr/bin/rsnapshot -c /home/rsnapshot/samba/rsnapshot.conf daily  

/usr/bin/rsnapshot -c /home/rsnapshot/samba/rsnapshot.conf weekly  

/usr/bin/rsnapshot -c /home/rsnapshot/samba/rsnapshot.conf monthly  

/usr/bin/rsnapshot -c /home/rsnapshot/samba/rsnapshot.conf yearly
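
For context, these run from cron along these lines (the times shown here are placeholders, not my real schedule); sync is chained before hourly so that rotation only happens after a successful transfer:

0 */4 * * *  /usr/bin/rsnapshot -c /home/rsnapshot/samba/rsnapshot.conf sync && /usr/bin/rsnapshot -c /home/rsnapshot/samba/rsnapshot.conf hourly
30 3 * * *   /usr/bin/rsnapshot -c /home/rsnapshot/samba/rsnapshot.conf daily
# weekly, monthly and yearly entries follow the same pattern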

Thanks

------------------------------------------------------------------------------
Ray Morris | 7 Jun 22:31 2015

Re: rsnapshot-discuss Digest, Vol 108, Issue 9

> Version 1: rsnapshot running on the client, with the backup set being
> accessed via NFS mount(s) from the server. From the point of view of
> this process, it's comparing two local files at a time. Recall how
> rsync works: candidate files are compared by various criteria,
> including block-by-block, to minimize the amount of physical copying.

As I recall, when it's running locally like this, rsync does not compare blocks, because, as you explained, that would result in even more I/O than a simple copy would. Rather, it works almost exactly like cp -a (for local transfers, rsync defaults to --whole-file).

Note also, as mentioned before, that there is a REASON we use network rsync rather than NFS for network backups. If you get hit with Bitlocker, or an "rm -r .*", or any of a number of other things, your NFS storage will be deleted. Backups on NFS are not really backups: because they appear to be local storage, they will be destroyed when local storage is destroyed.
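
For example, a pull-style backup point in rsnapshot.conf (hostname and paths are placeholders) fetches the data over ssh, so the snapshot store is never mounted on, or writable by, the machine being backed up:

backup    root@client.example.com:/home/    client/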


------------------------------------------------------------------------------
Eildert Groeneveld | 5 Jun 17:48 2015

strategy for backup

Dear All

It is not a lack of information and cookbooks for rsnapshot that
poses a problem, but rather the sheer volume.

I would like to back up a small Linux network to a dedicated server,
where the clients are not always on (laptops and other machines that
get switched off).

I do not need to develop my own strategy, and would be happy to just
use a setup that has proven useful elsewhere. The only requirements
I have are (which I would assume everyone else has too):
- automatic operation (the server runs 24/7)
- a mail each day reporting how the backup went

I would assume that this is a standard setup that many people have,
and that a standard rsnapshot recipe for it therefore exists.

Maybe someone here can point me to such a document.

Thanks in advance

Tredlie

------------------------------------------------------------------------------
Winkel, Richard J. | 3 Jun 19:29 2015

Re: relinking (deduping) disconnected rsnapshot trees


> Thanks for the reply!
> I moved one of the backup trees elsewhere so I have some free space.
> The overflow has been happening for about a month.
> I obviously need to check what happened to the syslog message.
> Also, I had lazy_deletes turned on; I think this interfered with the
> rollback procedure: it left _delete* directories lying around that were
> never cleaned up.
>
> On 06/03/2015 11:24 AM, Christopher Barry wrote:
>> On Wed, 3 Jun 2015 14:49:10 +0000
>> "Winkel, Richard J." <winkelr <at> missouri.edu> wrote:
>>
>>> Because of an undetected disk overflow I have fragmented copies of
>>> partial rsnapshot backups on a raid.
>> Can you go into more detail here? disk overflow? Do you mean you ran
>> out of disk space, didn't notice and backups have been failing for
>> some period?
>>
>> How have you proceeded to correct this problem? e.g. did you replace/add
>> disks and rebuild, and now you have a larger RAID (5?) and wish to
>> resume backing up to this? Did you make room by deleting a bunch of
>> older stuff? Or, do you now have another additional RAID device to add
>> new backups to? The more detail the better.
>>
>>> I'd rather not just go back to the last intact backup, but find a way
>>> to merge the new data with the existing
>>> tree.  In other words, scan directories A and B and
>>> if files A/subpathK/fileX and B/subpathK/fileX exist and are
>>> identical, then link them together, otherwise do nothing.
>>> Rsync (3.1.1) doesn't seem to be the tool to use, at least I can't
>>> figure it out.
>>> Has anyone else run across this issue and how did you resolve it?
>>>
>>> Thanks,
>>> Rich

------------------------------------------------------------------------------
Winkel, Richard J. | 3 Jun 19:29 2015

Re: relinking (deduping) disconnected rsnapshot trees

On 06/03/2015 11:56 AM, Rich Winkel wrote:
> Here's a first draft:
>
> #!/bin/bash
> if [ $# -ne 2 ]; then
>         echo "Syntax: $0 tree1 tree2"
>         echo "Scans 2 trees for identically pathed files and, if they are identical, links them together."
>         exit 1
> fi
> # both trees must be directories on the same filesystem, or hard-linking will fail
> if ! [ -d "$1" ] || ! [ -d "$2" ] || [ "$(df -P "$1" "$2" | awk '{print $1}' | uniq | wc -l)" -ne 2 ]; then
>         echo "Arguments must be directories on the same partition! Exiting..."
>         exit 2
> fi
> # walk tree1; wherever the same relative path exists in tree2 with
> # identical content, replace the tree2 copy with a hard link to tree1's copy
> find "$1" -type f -print | sed "s,^$1,," | while IFS= read -r f; do
>         if cmp -s "$1/$f" "$2/$f"; then
>                 echo "Linking $1/$f to $2/$f"
>                 rm -f "$2/$f"
>                 ln "$1/$f" "$2/$f"
>         fi
> done
>

------------------------------------------------------------------------------
Winkel, Richard J. | 3 Jun 19:28 2015

Re: relinking (deduping) disconnected rsnapshot trees


> I guess I'm just being lazy.  But if anyone else already has something
> in hand, it seems like it would be useful to a lot of people.
> Otherwise I guess I'll have to invent it.
>
> On 06/03/2015 09:49 AM, Winkel, Richard J. wrote:
>> Because of an undetected disk overflow I have fragmented copies of
>> partial rsnapshot backups on a raid.
>> I'd rather not just go back to the last intact backup, but find a way to
>> merge the new data with the existing
>> tree.  In other words, scan directories A and B and
>> if files A/subpathK/fileX and B/subpathK/fileX exist and are identical,
>> then link them together, otherwise do nothing.
>> Rsync (3.1.1) doesn't seem to be the tool to use, at least I can't
>> figure it out.
>> Has anyone else run across this issue and how did you resolve it?
>>
>> Thanks,
>> Rich

------------------------------------------------------------------------------
