Harry Putnam | 22 Jan 18:40 2015

Having trouble with cp'ing files in h.0 to h.1 as links

Setup: OS=openindiana (a branch of solaris very close to solaris 11)
  rsnapshot-1.3.1lk, rsync-3.1.2dev

I see this problem has come up many times, but the googling I did
seems to indicate that something different is happening here.

  This line is taken from an rsnapshot run on a Solaris HOST where
  several similar rsnapshot runs happen... none of the 8 or so others
  have this problem:

-------       -------       ---=---       -------       ------- 
  /usr/gnu/bin/cp: cannot create link
`/rmh/m2/ImagesMusic/hourly.1/m2-IandM/ImageDB/images/imageArch/00inc/can1/2005/052602/can1_0015.XMP':
Cross-device link
-------       -------       ---=---       -------       -------

The usual recommendation is to uncomment cmd_cp, but as you can see
above, I'm already doing that... and using GNU cp.

I tried using Solaris cp too, but it has no flag like GNU cp's -l,
so it's a non-starter.

-------       -------       ---=---       -------       -------

Am I correct in thinking the actual cmd is cp -al?

-------       -------       ---=---       -------       ------- 

So I tried the cp operation by hand, still with gnu/cp, on those same
files and (no surprise) got the same failure.
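
(Side note: "cannot create link ... Cross-device link" is EXDEV - the
source and destination are not on the same filesystem, so no cp can
hard-link between them. A quick check, with paths inferred from the
error above, and assuming GNU stat sits next to GNU cp in /usr/gnu/bin:)

  # if the two device numbers differ, the directories live on different
  # filesystems (e.g. separate ZFS datasets) and hard links cannot work
  /usr/gnu/bin/stat -c %d /rmh/m2/ImagesMusic/hourly.0
  /usr/gnu/bin/stat -c %d /rmh/m2/ImagesMusic/hourly.1

  # or, more portably:
  df /rmh/m2/ImagesMusic/hourly.0 /rmh/m2/ImagesMusic/hourly.1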

Terry Barnum | 21 Jan 21:20 2015

space in snapshot_root path?

I'd like to use an existing disk for rsnapshot backups, but the disk name has a space in it.
Changing the disk name would be disruptive because it's already being used for other tasks. I found
discussion threads suggesting that using spaces isn't possible after 1.3.0. I'm running 1.3.1. I've tried:

snapshot_root   /Volumes/diskname\ backup/snapshots/
snapshot_root   "/Volumes/diskname backup/snapshots/"
snapshot_root   '/Volumes/diskname backup/snapshots/'
snapshot_root   /Volumes/diskname?backup/snapshots/
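
(One workaround that sidesteps the question of how the config parser
handles spaces: give the volume a space-free alias with a symlink and
point snapshot_root at that. A sketch only, untested here, assuming the
volume really is mounted at "/Volumes/diskname backup":)

  # space-free alias for the real mount point
  sudo ln -s "/Volumes/diskname backup" /Volumes/diskname_backup

  # rsnapshot.conf (fields must be tab-separated, as usual)
  snapshot_root   /Volumes/diskname_backup/snapshots/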

Thanks,
-Terry

Terry Barnum
digital OutPost
http://www.dop.com

alexandre | 21 Jan 19:03 2015

Option for rsnapshot recycle old backup directory

Hello

I love your scripts.

I was using the old backup script you refer to (the Mike Rubel one):

http://www.mikerubel.org/computers/rsync_snapshots/

I was using something like this:

mv backup.3 backup.tmp
mv backup.2 backup.3
mv backup.1 backup.2
mv backup.0 backup.1
mv backup.tmp backup.0
cp -al backup.1/. backup.0
rsync -a --delete source_directory/ backup.0/

i.e. the variant that recycles the oldest backup directory instead of
deleting it, because the rm command was taking a very, very long time
on my NAS (lots of small files).

Do you think you could add an option so that rsnapshot can do the same?
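
(The closest existing knob may be use_lazy_deletes - it postpones the
delete of the oldest interval until the very end of the run rather than
reusing the directory, so the rm cost is moved out of the critical path
rather than eliminated. A one-line config sketch:)

  # rsnapshot.conf: rename the oldest interval to *.delete and remove it
  # only just before rsnapshot exits
  use_lazy_deletes   1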

Great job, guys.

Alexandre


Harry Putnam | 21 Jan 18:41 2015

How to include existing directory hierarchy into rsnapshot bkups

I thought I'd try something I haven't really thought about before

I have 280 GB or so of image files from a Windows host, backed up onto
a Solaris host. I used rsync every once in a while to sync up the
pile of photos.

Now I've set up what is, to me (I'm sure some here will just smile at
the small amounts of data involved), a full-fledged rsnapshot server
on a Solaris host.

So I thought I'd set up an rsnapshot run on that same dataset that I'd
probably run by hand periodically.

After studying the hierarchy rsnap creates, I wondered if I could create
the rsnapshot run in a way that puts that data in the slot where
`hourly.0' normally would appear.

Hopefully rsnapshot would rotate it to `hourly.1' and proceed with
just the new bits that have changed.

Is trying this likely to result in a major mess? (I do have all the
same data, plus another 1-2 GB or so more, on the Windows box, so it's
not the only copy of that data.)

Or will rsync just laugh and say `yeah, right!' and throw errors?
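
(For what it's worth, pre-seeding the first slot is a known trick: put
the existing copy where rsnapshot's first backup would land, so the
first real run only transfers differences. A rough sketch - the snapshot
root, backup-point name and source path here are all hypothetical, and
ownership/timestamps need to match the Windows source or rsync will
still touch every file:)

  # assumed: snapshot_root is /rsnap/photos/, backup point named winbox/
  mkdir -p /rsnap/photos/hourly.0/winbox/
  mv /existing/photo/pile /rsnap/photos/hourly.0/winbox/

  # then run it normally: the old data rotates to hourly.1 and the new
  # hourly.0 is rebuilt from it with hard links plus only the changed bits
  rsnapshot hourly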


Gordon Messmer | 20 Jan 19:13 2015

Re: The Dreaded rsync error 12 - how to get rid of it?

On 01/20/2015 09:04 AM, Mark Phillips wrote:
> No firewall...they are both on the same subnet on my home lan.
>

You confirmed earlier that rsync by itself completes normally.  Have you 
confirmed yet that "rsnapshot daily" completes normally if it's run 
manually instead of from cron?  Have you tried "env -i rsnapshot 
daily" yet?  I suggested that earlier, but I don't see my email in the 
list archive; I wonder if it got dropped.

The other thing I'd anticipate, since you're using -H, is memory 
exhaustion.  rsync has to store all of the paths and inode numbers in 
memory when you use that option, and with 3.5 million files on /home, 
you're looking at several gigs of RAM.  You might run out of memory due 
to 32 bit memory limits, or because you don't physically have enough 
RAM+swap, or because of a memory limit set on the cron process and/or 
its children.
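
(A sketch of those checks - the crontab line is only an illustration:)

  # run the cron command by hand, but with a stripped environment
  env -i /usr/bin/rsnapshot daily

  # compare resource limits between your shell and cron
  ulimit -a                                        # interactive shell
  # * * * * * ulimit -a > /tmp/cron-limits 2>&1    # temporary crontab entry

  # and watch free memory on the host running rsync -H while it runs
  free -m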

Benedikt Heine | 20 Jan 17:20 2015

Re: The Dreaded rsync error 12 - how to get rid of it?

Hi,

According to the rsync issues page[0], rsync exits with code 12 when there is
an error in the protocol data stream; the man page says the same
("Error in rsync protocol data stream"). Did this error happen only once,
or does it still happen? To my eyes it looks like a corrupted connection
to tsunami, and it may well be only temporary.

Cheers,
Bene

[0] https://rsync.samba.org/issues.html

Mark Phillips | 16 Jan 18:34 2015

The Dreaded rsync error 12 - how to get rid of it?

I have rsnapshot running on orca (Debian), backing up two servers - one Debian
(swordfish) and one Ubuntu 14.04 (tsunami). The Debian machine backs up each
night with no issues. The Ubuntu machine made one daily backup and then
stopped. I get this error in the log:

[10/Jan/2015:03:24:11] /usr/bin/rsnapshot daily: ERROR: /usr/bin/rsync returned 12 while processing root <at> tsunami:/home/
[10/Jan/2015:03:24:11] WARNING: root <at> tsunami:/etc/ skipped due to rollback plan
[10/Jan/2015:03:24:11] WARNING: root <at> tsunami:/opt/ skipped due to rollback plan
[10/Jan/2015:03:24:11] WARNING: root <at> tsunami:/root/ skipped due to rollback plan
[10/Jan/2015:03:24:11] WARNING: root <at> tsunami:/var/www skipped due to rollback plan
[10/Jan/2015:03:24:11] WARNING: root <at> tsunami:/var/log skipped due to rollback plan
[10/Jan/2015:03:24:11] WARNING: root <at> tsunami:/var/altdrive_wbhome skipped due to rollback plan
[10/Jan/2015:03:24:11] WARNING: Rolling back "tsunami/"
[10/Jan/2015:03:24:11] /bin/rm -rf /media/backup/rsnapshot/daily.0/tsunami/
[10/Jan/2015:04:04:56] /bin/cp -al /media/backup/rsnapshot/daily.1/tsunami /media/backup/rsnapshot/daily.0/tsunami


The target drive, /media/backup/rsnapshot, has 774 GB free, and the total size
of the backup for tsunami is ~650 GB. One full backup of tsunami did complete,
so 650 GB is already allocated; an incremental should easily fit in the
remaining 774 GB on the backup drive.

I can ssh as root into tsunami from orca. I can run rsync through ssh on orca and copy files to tsunami.

I am at a loss as to why rsnapshot fails when backing up tsunami but works
every day for swordfish.
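
(One way to narrow it down is to reproduce the failing transfer outside
rsnapshot, with roughly the options rsnapshot builds from the config
below; -n keeps it a dry run so the existing snapshots aren't touched:)

  /usr/bin/rsync -aHz -n -v --numeric-ids --relative --rsh=/usr/bin/ssh \
      root@tsunami:/home/ /media/backup/rsnapshot/daily.0/tsunami/
  echo "rsync exit code: $?"

  # a dry run still builds the full file list over the connection, so if
  # this also dies with 12, the -H / memory angle raised earlier in the
  # thread is worth checking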

My rsnapshot.conf:

#################################################
# rsnapshot.conf - rsnapshot configuration file #
#################################################
#                                               #
# PLEASE BE AWARE OF THE FOLLOWING RULES:       #
#                                               #
# This file requires tabs between elements      #
#                                               #
# Directories require a trailing slash:         #
#   right: /home/                               #
#   wrong: /home                                #
#                                               #
#################################################

#######################
# CONFIG FILE VERSION #
#######################

config_version    1.2

###########################
# SNAPSHOT ROOT DIRECTORY #
###########################

# All snapshots will be stored under this root directory.
#
#snapshot_root    /var/cache/rsnapshot/
snapshot_root    /media/backup/rsnapshot/

# If no_create_root is enabled, rsnapshot will not automatically create the
# snapshot_root directory. This is particularly useful if you are backing
# up to removable media, such as a FireWire or USB drive.
#
#no_create_root    1

#################################
# EXTERNAL PROGRAM DEPENDENCIES #
#################################

# LINUX USERS:   Be sure to uncomment "cmd_cp". This gives you extra features.
# EVERYONE ELSE: Leave "cmd_cp" commented out for compatibility.
#
# See the README file or the man page for more details.
#
cmd_cp        /bin/cp

# uncomment this to use the rm program instead of the built-in perl routine.
#
cmd_rm        /bin/rm

# rsync must be enabled for anything to work. This is the only command that
# must be enabled.
#
cmd_rsync    /usr/bin/rsync

# Uncomment this to enable remote ssh backups over rsync.
#
cmd_ssh    /usr/bin/ssh

# Comment this out to disable syslog support.
#
cmd_logger    /usr/bin/logger

# Uncomment this to specify the path to "du" for disk usage checks.
# If you have an older version of "du", you may also want to check the
# "du_args" parameter below.
#
cmd_du        /usr/bin/du

# Uncomment this to specify the path to rsnapshot-diff.
#
cmd_rsnapshot_diff    /usr/bin/rsnapshot-diff

# Specify the path to a script (and any optional arguments) to run right
# before rsnapshot syncs files
#
#cmd_preexec    /path/to/preexec/script

# Specify the path to a script (and any optional arguments) to run right
# after rsnapshot syncs files
#
#cmd_postexec    /path/to/postexec/script

# Paths to lvcreate, lvremove, mount and umount commands, for use with
# Linux LVMs.
#
#linux_lvm_cmd_lvcreate    /path/to/lvcreate
#linux_lvm_cmd_lvremove    /path/to/lvremove
#linux_lvm_cmd_mount    /bin/mount
#linux_lvm_cmd_umount    /bin/umount

#########################################
#           BACKUP INTERVALS            #
# Must be unique and in ascending order #
# i.e. hourly, daily, weekly, etc.      #
#########################################

#retain        hourly    6
retain        daily    7
retain        weekly    4
retain        monthly    3

############################################
#              GLOBAL OPTIONS              #
# All are optional, with sensible defaults #
############################################

# Verbose level, 1 through 5.
# 1     Quiet           Print fatal errors only
# 2     Default         Print errors and warnings only
# 3     Verbose         Show equivalent shell commands being executed
# 4     Extra Verbose   Show extra verbose information
# 5     Debug mode      Everything
#
verbose        2

# Same as "verbose" above, but controls the amount of data sent to the
# logfile, if one is being used. The default is 3.
#
loglevel    3

# If you enable this, data will be written to the file you specify. The
# amount of data written is controlled by the "loglevel" parameter.
#
logfile    /var/log/rsnapshot.log

# If enabled, rsnapshot will write a lockfile to prevent two instances
# from running simultaneously (and messing up the snapshot_root).
# If you enable this, make sure the lockfile directory is not world
# writable. Otherwise anyone can prevent the program from running.
#
lockfile    /var/run/rsnapshot.pid

# By default, rsnapshot checks the lockfile and, if the PID it contains
# is no longer running, treats the lockfile as stale and starts anyway.
# Enabling this makes rsnapshot stop if the PID in the lockfile is not running.
#
#stop_on_stale_lockfile        0

# Default rsync args. All rsync commands have at least these options set.
#
#rsync_short_args    -a
rsync_short_args    -aHz
#rsync_long_args    --delete --numeric-ids --relative --delete-excluded
rsync_long_args        --numeric-ids --relative

# ssh has no args passed by default, but you can specify some here.
#
#ssh_args    -p 22

# Default arguments for the "du" program (for disk space reporting).
# The GNU version of "du" is preferred. See the man page for more details.
# If your version of "du" doesn't support the -h flag, try -k flag instead.
#
#du_args    -csh

# If this is enabled, rsync won't span filesystem partitions within a
# backup point. This essentially passes the -x option to rsync.
# The default is 0 (off).
#
#one_fs        0

# The include and exclude parameters, if enabled, simply get passed directly
# to rsync. If you have multiple include/exclude patterns, put each one on a
# separate line. Please look up the --include and --exclude options in the
# rsync man page for more details on how to specify file name patterns.
#
#include    ???
#include    ???
#exclude    ???
#exclude    ???

# The include_file and exclude_file parameters, if enabled, simply get
# passed directly to rsync. Please look up the --include-from and
# --exclude-from options in the rsync man page for more details.
#
#include_file    /path/to/include/file
#exclude_file    /path/to/exclude/file

# If your version of rsync supports --link-dest, consider enabling this.
# This is the best way to support special files (FIFOs, etc) cross-platform.
# The default is 0 (off).
#
link_dest    1

# When sync_first is enabled, it changes the default behaviour of rsnapshot.
# Normally, when rsnapshot is called with its lowest interval
# (i.e.: "rsnapshot hourly"), it will sync files AND rotate the lowest
# intervals. With sync_first enabled, "rsnapshot sync" handles the file sync,
# and all interval calls simply rotate files. See the man page for more
# details. The default is 0 (off).
#
#sync_first    0

# If enabled, rsnapshot will move the oldest directory for each interval
# to [interval_name].delete, then it will remove the lockfile and delete
# that directory just before it exits. The default is 0 (off).
#
#use_lazy_deletes    0

# Number of rsync re-tries. If you experience network problems or
# network card issues that tend to cause ssh to crap out with
# "Corrupted MAC on input" errors, for example, set this to a non-zero
# value to have the rsync operation re-tried.
#
#rsync_numtries 0

# LVM parameters. Used to create an LVM snapshot before the backup and
# remove it afterwards. This should ensure data consistency in some
# special cases.
#
# LVM snapshot(s) size (lvcreate --size option).
#
#linux_lvm_snapshotsize    100M

# Name to be used when creating the LVM logical volume snapshot(s).
#
#linux_lvm_snapshotname    rsnapshot

# Path to the LVM Volume Groups.
#
#linux_lvm_vgpath    /dev

# Mount point to use to temporarily mount the snapshot(s).
#
#linux_lvm_mountpath    /path/to/mount/lvm/snapshot/during/backup

###############################
### BACKUP POINTS / SCRIPTS ###
###############################

# SWORDFISH
backup    root <at> swordfish:/home/    swordfish/
backup    root <at> swordfish:/etc/    swordfish/
backup    root <at> swordfish:/opt/    swordfish/
backup    root <at> swordfish:/var/    swordfish/
backup    root <at> swordfish:/usr/    swordfish/
backup    root <at> swordfish:/root/    swordfish/
backup    root <at> swordfish:/lib/    swordfish/

# TSUNAMI
backup    root <at> tsunami:/home/    tsunami/
backup    root <at> tsunami:/etc/    tsunami/
backup    root <at> tsunami:/opt/    tsunami/
backup    root <at> tsunami:/root/    tsunami/
backup    root <at> tsunami:/var/www    tsunami/
backup    root <at> tsunami:/var/log    tsunami/
backup    root <at> tsunami:/var/altdrive_wbhome    tsunami/

# LOCALHOST
#backup    /etc/        localhost/
#backup    /usr/local/    localhost/
#backup    /var/log/rsnapshot        localhost/
#backup    /etc/passwd    localhost/
#backup    /home/foo/My Documents/        localhost/
#backup    /foo/bar/    localhost/    one_fs=1, rsync_short_args=-urltvpog
#backup_script    /usr/local/bin/backup_pgsql.sh    localhost/postgres/
# You must set linux_lvm_* parameters below before using lvm snapshots
#backup    lvm://vg0/xen-home/    lvm-vg0/xen-home/

# EXAMPLE.COM
#backup_script    /bin/date "+ backup of example.com started at %c"    unused1
#backup    root <at> example.com:/home/    example.com/    +rsync_long_args=--bwlimit=16,exclude=core
#backup    root <at> example.com:/etc/    example.com/    exclude=mtab,exclude=core
#backup_script    ssh root <at> example.com "mysqldump -A > /var/db/dump/mysql.sql"    unused2
#backup    root <at> example.com:/var/db/dump/    example.com/
#backup_script    /bin/date    "+ backup of example.com ended at %c"    unused9

# CVS.SOURCEFORGE.NET
#backup_script    /usr/local/bin/backup_rsnapshot_cvsroot.sh    rsnapshot.cvs.sourceforge.net/

# RSYNC.SAMBA.ORG
#backup    rsync://rsync.samba.org/rsyncftp/    rsync.samba.org/rsyncftp/

Thanks,

Mark
Nico Kadel-Garcia | 12 Jan 02:07 2015

Testable rssh mkchroot scripts for rsnapshot targets

Hi, folks.

One of the things I've noticed lately is the difficulty of setting up
rssh to go with rsnapshot use. The old mkchroot.sh script is fairly
fragile, and various web pages and Google hits describe mixed and
commingled setups that do too much and leave steps out. And bad
directions for setting up a chroot cage are noticeably worse than
*no* directions!

So I've set up a repo at https://github.com/nkadel/rssh-chroot-tools
that splits the mkchroot.sh script into more powerful, more legible,
and more reliable tools: one for creating the chroot cage, the other
for creating user credentials. This makes chroot setup, at least on
RHEL 5 and RHEL 6 based systems, a one-step operation.

I'd really welcome any review or testing, especially from folks who
use rssh for rsnapshot targets!

                    Nico Kadel-Garcia <nkadel <at> gmail.com>

Harry Putnam | 11 Jan 22:48 2015

About running rsync on both ends of an rsnapshot script

I've been told a few times that running rsync on both ends of an
rsnapshot pull is most efficient, and that both ways of doing that are
explained in the rsync manual.

I guess it is, but I definitely did not come away from the manual
knowing how to do it.

And it isn't even attempted in the rsnapshot manual.

Can anyone post some simple examples of what an rsnapshot.conf looks
like that engages rsync on the remote in both ways?

1) rsync to a running rsync daemon
and
2) rsync through ssh on the remote.

I'd like to see how that is done inside rsnapshot.conf.

PS - an example on the rsync command line would be a big plus too.
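
(A minimal sketch of both forms, for the config and the command line -
host, module and destination paths are made up, the conf fields need to
be tab-separated, and the daemon form assumes an rsyncd is already set
up and exporting a module on the remote box:)

  # 1) rsync over ssh: enable cmd_ssh and use host:/path sources
  cmd_ssh   /usr/bin/ssh
  backup    root@remotebox:/etc/            remotebox/

  # 2) rsync to a running rsync daemon: use an rsync:// URL as the source
  backup    rsync://remotebox/etcmodule/    remotebox/

  # the same two forms on a plain rsync command line:
  rsync -a root@remotebox:/etc/ /some/dest/            # via ssh
  rsync -a rsync://remotebox/etcmodule/ /some/dest/    # via rsyncd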

Harry Putnam | 7 Jan 03:57 2015

rsnapshot push from linux lchost to Solaris Remote backup server

If I wanted to run rsnapshot on a local Linux host but send the
backups to a Solaris backup server with a ZFS file system, what do I
need to do with snapshot_root?

Just putting a local LAN address and the destination path there fails
immediately:

snapshot_root   2x.local.lan:/rrsnap/gv

It must need some trickier line... but after googling for a while, all
I'm finding is lots of folks who only want to talk about keeping
snapshot_root on localhost.

What's the trick to it?

To try to make it a little clearer.

The host I want to back up is a Gentoo Linux host.

I'm rsnapping lchost:/var lchost:/etc [...]

I was doing the runs on the local Linux host and parking snapshot_root
there.

Then a cron job on the Solaris ZFS server pulled that snapshot_root
over each day, just using rsync, keeping the Solaris directory synced
right up with snapshot_root.

That seems like more work than I need to be doing.

There are a number of reasons why I want to run rsnapshot from my
linux host.

Most are related to how Solaris does everything differently... and I'm
not much of a Solaris adept as yet.

Compiling and such on Solaris is usually a total nightmare (at least
for someone from GNU land), and rather than getting bogged down in
that, I'd rather try doing the runs directly to the Solaris host if
possible.
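
(For reference: snapshot_root has to be a path the machine running
rsnapshot can see locally - the rotation and hard-linking (cp -al /
--link-dest) only work against a locally mounted filesystem, so a
host:path value there will always fail. The two usual arrangements,
sketched with made-up host and dataset names: keep running rsnapshot on
the Solaris box and pull from the Linux host, or export the ZFS
filesystem over NFS and mount it on the Linux host so the path is local
again:)

  # option 1: rsnapshot on the Solaris server pulls from the Gentoo box
  snapshot_root   /rrsnap/gv/
  backup          root@gentoobox:/etc/    gentoobox/
  backup          root@gentoobox:/var/    gentoobox/

  # option 2: rsnapshot on the Linux host, snapshot root on an NFS mount
  #   on the Solaris box:  zfs set sharenfs=on rpool/rrsnap
  #   on the Linux box:    mount solarisbox:/rrsnap /mnt/rrsnap
  snapshot_root   /mnt/rrsnap/gv/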

Scott Hess | 6 Jan 23:36 2015

Re: Inclusively adding excludes.

On Mon, Jan 5, 2015 at 4:00 PM, Scott Hess <scott <at> doubleu.com> wrote:
> I've been reading around in the code, but I haven't found an obvious
> thing I'm missing.  AFAICT you can't just + any old option
> successfully.  Currently I've just duplicated the global excludes into
> each backup, which is obviously not the way to go in the long term.
> AFAICT, if I use a global exclude_file (/etc/rsnapshot.excludes) with
> per-backup exclude= rules, everything works as I'd like it to, so
> that's probably where I'll take things.

Oops - I wasn't reading things correctly: using per-backup exclude= also
drops the global exclude_file.  Fortunately, an exclude_file is easier to
mix with per-backup settings, because I can use
exclude=local,exclude=local,exclude_file=/etc/rsnapshot.exclude
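
(A sketch of that combination - the host, patterns and exclude-file path
below are illustrative only:)

  # global exclude file, used by backup points with no per-backup excludes
  exclude_file   /etc/rsnapshot.exclude

  # a backup point with its own excludes has to re-state the exclude_file,
  # since per-backup exclude= settings replace the global ones
  backup   root@somehost:/home/   somehost/   exclude=.cache/,exclude=tmp/,exclude_file=/etc/rsnapshot.exclude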

-scott
