Fyodorov "Bga" Alexander | 1 Aug 13:21 2015

unnecessary /proc requirement in 3.1.1

Hi. Thanks for a good program.

I'm quite a paranoid guy and don't believe it when a program offers me "use chroot = yes". Instead, I jail the program manually.
I was on 3.0.9 and all was fine: a manual chroot only required the files dir, the config, and a personal tmp. 3.1.1 now also wants the whole of /proc, only to use /proc/self/fd/X instead of just the fd number. A whole /proc is a serious security risk for me. Why?

strace log
lstat64("tt", {st_mode=S_IFDIR|S_ISGID|0755, st_size=4096, ...}) = 0
fstatat64(AT_FDCWD, "tt", {st_mode=S_IFDIR|S_ISGID|0755, st_size=4096, ...}, AT_SYMLINK_NOFOLLOW) = 0
openat(AT_FDCWD, "tt", O_RDONLY|O_NOCTTY|O_NOFOLLOW|O_CLOEXEC|O_PATH) = 2
fstatat64(AT_FDCWD, "/proc/self/fd/2", 0x5bafe7f0, 0) = -1 ENOENT (No such file or directory)
close(2)                                = 0
getpid()                                = 1395
sendto(0, "<28>Aug  1 00:35:51 rsyncd[1395]"..., 117, 0, NULL, 0) = -1 ENOTCONN (Socket not connected)
connect(0, {sa_family=AF_LOCAL, sun_path="/dev/log"}, 12) = -1 ENOENT (No such file or directory)
select(4, [1], [3], [1], {60, 0})       = 1 (out [3], left {59, 999915})
write(3, "V\0\0\10rsync: failed to set permiss"..., 361) = 361
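The failing call can be reproduced outside rsync. A minimal sketch (assuming Linux with bash and coreutils) of the open-then-reference-via-/proc pattern seen in the strace above, which fails with ENOENT when /proc is absent from the chroot:

```shell
# Open a file, then resolve it back through /proc/self/fd/N, the same
# pattern as the openat()+fstatat() pair in the strace above. Inside a
# chroot without /proc, any access to /proc/self/fd/N fails with ENOENT.
tmpfile=$(mktemp)
exec {fd}< "$tmpfile"                  # bash allocates a free fd
target=$(readlink "/proc/self/fd/$fd") # resolves back to the opened path
exec {fd}<&-                           # close the fd again
rm -f "$tmpfile"
echo "$target"
```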



--
Alexander.
--

-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html
samba-bugs | 1 Aug 04:28 2015

[Bug 11423] New: rsync 3.1.x is creating empty backup directories

https://bugzilla.samba.org/show_bug.cgi?id=11423

            Bug ID: 11423
           Summary: rsync 3.1.x is creating empty backup directories
           Product: rsync
           Version: 3.1.1
          Hardware: All
                OS: All
            Status: NEW
          Severity: major
          Priority: P5
         Component: core
          Assignee: wayned <at> samba.org
          Reporter: adsh <at> univ.kiev.ua
        QA Contact: rsync-qa <at> samba.org

Running the command

$ rsync -a --delete --backup --backup-dir=../old source/ dest/

with version 3.1.0 gives the output

Created backup_dir ../old/

even when no files are copied to the backup directory. I usually add the
current time to the backup directory name, which means I end up with many empty
directories. With version 3.0.9 of rsync, a backup directory was only created
if needed.
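Until the regression is fixed, one possible workaround (a sketch; the paths below are examples standing in for a real backup dir) is to prune empty directories after each run:

```shell
# Create a throwaway tree standing in for a backup dir full of empty
# folders, then delete every directory that is empty, deepest first
# (-delete implies depth-first traversal, so nested empties go too).
backup_dir=$(mktemp -d)
mkdir -p "$backup_dir/a/b" "$backup_dir/c"
find "$backup_dir" -mindepth 1 -type d -empty -delete
ls -A "$backup_dir"                   # prints nothing: all were empty
```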

-- 
You are receiving this mail because:
You are the QA Contact for the bug.


samba-bugs | 31 Jul 22:16 2015

[Bug 11422] New: Feature request: add support for Linux libcap[-ng]

https://bugzilla.samba.org/show_bug.cgi?id=11422

            Bug ID: 11422
           Summary: Feature request: add support for Linux libcap[-ng]
           Product: rsync
           Version: 3.1.1
          Hardware: All
                OS: Linux
            Status: NEW
          Severity: normal
          Priority: P5
         Component: core
          Assignee: wayned <at> samba.org
          Reporter: rsync <at> sanitarium.net
        QA Contact: rsync-qa <at> samba.org

Linux has added a concept called file capabilities.  This allows certain
binaries to perform specific privileged functions without requiring SUID root.

Example:
# getcap /bin/ping
/bin/ping = cap_net_raw+ep

Rsync should be able to (optionally of course) copy these attributes as it can
copy xattrs and ACLs.  They should also be storable via --fake-super on
non-Linux systems.
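For background, Linux stores file capabilities in the security.capability extended attribute, which is why xattr-aware copying could in principle carry them along. An illustrative round-trip with the libcap tools (shown as a fragment only: setcap requires root, and the file name is an example):

```shell
# Requires root and the libcap tools (setcap/getcap). The capability
# set lives in the file's security.capability xattr.
cp /bin/ping ./ping.copy
setcap cap_net_raw+ep ./ping.copy   # grant raw-socket capability (root only)
getcap ./ping.copy                  # shows the capability just set
rm -f ./ping.copy
```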


samba-bugs | 27 Jul 10:54 2015

[Bug 11414] New: rsync: chgrp "/.filename.5afK5X" (in dirdir) failed: Operation not permitted (1)

https://bugzilla.samba.org/show_bug.cgi?id=11414

            Bug ID: 11414
           Summary: rsync: chgrp "/.filename.5afK5X" (in dirdir) failed:
                    Operation not permitted (1)
           Product: rsync
           Version: 3.1.1
          Hardware: x64
                OS: Linux
            Status: NEW
          Severity: normal
          Priority: P5
         Component: core
          Assignee: wayned <at> samba.org
          Reporter: nhannguyen <at> 0937686468.com
        QA Contact: rsync-qa <at> samba.org

Note: with the configuration file below, version 3.0.6 works fine.
-----------------------------------------------------------------------------

-My rsync's config (rsyncd.conf):
==========================================
lock file = /var/run/rsync.lock
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
ignore errors= true
uid = sfftp
gid = sfftp
reverse lookup = no
[9tlocal]
path=/sfarm/9tlocal/
uid = www
gid = www
auth users = 9tlocal
secrets file = /etc/rsyncd.secrets
read only = no
log file = /var/log/rsyncd9tlocal.log
============================================

However, I have a problem with the rsync daemon in version 3.1.1:

When I execute this command:
rsync -avz --progress --delete /data/client/ 9tlocal <at> 10.76.0.195::9tlocal

it syncs the data to /sfarm/9tlocal/, but shows these errors in the log:
rsync -avz --progress --delete /data/client/ 9tlocal <at> 10.76.0.195::9tlocal
Password: 
sending incremental file list
rsync: chgrp "/." (in 9tlocal) failed: Operation not permitted (1)
rsync: failed to open "/syn_pre_update.log" (in 9tlocal), continuing:
Permission denied (13)
rsync: failed to open "/test" (in 9tlocal), continuing: Permission denied (13)
./
syn_pre_update.log
          2,017 100%    0.00kB/s    0:00:00 (xfr#1, to-chk=1/3)
test
  1,073,741,824 100%   63.80MB/s    0:00:16 (xfr#2, to-chk=0/3)
rsync: chgrp "/.syn_pre_update.log.4kTDrH" (in 9tlocal) failed: Operation not
permitted (1)
rsync: chgrp "/.test.yVQDby" (in 9tlocal) failed: Operation not permitted (1)

sent 1,044,817 bytes  received 486 bytes  27,874.75 bytes/sec
total size is 1,073,743,841  speedup is 1,027.21
rsync error: some files/attrs were not transferred (see previous errors) (code
23) at main.c(1165) [sender=3.1.1]

ls -al /data/client/
total 1048592
drwxr-xr-x 2 root root       4096 Th07 27 14:07 .
drwxr-xr-x 3 root root       4096 Th07 27 11:17 ..
-rw-r--r-- 1 root root       2017 Th07 27 14:07 syn_pre_update.log
-rw-r--r-- 1 root root 1073741824 Th07 24 14:34 test

ls -al /sfarm/9tlocal/
total 1048592
drwxr-xr-x 2 www www        4096 Th07 27 14:07 .
drwxr-xr-x 5 root    root       4096 Th07 27 11:25 ..
-rw------- 1 www www        2017 Th07 27 14:58 syn_pre_update.log
-rw------- 1 www www  1073741824 Th07 27 14:58 test

===========================================================
Finally, I re-ran the command with -a replaced by -r, and it works fine:

rsync -rvz --progress --delete /data/client/ 9tlocal <at> 10.76.0.195::9tlocal
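A likely explanation (my reading, not stated in the report): -a expands to -rlptgoD, and the -o/-g parts make the receiving daemon try to chown/chgrp, which its unprivileged www uid cannot do; plain -r skips that. A middle ground keeps permissions and times while dropping ownership:

```shell
# Hedged suggestion: -a minus owner/group preservation.
# -rlptD is -a (-rlptgoD) without -o and -g; the same effect can be had
# with: rsync -avz --no-o --no-g ...
rsync -rlptDvz --progress --delete /data/client/ 9tlocal@10.76.0.195::9tlocal
```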


Simon Wong (Staff) | 21 Jul 10:48 2015

Rsync differences using NFS & SMB

Hi,

I'm having difficulty understanding the performance differences
between NFS and SMB. I have used rsync (OS X) over SMB (mounted network
storage) and rsync (OS X) over SSH (NFS-mounted storage).

From my tests, rsync over SMB builds a file list each time, comparing
modified source/destination, whereas rsync over ssh/nfs is much
quicker, pretty much instant.

During the test:
- when running rsync to the mounted storage on Linux, it identified
new/deleted files immediately!
- when running rsync over SMB, it builds a complete file list and lists
each directory before any copy takes place.

From the user's perspective, building the file list (let's say the user
has hundreds or thousands of folders) takes some time to run through each
directory.  Once the initial rsync copy completes, a second rsync
finishes within seconds.  As expected, right?

The problem occurs when I unmount and remount the SMB share.  If I run
rsync again, it builds the whole file list and runs through every
folder, even if there is nothing to copy!

This is not the same behaviour using rsync via ssh over nfs: it doesn't
appear to show "building file list" and immediately tells the user whether
files are copying, or the completion time.  I have tried several
troubleshooting steps, even unmounting the NFS share and clearing the
cached memory in Linux, with no success.

The commands I have played around with are:

Over smb
rsync -uvaz --delete /source /destination
rsync -aHEXAx -v --delete --progress --stats --timeout=999 /source
/destination  (tried without --progress and --stats, same behaviour)

Over ssh/nfs
rsync -nuvaz --delete /source/ root <at> nfsServer.domain.co.uk:
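A plausible explanation for the difference (an assumption on my part, not confirmed in this thread): with an SMB mount, both sides of the comparison are "local" to the client, so rsync must stat every file across the network on each run; over ssh, a second rsync runs on the server and stats its files locally, exchanging only the file list. Schematically (paths and host are examples):

```shell
# Local-to-mounted-share: every stat() crosses the network (slow scan)
rsync -av --delete /source/ /Volumes/smb_share/destination/

# Local-to-remote over ssh: the remote rsync scans locally (fast scan)
rsync -av --delete /source/ user@server:/destination/
```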

Thanks,

Si

The University of Dundee is a registered Scottish Charity, No: SC015096

Cal Sawyer | 20 Jul 09:47 2015

Re: rsync Digest, Vol 151, Issue 15

This sounds like a job for Relax and Recover:

http://relax-and-recover.org/

Cal Sawyer | Systems Engineer | BlueBolt

On 19/07/15 13:00, rsync-request <at> lists.samba.org wrote:
> Send rsync mailing list submissions to
> 	rsync <at> lists.samba.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> 	https://lists.samba.org/mailman/listinfo/rsync
> or, via email, send a message with subject or body 'help' to
> 	rsync-request <at> lists.samba.org
>
> You can reach the person managing the list at
> 	rsync-owner <at> lists.samba.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of rsync digest..."
>
>
> To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
> Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html
>
> ---------------------------------------
>
> Today's Topics:
>
>     1. Re: clone a disk (Simon Hobson)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Sun, 19 Jul 2015 10:28:11 +0100
> From: Simon Hobson <linux <at> thehobsons.co.uk>
> To: "rsync <at> lists.samba.org" <rsync <at> lists.samba.org>
> Subject: Re: clone a disk
> Message-ID: <8FEEBF1C-31D8-41D2-AABF-FD71170B3ECF <at> thehobsons.co.uk>
> Content-Type: text/plain; charset=us-ascii
>
> Thierry Granier <th.granier <at> free.fr> wrote:
>
>> the "backup" is created on the source machine
>> i don't see how to get this backup on the destination machine and how to boot on this machine (for this backup)
> By specifying "user <at> address:path" you are telling rsync to copy the files to a remote machine - that's how the backup gets to the other machine.
> To make it bootable, you'll need to arrange that the root of the remote path is the root of its own filesystem, then you can do some stuff with chroot and install grub on the appropriate disk.
>
> I don't normally bother trying to keep backups bootable. I'll just prepare a system to restore to, create the filesystems, create the mountpoints and mount all the filesystems, and then rsync all the files back. This can be done while booted from a "live-CD" environment - or for virtual machines, by mounting the filesystems on the host (but be careful not to restore your backup to the wrong place and wipe the host filesystem, it's "inconvenient"!)
>
> NB - please keep replies to the list.
>
>
>
> Kevin Korb <kmk <at> sanitarium.net> wrote:
>
>> I would add --numeric-ids and --itemize-changes.
> Rats, yes you *must* specify numeric-ids or the backup is usually "a bit broken", as ownership information will get mangled.
>
>> Also, I prefer to do backups by filesystem so I would add
>> - --one-file-system and run one rsync per filesystem.  This means you
>> don't have to exclude things like /proc and /dev and any random thing
>> that isn't normally connected but sometimes is but it also means you
>> have to list all the filesystems that you do want to backup.
> Yeah, that's a bit "6 of one, half a dozen of the other".
> I prefer to have a backup that is a complete image of the source directory tree - rather than several backups, one per filesystem. You can do the former while doing "one sync per filesystem", but you have to be a bit clever with your excludes to avoid the sync of the root deleting all the other filesystems before the next step puts them back again.
> And I sometimes re-arrange my volumes during a restore - and then it's easier to have one backup tree rather than one per filesystem.
>
>
>
>
> ------------------------------
>
> Subject: Digest Footer
>
> _______________________________________
> Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html
>
> rsync mailing list
> rsync <at> lists.samba.org
> https://lists.samba.org/mailman/listinfo/rsync
>
>
> ------------------------------
>
> End of rsync Digest, Vol 151, Issue 15
> **************************************



Thierry Granier | 17 Jul 19:40 2015

clone a disk

Hello,
I have a machine A with 2 disks, 1 and 2, running Debian Jessie:
on disk 1 are the system, the boot files and the swap;
on disk 2 are various partitions like /home, /opt, etc.

I have a machine B with 1 disk, running Kali Linux, with 100G free.

Can I clone disk 1 of machine A onto the 100G free on machine B with rsync?

If it is possible, how do I do that?
Many thanks
TG
Ken Chase | 17 Jul 17:53 2015

Re: [Bug 3099] Please parallelize filesystem scan

Sounds to me like maintaining the metadata cache is important - and tuning the
filesystem to do so would be more beneficial than caching writes, especially
with a backup target, where a write already written will likely never be read
again (and it isn't a big deal if it is, since so few files are changed compared
to the total # of inodes to scan).

Your report of the minutes taken by the re-sync shows the unthrashed cache is highly
valuable. So all we need to do is tune the backup target (and even the operational
servers themselves) to keep more metadata cached. I don't know how much RAM is used
per inode, but I'd throw in another 4-8GB just for metadata caching per box, or
even more, if it meant scanning was sped up.

(Really, one only needs it on the backup target - if you can run all
the backups in parallel, and there are N servers to back up, they can all run at 1/N
speed, as long as scanning metadata on the backup target is fast enough to keep
up with it all -- my total data written is only 20-30GB, for example, which at a reasonable
speed (even 20-30MB/s, which is slow) is only 15 minutes of total writing. Even 200-300GB
changed would be 150 minutes at that rate, and the rate could easily be 4x faster.)

So, tuning caches to prefer metadata seems to be key. How?
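One concrete Linux knob for this (an assumption about the setup; the value below is illustrative, not a recommendation) is vm.vfs_cache_pressure, which, when lowered below its default of 100, biases reclaim away from the inode/dentry caches:

```shell
# Read the current bias; lower values make the kernel hang on to
# metadata (dentry/inode) caches longer at the expense of page cache.
pressure=$(cat /proc/sys/vm/vfs_cache_pressure)
echo "vfs_cache_pressure=$pressure"
# To lower it (requires root; 50 is an illustrative value):
#   sysctl -w vm.vfs_cache_pressure=50
```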

As we've discussed before, letting the filesystem manage it on its own throws away precious
metadata cache, so tracking your own changes (since the backup system will never
be used for anything else, right? :) would be beneficial. Of course, the danger
is using the backup system for anything else and changing any of the target info -
inconsistencies would crop up and make the backup worthless very quickly.

/kc

On Fri, Jul 17, 2015 at 03:18:02PM +0000, Schweiss, Chip said:
  >Modern file systems have many internal queues, and service many clients simultaneously.  They arrange their work to maximize throughput in both read and write operations.  This is the norm on any enterprise file system, be it Hitachi, Oracle, Dell, HP, Isilon, etc.  You will get significantly higher throughput if you hit it with multiple threads.  These systems have elaborate predictive read-ahead caches and perform best when multiple threads hit them.
  >
  >Using the test case of a single server with a simple file system such as ext3/4 or xfs, no gains will be seen in multithreading rsync.  Use an enterprise file system with 100's of TBs, and the more threads you use the faster you will go.  Metadata and data on these systems end up across 100's of disks.  Single threads end up severely bound by latency.  This is why multi-threading should be optional.  It doesn't help everyone.
  >
  >For example, one of my rsync jobs, moving from a ZFS system in St. Louis, Missouri to a Hitachi HNAS in Minneapolis, Minnesota, has over 100 million files.  Each day 50 to 100 thousand files get added or updated.  A single rsync job would take weeks to parse this job and send the changes.  I split it into 120 jobs and it typically completes in 2 hours when no humans are using the systems.  A re-sync immediately afterwards, again with 120 jobs, scans both ends in minutes.
  >
  >-Chip
  >
  >-----Original Message-----
  >From: rsync [mailto:rsync-bounces <at> lists.samba.org] On Behalf Of Ken Chase
  >Sent: Friday, July 17, 2015 9:51 AM
  >To: samba-bugs <at> samba.org
  >Cc: rsync-qa <at> samba.org
  >Subject: Re: [Bug 3099] Please parallelize filesystem scan
  >
  >I dont understand - scanning metadata is sped up by thrashing the head
  >all over the disk instead of mostly-sequentially scanning through?
  >
  >How does that work out?
  >
  >/kc
  >
  >
  >On Fri, Jul 17, 2015 at 02:37:21PM +0000, samba-bugs <at> samba.org said:
  >  >https://bugzilla.samba.org/show_bug.cgi?id=3099
  >  >
  >  >--- Comment #8 from Chip Schweiss <chip <at> innovates.com> ---
  >  >I would argue that optionally all directory scanning should be made parallel.
  >  >Modern file systems perform best when request queues are kept full.  The
  >  >current mode of rsync scanning directories does nothing to take advantage of
  >  >this.
  >  >
  >  >I currently use scripts to split a couple dozen or so rsync jobs in to
  >  >literally 100's of jobs.   This reduces execution time from what would be days
  >  >to a couple hours every night.   There are lots of scripts like this appearing
  >  >on the net because the current state of rsync is inadequate.
  >  >
  >  >This ticket could reasonably be combined with 5124.
  >  >
  >  >--
  >  >You are receiving this mail because:
  >  >You are the QA Contact for the bug.
  >  >
  >  >--
  >  >Please use reply-all for most replies to avoid omitting the mailing list.
  >  >To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
  >  >Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html
  >
  >--
  >Ken Chase - ken <at> heavycomputing.ca skype:kenchase23 Toronto Canada
  >Heavy Computing - Clued bandwidth, colocation and managed linux VPS  <at> 151 Front St. W.
  >
  >--
  >Please use reply-all for most replies to avoid omitting the mailing list.
  >To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
  >Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html
  >

-- 
Ken Chase - ken <at> heavycomputing.ca skype:kenchase23 +1 416 897 6284 Toronto Canada
Heavy Computing - Clued bandwidth, colocation and managed linux VPS  <at> 151 Front St. W.


samba-bugs | 17 Jul 16:37 2015

[Bug 3099] Please parallelize filesystem scan

https://bugzilla.samba.org/show_bug.cgi?id=3099

--- Comment #8 from Chip Schweiss <chip <at> innovates.com> ---
I would argue that optionally all directory scanning should be made parallel.  
Modern file systems perform best when request queues are kept full.  The
current mode of rsync scanning directories does nothing to take advantage of
this.   

I currently use scripts to split a couple dozen or so rsync jobs into
literally 100's of jobs.  This reduces execution time from what would be days
to a couple of hours every night.  There are lots of scripts like this appearing
on the net because the current state of rsync is inadequate.
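The splitting described above can be approximated with a little shell; a hypothetical sketch (paths, host and job count are invented) that runs one rsync per top-level directory, several at a time:

```shell
# Hypothetical: parallelize by top-level directory, 8 jobs at a time.
# Assumes directory names without whitespace/newlines; the destination
# host and paths are examples.
ls /data/src | xargs -P 8 -I{} \
    rsync -a --delete "/data/src/{}/" "backuphost:/data/dst/{}/"
```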

This ticket could reasonably be combined with 5124.


samba-bugs | 17 Jul 11:01 2015

[Bug 3099] Please parallelize filesystem scan

https://bugzilla.samba.org/show_bug.cgi?id=3099

--- Comment #7 from Rainer <rainer <at> voigt-home.net> ---
Hi,

I'm experiencing the very same problem: I'm trying to sync a set of VMware disk
files (about 2.5TB) with not too many changes, and direct copying is still
faster than checksumming by quite a large margin, because the sequential
checksumming on source and target just doubles the time needed.

I think the point is that the GigE link between the PC and the NAS achieves
about 80MB/s, and the HDD read rate is not much higher (approx. 130MB/s). 

When doing the checksumming on source and target in parallel, we could ideally
(if nothing changed) reach the read rate of the HDDs as 'transfer' bandwidth,
because this is the speed at which we can verify that the data is the same on
source and target. The sequential approach as it is now reduces the initial
check to half the HDD read rate, so transferring unchanged files will only yield
about 65MB/s in my case, which is slower than simple copying.

Is the patch you proposed some years ago something I can apply to and try on a
current rsync version? If not, could you update it to the 3.1.x version so I
can benchmark the parallel checksumming in my situation?

Best Regards
Rainer


Pierre Willaime | 15 Jul 10:44 2015

Rsync creates empty directories with backup-dir option

With rsync 3.1.1 (on Debian), using the "backup-dir" option creates a tree of empty folders inside the backup repository. This doesn't affect the transfer (all files and folders are copied anyway).

Example:
----   
rsync -rtvhPx --delete --stats --exclude-from=/home/pierre/scripts/ExclusionRSync --backup --backup-dir=/media/pierre/g2/sauvegardes/fichiers_supprimes_pierre /home/pierre/ /media/pierre/g2/sauvegardes/sauvegarde_pierre
----
The previous command (launched for the first time) recreates the whole structure of folders inside the "/media/pierre/g2/sauvegardes/fichiers_supprimes_pierre" backup-dir. These folders are empty, and the files are present in "/media/pierre/g2/sauvegardes/sauvegarde_pierre" (the destination).

It seems to be a bug, because the backup directory should contain something if, and only if, I delete files or folders. But it is impossible to delete anything during the first backup; nonetheless, the backup-dir is full of empty folders after the first run of rsync.

There are two unanswered messages on Unix.SE describing this bug ([1] and [2]). A quick search of this mailing list's archives [3] doesn't turn up emails related to this question.


