Thierry Lavallee | 4 Feb 18:13 2016

Moving an rsnapshot repository to a new disk

We need a bigger repository, disk getting full.

I have reread a thread from last year and see that there is no clear way of moving an existing repository to a new disk that ensures everything is exactly the same as the original, considering hard links etc.

Is there any definitive recommendation?
rsnapshot_copy /source /destination ?

Thanks!
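For the record, the answers usually suggested are either a block-level copy of the whole filesystem, or a file-level copy that explicitly preserves hard links: GNU "cp -a" or "rsync -aH" (the latter can need a lot of RAM on a large, link-heavy tree). A tiny self-contained demo of the hard-link-preserving property; all paths and names here are illustrative, not rsnapshot's own:

```shell
# Demo (in temp dirs) that a hard-link-aware copy keeps rsnapshot's
# link structure intact.
set -e
src=$(mktemp -d)
dst=$(mktemp -d)
mkdir "$src/daily.0" "$src/daily.1"
echo data > "$src/daily.0/file"
ln "$src/daily.0/file" "$src/daily.1/file"   # hard link, as rsnapshot creates
cp -a "$src/." "$dst/"                       # or: rsync -aH "$src/" "$dst/"
# both destination copies must still share one inode
stat -c %i "$dst/daily.0/file" "$dst/daily.1/file"
```

If the inode numbers printed at the end differ, the copy has "flattened" the links and the destination will balloon in size.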


------------------------------------------------------------------------------
Site24x7 APM Insight: Get Deep Visibility into Application Performance
APM + Mobile APM + RUM: Monitor 3 App instances at just $35/Month
Monitor end-to-end web transactions and take corrective actions now
Troubleshoot faster and improve end-user experience. Signup Now!
http://pubads.g.doubleclick.net/gampad/clk?id=272487151&iu=/4140
_______________________________________________
rsnapshot-discuss mailing list
rsnapshot-discuss <at> lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/rsnapshot-discuss
Christopher Barry | 3 Feb 07:44 2016

Fw: moving rsnapshot tree to larger disk


forgot to include list...

Begin forwarded message:

Date: Wed, 3 Feb 2016 01:42:29 -0500
From: Christopher Barry <christopher.r.barry <at> gmail.com>
To: Ken Woods <kenwoods <at> gmail.com>
Subject: Re: [rsnapshot-discuss] moving rsnapshot tree to larger disk

On Tue, 2 Feb 2016 21:16:01 -0900
Ken Woods <kenwoods <at> gmail.com> wrote:

> I never will understand why people don't use zfs.

Well, for a *very* long time it was pretty iffy; that's why. Beta
filesystems are not what I tend to go for. It's probably a lot better
now, but performance-wise I'm still pretty sure it's not all that and a
bag of chips...

Yep, and this seems to back that up:
http://www.phoronix.com/scan.php?page=article&item=zfs_linux_062&num=1

Which is why I use only ext[24] now. Back in the day it was XFS, and
ReiserFS for some disks, but not anymore. Btrfs and ZFS are
indeed interesting, but I was never able to trust my data to them...

-- 
Regards,
Christopher


Christopher Barry | 2 Feb 19:50 2016

moving rsnapshot tree to larger disk


Greetings,

I have two 1TB drives in a software mirror, with a mostly full 500GB LV
for backups. One of those disks failed, so I'm replacing both 1TB
disks with two new 2TB disks in a new mirrored configuration.

What's the best method of moving the 500GB LVM logical volume to the new
2TB mirror set?

Thanks
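For the archive: assuming the backups LV lives in an LVM volume group on top of the old md mirror, one common approach is to add the new mirror as a physical volume and migrate the extents online with pvmove. This is a sketch only, not tested here; the device, VG and LV names are placeholders, and it needs root and your actual device names:

```shell
pvcreate /dev/md1                   # the new 2TB mirror (placeholder name)
vgextend backupvg /dev/md1          # add it to the existing volume group
pvmove /dev/md0 /dev/md1            # migrate all extents off the old mirror, online
vgreduce backupvg /dev/md0          # drop the old mirror from the VG
pvremove /dev/md0
lvextend -l +100%FREE /dev/backupvg/backups   # optionally grow the LV...
resize2fs /dev/backupvg/backups               # ...and its ext4 filesystem
```

Because pvmove works at the block level, the rsnapshot hard-link structure on the filesystem is untouched.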

-- 
Regards,
Christopher

Balogh László | 31 Jan 21:33 2016

Fwd: Re: rsnapshot high load



Yes, it exists. And you're right, that could be a good tool to check, but unfortunately the logging option was disabled, so "sar -u" gives nothing. I have enabled it now, so we can check tomorrow after 8-9am what ends up in the sar log. Thanks for the tip!

On 31 Jan 2016 20:27, Ken Woods wrote:
> Does "sar" exist in Ubuntu?
>
>> On Jan 31, 2016, at 02:59, David Keegel <djk <at> cyber.com.au> wrote:
>>
>> László,
>>
>> There are a number of things which could make your system slow; you
>> need to work out why it is slow.
>>
>> I'd check whether your system is running low on available memory (RAM)
>> and therefore paging/swapping around 08h. Running rsync -aH will use
>> a lot of RAM for processing a large directory (with lots of files).
>>
>> If it happens while you are around, you could run top to get a better
>> idea whether you are running short of memory (mem free small, mem buffers
>> small, swap used large, cpu %wait high), have enough processes using a
>> lot of CPU to consume all available CPU resources (cpu %idle=0), or
>> something else is making your I/O slow (again cpu %wait high and
>> processes in "D" state, but without the other shortage-of-memory
>> indicators).
>>
>> It's curious that you have a slowness problem at 08h but not 16h.
>> I'm wondering if some overnight cron jobs are running for a lot
>> longer than you expect, or many users are logging in around 08h.
>> Looking at the top processes listing in top should give you an
>> idea about those possibilities.
>>
>> Ideally, start top before 08:00 to see a difference between before and
>> after rsnapshot starts, and look for processes running before 08:00.
>>
>> If you don't have time to investigate properly, and just want to
>> try a short cut, you could try removing -H from rsync_short_args
>> (or --hard-links from rsync_long_args) and see if the slowness stops.
>>
>>> On Sun, Jan 31, 2016 at 11:45:49AM +0100, Balogh László wrote:
>>> [snip: original message, quoted in full below as "rsnapshot high load"]
>>
>> --
>> David Keegel <djk <at> cyber.com.au>         Cyber IT Solutions Pty. Ltd.
>> http://www.cyber.com.au/~djk/      Linux & Unix Systems Administration





Balogh László | 31 Jan 11:45 2016

rsnapshot high load

Hi All,

I'm new to this list. I have been using rsnapshot to back up my Linux system for about two months. It's working very well; I have only one problem.

I've set up an incremental backup: I keep three hourly snapshots (00, 08 and 16h), 7 daily, 4 weekly and 6 monthly:
retain          hourly  3
retain          daily   7
retain          weekly  4
retain          monthly 6

Interestingly, I have problems only with the 08h hourly backup. When rsnapshot runs at that time, the system load gets very high, above 15 and sometimes around 30. At the other times the load is also a little high, but the system stays reachable. When it backs up at 08h, though, after a few minutes the load gets incredibly high, so the system is not reachable, or it takes minutes to type a character.
There is no other job at 08h which could affect the rsnapshot job.

I googled for a solution and tried what I found:

1. In the /etc/default/rsync file I configured:
     RSYNC_NICE='10'
     RSYNC_IONICE='-c3'

2. I'm running it from cron with ionice -c 3:
    5       0,8,16  *       *       *       ionice -c 3 /usr/bin/rsnapshot hourly

Nothing helps. :(

I have also checked my hard drives, which seem to be OK:
root <at> mcllserver:/mnt/sdc1# smartctl -a /dev/sda | grep 0x00
SMART capabilities:            (0x0003) Saves SMART data before entering       
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0       
  3 Spin_Up_Time            0x0027   174   173   021    Pre-fail  Always       -       4258    
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       13      
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0       
  7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -       0       
  9 Power_On_Hours          0x0032   098   098   000    Old_age   Always       -       1867    
 10 Spin_Retry_Count        0x0032   100   253   000    Old_age   Always       -       0       
 11 Calibration_Retry_Count 0x0032   100   253   000    Old_age   Always       -       0       
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       13      
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       7       
193 Load_Cycle_Count        0x0032   196   196   000    Old_age   Always       -       14075   
194 Temperature_Celsius     0x0022   115   111   000    Old_age   Always       -       32      
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0       
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0       
198 Offline_Uncorrectable   0x0030   100   253   000    Old_age   Offline      -       0       
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0       
200 Multi_Zone_Error_Rate   0x0008   100   253   000    Old_age   Offline      -       0       

root <at> mcllserver:/mnt/sdc1# smartctl -a /dev/sdb | grep 0x00
SMART capabilities:            (0x0003) Saves SMART data before entering
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
  3 Spin_Up_Time            0x0027   186   180   021    Pre-fail  Always       -       5683
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       35
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   084   084   000    Old_age   Always       -       12113
 10 Spin_Retry_Count        0x0032   100   253   000    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   100   253   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       35
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       32
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       2
194 Temperature_Celsius     0x0022   121   111   000    Old_age   Always       -       29
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   100   253   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0008   100   253   000    Old_age   Offline      -       0

root <at> mcllserver:/mnt/sdc1# smartctl -a /dev/sdc | grep 0x00 <--------------- BACKUP DRIVE!
SMART capabilities:            (0x0003) Saves SMART data before entering
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       386
  3 Spin_Up_Time            0x0027   168   168   021    Pre-fail  Always       -       4566
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       48
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   074   074   000    Old_age   Always       -       19282
 10 Spin_Retry_Count        0x0032   100   253   000    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   100   253   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       48
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       37
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       10
194 Temperature_Celsius     0x0022   113   108   000    Old_age   Always       -       34
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   100   253   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0008   100   253   000    Old_age   Offline      -       0

root <at> mcllserver:/mnt/sdc1# smartctl -a /dev/sdd | grep 0x00
SMART capabilities:            (0x0003) Saves SMART data before entering
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       36
  3 Spin_Up_Time            0x0027   169   167   021    Pre-fail  Always       -       6541
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       115
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   061   061   000    Old_age   Always       -       29141
 10 Spin_Retry_Count        0x0032   100   100   000    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   100   100   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       113
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       104
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       10
194 Temperature_Celsius     0x0022   115   106   000    Old_age   Always       -       35
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   100   253   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0008   100   253   000    Old_age   Offline      -       0

I tried manually rsyncing some large folders (same source, same destination drive/folder), but the load seems to be OK; it goes up to about 2, but not higher.

I'm running Ubuntu Server 14.04.3 LTS 24/7.

The backup destination is a separate local hard drive (mount output for the backup destination drive):
    /dev/sdc1 on /mnt/sdc1 type ext4 (rw,noatime,commit=120,errors=remount-ro)

Does anyone have any idea what I should try to get rid of that high load?

Thanks
Regards
Laszlo
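One thing worth knowing here: if I remember the Debian/Ubuntu packaging correctly, /etc/default/rsync is read only by the init script that starts the rsync *daemon*, so RSYNC_NICE and RSYNC_IONICE would not affect the rsync processes that rsnapshot spawns. A wrapper script pointed at by cmd_rsync in rsnapshot.conf is one way to lower their priority. A sketch; the wrapper path and name are illustrative:

```shell
# Build a low-priority rsync wrapper (written to a temp file here; in
# practice you would install it as e.g. /usr/local/bin/rsync-lowprio and
# set "cmd_rsync<TAB>/usr/local/bin/rsync-lowprio" in rsnapshot.conf).
wrapper=$(mktemp)
cat > "$wrapper" <<'EOF'
#!/bin/sh
# run rsync with minimum CPU priority and the idle I/O class
exec nice -n 19 ionice -c 3 /usr/bin/rsync "$@"
EOF
chmod +x "$wrapper"
cat "$wrapper"
```

Note that, as far as I know, ionice's idle class (-c 3) only takes effect under the CFQ I/O scheduler; with deadline or noop it is silently ignored, which could explain why the earlier ionice attempts changed nothing.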




Katvanger | 28 Jan 20:08 2016

from rsnapreport.pl FILE XFER TIME is always zero

I looked into this and some other issues with rsnapreport.pl about 2 years ago.
My notes say that rsync 3.1.0 (protocol v31) does output a transfer time, but
the value is always 0.000 seconds (even when doing a multi-gigabyte upload),
which rsnapreport.pl then reports.
There were a few other issues which led me to believe that rsnapreport.pl
was based on an older rsync version.

regards, KJ

In reply to the email "rsnapshot-discuss Digest, Vol 115, Issue 3" from
rsnapshot-discuss-request <at> lists.sourceforge.net on Thursday, 28 January

> Message: 4
> Date: Tue, 26 Jan 2016 14:08:19 +0100
> From: Roy Krikke <roykrikke <at> gmail.com>
> Subject: [rsnapshot-discuss] from rsnapreport.pl FILE XFER TIME is
> 	always	zero
> To: rsnapshot-discuss <at> lists.sourceforge.net
> Message-ID: <56A76FC3.2000000 <at> gmail.com>
> Content-Type: text/plain; charset=utf-8; format=flowed
> 
> Hello,
> 
> can someone explain why FILE XFER TIME is always zero (from the 
> rsnapreport.pl script)?
> 
> Output: rsnapreport.pl
> 
> SOURCE                          TOTAL FILES   FILES TRANS      TOTAL MB     MB TRANS   LIST GEN TIME  FILE XFER TIME
> --------------------------------------------------------------------------------------------------------------------
> host.lan:/home/oneandtwo/Maildi	       1647          1561        802.80       802.80   0.001 seconds   0.000 seconds
> host.lan:/home/one/Maildir    	      20370         19992       5648.05      5648.05   0.001 seconds   0.000 seconds
> hosthost.lan:/zstore/oneandtwo	     181347        178101     194470.37    194470.37   0.001 seconds   0.000 seconds
> hosthost.lan:/zstore/one     	    1124976       1020679    2530810.12   2530810.12   0.001 seconds   0.000 seconds
> hosthost.lan:/zstore/oneandtwo	      42147         41279     594763.27    594763.27   0.001 seconds   0.000 seconds
> 
> I have used the original rsnapreport.pl and the one from r0max  (https://github.com/rsnapshot/rsnapshot/pull/110/files)
> 
> Thanks,
> Roy

Roy Krikke | 26 Jan 14:08 2016

from rsnapreport.pl FILE XFER TIME is always zero

Hello,

Can someone explain why FILE XFER TIME is always zero (from the
rsnapreport.pl script)?

Output: rsnapreport.pl

SOURCE                          TOTAL FILES   FILES TRANS      TOTAL MB     MB TRANS   LIST GEN TIME  FILE XFER TIME
--------------------------------------------------------------------------------------------------------------------
host.lan:/home/oneandtwo/Maildi	       1647          1561        802.80       802.80   0.001 seconds   0.000 seconds
host.lan:/home/one/Maildir    	      20370         19992       5648.05      5648.05   0.001 seconds   0.000 seconds
hosthost.lan:/zstore/oneandtwo	     181347        178101     194470.37    194470.37   0.001 seconds   0.000 seconds
hosthost.lan:/zstore/one     	    1124976       1020679    2530810.12   2530810.12   0.001 seconds   0.000 seconds
hosthost.lan:/zstore/oneandtwo	      42147         41279     594763.27    594763.27   0.001 seconds   0.000 seconds

I have used the original rsnapreport.pl and the one from r0max  (https://github.com/rsnapshot/rsnapshot/pull/110/files)

Thanks,
Roy

Terry Barnum | 26 Jan 02:39 2016

archive backups to tape

This might be a stupid question: I'd like to archive my rsnapshot backups to LTO, but since I don't think tape
(LTFS) supports hard links, I'm wondering if there's a good way to do this.

I ran "rsnapshot du" on one of my smaller snapshot roots; daily, weekly and monthly total ~335GB. I'm
currently running "rsnapshot du" on two more snapshot roots on another machine that I think will each be in
the 1TB range.

Am I correct in thinking that on the way out to tape the hard-link structure is "flattened" and I'll end up
with massive data inflation?
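For what it's worth, GNU tar detects hard links and stores the later links as link entries rather than re-storing the data, so writing a whole snapshot root to tape in a single tar invocation should avoid the inflation (whereas copying the trees onto LTFS file-by-file would flatten them). A small demo in temp dirs, with illustrative names:

```shell
# GNU tar stores the second link as a link entry, so extracting the
# archive reproduces the shared inode (no data duplication in the archive).
set -e
d=$(mktemp -d)
mkdir "$d/daily.0" "$d/daily.1"
echo data > "$d/daily.0/f"
ln "$d/daily.0/f" "$d/daily.1/f"          # rsnapshot-style hard link
tar -C "$d" -cf "$d/snap.tar" daily.0 daily.1
x=$(mktemp -d)
tar -C "$x" -xf "$d/snap.tar"
stat -c %i "$x/daily.0/f" "$x/daily.1/f"  # same inode printed twice
```

The catch is restore granularity: you would need to restore from the tar stream rather than browse individual files on the LTFS volume.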

Thanks,
-Terry

Terry Barnum
digital OutPost
Carlsbad, CA

http://www.dop.com
800/464-6434

Roy Krikke | 19 Jan 15:09 2016

rsnapshot is putting backups of different directories/subdirectories all into one directory

Hello,

I'm new to this mailing-list. Forgive me if I ask a question that has been answered previously.

Whenever I use rsnapshot to back up different directories such as /home/user_a/Maildir/ and /home/user_b/Maildir/, it backs them up, but in the folder specified to store the backups it does not keep separate folders for /home/user_a/Maildir/ and /home/user_b/Maildir/; it just dumps everything together in the hourly.x folder.

I have another rsnapshot config file that isn't doing this; the differences are the rsnapshot version and the OS installed on the host. Why is it doing this, and how can I fix it?

Host A:
- FreeBSD 10.2-RELEASE-p8
- rsnapshot 1.4.0
- rsync 3.1.1  protocol version 31

Host B:
- Ubuntu 14.04.1
- rsnapshot 1.3.1
- rsync 3.1.0  protocol version 31

If I use this rsnapshot.conf (snippet) on Host A:
retain            hourly  24
#cmd_cp       /bin/cp
# Mail server host.lan
backup  root <at> host.lan:/home/user_a/Maildir/        host.lan/
backup  root <at> host.lan:/home/user_b/Maildir/        host.lan/

it does not keep separate folders for /home/user_a/Maildir/ and /home/user_b/Maildir/; it just dumps everything together in the hourly.x folder.

If I use this rsnapshot.conf (snippet) on Host A:
retain            hourly  24
#cmd_cp       /bin/cp
# Mail server host.lan
backup  root <at> host.lan:/home/user_a/Maildir/        host.lan/home/user_a/Maildir/
backup  root <at> host.lan:/home/user_b/Maildir/        host.lan/home/user_b/Maildir/

it does keep separate folders for /home/user_a/Maildir/ and /home/user_b/Maildir/, with proper separate directories/subdirectories in the hourly.x folder.

Host B:
- Ubuntu 14.04.1
- rsnapshot 1.3.1
- rsnapshot.conf (snippet):
retain            hourly  24
cmd_cp       /bin/cp
# Mail server host.lan
backup  root <at> host.lan:/home/user_a/Maildir/        host.lan/
backup  root <at> host.lan:/home/user_b/Maildir/        host.lan/
it does keep separate folders for /home/user_a/Maildir/ and /home/user_b/Maildir/, with proper separate directories/subdirectories in the hourly.x folder.


Why is the behavior of Host A and Host B different, and how can I fix it?

Thank,
Roy

Thierry Lavallee | 14 Jan 23:41 2016

Rsnapshot in Montreal

Hi,
Is there someone willing to help me in Montreal with my little backup 
script?
I would kind of like to make things clear and stop fighting it. ;)
Please contact me privately.
Thanks!
-- 
Thierry

Nico Kadel-Garcia | 13 Jan 17:13 2016

Hooking rsnapshot to "aws s3 sync"

Has anyone else dealt with mirroring material from an AWS S3 bucket, merging rsnapshot with the "aws s3 sync"
command, which is admittedly nowhere near as sophisticated as rsync? I'm looking at
locally mirroring datestamped copies of S3-stored content, to keep trackable local copies of the
upstream modified content.
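One pattern that might fit (a sketch, untested; the bucket name, script path and destination below are all placeholders) is rsnapshot's backup_script directive, which runs a command inside a temporary directory and then moves the result into the snapshot tree, so the normal rotation gives you the datestamped copies:

```
# rsnapshot.conf (fields are tab-separated)
backup_script	/usr/local/bin/s3-pull.sh	s3mirror/
```

where /usr/local/bin/s3-pull.sh would be something like:

```
#!/bin/sh
# rsnapshot runs this inside a fresh temp dir; sync the bucket into it
exec aws s3 sync s3://example-bucket/ .
```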

Nico Kadel-Garcia
Lead DevOps Engineer
nkadel <at> skyhookwireless.com

