Re: Collapsing a rsnapshot tree?
Cal Sawyer <cal-s <at> blue-bolt.com>
2015-05-01 11:56:13 GMT
You fellows are, of course, absolutely right. What I discovered in
retrospect was that the snapshots I had varied widely in date, so the
deltas were huge and it made for a pretty rotten example.
This morning, I collapsed a 7-day tree down to 5 days by deleting daily.6
and daily.5. Those deletes took barely any time at all, and the
subsequent snapshot was similarly very fast.
Thanks very much to everyone who pitched in.
Cal Sawyer | Systems Engineer | BlueBolt Ltd
15-16 Margaret Street | London W1W 8RW
+44 (0)20 7637 5575 | www.blue-bolt.com
> The reverse is actually what happened, I think. I had a daily 7 setup
> that I had reconfigured to daily 3. When I run rsnapshot -v manually, I
> can see at what level it's doing its initial rm's and cp's, and
> daily.4-6 are no longer being touched, although they held data - a lot.
These data are either:
(1) files which are no longer current (not present in daily.0), and
will use up disk space that can be freed by deleting them (assuming
you don't have any weekly or similar backups), but deleting them
will have no effect on the next day's backup
(2) files which do exist in daily.0, in which case having a hard link
to them in daily.3-6 will not make any difference to disk space
used; if daily.3-6 are deleted then the link count for those files
in daily.0 will go down but no disk space will be freed. If the
files still exist in daily.0 then it will have no effect on the
next day's backup, because linking is only done against daily.0.
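The two cases above can be seen directly from link counts. Here is a throwaway sketch (a hypothetical `demo` directory standing in for real snapshots, using the coreutils `ln` and `stat` tools):

```shell
# Two "snapshot" dirs sharing one file via a hard link (hypothetical paths).
mkdir -p demo/daily.0 demo/daily.3
echo "payload" > demo/daily.0/file
ln demo/daily.0/file demo/daily.3/file   # same inode, no extra disk space

stat -c %h demo/daily.0/file             # link count: 2

# Deleting the old snapshot is case (2): the link count drops,
# but daily.0 still holds the data, so no space is freed.
rm -rf demo/daily.3
stat -c %h demo/daily.0/file             # link count: 1
```

Only files that existed solely in the deleted trees (case (1)) actually give space back.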
> When I (naively) deleted daily.3-daily.6,
Deleting daily.3-6 makes sense and is what I would have done.
I don't think deleting daily.3-daily.6 was naive at all.
> I lost a few TB of data which
> then had to be picked up again in the latest daily (which took quite a
> while to complete).
This bit I don't understand.
If you measured "lost a few TB of data" using df, then the difference
would be old files which are no longer current, and hence not relevant
to how long the rsnapshot will take.
To explain the "took quite a while to complete", I'd be looking at
whether there were many differences between daily.0 and your source
file systems. For example, something messed with daily.0, or lots
of files on the source directory appear to have changed (for example
the mtime differs between files in source and daily.0).
Or whether something changed in your rsnapshot config that means it
is not making hard links properly any more (in which case I'd expect
each rsnapshot will take quite a while until that is fixed).
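One hedged way to check the "not making hard links properly" theory: unchanged files in adjacent snapshots should share an inode, so a file with link count 1 in the newest tree is one that was copied rather than linked. A small self-contained sketch with fixture directories (not real snapshots; adapt the paths to your snapshot_root):

```shell
# Fixture: daily.0/kept is properly hard-linked from daily.1;
# daily.0/copied stands in for a file that was re-copied instead of linked.
mkdir -p snaps/daily.0 snaps/daily.1
echo data > snaps/daily.1/kept
ln snaps/daily.1/kept snaps/daily.0/kept
echo data > snaps/daily.0/copied

# Files with link count 1 in the newest snapshot were not linked from the
# previous one; a flood of these after a run means linking is broken.
find snaps/daily.0 -type f -links 1      # prints snaps/daily.0/copied
```

On a healthy rsnapshot tree, only files that genuinely changed since the previous snapshot should show up here.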
> So the question remains, I think - how best to
> condense/collapse older daily snapshots when reducing the retention,
> ignoring 3-6, or should one just leave the older snapshots alone and
> live with it?

No, I think the question is why did your rsnapshots start taking longer,
and I think the answer is *not* because you deleted old daily.3-6 (which
are no longer used nor relevant to rsnapshot).
On 23/04/15 17:49, Scott Hess wrote:
On Thu, Apr 23, 2015 at 1:20 AM, Cal Sawyer <cal-s <at> blue-bolt.com> wrote:
> On occasion, I've wanted to collapse a weekly or multiday snapshot
> and redefine it to have fewer days' retention. If I modify a config
> for, say, 7-day retention to 3 days, rsnapshot will ignore days 3-6
> and start acting only on day 2 and below, effectively "orphaning"
> older snapshots which remain in place untouched forever. If I delete
> those older snapshots, (predictably?) I end up with a very long sync
> to re-acquire the files missing in the older snapshots.
>
> Is there an accepted method of collapsing a deep snapshot tree into a
> less deep one, minimising the amount of resyncing needed?
I'm not sure I understand your question. If you wish to go from
daily.0-6 to daily.0-2, then rsnapshot will not have created any
hardlinks which span between daily.0 and daily.6 without also being
present in the snapshots in between. The only time you have to be
careful is when you're manually messing with the most-recent snapshot.
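Putting the thread's answer together: the collapse is just a config change plus deleting the orphaned directories, and nothing needs to be re-synced. A sketch against a throwaway directory (the real path would be the snapshot_root from your rsnapshot.conf):

```shell
# Stand-in for a real snapshot_root with 7-day retention (hypothetical path).
mkdir -p snaproot/daily.0 snaproot/daily.1 snaproot/daily.2 \
         snaproot/daily.3 snaproot/daily.4 snaproot/daily.5 snaproot/daily.6

# After changing "retain daily 7" to "retain daily 3" in rsnapshot.conf,
# rsnapshot rotates only daily.0-2; daily.3-6 sit orphaned and, since new
# snapshots link only against daily.0, deleting them is safe and cheap:
rm -rf snaproot/daily.3 snaproot/daily.4 snaproot/daily.5 snaproot/daily.6

ls snaproot    # daily.0  daily.1  daily.2
```

The only data freed is files not hard-linked from the surviving snapshots, and the next run links against daily.0 exactly as before.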
rsnapshot-discuss mailing list
rsnapshot-discuss <at> lists.sourceforge.net