Ivan Zhakov | 3 Jul 14:05 2015

Time to release Subversion 1.9.0-rc3?

There are several changes merged to 1.9.x branch since 1.9.0-rc2:
 * r1686554, r1686557, r1686239, r1686541, r1686543, r1686802
   Fix 'svnadmin hotcopy' for read-only FSFS repositories
     Format 7 repositories could not be hotcopied without write access to
     the source repo - which is a regression vs. older formats.  The new
     test case also uncovered another regression with hotcopying the
     rep-cache.db from r/o repos.

 * r1682714, r1682854, r1683126, r1683135, r1683290
   Fix segfaults in FSX's directory processing code.
     Despite its experimental state, FSX shall not segfault the server.

 * r1686478, r1686888, r1686984
   Make 'blame -g' work with old clients against new servers.
     Without this patch, old clients will "lose track" of what changes
     happened in -g mode and produce wrong / worse blames than against
     old servers.
     The output of 'blame -g' is only an approximation.  However, the
     new server would cause much worse results in old clients especially
     in simple cases where lines of development are kept in close sync.

 * r1685085
   Install svnbench as part of 'make install'.
     svnbench moved from tools/ to subversion/, so it should be installed
     by default.

(Continue reading)

Branko Čibej | 30 Jun 21:22 2015

Trunk builds are failing with FSFSv4 and -v6


Could be related to the recent change in 'svnadmin pack' output.

-- Brane

Branko Čibej | 26 Jun 00:02 2015

Re: Possible incompatibility of svn_repos_verify_fs2() in 1.9.0-rc1

On 25.06.2015 18:11, Evgeny Kotkov wrote:
> Branko Čibej <brane <at> wandisco.com> writes:
>> Go ahead; I won't be able to for the next few days. Please remember to
>> remove the new error code, too, if we won't be using it any more.
> With a bit of trial and error, I was able to come up with a complete and
> tested solution that I find suitable in terms of the API design and the
> calling site (svnadmin.c).  The ideas are partly borrowed from the already
> mentioned svn_fs_lock_many() and svn_fs_lock_callback_t implementations.
> See the attached patch.  Please note that the patch is not a continuation of
> what I posted in my last e-mail (verify-fixup-squashed-v1.patch.txt), but from
> my point of view is better and should address the raised concerns, such as
> having a non-trivial compatibility wrapper.
> Here is the log message:
> [[[
> Reimplement svn_repos_verify_fs3() to support an arbitrary callback that
> receives the information about an encountered problem and lets the caller
> decide on what happens next.  This supersedes the keep_going argument for
> this API.  A callback is optional; the behavior of this API if the callback
> is not provided is equivalent to how svn_repos_verify_fs2() deals with
> encountered errors.  This allows seamless migration to the new API, if the
> callback is not necessary.  The idea is partly taken from how our existing
> svn_fs_lock_many() API works with a svn_fs_lock_callback_t and passes error
> information to the caller.
> Immediately use the new API to provide an alternative solution for the
> encountered problem with 'svnadmin verify --keep-going -q' (see r1684940)
> being useless in terms that it was only giving an indication of whether a
(Continue reading)

Stefan Hett | 25 Jun 17:10 2015

svn-mergeinfo-normalizer ideas


I'm dealing with one remaining case that 'svn-mergeinfo-normalizer
normalize' doesn't seem to be able to handle yet. Would it be possible
to add support for this?

Case: Eliminate incorrect mergeinfos of pre-branch-revisions.

Looking at the following output:
Trying to elide mergeinfo from path
     into mergeinfo at path

     All branches still exist in HEAD.

     Try to elide remaining branches:
     CANNOT elide branch /XRebirth/trunk/src/SDKs/bullet
         revisions not movable to parent: 173817,174057,180942,181150

     Branches not mentioned in parent:

     Sub-tree merge info cannot be elided due to the following branches:

Here you see that the revisions 173817,174057,180942,181150 are reported
(Continue reading)

Stefan Hett | 24 Jun 16:58 2015

mergeinfo-normalizer issue with irrelevant ranges


I hope it's OK to discuss this on this list rather than the users list
(since the tool is not part of a released version):

I'm looking at the following output from mergeinfo-normalizer analyse:

Trying to elide mergeinfo from path
     into mergeinfo at path

     All branches still exist in HEAD.

     Try to elide remaining branches:
     CANNOT elide branch /XRebirth/trunk/src/SDKs/libVMM
         revisions missing in sub-node: 
     CANNOT elide branch /XRebirth/branches/XR_porting/src/SDKs/libVMM
         revisions missing in sub-node: 194976

     Branches not mentioned in parent:

     Sub-tree merge info cannot be elided due to the following branches:
(Continue reading)

Stefan Hett | 22 Jun 18:40 2015

Building SVN w/o OpenSSL fails


While trying to build SVN under Windows (1.9 based -
svn-mergeinfo-normalizer branch), I'm getting the following error.

Please note that I'm not building with OpenSSL.

   All outputs are up-to-date.
   All outputs are up-to-date.
   C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\bin\link.exe 
/NOLOGO /LIBPATH:..\..\..\..\apr\lib 
/LIBPATH:..\..\..\..\zlib ws2_32.lib rpcrt4.lib mswsock.lib 
"libapr-1.lib" "libaprutil-1.lib" ssleay32.lib libeay32.lib secur32.lib 
xml.lib zlibstat.lib kernel32.lib user32.lib gdi32.lib winspool.lib 
comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib 
odbc32.lib odbccp32.lib /NODEFAULTLIB:msvcrtd.lib /DEF:"libsvn_ra.def" 

/MANIFESTUAC:"level='asInvoker' uiAccess='false'" /DEBUG 
(Continue reading)

Julian Foad | 17 Jun 22:58 2015

Move tracking - moves-element-model doc

Dear move tracking fans (I've CC'd a few particular people, but that's
not to exclude others),

Please take a look at the doc I've just added. It's meant to summarize
the model that I'm prototyping on the 'move-tracking-2' branch.

The commit that added it:

The doc, rendered as HTML (please view in Unicode encoding):

The source doc in LyX format is there as well.

Please let me know what else I can do to explain it better, or any
other thoughts.

- Julian

Marc Strapetz | 17 Jun 17:42 2015

New source for Subversion binaries

Starting with Subversion 1.9, as part of the SmartSVN build process we 
are also creating Subversion command line binaries (client-side only) 
which we are now providing as separate download for Windows (32 bit 
only) and OSX. Windows binaries are built in a Windows 7 VM with the 
minimum requirements installed. OSX binaries are built on a dedicated 
machine. Other properties of the bundles:

- portable, no installer
- no registration
- no certification

Currently, they are only available for the 1.9 preview builds:


Probably they are not perfect yet, so it would be great if Windows and 
OSX developers could have a look and let me know about possible problems.

We would also like to create portable (universal) Linux binaries for 32-
and 64-bit platforms. AFAIU, this should be possible if the linking
between the libraries were relative (like on OSX). Unfortunately, we
currently don't have a clue how to teach the linker to do so. Does
anyone have ideas, or has anyone already succeeded in creating such
portable binaries?


Vincent Lefevre | 17 Jun 10:38 2015

keywords not updated after an update that doesn't change the file due to local changes

I have the following problem under Debian with the subversion 1.8.13-1
Debian package, but I don't think that this is Debian specific.

It seems that "svn update" doesn't update keywords when the local
modifications on the concerned file are the same as in the repository.

The problem can be reproduced with something like this:

1. Do 2 checkouts of the repository, where some file "foo" has
   an Id keyword. -> wc1 and wc2.
2. In wc1, modify file "foo".
3. In wc1, do "svn commit".
4. In wc2, do the same modification.
5. In wc2, do "svn update".

At (5), the local changes and the changes from the repository are
merged, and the file is unchanged. But the Id keyword should have
been updated.


Vincent Lefèvre <vincent <at> vinc17.net> - Web: <https://www.vinc17.net/>
100% accessible validated (X)HTML - Blog: <https://www.vinc17.net/blog/>
Work: CR INRIA - computer arithmetic / AriC project (LIP, ENS-Lyon)

Stefan Fuhrmann | 16 Jun 21:57 2015

Experiments with FlushFileBuffers on Windows

Hey there,

One of the links recently provided by Daniel Klima pointed
to a way to enable write caching even on USB devices.
So, I could use my Windows installation for experiments now
without the risk of bricking 2 grand worth of disks by pulling
the plug tens of times.

-- Stefan^2.

FlushFileBuffers operates on whole files, not just the parts
written through the respective handle. Not calling it after rename
results in potential data loss. Calling it after rename eliminates
the problem at least in most cases.

I used the attached program to conduct 3 different experiments,
each trying a specific modification / fsync sequence. All would
write to a USB stick which had OS write cache enabled for it
in Windows 7.

All tests run an unlimited number of iterations - until there is an
I/O error (e.g. caused by disconnecting the drive). For each run,
separate files with different file contents will be written
("run number xyz", repeated many times). So, we can determine
which files' contents are complete and correct and whether all files
are present. Each successful iteration is logged to the console.
We expect the data for all of these to be complete.

The stick got yanked out at a random point in time, reconnected
after about a minute, chkdsk /f run on it and then the program
output would be compared with the USB stick's content.

Experiment 1: fsync a file written through a different handle.
Write the same contents 100x, alternating between two files.
Both files are the same size >1MB and should be similarly
"important" to the OS. Close both files. Re-open the one written
last and fsync it. This re-open scenario is similar to what we do
with the protorev file.

* 10 runs were made, between 17 and 84 iterations each.
* 10x, the fsync'ed file and its contents were complete
* 10x, the non-synced files were present and showed the
  correct file size. The contents of the last few of them were
  NUL bytes.

Re-opening a file and fsync'ing it flushes *all* content changes
for that file - at least on Windows. The way we handle the
protorev file is correct.

Experiment 2: fsync before but not after rename
This mimics the core of our "move-in-place" logic: Write a
small-ish file (here: 10 .. 20k to not get folded into the MFT)
with some temporary name, fsync and close it. Rename to
its final name in the same folder.

* 5 runs were made, between 182 and 435 iterations each.
* 1x the final file existed with the correct contents
* 3x the .temp file existed for the last completed iteration.
* 1x even the final file for the previous iteration contained
  NULs. After that run, chkdsk reported and fixed a large
  number of issues.

Not fsync'ing after rename will lead to data loss even with
NTFS. IOW, we don't have transactional guarantees for
"commit" on Windows servers at the moment.

The last case with the more severe corruption may be due
to the storage device not handling its buffers correctly.
The only thing we can do here is tell people to use battery-
backed storage.

Experiment 3: fsync before *and* after rename
Same as above but re-open the file after rename and fsync it.

* 10 runs were made, between 127 and 1984 iterations each.
* 7x the final file existed with the correct contents
* 1x the next temp already existed with size 0
  (this is also a correct state; the last complete iteration's
   final file existed with the correct contents)
* 1x the next temp already existed with correct contents
  (correct, same as before)
* 1x the last final file was missing, there was no temp file
  and the previous final file contained invalid data. After
  that run, there were various issues fixed by chkdsk.
  It was also the run with the most iterations.

In 90% of the runs, fsync'ing after rename resulted in
correct disk contents. This is much better than the results
in Experiment 2. The remainder may be due to limitations
of the storage device and has been observed in Exp. 2
as well.

Attachment (FSyncExperiment.cpp): text/x-c++src, 5033 bytes
Evgeny Kotkov | 16 Jun 20:12 2015

FSFS7: 'svnadmin hotcopy' requires write access to the source

FSFS7 introduced a new on-disk lockfile, db/pack-lock, that allows packing
a repository without completely blocking the commits.  We also extended the
svn_fs_fs__hotcopy() logic to take this lock for the source repository, see
r1589284 and the related discussion [1, 2].

I think that this behavior — taking a lock in the filesystem of the source
repository — is a regression, because now 'svnadmin hotcopy' requires write
access to the source repository.  One example of how this prerequisite could
break existing installations would be a system where an account performing
the backup only has read access to the source (that sounds logical to me).
Another example would be using 'svnadmin hotcopy' to import an existing
repository from something like a read-only SMB share, and an attempt to do
so would trigger an error for FSFS7 repositories.

Here is a quick reproduction script:

$ svnadmin create source
$ svnadmin create source-old --compatible-version=1.8
$ chmod -w -R source
$ chmod -w -R source-old
$ svnadmin hotcopy source target
svnadmin: E000013: Can't open file 'source/db/pack-lock': Permission denied
$ svnadmin hotcopy source-old target-old
* Copied revision 0.

Please note that the existence of db/pack-lock in the source is orthogonal to
this reproduction script.  Calling 'echo "" > source/db/pack-lock' after the
first step would still trigger the same error for the FSFS7 repository.

Perhaps we should add a corresponding regression test and revert r1589284?

[1] https://svn.apache.org/r1589284
[2] http://svn.haxx.se/dev/archive-2014-07/0124.shtml

Evgeny Kotkov