boli | 8 Feb 23:19 2016

"layout" of a six drive raid10


I'm trying to figure out what a six drive btrfs raid10 would look like. The example at
seems ambiguous to me.

It could mean that stripes are split over two raid1 sets of three devices each. The sentence "Every stripe is
split across to exactly 2 RAID-1 sets" would lead me to believe this.

However, earlier it says for raid0 that "stripe[s are] split across as many devices as possible", which for
six drives would mean: stripes are split over three raid1 sets of two devices each.

Can anyone enlighten me as to which is correct?
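For what it's worth, the two readings can be written down as toy layouts. Below is a small Python sketch (illustrative only, not the kernel's chunk allocator) of the second reading - three two-device raid1 sets with each stripe split across all of them, which is what the raid0 wording ("as many devices as possible") suggests:

```python
# Toy model of one reading of a six-device btrfs raid10: three raid1
# pairs, with each stripe split across all pairs ("as many devices as
# possible"). Illustrative only -- not the kernel's chunk allocator.

def raid10_layout(n_devices, n_elements):
    """Map stripe element i -> the mirror pair holding both of its copies."""
    assert n_devices >= 4 and n_devices % 2 == 0
    pairs = [(d, d + 1) for d in range(0, n_devices, 2)]  # raid1 sets
    return {i: pairs[i % len(pairs)] for i in range(n_elements)}

# Six devices, six stripe elements: elements cycle over (0,1), (2,3), (4,5)
print(raid10_layout(6, 6))
```

Under the first reading the pairs list would instead be two three-device raid1 sets, so laying both versions out this way makes the difference concrete.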

Reason I'm asking is that I'm deciding on a suitable raid level for a new DIY NAS box. I'd rather not use btrfs
raid6 (for now). The first alternative I thought of was raid10. Later I learned how btrfs raid1 works and
figured it might be better suited for my use case: Striping the data over multiple raid1 sets doesn't
really help, as transfer from/to my box will be limited by gigabit ethernet anyway, and a single drive can
saturate that.

Thoughts on this would also be appreciated.

As a bonus I was wondering how btrfs raid1 is laid out in general, in particular with even and odd numbers of
drives. A pair is trivial. For three drives I imagine a "ring setup", with each drive sharing half of its data
with another drive. But how is it with four drives – are they organized as two pairs, or four-way, or …
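As I understand it, btrfs raid1 has no fixed pairs at all: each chunk's two copies simply go to the two devices with the most free space at allocation time, so the "ring" on three equal drives falls out of that rule automatically. A simplified sketch of that idea (the real allocator works on ~1 GiB chunks and considers more state):

```python
# Sketch of btrfs raid1 chunk placement as I understand it: no fixed
# pairs; each chunk's two copies go to the two devices that currently
# have the most free space. (Simplified -- not the real allocator.)

def allocate_raid1_chunks(free_space, n_chunks, chunk_size=1):
    """free_space: per-device free space list. Returns (dev_a, dev_b) per chunk."""
    placements = []
    for _ in range(n_chunks):
        # pick the two devices with the most free space (ties -> lowest devid)
        a, b = sorted(range(len(free_space)),
                      key=lambda d: free_space[d], reverse=True)[:2]
        free_space[a] -= chunk_size
        free_space[b] -= chunk_size
        placements.append((min(a, b), max(a, b)))
    return placements

# Three equal drives: the "ring" emerges on its own.
print(allocate_raid1_chunks([10, 10, 10], 3))  # -> [(0, 1), (0, 2), (1, 2)]
```

On four equal drives the same rule alternates between the two emptiest devices rather than sticking to fixed pairs, so it is neither "two pairs" nor strictly "four-way".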

Cheers, boli
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majordomo <at>
More majordomo info at

Austin S. Hemmelgarn | 8 Feb 17:42 2016

Re: suspected BTRFS errors resulting in file system becoming unrecoverable

On 2016-02-08 11:23, WillIam Thorne wrote:
> Thanks all for the help. Here’s a bit more info below. Seeing as it's
> possibly related to the USB implementation on the pi, I have cc’d their
> mailing list.
Glad we could be of assistance.
>> On 25 Jan 2016, at 16:43, Austin S. Hemmelgarn <ahferroin7 <at>
>> <mailto:ahferroin7 <at>>> wrote:
>> On 2016-01-25 09:58, WillIam Thorne wrote:
>>> Hi
>>> I have a WD 3TB external HD attached over USB to an arm based micro
>>> PC (rasp pi). I was experimenting with btrfs for storing email
>>> archives but recently encountered some problems which resulted in the
>>> filesystem becoming apparently unrecoverable. I’m not an expert and
>>> it was quicker to switch back to ext4 and restore from backup, so no
>>> support is needed. Here is what appears to be the relevant part of the
>>> syslog including the stack trace in case it is useful:
>>> Best
>>> W
>>> pi <at> mail /var/log $ btrfs --version
>>> Btrfs Btrfs v0.19
>> In general, if you plan to use BTRFS on Debian (or Raspbian), you
>> should be building the tools yourself locally, Debian is almost as bad
>> about staying up to date as most enterprise distros.
>>> pi <at> mail /var/log $ uname -a

Tomasz Chmielewski | 8 Feb 10:22 2016

4.4.0 - no space left with >1.7 TB free space

Linux 4.4.0 - btrfs is mainly used to host lots of test containers, 
often snapshotted, and at times there is heavy IO in many of them for 
extended periods of time. btrfs is on HDDs.

Every few days I'm getting "no space left" in a container running a mongo 
3.2.1 database. Interestingly, I haven't seen this issue in containers 
with MySQL. All databases have chattr +C set on their directories.

Why would it fail, if there is so much space left?

2016-02-07T06:06:14.648+0000 E STORAGE  [thread1] WiredTiger (28) 
file:collection-33-7895599108848542105.wt, WT_SESSION.checkpoint: 
collection-33-7895599108848542105.wt write error: failed to write 4096 
bytes at offset 20480: No space left on device
2016-02-07T06:06:14.648+0000 E STORAGE  [thread1] WiredTiger (28) 
[1454825174:648740][9105:0x7f2b7e33e700], checkpoint-server: checkpoint 
server error: No space left on device
2016-02-07T06:06:14.648+0000 E STORAGE  [thread1] WiredTiger (-31804) 
[1454825174:648766][9105:0x7f2b7e33e700], checkpoint-server: the process 
must exit and restart: WT_PANIC: WiredTiger library panic
2016-02-07T06:06:14.648+0000 I -        [thread1] Fatal Assertion 28558
2016-02-07T06:06:14.648+0000 I -        [thread1]

***aborting after fassert() failure

2016-02-07T06:06:14.694+0000 I -        [WTJournalFlusher] Fatal 
Assertion 28559
2016-02-07T06:06:14.694+0000 I -        [WTJournalFlusher]
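One frequent cause of this symptom is that btrfs accounts data and metadata space separately, so ENOSPC can fire when one chunk class (often metadata) is exhausted even though the other still shows terabytes free. A sketch that parses `btrfs fi df`-style output to flag a nearly full class (the sample figures below are invented):

```python
import re

# Flag chunk classes that are nearly full in `btrfs fi df`-style output.
# ENOSPC can hit when one class (often Metadata) is exhausted even though
# another still has terabytes free. Sample figures are invented.

SAMPLE = """\
Data, single: total=5.00TiB, used=3.30TiB
System, DUP: total=32.00MiB, used=512.00KiB
Metadata, DUP: total=40.00GiB, used=39.80GiB
"""

UNITS = {"KiB": 1 / 2**20, "MiB": 1 / 2**10, "GiB": 1, "TiB": 2**10}

def nearly_full(report, threshold=0.98):
    flagged = []
    for line in report.splitlines():
        m = re.match(r"(\w+), \w+: total=([\d.]+)(\w+), used=([\d.]+)(\w+)", line)
        total = float(m.group(2)) * UNITS[m.group(3)]  # normalize to GiB
        used = float(m.group(4)) * UNITS[m.group(5)]
        if used / total >= threshold:
            flagged.append(m.group(1))
    return flagged

print(nearly_full(SAMPLE))  # -> ['Metadata']
```

Whether that is what bit the mongo container here would need the actual `btrfs fi df` output from the box, of course.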



Nikolaus Rath | 7 Feb 20:06 2016

Use fast device only for metadata?


I have a large home directory on a spinning disk that I regularly
synchronize between different computers using unison. That takes ages,
even though the number of changed files is typically small. I suspect
most of the time is spent walking through the file system and checking

So I was wondering if I could possibly speed up this operation by
storing all btrfs metadata on a fast SSD. It seems that
mkfs.btrfs allows me to put the metadata in raid1 or dup mode, and the
file contents in single mode. However, I could not find a way to tell
btrfs to use a device *only* for metadata. Is there a way to do that?

Also, what is the difference between using "dup" and "raid1" for the metadata?
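On the dup vs raid1 part: as I understand it, both profiles keep two copies of each metadata chunk, but dup puts both copies on the same device while raid1 puts them on two different devices. A toy sketch of what that means for a single-device failure:

```python
# dup vs raid1, reduced to the placement of one chunk's two copies.
# dup: both copies on one device; raid1: copies on two different devices.
# (Simplified sketch, not the real allocator.)

def surviving_copies(profile, failed_device):
    """Devices that still hold a copy after `failed_device` dies."""
    placement = {"dup": [0, 0], "raid1": [0, 1]}[profile]
    return [d for d in placement if d != failed_device]

# dup: losing the one device takes both copies with it
assert surviving_copies("dup", 0) == []
# raid1: the second device still holds a copy
assert surviving_copies("raid1", 0) == [1]
```

So dup only protects against localized damage (bad sectors, checksum errors) on an otherwise healthy device, while raid1 also survives losing a whole device.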



GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

             »Time flies like an arrow, fruit flies like a Banana.«


Benjamin Valentin | 7 Feb 18:28 2016

One disc of 3-disc btrfs-raid5 failed - files only partially readable


I created a btrfs volume with 3x8TB drives (ST8000AS0002-1NA) in raid5.
I copied several TB of data onto it without errors (from eSATA drives, so
rather fast - I mention that because of [1]), then set it up as a
fileserver where data was read from and written to it over a gigabit
ethernet connection for several days.
This however didn't go so well because after one day, one of the drives
dropped off the SATA bus.

I don't know if that was related to [1] (I was running Linux 4.4-rc6 to
avoid that) and by now all evidence has been eaten by logrotate :\

But I was not concerned for I had set up raid5 to provide redundancy
against one disc failure - unfortunately it did not.

When trying to read a file I'd get an I/O error after some hundred MB
(this is random across multiple files, but consistent for the same
file), on files written both before and after the disc failure.

(There was still data being written to the volume at this point.)

After a reboot a couple days later the drive showed up again and SMART
reported no errors, but the I/O errors remained.

I then ran btrfs scrub (this took about 10 days) and afterwards I was
again able to completely read all files written *before* the disc


Andreas Hild | 7 Feb 14:15 2016

FS corruption on RAID1, generation doesn't match

Dear All,

The file system on a RAID1 Debian server seems corrupted in a major
way, with 99% of the files not found. This was the result of a
precarious shutdown after a crash that was preceded by an accidental
misconfiguration in /etc/fstab; it pointed "/" and "/tmp" to one and
the same UUID by omitting a subvol entry.
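For reference, a hypothetical fstab fragment of the kind that avoids this trap: two lines may share one btrfs UUID, but each then needs its own subvol= option (the subvolume names here are made up for illustration):

```
# /etc/fstab -- two mounts of the same btrfs filesystem, distinguished
# by subvol=; omitting it makes both lines mount the same default subvolume
UUID=fc429e82-f46e-4018-a9fa-ded688cef161  /     btrfs  subvol=@,defaults     0  0
UUID=fc429e82-f46e-4018-a9fa-ded688cef161  /tmp  btrfs  subvol=@tmp,defaults  0  0
```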

Is there any way to repair or recover a substantial part of this RAID?

The following output was obtained via a live disk boot.

Many thanks!

Best wishes,

--- --- ---
uname -a
Linux debian 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt20-1+deb8u3
(2016-01-17) x86_64 GNU/Linux

sudo btrfs fi show
Label: none  uuid: fc429e82-f46e-4018-a9fa-ded688cef161
        Total devices 2 FS bytes used 42.36MiB
        devid    1 size 455.76GiB used 169.03GiB path /dev/sdb4
        devid    2 size 455.76GiB used 169.03GiB path /dev/sda4

Btrfs v3.17


David Goodwin | 7 Feb 07:57 2016

kernel 4.1.15 balance => oops. out of memory.

kernel 4.1.15

Balancing a fs led to this oops.

After a reboot, the balance resumed without problem.

Presumably there wasn't enough real memory available.
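The "error -12" in the abort message below is -ENOMEM, which is consistent with the out-of-memory theory; a quick check:

```python
import errno
import os

# "Transaction aborted (error -12)": confirm that errno 12 is ENOMEM.
print(errno.errorcode[12], "-", os.strerror(12))
```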


  ------------[ cut here ]------------
  WARNING: CPU: 1 PID: 16560 at fs/btrfs/super.c:260 
__btrfs_abort_transaction+0x4b/0x120 [btrfs]()
  BTRFS: Transaction aborted (error -12)
  Modules linked in: ufs qnx4 hfsplus hfs minix ntfs vfat msdos fat jfs 
xfs libcrc32c dm_mod veth xt_multiport iptable_filter ip_tables x_tables 
nfsd auth_rpcgss oid_registry nfs_acl nfs lockd grace fscache sunrpc 
bridge stp llc ip_gre ip_tunnel gre fuse crct10dif_pclmul crc32_pclmul 
ppdev cirrus aesni_intel snd_pcsp aes_x86_64 lrw gf128mul glue_helper 
ablk_helper cryptd snd_pcm ttm snd_timer evdev psmouse snd soundcore 
serio_raw drm_kms_helper parport_pc 8250_fintek parport drm i2c_piix4 
i2c_core acpi_cpufreq processor thermal_sys button ext4 crc16 mbcache 
jbd2 btrfs xor raid6_pq ata_generic xen_blkfront crc32c_intel floppy 
ata_piix libata scsi_mod ixgbevf
  CPU: 1 PID: 16560 Comm: apache2 Not tainted 4.1.15-dg1 #1
  Hardware name: Xen HVM domU, BIOS 12/07/2015
   0000000000000000 ffffffffa01a1af2 ffffffff81573596 ffff880029f13b08
   ffffffff81071221 ffff880139897388 00000000fffffff4 ffff88014ba71000
   ffffffffa01a061c 0000000000001854 ffffffff8107129a ffffffffa01a46e8
  Call Trace:

Tom Arild Naess | 6 Feb 22:35 2016

Unrecoverable error on raid10


I have quite recently converted my file server to btrfs, and I am in the 
process of setting up a new backup server with btrfs to be able to 
utilize btrfs send/receive.

File server:
> uname -a
Linux main 3.19.0-49-generic #55~14.04.1-Ubuntu SMP Fri Jan 22 11:24:31 
UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

> btrfs fi show /store
Label: none  uuid: 2d84ca51-ec42-4fe3-888a-777cad6e1921
     Total devices 4 FS bytes used 4.35TiB
     devid    1 size 3.64TiB used 2.18TiB path /dev/sdc
     devid    2 size 3.64TiB used 2.18TiB path /dev/sdd
     devid    3 size 3.64TiB used 2.18TiB path /dev/sdb
     devid    4 size 3.64TiB used 2.18TiB path /dev/sda

btrfs-progs v4.1 (custom compiled)

> btrfs fi df /store
Data, RAID10: total=4.35TiB, used=4.35TiB
System, RAID10: total=64.00MiB, used=480.00KiB
Metadata, RAID10: total=6.00GiB, used=4.59GiB
GlobalReserve, single: total=512.00MiB, used=0.00B

Backup server:
> uname -a
Linux backup 4.2.5-1-ARCH #1 SMP PREEMPT Tue Oct 27 08:13:28 CET 2015 

Nicholas D Steeves | 5 Feb 21:27 2016

btrfs-progs 4.4 with linux-3.16.7 (with truncation-of-extents patch)


Is it safe to use btrfs-progs-4.4 with linux-3.16.7 patched with the following:

Btrfs: fix truncation of compressed and inlined extents

The specific case I'm looking into is when a Debian user sticks with the 
default kernel, but installs btrfs-progs-4.4 from backports.  I've also 
read that there will be some userspace<->kernel compatibility checks 
added to btrfs-progs at some point, but I wasn't able to find recent 
news on its progress.

Kind regards,