Daniel Drake | 1 Apr 02:53 2012

Re: ext4 online resize crash

On Fri, Mar 30, 2012 at 6:30 PM, Yongqiang Yang <xiaoqiangnk <at> gmail.com> wrote:
> According to dumpe2fs, the filesystem initially has 28 groups, that is
> 3.5GB, and then it is resized to a new size. Right?  Could you point out
> what the new size is? A specific number would be appreciated!

That's right. The filesystem is being grown by approximately 100mb.
I'll get you the exact numbers on Monday.

Thanks,
Daniel

Daniel Drake | 1 Apr 02:55 2012

Re: ext4 online resize crash

On Sat, Mar 31, 2012 at 6:53 PM, Daniel Drake <dsd <at> laptop.org> wrote:
> That's right. The filesystem is being grown by approximately 100mb.
> I'll get you the exact numbers on Monday.

Actually, I just realised I have them here.
The partition was originally 7409856 sectors, I enlarged it to 7605248
sectors and then tried to grow the ext4 filesystem contained inside.

Thanks,
Daniel
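
For reference, here is a quick back-of-the-envelope conversion of those
numbers. It is only a sketch: it assumes 512-byte sectors, 4 KiB blocks and
32768 blocks per group, which is consistent with the "28 groups, that is
3.5GB" figure above but is not taken from the actual dumpe2fs output.

/*
 * Rough geometry check of the resize; an illustration, not output from
 * the machine in question.  Assumes 512-byte sectors, 4 KiB blocks and
 * 32768 blocks per group.
 */
#include <stdio.h>

int main(void)
{
	unsigned long long old_sectors = 7409856ULL, new_sectors = 7605248ULL;
	const unsigned int sector = 512, block = 4096, per_group = 32768;

	unsigned long long old_blocks = old_sectors * sector / block;	/* 926232 */
	unsigned long long new_blocks = new_sectors * sector / block;	/* 950656 */

	printf("partition grows by %llu blocks (~%llu MiB)\n",
	       new_blocks - old_blocks,
	       (new_blocks - old_blocks) * block >> 20);	/* ~95 MiB */
	printf("new size ends in block group %llu\n",
	       (new_blocks - 1) / per_group);			/* group 29 */
	return 0;
}

Under those assumptions the grow is roughly 95 MiB and ends in group 29,
while dumpe2fs reported 28 groups (0-27) before the resize, so the operation
does not stay within a single block group.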

Lukas Czerner | 1 Apr 04:17 2012

Re: ext4 online resize crash

On Sat, 31 Mar 2012, Daniel Drake wrote:

> On Sat, Mar 31, 2012 at 6:53 PM, Daniel Drake <dsd <at> laptop.org> wrote:
> > That's right. The filesystem is being grown by approximately 100mb.
> > I'll get you the exact numbers on Monday.
> 
> Actually, I just realised I have them here.
> The partition was originally 7409856 sectors, I enlarged it to 7605248
> sectors and then tried to grow the ext4 filesystem contained inside.
> 
> Thanks,
> Daniel

Hi Daniel,

there is a fix for a bug that happens when resizing the file system in such
a way that the new size is still within the same group:

a0ade1deb86d2325aecc36272bb4505a6eec9235
	ext4: fix resize when resizing within single group

I am not sure whether it really solves your problem, since the actual bug
in your backtrace happens somewhere else, but you can give it a try. It
was merged into mainline just recently.

(Continue reading)

Andreas Dilger | 1 Apr 17:19 2012

Re: [PATCH] resize2fs: let online resizing report new blocks count right


On 2012-01-31, at 20:04, Yongqiang Yang <xiaoqiangnk <at> gmail.com> wrote:

> After online resizing finishes, resize2fs loads the latest super block
> so that the new blocks count is reported correctly.
> 
> Signed-off-by: Yongqiang Yang <xiaoqiangnk <at> gmail.com>
> ---
> resize/online.c |   16 ++++++++++++++--
> 1 files changed, 14 insertions(+), 2 deletions(-)
> 
> diff --git a/resize/online.c b/resize/online.c
> index 966ea1e..cb48556 100644
> --- a/resize/online.c
> +++ b/resize/online.c
> @@ -97,8 +97,7 @@ errcode_t online_resize_fs(ext2_filsys fs, const char *mtpt,
>            exit(1);
>        }
>    } else {
> -        close(fd);
> -        return 0;
> +        goto succeeded;
>    }
> 
>    if ((ext2fs_blocks_count(sb) > MAX_32_NUM) ||
> @@ -220,6 +219,19 @@ errcode_t online_resize_fs(ext2_filsys fs, const char *mtpt,
>    }
> 
>    ext2fs_free(new_fs);
> +succeeded:
(Continue reading)
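
The hunk that implements the reload is cut off above. Purely as an
illustration of the mechanism the commit message describes (not the missing
part of the patch itself), re-reading the on-disk superblock with libext2fs
and reporting its block count could look roughly like this:

/*
 * Illustration only, not the elided remainder of the patch above.
 * After the kernel has finished the online resize, re-open the device so
 * the freshly written superblock is read, then report its block count.
 */
#include <stdio.h>
#include <ext2fs/ext2fs.h>

static void report_new_size(const char *device)
{
	ext2_filsys fs;

	if (ext2fs_open(device, 0, 0, 0, unix_io_manager, &fs))
		return;

	printf("The filesystem on %s is now %llu blocks long.\n",
	       device,
	       (unsigned long long) ext2fs_blocks_count(fs->super));
	ext2fs_close(fs);
}

int main(int argc, char **argv)
{
	if (argc > 1)
		report_new_size(argv[1]);
	return 0;
}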

Daniel Drake | 1 Apr 18:42 2012

Re: ext4 online resize crash

Hi Lukas,

On Sat, Mar 31, 2012 at 8:17 PM, Lukas Czerner <lczerner <at> redhat.com> wrote:
> there is a fix for a bug that happens when resizing the file system in such
> a way that the new size is still within the same group:
>
> a0ade1deb86d2325aecc36272bb4505a6eec9235
>        ext4: fix resize when resizing within single group
>
> I am not sure whether it really solves your problem, since the actual bug
> in your backtrace happens somewhere else, but you can give it a try. It
> was merged into mainline just recently.

Thanks for the suggestion, but I am already using the latest mainline,
which includes that patch.

Daniel

bugzilla-daemon | 1 Apr 21:06 2012

[Bug 43023] New: Bad page map in process plugin-containe

https://bugzilla.kernel.org/show_bug.cgi?id=43023

           Summary: Bad page map in process plugin-containe
           Product: File System
           Version: 2.5
    Kernel Version: 3.3.0
          Platform: All
        OS/Version: Linux
              Tree: Mainline
            Status: NEW
          Severity: normal
          Priority: P1
         Component: ext4
        AssignedTo: fs_ext4 <at> kernel-bugs.osdl.org
        ReportedBy: harri <at> afaics.de
        Regression: No

Created an attachment (id=72784)
 --> (https://bugzilla.kernel.org/attachment.cgi?id=72784)
log file

I got an error in /var/log/kern.log about a "Bad page map in process
plugin-containe". See attached log file.

I am using autofs to mount /dev/sdg1 on /misc/data1 when needed. When I tried
to rsync /misc/data1/somefile to another host, I got this error message.
The copy went fine, though, and an fsck didn't show any problems either.

Mount options (as shown in /proc/mounts):

(Continue reading)

Yongqiang Yang | 2 Apr 07:04 2012

Re: backup of the last group's descriptor when it is the 1st group of a meta_bg

On Thu, Mar 29, 2012 at 12:08 AM, Andreas Dilger <aedilger <at> gmail.com> wrote:
> On 2012-03-27, at 8:47 AM, Yongqiang Yang wrote:
>> Hi Ted, Andreas and List,
>>
>> As Andreas pointed out last year, if the last group is the 1st group
>> in a meta bg, then its group desc has no backup.
>> With meta_bg the resize inode is useless, so I had the thought that we could
>> store a backup group descriptor of the last group in the resize inode.
>> What are your opinions?
>
> The main difficulty of referencing a backup group descriptor from the
> resize inode is that it may confuse tools that are trying to modify
> the resize inode.  Also, it is more difficult to access the block from
> userspace, since it would need to read the inode and use an extent to
> reference the block beyond 16TB.
I meant that we store the backup group descriptor in the resize inode itself
rather than in its data blocks, so it does not need an extent at all.
However, older e2fsck would then consider such an inode corrupted, so we
would need to patch e2fsck to let it understand the new resize inode.
>
> What about storing the 64-bit block number in the superblock?  This
> should be safe for older e2fsprogs that understand META_BG.  At worst
> the new backup group descriptor will not be updated on a resize by
> older e2fsprogs, which is no worse than not having a backup at all.
>
> I would suggest to put the backup group descriptor in the last block
> of the filesystem.  This would be in the 0th group of the metagroup.
> If the metagroup grows to have a second group, then this block is not
> needed anymore, and if both the primary (at the beginning of the group)
> and the backup (at the end of the group) are corrupted, then there is
(Continue reading)

Andreas Dilger | 2 Apr 07:46 2012

Re: backup of the last group's descriptor when it is the 1st group of a meta_bg

On 2012-04-01, at 11:04 PM, Yongqiang Yang wrote:
> On Thu, Mar 29, 2012 at 12:08 AM, Andreas Dilger <aedilger <at> gmail.com> wrote:
>> I would suggest to put the backup group descriptor in the last block
>> of the filesystem.  This would be in the 0th group of the metagroup.
>> If the metagroup grows to have a second group, then this block is not
>> needed anymore, and if both the primary (at the beginning of the group)
>> and the backup (at the end of the group) are corrupted, then there is
>> little chance that the data in this last group is good either...
> 
> Now we have 2 solutions: the 1st one is storing the backup group
> descriptor in the resize inode itself, while the 2nd one is storing the
> backup in the last block of the 0th group. Both need patching e2fsck,
> because older e2fsck does not work with either. The 1st one's e2fsck
> patch is much more complicated, because only a single group descriptor
> would be stored in the resize inode itself, while e2fsck's code reads
> and writes whole group descriptor blocks, so I like the 2nd one.

This solution doesn't _require_ patching e2fsck, which is useful.
If an older e2fsck doesn't understand that the backup group descriptor is
in the last block, it is no worse than today, where the backup does
not exist at all.  In that case, the old e2fsck would mark this block
free, and there is a tiny chance that it would be allocated to some
file and overwritten.

However, the last block will almost never be allocated, since block
allocation is typically biased toward the beginning of the disk, so
storing a checksum in it (per Darrick's patches) would allow a new
e2fsck to use it in case of emergency, and it would mark the block
in use again (so long as it wasn't allocated to some file).

(Continue reading)
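
To make the layout being discussed concrete, here is a small sketch (not
e2fsprogs or kernel code; the 4 KiB block size and 32-byte descriptor size
are assumptions) of how one would tell whether the last group is the 0th
group of its meta_bg, the case whose descriptor currently has no backup, and
where the suggested last-block backup would live:

/*
 * Sketch of the meta_bg layout under discussion, not e2fsprogs or kernel
 * code.  Assumes 4 KiB blocks and 32-byte group descriptors, so one
 * descriptor block holds 128 descriptors and one meta_bg spans 128 groups
 * (with the 64bit feature the descriptor is 64 bytes and a meta_bg spans
 * 64 groups instead).
 */
#include <stdio.h>

static void check(unsigned long long blocks_count)
{
	const unsigned int blocks_per_group = 32768;
	const unsigned int groups_per_meta_bg = 4096 / 32;	/* 128 */

	unsigned long long groups = (blocks_count + blocks_per_group - 1)
					/ blocks_per_group;
	unsigned long long last = groups - 1;

	if (last % groups_per_meta_bg == 0)
		printf("%llu blocks: group %llu is the 0th group of meta_bg "
		       "%llu, its descriptor has no backup; the suggested "
		       "backup location is the last block, %llu\n",
		       blocks_count, last, last / groups_per_meta_bg,
		       blocks_count - 1);
	else
		printf("%llu blocks: group %llu already has descriptor "
		       "backups inside meta_bg %llu\n",
		       blocks_count, last, last / groups_per_meta_bg);
}

int main(void)
{
	check(950656ULL);		/* last group sits in the middle of a meta_bg */
	check(128ULL * 32768 + 1000);	/* last group is the 0th group of meta_bg 1 */
	return 0;
}

The second call shows the problematic case: the meta_bg consists of nothing
but its 0th group, so the only copy of that group's descriptor is the primary
one, and the proposed backup would go into the very last block of the
filesystem.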

Artem Bityutskiy | 2 Apr 08:25 2012

3.3 oops

Hi,

I am testing the vanilla 3.3 kernel under KVM using xfstests, and the kernel
usually oopses when I run it overnight, e.g.:

[36928.586097] ------------[ cut here ]------------
[36928.586198] kernel BUG at fs/buffer.c:2871!
[36928.586280] invalid opcode: 0000 [#1] SMP 
[36928.586374] CPU 1 
[36928.586414] Modules linked in: [last unloaded: scsi_wait_scan]
[36928.586615] 
[36928.586650] Pid: 11400, comm: fsstress Not tainted 3.3.0+ #43 Bochs Bochs
[36928.586792] RIP: 0010:[<ffffffff811a9add>]  [<ffffffff811a9add>] submit_bh+0x10d/0x120
[36928.586985] RSP: 0018:ffff880121713758  EFLAGS: 00010202
[36928.587045] RAX: 000000000004d025 RBX: ffff8802256a4a90 RCX: 0000000000000005
[36928.587045] RDX: ffff880121713fd8 RSI: ffff8802256a4a90 RDI: 0000000000000211
[36928.587045] RBP: ffff880121713778 R08: ffff8804075c0770 R09: 0000160000000000
[36928.587045] R10: 0000000000000001 R11: ffff880406fdd480 R12: 0000000000000211
[36928.587045] R13: ffff880121713834 R14: ffff88012a9c2000 R15: ffff880121713fd8
[36928.587045] FS:  00007f5470a81700(0000) GS:ffff88041fc20000(0000) knlGS:0000000000000000
[36928.587045] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[36928.587045] CR2: 00007f546c02c698 CR3: 000000012e4e9000 CR4: 00000000000006e0
[36928.587045] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[36928.587045] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[36928.587045] Process fsstress (pid: 11400, threadinfo ffff880121712000, task ffff8802cbba5c40)
[36928.587045] Stack:
[36928.587045]  ffff88012a9c2000 ffff8802256a4a90 0000000000000211 ffff880121713834
[36928.587045]  ffff880121713798 ffffffff811ab59d 000000000000000a ffff88012a9c2168
[36928.587045]  ffff8801217137f8 ffffffff8123be2d 0000000091827364 ffff8801217137b0
[36928.587045] Call Trace:
(Continue reading)

Artem Bityutskiy | 2 Apr 13:45 2012

[PATCH v2 2/4] ext4: Convert last user of ext4_mark_super_dirty() to ext4_handle_dirty_super()

From: Jan Kara <jack <at> suse.cz>

The last user of ext4_mark_super_dirty(), in ext4_file_open(), runs so rarely
that it can well afford to modify the superblock properly by journalling the
change. Change it and get rid of ext4_mark_super_dirty(), as it is not needed
anymore.

Artem: small amendments.
Artem: tested using xfstests for both journalled and non-journalled ext4.

Signed-off-by: Jan Kara <jack <at> suse.cz>
Tested-by: Artem Bityutskiy <artem.bityutskiy <at> linux.intel.com>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy <at> linux.intel.com>
---
 fs/ext4/ext4.h |    6 ------
 fs/ext4/file.c |   14 +++++++++++++-
 2 files changed, 13 insertions(+), 7 deletions(-)

diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index ab2594a..aba3749 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -2226,12 +2226,6 @@ static inline void ext4_unlock_group(struct super_block *sb,
 	spin_unlock(ext4_group_lock_ptr(sb, group));
 }

-static inline void ext4_mark_super_dirty(struct super_block *sb)
-{
-	if (EXT4_SB(sb)->s_journal == NULL)
-		sb->s_dirt =1;
-}
(Continue reading)
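
The fs/ext4/file.c part of the patch is truncated above. As a rough sketch of
what "journalling the change" means for a superblock update in ext4 of this
era (an assumed pattern for illustration, not the actual hunk), the sequence
is: start a handle, get write access to the superblock buffer, modify the
in-memory superblock, then mark it dirty through the journal:

/*
 * Sketch of the journalled superblock update pattern, not the truncated
 * fs/ext4/file.c hunk above.  The function name is hypothetical.
 */
static int ext4_update_super_journalled(struct super_block *sb)
{
	handle_t *handle;
	int err;

	handle = ext4_journal_start_sb(sb, 1);
	if (IS_ERR(handle))
		return PTR_ERR(handle);

	err = ext4_journal_get_write_access(handle, EXT4_SB(sb)->s_sbh);
	if (err)
		goto out;

	/* ... modify fields of EXT4_SB(sb)->s_es here ... */

	err = ext4_handle_dirty_super(handle, sb);
out:
	ext4_journal_stop(handle);
	return err;
}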

