Neil Brown | 1 Dec 01:06 2010

Re: [PATCH] fix: assemble for external metadata generates segfault if invalid device found

On Tue, 30 Nov 2010 23:49:52 +0000 "Hawrylewicz Czarnowski, Przemyslaw"
<przemyslaw.hawrylewicz.czarnowski <at> intel.com> wrote:

> An attempt to invoke super_by_fd() on a device that has
> metadata_version="none" always matches super0 (as test_version is "").
> In Assemble() this results in a segfault when load_container is invoked
> (it is NULL for super0).
> With this change, load_container is only called if it is a valid pointer.

applied, thanks.

NeilBrown

> 
> Signed-off-by: Przemyslaw Czarnowski <przemyslaw.hawrylewicz.czarnowski <at> intel.com>
> ---
>  Assemble.c |    2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
> 
> diff --git a/Assemble.c b/Assemble.c
> index 5e71a43..5e4296c 100644
> --- a/Assemble.c
> +++ b/Assemble.c
> @@ -332,7 +332,7 @@ int Assemble(struct supertype *st, char *mddev,
>  					fprintf(stderr, Name ": not a recognisable container: %s\n",
>  						devname);
>  				tmpdev->used = 2;
> -			} else if (tst->ss->load_container(tst, dfd, NULL)) {
> +			} else if (!tst->ss->load_container || tst->ss->load_container(tst, dfd, NULL)) {
>  				if (report_missmatch)
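The pattern behind the fix can be sketched in simplified C; the struct and names below are illustrative stand-ins, not mdadm's real struct superswitch. The point is that an ops table may leave optional operations unset (load_container is NULL for super0-style handlers), so the pointer has to be tested before calling through it.

/*
 * Simplified sketch of the guarded-call pattern; metadata_ops and
 * try_load_container() are invented names for this example only.
 */
#include <stddef.h>

struct metadata_ops {
	int (*load_container)(void *st, int fd, char *devname);
};

static int try_load_container(struct metadata_ops *ops, void *st, int fd)
{
	if (!ops->load_container)
		return -1;	/* not container-capable: report, don't crash */
	return ops->load_container(st, fd, NULL);
}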

Mingming Cao | 1 Dec 01:14 2010

Re: [PATCH v6 0/4] ext4: Coordinate data-only flush requests sent by fsync

On Mon, 2010-11-29 at 18:48 -0500, Ric Wheeler wrote:
> On 11/29/2010 05:05 PM, Darrick J. Wong wrote:
> > On certain types of hardware, issuing a write cache flush takes a considerable
> > amount of time.  Typically, these are simple storage systems with write cache
> > enabled and no battery to save that cache after a power failure.  When we
> > encounter a system with many I/O threads that write data and then call fsync
> > after more transactions accumulate, ext4_sync_file performs a data-only flush,
> > the performance of which is suboptimal because each of those threads issues its
> > own flush command to the drive instead of trying to coordinate the flush,
> > thereby wasting execution time.
> >
> > Instead of each fsync call initiating its own flush, there's now a flag to
> > indicate if (0) no flushes are ongoing, (1) we're delaying a short time to
> > collect other fsync threads, or (2) we're actually in-progress on a flush.
> >
> > So, if someone calls ext4_sync_file and no flushes are in progress, the flag
> > shifts from 0->1 and the thread delays for a short time to see if there are any
> > other threads that are close behind in ext4_sync_file.  After that wait, the
> > state transitions to 2 and the flush is issued.  Once that's done, the state
> > goes back to 0 and a completion is signalled.
> >
> > Those close-behind threads see the flag is already 1, and go to sleep until the
> > completion is signalled.  Instead of issuing a flush themselves, they simply
> > wait for that first thread to do it for them.  If they see that the flag is 2,
> > they wait for the current flush to finish, and start over.
> >
> > However, there are a couple of exceptions to this rule.  First, there exist
> > high-end storage arrays with battery-backed write caches for which flush
> > commands take very little time (<  2ms); on these systems, performing the
> > coordination actually lowers performance.  Given the earlier patch to the block
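The coordination scheme described in the quoted cover letter can be sketched in user-space C with pthreads. This is only an illustration of the 0/1/2 flag plus completion idea, not the actual ext4 patch; the names flush_state, issue_cache_flush() and COLLECT_DELAY_US are invented for the example.

/*
 * Illustrative sketch of the flush coordination described above.
 */
#include <pthread.h>
#include <unistd.h>

enum { FLUSH_IDLE = 0, FLUSH_COLLECTING = 1, FLUSH_RUNNING = 2 };

static pthread_mutex_t flush_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  flush_done = PTHREAD_COND_INITIALIZER;
static int flush_state = FLUSH_IDLE;
static unsigned long flush_generation;	/* counts completed flushes */

#define COLLECT_DELAY_US 500		/* arbitrary short collection window */

static void issue_cache_flush(void)
{
	/* stand-in for sending a real cache-flush command to the drive */
}

static void coordinated_flush(void)
{
	pthread_mutex_lock(&flush_lock);
again:
	if (flush_state == FLUSH_IDLE) {
		/* Leader: wait briefly for close-behind threads, then
		 * issue one flush on behalf of everyone who joined. */
		flush_state = FLUSH_COLLECTING;
		pthread_mutex_unlock(&flush_lock);
		usleep(COLLECT_DELAY_US);

		pthread_mutex_lock(&flush_lock);
		flush_state = FLUSH_RUNNING;
		pthread_mutex_unlock(&flush_lock);

		issue_cache_flush();

		pthread_mutex_lock(&flush_lock);
		flush_state = FLUSH_IDLE;
		flush_generation++;
		pthread_cond_broadcast(&flush_done);
	} else if (flush_state == FLUSH_COLLECTING) {
		/* A leader is collecting; its flush will cover our data. */
		unsigned long gen = flush_generation;
		while (flush_generation == gen)
			pthread_cond_wait(&flush_done, &flush_lock);
	} else {
		/* A flush is already on the wire; it may not cover data we
		 * just wrote, so wait for it to finish and start over. */
		unsigned long gen = flush_generation;
		while (flush_generation == gen)
			pthread_cond_wait(&flush_done, &flush_lock);
		goto again;
	}
	pthread_mutex_unlock(&flush_lock);
}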

hansbkk | 1 Dec 05:45 2010

Re: [linux-lvm] Q: LVM over RAID, or plain disks? A:"Yes" = best of both worlds?

On Tue, Nov 30, 2010 at 11:56 PM, Phil Turmel <philip <at> turmel.org> wrote:

> (Actually, rsync and tar are both hardlink-aware, at least the versions I use.)

My backup filesystems contain so many hardlinks (millions, constantly
growing) that file-level tools choke - this really must be done at the
block device level - see my previous post for more detail.

It's also now clear to me that rsync is the tool to use for this for
all the other LVs without such problematic filesystems, as I know the
tool and trust its error-checking routines.

>> So adapting your suggestion to fit (my perception of) my needs:
>>
>>   - create an LV snapshot
>>   - mount a plain partition on a physical hard disk (preferably on a
>> separate controller?)
>>   - dd the data from the LV snapshot over to the partition
>>   - delete the snapshot
>
> Yep, this is basically what I recommended.

>> So I guess my question becomes:
>>
>> What is the best tool to block-level clone an LV snapshot to a regular
>> disk partition?
>>
>>   - "best" = as close to 100% reliably as possible, speed isn't nearly
>> as important
>

Czarnowska, Anna | 1 Dec 11:37 2010

[PATCH] Monitor: pass statelist reference when adding new arrays

From af217d38a81223408fc53ef485d7c5bd43b9d841 Mon Sep 17 00:00:00 2001
From: Anna Czarnowska <anna.czarnowska <at> intel.com>
Date: Tue, 30 Nov 2010 14:44:45 +0100
Subject: [PATCH] Monitor: pass statelist reference when adding new arrays
Cc: linux-raid <at> vger.kernel.org, Williams, Dan J <dan.j.williams <at> intel.com>, Ciechanowski, Ed <ed.ciechanowski <at> intel.com>

Otherwise it will not get updated.

Signed-off-by: Anna Czarnowska <anna.czarnowska <at> intel.com>
---
 Monitor.c |   10 +++++-----
 1 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/Monitor.c b/Monitor.c
index d5514e9..e7f6d03 100644
--- a/Monitor.c
+++ b/Monitor.c
@@ -70,7 +70,7 @@ static void alert(char *event, char *dev, char *disc, struct alert_info *info);
 static int check_array(struct state *st, struct mdstat_ent *mdstat,
 		       int test, struct alert_info *info,
 		       int increments);
-static int add_new_arrays(struct mdstat_ent *mdstat, struct state *statelist,
+static int add_new_arrays(struct mdstat_ent *mdstat, struct state **statelist,
 			  int test, struct alert_info *info);
 static void try_spare_migration(struct state *statelist, struct alert_info *info);
 static void link_containers_with_subarrays(struct state *list);
@@ -223,7 +223,7 @@ int Monitor(struct mddev_dev *devlist,
 		
 		/* now check if there are any new devices found in mdstat */
 		if (scan)
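The reasoning behind the one-line change can be shown with a minimal, self-contained C sketch (not mdadm code): when a callee prepends to a linked list, a plain struct state * parameter is only a copy of the caller's head pointer, so the new head is lost; passing struct state ** lets the callee update the caller's variable, which is exactly what the patch does for add_new_arrays().

#include <stdlib.h>

struct state {
	int devnum;
	struct state *next;
};

/* Broken: "head" is a local copy; the caller never sees the new node. */
static void add_broken(struct state *head, int devnum)
{
	struct state *st = calloc(1, sizeof(*st));
	st->devnum = devnum;
	st->next = head;
	head = st;		/* lost as soon as we return */
}

/* Fixed, mirroring the patch: update the caller's head through a
 * pointer to it. */
static void add_fixed(struct state **head, int devnum)
{
	struct state *st = calloc(1, sizeof(*st));
	st->devnum = devnum;
	st->next = *head;
	*head = st;
}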

Neil Brown | 1 Dec 12:25 2010

Re: [PATCH] Monitor: pass statelist reference when adding new arrays

On Wed, 1 Dec 2010 10:37:22 +0000 "Czarnowska, Anna"
<anna.czarnowska <at> intel.com> wrote:

> From af217d38a81223408fc53ef485d7c5bd43b9d841 Mon Sep 17 00:00:00 2001
> From: Anna Czarnowska <anna.czarnowska <at> intel.com>
> Date: Tue, 30 Nov 2010 14:44:45 +0100
> Subject: [PATCH] Monitor: pass statelist reference when adding new arrays
> Cc: linux-raid <at> vger.kernel.org, Williams, Dan J <dan.j.williams <at> intel.com>, Ciechanowski, Ed <ed.ciechanowski <at> intel.com>
> 
> Otherwise it will not get updated.

Yes, of course.

Thanks a lot!

Applied and pushed out.

NeilBrown

> 
> Signed-off-by: Anna Czarnowska <anna.czarnowska <at> intel.com>
> ---
>  Monitor.c |   10 +++++-----
>  1 files changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/Monitor.c b/Monitor.c
> index d5514e9..e7f6d03 100644
> --- a/Monitor.c
> +++ b/Monitor.c
> @@ -70,7 +70,7 @@ static void alert(char *event, char *dev, char *disc, struct alert_info *info);

Phil Turmel | 1 Dec 13:50 2010

Re: [linux-lvm] Q: LVM over RAID, or plain disks? A:"Yes" = best of both worlds?

On 11/30/2010 11:45 PM, hansbkk <at> gmail.com wrote:
> On Tue, Nov 30, 2010 at 11:56 PM, Phil Turmel <philip <at> turmel.org> wrote:
> 
>> (Actually, rsync and tar are both hardlink-aware, at least the versions I use.)
> 
> My backup filesystems contain so many hardlinks (millions, constantly
> growing) that file-level tools choke - this really must be done at the
> block device level - see my previous post for more detail.

Ah -- I did miss that detail.

> It's also now clear to me that rsync is the tool to use for this for
> all the other LVs without such problematic filesystems, as I know the
> tool and trust its error-checking routines.

Indeed.  I push my own critical systems offsite with rsync+ssh.

[snip /]

>> I would use dd.
> 
> OK, that's clear, thanks.
> 
> 
>> You want your dismountable disks to be accessible stand-alone, but I don't see why that would preclude
>> setting them up so each is a unique LVM VG.
> 
> It doesn't preclude it, but it's a layer of complexity during the data
> recovery process I'm trying to avoid.
> 

hansbkk | 1 Dec 20:47 2010

Re: [linux-lvm] Q: LVM over RAID, or plain disks? A:"Yes" = best of both worlds?

On Wed, Dec 1, 2010 at 7:50 PM, Phil Turmel <philip <at> turmel.org> wrote:
>> Does dd already do some sort of "verify after copy"? I will likely
>> investigate the available COTS partition cloning tools as well.
>
> Not natively, but it's fairly easy to pipe a dd reader through md5sum to a dd writer, then follow up with a
> dd read + md5sum of the copied partition (taking care to read precisely the same number of sectors).
>
> The various flavors of ddrescue might have something like this..  didn't check.

Sorry it's a bit OT, but for the sake of future googlers thought I'd
point to this tool I found:

http://dc3dd.sourceforge.net/
http://www.forensicswiki.org/wiki/Dc3dd

Neil Brown | 1 Dec 22:57 2010

Re: md as module cannot be reloaded.

On Wed, 1 Dec 2010 14:23:14 +0000 "Hawrylewicz Czarnowski, Przemyslaw"
<przemyslaw.hawrylewicz.czarnowski <at> intel.com> wrote:

> Hi,
> 
> I have found an annoying problem with the md module. If it is compiled as a loadable module, it is
> impossible to reload the module once it has been unloaded.
> 
> Eg.
> # mdadm -Ss
> # modprobe -r raid1 && modprobe -r md-mod
> # cat /proc/modules | grep md_mod || echo Unloaded
> Unloaded
> # cat /proc/mdstat
> cat /proc/mdstat: Invalid arguments

I cannot reproduce this.  At this point I get 

cat: /proc/mdstat: No such file or directory

> # modprobe md-mod
> # cat /proc/mdstat
> cat /proc/mdstat: Invalid argument
> # cat /proc/modules | grep md_mod || echo Unloaded
> md_mod 94178 0 - Live 0xf857a000
> 
> Some functionalities seem to work, but nothing is able to read /proc/mdstat.
> 
> I have tried a few kernels from 2.6.27 to 2.6.37-rc3, on openSUSE 11.x and RHEL6; it doesn't matter, the
> result is still the same. What can be wrong?


RE: md as module cannot be reloaded.

> -----Original Message-----
> From: linux-raid-owner <at> vger.kernel.org [mailto:linux-raid-
> owner <at> vger.kernel.org] On Behalf Of Neil Brown
> Sent: Wednesday, December 01, 2010 10:57 PM
> To: Hawrylewicz Czarnowski, Przemyslaw
> Cc: linux-raid <at> vger.kernel.org; Labun, Marcin; Czarnowska, Anna; Neubauer,
> Wojciech; Williams, Dan J; Ciechanowski, Ed
> Subject: Re: md as module cannot be reloaded.
> 
> On Wed, 1 Dec 2010 14:23:14 +0000 "Hawrylewicz Czarnowski, Przemyslaw"
> <przemyslaw.hawrylewicz.czarnowski <at> intel.com> wrote:
> 
> > Hi,
> >
> > I have found an annoying problem with the md module. If it is compiled as a loadable module, it is
> > impossible to reload the module once it has been unloaded.
> >
> > Eg.
> > # mdadm -Ss
> > # modprobe -r raid1 && modprobe -r md-mod
> > # cat /proc/modules | grep md_mod || echo Unloaded
> > Unloaded
> > # cat /proc/mdstat
> > cat /proc/mdstat: Invalid arguments
> 
> I cannot reproduce this.  That this point I get
> 
> cat: /proc/mdstat: No such file or directory
> 
> 

Adam Kwolek | 2 Dec 09:18 2010

[PATCH 00/10] Pre-migration patch series

This is a bunch of patches that I've pulled from my OLCE/Migration tree, and I believe
they can be applied before we apply the main feature (I'm currently doing some rework after your feedback).

This series addresses some behaviours of mdadm that I've found so far.
Mainly these are bugs present in the code, or behaviour I've observed while using takeover (e.g. the geo map fix).
Some of them are not visible at the moment but show up when we reshape big arrays (e.g. the wait function fixes), etc.

---

Adam Kwolek (10):
      FIX: wait_backup() sometimes hungs
      FIX: Honor !reshape state on wait_reshape() entry
      FIX: sync_completed_fd handler has to be closed
      FIX: Do not use layout for raid4 and raid0 while geo map computing
      FIX: open backup file for reshape as function
      Add spares to raid0 array using takeover
      Add support to skip slot configuration
      FIX: Add error code for raid_disks set
      FIX: Problem with removing array after takeover
      FIX: Cannot exit monitor after takeover

 Grow.c      |  137 +++++++++++++++++++++++++++++++++++++----------------------
 Manage.c    |  120 +++++++++++++++++++++++++++++++++++++++++++++++++++-
 managemon.c |    1 
 mdadm.h     |    8 +++
 monitor.c   |   16 +++++++
 msg.c       |    8 +++
 restripe.c  |    5 ++
 sysfs.c     |    3 +
 8 files changed, 242 insertions(+), 56 deletions(-)

