Agustín Ciciliani | 1 Oct 16:04 2004

(unknown)

Hi,

Every time I boot my system, it says that my partitions have different UUIDs.
If anybody knows what I can do about it, please let me know.

Thanks in advance,

Agustín
Oct  1 03:34:41 maria kernel: md: Autodetecting RAID arrays.
Oct  1 03:34:41 maria kernel: md: autorun ...
Oct  1 03:34:41 maria kernel: md: considering hdc13 ...
Oct  1 03:34:41 maria kernel: md:  adding hdc13 ...
Oct  1 03:34:41 maria kernel: md: hdc12 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md: hdc11 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md: hdc9 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md: hdc8 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md: hdc7 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md: hdc6 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md: hdc5 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md: hdc2 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md: hdc1 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md:  adding hda13 ...
Oct  1 03:34:41 maria kernel: md: hda12 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md: hda11 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md: hda9 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md: hda8 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md: hda7 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md: hda6 has different UUID to hdc13
(Continue reading)

Luca Berra | 1 Oct 22:55 2004

Re: raid 0+1 and 1+0 with mdadm

On Thu, Sep 30, 2004 at 10:56:03AM -0700, rich turner wrote:
>When I run "mdadm -As" I get the following output:
>mdadm: /dev/md3 has been started with 3 drives.
>mdadm: /dev/md4 has been started with 3 drives.
>mdadm: /dev/md6 has been started with 2 drives and 1 spare.
>mdadm: /dev/md7 has been started with 2 drives and 1 spare.
>mdadm: no devices found for /dev/md5
>mdadm: no devices found for /dev/md8
>
>If I then immediately run "mdadm -As" again, it starts /dev/md5 and
>/dev/md8.
>
>Why do I have to run it twice, and why does it not start all the devices the
>first time?
>
I believe the DEVICE line is evaluated first, and only then are the ARRAY
lines processed, so on the first pass the md devices that the stacked arrays
are built from do not exist yet.
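
For illustration, a minimal mdadm.conf sketch of how stacked arrays like
these might be described (the device names and globs below are guesses,
not taken from this thread):

  # hypothetical layout: md3/md4 built from partitions, md5 stacked on top;
  # md6, md7 and md8 would follow the same pattern
  DEVICE /dev/hd*[0-9] /dev/md3 /dev/md4 /dev/md6 /dev/md7
  ARRAY /dev/md3 devices=/dev/hda1,/dev/hdb1,/dev/hdc1
  ARRAY /dev/md4 devices=/dev/hda2,/dev/hdb2,/dev/hdc2
  ARRAY /dev/md5 devices=/dev/md3,/dev/md4

On the first "mdadm -As" pass /dev/md3 and /dev/md4 are not active yet when
the DEVICE list is scanned, which would explain why /dev/md5 and /dev/md8
only come up on the second run.
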
L.

-- 
Luca Berra -- bluca <at> comedia.it
        Communication Media & Services S.r.l.
 /"\
 \ /     ASCII RIBBON CAMPAIGN
  X        AGAINST HTML MAIL
 / \
(Continue reading)

Agustín Ciciliani | 4 Oct 14:13 2004

RAID 1 - mdadm - Different UUID when booting

Hi,

Every time I boot my system, it says that my partitions have different UUIDs.
If anybody knows what I can do about it, please let me know.

 Thanks in advance,

 Agustín

Oct  1 03:34:41 maria kernel: md: Autodetecting RAID arrays.
Oct  1 03:34:41 maria kernel: md: autorun ...
Oct  1 03:34:41 maria kernel: md: considering hdc13 ...
Oct  1 03:34:41 maria kernel: md:  adding hdc13 ...
Oct  1 03:34:41 maria kernel: md: hdc12 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md: hdc11 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md: hdc9 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md: hdc8 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md: hdc7 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md: hdc6 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md: hdc5 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md: hdc2 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md: hdc1 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md:  adding hda13 ...
Oct  1 03:34:41 maria kernel: md: hda12 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md: hda11 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md: hda9 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md: hda8 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md: hda7 has different UUID to hdc13
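
One way to see which UUID each of those superblocks actually carries is
mdadm's examine mode; this is only a sketch, with the device names copied
from the log above:

  mdadm --examine /dev/hdc13 | grep UUID   # device names taken from the log
  mdadm --examine /dev/hdc12 | grep UUID

Components that belong to the same array should report the same UUID;
anything carrying a different UUID is skipped during autorun, which is what
the messages above are reporting.
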
(Continue reading)

William Knop | 4 Oct 14:12 2004

libata badness

Hi all,

I'm running a raid5 array atop a few sata drives via a promise tx4 
controller. The kernel is the official fedora lk 2.6.8-1, although I had 
run a few different kernels (never entirely successfully) with this array 
in the past.

In fact, this past weekend, I was getting oopses and panics (on lk 
2.6.8.1, 2.6.9-rc3, 2.6.9-rc3-mm1, and 2.6.9-rc3 w/ Jeff Garzik's recent 
libata patches) all of which happened when rebuilding a spare drive in the 
array. Unfortunately, somehow my root filesystem (ext3) got blown away-- 
it was on a reliable scsi drive (no bad blocks; I checked afterwards), and 
an adaptec aic7xxx host. The ram was good; I ran memtest86 on it. I'm 
assuming this was caused by some major kernel corruption, originating from 
libata.

I have since rebuilt my computer using an AMD Sempron (basically a Duron) 
rather than a P4. Other than that (cpu + m/b), it's the same hardware.

The errors I got over the weekend are similar to the one I just captured 
on my fresh fc2/lk2.6.8-1 install (at the same point; the spare disk had 
begun rebuilding). It's attached below.

Anyway, I haven't been able to find any other reports of this, so I'm at a 
loss about what to do. I hesitate to bring my array up at all now, for 
fear of blowing it away. Any assistance would be greatly appreciated.

Thanks much,
Will

(Continue reading)

Jon Lewis | 4 Oct 15:59 2004

Re: libata badness

On Mon, 4 Oct 2004, William Knop wrote:

> Hi all,
>
> I'm running a raid5 array atop a few sata drives via a promise tx4
> controller. The kernel is the official fedora lk 2.6.8-1, although I had
> run a few different kernels (never entirely successfully) with this array
> in the past.

What kind of sata drives?  It's not quite the same end result, but there
have been several posts on linux-raid about defective Maxtor sata drives
causing system freezes.  If your drives are Maxtor, download their
powermax utility and test your drives.  You may find that you have one or
more marginal drives that appear to work most of the time but that powermax
will determine are bad.  Replacing one like that fixed my problems.

----------------------------------------------------------------------
 Jon Lewis                   |  I route
 Senior Network Engineer     |  therefore you are
 Atlantic Net                |
_________ http://www.lewis.org/~jlewis/pgp for PGP public key_________

William Knop | 4 Oct 17:50 2004

Re: libata badness


> What kind of sata drives?  It's not quite the same end result, but there
> have been several posts on linux-raid about defective Maxtor sata drives
> causing system freezes.  If your drives are Maxtor, download their
> powermax utility and test your drives.  You may find that you have one or
> more marginal drives that appear to work most of the time, but powermax
> will determine are bad.  Replacing one like that fixed my problems.

Ah, well, all of them are Maxtor drives... One 6y250m0 and three 7y250m0
drives. I'm using powermax on them right now. They all passed the quick 
test, and the full test results are forthcoming.

Actually, I was backing up the array (cp from the array - 2 of 3 drives 
running - to a normal drive) when I read your response. Shortly 
thereafter, during the cp (perhaps after copying 100GB-120GB), I got a 
double fault. I've never gotten a double fault before, but I'm guessing 
it's quite a serious error. It totally locked up the machine, and it output
two lines, each with a double-fault message, followed by a register dump.

The saga continues...

Will

Mark Lord | 4 Oct 18:06 2004

Re: libata badness

I have used Maxtor "SATA" drives that require
the O/S to do a "SET FEATURES :: UDMA_MODE" command
on them before they will operate reliably.
This despite the SATA spec stating clearly that
such a command should/will have no effect.

I suppose libata does this already, but just in case it doesn't, it's
something simple to check up on.
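
For what it's worth, a rough way to poke at this from userspace is hdparm;
this is only a sketch (the device name is illustrative, and whether hdparm
can reach a drive behind libata's SCSI layer on these kernels is another
question):

  hdparm -i /dev/hda        # the transfer mode flagged with '*' is the one in use
  hdparm -X udma5 /dev/hda  # SET FEATURES to select UDMA mode 5 (numeric form: 64+5=69)
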
-- 
Mark Lord
(hdparm keeper & the original "Linux IDE Guy")

William Knop wrote:
>
> Ah, well all of them are Maxtor drives... One 6y250m0 and three 7y250m0 
> drives. I'm using powermax on them right now. They all passed the quick 
> test, and the full test results are forthcoming.

Jon Lewis | 4 Oct 18:09 2004

Re: libata badness

On Mon, 4 Oct 2004, William Knop wrote:

> Ah, well all of them are Maxtor drives... One 6y250m0 and three 7y250m0
> drives. I'm using powermax on them right now. They all passed the quick
> test, and the full test results are forthcoming.

I'm pretty sure all the bad ones we had (at least the one I found at my
location) "failed" the quick test, in the sense that after the quick test it
asked me to run the full test, after which it spit out the magic fault code
to give to Maxtor's RMA form.  Another possibility that comes to mind is that
your power supply could be inadequate to run the system and all the drives.

----------------------------------------------------------------------
 Jon Lewis                   |  I route
 Senior Network Engineer     |  therefore you are
 Atlantic Net                |
_________ http://www.lewis.org/~jlewis/pgp for PGP public key_________

William Knop | 4 Oct 18:24 2004

Re: libata badness

Great. I'll give that a shot after the drive checker utility finishes.

However, it seems like the kernel shouldn't be oopsing, panicking, or
double-faulting if the drive is questionable. It apparently blew away my
root fs last time. A peripheral drive failure shouldn't cause such
destruction across the system, no?

On Mon, 4 Oct 2004, Mark Lord wrote:

> I have used Maxtor "SATA" drives that require
> the O/S to do a "SET FEATURES :: UDMA_MODE" command
> on them before they will operate reliably.
> This despite the SATA spec stating clearly that
> such a command should/will have no effect.
>
> I suppose libata does this already, but just in case not..
> Something simple to check up on.
> -- 
> Mark Lord
> (hdparm keeper & the original "Linux IDE Guy")
>
> William Knop wrote:
>> 
>> Ah, well all of them are Maxtor drives... One 6y250m0 and three 7y250m0 
>> drives. I'm using powermax on them right now. They all passed the quick 
>> test, and the full test results are forthcoming.
>
>
(Continue reading)

Jeff Garzik | 4 Oct 18:30 2004

Re: libata badness

William Knop wrote:
> Hi all,
> 
> I'm running a raid5 array atop a few sata drives via a promise tx4 
> controller. The kernel is the official fedora lk 2.6.8-1, although I had 
> run a few different kernels (never entirely successfully) with this 
> array in the past.
> 
> In fact, this past weekend, I was getting oopses and panics (on lk 
> 2.6.8.1, 2.6.9-rc3, 2.6.9-rc3-mm1, and 2.6.9-rc3 w/ Jeff Garzik's recent 
> libata patches) all of which happened when rebuilding a spare drive in 
> the array. Unfortunately, somehow my root filesystem (ext3) got blown 
> away-- it was on a reliable scsi drive (no bad blocks; I checked 
> afterwards), and an adaptec aic7xxx host. The ram was good; I ran 
> memtest86 on it. I'm assuming this was caused by some major kernel 
> corruption, originating from libata.
> 
> I have since rebuilt my computer using an AMD Sempron (basically a 
> Duron) rather than a P4. Other than that (cpu + m/b), it's the same 
> hardware.
> 
> The errors I got over the weekend are similar to the one I just captured 
> on my fresh fc2/lk2.6.8-1 install (at the same point; the spare disk had 
> begun rebuilding). It's attached below.
> 
> Anyway, I haven't been able to find any other reports of this, so I'm at 
> a loss about what to do. I hesitate to bring my array up at all now, for 
> fear of blowing it away. Any assistance would be greatly appriciated.
> 
> Thanks much,
(Continue reading)

