Martin Diaz, Luis | 1 Dec 11:02 2010

RE: replace one disk in isw RAID1 array

Dear Hanson

I can see that your problem is caused by the disk exchange. When you install the new disk and the RAID
card builds the array again, it generates a new fake-RAID identifier, completely different from the old one.

I have had that problem myself, and I solved it by booting the server from a tools/rescue distribution,
like the (at least for me) fantastic SystemRescueCd, with the option "dodmraid" at the boot prompt. Once
the distribution has loaded, you can run all the dmraid commands, such as "dmraid -s", and see the new
fake-RAID identifier. It is important to write this identifier down, because you then have to unpack the
initrd of your server, look for the old fake-RAID identifier and replace it with the new one.

As you can see, your problem is caused by the initrd, because the fake-RAID identifier is written inside
it. That identifier is recorded when you install the server.
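
Roughly, the procedure looks like this; the mount point, the initrd file name and the
"OLDNAME"/"NEWNAME" identifiers below are only examples, you have to use the ones from your own server:

  # boot systemrescuecd with the "dodmraid" option, then activate the sets
  dmraid -ay
  dmraid -s                          # note the new fake-RAID name, e.g. isw_NEWNAME_Raid1

  # mount the root filesystem of the installed server (placeholder device name)
  mkdir -p /mnt/root
  mount /dev/mapper/isw_NEWNAME_Raid1p1 /mnt/root

  # unpack the existing initrd (assuming it is a gzip-compressed cpio archive)
  mkdir /tmp/initrd && cd /tmp/initrd
  zcat /mnt/root/boot/initrd.img | cpio -id

  # replace the old fake-RAID name with the new one wherever it appears
  grep -rl isw_OLDNAME . | xargs sed -i 's/isw_OLDNAME/isw_NEWNAME/g'

  # repack the initrd and put it back in place
  find . | cpio -o -H newc | gzip -9 > /mnt/root/boot/initrd.img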

I have a script that solves this problem on IBM xSeries 205 servers. I am sending you a copy in case it
helps. The script has to be run from SystemRescueCd (or another system-tools distribution).

You may also need to change /etc/fstab, but I'm not sure. In my case I do change it, because I also use
logical volumes.

I hope this helps to solve the problem, and please excuse my bad English.

################################################################################
#                                                                              #
# Script that generates a new initrd for the IBM xSeries 205 servers,         #
# needed every time the RAID is regenerated, as well as during the            #
# installation process                                                        #
#                                                                              #
# Madrid, 13 March 2009                                                        #
# Luis Martin Diaz                                                             #
(Continue reading)

Re: replace one disk in isw RAID1 array

How did you add the new blank disk to the array?  i.e., did you use the
Raid Setup in the BIOS to mark the disk as part of the array, or just
use dmraid commands?  I only have a little bit of experience with this,
but I believe the issue may have to do with the raid metadata being
missing on the new disk.
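
If it helps, one way to check whether the new disk carries any isw metadata at all
is something like this (just a sketch; the exact output depends on your dmraid version):

  dmraid -r     # lists the block devices that carry raid metadata; a freshly
                # swapped-in blank disk will normally not appear here at all
  dmraid -s     # shows the RAID sets and how many member devices each one has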

On 30/11/10 06:54 PM, Aaron Hanson wrote:
> Oops; forgot the most important command, where I try to initiate a rebuild with the new disk:
>
> bash (try 'info') lib > dmraid -dR isw_bdidaifdia_Raid1 /dev/sdb
> DEBUG: _find_set: searching isw_bdidaifdia
> DEBUG: _find_set: not found isw_bdidaifdia
> DEBUG: _find_set: searching isw_bdidaifdia_Raid1
> DEBUG: _find_set: searching isw_bdidaifdia_Raid1
> DEBUG: _find_set: not found isw_bdidaifdia_Raid1
> DEBUG: _find_set: not found isw_bdidaifdia_Raid1
> ERROR: isw: wrong number of devices in RAID set "isw_bdidaifdia_Raid1" [1/2] on /dev/sda
> DEBUG: set status of set "isw_bdidaifdia_Raid1" to 4
> DEBUG: _find_set: searching isw_bdidaifdia_Raid1
> DEBUG: _find_set: searching isw_bdidaifdia_Raid1
> DEBUG: _find_set: found isw_bdidaifdia_Raid1
> DEBUG: _find_set: found isw_bdidaifdia_Raid1
> DEBUG: _find_set: searching isw_bdidaifdia_Raid1
> DEBUG: _find_set: searching isw_bdidaifdia_Raid1
> DEBUG: _find_set: found isw_bdidaifdia_Raid1
> DEBUG: _find_set: found isw_bdidaifdia_Raid1
> DEBUG: _find_set: searching isw_bdidaifdia_Raid1
> DEBUG: _find_set: searching isw_bdidaifdia_Raid1
> DEBUG: _find_set: found isw_bdidaifdia_Raid1
> DEBUG: _find_set: found isw_bdidaifdia_Raid1
(Continue reading)

Aaron Hanson | 1 Dec 18:03 2010

RE: replace one disk in isw RAID1 array


> -----Original Message-----
> From: ataraid-list-bounces <at> redhat.com [mailto:ataraid-list-
> bounces <at> redhat.com] On Behalf Of Ian Stakenvicius, Aerobiology Research
> Sent: Wednesday, December 01, 2010 6:33 AM
> To: ATARAID (eg, Promise Fasttrak, Highpoint 370) related discussions
> Subject: Re: replace one disk in isw RAID1 array
> 
> How did you add the new blank disk to the array?  ie, did you use the Raid
> Setup in the BIOS to mark the disk as part of the array?  Or just use dmraid
> commands? 
[Aaron Hanson] 
I think I covered that. I simply did a physical disk replacement with the power off.  After powering the system
back on, I tried to use 'dmraid -R <new device>' as an attempt to add the new device to the array.

But I'm new to ataraid; I don't really know that I'm using the correct procedure.  I'm looking for someone on
this list who actually knows the typical and correct way to replace a disk in an ataraid array.  I presume the
procedure is simply to physically replace the bad disk with a good one, followed by some 'dmraid' commands,
but I can't seem to find the right 'dmraid' command(s).

Thanks for your response.

-Aaron

Re: replace one disk in isw RAID1 array

OK.  That, as far as I know, is not sufficient.  DMRAID allows the
kernel to set up raid devices based on the metadata that is on the
drives, but that metadata will not exist on the new drive; you have to
add it first (in the BIOS, most likely).  I would recommend rebooting,
entering the BIOS's raid management tool, and adding the new drive
to the array there.  It might let you do it without rebuilding the
array, but it might not.

I'm pretty sure that "dmraid -R" just triggers a rebuild procedure -- in
your case (i.e. with RAID1), it re-synchronizes the two drives.  But AFAIK
it won't set up a blank drive, i.e. it won't create the proper metadata
on an uninitialized drive.
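
In other words, something like this ordering -- the set name is taken from your earlier
output, but the rest (Ctrl-I for the Intel option ROM, /dev/sdb as the new disk) is an
assumption on my part:

  # 1. power off, swap the disk, and on the next boot enter the Intel option ROM
  #    (usually Ctrl-I during POST) to add the new disk to the array there
  # 2. back in Linux, check that the metadata is now present on both disks:
  dmraid -r
  dmraid -s isw_bdidaifdia_Raid1
  # 3. only then kick off the rebuild:
  dmraid -R isw_bdidaifdia_Raid1 /dev/sdb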

On 01/12/10 12:03 PM, Aaron Hanson wrote:
>
>> -----Original Message-----
>> From: ataraid-list-bounces <at> redhat.com [mailto:ataraid-list-
>> bounces <at> redhat.com] On Behalf Of Ian Stakenvicius, Aerobiology Research
>> Sent: Wednesday, December 01, 2010 6:33 AM
>> To: ATARAID (eg, Promise Fasttrak, Highpoint 370) related discussions
>> Subject: Re: replace one disk in isw RAID1 array
>>
>> How did you add the new blank disk to the array?  ie, did you use the Raid
>> Setup in the BIOS to mark the disk as part of the array?  Or just use dmraid
>> commands? 
> [Aaron Hanson] 
> I think I covered that. I simply did a physical disk replacement with power off.  After powering the system
> back on, I tried to use 'dmraid -R <new device>' as an attempt to add the new device to the array.
>
> But I'm new to ataraid; I don't really know that I'm using the correct procedure.  I'm looking for someone on
(Continue reading)

Aaron Hanson | 1 Dec 18:20 2010

RE: replace one disk in isw RAID1 array

Hello Martin -
Comments in-line;

> -----Original Message-----
> From: ataraid-list-bounces <at> redhat.com [mailto:ataraid-list-
> bounces <at> redhat.com] On Behalf Of Martin Diaz, Luis
> Sent: Wednesday, December 01, 2010 2:02 AM
> To: ATARAID (eg, Promise Fasttrak, Highpoint 370) related discussions
> Subject: RE: replace one disk in isw RAID1 array
> 
> Dear Hanson
> 
> I can see that your problem is caused by the disk exchange. When you
> install the new disk and the RAID card builds the array again, it
> generates a new fake-RAID identifier, completely different from the old one.
> 
> I have had that problem myself, and I solved it by booting the server
> from a tools/rescue distribution, like the (at least for me) fantastic
> SystemRescueCd, with the option "dodmraid" at the boot prompt. Once the
> distribution has loaded, you can run all the dmraid commands, such as
> "dmraid -s", and see the new fake-RAID identifier. It is important to
> write this identifier down, because you then have to unpack the initrd
> of your server, look for the old fake-RAID identifier and replace it
> with the new one.
> 
[Aaron Hanson] 
I think I understand the situation you are describing; the root device cannot be found by the kernel during
boot because the initrd refers to the RAID set by name, and that name has changed.  That is not the case for
me. As you can see in my earlier note, the name of my RAID set is 'isw_bdidaifdia_Raid1'; it is missing one
member, but the name has not changed.  It is true that the system is unable to boot after one member has been
removed (this is worrisome too). But I'm already doing as you suggested; I'm net-booting to a rescue linux system to
(Continue reading)

Re: replace one disk in isw RAID1 array

On 01/12/10 12:20 PM, Aaron Hanson wrote:
>
> It is true that the system is unable to boot after one member has been removed (this is worrisome too). 
This might have to do with the MBR not being copied or installed on
both drives (or, if you're using grub as your boot loader, its
installation is still looking for the stage1.5 and stage2 files on the
first drive).  If this is the case, there are ways to get around it
(after your raid1 is back up and running, of course) by installing grub
manually on both disks.
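
With grub legacy that is roughly the following, assuming /dev/sda and /dev/sdb are the two
members and /boot lives on the first partition (adjust to your own layout):

  # run the grub shell and install the boot loader on both disks, mapping each
  # one to (hd0) in turn so that either disk can boot on its own:
  grub
  grub> device (hd0) /dev/sda
  grub> root (hd0,0)
  grub> setup (hd0)
  grub> device (hd0) /dev/sdb
  grub> root (hd0,0)
  grub> setup (hd0)
  grub> quit
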
Luca Berra | 2 Dec 08:44 2010

Re: replace one disk in isw RAID1 array

On Tue, Nov 30, 2010 at 11:48:37PM +0000, Aaron Hanson wrote:
>Hi All -
>
>This seems like it should be a very common procedure.  I've researched this a lot before bothering this
>list, I hope someone can comment. In short:
I have replaced dmraid with mdadm 3.1.x for handling Intel Matrix RAID,
and I find the interface much more understandable and it works better.

L.

-- 
Luca Berra -- bluca <at> comedia.it

Aaron Hanson | 2 Dec 17:15 2010

RE: replace one disk in isw RAID1 array

> I have replaced dmraid with mdadm 3.1.x for handling Intel Matrix RAID, and I
> find the interface much more understandable and it works better.
> 
> L.
> 
> --
> Luca Berra -- bluca <at> comedia.it
[Aaron Hanson] 
Thanks for the tip.  I am using an older version of mdadm but I will upgrade and give it a try.
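
From what I've read so far, the mdadm way of replacing a disk in an isw/IMSM array looks
roughly like this -- I haven't tried it yet, and the md device names are only examples:

  mdadm --assemble --scan           # assembles the IMSM container and the volume inside it
  cat /proc/mdstat                  # e.g. md127 = the container, md126 = the RAID1 volume
  mdadm --add /dev/md127 /dev/sdb   # add the replacement disk to the *container*;
                                    # mdmon then starts rebuilding the volume on its own
  cat /proc/mdstat                  # watch the recovery progress
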
-Aaron

Philippe De Muyter | 8 Dec 15:43 2010

Re: promise tx2650 + ss1600 + dmraid

Hi Mikael,

CCing ataraid-list <at> redhat.com. 

On Wed, Dec 08, 2010 at 02:04:17PM +0100, Mikael Pettersson wrote:
> Philippe De Muyter writes:
>  > Hello Mikael and list,
>  > 
>  > I am currently fighting with a Promise tx2650 raid controller + SuperSwap 1600
>  > SATA-I/II enclosure on a linux 2.6.34.7 (opensuse 11.3) installation.
>  > 
>  > The installation seems to have detected a raid array :
>  > 
>  > 	# cat /proc/partitions
>  > 	major minor  #blocks  name
>  > 
>  > 	   8        0  244198584 sda
>  > 	   8        1          1 sda1
>  > 	   8        5     144585 sda5
>  > 	   8        6    4190208 sda6
>  > 	   8        7  238813184 sda7
>  > 	   8       16  244198584 sdb
>  > 	   8       17          1 sdb1
>  > 	   8       21     144585 sdb5
>  > 	   8       22    4190208 sdb6
>  > 	   8       23  238813184 sdb7
>  > 	 253        0  243164048 dm-0
>  > 	 253        1  243162112 dm-1
>  > 	 253        2     144585 dm-2
>  > 	 253        3    4190208 dm-3
(Continue reading)

build issues in dmraid-1.0.0.rc16-3

Hi -- while packaging this release for Gentoo, I ran into a few issues with the build system:

1.  --enable-shared_lib does nothing

2. the static lib always installs

3. when using --enable-static_lib and a recent device-mapper (from the lvm2 package), compilation fails because it will not link against udev

I have patched my copy to deal with 2 and 3, and can provide these patches if desired.  I got around #3 using pkg-config checks.
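
For reference, the kind of pkg-config check I mean boils down to something like this
(a sketch of the idea, not the actual patch):

  # devmapper.pc is shipped by lvm2; with --static it also lists the private
  # dependencies (libudev, etc.) that a static link against libdevmapper needs
  DEVMAPPER_CFLAGS="$(pkg-config --cflags devmapper)"
  DEVMAPPER_LIBS="$(pkg-config --libs --static devmapper)"
  # ...and pass $DEVMAPPER_LIBS to the link step instead of a hard-coded -ldevmapper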

Regards,
Ian

_______________________________________________
Ataraid-list mailing list
Ataraid-list <at> redhat.com
https://www.redhat.com/mailman/listinfo/ataraid-list
