mail support | 15 Nov 17:32 2015

Account Closure Notice


WEBMAIL ACCOUNT SERVICES

Account Closure Notice

Hi ataraid-list <at> redhat.com
 

199MB
 
This is to bring to your notice that your account is using 199 MB of mail storage; an attempt to receive a message 335 KB in size would bring it too close to its maximum storage quota.
 
Add more quota below. Otherwise new messages will be delayed or rejected.

Click to add more Quota
 
NOTE: Email will be disabled by default for failure to add more subscription data.
 
Best Regards 
Administrator. 
©2015 Administrator. All Rights Reserved.
_______________________________________________
Ataraid-list mailing list
Ataraid-list <at> redhat.com
https://www.redhat.com/mailman/listinfo/ataraid-list
Robert Osowiecki | 8 Dec 13:51 2015

dmraid: unable to remove ddf1 metadata with CRC error

Hello,

I hope this is a good place to report a problem with the dmraid utility; perhaps at least Mr Heinz Mauelshagen reads this mailing list?

I had a bunch of 1 TB disks with the following problem:

ddf1: physical drives with CRC 7FFEB6, expected FFFFFFFF on /dev/sdf
ERROR: ddf1: Cannot find physical drive description on /dev/sdf!
ERROR: ddf1: setting up RAID device /dev/sdf

From my experience, and from googling similar problems that break CentOS/RedHat installations, I suspected that removing the metadata with dmraid -E might help, but that failed too:

ERROR: ddf1: seeking device "/dev/sdc" to 512104901378048
ERROR: writing metadata to /dev/sdc, offset 1000204885504 sectors, size 0 bytes returned 0
ERROR: erasing ondisk metadata on /dev/sdc

My coworker noticed that dmraid tries to seek to position 512 * 1000204885504, which might be a bug caused by treating a byte offset as a number of sectors.

Anyway, I manually zeroed the offending metadata with dd, but I used dmraid's metadata dump feature to save it first, in case anybody wishes to see it.
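For what it's worth, the arithmetic behind the suspected bug checks out. A small sketch (the device size is the fdisk-style byte size of a typical "1 TB" disk, assumed here for illustration):

```shell
# DDF anchors its metadata in the last 512-byte block of the disk, so the
# correct byte offset is (device size - 512). If that byte offset is then
# passed where a sector count is expected, it is multiplied by 512 again.
dev_size=1000204886016                  # bytes, typical "1 TB" disk
byte_off=$((dev_size - 512))            # correct byte offset of the last sector
buggy_seek=$((byte_off * 512))          # byte offset misread as a sector count
echo "byte_off=$byte_off buggy_seek=$buggy_seek"
```

These reproduce the 1000204885504 and 512104901378048 values in the error output above, which supports the byte-offset-treated-as-sectors explanation.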

Regards,
Robert.

Jelle de Jong | 28 May 10:40 2014

using dmraid to recover data from an isw_raid_member on recovered disk images

Hello everybody,

I think this is the right mailing list for dmraid?

I made disk images with dd from two hard drives and made copies for testing.

losetup /dev/loop0 sdd1Z-copy
losetup /dev/loop1 sddSN-copy

blkid /dev/loop*
/dev/loop0: TYPE="isw_raid_member"
/dev/loop1: TYPE="isw_raid_member"

But how do I use dmraid to assemble the array? dmraid -ay doesn't find
the devices.

http://paste.debian.net/hidden/b222c2e8/

# dmraid --version
dmraid version:		1.0.0.rc16 (2009.09.16) shared
dmraid library version:	1.0.0.rc16 (2009.09.16)
device-mapper version:	4.27.0

Kind regards,

Jelle de Jong
Bad Bod | 22 Apr 21:05 2014

Via RAID

Hi,
  Any update on this?

Via 8237 RAID checksum error.

I have moved on, so I can no longer confirm any fix.

Now that I have moved to SSDs, I would like to know whether I can run RAID 0 on the AMD SB950 southbridge.



Regards
David
HERVE Stephane (AREVA) | 12 Jun 19:13 2013

Problem with dmraid volumes at boot time

Hi all,
on Saturday we had the annual reboot of my NFS server, which runs RedHat 5.2 (kernel 2.6.18-92).
Since then I have been in big trouble because the server does not recognize its dmraid volumes at boot time.
I know that my disks are OK (I ran diagnostics from my LSIlogic controller) and that my data is there
(when I boot from the install CD, I can see all my data, even though I still get some errors).
When I look into the rc.sysinit file, I can see that the dm_resolve_name function does not return
anything. In fact it is the command
dmsetup table
in this function which returns nothing. I may be on the wrong track, but I hope
you can explain it to me.
Here is the output of some dmraid commands once the boot finally finishes:
 
# dmraid -r
/dev/sda: ddf1, ".ddf1_disks", GROUP, ok, 583983104 sectors, data @ 0
/dev/sdb: ddf1, ".ddf1_disks", GROUP, ok, 583983104 sectors, data @ 0
/dev/sdc: ddf1, ".ddf1_disks", GROUP, ok, 583983104 sectors, data @ 0
/dev/sdd: ddf1, ".ddf1_disks", GROUP, ok, 585806427 sectors, data @ 0
/dev/sde: ddf1, ".ddf1_disks", GROUP, ok, 585806427 sectors, data @ 0
/dev/sdf: ddf1, ".ddf1_disks", GROUP, ok, 585806427 sectors, data @ 0
# dmraid -s
*** Group superset .ddf1_disks
--> Subset
name   : ddf1_4c53494c4f4749431000005010003110000047bef3103289
size   : 1751949312
stride : 128
type   : stripe
status : ok
subsets: 0
devs   : 3
spares : 0
--> Subset
name   : ddf1_4c53494c4f4749431000005010003110000047d1d930979b
size   : 1751949312
stride : 128
type   : stripe
status : ok
subsets: 0
devs   : 3
spares : 0
# dmraid -tay
ddf1_4c53494c4f4749431000005010003110000047bef3103289: 0 1751949312 striped 3 128 /dev/sda 0 /dev/sdb 0 /dev/sdc 0
ddf1_4c53494c4f4749431000005010003110000047d1d930979b: 0 1751949312 striped 3 128 /dev/sdd 0 /dev/sde 0 /dev/sdf 0
# fdisk -l /dev/sda
 
Disk /dev/sda: 300.0 GB, 300000000000 bytes
255 heads, 63 sectors/track, 36472 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1      109054   875974655+  8e  Linux LVM
# fdisk -l /dev/sdd
 
Disk /dev/sdd: 300.0 GB, 300000000000 bytes
255 heads, 63 sectors/track, 36472 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1      109054   875974655+  8e  Linux LVM
And the errors I get in /var/log/messages:
 
kernel:  sda: p1 exceeds device capacity
........
kernel:  sdd: p1 exceeds device capacity
........
kernel: attempt to access beyond end of device
 
My system sees /dev/sda1 and /dev/sdd1 as partitions of a single 300 GB disk, not as members
of a dmraid volume spanning 3 disks.
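The numbers above are consistent with that reading. A quick check, using only sizes copied from the fdisk and dmraid -s output:

```shell
# The partition on each leading disk describes the striped set, not one disk:
part_bytes=$((875974655 * 1024))        # /dev/sda1 size from fdisk (1K blocks)
disk_bytes=300000000000                 # a single disk, from fdisk
set_bytes=$((1751949312 * 512))         # the 3-disk stripe, from dmraid -s
echo "partition=$part_bytes disk=$disk_bytes stripe=$set_bytes"
```

The partition (~897 GB) is larger than any single 300 GB disk, hence the "p1 exceeds device capacity" messages, but it fits within the 3-disk stripe; the partition table should therefore be read from the activated dmraid device, not from the raw disks.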
How can I make my system recognize these volumes? And why doesn't it, while from the install CD
everything seems almost OK?
Thanks in advance if you can help me.
 
Regards.
 

S.HERVE

AREVA TA - DP/SI/EMI

stephane.herve <at> areva.com

+33 (0)442602647

+33 (0)777390285

 
Dead Gardens | 25 Apr 06:52 2013

Rebuilding a Raid 1

I'm trying, so far unsuccessfully, to rebuild a RAID 1 (dmraid + isw). I replaced the failed disk with a new one and the BIOS added it to the RAID automatically. I'm running kernel 2.6.18-194.17.4.el5.


# dmraid -r


/dev/sda: isw, "isw_babcjifefe", GROUP, ok, 1953525165 sectors, data @ 0
/dev/sdb: isw, "isw_babcjifefe", GROUP, ok, 1953525165 sectors, data @ 0

# dmraid -s
*** Group superset isw_babcjifefe
--> Subset
name   : isw_babcjifefe_Raid0
size   : 1953519616
stride : 128
type   : mirror
status : nosync
subsets: 0
devs   : 2
spares : 0

When I try to start the raid I receive these errors:
# dmraid -f isw -S -M /dev/sdb
ERROR: isw: SPARE disk must use all space on the disk

# dmraid -tay
isw_babcjifefe_Raid0: 0 1953519616 mirror core 3 131072 sync block_on_error 2 /dev/sda 0 /dev/sdb 0

# dmraid -ay
RAID set "isw_babcjifefe_Raid0" was not activated
ERROR: device "isw_babcjifefe_Raid0" could not be found

# dmraid -f isw -S -M /dev/sdb
ERROR: isw: SPARE disk must use all space on the disk

# dmraid -R isw_babcjifefe_Raid0 /dev/sdb
ERROR: disk /dev/sdb cannot be used to rebuilding

# dmesg
device-mapper: table: 253:13: mirror: Device lookup failure
device-mapper: ioctl: error adding target to table
device-mapper: ioctl: device doesn't appear to be in the dev hash table.
device-mapper: table: 253:13: mirror: Device lookup failure
device-mapper: ioctl: error adding target to table
device-mapper: ioctl: device doesn't appear to be in the dev hash table.
device-mapper: table: 253:13: mirror: Device lookup failure
device-mapper: ioctl: error adding target to table
device-mapper: ioctl: device doesn't appear to be in the dev hash table.
device-mapper: ioctl: device doesn't appear to be in the dev hash table.
device-mapper: ioctl: device doesn't appear to be in the dev hash table.

Disks:
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

LVM:
PV /dev/sda5   VG storage   lvm2 [914.64 GB / 28.64 GB free]
Total: 1 [914.64 GB] / in use: 1 [914.64 GB] / in no VG: 0 [0 ]
Reading all physical volumes.  This may take a while...
Found volume group "storage" using metadata type lvm2
ACTIVE '/dev/storage/home' [68.00 GB] inherit
ACTIVE '/dev/storage/home2' [68.00 GB] inherit
ACTIVE '/dev/storage/home3' [68.00 GB] inherit
ACTIVE '/dev/storage/home4' [68.00 GB] inherit
ACTIVE '/dev/storage/home5' [68.00 GB] inherit
ACTIVE '/dev/storage/var' [15.00 GB] inherit
ACTIVE '/dev/storage/mysql' [20.00 GB] inherit
ACTIVE '/dev/storage/pgsql' [7.00 GB] inherit
ACTIVE '/dev/storage/exim' [12.00 GB] inherit
ACTIVE '/dev/storage/apache' [25.00 GB] inherit
ACTIVE '/dev/storage/tmp' [2.00 GB] inherit
ACTIVE '/dev/storage/backup' [450.00 GB] inherit
ACTIVE '/dev/storage/log' [15.00 GB] inherit

Any idea of what is wrong?
Thanks!
Phillip Susi | 12 Nov 20:16 2012

PDC arrays > 2 TB and non 512 byte dm sector size


It appears that PDC arrays > 2 TB pretend they have > 512 byte sector
sizes.  I'm trying to figure out how to patch dmraid to support this
but I'm not sure how to tell device-mapper to use a different sector
size.  Is this possible?
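For context, here is where the 2 TB boundary comes from, assuming (as with several other BIOS RAID formats) that the PDC metadata stores sector counts in 32-bit fields:

```shell
# With 32-bit sector numbers, at most 2^32 sectors are addressable.
# Pretending to a larger sector size raises the byte ceiling proportionally.
max_sectors=$((1 << 32))
echo "512-byte sectors:  $((max_sectors * 512)) bytes"    # 2 TiB
echo "4096-byte sectors: $((max_sectors * 4096)) bytes"   # 16 TiB
```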

Petr Uzel | 2 Nov 11:25 2012

Question about SCSI serial number retrieval algorithm

Hi,

I have a question about algorithm used in dmraid to retrieve the
serial number from the scsi device:

lib/device/scsi.c:

 77 /*
 78  * Retrieve SCSI serial number.
 79  */
 80 #define MAX_RESPONSE_LEN        255
 81 int
 82 get_scsi_serial(struct lib_context *lc, int fd, struct dev_info *di,
 83                 enum ioctl_type type)
 84 {
 85         int ret = 0;
 86         size_t actual_len;
 87         unsigned char *response;
 88         /*
 89          * Define ioctl function and offset into response buffer of serial
 90          * string length field (serial string follows length field immediately)
 91          */
 92         struct {
 93                 int (*ioctl_func) (int, unsigned char *, size_t);
 94                 unsigned int start;
 95         } param[] = {
 96                 { sg_inquiry, 3},
 97                 { old_inquiry, 11},
 98         }, *p = (SG == type) ? param : param + 1;
 99
100         if (!(response = dbg_malloc(MAX_RESPONSE_LEN)))
101                 return 0;
102
103         actual_len = p->start + 1;
104         if ((ret = (p->ioctl_func(fd, response, actual_len)))) {
105                 size_t serial_len = (size_t) response[p->start];
106
107                 if (serial_len > actual_len) {
108                         actual_len += serial_len;
109                         ret = p->ioctl_func(fd, response, actual_len);
110                 }
111
112                 ret = ret &&
113                      (di->serial = dbg_strdup(remove_white_space (lc, (char *) &response[p->start + 1], serial_len)));
114         }

If type == SG, this function uses two SG_IO ioctls() to retrieve the serial number.
First with the response buffer length set to 4 (line 104), which is just enough to get the serial
number length; that length is then used to size the buffer for the second ioctl() (line 109).

Why is this? Why not use a sufficiently long buffer (MAX_RESPONSE_LEN) for the first ioctl() right away?

I'm asking because I've encountered a device which requires the buffer to be long enough
to store the serial number; otherwise the SCSI inquiry command times out [*]. When this
happens, not only does it get stuck for some time, but the response buffer from the first
ioctl also contains all zeros, so serial_len on line 105 is set to 0, the condition on line
107 is false, and the second ioctl() is never executed...
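For illustration, the length-then-data sequence can be simulated against a file instead of a device (the response bytes below are made up; a real VPD page 0x80 response comes from the hardware):

```shell
# Fake "unit serial number" response: byte 3 holds the serial length (8),
# and the serial string follows immediately, as get_scsi_serial() expects.
printf '\000\200\000\010SER12345' > /tmp/vpd80.bin
len=$(od -An -tu1 -j3 -N1 /tmp/vpd80.bin | tr -d ' ')               # short read: length field only
serial=$(dd if=/tmp/vpd80.bin bs=1 skip=4 count="$len" 2>/dev/null) # second read: the serial itself
echo "len=$len serial=$serial"
```

A device that only answers correctly when the buffer already covers the whole serial breaks exactly this first short read, as described above.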

[*] I know this is likely a bug outside dmraid.

I'd really appreciate it if someone could shed some light on why it is done this way.

Thanks in advance,

Petr

--
Petr Uzel
IRC: ptr_uzl  <at>  freenode
Mark-Willem Jansen | 19 Jul 16:30 2012

RE: Picking up development of dmraid

Hi Bryn,

Thanks for the comments. The complete picture is slowly evolving.

> > dmraid. So it would be a good idea to remove the partitioning
> > support from dmraid. This will mean the package maintainer will
> > need to make dmraid dependent on kpartx for it to work.
>
> It already does in most (all?) distributions that ship it.

Debian is probably lagging behind here, as dmraid is unmaintained there AFAIK.

> > One remark on this: I found that on Debian to make kpartx work
> > with dmraid during boot, one needs to make some changes to the
> > multipath-tools packages.
>
> What were the changes? The kpartx command is part of multipath-tools
> and although it's common to have it in a separate sub-package (all
> current Red Hat and Fedora distros do this) they are part of the same
> project upstream.

On Debian there is a package called multipath-tools-boot, which adds multipath, kpartx, and dmsetup to the initramfs. But I did not like what multipath was doing to the /dev/mapper directory, so I mimicked Ubuntu and made a separate kpartx-boot package. Come to think of it, maybe it worked out of the box on Debian, apart from some warnings during boot.

> > On a side note: Why does mdadm support MBR and GPT?
>
> Not sure what you're asking here? The kernel MD driver creates
> partitionable devices so you can use any of the label formats that are
> enabled in the kernel you're running (although really, MBR and GPT are
> the only ones that make sense for most systems today).

I do not know the finer details of mdadm yet, but I saw super-mbr.c and super-gpt.c and drew the conclusion, given how dmraid handles MBR, that this is code to parse MBR and GPT partition tables.

> I think adding new format handlers to MD is a much better idea; the
> dominant formats backed by major OEMs are already using it so if
> there's interest in the less commonly used formats I think they would
> see much better maintenance and continued development in an active
> project like mdadm than they would in a revived dmraid.

So it is time that someone (probably me) starts adding the Promise formats used by the AMD chipsets.

> > Just one last question I never really got an answer to. Can one
> > use mdadm on a dual boot system(MS and Linux) were the RAID
> > partitions are shared? In other words will mdadm leave the metadata
> > on the disks unchanged or in a state the the MS drivers can still
> > recognize the RAID.
>
> Assuming that MD supports the format handler you need: yes.

That is nice to hear.

> I think the time would be better spent learning or contributing to MD
> and mdadm development and adding support for other format handlers
> that have users wanting native Linux support.

Then it is time for me to start reading into mdadm.

Kind regards,

Mark-Willem
Mark-Willem Jansen | 18 Jul 10:20 2012

Picking up development of dmraid

Dear dmraid developers,

Some time ago on this mailing list it was said that dmraid is in maintenance mode and no longer being developed. In the meantime the dm development team has released new dm targets which could be used by the tool.

I would like to fork the latest RC, put it on GitHub, and continue developing the tool. I will give it a slightly different name so people will not confuse it with the original. My plan is to add support for the new dm targets and also to implement more partition tables, starting with GPT.

I am not really good at generating new names, but here are some ideas.

dmraid-fbmw (forked by Mark-Willem)
dmraid-fu (follow-up)
dmraid-ext (extended version)

So my question is: which name do you think is a good one for the fork?

And whom can I contact if I have questions about the tool?

Greetings,

Mark-Willem Jansen

Mark-Willem Jansen | 3 May 23:12 2012

Changing from dm-raid45.ko to dm-raid.ko

Dear md/dm developers,

As I already pointed out on the ataraid-list, I have made a small patch for dmraid so that it uses the dm-raid target instead of the dm-raid45 target to handle raid4 and raid5 setups. The patch is attached to this e-mail. For context: I use dmraid to detect my fakeraid, which is shared with a Windows OS.

There were some things I could not figure out, so I would like to ask three questions regarding the arguments passed to the module.

- First, a more general question: does what I have implemented look okay to you?

- Second, what should be done with the offset variable that is given by the metadata on the disk? With the other dmraid-related modules the disk information is passed as [dev][offset]; with the dm-raid module it changed to [meta-dev][data-dev]. In the patch I ignore the meta-dev and just pass "- path_to_dev" to the module. This works because the offset given by the metadata on my disks is zero, the same value the module sets automatically. What does one have to do if the metadata on the disk says the offset is not zero?
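For reference, the table line format documented in the kernel's Documentation/device-mapper/dm-raid.txt is `<start> <len> raid <raid_type> <#raid_params> <raid_params> <#raid_devs> <meta_dev1> <data_dev1> ...`, with `-` allowed for an absent metadata device. A sketch of a raid5 line (sizes and device paths are placeholders, not from my setup):

```
# start  len         target type   #params params           #devs meta/data pairs
0 1751949312 raid raid5_la 3 128 region_size 1024 3 - /dev/sda - /dev/sdb - /dev/sdc
```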

- Last, I would like to build up the argument string more generically, e.g. by concatenating argument strings depending on some if statements, like:

if (need_sync) {
        num_arg += 2;
        sprintf(arguments + strlen(arguments), " rebuild %d", rebuild_drive.data.i32);
}

And then pass the final string using

p_fmt(lc, table, "0 %U %s %s %u %s", sectors, dm_type, raid_type, num_arg, arguments);

This would make the code easier to read and quicker to hack on or change.

Thanks,

Mark-Willem Jansen
P.S.: The patch is a modified version of the one I sent to the ataraid-list some weeks ago.
