Luca Berra | 7 Apr 14:56 2002

Re: ANNOUNCE - mdadm 0.8 - candidate for 1.0

On Thu, Apr 04, 2002 at 02:55:43PM +1000, Neil Brown wrote:
> Yes... I sort of noticed that in passing...
> 
> I added:
>    %_sbindir /sbin
>    %_sysconfdir /etc
> 
> to my .rpmmacros and rebuilt the rpms so they now put files in the
> "right" places.
> 
> Any RPM gurus want to suggest what I *should* do?
I am no guru, but an alternative seems to be setting
%define _exec_prefix %{nil}
in the spec file.

> Is it wrong to use %_sbindir and %_sysconfdir ???
I believe common practice is to use /sbin literally in the
spec file, but I prefer the approach above.

%_sysconfdir should be set to /etc in the per platform rpm macro file
/usr/lib/rpm/%{_target_cpu}-%{_vendor}-%{_target_os}
e.g. /usr/lib/rpm/i386-redhat-linux
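
You can check where those macros resolve on a given system with rpm's
--eval option:

    rpm --eval '%{_sbindir}'
    rpm --eval '%{_sysconfdir}'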

L.

-- 
Luca Berra -- bluca <at> comedia.it
        Communication Media & Services S.r.l.
 /"\
 \ /     ASCII RIBBON CAMPAIGN

Steven Rodenburg | 7 Apr 15:22 2002

LVM / EXT3 production ready ?

Hi,

I was wondering if the Logical Volume Manager (LVM) on, for example, SuSE
7.3 is any good in a professional "24 x 7 x forever", many-GB/TB SAN
production environment.

In our company's case, being able to logically expand hardware RAID
sets and partitions in a real production environment is important. We
plan to use Linux because it is stable and robust enough and a
good alternative to, say, Sun Solaris.

But downtime due to a problem with the Linux LVM is not an option.
Normally, we would add an array within the SAN, create a new
(very big) filesystem on it, and mount it. But being able to expand a
partition on the fly would be better.
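
For illustration, that kind of expansion looks roughly like this with
the LVM 1.x tools (untested sketch; vg0, /dev/sdc1, the mount point and
the size are made-up values):

    pvcreate /dev/sdc1                # initialise the new SAN LUN for LVM
    vgextend vg0 /dev/sdc1            # add it to the volume group
    lvextend -L +100G /dev/vg0/data   # grow the logical volume
    umount /data                      # ext2/ext3 resizing is offline here
    resize2fs /dev/vg0/data           # grow the filesystem to fill the LV
    mount /data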

And can LVM be used with Red Hat 7.2 as well (I can't find it in the
distro)?

How does EXT3 (we need journaling) behave on very large filesystems
(many GB/TB)?

I like EXT3 because loss of the journal does not pose big problems:
the partition can be mounted as EXT2 and run until a time window
arrives to repair the journal and mount it again as EXT3.
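
That recovery path would look something like this (untested sketch; the
device and mount point names are hypothetical):

    mount -t ext2 /dev/vg0/data /data   # run without the journal for now
    umount /dev/vg0/data                # later, in the maintenance window
    e2fsck -f /dev/vg0/data             # check the filesystem
    tune2fs -j /dev/vg0/data            # recreate the journal
    mount -t ext3 /dev/vg0/data /data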

It's all about MASSIVE fileserving.

In short: are LVM and EXT3 mature/stable enough?


Jakob Østergaard | 7 Apr 15:26 2002

Re: mdadm Some Makefile changes , Hth , JimL

On Fri, Apr 05, 2002 at 05:53:25PM -0500, Mr. James W. Laferriere wrote:
> 

Hello all,

A little nit-picking here  :)

...
> Preferred Minor : 0
>     Persistance : Superblock is persistant

"Persistant" and "Persistance" are not English words.

s/stan/sten/g

Cheers,

-- 
................................................................
:   jakob <at> unthought.net   : And I see the elder races,         :
:.........................: putrid forms of man                :
:   Jakob Østergaard      : See him rise and claim the earth,  :
:        OZ9ABN           : his downfall is at hand.           :
:.........................:............{Konkhra}...............:


Kevin Page | 8 Apr 00:52 2002

Software RAID-1 and badblocks


Hi,

I've suffered some ext3 corruption recently (I don't know why it
occurred yet), and despite not seeing any IDE errors (as I've seen when
disks have died before :( ) it worried me into running badblocks on my
partitions.

I run 5 software RAID-1 devices from the two onboard IDE channels on
my motherboard (and a RAID-0 on a HighPoint controller, but the
problems aren't there). (More specific system details are at the end of
this mail.)

It occurs to me that I'm not actually sure which devices to run
badblocks on.

I can run badblocks on the /dev/md<x> device, which it seems to
accept; but is this useful? Is the badblock "check" (whatever that may
be) passed down to the actual constituent devices by the software raid
subsystem? Anyway, when I run badblocks in this way, I don't get any
errors.
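
(For concreteness, a read-only scan of an individual member looks like
this, with made-up device names:

    badblocks -sv /dev/hda5
    badblocks -sv /dev/hdc5
)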

So I then ran badblocks on one of the partitions that make up the
raid1 device (on which the broken filesystem resides).
This turned up some bad blocks, although suspiciously these were the
last two blocks of the device (5124701 and 5124702 on a device
badblocks reported as going to 5124703).
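
(Those two suspect blocks can be re-read directly to double-check;
badblocks defaults to 1024-byte blocks, and the device name is made up:

    dd if=/dev/hda5 bs=1024 skip=5124701 count=2 of=/dev/null
)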

I presumed these blocks were junk, and struggled to work out how to
map these from the device - or more specifically from the raid1 device

Neil Brown | 8 Apr 01:53 2002

Re: mdadm Some Makefile changes , Hth , JimL

On Sunday April 7, jakob <at> unthought.net wrote:
> On Fri, Apr 05, 2002 at 05:53:25PM -0500, Mr. James W. Laferriere wrote:
> > 
> 
> Hello all,
> 
> A little nit-picking here  :)

Nits need to be picked, thanks.

> 
> ...
> > Preferred Minor : 0
> >     Persistance : Superblock is persistant
> 
> "Persistant" and "Persistance" are not English words.
> 
> s/stan/sten/g

I also took the opportunity to change "disk" (which should really be
spelt "disc" anyway) to "device" in the Detail output and other
places, as it may sometimes be a disc drive, but might also be a
partition or some other device.

NeilBrown

> 
> Cheers,
> 
> -- 

Neil Brown | 8 Apr 05:22 2002

Re: mdadm Some Makefile changes , Hth , JimL

On Friday April 5, babydr <at> baby-dragons.com wrote:
> 
> 	Hello Neil ,  Something funny in the --Detail output from a raid5
> 	array I maintain , see below .  Tia ,  JimL
> 
>  mdadm-0.8.1# mdadm -v -D /dev/md0
> /dev/md0:
>         Version : 00.90.00
>   Creation Time : Sat Apr 14 06:16:46 2001
>      Raid Level : raid5
>      Array Size : 892800 (871.87 MiB 914.22 MB)
>                   ^^^^^^

Thanks:
 I had
     larray_size = array_size << 9;
 when array_size was a "long" and larray_size was "long long",
 so the shift was evaluated at the width of "long" and the high bits
 were lost before the widening assignment.
 It is now
     larray_size = array_size;
     larray_size <<= 9;
 which does the shift in 64 bits and should work correctly.

NeilBrown


Brad Hubbard | 8 Apr 05:24 2002

Re: Winfast ATA100

On Wed, 13 Mar 2002 09:54, Pieter Hollants did align ASCII characters thusly:
> > Has anyone got any experience with getting the Leadtek Winfast ATA100
> > cards
> > under Linux?
>
> They seem to use the CMD/Silicon Image chips, which have excellent Linux
> support. When I researched chipsets last August, the driver hadn't
> changed since kernel 2.4.4. My SW RAID-1 has run stably and reliably
> ever since.

Finally got this going (got absolutely no response from CMD or Leadtek by the 
way).

Needed a line like this in lilo.conf:

append="ide2=0xdc00,0xd802,17 ide3=0xd400,0xd002,17 ide0=autotune 
ide1=autotune ide2=autotune ide3=autotune"

Note: This line is, of course, system specific.
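
Remember to re-run the map installer after editing lilo.conf so the new
append line takes effect:

    /sbin/lilo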

Hope this stops someone burning as many hours on it as I did :-)

Cheers,
Brad


SoulBlazer | 8 Apr 20:41 2002

Floating Spare

Can you share 2 or 3 'spare-disks' between multiple raid sets?

E.g. I have a 22-disk array chopped into multiple 4-disk RAID-5 volumes,
which leaves me with 2 disks left over. Can I assign these two drives as
floating spares in case of failure? This will obviously be a 'first come,
first served' scenario for RAID failure, but it's better than not using
them.

## Vol 00 ##
raiddev /dev/md/0
        raid-level              5
        nr-raid-disks           3
        nr-spare-disks          1
        persistent-superblock   1
        chunk-size              128
        parity-algorithm        left-symmetric

        device          /dev/scsi/host2/bus0/target0/lun0/part1
        raid-disk       0
        device          /dev/scsi/host2/bus0/target1/lun0/part1
        raid-disk       1
        device          /dev/scsi/host2/bus0/target2/lun0/part1
        raid-disk       2
        device          /dev/scsi/host2/bus0/target3/lun0/part1
        spare-disk      0

## Vol 01 ##
raiddev /dev/md/1
        raid-level              5
        nr-raid-disks           3
        nr-spare-disks          1

SoulBlazer | 8 Apr 21:02 2002

2 HBA's and Multipath.

Question,

I have 2 QLogic Fibre Channel HBAs (2100Fs) connected directly to both FCAL
ports on my Sun A5200 22-disk array. If I enable multipath, do I get
twice the throughput between my Linux box and the array? Or is multipath
purely for failover?

Additionally, do I get twice the bandwidth anyhow, since both HBAs see the
same disks? How can I tell whether one, the other, or both HBAs are
being utilized?

These are private loops being run in full duplex.
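
One rough way to check on a 2.4 kernel is to see whether each disk shows
up once per HBA, and whether the md multipath personality is active:

    cat /proc/scsi/scsi
    cat /proc/mdstat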


Marcel | 8 Apr 21:05 2002

Re: Floating Spare

SoulBlazer wrote:
> Can you share 2 or 3 'spare-disks' between multiple raid sets?
> 
> E.g. I have a 22-disk array chopped into multiple 4-disk RAID-5 volumes,
> which leaves me with 2 disks left over. Can I assign these two drives as
> floating spares in case of failure? This will obviously be a 'first come,
> first served' scenario for RAID failure, but it's better than not using
> them.

I'm sure you could script something to that effect based on the current 
RAID tools and their options/output...

If there's nothing like it already included (I haven't checked, and I
have no in-depth knowledge of the raid tools either, so forgive any
stupid remarks), I could imagine a cron job checking for disk failures
and hot-adding the "new" disk (one of the spares, if available), along
the lines of the sketch below.
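
Untested sketch (raidhotadd is from raidtools; the device names are made
up, and a real script would first work out which array is degraded
rather than always picking /dev/md/0):

    #!/bin/sh
    # a failed member shows up as "_" inside the status brackets
    # of /proc/mdstat, e.g. [UU_]
    if grep -q '\[[U_]*_[U_]*\]' /proc/mdstat; then
        raidhotadd /dev/md/0 /dev/scsi/host2/bus0/target20/lun0/part1
    fi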

Interesting idea!

Marcel


