Axel Thimm | 1 Jun 05:42 2005

Re: out of the tree compilation and 4KSTACKS

On Tue, May 31, 2005 at 03:25:48PM -0700, Chris Wedgwood wrote:
> On Tue, May 31, 2005 at 11:54:02PM +0200, Axel Thimm wrote:
> 
> > Hm, any pointers to "how"? ;)
> 
>   make -C <path-to-kernel> M=`pwd`
> 
> sort of thing.
> 
> linux/Documentation/kbuild/modules.txt will probably explain it better
> than I can.
> 
> > Sure, that's clear. What I mean is: If you turn on xfs in RHEL4's
> > kernel is it considered safe with 4KSTACKS?
> 
> It is already on in RHEL, isn't it?

No. That's the whole point of this exercise ;)

> As to whether it's safe it depends who you ask.
> 
> Various people from Red Hat insist that 4K stacks are desirable
> because they see order-1 allocations failing sometimes, which makes
> sense; however, x86-64 still uses 8K stacks and nobody is pushing hard
> for 4K stacks there.
> 
> > If not, that would make the whole point of building the kernel
> > modules out of the tree meaningless.
> 
> It has no advantages unless it's newer code.  I would just use a tree ...
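
The out-of-tree build Chris refers to can be sketched as follows. The kernel source path and module name are hypothetical, and the one-line Kbuild makefile assumes the module source sits in the current directory; see linux/Documentation/kbuild/modules.txt for the full story:

```sh
# Minimal out-of-tree module build (path and module name hypothetical).
# The module directory needs a one-line Kbuild makefile:
echo 'obj-m := xfs.o' > Makefile

# Then let the kernel's own build system compile it:
make -C /usr/src/linux-2.6 M="$(pwd)" modules
```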

Chris Wedgwood | 1 Jun 06:00 2005

Re: out of the tree compilation and 4KSTACKS

On Wed, Jun 01, 2005 at 05:42:02AM +0200, Axel Thimm wrote:

> > It is already on in RHEL, isn't it?
>
> No. That's the whole point of this exercise ;)

I'm saying I thought RH kernels defaulted to using CONFIG_4KSTACKS;
don't they?

> The advantage is no xfs vs xfs.

When I talked (bitched?) to some RH people about the problems with XFS
and 4KSTACKS they claimed that ext3 is faster than XFS in pretty much
any meaningful benchmark on 8-CPU machines or smaller.

Sonny Rao | 1 Jun 05:53 2005

Re: XFS module for RHEL4

On Tue, May 31, 2005 at 05:57:13PM -0400, Wayne Steenburg wrote:
> How would one go about compiling the XFS modules for the stock RHEL4
> kernel?  Redhat decided not to include it for whatever reason.
> 

More precisely, RedHat decided it couldn't support any filesystem
other than Ext3  (hint, if you paid for RHEL 4 and want XFS, you could
complai^H^H^H^H ask RedHat to reevaluate their choices of supported
filesystems) 

On to the (possibly) useful portion of this post,

You need to get the kernel source RPM and use the rpmbuild command to
set up the kernel tree appropriately. I believe the command is
"rpmbuild -bp" for "prep" which means untar and apply patches in
RPM-speak.   This will put the kernel tree in the "BUILD" directory
where you can then go in and theoretically change whatever kernel
options you want and rebuild.  If you want the module to load into the
supplied kernel, you should start with their config file and just
build the XFS module.   There could be other steps I'm missing (like
adding the proper extraversion junk to the Makefile, possibly) because
I haven't had the misfortune of doing this in a while, but that should
get you started.  
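
Sonny's steps, spelled out as a command sequence. The SRPM name, spec file name, and version strings are hypothetical, so treat this as a sketch rather than a tested recipe:

```sh
# Install the kernel source RPM and prep the tree ("prep" = untar + patch).
rpm -ivh kernel-2.6.9-5.EL.src.rpm
cd /usr/src/redhat/SPECS
rpmbuild -bp --target="$(uname -m)" kernel-2.6.spec

# The prepped tree lands under BUILD; start from Red Hat's shipped config.
cd /usr/src/redhat/BUILD/kernel-2.6.9/linux-2.6.9
cp /boot/config-"$(uname -r)" .config

# Enable CONFIG_XFS_FS=m (menuconfig or edit .config), set EXTRAVERSION
# in the Makefile to match the running kernel, then build just the module.
make oldconfig
make modules SUBDIRS=fs/xfs
```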

And don't forget to let them hear your opinion on this subject :-)

Sonny

Sonny Rao | 1 Jun 05:56 2005

Re: out of the tree compilation and 4KSTACKS

On Tue, May 31, 2005 at 09:00:09PM -0700, Chris Wedgwood wrote:
> When I talked (bitched?) to some RH people about the problems with XFS
> and 4KSTACKS they claimed that ext3 is faster than XFS in pretty much
> any meaningful benchmark on 8-CPU machines or smaller.

LOL

Why are their customers, or should I say "sheep", standing for this
kind of crap?

Sonny

Axel Thimm | 1 Jun 06:29 2005

Re: out of the tree compilation and 4KSTACKS

On Tue, May 31, 2005 at 09:00:09PM -0700, Chris Wedgwood wrote:
> On Wed, Jun 01, 2005 at 05:42:02AM +0200, Axel Thimm wrote:
> > > > Sure, that's clear. What I mean is: If you turn on xfs in
> > > > RHEL4's kernel is it considered safe with 4KSTACKS?
> > > 
> > > It is already on in RHEL, isn't it?
> >
> > No. That's the whole point of this exercise ;)
> 
> I'm saying I thought RH kernels defaulted to using CONFIG_4KSTACKS
> don't they?

Yes, they do.

> > The advantage is no xfs vs xfs.
> 
> When I talked (bitched?) to some RH people about the problems with XFS
> and 4KSTACKS they claimed that ext3 is faster than XFS in pretty much
> any meaningful benchmark on 8-CPU machines or smaller.

Sounds like you talked to Arjan. He does have radical views. The real
reason is (a lack of) support resources for non-ext3 filesystems.
-- 
Axel.Thimm at ATrpms.net
Robin Humble | 1 Jun 08:05 2005

Re: out of the tree compilation and 4KSTACKS

On Tue, May 31, 2005 at 09:00:09PM -0700, Chris Wedgwood wrote:
>When I talked (bitched?) to some RH people about the problems with XFS
>and 4KSTACKS they claimed that ext3 is faster than XFS in pretty much
>any meaningful benchmark on 8-CPU machines or smaller.

I think RedHat are mistaken.
  http://oss.sgi.com/archives/linux-xfs/2005-03/msg00189.html

cheers,
robin

Robin Humble | 1 Jun 08:18 2005

Re: XFS module for RHEL4

On Tue, May 31, 2005 at 11:53:47PM -0400, Sonny Rao wrote:
>On Tue, May 31, 2005 at 05:57:13PM -0400, Wayne Steenburg wrote:
>> How would one go about compiling the XFS modules for the stock RHEL4
>> kernel?  Redhat decided not to include it for whatever reason.

this is almost a FAQ.

>You need to get the kernel source RPM and use the rpmbuild command to
>setup the kernel tree appropriately.. I believe the command is
>"rpmbuild -bp" for "prep" which means untar and apply patches in

if you want 8k stacks as well (highly recommended for servers) then
unfortunately it's not quite that simple. You need to stop some of the
RedHat patches from being applied, as (for some unknown and possibly
insane reason) RedHat removed the 8k stack option entirely...

OTOH, I run fc3 at home with the default kernel (4k stacks) and XFS and
it's very stable. So for simple workloads and setups 4k stacks are
usually fine.

Anyway, this post has a (possibly slightly outdated) recipe for
compiling a RHEL4 kernel with 8k stacks and XFS:
  http://oss.sgi.com/archives/linux-xfs/2005-03/msg00189.html

cheers,
robin

Gaspar Bakos | 1 Jun 16:10 2005

reserved blocks

Hi,

Is there any way to specify reserved blocks under XFS?

I mean something similar to ext2/ext3, where one can specify that e.g.
only 95% of the blocks can be used, with 5% reserved for root.

It seems to me that this option is not available with XFS, and probably
there is a reason for it.
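
For reference, the ext2/ext3 mechanism being asked about is the reserved-blocks percentage set at mkfs time or later with `tune2fs -m`; a trivial sketch of what that percentage amounts to (numbers hypothetical):

```shell
# ext2/ext3 reserve a fixed percentage of blocks for root, e.g.
#   tune2fs -m 5 /dev/sda1
# which for a filesystem of $blocks total blocks means:
blocks=1000000      # hypothetical filesystem size in blocks
reserve_pct=5       # the ext2/ext3 default
reserved=$(( blocks * reserve_pct / 100 ))
echo "reserved for root: $reserved blocks"
```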

Cheers
Gaspar

Xander Meadow | 1 Jun 16:07 2005

Re: getting data from a corrupted file system

> I've done a bit of this from time to time.  It depends how stuck you
> are and how much effort you want to spend as to whether this is worth
> your while.

Well, as my personal time is worth nearly nothing, I'm willing to try a
bunch of stuff to get this back, so I'll try any and all suggestions
you have.

> As I said on the list, try xfs_repair and make sure it has enough
> memory.  Make sure you have a recent xfs_repair too.

I'm pretty sure we have a recent version of xfs_repair because the tech 
guy at consensys sent me an xfs_repair update, and then he modified that 
version so that it would skip an Assertion error that was causing 
xfs_repair to stop.

> how much memory does the system have?  have you tried making sure
> there is enough swap?

from /proc/meminfo:

            total:       used:        free:  shared:   buffers:    cached:
Mem:   3962441728   919121920  3043319808        0   63930368  361832448
Swap:  2138537984           0  2138537984
MemTotal:      3869572 kB
MemFree:       2971992 kB
...
SwapTotal:     2088416 kB
SwapFree:      2088416 kB
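
As a quick sanity check, the kB figures in an excerpt like this can be parsed directly; a small sketch using the swap numbers quoted above:

```shell
# Compute swap in use from the /proc/meminfo excerpt above.
meminfo='SwapTotal:     2088416 kB
SwapFree:      2088416 kB'

swap_used_kb=$(printf '%s\n' "$meminfo" | awk '
  /^SwapTotal:/ { total = $2 }
  /^SwapFree:/  { free  = $2 }
  END           { print total - free }')

echo "swap used: ${swap_used_kb} kB"
```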


Ravi Wijayaratne | 1 Jun 20:05 2005

XFS file system crash with SUSE SLES 8

Hi all

We are running the SuSE SLES8 2.4.21-138 kernel with XFS 1.3.
We see the following XFS crash in our syslog.

kernel: xfs_inotobp: xfs_imap()  returned an error 22 on lvm(58,0).  Returning error.
kernel: xfs_iunlink_remove: xfs_inotobp()  returned an error 22 on lvm(58,0).  Returning error.
kernel: xfs_inactive:   xfs_ifree() returned an error = 22 on lvm(58,0)
kernel: xfs_force_shutdown(lvm(58,0),0x1) called from line 1866 of file xfs_vnodeops.c.  Return
address = 0xc020941b
kernel: Filesystem "lvm(58,0)": I/O Error Detected.  Shutting down filesystem: lvm(58,0)
kernel: Please umount the filesystem, and rectify the problem(s)

Here are the steps we used to reproduce the problem.

1. Create 3 RAID 5 sets.
2. Group these 3 RAID sets into one large physical volume.
3. Create an XFS file system on it.
4. Run dbench -c ./client.txt 64 
5. Within 10 minutes the file system crashes.
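
Spelled out as commands, the reproduction might look like the following; the device names are hypothetical and the LVM syntax shown is LVM2-style, so adjust for the LVM1 tools that ship with a 2.4 kernel:

```sh
# Group three RAID5 devices into one volume group, carve one big LV,
# put XFS on it, and hammer it with dbench.
pvcreate /dev/md0 /dev/md1 /dev/md2
vgcreate bigvg /dev/md0 /dev/md1 /dev/md2
lvcreate -l 100%FREE -n biglv bigvg
mkfs.xfs /dev/bigvg/biglv
mount /dev/bigvg/biglv /mnt/test
cd /mnt/test && dbench -c ./client.txt 64
```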

Having dropped into KDB, I see that the problem occurs in
xfs_iunlink_remove(). I think it is expected that the unlinked inode
number be either in the agi->agi_unlinked[..] array or, if not, in the
xfs_dinode_t->di_next_unlinked list (the first item holds the reference
from the agi_unlinked array). However, in this case the
xfs_dinode_t->di_nlink is 0 and the agi_unlinked array is filled with
0xFFFFFFFF, which I gather is NULLAGINO. The agi seems to be sensible
judging from the magic number. So it seems that some inode escaped from
the unlink process. How that would happen is beyond me. Any insights
would be very helpful.


