David Greaves | 1 May 11:41 2006

Re: Bad page state in process 'nfsd' with xfs


Nathan Scott wrote:
> Hi there,
>
> On Fri, Apr 28, 2006 at 09:22:23PM +0100, David Greaves wrote:
>
> But, the warning is triggered by the page count (16777216 above), and
> that is 0x1000000 -- which is a huge, improbable count; that looks to
> me like it could very well be the result of a single bit error too.
>
> You may have a hardware problem - try running memtest I guess.
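
The count Nathan flags above is indeed suspicious: 16777216 is exactly
2^24, a value with a single bit set, which is consistent with a one-bit
flip in an otherwise-zero count field. A quick check (plain Python,
using only the number from the report):

```python
# 16777216 is 0x1000000: a word with exactly one bit set, so a single
# flipped bit in a zero page count would produce this value.
count = 16777216
assert count == 0x1000000 == 1 << 24
print(bin(count).count("1"))  # 1 -- only one bit set
```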
Thanks guys

It's in use a lot, so I'll schedule some downtime, blow out the dust,
and run memtest (though I've done that before and it came back clean).

I'll let you know how it goes...

David

--

Chris Wedgwood | 1 May 17:21 2006

Re: Bad page state in process 'nfsd' with xfs

On Mon, May 01, 2006 at 10:41:59AM +0100, David Greaves wrote:

> It's in use a lot so I'll schedule some downtime, blow out the dust
> and run memtest (though I've done that before and it has been
> clean).

Sadly, memtest doesn't always find bad memory.

Finding bad memory is hard, and it's sometimes exacerbated by
complicating factors (heat from nearby drives, for example).

I wish ECC memory were standard.

Nathan Scott | 2 May 01:47 2006

TAKE 907752 - acl

Merge simple nftw-vs-symlinks handling fixes, based on suggestions
from Andreas.

Date:  Tue May  2 09:46:43 AEST 2006
Workarea:  chook.melbourne.sgi.com:/build/nathans/xfs-cmds
Inspected by:  nathans,agruen <at> suse.de

The following file(s) were checked into:
  longdrop.melbourne.sgi.com:/isms/xfs-cmds/master-melb

Modid:  master-melb:xfs-cmds:25861a
acl/VERSION - 1.77 - changed
http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/acl/VERSION.diff?r1=text&tr1=1.77&r2=text&tr2=1.76&f=h
acl/doc/CHANGES - 1.86 - changed
http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/acl/doc/CHANGES.diff?r1=text&tr1=1.86&r2=text&tr2=1.85&f=h
acl/debian/changelog - 1.74 - changed
http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/acl/debian/changelog.diff?r1=text&tr1=1.74&r2=text&tr2=1.73&f=h
acl/setfacl/setfacl.c - 1.18 - changed
http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/acl/setfacl/setfacl.c.diff?r1=text&tr1=1.18&r2=text&tr2=1.17&f=h
acl/getfacl/getfacl.c - 1.18 - changed
http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/acl/getfacl/getfacl.c.diff?r1=text&tr1=1.18&r2=text&tr2=1.17&f=h

Ming Zhang | 4 May 23:15 2006

multiple write stream performance

Hi all,

I have an 8*300GB disk RAID0 used to hold temporary, large media
files. The application usually writes these ~10GB files to it
sequentially.

I've found that with a single file being written I get ~260MB/s,
but with 4 concurrent file writes I only get an aggregate 192MB/s,
and with 16 concurrent writes the aggregate throughput drops to
~100MB/s.

Does anyone know why I get such poor write performance? My guess is
that it is caused by the heads seeking back and forth.

The xfs_bmap output below shows that space is still allocated to the
file in large chunks, which leads to seeks when writing different
files concurrently. Why can't XFS allocate the space better?

[root <at> dualxeon bonnie++-1.03a]# xfs_bmap /tmp/t/v8
/tmp/t/v8:
        0: [0..49279]: 336480..385759
        1: [49280..192127]: 39321664..39464511
        2: [192128..229887]: 39485504..39523263
        3: [229888..267391]: 39571904..39609407
        4: [267392..590207]: 52509888..52832703
        5: [590208..620671]: 52847168..52877631
        6: [620672..663807]: 91995584..92038719
        7: [663808..677503]: 92098112..92111807
        8: [677504..691327]: 92130624..92144447


Ming Zhang | 5 May 01:32 2006

Re: multiple write stream performance

On Fri, 2006-05-05 at 08:32 +1000, David Chatterton wrote:
> Ming,
> 
> What are the I/O characteristics of the application? Typically I
> have seen direct I/O for video data at reasonable sizes, and
> smaller buffered I/O for audio data in media apps. In the
> worse case they mix buffered and direct to the same file. The
> larger the I/O requests the better in terms of reducing
> fragmentation.

I think that here I actually want fragmentation. I will have 10-20
large (~10GB) multimedia files being written to this RAID0 at the
same time; later a background program will dump them to tape, so I
want the concurrent writes to finish as quickly as possible.

If XFS allocates 0 through (16MB-512) to file1, 16MB through
(32MB-512) to file2, and so on, then when writing file1 through fileN
concurrently the disk heads have to move back and forth among these
regions, and that would explain the poor performance I saw.

PS: what do you mean by DDN? What is it short for?

Ming

> 
> Some applications take advantage of the preallocation APIs and
> know that they are ingesting X GBs, and preallocate that space.
> This may still be fragmented, but in most circumstnaces the
> fragmentation is far less than without preallocation.
> 

Ming Zhang | 5 May 03:38 2006

Re: multiple write stream performance

Hi David,

Putting the fragmentation issue aside for now: how can I let multiple
write streams come in concurrently and reach their full speed
potential, avoiding seeks as much as possible?

Thanks!

Ming

On Thu, 2006-05-04 at 19:32 -0400, Ming Zhang wrote:
> On Fri, 2006-05-05 at 08:32 +1000, David Chatterton wrote:
> > Ming,
> > 
> > What are the I/O characteristics of the application? Typically I
> > have seen direct I/O for video data at reasonable sizes, and
> > smaller buffered I/O for audio data in media apps. In the
> > worse case they mix buffered and direct to the same file. The
> > larger the I/O requests the better in terms of reducing
> > fragmentation.
> 
> I feel that I here want the fragmentation. I will have 10-20 large size
> (~10GB) multimedia files write at same time to this RAID0. then later a
> background program will dump them to tape. so i want the concurrent
> write to be as soon as possible.
> 
> so if xfs allocate 0 ~ (16MB-512) to file1, 16MB ~ (32MB-51) file2,...,
> then when write to file 1 to file N concurrently. the disk heads have to
> move back and forth among these places and thus leave the the poor
> performance i saw.

David Chatterton | 5 May 00:32 2006

Re: multiple write stream performance

Ming,

What are the I/O characteristics of the application? Typically I
have seen direct I/O for video data at reasonable sizes, and
smaller buffered I/O for audio data in media apps. In the
worst case they mix buffered and direct I/O to the same file. The
larger the I/O requests, the better, in terms of reducing
fragmentation.

Some applications take advantage of the preallocation APIs: they
know they are ingesting X GB, and preallocate that space.
This may still be fragmented, but in most circumstances the
fragmentation is far less than without preallocation.
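
David's preallocation point can be sketched with the portable
posix_fallocate() call; XFS also offers its own unwritten-extent
preallocation via the XFS_IOC_RESVSP ioctl, but the minimal,
filesystem-agnostic version looks like this (illustrative only, not
any particular media app's code):

```python
# Minimal sketch of preallocation: reserve the full ingest size up
# front so the allocator can hand out contiguous space.
import os
import tempfile

size = 64 * 1024 * 1024  # assume we know we'll ingest 64 MiB

fd, path = tempfile.mkstemp()
try:
    os.posix_fallocate(fd, 0, size)  # reserve space before writing
    print(os.fstat(fd).st_size)      # 67108864
finally:
    os.close(fd)
    os.unlink(path)
```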

Performance degrading with multiple writers is not unexpected
if they are jumping around a lot and there is limited cache on
the controller, etc. That is why, for customers with demanding
media workloads, we recommend storage like DDN arrays, which have
very large caches and can absorb lots of streams. But that costs
a lot more than a JBOD!

Coming soon, we will introduce a new mount option for XFS on Linux
that will place writers to files in different directories into
different allocation groups. If you only have one writer per
directory, fragmentation in those files can be significantly
reduced, since the writers aren't fighting for space in the same
region of the filesystem. That will help here, but I'm not sure
it will solve your problem.

Thanks,

Ming Zhang | 5 May 16:57 2006

RE: multiple write stream performance

On Fri, 2006-05-05 at 14:53 +0100, Sebastian Brings wrote:
> 
> > -----Original Message-----
> > From: linux-xfs-bounce <at> oss.sgi.com
> [mailto:linux-xfs-bounce <at> oss.sgi.com]
> > On Behalf Of Ming Zhang
> > Sent: Friday, May 05, 2006 3:38 AM
> > To: chatz <at> melbourne.sgi.com
> > Cc: xfs
> > Subject: Re: multiple write stream performance
> > 
> > Hi David
> > 
> > Or we put fragmentation issue aside first. How could I allow multiple
> > write streams to come in concurrently and get full speed potential by
> > avoiding seek as much as possible?
> > 
> > Thanks!
> > 
> > Ming
> > 
> > 
> 
> By fragmentation :) Storing your files in interleaved extents of a few
> megabytes each. 

which filesystem parameter(s) affect the size of these
extents?

> 

Ming Zhang | 5 May 20:04 2006

how to get the metadata

Hi

When running mkfs.xfs we see the information below, but how can I
get the same information for an existing XFS filesystem? Thanks!

meta-data=/dev/vg1/v1            isize=256    agcount=3, agsize=131072000 blks
         =                       sectsz=512
data     =                       bsize=4096   blocks=393216000, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=32768, version=1
         =                       sectsz=512   sunit=0 blks
realtime =none                   extsz=65536  blocks=0, rtextents=0
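
For an existing filesystem, the same geometry is reported by running
`xfs_growfs -n` (or the `xfs_info` wrapper script from xfsprogs)
against the mount point rather than the device. As a worked example of
reading the fields above, the data-section size follows directly from
bsize and blocks:

```python
# Worked example: data-section size from the geometry shown above.
bsize = 4096          # bytes per filesystem block ("bsize")
blocks = 393216000    # data-section block count ("blocks")
size_bytes = bsize * blocks
print(size_bytes)                    # 1610612736000
print(round(size_bytes / 2**40, 2))  # 1.46 (TiB)
```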

Ming

