bugzilla-daemon | 1 Jan 01:07 2004

[Bug 301] New: xfsdump won't compile

http://oss.sgi.com/bugzilla/show_bug.cgi?id=301

           Summary: xfsdump won't compile
           Product: Linux XFS
           Version: Current
          Platform: IA32
        OS/Version: Linux
            Status: NEW
          Severity: normal
          Priority: Medium
         Component: xfsdump
        AssignedTo: xfs-master <at> oss.sgi.com
        ReportedBy: corliss <at> digitalmages.com

Using the latest CVS snapshot as of 2003/12/29, xfsdump won't compile.  It fails
with the message:

gcc -o xfsdump arch_xlate.o cldmgr.o content_common.o dlog.o drive.o
drive_scsitape.o drive_simple.o drive_minrmt.o fs.o getdents.o global.o lock.o
main.o mlog.o openutil.o qlock.o path.o ring.o stream.o util.o sproc.o inv_api.o
inv_core.o inv_files.o inv_fstab.o inv_idx.o inv_mgr.o inv_stobj.o content.o
hsmapi.o inomap.o var.o  /usr/lib/libuuid.a /lib/libhandle.so /lib/libattr.so
/lib/libdm.so ../librmt/.libs/librmt.al
drive_scsitape.o: In function `is_scsi_driver':
/usr/src/nevaeh/build/xfs-cmds/xfs-cmds/xfsdump/dump/drive_scsitape.c:531:
undefined reference to `IRIX_DEV_TO_KDEVT'
collect2: ld returned 1 exit status
make[1]: *** [xfsdump] Error 1
make: *** [default] Error 2


bugzilla-daemon | 1 Jan 01:10 2004

[Bug 300] Lockup while "mkisofs"ing a DVD-Image

http://oss.sgi.com/bugzilla/show_bug.cgi?id=300

------- Additional Comments From ms <at> citd.de  2003-12-31 16:10 PDT -------
95GB/16 = 5.9GB. That is definitely more than 4GB. :-(

OK. I've recreated the filesystem with doubled agcount.

meta-data=/x4                    isize=256    agcount=32, agsize=763170 blks
         =                       sectsz=512
data     =                       bsize=4096   blocks=24421438, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=11924, version=1
         =                       sectsz=512   sunit=0 blks
realtime =none                   extsz=65536  blocks=0, rtextents=0
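
As a quick check of the arithmetic in this comment (a sketch; all numbers are taken from the quoted mkfs output, nothing else is assumed):

```python
# Per-AG size math from the xfs_info geometry quoted above.
bsize = 4096            # data block size in bytes (bsize=4096)
agsize_blks = 763170    # blocks per allocation group (agsize=763170)
total_blks = 24421438   # total data blocks (blocks=24421438)

ag_bytes = agsize_blks * bsize
ag_gib = ag_bytes / 2**30
fs_gib = total_blks * bsize / 2**30

print(f"filesystem: {fs_gib:.1f} GiB, per AG: {ag_gib:.2f} GiB")
# With agcount=32 each AG is ~2.9 GiB, comfortably under the 4 GiB
# mark that the original agcount=16 layout exceeded (~5.9 GiB per AG).
```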

This also explains why "cp -av"ing the data around ("dump & restore") wasn't a
problem, because the biggest file copied was only(tm) 3.4 GB in size.

<Recreating & filling the filesystem finished>

Creating the DVD image didn't crash the machine (3 tries).
I hope this was my first and last problem with XFS (knocking on wood). :-)

------- You are receiving this mail because: -------
You are the assignee for the bug, or are watching the assignee.

Eyal Lebedinsky | 1 Jan 01:46 2004

Re: blocksize question

Mike Young wrote:
> 
> Hi Eyal,
> 
> I'm currently doing some performance testing with 3Ware and some other RAID
> options on SuSE 9.0 and using XFS.  What kind of performance are you seeing?
> And what tests are you using?

We have a 7500-12. The raid5 is 8 disks + a hot spare (XFS mounted
on /backup). We attempted to add a raid0 with two disks, but this
seems to be too much load. The two-disk raid0 was software raid,
but we kept getting I/O errors:
	{DriveReady SeekComplete Error}

The test is simply reading or writing (dd) a large file. For example:
	10GB /dev/zero   -> /backup	3m57 (43 MB/s)
	10GB /backup     -> /dev/null	7m20 (23 MB/s)
We do each 'dd' twice and take the second copy as the result. The
machine is otherwise idle.
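
The MB/s figures follow directly from size over elapsed time; a small sketch of the conversion (taking 10GB as 10 * 1024 MB, which reproduces the quoted numbers):

```python
def throughput_mb_s(size_mb, minutes, seconds):
    """Average throughput for a dd run of size_mb megabytes."""
    return size_mb / (minutes * 60 + seconds)

# 10GB write (/dev/zero -> /backup) in 3m57, read back in 7m20
write = throughput_mb_s(10 * 1024, 3, 57)
read = throughput_mb_s(10 * 1024, 7, 20)
print(f"write: {write:.0f} MB/s, read: {read:.0f} MB/s")
# write: 43 MB/s, read: 23 MB/s
```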

This copying is a good measure for us since this machine is an online
backup service and it will mostly have large files copied to it and
from it.

Each disk alone on an IDE cable does about 55MB/s (hdparm, not real
performance).

Here are the results for the raid0 on the 3ware (/dev/sdb, ext2
on /ssaback):
	10GB /dev/zero   -> /ssaback	3m52 (44 MB/s)

Jason White | 1 Jan 01:58 2004

Access to 2.6-xfs CVS tree

It was announced on the list a few months ago that the 2.6-xfs CVS
tree had been withdrawn while legal issues (cryptographic export?)
were sorted out.

What's the current status?

I live in Australia and I'm interested in trying the latest 2.6-xfs.

Joshua Schmidlkofer | 1 Jan 02:38 2004

Calculating agsize?

How does one calculate the ag size?

thanks,
  Joshua

Eric Sandeen | 1 Jan 04:27 2004

Re: Calculating agsize?

From mkfs.xfs or xfs_info output, take agsize * bsize,
to get the size of an AG in bytes.
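
In code form (the sample values are just illustrative, taken from the xfs_info report earlier in this digest):

```python
def ag_size_bytes(agsize, bsize):
    """AG size in bytes, from the agsize (in blocks) and bsize (in bytes)
    fields of mkfs.xfs / xfs_info output."""
    return agsize * bsize

# e.g. agsize=763170 blks, bsize=4096
print(ag_size_bytes(763170, 4096))   # 3125944320 bytes, ~2.9 GiB
```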

-Eric

On Wed, 31 Dec 2003, Joshua Schmidlkofer wrote:

> How does one calculate the ag size?
> 
> thanks,
>   Joshua
> 
> 

Eric Sandeen | 1 Jan 05:06 2004

Re: Access to 2.6-xfs CVS tree

On Thu, 1 Jan 2004, Jason White wrote:

> It was announced on the list a few months ago that the 2.6-xfs CVS
> tree had been withdrawn while legal issues (cryptographic export?)
> were sorted out.
> 
> What's the current status?
> 
> I live in Australia and I'm interested in trying the latest 2.6-xfs.

The 2.6 tree should be available by cvs following the instructions
at http://oss.sgi.com/projects/xfs/cvs_download.html; if you have
trouble let us know.

-Eric

Mike Young | 1 Jan 06:28 2004

RE: blocksize question

Eyal,

I have not tried the dd test yet.  I'll be getting to that tomorrow.  On a
dual 3.06 Xeon, I received a performance spread of 7MB/sec to 29MB/sec using
smbtorture across GbE using a crossover cable.  My client was a 1.4GHz P4.
The server also had 512MB.  The 29MB/sec was at 12 clients.  After that, the
output was a consistent 21.xxMB/sec up to 53 clients. I did this test using
an external log, which I have found to be much faster on such testing than
using the internal log.

However, I did the same test with an LSI controller and obtained over
40MB/sec at 53 clients.  I'm trying to also repeat the test using MD RAID.
In the past, I've had MD RAID doing over 50MB/sec on the same test.  But I
do want to repeat just to be sure.

Also, I have sent some messages to the 3Ware folks to see if they can help
with this.  It sounds like the external journal should help you some.

Any idea what the block size is on the ext2 system?  I'm wondering if that may
be the issue.  The default you're using on your XFS system is probably 4K.
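
As an aside on how ext2 records its block size: the superblock stores it as a log2 shift from 1 KB (the s_log_block_size field), so the mapping is (a sketch; dumpe2fs -h reports the decoded "Block size" directly):

```python
def ext2_block_size(s_log_block_size):
    """Block size in bytes from the ext2 superblock's s_log_block_size
    field, which stores log2(block size / 1024)."""
    return 1024 << s_log_block_size

print(ext2_block_size(0))   # 1024, the historic ext2 default
print(ext2_block_size(2))   # 4096, the usual default on large filesystems
```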

-Mike

-----Original Message-----
From: linux-xfs-bounce <at> oss.sgi.com [mailto:linux-xfs-bounce <at> oss.sgi.com] On
Behalf Of Eyal Lebedinsky
Sent: Wednesday, December 31, 2003 4:47 PM
To: Mike Young
Cc: 'linux-xfs list'
Subject: Re: blocksize question

Jerry Haltom | 1 Jan 17:00 2004

XFS external log questions

Few questions about setting up and using a proper external log:

I have a system whose only drive is a raid5 array. I assume putting the
log on the array itself, on another partition, would be pretty useless
if the data is on the array too. So I could put in another drive, but
what are the usual redundancy requirements for the external log? I
assume that if it fails, the entire partition fails too.

So, one would need two raid setups to use this effectively? How much
data does the log usually use?

Thanks

-- 
Jerry Haltom <jhaltom <at> feedbackplusinc.com>
Feedback Plus, Inc.

Eric Sandeen | 1 Jan 18:32 2004

Re: XFS external log questions

There are a couple of reasons for using an external log.
If there is a dedicated block device for the log, then
log writes won't cause the head to seek away from the data,
which could speed things up.  Of course in a raid this is
probably less critical, as the notion of "the head" is a little
blurry.

Also, log writes are done in sector-sized units, while data
writes are done in filesystem block-sized units.  In software
MD raids, this size-switching is very inefficient.  Putting
the log on an external device alleviates this (as would setting
the sector size equal to the block size, but I think that codepath
is a bit less tested).  I don't think that hardware raid performance
suffers from this size-switching, but I suppose some implementations
might.  I'm not well-versed in hardware raids.

The log does not take up much space at all, maximum 65536
filesystem blocks, or 256M on a 4k block filesystem.  Defaults
are usually much smaller than this.
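
The 256M figure follows directly from the stated maximum of 65536 filesystem blocks:

```python
# Maximum XFS log size, per the limit stated above.
max_log_blocks = 65536
bsize = 4096                           # block size of a 4k filesystem
max_log_mib = max_log_blocks * bsize // 2**20
print(max_log_mib)                     # 256
```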

If the log device fails, you don't lose the whole filesystem,
but you will lose any metadata in the log.  You could recover
by replacing the device, and running xfs_repair -L to zero
out the log device and repairing the fallout from the missing
log.  Not ideal, but it should not be catastrophic.

-Eric

On Thu, 1 Jan 2004, Jerry Haltom wrote:


