Ming Zhang | 31 Jan 22:41 2005

Any special requirements on SCSI devices?

Hi, folks

I wonder if OpenGFS has any special requirements on SCSI devices, for
example the reserve/release SCSI commands.

The reason I ask is that we are developing an open source iSCSI target
and would like to support OpenGFS. Since we build all the SCSI responses
ourselves, we need to know which extra SCSI commands or features we need
to support.
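
For reference, these are the reservation-related opcodes such a target
would have to recognize in the CDB.  This is only a minimal sketch: the
helper name is invented, the opcode values come from the SPC standard.

#include <stdint.h>
#include <stdbool.h>

/* Sketch only, not from any existing target.  A target that chooses not
 * to implement reservations would typically fail these with CHECK
 * CONDITION, sense key ILLEGAL REQUEST, ASC/ASCQ 0x20/0x00
 * (INVALID COMMAND OPERATION CODE). */
static bool cdb_is_reservation_op(const uint8_t *cdb)
{
        switch (cdb[0]) {
        case 0x16:      /* RESERVE(6)  */
        case 0x17:      /* RELEASE(6)  */
        case 0x56:      /* RESERVE(10) */
        case 0x57:      /* RELEASE(10) */
        case 0x5e:      /* PERSISTENT RESERVE IN  */
        case 0x5f:      /* PERSISTENT RESERVE OUT */
                return true;
        default:
                return false;
        }
}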

thanks.

ming

Dominik Vogt | 29 Sep 19:31 2004

Re: GFS at the Linuxtag and OpenGFS

On Tue, Sep 28, 2004 at 11:29:25AM -0700, Cahill, Ben M wrote:
> Hi Dominik,
> 
> I meant to reply to this a long time ago ... sorry.
> 
> Nice to hear from you!
> 
> Actually, they haven't leveraged the OpenGFS work at all ... they've
> open-sourced the commercial Sistina version, which they've continued to
> improve over the course of time.
> 
> The Redhat GFS/GDLM combination seems pretty stable, and works well ...
> much better than the current OpenGFS/OpenDLM combination
> (although OpenGFS/memexp seems pretty stable, and works pretty well).
> 
> So, I've pretty much moved over to GFS now, and I don't think that I'll
> be doing anything more to get OpenDLM working with OpenGFS, although we
> were getting pretty close ... the open-source release of the RedHat DLM
> was a big surprise.

> There isn't really any "merging" going on ... about the only things that
> OpenGFS would have to offer would be the external journals (in which
> nobody seems very interested) and documentation, unless you can think of
> anything else.

Sad.  I can't remember the details, but there were quite a few
things that we had already done in OpenGFS that they want to
duplicate.  If they don't use our code, they're really wasting
time.  I hope they *at least* leverage Alan Cox's audit of the
code.
(Continue reading)

Steve Landherr | 13 Aug 03:08 2004

Weird permission checking on unlink

Howdy, folks.

I've run into a strange problem removing files in OpenGFS.  Here's the
scenario:

As root:
# mkdir /ogfs/test
# chmod 707 /ogfs/test

As some other user:
$ touch /ogfs/test/file
$ rm /ogfs/test/file

The touch succeeds, but the rm fails with Operation not permitted (EPERM).

Now as root:
# chown other_user /ogfs/test

And as other_user:
$ rm /ogfs/test/file

Now the rm succeeds.

This problem seems to be in the ogfs_dir_permissions() function, introduced
in rev 1.4 of src/fs/arch_linux_2_4/inode_linux.c.  It was added in Jan 2002
to fix a problem where the group list was not being honored for
unlink/renames.  But it has the nasty side-effect of ignoring the world
permission bits altogether.
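
For comparison, here is a sketch of the conventional owner/group/world
rule (this is not the OpenGFS code, and the helper name is made up): with
the mode-707 directory above, the other user should fall through to the
world bits and the unlink should be allowed.

#include <errno.h>
#include <sys/stat.h>
#include <sys/types.h>

/* Illustrative only: which permission bits apply to this caller? */
static int may_unlink_in_dir(uid_t uid, gid_t gid, const struct stat *dir)
{
        mode_t bits;

        if (uid == dir->st_uid)
                bits = (dir->st_mode >> 6) & 7;  /* owner bits */
        else if (gid == dir->st_gid)             /* real code would also walk
                                                    the supplementary groups */
                bits = (dir->st_mode >> 3) & 7;  /* group bits */
        else
                bits = dir->st_mode & 7;         /* world bits: the case the
                                                    current check ignores */

        /* unlinking needs write and search (execute) permission on the dir */
        return ((bits & 3) == 3) ? 0 : -EPERM;  /* surfaces as the EPERM above */
}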

As a reference point, I took a look at the ext3 code, and it does no
(Continue reading)

Dominik Vogt | 9 Aug 10:15 2004

GFS at the Linuxtag and OpenGFS

A couple of weeks ago, I visited Linuxtag 2004 in Karlsruhe
(Germany) and listened to a GFS presentation by the RedHat staff.
Since they claimed they had already contacted the OpenGFS
developers, I assumed they had already informed the mailing list
of what they are planning.  Looking at the archives, this seems to
have been wishful thinking.

I don't remember all the details anymore, but here is the
information I can remember (my comments are in brackets).

 * Nobody needs OpenGFS anymore and it is going to vanish.
   [Seems to be marketing blabla]

 * The RedHat developers are merging the GFS and OpenGFS projects.

 * They are working on a DLM based lock manager.

 * GFS conforms to POSIX.
   [I asked if that means that they fixed the issues that violate
   POSIX, e.g. flock() behaviour, but got no answer]

 * GFS has a working redundant lock manager (RLM) which is an
   improved version of the old single lock manager (SLM).  Locks
   are still managed on a central lock server, but any other node
   can become the lock server.  A cluster of lock manager nodes
   separate from the cluster of GFS nodes can be used.

 * GFS scales very well.  They claim one of their customers has
   a GFS cluster with more than 100 nodes.

(Continue reading)

Cahill, Ben M | 4 Aug 00:20 2004

FW: Call for presentation materials; Attendee List

FYI, some follow-up from the Minneapolis meeting last week.

Note that they're requesting comments and suggestions for making GDLM
compatible with other applications, among other things.  Their API is a
bit different from OpenDLM.

In case you're wondering, I removed the list of attendees, in case
anyone is sensitive to that sort of thing.

-- Ben --

Opinions are mine, not Intel's

-----Original Message-----
From: Daniel Phillips [mailto:phillips <at> redhat.com] 
Sent: Tuesday, August 03, 2004 2:28 PM

Hi folks,

And thanks for your part in making the first-ever Minneapolis Cluster 
Summit a great success.  Here is the attendee list as promised, in the 
form of a massive cc list.  You're encouraged to "Reply All" with any 
comments, suggestions, gripes, flames or other constructive material.  
Please don't worry about generating n-squared traffic, as n is yet low.

There may be a few names on the list who didn't actually make it; no 
matter, I'd rather err on the side of not leaving anybody out who was 
there.  If you know of anybody I left out, could you please email me.

Speakers:
(Continue reading)

Walker, Bruce J | 31 Jul 18:00 2004

RE: [Linux-cluster] Re: [ANNOUNCE] OpenSSI 1.0.0 released!!

Kevin,
   Got out of bed on the wrong side?  Such anger.  First, the
clusterwide device capability is a very small part of OpenSSI, so your
comment "put the entire clustering layer on top of it" is COMPLETELY
wrong - you clearly are commenting on something you know nothing
about.  In the 2.4 implementation, providing this one capability by
leveraging devfs was quite economical and efficient, and it has been very stable.
I'm not sure who you mean by "that's what WE want".  If you mean the
current worldwide users of OpenSSI on 2.4, they are a very happy group
with a kick-ass clustering capability.

About one thing you are correct.  We are going to have to have a way to
look up and name remote devices in 2.6.  I believe the remote file-op
mechanism we are using in 2.4 will adapt easily.

Bruce Walker
Architect and project manager - OpenSSI project

> -----Original Message-----
> From: linux-cluster-bounces <at> redhat.com 
> [mailto:linux-cluster-bounces <at> redhat.com] On Behalf Of Kevin 
> P. Fleming
> Sent: Saturday, July 31, 2004 7:41 AM
> To: Linux Kernel Mailing List
> Cc: linux-cluster <at> redhat.com; 
> opengfs-devel <at> lists.sourceforge.net; 
> opengfs-users <at> lists.sourceforge.net; 
> opendlm-devel <at> lists.sourceforge.net
> Subject: [Linux-cluster] Re: [ANNOUNCE] OpenSSI 1.0.0 released!!
> 
(Continue reading)

Aneesh Kumar K.V | 31 Jul 13:21 2004

[ANNOUNCE] OpenSSI 1.0.0 released!!

Hi,

Sorry for the cross post. I came across this on the OpenSSI website. I guess
others may also be interested.

-aneesh

The OpenSSI project leverages both HP's NonStop Clusters for Unixware 
technology and other open source technology to provide a full, highly 
available Single System Image environment for Linux.

Feature list:
1.  Cluster Membership
   * includes libcluster that applications can use
2. Internode Communication

3. Filesystem
    * support for CFS over ext3 and Lustre Lite
    * CFS can be used for the root
    * reopen of files, devices, IPC objects when processes move is supported
    * CFS supports file record locking and shared writable mapped files
      (along with all other standard POSIX capabilities)
    * HA-CFS is configurable for the root or other filesystems
4. Process Management
     * almost all pieces there, including:
           o clusterwide PIDs
           o process migration and distributed rexec(), rfork() and 
migrate() with reopen of files, sockets, pipes, devices, etc.
           o vprocs
           o clusterwide signalling, get/setpriority
(Continue reading)

Steve Landherr | 27 Jul 21:17 2004

Stabilizing some OpenGFS corner cases

As I have been working with OpenGFS, I have come across several system
crashes.  I checked in a few of the simpler fixes this morning, but I
have a couple of additional fixes on which I would like feedback.

1) OGFS_ASSERT(list_empty(&sdp->sd_log_ail),); in ogfs_shutdown_log()

An easy way to reproduce is to start "iozone -a" on an OpenGFS filesystem in
the background.  Chdir out of the OpenGFS filesystem and wait 10-20 seconds.
Kill the iozone, and unmount the filesystem immediately.  My node takes the
assert every time.

The problem is that there are dirty buffers associated with transactions on
the AIL at the time ogfs_pull_tail() is called from ogfs_put_super().  This
causes the transactions to remain on the AIL, and then ogfs_shutdown_log()
takes the assert.

My fix involves creating a new function called ogfs_ail_flush(), modeled
after ogfs_trans_check_empty() and clear_from_ail().  This function gets
called in a loop along with ogfs_pull_tail() until the AIL is empty.  Only
then is ogfs_shutdown_log() called by ogfs_put_super().

I have attached a patch that I have been using for about a month without
problems.
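
In outline, the resulting shutdown path looks something like the sketch
below.  The ogfs_sbd_t type and the exact signatures of ogfs_ail_flush()
and ogfs_pull_tail() are guesses for illustration; the attached patch has
the real code.

/* Sketch only; called from ogfs_put_super() in place of a bare
 * ogfs_shutdown_log() call. */
static void flush_ail_then_shutdown(ogfs_sbd_t *sdp)
{
        /* keep flushing until no transactions remain on the AIL */
        while (!list_empty(&sdp->sd_log_ail)) {
                ogfs_ail_flush(sdp);    /* write back dirty AIL buffers      */
                ogfs_pull_tail(sdp);    /* let finished transactions leave   */
        }
        ogfs_shutdown_log(sdp);         /* its assert on sd_log_ail now holds */
}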

2) OGFS_ASSERT(*block != BLKALLOC_INTERNAL_NOENT,); in ogfs_blkalloc()

This assert has since been replaced with a return of -EIO, but the problem
still remains.

This happens when the filesystem is near capacity and a reservation is made
(Continue reading)

Cahill, Ben M | 27 Jul 19:07 2004

Recovery in opendlm now has multiple instance support

Hi all,

Yesterday, I checked in some changes to the opendlm lock module to
"instancize" the recovery stuff.  That means you should be able to mount
multiple OpenGFS filesystems on the same machine.

It's still kind of buggy, running into lock deadlocks (error 30, if you
turn on debug output in the opendlm lock module), and mysterious hangs,
so it is *not* production-ready.  But I have mounted two filesystems
simultaneously.

Unfortunately, I'm not sure how much time I'll be able to put into
further debug/development ... my focus will likely be turning to the
RedHat implementation.

-- Ben --

Opinions are mine, not Intel's

Cahill, Ben M | 13 Jul 22:31 2004

RE: [ogfs-users]RE: Problems with opengfs + opendlm on RHEL 3


> -----Original Message-----
> From: opengfs-users-admin <at> lists.sourceforge.net 
> [mailto:opengfs-users-admin <at> lists.sourceforge.net] On Behalf 
> Of Marc Swanson
> Sent: Tuesday, July 13, 2004 3:11 PM
> To: opengfs-users <at> lists.sourceforge.net
> Subject: [ogfs-users]RE: Problems with opengfs + opendlm on RHEL 3
> 
> Solved the problems I was having before by removing all existing
> ultramonkey heartbeat rpms (including libnet) and compiling from src.
> I also needed to use LD_ASSUME_KERNEL=2.4 on top of all of that.

Congratulations!  And thanks for sharing.  I'm going to cross-post to
OpenDLM list to let that crew know about your experience.

> 
> So now that I got it working I'm running into further problems, some
> of which I can work around and others I'm struggling to understand
> how to fix.
> 
> Problem 1:  Mounting more than one ogfs filesystem with opendlm does
> not seem very stable at all, true?

Yes, true ... there are a bunch of global/static variables for recovery
in current CVS, and sharing between filesystems is not healthy right now
... I'm working on some changes that will "instancize" recovery stuff.
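
In practice, "instancize" means roughly this kind of change; the names
below are invented for illustration and are not the actual CVS code.

/* Invented names, illustration only: recovery state moves from file-scope
 * statics into one structure per mounted filesystem. */
struct ogfs_recovery_state {
        int          in_recovery;      /* was a module-wide static flag    */
        unsigned int pending_jid;      /* journal currently being replayed */
};
/* One instance hangs off each mount's superblock data instead of one
 * global copy shared by every OpenGFS filesystem mounted on the node. */
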
(Continue reading)

Cahill, Ben M | 8 Jul 04:36 2004

We're invited to RedHat!

Hi all,

RedHat has extended an invitation to meet face to face with their
(Sistina) cluster component engineers (GFS and DLM).

See our project websites:

opengfs.sourceforge.net
opendlm.sourceforge.net

  for a bit more info and a mailto: to RedHat's Daniel Phillips.

-- Ben --

Opinions are mine, not Intel's
