Gordan Bobic | 1 Apr 01:06 2008

Re: Using GFS and DLM without RHCS

Danny Wall wrote:
> I was wondering if it is possible to run GFS on several machines with a
> shared GFS LUN, but not use full clustering like RHCS. From the FAQs:

First of all, what's the problem with having RHCS running? It doesn't 
mean you have to use it to handle resources failing over. You can run it 
all in an active/active setup with load balancing in front.

If this is not an acceptable solution for you and you still cannot be 
bothered to create cluster.conf (and that is all that is required), you 
can always use OCFS2. This doesn't have a cluster component (it's 
totally unrelated to RHCS), but you still have to create the equivalent 
config, so you won't be saving yourself any effort.
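For reference, a bare-bones cluster.conf is roughly the following (a sketch only — the node names, fence agent, and votes are illustrative assumptions, not a tested config):

```xml
<?xml version="1.0"?>
<!-- hypothetical two-node cluster; names and devices are placeholders -->
<cluster name="gfs_cluster" config_version="1">
  <cman two_node="1" expected_votes="1"/>
  <clusternodes>
    <clusternode name="node1" nodeid="1">
      <fence><method name="single"><device name="manual" nodename="node1"/></method></fence>
    </clusternode>
    <clusternode name="node2" nodeid="2">
      <fence><method name="single"><device name="manual" nodename="node2"/></method></fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice name="manual" agent="fence_manual"/>
  </fencedevices>
</cluster>
```

That really is about all the clustering configuration GFS needs if you aren't defining failover services.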

Gordan

Daniel Maher | 1 Apr 11:32 2008

Re: (newbie) mirrored data / cluster ?

On Mon, 31 Mar 2008 14:17:59 -0500 Chris Harms <chris <at> cmiware.com>
wrote:

> The non-SAN option would be to use DRBD (http://www.drbd.org) and put 
> NFS, Samba, etc on top of the DRBD partition.

Thank you for your reply.

On this topic, consider this paper by Lars Ellenberg :
http://www.drbd.org/fileadmin/drbd/publications/drbd8.linux-conf.eu.2007.pdf

Where he notes the following :
"The most inconvenient limitations is currently that DRBD supports only
two nodes natively."

While this is not a problem in my theoretical two-server setup, should
we wish to add a third server in the future (which i find highly
likely), then DRBD will no longer be an appropriate solution.

Furthermore, that same paper seems to suggest that DRBD is best used in
a primary / secondary relationship, whereas i'm suggesting an
"all-primary" sort of setup.

-- 
Daniel Maher <dma AT witbe.net>

Daniel Maher | 1 Apr 11:41 2008

Re: (newbie) mirrored data / cluster ?

On Mon, 31 Mar 2008 13:57:46 -0500 "MARTI, ROBERT JESSE"
<RJM002 <at> shsu.edu> wrote:

> You don't have to have a mirrored LVM to do what youre trying to do.
> You just need a common mountable share - typically a SAN or NAS.  It
> shouldn't be too hard to configure (and I've already done it).  You
> don't even *have* to have cluster suite - if you have a load balancer.
> My brain isn't fast enough today to figure out how to share a load
> without a load balanced VIP or a DNS round robin (which should be easy
> to do as well).

Thank you for your reply.  As for your suggestion of having a common
mountable share - well, yes, that's exactly what i'm trying to do.
I want to take two servers and create a NAS device from them.  I don't
already have a load balancer, but using RRDNS is straightforward
enough.
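(For the archives: round-robin DNS is just multiple A records for the same name. A hypothetical BIND zone fragment, with example addresses, would look like:)

```
; zone file fragment - two A records for one name;
; BIND rotates the order of the answers by default
nas    IN  A  192.0.2.10
nas    IN  A  192.0.2.11
```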

The other aspect of this initiative is to gain some hands-on
experience with Cluster Suite, as we'd like to clusterise our front-end
web servers down the road as well.

-- 
Daniel Maher <dma AT witbe.net>

gordan | 1 Apr 11:50 2008

Re: (newbie) mirrored data / cluster ?

On Tue, 1 Apr 2008, Daniel Maher wrote:

>> The non-SAN option would be to use DRBD (http://www.drbd.org) and put
>> NFS, Samba, etc on top of the DRBD partition.
>
> On this topic, consider this paper by Lars Ellenberg :
> http://www.drbd.org/fileadmin/drbd/publications/drbd8.linux-conf.eu.2007.pdf
>
> Where he notes the following :
> "The most inconvenient limitations is currently that DRBD supports only
> two nodes natively."

I'm not 100% sure, but I think this limit is increased in the latest 8.0 
and 8.2 releases.

> While this is not a problem in my theoretical two-server setup, should
> we wish to add a third server in the future (which i find highly
> likely), then DRBD will no longer be an appropriate solution.

I'd double check that this is still a limitation. Ask on the DRBD list.

> Furthermore, that same paper seems to suggest that DRBD is best used in
> a primary / secondary relationship, whereas i'm suggesting an
> "all-primary" sort of setup.

That is the way it has been used traditionally with DRBD <= 7.x, but for a 
while now primary/primary operation has been fully supported (obviously, 
you need to use a FS that is aware of such things, such as GFS(2) or 
OCFS(2)).
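For what it's worth, dual-primary in DRBD 8 is switched on in the resource configuration, along these lines (a sketch under assumed names — resource, devices, and addresses are placeholders; check the drbd.conf man page for your version):

```
# drbd.conf fragment (DRBD 8.x syntax; names are placeholders)
resource r0 {
  net {
    allow-two-primaries;              # permit primary/primary, needed for GFS(2)/OCFS2
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
  }
  on node1 { device /dev/drbd0; disk /dev/sdb1; address 192.0.2.1:7788; meta-disk internal; }
  on node2 { device /dev/drbd0; disk /dev/sdb1; address 192.0.2.2:7788; meta-disk internal; }
}
```

The after-sb-* policies matter in dual-primary mode, since a split brain with both sides writable otherwise leaves you reconciling divergent data by hand.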


Danny Wall | 1 Apr 18:45 2008

Re: Using GFS and DLM without RHCS

Danny Wall wrote:
> I was wondering if it is possible to run GFS on several machines with
> a shared GFS LUN, but not use full clustering like RHCS. From the FAQs:

> First of all, what's the problem with having RHCS running? It doesn't 
> mean you have to use it to handle resources failing over. You can run
> it all in active/active setup with load balancing in front.

I was looking to minimize everything as much as possible, so if it is
not needed, do not install it. This would reduce problems with updates
and overall management. Having said that, your solution is still a
better alternative for my needs, and options like this are what I am
looking for. Thanks

> If this is not an acceptable solution for you and you still cannot be 
> bothered to create cluster.conf (and that is all that is required), 
> you can always use OCFS2. This doesn't have a cluster component (it's 
> totally unrelated to RHCS), but you still have to create the 
> equivalent config, so you won't be saving yourself any effort.

> Gordan

OCFS is out of the question. OCFS cannot handle the number of files and
directories on these servers. 

I don't technically need a cluster, but the cluster filesystem allows me
to have multiple servers with access to the storage at the same time,
reducing downtime, and allowing for processes like backups to run on a
different server and not overload the server used by the end users. If I
(Continue reading)

Gary Romo | 1 Apr 19:38 2008

VIP's on mixed subnets


In my cluster all of my servers NICs are bonded.
Up until recently all of my VIPs (for resources/services) were in the same subnet.
Is it ok that VIPs be in mixed subnets?  Thanks.

Gary Romo
IBM Global Technology Services
303.458.4415
Email: garromo <at> us.ibm.com
Pager:1.877.552.9264
Text message: gromo <at> skytel.com
Tomasz Sucharzewski | 1 Apr 21:29 2008

Re: (newbie) mirrored data / cluster ?

Hello,

BTW, do you know of any software solution on Linux that supports asynchronous replication, like AVS on Solaris?

Best regards,
Tomek 

On Mon, 31 Mar 2008 14:17:59 -0500
Chris Harms <chris <at> cmiware.com> wrote:

> The non-SAN option would be to use DRBD (http://www.drbd.org) and put 
> NFS, Samba, etc on top of the DRBD partition.
> 
> Chris
> 
> MARTI, ROBERT JESSE wrote:
> > You don't have to have a mirrored LVM to do what youre trying to do.
> > You just need a common mountable share - typically a SAN or NAS.  It
> > shouldn't be too hard to configure (and I've already done it).  You
> > don't even *have* to have cluster suite - if you have a load balancer.
> > My brain isn't fast enough today to figure out how to share a load
> > without a load balanced VIP or a DNS round robin (which should be easy
> > to do as well).
> >
> > Rob Marti
> > Systems Analyst II
> > Sam Houston State University
> >
> > -----Original Message-----
> > From: linux-cluster-bounces <at> redhat.com
> > [mailto:linux-cluster-bounces <at> redhat.com] On Behalf Of Daniel Maher
> > Sent: Monday, March 31, 2008 12:40 PM
> > To: linux-cluster <at> redhat.com
> > Subject: [Linux-cluster] (newbie) mirrored data / cluster ?
> >
> > Hello all,
> >
> > I have spent the day reading through the mailing list archives, Redhat
> > documentation, and CentOS forums, and - to be frank - my head is now
> > swimming with information.
> >
> > My scenario seems reasonably straightforward : I would like to have two
> > file servers which mirror each others' data, then i'd like those two
> > servers to act as a cluster, whereby they serve said data as if they
> > were one machine.  If one of the servers suffers a critical failure, the
> > other will stay up, and the data will continue to be accessible to the
> > rest of the network.
> >
> > I note with some trepidation that this might not be possible, as per
> > this document :
> > http://www.redhat.com/docs/manuals/enterprise/RHEL-5-manual/en-US/RHEL510/Cluster_Logical_Volume_Manager/mirrored_volumes.html
> >
> > However, i don't know if that document relates to the same scenario i've
> > described above.  I would very much appreciate any and all feedback,
> > links to further documentation, and any other information that you might
> > like to share.
> >
> > Thank you !
> >
> >
> > --
> > Daniel Maher <dma AT witbe.net>
> >
> > --
> > Linux-cluster mailing list
> > Linux-cluster <at> redhat.com
> > https://www.redhat.com/mailman/listinfo/linux-cluster
> >   
> 
> --
> Linux-cluster mailing list
> Linux-cluster <at> redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster

-- 
Tomasz Sucharzewski <tsucharz <at> poczta.onet.pl>

अनुज Anuj Singh | 1 Apr 23:23 2008

distributed file system... can we achieve this effectively using linux

Hi,

How can we create a common Q drive using linux that meets the following needs ?

It should be possible to logically divide the common Q drive into smaller partitions, each managed by a custodian.

The custodian of a partition should be able to monitor and control the usage of that partition.

Presently, Q drives are used as shared folders at different locations over the WAN (so network traffic and server load will be a factor; not all the files on the Q drive are required at all locations).

The present Q drives are on the Windows platform.

Do we have a better option than Microsoft's DFS?

Thanks and Regards
Anuj

Andrew A. Neuschwander | 2 Apr 02:19 2008

dlm high cpu on latest stock centos 5.1 kernel

I have a GFS cluster with one node serving files via smb and nfs. Under
fairly light usage (5-10 users) the cpu is getting pounded by dlm. I am
using CentOS5.1 with the included kernel (2.6.18-53.1.14.el5). This sounds
like the dlm issue mentioned back in March of last year
(https://www.redhat.com/archives/linux-cluster/2007-March/msg00068.html)
that was resolved in 2.6.21.

Has this fix been (or will it be) backported to the current el5 kernel? Will
it be in RHEL 5.2? What is the easiest way for me to get this fix?

Also, if I try a newer kernel on this node, will there be any harm in the
other nodes using their current kernel?

Thanks,
-Andrew
-- 
Andrew A. Neuschwander, RHCE
Linux Systems Administrator
Numerical Terradynamic Simulation Group
College of Forestry and Conservation
The University of Montana
http://www.ntsg.umt.edu
andrew <at> ntsg.umt.edu - 406.243.6310

David Ayre | 2 Apr 02:30 2008

Re: dlm high cpu on latest stock centos 5.1 kernel

What do you mean by "pounded", exactly?

We have an ongoing issue, similar... when we have about a dozen users using both smb/nfs, at some seemingly random point in time our dlm_senddd chews up 100% of the CPU... then dies down on its own after quite a while.  Killing SMB processes or shutting down SMB didn't seem to have any effect... only a reboot cures it.  I've seen this described (if this is the same issue) as a "soft lockup", as it does seem to come back to life:

http://lkml.org/lkml/2007/10/4/137

We've been assuming it's a kernel/dlm version issue, as we are running 2.6.9-55.0.6.ELsmp with dlm-kernel 2.6.9-46.16.0.8.

We were going to try a kernel update this week... but you seem to be using a later version and still have this problem?

Could you elaborate on "getting pounded by dlm"?  I've posted about this on this list in the past but received no assistance.




On 1-Apr-08, at 5:19 PM, Andrew A. Neuschwander wrote:
I have a GFS cluster with one node serving files via smb and nfs. Under
fairly light usage (5-10 users) the cpu is getting pounded by dlm. I am
using CentOS5.1 with the included kernel (2.6.18-53.1.14.el5). This sounds
like the dlm issue mentioned back in March of last year
(https://www.redhat.com/archives/linux-cluster/2007-March/msg00068.html)
that was resolved in 2.6.21.

Has (or will) this fix be back ported to the current el5 kernel? Will it
be in RHEL5.2? What is the easiest way for me to get this fix?

Also, if I try a newer kernel on this node, will there be any harm in the
other nodes using their current kernel?

Thanks,
-Andrew
--
Andrew A. Neuschwander, RHCE
Linux Systems Administrator
Numerical Terradynamic Simulation Group
College of Forestry and Conservation
The University of Montana
http://www.ntsg.umt.edu
andrew <at> ntsg.umt.edu - 406.243.6310

--
Linux-cluster mailing list
Linux-cluster <at> redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster

~_~_~_~_~_~_~_~_~_~_~_~_~_~_~_~_~_~_~_~_~_~
David Ayre
Programmer/Analyst - Information Technlogy Services 
Emily Carr Institute of Art and Design
Vancouver, B.C.   Canada
604-844-3875 /  david <at> eciad.ca

