mike | 1 Jun 04:06 2010

Re: explain the difference between servers?

mike wrote:
> Nikita Michalko wrote:
>   
>> Hi mike,
>>
>> it doesn't seem to be an HA problem anymore, but:
>>
>> On Monday, 31 May 2010 at 01:29, mike wrote:
>>   
>>     
>>> So I've got ldirectord up and running just fine, providing LDAP high
>>> availability across 2 backend real servers on port 389.
>>>
>>> Here is the output of netstat on both real servers:
>>> tcp        0      0 0.0.0.0:389     0.0.0.0:*       LISTEN
>>> tcp        0      0 :::389          :::*            LISTEN
>>>
>>> So I used the same director server to make another application highly
>>> available: JBoss, running on port 8080. If I look at the director server,
>>> the output of ipvsadm shows both real servers alive and well.
>>>
>>> [root <at> lvsuat1a ha.d]# ipvsadm
>>> IP Virtual Server version 1.2.1 (size=4096)
>>> Prot LocalAddress:Port Scheduler Flags
>>>   -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
>>> TCP  esbuat1.vip.intranet.mydom lc
>>>   -> gasayul9300602.intranet.mydom Tunnel  1      0          0

Andrew Beekhof | 1 Jun 08:27 2010

Re: Active-Active nfs storage

On Mon, May 31, 2010 at 4:22 PM, RaSca <rasca <at> miamammausalinux.org> wrote:
> Hi all,
> I have a two-node cluster configured to mount two DRBD devices, each with
> LVM and a filesystem on top. I need each DRBD device to run on a different
> node, for an active-active storage setup, so I have two groups like these:
>
> group share-a share-a-ip share-a-LVM share-a-fs
> group share-b share-b-ip share-b-LVM share-b-fs
>
> Now I want to export the filesystems via NFS from each node, so each node
> needs to run the nfs-kernel-server daemon as well as the exportfs resource
> agent that manages the exports.
> As you can imagine, the problem is that I cannot put the nfs-kernel-server
> daemon in either of the two groups, so I decided to create a cloned resource
> for the nfs daemon.
> But here is the other problem: I can configure the different exports in
> the group, like this:
>
> group share-a share-a-ip share-a-LVM share-a-fs share-a-exportfs
> group share-b share-b-ip share-b-LVM share-b-fs share-b-exportfs
>
> but IF the export is mounted by a client that is writing to it, and IF the
> group switches, then the migration fails. The sequence is this:
>
> - The exportfs resource stops correctly (the RA runs exportfs -u)
> - The Filesystem resource tries to unmount the exported filesystem,
> running fuser to check whether any processes are holding the fs.
> - fuser returns nothing, but the filesystem is still busy.
> This happens because the kernel process nfsd is holding the FS.
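
For reference, the cloned-daemon approach described above would look roughly
like this in crm shell (a sketch only; the p-nfsd/cl-nfsd names and the order
constraints are illustrative, the lsb script name is the nfs-kernel-server
daemon mentioned in the post):

primitive p-nfsd lsb:nfs-kernel-server \
        op monitor interval="30s"
clone cl-nfsd p-nfsd
group share-a share-a-ip share-a-LVM share-a-fs share-a-exportfs
group share-b share-b-ip share-b-LVM share-b-fs share-b-exportfs
order share-a-after-nfsd inf: cl-nfsd share-a
order share-b-after-nfsd inf: cl-nfsd share-b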

RaSca | 1 Jun 11:33 2010

Re: Active-Active nfs storage

On Tue, 01 Jun 2010 08:27:31 CET, Andrew Beekhof wrote:
[...]
> Who is in charge of stopping/detaching nfsd? exportfs perhaps?
> The solution is to get that part working; until it does, the cluster won't work.

The exportfs RA actually has no way of acting on the running nfsd; it would 
have to be modified to handle nfsd as well.

So at the moment this resource agent is useless for this purpose, because it 
does not handle any of the tasks described.

I patched the script to restart nfsd, based on what the nfsserver RA does. 
This way things work, though it seems a little ugly... 
I'm going to run some tests and then post the patch.

--

-- 
RaSca
Mia Mamma Usa Linux: Niente è impossibile da capire, se lo spieghi bene!
rasca <at> miamammausalinux.org
http://www.miamammausalinux.org
_______________________________________________
Linux-HA mailing list
Linux-HA <at> lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems

RaSca | 1 Jun 15:20 2010

Re: Active-Active nfs storage

On Tue, 01 Jun 2010 11:33:39 CET, RaSca wrote:
[...]
> I patched the script to restart nfsd, based on what the nfsserver RA does.
> This way things work, though it seems a little ugly...
> I'm going to run some tests and then post the patch.

The patch is attached. It works, even if the solution still looks a little 
bit ugly to me :-)
Some notes: there is a new parameter named nfs_init_script, which defaults 
to /etc/init.d/nfs-kernel-server.
The RA uses nfs_init_script to restart the nfs daemon during the stop 
operation.
When the resource is migrated, the script removes the export with 
exportfs -u and then restarts the nfsd daemon.
Note that if you have two exportfs resources on the same node, e.g. 
share-a-exportfs and share-b-exportfs, and you force the migration of only 
share-a-exportfs, then share-b-exportfs will fail: its monitor will not see 
nfsd for a while, because nfsd is being restarted by share-a-exportfs. The 
share-b-exportfs resource is then restarted immediately, share-a-exportfs is 
activated on the other node, and the migration finally completes successfully.
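
To give an idea of the change without opening the attachment, the stop path 
of the patched RA now does roughly the following (a sketch only; the function 
and variable names simply follow the usual OCF conventions, and the attached 
patch remains the authoritative version):

exportfs_stop() {
        # unexport the share, as the RA already did before the patch
        exportfs -u "${OCF_RESKEY_clientspec}:${OCF_RESKEY_directory}" || return $OCF_ERR_GENERIC
        # then restart nfsd via the configurable init script so it releases the filesystem
        "${OCF_RESKEY_nfs_init_script:-/etc/init.d/nfs-kernel-server}" restart || return $OCF_ERR_GENERIC
        return $OCF_SUCCESS
}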

If you have any kind of suggestion to make this look better, you're welcome.

Thanks,

--

-- 
RaSca
Mia Mamma Usa Linux: Niente è impossibile da capire, se lo spieghi bene!
rasca <at> miamammausalinux.org

Chris May | 1 Jun 15:31 2010

Re: FSCK Error

Sorry the full error is below followed by the configs.

RA output: (WebFS:start:stderr) 2010/06/01_09:25:09 ERROR: Couldn't
sucessfully fsck filesystem for /dev/mapper/VolGroup-drbd--demo

Cluster configs:

node cluster1 \
        attributes standby="off"
node cluster2
primitive ClusterIP ocf:heartbeat:IPaddr2 \
        params ip="172.16.101.194" cidr_netmask="32" \
        op monitor interval="30s"
primitive WebData ocf:linbit:drbd \
        params drbd_resource="wwwdata" \
        op monitor interval="60s"
primitive WebFS ocf:heartbeat:Filesystem \
        params device="/dev/mapper/VolGroup-drbd--demo" \
        directory="/var/www/html" fstype="ext4"
primitive WebSite ocf:heartbeat:apache \
        params configfile="/etc/httpd/conf/httpd.conf" \
        op monitor interval="1min" \
        meta target-role="Started"
ms WebDataClone WebData \
        meta master-max="1" master-node-max="1" clone-max="2"
clone-node-max="1" notify="true"
colocation WebSite-with-WebFS inf: WebSite WebFS
colocation fs_on_drbd inf: WebFS WebDataClone:Master
order WebFS-after-WebData inf: WebDataClone:promote WebFS:start
order WebSite-after-WebFS inf: WebFS WebSite

Chris May | 1 Jun 15:33 2010

Re: FSCK Error

On Tue, Jun 1, 2010 at 9:31 AM, Chris May <cmay <at> stonemor.com> wrote:

> Sorry the full error is below followed by the configs.
>
> RA output: (WebFS:start:stderr) 2010/06/01_09:25:09 ERROR: Couldn't
> sucessfully fsck filesystem for /dev/mapper/VolGroup-drbd--demo

>
>
> Cluster configs:
>
> node cluster1 \
>         attributes standby="off"
> node cluster2
> primitive ClusterIP ocf:heartbeat:IPaddr2 \
>         params ip="172.16.101.194" cidr_netmask="32" \
>         op monitor interval="30s"
> primitive WebData ocf:linbit:drbd \
>         params drbd_resource="wwwdata" \
>         op monitor interval="60s"
> primitive WebFS ocf:heartbeat:Filesystem \
>         params device="/dev/mapper/VolGroup-drbd--demo"
> directory="/var/www/html" fstype="ext4"
> primitive WebSite ocf:heartbeat:apache \
>         params configfile="/etc/httpd/conf/httpd.conf" \
>         op monitor interval="1min" \
>         meta target-role="Started"
> ms WebDataClone WebData \
>         meta master-max="1" master-node-max="1" clone-max="2"
> clone-node-max="1" notify="true"

Mozafar Roshany | 1 Jun 16:45 2010

Re: Solution for Auto-Mount/Unmount an Ext3 Filesystem

Thanks, Karl. It did work. This made me familiar with the details of
Heartbeat configuration.

And thank you, David, for that nice point. I did that with "tune2fs -i 0
/dev/sda3".

Karl wrote:
>
> I use just such a set-up on a three-node cluster of web servers; the disk
> system gets mounted automatically on the active node.  (The SAN serves it up
> to all three, and any of the three can mount it, but the resource agent sees
> to it that it is only mounted by one node at a time.)
>
> Mine looks something like this:
>
>       <primitive class="ocf" id="My_Disk" provider="heartbeat" type="Filesystem">
>         <operations>
>           <op id="My_Disk_mon" interval="25s" name="monitor" timeout="50s"/>
>         </operations>
>         <instance_attributes id="My_Disk_inst_attr">
>             <nvpair id="My_Disk_dev" name="device" value="/dev/sdb1"/>
>             <nvpair id="My_Disk_mountpoint" name="directory" value="/path_to_mount"/>
>             <nvpair id="My_Disk_fstype" name="fstype" value="ext3"/>
>             <nvpair id="My_Disk_status" name="target_role" value="started"/>

Dejan Muhamedagic | 1 Jun 16:52 2010

Re: FSCK Error

Hi,

On Tue, Jun 01, 2010 at 09:31:53AM -0400, Chris May wrote:
> Sorry the full error is below followed by the configs.
> 
> RA output: (WebFS:start:stderr) 2010/06/01_09:25:09 ERROR: Couldn't
> sucessfully fsck filesystem for /dev/mapper/VolGroup-drbd--demo

So, is that device in a volume group? It looks like it, but you don't
have an LVM resource defined to activate the volume group. Otherwise,
if you can't fsck that filesystem from the command line, then it
needs to be fixed.
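
Something along these lines might do it (a sketch only; the volume group name
"VolGroup" is inferred from the device path, and the resource and constraint
names are illustrative):

primitive WebVG ocf:heartbeat:LVM \
        params volgrpname="VolGroup" \
        op monitor interval="60s"
colocation vg_on_drbd inf: WebVG WebDataClone:Master
colocation fs_with_vg inf: WebFS WebVG
order WebVG-after-WebData inf: WebDataClone:promote WebVG:start
order WebFS-after-WebVG inf: WebVG WebFS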

Thanks,

Dejan

> Cluster configs:
> 
> node cluster1 \
>         attributes standby="off"
> node cluster2
> primitive ClusterIP ocf:heartbeat:IPaddr2 \
>         params ip="172.16.101.194" cidr_netmask="32" \
>         op monitor interval="30s"
> primitive WebData ocf:linbit:drbd \
>         params drbd_resource="wwwdata" \
>         op monitor interval="60s"
> primitive WebFS ocf:heartbeat:Filesystem \
>         params device="/dev/mapper/VolGroup-drbd--demo"

Hénaux Didier | 1 Jun 12:37 2010

Heartbeat unable to start "Resource is stopped"

Hello everyone,

For a project that I plan to present in class, I am experimenting with DRBD
and Heartbeat to learn the basics of high availability.
I am a beginner with Linux, and even more so with Debian (I started with Ubuntu).
I followed several tutorials to set up DRBD and Heartbeat.
DRBD poses no problems, but Heartbeat, although it looks simple, just won't work.
I am using the V1-style configuration, so it is easily readable.

When I start Heartbeat via:

/etc/init.d/heartbeat restart

I get the following message:

Stopping High-Availability services:
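
For reference, the V1 style means my resources are listed in
/etc/ha.d/haresources as a single line of roughly this form (the names below
only illustrate the format, they are not my actual configuration):

node1 drbddisk::r0 Filesystem::/dev/drbd0::/data::ext3 IPaddr::192.168.1.100/24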

Chris May | 1 Jun 16:55 2010

Re: FSCK Error

Well, that makes sense. As I am still new to this, I didn't know that it
required an LVM resource. I will give that a shot; thank you for your help.

On Tue, Jun 1, 2010 at 10:52 AM, Dejan Muhamedagic <dejanmm <at> fastmail.fm> wrote:

> Hi,
>
> On Tue, Jun 01, 2010 at 09:31:53AM -0400, Chris May wrote:
> > Sorry the full error is below followed by the configs.
> >
> > RA output: (WebFS:start:stderr) 2010/06/01_09:25:09 ERROR: Couldn't
> > sucessfully fsck filesystem for /dev/mapper/VolGroup-drbd--demo
>
> So, is that device in a volume group? It looks like it, but you don't
> have an LVM resource defined to activate the volume group. Otherwise,
> if you can't fsck that filesystem from the command line, then it
> needs to be fixed.
>
> Thanks,
>
> Dejan
>
> > Cluster configs:
> >
> > node cluster1 \
> >         attributes standby="off"
> > node cluster2
> > primitive ClusterIP ocf:heartbeat:IPaddr2 \
> >         params ip="172.16.101.194" cidr_netmask="32" \

