B.Baransel BAĞCI | 5 Jan 19:37 2016

GFS2 mount hangs for some disks

Hi list,

I have some problems with GFS2 when nodes fail. After one of the
cluster nodes is fenced and rebooted, it cannot mount some of the GFS2
file systems but hangs on the mount operation, with no output. I've
waited nearly 10 minutes for a single disk to mount, but it didn't
respond. The only solution is to shut down all nodes and do a clean
start of the cluster. I suspect the journal size or file system quotas.
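
For reference, the journal layout can be inspected like this (a
minimal sketch, assuming gfs2-utils is installed; the device and
mount paths are placeholders):

    # unmounted volume: print the journal index directly from disk
    gfs2_edit -p jindex /dev/vg_shared/lv_typeA
    # mounted file system: list the journals and their sizes
    gfs2_tool journals /mnt/typeA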

I have an 8-node RHEL 6 cluster with GFS2-formatted disks, which are
all mounted by all nodes.
There are two types of disk:
     Type A:
         ~50 GB disk capacity
         8 journals, 512 MB each
         block size: 1024
         very small files (avg: 50 bytes - symlinks)
         ~500,000 files (inodes)
         Usage: 10%
         Nearly no write I/O (under 1,000 files per day)
         No user quota (quota=off)
         Mount options: async,quota=off,nodiratime,noatime

     Type B:
         ~1 TB disk capacity
         8 journals, 512 MB each
         block size: 4096
         relatively small files (avg: 20 KB)
         ~5,000,000 files (inodes)
         Usage: 20%

Kelvin Edmison | 3 Dec 20:19 2015

Fencing problem w/ 2-node VM when a VM host dies


I am hoping that someone can help me understand the problems I'm having 
with Linux clustering for VMs.

I am clustering 2 VMs on two separate VM hosts, trying to ensure that a 
service is always available.  The hosts and guests are both RHEL 6.7. 
The goal is to have only one of the two VMs running at a time.

The configuration works when we test/simulate VM deaths, graceful VM 
host shutdowns, and administrative switchovers (i.e. clusvcadm -r ).

However, when we simulate the sudden isolation of host A (e.g. ifdown 
eth0), two things happen:
1) the VM on host B does not start, and repeated fence_xvm errors appear 
in the logs on host B
2) when the 'failed' node is returned to service, the cman service on 
host B dies.
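
As a side note, the fencing path can be exercised by hand from host B;
a sketch, assuming fence_virtd is running on the surviving host and
using the domain name from the config below:

    # list the domains visible to fence_virtd
    fence_xvm -o list
    # query the status of the VM named in the <device> port attribute
    fence_xvm -o status -H jobhistory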

This is my cluster.conf file (some elisions re: hostnames)

<?xml version="1.0"?>
<cluster config_version="14" name="clustername">
     <fence_daemon/>
     <clusternodes>
         <clusternode name="hostA.fqdn" nodeid="1">
             <fence>
                 <method name="VmFence">
                     <device name="virtfence1" port="jobhistory"/>
                 </method>
             </fence>

Jonathan Davies | 25 Nov 16:22 2015

Rejoin cluster after failure without reboot?

Hi,

I'm experimenting with corosync+dlm+gfs2 (approximately following 
http://people.redhat.com/teigland/cluster4-gfs2-dlm.txt) and am trying 
to establish whether it meets my requirements. I have a query about a 
node rejoining a cluster after failure, and want to make sure I'm not 
overlooking something.

I have a three-node cluster and deliberately cause token loss by 
firewalling one of them (call it node A) out of the network for longer 
than the token timeout. At this point, the other two hosts (B and C) 
decide that A has disappeared and continue with quorum. That is fine.

When I unfirewall node A, dlm tries to reconnect to its peers on B and 
C. But then I see the following on host B:

16:29:25.823496 nodeb dlm_controld[6548]: 908 daemon node 85 stateful merge
16:29:25.823529 nodeb dlm_controld[6548]: 908 daemon node 85 kill due to stateful merge
16:29:25.823543 nodeb dlm_controld[6548]: 908 tell corosync to remove nodeid 85 from cluster
16:29:25.823696 nodeb corosync[6536]:  [CFG   ] request to kill node 85(us=83): xxx

and then the following on node A:

16:29:25.828547 nodea corosync[3896]:  [CFG   ] Killed by node 83: dlm_controld
16:29:25.828575 nodea corosync[3896]:  [MAIN  ] Corosync Cluster Engine exiting with status -1 at cfg.c:530.
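
Presumably node A then needs its whole corosync+dlm stack restarted
before it can rejoin; a sketch of what I'd expect that to look like,
assuming corosync and dlm_controld are started by hand as in the
cluster4 notes above:

    # on node A, after corosync was killed
    killall dlm_controld    # clean up the daemon if it survived
    corosync                # start corosync again
    dlm_controld            # start the DLM daemon again
    dlm_tool status         # confirm node A sees the membership again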

Vladislav Bogdanov | 16 Nov 15:30 2015

GFS2: fatal: invalid metadata block

Hi,

recently the following appeared in dmesg during a test with very 
intensive, mostly unaligned, sometimes overlapping IO (the test of 
course failed):

GFS2: fsid=cluster:stg0.0: fatal: invalid metadata block
GFS2: fsid=cluster:stg0.0:   bh = 404971163 (magic number)
GFS2: fsid=cluster:stg0.0:   function = gfs2_meta_indirect_buffer, file = fs/gfs2/meta_io.c, line = 365
GFS2: fsid=cluster:stg0.0: about to withdraw this file system
GFS2: fsid=cluster:stg0.0: dirty_inode: glock -5
GFS2: fsid=cluster:stg0.0: dirty_inode: glock -5
GFS2: fsid=cluster:stg0.0: dirty_inode: glock -5
GFS2: fsid=cluster:stg0.0: telling LM to unmount
GFS2: fsid=cluster:stg0.0: withdrawn
Pid: 135440, comm: test Not tainted 2.6.32-504.30.3.el6.x86_64 #1
Call Trace:
  [<ffffffffa047eab8>] ? gfs2_lm_withdraw+0x128/0x160 [gfs2]
  [<ffffffff8109eca0>] ? wake_bit_function+0x0/0x50
  [<ffffffffa047ec15>] ? gfs2_meta_check_ii+0x45/0x50 [gfs2]
  [<ffffffffa0468c69>] ? gfs2_meta_indirect_buffer+0xf9/0x100 [gfs2]
  [<ffffffffa0452caa>] ? gfs2_block_map+0x2aa/0xf10 [gfs2]
  [<ffffffff81014a29>] ? read_tsc+0x9/0x20
  [<ffffffff810aab71>] ? ktime_get_ts+0xb1/0xf0
  [<ffffffff810f29e9>] ? delayacct_end+0x89/0xa0
  [<ffffffff811c5310>] ? sync_buffer+0x0/0x50
  [<ffffffff81181565>] ? mem_cgroup_charge_common+0xa5/0xd0
  [<ffffffff811cf790>] ? do_mpage_readpage+0x150/0x5f0
  [<ffffffff8114701e>] ? __inc_zone_page_state+0x2e/0x30
  [<ffffffff8113b4e0>] ? __lru_cache_add+0x40/0x90
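
For the record, my plan for recovery is the usual one after a
withdraw: unmount the file system on every node and check it offline;
a sketch with placeholder device and mount paths:

    # on all nodes
    umount /mnt/stg0
    # on one node only, with the volume inactive everywhere else
    fsck.gfs2 -y /dev/vg_cluster/stg0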

Milos Jakubicek | 13 Nov 09:13 2015

GFS2 development plans

Hi,

can somebody from the devel team at Red Hat share some thoughts on the actual development plans for GFS2?

I mean: will it mainly be small and big performance improvements, as in the past couple of years, or are there any major new things planned? In particular, I would be interested to hear about online fsck -- whether it is at least being considered (on really big arrays with a large number of partitions around 10 TB, fsck is really a pain).

Thank you
Milos
Huw Davies | 13 Nov 05:03 2015

clvmd issues

All,

I have an issue with one of my test clusters (CentOS Linux release 7.1.1503 (Core)) running with the latest patches.

I added a new LUN and configured it to run as a shared GFS2 file system accessed from both nodes. I've done this more than a few times and followed the normal process. Somewhere along the line the LVM configuration became corrupted, and as noted in https://bugzilla.redhat.com/show_bug.cgi?id=1198681 I need clvmd running to fix the corruption, but clvmd won't start due to the corruption.

The Bugzilla entry indicates that this is fixed in resource-agents-3.9.5-43.el7 (or later), but I can't find this patch level anywhere online. The master git repository doesn't appear to be at this level (based on the date of the changelog). Does anyone have any pointers/suggestions?
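
In case it is relevant, the workaround I am considering is to bypass cluster locking temporarily so the VG metadata can be restored from an archive without clvmd (a sketch; I have not confirmed this is safe for clustered VGs, and the VG name and archive file name are placeholders):

    # list the metadata archives LVM has kept for the VG
    vgcfgrestore --config 'global {locking_type = 0}' -l shared_vg
    # restore a known-good archive
    vgcfgrestore --config 'global {locking_type = 0}' -f /etc/lvm/archive/shared_vg_00042.vg shared_vg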

Huw Davies

Dil Lee | 10 Nov 05:56 2015

sudden slow down of gfs2 volume

Hi,

I have a CentOS 6.5 cluster that is connected to a Fibre Channel SAN in
a star topology. All nodes/SAN storages have a single-pair fibre
connection and no multipathing. A hardware issue has been ruled out
because read/write between all other node/SAN_storage pairs works
perfectly.

Problem:
Everything was running perfectly for years. Then node3 suddenly started
writing very slowly to SAN_storage1, ~10 KB/sec. Read speed seems to
remain normal.

Can anyone give me some pointers to debug the problem? Thank you.
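
So far the only low-level check I know of is the glock dump, to see
whether node3 is stuck waiting on glocks held by another node; a
sketch, assuming debugfs is available and with a placeholder file
system name:

    # mount debugfs if it is not already mounted
    mount -t debugfs none /sys/kernel/debug
    # dump glock state for the affected file system
    # (holders flagged W are waiting)
    cat /sys/kernel/debug/gfs2/mycluster:storage1/glocks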

Dil


Jayesh Shinde | 2 Nov 15:39 2015

fence_cisco_ucs issue within cluster.conf

Hi ,

I am trying to configure a 2-node cluster with fence_cisco_ucs. Fence testing works properly via the command line, but it's not working within cluster.conf.

problem  / scenario  :--
----------------------------
When I manually shut down the Ethernet card of the mailbox1 server, the mailbox2 server detects the network failure and tries to fence mailbox1,
but fencing fails with a "plug"-related error (refer to the log below), i.e. "Failed: Unable to obtain correct plug status or plug is not available".

I checked the Red Hat KB, Google, and older mail threads. As suggested there, I upgraded from "fence-agents-3.1.5-35.el6" to "fence-agents-4.0.15-8.el6.x86_64".
I also tried a few other changes in cluster.conf, but that did not work. Kindly guide me on where I am going wrong with "plug"?
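
To reproduce exactly what fenced does, the fencedevice attributes can be fed to the agent on stdin, the same way fenced itself invokes it (a sketch; the values are copied from the cluster.conf below):

    fence_cisco_ucs <<EOF
    action=status
    ipaddr=172.17.1.30
    ipport=443
    login=KVM
    passwd=myPassword
    ssl=on
    suborg=/org-root/ls-mailbox
    port=mailbox1
    EOF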

I am using RHEL 6.5:
-------------------------------------
cman-3.0.12.1-59.el6.x86_64
rgmanager-3.0.12.1-19.el6.x86_64
fence-virt-0.2.3-15.el6.x86_64
fence-agents-4.0.15-8.el6.x86_64

command line fencing  :--
-----------------------------
[root <at> mailbox2 ~]# /usr/sbin/fence_cisco_ucs -a 172.17.1.30 -l KVM -p 'myPassword' -o status  -v  -z  --plug=mailbox1 --ipport=443  suborg="/org-root/ls-mailbox"  ; echo $?
<aaaLogin inName="KVM" inPassword="P@ssword" />
 <aaaLogin cookie="" response="yes" outCookie="1446038319/bdd0ea5b-386f-4374-8eca-9ff7595c33f3" outRefreshPeriod="600" outPriv="pn-equipment,pn-maintenance,read-only" outDomains="" outChannel="noencssl" outEvtChannel="noencssl" outSessionId="web_29402_B" outVersion="2.2(3d)" outName="KVM"> </aaaLogin>
<configResolveDn cookie="1446038319/bdd0ea5b-386f-4374-8eca-9ff7595c33f3" inHierarchical="false" dn="org-root/ls-mailbox1/power"/>
 <configResolveDn dn="org-root/ls-mailbox1/power" cookie="1446038319/bdd0ea5b-386f-4374-8eca-9ff7595c33f3" response="yes"> <outConfig> <lsPower dn="org-root/ls-mailbox1/power" state="up"/> </outConfig> </configResolveDn>
Status: ON
<aaaLogout inCookie="1446038319/bdd0ea5b-386f-4374-8eca-9ff7595c33f3" />
 <aaaLogout cookie="" response="yes" outStatus="success"> </aaaLogout>

0

/etc/hosts on both mailbox1 & mailbox2 servers:

127.0.0.1                localhost localhost.localdomain
192.168.51.91        mailbox1.mydomain.com
192.168.51.92        mailbox2.mydomain.com

  /etc/cluster/cluster.conf   :--
------------------------------------

<?xml version="1.0"?>
<cluster config_version="69" name="cluster1">
    <clusternodes>
        <clusternode name="mailbox1.mydomain.com" nodeid="1">
            <fence>
                <method name="CiscoFence">
                    <device name="CiscoFence" port="mailbox1"/>
                </method>
            </fence>
        </clusternode>
        <clusternode name="mailbox2.mydomain.com" nodeid="2">
            <fence>
                <method name="CiscoFence">
                    <device name="CiscoFence" port="mailbox2"/>
                </method>
            </fence>
        </clusternode>
    </clusternodes>
    <cman expected_votes="1" two_node="1"/>
    <rm>
        <failoverdomains>
            <failoverdomain name="failover1" ordered="1" restricted="1">
                <failoverdomainnode name="mailbox1.mydomain.com" priority="2"/>
                <failoverdomainnode name="mailbox2.mydomain.com" priority="1"/>
            </failoverdomain>
            <failoverdomain name="failover2" ordered="1" restricted="1">
                <failoverdomainnode name="mailbox1.mydomain.com" priority="2"/>
                <failoverdomainnode name="mailbox2.mydomain.com" priority="1"/>
            </failoverdomain>
        </failoverdomains>
        <resources>
            <ip address="192.168.51.93/24" sleeptime="10"/>
            <fs device="/dev/mapper/mail_1-mailbox1" force_unmount="1" fsid="28418" fstype="ext4" mountpoint="/mailbox1" name="imap1_fs" self_fence="1"/>
            <script file="/etc/init.d/cyrus-imapd1" name="cyrus1"/>
            <ip address="192.168.51.94/24" sleeptime="10"/>
            <fs device="/dev/mapper/mail_2-mailbox2" force_unmount="1" fsid="49388" fstype="ext4" mountpoint="/mailbox2" name="imap2_fs" self_fence="1"/>
            <script file="/etc/init.d/cyrus-imapd2" name="cyrus2"/>
        </resources>
        <service domain="failover1" name="mailbox1" recovery="restart">
            <fs ref="imap1_fs"/>
            <ip ref="192.168.51.93/24"/>
            <script ref="cyrus1"/>
        </service>
        <service domain="failover2" name="mailbox2" recovery="restart">
            <ip ref="192.168.51.94/24"/>
            <fs ref="imap2_fs"/>
            <script ref="cyrus2"/>
        </service>
    </rm>
    <fencedevices>
        <fencedevice agent="fence_cisco_ucs" ipaddr="172.17.1.30" ipport="443" login="KVM" name="CiscoFence" passwd="myPassword" ssl="on" suborg="/org-root/ls-mailbox"/>
    </fencedevices>
</cluster>
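
Before each fencing test, the configuration and the agent's parameter list can also be cross-checked with the standard RHEL 6 tools (a quick sketch):

    # validate cluster.conf against the schema
    ccs_config_validate
    # list every parameter fence_cisco_ucs understands,
    # to compare against the fencedevice attributes above
    fence_cisco_ucs -o metadata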



tail -f /var/log/messages  :--
------------------------------

Oct 28 15:42:13 mailbox2 corosync[2376]:   [CPG   ] chosen downlist: sender r(0) ip(192.168.51.92) ; members(old:2 left:1)
Oct 28 15:42:13 mailbox2 corosync[2376]:   [MAIN  ] Completed service synchronization, ready to provide service.
Oct 28 15:42:13 mailbox2 fenced[2435]: fencing node mailbox1
Oct 28 15:42:13 mailbox2 rgmanager[2849]: State change: mailbox1 DOWN
Oct 28 15:42:14 mailbox2 python: Failed: Unable to obtain correct plug status or plug is not available#012
Oct 28 15:42:14 mailbox2 fenced[2435]: fence mailbox1 dev 0.0 agent fence_cisco_ucs result: error from agent
Oct 28 15:42:14 mailbox2 fenced[2435]: fence mailbox1 failed
Oct 28 15:42:17 mailbox2 fenced[2435]: fencing node mailbox1
Oct 28 15:42:17 mailbox2 python: Failed: Unable to obtain correct plug status or plug is not available#012
Oct 28 15:42:17 mailbox2 fenced[2435]: fence mailbox1 dev 0.0 agent fence_cisco_ucs result: error from agent
Oct 28 15:42:17 mailbox2 fenced[2435]: fence mailbox1 failed
Oct 28 15:42:20 mailbox2 fenced[2435]: fencing node mailbox1
Oct 28 15:42:21 mailbox2 python: Failed: Unable to obtain correct plug status or plug is not available#012
Oct 28 15:42:21 mailbox2 fenced[2435]: fence mailbox1 dev 0.0 agent fence_cisco_ucs result: error from agent
Oct 28 15:42:21 mailbox2 fenced[2435]: fence mailbox1 failed
Oct 28 15:42:25 mailbox2 python: Failed: Unable to obtain correct plug status or plug is not available#012
Oct 28 15:42:28 mailbox2 python: Failed: Unable to obtain correct plug status or plug is not available#012
Oct 28 15:42:31 mailbox2 python: Failed: Unable to obtain correct plug status or plug is not available#012
Oct 28 15:42:35 mailbox2 python: Failed: Unable to obtain correct plug status or plug is not available#012
Oct 28 15:42:38 mailbox2 python: Failed: Unable to obtain correct plug status or plug is not available#012
Oct 28 15:42:42 mailbox2 python: Failed: Unable to obtain correct plug status or plug is not available#012
Oct 28 15:42:45 mailbox2 python: Failed: Unable to obtain correct plug status or plug is not available#012
Oct 28 15:42:49 mailbox2 python: Failed: Unable to obtain correct plug status or plug is not available#012
Oct 28 15:42:49 mailbox2 corosync[2376]:   [TOTEM ] A processor joined or left the membership and a new membership was formed.
Oct 28 15:42:49 mailbox2 corosync[2376]:   [CPG   ] chosen downlist: sender r(0) ip(192.168.51.92) ; members(old:1 left:0)
Oct 28 15:42:49 mailbox2 corosync[2376]:   [MAIN  ] Completed service synchronization, ready to provide service.
Oct 28 15:42:52 mailbox2 python: Failed: Unable to obtain correct plug status or plug is not available#012
Oct 28 15:42:56 mailbox2 python: Failed: Unable to obtain correct plug status or plug is not available#012


Regards
Jayesh Shinde
Vladimir Martinek | 30 Oct 15:04 2015

Two cluster nodes hold exclusive POSIX lock on the same file

Hello,

I have a 3-node cluster and a fencing agent that takes about 30 seconds 
to complete the fencing. In those 30 seconds it is possible for two 
nodes of the cluster to get an exclusive POSIX lock on the same file.

Did I miss something here or is this correct behaviour?

Also, when trying with BSD flock, it works as I would expect - the locks 
are only released after the fencing completes and node 1 is confirmed to 
be fenced.

Following is the output of the dlm_tool dump command. Watch for the 
line "gfs2fs purged 1 plocks for 1" - the locks of failed node 1 are 
purged long before the fencing is completed.
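
While the test runs, the plock state can also be watched directly (a
sketch; gfs2fs is the lockspace name as in the dump below):

    # posix locks dlm_controld is tracking for the lockspace
    dlm_tool plocks gfs2fs
    # overall daemon and fencing state
    dlm_tool status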

Thank you for any advice.

Vladimir Martinek

217 dlm:controld conf 2 0 1 memb 2 3 join left 1
217 dlm:controld left reason nodedown 1 procdown 0 leave 0
217 set_fence_actors for 1 low 2 count 2
217 daemon remove 1 nodedown need_fencing 1 low 2
217 fence work wait for cpg ringid
217 dlm:controld ring 2:1292 2 memb 2 3
217 fence work wait for cluster ringid
217 dlm:ls:gfs2fs conf 2 0 1 memb 2 3 join left 1
217 gfs2fs add_change cg 4 remove nodeid 1 reason nodedown
217 gfs2fs add_change cg 4 counts member 2 joined 0 remove 1 failed 1
217 gfs2fs stop_kernel cg 4
217 write "0" to "/sys/kernel/dlm/gfs2fs/control"
217 gfs2fs purged 1 plocks for 1
217 gfs2fs check_ringid wait cluster 1288 cpg 1:1288
217 dlm:ls:gfs2fs ring 2:1292 2 memb 2 3
217 gfs2fs check_ringid cluster 1288 cpg 2:1292
217 fence work wait for cluster ringid
217 gfs2fs check_ringid cluster 1288 cpg 2:1292
217 cluster quorum 1 seq 1292 nodes 2
217 cluster node 1 removed seq 1292
217 del_configfs_node rmdir "/sys/kernel/config/dlm/cluster/comms/1"
217 fence request 1 pos 0
217 fence request 1 pid 4046 nodedown time 1446211577 fence_all dlm_stonith
217 fence wait 1 pid 4046 running
217 gfs2fs check_ringid done cluster 1292 cpg 2:1292
217 gfs2fs check_fencing 1 wait start 30 fail 217
217 gfs2fs check_fencing wait_count 1
217 gfs2fs wait for fencing
218 fence wait 1 pid 4046 running
218 gfs2fs wait for fencing
219 fence wait 1 pid 4046 running
219 gfs2fs wait for fencing
220 fence wait 1 pid 4046 running
220 gfs2fs wait for fencing
221 fence wait 1 pid 4046 running
221 gfs2fs wait for fencing
222 fence wait 1 pid 4046 running
222 gfs2fs wait for fencing
223 fence wait 1 pid 4046 running
223 gfs2fs wait for fencing
224 fence wait 1 pid 4046 running
224 gfs2fs wait for fencing
225 fence wait 1 pid 4046 running
225 gfs2fs wait for fencing
226 fence wait 1 pid 4046 running
226 gfs2fs wait for fencing
227 fence wait 1 pid 4046 running
227 gfs2fs wait for fencing
228 fence wait 1 pid 4046 running
228 gfs2fs wait for fencing
229 fence wait 1 pid 4046 running
229 gfs2fs wait for fencing
230 fence wait 1 pid 4046 running
230 gfs2fs wait for fencing
231 fence wait 1 pid 4046 running
231 gfs2fs wait for fencing
232 fence wait 1 pid 4046 running
232 gfs2fs wait for fencing
233 fence wait 1 pid 4046 running
233 gfs2fs wait for fencing
234 fence wait 1 pid 4046 running
234 gfs2fs wait for fencing
235 fence wait 1 pid 4046 running
235 gfs2fs wait for fencing
236 fence wait 1 pid 4046 running
236 gfs2fs wait for fencing
237 fence wait 1 pid 4046 running
237 gfs2fs wait for fencing
238 fence wait 1 pid 4046 running
238 gfs2fs wait for fencing
239 fence wait 1 pid 4046 running
239 gfs2fs wait for fencing
240 fence wait 1 pid 4046 running
240 gfs2fs wait for fencing
241 fence wait 1 pid 4046 running
241 gfs2fs wait for fencing
242 fence wait 1 pid 4046 running
242 gfs2fs wait for fencing
243 fence wait 1 pid 4046 running
243 gfs2fs wait for fencing
244 fence wait 1 pid 4046 running
244 gfs2fs wait for fencing
245 fence wait 1 pid 4046 running
245 gfs2fs wait for fencing
246 fence wait 1 pid 4046 running
246 gfs2fs wait for fencing
247 fence wait 1 pid 4046 running
247 gfs2fs wait for fencing
248 fence result 1 pid 4046 result 0 exit status
248 fence wait 1 pid 4046 result 0
248 gfs2fs wait for fencing
248 fence status 1 receive 0 from 2 walltime 1446211608 local 248
248 gfs2fs check_fencing 1 done start 30 fail 217 fence 248
248 gfs2fs check_fencing done
248 gfs2fs send_start 2:4 counts 2 2 0 1 1
248 gfs2fs receive_start 2:4 len 80
248 gfs2fs match_change 2:4 matches cg 4
248 gfs2fs wait_messages cg 4 need 1 of 2
248 gfs2fs receive_start 3:2 len 80
248 gfs2fs match_change 3:2 matches cg 4
248 gfs2fs wait_messages cg 4 got all 2
248 gfs2fs start_kernel cg 4 member_count 2
248 dir_member 3
248 dir_member 2
248 dir_member 1
248 set_members rmdir "/sys/kernel/config/dlm/cluster/spaces/gfs2fs/nodes/1"
248 write "1" to "/sys/kernel/dlm/gfs2fs/control"
248 gfs2fs prepare_plocks
248 gfs2fs set_plock_data_node from 1 to 2
248 gfs2fs send_plocks_done 2:4 counts 2 2 0 1 1 plocks_data 1426592608
248 gfs2fs receive_plocks_done 2:4 flags 2 plocks_data 1426592608 need 0 save 0


Vallevand, Mark K | 15 Oct 19:18 2015

Alternative to resource monitor polling?

Is there an alternative to resource monitor polling to detect a resource failure?

If, for example, a resource failure is detected by our own software, could it signal clustering that a resource has failed?
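
If this is a Pacemaker-based stack, I believe a failure can be injected directly rather than waiting for the next monitor poll; a sketch with placeholder resource and node names:

    # tell the cluster that resource my_svc has failed on node1,
    # triggering recovery immediately
    crm_resource --fail --resource my_svc --node node1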

 

Regards.
Mark K Vallevand   Mark.Vallevand@Unisys.com
Never try and teach a pig to sing: it's a waste of time, and it annoys the pig.


Vallevand, Mark K | 1 Oct 18:25 2015

Resource placement after node comes online.

In a multiple node cluster with resources distributed across the nodes, is there a way to automatically ‘rebalance’ resources when a node comes online?

 

If this has been asked and answered, I apologize.  A pointer to the relevant information would be welcome.
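
Today we relocate by hand after the node comes back; a sketch (an rgmanager stack is assumed, and the service and member names are placeholders):

    # move service my_svc onto the node that just came online
    clusvcadm -r my_svc -m node1.fqdn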

 

Regards.
Mark K Vallevand   Mark.Vallevand@Unisys.com
Never try and teach a pig to sing: it's a waste of time, and it annoys the pig.


