Ron Croonenberg | 20 Jan 20:34 2015

heartbeat

Hello,

I have an Ethernet connection that connects all hosts in a cluster, and 
the nodes also have an IB connection. I want the failover host to take 
over when the IB connection goes down on a host. Is there an example of 
how to do this? (I am using IPMI for shutting down hosts, etc.)

The cluster I am using has 8 nodes, and I want to do failover in pairs of 
two. In the ha.cf file, do I list all the hosts, or just each host and its 
failover partner, per pair?
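
(For illustration, a minimal ha.cf sketch of what one such pair might look like if each pair runs its own independent heartbeat cluster; the node names, interface, and timing values below are assumptions, not taken from any real setup:)

    # /etc/ha.d/ha.cf on both nodes of one hypothetical pair
    node          nodeA nodeB     # only the two nodes of this pair
    bcast         eth0            # heartbeat traffic over the Ethernet link
    keepalive     2               # seconds between heartbeats
    deadtime      30              # declare a node dead after 30s of silence
    auto_failback off             # stay on the failover node after recovery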

thanks,

Ron
_______________________________________________
Linux-HA mailing list
Linux-HA <at> lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems

Ulrich Windl | 20 Jan 17:38 2015

SLES11 SP3: warning: crm_find_peer: Node 'h01' and 'h01' share the same cluster nodeid: 739512321

Hi!

When a SLES11 SP3 node rejoined a 3-node cluster after a reboot (and a preceding update), a node with
up-to-date software showed these messages (I feel they should not appear):

Jan 20 17:12:38 h10 corosync[13220]:  [MAIN  ] Completed service synchronization, ready to provide service.
Jan 20 17:12:38 h10 cib[13257]:  warning: crm_find_peer: Node 'h01' and 'h01' share the same cluster nodeid: 739512321
Jan 20 17:12:38 h10 cib[13257]:  warning: crm_find_peer: Node 'h01' and 'h01' share the same cluster nodeid: 739512321
Jan 20 17:12:38 h10 cib[13257]:  warning: crm_find_peer: Node 'h01' and 'h01' share the same cluster nodeid: 739512321

### So why shouldn't the same node have the same nodeid?

Jan 20 17:12:38 h10 attrd[13260]:  warning: crm_dump_peer_hash: crm_find_peer: Node 84939948/h05 = 0x61ae90 - b6cabbb3-8332-4903-85be-0c06272755ac
Jan 20 17:12:38 h10 attrd[13260]:  warning: crm_dump_peer_hash: crm_find_peer: Node 17831084/h01 = 0x61e300 - 11693f38-8125-45f2-b397-86136d5894a4
Jan 20 17:12:38 h10 attrd[13260]:  warning: crm_dump_peer_hash: crm_find_peer: Node 739512330/h10 = 0x614400 - 302e33d8-7cee-4f3b-97da-b38f0d51b0f6

### above are the three nodes of the cluster

Jan 20 17:12:38 h10 attrd[13260]:     crit: crm_find_peer: Node 739512321 and 17831084 share the same name 'h01'

### Now there are different nodeids it seems...

Jan 20 17:12:38 h10 attrd[13260]:  warning: crm_find_peer: Node 'h01' and 'h01' share the same cluster nodeid: 739512321

Ben Hunger | 19 Jan 15:55 2015

Pacemakerd and Stonith issues

Attachment (corosync.log): application/octet-stream, 203 KiB
_______________________________________________
Linux-HA mailing list
Linux-HA <at> lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
Vladislav Bogdanov | 19 Jan 13:42 2015

[Patch] Collection of patches for crmsh

Hi Kristoffer,

There are two patches, one for crmsh and one for parallax.
They make the history commands work.

--- a/modules/crm_pssh.py	2015-01-19 11:42:02.000000000 +0000
+++ b/modules/crm_pssh.py	2015-01-19 12:17:46.328000847 +0000
@@ -85,14 +85,14 @@ def do_pssh(l, opts):
                '-o', 'PasswordAuthentication=no',
                '-o', 'SendEnv=PSSH_NODENUM',
                '-o', 'StrictHostKeyChecking=no']
-        if opts.options:
+        if hasattr(opts, 'options'):
             for opt in opts.options:
                 cmd += ['-o', opt]
         if user:
             cmd += ['-l', user]
         if port:
             cmd += ['-p', port]
-        if opts.extra:
+        if hasattr(opts, 'extra'):
             cmd.extend(opts.extra)
         if cmdline:
             cmd.append(cmdline)
---
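
(For context, a minimal standalone illustration of why the hasattr() guard matters here -- the Opts class and values below are made up: accessing opts.options directly raises AttributeError when the attribute is missing, while hasattr() simply skips the block.)

    class Opts(object):
        pass                           # made-up options object with no attributes

    opts = Opts()
    cmd = ['ssh']
    if hasattr(opts, 'options'):       # skipped safely: 'options' does not exist
        for opt in opts.options:
            cmd += ['-o', opt]
    # with the old "if opts.options:" this would have raised AttributeError
    print(cmd)                         # -> ['ssh']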

--- a/parallax/manager.py	2014-10-15 13:40:04.000000000 +0000
+++ b/parallax/manager.py	2015-01-19 12:15:47.911000236 +0000
@@ -47,11 +47,26 @@ class Manager(object):
         # Backwards compatibility with old __init__
         # format: Only argument is an options dict
         if not isinstance(limit, int):
-            self.limit = limit.par
-            self.timeout = limit.timeout
-            self.askpass = limit.askpass
-            self.outdir = limit.outdir
-            self.errdir = limit.errdir
+            if hasattr(limit, 'par'):
+                self.limit = limit.par
+            else:
+                self.limit = DEFAULT_PARALLELISM
+            if hasattr(limit, 'timeout'):
+                self.timeout = limit.timeout
+            else:
+                self.timeout = DEFAULT_TIMEOUT
+            if hasattr(limit, 'askpass'):
+                self.askpass = limit.askpass
+            else:
+                self.askpass = False
+            if hasattr(limit, 'outdir'):
+                self.outdir = limit.outdir
+            else:
+                self.outdir = None
+            if hasattr(limit, 'errdir'):
+                self.errdir = limit.errdir
+            else:
+                self.errdir = None
         else:
             self.limit = limit
             self.timeout = timeout
---
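
(As a side note, the same defaulting could be expressed more compactly with getattr(); a sketch only, not part of the patch -- the constant values and the OldOpts object below are assumed for illustration:)

    DEFAULT_PARALLELISM = 32           # assumed default values for illustration
    DEFAULT_TIMEOUT = 0

    class OldOpts(object):             # made-up options object missing most attributes
        par = 4

    opts = OldOpts()
    limit = getattr(opts, 'par', DEFAULT_PARALLELISM)      # -> 4 (attribute present)
    timeout = getattr(opts, 'timeout', DEFAULT_TIMEOUT)    # -> 0 (falls back to default)
    print(limit, timeout)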

Two more patches I have been using for ages in my builds are:

Fix transition start detection.

--- a/modules/constants.py  2014-12-22 08:48:26.000000000 +0000
+++ b/modules/constants.py  2014-12-22 13:07:43.945077805 +0000
@@ -272,7 +272,7 @@
 # r.group(3) file number
 transition_patt = [
     # transition start
-    "crmd.* do_te_invoke: Processing graph ([0-9]+) .*derived from (.*/pe-[^-]+-(%%)[.]bz2)",
+    "pengine.* process_pe_message: Calculated Transition ([0-9]+): (.*/pe-[^-]+-(%%)[.]bz2)",
     # r.group(1) transition number (a different thing from file number)
     # r.group(2) contains full path
     # r.group(3) transition status
---
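
(To illustrate what the new pattern is meant to match, a small self-contained example; the log line is made up in the shape a pacemaker 1.1 pengine would produce, and the "%%" placeholder -- which crmsh substitutes with a file number -- is stood in by "[0-9]+" here:)

    import re

    patt = ("pengine.* process_pe_message: Calculated Transition ([0-9]+): "
            "(.*/pe-[^-]+-([0-9]+)[.]bz2)")
    line = ("Dec 22 13:07:43 h10 pengine[1234]:   notice: process_pe_message: "
            "Calculated Transition 42: /var/lib/pacemaker/pengine/pe-input-42.bz2")

    m = re.search(patt, line)
    if m:
        print(m.group(1))   # transition number: '42'
        print(m.group(2))   # full path to the PE input file
        print(m.group(3))   # file number: '42'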

Make tar follow symlinks.

--- a/modules/crm_pssh.py   2013-08-12 12:52:11.000000000 +0000
+++ b/modules/crm_pssh.py   2013-08-12 12:53:32.666444069 +0000
@@ -170,7 +170,7 @@
         dir = "/%s" % r.group(1)
         red_pe_l = [x.replace("%s/" % r.group(1), "") for x in pe_l]
         common_debug("getting new PE inputs %s from %s" % (red_pe_l, node))
-        cmdline = "tar -C %s -cf - %s" % (dir, ' '.join(red_pe_l))
+        cmdline = "tar -C %s -chf - %s" % (dir, ' '.join(red_pe_l))
         opts = parse_args(outdir, errdir)
         l.append([node, cmdline])
     if not l:
---
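
(A hedged shell illustration of the difference, with a made-up file: without -h a symlinked PE input is archived as the symlink itself, with -h tar dereferences it and archives the file it points to.)

    # made-up example: pe-input-42.bz2 is a symlink to a file elsewhere
    tar -C /var/lib/pengine -cf  - pe-input-42.bz2 | tar -tvf -   # stored as a symlink
    tar -C /var/lib/pengine -chf - pe-input-42.bz2 | tar -tvf -   # stored as a regular file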

Best,
Vladislav

_______________________________________________
Linux-HA mailing list
Linux-HA <at> lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems

Support for DRBD

Hi,

I have been told that support for DRBD is supposed to be phased out from both SLES and RHEL in the near future.

Are any of you aware of this, or is this a false claim?

Yours

--martin

P.S.: Of course, support from LINBIT should still be available even if SUSE or Red Hat no longer support it.
_______________________________________________
Linux-HA mailing list
Linux-HA <at> lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems

Fabian Herschel | 15 Jan 16:50 2015

Re: SAPInstance does not start and asking for START_PROFILE

Hi Muhammad,

please ask the SAP guys what they changed. Did they install 
sapstartsrv or the saphostagent?

Threads should not stop at "it's working now, but I can't explain 
what we changed" :) That keeps others from learning from the situation.

Regards
Fabian

On 01/15/2015 04:12 PM, Muhammad Sharfuddin wrote:
> Thanks for your excellent help, appreciated.
>
> I don't know what happened exactly; it seems the SAP guys have fixed the
> issue, as the cluster now starts the SAPInstance without any problem.
> Also find below the sapcontrol output
>
> thltlp2:tlpadm 48> /usr/sap/TLP/DVEBMGS00/exe/sapcontrol -nr 00
> -function GetProcessList
>
> 15.01.2015 19:40:04
> GetProcessList
> OK
> name, description, dispstatus, textstatus, starttime, elapsedtime, pid
> disp+work, Dispatcher, GREEN, Running, 2015 01 15 19:27:24, 0:12:40, 10920
> igswd_mt, IGS Watchdog, GREEN, Running, 2015 01 15 19:27:24, 0:12:40, 10921
> gwrd, Gateway, GREEN, Running, 2015 01 15 19:27:25, 0:12:39, 10938
> icman, ICM, GREEN, Running, 2015 01 15 19:27:25, 0:12:39, 10939
> thltlp2:tlpadm 49>
>
>
> Thanks once again.
>
> Regards,
>
> Muhammad Sharfuddin
> Cell: +92-3332144823 | UAN: +92(21) 111-111-142 ext: 113 | NDS.COM.PK
> <http://www.nds.com.pk>
>
> On 01/15/2015 03:57 PM, Fabian Herschel wrote:
>> Hi Muhammad,
>>
>> please retry the command as user <sid>adm. Or inspect the resource
>> agent for ALL environment variables to be set, not only LD_LIBRARY_PATH
>>
>> If sapcontrol is dysfunctional when run as <sid>adm, you have an SAP
>> problem, and that cannot be discussed here.
>>
>> Regards
>> Fabian
>>
>> On 01/15/2015 11:49 AM, Muhammad Sharfuddin wrote:
>>> thltlp1:~ # echo $LD_LIBRARY_PATH
>>> /usr/sap/TLP/ASCS01/exe/:/usr/sap/TLP/DVEBMGS00/exe:/usr/lib64
>>> thltlp1:~ # /usr/sap/TLP/DVEBMGS00/exe/sapcontrol -nr 00 -function Start
>>> Could not open the ICU common library.
>>>     The following files must be in the path described by
>>>     the environment variable "LD_LIBRARY_PATH":
>>>     libicuuc.so.50, libicudata.so.50, libicui18n.so.50
>>> [/bas/741_REL/src/flat/nlsui0.c 1535] pid = 27543
>>> LD_LIBRARY_PATH is currently set to <not set>
>>> [/bas/741_REL/src/flat/nlsui0.c 1538] pid = 27543
>>> thltlp1:~ #
>>>
>>> please help
>>>
>>>
>>> Regards,
>>>
>>> Muhammad Sharfuddin
>>> Cell: +92-3332144823 | UAN: +92(21) 111-111-142 ext: 113 | NDS.COM.PK
>>> <http://www.nds.com.pk>
>>>
>>> On 01/15/2015 02:15 PM, Fabian Herschel wrote:
>>>> On 01/14/2015 10:53 PM, Muhammad Sharfuddin wrote:
>>>>> On 01/15/2015 02:35 AM, Fabian Herschel wrote:
>>>>>  > Hi Muhammed,
>>>>>  >
>>>>>  > sorry please do NOT use startsap. Please use sapctrl.
>>>>>  > sapctrl -nr 00 -function Start
>>>>>  > Check the started processes using
>>>>>  > sapctrl -nr 00 -function GetProcessList
>>>>>  >
>>>>> I don't find the sapctrl command available on the system.
>>>>
>>>> Sorry, the command is sapcontrol (I abbreviated "control" to "ctrl").
>>>> From the SAPInstance resource agent:
>>>> SAPCONTROL="/usr/sap/$SID/$InstanceName/exe/sapcontrol
>>>>
>>>>>
>>>>>  >
>>>>>  > If disp+work processes are not starting, then you might need to
>>>>>  > check the reason in the work directory of the SAP NetWeaver instance.
>>>>>  >
>>>>> Thanks for the pointer, I'll get this checked with the SAP guys.
>>>>>
>>>>>  > Regards
>>>>>  > Fabian
>>>>>  >
>>>>>  >
>>>>
>>>> _______________________________________________
>>>> Linux-HA mailing list
>>>> Linux-HA <at> lists.linux-ha.org
>>>> http://lists.linux-ha.org/mailman/listinfo/linux-ha
>>>> See also: http://linux-ha.org/ReportingProblems
>>>>
>>>
>>
>>
>

_______________________________________________
Linux-HA mailing list
Linux-HA <at> lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems

Muhammad Sharfuddin | 15 Jan 16:33 2015

multipath sbd stonith device recommended configuration

I have to put this 2-node active/passive cluster into production very soon, 
and I have tested that resource migration works perfectly when the node 
running the resources goes down (abruptly/forcefully).

I have always read and heard that one should increase the msgwait and 
watchdog timeouts when the sbd device is on a multipath disk, but in my case 
I just created the device via
     sbd -d /dev/mapper/mpathe create

and I have the following resource for sbd:
     primitive sbd_stonith stonith:external/sbd \
             op monitor interval="3000" timeout="120" start-delay="21" \
             op start interval="0" timeout="120" \
             op stop interval="0" timeout="120" \
             params sbd_device="/dev/mapper/mpathe"

As of now I am quite satisfied, but should I increase the msgwait and 
watchdog timeouts?
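
(For reference, a hedged sketch of how larger timeouts could be written into the device header at creation time and then verified -- the values below are placeholders, not a recommendation:)

     # hypothetical values only; msgwait is conventionally at least twice the watchdog timeout
     sbd -d /dev/mapper/mpathe -4 120 -1 60 create   # -4 sets msgwait, -1 sets the watchdog timeout
     sbd -d /dev/mapper/mpathe dump                  # show the timeouts stored on disk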

Also, I am using start-delay=21 on the monitor operation; should I also use 
start-delay=11 on the start operation?

Please recommend

--

-- 
Regards,

Muhammad Sharfuddin
Cell: +92-3332144823 | UAN: +92(21) 111-111-142 ext: 113 | NDS.COM.PK 
<http://www.nds.com.pk>

_______________________________________________
Linux-HA mailing list
Linux-HA <at> lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems

Muhammad Sharfuddin | 14 Jan 20:12 2015

SAPInstance does not start and asking for START_PROFILE

OS: SLES 11 SP 3
pacemaker-1.1.9-0.19.102
corosync-1.4.5-0.18.15
resource-agents-3.9.5-0.32.22

Starting the SAPInstance resource fails with the following errors:

Jan 14 18:22:16 thltlp1 SAPInstance(SAPInst-DVEBMGS00)[50450]: ERROR: Expected TLP_DVEBMGS00_thltlp to be the instance START profile, please set START_PROFILE parameter!
Jan 14 18:22:16 thltlp1 crmd[47231]:   notice: process_lrm_event: LRM operation SAPInst-DVEBMGS00_start_0 (call=81, rc=6, cib-update=66, confirmed=true) not configured

The following is the resource configuration:
primitive SAPInst-DVEBMGS00 ocf:heartbeat:SAPInstance \
        op monitor interval="120" timeout="60" \
        op start interval="0" timeout="300" \
        op stop interval="0" timeout="300" \
        params InstanceName="TLP_DVEBMGS00_thltlp" \
               DIR_EXECUTABLE="/usr/sap/TLP/DVEBMGS00/exe" \
               START_PROFILE="TLP_DVEBMGS00_thltlp" \
               DIR_PROFILE="/sapmnt/TLP/profile"

i.e., START_PROFILE is configured, but the cluster does not start the SAP 
instance.
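
(For comparison, a hedged variant with START_PROFILE given as a full path; the path below is only an assumption derived from DIR_PROFILE, not a verified fix:)

primitive SAPInst-DVEBMGS00 ocf:heartbeat:SAPInstance \
        op monitor interval="120" timeout="60" \
        op start interval="0" timeout="300" \
        op stop interval="0" timeout="300" \
        params InstanceName="TLP_DVEBMGS00_thltlp" \
               DIR_EXECUTABLE="/usr/sap/TLP/DVEBMGS00/exe" \
               DIR_PROFILE="/sapmnt/TLP/profile" \
               START_PROFILE="/sapmnt/TLP/profile/TLP_DVEBMGS00_thltlp"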

Please help

--

-- 
Regards,

Muhammad Sharfuddin

_______________________________________________
Linux-HA mailing list
Linux-HA <at> lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems

Marlon Guao | 29 Dec 10:19 2014

Re: pacemaker/heartbeat LVM

hi,

uploaded it here.

http://susepaste.org/45413433

thanks.

On Mon, Dec 29, 2014 at 5:09 PM, Marlon Guao <marlon.guao <at> gmail.com> wrote:

> OK, I attached the log file of one of the nodes.
>
> On Mon, Dec 29, 2014 at 4:42 PM, emmanuel segura <emi2fast <at> gmail.com>
> wrote:
>
>> please use pastebin and show your whole logs
>>
>> 2014-12-29 9:06 GMT+01:00 Marlon Guao <marlon.guao <at> gmail.com>:
>> > By the way, just to note that for normal testing (manual failover,
>> > rebooting the active node) the cluster is working fine. I only encounter
>> > this error if I try to power off / shut off the active node.
>> >
>> > On Mon, Dec 29, 2014 at 4:05 PM, Marlon Guao <marlon.guao <at> gmail.com>
>> wrote:
>> >
>> >> Hi.
>> >>
>> >>
>> >> Dec 29 13:47:16 s1 LVM(vg1)[1601]: WARNING: LVM Volume cluvg1 is not available (stopped)
>> >> Dec 29 13:47:16 s1 crmd[1515]:   notice: process_lrm_event: Operation vg1_monitor_0: not running (node=s1, call=23, rc=7, cib-update=40, confirmed=true)
>> >> Dec 29 13:47:16 s1 crmd[1515]:   notice: te_rsc_command: Initiating action 9: monitor fs1_monitor_0 on s1 (local)
>> >> Dec 29 13:47:16 s1 crmd[1515]:   notice: te_rsc_command: Initiating action 16: monitor vg1_monitor_0 on s2
>> >> Dec 29 13:47:16 s1 Filesystem(fs1)[1618]: WARNING: Couldn't find device [/dev/mapper/cluvg1-clulv1]. Expected /dev/??? to exist
>> >>
>> >>
>> >> From the LVM agent: it checks whether the volume is already available, and
>> >> raises the above error if not. But I don't see that it tries to activate
>> >> the VG before raising the error. Perhaps it assumes that the VG is already
>> >> activated... so I'm not sure who should be activating it (should it be LVM?).
>> >>
>> >>
>> >>         if [ $rc -ne 0 ]; then
>> >>                 ocf_log $loglevel "LVM Volume $1 is not available (stopped)"
>> >>                 rc=$OCF_NOT_RUNNING
>> >>         else
>> >>                 case $(get_vg_mode) in
>> >>                 1) # exclusive with tagging.
>> >>                         # If vg is running, make sure the correct tag is present. Otherwise we
>> >>                         # can not guarantee exclusive activation.
>> >>                         if ! check_tags; then
>> >>                                 ocf_exit_reason "WARNING: $OCF_RESKEY_volgrpname is active without the cluster tag, \"$OUR_TAG\""
>> >>
>> >> On Mon, Dec 29, 2014 at 3:36 PM, emmanuel segura <emi2fast <at> gmail.com>
>> >> wrote:
>> >>
>> >>> logs?
>> >>>
>> >>> 2014-12-29 6:54 GMT+01:00 Marlon Guao <marlon.guao <at> gmail.com>:
>> >>> > Hi,
>> >>> >
>> >>> > just want to ask regarding the LVM resource agent on pacemaker/corosync.
>> >>> >
>> >>> > I setup 2 nodes cluster (opensuse13.2 -- my config below). The cluster
>> >>> > works as expected, like doing a manual failover (via crm resource move),
>> >>> > and automatic failover (by rebooting the active node for instance). But, if
>> >>> > i try to just "shutoff" the active node (it's a VM, so I can do a
>> >>> > poweroff). The resources won't be able to failover to the passive node.
>> >>> > when I did an investigation, it's due to an LVM resource not starting
>> >>> > (specifically, the VG). I found out that the LVM resource won't try to
>> >>> > activate the volume group in the passive node. Is this an expected
>> >>> > behaviour?
>> >>> >
>> >>> > what I really expect is that, in the event that the active node be shutoff
>> >>> > (by a power outage for instance), all resources should be failover
>> >>> > automatically to the passive. LVM should re-activate the VG.
>> >>> >
>> >>> >
>> >>> > here's my config.
>> >>> >
>> >>> > node 1: s1
>> >>> > node 2: s2
>> >>> > primitive cluIP IPaddr2 \
>> >>> > params ip=192.168.13.200 cidr_netmask=32 \
>> >>> > op monitor interval=30s
>> >>> > primitive clvm ocf:lvm2:clvmd \
>> >>> > params daemon_timeout=30 \
>> >>> > op monitor timeout=90 interval=30
>> >>> > primitive dlm ocf:pacemaker:controld \
>> >>> > op monitor interval=60s timeout=90s on-fail=ignore \
>> >>> > op start interval=0 timeout=90
>> >>> > primitive fs1 Filesystem \
>> >>> > params device="/dev/mapper/cluvg1-clulv1" directory="/data"
>> fstype=btrfs
>> >>> > primitive mariadb mysql \
>> >>> > params config="/etc/my.cnf"
>> >>> > primitive sbd stonith:external/sbd \
>> >>> > op monitor interval=15s timeout=60s
>> >>> > primitive vg1 LVM \
>> >>> > params volgrpname=cluvg1 exclusive=yes \
>> >>> > op start timeout=10s interval=0 \
>> >>> > op stop interval=0 timeout=10 \
>> >>> > op monitor interval=10 timeout=30 on-fail=restart depth=0
>> >>> > group base-group dlm clvm
>> >>> > group rgroup cluIP vg1 fs1 mariadb \
>> >>> > meta target-role=Started
>> >>> > clone base-clone base-group \
>> >>> > meta interleave=true target-role=Started
>> >>> > property cib-bootstrap-options: \
>> >>> > dc-version=1.1.12-1.1.12.git20140904.266d5c2 \
>> >>> > cluster-infrastructure=corosync \
>> >>> > no-quorum-policy=ignore \
>> >>> > last-lrm-refresh=1419514875 \
>> >>> > cluster-name=xxx \
>> >>> > stonith-enabled=true
>> >>> > rsc_defaults rsc-options: \
>> >>> > resource-stickiness=100
>> >>> >
>> >>> > --
>> >>> >>>> import this
>> >>> > _______________________________________________
>> >>> > Linux-HA mailing list
>> >>> > Linux-HA <at> lists.linux-ha.org
>> >>> > http://lists.linux-ha.org/mailman/listinfo/linux-ha
>> >>> > See also: http://linux-ha.org/ReportingProblems
>> >>>
>> >>>
>> >>>
>> >>> --
>> >>> esta es mi vida e me la vivo hasta que dios quiera
>> >>> _______________________________________________
>> >>> Linux-HA mailing list
>> >>> Linux-HA <at> lists.linux-ha.org
>> >>> http://lists.linux-ha.org/mailman/listinfo/linux-ha
>> >>> See also: http://linux-ha.org/ReportingProblems
>> >>>
>> >>
>> >>
>> >>
>> >> --
>> >> >>> import this
>> >>
>> >
>> >
>> >
>> > --
>> >>>> import this
>> > _______________________________________________
>> > Linux-HA mailing list
>> > Linux-HA <at> lists.linux-ha.org
>> > http://lists.linux-ha.org/mailman/listinfo/linux-ha
>> > See also: http://linux-ha.org/ReportingProblems
>>
>>
>>
>> --
>> esta es mi vida e me la vivo hasta que dios quiera
>> _______________________________________________
>> Linux-HA mailing list
>> Linux-HA <at> lists.linux-ha.org
>> http://lists.linux-ha.org/mailman/listinfo/linux-ha
>> See also: http://linux-ha.org/ReportingProblems
>>
>
>
>
> --
> >>> import this
>

--

-- 
>>> import this
_______________________________________________
Linux-HA mailing list
Linux-HA <at> lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems

Marlon Guao | 29 Dec 06:54 2014

pacemaker/heartbeat LVM

Hi,

I just want to ask about the LVM resource agent on pacemaker/corosync.

I set up a 2-node cluster (openSUSE 13.2 -- my config below). The cluster
works as expected, like doing a manual failover (via crm resource move)
and an automatic failover (by rebooting the active node, for instance). But if
I just "shut off" the active node (it's a VM, so I can power it off), the
resources are not able to fail over to the passive node. When I investigated,
it turned out to be due to an LVM resource not starting (specifically, the VG).
I found out that the LVM resource won't try to activate the volume group on
the passive node. Is this expected behaviour?

What I really expect is that, in the event the active node is shut off
(by a power outage, for instance), all resources fail over automatically to
the passive node and LVM re-activates the VG.

here's my config.

node 1: s1
node 2: s2
primitive cluIP IPaddr2 \
params ip=192.168.13.200 cidr_netmask=32 \
op monitor interval=30s
primitive clvm ocf:lvm2:clvmd \
params daemon_timeout=30 \
op monitor timeout=90 interval=30
primitive dlm ocf:pacemaker:controld \
op monitor interval=60s timeout=90s on-fail=ignore \
op start interval=0 timeout=90
primitive fs1 Filesystem \
params device="/dev/mapper/cluvg1-clulv1" directory="/data" fstype=btrfs
primitive mariadb mysql \
params config="/etc/my.cnf"
primitive sbd stonith:external/sbd \
op monitor interval=15s timeout=60s
primitive vg1 LVM \
params volgrpname=cluvg1 exclusive=yes \
op start timeout=10s interval=0 \
op stop interval=0 timeout=10 \
op monitor interval=10 timeout=30 on-fail=restart depth=0
group base-group dlm clvm
group rgroup cluIP vg1 fs1 mariadb \
meta target-role=Started
clone base-clone base-group \
meta interleave=true target-role=Started
property cib-bootstrap-options: \
dc-version=1.1.12-1.1.12.git20140904.266d5c2 \
cluster-infrastructure=corosync \
no-quorum-policy=ignore \
last-lrm-refresh=1419514875 \
cluster-name=xxx \
stonith-enabled=true
rsc_defaults rsc-options: \
resource-stickiness=100

--

-- 
>>> import this
_______________________________________________
Linux-HA mailing list
Linux-HA <at> lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems

Robert.Koeppl | 22 Dec 10:00 2014

AUTO: Robert Koeppl is out of office (returning 07.01.2015)


I will return on 07.01.2015.

Note: This is an automatic reply to your message "Re:
[Linux-HA] [ha-wg-technical] [Pacemaker] [RFC] Organizing HA Summit
2015" sent on 22.12.2014 09:35:10.

This is the only notification you will receive while this person is away.

_______________________________________________
Linux-HA mailing list
Linux-HA <at> lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems

