Kevin Corry | 1 May 16:51 2005

Re: using CLI from a script

Hi Ross,

On Sat April 30 2005 12:56 am, Ross Boylan wrote:
> I'm interested in using the CLI from a script, probably python.  The
> problematic part is that I want to have a back and forth exchange with
> the CLI.  Is there any chance this will work?
>
> The likely sequence would be to do some queries to check that things
> are in a proper state, then do some operations.  Interspersed with my
> pumping commands to the CLI it would be giving responses, which I'd
> need to capture and interpret.
>
> Can I make any assumptions about this?  Examples of assumptions I
> would like to make are
> * the CLI will take and respond to my inputs as they come in (I'm not
> sure what I need to do to convince it the end of a command has
> arrived--would sending \n work? flushing the buffer? sending ":"?)

The CLI doesn't do any internal buffering of input, so as soon as it
receives a complete command, it will process that command. Depending on the
method your program uses to actually send those commands to the CLI, you
may need some kind of buffer flushing.

If your program opens the CLI as if it were actually being run from the
command line, then a \n would probably be enough to distinguish between
commands. Or you can use the ":" character between commands, just to be safe.
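
In python, for example, the exchange might look something like this rough
sketch (untested; the "evms" invocation and the query string here are
placeholders, and real code would need smarter response parsing than a
single readline):

import subprocess

# Open the CLI with pipes on both ends so we can talk back and forth.
cli = subprocess.Popen(["evms"],
                       stdin=subprocess.PIPE,
                       stdout=subprocess.PIPE,
                       universal_newlines=True)

def send(command):
    # End the command with ":" and a \n, and flush explicitly so it
    # doesn't sit in our own stdio buffer.
    cli.stdin.write(command + ":\n")
    cli.stdin.flush()
    return cli.stdout.readline()    # capture one line of the response

print(send("query: volumes"))       # placeholder command
cli.stdin.close()
cli.wait()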

> * the responses will be unbuffered enough that they won't sit in
> neverland while I wait for them

(Continue reading)

John Huttley | 2 May 23:58 2005

Thrashing Syslog

Hi,
I'm running 2.5.2 on Gentoo with EVMS on a pair of
IDE drives. These are set up as RAID1.

The LVM facility sits on the MD device.

I'm now suffering hardware failures on one drive.

The system log is filling with:

DM-7 rescheduling sector 91734128
DM-7 redirecting sector 91734128 to another mirror.

There are simply millions of these, blowing the log file out beyond 4GB,
filling the /var partition, making syslog-ng use 97% CPU,
and generally making a bad situation worse.

Can you throttle these messages please?
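
In the meantime, a crude filter piped between the kernel log and the log
file could at least collapse the repeats. A rough, untested python sketch
of the idea (the "repeated" wording is made up, not syslog's own):

import sys

# Collapse runs of identical lines into a single "repeated" notice.
last, count = None, 0
for line in sys.stdin:
    if line == last:
        count += 1
        continue
    if count:
        sys.stdout.write("last message repeated %d times\n" % count)
    sys.stdout.write(line)
    last, count = line, 0
if count:
    sys.stdout.write("last message repeated %d times\n" % count)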

Regards,

-- 
John Huttley <john <at> mwk.co.nz>
MWK Computers Ltd


John DeFranco | 6 May 20:48 2005

Re: [Evms-cluster] evms cluster plug-in error

Steve Dobbelstein wrote:

>
>
>John DeFranco wrote:
>  
>
>>Hi,
>>
>>For some reason I misplaced a follow up email you sent asking if evmsd
>>was running and if so could
>>I send the log file. Yes it is running:
>>
>>[root <at> arran log]# ps -edf | grep evmsd
>>root      4240  3322 99 14:32 ?        00:18:26 /sbin/evmsd -d debug
>>root      4464  4327  0 14:50 pts/0    00:00:00 grep evmsd
>>
>>The log files are attached.
>>
>>Thanks!
>>
>>Kevin Corry wrote:
>>
>>    
>>
>>>Hi John,
>>>
>>>On Wed April 6 2005 10:41 am, John DeFranco wrote:
>>>
>>>
(Continue reading)

Steve Dobbelstein | 6 May 23:28 2005

Re: [Evms-cluster] evms cluster plug-in error


John DeFranco <defranco <at> cup.hp.com> wrote on 05/06/2005 01:48:03 PM:

<snip>

> >>>>I am running the evmsd daemon with the debug flag but the log file
> >>>>didn't seem to help me. Does anyone know what device it's looking for?
> >>>>
> >>>>
> >
> ><snip>
> >
> >John,
> >
> >Assuming the daemon log did not get truncated when you included it in your
> >post, the daemon is hung in its hb_register() function, which registers with
> >heartbeat.  hb_register() calls several heartbeat functions to get the
> >connection set up.  I don't know which of the calls is hanging.
> >hb_register() doesn't put any additional information into the log.  If you
> >are willing, I can write up a patch that will put more debugging
> >information into the log and we can run that and see if we get any more
> >clues.
> >
> >In the meantime, try shutting down evmsd and running "evmsccm -l".  It
> >should list the nodes that are in the cluster membership.  evmsccm makes
> >calls to heartbeat that are similar to the ones hb_register() calls.  If
(Continue reading)

Sven Karlsson | 8 May 17:15 2005

RAID5 problems

Hi all,

I have run evms on a Debian (Linux) based secondary file server for some
time. 4 IDE disks are split into 4 partitions each, which then form 4
RAID5 arrays, so that each disk contributes to 4 different RAID5s. The
RAID5s are all bundled together with LVM2.

One week ago there was a hiccup on one of the IDE controllers, causing
the md code to believe the disk had failed. Unfortunately, the spare disk
was not in place, which caused the RAIDs to run in degraded mode.

As soon as I noticed the hiccup (my guesstimate is that I saw it within
2 hours of the error) I immediately shut the machine down. Yesterday,
after adding a spare disk and changing a few IDE cables, the machine
worked well for some time. The RAIDs came up in degraded mode. All
volumes were detected and checking the file systems showed no errors. I
could conclude that the hiccup probably was due to a cable, which was
replaced when adding the spare. S.M.A.R.T. and other tools report no
errors.

I decided to reboot just to see that everything was brought up OK before
adding the spare and trying to figure out how to remove (and re-add)
the disk that was marked faulty. Now the problem starts.

evms_activate crashed with a segmentation fault. I have tested with
different Linux kernels (2.4.27 and 2.6.11.8) and I also upgraded to the
most recent evms, 2.5.2. All combinations yielded segmentation faults.

I was rather desperate and removed the faulty disk from the probing
process by explicitly excluding it in evms.conf. Now no segmentation
(Continue reading)

Kevin Corry | 9 May 05:14 2005

Re: RAID5 problems

Hi Sven,

Since EVMS seems to be misbehaving, the safest way to get your system
working again would be to use mdadm to recreate each of the raid5 regions
in degraded mode.

You didn't mention the exact names of your disks and other objects, so I'll 
assume your four IDE disks are hd[abcd], and that you have four primary 
partitions on each. Simply adjust the names and numbers as appropriate for 
your exact setup. I'll also assume hdd is the "bad" disk, and that hd[abc] 
are the working ones.

Start by running "mdadm --examine" on one partition from each raid5, and 
write down the chunk size, parity algorithm, and order of the partitions 
within the raid5. I'll assume your chunk size is 32KB, the parity is 
left-symmetric, and the partitions are in alphabetical order. Adjust these 
values as appropriate based on the --examine information.

Then use the following command to recreate one raid5 in degraded mode. The 
"missing" item specifies which partition out of the four is not present. See 
the mdadm man page for more information about the options. This also assumes 
you have used EVMS to activate the underlying partitions (segments).

mdadm -C /dev/md0 -c 32 -l 5 -p ls -n 4 -f \
  /dev/evms/.nodes/hda1 /dev/evms/.nodes/hdb1 /dev/evms/.nodes/hdc1 missing

If this command succeeds, you should see md0 appear in /proc/mdstat, and it 
should be in degraded mode, since only three disks are available. Run the 
same command to reconstruct your other three raid5s with the appropriate 
partitions.
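
If it helps, here's a small python sketch that just prints the command for
each of the four raid5s so you can double-check them before running
anything (it assumes the names and values above; substitute your real
--examine output):

# Print the mdadm command for each degraded raid5. Assumes hd[abc] are
# good, hdd is missing, partitions 1-4, 32KB chunks, left-symmetric.
for i in range(4):
    part = i + 1
    devs = " ".join("/dev/evms/.nodes/hd%s%d" % (d, part) for d in "abc")
    print("mdadm -C /dev/md%d -c 32 -l 5 -p ls -n 4 -f %s missing"
          % (i, devs))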
(Continue reading)

Olivier Calle | 9 May 18:07 2005

Reiserfs resize problem

Hi,

I've got the following EVMS setup:
2 120GB disks partitioned into 20GB pieces, each pair mirrored as a RAID1 set.
An LVM container "container1" consuming the RAID1's.
LVM regions: /, /var, /tmp, /usr/local, /tesla/usr1 and swap
kernel 2.6.11-gentoo-r6, evms 2.5.2

I freed up a bunch of space on usr1 and rebooted into single user mode to 
shrink it down from 94GB to 50GB.  So, in single user mode, I unmounted 
/dev/evms/usr1 and ran evmsn.  I selected /dev/evms/usr1 and picked 
shrink, by 44GB.  I then did a quit and selected save when asked.  I 
got the "resizing /dev/evms/usr1" message.  I saw the disk activity light 
on solid.  After leaving it to its task for 10-20 minutes, I came back and 
found the computer idle, but still showing the same EVMS screen.  I logged 
into another console, did a ps, and saw resize_reiserfs in the process 
list.  I ran vmstat 1 for a while and saw no system activity whatsoever 
(I'm in single user mode).  After watching for a while, I did a Ctrl-C at 
the evmsn console, which dumped me at the prompt, at which point I needed 
to leave for work, so I powered down the system...

My plan is to take one of the disks and a bigger, clean disk to another 
system, boot from a gentoo 2005.0 live cd, and dd my partitions to the 
clean disk as files so that I have an untouched version; then I think 
I'll run evms_activate and see what I get. Any suggestions or other tips 
on seeing if I can bring that partition back to life?
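
For the imaging step, a python sketch of the idea (the partition and
destination names are placeholders for my real ones):

import shutil

# Copy each partition, a block at a time, to an image file on the
# clean disk. Names below are placeholders.
for dev in ["hda1", "hda2", "hda3"]:
    with open("/dev/" + dev, "rb") as src, \
         open("/mnt/clean/%s.img" % dev, "wb") as dst:
        shutil.copyfileobj(src, dst, 1024 * 1024)    # 1MB chunks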

Thanks,

-- 
(Continue reading)

Alex | 10 May 00:54 2005

EVMS w/ colinux?

Is there a trick to getting EVMS to work with CoLinux?

CoLinux allows one to run Linux inside Windows using a modified kernel. 
Partitions are exported as devices named "/dev/cobd/0", "/dev/cobd/1", ... 
(when devfs is on).

Note, colinux doesn't export entire disks, only individual partitions. 
They show up as block devices with major 117.

I can get evms to see loop devices bound to them, but not the block 
devices directly.
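
For reference, the loop workaround that does work is roughly this (the
number of partitions is made up):

import subprocess

# Bind a loop device over each cobd partition so evms can see them.
for i in range(4):
    subprocess.check_call(["losetup",
                           "/dev/loop%d" % i,
                           "/dev/cobd/%d" % i])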

I have tried modifying evms.conf under "legacy devices", but with no
success. My current config is below:

Thanks

-Alex

engine {
         mode            = readwrite
         debug_level     = default
         log_file        = /var/log/evms-engine.log
         metadata_backup_dir     = /var/evms/metadata_backups
}
daemon {
         log_file        = /var/log/evms-daemon.log
}
clustering {

         membership_timeout = 10
(Continue reading)

John DeFranco | 10 May 18:14 2005

Re: [Evms-cluster] evms cluster plug-in error

Steve Dobbelstein wrote:

>
>Do you have the following line in your /etc/ha.d/ha.cf?
>
>apiauth evms gid=haclient uid=root
>
>I have that line in my ha.cf.  I think it's what allows EVMS to use the
>"evms" ID to talk to HA.  I noticed that the EVMS HA instructions don't say
>anything about adding this line.  If you don't have the line, try adding it
>and see if EVMS will connect with HA.
>
>Steve D.
>  
>

Hi Steve,

I tried adding this line and it did not make a difference.

-- 
==========
Cheers
   -jdf
