Mike Gerdts | 30 May 15:59 2011

Re: Swap device priorities in Solaris?

On Sun, May 29, 2011 at 5:24 AM, Jim Klimov <jimklimov@...> wrote:
> And actually your comment on the Sun/Oracle blog pretty much sums it up -
> why a lot of swap should sometimes be reserved, and may get used, even
> if in practice we hope it is never touched. If some runaway process eats
> up all VM, we still want the system to be responsive enough to get in
> and kill that process. Or let it complete in a timely fashion...

While this type of administration is sometimes necessary, I generally
see it as an indication of a bad application that needs babysitting.
I prefer to train Solaris to do that babysitting rather than perform
fire drills every time an application misbehaves.

You may want to look at using ulimit -d (or maybe ulimit -v) to keep
individual processes from doing bad things.  ulimit values are
inherited from parent to child (a ulimit set in a shell is inherited
by any process created by that shell).  You can use plimit to change
an existing process's ulimits or to view the ulimits of running
processes.  If a process tries to use too much memory, it will be
killed - the same thing you would do if you were to log in to take
care of the bad process.

If you use SMF to manage the starting and stopping, the process will
be restarted automatically after it is killed for consuming too much
memory.  In this case, you probably want to set the ulimit in the
start script before starting the actual process.
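As a concrete sketch of that inheritance (the 1 GB cap and the wrapper name are placeholders, not from the original mail), a start script can set the cap in a subshell so only that one process tree is confined:

```shell
#!/bin/sh
# Run a command under a virtual-memory cap. ulimit values are inherited
# from parent to child, so a cap set in the subshell applies to the
# exec'd command and everything it spawns, and to nothing else.
run_capped() {
    cap=$1; shift                   # $1 = VM cap in KB, rest = command
    ( ulimit -v "$cap"; exec "$@" )
}

# The child sees (and is bound by) the cap:
run_capped 1048576 sh -c 'ulimit -v'    # prints 1048576
```

An already-running process can be adjusted with plimit instead (its value syntax is soft,hard pairs; check plimit(1) on your release).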

If you have a collection of processes that you are trying to put a
limit on, you can run them in a resource-controlled zone.  As in
the URL I give below, zone.max-swap refers to memory reservations.
Setting a limit on physical memory can actually induce paging (to swap).
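The zone caps in question are configured with zonecfg; a sketch of a session (zone name and values are illustrative):

```
# zonecfg -z appzone
zonecfg:appzone> add capped-memory
zonecfg:appzone:capped-memory> set physical=2g
zonecfg:appzone:capped-memory> set swap=4g
zonecfg:appzone:capped-memory> end
zonecfg:appzone> commit
```

The swap property is what sets the zone.max-swap resource control; physical engages the rcapd physical-memory cap, which is the setting that can induce paging.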

Jim Klimov | 28 May 16:50 2011

Swap device priorities in Solaris?

Hello all,

I was recently asked whether it is possible to assign different priorities to swap devices in Solaris, as is
possible on Linux, and found that "I don't know, but probably not" :\

One use case might be a system with its rpool on a fast SSD with limited space, where the owner wants the
system to swap to the SSD swap partition (or an rpool volume) until that space runs out, and only then swap
to slower additional HDD swap spaces - not round-robin across all available swap spaces all the time.

So: is that currently possible in any recent version of Solaris or OpenSolaris?
If not, how difficult would it be to code such behaviour as an RFE?
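For reference, the Linux behaviour being asked about is the per-device swap priority (device names below are illustrative); as far as I can tell, Solaris' swap -a accepts no comparable priority argument:

```
# /etc/fstab on Linux: the higher-priority device is filled first
/dev/sda2   none   swap   sw,pri=10   0 0   # fast SSD swap
/dev/sdb2   none   swap   sw,pri=1    0 0   # slow HDD overflow
```

The same can be done at run time with swapon -p <priority> <device>.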

Thanks,
//Jim Klimov
--

-- 
This message posted from opensolaris.org

Re: Sun Storage 7320 system

Please also see these papers.

MySQL Guide for Sun Storage 7000 Unified Storage System



http://blogs.oracle.com/dlutz/entry/mysql_on_sun_storage_7000

For MySQL and 7000 storage; it could also give you some hints.


PostgreSQL and the 7000:


http://www.oracle.com/technetwork/systems/articles/adempiere-7000-jsp-139750.html
http://blogs.oracle.com/jkshah/entry/postgresql_and_project_amberroad_sun


On 5/26/2011 9:09 AM, Gustav Potgieter wrote:
_______________________________________________
sysadmin-discuss mailing list
sysadmin-discuss@...
http://mail.opensolaris.org/mailman/listinfo/sysadmin-discuss
LaoTsao | 26 May 16:30 2011

Re: Sun Storage 7320 system

What I say may not be up to date.

If you want performance: mirror, plus SSD for the ZIL. There is a white paper on ZFS and the Oracle DB
that you may be able to reuse for Postgres.

If you want space: raidz gives you the most space, with RAID-5-like protection; raidz2 and raidz3
use double and triple parity to protect against more HDD failures.

One more thing you need to consider is
how you serve the DB: NFS, iSCSI, FC, etc.
IMHO, NFS is the simplest.
With iSCSI and FC you will need another filesystem on the server for the DB.

ZFS provides analytics (DTrace based) that let you gather some real-time data to help you with performance.

Of course you will need good backups.

You did not mention whether you have a single head or dual head for the 7320.
Each has its pluses and minuses.
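Outside the appliance UI, the same layout choices map onto zpool configurations roughly like this (disk names are illustrative; the appliance builds the equivalent for you):

```
# Mirrored pairs plus mirrored SSD log devices (ZIL) - usually the
# best fit for a random-write database load:
zpool create dbpool \
    mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 \
    log mirror c2t0d0 c2t1d0

# raidz2 (double parity): more usable space, but each raidz vdev
# delivers roughly one disk's worth of random IOPS:
zpool create dbpool raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
```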

Sent from my iPad
Hung-Sheng Tsao ( LaoTsao) Ph.D

On May 26, 2011, at 9:09 AM, Gustav Potgieter <gp@...> wrote:

> Thank you for the reply, sir.
> 
> I have downloaded the Simulator, but as per the attached image it only briefly explains the options and
does not actually go into detail or explain them in real terms.
> 
> I would like to know whether Triple mirrored compares to, say, RAID 10, and what the best option would
be for a database; from Google etc. I cannot find this reference.
> 
> <configure-7320.png>
> 
> 
> I have a support contract and I am just waiting for the details.
> 
> As for optimization, 
> 
> I have already run a lot of tests and I am confident in my config, but I would like to know if there is any
specific optimization for a 7320 that would be of interest to me.
> 
> I will most definitely follow @lisa09, and have posted this in the zfs-discuss section.
> 
> Hope you have a good day,
> Gustav
> 
> On 26 May 2011, at 2:57 PM, LaoTsao wrote:
> 
>> Maybe you should ask on the zfs-discuss list.
>> The ZFS appliance has a simulator that should be able to give you some idea about the trade-offs in space for
the various raid/mirror layouts.
>> As for performance, google it and read the various ZFS links and best-practice tuning guides.
>> ZFS @lisa09 is a good start.
>> You should get a support contract to cover yourself.
>> 
>> 
>> Sent from my iPad
>> Hung-Sheng Tsao ( LaoTsao) Ph.D
>> 
>> On May 26, 2011, at 7:13 AM, Gustav <gus@...> wrote:
>> 
>>> Hi All,
>>> 
>>> Can someone please give some advice on the following?
>>> 
>>> We are installing a 7320 with two 18 GB Write Accelerators, 20 x 1 TB disks and 96 GB of RAM.
>>> 
>>> Postgres will be running on an Oracle x6270 server with 96 GB of RAM installed and two quad-core CPUs, with a
local WAL on 4 hard drives, and a 7320 LUN via 8 Gb FC.
>>> 
>>> I am going to configure the 7320 as Mirrored, with the following options available to me (read and write
cache enabled):
>>> Double parity RAID
>>> Mirrored
>>> Single parity RAID, narrow stripes
>>> Striped
>>> Triple mirrored
>>> 
>>> What does the above mean in real, official terms,
>>> and what is optimum for a high-write database (PostgreSQL 9 - or any performance tips for Postgres on a 7320),
>>> and are there any comments that can help us improve performance?
>>> 
>>> Thanks, all feedback will be appreciated!
> 
Sanu P Soman | 26 May 14:06 2011

How to install a new service in OpenSolaris

Hello,

I want to install a new service for my application (Atlassian JIRA) on OpenSolaris. Right now I start the
JIRA service by running the "startup" script in Tomcat's bin folder.

On RHEL, I installed the JIRA service using the "chkconfig" command and a custom script, i.e.
chkconfig --add <script name>, with the custom script placed in /etc/rc.d/init.d.

I'm new to Solaris; please advise me on this.
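For what it's worth, the SMF analogue of chkconfig --add is a service manifest imported with svccfg; a minimal sketch (service name and script paths here are hypothetical placeholders):

```xml
<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type='manifest' name='jira'>
  <service name='application/jira' type='service' version='1'>
    <create_default_instance enabled='true'/>
    <single_instance/>
    <exec_method type='method' name='start'
        exec='/opt/atlassian/jira/bin/startup.sh' timeout_seconds='60'/>
    <exec_method type='method' name='stop'
        exec='/opt/atlassian/jira/bin/shutdown.sh' timeout_seconds='60'/>
  </service>
</service_bundle>
```

Import and enable with svccfg import jira.xml and svcadm enable application/jira; unlike an rc script, SMF will also restart the service if it dies.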

Thanks,
Sanu P Soman
Gustav | 26 May 13:13 2011

Sun Storage 7320 system

Hi All,

Can someone please give some advice on the following?

We are installing a 7320 with two 18 GB Write Accelerators, 20 x 1 TB disks and 96 GB of RAM.

Postgres will be running on an Oracle x6270 server with 96 GB of RAM installed and two quad-core CPUs, with a
local WAL on 4 hard drives, and a 7320 LUN via 8 Gb FC.

I am going to configure the 7320 as Mirrored, with the following options available to me (read and write cache enabled):
Double parity RAID
Mirrored
Single parity RAID, narrow stripes
Striped
Triple mirrored

What does the above mean in real, official terms,
and what is optimum for a high-write database (PostgreSQL 9 - or any performance tips for Postgres on a 7320),
and are there any comments that can help us improve performance?

Thanks, all feedback will be appreciated!
russell | 23 May 21:17 2011

Enforcing security using DNSSEC

Hi,

With the introduction of DNSSEC, it should be possible to publish a security policy via a DNS record that is
signed to verify its authenticity and can be read by every computer within a domain.

This record could be used to set the security policy on every computer - setting the NIS, LDAP, SMTP and NFS
servers and locking down the network via global firewall parameters.
As DNS is a hierarchical model, child domains could inherit settings from the parent or be allowed to
override them. This would also apply to individual DNS records, allowing permissions on a per-machine basis.

Using this approach, settings could be enforced via DHCP internally, and if an organisation published a
security policy publicly on the internet, it could be used to ensure that all computers follow a
corporate standard.

Unfortunately this technology would require an acceptable platform-independent framework to allow it to
be used by any internet-connected device - laptop, server or mobile phone.
Naveen kumar | 21 May 13:17 2011

zfs flar restore

Hi, 

I wanted to create a non-interactive ZFS flar restore DVD, and followed the procedure below.
This experiment is on Solaris 10 - my apologies, I know this forum is for OpenSolaris.

1) flarcreate -n patched /export/home/flar/patched.flar
2) I followed "http://www.phoronix.com/scan.php?page=article&item=867&um=1" 
    to create NON interactive restore iso image.

As soon as the machine boots from the ISO, I get the error "runcmd error in /usr/sbin/sysidput".

The detailed log messages:

sip_have_min_net_info: returning FLASE
sip_have_all_info:returning FLASE
sip_configurable_network:FAILED to configure primary interface
cleanup_if:checking booted off net image
system is not booted off net image
get_net_ipaddr:iprb0 
get_net_ipaddr returned FASE iprb0 is probably 0.0.0.0
unable to remove /tmp/root/etc/hostname.iprb0

----------------------------------

In the sysidcfg file I replaced network_interface=primary with
network_interface=bnx0,

but the system is still looking for the iprb0 interface.

What went wrong? Is there any other non-interactive way to create restore media with ZFS?

Thanks
Naveen
Luigi Manna | 17 May 21:45 2011

cxtydz renumbering

Hi,

  maybe this has already been discussed, but I couldn't figure out how to search for it. We have a backup server
with 24 SATA disks where we upgraded all disks from 1 TB to 2 TB. We forgot to do cfgadm -c replace for
all the receptacles, and the AP_IDs just kept increasing the "y" value in cxtydz. For instance, for controller c5
we got:

c5::dsk/c5t8d0                 disk         connected    configured   unknown
c5::dsk/c5t9d0                 disk         connected    configured   unknown
c5::dsk/c5t10d0                disk         connected    configured   unknown
c5::dsk/c5t11d0                disk         connected    configured   unknown
c5::dsk/c5t12d0                disk         connected    configured   unknown
c5::dsk/c5t13d0                disk         connected    configured   unknown
c5::dsk/c5t14d0                disk         connected    configured   unknown
c5::dsk/c5t15d0                disk         connected    configured   unknown

  We want to have them back at:

c5::dsk/c5t0d0                 disk         connected    configured   unknown
c5::dsk/c5t1d0                 disk         connected    configured   unknown
c5::dsk/c5t2d0                 disk         connected    configured   unknown
c5::dsk/c5t3d0                 disk         connected    configured   unknown
c5::dsk/c5t4d0                 disk         connected    configured   unknown
c5::dsk/c5t5d0                 disk         connected    configured   unknown
c5::dsk/c5t6d0                 disk         connected    configured   unknown
c5::dsk/c5t7d0                 disk         connected    configured   unknown

   Is there a way to do this without replugging the old drives and doing a cfgadm replace for all of them? We tried
rebooting with "reboot -- -r" and that didn't work.

Thanks in advance for any answers,

Luigi
Jim Klimov | 10 May 13:47 2011

How to add system-shutdown processing commands in OpenSolaris/Solaris11E/OI

Hello all,

A long while ago I made a package for NUT (Network UPS Tools), used in some of our projects since Solaris 8. Part
of this packaging involved changes to the "/etc/rc0" scripts, with variants for Solaris 8 and 9 and for
Solaris 10 with SMF; an example of the patch changes is attached below.

Currently, on another occasion, I want to add a watchdog cycle that forces a reboot/halt with "uadmin" if
the pool export on shutdown hangs, as happens too often on my home test box.

The important part of these patches is adding a watchdog to the shutdown procedure, so that if shutdown does
not complete in a timely fashion for any reason (in the UPS case: line power returns during shutdown/halt,
so the UPS never dies), the OS is forcibly rebooted. By adding the watchdog to "/etc/rc0" we
avoid it being killed like any other system process (it is spawned after the kills).

These patches no longer work for OpenSolaris and its "SunOS 5.11" descendants, because all of that
shutdown logic is no longer in the scripts, but perhaps in the svc.startd binary?..

Anyway, what is the proper way to add such special shutdown handlers to modern OpenSolaris versions and their kin?

Example of watchdog cycle in "/etc/rc0" for our NUT integration:

===
# cat /etc/ups/rc0.patch.sol10 
--- rc0.orig    Sat Jan 22 02:27:33 2005
+++ rc0 Thu Sep 28 16:06:15 2006
@@ -16,6 +16,16 @@

 PATH=/usr/sbin:/usr/bin

+# File used by NUT if shutting down because of UPS event
+UPSFLAG=/etc/ups/killpower
+# How long we wait after halt before reboot
+UPSWAIT=600
+# Flag used by us to trigger waiting
+_UPSPOWERDOWN=0
+
+[ -f "$UPSFLAG" ] && _UPSPOWERDOWN=1
+[ x"$_UPSPOWERDOWN" = x1 ] && echo "UPS powering off, will wait and reboot in the end"
+
 if [ -z "$SMF_RESTARTER" ]; then
        echo "Cannot be run outside smf(5)"
        exit 1
@@ -71,3 +81,11 @@

 [ -x /usr/lib/acct/closewtmp ] && /usr/lib/acct/closewtmp
 /sbin/sync; /sbin/sync; /sbin/sync
+
+if [ x"$_UPSPOWERDOWN" = x1 -a -x /etc/ups/reboot ]; then
+       echo "Counting down to restart in case UPS line power is back: waiting $UPSWAIT sec"
+       sleep "$UPSWAIT"
+       ( echo "Syncing disks..."; /sbin/sync ) &
+       sleep 3
+       echo "Rebooting system now"
+       /etc/ups/reboot -lq
+fi
+
===

Thanks,
//Jim Klimov
Naveen surisetty | 10 May 11:01 2011

ZFS backup and restore

Hi,

Is there a way to back up and restore an entire machine without using the zfs send/receive options?

Will export/import help? I searched on the internet but only found backups using send and receive.
Please advise.

Thanks in advance 
kumar