Posts Tagged ‘HP’

Timezone for Israel on HP-UX 11i and above

Thursday, August 21st, 2008

While Linux vendors tend to maintain and publish complete timezone and daylight saving time (DST) information, most legacy Unix vendors do not, especially when it comes to a country as small as Israel.

The following solution was tested on HP-UX B.11.31, and would probably work for all 11i versions.

The Israel standard timezone is called IST; the Israel daylight saving timezone is called IDT.

The quick and dirty:

Edit /usr/lib/tztab and append the following lines at the bottom:

# Israel daylight savings
# Added by Ez-Aton. years 2008 to 2011 only. Simple and ugly.
IST-2IDT
0 3 28 3 2008 5 IDT-3
0 1 5 10 2008 0 IST-2
0 3 27 3 2009 5 IDT-3
0 1 27 9 2009 0 IST-2
0 3 26 3 2010 5 IDT-3
0 1 12 9 2010 0 IST-2
0 3 1 4 2011 5 IDT-3
0 1 2 9 2011 0 IST-2
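
For reference, each line in this block reads, left to right: minute, hour, day of month, month, year, day of week (0 being Sunday), and the timezone string which takes effect at that moment. The first entry, for example, switches to IDT-3 at 03:00 on Friday, March 28th, 2008. man tztab has the authoritative description of the format.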

Save the file (it is write-protected, so force the save, e.g. :w! in vi) and then edit /etc/TIMEZONE to include the following TZ directive:

TZ=IST-2IDT
export TZ

Assuming you sync your time using NTP, all future logins will have the correct Israeli time and daylight saving offset.
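
One quick way to sanity-check the new definition, without waiting for a new login, is to override TZ for a single command:

# Should print the current Israeli time, with IST or IDT as the zone name:
TZ=IST-2IDT date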

For further information, check man tztab and man environ.

HP EVA SSSU and fixed LUN WWID

Monday, July 14th, 2008

Linux works perfectly well with multiple storage links using dm-multipath. Not only that, but HP has released its own spin of dm-multipath, which is claimed to be optimized (and is, at any rate, well configured) to work with EVA and MSA storage devices.

This is great; however, what do you do when mapping volume snapshots through dm-multipath? Each new snapshot gets a new WWID, which maps to a new “mpath” name, or to a raw WWID name (if “user_friendly_names” is not set). This can and will wreak havoc on remote scripts: on each reboot, the new and shiny snapshot will acquire a new name, making scripting a hellish experience.

For the time being, I have not tested ext3 labels. I suspect that using labels will fail, as the dm-multipath layer does not hide the underlying sd devices, so the system might detect the same label more than once: once for each underlying sd device, and once more for the dm-multipath device on top.

A solution which is both elegant and useful is to pin the snapshot's WWID through a small addition to the SSSU command. Append a string such as this to the snap create command:

WORLD_WIDE_LUN_Name="6300-0000-0000-0000-0010-0000"

Don’t use the numbers supplied here. “Invent” your own 🙂

Mind you, you must use dashes, or the command will fail.
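
For illustration, a complete snapshot creation command might look roughly like the line below. The snapshot name and vdisk path here are hypothetical, and the exact ADD SNAPSHOT parameters depend on your SSSU and Command View versions, so treat this as a sketch:

ADD SNAPSHOT SNAP_ORACLE VDISK="\Virtual Disks\Linux\oracle" WORLD_WIDE_LUN_Name="6300-0000-0000-0000-0010-0000"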

Doing so will allow you to always use the same WWID for the snapshot, and thus save tons of hassle when accessing snapshots through dm-multipath after a system reboot.

dm-multipath and loss of all paths

Tuesday, May 13th, 2008

dm-multipath is a great tool. Its abilities have been proven to me on many occasions, and I’m sure I’m not the only one. NetApp, for example, uses it. HP uses it as well (a slightly modified version, but still), and it works.

A problem I have encountered is as follows: if a single path fails, the device-mapper continues to work correctly (as expected) and the remaining path becomes active. However, if the last path fails, all processes which require disk access become stale. This means that many tests which search for a given process pass even when that process has gone stale through (forever) delayed access to the filesystem. Likewise, tests which attempt to write/read a file to/from such a stale filesystem become stale themselves, which can bring down an entire system. Assume a cron job which creates a file every minute: every new process becomes stale immediately, so after an hour we’ll have 60 more processes, and after a day 1440 additional processes, all stale (state D) and waiting for the disk to come back.
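
A quick, hand-rolled way to spot this condition from a Linux shell (my own illustration, not part of any monitoring product) is to count processes stuck in uninterruptible sleep:

# A steadily growing number of D-state processes is a strong hint
# that a filesystem went stale:
ps -eo state,pid,comm | awk '$1 == "D"' | wc -l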

Certain detection systems actually fail to auto-detect cases of stale filesystems when using dm-multipath. This is caused by a (default) feature called “1 queue_if_no_path”, which queues I/O forever while no path is available instead of failing it. I discovered that when this option is omitted, as in the configuration below (only the “device” section is shown):

device {
        vendor                  "NETAPP"
        product                 "LUN"
        getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
        prio_callout            "/sbin/mpath_prio_ontap /dev/%n"
        # features              "1 queue_if_no_path"
        hardware_handler        "0"
        path_grouping_policy    group_by_prio
        failback                immediate
        rr_weight               uniform
        rr_min_io               128
        path_checker            readsector0
}

multiple disk failures will actually result in the filesystem layer reporting I/O errors (which is good). A filesystem on such a device can then be mounted with special error-handling parameters, such as (for ext2/ext3): errors=continue, errors=remount-ro, or errors=panic (my favorite, as it ensures data integrity through a self-fencing mechanism).
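
For illustration, with made-up device and mount point names, an /etc/fstab entry using the self-fencing option would look like this:

# ext3 on a multipath device; kernel panics on any I/O error (self-fencing):
/dev/mapper/mpath0p1   /data   ext3   defaults,errors=panic   1 2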

HP EVA bug – Snapshot removed through sssu is still there

Friday, May 2nd, 2008

This is an interesting bug I have encountered:

The output of a successful sssu delete command should look like this:

EVA> DELETE STORAGE "\Virtual Disks\Linux\oracle\SNAP_ORACLE"

EVA>

It still leaves the snapshot (SNAP_ORACLE in this case) visible until “OK” is clicked in the web interface.

This happened to me on HP EVA with HP StorageWorks Command View EVA 7.0 build 17.

When a second delete command is given for the same snapshot, it looks like this:

EVA> DELETE STORAGE "\Virtual Disks\Linux\oracle\SNAP_ORACLE"

Error: Error cannot get object properties. [ Deletion completed]

EVA>

When this command is given for a non-existing snapshot, it looks like this:

EVA> DELETE STORAGE "\Virtual Disks\Linux\oracle\SNAP_ORACLE"

Error: \Virtual Disks\Linux\oracle\SNAP_ORACLE not found

So I run the removal command twice (scripted) in an sssu session without “halt_on_errors” set. This removes the snapshot correctly.
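
For illustration, the scripted session looks roughly like the following. The manager address, credentials and system name are placeholders, and the exact option for disabling halt-on-errors differs between SSSU versions, so verify against your own release before use:

SET OPTIONS ON_ERROR=CONTINUE
SELECT MANAGER <manager-host> USERNAME=<user> PASSWORD=<password>
SELECT SYSTEM <eva-name>
DELETE STORAGE "\Virtual Disks\Linux\oracle\SNAP_ORACLE"
DELETE STORAGE "\Virtual Disks\Linux\oracle\SNAP_ORACLE"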

HP-UX patch structure

Saturday, April 12th, 2008

Hunting patches for HP-UX is not a simple task. A newer version of a patch receives an entirely new patch ID, which bears no meaningful connection to the ID of the patch it supersedes.

I have discovered that the best way to track a patch is to find out which subsystem was fixed by its predecessor, and to search for that subsystem in HP’s HP-UX patch search engine, or in the complete patch list.
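
On the system itself, swlist can help with mapping patches to the subsystems (filesets) they modify. A minimal sketch, where PHKL_nnnnn stands in for a real patch ID:

# List every patch currently installed:
swlist -l patch
# Show verbose details, including the filesets a patch modifies
# (PHKL_nnnnn is a placeholder):
swlist -v -l patch PHKL_nnnnn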

Good luck.