Archive for the ‘AIX’ Category

Incorrect dependencies for installation of packages on AIX 5.3

Wednesday, December 5th, 2007

Following an upgrade of AIX 5.3 to Technology Level 07 with SP1, I encountered a problem installing a package required for Oracle 11g – rsct.basic.rte 2.4.8.0.

This rsct.basic.rte package requires rsct.basic.rte version 2.4.0.0 from the AIX CD. However, to install that baseline fileset, installp demanded xlC.aix61 version 9.0.0.1, which has no business on this system, and following that, bos.rte version 6.0.0.0, which belongs to AIX 6.x.

Some elaboration on the bos family of packages – bos stands for Base Operating System, and rte stands for RunTime Environment. In other words, bos.rte version 6.0.0.0 is the base operating system runtime of AIX version 6 – far from what I wanted, since you cannot replace the system’s bos.rte package…
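Before forcing anything, it is worth letting installp preview what it intends to pull in. A minimal sketch, assuming the install CD sits on /dev/cd0:

lslpp -L "rsct.*"
installp -apgX -d /dev/cd0 rsct.basic.rte

The -p flag runs a preview only; the requisite list it prints is where the unexpected AIX 6 filesets show up.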

I attempted to force installation of the baseline version of rsct.* from the CD by running the command

installp -aF -d /dev/cd0 rsct.basic.rte

but to no avail. I removed all rsct.* packages (this time I used smit), and still I was unable to install the baseline rsct.basic.rte package, since it had dependencies from AIX 6…
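For completeness, the listing and removal can also be done from the command line instead of smit. A sketch – the -g flag removes dependent filesets along with the one named, but any remaining rsct filesets may still have to be named explicitly:

lslpp -L "rsct.*"
installp -ug rsct.basic.rte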

I was able to solve it using the following method (a consolidated sketch of the commands appears after the list):

1. Installed all missing bos.adt baseline packages, using the following command

installp -aXg -d /dev/cd0 bos.adt

2. Extracted the combined TL07 upgrade and SP1 update packages to a dedicated directory

3. Copied the baseline rsct packages from the CD-ROM into that same directory:

mount /mnt/cdrom
cp /mnt/cdrom/installp/ppc/rsct.* ./

4. Created a new .toc file

inutoc .

5. Installed rsct.basic.rte – successfully this time, with all dependencies fulfilled

installp -aXg -d . rsct.basic.rte

6. Updated the system back to the latest OS level

smit update_all
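For reference, here is the whole sequence condensed into one sketch. It assumes the CD is /dev/cd0, that /mnt/cdrom exists as a mount point, and that the extracted TL/SP packages already sit in /tmp/aix53tl7 – the directory names are my own placeholders:

# 1. install the missing bos.adt baseline filesets from the CD
installp -aXg -d /dev/cd0 bos.adt
# 2+3. mount the CD and copy the baseline rsct packages next to the extracted updates
mount -v cdrfs -o ro /dev/cd0 /mnt/cdrom
cp /mnt/cdrom/installp/ppc/rsct.* /tmp/aix53tl7/
# 4. rebuild the .toc so installp sees baseline and update filesets together
cd /tmp/aix53tl7
inutoc .
# 5. install the baseline rsct.basic.rte with its requisites resolved locally
installp -aXg -d . rsct.basic.rte
# 6. bring the whole system back up to the latest level
smit update_all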

This worked fine, and I am writing it down for the next sucker who is required to fulfill an impossible requirement in order to install one small package.

It has been a while… Today – Monitor I/O in AIX

Saturday, November 25th, 2006

This is a question I was asked a while back and never found the time to look into.

I searched for an answer just now and found one given by one of the old-time gurus (most likely) on a news server (where the gurus usually lurk). The answer is to use the "filemon" command with syntax such as this:

filemon -O lf -o outputfile.txt

To stop the monitoring session, run "trcstop". Review the output file generated by this command, and you will be able to view the I/O interactions that happened on the system during that time.
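In practice a session looks roughly like this – a sketch, with the 60-second window and the output file name being arbitrary choices of mine:

filemon -O lf -o outputfile.txt   # start tracing at the logical-file level
sleep 60                          # let the workload you want to observe run
trcstop                           # stop the trace; filemon writes its report
more outputfile.txt               # the most active files are summarized near the top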

Setting up an AIX HACMP High Availability Test Cluster

Tuesday, July 4th, 2006

This post is divided into this common view part and (a first for this blog) a "click here for more" part.

The main reason I created this blog was to document, both for myself and for other technical people, the steps required to perform certain tasks.

My first idea was to document how to install HACMP on AIX. For those of you who do not know what I’m talking about, HACMP is a general-purpose high-availability cluster made by IBM, which runs on AIX and, if I’m not mistaken, on other platforms as well. It is based, essentially, on a large set of "event" scripts which run in a predefined order.

Unlike other HA clusters, this is a fixed-order cluster. You can bring your application up after the disks are up and after the IP is up. You cannot change this predefined order, and you have no freedom to set your own. Unless.

Unless you create custom scripts and use them as pre-event and post-event scripts, naming them correctly and putting them in the right directories.
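As an illustration only, a post-event hook can be as small as a ksh script that logs what happened and runs the one extra task the fixed order doesn’t cover. The path, the file name, and the helper it calls are all made up here, and the script still has to be registered as a post-event command through the smit hacmp menus:

#!/bin/ksh
# /usr/local/cluster/post_node_up.ksh - hypothetical post-event script
# Log whatever arguments the cluster manager passed, then do the extra work.
print "$(date) post_node_up args: $*" >> /var/log/cluster_custom.log
# Hypothetical helper: start a component that must come up after the standard events
/usr/local/bin/start_my_app.sh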

This is not an easy cluster to manage. It has no flashy features, and it is not versatile the way other HA clusters are (VCS being the best one, in my opinion, and MSCS, despite its tendency to run into race conditions, is quite versatile itself).

It is a hard HA cluster for hard-working people. It is meant for a single method of operation and a single track of mind. It is rather stable, as long as you don’t go around adding volumes to VGs carelessly (know what you want before you do it).
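If shared VG changes really are needed, making them through C-SPOC rather than with plain LVM commands on one node keeps both nodes in sync. The fastpath, quoted from memory:

smit cl_admin   # HACMP C-SPOC system management menus; shared LVM changes live here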

Below is a step-by-step list of actions, based on my work experience. I brought up two clusters (four nodes) while copy-pasting into a text document every action I performed, every package I installed, and so on.

It is meant for test purposes. It is not as highly available as it could be (it uses the same network infrastructure), and it doesn’t employ all the heartbeat connections it might have had – it’s meant for lab purposes, and has done quite well there.

It was installed on P5 machines – P510, if I’m not mistaken – using a FAStT200 (single port) for shared storage (a single logical drive of small size, about 10GB), with Storage Manager 8.2 and the appropriate firmware.