Posts Tagged ‘XML’

Preparing your own autoyast with custom Suse CD

Friday, August 17th, 2007

Suse, for some reason, has to be over-complicated. I don’t really know why, but the time required to perform simple tasks under Suse is longer than on any other Linux distro, and is comparable only to the legacy Unix systems I am familiar with.

When it comes to more complicated tasks, it gets even worse.

I have created an autoinst.xml today. Generally speaking, it installs a SLES10.1 system from scratch. Luckily, I was able to test it in a networked environment, so I spared the environment a bit by not burning tons of CDs.

Attached is my autoinst.xml. Notice that the root user has the password 123456, and that this file is based on a rather default selection.

More interesting, though, is my usage of the <ask> directives, which allow me to prompt for a manual IP address, netmask, gateway, etc. during the 2nd phase of the installation. sles10.1-autoinst.xml
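For reference, an <ask> entry in autoinst.xml looks roughly like this. This is a hedged sketch from memory – the element names and the exact path format are worth double-checking against the AutoYaST documentation, and the question text and default value here are made up:

```xml
<general>
  <ask-list config:type="list">
    <!-- Prompt for the IP address during the 2nd ("cont") phase -->
    <ask>
      <question>IP address for this host</question>
      <path>networking,interfaces,0,ipaddr</path>
      <stage>cont</stage>
      <default>192.168.0.10</default>
    </ask>
  </ask-list>
</general>
```

Similar entries can prompt for the netmask and gateway by pointing <path> at the matching profile elements.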

This is only a small part. Assuming you want to ship this autoinst.xml with your Suse CDs, as a stand-alone distribution, you need to do the following:

1. Mount as loop the first CD:

mount -o loop /home/ezaton/ISO/SLES10-SP1-CD1.iso /mnt/temp

2. For quicker response, if you have the required RAM, you can create a ramdisk. It will surely work faster:

mkdir /mnt/ram

mount -t tmpfs -o size=700M none /mnt/ram

Note that tmpfs honors the size= limit, while plain ramfs silently ignores it and can grow without bound.

3. Copy the data from the CD to the ramdisk:

cp -R /mnt/temp/* /mnt/ram/

4. Add your autoinst.xml to the new cd root:

cp autoinst.xml /mnt/ram/

5. Edit the required isolinux.cfg parameters. On Suse10 it resides in the new <CD-ROOT>/boot/i386/loader/isolinux.cfg. In our case, CD-ROOT is /mnt/ram

Add the following text to the "linux" append line:

autoyast=default install=cdrom
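If you prefer to script this edit instead of opening the file by hand, a sed one-liner should do. The sketch below runs against a minimal made-up sample rather than a real isolinux.cfg; on the real CD tree the target is /mnt/ram/boot/i386/loader/isolinux.cfg:

```shell
# Demonstrate the edit on a throw-away sample isolinux.cfg.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
default linux
label linux
  kernel linux
  append initrd=initrd splash=silent showopts
EOF
# Append the autoyast parameters to every "append" line:
sed -i 's/^\(\s*append .*\)$/\1 autoyast=default install=cdrom/' "$cfg"
grep append "$cfg"
rm -f "$cfg"
```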

6. Generate a new ISO:

cd /mnt/ram

mkisofs -o /tmp/new-SLES10.1-CD1.iso -b boot/i386/loader/isolinux.bin -c boot/i386/loader/boot.cat -r -T -J -pad -no-emul-boot -boot-load-size 4 -boot-info-table /mnt/ram

7. When done, burn your new CD, and boot from it.

8. If everything is ok, umount /mnt/ram and /mnt/temp, and you’re done.

Note – It is very important to use the Rock Ridge and Joliet extensions on the new CD, or else files will be in 8.3 format, which will not allow installation of SLES.
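The whole procedure can be wrapped in one small function. This is only a sketch – the paths and ISO names are the examples from this post, it assumes tmpfs for the ramdisk, and it needs root – but it keeps the steps in one place:

```shell
#!/bin/bash
# Sketch of steps 1-6 above in one function (run as root).
# Paths and ISO names are this post's examples; adjust for your site.
build_autoyast_cd() {
    set -e
    local iso="$1"   # source ISO, e.g. /home/ezaton/ISO/SLES10-SP1-CD1.iso
    local xml="$2"   # your autoinst.xml
    local out="$3"   # output ISO, e.g. /tmp/new-SLES10.1-CD1.iso

    mkdir -p /mnt/temp /mnt/ram
    mount -o loop "$iso" /mnt/temp                # 1. mount the CD
    mount -t tmpfs -o size=700M none /mnt/ram     # 2. create the ramdisk
    cp -R /mnt/temp/* /mnt/ram/                   # 3. copy the CD tree
    cp "$xml" /mnt/ram/autoinst.xml               # 4. drop in the answer file
    # 5. add the boot parameters to the append line
    sed -i 's/^\(\s*append .*\)$/\1 autoyast=default install=cdrom/' \
        /mnt/ram/boot/i386/loader/isolinux.cfg
    # 6. rebuild the ISO (Rock Ridge + Joliet, per the note above)
    mkisofs -o "$out" -b boot/i386/loader/isolinux.bin \
        -c boot/i386/loader/boot.cat -r -T -J -pad \
        -no-emul-boot -boot-load-size 4 -boot-info-table /mnt/ram
    umount /mnt/ram /mnt/temp
}
```

Source it and call, for example, `build_autoyast_cd SLES10-SP1-CD1.iso autoinst.xml /tmp/new-SLES10.1-CD1.iso`.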

VMware Fencing in RedHat Cluster 5 (RHCS5)

Thursday, June 14th, 2007

Cluster fencing – Contrary to common belief, high availability is not the highest priority of a high-availability cluster, but only the 2nd one. The highest priority of a high-availability cluster is maintaining data integrity, by preventing multiple nodes from concurrently accessing the shared disk.

On different clusters, depending on the vendor, this can be achieved by different methods: by preventing access based on the status of the cluster (for example, Microsoft Cluster, which will not allow access to the disks without cluster management and coordination), by panicking the node in question (Oracle RAC, for example, or IBM HACMP), or by preventing failover unless the status of the other node, as well as all heartbeat links, was ok up to the exact moment of failure (VCS, for example).

Another method is based on a fence, or “Shoot The Other Node In The Head”. This “fence” is usually based on a hardware device which has no dependency on the node’s OS, and is capable of shutting it down, often brutally, upon request. A good fencing device can be a UPS which supplies the other node. The whole idea is that in a case of uncertainty, either one of the nodes can attempt to ‘kill’ the other node, independently of any connectivity issue one of them might experience. The result of this race is quite obvious: one node remains alive, capable of taking over the resource groups, and the other node is off, unable to access the disk in an uncontrolled manner.

Linux-based clusters will not force you to use fencing of any sort; however, for production environments, setups without any fencing device will be unsupported, as the cluster cannot handle cases of split-brain or uncertainty. These hardware devices – which can be, as said before, a manageable UPS, a remote-control power switch, the server’s own IPMI (or any other independent system such as HP iLO, IBM HMC, etc.), or even the fiber switch, as long as it can prevent the node in question from accessing the disks – are quite expensive, but compared to hours of restore-from-backup, they surely justify their price.

On many sites there is a demand for a “test” setup which will be as similar to the production setup as possible. This test setup can be used to test upgrades, configuration changes, etc. Using fencing in this environment is important, for two reasons:

1. Simulation of the production system behavior is achieved with as similar setup as possible, and fencing takes an important part in the cluster and its logic.

2. A replicated production environment contains data which might have some importance, and even if not, re-replicating it from the production environment after a case of uncontrolled access to the disk by a faulty node (and this test cluster is at higher risk, as defined by its role), or restoring from tapes, is unpleasant and time consuming.

So we agree that the test cluster should have some sort of fencing device, even if not similar to the production one, for the sake of the cluster logic.

On some sites, there is a demand for more than one test environment. Both setups – a single test environment and multiple test environments – can be defined to work as guests on a virtual server. Virtualization assists in saving hardware (and power, and cooling) costs, and allows for easy duplication and replication, so this is a case where it is ideal for the task. That said, it brings up a problem – fencing a virtual server has implications – we can kill all guest systems in one go. We wouldn’t want that to happen. Lucky for us, RedHat Cluster has a fencing device for VMware which, although not recommended in a production environment, will suffice for a test environment. These are the steps required to set up one such VMware fencing device in RHCS5:

1. Download the latest CVS fence_vmware from here. You can use this direct link (use with “save target as”). Save it in your /sbin directory under the name fence_vmware, and give it execution permissions.

2. Edit fence_vmware. In line 249 change the string “port” to “vmname”.
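The same edit can be done with sed. The sample line below is made up purely to demonstrate the substitution – on a real node the target file is /sbin/fence_vmware and the line number is 249, as above:

```shell
# Demonstrate the substitution on a throw-away sample file.
f=$(mktemp)
printf '%s\n' 'my $opt = "port";' > "$f"
# On the real agent this would be: sed -i '249s/"port"/"vmname"/' /sbin/fence_vmware
sed -i '1s/"port"/"vmname"/' "$f"
cat "$f"
rm -f "$f"
```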

3. Install VMware Perl API on both cluster nodes. You will need to have gcc and openssl-devel installed on your system to be able to do so.

4. Change your fencing based on this example:

<?xml version="1.0"?>
<cluster alias="Gfs-test" config_version="39" name="Gfs-test">
        <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
        <clusternodes>
                <clusternode name="cent2" nodeid="1" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="man2"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="cent1" nodeid="2" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="man1"/>
                                </method>
                                <method name="2">
                                        <device domain="22" name="11"/>
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <cman expected_votes="1" two_node="1"/>
        <fencedevices>
                <fencedevice agent="fence_vmware" name="man2"
                        ipaddr="" login="user" passwd="password"/>
                <fencedevice agent="fence_vmware" name="man1"
                        ipaddr="" login="user" passwd="password"/>
        </fencedevices>
        <rm>
                <resources>
                        <fs device="/dev/sda" force_fsck="0" force_unmount="0"
                                fsid="5" fstype="ext3" mountpoint="/data"
                                name="sda" options="" self_fence="0"/>
                </resources>
                <service autostart="1" name="smartd">
                        <ip address="" monitor_link="1"/>
                </service>
                <service autostart="1" name="disk1">
                        <fs ref="sda"/>
                </service>
        </rm>
</cluster>

Change to your relevant VMware username and password.
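Before trusting the cluster with it, you can check the agent by hand by feeding it options on stdin, the way fenced does – old RHCS fence agents read key=value pairs, one per line. The exact keys here (vmname included, per the edit above) are my assumption; check the agent's own usage output. All values are placeholders:

```shell
# Build the stdin payload a fence agent expects (key=value, one per line).
# Every value below is a placeholder.
payload='ipaddr=my-esx-host
login=user
passwd=password
vmname=my-test-vm
option=status'
printf '%s\n' "$payload"
# On a cluster node with the agent installed, pipe it in:
# printf '%s\n' "$payload" | /sbin/fence_vmware
```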

If you have a Centos system, you will be required to perform these three steps:

1. ln -s /usr/sbin/cman_tool /sbin/cman_tool

2. cp /etc/redhat-release /etc/redhat-release.orig

3. echo "Red Hat Enterprise Linux Server release 5 (Tikanga)" > /etc/redhat-release
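The three steps above, wrapped into a function so they can be sourced and re-run safely (run as root on the CentOS node):

```shell
# Sketch: make a CentOS node pass RedHat Cluster's RHEL checks.
centos_as_rhel5() {
    set -e
    # 1. RedHat Cluster expects cman_tool under /sbin
    ln -sf /usr/sbin/cman_tool /sbin/cman_tool
    # 2. Keep the original release file around (-n: don't clobber a backup)
    cp -n /etc/redhat-release /etc/redhat-release.orig
    # 3. Masquerade as RHEL5 so the version check passes
    echo "Red Hat Enterprise Linux Server release 5 (Tikanga)" > /etc/redhat-release
}
```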

This should do the trick. Good luck, and thanks again to Yoni who brought and fought the configuration steps.


Per comments (and, a bit late, common logic) I have broken lines in the XML quote for cluster.conf. In case these line breaks break something in RedHat Cluster, I have added the original xml file here: cluster.conf

More on the Nabaztag/tag

Wednesday, June 13th, 2007

Actually, this post has become less of the non-technical type and more of the technical type; however, for the sake of the cute little Nabaztag (you can send me messages too! Go here and send a message to “fatutchi”!), I keep it in this category as well.

Today is a busy day, so I’ll have several posts.

This one will deal with the Nabaztag/tag. I have extended the PHP form/script offered in my previous post to allow for selecting multiple Nabaztags. I have also added reading the ears’ status and parsing the XML returned by the Nabaztag API site.

This is an ugly script, but it works. As said before – if you see fit to extend it or add features, please do so. Attached here: nabiV2.php.txt

I have noticed Violet had several issues with their site. I must confess that I have expected more from their site. As I’ve been involved as a consultant in several large-scale setups which sustained several tens of thousands (and more) of connections per second, I know that, usually, the main performance hog is an inefficient application design. It could be that Violet’s problems simply point at low-quality server-side software. Pity. I hope it will get better.