Posts Tagged ‘RedHat Cluster’

Protect VMware guest under RedHat Cluster

Monday, November 17th, 2008

Most documentation on the net is about how to run a cluster-in-a-box under VMware. Very few seem to care about protecting VMware guests under a real RedHat Cluster with shared storage.

This article is about just that. While I would not recommend running VMware in such a setup, that was the case here, and the VMware guest actually resides on the shared storage. Relocating it was out of the question, so migrating it together with the other cluster resources was the only valid option.

To do so, I have created a simple script which accepts start/stop/status arguments. The VMware guest VMX is hard-coded into the script, but in an easy-to-change format. The script attempts to suspend the VMware guest first, and only if that times out does it shut the guest down. Mind you that the blog's HTML formatting might alter the quotation marks into UTF-8 curly quotes, which the shell will not understand.

#!/bin/bash
# This script will start/stop/status VMware machine
# Written by Ez-Aton
# http://www.tournament.org.il/run

# Hardcoded. Change to match your own settings!
VMWARE="/export/vmware/hosts/Windows_XP_Professional/Windows XP Professional.vmx"
VMRUN="/usr/bin/vmrun"
TIMEOUT=60

function status () {
  # This function will return success if the VM is up
  $VMRUN list | grep "$VMWARE" &>/dev/null
  if [[ "$?" -eq "0" ]]
  then
    echo "VM is up"
    return 0
  else
    echo "VM is down"
    return 1
  fi
}

function start () {
  # This function will start the VM
  $VMRUN start "$VMWARE"
  if [[ "$?" -eq "0" ]]
  then
    echo "VM is starting"
    return 0
  else
    echo "VM failed"
    return 1
  fi
}

function stop () {
  # This function will suspend the VM, and hard-stop it if the suspend does not finish in time
  $VMRUN suspend "$VMWARE"
  for i in `seq 1 $TIMEOUT`
  do
    if ! status > /dev/null
    then
      echo "VM Stopped"
      return 0
    fi
    sleep 1
  done
  # Suspend did not complete within $TIMEOUT seconds - fall back to a soft power-off
  $VMRUN stop "$VMWARE" soft
}

case "$1" in
start)     start
        ;;
stop)      stop
        ;;
status)   status
        ;;
esac
RET=$?

exit $RET

Since the formatting is killed by the blog, you can find the script here: vmware1

I intend to build a "real" RedHat Cluster agent script, but this should do for the time being.
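For reference, this is roughly how such a script could be wired into cluster.conf as a plain script resource. This is only a sketch: the script path, resource name, domain and service name below are illustrative, not taken from the actual setup. The script above already returns the start/stop/status exit codes a script resource expects.

<resources>
        <script file="/usr/local/bin/vmware_guest.sh" name="vmware_guest"/>
</resources>
<service autostart="1" domain="cluster_domain" name="vmware_srv">
        <script ref="vmware_guest"/>
</service>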

Enjoy!

RedHat 4 working cluster (on VMware) config

Sunday, November 11th, 2007

I have been struggling with RedHat Cluster 4 with a VMware fencing device. This was also good experience with qdiskd, the disk quorum daemon, and its directives and usage. I have several conclusions from this experience. First, the configuration, as is:

<?xml version="1.0"?>
<cluster alias="alpha_cluster" config_version="17" name="alpha_cluster">
        <quorumd interval="1" label="Qdisk1" min_score="3" tko="10" votes="3">
                <heuristic interval="2" program="ping vm-server -c1 -t1" score="10"/>
        </quorumd>
        <fence_daemon post_fail_delay="0" post_join_delay="3"/>
        <clusternodes>
                <clusternode name="clusnode1" nodeid="1" votes="1">
                        <multicast addr="224.0.0.10" interface="eth0"/>
                        <fence>
                                <method name="1">
                                        <device name="vmware"
                                                port="/vmware/CLUSTER/Node1/Node1.vmx"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="clusnode2" nodeid="2" votes="1">
                        <multicast addr="224.0.0.10" interface="eth0"/>
                        <fence>
                                <method name="1">
                                        <device name="vmware"
                                                port="/vmware/CLUSTER/Node2/Node2.vmx"/>
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <cman>
                <multicast addr="224.0.0.10"/>
        </cman>
        <fencedevices>
                <fencedevice agent="fence_vmware" ipaddr="vm-server" login="cluster"
                        name="vmware" passwd="clusterpwd"/>
        </fencedevices>
        <rm>
                <failoverdomains>
                        <failoverdomain name="cluster_domain" ordered="1" restricted="1">
                                <failoverdomainnode name="clusnode1" priority="1"/>
                                <failoverdomainnode name="clusnode2" priority="1"/>
                        </failoverdomain>
                </failoverdomains>
                <resources>
                        <fs device="/dev/sdb2" force_fsck="1" force_unmount="1" fsid="62307"
                                fstype="ext3" mountpoint="/mnt/sdb1" name="data"
                                options="" self_fence="1"/>
                        <ip address="10.100.1.8" monitor_link="1"/>
                        <script file="/usr/local/script.sh" name="My_Script"/>
                </resources>
                <service autostart="1" domain="cluster_domain" name="Test_srv">
                        <fs ref="data">
                                <ip ref="10.100.1.8">
                                        <script ref="My_Script"/>
                                </ip>
                        </fs>
                </service>
        </rm>
</cluster>

Several notes:

  1. You should run mkqdisk -c /dev/sdb1 -l Qdisk1 (or whatever device your quorum disk is on)
  2. qdiskd should be added to the chkconfig db (chkconfig --add qdiskd)
  3. qdiskd's start order should be changed from 22 to 20, so that it precedes cman
  4. Changes to fence_vmware according to the past directives, including Yoni's comment for RH4
  5. Changes in structure: instead of using two fence devices, I use only one fence device but with different "ports". A port is translated to "-n" in fence_vmware, just as it is translated to "-n" in fence_brocade; fenced does the translation
  6. lock_gulmd should be turned off using chkconfig (a command summary for notes 1, 2, 3 and 6 follows this list)
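A rough shell summary of notes 1, 2, 3 and 6. This is a sketch: the quorum device is the one from note 1, and changing the start order is shown here in one possible way (editing the init script's chkconfig header and re-registering the service).

# Note 1: create and label the quorum disk (use your own device)
mkqdisk -c /dev/sdb1 -l Qdisk1
# Note 2: register qdiskd with chkconfig
chkconfig --add qdiskd
# Note 3: one way to move qdiskd from S22 to S20 so it starts before cman -
# edit the "# chkconfig:" header line in /etc/init.d/qdiskd (22 -> 20),
# then re-register the service:
chkconfig --del qdiskd
chkconfig --add qdiskd
# Note 6: make sure lock_gulmd does not start
chkconfig lock_gulmd off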

A little about changing the configuration version from the command line:

When you update the cluster.conf file, it is not enough to update ccsd using "ccs_tool update /etc/cluster/cluster.conf"; you also need to understand that cman still holds the older version. Using "cman_tool version -r <new version>", you can force it to accept the new version, so that other nodes can join after a reboot when they are using the latest config version. If you fail to do this, other nodes might be rejected.
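A short example of the sequence, assuming you bumped config_version in cluster.conf from 17 to 18 (18 is only this example's number):

# Push the edited cluster.conf to the other nodes via ccsd
ccs_tool update /etc/cluster/cluster.conf
# Tell cman about the new configuration version
cman_tool version -r 18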

I will add additional information as I move along.

VMware Fencing in RedHat Cluster 5 (RHCS5)

Thursday, June 14th, 2007

Cluster fencing: contrary to common belief, high availability is not the highest priority of a high-availability cluster; it is only the second one. The highest priority of a high-availability cluster is maintaining data integrity, by preventing multiple nodes from accessing the shared disk concurrently and without coordination.

Different clusters, depending on the vendor, achieve this by different methods: by preventing access based on the status of the cluster (for example, Microsoft Cluster, which will not allow access to the disks without cluster management and coordination), by panicking the node in question (Oracle RAC, for example, or IBM HACMP), or by preventing failover unless the status of the other node, as well as all heartbeat links, was OK up to the exact moment of failure (VCS, for example).

Another method is based on a fence, or "Shoot The Other Node In The Head". This "fence" is usually a hardware device which does not depend on the node's OS and is capable of shutting the node down, often brutally, upon request. A good fencing device can be the UPS that powers the other node. The whole idea is that in a case of uncertainty, either node can attempt to 'kill' the other node, independently of any connectivity issue one of them might experience. The result of this race is quite predictable: one node remains alive, capable of taking over the resource groups, while the other node is off, unable to access the disk in an uncontrolled manner.

Linux-based clusters will not force you to use fencing of any sort; however, for production environments, setups without any fencing device are unsupported, as the cluster cannot handle cases of split-brain or uncertainty. These hardware devices (which can be, as said before, a manageable UPS, a remote-controlled power switch, the server's own IPMI or any other independent system such as HP iLO or IBM HMC, and even the fibre switch, as long as it can prevent the node in question from accessing the disks) are quite expensive, but compared to hours of restore-from-backup, they surely justify their price.

On many sites there is a demand for a "test" setup which is as similar to the production setup as possible. This test setup can be used to test upgrades, configuration changes, etc. Using fencing in this environment is important, for two reasons:

1. The behavior of the production system is simulated with a setup as similar as possible, and fencing is an important part of the cluster and its logic.

2. A replicated production environment contains data which might have some importance, and even if it does not, re-replicating it from the production environment after a case of uncontrolled disk access by a faulty node (and this test cluster is at higher risk, by the very definition of its role), or restoring it from tapes, is unpleasant and time consuming.

So we agree that the test cluster should have some sort of fencing device, even if it is not similar to the production one, for the sake of the cluster logic.

On some sites, there is a demand for more than one test environment. Both setups, a single test environment and multiple test environments, can be defined to run as guests on a virtual server. Virtualization helps save hardware (and power, and cooling) costs, and allows for easy duplication and replication, so this is a case where it is ideal for the task. That said, it brings up a problem: fencing a virtual server has implications, as we could kill all guest systems in one go. We wouldn't want that to happen. Luckily for us, RedHat Cluster has a fencing agent for VMware which, although not recommended for a production environment, will suffice for a test environment. These are the steps required to set up one such VMware fencing device in RHCS5:

1. Download the latest CVS fence_vmware from here. You can use this direct link (use with “save target as”). Save it in your /sbin directory under the name fence_vmware, and give it execution permissions.

2. Edit fence_vmware. In line 249 change the string “port” to “vmname”.

3. Install the VMware Perl API on both cluster nodes. You will need gcc and openssl-devel installed on your system to be able to do so. (A short command sketch for steps 1-3 appears right after the example configuration below.)

4. Change your fencing based on this example:

<?xml version="1.0"?>
<cluster alias="Gfs-test" config_version="39" name="Gfs-test">
        <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
        <clusternodes>
                <clusternode name="cent2" nodeid="1" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="man2"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="cent1" nodeid="2" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="man1"/>
                                </method>
                                <method name="2">
                                        <device domain="22 " name="11 "/>
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <cman expected_votes="1" two_node="1"/>
        <fencedevices>
                <fencedevice agent="fence_vmware" name="man2"
                          ipaddr="192.168.88.1" login="user" passwd="password"
                          vmname="c:vmwarevirt2rhel5.vmx"/>
                <fencedevice agent="fence_vmware" name="man1"
                          ipaddr="192.168.88.1" login="user" passwd="password"
                          vmname="c:vmwarevirt1rhel5.vmx"/>
        </fencedevices>
        <rm>
                <failoverdomains/>
                <resources>
                        <fs device="/dev/sda" force_fsck="0" force_unmount="0"
				fsid="5" fstype="ext3" mountpoint="/data"
                                name="sda" options="" self_fence="0"/>
                </resources>
                <service autostart="1" name="smartd">
                        <ip address="192.168.88.201" monitor_link="1"/>
                </service>
                <service autostart="1" name="disk1">
                        <fs ref="sda"/>
                </service>
        </rm>
</cluster>

Change to your relevant VMware username and password.
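For steps 1 through 3, the shell work would look roughly like the sketch below. This is only an outline: it assumes the downloaded fence_vmware file sits in the current directory, and the sed edit assumes the "port" string really is on line 249 of the version you downloaded, so check before running it.

# Step 1: place the downloaded fence_vmware agent under /sbin and make it executable
cp fence_vmware /sbin/fence_vmware
chmod +x /sbin/fence_vmware
# Step 2: replace "port" with "vmname" on line 249
sed -i '249s/port/vmname/' /sbin/fence_vmware
# Step 3: prerequisites for building the VMware Perl API
yum install -y gcc openssl-devel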

If you have a CentOS system, you will be required to perform these three steps:

1. ln -s /usr/sbin/cman_tool /sbin/cman_tool

2. cp /etc/redhat-release /etc/redhat-release.orig

3. echo "Red Hat Enterprise Linux Server release 5 (Tikanga)" > /etc/redhat-release

This should do the trick. Good luck, and thanks again to Yoni, who brought (and fought through) the configuration steps.

***UPDATE***

Per the comments (and, a bit late, common logic), I have broken the long lines in the XML quoted above for cluster.conf. In case these line breaks break something in RedHat Cluster, I have added the original XML file here: cluster.conf

RedHat Cluster, and some more

Sunday, February 12th, 2006

It's been a long while since I've written. Once in a while I get a period of time dedicated to laziness, and I've had just such a period for the last few weeks, during which I've been almost completely idle. Usually, waking up from such idle time leads to self-study and hard work, so I don't fight my idle periods too hard. This time, I've had the pleasure of testing and playing, for personal reasons, both with VMware GSX, in a "Cluster-in-a-Box" setup, based on a recommendation regarding MSCS, altered for Linux (and later, Veritas Cluster Server), and with RedHat Cluster Server, with the notion of playing with RedHat's GFS, though regrettably I never got to the latter.

First, VMware. In their latest rivalry with Microsoft over the virtualization of servers and desktops, MS has gained an advantage lately. Due to the lower price of "Virtual Server 2005" compared with "VMware GSX Server", and due to their excellent marketing machine (from which we should all learn, if I may say so!), quite a few servers and virtual server farms, especially the ones running Windows/Windows setups, have moved to this MS solution, which is about as capable as VMware GSX Server. Judging by the history of such rivalries, MS would have won. They always have. However, VMware, in an excellent move, has announced that the next generation of GSX, simply called "Server", will be free. Free for everyone. By this they probably mean to invest more in their more robust ESX server, and give GSX away as a taste of their abilities. Since MS does not have any product more advanced than their Virtual Server, it could be a death blow to their effort in this direction. It could even mean they will just give their product away! While this plays out, we, the customers, gain a selection of free, advanced and reliable products designed for virtualization. Could it be any better than that?

One more advantage of this "virtualization for the people" is that community-built virtual images, even of the most complicated-to-install setups, can and will become widely available, shortening installation time and giving everyone a quickly working system. It will require, however, better knowledge and understanding of the products themselves, as merely installing them will not be enough. To survive the future market, you won't be able to just sell an installation of a product; you will have to be able to support an out-of-the-box setup of it. That's for the freelancers, and the partial freelancers, among us…

So, I've reinstalled my GSX and started playing with it. The original goal was to run a working setup of RHEL, VCS and Oracle 10g. Unfortunately, VCS supports only RH3 (update 2?), and not RH4, which was a shame. At that point, I considered using RH Cluster Server for the task at hand. It grew into the task of learning this cluster server and nothing more, which I did, and I can and will share my impressions of it here.

First, names. I've had the pleasure of working with numerous cluster solutions. Every time I get to play with another cluster solution, I am thrilled anew by the naming conventions and the name changes vendors make just to keep themselves unique. I hate it. So here's a little explanation:
All clusters contain groups of resources (Resource Group, as most vendors call them). Such a group contains a set of resources and, in some cases, relations (order of startup, dependencies, etc). Each resource can be any single element required for an application. For example, a resource could be an IP address, without which you won't be able to contact the application. A resource could be a disk device containing the application's data. It could be an application start/stop script, and it could be a sub-application: an application required for the whole group to be up, such as a DB for a DB-driven web server. The order in which you would ask them to start, in our case, would be IP, disk, DB, web server. You'd ask for the IP to be brought up first because some cluster servers can trick IP-based clients into a short delay, so the client hardly feels the brief downtime of an application failover. But that is for later.

So, in a resource group, we have resources. If we can separate resources into different groups, when they have no required dependency between them, it is always better to do so. In our previous example, let's say our web server uses the DB but contacts it using an IP address or a hostname. In this case, we don't need the DB to run on the same physical machine the web server is running on, and assuming the physical disk holding the DB and the one holding the rest of the web application are not the same disk, we could separate them.

The idea, if I can try to sum it up, is to split your application into the smallest self-maintained structures. Each structure is called a resource group, and each component in such a structure is a resource. On some cluster servers one can group resource groups and set dependencies between them, which allows for even more scalability, but that is not our case.

So we have resource groups containing resources. Each computer that is a member of the cluster is called a node. Now, let's assume our cluster contains three nodes, but we want our application (our resource group) to be able to run on only two specific ones. In this case, we need to define, for our resource group, which nodes it is associated with. In RH Cluster Server, a thing called a "Domain" is designed for this. A Domain contains a list of nodes. A Domain can be associated with a resource group, and thus set the failover priority and the set of nodes allowed to deal with that resource group.
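A hedged illustration, in the cluster.conf syntax used in the configurations earlier on this page (the node, domain and service names and the IP address here are made up): a domain that restricts a service to two of the three nodes, with a failover order between them.

<failoverdomains>
        <failoverdomain name="web_domain" ordered="1" restricted="1">
                <failoverdomainnode name="node1" priority="1"/>
                <failoverdomainnode name="node2" priority="2"/>
        </failoverdomain>
</failoverdomains>
<service autostart="1" domain="web_domain" name="web_srv">
        <ip address="192.168.0.10" monitor_link="1"/>
</service>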

All clusters have a single point of error (as opposed to a single point of failure). The whole purpose of the cluster is to give a non-cluster-aware application the high availability you could expect, for a (relatively) low price. We're great: we know how to bring an application up, and we know how to bring it down. We can assume the other node(s) is/are down, but we cannot be sure of it. We try. We require a few separate means of communication, so that a single link failure won't cause us to corrupt our shared volumes (by multiple nodes accessing them at once). We set up a whole system of logic, a heartbeat, you name it, to avoid, at almost all cost, a state of split-brain: two cluster nodes believing each is the only one up. You can guess what that means, right?

In RH there is a heartbeat, sure. However, it is based on bonding when there is more than one NIC, not on separate infrastructures. It is a simple network-based heartbeat, with nothing special about it. In case of loss of connection, it will reset the unresponsive node, if it sees fit, using a mechanism they call a "fence". A fence is a system by which the cluster can *know* for sure (or almost for sure) that a node is down, or by which the cluster can physically take a node down (power it off if needed), such as control of the UPS the node is connected to, or its power switch, or an alternate monitoring infrastructure such as the Fibre Channel switch, etc. In such an event, the cluster can know for sure, or at least assume, that the hung node has been reset, or it can force it to reset, to release some hung application.

Naming: a resource group is called a Service. A resource remains a resource, but an application resource *must* be defined by an rc-like script which accepts start/stop (/restart?). Nothing complicated about it, really. The service contains all the required resources.
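A minimal sketch of such an rc-like script (the wrapped init script path is illustrative):

#!/bin/bash
# Minimal rc-like wrapper the cluster can use as a script resource
APP=/etc/init.d/httpd

case "$1" in
  start)
    $APP start
    ;;
  stop)
    $APP stop
    ;;
  restart)
    $APP restart
    ;;
  status)
    $APP status
    ;;
  *)
    echo "Usage: $0 {start|stop|restart|status}"
    exit 1
    ;;
esac
exit $?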

I was not happy with the cluster, if I can sum up my issues with it. It monitored machines (nodes) correctly, but in the simple enough example I chose to set up, using Apache as a resource (only afterwards did I notice it is the very example RedHat uses in their documentation), it failed miserably to take the correct action when the application failed (as opposed to a failure of a node). I defined my "Service" to contain the following three items (sketched as a cluster.conf fragment right after this list):

1) IP Address – Unique for my testing purposes.

2) Shared partition (in my case, and thanks to VMware, /dev/sdb1, mounted at /var/www/html)

3) The Apache application – “/etc/init.d/httpd”
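In cluster.conf terms, this looks roughly like the fragment below. It is a reconstruction from the three items above, not the original file: the IP address is made up and the filesystem type is assumed to be ext3.

<resources>
        <ip address="192.168.0.50" monitor_link="1"/>
        <fs device="/dev/sdb1" fstype="ext3" mountpoint="/var/www/html" name="webdata"/>
        <script file="/etc/init.d/httpd" name="httpd"/>
</resources>
<service autostart="1" name="web_test">
        <ip ref="192.168.0.50"/>
        <fs ref="webdata"/>
        <script ref="httpd"/>
</service>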

All in all, it was brought up correctly, and switch-over went just fine, including in cases of correct and incorrect resets of the active/passive node. However, when I killed my Apache (killall httpd), the cluster detected the failure in the application but was helpless with it. I was unable to bring down the "Service", as it failed to turn off Apache (duh!), so it released neither the IP address nor the shared volume. As a result, I had to restart the rgmanager service on both nodes, after manually removing the remains of the "Service". I didn't like it. I expect the cluster to notice a failure in the application, which it did, but then I expect it either to try to restart the application (/etc/init.d/httpd stop && /etc/init.d/httpd start) before declaring complete failure, or to mark it as down, remove the remains of the "Service" from the node in question (release the IP address and the shared storage), and try to bring it up on the other node(s). It did nothing of the sort. It just failed, completely, and required manual intervention.

I expect an HA cluster to be able to react to an application or resource failure, and not just to a node failure. Since HA clusters are meant for the non-ideal world, a place where computers crash, where hardware failures occur, and where applications just die while servers remain working, I expect the cluster server to be able to handle the full variety of problems, but maybe I was expecting too much. I believe it will be better in future versions, and I believe it could have been done quite easily right now, since detection of the failed application did occur, but it's not for me to define the cluster's abilities. This cluster is not mature enough for real-life production sites, if only because of its failure to react correctly to a resource failure without demanding manual intervention. A year from now, I'll probably recommend it as a cheap and reliable solution for most common HA-related tasks, but not today.

That leaves me with VCS and Oracle, which I'll deal with in the future, whether I like it or not 🙂