Running a virtual machine, KVM-based in this case, under RHCS is acceptable and reasonable. The interesting part is that the <vm/> directive replaces the <service/> directive and acts as a high-level directive for VMs. This allows for things which cannot be performed with a regular <service/>, such as live migration. There are probably more, but that is not the current issue.
An example of how it can be done is shown in this excellent explanation. You can grab whatever parts of it are relevant to you, as it is an excellent combination of DRBD, CLVM, GFS and, of course, KVM-based VMs.
That whole guide assumes the VMs reside on shared storage which is concurrently accessible by both (or all) hosts. This is not always the case: when the filesystem holding the virtual disk image is ext3/4 and not GFS, only one node can mount it at a time. In such a case you would want to tie the VM to the mount. This cannot be done, however, when <vm/> is used as a top-level directive (like <service/>), because it does not allow for child resources.
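For contrast, when the storage really is shared (for example, the images directory sits on GFS and is mounted on all nodes), <vm/> can stand on its own as a top-level directive, which is what makes live migration possible. A minimal sketch; the failover domain name, attribute values and file paths here are illustrative assumptions, not taken from a real cluster.conf:

```xml
<!-- A top-level <vm/> entry, used in place of <service/> inside the <rm> section.
     It cannot hold child resources, but it does support live migration.
     Domain name, paths and values below are assumptions for illustration. -->
<vm autostart="1" domain="vm1_domain" migrate="live" name="vm1"
    recovery="restart" use_virsh="1" xmlfile="/images/vm1.xml"/>
```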
Since the <vm/> directive can be defined (with some limitations) as a child resource inside a <service/> group, it inherits some properties from its parent (the <service/> directive), while other properties are no longer mandatory and will be ignored. A sample configuration would be this:
<service autostart="1" domain="vm1_domain" max_restarts="2" name="vm1" recovery="restart">
	<fs device="/dev/mapper/mpathap1" force_fsck="1" force_unmount="1" fstype="ext4" mountpoint="/images" name="vmfs" self_fence="0"/>
	<vm migrate="pause" name="vm1" restart_expire_time="600" use_virsh="1" xmlfile="/images/vm1.xml"/>
</service>
This would do the trick. However, the VM will not be able to live migrate; it will have to shut down and start up again on each cluster takeover, since the underlying ext4 filesystem can only be mounted on one node at a time.
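For completeness, the domain="vm1_domain" attribute in the service refers to a failover domain that has to be defined elsewhere in cluster.conf. A sketch of such a definition, where the node names and priorities are illustrative assumptions:

```xml
<!-- Failover domain referenced by the service; node names are assumptions -->
<failoverdomains>
	<failoverdomain name="vm1_domain" ordered="1" restricted="0">
		<failoverdomainnode name="node1" priority="1"/>
		<failoverdomainnode name="node2" priority="2"/>
	</failoverdomain>
</failoverdomains>
```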