Archive for May, 2010

Don’t try it at home – NetApp SnapMirror network-free Seeding

Wednesday, May 26th, 2010

Well, this is a rather common scenario: network-free seeding is normally performed using a tape device.
What happens if you do not have a tape device? We have the poor man’s huge-TB disks (SATA for the people) connected to simple PC systems, but no tape…
So we perform network-free seeding to disk instead.

There is a utility called ‘lrep’ which is nice and effective, but it forces the use of Qtree-based SnapMirror (QSM), which has its own limitations. I advise you to read NetApp’s “SnapMirror Async Overview and Best Practices Guide” (TR-3446) for further details.


  • You have enough space on the source NetApp device to contain the volume you want to replicate twice over
  • You can copy the output replica to an external Windows/Linux/other system of a movable type (it could be a desktop), with large disks, using CIFS or NFS
  • High CPU usage is guaranteed, as well as high disk usage
  • You are aware that this is dangerous, and that NetApp will probably not like you even a little bit for doing it


Using the command ‘snapmirror store’ you can store the initial replica on a tape device for later seeding. The options (man pages/Internet) describe how to use your tape device. But you don’t have one, or at least not one at each site.


  • Create a volume, or verify you have enough free space on some existing volume, on the source NetApp device.
  • You will require the exact amount of used space on the source volume, give or take a little.
  • I recommend disabling scheduled snapshots on the target volume for the duration of this operation, to save space.
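As a sketch, the preparation steps above on the source filer look like this (7-Mode CLI; the volume, aggregate, and size are examples):

```
vol create vol1 aggr0 200g     # scratch volume, if you do not have one already
snap sched vol1 0 0 0          # disable scheduled snapshots for the duration
df -h /vol/vol1                # verify the free space
```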

Let’s assume that the source volume name is ‘vol2’, and that we have enough free space on /vol/vol1/free_space.

You need to perform the following:

Dangerous – change your privilege level:

priv set diag

Perform the initial SnapMirror

snapmirror store vol2 /vol/vol1/free_space/vol2

Remember – you have to be in ‘diag’ mode for it to work.

Reduce your privileges to normal:

priv set

Track the status through

snapmirror status

When the operation has completed, connect from your external Windows/Linux/Other machine to /vol/vol1/free_space and copy out the file ‘vol2’ which will probably be huge.
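From the external Linux box, the copy-out over NFS can look like this (host names and target path are examples, and the export must permit your host; run as root):

```
mount -t nfs filer1:/vol/vol1 /mnt
cp /mnt/free_space/vol2 /data/vol2   # this file is the replica image, probably huge
umount /mnt
```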

Transfer the Windows/Linux/other system (or just its disk) to the alternate location, create a volume or verify you have enough free space on an existing volume to contain the entire file, and copy the file to that location. I will assume the path is the same as on the source NetApp device: /vol/vol1/free_space/

Change your privileges level:

priv set diag

Create the target volume, and set it to be restricted (the values here are just an example):

vol create vol2 aggr0 100g

vol restrict vol2

Perform a ‘snapmirror retrieve’ operation:

snapmirror retrieve vol2 /vol/vol1/free_space/vol2

Reduce your privileges to normal:

priv set

You can track the status through

snapmirror status

Following that, perform a ‘snapmirror update’ against the real source path on the NetApp (filer1:/vol/vol2) and you’re fine.
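A sketch of that update, run on the destination filer (filer and volume names are examples; volume SnapMirror addresses the source by plain volume name):

```
snapmirror update -S filer1:vol2 vol2
snapmirror status vol2
```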

Remember – be very very careful when running under ‘diag’ privileges.

Oracle VM post-install check list

Saturday, May 22nd, 2010

Following my experience with OracleVM, I am adding my post-install steps for your pleasure. These steps are not mandatory, by design, but will help you get up and running faster and easier. These steps are relevant to Oracle VM 2.2, but might work for older (and newer) versions as well.

Define bonding

You should read more about it in my past post.

Define storage multipathing

You can read about it here.

Define NTP

Define NTP servers for your Oracle VM host, and make sure the ‘ntpd’ daemon is running. Set the clock right initially with

ntpdate -u <server>

then sync the hardware clock, for good measure:

hwclock --systohc

Make sure NTPD starts on boot:

chkconfig ntpd on
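Put together, the sequence looks like this (the server name is an example; run as root):

```
ntpdate -u 0.pool.ntp.org   # initial time update
hwclock --systohc           # sync the hardware clock
chkconfig ntpd on           # start NTPD on boot
service ntpd start
```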

Install Linux VM

If the system is going to be stand-alone, you might like to run your VM Manager on it (we will deal with its issues later). To do so, you will need to install your own Linux machine, since the Oracle-supplied image fails (or at least, failed for me!) for no apparent reason (a kernel panic, to be exact, on a fully MD5-checked image). You can perform this action from the command line by running the command

virt-install -n linux_machine -r 1024 -p --nographics -l nfs://iso_server:/mount

This command installs a VM called “linux_machine” from the NFS source iso_server:/mount, with 1GB RAM and paravirtualization (the -p flag). You will be asked where to place the VM disk; place it in /OVS/running_pool/linux_machine, accordingly.

It assumes you have DHCP available for the install procedure, etc.

Install Oracle VM Manager on the virtual Linux machine

This should be performed if you choose to manage your VMs from a VM. This is a bit tricky, as you are advised not to do so if you are designing an HA-enabled server pool.

Define autostart to all your VMs

Or, at least, those you want to auto-start. Create a link from /OVS/running_pool/<VM_NAME>/vm.cfg to /etc/xen/auto/

The order in which the ‘ls’ command sees them in /etc/xen/auto/ is the order in which they will be started.
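The link creation can be scripted. Below is a sketch run against a temporary sandbox so it can be tried safely; on a real host you would point POOL and AUTO at /OVS/running_pool and /etc/xen/auto instead:

```shell
#!/bin/bash
# Sandbox stand-ins for /OVS/running_pool and /etc/xen/auto
POOL=$(mktemp -d)
AUTO=$(mktemp -d)
mkdir -p "$POOL/vm01" "$POOL/vm02"
touch "$POOL/vm01/vm.cfg" "$POOL/vm02/vm.cfg"

# Link every VM's vm.cfg into the auto-start directory
for cfg in "$POOL"/*/vm.cfg; do
    name=$(basename "$(dirname "$cfg")")
    ln -s "$cfg" "$AUTO/$name"
done

ls "$AUTO"   # the 'ls' order here is the start order
```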

Disable or relocate auto-suspending

Auto-suspend is cool, but your default Oracle VM installation is short of space under the /var/lib/xen/save/ directory, where persistent memory dumps are kept. On a 16GB RAM system these dumps can get pretty large, far more than the space there can contain.

Either increase the available space (mount something else there, I assume), or edit /etc/sysconfig/xendomains and comment out the line with the XENDOMAINS_SAVE= directive. You could also change the path to somewhere with enough space.

Commenting out this directive will force a regular shutdown of your VMs following a power-off/reboot command to the Oracle VM host.
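For reference, the relevant part of /etc/sysconfig/xendomains would end up as one of the following (the relocation path is an example):

```
# Either disable saving entirely:
#XENDOMAINS_SAVE=/var/lib/xen/save
# ...or relocate it to somewhere with enough space:
XENDOMAINS_SAVE=/storage/xen-save
```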

Make sure auto-start VMs actually start

This is an annoying bug. For auto-start of VMs, you need /OVS up and available. Since it is an OCFS2 file system, mounting it takes a short while (it is performed by ovs-agent).

Since ovs-agent takes a while, we need to implement a startup script that runs after it and before xendomains. Since both are marked “S99” (check /etc/rc3.d/ for details), we will add a script called “sleep”.

The script should be placed in /etc/init.d/

#!/bin/bash
# sleep     Workaround Oracle VM delay issues
# chkconfig: 2345 99 99
# description: Adds a predefined delay to the initialization process

# Delay in seconds - adjust to how long ovs-agent takes to mount /OVS
DELAY=60

case "$1" in
start)
	sleep $DELAY
	;;
esac
exit 0

Place the script as a file called “sleep”, set it to be executable, and then run

chkconfig --add sleep

This will solve VM startup problems.

Fix /etc/hosts file

If you are going for a multi-server pool, the host name must not resolve to the local (loopback) address. By default, Oracle VM defines them to match, which will result in a failed attempt to create a multi-server pool.
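A sketch of a pool-friendly /etc/hosts (addresses and names are examples):

```
127.0.0.1      localhost.localdomain localhost
192.168.1.10   ovm-host1.example.com ovm-host1
```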

This is all I have in mind for now. It should solve most newcomer issues with Oracle VM, and allow you to make good use of it. It’s a nice system, albeit with ugly management.

Update the OracleVM

You could use Oracle’s Unbreakable Linux Network, if you are a paying customer, or you could use the Public Yum Server for your system.

Updates to Oracle VM Manager

If you won’t use Oracle Grid Control (Enterprise Manager) to manage the pool, you will probably use Oracle VM Manager. You will need to update the ovs-console package, and you will probably want to add the tightvnc-java package, so that IE users will be able to use the web-based VNC services. You had better grab these packages from here.

Citrix XenServer and DR

Sunday, May 16th, 2010

Assuming your storage is capable of replication, a bunch of VMs can happily be replicated to an alternate location, where you can start them at will (and in a crisis, most likely).

This procedure, in theory, is rather simple. I have discovered that it is less so in practice, especially if your system goes into testing once in a while.

This leaves some “memories” with Citrix XenServer, as well as with the underlying OS.

The forums show examples of how to handle an iSCSI re-attach, so I am adding here how to handle a fiber channel re-attach operation as well, especially if you’re using multipath.

Grab the SCSI ID

The SCSI ID is very important, as it is the basic identifier of the LUN:

xe sr-probe type=lvmohba 2>&1

This will show an XML-like section under “SCSIid” containing the device’s SCSI ID. Also grab the UUID that XenServer assigns to the device. You can do this with:

xe sr-probe type=lvmohba device-config:SCSIid=<Enter SCSI ID here>
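To make the output format concrete, here is the same SCSIid extraction run against a canned sr-probe fragment (the sample output and ID below are made up):

```shell
#!/bin/bash
# Canned fragment resembling 'xe sr-probe type=lvmohba' output (made up)
probe_output='<SCSIid>
    360a98000503361344a424f417a2d6a64
</SCSIid>'

# Take the line after the opening tag, drop the tag lines, keep the last field
SCSIID=$(echo "$probe_output" | grep -A1 SCSIid | grep -v SCSIid | awk '{print $NF}')
echo "SCSI ID: $SCSIID"
```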

Introduce the Device

The system needs the device introduced. For that we’ll use the above captured UUID:

xe sr-introduce uuid=<Enter the UUID obtained before> shared=true type=lvmohba name-label="Imported SR"

Keep the UUID returned by this command. I will refer to it as the “SR UUID”.

Create PBD for each host in the pool

This should be performed for each host, based on the UUID of all hosts:

xe pbd-create sr-uuid=<Enter the SR UUID obtained before> device-config:SCSIid=<Enter SCSI ID> host-uuid=<Enter UUID of host>

The returned value, for each host, is the PBD UUID for that specific host. A PBD is a Physical Block Device, meaning the “connection” of the Storage Repository (which is pool-wide) to the specific host.

Plug PBDs

For each of the PBD UUIDs obtained above, run the following command to plug it in:

xe pbd-plug uuid=<pbd UUID as obtained above>

That should be all.

For ease of repetition, below is a script performing these exact actions automatically. It assumes exactly one (1) unallocated LUN at the beginning of the process, and will probably behave differently with more than one.

SCSIID=`xe sr-probe type=lvmohba 2>&1 | grep -A1 SCSIid | grep -v SCSIid | grep -v vendor | tail -1 | awk '{print $NF}'`
echo "scsi ID: $SCSIID"
UUID=`xe sr-probe type=lvmohba device-config:SCSIid=$SCSIID | grep -A1 UUID | grep -v UUID | grep -v Devlist | head -1 | awk '{print $NF}'`
echo "SR UUID: $UUID"
SRID=`xe sr-introduce uuid=$UUID name-label="Imported SR" shared=true type=lvmohba`
echo "SR ID: $SRID"
HOSTID=`xe host-list | grep uuid | awk '{print $NF}'`
for i in $HOSTID; do
   PBDID="$PBDID `xe pbd-create sr-uuid=$SRID device-config:SCSIid=$SCSIID host-uuid=$i`"
done
for i in $PBDID; do
   xe pbd-plug uuid=$i
done

Oracle VM and network bonding

Sunday, May 9th, 2010

Oracle VM, out of the box, does not allow network bonds. An excellent guide on how to enable bonding, which I have partially followed, convinced me that changing the relevant scripts would be better. That I have done, and reported in this wiki post.

To sum things up: configure bonding/VLAN tagging as you would normally do on any RHEL-based Linux distro, and replace the script /etc/xen/scripts/network-bridges with the modified one below. This script will define a bridge, as part of the xend service, for each network interface not already defined as a bridge or as a slave of a bond connection.

#!/bin/bash
# Runs network-bridge against each ethernet card.

dir=$(dirname "$0")

run_all_ethernets () {
    devnum=0
    for f in /sys/class/net/*; do
        netdev=$(basename $f)
        if [[ $netdev =~ ^bond[0-9.]+$ ]] || [[ $netdev =~ ^eth[0-9.]+$ ]]; then
            # Forget the previous interface's settings before sourcing the next
            unset SLAVE BRIDGE
            . /etc/sysconfig/network-scripts/ifcfg-${netdev}
            if [[ `echo $SLAVE | egrep "yes|YES|1"` ]] || [[ -n "$BRIDGE" ]]; then
                echo "Interface ${netdev} is being defined and claimed by someone else"
            elif [[ $netdev =~ ^((eth)|(bond))[0-9]+\. ]]; then
                # This is vlan tagging, and we want a persistent vlan name!
                VLAN=$(echo $netdev | cut -f2 -d.)
                # Assume tags are unique on a server - no several bridges for a single tag.
                # If you intend on having eth2.3 and eth3.3 as bridges for your vms (and not
                # deal with it on the host level), this script is not for you
                $dir/network-bridge "$@" "netdev=${netdev}" "bridge=xenbrv${VLAN}"
            else
                $dir/network-bridge "$@" "netdev=${netdev}" "bridge=xenbr${devnum}"
                let devnum++
            fi
        fi
    done
}

run_all_ethernets "$@"

After a very long absence, Changing Linux HVM to PV on Xen

Saturday, May 8th, 2010

I have had a stressful time, and no time to actually write anything down here. This is a pity, since I have been doing so many things worth sharing.
I will start with a small one now: how to convert a physical machine into a Xen-based VM. I assume you know the drill of how to do P2V in whatever method you like. My preferred method is booting into a new (virtual) system, then manually building the partitions, LVM, etc., and using ‘tar’ with ‘nc’ to copy files from the source server to the target server. I might elaborate more about it some time, but this post is about the next phase: we now have a previously-physical server which used to use /dev/sdX or /dev/cciss/cXdXpX, or whatever else, and we need to make it Xen-friendly.
This procedure will apply to RedHat’s Xen-based virtualization, OracleVM or Citrix XenServer.
I assume that the target system maintains LVM settings and mount points as the original one.
These are the steps that need to be performed, manually.
I used CactiEZ image, installed as HVM on Citrix XenServer as my example. This procedure should apply to all Linux systems, with respect to their package manager.

Edit /etc/modprobe.conf

Change eth0,1,whatever alias to xennet

Change/add alias of scsi_hostadapter to xenblk
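After the edit, /etc/modprobe.conf would contain lines along these lines (sketch; your original aliases may differ):

```
alias eth0 xennet
alias scsi_hostadapter xenblk
```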

Edit /etc/

Run the command

sed -i 's/hd/xvd/g' /etc/

to change references if you use labels. These will be used by rc.sysinit later during boot sequence, and we need them configured correctly.

Get Paravirtualized Kernel

yum install kernel-xenU

Edit /etc/securetty

Add the line “xvc0” to it. Notice: it is a zero, not the letter ‘o’.

Edit /etc/inittab

Replace tty1 with xvc0

Edit /boot/grub/menu.lst

Change the default entry to be the one with the xenU kernel

Replace ‘/dev/sda’ with ‘/dev/xvda’

Edit /boot/grub/

Replace ‘/dev/sda’ with ‘/dev/xvda’
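The same replacement as a command, demonstrated here on a throwaway copy so it can be tried safely (on the real system you would target /boot/grub/menu.lst and the other files above; the kernel line below is made up):

```shell
#!/bin/bash
# Throwaway stand-in for /boot/grub/menu.lst
f=$(mktemp)
echo 'kernel /vmlinuz-2.6.18-xenU ro root=/dev/sda1' > "$f"

# Swap the physical device for the Xen virtual one
sed -i 's|/dev/sda|/dev/xvda|g' "$f"
cat "$f"
```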

Edit /etc/fstab

Verify that labels are used (and then see the changes above), or that devices are renamed to xvd[a-z]

Reboot into PV-enabled VM

This should do it. The VM will attempt to detect new hardware, network MAC changes, etc, but it will work fine.