Posts Tagged ‘performance’

A few small insights

Wednesday, March 12th, 2008

Lately I have been overloaded beyond my capacity. This did not prevent me from doing all kinds of things, but most of them are too small to justify a full entry here, so I have decided to put together a small collection of small stuff someone might need to know, in order to get it indexed by search engines. These small insights might save someone some time. This is a noble cause.

1. Oreon is a nice overlay for Nagios; however, it is poorly documented, and some of the existing docs are in French. I have put hours into building it into a working setup, and I hope to be able to write down the process as it is.

2. “Sun Java System Active Server Pages” does not support 64-bit Linux installations – at least not if you’re interested in using it with your existing Apache server. Look here. It seems nothing has changed.

3. Under Ubuntu 7.10, Compiz suffers from a major memory leak when using NVidia display adapters. You can read about it in the bug page. Thanks to this link, I was able to work around it using compiz --indirect-rendering. It does not seem to cause any ill effect on my display performance.

4. Suse 10 and wireless cards – This one is a great guide, which I would happily recommend.

5. Flushing the existing read cache on your Linux machine (should never be done, unless you’re testing performance) can be done by running the following command:

echo 1 > /proc/sys/vm/drop_caches
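
The same interface accepts other values as well; here is a minimal sketch of a more thorough flush (flush dirty pages first, otherwise only clean pages get dropped):

# Write dirty pages to disk first
sync
# 1 = page cache, 2 = dentries and inodes, 3 = both
echo 3 > /proc/sys/vm/drop_caches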

Seems to be enough for today. Hope these tips help.

Quick provisioning of virtual machines

Friday, February 1st, 2008

When one wants to achieve fast provisioning of virtual machines, several solutions come to mind. The one I prefer uses Linux LVM snapshot capabilities to duplicate one working machine into several.

This works, of course, only if the host running VMware Server is Linux.

LVM snapshots have one vast disadvantage – performance. When a block on the source of the snapshot is changed for the first time, the original block is replicated to the COW (copy-on-write) space of each and every snapshot. This means that creating a 1GB file on a volume with ten snapshots results in a total copy of 10GB of data across your disks. You cannot ignore this performance impact.

LVM2 has support for read/write snapshots, and I have come up with a nice way of using this capability to my benefit. An R/W snapshot which is changed does not replicate its changes to any other snapshot. All changes are considered local to that snapshot, and are maintained only in its COW space. So adding a 1GB file to a snapshot has zero impact on the rest of the snapshots or volumes.
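
A minimal sketch of this behaviour, using the volume group and sizes I use later in this post (the snapshot name here is just an illustration):

# Take a writable snapshot of the baseline volume
lvcreate -s -n centos-test -L 6G /dev/VGVM3/centos-base
# Mount it and change it; the changes live only in this snapshot's COW space
mount /dev/VGVM3/centos-test /mnt
dd if=/dev/zero of=/mnt/big-file bs=1M count=1024
# The origin and any other snapshots are untouched; the Snap% column shows the COW usage
lvs /dev/VGVM3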

The idea is quite simple, and it works like this:

1. Create an adequate logical volume with a given size (I used 9GB for my own purposes). The name of the LV in my case will be /dev/VGVM3/centos-base (a rough sketch of these first two steps appears right after this list).

2. Mount this LV on a directory, and create a VM inside it. In my case, it’s in /vmware/centos-base

3. Install the VM as the baseline for all your future VMs. If you might not want Apache on some of them, don’t install it on the baseline.

4. Install vmware-tools on the baseline.

5. Disable the service “kudzu”

6. Update as required

7. In my case I always use DHCP. You can set it to obtain its IP from a given location, or whatever you feel like.

8. Shut down the VM.

9. In the VM’s .vmx file add a line like this:

uuid.action = "create"
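
For reference, a rough sketch of steps 1 and 2 (the filesystem type is my assumption here – use whatever you prefer):

# Step 1: create the baseline LV (9GB in my case)
lvcreate -n centos-base -L 9G VGVM3
mkfs.ext3 /dev/VGVM3/centos-base
# Step 2: mount it where the VM will live
mkdir -p /vmware/centos-base
mount /dev/VGVM3/centos-base /vmware/centos-base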

I have added below the two scripts I use to create and destroy these VMs. The creation script takes the snapshot, mounts it and registers the VM, including a new MAC address and UUID.


create-replica.sh:

#!/bin/sh
# This script will replicate VMs from a given (predefined) source to a new system
# Written by Ez-Aton, http://www.tournament.org.il/run
# Arguments: name

# FUNCTIONS
test_can_do () {
  # To be able to snapshot, we need a set of things to be true
  if [ -d $DIR/$TARGET ] ; then
    echo "Directory already exists. You don't want to do it..."
    exit 1
  fi
  if [ -e $VG/$TARGET ] ; then
    echo "Target snapshot exists"
    exit 1
  fi
  if [ `vmrun list | grep -c $DIR/$SRC/$SRC.vmx` -gt "0" ] ; then
    echo "Source VM is still running. Shut it down before proceeding"
    exit 1
  fi
  if [ `vmware-cmd -l | grep -c $DIR/$TARGET/$SRC.vmx` -ne "0" ] ; then
    echo "VM already registered. Unregister first"
    exit 1
  fi
}

do_snapshot () {
  # Take the snapshot
  lvcreate -s -n $TARGET -L $SNAPSIZE $VG/$SRC
  RET=$?
  if [ "$RET" -ne "0" ]; then
    echo "Failed to create snapshot"
    exit 1
  fi
}

mount_snapshot () {
  # This function creates the required directory and mounts the snapshot there
  mkdir $DIR/$TARGET
  mount $VG/$TARGET $DIR/$TARGET
  RET=$?
  if [ "$RET" -ne "0" ]; then
    echo "Failed to mount snapshot"
    exit 1
  fi
}

alter_snap_vmx () {
  # This function replaces the displayName in the VMX with the $TARGET name
  grep -v "displayName" $DIR/$TARGET/$SRC.vmx > $DIR/$TARGET/$TARGET.vmx
  echo "displayName = \"$TARGET\"" >> $DIR/$TARGET/$TARGET.vmx
  cat $DIR/$TARGET/$TARGET.vmx > $DIR/$TARGET/$SRC.vmx
  rm $DIR/$TARGET/$TARGET.vmx
}

register_vm () {
  # This function registers the VM with VMware
  vmware-cmd -s register $DIR/$TARGET/$SRC.vmx
}

# MAIN
if [ -z "$1" ]; then
  echo "Arguments: The target name"
  exit 1
fi

# Parameters:
SRC=centos-base     # The name of the source image, and the source dir
PREFIX=centos       # All targets will be created with the name centos-$NAME
DIR=/vmware         # My VMware VMs default dir
SNAPSIZE=6G         # My COW space
VG=/dev/VGVM3       # The name of the VG
TARGET="$PREFIX-$1"

test_can_do
do_snapshot
mount_snapshot
alter_snap_vmx
register_vm
exit 0
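
Usage is just the suffix of the new VM name (the name here is only an example):

# Creates a VM called centos-web01, mounted under /vmware/centos-web01 and registered with VMware Server
./create-replica.sh web01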

remove-replica.sh:

#!/bin/sh
# This script will remove a snapshot machine
# Written by Ez-Aton, http://www.tournament.org.il/run
# Arguments: machine name

# FUNCTIONS
does_it_exist () {
  # Check if the described VM exists
  if [ `vmware-cmd -l | grep -c $DIR/$TARGET/$SRC.vmx` -eq "0" ]; then
    echo "No such VM"
    exit 1
  fi
  if [ ! -e $VG/$TARGET ]; then
    echo "There is no matching snapshot volume"
    exit 1
  fi
  if [ `lvs $VG/$TARGET | awk '{print $5}' | grep -c $SRC` -eq "0" ]; then
    echo "This is not a snapshot, or a snapshot of the wrong LV"
    exit 1
  fi
}

ask_a_thousand_times () {
  # This function verifies that the right thing is actually done
  echo "You are about to remove a virtual machine and an LVM. Details:"
  echo "Machine name: $TARGET"
  echo "Logical Volume: $VG/$TARGET"
  echo -n "Are you sure? (y/N): "
  read RES
  if [ "$RES" != "Y" ] && [ "$RES" != "y" ]; then
    echo "Decided not to do it"
    exit 0
  fi
  echo ""
  echo "You have asked to remove this machine"
  echo -n "Again: Are you sure? (y/N): "
  read RES
  if [ "$RES" != "Y" ] && [ "$RES" != "y" ]; then
    echo "Decided not to do it"
    exit 0
  fi
  echo "Removing VM and snapshot"
}

shut_down_vm () {
  # Shut down the VM and unregister it
  vmware-cmd $DIR/$TARGET/$SRC.vmx stop hard
  vmware-cmd -s unregister $DIR/$TARGET/$SRC.vmx
}

remove_snapshot () {
  # Umount and remove the snapshot
  umount $DIR/$TARGET
  RET=$?
  if [ "$RET" -ne "0" ]; then
    echo "Cannot umount $DIR/$TARGET"
    exit 1
  fi
  lvremove -f $VG/$TARGET
  RET=$?
  if [ "$RET" -ne "0" ]; then
    echo "Cannot remove snapshot LV"
    exit 1
  fi
}

remove_dir () {
  # Removes the mount point
  rmdir $DIR/$TARGET
}

# MAIN
if [ -z "$1" ]; then
  echo "No machine name. Exiting"
  exit 1
fi

# PARAMETERS:
DIR=/vmware         # VMware default VMs location
VG=/dev/VGVM3       # The name of the VG
PREFIX=centos       # Prefix to the name. All these VMs will be called centos-$NAME
TARGET="$PREFIX-$1"
SRC=centos-base     # The name of the baseline image, LVM, etc. All are the same

does_it_exist
ask_a_thousand_times
shut_down_vm
remove_snapshot
remove_dir

exit 0
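
Removing it is just as simple (same example name):

# Stops, unregisters and removes centos-web01, including its snapshot LV and mount point
./remove-replica.sh web01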

Pros:

1. Very fast provisioning. It takes almost five seconds, and that’s because my server is somewhat loaded.

2. Dependable: KISS at its best.

3. Conservative on space

4. Conservative on I/O load (unlike the traditional use of LVM snapshots, as explained at the beginning of this section).

Cons:

1. Cannot merge the contents of a snapshot back into the main image (the LVM team will implement it in the future, I think)

2. Cannot take a snapshot of a snapshot (same as above)

3. If the COW space of any of the snapshots fills up (viewable through the command 'lvs'), then on boot the source LV might not become active (a confirmed RHEL4 bug, and that is the system I have used). A small sketch of keeping an eye on this appears after this list.

4. My script does not edit/alter /etc/fstab (I decided it would be rather risky, and it was not worth the effort at this time)

5. My script does not check whether there is enough available space in the VG. This is not really required, as it will simply fail if the creation of the LV fails
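
Regarding the third point, a minimal sketch of keeping an eye on the snapshots (the VM name is the same example as above; the added size is arbitrary):

# The Snap% column shows how full each snapshot's COW space is
lvs /dev/VGVM3
# If a snapshot approaches 100%, grow its COW space before it fills up
lvextend -L +1G /dev/VGVM3/centos-web01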

You are most welcome to contribute any further changes done to this script. Please maintain my URL in the script if you decide to use it.

Thanks!

net-snmp broken in RHEL (and Centos, of course) – diskio

Saturday, June 9th, 2007

I’ve had a belief for quite a while now that Linux, unlike other types of systems, was unable to produce any I/O SNMP information. I only recently found out that it was partially true – all production-level distros, such as RedHat (and Centos, for that matter) were unable to produce any output for any SNMP DISKIO queries.

I have found a bugzilla entry about it, so I ask any of the maintainers of RH-compatible repositories to recompile (and maintain, of course) an alternate net-snmp package which supports diskio.

Meanwhile, I have found this blog post, which offers an alternate (and quite clumsy, yet working) solution to the disk performance measurement issue in Linux. I haven’t tried it yet, but I will, rather soon.

—Update—

I have used the script from the blog post mentioned above, and it works.

Speed could be an issue, though. Comparing two servers, the speed difference was amazing.

Both servers are connected to the same switch as the server running the query. Server1 has a P2 233MHz CPU, while Server2 has dual 2.8GHz Xeon CPUs.

~$ time snmpwalk -c COMMUNITY -v2c Server1 1.3.6.1.4.1.2021.13.15 > /dev/null

real 0m0.311s
user 0m0.024s
sys 0m0.020s

~$ time snmpwalk -c COMMUNITY -v2c Server2 1.3.6.1.4.1.2021.13.15 > /dev/null

real 0m8.303s
user 0m0.044s
sys 0m0.012s

Looks like a huge difference. However, I believe it’s currently good enough for me.

High load average due to hardware issues

Friday, March 30th, 2007

Performance tuning is a sort of art. You know what you expect to reach, and you somehow strive towards it through selective tuning – of your OS memory utilization, your network settings, your NFS mount parameters, and so on.

I’ve been to a customer whose server acted funny. First, it had a high load average – for an idle server with 2 CPUs, a load average which never gets below 1.0 can be considered high.

Viewing the logs, I saw lots of PS/2 error messages. It turned out that the hotplug daemon was respawning several times a second due to incorrect hardware detection caused by these PS/2 errors, which led to the high load average (many processes in the run queue). Disconnecting the PS/2 cable between the server and the KVM solved the issue, and within around 2 minutes the load average decreased to around 0.02.
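
For anyone chasing a similar issue, a rough sketch of the checks involved (the log path assumes a RHEL-style syslog):

# Load average and number of runnable processes
uptime
# Kernel and hotplug complaints
grep -i "ps/2\|hotplug" /var/log/messages | tail
# See whether hotplug keeps respawning
watch -n 1 'ps -e | grep -c hotplug'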

Hardware-related problems are usually the most intensive performance hogs, and the easiest to solve.

It has been a while… Today – Monitor I/O in AIX

Saturday, November 25th, 2006

It is a question I was asked a while back, and I didn’t find the time to look into it.

I have searched for an answer just now, and found one given by (most likely) one of the old-time gurus on a news server (where the gurus usually lurk). The answer is to use the command "filemon" with a syntax such as this:

filemon -O lf -o outputfile.txt

To stop the monitoring session, run "trcstop". Review the output file generated by this command, and you will be able to see the I/O interactions which happened on the system during that time.
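
Putting it together, a minimal monitoring session might look like this (the 60-second window is arbitrary):

# Start monitoring logical file I/O, writing the report to outputfile.txt
filemon -O lf -o outputfile.txt
# Let it collect data for a while
sleep 60
# Stop the trace; filemon writes its report when trcstop runs
trcstop
# Review the report
more outputfile.txt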