VMware experience – lots of it

Over the past few days and weeks, I've had the pleasure (and will continue to have it in the future) of playing with VMware ESX (2.5.2) and GSX (3.2.1), as well as with Workstation in my long-forgotten past, and here I try to describe my own personal impressions of these products.

First – it is a good product, and I enjoyed working with it. It is not too complicated; however, it is not documented well enough, and finding solutions to specific problems was not easy, nor was it made easy by the online documentation and the web site.

I will start with GSX. It is a modern, easily usable product. It lets you run virtual systems on top of a running Windows or Linux system, and it allows remote management of those systems. It has a good remote GUI (VMware Console), which allows some cool stunts, such as installing a guest (virtual, but we'll keep to VMware's lingo here) OS directly from your own CD-ROM, on your own personal desktop. If you don't get it – install a Windows server and call it Server1. Install VMware GSX on it, and then run the VMware Console software on your desktop. Using this software you can define a whole guest system on Server1, control it, and view its "physically attached" keyboard, mouse and screen. So, you can map your own desktop's CD-ROM to a guest system on Server1 and install the guest from there. It's a stunt which allows you never to leave your own chair! It doesn't exist on the more expensive and advanced ESX, and that's a pity.

Using the VMware Console, or even the web-based management interface, you can define a larger variety of hardware on a guest system under GSX than you can under ESX. The ESX console and web interface did not allow a serial port on a guest, nor did they allow sound or USB. So it appears that although the ESX version is more advanced, it is limited compared to the lesser GSX.

I've discovered, during such an effort, that I could manually define a serial port on an ESX guest system by editing its configuration file. I believe other devices can be defined this way as well, but I wouldn't want to try that, nor would I be able to without a guest configuration file from a GSX system to use as an example. I've reached a solution here, and it was working, for the time being.
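I won't reproduce my exact file here, but as a rough illustration, the lines involved look something like the sketch below. This assumes the ESX guest configuration file accepts the same serial-port keys that GSX and Workstation use, and the host device path is just a placeholder:

    serial0.present = "TRUE"
    serial0.fileType = "device"
    serial0.fileName = "/dev/ttyS0"

As far as I know, fileType can also be "file" or "pipe" instead of a physical host device, but I haven't tried those on ESX.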

The ESX version is more like a mainframe-style system – it provides built-in system slicing and partitioning for consolidating numerous virtual machines. Lots of buzz-words, but all they mean is that you can have one stronger piece of PC hardware running a few virtual configurations (guests), which is easier to manage and gives better utilization of your actual resources, as physical servers tend to sit idle for a noticeable part of the day in most cases.

It does add a few more complicated considerations into the soup, however. If I had three servers doing nothing most of the day, but at 4 AM all of them started indexing local files, I couldn't care less. On such a consolidated setup, however, I would care: for better utilization, I would measure, or estimate, the amount of time each one requires for its task, and try to spread the jobs better around the clock – this one starts a bit earlier, that one a bit later – so they don't hog the system. This brings us to the major problem of such a setup: I/O. Every computer system ever built has had problems with its I/O. I/O, and especially disk access, is the slowest mechanism in a computer. You can execute millions and tens of millions of instructions per second, but you may need a few minutes to put the results on the disk. You could say that the I/O problem can be identified at two levels:

1) General disk access – Reading and writing to disks is rather slow.

2) Small files – Most files on the average system are small. Very small. As hard as any FS might try, the disk layout ends up random and spread out, which leads to high seek times when reading and writing small files – and that, actually, is the main occupation of any OS I/O subsystem. A rough back-of-the-envelope illustration of what this costs follows below.
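To make the seek-time point concrete, here is a minimal sketch. The disk figures are typical numbers for a 7200 RPM drive of this era, used for illustration only, not measurements from my setup:

    # Rough back-of-the-envelope: why small random I/O is so much slower
    # than sequential throughput. The figures are typical for a 7200 RPM
    # drive and are placeholders, not measured values.

    avg_seek_plus_rotation_ms = 12.0   # average seek + rotational latency
    sequential_mb_per_sec = 50.0       # large sequential reads/writes
    small_file_kb = 4.0                # a "typical" small file

    ios_per_second = 1000.0 / avg_seek_plus_rotation_ms
    random_mb_per_sec = ios_per_second * small_file_kb / 1024.0

    print("Random small-file throughput: %.2f MB/s (%.0f IOPS)"
          % (random_mb_per_sec, ios_per_second))
    print("Sequential throughput:        %.2f MB/s" % sequential_mb_per_sec)
    # => roughly 0.3 MB/s vs. 50 MB/s - two orders of magnitude apart.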

Virtual and consolidated solutions are no different. Each virtual OS requires its own share of the physical hardware's disk I/O, which might lead, in some cases, to poor performance of all guest OSes just because of a disk hog – which, by the way, is the hardest kind of problem to measure and detect, and moreover the hardest to solve. You can always pour in some more hard drives, but the host (container) I/O subsystem remains a single shared system, and the load generated by large amounts of small, random reads and writes remains the same. So, unless you use some QoS mechanism, a single machine can hog your entire virtual construction. This is one of the biggest downsides of such consolidation solutions.

IBM's pSeries, by the way, can consolidate the hardware into a few I/O-separated virtual machines (Logical Partitions, or LPARs, as IBM calls them – they call everything "partitions"). VMware ESX supports such a setup as well, but since it is not a hardware-bound setup (as LPAR is), I wonder how well it manages to prevent one I/O channel from degrading the performance of the others.

I guess that for low-I/O systems, or for lab usage, ESX can do the trick. You can run a full OS cluster (Windows or Linux) on it, and it will work correctly and nicely. Unless you intend to disconnect physical (or virtual) disks from guest servers, it is a good solution for you.

So, to sum things up, I can say that I enjoy "playing" with VMware products. I enjoy them because they're innovative, sophisticated, and they look sexy, but I am well aware of the way the market chooses its current solutions, and of the fact that many use VMware products for the sake of consolidation and ease of management without proper consideration or understanding of the entirely expected performance loss that can come with it (but does not have to, if you calculate things correctly). A friend told me about an ESX setup he encountered: a quad-CPU system with 16GB RAM, running 16 guest OSes – among them MS Exchange, MSSQL 2005, MS-SMS, and more – using a single shelf of RAID-5-based storage, connected via two 2Gb/s fibre connections set up as failover (only one active link at a time). It was overloaded and performing badly. Nice server, though 🙂
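A quick sanity check shows why. The numbers below are assumptions (a typical disk shelf of that era and the standard RAID-5 write penalty), not the actual configuration, but they illustrate how little random-I/O capacity 16 busy guests were really sharing:

    # Why one RAID-5 shelf shared by 16 busy guests falls over.
    # Drive count, per-drive IOPS and read/write mix are assumptions
    # for illustration, not figures from the actual setup above.

    drives            = 14      # typical shelf size
    iops_per_drive    = 150     # 10k/15k RPM drive, random I/O
    raid5_write_cost  = 4       # each logical write = 2 reads + 2 writes
    write_fraction    = 0.4     # Exchange/SQL-style mixed workload

    raw_iops = drives * iops_per_drive
    # Effective IOPS once the RAID-5 write penalty is applied:
    effective = raw_iops / ((1 - write_fraction) + write_fraction * raid5_write_cost)

    print("Raw shelf IOPS:        %d" % raw_iops)
    print("Effective mixed IOPS:  %d" % effective)
    print("Per guest (16 guests): ~%d IOPS each" % (effective / 16))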

One last thing about ESX: it would not install on purely IDE systems. It requires SCSI (and maybe SATA?) storage for the space holding the guests' virtual hard drives.

So, enough about VMware for today. I wonder if there's some easy matrix for "tell me what your servers will do, and we'll calculate the I/O, CPU and memory for your future server", instead of the poor way of "I've discovered my server is too weak for the task, half a year after deployment", which we see far too much of today.
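If I were to sketch such a matrix myself, it might look something like the toy calculation below. The workload names and per-guest figures are invented for illustration; the point is only to sum up CPU, memory and peak IOPS per guest, with some headroom, before buying the box rather than after:

    # Toy consolidation-sizing sketch. All workload figures are invented
    # placeholders - the idea is only to sum per-guest requirements and
    # add headroom before choosing the physical server.

    guests = [
        # (name,         cpu_ghz, mem_gb, peak_iops)
        ("mail server",     1.0,    2.0,    300),
        ("database",        2.0,    4.0,    800),
        ("file/print",      0.5,    1.0,    150),
        ("monitoring",      0.5,    1.0,     50),
    ]

    headroom = 1.3   # 30% spare capacity for peaks and growth

    cpu  = sum(g[1] for g in guests) * headroom
    mem  = sum(g[2] for g in guests) * headroom
    iops = sum(g[3] for g in guests) * headroom

    print("Required: %.1f GHz CPU, %.1f GB RAM, %.0f peak IOPS" % (cpu, mem, iops))
    # Compare the IOPS figure against what the planned storage can actually
    # deliver (a single RAID-5 shelf often cannot), not just CPU and memory.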
