Posts Tagged ‘RAM’

Veritas (Symantec) Cluster Server on Centos 3.6

Wednesday, February 15th, 2006

I’m in the process of installing VCS on two guest (virtual) Linux nodes, each using the following setup:

256MB RAM

One local (Virtual) HDD, SCSI channel 0

Two shared (Virtual) HDDs, SCSI channel 1

Two NICs, one bridged, and one using /dev/vmnet2, a personal Virtual switch.

The host (the machine carrying the guests) is a Pentium III 800MHz with 630MB RAM. I don’t expect great performance out of it, just a slow but working test environment.

Common mistake – never forget to change your /etc/redhat-release to state that you are running a Red Hat Enterprise system. Failure to do so will cause the installation of VRTSllt to fail, which will force you either to install it manually after you’ve fixed the file, or to remove everything and reinstall. In my case, CentOS 3.6 (equivalent to RedHat Enterprise Server 3 Update 6), the file /etc/redhat-release should contain the string:

Red Hat Enterprise Linux AS release 3 (Taroon)
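Roughly, fixing it comes down to this (a sketch; the backup file name is just my habit):

cp /etc/redhat-release /etc/redhat-release.centos   # keep the original around
echo "Red Hat Enterprise Linux AS release 3 (Taroon)" > /etc/redhat-release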

Veritas has advanced a great deal in the last few years regarding ease of installation, even across multiple servers. Installing cluster software usually involves more than one server, and being able to install it all from a single point is real progress. My two nodes are named vcs01-rhel and vcs02-rhel, and the installation was done entirely from vcs01-rhel.

The installer assumes you can log in over ssh (by default) from one node to the other without a password prompt. In my case, that wasn’t true. I found it quicker (and dirty, mind you!) to let the installation and configuration process use rsh instead. It’s not safe and it’s not good, but since it’s only for the short, limited time required for the installation, I’d hack it so it would work. How did I do it?

On node vcs02-rhel I installed (using yum, of course) the rsh-server package. The syntax is yum install rsh-server. Afterwards, I changed its relevant xinetd file, /etc/xinetd.d/rsh, to set the flag "disable = no" and restarted xinetd. Following that, I commented out (hashed) two lines in /etc/pam.d/rsh:

auth required pam_securetty.so

auth required pam_rhosts_auth.so

As said, quick and dirty. It allowed rsh from vcs01-rhel as root, without a password. Don’t try this in an unsecured environment, as it actually grants not only vcs01-rhel but any and every computer on the net full, password-free rsh access to the server. Better think it over, right?
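Put together, the whole quick-and-dirty setup is roughly this (a sketch; chkconfig is just one way to flip the xinetd flag):

# on vcs02-rhel: enable rsh the quick-and-dirty way
yum install rsh-server
chkconfig rsh on            # sets "disable = no" in /etc/xinetd.d/rsh
service xinetd restart
# then comment out the pam_securetty.so and pam_rhosts_auth.so lines in /etc/pam.d/rsh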

The first thing I did after I finished installing the software was to undo my pam.d changes and disable the rsh service. Later, I will remove it altogether.
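The undo is just the mirror image of the above (a sketch):

# back on vcs02-rhel, once the installer is done
chkconfig rsh off
service xinetd restart
# un-comment the two lines in /etc/pam.d/rsh, and later: yum remove rsh-server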

So, we need to run the installer, which is done by cd’ing to /mnt/cdrom/rhel3_i686 and running ./installer -usersh (the -usersh flag tells it to use rsh rather than ssh).

I’m asked about all the machines I need to install VCS on, I’m asked whether I want to configure the cluster (which I do), and I set a name and a cluster ID (a number between 0 and 255). This cluster ID is especially important when dealing with several Veritas clusters running on the same infrastructure. If you have two clusters with the same cluster ID, you get one extra-large cluster, and a mess out of it. Mind it if you ever run several clusters on one network.

I decide to go ahead with the setup. I decide to enable the web management GUI, and I decide to set an IP for the cluster. This IP will be used by the default resource group (called ClusterService), and will be a resource in it. When/if I have more resource groups, I should consider adding more IP addresses for them, at least one for each. In such a setup, the cluster serves clients’ requests without them being aware of anything "special" about the server, like, for example, the fact that it has already failed over twice.
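For the record, adding such a service IP to a later group would look more or less like this (illustrative only; the group name, resource name and address are made up):

# add a service group of our own with its own IP resource (names/address made up)
haconf -makerw
hagrp -add AppGroup
hagrp -modify AppGroup SystemList vcs01-rhel 0 vcs02-rhel 1
hares -add app_ip IP AppGroup
hares -modify app_ip Device eth0
hares -modify app_ip Address 192.168.1.60
hares -modify app_ip NetMask 255.255.255.0
hares -modify app_ip Enabled 1
haconf -dump -makero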

I define the heartbeat networks. I used eth1 as the private heartbeat, and eth0 as both the public network and a "slow" (low-priority) heartbeat. Later I will add some more virtual NICs to both nodes and define them as private heartbeats as well.
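The installer writes the LLT configuration for you, but for reference, what ends up in /etc/llttab on each node looks roughly like this (node name and cluster ID here are examples; link is the private heartbeat on eth1, link-lowpri is the public eth0 doubling as the slow heartbeat):

cat /etc/llttab
set-node vcs01-rhel
set-cluster 7
link eth1 eth1 - ether - -
link-lowpri eth0 eth0 - ether - -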

Installing packages – I decide to install all optional packages; it’s not as if I’m going to run out of space. Mind you, I did not install VVM, because I want to simulate a system with no volume manager enabled. Just pure, basic, simple partitions.

Installation went fine, and I was happy as a puppy.

One thing to note – I wanted to install the Maintenance Pack I have, but I was unable to eject my CD. Running lsof | grep /mnt/cdrom revealed that CmdServer, a component of VCS, was using the cdrom, probably because I, as root, had started the service from that location. I shut down the VCS service, started it again from another path, and was then able to eject the CD.
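In shell terms, the dance is roughly this (a sketch; hastop -local and hastart are one way to bounce VCS on a single node):

lsof | grep /mnt/cdrom      # CmdServer shows up holding the mount point
cd /                        # make sure my own shell isn't holding it either
hastop -local               # stop VCS on this node only
eject /mnt/cdrom
hastart                     # bring VCS back up, started from a sane path this time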

Installing the MP wasn’t that easy. The installer, much smarter this time, requires the package redhat-release, which is a mandatory package on RedHat systems; but I, running CentOS, had the package centos-release, which just wouldn’t do the trick. I decided to rebuild the centos-release package under a different internal name – redhat-release – and to do that, I had to download the centos-release SRPM. You need to change the name and version so that your resulting RPM is redhat-release-3ES-13.6.2. I’ve done it with this centos-release.spec file. Replace the spec file in your centos-release SRPM with this one, and you should be just fine. Remove your current centos-release package, and you’ll be able to install your newly built (using rpmbuild -bb centos-release.spec) redhat-release RPM (faked, of course). Mind you – it will overwrite your /etc/redhat-release, so you’d better back it up, just in case. I took that precaution, and restored the file to its fake RedHat contents afterwards. You can never know…
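The rebuild itself goes roughly like this (a sketch; /usr/src/redhat is the RHEL3-era build tree, and the exact SRPM file name will differ):

rpm -ivh centos-release-3-*.src.rpm                        # unpack the SRPM
cp centos-release.spec /usr/src/redhat/SPECS/              # the modified spec (Name: redhat-release)
rpmbuild -bb /usr/src/redhat/SPECS/centos-release.spec
cp /etc/redhat-release /etc/redhat-release.bak             # it will be overwritten
rpm -e centos-release
rpm -ivh /usr/src/redhat/RPMS/*/redhat-release-3ES-13.6.2*.rpm
cp /etc/redhat-release.bak /etc/redhat-release             # restore the fake RHEL string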

You might wonder why I haven’t used RHEL itself, but a clone, namely CentOS. Although it’s for home use only, the ease of updates, the availability of packages (using yum), and the fact that I do not want to steal software all combine to make me install CentOS for all my home usage. In production environments, however, it will be an official RedHat, I can guarantee that.

So, it’s installing MP2, which means removing some packages and then installing newer versions. Why they do not use RPM’s "upgrade" option is beyond me, and so is their pickiness about the redhat-release version. So, if you’ve followed all the steps given here, you should have VCS 4.0 MP2 installed on your Linux nodes. Good luck. Our next chapter will be installing and configuring an Oracle database on this setup. Stay tuned 🙂

Server upgrade pictures, as promised

Saturday, November 5th, 2005

Here are some pictures of the server upgrade.

Old server: P2 300MHz, with 256MB RAM:

Old server, in its old location

New server: Xeon 3GHz with HT and EM64T, with 1GB RAM. Nice:

Dell PowerEdge 1800. Nice LEDs 🙂

The whole server room during the upgrade.

Notice the 3Com router/switch! You’ll be able to recognise it by its antennas.

Notice the wireless router/switch at the center left, which connects the servers while keeping them disconnected from the Internet and independent of any ISP-based solution, cables, etc. You cannot see me, sitting in a more comfortable room (though not by far), using my laptop over the wireless link as the manager of this whole migration process. Worked like a charm.

Dell PowerEdge 1800 and Linux – Part 1

Tuesday, September 27th, 2005

As part of my voluntary work, I manage and support the Israeli Amateur Radio Committee’s Internet server. This machine is a poor old box, custom-made about 5-7 years ago, containing a Pentium II 300MHz and 256MB RAM. It serves a few hundred users, and you can guess for yourself how slow and miserable it is.

After a long wait, the committee decided to purchase a new server. This server, a Dell PE 1800, landed in my own personal lab just two days ago. It’s a nice machine, cheap considering its abilities, and it’s just waiting to be installed. Or so it was, up until now.

Mind you, brand-name PC servers containing more than one CPU can cost around $3K and above. This baby came with a minimal yet scalable setup: only one CPU out of a possible two, 1GB RAM out of a possible 8GB maximum, and two SCSI hot-swap HDDs using 2 out of 6 slots. Nice machine. And it was cheap too. Couldn’t complain about it.

At first, I tried using Dell’s CDs. The "Server Setup" CD is supposed to help me prepare the machine for OS installation, be it Windows, Linux, Novell, etc. I tried using it to prepare for a new CentOS install, when I noticed it didn’t partition quite as I expected. The "Server Setup" tool had decided that I would not use mirrors and would not use LVM, but would use a predefined, permanent layout, and that’s that. This machine did not come with a RAID controller, so I had to configure software RAID, and what better time is there than during the install? Dell’s people think otherwise, so I booted from CentOS 4.1 boot media instead (my whole installation tree resides on an NFS share). The installation was smooth and worked just as expected. Fast, sleek, smooth. All I’ve ever expected of a Linux installation on a server-class PC. Just like it should be.
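For reference, an NFS-based install of that sort takes little more than this (a sketch; the export path and network are made up):

# on the box holding the installation tree: export it over NFS
echo "/export/centos41 192.168.0.0/24(ro,no_root_squash)" >> /etc/exports
exportfs -ra
# then boot the PowerEdge from the CentOS boot CD, type "linux askmethod"
# at the boot prompt, pick NFS and point it at 192.168.0.1:/export/centos41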

I’ve partitioned the system using the following guidelines:

1) Software mirror RAID /dev/md0, containing /boot (150MB)

2) Two standalone swap partitions, 1GB each, one on each HDD. I do not need a mirror for swap.

3) Software mirror RAID /dev/md1, containing LVM, spanning all of the remaining disk space.

4) Logical volumes "rootvol" (5GB) holding / and "varvol" (6GB) holding /var. Both can be expanded later, so I don’t need to worry now about their final sizes.
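Anaconda does all of this from its partitioning screen, but building roughly the same layout by hand would look something like this (a sketch; device names, partition numbers and the volume group name are assumptions):

# two mirrored md devices, two plain swap partitions, and LVM on top of md1
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # /boot mirror
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3   # LVM mirror
mkswap /dev/sda2 && mkswap /dev/sdb2                                     # one swap per disk
pvcreate /dev/md1
vgcreate vg00 /dev/md1
lvcreate -L 5G -n rootvol vg00
lvcreate -L 6G -n varvol vg00
mkfs.ext3 /dev/md0
mkfs.ext3 /dev/vg00/rootvol
mkfs.ext3 /dev/vg00/varvol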

As said, the installation went great. However, I was not able to boot the system… Each time I just ended up in the hidden maintenance system partition, and it seems my GRUB had failed to install itself. Darn.

I booted into rescue mode and tried to install GRUB manually. Failed. I think (and it’s not the first time I’ve had such problems with GRUB) that GRUB is not as good as everyone says it is. It can’t boot from a software mirror, which means it’s not ready for production, as far as I’m concerned.

I used yum to download and install LILO, easily converted /etc/lilo.conf.anaconda into the correct LILO configuration file (/etc/lilo.conf), and ran it. It worked great, and the system was able to boot.
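The fix amounts to something like this (a sketch; the raid-extra-boot line is what tells LILO to write a boot record to both underlying disks, and the device names are assumptions):

yum install lilo
cp /etc/lilo.conf.anaconda /etc/lilo.conf
# make sure lilo.conf points at the mirror and at both disks, e.g.:
#   boot=/dev/md0
#   raid-extra-boot=/dev/sda,/dev/sdb
/sbin/lilo -v          # write the boot loader and show what it does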