Posts Tagged ‘perl’

mime_decode-1 FAILED: Can’t locate object method “seek” via package “File::Temp ” at /usr/lib/perl5/vendor_perl/5.8.5/MIME/ line 816

Sunday, January 6th, 2008

This is the error message I have seen on my Linux + Postfix + Amavisd-new system. Not only that, but Amavis kept a copy of each message in its tmp directory, which reduced my free /var space to nothing quite rapidly.

amavis[21189]: (21189-01) (!)PRESERVING EVIDENCE in /var/amavis/tmp/amavis-200

Doesn’t sound too good.

A partial Google search produced a compressed mailing-list archive which pointed me onwards. It could be either a problem with Amavis or with Perl.

After some further investigation, it appears that RPMforge has released an incompatible version of perl-MIME-tools – 5.425-1-test, up from 5.420. It was quite disappointing, but I had to downgrade the package to its origin (the latest version which worked), and force yum never to upgrade that specific package.
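Pinning a package like this is a one-line affair; a sketch of the relevant /etc/yum.conf fragment (the exact package-name pattern is an assumption – match it against what rpm -q reports on your system):

```
[main]
# Never install or upgrade perl-MIME-tools from any repository
exclude=perl-MIME-tools*
```

The same exclude line can also live in an individual .repo file if you only want to block the one repository that ships the broken build.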

This one will be on a different post (for the sake of order and searchability).

VMware Perl SDK bug and workaround

Saturday, November 10th, 2007

During an attempt to use the VMware Perl SDK, I have encountered the following error:

VMControl Panic: SSLLoadSharedLibrary: Failed to load library /usr/bin/ cannot open shared object file: No such file or directory

This is weird, as it had compiled successfully on my system (CentOS 4), but still…

The workaround was to create two symlinks:

ln -s /usr/lib/ /usr/bin/

ln -s /usr/lib/ /usr/bin/

This was related to an attempt to setup VMware fencing in RH Cluster on VMware Server.

Single-Node Linux Heartbeat Cluster with DRBD on Centos

Monday, October 23rd, 2006

The trick is simple, and many of those who deal with HA clusters arrive at least once at such a setup – an HA cluster without the HA.

Yep. Single node, just to make sure you know how to get this system to play.

I have just completed it with Linux Heartbeat, and wish to share an example of a single-node cluster setup, with DRBD.

First – get the packages.

It took me some time, but following the Linux-HA suggested download link (funnily enough, it was the last place I searched) gave me exactly what I needed. I have downloaded the following RPMs:









I was also required to add the following RPMs:




I have added DRBD RPMS, obtained from YUM:


kernel-module-drbd-2.6.9-42.EL-0.7.21-1.c4.i686.rpm (Note: Make sure the module version fits your kernel!)

As soon as I finished searching for dependent RPMS, I was able to install them all in one go, and so I did.

Configuring DRBD:

DRBD was a tricky setup. It would not accept a missing destination node, and required me to actually lie. My /etc/drbd.conf, written with a lot of outside assistance, looks as follows:

resource web {
    protocol C;
    incon-degr-cmd "echo '!DRBD! pri on incon-degr' | wall ; sleep 60 ; halt -f"; # replace later with halt -f
    startup { wfc-timeout 0; degr-wfc-timeout 120; }
    disk { on-io-error detach; } # or panic, ...
    syncer {
        group 0;
        rate 80M; # 1Gb/s network!
    }
    on p800old {
        device /dev/drbd0;
        disk /dev/VolGroup00/drbd-src;
        address; # eth0 network address!
        meta-disk /dev/VolGroup00/drbd-meta[0];
    }
    on node2 {
        device /dev/drbd0;
        disk /dev/sda1;
        address; # eth0 network address!
        meta-disk /dev/sdb1[0];
    }
}

I have had two major problems with this setup:

1. I had no second node, so I left a “default” second node (node2) in place. I never expected to use it.

2. I had no free (non-partitioned) space on my disk. Luckily, I tend to install CentOS/RH using the installation defaults unless some special need arises, so with the power of LVM I could carve out the space: disable swap, shrink its logical volume, create two logical volumes for the DRBD meta and source, re-create and re-enable the swap, and finally format the source volume:

swapoff -a

lvresize -L -500M /dev/VolGroup00/LogVol01

lvcreate -n drbd-meta -L +128M VolGroup00

lvcreate -n drbd-src -L +300M VolGroup00

mkswap /dev/VolGroup00/LogVol01

swapon -a

mke2fs -j /dev/VolGroup00/drbd-src

I thus had the two additional volumes (the required minimum) and could operate this setup.

With the space issue solved, I had to start DRBD for the first time. Per the Linux-HA DRBD manual, this is done by running the following commands:

modprobe drbd

drbdadm up all

drbdadm -- --do-what-I-say primary all

This has brought the DRBD up for the first time. Now I had to turn it off, and concentrate on Heartbeat:

drbdadm secondary all

Heartbeat settings were as follows:


use_logd on # or should logd be used at all?
udpport 694
keepalive 1 # 1 second
deadtime 10
initdead 120
bcast eth0
node p800old # the `uname -n` name
crm yes
auto_failback off # or no?
compression bz2
compression_threshold 2

I have also created a relevant /etc/ha.d/haresources, although I’ve never used it (this file has no importance when using “crm yes”). I did, however, use it as the source for the conversion script in /usr/lib/heartbeat/

p800old IPaddr:: drbddisk::web Filesystem::/dev/drbd0::/mnt::ext3 httpd

Clearly, the virtual IP will be in my class A network, and DRBD has to come up before the storage is mounted. After all this, the application kicks in and brings up my web page. The application, Apache, was modified beforehand to use that IP, and to look for its DocumentRoot in /mnt.

I ran the conversion script in /usr/lib/heartbeat/ on the file (no need to redirect the output, as it is already directed to /var/lib/heartbeat/crm/cib.xml), and I was ready to go.

/etc/init.d/heartbeat start (while another terminal is open with tail -f /var/log/messages), and Heartbeat is up. It took it a few minutes to kick the resources up; however, I was more than happy to see it all work. Cool.

The logic is quite simple, the idea is very basic, and as long as the system is managed correctly, there is no reason for it to get into a dangerous state. Moreover, since we’re using DRBD, a split brain cannot actually endanger the data, so we get compensated for the performance price we might pay in a real two-node HA environment following these same guidelines.

I cannot express my gratitude to the site which is the source of all this (with some common sense added on top). Their documents are more than enough to set up a fully working HA environment.

Process monitoring, Keepalive, etc

Sunday, October 23rd, 2005

My new Linux server-to-be will require some remote monitoring and process keepalive going on there. The thing is, I’ve noticed that nscd (which is required when dealing with hundreds of LDAP-based accounts) tends to die once in a while. I’ve also made a mistake once, and managed to kill all the SSH daemons, including the running ones. I am happy to say it was solved by going down one floor, connecting a screen to the machine, and restarting the service; however, it would have been nasty had it happened in the colocation room, inside our ISP’s server farm…

Since I try to solve problems *before* they appear, I’ve decided to search for a process-keepalive daemon, or something that would ease my life and make sure I don’t get any phone calls.

At first, searching for "process keepalive" led me to some pages about HA servers, a.k.a. High Availability clusters. I don’t need multi-node keepalive, so I didn’t bother with it. Installing CentOS’ or Dag’s keepalived proved to be exactly the thing I was not looking for (it does VRRP-based failover between servers, not process supervision), so I removed it and kept on searching.

In the process, I found a small script meant to be put into cron. Nice going for one or two processes, but maintaining a full load of about 10 processes, which I must keep alive at all times, is a bit too much for it. Without being able to code Perl, I needed something else, something more scalable.
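For the record, the cron approach itself is trivial; a minimal sketch of such a check (the daemon name and init-script path here are assumptions – adjust for your box):

```shell
#!/bin/sh
# Minimal cron-style keepalive: if the named process is gone, start it again.
# Meant to run from cron every minute or so, e.g.:
# * * * * * /usr/local/sbin/keepalive
ensure_running() {
    name="$1"   # process name, as pgrep sees it
    init="$2"   # init script (or any command) that starts it
    if pgrep -x "$name" >/dev/null 2>&1; then
        echo "$name ok"
    else
        echo "$name down, restarting"
        "$init" start >/dev/null 2>&1 || true
    fi
}

ensure_running nscd /etc/init.d/nscd
```

This is exactly the one-or-two-processes league, though – scaling it to ten daemons with different start commands is where a real tool earns its keep.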

I’ve seen lots of things, and some of them looked like they could interest me, but I wanted it to be part of my package tree. I wanted it to be an RPM, so I could upgrade it whenever there are updates – all this without actually tracking each package by hand (which is a good enough reason for having a package-management system in the first place).

In Dag Wieers’ RPM repository I was able to find just the thing for me. It’s called "monit", and it was exactly that. It took me about 10 minutes to set it up and get it working, tested, for most of my more important daemons.

An example of a configuration file is here: monit.conf
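For flavour, a minimal monit stanza looks roughly like this (the pidfile and init-script paths are assumptions for a CentOS-style box – check your own):

```
set daemon 60                     # poll every 60 seconds

check process nscd with pidfile /var/run/nscd/nscd.pid
    start program = "/etc/init.d/nscd start"
    stop  program = "/etc/init.d/nscd stop"

check process sshd with pidfile /var/run/
    start program = "/etc/init.d/sshd start"
    stop  program = "/etc/init.d/sshd stop"
```

One stanza per daemon scales to ten processes far more gracefully than a hand-rolled cron script.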

It works, and it made my life a lot easier. I can easily recover both human mistakes and machine errors now. I might add some mail notification, but for now I will settle for logs only.

Not much of a programmer

Thursday, August 11th, 2005

I’ve never been much of a coder. I used to write little pieces of code in C when I was a student, but I’ve long since stopped.

I have just made a resolution: to learn PHP and Perl. I want to build a few things, and now is as good a time as ever.

I have the literature, I have the will, I only need enough of it, and some time.