Archive for September, 2005

My PSP – Oh my Portable PlayStation!

Friday, September 30th, 2005

I pause for a small, unrelated island of happiness. I bought a PSP a few months ago, and due to some minor language bugs, I upgraded its firmware, through its built-in interface, to a newer level: from 1.50 to 1.52. It was a major mistake. It turns out there was (and still is) no known buffer overrun in that firmware version, which means there is no way to run homebrew code on the PSP. I searched the net for either an exploit that would allow running code on the machine, or a firmware downgrader. There was nothing. Version 2.0 of the PSP's firmware was released, and still no buffer overrun was found.

Now one has been found – on version 2.0, a buffer overrun was discovered which enables downgrading the system's firmware to an older version, namely 1.50. I upgraded to 2.0 and followed these instructions. Besides, that site has proven to hit the spot for updates – I get all the info and homebrew applications I want.

After performing the so-longed-for downgrade, I am now able to run homebrew applications and games, and I enjoy it very much.


Dell PowerEdge 1800 and Linux – Part 1

Tuesday, September 27th, 2005

As part of my volunteer work, I manage and support the Israeli Amateur Radio Committee's Internet server. This machine is a poor old box, custom-built about 5-7 years ago, with a Pentium II 300MHz and 256MB RAM. It serves a few hundred users, and you can guess for yourself how slow and miserable it is.

After a long wait, the committee decided to purchase a new server. This server, a Dell PE 1800, landed in my own personal lab just two days ago. It's a nice machine, cheap considering its abilities, and it was just waiting to be installed. Or so it was, up to now.

Mind you that brand-name PC servers supporting more than one CPU can cost around $3K and above. This baby came with a minimal yet scalable setup: only one CPU out of a possible two, 1GB RAM out of a possible 8GB maximum, and two SCSI hot-swap HDDs using 2 of the 6 slots. Nice machine. And it was cheap, too. Couldn't complain about it.

At first, I tried using Dell's CDs. The "Server Setup" CD is supposed to help prepare the machine for OS installation, be it Windows, Linux, Novell, etc. I tried using it to prepare for a new CentOS install, when I noticed it didn't partition quite as I expected. Well, the "Server Setup" tool had decided I would not use mirrors, would not use LVM, but would use a predefined, permanent layout, and that's that. This machine did not come with a RAID controller, so I had to configure software RAID. What better time to do that than during the install? Dell's people think otherwise, so I booted from CentOS 4.1 media instead (my whole installation tree resides on an NFS share). The installation was smooth and worked just as expected. Fast, sleek, smooth. All I've ever expected of a Linux installation on a server-class PC. Just like it should be.

I’ve partitioned the system using the following guidelines:

1) Software RAID mirror /dev/md0, containing /boot (150MB)

2) Two standalone swap partitions, 1GB each, one on each HDD. I don't need a mirror for swap.

3) Software RAID mirror /dev/md1, containing LVM, spanning whatever is left of the disks.

4) Logical volumes "rootvol" (5GB) holding / and "varvol" (6GB) holding /var. Both can be expanded later, so I don't need to worry now about their final sizes.
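For future reference, the same layout can be sketched in anaconda kickstart syntax. The disk names (sda/sdb) and the volume group name below are my illustration, not taken from the actual install:

```
# RAID1 + LVM layout sketch, kickstart syntax (CentOS 4 era).
# Disk names (sda/sdb) and the VG name (vg00) are assumptions.
part raid.01 --size=150 --ondisk=sda
part raid.02 --size=150 --ondisk=sdb
part swap --size=1024 --ondisk=sda
part swap --size=1024 --ondisk=sdb
part raid.11 --size=1 --grow --ondisk=sda
part raid.12 --size=1 --grow --ondisk=sdb
raid /boot --level=1 --device=md0 raid.01 raid.02
raid pv.01 --level=1 --device=md1 raid.11 raid.12
volgroup vg00 pv.01
logvol / --vgname=vg00 --size=5120 --name=rootvol
logvol /var --vgname=vg00 --size=6144 --name=varvol
```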

As said, the installation went great. However, I was not able to boot the system… each time I ended up in a hidden maintenance system partition, and it seems GRUB had failed to install itself. Darn.

I booted into rescue mode and tried to install GRUB manually. Failed. I think (and it's not the first time I've had such problems with GRUB) that GRUB is not as good as everyone says it is. It can't boot off a software mirror, which means it's not ready for production, as far as I'm concerned.

I used YUM to download and install LILO, easily converted /etc/lilo.conf.anaconda into the correct file for LILO (/etc/lilo.conf), and ran it. Worked great, and the system was able to boot.
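For the record, a converted /etc/lilo.conf for this kind of setup looks roughly like this. The kernel version and volume group name here are illustrative, not copied from the actual file:

```
# Sketch of /etc/lilo.conf booting off a software RAID1 /boot.
# Kernel version and VG name below are assumptions.
boot=/dev/md0
# Write the boot record to both disks' MBRs, so either disk can boot
raid-extra-boot=mbr-only
prompt
timeout=50
default=linux

image=/boot/vmlinuz-2.6.9-11.EL
    label=linux
    initrd=/boot/initrd-2.6.9-11.EL.img
    read-only
    root=/dev/vg00/rootvol
```

After editing, running /sbin/lilo writes the boot records; the raid-extra-boot line is what makes the mirror actually bootable from either disk.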

Diving computers – Suunto Mosquito and Suunto Vyper

Tuesday, September 20th, 2005

I happen to dive. I like it. I happen to be the owner of two dive computers, namely the Suunto Mosquito and the Suunto Vyper.

As I don't have Windows running on my desktop computer, I encountered some problems trying to view my diving profile using software connecting my PC to my dive computers. There is no official software for Linux, and I won't install Windows on my PC just for that. Two options are available: using some GPL Linux code, available through linux-diving, or using Suunto's own official software under WINE. Having failed, a long time ago, with the tools available through linux-diving, and seeing there has been no major update to any of them for the last year or so, I decided to try the WINE option. It didn't go quite as well as I expected.

I'm using WINE build 20050725, with the following configuration file: wine-config.txt
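The part of that configuration that actually matters for the dive computers is the serial-port mapping – the Suunto interface cable appears as a serial device, and WINE of that era mapped COM ports in ~/.wine/config. A sketch (the device node is an assumption; with a USB-to-serial adapter it would be /dev/ttyUSB0 instead):

```
;; Fragment of an old-style ~/.wine/config (2005-era WINE).
;; The device node below is an assumption.
[serialports]
"Com1" = "/dev/ttyS0"
```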

I tried running the newer Suunto Diving Manager software, version 2.1.5, and got exceptions complaining about a non-existent registry key. The software just hung during its splash screen. I struggled with it for a short while, then decided to go for an older version – 1.6.

I was able to install it just fine, but found out that I must NEVER move my mouse pointer over the sub-window titles, or the software stops accepting commands (although it can still close correctly). It took me a while, but I downloaded the information from the dive computers and was able to view it correctly, with some pointer-location limitations.

DBMail, the hard way

Saturday, September 17th, 2005

I've been using DBMail as my own personal mail server for a long while now. I've known the software ever since its 1.x release, and have been using it for its great benefits – quick, accessible, large IMAP folders (and I have over 60,000 items in there).

DBMail is a backend for server-based mail storage. It stores e-mail in a database and makes it accessible via both IMAP4 and POP3. This allows large volumes of mail, under high load and traffic, to be stored in a faster-than-plain-files mechanism. It's a great solution when you decide to use server-based storage (IMAP4, in our case), and when you have tons of e-mail you haven't deleted for the last 6 or so years. I like it.

During the last few years of upgrades, I've seen the product grow and get better. I've seen it develop to a level where it can actually be the back-end of a real production services provider, and I've heard, via the DBMail users' mailing lists, about some who actually use it that way. During those upgrades, something in my DB schema got broken. Nothing that prevented using the system, but it caused all kinds of minor side effects – in my case, subscribing to IMAP4 mailboxes was broken, and such. No show-stopper.

After consulting the DBMail people, I decided to change my schema, using a procedure they suggested: "dump" my DB into a file, "drop" it, and recreate the DB, brand new, according to their specifications. Afterwards, I'd import the dumped data into it, and be done with it.

I failed. Either there's some major difference, or something's broken in the procedure. I was able to log in, but failed to get my e-mail!

So, I restored my DB to its original state, and waited for some more spare time.

It takes a while to do this, BTW, as this DB is 17GB in size!

It took a few weeks, but I found the free time to initiate another transfer. Having failed again working at the level of the DB itself, I went for the applicative solution – stinks, but works.

It stinks because it means every client (me and my wife) has to transfer mail between mailboxes via our mail client. It stinks because it takes hours.

With the DB files as bloated as they were, I decided to create a new database server with a new, separate store. I added another MySQL server on this specific machine, running with a different socket, different port, and different store, lock and pid files. I ran an additional dbmail-imapd, my IMAP4 server, against the alternate socket, recreated the (empty) DB and its empty tables, and started populating them. It took almost a day.
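Running a second MySQL instance next to the first mostly means giving it its own copy of every path-like setting. Roughly like this (the port and all paths are illustrative, not the actual values):

```
# Sketch of an option file for a second MySQL instance,
# e.g. /etc/my-alt.cnf. Port and paths are assumptions.
[mysqld]
port     = 3307
socket   = /var/lib/mysql-alt/mysql.sock
datadir  = /var/lib/mysql-alt
pid-file = /var/lib/mysql-alt/mysqld.pid
```

The instance is then started with something like mysqld_safe --defaults-file=/etc/my-alt.cnf &, and the second dbmail-imapd is pointed at the alternate socket.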

One weird thing I noticed, but did not investigate any further, is that Thunderbird 1.0.6 on Linux did this same job faster than on Windows. The network settings of both computers are rather similar; I could not really explain it, nor did I try much further.

I almost forgot to recreate our mail aliases and addresses when recreating the rest.

After it was done, I stopped both dbmail-imapd servers and both MySQL servers, and swapped the stores between them. I re-initiated the dual configuration, and moved the mail accumulated during the 10-hour delay. Afterwards, I removed the older account and shut down the alternative mail server.

It worked flawlessly, and although slow (and I made a few mistakes, including holding both stores on the same disk array), it was a straightforward procedure. Stinks, but works.

Linux, Storage, SANs and stuff. Part one of many

Tuesday, September 13th, 2005

As part of my job as the infrastructure and Mr. Fix-it guy in my company, where it comes to Unix, Windows, HA clusters, storage, etc., I often encounter unpredictable situations.

During the last two days I’ve had to setup a Compaq Proliant ML360G2 with the following configuration (to simulate customer’s environment):


Update level 4

Specific Kernel version

Dual-port Qlogic FC HBA, with multipath access to a SAN storage.

Not that I had to look much of it up, as most of it is a rather straightforward procedure, but I've decided to share my experience and some comments in this blog, to make life easier for future attempts.

Stages went like this:

1) Entered the BIOS, and configured the machine as a Linux machine.

2) Installed RHES AS 3, release 0. That's what I had, and I didn't feel like wasting time downloading a newer release.

3) Installed the required Kernel and Kernel-Sources packages (RPMs).

4) Installed the Proliant Support Pack (PSP), version 7.11. It's what I had, and although at the time of this writing I'm aware a newer version exists (7.50, or 7.40), I did not have time to download and install it. I discovered that during its install it managed, somehow, to corrupt my RPM DB. Anyhow, it installed what it had to install.

5) Deleted /var/lib/rpm/__* and ran rpm --rebuilddb to fix the RPM DB corruption. Tested with rpm -qa – no segfaults on the way.

6) Physically installed the dual-port Qlogic HBA, model QLA2342.

7) Downloaded both the driver and the SANSurfer control and management utility.

8) Physically connected the HBAs to the fibre switch (defined VSANs, defined the storage – a FastT200, dual-ported), and managed to see the whole storage.

9) During the installation of the Qlogic driver (./qlinstall), I found it failed to compile because the kernel source tree lacked the file modversions.h under include/. The file was supposed to be there but, for some reason, wasn't. I reinstalled kernel-sources (the correct version, of course), and was able to complete the installation.

10) A new initrd was supposed to be built by the Qlogic driver's installation process. For some reason, the file was corrupted, and I had to reboot into another kernel version and rebuild the initrd.

11) SANSurfer was great fun, but it refused to define multipath. Well, it actually claimed to have done so, but it didn't work, and the system detected twice as many volumes as it actually has. I changed /etc/modules.conf to pass the driver the option "ql2xfailover=1", rebuilt the initrd, and rebooted. Not tested yet, but it seems to work, so I will test it tomorrow.
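For reference, with a 2.4 kernel the change amounts to an options line in /etc/modules.conf. A sketch – the module name (qla2300, which is what a QLA2342 uses) is from memory, so verify against lsmod:

```
# Fragment of /etc/modules.conf – enable Qlogic failover mode.
# Module name is an assumption; check with lsmod.
alias scsi_hostadapter1 qla2300
options qla2300 ql2xfailover=1
```

The initrd has to be rebuilt (mkinitrd -f) afterwards, or the option won't be seen at boot.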

I still have to understand whether the multipath functionality is handled by the Qlogic driver, or by some other in-kernel mechanism. I'm not sure yet what Linux MPT is, whether it's multipath, and whether I'd have to use it. I will check during the coming day. Meanwhile, we'll just have to make do with the Qlogic failover driver.

Update to come.