Posts Tagged ‘Unix’

x86 Scale Up

Thursday, September 11th, 2008

I was introduced to a very cool software/hardware combination yesterday. It was, without exaggeration, one of the coolest things I have seen in a while.

As you may know, x86 has an issue with scaling up: the architecture and its price do not justify scaling up to tens and hundreds of CPUs. The multi-core technology introduced in the last few years has made a four-way server seem trivial today, where in the past it was a high-performance server for large (and expensive) data centers. It is very common today to purchase an eight-way server at the price of a mere commodity server – all thanks to multi-core technology.

However, compared to the large Unix data centers, where 64 and 128 CPUs are rather common (I will emphasize – the large Unix data centers), x86 could not rival any many-way server under a large load, even though per-core it is nowadays somewhat more powerful. The common solution with x86 was to “scale out” – add more cheap servers and manage the workload in a more distributed way. Yes, you pay with communication overhead, but it can still be the cheaper option.

With distributed load sharing came the illness of communication latency. Myrinet, 10Gb/s Ethernet and InfiniBand were common, yet expensive (as it was a niche market), solutions, and still – for distributing high loads they were well worth it. Still, a large scaled-up server based on x86 was nowhere to be found.

No more. With ScaleMP’s aggregation you can “bundle” a set of servers over an InfiniBand link into a single huge many-way, huge-RAM server at a relatively low cost.

Think about it: you can purchase your current server, for example an eight-core server (two quad-core CPUs), and in time scale it up into a more powerful server (add another two quad-core CPUs), add more RAM, more network interfaces, or whatever.

This is not as fast as the IBM x3950 board interconnect (excuse me for not knowing the exact name), so it is not ideal for databases or systems which tend to generate a lot of cache misses; however, for large (actually – very large) SMP systems it could be great. It gives any company which feels its current server might not be enough the safety and assurance that it can actually scale up, keeping the same server, by adding more CPUs and more RAM at any time.

It is supported, as far as I know, only on Linux for the time being. It closes some of the distance between the large Unix machines and modern Linux, for a fraction of the price.

I liked it.

Timezone for Israel on HP-UX 11i and above

Thursday, August 21st, 2008

While Linux vendors tend to maintain and publish full time zone and daylight saving (DST) information, most legacy Unix vendors do not – especially when it comes to a country as small as Israel.

The following solution was tested for HP-UX B.11.31, and would probably work for all 11i versions.

The Israel time zone is called IST; the Israel daylight saving time zone is called IDT.

The quick and dirty:

Edit /usr/lib/tztab and append the following lines at the bottom:

# Israel daylight savings
# Added by Ez-Aton. years 2008 to 2011 only. Simple and ugly.
0 3 28 3 2008 5 IDT-3
0 1 5 10 2008 0 IST-2
0 3 27 3 2009 5 IDT-3
0 1 27 9 2009 0 IST-2
0 3 26 3 2010 5 IDT-3
0 1 12 9 2010 0 IST-2
0 3 1 4 2011 5 IDT-3
0 1 2 9 2011 0 IST-2
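For these lines to be usable, the tztab(4) format expects each block of transition rules to sit under a label line whose text matches the TZ value you will export. A minimal sketch, assuming the usual Israeli label IST-2IDT (adjust it if your tztab already carries a different name):

IST-2IDT
0 3 28 3 2008 5 IDT-3
0 1 5 10 2008 0 IST-2
# ... and so on for the remaining year pairs listed above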

Save (this is a write-protected file, so force saving) and then edit /etc/TIMEZONE to include the following TZ directive:

export TZ
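The value of TZ itself has to name the tztab entry described above. A minimal sketch of the relevant /etc/TIMEZONE lines, assuming the IST-2IDT label:

TZ=IST-2IDT
export TZ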

Assuming you sync your time using NTP, all future logins will have the correct Israeli date and daylight saving time.

For further information, check man tztab and man environ.

Preparing your own autoyast with custom Suse CD

Friday, August 17th, 2007

Suse, for some reason, has to be over-complicated. I don’t really know why, but the time required to perform simple tasks under Suse is longer than on any other Linux distro, and can be compared only to the legacy Unix systems I am familiar with.

When it comes to more complicated tasks, it gets even worse.

I created an autoinst.xml today. Generally speaking, it installs a SLES 10.1 system from scratch. Luckily, I was able to test it in a networked environment, so I helped the environment just a bit by not throwing away tons of CDs.

Attached is my autoinst.xml. Notice that the root user has the password 123456, and that this file is based on a rather default selection.

Interesting, though, is my use of the <ask> directives, which allow me to prompt for a manual IP address, netmask, gateway, etc. during the 2nd phase of the installation (see the sketch below). sles10.1-autoinst.xml
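For illustration, a single <ask> entry looks roughly like the sketch below; the element names follow the AutoYaST ask-list schema, while the question, path and default value are made up for this example and are not copied from my file:

<general>
  <ask-list config:type="list">
    <ask>
      <!-- "cont" asks during the 2nd (continuation) phase of the installation -->
      <stage>cont</stage>
      <question>Hostname for this machine</question>
      <!-- profile path whose value will be replaced by the answer -->
      <path>networking,dns,hostname</path>
      <default>sles-test</default>
    </ask>
  </ask-list>
</general>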

This is only a small part. Assuming you want to ship this autoinst.xml with your Suse CDs, as a stand-alone distribution, you need to do the following:

1. Loop-mount the first CD:

mount -o loop /home/ezaton/ISO/SLES10-SP1-CD1.iso /mnt/temp

2. For quicker response, if you have the required RAM, you can create a ramdisk. It will surely be fast:

mkdir /mnt/ram

mount -t ramfs -o size=700M none /mnt/ram

3. Copy the data from the CD to the ramdisk:

cp -R /mnt/temp/* /mnt/ram/

4. Add your autoinst.xml to the new cd root:

cp autoinst.xml /mnt/ram/

5. Edit the required isolinux.cfg parameters. On SUSE 10 it resides in <CD-ROOT>/boot/i386/loader/isolinux.cfg; in our case, CD-ROOT is /mnt/ram.

Add the following text to the "linux" append line:

autoyast=default install=cdrom
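The resulting "linux" stanza should then look roughly like this (the kernel, initrd and existing parameters shown are typical of the SLES 10 loader and may differ slightly on your media):

label linux
  kernel linux
  append initrd=initrd splash=silent showopts autoyast=default install=cdrom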

6. Generate a new ISO:

cd /mnt/ram

mkisofs -o /tmp/new-SLES10.1-CD1.iso -b boot/i386/loader/isolinux.bin -c boot/i386/loader/boot.cat -r -T -J -pad -no-emul-boot -boot-load-size 4 -boot-info-table /mnt/ram

7. When done, burn your new CD, and boot from it.

8. If everything is ok, umount /mnt/ram and /mnt/temp, and you’re done.

Note – It is very important to use the Rock Ridge and Joliet extensions on the new CD, or else file names will be in 8.3 format and the SLES installation will not work.
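If you want to verify the extensions before burning, isoinfo (shipped alongside mkisofs in cdrtools) can report them; a quick check along these lines should show both Joliet and Rock Ridge signatures:

isoinfo -d -i /tmp/new-SLES10.1-CD1.iso | egrep -i 'joliet|rock ridge'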

Power Supply failure – the wrong type of failure

Sunday, March 5th, 2006

I lost an external DAS (Direct Attached Storage) today. Not lost as in could not find it, but lost due to a power supply failure. I was at home and got a phone call saying that all of the company’s Unix storage, which sits on a DAS holding two 72GB HDDs in a mirror (DiskSuite on Solaris 8), was unavailable. Remotely, I could not reach it – an I/O error on every request (ls, cd, etc.). I had to get to the site. There I noticed lots of error messages generated by the kernel when trying to access the disks. After a lot of games (I will not describe the whole procedure, but it included replacing the external SCSI cable, disconnecting one of the disks, etc.), I replaced the DAS module and put the old disks in it, making sure I used the same LUNs the disks had before (for DiskSuite’s sake).

Conclusion – a power supply failure, but not an absolute one. The lights stayed on and the disks could spin up, but under load the power supply failed to give the disks the full power they required, resulting in read/write errors. Working, but not quite.

Tracking down the problem and hacking in a different SCSI DAS module took almost two hours of my life. I hope never to encounter such a problem again.

First message

Tuesday, July 19th, 2005

Well, it’s been a while since I first considered playing with a blog of my own. I never quite found the convincing reason that would pull me out of my chair, and out of my not-doing-a-single-thing-for-a-whole-afternoon-while-browsing-the-net mode, into the active part of installing my own blog.

Well, I did just ten minutes ago.

Why? Because during my tech adventures (as much as they might seem adventures to anyone), I complete tasks or do things which I have nowhere to write down or update, and thus don’t get to keep – neither for myself, for later reference, nor for others who might bump into the same problems I have.

Who am I?

To keep this blog bloggish enough, there is no point in mentioning my name. You can call me Ez-Aton, or Ez for short, if you feel like calling me at all. It’s not that I am hiding, but there is a point in a person’s life where he wants to use the little anonymity the web offers – the little of it which still exists nowadays, anyhow.

Well, I work as a Unix SysAdmin, a Linux hobbyist and SysAdmin, a Windows SysAdmin, with some experience on Mac, etc. I manage a few dozen *nix machines at work – namely Linux, Solaris, HP-UX and AIX – plus some Windows, in a rather complex environment. I don’t claim to be an expert at any of it, oh no, but I do claim to know something about everything. And with Google being my friend, I can manage my way around the more common obstacles. I learn fast, and I almost never make the same mistake twice (unless it’s on purpose, to gain something), so, uneducated (no degree, no official courses), I manage to get a very complex set of systems up and running. Not perfect, but how many people do you know who can get such things (as you might find in my blog at other times) up and running?

Oh, and I live in Israel, which is a small country, but one can get used to it.

So I hope you find the info you need here, or at least enjoy wasting a few more minutes of your life, where time flies in front of the computer, browsing and searching and doing only a little. Yep, a techie’s life.