Archive for May, 2006

HP ML110 G3 and Linux CentOS 4.3 / RHEL 4 Update 3

Tuesday, May 30th, 2006

Using the same installation server as before (my laptop), I was able to install Linux CentOS 4.3 on my new HP ML110 G3, with the addition of HP's drivers for the Adaptec SATA RAID controller.

Using just the same method as before, when I installed CentOS 4.3 on an IBM x306, only this time with HP's drivers, I was able to do the job easily.

To recap, this is the process of preparing the setup:

(A note – wherever I say to replace an existing file, I always recommend keeping the older one aside for a rainy day)

1. Obtain the floppy image of the drivers and put it somewhere accessible, such as an easily reachable NFS share.

2. Obtain the PXE kernel image of CentOS 4.1 or RHEL 4 Update 1, and replace your PXE kernel with it (i.e. downgrade it).

3. Keep the driver's RPM and the CentOS 4.1 / RHEL 4 Update 1 kernel RPM handy on your NFS share.

4. Do the same for the PXE initrd.img file.

5. Obtain the /CentOS/base/stage2.img file from CentOS 4.1 or RHEL 4 Update 1 (depending on which distribution you're installing, of course), and replace your existing one with it.

6. I assume your installation media is actually NFS, so your boot command should be something like: linux dd=nfs:NAME_OF_SERVER:/path/to/NFS/Directory

It should work like a charm. Notice that you need to use the 64-bit kernel with the 64-bit driver, and likewise for 32-bit. It won't work otherwise, of course.
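For example, the matching pxelinux.cfg entry might look roughly like this. The vmlinuz and initrd.img names are assumptions, standing for the downgraded CentOS 4.1 / RHEL 4 Update 1 kernel and initrd you copied into your TFTP root:

default centos-hp
label centos-hp
    kernel vmlinuz
    append initrd=initrd.img dd=nfs:NAME_OF_SERVER:/path/to/NFS/Directory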

After you've finished the installation, *before the reboot*, press Ctrl+Alt+F2 to switch to a text console, and do the following:

1. Copy your kernel RPM to the new system's /root directory: cp /mnt/source/prepared_dir/kernel….rpm /mnt/sysimage/root/

2. Do the same for the HP driver RPM.

3. Chroot into the new system: chroot /mnt/sysimage

4. Install the RPMs you've put in /root (with --force if required, but *never* try that first). First the kernel, then the HP driver.

5. The HP driver RPM will fail its post-install stage. That's OK. Rename the existing /boot/initrd-2.6.9-11.ELsmp aside (or the non-SMP one, depending on your installed kernel).

6. Verify you have an alias for the new storage device in your /etc/modprobe.conf (see the sketch just after this list).

7. Run mkinitrd /boot/initrd-2.6.9-11.ELsmp 2.6.9-11.ELsmp (or the non-SMP version, depending on your kernel).

8. Manually edit your /etc/grub.conf to suit your needs.
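To illustrate steps 5 to 7, the sequence looks roughly like this. The module name in modprobe.conf is a placeholder, since it depends on which module the HP driver RPM actually installs:

# /etc/modprobe.conf should contain something like
# (replace the placeholder with the module provided by the HP driver RPM)
alias scsi_hostadapter <hp_adaptec_module>

# move the old initrd aside and build a new one for the installed kernel
mv /boot/initrd-2.6.9-11.ELsmp /boot/initrd-2.6.9-11.ELsmp.orig
mkinitrd /boot/initrd-2.6.9-11.ELsmp 2.6.9-11.ELsmp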

Note – I do not like GRUB. Actually, I find it lacking in many ways, so I install LILO from the i386 version of the distro (not the 64-bit one, since it's not there). Later on, you can rename /etc/lilo.conf.anaconda to /etc/lilo.conf and work with it. Don't forget to run /sbin/lilo after every change to this file.

Hard Freeze when Using Firefox, Unison-GTK, and some other GTK apps

Friday, May 26th, 2006

Hard freezes are unpleasant at best. They also prevent you from tracking down the source of the problem. You speculate, based on the "family" of applications you encounter problems with, and try to reach some resolution.

At first it was Firefox. I removed it and installed Galeon instead. That didn't help. I left it at that and started using Konqueror, which worked correctly.

Things worked correctly for a while, until one day, when I opened Unison-GTK to sync my folders with my desktop, I got another hard freeze.

Based on a few assumptions, I started searching for a solution or at least a workaround. I encountered this link, which led me to believe the problem was GTK+ 1.0 related. That link, however, is rather old, and seems irrelevant to the cause, as my problem started only a few weeks ago.

A better-defined search led me to this Ubuntu bug description, which suggested going back to 16-bit colors (which I had been using until just a month or so ago).

At first glance, the problem is solved. However, this is no more than a workaround, as I am currently limited to 16-bit colors, and because I'm stuck with the Radeon Mobility M6 LY, which is the main cause of all this. I hope to dump these buggy cards with my next laptop.
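For the record, the workaround boils down to something like this in /etc/X11/xorg.conf; the identifier names below are examples and should match the sections already present in your file:

Section "Screen"
    Identifier    "Default Screen"
    Device        "ATI Radeon Mobility"
    Monitor       "Generic Monitor"
    DefaultDepth  16
EndSection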

My current xserver-xorg-video-ati package version is 6.5.8.0-1 (Debian).

Linux IPTables flow

Friday, May 26th, 2006

IPTables can be tricky. The concept of chains pointing to chains pointing to chains can get complicated.

However, it is important to understand the initial flow, the basic "which chain points where", and the general concept: it makes NAT, DNAT, or even just knowing where to put a single rule much easier later on. This is especially true if you are to utilize your Linux box as a router, but even if not, it helps you know how to defend it.

So, here’s an image describing the common relationship between the predefined chains in Linux IPTables.

IPTables default chains relationship
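Since a picture is not always handy, here is a simplified text rendition of the same flow (the mangle table and a few details are omitted):

Packet arriving from the network:
    PREROUTING (nat)  ->  routing decision
        destined for this host     ->  INPUT (filter)    ->  local process
        destined for another host  ->  FORWARD (filter)  ->  POSTROUTING (nat)  ->  out

Packet generated locally:
    local process  ->  OUTPUT (nat, filter)  ->  POSTROUTING (nat)  ->  out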

Web server behind a web server

Friday, May 26th, 2006

I've acquired a new server which is to supply services to a certain group. In most cases, I would have used the PREROUTING chain of IPTables on my router, based on a rule such as this:

iptables -t nat -I PREROUTING -i <external_Interface_name> -p tcp -s <Some_IP_address> --dport 80 -j DNAT --to-destination <New_server_internal_IP>:80

I can do this trick for any other port just as well; however, I already have one web server inside my network, and I cannot know the source IPs of my special visitors in advance. Tough luck.

Falling back to a more application-based solution, I can use my existing Apache server, which already listens on port 80 and already gets the requests, using the mod_proxy directives and name-based virtual hosts.

Assuming the name of the new server should be domain.com, and that the DNS entries are correct, I would add a directive like this to my vhosts.conf (or whatever other file contains your Apache2 virtual host configuration):

<VirtualHost *:80>
    ServerName domain.com
    ErrorLog logs/domain.com-error_log
    CustomLog logs/domain.com-access_log common
    ProxyRequests Off
    <Proxy *>
        Order deny,allow
        Allow from all
    </Proxy>

    ProxyPass / http://<Internal_Server_IP_or_Name>/
    ProxyPassReverse / http://<Internal_Server_IP_or_Name>/
</VirtualHost>

I'm not absolutely sure about the need for the logs, but they did help me spot a few issues, such as the internal server being down. I can see that the internal server is being accessed, and that it's working just fine.

A note – if this is the first name-based virtual host you've added, you will need to "readjust" your entire configuration to name-based virtual hosts. Name-agnostic and name-based hosts cannot reside in the same IP configuration. It just won't work.
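For Apache 2.0/2.2, that "readjustment" mostly means adding a NameVirtualHost line matching the address used by your VirtualHost directives, and turning what used to be the main server into the first (default) virtual host. A rough sketch, with placeholder names and paths:

NameVirtualHost *:80

<VirtualHost *:80>
    # placeholders: your existing main site and its DocumentRoot
    ServerName www.main-site.example
    DocumentRoot /var/www/html
</VirtualHost>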

Transparently Routing / Proxying Information

Monday, May 15th, 2006

I was required to set up a transparent proxy. The general idea was to follow a layout like the one in the diagram here:

The company did not want any traffic (HTTP, HTTPS, FTP, whatever) to pass directly through the firewall from the internal network to the external network. If we can move it all through some sort of proxy, the general idea says, the added security is well worth it.

Getting an initial configuration to work is rather simple. For port 80 (HTTP), you need do no more than install Squid with the transparent-proxy directives included (they can be found here, for example, and on dozens of other web sites), and make sure the router redirects all outbound HTTP traffic to the Linux proxy.
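For reference, with the Squid of that era (2.5), the transparent-proxy directives and the redirect rule amount to roughly the following. The interface name is an example and assumes the proxy box sits in the routing path; later Squid versions replaced these directives with "http_port 3128 transparent":

# squid.conf (Squid 2.5 style) transparent proxy directives
http_port 3128
httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_with_proxy on
httpd_accel_uses_host_header on

# on the proxy box itself, send HTTP arriving from the LAN (eth1) to Squid
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-ports 3128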

It worked like a charm. A few minor tweaks, and caching was working well.

It didn't work when it came to other protocols, though. It appears Squid cannot transparently redirect SSL requests (I did not expect it to actually cache the information). The whole idea of SSL is to prevent the possibility of a man-in-the-middle attack, so Squid cannot be part of the point-to-point communication unless the browser directs it to be, using the CONNECT command. This command is sent ONLY if the client is aware of the fact that there is a proxy on the way, i.e. configured to use it, which contradicts the whole idea of a transparent proxy.
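For illustration, this is the request a proxy-aware browser sends when it wants an HTTPS site through the proxy; a transparently intercepted client never sends it, it simply opens a TLS connection straight to the destination's IP:

CONNECT secure.example.com:443 HTTP/1.0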

When that failed, I came up with the next idea: let the Linux machine route the forwarded packets onwards, acting as a self-contained NAT server. If it can translate all requests so they appear to come from it, I will be able to redirect all traffic through it. It did not work; after digging deep into the IPTables chains and adding logging (iptables -t nat -I PREROUTING -j LOG --log-prefix "PRERouting: "), I discovered that although the PREROUTING chain accepted the packets, they never reached the FORWARD or POSTROUTING chains…
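By "a self-contained NAT server" I mean, roughly, plain forwarding plus masquerading, something like this (the interface names are examples, eth1 internal and eth0 external):

echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
# the debugging rule mentioned above
iptables -t nat -I PREROUTING -j LOG --log-prefix "PRERouting: "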

The general conclusion was that the packets were destined for the Linux machine itself. The firewall/router had redirected all packets to the Linux server not by altering its routing table to point at the Linux server as the next hop, but by rewriting the destination of the packets themselves. It meant that all the redirected packets were addressed to the Linux machine.

Why did HTTP succeed in passing through the transparent proxy? Because HTTP requests carry the target name (the web address, in the Host header) in their payload, and not only in the packets' IP headers. This is what allows "name-based shared hosting", and it is what makes a transparent proxy possible at all.
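In other words, the proxy can read the intended destination straight out of the request it intercepts:

GET /index.html HTTP/1.1
Host: www.example.com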

There is no such luck with other protocols, I’m afraid.

The solution in this case can be achieved via a few methods:

1. Use a non-transparent proxy. Set the clients to use it via some script, which will let them avoid using it when outside the company. Combined with a transparent HTTP proxy, it can block unwanted access.

2. Use stateful inspection on any allowed outbound packets, except HTTP, which will be redirected to the proxy server transparently.

3. Put the Linux machine directly in the path to the outside, as an additional line of defence.

4. If the firewall/router is capable of it, set up protocol-based routing. If you merely route outbound packets for a given port differently, you do not rewrite the packets' destination.

I tend to choose option 1, as it allows HTTP access to keep working silently, and prevents unconfigured clients from accessing disallowed ports. Such a set of rules could look something like this (the proxy listens on port 80):

1. From *IN* to *LINUX* outbound to port 80, ALLOW

2. From *IN* to *INTERNET* outbound to port 80 REDIRECT to Linux:80

3. From *IN* to *INTERNET* DENY

Clients with proxy settings defined will work just fine. Clients without proxy settings will not be able to access HTTPS, FTP, etc., but will still be able to browse the regular web.
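If the firewall/router in front happens to be a Linux box as well, those three rules translate roughly into the following sketch; eth1 is the internal interface, eth0 the external one, the proxy address is a placeholder, and the proxy's own outbound access (and any return traffic) still has to be allowed separately:

# 1. internal clients may reach the proxy on port 80
iptables -A FORWARD -i eth1 -d <Linux_proxy_IP> -p tcp --dport 80 -j ACCEPT
# 2. any other outbound HTTP is redirected to the proxy
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j DNAT --to-destination <Linux_proxy_IP>:80
# 3. everything else from inside to outside is denied
iptables -A FORWARD -i eth1 -o eth0 -j DROP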

In all these cases, control over the allowed URLs and destinations is in the hands of the local IT team.