Posts Tagged ‘rhel5’

RHEL5 100% CPU with LDAP client for Active Directory

Friday, June 5th, 2009

ADS integration has been available natively since Windows 2003 R2, and in heterogeneous sites it has become the preferred method of integrating login information, with the added security benefit of using Kerberos wherever possible.

The following guide is a very good one, and was the source of the information I used throughout my work integrating Linux into ADS. So far it has worked quite well for RHEL4.

RHEL5, on the other hand, is a different story. While it can work, and LDAP queries return sensible results, it is all too common for a process to spin at 100% CPU while doing absolutely nothing.

My research brought me to the following conclusions:

  • The high CPU utilization is caused by something RHEL5-specific (the same setup works correctly on RHEL4)
  • The high CPU utilization is caused by the nss_ldap module.
  • It happens with every NSS-related service. nscd does not help, and climbs to 100% CPU as well.
  • Tracing the nss_ldap module shows, after a very long time (if ever), that the session to the ADS server has somehow hung.

You can see an example of this bug in this specific bugzilla entry.

A quick and effective workaround was found after examining the differences between the configuration directives for RHEL4 and RHEL5. Forcing LDAP version 2 instead of 3 (which is the default for the RHEL5 LDAP client, as it attempts the highest version possible) results in correct behavior. The line in /etc/ldap.conf is:

ldap_version 2
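For context, a minimal sketch of how the directive might sit in /etc/ldap.conf; the host and base values below are placeholders, not taken from the original setup:

# /etc/ldap.conf snippet with placeholder host and base values
host dc1.example.com
base dc=example,dc=com
ldap_version 2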

FYI

Poor man’s load balancing

Monday, December 8th, 2008

I was asked to set up a “poor man’s” load balancing scheme. The server accesses various HTTP servers with GET requests, and the customer fears that the server’s IP will get blocked. To work around this, or at least to minimize the problem, the customer has purchased three IP addresses.

I assigned all three addresses to the server, and looked into smart routing as a solution. I did not want to capture all outbound communication, only HTTP (port 80). Also, the system is CentOS, which means only a few iptables modules are available, so the “random” match module is not an option. Compiling a new kernel for a server across an ocean didn’t sound like the best idea, so I worked with the available tools.

With net.ipv4.conf.default.rp_filter set to 0 in /etc/sysctl.conf, and with real Internet IPs (this won’t work behind NAT), I added the following rules to the mangle table:

iptables -t mangle -N CUST_OUTPUT
iptables -t mangle -A CUST_OUTPUT -o ! lo -p tcp -m recent --remove -j MARK --set-mark 1
iptables -t mangle -A OUTPUT -o ! lo -p tcp --dport 80 -m state --state NEW -m recent --update --seconds 60 --hitcount 6 -j CUST_OUTPUT
iptables -t mangle -A OUTPUT -o ! lo -p tcp --dport 80 -m state --state NEW -m recent --update --seconds 60 --hitcount 4 -j MARK --set-mark 3
iptables -t mangle -A OUTPUT -m state --state NEW -m mark --mark 3 -j ACCEPT
iptables -t mangle -A OUTPUT -o ! lo -p tcp --dport 80 -m state --state NEW -m recent --update --seconds 60 --hitcount 2 -j MARK --set-mark 2
iptables -t mangle -A OUTPUT -m state --state NEW -m mark --mark 2 -j ACCEPT
iptables -t mangle -A OUTPUT -o ! lo -p tcp --dport 80 -m state --state NEW -m recent --set

To the nat table, I added the following rules:

iptables -t nat -A POSTROUTING -m mark --mark 2 -j SNAT --to 1.1.1.2
iptables -t nat -A POSTROUTING -m mark --mark 3 -j SNAT --to 1.1.1.3
iptables -t nat -A POSTROUTING -m mark --mark 1 -j SNAT --to 2.3.4.5

(I have replaced the real customer’s IPs with the junk addresses above.) Of course, all three IP addresses are accessible from the Internet and fully routable.
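For completeness, here is a minimal sketch of binding the two extra addresses to the server as secondary IPs. The interface name and the /24 prefix are assumptions, and the addresses are the same junk placeholders as above:

# add the extra source addresses (interface name and prefix are assumptions)
ip addr add 1.1.1.2/24 dev eth0
ip addr add 1.1.1.3/24 dev eth0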

After a short run, I saw the following lines when running iptables -t nat -L -v:

Chain POSTROUTING (policy ACCEPT 39055 packets, 2495K bytes)
 pkts bytes target     prot opt in     out     source               destination
  143  8580 SNAT       all  --  any    any     anywhere             anywhere            MARK match 0x2 to:1.1.1.2
  144  8640 SNAT       all  --  any    any     anywhere             anywhere            MARK match 0x3 to:1.1.1.3
   72  4320 SNAT       all  --  any    any     anywhere             anywhere            MARK match 0x1 to:2.3.4.5

Statistically, this is quite OK. Mark 0x1 forces the same route as mark 0x0 (unmarked traffic), so the result is rather balanced.

Works like a charm, for now 🙂

Persistent raw devices for Oracle RAC with iSCSI

Saturday, December 6th, 2008

If you’re into Oracle RAC over iSCSI, you should be rather content: this configuration is simple and supported. However, with some iSCSI target devices, device order and naming are not consistent between the Oracle nodes.

The simple solutions are to use OCFS2 labels or ASM; however, if you decide to place your voting disks and cluster registry on raw devices, you will face a problem.

iSCSI on RHEL5:

There are a few guides around, but the simple method is this:

  1. Configure mapping in your iSCSI target device
  2. Start the iscsid and iscsi services on your Linux
    • service iscsi start
    • service iscsid start
    • chkconfig iscsi on
    • chkconfig iscsid on
  3. Run “iscsiadm -m discovery -t st -p target-IP”
  4. Run “iscsiadm -m node -L all”
  5. Edit /etc/iscsi/send_targets and add the target’s IP address to it, so that the login happens automatically on restart

You then need to partition the devices according to your requirements.
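As an illustration only, a single partition spanning a whole disk could be created like this; /dev/sdc is a placeholder device name, and you should adapt the layout to your actual requirements:

# create one primary partition covering the entire (placeholder) disk
printf 'n\np\n1\n\n\nw\n' | fdisk /dev/sdc
# ask the kernel to re-read the partition table
partprobe /dev/sdc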

If you set up OCFS2 volumes for the voting disks and the cluster registry, there should be no problem as long as you use labels; however, if you require raw volumes, you need to configure udev to create the raw devices for you.

On a system with persistent disk naming, you can simply follow this process; on a system where disk names change (the names are different on every reboot), the process becomes a bit more complex.

First, detect the scsi_id of each device. While device names might change between reboots, SCSI IDs do not.

scsi_id -g -u -s /block/sdc

Replace sdc with the device name you are looking for. Notice that /block/sdc is relative to sysfs, i.e. it refers to /sys/block/sdc.
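If you want to collect the IDs of all SCSI disks at once, a small loop along these lines should work, assuming the RHEL5 scsi_id syntax shown above:

# print the scsi_id of every sd* disk (sysfs-relative path, as above)
for dev in /sys/block/sd*; do
    echo "$(basename $dev): $(scsi_id -g -u -s /block/$(basename $dev))"
done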

Use the scsi_id obtained above to create the raw devices. Edit /etc/udev/rules.d/50-udev.rules, find line 298, and add a line below it with the following contents:

KERNEL=="sd*[0-9]", ENV{ID_SERIAL}=="14f70656e66696c000000000004000000010f00000e000000", SYMLINK+="disk/by-id/$env{ID_BUS}-$env{ID_SERIAL}-part%n", ACTION=="add", RUN+="/bin/raw /dev/raw/raw%n %N"

Things to notice:

  1. The ENV{ID_SERIAL} value is the scsi_id obtained earlier
  2. This line will create a raw device named rawN (where N is the partition number) under /dev/raw for each partition
  3. If you want to differentiate between two (or more) disks, change the name from raw to an adequate name, like “crsa”, “crsb”, etc. For example:

KERNEL=="sd*[0-9]", ENV{ID_SERIAL}=="14f70656e66696c000000000005000000010f00000e000000", SYMLINK+="disk/by-id/$env{ID_BUS}-$env{ID_SERIAL}-part%n", ACTION=="add", RUN+="/bin/raw /dev/raw/crs%n %N"

Following these changes, run “udevtrigger” to reload the rules. Be advised that “udevtrigger” might reset the network connection.
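To verify the result, something along these lines should do; the raw device names are whatever you used in the rules above:

# query all current raw device bindings
raw -qa
# check that the expected nodes exist
ls -l /dev/raw/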