Posts Tagged ‘Linux’

Extracting/Recreating RHEL/Centos6 initrd.img and install.img

Tuesday, October 1st, 2013

A quick note about extracting and recreating RHEL6 or Centos6 (and their derivatives) installation media components:

Initrd:

Extract:

mv initrd.img /tmp/initrd.img.xz
cd /tmp
xz --format=lzma initrd.img.xz --decompress
mkdir initrd
cd initrd
cpio -ivdum < ../initrd.img

Archive (after you applied your changes):

cd /tmp/initrd
find . | cpio -o -H newc | xz -9 --format=lzma > ../new-initrd.img

/images/install.img:

Extract:

mount -o loop install.img /mnt
mkdir /tmp/install.img.dir
cd /mnt ; tar cf - --one-file-system . | ( cd /tmp/install.img.dir ; tar xf - )
umount /mnt

Archive (after you applied your changes):

cd /tmp
mksquashfs install.img.dir/ install-new.img

Additional note for Anaconda installation parameters:

I did not test it, however there is a boot flag called stage2= which should point the installer at a different install.img file, instead of the hardcoded one. I don't know if it will accept /images/install-new.img as its value, but it can be a good place to start.
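If it does work that way, the appended boot line might look something like this (untested; the hd:LABEL= syntax is my assumption, borrowed from Anaconda's repo= flag):

label custom
  kernel vmlinuz
  append initrd=initrd.img stage2=hd:LABEL=MYCD:/images/install-new.img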

One more thing:

Make sure that the vmlinuz and initrd file names used for any custom boot entries in $CDROOT/isolinux do not exceed the 8.3 format. Longer names didn't work for me. I assume (without any further checks) that this is an isolinux limitation.
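For example, a custom entry with 8.3-safe names (the label and file names here are hypothetical):

label rescue2
  kernel vmlinuz2
  append initrd=initrd2.img

A name like initrd-custom.img, on the other hand, exceeds the 8.3 limit and would fail.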

XenServer – increase LVM over iSCSI LUN size – online

Wednesday, September 4th, 2013

The following procedure was tested by me, and was found to be working. The version of XenServer I am using in this particular case is 6.1; however, I believe this method is generic enough to work on every version of XS, assuming you're using iSCSI and LVM (that is - not NetApp, CSLG, NFS and the likes). It might act as a general guideline for Fibre Channel communication, but this was not tested by me, and thus - I have no idea how it will work. It should work with some modifications when using multipath; regarding multipath, you can find in this particular blog some notes on increasing multipath disks. Check the comments too - they might offer some better and simplified way of doing it.

So - let's begin.

First - increase the size of the LUN through the storage. For NetApp, it involves something like:

lun resize /vol/XenServer/luns/SR1.lun +1t

You should always make sure your storage volume, aggregate, raid group, pool or whatever is capable of holding the data, or - if using thin provisioning - that a well tested monitoring system is available to alert you when running low on storage disk space.

Now, we should identify the LUN. From now on - every action should be performed on all XS pool nodes, one after the other.

cat /proc/partitions

We should keep the output of this command somewhere. We will use it later on to identify the expanded LUN.

Now - let's scan for storage changes:

iscsiadm -m node -R

Now, running the previous command again will produce a slightly different output. We can now identify the modified LUN:

cat /proc/partitions

We should increase it in size. XenServer uses LVM, so we should harness it to our needs. Let's assume that the modified disk is /dev/sdd.

pvresize /dev/sdd
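You can verify that the PV actually grew (a quick sanity check; not strictly part of the procedure):

pvs /dev/sdd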

After completing this task on all pool hosts, we should run the sr-scan command, either through the CLI or through the GUI. When the scan operation completes, the new size will show.
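Through the CLI, this would be something like the following (fill in your own SR UUID; the first command just helps find it):

xe sr-list type=lvmoiscsi params=uuid --minimal
xe sr-scan uuid=<SR-UUID>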

Hope it helps!

Juniper NetworkConnect (NC) and 64bit Linux

Tuesday, June 25th, 2013

Due to a major disk crash, I have had to use my ‘other’ computer for VPN connections. It meant that I had to prepare it for the operation. I attempted to login to a Juniper-based SSL-VPN connection; however, I got a message saying that my 64bit Java was inadequate. The error message included a link to a Juniper KB article, to which I must link here (remembering how I have had to search for possible solutions in the past).

The nice thing about this solution is that it does not replace your default Java version on the system - which was always a problem, as I was using Java for various purposes - but rather hooks into the (update-)alternatives list, and makes use of the correct Java version.

Juniper did it right this time!

Oh – and the link to their KB

And to Oracle Java versions, to make life slightly easier for you. You will need Oracle login, however (you can register for free).

Target-based persistent device naming

Saturday, June 22nd, 2013

When connecting Linux to a large array of SAS disks (JBOD), udev creates default persistent names in /dev/disk/by-*. These names are based on LUN ID (all disks take lun0 by default), and on path, which, for a pure SAS bus, includes the PWWN of the disks. An example of such naming would look like this (slightly trimmed for ease of view):

/dev/disk/by-id:
scsi-35000c50055924207 -> ../../sde
scsi-35000c50055c5138b -> ../../sdd
scsi-35000c50055c562eb -> ../../sda
scsi-35000c500562ffd73 -> ../../sdc
scsi-35001173100134654 -> ../../sdn
scsi-3500117310013465c -> ../../sdk
scsi-35001173100134688 -> ../../sdj
scsi-35001173100134718 -> ../../sdo
scsi-3500117310013490c -> ../../sdg
scsi-35001173100134914 -> ../../sdh
scsi-35001173100134a58 -> ../../sdp
scsi-3500117310013671c -> ../../sdm
scsi-35001173100136740 -> ../../sdl
scsi-350011731001367ac -> ../../sdi
scsi-350011731001cdd58 -> ../../sdf
wwn-0x5000c50055924207 -> ../../sde
wwn-0x5000c50055c5138b -> ../../sdd
wwn-0x5000c50055c562eb -> ../../sda
wwn-0x5000c500562ffd73 -> ../../sdc
wwn-0x5001173100134654 -> ../../sdn
wwn-0x500117310013465c -> ../../sdk
wwn-0x5001173100134688 -> ../../sdj
wwn-0x5001173100134718 -> ../../sdo
wwn-0x500117310013490c -> ../../sdg
wwn-0x5001173100134914 -> ../../sdh
wwn-0x5001173100134a58 -> ../../sdp
wwn-0x500117310013671c -> ../../sdm
wwn-0x5001173100136740 -> ../../sdl
wwn-0x50011731001367ac -> ../../sdi
wwn-0x50011731001cdd58 -> ../../sdf

/dev/disk/by-path:
pci-0000:03:00.0-sas-0x5000c50055924206-lun-0 -> ../../sde
pci-0000:03:00.0-sas-0x5000c50055c5138a-lun-0 -> ../../sdd
pci-0000:03:00.0-sas-0x5000c50055c562ea-lun-0 -> ../../sda
pci-0000:03:00.0-sas-0x5000c500562ffd72-lun-0 -> ../../sdc
pci-0000:03:00.0-sas-0x5001173100134656-lun-0 -> ../../sdn
pci-0000:03:00.0-sas-0x500117310013465e-lun-0 -> ../../sdk
pci-0000:03:00.0-sas-0x500117310013468a-lun-0 -> ../../sdj
pci-0000:03:00.0-sas-0x500117310013471a-lun-0 -> ../../sdo
pci-0000:03:00.0-sas-0x500117310013490e-lun-0 -> ../../sdg
pci-0000:03:00.0-sas-0x5001173100134916-lun-0 -> ../../sdh
pci-0000:03:00.0-sas-0x5001173100134a5a-lun-0 -> ../../sdp
pci-0000:03:00.0-sas-0x500117310013671e-lun-0 -> ../../sdm
pci-0000:03:00.0-sas-0x5001173100136742-lun-0 -> ../../sdl
pci-0000:03:00.0-sas-0x50011731001367ae-lun-0 -> ../../sdi
pci-0000:03:00.0-sas-0x50011731001cdd5a-lun-0 -> ../../sdf

Real port (connection) persistence is not possible in that manner. A map of PWWN-to-slot is required, and handling the system in case of a disk failure by a non-expert is nearly impossible. A solution for that is to create matching udev rules which allow handling disks per-port.

While there are (absolutely) better ways of doing it, time constraints required that I get it to work quick and dirty. The solution is based on the lsscsi command as the backend engine of the system, so make sure it exists on the system. I tend to believe that the system will not be able to scale out to hundreds of disks in its current design, but for my 16 disks (and probably for several tens as well) - it works fine.
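For reference, lsscsi output looks roughly like this (disk models here are illustrative). The [host:channel:target:lun] tuple in the first column is the stable, per-port name we are after:

lsscsi
[0:0:0:0]    disk    SEAGATE   ST9300605SS   0004  /dev/sda
[0:0:1:0]    disk    SEAGATE   ST9300605SS   0004  /dev/sdb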

Add 60-persistent-disk-ports.rules to /etc/udev/rules.d/ (and omit the .txt suffix)


# By Ez-Aton, based partially on the built-in udev block device rule
# forward scsi device event to corresponding block device
ACTION=="change", SUBSYSTEM=="scsi", ENV{DEVTYPE}=="scsi_device", TEST=="block", ATTR{block/*/uevent}="change"

ACTION!="add|change", GOTO="persistent_storage_end"
SUBSYSTEM!="block", GOTO="persistent_storage_end"

# skip rules for inappropriate block devices
KERNEL=="fd*|mtd*|nbd*|gnbd*|btibm*|dm-*|md*", GOTO="persistent_storage_end"

# never access non-cdrom removable ide devices, the drivers are causing event loops on open()
KERNEL=="hd*[!0-9]", ATTR{removable}=="1", SUBSYSTEMS=="ide", ATTRS{media}=="disk|floppy", GOTO="persistent_storage_end"
KERNEL=="hd*[0-9]", ATTRS{removable}=="1", GOTO="persistent_storage_end"

# ignore partitions that span the entire disk
TEST=="whole_disk", GOTO="persistent_storage_end"

# for partitions import parent information
ENV{DEVTYPE}=="partition", IMPORT{parent}="ID_*"

# Deal only with SAS disks
KERNEL=="sd*[!0-9]|sr*", ENV{ID_SERIAL}!="?*", IMPORT{program}="/usr/local/sbin/detect_disk.sh $tempnode", ENV{ID_BUS}="scsi"
KERNEL=="sd*|sr*|cciss*", ENV{DEVTYPE}=="disk", ENV{TGT_PATH}=="?*", SYMLINK+="disk/by-target/disk-$env{TGT_PATH}"
#KERNEL=="sd*|cciss*", ENV{DEVTYPE}=="partition", ENV{ID_SERIAL}!="?*", IMPORT{program}="/usr/local/sbin/detect_disk.sh $tempnode"
KERNEL=="sd*|cciss*", ENV{DEVTYPE}=="partition", ENV{ID_SERIAL}=="?*", IMPORT{program}="/usr/local/sbin/detect_disk.sh $tempnode", SYMLINK+="disk/by-target/disk-$env{TGT_PATH}p%n"

ENV{DEVTYPE}=="disk", KERNEL!="xvd*|sd*|sr*", ATTR{removable}=="1", GOTO="persistent_storage_end"
LABEL="persistent_storage_end"

You will need to add (and make executable) the script detect_disk.sh in /usr/local/sbin. Again – remove the .txt suffix

#!/bin/bash
# Written by Ez-Aton to assist with disk-to-port mapping
# $1 - disk device name
name=$1
name=${name##*/}
# Full disk
TGT_PATH=`/usr/bin/lsscsi | grep -w /dev/$name | awk '{print $1}' | tr -d '[]'`
if [ -z "$TGT_PATH" ]
then
	# This is a partition, so our grep fails
	name=`echo $name | tr -d '0-9'`
	TGT_PATH=`/usr/bin/lsscsi | grep -w /dev/$name | awk '{print $1}' | tr -d '[]'`
fi
echo "TGT_PATH=$TGT_PATH"

The result of this addition to udev is a directory called /dev/disk/by-target, containing links as follows:

/dev/disk/by-target:
disk-0:0:0:0 -> ../../sda
disk-0:0:1:0 -> ../../sdb
disk-0:0:10:0 -> ../../sdk
disk-0:0:11:0 -> ../../sdl
disk-0:0:12:0 -> ../../sdm
disk-0:0:13:0 -> ../../sdn
disk-0:0:14:0 -> ../../sdo
disk-0:0:15:0 -> ../../sdp
disk-0:0:2:0 -> ../../sdc
disk-0:0:3:0 -> ../../sdd
disk-0:0:4:0 -> ../../sde
disk-0:0:5:0 -> ../../sdf
disk-0:0:6:0 -> ../../sdg
disk-0:0:7:0 -> ../../sdh
disk-0:0:8:0 -> ../../sdi
disk-0:0:9:0 -> ../../sdj

The result is persistent naming, based on real device ports.

I hope it helps. If you get to read it and have some suggestions (or a better use of udev, which I know is far from perfect in this case), I would love to hear about them.

RedHat cluster on RHEL6 and KVM-based VMs

Wednesday, August 1st, 2012

The concept of running a KVM-based virtual machine, in this case, under RHCS is acceptable and reasonable. The interesting part is that the <vm/> directive replaces the <service/> directive and acts as a high-level directive for VMs. This allows for things which cannot be performed with a regular 'service', such as live migration. There are probably more, but this is not the current issue.

An example of how it can be done can be shown in this excellent explanation. You can grab whatever parts of it relevant to you, as there is an excellent combination of DRBD, CLVM, GFS and of course, KVM-based VMs.

This whole guide assumes that the VMs reside on shared storage which is concurrently accessible by both (all?) hosts. This is not always the case - for example, when the shared filesystem is ext3/4 rather than GFS, and the virtual disk image file is located on it, the filesystem can be mounted on only one host at a time. In this particular case, you would want to tie the VM to the mount. This cannot be done, however, when using <vm/> as a top-level directive (like <service/>), as it does not allow for child resources.

As the <vm/> directive can be defined (with some limitations) as a child resource in a <service/> group, it inherits some properties from its parent (the <service/> directive), while some other properties are not mandatory and will be ignored. A sample configuration would be this:

<resources>
     <fs device="/dev/mapper/mpathap1" force_fsck="1" force_unmount="1" fstype="ext4" mountpoint="/images" name="vmfs" self_fence="0"/>
</resources>
<service autostart="1" domain="vm1_domain" max_restarts="2" name="vm1" recovery="restart">
     <fs ref="vmfs"/>
     <vm migrate="pause" name="vm1" restart_expire_time="600" use_virsh="1" xmlfile="/images/vm1.xml"/>
</service>

This would do the trick. However, the VM will not be able to live migrate; it will have to shut down and start up on each cluster takeover.
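For comparison - when the image does reside on cluster-wide shared storage (e.g. GFS on CLVM), a top-level <vm/> directive can allow live migration. A minimal sketch, with illustrative attribute values:

<vm autostart="1" domain="vm1_domain" migrate="live" name="vm1" recovery="restart" use_virsh="1" xmlfile="/images/vm1.xml"/>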

Attach USB disks to XenServer VM Guest

Saturday, May 5th, 2012

There is a very nice script for Windows dealing with attaching XenServer USB disk to a guest. It can be found here.

This script has several problems, as I see it. The first - it is a Windows batch script, which is a very limited language. Also, it can handle only a single VDI disk in the SR group called “Removable Storage”.

As I am a *nix guy, and can hardly handle Windows batch scripts, I have rewritten this script to run from Linux CLI (focused on running from the XenServer Domain0), and allowed it to handle multiple USB disks. My assumption is that running this script will map/unmap *all* local USB disks to the VM.

After downloading this script, make sure it is executable, and run it with the argument “attach” or “detach”, per your needs.

And here it is:

#!/bin/bash
# This script will map USB devices to a specific VM
# Written by Ez-Aton, http://run.tournament.org.il , with the concepts
# taken from http://jamesscanlonitkb.wordpress.com/2012/03/11/xenserver-mount-usb-from-host/
# and http://support.citrix.com/article/CTX118198
 
# Variables
# Need to change them to match your own!
REMOVABLE_SR_UUID=d03f247d-6fc6-a396-e62b-a4e702aabcf0
VM_UUID=b69e9788-8cd2-0074-5bc1-63cf7870fa0d
DEVICE_NAMES="hdc hde" # Local disk mapping for the VM
XE=/opt/xensource/bin/xe
 
function attach() {
        # Here we attach the disks
        # Check if storage is attached to VBD
        VBDS=`$XE vdi-list sr-uuid=${REMOVABLE_SR_UUID} params=vbd-uuids --minimal | tr , ' '`
        if [ `echo $VBDS | wc -w` -ne 0 ]
        then
                echo "Disks are allready attached. Check VBD $VBDS for details"
                exit 1
        fi
        # Get devices!
        VDIS=`$XE vdi-list sr-uuid=${REMOVABLE_SR_UUID} --minimal | tr , ' '`
        INDEX=0
        DEVICE_NAMES=( $DEVICE_NAMES )
        for i in $VDIS
        do
                VBD=`$XE vbd-create vm-uuid=${VM_UUID} device=${DEVICE_NAMES[$INDEX]} vdi-uuid=${i}`
                if [ $? -ne 0 ]
                then
                        echo "Failed to connect $i to ${DEVICE_NAMES[$INDEX]}"
                        exit 2
                fi
                $XE vbd-plug uuid=$VBD
                if [ $? -ne 0 ]
                then
                        echo "Failed to plug $VBD"
                        exit 3
                fi
                let INDEX++
        done
}
 
function detach() {
        # Here we detach the disks
        VBDS=`$XE vdi-list sr-uuid=${REMOVABLE_SR_UUID} params=vbd-uuids --minimal | tr , ' '`
        for i in $VBDS
        do
                $XE vbd-unplug uuid=${i}
                $XE vbd-destroy uuid=${i}
        done
        echo "Storage Detached from VM"
}
case "$1" in
        attach) attach
                ;;
        detach) detach
                ;;
        *)      echo "Usage: $0 [attach|detach]"
                exit 1
esac
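A quick usage example, assuming you saved the script as /root/usb_disks.sh (the name is up to you):

/root/usb_disks.sh attach
/root/usb_disks.sh detach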


Cheers!

Bonding + VLAN tagging + Bridge – updated

Wednesday, April 25th, 2012

In the past I hacked around a problem with the order of starting (and with several bugs in) a network stack combining network bonding (teaming) + VLAN tagging with network bridging (aka - Xen bridges). This kind of setup is very useful for introducing VLAN networks to guest VMs. It works well on Xen (community, Server); however, on RHEL/Centos 5 versions, the startup scripts (ifup and ifup-eth) are buggy and do not handle this operation correctly. It means that, depending on the update release you use, results might vary from “everything works” to “I get bridges without VLANs” to “I get VLANs without bridges”.

I have hacked a solution in the past, modifying /etc/sysconfig/network-scripts/ifup-eth and fixing some bugs in it; however, maintaining the fix across every release of the ‘initscripts’ package has proven, well, not to happen…

So, instead, I present you with a smarter solution, better adept to updates supplied from time to time by RedHat or Centos, using predefined ‘hooks’ in the ifup scripts.

Create the file /sbin/ifup-pre-local with the following contents:


#!/bin/bash
# $1 is the config file
# $2 is not interesting
# We will start the vlan bonding before any bridge
 
DIR=/etc/sysconfig/network-scripts
 
[ -z "$1" ] &amp;&amp; exit 0
. $1
 
if [ "${DEVICE%%[0-9]*}" == "xenbr" ]
then
    for device in $(LANG=C egrep -l "^[[:space:]]*BRIDGE=\"?${DEVICE}\"?" /etc/sysconfig/network-scripts/ifcfg-*) ; do
        /sbin/ifup $device
    done
fi

You can download this script. Don’t forget to make it executable. It will call ifup for any parent device of the xenbr* device it was invoked for. If the parent device is already up, no harm is done. If the parent device is not up, it will be brought up, and then the xenbr device can start normally.
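For reference, a minimal sketch of the matching ifcfg files (device names and the VLAN ID are illustrative):

# /etc/sysconfig/network-scripts/ifcfg-bond0.100 - VLAN 100 tagged on top of bond0
DEVICE=bond0.100
VLAN=yes
BRIDGE=xenbr100
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-xenbr100 - the bridge the guests connect to
DEVICE=xenbr100
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none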

Things to remember…

Monday, October 24th, 2011

As my work takes me to various places (where technology is concerned), I collect lots of browser tabs of things I want to keep for later reference.
I have to admit, sadly, that I lack the time to sort them out and make a really good and nice post about them. I do not want to lose them, however, so I am posting now those which I find, or found in the past, most useful to me. I might expand any of them one day into a full post, or elaborate further on them. Either or none. For now – let’s clean up some tab space:
Reading IPMI sensors. Into Cacti, and into Nagios, with some minor modifications by myself (to be disclosed later, I believe):
Cacti
Nagios
Some info about the plugin check_ipmi_sensor
And its wiki (in German; use Google for translation)
XenServer checks:
check_xen_pool
Checking XenServer using NRPE
But I did not care about Dom0 performance parameters, as they mean very little regarding the hypervisor’s behavior. So I have combined into it the following XenServer License Check. Unfortunately, I could run it only on the XenServer domain0, due to Python version limitations on my Cacti/Nagios server.
You can obtain XenServer SDK
This plugin looks interesting for various XenServer checks, but I have never tried it myself.
Backing up (exporting) XenServer VMs as a scheduled task. I have modified it extensively to match my requirements, but I am allowed to – it has some of its sources based on my blog :-)
Installing Dell OpenManage on XenServer 5.6.1, and the nice thing is that it works fine on XenServer 6 as well.
Oracle ASM recovery tips. One day I will take it further, and investigate possible human errors and methods of fixing them. Experience, they say, has a value :-)
A guide dealing with changing from raw to block devices in Oracle ASM. This is only a small part of it, but it’s the thing that interests me.
Understanding Steal Time in Linux Xen-based VMs.
Because I always forget, and I’m too lazy to search again and again (and reach the same page again and again): Upgrading PHP to 5.2 on Centos 5
And last – a very nice remote-control software for my Android phone. Don’t leave home without it. Seriously.

Being reduced to only 23 tabs is excellent. This was a very nice job, and these links will be useful. To me, for sure. I hope to you as well.

Hot resize Multipath Disk – Linux

Friday, August 19th, 2011

This post is for users of the great dm-multipath system in Linux who encounter a major availability problem when attempting a resize of mpath devices (and their partitions), and find themselves scheduling a reboot.

This document is based on a document created by IBM called "Hot Resize Multipath Storage Volume on Linux with SVC", and its contents are good for any other storage. However - it does not cover the procedure required in case of a partition on the mpath device (for example - an mpath1p1 device).

I will demonstrate with only two paths, but, with understanding this process, it can be well used for any amount of paths for a device.

I do not explain how to reduce a LUN size, but the apt reader will be able to derive a method from this document. I, for myself, try to avoid shrinking LUNs as much as I can. I prefer backup, LUN recreation, and then restore. In many cases - it's just faster.

So - back to our topic - first - increase the size of your LUN on the storage.

Now, you need to collect the paths used for your mpath device. Check this example:

mpath1 (360a980005033644b424a6276516c4251) dm-2 NETAPP,LUN
[size=200G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=4][active]
\_ 2:0:0:0 sdc 8:32  [active][ready]
\_ round-robin 0 [prio=1][enabled]
\_ 1:0:0:0 sdb 8:16  [active][ready]

The two path devices (sdc and sdb) are the ones we will need to change. Let's get their current size:

blockdev --getsz /dev/sdb
419430400

Keep this number somewhere safe; blockdev --getsz reports the size in 512-byte sectors. We can (and should!) assume that sdc has the same value; otherwise, this is not the exact same path.

Collect this info for the partition as well. It will be smaller by a tiny bit:

blockdev --getsz /dev/sdb1
419424892

Keep this number as well.

Now we need to reread the current (storage-based) size parameters of the devices. We will run

blockdev --rereadpt /dev/sdb
blockdev --rereadpt /dev/sdc

Now, our size will be slightly different:

blockdev --getsz /dev/sdb
734003200

Of course, the partition size will not change. We will deal with it later. Keep the updated values as well. The multipath device still holds the disks with their original size values, so running 'multipath -ll' will not reveal any size change. Not yet.

We now need to create an editable dmsetup map. Use the following command to create two files, org and cur, containing this map:

dmsetup table mpath1 | tee org cur
0 419424892 multipath 1 queue_if_no_path 0 2 1 round-robin 0 1 1 8:32 128 round-robin 0 1 1 8:16 128

An important part - explaining some of these values. The map shows the device's size in blocks (512-byte sectors) - 419424892. It shows some parameters, it shows the path group info (0 2 1), and both sub-devices - sdc being 8:32 and sdb being 8:16. Try 'ls -la /dev/sdb' to see the minor and major numbers. At this point, if you are not familiar with majors and minors, I would recommend you do some reading about them. Not mandatory, but it will make your life here safer.

We need to delete one of the paths, so we can refresh it. I have decided to remove sdb first:

multipathd -k"del path sdb"

Now, running the multipath command, we will get:

mpath1 (360a980005033644b424a6276516c4251) dm-2 NETAPP,LUN
[size=200G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=4][active]
\_ 2:0:0:0 sdc 8:32  [active][ready]

Only one path. Good. We will need to edit the 'cur' file created earlier to reflect the new settings we are to introduce:

0 419424892 multipath 1 queue_if_no_path 0 1 1 round-robin 0 1 1 8:32 128

The only group left is the one containing 'sdc' (8:32), and since one group is gone, the path group count changed from 2 to 1 (as there is only a single path group now!)

We need to reload multipath with these settings:

dmsetup suspend mpath1; dmsetup reload mpath1 cur; dmsetup resume mpath1

The correct response for this line is 'ok'. We suspend mpath1, reload it, and then resume it. It is best done in a single line, as this process freezes IO on the device for a short period of time, and we prefer it to be as short as possible.

Now, as /dev/sdb is no longer part of the multipath-managed devices, we can modify it. I usually use 'fdisk' - deleting the old partition and recreating it at the new size - but if your device requires LUN alignment, you must make sure to recreate the partition from the same start point. I will dedicate a post to LUN alignment some time, but not at this particular time. Just a hint - if you're not sure, run fdisk in expert mode and get a printout of your partition table (fdisk /dev/sdb, then x, then p). If your partition starts at 128 or 64, it is aligned. If not (usually, for large LUNs - at 63), it is not, and you should either worry about it (but not now), or not care at all.
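A sketch of the fdisk sequence (illustrative; verify the original start point on your own system before writing anything):

fdisk /dev/sdb
# p - print the table; note the start of partition 1
# d - delete partition 1
# n - recreate it as primary partition 1, same start, default (new) end
# w - write the new table and exit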

Back to our task.

We need to grab the size of the newly created partition, for later use. Write it down somewhere.

blockdev --getsz /dev/sdb1
733993657

Following the partition recreation, we need to reintroduce the device to the multipath daemon. We do this by:

multipathd -k"add path sdb"

followed by immediately removing the remaining device:

multipathd -k"del path sdc"

We need to have our 'cur' file updated, so we can release the device to our uses. This time, we update both the size section with the new size, and the new, remaining path. Now, the file looks like this:

0 734003200 multipath 1 queue_if_no_path 0 1 1 round-robin 0 1 1 8:16 128

As mentioned before - the large number (734003200) is the new size of the block device. The number of path groups is one (1), and the device name is 'sdb', which is 8:16. Save this modified file, and run:

dmsetup suspend mpath1; dmsetup reload mpath1 cur; dmsetup resume mpath1

Running the command 'multipath -ll' you will get the real size of the device.

mpath1 (360a980005033644b424a6276516c4251) dm-2 NETAPP,LUN
[size=350G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 1:0:0:0 sdb 8:16  [active][ready]

We will need to reread the partition layout of /dev/sdc. The quickest way is by running:

partprobe

This should do it. We can now add it back in:

multipathd -k"add path sdc"

and then run

multipath

(which should result in all the available paths, and the correct size).

Our last task is to update the partition size. The partition, normally, is called mpath1p1, so we need to read its parameters. Let's keep them in a file:

dmsetup table mpath1p1 | tee partorg partcur

We should now edit the newly created file 'partcur' with the new size. You should not change anything else. Originally, it looked like this:

0 419424892 linear 253:2 128

and it was modified to look like this:

0 733993657 linear 253:2 128

Notice that the size (733993657) is the one obtained from /dev/sdb1 (!!!) and not from /dev/sdb.

We need to reload the device. Again - attempt to do it in a single line, or else you will freeze IO for a long time (which might cause things to crash):

dmsetup suspend mpath1p1; dmsetup reload mpath1p1 partcur; dmsetup resume mpath1p1

Do not mistake mpath1 for mpath1p1.

Our device is updated, our paths are online. We are happy. All that's left to do is resize the file system online. With ext3, this is done like this:

resize2fs /dev/mapper/mpath1p1

The mount will increase in size online, and all that's left for us is to wait for it to complete, and then go home.

I hope this helps. It helped me.

Stateless Systems (diskless, boot from net) on RHEL5/Centos5

Thursday, July 21st, 2011

I have encountered several methods of building stateless RedHat Linux systems. Some are over-sophisticated and don’t work. Some are too old, and you either have to fix half the scripts or give up (which I did, BTW). After a long period of attempts, I have found my simple-yet-working-well-enough solution. It goes like this (drums please!):

yum install system-config-netboot

Doesn’t it sound funny? So simple, and yet – so working. So I have decided to supply you with the ultimate, simple, working guide to this method – how to make it work – fast and easy.

You will need the following:

  • RHEL5/Centos5 with the ability to run a yum client, and enough disk space to contain an entire system image, and more. Lots more. About it later. This machine will be called “server” in this guide.
  • A single “Golden Image” system – the base of your system-to-replicate; a system you will, with some (minor) modifications, use as the source of all your future stateless systems. It usually resides on another machine (physical or virtual – doesn’t matter much; more about it later). This machine will be called “GI” in this guide.
  • A test system. Better be diskless, for assurance. It should also be able to boot from the net, otherwise we are missing something here (although hybrid boot methods are possible, I will not discuss them here for the time being). It can be virtual, as long as it is full hardware virtualization, as you cannot net-boot in PV mode, except with the latest Xen Community versions. It will be called “net-client” or “client” in this guide.

Our flow:

  • Install required server-side tools
  • Setup server configuration parameters
  • Image the GI into the server
  • Run this and that
  • Boot our net-client happily

Let’s start!

On the server, run:

yum install -y system-config-netboot xorg-x11-xauth dhcp

and then, run:

chkconfig dhcpd on
chkconfig tftp on
chkconfig xinetd on
chkconfig nfs on

We will now perform configurations for the above mentioned services.

Your /etc/dhcpd.conf should look like this:

ddns-update-style interim;
ignore client-updates;

subnet $NETWORK netmask $NETMASK {
option routers              $GATEWAY;
option subnet-mask          $NETMASK;
option domain-name-servers  $DNS;
ignore unknown-clients;
filename "linux-install/pxelinux.0";
next-server $SERVERIP;
}

You should replace all these variables with the ones matching your own layout. In my case

NETWORK=10.0.0.0
NETMASK=255.0.0.0
GATEWAY=10.200.1.1
DNS=10.100.1.4
SERVERIP=10.100.1.8

We will include hosts later. Notice that this DHCP server will refuse to serve unknown hosts. This will prevent the “Oops” factor…
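With the values above filled in, the subnet block reads:

subnet 10.0.0.0 netmask 255.0.0.0 {
option routers              10.200.1.1;
option subnet-mask          255.0.0.0;
option domain-name-servers  10.100.1.4;
ignore unknown-clients;
filename "linux-install/pxelinux.0";
next-server 10.100.1.8;
}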

We can start our DHCPD now. It won’t do anything, and it’s a good opportunity to test our configuration:

service dhcpd start

We need to create the directory structure. I have selected /export/NFSroots as my base structure. I will follow this structure here:

mkdir -p /export/NFSroots

We will proceed with the NFS server settings later.

Imaging the GI to the server is quite simple. We will begin by creating a directory in /export/NFSroots with the name of the imaged system type. In my case:

mkdir /export/NFSroots/centos5.6

However (and this is the tricky part!), we will create a directory under this location, called ‘root’. We will image the entire contents of our GI into this root directory. This is how the mechanism works, and I have no desire to bend it. So:

mkdir /export/NFSroots/centos5.6/root

Now we copy the contents of the GI to this directory. There are several methods, but in this particular case, I have chosen to use ‘rsync’ over ‘ssh’. Other methods are just as good. Just one note – on the GI, before this copy, we will need to have the busybox-anaconda package. So make sure you have it:

yum install -y busybox-anaconda

Now we can create an image from the GI:

rsync -e ssh -av --exclude '/proc/*' --exclude '/sys/*' --exclude '/dev/shm/*' $GI_IP:/ /export/NFSroots/centos5.6/root

You must include as many exclude entries as required. You should never cause the system to attempt to grab the company’s huge NFS share just because you have forgotten some ‘exclude’ entry. This would cause a major loss of time, and possibly some outage to some network resource. Be careful!

This is the best time to grab a cup of coffee/tea/cocoa/coke/whatever, and chit-chat with your friends. It took me about 10 minutes for a 1.8GB image over a 1Gb/s network link, so you can do some math and guesswork, and probably go grab lunch.

When this operation is complete, your next mission is to configure the NFS server. Order does matter. You should create a read-only entry for the image root directory, and a full-access entry for the structure above it. Well – I could probably narrow it down, but I did not really bother. During pre-production, I will test how to narrow permissions down without getting myself into management hell. So – our entries look like this in /etc/exports :

/export/NFSroots/centos5.6/root *(ro,no_root_squash)
/export/NFSroots *(rw,no_root_squash,async)

I would love to hear comments from people who did narrow security down to the required minimum, and how they managed to handle tens and hundreds of clients, or several dozen RO images, without much hassle. Really. Comment, people, if you have something to add. I would love to modify this document accordingly.
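As a starting point (untested by me), restricting the exports to the clients’ subnet instead of the wildcard would look something like this:

/export/NFSroots/centos5.6/root 10.0.0.0/255.0.0.0(ro,no_root_squash)
/export/NFSroots 10.0.0.0/255.0.0.0(rw,no_root_squash,async)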

Now we need to start our NFS server:

service nfs start

We are ready for the big entry. You will need a GUI here. We have installed xorg-x11-xauth, which will allow us, if invoked, remote X. I use X over SSH, so it’s a breeze. Run the command:

system-config-netboot

I will not bother with screenshots, as this is not my style, but the following entries will be required:

  • First Time Druid: Diskless
  • Name: Easy identified. In my case: Centos5.6
  • Server IP address: $SERVERIP
  • Directory: /export/NFSroots/centos5.6
  • Kernel: Select the desired kernel
  • Apply

The system might take several minutes to complete. When done, you will be presented with a blank table. We need to populate it now.

For that to be easy, we had better have all our clients in the /etc/hosts file. While DNS does help, a lot, this specific tool requires /etc/hosts to have the entry for it to work like a charm; otherwise, it just won’t be very readable. Below is a one-liner to create a set of entries in /etc/hosts. I have started with .100 and ended with .120. This can be changed easily to match your requirements:

for i in {100..120} ; do echo "10.100.1.$i   pxe${i}" >> /etc/hosts ; done

This way we can refer to our clients as pxe100, pxe101, and so on.

Let’s create a new client!

Select “New” from the GUI menu, and fill in the name ‘pxe100’. If you have multiple images, this is a good time to select which one you will be using. If you have followed this guide to the letter, you have only a single option. Press “OK” and you’re done.

We need to set up MAC and IP addresses in the DHCP server. I have written a script which assists with this greatly. For this script to work, you will need to run the following commands first:

echo "" > /etc/dhcpd.conf.bootnet
echo 'include "/etc/dhcpd.conf.bootnet";' >> /etc/dhcpd.conf

#!/bin/bash
# Creates and removes netboot nodes
 
# Check arguments
 
# VARIABLES
DHCPD_CONF=/etc/dhcpd.conf.bootnet
SERVICE=dhcpd
DATE=`date +"%H%M%S%d%m%y"`
BACKUPS=${DHCPD_CONF}.backups
 
function show_help() {
	echo "Usage: $0"
	echo "add NAME MAC IP"
	echo "where NAME is the name of the node"
	echo "MAC is the hardware address of the netboot interface (usually eth0)"
	echo "IP is the designated IP address of the node"
	echo
	echo
	echo "del ARGUMENT"
	echo "Where ARGUMENT can be either name, MAC address or IP address"
	echo "The script will attempt to remove whatever match it finds, so be careful"
	echo
	echo
	echo "check ARGUMENT"
	echo "Where ARGUMENT can be either name, MAC address or IP address"
        echo "The script will list any matching entries. This is a recommended"
	echo "action prior to removing a node"
	echo "The same logic used for finding entries to remove is used here"
	echo "so it is rather safe to run 'del' after a successful 'check' operation"
	echo
	exit 0
}
 
function check_inside() {
	# Will be used to either show or find the number of matching entries
	# Arguments: 	$1 - silent - 0 ; loud - 1
	# 		$2 - the string to search
	# The function will search the string as a stand-alone word only
	# so it will not find abc in abcde, but only in "abc"
 
	case "$1" in
		0) 	# silent
			RET=`grep -iwc $2 $DHCPD_CONF`
			;;
		1) 	# loud
			grep -iw $2 $DHCPD_CONF
			;;
 
		*)	echo "Something is very wrong with $0"
			echo "Exiting now"
			exit 2
			;;
	esac
 
	return $RET
}
 
function add_to() {
	# This function will add to the conf file
	# Arguments: $1 host ; $2 MAC ; $3 IP address
	echo "host $1 { hardware ethernet $2; fixed-address $3; }" >> $DHCPD_CONF
	[ "$?" -ne "0" ] && echo "Something wrong happened when attempted to add entry" && exit 3
}
 
function del_from() {
	# This function will delete a matching entry from the conf file
	# Arguments: $1 - expression
	[ -z "$1" ] && echo "Missing argument to del function" && exit 4
	if cat $DHCPD_CONF | grep -vw $1 > $DHCPD_CONF.$$
	then
		\mv $DHCPD_CONF.$$ $DHCPD_CONF
	else
		echo "Failed to create temp DHCPD file. Aborting"
		exit 5
	fi
}
 
function backup_conf() {
	cp $DHCPD_CONF $BACKUPS/`basename $DHCPD_CONF.$DATE`
}
 
function restore_conf() {
	\cp $BACKUPS/`basename $DHCPD_CONF.$DATE` $DHCPD_CONF
}
 
function check_wrapper() {
	# Perform check. Loud one
	echo "Searching for $REGEXP in configuration"
	check_inside 1 $REGEXP
	exit 0
}
 
function del_wrapper() {
	# Performs delete
	[ -z "$REGEXP" ] && echo "Missing argument for delete action" && exit 6
	backup_conf
	echo "Removing all invocations which include $REGEXP from config"
	del_from $REGEXP
 
	if /sbin/service $SERVICE restart
        then
                echo "Done"
        else
                restore_conf
                /sbin/service $SERVICE restart
                echo "Failed to update. Syntax error"
        fi
}
 
function add_wrapper() {
	# Adds to config file
	[ -z "$NAME" -o -z "$MAC" -o -z "$IP" ] && echo "Missing argument for add action" && exit 7
 
	for i in $NAME $MAC $IP
	do
		if check_inside 0 $i
		then
			echo -n .
		else
			echo "Value $i already exists"
			echo "Will not update duplicate value"
			exit 7
		fi
	done
	echo
 
	backup_conf
	add_to $NAME $MAC $IP
 
	if /sbin/service $SERVICE restart
	then
		echo "Done"
	else
		restore_conf
		/sbin/service $SERVICE restart
		echo "Failed to update. Syntax error"
	fi
}
 
function prep() {
	# Make sure everything is as expected
	[ ! -d ${BACKUPS} ] && mkdir ${BACKUPS}
}
 
prep
 
case "$1" in
	add)	NAME="$2"
		MAC="$3"
		IP="$4"
		add_wrapper
		;;
	check)	REGEXP="$2"
		check_wrapper
		;;
	del)	REGEXP="$2"
		del_wrapper
		;;
	help)	show_help
		;;
	*)	echo "Usage: $0 [add|del|check|help]"
		exit 1
		;;
esac

This script will update /etc/dhcpd.conf.bootnet with new nodes. Place it somewhere in your path (for example, as /usr/local/bin/net-node.sh – the name used below).

We will need the MAC address of the node. Run:

net-node.sh add pxe100 00:16:3E:00:82:7A 10.100.1.100

This will add the node pxe100 with that specific MAC address to the DHCPD configuration.
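The other actions work the same way. For example, to verify an entry and then remove it:

net-node.sh check pxe100
net-node.sh del pxe100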

Now, all you need to do is boot your client and see how it works. Remember to disable other DHCP servers which might serve this node, or blacklist its MAC address on them.

Our next chapters will deal with my (future) attempt to make RHEL6 work under this setup (as a GI and client, not as a server), and all kinds of mass-deployment methods. If and when :-)