Posts Tagged ‘storage’

NetApp internals – how to add SSH keys without C$ nor NFS shares

Thursday, April 3rd, 2014

This post describes the process of placing SSH keys using NetApp's internal 'systemshell' command. As always, when doing something the vendor did not intend you to do, do it very carefully. This information was obtained from the NetApp forums, and while I no longer have the original post to link to (I usually link to the original, as a courtesy to its author), this is the content, as-is.

First, set to advanced mode:
filer> priv set advanced

Then, unlock and set a password to diag account:
filer*> useradmin diaguser unlock
filer*> useradmin diaguser password

Start the systemshell, create the required directory and place the generated public key in the authorized_keys file:
filer*> systemshell

login: diag
Password: (the password you set in the previous step)

filer% mkdir -p /mroot/etc/sshd/root/.ssh
filer% vi /mroot/etc/sshd/root/.ssh/authorized_keys
filer% sudo chown -R root:wheel /mroot/etc/sshd/root
filer% sudo chmod -R 0600 /mroot/etc/sshd/root

Last, exit systemshell, lock diag account and exit advanced mode:
filer% exit
filer*> useradmin diaguser lock
filer*> priv set admin

If you want to do it for any other user, just replace the word ‘root’ with the said user.
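For reference, here is a minimal sketch of the client side of this exercise; the key file name is arbitrary, and 'filer' stands in for your filer's hostname:

ssh-keygen -t rsa -b 2048 -f ~/.ssh/id_rsa_netapp
cat ~/.ssh/id_rsa_netapp.pub
(paste the printed public key into the authorized_keys file created above)
ssh -i ~/.ssh/id_rsa_netapp root@filer version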

An additional note: I had to create a user limited to 'df' operations only. The purpose was to be able to collect data over SSH without disclosing the keys used for root SSH access, by having a very restricted user designed for just that.

So the commands to create such a user are as follows:

useradmin role add df -a cli-df*,login-ssh
useradmin group add df_users -r df
useradmin user add df -g df_users
(here you will be asked to enter the user’s password)
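With the role in place (and the user's public key placed under /mroot/etc/sshd/df/.ssh/authorized_keys, as described above), a monitoring host can then collect the data with something like:

ssh df@filer df -h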

Hope it helps!

 

 

XenServer and its damn too small system disks

Thursday, December 26th, 2013

I love XenServer. I love the product, and I believe it to be a very good answer for SMBs and enterprises alike. It lacks in third-party support, true, but the price tag of many of the 'external capabilities' of VMware, for instance, is very high, so many SMBs especially learn to live without them. XenServer gives a nice pack of features at a very reasonable price.

One of the missing features is the management packs of hardware vendors, such as HP, Dell and IBM. Well, HP does have something, and its installation is always some sort of a challenge, but they do, so scratch that. Others, however, do not supply management packs at all. The bright side is that with Domain0 being a full-featured i386 CentOS 5 distribution, I can install the CentOS/RHEL management packs and have a ball.

This brings us to another challenge: the size of the system disk (root partition) is, by default, too small. At 4GB it works quite well without any external components, but it tends to fill up very fast once external packages, like the Dell tools, are installed. Not only that, but on a heavily patched system the patch backups take their toll and consume valuable space. My solution will not work for those who aim at the smallest possible footprint, such as an SD card or Disk-on-Key holding the XenServer OS; it aims at the rest of us, where the system resides on several tens of gigabytes at least, and can sustain the 'loss' of an additional 4GB.

The process modifies the install.img file and authors the CD as a new one, your own privately-modified instance of the XenServer installation media. Mind you that this change will be effective only for new installations. I have not tested it as the upgrade path for existing systems, although I believe no harm will be done to those who upgrade. Also, it was performed and tested on XenServer 6.2, not 6.2 SP1 or prior versions, although I believe the process should look pretty similar for them.

You will need a Linux machine to perform this operation end to end. You could probably use some Windows applications along the way, but I have no idea which.

Step one: Open the ISO, and copy it to somewhere useful (assume /tmp is useful):

mkdir /tmp/ISO
mkdir /tmp/RW
mount -o loop /path/to/XenServer-6.2.0-install-cd.iso /tmp/ISO
cd /tmp/ISO
tar cf - . | ( cd /tmp/RW ; tar xf - )

Step two: Extract the contents of the install.img file in the root of the CDROM:

mkdir /tmp/install
cd /tmp/install
cat /tmp/RW/install.img | gzip -dc | cpio -id

Step three: Edit the contents of the definitions file:

vi opt/xensource/installer/constants.py

Change the value of 'root_size' to something to your taste. Mind you that with 4GB it was tight but still usable, even with additional 3rd-party tools, so don't become greedy. I set it to 6GB (6144).
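If you prefer a non-interactive edit, something like this should do; it assumes the assignment in constants.py looks like 'root_size = 4096', so verify with the grep before and after:

cd /tmp/install
grep root_size opt/xensource/installer/constants.py
sed -i 's/^root_size.*/root_size = 6144/' opt/xensource/installer/constants.py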

Step four: Wrap it up:

cd /tmp/install ; find . | cpio -o -H newc | gzip -9 > /tmp/RW/install.img
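Before authoring the CD, it cannot hurt to sanity-check that the repacked image unpacks cleanly:

gzip -dc /tmp/RW/install.img | cpio -t | head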

Step five: Author the CD, and prepare it to be burned:

cd /tmp/RW
mkisofs -J -T -o /tmp/XenServer-6.2-modified.iso -V "XenServer 6.2" -volset "XenServer 6.2" -A "XenServer 6.2" \
-b boot/isolinux/isolinux.bin -no-emul-boot -boot-load-size 4 -boot-info-table -R -m TRANS.TBL .

You now have a file called 'XenServer-6.2-modified.iso' in /tmp, which will install your XenServer with the root partition size you set. Cheers.
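If you want to verify the authored image before burning it, isoinfo (from the cdrtools/genisoimage package) can read back the volume descriptor:

isoinfo -d -i /tmp/XenServer-6.2-modified.iso | grep -i "volume id"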

BTW, and to make it entirely clear: I cannot be held responsible for any damage caused to any system you tweak using this (or, for that matter, any other) guide I publish.

Enjoy your XenServer’s new apartment!

XenServer – increase LVM over iSCSI LUN size – online

Wednesday, September 4th, 2013

The following procedure was tested by me and found to be working. The XenServer version used in this particular case is 6.1; however, I believe this method is generic enough to work on every version of XS, assuming you are using iSCSI and LVM (that is - not NetApp, CSLG, NFS and the likes). It might act as a general guideline for Fibre Channel as well, but I have not tested it there, and thus have no idea how it will behave. It should work, with some modifications, when using multipath; on that subject, you can find some notes on increasing multipath disks elsewhere in this blog. Check the comments there too - they might offer a better and simplified way of doing it.

So - let's begin.

First - increase the size of the LUN through the storage. For NetApp, it involves something like:

lun resize /vol/XenServer/luns/SR1.lun +1t

You should always make sure your storage volume, aggregate, RAID group, pool or whatever is capable of holding the data, or, if using thin provisioning, that a well-tested monitoring system is in place to alert you when disk space runs low.

Now, we should identify the LUN. From now on - every action should be performed on all XS pool nodes, one after the other.

cat /proc/partitions

We should keep the output of this command somewhere. We will use it later on to identify the expanded LUN.

Now - let's scan for storage changes:

iscsiadm -m node -R

Now, running the previous command again will produce a slightly different output, and we can now identify the modified LUN:

cat /proc/partitions
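Alternatively, instead of comparing the two outputs by eye, you could save the listing before the rescan and diff it afterwards; a minimal sketch:

cat /proc/partitions > /tmp/partitions.before
iscsiadm -m node -R
diff /tmp/partitions.before /proc/partitions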

Now we should increase the device in size. XenServer uses LVM, so we harness it to our needs. Let's assume the modified disk is /dev/sdd:

pvresize /dev/sdd
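To confirm the resize took effect, check that the physical volume size (PSize) now reflects the grown LUN:

pvs /dev/sdd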

After completing this task on all pool hosts, we should run the sr-scan command, either via the CLI or through the GUI. When the scan operation completes, the new size will show.
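Via the CLI it would look something like this; the SR UUID is yours to fill in:

xe sr-list type=lvmoiscsi
xe sr-scan uuid=<SR_UUID>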

Hope it helps!

Target-based persistent device naming

Saturday, June 22nd, 2013

When connecting Linux to a large array of SAS disks (JBOD), udev creates default persistent names in /dev/disk/by-*. These names are based on the LUN ID (all disks take lun0 by default) and on the path, which, for a pure SAS bus, includes the PWWN of the disk. An example of such naming would look like this (slightly trimmed for ease of view):

/dev/disk/by-id:
scsi-35000c50055924207 -> ../../sde
scsi-35000c50055c5138b -> ../../sdd
scsi-35000c50055c562eb -> ../../sda
scsi-35000c500562ffd73 -> ../../sdc
scsi-35001173100134654 -> ../../sdn
scsi-3500117310013465c -> ../../sdk
scsi-35001173100134688 -> ../../sdj
scsi-35001173100134718 -> ../../sdo
scsi-3500117310013490c -> ../../sdg
scsi-35001173100134914 -> ../../sdh
scsi-35001173100134a58 -> ../../sdp
scsi-3500117310013671c -> ../../sdm
scsi-35001173100136740 -> ../../sdl
scsi-350011731001367ac -> ../../sdi
scsi-350011731001cdd58 -> ../../sdf
wwn-0x5000c50055924207 -> ../../sde
wwn-0x5000c50055c5138b -> ../../sdd
wwn-0x5000c50055c562eb -> ../../sda
wwn-0x5000c500562ffd73 -> ../../sdc
wwn-0x5001173100134654 -> ../../sdn
wwn-0x500117310013465c -> ../../sdk
wwn-0x5001173100134688 -> ../../sdj
wwn-0x5001173100134718 -> ../../sdo
wwn-0x500117310013490c -> ../../sdg
wwn-0x5001173100134914 -> ../../sdh
wwn-0x5001173100134a58 -> ../../sdp
wwn-0x500117310013671c -> ../../sdm
wwn-0x5001173100136740 -> ../../sdl
wwn-0x50011731001367ac -> ../../sdi
wwn-0x50011731001cdd58 -> ../../sdf

/dev/disk/by-path:
pci-0000:03:00.0-sas-0x5000c50055924206-lun-0 -> ../../sde
pci-0000:03:00.0-sas-0x5000c50055c5138a-lun-0 -> ../../sdd
pci-0000:03:00.0-sas-0x5000c50055c562ea-lun-0 -> ../../sda
pci-0000:03:00.0-sas-0x5000c500562ffd72-lun-0 -> ../../sdc
pci-0000:03:00.0-sas-0x5001173100134656-lun-0 -> ../../sdn
pci-0000:03:00.0-sas-0x500117310013465e-lun-0 -> ../../sdk
pci-0000:03:00.0-sas-0x500117310013468a-lun-0 -> ../../sdj
pci-0000:03:00.0-sas-0x500117310013471a-lun-0 -> ../../sdo
pci-0000:03:00.0-sas-0x500117310013490e-lun-0 -> ../../sdg
pci-0000:03:00.0-sas-0x5001173100134916-lun-0 -> ../../sdh
pci-0000:03:00.0-sas-0x5001173100134a5a-lun-0 -> ../../sdp
pci-0000:03:00.0-sas-0x500117310013671e-lun-0 -> ../../sdm
pci-0000:03:00.0-sas-0x5001173100136742-lun-0 -> ../../sdl
pci-0000:03:00.0-sas-0x50011731001367ae-lun-0 -> ../../sdi
pci-0000:03:00.0-sas-0x50011731001cdd5a-lun-0 -> ../../sdf

Real port (connection) persistence is not possible in that manner. A map of PWWN-to-slot is required, and handling the system in case of a disk failure by a non-expert is nearly impossible. A solution is to create matching udev rules which allow handling disks per-port.

While there are (absolutely) better ways of doing it, time constraints required that I get it working quick&dirty. The solution is based on the lsscsi command as the backend engine of the system, so make sure it exists on the system. I tend to believe that the system will not scale out to hundreds of disks in its current design, but for my 16 disks (and probably for several tens as well) it works fine.

Create the file 60-persistent-disk-ports.rules in /etc/udev/rules.d/ with the following contents:

 

# By Ez-Aton, based partially on the built-in udev block device rule
# forward scsi device event to corresponding block device
ACTION=="change", SUBSYSTEM=="scsi", ENV{DEVTYPE}=="scsi_device", TEST=="block", ATTR{block/*/uevent}="change"

ACTION!="add|change", GOTO="persistent_storage_end"
SUBSYSTEM!="block", GOTO="persistent_storage_end"

# skip rules for inappropriate block devices
KERNEL=="fd*|mtd*|nbd*|gnbd*|btibm*|dm-*|md*", GOTO="persistent_storage_end"

# never access non-cdrom removable ide devices, the drivers are causing event loops on open()
KERNEL=="hd*[!0-9]", ATTR{removable}=="1", SUBSYSTEMS=="ide", ATTRS{media}=="disk|floppy", GOTO="persistent_storage_end"
KERNEL=="hd*[0-9]", ATTRS{removable}=="1", GOTO="persistent_storage_end"

# ignore partitions that span the entire disk
TEST=="whole_disk", GOTO="persistent_storage_end"

# for partitions import parent information
ENV{DEVTYPE}=="partition", IMPORT{parent}="ID_*"

# Deal only with SAS disks
KERNEL=="sd*[!0-9]|sr*", ENV{ID_SERIAL}!="?*", IMPORT{program}="/usr/local/sbin/detect_disk.sh $tempnode", ENV{ID_BUS}="scsi"
KERNEL=="sd*|sr*|cciss*", ENV{DEVTYPE}=="disk", ENV{TGT_PATH}=="?*", SYMLINK+="disk/by-target/disk-$env{TGT_PATH}"
#KERNEL=="sd*|cciss*", ENV{DEVTYPE}=="partition", ENV{ID_SERIAL}!="?*", IMPORT{program}="/usr/local/sbin/detect_disk.sh $tempnode"
KERNEL=="sd*|cciss*", ENV{DEVTYPE}=="partition", ENV{ID_SERIAL}=="?*", IMPORT{program}="/usr/local/sbin/detect_disk.sh $tempnode", SYMLINK+="disk/by-target/disk-$env{TGT_PATH}p%n"

ENV{DEVTYPE}=="disk", KERNEL!="xvd*|sd*|sr*", ATTR{removable}=="1", GOTO="persistent_storage_end"
LABEL="persistent_storage_end"

 
You will also need to add (and make executable) the script detect_disk.sh in /usr/local/sbin:
 

#!/bin/bash
# Written by Ez-Aton to assist with disk-to-port mapping
# $1 - disk device name
name=$1
name=${name##*/}
# Full disk: take the H:C:T:L target path from lsscsi, stripping the brackets
TGT_PATH=$(/usr/bin/lsscsi | grep -w "/dev/$name" | awk '{print $1}' | tr -d '[]')
if [ -z "$TGT_PATH" ]
then
	# This is a partition, so our grep failed - strip the partition number and retry
	name=$(echo "$name" | tr -d '0-9')
	TGT_PATH=$(/usr/bin/lsscsi | grep -w "/dev/$name" | awk '{print $1}' | tr -d '[]')
fi
echo "TGT_PATH=$TGT_PATH"
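To test the setup without a reboot, you can run the script by hand and then re-trigger the block device events. The udevadm invocations below are a sketch for a reasonably recent udev; older distributions (RHEL5-era) use udevcontrol and udevtrigger instead:

/usr/local/sbin/detect_disk.sh /dev/sda
udevadm control --reload-rules
udevadm trigger --subsystem-match=block
ls -l /dev/disk/by-target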

 
The result of this addition to udev is a directory called /dev/disk/by-target, containing links as follows:

/dev/disk/by-target:
disk-0:0:0:0 -> ../../sda
disk-0:0:1:0 -> ../../sdb
disk-0:0:10:0 -> ../../sdk
disk-0:0:11:0 -> ../../sdl
disk-0:0:12:0 -> ../../sdm
disk-0:0:13:0 -> ../../sdn
disk-0:0:14:0 -> ../../sdo
disk-0:0:15:0 -> ../../sdp
disk-0:0:2:0 -> ../../sdc
disk-0:0:3:0 -> ../../sdd
disk-0:0:4:0 -> ../../sde
disk-0:0:5:0 -> ../../sdf
disk-0:0:6:0 -> ../../sdg
disk-0:0:7:0 -> ../../sdh
disk-0:0:8:0 -> ../../sdi
disk-0:0:9:0 -> ../../sdj

The result is a persistent naming, based on real device ports.
 
I hope it helps. If you get to read it and have some suggestions (or a better use of udev, which I know is far from perfect in this case), I would love to hear about it.

NetApp – Copy LUN between filers using NDMP

Saturday, January 19th, 2013

ndmpcopy is a wonderful command. It allows a fine-grained copy of files or directories between NetApp devices across the network, even if they do not use (or have not licensed) SnapMirror, SnapVault and the rest of the Snap* products NetApp offers.
In this example I will show how to copy a LUN from one filer to another.

First, set the LUN to offline on the source filer. Make sure it is unmounted, its hosts disconnected, etc., whatever prevents any major data loss. As you can deduce, setting a LUN to the offline state prevents write access to it. Also, record its parameters, for example:

lun show -v /vol/server1/data/mydb.lun
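The offline step itself, for reference:

lun offline /vol/server1/data/mydb.lun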

Second, create the required qtree structure on the target. Make sure that the LUN is created at the root of either a volume or a qtree.

Third, use ndmpcopy:

ndmpcopy -da root:password /vol/server1/data/mydb.lun remotefiler:/vol/server1/data/

This operation will take time.

When it completes, on the target NetApp, set priv to diag ('priv set diag') and do the following:

  • Rename the LUN:
    mv /vol/server1/data/mydb.lun /vol/server1/data/mydb.not.lun
  • Create a hard-link LUN from a file (requires priv diag!)
    lun create -f /vol/server1/data/mydb.not.lun -t linux -o noreserve /vol/server1/data/mydb.lun
    (Command syntax: lun create -f <file_path> -t <ostype> [ -o noreserve ] [ -e space_alloc ] <lun_path>)
  • Remove the original file (it is hard-linked, so the data will not be affected)
    rm /vol/server1/data/mydb.not.lun
  • Resize, if required, the LUN to its original full size (relevant if the LUN was thin-provisioned)
    lun resize /vol/server1/data/mydb.lun 400g

You can now map the LUN to any relevant host, and obtain full access to its data.
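For completeness, bringing the copied LUN online and mapping it might look like this; the igroup name and LUN ID are placeholders for your own:

lun online /vol/server1/data/mydb.lun
lun map /vol/server1/data/mydb.lun my_igroup 0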