Posts Tagged 'lun'


Monday, July 14th, 2008

Linux works perfectly well with multiple storage links using dm-multipath. Not only that, but HP has released its own variant of dm-multipath, which is claimed to be optimized (or, at the very least, well configured) for EVA and MSA storage devices.

This is great; however, what do you do when mapping volume snapshots through dm-multipath? Each new snapshot gets a new WWID, which maps to a new "mpath" name, or to the raw WWID (if "user_friendly_names" is not set). This can, and will, wreak havoc on remote scripts: on each reboot the new and shiny snapshot will acquire a new name, making scripting a hellish experience.

For the time being I have not tested ext3 labels. I suspect that using labels will fail, as the dm-multipath overlay device does not hide the underlying sd devices; the system might therefore detect the same label more than once: once for each underlying device, and once for the dm-multipath overlay.

A solution which is both elegant and useful is to fix the snapshot's WWID through a small alteration to the SSSU command. Append a string such as this to the snap create command:
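For example (the snapshot name, vdisk path and WWID below are made up, and the WORLD_WIDE_LUN_NAME parameter name is from memory, so verify it against your SSSU reference before use):

```
ADD SNAPSHOT snap_of_vd01 VDISK="\Virtual Disks\vd01\ACTIVE" WORLD_WIDE_LUN_NAME=6005-08b4-0010-1234-0000-9000-0c5a-0000
```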


Don't use the numbers supplied here; "invent" your own.

Mind you that you must use dashes, else the command will fail.

Doing so will allow you to always use the same WWID for the snapshots, and thus – save tons of hassle after system reboot when accessing snapshots through dm-multipath.

Oracle ASM and EMC PowerPath

Wednesday, May 28th, 2008

Setting up Oracle ASM disks is rather simple, and the procedure can easily be obtained from here, for example. This is nice and pretty, and works well for most environments.

EMC PowerPath creates meta devices which utilize the underlying paths, as the Linux SCSI layer sees them, without hiding them (unlike IBM's RDAC, for example). As a result, each LUN can be accessed either through the PowerPath meta device (/dev/emcpower*) or through any of the underlying SCSI disk devices (/dev/sd*). You can list the paths behind a given meta device by running the command

powermt display dev=emcpowera

where 'emcpowera' is just an example; it can be any of your PowerPath meta devices. The output lists the underlying SCSI devices.
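If you need the underlying device names in a script, the sd entries can be pulled out of saved powermt output with a quick-and-dirty sketch like the following (the sample output below is abridged and partly made up, so adjust the pattern to what powermt prints on your system):

```shell
# Extract underlying sd device names from saved `powermt display dev=...` output.
powermt_paths() {
    awk '{ for (i = 1; i <= NF; i++) if ($i ~ /^sd[a-z]+$/) print $i }' "$1"
}

# Abridged, partly fabricated sample of powermt output:
cat > /tmp/powermt.out <<'EOF'
Pseudo name=emcpowera
==============================================================================
   0 qla2xxx                   sdc       SP A0     active  alive      0      0
   1 qla2xxx                   sdk       SP B1     active  alive      0      0
EOF

powermt_paths /tmp/powermt.out    # prints: sdc and sdk, one per line
```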

During startup, Oracle ASM (startup script: /etc/init.d/oracleasm) scans all block devices for ASM headers. On a system with many LUNs this can take a while (half an hour, and sometimes much more). Worse, since ASM scans the available block devices in a semi-random order, chances are very high that a /dev/sd* device will be used instead of the /dev/emcpower* block device. This degrades performance where an active-active PowerPath configuration has been set (the multipathing simply will not be used), and moreover, a failure of that specific link will make the LUN inaccessible through that path, regardless of any other existing paths to the LUN.

To "set things right", you need to edit /etc/sysconfig/oracleasm and exclude all 'sd' devices from the ASM scan.
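On the oracleasm versions I have dealt with, the relevant variables are ORACLEASM_SCANORDER and ORACLEASM_SCANEXCLUDE (both match on device-name prefixes); the excerpt below is a sketch, so check the comments in your own /etc/sysconfig/oracleasm:

```
# /etc/sysconfig/oracleasm (excerpt)
ORACLEASM_SCANORDER="emcpower"   # scan PowerPath devices first
ORACLEASM_SCANEXCLUDE="sd"       # never scan the underlying sd* paths
```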

To verify that you're actually using the right block device:

/etc/init.d/oracleasm listdisks

Select any one of the DG disks, and then

/etc/init.d/oracleasm querydisk DATA1
Disk "DATA1" is a valid ASM disk on device [120, 6]

The numbers are the major and minor of the block device. You can easily find the device through this command:

ls -la /dev/ | grep "MAJOR," | grep MINOR

In our example the MAJOR is 120 and the MINOR is 6 (note that ls prints the major with a trailing comma). The result should be a single block device.

If you're using EMC PowerPath, your block device major will be 120 or a nearby number. If you're (mistakenly) using one of the underlying paths, the major will be 8 or nearby. If you're using Linux LVM, the major will be around 253. The expected result when using EMC PowerPath is always a major of around 120, i.e. always the /dev/emcpower* devices.
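Instead of grepping ls output, the major and minor can be read directly with stat (a small sketch; this assumes GNU coreutils stat on Linux):

```shell
# Print MAJOR:MINOR (decimal) of a device node.
# stat's %t and %T format codes give the major and minor in hex.
majmin() {
    printf '%d:%d\n' "0x$(stat -c '%t' "$1")" "0x$(stat -c '%T' "$1")"
}

majmin /dev/null    # on Linux this prints 1:3
```

Run it against the device ASM reported; a major of 120 (or thereabouts) means you are indeed on the PowerPath meta device.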

Excluding the sd devices also decreases the boot time rather dramatically.

iSCSI target/client for Linux in 5 whole minutes

Tuesday, December 4th, 2007

I was playing a bit with iSCSI initiator (client) and decided to see how complicated it is to setup a shared storage (for my purposes) through iSCSI. This proves to be quite easy…

On the server:

1. Download iSCSI Enterprise Target from here, or install scsi-target-utils from the CentOS 5 repository

2. Compile (if required) and install on your server. Note: you will need the kernel-devel package

3. Create a test Logical Volume:

lvcreate -L 1G -n iscsi1 /dev/VolGroup00

4. Edit your /etc/ietd.conf file to look something like this (the target name below is just an example IQN; use your own):

Target iqn.2007-12.com.example:storage.iscsi1
Lun 0 Path=/dev/VolGroup00/iscsi1,Type=fileio
InitialR2T Yes
ImmediateData No
MaxRecvDataSegmentLength 8192
MaxXmitDataSegmentLength 8192
MaxBurstLength 262144
FirstBurstLength 65536
DefaultTime2Wait 2
DefaultTime2Retain 20
MaxOutstandingR2T 8
DataPDUInOrder Yes
DataSequenceInOrder Yes
ErrorRecoveryLevel 0
HeaderDigest CRC32C,None
DataDigest CRC32C,None
# various target parameters
Wthreads 8

5. Start iscsi-target service:

/etc/init.d/iscsi-target start

On the client:

1. Install the open-iscsi package. It is called iscsi-initiator-utils on RHEL 5 and CentOS 5

2. Run detection command:

iscsiadm -m discovery -t sendtargets -p <server IP address>

3. You should get a nice reply, something like the following, where <IP> refers to the server's IP
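The reply lists one line per target, portal first (the target name here is hypothetical):

```
<IP>:3260,1 iqn.2007-12.com.example:storage.iscsi1
```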


4. Login to the devices using the following command:

iscsiadm -m node -T <target name> -p <IP>:3260,1 -l

5. Run fdisk to view your new disk

fdisk -l

6. To disconnect the iSCSI device, run the following command:

iscsiadm -m node -T <target name> -p <IP>:3260,1 -u

This does not set up the iSCSI initiator at boot time; for that you will have to google your own distro and its nuts and bolts. It does, however, give you a proof of concept of a working iSCSI setup

Good luck!

Hot-Adding SAN lun to Linux (RH with Qlogic drivers)

Monday, September 18th, 2006

Run "cat /proc/scsi/qla2xxx/$Z", where $Z represents the SCSI host number the Qlogic adapter has taken for itself, and you'll get something like this:





SCSI LUN Information:
(Id:Lun) * – indicates lun is not registered with the OS.
( 0: 0): Total reqs 63185608, Pending reqs 0, flags 0x2, 0:0:81 00

Assuming you've just added the next LUN (in our case, LUN 1), after a reboot you would get an additional line such as:

( 0: 1): Total reqs 1923, Pending reqs 0, flags 0x2, 0:0:81 00

However, on a production server we want to add this line without a reboot.

To achieve this goal, we need to run the following command:

echo "scsi-qlascan" > /proc/scsi/qla2xxx/$Z

where $Z represents, like before, the SCSI host number.

Then you get the additional line(s) in the file. Now you should help Linux see the new devices (and attach a driver to them). You can do it by using the following convention (taken from here). Using "dmesg", you can obtain the details required for the next stage: Controller, Channel, Target and LUN. Example:

Host: scsi2 Channel: 00 Id: 00 Lun: 01
Vendor: IBM Model: 1742 Rev: 0520
Type: Direct-Access ANSI SCSI revision: 03

Obtain the following details:

Controller=2 (scsi2)

Channel=0 (Channel: 00)

Target=0 (Id: 00)

LUN=1 (Lun: 01)

We will ask Linux nicely to attach the device. Replace the descriptors with their numeric values:

echo "scsi add-single-device Controller Channel Target LUN" > /proc/scsi/scsi

In our example:

echo "scsi add-single-device 2 0 0 1" > /proc/scsi/scsi

To remove a device prior to unmapping it from the SAN, replace add-single-device with “remove-single-device”.
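If you do this often, composing the control string can be wrapped in a tiny helper (a hypothetical convenience, not from the original procedure; writing to /proc/scsi/scsi still requires root):

```shell
# Build the control string written to /proc/scsi/scsi.
# usage: scsi_cmd add|remove HOST CHANNEL TARGET LUN
scsi_cmd() {
    echo "scsi $1-single-device $2 $3 $4 $5"
}

# As root you would then run, e.g.:  scsi_cmd add 2 0 0 1 > /proc/scsi/scsi
scsi_cmd add 2 0 0 1    # prints: scsi add-single-device 2 0 0 1
```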

This post's Qlogic discovery was the insight of a friend of mine, and the credit is his.

HP-UX – allowed shells, and connecting FC Multipath to NetApp

Thursday, August 10th, 2006

When adding a certain shell to an HP-UX system, for example /usr/bin/tcsh, each user set to use this shell will not be able to FTP to the machine until there is an entry for it in /etc/shells. The trick is that even if the file doesn't exist, you have to create it. By default HP-UX allows only the /sbin/sh and /bin/sh shells, but as soon as you set up this file you can allow more shells. Mind you that you have to include /sbin/sh and /bin/sh in /etc/shells, or other things might not work correctly. Taken from here.
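Following the rule above, a minimal /etc/shells for the tcsh example would be:

```
/sbin/sh
/bin/sh
/usr/bin/tcsh
```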

Connecting HP-UX to SAN storage is never too simple. The actual list of actions is:

1. Install HP-UX drivers for the FC adapter

2. Map the machine's PWWN (obtained by reading the sticker at the back of the machine, or by querying the storage/SAN switch) to the relevant LUNs.

3. Run “/usr/sbin/ioscan -fnC disk” and see that the new disk devices are detected.

4. Run “/usr/sbin/ioinit -i” to create the relevant device files.

A note: HP-UX might require a reboot after the initial connection. In several cases I've noticed that if the server had been running for a while with a disconnected fiber, only a connection made before startup would result in link-up and SAN registration. Of course, the driver must already be installed at that point.

If you are to connect your HP-UX to a NetApp device, as we did, allow a day (or more) of notice and open a "NOW" account on NetApp's support site. There you can find documentation about HP-UX (including step-by-step guides), the "SAN Attach Kit for HP-UX", which will make your life easier, and a set of best-practice guides. Just follow these guides, and you will find it an easy and simple task.