
Hot-resize disks on Linux

Monday, April 6th, 2020

After much investigation, I came to the conclusion that a full guide describing the procedure for online disk resize on Linux (especially – expanding disks) was needed. I have created a guide for RHEL5/6/7/8 (it works the same for Centos, OEL or ScientificLinux – RHEL-based Linux systems) which takes into account the following four scenarios:

  • Expanding a disk where there is a filesystem directly on disk (no partitioning used)
  • Expanding a disk where there is LVM PV directly on disk (no partitioning used)
  • Expanding a disk where there is a filesystem on partition (a single partition taking all the disk’s space)
  • Expanding a disk where there is an LVM PV on partition (a single partition taking all the disk’s space)

All four scenarios were tested with and without the use of multipath (device-mapper-multipath). Notes about using GPT compared to MBR are also given. The purpose is to provide a full guideline for hot-extending disks.

This document does not describe the process of extending disks on the storage/virtualisation/NAS side. Updating the storage client configuration to refresh the disk topology might differ between Linux versions and storage communication methods – iSCSI, FC, FCoE, AoE, local virtualised disk (VMware/KVM/Xen/XenServer/HyperV) and so on. Each connectivity/OS combination might require a different refresh method on the client. In this lab, I use iSCSI with the iSCSI software initiator.
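
For example (a general note, beyond this lab's iSCSI scope), a locally attached SCSI or virtualised disk is typically told to re-read its size through sysfs:

echo 1 > /sys/block/sda/device/rescan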

The Lab

A storage server running Linux (Centos 7) with the targetcli tools, exporting a 5GB (or more) LUN through iSCSI to Linux clients running Centos5, Centos6, Centos7 and Centos8, with the latest updates (5.11, 6.10, 7.7, 8.1). See some interesting insights on iSCSI target disk expansion using Linux LIO (targetcli command line) in my previous post.
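
For reference, a minimal sketch of exporting such a LUN with targetcli could look like this – the backstore name, backing file path and IQN are placeholders of mine, not the exact lab configuration:

targetcli /backstores/fileio create disk01 /var/lib/iscsi/disk01.img 5G
targetcli /iscsi create iqn.2020-04.lab.local:target01
targetcli /iscsi/iqn.2020-04.lab.local:target01/tpg1/luns create /backstores/fileio/disk01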

The iSCSI clients all see the disk as ‘/dev/sda’ block device. When using LVM, the volume group name is tempvg and the logical volume name is templv. When using multipath, the mpath name is mpatha. On some systems the mpath partition would appear as mpatha1 and on others as mpathap1.

The iSCSI client disks/partitions were set up like this:

Centos5:

* Filesystem on disk

mkfs.ext3 /dev/sda
mount /dev/sda /mnt

* LVM on disk

pvcreate /dev/sda
vgcreate tempvg /dev/sda
lvcreate -l 100%FREE -n templv tempvg
mkfs.ext3 /dev/tempvg/templv
mount /dev/tempvg/templv /mnt

* Filesystem on partition

parted -s /dev/sda "mklabel msdos mkpart primary 1 -1"
mkfs.ext3 /dev/sda1
mount /dev/sda1 /mnt

* LVM on partition

parted -s /dev/sda "mklabel msdos mkpart primary 1 -1 set 1 lvm on"
pvcreate /dev/sda1
vgcreate tempvg /dev/sda1
lvcreate -l 100%FREE -n templv tempvg
mkfs.ext3 /dev/tempvg/templv
mount /dev/tempvg/templv /mnt

Centos6:

* Filesystem on disk

mkfs.ext4 /dev/sda
mount /dev/sda /mnt

* LVM on disk

pvcreate /dev/sda
vgcreate tempvg /dev/sda
lvcreate -l 100%FREE -n templv tempvg
mkfs.ext4 /dev/tempvg/templv
mount /dev/tempvg/templv /mnt

* Filesystem on partition

parted -s /dev/sda "mklabel msdos mkpart primary 1 -1"
mkfs.ext4 /dev/sda1
mount /dev/sda1 /mnt

* LVM on partition

parted -s /dev/sda "mklabel msdos mkpart primary 1 -1 set 1 lvm on"
pvcreate /dev/sda1
vgcreate tempvg /dev/sda1
lvcreate -l 100%FREE -n templv tempvg
mkfs.ext4 /dev/tempvg/templv
mount /dev/tempvg/templv /mnt

Centos7/8:

* Filesystem on disk

mkfs.xfs /dev/sda
mount /dev/sda /mnt

* LVM on disk

pvcreate /dev/sda
vgcreate tempvg /dev/sda
lvcreate -l 100%FREE -n templv tempvg
mkfs.xfs /dev/tempvg/templv
mount /dev/tempvg/templv /mnt

* Filesystem on partition

parted -a optimal -s /dev/sda "mklabel msdos mkpart primary 1 -1"
mkfs.xfs /dev/sda1
mount /dev/sda1 /mnt

* LVM on partition

parted -a optimal -s /dev/sda "mklabel msdos mkpart primary 1 -1 set 1 lvm on"
pvcreate /dev/sda1
vgcreate tempvg /dev/sda1
lvcreate -l 100%FREE -n templv tempvg
mkfs.xfs /dev/tempvg/templv
mount /dev/tempvg/templv /mnt

Some variations might exist. For example, use of ‘GPT’ partition layout would result in a parted command like this:

parted -s /dev/sda "mklabel gpt mkpart ' ' 1 -1"

Also, for multipath devices, replace the block device /dev/sda with /dev/mapper/mpatha, like this:

parted -a optimal -s /dev/mapper/mpatha "mklabel msdos mkpart primary 1 -1"

There are several common tasks, such as expanding filesystems – for XFS using xfs_growfs <mount target>, and for ext3fs and ext4fs using resize2fs <device path>. The same goes for LVM expansion – using pvresize <device path>, followed by the lvextend command, followed by the filesystem expansion command as noted above.
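
Putting these together, a complete hot-expansion of the 'LVM on partition' scenario with XFS (using this lab's names, without multipath) would be a sketch like this:

iscsiadm -m node -R                        # refresh the iSCSI session to pick up the new LUN size
# expand /dev/sda1 first - see the per-OS partition expansion notes below
pvresize /dev/sda1                         # let LVM see the larger PV
lvextend -l +100%FREE /dev/tempvg/templv   # grow the LV into the new free space
xfs_growfs /mnt                            # grow XFS by mount target (resize2fs for ext3/ext4)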

The document layout

The document will describe the client commands for each OS, sorted by action. The process is as follows:

  • Expand the visible storage layout (the storage side has already expanded the LUN; now the OS needs to pick up the change)
  • (if in use) Expand the multipath device
  • (if partitioned) Expand the partition
  • Expand the LVM PV
  • Expand the filesystem

Actions

For each OS/scenario/multipath combination, we will format and mount the relevant block device, and attempt an online expansion.

Operations following disk expansion

Expanding the visible storage layout

For iSCSI, this works the same across all OS versions. For other transport types, the actions might differ.

iscsiadm -m node -R

Expanding multipath device

If using a multipath device (device-mapper-multipath), an update to the multipath device layout is required. Run the following command (on all OSes):

multipathd -k"resize map mpatha"

Expanding the partition (if disk partitions are in use)

This is the complicated part. Both the capability and the commands in use differ greatly between versions of the operating system.

Centos 5/6

Online expansion of a partition is impossible, except when device-mapper-multipath is in use, in which case we force the multipath device to refresh its paths and recreate the device. Note that this will result in an I/O error if only a single path is defined. For a non-multipath setup, an umount and re-mount is required, because the partition layout of a disk cannot be re-read while the disk is in use.

Without Multipath
fdisk /dev/sda # Delete and recreate the partition from the same starting point
partprobe # Run when disk is not mounted, or else it will not refresh partition size
With Multipath
fdisk /dev/mapper/mpatha # Delete and recreate the partition from the same starting point
partprobe
multipathd -k"reconfigure" # Sufficient for Centos 6
multipathd -k"remove path sda" # Required for Centos 5
multipathd -k"add path sda" # Required for Centos 5
# Repeat for all sub-paths of expanded device

Centos 7/8

Without Multipath
fdisk /dev/sda # Delete and recreate partition from the same starting point. Sufficient for Centos 8
partx -u /dev/sda # Required for Centos 7
With Multipath
fdisk /dev/mapper/mpatha # Delete and recreate the partition from the same starting point. Sufficient for Centos 8
kpartx -u /dev/mapper/mpatha # Can use partx

Expanding LVM PV and LV

pvresize DEVICE

DEVICE can be /dev/sda ; /dev/sda1 ; /dev/mapper/mpatha ; /dev/mapper/mpathap1 ; /dev/mapper/mpatha1 – according to the disk layout and LVM choice. Then extend the logical volume:

lvextend -l +100%FREE /dev/tempvg/templv

Expanding filesystem

For ext3fs and ext4fs
resize2fs DEVICE

DEVICE can be /dev/sda ; /dev/sda1 ; /dev/mapper/mpatha ; /dev/mapper/mpathap1 ; /dev/mapper/mpatha1 – according to the disk layout and LVM choice.
For xfs
xfs_growfs /mnt

Additional Considerations

MBR vs GPT

On most Linux versions (for Centos – up to and including version 7), the 'fdisk' command is incapable of handling the GPT partition layout. If using a GPT partition layout, the use of gdisk is recommended, if it exists for the OS. If not, parted is a decent, although somewhat limited, alternative.

The gdisk command can also convert a partition layout (at your own risk, of course) from MBR to GPT and vice versa. This is very useful for avoiding large data migrations where a legacy MBR partition layout was used on disks which are to be expanded beyond the 2TB limit.
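
A minimal sketch of such a conversion (interactive, and at your own risk – gdisk converts an MBR partition table to GPT in memory as soon as it opens the disk):

gdisk /dev/sda   # review the converted table with 'p', then write the new GPT layout with 'w'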

The GPT backup table is located at the end of the disk, so when extending a GPT disk, it is required to repair the GPT backup table location. Based on my lab tests, it is impossible to both extend the partition and repair the GPT backup table location in a single call to gdisk. Two runs are required – one to fix the GPT backup table, and then, after the changes were saved, another to extend the partition.
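
A sketch of the two runs, with the interactive gdisk keystrokes given as comments:

gdisk /dev/sda   # run 1: 'x' (expert menu), 'e' (move backup GPT structures to end of disk), 'w' (write)
gdisk /dev/sda   # run 2: 'd' (delete the partition), 'n' (recreate from the same start sector), 'w' (write)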

Storage transport

I have demonstrated the use of the iSCSI software initiator on Linux. Different storage transports exist – each may require its own method of 'notifying' the OS of a changed storage layout. See RedHat's article about disk resizing (RHN access required), which explains how to refresh the storage transport for combinations of various transports and RHEL versions and sub-versions.

Relocating LVs with snapshots

Monday, February 2nd, 2009

Linux LVM is a wonderful thing. It is scalable, flexible, and truly almost enterprise-class in every detail. It lacks, of course, in I/O performance for LVM snapshots, but this can be worked around in several creative ways (if I haven't shown them here before, I will sometime).

What it can't do is deal with a mixture of stripes, mirrors and snapshots in a single logical volume. It cannot allow you to mirror a striped LV (even if you can follow the requirements), and it cannot allow you to snapshot a mirrored or a striped volume. You get the idea: a volume you can protect, you cannot snapshot; a volume with snapshots cannot be mirrored or altered.

For the normal user, what you get is usually enough. For storage management per se, this is just not enough. When I wanted to reduce a VG – to remove a disk from an existing volume group – I had to evacuate it from any existing logical volume. The command to perform this action is 'pvmove', which is capable of relocating data from within a PV to other PVs. This is done by mirroring each logical volume and then removing the origin.

Mirroring, however, cannot be performed on LVs with snapshots, or on an already mirrored LV, so these require different handling.

We can detect which LVs reside on our physical volume by issuing the following command

pvdisplay -m /dev/sdf1

Here /dev/sdf1 is only an example; the command shows the contents of this PV. So next, performing

pvmove /dev/sdf1

would attempt to relocate every existing LV from this specific PV to any other available PV. We can use this command to change the disk balance and allocations on multi-disk volume groups; this will be discussed in a later post.
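
As a quick example of the building block for such rebalancing, pvmove also accepts an explicit destination PV (the destination disk here is a placeholder of mine):

pvmove /dev/sdf1 /dev/sdg1   # relocate all extents from /dev/sdf1 specifically onto /dev/sdg1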

Following a 'pvmove' command, all linear volumes are relocated, if space permits, to other PVs. The remaining LVs are either mirrored LVs or LVs with snapshots.

To relocate a mirrored LV, you need to un-mirror it first. To do so, first detect, using 'pvdisplay', which LV the mirror belongs to (the name should be easy to follow), and then convert it to non-mirrored:

lvconvert -m0 /dev/VolGroup00/test-mirror

This will convert it to a linear volume instead of a mirror, so you can move it if it still resides on the PV you are about to remove.

Snapshot volumes are more complicated, due to their nature. Since all my snapshots are of filesystems, I could allow myself to use tar to perform the action.

The steps are as follow:

  1. tar the contents of the snapshot source to nowhere, but save an incremental file
  2. Copy the source incremental file to a new name, and tar the contents of a snapshot according to this copy.
  3. Repeat the previous step for each snapshot.
  4. Remove all snapshots
  5. Relocate the snapshot source using 'pvmove'
  6. Build the snapshots and then recover the data into them

This is a script to do steps 1 to 3. It will not remove LVs, for obvious reasons. This script was not tested, but should work, of course 🙂

None of the LVs should be mounted for it to function. It is better to have stricter requirements than to destroy data by double-mounting it, or by accessing it while it is being changed.

#!/bin/bash
# Get: VG Base-LV, snapshot name, snapshot name, snapshot name...
# Example:
# ./backup VolGroup00 base snap1 snap2 snap3
# Written by Ez-Aton

TARGET=/tmp
if [ "$#" -lt 3 ]   # need at least the VG, the base LV and one snapshot name
then
   echo "Parameters: $0 VG base snap snap snap snap"
   exit 1
fi
VG=$1
BASE=$2
shift 2

function check_not_mounted () {
   # Return success (0) only if the LV is NOT currently mounted
   if mount | grep -q /dev/mapper/${VG}-${1}
   then
      return 1
   else
      return 0
   fi
}

function create_base_diff () {
   # This function will create the diff file for the base
   mount /dev/${VG}/${BASE} $MNT
   if [ $? -ne 0 ]
   then
      echo "Failed to mount base"
      exit 1
   fi
   cd $MNT
   tar -g $TARGET/${BASE}.tar.gz.diff -czf - . > /dev/null
   cd -
   umount $MNT
}

function create_snap_diff () {
   # This function will create the diff file for a snapshot, based on the base diff
   mount /dev/${VG}/${1} $MNT
   if [ $? -ne 0 ]
   then
      echo "Failed to mount snapshot ${1}"
      exit 1
   fi
   cp $TARGET/${BASE}.tar.gz.diff $TARGET/$1.tar.gz.diff
   cd $MNT
   tar -g $TARGET/${1}.tar.gz.diff -czf $TARGET/${1}.tar.gz .
   cd -
   umount $MNT
}

function create_mount () {
   # Creates a temporary mount point
   if [ ! -d /mnt/$$ ]
   then
      mkdir /mnt/$$
   fi
   MNT=/mnt/$$
}

create_mount
if check_not_mounted $BASE
then
   create_base_diff
else
   echo "$BASE is mounted. Exiting now"
   exit 1
fi
for i in "$@"
do
   if check_not_mounted $i
   then
      create_snap_diff $i
   else
      echo "$i is mounted! I will not touch it!"
   fi
done

The remaining steps should be rather easy – just mount the newly created snapshots and restore the tar files onto them.
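
For completeness, here is an untested sketch of steps 4 to 6, using the example names from the script above (the snapshot size and the mount point are arbitrary assumptions of mine):

lvremove /dev/VolGroup00/snap1                    # step 4: remove each snapshot (repeat per snapshot)
pvmove /dev/sdf1                                  # step 5: relocate the snapshot source LV
lvcreate -s -L 1G -n snap1 /dev/VolGroup00/base   # step 6: recreate the snapshot...
mkdir -p /mnt/restore
mount /dev/VolGroup00/snap1 /mnt/restore
tar -xzf /tmp/snap1.tar.gz -C /mnt/restore        # ...and recover the data into it
umount /mnt/restore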