Archive for the ‘Disk Storage’ Category

ZFS clone script

Sunday, March 28th, 2021

ZFS has some magical features, comparable to NetApp’s WAFL capabilities. One of the less-used ones is ZFS send/receive, which can be utilised as the engine behind something much like NetApp’s SnapMirror or SnapVault.

The idea, if you are not familiar with NetApp’s products, is to take a snapshot of a dataset on the source and clone it to a remote storage system. Then, take another snapshot and clone only the delta between the two snapshots, and so on. This way only block-level changes are cloned, which reduces both the clone payload and the time required to transfer it.
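Conceptually, the mechanism underneath is a handful of ZFS commands. A minimal sketch (the dataset, host and pool names here – share/test, backupsrv, backup – are placeholders):

zfs snapshot share/test@snap1
zfs send share/test@snap1 | ssh backupsrv zfs receive -F backup/test                     # full baseline
# ...later, after data has changed:
zfs snapshot share/test@snap2
zfs send -i share/test@snap1 share/test@snap2 | ssh backupsrv zfs receive backup/test    # delta only

The script below implements this flow (using nc for the data stream rather than piping through SSH) and adds locking, snapshot numbering, resume retries and cleanup.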

Copy and save this file as clone_zfs_snapshots.sh. Give it execution permissions.

#!/bin/bash
# This script will clone ZFS snapshots incrementally over SSH to a target server
# Snapshot name structure: ${SRC_FS}@${TGT_HASH}_INT ; where INT is an increment number
# Written by Etzion. Feel free to use. See more stuff in my blog at https://run.tournament.org.il
# Arguments:
# $1: ZFS filesystem name
# $2: (target ZFS system):(target ZFS filesystem)
 
IAM=$0
ZFS=/sbin/zfs
LOCKDIR=/dev/shm
LOCAL_SNAPS_TO_LEAVE=3
RESUME_LIMIT=3
 
### FUNCTIONS ###
 
# Sanity and usage
function usage() {
	echo "Usage: $IAM SRC REMOTE_SERVER:ZFS_TARGET (port=SSH_PORT)"
	echo "ZFS_TARGET is the parent of filesystems which will be created with the original source names"
	echo "Example: $IAM share/test backupsrv:backup"
	echo "It will create a filesystem 'test' under the pool 'backup' on 'backupsrv' with clone"
	echo "of the current share/test ZFS filesystem"
	echo "This script is (on purpose) not a recursive script"
	echo "For the script to work correctly, it *must* have SSH key exchanged from source to target"
	exit 0
}
 
function abort() {
	# exit errorously with a message
	echo "[email protected]"
	pkill -P $$
	remove_lock
	exit 1
}
 
function parse_parameters() {
	# Parses command line parameters
	# called with $*
	SRC_FS=$1
	shift
	TGT=$1
	shift
	for i in $*
	do
		case ${i} in
			port=*)	PORT=${i##*=}
			;;
			hash=*)	HASH=${i##*=}
			;;
		esac
	done
	TGT_SYS=${TGT%%:*}
	TGT_FS=${TGT##*:}
	# Use a short substring of MD5sum of the target name for later unique identification
	SRC_DIRNAME_FS=${SRC_FS#*/}
	if [ -z "$hash" ]
	then
		TGT_FULLHASH="`echo $TGT_FS/${SRC_DIRNAME_FS} | md5sum -`"
		TGT_HASH=${TGT_FULLHASH:1:7}
	else
		TGT_HASH=${hash}
	fi
 
}
 
function sanity() {
	# Verify we have all details
	[ -z "$SRC_FS" ] && usage
	[ -z "$TGT_FS" ] && usage
	[ -z "$TGT_SYS" ] && usage
	$ZFS list -H -o name $SRC_FS > /dev/null 2>&1 || abort "Source filesystem $SRC_FS does not exist"
	# check_target_fs || abort "Target ZFS filesystem $TGT_FS on $TGT_SYS does not exist, or not imported"
}
 
function remove_lock() {
	# Removes the lock file
	\rm -f ${LOCKDIR}/$SRC_LOCK
}
 
function construct_ssh_cmd() {
	# Construct the remote SSH command
	# Here is a good place to put atomic parameters used for the SSH
	[ -z "${PORT}" ] && PORT=22
	SSH="ssh -p $PORT $TGT_SYS -o ConnectTimeout=3"
	CONTROL_SSH="$SSH -f"
}
 
function get_last_remote_snapshots() {
	# Gets the last snapshot name on a remote system, to match it to our snapshots
	remoteSnapTmpObj=`$SSH "$ZFS list -H -t snapshot -r -o name ${TGT_FS}/${SRC_DIRNAME_FS}" | grep ${SRC_DIRNAME_FS}@ | grep ${TGT_HASH}`
	# Create a list of all snapshot indexes. Empty means it's the first one
	remoteSnaps=""
	for snapIter in ${remoteSnapTmpObj}
	do
	  remoteSnaps="$remoteSnaps ${snapIter##*@${TGT_HASH}_}"
	done
}
 
function check_if_remote_snapshot_exists() {
	# Checks whether the snapshot with the current index ($newLocalIndex) exists on the remote node
	$SSH "$ZFS list -H -t snapshot -r -o name ${TGT_FS}/${SRC_DIRNAME_FS}@${TGT_HASH}_${newLocalIndex}"
	return $?
}
 
function get_last_local_snapshots() {
	# This function will return an array of local existing snapshots using the existing TGT_HASH
    localSnapTmpObj=`$ZFS list -H -t snapshot -r -o name $SRC_FS | grep $SRC_FS@ | grep $TGT_HASH `
    # Convert into a list and remove the HASH and everything before it. We should have a clear list of indexes
    localSnapList=""
    for snapIter in ${localSnapTmpObj}
    do
    	localSnapList="$localSnapList ${snapIter##*@${TGT_HASH}_}"
    done
    # Convert object to array
    localSnapList=( $localSnapList )
    # Get the last object
    let localSnapArrayObj=${#localSnapList[@]}-1
}
 
function delete_snapshot() {
	# This function will delete a snapshot
	# arguments: $1 -> snapshot name
	[ -z "$1" ] && abort "Cleanup snapshot got no arguments"
	$ZFS destroy $1
	#$ZFS destroy ${SRC_FS}@${TGT_HASH}_${newLocalIndex}
}
 
function find_matching_snapshot() {
	# This function will attempt to find a matching snapshot as a replication baseline
	# Gets the latest local snapshot index
	localRecentIndex=${localSnapList[$localSnapArrayObj]}
    # Gets the latest mutual snapshot index
    while [ $localSnapArrayObj -ge 0 ]
    do
    	# Check if the current counter already exists
    	if echo "$remoteSnaps" | grep -w ${localSnapList[$localSnapArrayObj]} > /dev/null 2>&1
    	then
    		# We know the mutual index.
    		commonIndex=${localSnapList[$localSnapArrayObj]}
    		return 0
    	fi
    	let localSnapArrayObj--
    done
    # If we've reached here - there is no mutual index!
    abort "There is no mutual snapshot index, you will have to resync"
}
 
function cleanup_snapshots() {
	# Creates a list of snapshots to delete and then calls delete_snapshot function
	# We are using the most recent common index, $localSnapArrayObj as the latest reference for deletion
	let deleteArrayObj=$localSnapArrayObj-${LOCAL_SNAPS_TO_LEAVE}
	snapsToDelete=""
	# Construct a list of snapshots to delete, and delete it in reverse order
	while [ $deleteArrayObj -ge 0 ]
	do
		# Construct snapshot name
		snapsToDelete="$snapsToDelete ${SRC_FS}@${TGT_HASH}_${localSnapList[$deleteArrayObj]}"
		let deleteArrayObj--
	done
	snapsToDelete=( $snapsToDelete )
 
	snapDelete=0
 
	while [ $snapDelete -lt ${#snapsToDelete[@]} ]
	do
		# Delete snapshot
		delete_snapshot ${snapsToDelete[$snapDelete]}
		let snapDelete++
	done
}
 
function initialize() {
	# This is a unique case where we initialize the first sync
	# We will call this procedure when $remoteSnaps is empty (meaning that there was no snapshot whatsoever)
	# We have to verify that the target has no existing old snapshots here
	# is it empty?
	echo "Going to perform an initialization replication. It might wipe the target $TGT_FS completely"
	echo "Press Enter to proceed, or Ctrl+C to abort"
	read "abc"
	### Decided to remove this check
	### [ -n "$LOCSNAP_LIST" ] && abort "No target snapshots while local history snapshots exists. Clean up history and try again"
	RECEIVE_FLAGS="-sFdvu"
	newLocalIndex=1
	# NEW_LOC_INDEX=1
	create_local_snapshot $newLocalIndex
	open_remote_socket
	sleep 1
	$ZFS send -ce ${SRC_FS}@${TGT_HASH}_${newLocalIndex} | nc $TGT_SYS $NC_PORT 2>&1
	if [ "$?" -ne "0" ]
	then
		# Do no cleanup current snapshot
		# delete_snapshot ${SRC_FS}@${TGT_HASH}_${newLocalIndex}
		abort "Failed to send initial snapshot to target system"
	fi
	sleep 1
	# Set target to RO
	$SSH $ZFS set readonly=on $TGT_FS
	[ "$?" -ne "0" ] && abort "Failed to set remote filesystem $TGT_FS to read-only" # No need to remove local snapshot
}
 
function create_local_snapshot() {
	# Creates snapshot on local storage
	# uses argument $1
	[ -z "$1" ] && abort "Failed to get new snapshot index"
	$ZFS snapshot ${SRC_FS}@${TGT_HASH}_${1}
	[ "$?" -ne "0" ] && abort "Failed to create local snapshot. Check error message"
}
 
function open_remote_socket() {
	# Starts remote socket via SSH (as the control operation)
	# port is 3000 plus a random number between 0 and 999
	let NC_PORT=3000+$RANDOM%1000
	$CONTROL_SSH "nc -l -i 90 $NC_PORT | $ZFS receive ${RECEIVE_FLAGS} $TGT_FS > /tmp/output 2>&1 ; sync"
	#$CONTROL_SSH "socat tcp4-listen:${NC_PORT} - | $ZFS receive ${RECEIVE_FLAGS} $TGT_FS > /tmp/output 2>&1 ; sync"
	#zfs send -R [email protected] | zfs receive -Fdvu zpnew
}
 
function send_zfs() {
	# Do the heavy lifting of opening remote socket and starting ZFS send/receive
	open_remote_socket
	sleep 1
	$ZFS send -ce -I ${SRC_FS}@${TGT_HASH}_${commonIndex} ${SRC_FS}@${TGT_HASH}_${newLocalIndex} | nc -i 90 $TGT_SYS $NC_PORT 
	#$ZFS send -ce -I ${SRC_FS}@${TGT_HASH}_${commonIndex} ${SRC_FS}@${TGT_HASH}_${newLocalIndex} | socat tcp4-connect:${TGT_SYS}:${NC_PORT} -
	sleep 20
 
}
 
function increment() {
	# Create a new snapshot with the index $localRecentIndex+1, and replicate it to the remote system
	# Baseline is the most recent common snapshot index $commonIndex
	RECEIVE_FLAGS="-Fsdvu" # With an 'F' flag maybe?
	# Handle the case where the latest snapshot on the DR side is newer than our latest local snapshot, due to mistaken deletion
	remoteSnaps=( $remoteSnaps )
	let remoteIndex=${#remoteSnaps[@]} # Get last snapshot on DR
	if [ ${localRecentIndex} -lt ${remoteIndex} ]
	then
		let newLocalIndex=${remoteIndex}+1
	else
		let newLocalIndex=localRecentIndex+1
	fi
	create_local_snapshot $newLocalIndex
 
	send_zfs
 
	# if [ "$?" -ne "0" ]
	# then
 
		# Cleanup current snapshot
		#delete_snapshot ${SRC_FS}@${TGT_HASH}_${newLocalIndex}
		#abort "Failed to send incremental snapshot to target system"
	# fi
	if ! verify_correctness
	then
 
		if ! loop_resume # If we can
		then
			# We either could not resume the operation, or failed within the allowed number of resume attempts
			# For now we abort. 
			echo "Deleting local snapshot"
			delete_snapshot ${SRC_FS}@${TGT_HASH}_${newLocalIndex}
			abort "Remote snapshot should have the index of the latest snapshot, but it is not. The current remote snapshot index is ${commonIndex}"
		fi
	fi
}
 
function loop_resume() {
	# Attempts to loop over resuming until limit attempt has been reached
	REMOTE_TOKEN=$($SSH "$ZFS get -Ho value receive_resume_token ${TGT_FS}/${SRC_DIRNAME_FS}")
	if [ "$REMOTE_TOKEN" == "-" ]
	then
		return 1
	fi
	# We have a valid resume token. We will retry
	COUNT=1
	while [ "$COUNT" -le "$RESUME_LIMIT" ]
	do
		# For ease of handling - for each iteration, we will request the token again
		echo "Attempting resume operation" 
		REMOTE_TOKEN=$($SSH "$ZFS get -Ho value receive_resume_token ${TGT_FS}/${SRC_DIRNAME_FS}")
		let COUNT++
		open_remote_socket
		$ZFS send -e -t $REMOTE_TOKEN | nc -i 90 $TGT_SYS $NC_PORT
		#$ZFS send -e -t $REMOTE_TOKEN | socat tcp4-connect:${TGT_SYS}:${NC_PORT} -
		sleep 20
		if verify_correctness
		then
			echo "Done"
			return 0
		fi
	done
	# If we've reached here, we have exhausted the allowed resume attempts without success
	return 1
}
 
function verify_correctness() {
	# Check remote index, and verify it is correct with the current, latest snapshot
 
    if check_if_remote_snapshot_exists
    then
    	echo "Replication Successful"
    	return 0
    else
    	echo "Replication failed"
    	return 1
    fi
}
 
### MAIN ###
[ `whoami` != "root" ] && abort "This script has to be called by the root user"
[ -z "$1" ] && usage
parse_parameters $*
SRC_LOCK=`echo $SRC_FS | tr / _`
if [ -f ${LOCKDIR}/$SRC_LOCK ] 
then
	echo "Already locked. If should not be the case - remove ${LOCKDIR}/$SRC_LOCK"
	exit 1
fi
sanity
touch ${LOCKDIR}/$SRC_LOCK
construct_ssh_cmd
get_last_remote_snapshots # Have a string list of remoteSnaps
# If we don't have a remote snapshot, this should be an initialization run
if [ -z "$remoteSnaps" ]
then
	initialize
	echo "completed initialization. Done"
	remove_lock
	exit 0
fi
 
# We can get here only if it is not initialization
get_last_local_snapshots # Have a list (array) of localSnaps
find_matching_snapshot # Get the latest local index and the latest common index available
increment # Creates a new snapshot and sends/receives it
cleanup_snapshots # Cleans up old local snapshots
pkill -P $$
remove_lock
echo "Done"

The initial run should be performed manually. If you expect a very long initial sync, you should run it inside tmux or screen, to avoid it failing in the middle.

To run the command, run it like this:

./clone_zfs_snapshots.sh share/my-data backuphost:share

This will create, under the pool ‘share’ on the host ‘backuphost’, a filesystem matching the source (in this case: share/my-data) and set it to read-only. The script creates a snapshot with a unique name based on a shortened hash of the destination, with an incrementing numeric suffix, and starts cloning the snapshot to the remote host. When called again, it creates a snapshot with the same name but a higher index, and clones only the delta to the remote host. In case of a disconnection, the clone will retry a few times before failing.
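To run it periodically after the initial sync, a cron entry along these lines will do (a sketch only – the script location, schedule and log path are assumptions):

# /etc/cron.d/zfs-clone (hypothetical) - replicate share/my-data to backuphost every hour
0 * * * * root /root/clone_zfs_snapshots.sh share/my-data backuphost:share >> /var/log/zfs-clone.log 2>&1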

Note that the receiving side does not remove snapshots, so handling (too) old snapshots on the backup host remains up to you.
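A possible starting point for such cleanup, run on the backup host, is shown below (a sketch only – it assumes a GNU userland, the hash prefix the script generated in place of TGT_HASH, and that you want to keep the newest three snapshots):

# List the replicated snapshots of backup/my-data, oldest first, and destroy all but the newest three
zfs list -H -t snapshot -o name -s creation backup/my-data | grep "@${TGT_HASH}_" | head -n -3 | \
  while read snap ; do zfs destroy "$snap" ; done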

Extend /boot from within a Linux system

Saturday, March 27th, 2021

This is a tricky one. In order to resize /boot, which is commonly the first partition, you need to push forward the beginning of the next partition. This is not an easy task, especially if you are not using LVM – then you have to use external partition-modification tools, like PQMagic (if it still exists, who knows?), or other such offline tools.

However, if you are using LVM, there is a (complex) trick to it. We need to evict some of the first few PEs, resize the partition to begin at a new location, and then re-sign (and restore) the LVM meta-data in a way which will reflect the relative change in data blocks position (aka – the new PEs). To have some additional grasp of LVM and its meta-data, I recommend you read my article here.

Also, and this is an important note – you cannot change an open (in-use) partition on systems prior to RHEL 8 (on which I have not tested my solution just yet) – meaning – you can change the partition layout on the disk, but the kernel will not refresh that information and would not act accordingly until reboot.

If you have not tried this before, or not sure about all the details in this post, I urge you to use a VM for testing purposes. A failure in this process might leave your data inaccessible, and you do not want that.

So, we have a complex set of tasks:

  • If there is some empty space somewhere on the LVM PV, migrate the first X blocks out.
  • Export the LVM meta-data so we could edit it afterwards
  • Recreate the partition (delete and recreate) with a new starting location
  • (here comes the tricky part) – sign the partition’s updated beginning with LVM meta-data, with the updated relative block locations.

Assumptions:

  • The disk partition layout is /boot as /dev/sda1 and LVM PV on /dev/sda2
  • The LVM VG name is ‘VG’
  • We are using a modern dracut-capable system, such as RHEL/CentOS version 6 and above (not tested on version 8 yet)
  • We use basic (msdos) partition layout and not GPT

Clear 500MB for further use, if not enough free space in PV:

In order to do so, we will need 500MB of free space in our PV. If space is an issue, you can easily free some from swap, by stopping swap, reducing the swap LV size, re-signing it with ‘mkswap’ and starting swap again. That is it in a nutshell, and I will not go further into it.
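A rough sketch of that swap trick, assuming the swap LV is called ‘lvswap’ inside the VG ‘VG’ (adjust names and sizes to your setup):

swapoff /dev/VG/lvswap             # stop using the swap LV
lvreduce -L -500M /dev/VG/lvswap   # shrink it by 500MB, freeing PEs in the VG
mkswap /dev/VG/lvswap              # re-sign the now-smaller LV as swap
swapon /dev/VG/lvswap              # start using it again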

Move the first 500MB out of the beginning of the PV:

We need to do some math. The size of a single PE is defined in the LVM VG settings. By default it is 4MB today, and it can be checked using the ‘vgdisplay’ command – look for the field ‘PE Size’. With 4MB PEs, 500MB is 125 PEs, so our command would be:

pvmove --alloc anywhere /dev/sda2:0-124

This will migrate the first 125 PEs (positions 0 through 124) elsewhere in the VG.
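If your PE size is not 4MB, a quick way to derive the number of PEs covering 500MB (a sketch, assuming bash and the VG name ‘VG’ from the assumptions above):

PE_SIZE_MB=$(vgdisplay VG | awk '/PE Size/ {print int($3)}')   # PE size in whole MB (vgdisplay prints e.g. "4.00 MiB")
echo $(( 500 / PE_SIZE_MB ))                                   # number of PEs to move for 500MB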

Export LVM meta-data to a file, and edit it for future handling:

vgcfgbackup -f /tmp/vg-orig.txt VG 

This command will create a file called /tmp/vg-orig.txt which will contain the original VG meta-data copy. We will clone this file and edit it:

cp /tmp/vg-orig.txt /tmp/vg.txt

Now comes the more complex part. We need to adjust the meta-data file to reflect the relative change in the block location. We will edit the new /tmp/vg.txt file. Initially – find the block describing ‘pv0’, which is the first PV in your VG (and maybe the only one), and verify that ‘pv0’ is the correct device, by verifying the ‘device’ directive in this block.
Now comes the harder part. Each LV block in the meta-data file has a sub-section describing disk segments. These blocks describe the relative location of the LV on the PVs. I have already pointed at my article describing the meta-data file and how to read it. The task is to find the ‘stripes’ directive in each LV sub-segment, and reduce its starting PE number by the number of PEs we moved – in our case, 125. It needs to be done for all LVs which reside on our ‘pv0’ – one after the other. An example would look like this:

lvswap { ### Another LV
			id = "E3Ei62-j0h6-cGu5-w9OB-l9tU-0Qf5-f09bvh"
			status = ["READ", "WRITE", "VISIBLE"]
			flags = []
			creation_host = "localhost.localdomain"
			creation_time = 1594157749	# 2019-01-01 08:42:29 +0000
			segment_count = 1

			segment1 {
				start_extent = 0  ### The first LE of the LV. On LEs - later ###
				extent_count = 94	# 2.9375 Gigabytes

				type = "striped"
				stripe_count = 1	# linear

				stripes = [
					# Was: "pv0", 1813. Now:
					"pv0", 1688 ### reduced 125 PEs ###
				]
			}
		}

Copy the resulting file /tmp/vg.txt (after double-checking it!) to /boot. We will use /boot later on to re-sign the PV meta-data.

Recreate the partition:

Another tricky part. You cannot just resize a partition – or at least not without the tool (parted or fdisk, depending on your OS version) attempting to resize the layer above it, and failing to do so. Most tools do not allow changing the size of a partition at all, so we will need to delete and recreate the partition layout. Now, depending on whether you are using GPT or msdos partitioning, your tools might vary; in this case I handle only the msdos partition layout, so the tools are chosen accordingly. Other tools apply for GPT layout, and the process, in general, will work on GPT as well.

So – we will back up the partition layout before we change it. The command ‘sfdisk’ will allow us to do so, so we can call

sfdisk -d /dev/sda > /boot/original-disk-layout.txt

I am leaving quite a lot of stuff on the /boot partition, because this partition is not a member of the LVM volume group, and will remain mostly unaffected during our process. You can use an external USB disk, or any other non-LVM partition, as long as you verify you can access it from within the boot process, directly from initrd/initramfs, or dracut. /boot is, commonly, accessible from within the boot process.

Now we modify the partition layout. To do so, I recommend documenting the original starting points of the two interesting partitions – /boot (usually /dev/sda1) and our PV (in this example: /dev/sda2). I prefer using ‘sector’ units. An example would be:

parted -s /dev/sda "unit s p"

It is common, for modern Linux systems, to have /boot starting at sector 2048 (which is 1MB into the disk). This is due to block alignment; however, I will not discuss this here. The other interesting detail is the size of a sector (commonly 512 bytes, but it can be 4K for ‘advanced format’ disks), so that we will be able to calculate the new partitions’ starting positions and sizes.
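One possible way to capture those values, using parted’s machine-readable output, is shown below (a sketch – the output format is partition-number:start:end:size:… in sectors; double-check the numbers manually before acting on them):

StartOfPart=$(parted -m -s /dev/sda "unit s print" | awk -F: '$1==2 {print $2}' | tr -d s)   # start of /dev/sda2, in sectors
EndOfPart=$(parted -m -s /dev/sda "unit s print" | awk -F: '$1==2 {print $3}' | tr -d s)     # end of /dev/sda2, in sectors
echo "sda2: ${StartOfPart}s to ${EndOfPart}s"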

Now, using ‘parted’, we need to remove the 2nd partition (in my example – it might vary on your setup) and recreate it at a new location – 125 PEs further, or 500MB further, or 1024000 sectors ahead. So, if our starting sector is 411648(s), we will have to create the partition starting at sector 1435648 (=411648+1024000), with the original ending location. Don’t forget to set this partition to LVM. Assuming you have saved the starting point of the partition in the variable StartOfPart, and the original ending in EndOfPart, your command would look like this:

parted -s /dev/sda "unit s rm 2 mkpart primary $(( StartOfPart + 1024000 )) ${EndOfPart} set 2 lvm on"

Now, we need to recreate the /boot partition (partition #1 in my example) with its new size. Again – we need to document its original beginning and end, and then recreate it. Assuming StartOfPart now holds the original start of /boot and EndOfPart its original end, the command would look like this:

parted -s /dev/sda "unit s rm 1 mkpart primary ${StartOfPart} $(( EndOfPart + 1024000 - 1 )) set 1 boot on"

The kernel will not pick up the new partition sizes because they are in use. We will need a reboot. However – when we reboot (do not do that just yet), we will no longer have access to our LVM. This is because the PV will no longer have its meta-data, and we will need to recreate it.

Prepare a script to place in /boot, called vgrecover.sh which will hold the following lines:

#!/bin/sh
sed -i 's/locking_type = 4/locking_type = 0/g' /etc/lvm/lvm.conf
lvm pvcreate -u ${PVID} --restorefile /mnt/vg.txt /dev/sda2
lvm vgcfgrestore -f /mnt/vg.txt VG

You need to find the PV UUID of /dev/sda2 and replace ${PVID} in this script with that value. This is the field ‘PV UUID’ in the output of the command:

pvdisplay /dev/sda2

Some more explanations: The device in our example is /dev/sda2 (change it to match your device name), and the VG name is ‘VG’ (again – change to match your setup). This script needs to be placed on /boot and be made executable.

Before our reboot:

We need to verify the following files exist on our /boot:

  • vg.txt
  • vgrecover.sh
  • original-disk-layout.txt

If any of these files is missing, you will not be able to boot, you will not be able to recover your system, and you will not be able to access the data there ever again!

I also recommend you keep your original-disk-layout.txt file somewhere external. If you have made a partitioning mistake and changed the beginning of /boot, you will not have access to /boot and all its files, and having this file elsewhere (on external disk, for example) will help you recover the partition layout quickly and with no frustration.

Now comes another risky part: reboot and get into the recovery shell used by GRUB. See my article here to understand how to enter the recovery shell. If you have a different OS version, your boot arguments might differ. An external boot medium (like a RHEL/CentOS recovery boot, or an Ubuntu live image) could also suffice to complete the task, but it is preferable to use the GRUB recovery console, to reduce the chance of some unknown automatic task or detection process doing stuff for you.

We need to break the boot sequence in the pre-mount phase. We will have a minimal shell on which we need to run the following commands:

mkdir /mnt
mount /dev/sda1 /mnt
/mnt/vgrecover.sh

We are mounting /dev/sda1 (our /boot) on /mnt, which we have just created. Then we call the vgrecover.sh script we created before. It will use LVM recovery commands to re-sign the PV on /dev/sda2, and then recover the VG meta-data using our modified meta-data file, describing the new relative positions of the LVs.

When done, assuming no problems happened there, just umount /mnt and reboot. The system should boot up successfully, however, /boot will not have the designated size just yet.

Extending /boot :

The partition /dev/sda1 is of the updated size now; however, the filesystem is not. You can verify that using ‘fdisk -l /dev/sda’ or ‘parted -s /dev/sda unit s p’ or any other such command. If this is not the case, then check your process.

Extending the filesystem depends on the type of filesystem. You can run ‘df -hPT /boot’ to identify the filesystem type. If it is XFS, use the command:

xfs_growfs /boot

If the filesystem is of type ext3 or ext4, use

resize2fs /dev/sda1

Other filesystems will require different tools, and since I cannot cover it all, I leave it to you. This is an online process, and as soon as it is over, the new size will show in the ‘df’ command.

Recovery:

If, for some reason, the disk partitioning or PV re-signing failed, and the system cannot boot, you can use the original-disk-layout.txt file in /boot to recover the original disk layout. Boot into GRUB rescue mode as shown above, and run:

mkdir /mnt
mount /dev/sda1 /mnt
sfdisk -f /dev/sda < /mnt/original-disk-layout.txt

If your /boot is inaccessible, and the file original-disk-layout.txt was kept on an external storage, you can use a live Ubuntu, or any other live system to run the ‘sfdisk’ command as shown above to recover /dev/sda original partitioning layout.

Bottom line:

This is a possible, although complex, task, and you should practice it on a VM, with disk snapshots before you attempt to kill production servers. Leave me a comment if it worked, or if there is anything I need to add or correct in this post. Thanks, and good luck!

Ubuntu 20.04 and TPM2 encrypted system disk

Thursday, February 18th, 2021

Following the conversation around a past post about Ubuntu 18.04 and TPM2 encrypted system disk (can be found here), I have added a post about using a more modern Ubuntu (20.04) with TPM2. This post includes all the updates (many thanks to the commenter kelderek for putting together a very well-working setup!) and has just been tested on a virtual Ubuntu with a (virtual) TPM2 device. A note about how to get such a setup – sometime in the future.

A very important note, before we begin – this is not enough! To fully prevent your data from being stolen by a bad person with a screwdriver, you will need to add some additional measures, like locking up your BIOS with a password (resetting the BIOS password resets TPM2 data, which is good) ; signing and using your own embedded GRUB2 (which is not covered here, and maybe one day will be), with your own certificate ; protecting GRUB2 from being changed, using a password ; locking down boot devices in the BIOS (so the evil hacker cannot boot from a USB or CDROM without unlocking the BIOS first) ; locking the hard drive (or SSD, or NVMe) with a security password in the BIOS (possible on some models of laptops and hard drives). All these steps are required to complete the security process, in order to make the life of a cracker/hacker hard to the point of impossible when attempting to obtain your data. Without them, the required effort is great, but not impossible.
Also note that all the above described actions might lock you out of your own system in case of any hardware failure or forgotten password, so make sure you have a (secure!) backup of your data, or else…

Now – to the process. The idea is this: we add a new key to the cryptsetup (LUKS) setup – a long one – and this key is stored in TPM2. We add scripts which pull this key out of the TPM2 store whenever the system boots. Thanks to some additional comments by Kelderek, we also add a fallback, in case of an incorrect key, to allow us to recover and boot using a manual passphrase.

I will not wrap all my actions with the ‘sudo’ command. All commands need to be called as root, so you can prefix this guide with ‘sudo -s’ or ‘sudo bash’ if you like. I also assume you ‘cd’ to root’s home directory, to protect the root.key file while it is still there. So – ‘sudo’ yourselves, and let’s begin.

Install required packages:

apt install tpm2-tools

Define TPM2 memory space to hold the key:

tpm2 nvdefine -s 64 0x1500016

# This command will define a 64-byte NV memory space in TPM2, at the above-mentioned address

Create a random 64-character key file:

cat /dev/urandom | tr -dc 'a-zA-Z0-9' | head -c 64 > root.key

Save the contents of the key file to TPM2:

tpm2_nvwrite -i root.key 0x1500016

Compare the contents of the TPM and the file, to verify that they are exactly the same:

echo root.key file contents: `cat root.key`
echo The value stored in TPM: `tpm2_nvread 0x1500016`
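Or, as a quick automated check (a minimal sketch; the process substitution requires bash):

cmp <(tpm2_nvread 0x1500016) root.key && echo "TPM copy matches root.key"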

Identify the encrypted device, and add the key to the LUKS:

lsblk # Identify the owner of 'crypt'. We will assume it is /dev/sda3 in our example
cryptsetup luksAddKey /dev/sda3 root.key

Create a script to pull the key from TPM2:

cat << 'EOF' > /usr/local/sbin/tpm2-getkey
#!/bin/sh
if [ -f ".tpm2-getkey.tmp" ]; then
# tmp file exists, meaning we tried the TPM this boot, but it didn’t work for the drive and this must be the second
# or later pass for the drive. Either the TPM is failed/missing, or has the wrong key stored in it.
/lib/cryptsetup/askpass "Automatic disk unlock via TPM failed for $CRYPTTAB_SOURCE ($CRYPTTAB_NAME) Enter passphrase: "
exit
fi
# No tmp, so it is the first time trying the script. Create a tmp file and try the TPM
touch .tpm2-getkey.tmp
 
tpm2_nvread 0x1500016
EOF

This script will be embedded into the future initramfs (or initrd), and will pull the key from TPM2. Now, we need to set its permissions and ownership:

chown root: /usr/local/sbin/tpm2-getkey
chmod 750 /usr/local/sbin/tpm2-getkey

Create a hook script to initramfs:

cat << 'EOF' > /etc/initramfs-tools/hooks/tpm2-decryptkey
#!/bin/sh
PREREQ=""
prereqs()
{
echo ""
}
case $1 in
prereqs)
prereqs
exit 0
;;
esac
. /usr/share/initramfs-tools/hook-functions
copy_exec `which tpm2_nvread`
copy_exec /usr/lib/x86_64-linux-gnu/libtss2-tcti-device.so.0.0.0
copy_exec /usr/lib/x86_64-linux-gnu/libtss2-tcti-device.so.0
copy_exec /lib/cryptsetup/askpass
exit 0
EOF

Set file permissions:

chown root: /etc/initramfs-tools/hooks/tpm2-decryptkey
chmod 755 /etc/initramfs-tools/hooks/tpm2-decryptkey

Edit crypttab:

The file /etc/crypttab needs to have an entry added at the end of the line containing the boot volume. For example (taken from the previous post’s comments):

sda3_crypt UUID=d4a5a9a4-a2da-4c2e-a24c-1c1f764a66d2 none luks,discard

should become:

sda3_crypt UUID=d4a5a9a4-a2da-4c2e-a24c-1c1f764a66d2 none luks,discard,keyscript=/usr/local/sbin/tpm2-getkey

First – backup your existing crypttab file:

cp /etc/crypttab /etc/crypttab.backup

If you have a single line in the file (only the boot volume is encrypted), you can use the following command:

sed -i 's%$%,keyscript=/usr/local/sbin/tpm2-getkey%' /etc/crypttab

Otherwise, you will have to append it manually.

Back up the original initrd and create a new one:

cp /boot/initrd.img-$(uname -r) /boot/initrd.img-$(uname -r).orig
mkinitramfs -o /boot/initrd.img-$(uname -r) $(uname -r)

Reboot.

If all works well – you can now delete the root.key file holding the clear-text key. You will still have your original, manually defined passphrase working, and the initrd will fall back to the manual askpass prompt if the automatic TPM2 unlock fails, so you should be rather safe there, as well as update-proof.

Notes:

  • The procedure might work on older Ubuntu systems, although it was not tested. A comment in the previous post suggested that after adding the source
    deb http://cz.archive.ubuntu.com/ubuntu focal main universe
    to your Ubuntu 18.04, the same procedure should work. I did not test it yet.
  • The security of the system is not complete until you handle the other tasks mentioned at the beginning of the text.
  • This procedure does not cover TPM1.2 systems, and will not work without modification on them. I might document such process in the future.

Good luck, and post here if it works for you, or if you have any suggestions!

Linux LVM explained

Saturday, July 11th, 2020

You can find a bazillion sites explaining Linux LVM. However, I am preparing for my next article, about partition resizing for the advanced user, and a deep understanding of LVM is required, so I have decided to explain some of the internals of LVM for the advanced user. This explains how it is built more than how to use it, so if you’re looking for the right commands – you are not likely to find them here. If you are looking for a theoretical understanding of how LVM is structured – what PV, PE, LE and so on are – this is probably an article you want to read.

In general, a block device – a disk, a partition, an SSD, a RAM disk, a character device mapped as a block device (loop) or whatever – can be signed as a ‘physical volume’ (PV) for the purpose of LVM. A physical volume (from now on – PV) is a block device which can hold data and allow random access to it. For ease of definition – a disk or its equivalent. If you can format and mount it – it can act as a PV. The data this PV is required to hold is both the LVM metadata and the PV’s ‘physical extents’ (PE). I will use the term PE.

The ‘physical extents’ are small partitions (a logical definition – there is no ‘fdisk’-like tool to create them) the PV is split into. It means that if we define a PE as a 32MB chunk (this is a logical parameter set when creating the volume group; on that later), the PV will be split into many small 32MB chunks, each with its own (sequential, of course) number within this PV. We will have PE #0, PE #1 and so on. We, as humans, have (almost) no interaction with this numbering, but it is important that we understand it.

All these ‘physical extents’ (PEs) which reside on a ‘physical volume’ (PV) are mapped to a logical object called a ‘logical volume’ (LV). A logical volume is the actual object we can use to place our data on. It behaves like any other block device or partition – we can format it, partition it (heaven knows why, but it can be done), mount it (when it has a file system), put our important data on it – and so on. About what the mapping looks like – later in this article.

The connection between PE residing on a PV to the LV is kept in a logical object called “Volume Group” (VG). A “volume group” (VG) is a logical and theoretical object which merges the PE provided by multiple PVs into a logical group of objects with a mapping to the LV. This sounds complicated, I am sure, but we’ll get deeper into it soon.

As said – a VG is a logical object holding PVs (with their PEs) on one hand, and LVs (with their LEs – about those later) on the other hand. It has no ‘real’ existence, except as a group of objects. A PV can be a member of a single VG (but a single VG can have many PVs), and an LV can be a member of a single VG (but again – a single VG can have many LVs). When we look at the metadata, later in this article, it should become clearer.

In order to understand how PEs are located on a disk, let’s take a look at this nice drawing.
This drawing shows a (basic partitioning) disk, with a Master Boot Record (MBR) and two partitions, of which the 2nd is used as an LVM PV.
The PV has a small metadata signature, and many PEs.

We can ask the LVM mechanism nicely to export the metadata configuration. Since a volume group (VG) can hold multiple PVs (physical volumes, aka block devices), the metadata will reside at the beginning of each disk (PV), for the sake of redundancy. This is important when we want to recover a failed LVM setup caused by human error or missing disk(s).

Moreover – because the LV has only a logical mapping to the PEs residing on the disks (which can be more than one, and even more than three!), the order of the PEs mapped to a single LV doesn’t have to be continuous, nor do they have to reside on a single disk. This is a flexible system, and we’ll get to that later.

I would like to show an exported (backed-up) VG metadata file for the sake of our observation. I will add comments inline for your viewing pleasure.
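The export itself is a single command – it is also what the ‘description’ field in the dump below records:

vgcfgbackup -f /tmp/VG-export.txt VG00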

# Generated by LVM2 version 2.02.98(2)-RHEL6 (2012-10-15): Thu Jun  5 00:00:00 2019

contents = "Text Format Volume Group"
version = 1

### This is the description of the command used to create this file ###
description = "vgcfgbackup -f /tmp/VG-export.txt VG00"

### Some information about the creation host and time ###
creation_host = "localhost.localdomain"	# Linux localhost.localdomain 2.6.32-358.el6.x86_64 #1 SMP Fri Feb 22 00:31:26 UTC 2013 x86_64
creation_time = 1594292258	# Thu Jun  5 00:00:00 2019

### Volume group information ###
VG00 {  ### Name of the Volume Group ###
	id = "8svbhm-euN1-d7Hr-PGIo-yHnH-kIIa-yxECBa"  ### Each object has unique ID to prevent confusion ###
	seqno = 8
	format = "lvm2" # informational
	status = ["RESIZEABLE", "READ", "WRITE"]
	flags = []
	extent_size = 65536		# 32 Megabytes ### The size of a single PE in Sectors. This is across all VG (all the member PVs), regardless of the PV size! ###
	max_lv = 0   ### Configurable limitations. None.
	max_pv = 0
	metadata_copies = 0

	physical_volumes { ### The list of the member PVs ###

		pv0 {  ### This is the first PV. They will have names like 'pv0' or 'pv1'. Nothing very artistic ###
			id = "FRDFDw-fMrG-ma1d-2rP5-bqck-cFsz-fr2OWf"   ### UUID. A unique identifier allowing for easy scan
			device = "/dev/sda2"	# Hint only ### This is only a hint. Device-mapper (LVM kernel engine) scans for LVM metadata on all disk partitions ###

			status = ["ALLOCATABLE"]  ### Can we allocate PEs from this PV? Why not? We can prevent it from allocating space. On that - some other time ###
			flags = []
			dev_size = 209590272	# 99.9404 Gigabytes ### The PV size in Sectors. This is very important. ###
			pe_start = 2048 # The offset of the first PE, #0, from the beginning of the PV, in Sectors ###
			pe_count = 3198	# 99.9375 Gigabytes # How many PEs do we have here? The size can be easily calculated by multiplying the amount of PEs (pe_count) with the size of each PE (extent_size)
		}
	}

I will go further into the LV topic shortly, but in the meanwhile – let’s see what we have here. This is the global definition of a volume group (VG) and its physical volume(s) (PV). The VG name is ‘VG00’ and it has a unique ID (which is why you do not want to map a storage snapshot of an LVM device to the same machine in parallel, without fully understanding what you are doing). We have the size of the PE – 32MB in our case. Once the VG has been created – it cannot be changed. A note – the PEs don’t have an on-disk header, meaning you cannot binary-dump a hard drive and look for the beginning or end of each PE. The PEs are defined as a mapping, and the driver can jump to the right location on the disk. It is fairly easy – calculate the position of the PE you aim at by multiplying the PE size by the sequential number of the PE, jump to this offset relative to the beginning of the partition, and you’re there.
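To make that calculation concrete, with the numbers from this dump (pe_start of 2048 sectors, extent_size of 65536 sectors, 512-byte sectors) – a small shell-arithmetic sketch, with a made-up PE number:

PE_NUM=100                            # a hypothetical PE we want to locate
SECTOR=$(( 2048 + PE_NUM * 65536 ))   # sector offset from the beginning of /dev/sda2
echo "PE ${PE_NUM} starts ${SECTOR} sectors ($(( SECTOR * 512 )) bytes) into the PV"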

Let’s look at the PV definition here – we have its UUID, which is extremely important, as it identifies the PV to the VG. Since there is no order constraint on the devices (you can reverse the disk order on a multiple-PV system, and LVM will not be affected) – the only way LVM identifies the member PVs is by looking at their metadata copy, containing their UUID. If the metadata is damaged, missing or has an incorrect UUID, we get to data recovery! (or metadata recovery, which is easier, but still unpleasant).
Since the physical OS disk mapping doesn’t matter – LVM makes use of the PV UUID – the block device name is only a hint, for the human who might read this config backup file.
We have the status. A PV can be set to ‘not allocatable’ – let’s say we want to evict a PV from a VG – this can be done, however, in the meanwhile, we would not want anyone allocating data on this soon-to-be-removed PV – so we set it to ‘not allocatable’ to keep it empty.
It can have additional flags, used in cases of external lock management like in HA clusters.
Next, it shows the size of the device in sectors ; the PE beginning location (relative to the beginning of the PV), and the amount of PEs present in it.

Now, let’s look at how an LV is defined. Again – comments inline:

logical_volumes {

		lvroot {  ### The name of the LV ###
			id = "dmaQ5x-eTX0-JRsR-aMhG-Ldz5-SlR6-lAT6EB"  ### A unique identifier.  ###
			status = ["READ", "WRITE", "VISIBLE"] ### It is available R/W and visible. It can be none of these too ###
			flags = [] ### Special arguments. None defined ###
			creation_host = "localhost.localdomain"
			creation_time = 1594157738	# 2019-01-01 08:42:18 +0000
			segment_count = 1 ### An LV can be continuous or split in multiple ways. I will demonstrate that later ###

			segment1 { ### The first continuous area (and the only one, in our case) ###
				start_extent = 0 ### Where does it start with the LOGICAL extent? On that later ###
				extent_count = 875	# 27.3438 Gigabytes ### The amount of LEs used by this segment, meaning - the segment size or length ###

				type = "striped" 	# linear  # There are multiple types. striped is the common one - a linear setup
				stripe_count = 1 ###

				stripes = [ ### Where does this segment reside *physically*? ###
					"pv0", 0 ### On 'pv0' we've seen before! And where does it start? On PE 0 (the first one) ###
				]
			}
		}

		lvswap { ### Another LV
			id = "E3Ei62-j0h6-cGu5-w9OB-l9tU-0Qf5-f09bvh"
			status = ["READ", "WRITE", "VISIBLE"]
			flags = []
			creation_host = "localhost.localdomain"
			creation_time = 1594157749	# 2019-01-01 08:42:29 +0000
			segment_count = 1

			segment1 {
				start_extent = 0  ### The first LE of the LV. On LEs - later ###
				extent_count = 94	# 2.9375 Gigabytes

				type = "striped"
				stripe_count = 1	# linear

				stripes = [
					"pv0", 1813 ### Here we start at PE number 1813. More details below ###
				]
			}
		}
	}

Before I explain the LV settings, I need to explain what a ‘logical extent’ (LE) is. A block device has to be presented to the operating system as a continuous device with random-access capabilities. So, logically, an LV has to be continuous. However – we do know that LVM allows us to modify, migrate and even resize an existing LV into split areas of a disk or disks (PVs). This is achieved by defining an LV as made out of a set of small chunks, ordered in a continuous manner. Since these chunks are logical, they can be mapped to any PEs we have, in a non-ordered fashion. It means, practically, that this ‘chunk’, called a “logical extent” (LE), is the size of a PE, and maps to one PE (or more, in cases of LVM RAID – not included in this article). So an LV has a continuous array of LEs mapped to a non-continuous list of PEs. This way, LVM satisfies the OS requirement for a continuous block device with the relevant properties, while maintaining flexibility with the actual on-disk positioning.

Here is another image to elaborate some more on the LE-to-PE mapping. This image was taken, with permission, from the ‘thegeekdiary’ article explaining Linux LVM basics. If you want to know how to do stuff – you should check that article. I am just explaining how things look internally.

So – back to our configuration. What do we have here? A logical volume (LV) is a logical unit with parameters like name, UUID, status and so on. We can see that the LV called ‘lvroot’ has one ‘segment’ (called ‘segment1’). A segment is an uninterrupted run of continuous blocks, with a logical starting point and length, and a mapping of "extents" (in the configuration – meaning LEs) to a starting point on the PV, written as "pv_name", PE_number. In this configuration, we can see that ‘lvroot’ block (LE) 0 begins at PV ‘pv0’ block (PE) 0.

Here is a configuration dump of the same LV after I have migrated the first 10 PEs to another location on the disk (PV), using the command:
pvmove --alloc anywhere /dev/sda2:0-9

lvroot {
                        id = "dmaQ5x-eTX0-JRsR-aMhG-Ldz5-SlR6-lAT6EB"
                        status = ["READ", "WRITE", "VISIBLE"]
                        flags = []
                        creation_host = "localhost.localdomain"
                        creation_time = 1594157738	# 2019-01-01 08:42:18 +0000
                        segment_count = 2 ### We now have two segments! ###

                        segment1 {  ### This is the beginning of the LV - mapped as LE 0-9 (the first 10, which I have migrated) ###
                                start_extent = 0
                                extent_count = 10       # 320 Megabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv0", 1907 ### They are on pv0, but somewhere further back the disk, on PE 1907 and onwards! ###
                                ]
                        }
                        segment2 { # This is the next segment, of blocks 10 to the end ###
                                start_extent = 10
                                extent_count = 865      # 27.0312 Gigabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv0", 10 ### It resides at the original location, which was PE 10 and onwards ###
                                ]
                        }
                }

The LV mapping has changed to match the change. The first 10 blocks (LEs) of lvroot are somewhere else on the disk on PV ‘pv0’ at location 1907, and the next segment of blocks remains in its original position – blocks 10 and onwards, except that because I’ve split the LV into two chunks, it has to have a new ‘segment’ definition.

This concludes my explanation of disk positioning and what it looks like inside LVM. We went through what a PV is, what a PE is, what LV and LE are, and how they relate to each other. Just to stress – a VG is a logical construct tying the PVs and their PEs to the LEs and LVs.

If you find anything incorrect, not clear enough or want me to go further into any detail – drop me a note. I will be happy to hear from you.

Hot-resize disks on Linux

Monday, April 6th, 2020

After major investigation, I came to the conclusion that a full guide describing the procedure required for online disk resizing on Linux (especially – expanding disks) was missing. I have created a guide for RHEL5/6/7/8 (it works the same for Centos, OEL or ScientificLinux – RHEL-based Linux systems) which takes into account the following four scenarios:

  • Expanding a disk where there is a filesystem directly on disk (no partitioning used)
  • Expanding a disk where there is LVM PV directly on disk (no partitioning used)
  • Expanding a disk where there is a filesystem on partition (a single partition taking all the disk’s space)
  • Expanding a disk where there is an LVM PV on partition (a single partition taking all the disk’s space)

All four scenarios were tested with and without use of multipath (device-mapper-multipath). Also – notes about using GPT compared to MBR are given. The purpose is to provide a full guideline for hot-extending disks.

This document does not describe the process of extending disks on the storage/virtualisation/NAS/whatever end. Updating the storage client configuration to refresh the disk topology might differ between various versions of Linux and storage communication methods – iSCSI, FC, FCoE, AoE, local virtualised disk (VMware/KVM/Xen/XenServer/HyperV) and so on. Each connectivity/OS combination might require a different refresh method to be called on the client. In this lab, I use iSCSI with the iSCSI software initiator.

The Lab

A storage server running Linux (Centos 7) with targetcli tools, exporting a 5GB (or larger) LUN through iSCSI to Linux clients running Centos5, Centos6, Centos7 and Centos8, with the latest updates (5.11, 6.10, 7.7, 8.1). See some interesting insights on iSCSI target disk expansion using Linux LIO (the targetcli command line) in my previous post.

The iSCSI clients all see the disk as ‘/dev/sda’ block device. When using LVM, the volume group name is tempvg and the logical volume name is templv. When using multipath, the mpath name is mpatha. On some systems the mpath partition would appear as mpatha1 and on others as mpathap1.

The iSCSI client disks/partitions were set up like this:

Centos5:

* Filesystem on disk

mkfs.ext3 /dev/sda
mount /dev/sda /mnt

* LVM on disk

pvcreate /dev/sda
vgcreate tempvg /dev/sda
lvcreate -l 100%FREE -n templv tempvg
mkfs.ext3 /dev/tempvg/templv
mount /dev/tempvg/templv /mnt

* Filesystem on partition

parted -s /dev/sda "mklabel msdos mkpart primary 1 -1"
mkfs.ext3 /dev/sda1
mount /dev/sda1 /mnt

* LVM on partition

parted -s /dev/sda "mklabel msdos mkpart primary 1 -1 set 1 lvm on"
pvcreate /dev/sda1
vgcreate tempvg /dev/sda1
lvcreate -l 100%FREE -n templv tempvg
mkfs.ext3 /dev/tempvg/templv
mount /dev/tempvg/templv /mnt

Centos6:

* Filesystem on disk

mkfs.ext4 /dev/sda
mount /dev/sda /mnt

* LVM on disk

pvcreate /dev/sda
vgcreate tempvg /dev/sda
lvcreate -l 100%FREE -n templv tempvg
mkfs.ext4 /dev/tempvg/templv
mount /dev/tempvg/templv /mnt

* Filesystem on partition

parted -s /dev/sda "mklabel msdos mkpart primary 1 -1"
mkfs.ext4 /dev/sda1
mount /dev/sda1 /mnt

* LVM on partition

parted -s /dev/sda "mklabel msdos mkpart primary 1 -1 set 1 lvm on"
pvcreate /dev/sda1
vgcreate tempvg /dev/sda1
lvcreate -l 100%FREE -n templv tempvg
mkfs.ext4 /dev/tempvg/templv
mount /dev/tempvg/templv /mnt

Centos7/8:

* Filesystem on disk

mkfs.xfs /dev/sda
mount /dev/sda /mnt

* LVM on disk

pvcreate /dev/sda
vgcreate tempvg /dev/sda
lvcreate -l 100%FREE -n templv tempvg
mkfs.xfs /dev/tempvg/templv
mount /dev/tempvg/templv /mnt

* Filesystem on partition

parted -a optimal -s /dev/sda "mklabel msdos mkpart primary 1 -1"
mkfs.xfs /dev/sda1
mount /dev/sda1 /mnt

* LVM on partition

parted -a optimal -s /dev/sda "mklabel msdos mkpart primary 1 -1 set 1 lvm on"
pvcreate /dev/sda1
vgcreate tempvg /dev/sda1
lvcreate -l 100%FREE -n templv tempvg
mkfs.xfs /dev/tempvg/templv
mount /dev/tempvg/templv /mnt

Some variations might exist. For example, use of ‘GPT’ partition layout would result in a parted command like this:

parted -s /dev/sda "mklabel gpt mkpart ' ' 1 -1"

Also, for multipath devices, replace the block device /dev/sda with /dev/mapper/mpatha, like this:

parted -a optimal -s /dev/mapper/mpatha "mklabel msdos mkpart primary 1 -1"

There are several common tasks, such as expanding filesystems – for XFS, using xfs_growfs <mount target> ; for ext3fs and ext4fs, using resize2fs <device path>. The same goes for LVM expansion – using pvresize <device path>, followed by the lvextend command, followed by the filesystem-expanding command as noted above.
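To give a feeling for the full chain before diving into the per-OS details, here is one scenario end to end – LVM PV on a partition with XFS, iSCSI, no multipath, Centos 7 – assembled from the commands detailed below (device names as in the lab):

iscsiadm -m node -R                        # rescan the iSCSI session so the OS sees the larger LUN
fdisk /dev/sda                             # delete and recreate /dev/sda1 from the same starting point (interactive)
partx -u /dev/sda                          # refresh the in-use partition table (Centos 7)
pvresize /dev/sda1                         # grow the PV to the new partition size
lvextend -l +100%FREE /dev/tempvg/templv   # grow the LV into the newly freed space
xfs_growfs /mnt                            # grow the XFS filesystem online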

The document layout

The document will describe the client commands for each OS, sorted by action. The process is as follows:

  • Expand the visualised storage layout (the storage has already expanded the LUN; now we need the OS to pick up the change)
  • (if in use) Expand the multipath device
  • (if partitioned) Expand the partition
  • Expand the LVM PV
  • Expand the filesystem

Actions

For each OS/scenario/multipath combination, we will format and mount the relevant block device, and attempt an online expansion.

Operations following disk expansion

Expanding the visualised storage layout

For iSCSI, it works quite the same for all OS versions. For other transport types, actions might differ.

iscsiadm -m node -R

Expanding multipath device

If using a multipath device (device-mapper-multipath), an update to the multipath device layout is required. Run the following command (on all OSes):

multipathd -k"resize map mpatha"

Expanding the partition (if disk partitions are in use)

This is a bit of a complicated part. It differs greatly, both in capability and in the commands used, between different operating system versions.

Centos 5/6

Online expansion of a partition is impossible, except when device-mapper-multipath is in use, in which case we force the multipath device to refresh its paths and recreate the device. This will result in an I/O error if only a single path is defined. For a non-multipath setup, a umount and re-mount is required – the disk partition layout cannot be re-read while the disk is in use.

Without Multipath
fdisk /dev/sda # Delete and recreate the partition from the same starting point
partprobe # Run when disk is not mounted, or else it will not refresh partition size
With Multipath
fdisk /dev/mapper/mpatha # Delete and recreate the partition from the same starting point
partprobe
multipathd -k"reconfigure" # Sufficient for Centos 6
multipathd -k"remove path sda" # Required for Centos 5
multipathd -k"add path sda" # Required for Centos 5
# Repeat for all sub-paths of expanded device
Centos 7/8
Without Multipath
fdisk /dev/sda # Delete and recreate partition from the same starting point. Sufficient for Centos 8
partx -u /dev/sda # Required for Centos 7
With Multipath
fdisk /dev/mapper/mpatha # Delete and recreate the partition from the same starting point. Sufficient for Centos 8
kpartx -u /dev/mapper/mpatha # Can use partx

Expanding LVM PV and LV

pvresize DEVICE

The device can be /dev/sda ; /dev/sda1 ; /dev/mapper/mpatha ; /dev/mapper/mpathap1 ; /dev/mapper/mpatha1 – according to the disk layout and LVM choice. Then extend the LV:

lvextend -l +100%FREE /dev/tempvg/templv

Expanding filesystem

For ext3fs and ext4fs
resize2fs DEVICE
Device can be /dev/sda ; /dev/sda1 ; /dev/mapper/mpatha ; /dev/mapper/mpathap1 ; /dev/mapper/mpatha1 – according to the disk layout and LVM choice.
For xfs
xfs_growfs /mnt

Additional Considerations

MBR vs GPT

On most Linux versions (for Centos – up to and including version 7) the command ‘fdisk’ is incapable of handling the GPT partition layout. If using a GPT partition layout, use of gdisk is recommended, if it exists for the OS. If not, parted is a decent, although somewhat limited, alternative.

The gdisk command can also convert a partition layout (at your own risk, of course) from MBR to GPT and vice versa. This is very useful for avoiding large data migrations where a legacy MBR partition layout was used on disks which are to be expanded beyond the 2TB limit.

The GPT backup table is located at the end of the disk, so when extending a GPT disk, it is required to repair (relocate) the GPT backup table. Based on my lab tests – it is impossible to both extend the partition and repair the GPT backup table location in a single call to gdisk. Two runs are required – one to fix the GPT backup table, and then – after the changes are saved – another to extend the partition.
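If you prefer a non-interactive way to relocate the backup table, sgdisk (shipped alongside gdisk) can do it in a single call – a hedged example, not part of my lab tests above:

sgdisk -e /dev/sda   # move the backup GPT data structures to the end of the (now larger) disk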

Storage transport

I have demonstrated the use of the iSCSI software initiator on Linux. Different storage transports exist – each may require its own method of ‘notifying’ the OS of a changed storage layout. See RedHat’s article about disk resizing (RHN access required). That article explains how to refresh the storage transport for various combinations of transports and RHEL versions and sub-versions.