Archive for the ‘Scripting/Programming’ Category

ZFS clone script

Sunday, March 28th, 2021

ZFS has some magical features, comparable to NetApp's WAFL capabilities. One of the less-used ones is ZFS send/receive, which can be utilised as the engine underneath something much like NetApp's SnapMirror or SnapVault.

The idea, if you are not familiar with NetApp's products, is to take a snapshot of a dataset on the source and clone it to a remote storage. Then take another snapshot, and clone only the delta between the two snapshots, and so on. This clones block-level changes only, which reduces both the clone payload and the time required to transfer it.
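To illustrate the mechanism the script below automates, here is a minimal sketch of one incremental send/receive cycle over SSH (the host, pool and snapshot names are placeholders):

zfs snapshot share/test@snap1
zfs send share/test@snap1 | ssh backupsrv zfs receive -Fdu backup # full baseline copy
# ... data changes on the source ...
zfs snapshot share/test@snap2
zfs send -i share/test@snap1 share/test@snap2 | ssh backupsrv zfs receive -Fdu backup # delta only

The script wraps this cycle with snapshot-name bookkeeping, locking, resume support and retention handling.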

Copy and save this file as clone_zfs_snapshots.sh. Give it execution permissions.

#!/bin/bash
# This script will clone ZFS snapshots incrementally over SSH to a target server
# Snapshot name structure: ${SRC_FS}@${TGT_HASH}_INT ; where INT is an increment number
# Written by Etzion. Feel free to use. See more stuff in my blog at https://run.tournament.org.il
# Arguments:
# $1: ZFS filesystem name
# $2: (target ZFS system):(target ZFS filesystem)
 
IAM=$0
ZFS=/sbin/zfs
LOCKDIR=/dev/shm
LOCAL_SNAPS_TO_LEAVE=3
RESUME_LIMIT=3
 
### FUNCTIONS ###
 
# Sanity and usage
function usage() {
	echo "Usage: $IAM SRC REMOTE_SERVER:ZFS_TARGET (port=SSH_PORT)"
	echo "ZFS_TARGET is the parent of filesystems which will be created with the original source names"
	echo "Example: $IAM share/test backupsrv:backup"
	echo "It will create a filesystem 'test' under the pool 'backup' on 'backupsrv' with clone"
	echo "of the current share/test ZFS filesystem"
	echo "This script is (on purpose) not a recursive script"
	echo "For the script to work correctly, it *must* have SSH key exchanged from source to target"
	exit 0
}
 
function abort() {
	# exit errorously with a message
	echo "[email protected]"
	pkill -P $$
	remove_lock
	exit 1
}
 
function parse_parameters() {
	# Parses command line parameters
	# called with $*
	SRC_FS=$1
	shift
	TGT=$1
	shift
	for i in $*
	do
		case ${i} in
			port=*)	PORT=${i##*=}
			;;
			hash=*)	HASH=${i##*=}
			;;
		esac
	done
	TGT_SYS=${TGT%%:*}
	TGT_FS=${TGT##*:}
	# Use a short substring of MD5sum of the target name for later unique identification
	SRC_DIRNAME_FS=${SRC_FS#*/}
	if [ -z "$hash" ]
	then
		TGT_FULLHASH="`echo $TGT_FS/${SRC_DIRNAME_FS} | md5sum -`"
		TGT_HASH=${TGT_FULLHASH:1:7}
	else
		TGT_HASH=${hash}
	fi
 
}
 
function sanity() {
	# Verify we have all details
	[ -z "$SRC_FS" ] && usage
	[ -z "$TGT_FS" ] && usage
	[ -z "$TGT_SYS" ] && usage
	$ZFS list -H -o name $SRC_FS > /dev/null 2>&1 || abort "Source filesystem $SRC_FS does not exist"
	# check_target_fs || abort "Target ZFS filesystem $TGT_FS on $TGT_SYS does not exist, or not imported"
}
 
function remove_lock() {
	# Removes the lock file
	\rm -f ${LOCKDIR}/$SRC_LOCK
}
 
function construct_ssh_cmd() {
	# Construct the remote SSH command
	# Here is a good place to put atomic parameters used for the SSH
	[ -z "${PORT}" ] && PORT=22
	SSH="ssh -p $PORT $TGT_SYS -o ConnectTimeout=3"
	CONTROL_SSH="$SSH -f"
}
 
function get_last_remote_snapshots() {
	# Gets the last snapshot name on a remote system, to match it to our snapshots
	remoteSnapTmpObj=`$SSH "$ZFS list -H -t snapshot -r -o name ${TGT_FS}/${SRC_DIRNAME_FS}" | grep ${SRC_DIRNAME_FS}@ | grep ${TGT_HASH}`
	# Create a list of all snapshot indexes. Empty means it's the first one
	remoteSnaps=""
	for snapIter in ${remoteSnapTmpObj}
	do
	  remoteSnaps="$remoteSnaps ${snapIter##*@${TGT_HASH}_}"
	done
}
 
function check_if_remote_snapshot_exists() {
	# Argument: $1 -> Name of snapshot
	# Checks if this snapshot exists on remote node
	$SSH "$ZFS list -H -t snapshot -r -o name ${TGT_FS}/${SRC_DIRNAME_FS}@${TGT_HASH}_${newLocalIndex}"
	return $?
}
 
function get_last_local_snapshots() {
	# This function will return an array of local existing snapshots using the existing TGT_HASH
    localSnapTmpObj=`$ZFS list -H -t snapshot -r -o name $SRC_FS | grep $SRC_FS@ | grep $TGT_HASH `
    # Convert into a list and remove the HASH and everything before it. We should have a clear list of indexes
    localSnapList=""
    for snapIter in ${localSnapTmpObj}
    do
    	localSnapList="$localSnapList ${snapIter##*@${TGT_HASH}_}"
    done
    # Convert object to array
    localSnapList=( $localSnapList )
    # Get the last object
    let localSnapArrayObj=${#localSnapList[@]}-1
}
 
function delete_snapshot() {
	# This function will delete a snapshot
	# arguments: $1 -> snapshot name
	[ -z "$1" ] && abort "Cleanup snapshot got no arguments"
	$ZFS destroy $1
	#$ZFS destroy ${SRC_FS}@${TGT_HASH}_${newLocalIndex}
}
 
function find_matching_snapshot() {
	# This function will attempt to find a matching snapshot as a replication baseline
	# Gets the latest local snapshot index
	localRecentIndex=${localSnapList[$localSnapArrayObj]}
    # Gets the latest mutual snapshot index
    while [ $localSnapArrayObj -ge 0 ]
    do
    	# Check if the current counter already exists
    	if echo "$remoteSnaps" | grep -w ${localSnapList[$localSnapArrayObj]} > /dev/null 2>&1
    	then
    		# We know the mutual index.
    		commonIndex=${localSnapList[$localSnapArrayObj]}
    		return 0
    	fi
    	let localSnapArrayObj--
    done
    # If we've reached here - there is no mutual index!
    abort "There is no mutual snapshot index, you will have to resync"
}
 
function cleanup_snapshots() {
	# Creates a list of snapshots to delete and then calls delete_snapshot function
	# We are using the most recent common index, $localSnapArrayObj as the latest reference for deletion
	let deleteArrayObj=$localSnapArrayObj-${LOCAL_SNAPS_TO_LEAVE}
	snapsToDelete=""
	# Construct a list of snapshots to delete, and delete it in reverse order
	while [ $deleteArrayObj -ge 0 ]
	do
		# Construct snapshot name
		snapsToDelete="$snapsToDelete ${SRC_FS}@${TGT_HASH}_${localSnapList[$deleteArrayObj]}"
		let deleteArrayObj--
	done
	snapsToDelete=( $snapsToDelete )
 
	snapDelete=0
 
	while [ $snapDelete -lt ${#snapsToDelete[@]} ]
	do
		# Delete snapshot
		delete_snapshot ${snapsToDelete[$snapDelete]}
		let snapDelete++
	done
}
 
function initialize() {
	# This is a unique case where we initialize the first sync
	# We will call this procedure when $remoteSnaps is empty (meaning that there was no snapshot whatsoever)
	# We have to verify that the target has no existing old snapshots here
	# is it empty?
	echo "Going to perform an initialization replication. It might wipe the target $TGT_FS completely"
	echo "Press Enter to proceed, or Ctrl+C to abort"
	read "abc"
	### Decided to remove this check
	### [ -n "$LOCSNAP_LIST" ] && abort "No target snapshots while local history snapshots exists. Clean up history and try again"
	RECEIVE_FLAGS="-sFdvu"
	newLocalIndex=1
	# NEW_LOC_INDEX=1
	create_local_snapshot $newLocalIndex
	open_remote_socket
	sleep 1
	$ZFS send -ce ${SRC_FS}@${TGT_HASH}_${newLocalIndex} | nc $TGT_SYS $NC_PORT 2>&1
	if [ "$?" -ne "0" ]
	then
		# Do not clean up the current snapshot
		# delete_snapshot ${SRC_FS}@${TGT_HASH}_${newLocalIndex}
		abort "Failed to send initial snapshot to target system"
	fi
	sleep 1
	# Set target to RO
	$SSH $ZFS set readonly=on $TGT_FS
	[ "$?" -ne "0" ] && abort "Failed to set remote filesystem $TGT_FS to read-only" # No need to remove local snapshot
}
 
function create_local_snapshot() {
	# Creates snapshot on local storage
	# uses argument $1
	[ -z "$1" ] && abort "Failed to get new snapshot index"
	$ZFS snapshot ${SRC_FS}@${TGT_HASH}_${1}
	[ "$?" -ne "0" ] && abort "Failed to create local snapshot. Check error message"
}
 
function open_remote_socket() {
	# Starts remote socket via SSH (as the control operation)
	# port is 3000 + three-digit random number
	let NC_PORT=3000+$RANDOM%1000
	$CONTROL_SSH "nc -l -i 90 $NC_PORT | $ZFS receive ${RECEIVE_FLAGS} $TGT_FS > /tmp/output 2>&1 ; sync"
	#$CONTROL_SSH "socat tcp4-listen:${NC_PORT} - | $ZFS receive ${RECEIVE_FLAGS} $TGT_FS > /tmp/output 2>&1 ; sync"
	#zfs send -R pool/fs@snap | zfs receive -Fdvu zpnew
}
 
function send_zfs() {
	# Do the heavy lifting of opening remote socket and starting ZFS send/receive
	open_remote_socket
	sleep 1
	$ZFS send -ce -I ${SRC_FS}@${TGT_HASH}_${commonIndex} ${SRC_FS}@${TGT_HASH}_${newLocalIndex} | nc -i 90 $TGT_SYS $NC_PORT 
	#$ZFS send -ce -I ${SRC_FS}@${TGT_HASH}_${commonIndex} ${SRC_FS}@${TGT_HASH}_${newLocalIndex} | socat tcp4-connect:${TGT_SYS}:${NC_PORT} -
	sleep 20
 
}
 
function increment() {
	# Create a new snapshot with the index $localRecentIndex+1, and replicate it to the remote system
	# Baseline is the most recent common snapshot index $commonIndex
	RECEIVE_FLAGS="-Fsdvu" # With an 'F' flag maybe?
	# Handle the case where the latest snapshot on the DR side is newer than the current latest local snapshot, due to mistaken deletion
	remoteSnaps=( $remoteSnaps )
	let remoteIndex=${#remoteSnaps[@]} # Get last snapshot on DR
	if [ ${localRecentIndex} -lt ${remoteIndex} ]
	then
		let newLocalIndex=${remoteIndex}+1
	else
		let newLocalIndex=localRecentIndex+1
	fi
	create_local_snapshot $newLocalIndex
 
	send_zfs
 
	# if [ "$?" -ne "0" ]
	# then
 
		# Cleanup current snapshot
		#delete_snapshot ${SRC_FS}@${TGT_HASH}_${newLocalIndex}
		#abort "Failed to send incremental snapshot to target system"
	# fi
	if ! verify_correctness
	then
 
		if ! loop_resume # If we can
		then
			# We either could not resume operation or failed to run with the required amount of iterations
			# For now we abort. 
			echo "Deleting local snapshot"
			delete_snapshot ${SRC_FS}@${TGT_HASH}_${newLocalIndex}
			abort "Remote snapshot should have the index of the latest snapshot, but it is not. The current remote snapshot index is ${commonIndex}"
		fi
	fi
}
 
function loop_resume() {
	# Attempts to loop over resume operations until the attempt limit has been reached
	REMOTE_TOKEN=$($SSH "$ZFS get -Ho value receive_resume_token ${TGT_FS}/${SRC_DIRNAME_FS}")
	if [ "$REMOTE_TOKEN" == "-" ]
	then
		return 1
	fi
	# We have a valid resume token. We will retry
	COUNT=1
	while [ "$COUNT" -le "$RESUME_LIMIT" ]
	do
		# For ease of handling - for each iteration, we will request the token again
		echo "Attempting resume operation" 
		REMOTE_TOKEN=$($SSH "$ZFS get -Ho value receive_resume_token ${TGT_FS}/${SRC_DIRNAME_FS}")
		let COUNT++
		open_remote_socket
		$ZFS send -e -t $REMOTE_TOKEN | nc -i 90 $TGT_SYS $NC_PORT
		#$ZFS send -e -t $REMOTE_TOKEN | socat tcp4-connect:${TGT_SYS}:${NC_PORT} -
		sleep 20
		if verify_correctness
		then
			echo "Done"
			return 0
		fi
	done
	# If we've reached here, we have failed to run the required iterations. Let's just verify again
	return 1
}
 
function verify_correctness() {
	# Check remote index, and verify it is correct with the current, latest snapshot
 
    if check_if_remote_snapshot_exists
    then
    	echo "Replication Successful"
    	return 0
    else
    	echo "Replication failed"
    	return 1
    fi
}
 
### MAIN ###
[ `whoami` != "root" ] && abort "This script has to be called by the root user"
[ -z "$1" ] && usage
parse_parameters $*
SRC_LOCK=`echo $SRC_FS | tr / _`
if [ -f ${LOCKDIR}/$SRC_LOCK ] 
then
	echo "Already locked. If should not be the case - remove ${LOCKDIR}/$SRC_LOCK"
	exit 1
fi
sanity
touch ${LOCKDIR}/$SRC_LOCK
construct_ssh_cmd
get_last_remote_snapshots # Have a string list of remoteSnaps
# If we don't have a remote snapshot, this should be an initialization
if [ -z "$remoteSnaps" ]
then
	initialize
	echo "completed initialization. Done"
	remove_lock
	exit 0
fi
 
# We can get here only if it is not initialization
get_last_local_snapshots # Have a list (array) of localSnaps
find_matching_snapshot # Get the latest local index and the latest common index available
increment # Creates a new snapshot and sends/receives it
cleanup_snapshots # Cleans up old local snapshots
pkill -P $$
remove_lock
echo "Done"

The initial run should be performed manually. If you expect a very long initial sync, run it inside tmux or screen, to avoid failing in the middle.

To run the command, run it like this:

./clone_zfs_snapshots.sh share/my-data backuphost:share

This will create, under the pool ‘share’ on the host ‘backuphost’, a filesystem matching the source (in this case: share/my-data) and set it to read-only. The script will create a snapshot with a unique name based on a shortened hash of the destination, with a counting number suffix, and start cloning the snapshot to the remote host. When called again, it will create a snapshot with the same name but a different index, and clone only the delta to the remote host. In case of a disconnection, the clone will retry a few times before failing.
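For scheduling repeated runs, a crontab entry might look like this (the schedule and paths here are examples, not part of the script):

# /etc/cron.d/zfs-clone - nightly incremental clone to backuphost
30 1 * * * root /root/clone_zfs_snapshots.sh share/my-data backuphost:share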

Note that the receiving side does not remove snapshots, so handling (too) old snapshots on the backup host remains up to you.
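If you want such housekeeping, a minimal sketch for pruning on the backup host could look like the following (the filesystem name and retention count are assumptions, and GNU head is required for the negative -n):

# Keep only the 10 newest snapshots of backup/my-data, destroy the rest
zfs list -H -t snapshot -o name -s creation backup/my-data | head -n -10 | \
while read snap ; do zfs destroy "$snap" ; done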

Ubuntu 20.04 and TPM2 encrypted system disk

Thursday, February 18th, 2021

Following the conversation around a past post about Ubuntu 18.04 and TPM2 encrypted system disk (which can be found here), I have added a post about using a more modern Ubuntu (20.04) with TPM2. This post includes all the updates (many thanks to the commenter kelderek for putting together a very well-working setup!) and has just been tested on a virtual Ubuntu with a (virtual) TPM2 device. A note about how to get such a setup – sometime in the future.

A very important note, before we begin – this is not enough! To fully prevent your data from being stolen by a bad person with a screwdriver, you will need to add some additional measures, like locking up your BIOS with a password (resetting the BIOS password resets TPM2 data, which is good) ; signing and using your own embedded GRUB2 (which is not covered here, and maybe one day will be), with your own certificate ; protecting GRUB2 from boot-time changes using a password ; locking down boot devices in the BIOS (so the evil hacker cannot boot from a USB or CDROM without unlocking the BIOS first) ; locking the hard drive (or SSD, or NVMe) with a security password tied to the BIOS (possible on some models of laptops and hard drives). All these steps are required to complete the security process, in order to make the life of a cracker/hacker hard to the point of impossible when attempting to obtain your data. Without them, the effort required is great, but not impossible.
Also note that all the above described actions might lock you out of your own system in case of any hardware failure or forgotten password, so make sure you have a (secure!) backup of your data, or else…

Now – to the process. The idea is this: we add a new key to the cryptsetup – a long one – and this key is stored in TPM2. We add scripts which pull this key out of the TPM2 store whenever the system boots. Thanks to some additional comments by Kelderek, we also add a fallback, in case of an incorrect key, to allow us to recover and boot using a manual key.

I will not wrap all my actions with the ‘sudo’ command. All commands need to be called as root, so you can prefix this guide with ‘sudo -s’ or ‘sudo bash’ if you like. I also assume you ‘cd’ to root’s home directory, to protect the root.key file while it is still there. So – ‘sudo’ yourselves, and let’s begin.

Install required packages:

apt install tpm2-tools

Define TPM2 memory space to hold the key:

tpm2_nvdefine -s 64 0x1500016

# This command will define a 64-byte memory space in TPM2, at the above-mentioned address
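To verify the NV index was actually defined, you can list the defined indices (tpm2_nvreadpublic ships with the same tpm2-tools package):

tpm2_nvreadpublic
# Our index 0x1500016 should appear in the listing, with a size of 64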

Create a random 64-byte key file:

cat /dev/urandom | tr -dc 'a-zA-Z0-9' | head -c 64 > root.key

Save the contents of the key file to TPM2:

tpm2_nvwrite -i root.key 0x1500016

Compare the contents of the TPM and the file, to verify that they are exactly the same:

echo root.key file contents: `cat root.key`
echo The value stored in TPM: `tpm2_nvread 0x1500016`

Identify the encrypted device, and add the key to the LUKS:

lsblk # Identify the owner of 'crypt'. We will assume it is /dev/sda3 in our example
cryptsetup luksAddKey /dev/sda3 root.key

Create a script to pull the key from TPM2:

cat << EOF > /usr/local/sbin/tpm2-getkey
#!/bin/sh
if [ -f ".tpm2-getkey.tmp" ]; then
# tmp file exists, meaning we tried the TPM this boot, but it didn't work for the drive and this must be the second
# or later pass for the drive. Either the TPM is failed/missing, or has the wrong key stored in it.
/lib/cryptsetup/askpass "Automatic disk unlock via TPM failed for \$CRYPTTAB_SOURCE (\$CRYPTTAB_NAME) Enter passphrase: "
exit
fi
# No tmp, so it is the first time trying the script. Create a tmp file and try the TPM
touch .tpm2-getkey.tmp
 
tpm2_nvread 0x1500016
EOF

This script will be embedded into the future initramfs (or initrd), and will pull the key from TPM2. Now, we need to set its permissions and ownerships:

chown root: /usr/local/sbin/tpm2-getkey
chmod 750 /usr/local/sbin/tpm2-getkey

Create a hook script to initramfs:

cat << EOF > /etc/initramfs-tools/hooks/tpm2-decryptkey
#!/bin/sh
PREREQ=""
prereqs()
{
echo ""
}
case \$1 in
prereqs)
prereqs
exit 0
;;
esac
. /usr/share/initramfs-tools/hook-functions
copy_exec `which tpm2_nvread`
copy_exec /usr/lib/x86_64-linux-gnu/libtss2-tcti-device.so.0.0.0
copy_exec /usr/lib/x86_64-linux-gnu/libtss2-tcti-device.so.0
copy_exec /lib/cryptsetup/askpass
exit 0
EOF

Set file permissions:

chown root: /etc/initramfs-tools/hooks/tpm2-decryptkey
chmod 755 /etc/initramfs-tools/hooks/tpm2-decryptkey

Edit crypttab:

The file /etc/crypttab needs to have an entry added at the end of the line containing the boot volume. For example (taken from the previous post’s comments):

sda3_crypt UUID=d4a5a9a4-a2da-4c2e-a24c-1c1f764a66d2 none luks,discard

should become:

sda3_crypt UUID=d4a5a9a4-a2da-4c2e-a24c-1c1f764a66d2 none luks,discard,keyscript=/usr/local/sbin/tpm2-getkey

First – backup your existing crypttab file:

cp /etc/crypttab /etc/crypttab.backup

If you have a single line in the file (only the boot volume is encrypted), you can use the following command:

sed -i 's%$%,keyscript=/usr/local/sbin/tpm2-getkey%' /etc/crypttab

Otherwise, you will have to append it manually.

Backup the original initrd and create a new one:

cp /boot/initrd.img-$(uname -r) /boot/initrd.img-$(uname -r).orig
mkinitramfs -o /boot/initrd.img-$(uname -r) $(uname -r)

Reboot.

If all works well – you can now delete the root.key file holding the clear-text key. Your original, manually defined decryption key will keep working, and the initrd will fail over to manual askpass if the automatic TPM2 unlock fails, so you should be rather safe there, as well as update-proof.
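For the deletion itself, something like the following should do (assuming the file still resides in root's home directory; note that shred is not fully effective on journaling or CoW filesystems):

shred -u /root/root.key # overwrite, then unlink the clear-text key file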

Notes:

  • The procedure might work on older Ubuntu systems, although it was not tested. A comment on the previous post suggested that with the addition of the source
    deb http://cz.archive.ubuntu.com/ubuntu focal main universe
    to your Ubuntu 18.04, the same procedure should work. I did not test it yet.
  • The security of the system is not complete until you handle the other tasks mentioned at the beginning of the text.
  • This procedure does not cover TPM1.2 systems, and will not work without modification on them. I might document such process in the future.

Good luck, and post here if it works for you, or if you have any suggestions!

Extracting multi-layered initramfs

Thursday, December 5th, 2019

The modern Kernel specification (can be seen here) defines the initial ramdisk (initrd or initramfs, depending on who you ask) to allow stacking of compressed or uncompressed CPIO archives. It means, in fact, that you can extend your current initramfs by appending a cpio.gz (or cpio) file at its end, containing the additions or changes to the filesystem (be it directories, files, links and anything else you can think of).

An example of this action:

mkdir /tmp/test
cd /tmp/test
tar -C /home/ezaton/test123 -cf - . | tar xf - # Clones the contents of /home/ezaton/test123 to this location
find ./ | cpio -o -H newc | gzip > ../test.cpio.gz # Creates a compressed CPIO file
cat ../test.cpio.gz >> /boot/initramfs-`uname -r`.img

This should work (I haven’t tried, and if you do it – make sure you have a copy of the original initramfs file!), and the contents of the directory /tmp/test would be reflected in the initramfs.

This method allows us to quickly modify an existing ramdisk, replacing files (the stacked cpio files are extracted in order), and practically – do a lot of neat tricks.

The trickier question, however, is how to extract the stacked CPIO files.
If you create a file containing multiple cpio.gz files, appended, and just try to extract them, only the contents of the first CPIO file would be extracted.

The Kernel can do it, and so can we. The basic concept to understand is that GZIP compresses a stream. It means that there is no difference between a file structured of stacked CPIO archives which is then compressed altogether, and a file constructed by appending cpio.gz files. The result is similar, and so is the handling of the file. It also means that we do not need to run a loop of zcat/un-cpio on the file chunk by chunk; when we decompress the file, we decompress it as a whole.
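A quick way to convince yourself of this gzip property (a throwaway example; any two strings will do):

echo hello | gzip > stream.gz
echo world | gzip >> stream.gz
zcat stream.gz # prints both lines - appended gzip members decompress as a single stream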

Let’s create an example file:

cd /tmp
for i in {1..10} ; do
    mkdir test${i}
    touch test${i}/test${i}-file
    find ./test${i} | cpio -o -H newc | gzip > test${i}.cpio.gz
    cat test${i}.cpio.gz >> test-of-all.cpio.gz
done
done

This script will create ten directories called test1 to test10, each containing a single file called test<number>-file. Each of them will both be archived into a dedicated cpio.gz file (named the same) and appended to a larger file called test-of-all.cpio.gz

If we run the following script to extract the contents, we will get only the first CPIO contents:

mkdir /tmp/extract
cd /tmp/extract
zcat ../test-of-all.cpio.gz | cpio -id # Format is newc, but it is auto detected

The result would be the directory ‘test1’ with a single file in it, and nothing else. The trick to extract all the files is to run the following command:

rm -Rf /tmp/extract # Cleanup
mkdir /tmp/extract
cd /tmp/extract
zcat ../test-of-all.cpio.gz | while cpio -id ; do : ; done

This will extract all files, until there is no more cpio format remaining. Then the ‘cpio’ command will fail and the loop would end.

Some additional notes:
The ‘:’ is a placeholder (it does nothing) because a ‘while’ loop requires a command. It is a legitimate command in shell.

So – now you can extract even complex CPIO structures, such as can be found in older Foreman “Discovery Image” (very old implementation), Tiny Core Linux (see this forum post, and this wiki note as reference on where this stacking is invoked) and more. This said, for extracting Centos/RHEL7 initramfs, which is structured of uncompressed CPIO appended by a cpio.gz file, a different command is required, and a post about it (works for Ubuntu and RHEL) can be found here.

EDIT: It seems the kernel-integrated CPIO extracting method will not “overwrite” a file with a later layer of cpio.gz contents, so I will have to investigate a different approach to that. FYI.

Using yum with SOCKS proxy

Sunday, June 30th, 2019

SSH is a wonderful tool. One of its best features is the ability to pierce a firewall and let you go through it. If you use the dynamic port option (-D as an argument on the OpenSSH command line), you actually get a SOCKS5 proxy over which you can transport all your desired data.

This allows you the freedom of accessing the Internet from a restricted machine, on the condition it can connect via SSH to another unrestricted machine. So – how does it work?

To simplify things – you will need two sessions on your restricted machine. Use the first to connect via SSH to an unrestricted machine, with the argument, in our example, -D 10000

This tells SSH to create a SOCKS5 proxy locally (on the restricted machine) over port 10000, and to transfer all traffic sent there through the remote (unrestricted) server.
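For example (user and host names are placeholders):

ssh -D 10000 user@unrestricted-host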

Using the other session, we can set environment variables like this:

export http_proxy=socks5://localhost:10000

export https_proxy=socks5://localhost:10000

These set variables which yum (among many other programs) can read and use. Afterwards, using the same session, running ‘yum update’ or ‘yum install package’ will run yum through the proxy connection. Of course – the SSH session to the unrestricted server must remain active the whole time, or else the yum command will fail.

Old Dell iDrac – work around Java failures

Wednesday, June 5th, 2019

I have an old Dell server (an R610, if it matters) and I seem to fail to connect to its iDrac console via Java. No other options exist, and the browser-launched Java flow fails somehow.

I have found an explanation here, and I will copy it for eternity 🙂

First – download the latest JRE version 1.7 from https://java.com

Then, extract it to a directory of your choice. We’ll call this directory $RUN_ROOT

Download the viewer.jnlp file to this directory $RUN_ROOT, and open it with a text editor. You will see an XML block pointing at a JAR file called avctKVM.jar. Download it manually using ‘wget’ or ‘curl’ from the URL provided in the viewer.jnlp XML file.

Extract the avctKVM.jar file using ‘unzip’. You will get two libraries – avctKVMIO (.so on Linux, .dll on Windows) and avmWinLib (likewise). Move these two files into a new directory under $RUN_ROOT/lib, as sketched below.
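A minimal sketch of these download-and-extract steps on Linux (the JAR URL must come from your own viewer.jnlp, so the one below is a placeholder, and the exact location of the .so files inside the JAR may vary):

cd "$RUN_ROOT"
curl -kO "https://IDRAC_HOST:443/software/avctKVM.jar" # real URL is in the viewer.jnlp XML
unzip -o avctKVM.jar -d jarfiles
mkdir -p lib
find jarfiles -name '*.so' -exec mv {} lib/ \;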

Download/copy-paste the below .bat or .sh script files (.bat file for Windows, .sh file for Linux).

start-virtual-console.bat

@echo off

set /P drachost="Host: "
set /p dracuser="Username: "
set "psCommand=powershell -Command "$pword = read-host 'Enter Password' -AsSecureString ; ^
    $BSTR=[System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($pword); ^
        [System.Runtime.InteropServices.Marshal]::PtrToStringAuto($BSTR)""
for /f "usebackq delims=" %%p in (`%psCommand%`) do set dracpwd=%%p

start-virtual-console.sh

#!/bin/bash
 
echo -n 'Host: '
read drachost
 
echo -n 'Username: '
read dracuser
 
echo -n 'Password: '
read -s dracpwd
echo
 
./jre/bin/java -cp avctKVM.jar -Djava.library.path=./lib com.avocent.idrac.kvm.Main ip=$drachost kmport=5900 vport=5900 user=$dracuser passwd=$dracpwd apcp=1 version=2 vmprivilege=true "helpurl=https://$drachost:443/help/contents.html"

Run the downloaded script file (with Linux – you might want to give it execution permissions first), and you will be asked for your credentials.

Thanks Nicola for this brilliant solution!