Archive for the ‘Linux’ Category

Extracting multi-layered initramfs

Thursday, December 5th, 2019

The modern kernel specification (can be seen here) defines the initial ramdisk (initrd or initramfs, depending on who you ask) to allow stacking of compressed or uncompressed CPIO archives. It means, in fact, that you can extend your current initramfs by appending a cpio.gz (or cpio) file at the end, containing the additions or changes to the filesystem (be it directories, files, links or anything else you can think of).

An example of this action:

mkdir /tmp/test
cd /tmp/test
tar -C /home/ezaton/test123 -cf - . | tar xf - # Clones the contents of /home/ezaton/test123 to this location
find ./ | cpio -o -H newc | gzip > ../test.cpio.gz # Creates a compressed CPIO file
cat ../test.cpio.gz >> /boot/initramfs-`uname -r`.img

This should work (I haven’t tried, and if you do it – make sure you have a copy of the original initramfs file!), and the contents of the directory /tmp/test would be reflected in the initramfs.

This method allows us to quickly modify an existing ramdisk, replacing files (the stacked CPIO archives are extracted in order), and, practically, to pull off a lot of neat tricks.
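For example – a minimal sketch (the file and the paths here are only an illustration) of overriding a single file by appending another compressed CPIO archive:

mkdir -p /tmp/override/etc
echo "patched" > /tmp/override/etc/motd
cd /tmp/override
find . | cpio -o -H newc | gzip > /tmp/override.cpio.gz
cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.orig # Keep a backup!
cat /tmp/override.cpio.gz >> /boot/initramfs-$(uname -r).img # Our copy of etc/motd is extracted last, so it wins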

The trickier question, however, is how to extract the stacked CPIO files.
If you create a file containing multiple cpio.gz files, appended, and just try to extract them, only the contents of the first CPIO file would be extracted.

The kernel can do it, and so can we. The basic concept to understand is that gzip compresses a stream. This means there is no difference between a file built of stacked CPIO archives which is then compressed as a whole, and a file constructed by appending cpio.gz files one after the other. The result is the same, and so is the handling of the file. It also means that we do not need to run a loop of zcat/un-cpio on the file chunk by chunk – when we decompress the file, we decompress it as a whole.
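A quick way to convince yourself that appended gzip members decompress as a single stream:

echo hello | gzip >  /tmp/joined.gz
echo world | gzip >> /tmp/joined.gz
zcat /tmp/joined.gz # Prints both lines - gzip treats the appended members as one stream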

Let’s create an example file:

cd /tmp
for i in {1..10} ; do
    mkdir test${i}
    touch test${i}/test${i}-file
    find ./test${i} | cpio -o -H newc | gzip > test${i}.cpio.gz
    cat test${i}.cpio.gz >> test-of-all.cpio.gz 
done

This script will create ten directories called test1 to test10, each containing a single file called test<number>-file. Each directory is archived into a dedicated cpio.gz file (named after the directory), which is then appended to a larger file called test-of-all.cpio.gz.

If we run the following script to extract the contents, we will get only the first CPIO contents:

mkdir /tmp/extract
cd /tmp/extract
zcat ../test-of-all.cpio.gz | cpio -id # Format is newc, but it is auto detected

The result would be the directory ‘test1’ with a single file in it, and nothing else. The trick to extract all the files is to run the following command:

rm -Rf /tmp/extract # Cleanup
mkdir /tmp/extract
cd /tmp/extract
zcat ../test-of-all.cpio.gz | while cpio -id ; do : ; done

This will extract the archives one after another, until no CPIO data remains. At that point the ‘cpio’ command fails and the loop ends.

Some additional notes:
The ‘:’ is a placeholder (it does nothing, yet is a legitimate shell command), used here because the body of a ‘while’ loop requires at least one command.

So – now you can extract even complex CPIO structures, such as the ones found in the older Foreman “Discovery Image” (a very old implementation), Tiny Core Linux (see this forum post, and this wiki note as reference on where this stacking is invoked) and more. That said, for extracting a CentOS/RHEL 7 initramfs, which is structured as an uncompressed CPIO archive followed by a cpio.gz file, a different command is required, and a post about it (it works for Ubuntu and RHEL) can be found here.

cluvfy fails with user equivalence

Wednesday, September 11th, 2019

I came across a Linux host (RHEL 6.2) which I had not managed before, and which required adding a node to an Oracle 11.2 Grid Infrastructure cluster. According to the common documentation, as can be seen here, after cloning the host (or installing it correctly, according to Oracle requirements), you should run ‘cluvfy’ on an existing cluster node.

This failed miserably – the error was “PRVF-7610: cannot verify user equivalence/reachability on existing cluster nodes with equivalence configured”. No additional logs were present, there was no indication of the actual problem, and equivalence worked correctly when tested using manual SSH commands in any and all directions.

I found a hint in this post, showing how to control the debug level of the cluvfy command through the following three environment variables:
export CV_TRACELOC=/tmp/cvutrace
export SRVM_TRACE=true
export SRVM_TRACE_LEVEL=2

With these variables set, I learned how cluvfy works – it checks connectivity (ping and then SSH), then copies a set of files to /tmp/CVU_<version>_<GI user> and executes them.

In this particular case, /tmp was mounted with the noexec flag, which resulted in a complete failure and a lack of any logs. So – take heed.
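A quick check (and a temporary workaround, assuming you can afford to remount /tmp) would be something like:

grep ' /tmp ' /proc/mounts | grep -w noexec && echo "/tmp is mounted noexec"
mount -o remount,exec /tmp # Temporary - lasts until the next remount/reboot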

Using yum with SOCKS proxy

Sunday, June 30th, 2019

SSH is a wonderful tool. One of its best features is the ability to pierce a firewall and let you go through it. If you use dynamic port forwarding (the -D argument in the OpenSSH command line), you actually get a SOCKS5 proxy over which you can transport all your desired data.

This allows you the freedom of accessing the Internet from a restricted machine, on the condition it can connect via SSH to another unrestricted machine. So – how does it work?

To simplify things – you will need two sessions on your restricted machine. Use the first to connect via SSH to an unrestricted machine, with the argument, in our example, -D 10000

This tells SSH to create a SOCKS5 proxy locally (on the restricted machine) on port 10000, and to forward all traffic sent to it through the remote (unrestricted) server.
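For example (the user and host names are made up):

ssh -D 10000 -N user@unrestricted.example.com
# -D 10000 - open a local SOCKS5 proxy on port 10000
# -N       - do not run a remote command, just keep the tunnel open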

Using the other session, we can set environment variables like this:

export http_proxy=socks5://localhost:10000

export https_proxy=socks5://localhost:10000

These variables are read and used by yum (among many other programs). Afterwards, using the same session, running ‘yum update‘ or ‘yum install package‘ will result in yum going through the proxy connection. Of course – the SSH session to the unrestricted server must remain active at all times, or else the yum command will fail.
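If you prefer not to export the variables for the whole session, you can also prefix a single command with them (the package name here is only an example):

http_proxy=socks5://localhost:10000 https_proxy=socks5://localhost:10000 yum install wget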

SecureBoot and VirtualBox kernel modules

Saturday, June 1st, 2019

Installing VirtualBox on Ubuntu 18 (the same goes for modern Fedora) with SecureBoot enabled will result in an error when running the command /sbin/vboxconfig

The error message would be something like this:

There were problems setting up VirtualBox. To re-start the set-up process, run
/sbin/vboxconfig
as root. If your system is using EFI Secure Boot you may need to sign the
kernel modules (vboxdrv, vboxnetflt, vboxnetadp, vboxpci) before you can load
them. Please see your Linux system’s documentation for more information.

This is because SecureBoot will not allow unsigned kernel modules to load, and VirtualBox builds its own kernel modules as part of its configuration.
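The fix below relies on signing the modules with a Machine Owner Key (MOK). If you do not have one enrolled yet, a minimal sketch for creating and registering one (the location matches what the signing script below expects) would be:

sudo mkdir -p /var/lib/shim-signed/mok
cd /var/lib/shim-signed/mok
sudo openssl req -new -x509 -newkey rsa:2048 -nodes -days 36500 \
    -subj "/CN=VirtualBox module signing/" \
    -keyout MOK.priv -outform DER -out MOK.der
sudo mokutil --import MOK.der # You will be prompted for a one-time password
# Reboot and complete the enrollment in the MOK Manager screen, using that password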

I have found a great solution for this problem in the answers to this question here, which goes as follows:

Create a file (as root) called /usr/bin/ensure-vbox-signed with the following content:

#!/bin/bash
 
MOKUTIL="/usr/bin/mokutil"
MODPROBE="/sbin/modprobe"
MODINFO="/sbin/modinfo"
SIG_DIR="/var/lib/shim-signed/mok"
PUB="${SIG_DIR}/MOK.der"
KEY="${SIG_DIR}/MOK.priv"
 
if ! "${MOKUTIL}" --sb-state | grep -qi '[[:space:]]enabled$' ; then
	echo "WARNING: Secure Boot is not enabled, signing is not necessary"
	exit 0
fi
 
# If secure boot is enabled, we try to find the signature keys
[ -f "${KEY}" ] || { echo "ERROR: Couldn't find the MOK private key at ${KEY}" ; exit 1 ; }
[ -f "${PUB}" ] || { echo "ERROR: Couldn't find the MOK public key at ${PUB}" ; exit 1 ; }
 
INFO="$("${MODINFO}" -n vboxdrv)"
if [ -z "${INFO}" ] ; then
	# If there's no such module, compile it
	/usr/lib/virtualbox/vboxdrv.sh setup
	INFO="$("${MODINFO}" -n vboxdrv)"
	if [ -z "${INFO}" ] ; then
		echo "ERROR: Module compilation failed (${MODPROBE} couldn't find it after vboxdrv.sh was called)"
		exit 1
	fi
fi
 
KVER="${1}"
[ -z "${KVER}" ] &amp;&amp; KVER="$(uname -r)"
 
KDIR="/usr/src/linux-headers-${KVER}"
DIR="$(dirname "${INFO}")"
 
for module in "${DIR}"/vbox*.ko ; do
	MOD="$(basename "${module}")"
	MOD="${MOD//.*/}"
 
	# Quick check - if the module loads, it needs no signing
	echo "Loading ${MOD}..."
	"${MODPROBE}" "${MOD}" &amp;&amp; continue
 
	# The module didn't load, and it must have been built (above), so it needs signing
	echo "Signing ${MOD}..."
	if ! "${KDIR}/scripts/sign-file" sha256 "${KEY}" "${PUB}" "${module}" ; then
		echo -e "\tFailed to sign ${module} with ${KEY} and ${PUB} (rc=${?}, kernel=${KVER})"
		exit 1
	fi
 
	echo "Reloading the signed ${MOD}..."
	if ! "${MODPROBE}" "${MOD}" ; then
		echo -e "\tSigned ${MOD}, but failed to load it from ${module}"
		exit 1
	fi
	echo "Loaded the signed ${MOD}!"
done
exit 0

Make sure this file is executable by root. Create a systemd service /etc/systemd/system/ensure-vboxdrv-signed.service with the following contents:

[Unit]
SourcePath=/usr/bin/ensure-vbox-signed
Description=Ensure the VirtualBox Linux kernel modules are signed
Before=vboxdrv.service
After=

[Service]
Type=oneshot
Restart=no
TimeoutSec=30
IgnoreSIGPIPE=no
KillMode=process
GuessMainPID=no
RemainAfterExit=yes
ExecStart=/usr/bin/ensure-vbox-signed

[Install]
WantedBy=multi-user.target
RequiredBy=vboxdrv.service

Run sudo systemctl daemon-reload, and then enable and start the service by running sudo systemctl enable --now ensure-vboxdrv-signed.service

It should sign and load your vbox drivers, and allow you to run your VirtualBox machines.
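To verify the result (assuming the default module names):

mokutil --sb-state # Should still report SecureBoot enabled
lsmod | grep vbox # The signed modules (vboxdrv and friends) should be listed
systemctl status ensure-vboxdrv-signed.service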

How to extract modern Ubuntu initramfs

Thursday, May 30th, 2019

Just so I remember – there is an explanation here, from which the following command can be taken:

(cpio -id; zcat | cpio -id) < /path/to/initrd.img
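On recent Ubuntu releases there is also the ‘unmkinitramfs’ helper (part of initramfs-tools-core), which performs this multi-segment extraction for you:

unmkinitramfs /boot/initrd.img-$(uname -r) /tmp/initrd-extracted
ls /tmp/initrd-extracted # Typically contains 'early' (CPU microcode) and 'main' sub-directories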