Archive for the ‘Linux’ Category

Linux boot security – a discussion

Saturday, May 7th, 2022

Hi. As some of you already know, I have an article about encrypting and protecting Linux while using the TPM to avoid entering the encryption passphrase every time the system boots. It is one of the most popular articles on this blog, and for a very good reason: people want their computers (especially portable devices) to be secure and well protected against data theft (by trivial means, that is. I tend to believe that if the NSA or any country-class intelligence service is targeting you, your data will be compromised, so please don't get targeted by countries), but they want to be bothered as little as possible with entering and re-entering security credentials along the way. Having the cake and eating it too, if you will.

Using the TPM (which is soldered onto the computer's motherboard) to keep the encryption passphrase is a great solution: you get an encrypted disk, but do not have to re-enter the passphrase on every reboot. Also, you can rest assured that if your hard drive is ever pulled out of the computer, there is nothing to be done with it, because the key is kept on the motherboard.

However, this is not enough, and my article does not cover the scenario where a malicious user pulls out our hard drive and puts a different hard drive in its place. The current process keeps the passphrase directly in the TPM2 chip, not through a challenge/response mechanism, but simply as storage for the key. As such, any user with physical access to the machine, working from the OS level, can obtain it: by replacing the disk and pulling the key out, or worse, by patching the boot sequence (which partially resides on /boot, which is not encrypted), replacing the initramfs/initrd or modifying the GRUB boot flow, and just asking the TPM nicely for the key. So our system is not that secure, at least not as secure as we aim for. I mentioned this in the disk encryption article, but have yet to provide a solution.

The reason I have not provided a solution has to do with how Canonical (Ubuntu) looks at the chain of trust, and the different way I look at it. To bring this issue up front, I am writing this article, which is not really technical, but more of a discussion of options. If some of my readers respond and enlighten me, I would be grateful, as I am truly looking for a feasible solution for my Ubuntu – this release or the next one.

A chain of trust is a workflow where every step is trusted by the previous step on one side, and can trust the next step on the other. My desired chain of trust in this case would look like this:

  • The BIOS is locked and cannot be modified without credentials
  • The BIOS allows booting only from a single disk device, and only a specific boot file
  • Using SecureBoot, only the bootable binary (GRUB2 in our case) – a signed and untampered file – can be called
  • The GRUB2 EFI executable has its configuration embedded, so that you cannot modify it, add additional parameters, or stop its boot process mid-way.
  • GRUB2 is password protected, and allows only the single default entry, without additional parameters, to be booted without authentication.
  • The GRUB2 loader trusts the kernel it boots because it is pre-signed by a trusted authority.
  • The GRUB2 loader trusts the initramfs/initrd because it is signed in a manner GRUB2 understands and respects, i.e. – it is unmodified
  • The initramfs/initrd calls the TPM2 to obtain the key, and by doing so does two things: it opens the disk encryption using the key, and it trusts the BIOS to be untampered with, because resetting the BIOS password would reset the TPM2 contents as well (this is good for us). See the sketch right after this list.
  • The system continues to boot correctly.
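
As a side note, here is a minimal sketch of what the TPM2 step inside the initramfs could look like. It assumes tpm2-tools is available in the initramfs, that the passphrase was sealed under a persistent handle (0x81010001 below is just a placeholder), and that the encrypted device is /dev/sda3 – adjust all of these to your own setup:

# Unseal the LUKS passphrase from the TPM2 and hand it to cryptsetup on stdin
tpm2_unseal -c 0x81010001 | cryptsetup luksOpen /dev/sda3 cryptroot --key-file=-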

You can notice that every step of the boot process has the current step trusted by the previous one, and has a trust mechanism towards the next step. This can work by using a self-signed certificate to sign the GRUB2 EFI executable, and saving this certificate (and its verification!) in the SecureBoot section of the BIOS, overriding any 'common' certificates the BIOS knows of. Then, by using grub-standalone instead of the regular GRUB2 – meaning that the GRUB configuration and modules are embedded into a single, signed executable – we prevent injection of GRUB modules (drivers) or changes of parameters. Also, GRUB is password protected against non-default boot. Following that, GRUB trusts the kernel, which is signed, and the initramfs/initrd, which is also signed (GRUB used to trust GPG-signed files; however – and this is the source of this discussion – Canonical changed some things there, and in my opinion, not for the better). From here onwards, the boot process works fine.
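
To make the GRUB part of this more concrete, here is a rough sketch of how such a standalone, self-signed GRUB image could be produced. The key/certificate paths, the embedded grub.cfg and the module list are placeholders – the exact set depends on your setup – and the certificate still has to be enrolled in the firmware's SecureBoot database for the BIOS to trust the result:

# Build a single GRUB EFI binary with the configuration embedded in it
grub-mkstandalone -O x86_64-efi \
    -o /tmp/grubx64.efi \
    --modules="part_gpt ext2 linux cryptodisk luks" \
    "boot/grub/grub.cfg=/root/locked-down-grub.cfg"

# Sign it with our own key (sbsigntool package)
sbsign --key /root/secureboot/db.key --cert /root/secureboot/db.crt \
       --output /boot/efi/EFI/ubuntu/grubx64.efi /tmp/grubx64.efi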

This chain of trust seems reasonable. It will prevent intervention in the boot sequence, and will make sure that files on the unencrypted part of the disk cannot be replaced unnoticed. You cannot load a different GRUB (even if replaced) due to the BIOS trust settings, and you cannot change the configuration, kernel or initramfs/initrd, and therefore you are bound to this predefined flow.

This does not follow the 'trusted computing' flow, though. Trusted computing is a mechanism that is supposed to prevent the kernel (and the hardware assisting the process) from loading unsigned kernel drivers, and it should be enforced by the TPM/SecureBoot as well. This is nice and pretty, but the vulnerable areas in the trusted computing flow are the loaders. You might try to inject kernel drivers or malicious code while running under a privileged context – when GRUB gets called in.

To mitigate this, Ubuntu enforced (starting with GRUB version 2.04-1ubuntu26.2) the use of SHIM, which is a pre-loader mechanism. The chain of trust works slightly differently here:

  • BIOS trusts SHIM, using Microsoft certificate
  • SHIM, which is signed with Microsoft's certificate, trusts a different set of certificates – self-signed or Canonical certificates. SHIM is a pre-GRUB loader.
  • SHIM trusts the GRUB EFI binary, which is signed by Canonical
  • GRUB trusts the kernel using the Canonical certificate (all stock kernels are provided pre-signed by Canonical), or a custom kernel (and modules) using a self-signed certificate – which then needs to be used for every kernel, and for every module used by that kernel.
  • GRUB trusts the initramfs/initrd by whatever means (no trust is checked there by default)

The problem with this flow is that it has external parties in the chain of trust, where different components might get replaced or patched. The problems, as I see them (and I would like to get a different opinion about this, so please feel free to share), are:

  • SHIM can be replaced with another EFI binary trusted by Microsoft (like the MS Windows loader, or some other tool used by a 3rd party I do not know of). This will result in skipping the entire trust chain into a different flow, over which we have no control.
  • SHIM might be compiled with a self-signed certificate (which would then be entered into the BIOS SecureBoot as a trusted certificate); however, this process will prevent easy/automated deployment of SHIM, or will require re-compiling it on every update.
  • Using grub-standalone (reviewed above) will require signing it with the Canonical certificate (which we cannot do), or with our self-signed one. If we use a self-signed certificate and remove the Canonical certificate from SHIM, we will have to use our certificate to re-sign the kernel as well; if we instead add our self-signed certificate to SHIM alongside Canonical's, our locked-down GRUB can be replaced with a general GRUB2, signed by Canonical (the default), and the lockdown can easily be worked around.
  • GRUB2 (including grub-standalone, as this is part of the change introduced in version 2.04-1ubuntu26.2) will no longer trust a GPG signature for the kernel (which would have let us avoid re-signing the kernel with SSL certificates and add a GPG signature instead – easier automation of processes and trust chains), but only the SSL signature, which has to be known to SHIM.
  • By default, the initramfs signature is not checked. At this stage, with all the chain breaks along the way, it doesn't matter much. An attacker can just hook in anywhere earlier in the chain and pull our keys out, using off-the-shelf, trusted GRUB EFI binaries.
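
As an aside, if you want to see what your system currently trusts before deciding on any of this, the mokutil tool (run from the installed Ubuntu) can show the SecureBoot state and the certificates enrolled in the MOK database:

mokutil --sb-state          # shows whether SecureBoot is currently enabled
mokutil --list-enrolled     # lists certificates enrolled in the MOK database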

I was able to recompile the latest GRUB packages for Ubuntu omitting these three patches, without code modifications, to regain the original security flow I presented above (pending tests – not done yet):

  • 0096-linuxefi-fail-kernel-validation-without-shim-protoco.patch
  • 0099-chainloader-Avoid-a-double-free-when-validation-fail.patch
  • ubuntu-linuxefi-arm64.patch
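
For reference, this is roughly the rebuild process, as a sketch only – it assumes deb-src repositories are enabled, and the extracted source directory name depends on the exact package version:

apt-get source grub2                  # requires deb-src entries in sources.list
cd grub2-*/
# Drop the three patches from the list applied at build time
sed -i -e '/0096-linuxefi-fail-kernel-validation-without-shim-protoco.patch/d' \
       -e '/0099-chainloader-Avoid-a-double-free-when-validation-fail.patch/d' \
       -e '/ubuntu-linuxefi-arm64.patch/d' debian/patches/series
dpkg-buildpackage -b -uc -us          # build unsigned binary packages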

Is it to be trusted? I do not know. Is it safe "enough"? Maybe. Will I have to keep maintaining a non-official GRUB package? Maybe. How do I maintain it for Ubuntu 22.04? It seems Canonical has its mind set on its own flow, which allows for trusted computing – I cannot ignore that – but fails, to my understanding, to provide "good enough" protection against physical access when part of the disk is not encrypted (to be exact: /boot and /boot/efi).

You can take a look at the discussion in this Ubuntu bug thread. You can also find the workflow of using SecureBoot and SHIM in Ubuntu in their official wiki here. I did not discuss MokManager, a tool used to add certificates to SecureBoot or SHIM, since I need to dig deeper into it, but to my understanding, it only eases management of some tasks which might be easier to set from within a working OS.

I would love to hear where my analysis of Canonical's workflow is wrong, and where their offered security works better than my (previous) workflow – in regard to protecting data against physical access.

I will also link this article from the encrypted-disk-with-TPM2 article, in the hope that it will draw more interaction with this post.

Thanks for reading this far.

Video colour correction for diving, in Linux

Wednesday, March 2nd, 2022

Diving videos tend to be bluish. There are tools to automatically fix this, or at least enhance the video so that the colours appear more natural. Unfortunately, most of these tools either don't work well on Linux, or don't work well with my phone.

After a short investigation, I was able to produce a satisfactory result with the Linux program 'Shotcut' (available here).

Based on tips from the following blog and forum, I am able to produce a reasonable result out of my SJCam4000 action camera, which has an inferior sensor that performs poorly in low light.

Still – it works.

I am using the 'snap' version of Shotcut, and the following filter set is a very good baseline for modifications. Of course, the actual parameters depend on the light levels, the depth (and thus how many colours get absorbed), the level of white balance your camera can achieve, and so on. Just paste the value below into Shotcut's 'filters' area, and you should be good to go:

<?xml version="1.0" encoding="utf-8"?>
<mlt LC_NUMERIC="C" version="7.5.0" root="" parent="producer0" in="00:00:00.000" out="00:08:19.967"><producer id="producer0" in="00:00:00.000" out="00:08:19.967"><property name="length">15000</property><property name="eof">pause</property><property name="resource">black</property><property name="aspect_ratio">1</property><property name="mlt_service">color</property><property name="shotcut:filtersClipboard">1</property><filter id="filter0" out="00:00:41.967"><property name="version">0.1</property><property name="mlt_service">frei0r.colgate</property><property name="threads">0</property><property name="0">#0aa5ee</property><property name="1">0.433333</property><property name="disable">0</property></filter><filter id="filter1" out="00:00:41.967"><property name="mlt_service">avfilter.hue</property><property name="av.h">-4</property><property name="av.b">-0.5</property><property name="av.s">1.5</property></filter><filter id="filter2" out="00:00:41.967"><property name="lift_r">0</property><property name="lift_g">0</property><property name="lift_b">0</property><property name="gamma_r">1</property><property name="gamma_g">1</property><property name="gamma_b">1</property><property name="gain_r">1</property><property name="gain_g">1</property><property name="gain_b">1</property><property name="mlt_service">lift_gamma_gain</property><property name="shotcut:filter">contrast</property></filter><filter id="filter3" out="00:00:41.967"><property name="lift_r">-0.253986</property><property name="lift_g">-0.253986</property><property name="lift_b">-0.253986</property><property name="gamma_r">1.106</property><property name="gamma_g">1.09001</property><property name="gamma_b">1.183</property><property name="gain_r">264=1.79556</property><property name="gain_g">264=1.79556</property><property name="gain_b">264=1.79556</property><property name="mlt_service">lift_gamma_gain</property></filter></producer></mlt>

Hope it helps someone 🙂

Enable blk_mq on Redhat (OEL/Centos) 7 and 8

Saturday, December 18th, 2021

The multi-queue block layer (blk_mq) was designed to increase disk performance – better IOPS and lower latency – on flash disk systems, with an emphasis on NVMe devices. A nice by-product is that it also increases performance on the "older" SCSI/SAS/SATA layer when the disks in the back are SSDs, and in some cases even when they are rotational disks.

When attempting to increase system disk performance (where and when it is the bottleneck, such as in the case of databases), this is a valid and relevant configuration option. If you have baseline performance metrics to compare against – great; but even without previous performance details, in most cases, on a modern system, performance will improve after this change.

NVMe devices get this mechanism enabled by default, so no worries there. However, when dealing with the SCSI subsystem, your system might not be aware of the backend storage type of the devices, nor of the backend's capabilities, so these settings are not enabled by default.

To enable these settings on RHEL/OEL/CentOS versions 7 and 8, you should do the following:

Edit /etc/default/grub and append to your GRUB_CMDLINE_LINUX the following string:

scsi_mod.use_blk_mq=1 dm_mod.use_blk_mq=y

This will enable blk_mq both on the SCSI subsystem, and the device mapper (DM), which includes software RAID, LVM and device-mapper-multipath (aka – multipath).
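
For example, the resulting line in /etc/default/grub could end up looking like this (the other parameters here are only an illustration – keep whatever your system already has):

GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet scsi_mod.use_blk_mq=1 dm_mod.use_blk_mq=y"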

Following this, run:

grub2-mkconfig -o /boot/grub2/grub.cfg  # for legacy BIOS

or

grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg  # for EFI

Of course, a reboot is required for these changes to take effect. Following the reboot, you will be able to see that the contents of /sys/block/sda/queue/scheduler (for /dev/sda in my example – check your relevant disks, of course) show "[mq-deadline]".
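
A quick way to check all SCSI-type disks at once after the reboot (the active scheduler is the one in square brackets):

grep . /sys/block/sd*/queue/scheduler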

If you want to read more about blk_mq – you can find it in great detail in this blog post.

Reduce traverse time of large directory trees on Linux

Tuesday, December 14th, 2021

Every Linux admin is familiar with how long running through a large directory tree (with hundreds of thousands of files and more) can take. Most are aware that if you re-run the same traversal, it will take less time.

This is caused by a short-lived filesystem cache – either the memory is reclaimed for other tasks, or the metadata cache required exceeds what is available for this task.

If the system is focused on files, meaning that its prime task is holding files (an NFS server, for example) and memory is largely available, a certain tunable can speed up recurring directory dives (such as 'find' or 'rsync' runs, which issue huge numbers of attribute queries):

sysctl vm.vfs_cache_pressure=10

The default value is 100. Lower values will cause the system to prefer keeping this cache. A quote from the kernel's memory tunables page:

vfs_cache_pressure
——————————–
This percentage value controls the tendency of the kernel to reclaim the memory which is used for caching of directory and inode objects.

At the default value of vfs_cache_pressure=100 the kernel will attempt to reclaim dentries and inodes at a “fair” rate with respect to pagecache and swapcache reclaim. Decreasing vfs_cache_pressure causes the kernel to prefer to retain dentry and inode caches. When vfs_cache_pressure=0, the kernel will never reclaim dentries and inodes due to memory pressure and this can easily lead to out-of-memory conditions. Increasing vfs_cache_pressure beyond 100 causes the kernel to prefer to reclaim dentries and inodes.

Increasing vfs_cache_pressure significantly beyond 100 may have negative performance impact. Reclaim code needs to take various locks to find freeable directory and inode objects. With vfs_cache_pressure=1000, it will look for ten times more freeable objects than there are.
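
Note that the sysctl command above only affects the running system. To make the setting persist across reboots, drop it into a sysctl configuration file – the file name below is arbitrary:

echo 'vm.vfs_cache_pressure = 10' > /etc/sysctl.d/90-vfs-cache-pressure.conf
sysctl -p /etc/sysctl.d/90-vfs-cache-pressure.conf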

Installation boot of RHEL8 (network settings)

Wednesday, November 10th, 2021

This blog is my extended memory, and as such, its task is to remind me of things I tend to forget, saving me the time required to search for them again. So here is another one of those things.

The network settings syntax for RHEL8/OEL8, or any of their compatible systems, when you want to pass these to Anaconda (as can be found here), is:

inst.ks=http://url-to-ks.com ip=10.10.10.2::10.10.10.254:255.255.255.0:testsrv1:em1:none dns=10.10.10.253

These network settings work for static IP addresses, and are constructed from these arguments:

ip=|IP address|::|gateway|:|netmask|:|hostname|:|interface|:|bootproto|
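
Mapping the example above onto this template (to my understanding, the empty second field is a peer address, used only for point-to-point links, hence the double colon):

ip=10.10.10.2::10.10.10.254:255.255.255.0:testsrv1:em1:none
#  10.10.10.2     - IP address
#  (empty)        - peer address, unused here
#  10.10.10.254   - gateway
#  255.255.255.0  - netmask
#  testsrv1       - hostname
#  em1            - interface
#  none           - bootproto (static configuration)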

I find this syntax confusing, and so I've kept it here to help me remember it.

Hope it helps.