Archive for December, 2021

Enable blk_mq on Red Hat (OEL/CentOS) 7 and 8

Saturday, December 18th, 2021

The blk_mq (multi-queue) block layer was designed to increase disk performance (better IOPS and lower latency) on flash storage, with an emphasis on NVMe devices. A nice by-product is that it improves performance on the “older” SCSI/SAS/SATA layer as well, when the backend disks are SSDs, and in some cases even when they are rotational disks.

When attempting to increase system disk performance (where and when disk I/O is the bottleneck, such as in the case of databases), this is a valid and relevant configuration option. If you have baseline performance metrics to compare against, great; but even if you do not have previous performance details, in most cases, on a modern system, your performance will improve after this change.

NVMe devices get blk_mq enabled by default, so no worry there. However, when dealing with the SCSI subsystem, your system might not be aware of the backend storage type or of the backend capabilities, so by default these settings are not enabled.
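For example, checking an NVMe device (nvme0n1 here is just an illustrative name) will show that a multi-queue scheduler is already in use; the active scheduler is the one displayed in square brackets:

cat /sys/block/nvme0n1/queue/scheduler  # the exact list of available schedulers depends on your kernel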

To enable these settings on RHEL/OEL/CentOS 7 and 8, you should do the following:

Edit /etc/default/grub and append the following string to your GRUB_CMDLINE_LINUX line:

scsi_mod.use_blk_mq=1 dm_mod.use_blk_mq=y

This will enable blk_mq both on the SCSI subsystem and on the device mapper (DM), which includes software RAID, LVM and device-mapper-multipath (also known simply as multipath).
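For illustration only, the resulting line in /etc/default/grub could look like the following; the other parameters are placeholders from a typical install and will differ on your system:

GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet scsi_mod.use_blk_mq=1 dm_mod.use_blk_mq=y"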

Following this, run:

grub2-mkconfig -o /boot/grub2/grub.cfg  # for legacy BIOS

or

grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg  # for EFI
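If you are not sure which of the two applies to your machine, one quick way to check is whether the EFI variables directory exists:

[ -d /sys/firmware/efi ] && echo "EFI" || echo "Legacy BIOS"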

Of course, a reboot is required for these changes to take effect. Following the reboot, you will be able to see that the contents of /sys/block/sda/queue/scheduler (for /dev/sda in my example; check your relevant disks, of course) show a multi-queue scheduler such as “[mq-deadline]” as the active one. The active scheduler is the one displayed in square brackets.
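A quick sanity check after the reboot (sda is just an example device; the module parameter paths below are exposed on the RHEL/OEL/CentOS 7 and 8 kernels, but may be absent on newer kernels where blk_mq is always on):

cat /sys/block/sda/queue/scheduler                # the active scheduler appears in square brackets
cat /sys/module/scsi_mod/parameters/use_blk_mq    # should show Y
cat /sys/module/dm_mod/parameters/use_blk_mq      # should show Y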

If you want to read more about blk_mq, you can find it described in great detail in this blog post.

Reduce traverse time of large directory trees on Linux

Tuesday, December 14th, 2021

Every Linux admin is familiar with how long a run through a large directory tree (with hundreds of thousands of files and more) can take. Most are also aware that if you re-run the same traversal, it will be much shorter.

This is caused by the filesystem (dentry and inode) cache being short-lived: its memory gets reclaimed for other tasks, or the amount of metadata that needs caching exceeds what is kept available for this task.

If the system is focused on files, meaning that its prime task is holding and serving files (an NFS server, for example), and memory is largely available, a certain tunable can reduce the cost of recurring directory dives (such as ‘find’ or ‘rsync’ runs, which issue huge amounts of attribute queries):

sysctl vm.vfs_cache_pressure=10
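To make the setting persist across reboots, you can place it in a sysctl drop-in file (the file name below is arbitrary):

echo "vm.vfs_cache_pressure = 10" > /etc/sysctl.d/99-vfs-cache-pressure.conf
sysctl --system  # reload all sysctl configuration files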

The default value is 100. Lower values will cause the system to prefer keeping this cache. A quote from the kernel’s memory tunables documentation:

vfs_cache_pressure
This percentage value controls the tendency of the kernel to reclaim the memory which is used for caching of directory and inode objects.

At the default value of vfs_cache_pressure=100 the kernel will attempt to reclaim dentries and inodes at a “fair” rate with respect to pagecache and swapcache reclaim. Decreasing vfs_cache_pressure causes the kernel to prefer to retain dentry and inode caches. When vfs_cache_pressure=0, the kernel will never reclaim dentries and inodes due to memory pressure and this can easily lead to out-of-memory conditions. Increasing vfs_cache_pressure beyond 100 causes the kernel to prefer to reclaim dentries and inodes.

Increasing vfs_cache_pressure significantly beyond 100 may have negative performance impact. Reclaim code needs to take various locks to find freeable directory and inode objects. With vfs_cache_pressure=1000, it will look for ten times more freeable objects than there are.
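If you want to see the effect for yourself, a simple sketch is to time the same traversal twice and to glance at the dentry/inode slab caches (the path is just an example, and slabtop requires root):

time find /srv/data > /dev/null      # first, cold-cache run
time find /srv/data > /dev/null      # second run, served mostly from the dentry/inode cache
slabtop -o | grep -E 'dentry|inode'  # current size of the dentry and inode slab caches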