Reduce Oracle ASM disk size

I have had a system with an Oracle ASM diskgroup from which I needed some space. The idea was to release some disk space, reduce the size of the existing partitions, add a new partition, and use it – all online(!)

This is a grocery list of tasks to do, with some explanation integrated into it.

Overview

  • Reduce ASM diskgroup size
  • Drop a disk from ASM diskgroup
  • Repartition disk
  • Delete and recreate ASM label on disk
  • Add disk to ASM diskgroup

Assume the diskgroup is constructed of five disks, called SSDDATA01 to SSDDATA05. All my examples will be based on that. The current size is 400GB, and I will reduce it to somewhat below the target size of the new partition – let's say, to 356000M.

Reduce ASM diskgroup size

alter diskgroup SSDDATA resize all size 356000M;

This command shrinks every disk in the diskgroup to below the target partition size, which is what will allow us to shrink the partitions themselves.
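As a quick sanity check of the numbers (a sketch; ASM's "M" suffix means MiB, while the "GB" parted uses later is decimal), the 356000M target indeed fits inside a 375GB partition:

```shell
# Compare the ASM resize target (MiB) with the future partition size
# (decimal GB, as parted interprets it).
resize_bytes=$((356000 * 1024 * 1024))          # 356000M in bytes
partition_bytes=$((375 * 1000 * 1000 * 1000))   # 375GB in bytes
[ "$resize_bytes" -lt "$partition_bytes" ] && echo "target fits"
```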

Drop a disk from the ASM diskgroup

alter diskgroup SSDDATA drop disk 'SSDDATA01';

We will need to know which physical disk this is. You can look at the device's major/minor numbers in /dev/oracleasm/disks and compare them with those of the /dev/sd* or /dev/mapper/* devices.
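For example, the major/minor pair can be read with 'stat' (a sketch – the device paths below are from this example and will differ on your system):

```shell
# Print the hex major:minor of the ASM-labelled device...
stat -c '%t:%T %n' /dev/oracleasm/disks/SSDDATA01
# ...and of the candidate physical devices, to find the matching pair:
stat -c '%t:%T %n' /dev/sd? /dev/mapper/* 2>/dev/null
```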

It is imperative that the correct disk is identified (or else…) and that the disk has actually become unused (the drop and rebalance have completed) before you touch it.

Reduce partition size

As the root user, you should run something like this (assuming a single partition on the disk):

parted -a optimal /dev/sdX
(parted) rm 1
(parted) mkpart primary 1 375GB
(parted) mkpart primary 375GB -1
(parted) quit

This will remove the first (and only) partition – but not its data – and recreate it with the same beginning but smaller in size. The remaining space will be used for an additional partition.

We will need to refresh the disk layout on all cluster nodes. Run on all nodes:

partprobe /dev/sdX

Remove and recreate ASM disk label

As root, run the following command on a single node:

oracleasm deletedisk SSDDATA01
oracleasm createdisk SSDDATA01 /dev/sdX1

You might want to run the following command, as root, on all other nodes:

oracleasm scandisks

Add a disk to ASM diskgroup

Because the disk is now of a different size, adding it without an explicit size argument is not possible. This is the correct command to perform the task:

alter diskgroup SSDDATA add disk 'ORCL:SSDDATA01' size 356000M;

To save time, however, we can add this disk and drop another at the same time:

alter diskgroup SSDDATA add disk 'ORCL:SSDDATA01' size 356000M drop disk 'SSDDATA02';

In general – if you have enough free space in your ASM diskgroup – it is recommended to add and drop multiple disks in a single command. This allows for a faster operation with less repeated data migration, since the rebalance runs only once. Save time, save effort.

The last disk will be re-added, without dropping any other disk, of course.

I hope it helps 🙂

Extracting multi-layered initramfs

The modern kernel specification (which can be seen here) defines the initial ramdisk (initrd or initramfs, depending on whom you ask) to allow stacking of compressed or uncompressed CPIO archives. It means, in fact, that you can extend your current initramfs by appending a cpio.gz (or plain cpio) file to its end, containing the additions or changes to the filesystem (be it directories, files, links or anything else you can think of).

An example of this action:

mkdir /tmp/test
cd /tmp/test
tar -C /home/ezaton/test123 -cf - . | tar xf - # Clones the contents of /home/ezaton/test123 to this location
find ./ | cpio -o -H newc | gzip > ../test.cpio.gz # Creates a compressed CPIO file
cat ../test.cpio.gz >> /boot/initramfs-`uname -r`.img

This should work (I haven't tried it, and if you do – make sure you have a copy of the original initramfs file!), and the contents of the directory /tmp/test would be reflected in the initramfs.

This method allows us to quickly modify an existing ramdisk, replacing files (the stacked CPIO files are extracted in order), and practically – to do a lot of neat tricks.

The trickier question, however, is how to extract the stacked CPIO files.
If you create a file containing multiple cpio.gz files appended together and just try to extract it, only the contents of the first CPIO file will be extracted.

The kernel can do it, and so can we. The basic concept to understand is that gzip compresses a stream: there is no difference between a file built of stacked CPIO archives and then compressed altogether, and a file constructed by appending cpio.gz files. The result is the same, and so is the handling of the file. It also means that we do not need to run a loop of zcat/cpio on the file chunk by chunk – when we decompress the file, we decompress it in whole.
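This property of gzip is easy to demonstrate in isolation (the file name here is arbitrary):

```shell
# Two independently gzipped streams, appended into one file,
# decompress as a single continuous stream:
cd "$(mktemp -d)"
printf 'first\n'  | gzip >  combined.gz
printf 'second\n' | gzip >> combined.gz
zcat combined.gz   # prints 'first', then 'second'
```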

Let’s create an example file:

cd /tmp
for i in {1..10} ; do
    mkdir test${i}
    touch test${i}/test${i}-file
    find ./test${i} | cpio -o -H newc | gzip > test${i}.cpio.gz
    cat test${i}.cpio.gz >> test-of-all.cpio.gz 
done

This script will create ten directories, called test1 to test10, each containing a single file called test<number>-file. Each directory is archived into a dedicated cpio.gz file (named accordingly), and each archive is appended to a larger file called test-of-all.cpio.gz

If we run the following script to extract the contents, we will get only the first CPIO contents:

mkdir /tmp/extract
cd /tmp/extract
zcat ../test-of-all.cpio.gz | cpio -id # Format is newc, but it is auto detected

The result would be the directory 'test1' with a single file in it, and nothing else. The trick to extracting all the files is to run the following:

rm -Rf /tmp/extract # Cleanup
mkdir /tmp/extract
cd /tmp/extract
zcat ../test-of-all.cpio.gz | while cpio -id ; do : ; done

This will extract all the files, until no cpio data remains. Then the 'cpio' command fails and the loop ends.

One additional note: the ':' is a placeholder (it does nothing), because a 'while' loop requires a command in its body – and ':' is a legitimate shell command.

So – now you can extract even complex CPIO structures, such as those found in the older Foreman "Discovery Image" (a very old implementation), in Tiny Core Linux (see this forum post, and this wiki note as a reference for where this stacking is invoked) and more. That said, for extracting a CentOS/RHEL 7 initramfs, which is structured as an uncompressed CPIO followed by a cpio.gz file, a different command is required; a post about it (which works for Ubuntu and RHEL as well) can be found here.

EDIT: It seems the kernel-integrated CPIO extraction method will not "overwrite" a file with a later layer of cpio.gz contents, so I will have to investigate a different approach. FYI.

OnePlus 7 Pro black screen

I have a OnePlus 7 Pro. Following the recent update to Android 9 (I have just updated to Android 10, so I don't know whether this problem is still relevant), once every 2-3 weeks or so the phone would not wake up from sleep and remained with a black screen. Long-pressing the power button had no effect, and the phone remained "dead" as far as I could see. The only indication that it was not entirely dead was that it generated a very low level of heat.

After leaving it for two days just laying around, I attempted to start it and got a screen message saying the phone needs to be charged enough to start.

It appears, based on this thread, that the phone might have gotten into a deep sleep from which it could not wake up. A quick workaround is to long-press Volume Up + Power for about 10-15 seconds. The phone vibrates (which is the best response ever at that stage), and then you can start it normally and it works correctly.

I hope that the Android 10 update solved this issue.

Oculus Quest casting to Linux

I have acquired a new Oculus Quest, which is a wonderful device. I aim at making it a useful desktop tool, and hopefully – some day – replacing my entire (Linux) working desktop with this 360-degree device. For now, it is a nice gaming platform, and while it is entirely immersive and drowns one completely in the experience, others cannot take part even as viewers. It means that when you introduce the Quest to someone new, you cannot guide them through using the device, because you cannot see what is happening inside.

The application allows for casting, but only to a limited set of destination devices, or only for some applications. Limitations. We don’t like it.

So, based on a great instruction video, I have modified the procedure to work on Linux.

You will need to enable developer mode on your Oculus (the video has a quick reference to that), and you will have to install ‘adb’ on your Linux. For Ubuntu – ‘sudo apt install adb’ should do the trick.
Also – you will need the ‘scrcpy’ tool, which can be installed using ‘snap’ command like this: ‘sudo snap install scrcpy’.

Make sure you can see the device by running the command 'adb devices' while the Oculus is connected via USB (you might need to run it under 'sudo' first – check the guides for enabling 'adb' for your user). Then modify the connection script to match your Oculus IP address (instead of 1.2.3.4), and run it to connect.

Connect_wireless_adb_quest_Linux.sh

#!/bin/bash
ipaddr=1.2.3.4
adb tcpip 5555
adb connect $ipaddr

Cast_to_your_Linux_quest.sh

#!/bin/bash
scrcpy -c 1440:1600:0:0 -m 1600 -b 8M

Save these scripts, make them executable, and good luck!
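For completeness, that last step looks like this (script names as above; running the cast script assumes the adb connection from the first script is already up):

```shell
chmod +x Connect_wireless_adb_quest_Linux.sh Cast_to_your_Linux_quest.sh
./Connect_wireless_adb_quest_Linux.sh   # switch adb to TCP mode and connect
./Cast_to_your_Linux_quest.sh           # open the scrcpy mirroring window
```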

cluvfy fails with user equivalence

I came across a Linux host (RHEL 6.2), which I had not been managing before, that required adding a node to an Oracle 11.2 Grid Infrastructure cluster. According to the common documentation, as can be seen here, after cloning the host (or installing it correctly, according to Oracle's requirements), you should run 'cluvfy' on an existing cluster node.

This failed miserably: the error was "PRVF-7610: cannot verify user equivalence/reachability on existing cluster nodes with equivalence configured". No additional logs were present, there was no indication of the problem, and equivalence worked correctly when tested manually with SSH commands in any and all directions.

I found a hint in this post, showing how to control the debugging level of the cluvfy command through the following three environment variables:
export CV_TRACELOC=/tmp/cvutrace
export SRVM_TRACE=true
export SRVM_TRACE_LEVEL=2

With these traces I learned how cluvfy works: it checks connectivity (ping, then SSH), then copies a set of files to /tmp/CVU_<version>_<GI user> and executes them.

In this particular case, /tmp was mounted with the noexec flag, which resulted in a complete failure and a lack of any logs. So – take heed.
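A quick way to check for this condition up front (a sketch; 'findmnt' ships with util-linux):

```shell
# Warn if /tmp is mounted with noexec - cluvfy's copied helpers would not run.
if findmnt -no OPTIONS /tmp 2>/dev/null | grep -qw noexec ; then
  echo "/tmp is mounted noexec - cluvfy will fail without logs"
else
  echo "/tmp allows execution"
fi
```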