
Easily Replacing an AFD Disk in ASM DiskGroup

When using OracleASM in normal or high redundancy, a failed disk is nothing to be concerned about. However, when using AFD (ASM Filter Driver), several additional actions are required to replace the disk quickly and easily.

It is important to understand that when using AFD, some unavoidable tasks must be performed as the root user. These actions are discussed below. Note that I refer to a multi-node cluster, and I assume that any step performed on the second node should be performed similarly on all additional nodes.

On all cluster nodes, we need to set up the required environment variables so that ‘asmcmd’ commands run correctly:

export ORACLE_BASE=/tmp
export ORACLE_HOME=/oracle/app/19.3/grid
export PATH=${PATH}:${ORACLE_HOME}/bin

On all cluster nodes, we need to refresh the status of AFD, assuming that the disk is missing – either failed and no longer present, or manually removed:

asmcmd afd_refresh

Listing the AFD disks after this command will show that the failed disk is no longer present. Run the following command to verify:

asmcmd afd_lsdsk

On the first node, we need to recognise the new disk and make sure its data partition is identical in size to that of the other disks. Do not introduce the newly added disk to the other cluster nodes just yet. We will introduce it later, with all disk-persistent properties already set. I do not cover in this article how to introduce a disk (rescan-scsi-bus.sh, from sg3_utils, is a simple method) or how to identify the newly added disk. I assume the reader is capable of doing so.

In this example I will demonstrate using a “multipath” (device-mapper-multipath) device; however, this task is valid for a single-path device as well. OracleASM does not take kindly to differing disk topology, so it is imperative that the data partition is set to exactly the same size. Taking one (online and working) disk as a reference might look like this:

# parted /dev/mapper/358ce38ee223d5ffd "unit s p"
Model: Linux device-mapper (multipath) (dm)
Disk /dev/mapper/358ce38ee223d5ffd: 6251233968s
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start      End          Size         File system  Name  Flags
 1      2048s      33202175s    33200128s
 2      33202176s  6055921663s  6022719488s

Using sectors as the measurement unit should guarantee identical topology. Note that for OracleASM to allow the disk to join an existing DiskGroup, the sector size (4Kb/512b) must match, or be emulated to match. I will not discuss 4Kb/512b emulation and its solutions in this article.

To create a new partition layout using exactly the same sector geometry, you can either clone the partition table (using sfdisk) or create it manually. In this case, this is rather straightforward:

parted /dev/mapper/358ce38ee215b2191
unit s
mklabel gpt
mkpart ' ' 2048s 33202175s
mkpart ' ' 33202176s 6055921663s

These are the same start and end sectors as defined on the reference disk.

Now we need to label and protect the partitions using the AFD labelling system. A reminder: we are still working only on the first cluster node!

asmcmd afd_label DATA16P1 /dev/mapper/358ce38ee215b2191p1
asmcmd afd_label DATA16P2 /dev/mapper/358ce38ee215b2191p2

Then, run a quick test on the first node to verify the disk is defined and visible:

asmcmd afd_lsdsk

We can now introduce the disk to the other cluster nodes, and then run the following commands on them:

asmcmd afd_refresh
asmcmd afd_lsdsk

The output should show the newly added AFD disks.

Expanding the ASM diskgroup is as simple as running, as the Oracle GI user (with the correct ORACLE_HOME and ORACLE_SID variables set), a command similar to this:
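A sketch of such a command, run from SQL*Plus; the diskgroup name DATA is an assumption, while the AFD label is the one created above:

```sql
-- Diskgroup name "DATA" is an assumption; DATA16P2 is one of the AFD
-- labels created earlier in this article.
ALTER DISKGROUP DATA ADD DISK 'AFD:DATA16P2' REBALANCE POWER 8;
```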


I like the number 8. You can use any rebalance power you wish, up to 1024 (the 1024 limit requires COMPATIBLE.ASM of 11.2.0.2 or higher; below that, the maximum is 11). A lower power usually means a slower but less intrusive rebalance.

I hope you find this article helpful.
