Posts Tagged ‘Oracle RAC’

Reduce Oracle ASM disk size

Tuesday, January 21st, 2020

I had a system with an Oracle ASM diskgroup from which I needed to reclaim some space. The idea was to release some disk space, reduce the size of the existing partitions, add a new partition and use it – all online(!)

This is a grocery list of tasks to do, with some explanation integrated into it.

Overview

  • Reduce ASM diskgroup size
  • Drop a disk from ASM diskgroup
  • Repartition disk
  • Delete and recreate ASM label on disk
  • Add disk to ASM diskgroup

Assume the diskgroup is constructed of five disks, called SSDDATA01 to SSDDATA05; all my examples will be based on that. Each disk is currently 400GB in size, and I will reduce the ASM disk size to somewhat below the target partition size – let's say, to 356000M.

Reduce ASM diskgroup size

alter diskgroup SSDDATA resize all size 356000M;

This command resizes all the disks in the diskgroup to below the target partition size, which will allow us to reduce the partition size later.

Drop a disk from the ASM diskgroup

alter diskgroup SSDDATA drop disk 'SSDDATA01';

We will need to know which physical disk this is. You can look at the device major/minor numbers in /dev/oracleasm/disks and compare them with /dev/sd* or with /dev/mapper/*.
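For example (the major/minor pair 8, 17 shown here is hypothetical – use whatever your own ls output shows):

# shows something like: brw-rw---- 1 grid dba 8, 17 ... SSDDATA01
ls -l /dev/oracleasm/disks/SSDDATA01
# find the physical device carrying the same major/minor pair
ls -l /dev/sd* /dev/mapper/* 2>/dev/null | grep '8, *17'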

It is imperative that the correct disk is marked (or else…) and that the disk is no longer in use by ASM before you continue.
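Before repartitioning, it is worth verifying that the rebalance triggered by the drop has completed and that the disk header was released. A minimal check, run against the ASM instance (as sysasm):

-- no rows returned means no rebalance is currently running
select operation, state, est_minutes from v$asm_operation;
-- the dropped disk should now show HEADER_STATUS = FORMER
select path, header_status from v$asm_disk where path like '%SSDDATA01%';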

Reduce partition size

As the root user, you should run something like this (assuming a single partition on the disk):

parted -a optimal /dev/sdX
(parted) rm 1
(parted) mkpart primary 1 375GB
(parted) mkpart primary 375GB -1
(parted) quit

This will remove the first (and only) partition – but not its data – and recreate it starting at the same offset but with a smaller size. The remaining space will be used for an additional partition.

We will need to refresh the disk layout on all cluster nodes. Run on all nodes:

partprobe /dev/sdX
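To verify that each node now sees both partitions, something like this will do:

parted /dev/sdX print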

Remove and recreate ASM disk label

As root, run the following command on a single node:

oracleasm deletedisk SSDDATA01
oracleasm createdisk SSDDATA01 /dev/sdX1

You might want to run the following command, as root, on all other nodes so they pick up the new label:

oracleasm scandisks
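To confirm the label is visible on every node, you can list the discovered ASM disks – SSDDATA01 should appear in the output:

oracleasm listdisks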

Add a disk to ASM diskgroup

Because the disk is now of a different size, adding it without an explicit size argument would not be possible. This is the correct command for the task:

alter diskgroup SSDDATA add disk 'ORCL:SSDDATA01' size 356000M;

To save time, however, we can add this disk and drop another at the same time:

alter diskgroup SSDDATA add disk 'ORCL:SSDDATA01' size 356000M drop disk 'SSDDATA02';

In general – if you have enough free space in your ASM diskgroup – it is recommended to add and drop multiple disks in a single command. This allows for a faster operation, with less repeated data migration, because the rebalance runs only once. Save time – save effort.

The last disk will be re-added, without dropping any other disk, of course.
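That is, for the fifth disk the command will look like this:

alter diskgroup SSDDATA add disk 'ORCL:SSDDATA05' size 356000M;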

I hope it helps 🙂

cluvfy fails with user equivalence

Wednesday, September 11th, 2019

I came across a Linux host (RHEL6.2) which I had not been managing before, and which required adding a node to an Oracle 11.2 Grid Infrastructure cluster. According to the common documentation, as can be seen here, after cloning the host (or installing it correctly, according to Oracle requirements), you should run ‘cluvfy’ on an existing cluster node.
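For reference, the pre-node-add verification is typically run from an existing node like this (newnode being a placeholder for the host you are adding):

cluvfy stage -pre nodeadd -n newnode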

Running it failed miserably – the error was “PRVF-7610: cannot verify user equivalence/reachability on existing cluster nodes with equivalence configured”. No additional logs were present, there was no indication of the actual problem, and equivalence worked correctly when tested with manual SSH commands in every direction.

I found a hint in this post, showing how to control the debugging level of the cluvfy command through the following three environment variables:
export CV_TRACELOC=/tmp/cvutrace
export SRVM_TRACE=true
export SRVM_TRACE_LEVEL=2

With these variables set, I learned how cluvfy works – it checks connectivity (ping and then SSH), then copies a set of files to /tmp/CVU_<version>_<GI user> on the remote nodes and executes them.

In this particular case, /tmp was mounted with the noexec flag, which resulted in a complete failure and a lack of any logs. So – take heed.
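A quick way to spot this condition is to check the mount options of /tmp – look for noexec in the output:

mount | grep ' /tmp '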

Oracle ACFS autostart on Oracle RAC stand alone (Oracle Restart)

Thursday, January 3rd, 2019

I would like to start with a declaration – I would prefer not to use ACFS for a stand-alone system. It binds the “normal” order of startup and mounts to the cluster. Not only that – while up to version 12.1 stand-alone RAC had a built-in service for ACFS, this is no longer the case for 12.2 and above, where this resource/service exists only for clusters of two or more nodes.

If you have upgraded from a 12.1 (or 11.2) stand-alone RAC to 12.2 or above, your ACFS disks will no longer be mounted automatically. This (and some minor bugs I’ve found in ACFS) is part of the reason I would recommend against using ACFS for a stand-alone system. HOWEVER – there are cases where you have no choice: either because you are using ACFS replication/snapshots, or because you are using the Oracle ASM redundancy model (“normal” or “high” redundancy) over a JBoD – which forces you to use ADVM, and with it, ACFS is only a small addition.

As I’ve written before – ACFS won’t auto-start on 12.2 stand-alone GI. A possible solution I thought of (but did not apply, and thus cannot show here) is to create a 3rd-party application service (as described in the document called “TWP-Oracle-Clusterware-3rd-party”) which mounts your ACFS for you when the cluster is ready to do so. I would have done it like that in a recent project; however, a nice person called Pierre has done it for me, slightly differently – he used systemd services to run custom scripts which retry in a loop until the cluster is ready to perform the required actions. I have tested it, and it works well. My only comment about it, which you will also find on his blog post, was that if your ORACLE_HOME resides on a dedicated mount point (which is usually my case), you should make your systemd unit require this mount as well as its prerequisites. Other than that – his solution worked well, and I thank him for his time and efforts. Kudos Pierre!
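For illustration only – a minimal sketch of such a wait-and-mount script, in the spirit of Pierre's approach. The volume device name, mount point and timeout are hypothetical; adjust them to your environment, and wrap the script in a systemd unit that also requires your ORACLE_HOME mount:

#!/bin/bash
# Hypothetical ADVM volume device and mount point - adjust to your setup
VOLDEV=/dev/asm/datavol-123
MNT=/u01/acfs_data

# wait up to ~10 minutes for Oracle Restart to bring the ADVM volume up
for i in $(seq 1 60); do
    [ -e "$VOLDEV" ] && break
    sleep 10
done

# mount the ACFS filesystem once the device appears, if not mounted yet
if [ -e "$VOLDEV" ] && ! mountpoint -q "$MNT"; then
    mount -t acfs "$VOLDEV" "$MNT"
fi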

Oracle 12.2 Grid Infrastructure installation tips

Wednesday, October 17th, 2018

There are many sites explaining how to install Oracle GI 12.2; however, there are some special tricks which can simplify the GI installation.

For one, installing GI and then applying the huge PatchSet (usually around 1.4GB in size) takes time. A lot of time. A simple but not very well documented trick is to run the installer with a specific flag pointing to the extracted PSU. You can obtain the PSU from Oracle document ID 2118136.2 (an Oracle support plan is required).

After extracting the contents of the Oracle grid home to the destination directory, and after extracting the contents of the PSU to another known location, run the following command from the Oracle grid home directory:

./gridSetup.sh -applyPSU /path/to/PSU

(the flag is case sensitive). You will need a working $DISPLAY, because an installation window will pop up after the patch-apply phase.

That said, make sure you remove stix-fonts (rpm -e --nodeps stix-fonts) if you are on a RHEL/OEL/CentOS version newer than 7.4. These fonts will prevent the GUI installer from starting (java would crash) and will cause great frustration. You can restore this package later if you feel the urge.

Another trick I have seen, but have yet to try, is running the GI installer unattended. This can be done like this:

./gridSetup.sh -silent -responseFile /path/to/response/file.rsp
< run root.sh as directed >
./gridSetup.sh -executeConfigTools -all -silent -responseFile /path/to/response/file.rsp

Hope this helps.

Migration of Oracle GI quorum disk to another diskgroup

Sunday, June 28th, 2015

When installing Oracle RAC (or, by its more modern name, GI) version 11.2.0.1 and above, you can use an Oracle ASM DiskGroup as your CRS+Voting file location.

It is fairly simple to change the disk membership of an Oracle ASM DiskGroup; however, when you face unknown bugs which prevent you from doing just that, or when you are required to replace the ASM DiskGroup on which the CRS+Voting files are placed, the article below is the one for you. Remember, in addition, to handle the ASM spfile as well.

So, as a reminder (which is one of the purposes of this blog), here’s a link to a very extensive article about how to migrate the CRS, Vote and spfile from one ASM DiskGroup to another: Migrate OCR to another DiskGroup
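For quick reference, the core of such a migration boils down to commands along these lines (a sketch only – +NEWDG and +OLDDG are placeholders, and the linked article covers the full procedure and its pitfalls). As root, on one node:

ocrconfig -add +NEWDG
ocrconfig -delete +OLDDG
crsctl replace votedisk +NEWDG

Then, as the grid user, relocate the ASM spfile (sqlplus / as sysasm):

create pfile='/tmp/asm_pfile.ora' from spfile;
create spfile='+NEWDG' from pfile='/tmp/asm_pfile.ora';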