Archive for the ‘Clusters’ Category

Oracle ACFS autostart on Oracle RAC stand alone (Oracle Restart)

Thursday, January 3rd, 2019

I would like to start with a declaration: I would prefer not to use ACFS on a stand-alone system. It ties the "normal" order of startup and mounts to the cluster. Moreover, while up to version 12.1 a stand-alone RAC (Oracle Restart) had a built-in resource for ACFS, this is no longer the case in 12.2 and above, where this resource exists only for clusters of two or more nodes.

If you have upgraded from a 12.1 (or 11.2) stand-alone RAC to 12.2 or above, your ACFS filesystems will no longer mount automatically. This, along with some minor bugs I have found in ACFS, is part of the reason I recommend against using ACFS on a stand-alone system. HOWEVER, there are cases where you have no choice: either because you are using ACFS replication/snapshots, or because you are using Oracle ASM redundancy ("normal" or "high") over a JBoD, which forces you to use ADVM anyway, and ACFS is then only a small addition on top of it.

As I've written before, ACFS won't auto-start on 12.2 stand-alone GI. A possible solution I considered (but did not apply, so I cannot show it here) is to create a 3rd-party application resource, as described in the Oracle white paper "TWP-Oracle-Clusterware-3rd-party", which would mount the ACFS filesystem for you once the cluster is ready to do so. I would have done it that way in a recent project; however, a nice person called Pierre had already done it for me, slightly differently: he used a systemd service to run custom scripts which loop until the cluster is ready to perform the required actions. I have tested it, and it works well. My only comment, which you can also see on his blog post, is that if your ORACLE_HOME resides on a dedicated mount point (as is usually my case), you should make your systemd unit require that mount as one of its prerequisites. Other than that, his solution worked well, and I thank him for his time and efforts. Kudos Pierre!
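For illustration only, here is a minimal sketch of that idea: a script, launched from a systemd unit, which waits for CRS in a loop and then mounts the ACFS filesystem. This is not Pierre's actual code; the grid home path, the ADVM volume device and the mount point are placeholders you would have to adapt.

#!/bin/bash
# Hypothetical sketch: wait for the cluster stack, then mount ACFS.
# GRID_HOME, ACFS_DEV and MOUNT_POINT are placeholders.
GRID_HOME=/u01/app/12.2.0/grid
ACFS_DEV=/dev/asm/datavol-123
MOUNT_POINT=/acfs/data

# Poll for up to ~10 minutes until CRS reports that it is up
for i in $(seq 1 60); do
    "$GRID_HOME"/bin/crsctl check crs >/dev/null 2>&1 && break
    sleep 10
done

# Mount the ACFS filesystem if it is not already mounted
mountpoint -q "$MOUNT_POINT" || mount -t acfs "$ACFS_DEV" "$MOUNT_POINT"

If you wrap such a script in a systemd unit and your ORACLE_HOME sits on its own mount point, that unit is where a directive such as RequiresMountsFor= on that mount point belongs, in line with the comment above.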

Oracle 12.2 Grid Infrastructure installation tips

Wednesday, October 17th, 2018

There are many sites explaining how to install Oracle GI 12.2; however, there are a few special tricks which can simplify the GI installation.

For one, installing GI and then applying the huge PSU (usually around 1.4GB in size) on top of it takes time. A lot of time. A simple but not very well documented trick is to run the installer with a flag pointing at the already extracted PSU, so the patch is applied as part of the installation. You can obtain the PSU from Oracle document ID 2118136.2 (an Oracle support plan is required).

After extracting the contents of the Oracle grid home to its destination directory, and extracting the contents of the PSU to another known location, run the following command from the grid home directory:

./gridSetup.sh -applyPSU /path/to/PSU

(the flag name is case sensitive). You will need a working $DISPLAY, because after the patch-apply phase the graphical installation window pops up.

That said, make sure you remove the stix-fonts package (rpm -e --nodeps stix-fonts) if you are on a RHEL/OEL/CentOS version newer than 7.4. These fonts prevent the GUI installer from starting (Java crashes) and cause great frustration. You can restore the package later if you feel the urge.
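Putting these pieces together, the preparation might look roughly like the following (a hedged sketch; all paths, archive names and the PSU patch number are placeholders to replace with your own):

# Hypothetical stage paths and patch number - adjust to your environment
unzip -q /stage/linuxx64_12201_grid_home.zip -d /u01/app/12.2.0/grid
unzip -q /stage/p12345678_122010_Linux-x86-64.zip -d /stage/psu

# On RHEL/OEL/CentOS newer than 7.4 - remove stix-fonts so the GUI can start
rpm -e --nodeps stix-fonts

# Run the installer so it patches the grid home before configuring it
cd /u01/app/12.2.0/grid
./gridSetup.sh -applyPSU /stage/psu/12345678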

Another trick I have seen, but have yet to try, is running the GI installer unattended. It can be done like this:

./gridSetup.sh -silent -responseFile /path/to/response/file.rsp
< run root.sh as directed >
./gridSetup.sh -executeConfigTools -all -silent -responseFile /path/to/response/file.rsp

Hope this helps.

Migration of Oracle GI quorum disk to another diskgroup

Sunday, June 28th, 2015

When installing Oracle RAC (or, by its more modern name, GI) version 11.2.0.1 and above, you can use an Oracle ASM DiskGroup as your CRS+Voting file location.

Changing disk membership in an Oracle ASM DiskGroup is fairly simple. However, when you hit some unknown bug which prevents you from doing just that, or when you are required to replace the ASM DiskGroup on which the CRS+Voting files are placed, the article below is the one for you. Remember that, in addition, the ASM spfile has to be moved as well.

So, as a reminder (which is one of the purposes of this blog), here’s a link to a very extensive article about how to migrate the CRS, Vote and spfile from one ASM DiskGroup to another: Migrate OCR to another DiskGroup
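As a rough memory aid only (a hedged sketch, not a substitute for the article above), the moving parts on 11.2+ look roughly like this; +NEWDG and +OLDDG are placeholder diskgroup names:

# As root, from the GI home - relocate the OCR
ocrconfig -add +NEWDG
ocrconfig -delete +OLDDG

# Move the voting files in one shot
crsctl replace votedisk +NEWDG

# Recreate the ASM spfile on the new diskgroup (from the ASM instance)
# SQL> create pfile='/tmp/asm_init.ora' from spfile;
# SQL> create spfile='+NEWDG' from pfile='/tmp/asm_init.ora';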


Redhat Cluster and Citrix XenServer

Thursday, April 9th, 2015

I wanted to write down a guide for RHCS (Redhat Cluster Suite) on RHEL/CentOS 6 and XenServer.

If you want to do that, you will need to get past two major challenges. I want to save you the searching and sum it all up here.

The first difficulty is the shared disk. Most common cluster scenarios require shared storage. You could map the VMs to iSCSI LUNs external to the environment; however, if you do not have such infrastructure (either because everything is based on SAS/FC, or because you cannot set up iSCSI storage with a reasonable level of availability), you will want XenServer to let you share a VDI between two VMs.

In order to do so, you will need to add a flag on every XenServer in the pool, and to create the VDI in a specific way. First, the flag: create a file in /etc/xensource called "allow_multiple_vdi_attach". Do not forget to create it on all your XenServers:

touch /etc/xensource/allow_multiple_vdi_attach

Next, you will need to create your VDI as "raw" type. Here is an example; change the SR UUID to the one you use:

xe vdi-create sm-config:type=raw sr-uuid=687a023b-0b20-5e5f-d1ef-3db777ce7ae4 name-label="My Raw LVM VDI" virtual-size=8GiB type=user

You can find the Citrix article about it here.
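To actually share that VDI, you then attach it to both VMs. A hedged example follows; the UUIDs and the device number are placeholders you would look up with 'xe vm-list' and 'xe vdi-list':

# Attach the raw VDI to the first cluster VM and plug it in
xe vbd-create vm-uuid=<uuid-of-clusternode1> vdi-uuid=<uuid-of-raw-vdi> device=1 mode=RW type=Disk
xe vbd-plug uuid=<uuid-of-new-vbd>
# Repeat for the second VM - the allow_multiple_vdi_attach flag permits this
xe vbd-create vm-uuid=<uuid-of-clusternode2> vdi-uuid=<uuid-of-raw-vdi> device=1 mode=RW type=Disk
xe vbd-plug uuid=<uuid-of-new-vbd>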

Following that, you can complete your cluster setup and configuration. I will not go into the details here, as they are not the focus of this article. However, when it comes to fencing, you will need a solution, and the one I used was a fencing agent written specifically for XenServer on top of XenAPI, called fence-xenserver. I did not use the fencing-agents repository (which that page also points to), because I was unable to compile the required components on CentOS 6; they just don't compile well. fence-xenserver, on the other hand, is a simple Python script which actually works.

In order to make it work, I did the following (a rough shell equivalent is sketched right after the list):

  • Extracted the archive (version 0.8)
  • Placed fence_cxs* in /usr/sbin, and removed their ‘.py’ suffix
  • Placed XenAPI.py as-is in /usr/sbin
  • Verified /usr/sbin/fence_cxs* had execution permissions.
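The shell equivalent of the steps above, as a sketch (the archive name is an assumption; use the file you actually downloaded):

# Hypothetical archive name - version 0.8 of fence-xenserver
tar xzf fence-xenserver-0.8.tar.gz
cd fence-xenserver-0.8
# Drop the .py suffix while copying the agents into place
for f in fence_cxs*.py; do cp "$f" /usr/sbin/"${f%.py}"; done
cp XenAPI.py /usr/sbin/
chmod +x /usr/sbin/fence_cxs*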

Now, I needed to add it to the cluster configuration. Since the agent cannot handle being pointed at a host which is not the pool master, it had to be defined once for each pool member (I cannot tell in advance which of them will hold the pool master role when a failover happens). So, these are the relevant parts of my cluster.conf:

<fencedevices>
  <fencedevice agent="fence_cxs_redhat" login="root" name="xenserver01" passwd="password" session_url="https://xenserver01"/>
  <fencedevice agent="fence_cxs_redhat" login="root" name="xenserver02" passwd="password" session_url="https://xenserver02"/>
  <fencedevice agent="fence_cxs_redhat" login="root" name="xenserver03" passwd="password" session_url="https://xenserver03"/>
  <fencedevice agent="fence_cxs_redhat" login="root" name="xenserver04" passwd="password" session_url="https://xenserver04"/>
</fencedevices>
<clusternodes>
  <clusternode name="clusternode1" nodeid="1">
    <fence>
      <method name="xenserver01">
        <device name="xenserver01" vm_name="clusternode1"/>
      </method>
      <method name="xenserver02">
        <device name="xenserver02" vm_name="clusternode1"/>
      </method>
      <method name="xenserver03">
        <device name="xenserver03" vm_name="clusternode1"/>
      </method>
      <method name="xenserver04">
        <device name="xenserver04" vm_name="clusternode1"/>
      </method>
    </fence>
  </clusternode>
  <clusternode name="clusternode2" nodeid="2">
    <fence>
      <method name="xenserver01">
        <device name="xenserver01" vm_name="clusternode2"/>
      </method>
      <method name="xenserver02">
        <device name="xenserver02" vm_name="clusternode2"/>
      </method>
      <method name="xenserver03">
        <device name="xenserver03" vm_name="clusternode2"/>
      </method>
      <method name="xenserver04">
        <device name="xenserver04" vm_name="clusternode2"/>
      </method>
    </fence>
  </clusternode>
</clusternodes>

Attached xenserver-fencing-cluster.xml for clarity (WordPress makes a mess out of that)

Note that I used four (4) fence devices, since my pool has four hosts. Also note the VM name (it is case sensitive) and the methods: one per host, since you want them tried one at a time, not in parallel. Failover time was between 5 and 15 seconds in my tests, depending on which host is actually the pool master (xenserver04 takes the longest, obviously). I did not test it with the pool master down (before or without HA kicking in), nor with hosts down entirely, where the TCP timeout is longer than when connecting to a host which immediately responds that it is not the pool master. However, given that iLO fencing takes about 30-60 seconds, I am not complaining about the current timeouts.

Timeout when using Ricci as the backend for Corosync update in Redhat Cluster

Saturday, March 14th, 2015

When using Ricci as the engine behind the 'cman_tool version -r' command, you will experience timeouts (in practice, you will be unable to use ricci to propagate the cluster configuration across the nodes) if the ricci user password contains XML-sensitive characters, such as <, > or &.
As they say – FYI 🙂
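A hedged workaround sketch (the password below is just a placeholder): set the ricci password, on every node, to something free of XML-special characters, and push the configuration again:

# On every cluster node - pick a password without <, >, & and the like
echo 'Plain-Password-123' | passwd --stdin ricci
# Then, from the node holding the updated cluster.conf
cman_tool version -r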