Posts Tagged ‘Oracle RAC’

Oracle ACFS autostart on Oracle RAC stand alone (Oracle Restart)

Thursday, January 3rd, 2019

I would like to start with a declaration – I would prefer not to use ACFS on a stand-alone system. It binds the “normal” order of startup and mounts to the cluster. Not only that, but while up to version 12.1, stand-alone RAC (Oracle Restart) had a built-in service for ACFS, this is no longer the case for 12.2 and above. This resource/service exists only for clusters of two or more nodes.

If you have upgraded from a 12.1 (or 11.2) stand-alone RAC to 12.2 or above, you will no longer be able to mount your ACFS disks automatically. This (and some minor bugs I’ve found with ACFS) is part of the reason I would recommend against using ACFS for a stand-alone system. HOWEVER – there are cases where you have no choice: either because you are using ACFS replication/snapshots, or because you are using the Oracle ASM redundancy model (either “normal” or “high” redundancy) over a JBoD, which forces you to use ADVM, and with it, ACFS is only a small addition.

As I’ve written before – ACFS won’t auto-start on 12.2 stand-alone GI. A possible solution I thought of (but did not apply, and thus cannot show here) is the method of creating a 3rd party application resource (as described in the document called “TWP-Oracle-Clusterware-3rd-party”) to implement a custom service which will actually mount your ACFS for you when the cluster is ready to do so. I would have done it like that in a recent project; however, a nice person called Pierre has done it for me, slightly differently – he used a systemd service to run custom scripts which retry in a loop until the cluster is ready to perform the required actions. I have tested it, and it works well. My only comment about it, which you will be able to see in his blog post, was that if your ORACLE_HOME resides on a dedicated mount point (which is usually my case), you should force your systemd unit to require this mount as well as its prerequisites. Other than that – his solution worked well, and I thank him for his time and efforts. Kudos Pierre!
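
For reference, the general shape of such a setup is sketched below. Every name and path here (the unit name, Grid home, ADVM volume device and mount point) is a hypothetical placeholder, and Pierre’s scripts differ in their details, so treat this as a sketch only and refer to his post for the tested version.

# /etc/systemd/system/oracle-acfs-mount.service (hypothetical unit name)
[Unit]
Description=Mount ACFS filesystems once Oracle Restart is up
# If your Grid home resides on a dedicated mount point, require it here as well:
RequiresMountsFor=/u01
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
TimeoutStartSec=900
ExecStart=/usr/local/bin/mount-acfs.sh

[Install]
WantedBy=multi-user.target

#!/bin/bash
# /usr/local/bin/mount-acfs.sh - wait for the HA stack and the ADVM volume, then mount
GRID_HOME=/u01/app/12.2.0/grid     # hypothetical Grid home
VOLUME=/dev/asm/datavol-123        # hypothetical ADVM volume device
MOUNTPOINT=/acfs/data              # hypothetical mount point

for i in $(seq 1 60); do
    if "$GRID_HOME"/bin/crsctl check has | grep -q "is online" && [ -e "$VOLUME" ]; then
        # If the ACFS drivers are not loaded yet, an acfsload start may be needed first
        /bin/mount -t acfs "$VOLUME" "$MOUNTPOINT"
        exit $?
    fi
    sleep 10
done
echo "Timed out waiting for Oracle Restart or the ADVM volume" >&2
exit 1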

Oracle 12.2 Grid Infrastructure installation tips

Wednesday, October 17th, 2018

There are many sites explaining how to install Oracle GI 12.2; however, there are some special tricks which can simplify the GI installation.

For one, installing GI and then applying the huge PatchSet (which is usually around 1.4GB in size) takes time. A lot of time. A simple but not very well documented trick is to run the installer with a specific flag pointing to the extracted PSU. You can obtain the PSU from Oracle document ID 2118136.2 (an Oracle support plan is required).

After extracting the contents of the Oracle grid home to the destination directory, and after extracting the contents of the PSU to another known location, run the following command from the Oracle grid home directory:

./gridSetup.sh -applyPSU /path/to/PSU

(The flag is case sensitive.) You will need a working $DISPLAY, because following the patch-apply phase, an installation window will pop up.

That said, make sure you remove stix-fonts (rpm -e --nodeps stix-fonts) if you are on a RHEL/OEL/CentOS version newer than 7.4. These fonts will prevent the GUI installer from starting (Java will crash) and will cause great frustration. You can restore this package later on if you feel the urge.
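
If you do want the package back afterwards, reinstalling it from the distribution repositories (for example: yum install stix-fonts) should be enough.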

An additional trick I’ve seen, but have yet to try, is running the GI installer unattended. This can be done like this:

./gridSetup.sh -silent -responseFile /path/to/response/file.rsp
< run root.sh as directed >
./gridSetup.sh -executeConfigTools -all -silent -responseFile /path/to/response/file.rsp
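
Putting the two pieces together, a minimal wrapper could look like the sketch below. I have not run this exact script; the paths and the response file are placeholders, and the response file itself has to be prepared in advance (the response file template shipped with the grid home is a reasonable starting point).

#!/bin/bash
# Hypothetical silent-install wrapper - adjust all paths to your environment
GRID_HOME=/u01/app/12.2.0/grid
RSP=/stage/grid_install.rsp

cd "$GRID_HOME" || exit 1

# Phase 1 - software installation and configuration, no GUI
# (check the installer exit code and logs according to your own standards)
./gridSetup.sh -silent -responseFile "$RSP"

echo "Run the root script(s) as root when instructed, then press Enter to continue."
read -r

# Phase 2 - run the configuration assistants
./gridSetup.sh -executeConfigTools -all -silent -responseFile "$RSP"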

Hope this helps.

Migration of Oracle GI quorum disk to another diskgroup

Sunday, June 28th, 2015

When installing Oracle RAC (or, in its more modern name, GI) version 11.2.0.1 and above, you can use an Oracle ASM DiskGroup as your CRS+Voting file location.

Changing disk membership in an Oracle ASM DiskGroup is fairly simple. However, when you face some unknown bug which prevents you from doing just that, or when you are required to replace the ASM DiskGroup on which the CRS+Voting files are placed, the article below is the one for you. In addition, you would have to remember to take care of the ASM spfile as well.

So, as a reminder (which is one of the purposes of this blog), here’s a link to a very extensive article about how to migrate the CRS, Vote and spfile from one ASM DiskGroup to another: Migrate OCR to another DiskGroup
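
For quick reference, the skeleton of such a migration on 11.2 and above usually looks like the sketch below. The DiskGroup names are placeholders, and the linked article remains the authoritative walkthrough.

# As root, from the GI home, relocate the OCR (hypothetical DiskGroup names):
ocrconfig -add +NEWDG
ocrconfig -delete +OLDDG
ocrcheck

# Relocate the voting files:
crsctl replace votedisk +NEWDG
crsctl query css votedisk

# Relocate the ASM spfile (as the grid owner, connected to ASM as SYSASM):
# SQL> create pfile='/tmp/asm_pfile.ora' from spfile;
# SQL> create spfile='+NEWDG' from pfile='/tmp/asm_pfile.ora';
# A restart of the stack is required for ASM to pick up the new spfile location.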


Attach multiple Oracle ASM snapshots to the same host

Thursday, September 12th, 2013

The goal – connecting multiple Oracle ASM snapshots (of the same source LUNs, of course) to the same machine. The process below demonstrates how to do it.

Problem: ASM disks carry an ASMLib disk label, used to maintain access even when the logical disk path changes (for example, after adding a LUN with a lower ID and rebooting the server). This solves a major problem experienced with RAW devices, where, if the order changed, the ‘wrong’ disks took the place of others. ASM labels are a vital part of managing ASM disks and ASM DiskGroups. Also – the ASM DiskGroup name must be unique. You cannot have multiple DiskGroups with the same name.

Limitations – you cannot connect the snapshot LUNs to the same server which has access to the source LUNs.

Process:

  1. Take a snapshot of the source LUN. If the ASM DiskGroup spans across several LUNs, you must create a consistency group (each storage device has its own lingo for the task).
  2. Map the snapshots to the target server (EMC – prepare EMC Snapshot Mount Points (SMP) in advance; other storage devices have their own equivalents)
  3. Perform partprobe on all target servers.
  4. Run ‘service oracleasm scandisks‘ to scan for the new ASM disk labels. We will need to change them now, so that the additional snapshot will not use duplicate ASM labels.
  5. For each of the new ASM disks, run ‘service oracleasm force-renamedisk SRC_NAME TGT_NAME‘. You will want to rename the source name (SRC_NAME) to a unique target name, with some correlation to the snapshot name/purpose. This is the reasonable way of making some sense of a possibly very messy setup.
  6. As the Oracle user, with the correct PATH variables ($ORACLE_HOME should point to the CRS_HOME) and the right ORACLE_SID (for example – +ASM1), run: ‘renamedg phase=both dgname=SRC_DG_NAME newdgname=NEW_DG_NAME verbose=true‘. The value SRC_DG_NAME represents the original (source) DiskGroup name, and NEW_DG_NAME represents the new name. Much like when renaming the disks, the new name should have some relation to the snapshot name or purpose, so you can find your hands and legs in this mess (again – imagine having six snapshots, each of a DiskGroup spanning four LUNs; now that is a mess).
  7. You can now mount the DiskGroup (named NEW_DG_NAME in my example) on both nodes. A consolidated command sketch follows right after this list.
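
To illustrate the steps above, a consolidated sketch for a single snapshot could look like this (DATA, DATA1, DATA2 and the _SNAP1 suffix are hypothetical names; adjust the disk list to your own DiskGroup):

# On the target server, as root - discover the snapshot LUNs and relabel the disks
partprobe
service oracleasm scandisks
service oracleasm force-renamedisk DATA1 DATA1_SNAP1
service oracleasm force-renamedisk DATA2 DATA2_SNAP1

# As the GI owner, with ORACLE_HOME pointing to the GI home and ORACLE_SID=+ASM1
renamedg phase=both dgname=DATA newdgname=DATA_SNAP1 verbose=true

# Mount the renamed DiskGroup
asmcmd mount DATA_SNAP1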

Assumptions:

  1. Oracle GI is up and running all through this process
  2. I tested it with Oracle 11.2.0.3. Other 11.2.0.x versions might work as well; I have no clue about 11.1.x or any earlier versions.
  3. It was tested on Linux, my primary work platform – to be exact, on RHEL 6.4, but it should work just the same on any RHEL-like platform. I believe it will work on other Linux platforms as well. I have no clue about any other Unix/Windows platform.
  4. The DiskGroup should not be mounted (there is no reason for it to be mounted right on discovery). Do not mount it manually prior to performing this procedure.

Good luck, and post a comment if you find this explanation unclear, or if you encounter any problem.


Oracle ASM and EMC PowerPath

Wednesday, May 28th, 2008

Setting up Oracle ASM disks is rather simple, and the procedure can easily be obtained from here, for example. This is nice and pretty, and works well for most environments.

EMC PowerPath creates meta devices which utilize the underlying paths as mod_scsi sees them in Linux, without hiding them (unlike IBM’s RDAC, for example). This results in the ability to view and access each LUN either through the PowerPath meta device (/dev/emcpower*) or through the underlying SCSI disk devices (/dev/sd*). You can obtain the existing paths of a single meta device by running the command

powermt display dev=emcpowera

where 'emcpowera' is just an example – it can be any of your PowerPath meta devices. The output will list the underlying SCSI devices.

During startup, Oracle ASM (startup script: /etc/init.d/oracleasm) scans all block devices for ASM headers. On a system with many LUNs, this can take a while (half an hour, and sometimes much more). Not only that, but since ASM scans the available block devices in a semi-random order, the chances are very high that a /dev/sd* device will be used instead of the /dev/emcpower* block device. This results in degraded performance where an active-active configuration has been set for PowerPath (because the multipathing will not be used), and moreover, a failure of that specific link will result in losing access to that LUN, regardless of any other existing paths to it.

To "set things right", you need to edit /etc/sysconfig/oracleasm, and exclude all ‘sd’ devices from ASM scan.
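
On most setups this comes down to two parameters in that file. A minimal sketch (your file will contain additional settings; the values are prefix matches):

# /etc/sysconfig/oracleasm - prefer the PowerPath meta devices, skip the raw sd* paths
ORACLEASM_SCANORDER="emcpower"
ORACLEASM_SCANEXCLUDE="sd"

After changing the file, restart the oracleasm service (/etc/init.d/oracleasm restart) so the new scan rules take effect.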

To verify that you’re actually using the right block device:

/etc/init.d/oracleasm listdisks

Select any one of the DG disks, and then

/etc/init.d/oracleasm querydisk DATA1
Disk "DATA1" is a valid ASM disk on device [120, 6]

The numbers are the major and minor of the block device. You can easily find the device through this command:

ls -la /dev/ | grep MAJOR | grep MINOR

In our example, the MAJOR is 120 and the MINOR is 6. The result should be a single block device.
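
For the example above, a hedged one-liner (the exact spacing of the ls output may differ on your system) would be:

ls -la /dev/ | grep "120, *6 "

which should return a single /dev/emcpower* entry.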

If you’re using EMC PowerPath, your block device major would be 120 or a nearby number. If you’re (mistakenly) using one of the underlying paths, your major would be 8 or a nearby number. If you’re using Linux LVM, your major would be around 253. The expected result when using EMC PowerPath is always a major of 120 – always using the /dev/emcpower* devices.

This also decreases the boot time rather dramatically.