Posts Tagged ‘ASM’

Migration of Oracle GI quorum disk to another diskgroup

Sunday, June 28th, 2015

When installing Oracle RAC (or, by its more modern name, GI – Grid Infrastructure) version 11.2.0.1 and above, you can use an Oracle ASM DiskGroup as the location of your CRS+Voting files.

Changing the disk membership of an Oracle ASM DiskGroup is fairly simple. However, when you hit an unknown bug which prevents you from doing just that, or when you are required to replace the ASM DiskGroup on which the CRS+Voting files are placed, the article below is the one for you. Remember, in addition, that the ASM spfile resides on that DiskGroup and has to be moved as well.
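For the simple case, changing disk membership is a single SQL statement against the ASM instance. A minimal sketch, assuming a DiskGroup named DATA and ASMLib disk labels NEWDISK1/OLDDISK1 (all hypothetical names):

sqlplus / as sysasm <<'EOF'
-- add the new disk and drop the old one in a single rebalance operation
ALTER DISKGROUP DATA ADD DISK 'ORCL:NEWDISK1' DROP DISK OLDDISK1 REBALANCE POWER 4;
EOF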

So, as a reminder (which is one of the purposes of this blog), here’s a link to a very extensive article about how to migrate the CRS, Vote and spfile from one ASM DiskGroup to another: Migrate OCR to another DiskGroup
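In short, there are three moving parts: the OCR, the Voting files and the ASM spfile. A minimal command sketch, assuming the old and new DiskGroups are named +OLDDG and +NEWDG (hypothetical names; the spfile path below is only an example – locate the real one with 'asmcmd spget'):

# as root, with the GI environment set:
ocrconfig -add +NEWDG            # add an OCR copy on the new DiskGroup
ocrconfig -delete +OLDDG         # drop the copy on the old DiskGroup
crsctl replace votedisk +NEWDG   # move the Voting files
# as the GI owner:
asmcmd spget                     # show the current ASM spfile location
asmcmd spcopy -u +OLDDG/ASM/ASMPARAMETERFILE/registry.253.123456789 +NEWDG/spfileASM.ora
# the '-u' flag also updates the GPnP profile to point at the new spfile location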


Attach multiple Oracle ASM snapshots to the same host

Thursday, September 12th, 2013

The goal: connecting multiple Oracle ASM snapshots (of the same source LUNs, of course) to the same machine. The process below demonstrates how to do it, and a consolidated command sketch follows the step list.

Problem: ASM disks carry an ASMLib disk label, which maintains access even when the logical disk path changes (for example, after adding a LUN with a lower ID and rebooting the server). This solves a major problem experienced with RAW devices, where a change of device order let the ‘wrong’ disks take the place of others. ASM labels are a vital part of managing ASM disks and ASM DiskGroups. Also – ASM DiskGroup names must be unique; you cannot have multiple DiskGroups with the same name.

Limitation: you cannot connect the snapshot LUNs to a server which already has access to the source LUNs.

Process:

  1. Take a snapshot of the source LUN. If the ASM DiskGroup spans several LUNs, you must snap them together as a consistency group (each storage vendor has its own lingo for the task).
  2. Map the snapshots to the target server (EMC – prepare EMC Snapshot Mount Points (SMP) in advance; other storage devices – as applicable).
  3. Run partprobe on all target servers so they detect the new devices.
  4. Run ‘service oracleasm scandisks’ to scan for the new ASM disk labels. We will need to change them now, so that an additional snapshot will not introduce duplicate ASM labels.
  5. For each of the new ASM disks, run ‘service oracleasm force-renamedisk SRC_NAME TGT_NAME’. You will want to rename the source name (SRC_NAME) to a unique target name, with some correlation to the snapshot name/purpose. This is the reasonable way of making some sense of a possibly very messy setup.
  6. As the Oracle user, with the correct PATH variables ($ORACLE_HOME should point to the CRS_HOME) and the right ORACLE_SID (for example – +ASM1), run: ‘renamedg phase=both dgname=SRC_DG_NAME newdgname=NEW_DG_NAME verbose=true’. The value SRC_DG_NAME represents the original (source) DiskGroup name, and NEW_DG_NAME represents the new name. Much like when renaming the disks, the new name should have some relationship with the snapshot name, so you can keep your bearings in this mess (again – imagine having six snapshots, each of a DiskGroup with four LUNs; now that is a mess).
  7. You can now mount the DiskGroup (named NEW_DG_NAME in my example) on both nodes.
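As promised above, here is the whole flow as a minimal command sketch, assuming a two-disk DiskGroup named DATA with ASMLib labels DATA1 and DATA2, attached as a snapshot called SNAP1 (all names hypothetical; the final mount uses asmcmd as one possible way to mount a DiskGroup):

partprobe                                                # step 3: re-read partition tables
service oracleasm scandisks                              # step 4: discover the snapshot labels
service oracleasm force-renamedisk DATA1 SNAP1_DATA1     # step 5: one rename per disk
service oracleasm force-renamedisk DATA2 SNAP1_DATA2
# step 6: as the Oracle user, with $ORACLE_HOME pointing to the CRS_HOME and ORACLE_SID=+ASM1
renamedg phase=both dgname=DATA newdgname=SNAP1_DATA verbose=true
# step 7: mount the renamed DiskGroup (repeat on the second node)
asmcmd mount SNAP1_DATA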

Assumptions:

  1. Oracle GI is up and running throughout this process.
  2. I tested it with Oracle 11.2.0.3. Other 11.2.0.x versions might work; I have no clue about 11.1.x or any earlier versions.
  3. It was tested on Linux, my primary work platform – RHEL 6.4, to be exact – but it should work just the same on any RHEL-like platform, and I believe on other Linux platforms as well. I have no clue about any other Unix/Windows platform.
  4. The DiskGroup must not be mounted (there is no reason for it to be mounted right upon discovery anyhow). Do not manually mount it prior to performing this procedure.

Good luck, and post a comment if you find this explanation unclear or if you encounter any problems.


Oracle ASM and EMC PowerPath

Wednesday, May 28th, 2008

Setting up Oracle ASM disks is rather simple, and the procedure can be easily obtained from here, for example. This is nice and pretty, and works well for most environments.

EMC PowerPath creates meta devices which utilize the underlying paths, as mod_scsi sees them in Linux, without hiding them (unlike IBM’s RDAC, for example). This results in the ability to view and access each LUN either through the PowerPath meta device (/dev/emcpower*) or through the underlying SCSI disk devices (/dev/sd*). You can obtain the existing paths of a single meta device by running the command

powermt display dev=emcpowera

where ‘emcpowera’ is an example – it can be any of your PowerPath meta devices. You will see the underlying SCSI devices listed.

During startup, Oracle ASM (startup script: /etc/init.d/oracleasm) scans all block devices for ASM headers. On a system with many LUNs, this can take a while (half an hour, and sometimes much more). Worse, since ASM scans the available block devices in a semi-random order, the chances are very high that a /dev/sd* device will be used instead of the /dev/emcpower* meta device. This results in degraded performance where an active-active configuration has been set in PowerPath (because the multipath device will not be used), and moreover – a failure of that specific link will make the LUN inaccessible through that path, regardless of any other existing paths to the LUN.

To set things right, you need to edit /etc/sysconfig/oracleasm and exclude all ‘sd’ devices from the ASM scan.
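The two relevant parameters in that file are ORACLEASM_SCANORDER and ORACLEASM_SCANEXCLUDE, both matching device-name prefixes (without the /dev/ part). A minimal sketch of the relevant lines:

# /etc/sysconfig/oracleasm
ORACLEASM_SCANORDER="emcpower"     # scan the PowerPath meta devices first
ORACLEASM_SCANEXCLUDE="sd"         # never scan the underlying /dev/sd* paths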

To verify that you’re actually using the right block device:

/etc/init.d/oracleasm listdisks

Select any one of the DG disks, and then

/etc/init.d/oracleasm querydisk DATA1
Disk "DATA1" is a valid ASM disk on device [120, 6]

The numbers are the major and minor numbers of the block device. You can easily find the device with this command:

ls -la /dev/ | grep MAJOR | grep MINOR

In our example, MAJOR is 120 and MINOR is 6 (MAJOR and MINOR above are placeholders, not literal grep arguments). The result should be a single block device.
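For example, for major 120 and minor 6, the command and a hypothetical matching line would look roughly like this (owner, date and device name will differ on your system):

ls -la /dev/ | grep "120," | grep " 6 "
brw------- 1 oracle dba 120, 6 May 28 2008 emcpowera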

If you’re using EMC PowerPath, your block device major would be 120 or thereabouts. If you’re (mistakenly) using one of the underlying paths, your major would be 8 or a nearby number. If you’re using Linux LVM, your major would be around 253. The expected result when using EMC PowerPath is always a major of 120 – in other words, always the /dev/emcpower* devices.

Excluding the ‘sd’ devices from the scan also decreases the boot time rather dramatically.