Posts Tagged ‘partitions’

Reduce Oracle ASM disk size

Tuesday, January 21st, 2020

I had a system with an Oracle ASM diskgroup from which I needed to reclaim some space. The idea was to release some disk space, reduce the size of the existing partitions, add a new partition and use it – all online(!)

This is a grocery list of tasks to do, with some explanation integrated into it.

Overview

  • Reduce ASM diskgroup size
  • Drop a disk from ASM diskgroup
  • Repartition disk
  • Delete and recreate ASM label on disk
  • Add disk to ASM diskgroup

Assume the diskgroup consists of five disks, called SSDDATA01 to SSDDATA05; all my examples will be based on that. The current size is 400GB and I will reduce it to somewhat below the target size of the new partition – let's say, to 356000M.

Reduce ASM diskgroup size

alter diskgroup SSDDATA resize all size 356000M;

This command reduces every disk in the diskgroup to below the target partition size, which is what will allow us to shrink the partitions themselves.
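A quick sanity check of the numbers (note that ASM's "M" means MiB, while parted's "375GB" below is decimal bytes):

```shell
# ASM sizes are in MiB (1024*1024 bytes); parted's "375GB" is 375*10^9 bytes.
# The future partition will therefore hold roughly this many MiB:
awk 'BEGIN { printf "%d MiB\n", 375 * 1000^3 / 1024^2 }'
# → 357627 MiB, so a disk size comfortably below that (e.g. 356000M) fits.
```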

Drop a disk from the ASM diskgroup

alter diskgroup SSDDATA drop disk 'SSDDATA01';

We will need to know which physical disk this is. You can look at the device's major/minor numbers in /dev/oracleasm/disks and compare them with /dev/sd* or with /dev/mapper/*.

It is imperative that the correct disk is identified (or else…) and that it is verified to be unused before repartitioning.
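One way to do that comparison is with stat, which can print a device node's major:minor pair (in hex); two paths showing the same pair are the same underlying device. A sketch – the oracleasm path is the assumed label location, and /dev/null stands in here as a runnable example:

```shell
# Print major:minor (hex) for a device node; matching pairs identify the
# same physical device. /dev/null is used as a runnable stand-in.
stat -c '%n -> %t:%T' /dev/null
# On a real system you would compare, for example:
#   stat -c '%t:%T' /dev/oracleasm/disks/SSDDATA01
#   stat -c '%t:%T' /dev/sdX1
```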

Reduce partition size

As the root user, you should run something like this (assuming a single partition on the disk):

parted -a optimal /dev/sdX
(parted) rm 1
(parted) mkpart primary 1 375GB
(parted) mkpart primary 375GB -1
(parted) quit

This will remove the first (and only) partition – but not its data – and recreate it with the same beginning but smaller in size. The remaining space will be used for an additional partition.

We will need to refresh the disk layout on all cluster nodes. Run on all nodes:

partprobe /dev/sdX

Remove and recreate ASM disk label

As root, run the following commands on a single node:

oracleasm deletedisk SSDDATA01
oracleasm createdisk SSDDATA01 /dev/sdX1

On all other nodes, you might want to run the following command as root:

oracleasm scandisks

Add a disk to ASM diskgroup

Because the disk is now of a different size, adding it without an explicit size argument would not be possible. This is the correct command to perform this task:

alter diskgroup SSDDATA add disk 'ORCL:SSDDATA01' size 356000M;

To save time, however, we can add this disk and drop another at the same time:

alter diskgroup SSDDATA add disk 'ORCL:SSDDATA01' size 356000M drop disk 'SSDDATA02';

In general – if you have enough free space in your ASM diskgroup – it is recommended to add/drop multiple disks in a single command. The rebalance runs once, with less repeated data migration. Save time – save effort.

The last disk will be re-added, without dropping any other disk, of course.
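Each add/drop triggers an ASM rebalance, and you can watch its progress before moving on to the next disk. A sketch – the query below is printed here as a dry run; on a real node, feed it to sqlplus connected as sysasm:

```shell
# Dry run: print the monitoring query. On a real system, pipe it into
# sqlplus connected as sysasm; an empty result means the rebalance is done.
cat <<'SQL'
select group_number, operation, state, est_minutes
  from v$asm_operation;
SQL
```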

I hope it helps 🙂

Acquiring and exporting an external disk with software RAID and LVM

Wednesday, August 22nd, 2007

One of my computers died a short while ago, and I wanted to get the data from its disk into another computer.

Using a magical and rather cheap USB-to-SATA adapter I was able to connect the disk; however, the disk was part of a software mirror (an md device) and had LVM on it. Does it get a little complicated? Not really:

(connect the device to the system)

Now we need to find out which device it is:

dmesg

It is quite easy. In my case it was /dev/sdk (don't ask). dmesg showed something like this:

usb 1-6: new high speed USB device using address 2
Initializing USB Mass Storage driver…
scsi5 : SCSI emulation for USB Mass Storage devices
Vendor: WDC WD80 Model: WD-WMAM92757594 Rev: 1C05
Type: Direct-Access ANSI SCSI revision: 02
SCSI device sdk: 156250000 512-byte hdwr sectors (80000 MB)
sdk: assuming drive cache: write through
SCSI device sdk: 156250000 512-byte hdwr sectors (80000 MB)
sdk: assuming drive cache: write through
sdk: sdk1 sdk2 sdk3
Attached scsi disk sdk at scsi5, channel 0, id 0, lun 0

This is good. The original system was RHEL 4, so the standard structure is /boot on the first partition, then swap, and then one large md device containing LVM (at least – my standard).

Let's list the partitions, just to be sure:

# fdisk -l /dev/sdk

Disk /dev/sdk: 80.0 GB, 80000000000 bytes
255 heads, 63 sectors/track, 9726 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot    Start      End      Blocks   Id  System
/dev/sdk1   *         1       13      104391   fd  Linux raid autodetect
/dev/sdk2            14      144     1052257+  82  Linux swap
/dev/sdk3           145     9726    76967415   fd  Linux raid autodetect

Good. As expected. Let’s activate the md device:

# mdadm --assemble /dev/md2 /dev/sdk3
mdadm: /dev/md2 has been started with 1 drive (out of 2).

It’s going well. Now we have the md device active, and we can try to scan for LVM:

# pvscan

PV /dev/md2 VG SVNVG lvm2 [73.38 GB / 55.53 GB free]
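As a quick sanity check, the PV size matches the fdisk output above – partition 3 is 76967415 blocks of 1024 bytes each, i.e. roughly:

```shell
# 76967415 one-KiB blocks, expressed in GiB; the small gap from the
# reported 73.38 GB is accounted for by md and LVM metadata.
awk 'BEGIN { printf "%.2f GiB\n", 76967415 / 1024^2 }'
# → 73.40 GiB
```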

Activating the VG is the desired action. Notice the name – SVNVG (see the note at the bottom):

# vgchange -a y /dev/SVNVG
3 logical volume(s) in volume group “SVNVG” now active

Now we can list the LVs and mount them on our desired location:

# lvs
  LV       VG    Attr   LSize  Origin Snap%  Move Log Copy%
  LogVol00 SVNVG -wi-a-  2.94G
  LogVol01 SVNVG -wi-a-  4.91G
  VarVol   SVNVG -wi-a- 10.00G

Mounting:

mount /dev/SVNVG/VarVol /mnt/

and it’s all ours.

To disconnect the disk, we need to reverse the above process.

First, we will unmount the volume:

umount /mnt

Now we need to disable the Volume Group:

# vgchange -a n /dev/SVNVG
0 logical volume(s) in volume group “SVNVG” now active

0 logical volumes active means we were able to disable the whole VG.

Disable the MD device:

# mdadm --manage -S /dev/md2

Now we can disconnect the physical disk (actually, the USB) and continue with our lives.

A note: RedHat systems name their volume group using the default name VolGroup00. You cannot have two VGs with the same name! If you activate a VG which originated from a RH system that used the default name, and your current system uses the same default, you need to connect the disk to another system (a non-RH one would do fine) and change the VG name using vgrename before you can proceed.
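That rename can be done by VG UUID, which is what lets you address one of two groups sharing a name. A sketch – the commands are echoed here as a dry run and the UUID is obviously hypothetical; drop the echo on a system with the LVM tools installed:

```shell
# Dry run: list VGs with their UUIDs, then rename the foreign one by UUID
# (vgrename accepts a VG UUID in place of the VG name).
echo vgs -o vg_name,vg_uuid
echo vgrename Zvlifi-hypo-this-is-a-fake-uuid VolGroup00_old
# After the rename, "vgchange -a y VolGroup00_old" activates it as usual.
```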