I have received many questions from a few of my clients about ASMLib support in RHEL6, as they gear up to upgrade their database servers to RHEL6. There is some controversy about ASMLib support in RHEL6; as usual, I will only discuss the technical details in this blog entry.
ASMLib applies only to the Linux platform and is not available on any other platform.
Now, you might ask why bother and why not just use OEL and UEK? Well, not every Linux server is used as a database server. In a typical company there are hundreds of Linux servers, and just a few percent of them are used as database servers. Linux system administrators prefer to keep one flavor of Linux distribution for ease of management, so asking clients to switch from RHEL to OEL, or from OEL to RHEL, is not always a viable option.
Do you need to use ASMLIB in Linux?
I have already written about the renamedg command, but have since fallen in love with ASMLib. Using ASMLib introduces a few caveats you should be aware of.
This document presents research I performed with ASM in a lab environment. It should be applicable to any environment, but you should NOT use this in production: the renamedg command is still buggy, and you should not mess with ASM disk headers on an important system such as production or staging/UAT. You decide how important a system is! The recommended setup for cloning disk groups is to use a Data Guard physical standby database on a different storage array to create a real-time copy of your production database on that array. Again, do not use your production array for this!
Oracle ASMLib introduces an additional value in the ASM disk header, called the provider string, as the following example shows:
[root@asmtest ~]# kfed read /dev/oracleasm/disks/VOL1 | grep prov
kfdhdb.driver.provstr: ORCLDISKVOL1 ; 0x000: length=12
This can be verified with ASMLib:
[root@asmtest ~]# /etc/init.d/oracleasm querydisk /dev/xvdc1
Device "/dev/xvdc1" is marked an ASM disk with the label "VOL1"
The prefix “ORCLDISK” is automatically added by ASMLib and cannot easily be changed.
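For reference, this label (and with it the provider string) is written when the disk is stamped for ASMLib. A minimal sketch of how a disk typically gets its label, reusing the device and label name from the output above:

# stamp the partition for ASMLib; the ORCLDISK prefix is added automatically
/etc/init.d/oracleasm createdisk VOL1 /dev/xvdc1
# the header now carries the provider string ORCLDISKVOL1
kfed read /dev/oracleasm/disks/VOL1 | grep provstr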
The problem with ASMLib is that the renamedg command does NOT update the provider string, which I’ll illustrate by walking through an example session. Disk group “DATA”, set up with external redundancy and two disks, DATA1 and DATA2, is to be cloned to “DATACLONE”.
The renamedg command requires that the disk group to be cloned is dismounted. To prevent nasty surprises, you should stop the databases using that disk group manually first.
[grid@rac11gr2drnode1 ~]$ srvctl stop database -d dev
[grid@rac11gr2drnode1 ~]$ ps -ef | grep smon
grid      3424      1  0 Aug07 ?        00:00:00 asm_smon_+ASM1
grid     17909  17619  0 15:13 pts/0    00:00:00 grep smon
[grid@rac11gr2drnode1 ~]$ srvctl stop diskgroup -g data
[grid@rac11gr2drnode1 ~]$
You can use the new “lsof” command of asmcmd to check for open files:
ASMCMD> lsof
DB_Name  Instance_Name  Path
+ASM     +ASM1          +ocrvote.255.4294967295
asmvol   +ASM1          +acfsdg/APACHEVOL.256.724157197
asmvol   +ASM1          +acfsdg/DRL.257.724157197
ASMCMD>
So apart from files in other disk groups nothing is open, and in particular nothing referring to disk group DATA.
Now comes the part where you copy the LUNs, and this depends entirely on your system. The EVA series of storage arrays I worked with in this particular project offered a “snapclone” function, which uses copy-on-write to create an identical copy of the source LUN with a new WWID (which can be passed as an input parameter to the snapclone call). If you are using device-mapper-multipath, make sure your system administrators add the newly created LUNs to /etc/multipath.conf on all cluster nodes!
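Purely for illustration, such an entry might look roughly like the stanza below. The WWID and alias are placeholders I made up, and the exact syntax depends on your device-mapper-multipath version:

# /etc/multipath.conf (excerpt) - placeholder values
multipaths {
    multipath {
        wwid   360001fe15005d6c80009100000a70000   # WWID reported by the array for the clone
        alias  dataclone1                           # friendly name under /dev/mapper
    }
}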
I am using Xen in my lab, which makes things simpler: all I need to do is copy the disk containers on the dom0 and then add the new block devices to the running domUs (“virtual machines” in Xen language). This can be done easily, as the following example shows:
xm block-attach rac11gr2drnode1 file:/var/lib/xen/images/rac11gr2drShared/oradata1.clone xvdg w!
xm block-attach rac11gr2drnode2 file:/var/lib/xen/images/rac11gr2drShared/oradata1.clone xvdg w!
xm block-attach rac11gr2drnode1 file:/var/lib/xen/images/rac11gr2drShared/oradata2.clone xvdh w!
xm block-attach rac11gr2drnode2 file:/var/lib/xen/images/rac11gr2drShared/oradata2.clone xvdh w!
In the example, rac11gr2drnode{1,2} are the domUs, the backend device is the copied file on the file system, the frontend device in the domU is xvd{g,h}, and the mode is read/write, shareable. The exclamation mark is crucial; without it the second domU cannot attach the new block device, as it is already exclusively attached to the other domU.
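To double-check that both virtual machines really received the new devices, you can list the attached block devices on the dom0; a quick sketch (output not shown here):

# run on the dom0 for each domU
xm block-list rac11gr2drnode1
xm block-list rac11gr2drnode2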
The fdisk command in my example immediately “sees” the new LUNs; with device-mapper-multipath you might have to go through iterations of restarting multipathd and discovering partitions using kpartx (see the sketch after the fdisk listing below). It is, again, very important to have all disks presented to all cluster nodes!
Here’s the sample output from my system:
[root@rac11gr2drnode1 ~]# fdisk -l | grep Disk | sort
Disk /dev/xvda: 4294 MB, 4294967296 bytes
Disk /dev/xvdb: 16.1 GB, 16106127360 bytes
Disk /dev/xvdc: 5368 MB, 5368709120 bytes
Disk /dev/xvdd: 16.1 GB, 16106127360 bytes
Disk /dev/xvde: 16.1 GB, 16106127360 bytes
Disk /dev/xvdf: 10.7 GB, 10737418240 bytes
Disk /dev/xvdg: 16.1 GB, 16106127360 bytes
Disk /dev/xvdh: 16.1 GB, 16106127360 bytes
I cloned /dev/xvdd and /dev/xvde to /dev/xvdg and /dev/xvdh.
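On a device-mapper-multipath system the rescan mentioned above might look roughly like the following sketch; the multipath alias is the made-up one from the earlier multipath.conf example:

# rebuild the multipath maps after the new LUNs have been presented
/etc/init.d/multipathd restart
multipath -ll
# create device nodes for the partitions on the new multipath device
kpartx -a /dev/mapper/dataclone1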
Do NOT run /etc/init.d/oracleasm scandisks yet! Otherwise the renamedg command will complain about duplicate disk names, which is entirely reasonable.
I dumped all headers for disks /dev/xvd{d,e,g,h}1 to /tmp to be able to compare.
[root@rac11gr2drnode1 ~]# kfed read /dev/xvdd1 > /tmp/xvdd1.header
# repeat with the other disks
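If you prefer, the four dumps can be taken in one go with a small loop; a sketch assuming the same device names as above:

# dump the ASM header of each candidate disk to /tmp for later comparison
for disk in xvdd1 xvde1 xvdg1 xvdh1; do
    kfed read /dev/${disk} > /tmp/${disk}.header
done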
Start with phase one of the renamedg command:
[root@rac11gr2drnode1 ~]# renamedg phase=one dgname=DATA newdgname=DATACLONE \
> confirm=true verbose=true config=/tmp/cfg

Parsing parameters..

Parameters in effect:

        Old DG name     : DATA
        New DG name     : DATACLONE
        Phases          :
                Phase 1
        Discovery str   : (null)
        Confirm         : TRUE
        Clean           : TRUE
        Raw only        : TRUE

renamedg operation: phase=one dgname=DATA newdgname=DATACLONE confirm=true verbose=true config=/tmp/cfg
Executing phase 1
Discovering the group
Performing discovery with string:
Identified disk ASM:/opt/oracle/extapi/64/asm/orcl/1/libasm.so:ORCL:DATA1 with disk number:0 and timestamp (32940276 1937075200)
Identified disk ASM:/opt/oracle/extapi/64/asm/orcl/1/libasm.so:ORCL:DATA2 with disk number:1 and timestamp (32940276 1937075200)
Checking for hearbeat...
Re-discovering the group
Performing discovery with string:
Identified disk ASM:/opt/oracle/extapi/64/asm/orcl/1/libasm.so:ORCL:DATA1 with disk number:0 and timestamp (32940276 1937075200)
Identified disk ASM:/opt/oracle/extapi/64/asm/orcl/1/libasm.so:ORCL:DATA2 with disk number:1 and timestamp (32940276 1937075200)
Checking if the diskgroup is mounted
Checking disk number:0
Checking disk number:1
Checking if diskgroup is used by CSS
Generating configuration file..
Completed phase 1
Terminating kgfd context 0x2b7a2fbac0a0
[root@rac11gr2drnode1 ~]#
You should always check “$?” for errors; the message “terminating kgfd context” sounds bad, but isn’t. At the end of phase one there is no change to the header yet; that only happens in phase two:
[root@rac11gr2drnode1 ~]# renamedg phase=two dgname=DATA newdgname=DATACLONE config=/tmp/cfg

Parsing parameters..
renamedg operation: phase=two dgname=DATA newdgname=DATACLONE config=/tmp/cfg
Executing phase 2
Completed phase 2
Now there are changes:
[root@rac11gr2drnode1 tmp]# grep DATA *header
xvdd1.header:kfdhdb.driver.provstr: ORCLDISKDATA1 ; 0x000: length=13
xvdd1.header:kfdhdb.dskname: DATA1 ; 0x028: length=5
xvdd1.header:kfdhdb.grpname: DATACLONE ; 0x048: length=9
xvdd1.header:kfdhdb.fgname: DATA1 ; 0x068: length=5
xvde1.header:kfdhdb.driver.provstr: ORCLDISKDATA2 ; 0x000: length=13
xvde1.header:kfdhdb.dskname: DATA2 ; 0x028: length=5
xvde1.header:kfdhdb.grpname: DATACLONE ; 0x048: length=9
xvde1.header:kfdhdb.fgname: DATA2 ; 0x068: length=5
xvdg1.header:kfdhdb.driver.provstr: ORCLDISKDATA1 ; 0x000: length=13
xvdg1.header:kfdhdb.dskname: DATA1 ; 0x028: length=5
xvdg1.header:kfdhdb.grpname: DATA ; 0x048: length=4
xvdg1.header:kfdhdb.fgname: DATA1 ; 0x068: length=5
xvdh1.header:kfdhdb.driver.provstr: ORCLDISKDATA2 ; 0x000: length=13
xvdh1.header:kfdhdb.dskname: DATA2 ; 0x028: length=5
xvdh1.header:kfdhdb.grpname: DATA ; 0x048: length=4
xvdh1.header:kfdhdb.fgname: DATA2 ; 0x068: length=5
Although the original disks (/dev/xvdd1 and /dev/xvde1) had their disk group name changed, the provider string remained untouched. So if we were to issue a scandisks command now through /etc/init.d/oracleasm, there’d still be duplicate disk names. This is a bug in my opinion, and a bad thing.
Renaming the disks is straightforward; the difficult bit is finding out which ones have to be renamed. Again, you can use kfed to figure that out, as sketched below. After consulting the header information I knew the disks to be renamed were /dev/xvdd1 and /dev/xvde1.
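One way to spot them, using the device names from my lab: the devices whose header already carries the new disk group name are the ones whose ASMLib label has to change.

# print the disk group name recorded in each header
for disk in /dev/xvdd1 /dev/xvde1 /dev/xvdg1 /dev/xvdh1; do
    echo -n "${disk}: "
    kfed read ${disk} | grep grpname
done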
[root@rac11gr2drnode1 tmp]# /etc/init.d/oracleasm force-renamedisk /dev/xvdd1 DATACLONE1
Renaming disk "/dev/xvdd1" to "DATACLONE1": [ OK ]
[root@rac11gr2drnode1 tmp]# /etc/init.d/oracleasm force-renamedisk /dev/xvde1 DATACLONE2
Renaming disk "/dev/xvde1" to "DATACLONE2": [ OK ]
I then performed a scandisks operation on all nodes just to be sure… I have seen corruption of a disk group before :)
[root@rac11gr2drnode1 tmp]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@rac11gr2drnode1 tmp]#

[root@rac11gr2drnode2 ~]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@rac11gr2drnode2 ~]#
The output should be identical on all cluster nodes; on my system I found the following disks:
[root@rac11gr2drnode1 tmp]# /etc/init.d/oracleasm listdisks
ACFS1
ACFS2
ACFS3
ACFS4
DATA1
DATA2
DATACLONE1
DATACLONE2
VOL1
VOL2
VOL3
VOL4
VOL5
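Before moving on, a quick sanity check along the lines of the earlier kfed example can confirm that the renamed devices now carry the new labels in their provider string; a sketch:

# the provider string should now reflect the DATACLONE labels
kfed read /dev/oracleasm/disks/DATACLONE1 | grep provstr
kfed read /dev/oracleasm/disks/DATACLONE2 | grep provstr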
Sure enough, the cloned disks were present in the listing. Although everything seemed OK at this point, I could not start disk group DATA and had to reboot the cluster nodes to rectify that problem. Maybe there is some not-so-transient information about ASM disks stored somewhere. After the reboot, CRS started my database correctly, with all dependent resources:
[oracle@rac11gr2drnode1 ~]$ srvctl status database -d dev
Instance dev1 is running on node rac11gr2drnode1
Instance dev2 is running on node rac11gr2drnode2