11g Release 2

Installing RAC 11.2.0.2 on Solaris 10/09 x64

One of the major adventures this time of the year involves installing RAC 11.2.0.2 on Solaris 10 10/09 x86-64. The system setup included EMC PowerPath 5.3 as the multipathing solution for the shared storage.

I initially asked for four BL685c G6 blades with 24 cores each, but in the end “only” got two, which is still plenty of resources to experiment with. I especially like the output of this command:

$ /usr/sbin/psrinfo | wc -l
 24

Nice! Actually, it’s 4 Opteron processors:

$ /usr/sbin/prtdiag | less
System Configuration: HP ProLiant BL685c G6
 BIOS Configuration: HP A17 12/09/2009
 BMC Configuration: IPMI 2.0 (KCS: Keyboard Controller Style)
==== Processor Sockets ====================================
Version                          Location Tag
 -------------------------------- --------------------------
 Opteron                          Proc 1
 Opteron                          Proc 2
 Opteron                          Proc 3
 Opteron                          Proc 4

So much for the equipment. The operating system showed four NICs, all called bnxeN where N was 0 through 3. The first interface, bnxe0, will be used for the public network. The second NIC is to be ignored, and the final two, bnxe2 and bnxe3, will be used for the highly available cluster interconnect feature. This way I can prevent the use of SFRAC, which inevitably would have meant a clustered Veritas file system instead of ASM.

One interesting point to notice is that the Oracle MOS document 1210883.1 specifies that the interfaces for the private interconnect are on the same subnet. So node1 will use 192.168.0.1 for bnxe2 and 192.168.0.2 for bnxe3. Similarly, node2 uses 192.168.0.3 for bnxe2 and 192.168.0.4 for bnxe3. The Oracle example is actually a bit more complicated than it could have been, as they use a /25 subnet mask. But ipcalc confirms that the addresses they use are all well within the subnet:

 Address:   10.1.0.128            00001010.00000001.00000000.1 0000000
 Netmask:   255.255.255.128 = 25  11111111.11111111.11111111.1 0000000
 Wildcard:  0.0.0.127             00000000.00000000.00000000.0 1111111
 =>
 Network:   10.1.0.128/25         00001010.00000001.00000000.1 0000000 (Class A)
 Broadcast: 10.1.0.255            00001010.00000001.00000000.1 1111111
 HostMin:   10.1.0.129            00001010.00000001.00000000.1 0000001
 HostMax:   10.1.0.254            00001010.00000001.00000000.1 1111110
 Hosts/Net: 126                   (Private Internet)
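For the record, that output comes from the classic ipcalc utility; an invocation along these lines reproduces it (the exact syntax varies slightly between ipcalc variants):

$ ipcalc 10.1.0.128/25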

This setup will have some interesting implications which I’ll describe a little later.

Part of the test was to find out how mature the port to Solaris on Intel was, so I decided to install Grid Infrastructure on node1 first and extend the cluster to node2 using the addNode.sh script in $ORACLE_HOME/oui/bin.

The installation uses 2 different accounts to store the Grid Infrastructure binaries separately from the RDBMS binaries. Operating system accounts are oragrid and oracle.

Oracle: uid=501(oracle) gid=30275(oinstall) groups=309(dba),2046(asmdba),2047(asmadmin)
OraGrid: uid=502(oragrid) gid=30275(oinstall) groups=309(dba),2046(asmdba),2047(asmadmin)
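If you need to create these accounts from scratch, a minimal sketch on Solaris could look like this; the group and user IDs are taken from the id output above, while the home directories and the shell are assumptions you should adjust to your standards:

# /usr/sbin/groupadd -g 30275 oinstall
# /usr/sbin/groupadd -g 309 dba
# /usr/sbin/groupadd -g 2046 asmdba
# /usr/sbin/groupadd -g 2047 asmadmin
# /usr/sbin/useradd -u 501 -g oinstall -G dba,asmdba,asmadmin -m -d /export/home/oracle -s /usr/bin/bash oracle
# /usr/sbin/useradd -u 502 -g oinstall -G dba,asmdba,asmadmin -m -d /export/home/oragrid -s /usr/bin/bash oragrid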

I started off by downloading files 1, 2 and 3 of patch 10098816 for my platform. The ratio of downloads of this patch was 243 to 751 between x64 and SPARC, so not a massive uptake of this patchset for Solaris it would seem.

As the oragrid user I created user equivalence with RSA and DSA SSH keys. A little utility will do this for you now, but I’m old-school and created the keys and exchanged them between the hosts myself. Not too bad a task with only two nodes.

The next step was to find out about the shared storage, and that took me a little while, I freely admit: I hadn’t used the EMC PowerPath multipathing software before and found it difficult to approach, mainly because of the lack of information about it. Or maybe I just didn’t find it; device-mapper-multipath, for instance, is easier to understand. Additionally, the fact that this was Solaris on Intel made it a little more complex. First I needed to know what the device names actually mean. As on Solaris SPARC, /dev/dsk lists the block devices and /dev/rdsk lists the raw devices, so that’s where I was heading. Next I checked the devices, emcpower0a to emcpower9a. In the course of the installation I found out how to deal with these. First of all, on Solaris Intel you have to create an fdisk partition on the LUN before it can be dealt with in the SPARC way. So for each device you would like to use, run fdisk against the emcpowerXp0 device, e.g.

# fdisk /dev/rdsk/emcpower0p0

If there is no partition, simply answer “y” to the question whether you want to use all of it for Solaris and exit fdisk. Otherwise, delete the existing partition (AFTER HAVING double/triple CHECKED THAT IT’S REALLY NOT NEEDED!) and create a new one of type “Solaris2”. It didn’t seem necessary to make it active.

Here’s a sample session:

bash-3.00# fdisk /dev/rdsk/emcpower0p0
No fdisk table exists. The default partition for the disk is:
a 100% "SOLARIS System" partition
Type "y" to accept the default partition,  otherwise type "n" to edit the partition table.
Y

Now let’s check the result:

bash-3.00# fdisk /dev/rdsk/emcpower0p0
Total disk size is 1023 cylinders
Cylinder size is 2048 (512 byte) blocks
Cylinders
Partition   Status    Type          Start   End   Length    %
=========   ======    ============  =====   ===   ======   ===
1       Active    Solaris2          1  1022    1022    100
SELECT ONE OF THE FOLLOWING:
1. Create a partition
2. Specify the active partition
3. Delete a partition
4. Change between Solaris and Solaris2 Partition IDs
5. Exit (update disk configuration and exit)
6. Cancel (exit without updating disk configuration)
Enter Selection: 6

bash-3.00#

This particular device will be used for my OCRVOTE disk group, which is why it’s only 1G. The next step is identical to SPARC: start the format tool, select partition, change the fourth partition to use the whole disk (with an offset of 3 cylinders at the beginning of the slice) and label it. With that done, exit the format application.
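For illustration, such a format session roughly follows this pattern; this is an abbreviated sketch rather than a full transcript, and the prompts differ slightly between releases:

# format
(select the emcpower pseudo device from the disk list)
format> partition
partition> 4
(keep the tag and flag defaults, start the slice at cylinder 3 and give it the remaining size)
partition> label
partition> quit
format> quit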

This takes me back to the discussion of the emcpower device names. The letters [a-p] refer to the slices of the device, while a trailing pN refers to an fdisk partition. /dev/dsk/emcpowerNc is slice 2 of multipathed device N, in other words the whole disk. I usually create a slice 4, which translates to emcpowerNe. After having completed the disk initialisation, I had to ensure that the devices I was working on were really shared. Unfortunately the emcpower devices are not consistently named across the cluster: what is emcpower0a on node1 turned out to be emcpower2a on the second node. How to check? The powermt tool to the rescue. Similar to “multipath -ll” on Linux, the powermt command can show the underlying disks which are aggregated under the emcpowerN pseudo device. So I wanted to know if my device /dev/rdsk/emcpower0e was shared. What I was really interested in was the native device:

# powermt display dev=emcpower0a | awk \
 > '/c[0-9]t/ {print $3}'
 c1t50000974C00A611Cd6s0
 c2t50000974C00A6118d6s0

Well, does that exist on the other node?

# powermt display dev=all | /usr/sfw/bin/ggrep -B8  c1t50000974C00A611Cd6s0
Pseudo name=emcpower3a
Symmetrix ID=000294900664
Logical device ID=0468
state=alive; policy=SymmOpt; priority=0; queued-IOs=0;
==============================================================================
--------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths    Interf.   Mode    State   Q-IOs Errors
==============================================================================
3072 pci@39,0/pci1166,142@12/pci103c,1708@0/fp@0,0 c1t50000974C00A611Cd6s0 FA  8eA   active  alive       0      0

So yes, it was there. Cool! I checked the two other OCR/voting disk LUNs and they were shareable as well. The final piece was to change the ownership of the devices to oragrid:asmdba and the permissions to 0660.
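In concrete terms, that boils down to something like this for each device (a minimal sketch; the slice name is the example used above):

# chown oragrid:asmdba /dev/rdsk/emcpower0e
# chmod 0660 /dev/rdsk/emcpower0e

Repeat this for the other OCR/voting disk LUNs.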

Project settings

Another item to look at is the project settings for the grid owner and oracle. It’s important to set projects correctly, otherwise the installation will fail when ASM is starting. All newly created users inherit the settings from the default project. Unless the sysadmins set the default project high enough, you will have to change them. You can use “prctl -i project default” to check all the values for this project.

I usually create a project for the grid owner, oragrid, as well as for the oracle account. My settings are as follows for a maximum SGA size of around 20G:

projadd -c "Oracle Grid Infrastructure" 'user.oracle'
projmod -s -K "process.max-sem-nsems=(privileged,256,deny)" 'user.oracle'
projmod -s -K "project.max-shm-memory=(privileged,20GB,deny)" 'user.oracle'
projmod -s -K "project.max-shm-ids=(privileged,256,deny)" 'user.oracle'

Repeat this for the oragrid user, then log in as oragrid and check that the project is actually assigned:

# id -p oragrid
uid=223(oragrid) gid=30275(oinstall) projid=100(user.oragrid)
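To double-check the resource controls that actually apply to the new project, point prctl at it directly, for example:

# prctl -n project.max-shm-memory -i project user.oragrid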

Installing Grid Infrastructure

Finally ready to start the installer! The Solaris installation isn’t any different from Linux except for the aforementioned fiddling with the raw devices.

The installation went smoothly: I ran orainstRoot.sh and root.sh without any problem. If anything, it was a bit slow, taking 10 minutes to complete root.sh on node1. You can tail the rootcrs_node1.log file in /data/oragrid/product/11.2.0.2/cfgtoollogs/crsconfig to see what’s going on behind the scenes. This is certainly one of the biggest improvements over 10g and 11g Release 1.
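Following the progress while root.sh runs is as simple as tailing that log:

$ tail -f /data/oragrid/product/11.2.0.2/cfgtoollogs/crsconfig/rootcrs_node1.log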

Extending the cluster

The MOS document I was alluding to earlier suggested, as I said, having all the private NIC IP addresses in the same subnet. That isn’t necessarily to the liking of cluvfy: the communication over bnxe3 fails in one direction on each host, as shown in this example. Tests executed from node1:

bash-3.00# ping 192.168.0.1
192.168.0.1 is alive
bash-3.00# ping 192.168.0.2
192.168.0.2 is alive
bash-3.00# ping 192.168.0.3
192.168.0.3 is alive
bash-3.00# ping 192.168.0.4
^C
192.168.0.4 is not replying

Tests executed on node2:

bash-3.00# ping 192.168.0.1
192.168.0.1 is alive
bash-3.00# ping 192.168.0.2
^C
bash-3.00# ping 192.168.0.3
192.168.0.3 is alive
bash-3.00# ping 192.168.0.4
192.168.0.4 is alive

I decided to ignore this for now, and sure enough, the cluster extension didn’t fail. As I’m not using GNS, the command to add the node was

$ ./addNode.sh -debug -logLevel finest "CLUSTER_NEW_NODES={loninengblc208}" \
 "CLUSTER_NEW_VIRTUAL_HOSTNAMES={loninengblc208-vip}"

This is actually a little more verbose than I needed, but it’s always good to be prepared for an SR with Oracle.

However, the OUI command will perform a pre-requisite check before the actual call to runInstaller, and that repeatedly failed, complaining about connectivity on the bnxe3 network. Checking the contents of the addNode.sh script I found an environment variable “$IGNORE_PREADDNODE_CHECKS” which can be set to “Y” to force the script to ignore the pre-requisite checks. With that set, the addNode operation succeeded.
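In other words, something along these lines; the variable only needs to be exported in the shell that invokes addNode.sh:

$ export IGNORE_PREADDNODE_CHECKS=Y
$ ./addNode.sh -debug -logLevel finest "CLUSTER_NEW_NODES={loninengblc208}" \
 "CLUSTER_NEW_VIRTUAL_HOSTNAMES={loninengblc208-vip}"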

RDBMS installation

This is actually not worth reporting; it’s pretty much the same as on Linux. However, a small caveat is specific to Solaris x86-64: many files in the Oracle inventory didn’t have correct permissions. When launching runInstaller to install the binaries, I was bombarded with complaints about file permissions.

For example, oraInstaller.properties had the wrong permissions. On Solaris Intel:

# ls -l oraInstaller.properties
 -rw-r--r--   1 oragrid  oinstall     317 Nov  9 15:01 oraInstaller.properties

On Linux:

$ ls -l oraInstaller.properties
 -rw-rw---- 1 oragrid oinstall 345 Oct 21 12:44 oraInstaller.properties

There were a few more; I fixed them using these commands:

$ chmod 770 ContentsXML
$ chmod 660 install.platform
$ chmod 770 oui
$ chmod 660 ContentsXML/*
$ chmod 660 oui/*

Once the permissions were fixed the installation succeeded.

DBCA

Nothing to report here, it’s the same as for Linux.

Build your own stretch cluster part V

This post is about the installation of Grid Infrastructure, and this is where it really gets exciting: the third, NFS-based voting disk is going to be presented, and I am going to show you how simple it is to add it into the disk group chosen for OCR and voting disks.

Let’s start with the installation of Grid Infrastructure. This is really simple, and I won’t go into too much detail. Start by downloading the required file from MOS; a simple search for patch 10098816 should bring you to the 11.2.0.2 download for Linux, just make sure you select the 64-bit version. The file we need just now is called p10098816_112020_Linux-x86-64_3of7.zip. The file names don’t necessarily relate to their contents; the readme helps you find out which piece of the puzzle is used for what functionality.

I alluded to my software distribution method in one of the earlier posts, and here’s the detail. My dom0 exports the /m directory to the 192.168.99.0/24 network, the one accessible to all my domUs. This really simplifies software deployments.

So starting off, the file has been unzipped:

openSUSE-112-64-minimal:/m/download/db11.2/11.2.0.2 # unzip -q p10098816_112020_Linux-x86-64_3of7.zip

This creates the subdirectory “grid”. Switch back to edcnode1 and log in as oracle. As I already explained I won’t use different accounts for Grid Infrastructure and the RDBMS in this example.

If you haven’t already done so, mount the /m directory on the domU (which requires root privileges). Move to the newly unzipped “grid” directory under your mount point and begin to set up user equivalence. On edcnode1 and edcnode2, create RSA and DSA keys for SSH:

[oracle@edcnode1 ~]$ ssh-keygen -t rsa

Any prompt can be answered with the return key; it’s important to leave the passphrase empty. Repeat the call to ssh-keygen with argument “-t dsa”. Navigate to ~/.ssh and create the authorized_keys file as follows:

[oracle@edcnode1 .ssh]$ cat *.pub >> authorized_keys

Then copy the authorized_keys file to edcnode2 and add the public keys:

[oracle@edcnode1 .ssh]$ scp authorized_keys oracle@edcnode2:`pwd`
[oracle@edcnode1 .ssh]$ ssh oracle@edcnode2

If you are prompted, add the host to the ~/.ssh/known_hosts file by typing in “yes”.

[oracle@edcnode2 .ssh]$ cat *.pub >> authorized_keys

Change the permissions on the authorized_keys file to 0400 on both hosts, otherwise it won’t be considered when you try to log in. With all of this done, you can add all the unknown hosts to each node’s known_hosts file. The easiest way is a for loop:

[oracle@edcnode1 ~]$ for i in edcnode1 edcnode2 edcnode1-priv edcnode2-priv; do ssh $i hostname; done

Run this twice on each node, acknowledging the question whether the new address should be added. Important: ensure that there is no banner (/etc/motd, .profile, .bash_profile etc.) writing to stdout or stderr, or you are going to see strange error messages about user equivalence not being set up correctly.

I hear you say: but 11.2 can create user equivalence in OUI now. That is of course correct, but I wanted to run cluvfy first, which requires a working setup.

Cluster Verification

It is good practice to run a check to see if the prerequisites for the Grid Infrastructure installation are met, and keep the output. Change to the NFS mount where the grid directory is exported, and execute runcluvfy.sh as in this example:

[oracle@edcnode1 grid]$ ./runcluvfy.sh stage -pre crsinst -n edcnode1,edcnode2 -verbose -fixup 2>&1 | tee /tmp/preCRS.tx

The nice thing is that you can run the fixup script now to fix kernel parameter settings:

[root@edcnode2 ~]# /tmp/CVU_11.2.0.2.0_oracle/runfixup.sh
/usr/bin/id
Response file being used is :/tmp/CVU_11.2.0.2.0_oracle/fixup.response
Enable file being used is :/tmp/CVU_11.2.0.2.0_oracle/fixup.enable
Log file location: /tmp/CVU_11.2.0.2.0_oracle/orarun.log
Setting Kernel Parameters...
fs.file-max = 327679
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.wmem_max = 262144
net.core.wmem_max = 1048576

Repeat this on the second node, edcnode2. Obviously you should fix any other problem cluvfy reports before proceeding.

In the previous post I created the /u01 mount point. Double-check that /u01 is actually mounted, otherwise you’d end up writing to your root_vg’s root_lv, which is not an ideal situation.
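A quick check before continuing spares you that surprise:

[oracle@edcnode1 ~]$ df -h /u01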

You are now ready to start the installer: type in ./runInstaller to start the installation.

Grid Installation

This is rather mundane, and instead of providing screenshots, I opted for a description of the steps to execute in the OUI session.

  • Screen 01: Skip software updates (I don’t have an Internet connection on my lab)
  • Screen 02: Install and configure Grid Infrastructure for a cluster
  • Screen 03: Advanced Installation
  • Screen 04: Keep defaults or add additional languages
  • Screen 05: Cluster Name: edc, SCAN name edc-scan, SCAN port: 1521, do not configure GNS
  • Screen 06: Ensure that both hosts are listed in this screen. Add/edit as appropriate. Hostnames are edcnode{1,2}.localdomain, VIPs are to be edcnode{1,2}-vip.localdomain. Enter the oracle user’s password and click on next
  • Screen 07: Assign eth0 to public, eth1 to private and eth2 to “do not use”.
  • Screen 08: Select ASM
  • Screen 09: disk group name: OCRVOTE with NORMAL redundancy. Tick the boxes for "ORCL:OCR01FILER01", "ORCL:OCR01FILER02" and "ORCL:OCR02FILER01"
  • Screen 10: Choose suitable passwords for SYS and ASMSNMP
  • Screen 11: Don’t use IPMI
  • Screen 12: Assign DBA to OSDBA, OSOPER and OSASM. Again, in the real world you should think about role separation and assign different groups
  • Screen 13: ORACLE_BASE: /u01/app/oracle, Software location: /u01/app/11.2.0/grid
  • Screen 14: Oracle inventory: /u01/app/oraInventory
  • Screen 15: Ignore all; there should only be references to swap, cvuqdisk, ASM device checks and NTP. If you have additional warnings, fix them first!
  • Screen 16: Click on install!

The usual installation will now take place. At the end, run the root.sh script on edcnode1 and after it completes, on edcnode2. The output is included here for completeness:

[root@edcnode1 u01]# /u01/app/11.2.0/grid/root.sh 2>&1 | tee /tmp/root.sh.out
Running Oracle 11g root script...

The following environment variables are set as:
 ORACLE_OWNER= oracle
 ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
 Copying dbhome to /usr/local/bin ...
 Copying oraenv to /usr/local/bin ...
 Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
OLR initialization - successful
 root wallet
 root wallet cert
 root cert export
 peer wallet
 profile reader wallet
 pa wallet
 peer wallet keys
 pa wallet keys
 peer cert request
 pa cert request
 peer cert
 pa cert
 peer root cert TP
 profile reader root cert TP
 pa root cert TP
 peer pa cert TP
 pa peer cert TP
 profile reader pa cert TP
 profile reader peer cert TP
 peer user cert
 pa user cert
Adding daemon to inittab
ACFS-9200: Supported
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies - this may take some time.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9309: ADVM/ACFS installation correctness verified.
CRS-2672: Attempting to start 'ora.mdnsd' on 'edcnode1'
CRS-2676: Start of 'ora.mdnsd' on 'edcnode1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'edcnode1'
CRS-2676: Start of 'ora.gpnpd' on 'edcnode1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'edcnode1'
CRS-2672: Attempting to start 'ora.gipcd' on 'edcnode1'
CRS-2676: Start of 'ora.gipcd' on 'edcnode1' succeeded
CRS-2676: Start of 'ora.cssdmonitor' on 'edcnode1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'edcnode1'
CRS-2672: Attempting to start 'ora.diskmon' on 'edcnode1'
CRS-2676: Start of 'ora.diskmon' on 'edcnode1' succeeded
CRS-2676: Start of 'ora.cssd' on 'edcnode1' succeeded

ASM created and started successfully.

Disk Group OCRVOTE created successfully.

clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 38f2caf7530c4f67bfe23bb170ed2bfe.
Successful addition of voting disk 9aee80ad14044f22bf6211b81fe6363e.
Successful addition of voting disk 29fde7c3919b4fd6bf626caf4777edaa.
Successfully replaced voting disk group with +OCRVOTE.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   38f2caf7530c4f67bfe23bb170ed2bfe (ORCL:OCR01FILER01) [OCRVOTE]
 2. ONLINE   9aee80ad14044f22bf6211b81fe6363e (ORCL:OCR01FILER02) [OCRVOTE]
 3. ONLINE   29fde7c3919b4fd6bf626caf4777edaa (ORCL:OCR02FILER01) [OCRVOTE]
Located 3 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'edcnode1'
CRS-2676: Start of 'ora.asm' on 'edcnode1' succeeded
CRS-2672: Attempting to start 'ora.OCRVOTE.dg' on 'edcnode1'
CRS-2676: Start of 'ora.OCRVOTE.dg' on 'edcnode1' succeeded
ACFS-9200: Supported
ACFS-9200: Supported
CRS-2672: Attempting to start 'ora.registry.acfs' on 'edcnode1'
CRS-2676: Start of 'ora.registry.acfs' on 'edcnode1' succeeded
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

[root@edcnode2 ~]# /u01/app/11.2.0/grid/root.sh 2>&1 | tee /tmp/rootsh.out
Running Oracle 11g root script...

The following environment variables are set as:
 ORACLE_OWNER= oracle
 ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
 Copying dbhome to /usr/local/bin ...
 Copying oraenv to /usr/local/bin ...
 Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
OLR initialization - successful
Adding daemon to inittab
ACFS-9200: Supported
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies - this may take some time.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9309: ADVM/ACFS installation correctness verified.
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node edcnode1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@edcnode2 ~]#

Congratulations! You have a working setup! Check if everything is ok:

[root@edcnode2 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.OCRVOTE.dg
 ONLINE  ONLINE       edcnode1
 ONLINE  ONLINE       edcnode2
ora.asm
 ONLINE  ONLINE       edcnode1                 Started
 ONLINE  ONLINE       edcnode2
ora.gsd
 OFFLINE OFFLINE      edcnode1
 OFFLINE OFFLINE      edcnode2
ora.net1.network
 ONLINE  ONLINE       edcnode1
 ONLINE  ONLINE       edcnode2
ora.ons
 ONLINE  ONLINE       edcnode1
 ONLINE  ONLINE       edcnode2
ora.registry.acfs
 ONLINE  ONLINE       edcnode1
 ONLINE  ONLINE       edcnode2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
 1        ONLINE  ONLINE       edcnode2
ora.LISTENER_SCAN2.lsnr
 1        ONLINE  ONLINE       edcnode1
ora.LISTENER_SCAN3.lsnr
 1        ONLINE  ONLINE       edcnode1
ora.cvu
 1        ONLINE  ONLINE       edcnode1
ora.edcnode1.vip
 1        ONLINE  ONLINE       edcnode1
ora.edcnode2.vip
 1        ONLINE  ONLINE       edcnode2
ora.oc4j
 1        ONLINE  ONLINE       edcnode1
ora.scan1.vip
 1        ONLINE  ONLINE       edcnode2
ora.scan2.vip
 1        ONLINE  ONLINE       edcnode1
ora.scan3.vip
 1        ONLINE  ONLINE       edcnode1
[root@edcnode2 ~]#

[root@edcnode1 ~]# crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   38f2caf7530c4f67bfe23bb170ed2bfe (ORCL:OCR01FILER01) [OCRVOTE]
 2. ONLINE   9aee80ad14044f22bf6211b81fe6363e (ORCL:OCR01FILER02) [OCRVOTE]
 3. ONLINE   29fde7c3919b4fd6bf626caf4777edaa (ORCL:OCR02FILER01) [OCRVOTE]
Located 3 voting disk(s).

Adding the NFS voting disk

It’s about time to deal with this subject. If you haven’t done so already, start the domU “filer03”. Log in as openfiler and ensure that the NFS server is started. On the services tab, click on enable next to the NFS server if needed. Next navigate to the shares tab, where you should find the volume group and logical volume created earlier. The volume group I created is called “ocrvotenfs_vg”, and it has one logical volume, “nfsvol_lv”. Click on the name of the LV to create a new share. I named the new share “ocrvote”: enter this in the popup window and click on “create sub folder”.

The new share should appear underneath the nfsvol_lv now. Proceed by clicking on “ocrvote” to set the share’s properties. Before you get to enter these, click on “make share”. Scroll down to the host access configuration section in the following screen. In this section you could set all sorts of technologies: SMB, NFS, WebDAV, FTP and RSYNC. For this example, everything but NFS should be set to “NO”.

For NFS, the story is different: ensure you set the radio button to “RW” for both hosts. Then click on Edit for each machine. This is important! The anonymous UID and GID must match the grid owner’s uid and gid. In my scenario I entered “500” for both. You can check your settings using the id command as oracle: it will print the UID and GID plus other information.

The UID/GID mapping then has to be set to all_squash, IO mode to sync, and write delay to wdelay. Leave the default for “requesting origin port”, which was set to “secure < 1024” in my configuration.

I decided to create /ocrvote on both nodes to mount the NFS export:

[root@edcnode2 ~]# mkdir /ocrvote

Edit the /etc/fstab file to make the mount persistent across reboots. I added this line to the file on both nodes:

192.168.101.52:/mnt/ocrvotenfs_vg/nfsvol_lv/ocrvote /ocrvote nfs rw,bg,hard,intr,rsize=32768,wsize=32768,tcp,noac,nfsvers=3,timeo=600,addr=192.168.101.51

The “addr” option instructs Linux to use the storage network to mount the share. Now you are ready to mount the device on all nodes, using the “mount /ocrvote” command.

I changed the ownership of the exported directory on the filer to the uid/gid combination of the oracle account (or, on an installation with a separate grid software owner, to its uid/gid combination):

[root@filer03 ~]# cd /mnt/ocrvotenfs_vg/nfsvol_lv/
[root@filer03 nfsvol_lv]# ls -l
total 44
-rw-------  1 root    root     6144 Sep 24 15:38 aquota.group
-rw-------  1 root    root     6144 Sep 24 15:38 aquota.user
drwxrwxrwx  2 root    root     4096 Sep 24 15:26 homes
drwx------  2 root    root    16384 Sep 24 15:26 lost+found
drwxrwsrwx  2 ofguest ofguest  4096 Sep 24 15:31 ocrvote
-rw-r--r--  1 root    root      974 Sep 24 15:45 ocrvote.info.xml
[root@filer03 nfsvol_lv]# chown 500:500 ocrvote
[root@filer03 nfsvol_lv]# ls -l
total 44
-rw-------  1 root root  7168 Sep 24 16:09 aquota.group
-rw-------  1 root root  7168 Sep 24 16:09 aquota.user
drwxrwxrwx  2 root root  4096 Sep 24 15:26 homes
drwx------  2 root root 16384 Sep 24 15:26 lost+found
drwxrwsrwx  2  500  500  4096 Sep 24 15:31 ocrvote
-rw-r--r--  1 root root   974 Sep 24 15:45 ocrvote.info.xml
[root@filer03 nfsvol_lv]#

ASM requires zero-padded files as “disks”, so create one:

[root@filer03 nfsvol_lv]# dd if=/dev/zero of=ocrvote/nfsvotedisk01 bs=1G count=2
[root@filer03 nfsvol_lv]# chown 500:500 ocrvote/nfsvotedisk01

Add the third voting disk

Almost there! Before performing any change to the cluster configuration it is always a good idea to take a backup.

[root@edcnode1 ~]# ocrconfig -manualbackup

edcnode1     2010/09/24 17:11:51     /u01/app/11.2.0/grid/cdata/edc/backup_20100924_171151.ocr

You only need to do this on one node. Recall that the current state is:

[oracle@edcnode1 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   38f2caf7530c4f67bfe23bb170ed2bfe (ORCL:OCR01FILER01) [OCRVOTE]
 2. ONLINE   9aee80ad14044f22bf6211b81fe6363e (ORCL:OCR01FILER02) [OCRVOTE]
 3. ONLINE   29fde7c3919b4fd6bf626caf4777edaa (ORCL:OCR02FILER01) [OCRVOTE]
Located 3 voting disk(s).

ASM sees it the same way:

SQL> select mount_status,header_status, name,failgroup,library
 2  from v$asm_disk
 3  /

MOUNT_S HEADER_STATU NAME                           FAILGROUP       LIBRARY
------- ------------ ------------------------------ --------------- ------------------------------------------------------------
CLOSED  PROVISIONED                                                 ASM Library - Generic Linux, version 2.0.4 (KABI_V2)
CLOSED  PROVISIONED                                                 ASM Library - Generic Linux, version 2.0.4 (KABI_V2)
CLOSED  PROVISIONED                                                 ASM Library - Generic Linux, version 2.0.4 (KABI_V2)
CLOSED  PROVISIONED                                                 ASM Library - Generic Linux, version 2.0.4 (KABI_V2)
CACHED  MEMBER       OCR01FILER01                   OCR01FILER01    ASM Library - Generic Linux, version 2.0.4 (KABI_V2)
CACHED  MEMBER       OCR01FILER02                   OCR01FILER02    ASM Library - Generic Linux, version 2.0.4 (KABI_V2)
CACHED  MEMBER       OCR02FILER01                   OCR02FILER01    ASM Library - Generic Linux, version 2.0.4 (KABI_V2)

7 rows selected.

Now here’s the idea: you add the NFS location to the ASM diskstring in addition to “ORCL:*” and all is well. But that didn’t work:

SQL> show parameter disk  

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
asm_diskgroups                       string
asm_diskstring                       string      ORCL:*
SQL> 

SQL> alter system set asm_diskstring = 'ORCL:*, /ocrvote/nfsvotedisk01' scope=memory sid='*';
alter system set asm_diskstring = 'ORCL:*, /ocrvote/nfsvotedisk01' scope=memory sid='*'
*
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-15014: path 'ORCL:OCR01FILER01' is not in the discovery set

Regardless of what I tried, the system complained. Grudgingly, I used the GUI, asmca.

After starting asmca, click on Disk Groups. Then select disk group “OCRVOTE” and right-click to “add disks”. The trick is to click on “change discovery path”. Enter “ORCL:*, /ocrvote/nfsvotedisk01” (without quotes) into the dialog field and close it. Strangely, the NFS disk now appears. Make two ticks: one before the disk path, and one in the quorum box. A click on the OK button starts the magic, and you should be presented with a success message. The ASM instance reports a little more:

ALTER SYSTEM SET asm_diskstring='ORCL:*','/ocrvote/nfsvotedisk01' SCOPE=BOTH SID='*';
2010-09-29 10:54:52.557000 +01:00
SQL> ALTER DISKGROUP OCRVOTE ADD  QUORUM DISK '/ocrvote/nfsvotedisk01' SIZE 500M /* ASMCA */
NOTE: Assigning number (1,3) to disk (/ocrvote/nfsvotedisk01)
NOTE: requesting all-instance membership refresh for group=1
2010-09-29 10:54:54.445000 +01:00
NOTE: initializing header on grp 1 disk OCRVOTE_0003
NOTE: requesting all-instance disk validation for group=1
NOTE: skipping rediscovery for group 1/0xd032bc02 (OCRVOTE) on local instance.
2010-09-29 10:54:57.154000 +01:00
NOTE: requesting all-instance disk validation for group=1
NOTE: skipping rediscovery for group 1/0xd032bc02 (OCRVOTE) on local instance.
2010-09-29 10:55:00.718000 +01:00
GMON updating for reconfiguration, group 1 at 5 for pid 27, osid 15253
NOTE: group 1 PST updated.
NOTE: initiating PST update: grp = 1
GMON updating group 1 at 6 for pid 27, osid 15253
2010-09-29 10:55:02.896000 +01:00
NOTE: PST update grp = 1 completed successfully
NOTE: membership refresh pending for group 1/0xd032bc02 (OCRVOTE)
2010-09-29 10:55:05.285000 +01:00
GMON querying group 1 at 7 for pid 18, osid 4247
NOTE: cache opening disk 3 of grp 1: OCRVOTE_0003 path:/ocrvote/nfsvotedisk01
GMON querying group 1 at 8 for pid 18, osid 4247
SUCCESS: refreshed membership for 1/0xd032bc02 (OCRVOTE)
2010-09-29 10:55:06.528000 +01:00
SUCCESS: ALTER DISKGROUP OCRVOTE ADD  QUORUM DISK '/ocrvote/nfsvotedisk01' SIZE 500M /* ASMCA */
2010-09-29 10:55:08.656000 +01:00
NOTE: Attempting voting file refresh on diskgroup OCRVOTE
NOTE: Voting file relocation is required in diskgroup OCRVOTE
NOTE: Attempting voting file relocation on diskgroup OCRVOTE
NOTE: voting file allocation on grp 1 disk OCRVOTE_0003
2010-09-29 10:55:10.047000 +01:00
NOTE: voting file deletion on grp 1 disk OCR02FILER01
NOTE: starting rebalance of group 1/0xd032bc02 (OCRVOTE) at power 1
Starting background process ARB0
ARB0 started with pid=29, OS id=15446
NOTE: assigning ARB0 to group 1/0xd032bc02 (OCRVOTE) with 1 parallel I/O
2010-09-29 10:55:13.178000 +01:00
NOTE: GroupBlock outside rolling migration privileged region
NOTE: requesting all-instance membership refresh for group=1
2010-09-29 10:55:15.533000 +01:00
NOTE: stopping process ARB0
SUCCESS: rebalance completed for group 1/0xd032bc02 (OCRVOTE)
GMON updating for reconfiguration, group 1 at 9 for pid 31, osid 15451
NOTE: group 1 PST updated.
2010-09-29 10:55:17.907000 +01:00
NOTE: membership refresh pending for group 1/0xd032bc02 (OCRVOTE)
2010-09-29 10:55:20.481000 +01:00
GMON querying group 1 at 10 for pid 18, osid 4247
SUCCESS: refreshed membership for 1/0xd032bc02 (OCRVOTE)
2010-09-29 10:55:23.490000 +01:00
NOTE: Attempting voting file refresh on diskgroup OCRVOTE
NOTE: Voting file relocation is required in diskgroup OCRVOTE
NOTE: Attempting voting file relocation on diskgroup OCRVOTE

Superb! But did it kick out the correct disk? Yes it did: you now see OCR01FILER01 and OCR01FILER02 plus the NFS disk:

[oracle@edcnode1 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   38f2caf7530c4f67bfe23bb170ed2bfe (ORCL:OCR01FILER01) [OCRVOTE]
 2. ONLINE   9aee80ad14044f22bf6211b81fe6363e (ORCL:OCR01FILER02) [OCRVOTE]
 3. ONLINE   6107050ad9ba4fd1bfebdf3a029c48be (/ocrvote/nfsvotedisk01) [OCRVOTE]
Located 3 voting disk(s).

Preferred Mirror Read

One of the cool new 11.1 features allows administrators of stretched RAC systems to instruct ASM to read mirrored extents rather than primary extents. This can speed up data access in cases where data would otherwise have been sent from the remote array. Setting this parameter is crucial to many implementations. In preparation for the RDBMS installation (to be detailed in the next post), I created a disk group consisting of 4 ASM disks, two from each filer. The syntax for the disk group creation is as follows:

SQL> create diskgroup data normal redundancy
  2  failgroup sitea disk 'ORCL:ASM01FILER01','ORCL:ASM01FILER02'
  3* failgroup siteb disk 'ORCL:ASM02FILER01','ORCL:ASM02FILER02'
SQL> /

Diskgroup created.

As you can see, all disks from sitea are from filer01 and form one failure group. The other disks, originating from filer02, form the second failure group.

You can see the result in v$asm_disk, as this example shows:

SQL> select name,failgroup from v$asm_disk;

NAME                           FAILGROUP
------------------------------ ------------------------------
ASM01FILER01                   SITEA
ASM01FILER02                   SITEA
ASM02FILER01                   SITEB
ASM02FILER02                   SITEB
OCR01FILER01                   OCR01FILER01
OCR01FILER02                   OCR01FILER02
OCR02FILER01                   OCR02FILER01
OCRVOTE_0003                   OCRVOTE_0003

8 rows selected.

Now all that remains to be done is to instruct the ASM instances to read from the local storage if possible. This is performed by setting an instance-specific init.ora parameter. I used the following syntax:

SQL> alter system set asm_preferred_read_failure_groups='DATA.SITEB' scope=both sid='+ASM2';

System altered.

SQL> alter system set asm_preferred_read_failure_groups='DATA.SITEA' scope=both sid='+ASM1';

System altered.

So I’m all set for the next step, the installation of the RDBMS software. But that’s for another post…

Kindle version of Pro Oracle Database 11g RAC on Linux

I had a few questions from readers asking whether or not there was going to be a Kindle version of Pro Oracle Database 11g RAC on Linux.

The good news for those waiting is: yes! But it might take a couple of weeks for it to be released.

I checked with Jonathan Gennick, who expertly oversaw the whole project, and he confirmed that Amazon have been contacted to provide a Kindle version.

As soon as I hear more, I’ll post it here.

Frightening number of linking errors for 11.2.0.1.3 on AIX 5.3 TL11

I just applied PSU 3 for 11.2.0.1 on AIX 5.3 TL11. That would have been more straightforward had there not been a host of errors when relinking oracle. There were loads of these:

SEVERE:OPatch invoked as follows: 'apply '
INFO:
Oracle Home       : /u01/app/oracle/product/11.2.0.1
Central Inventory : /u01/app/oracle/product/oraInventory
 from           : /etc/oraInst.loc
OPatch version    : 11.2.0.1.3
OUI version       : 11.2.0.1.0
OUI location      : /u01/app/oracle/product/11.2.0.1/oui
Log file location : /u01/app/oracle/product/11.2.0.1/cfgtoollogs/opatch/opatch2010-11-01_09-28-22AM.log
...
[more output]
...
INFO:Running make for target ioracle
INFO:Start invoking 'make' at Mon Nov 01 09:29:26 GMT 2010Mon Nov 01 09:29:26 GMT 2010
INFO:Finish invoking 'make' at Mon Nov 01 09:29:40 GMT 2010
WARNING:OUI-67215:
OPatch found the word "error" in the stderr of the make command.
Please look at this stderr. You can re-run this make command.
Stderr output:
ld: 0711-415 WARNING: Symbol ldxdts is already exported.
ld: 0711-415 WARNING: Symbol ldxsto is already exported.
ld: 0711-415 WARNING: Symbol lnxadd is already exported.
...
ld: 0711-319 WARNING: Exported symbol not defined: cout__3std
...
ld: 0711-224 WARNING: Duplicate symbol: .kotgscid
...
ld: 0711-783 WARNING: TOC overflow. TOC size: 219488    Maximum size: 65536
 Extra instructions are being generated for each reference to a TOC
 symbol if the symbol is in the TOC overflow area.

Wow, that looked like a completely corrupted installation of Oracle now, and I got ready to roll the change back. However, MOS had an answer to this!

These errors can be ignored as per the MOS document “Relinking causes many warning on AIX” [ID 1189533.1]. See also MOS note 245372.1 (TOC overflow Warning Can Safely Be Ignored) and BUG:9828407 (TOO MANY WARNING MESSAGES WHEN INSTALLING ORACLE 11.2) for more information.

Build your own 11.2.0.2 stretched RAC part IV

Finally I have some more time to work on the next article in this series, dealing with the setup of my two cluster nodes. This is actually going to be quite short compared to the other articles so far, mainly because I have streamlined the deployment of new Oracle-capable machines to a degree where I can comfortably set up a cluster in 2 hours. It’s a bit more work initially, but it paid off. The setup of my reference VM is documented on this blog as well; search for virtualisation and opensuse to get to the article.

When I first started working in my lab environment I created a virtual machine called “rhel55ref”. In reality it’s OEL, because of Red Hat’s windooze-like policy of requiring an activation code. I would have considered CentOS as well, but when I created the reference VM the community hadn’t provided “update 5” yet. I like the brand new shiny things most :)

Seems like I’m lucky now as well: with the introduction of Oracle’s own Linux kernel I am ready for the future. Hopefully Red Hat will get their act together soon and release version 6 of their distribution. As much as I like Oracle, I don’t want them to dominate the OS market too much. With Solaris now in their hands as well…

Anyway, to get started with my first node I cloned my template. Moving to /var/lib/xen/images, all I had to do was “cp -a rhel55ref edcnode1”. One repetition for edcnode2 gave me my second node. Xen (or libvirt for that matter) stores the VM configuration in xenstore, a backend database which can be interrogated easily. So I dumped the XML configuration of my rhel55ref VM and stored it in edcnode{1,2}.xml; the command to dump the information is “virsh dumpxml domainName > edcnode{1,2}.xml”.
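In commands, the cloning and the XML dump look roughly like this, all run in dom0:

[root@dom0]# cd /var/lib/xen/images
[root@dom0]# cp -a rhel55ref edcnode1
[root@dom0]# cp -a rhel55ref edcnode2
[root@dom0]# virsh dumpxml rhel55ref > edcnode1.xml
[root@dom0]# virsh dumpxml rhel55ref > edcnode2.xml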

The domU folder contains the virtual disk for the root file system of my VM, called disk0. I then created a new “hard disk”, called disk01, to contain the Oracle binaries. Experience told me not to make that too small; 20G should be enough for my /u01 mountpoint for Grid Infrastructure and the RDBMS binaries.

[root@dom0 /var/lib/xen/images/edcnode1]# dd if=/dev/zero of=disk01 bs=1 count=0 seek=20G
0+0 records in
0+0 records out
0 bytes (0 B) copied, 1.3869e-05 s, 0.0 kB/

I like to speed the file creation up by using the sparse file trick: the file disk01 will be reported to be 20G in size, but it will only consume that much space if the virtual machine actually needs it. It’s a bit like Oracle creating a temporary tablespace.

With that information it’s time to modify the dumped XML file. Again it’s important to define MAC addresses for the network interfaces, otherwise the system will try to use DHCP for your NICs, destroying the carefully crafted /etc/sysconfig/network-scripts/ifcfg-eth{0,1,2} files. Oh, and remember that the first three octets are reserved for Xen, so don’t change “00:16:3e”! Your UUID also has to be unique. In the end my first VM’s XML description looked roughly like this:


<domain type='xen'>
  <name>edcnode1</name>
  <uuid>46a36f98-4e52-45a5-2579-80811b38a3ab</uuid>
  <memory>4194304</memory>
  <currentMemory>524288</currentMemory>
  <vcpu>2</vcpu>
  <bootloader>/usr/bin/pygrub</bootloader>
  <bootloader_args>-q</bootloader_args>
  <os>
    <type>linux</type>
  </os>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/lib64/xen/bin/qemu-dm</emulator>
    <disk type='file' device='disk'>
      <source file='/var/lib/xen/images/edcnode1/disk0'/>
      <target dev='xvda' bus='xen'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='/var/lib/xen/images/edcnode1/disk01'/>
      <target dev='xvdb' bus='xen'/>
    </disk>
    <interface type='bridge'>
      <mac address='00:16:3e:ab:cd:ef'/>
      <source bridge='br1'/>
    </interface>
    <interface type='bridge'>
      <mac address='00:16:3e:10:13:1a'/>
      <source bridge='br2'/>
    </interface>
    <interface type='bridge'>
      <mac address='00:16:3e:11:12:ef'/>
      <source bridge='br3'/>
    </interface>
  </devices>
</domain>

You can see that the interfaces refer to br1, br2, and br3. These are the ones that were previously defined in the first article. The target device inside each interface definition doesn’t matter, as it will be assigned dynamically anyway.

When done, you can define the new VM and start it:

[root@dom0]# virsh define edcnode{1,2}.xml
[root@dom0]# xm start edcnode1 -c

You are directly connected to the VM’s console (80x24, just like in the old times!) and have to wait a looooong time for the DHCP requests for eth0, eth1 and eth2 to time out. This is the first thing to address. As root, log in to the system and navigate straight to /etc/sysconfig/network-scripts to change ifcfg-eth{0,1,2}. Alternatively, use system-config-network-tui to change the network settings.

The following settings should be used for edcnode1:

  • eth0:    192.168.99.56/24
  • eth1:    192.168.100.56/24
  • eth2:    192.168.101.56/24

These are the settings for edcnode2:

  • eth0:    192.168.99.58/24
  • eth1:    192.168.100.58/24
  • eth2:    192.168.101.58/24

The nameserver for both is my dom0, in this case 192.168.99.10. Enter the appropriate hostname as well as the nameserver. Note that 192.168.99.57 and 59 are reserved for the node VIPs, hence the “gap”. Then edit /etc/hosts to enter the information about the private interconnect, which for obvious reasons is not included in DNS. If you like, persist your public and VIP information in /etc/hosts as well. Don’t do this with the SCAN; it’s not recommended to have the SCAN resolve through /etc/hosts, although it works.
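As a sketch, the private interconnect entries in /etc/hosts could look like this on both nodes; the -priv host names are assumptions in line with the names used for the SSH setup, so adjust them to your naming scheme:

# private interconnect - not in DNS
192.168.100.56   edcnode1-priv.localdomain   edcnode1-priv
192.168.100.58   edcnode2-priv.localdomain   edcnode2-priv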

Now’s the big moment: restart the network services and get out of the uncomfortable 80x24 character limitation:

[root@edcnode1]# service network restart

The complete configuration for edcnode1 is printed here for the sake of completeness:

[root@edcnode1 ~]# cat /etc/resolv.conf
nameserver 192.168.99.10
search localdomain
[root@edcnode1 ~]#

[root@edcnode1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth{0,1,2}
# Xen Virtual Ethernet
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
HWADDR=00:16:3e:ab:cd:ef
NETMASK=255.255.255.0
IPADDR=192.168.99.56
TYPE=Ethernet
# Xen Virtual Ethernet
DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
HWADDR=00:16:3e:10:13:1a
NETMASK=255.255.255.0
IPADDR=192.168.100.56
TYPE=Ethernet
# Xen Virtual Ethernet
DEVICE=eth2
BOOTPROTO=none
ONBOOT=yes
HWADDR=00:16:3e:11:12:ef
NETMASK=255.255.255.0
IPADDR=192.168.101.56
TYPE=Ethernet

[root@edcnode1 ~]# cat /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=edcnode1

Next on the agenda is the iSCSI initiator. This isn’t part of my standard build and had to be added. All my software is exported from the dom0 via NFS and mounted to /mnt.
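The mount itself is a plain NFS mount from the dom0; in my setup something along these lines does the trick (export path and dom0 address as mentioned elsewhere in this series, adjust to yours):

[root@edcnode1 ~]# mount -t nfs 192.168.99.10:/m /mnt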

[root@edcnode1 ~]# find /mnt -iname "iscsi*"
/mnt/oracleEnterpriseLinux/source/iscsi-initiator-utils-6.2.0.871-0.16.el5.x86_64.rpm

[root@edcnode1 ~]# cd /mnt/oracleEnterpriseLinux/source/
[root@edcnode1 ~]# rpm -ihv iscsi-initiator-utils-6.2.0.871-0.16.el5.x86_64.rpm
warning: iscsi-initiator-utils-6.2.0.871-0.16.el5.x86_64.rpm: ...
Preparing...                ########################################### [100%]
 1:iscsi-initiator-utils  ########################################### [100%

It's important to edit the initiator name, i.e. the name the initiator reports back to OpenFiler. I changed it to include edcnode1 and edcnode2 on their respective hosts. The file to edit is /etc/iscsi/initiatorname.iscsi.
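The file contains a single InitiatorName line; on edcnode1 it ended up looking something like this (the IQN itself is only an example):

InitiatorName=iqn.1994-05.com.redhat:edcnode1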

Time to get serious now:

[root@edcnode1 ~]# /etc/init.d/iscsi start
iscsid is stopped
Starting iSCSI daemon:                                     [  OK  ]
 [  OK  ]
Setting up iSCSI targets: iscsiadm: No records found!
 [  OK  ]

We are ready to roll. First, we need to discover the targets from the OpenFiler appliance; start with the first one, filer01:

[root@edcnode1 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.101.50
192.168.101.50:3260,1 iqn.2006-01.com.openfiler:asm01Filer01
192.168.101.50:3260,1 iqn.2006-01.com.openfiler:ocrvoteFiler01
192.168.101.50:3260,1 iqn.2006-01.com.openfiler:asm02Filer01

A restart of the iscsi service will automatically log in and persist the settings (this is very wide output; it works best in a 1280x-something resolution):

[root@edcnode1 ~]# service iscsi restart
Stopping iSCSI daemon:
iscsid dead but pid file exists                            [  OK  ]
Starting iSCSI daemon:                                     [  OK  ]
 [  OK  ]
Setting up iSCSI targets: Logging in to [iface: default, target: iqn.2006-01.com.openfiler:asm01Filer01, portal: 192.168.101.50,3260]
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:ocrvoteFiler01, portal: 192.168.101.50,3260]
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:asm02Filer01, portal: 192.168.101.50,3260]
Login to [iface: default, target: iqn.2006-01.com.openfiler:asm01Filer01, portal: 192.168.101.50,3260]: successful
Login to [iface: default, target: iqn.2006-01.com.openfiler:ocrvoteFiler01, portal: 192.168.101.50,3260]: successful
Login to [iface: default, target: iqn.2006-01.com.openfiler:asm02Filer01, portal: 192.168.101.50,3260]: successful
 [  OK  ]

Fine! Now over to fdisk the new devices. I know that my “local” storage is named /dev/xvd*, so anything new (“/dev/sd*”) will be iSCSI-provided storage. If you are unsure, you can always check the /var/log/messages file to see which devices have just been discovered. You should see something similar to this output:

Sep 24 12:20:08 edcnode1 kernel: Loading iSCSI transport class v2.0-871.
Sep 24 12:20:08 edcnode1 kernel: cxgb3i: tag itt 0x1fff, 13 bits, age 0xf, 4 bits.
Sep 24 12:20:08 edcnode1 kernel: iscsi: registered transport (cxgb3i)
Sep 24 12:20:08 edcnode1 kernel: Broadcom NetXtreme II CNIC Driver cnic v2.1.0 (Oct 10, 2009)
Sep 24 12:20:08 edcnode1 kernel: Broadcom NetXtreme II iSCSI Driver bnx2i v2.1.0 (Dec 06, 2009)
Sep 24 12:20:08 edcnode1 kernel: iscsi: registered transport (bnx2i)
Sep 24 12:20:08 edcnode1 kernel: iscsi: registered transport (tcp)
Sep 24 12:20:08 edcnode1 kernel: iscsi: registered transport (iser)
Sep 24 12:20:08 edcnode1 kernel: iscsi: registered transport (be2iscsi)
Sep 24 12:20:08 edcnode1 iscsid: iSCSI logger with pid=20558 started!
Sep 24 12:20:08 edcnode1 kernel: scsi0 : iSCSI Initiator over TCP/IP
Sep 24 12:20:08 edcnode1 kernel: scsi1 : iSCSI Initiator over TCP/IP
Sep 24 12:20:08 edcnode1 kernel: scsi2 : iSCSI Initiator over TCP/IP
Sep 24 12:20:09 edcnode1 kernel:   Vendor: OPNFILER  Model: VIRTUAL-DISK      Rev: 0
Sep 24 12:20:09 edcnode1 kernel:   Type:   Direct-Access                      ANSI SCSI revision: 04
Sep 24 12:20:09 edcnode1 kernel:   Vendor: OPNFILER  Model: VIRTUAL-DISK      Rev: 0
Sep 24 12:20:09 edcnode1 kernel:   Type:   Direct-Access                      ANSI SCSI revision: 04
Sep 24 12:20:09 edcnode1 kernel:   Vendor: OPNFILER  Model: VIRTUAL-DISK      Rev: 0
Sep 24 12:20:09 edcnode1 kernel:   Type:   Direct-Access                      ANSI SCSI revision: 04
Sep 24 12:20:09 edcnode1 kernel:   Vendor: OPNFILER  Model: VIRTUAL-DISK      Rev: 0
Sep 24 12:20:09 edcnode1 kernel:   Type:   Direct-Access                      ANSI SCSI revision: 04
Sep 24 12:20:09 edcnode1 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 0
Sep 24 12:20:09 edcnode1 kernel: scsi 1:0:0:0: Attached scsi generic sg1 type 0
Sep 24 12:20:09 edcnode1 kernel: scsi 2:0:0:0: Attached scsi generic sg2 type 0
Sep 24 12:20:09 edcnode1 kernel: scsi 1:0:0:1: Attached scsi generic sg3 type 0
Sep 24 12:20:09 edcnode1 kernel: SCSI device sda: 20971520 512-byte hdwr sectors (10737 MB)
Sep 24 12:20:09 edcnode1 kernel: sda: Write Protect is off
Sep 24 12:20:09 edcnode1 kernel: SCSI device sda: drive cache: write through
Sep 24 12:20:09 edcnode1 kernel: SCSI device sda: 20971520 512-byte hdwr sectors (10737 MB)
Sep 24 12:20:09 edcnode1 kernel: sda: Write Protect is off
Sep 24 12:20:09 edcnode1 kernel: SCSI device sda: drive cache: write through
Sep 24 12:20:09 edcnode1 kernel:  sda: unknown partition table
Sep 24 12:20:09 edcnode1 kernel: sd 0:0:0:0: Attached scsi disk sda

The output will continue with /dev/sdb and other devices exported by the filer.

Prepare the local Oracle Installation

Using fdisk, modify /dev/xvdb: create a partition spanning the whole disk and set its type to “8e” (Linux LVM). It’s always a good idea to install Oracle binaries into LVM; it makes later extension of a file system easier. I’ll add the fdisk output here for this device but won’t for later partitioning exercises.

[root@edcnode1 ~]# fdisk /dev/xvdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

The number of cylinders for this disk is set to 1305.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
 (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
 e   extended
 p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1305, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1305, default 1305):
Using default value 1305

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 8e
Changed system type of partition 1 to 8e (Linux LVM)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Once /dev/xvdb1 is ready, we need to start its transformation into a logical volume. First, a physical volume is to be created:

[root@edcnode1 ~]# pvcreate /dev/xvdb1
 Physical volume "/dev/xvdb1" successfully created

The physical volume (“PV”) is then used to form a volume group (“VG”). In real life, you’d probably have more than 1 PV to form a VG… I named my volume group “oracle_vg”. The existing volume group is called “root_vg” by the way.

[root@edcnode1 ~]# vgcreate oracle_vg /dev/xvdb1
 Volume group "oracle_vg" successfully create

Wonderful! I never quite remember how many extents this VG has, so I need to query it. When using --size 10g it will throw an error: some internal overhead reduces the available capacity to something just shy of 10G:

[root@edcnode1 ~]# vgdisplay oracle_vg
 --- Volume group ---
 VG Name               oracle_vg
 System ID
 Format                lvm2
 Metadata Areas        1
 Metadata Sequence No  1
 VG Access             read/write
 VG Status             resizable
 MAX LV                0
 Cur LV                0
 Open LV               0
 Max PV                0
 Cur PV                1
 Act PV                1
 VG Size               10.00 GB
 PE Size               4.00 MB
 Total PE              2559
 Alloc PE / Size       0 / 0
 Free  PE / Size       2559 / 10.00 GB
 VG UUID               QgHgnY-Kqsl-noAR-VLgP-UXcm-WADN-VdiwO7

Right, so now let’s create a logical volume (“LV”) with 2559 extents:

[root@edcnode1 ~]# lvcreate --extents 2559 --name grid_lv oracle_vg
 Logical volume "grid_lv" created

And now we need a file system:

[root@edcnode1 ~]# mkfs.ext3 /dev/oracle_vg/grid_lv

You are done! Create the mountpoint for your oracle installation, /u01/ in my case, and grant oracle:oinstall ownership to it. In this lab exercise I didn’t create a separate owner for the Grid Infrastructure to avoid potentially undiscovered problems in 11.2.0.2 and stretched RAC. Finally, add this to /etc/fstab to make it persistent:

[root@edcnode1 ~]# echo "/dev/oracle_vg/grid_lv   /u01   ext3 defaults 0 0" >> /etc/fstab
[root@edcnode1 ~]# mount /u01
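Don’t forget the ownership change mentioned above; run it once /u01 is mounted so that it applies to the new file system rather than the underlying directory:

[root@edcnode1 ~]# chown oracle:oinstall /u01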

Now continue to partition the iSCSI volumes, but don’t create file systems on top of them. You should not assign a partition type other than the default “Linux” to it either.

ASMLib

Yes, I know… the age-old argument, but I decided to use it anyway. The reason is simple: scsi_id doesn’t return a value in para-virtualised Linux, which makes it impossible to set up device name persistence with udev. And ASMLib is easier to use anyway! But if your system administrators are database-agnostic and not willing to learn the basics about ASM, then ASMLib is probably not a good idea to roll out. It’s only a matter of time until someone runs an “rpm -Uhv kernel*” on your box and of course a) didn’t tell the DBAs and b) didn’t bother applying the ASMLib kernel module. But I digress.

Before you are able to use ASMLib you have to configure it on each cluster node. A sample session could look like this:

[root@edcnode1 ~]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]:
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done
Dropping Oracle ASMLib disks:                              [  OK  ]
Shutting down the Oracle ASMLib driver:                    [  OK  ]
[root@edcnode1 ~]#

Now with this done, it is possible to create the ASMLib maintained ASM disks. For the LUNs presented by filer01, these will be:

  • ASM01FILER01
  • ASM02FILER01
  • OCR01FILER01
  • OCR02FILER01

The disks are created using the /etc/init.d/oracleasm createdisk command as in these examples:

[root@edcnode1 ~]# /etc/init.d/oracleasm createdisk asm01filer01 /dev/sda1
Marking disk "asm01filer01" as an ASM disk:                [  OK  ]
[root@edcnode1 ~]# /etc/init.d/oracleasm createdisk asm02filer01 /dev/sdc1
Marking disk "asm02filer01" as an ASM disk:                [  OK  ]
[root@edcnode1 ~]# /etc/init.d/oracleasm createdisk ocr01filer01 /dev/sdb1
Marking disk "ocr01filer01" as an ASM disk:                [  OK  ]
[root@edcnode1 ~]# /etc/init.d/oracleasm createdisk ocr02filer01 /dev/sdd1
Marking disk "ocr02filer01" as an ASM disk:                [  OK  ]

Switch over to the second node now to validate the configuration and to continue the configuration of the iSCSI LUNs from filer02. Define the domU with a similar configuration file as shown above for edcnode1, and start the domU. Once the wait for DHCP timeouts is over and you are presented with a login, set up the network as shown above. Install the iscsi initiator package, change the initiator name and discover the targets from filer02 in addition to those from filer01.
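
For reference, changing the initiator name boils down to editing a single file and restarting the service. The IQN below is made up; use whatever naming convention your filers expect:

[root@edcnode2 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2006-04.com.example:edcnode2
[root@edcnode2 ~]# service iscsi restart

With that in place, the discovery against both filers looked like this: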

[root@edcnode2 ~]# iscsiadm -t st -p 192.168.101.51 -m discovery
192.168.101.51:3260,1 iqn.2006-01.com.openfiler:asm02Filer02
192.168.101.51:3260,1 iqn.2006-01.com.openfiler:ocrvoteFiler02
192.168.101.51:3260,1 iqn.2006-01.com.openfiler:asm01Filer02
[root@edcnode2 ~]# iscsiadm -t st -p 192.168.101.50 -m discovery
192.168.101.50:3260,1 iqn.2006-01.com.openfiler:asm01Filer01
192.168.101.50:3260,1 iqn.2006-01.com.openfiler:ocrvoteFiler01
192.168.101.50:3260,1 iqn.2006-01.com.openfiler:asm02Filer01

Still on the second node, continue by logging in to the iSCSI targets:

[root@edcnode2 ~]# service iscsi start
iscsid (pid  2802) is running...
Setting up iSCSI targets: Logging in to [iface: default, target: iqn.2006-01.com.openfiler:asm02Filer02, portal: 192.168.101.51,3260]
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:ocrvoteFiler02, portal: 192.168.101.51,3260]
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:asm01Filer01, portal: 192.168.101.50,3260]
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:asm01Filer02, portal: 192.168.101.51,3260]
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:ocrvoteFiler01, portal: 192.168.101.50,3260]
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:asm02Filer01, portal: 192.168.101.50,3260]
Login to [iface: default, target: iqn.2006-01.com.openfiler:asm02Filer02, portal: 192.168.101.51,3260]: successful
Login to [iface: default, target: iqn.2006-01.com.openfiler:ocrvoteFiler02, portal: 192.168.101.51,3260]: successful
Login to [iface: default, target: iqn.2006-01.com.openfiler:asm01Filer01, portal: 192.168.101.50,3260]: successful
Login to [iface: default, target: iqn.2006-01.com.openfiler:asm01Filer02, portal: 192.168.101.51,3260]: successful
Login to [iface: default, target: iqn.2006-01.com.openfiler:ocrvoteFiler01, portal: 192.168.101.50,3260]: successful
Login to [iface: default, target: iqn.2006-01.com.openfiler:asm02Filer01, portal: 192.168.101.50,3260]: successful

Partition the disks from filer02 the same way as shown in the previous example. On edcnode2, fdisk reported the following as new disks

Disk /dev/sda doesn't contain a valid partition table
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sdf doesn't contain a valid partition table

Disk /dev/sda: 10.6 GB, 10670309376 bytes
Disk /dev/sdb: 2650 MB, 2650800128 bytes
Disk /dev/sdf: 10.7 GB, 10737418240 bytes

Note that /dev/sda and /dev/sdf are the 2 10G LUNs for ASM data, and /dev/sdb is the OCR/voting disk combination. Next, create the additional ASMLib disks:

[root@edcnode2 ~]# /etc/init.d/oracleasm scandisks
...
[root@edcnode2 ~]# /etc/init.d/oracleasm createdisk asm01filer02 /dev/sda1
Marking disk "asm01filer02" as an ASM disk:                [  OK  ]
[root@edcnode2 ~]# /etc/init.d/oracleasm createdisk asm02filer02 /dev/sdf1
Marking disk "asm02filer02" as an ASM disk:                [  OK  ]
[root@edcnode2 ~]# /etc/init.d/oracleasm createdisk ocr01filer02 /dev/sdb1
Marking disk "ocr01filer02" as an ASM disk:                [  OK  ]
[root@edcnode2 ~]# /etc/init.d/oracleasm listdisks
ASM01FILER01
ASM01FILER02
ASM02FILER01
ASM02FILER02
OCR01FILER01
OCR01FILER02
OCR02FILER01

Perform another scandisks command on edcnode1 so that it too sees all the disks:

[root@edcnode1 ~]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks:               [  OK  ]
[root@edcnode1 ~]# /etc/init.d/oracleasm listdisks
ASM01FILER01
ASM01FILER02
ASM02FILER01
ASM02FILER02
OCR01FILER01
OCR01FILER02
OCR02FILER01

Summary

All done! And to think I initially expected this to be a shorter post than the others; how wrong I was. Congratulations on having made it to the bottom of the article, by the way.

In the course of this post I prepared my virtual machines for the installation of Grid Infrastructure. The ASM disk names will be persistent across reboots thanks to ASMLib, with no messing around with udev for that matter. You might notice that there are 2 ASM disks from filer01 but only 1 from filer02 for the voting disk/OCR disk group, and that's for a reason. I'm being cheeky and won't tell you here; that's for another post later…

First contact with Oracle 11.2.0.2 RAC

As you may know, Oracle released the first patchset on top of 11g Release 2. At the time of this writing, the patchset is out for 32bit and 64bit Linux, and for 32bit and 64bit Solaris SPARC and Intel. What an interesting combination of platforms… I thought there was no Solaris 32bit on Intel anymore.

Upgrade

Oracle has come up with a fundamentally different approach to patching with this patchset. The long version can be found in MOS document 1189783.1 "Important Changes to Oracle Database Patch Sets Starting With 11.2.0.2". The short version is that new patchsets are supplied as full releases. This is really cool, and some people have asked why that wasn't always the case. In 10g Release 2, to get to the latest version with all the patches, you had to

  • Install the base release for Clusterware, ASM and at least one RDBMS home
  • Install the latest patchset on Clusterware, ASM and the RDBMS home
  • Apply the latest PSU for Clusterware/RDBMS, ASM and RDBMS

Applying the PSUs for Clusterware in particular was very labour intensive. In fact, for a fresh install it was usually easier to install and patch everything on just one node and then extend the patched software homes to the other nodes of the cluster.

Now in 11.2.0.2 things are different. You no longer have to apply any of the interim releases; the patchset contains everything you need, already at the correct version. The above process is shortened to:

  • Install Grid Infrastructure 11.2.0.2
  • Install RDBMS home 11.2.0.2

Optionally, apply PSUs or other patches when they become available. Currently, MOS note 756671.1 doesn’t list any patch as recommended on top of 11.2.0.2.

Interestingly, upgrading from 11.2.0.1 to 11.2.0.2 is more painful than upgrading from Oracle 10g, at least on the Linux platform. Before you can run rootupgrade.sh, the script checks whether you have applied the Grid Infrastructure PSU 11.2.0.1.2. OUI hadn't performed this test when it checked for prerequisites, which caught me off-guard. The casual observer may now ask: why do I have to apply a PSU when the bug fixes should be rolled up into the patchset anyway? I honestly don't have an answer, other than that if you are not on Linux you should be fine.

Grid Infrastructure will be an out-of-place upgrade, which means you have to manage your local disk space very carefully from now on. I would not use anything less than 50-75G for my Grid Infrastructure mount point. This takes the new Cluster Health Monitor facility (see below) into account, as well as the fact that Oracle performs log rotation for most logs in $GRID_HOME/log.

The RDBMS binaries can be patched either in-place or out-of-place. I'd say the out-of-place upgrade for the RDBMS binaries is wholeheartedly recommended, as it makes backing out a change so much easier. As I said, you don't have a choice for Grid Infrastructure, which is always out-of-place.

And then there is the multicast issue Julian Dyke (http://juliandyke.wordpress.com/) has written about. I couldn't reproduce the test case, and my lab and real-life clusters run happily on 11.2.0.2.

Changes to Grid Infrastructure

After the successful upgrade you’d be surprised to find new resources in Grid Infrastructure. Have a look at these:

[grid@node1] $ crsctl stat res -t -init
-----------------------------------------------------------------
NAME           TARGET  STATE        SERVER          STATE_DETAILS
-----------------------------------------------------------------
Cluster Resources
-----------------------------------------------------------------
ora.asm
 1        ONLINE  ONLINE       node1           Started
ora.cluster_interconnect.haip
 1        ONLINE  ONLINE       node1
ora.crf
 1        ONLINE  ONLINE       node1
ora.crsd
 1        ONLINE  ONLINE       node1
ora.cssd
 1        ONLINE  ONLINE       node1
ora.cssdmonitor
 1        ONLINE  ONLINE       node1
ora.ctssd
 1        ONLINE  ONLINE       node1           OBSERVER
ora.diskmon
 1        ONLINE  ONLINE       node1
ora.drivers.acfs
 1        ONLINE  ONLINE       node1
ora.evmd
 1        ONLINE  ONLINE       node1
ora.gipcd
 1        ONLINE  ONLINE       node1
ora.gpnpd
 1        ONLINE  ONLINE       node1
ora.mdnsd
 1        ONLINE  ONLINE       node1

The cluster_interconnect.haip resource is yet another step towards the self-contained system. The Grid Infrastructure installation guide for Linux states:

“With Redundant Interconnect Usage, you can identify multiple interfaces to use for the cluster private network, without the need of using bonding or other technologies. This functionality is available starting with Oracle Database 11g Release 2 (11.2.0.2).”

So – good news for anyone relying on third-party software such as HP ServiceGuard for network bonding. Linux has always done this for you, even in the days of the 2.4 kernel, and Linux network bonding is actually quite simple to set up. Anyway, I'll run a few tests in the lab when I have time, deliberately taking down NICs to see whether the new feature works as labelled on the tin. The documentation states that you don't need to bond the NICs for the private interconnect: simply leave the ethX devices (or whatever your NICs are called on your OS) as they are, and mark the ones you would like to use for the private interconnect as private during the installation. If you decide to add a NIC for use with the private interconnect later, use oifcfg as root to add the new interface, as sketched below (or watch this space for a later blog post on this). Oracle states that if one of the private interconnects fails, it will transparently switch to another one. In addition to the high availability benefit, Oracle apparently also performs load balancing across the configured interconnects.
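
For the record, the oifcfg incantation I have in mind looks roughly like the lines below. Interface name and subnet are placeholders for whatever your system uses, so take this as a sketch only:

[root@node1 ~]# oifcfg getif
[root@node1 ~]# oifcfg setif -global eth3/192.168.52.0:cluster_interconnect

The getif call lists the interfaces currently registered with the cluster; setif registers the additional NIC as a cluster_interconnect interface across all nodes.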

To learn more about the redundant interconnect feature I had a glance at its profile. As with any resource in the lower stack (or HA stack), you need to append the “-init” argument to crsctl.

[oracle@node1] $ crsctl stat res ora.cluster_interconnect.haip -p -init
NAME=ora.cluster_interconnect.haip
TYPE=ora.haip.type
ACL=owner:root:rw-,pgrp:oinstall:rw-,other::r--,user:grid:r-x
ACTION_FAILURE_TEMPLATE=
ACTION_SCRIPT=
ACTIVE_PLACEMENT=0
AGENT_FILENAME=%CRS_HOME%/bin/orarootagent%CRS_EXE_SUFFIX%
AUTO_START=always
CARDINALITY=1
CHECK_INTERVAL=30
DEFAULT_TEMPLATE=
DEGREE=1
DESCRIPTION="Resource type for a Highly Available network IP"
ENABLED=1
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
HOSTING_MEMBERS=
LOAD=1
LOGGING_LEVEL=1
NOT_RESTARTING_TEMPLATE=
OFFLINE_CHECK_INTERVAL=0
PLACEMENT=balanced
PROFILE_CHANGE_TEMPLATE=
RESTART_ATTEMPTS=5
SCRIPT_TIMEOUT=60
SERVER_POOLS=
START_DEPENDENCIES=hard(ora.gpnpd,ora.cssd)pullup(ora.cssd)
START_TIMEOUT=60
STATE_CHANGE_TEMPLATE=
STOP_DEPENDENCIES=hard(ora.cssd)
STOP_TIMEOUT=0
UPTIME_THRESHOLD=1m
USR_ORA_AUTO=
USR_ORA_IF=
USR_ORA_IF_GROUP=cluster_interconnect
USR_ORA_IF_THRESHOLD=20
USR_ORA_NETMASK=
USR_ORA_SUBNET=

With this information at hand, we see that the resource is controlled through ORAROOTAGENT, and judging from the start sequence position and the fact that we queried crsctl with the “-init” flag, it must be OHASD’s ORAROOTAGENT.

Indeed, there are references to it in the $GRID_HOME/log/`hostname -s`/agent/ohasd/orarootagent_root/ directory. Further references to the resource can be found in cssd.log, which makes perfect sense: CSSD will use it for many things, not least fencing.

[ USRTHRD][1122056512] {0:0:2} HAIP: configured to use 1 interfaces
...
[ USRTHRD][1122056512] {0:0:2} HAIP:  Updating member info HAIP1;192.168.52.0#0
[ USRTHRD][1122056512] {0:0:2} InitializeHaIps[ 0]  infList 'inf bond1, ip 192.168.52.155, sub 192.168.52.0'
[ USRTHRD][1122056512] {0:0:2} HAIP:  starting inf 'bond1', suggestedIp '169.254.79.209', assignedIp ''
[ USRTHRD][1122056512] {0:0:2} Thread:[NetHAWork]start {
[ USRTHRD][1122056512] {0:0:2} Thread:[NetHAWork]start }
[ USRTHRD][1089194304] {0:0:2} [NetHAWork] thread started
[ USRTHRD][1089194304] {0:0:2}  Arp::sCreateSocket {
[ USRTHRD][1089194304] {0:0:2}  Arp::sCreateSocket }
[ USRTHRD][1089194304] {0:0:2} Starting Probe for ip 169.254.79.209
[ USRTHRD][1089194304] {0:0:2} Transitioning to Probe State
[ USRTHRD][1089194304] {0:0:2}  Arp::sProbe {
[ USRTHRD][1089194304] {0:0:2} Arp::sSend:  sending type 1
[ USRTHRD][1089194304] {0:0:2}  Arp::sProbe }
...
[ USRTHRD][1122056512] {0:0:2} Completed 1 HAIP assignment, start complete
[ USRTHRD][1122056512] {0:0:2} USING HAIP[  0 ]:  bond1 - 169.254.79.209
[ora.cluster_interconnect.haip][1117854016] {0:0:2} [start] clsn_agent::start }
[    AGFW][1117854016] {0:0:2} Command: start for resource: ora.cluster_interconnect.haip 1 1 completed with status: SUCCESS
[    AGFW][1119955264] {0:0:2} Agent sending reply for: RESOURCE_START[ora.cluster_interconnect.haip 1 1] ID 4098:343
[    AGFW][1119955264] {0:0:2} ora.cluster_interconnect.haip 1 1 state changed from: STARTING to: ONLINE
[    AGFW][1119955264] {0:0:2} Started implicit monitor for:ora.cluster_interconnect.haip 1 1
[    AGFW][1119955264] {0:0:2} Agent sending last reply for: RESOURCE_START[ora.cluster_interconnect.haip 1 1] ID 4098:343

OK, I now understand this a bit better. But the log information mentioned something else as well: an IP address that I haven't assigned to the cluster. It turns out that this IP address is another virtual IP on the private interconnect, called bond1:1

[grid]grid@node1 $ /sbin/ifconfig
bond1     Link encap:Ethernet  HWaddr 00:23:7D:3d:1E:77
 inet addr:192.168.52.155  Bcast:192.168.52.255  Mask:255.255.255.0
 inet6 addr: fe80::223:7dff:fe3c:1e74/64 Scope:Link
 UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
 RX packets:33155040 errors:0 dropped:0 overruns:0 frame:0
 TX packets:20677269 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:0
 RX bytes:21234994775 (19.7 GiB)  TX bytes:10988689751 (10.2 GiB)
bond1:1   Link encap:Ethernet  HWaddr 00:23:7D:3d:1E:77
 inet addr:169.254.79.209  Bcast:169.254.255.255  Mask:255.255.0.0
 UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1

Ah, something running multicast. I tried to sniff that traffic but couldn't make any sense of it. There is UDP (not TCP) multicast traffic on that interface. This can be checked with tcpdump:

[root@node1 ~]# tcpdump src 169.254.79.209 -i bond1:1 -c 10  -s 1514
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on bond1:1, link-type EN10MB (Ethernet), capture size 1514 bytes
14:30:18.704688 IP 169.254.79.209.55310 > 169.254.228.144.31112: UDP, length 252
14:30:18.704943 IP 169.254.79.209.55310 > 169.254.169.62.20057: UDP, length 252
14:30:18.705155 IP 169.254.79.209.55310 > 169.254.45.135.30040: UDP, length 252
14:30:18.895764 IP 169.254.79.209.51227 > 169.254.228.144.57323: UDP, length 192
14:30:18.895976 IP 169.254.79.209.51227 > 169.254.228.144.21319: UDP, length 296
14:30:18.897109 IP 169.254.79.209.48094 > 169.254.45.135.40464: UDP, length 192
14:30:18.897633 IP 169.254.79.209.48094 > 169.254.45.135.40464: UDP, length 192
14:30:18.897998 IP 169.254.79.209.48094 > 169.254.169.62.48215: UDP, length 192
14:30:18.902325 IP 169.254.79.209.51227 > 169.254.228.144.57323: UDP, length 192
14:30:18.902422 IP 169.254.79.209.51227 > 169.254.228.144.21319: UDP, length 296
10 packets captured
14 packets received by filter
0 packets dropped by kernel

If you are interested in the actual messages, use this command instead to capture a packet:

[root@node1 ~]# tcpdump src 169.254.79.209 -i bond1:1 -c 1 -X -s 1514
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on bond1:1, link-type EN10MB (Ethernet), capture size 1514 bytes
14:31:43.396614 IP 169.254.79.209.58803 > 169.254.169.62.16178: UDP, length 192
 0x0000:  4500 00dc 0000 4000 4011 ed04 a9fe 4fd1  E.....@.@.....O.
 0x0010:  a9fe a93e e5b3 3f32 00c8 4de6 0403 0201  ...>..?2..M.....
 0x0020:  e403 0000 0000 0000 4d52 4f4e 0003 0000  ........MRON....
 0x0030:  0000 0000 4d4a 9c63 0000 0000 0000 0000  ....MJ.c........
 0x0040:  0000 0000 0000 0000 0000 0000 0000 0000  ................
 0x0050:  a9fe 4fd1 4d39 0000 0000 0000 0000 0000  ..O.M9..........
 0x0060:  e403 0000 0000 0000 0100 0000 0000 0000  ................
 0x0070:  5800 0000 ff7f 0000 d0ff b42e 0f2b 0000  X............+..
 0x0080:  a01e 770d 0403 0201 0b00 0000 67f2 434c  ..w.........g.CL
 0x0090:  0000 0000 b1aa 0500 0000 0000 cf0f 3813  ..............8.
 0x00a0:  0000 0000 0400 0000 0000 0000 a1aa 0500  ................
 0x00b0:  0000 0000 0000 ae2a 644d 6026 0000 0000  .......*dM`&....
 0x00c0:  0000 0000 0000 0000 0000 0000 0000 0000  ................
 0x00d0:  0000 0000 0000 0000 0000 0000            ............
1 packets captured
10 packets received by filter
0 packets dropped by kernel

Substitute the correct values for interface and source address, of course.

Oracle CRF resources

Another interesting new feature is the CRF resource, which seems to be an implementation of the IPD/OS Cluster Health Monitor on the cluster nodes. I need to dig a little deeper into this feature; currently I can't get any configuration data from the cluster:

[grid@node1] $ oclumon showobjects

 Following nodes are attached to the loggerd
[grid@node1] $

You will see some additional background processes now, namely ologgerd and osysmond.bin, which are started through the CRF resource. The resource profile (shown below) suggests that this resource is started through OHASD’s ORAROOTAGENT and can take custom logging levels.

[grid]grid@node1 $ crsctl stat res ora.crf -p -init
NAME=ora.crf
TYPE=ora.crf.type
ACL=owner:root:rw-,pgrp:oinstall:rw-,other::r--,user:grid:r-x
ACTION_FAILURE_TEMPLATE=
ACTION_SCRIPT=
ACTIVE_PLACEMENT=0
AGENT_FILENAME=%CRS_HOME%/bin/orarootagent%CRS_EXE_SUFFIX%
AUTO_START=always
CARDINALITY=1
CHECK_ARGS=
CHECK_COMMAND=
CHECK_INTERVAL=30
CLEAN_ARGS=
CLEAN_COMMAND=
DAEMON_LOGGING_LEVELS=CRFMOND=0,CRFLDREP=0,...,CRFM=0
DAEMON_TRACING_LEVELS=CRFMOND=0,CRFLDREP=0,...,CRFM=0
DEFAULT_TEMPLATE=
DEGREE=1
DESCRIPTION="Resource type for Crf Agents"
DETACHED=true
ENABLED=1
FAILOVER_DELAY=0
FAILURE_INTERVAL=3
FAILURE_THRESHOLD=5
HOSTING_MEMBERS=
LOAD=1
LOGGING_LEVEL=1
NOT_RESTARTING_TEMPLATE=
OFFLINE_CHECK_INTERVAL=0
ORA_VERSION=11.2.0.2.0
PID_FILE=
PLACEMENT=balanced
PROCESS_TO_MONITOR=
PROFILE_CHANGE_TEMPLATE=
RESTART_ATTEMPTS=5
SCRIPT_TIMEOUT=60
SERVER_POOLS=
START_ARGS=
START_COMMAND=
START_DEPENDENCIES=hard(ora.gpnpd)
START_TIMEOUT=120
STATE_CHANGE_TEMPLATE=
STOP_ARGS=
STOP_COMMAND=
STOP_DEPENDENCIES=hard(shutdown:ora.gipcd)
STOP_TIMEOUT=120
UPTIME_THRESHOLD=1m
USR_ORA_ENV=

An investigation of orarootagent_root.log revealed that the root agent does indeed start the CRF resource. This resource in turn starts the ologgerd and osysmond processes, which write their log files into $GRID_HOME/log/`hostname -s`/crf{logd,mond}.

Configuration of the daemons can be found in $GRID_HOME/ologgerd/init and $GRID_HOME/osysmond/init. Except for the daemons' PID files there didn't seem to be anything of value in these directories.

The command line of the ologgerd process shows its configuration options:

root 13984 1 0 Oct15 ? 00:04:00 /u01/crs/11.2.0.2/bin/ologgerd -M -d /u01/crs/11.2.0.2/crf/db/node1

The files in the directory specified by the "-d" flag are where the process stores its logging information. They are in BDB format, that is Berkeley DB (now owned by Oracle as well). The oclumon tool should be able to read these files, but until I can persuade it to connect to the host there is no output.
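
For completeness, these are the oclumon commands I intend to try once the tool talks to the loggerd. The syntax is from memory, so double-check it against oclumon -h before relying on it:

[grid@node1] $ oclumon manage -get MASTER
[grid@node1] $ oclumon dumpnodeview -allnodes -last "00:05:00"

The first should report which node hosts the master loggerd, the second should dump the node metrics collected over the last five minutes.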

CVU

Unlike the previous resources, the cvu resource is actually cluster-aware. It's the Cluster Verification Utility we all know from installing RAC. Going by the profile (shown below), I conclude that the utility is run through the grid software owner's scriptagent and has exactly one incarnation in the cluster. It is only executed every 6 hours and restarted if it fails. If you'd like to run a manual check, simply execute the action script with the command line argument "check".

[root@node1 tmp]# crsctl stat res ora.cvu -p
NAME=ora.cvu
TYPE=ora.cvu.type
ACL=owner:grid:rwx,pgrp:oinstall:rwx,other::r--
ACTION_FAILURE_TEMPLATE=
ACTION_SCRIPT=%CRS_HOME%/bin/cvures%CRS_SCRIPT_SUFFIX%
ACTIVE_PLACEMENT=1
AGENT_FILENAME=%CRS_HOME%/bin/scriptagent
AUTO_START=restore
CARDINALITY=1
CHECK_INTERVAL=21600
CHECK_RESULTS=
CHECK_TIMEOUT=600
DEFAULT_TEMPLATE=
DEGREE=1
DESCRIPTION=Oracle CVU resource
ENABLED=1
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
HOSTING_MEMBERS=
LOAD=1
LOGGING_LEVEL=1
NLS_LANG=
NOT_RESTARTING_TEMPLATE=
OFFLINE_CHECK_INTERVAL=0
PLACEMENT=balanced
PROFILE_CHANGE_TEMPLATE=
RESTART_ATTEMPTS=5
SCRIPT_TIMEOUT=600
SERVER_POOLS=*
START_DEPENDENCIES=hard(ora.net1.network)
START_TIMEOUT=0
STATE_CHANGE_TEMPLATE=
STOP_DEPENDENCIES=hard(ora.net1.network)
STOP_TIMEOUT=0
TYPE_VERSION=1.1
UPTIME_THRESHOLD=1h
USR_ORA_ENV=
VERSION=11.2.0.2.0

The action script $GRID_HOME/bin/cvures implements the usual callbacks required by scriptagent: start(), stop(), check(), clean(), abort(). All log information goes into $GRID_HOME/log/`hostname -s`/cvu.
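
In other words, a manual check should boil down to something like this, run as the Grid Infrastructure software owner (a sketch only; the path obviously depends on your installation):

[grid@node1] $ $GRID_HOME/bin/cvures check

The result ends up in the cvu log directory mentioned above.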

The actual check performed is this one: $GRID_HOME/bin/cluvfy comp health -_format & > /dev/null 2>&1

Summary

Enough for now, this has become a far longer post than I initially anticipated. There are so many more new things around, such as Quality of Service Management, that need exploring, making it very difficult to keep up.

Build your own 11.2.0.2 stretched RAC

Finally time for a new series! With the arrival of the 11.2.0.2 patchset I thought it was about time to try and set up a virtual 11.2.0.2 extended distance, or stretched, RAC. So, it's virtual, fair enough. It doesn't allow me to test things like the impact of latency on the inter-SAN communication, but it did allow me to test the general setup. Think of this series as a guide for after all the tedious work has been done and the SANs happily talk to each other. The example requires some understanding of how Xen virtualisation works, and it's tailored to openSuSE 11.2 as the dom0 or "host". I have tried Oracle VM in the past, but back then a domU (or virtual machine) could not mount an iSCSI target without a kernel panic and reboot. Clearly not what I needed at the time. openSuSE has another advantage: it uses a recent kernel, not the 3-year-old 2.6.18 you find in the Enterprise distributions. Also, Xen is recent (openSuSE 11.3 even features Xen 4.0!) and so is libvirt.

The Setup

The general idea follows the design you find in the field, but with fewer cluster nodes. I am thinking of 2 nodes for the cluster plus 2 iSCSI target providers. I wouldn't use iSCSI in the real world, but my lab isn't connected to an EVA or similar. A third site will provide quorum via an NFS-based voting disk.

Site A will consist of filer01 for the storage part and edcnode1 as the RAC node. Site B will consist of filer02 and edcnode2. The iSCSI targets are going to be provided by openFiler domU installations, and the cluster nodes will run Oracle Enterprise Linux 5 update 5. To make it more realistic, site C will consist of another openFiler instance, filer03, which provides the NFS export for the third voting disk. Note that openFiler seems to support NFS v3 only at the time of this writing. All systems are 64bit.
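
As an aside, the NFS export for the third voting disk needs to be mounted with fairly specific options on the cluster nodes. The line below is only a sketch based on my reading of the Oracle paper on NFS-hosted voting files; verify the options against the current documentation before using them, and note that the export and mount paths are made up:

[root@edcnode1 ~]# mount -t nfs -o rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,noac,vers=3,timeo=600 filer03:/mnt/voting /voting

More on this in the article about adding the third voting disk.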

The network connectivity will go through 3 virtual switches, all “host only” on my dom0.

  • Public network: 192.168.99/24
  • Private network: 192.168.100/24
  • Storage network: 192.168.101/24

As in the real world, the private and storage networks have to be separated to prevent iSCSI packets from clashing with Cache Fusion traffic. I also increased the MTU of the private and storage networks to 9000 instead of the default 1500, as shown below. If you'd like to use jumbo frames, check whether your switch supports them.
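
On Oracle Enterprise Linux the MTU change itself is quick; a sketch for the private NIC, assuming it is called eth1 inside the domU:

[root@edcnode1 ~]# ip link set dev eth1 mtu 9000
[root@edcnode1 ~]# echo "MTU=9000" >> /etc/sysconfig/network-scripts/ifcfg-eth1

The first command changes the running interface, the second makes the setting persistent across reboots.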

Grid Infrastructure will use ASM to store OCR and voting disks, and the inter-SAN replication will also be performed by ASM, using normal redundancy. I am planning to use preferred mirror read and intelligent data placement to see if they make a difference.
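
Preferred mirror read is configured per ASM instance through an initialisation parameter. The names below (disk group DATA, failure groups SITEA and SITEB) are placeholders for whatever the final setup ends up using, so treat this as a sketch:

[oracle@edcnode1 ~]$ sqlplus / as sysasm <<EOF
alter system set asm_preferred_read_failure_groups='DATA.SITEA' sid='+ASM1';
alter system set asm_preferred_read_failure_groups='DATA.SITEB' sid='+ASM2';
EOF

Each ASM instance is told to prefer reads from the failure group in its own site.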

Known limitations

This setup has some limitations, such as the following ones:

  • You cannot test inter-site SAN connectivity problems
  • You cannot make use of udev for the ASM devices-a xen domU doesn’t report anything back from /sbin/scsi_id which makes the mapping to /dev/mapper impossible (maybe someone knows a workaround?)
  • Network interfaces are not bonded-you certainly would use bonded NICs in real life
  • No “real” fibre channel connectivity between the cluster nodes

So much for the introduction-I’ll post the setup step-by-step. The intended series will consist of these articles:

  1. Introduction to XEN on openSuSE 11.2 and dom0 setup
  2. Introduction to openFiler and their installation as a virtual machine
  3. Setting up the cluster nodes
  4. Installing Grid Infrastructure 11.2.0.2
  5. Adding third voting disk on NFS
  6. Installing RDBMS binaries
  7. Creating a database

That’s it for today, I hope I got you interested in following the series. It’s been real fun doing it; now it’s all about writing it up.

UKOUG RAC&HA SIG September 2010

Just a quick one to announce that I’ll be presenting at said event. Here’s the short synopsis of my talk:

Upgrading to Oracle Real Application Clusters 11.2

With the end of premier support for Oracle 10g in sight in mid-2011, many businesses are starting to look at possible upgrade paths. With the majority of RAC systems deployed on Oracle 10g, there is strong demand to upgrade these systems to 11.2. The presentation focuses on the different upgrade paths, including Grid Infrastructure and the RDBMS. Alternative approaches to upgrading the software will be discussed as well. Experience from migrations performed at a large financial institution rounds the presentation off.

The renamedg command revisited-ASMLib

I have already written about the renamedg command, but have since fallen in love with ASMLib. The use of ASMLib introduces a few caveats you should be aware of.

USAGE NOTES

This document presents research I performed with ASM in a lab environment. It should be applicable to any environment, but you should NOT use this in production: the renamedg command is still buggy, and you should not mess with ASM disk headers on an important system such as production or staging/UAT. You decide what counts as important here! The recommended setup for cloning disk groups is to use a Data Guard physical standby database on a different storage array to create a real-time copy of your production database on that array. Again, do not use your production array for this!

Walking through a renamedg session

Oracle ASMLib introduces a new value in the ASM disk header, the provider string, as the following example shows:

[root@asmtest ~]# kfed read /dev/oracleasm/disks/VOL1 | grep prov
kfdhdb.driver.provstr:     ORCLDISKVOL1 ; 0x000: length=12

This can be verified with ASMLib:

[root@asmtest ~]# /etc/init.d/oracleasm querydisk /dev/xvdc1
Device "/dev/xvdc1" is marked an ASM disk with the label "VOL1"

The prefix “ORCLDISK” is automatically added by ASMLib and cannot easily be changed.

The problem with ASMLib is that the renamedg command does NOT update the provider string, which I’ll illustrate by walking through an example session. Disk group "DATA", set up with external redundancy and two disks, DATA1 and DATA2, is to be cloned to "DATACLONE".

The renamedg command requires the disk group you are about to clone to be stopped. To prevent nasty surprises, you should stop the databases using that disk group manually first.

[grid@rac11gr2drnode1 ~]$ srvctl stop database -d dev
[grid@rac11gr2drnode1 ~]$ ps -ef | grep smon
grid      3424     1  0 Aug07 ?        00:00:00 asm_smon_+ASM1
grid     17909 17619  0 15:13 pts/0    00:00:00 grep smon
[grid@rac11gr2drnode1 ~]$ srvctl stop diskgroup -g data
[grid@rac11gr2drnode1 ~]$

You can use asmcmd’s new "lsof" command to check for open files:

ASMCMD> lsof
DB_Name  Instance_Name  Path
+ASM     +ASM1          +ocrvote.255.4294967295
asmvol   +ASM1          +acfsdg/APACHEVOL.256.724157197
asmvol   +ASM1          +acfsdg/DRL.257.724157197
ASMCMD>

So apart from files in other disk groups no files are open, and in particular nothing refers to disk group DATA.

Now comes the part where you copy the LUNs, and this entirely depends on your system. The EVA series of storage arrays I worked with in this particular project offered a “snapclone” function, which used COW to create an identical copy of the source LUN, with a new WWID (which can be an input parameter to the snapclone call). When you are using device-mapper-multipath then ensure that your sys admins add the newly created LUNs to the /etc/multipath.conf file on all cluster nodes!
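
A multipath.conf stanza for one of the clones could look like the fragment below; the WWID and alias are of course made up and need to match whatever your array reports:

multipaths {
    multipath {
        wwid   360001fe15005d6f000090992565400aa
        alias  oradata1_clone
    }
}

A meaningful alias under /dev/mapper makes it a lot easier to tell the clones apart from the originals later on.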

I am using Xen in my lab, which makes it simpler: all I need to do is copy the disk containers on the dom0 and then add the new block devices to the running domUs ("virtual machines" in Xen language). This can be done easily, as the following example shows:

Usage: xm block-attach <Domain> <BackDev> <FrontDev> <Mode> [BackDomain]

xm block-attach rac11gr2drnode1 file:/var/lib/xen/images/rac11gr2drShared/oradata1.clone xvdg w!
xm block-attach rac11gr2drnode2 file:/var/lib/xen/images/rac11gr2drShared/oradata1.clone xvdg w!

xm block-attach rac11gr2drnode1 file:/var/lib/xen/images/rac11gr2drShared/oradata2.clone xvdh w!
xm block-attach rac11gr2drnode2 file:/var/lib/xen/images/rac11gr2drShared/oradata2.clone xvdh w!

In the example, rac11gr2drnode{1,2} are the domUs, the backend device is the copied file on the file system, the frontend device in the domU is xvd{g,h}, and the mode is read/write, shareable. The exclamation mark here is crucial; without it the second domU can’t attach the new block device, as it is already exclusively attached to another domU.

The fdisk command in my example immediately "sees" the new LUNs; with device-mapper multipathing you might have to go through a few iterations of restarting multipathd and discovering partitions using kpartx, as sketched below. It is again very important to have all disks presented to all cluster nodes!
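
On a multipathed system the iteration I have in mind looks roughly like this; the alias refers back to the multipath.conf fragment above and is only an example:

[root@rac11gr2drnode1 ~]# service multipathd restart
[root@rac11gr2drnode1 ~]# multipath -ll
[root@rac11gr2drnode1 ~]# kpartx -a /dev/mapper/oradata1_clone

multipath -ll should list the new LUNs with their aliases, and kpartx -a creates the partition mappings underneath /dev/mapper.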

Here’s the sample output from my system:

[root@rac11gr2drnode1 ~]# fdisk -l | grep Disk | sort
Disk /dev/xvda: 4294 MB, 4294967296 bytes
Disk /dev/xvdb: 16.1 GB, 16106127360 bytes
Disk /dev/xvdc: 5368 MB, 5368709120 bytes
Disk /dev/xvdd: 16.1 GB, 16106127360 bytes
Disk /dev/xvde: 16.1 GB, 16106127360 bytes
Disk /dev/xvdf: 10.7 GB, 10737418240 bytes
Disk /dev/xvdg: 16.1 GB, 16106127360 bytes
Disk /dev/xvdh: 16.1 GB, 16106127360 bytes

I cloned /dev/xvdd and /dev/xvde to /dev/xvdg and /dev/xvdh.

Do NOT run /etc/init.d/oracleasm scandisks yet! Otherwise the renamedg command will complain about duplicate disk names, which is entirely reasonable.

I dumped the headers of disks /dev/xvd{d,e,g,h}1 to /tmp to be able to compare them later.

[root@rac11gr2drnode1 ~]# kfed read /dev/xvdd1 > /tmp/xvdd1.header
# repeat with the other disks
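
The "repeat" part lends itself to a little loop; a sketch, assuming the partitions are named as in this example:

[root@rac11gr2drnode1 ~]# for d in xvdd1 xvde1 xvdg1 xvdh1; do kfed read /dev/$d > /tmp/$d.header; done

This leaves one header dump per disk in /tmp for later comparison.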

Start with phase one of the renamedg command:

[root@rac11gr2drnode1 ~]# renamedg phase=one dgname=DATA newdgname=DATACLONE \
> confirm=true verbose=true config=/tmp/cfg

Parsing parameters..

Parameters in effect:

 Old DG name       : DATA
 New DG name          : DATACLONE
 Phases               :
 Phase 1
 Discovery str        : (null)
 Confirm            : TRUE
 Clean              : TRUE
 Raw only           : TRUE
renamedg operation: phase=one dgname=DATA newdgname=DATACLONE confirm=true
  verbose=true config=/tmp/cfg
Executing phase 1
Discovering the group
Performing discovery with string:
Identified disk ASM:/opt/oracle/extapi/64/asm/orcl/1/libasm.so:ORCL:DATA1 with
  disk number:0 and timestamp (32940276 1937075200)
Identified disk ASM:/opt/oracle/extapi/64/asm/orcl/1/libasm.so:ORCL:DATA2 with
  disk number:1 and timestamp (32940276 1937075200)
Checking for hearbeat...
Re-discovering the group
Performing discovery with string:
Identified disk ASM:/opt/oracle/extapi/64/asm/orcl/1/libasm.so:ORCL:DATA1 with
  disk number:0 and timestamp (32940276 1937075200)
Identified disk ASM:/opt/oracle/extapi/64/asm/orcl/1/libasm.so:ORCL:DATA2 with
  disk number:1 and timestamp (32940276 1937075200)
Checking if the diskgroup is mounted
Checking disk number:0
Checking disk number:1
Checking if diskgroup is used by CSS
Generating configuration file..
Completed phase 1
Terminating kgfd context 0x2b7a2fbac0a0
[root@rac11gr2drnode1 ~]#

You should always check "$?" for errors; the message "terminating kgfd context" sounds bad, but isn’t. At the end of phase one there is no change to the header. That only happens in phase two:

[root@rac11gr2drnode1 ~]# renamedg phase=two dgname=DATA newdgname=DATACLONE config=/tmp/cfg

Parsing parameters..
renamedg operation: phase=two dgname=DATA newdgname=DATACLONE config=/tmp/cfg
Executing phase 2
Completed phase 2

Now there are changes:

[root@rac11gr2drnode1 tmp]# grep DATA *header
xvdd1.header:kfdhdb.driver.provstr:    ORCLDISKDATA1 ; 0x000: length=13
xvdd1.header:kfdhdb.dskname:                   DATA1 ; 0x028: length=5
xvdd1.header:kfdhdb.grpname:               DATACLONE ; 0x048: length=9
xvdd1.header:kfdhdb.fgname:                    DATA1 ; 0x068: length=5
xvde1.header:kfdhdb.driver.provstr:    ORCLDISKDATA2 ; 0x000: length=13
xvde1.header:kfdhdb.dskname:                   DATA2 ; 0x028: length=5
xvde1.header:kfdhdb.grpname:               DATACLONE ; 0x048: length=9
xvde1.header:kfdhdb.fgname:                    DATA2 ; 0x068: length=5
xvdg1.header:kfdhdb.driver.provstr:    ORCLDISKDATA1 ; 0x000: length=13
xvdg1.header:kfdhdb.dskname:                   DATA1 ; 0x028: length=5
xvdg1.header:kfdhdb.grpname:                    DATA ; 0x048: length=4
xvdg1.header:kfdhdb.fgname:                    DATA1 ; 0x068: length=5
xvdh1.header:kfdhdb.driver.provstr:    ORCLDISKDATA2 ; 0x000: length=13
xvdh1.header:kfdhdb.dskname:                   DATA2 ; 0x028: length=5
xvdh1.header:kfdhdb.grpname:                    DATA ; 0x048: length=4
xvdh1.header:kfdhdb.fgname:                    DATA2 ; 0x068: length=5

Although the original disks (/dev/xvdd1 and /dev/xvde1) had their disk group name changed, the provider string remained untouched. So if we were to issue a scandisks command through /etc/init.d/oracleasm now, there would still be duplicate disk names. This is a bug in my opinion, and a bad thing.

Renaming the disks is straightforward; the difficult bit is finding out which ones have to be renamed. Again, you can use kfed to figure that out, as sketched below. I knew the disks to be renamed were /dev/xvdd1 and /dev/xvde1 after consulting the header information.
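
A quick way to narrow it down is to grep the provider string and group name across the candidate partitions; again just a sketch, so adjust the device pattern to your system:

[root@rac11gr2drnode1 tmp]# for d in /dev/xvd[degh]1; do echo $d; kfed read $d | egrep 'provstr|grpname'; done

The disks whose grpname already says DATACLONE but whose provstr still carries the old DATA label are the ones that need force-renamedisk.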

[root@rac11gr2drnode1 tmp]# /etc/init.d/oracleasm force-renamedisk /dev/xvdd1 DATACLONE1
Renaming disk "/dev/xvdd1" to "DATACLONE1":                [  OK  ]
[root@rac11gr2drnode1 tmp]# /etc/init.d/oracleasm force-renamedisk /dev/xvde1 DATACLONE2
Renaming disk "/dev/xvde1" to "DATACLONE2":                [  OK  ]

I then performed a scandisks operation on all nodes just to be sure… I had corruption of the disk group before :)

[root@rac11gr2drnode1 tmp]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks:               [  OK  ]
[root@rac11gr2drnode1 tmp]#

[root@rac11gr2drnode2 ~]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks:               [  OK  ]
[root@rac11gr2drnode2 ~]#

The output on all cluster nodes should be identical, on my system I found the following disks:

[root@rac11gr2drnode1 tmp]# /etc/init.d/oracleasm listdisks
ACFS1
ACFS2
ACFS3
ACFS4
DATA1
DATA2
DATACLONE1
DATACLONE2
VOL1
VOL2
VOL3
VOL4
VOL5

Sure enough, the cloned disks were present. Although everything seemed OK at this point, I could not start disk group DATA and had to reboot the cluster nodes to rectify the problem. Maybe there is some not-so-transient information about ASM disks stored somewhere. After the reboot, CRS started my database correctly, and with all dependent resources:

[oracle@rac11gr2drnode1 ~]$ srvctl status database -d dev
Instance dev1 is running on node rac11gr2drnode1
Instance dev2 is running on node rac11gr2drnode2

Where are the logs for the SCAN listeners?

Quick post and note to self: where are the SCAN listener log files? A little bit of troubleshooting was required, but I guess I could have read the manuals too. In the end it turned out to be quite simple!

First of all, I needed to find out where on my four-node cluster the SCAN listeners were running. This is done quite easily by asking Clusterware:

[grid@rac11gr2node2 ~]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node rac11gr2node2
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node rac11gr2node4
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node rac11gr2node3

I was initially on the first node, so I had to ssh to the second. From there I thought the proc file system might have the answer. I needed to get the PID of the SCAN listener first:

[grid@rac11gr2node2 ~]$ ps -ef | grep -i scan
grid      4738     1  0 Jun03 ?        00:00:13 /u01/app/grid/product/11.2.0/crs/bin/tnslsnr LISTENER_SCAN1 -inherit
grid     24694 24147  0 20:55 pts/0    00:00:00 grep -i scan

Now /proc/4738/fd lists all the open file descriptors used by the SCAN listener. Surely the log.xml file would be in there somewhere:

[grid@rac11gr2node2 ~]$ ll /proc/4738/fd
total 0
lrwx------ 1 grid oinstall 64 Jun 16 20:46 0 -> /dev/null
lrwx------ 1 grid oinstall 64 Jun 16 20:46 1 -> /dev/null
lrwx------ 1 grid oinstall 64 Jun 16 20:46 10 -> socket:[20906]
lrwx------ 1 grid oinstall 64 Jun 16 20:46 11 -> socket:[20908]
lrwx------ 1 grid oinstall 64 Jun 16 20:46 12 -> socket:[20927]
lrwx------ 1 grid oinstall 64 Jun 16 20:46 13 -> socket:[20957]
lrwx------ 1 grid oinstall 64 Jun 16 20:46 14 -> socket:[20958]
lrwx------ 1 grid oinstall 64 Jun 16 20:46 15 -> socket:[22991]
lrwx------ 1 grid oinstall 64 Jun 16 20:46 16 -> socket:[10712179]
lrwx------ 1 grid oinstall 64 Jun 16 20:46 17 -> socket:[10173760]
lrwx------ 1 grid oinstall 64 Jun 16 20:46 18 -> socket:[10176036]
lrwx------ 1 grid oinstall 64 Jun 16 20:46 19 -> socket:[9106216]
lrwx------ 1 grid oinstall 64 Jun 16 20:46 2 -> /dev/null
lr-x------ 1 grid oinstall 64 Jun 16 20:46 3 -> /u01/app/grid/product/11.2.0/crs/rdbms/mesg/diaus.msb
lr-x------ 1 grid oinstall 64 Jun 16 20:46 4 -> /proc/4738/fd
lr-x------ 1 grid oinstall 64 Jun 16 20:46 5 -> /u01/app/grid/product/11.2.0/crs/network/mesg/nlus.msb
lr-x------ 1 grid oinstall 64 Jun 16 20:46 6 -> pipe:[20893]
lr-x------ 1 grid oinstall 64 Jun 16 20:46 7 -> /u01/app/grid/product/11.2.0/crs/network/mesg/tnsus.msb
lrwx------ 1 grid oinstall 64 Jun 16 20:46 8 -> socket:[20904]
l-wx------ 1 grid oinstall 64 Jun 16 20:46 9 -> pipe:[20894]

Well, maybe not. The next option is to query the listener itself via lsnrctl. Nothing easier than that:

LSNRCTL> set current_listener LISTENER_SCAN1
Current Listener is LISTENER_SCAN1
LSNRCTL> show log_file
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
LISTENER_SCAN1 parameter "log_file" set to /u01/app/grid/product/11.2.0/crs/log/diag/tnslsnr/rac11gr2node2/listener_scan1/alert/log.xml
The command completed successfully
LSNRCTL>

Aha, it uses the ADR as well. So back to adrci: change the base and query the alert log:

[grid@rac11gr2node2 ~]$ adrci

ADRCI: Release 11.2.0.1.0 - Production on Wed Jun 16 20:58:17 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

ADR base = "/u01/app/oracle"
adrci> set base /u01/app/grid/product/11.2.0/crs/log
adrci> show home
ADR Homes:
diag/tnslsnr/rac11gr2node2/listener_scan1
diag/tnslsnr/rac11gr2node2/listener_scan3
diag/tnslsnr/rac11gr2node2/listener_scan2
adrci> set home diag/tnslsnr/rac11gr2node2/listener_scan1
adrci> show alert -tail
2010-06-16 20:58:25.021000 +01:00
16-JUN-2010 20:58:25 * service_update * polstdby_1 * 0
2010-06-16 20:58:27.441000 +01:00
16-JUN-2010 20:58:27 * service_update * poldb_2 * 0
2010-06-16 20:58:30.444000 +01:00
16-JUN-2010 20:58:30 * service_update * poldb_2 * 0
16-JUN-2010 20:58:30 * service_update * poldb_1 * 0
2010-06-16 20:58:33.442000 +01:00
16-JUN-2010 20:58:33 * service_update * poldb_2 * 0
2010-06-16 20:58:35.784000 +01:00
16-JUN-2010 20:58:35 * service_update * prod1 * 0
16-JUN-2010 20:58:36 * service_update * poldb_2 * 0
16-JUN-2010 20:58:36 * service_update * poldb_1 * 0
2010-06-16 20:58:39.546000 +01:00
16-JUN-2010 20:58:39 * service_update * poldb_2 * 0
16-JUN-2010 20:58:39 * service_update * poldb_1 * 0
2010-06-16 20:58:42.574000 +01:00
16-JUN-2010 20:58:42 * service_update * poldb_2 * 0
2010-06-16 20:58:45.574000 +01:00
16-JUN-2010 20:58:45 * service_update * poldb_2 * 0
2010-06-16 20:58:48.576000 +01:00
16-JUN-2010 20:58:48 * service_update * poldb_2 * 0
16-JUN-2010 20:58:48 * service_update * poldb_1 * 0
2010-06-16 20:58:51.575000 +01:00
16-JUN-2010 20:58:51 * service_update * poldb_2 * 0
16-JUN-2010 20:58:51 * service_update * poldb_1 * 0
2010-06-16 20:58:54.578000 +01:00
16-JUN-2010 20:58:54 * service_update * poldb_2 * 0

Job done.