11gR2 – ASM redundancy levels.


In 11gR2, ASM diskgroups are used 1) by the Grid Infrastructure for the OCR, the voting disk and the ASM spfile, and 2) by the database for DATA and FRA.

ASM redundancy levels can be a bit misleading in Oracle 11gR2 compared to 10gR2. They carry the same names: Normal redundancy, High redundancy and External redundancy. However, depending on which area of the 11gR2 configuration the diskgroup will be used for, the number of disks required is not the same. DBAs seldom take this difference into consideration when requesting disks/LUNs from the storage administrators and are taken by surprise during installation.

Grid Infrastructure: When configuring diskgroups for the OCR file, voting disk and ASM spfile, Normal redundancy requires 3 disks/LUNs to be part of the diskgroup. High redundancy requires 5 disks/LUNs. External redundancy requires just one disk/LUN. The screen output below shows the error message when only one disk/LUN is selected for the GRID diskgroup.


Database: When configuring diskgroups for DATA and FRA, Normal redundancy requires 2 disks/LUNs or failure groups to be part of the diskgroup. High redundancy requires 3 disks/LUNs or failure groups. External redundancy requires just one disk/LUN.
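To illustrate the disk counts, a normal-redundancy database diskgroup needs at least two disks/failure groups. A minimal sketch (the diskgroup name and device paths here are hypothetical):

```sql
-- Normal redundancy: every file extent is two-way mirrored, so at least
-- two failure groups (here one disk each) are required.
CREATE DISKGROUP DATA NORMAL REDUNDANCY
  FAILGROUP fg1 DISK '/dev/mapper/lun01'
  FAILGROUP fg2 DISK '/dev/mapper/lun02';
```

For a high-redundancy diskgroup the same statement would need at least three failure groups, since each extent is three-way mirrored.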


More about configuring OCR and voting disks in the next post.


11gR2 – having a second OCR file on a separate diskgroup


In Oracle database versions < 11gR2 RAC, the OUI had a screen where we could specify multiple OCR and/or voting disks. Starting with 11gR2, since the OCR and voting disks are stored on ASM, we no longer have the option to specify multiple OCR files or, for that matter, multiple voting disks. Well, do we need multiple disks for the OCR and voting disks? With these files stored on ASM, we can take advantage of the ASM mirroring options: Normal redundancy (two copies of the file), High redundancy (three copies of the file) or External redundancy (where ASM does not manage redundant copies of these files; redundancy is maintained/protected at the storage level by mirroring disks).

If the normal or high redundancy option was selected when the diskgroup that stores the OCR/voting disks was created, the files are automatically mirrored at the corresponding level.

Now what if we do not want to use ASM mirroring, but would like the clusterware to maintain two or more copies of the OCR file on physically different diskgroups? Oracle supports this option, not while installing the Grid Infrastructure but after the installation is complete, using the ocrconfig utility. Let's go through one such scenario.

OCR is a critical component of the RAC architecture. From the point the clusterware starts (on server start/reboot), and whenever it starts the applications running on the cluster (database, listener, ASM, database services, etc.), the OCR is consulted by the clusterware for placement of resources. The OCR contains all the rules for high availability of these resources, and the clusterware uses these definitions for placement of resources when a server/instance crashes. In 11gR2 the OCR contains additional information such as server pool definitions.

When the clusterware starts, it determines the location of the OCR file by checking the /etc/oracle/ocr.loc file. The steps to add a second OCR file on a separate diskgroup are as follows:

1.  Request storage administrators to create a new LUN with the same size as the LUN that currently hosts the OCR and voting disks for the 11gR2 cluster

2.  Connect to the ASM instance on one of the database servers, create a new diskgroup and mount it on all instances in the cluster.
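The exact statement depends on the redundancy level and device paths; a sketch for an external-redundancy diskgroup on a single hypothetical LUN:

```sql
-- On the first node: create the diskgroup.
CREATE DISKGROUP PRD_GRID2 EXTERNAL REDUNDANCY
  DISK '/dev/mapper/lun_grid2';

-- On each remaining node: mount it.
ALTER DISKGROUP PRD_GRID2 MOUNT;
```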



3. Once the diskgroup has been mounted, we can configure it for OCR. This configuration requires root privileges.

Connect to the server as root and execute the ocrconfig command from the $GRID_HOME/bin directory.

[root@prddb3 bin]# ./ocrconfig -add +PRD_GRID2

Note: if the diskgroup does not have the required compatibility attributes, you can get the following error. I ran into this error while configuring OCR on the PRD_GRID2 diskgroup.

PROT-30: The Oracle Cluster Registry location to be added is not accessible.

Also, the GRID_HOME/log/prddb3/client directory contains an ocrconfig log file with the following errors:

[root@prddb3 bin]# cat /app/grid/product/11.2.0/log/prddb3/client/ocrconfig_4000.log
Oracle Database 11g Clusterware Release - Production Copyright 1996, 2009 Oracle. All rights reserved.
2010-07-10 20:38:18.472: [ OCRCONF][2833243664]ocrconfig starts...
2010-07-10 20:38:23.140: [  OCRCLI][2833243664]proac_replace_dev:[+PRD_GRID2]: Failed. Retval [8]
2010-07-10 20:38:23.140: [  OCRAPI][2833243664]procr_replace_dev: failed to replace device (8)
2010-07-10 20:38:23.140: [ OCRCONF][2833243664]The new OCR device [+PRD_GRID2] cannot be opened
2010-07-10 20:38:23.140: [ OCRCONF][2833243664]Exiting [status=failed]... 

What does this error mean? By default, when you create a diskgroup in 11gR2 RAC from the command prompt using SQL*Plus, the compatibility attribute of the diskgroup is set to 10.1. This issue does not occur if you create the diskgroup through ASMCA.

[oracle@prddb3]$ sqlplus / as sysasm

(query output showing the compatibility attributes for the diskgroups PRD_DATA, PRD_FRA, PRD_GRID1 and PRD_GRID2)

Note: The ASM compatibility and the database (RDBMS) compatibility attributes default to 10.1. They need to be changed to 11.2 in order for the clusterware to recognize that this is an 11gR2 ASM configuration.

4. Change the compatibility of the new diskgroup to 11.2 as follows:
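A minimal sketch of the commands, assuming the diskgroup is PRD_GRID2 (note that compatibility attributes can only be advanced, never lowered):

```sql
-- Run as SYSASM on the ASM instance.
ALTER DISKGROUP PRD_GRID2 SET ATTRIBUTE 'compatible.asm'   = '11.2';
ALTER DISKGROUP PRD_GRID2 SET ATTRIBUTE 'compatible.rdbms' = '11.2';
```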



5. These commands change the compatibility levels for the PRD_GRID2 diskgroup; the new values can be verified against the v$asm_diskgroup view.
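A sketch of such a query:

```sql
-- Verify the compatibility attributes of the PRD_GRID2 diskgroup.
SELECT name, compatibility, database_compatibility
  FROM v$asm_diskgroup
 WHERE name = 'PRD_GRID2';
```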




6. Once the change has been verified, attempt again to configure the diskgroup for OCR. This requires root privileges: connect to the server as root and execute the ocrconfig command from the $GRID_HOME/bin directory.

[root@prddb3 bin]# ./ocrconfig -add +PRD_GRID2

7.  Verify that the new OCR file has been created and that the /etc/oracle/ocr.loc file is updated with the new file information.

[root@prddb3 bin]# cat /etc/oracle/ocr.loc
#Device/file  getting replaced by device +PRD_GRID2
ocrconfig_loc=+PRD_GRID1
ocrmirrorconfig_loc=+PRD_GRID2

8. The ocrcheck command now reflects two OCR locations:

[root@prddb3 bin]# ./ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3972
         Available space (kbytes) :     258148
         ID                       : 1288159793
         Device/File Name         : +PRD_GRID1
                                    Device/File integrity check succeeded
         Device/File Name         : +PRD_GRID2
                                    Device/File integrity check succeeded 

                                    Device/File not configured 

                                    Device/File not configured 

                                    Device/File not configured 

         Cluster registry integrity check succeeded 

         Logical corruption check succeeded 

9. Apart from the standard utilities used to verify the integrity of the OCR file, the $GRID_HOME/log/prddb3/client directory also contains logs reflecting that the change was successful.

[root@prddb3 bin]# cat /app/grid/product/11.2.0/log/prddb3/client/ocrconfig_25560.log
Oracle Database 11g Clusterware Release - Production Copyright 1996, 2009 Oracle. All rights reserved.
2010-07-10 21:01:00.652: [ OCRCONF][4224605712]ocrconfig starts...
2010-07-10 21:01:13.593: [ OCRCONF][4224605712]Successfully replaced OCR and set block 0
2010-07-10 21:01:13.593: [ OCRCONF][4224605712]Exiting [status=success]... 

10.  The clusterware alert log also has an entry indicating successful addition of the OCR disk.

[crsd(27512)]CRS-1007:The OCR/OCR mirror location was replaced by +PRD_GRID2.

11.  Let's check whether the physical files can be found on the ASM storage. Set ORACLE_SID to the ASM instance on the server; using asmcmd, the following registry files can be found in each of the two diskgroups:

ASMCMD> ls -lt
Type     Redund  Striped  Time             Sys  Name
OCRFILE  UNPROT  COARSE   JUL 10 21:00:00  Y    REGISTRY.255.715795989 

ASMCMD> ls -lt
Type     Redund  Striped  Time             Sys  Name
OCRFILE  UNPROT  COARSE   JUL 10 21:00:00  Y    REGISTRY.255.724021255 

coexist 10gR2 and 11gR2 RAC db on the same cluster – Part II

I accidentally posted this blog entry over my previous entry on this same topic. Thanks to Google, I was able to retrieve my old post from the Google cache and post it again to my blog.


My previous post discussed the various stumbling blocks we encountered during our 10gR2 database installation in an 11gR2 environment. We took it a step at a time to troubleshoot and install the database, documenting and fixing the issues as we went. Yesterday, browsing through Metalink, I noticed a very recent article on the same subject:
Pre 11.2 Database Issues in 11gR2 Grid Infrastructure Environment [ID 948456.1]
which recommends several patches and steps that could help ease the installation process.

coexist 10gR2 and 11gR2 RAC db on the same cluster.. stumbling blocks

Due to project/application requirements we had to create a new 10gR2 database on a 11gR2 cluster. These are the high level steps that were attempted to complete this effort.

  1. Install 11gR2 Grid Infrastructure
  2. Create all ASM diskgroups using asmca
  3. Install 11gR2 database binaries
  4. Create the 11gR2 database using dbca from the 11gR2 DB home
  5. Install 10gR2 database binaries
  6. Create 10gR2 database using dbca from the 10gR2 DB home

Once all the prerequisites are met, 11gR2 installation is a very smooth process. Some of us who have worked on the true clustered database solutions, such as Rdb on VMS clusters (many of you may not even know that Oracle owns another database called Oracle Rdb; Oracle acquired this excellent database from Digital Equipment Corporation, a.k.a. DEC, around 1992, and surprisingly Oracle Rdb is used by many customers even today to manage their VLDB systems), Oracle Parallel Server (OPS) and, most recently, 9iR2 RAC, will remember how difficult it was to complete the installation. Oracle has come a long way in streamlining this process. It's now so easy that the entire 11gR2 RAC configuration can be completed with little or no effort in less than an hour.

Once the 11gR2 environment was up and running, the next step was to configure the 10gR2 RAC database on the same cluster. We first installed the 10gR2 binaries. runInstaller was able to see that there was a cluster already installed. During the verification step, the installer complained of an incompatible version of clusterware on the server. We ignored the error and moved on; the binaries installed successfully on all nodes in the cluster, after which we completed the patch set upgrade.

Note: when the 10gR2 installer was released, 11g did not yet exist, so how could the installer be aware of a higher version? Higher versions are almost always compatible with lower versions. With this idea we moved on.

Stumbling Block I

The next step was to configure the database using dbca. It is important to invoke dbca from the 10gR2 home's /bin directory. We noticed that the intro screen was different; it did not show the choices we normally see in a clustered database installation. We did not get the choice to select between creating a 'RAC' database or a 'single instance' database. This indicated that something was wrong. The installer saw that there was a clusterware already present and that this was a RAC implementation; why not dbca? Searching through the Oracle documentation, I found this note:

"When Oracle Database version 10.x or 11.x is installed on a new Oracle grid infrastructure for a cluster configuration, it is configured for dynamic cluster configuration, in which some or all IP addresses are provisionally assigned, and other cluster identification information is dynamic. This configuration is incompatible with older database releases, which require fixed addresses and configuration.

You can change the nodes where you want to run the older database to create a persistent configuration. Creating a persistent configuration for a node is called pinning a node."

We can check whether the nodes are pinned using the olsnodes command; 11gR2 adds a new switch that lists the pinned status for each node.

[prddb1] olsnodes -h
Usage: olsnodes [ [-n] [-i] [-s] [-t] [<node> | -l [-p]] | [-c] ] [-g] [-v]
                -n print node number with the node name
                -p print private interconnect address for the local node
                -i print virtual IP address with the node name
                <node> print information for the specified node
                -l print information for the local node
                -s print node status - active or inactive
                -t print node type - pinned or unpinned
                -g turn on logging
                -v Run in debug mode; use at direction of Oracle Support only.
                -c print clusterware name

[prddb1] olsnodes -t
prddb1     Unpinned
prddb2     Unpinned
prddb3     Unpinned

Pinning a node is done as root using the crsctl utility; crsctl and olsnodes are both located in the $GRID_HOME/bin directory. Pin each node in the cluster in turn:

crsctl pin css -n prddb1

Check that they are pinned:

[prddb1] olsnodes -t
prddb1     Pinned
prddb2     Pinned
prddb3     Pinned

Stumbling Block II

dbca was now able to see the RAC cluster and we continued. We ran into the second stumbling block after selecting ASM as the storage manager: "ASM instance not found .. press ok to configure ASM"

In 11gR2, listeners are driven by the SCAN feature, meaning there is one SCAN listener for each SCAN IP defined in the DNS server. Apart from the SCAN listeners, each node runs a local listener that services the various database services and connections. The local listener is named LISTENER in 11gR2, whereas it was called LISTENER_<HOSTNAME> in Oracle 10gR2. The dbca log file, located at $ORACLE_HOME/cfgtoollogs/dbca/trace.log, showed the following entries:

[AWT-EventQueue-0] [11:7:53:935] [NetworkUtilsOPS.getLocalListenerProperties:912] localNode=prddb1, localNodeVIP=prddb1-vip
[AWT-EventQueue-0] [11:7:53:935] [NetworkUtilsOPS.getLocalListenerProperties:913] local listener name = LISTENER_prddb1
[AWT-EventQueue-0] [11:7:53:939] [NetworkUtilsOPS.getLocalListenerProperties:923] No endpoint found for listener name=LISTENER_prddb1
[AWT-EventQueue-0] [11:7:53:939] [ASMAttributes.getConnection:209] getting port using fallback…
[AWT-EventQueue-0] [11:7:53:940] [ASMInstanceRAC.validateASM:609] oracle.sysman.assistants.util.CommonUtils.getListenerProperties(CommonUtils.java:455)

Metalink describes this error and suggests using the listener name that is currently configured during database creation. Well, this did not help either.

The workaround was to add the qualified listener name (LISTENER_<HOSTNAME>) to the listener.ora file and reload the listener.
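A sketch of such a listener.ora entry on node prddb1 (the VIP host name and port here are assumptions; adjust them to your environment):

```
# $GRID_HOME/network/admin/listener.ora on prddb1 (hypothetical values)
LISTENER_PRDDB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = prddb1-vip)(PORT = 1521))
  )
```

After saving the file, reload the listener, e.g. lsnrctl reload LISTENER_PRDDB1.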

Stumbling Block III

The listener fix got us through the ASM configuration and the database creation. Hoping that those were all the issues we could potentially run into, we moved ahead and defined database services as part of the database creation process.

Our next encounter was with the instance/database/services startup process. dbca failed with errors: unable to start the database, instance and database services. The entries had made it to the OCR file, as was evident from the crsstat output; however, they would not start. As the obvious step was to check whether these resources were registered with the OCR, we checked the status of the database using srvctl.

srvctl status database -d <dbname> returned no output. How could this be? crsstat showed the entries, but srvctl gave no results.

The next step was to take a dump of the OCR file to see whether the entries were really in it; they were. After considerable research we determined that the format, structure and syntax of the srvctl utility in 11gR2 are different from 10gR2. Trying the srvctl utility from the 10gR2 home did the trick.

We now have both 11gR2 and 10gR2 RAC databases on an 11gR2 clusterware/Grid Infrastructure cluster, both using ASM from the 11gR2 Grid Infrastructure.