In the previous post in this series, we discussed the two most common methods for mapping shared storage for an Oracle RAC. In this post we are going to build the RAC using asmlib; the changes required when using udev rules instead are covered at the end of this post. As a review, below is the section of the previous post that discussed configuring shared storage using asmlib:
CONFIGURING SHARED STORAGE USING ORACLE ASMLIB
Now, if you are going to map the shared storage using asmlib, there are more steps involved than with udev. The base oracleasm driver is part of the OEL kernel. Add the oracleasm support package ('yum -y install oracleasm-support') on all nodes, then run 'oracleasm configure -i' to set up asmlib. Answer the questions as shown below:
[root@racnode2 ~]# oracleasm configure -i
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets (‘[]’). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
[root@racnode2 ~]# reboot
Ensure that the above commands are run on all nodes in the cluster and, as noted, reboot each node after configuring asmlib.
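If you want to confirm the driver came back up cleanly after the reboot, the oracleasm-support package includes a status check. A quick sanity check (not part of the original output above) is to run the following as root on each node:
# verifies the oracleasm kernel driver is loaded and /dev/oracleasm is mounted
oracleasm status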
The next set of commands is done only on the first cluster node.
First, create partitions on the shared disks. Repeat for each disk:
[root@racnode1 ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x619c61f3.
Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-41943039, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039):
Using default value 41943039
Partition 1 of type Linux and of size 20 GiB is set
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@racnode1 ~]#
Note that in the output above, the first sector is 2048, which places the start of the partition at a 1 MiB offset so that it is properly aligned with the underlying storage. This partition alignment is the default in OEL7, but make sure it is there; it makes a substantial difference in performance.
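If you would rather not step through fdisk interactively for every LUN, the same single aligned partition can be created non-interactively with parted. This is only a sketch, and it assumes the shared, blank disks really are /dev/sdb through /dev/sdf as in this build; adjust the device list for your environment:
# run as root on the first node only; creates one primary partition per disk,
# starting at 1MiB so the alignment discussed above is preserved
for d in /dev/sd{b,c,d,e,f}; do
    parted -s "$d" mklabel msdos mkpart primary 1MiB 100%
done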
The next step is to mark each disk for Oracle ASM: 'oracleasm createdisk <name> <device>'. For example, 'oracleasm createdisk ASMDISK1 /dev/sdb1'. Do this for each disk, something like this:
[root@racnode1 ~]# oracleasm createdisk asmdisk1 /dev/sdb1
Writing disk header: done
Instantiating disk: done
[root@racnode1 ~]# oracleasm createdisk asmdisk2 /dev/sdc1
Writing disk header: done
Instantiating disk: done
[root@racnode1 ~]# oracleasm createdisk asmdisk3 /dev/sdd1
Writing disk header: done
Instantiating disk: done
[root@racnode1 ~]# oracleasm createdisk asmdisk4 /dev/sde1
Writing disk header: done
Instantiating disk: done
[root@racnode1 ~]# oracleasm createdisk asmdisk5 /dev/sdf1
Writing disk header: done
Instantiating disk: done
Next, on each additional host run the command ‘oracleasm scandisks’, then ‘oracleasm listdisks’. The output should look something like this:
[root@racnode2 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks…
Scanning system for ASM disks…
Instantiating disk “ASMDISK1”
Instantiating disk “ASMDISK2”
Instantiating disk “ASMDISK4”
Instantiating disk “ASMDISK3”
Instantiating disk “ASMDISK5”
[root@racnode2 ~]# oracleasm listdisks
ASMDISK1
ASMDISK2
ASMDISK3
ASMDISK4
ASMDISK5
[root@racnode2 ~]#
This completes the shared device configuration using Oracle asmlib.
See this link for shared device configuration using udev:
https://dbakerber.wordpress.com/2019/10/16/udev-rules-for-oracle-storage/
The cluster build step by step that follows uses asmlib. At the end of this post, the changes required when using udev rules are noted. The changes are fairly minor.
Personally, I prefer to use udev rules because they allow us to skip the partitioning step and reduce the number of Oracle packages running on the host. Patching can also be problematic with asmlib, since the driver is tied to the OEL kernel.
Now download the Oracle GI (Grid Infrastructure) software from the link below, onto the first cluster node, as the oracle user:
https://www.oracle.com/database/technologies/oracle19c-linux-downloads.html
Note that this is an entire pre-built home installation. Once downloaded, create a directory for the files. Most people create a /u01 mount point and file system for the Oracle binaries. Note that initially this must be owned by the oracle user. Run the following commands (as root) on all the cluster nodes:
#mkdir -p /u01
#chown oracle:dba /u01
Next, log back in as the oracle user and create the full path for the Oracle GI home on the first node. The installation process will create the required directories on the other nodes as long as /u01 has been created and is owned by oracle:
[oracle@racnode1 ~]$ mkdir -p /u01/app/19.3.0/grid
Next, move or copy the installation file to that location, and unzip the GI home.
[oracle@racnode1 grid]$ mv LINUX.X64_193000_grid_home.zip /u01/app/19.3.0/grid/
[oracle@racnode1 grid]$ cd /u01/app/19.3.0/grid/
[oracle@racnode1 grid]$ unzip LINUX.X64_193000_grid_home.zip
Note that we will be using xwindows to complete the installation; that is the reason we chose the GUI installation option when we first created the VMware guests. If you have questions on how to configure xwindows with putty, see this post (section 2): https://dbakerber.wordpress.com/2015/09/23/installing-oracle-software-part-1-prerequisites/.
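Before launching the installer, it is worth verifying that X forwarding actually works from your putty session. A minimal check, assuming xclock is installed (on OEL7 it comes from the xorg-x11-apps package), looks like this:
# as the oracle user in the putty session; DISPLAY should be set to something like localhost:10.0
echo $DISPLAY
# a clock window should appear on your desktop if forwarding is working
xclock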
For further reference, below is the hosts file for this installation. I have configured dnsmasq for name resolution:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.12.1.107 racnode1 racnode1.localdomain
10.12.1.136 racnode2 racnode2.localdomain
10.12.1.108 racnode1-vip racnode1-vip.localdomain
10.12.1.137 racnode2-vip racnode2-vip.localdomain
10.12.1.146 racnode1-scan racnode1-scan.localdomain
10.12.1.147 racnode1-scan racnode1-scan.localdomain
10.12.1.148 racnode1-scan racnode1-scan.localdomain
192.168.63.101 racnode1-priv1 racnode1-priv1.localdomain
192.168.44.101 racnode1-priv2 racnode1-priv2.localdomain
192.168.63.102 racnode2-priv1 racnode2-priv1.localdomain
192.168.44.102 racnode2-priv2 racnode2-priv2.localdomain
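Since dnsmasq is handling name resolution, you can confirm that the SCAN name returns all three addresses before starting the installer. A quick check, assuming bind-utils is installed and the node's resolver points at the local dnsmasq, is:
# all three SCAN addresses from the hosts file above should be returned
nslookup racnode1-scan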
Start the installation process by running the gridSetup.sh program.
[oracle@racnode1 grid]$ ./gridSetup.sh
We are building a new cluster.
This is a standalone cluster.
The name of the cluster isn't really important, though it must be 15 characters or fewer in length and contain only letters, numbers, and hyphens (no underscores). The requirements are the same for the scan name, and the length is not validated on this screen, so make sure you check it yourself. If the names are too long, the root script will fail.
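Since the installer does not validate the length for you, it only takes a second to check it yourself before continuing; a throwaway check like the one below (using the scan name from the hosts file above) is enough:
# prints the character count, which needs to be 15 or fewer
echo -n racnode1-scan | wc -c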
This automatically includes the name of the host you are running on. Add additional hosts.
Next, click on the SSH connectivity button, shown below.
Enter the oracle password and click Setup. This sets up passwordless ssh across all the nodes in the cluster.
Click ok, then next.
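If you want to confirm the installer really did set up passwordless ssh, a quick manual check from the first node (using the node names from this build) is:
# both commands should return the remote date with no password prompt
ssh racnode1 date
ssh racnode2 date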
Assign the networks as shown below. We will use the ens35 network later in the database setup, so make sure it stays marked as Do Not Use.
While Oracle wants you to create the GIMR (Grid Infrastructure Management Repository), I have never seen much purpose in it; as far as I can tell, it just takes valuable resources. And in any case, this is not a production build.
On the next screen, the /dev/sd* devices will show up. First, try changing the disk discovery string to ORCL:*; sometimes this works. Most often, I have had to change it to /dev/oracleasm/disks/* instead.
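To confirm what path asmlib is presenting the disks under, and therefore what the discovery string has to match, you can simply list the directory:
# run as root; the file names listed here are what /dev/oracleasm/disks/* matches
ls -l /dev/oracleasm/disks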
The shared disks should now show up. Three will be needed for normal redundancy, and since this is the data for the cluster, the group should be either normal or high redundancy. Five disks would be needed for high redundancy. Enable the ASM filter driver if you desire.
Next, enter the passwords as appropriate. I typically use the same password for everything, then change and/or lock accounts as required.
Do not enable IPMI.
Register the cluster with OEM if you want. You need to have the OEM agent already installed, and OEM set up and running in order to do this.
I normally use dba for all groups. Sometimes oinstall is also used.
Don’t worry about any errors on the groups.
Take the defaults for oracle base and the inventory location. For a GI RAC home, the location must be outside of the oracle base.
I normally run the root scripts myself so I can better monitor for errors.
Check for the prereqs, and run the fixup script if necessary.
The items below can be safely ignored.
Continue.
Click install, and let it run.
When all files are set up and copied, you will be prompted to run the root scripts. Run orainstRoot.sh on each node, then run root.sh on each node. Note that root.sh must be run on node 1 first, then on the other nodes in any order.
Below is the output of the root scripts.
[root@racnode1 ~]# /tmp/GridSetupActions2021-02-11_10-43-18AM/CVU_19.0.0.0.0_oracle/runfixup.sh
All Fix-up operations were completed successfully.
[root@racnode1 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@racnode1 ~]# /u01/app/19.3.0/grid/root.sh
Performing root user operation.
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/19.3.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin …
Copying oraenv to /usr/local/bin …
Copying coraenv to /usr/local/bin …
Creating /etc/oratab file…
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/19.3.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
/u01/app/oracle/crsdata/racnode1/crsconfig/rootcrs_racnode1_2021-02-11_11-30-32AM.log
2021/02/11 11:30:44 CLSRSC-594: Executing installation step 1 of 19: ‘SetupTFA’.
2021/02/11 11:30:44 CLSRSC-594: Executing installation step 2 of 19: ‘ValidateEnv’.
2021/02/11 11:30:44 CLSRSC-363: User ignored prerequisites during installation
2021/02/11 11:30:44 CLSRSC-594: Executing installation step 3 of 19: ‘CheckFirstNode’.
2021/02/11 11:30:47 CLSRSC-594: Executing installation step 4 of 19: ‘GenSiteGUIDs’.
2021/02/11 11:30:48 CLSRSC-594: Executing installation step 5 of 19: ‘SetupOSD’.
2021/02/11 11:30:48 CLSRSC-594: Executing installation step 6 of 19: ‘CheckCRSConfig’.
2021/02/11 11:30:49 CLSRSC-594: Executing installation step 7 of 19: ‘SetupLocalGPNP’.
2021/02/11 11:32:02 CLSRSC-594: Executing installation step 8 of 19: ‘CreateRootCert’.
2021/02/11 11:32:15 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2021/02/11 11:32:16 CLSRSC-594: Executing installation step 9 of 19: ‘ConfigOLR’.
2021/02/11 11:32:33 CLSRSC-594: Executing installation step 10 of 19: ‘ConfigCHMOS’.
2021/02/11 11:32:33 CLSRSC-594: Executing installation step 11 of 19: ‘CreateOHASD’.
2021/02/11 11:32:40 CLSRSC-594: Executing installation step 12 of 19: ‘ConfigOHASD’.
2021/02/11 11:32:41 CLSRSC-330: Adding Clusterware entries to file ‘oracle-ohasd.service’
2021/02/11 11:33:09 CLSRSC-594: Executing installation step 13 of 19: ‘InstallAFD’.
2021/02/11 11:33:17 CLSRSC-594: Executing installation step 14 of 19: ‘InstallACFS’.
2021/02/11 11:33:29 CLSRSC-594: Executing installation step 15 of 19: ‘InstallKA’.
2021/02/11 11:33:38 CLSRSC-594: Executing installation step 16 of 19: ‘InitConfig’.
ASM has been created and started successfully.
[DBT-30001] Disk groups created successfully. Check /u01/app/oracle/cfgtoollogs/asmca/asmca-210211AM113420.log for details.
2021/02/11 11:35:27 CLSRSC-482: Running command: ‘/u01/app/19.3.0/grid/bin/ocrconfig -upgrade oracle oinstall’
CRS-4256: Updating the profile
Successful addition of voting disk b349b830f8644fc2bf0433b684627a8b.
Successful addition of voting disk 39ef9e8dd1674f06bfdc2b727c5b0583.
Successful addition of voting disk 9b708235ed7e4f1cbfd211a72b8facfa.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
— —– —————– ——— ———
1. ONLINE b349b830f8644fc2bf0433b684627a8b (/dev/oracleasm/disks/ASMDISK1) [DATA]
2. ONLINE 39ef9e8dd1674f06bfdc2b727c5b0583 (/dev/oracleasm/disks/ASMDISK2) [DATA]
3. ONLINE 9b708235ed7e4f1cbfd211a72b8facfa (/dev/oracleasm/disks/ASMDISK3) [DATA]
Located 3 voting disk(s).
2021/02/11 11:37:30 CLSRSC-594: Executing installation step 17 of 19: ‘StartCluster’.
2021/02/11 11:39:12 CLSRSC-343: Successfully started Oracle Clusterware stack
2021/02/11 11:39:12 CLSRSC-594: Executing installation step 18 of 19: ‘ConfigNode’.
2021/02/11 11:42:21 CLSRSC-594: Executing installation step 19 of 19: ‘PostConfig’.
2021/02/11 11:43:18 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster … succeeded
[root@racnode1 ~]#
After you click OK to confirm that the scripts have been run, the installer runs through several more steps and then continues to the finish screen. Sometimes the cluster integrity check fails, but if ASM is running and you can see the shared space in ASM, everything should be fine.
Click close.
At this point, the cluster is built.
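Before moving on, it is worth a quick check that clusterware and ASM really are up. A minimal verification sketch, run as the oracle user on the first node with the environment pointed at the GI home (output omitted here), might look like this:
# point the environment at the GI home and the local ASM instance
export ORACLE_HOME=/u01/app/19.3.0/grid
export ORACLE_SID=+ASM1
export PATH=$ORACLE_HOME/bin:$PATH
# shows the state of all clusterware resources on both nodes
crsctl stat res -t
# confirms the DATA disk group is mounted and shows its usable space
asmcmd lsdg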
Notes for using udev rules instead of ASMLIB.
If you choose to use udev rules, the differences are minor. Where we entered the disk discovery path as /dev/oracleasm/disks/* above, change it to /dev/oracleasm/*, as below:
You will notice that the names of the disks are the names defined in the file /etc/udev/rules.d/99-oracleasm.rules.
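For reference, the udev post linked earlier walks through building that rules file. A minimal sketch of what one entry in /etc/udev/rules.d/99-oracleasm.rules might look like is below; the WWID is just a placeholder, and your own rules should use the serial reported by udevadm info (or scsi_id) for each shared disk:
# example entry only: presents the matching disk as /dev/oracleasm/asmdisk1, owned by oracle:dba
KERNEL=="sd?", ENV{ID_SERIAL}=="<your-disk-wwid>", SYMLINK+="oracleasm/asmdisk1", OWNER="oracle", GROUP="dba", MODE="0660"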
Once again, to create a disk group with normal redundancy, we need three disks.
The remainder of the process is the same as above.
In this post, we have completed the creation of an Oracle RAC (Real Application Cluster) on VMware Workstation guests using Oracle Enterprise Linux 7.9 (OEL 7.9).