Oracle12cR2 RAC Step by Step Installation Procedures on Oracle VM VirtualBox 5.1

1. Create VM on Windows 10 Host Operating System
2. Install the Guest Operating System
3. Preparation for the RAC Environment
4. Create Shared Disks
5. Clone the Hard Disk
6. Install the Grid Infrastructure
7. Install the Oracle RAC Databases
 

1. Create VM on Windows 10 Host Operating System

-- Create a new 50G virtual hard disk image named 12cRAC1.vdi
D:\VirtualBox> VBoxManage createhd --filename "D:\VirtualBox VMs\12cRAC1.vdi" --size 51200 --format VDI --variant Fixed

-- Create a new Virtual Machine named 12cRAC1
D:\VirtualBox> VBoxManage createvm --name 12cRAC1 --ostype "Oracle_64" --register

-- Add a SATA controller with the fixed-size disk attached
D:\VirtualBox> VBoxManage storagectl 12cRAC1 --name "SATA" --add sata --controller IntelAHCI
D:\VirtualBox> VBoxManage storageattach 12cRAC1 --storagectl "SATA" --port 0 --device 0 --type hdd --medium "D:\VirtualBox VMs\12cRAC1.vdi"

-- Add an IDE controller with an Oracle Linux 7.3 DVD drive attached, using the specified ISO file:
D:\VirtualBox> VBoxManage storagectl 12cRAC1 --name "IDE" --add ide
D:\VirtualBox> VBoxManage storageattach 12cRAC1 --storagectl "IDE" --port 0 --device 0 --type dvddrive --medium F:\ISO\Oracle_Linux_7.3\V834394-01.iso

-- Attach a VBoxGuestAdditions DVD drive to the same IDE controller, using the specified ISO file:
D:\VirtualBox> VBoxManage storageattach 12cRAC1 --storagectl "IDE" --port 1 --device 0 --type dvddrive --medium F:\ISO\VBoxGuestAdditions_5.1.16.iso

-- Misc system settings.
D:\VirtualBox> VBoxManage modifyvm 12cRAC1 --ioapic on
D:\VirtualBox> VBoxManage modifyvm 12cRAC1 --boot1 dvd --boot2 disk --boot3 none --boot4 none

D:\VirtualBox> VBoxManage modifyvm 12cRAC1 --cpus 2 --memory 4096 --vram 128
D:\VirtualBox> VBoxManage hostonlyif create
D:\VirtualBox> VBoxManage modifyvm 12cRAC1 --nic1 nat --nic2 hostonly --nic3 intnet


PS. Please verify that three NICs have been set up correctly on Oracle VM VirtualBox Manager (GUI).

2. Install the Guest Operating System

Start to install the OS - Oracle Linux 7.3:

Select the following software packages to be installed:
Base Environment:
Server with GUI

Add-Ons for Selected Environment:
◎  Hardware Monitoring Utilities
◎  Java Platform
◎  KDE
◎  Large Systems Performance
◎  Network file system client
◎  Performance Tools
◎  Compatibility Libraries
◎  Development Tools

During the installation, the following information should be set:

  • hostname: rac12c01
  • enp0s3 (eth0): DHCP (Connect Automatically)
  • enp0s8 (eth1): IP= 192.168.10.11, Subnet=255.255.255.0, Gateway=192.168.10.1, DNS=192.168.10.1, Search=localdomain (Connect Automatically)
  • enp0s9 (eth2): IP= 192.168.20.11, Subnet=255.255.255.0, Gateway=<blank>, DNS=<blank>, Search=<blank> (Connect Automatically)

 

Click Done, and then click INSTALLATION DESTINATION.

Click Done.

Click Begin Installation.

Click Reboot to reboot the VM machine.

Install VM Virtual Box Guest Additions:

Add the shared directory named "Oracle12cR2_Shared_Dir" as follows:

Reboot the machine for the change to take effect.
The shared directory then appears as follows:

Check Firewall Status:

Stop and Disable Firewall:
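The firewall commands themselves are not shown above; on Oracle Linux 7 the usual systemd commands would be (a sketch, assuming firewalld is the active firewall service):

```shell
# Check the current firewall status (firewalld assumed on Oracle Linux 7)
systemctl status firewalld

# Stop it now and disable it at boot so it does not block cluster traffic
systemctl stop firewalld
systemctl disable firewalld
```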

 

3. Preparation for the RAC Environment:
For simplicity, install the "oracle-database-server-12cR2-preinstall" package to perform the prerequisite setup.
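The install command is not shown above; it would typically be run with yum (a sketch):

```shell
# Sets kernel parameters, resource limits, and creates the oracle
# user and oinstall/dba groups automatically
yum install -y oracle-database-server-12cR2-preinstall
```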



Install the NTP service:
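The NTP installation commands are omitted above; a typical sequence on Oracle Linux 7 would be (a sketch -- note that NTP is de-configured again later in this section so that Oracle's Cluster Time Synchronization Service can take over):

```shell
# Install and enable the NTP daemon
yum install -y ntp
systemctl enable ntpd
systemctl start ntpd
```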

Additional Setup:
Install the Oracle ASM support package:
[root@rac12c01 ~]# yum install oracleasm-support

 

We can verify that the "oracle" user has been created.
[root@rac12c01 ~]# id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba)

Set the password for the "oracle" user.
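A minimal sketch of setting the password:

```shell
# Set (or reset) the oracle user's password interactively
passwd oracle
```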

[root@rac12c01 ~]# cat <<EOF > /etc/hosts
127.0.0.1       localhost.localdomain   localhost
# Public (eth1)
192.168.10.11   rac12c01
192.168.10.12   rac12c02
# Private (eth2)
192.168.20.11   rac12c01-priv
192.168.20.12   rac12c02-priv
# Virtual
192.168.10.101   rac12c01-vip
192.168.10.102   rac12c02-vip
# SCAN
192.168.10.10   scan12c
EOF

Since we are not going to use DNS, rename resolv.conf to resolv.conf.bak as follows:
[root@rac12c01 ~]# mv /etc/resolv.conf /etc/resolv.conf.bak

The first network interface - enp0s3 (eth0) - can be disabled, if unneeded, with the following command:
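The command itself is missing from the original; with NetworkManager on Oracle Linux 7, one way to do it would be (a sketch -- the connection name "enp0s3" is an assumption and may differ on your system):

```shell
# Stop the NAT interface and keep it from coming back up at boot
nmcli connection modify enp0s3 connection.autoconnect no
nmcli connection down enp0s3
```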


 

Modify the "/etc/security/limits.d/20-nproc.conf" file using sed command as follows:
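The sed command itself is not shown; based on the referenced oracle-base article, it raises the nproc soft limit (a sketch):

```shell
# Raise the per-user process limit from 4096 to 16384
sed -i 's/4096/16384/' /etc/security/limits.d/20-nproc.conf
```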

Change the setting of SELinux to permissive by editing the "/etc/selinux/config" file and making sure the SELINUX flag is set as follows:
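The resulting flag in "/etc/selinux/config" and a way to apply it without a reboot (a sketch):

```shell
# /etc/selinux/config should contain the line:
#   SELINUX=permissive
# Apply the change immediately, without rebooting:
setenforce Permissive
```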

 De-configure NTP as follows:
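The de-configuration steps are not shown; a common approach (a sketch) is to stop the daemon and hide its configuration so the Grid installer falls back to Oracle's Cluster Time Synchronization Service:

```shell
# Stop NTP and remove its config from the path the installer checks
systemctl stop ntpd
systemctl disable ntpd
mv /etc/ntp.conf /etc/ntp.conf.orig
rm -f /var/run/ntpd.pid
```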


Create the directories in which the Oracle software will be installed as follows:
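The directory-creation commands are omitted; given the GRID_HOME and DB_HOME values defined in the profile below, a sketch would be:

```shell
# Grid home and database home used throughout this guide
mkdir -p /u01/app/12.2.0.1/grid
mkdir -p /u01/app/oracle/product/12.2.0.1/db_1
chown -R oracle:oinstall /u01
chmod -R 775 /u01
```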

Log in as the "oracle" user and add the following lines at the end of the "/home/oracle/.bash_profile" file.

# Oracle Settings
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=rac12c01
export ORACLE_UNQNAME=CDBRAC
export ORACLE_BASE=/u01/app/oracle
export GRID_HOME=/u01/app/12.2.0.1/grid
export DB_HOME=$ORACLE_BASE/product/12.2.0.1/db_1
export ORACLE_HOME=$DB_HOME
export ORACLE_SID=cdbrac1
export ORACLE_TERM=xterm
export BASE_PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$BASE_PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
alias grid_env='. /home/oracle/grid_env'
alias db_env='. /home/oracle/db_env'

Create a file called "/home/oracle/grid_env" with the following contents:

export ORACLE_SID=+ASM1
export ORACLE_HOME=$GRID_HOME
export PATH=$ORACLE_HOME/bin:$BASE_PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib


Create a file called "/home/oracle/db_env" with the following contents.

export ORACLE_SID=cdbrac1
export ORACLE_HOME=$DB_HOME
export PATH=$ORACLE_HOME/bin:$BASE_PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

 

Once the "/home/oracle/.bash_profile" has been run, different environments can be switched as follows:
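For example (the values in the comments follow from the profile files defined above):

```shell
# Switch to the grid environment, then back to the database environment
grid_env
echo $ORACLE_HOME    # grid home: /u01/app/12.2.0.1/grid
db_env
echo $ORACLE_HOME    # db home: /u01/app/oracle/product/12.2.0.1/db_1
```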

 

4. Create Shared Disks:

Shutdown the guest Operating System:
[root@rac12c01 ~]# systemctl poweroff

On the host operating system (Windows 10 in my case), switch to the command line interface:

cd D:\VirtualBox
D:
D:\VirtualBox> VBoxManage createhd --filename "D:\VirtualBox VMs\12c_asm1.vdi" --size 20480 --format VDI --variant Fixed
D:\VirtualBox> VBoxManage createhd --filename "D:\VirtualBox VMs\12c_asm2.vdi" --size 20480 --format VDI --variant Fixed
D:\VirtualBox> VBoxManage createhd --filename "D:\VirtualBox VMs\12c_asm3.vdi" --size 20480 --format VDI --variant Fixed
D:\VirtualBox> VBoxManage createhd --filename "D:\VirtualBox VMs\12c_asm4.vdi" --size 20480 --format VDI --variant Fixed

Connect the four newly created hard disks to the VM - 12cRAC1 as follows:
D:\VirtualBox> VBoxManage storageattach 12cRAC1 --storagectl "SATA" --port 1 --device 0 --type hdd --medium "D:\VirtualBox VMs\12c_asm1.vdi" --mtype shareable
D:\VirtualBox> VBoxManage storageattach 12cRAC1 --storagectl "SATA" --port 2 --device 0 --type hdd --medium "D:\VirtualBox VMs\12c_asm2.vdi" --mtype shareable
D:\VirtualBox> VBoxManage storageattach 12cRAC1 --storagectl "SATA" --port 3 --device 0 --type hdd --medium "D:\VirtualBox VMs\12c_asm3.vdi" --mtype shareable
D:\VirtualBox> VBoxManage storageattach 12cRAC1 --storagectl "SATA" --port 4 --device 0 --type hdd --medium "D:\VirtualBox VMs\12c_asm4.vdi" --mtype shareable

Make the four hard disks shareable.
D:\VirtualBox> VBoxManage modifyhd "D:\VirtualBox VMs\12c_asm1.vdi" --type shareable
D:\VirtualBox> VBoxManage modifyhd "D:\VirtualBox VMs\12c_asm2.vdi" --type shareable   
D:\VirtualBox> VBoxManage modifyhd "D:\VirtualBox VMs\12c_asm3.vdi" --type shareable
D:\VirtualBox> VBoxManage modifyhd "D:\VirtualBox VMs\12c_asm4.vdi" --type shareable

Start the "12cRAC1" VM by clicking the "Start" button on the toolbar:

[root@rac12c01 ~]# ls -l /dev/sd*
brw-rw----. 1 root disk 8,  0 Mar 17  2017 /dev/sda
brw-rw----. 1 root disk 8,  1 Mar 17  2017 /dev/sda1
brw-rw----. 1 root disk 8,  2 Mar 17  2017 /dev/sda2
brw-rw----. 1 root disk 8, 16 Mar 17  2017 /dev/sdb
brw-rw----. 1 root disk 8, 32 Mar 17  2017 /dev/sdc
brw-rw----. 1 root disk 8, 48 Mar 17  2017 /dev/sdd
brw-rw----. 1 root disk 8, 64 Mar 17  2017 /dev/sde

Use the "fdisk" command to partition the disks sdb through sde. (Repeat the fdisk session below for each of sdb to sde.)
[root@rac12c01 ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0xa78c1f11.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p):
Using default response p
Partition number (1-4, default 1):
First sector (2048-20971519, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519):
Using default value 20971519
Partition 1 of type Linux and of size 10 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
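The interactive session above can also be scripted, which is handy when repeating it for sdb through sde (a sketch -- this feeds fdisk the same answers shown above, creating one primary partition spanning each whole disk):

```shell
# n = new partition, p = primary, 1 = partition number,
# two blank lines accept the default first/last sectors, w = write
for d in sdb sdc sdd sde; do
    printf 'n\np\n1\n\n\nw\n' | fdisk /dev/$d
done
```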

Verify the results:
[root@rac12c01 ~]# ll /dev/sd*1
brw-rw----. 1 root disk 8,  1 Mar 17  2017 /dev/sda1
brw-rw----. 1 root disk 8, 17 Mar 16 18:06 /dev/sdb1
brw-rw----. 1 root disk 8, 33 Mar 16 18:07 /dev/sdc1
brw-rw----. 1 root disk 8, 49 Mar 16 18:07 /dev/sdd1
brw-rw----. 1 root disk 8, 65 Mar 16 18:07 /dev/sde1

 

Configure Oracle ASM & Create ASM Disks

(On rac12c01 machine)
[root@rac12c01 ~]# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done

[root@rac12c01 ~]# oracleasm init

Creating /dev/oracleasm mount point: /dev/oracleasm

Loading module "oracleasm": oracleasm

Configuring "oracleasm" to use device physical block size

Mounting ASMlib driver filesystem: /dev/oracleasm

[root@rac12c01 ~]# oracleasm createdisk DISK1 /dev/sdb1
Writing disk header: done
Instantiating disk: done

[root@rac12c01 ~]# oracleasm createdisk DISK2 /dev/sdc1
Writing disk header: done
Instantiating disk: done

[root@rac12c01 ~]# oracleasm createdisk DISK3 /dev/sdd1
Writing disk header: done
Instantiating disk: done

[root@rac12c01 ~]# oracleasm createdisk DISK4 /dev/sde1
Writing disk header: done
Instantiating disk: done

[root@rac12c01 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...

[root@rac12c01 ~]# oracleasm listdisks
DISK1
DISK2
DISK3
DISK4

Shut down rac12c01 now so its hard disk can be cloned.

5. Clone the Hard Disk

Go back to the host operating system command line interface and clone the hard disk as follows:
D:\VirtualBox> VBoxManage clonehd "D:\VirtualBox VMs\12cRAC1.vdi" "D:\VirtualBox VMs\12cRAC2.vdi"

Create the "12cRAC2" virtual machine in VirtualBox in the same way as "12cRAC1" VM,
except using an existing "D:\VirtualBox VMs\12cRAC2.vdi" virtual hard drive.

D:\VirtualBox> VBoxManage createvm --name 12cRAC2 --ostype "Oracle_64" --register
D:\VirtualBox> VBoxManage storagectl 12cRAC2 --name "SATA" --add sata --controller IntelAHCI

D:\VirtualBox> VBoxManage storageattach 12cRAC2 --storagectl "SATA" --port 0 --device 0 --type hdd --medium "D:\VirtualBox VMs\12cRAC2.vdi"

Be sure to add the three network adapters just as on the 12cRAC1 VM.
D:\VirtualBox> VBoxManage modifyvm 12cRAC2 --nic1 nat --nic2 hostonly --nic3 intnet
PS. Please verify that three NICs have been set up correctly on Oracle VM VirtualBox Manager (GUI).

D:\VirtualBox> VBoxManage modifyvm 12cRAC2 --ioapic on
D:\VirtualBox> VBoxManage modifyvm 12cRAC2 --boot1 disk --boot2 none --boot3 none --boot4 none
D:\VirtualBox> VBoxManage modifyvm 12cRAC2 --cpus 2 --memory 4096 --vram 128

Once the "12cRAC2" VM is ready, attach the shared disks to it as below:
D:\VirtualBox> VBoxManage storageattach 12cRAC2 --storagectl "SATA" --port 1 --device 0 --type hdd --medium "D:\VirtualBox VMs\12c_asm1.vdi" --mtype shareable
D:\VirtualBox> VBoxManage storageattach 12cRAC2 --storagectl "SATA" --port 2 --device 0 --type hdd --medium "D:\VirtualBox VMs\12c_asm2.vdi" --mtype shareable
D:\VirtualBox> VBoxManage storageattach 12cRAC2 --storagectl "SATA" --port 3 --device 0 --type hdd --medium "D:\VirtualBox VMs\12c_asm3.vdi" --mtype shareable
D:\VirtualBox> VBoxManage storageattach 12cRAC2 --storagectl "SATA" --port 4 --device 0 --type hdd --medium "D:\VirtualBox VMs\12c_asm4.vdi" --mtype shareable

Start the "12cRAC2" VM by clicking the "Start" button on the toolbar.

Log in to the "12cRAC2" virtual machine as the "root" user and reconfigure the network settings to match the following:
(using the nm-connection-editor command)

hostname: rac12c02
IP Address eth1: 192.168.10.12 (public address)
Default Gateway eth1: 192.168.10.1 (public address)
IP Address eth2: 192.168.20.12 (private address)
Default Gateway eth2: none
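As an alternative to the GUI editor, the same settings could be applied with nmcli (a sketch; the connection names "enp0s8" and "enp0s9" are assumptions and may differ after cloning):

```shell
# Public interface (eth1)
nmcli connection modify enp0s8 ipv4.addresses 192.168.10.12/24 \
    ipv4.gateway 192.168.10.1 ipv4.method manual
# Private interface (eth2), no gateway
nmcli connection modify enp0s9 ipv4.addresses 192.168.20.12/24 ipv4.method manual
```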

Since the 12cRAC2 VM was cloned from 12cRAC1, we have to change the host name from rac12c01 to rac12c02, by the following commands:

[root@rac12c01 ~]# hostnamectl set-hostname rac12c02
[root@rac12c01 ~]# systemctl reboot

Now we can start the 12cRAC1 VM again by clicking its Start button.

 

Ping each network IP (eth1 & eth2) from both machines as follows:
(On rac12c01 machine)

[root@rac12c01 ~]# ping -c 1 rac12c01 ; ping -c 1 rac12c01-priv ; ping -c 1 rac12c02 ; ping -c 1 rac12c02-priv
PING rac12c01 (192.168.10.11) 56(84) bytes of data.
64 bytes from rac12c01 (192.168.10.11): icmp_seq=1 ttl=64 time=0.036 ms

--- rac12c01 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms
PING rac12c01-priv (192.168.20.11) 56(84) bytes of data.
64 bytes from rac12c01-priv (192.168.20.11): icmp_seq=1 ttl=64 time=0.030 ms

--- rac12c01-priv ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms
PING rac12c02 (192.168.10.12) 56(84) bytes of data.
64 bytes from rac12c02 (192.168.10.12): icmp_seq=1 ttl=64 time=0.563 ms

--- rac12c02 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.563/0.563/0.563/0.000 ms
PING rac12c02-priv (192.168.20.12) 56(84) bytes of data.
64 bytes from rac12c02-priv (192.168.20.12): icmp_seq=1 ttl=64 time=0.461 ms

--- rac12c02-priv ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.461/0.461/0.461/0.000 ms

(On rac12c02 machine)

[root@rac12c02 ~]# ping -c 1 rac12c01 ; ping -c 1 rac12c01-priv ; ping -c 1 rac12c02 ; ping -c 1 rac12c02-priv
PING rac12c01 (192.168.10.11) 56(84) bytes of data.
64 bytes from rac12c01 (192.168.10.11): icmp_seq=1 ttl=64 time=0.314 ms

--- rac12c01 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms
PING rac12c01-priv (192.168.20.11) 56(84) bytes of data.
64 bytes from rac12c01-priv (192.168.20.11): icmp_seq=1 ttl=64 time=0.201 ms

--- rac12c01-priv ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms
PING rac12c02 (192.168.10.12) 56(84) bytes of data.
64 bytes from rac12c02 (192.168.10.12): icmp_seq=1 ttl=64 time=0.037 ms

--- rac12c02 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms
PING rac12c02-priv (192.168.20.12) 56(84) bytes of data.
64 bytes from rac12c02-priv (192.168.20.12): icmp_seq=1 ttl=64 time=0.032 ms

--- rac12c02-priv ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms


[root@rac12c02 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...

[root@rac12c02 ~]# oracleasm listdisks
DISK1
DISK2
DISK3
DISK4



6. Install the Grid Infrastructure

(On rac12c01 machine)
[root@rac12c01 ~]# ls -l /media/sf_Oracle12cR2_Shared_Dir/
total 6297251
-rwxrwx---. 1 root vboxsf 3453696911 Mar 16 13:15 linuxx64_12201_database.zip
-rwxrwx---. 1 root vboxsf 2994687209 Mar 16 10:43 linuxx64_12201_grid_home.zip


[root@rac12c01 ~]# cd /media/sf_Oracle12cR2_Shared_Dir/
[root@rac12c01 sf_Oracle12cR2_Shared_Dir]# unzip linuxx64_12201_grid_home.zip -d /u01/app/12.2.0.1/grid
[root@rac12c01 sf_Oracle12cR2_Shared_Dir]# chown -R oracle:oinstall /u01/app/12.2.0.1/grid
[root@rac12c01 sf_Oracle12cR2_Shared_Dir]# chmod -R 770 /u01/app/12.2.0.1/grid

Install the Cluster Verification Utility (CVU) package on both rac12c01 and rac12c02 from the unzipped grid home (e.g. /u01/app/12.2.0.1/grid/cv/rpm/) as follows:

[root@rac12c01 ~]# cd /u01/app/12.2.0.1/grid/cv/rpm
[root@rac12c01 rpm]# ls -l
total 12
-rwxrwx---. 1 oracle oinstall 8860 Jan  5 17:36 cvuqdisk-1.0.10-1.rpm

[root@rac12c01 rpm]# export CVUQDISK_GRP=dba
[root@rac12c01 rpm]# rpm -Uvh cvuqdisk-1.0.10-1.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:cvuqdisk-1.0.10-1                 ################################# [100%]

# For rac12c02 node
[root@rac12c01 rpm]# scp cvuqdisk-1.0.10-1.rpm root@rac12c02:/tmp
[root@rac12c01 rpm]# ssh rac12c02 rpm -Uvh /tmp/cvuqdisk-1.0.10-1.rpm

[root@rac12c01 rpm]# xhost +
access control disabled, clients can connect from any host
 

Switch to the "oracle" user and install the Grid Infrastructure by executing gridSetup.sh as follows:
[root@rac12c01 rpm]# su - oracle

Last login: Fri May 27 15:39:57 CST 2016 on pts/0
[oracle@rac12c01 ~]$ export DISPLAY=:0.0

[oracle@rac12c01 grid]$ grid_env
[oracle@rac12c01 grid]$ cd $ORACLE_HOME
[oracle@rac12c01 grid]$ pwd

/u01/app/12.2.0.1/grid
[oracle@rac12c01 grid]$ ./gridSetup.sh

 

 

Add the second node by clicking the "Add" button and filling out the node information as below:


Then click OK.

For SSH connectivity, highlight both nodes and then click the "Setup" button.

Click “OK” and then click the “Next” button to continue the next step.

Verify the correctness of public and private networks and mark "Do Not Use" for the NAT interface (virbr0).
Then click the "Next" button to continue the next step.


Choose the "Configure ASM using block devices" option and click the "Next" button to continue the next step:

Choose the "No" option since we do not want to create a separate ASM disk group at this time.
Click the "Next" button to continue the next step.


For the disk group “DATA”, set the redundancy to "External", click the "Change Discovery Path" button and set the path to "/dev/oracleasm/disks/*".
Return to the main screen, select all 4 disks, and click the "Next" button.


 

Specify the password and then click the “Next” button to continue the next step:

Accept the default IPMI option by clicking the "Next" button to continue the next step.

 

Leave the option unchanged and click the "Next" button to continue to the next step.

 

Leave both the Oracle ASM Administrator Group and the ASM DBA Group set to "dba" and click the "Next" button to continue to the next step.

 

Ignore the warning message and click “Yes” button to continue.

Specify the Oracle Base location as "/u01/app/oracle" and click the "Next" button to continue.

Click the “Next” button to continue.

Click the “Next” button to continue.

 

Use the “Fix & Check Again” button if possible to allow the issues to be fixed. Otherwise check the “Ignore All” checkbox and click the “Next” button to continue.

Click the “Install” button to start the installation.

 

Execute the configuration scripts (/u01/app/oraInventory/orainstRoot.sh & /u01/app/12.2.0.1/grid/root.sh) on both nodes in turn.

Since we are not using DNS, we can ignore the error above by clicking the "OK" button.

Click the "Next" button to continue.

 

Click the "Yes" button and then the "Close" button to complete the Grid Infrastructure installation.

 

We can check the cluster overall resource status as follows:

[oracle@rac12c01 grid]$ crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details      
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       rac12c01                 STABLE
               ONLINE  ONLINE       rac12c02                 STABLE
ora.DATA.dg
               ONLINE  ONLINE       rac12c01                 STABLE
               ONLINE  ONLINE       rac12c02                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac12c01                 STABLE
               ONLINE  ONLINE       rac12c02                 STABLE
ora.chad
               ONLINE  ONLINE       rac12c01                 STABLE
               ONLINE  ONLINE       rac12c02                 STABLE
ora.net1.network
               ONLINE  ONLINE       rac12c01                 STABLE
               ONLINE  ONLINE       rac12c02                 STABLE
ora.ons
               ONLINE  ONLINE       rac12c01                 STABLE
               ONLINE  ONLINE       rac12c02                 STABLE
ora.proxy_advm
               OFFLINE OFFLINE      rac12c01                 STABLE
               OFFLINE OFFLINE      rac12c02                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac12c01                 STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       rac12c01                 169.254.208.14 192.1
                                                             68.20.11,STABLE
ora.asm
      1        ONLINE  ONLINE       rac12c01                 Started,STABLE
      2        ONLINE  ONLINE       rac12c02                 Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       rac12c01                 STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       rac12c01                 Open,STABLE
ora.qosmserver
      1        ONLINE  ONLINE       rac12c01                 STABLE
ora.rac12c01.vip
      1        ONLINE  ONLINE       rac12c01                 STABLE
ora.rac12c02.vip
      1        ONLINE  ONLINE       rac12c02                 STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac12c01                 STABLE

 

7. Install the Oracle RAC Databases

(On rac12c01 machine)
Add the "oracle" user into the "vboxsf" group so it has access to shared drives.

[root@rac12c01 ~]# usermod -G oinstall,dba,vboxsf oracle
[root@rac12c01 ~]# id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),983(vboxsf),54322(dba)

[root@rac12c01 ~]# ls -l /media/sf_Oracle12cR2_Shared_Dir/
total 6297251
-rwxrwx---. 1 root vboxsf 3453696911 Mar 16 13:15 linuxx64_12201_database.zip
-rwxrwx---. 1 root vboxsf 2994687209 Mar 16 10:43 linuxx64_12201_grid_home.zip

[root@rac12c01 /]# xhost +
access control disabled, clients can connect from any host
[root@rac12c01 ~]# su - oracle
Last login: Tue Mar 21 14:04:33 CST 2017 on pts/0
[oracle@rac12c01 ~]$ cd /media/sf_Oracle12cR2_Shared_Dir/
[oracle@rac12c01 sf_Oracle12cR2_Shared_Dir]$ ls -l
total 6297251
-rwxrwx---. 1 root vboxsf 3453696911 Mar 16 13:15 linuxx64_12201_database.zip
-rwxrwx---. 1 root vboxsf 2994687209 Mar 16 10:43 linuxx64_12201_grid_home.zip
[oracle@rac12c01 sf_Oracle12cR2_Shared_Dir]$ unzip linuxx64_12201_database.zip

[oracle@rac12c01 sf_Oracle12cR2_Shared_Dir]$ cd database
[oracle@rac12c01 database]$ export DISPLAY=:0.0
[oracle@rac12c01 database]$ ./runInstaller

 

Uncheck the security updates checkbox and click the "Next" button and "Yes" on the subsequent warning dialog.

 

Select the "Install database software only" option, then click the "Next" button.

Accept the "Oracle Real Application Clusters database installation" option by clicking the "Next" button.


 

Make sure both nodes are selected, then click the "Next" button.

 

Select the "Enterprise Edition" option, then click the "Next" button.

Enter "/u01/app/oracle" as the Oracle base and "/u01/app/oracle/product/12.2.0.1/db_1" as the software location, then click the "Next" button.

 

Set the operating system groups to "dba", then click the "Next" button.

 

Check "Ignore All" and then click Next to continue.

Click the “Yes” button to continue

Everything is now ready; start the installation by clicking the "Install" button.



Run root.sh on both nodes in turn as the root user, then click OK.

Click Close to exit.

 

Create a database by running the dbca command:
[root@rac12c01 ~]# xhost +
access control disabled, clients can connect from any host
[root@rac12c01 ~]# su - oracle
Last login: Mon May 30 16:18:57 CST 2016 on pts/1
[oracle@rac12c01 ~]$ export DISPLAY=:0.0
[oracle@rac12c01 ~]$ db_env

[oracle@rac12c01 ~]$ dbca
 

 

Click Next to continue.
Select the "Typical configuration" option. Enter the container database name (cdbrac), pluggable database name (pdb1) and administrator password. Click the "Next" button.
 

 

Check "Ignore All" and then click the “Next” button to continue.

Click the “Yes” button to continue.

 

Click the “Finish” button and wait while the database creation takes place.

Click Close to exit.

 

We can now check the cdbrac database configuration and status using srvctl (Server Control Utility) or crsctl (Clusterware Control Utility) as follows:

[oracle@rac12c01 ~]$ srvctl config database -d cdbrac
Database unique name: cdbrac
Database name: cdbrac
Oracle home: /u01/app/oracle/product/12.2.0.1/db_1
Oracle user: oracle
Spfile: +DATA/CDBRAC/PARAMETERFILE/spfile.303.939304459
Password file: +DATA/CDBRAC/PASSWORD/pwdcdbrac.282.939303601
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups: DATA
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group:
Database instances: cdbrac1,cdbrac2
Configured nodes: rac12c01,rac12c02
CSS critical: no
CPU count: 0
Memory target: 0
Maximum memory: 0
Default network number for database services:
Database is administrator managed

[oracle@rac12c01 ~]$ srvctl status database -d cdbrac
Instance cdbrac1 is running on node rac12c01
Instance cdbrac2 is running on node rac12c02

[oracle@rac12c01 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Wed Mar 22 14:12:23 2017

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> select inst_name from v$active_instances;
INST_NAME
--------------------------------------------------------------------------------
rac12c01:cdbrac1
rac12c02:cdbrac2
SQL> exit

[oracle@rac12c01 ~]$ grid_env
[oracle@rac12c01 ~]$ crsctl status res -t -w "TYPE = ora.database.type"
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details      
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cdbrac.db
      1        ONLINE  ONLINE       rac12c01                 Open,HOME=/u01/app/o
                                                             racle/product/12.2.0
                                                             .1/db_1,STABLE
      2        ONLINE  ONLINE       rac12c02                 Open,HOME=/u01/app/o
                                                             racle/product/12.2.0
                                                             .1/db_1,STABLE

 

 

 

[Reference]
https://oracle-base.com/articles/12c/oracle-db-12cr2-rac-installation-on-oracle-linux-7-using-virtualbox

 

 

 
