Oracle 12cR1 RAC Step-by-Step Installation Procedures on Oracle VM VirtualBox 5.0

1. Create VM on Windows 10 Host Operating System
2. Install the Guest Operating System
3. Preparation for the RAC Environment
4. Create Shared Disks
5. Clone the Hard Disk
6. Install the Grid Infrastructure
7. Install the Oracle RAC Databases

1. Create VM on Windows 10 Host Operating System

-- Create a new 50G virtual hard disk image named 12cRAC1.vdi
F:\VirtualBox> VBoxManage createhd --filename "F:\VirtualBox VMs\12cRAC1.vdi" --size 51200 --format VDI --variant Fixed

-- Create a new Virtual Machine named 12cRAC1
F:\VirtualBox> VBoxManage createvm --name 12cRAC1 --ostype "Oracle_64" --register

-- Add a SATA controller and attach the fixed-size disk
F:\VirtualBox> VBoxManage storagectl 12cRAC1 --name "SATA" --add sata --controller IntelAHCI
F:\VirtualBox> VBoxManage storageattach 12cRAC1 --storagectl "SATA" --port 0 --device 0 --type hdd --medium "F:\VirtualBox VMs\12cRAC1.vdi"

-- Add an IDE controller with an Oracle Linux 7.0 DVD drive attached, using the specified ISO file:
F:\VirtualBox> VBoxManage storagectl 12cRAC1 --name "IDE" --add ide
F:\VirtualBox> VBoxManage storageattach 12cRAC1 --storagectl "IDE" --port 0 --device 0 --type dvddrive --medium E:\ISO\Oracle_Linux_7\V46135-01.iso

-- Attach a second DVD drive with the VBoxGuestAdditions ISO to the same IDE controller:
F:\VirtualBox> VBoxManage storageattach 12cRAC1 --storagectl "IDE" --port 1 --device 0 --type dvddrive --medium E:\ISO\VBoxGuestAdditions_5.0.16.iso

-- Misc system settings.
F:\VirtualBox> VBoxManage modifyvm 12cRAC1 --ioapic on
F:\VirtualBox> VBoxManage modifyvm 12cRAC1 --boot1 dvd --boot2 disk --boot3 none --boot4 none

F:\VirtualBox> VBoxManage modifyvm 12cRAC1 --cpus 2 --memory 4096 --vram 128
F:\VirtualBox> VBoxManage hostonlyif create
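The guest configuration below uses 192.168.10.1 as the gateway and DNS address on the public (host-only) network, so assign that address to the host-only adapter. A sketch, assuming the adapter created above has the default Windows name (verify the actual name with "VBoxManage list hostonlyifs"):

F:\VirtualBox> VBoxManage hostonlyif ipconfig "VirtualBox Host-Only Ethernet Adapter" --ip 192.168.10.1 --netmask 255.255.255.0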
F:\VirtualBox> VBoxManage modifyvm 12cRAC1 --nic1 nat --nic2 hostonly --nic3 intnet
PS. Please verify that three NICs have been set up correctly on Oracle VM VirtualBox Manager (GUI).

2. Install the Guest Operating System

Start to install the OS - Oracle Linux 7.0:

Check the following packages that will be installed:

  • Server with GUI
  • Hardware Monitoring Utilities
  • Large Systems Performance
  • Network file system client
  • Performance Tools
  • Compatibility Libraries
  • Development Tools

During the installation, the following information should be set:

  • hostname: rac12c01
  • enp0s3 (eth0): DHCP (Connect Automatically)
  • enp0s8 (eth1): IP= 192.168.10.11, Subnet=255.255.255.0, Gateway=192.168.10.1, DNS=192.168.10.1, Search=localdomain (Connect Automatically)
  • enp0s9 (eth2): IP= 192.168.20.11, Subnet=255.255.255.0, Gateway=<blank>, DNS=<blank>, Search=<blank> (Connect Automatically)

 

Click Done, and then click INSTALLATION DESTINATION.

Click Done.

Click Begin Installation.

Click Reboot to reboot the VM.

After the reboot, click "Start using Oracle Linux Server" to complete the initial setup.

Install the VirtualBox Guest Additions, then reboot the machine for the changes to take effect.

Add the shared directory named "Oracle12c_Shared_Dir" as follows:
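The shared folder can also be defined from the host command line while the VM is powered off; a sketch, assuming the installation media sit in "F:\Oracle12c_Shared_Dir" on the host (the host path is an assumption, adjust to your own):

F:\VirtualBox> VBoxManage sharedfolder add 12cRAC1 --name Oracle12c_Shared_Dir --hostpath "F:\Oracle12c_Shared_Dir" --automount

With --automount, the folder shows up in the guest as /media/sf_Oracle12c_Shared_Dir once the Guest Additions are running.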

Reboot the machine to take effect.
The shared directory now appears in the guest as "/media/sf_Oracle12c_Shared_Dir".

3. Preparation for the RAC Environment:

For simplicity, install the "oracle-rdbms-server-12cR1-preinstall" package to perform most of the prerequisite setup.
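For example (assuming the Oracle Linux public yum repository is reachable through the NAT interface):

[root@rac12c01 ~]# yum install -y oracle-rdbms-server-12cR1-preinstall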



Additional Setup:
Install Oracle ASM:

[root@rac12c01 ~]# yum install oracleasm-support oracleasmlib oracleasm-`uname -r`

We can verify that the "oracle" user has been created.
[root@rac12c01 ~]# id oracle

uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba)

Set the password for the "oracle" user:
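[root@rac12c01 ~]# passwd oracle

Next, add the public, private, virtual, and SCAN addresses of both nodes to the "/etc/hosts" file: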

[root@rac12c01 ~]# cat <<EOF > /etc/hosts
127.0.0.1       localhost.localdomain   localhost
# Public (eth1)
192.168.10.11   rac12c01
192.168.10.12   rac12c02
# Private (eth2)
192.168.20.11   rac12c01-priv
192.168.20.12   rac12c02-priv
# Virtual
192.168.10.101   rac12c01-vip
192.168.10.102   rac12c02-vip
# SCAN
192.168.10.10   scan12c
EOF


Since we are not going to use DNS, let's rename resolv.conf to resolv.conf.bak as follows:
[root@rac12c01 ~]# mv /etc/resolv.conf /etc/resolv.conf.bak

Disable the first network interface, enp0s3 (eth0):
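A minimal sketch using nmcli, assuming the NetworkManager connection profile is named after the interface ("enp0s3"):

[root@rac12c01 ~]# nmcli connection modify enp0s3 connection.autoconnect no
[root@rac12c01 ~]# nmcli connection down enp0s3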

Modify the "/etc/security/limits.d/20-nproc.conf" file using sed command as follows:

Change the setting of SELinux to permissive by editing the "/etc/selinux/config" file and making sure the SELINUX flag is set as follows:
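For example, set the flag with sed and switch the running system to permissive mode immediately:

[root@rac12c01 ~]# sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
[root@rac12c01 ~]# setenforce Permissive
[root@rac12c01 ~]# grep ^SELINUX= /etc/selinux/config
SELINUX=permissive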

Disable the firewall as follows:
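On Oracle Linux 7 the default firewall service is firewalld, so for example:

[root@rac12c01 ~]# systemctl stop firewalld
[root@rac12c01 ~]# systemctl disable firewalld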

De-configure NTP as follows:
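A sketch that stops and disables the time services so that Oracle Cluster Time Synchronization Service (CTSS) can take over (assuming ntpd/chronyd are not needed for anything else; ignore errors for a service that is not installed):

[root@rac12c01 ~]# systemctl stop ntpd chronyd
[root@rac12c01 ~]# systemctl disable ntpd chronyd
[root@rac12c01 ~]# mv /etc/ntp.conf /etc/ntp.conf.orig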

Create the directories in which the Oracle software will be installed as follows:
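A sketch matching the GRID_HOME and DB_HOME locations used in the oracle user's profile below:

[root@rac12c01 ~]# mkdir -p /u01/app/12.1.0.2/grid
[root@rac12c01 ~]# mkdir -p /u01/app/oracle/product/12.1.0.2/db_1
[root@rac12c01 ~]# chown -R oracle:oinstall /u01
[root@rac12c01 ~]# chmod -R 775 /u01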

Log in as the "oracle" user and add the following lines at the end of the "/home/oracle/.bash_profile" file.

# Oracle Settings
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=rac12c01
export ORACLE_UNQNAME=CDBRAC
export ORACLE_BASE=/u01/app/oracle
export GRID_HOME=/u01/app/12.1.0.2/grid
export DB_HOME=$ORACLE_BASE/product/12.1.0.2/db_1
export ORACLE_HOME=$DB_HOME
export ORACLE_SID=cdbrac1
export ORACLE_TERM=xterm
export BASE_PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$BASE_PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
alias grid_env='. /home/oracle/grid_env'
alias db_env='. /home/oracle/db_env'


Create a file called "/home/oracle/grid_env" with the following contents:

export ORACLE_SID=+ASM1
export ORACLE_HOME=$GRID_HOME
export PATH=$ORACLE_HOME/bin:$BASE_PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib


Create a file called "/home/oracle/db_env" with the following contents.

export ORACLE_SID=cdbrac1
export ORACLE_HOME=$DB_HOME
export PATH=$ORACLE_HOME/bin:$BASE_PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

Once the "/home/oracle/.bash_profile" has been run, different environments can be switched as follows:

 

4. Create Shared Disks:

Shut down the guest operating system:

[root@rac12c01 ~]# systemctl poweroff

On the host operating system (Windows 10 in this example), switch to the command-line interface:

cd F:\VirtualBox
F:
F:\VirtualBox> VBoxManage createhd --filename "F:\VirtualBox VMs\12c_asm1.vdi" --size 5120 --format VDI --variant Fixed
F:\VirtualBox> VBoxManage createhd --filename "F:\VirtualBox VMs\12c_asm2.vdi" --size 5120 --format VDI --variant Fixed
F:\VirtualBox> VBoxManage createhd --filename "F:\VirtualBox VMs\12c_asm3.vdi" --size 5120 --format VDI --variant Fixed
F:\VirtualBox> VBoxManage createhd --filename "F:\VirtualBox VMs\12c_asm4.vdi" --size 5120 --format VDI --variant Fixed

Connect the four newly created hard disks to the VM - 12cRAC1 as follows:
F:\VirtualBox> VBoxManage storageattach 12cRAC1 --storagectl "SATA" --port 1 --device 0 --type hdd --medium "F:\VirtualBox VMs\12c_asm1.vdi" --mtype shareable
F:\VirtualBox> VBoxManage storageattach 12cRAC1 --storagectl "SATA" --port 2 --device 0 --type hdd --medium "F:\VirtualBox VMs\12c_asm2.vdi" --mtype shareable
F:\VirtualBox> VBoxManage storageattach 12cRAC1 --storagectl "SATA" --port 3 --device 0 --type hdd --medium "F:\VirtualBox VMs\12c_asm3.vdi" --mtype shareable
F:\VirtualBox> VBoxManage storageattach 12cRAC1 --storagectl "SATA" --port 4 --device 0 --type hdd --medium "F:\VirtualBox VMs\12c_asm4.vdi" --mtype shareable

Make the four hard disks shareable.
F:\VirtualBox> VBoxManage modifyhd "F:\VirtualBox VMs\12c_asm1.vdi" --type shareable
F:\VirtualBox> VBoxManage modifyhd "F:\VirtualBox VMs\12c_asm2.vdi" --type shareable    
F:\VirtualBox> VBoxManage modifyhd "F:\VirtualBox VMs\12c_asm3.vdi" --type shareable
F:\VirtualBox> VBoxManage modifyhd "F:\VirtualBox VMs\12c_asm4.vdi" --type shareable

Start the "12cRAC1" VM by clicking the "Start" button on the toolbar:
[root@rac12c01 ~]# ls -l /dev/sd*
brw-rw----. 1 root disk 8,  0 May 26  2016 /dev/sda
brw-rw----. 1 root disk 8,  1 May 26  2016 /dev/sda1
brw-rw----. 1 root disk 8,  2 May 26  2016 /dev/sda2
brw-rw----. 1 root disk 8, 16 May 26  2016 /dev/sdb
brw-rw----. 1 root disk 8, 32 May 26  2016 /dev/sdc
brw-rw----. 1 root disk 8, 48 May 26  2016 /dev/sdd
brw-rw----. 1 root disk 8, 64 May 26  2016 /dev/sde

Use the "fdisk" command to partition the disks sdb to sde.   (PS. repeat fdisk command from sdb to sde)

[root@rac12c01 ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-652, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-652, default 652):
Using default value 652

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Verify the results:
[root@rac12c01 ~]# ll /dev/sd*1
brw-rw----. 1 root disk 8,  1 May 26  2016 /dev/sda1
brw-rw----. 1 root disk 8, 17 May 26 13:35 /dev/sdb1
brw-rw----. 1 root disk 8, 33 May 26 13:36 /dev/sdc1
brw-rw----. 1 root disk 8, 49 May 26 13:36 /dev/sdd1
brw-rw----. 1 root disk 8, 65 May 26 13:36 /dev/sde1

Configure Oracle ASM & Create ASM Disks

(On rac12c01 machine)

[root@rac12c01 ~]# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done

[root@rac12c01 ~]# oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Configuring "oracleasm" to use device physical block size
Mounting ASMlib driver filesystem: /dev/oracleasm

[root@rac12c01 ~]# oracleasm createdisk DISK1 /dev/sdb1
Writing disk header: done
Instantiating disk: done

[root@rac12c01 ~]# oracleasm createdisk DISK2 /dev/sdc1
Writing disk header: done
Instantiating disk: done

[root@rac12c01 ~]# oracleasm createdisk DISK3 /dev/sdd1
Writing disk header: done
Instantiating disk: done

[root@rac12c01 ~]# oracleasm createdisk DISK4 /dev/sde1
Writing disk header: done
Instantiating disk: done

[root@rac12c01 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...

[root@rac12c01 ~]# oracleasm listdisks
DISK1
DISK2
DISK3
DISK4

Shut down rac12c01 at this point so its hard disk can be cloned.
 

5. Clone the Hard Disk

Go back to the host operating system command-line interface and clone the hard disk as follows:
F:\VirtualBox> VBoxManage clonehd "F:\VirtualBox VMs\12cRAC1.vdi" "F:\VirtualBox VMs\12cRAC2.vdi"

Create the "12cRAC2" virtual machine in VirtualBox in the same way as "12cRAC1" VM,
except using an existing "F:\VirtualBox VMs\12cRAC2.vdi" virtual hard drive.

F:\VirtualBox> VBoxManage createvm --name 12cRAC2 --ostype "Oracle_64" --register
F:\VirtualBox> VBoxManage storagectl 12cRAC2 --name "SATA" --add sata --controller IntelAHCI

F:\VirtualBox> VBoxManage storageattach 12cRAC2 --storagectl "SATA" --port 0 --device 0 --type hdd --medium "F:\VirtualBox VMs\12cRAC2.vdi"

Be sure to add the same three network adapters as on the "12cRAC1" VM.
F:\VirtualBox> VBoxManage modifyvm 12cRAC2 --nic1 nat --nic2 hostonly --nic3 intnet
PS. Please verify that three NICs have been set up correctly on Oracle VM VirtualBox Manager (GUI).

F:\VirtualBox> VBoxManage modifyvm 12cRAC2 --ioapic on
F:\VirtualBox> VBoxManage modifyvm 12cRAC2 --boot1 disk --boot2 none --boot3 none --boot4 none
F:\VirtualBox> VBoxManage modifyvm 12cRAC2 --cpus 2 --memory 4096 --vram 128

Once the "12cRAC2" VM is ready, attach the shared disks to it as bellow:
F:\VirtualBox> VBoxManage storageattach 12cRAC2 --storagectl "SATA" --port 1 --device 0 --type hdd --medium "F:\VirtualBox VMs\12c_asm1.vdi" --mtype shareable
F:\VirtualBox> VBoxManage storageattach 12cRAC2 --storagectl "SATA" --port 2 --device 0 --type hdd --medium "F:\VirtualBox VMs\12c_asm2.vdi" --mtype shareable
F:\VirtualBox> VBoxManage storageattach 12cRAC2 --storagectl "SATA" --port 3 --device 0 --type hdd --medium "F:\VirtualBox VMs\12c_asm3.vdi" --mtype shareable
F:\VirtualBox> VBoxManage storageattach 12cRAC2 --storagectl "SATA" --port 4 --device 0 --type hdd --medium "F:\VirtualBox VMs\12c_asm4.vdi" --mtype shareable

 

Start the "12cRAC2" VM by clicking the "Start" button on the toolbar.

Log in to the "12cRAC2" virtual machine as the "root" user in order to reconfigure the network settings to match the following:
(using nm-connection-editor command)

  • hostname: rac12c02
  • eth1 (enp0s8): IP address 192.168.10.12, default gateway 192.168.10.1 (public)
  • eth2 (enp0s9): IP address 192.168.20.12, no default gateway (private)
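A sketch using nmcli, assuming the connection profiles are named after the interfaces (enp0s8 for eth1, enp0s9 for eth2):

nmcli connection modify enp0s8 ipv4.method manual ipv4.addresses 192.168.10.12/24 ipv4.gateway 192.168.10.1 ipv4.dns 192.168.10.1
nmcli connection modify enp0s9 ipv4.method manual ipv4.addresses 192.168.20.12/24
nmcli connection up enp0s8 ; nmcli connection up enp0s9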

 

Since the 12cRAC2 VM was cloned from 12cRAC1, we have to change the hostname from rac12c01 to rac12c02 with the following commands:

[root@rac12c01 ~]# hostnamectl set-hostname rac12c02

[root@rac12c01 ~]# systemctl reboot

Now start the 12cRAC1 VM again by clicking its "Start" button.

Ping each network IP (eth1 & eth2) from both machines as follows:
(On rac12c01 machine)

[root@rac12c01 ~]# ping -c 1 rac12c01 ; ping -c 1 rac12c01-priv ; ping -c 1 rac12c02 ; ping -c 1 rac12c02-priv
PING rac12c01 (192.168.10.11) 56(84) bytes of data.
64 bytes from rac12c01 (192.168.10.11): icmp_seq=1 ttl=64 time=0.016 ms

--- rac12c01 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.016/0.016/0.016/0.000 ms
PING rac12c01-priv (192.168.20.11) 56(84) bytes of data.
64 bytes from rac12c01-priv (192.168.20.11): icmp_seq=1 ttl=64 time=0.008 ms

--- rac12c01-priv ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.008/0.008/0.008/0.000 ms
PING rac12c02 (192.168.10.12) 56(84) bytes of data.
64 bytes from rac12c02 (192.168.10.12): icmp_seq=1 ttl=64 time=0.319 ms

--- rac12c02 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms
PING rac12c02-priv (192.168.20.12) 56(84) bytes of data.
64 bytes from rac12c02-priv (192.168.20.12): icmp_seq=1 ttl=64 time=0.273 ms

--- rac12c02-priv ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms


(On rac12c02 machine)
[root@rac12c02 ~]# ping -c 1 rac12c01 ; ping -c 1 rac12c01-priv ; ping -c 1 rac12c02 ; ping -c 1 rac12c02-priv
PING rac12c01 (192.168.10.11) 56(84) bytes of data.
64 bytes from rac12c01 (192.168.10.11): icmp_seq=1 ttl=64 time=0.691 ms

--- rac12c01 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.691/0.691/0.691/0.000 ms
PING rac12c01-priv (192.168.20.11) 56(84) bytes of data.
64 bytes from rac12c01-priv (192.168.20.11): icmp_seq=1 ttl=64 time=1.11 ms

--- rac12c01-priv ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.112/1.112/1.112/0.000 ms
PING rac12c02 (192.168.10.12) 56(84) bytes of data.
64 bytes from rac12c02 (192.168.10.12): icmp_seq=1 ttl=64 time=0.022 ms

--- rac12c02 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms
PING rac12c02-priv (192.168.20.12) 56(84) bytes of data.
64 bytes from rac12c02-priv (192.168.20.12): icmp_seq=1 ttl=64 time=0.018 ms

--- rac12c02-priv ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms

Verify that the shared ASM disks are also visible on rac12c02:

[root@rac12c02 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...

[root@rac12c02 ~]# oracleasm listdisks
DISK1
DISK2
DISK3
DISK4

 

6. Install the Grid Infrastructure

(On rac12c01 machine)
[root@rac12c01 ~]# ls -l /media/sf_Oracle12c_Shared_Dir/
total 4962982
-rwxrwx---. 1 root vboxsf 1673544724 Nov 14  2014 linuxamd64_12102_database_1of2.zip
-rwxrwx---. 1 root vboxsf 1014530602 Nov 14  2014 linuxamd64_12102_database_2of2.zip
-rwxrwx---. 1 root vboxsf 1747043545 Nov 14  2014 linuxamd64_12102_grid_1of2.zip
-rwxrwx---. 1 root vboxsf  646972897 Nov 14  2014 linuxamd64_12102_grid_2of2.zip

[root@rac12c01 source]# unzip /media/sf_Oracle12c_Shared_Dir/linuxamd64_12102_grid_1of2.zip -d /source
[root@rac12c01 source]# unzip /media/sf_Oracle12c_Shared_Dir/linuxamd64_12102_grid_2of2.zip -d /source

[root@rac12c01 source]# chown -R oracle:oinstall /source/grid/
[root@rac12c01 source]# chmod -R 770 /source/grid

Install the Cluster Verification Utility (CVU) package (cvuqdisk) on both rac12c01 and rac12c02 from the Oracle Grid media (e.g. /source/grid/rpm/) as follows:
[root@rac12c01 ~]# cd /source/grid/rpm
[root@rac12c01 rpm]# ls -l
total 12
-rwxr-xr-x. 1 root root 8976 Jul  1  2014 cvuqdisk-1.0.9-1.rpm

[root@rac12c01 rpm]# export CVUQDISK_GRP=dba
[root@rac12c01 rpm]# rpm -Uvh cvuqdisk-1.0.9-1.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:cvuqdisk-1.0.9-1                 ################################# [100%]

[root@rac12c01 rpm]# xhost +

access control disabled, clients can connect from any host

Switch to "oracle" user and install the Grid Infrastructure by executing runInstaller as follows:
[root@rac12c01 rpm]# su - oracle
Last login: Fri May 27 15:39:57 CST 2016 on pts/0
[oracle@rac12c01 ~]$ export DISPLAY=:0.0

[oracle@rac12c01 ~]$ cd /source/grid/
[oracle@rac12c01 grid]$ ./runInstaller



For SSH Connectivity, highlight both nodes and then click the "Setup" button.

Click Yes to continue.
Set the redundancy to "External", click the "Change Discovery Path" button and set the path to "/dev/oracleasm/disks/*".
Return to the main screen, select all four disks, and click the "Next" button.

 


Click Ignore All and then click Next



Go ahead and click Install to continue the installation.



Run the two root scripts shown in the dialog (typically orainstRoot.sh and root.sh) on both nodes as the "root" user.
When both nodes have finished, click OK.




Since we are not using DNS, we can simply ignore the error above.



Since we are using the /etc/hosts file rather than DNS for SCAN name resolution, we can simply ignore the errors above.



Click Close to end the installation.

 

We can check the cluster overall resource status as follows:

[root@rac12c01 ~]# su - oracle
Last login: Fri May 27 17:44:09 CST 2016 on pts/1
[oracle@rac12c01 ~]$ grid_env
[oracle@rac12c01 ~]$ crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details      
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac12c01                 STABLE
               ONLINE  ONLINE       rac12c02                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac12c01                 STABLE
               ONLINE  ONLINE       rac12c02                 STABLE
ora.asm
               ONLINE  ONLINE       rac12c01                 STABLE
               ONLINE  ONLINE       rac12c02                 Started,STABLE
ora.net1.network
               ONLINE  ONLINE       rac12c01                 STABLE
               ONLINE  ONLINE       rac12c02                 STABLE
ora.ons
               ONLINE  ONLINE       rac12c01                 STABLE
               ONLINE  ONLINE       rac12c02                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac12c01                 STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       rac12c01                 169.254.103.184 192.
                                                             168.20.11,STABLE
ora.cvu
      1        ONLINE  ONLINE       rac12c01                 STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       rac12c01                 Open,STABLE
ora.oc4j
      1        ONLINE  ONLINE       rac12c01                 STABLE
ora.rac12c01.vip
      1        ONLINE  ONLINE       rac12c01                 STABLE
ora.rac12c02.vip
      1        ONLINE  ONLINE       rac12c02                 STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac12c01                 STABLE
--------------------------------------------------------------------------------

7. Install the Oracle RAC Databases

(On rac12c01 machine)
[root@rac12c01 ~]# ls -l /media/sf_Oracle12c_Shared_Dir/
total 4962982
-rwxrwx---. 1 root vboxsf 1673544724 Nov 14  2014 linuxamd64_12102_database_1of2.zip
-rwxrwx---. 1 root vboxsf 1014530602 Nov 14  2014 linuxamd64_12102_database_2of2.zip
-rwxrwx---. 1 root vboxsf 1747043545 Nov 14  2014 linuxamd64_12102_grid_1of2.zip
-rwxrwx---. 1 root vboxsf  646972897 Nov 14  2014 linuxamd64_12102_grid_2of2.zip

[root@rac12c01 ~]# unzip /media/sf_Oracle12c_Shared_Dir/linuxamd64_12102_database_1of2.zip -d /source
[root@rac12c01 ~]# unzip /media/sf_Oracle12c_Shared_Dir/linuxamd64_12102_database_2of2.zip -d /source

[root@rac12c01 /]# xhost +
access control disabled, clients can connect from any host
[root@rac12c01 /]# su - oracle
Last login: Mon May 30 15:34:57 CST 2016 on pts/0
[oracle@rac12c01 ~]$ export DISPLAY=:0.0
[oracle@rac12c01 ~]$ cd /source/database/
[oracle@rac12c01 database]$ ./runInstaller




Click Next to continue.



Click Yes to continue.





Make sure both nodes are selected, then click the "Next" button.




Click Next to continue.

Check "Ignore All" and then click Next to continue.

It is now ready to perform the installation; click the "Install" button.



Run root.sh on both nodes as the "root" user, then click OK.

Click Close to exit.

Create a database by running the dbca command:
[root@rac12c01 ~]# xhost +
access control disabled, clients can connect from any host
[root@rac12c01 ~]# su - oracle
Last login: Mon May 30 16:18:57 CST 2016 on pts/1
[oracle@rac12c01 ~]$ export DISPLAY=:0.0
[oracle@rac12c01 ~]$ dbca


Click Next to continue.
Select the "Create a database with default configuration" option. Enter the container database name (cdbrac), pluggable database name (pdb1) and administrator password. Click the "Next" button.


Check "Ignore All" and then click Next to continue.

Click Finish and wait while the database creation takes place.

Click Close to exit.

We can now check the cdbrac database configuration and status using srvctl (Server Control Utility) or crsctl (Clusterware Control Utility) as follows:
[oracle@rac12c01 ~]$ srvctl config database -d cdbrac
Database unique name: cdbrac
Database name: cdbrac
Oracle home: /u01/app/oracle/product/12.1.0.2/db_1
Oracle user: oracle
Spfile: +DATA/CDBRAC/PARAMETERFILE/spfile.296.913221991
Password file: +DATA/CDBRAC/PASSWORD/pwdcdbrac.276.913220819
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups: DATA
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group:
Database instances: cdbrac1,cdbrac2
Configured nodes: rac12c01,rac12c02
Database is administrator managed

[oracle@rac12c01 ~]$ srvctl status database -d cdbrac
Instance cdbrac1 is running on node rac12c01
Instance cdbrac2 is running on node rac12c02

[oracle@rac12c01 ~]$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.2.0 Production on Mon May 30 17:01:01 2016
Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> select inst_name from v$active_instances;
INST_NAME
--------------------------------------------------------------------------------
rac12c01:cdbrac1
rac12c02:cdbrac2

[oracle@rac12c01 ~]$ grid_env
[oracle@rac12c01 ~]$ crsctl status res -t -w "TYPE = ora.database.type"
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details      
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cdbrac.db
      1        ONLINE  ONLINE       rac12c01                 Open,STABLE
      2        ONLINE  ONLINE       rac12c02                 Open,STABLE
--------------------------------------------------------------------------------




[Reference]
https://oracle-base.com/articles/12c/oracle-db-12cr1-rac-installation-on-oracle-linux-7-using-virtualbox
