Step by Step RAC Installation
I will divide the total installation work into four parts.
A. Preinstallation Configuration.
B. Installing Oracle Grid Infrastructure.
C. Installing the Oracle Database Software.
D. Creating Oracle Database.
Before I go through each individual step, let's discuss some RAC concepts so that you can easily understand the rest of the document.
Concepts & New things related to Oracle 11gR2
Starting with Oracle Database 11g Release 2, Oracle Clusterware and Oracle ASM are installed into a single home directory, which is called the Grid home. Oracle grid infrastructure refers to the installation of the combined products. Oracle Clusterware and Oracle ASM are still individual products, and are referred to by those names.
Oracle Clusterware enables servers, referred to as hosts or nodes, to operate together as if they are one server, commonly referred to as a cluster. Although the servers are standalone servers, each server has additional processes that communicate with other servers. In this way the separate servers appear as if they are one server to applications and end users. Oracle Clusterware provides the infrastructure necessary to run Oracle RAC.
A noncluster Oracle database has a one-to-one relationship between its data files and the instance. An Oracle RAC database, however, has a one-to-many relationship between its data files and instances: multiple instances access a single set of database files. That is why the datafiles must be kept on shared storage so that every instance can access them. Each instance has its own memory structures and background processes.
Oracle Automatic Storage Management (ASM) is an integrated, high-performance volume manager and file system. With Oracle Database 11g Release 2, Oracle ASM adds support for storing the Oracle Clusterware OCR and voting disk files. The OCR and voting disk files are two components of Oracle Clusterware.
Oracle provides several tools to install, configure, and manage Oracle RAC. Let's look at them briefly.
i) Oracle Universal Installer (OUI) – GUI tool which installs the Oracle grid infrastructure software (which consists of Oracle Clusterware and Oracle ASM)
ii) Cluster Verification Utility (CVU) - a command-line tool that you can use to verify your environment. It is used for both preinstallation as well as postinstallation checks of your cluster environment.
iii) Oracle Enterprise Manager(OEM) – GUI tool for managing single- instance and Oracle RAC environments.
iv) SQL*Plus – Command line interface that enables you to perform database management operations for a database.
v) Server Control (SRVCTL) - A command-line interface that you can use to manage the resources defined in the Oracle Cluster Registry (OCR).
vi) Cluster Ready Services Control (CRSCTL)—A command-line tool that you can use to manage Oracle Clusterware daemons. These daemons include Cluster Synchronization Services (CSS), Cluster-Ready Services (CRS), and Event Manager (EVM).
vii) Database Configuration Assistant (DBCA)—A GUI utility that is used to create and configure Oracle Databases.
viii) Oracle Automatic Storage Management Configuration Assistant (ASMCA)—ASMCA is a utility that supports installing and configuring Oracle ASM instances, disk groups, volumes. It has both a GUI and a non-GUI interface.
ix) Oracle Automatic Storage Management Command Line utility (ASMCMD)—A command-line utility that you can use to manage Oracle ASM instances, Oracle ASM disk groups, file access control for disk groups, files and directories within Oracle ASM disk groups, templates for disk groups, and Oracle ASM volumes.
x) Listener Control (LSNRCTL)—A command-line interface that you use to administer listeners.
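These tools appear throughout the rest of this document. A few everyday examples of how they are invoked (standard 11gR2 syntax; the database name orcl below is only a placeholder):
# crsctl check crs
$ srvctl status database -d orcl
$ srvctl status nodeapps
$ asmcmd lsdg
$ lsnrctl status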
If you use asmcmd, srvctl, sqlplus, or lsnrctl to manage Oracle ASM or its listener, then use the binaries located in the Grid home, not the binaries located in the Oracle Database home, and set the ORACLE_HOME environment variable to the location of the Grid home.
If you use srvctl, sqlplus, or lsnrctl to manage a database instance or its listener, then use the binaries located in the Oracle home where the database instance or listener is running, and set the ORACLE_HOME environment variable to the location of that Oracle home.
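For example, to manage ASM from the Grid home and then a database instance from the database home (the Grid home path matches the directory created later in this document; the database home path shown is the usual default and may differ in your environment):
$ export ORACLE_HOME=/u01/app/11.2.0/grid
$ export PATH=$ORACLE_HOME/bin:$PATH
$ asmcmd lsdg
$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
$ export PATH=$ORACLE_HOME/bin:$PATH
$ sqlplus / as sysdba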
OUI no longer supports installation of Oracle Clusterware files on block or raw devices.
A. Preinstallation Requirements.
- Hardware Requirements.
- Network Hardware Requirements.
- IP Address Requirements.
- OS and software Requirements.
- Preparing the server to install Grid Infrastructure.
- Hardware Requirements:
The minimum required RAM is 1.5 GB for grid infrastructure for a cluster, or 2.5 GB for grid infrastructure for a cluster and Oracle RAC. To check your RAM, issue:
# grep MemTotal /proc/meminfo
The minimum required swap space is 1.5 GB. Oracle recommends that you set swap space to:
- 1.5 times the amount of RAM for systems with 2 GB of RAM or less.
- An amount equal to RAM for systems with between 2 GB and 16 GB of RAM.
- 16 GB for systems with more than 16 GB of RAM.
To check your swap space, issue:
# grep SwapTotal /proc/meminfo
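If you want to compare the two values against the recommendation in one step, a rough sketch like the following works; it only encodes the three sizing rules above and reports in kB:
#!/bin/bash
# Compare installed swap against the recommended swap size (rules above).
ram_kb=$(grep MemTotal /proc/meminfo | awk '{print $2}')
swap_kb=$(grep SwapTotal /proc/meminfo | awk '{print $2}')
ram_gb=$(( ram_kb / 1024 / 1024 ))
if [ "$ram_gb" -le 2 ]; then
    want_kb=$(( ram_kb * 3 / 2 ))          # 1.5 x RAM for 2 GB of RAM or less
elif [ "$ram_gb" -le 16 ]; then
    want_kb=$ram_kb                        # swap equal to RAM for 2 GB to 16 GB
else
    want_kb=$(( 16 * 1024 * 1024 ))        # 16 GB of swap above 16 GB of RAM
fi
echo "RAM: ${ram_kb} kB, swap: ${swap_kb} kB, recommended swap: ${want_kb} kB"
[ "$swap_kb" -ge "$want_kb" ] && echo "Swap OK" || echo "Swap below recommendation"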
You need at least 1 GB of free space in /tmp; having more will not hurt. To check your temp space, issue:
# df -h /tmp
You will need at least 4.5 GB of available disk space for the Grid home directory, which includes both the binary files for Oracle Clusterware and Oracle Automatic Storage Management (Oracle ASM) and their associated log files, and at least 4 GB of available disk space for the Oracle Database home directory.
To check the free space in your OS partitions, issue:
# df -h
- Network Hardware Requirements:
Each node must have at least two network interface cards (NIC), or network adapters. One adapter is for the public network interface and the other adapter is for the private network interface (the interconnect).
You need to install additional network adapters on a node if that node does not have at least two network adapters or has two network interface cards but is using network attached storage (NAS). You should have a separate network adapter for NAS.
Public interface names must be the same for all nodes. If the public interface on one node uses the network adapter eth0, then you must configure eth0 as the public interface on all nodes.
You should configure the same private interface names for all nodes as well. If eth1 is the private interface name for the first node, then eth1 should be the private interface name for your second node.
The private network adapters must support the user datagram protocol (UDP) using high-speed network adapters and a network switch that supports TCP/IP (Gigabit Ethernet or better). Oracle recommends that you use a dedicated network switch.
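For example, on a Red Hat style system the two interface definitions on the first node might look like this (the device names and addresses are the ones used later in this document; adjust them to your own environment):
# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.100.101
NETMASK=255.255.255.0
ONBOOT=yes
# cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.200.101
NETMASK=255.255.255.0
ONBOOT=yes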
- IP Address Requirements.
You must have a DNS server in order for the SCAN listener to work, so prepare your DNS server before you proceed with the installation. You must add the following entries manually in your DNS server:
i) A public IP address for each node
ii) A virtual IP address for each node
iii) Three single client access name (SCAN) addresses for the cluster
During installation a SCAN for the cluster is configured, which is a domain name that resolves to all the SCAN addresses allocated for the cluster. The IP addresses used for the SCAN addresses must be on the same subnet as the VIP addresses. The SCAN must be unique within your network. The SCAN addresses should not respond to ping commands before installation.
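For example, the SCAN used later in this document (dc-db-cluster) resolves to three addresses. In a BIND style zone file the entries would look roughly like this (the addresses are the ones from the IP assignment table later in this section):
dc-db-cluster    IN A 192.168.100.105
dc-db-cluster    IN A 192.168.100.106
dc-db-cluster    IN A 192.168.100.107
You can verify that the name resolves to all three addresses with:
$ nslookup dc-db-cluster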
- OS and software Requirements.
To determine which distribution and version of Linux is installed, issue as the root user:
# cat /proc/version
Be sure your Linux version is supported by Oracle Database 11gR2.
To determine which chip architecture each server is using and which version of the software you should install, issue as the root user:
# uname -m
This command displays the processor type. For a 64-bit architecture, the output would be "x86_64".
To determine if the required errata level is installed, issue as the root user:
# uname -r
2.6.9-55.0.0.0.2.ELsmp
In this output, the kernel version is 2.6.9 and the errata level (EL) is 55.0.0.0.2.ELsmp.
To check whether a particular required package is installed, issue:
# rpm -q package_name
The Cluster Verification Utility, as well as the OUI itself, will report any packages that are missing for the Grid Infrastructure installation. If a package is missing, you can install it with:
# rpm -Uvh package_name
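To check a whole list of packages at once, a small loop like the one below is handy (the package names are only a sample of the 11gR2 requirements; use CVU or the install guide for the complete list for your distribution and architecture):
for p in binutils compat-libstdc++-33 elfutils-libelf gcc gcc-c++ glibc \
         glibc-devel libaio libaio-devel libstdc++ libstdc++-devel make sysstat
do
    # report any package that rpm cannot find installed
    rpm -q "$p" > /dev/null 2>&1 || echo "MISSING: $p"
done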
- Preparing the server to install Grid Infrastructure.
i) Synchronize the time across all RAC nodes:
Oracle Clusterware 11g Release 2 (11.2) requires time synchronization across all nodes within a cluster when Oracle RAC is deployed. The Linux
# dateconfig
command provides a GUI through which you can set the same time on each node. But for accurate, ongoing time synchronization across the nodes you have two options: an operating system configured Network Time Protocol (NTP), or the Oracle Cluster Time Synchronization Service (CTSS).
I recommend using the Oracle Cluster Time Synchronization Service because it can synchronize time among cluster members without contacting an external time server.
Note that if you use NTP, the Oracle Cluster Time Synchronization Service daemon (ctssd) starts up in observer mode. If you do not have NTP daemons, ctssd starts up in active mode.
If you have NTP daemons on your server but you cannot configure them to synchronize time with a time server, and you want to use Cluster Time Synchronization Service to provide synchronization service in the cluster, then deactivate and deinstall the Network Time Protocol (NTP).
To deactivate NTP, do the following:
# /sbin/service ntpd stop
# chkconfig ntpd off
# rm /etc/ntp.conf
or rename it instead:
# mv /etc/ntp.conf /etc/ntp.conf.org
Also remove the following file:
/var/run/ntpd.pid
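Once Oracle Clusterware is installed and running, you can confirm which mode ctssd is in by running the following from the Grid home:
# crsctl check ctss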
ii) Create the required OS users and groups.
# groupadd -g 1000 oinstall
# groupadd -g 1200 dba
# useradd -u 1100 -g oinstall -G dba oracle
# mkdir -p /u01/app/11.2.0/grid
# mkdir -p /u01/app/oracle
# chown -R oracle:oinstall /u01
# chmod -R 775 /u01/
# passwd oracle
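You can verify the setup on each node with the id command; with the IDs used above the output should look like this:
$ id oracle
uid=1100(oracle) gid=1000(oinstall) groups=1000(oinstall),1200(dba)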
iii) Modify the Linux kernel parameters.
Open the /etc/sysctl.conf file and set the values as shown below.
#vi /etc/sysctl.conf
kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default=262144
net.core.wmem_default=262144
net.core.rmem_max=262144
net.core.wmem_max=262144
Note that the installer GUI does not give any warning about the kernel.sem parameter setting, and if you don't set this parameter manually you may get unexpected errors later.
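To make the new values take effect without a reboot, reload the settings and spot-check one of them on every node:
# /sbin/sysctl -p
# /sbin/sysctl -n kernel.sem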
iv) Configure the network.
Determine the cluster name. We set the cluster name to dc-db-cluster.
Determine the public, private, and virtual host names for each node in the cluster.
They are determined as follows:
For host dc-db-01, the public host name is dc-db-01
For host dc-db-02, the public host name is dc-db-02
For host dc-db-01, the private host name is dc-db-01-priv
For host dc-db-02, the private host name is dc-db-02-priv
For host dc-db-01, the virtual host name is dc-db-01-vip
For host dc-db-02, the virtual host name is dc-db-02-vip
Identify the interface names and associated IP addresses for all network adapters by executing the following command on each node:
# /sbin/ifconfig
On each node in the cluster, assign a public IP address with an associated network name to one network adapter. The public name for each node should be registered with your domain name system (DNS).
Also configure private IP addresses for the cluster member nodes on each node, using a subnet different from the public network.
Also determine the virtual IP address for each node in the cluster. These addresses and names should be registered in your DNS server. The virtual IP addresses must be on the same subnet as your public IP addresses.
Note that you do not need to configure these public, private, and virtual addresses manually in the /etc/hosts file.
You can test whether or not an interconnect interface is reachable using a ping command.
Define a SCAN that resolves to three IP addresses in your DNS.
My full IP Address assignment table is as following.
Identity | Host Node | Name | Type | Address | Address static or dynamic | Resolved by |
Node 1 Public | dc-db-01 | dc-db-01 | Public | 192.168.100.101 | Static | DNS |
Node 1 virtual | Selected by oracle clusterware | dc-db-01-vip | Virtual | 192.168.100.103 | Static | DNS and/ or hosts file |
Node 1 private | dc-db-01 | dc-db-01-priv | Private | 192.168.200.101 | Static | DNS, hosts file, or none |
Node 2 Public | dc-db-02 | dc-db-02 | Public | 192.168.100.102 | Static | DNS |
Node 2 virtual | Selected by oracle clusterware | dc-db-02-vip | Virtual | 192.168.100.104 | Static | DNS and/ or hosts file |
Node 2 private | dc-db-02 | dc-db-02-priv | Private | 192.168.200.102 | Static | DNS, hosts file, or none |
SCAN vip 1 | Selected by oracle clusterware | dc-db-cluster | Virtual | 192.168.100.105 | Static | DNS |
SCAN vip 2 | Selected by oracle clusterware | dc-db-cluster | Virtual | 192.168.100.106 | Static | DNS |
SCAN vip 3 | Selected by oracle clusterware | dc-db-cluster | Virtual | 192.168.100.107 | Static | DNS |
Enter your DNS nameserver address in your /etc/resolv.conf file.
# vi /etc/resolv.conf
nameserver 192.168.100.1
Verify the network configuration by using the ping command to test the connection from each node in your cluster to all the
other nodes.
$ ping -c 3 dc-db-01
$ ping -c 3 dc-db-02
v) Configure shared storage
a. Oracle RAC is a shared-everything database. All datafiles, clusterware files, and other database files must reside on storage shared by all nodes. Oracle strongly recommends using ASM for this shared storage.
b. When using Oracle ASM for either the Oracle Clusterware files or Oracle Database files, Oracle creates one Oracle ASM instance on each node in the cluster, regardless of the number of databases.
c. You need to prepare the storage for Oracle Automatic Storage Management (ASM). This preparation is necessary because, unless you have configured device persistence, a disk that appeared as /dev/sdg before a reboot can appear as /dev/sdh afterwards, and the permissions on the device are reset when the system restarts.
d. Installing the Linux ASMLib RPMs is the simplest solution to this storage administration problem. ASMLib provides persistent paths and permissions for storage devices used with Oracle ASM, eliminating the need to update udev or devlabel files with storage device paths and permissions.
e. You can download the ASMLib RPMs from http://www.oracle.com/technetwork/topics/linux/downloads/index.html: select the Downloads tab and click "Linux Drivers for Automatic Storage Management". You will see ASMLib RPMs for various operating systems such as SuSE Linux Enterprise Server 11, SuSE Linux Enterprise Server 10, Red Hat Enterprise Linux 5 AS, Red Hat Enterprise Linux 4 AS, SuSE Linux Enterprise Server 9, Red Hat Enterprise Linux 3 AS, SuSE Linux Enterprise Server 8 SP3, and Red Hat Advanced Server 2.1.
Select your OS from the list and download the oracleasmlib and oracleasm-support packages for your version of Linux. You must also download the oracleasm kernel driver package that matches the kernel you are running; use the uname -r command to determine the kernel version on your server. For example, if your kernel version is 2.6.18-194.8.1.el5, then you need to download the oracleasm driver for kernel 2.6.18-194.8.1.el5.
ASMLib 2.0 is delivered as a set of three Linux packages:
i) oracleasmlib-2.0 - the Oracle ASM libraries
ii) oracleasm-support-2.0 - utilities needed to administer ASMLib
iii) oracleasm - a kernel module for the Oracle ASM library
f. As a root user, install these three packages.
# rpm -Uvh oracleasm-support-2.1.3-1.el4.x86_64.rpm
# rpm -Uvh oracleasmlib-2.0.4-1.el4.x86_64.rpm
# rpm -Uvh oracleasm-2.6.9-55.0.12.ELsmp-2.0.3-1.x86_64.rpm
g. To configure ASMLib, issue:
# oracleasm configure -i
If you enter the command oracleasm configure without the -i flag, then you are shown the current configuration.
After you issue oracleasm configure -i, you will be prompted for:
- Default user to own the driver interface (example: oracle)
- Default group to own the driver interface (example: dba)
- Start Oracle Automatic Storage Management Library driver on boot (y/n): (provide: y)
- Fix permissions of Oracle ASM disks on boot? (y/n): (provide: y)
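A configuration session therefore looks roughly like this (the answers are the ones suggested above; the exact prompt wording varies between ASMLib versions):
# oracleasm configure -i
Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle Automatic Storage Management Library driver on boot (y/n): y
Fix permissions of Oracle ASM disks on boot? (y/n): y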
After it runs, it:
- Creates the /etc/sysconfig/oracleasm configuration file
- Creates the /dev/oracleasm mount point
- Mounts the ASMLib driver file system
Enter the following command to load the oracleasm kernel module:
# /usr/sbin/oracleasm init
Repeat the above steps on all nodes.
h. To mark a disk for use by Oracle ASM, enter the following command syntax, where ASM_DISK_NAME is the name you want to give to the Oracle ASM disk and candidate_disk is the name of the disk device that you want to mark:
# oracleasm createdisk ASM_DISK_NAME candidate_disk
In other words,
# /usr/sbin/oracleasm createdisk ASM_DISK_NAME device_partition_name
For example,
# oracleasm createdisk data1 /dev/sdf
Note that for Oracle Enterprise Linux and Red Hat Enterprise Linux version 5, when scanning, the kernel sees multipath devices as /dev/mapper/XXX entries. By default, the 2.6 kernel device file naming scheme, udev, creates the /dev/mapper/XXX names for human readability. Any configuration using ORACLEASM_SCANORDER should use the /dev/mapper/XXX entries.
So for a multipath device your command would look like:
# oracleasm createdisk data1 /dev/mapper/mpath1
You need to issue a createdisk command for each disk that will be used by Oracle ASM.
If you need to unmark a disk that was marked with a createdisk command, use:
# /usr/sbin/oracleasm deletedisk disk_name
After you have created all the ASM disks for your cluster, use the listdisks command to verify their availability:
# /usr/sbin/oracleasm listdisks
Note that you need to create the ASM disks only on one node. On all the other nodes in the cluster, use the scandisks command to
view the newly created ASM disks.
# /usr/sbin/oracleasm scandisks
After scanning for ASM disks, display the available ASM disks on each node to verify their availability:
# /usr/sbin/oracleasm listdisks
i. If you use ASMLib, then you do not need to ensure permissions and device path persistence in udev. If you do not use ASMLib, then you must create a custom rules file under /etc/udev/rules.d/.
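A minimal sketch of such a rules file, assuming the udev and scsi_id syntax of Enterprise Linux 5 (the file name, device match, disk name, and WWID are placeholders; substitute the value reported by scsi_id for your own device, and note that later releases use a different scsi_id invocation):
# vi /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="<your_disk_wwid>", NAME="asm-disk1", OWNER="oracle", GROUP="oinstall", MODE="0660"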
Preinstallation configuration is now complete. Next we will install Oracle Grid Infrastructure.
B. Installing Oracle Grid Infrastructure.
As the oracle user, run runInstaller from the Oracle Grid Infrastructure CD-ROM.
$ ./runInstaller
If you don't have the CD, download the software from http://download.oracle.com/otn/linux/oracle11g/R2/linux_11gR2_grid.zip, unzip it, and then run runInstaller.
Rather than discussing each step at length, I am providing a screenshot of each screen.
At this stage the Oracle Grid Infrastructure installation is complete. Now we need to install the Oracle Database software and create the database.
C. Installing the Oracle Database Software.
Insert the CD-ROM that contains the Oracle Database software and then simply run runInstaller as the oracle user. To keep the document short, I have pasted screenshots of only the first and last steps.
$ ./runInstaller
At this stage both the Oracle Grid Infrastructure installation and the Oracle Database software installation are done. Now you need to create the database.
D. Creating Oracle Database.
Simply log in as the oracle user.
Set ORACLE_HOME to your Oracle Database software home, not the Grid home.
$ export ORACLE_HOME=ORACLE_INSTALLATION_HOME_HERE
$ $ORACLE_HOME/bin/dbca
Now the steps are so simple that I would not want to paste screenshot here.
When DBCA prompts for the Global Database Name, enter it in the form:
DB_NAME.world
Reader comments:
Thanks a lot for the post, but I still didn't understand what the three SCAN addresses are defined for. I've read many posts and articles; they say to define public, virtual, and private IPs for each node, but they don't mention SCAN addresses. Can you please explain more?
Greetings, what about configuring SSH and user equivalence? Is it also necessary as a prerequisite before installing Oracle RAC 11g?