INSTALLATION OF ORACLE RAC 10g RELEASE 2 ON HP-UX
CONTENTS

• HARDWARE CONSIDERATIONS
• SOFTWARE CONSIDERATIONS
• STORAGE CONSIDERATIONS
• CLUSTER MANAGEMENT CONSIDERATIONS
• INSTALLATION OF ORACLE SOFTWARE
HARDWARE CONSIDERATIONS:
1. SYSTEM REQUIREMENTS
2. NETWORK REQUIREMENTS
SYSTEM PARAMETERS REQUIRED BEFORE INSTALLATION OF ORACLE RAC 10g RELEASE 2 ON HP-UX: -

PARAMETER NAME               RECOMMENDED VALUE                              AVAILABLE VALUE
RAM SIZE                     512 MB                                         16 GB
SWAP SPACE                   2 * RAM SIZE (approx. 1 GB)                    20 GB
DISK SPACE IN TMP DIRECTORY  400 MB                                         3 GB
TOTAL DISK SPACE             1 GB                                           20 GB
OPERATING SYSTEM             HP-UX 11i (11.11, PA-RISC),                    HP-UX 11.23 (PA-RISC)
                             HP-UX 11i v2 (11.23, Itanium2 / PA-RISC)
COMPILER LINKS               9 LINKS NEED TO BE INSTALLED                   THE LINKS ARE INSTALLED
ASYNCHRONOUS I/O             PRESENT BY DEFAULT                             PRESENT BY DEFAULT
HP SERVICEGUARD              HP Serviceguard A.11.16, SGeRAC A.11.16        HP Serviceguard A.11.16, SGeRAC A.11.16
NETWORK PARAMETERS REQUIRED BEFORE INSTALLATION OF ORACLE RAC 10g RELEASE 2 ON HP-UX: -

PARAMETER NAME                                   RECOMMENDED VALUE                                 AVAILABLE VALUE
NETWORK ADAPTERS                                 TWO (1. PUBLIC INTERFACE, 2. PRIVATE INTERFACE)   VALUES ARE ASSIGNED
INTERFACE NAME ASSOCIATED WITH NETWORK ADAPTERS  SAME ON ALL THE NODES                             INTERFACE NAMES ARE PROVIDED
REMOTE COPY (rcp)                                ENABLED                                           ENABLED
NAME OF THE TWO NODES MADE AT CRIS:
1. prod_db1
2. prod_db2
PARAMETER NAME                                                                               GRANTED VALUE
PUBLIC IP ADDRESS & ASSOCIATED HOSTNAME REGISTERED IN THE DNS FOR PUBLIC NETWORK INTERFACE   THE REQUIRED IP ADDRESSES ARE PROVIDED
PRIVATE IP ADDRESS FOR PRIVATE NETWORK INTERFACE                                             THE REQUIRED IP ADDRESSES ARE PROVIDED
VIP ADDRESS PER NODE WITH DEFINED HOSTNAME, RESOLVED THROUGH DNS                             THE REQUIRED IP ADDRESSES ARE PROVIDED
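The public, private, and VIP addresses granted above must resolve identically on both nodes. As an illustration only, a two-node /etc/hosts layout might look like the following; every IP address below is a hypothetical placeholder (the real CRIS addresses are not listed in this document), and the sketch writes to a scratch file rather than the live /etc/hosts:

```shell
# Hypothetical /etc/hosts entries for a two-node RAC cluster.
# All IP addresses below are illustrative placeholders, not real CRIS values.
HOSTS_EXAMPLE=./hosts.example

cat > "$HOSTS_EXAMPLE" <<'EOF'
# Public interfaces (registered in DNS)
10.0.0.11    prod_db1
10.0.0.12    prod_db2
# Private interconnect (cluster-internal only)
192.168.0.11 prod_db1-priv
192.168.0.12 prod_db2-priv
# Virtual IPs, one per node (also resolved through DNS)
10.0.0.21    prod_db1-vip
10.0.0.22    prod_db2-vip
EOF

# Show the generated example
cat "$HOSTS_EXAMPLE"
```

On the real nodes, the same entries (with the granted addresses) go into /etc/hosts on both systems so that name resolution does not depend on DNS availability during cluster startup.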
SOFTWARE CONSIDERATIONS:
1. PATCHES REQUIRED
2. KERNEL PARAMETER SETTINGS
PATCHES REQUIRED BEFORE INSTALLATION OF ORACLE RAC 10g RELEASE 2 ON HP-UX: -

HP-UX 11.23 (ITANIUM2 / PA-RISC):
• HP-UX B.11.23.0409 OR LATER
• QUALITY PACK BUNDLE: LATEST PATCH BUNDLE: QUALITY PACK PATCHES FOR HP-UX 11I V2, MAY 2005
• HP-UX 11.23 PATCHES:
  PHSS_32502: ARIES CUMULATIVE PATCH (REPLACED PHSS_29658)
  PHSS_33275: LINKER + FDP CUMULATIVE PATCH (REPLACED PHSS_31856, PHSS_29660)
  PHSS_29655: AC++ COMPILER (A.05.52)
  PHSS_29656: HP C COMPILER (A.05.52)
  PHSS_29657: U2COMP/BE/PLUGIN LIBRARY PATCH
  PHKL_31500: 11.23 SEPT04 BASE PATCH (REPLACED PHKL_29817, PHCO_29957, PHKL_30089, PHNE_30090, PHNE_30093, PHKL_30234, PHKL_30245)

ALL THE PATCHES ARE INSTALLED AND THE REQUIRED SOFTWARE CONSIDERATIONS ARE MET.
KERNEL CONFIGURATION REQUIRED BEFORE INSTALLATION OF ORACLE RAC 10g RELEASE 2 ON HP-UX: -

PARAMETER NAME   RECOMMENDED VALUE            ASSIGNED VALUE
nproc            4096                         4200
msgmni           nproc                        4200
ksi_alloc_max    (nproc*8)                    33600
maxdsiz          1073741824                   1073741824
maxdsiz_64bit    2147483648                   2147483648
maxuprc          ((nproc*9)/10)               3780
msgmap           (msgmni+2)                   4202
msgtql           nproc                        4200
msgseg           (nproc*4); at least 32767    32767
ninode           (8*nproc+2048)               35648
ncsize           (ninode+1024)                36672
nflocks          nproc                        4200
semmni           (nproc*2)                    4200
semmns           (semmni*2)                   8400
semmnu           (nproc-4)                    4196
shmmax           1073741824                   4292870144
shmmni           512                          512
shmseg           120                          120
swchunk          4096                         4096
semvmx           32767                        32767
vps_ceiling      64                           64
maxssiz          134217728                    134217728
maxssiz_64bit    1073741824                   1073741824
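The derived kernel values follow mechanically from the formulas once nproc is fixed at 4200. As a sanity check, the arithmetic can be reproduced in plain POSIX shell; the script is only an illustration of the formulas, not part of the installation itself:

```shell
# Recompute the derived kernel parameters from nproc, using the
# formulas from the table (nproc = 4200 as assigned at CRIS).
nproc=4200

ksi_alloc_max=$((nproc * 8))       # (nproc*8)
maxuprc=$((nproc * 9 / 10))        # ((nproc*9)/10)
msgmni=$nproc                      # nproc
msgmap=$((msgmni + 2))             # (msgmni+2)
ninode=$((8 * nproc + 2048))       # (8*nproc+2048)
ncsize=$((ninode + 1024))          # (ninode+1024)
semmnu=$((nproc - 4))              # (nproc-4)

echo "ksi_alloc_max=$ksi_alloc_max maxuprc=$maxuprc msgmap=$msgmap"
echo "ninode=$ninode ncsize=$ncsize semmnu=$semmnu"
```

The printed values match the assigned column of the table. On HP-UX 11.23 the live kernel values can typically be inspected with kctune (kmtune on 11.11).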
STORAGE CONSIDERATIONS:
1. STORAGE OPTIONS FOR ORACLE CRS, DATABASE AND RECOVERY FILES
2. CONFIGURING DISKS FOR AUTOMATIC STORAGE MANAGEMENT
3. CONFIGURING RAW LOGICAL VOLUMES
STORAGE CONSIDERATIONS FOR ORACLE CRS, DATABASE AND RECOVERY FILES BEFORE INSTALLATION OF ORACLE RAC 10g RELEASE 2 ON HP-UX: -

The following table shows the storage options supported for storing Oracle Cluster Ready Services (CRS) files, Oracle Database files, and Oracle Database recovery files. Oracle Database files include data files, control files, redo log files, the server parameter file, and the password file. Oracle CRS files include the Oracle Cluster Registry (OCR) and the CRS voting disk. Oracle recovery files include archive log files.

STORAGE OPTION                                      CRS   DATABASE   RECOVERY
AUTOMATIC STORAGE MANAGEMENT                        NO    YES        YES
SHARED RAW LOGICAL VOLUMES (REQUIRES SGeRAC)        YES   YES        NO
SHARED RAW DISK DEVICES AS PRESENTED TO HOSTS       YES   YES        NO
SHARED RAW PARTITIONS (ITANIUM2 ONLY)               YES   YES        NO
VERITAS CFS (PLANNED SUPPORT FOR RAC10G IN DEC05)   YES   YES        YES
CONFIGURING OF CRS FILES BEFORE INSTALLATION OF ORACLE RAC 10g RELEASE 2 ON HP-UX: -

OCR (Oracle Cluster Registry):
  File size: 100 MB
  Name given to the file: ora_ocr_raw_100m
  Comments: This raw logical volume was created only once on the cluster. If more than one database is created on the cluster, they all share the same Oracle Cluster Registry.

Oracle CRS voting disk:
  File size: 20 MB
  Name given to the file: ora_vote_raw_20m
  Comments: This raw logical volume also needs to be created only once on the cluster. If more than one database is created on the cluster, they all share the same Oracle CRS voting disk.
The command given on both the nodes to make the disks available and the resultant output obtained are as follows: -

# /usr/sbin/ioscan -fun -C disk

The output from this command is similar to the following:

Class  I  H/W Path        Driver  S/W State  H/W Type  Description
==========================================================================
disk   4  255/255/0/0.0   sdisk   CLAIMED    DEVICE    HSV100 HP
          /dev/dsk/c8t0d0  /dev/rdsk/c8t0d0
disk   5  255/255/0/0.1   sdisk   CLAIMED    DEVICE    HSV100 HP
          /dev/dsk/c8t0d1  /dev/rdsk/c8t0d1

This command displays information about each disk attached to the system, including the block device name (/dev/dsk/cxtydz) and the character raw device name (/dev/rdsk/cxtydz).
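When many disks are presented, the raw device paths can be pulled out of the ioscan listing mechanically. The sketch below runs against a captured copy of output like the one above rather than a live ioscan (the exact listing format can vary between HP-UX releases, so treat the extraction pattern as an assumption):

```shell
# Extract character raw device paths (/dev/rdsk/...) from saved
# "ioscan -fun -C disk" output. Works on a captured text file so it
# can be tried anywhere; on a real node, ioscan would be piped in.
IOSCAN_OUT=./ioscan.sample
cat > "$IOSCAN_OUT" <<'EOF'
Class  I  H/W Path        Driver  S/W State  H/W Type  Description
==========================================================================
disk   4  255/255/0/0.0   sdisk   CLAIMED    DEVICE    HSV100 HP
          /dev/dsk/c8t0d0  /dev/rdsk/c8t0d0
disk   5  255/255/0/0.1   sdisk   CLAIMED    DEVICE    HSV100 HP
          /dev/dsk/c8t0d1  /dev/rdsk/c8t0d1
EOF

# One raw device path per line
tr ' ' '\n' < "$IOSCAN_OUT" | grep '^/dev/rdsk/' > raw_devices.txt
cat raw_devices.txt
```

The resulting list is a convenient input for the per-disk permission and pvdisplay checks described in the next section.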
CONFIGURING DISKS FOR AUTOMATIC STORAGE MANAGEMENT BEFORE INSTALLATION OF ORACLE RAC 10g RELEASE 2 ON HP-UX: -

Automatic Storage Management (ASM) is a feature in Oracle Database 10g that provides the database administrator with a simple storage management interface that is consistent across all server and storage platforms. As a vertically integrated file system and volume manager, purpose-built for Oracle database files, ASM provides the performance of async I/O with the easy management of a file system. ASM provides capability that saves the DBA's time and provides flexibility to manage a dynamic database environment with increased efficiency.

Automatic Storage Management is part of the database kernel. It is linked into $ORACLE_HOME/bin/oracle so that its code may be executed by all database processes. One portion of the ASM code allows for the start-up of a special instance called an ASM instance. ASM instances do not mount databases, but instead manage the metadata needed to make ASM files available to ordinary database instances. ASM instances manage the metadata describing the layout of the ASM files. Database instances access the contents of ASM files directly, communicating with an ASM instance only to get information about the layout of these files. This requires that a second portion of the ASM code run in the database instance, in the I/O path.

Four disk groups are created at CRIS, namely ASMdb1, ASMdb2, ASMdb3 and ASMARCH. For each disk that has to be added to a disk group, enter the following command to verify that it is not already part of an LVM volume group:

# /sbin/pvdisplay /dev/dsk/cxtydz

If this command displays volume group information, the disk is already part of a volume group. The disks that you choose must not be part of an LVM volume group. The device paths must be the same from both systems; if they are not the same, they are mapped to one virtual device name.
The following commands are executed to change the owner, group, and permissions on the character raw device file for each disk that is added to a disk group:

# chown oracle:dba /dev/rdsk/cxtydz
# chmod 660 /dev/rdsk/cxtydz

The redundancy level chosen for the ASM disk group is External Redundancy, since the underlying storage is an intelligent subsystem, an HP StorageWorks EVA or HP StorageWorks XP.

Useful ASM V$ views:
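The per-disk chown/chmod step invites a small helper loop. The sketch below only prints the commands for later review instead of executing them (a dry run), since on a real node they must be run as root; the device names are the hypothetical ones from the ioscan example:

```shell
# Dry run: emit the ownership/permission commands for each candidate
# ASM disk. Review the output, then run it as root on the real node.
emit_asm_disk_cmds() {
    for dev in "$@"; do
        echo "chown oracle:dba $dev"
        echo "chmod 660 $dev"
    done
}

# Hypothetical device names; substitute the real candidate disks.
emit_asm_disk_cmds /dev/rdsk/c8t0d0 /dev/rdsk/c8t0d1 > asm_disk_cmds.sh
cat asm_disk_cmds.sh
```

Keeping the generated commands in a file also leaves a record of exactly which devices were handed to ASM on each node.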
V$ASM_CLIENT
  In the ASM instance: shows each database instance using an ASM disk group.
  In the DB instance: shows the ASM instance if the database has open ASM files.

V$ASM_DISK
  In the ASM instance: shows disks discovered by the ASM instance, including disks which are not part of any disk group.
  In the DB instance: shows a row for each disk in the disk groups in use by the database instance.

V$ASM_DISKGROUP
  In the ASM instance: shows disk groups discovered by the ASM instance.
  In the DB instance: shows each disk group mounted by the local ASM instance.

V$ASM_FILE
  In the ASM instance: displays all files for each ASM disk group.
  In the DB instance: returns no rows.
CONFIGURING RAW LOGICAL VOLUMES BEFORE INSTALLATION OF ORACLE RAC 10g RELEASE 2 ON HP-UX: -

CREATE A RAW DEVICE FOR                 FILE SIZE                              SAMPLE NAME
SYSTEM tablespace                       500 MB                                 dbname_system_raw_500m
SYSAUX tablespace                       300 + (number of instances * 250) MB   dbname_sysaux_raw_800m
An undo tablespace per instance         500 MB                                 dbname_undotbsn_raw_500m
EXAMPLE tablespace                      160 MB                                 dbname_example_raw_160m
USERS tablespace                        120 MB                                 dbname_users_raw_120m
Two ONLINE redo log files per instance  120 MB each                            dbname_redon_m_raw_120m
First and second control file           110 MB each                            dbname_control[1|2]_raw_110m
TEMP tablespace                         250 MB                                 dbname_temp_raw_250m
Server parameter file (SPFILE)          5 MB                                   dbname_spfile_raw_5m
Password file                           5 MB                                   dbname_pwdfile_raw_5m
OCR (Oracle Cluster Registry)           100 MB                                 ora_ocr_raw_100m
Oracle CRS voting disk                  20 MB                                  ora_vote_raw_20m
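Creating one raw logical volume per table row is repetitive, so the step can be scripted. The sketch below only prints hypothetical lvcreate commands for a subset of the files (volume group name and sizes taken from the table; the script is illustrative and assumes HP-UX LVM's lvcreate syntax, where -L takes the size in MB):

```shell
# Dry run: print the lvcreate commands for some of the raw logical
# volumes from the table above. "dbname" and /dev/vg_rac are the
# example names used throughout this document.
VG=/dev/vg_rac
# name:size-in-MB pairs, as in the table
LVS="dbname_system_raw_500m:500 dbname_temp_raw_250m:250 \
ora_ocr_raw_100m:100 ora_vote_raw_20m:20"

for spec in $LVS; do
    name=${spec%:*}    # logical volume name
    size=${spec#*:}    # size in MB
    echo "lvcreate -n $name -L $size $VG"
done > lvcreate_cmds.sh
cat lvcreate_cmds.sh
```

After reviewing the generated file, the commands would be run as root on one node; the volume group is then imported on the other node so both see the same raw devices.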
Check that the volume groups are properly created and available using the following commands:

# strings /etc/lvmtab
# vgdisplay -v /dev/vg_rac

Change the permissions of the database volume group vg_rac to 777, then change the permissions of all raw logical volumes to 660 and the owner to oracle:dba:

# chmod 777 /dev/vg_rac
# chmod 660 /dev/vg_rac/r*
# chown oracle:dba /dev/vg_rac/r*

Change the owner and permissions of the OCR logical volume:

# chown root:oinstall /dev/vg_rac/rora_ocr_raw_100m
# chmod 640 /dev/vg_rac/rora_ocr_raw_100m
To enable the Database Configuration Assistant (DBCA) later to identify the appropriate raw device for each database file, a raw device mapping file must be created, as follows:

Set the ORACLE_BASE environment variable:

$ export ORACLE_BASE=/opt/oracle/product

Create a database file subdirectory under the Oracle base directory and set the appropriate owner, group, and permissions on it:

# mkdir -p $ORACLE_BASE/oradata/
# chown -R oracle:oinstall $ORACLE_BASE/oradata
# chmod -R 775 $ORACLE_BASE/oradata

Change directory to the $ORACLE_BASE/oradata/dbname directory. Enter a command similar to the following to create a text file that can be used to create the raw device mapping file:

# find /dev/vg_name -user oracle -name 'r*' -print > dbname_raw.conf

Create the dbname_raw.conf file so that it looks similar to the following:

system=/dev/vg_name/rdbname_system_raw_500m
sysaux=/dev/vg_name/rdbname_sysaux_raw_800m
example=/dev/vg_name/rdbname_example_raw_160m
users=/dev/vg_name/rdbname_users_raw_120m
temp=/dev/vg_name/rdbname_temp_raw_250m
undotbs1=/dev/vg_name/rdbname_undotbs1_raw_500m
undotbs2=/dev/vg_name/rdbname_undotbs2_raw_500m
redo1_1=/dev/vg_name/rdbname_redo1_1_raw_120m
redo1_2=/dev/vg_name/rdbname_redo1_2_raw_120m
redo2_1=/dev/vg_name/rdbname_redo2_1_raw_120m
redo2_2=/dev/vg_name/rdbname_redo2_2_raw_120m
control1=/dev/vg_name/rdbname_control1_raw_110m
control2=/dev/vg_name/rdbname_control2_raw_110m
spfile=/dev/vg_name/rdbname_spfile_raw_5m
pwdfile=/dev/vg_name/rdbname_pwdfile_raw_5m

When configuring the Oracle user's environment, set the DBCA_RAW_CONFIG environment variable to specify the full path to this file:

$ export DBCA_RAW_CONFIG=$ORACLE_BASE/oradata/dbname/dbname_raw.conf
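The mapping file can be generated rather than typed by hand: each key is just the raw device name with the leading "r", the "dbname_" prefix, and the trailing size suffix stripped. The sketch below demonstrates this on a scratch directory standing in for /dev/vg_name, so it can be tried without the real volume group:

```shell
# Build a DBCA raw-device mapping file from the device names themselves.
# A scratch directory stands in for /dev/vg_name; on a real system the
# listing from `find /dev/vg_name -user oracle -name 'r*'` would be used.
VG_DIR=./vg_name_example
mkdir -p "$VG_DIR"
for lv in rdbname_system_raw_500m rdbname_temp_raw_250m \
          rdbname_control1_raw_110m rdbname_spfile_raw_5m; do
    : > "$VG_DIR/$lv"
done

# key = name without leading "rdbname_" and without "_raw_<size>m" suffix
for path in "$VG_DIR"/r*; do
    base=$(basename "$path")
    key=$(echo "$base" | sed -e 's/^rdbname_//' -e 's/_raw_[0-9]*m$//')
    echo "$key=$path"
done > dbname_raw.conf
cat dbname_raw.conf
```

Generating the file this way keeps the keys consistent with the logical volume names, which is exactly what DBCA matches against when DBCA_RAW_CONFIG is set.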
CLUSTER MANAGEMENT CONSIDERATIONS:
1. CONFIGURATION OF HP SERVICEGUARD CLUSTER

CLUSTER MANAGEMENT CONSIDERATIONS BEFORE INSTALLATION OF ORACLE RAC 10g RELEASE 2 ON HP-UX: -

Oracle RAC 10g includes its own clusterware and package management solution with the database product. This clusterware is included as part of the Oracle RAC 10g bundle. Oracle Clusterware consists of Oracle Cluster Ready Services (CRS) and Oracle Cluster Synchronization Services (CSS). CRS supports services and workload management and helps to maintain the continuous availability of the services. CRS also manages resources such as the virtual IP (VIP) address for the node and the global services daemon. CSS provides cluster management functionality in case no vendor clusterware such as HP Serviceguard is used.

CONFIGURATION OF HP SERVICEGUARD CLUSTER: -
After all the LAN cards are installed and configured, and the RAC volume group and the cluster lock volume group(s) are configured, cluster configuration is started.

Activate the lock disk on the configuration node ONLY. The lock volume can only be activated on the node where the cmapplyconf command is issued, so that the lock disk can be initialized accordingly:

# vgchange -a y /dev/vg_rac

Create a cluster configuration template:

# cmquerycl -n nodeA -n nodeB -v -C /etc/cmcluster/rac.asc

Check the cluster configuration:

# cmcheckconf -v -C rac.asc

Create the binary configuration file and distribute the cluster configuration to all the nodes in the cluster:

# cmapplyconf -v -C rac.asc

The cluster is not started until the cmrunnode command is run on each node or the cmruncl command is run. De-activate the lock disk on the configuration node after the cmapplyconf command:

# vgchange -a n /dev/vg_rac

Start up the cluster and view it to be sure it is up and running. Start the cluster from any node in the cluster:

# cmruncl -v

or, on each node:

# cmrunnode -v

Make all RAC volume groups and cluster lock volume groups shareable and cluster aware (not packages) from the cluster configuration node. This has to be done only once:

# vgchange -S y -c y /dev/vg_rac

Then, on all the nodes, activate the volume group in shared mode in the cluster. This has to be done each time the cluster is started:

# vgchange -a s /dev/vg_rac

Check the cluster status:

# cmviewcl -v
INSTALLATION OF ORACLE SOFTWARE:
1. INSTALLATION OF ORACLE CLUSTER READY SERVICES
2. INSTALLATION OF ORACLE DATABASE RAC 10g
3. CREATION OF ORACLE DATABASE USING DATABASE CONFIGURATION ASSISTANT
4. ORACLE ENTERPRISE MANAGER 10g DATABASE CONTROL
INSTALLATION OF ORACLE CLUSTER READY SERVICES: -

Before the installation of CRS, a user is created who owns the Oracle RAC software. Before CRS is installed, the storage option that is to be used for the Oracle Cluster Registry (100 MB) and the CRS voting disk (20 MB) is chosen. Automatic Storage Management cannot be used to store these files, because they must be accessible before any Oracle instance starts. The display is to be set first, before the installation of the CRS. The steps involved in the installation of CRS are as follows: -
• Login as the Oracle user and set the ORACLE_HOME environment variable to the CRS home directory. Then start the Oracle Universal Installer from Disk1 by issuing the command $ ./runInstaller.sh. Click Next on the OUI Welcome screen.
• Enter the inventory location and oinstall as the UNIX group name information into the Specify Inventory Directory and Credentials page, and click Next. The OUI dialog then indicates that you should run the oraInventory location/orainstRoot.sh script. Run the orainstRoot.sh script as the root user, and click Continue.
• The Specify File Locations page contains predetermined information for the source of the installation files and the target destination information. Enter the CRS home name and its location in the target destination.
• In the next Cluster Configuration screen, the cluster name as well as the node information is specified. If HP Serviceguard is running, then the OUI installs CRS on each node on which the OUI detects that HP Serviceguard is running. If HP Serviceguard is not running, then the OUI is used to select the nodes on which to install CRS.
• The private node name is used by Oracle for Cache Fusion processing. The private node name is configured in the /etc/hosts file of each node in the cluster. The interface names associated with the network adapters for each network are the same on all nodes, e.g. lan0 for the private interconnect and lan1 for the public interconnect.
• In the Private Interconnect Enforcement page, the OUI displays a list of cluster-wide interfaces. Here, with the use of drop-down menus, each interface is specified as Public or Private.
• When Next is clicked on the Private Interconnect Enforcement page, the OUI looks for the Oracle Cluster Registry file ocr.loc in the /var/opt/oracle directory. If the ocr.loc file already exists, and if the ocr.loc file has a valid entry for the Oracle Cluster Registry (OCR) location, then the Voting Disk Location page appears; otherwise, the Oracle Cluster Registry Location Information page appears and the ocr.loc path is specified there.
• On the Voting Disk Information page, a complete path and file name for the file in which the voting disk is to be stored is specified, and Next is clicked. This must be a shared raw device (/dev/rdsk/cxtxdx).
• It is verified that the OUI should install the components shown on the Summary page, and then the components are installed. During the installation, the OUI first copies the software to the local node and then copies the software to the remote nodes.
• Then the OUI displays a dialog indicating that the root.sh script must be run on all the nodes. Execution of the root.sh script is done on one node at a time, and OK is clicked in the dialog that root.sh displays after it completes each session. Another session of root.sh is started on another node after the previous root.sh execution is complete.
• When the OUI displays the End of Installation page, click Exit to exit the Installer.
INSTALLATION OF ORACLE DATABASE RAC 10g: -

This part describes phase two of the installation procedures for installing the Oracle Database 10g with Real Application Clusters (RAC).

• Login as the Oracle user; the ORACLE_HOME environment variable is set to the Oracle home directory. Then start the Oracle Universal Installer from Disk1 by issuing the command $ ./runInstaller.
• When the OUI displays the Welcome page, click Next, and the OUI displays the Specify File Locations page. The Oracle home name and path that is used in this step must be different from the home that was used during the CRS installation in phase one.
• On the Specify Hardware Cluster Installation Mode page, an installation mode is selected. The Cluster Installation mode is selected by default when the OUI detects that this installation is performed on a cluster. In addition, the local node is always selected for the installation. Additional nodes that are to be part of this installation session are selected; click Next.
• On the Install Type page, Enterprise Edition is selected.
• On the Create a Starter Database page, a software installation only is chosen.
• The Summary page displays the software components that the OUI will install and the space available in the Oracle home, with a list of the nodes that are part of the installation session. The details about the installation that appear on the Summary page are verified; click Install, or click Back to revise the installation. During the installation, the OUI copies the software to the local node and then copies the software to the remote nodes.
• Then the OUI prompts to run the root.sh script on all the selected nodes. Execution of the root.sh script is performed on one node at a time. The first root.sh script brings up the Virtual Internet Protocol Configuration Assistant (VIPCA). After the VIPCA completes, the root.sh script is run on the second node.
• On the Public Network Interfaces page, the public network interface cards (NICs) to which VIP addresses are to be assigned are selected.
• On the IP Address page, an unused (unassigned) public virtual IP address for each node displayed on the OUI page is assigned; click Next. If the virtual hostname / virtual IP address is not yet known in the DNS, it has to be configured in the /etc/hosts file on both systems. Please ensure that the same subnet mask that is also configured for the public NIC is entered.
• After Next is clicked, the VIPCA displays a Summary page. Review the information on this page and click Finish. A progress dialog appears while the VIPCA configures the virtual IP addresses with the network interfaces that were selected. The VIPCA then creates and starts the VIPs, GSD, and Oracle Notification Service (ONS) node applications.
• When the configuration is complete, click OK to see the VIPCA session results. Review the information on the Configuration Results page, and click Exit to exit the VIPCA.
• /oracle/10g/root.sh is run on the second node and the output is checked with the help of # crs_stat -t, which gives a compact output.
CREATION OF ORACLE DATABASE USING DATABASE CONFIGURATION ASSISTANT: -

• Connect as the oracle user and start the Database Configuration Assistant by issuing the command $ dbca. The first page that the DBCA displays is the Welcome page for RAC. The DBCA displays this RAC-specific Welcome page only if the Oracle home from which it is invoked was cluster installed. If the DBCA does not display this Welcome page for RAC, then the DBCA was unable to detect whether the Oracle home is cluster installed. Select Real Application Clusters database, and click Next.
• At the Configure Database Options page, select Create a database and click Next.
• At the Node Selection page, the DBCA highlights the local node by default. The other nodes which we want to configure as members of our cluster database are selected; click Next.
• The templates on the Database Templates page are Custom Database, Transaction Processing, Data Warehouse, and General Purpose. The General Purpose database is selected; click Next.
• At the Database Identification page, the global database name and the Oracle system identifier (SID) prefix for our database are entered; click Next.
• On the Management Options page, we can choose to manage our database with Enterprise Manager. On UNIX-based systems only, we can also choose either the Grid Control or Database Control option if we select Enterprise Manager database management.
• Then, at the Database Credentials page, we can enter the passwords for our database.
• At the Storage Options page, we selected a storage type for the database. On the HP-UX platform there is no cluster file system.
• To initiate the creation of the required ASM instance, the password for the SYS user of the ASM instance is supplied. Either an IFILE or an SPFILE can be selected on shared storage for the instances. After the required information is entered, click Next to create the ASM instance.
• Once the instance is created, the DBCA proceeds to the ASM Disk Groups page, which allows creating a new disk group, adding disks to an existing disk group, or selecting a disk group for database storage. When a new ASM instance is created, there will be no disk groups from which to select, so a new one is created by clicking Create New to open the Create Disk Group page.
• At the Create Disk Group page, the disk group name is entered, the redundancy level for the group is checked, the external redundancy level is selected, and Next is clicked.
• At the Database File Locations page, Oracle-Managed Files are selected.
• On the Recovery Configuration page, when ASM is used, we can also select the flash recovery area and its size.
• When a preconfigured database template is selected, such as the General Purpose template, the DBCA displays the control files, datafiles, and redo logs on the Database Storage page. The folder and the file name underneath the folder are selected to edit the file name.
• On the Creation Options page, Create Database is selected and Finish is clicked. The Summary dialog information is reviewed and OK is clicked to create the database.
ORACLE ENTERPRISE MANAGER 10g DATABASE CONTROL: -

When the database software is installed, the OUI also installs the software for Oracle Enterprise Manager Database Control and integrates this tool into the cluster environment. Once installed, Enterprise Manager Database Control is fully configured and operational for RAC. We can also install Enterprise Manager Grid Control onto other client machines outside our cluster to monitor multiple RAC and single-instance Oracle database environments.

• Start the DBConsole agent on one of the cluster nodes as the oracle user: $ emctl start dbconsole
• To connect to the Oracle Enterprise Manager Database Control (default port 5500), open the following URL in the web browser: http://<hostname>:5500/em
• Login as sys/manager with the sysdba profile.
• Accept the licensing.
• Now the OEM Database Control Home Page is reached.

With this, the installation of Oracle 10g RAC on HP-UX at CRIS was done and the project was completed successfully.