Wednesday, September 29, 2010

How to install Oracle 10g on windows vista or windows 7

Step 01: Download the appropriate Oracle software version.
The first step is to download the Oracle software for Windows Vista or Windows 7. Check your hardware and operating system to decide whether you need the 32-bit or the 64-bit Oracle download. The post http://arjudba.blogspot.com/2008/05/how-to-identify-os-or-oracle-64-bit-or.html explains how to identify whether your OS and Oracle are 32-bit or 64-bit; a quick check is also sketched below.
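
As a quick sketch, on most Windows systems you can check the OS architecture from a command prompt (note that a 32-bit command prompt running on 64-bit Windows may still report x86):

> echo %PROCESSOR_ARCHITECTURE%
AMD64

AMD64 usually indicates 64-bit Windows, while x86 usually indicates 32-bit Windows.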

You can get the Oracle software in two ways.
Either download from
OTN (technet.oracle.com)
or from
eDelivery (edelivery.oracle.com)

Here I am showing the download from OTN (technet.oracle.com).
You can go directly to http://www.oracle.com/technetwork/database/database10g/downloads/index.html to download Oracle 10g.

If you want to download 64-bit Oracle for your Windows Vista or Windows 7, go to http://www.oracle.com/technetwork/database/10204-winx64-vista-win2k8-082253.html.

If you want to download 32-bit Oracle for your Windows Vista or Windows 7, go to http://www.oracle.com/technetwork/database/10203vista-087538.html.

In summary,
For 32bit Windows: Oracle Database 10g Release 2 (10.2.0.3/10.2.0.4) for Microsoft Windows Vista and Windows 2008

For 64bit Windows: Oracle Database 10g Release 2 (10.2.0.4) for Microsoft Windows Vista x64 and Microsoft Windows Server 2008 x64

The 32-bit download can be installed on Windows Vista, Windows Server 2008 and Windows 7.
The 64-bit download can be installed on Windows Vista x64, Windows Server 2008 x64, Windows 7 x64 and Windows Server 2008 R2 x64.

Step 02: Run the installer (OUI).
Double-click setup.exe. Ignore the following prerequisite errors displayed by Oracle Universal Installer and complete the installation:
i) Checking operating system requirements
ii) Checking service pack requirements
(tick the checkbox to ignore the failed checks)

Another option is to run the installer from the command line:
change to the directory where the Oracle files were extracted,
then run the command:
> setup.exe -ignoreSysPrereqs
(the switch is case sensitive)
This makes the installer skip the prerequisite checks.

Renaming global database name hang with row cache enqueue lock

Problem Description
Renaming the global database name hangs and never completes; that is, the following statement hangs:

SQL> alter database rename global_name to orcl.world;

From the alert log we see that a trace file is generated containing the following entry.

>>> WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! <<< row cache enqueue: session: 0x11d4fd018, mode: N, request: X

 Cause of the Problem
The database was using database links, which in turn use the database GLOBAL_NAME. The ALTER DATABASE RENAME requires an exclusive lock and was waiting for the database link sessions to end and release their lock on the underlying data dictionary table.

Solution of the Problem
Ensure that no active database link sessions currently exist, and then issue the rename global_name command.

Alternatively, you can do the following:

SQL> shutdown immediate
SQL> startup restrict
SQL> alter database rename global_name to new_name;
SQL> alter system disable restricted session;
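
In either case you can verify the result afterwards (a minimal sketch; GLOBAL_NAME and V$DBLINK are standard views, but note that V$DBLINK only shows database links opened by your own session):

SQL> select * from global_name;
SQL> select db_link, open_cursors, in_transaction from v$dblink;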

Primary DB freezes with waited too long for a row cache enqueue lock

Problem Description
On an Oracle Database 10.2.0.3.0 Data Guard configuration managed by the Broker, whenever there is an attempt to restart the standby in read-only mode, the following errors occur:

ERROR: WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
ORA-12170: TNS:Connect timeout occurred
PING[ARC6]: Error 3113 when pinging standby
ARC6: Attempting destination LOG_ARCHIVE_DEST_2 network reconnect (1089)

The errors occur on the standby database and the primary database hangs!

Cause of the Problem
The above error is caused by the LGWR process in an Oracle Data Guard configuration with RAC, and it is in fact an Oracle bug, bug number 7487408.
The bug fires whenever Data Guard is managed by the Broker and the standby database is shut down or opened in read-only mode. The primary still tries to ship redo to the standby in SYNC mode and eventually hangs. A quick look at the redo transport destination on the primary is sketched below.
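
For example (a hedged sketch; V$ARCHIVE_DEST_STATUS is a standard view, and dest_id 2 is assumed to be the standby destination as in the messages above), you can check the transport status and last error on the primary:

SQL> select dest_id, status, error from v$archive_dest_status where dest_id = 2;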

Solution of the Problem
- Download patch 7487408 from Oracle Support/MetaLink in order to fix the bug.
- The bug is fixed in the Oracle 10.2.0.5 patchset, so alternatively you can upgrade your Oracle database to 10.2.0.5.

Tuesday, September 28, 2010

WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK!

This article explains some reasons for which you may encounter the "WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK!" message in the alert log file. When row cache contention occurs and the enqueue cannot be obtained within a certain time period, a trace file is generated in the trace location with some diagnostic details.

The trace file typically contains the following line:
>>> WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! <<<

/opt/oracle/admin/EAIAPP/udump/eaiapp1_ora_23288.trc
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
ORACLE_HOME = /opt/oracle/product/dbs
System name: Linux
Node name: db1-eai.prod.stl.cw.intraisp.com
Release: 2.6.18-128.4.1.el5
Version: #1 SMP Thu Sep 23 19:59:19 EDT 2010
Machine: x86_64
Instance name: EAIAPP1
Redo thread mounted by this instance: 1
Oracle process number: 43
Unix process pid: 23288, image: oracle@db1-eai.prod.stl.cw.intraisp.com

*** 2010-09-25 17:08:05.532
*** ACTION NAME:() 2010-09-25 17:08:05.532
*** MODULE NAME:(OMS) 2010-09-25 17:08:05.532
*** SERVICE NAME:(EAIAPP) 2010-09-25 17:08:05.532
*** SESSION ID:(120.29614) 2010-09-25 17:08:05.532

>>> WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! <<<
row cache enqueue: session: 70000001b542d78, mode: N, request: S
row cache parent object: address=700000036f27328 cid=0(dc_tablespaces)
hash=a6840ab5 typ=9 transaction=0 flags=00008000


The trace will often contain a systemstate dump, although the most useful information is in the header section. Typically the session holding the row cache resource will either be on CPU or blocked by another session. If it is on CPU then errorstacks are likely to be required for diagnosis, unless tuning can be done to reduce the enqueue hold time. Remember that in a RAC environment the holder may be on another node, so systemstate dumps from each node will be required; a sketch of how to take one follows.
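
As a rough sketch (the oradebug commands below are standard, but dump level 266 is just a commonly used value, and large systemstate dumps can be expensive on a busy system), a systemstate dump can be taken from a SQL*Plus session connected as SYSDBA:

SQL> oradebug setmypid
SQL> oradebug unlimit
SQL> oradebug dump systemstate 266
SQL> oradebug tracefile_name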

For each enqueue type there is a limited number of operations that require it.

1) DC_TABLESPACES : The most likely cause is allocation of new extents. If extent sizes are set low then the application may constantly be requesting new extents and causing contention. Do you have objects with small extent sizes that are rapidly growing? (You may be able to spot these by looking for objects with large numbers of extents; see the query sketch after this list.) Check the trace for insert/update activity, and check the objects being inserted into for their number of extents.

2) DC_SEQUENCES : Check for appropriate caching of sequences for the application requirements (also covered in the query sketch after this list).

3) DC_USERS : Deadlock and resulting "WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK!" can occur if a session issues a GRANT to a user, and that user is in the process of logging on to the database.

4) DC_OBJECTS : Look for any object compilation activity which might require an exclusive lock and thus block online activity.

5) DC_SEGMENTS : This is likely to be down to segment allocation. Identify what the session holding the enqueue is doing and use errorstacks to diagnose.

6) In many cases no operation is indicated in the trace file and only the session ID is specified. In that case we need to investigate that session.
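
A couple of illustrative queries for points 1 and 2 (a hedged sketch; the thresholds of 1000 extents and a cache size below 20 are arbitrary starting points, not recommendations):

SQL> select owner, segment_name, segment_type, extents from dba_segments where extents > 1000 order by extents desc;

SQL> select sequence_owner, sequence_name, cache_size from dba_sequences where cache_size < 20;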

Index on a partitioned table waits on 'row cache lock' in RAC environment

Problem Description
Creating an index on a partitioned table fails with Oracle error ORA-04021. This is a 2-node Oracle RAC environment and the table has a large number of partitions and sub-partitions. The index creation waits on 'row cache lock'. The same is true while dropping a tablespace: DROP TABLESPACE hangs on the row cache enqueue.

Problem Investigation
From the trace file we get the following information.
PROCESS 24:

SO: 0xb9e505908, type: 4, owner: 0xb9f351708, flag: INIT/-/-/0x00
(session) sid: 477 trans: 0xb7ce39798, creator: 0xb9f351708, flag: (41) USR/- BSY/-/-/-/-/-
DID: 0001-0018-00000018, short-term DID: 0001-0018-00000019
txn branch: (nil)
oct: 9, prv: 0, sql: 0xacafc75e8, psql: 0xb0266cc18, user: 55/TA
O/S info: user: oracle, term: pts/1, ospid: 16322, machine: rac2
program: sqlplus@rac2 (TNS V1-V3)
application name: SQL*Plus, hash value=3669949024
waiting for 'row cache lock' blocking sess=0x(nil) seq=4750 wait_time=0 seconds since wait started=18
cache id=2, mode=0, request=5

Process 24 is executing sql: 0xacafc75e8 which is:
LIBRARY OBJECT HANDLE: handle=acafc75e8 mtx=0xacafc7718(1) cdp=1
name=CREATE INDEX USER_ACTIVITY_IDX ON PROD.USER_ACTIVITY(IMSI, CALLING_IMSI) TABLESPACE HISTORY_IDX_01 LOCAL PARALLEL 32 UNUSABLE

It seems this process is waiting for the row cache enqueue in exclusive mode (request=X) on dc_segments:
----------------------------------------
SO: 0xafb5989f0, type: 50, owner: 0xb7ce39798, flag: INIT/-/-/0x00
row cache enqueue: count=1 session=0xb9e505908 object=0xb004f3690, request=X
savepoint=0x25a3749
row cache parent object: address=0xb004f3690 cid=2(dc_segments)
hash=2997173a typ=11 transaction=(nil) flags=00000000
own=0xb004f3760[0xb004f3760,0xb004f3760] wat=0xb004f3770[0xafb598a20,0xafb598a20] mode=N
status=-/-/-/-/-/-/-/-/-
request=X release=FALSE flags=2
instance lock id=QC 1cd04d4c 3ab28c03
data=
0000006a 000001da 0039918b 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00000000
----------------------------------------
From the trace file it is not clear who is holding this row cache enqueue.
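
While the trace alone does not identify the holder, a live look at the waiters can help (a rough sketch; GV$SESSION_WAIT is a standard view, and for the 'row cache lock' event P1 is the dictionary cache id):

SQL> select inst_id, sid, event, p1, seconds_in_wait from gv$session_wait where event = 'row cache lock';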

Cause of the Problem
Index creation waiting on 'row cache lock' in a RAC environment is caused by an Oracle bug.
Oracle tracks this bug as 'Bug 6321551' in version 10.2.0.2,
'Bug 8417354' in version 10.2.0.4 and
'Bug 6004916' in version 11.0.0.0.

Solution of the Problem
The bug is an Oracle internal, unpublished bug.
- This bug is fixed in Oracle version 10.2.0.5, so apply the 10.2.0.5 patchset if you are running Oracle 10g.

- If your Oracle database version is 10.2.0.2 or 10.2.0.3 you can apply one-off patch 6004916.

- Another workaround is to create the index with only one node started (single instance); the instance stop/start commands are sketched after these steps.
i) Stop all RAC instances but one.
ii) Run the index creation script on the remaining node.
iii) After the process completes, start up the rest of the instances.
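
As a rough sketch (assuming a database named orcl with instances orcl1 and orcl2; these names are only placeholders for your own environment):

$ srvctl stop instance -d orcl -i orcl2
# run the CREATE INDEX script on the remaining instance (orcl1)
$ srvctl start instance -d orcl -i orcl2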

Monday, September 27, 2010

How to check used or free disk space in ASM Instance

The asmcmd lsdg command lists the mounted disk groups and their information. In fact, lsdg queries V$ASM_DISKGROUP_STAT by default and shows the disk group information.

The syntax of the lsdg command is as follows:
lsdg [-gH][--discovery][pattern]

If the --discovery flag is specified, then the V$ASM_DISKGROUP view is queried instead. The output also includes notification of any current rebalance operation for a disk group. If a disk group pattern is specified, then lsdg returns only information about that disk group.

If the -g option is specified, then lsdg selects from GV$ASM_DISKGROUP_STAT, or from GV$ASM_DISKGROUP if the --discovery flag is also specified.

In the output of the lsdg command:

Total_MB indicates the size of the disk group in megabytes.

Free_MB indicates the free space in the disk group in megabytes, without regard to redundancy. This is the FREE_MB column from the V$ASM_DISKGROUP view.

Req_mir_free_MB indicates the amount of space that must be available in the disk group to restore full redundancy after the most severe failure that can be tolerated by the disk group. This is the REQUIRED_MIRROR_FREE_MB column from the V$ASM_DISKGROUP view.

Usable_file_MB indicates the amount of free space, adjusted for mirroring, that is available for new files.

You can query the V$ASM_DISKGROUP or V$ASM_DISKGROUP_STAT view directly, or you can issue lsdg within the asmcmd tool.

From the V$ASM_DISKGROUP view,

SYS@EAI1>SELECT name, free_mb, total_mb, free_mb/total_mb*100 "%" FROM v$asm_diskgroup;

NAME FREE_MB TOTAL_MB %
------------------------------ ---------- ---------- ----------
DISKGROUP1 112168 245754 45.6423904

From ASMCMD,

ASMCMD> lsdg
State Type Rebal Unbal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Name
MOUNTED EXTERN N N 512 4096 1048576 245754 112168 0 112168 0 DISKGROUP1/
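
If you also want per-disk usage rather than the per-diskgroup figures above, a similar query against V$ASM_DISK can be used (a minimal sketch; the columns are standard but the output will of course differ on your system):

SQL> select group_number, name, total_mb, free_mb from v$asm_disk order by group_number, name;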

ORA-01110 ORA-01187: cannot read from file because it failed verification tests

Problem Description
The physical standby database alert log file shows the following errors.
Errors in file /u01/app/oracle/diag/rdbms/bdafisdrs/bdafisdc1/trace/bdafisdc1_m001_19676.trc:
ORA-01187: cannot read from file because it failed verification tests
ORA-01110: data file 201: '+DATA/bdafisdrs/tempfile/temp'

The following is the content found in the generated trace file.

Trace file /u01/app/oracle/diag/rdbms/bdafisdrs/bdafisdc1/trace/bdafisdc1_m001_19676.trc
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, Oracle Label Security,
OLAP, Data Mining, Oracle Database Vault and Real Application Testing option
ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1
System name: Linux
Node name: DRS-DB-01
Release: 2.6.18-92.el5
Version: #1 SMP Tue Apr 29 13:16:15 EDT 2008
Machine: x86_64
Instance name: bdafisdc1
Redo thread mounted by this instance: 1
Oracle process number: 77
Unix process pid: 19676, image: oracle@DRS-DB-01 (M001)


*** 2010-09-19 11:35:51.595
*** SESSION ID:(103.167) 2010-09-19 11:35:51.595
*** CLIENT ID:() 2010-09-19 11:35:51.595
*** SERVICE NAME:(SYS$BACKGROUND) 2010-09-19 11:35:51.595
*** MODULE NAME:(MMON_SLAVE) 2010-09-19 11:35:51.595
*** ACTION NAME:(Autotask Slave Action) 2010-09-19 11:35:51.595

DDE: Problem Key 'ORA 1110' was flood controlled (0x5) (no incident)
ORA-01110: data file 201: '+DATA/bdafisdrs/tempfile/temp'
ORA-01187: cannot read from file because it failed verification tests
ORA-01110: data file 201: '+DATA/bdafisdrs/tempfile/temp'
Dump of memory from 0x0000000363A3B078 to 0x0000000363A3B3EA
...
...
...
ket_get_active: error 1187
ket_aba_main[2] : error 1187
ket_aba_slave: clearing error 1187

Cause of the Problem
Scenario 01: If your Oracle database version is 8i and you are able to activate the standby database with no errors, but each time you try to open the standby database in read-only mode it fails with the above errors, then the likely cause is the READ_ONLY_OPEN_DELAYED parameter setting.

Scenario 02:
If the database files reside on an OCFS filesystem and the datafiles appear corrupt to nodes other than "node1" (instance 1) in a RAC, then the problem happened because the filesystem was mounted with the "reclaimid" option.

Solution of the Problem
Solution for Scenario 01:
Set the READ_ONLY_OPEN_DELAYED parameter to FALSE and then you will be able to open the database in read-only mode. A way to check the current setting is sketched after the commands below.
SQL> Alter system set READ_ONLY_OPEN_DELAYED = FALSE scope=spfile;
SQL> shut immediate
SQL> startup
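
For reference, the current setting can be checked beforehand with a standard SQL*Plus command (a small sketch):

SQL> show parameter read_only_open_delayed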


Solution for Scenario 02:
Do not use the reclaimid mount option for "normal" mounts. This option should only be used once, after having to regenerate the GUID of a node following an IP address change.