Saturday, July 31, 2010

Troubleshoot ORA-12547: TNS: lost contact

Problem Description
Whenever you connect to the database as SYSDBA, it fails with ORA-12547 as shown below.
$ sqlplus / as sysdba
SQL*Plus: Release 10.2.0.2.0 - Production on Sun Aug 01 11:27:27 2010
Copyright (c) 1982, 2005, Oracle. All Rights Reserved.
ERROR:
ORA-12547: TNS:lost contact
Enter user-name:
Cause of the Problem
ORA-12547 indicates that the communication channel has been broken. The causes and solutions are discussed below, scenario by scenario.

1) Local connection as sysdba (Bequeath):
Problem
BEQ connection fails when connecting with / as sysdba

Cause
Oracle binaries have not been linked correctly

Solution
Relink the Oracle binaries by executing the following command,

$ $ORACLE_HOME/bin/relink all

2) Bequeath connections fail with ORA-12547 and remote connections fail with ORA-12500:

The listener.log file shows:
TNS-12500: TNS:listener failed to start a dedicated server process 
TNS-12547: TNS:lost contact
TNS-12560: TNS:protocol adapter error
TNS-00517: Lost contact
The version of the database server is 9.2.0.2

Cause
This is Oracle bug 2654576 and it happens when the number of processes reaches a limit.

Solution
i) The bug is fixed in patch set 9.2.0.3 and in 10g, so upgrading the Oracle database is one solution.

ii) Increase the PROCESSES parameter in the init.ora to a higher value, for example 2000.

iii) If increasing the PROCESSES parameter does not help, increase the value of the hidden parameter:
_attach_count_slack = 2000
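
If the instance uses an spfile, a minimal sketch of applying ii) and iii) looks like the following (underscore parameters must be quoted, and a restart is required); if the instance still uses a plain init.ora, add the same values there instead and restart:

SQL> ALTER SYSTEM SET processes=2000 SCOPE=SPFILE;
SQL> ALTER SYSTEM SET "_attach_count_slack"=2000 SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP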

3) Remote connections to the database server fail with ORA-12547
Cause 1
SQLNET.INBOUND_CONNECT_TIMEOUT and/or INBOUND_CONNECT_TIMEOUT_listener_name is set in the database server's sqlnet.ora and listener.ora. If the client fails to establish a connection and complete authentication within the time defined by these parameters, the database server terminates the connection.

If this is the case and your Oracle database version is 10g or higher, then ORA-3136 errors are reported in the alert.log.

Cause 2
TCP.VALIDNODE_CHECKING is active on the database server and the TCP.INVITED_NODES list does not include the IP address of the failing client.
Alternatively, the TCP.EXCLUDED_NODES list contains the IP address of the failing client.

Solution 1
If the problem is due to cause 1, set SQLNET.INBOUND_CONNECT_TIMEOUT and/or INBOUND_CONNECT_TIMEOUT_listener_name to appropriate values.

Note that the database server and the listener have to be restarted for these parameters to take effect.
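
As an illustration only (the listener name LISTENER and the timeout values are assumptions, not taken from this post), the parameters could be raised like this:

# sqlnet.ora on the database server
SQLNET.INBOUND_CONNECT_TIMEOUT = 120

# listener.ora on the database server
INBOUND_CONNECT_TIMEOUT_LISTENER = 110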

Solution 2
For cause 2, either add the IP address of the failing client to the TCP.INVITED_NODES list or remove it from the TCP.EXCLUDED_NODES list.

It is recommended to restart the database server and the listener for these parameters to take effect.
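
A minimal sqlnet.ora sketch for cause 2 (the IP address and host name below are purely illustrative, not values from this post):

tcp.validnode_checking = yes
tcp.invited_nodes = (192.168.1.10, client-host1)

Then restart the listener (and database server, as noted above) for the change to take effect.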

4) Listener fails to start with Oracle error ORA-12547
Cause
TCP.VALIDNODE_CHECKING is active on the database server and the TCP.INVITED_NODES list does not include the IP address of the failing client.
Alternatively, the TCP.EXCLUDED_NODES list contains the IP address of the failing client.

Solution
Either add the IP address of the failing client to the TCP.INVITED_NODES list or remove it from the TCP.EXCLUDED_NODES list.
It is recommended to restart the database server and the listener for these parameters to take effect.

5) After changing /etc/system and rebooting the system, the 10gR2 TNS listener fails to start
$ lsnrctl start 
Starting /u01/oracle/product/10.2.0/Db_1/bin/tnslsnr: please wait...
TNSLSNR for Solaris: Version 10.2.0.1.0 - Production
System parameter file is /u01/oracle/product/10.2.0/Db_1/network/admin/listener.ora
Log messages written to /u01/oracle/product/10.2.0/Db_1/network/log/listener.log
Trace information written to /u01/oracle/product/10.2.0/Db_1/network/trace/listener.trc
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=server11)(PORT=1521)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=extproc10)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=server11)(PORT=1521)))
TNS-12547: TNS:lost contact
TNS-12560: TNS:protocol adapter error
TNS-00517: Lost contact
Solaris Error: 131: Connection reset by peer
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=extproc10)))
TNS-12541: TNS:no listener
TNS-12560: TNS:protocol adapter error
TNS-00511: No listener
Solaris Error: 146: Connection refused
Cause
This happens because the DNS client process ('BIND 9') is not running/started on the Solaris 10 server.

Solution
You need to start the BIND 9 DNS client on the Solaris 10 server.
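
On Solaris 10 the DNS client is normally managed through SMF, so a sketch of the fix might look like this (the service name is the one commonly shipped with Solaris 10; verify it on your own system with svcs):

# svcs -a | grep dns
# svcadm enable network/dns/client
# svcs network/dns/client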

RMAN duplicate fails with RMAN-10006, ORA-03113, ORA-01092

Problem Description
While duplicating a database using RMAN, it fails with ORA-01092: "ORACLE instance terminated. Disconnection forced", and an ORA-03113 error is reported.
RMAN-00601: fatal error in recovery manager
RMAN-03004: fatal error during execution of command
RMAN-10039: error encountered while polling for RPC completion on channel clone_default
RMAN-10006: error running SQL statement: select act from x$ksusex where sid=:1 and serial=:2
RMAN-10002: ORACLE error: ORA-03113: end-of-file on communication channel
RMAN-03002: failure of Duplicate Db command at 02/30/2010 01:21:18
RMAN-03015: error occurred in stored script Memory Script
RMAN-06136: ORACLE error from auxiliary database:
ORA-01092: ORACLE instance terminated. Disconnection forced
The auxiliary database alert.log shows that the undo tablespace 'UNDOTBS1' does not exist or is of the wrong type.

Cause of the Problem
The UNDO_TABLESPACE parameter of the auxiliary database is not the same as that of the target database: the target (source) database uses undo tablespace UNDOTBS2, while the auxiliary parameter file specifies UNDOTBS1, which does not exist.

Solution of the Problem
Make sure that the undo tablespace of the auxiliary database is the same as that of the target database.
Edit the init.ora or alter the UNDO_TABLESPACE parameter of the auxiliary instance and set
undo_tablespace = UNDOTBS2
Then restart the auxiliary instance in nomount mode and repeat the duplicate database command.
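
A sketch of the parameter change on the auxiliary instance, assuming it was started with an spfile (otherwise simply edit its init.ora and restart):

SQL> ALTER SYSTEM SET undo_tablespace='UNDOTBS2' SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP NOMOUNT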

RMAN duplicate database fails with RMAN-00601, RMAN-10006, ORA-00704, ORA-39700

Problem Description
Duplicating a database using RMAN fails with the following errors.
channel channel1: starting piece 1 at 29-JUL-2010 03:49:19
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00601: fatal error in recovery manager
RMAN-03004: fatal error during execution of command
RMAN-10039: error encountered while polling for RPC completion on channel channel1
RMAN-10006: error running SQL statement: select action from gv$session where sid=:1 and serial#=:2 and inst_id=:3
RMAN-10002: ORACLE error: ORA-3114: not connected to ORACLE
RMAN-10041: Could not re-create polling channel context following failure.
RMAN-10024: error setting up for rpc polling
RMAN-10005: error opening cursor
RMAN-10002: ORACLE error: ORA-3114: not connected to ORACLE
RMAN-03002: failure of Duplicate Db command at 02/23/2007 11:11:50
RMAN-03015: error occurred in stored script Memory Script
RMAN-06136: ORACLE error from auxiliary database: ORA-1092: ORACLE instance
terminated. Disconnection forced
ORA-00704: bootstrap process failure
ORA-39700: database must be opened with UPGRADE option

Cause of the Problem
The errors occur because the Oracle versions of the target database home and the auxiliary database home are not the same.

Solution of the Problem
The solution is to upgrade the auxiliary database server binaries to match the target database server. Also, if any one-off patches exist on the target database home, you need to apply them on the auxiliary database home as well in order to make the duplication work.

Saturday, July 24, 2010

ORA-14080: partition cannot be split along the specified high bound

Problem Scenarios
SQL>    CREATE TABLE USER_ACTIVITY
2 (
3 COL1 NUMBER NOT NULL,
4 COL2 VARCHAR2(10) NULL,
5 COL3 NUMBER NOT NULL
6 )
7 TABLESPACE USERS
8 PARTITION BY RANGE (COL3)
9 (
10 PARTITION P10 VALUES LESS THAN (100),
11 PARTITION P20 VALUES LESS THAN (200),
12 PARTITION P_MAX VALUES LESS THAN (MAXVALUE)
13 );

Table created.

SQL> ALTER TABLE USER_ACTIVITY SPLIT PARTITION
2 P_MAX AT (200)
3 INTO
4 (
5 PARTITION P30,
6 PARTITION MAXPART
7 );
P_MAX AT (200)
*
ERROR at line 2:
ORA-14080: partition cannot be split along the specified high bound

Cause of the Problem
There is already a partition with a specified high bound of 200. You need to split the partition on a value higher than the next range down but lower than the next range up.

Solution of the Problem
The solution is to split the partition at an appropriate bound. The following statement splits the partition at the value 250: the first partition (P30) holds values 200 to 249 and the second partition (MAXPART) holds values 250 and above.

SQL> ALTER TABLE USER_ACTIVITY SPLIT PARTITION
2 P_MAX AT (250)
3 INTO
4 (
5 PARTITION P30,
6 PARTITION MAXPART
7 );

Table altered.

Thursday, July 22, 2010

Export fails with ORA-01406: fetched column value was truncated

Problem Description
Exporting an Oracle 11g database using the 10g exp utility fails with ORA-01406 errors.
E:\>exp exp_user/exp_user full=y file=exp_dump.dmp log=exp_dump.log

Export: Release 10.2.0.1.0 - Production on Thu Jul 22 23:01:49 2010

Copyright (c) 1982, 2005, Oracle. All rights reserved.


Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set

About to export the entire database ...
. exporting tablespace definitions
. exporting profiles
. exporting user definitions
. exporting roles
EXP-00008: ORACLE error 1406 encountered
ORA-01406: fetched column value was truncated
EXP-00000: Export terminated unsuccessfully

Further investigation shows that the statement failing with ORA-01406 is:
SELECT GRANTEE, PRIV, WGO FROM SYS.EXU8SPV ORDER BY SEQUENCE

And the database character set is WE8MSWIN1252.

SQL> select property_value from database_properties where property_name='NLS_CHARACTERSET';

PROPERTY_VALUE
--------------------------------------------------------------------------------
WE8MSWIN1252

The same problem arises whenever you connect to an 11g database using the Oracle 9i or Oracle 10gR1 exp client utility.

Cause of the Problem
The cause of this problem has been identified as Oracle Bug 6804150, which is fixed in 11g and 10.2.0.5.

It is caused by the new privilege name "ADMINISTER SQL MANAGEMENT OBJECT" added in 11g.
Since this value exceeds 30 characters, it gets truncated and ORA-01406 is raised.
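
A quick way to confirm which privilege names exceed 30 characters on the 11g database (just a verification sketch run on the server side):

SQL> select name, length(name) from system_privilege_map where length(name) > 30;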

Solution of the Problem
Solution 01:
Changing the database character set to AL32UTF8 will solve this problem.

Solution 02:
Apply Patch for BUG 6804150 if available for your platform.

Solution 03:
Install the Oracle 10.2.0.5 patchset.

Solution 04:
Use 11g exp client utility.

exp fails with EXP-00023 and expdp fails with ORA-31631, ORA-39161

Problem Description
A full database export/import is needed. For the export, the "EXPORT FULL DATABASE" privilege is granted, and for the import, the "IMPORT FULL DATABASE" privilege is granted. Now, while doing the full database export/import, exp fails with EXP-00023 and expdp fails with
ORA-31631: privileges are required
ORA-39161: Full database jobs require privileges

The following example demonstrates the error case.

A user named exp_user is created and granted the EXPORT FULL DATABASE privilege.
E:\>sqlplus / as sysdba

SQL*Plus: Release 11.1.0.6.0 - Production on Thu Jul 22 19:35:15 2010

Copyright (c) 1982, 2007, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> create user exp_user identified by exp_user;

User created.

SQL> grant create session, resource, export full database to exp_user;

Grant succeeded.

SQL> select * from dba_sys_privs where grantee='EXP_USER';

GRANTEE PRIVILEGE ADM
------------------------------ ---------------------------------------- ---
EXP_USER EXPORT FULL DATABASE NO
EXP_USER CREATE SESSION NO
EXP_USER UNLIMITED TABLESPACE NO

SQL> select * from dba_role_privs where grantee='EXP_USER';

GRANTEE GRANTED_ROLE ADM DEF
------------------------------ ------------------------------ --- ---
EXP_USER RESOURCE NO YES

E:\>exp exp_user/exp_user full=y file=exp_dump.dmp log=exp_dump.log

Export: Release 10.2.0.1.0 - Production on Thu Jul 22 19:43:52 2010

Copyright (c) 1982, 2005, Oracle. All rights reserved.


Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
EXP-00023: must be a DBA to do Full Database or Tablespace export
(2)U(sers), or (3)T(ables): (2)U >

Export grants (yes/no): yes >

Export table data (yes/no): yes >

Compress extents (yes/no): yes >

Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user EXP_USER
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions for user EXP_USER
About to export EXP_USER's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export EXP_USER's tables via Conventional Path ...
. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting referential integrity constraints
. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting materialized views
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting statistics
Export terminated successfully with warnings.

E:\>expdp exp_user/exp_user full=y dumpfile=exp_dump.dmp logfile=exp_dump.log

Export: Release 10.2.0.1.0 - Production on Thursday, 22 July, 2010 19:51:21

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-31631: privileges are required
ORA-39161: Full database jobs require privileges
From the above example, we see that both the exp and expdp operations have failed.

Cause of the Problem
The system privileges "EXPORT FULL DATABASE" and "IMPORT FULL DATABASE" were introduced in Oracle Database 10gR1, but these two privileges are not currently used by Oracle; perhaps they will be implemented in a future release. In Oracle 10g and 11g these two system privileges are not operational, so assigning them does nothing.

The correct privileges used by export/import are the roles EXP_FULL_DATABASE and IMP_FULL_DATABASE.

Solution of the Problem
Assign the correct privileges to the user: grant the EXP_FULL_DATABASE role to the user who will perform full database exports, and the IMP_FULL_DATABASE role to the user who will perform full database imports.
E:\>sqlplus / as sysdba

SQL*Plus: Release 11.1.0.6.0 - Production on Thu Jul 22 19:53:32 2010

Copyright (c) 1982, 2007, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> revoke export full database from exp_user;

Revoke succeeded.

SQL> grant exp_full_database to exp_user;

Grant succeeded.

SQL> select * from dba_sys_privs where grantee = 'EXP_USER';

GRANTEE PRIVILEGE ADM
------------------------------ ---------------------------------------- ---
EXP_USER CREATE SESSION NO
EXP_USER UNLIMITED TABLESPACE NO

SQL> select * from dba_role_privs where grantee = 'EXP_USER';

GRANTEE GRANTED_ROLE ADM DEF
------------------------------ ------------------------------ --- ---
EXP_USER EXP_FULL_DATABASE NO YES
EXP_USER RESOURCE NO YES

Now invoking exp and expdp with full=y works fine.
D:\>exp exp_user/exp_user full=y file=exp_dump.dmp log=exp_dump.log

Export: Release 11.1.0.6.0 - Production on Thu Jul 22 21:19:03 2010

Copyright (c) 1982, 2007, Oracle. All rights reserved.


Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set

About to export the entire database ...
. exporting tablespace definitions
. exporting profiles
. exporting user definitions
. exporting roles
. exporting resource costs
. exporting rollback segment definitions
. exporting database links
. exporting sequence numbers
. exporting directory aliases
. exporting context namespaces
. exporting foreign function library names
. exporting PUBLIC type synonyms
. exporting private type synonyms
.
.
.

How to uninstall Oracle 11g

Uninstall Oracle software in 11gR2 (11.2)
In 11gR2 there is a deinstall command with which you can uninstall standalone Oracle Database installations, Oracle Clusterware and Automatic Storage Management (ASM) from your server, as well as Oracle Real Application Clusters (Oracle RAC) and Oracle Database client installations. You can find the deinstall tool under the $ORACLE_HOME/deinstall folder.

If you don't have the deinstall tool, you can download it from http://www.oracle.com/technology/software/products/database/index.html

- Under Oracle Database 11g Release 2, click See All for the respective platform for which you want to download the Deinstallation Tool.

- The Deinstallation Tool is available for download at the end of the page.

So, in 11gR2 just use:
$ $ORACLE_HOME/deinstall/deinstall

Uninstall Oracle software in 11gR1 (11.1)
If you are running Oracle 11gR1, you have to use the Oracle Universal Installer: runInstaller on Unix and setup.exe on Windows. On Windows you can find the OUI under $ORACLE_HOME/oui/bin; the file name is setup.exe.
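
A sketch of invoking the installer for the 11gR1 deinstallation (using the paths mentioned above):

On Unix/Linux:
$ cd $ORACLE_HOME/oui/bin
$ ./runInstaller

On Windows:
C:\> %ORACLE_HOME%\oui\bin\setup.exe

Then use the installer's deinstall option and select the Oracle home you want to remove.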

Wednesday, July 21, 2010

ORA-29913, ORA-29400, KUP-00554 while querying external table

Problem Description
While querying an external table, it fails with errors ORA-29913, ORA-29400, KUP-00554 and KUP-01005 as shown below.
SQL> create directory ext_dir as 'c:\';

Directory created.

SQL> create table external_table(
2 col1 varchar2(1),
3 col2 varchar2(20),
4 col3 varchar2(10)
5 )
6 organization external
7 (type oracle_loader
8 default directory ext_dir
9 access parameters
10 (
11 records delimited by newline
12 fields
13 missing field value are null
14 (col1 position(1,1),
15 col2 position(2,20),
16 col3 position(21,30)
17 )
18 )
19 location ('c:\temp\TEST.TXT')
20 )
21 ;

Table created.

SQL> select * from external_table;
select * from external_table
*
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-00554: error encountered while parsing access parameters
KUP-01005: syntax error: found "identifier": expecting one of: "values"
KUP-01008: the bad identifier was: value
KUP-01007: at line 3 column 15

Cause of the Problem
When an external table is created, the access parameters are not parsed; they are parsed only when the external table is queried. The above errors are returned because of a syntax error in the external table's access parameters.

Solution of the Problem
The solution is to correct the syntax of the access parameters in the external table definition. Let's fix the syntax errors one by one.

- Drop the table, since two tables with the same name cannot exist in one schema.

SQL> drop table external_table;

Table dropped.
- Re-enter the previous external table creation script.
SQL> create table external_table(
2 col1 varchar2(1),
3 col2 varchar2(20),
4 col3 varchar2(10)
5 )
6 organization external
7 (type oracle_loader
8 default directory ext_dir
9 access parameters
10 (
11 records delimited by newline
12 fields
13 missing field value are null
14 (col1 position(1,1),
15 col2 position(2,20),
16 col3 position(21,30)
17 )
18 )
19 location ('c:\temp\TEST.TXT')
20 )
21
- At line 13 the keyword should be values.
SQL> 13
13* missing field value are null
SQL> c/value/values
13* missing field values are null
- At lines 14, 15 and 16 the comma (,) in the position clause should be replaced by a colon (:).
SQL> 14
14* (col1 position(1,1),
SQL> c/1,1/1:1
14* (col1 position(1:1),
SQL> 15
15* col2 position(2,20),
SQL> c/2,20/2:20
15* col2 position(2:20),
SQL> 16
16* col3 position(21,30)
SQL> c/21,30/21:30
16* col3 position(21:30)
SQL> /

Table created.
- Selecting from the table now avoids the syntax errors, but it leads to another error related to the path: in an external table we can't specify an absolute path in this way. The following example corrects that.
SQL> select * from external_table;
select * from external_table
*
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-04076: file name cannot contain a path specification: c:\temp\TEST.TXT

SQL> drop table external_table;

Table dropped.
- Create the external table again after correcting the path.

SQL> create table external_table(
2 col1 varchar2(1),
3 col2 varchar2(20),
4 col3 varchar2(10)
5 )
6 organization external
7 (type oracle_loader
8 default directory ext_dir
9 access parameters
10 (
11 records delimited by newline
12 fields
13 missing field values are null
14 (col1 position(1:1),
15 col2 position(2:20),
16 col3 position(21:30)
17 )
18 )
19 location ('test.txt')
20 );

Table created.

SQL> select * from external_table;

C COL2 COL3
- -------------------- ----------
1 222222222222222222 2 11111111
1 222255555555555222 2 11111333

Can one use Oracle 10g XE in a commercial environment

As of July 21, 2010 Oracle has not yet released an Express Edition (Oracle Database XE) for 11g. However, Oracle Database 10g XE is available, and many people use it for personal purposes because the Enterprise Edition of Oracle is really expensive. Many also wonder whether Oracle Database 10g XE can be used in a commercial environment.

It is true that Oracle 10g XE comes with many limitations and has fewer features compared to Oracle Database Enterprise Edition (EE). But there is no restriction on where you can use it. So you can use Oracle Database 10g XE in a commercial environment, subject to the following limitations.

1. It can have a single instance on any server.

2. You can install it on a multiple-CPU server, but it only executes on one processor of that server.

3. XE only supports up to 4GB of user data (not including XE system data files).

4. XE can only use up to 1GB of available memory.

So if your environment satisfies the above 4 limitations, you can use XE in a commercial environment without any problems.

Along with the above limitations, XE lacks many other features/options. For example, the following features are not included in XE.

Major Features not included in XE
1) Oracle Real Application Clusters

2) Oracle Spatial

3) Advanced Security Option

4) Oracle Label Security

5) Oracle Partitioning

6) Oracle OLAP

7) Oracle Data Mining

8) Oracle Enterprise Manager

9) Oracle Change Management Pack

10) Oracle Configuration Management Pack

11) Oracle Diagnostic Pack

12) Oracle Tuning Pack

13) Oracle Clusterware

14) Oracle Connection Manager

15) Oracle Names

16) Oracle Streams (Apply process only)

17) Oracle Workflow

18) Messaging Gateway

19) Oracle Data Guard

20) Oracle Fail Safe

21) Ultra Search

22) interMedia

23) Oracle Workspace Manager

Application Development, Language, and Database Features not included in 10g XE

1) iSQL*Plus
2) Database Web services
3) Java support in the database
4) Java Server Pages
5) Java native compilation
6) SQLJ
7) XQuery, JNDI, or Servlet support
8) Data Compression
9) Bitmapped index, bitmapped join index
10) Summary Management
11) Materialized View Query Rewrite
12) Parallel query/DML
13) Parallel statistics gathering
14) Parallel index build/scans
15) Parallel export/import

Backup and Recovery Features not included in XE
1) Online schema reorganization/redefinition
2) Flashback Table
3) Flashback Database
4) Flashback Transaction Query
5) Server-managed backup and recovery
6) Backup Unused Block Compression
7) Backup Encryption
8) Change tracking and optimized incremental backup capability for incremental backup and recovery
9) Duplexed backup sets
10) Fast-Start Selectable Recovery Time
11) Block-level media recovery
12) Parallel backup and recovery
13) Point-in-time tablespace recovery
14) Trial recovery
15) Rolling Upgrade Support – Patch Set, Database and O/S

Security & Manageability Features not included in 10g XE
1) Virtual Private Database
2) Fine grained auditing

3) Automatic Storage Management
4) Database Resource Manager

Data Movement/Integration Features not included in XE
1) Synchronous Change Data Capture
2) Asynchronous Change Data Capture
3) Transportable tablespaces, including cross-platform
4) Advanced Replication
5) Transparent Gateways

Networking Features not included in Oracle 10g XE
1) Directory connectivity.

Monday, July 19, 2010

PRKP-1001, CRS-0215 while starting instance using srvctl

Problem Description
The database instances do not start using the srvctl command; the startup attempt shows PRKP-1001 and CRS-0215 errors.

oracle@DBRAC2:~> srvctl status instance -i dbrac2 -d dbrac
Instance dbrac2 is not running on node dbrac2

oracle@DBRAC2:~> srvctl start instance -i dbrac2 -d dbrac
PRKP-1001 : Error starting instance dbrac2 on node dbrac2
CRS-0215: Could not start resource ora.dbrac.dbrac2.inst.

But the instances can be brought up using sqlplus. For example,
SQL> startup
works fine.

Cause of the Problem
"srvctl" command picks the parameter file from the location specified while registering the database with OCR.

In this case srvctl will pick "+DATA1/dbrac/spfiledbrac.ora" to start the instance,
# srvctl config database -d dbrac -a 
adc17 dbrac1 /u01/app/oracle/product/10.2.0/db_1
adc18 dbrac2 /u01/app/oracle/product/10.2.0/db_1
DB_UNIQUE_NAME: dbrac
DB_NAME: dbrac
ORACLE_HOME: /u01/app/oracle/product/10.2.0/db_1
SPFILE: +DATA1/dbrac/spfiledbrac.ora
DOMAIN: null
DB_ROLE: PRIMARY
START_OPTIONS: open
POLICY: AUTOMATIC
ENABLE FLAG: DB ENABLED
While we start the instance using sqlplus, it picks the parameter file from $ORACLE_HOME/dbs directory by default

SQL> startup
SQL> show parameter spfile
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
spfile string /u01/app/oracle/product/10.2.0/db_1/dbs/spfiledbrac.ora

Because the parameter files used to start the instance via srvctl and via sqlplus are not the same, srvctl fails to start the Oracle instance.

Solution of the problem
Make sure that both sqlplus (i.e. $ORACLE_HOME/dbs) and the srvctl command (i.e. +DATA1/dbrac/spfiledbrac.ora) use the same parameter file to start the instances.
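
One common way to achieve this is to replace the local parameter file in $ORACLE_HOME/dbs with a one-line pfile that points to the shared spfile (a sketch; the file name follows the init<SID>.ora convention for instance dbrac2, and the ASM path is the one shown in the srvctl output above):

$ cat $ORACLE_HOME/dbs/initdbrac2.ora
SPFILE='+DATA1/dbrac/spfiledbrac.ora'

Remove or rename any local spfile in $ORACLE_HOME/dbs so that sqlplus picks up this pointer file; then both sqlplus and srvctl will start the instance from the same spfile.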

Saturday, July 17, 2010

While startup standby database it fails with ORA-01154

Problem Description
Whenever you try to open or shut down a standby database, it fails with error ORA-01154.
SQL> connect / as sysdba
Connected.

SQL> alter database open
ORA-01154: database busy. Open, close, mount, and dismount not allowed now

SQL> shutdown
ORA-01154: database busy. Open, close, mount, and dismount not allowed now

Cause of the Problem
The database is a standby database and it is in managed recovery mode. To verify the database is in managed recovery mode, enter this command

SQL> SELECT RECOVERY_MODE FROM V$ARCHIVE_DEST_STATUS;

If it returns MANAGED, the standby database is in managed recovery mode.

While the database is in managed recovery mode, it is not allowed to open or shut down the database.

Solution of the Problem
To take the database out of managed recovery mode, enter this command

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

You can now open the database as,
SQL> ALTER DATABASE OPEN READ ONLY;

Or, you may now shutdown the database.
SQL> shutdown

Database startup fails with ORA-00064

Problem Description
Case 01: After SGA_TARGET is set to 100G, with a total of about 300G of physical memory on the server, "startup" fails with
ORA-00064: object is too large to allocate on this O/S (1,15429280)

Case 02: The parameter OPEN_LINKS_PER_INSTANCE is set to 1000, and then startup of the instance causes ORA-00064.

Case 03: After the PROCESSES parameter is set to a high value, any attempt to start the instance fails with ORA-00064.

Case 04: The "db_files" initialization parameter on 64-bit versions of Oracle is set to a higher value and now startup does not work.

Cause of the Problem
As already discussed in the post "startup migrate fails with ORA-00064 while upgrading to 10.2.0.2 with DBUA", the problem is caused by a low default value of the Oracle hidden parameter _ksmg_granule_size.

The calculation is,
- sga_max_size <= 1024M then _ksmg_granule_size = 4M
- sga_max_size > 1024M and < 128G then _ksmg_granule_size = 16M
- sga_max_size > 128G and < 256G then _ksmg_granule_size = 32M
- sga_max_size > 256G and < 512G then _ksmg_granule_size = 64M


Solution of the Problem
Solution 01: The issue is fixed in Oracle 10.2.0.4.3 (Patch Set Update) and 10.2.0.5 (Server Patch Set), so applying the patch will solve the problem.

Solution 02: Disable NUMA optimization on the system.
To do this, set two hidden parameters as below:
_enable_NUMA_optimization=FALSE
and
_db_block_numa=1

Solution 03: Increase granule size on the system (_ksmg_granule_size)
_ksmg_granule_size=32M

Solution 04: If your PROCESSES initialization parameter has a high value, reduce it, for example to less than 1500. Also set the DB_FILES parameter to a lower value.

startup migrate fails with ORA-00064 while upgrading to 10.2.0.2 with DBUA

Problem Description
An attempt was made to upgrade an Oracle database from version 9.2.0.7 to 10.2.0.2 using the Database Upgrade Assistant (DBUA). After invoking DBUA, ORA-00064 is returned. Issuing startup upgrade from SQL*Plus also fails, as shown below.

SQL> startup upgrade
ORA-00064: object is too large to allocate on this O/S (1,7614720)
Cause of the Problem
1. The Oracle PROCESSES initialization parameter is set to a high value. For example, if you set the PROCESSES parameter to a high value (e.g. > 14000), the instance startup fails with ORA-00064.

2. Low value of granule size.

The oracle hidden parameter "_ksmg_granule_size" is set based on the oracle sga size.
The calculation is,
- sga_max_size <= 1024M then _ksmg_granule_size = 4M
- sga_max_size > 1024M and < 128G then _ksmg_granule_size = 16M
- sga_max_size > 128G and < 256G then _ksmg_granule_size = 32M
- sga_max_size > 256G and < 512G then _ksmg_granule_size = 64M

The error occurs if you set a parameter (for example DB_CACHE_SIZE, SHARED_POOL_SIZE, LARGE_POOL_SIZE or JAVA_POOL_SIZE) to a high value that exceeds the granule size calculated from the SGA size.

If your SGA size is over 1G, you may have to set the parameter _ksmg_granule_size manually. Note that the default setting of _ksmg_granule_size is 4M, and a low value of _ksmg_granule_size can prevent the database from starting if your memory setting is over 1G.

3. "db_files" initialization parameter on 64bit versions of Oracle is set to a higher value.

4. The ORA-00064 error can occur even though a large "_ksmg_granule_size" is configured in the init.ora file, because DBUA in 10.2.x strips out the hidden parameter while opening the database.

Solution of the Problem
Solution 01:
Reduce the value of the oracle "PROCESSES" initialization parameter.

For example, open oracle parameter file with editor and put following line
PROCESSES = 1500

Solution 02:
Increase Oracle hidden parameter value of "_ksmg_granule_size" directly to 16M (16777216) or 32M (33554432)

- open oracle parameter file with editor and put following line
_ksmg_granule_size=16777216
or
_ksmg_granule_size=33554432

- Start the upgrade manually.

Solution 03:
- Increase the SGA size beyond 1024M to affect the granule size. Open the Oracle parameter file with an editor and put the following line:
sga_max_size = 1028M

Solution 04:
- Run DBUA with -initParam flag.

$ dbua -initParam "_ksmg_granule_size"=16777216
or
$ dbua -initParam "_ksmg_granule_size"=33554432

Non-Oracle user hits SP2-0642 or SP2-1503/SP2-0152 after DST patch

Problem Description
The oracle user (the owner of the Oracle installation) is able to run SQL*Plus.

However, users other than the owner of the installation (the oracle user is usually the owner) are not able to run SQL*Plus after the DST patches were applied at the operating system and database software level. For non-oracle users who do not belong to the dba unix group, it fails with

"SP2-0642: SQL*Plus internal error state 2165, context 4294967295:0:0
Unable to proceed"


Or, after applying DST patch (i.e. Patch 5632264) to Oracle Enterprise Manager 10g Database Control Release 10.2.0.1.0 it fails with following errors.
$ sqlplus /nolog 
SP2-1503: Unable to initialize Oracle call interface
SP2-0152: ORACLE may not be functioning properly
Cause of the Problem
SQL*Plus is unable to access the $ORACLE_HOME/oracore/zoneinfo/timezone.dat file. The error message indicates that the timezone.dat file is missing from the installation or that the permissions on the $ORACLE_HOME/oracore/zoneinfo directory are inadequate.

If you do ls -l you will see output like,
$ls -l oracore/zoneinfo/timezone.dat
-rw-r----- 1 oracle dba 161096 Mar 10 20:12 oracore/zoneinfo/timezone.dat
From the permissions, it is clear that users other than oracle that are not in the dba group have no privileges on the timezone.dat file, so they are not able to read it. At least read privilege is needed for the "other" unix group.

Solution of the Problem
To implement the solution, execute the following steps:

1. Manually change the privileges for the $ORACLE_HOME/oracore/zoneinfo directory and files.

$ chmod o+r oracore
$ chmod o+r oracore/zoneinfo/timezone.dat


Alternatively, you can do.
$ cd $ORACLE_HOME
$ chmod -R 755 oracore



2. After you change the permissions, SQL*Plus should work.
$ sqlplus

SP2-1503 SP2-0152 returned after invoking sqlplus as a non-admin user

Problem Symptoms
When logged on to the Windows server as a non-Administrator OS account using Microsoft Terminal Services client (mstsc.exe)/ Remote desktop option, starting SQL*Plus fails with

SP2-1503: Unable to initialize Oracle call interface
SP2-0152: ORACLE may not be functioning properly

But, starting SQL*Plus works when logged on to the Windows server using an Administrator OS account.

Also it works when logged on locally to the Windows server console using the non-Administrator OS account.

Cause of the Problem
The issue is related to a Windows Security configuration. The problem is caused by a security policy called "Create Global Objects". The user account that is used to run the program does not have the "Create global objects" user right. This security policy was introduced with Windows 2000 SP4, and determines if applications started during a Terminal Services session can create or access globally accessible memory.

By default, members of the Administrators group, the System account, and Services that are started by the Service Control Manager are assigned the "Create global objects" user right. That's why administrators group members do not receive any error while non-administrator members get errors.

Solution of the Problem
Assign the "Create global objects" user right to the non-Administrator account.

1. Click Start, point to Programs, point to Administrative Tools, and then click Local Security Policy.

2. Expand Local Policies, and then click User Rights Assignment.

3. In the right pane, double-click Create global objects.

4. In the Local Security Policy Setting dialog box, click Add.

5. In the Select Users or Group dialog box, click the user account that you want to add, click Add, and then click OK.

6. Click OK.

After a new 11gR2 installation, invoking sqlplus fails with SP2-1503 and SP2-0152

Problem Description
After a new Oracle 11gR2 (64-bit) installation, invoking sqlplus returns SP2-1503 and SP2-0152 errors as below.
$ sqlplus
SP2-1503: Unable to initialize Oracle call interface
SP2-0152: ORACLE may not be functioning properly

If you browse to the $ORACLE_HOME/oracore/zoneinfo directory and check the privileges using ls -l, you will see output similar to the following:
$cd $ORACLE_HOME/oracore/zoneinfo
$ls -l
total 10092
drwxr-xr-x 2 oracle dba 4096 May 12 17:02 big
drwxr-xr-x 2 oracle dba 4096 May 12 17:02 little
-rw-r--r-- 1 oracle dba 5725 Jun 12 12:02 readme.txt
-rw-r--r-- 1 oracle dba 25681 Jun 16 12:02 timezdif.csv
-rw-r--r-- 1 oracle dba 792894 Jul 10 10:11 timezlrg_10.dat
-rw-r--r-- 1 oracle dba 787272 Jul 10 10:11 timezlrg_11.dat
-rw-r--r-- 1 oracle dba 493675 Jul 10 10:11 timezlrg_1.dat
-rw-r--r-- 1 oracle dba 507957 Jul 10 10:11 timezlrg_2.dat
-rw-r--r-- 1 oracle dba 527717 Jul 10 10:11 timezlrg_3.dat
-rw-r--r-- 1 oracle dba 531137 Jul 10 10:11 timezlrg_4.dat
-rw-r--r-- 1 oracle dba 587487 Jul 10 10:11 timezlrg_5.dat
-rw-r--r-- 1 oracle dba 586750 Jul 10 10:11 timezlrg_6.dat
-rw-r--r-- 1 oracle dba 601242 Jul 10 10:11 timezlrg_7.dat
-rw-r--r-- 1 oracle dba 616723 Jul 10 10:11 timezlrg_8.dat
-rw-r--r-- 1 oracle dba 801410 Jul 10 10:11 timezlrg_9.dat
-rw-r--r-- 1 oracle dba 345637 Jul 10 10:11 timezone_10.dat
-rw-r--r-- 1 oracle dba 345356 Jul 10 10:11 timezone_11.dat
-rw-r--r-- 1 oracle dba 274427 Jul 10 10:11 timezone_1.dat
-rw-r--r-- 1 oracle dba 274900 Jul 10 10:11 timezone_2.dat
-rw-r--r-- 1 oracle dba 286651 Jul 10 10:11 timezone_3.dat
-rw-r--r-- 1 oracle dba 286264 Jul 10 10:11 timezone_4.dat
-rw-r--r-- 1 oracle dba 286310 Jul 10 10:11 timezone_5.dat
-rw-r--r-- 1 oracle dba 286217 Jul 10 10:11 timezone_6.dat
-rw-r--r-- 1 oracle dba 286815 Jul 10 10:11 timezone_7.dat
-rw-r--r-- 1 oracle dba 302100 Jul 10 10:11 timezone_8.dat
-rw-r--r-- 1 oracle dba 351525 Jul 10 10:11 timezone_9.dat

Cause of the Problem
If you look at the ls -l output, you will see that timezone.dat and timezlrg.dat are not present, but multiple versions timezlrg_*.dat and timezone_*.dat are. The problem happens because a choice of timezone file versions is shipped and no default file is linked.

Solution of the Problem
To resolve the problem, create links to one of the timezone_nn.dat files. While creating symbolic links you should choose the latest timezone version unless you have a requirement for a specific version.
$ cd $ORACLE_HOME/oracore/zoneinfo
$ ln -s timezone_11.dat timezone.dat
$ ln -s timezlrg_11.dat timezlrg.dat

How to login to RHEL4 after you have forgotten root password

This post will guide you through what to do after you have forgotten your Linux root password and help you log in as root in Red Hat Enterprise Linux 4.

The idea is you can log in using single-user mode and create a new root password.

Step 01: Manually reboot your computer using the machine's restart button.

Step 02: If you use the default boot loader, GRUB, you can enter single user mode. To do so, at the boot loader menu, use the arrow keys to highlight the installation you want to edit and type [A] to enter into append mode.

Step 03: You are presented with a prompt that looks similar to the following:

grub append> ro root=LABEL=/

Step 04: Press the Spacebar once to add a blank space, then add the word single to tell GRUB to boot into single-user Linux mode. The result should look like the following:

ro root=LABEL=/ single

Step 05: Press [Enter] and GRUB will boot single-user Linux mode. After it finishes loading, you will be presented with a shell prompt similar to the following:

sh-2.05b#

Step 06: You can now change the root password by typing

passwd root
You will be asked to re-type the password for verification. Once you are finished, the password will be changed. You can then reboot by typing reboot at the prompt; then you can log in to root as you normally would.

Difference between connecting to database as normal and sysdba/sysoper

As discussed in Database Administrator Authentication, whenever you connect to the database with the SYSDBA privilege you are connected to the SYS schema, and whenever you connect with the SYSOPER privilege you are connected to the PUBLIC schema.

Note that SYSDBA and SYSOPER are special privileges, and certain types of operations can be performed when you assign the SYSDBA/SYSOPER privilege to a user. A complete list of the operations that a user with the SYSDBA/SYSOPER privilege can perform is given in the post SYSDBA and SYSOPER authorized operations. A very important thing to remember is that if you assign only these two privileges to a user and nothing more, the user will not be able to do any schema/table level modification unless they specifically log in with the SYSDBA privilege.

The following examples should make the differences clear.

The SYSDBA privilege is not enough for a user to log in to the database unless the connection is made as SYSDBA
1. Login as sysdba.
E:\>sqlplus / as sysdba

SQL*Plus: Release 11.1.0.6.0 - Production on Sat Jul 17 15:12:05 2010

Copyright (c) 1982, 2007, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

2. Create a user named test_sysdba with password test_sysdba.
SQL> create user test_sysdba identified by test_sysdba;

User created.

3. Change the default tablespace of the test_sysdba user to USERS.
SQL> alter user test_sysdba default tablespace users;

User altered.

4. Grant sysdba privilege to user test_sysdba.
SQL> grant sysdba to test_sysdba;

Grant succeeded.

As soon as we assign the SYSDBA privilege, an entry is added to the password file. We can see it by querying the v$pwfile_users view.
SQL> select * from v$pwfile_users ;

USERNAME SYSDB SYSOP SYSAS
------------------------------ ----- ----- -----
SYS TRUE TRUE FALSE
TEST_SYSDBA TRUE FALSE FALSE
5. Try to connect to database as test_sysdba
SQL> conn test_sysdba/test_sysdba
ERROR:
ORA-01045: user TEST_SYSDBA lacks CREATE SESSION privilege; logon denied


Warning: You are no longer connected to ORACLE.

Since we did not specify "as sysdba" at login, test_sysdba is not permitted to log in to the database even though it has the SYSDBA privilege.

If you don't log in with the SYSDBA privilege, the session acts as a normal user
1. Log in as sysdba
SQL> conn / as sysdba
Connected.

2. Grant create session privilege to test_sysdba.
SQL> grant create session to test_sysdba;

Grant succeeded.

3. Now log in as test_sysdba and try to create a table.
SQL> conn test_sysdba/test_sysdba
Connected.

SQL> create table test(col1 number);
create table test(col1 number)
*
ERROR at line 1:
ORA-01031: insufficient privileges

The "create table" statement fails because we did not specify "as sysdba" at login, so the user is connected to the database as a normal user.

4. Now connect with the SYSDBA privilege and grant the DBA role to the test_sysdba user.
SQL> conn / as sysdba
Connected.

SQL> grant dba to test_sysdba;

Grant succeeded.

5. The connection succeeds and the table creation now works because the user has the DBA role, but it is still the normal test_sysdba user.
SQL> conn test_sysdba/test_sysdba
Connected.

SQL> create table test_sysdba_table1(col1 number);

Table created.

SQL> show user
USER is "TEST_SYSDBA"

Note that the user is TEST_SYSDBA.

Whenever we specify "as sysdba" while connecting, the schema becomes SYS
1. Connect to database with test_sysdba user and using sysdba privilege.
SQL> conn test_sysdba/test_sysdba as sysdba
Connected.

SQL> show user
USER is "SYS"

Note that the user is now SYS because we specified "as sysdba" at login.

2. Since the session is now SYS, any table we create goes into SYS's default tablespace, SYSTEM, whereas a table created as the normal test_sysdba user would go into that user's default tablespace.

SQL> create table test_sysdba_table2(col1 number);

Table created.

SQL> set lines 200
SQL> select owner, table_name, tablespace_name from dba_tables where table_name like 'TEST_SYSDBA_TABLE%';

OWNER TABLE_NAME TABLESPACE_NAME
------------------------------ ------------------------------ ------------------------------
TEST_SYSDBA TEST_SYSDBA_TABLE1 USERS
SYS TEST_SYSDBA_TABLE2 SYSTEM
Whenever we connect through sysoper privilege the schema is PUBLIC
SQL> grant sysoper to test_sysdba;

Grant succeeded.

SQL> conn test_sysdba/test_sysdba as sysoper;
Connected.

SQL> show user
USER is "PUBLIC"

SQL> create table test_sysdba_table3(col1 number);
create table test_sysdba_table3(col1 number)
*
ERROR at line 1:
ORA-01031: insufficient privileges

So a user connecting with the SYSOPER privilege cannot create a table, because the PUBLIC user is not permitted to do so.

How to manually install Oracle Data Mining in 11g

There are some substantial changes to the Data Mining option in 11g compared to previous versions. In 11g, no DMSYS schema exists, and the option is tightly integrated with the Oracle Database. Data Mining metadata and PL/SQL packages have been migrated from the DMSYS schema to the SYS schema. Moreover, there is no longer an 'odm' directory in the ORACLE_HOME.

If you look into further details, you will notice that there is no entry for Data Mining in the DBA_REGISTRY but there is a value in V$OPTION:
SQL> col value format a30
SQL> col parameter format a30
SQL> select * from v$option where PARAMETER = 'Data Mining';

PARAMETER VALUE
------------------------------ ------------------------------
Data Mining TRUE
In general, the option is installed with the use of the Database Configuration Assistant (dbca).

However if there is a need to install Data Mining manually, then you need to perform following steps.

1) Go to the $ORACLE_HOME/admin directory of the RDBMS instance where you want to install Data Mining.

2) Ensure that your ORACLE_HOME and ORACLE_SID environmental variable setup is correct.
On unix/linux issue,
$ echo $ORACLE_HOME
$ echo $ORACLE_SID


3) Login as SYS and run catodm.sql, dbmsodm.sql, prvtodm.plb scripts from SQL*Plus:
SQL> conn / as sysdba
SQL> @catodm.sql
SQL> @dbmsodm.sql
SQL> @prvtodm.plb

How to manually install Data Mining in Oracle 10g

Manually Install Oracle Data Mining for RDBMS 10.1.x
Step 01: Ensure that your ORACLE_HOME and ORACLE_SID environmental variable setup is correct.
On unix/linux issue,
$ echo $ORACLE_HOME
$ echo $ORACLE_SID


Step 02: Start Sql*plus and connect with sys user as sysdba privilege.
$ sqlplus /nolog
SQL> conn / as sysdba

Step 03: Execute dminst.sql, odmpatch.sql and utlrp.sql scripts like below.

On Unix - Linux,
SQL> @$ORACLE_HOME/dm/admin/dminst.sql SYSAUX TEMP
SQL> @$ORACLE_HOME/dm/admin/odmpatch.sql
SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql
On Windows,
SQL> start %ORACLE_HOME%\dm\admin\dminst.sql SYSAUX TEMP 
SQL> start %ORACLE_HOME%\dm\admin\odmpatch.sql
SQL> start %ORACLE_HOME%\dm\admin\utlrp.sql
Step 04: Ensure that 'Oracle Data Mining' is at Valid status in dba_registry by executing the following query,

SQL> select COMP_NAME,VERSION,STATUS from dba_registry where COMP_NAME='Oracle Data Mining';

Manually Install Oracle Data Mining for RDBMS 10.2.x
Step 01: Ensure that your ORACLE_HOME and ORACLE_SID environmental variable setup is correct.
On unix/linux issue,
$ echo $ORACLE_HOME
$ echo $ORACLE_SID


Step 02: Start Sql*plus and connect with sys user as sysdba privilege.
$ sqlplus /nolog
SQL> conn / as sysdba

Step 03: Execute dminst.sql, odmpatch.sql and utlrp.sql scripts like below.

On Unix/Linux issue,
SQL> @$ORACLE_HOME/rdbms/admin/dminst.sql SYSAUX TEMP
SQL> @$ORACLE_HOME/rdbms/admin/odmpatch.sql
SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql

On Windows,
SQL> start %ORACLE_HOME%\rdbms\admin\dminst.sql SYSAUX TEMP 
SQL> start %ORACLE_HOME%\rdbms\admin\odmpatch.sql
SQL> start %ORACLE_HOME%\rdbms\admin\utlrp.sql

Step 04: Ensure that 'Oracle Data Mining' is at Valid status in dba_registry by executing the following query,

SQL> select COMP_NAME,VERSION,STATUS from dba_registry where COMP_NAME='Oracle Data Mining';

Friday, July 16, 2010

How to remove data mining option from database

Oracle data mining is a licensed database option that is generally installed by default. However if you want to uninstall it the following procedures will help you.

Uninstall for database versions 9iR2 (9.2.x) and 10gR1 (10.1.x)
From the Oracle Universal Installer (OUI) page, you should be able to deinstall Data Mining from the 'Deinstall' icon. This is a supported way.

If OUI does not work for some reasons, you may do the following to deinstall ODM.

1. Start SQLPLUS and connect with user sys as sysdba
$ sqlplus / as sysdba

2. Remove ODM repositories from the database.

2.1 For Database release 9iR2 (9.2.x).

- Drop ODM, ODM_MTR schemas.

SQL> DROP USER ODM_MTR CASCADE;
SQL> DROP USER ODM CASCADE;


- Drop ODM public synonyms.
SQL> set heading off
SQL> spool /home/oracle/drop_odm_synonyms.sql
SQL> select 'Drop public synonym ' || SYNONYM_NAME || ' ;' from DBA_SYNONYMS where TABLE_OWNER = 'ODM';
SQL> spool off
SQL> @/home/oracle/drop_odm_synonyms.sql
Note that 264 synonyms should be selected and dropped.

2.2 For Database release 10gR1 (10.1.x)

- Drop DMSYS schema.

SQL> DROP USER DMSYS CASCADE;

- Drop DMSYS public synonyms.
SQL> set heading off 
SQL> spool /home/oracle/drop_dmsys_synonyms.sql
SQL> select 'Drop public synonym "' || SYNONYM_NAME || '" ;' from DBA_SYNONYMS where TABLE_OWNER = 'DMSYS';
SQL> spool off
SQL> @/home/oracle/drop_dmsys_synonyms.sql
Note that 568 synonyms should be selected and dropped.

Issue the following DELETE statement.

SQL> DELETE FROM exppkgact$ WHERE SCHEMA='DMSYS';

3. Update the DBA registry :

Before updating the DBA registry, first make sure Data Mining is registered in your database by running the following:
SQL> connect / as sysdba; 
SQL> select comp_id, version, status from dba_registry;
If ODM (Data Mining) is returned by the above query, then execute the following to remove Data Mining from the DBA registry:

SQL> exec dbms_registry.removed('ODM');

For the UNIX platform, the following steps relink the Oracle executable without the DM libraries.
1. Shutdown database
2. cd $ORACLE_HOME/rdbms/lib
3. make -f ins_rdbms.mk dm_off
4. make -f ins_rdbms.mk ioracle
5. startup the database

The sqlplus banner should no longer display the Data Mining option and v$option should show 'Oracle Data Mining' as false on a Unix system based install.

Please note that it is possible synonyms may exist for Data Mining objects. If any exist, remove them to complete the removal.
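
A sketch of how such leftover public synonyms could be located and removed (mirroring the spool technique shown earlier; use table_owner ODM for 9iR2 or DMSYS for 10gR1, as appropriate):

SQL> select 'drop public synonym "' || synonym_name || '";' from dba_synonyms where table_owner = 'DMSYS';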

Uninstall for Database version 10gR2 (10.2.x)

With Database Release 10gR2, the only supported way to remove the Data Mining option is to use the Oracle Universal Installer.

The steps would be,

- Start The Oracle Universal Installer.

-In Installer , click on the "Deinstall" Button.

- Find your RDBMS home and expand it.

- Under it you will find 'Oracle Database 10g 10.2.0.x' . Expand it.

- Under it you will find another folder also called Oracle Database 10g 10.2.0.1'. Expand it.

- In the bottom of this tree you will find 'Oracle Data Mining RDBMS Files 10.2.0.1'. This is what you need to remove.

- This Data Mining uninstall process should relink oracle executable with dm_off target, which should be reflected in sys.v$option view and SQLPlus banner.

Note that on some platforms (e.g. HP-UX) you may notice that after selecting 'Oracle Data Mining RDBMS Files 10.2.0.1' and pressing OK, you are asked if you're sure you want to remove basically everything, including "Oracle Database 10g" itself. If this is the case, stop the deinstall process and do the following:

On unix and linux platform,
1. Shutdown database
2. cd $ORACLE_HOME/rdbms/lib
3. make -f ins_rdbms.mk dm_off
4. make -f ins_rdbms.mk ioracle
5. startup database.
On all platforms,
Update the database registry to remove the Data Mining component from it by performing the following :

Start SQL*Plus and log in as SYSDBA and execute dbms_registry.removed('ODM')

SQL> conn / as sysdba
SQL> exec dbms_registry.removed('ODM');

The sqlplus banner should no longer display the Data Mining option and v$option should show 'Oracle Data Mining' as false on a Unix system based install.

Uninstall for Database version 11gR1 (11.1.x)

With 11g, Oracle Data Mining is installed as part of the database installation. Data Mining is now part of Oracle binary and SYS metadata.

The Data Mining option cannot be removed but it can be disabled:

On Linux/Unix platform as below:
1. Shutdown database 
2. cd $ORACLE_HOME/rdbms/lib
3. make -f ins_rdbms.mk dm_off
4. make -f ins_rdbms.mk ioracle
5. startup database

On Windows platform as below:

Use the remove option from the Universal Installer and select the Data Mining under Enterprise Edition.

To verify that the option has been disabled:

- Start sqlplus
$ sqlplus /nolog

- Connect as DBA user
SQL> conn / as sysdba

- Query the sys.v$option using :
SQL> Select parameter, value from v$option where parameter ='Data Mining';

If the returned value of the "value" column is FALSE, then the option has been disabled.

Expdp fails with PLS-00201: identifier DMSYS.DBMS_DM_MODEL_EXP must be declared

Problem Description
Oracle Data Pump export fails with the error "PLS-00201: identifier 'DMSYS.DBMS_DM_MODEL_EXP' must be declared" and ORA-06550, as shown below.
ORA-39125: Worker unexpected fatal error in KUPW$WORKER.
GET_TABLE_DATA_OBJECTS while calling DBMS_METADATA.FETCH_XML_CLOB []
ORA-31642: the following SQL statement fails:
BEGIN
DMSYS.DBMS_DM_MODEL_EXP.SCHEMA_CALLOUT('CLUS1PROD1',0,1,'10.01.00.03.00');
END;
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.DBMS_METADATA", line 872
ORA-06550: line 1, column 7:
PLS-00201: identifier 'DMSYS.DBMS_DM_MODEL_EXP' must be declared ORA-06550: line 1, column 7:
PL/SQL: Statement ignored

Cause of the Problem
The problem arises because DMSYS schema objects have been accidentally removed, DMSYS schema objects have an INVALID status, or the DMSYS schema has been dropped.

Solution of the Problem
Case 01: DMSYS has been dropped
1. Start SQLPlus.
$ sqlplus /nolog

2. Connect with user SYS as SYSDBA and issue the following commands:
SQL> conn / as sysdba
SQL> DELETE FROM exppkgact$ WHERE SCHEMA='DMSYS';
SQL> exit;
3. Run export jobs.

Case 02: DMSYS schema objects have been accidentally removed / have invalid objects
1. Start SQLPlus and connect with user SYS as SYSDBA
$ sqlplus / as sysdba

2.
i) If the database is version 10.1.0.x, do the following steps:
SQL> @$ORACLE_HOME/dm/admin/dminst.sql SYSAUX TEMP $ORACLE_HOME/dm/admin/
SQL> @$ORACLE_HOME/dm/admin/odmpatch.sql (if the database is at a patch level)
SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql
ii) If the database is version 10.2.0.x, do the following steps:
SQL> @$ORACLE_HOME/rdbms/admin/dminst.sql SYSAUX TEMP $ORACLE_HOME/rdbms/admin/
SQL> @$ORACLE_HOME/rdbms/admin/odmpatch.sql
SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql
3. Ensure that 'Oracle Data Mining' is at valid status in dba_registry using,
SQL> select COMP_NAME,VERSION,STATUS from dba_registry where COMP_NAME='Oracle Data Mining';

4. Run the export jobs.

Export DataPump fails with ORA-39125 while Calling DMSYS.DBMS_DM_MODEL_EXP.SCHEMA_CALLOUT

Problem Description
While invoking an Oracle Data Pump schema-level export, it fails with the error
"ORA-39125: Worker unexpected fatal error in KUPW$WORKER.GET_TABLE_DATA_OBJECTS
while calling DBMS_METADATA.FETCH_XML_CLOB []
ORA-31642: the following SQL statement fails:
BEGIN DMSYS.DBMS_DM_MODEL_EXP.SCHEMA_CALLOUT" like below.
#> expdp userid=system/password DIRECTORY=mydir DUMPFILE=expdp_scott.dmp LOGFILE=expdp_scott.log SCHEMAS=scott

Export: Release 10.2.0.1.0 - 64bit Production on Sunday, 27 July, 2008 10:00:38

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
Starting "TEST"."JOB1": userid=system/***** DIRECTORY=mydir DUMPFILE=expdp_scott.dmp LOGFILE=expdp_scott.log SCHEMAS=scott
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
ORA-39125: Worker unexpected fatal error in KUPW$WORKER.GET_TABLE_DATA_OBJECTS
while calling DBMS_METADATA.FETCH_XML_CLOB []
ORA-31642: the following SQL statement fails:
BEGIN DMSYS.DBMS_DM_MODEL_EXP.SCHEMA_CALLOUT('SCOTT',0,1,'10.01.00.02.00'); END;
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.DBMS_METADATA", line 872
ORA-04063: package body "DMSYS.DBMS_DM_UTIL" has errors
ORA-06508: PL/SQL: could not find program unit being called

Cause of the Problem
As the line "ORA-04063: package body "DMSYS.DBMS_DM_UTIL" has errors" appears the problem happened due to invalid state of the package DMSYS.DBMS_DM_UTIL. Package DMSYS.DBMS_DM_UTIL is used by the Oracle Data Mining option. You can verify the invalid objects in oracle by following query,
SQL> set lines 200
SQL> select status,
object_id,
object_type,
owner||'.'||object_name "OWNER.OBJECT"
from dba_objects
where status != 'VALID'
order by 4,2;

Solution of the Problem
Step 01: Log in to the database as the dmsys user and run the script dmutil.plb to re-create the invalid package.
SQL> CONNECT dmsys/dmsys
Connected.

SQL> @$ORACLE_HOME/dm/admin/dmutil.plb
Package created.
Package created.

Step 02: Run the script $ORACLE_HOME/rdbms/admin/utlrp.sql to recompile the invalid objects.
SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql

Step 03: Delete the Export DataPump log file and dump file of the previous failed attempt. Then re-run the export DataPump operation.
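
Optionally, before re-running the job, you can confirm that no DMSYS object remained invalid after utlrp.sql. This is my own verification step, not from the original note; the query should return no rows.

SQL> select owner||'.'||object_name "OWNER.OBJECT",
object_type,
status
from dba_objects
where owner = 'DMSYS'
and status != 'VALID';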

Tuesday, July 13, 2010

New features in Oracle 11g Data Guard

Oracle 11g already has two releases: Oracle 11g Release 1 (11.1) and Oracle 11g Release 2 (11.2). In this post I will list the new features of Oracle Data Guard 11g separately for each release.

New Features in Oracle Data Guard 11.1 (Applicable to both Redo Apply and SQL Apply)
- The COMPRESSION attribute is used to specify whether redo data is compressed before transmission to a redo transport destination.

- The NET_TIMEOUT attribute specifies the number of seconds that the LGWR background process will block while waiting for a redo transport destination to acknowledge redo data sent to it. If an acknowledgement is not received within NET_TIMEOUT seconds, an error is logged and the redo transport session to that destination is terminated.
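
As an illustration, both the COMPRESSION and NET_TIMEOUT attributes can be set together on a redo transport destination. This is only a sketch: the service name stby11g and the DB_UNIQUE_NAME are hypothetical, NET_TIMEOUT applies to SYNC transport, and redo transport compression requires the Advanced Compression option.

SQL> ALTER SYSTEM SET log_archive_dest_2 = 'SERVICE=stby11g SYNC NET_TIMEOUT=30 COMPRESSION=ENABLE VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby11g' SCOPE=BOTH;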

- Role transitions complete faster than in previous versions.

- Strong authentication for redo transport network sessions.

- Simplified Data Guard management interface by deprecation of redundant SQL clauses and initialization parameters.

- A physical standby database can now take advantage of the rolling upgrade feature provided by a logical standby. Using the new KEEP IDENTITY clause of the ALTER DATABASE RECOVER TO LOGICAL STANDBY SQL statement, a physical standby database can be temporarily converted into a logical standby database for the rolling upgrade and then reverted to the original configuration of a primary database and a physical standby database when the upgrade is done.
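
On the physical standby being converted, the statement mentioned above looks like this (shown only to illustrate the syntax; it is one step of the full documented rolling upgrade procedure, not something to run in isolation):

SQL> ALTER DATABASE RECOVER TO LOGICAL STANDBY KEEP IDENTITY;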

- A single Data Guard configuration can now contain a mix of Linux and Windows primary and standby databases.

New Features in Oracle Data Guard 11.1 (Applicable to Redo Apply only)
- You can open and query a physical standby database while it continues to receive and apply redo data from the primary database.

- The snapshot standby database, a new type of updatable standby database, has been introduced.

- You can use the RMAN DUPLICATE command to create a physical standby database over the network without the need for pre-existing database backups, as sketched below.
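
A minimal sketch of such an over-the-network duplication; prim and stby are hypothetical TNS aliases, the auxiliary instance is assumed to be started NOMOUNT with a minimal pfile, and the SPFILE SET clause is a placeholder for your own environment.

$ rman TARGET sys/password@prim AUXILIARY sys/password@stby
RMAN> DUPLICATE TARGET DATABASE
  FOR STANDBY
  FROM ACTIVE DATABASE
  SPFILE
    SET db_unique_name='stby'
  NOFILENAMECHECK;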

- A lost-write detection feature is available when using a physical standby database.

New Features in Oracle Data Guard 11.1 (Applicable to SQL Apply only)
- The DBMS_RLS and DBMS_FGA PL/SQL packages are supported, as is XML stored as CLOB.

- Transparent Data Encryption (TDE) is supported in SQL apply. This allows a logical standby database to provide data protection for applications with advanced security requirements.

- Data Guard SQL Apply parameters can be set dynamically, without restarting SQL Apply (see the sketch below).
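
For example, an apply parameter can now be changed on the logical standby while SQL Apply keeps running. A small sketch only; the parameter name APPLY_SERVERS and the value 16 are just an illustration, and not every parameter is dynamically settable.

SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('APPLY_SERVERS', 16);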

- When switching over to a logical standby database where either the primary database or the standby database is using Oracle RAC, the SWITCHOVER command can be used without having to shut down any instance, either at the primary or at the logical standby database.

- Scheduler Jobs can be created on a standby database using the PL/SQL DBMS_SCHEDULER package and can be associated with an appropriate database role so that they run when intended (for example, when the database is the primary, standby, or both).
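
A brief sketch of tying an existing scheduler job to a role; MY_STANDBY_JOB is a hypothetical job name, and the database_role attribute takes the value 'PRIMARY' or 'LOGICAL STANDBY'.

SQL> EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE('MY_STANDBY_JOB', 'database_role', 'LOGICAL STANDBY');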


New Features in Oracle Data Guard 11.2 (Applicable to both Redo Apply and SQL Apply)

- A Data Guard configuration now supports up to 30 standby databases.

- Oracle 10g introduced the flash recovery area; starting with Oracle 11g it is known as the fast recovery area. In 11gR2 the implicit archiving destination used for the fast recovery area changed from LOG_ARCHIVE_DEST_10 to LOG_ARCHIVE_DEST_1.

- The initialization parameter FAL_CLIENT is deprecated.

- Redo transport compression is no longer limited to compressing redo data only when a redo gap is being resolved. When compression is enabled, all redo data sent is compressed.

- The ALTER SYSTEM FLUSH REDO SQL statement can be used at failover time to flush unsent redo from a mounted primary database to a standby database, thereby allowing a zero data loss failover even if the primary database is not running in a zero data loss data protection mode.
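
On the mounted primary, the statement takes the DB_UNIQUE_NAME of the failover target; a sketch only, where standby_db is a placeholder for your standby's DB_UNIQUE_NAME:

SQL> ALTER SYSTEM FLUSH REDO TO standby_db;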


New Features in Oracle Data Guard 11.2 (Applicable to Redo Apply only)

- Apply lag tolerance can be configured by using the new STANDBY_MAX_DATA_DELAY session parameter (see the example after the next item).

- The "ALTER SESSION SYNC WITH PRIMARY" SQL statement can be used to ensure that a physical standby database is synchronized with the primary database as of the time the statement is issued.

- The V$DATAGUARD_STATS view now reports apply lag and transport lag statistics, and the information it contains is more accurate than in earlier releases (see the query sketch after the next item).

- The new V$STANDBY_EVENT_HISTOGRAM view shows a histogram of apply lag values on the physical standby database.
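
For instance, on the physical standby you can query both views like this (query sketches only; the exact column lists can vary slightly between versions):

SQL> SELECT name, value, unit FROM v$dataguard_stats WHERE name IN ('apply lag', 'transport lag');
SQL> SELECT * FROM v$standby_event_histogram WHERE name = 'apply lag';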

- A corrupted data block in a primary database can be automatically replaced with an uncorrupted copy of that block from a physical standby database and vice versa.

New Features in Oracle Data Guard 11.2 (Applicable to SQL Apply only)
- Logical standby databases and the LogMiner utility support tables with basic table compression and OLTP table compression.

- Logical standby and the LogMiner utility support tables with SecureFile LOB columns. Compression and encryption operations on SecureFile LOB columns are also supported.

- Online redefinition performed at the primary database using the DBMS_REDEFINITION PL/SQL package is transparently replicated on a logical standby database.

- Logical Standby supports the use of editions at the primary database, including the use of edition-based redefinition to upgrade applications with minimal downtime.

- Logical standby databases support Streams Capture. This allows you to offload processing from the primary database in one-way information propagation configurations and make the logical standby the hub that propagates information to multiple databases. Streams Capture can also propagate changes that are local to the logical standby database.

Saturday, July 10, 2010

While starting the listener it fails with HPUX Error: 29: Illegal seek

Problem Description
While issuing "lsnrctl start" command on HP-UX it fails with HPUX Error: 29: Illegal seek error like below.
ORACLE:/home>lsnrctl start 

LSNRCTL for HPUX: Version 10.2.0.4.0 - Production on 09-JUL-2010 22:19:41

Copyright (c) 1991, 2007, Oracle. All rights reserved.

Starting /SIDS/app/oracle/product/10.2.0/db/bin/tnslsnr: please wait...

TNS-12537: TNS:connection closed
TNS-12560: TNS:protocol adapter error
TNS-00507: Connection closed
HPUX Error: 29: Illegal seek
"lsnrctl status" command also with error HPUX Error: 239: Connection refused
ORACLE:/home>lsnrctl status 

LSNRCTL for HPUX: Version 10.2.0.4.0 - Production on 09-JUL-2010 22:33:21

Copyright (c) 1991, 2007, Oracle. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1621)))
TNS-12541: TNS:no listener
TNS-12560: TNS:protocol adapter error
TNS-00511: No listener
HPUX Error: 239: Connection refused
On Linux, the error stack may look like this:
TNS-12547: TNS:lost contact 
TNS-12560: TNS:protocol adapter error
TNS-00517: Lost contact
Linux IA64 Error: 104: Connection reset by peer

Cause of the Problem
The problem happens due to incorrect settings in the /etc/hosts file, or because the OS user is unable to read the /etc/hosts file. As a result, the TNS listener fails while attempting to initiate its subscription to the ONS node down event.

Solution of the Problem
1) Verify that the OS user has read permission on the /etc/hosts file.
$ls -l /etc/hosts

2) Verify that the /etc/hosts file contains the localhost entry in the following form:

127.0.0.1 localhost.localdomain localhost
10.1.10.2 myhost.arjudba.blogspot.com myhost

3) If you are not in a RAC environment, explicitly disable the 10g TNS listener's subscription to ONS by adding the parameter below to the listener.ora file.

SUBSCRIBE_FOR_NODE_DOWN_EVENT_{listener_name}=OFF

where {listener_name} would be replaced with the actual listener name configured in the listener.ora file.

For example, if the listener name is LISTENER (the default), the parameter would be:

SUBSCRIBE_FOR_NODE_DOWN_EVENT_LISTENER=OFF

4) Start the listener.
$ lsnrctl start