
Thursday, July 22, 2010

Export fails with ORA-01406: fetched column value was truncated

Problem Description
Exporting an Oracle 11g database using the 10g exp utility fails with ORA-01406 errors.
E:\>exp exp_user/exp_user full=y file=exp_dump.dmp log=exp_dump.log

Export: Release 10.2.0.1.0 - Production on Thu Jul 22 23:01:49 2010

Copyright (c) 1982, 2005, Oracle. All rights reserved.


Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set

About to export the entire database ...
. exporting tablespace definitions
. exporting profiles
. exporting user definitions
. exporting roles
EXP-00008: ORACLE error 1406 encountered
ORA-01406: fetched column value was truncated
EXP-00000: Export terminated unsuccessfully

Further investigation shows that the statement failing with ORA-01406 is:
SELECT GRANTEE, PRIV, WGO FROM SYS.EXU8SPV ORDER BY SEQUENCE

And the database character set is WE8MSWIN1252.

SQL> select property_value from database_properties where property_name='NLS_CHARACTERSET';

PROPERTY_VALUE
--------------------------------------------------------------------------------
WE8MSWIN1252

The same problem arises whenever you connect to an 11g database using the Oracle 9i or Oracle 10gR1 exp client utility.

Cause of the Problem
The cause of this problem has been identified as Oracle Bug 6804150, which is fixed in 11g and in the 10.2.0.5 patchset.

It is caused by the new privilege name "ADMINISTER SQL MANAGEMENT OBJECT" added in 11g.
Because this privilege name exceeds 30 characters, the older exp client truncates the fetched value and ORA-01406 is raised.
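
As a quick check, you can list any system privilege names longer than 30 characters from the data dictionary (a small diagnostic sketch; SYSTEM_PRIVILEGE_MAP is the standard view of system privilege names):

SQL> select name, length(name) from system_privilege_map where length(name) > 30;

On 11.1 this should return ADMINISTER SQL MANAGEMENT OBJECT, which is 32 characters long.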

Solution of the Problem
Solution 01:
Changing the database character set to AL32UTF8 solves this problem.

Solution 02:
Apply Patch for BUG 6804150 if available for your platform.

Solution 03:
Install the Oracle 10.2.0.5 patchset.

Solution 04:
Use the 11g exp client utility.
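
For example, assuming the 11g client is installed in a hypothetical Oracle home, invoke its exp binary directly:

E:\>C:\oracle\product\11.1.0\db_1\BIN\exp exp_user/exp_user full=y file=exp_dump.dmp log=exp_dump.log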

exp fails with EXP-00023 and expdp fails with ORA-31631, ORA-39161

Problem Description
A full database export/import is needed. For the export, the "EXPORT FULL DATABASE" privilege is granted, and for the import, the "IMPORT FULL DATABASE" privilege is granted. Yet while doing the full database export/import, exp fails with EXP-00023 and expdp fails with
ORA-31631: privileges are required
ORA-39161: Full database jobs require privileges

In the following example the error case is demonstrated.

A user named exp_user is created and granted the EXPORT FULL DATABASE privilege.
E:\>sqlplus / as sysdba

SQL*Plus: Release 11.1.0.6.0 - Production on Thu Jul 22 19:35:15 2010

Copyright (c) 1982, 2007, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> create user exp_user identified by exp_user;

User created.

SQL> grant create session, resource, export full database to exp_user;

Grant succeeded.

SQL> select * from dba_sys_privs where grantee='EXP_USER';

GRANTEE PRIVILEGE ADM
------------------------------ ---------------------------------------- ---
EXP_USER EXPORT FULL DATABASE NO
EXP_USER CREATE SESSION NO
EXP_USER UNLIMITED TABLESPACE NO

SQL> select * from dba_role_privs where grantee='EXP_USER';

GRANTEE GRANTED_ROLE ADM DEF
------------------------------ ------------------------------ --- ---
EXP_USER RESOURCE NO YES

E:\>exp exp_user/exp_user full=y file=exp_dump.dmp log=exp_dump.log

Export: Release 10.2.0.1.0 - Production on Thu Jul 22 19:43:52 2010

Copyright (c) 1982, 2005, Oracle. All rights reserved.


Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
EXP-00023: must be a DBA to do Full Database or Tablespace export
(2)U(sers), or (3)T(ables): (2)U >

Export grants (yes/no): yes >

Export table data (yes/no): yes >

Compress extents (yes/no): yes >

Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user EXP_USER
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions for user EXP_USER
About to export EXP_USER's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export EXP_USER's tables via Conventional Path ...
. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting referential integrity constraints
. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting materialized views
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting statistics
Export terminated successfully with warnings.

E:\>expdp exp_user/exp_user full=y dumpfile=exp_dump.dmp logfile=exp_dump.log

Export: Release 10.2.0.1.0 - Production on Thursday, 22 July, 2010 19:51:21

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-31631: privileges are required
ORA-39161: Full database jobs require privileges
From the above example we see that both the exp and expdp operations have failed.

Cause of the Problem
The system privileges "EXPORT FULL DATABASE" and "IMPORT FULL DATABASE" were introduced in Oracle Database 10gR1, but they are not currently used by Oracle; perhaps they will be implemented in a future release. In Oracle 10g and 11g these two system privileges are not operational, so granting them accomplishes nothing.

The privileges that export/import actually check are the roles EXP_FULL_DATABASE and IMP_FULL_DATABASE.
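
A quick dictionary query (a sketch) shows who already holds these roles:

SQL> select grantee, granted_role from dba_role_privs where granted_role in ('EXP_FULL_DATABASE','IMP_FULL_DATABASE');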

Solution of the Problem
Assign the correct privileges to the user: grant the EXP_FULL_DATABASE role to whoever will perform the full database export, and the IMP_FULL_DATABASE role to whoever will perform the full database import.
E:\>sqlplus / as sysdba

SQL*Plus: Release 11.1.0.6.0 - Production on Thu Jul 22 19:53:32 2010

Copyright (c) 1982, 2007, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> revoke export full database from exp_user;

Revoke succeeded.

SQL> grant exp_full_database to exp_user;

Grant succeeded.

SQL> select * from dba_sys_privs where grantee = 'EXP_USER';

GRANTEE PRIVILEGE ADM
------------------------------ ---------------------------------------- ---
EXP_USER CREATE SESSION NO
EXP_USER UNLIMITED TABLESPACE NO

SQL> select * from dba_role_privs where grantee = 'EXP_USER';

GRANTEE GRANTED_ROLE ADM DEF
------------------------------ ------------------------------ --- ---
EXP_USER EXP_FULL_DATABASE NO YES
EXP_USER RESOURCE NO YES

Now invoking exp and expdp with full=y completes successfully.
D:\>exp exp_user/exp_user full=y file=exp_dump.dmp log=exp_dump.log

Export: Release 11.1.0.6.0 - Production on Thu Jul 22 21:19:03 2010

Copyright (c) 1982, 2007, Oracle. All rights reserved.


Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set

About to export the entire database ...
. exporting tablespace definitions
. exporting profiles
. exporting user definitions
. exporting roles
. exporting resource costs
. exporting rollback segment definitions
. exporting database links
. exporting sequence numbers
. exporting directory aliases
. exporting context namespaces
. exporting foreign function library names
. exporting PUBLIC type synonyms
. exporting private type synonyms
.
.
.

Saturday, February 27, 2010

EXP-00008, ORA-04063, ORA-06508, EXP-00083 PL/SQL: could not find program unit

Problem Description
Oracle full database export fails with error stack EXP-00008, ORA-04063, ORA-06508, EXP-00083 like below.
EXP-00008: ORACLE error 4063 encountered
ORA-04063: package body "WMSYS.LT_EXPORT_PKG" has errors
ORA-06508: PL/SQL: could not find program unit being called: "WMSYS.LT_EXPORT_PKG"
ORA-06512: at line 1
EXP-00083: The previous problem occurred when calling WMSYS.LT_EXPORT_PKG.schema_info_exp

Cause of the Problem
The above problems are caused by revoking the execute privilege on the UTL_FILE package from PUBLIC.

Oracle Workspace Manager (OWM) and some other database components (even ones not installed in the database) need the privilege to execute the SYS.UTL_FILE package.

During creation of a default database, Workspace Manager is installed by default. The WMSYS user becomes intertwined with the export process and must be valid (or at least its major parts must be) for an export to complete properly, because the export process looks for any "version-enabled" tables that the Workspace Manager schema may be using.

Solution of the Problem
To solve the above problems, follow these steps.

1. Grant execute privilege on SYS.UTL_FILE package to WMSYS user.
SQL> grant execute on SYS.UTL_FILE to WMSYS;

2. Check for invalid objects in the WMSYS schema using the following query:
SQL> select object_name,object_type,owner,status from dba_objects where status='INVALID' and owner='WMSYS';

3. Run the script $ORACLE_HOME/rdbms/admin/utlrp.sql to recompile the invalid objects.
SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql
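
4. Optionally, confirm that the recompilation left no invalid objects in the WMSYS schema (the count should return 0):
SQL> select count(*) from dba_objects where owner='WMSYS' and status='INVALID';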

Related Documents
http://arjudba.blogspot.com/2010/02/exp-00008-ora-06550-pls-00201-exp-00083.html
http://arjudba.blogspot.com/2010/02/ora-39127-ora-04063-ora-06508-ora-06512.html
http://arjudba.blogspot.com/2010/02/exp-00008-ora-04063-ora-06508-exp-00083.html
http://arjudba.blogspot.com/2009/12/export-fully-fails-with-pls-00201-ora.html
http://arjudba.blogspot.com/2009/12/export-fails-with-exp-00002-error-in.html
http://arjudba.blogspot.com/2009/01/ora-31655-no-data-or-metadata-objects.html
http://arjudba.blogspot.com/2009/01/expdp-fails-with-ora-31693-ora-06502.html
http://arjudba.blogspot.com/2008/12/ora-39095-dump-file-space-has-been.html
http://arjudba.blogspot.com/2008/09/expdp-fails-with-ora-31626-ora-31633.html
http://arjudba.blogspot.com/2008/07/data-pump-export-fails-with-ora-39000.html
http://arjudba.blogspot.com/2009/07/ora-39165-schema-sys-was-not-found-ora.html
http://arjudba.blogspot.com/2009/07/ora-39166-object-was-not-found-sys.html
http://arjudba.blogspot.com/2009/05/ora-39000-ora-39143-dump-file-may-be.html
http://arjudba.blogspot.com/2009/05/expdp-fails-with-ora-39001ora-39169ora.html

Sunday, December 13, 2009

Export Full=y fails with PLS-00201 ORA-06510 ORA-06512

Problem Description
In database versions 10.2.0.3 and 10.2.0.4, the original export utility with the FULL=Y option fails with the error PLS-00201: identifier 'SYS.DBMS_DEFER_IMPORT_INTERNAL' must be declared. The error stack from the log file is shown below.



Table DEF$_AQCALL will be exported in conventional path.
. . exporting table DEF$_AQCALL
EXP-00008: ORACLE error 6550 encountered
ORA-06550: line 1, column 18:
PLS-00201: identifier 'SYS.DBMS_DEFER_IMPORT_INTERNAL' must be declared
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored
ORA-06512: at "SYS.DBMS_SYS_SQL", line 1204
ORA-06512: at "SYS.DBMS_SQL", line 323
ORA-06512: at "SYS.DBMS_EXPORT_EXTENSION", line 97
ORA-06512: at "SYS.DBMS_EXPORT_EXTENSION", line 126
ORA-06512: at line 1
Table DEF$_AQERROR will be exported in conventional path.
. . exporting table DEF$_AQERROR
EXP-00008: ORACLE error 6510 encountered
ORA-06510: PL/SQL: unhandled user-defined exception
ORA-06512: at "SYS.DBMS_EXPORT_EXTENSION", line 50
ORA-06512: at "SYS.DBMS_EXPORT_EXTENSION", line 126
ORA-06512: at line 1
. . exporting table DEF$_CALLDEST
EXP-00008: ORACLE error 6550 encountered
ORA-06550: line 1, column 18:
PLS-00201: identifier 'SYS.DBMS_DEFER_IMPORT_INTERNAL' must be declared
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored
ORA-06512: at "SYS.DBMS_SYS_SQL", line 1204
ORA-06512: at "SYS.DBMS_SQL", line 323
ORA-06512: at "SYS.DBMS_EXPORT_EXTENSION", line 97
ORA-06512: at "SYS.DBMS_EXPORT_EXTENSION", line 126
ORA-06512: at line 1
. . exporting table DEF$_DEFAULTDEST
EXP-00008: ORACLE error 6510 encountered
ORA-06510: PL/SQL: unhandled user-defined exception
ORA-06512: at "SYS.DBMS_EXPORT_EXTENSION", line 50
ORA-06512: at "SYS.DBMS_EXPORT_EXTENSION", line 126
ORA-06512: at line 1
. . exporting table DEF$_DESTINATION
EXP-00008: ORACLE error 6550 encountered
ORA-06550: line 1, column 18:
PLS-00201: identifier 'SYS.DBMS_DEFER_IMPORT_INTERNAL' must be declared
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored
ORA-06512: at "SYS.DBMS_SYS_SQL", line 1204
ORA-06512: at "SYS.DBMS_SQL", line 323

Cause of the Problem
The error occurred during the full database export because the user performing the export does not have execute privilege on DBMS_DEFER_IMPORT_INTERNAL and DBMS_EXPORT_EXTENSION. Even if the user is a DBA or connects as SYSDBA, it needs explicit privileges on these two packages.

Solution of the Problem

Note that the EXP_FULL_DATABASE/IMP_FULL_DATABASE roles are not enough for a full database export in this case. It is necessary to grant the execute privilege on these packages directly to the user, not through a role.

The solution is as follows:
1) Connect to database as sys user.
SQL> conn / as sysdba

2) Explicitly grant the execute privilege on DBMS_DEFER_IMPORT_INTERNAL and DBMS_EXPORT_EXTENSION to the user performing the export as shown below.

SQL> GRANT EXECUTE ON SYS.DBMS_DEFER_IMPORT_INTERNAL TO <user_name>;

SQL> GRANT EXECUTE ON SYS.DBMS_EXPORT_EXTENSION TO <user_name>;

3) Run the export operation again.
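
You can verify the direct grants afterwards with a quick dictionary check (in DBA_TAB_PRIVS the TABLE_NAME column also holds package names):

SQL> select grantee, table_name, privilege from dba_tab_privs where table_name in ('DBMS_DEFER_IMPORT_INTERNAL','DBMS_EXPORT_EXTENSION');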

Related Documents
Export/Import from Enterprise Manager
Original Export/Import


Export fails with EXP-00002: error in writing to export file

Problem Description
I ran my script exp10g_zip.sh and it failed with the error below.
EAIAPP:/opt/oracle/admin/EAIAPP/scripts>./exp10g_zip.sh EAIAPP 1 Y ALL
./exp10g_zip.sh: line 172: /usr/contrib/bin/gzip: No such file or directory
ERROR MSG: ./exp10g_zip.sh FAILED WITH AN ORACLE (EXP) ERROR ON HOST db1-eai.pst.hrn.clearitlab.com
LOG FILE: /opt/oracle/admin/EAIAPP/logs/exp10g_zip.sh_21273.log
RC: 1
From the /opt/oracle/admin/EAIAPP/logs/exp10g_zip.sh_21273.log logfile (which is export logfile) it shows,
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export done in UTF8 character set and AL16UTF16 NCHAR character set
EXP-00002: error in writing to export file
EXP-00000: Export terminated unsuccessfully
From the script logfile it says
INVOKING DATABASE BACKUP Sun, 13 Dec 2009 00:11:44.


Export: Release 10.2.0.4.0 - Production on Sun Dec 13 00:11:44 2009

Copyright (c) 1982, 2007, Oracle. All rights reserved.


Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export done in UTF8 character set and AL16UTF16 NCHAR character set
EXP-00002: error in writing to export file
EXP-00000: Export terminated unsuccessfully

DATABASE BACKUP COMPLETE 00:11:44. EXPORT FILE /opt/oracle/admin/EAIAPP/exports/EAIAPP_20091213_001144.dmp

./exp10g_zip.sh FAILED WITH AN ORACLE (EXP) ERROR. SYNOPSIS WRITEN TO STDOUT FOR LSF. EXIT WITH RC=1

exit 1
And my script looked like this:
if mkfifo $PIPE
then
    trap "/bin/rm $PIPE 2>/dev/null" HUP INT QUIT TERM
    /usr/contrib/bin/gzip < $PIPE > $EXPFILE.Z &

    if [ $EXP_USERS = "ALL" ] ; then
        $ORACLE_HOME/bin/exp exp_dba/$DB_EXP_DBA file=$PIPE full=Y direct=Y rows=$EXP_DATA compress=N log=$EXP_LOG STATISTICS=NONE >> $LOGFILE 2>&1
    else
        $ORACLE_HOME/bin/exp exp_dba/$DB_EXP_DBA file=$PIPE OWNER=\(${EXP_USERS}\) direct=Y rows=$EXP_DATA compress=N log=$EXP_LOG STATISTICS=NONE >> $LOGFILE 2>&1
    fi

    /bin/rm $PIPE 2>/dev/null
fi

RC=$?
Cause of the Problem
After running the script, seeing ./exp10g_zip.sh: line 172: /usr/contrib/bin/gzip: No such file or directory I immediately suspected that the gzip location was wrong; then, seeing EXP-00002: error in writing to export file, I guessed the problem might instead be that Oracle failed to write the dump to the operating system file system.

Solution of the Problem
To solve the "EXP-00002: error in writing to export file" error, you need to check the following things:

1) The export file could not be written to disk anymore, probably because the disk is full or the device has an error.

Check the free disk space by issuing:
$ df -h

Also check whether the device has an error by creating a file on it and writing some content:
$ vi test_device.txt

Write some content and save it. If that succeeds, the device has no problem.

2) The cause may be that your file system does not support files beyond a certain limit (e.g., a dump file larger than 2 GB). You might see the dump file stop at a size like 1.99G and the export fail on the next table because the file system cannot hold files larger than 2 GB. In that case you need to split the dump into several files: add a FILESIZE parameter and list more dump files with the FILE parameter.
Example:
exp arju/arju full=y file=exp1.dmp,exp2.dmp,exp3.dmp log=export.log filesize=2000M
3) Make sure no limitation is shown by issuing ulimit -a.
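
The file size limit is the one that usually matters here; a quick check on a POSIX shell:

$ ulimit -f        # maximum writable file size (usually in 512-byte blocks); 'unlimited' is what you want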

4) Note that during an export operation, Export selects from the DUAL table to obtain the current date and other information. If the DUAL table has been truncated, you cannot select any data from it any more, and "EXP-00002: error in writing to export file" will be produced.

To solve it,

i) Check who owns the DUAL table (it should be SYS only) and whether there is a public synonym for it:
SQL> connect / as sysdba
SQL> select created,owner,object_type from dba_objects
where object_name='DUAL';

CREATED OWNER OBJECT_TYPE
----------------- ------------------------------ ------------------
29-JUL-2009 00:06 SYS TABLE
29-JUL-2009 00:06 PUBLIC SYNONYM
Also make sure only one DUAL table exists, owned by SYS. Delete any other copies.

ii)Check whether there is a row in DUAL and, if not, add one:
SQL> select * from dual;
no rows selected

SQL> insert into dual values ('X');
1 row created.

SQL> commit;
Commit complete.

SQL> select * from dual;
D
-
X
iii)Now re-run the export.

But in my case these solutions did not solve the problem. Looking at the script,
trap "/bin/rm $PIPE 2>/dev/null" HUP INT QUIT TERM
/usr/contrib/bin/gzip < $PIPE > $EXPFILE.Z&
and the error message, it first says /usr/contrib/bin/gzip: No such file or directory. I checked whether gzip exists in /usr/contrib/bin/ and it does not. Using
which gzip I found its location is /bin/gzip, so I used the following lines in the script, which solved my problem.
trap "/bin/rm $PIPE 2>/dev/null" HUP INT QUIT TERM
/bin/gzip < $PIPE > $EXPFILE.Z&
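
A slightly more portable variant (just a sketch) resolves the gzip path at run time instead of hard-coding it:

GZIP_BIN=$(command -v gzip) || { echo "gzip not found in PATH" >&2; exit 1; }
trap "/bin/rm $PIPE 2>/dev/null" HUP INT QUIT TERM
"$GZIP_BIN" < $PIPE > $EXPFILE.Z &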

Related Documents
Export/Import from Enterprise Manager
Original Export/Import


Wednesday, May 6, 2009

Understand the Estimate parameter in data pump export

With the ESTIMATE_ONLY parameter, as discussed at http://arjudba.blogspot.com/2009/02/estimate-dumpfile-size-before-taking.html, we can estimate the space in bytes each table would consume without actually performing the data pump export, in other words without generating a dump file.

With the ESTIMATE parameter of data pump export, you specify the method Export uses to estimate how much disk space each table in the export job will consume (in bytes) before the actual export is performed.

The ESTIMATE parameter can take two values: BLOCKS (the default) or STATISTICS.

The meaning of these two values is described below.

BLOCKS: The estimate is calculated by multiplying the number of database blocks used by the target objects by the appropriate block size.

STATISTICS: The estimate is calculated using the statistics for each table, so for an accurate estimate the tables must have been analyzed recently.
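
For example, to refresh the statistics of a schema before estimating (a minimal sketch using the standard DBMS_STATS package):

SQL> exec dbms_stats.gather_schema_stats(ownname => 'ARJU');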

Note that the figure produced by ESTIMATE=BLOCKS can be far from the size of the actual dump file. In fact, the BLOCKS method becomes more inaccurate relative to the dump file size when:

a) The table was created with a much bigger initial extent size than was needed for the actual table data.

b) Many rows have been deleted from the table, or a very small percentage of each block is used.


The figure produced by ESTIMATE=STATISTICS is closest to the dump file size when the tables have been analyzed recently.

Below is an example of both ESTIMATE=BLOCKS and ESTIMATE=STATISTICS. In both cases the dump file is generated after the estimation.

E:\>expdp schemas=arju userid=arju/a dumpfile=arju_11_01_blocks.dmp directory=test estimate=blocks
Export: Release 10.2.0.1.0 - Production on Thursday, 07 May, 2009 11:51:12

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "ARJU"."SYS_EXPORT_SCHEMA_03": schemas=arju userid=arju/********
dumpfile=arju_11_01_blocks.dmp directory=test estimate=blocks
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. estimated "ARJU"."SYS_EXPORT_SCHEMA_02" 320 KB
. estimated "ARJU"."SYS_EXPORT_SCHEMA_01" 128 KB
. estimated "ARJU"."AUTHOR" 64 KB
. estimated "ARJU"."BOOKAUTHOR" 64 KB
. estimated "ARJU"."BOOKS" 64 KB
. estimated "ARJU"."BOOK_CUSTOMER" 64 KB
. estimated "ARJU"."BOOK_ORDER" 64 KB
. estimated "ARJU"."ORDERITEMS" 64 KB
. estimated "ARJU"."PROMOTION" 64 KB
. estimated "ARJU"."PUBLISHER" 64 KB
. estimated "ARJU"."T" 64 KB
Total estimation using BLOCKS method: 1024 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "ARJU"."SYS_EXPORT_SCHEMA_02" 214.3 KB 1125 rows
. . exported "ARJU"."SYS_EXPORT_SCHEMA_01" 31.55 KB 12 rows
. . exported "ARJU"."AUTHOR" 5.835 KB 14 rows
. . exported "ARJU"."BOOKAUTHOR" 5.609 KB 20 rows
. . exported "ARJU"."BOOKS" 7.781 KB 14 rows
. . exported "ARJU"."BOOK_CUSTOMER" 8.234 KB 21 rows
. . exported "ARJU"."BOOK_ORDER" 8.398 KB 21 rows
. . exported "ARJU"."ORDERITEMS" 6.742 KB 32 rows
. . exported "ARJU"."PROMOTION" 5.710 KB 4 rows
. . exported "ARJU"."PUBLISHER" 6.265 KB 8 rows
. . exported "ARJU"."T" 4.914 KB 1 rows
Master table "ARJU"."SYS_EXPORT_SCHEMA_03" successfully loaded/unloaded
******************************************************************************
Dump file set for ARJU.SYS_EXPORT_SCHEMA_03 is:
E:\ORACLE\TEST\ARJU_11_01_blocks.DMP
Job "ARJU"."SYS_EXPORT_SCHEMA_03" successfully completed at 11:51:39

E:\>expdp schemas=arju userid=arju/a dumpfile=arju_11_01_statistics.dmp directory=test estimate=statistics
Export: Release 10.2.0.1.0 - Production on Thursday, 07 May, 2009 11:52:12

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "ARJU"."SYS_EXPORT_SCHEMA_03": schemas=arju userid=arju/********
dumpfile=arju_11_01_statistics.dmp directory=test estimate=statistics
Estimate in progress using STATISTICS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. estimated "ARJU"."SYS_EXPORT_SCHEMA_02" 187.0 KB
. estimated "ARJU"."SYS_EXPORT_SCHEMA_01" 36.67 KB
. estimated "ARJU"."BOOK_ORDER" 8.832 KB
. estimated "ARJU"."BOOK_CUSTOMER" 8.708 KB
. estimated "ARJU"."BOOKS" 8.156 KB
. estimated "ARJU"."ORDERITEMS" 6.808 KB
. estimated "ARJU"."PUBLISHER" 6.542 KB
. estimated "ARJU"."AUTHOR" 6.054 KB
. estimated "ARJU"."PROMOTION" 5.882 KB
. estimated "ARJU"."BOOKAUTHOR" 5.746 KB
. estimated "ARJU"."T" 5.061 KB
Total estimation using STATISTICS method: 285.5 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "ARJU"."SYS_EXPORT_SCHEMA_02" 214.3 KB 1125 rows
. . exported "ARJU"."SYS_EXPORT_SCHEMA_01" 31.55 KB 12 rows
. . exported "ARJU"."BOOK_ORDER" 8.398 KB 21 rows
. . exported "ARJU"."BOOK_CUSTOMER" 8.234 KB 21 rows
. . exported "ARJU"."BOOKS" 7.781 KB 14 rows
. . exported "ARJU"."ORDERITEMS" 6.742 KB 32 rows
. . exported "ARJU"."PUBLISHER" 6.265 KB 8 rows
. . exported "ARJU"."AUTHOR" 5.835 KB 14 rows
. . exported "ARJU"."PROMOTION" 5.710 KB 4 rows
. . exported "ARJU"."BOOKAUTHOR" 5.609 KB 20 rows
. . exported "ARJU"."T" 4.914 KB 1 rows
Master table "ARJU"."SYS_EXPORT_SCHEMA_03" successfully loaded/unloaded
******************************************************************************
Dump file set for ARJU.SYS_EXPORT_SCHEMA_03 is:
E:\ORACLE\TEST\ARJU_11_01_STATISTICS.DMP
Job "ARJU"."SYS_EXPORT_SCHEMA_03" successfully completed at 11:52:25

Using ESTIMATE=BLOCKS, the pre-export size is shown as 1024 KB; using ESTIMATE=STATISTICS, it is shown as 285.5 KB. My actual dump file size was 472 KB, which is far from the BLOCKS estimate (a difference of 1024-472=552 KB), while the STATISTICS estimate differs by only 186.5 KB.

Note that if a table involves LOBs, the dump file size may vary as ESTIMATE does not take LOB size into consideration.


Thursday, January 29, 2009

ORA-31655: no data or metadata objects selected for job

Problem Description
You are performing a data pump export to export objects filtered via the EXCLUDE or INCLUDE parameter of expdp.
In this example you want the tables 'ACCOUNT_GROUP', 'ADDRESS' and 'AREA_GROUP', plus all tables whose names start with CV (like 'CV', 'CV_EXPERIENCE', 'CV_EDUCATION' etc.).

Your parameter file is like below.
userid=smilebd/a
directory=d
dumpfile=b.dmp
include =TABLE:"IN('ACCOUNT_GROUP','ADDRESS','AREA_GROUP')"
include =TABLE:"LIKE 'CV%' "

And you invoke expdp from the command line as
expdp parfile=d:\parfile.txt
But it fails with the message below on a Windows PC.
C:\>expdp parfile=d:\parfile.txt

Export: Release 10.1.0.4.2 - Production on Thursday, 29 January, 2009 14:53

Copyright (c) 2003, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Produc
tion
With the Partitioning, OLAP and Data Mining options
Starting "SMILEBD"."SYS_EXPORT_SCHEMA_01": parfile=d:\parfile.txt
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 0 KB
ORA-31655: no data or metadata objects selected for job
Job "SMILEBD"."SYS_EXPORT_SCHEMA_01" completed with 1 error(s) at 14:53:50

You expected 'ACCOUNT_GROUP', 'ADDRESS', 'AREA_GROUP', 'CV', 'CV_EXPERIENCE' and 'CV_EDUCATION' to be exported, but not a single table is exported and the error ORA-31655: no data or metadata objects selected for job is returned.

Cause of the Problem
This problem happens because multiple INCLUDE options were set in expdp. Multiple INCLUDE parameters can be given, but I recommend against it: if multiple filters (EXCLUDE/INCLUDE) are specified, an implicit AND operation is applied to them.

In this example data pump interprets
include =TABLE:"IN('ACCOUNT_GROUP','ADDRESS','AREA_GROUP')"  
include =TABLE:"LIKE 'CV%' "

as,
TABLE:"IN('ACCOUNT_GROUP','ADDRESS','AREA_GROUP')" AND TABLE:"LIKE 'CV%'"

Because the AND operation is applied, all tables are filtered out (no table both starts with CV and appears in the list 'ACCOUNT_GROUP', 'ADDRESS', 'AREA_GROUP'), so no tables are exported.

A clearer example follows.

Your parameter file is like below.
userid=smilebd/a
directory=d
dumpfile=b.dmp
include =TABLE:"IN('ACCOUNT_GROUP','ADDRESS','AREA_GROUP')"
include =TABLE:"LIKE 'CV%' "

And you invoke datapump
C:\>expdp parfile=d:\parfile.txt

Export: Release 10.1.0.4.2 - Production on Thursday, 29 January, 2009 14:54

Copyright (c) 2003, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Produc
tion
With the Partitioning, OLAP and Data Mining options
Starting "SMILEBD"."SYS_EXPORT_SCHEMA_01": parfile=d:\parfile.txt
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "SMILEBD"."ACCOUNT_GROUP" 11.75 KB 132 rows
Master table "SMILEBD"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SMILEBD.SYS_EXPORT_SCHEMA_01 is:
G:\B.DMP
Job "SMILEBD"."SYS_EXPORT_SCHEMA_01" successfully completed at 14:54:38

Note that here only one table is exported.
With the first INCLUDE parameter,
include =TABLE:"IN('ACCOUNT_GROUP','ADDRESS','AREA_GROUP')"

the first 3 tables are selected,
and with the second INCLUDE parameter,
include =TABLE:"LIKE 'ACC%' "
both outputs are ANDed, so only one table is selected: ACCOUNT_GROUP (the only one of the three that starts with ACC).

Solution of the Problem
It is recommended not to use multiple INCLUDE or EXCLUDE parameters in data pump jobs; see http://arjudba.blogspot.com/2008/04/datapump-parameters-include-and-exclude.html. If you are in the above scenario, use only one INCLUDE parameter and list all the tables within the IN clause, like below:

include =TABLE:"IN('ACCOUNT_GROUP','ADDRESS','AREA_GROUP','CV', 'CV_EXPERIENCE', 'CV_EDUCATION')"

There is no valid reason to use multiple INCLUDE or EXCLUDE parameters in a data pump operation; a single INCLUDE or EXCLUDE parameter can do the job.
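
Putting it together, the corrected parameter file for the scenario above becomes:

userid=smilebd/a
directory=d
dumpfile=b.dmp
include=TABLE:"IN('ACCOUNT_GROUP','ADDRESS','AREA_GROUP','CV','CV_EXPERIENCE','CV_EDUCATION')"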

Monday, January 5, 2009

Expdp fails with ORA-31693, ORA-06502, ORA-31605, LPX-314

Problem Description
$expdp parfile=pfile_maxim_history_sel.par directory=d

Export: Release 10.2.0.1.0 - 64bit Production on Monday, 05 January, 2009 17:23:52

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Username: system
Password:

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
Starting "SYSTEM"."SYS_EXPORT_TABLE_01": system/******** parfile=pfile_maxim_history_sel.par directory=d
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 964 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/AUDIT_OBJ
Processing object type TABLE_EXPORT/TABLE/TRIGGER
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
ORA-31693: Table data object "MAXIM"."HISTORY" failed to load/unload and is being skipped due to error:
ORA-06502: PL/SQL: numeric or value error
ORA-31605: the following was returned from LpxXSLSetTextVar in routine kuxslSetParam:
LPX-314: an internal failure occurred
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
/db1/oracle/dump/expdat.dmp
Job "SYSTEM"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 17:24:11

My parameter file was below,
$ cat pfile_maxim_history_sel.par
tables=maxim.history
query=maxim.history:'"where cdate between '01-JAN-09' and '03-JAN-09'"'

Cause of the Problem
In the data pump export operation, ORA-31693, ORA-06502, ORA-31605 and LPX-314 occurred because of incorrect use of single and double quotes in the QUERY parameter: in the parameter file, single quotes (') surround the double quotes (") of the query value.

Solution of the Problem
Just omit the single quotes (') surrounding the QUERY value:
$ cat pfile_maxim_history_sel.par
tables=maxim.history
query=maxim.history:"where cdate between '01-JAN-09' and '03-JAN-09'"

Now re-run the data pump export operation.
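
Note that if QUERY is passed on the command line instead of in a parameter file, the shell itself strips quotes, so they must be escaped. A sketch for a Unix shell (escaping rules differ per shell and on Windows):

$ expdp system tables=maxim.history query=maxim.history:\"where cdate between \'01-JAN-09\' and \'03-JAN-09\'\"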

Wednesday, December 31, 2008

ORA-39095: Dump file space has been exhausted

Problem Description
I ran my data pump export with a combination of multiple DUMPFILE arguments and the FILESIZE parameter, but it failed with ORA-39095 as below.

maximsg@TIGER> $expdp maximsg/a dumpfile=d:parti1.dmp,e:parti2.dmp,c:resit%U.dmp filesize=188417 tablespaces=users

Export: Release 10.2.0.1.0 - Production on Thursday, 01 January, 2009 12:23:30

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "MAXIMSG"."SYS_EXPORT_TABLESPACE_05": maximsg/******** dumpfile=d:parti1.dmp,e:parti2.dmp,c:resit%U.dmp filesize=188417 tablespace
s=users
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 20.12 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
ORA-39095: Dump file space has been exhausted: Unable to allocate 188416 bytes
Job "MAXIMSG"."SYS_EXPORT_TABLESPACE_05" stopped due to fatal error at 12:23:47

A variant of the above is shown below.
>expdp maximsg/a@tiger tablespaces=users filesize=200K directory=d dumpfile=b.dmp

Export: Release 10.2.0.1.0 - Production on Saturday, 03 January, 2009 22:29:34

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "MAXIMSG"."SYS_EXPORT_TABLESPACE_08": maximsg/********@tiger tablespaces=users filesize=200K directory=d dumpfile=b.d
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 20.12 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
ORA-39095: Dump file space has been exhausted: Unable to allocate 208896 bytes
Job "MAXIMSG"."SYS_EXPORT_TABLESPACE_08" stopped due to fatal error at 10:30:17

Note that the same errors are returned in both the first and second cases, but with different byte counts: in the first case it is unable to allocate 188416 bytes, and in the second 208896 bytes.

Cause of the Error
There are several causes for ORA-39095.
1)If the total dump file size specified in the data pump parameters is less than the size of the dump being produced, ORA-39095 is returned. In the second case FILESIZE is 200K and only one dump file is specified, so the dump produced can be no larger than 200K; if the dump to be produced is larger than 200K, the error above is returned.

Note that the %U specification for the dump file can expand up to 99 files.
If 99 files have been generated before the export has completed, ORA-39095 is again returned. If %U were specified in the second case, the maximum dump size would be 99*200K=19800K; a dump larger than 19800K would still raise the error.


2)In the first case the FILESIZE value is simply too small: it was defined as 188417 bytes (about 184K), and the error message shows that 188416 bytes could not be allocated. The solution is to allow one extra block; since dump file space is allocated in whole blocks, rounding FILESIZE up to 188K solves the problem.


maximsg@TIGER> $expdp maximsg/a dumpfile=d:part1.dmp,e:part2.dmp,c:rest%U.dmp filesize=188K tablespaces=users

Export: Release 10.2.0.1.0 - Production on Thursday, 01 January, 2009 12:25:09

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "MAXIMSG"."SYS_EXPORT_TABLESPACE_06": maximsg/******** dumpfile=d:part1.dmp,e:part2.dmp,c:rest%U.dmp filesize=188K tablespaces=use
rs
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 20.12 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "CR_2"."OUT_CDR_TEST" 310.9 KB 2166 rows
. . exported "CR_2"."OUT_CDR" 484.4 KB 2606 rows
. . exported "CR_2"."USER_IP" 126 KB 3403 rows
. . exported "CR_2"."OUT_CDR_TEMP" 33.72 KB 361 rows
. . exported "CR_2"."USERS" 42.57 KB 230 rows
. . exported "SCOTT"."DEPT" 5.656 KB 4 rows
. . exported "SCOTT"."EMP" 7.851 KB 15 rows
. . exported "SCOTT"."SALGRADE" 5.585 KB 5 rows
. . exported "SCOTT"."BONUS" 0 KB 0 rows
Master table "MAXIMSG"."SYS_EXPORT_TABLESPACE_06" successfully loaded/unloaded
******************************************************************************
Dump file set for MAXIMSG.SYS_EXPORT_TABLESPACE_06 is:
D:\PART1.DMP
E:\PART2.DMP
C:\REST01.DMP
C:\REST02.DMP
C:\REST03.DMP
C:\REST04.DMP
C:\REST05.DMP
Job "MAXIMSG"."SYS_EXPORT_TABLESPACE_06" successfully completed at 12:25:32


Solution of the Error
1)Increase the number of dump files so that the total dump file space specified in expdp exceeds the size of the dump the export will produce.
With the 'wildcard' specification (%U), the dump file set can expand up to 99 files.

2)Use a bigger filesize value.

3)Another option is to spread the dump over several directories:
dumpfile=dmpdir1:part1_%U.dmp, dmpdir2:part2_%U.dmp, dmpdir3:part3_%U.dmp

Tuesday, December 16, 2008

How to put dumps in multiple locations

You have a very big database and you want to take a data pump export of it. Suppose the dump size would be 100G, but no single partition on your disk has 100G free. You may, however, have more than 100G of free space spread over several partitions: for example, 40G free in one partition, 50G in another and 30G in a third.

You can then spread this 100G dump file over those partitions and take the dump. The combination of the two data pump export parameters DUMPFILE and FILESIZE makes this possible.

Suppose in /oradata1 40G is available.
in /oradata2 50G is available.
in /oradata3 30G is available.
And you have created directories dir1, dir2 and dir3 that represent /oradata1, /oradata2 and /oradata3 respectively.

If your approximate dump size is 100G, you would write your data pump export command as:
expdp user_name/pass dumpfile=dir1:part1.dmp, dir2:part2.dmp, dir3:part3.dmp, dir2:rest%U.dmp filesize=30G
In this case the dump files are created sequentially:
First, a 30G dump file named part1.dmp is created in /oradata1.
Then a 30G dump file named part2.dmp is created in /oradata2.
Then a 30G dump file named part3.dmp is created in /oradata3.
Finally, the remaining 10G is written to /oradata2 as rest01.dmp.

The %U causes file names to be generated by Oracle sequentially: rest01.dmp, rest02.dmp and so on.

Below is an example that takes a dump of tablespace USERS with each dump file sized 300K, spanning the C:\, D:\ and E:\ drives.

SQL> create or replace directory d as 'd:';

Directory created.

SQL> create or replace directory e as 'e:';

Directory created.

SQL> create directory c as 'c:';

Directory created.

maximsg@TIGER> $expdp maximsg/a dumpfile=d:part1.dmp,e:part2.dmp,c:rest%U.dmp filesize=300K tablespaces=users

Export: Release 10.2.0.1.0 - Production on Thursday, 01 January, 2009 12:16:15

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "MAXIMSG"."SYS_EXPORT_TABLESPACE_01": maximsg/******** dumpfile=d:part1.dmp,e:part2.dmp,c:rest%U.dmp filesize=300K tablespaces=use
rs
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 20.12 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "CR_2"."O_CDR_TEST" 310.9 KB 2166 rows
. . exported "CR_2"."O_CDR" 484.4 KB 2606 rows
. . exported "CR_2"."USER_IP" 126 KB 3403 rows
. . exported "CR_2"."O_CDR_TEMP" 33.72 KB 361 rows
. . exported "CR_2"."USERS" 42.57 KB 230 rows
. . exported "SCOTT"."DEPT" 5.656 KB 4 rows
. . exported "SCOTT"."EMP" 7.851 KB 15 rows
. . exported "SCOTT"."SALGRADE" 5.585 KB 5 rows
. . exported "SCOTT"."BONUS" 0 KB 0 rows
Master table "MAXIMSG"."SYS_EXPORT_TABLESPACE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for MAXIMSG.SYS_EXPORT_TABLESPACE_01 is:
D:\PART1.DMP
E:\PART2.DMP
C:\REST01.DMP
C:\REST02.DMP
C:\REST03.DMP
Job "MAXIMSG"."SYS_EXPORT_TABLESPACE_01" successfully completed at 12:16:33

For dumpfile=d:part1.dmp,e:part2.dmp,c:rest%U.dmp:
first, part1.dmp is created in the D: drive, sized 300K;
then part2.dmp is created in the E: drive, sized 300K;
then all remaining parts of the dump are created in the C: drive, each sized 300K (except possibly the last one, since the dump size need not be a multiple of 300K), named rest01.dmp, rest02.dmp and so on.


If you use original export, you control the size with the FILE and FILESIZE parameters: FILE takes a list of file names to be created, and FILESIZE sets the maximum size of each dump file.
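
A minimal sketch with hypothetical file names (in interactive mode, original exp prompts for an additional file name if the listed files fill up before the export completes):

$ exp user_name/pass full=y file=part1.dmp,part2.dmp,part3.dmp filesize=2GB log=full.log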

Expdp fails with ORA-39125, ORA-04031

Problem Description
I was performing a data pump export operation and it failed with ORA-39125 and ORA-04031 as below.
SQL> $expdp full=y dumpfile=fulldb.dmp directory=d userid=system/a

Export: Release 10.2.0.1.0 - Production on Tuesday, 16 December, 2008 17:07:44

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
FLASHBACK automatically enabled to preserve database integrity.
Starting "SYSTEM"."SYS_EXPORT_FULL_02": full=y dumpfile=fulldb.dmp directory=d userid=system/********
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
ORA-39125: Worker unexpected fatal error in KUPW$WORKER.GET_TABLE_DATA_OBJECTS while calling DBMS_METADATA.FETCH_XML_CLOB []
ORA-04031: unable to allocate 28 bytes of shared memory ("shared pool","SELECT /*+rule*/ SYS_XMLGEN(...","sql area","ub1[]: qkexrXformVal")

ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPW$WORKER", line 6235

----- PL/SQL Call Stack -----
object line object
handle number name
64E077B4 14916 package body SYS.KUPW$WORKER
64E077B4 6300 package body SYS.KUPW$WORKER
64E077B4 9120 package body SYS.KUPW$WORKER
64E077B4 1880 package body SYS.KUPW$WORKER
64E077B4 6861 package body SYS.KUPW$WORKER
64E077B4 1262 package body SYS.KUPW$WORKER
64CB4398 2 anonymous block

Job "SYSTEM"."SYS_EXPORT_FULL_02" stopped due to fatal error at 17:08:03

Cause of the Error
The ORA-04031 error is returned whenever Oracle attempts to allocate a large piece of contiguous memory in the shared pool and fails. Oracle first flushes all objects that are not currently in use from the pool, and the resulting free memory chunks are merged. If there is still no single chunk large enough to satisfy the request, ORA-04031 is returned.
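
Before resizing anything, you can see how much free (though not necessarily contiguous) memory the shared pool has by querying V$SGASTAT:

SQL> select pool, name, bytes from v$sgastat where pool='shared pool' and name='free memory';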

Since 10g, automatic shared memory management lets the DBA reserve a pool of shared memory that is used to allocate the shared pool, the buffer cache, the java pool and the large pool with the single parameter SGA_TARGET. So simply increasing SGA_TARGET will likely solve the problem.

Solution
Let's see the sga_target value.

SQL> show parameter sga_t

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
sga_target big integer 100M

And SGA_MAX_SIZE value,
SQL> show parameter sga_max
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
sga_max_size big integer 100M

We see these two parameter settings are low, so we increase them and restart the database. SGA_TARGET cannot exceed SGA_MAX_SIZE, and changing SGA_MAX_SIZE requires bouncing the database for the new value to take effect.

SQL> alter system set sga_max_size=300M scope=spfile;

System altered.

SQL> alter system set sga_target=300M scope=spfile;

System altered.

SQL> startup force
ORACLE instance started.

Total System Global Area 314572800 bytes
Fixed Size 1248768 bytes
Variable Size 96469504 bytes
Database Buffers 209715200 bytes
Redo Buffers 7139328 bytes
Database mounted.
Database opened.

SQL> show parameter sga_target

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
sga_target big integer 300M

Now the export operation runs smoothly, as the shared pool has enough memory for the operation.
SQL> $expdp full=y dumpfile=fulldb.dmp directory=d userid=system/a

Saturday, September 20, 2008

Expdp fails with ORA-31626, ORA-31633, ORA-06512, ORA-01031

Error Description
Whenever I run the expdp command to export my schema objects, it fails with ORA-31626, ORA-31633, ORA-06512 and ORA-01031.
Below is the output from my console.
SQL> host expdp test/t full=y directory=d dumpfile=a.dmp

Export: Release 10.2.0.1.0 - Production on Saturday, 20 September, 2008 14:58:53

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
ORA-31626: job does not exist
ORA-31633: unable to create master table "TEST.SYS_EXPORT_SCHEMA_05"
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT", line 863
ORA-01031: insufficient privileges

Cause of the Problem

As ORA-01031 indicates, the test user performing the data pump export does not have sufficient privileges.

To see the privileges currently active for the user, issue:
SQL> select * from session_privs;
PRIVILEGE
----------------------------------------
CREATE SESSION
BACKUP ANY TABLE
SELECT ANY TABLE
SELECT ANY SEQUENCE
EXECUTE ANY PROCEDURE
CREATE ANY DIRECTORY
EXECUTE ANY TYPE
ADMINISTER RESOURCE MANAGER
RESUMABLE
SELECT ANY DICTIONARY
READ ANY FILE GROUP

11 rows selected.


Solution of the Problem
To perform an export operation, at a minimum the user must have the CREATE TABLE system privilege, because data pump must create its master table. From the output above we see the user does not have the "create table" privilege. To avoid the problem there are two alternative solutions.

Solution 1)As a DBA user, grant the CREATE TABLE privilege to the user who will perform the data pump export:

SQL> conn system/a
SQL> grant create table to test;


Then, as the test user, perform your operation.
SQL> host expdp test/t full=y directory=d dumpfile=a.dmp

Solution 2)Run the export operation as a different user who has the CREATE TABLE privilege, for example the SYSTEM user:
SQL>host expdp system/a full=y directory=d dumpfile=a.dmp

Related Documents

Data pump export fails with ORA-39000, ORA-31641,ORA-27038
Export data to a previous time or point-in-time

Saturday, July 12, 2008

Data pump export fails with ORA-39000, ORA-31641,ORA-27038

Error Description:
---------------------------------------

Data pump export fails with following error.
Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31641: unable to create dump file "/backup1/d.dmp"
ORA-27038: created file already exists
Additional information: 1

Cause of the problem:
-------------------------------

The DUMPFILE parameter was given the name d.dmp and the DIRECTORY parameter was set to /backup1, and a file named d.dmp already exists in /backup1. Oracle data pump raises an error if the file name provided with DUMPFILE already exists in the destination directory.

Solution of The problem:
------------------------------------

Either rename the existing OS file in the specified directory and perform the export:
$mv /backup1/d.dmp /backup1/d2.dmp
Or, if the existing file is no longer needed, remove it from the OS and perform the export:
SQL> !rm /backup1/d.dmp

Or, change the DIRECTORY or DUMPFILE parameter so that the directory-plus-file-name combination no longer conflicts with an existing OS file:

$expdp dumpfile=d3.dmp


Thursday, May 15, 2008

Export data to a previous time or point-in-time

From Oracle 9i and 10g, the export utilities provide two parameters, FLASHBACK_SCN and FLASHBACK_TIME, that let you take a dump as of a previous point in time. This allows you, for example, to export the original data of a table in which rows have been updated erroneously or deleted by mistake.

To illustrate the use of these two parameters, I use an example with a table named TEST_NUM owned by user ARJU (password A).

1)Determine the current SCN of the database.

SQL> SELECT DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER FROM DUAL;

GET_SYSTEM_CHANGE_NUMBER
------------------------
1252872

2)Note the contents of table test_num.
SQL> SELECT * FROM TEST_NUM;

A
----------
100
100
5
7

3)Update the table test_num.
SQL> UPDATE TEST_NUM SET A=A+1;

4 rows updated.

SQL> COMMIT;
Commit complete.

SQL> SELECT * FROM TEST_NUM;
A
----------
101
101
6
8
4)Export the table test_num with the parameter FLASHBACK_SCN set to 1252872.
The export is then done as of SCN 1252872; changes after that SCN are not reflected in the dump file.

SQL> HOST expdp ARJU/A TABLES=TEST_NUM FLASHBACK_SCN=1252872

Export: Release 10.2.0.1.0 - Production on Friday, 16 May, 2008 2:50:09
Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "ARJU"."SYS_EXPORT_TABLE_01": ARJU/******** TABLES=TEST_NUM FLASHBACK_SCN=1252872
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "ARJU"."TEST_NUM" 4.937 KB 4 rows
Master table "ARJU"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for ARJU.SYS_EXPORT_TABLE_01 is:
/oracle/app/oracle/product/10.2.0/db_1/admin/dbase/dpdump/expdat.dmp
Job "ARJU"."SYS_EXPORT_TABLE_01" successfully completed at 02:50:17


Alternatively, you can use the FLASHBACK_TIME parameter. On Unix:
flashback_time=\"TO_TIMESTAMP\(\'16-05-2008 11:21:42\', \'DD-MM-YYYY HH24:MI:SS\'\)\"

On Windows:
flashback_time=\"TO_TIMESTAMP('16-05-2008 13:24:26', 'DD-MM-YYYY HH24:MI:SS')\"

Use a backslash (\) before each special character so the shell does not interpret it.
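
Putting it together on Unix, the full command would look like this (a sketch based on the escaping shown above):

SQL> HOST expdp ARJU/A TABLES=TEST_NUM flashback_time=\"TO_TIMESTAMP\(\'16-05-2008 11:21:42\', \'DD-MM-YYYY HH24:MI:SS\'\)\"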

5)Drop the table and import the dump file.

Note that I provided no location, so defaults are used: since I gave no DIRECTORY or DUMPFILE parameter, the utilities use the default directory DATA_PUMP_DIR, whose default location is $ORACLE_HOME/admin/$ORACLE_SID/dpdump/ (/oracle/app/oracle/product/10.2.0/db_1/admin/dbase/dpdump/), and the default dump file name expdat.dmp.

SQL> DROP TABLE TEST_NUM;

Table dropped.

SQL> HOST impdp ARJU/A

Import: Release 10.2.0.1.0 - Production on Friday, 16 May, 2008 2:51:15

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Master table "ARJU"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "ARJU"."SYS_IMPORT_FULL_01": ARJU/********
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "ARJU"."TEST_NUM" 4.937 KB 4 rows
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "ARJU"."SYS_IMPORT_FULL_01" successfully completed at 02:51:20

6)Now look at the data in TEST_NUM.

SQL> SELECT * FROM TEST_NUM;

A
----------
100
100
5
7

Saturday, April 12, 2008

Original Export and Import Versus Data Pump Export and Import

If you have worked with pre-10g databases, you are probably familiar with Oracle's exp/imp utilities. Oracle 10g introduces a new feature called Data Pump export and import. Data Pump export/import differs from original export/import as listed below.

1)Expdp/Impdp are self-tuning. Tuning parameters that were used in original Export and Import, such as BUFFER and RECORDLENGTH, are neither required nor supported by Data Pump Export and Import.

2)Data Pump represents metadata in the dump file set as XML documents rather than as DDL commands.

3)Impdp/Expdp use parallel execution rather than a single stream of execution, for improved performance.

4)In Data Pump, expdp full=y followed by impdp schemas=prod is equivalent to expdp schemas=prod followed by impdp full=y, whereas original export/import does not always exhibit this behavior.

5)Expdp/Impdp access files on the server rather than on the client.

6)Expdp/Impdp operate on a group of files called a dump file set rather than on a single sequential dump file.

7)Sequential media, such as tapes and pipes, are not supported by Oracle Data Pump, whereas in original export/import we could directly compress the dump by using pipes.

8)The Data Pump method for moving data between different database versions is different than the method used by original Export/Import.

9)When you are importing data into an existing table using either APPEND or TRUNCATE, if any row violates an active constraint, the load is discontinued and no data is loaded. This is different from original Import, which logs any rows that are in violation and continues with the load.

10)Expdp/Impdp consume more undo tablespace than original Export and Import.

11)If a table has compression enabled, Data Pump Import attempts to compress the data being loaded. The original Import utility, by contrast, loaded the data uncompressed even if the table had compression enabled.

12)Data Pump supports character set conversion for both direct path and external tables. Most of the restrictions that exist for character set conversions in the original Import utility do not apply to Data Pump. The one case in which character set conversions are not supported under the Data Pump is when using transportable tablespaces.

13)There is no option to merge extents when you re-create tables. In original Import, this was provided by the COMPRESS parameter. Instead, extents are reallocated according to storage parameters for the target table.
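As a quick illustration of points 3 and 6 above, a parallel Data Pump export writes to a dump file set, and the %U substitution variable in DUMPFILE generates one file per parallel worker. A minimal sketch, assuming a directory object named DP_DIR already exists:

expdp system/password FULL=y PARALLEL=4 DIRECTORY=DP_DIR DUMPFILE=full_%U.dmp LOGFILE=full_exp.log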

Wednesday, April 9, 2008

Verify Physical Data Structure in Oracle.

With an Oracle database it is sometimes necessary to check whether the physical data structure is intact, for example to detect data block corruption. There are different ways to do this.

I will demonstrate two procedures for validating the physical data structure.

1)With Export/Import Method.
2)With DBVERIFY Utility.

I have discussed the first method in another of my posts. Here I will explain the second one.

DBVERIFY Utility
-------------------------


The DBVERIFY utility can be used on offline or online databases, as well as on backup files. We can use it to ensure that a backup database (or datafile) is valid before it is restored, and to detect corrupted blocks.

There are two command-line interfaces to the DBVERIFY utility.

A)To check disk blocks of a single datafile.
B)To check segment.

The utility is invoked with the dbv command.

A)To check disk blocks of a single datafile.
--------------------------------------------------

Invoke the dbv utility with the datafile name in the file parameter. For example,

bash-3.00$ dbv file=/shared_disk/oradata/ARJU/data02.dbf feedback=1000

DBVERIFY: Release 10.2.0.1.0 - Production on Wed Apr 9 17:16:00 2008

Copyright (c) 1982, 2005, Oracle. All rights reserved.

DBVERIFY - Verification starting : FILE = /shared_disk/oradata/ARJU/data02.dbf
....................................................................

...
Page 80491 is marked corrupt
Corrupt block relative dba: 0x02013a6b (file 8, block 80491)
Bad check value found during dbv:
Data in bad block:
type: 6 format: 2 rdba: 0x02013a6b
last change scn: 0x0000.0573a224 seq: 0x1 flg: 0x04
spare1: 0x0 spare2: 0x0 spare3: 0x0
consistency value in tail: 0xa2240601
check value in block header: 0xf88
computed block checksum: 0x10
.
.
.
.

DBVERIFY - Verification complete

Total Pages Examined : 3045632
Total Pages Processed (Data) : 1620222
Total Pages Failing (Data) : 1
Total Pages Processed (Index): 586
Total Pages Failing (Index): 0
Total Pages Processed (Other): 1384939
Total Pages Processed (Seg) : 0
Total Pages Failing (Seg) : 0
Total Pages Empty : 39848
Total Pages Marked Corrupt : 37
Total Pages Influx : 0
Highest block SCN : 93175751 (0.93175751)


Type dbv help=y to view the available options.

Some parameters:
--------------------
USERID:
This parameter is only necessary when the files being verified are ASM files.
FEEDBACK:
Displays a progress report to the terminal in the form of dots (.). If feedback=100 is set, a dot (.) is displayed after every 100 blocks verified.

B)To check segment.
--------------------------


In this mode, DBVERIFY enables you to specify either a table segment or an index segment for verification. During verification, the segment is locked; if the specified segment is an index, the parent table is locked as well.

You can invoke the dbv command to validate a segment like this,

dbv USERID=prod/prod SEGMENT_ID=2.2.890


SEGMENT_ID: The id of the segment that you want to verify. It is composed of the tablespace ID number (tsn), segment header file number (segfile), and segment header block number (segblock).

You can get this information from SYS_USER_SEGS. The required columns are TABLESPACE_ID, HEADER_FILE, and HEADER_BLOCK. SYSDBA privilege is required to query SYS_USER_SEGS.
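For example, to build the SEGMENT_ID of a particular segment you could run something like this as SYSDBA (a sketch; it assumes SYS_USER_SEGS also exposes a SEGMENT_NAME column, as USER_SEGMENTS does):

SQL> SELECT TABLESPACE_ID||'.'||HEADER_FILE||'.'||HEADER_BLOCK AS SEGMENT_ID FROM SYS_USER_SEGS WHERE SEGMENT_NAME='TEST_NUM';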

Related Documents
-----------------------
To verify Physical data corruption as well as logical data corruption you can also use
RMAN Validation

Sunday, April 6, 2008

Extract DDL from Dump/ Read contents of dumpfile

I have been asked several times on different forums how to extract DDL from a dump file.
In a nutshell, I will show the procedure for extracting DDL from a dump without importing any data.

Before proceeding you need to know which utility the dump was taken with: exp or expdp. Each has a different procedure. I demonstrate both below, in two sections.

A) Dump was taken by exp.
B) Dump was taken by expdp.

A) Dump was taken by exp.

1) You can simply use any file viewer command and search for specific strings in the contents. I tried the cat, less and strings commands and they all worked.

For example, on unix: strings a.dmp | grep "CREATE TABLE"
or: cat dumpfile_name | grep "YOUR SEARCH STRING"

2)You can invoke the imp command with the dump file name and the show=y option. The DDL commands inside the dump will be shown on the screen. For example,

imp userid/password file=dumpfilename show=y

You can include log=filename with the show=y option to save the contents to a file. For example,
imp userid/password file=dumpfilename show=y log=logfile.txt

3)My preferred method is to use indexfile=file.txt with the imp command. It is better than the show=y option: the output of indexfile=file.txt is almost a ready-to-run script (at most you remove the REM keywords), whereas the output of show=y cannot easily be reused because double quotes land in awkward places. For example,
imp userid/password file=dumpfilename indexfile=file.txt

B) Dump was taken by expdp.

1)With impdp you use sqlfile=file.txt, and the ddl commands will be written to file.txt. For example,
impdp userid/password dumpfile=dumpfilename sqlfile=file.txt

Related Documents
How to get different object_type creation scripts
How to get DDL from database objects

Saturday, April 5, 2008

How does one use the import/export utilities?

exp/imp allow you to transfer data across databases that reside on different hardware platforms and/or different Oracle versions. If the data is exported on a system with a different Oracle version than the one on which it is imported, the imp used must be the same version as, or newer than, the exp used. That means, if something needs to be exported from 10g into 9i, it must be exported with 9i's exp.

In order to use exp and imp, the catexp.sql script must have been run. catexp.sql basically creates the exp_full_database and imp_full_database roles. There is no need to run catexp.sql every time; it is a one-time task, and it has already been executed if you created your database with dbca. So don't bother with it if you created the database with dbca.
It is found under $ORACLE_HOME/rdbms/admin.
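If you ever do need to run it manually, connect as SYS and execute the script (in SQL*Plus the ? shorthand expands to $ORACLE_HOME):

sqlplus / as sysdba
SQL> @?/rdbms/admin/catexp.sql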

Prerequisites
--------

One must have the create session privilege to be able to use exp. If objects of another user's schema need to be exported, the EXP_FULL_DATABASE role is required.

Export/Import Modes:
--------------------------------------------


1)Full database export:
--------------------------

The EXP_FULL_DATABASE and IMP_FULL_DATABASE roles, respectively, are needed to perform a full export and a full import.
Use the full export parameter (full=y) for a full export.
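For example (account and file names are illustrative):

exp system/password FULL=y FILE=full.dmp LOG=full_exp.log
imp system/password FULL=y FILE=full.dmp LOG=full_imp.log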

2)Tablespace:
--------------------

Use the tablespaces export parameter for a tablespace export. It only applies to transportable tablespaces.
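A sketch of a transportable tablespace export (the tablespace must be made read only first; the tablespace name ts1 is illustrative, and the quoting shown is for unix shells):

SQL> ALTER TABLESPACE ts1 READ ONLY;

exp \'sys/password as sysdba\' TRANSPORT_TABLESPACE=y TABLESPACES=ts1 FILE=ts1.dmp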

3)Schema:
--------------

This mode can be used to export and import all objects that belong to a user. Use the owner export parameter and the fromuser import parameter for a user (owner) export/import.
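For example (user names are illustrative):

exp scott/tiger OWNER=scott FILE=scott.dmp
imp system/password FROMUSER=scott TOUSER=arju FILE=scott.dmp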

4)Table:
---------------

Specific tables (and partitions) can be exported/imported with table export mode.
Use the tables export parameter for a table export.


Example:
-----------
The following examples demonstrate how the imp/exp utilities can be used:

exp scott/tiger file=emp.dmp log=emp.log tables=emp rows=yes indexes=no
exp scott/tiger file=emp.dmp tables=(emp,dept)

imp scott/tiger file=emp.dmp full=yes
imp scott/tiger file=emp.dmp fromuser=scott touser=scott tables=dept


Using a parameter file:

exp userid=scott/tiger@orcl parfile=export.txt


... where export.txt contains:

BUFFER=100000
FILE=account.dmp
FULL=n
OWNER=scott
GRANTS=y
COMPRESS=y


To see DDL within Dump:
---------------------

To write ddl into a file,
imp userid/pass file=a.dmp indexfile=myfile.txt

or, to see the ddl on the screen,
imp userid/pass file=a.dmp show=y

Related Documents:
--------------------

Data Pump Export/Import
Export/Import from Enterprise Manager
Original Export/Import

Tuesday, April 1, 2008

Export and Import from Enterprise Manager

If you want to do an operation like export or import from Enterprise Manager, you have to be an administrator-type user of EM. To set this up, log in as system, then go to Setup ---> Administrators ---> Create.

Now log in with the new user and go to Maintenance -----> Export to Export Files.
Give the OS user id and password in the Host Credentials section.

Now create a directory and test it. Make sure oracle has permission to write to that directory.
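For example, something like this run as a privileged user creates and grants access to a directory object (the path and names are illustrative):

SQL> CREATE OR REPLACE DIRECTORY exp_dir AS '/u01/app/oracle/exp_dir';
SQL> GRANT READ, WRITE ON DIRECTORY exp_dir TO exp_user;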

Import is just as simple: Maintenance -----> Import from Export Files. Then select the directory and give the file name.

Related Documents:
--------------------

Data Pump Export/Import
Export/Import from Enterprise Manager
Original Export/Import