Problem Description
I ran a Data Pump export with multiple dump files listed in the DUMPFILE argument together with the FILESIZE parameter, but it failed with ORA-39095 as shown below.
maximsg@TIGER> $expdp maximsg/a dumpfile=d:parti1.dmp,e:parti2.dmp,c:resit%U.dmp filesize=188417 tablespaces=users
Export: Release 10.2.0.1.0 - Production on Thursday, 01 January, 2009 12:23:30
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "MAXIMSG"."SYS_EXPORT_TABLESPACE_05": maximsg/******** dumpfile=d:parti1.dmp,e:parti2.dmp,c:resit%U.dmp filesize=188417 tablespaces=users
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 20.12 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
ORA-39095: Dump file space has been exhausted: Unable to allocate 188416 bytes
Job "MAXIMSG"."SYS_EXPORT_TABLESPACE_05" stopped due to fatal error at 12:23:47
A variant of the above is shown below.
>expdp maximsg/a@tiger tablespaces=users filesize=200K directory=d dumpfile=b.dmp
Export: Release 10.2.0.1.0 - Production on Saturday, 03 January, 2009 22:29:34
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "MAXIMSG"."SYS_EXPORT_TABLESPACE_08": maximsg/********@tiger tablespaces=users filesize=200K directory=d dumpfile=b.d
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 20.12 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
ORA-39095: Dump file space has been exhausted: Unable to allocate 208896 bytes
Job "MAXIMSG"."SYS_EXPORT_TABLESPACE_08" stopped due to fatal error at 10:30:17
Note that both cases return the same error, but the number of bytes that could not be allocated differs: 188,416 bytes in the first case and 208,896 bytes in the second.
Cause of the Error
There are many causes for which ORA-39095 is raised.
1)ORA-39095 is returned if the total dump produced by the export is larger than the combined size of the dump files specified in the Data Pump parameters. In the second case, FILESIZE is specified as 200K and only one dump file is named, so the maximum dump file set size is 200K. If the dump to be produced is greater than 200K, the above error is returned.
Note that %U specification for the dump file can expand up to 99 files.
If 99 files have been generated before the export completes, it again returns the ORA-39095 error. If %U had been specified in the second case, the maximum dump file set size would be 99*200K=19800K; if the dump to be produced were greater than 19800K, the error would return.
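The capacity arithmetic above can be sanity-checked with a quick calculation; a minimal shell sketch using the 200K FILESIZE from the second case:

```shell
# With a single %U template, Data Pump can generate at most 99 files,
# each capped at FILESIZE, so the maximum dump set size is 99 * FILESIZE.
filesize_kb=200
max_files=99
echo "$(( max_files * filesize_kb ))K"   # prints 19800K
```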
2)In the first case, the FILESIZE value itself is simply too small, hence the error. FILESIZE was defined as 188417 bytes (just over 184K), and the error message shows it was unable to allocate 188416 bytes (exactly 184K). The solution is to allow room for one extra block: with a 4K granule (under the standard 8K block size), raising FILESIZE to 188K solves the problem.
maximsg@TIGER> $expdp maximsg/a dumpfile=d:part1.dmp,e:part2.dmp,c:rest%U.dmp filesize=188K tablespaces=users
Export: Release 10.2.0.1.0 - Production on Thursday, 01 January, 2009 12:25:09
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "MAXIMSG"."SYS_EXPORT_TABLESPACE_06": maximsg/******** dumpfile=d:part1.dmp,e:part2.dmp,c:rest%U.dmp filesize=188K tablespaces=users
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 20.12 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "CR_2"."OUT_CDR_TEST" 310.9 KB 2166 rows
. . exported "CR_2"."OUT_CDR" 484.4 KB 2606 rows
. . exported "CR_2"."USER_IP" 126 KB 3403 rows
. . exported "CR_2"."OUT_CDR_TEMP" 33.72 KB 361 rows
. . exported "CR_2"."USERS" 42.57 KB 230 rows
. . exported "SCOTT"."DEPT" 5.656 KB 4 rows
. . exported "SCOTT"."EMP" 7.851 KB 15 rows
. . exported "SCOTT"."SALGRADE" 5.585 KB 5 rows
. . exported "SCOTT"."BONUS" 0 KB 0 rows
Master table "MAXIMSG"."SYS_EXPORT_TABLESPACE_06" successfully loaded/unloaded
******************************************************************************
Dump file set for MAXIMSG.SYS_EXPORT_TABLESPACE_06 is:
D:\PART1.DMP
E:\PART2.DMP
C:\REST01.DMP
C:\REST02.DMP
C:\REST03.DMP
C:\REST04.DMP
C:\REST05.DMP
Job "MAXIMSG"."SYS_EXPORT_TABLESPACE_06" successfully completed at 12:25:32
Solution of the Error
1)Increase the number of dump files so that the total dump file space specified in expdp is larger than the dump the export operation produces.
With the 'wildcard' specification (%U) the dump file can expand up to 99 files.
2)Use a bigger filesize value.
3)Another option is to use,
dumpfile=dmpdir1:part1_%U.dmp, dmpdir2:part2_%U.dmp, dmpdir3:part3_%U.dmp
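These options can also be combined. A hypothetical parameter file (the directory names follow the fragment above; the 1G FILESIZE value is illustrative), which could be run as expdp maximsg/a parfile=exp_users.par:

```
tablespaces=users
filesize=1G
dumpfile=dmpdir1:part1_%U.dmp,dmpdir2:part2_%U.dmp,dmpdir3:part3_%U.dmp
```

With %U templates in three directories, the dump set can grow to many files per directory before space is exhausted.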
Wednesday, December 31, 2008
Interact with the executing data pump jobs in oracle
Oracle Data Pump export jobs run on the server machine, whether you launch them from a remote client or on the server itself. Even if you close your terminal window, the export keeps running in the background on the server. You may wish to control such a background job: check its status, see which objects it is currently processing, or kill it.
In order to do that first be sure which jobs are running now and see the state by querying from dba_datapump_jobs view.
SQL> select job_name,state from dba_datapump_jobs;
JOB_NAME STATE
------------------------------ ------------------------------
SYS_EXPORT_SCHEMA_01 EXECUTING
You may also wish to see other fields of the view like owner_name, operation etc.
Now, in order to go to the interactive mode of an executing job, just ATTACH to it as below,
SQL> $expdp maximsg/a attach=SYS_EXPORT_SCHEMA_01
Export: Release 10.2.0.1.0 - Production on Thursday, 01 January, 2009 11:43:49
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Job: SYS_EXPORT_SCHEMA_01
Owner: MAXIMSG
Operation: EXPORT
Creator Privs: FALSE
GUID: BF0614AD15254C7B964B78333B7D9E1A
Start Time: Thursday, 01 January, 2009 11:38:30
Mode: SCHEMA
Instance: tiger
Max Parallelism: 1
EXPORT Job Parameters:
Parameter Name Parameter Value:
CLIENT_COMMAND maximsg/******** dumpfile=d:part1.dmp,e:part2.dmp,data_pump_dir%U.dmp
State: EXECUTING
Bytes Processed: 0
Current Parallelism: 1
Job Error Count: 0
Dump File: E:\oracle\product\10.2.0\admin\tiger\dpdump\data_pump_dir%u.dmp
Dump File: D:\PART1.DMP
bytes written: 704,512
Dump File: E:\PART2.DMP
bytes written: 4,096
Dump File: E:\ORACLE\PRODUCT\10.2.0\ADMIN\TIGER\DPDUMP\DATA_PUMP_DIR01.DMP
bytes written: 4,096
Worker 1 Status:
State: EXECUTING
Object Schema: MAXIMSG
Object Name: ACCOUNTING
Object Type: SCHEMA_EXPORT/TABLE/TABLE_DATA
Completed Objects: 1
Total Objects: 171
Completed Rows: 8,169,569
Worker Parallelism: 1
Now you have reached interactive mode. To go back to command-line (logging) mode, just type continue_client.
Export> continue_client
In order to kill the job that is executing,
Export> kill_job
Are you sure you wish to stop this job ([yes]/no): yes
In this way, through the ATTACH parameter, you can interact with a running job. You can interact with a Data Pump import operation similarly.
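Besides CONTINUE_CLIENT and KILL_JOB, a few other standard interactive-mode commands are worth knowing (shown as a sketch):

```
Export> status          -- show detailed job and worker status
Export> stop_job        -- stop the job but keep the master table, so it can be restarted
Export> start_job       -- restart a stopped job after re-attaching
Export> parallel=2      -- change the degree of parallelism of the running job
```

Unlike KILL_JOB, STOP_JOB preserves the master table, so the job can be resumed later with START_JOB.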
Monday, December 29, 2008
ORA-02070: database does not support in this context
Problem Description
I wanted to compare the columns of the HISTORY table between two databases using a database link as below, and it resulted in error ORA-02070.
SQL> select * from cols where table_name='HISTORY' minus select * from cols@tiger1.net where table_name='HISTORY';
select * from cols where table_name='HISTORY' minus select * from cols@tiger1.net where table_name='HISTORY'
*
ERROR at line 1:
ORA-02070: database TIGER1.NET does not support operator NLS_CHARSET_DECL_LEN
in this context
I found that the COLS view contains a LONG column, which causes the error above. If I query only the LONG column data_default, the same error returns with a different message text.
SQL> select data_default from cols minus select data_default from cols@tiger1.net;
select data_default from cols minus select data_default from cols@tiger1.net
*
ERROR at line 1:
ORA-02070: database does not support in this context
Experiment and cause of the Error
The reason behind the error is the LONG datatype of the data_default column. Whenever you access a LONG column of COLS over the link, the error above returns. However, when I created my own table with a LONG column as an experiment, I did not get this error; instead it returned ORA-00997. I conclude that system-defined tables/views/synonyms throw ORA-02070, while user-defined ones throw ORA-00997.
For experiment,
1)Create a table t with only a LONG column in both databases, then try a MINUS operation over the database link.
SQL> create table t(a long);
Table created.
SQL> select * from t minus select * from t@tiger1.net;
select * from t minus select * from t@tiger1.net
*
ERROR at line 1:
ORA-00997: illegal use of LONG datatype
ORA-02063: preceding line from TIGER1.NET
So, to avoid the error, simply don't perform any operation on a LONG column through a database link.
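If you do need the LONG data from the remote site, one workaround is the legacy SQL*Plus COPY command, which, unlike ordinary SQL over a database link, supports LONG. A sketch using the table and link from the experiment above (t_copy and t_clob are hypothetical names; SET LONG must be large enough, since COPY truncates LONG values to that limit):

```
SQL> SET LONG 2000000000
SQL> COPY FROM maximsg/a@tiger1.net CREATE t_copy USING SELECT * FROM t;
SQL> -- LONG cannot be used in MINUS even locally; convert it to a CLOB first:
SQL> CREATE TABLE t_clob AS SELECT TO_LOB(a) a FROM t_copy;
```

Once the data is in a CLOB it can be compared programmatically, for example with DBMS_LOB.COMPARE.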
Tuesday, December 23, 2008
Understanding Execution Plan Statistics
Below is the output of an execution plan statistics.
SQL> set autot trace statistics
SQL> select * from tab;
107 rows selected.
Statistics
----------------------------------------------------------
8 recursive calls
0 db block gets
898 consistent gets
0 physical reads
0 redo size
3459 bytes sent via SQL*Net to client
458 bytes received via SQL*Net from client
9 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
107 rows processed
You may sometimes want to know what these fields indicate. Below are the details.
1)recursive calls: To process SQL statements, Oracle Database maintains internal tables. Sometimes processing a statement requires changes to these internal tables and sometimes not. When Oracle Database needs to make a change to these tables, it generates an internal SQL statement, which in turn generates a recursive call.
2)db block gets: Number of times a CURRENT block was requested.
3)consistent gets: Number of times a consistent read was requested for a block. Together with db block gets, this makes up the logical reads: how many blocks had to be accessed to process the query.
4)physical reads: Total number of data blocks read from disk. This number equals the value of "physical reads direct" plus all reads into the buffer cache.
5)redo size: Total amount of redo generated, in bytes, while processing the statement.
6)bytes sent via SQL*Net to client: Total number of bytes sent to the client from the foreground processes.
7)bytes received via SQL*Net from client: Total number of bytes received from the client over Oracle Net.
8)SQL*Net round-trips to/from client: Total number of Oracle Net messages sent to and received from the client.
9)sorts (memory): Number of sort operations that were performed completely in memory and did not require any disk writes.
10)sorts (disk): Number of sort operations that required at least one disk write.
11)rows processed: Number of rows processed during the operation.
Controlling the Autotrace Report in sql*plus
In SQL*Plus, the AUTOTRACE facility shows a report on the execution of successful SQL DML statements (that is, SELECT, DELETE, UPDATE and INSERT), which is useful for monitoring and tuning the performance of these statements.
You can control the report by setting the AUTOTRACE system variable.
Following are the available AUTOTRACE settings.
1)SET AUTOTRACE OFF: No AUTOTRACE report is generated. This is the default. Result of a query will be displayed on the console as usual.
2)SET AUTOTRACE ON EXPLAIN: The AUTOTRACE report shows the optimizer execution path along with query output and note but does not show statistics.
3)SET AUTOTRACE ON STATISTICS: The AUTOTRACE report shows the SQL statement execution statistics along with query output but does not display execution plan.
4)SET AUTOTRACE ON: The AUTOTRACE report includes both the optimizer execution path and the SQL statement execution statistics along with query output.
5)SET AUTOTRACE TRACEONLY: Report displays both optimizer execution path as well as execution statistics but does not display query output. If STATISTICS is enabled, query data is still fetched, but not printed.
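The TRACEONLY setting also accepts the EXPLAIN and STATISTICS keywords individually, which is handy when you want only one part of the report; a short sketch:

```
SQL> SET AUTOTRACE TRACEONLY EXPLAIN     -- plan only; a SELECT is parsed but not executed
SQL> SET AUTOTRACE TRACEONLY STATISTICS  -- statistics only; rows are fetched but not displayed
SQL> SET AUTOTRACE OFF                   -- back to normal query output
```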
Related Documents:
-------------------------------------------------------
How to set Environmental variable to SQL*Plus permanently
Sunday, December 21, 2008
Formatting SQL*Plus Reports Part 2
Computing Summary of a column
With the combination of the BREAK and COMPUTE commands you can perform several calculations on SQL reports in SQL*Plus.
Remember that the COMPUTE command has no effect without a corresponding BREAK command.
Below is the list of SQL*Plus COMPUTE functions and their effects.
1)SUM: Sum of the values in the column.
2)MINIMUM: Minimum value in the column.
3)MAXIMUM: Maximum value in the column.
4)AVG: Average of the values in the column.
5)STD: Standard deviation of the values in the column.
6)VARIANCE: Variance of the values in the column.
7)COUNT: Number of non-null values in the column.
8)NUMBER: Number of rows in the column.
Let's look at our data,
SQL> select dept_id,name,salary from emp;
DEPT_ID NAME SALARY
---------- ------------ ----------
10 Arju 60000
10 Ershad 22000
10 Dulal 40000
20 Riaz 35000
20 Hasib 60000
30 Raihan 25000
6 rows selected.
Now we wish to compute the total of SALARY by department. To do that use,
SQL> break on dept_id
SQL> compute sum of salary on dept_id
SQL> /
DEPT_ID NAME SALARY
---------- ------------ ----------
10 Arju 60000
Ershad 22000
Dulal 40000
********** ----------
sum 122000
20 Riaz 35000
Hasib 60000
********** ----------
sum 95000
30 Raihan 25000
********** ----------
sum 25000
6 rows selected.
Note that the word 'sum' appears at every break. If you don't want to print it, do as follows,
SQL> COLUMN DUMMY NOPRINT;
SQL> COMPUTE SUM OF SALARY ON DUMMY;
SQL> BREAK ON DUMMY SKIP 1;
SQL> select dept_id DUMMY,dept_id,name,salary from emp;
DEPT_ID NAME SALARY
---------- ------------ ----------
10 Arju 60000
10 Ershad 22000
10 Dulal 40000
----------
122000
20 Riaz 35000
20 Hasib 60000
----------
95000
30 Raihan 25000
----------
25000
6 rows selected.
To compute the sum of salaries only at the end of the report,
SQL> COLUMN DUMMY NOPRINT;
SQL> COMPUTE SUM OF SALARY ON DUMMY;
SQL> BREAK ON DUMMY;
SQL> SELECT NULL DUMMY,DEPT_ID,NAME,SALARY FROM EMP;
DEPT_ID NAME SALARY
---------- ------------ ----------
10 Arju 60000
10 Ershad 22000
10 Dulal 40000
20 Riaz 35000
20 Hasib 60000
30 Raihan 25000
----------
242000
6 rows selected.
To calculate the grand total of salary and give it a label, do,
SQL> BREAK ON REPORT
SQL> COMPUTE SUM LABEL TOTAL OF SALARY ON REPORT
SQL> select dept_id,name,salary from emp;
DEPT_ID NAME SALARY
---------- ------------ ----------
10 Arju 60000
10 Ershad 22000
10 Dulal 40000
20 Riaz 35000
20 Hasib 60000
30 Raihan 25000
----------
TOTAL 242000
6 rows selected.
To compute both the average and the sum of salaries per department, do,
SQL> BREAK ON DEPT_ID
SQL> COMPUTE AVG SUM OF SALARY ON DEPT_ID
SQL> select dept_id,name,salary from emp where dept_id=10;
DEPT_ID NAME SALARY
---------- ------------ ----------
10 Arju 60000
Ershad 22000
Dulal 40000
********** ----------
avg 40666.6667
sum 122000
To see current compute settings,
SQL> compute
COMPUTE sum LABEL 'sum' OF SALARY ON DUMMY
COMPUTE sum LABEL 'TOTAL' OF SALARY ON REPORT
COMPUTE avg LABEL 'avg' sum LABEL 'sum' OF SALARY ON DEPT_ID
To remove all computes definitions,
SQL> clear compute
computes cleared
Related Documents:
-------------------------------------------------------
How to set Environmental variable to SQL*Plus permanently
Wednesday, December 17, 2008
Troubleshoot ORA-02292, ORA-02449 and ORA-02266
SQL> delete from task;
delete from task
*
ERROR at line 1:
ORA-02292: integrity constraint (OMS.TASKUSER_TASK) violated - child record
found
SQL> drop table task;
drop table task
*
ERROR at line 1:
ORA-02449: unique/primary keys in table referenced by foreign keys
SQL> truncate table task;
truncate table task
*
ERROR at line 1:
ORA-02266: unique/primary keys in table referenced by enabled foreign keys
To get the list of child tables, with the constraint names, that reference the TASK table, issue,
SQL> set lines 140
SQL> SELECT p.table_name "Parent Table", c.table_name "Child Table",
p.constraint_name "Parent Constraint", c.constraint_name "Child Constraint"
FROM user_constraints p
JOIN user_constraints c ON(p.constraint_name=c.r_constraint_name)
WHERE (p.constraint_type = 'P' OR p.constraint_type = 'U')
AND c.constraint_type = 'R'
AND p.table_name = UPPER('&table_name');
Enter value for table_name: task
old 7: AND p.table_name = UPPER('&table_name')
new 7: AND p.table_name = UPPER('task')
Parent Table Child Table Parent Constraint Child Constraint
------------------------------ ------------------------------ ------------------------------ ------------------------------
TASK TASK_REF TASK_PK TASKREF_TASK
TASK TASK TASK_PK TASK_TASK_FK
TASK TASK_SCHEDULE TASK_PK TASKSCHEDULE_TASK
TASK TASK_USER TASK_PK TASKUSER_TASK
TASK TASK_ATTACHED_FILEINFO TASK_PK TASK_ATTACH_FILE_FK
Let's start by disabling the constraints on the child tables one by one and trying to delete rows from the table TASK.
SQL> ALTER TABLE TASK_USER DISABLE CONSTRAINT TASKUSER_TASK;
Table altered.
SQL> delete from task;
delete from task
*
ERROR at line 1:
ORA-02292: integrity constraint (OMS.TASKSCHEDULE_TASK) violated - child record found
SQL> ALTER TABLE task_schedule DISABLE CONSTRAINT TASKSCHEDULE_TASK;
Table altered.
SQL> delete from task;
delete from task
*
ERROR at line 1:
ORA-02292: integrity constraint (OMS.TASK_ATTACH_FILE_FK) violated - child record found
SQL> ALTER TABLE TASK_ATTACHED_FILEINFO DISABLE CONSTRAINT TASK_ATTACH_FILE_FK;
Table altered.
SQL> delete from task;
1289 rows deleted.
And now the delete succeeds. In order to get a list of parent tables that a child table references, issue,
SQL> SELECT c.table_name "Child Table", p.table_name "Parent Table",
p.constraint_name "Parent Constraint", c.constraint_name "Child Constraint"
FROM user_constraints p JOIN user_constraints c
ON (p.constraint_name = c.r_constraint_name)
WHERE (p.constraint_type = 'P' OR p.constraint_type = 'U')
AND c.constraint_type = 'R'
AND c.table_name = UPPER('&table_name');
Enter value for table_name: task
old 7: AND c.table_name = UPPER('&table_name')
new 7: AND c.table_name = UPPER('task')
Child Table Parent Table Parent Constraint Child Constraint
------------------------------ ------------------------------ ------------------------------ ------------------------------
TASK TASK TASK_PK TASK_TASK_FK
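After the delete succeeds, remember to re-enable the constraints that were disabled; a sketch using the constraint names from the session above:

```
SQL> ALTER TABLE TASK_USER ENABLE CONSTRAINT TASKUSER_TASK;
SQL> ALTER TABLE TASK_SCHEDULE ENABLE CONSTRAINT TASKSCHEDULE_TASK;
SQL> ALTER TABLE TASK_ATTACHED_FILEINFO ENABLE CONSTRAINT TASK_ATTACH_FILE_FK;
```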
Tuesday, December 16, 2008
How to put dumps in multiple locations
Suppose you have a very big database and you want to take a Data Pump export of it. Say the dump size would be 100G, but no single partition of your disk has 100G free. You may, however, have more than 100G free spread across partitions: for example, 40G free in one partition, 50G in another and 30G in a third.
You can then span this 100G dump file across those partitions and take the export. The combination of the two Data Pump export parameters DUMPFILE and FILESIZE makes this possible.
Suppose in /oradata1 40G is available.
in /oradata2 50G is available.
in /oradata3 30G is available.
And you have created directories dir1, dir2 and dir3 that represent /oradata1, /oradata2 and /oradata3 respectively.
If your approximate dump size is 100G then you would write your datapump export command as,
expdp user_name/pass dumpfile=dir1:part1.dmp,dir2:part2.dmp,dir3:part3.dmp,dir2:rest%U.dmp filesize=30G
In this case the dump files are created sequentially.
First, part1.dmp is created in /oradata1 with size 30G.
Then part2.dmp is created in /oradata2 with size 30G.
Then part3.dmp is created in /oradata3 with size 30G.
Finally, the remaining 10G is written to /oradata2 as rest01.dmp.
The %U causes file names to be generated by Oracle sequentially: rest01.dmp, rest02.dmp and so on.
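The sequential allocation described above can be sketched in Python. This is an illustrative model only, not Oracle code: the file-spec representation is an assumption, and real Data Pump behavior (for instance how multiple %U templates are handled) is more involved.

```python
# Illustrative sketch (not Oracle code): model how expdp fills the DUMPFILE
# list sequentially, FILESIZE units at a time. Fixed-name entries are each
# used once; a %U template is then reused with a two-digit counter.
def allocate(total, filesize, specs):
    """specs: list of (directory, filename) pairs; filename may contain %U."""
    files, used, seq = [], 0, 0
    while total > 0:
        if used < len(specs) and "%U" not in specs[used][1]:
            directory, fname = specs[used]
            used += 1
        else:
            templates = [s for s in specs if "%U" in s[1]]
            if not templates:
                # no substitution variable left: this is the ORA-39095 case
                raise RuntimeError("ORA-39095: dump file space exhausted")
            directory, name = templates[0]
            seq += 1
            fname = name.replace("%U", f"{seq:02d}")
        size = min(total, filesize)
        files.append((directory, fname, size))
        total -= size
    return files

# 100G dump with FILESIZE=30G, as in the command above (sizes in G for brevity)
plan = allocate(100, 30, [("dir1", "part1.dmp"), ("dir2", "part2.dmp"),
                          ("dir3", "part3.dmp"), ("dir2", "rest%U.dmp")])
```

Running the sketch yields part1/part2/part3 at 30G each and rest01.dmp at 10G in dir2, matching the description above; with no %U entry in the list it raises once the fixed names are exhausted, mirroring the ORA-39095 failure shown at the top of this post.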
Below is an example that takes a dump of the USERS tablespace, with each dump file limited to 300K and the files spanning the C:\, D:\ and E:\ drives.
SQL> create or replace directory d as 'd:';
Directory created.
SQL> create or replace directory e as 'e:';
Directory created.
SQL> create directory c as 'c:';
Directory created.
maximsg@TIGER> $expdp maximsg/a dumpfile=d:part1.dmp,e:part2.dmp,c:rest%U.dmp filesize=300K tablespaces=users
Export: Release 10.2.0.1.0 - Production on Thursday, 01 January, 2009 12:16:15
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "MAXIMSG"."SYS_EXPORT_TABLESPACE_01": maximsg/******** dumpfile=d:part1.dmp,e:part2.dmp,c:rest%U.dmp filesize=300K tablespaces=use
rs
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 20.12 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "CR_2"."O_CDR_TEST" 310.9 KB 2166 rows
. . exported "CR_2"."O_CDR" 484.4 KB 2606 rows
. . exported "CR_2"."USER_IP" 126 KB 3403 rows
. . exported "CR_2"."O_CDR_TEMP" 33.72 KB 361 rows
. . exported "CR_2"."USERS" 42.57 KB 230 rows
. . exported "SCOTT"."DEPT" 5.656 KB 4 rows
. . exported "SCOTT"."EMP" 7.851 KB 15 rows
. . exported "SCOTT"."SALGRADE" 5.585 KB 5 rows
. . exported "SCOTT"."BONUS" 0 KB 0 rows
Master table "MAXIMSG"."SYS_EXPORT_TABLESPACE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for MAXIMSG.SYS_EXPORT_TABLESPACE_01 is:
D:\PART1.DMP
E:\PART2.DMP
C:\REST01.DMP
C:\REST02.DMP
C:\REST03.DMP
Job "MAXIMSG"."SYS_EXPORT_TABLESPACE_01" successfully completed at 12:16:33
For dumpfile=d:part1.dmp,e:part2.dmp,c:rest%U.dmp,
part1.dmp is created first in the D: drive, sized 300K.
Then part2.dmp is created in the E: drive, sized 300K.
Then all remaining parts of the dump are created in the C: drive, each 300K (except possibly the last one, since the dump size is not a multiple of 300K), with names like rest01.dmp, rest02.dmp and so on.
If you use the original export utility, you control the size with the FILE and FILESIZE parameters: FILE takes a list of file names to be created, and FILESIZE sets the maximum size of each dump file.
Expdp fails with ORA-39125, ORA-04031
Problem Description
I was performing a Data Pump export operation and it failed with ORA-39125 and ORA-04031 as below.
SQL> $expdp full=y dumpfile=fulldb.dmp directory=d userid=system/a
Export: Release 10.2.0.1.0 - Production on Tuesday, 16 December, 2008 17:07:44
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
FLASHBACK automatically enabled to preserve database integrity.
Starting "SYSTEM"."SYS_EXPORT_FULL_02": full=y dumpfile=fulldb.dmp directory=d userid=system/********
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
ORA-39125: Worker unexpected fatal error in KUPW$WORKER.GET_TABLE_DATA_OBJECTS while calling DBMS_METADATA.FETCH_XML_CLOB []
ORA-04031: unable to allocate 28 bytes of shared memory ("shared pool","SELECT /*+rule*/ SYS_XMLGEN(...","sql area","ub1[]: qkexrXformVal")
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPW$WORKER", line 6235
----- PL/SQL Call Stack -----
object line object
handle number name
64E077B4 14916 package body SYS.KUPW$WORKER
64E077B4 6300 package body SYS.KUPW$WORKER
64E077B4 9120 package body SYS.KUPW$WORKER
64E077B4 1880 package body SYS.KUPW$WORKER
64E077B4 6861 package body SYS.KUPW$WORKER
64E077B4 1262 package body SYS.KUPW$WORKER
64CB4398 2 anonymous block
Job "SYSTEM"."SYS_EXPORT_FULL_02" stopped due to fatal error at 17:08:03
Cause of the Error
The ORA-04031 error is returned whenever Oracle attempts to allocate a large piece of contiguous memory in the shared pool but fails. Oracle first flushes all objects that are not currently in use from the pool, and the resulting free memory chunks are merged. If there is still no single chunk large enough to satisfy the request, ORA-04031 is raised.
In 10g, automatic shared memory management lets the DBA reserve a pool of shared memory that is used to allocate the shared pool, the buffer cache, the java pool and the large pool with the single parameter SGA_TARGET. So simply increasing SGA_TARGET is likely to solve the problem.
Solution
Let's see the sga_target value.
SQL> show parameter sga_t
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
sga_target big integer 100M
And SGA_MAX_SIZE value,
SQL> show parameter sga_max
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
sga_max_size big integer 100M
These two settings are low, so we increase them and restart the database. SGA_TARGET cannot exceed SGA_MAX_SIZE, and a change to SGA_MAX_SIZE only takes effect after the database is bounced.
SQL> alter system set sga_max_size=300M scope=spfile;
System altered.
SQL> alter system set sga_target=300M scope=spfile;
System altered.
SQL> startup force
ORACLE instance started.
Total System Global Area 314572800 bytes
Fixed Size 1248768 bytes
Variable Size 96469504 bytes
Database Buffers 209715200 bytes
Redo Buffers 7139328 bytes
Database mounted.
Database opened.
SQL> show parameter sga_target
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
sga_target big integer 300M
Now the export operation runs smoothly because the shared pool can find enough memory for the operation.
SQL> $expdp full=y dumpfile=fulldb.dmp directory=d userid=system/a
Saturday, December 13, 2008
ORA-02297: cannot disable constraint -dependencies exist
Problem Description
Whenever you try to disable a constraint on a table, it may fail with the error ORA-02297: cannot disable constraint - dependencies exist, as below.
SQL> alter table transaction disable constraint TRANSACTION_PK;
alter table transaction disable constraint TRANSACTION_PK
*
ERROR at line 1:
ORA-02297: cannot disable constraint (OMS.TRANSACTION_PK) - dependencies exist
Cause of the Problem
The disable constraint command fails because the table is a parent table: foreign keys in child tables depend on this constraint.
Solution of the Problem
Two solutions exist for this problem.
1)Find the foreign key constraints that reference the table, disable them, and then disable this table's constraint.
The following query lists the dependent tables and the dependent constraint names. After that, disable the child constraints first and then the parent constraint.
SQL> SELECT p.table_name "Parent Table", c.table_name "Child Table",
p.constraint_name "Parent Constraint", c.constraint_name "Child Constraint"
FROM user_constraints p
JOIN user_constraints c ON(p.constraint_name=c.r_constraint_name)
WHERE (p.constraint_type = 'P' OR p.constraint_type = 'U')
AND c.constraint_type = 'R'
AND p.table_name = UPPER('&table_name');
Enter value for table_name: transaction
old 7: AND p.table_name = UPPER('&table_name')
new 7: AND p.table_name = UPPER('transaction')
Parent Table Child Table Parent Constraint Child Constraint
------------------------------ ------------------------------ ------------------------------ ------------------------------
TRANSACTION USER_SALARY_RECORD TRANSACTION_PK SYS_C005564
TRANSACTION TRANSACTION_DETAIL TRANSACTION_PK TRNSCTN_DTL_TRNSCTN_FK
SQL> alter table USER_SALARY_RECORD disable constraint SYS_C005564;
Table altered.
SQL> alter table TRANSACTION_DETAIL disable constraint TRNSCTN_DTL_TRNSCTN_FK;
Table altered.
SQL> alter table transaction disable constraint TRANSACTION_PK;
Table altered.
2)Disable the constraint with cascade option.
SQL> alter table transaction disable constraint TRANSACTION_PK cascade;
Table altered.
Related Documents
How to Disable and Enable all constraints using SQL
Type of constraint in oracle
How to know list of constraints and reference constraints in a table
Saturday, December 6, 2008
ORA-07445: exception encountered: core dump kghalp SIGSEGV ORA-10980
Problem Description
I wanted to do an online redefinition of a table. Whenever I ran the start_redef_table procedure, it failed with ORA-03114 and ORA-03113 while creating the materialized view.
Below is the scenario.
SQL> exec dbms_redefinition.start_redef_table('M', 'HISTORY', 'HISTORY_INT');
ERROR:
ORA-03114: not connected to ORACLE
BEGIN dbms_redefinition.start_redef_table('M', 'HISTORY', 'HISTORY_INT'); END;
*
ERROR at line 1:
ORA-03113: end-of-file on communication channel
In the Alert log file it displays,
Sat Dec 6 16:34:40 2008
Errors in file /var/opt/dumpfile/udump/arjudb_ora_12860.trc:
ORA-07445: exception encountered: core dump [kghalp()+58] [SIGSEGV] [Address not mapped to object] [0x000000068] [] []
ORA-10980: prevent sharing of parsed query during Materialized View query generation
Sat Dec 6 16:36:39 2008
Cause of the Problem
This is an Oracle bug. When auditing is enabled and a materialized view is being created (or start_redef_table is running) while a commit, rollback or abort is processed, memory that is being cleaned up is accessed, leading to a core dump.
Solution of the Problem
Since this is an Oracle bug that fires only when auditing is enabled, there are two solutions.
1)Disable Audit and Restart database:
SQL> alter system set audit_trail=NONE scope=spfile;
System altered.
Since audit_trail is a static parameter, the database must be bounced for the change to take effect.
SQL> shutdown immediate;
SQL> startup
2)Apply patch set 10.2.0.3:
This bug is fixed in release 10.2.0.3, so apply that patch set.
Thursday, December 4, 2008
Import fails with ORA-39005, ORA-31600: invalid input value NULL for parameter
Problem Description
I used both remap_schema (the import is performed into a different user than the one whose data was exported) and remap_tablespace (the tablespace is changed while importing), and the import operation failed with ORA-39005 and ORA-31600.
$host impdp directory=d dumpfile=user1_metadata04-12-08.dmp remap_schema=user1:user2 table_exists_action=skip logfile=user1_metadata04-12-08.log \
remap_tablespace=user1_SPC:user2_SPC \
remap_tablespace=user1_HISTORY_DETAIL_SPC: user2_HISTORY_DETAIL_SPC \
remap_tablespace=user1_HISTORY_SPC:user2_HISTORY_SPC \
remap_tablespace=user1_ACC_SPC:user2_SPC \
remap_tablespace=user1_HISTORY_DETAIL_INDX_SPC:user2_HIS_DETAIL_INDX_SPC \
remap_tablespace=user1_HISTORY_INDX_SPC:user2_HISTORY_INDX_SPC
Import: Release 10.2.0.1.0 - 64bit Production on Thursday, 04 December, 2008 19:17:08
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Username: system
Password:
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
ORA-39005: inconsistent arguments
ORA-31600: invalid input value NULL for parameter VALUE in function DBMS_DATAPUMP.METADATA_REMAP
Cause of the Problem
A NULL or invalid value was supplied for the parameter. Here the new value of remap_tablespace was treated as NULL because of the space after the colon in the line user1_HISTORY_DETAIL_SPC: user2_HISTORY_DETAIL_SPC.
Solution of the Problem
Ensure that no parameter is supplied with a NULL value. Rewriting the script as below solves the problem.
impdp directory=d dumpfile=user1_metadata04-12-08.dmp remap_schema=user1:user2 table_exists_action=skip logfile=user1_metadata04-12-08.log \
remap_tablespace=user1_SPC:user2_SPC \
remap_tablespace=user1_HISTORY_DETAIL_SPC:user2_HISTORY_DETAIL_SPC \
remap_tablespace=user1_HISTORY_SPC:user2_HISTORY_SPC \
remap_tablespace=user1_ACC_SPC:user2_SPC \
remap_tablespace=user1_HISTORY_DETAIL_INDX_SPC:user2_HIS_DETAIL_INDX_SPC \
remap_tablespace=user1_HISTORY_INDX_SPC:user2_HISTORY_INDX_SPC
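The reason the stray space is fatal: the shell splits command-line words on whitespace before impdp ever sees them. A small Python sketch using shlex to mimic POSIX word-splitting (this models the shell's behavior, not impdp's actual parser):

```python
import shlex

# Reproduce the bad argument; shlex mimics POSIX shell word-splitting,
# which happens before impdp parses anything.
bad = "remap_tablespace=user1_HISTORY_DETAIL_SPC: user2_HISTORY_DETAIL_SPC"
words = shlex.split(bad)

# The shell delivers two separate words; impdp sees only the first as this
# parameter. After the '=' the value ends at the colon, so the new
# tablespace name is empty, i.e. the NULL value ORA-31600 complains about.
value = words[0].split("=", 1)[1]      # "user1_HISTORY_DETAIL_SPC:"
old_ts, _, new_ts = value.partition(":")
```

The second word, user2_HISTORY_DETAIL_SPC, becomes a separate (meaningless) argument, which is why the remap's new value arrives as NULL.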
Sunday, November 30, 2008
Formatting SQL*Plus Reports
Sometimes SQL*Plus output is hard to read, for example when a column is displayed much wider than its data and the result looks untidy. In this post I will show how to produce well-formatted output in SQL*Plus.
A column can be formatted with the COLUMN command. For string data the format is specified as A followed by a length: A7 means the column will be 7 characters wide. For numeric data the format is a string of 9s, such as 99999, where the number of 9s determines how many digits are displayed.
Changing Column Headings
To change the heading of column a,b and c use commands like below. The vertical bar is used if you want to display the columns in a new line.
SQL> COLUMN A HEADING 'FIRST|COLUMN'
SQL> COLUMN C HEADING 'THIRD|COLUMN'
SQL> COLUMN B HEADING 'SECOND|COLUMN' FORMAT A7
SQL> select a,b,c from test1;
FIRST SECOND THIRD
COLUMN COLUMN COLUMN
---------- ------- ------
1 Hi Hi2
Setting the Underline Character
You see the underline character in the above output under heading is set to -. If you want to change it to = then use,
SQL> SET UNDERLINE =
SQL> /
FIRST SECOND THIRD
COLUMN COLUMN COLUMN
========== ======= ======
1 Hi Hi2
Default Display of columns
1)A NUMBER column's width equals the width of the heading or the width of the FORMAT plus one space for the sign, whichever is greater. If you do not explicitly use FORMAT, then the column's width will always be at least the value of SET NUMWIDTH.
BY default the NUMWIDTH is set to 10.
SQL> select 22/7 "This is the pi value" from dual;
This is the pi value
--------------------
3.14285714
SQL> show numwidth
numwidth 10
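The effect of the default NUMWIDTH can be mimicked outside SQL*Plus. This Python analogy is only an approximation of SQL*Plus's display logic, but it agrees with the output above, since a 10-character column leaves room for nine significant digits:

```python
# Mimic NUMWIDTH 10: ten display characters leave room for nine
# significant digits plus the decimal point.
value = 22 / 7
display = f"{value:.9g}"   # 9 significant digits, like the SQL*Plus output
print(display)             # prints 3.14285714
```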
2)The default width of datatype columns is the width of the column in the database. The column width of a LONG, CLOB, NCLOB or XMLType defaults to the value of SET LONGCHUNKSIZE or SET LONG, whichever is the smaller.
If LONG is set to 80 and LONGCHUNKSIZE to 90, then for a CLOB column only the first 80 characters are shown in SQL*Plus by default.
3)The default width and format of unformatted DATE columns in SQL*Plus is determined by the database NLS_DATE_FORMAT parameter. Otherwise, the default format width is A9.
Listing and Resetting Column Display Attributes
1)To list the current display attributes for a given column, use the COLUMN command followed by the column name.
COLUMN column_name
2)To list the current display attributes for all columns, enter just the COLUMN command.
COLUMN
3)To reset the display attributes for a column to their default values, use the CLEAR clause of the COLUMN command as below.
COLUMN column_name CLEAR
4)To reset all column display attributes to their default values enter,
CLEAR COLUMNS
Suppressing and Restoring Column Display Attributes
Suppose you have formatted a column through the COLUMN command and now want to use the default display attributes for that column, but you don't want to remove the attributes you have defined. You can achieve that with,
COLUMN column_name OFF
To restore the attributes you defined through COLUMN, use the ON clause:
COLUMN column_name ON
Printing a Line of Characters after Wrapped Column Values
RECSEP determines when the line of characters is printed:
-set RECSEP to EACH to print the line after every row,
-set RECSEP to WRAPPED to print it only after rows that wrap,
-set RECSEP to OFF to suppress the line entirely.
The default setting of RECSEP is WRAPPED.
RECSEPCHAR sets the character the line is made of.
To print a line of dashes after each wrapped column value, enter:
SET RECSEP WRAPPED
SET RECSEPCHAR "-"
Clarifying Your Report with Spacing and Summary Lines
With the BREAK command you can suppress duplicate values in a column specified in an ORDER BY clause. If the same value appears in consecutive rows and you BREAK on that column, only the first occurrence is displayed, as below.
Note that here I did not use an ORDER BY clause. You should ORDER BY the column that you break on; if you do not, a break occurs every time the column value changes.
SQL> select station_id,uname from users where station_id!=1 and rownum<=10;
STATION_ID UNAME
---------- --------------------------------------------------
4 a.sadat
14 parvez
14 reza
14 voyeger
40 support
71 accounts5
71 bill5
71 chinmoy
71 crash5
71 mubeen5
10 rows selected.
SQL> break on station_id skip 2
SQL> select station_id,uname from users where station_id!=1 and rownum<=10;
STATION_ID UNAME
---------- --------------------------------------------------
4 a.sadat
14 parvez
reza
voyeger
40 support
71 accounts5
bill5
chinmoy
crash5
mubeen5
10 rows selected.
SQL> break on station_id skip page
SQL> /
STATION_ID UNAME
---------- --------------------------------------------------
4 a.sadat
STATION_ID UNAME
---------- --------------------------------------------------
14 parvez
reza
voyeger
STATION_ID UNAME
---------- --------------------------------------------------
40 support
STATION_ID UNAME
---------- --------------------------------------------------
71 accounts5
bill5
chinmoy
crash5
mubeen5
10 rows selected.
To insert n blank lines, use the BREAK command in the following form:
BREAK ON break_column SKIP n
where n is the number of blank lines to insert (1, 2, and so on).
To skip a page, use the command in this form:
BREAK ON break_column SKIP PAGE
You may wish to insert blank lines or a blank page after every row.
To skip n lines after every row, use BREAK in the following form:
BREAK ON ROW SKIP n
To skip a page after every row, use
BREAK ON ROW SKIP PAGE
You can list your current break definition by entering the BREAK command with no clauses:
BREAK
You can remove the current break definition by entering the CLEAR command with the BREAKS clause:
CLEAR BREAKS
Related Documents:
-------------------------------------------------------
How to set Environmental variable to SQL*Plus permanently
-------------------------------------------------------
Use of bind variables in Sql*plus
Bind variables are variables you create in SQL*Plus and then reference in PL/SQL blocks or in SQL statements.
If you create a bind variable in SQL*Plus, you can use it as you would a declared variable in your PL/SQL subprogram and then access its value from SQL*Plus.
Creating Bind Variables
To create a bind variable in SQL*Plus, use the VARIABLE command. To create a bind variable named bind_var of NUMBER datatype, use,
variable bind_var NUMBER
Displaying Bind Variables
To display all bind variables in the session, use the VARIABLE command with no arguments.
SQL> variable
variable bind_var
datatype NUMBER
To display the value of a particular bind variable, use the PRINT command,
SQL> print bind_var
BIND_VAR
----------
Using Bind Variables
To reference a bind variable, prefix its name with a colon (:). To change the value of a bind variable you must do so inside a PL/SQL block. To set bind_var to 10, use,
begin
:bind_var:=10;
end;
/
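After the PL/SQL block runs, you can verify the new value from SQL*Plus with PRINT:

SQL> print bind_var

  BIND_VAR
----------
        10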