
Monday, February 2, 2009

Import error ORA-39083, ORA-02298 parent keys not found

Error Description
I took a dump from the live server and imported it into another database. All tables, indexes, functions, procedures and most constraints were created fine, but while processing REF_CONSTRAINT objects two constraints failed to create.
From the import log,
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
ORA-39083: Object type REF_CONSTRAINT failed to create with error:
ORA-02298: cannot validate (MAXIM.HIS_DET_HISID_FK) - parent keys not found
Failing sql is:
ALTER TABLE "MAXIM"."HISTORY_DETAIL" ADD CONSTRAINT "HIS_DET_HISID_FK" FOREIGN KEY
("HISTORY_ID") REFERENCES "MAXIM"."HISTORY" ("ID") ENABLE
ORA-39083: Object type REF_CONSTRAINT failed to create with error:
ORA-02298: cannot validate (MAXIM.FK_REGIS_PHON_SUBSCRIBERS) - parent keys not found
Failing sql is:
ALTER TABLE "MAXIM"."PHONES" ADD CONSTRAINT "FK_REGIS_PHON_SUBSCRIBERS"
FOREIGN KEY ("SB_ID") REFERENCES "MAXIM"."SUBSCRIBERS" ("ID") ENABLE


Cause of the Problem
ORA-02298 is raised because the import was trying to enable a foreign key constraint on a table that was already populated with data, but some of that data references parent keys that do not exist. A constraint can only be enabled if every existing row satisfies it, so the foreign key could not be enabled and the error above was reported.

There may be several scenarios behind ORA-02298 during an import operation.

1)The data being imported comes from a database with more than one user. The export was done in user/schema mode, so only that user's data was exported. Tables that the user does not own, but has referential integrity links to, were not exported.

2)The export was taken while the database was open in read/write mode, but it was not taken as a consistent export. That is, with the exp command the CONSISTENT=Y option was not used, and with the data pump expdp command neither FLASHBACK_SCN nor FLASHBACK_TIME was specified to make the dump consistent. As a result the parent table was exported first, then rows were inserted into both the parent and the child table, and only after that was the child table exported. The dump file therefore contains child rows whose referenced parent rows are missing from the exported copy of the parent table, so during import the foreign key constraint cannot be enabled because the referenced parent keys are not there.

Solution of the Problem
If you face the problem for cause 1), identify the parent tables, import their dump first, and only then import the child tables from the user dump. Taking a full export instead also avoids the problem.
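Before re-creating a failed constraint you can also check which child rows have no parent. Below is a minimal sketch for the first constraint in the log above; the table and column names are taken from the failing SQL, so adjust them for your own case.

-- Child rows in HISTORY_DETAIL whose HISTORY_ID does not exist in the parent table HISTORY
SQL> SELECT d.HISTORY_ID, COUNT(*) AS orphan_rows
     FROM   MAXIM.HISTORY_DETAIL d
     WHERE  d.HISTORY_ID IS NOT NULL
     AND    NOT EXISTS (SELECT 1 FROM MAXIM.HISTORY h WHERE h.ID = d.HISTORY_ID)
     GROUP  BY d.HISTORY_ID;

Once the missing parent rows are loaded (or the orphan rows are removed), the ALTER TABLE ... ADD CONSTRAINT statement from the log can be re-run by hand.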

If you face the problem for cause 2), then take the export with the CONSISTENT=Y option when using the exp utility.
When using the data pump expdp utility, use the FLASHBACK_SCN or FLASHBACK_TIME parameter instead. You can query the database for suitable values for these parameters and pass them to expdp. You can follow http://arjudba.blogspot.com/2008/05/export-data-to-previous-time-or-point.html to get an idea of how these parameters are used. To know more about FLASHBACK_SCN or FLASHBACK_TIME, search within this blog.
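A rough sketch of both variants is shown below (the user names, file names and SCN value are examples only).

# original export utility: make the dump consistent to a single point in time
$ exp system/password owner=maxim file=maxim.dmp log=maxim_exp.log consistent=y

# data pump: note the current SCN first, then pass it to expdp
SQL> SELECT current_scn FROM v$database;
$ expdp system/password schemas=maxim directory=d dumpfile=maxim.dmp flashback_scn=1234567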

Related Documents
http://arjudba.blogspot.com/2008/05/how-to-get-scn-from-database-timestamp.html
http://arjudba.blogspot.com/2008/05/export-data-to-previous-time-or-point.html

Thursday, December 4, 2008

Import fails with ORA-39005, ORA-31600: invalid input value NULL for parameter

Problem Description
I used both remap_schema (the import is performed into a different user than the one whose data was exported) and remap_tablespace (the tablespaces are changed during the import), and the import operation failed with ORA-39005 and ORA-31600.
$host impdp directory=d dumpfile=user1_metadata04-12-08.dmp remap_schema=user1:user2 table_exists_action=skip logfile=user1_metadata04-12-08.log \
remap_tablespace=user1_SPC:user2_SPC \
remap_tablespace=user1_HISTORY_DETAIL_SPC: user2_HISTORY_DETAIL_SPC \
remap_tablespace=user1_HISTORY_SPC:user2_HISTORY_SPC \
remap_tablespace=user1_ACC_SPC:user2_SPC \
remap_tablespace=user1_HISTORY_DETAIL_INDX_SPC:user2_HIS_DETAIL_INDX_SPC \
remap_tablespace=user1_HISTORY_INDX_SPC:user2_HISTORY_INDX_SPC

Import: Release 10.2.0.1.0 - 64bit Production on Thursday, 04 December, 2008 19:17:08

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Username: system
Password:

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
ORA-39005: inconsistent arguments
ORA-31600: invalid input value NULL for parameter VALUE in function DBMS_DATAPUMP.METADATA_REMAP

Cause of the Problem
A NULL or invalid value was supplied for the parameter. Here a NULL value was supplied for the remap_tablespace parameter: the new value was treated as NULL because of a space after the colon.

The line remap_tablespace=user1_HISTORY_DETAIL_SPC: user2_HISTORY_DETAIL_SPC contains a space just after the colon, and hence the error.

Solution of the Problem
Ensure that no parameter is supplied with a NULL value. Rewriting the command as below, without the space after the colon, solves the problem.

impdp directory=d dumpfile=user1_metadata04-12-08.dmp remap_schema=user1:user2 table_exists_action=skip logfile=user1_metadata04-12-08.log \
remap_tablespace=user1_SPC:user2_SPC \
remap_tablespace=user1_HISTORY_DETAIL_SPC:user2_HISTORY_DETAIL_SPC \
remap_tablespace=user1_HISTORY_SPC:user2_HISTORY_SPC \
remap_tablespace=user1_ACC_SPC:user2_SPC \
remap_tablespace=user1_HISTORY_DETAIL_INDX_SPC:user2_HIS_DETAIL_INDX_SPC \
remap_tablespace=user1_HISTORY_INDX_SPC:user2_HISTORY_INDX_SPC

Wednesday, November 12, 2008

IMPDP fails with ORA-39002, ORA-29283: invalid file operation

Problem Description
My impdp operation failed with errors ORA-39002, ORA-39070 and ORA-29283 (invalid file operation) as shown below.
SQL> host impdp cr_3/cr_3 remap_schema=cr_2:cr_3 dumpfile=skydbcr_2-04-10-2008.dmp directory=e

Import: Release 10.2.0.1.0 - Production on Wednesday, 12 November, 2008 16:42:27

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 475
ORA-29283: invalid file operation

Cause and Solution of the Problem
1. Check whether the user running the import operation has READ and WRITE permission on the database directory (a quick sketch of checks 1 and 2 follows this list). If not, then as a DBA user grant the permissions to cr_3,
GRANT READ,WRITE ON DIRECTORY E TO CR_3;

2. Check whether the path used in the database directory physically exists on the OS.
When you create a directory object, Oracle does not check that the physical directory exists on the OS; the check is done at run time.

3. Check whether the owner of the Oracle processes has enough permission on the OS directory where the file is accessed. On Unix, issue
ls -l on the file and check the permissions. If they are insufficient, grant read and write access to the oracle user with chmod.
On Windows with an NTFS file system you can change this by right-clicking the file or folder and selecting the sharing and security options.

4. Even if the oracle user has enough permission on the directory, ORA-29283 may still be raised if the owner of the file is not oracle. In that case change the ownership of the file with chown on Unix. On Windows with an NTFS file system you can change it by right-clicking the file or folder and selecting the sharing and security options.
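A quick sketch of checks 1 and 2 follows; the directory name E and the user CR_3 are taken from the example above, and the OS path is whatever the query returns.

SQL> SELECT directory_name, directory_path FROM dba_directories WHERE directory_name = 'E';
SQL> GRANT READ, WRITE ON DIRECTORY e TO cr_3;
# then verify on the OS that the returned path exists and is accessible to the oracle user
$ ls -ld <directory_path_from_the_query>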

Saturday, July 12, 2008

Data Pump Import fails with ORA-39000 and ORA-39140.

Error Description:
---------------------------------------------------

When I gave multiple file names in the DUMPFILE parameter of impdp, it raised errors ORA-39000 and ORA-39140.
$impdp prod16k/thword16k directory=d dumpfile=dump.dmp,dump2.dmp,dump3.dmp
Import: Release 10.2.0.2.0 - Production on Sunday, 13 July, 2008 1:56:14

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-39140: dump file "/backup1/dump2.dmp" belongs to job "PROD72"."SYS_EXPORT_TABLE_02"

Cause of The Problem:
------------------------------------------

The error happened because data pump cannot import several files at a time if they were not produced by a single export job. Here the files were produced by multiple export operations, and in one run we can only import dumps produced by a single export operation.

We can get the cause of the error from OS using oerr as follows,
bash-3.00$ oerr ora 39140
39140, 00000, "dump file \"%s\" belongs to job %s"
// *Cause: When a dump file set consists of multiple files, all files
// in the set must be specified for an import operation, and all
// files must have been produced by the same export job.
// One of the files provided does not belong to the original dump
// file set. For instance, it was created by a different export job
// than the other files.
// *Action: Remove the dump file indicated in the message and retry the
// import operation providing only the complete set of
// dump files created by a specific export job.

Solution of The Problem:
-----------------------------------

Remove file names from the DUMPFILE parameter if they were not created by a single export operation. If the three dumps above were generated by three different export operations, then import them by invoking three separate operations like,

$impdp prod16k/thword16k directory=d dumpfile=dump.dmp
$impdp prod16k/thword16k directory=d dumpfile=dump3.dmp
$impdp prod16k/thword16k directory=d dumpfile=dump2.dmp
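Note that when several files do belong to one export job (for example a parallel export written with the %U substitution variable), they can be imported together, either by listing them all or by reusing the wildcard. A sketch, assuming the set was written as dump01.dmp, dump02.dmp and so on:

$impdp prod16k/thword16k directory=d dumpfile=dump%U.dmp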

Data pump import fails with ORA-39125, ORA-00379, ORA-06512

Error Description:
-------------------------------

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-39125: Worker unexpected fatal error in KUPW$WORKER.GET_TABLE_DATA_OBJECTS while calling DBMS_METADATA.FETCH_XML_CLOB []
ORA-00379: no free buffers available in buffer pool DEFAULT for block size 4K

ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPW$WORKER", line 6235

Cause of the Problem:
---------------------------------------------

Using the oracle error message we get,
SQL> !oerr ora 00379
00379, 00000, "no free buffers available in buffer pool %s for block size %sK"
// *Cause: All buffers in the specified buffer pool for the specified block size
// are in use and no free buffers are available.
// *Action: Increase the number of buffers in the specified pool for the specified
// block size

So ORA-00379 is raised when no free buffers are available for the specified block size. The error message shows block size 4K: the import touches objects stored in a tablespace with a 4K block size, but the instance has no (or too small a) 4K buffer cache configured.

Solution of The problem:
----------------------------

As Oracle recommends, increase the size of the 4K buffer pool.
SQL> alter system set db_4k_cache_size=100M;
System altered.
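To see which tablespaces actually use the 4K block size, and what the 4K cache is currently set to, a simple check (sketch) is:

SQL> SELECT tablespace_name, block_size FROM dba_tablespaces WHERE block_size = 4096;
SQL> show parameter db_4k_cache_size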

Tuesday, July 8, 2008

LRM-00116: ORA-39001 ORA-39071 ORA-00907 in impdp

Problem Description:
------------------------------

Using special characters without escape characters causes errors like LRM-00116, ORA-39001, ORA-39071 and ORA-00907.

-bash-3.00$ expdp dba/dba exclude=table:"in('ARJU_DATA','INDIA_ACTIVITY','USA_IMAGE')"

LRM-00116: syntax error at ')' following 'USA_IMAGE'

-bash-3.00$ expdp dba/dba exclude=table:"in\('ARJU_DATA','INDIA_ACTIVITY','USA_IMAGE'\)"

Export: Release 10.2.0.2.0 - Production on Tuesday, 08 July, 2008 23:54:13

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
ORA-39001: invalid argument value
ORA-39071: Value for EXCLUDE is badly formed.
ORA-00907: missing right parenthesis

Cause of The Problem:
------------------------------------------------

Double quotes ("), single quotes (') and parentheses () are treated as special characters by the Unix shell. To know the list of special characters on Unix, please have a look at Special Character on Unix. To use a special character on Unix you have to escape it; how to do that is discussed in How does one escape special characters.
If you do not use escape characters with special characters, the errors above will arise.

Solution of The problem:
---------------------------------

Use an escape character before each special character. That is, instead of using
expdp dba/dba exclude=table:"in('ARJU_DATA','INDIA_ACTIVITY','USA_IMAGE')"

Use,
expdp prod/prod exclude=table:\"in\(\'ARJU_DATA\',\'INDIA_ACTIVITY\',\'USA_IMAGE\'\)\"
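An alternative that avoids shell escaping altogether is to put the EXCLUDE clause in a parameter file, so the shell never sees the special characters. A sketch, using a hypothetical file named exclude.par:

# contents of exclude.par
EXCLUDE=TABLE:"IN ('ARJU_DATA','INDIA_ACTIVITY','USA_IMAGE')"

# run the export with the parameter file
expdp dba/dba parfile=exclude.par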

The details of the exclude and include parameters of datapump are described in DataPump Parameters INCLUDE and EXCLUDE - How to Load and Unload Specific Objects

Tuesday, May 27, 2008

ORA-39014 ORA-39029 ORA-31672 One or more workers have prematurely exited

Error Description:
-------------------------

Whenever you tried to import data into the database it failed with error as below.

Processing object type DATABASE_EXPORT/DE_SYSTEM_PROCOBJACT/DE_PRE_SYSTEM_ACTIONS/PROCACT_SYSTEM
Processing object type DATABASE_EXPORT/DE_SYSTEM_PROCOBJACT/DE_POST_SYSTEM_ACTIONS/PROCACT_SYSTEM
ORA-39014: One or more workers have prematurely exited.
ORA-39029: worker 1 with process name "DW01" prematurely terminated
ORA-31672: Worker process DW01 died unexpectedly.
Job "SYSTEM"."SYS_IMPORT_FULL_03" stopped due to fatal error at 14:13
ORA-39014: One or more workers have prematurely exited.

Cause of The Problem:
-------------------------

This problem is likely due to the parallel settings. If there is only one dump file in the dump file set and the PARALLEL parameter is set to a higher value, this error may occur. The parallelism setting should be less than or equal to the number of dump files in the dump file set.

Solution of The Problem:
-------------------------------------

The parallelism setting (specified with the PARALLEL parameter) should be less than or equal to the number of dump files in the dump file set. If there are not enough dump files, the performance will not be optimal because multiple threads of execution will be trying to access the same dump file.

As a workaround, run the import with PARALLEL=1.
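For example, with a single dump file run the import with PARALLEL=1; if the export was written as a multi-file set, PARALLEL can be raised up to the number of files. The file names below are only illustrative.

$impdp system/password directory=d dumpfile=full.dmp parallel=1
$impdp system/password directory=d dumpfile=full%U.dmp parallel=4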

Monday, April 21, 2008

How to Import data without Dumpfile-Network Mode Import

It is possible to import data into a database without any dump file. The data is retrieved from the source database over the network and loaded directly into the target database; there are no dump files involved.

How to do the Network Mode Import:
-------------------------------------

1)Create a database link (with the source database) in the database where you will perform the import operation.

SQL> create database link ARJU.REGRESS.RDBMS.DEV.US.ORACLE.COM connect to arju identified by a using 'NEPTUNE';
Database link created.


Your tnsnames.ora must have an entry for NEPTUNE and it must resolve correctly.

I had to use this long link name because of the database global name settings.

2)Check the database link works:

If the remote table names are listed correctly, then your database link is OK.
SQL> select table_name from user_tables@ARJU.REGRESS.RDBMS.DEV.US.ORACLE.COM;
TABLE_NAME
------------------------------
FIRST_TABLE
TEST
NAME
On the destination database itself the schema is still empty, which you can check by,
SQL> select table_name from user_tables;
no rows selected

3)Note down the current SCN of the source database. This SCN is needed to get a consistent view of the source data during the import. You can query it from the destination database through the database link.
SQL>SELECT CURRENT_SCN FROM V$DATABASE@ARJU.REGRESS.RDBMS.DEV.US.ORACLE.COM;
118740640

You can also log on to the source database directly and check the SCN there.

4)Perform the network mode import operation. Specify the database link with the NETWORK_LINK parameter, and specify the FLASHBACK_SCN value found in step 3.

SQL> host impdp arju/arju directory=dnet FLASHBACK_SCN=118740640 NETWORK_LINK=ARJU.REGRESS.RDBMS.DEV.US.ORACLE.COM


Note that there is no dumpfile parameter here. I created the dnet directory only for the log file that will be written into it.
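The same network link can also be used for more selective imports; for example, a single schema could be pulled over and remapped in one step. The target schema name below is only illustrative.

SQL> host impdp arju/arju directory=dnet FLASHBACK_SCN=118740640 NETWORK_LINK=ARJU.REGRESS.RDBMS.DEV.US.ORACLE.COM schemas=arju remap_schema=arju:arju_copy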

Restrictions of Network Mode Import:
-------------------------------------------


1.If the source database is read-only, then the user on the source database must have a locally-managed tablespace assigned as a default temporary tablespace. Otherwise, the job will fail.

2.If the USERID that is executing the import job has the IMP_FULL_DATABASE role on the target database, then that user must also have the EXP_FULL_DATABASE role on the source database.

3.The only types of database links supported by Data Pump Import are: public, fixed-user, and connected-user.

4.When the NETWORK_LINK parameter is used in conjunction with the TABLES parameter, only whole tables can be imported (not partitions of tables).

5.Network imports do not support the use of evolved types.

Related Documents:
Extract DDL from Dump

Import Oracle Data from Anywhere.

Suppose I have a server machine named "NEPTUNE". On the server machine, under the /oradata2 directory, there is a dump file named net.dmp. However, to run the import operation we need not connect to the server "NEPTUNE" itself; any machine that has the data pump import client can perform the operation.

In this case we need to specify a net service name along with the username/password field. We can use the easy connect naming method here.

Note that the dump file and the directory are both on the server. Only the impdp client is on the other machine, from which I will invoke the impdp utility.


On the server machine "NEPTUNE" I invoked lsnrctl status and saw that the listener port is 1522 and the service name is ARJU. The dump file is /oradata2/net.dmp and there is a directory object named dimp.


From Another machine just invoke,

impdp arju/a@neptune:1522/ARJU directory=dimp dumpfile=net.dmp

Related Documents:
-------------------

Extract DDL from Dump

Saturday, April 12, 2008

Original Export and Import Versus Data Pump Export and Import

If you have worked with databases prior to 10g, you are probably familiar with Oracle's exp/imp utilities. Oracle 10g introduces a new feature called data pump export and import. Data pump export/import differs from original export/import; the differences are listed below.

1)Impdp/Expdp are self-tuning utilities. Tuning parameters that were used in original Export and Import, such as BUFFER and RECORDLENGTH, are neither required nor supported by Data Pump Export and Import.

2)Data Pump represents metadata in the dump file set as XML documents rather than as DDL commands.

3)Impdp/Expdp use parallel execution rather than a single stream of execution, for improved performance.

4)In Data Pump, expdp full=y followed by impdp schemas=prod gives the same result as expdp schemas=prod followed by impdp full=y, whereas original export/import does not always exhibit this behavior.

5)Expdp/Impdp access files on the server rather than on the client.

6)Expdp/Impdp operate on a group of files called a dump file set rather than on a single sequential dump file.

7)Sequential media, such as tapes and pipes, are not supported by Oracle Data Pump, whereas with original export/import the dump could be compressed directly through a pipe.

8)The Data Pump method for moving data between different database versions is different than the method used by original Export/Import.

9)When you are importing data into an existing table using either APPEND or TRUNCATE, if any row violates an active constraint, the load is discontinued and no data is loaded. This is different from original Import, which logs any rows that are in violation and continues with the load.

10)Expdp/Impdp consume more undo tablespace than original Export and Import.

11)If a table has compression enabled, Data Pump Import attempts to compress the data being loaded, whereas the original Import utility loaded data in such a way that even if a table had compression enabled, the data was not compressed upon import.

12)Data Pump supports character set conversion for both direct path and external tables. Most of the restrictions that exist for character set conversions in the original Import utility do not apply to Data Pump. The one case in which character set conversions are not supported under the Data Pump is when using transportable tablespaces.

13)There is no option to merge extents when you re-create tables. In original Import, this was provided by the COMPRESS parameter. Instead, extents are reallocated according to storage parameters for the target table.

Wednesday, April 9, 2008

Verify Physical Data Structure in Oracle.

With an Oracle database it is sometimes necessary to check whether the physical data structure is intact, for example to detect data block corruption. There are different ways to do this.

I will try to demonstrate two procedures for validating the physical data structure.

1)With Export/Import Method.
2)With DBVERIFY Utility.

I have discussed the first method in another of my posts. Here I will try to explain the second one.

DBVERIFY Utility
-------------------------


The DBVERIFY utility can be used on offline or online databases, as well as on backup files. We can use it to ensure that a backup database (or datafile) is valid before it is restored, and to detect corrupted blocks.

There are two command line interfaces to the DBVERIFY utility.

A)To check disk blocks of a single datafile.
B)To check segment.

This utility is invoked by dbv command.

A)To check disk blocks of a single datafile.
--------------------------------------------------

Invoke dbv utility with the datafile name in file parameter. Like,

bash-3.00$ dbv file=/shared_disk/oradata/ARJU/data02.dbf feedback=1000

DBVERIFY: Release 10.2.0.1.0 - Production on Wed Apr 9 17:16:00 2008

Copyright (c) 1982, 2005, Oracle. All rights reserved.

DBVERIFY - Verification starting : FILE = /shared_disk/oradata/ARJU/data02.dbf
....................................................................

...
Page 80491 is marked corrupt
Corrupt block relative dba: 0x02013a6b (file 8, block 80491)
Bad check value found during dbv:
Data in bad block:
type: 6 format: 2 rdba: 0x02013a6b
last change scn: 0x0000.0573a224 seq: 0x1 flg: 0x04
spare1: 0x0 spare2: 0x0 spare3: 0x0
consistency value in tail: 0xa2240601
check value in block header: 0xf88
computed block checksum: 0x10
.
.
.
.

DBVERIFY - Verification complete

Total Pages Examined : 3045632
Total Pages Processed (Data) : 1620222
Total Pages Failing (Data) : 1
Total Pages Processed (Index): 586
Total Pages Failing (Index): 0
Total Pages Processed (Other): 1384939
Total Pages Processed (Seg) : 0
Total Pages Failing (Seg) : 0
Total Pages Empty : 39848
Total Pages Marked Corrupt : 37
Total Pages Influx : 0
Highest block SCN : 93175751 (0.93175751)


Type dbv help=y to view the available options.

Some parameters:
--------------------
USERID:
This parameter is only necessary when the files being verified are ASM files.
FEEDBACK: Displays a progress report to the terminal in the form of dots (.). If feedback=100 is set, a dot (.) is displayed after every 100 blocks are verified.

B)To check segment.
--------------------------


In this mode, DBVERIFY enables you to specify either a table segment or an index segment for verification. During verification the segment is locked; if the specified segment is an index, the parent table is locked.

You can invoke dbv command to validate a segment like this,

dbv USERID=prod/prod SEGMENT_ID=2.2.890


SEGMENT_ID: The id of the segment that you want to verify. It is composed of the tablespace ID number (tsn), segment header file number (segfile), and segment header block number (segblock).

You can get this information from SYS_USER_SEGS. The required columns are TABLESPACE_ID, HEADER_FILE, and HEADER_BLOCK. SYSDBA privilege is required to query SYS_USER_SEGS.
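For example, the SEGMENT_ID used above could be built with a query like the one below, run with the SYSDBA privilege mentioned above; the segment name is only illustrative.

SQL> SELECT tablespace_id || '.' || header_file || '.' || header_block AS segment_id
     FROM   sys_user_segs
     WHERE  segment_name = 'EMP';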

Related Documents
-----------------------
To verify Physical data corruption as well as logical data corruption you can also use
RMAN Validation

Sunday, April 6, 2008

Extract DDL from Dump/ Read contents of dumpfile

I have been asked several times on different forums how to extract DDL from a dump.
In a nutshell, I will try to show how to extract DDL from a dump without importing any data.

Before proceeding you need to know which utility the dump was taken with: exp or expdp. The two have different procedures, and I demonstrate them below in two sections.

A) Dump was taken by exp.
B) Dump was taken by expdp.

A) Dump was taken by exp.

1) You can simply use any file viewer command and search for specific strings within the dump. I tried the cat, less and strings commands and they worked.

For example, on Unix: strings a.dmp | grep "CREATE TABLE"
or, cat dumpfile_name | grep "YOUR SEARCH STRING"

2)You can invoke the imp command with the dump file name and the show=y option. The DDL commands inside the dump are then written to the screen. Like,

imp userid/password file=dumpfilename show=y

You can include log=filename with show=y option in order to save the contents inside file. Like,
imp userid/password file=dumpfilename show=y log=logfile.txt

3)My preferred method is to use indexfile=file.txt with the imp command. It is better than the show=y option: the output of indexfile=file.txt can easily be reused as a script, whereas the output of show=y cannot, because double quotes are placed in awkward positions. Use it like this,
imp userid/password file=dumpfilename indexfile=file.txt

B) Dump was taken by expdp.

1)With impdp you have to use sqlfile=file.txt; the DDL commands are then written to the file file.txt. Like,
impdp userid/password dumpfile=dumpfilename sqlfile=file.txt

Related Documents
How to get different object_type creation scripts
How to get DDL from database objects

Saturday, April 5, 2008

How does one use the import/export utilities?

exp/imp allow you to transfer data across databases that reside on different hardware platforms and/or different Oracle versions. If the export and import are done against different Oracle versions, the imp used must not be older than the exp used; in practice, take the export with the exp of the lower version. That means, if something needs to be moved from 10g into 9i, it must be exported with 9i's exp.

In order to use exp and imp, the catexp.sql script must have been run. catexp.sql basically creates the exp_full_database and imp_full_database roles. It does not need to be run every time; it is a one-time task, and it has already been executed if you created your database with dbca, so don't bother with it in that case.
It is found under $ORACLE_HOME/rdbms/admin.
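If it does need to be run manually, the one-time step is simply to execute the script as SYSDBA, for example:

SQL> connect / as sysdba
SQL> @?/rdbms/admin/catexp.sql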

Prerequisites
--------

One must have the create session privilege for being able to use exp. If objects of another user's schema need to be exported, the EXP_FULL_DATABASE role is required.

Export/Import Modes:
--------------------------------------------


1)Full database export:
--------------------------

The EXP_FULL_DATABASE and IMP_FULL_DATABASE roles are needed to perform a full export and a full import, respectively.
Use the full export parameter for a full export.

2)Tablespace:
--------------------

Use the tablespaces export parameter for a tablespace export. It applies only to transportable tablespaces.

3)Schema:
--------------

This mode can be used to export and import all objects that belong to a user. Use the owner export parameter and the fromuser import parameter for a user (owner) export/import.

4)Table:
---------------

Specific tables (and partitions) can be exported/imported with table export mode.
Use the tables export parameter for a table export.


Example:
-----------
The following examples demonstrate how the imp/exp utilities can be used:

exp scott/tiger file=emp.dmp log=emp.log tables=emp rows=yes indexes=no
exp scott/tiger file=emp.dmp tables=(emp,dept)

imp scott/tiger file=emp.dmp full=yes
imp scott/tiger file=emp.dmp fromuser=scott touser=scott tables=dept


Using a parameter file:

exp userid=scott/tiger@orcl parfile=export.txt


... where export.txt contains:

BUFFER=100000
FILE=account.dmp
FULL=n
OWNER=scott
GRANTS=y
COMPRESS=y


To see DDL within Dump:
---------------------

To write ddl into a file,
imp userid/pass file=a.dmp indexfile=myfile.txt

or, to see ddl into screen,
imp userid/pass file=a.dmp show=y

Related Documents:
--------------------

Data Pump Export/Import
Export/Import from Enterprise Manager
Original Export/Import

Tuesday, April 1, 2008

Export and Import from Enterprise Manager

If you want to do an operation like export or import, you have to be an administrator-type user of EM. To set this up, log in as system and then go to Setup ---> Administrators ---> Create.

Now log in with the new user and go to Maintenance -----> Export to Export Files.
Give the OS user id and password in the Host Credentials section.

Now create a directory object and test it. Make sure oracle has permission to write into that directory.

Import is simple as well: Maintenance -----> Import from Export Files. Then select the directory and give the file name.

Related Documents:
--------------------

Data Pump Export/Import

Export/Import from Enterprise Manager

Original Export/Import