Suppose you have a very big database and you want to take a Data Pump export of it. Say the dump size would be about 100G, but no single partition on your disk has 100G free. You may still have more than 100G free in total, spread over several partitions: for example, 40G free in one partition, 50G in another and 30G in a third.
You can then span the 100G dump file set across these partitions and take the export. The combination of two Data Pump export parameters, DUMPFILE and FILESIZE, makes this possible.
Suppose in /oradata1 40G is available.
in /oradata2 50G is available.
in /oradata3 30G is available.
And you have created directories dir1, dir2 and dir3 that represent /oradata1, /oradata2 and /oradata3 respectively.
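If those directory objects do not exist yet, a privileged user can create them and grant access to the exporting user. A minimal sketch, where user_name is a placeholder for your export user:

SQL> create or replace directory dir1 as '/oradata1';
SQL> create or replace directory dir2 as '/oradata2';
SQL> create or replace directory dir3 as '/oradata3';
SQL> grant read, write on directory dir1 to user_name;
SQL> grant read, write on directory dir2 to user_name;
SQL> grant read, write on directory dir3 to user_name;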
If your approximate dump size is 100G, then you would write your Data Pump export command as,
expdp user_name/pass dumpfile=dir1:part1.dmp,dir2:part2.dmp,dir3:part3.dmp,dir2:rest%U.dmp filesize=30G
In this case the dump files will be created sequentially.
First, a 30G dump file named part1.dmp will be created in /oradata1.
Then a 30G dump file named part2.dmp will be created in /oradata2.
Then a 30G dump file named part3.dmp will be created in /oradata3.
Finally, the remaining 10G will be written to /oradata2 in a file named rest01.dmp.
The %U causes the file name to be generated by Oracle sequentially, like rest01.dmp, rest02.dmp and so on.
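If you are not sure how big the dump will be, you can ask Data Pump for a size estimate first, without writing any dump file. A minimal sketch, assuming a full export here for illustration (adjust the mode to match your actual export):

expdp user_name/pass full=y estimate_only=y estimate=blocks

The estimated size is printed in the log output, so you can plan the FILESIZE value and the number of files per partition accordingly.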
Below is an example that takes a dump of the tablespace USERS, where each dump file is limited to 300K and the files span the C:\, D:\ and E:\ drives.
SQL> create or replace directory d as 'd:';
Directory created.
SQL> create or replace directory e as 'e:';
Directory created.
SQL> create directory c as 'c:';
Directory created.
maximsg@TIGER> $expdp maximsg/a dumpfile=d:part1.dmp,e:part2.dmp,c:rest%U.dmp filesize=300K tablespaces=users
Export: Release 10.2.0.1.0 - Production on Thursday, 01 January, 2009 12:16:15
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "MAXIMSG"."SYS_EXPORT_TABLESPACE_01": maximsg/******** dumpfile=d:part1.dmp,e:part2.dmp,c:rest%U.dmp filesize=300K tablespaces=use
rs
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 20.12 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "CR_2"."O_CDR_TEST" 310.9 KB 2166 rows
. . exported "CR_2"."O_CDR" 484.4 KB 2606 rows
. . exported "CR_2"."USER_IP" 126 KB 3403 rows
. . exported "CR_2"."O_CDR_TEMP" 33.72 KB 361 rows
. . exported "CR_2"."USERS" 42.57 KB 230 rows
. . exported "SCOTT"."DEPT" 5.656 KB 4 rows
. . exported "SCOTT"."EMP" 7.851 KB 15 rows
. . exported "SCOTT"."SALGRADE" 5.585 KB 5 rows
. . exported "SCOTT"."BONUS" 0 KB 0 rows
Master table "MAXIMSG"."SYS_EXPORT_TABLESPACE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for MAXIMSG.SYS_EXPORT_TABLESPACE_01 is:
D:\PART1.DMP
E:\PART2.DMP
C:\REST01.DMP
C:\REST02.DMP
C:\REST03.DMP
Job "MAXIMSG"."SYS_EXPORT_TABLESPACE_01" successfully completed at 12:16:33
For dumpfile=d:part1.dmp,e:part2.dmp,c:rest%U.dmp,
first part1.dmp will be created in the D: drive with size 300K.
Then part2.dmp will be created in the E: drive with size 300K.
Then all remaining parts of the dump will be created in the C: drive, each sized 300K (except possibly the last one, since the total dump size is not a multiple of 300K), and their names will be like rest01.dmp, rest02.dmp and so on.
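To import from such a spanned dump file set, you pass the same file list (including the %U template) to impdp, and it reads each piece from its respective directory. A minimal sketch against the dump set above:

impdp maximsg/a dumpfile=d:part1.dmp,e:part2.dmp,c:rest%U.dmp tablespaces=users

Note that FILESIZE is not needed on import; it only matters when the files are being written.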
If you use original export, then you control the size with the FILE and FILESIZE parameters. The FILE parameter takes a list of file names to be created, and the FILESIZE parameter specifies the maximum size of each dump file.
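As a minimal sketch of the 100G scenario above with original export (which takes plain file paths rather than directory objects, and prompts for more file names if the listed files fill up):

exp user_name/pass full=y file=/oradata1/part1.dmp,/oradata2/part2.dmp,/oradata3/part3.dmp,/oradata2/rest.dmp filesize=30GB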