
Wednesday, October 3, 2012

DATAPUMP CONCEPTS & SCENARIOS

Important Data Pump view: DBA_DATAPUMP_JOBS
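For example, a quick query against this view shows the state of running jobs (a minimal sketch; the column list is a subset of what the view offers):

sql> select owner_name, job_name, operation, job_mode, state
     from dba_datapump_jobs;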
Common parameters in both traditional exp and expdp:

FILESIZE
FLASHBACK_SCN
FLASHBACK_TIME
FULL
HELP
PARFILE
QUERY
TABLES
TABLESPACES
TRANSPORT_TABLESPACES (in exp the parameter is TRANSPORT_TABLESPACE with a Y/N value; in expdp it takes the names of the tablespaces)

Common parameters in both traditional imp and impdp:

FULL
HELP
PARFILE
QUERY
SKIP_UNUSABLE_INDEXES
TABLES
TABLESPACES

Comparing traditional exp parameters with their expdp equivalents:

exp                 expdp
---                 -----
FEEDBACK            STATUS
FILE                DUMPFILE
LOG                 LOGFILE
OWNER               SCHEMAS
TTS_FULL_CHECK      TRANSPORT_FULL_CHECK


Comparing traditional imp parameters with their impdp equivalents:

imp                 impdp
---                 -----
DATAFILE            TRANSPORT_DATAFILES
DESTROY             REUSE_DATAFILES
FEEDBACK            STATUS
FILE                DUMPFILE
FROMUSER            SCHEMAS, REMAP_SCHEMA
IGNORE              TABLE_EXISTS_ACTION (SKIP, APPEND, TRUNCATE, REPLACE)
LOG                 LOGFILE
INDEXFILE, SHOW     SQLFILE
TOUSER              REMAP_SCHEMA




Parameters in expdp Utility

ATTACH   Attaches the client session to an existing Data Pump job.
ATTACH [=[schema_name.]job_name]
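For example, to re-attach to a running job from a new session (the job name SYS_EXPORT_SCHEMA_01 is illustrative; take the real name from DBA_DATAPUMP_JOBS):

expdp system/password attach=SYS_EXPORT_SCHEMA_01

Once attached, interactive commands such as STATUS, STOP_JOB, START_JOB, and KILL_JOB are available at the Export> prompt.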

COMPRESSION  Specifies which data to compress before writing to the dump file set.
COMPRESSION={ALL | DATA_ONLY | METADATA_ONLY | NONE}

CONTENT   Specifies what to export (ALL, DATA_ONLY, METADATA_ONLY).
CONTENT={ALL | DATA_ONLY | METADATA_ONLY}

DATA_OPTIONS   Default: there is no default. If this parameter is not used, the special data-handling options it provides simply do not take effect.

DIRECTORY   Default: DATA_PUMP_DIR. Location in which to write the dump file and log file.

DUMPFILE   Default: expdat.dmp. Name of the dump file (or dump file specification) to create.

ENCRYPTION  The default value depends upon the combination of encryption-related parameters that are used
ENCRYPTION = {ALL | DATA_ONLY | ENCRYPTED_COLUMNS_ONLY | METADATA_ONLY | NONE}

ENCRYPTION_ALGORITHM   Specifies which cryptographic algorithm should be used to perform the encryption
ENCRYPTION_ALGORITHM = { AES128 | AES192 | AES256 }

ENCRYPTION_MODE  The default mode depends on which other encryption-related parameters are used. Specifies the type of security to use when encryption and decryption are performed.
ENCRYPTION_MODE = { DUAL | PASSWORD | TRANSPARENT }

ENCRYPTION_PASSWORD   Default: There is no default; the value is user-provided. Specifies a password for encrypting encrypted column data, metadata, or table data in the export dumpfile. This prevents unauthorized access to an encrypted dump file set.

ESTIMATE   Shows how much disk space each table in the export job will consume.
ESTIMATE={BLOCKS | STATISTICS}

ESTIMATE_ONLY   Estimates the space required, but does not perform the export.
ESTIMATE_ONLY={y | n}
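A minimal sketch combining the two (the schema name is illustrative); no dump file is written:

expdp system/password directory=dump_dir schemas=cust_schema estimate_only=y estimate=blocks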

EXCLUDE List of objects to be excluded

FILESIZE   Default: 0 (equivalent to the maximum size of 16 terabytes). Specifies the maximum size of each dump file. If the size is reached for any member of the dump file set, that file is closed and an attempt is made to create a new file, if the file specification contains a substitution variable or if additional dump files have been added to the job.
FILESIZE=integer[B | KB | MB | GB | TB]

INCLUDE   List of objects to be included

JOB_NAME Name of the export job

KEEP_MASTER   Specify Y to keep (not drop) the master table after the export completes

NETWORK_LINK   Specify a database link to export from a remote database

NOLOGFILE   Specify Y if you do not want to create a log file
NOLOGFILE={y | n}

PARALLEL   Specify the maximum number of worker processes for the export job; together with a %U substitution variable, it also splits the dump file into a multi-file dump file set.

QUERY  Default: There is no default. Allows you to specify a query clause that is used to filter the data that gets exported.
QUERY = [schema.][table_name:] query_clause
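For example, in a parfile (the table and predicate are illustrative; a parfile avoids the shell-quoting issues of passing QUERY on the command line):

directory=dump_dir
dumpfile=query_exp.dmp
tables=cust_schema.sample
query=cust_schema.sample:"where deptno in (10,20)"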

VERSION   Objects that are incompatible with the specified version will not be exported.

SAMPLE - Allows you to specify a percentage of data to be sampled and unloaded from the source database. The sample_percent indicates the probability that a block of rows will be selected as part of the sample.
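Putting several of these parameters together, a sketch of a schema-level export parfile (all names and sizes are illustrative):

userid=system/password@alias_name
directory=dump_dir
dumpfile=cust_exp_%U.dmp
logfile=cust_exp.log
schemas=cust_schema
parallel=4
filesize=2G
exclude=statistics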


Parameters in impdp Utility

CONTENT={ALL | DATA_ONLY | METADATA_ONLY}

EXCLUDE List of objects to be excluded

ENCRYPTION_PASSWORD It is required on an import operation if an encryption password was specified on the export operation.
 
FLASHBACK_SCN   Performs an import operation that is consistent with the SCN specified from the source database. Valid only when the NETWORK_LINK parameter is used.

FLASHBACK_TIME   Similar to FLASHBACK_SCN, but Oracle finds the SCN closest to the time specified.

INCLUDE   List of objects to be included

NETWORK_LINK   Performs the import directly from a source database using the database link name specified in the parameter. No dump file is created on the server when this parameter is used. To get a consistent export from the source database, use the FLASHBACK_SCN or FLASHBACK_TIME parameters; these two parameters are valid only together with NETWORK_LINK.
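A sketch of a consistent network-mode import parfile (the link name, schema, and timestamp are illustrative):

userid=system/password@target
directory=dump_dir
logfile=net_imp.log
network_link=source_link
schemas=cust_schema
flashback_time="to_timestamp('03-10-2012 10:00:00','dd-mm-yyyy hh24:mi:ss')"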

REMAP_DATAFILE   Changes the name of the source database's data file to a different name in the target.

REMAP_SCHEMA   Loads objects into a different target schema.

REMAP_TABLESPACE   Changes the name of the source tablespace to a different name in the target.

TRANSFORM We can specify that the storage clause should not be generated in the DDL for import. This is useful if the storage characteristics of the source and target database are different. The valid values are SEGMENT_ATTRIBUTES, STORAGE. STORAGE removes the storage clause from the CREATE statement DDL, whereas SEGMENT_ATTRIBUTES removes physical attributes, tablespace, logging, and storage attributes.

TRANSFORM = name:boolean_value[:object_type], where boolean_value is Y or N.
For instance, TRANSFORM=storage:N:table
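For instance, one impdp run can write the DDL to a script without loading anything, and a second can strip the segment attributes while loading (the file names are illustrative):

impdp system/password directory=dump_dir dumpfile=schemaexpdb.dmp sqlfile=preview_ddl.sql
impdp system/password directory=dump_dir dumpfile=schemaexpdb.dmp transform=segment_attributes:n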

Data Pump's different export levels on the command line (one-line sketches follow the list):

o   Full Export Level
o   Schema Level
o   Table Level
o   Tablespace  Level
o   Transportable Tablespace Level
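
One-line sketches of each level (all object names are illustrative; for the transportable level the tablespace must be made read-only first):

expdp system/password directory=dump_dir dumpfile=full.dmp full=y
expdp system/password directory=dump_dir dumpfile=schema.dmp schemas=cust_schema
expdp system/password directory=dump_dir dumpfile=tab.dmp tables=cust_schema.sample
expdp system/password directory=dump_dir dumpfile=tbs.dmp tablespaces=users01
expdp system/password directory=dump_dir dumpfile=tts.dmp transport_tablespaces=users01 transport_full_check=y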





EXAMPLE 1:

Scenario: Objects of cust_schema in the USERS01 tablespace need to be exported from source_db and imported into the SAMPLE table in the USERS02 tablespace of cust_schema in target_db, excluding a few objects during the import.

Parameters common to both sides: DUMPFILE, LOGFILE, DIRECTORY
Expdp-specific: SCHEMAS
Impdp-specific: REMAP_TABLESPACE, TABLE_EXISTS_ACTION, EXCLUDE

REMAP_TABLESPACE: imports the data into a different tablespace.
TABLE_EXISTS_ACTION=REPLACE: the table is already present in the target, so it is replaced with the new data.
EXCLUDE: skips the listed objects during the import.

Expdp parfile:

userid=system/password@alias_name
dumpfile=schemaexpdb.dmp
logfile=schemaexpdb.log
directory=dump_dir
schemas=cust_schema

Impdp parfile:

userid=system/password@alias_name
dumpfile=schemaexpdb.dmp
logfile=schemaimpdb.log
directory=dump_dir
table_exists_action=replace
remap_tablespace=users01:users02
exclude=sequence,view,package,cluster,table:"in('sample')"
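
Each parfile is then passed to the matching client in a single line (the .par file names are illustrative):

expdp parfile=schemaexp.par
impdp parfile=schemaimp.par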




EXAMPLE 2:

Scenario: Export the SAMPLE table of cust_schema from the instance called source, and import it into the SAMPLE table of cust_schema in the instance called target.

Parameters common to both sides: USERID, DUMPFILE, LOGFILE, DIRECTORY
Expdp-specific: TABLES
Impdp-specific: TABLE_EXISTS_ACTION

TABLE_EXISTS_ACTION=REPLACE: the table is already present in the target, so it is replaced with the new data.




Expdp parfile:

userid=system/password@alias_name
dumpfile=table_expdb.dmp
logfile=table_expdb.log
directory=dump_location
tables=cust_schema.sample

Impdp parfile:

userid=system/password@alias_name
dumpfile=table_expdb.dmp
logfile=tabimpdb.log
directory=dump_location
table_exists_action=REPLACE




EXAMPLE 3:

Scenario: Export the SAMPLE table from cust_schema. The table is partitioned; export only the contents of partition1 and partition2, then append them to the SAMPLE table in the target database.

Parameters common to both sides: DUMPFILE, LOGFILE, DIRECTORY
Expdp-specific: TABLES
Impdp-specific: TABLE_EXISTS_ACTION

TABLE_EXISTS_ACTION=APPEND: appends the exported rows to the SAMPLE table in the target database.
(NOTE: partition information is not required if the target has the same table name with the same partitions.)

NOTE: Before overwriting a partition's data in the target database, manually delete the rows and then import the dump file set. EX: to overwrite the exported data in the target database, delete the rows of the SAMPLE table where deptno is in (1,2):

sql> delete sample where deptno=1;
sql> delete sample where deptno=2;
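Alternatively, assuming the same partition names as the export, truncating the partitions is faster than deleting row by row:

sql> alter table sample truncate partition partition1;
sql> alter table sample truncate partition partition2;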


Expdp parfile:

userid=system/password@alias_name
dumpfile=partition_tab_expdb.dmp
logfile=partition_tab_expdb.log
directory=dump_location
tables=cust_schema.sample:partition1, cust_schema.sample:partition2

Impdp parfile:

userid=system/password@alias_name
dumpfile=partition_tab_expdb.dmp
logfile=tab_impdb.log
directory=dump_location
table_exists_action=append




EXAMPLE 4:

Scenario: Export only the tables in cust_schema at the source instance and import them into the target instance.

Parameters common to both sides: DUMPFILE, LOGFILE, DIRECTORY
Expdp-specific: INCLUDE=TABLE, SCHEMAS
Impdp-specific: TABLE_EXISTS_ACTION

TABLE_EXISTS_ACTION=REPLACE: the tables are already present in the target, so they are replaced with the new data.



Expdp parfile:

userid=system/password@source
dumpfile=schema_tab_expdb.dmp
logfile=schema_tab_expdb.log
directory=dump_location
include=table
schemas=cust_schema

Impdp parfile:

userid=system/password@target
dumpfile=schema_tab_expdb.dmp
logfile=schema_tab_impdb.log
directory=dump_location
table_exists_action=replace





EXAMPLE 5:

Scenario: Export the rows of the SAMPLE and SAMDEPT tables that belong to departments 10 and 20, exporting only the data (not the metadata). On import, load only the department 20 rows.

Parameters common to both sides: DUMPFILE, LOGFILE, DIRECTORY
Expdp-specific: INCLUDE, QUERY, CONTENT, SCHEMAS
Impdp-specific: TABLE_EXISTS_ACTION, QUERY, SCHEMAS

TABLE_EXISTS_ACTION=APPEND: appends the exported rows to the SAMPLE table in the target database.
QUERY: the dump file contains the rows for departments 10 and 20, but only the department 20 rows need to be imported.
(Note: always check the expdp log file to see what content was exported.)

Expdp parfile:

userid=system/password@source
dumpfile=data_filter_expdb.dmp
logfile=data_filter_expdb.log
directory=dumplocation
schemas=cust_schema
query="where deptno in(10,20)"
content=data_only
include=table:"in('SAMPLE','SAMDEPT')"

Impdp parfile:

userid=system/password@target
dumpfile=data_filter_expdb.dmp
logfile=data_filter_impdb.log
directory=dumplocation
schemas=cust_schema
query="where deptno = 20"
table_exists_action=APPEND




EXAMPLE 6:

Scenario: Export the cust_schema schema from the source database, splitting the dump file set into pieces of 70 MB each, then import the dump file set into the target database. With the %U substitution variable in DUMPFILE, expdp automatically creates a numbered sequence of dump files (schema_exp_split_01.dmp, schema_exp_split_02.dmp, and so on until the total size of the dump set is reached), and impdp detects the multiple files automatically.

Parameters common to both sides: DUMPFILE, LOGFILE, DIRECTORY
Expdp-specific: INCLUDE, SCHEMAS, FILESIZE
Impdp-specific: TABLE_EXISTS_ACTION, REMAP_TABLESPACE

TABLE_EXISTS_ACTION=REPLACE: the tables are already present in the target, so they are replaced with the new data.
REMAP_TABLESPACE=tbs1:tbs01: imports the data into a new tablespace.

Expdp parfile:

userid=system/password@alias_name
dumpfile=schema_exp_split_%U.dmp
logfile=schema_exp_split.log
directory=dump_location
schemas=cust_schema
include=table
filesize=70M

Impdp parfile:

userid=system/password@alias_name
dumpfile=schema_exp_split_%U.dmp
logfile=schema_imp_split.log
directory=dump_location
table_exists_action=replace
remap_tablespace=tbs1:tbs01




EXAMPLE 7:

Scenario: Export the cust_schema schema from the source database using six parallel worker processes, then import the dump file set into the target database. With PARALLEL=6 and the %U substitution variable in DUMPFILE, expdp automatically creates the numbered dump file sequence (schema_exp_split_01.dmp through schema_exp_split_06.dmp), and impdp detects the multiple files automatically.

Parameters common to both sides: DUMPFILE, LOGFILE, DIRECTORY
Expdp-specific: INCLUDE, SCHEMAS, PARALLEL
Impdp-specific: TABLE_EXISTS_ACTION, REMAP_TABLESPACE, EXCLUDE

TABLE_EXISTS_ACTION=REPLACE: the tables are already present in the target, so they are replaced with the new data.
REMAP_TABLESPACE=tbs1:tbs01: imports the data into a new tablespace.
EXCLUDE=GRANT: skips grants during the import.


Expdp parfile:

userid=system/password@source
dumpfile=schema_exp_split_%U.dmp
logfile=schema_exp_split.log
directory=dump_location
schemas=cust_schema
include=table
parallel=6

Impdp parfile:

userid=system/password@target
dumpfile=schema_exp_split_%U.dmp
logfile=schema_imp_split.log
directory=dump_location
table_exists_action=replace
remap_tablespace=tbs1:tbs01
exclude=grant



EXAMPLE 8:

Scenario: Export the cust_schema schema from the source database, spreading the dump file set across different directory locations in 100 MB pieces, then import the dump file set into the target database. With the %U substitution variable in each DUMPFILE entry, expdp automatically creates a numbered dump file sequence in each location (schemaexp_01.dmp, schemaexp_02.dmp, and so on until the total size of the dump set is reached), and impdp detects the multiple files automatically.

Parameters common to both sides: DUMPFILE, LOGFILE, DIRECTORY
Expdp-specific: INCLUDE, SCHEMAS, FILESIZE
Impdp-specific: TABLE_EXISTS_ACTION

TABLE_EXISTS_ACTION=REPLACE: the tables are already present in the target, so they are replaced with the new data.




Expdp parfile:

userid=system/password@alias_name
logfile=schemaexp_split.log
dumpfile=dumplocation1:schemaexp_%U.dmp,dumplocation2:schemaexp_%U.dmp,dumplocation3:schemaexp_%U.dmp
directory=dump_location
schemas=cust_schema
include=table
filesize=100M

Impdp parfile:

userid=system/password@alias_name
logfile=schemaimp_split.log
dumpfile=dumplocation1:schemaexp_%U.dmp,dumplocation2:schemaexp_%U.dmp,dumplocation3:schemaexp_%U.dmp
directory=dump_location
table_exists_action=replace




EXAMPLE 9.a:
Creating the database link (for the destination system schema)

Before exporting data over a network link, you must first create a database link in the DESTINATION database. The link can be created either in the destination's user schema or in the SYSTEM schema. If the link is created in the SYSTEM schema, the user is created automatically with the same grants during the import.

Creating the link as SYSTEM schema:

create database link source_link connect to system identified by manager using 'source';
(Note: the database connection string for the database 'source' is defined in the local TNSNAMES.ORA.)

Creating the link in the destination user schema:

To create the database link in the destination user schema, you need to create the user first. Once that is done, create the link with:

create database link source_link connect to test identified by test using 'source';

Warning raised when mixing privileged and non-privileged users
When mixing privileged (e.g. DBA) users with non-privileged users, Data Pump will complain:

ORA-31631: privileges are required
ORA-39149: cannot link privileged user to non-privileged user

Look out for database links created in the SYS or SYSTEM schema pointing to a non-DBA user in the remote database, or vice versa.

For example, the following link created in the SYSTEM schema won't work, because SYSTEM is a DBA user and test is not:

create database link source_link connect to test identified by test using 'source';
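
After creating a link, a quick sanity check from the destination side confirms it works (using the source_link created above):

sql> select sysdate from dual@source_link;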

EXAMPLE 9.b:

Scenario: Currently working in the source database, but the export is taken from the remote database over NETWORK_LINK and the dump file set is stored in the source location; a TNS entry for the remote database makes this possible. The users on both sides must have matching privileges; if the privileges do not match, the export throws an error.

Parameters common to both sides: DUMPFILE, LOGFILE, DIRECTORY
Expdp-specific: INCLUDE, SCHEMAS, NETWORK_LINK
Impdp-specific: TABLE_EXISTS_ACTION

TABLE_EXISTS_ACTION=REPLACE: the tables are already present in the target, so they are replaced with the new data.

If the user privileges fail to match, you get this error:

ORA-31631: privileges are required
ORA-39149: cannot link privileged user to non-privileged user

Look out for database links created in the SYS or SYSTEM schema pointing to a non-DBA user in the remote database, or vice versa.


Expdp parfile:

userid=user/user@source
logfile=network_exp.log
directory=dump_location
dumpfile=network_exp.dmp
schemas=user
include=table
network_link=target

Impdp parfile:

userid=system/password@source
logfile=network_imp.log
directory=dump_location
dumpfile=network_exp.dmp
table_exists_action=replace


EXAMPLE 10:

Scenario: Currently working in the target database. Without exporting a dump file set at all, the data is imported directly from the source database into the target database over NETWORK_LINK.

Parameters common to both sides: LOGFILE, DIRECTORY
Impdp-specific: TABLE_EXISTS_ACTION, NETWORK_LINK

TABLE_EXISTS_ACTION=REPLACE: the tables are already present in the target, so they are replaced with the new data.

(NOTE: from the target server, contact the source server, extract the data, and import it straight into the target database. This method is very useful when there is no space to create a dump file set; the whole operation runs over the network.)


Impdp parfile:

userid=user/user@target
network_link=source
logfile=network_imp.log
directory=dumplocation
table_exists_action=replace


EXAMPLE 11:

Scenario: Currently working in the target database. The export is performed on the target database itself, and the dump file set is then imported on the target without ever moving the dump file to the source location.
EX: imagine the source database has no free space but the target has enough; this method uses the target's space without consuming any on the source.

Parameters common to both sides: DUMPFILE, LOGFILE, DIRECTORY
Expdp-specific: INCLUDE, SCHEMAS
Impdp-specific: TABLE_EXISTS_ACTION, REMAP_SCHEMA

TABLE_EXISTS_ACTION=REPLACE: the tables are already present in the target, so they are replaced with the new data.
REMAP_SCHEMA=user:user01: loads the objects into a different schema in the target.



Expdp parfile:

userid=user/user@target
logfile=network_exp.log
dumpfile=network_exp.dmp
directory=dump_location
schemas=cust_schema
include=table

Impdp parfile:

userid=system/password@target
logfile=network_imp.log
dumpfile=network_exp.dmp
directory=dump_location
table_exists_action=replace
remap_schema=user:user01



EXAMPLE 12:

Scenario: The SAMPLE parameter allows you to export a subset of data by specifying the percentage of data to be sampled and exported. The sample_percent indicates the probability that a block of rows will be selected as part of the sample. If no table name is specified, the 40% sample applies to the entire export job.

Parameters common to both sides: DUMPFILE, LOGFILE, DIRECTORY
Expdp-specific: TABLES, SAMPLE
Impdp-specific: TABLE_EXISTS_ACTION

TABLE_EXISTS_ACTION=REPLACE: the table is already present in the target, so it is replaced with the new data.



Expdp parfile:

userid=system/password@source
dumpfile=sample_expdb.dmp
logfile=sample_expdb.log
directory=dump_location
tables=user.sample
SAMPLE=40

Impdp parfile:

userid=system/password@target
dumpfile=sample_expdb.dmp
logfile=sample_impdb.log
directory=dump_location
table_exists_action=replace