Monday, June 24, 2024

How to export and import a large table with BasicFiles LOBs faster

1. Divide the source table into chunks during the export. The number of chunks depends on the capabilities of the hardware (you might want to start with 10); this gives you the opportunity to run that many parallel processes during the import.

Below is an example export script. It assigns each row to a chunk by taking the remainder of the row's block number divided by the fixed number of chunks:

#!/bin/bash

case $1 in
start)

 # exit if the number of chunks was not passed as the second argument
 chunks=$2 ; [ -z "$2" ] && exit 1

 # mod() yields remainders 0..chunks-1, so loop only up to chunks-1
 for i in $(eval echo {00..$((chunks-1))}) ; do
   expdp user_id/pass job_name=expdp_table_name_${i} tables=owner.table_name query=table_name:\"where mod\(dbms_rowid.rowid_block_number\(rowid\), ${chunks}\) = ${i}\" directory=directory_for_dumpfile dumpfile=table_name_chunk_${i}.dmp logfile=directory_for_logfile:expdp_table_name_chunk_${i}.log &
   echo $i
 done

;;
esac
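
If you want to sanity-check how evenly the mod-based split distributes the rows before launching the export, you can count them per chunk first. Below is a minimal sketch using SQL*Plus; the credentials, owner.table_name, and the chunk count of 10 are placeholders taken from the script above. Block-based chunks are rarely perfectly even, but they should be close enough for parallel export:

#!/bin/bash

# count rows per chunk to verify the mod-based distribution (10 chunks assumed)
sqlplus -s user_id/pass <<'EOF'
select mod(dbms_rowid.rowid_block_number(rowid), 10) as chunk,
       count(*) as rows_in_chunk
  from owner.table_name
 group by mod(dbms_rowid.rowid_block_number(rowid), 10)
 order by chunk;
EOF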


2. Transfer the dump files to the destination site and run the import.
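
The transfer itself can be scripted the same way. A minimal sketch, assuming passwordless SSH to the destination host and the same directory layout on both sides (dest_host and the paths are placeholders):

#!/bin/bash

# copy every chunk dump file to the destination host, all transfers in parallel
for f in /path/to/directory_for_dumpfile/table_name_chunk_*.dmp ; do
  scp "$f" dest_host:/path/to/directory_for_dumpfile/ &
done
wait

The import script follows the same pattern as the export: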


#!/bin/bash

case $1 in
start)

 # use the same number of chunks as during the export
 chunks=$2 ; [ -z "$2" ] && exit 1

 # the same remainder range as in the export: 0..chunks-1
 for i in $(eval echo {00..$((chunks-1))}) ; do
   impdp user_id/pass job_name=impdp_table_name_${i} directory=directory_for_dumpfile dumpfile=table_name_chunk_${i}.dmp logfile=directory_for_logfile:impdp_table_name_chunk_${i}.log remap_table=table_name:table_name_temp remap_schema=old_schema:new_schema content=data_only data_options=disable_append_hint &
   echo $i
 done

;;
esac

The parameters content=data_only and data_options=disable_append_hint, together with running all of the impdp processes in the background, do the whole job. content=data_only assumes the target table already exists and loads only its rows; data_options=disable_append_hint matters because a direct-path (append) load takes an exclusive table lock, so the parallel imports would otherwise serialize, while conventional inserts can run concurrently.
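
While the background jobs are running, their progress can be watched from the Data Pump dictionary views; a minimal sketch (only the credentials are a placeholder):

#!/bin/bash

# show all currently executing Data Pump jobs
sqlplus -s user_id/pass <<'EOF'
select owner_name, job_name, operation, state
  from dba_datapump_jobs
 where state = 'EXECUTING';
EOF

You can also attach to any individual job with impdp user_id/pass attach=impdp_table_name_00 and issue the interactive STATUS command.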

P.S. You don't have to do the first step at all; it's possible to import directly over a database link using the network_link parameter of impdp.
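
A minimal sketch of such an import, assuming a database link named source_db_link already exists on the target and points at the source (the link name is a placeholder; no dump file is written, so the directory is only needed for the log file):

impdp user_id/pass network_link=source_db_link tables=owner.table_name remap_table=table_name:table_name_temp remap_schema=old_schema:new_schema directory=directory_for_logfile logfile=impdp_table_name_over_link.log

The query= chunking from step 1 can be combined with network_link in the same way (together with data_options=disable_append_hint) if you still want several parallel jobs.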