Archive for December, 2011|Monthly archive page

“There is always next year”

In DBA on December 27, 2011 at 16:40

Database administration has been heavily influenced by two main trends in recent years:

1. Enterprise Manager Grid/Cloud Control
2. The automation features introduced with Oracle 10g and especially with Oracle 11g

The job of the modern DBA now resembles that of an analytical manager more than that of a database operator. Many companies now hire DBAs who are not just technicians but data and database professionals with a strong sense for analytical thinking and logical intuition.

Chris Foot wrote an excellent article last year on The Future of Database Tuning and Database Administration. As Chris says, “The new breed of top tuners will be the administrators who focus on how to use the toolsets and interpret their output.” The times of drilling down into X$ tables and V_$ views are over. The DBAs who use Grid Control/Cloud Control most efficiently and properly enable most (if not all) of the automated database features are the ones achieving the best results in the field.

Let me quote Chris again: “… reading SQL traces and statistics dumps will be a thing of the past.” The Advisors, the AWR and the ADDM reports reveal far more at a glance than we were ever able to comprehend after reading the output of complex dictionary queries, Statspack or trace files.

I am not underestimating the skill and knowledge required to understand mutexes, private redo strands and Bloom filters, but with a product as vast as the Oracle database, one should decide very carefully how and where to concentrate one's skills and time.

The DBA profession has several subfields, and different trends prevail from time to time, but one skill remains the most wanted and badly needed: performance tuning. It is also the most difficult one to learn and master.

Wonder why Grid Control and database automation are changing the DBA profession?

– Using Grid/Cloud Control, the packs it comes with, and all the integrated advisors and recommendation reports can reduce resolution time drastically: depending on whether Grid Control is used, the time a DBA spends on an issue might vary from minutes to hours (if not days). I think EM is still an underestimated tool, mostly because companies cannot decide on the proper investments. Most managers cannot grasp that the Grid Control packs can produce a significant ROI in their database department. Often, they just can’t get it. Even if you give it to them in a box named “It”.

– Automation within the database is where Oracle is heading. The best results with databases are achieved by benefiting from all the automated features Oracle offers. Automating the Oracle database is something overlooked and underestimated, mostly by the old generation of DBAs, who are often suspicious about everything. Let me quote Shunryu Suzuki/Steve Jobs: “In the beginner’s mind there are many possibilities, in the expert’s mind there are few.”

However, with all those new features comes a flock of bugs, and it looks like the fixes cannot catch up with the new versions and patches. Keeping all these new features in sync with the database kernel is not at all easy. As Chris says, “our role will be how to utilize the ever-increasing number of features provided by the database to solve business problems”, and I would add: how to solve the business problems caused by the bugs these new features introduce. Catch-22, right?

According to Oracle 2020: A Glimpse Into the Future of Database Management, during this year 2011, “Oracle’s Larry Ellison finances the Worldwide Internet Identification Database, requiring non-anonymous access and reducing cybercrime. Ellison receives the Nobel Peace Prize for his humanitarian efforts.” I am almost certain that no one takes the stuff written there seriously, but there is always next year 🙂

On 4 undocumented DataPump parameters

In DBA, Oracle utilities on December 20, 2011 at 18:14

DataPump is probably the best free Oracle utility/tool ever developed. With the exception of SQL*Plus, of course, which is naturally excluded from the comparison.

Just like SELECT is the most common command written/typed by DBAs and developers. Guess which one comes next after SELECT? You might not want to believe it, but it seems to be EXIT 🙂

Now, 11g came with several new parameters: COMPRESSION, ENCRYPTION, TRANSPORTABLE, NETWORK_LINK, PARTITION_OPTIONS, DATA_OPTIONS, REUSE_DUMPFILES, REMAP_TABLE, etc., which are probably well known, but here are a few undocumented ones you can keep as an ace up your sleeve.


You can use the parameter METRICS=Y to include additional logging information in the log file about the number of objects processed and the time it took to process them.

Note that the METRICS parameter does not create any additional trace files. Instead, the extra details are logged in the log file of the Export Data Pump or Import Data Pump job.

Here is an example:

[oracle@zurich Desktop]$ expdp \'/ as sysdba\' DUMPFILE=data_pump_dir:sysman_%U.dat SCHEMAS=sysman_apm JOB_NAME=sysman_pump METRICS=Y

Export: Release - Production on Tue Dec 20 10:46:08 2011

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYS"."SYSMAN_PUMP":  "/******** AS SYSDBA" DUMPFILE=data_pump_dir:sysman_%U.dat SCHEMAS=sysman_apm JOB_NAME=sysman_pump METRICS=Y 
Estimate in progress using BLOCKS method...
Total estimation using BLOCKS method: 320 KB
Processing object type SCHEMA_EXPORT/USER
     Completed 1 USER objects in 0 seconds
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
     Completed 1 DEFAULT_ROLE objects in 2 seconds
     Completed 1 TABLESPACE_QUOTA objects in 8 seconds
     Completed 1 PROCACT_SCHEMA objects in 27 seconds
. . exported "SYSMAN_APM"."PATCH_RUN"                        0 KB       0 rows
Master table "SYS"."SYSMAN_PUMP" successfully loaded/unloaded
Dump file set for SYS.SYSMAN_PUMP is:
Job "SYS"."SYSMAN_PUMP" successfully completed at 10:49:19


In some situations, the undocumented parameter ACCESS_METHOD can be used to enforce a specific method to unload or load the data.

If the parameter is not specified, then Data Pump will automatically choose the best method to load or unload the data.

The parameter can only be specified when the Data Pump job is initially started; it cannot be specified when the job is restarted.

If the parameter is specified, the method of loading or unloading the data is enforced on all tables that need to be loaded or unloaded with the job.

Enforcing a specific method may result in slower overall performance of the Data Pump job.

Here is an example:

expdp \'/ as sysdba\'  ... ACCESS_METHOD=DIRECT_PATH  
expdp \'/ as sysdba\'  ... ACCESS_METHOD=EXTERNAL_TABLE 

impdp \'/ as sysdba\'  ... ACCESS_METHOD=DIRECT_PATH  
impdp \'/ as sysdba\'  ... ACCESS_METHOD=EXTERNAL_TABLE 


Tracing can be enabled by specifying a 7-digit hexadecimal mask in the TRACE parameter of expdp or impdp. The first three digits enable tracing for a specific DataPump component, while the last four digits are usually 0300. Any leading zeros can be omitted, and the value specified for the TRACE parameter is not case sensitive. TRACE does not add anything to the output of DataPump; instead, it creates additional trace files.

Here is a summary of the Data Pump trace levels:

  10300 SHDW: To trace the Shadow process 
  20300 KUPV: To trace Fixed table
  40300 'div': To trace Process services
  80300 KUPM: To trace Master Control Process 
 100300 KUPF: To trace File Manager
 200300 KUPC: To trace Queue services
 400300 KUPW: To trace Worker process(es)         
 800300 KUPD: To trace Data Package
1000300 META: To trace Metadata Package
1FF0300 'all' To trace all components, full tracing
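Since each component occupies its own bits of the mask (with the common 0300 suffix), levels for several components can be combined by OR-ing their values. Here is a small shell sketch, using the documented KUPM and KUPW levels from the table above, that computes a combined mask for tracing the Master Control and Worker processes together:

```shell
# Combine the Master Control (KUPM) and Worker (KUPW) trace levels.
# The common 0x0300 suffix overlaps, so a bitwise OR is safe.
KUPM=0x80300
KUPW=0x400300
COMBINED=$(( KUPM | KUPW ))

# Print the value in the hexadecimal form the TRACE parameter expects.
printf 'TRACE=%X\n' "$COMBINED"
# prints: TRACE=480300
```

The resulting value 480300 can then be passed as TRACE=480300 to expdp or impdp, the same way 1FF0300 is used in the full-tracing example below.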

As you can see, in order to trace all Data Pump components, level 1FF0300 can be specified. Here is an example:

impdp \'/ as sysdba\' NETWORK_LINK=edw_p SCHEMAS=CLOUD PARALLEL=8 JOB_NAME=net_pump EXCLUDE=statistics TRACE=1FF0300 
SHDW:     Version = 9999
KUPP:10:58:22.050: Input trace/debug flags: 01FF0300 = 33489664
KUPP:10:58:22.050: Current trace/debug flags: 01FF0300 = 33489664
SHDW:10:58:22.050: Current user = SYS
SHDW:10:58:22.050: Current schema = SYS
SHDW:10:58:22.050: Current language = AMERICAN_AMERICA.AL32UTF8
SHDW:10:58:22.052: Current session address = 0000000077F7AFA0
SHDW:10:58:22.052: *** OPEN call ***
SHDW:10:58:22.052:     operation       = IMPORT
SHDW:10:58:22.052:     job_mode        = schema
SHDW:10:58:22.052:     remote_link     = edw_p
SHDW:10:58:22.052:     version         = 
SHDW:10:58:22.052:     compression     = 2
KUPV:10:58:22.052: Open request for job: SYS.NET_PUMP
KUPV:10:58:22.058: Master Table create statement: CREATE TABLE "SYS"."SYSMAN_PUMP" (process_order NUMBER, duplicate NUMBER, dump_fileid NUMBER, dump_position NUMBER, dump_length NUMBER, dump_orig_length NUMBER 

The three types of trace files can be found in $ADR_HOME/trace:
– Data Pump Master Control Process trace files have the format: SID_dm_process_id.trc
– Data Pump Worker Process trace files have the format: SID_dw_process_id.trc
– Data Pump Shadow Process trace files have the format: SID_ora_process_id.trc
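To gather all three kinds of Data Pump trace files after a traced run, a simple glob over the trace directory works. This is just a sketch: the directory path and the ORCL SID are placeholders for your own environment.

```shell
# Hypothetical ADR trace directory; adjust the path and SID for your system.
TRACE_DIR="${ADR_HOME:-/u01/app/oracle/diag/rdbms/orcl/ORCL}/trace"

# Master Control (_dm_), Worker (_dw_) and Shadow (_ora_) process traces.
for pattern in '*_dm_*.trc' '*_dw_*.trc' '*_ora_*.trc'; do
    ls "$TRACE_DIR"/$pattern 2>/dev/null
done
```

Note that the Shadow process pattern (SID_ora_*.trc) also matches ordinary foreground session trace files, so on a busy system you may want to narrow it down by the process id reported in the Data Pump log.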


If the undocumented DataPump parameter KEEP_MASTER=Y is used, then the Master Table is not deleted after the job completes. Dropping the master table does not lead to any data dictionary corruption, regardless of whether it is dropped automatically at job completion or manually afterwards.

You might use this parameter if you create transportable tablespaces where the source and the destination are both ASM based.

Here is an example:

expdp \'/ as sysdba\' directory=DUMPDIR DUMPFILE=skat_full.dmp LOGFILE=skat_full.log FULL=Y KEEP_MASTER=Y

One last note: more than 50 DataPump-related bugs have been fixed in Oracle Database 11g Release 2. Check document 1348303.1: List of Bug Fixes by Problem Type, available on the My Oracle Support Web site.