Archive for the ‘Oracle utilities’ Category

Enhanced Diagnosability of Oracle Database: What is an Attention Log?

In Database tuning, Databases, DBA, Oracle database, Oracle utilities on March 1, 2021 at 07:18

A correct diagnosis is three-fourths the remedy – Mahatma Gandhi

How true this statement is in the database world as well! Often it is more difficult to diagnose an issue than to fix it. In the past we had to comb through the alert.log file for information that needed attention; this has now been simplified by a single attention.log file.

In the latest Oracle release, diagnosability of database issues is enhanced through a new attention log, as well as classification of information written to database trace files. The new attention log is written in a structured format (XML or JSON) that is much easier to process or interpret and only contains information that requires attention from an administrator.

The attention log is an externally modifiable file that contains information about critical and highly visible database events. We can use the attention log to quickly access information about critical events that need action.

So, where is the location of the new log file introduced by Oracle Database 21c? Notice that V$DIAG_INFO has a new entry called Attention log:
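A quick way to check is to query V$DIAG_INFO directly (a sketch; matching the entry name loosely in case of case differences, and the resulting path will of course differ per environment):

```sql
-- 21c: V$DIAG_INFO exposes the attention log location
SELECT name, value
  FROM v$diag_info
 WHERE LOWER(name) LIKE 'attention%';
```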

The attention log contains a pre-determined, translatable series of messages, with one message for each event. There is one attention log for each database instance. There are two ways to view the contents of the attention log.

1. Open the attention.log file that can be found under $ORACLE_BASE/diag/rdbms/database_name/instance_id/log directory. Note that the documentation does not reflect the correct location of the attention.log file.

2. Query the V$DIAG_ALERT_EXT view using the required filters:

The filtering is done through message_level. A message_level of 1 corresponds to critical errors that need immediate action.
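A minimal query along these lines (the column names are from V$DIAG_ALERT_EXT; adjust the filters to your needs):

```sql
-- Show only critical (level 1) messages, newest first
SELECT originating_timestamp, message_text
  FROM v$diag_alert_ext
 WHERE message_level = 1
 ORDER BY originating_timestamp DESC;
```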

You can filter V$DIAG_ALERT_EXT on either MESSAGE_TYPE or MESSAGE_LEVEL. As you can see in the example above, I am filtering for only critical errors. Here is what the types and levels mean by number:


MESSAGE_TYPE:
1: UNKNOWN – essentially the NULL type
2: INCIDENT_ERROR – error due to some internal or unexpected reason; must be reported to Oracle Support
3: ERROR – an error of some kind has occurred; not really meaningful
4: WARNING – an action occurred or a condition was discovered that should be reviewed and may require action
5: NOTIFICATION – reports a normal action or event, for example “logon completed”
6: TRACE – output of a diagnostic trace


MESSAGE_LEVEL:
1: CRITICAL – critical errors
2: SEVERE – severe errors
8: IMPORTANT – important message
16: NORMAL – normal message

The attention log has the following attributes, not all documented yet:

– Attention ID: a unique identifier for the message
– Attention type: Error, Warning, Notification or Additional information
– Message text
– Notification
– Error
– Urgency: Immediate, Soon, Deferrable or Information
– Scope: Session, Process, PDB Instance, CDB Instance, CDB Cluster, PDB or CDB
– Info
– Cause
– Action
– Class: Clusterware Admin, CDB admin or PDB admin
– Time

Note that an attention log entry seldom contains all attributes.
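To give an idea of the structured format, a JSON entry in attention.log looks roughly like this (the values below are invented for illustration):

```json
{
  "NOTIFICATION" : "Starting ORACLE instance (normal) (OS id: 12345)",
  "URGENCY"      : "INFO",
  "INFO"         : "Additional Information Not Available",
  "CAUSE"        : "A command to startup the instance was executed",
  "ACTION"       : "Check alert log for progress and completion",
  "CLASS"        : "CDB Instance / CDB ADMINISTRATOR / AL-1000",
  "TIME"         : "2021-03-01T07:18:00.000+00:00"
}
```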

Six new features of Data Pump in Oracle Database 20c

In Cloud, Data, Databases, DBA, New features, Oracle database, Oracle utilities on June 8, 2020 at 10:55

In Oracle version 20c, the Data Pump export job accepts 54 different parameters, and the import job can take up to 59. Most of us who used the old-fashioned exp/imp tools until Oracle 10g (and even afterwards) probably knew the syntax by heart and never needed to copy-paste the command from a text file. With more than 50 parameters in 20c, I am sure it is a different story.

There are 6 new features of Data Pump in Oracle 20c and here are their short descriptions:

1. Oracle Data Pump checksums support for cloud migrations

The new CHECKSUM parameter adds a checksum to the dump file so that its validity can be verified later. Oracle Data Pump can be, and mostly is, used for migrating data from on-premises Oracle databases into the Oracle Public Cloud. You can use the checksum to help confirm that a file is still valid after a transfer to or from the Oracle Cloud object store. Checksums are also useful for dump files saved to on-premises locations, to confirm that the files have suffered no accidental or malicious changes.

Here is an example of how to use the new CHECKSUM parameter – I am exporting my own schema. The parameter accepts 2 values:

YES – Oracle calculates a file checksum for each dump file in the export dump file set
NO – Oracle does not calculate file checksums

The checksum is calculated at the end of the Data Pump job. Notice that below I am not specifying the other new parameter, CHECKSUM_ALGORITHM, so its default value SHA256 is used. The other supported algorithms are CRC32, SHA384 and SHA512.

SQL> !expdp julian DIRECTORY=data_pump_dir DUMPFILE=jmd.dmp CHECKSUM=YES

Export: Release - Production on Sat May 30 07:20:55 2020

Copyright (c) 1982, 2020, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 20c EE High Perf Release - Production
Starting "JULIAN"."SYS_EXPORT_SCHEMA_01":  julian/********@//localhost:1521/ DIRECTORY=data_pump_dir DUMPFILE=jmd.dmp CHECKSUM=YES
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
. . exported "JULIAN"."BLOGS"                            9.983 MB   73991 rows
. . exported "JULIAN"."SALES"                            14.38 MB  295240 rows
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . exported "JULIAN"."RDBMS_BRANDS"                     7.726 KB      12 rows
. . exported "JULIAN"."CLIENTS"                          6.007 KB       2 rows
. . exported "JULIAN"."T"                                5.476 KB       1 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set.
Master table "JULIAN"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
Generating checksums for dump file set
Dump file set for JULIAN.SYS_EXPORT_SCHEMA_01 is:
Job "JULIAN"."SYS_EXPORT_SCHEMA_01" successfully completed at Sat May 30 07:24:59 2020 elapsed 0 00:04:01

You probably noticed the ORA-39173 warning at the end – it appears because I did not specify an encryption password while exporting encrypted data. It is just a warning, not a real error.

It goes without saying that COMPATIBLE must be set to at least 20.0.

2. Oracle Data Pump exports from Autonomous Database

Starting with Oracle Database 20c, Data Pump can perform exports from Oracle Autonomous Database into dump files in a cloud object store. Thus, now we can easily migrate data out from an Oracle Autonomous Database and import it into another location.

For how to create object store credentials check either Oracle Cloud : Autonomous Database (ADW or ATP) – Load Data from an Object Store (DBMS_CLOUD) by Tim Hall or DBMS_CLOUD Package – A Reference Guide by Christian Antognini.

What is new in 20c is the CREDENTIAL parameter, which enables the export to write dump files directly to an object store. The CREDENTIAL parameter changes how expdp interprets the text string in DUMPFILE. If the CREDENTIAL parameter is not specified, then DUMPFILE can specify an optional directory object and file name in directory-object-name:file-name format. If the CREDENTIAL parameter is used, then it provides the authentication and authorization for expdp to write to one or more object storage URIs specified by DUMPFILE.

Here is an example assuming that we have already created the credential_name JMD_OBJ_STORE_CRED:

expdp julian DUMPFILE= 
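The DUMPFILE value above got cut off; a full command would look roughly like this (a sketch only: the region, namespace and bucket in the URI are placeholders for your own object store):

```shell
# Credential JMD_OBJ_STORE_CRED must already exist;
# the object storage URI components are placeholders
expdp julian SCHEMAS=julian \
  CREDENTIAL=JMD_OBJ_STORE_CRED \
  DUMPFILE=https://objectstorage.us-ashburn-1.oraclecloud.com/n/mynamespace/b/mybucket/o/jmd.dmp
```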

Need more information? The Oracle Cloud Infrastructure User Guide has “only” 5952 pages. Last week it was 5919. Enjoy reading it 🙂 And by the time you read this blog post, there will be more than 5952 pages… I do not think I have ever seen a longer user guide in our industry!

3. Oracle Data Pump includes and excludes in the same operation

Oracle Data Pump 20c can include and exclude objects in the same export or import operation, meaning that commands can now contain both the INCLUDE and the EXCLUDE parameter. By enabling greater specificity about what is being migrated, this enhancement makes it easier to migrate to Oracle Cloud or to another on-premises Oracle database.

Note: when you include both parameters in a command, Oracle Data Pump processes the INCLUDE parameter first, including all objects identified by it, and then processes the EXCLUDE parameters, eliminating the excluded objects from the included set. Here is an example of including only 2 tables (SALES and CLIENTS) while excluding all indexes except the PKs (real use case: you want to enable Oracle Automatic Indexing in ADB, and while importing the data you need to drop all indexes except the PKs):
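A sketch of such an import (a parameter file avoids shell quoting headaches; the PK indexes survive EXCLUDE=INDEX because they are created as part of their constraints rather than as standalone indexes):

```shell
# Parameter file keeps the quoting simple
cat > incl_excl.par <<'EOF'
DIRECTORY=data_pump_dir
DUMPFILE=jmd.dmp
INCLUDE=TABLE:"IN ('SALES','CLIENTS')"
EXCLUDE=INDEX
EOF
impdp julian PARFILE=incl_excl.par
```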


4. Oracle Data Pump parallelizes transportable tablespace metadata operations

Starting with Oracle Database 20c, Data Pump improves transportable tablespace metadata operations with parallelism: transportable tablespace exports and imports can now run with a degree of parallelism greater than 1.

Parallelism higher than 1 improves TTS export and import performance, especially when there are really a lot of database objects in the data files, including tables, indexes, partitions and subpartitions. We will probably see the real benefit when exporting packaged application schemas from SAP, EBS, etc.

5. Oracle Data Pump provides optional index compression

In Oracle Database 20c, Data Pump supports optional index compression on import, including into Autonomous Database, through a new TRANSFORM parameter clause, INDEX_COMPRESSION_CLAUSE. Thus, you can control whether index compression is performed during import.

If NONE is specified in the INDEX_COMPRESSION_CLAUSE, then the index compression clause is omitted (and the index is given the default compression of the tablespace). If you do use compression, then Oracle recommends COMPRESS ADVANCED LOW. Indexes are created with the specified compression.

If the index compression clause is more than one word, then it must be contained in single or double quotation marks. Also, your operating system can require you to enclose the clause in escape characters, such as the backslash character. Here is an example of how to use the INDEX_COMPRESSION_CLAUSE:
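For example, compressing all indexes with advanced low compression on import (escaping the quotes for the shell, as noted above):

```shell
# The multi-word clause is quoted, and the quotes escaped for the shell
impdp julian DIRECTORY=data_pump_dir DUMPFILE=jmd.dmp \
  TRANSFORM=INDEX_COMPRESSION_CLAUSE:\"COMPRESS ADVANCED LOW\"
```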


Specifying this transform changes the type of compression for all indexes in the job.

6. Oracle Data Pump resumes transportable tablespace jobs

Starting with Oracle Database 20c, Data Pump can resume transportable tablespace export and import jobs that were stopped due to errors or other problems. This capacity to resume stopped jobs saves time and makes the system more available.

Transportable jobs are now restartable at or near the point of failure.

To restart the job JMD_EXP_20C, first perform:

expdp system/password attach=jmd_exp_20c

Then restart the job with:

Export> continue_client

For an import job, the syntax is exactly the same.

ORAchk replaces Exachk and RACcheck

In Database tuning, DBA, Oracle utilities on June 21, 2014 at 12:48

The new Oracle tool ORAchk includes the Exachk and RACcheck functionality and replaces the older tools. It is no longer just a database tool: it also runs health checks on E-Business Suite Financials, the Enterprise Manager Cloud Control repository, GoldenGate and Sun systems.


Going down memory lane, have a look at how Statspack compared with BSTAT/ESTAT. Well, those days are long over. DBAs used to have their own scripts and run their own health checks, but some special scripts can still shed more light on system health.

One such example is Tanel Poder’s Exadata Snapper.

For details on the ORAchk tool, how to download it, what’s new in ORAchk, read MOS Note 1268927.2. If you do not have access to MOS, here are 2 blog posts that describe the tool quite well:

1. ORAchk Health Checks for the Oracle Stack (including Solaris) by Gerry Haskins

2. ORAchk Health Checks For The Oracle Stack by Levi Pereira

Here is how System Health Score is calculated:


I would recommend glancing first at the ORAchk Known Issues before you start using the tool: MOS 1509746.1.

Note that ORAchk’s requirements are as follows:

Supported Platforms:
Linux x86-64* (Enterprise Linux, RedHat and SuSE 9, SuSE 10 and SuSE 11)
Oracle Solaris SPARC (Solaris 10 and 11)
Oracle Solaris x86-64 (Solaris 10 and 11)
AIX **

* 32-bit platforms not supported, no planned support for Linux Itanium
**Requires BASH Shell 3.2 or higher to be installed

Supported Oracle Releases:

Still, no Windows…

And do not aim at a 100% score; you will not get it easily. Points get deducted if you have more than one instance on the server:

WARNING OS: Multiple RDBMS instances discovered, observe database consolidation best practices

According to the tool, there is a “risk for shared resource contention leading for instability, performance degradation or scalability problems”. Then it says: “if the database consolidation best practices have already been reviewed and observed then this message can be ignored”.

Not to mention that you might run the report twice in a row and get a different score.

But there is more: ORAchk Collection Manager is a companion application to ORAchk, RACcheck and Exachk. When you have lots of systems, auditing them with ORAchk might be a tedious piece of work. ORAchk has long had the ability to upload the results of its audit checks into a database automatically at run time.

However, it was up to us to create a custom front end to that data for reporting purposes and trend analysis. Now, with ORAchk Collection Manager, Oracle provides an Application Express application that can be used as a dashboard to track ORAchk, RACcheck and Exachk collection data in one easy-to-use interface.

Download the ORAchk Collection Manager and go through the ORAchk Collection Manager Users Guide for more details.

Online reorganization enhancements in Oracle 12c

In Database tuning, DBA, Oracle database, Oracle utilities on March 27, 2014 at 18:38

The Online Reorganization and Redefinition feature in Oracle Database offers DBAs significant flexibility not just to modify the physical attributes of a table but also to defragment and compress the table while allowing online access to the database.


A 5TB database can have 1TB of white space, in fact it can even have 4TB for all I have seen! Reclaiming the wasted space is important from both storage and performance perspective.

Before discussing the new 12c features related to online database reorganization, I will show you how easy it is to reorganize a table in 12c while all operations on the table are still allowed. I have a tablespace called EM_DATA with just one datafile, em_data_01.dbf, and a table called EM_OBJECTS with more than 11 million rows. The file is almost 4GB in size. There are no other segments within the EM_DATA tablespace.
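In essence, the whole online reorganization is a single call to DBMS_REDEFINITION.REDEF_TABLE (the schema name JULIAN here is an assumption for illustration; the compression clause is one of the documented values):

```sql
-- One-step online redefinition: compress EM_OBJECTS in place
-- (schema name is assumed for illustration)
BEGIN
  DBMS_REDEFINITION.REDEF_TABLE(
    uname                  => 'JULIAN',
    tname                  => 'EM_OBJECTS',
    table_compression_type => 'ROW STORE COMPRESS ADVANCED');
END;
/
```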


As you can see, the EM_OBJECTS segment was 1774MB before the reorganization, and after compressing it online we managed to shrink it to 644MB. The data in the table was not artificially ordered in any way whatsoever. Moreover, after the reorganization I was able to resize the file down to 640MB and thus release more than 3GB of storage space at the filesystem level!

Clearly, in 12c the complete procedure is very simple: there is no need to create the interim table explicitly, and if an error occurs during the execution, the interim table is dropped and the REDEF_TABLE procedure simply has to be re-executed.

Now, here are all new features related to online redefinition in Oracle 12c:

1. The REDEF_TABLE procedure enables you to perform online redefinition on the table’s storage properties in a single step when you want to change the following properties:

– Tablespace changes, including a tablespace change for a table, partition, index, or LOB columns
– Compression type changes (as shown in the example above), including a compression type change for a table, partition, index key, or LOB columns
– For LOB columns, a change to SECUREFILE or BASICFILE storage

Check out these examples.

2. In 11g, it was only possible to add or drop columns when redefining tables online. In 12c, we can now also set a column unused online. Check Doc ID 1120704.1 for details.
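The syntax is the familiar SET UNUSED with the ONLINE keyword added (table and column names are placeholders):

```sql
-- 12c: mark a column unused without blocking concurrent DML
ALTER TABLE em_objects SET UNUSED (old_col) ONLINE;
```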

3. New in Oracle Database 12c: drop index online, alter index visible/invisible, alter index unusable online, and drop constraint online. I would suggest the master note “An overview of online redefinition of tables” Doc ID 1357825.1 for detailed examples and known bugs.
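For completeness, the new online DDL operations look like this (all object names are placeholders):

```sql
-- New online/visibility DDL in 12c
DROP INDEX em_objects_ix ONLINE;
ALTER INDEX em_objects_ix2 INVISIBLE;
ALTER INDEX em_objects_ix3 UNUSABLE ONLINE;
ALTER TABLE em_objects DROP CONSTRAINT em_objects_fk ONLINE;
```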

4. Since 10gR2, it has been possible to reorganize a single partition with online reorganization. With 12c, redefining multiple partitions in a single redefinition session makes it possible to reduce the overall completion time.

5. Redefinition of tables that have Virtual Private Database (VPD) policies defined on them helps us eliminate downtime for redefining these tables. On top of that, there is a new function called EXPAND_SQL_TEXT which has been added to the DBMS_UTILITY package. It makes seeing what SQL is actually executed pretty easy.

6. The performance of SYNC_INTERIM_TABLE with Materialized View Log processing has been significantly improved in 12c. I really hope not to see any more ORA-32416.

7. There is also improved resilience of FINISH_REDEF_TABLE with better lock management: you can avoid user cancellations or indefinite waits by using the new dml_lock_timeout parameter when running DBMS_REDEFINITION.FINISH_REDEF_TABLE procedure. Amit Bansal gives more details here.
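A sketch of the call (the interim table name is a placeholder; dml_lock_timeout is in seconds):

```sql
-- Wait up to 120 seconds for DML locks instead of indefinitely
BEGIN
  DBMS_REDEFINITION.FINISH_REDEF_TABLE(
    uname            => 'JULIAN',
    orig_table       => 'EM_OBJECTS',
    int_table        => 'EM_OBJECTS_INT',
    dml_lock_timeout => 120);
END;
/
```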

Finally, here is all about DBMS_REDEFINITION in 12c from Dan Morgan’s library.


Online database reorganization is of paramount importance, especially nowadays, when downtime is hard to get and data grows rather exponentially. Removing unnecessary data is something few people in companies care about, although often more than 50% of database data is nothing but junk data. Buying and adding more storage seems to be the easier option, although reorganized and defragmented databases have better performance and require less memory and CPU.

Additional resources:
Understanding, detecting, and eliminating harmful database fragmentation by Craig A. Shallahamer
Reorganizing Tables in Oracle – is it worth the effort? by Uwe Hesse
Oracle table reorganization: Online or Offline by FJ Franken
Oracle Tablespace Reorg to reclaim space by AppsDBA
Using DBMS_REDEFINITION package to reorganize tables online by Marco Vigelini
Reclaiming Unused Space in Datafiles by Tim Hall

On 4 undocumented DataPump parameters

In DBA, Oracle utilities on December 20, 2011 at 18:14

DataPump is probably the best free Oracle utility/tool ever developed. With the exception of SQL*Plus of course which is somehow naturally excluded.

It is like SELECT being the most common command written/typed by DBAs and Developers. Guess which is the next one after SELECT? You might not want to believe it but it seems to be EXIT 🙂

Now, 11g came with several new parameters: COMPRESSION, ENCRYPTION, TRANSPORTABLE, NETWORK_LINK, PARTITION_OPTIONS, DATA_OPTIONS, REUSE_DUMPFILES, REMAP_TABLE, etc., which are probably well known, but here are a few undocumented ones you can keep as an ace up your sleeve.


You can use the parameter METRICS=Y to include additional logging information in the log file: the number of objects processed and the time it took to process them.

Note that the METRICS parameter does not create any additional trace files. Instead, the extra details are logged in the logfile of the Export Data Pump or Import Data Pump job.

Here is an example:

[oracle@zurich Desktop]$ expdp \'/ as sysdba\' DUMPFILE=data_pump_dir:sysman_%U.dat SCHEMAS=sysman_apm JOB_NAME=sysman_pump METRICS=Y

Export: Release - Production on Tue Dec 20 10:46:08 2011

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYS"."SYSMAN_PUMP":  "/******** AS SYSDBA" DUMPFILE=data_pump_dir:sysman_%U.dat SCHEMAS=sysman_apm JOB_NAME=sysman_pump METRICS=Y 
Estimate in progress using BLOCKS method...
Total estimation using BLOCKS method: 320 KB
Processing object type SCHEMA_EXPORT/USER
     Completed 1 USER objects in 0 seconds
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
     Completed 1 DEFAULT_ROLE objects in 2 seconds
     Completed 1 TABLESPACE_QUOTA objects in 8 seconds
     Completed 1 PROCACT_SCHEMA objects in 27 seconds
. . exported "SYSMAN_APM"."PATCH_RUN"                        0 KB       0 rows
Master table "SYS"."SYSMAN_PUMP" successfully loaded/unloaded
Dump file set for SYS.SYSMAN_PUMP is:
Job "SYS"."SYSMAN_PUMP" successfully completed at 10:49:19


In some situations, the undocumented parameter ACCESS_METHOD can be used to enforce a specific method to unload or load the data.

If the parameter is not specified, then Data Pump will automatically choose the best method to load or unload the data.

The parameter can only be specified when the Data Pump job is initially started, i.e., it cannot be specified when the job is restarted.

If the parameter is specified, the method of loading or unloading the data is enforced on all tables that need to be loaded or unloaded with the job.

Enforcing a specific method may result in slower performance of the overall Data Pump job.

Here is an example:

expdp \'/ as sysdba\'  ... ACCESS_METHOD=DIRECT_PATH  
expdp \'/ as sysdba\'  ... ACCESS_METHOD=EXTERNAL_TABLE 

impdp \'/ as sysdba\'  ... ACCESS_METHOD=DIRECT_PATH  
impdp \'/ as sysdba\'  ... ACCESS_METHOD=EXTERNAL_TABLE 


Tracing can be enabled by specifying a 7-digit hexadecimal mask in the TRACE parameter of expdp or impdp. The first three digits enable tracing for a specific DataPump component, while the last four digits are usually 0300. Any leading zeros can be omitted, and the value specified for the TRACE parameter is not case sensitive. TRACE does not add anything to the output of DataPump; it creates additional trace files.

Here is a summary of the Data Pump trace levels:

  10300 SHDW: To trace the Shadow process 
  20300 KUPV: To trace Fixed table
  40300 'div' To trace Process services
  80300 KUPM: To trace Master Control Process 
 100300 KUPF: To trace File Manager
 200300 KUPC: To trace Queue services
 400300 KUPW: To trace Worker process(es)         
 800300 KUPD: To trace Data Package
1000300 META: To trace Metadata Package
1FF0300 'all' To trace all components, full tracing

As you see, in order to trace all Data Pump components, level 1FF0300 can be specified; here is an example:

impdp \'/ as sysdba\' NETWORK_LINK=edw_p SCHEMAS=CLOUD PARALLEL=8 JOB_NAME=net_pump EXCLUDE=statistics TRACE=1FF0300 
SHDW:     Version = 9999
KUPP:10:58:22.050: Input trace/debug flags: 01FF0300 = 33489664
KUPP:10:58:22.050: Current trace/debug flags: 01FF0300 = 33489664
SHDW:10:58:22.050: Current user = SYS
SHDW:10:58:22.050: Current schema = SYS
SHDW:10:58:22.050: Current language = AMERICAN_AMERICA.AL32UTF8
SHDW:10:58:22.052: Current session address = 0000000077F7AFA0
SHDW:10:58:22.052: *** OPEN call ***
SHDW:10:58:22.052:     operation       = IMPORT
SHDW:10:58:22.052:     job_mode        = schema
SHDW:10:58:22.052:     remote_link     = edw_p
SHDW:10:58:22.052:     version         = 
SHDW:10:58:22.052:     compression     = 2
KUPV:10:58:22.052: Open request for job: SYS.NET_PUMP
KUPV:10:58:22.058: Master Table create statement: CREATE TABLE "SYS"."SYSMAN_PUMP" (process_order NUMBER, duplicate NUMBER, dump_fileid NUMBER, dump_position NUMBER, dump_length NUMBER, dump_orig_length NUMBER 

The 3 types of trace files can be found in $ADR_HOME/trace:
– Data Pump Master Control Processes files have format: SID_dm_process_id.trc
– Data Pump Worker Process trace files have format: SID_dw_process_id.trc
– Data Pump Shadow Process trace files have format: SID_ora_process_id.trc


If the undocumented DataPump parameter KEEP_MASTER=Y is used, then the master table is not deleted after the job completes. Dropping the master table does not lead to any data dictionary corruption, and if you keep the master table after the job completes, dropping it later will not cause any corruption either.

You might use this parameter if you create transportable tablespaces where the source and the destination are both ASM based.

Here is an example:

expdp \'/ as sysdba\' directory=DUMPDIR DUMPFILE=skat_full.dmp LOGFILE=skat_full.log FULL=Y KEEP_MASTER=Y
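Since no JOB_NAME was given, the retained master table gets the default name of the job (SYS_EXPORT_FULL_01 for a full export started by SYS) and can then be inspected like any table; note that the master table layout itself is version-specific and undocumented, so this is a sketch only:

```sql
-- Peek into the retained master table (name and columns are
-- version-specific; not guaranteed syntax)
SELECT object_type, COUNT(*)
  FROM sys.sys_export_full_01
 WHERE process_order > 0
 GROUP BY object_type;
```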

One last note: more than 50 bugs related to DataPump have been fixed in Oracle Database 11g Release 2. Check document 1348303.1: List of Bug Fixes by Problem Type, available on the My Oracle Support website.

Oracle OpenWorld 2011

In Database tuning, DBA, Grid Control, OOW, Oracle database, Oracle utilities, Personal on September 14, 2011 at 05:45

I am Speaking at Oracle OpenWorld 2011 and I hope you will join me there!

Welcome to my session: Tuning Toolkit for Advanced DBAs: Learn from the Past, and Prepare for the Future. The presentation will be on Tuesday at 03:30 PM, Moscone South – 104.

There will be over 80 Oracle ACEs and ACE Directors who will speak at OOW 2011!

The content catalog shows 75 sessions on database performance and scalability out of the 306 database conference sessions. Just a reminder that 4-5 years ago the database conference sessions numbered about 100.

The database stream has now been divided into the following 11 substreams:

– Cloud Consolidation: 39 sessions
– Data Warehousing: 32 sessions
– Database Manageability: 55 sessions
– Database Platforms: 31 sessions
– Database Security: 23 sessions
– High Availability: 47 sessions
– MySQL: 29 sessions
– Oracle Exadata: 57 sessions
– Performance and Scalability: 75 sessions
– Specialty Data and Search: 15 sessions
– Storage Management: 24 sessions

One of my favorite topics, Oracle Enterprise Manager (part of the Cross Stream track), will be covered in 161 conference sessions. I have said it before, I have had it in several final conference slides, I will say it now: Oracle Enterprise Manager Grid Control is probably the best feature/tool/utility in Oracle after 9i.

And for the DBAs, as this is a DBA blog, here is a complete list of all sessions having the word “DBA” in the title:

14642: AWR and ASH in 3-D: Performance Analysis Tools No DBA Has Seen Before
20880: Becoming a Rock Star MySQL DBA
9218: DBA Fusion: An Introduction to Oracle Enterprise Manager 11g Architecture and Installation
13781: Day in the Life of a DBA: Cloud-Ready Management Solutions for Any IT Shop
14641: Extreme Database Administration: New Features for Expert DBAs
13081: Mastering Oracle Data Pump: Technical Deep Dive into Performance/Internals for Hands-on DBAs
15143: MySQL for Oracle DBAs, or How to Speak MySQL for Beginners
13446: Oracle Database Vault: DBA Best Practices
8046: Oracle Exadata Management for Oracle DBAs
14365: Oracle WebLogic Server Management for DBAs: Cross-Tier Visibility from JVM to Database
6681: Trends in Database Administration and the Changing Role of the DBA
14644: Tuning Toolkit for Advanced DBAs: Learn from the Past, and Prepare for the Future
8044: What If Kramer Were Your DBA and Seinfeld Tuned Your Database?
28900: Launching the IOUG Virtualization SIG: 360 Degrees of Virtualization for Oracle DBAs

If you wonder why so many people attend OOW, here are the high-level benefits of attending Oracle OpenWorld according to Oracle:

– Participate in sessions and hands-on workshops led by the world’s foremost IT experts
– Find out firsthand how to streamline upgrades and deployments
– Attend sessions in the all-new Server and Storage Systems stream
– Dig deep into application deployment, scaling, upgrading, and best practices
– Meet with your industry and technology peers
– Share notes in special interest groups, and product and industry sessions
– See hundreds of partners and customers as they present product demos and solutions in three Exhibition Halls

And here is a picture from LJE’s keynote on Cloud Computing I took last year from my seat:

P.S. I cannot find a single session on Oracle bugs, and we all know that they play a major role in the database world. For sure, many DBAs would be very interested in a presentation called “11gR2 top 10 bugs”.

Data Pump Legacy Mode

In DBA, Oracle database, Oracle utilities on May 19, 2011 at 16:42

During the past 10 years we have seen 3 new versions of Oracle: 9i, 10g and 11g. I am often asked, and I do ask as well, which is the best version or which are the best features of Oracle. Ask 10 people and I bet there will be 10 different opinions. Here is how I see it:

Real Application Clusters and Data Guard in 9i
Grid Control and Data Pump in 10g
Automatic SQL Tuning and Edition-Based Redefinition in 11g

There have been so many new, very good and important features but let us say that they all are part of the progress of mankind 🙂 And the recent boom of IT. Perhaps?

In 11gR2, Oracle decided to introduce Data Pump Legacy Mode in order to provide backward compatibility for scripts and parameter files written for the original export/import tools. The documentation briefly says: “This feature enables users to continue using original Export and Import scripts with Data Pump Export and Import. Development time is reduced as new scripts do not have to be created.”

If you check AristaDBA’s Oracle Blog and read the first three paragraphs you will probably see what it is about. I totally agree with all written there.

And I don’t really get it: why do we have Data Pump Legacy Mode? Do lazy Oracle clients badly need it? Was exp/imp so good that we must have it back? How about a RAC Legacy Mode if I want to use the gc_files_to_lock or freeze_db_for_fast_instance_recovery parameters? There really was a parameter called freeze_db_for_fast_instance_recovery, I am not making this up. Run this one:

SELECT kspponm, 
DECODE(ksppoflg, 1,'Obsolete', 2, 'Underscored') as "Status"
FROM x$ksppo
WHERE kspponm like '%freeze%'
ORDER BY kspponm;

Or how about if I want to run DBMS_SCHEDULER in DBMS_JOB mode 🙂 And if you want to run DB2 in Oracle mode?

However, the Data Pump Legacy Mode feature exists, and once you use any conventional export/import parameter you put Data Pump into legacy mode. Just one parameter is enough. In Oracle terminology, Data Pump enters legacy mode once it determines that a parameter unique to original Export or Import is present. Of course, some parameters, like buffer, commit, compress, object_consistent, recordlength, resumable, statistics, compile, filesize, tts_owners, streams_configuration and streams_instantiation, are simply ignored.

Now, here is a paradox or trivial documentation error: deferred_segment_creation is set to TRUE by default in 11gR2. Have a look at the documentation:

It is true: Data Pump does not ignore tables without segments, regardless of the mode it runs in. Note additionally that legacy mode can be activated by adding even just one ignorable parameter (like buffer, for example):

 SQL> create table TEST (c1 number, c2 varchar2(10), c3 date) storage (initial 5M);

 Table created.

 SQL> select bytes, blocks, segment_type, segment_name from dba_segments where segment_name='TEST';

 no rows selected

 C:\>expdp julian/password dumpfile=data_pump_dir:abc_%U.dat schemas=julian include=TABLE:in('TEST') logfile=abc.log buffer=1024

 Export: Release - Production on Wed May 11 07:53:45 2011

 Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
 Connected to: Oracle Database 11g Enterprise Edition Release - 64bit Production
 With the Partitioning, OLAP, Data Mining and Real Application Testing options
  Legacy Mode Active due to the following parameters:
  Legacy Mode Parameter: "buffer=1024" Location: Command Line, ignored.
  Legacy Mode has set reuse_dumpfiles=true parameter.
 Starting "JULIAN"."SYS_EXPORT_SCHEMA_01":  julian/******** dumpfile=data_pump_dir:abc_%U.dat schemas=julian 
  include=TABLE:in('TEST') logfile=abc.log reuse_dumpfiles=true 
 Estimate in progress using BLOCKS method...
 Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
 Total estimation using BLOCKS method: 0 KB
 Processing object type SCHEMA_EXPORT/TABLE/TABLE
 . . exported "JULIAN"."TEST"                                 0 KB       0 rows
 Master table "JULIAN"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
 Dump file set for JULIAN.SYS_EXPORT_SCHEMA_01 is:
 Job "JULIAN"."SYS_EXPORT_SCHEMA_01" successfully completed at 07:54:07

But it is not true that you cannot export tables with no segments. Here is the proof:

 C:\>exp userid=julian/abc file=C:\Oracle\admin\JMD\dpdump\a.dmp tables=julian.test

 Export: Release - Production on Wed May 11 08:31:08 2011

 Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

 Connected to: Oracle Database 11g Enterprise Edition Release - 64bit Production
 With the Partitioning, OLAP, Data Mining and Real Application Testing options
 Export done in AL32UTF8 character set and AL16UTF16 NCHAR character set

 About to export specified tables via Conventional Path ...
 . . exporting table                           TEST          0 rows exported
 Export terminated successfully without warnings.

But forget about this Legacy Mode. Do not use it. Pretend it does not exist 🙂

Let us look now at some new features of Data Pump. Note that in 11gR2 a couple of restrictions have already been removed:

– The restriction that in TABLES mode all tables had to reside in the same schema.
– The restriction that only one object (table or partition) could be specified if wildcards were used as part of the object name.

For RAC users: Data Pump worker processes can now be distributed across RAC instances, a subset of Oracle RAC instances, or confined to the instance where the Data Pump job is started. It is also now possible to start Data Pump jobs and run them on different Oracle RAC instances simultaneously.

For XMLType column users: there is a new DISABLE_APPEND_HINT value on the DATA_OPTIONS parameter which disables the APPEND hint while loading the data object.

For EBR users: specific editions can be exported and imported. Using the SOURCE_EDITION parameter upon export and the import TARGET_EDITION parameter you can export a certain edition of the database and import into a specific edition of the database.

P.S. I think Data Pump is among the Oracle tools with the fewest bugs! Although it is hard to say nowadays what is a bug, what is a feature, and what is a duplicate enhancement… There are about 100 types of bugs.

P.P.S. I attended the 11g beta OCP exam as soon as it was released. There were more than 300 questions, and not a single one on Data Pump! And 11g came with so many new features.