
On Return On Investment and New Database Management Features in Oracle Enterprise Manager Grid Control 11g

In DBA on May 24, 2011 at 23:00

“A picture is worth a thousand words”. This picture is worth a thousand dollars. Times ten. And times ten again (at least).

Or even more, depending on the number of packs you use and the number of CPUs in the database stack. Still, it is worth having! It is the best thing Oracle has ever developed. There is RAC, there is Exadata, but these products do not compare with Grid Control.

Forrester Research was able to estimate costs, benefits, and risks for a composite sample organization. Their findings indicate that the organization should achieve a three-year risk-adjusted NPV of $1,427,320, a very favorable 100% ROI, and a payback period of 16 months. This is very close to what I have experienced, witnessed and calculated myself.

Being one of the first Grid Control customers in the world, and as far as I know the first company to start using it in production in Europe, I must say that DBA productivity can grow up to 5 times, with DBAs managing more than 150 databases each, which is far above the industry average (picture below is from Forrester):

I recommend that you also check the 9 papers under featured resources.

Of course, Grid Control is free to use if no packs are used, but the packs are where the functionality is. Here are all the database packs for Database Management:

There is a very interesting thread about why (some) companies do not use EM Grid Control. I agree that it is time for Oracle to make an official ROI Calculator, as blogs and forums might not be convincing enough for the decision makers. Who usually are not the DBAs 🙂

11g comes with several new improvements. For details, check the following article by Scott McNeil: What’s new in Oracle Enterprise Manager 11g for Oracle Database 11g Release 2 and Exadata V2 or directly the Oracle Documentation.

A very recent Q&A blog post by Porus Homi Havewala has attracted my attention. It is about his favourite questions and answers from EM. Very good one indeed!

If you are still with me, I will switch to the new features. The ones I find most important are related to performance. Two of the new SQL Tuning features are alternate execution plan identification in SQL Tuning Advisor, based on real-time or historical performance, and parallel SQL Profile recommendation. On the first one: I really like the way Oracle lets us compare the profile testing results, have a look. A picture is worth a thousand words:

The second new feature is the parallel SQL Profile recommendation in SQL Tuning Advisor, which enables parallel execution of a long-running query. I could not find any example of this on the Internet, so here is one:

I think it is an excellent improvement from Oracle!

But here is an Oracle internal statement:

SELECT MAX(NVL(sev.severity_code, 0)) severity_code, COUNT(*)
FROM   mgmt_current_severity sev,
       (SELECT t.target_guid, m.metric_guid
        FROM   mgmt_metrics m, mgmt_targets t
        WHERE  (m.metric_name, m.metric_column) IN
               (('ha_rac_intrconn_traffic', 'interconnect_rate'),
                ('ha_rac_intrconn_type', 'if_type'))
        AND    m.type_meta_ver = t.type_meta_ver
        AND    t.target_guid IN
               (SELECT member_target_guid
                FROM   mgmt_target_memberships
                WHERE  composite_target_type = :1
                AND    composite_target_name = :2)
        AND    (m.category_prop_1 = t.category_prop_1 OR m.category_prop_1 = ' ')
        AND    (m.category_prop_2 = t.category_prop_2 OR m.category_prop_2 = ' ')
        AND    (m.category_prop_3 = t.category_prop_3 OR m.category_prop_3 = ' ')
        AND    (m.category_prop_4 = t.category_prop_4 OR m.category_prop_4 = ' ')
        AND    (m.category_prop_5 = t.category_prop_5 OR m.category_prop_5 = ' ')) t1
WHERE  sev.target_guid = t1.target_guid
AND    sev.metric_guid = t1.metric_guid
AND    sev.severity_code >= 20;

Interestingly enough, Oracle seems to struggle with its own execution plans, in its own tables. Should we really follow the recommendations and add indexes to SYSMAN's tables? I guess just the SQL Profile 🙂

But look at the comparison of the execution plans. The Cost will drop from 624006 to 19 and the I/O Cost will drop from 623607 to 18:

Bottom line: it is safe to enable automatic SQL tuning and let Oracle profile the worst statements in the database. I would recommend running the following:
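One way to do that, as a sketch only (assuming you are on 11.2, where the DBMS_AUTO_SQLTUNE package took over the corresponding DBMS_SQLTUNE routines), is to let the Automatic SQL Tuning task accept SQL Profiles on its own:

 BEGIN
   -- let the nightly Automatic SQL Tuning task implement its SQL Profile recommendations
   DBMS_AUTO_SQLTUNE.SET_AUTO_TUNING_TASK_PARAMETER(
     parameter => 'ACCEPT_SQL_PROFILES',
     value     => 'TRUE');
 END;
 /

You can verify the setting afterwards in DBA_ADVISOR_PARAMETERS for the SYS_AUTO_SQL_TUNING_TASK task.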

And before hitting enter make sure you have already licensed the Tuning and Diagnostics packs!

Data Pump Legacy Mode

In DBA, Oracle database, Oracle utilities on May 19, 2011 at 16:42

During the past 10 years we have seen 3 new versions of Oracle: 9i, 10g and 11g. I am often asked, and I often ask as well, which is the best version, or which are the best features of Oracle. Ask 10 people and I bet you will get 10 different opinions. Here is how I see it:

Real Application Clusters and Data Guard in 9i
Grid Control and Data Pump in 10g
Automatic SQL Tuning and Edition-Based Redefinition in 11g

There have been so many new, very good and important features but let us say that they all are part of the progress of mankind 🙂 And the recent boom of IT. Perhaps?

In 11gR2, Oracle decided to introduce Data Pump Legacy Mode in order to provide backward compatibility for scripts and parameter files used with the original export/import utilities. The documentation briefly says: “This feature enables users to continue using original Export and Import scripts with Data Pump Export and Import. Development time is reduced as new scripts do not have to be created.”

If you check AristaDBA’s Oracle Blog and read the first three paragraphs you will probably see what it is about. I totally agree with all written there.

And I don’t really get it, why do we have Data Pump Legacy Mode? Lazy Oracle clients badly need it? Exp/imp was so, so good that we must have it back? How about RAC Legacy Mode if I want to use gc_files_to_lock or freeze_db_for_fast_instance_recovery parameters? There really was a parameter called freeze_db_for_fast_instance_recovery, I am not making this up. Run this one:

SELECT kspponm, 
DECODE(ksppoflg, 1,'Obsolete', 2, 'Underscored') as "Status"
FROM x$ksppo
WHERE kspponm like '%freeze%'
ORDER BY kspponm;

Or how about if I want to run DBMS_SCHEDULER in DBMS_JOB mode 🙂 And if you want to run DB2 in Oracle mode?

However, the Data Pump Legacy Mode feature exists, and once you use any conventional export/import parameter you enable Data Pump in legacy mode. Just one parameter is enough. In Oracle terminology, Data Pump enters legacy mode once it determines that a parameter unique to original Export or Import is present. Of course, some parameters like buffer, commit, compress, object_consistent, recordlength, resumable, statistics, compile, filesize, tts_owners, streams_configuration and streams_instantiation are simply ignored.

Now, here is a paradox or trivial documentation error: deferred_segment_creation is set to TRUE by default in 11gR2. Have a look at the documentation:

It is true. Data Pump does not ignore tables without segments, regardless of the mode it runs in. Note additionally that legacy mode can be activated by adding even one ignorable parameter (like buffer, for example):

 SQL> create table TEST (c1 number, c2 varchar2(10), c3 date) storage (initial 5M);

 Table created.

 SQL> select bytes, blocks, segment_type, segment_name from dba_segments where segment_name='TEST';

 no rows selected

 C:\>expdp julian/password dumpfile=data_pump_dir:abc_%U.dat schemas=julian include=TABLE:in('TEST') logfile=abc.log buffer=1024

 Export: Release - Production on Wed May 11 07:53:45 2011

 Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
 Connected to: Oracle Database 11g Enterprise Edition Release - 64bit Production
 With the Partitioning, OLAP, Data Mining and Real Application Testing options
  Legacy Mode Active due to the following parameters:
  Legacy Mode Parameter: "buffer=1024" Location: Command Line, ignored.
  Legacy Mode has set reuse_dumpfiles=true parameter.
 Starting "JULIAN"."SYS_EXPORT_SCHEMA_01":  julian/******** dumpfile=data_pump_dir:abc_%U.dat schemas=julian 
  include=TABLE:in('TEST') logfile=abc.log reuse_dumpfiles=true 
 Estimate in progress using BLOCKS method...
 Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
 Total estimation using BLOCKS method: 0 KB
 Processing object type SCHEMA_EXPORT/TABLE/TABLE
 . . exported "JULIAN"."TEST"                                 0 KB       0 rows
 Master table "JULIAN"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
 Dump file set for JULIAN.SYS_EXPORT_SCHEMA_01 is:
 Job "JULIAN"."SYS_EXPORT_SCHEMA_01" successfully completed at 07:54:07

But it is not true that you cannot export tables with no segments. Here is the proof:

 C:\>exp userid=julian/abc file=C:\Oracle\admin\JMD\dpdump\a.dmp tables=julian.test

 Export: Release - Production on Wed May 11 08:31:08 2011

 Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

 Connected to: Oracle Database 11g Enterprise Edition Release - 64bit Production
 With the Partitioning, OLAP, Data Mining and Real Application Testing options
 Export done in AL32UTF8 character set and AL16UTF16 NCHAR character set

 About to export specified tables via Conventional Path ...
 . . exporting table                           TEST          0 rows exported
 Export terminated successfully without warnings.

But forget about this Legacy Mode. Do not use it. Pretend it does not exist 🙂

Let us look now at some new features of Data Pump. Note that in 11gR2 a couple of restrictions have already been removed:

– The restriction that in TABLES mode all tables had to reside in the same schema.
– The restriction that only one object (table or partition) could be specified if wildcards were used as part of the object name.

For RAC users: Data Pump worker processes can now be distributed across RAC instances, a subset of Oracle RAC instances, or confined to the instance where the Data Pump job is started. It is also now possible to start Data Pump jobs and run them on different Oracle RAC instances simultaneously.
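On the command line this looks roughly as follows (a sketch, with a hypothetical directory object dp_dir and a hypothetical service dp_serv mapped to a subset of the instances; CLUSTER defaults to Y in 11.2, and SERVICE_NAME restricts the worker processes to the instances offering that service):

 expdp julian/password schemas=julian dumpfile=dp_dir:julian_%U.dmp logfile=julian.log cluster=y service_name=dp_serv

Setting CLUSTER=N instead confines the whole job to the instance where it was started.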

For XMLType column users: there is a new DISABLE_APPEND_HINT value on the DATA_OPTIONS parameter which disables the APPEND hint while loading the data object.
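For illustration (a hypothetical import, with made-up object and file names), the value is simply passed on the import command line:

 impdp julian/password tables=julian.xml_tab dumpfile=dp_dir:xml_tab.dmp data_options=disable_append_hint

This is useful when other sessions may be accessing the target tables during the load, since the APPEND hint would otherwise lock them.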

For EBR users: specific editions can be exported and imported. Using the SOURCE_EDITION parameter upon export and the import TARGET_EDITION parameter you can export a certain edition of the database and import into a specific edition of the database.
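A sketch of an edition-aware export and import (the edition names here are just examples; the target edition must already exist in the destination database):

 expdp julian/password schemas=julian source_edition=ora$base dumpfile=dp_dir:base.dmp
 impdp julian/password schemas=julian target_edition=release_2 dumpfile=dp_dir:base.dmp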

P.S. I think Data Pump is among the Oracle tools with the fewest bugs! Although it is hard to say nowadays what is a bug, what is a feature, what is a duplicate enhancement… There are about 100 types of bugs.

P.P.S. I attended the 11g beta OCP exam as soon as it was released. There were more than 300 questions, and not a single one on Data Pump! And 11g came with so many new features.

Auditing vs. Performance in the Oracle Database

In Bugs, Database tuning, DBA, Oracle database, Security and auditing on May 12, 2011 at 02:24

You show this (part of an) AWR report to the DBA and he proudly concludes: disable auditing, it is killing the performance! And thus, quite often, Oracle database auditing is not enabled. Here are the 3 main reasons why auditing is not turned on:

– DBAs, developers, etc. are not familiar with this feature: For those who are not familiar with auditing, I suggest Tim Hall’s and Pete Finnigan’s articles: Auditing in Oracle 10g Release 2 and Introduction to Simple Oracle Auditing.
– Security is not considered important and necessary: For those who do not consider auditing important, I wish them luck. They are anyway not interested in what I have to say.
– Performance is being hit by enabling auditing: For the ones having issues with performance when auditing is enabled, here is something.

There are 3 major reasons why performance suffers when auditing is enabled: too much is being audited, AUD$ still hangs in the SYSTEM tablespace and surprise, surprise: the Oracle bugs.

1. Too much is being audited. If it is a new database, spend some time with all parties involved deciding what to audit. The reality, however, is usually something like this: go-live day is getting closer, oh, do we have auditing enabled? How do you enable it, can you give me the command please? It should not go like that. First decide on the value of audit_trail, then audit only what is really needed; do not audit repetitive commands that generate too many inserts into the AUD$ table, for it can grow very fast indeed.
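A minimal sketch of such a deliberate setup (the audited statements below are just examples, pick your own):

 -- audit_trail is a static parameter, so a restart is needed after this:
 ALTER SYSTEM SET audit_trail=DB SCOPE=SPFILE;
 -- audit logons and a few sensitive actions, not everything:
 AUDIT SESSION;
 AUDIT ALTER TABLE, DROP ANY TABLE BY ACCESS;
 -- and turn off high-volume statement auditing if it crept in:
 NOAUDIT SELECT TABLE;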

Have a look at this thread from Pete Finnigan’s site called Performance Impact of Auditing.

If it is an existing database, check first what is being audited. To find out system audited stuff run the following:

select * from DBA_PRIV_AUDIT_OPTS
union all
select * from DBA_STMT_AUDIT_OPTS;

Note that the difference between the two views above is very small and I have not found yet a place with explanation about the difference. The documentation says that DBA_STMT_AUDIT_OPTS describes the current system auditing options across the system and by user while DBA_PRIV_AUDIT_OPTS describes the current system privileges being audited across the system and by user. Puzzled? Me too.



For the auditing options on all objects, check DBA_OBJ_AUDIT_OPTS.

Check the Oracle 11gR2 documentation for the Recommended Audit Settings.

2. AUD$ still hangs in the SYSTEM tablespace. The SYSTEM tablespace might be fragmented. Starting with 11gR2, Oracle supports moving the AUD$ table out of the SYSTEM tablespace. But first, noaudit your policy or stop the auditing.

If using 11.2.0 and above, follow the documentation instructions.
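In 11.2 the supported way is the DBMS_AUDIT_MGMT package; a sketch, assuming a tablespace called AUDIT_DATA already exists:

 BEGIN
   -- move the standard audit trail (AUD$) out of SYSTEM
   DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION(
     audit_trail_type           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
     audit_trail_location_value => 'AUDIT_DATA');
 END;
 /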

If still running 11.1.0 or below, here is how to do it:

create tablespace AUDIT_DATA datafile ...;
create table AUDX tablespace AUDIT_DATA as select * from AUD$;
rename AUD$ to AUD$$;
rename AUDX to AUD$;
create index i_aud2 on AUD$(sessionid, ses$tid) tablespace AUDIT_DATA;

Remember to purge the records on a regular basis. Do not just delete them; move them to a centralized auditing repository. Use the new DBMS_AUDIT_MGMT package. Check Tim Hall's instructions on how to purge audit trail records. In urgent cases, it is safe to run truncate table AUD$;
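A sketch of such a purge (assuming the records older than the archive timestamp have already been copied to the central repository; the 90-day retention is just an example):

 BEGIN
   -- one-time initialization of the cleanup infrastructure
   DBMS_AUDIT_MGMT.INIT_CLEANUP(
     audit_trail_type         => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
     default_cleanup_interval => 24);
   -- mark everything older than 90 days as already archived
   DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(
     audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
     last_archive_time => SYSTIMESTAMP - INTERVAL '90' DAY);
   -- delete only the archived records
   DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
     audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
     use_last_arch_timestamp => TRUE);
 END;
 /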

If you use FGA, remember to also move FGA_LOG$ away from the SYSTEM tablespace:

BEGIN
  DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION(
    audit_trail_type           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_FGA_STD,
    audit_trail_location_value => 'AUDIT_DATA');
END;
/

And I would recommend this article by Martin Widlake: Why is my SYSTEM Tablespace so Big?! The 3rd SYSTEM table, after SYS.AUD$ and SYS.HISTGRM$, that I have seen grow big is SYS.IDL_UB1$.

3. Oracle bugs. If you enable auditing you might get several bugs for free; most of the old ones should be fixed by now, don't know about the new ones 🙂

20 years ago, Bug 52646: audit trail degrades performances too much was solved in Oracle 6:

Well, you still have many similar ones. As of today, all these bugs have an empty “Fixed in Product Version” field. And this is not the complete list!

Bug 10347785: huge version count for insert into sys.aud$ causing library cache: mutex x/hang
Bug 504968: ora-600[17051] and database crash when writing to audit table
Bug 11901734: dbms_audit_mgmt audit trail cleanup cannot keep up with aud$ volume
Bug 8236755: ora-00060 occurs while updating aud$ table.
Bug 6159102: export session spins when logging off in repeated update sys.aud$ statements
Bug 6334058: deadlock with ora-00060 while updating sys.aud$ and auditing on select is enable
Bug 4405301: too many entries in aud$ when sequence is queried and audit is by session
Bug 1257564: noaudit does not turn off auditing of database (very nice indeed!)

I wish Oracle would create one bug called “Performance issues with the AUD$ table” and solve it, so that finally no one complains about the performance of one simple table, which in my opinion is not even a real dictionary table.

Question: In RAC, sessions from both/all nodes are being audited. Will AUD$ hot blocks “stuffed” with new data be ping-ponged via the interconnect?

Patching the database

In DBA, Oracle database on May 5, 2011 at 05:27

Oracle have already announced the following important changes to Oracle Database Patch Sets: “Starting with the first patch set for Oracle Database 11g Release 2, Oracle Database patch sets are full installations of the Oracle Database software. In past releases, Oracle Database patch sets consisted of a set of files that replaced files in an existing Oracle home. Beginning with Oracle Database 11g Release 2, patch sets are full installations that replace existing installations.” Details can be found in Metalink Note 1189783.1.

That is good and fine, as long as we are not being bombarded with dozens of additional patches on top of these installations. It's another story whether this is really a patch set of 11gR2 or rather an 11gR3.

As of yesterday, May 4th 2011, there are 324 different patches available for Linux x86-64 on top of it. For Solaris SPARC 64-bit, there are only 214.

Looking at the bug fixes you might have mixed feelings: while we are all used to seeing fixes for the never-ending ORA-00600 and ORA-07445 errors, I really cannot understand how we still face issues with wrongly returned data. That should be such a basic requirement for a database, right?

10190421 Possible wrong results in query using connect by clause with parrallelism
10353054 Wrong result due to loss of semi joins

Looking at the readme files those “wrongly returned data” errors might even be seen coming sequentially 🙂

As DBAs, when reading emails for example, we all probably have a mild smile on our faces when seeing words like “data base” and “table space”. Well, probably not written by a database person? Or someone with a DB2 background. But how about when you find them in Oracle's own documents:

By the way, did you notice the typo above, the word parrallelism? How about using a spell checker? Here is a screenshot from the readme:

Another issue I am amazed by: if a junior developer does not handle his/her exceptions in the PL/SQL code, (s)he might be excused: (s)he is learning, we all make mistakes. So does Oracle:

And now back to the real issue: you install the patch set and wonder if you have the latest release of Oracle. Well, yes, but minus the patches that were created by “the bug department”. Of course you don't want to install hundreds of additional patches. Here is what Oracle recommends:

Just install patch 12311357, which is the GI PSU, as it includes the PSU, which in turn includes CPUAPR2011. If using RAC and ASM, add 10157506 (RAC GI BUNDLE1) and the recommended bundle patches up to BP10 (BP10 is patch 12773458; the patch must be applied on the Oracle home of the Oracle Database as well as Oracle Grid Infrastructure). For a second opinion on this, click here.

Important for Siebel users: if you run Siebel on top of Oracle RAC, check Metalink Note 1297080.1 Certification Siebel Applications With 11gr2 Rac [ID 1297080.1]. It says at the end: “It was also clarified by product marketing that basically oracle database version is the base version that is supported and implies that any or higher version is included”. That is: apply all the patches mentioned above for Siebel applications as well!

It looks like database bug fixes are released rather frequently; the day before yesterday, 3 new fixes came out for the Linux platform:

Hint: try opatch auto!