
Archive for the ‘SQL’ Category

Reading Data in Oracle Database 12c

In Cloud, Consolidation, Database options, DBA, Security and auditing, SQL on December 1, 2014 at 18:02

1. For DBAs and Developers, the words READ and SELECT have for years been more or less synonyms. In 12c, is there now any difference?

2. Before pluggable databases, selecting data from the SALES table, for instance, meant selecting data from a table called SALES in a certain SCHEMA within the database. But what if a table called SALES exists in several pluggable databases under the same schema name?

The aim of this blog post is to shed some light on these new concepts.


1. New READ privilege.

Until Oracle 12.1.0.2, the SELECT object privilege allowed users to perform the following two operations in addition to just reading data from the SALES table:

LOCK TABLE sales IN EXCLUSIVE MODE;
SELECT ... FROM sales FOR UPDATE;

These two commands enabled users to lock the rows of the SALES table.

The READ object privilege does not provide these additional privileges. For better security, grant users the READ object privilege if you want to restrict them to performing queries only.

In addition to the READ object privilege, you can grant users the READ ANY TABLE privilege to enable them to query any table in the database.

When a user who has been granted the READ object privilege wants to perform a query, the user still must use the SELECT statement. There is no accompanying READ SQL statement for the READ object privilege.

The GRANT ALL PRIVILEGES TO user SQL statement includes the READ ANY TABLE system privilege. The GRANT ALL PRIVILEGES ON object TO user statement includes the READ object privilege.

If you want the user only to be able to query tables, views, materialized views, or synonyms, then grant the READ object privilege. For example:

GRANT READ ON SALES TO julian;
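
A quick sanity check, as a sketch, assuming julian has only the READ grant from above:

SELECT COUNT(*) FROM sales;          -- plain queries still work
SELECT * FROM sales FOR UPDATE;      -- expected to fail with ORA-01031: insufficient privileges
LOCK TABLE sales IN EXCLUSIVE MODE;  -- expected to fail as well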


2. Querying a table owned by a common user across all PDBs.

Consider the following scenario:

– The container database has several pluggable databases, for example, a separate PDB for each office location of the company.
– Each PDB has a SALES table that tracks the sales of that office, i.e., the SALES table in each PDB contains different sales information.
– The root container also has an empty SALES table.
– The SALES table in each container is owned by the same common user.

To run a query that returns all of the sales across the company, connect to each PDB as the common user and create a view with the following statement:

CREATE OR REPLACE VIEW sales AS SELECT * FROM sales.sales;

The common user that owns the view must be the same common user that owns the sales table in the root. After you run this statement in each PDB, the common user has a view named sales in each PDB.

With the root as the current container and the common user as the current user, run the following query with the CONTAINERS clause to return all of the sales in the sales table in all PDBs:

SELECT * FROM CONTAINERS(sales);

You can also query the view in specific containers. For example, the following SQL statement queries the view in the containers with a CON_ID of 3 and 4:

SELECT * FROM CONTAINERS(sales) WHERE CON_ID IN (3,4);
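
To find out which CON_ID corresponds to which PDB, you can query V$PDBS (or V$CONTAINERS) from the root; for example:

SELECT con_id, name, open_mode FROM v$pdbs ORDER BY con_id;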

3. Delegate.

Something else: starting with 12.1.0.2, when granting a role to a user, you can specify the WITH DELEGATE OPTION clause. The grantee can then do the following two things:

A) Grant the role to a program unit in the grantee’s schema
B) Revoke the role from a program unit in the grantee’s schema
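
For example, a minimal sketch with a hypothetical role HR_ADMIN and a hypothetical procedure REFRESH_HR in julian's schema:

GRANT hr_admin TO julian WITH DELEGATE OPTION;

-- julian can now attach and detach the role to/from program units in his own schema:
GRANT hr_admin TO PROCEDURE julian.refresh_hr;
REVOKE hr_admin FROM PROCEDURE julian.refresh_hr;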


Oracle in-database archiving and invisible columns in 12c

In Database tuning, DBA, Oracle database, Oracle internals, SQL on September 30, 2013 at 01:15

“The true mystery of the world is the visible, not the invisible.” Oscar Wilde


In-Database archiving is new to Oracle 12c. The idea behind it is to make rows invisible. This is very similar to the invisible column concept.

The two features, combined together, can make big data in the table look not that big to the application.

Check the demonstration from orafaq to see how it works. It is simple and rather trivial. Another example is given by Contract Oracle Limited.

What is really interesting is John Watson’s comment and example at the very end of the orafaq article. He shows the need for filter predicates that cannot be accomplished with access predicates. John suggests that “all indexes need to include the ora_archive_state column, perhaps appended to primary and unique key indexes and prepended to other indexes. If you don’t do this, you will find that many operations that could previously be completed without touching the table will require table scans”.

The AMIS Technology blog pays attention to the very same thing:

The Oracle documentation says the following: “In-Database Archiving enables you to archive rows within a table by marking them as inactive. These inactive rows are in the database and can be optimized using compression, but are not visible to an application. The data in these rows is available for compliance purposes if needed by setting a session parameter.

With In-Database Archiving you can store more data for a longer period of time within a single database, without compromising application performance. Archived data can be compressed to help improve backup performance, and updates to archived data can be deferred during application upgrades to improve the performance of upgrades.”

The AMIS Technology Blog says: Note the emphasis on ‘without compromising performance’. It would seem that such in line archived records are skipped in a smart way during full table scans and other records retrieval operations.

Yong Woo shows in his short note, 12c: In-Database Archival …good, but not good, an example of how a ‘TABLE ACCESS BY INDEX ROWID’ can be brought into the execution plan because of the new pseudocolumn.

What I would like to show is, in a way, also the opposite: that performance can be even better with Oracle in-database archiving. I have a table called CLIENTS and visibility is set to ACTIVE. Almost all rows of the table are “invisible”. Check how bytes and cost increase once I set the visibility to ALL. And of course, the full table scan!
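
For context, here is a minimal sketch of the setup assumed by the demo below (the table name is from my example; the predicate is hypothetical):

ALTER TABLE clients ROW ARCHIVAL;                    -- adds the hidden ORA_ARCHIVE_STATE column
UPDATE clients SET ora_archive_state = '1'           -- a non-zero state marks the row as archived
 WHERE status = 0;                                   -- hypothetical predicate for the inactive rows
COMMIT;
ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ACTIVE;  -- the default: archived rows are hidden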


SQL> select * from clients;

[screenshot: execution plan with row archival visibility = ACTIVE]


SQL> alter session set row archival visibility = all;

Session altered.

SQL> select * from clients;

[screenshot: execution plan with visibility = ALL, showing higher bytes and cost and a full table scan]

It is interesting to note that virtual columns can be made invisible:


SQL> alter table clients add (vip_status number 
     generated always as (power(status,status)) virtual);

Table altered.

SQL> alter table clients modify (vip_status invisible);

Table altered.
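
Once invisible, the column disappears from DESCRIBE and from SELECT *, but it can still be referenced by name. A quick check (SET COLINVISIBLE ON is the SQL*Plus switch that makes DESCRIBE list invisible columns):

SQL> select client_id, vip_status from clients where rownum = 1;  -- explicit reference still works

SQL> set colinvisible on
SQL> desc clients   -- vip_status is now listed, marked as INVISIBLE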

INVISIBLE columns are not supported in external tables, cluster tables, or temporary tables, and you cannot make a system-generated hidden column visible.

You cannot yet make a whole table invisible. If you start marking all columns of the table as invisible, at the very end you will get:

SQL> alter table clients modify (client_id invisible);
alter table clients modify (client_id invisible)
            *
ERROR at line 1:
ORA-54039: table must have at least one column that is not invisible

On a final note, I wonder what developers will do when they cannot insert a row into a table because a DBA has added an invisible column declared as NOT NULL without a default value 🙂

Pseudo cursors and invisible SQL

In DBA, Oracle database, Oracle internals, SQL on February 17, 2013 at 11:39

Do you ever wonder why Oracle Enterprise Manager and AWR reports show the SQL text for some SQL IDs as not available?

[screenshot: OEM showing “SQL Text Not Available”]

The DBA forums and discussion groups mention that it might be because the SQLs have already been flushed from the shared pool, or ask how you can get and confirm the SQL ID of a query without seeing its text.

The truth, however, is slightly different. These are Oracle internal, so-called pseudo cursors: structures providing direct access to database objects while avoiding the overhead of writing an explicit SELECT statement. Most of the time these structures directly access the data dictionary tables, CLOBs, and BLOBs. Sometimes even NCHAR and NVARCHAR2 columns.

This is the reason why these internal cursors show up in OEM and AWR with the legend “SQL Text Not Available”. There is just no SQL.

They are used internally within the database and cannot be described. At least not completely. However, here is what you can do as suggested by MOS 1298471.1:

create or replace view h$pseudo_cursor as
select pseudo_cursor, sql_id, obj_id hex_obj_id,
       obj# object_id, u.name owner, o.name object_name
from (select distinct
             kglnaobj pseudo_cursor, kglobt03 sql_id,
             -- the object id is encoded in hex between the 3rd and 4th underscore
             substr(kglnaobj,
                    instr(kglnaobj,'_',1,3)+1,
                    instr(kglnaobj,'_',1,4)-instr(kglnaobj,'_',1,3)-1) obj_id,
             -- 'Y' only when that substring is pure hex, i.e. safe to convert
             (case when replace(translate(substr(upper(kglnaobj),
                                                 instr(kglnaobj,'_',1,3)+1,
                                                 instr(kglnaobj,'_',1,4)
                                                 -instr(kglnaobj,'_',1,3)-1),
                                          '0123456789ABCDEF','................'),
                                '.') is null
                   then 'Y' else 'N' end) is_safe_to_compare
        from x$kglob) k,
     obj$ o, user$ u
where obj# = decode(is_safe_to_compare, 'Y', to_number(obj_id,'xxxxxxxxxx'), 0)
  and o.owner# = u.user#;

Now, we can try to find out more about our invisible SQL:

col hex_obj_id format a10
col pseudo_cursor format a30
col owner format a20
col object_name format a22
select * from h$pseudo_cursor where sql_id='grv54uyqkw2yc';

[screenshot: pseudo cursor query results]

The second query shows us the pseudo PL/SQL used by Oracle internally. Notice the missing value for SQL ID!

Now, here is a question: how do you tune pseudo cursors? If you run STA (SQL Tuning Advisor), the error message you get will be “The SQL has aged out of the cursor cache”. OEM will notify you with “The SQL statement is not available”:

[screenshot: OEM notification “The SQL statement is not available”]

So the bottom line is: if you see messages about SQL text not being available, it is most likely due to Oracle internally accessing LOBs or its own tables.

SQL Tuning Advisor Internals

In Database tuning, DBA, Oracle database, Oracle internals, SQL on January 27, 2013 at 19:07

Douglas Adams, the author of “The Hitchhiker’s Guide to the Galaxy” said: “A common mistake that people make when trying to design something completely foolproof is to underestimate the ingenuity of complete fools”.

I am talking here about Oracle’s Cost Based Optimizer.

[screenshot: alternative execution plan offered by the SQL Tuning Advisor]

The optimizer needs to make decisions about execution plans in a very short time: not all execution plans can be examined. Starting with Oracle 10g, Oracle allows the optimizer to run in tuning mode, where it can gather additional information and make recommendations about how specific statements can be tuned further.

The funny thing here is that we rely on the same component (the CBO) to improve the SQL statement that could not generate an optimal plan in the first place.

In this post, I would like to show something about Oracle 11gR2 that cannot be found either on Google or on Metalink. It is about tracing the SQL Tuning Advisor.

A very good review of what STA does was written by Tim Hall. Worth checking is also what Kerry Osborne writes about SQL Profiles. Cool stuff!

Christian Antognini’s article on SQL Profiles and how to trace them is probably the best written on that subject. Using the undocumented parameter _STN_TRACE, it is possible to trace what is happening during the SQL Tuning Advisor performance analysis. I find slightly amusing the choice of the name of the domain used for Oracle FAQ, but I guess this is not intentional.

However, if you check Jonathan Lewis’s article on SQL Profiles and the comments after it, you will notice that “_stn_trace does not seem to be available anymore in 11gR2”. That is visible from the _STN_TRACE documentation available on Oracle FAQ.

Starting with 11gR2, there is an event: trace[sql_manage.*] disk=low. The disk level can be set to one of these three values: low, medium, or highest. The ora_debug_table is no longer needed or used.

You can set the event either at session or system level. If using OEM, then set it at system level, just like this:

alter system set events 'trace[sql_manage.*] disk=highest';

Once you run the SQL Tuning Advisor, go to either user_dump_dest or diagnostic_dest/diag/rdbms/SID/trace. You will find a long text file containing all the analysis performed by STA.
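
To locate the exact trace file, and to switch the tracing off once you are done, something like this should do (V$DIAG_INFO is available from 11g onwards):

SQL> select value from v$diag_info where name = 'Default Trace File';

SQL> alter system set events 'trace[sql_manage.*] off';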

The screenshot on the top of this post shows you the alternative execution plan offered by the SQL Tuning Advisor, the one below shows you the recommendations:

[screenshot: SQL Tuning Advisor recommendations]

The trace file generated by enabling STA tracing at the highest level is way too big (20MB), but I am posting it here anyway.

That is twenty megabytes of text, fitting into 8670 pages, for what Oracle does with a single SQL statement!

[screenshot: the trace file opened as an 8670-page document]

Tuning Exadata with parallel profiles and SPM baselines

In Database tuning, DBA, Exadata, SQL on October 29, 2012 at 06:34

Harold Geneen used to say: “It is much more difficult to measure nonperformance than performance”.

Without automation, database management becomes increasingly difficult. An Exadata database can strongly benefit performance-wise from fully enabled Automatic SQL Tuning:

BEGIN
  DBMS_SQLTUNE.set_tuning_task_parameter(
    task_name => 'SYS_AUTO_SQL_TUNING_TASK',
    parameter => 'ACCEPT_SQL_PROFILES',
    value     => 'TRUE');
END;
/

I will explain why.

The SQL management base (SMB) is a part of the data dictionary that resides in the SYSAUX tablespace. It stores statement logs, plan histories, SQL plan baselines, and SQL profiles.

The difference between SQL Profiles and SQL Plan Management Baselines is very well explained by Maxym Kharchenko and Maria Colgan.

The process of automatic evolution of SQL Plan Baselines is described with a perfect example by Christian Antognini.

Converting baselines into profiles is almost straightforward but not simple; Sumit Bhatia explains how.

Now, for complex and long running SQL statements in Exadata, it is quite important to reduce the amount of data being transferred from the storage nodes to the database and to have the right degree of parallelism.

The former is achieved largely by smart scans, which take place only when there are full scans or direct path reads; the blocks are not cached but are returned directly to the PGA. The latter is even trickier: how does one in fact determine the right degree of parallelism for a SQL statement?

Most often parallel_degree_policy is set to manual (the default) and parallel_max_servers is sort of guessed. In my opinion, the right way is to use Oracle’s Auto DOP solution by setting parallel_degree_policy to a non-default value. Let me quote Gwen Shapira:

There are two levels to Auto DOP:
– Limited: when accessing tables and indexes that have been declared with a parallel clause, Oracle will decide on the degree of parallelism based on the query and system resources.
– Auto: Oracle will decide on the degree of parallelism for every query. In addition, two new features are enabled: parallel statement queuing and in-memory parallel execution.

Even after doing all that, because of Oracle’s unpredictable behavior at times, some SQL statements might not get the right number of parallel slaves or might not go for smart scans. For such statements, creating an SPM baseline is a good thing to start from. Then, during the Automatic SQL Tuning maintenance window, the SQL Tuning Advisor might notice that there is a non-accepted plan which is better than the accepted one. The recommended SQL profile will simply do nothing other than evolve the SPM baseline execution plan into an accepted plan.

All that happens automatically due to the fact that we have advised Oracle to automatically accept SQL Profiles.

But there is more to it: how do we get a good degree of parallelism for the SQL statement? Oracle will automatically do so, if needed, by creating a parallel SQL profile. Note that there are two types of profiles:

– REGULAR_PROFILE – profile without a change to parallel execution
– PX_PROFILE – regular profile with a change to parallel execution

Here is an example:
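
As a minimal sketch, with hypothetical task and profile names: DBMS_SQLTUNE.ACCEPT_SQL_PROFILE takes a profile_type parameter, which can be set to PX_PROFILE instead of the default REGULAR_PROFILE:

BEGIN
  DBMS_SQLTUNE.accept_sql_profile(
    task_name    => 'my_tuning_task',          -- hypothetical STA task name
    name         => 'px_profile_sales',        -- hypothetical profile name
    profile_type => DBMS_SQLTUNE.px_profile);  -- ask for a parallel profile
END;
/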

Automatically accepted parallel profiles can really boost Exadata performance. But as PX profiles can also consume more resources than expected, it is important to have Instance Caging enabled.

The most important step for configuring instance caging is to determine the value of CPU_COUNT for each database instance and the sum of CPU_COUNTs across the server. There are two approaches:

– Partitioned approach for mission-critical pools: sum(CPU_COUNT) < 75% x total CPUs
– Over-subscribed approach for non-critical pools: sum(CPU_COUNT) <= up to 3 x total CPUs

The second step for instance caging: ensure resource_manager_plan is enabled.
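
Both steps are plain parameter changes; as a sketch (the CPU_COUNT value should come from the partitioning math above, and the plan name is just an example):

ALTER SYSTEM SET cpu_count = 8 SCOPE = BOTH;
ALTER SYSTEM SET resource_manager_plan = 'DEFAULT_PLAN' SCOPE = BOTH;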

Bottom line: tuning an Exadata database is not exactly the same thing as tuning a traditional database. Actual tuning of an Exadata database is in fact a lot easier once understood.

And a lot of that tuning work can be really automated!

What is new with Automatic SQL Tuning in 11.2.0.2?

In DBA, Oracle database, SQL on April 24, 2011 at 14:32

When learning PL/I and FORTRAN IV at the very beginning of the 80s, I remember the teacher telling us that perhaps some day software would be able to create software. Something like that, I am paraphrasing. At that time, software development was called programming.

With Oracle 11g, this became somewhat true. What happens behind the curtain is that Oracle’s CBO is able to correct/change some bad execution plans without human intervention. This actually turns out to be one of the best new features in 11g, and for sure the best 11g new feature for data center DBAs.

But even in 11.2.0.2, the complete automation of Automatic SQL Tuning is not turned on by default. You have to manually enable it, just like this:

BEGIN
  DBMS_AUTO_SQLTUNE.SET_AUTO_TUNING_TASK_PARAMETER(
    parameter => 'ACCEPT_SQL_PROFILES',
    value     => 'TRUE');
END;
/

After creating an 11g database, that is all you have to do in order to automate the process! For all the details, check the 11gR2 documentation on Automatic SQL Tuning.

The first time I did this in a mission-critical production database was in November 2007, as soon as Oracle 11g was available for Solaris. However, I did spend hundreds of hours before that working/testing on 11.1.0.3 as an alpha tester.

In my opinion, starting with 12g, accepting SQL profiles should be enabled by default. However, in order to use automatic SQL tuning, and in particular SQL profiles, the Tuning and Diagnostics packs need to be licensed! Automatic SQL tuning via profiles is the most efficient way to massively tune SQL statements: the CBO learns from its mistakes!

Back to 11.2.0.2: there are three new and updated performance tuning features:

• Resource Manager enhancements for parallel statement queuing and CPU utilization limit
• The Oracle Orion I/O Calibration Tool
• The new DBMS_AUTO_SQLTUNE package for Automatic SQL Tuning, which is the new interface for manually managing the automatic SQL tuning task

Let us have a look at the three main subprograms of DBMS_AUTO_SQLTUNE: EXECUTE_AUTO_TUNING_TASK, REPORT_AUTO_TUNING_TASK, and SET_AUTO_TUNING_TASK_PARAMETER. Here are examples of how to use them:

1. Execute the Automatic SQL Tuning task immediately: EXEC DBMS_AUTO_SQLTUNE.EXECUTE_AUTO_TUNING_TASK; Note: must be run as SYS!

2. Display a text report of the automatic tuning task’s history: SELECT DBMS_AUTO_SQLTUNE.REPORT_AUTO_TUNING_TASK FROM DUAL;

3. Change a task parameter value for the daily automatic runs: see in the beginning how to enable automatic acceptance of SQL profiles. You can view all parameters as follows:

SELECT PARAMETER_NAME, PARAMETER_VALUE, IS_DEFAULT, DESCRIPTION
FROM DBA_ADVISOR_PARAMETERS
WHERE task_name = 'SYS_AUTO_SQL_TUNING_TASK'
/

I never understood why IE and Mozilla show me a mixture of Finnish and English in the description fields.

Back to the beginning of my post: have a look at how Oracle fixes its own errors; the Optimizer is optimizing its own execution plans! If you have not yet enabled Automatic SQL Tuning, do not hesitate to do it; ask me for help or assistance if needed.

As of today when posting this, 24 April 2011, Metalink returns only 1 article (from all sources) on DBMS_AUTO_SQLTUNE, while I get almost 500 on DBMS_SQLTUNE.

If you are still interested in the topic, check additionally:

DBMS_AUTO_SQLTUNE: ORA-01748 and Documentation Bugs
Morgan’s Library Oracle DBMS_AUTO_SQLTUNE Example Sample Code

Table Annotations in 11.2.0.2

In DBA, Oracle database, PL/SQL, SQL on April 22, 2011 at 10:12

11.2.0.2 is not just a patchset of 11gR2; it is more like 11gR3. Last month, I presented my favorite top 10 new features for DBAs. However, the new table annotation feature deserves special attention.

After a new release of Oracle, it is always worthwhile taking a look into DBA_TABLES (or USER_TABLES). Now, the last new column is called RESULT_CACHE. For now, it has two possible values: DEFAULT and FORCE.

FORCE means that the table is annotated. That is, if all table names in a given SQL query have this setting, then the result set returned by the query is always considered for caching, unless the NO_RESULT_CACHE hint is specified for the query. If one or more tables named in the query are set to DEFAULT, then the effective table annotation for that query is DEFAULT, which means the result set will not be cached.

Let us see how it works by first creating a table with 15000 rows and running a full table scan query:
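
A reconstruction of the demo as a sketch: the table name is hypothetical, and ALL_OBJECTS is just a convenient row source:

CREATE TABLE t_cache AS
  SELECT rownum AS id, object_name FROM all_objects WHERE rownum <= 15000;
ALTER TABLE t_cache RESULT_CACHE (MODE FORCE);  -- annotate the table
SELECT COUNT(*) FROM t_cache;                   -- the first run populates the result cache
SELECT COUNT(*) FROM t_cache;                   -- the second run should be answered from the cache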

If you want to flush the result cache, just run: exec DBMS_RESULT_CACHE.FLUSH;

Here are the objects related to the topic; they are worth checking:
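
For instance, as a sketch using the standard result cache views:

SELECT name, value FROM v$result_cache_statistics;
SELECT type, status, name FROM v$result_cache_objects WHERE rownum <= 10;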

Caching is not guaranteed by Oracle even if all tables in the query are annotated. The actual caching depends on internal restrictions for client and server cache, query cache worthiness, and space available in the cache. Try, for instance, running SQL with SYSDATE in it several times in a row and have a look.

Queries that are not cached:

• Remote objects
• Complex types in the select list
• Snapshot-based or flashback queries
• Queries executed in a serializable, read-only transaction, or inside a flashback session
• Queries that have PL/SQL functions in them
• Queries that have VPD policies enabled on the tables

How many records can Oracle store? It all depends on two init.ora parameters: RESULT_CACHE_MAX_SIZE and RESULT_CACHE_MAX_RESULT.

RESULT_CACHE_MAX_SIZE specifies the maximum amount of SGA memory that can be used by the result cache. Values of this parameter greater than 0 are rounded up to the next multiple of 32 KB. If the value of this parameter is 0, then the feature is disabled. The default value is derived from the values of SHARED_POOL_SIZE, SGA_TARGET, and MEMORY_TARGET. By default, the cache takes 0.25% of the memory_target parameter’s value. However, the maximum allocation cannot exceed 75% of the size of the shared pool. With my current memory_target of 384M, the result cache max size does not even reach 1M. For production systems, I would recommend that you set it to at least 32M.

RESULT_CACHE_MAX_RESULT specifies the percentage of RESULT_CACHE_MAX_SIZE that any single result can use. The default value is 5% of the RESULT_CACHE_MAX_SIZE. This should be OK in most cases.
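
So, as a sizing sketch for a production-like setting:

ALTER SYSTEM SET result_cache_max_size = 32M SCOPE = BOTH;

-- and verify what the cache actually got:
SET SERVEROUTPUT ON
EXEC DBMS_RESULT_CACHE.MEMORY_REPORT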

And finally, look at the execution plan when annotated tables are involved: you will notice the RESULT CACHE operation in it.

A good question is: which tables from the database should I annotate? I would probably go through a few AWR reports and try to find the repetitive queries run at least hundreds of times, then force annotate the tables used in those queries. And test first, plus follow on a regular basis what shows up in V$RESULT_CACHE_OBJECTS! With Oracle, at times you get the opposite of what is expected.

P.S. I often see in AWR reports how WebLogic runs “select * from dual” thousands of times per hour. In a recent report, I saw the DUAL table as the number one segment in terms of logical reads. You can annotate even system tables:

SQL> alter table dual result_cache (mode force);

Table altered.

Scorecards for ranking relational databases

In DB2 database, DBA, Oracle database, SQL on March 18, 2011 at 18:10

In June 1970, Edgar “Ted” F. Codd publicly published the following paper: A Relational Model of Data for Large Shared Data Banks. IBM refused to implement his model to preserve the revenues of IMS/DB. Customers pressured IBM to build it (the System-R project) and a relational language evolved: SEQUEL (Structured English Query Language), later SQL.

Codd produced 12 rules, which in fact are a set of thirteen rules (numbered zero to twelve), that define a relational database.

Depending on your brand preferences, one can play with the words in the definition of a relational database and for example get Oracle on top as most “relational”. Or get IBM or even 3-4 other brands above Oracle. Some give one point for each rule being fulfilled, thus 12 points is the maximum. Some give 0.5 points for partially fulfilled rules in which case Oracle would be rated as 11.5/12 as (R6) is not satisfied (and will never be).

Oracle has come closer than all other brands to solving the view updating issue using MVs (materialized views). DB2 can also support view updating fully, if the integrity between a view and its base table can be maintained. Microsoft SQL Server has to solve the view problem as they are behind.

Some people claim that there is no RDBMS that totally supports the view updating rule (R6), the systematic treatment of null values rule (R3), and logical data independence (R9). My claim is that Oracle satisfies (R3) and (R9). For (R6), I give Oracle 0.5 points.

You can see rather obscure statements like on http://www.firstsql.com that “only Ingres and FirstSQL among the RDBMS vendors have a solid foundation in their engine for future improvements.” Additionally they say that “Oracle would rate about 60% compliance. Others are lower except Ingres and FirstSQL which rate about 65% compliance. Interestingly enough, Xbase products rate 0% in compliance with the relational model.” Yeah, right 🙂

Overall, regardless of whether Oracle, Microsoft, or IBM is most relational in their database products, we should remember that nowadays it doesn’t really matter, does it? What Codd (and Date) produced was just a model, and they missed (or didn’t bother to wait for) the business train. Larry Ellison spotted the business idea and built a database empire, with all the limitations faced throughout the years.

Today, the technology does not drive the business. It is vice versa. We develop technology to make business move forward. That is why I don’t care at all if DB2, Oracle or SQL Server is 100% relational. It doesn’t matter any longer! What would you rather be: right or rich?

And finally, here is for your reference Chris Date’s version of the Codd’s Rules and note that in Version 8, Oracle was 8.5/12 RDBMS compliant 🙂

0. (Yes, there is a Rule 0!)
For a system to qualify as a RELATIONAL, DATABASE, MANAGEMENT system, that system must use its RELATIONAL facilities (exclusively) to MANAGE the DATABASE.

1. The information rule:
The information rule simply requires all information in the database to be represented in one and only one way, namely by values in column positions within rows of tables.

2. The guaranteed access rule:
This rule is essentially a restatement of the fundamental requirement for primary keys. It says that every individual scalar value in the database must be logically addressable by specifying the name of the containing table, the name of the containing column, and the primary key value of the containing row.

3. Systematic treatment of null values:
The DBMS is required to support a representation of “missing information and inapplicable information” that is systematic, distinct from all regular values (for example, “distinct from zero or any other number,” in the case of numeric values),
and independent of data type. It is also implied that such representations must be manipulated by the DBMS in a systematic way.

4. Active online catalog based on the relational model:
The system is required to support an online, inline, relational catalog that is accessible to authorized users by means of their regular query language.

5. The comprehensive data sublanguage rule:
The system must support at least one relational language that
(a) has a linear syntax,
(b) can be used both interactively and within application programs,
(c) supports data definition operations (including view definitions), data manipulation operations (update as well as retrieval),
security and integrity constraints, and transaction management operations (begin, commit, and rollback).

6. The view updating rule:
All views that are theoretically updatable must be updatable by the system.

7. High-level insert, update, and delete:
The system must support set-at-a-time INSERT, UPDATE, and DELETE operators.

8. Physical data independence:
Self-explanatory.

9. Logical data independence:
Self-explanatory.

10. Integrity independence:
Integrity constraints must be specified separately from application programs and stored in the catalog. It must be possible to change such constraints as and when appropriate without unnecessarily affecting existing applications.

11. Distribution independence:
Existing applications should continue to operate successfully
(a) when a distributed version of the DBMS is first introduced;
(b) when existing distributed data is redistributed around the system.

12. The nonsubversion rule:
If the system provides a low-level (record-at-a-time) interface, then that interface cannot be used to subvert the system (e.g., by bypassing a relational security or integrity constraint).

P.S. I have not mentioned anything about MySQL, for in my opinion MySQL is rather an SDK (Software Development Kit) than a database.

11g Grid Control: PL/SQL execution monitoring in SQL Monitoring

In DBA, Grid Control, PL/SQL, SQL on September 6, 2010 at 11:43

One of the most useful new features of 11g Grid Control is PL/SQL execution monitoring. Suppose a developer wants to know how his PL/SQL procedure did in terms of its SQL and DML statements: what calls did it make and how long did each call take? “Beats me” is what I would have said before 🙂 Have a look at what Oracle did with 11g Grid Control.

Not only do we see the duration of the PL/SQL procedure, but we can also see its activities and metrics per SQL call. We can monitor it live, either comparing the execution details to a previous run with a real-time 15-second refresh, or viewing the execution details with a real-time 10-second refresh, which gives us time, wait, and I/O statistics. You can view the result, save it in a report format, or even mail it directly to the developer!

Good one from Oracle.

How did we do similar things in 8i?

Oracle teaching Oracle

In Grid Control, SQL on July 22, 2010 at 17:09

Interestingly enough, we often rely on the same component to improve the SQL statement that could not generate an optimal plan in the first place. I am talking about the Cost Based Optimizer. Look at this; note that the database in question is an active database.

So, the top SQL is one generated by Grid Control itself. So far so good. Let’s run the SQL Tuning Advisor and have a look at the recommendations.

What do we witness? CBO learning from its mistakes? Or Oracle teaching Oracle? Or Oracle learning from Oracle. Whatever…

An interesting recommendation: “Rewrite the predicate into an equivalent form to take advantage of indices”. I wish I could start rewriting the Grid Control code. But I will analyze SYS.LIBRARY$ and accept the profile. Note: in 11g Grid Control, we can implement the profile with forced matching via the GUI; there is no need to do it any longer in SQL*Plus.

However, I still claim that Oracle Enterprise Manager Grid Control is probably the best feature/tool/utility in Oracle after 9i.