The James Bond of Database Administration

In Data, DBA, Golden Gate, Oracle database, Oracle Engineered Systems on October 27, 2015 at 07:23

“Defending our systems needs to be as sexy as attacking others. There’s really only one solution: Bond.”

That is what ‘The Guardian’ wrote recently in an article entitled “The Man with the Golden Mouse: why data security needs a James Bond”.

Attending the annual Oracle ACE Director Briefing at Oracle HQ sparked an interesting debate on the following questions: What will happen in the near future with the DBA profession? Who is now the James Bond of Database Administration?


According to TechTarget, big data tools are changing data architectures in many companies. The effect on the skill sets required by database administrators may be moderate, but some new IT tricks are likely to be needed. GoldenGate is the new Streams, Exadata is the new RAC, Sharding the new Partitioning, Big Data is the new data (Texas is an exception), you name it…

Having had the privilege to work over the years with some of the best database experts in the world has proved to me that the Double-O-Sevens are in fact more like Double-O-Six-Hundreds, meaning that there are hundreds of DBAs who qualify without any hesitation whatsoever as the James Bonds of Database Administration. I have learned so much from my ex-Nokia colleagues and from my current Enkitec and Accenture colleagues. Not to mention friends from companies like eDBA, Pythian, Miracle, etc.

A DBA needs to have so many skills. Look, for instance, at Craig S. Mullins’ suggested 17 skills required of a DBA. Kyle Hunter’s article The evolution of the DBA and the Data Architect clearly points to the emerging skillsets in the Data Revolution.

In the IT business, and in database administration in particular, it is not that important how well you know the old stuff; it is more important how fast you can learn the new things. Here are some of the tools that help every modern Oracle DBA:

Oracle Enterprise Manager 12c
Developer Tools
Oracle Application Express (APEX)
SQL Developer
Oracle JDeveloper
SQL Developer Data Modeler
And last but not least SQL*Plus®


These additional Metalink tools might often be of great help:

Diagnostic Tools Catalog – Note ID 559339.1
OS Watcher (Support Tool) – Note 301137.1
LTOM (Support Tool) – Note 352363.1
HANGFG (Support Tool) – Note 362094.1
SQLT (Support Tool) – Note 215187.1
PLSQL Profiler (Support Script) – Note 243755.1
MSRDT for the Oracle Lite Repository – Note 458350.1
Trace Analyzer TRCANLZR – Note 224270.1
ORA-600/ORA-7445 Error Look-up Tool – Note 153788.1
Statspack (causing more problems than help in 12c)

The Man with the Golden Mouse is the James Bond of Database Administration. The best DBA tools are still knowledge and experience.


New Features of Oracle NoSQL

In DBA, NoSQL, Oracle database, Oracle Engineered Systems on July 4, 2015 at 16:44

“In open source, we feel strongly that to really do something well, you have to get a lot of people involved.” Linus Torvalds

Currently, there are about 150 NoSQL databases (= Not Only SQL).

There are 5 major NoSQL data models: Collection, Columnar, Document-oriented, Graph and Key-value.

Oracle NoSQL, based on BerkeleyDB (first released in 1994), was recently named by Forrester Research as a leader in the NoSQL key-value database market, and Oracle NoSQL Database was called out as having strong adoption and maturity. A very good study and comparison of several NoSQL databases entitled 21 NoSQL Innovators to Look for in 2020 was written by Gary MacFadden.


Here are a few examples:

Collection/Multi-model: OrientDB, FoundationDB, ArangoDB, Alchemy Database, CortexDB
Columnar: Accumulo, Cassandra, Druid, HBase, Vertica
Document-oriented: Lotus Notes, Clusterpoint, Apache CouchDB, Couchbase, HyperDex, MarkLogic, MongoDB, OrientDB, Qizx
Graph: Allegro, Neo4J, InfiniteGraph, OrientDB, Virtuoso, Stardog
Key-value: Redis, CouchDB, Oracle NoSQL Database, Dynamo, FoundationDB, HyperDex, MemcacheDB, Riak, FairCom c-treeACE, Aerospike, OrientDB, MUMPS

Last month (June 2015), Oracle announced Oracle NoSQL Database Version 3.3.4. This release offers new security features, including User Roles and Table-level Authorization, new language interfaces for Node.js and Python, and integration with Oracle Database Mobile Server. The prior release offered Big Data SQL support, a RESTful API, a C Table Driver, SQL-like DDL, Apache Hive support and much more.

A good starting point in order to get deeper into the NoSQL and Big Data world is the Oracle Big Data Learning Library.

Oracle recently announced Big Data SQL for Oracle NoSQL Database. This feature will allow Oracle Database users to connect to external data repositories like Oracle NoSQL Database or Hadoop in order to fetch data from any or all of the repositories (at once) through a single SQL query.

Oracle Big Data SQL is an innovation from Oracle only available on Oracle Big Data Appliance. It is a new architecture for SQL on Hadoop, seamlessly integrating data in Hadoop and NoSQL with data in Oracle Database. Using Oracle Big Data SQL one can:

• Combine data from Oracle Database, Hadoop and NoSQL in a single SQL query
• Query and analyze data in Hadoop and NoSQL
• Integrate big data analysis into existing applications and architectures
• Extend security and access policies from Oracle Database to data in Hadoop and NoSQL
• Maximize query performance on all data using Smart Scan
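To make the first bullet above concrete, here is a minimal sketch of what such a query could look like. The table and column names (web_logs, customers) are hypothetical, and it assumes Big Data SQL is already installed and the Hadoop cluster configured; with TYPE ORACLE_HIVE and no access parameters, the external table maps to a Hive table of the same name:

-- hypothetical external table over an existing Hive table named WEB_LOGS
-- (assumes Oracle Big Data SQL is set up between the database and the cluster)
CREATE TABLE web_logs (
  log_time TIMESTAMP,
  cust_id  NUMBER,
  url      VARCHAR2(4000)
)
ORGANIZATION EXTERNAL (TYPE ORACLE_HIVE)
REJECT LIMIT UNLIMITED;

-- one SQL statement joining relational data in Oracle Database
-- with data living in Hadoop
select c.cust_name, count(*) as clicks
from   customers c
join   web_logs  w on w.cust_id = c.cust_id
group  by c.cust_name;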


The recent update to Oracle REST Data Services enables a consistent RESTful interface to Oracle Database’s relational tables, JSON document store, and also enables access to Oracle NoSQL Database tables.
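As a small, hedged illustration (the HR schema, the EMPLOYEES table and the aliases below are just example names, not something from the ORDS announcement), AutoREST-enabling a relational table could look roughly like this:

BEGIN
  -- expose the HR schema through ORDS under the URL alias "hr"
  ORDS.ENABLE_SCHEMA(
    p_enabled             => TRUE,
    p_schema              => 'HR',
    p_url_mapping_type    => 'BASE_PATH',
    p_url_mapping_pattern => 'hr',
    p_auto_rest_auth      => FALSE);

  -- AutoREST-enable one table; it then answers on
  -- http://<host>:<port>/ords/hr/employees/
  ORDS.ENABLE_OBJECT(
    p_enabled      => TRUE,
    p_schema       => 'HR',
    p_object       => 'EMPLOYEES',
    p_object_type  => 'TABLE',
    p_object_alias => 'employees');

  COMMIT;
END;
/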

I still recommend reading the excellent article by Gwen Shapira entitled Hadoop and NoSQL Mythbusting.

Here are some useful links:

NoSQL Database Administrator’s Guide
Getting Started with NoSQL Database Table API
NoSQL Database Run Book
NoSQL Database Security Guide
Oracle NoSQL Database Availability and Failover
Download Oracle NoSQL Database, Server


It is interesting to note that, according to Wikipedia, … is the first stable Oracle NoSQL release.

Check the DBMS popularity broken down by database model!


Table Temperature, Big Tables In-Memory and Automatic Big Table Caching

In DBA, Init.ora, Oracle database on April 3, 2015 at 16:10

If you have never heard of Oracle object temperature and how this relates to caching, in-memory and big tables, here is a short note perhaps worth reading.

And all this is purely Oracle database related, nothing to do with Oracle Lighting Products.


Background: Oracle 12.1.0.2 came with 5 new init.ora parameters (actually 12, but the other 7 are In-Memory related):

1. common_user_prefix
2. dbfips_140
3. enable_goldengate_replication
4. exafusion_enabled
5. db_big_table_cache_percent_target

The first 4 are rather on/off-type parameters, but the last one, db_big_table_cache_percent_target, is worth looking into.
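The parameter is dynamic, so a quick test could look like this (40 is just an example value, not a recommendation):

-- reserve up to 40% of the buffer cache for automatic big table caching
ALTER SYSTEM SET db_big_table_cache_percent_target = 40;

-- back to the default of 0, which disables the feature
ALTER SYSTEM SET db_big_table_cache_percent_target = 0;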

A table is considered small if the number of blocks in the segment is lower than or equal to the value of the parameter _small_table_threshold. In 12c, this parameter defaults to 2% of the buffers in the cache. To be more precise, it is 2% of _db_block_buffers. For details, check Jonathan Lewis’s Small tables. Another very good one is by Tanel Poder, entitled Optimizer statistics-driven direct path read decision for full table scans.

Otherwise, we call the table big.


In 12.1.0.2, a DBA can enable full database caching with the following command: ALTER DATABASE FORCE FULL DATABASE CACHING;

Under normal running, the Oracle database decides what data to cache in the buffer cache. If there is not enough room, data can be aged out of the cache. If Oracle determines that the buffer cache is big enough to hold the entire database, it will cache all blocks.

However, we seldom have enough memory to cache the whole database. In this case, the parameter db_big_table_cache_percent_target can be used, and of course for “big tables” only.

The default value is 0, so under normal circumstances automatic big table caching is not enabled. You can decide on your own whether this is Poor Man’s In-Memory Caching, but even if it is not, the benefits are rather questionable. Let us see why.

We have 3 tables: SALES, OLD_SALES and VERY_OLD_SALES.


SALES and OLD_SALES are huge, and SALES is in-memory, that is, the in-memory option has been enabled for the SALES table. VERY_OLD_SALES is a rather small table but still considered big by Oracle, as the small table threshold is 401 (2% of 20090):

select a.ksppinm name, b.ksppstvl value 
from x$ksppi a, x$ksppcv b 
where a.indx = b.indx 
and a.ksppinm like '%small_table%';


select a.ksppinm name, b.ksppstvl value 
from x$ksppi a, x$ksppcv b 
where a.indx = b.indx 
and a.ksppinm like '%block_buffer%';


Let us see whether the table is In-Memory enabled and how many blocks it has:
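One way to check is a query along these lines (a small sketch, assuming the demo tables live in the current schema; BLOCKS here comes from optimizer statistics, while DBA_SEGMENTS would show the physical segment size):

-- In-Memory status and block counts of the demo tables
select table_name, inmemory, blocks
from   user_tables
where  table_name in ('SALES', 'OLD_SALES', 'VERY_OLD_SALES');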


Two new dynamic views give us details on big table caching:
– V$BT_SCAN_CACHE shows the parameters and status of the big table cache section.
– V$BT_SCAN_OBJ_TEMPS shows the active objects currently tracked by the big table cache.
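For example, a quick look at both views could go like this (column names as documented for 12.1.0.2; a sketch, not a tuned monitoring script):

-- overall state of the big table cache section
select bt_cache_target, object_count, memory_buf_alloc, min_cached_temp
from   v$bt_scan_cache;

-- objects tracked by the big table cache, hottest first
select dataobj#, size_in_blks, cached_in_mem, policy, temperature
from   v$bt_scan_obj_temps
order  by temperature desc;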

As SALES is an IM-table (DATAOBJ# is 91985), regardless of how often you scan it, no buffers will go to the big table cache (I have set db_big_table_cache_percent_target to 50):


If you do a full table scan on OLD_SALES, as it is way too big for the BT-cache (= Big Table Cache), there is no use either: I have tested it, and at least V$BT_SCAN_OBJ_TEMPS shows nothing.

However, when I do a subset scan on OLD_SALES, the BT-cache is used, as the buffers will fit.

Now, let us look at the temperature of the tables. Oracle assigned a temperature of 2000 to VERY_OLD_SALES (DATAOBJ# is 92326) and 1000 to OLD_SALES (DATAOBJ# is 92322), which is only partially loaded into the BT-cache. After updating VERY_OLD_SALES, its temperature grows to 5000:
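One way to follow this is to filter the view on the two DATAOBJ# values quoted above (a small sketch reusing the object numbers from this example):

-- temperatures of the two demo tables while running the test
select dataobj#, temperature, size_in_blks, cached_in_mem
from   v$bt_scan_obj_temps
where  dataobj# in (92322, 92326);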


The more SQL & DML we do on the table, the higher the temperature gets. A detailed study by Mahmoud Hatem can be found here.

In short:

– It is not possible to control this feature on a per-table basis; there are no attributes/parameters like KEEP or STORAGE clauses at table level.
– If we dedicate real memory just to this type of caching, it will not always be used: for mega-huge tables that will not fit anyway, it will only be used for subset retrievals.
– The memory could be more useful for the IM option, as the benefits there are much clearer and more obvious.
– The BT-cache is also populated by full index or range scans.
– The CBO is not aware that segments are being cached.
– One cannot manually flush the BT-cache, but it is gone after flushing the buffer cache.
– If the BT-cache is not big enough for several objects, the ones with the highest temperature are given priority.

Big table cache: an optional, integrated portion of the database buffer cache that uses a temperature-based, object-level replacement algorithm instead of the traditional LRU-based, block-level replacement algorithm.

