Dontcheff

Reading Data in Oracle Database 12c

In Cloud, Consolidation, Database options, DBA, Security and auditing, SQL on December 1, 2014 at 18:02

1. For DBAs and Developers, the words READ and SELECT have for years been more or less synonyms. In 12c, is there now a difference?

2. Before pluggable databases, selecting data from the SALES table, for instance, meant selecting data from a table called SALES in a certain SCHEMA within the database. What happens when a table called SALES exists, under the same schema name, in several pluggable databases?

The aim of this blog post is to shed some light on these new concepts.


1. New READ privilege.

Until Oracle 12.1.0.2, the SELECT object privilege allowed users to perform the following two operations in addition to just reading data from the SALES table:

LOCK TABLE sales IN EXCLUSIVE MODE;
SELECT ... FROM sales FOR UPDATE;

These two commands enabled users to lock the rows of the SALES table.

The READ object privilege does not provide these additional privileges. For better security, grant users the READ object privilege if you want to restrict them to performing queries only.

In addition to the READ object privilege, you can grant users the READ ANY TABLE privilege to enable them to query any table in the database.

When a user who has been granted the READ object privilege wants to perform a query, the user still must use the SELECT statement. There is no accompanying READ SQL statement for the READ object privilege.

The GRANT ALL PRIVILEGES TO user SQL statement includes the READ ANY TABLE system privilege. The GRANT ALL PRIVILEGES ON object TO user statement includes the READ object privilege.

If you want the user only to be able to query tables, views, materialized views, or synonyms, then grant the READ object privilege. For example:

GRANT READ ON SALES TO julian;
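With only READ granted, JULIAN can query the table but can no longer lock its rows. A minimal sketch of the expected behavior (the exact error text may vary, but the locking statements should fail with an insufficient-privileges error):

-- Connected as JULIAN, who has READ (but not SELECT) on SALES:
SELECT COUNT(*) FROM sales;          -- works
SELECT * FROM sales FOR UPDATE;      -- fails: insufficient privileges
LOCK TABLE sales IN EXCLUSIVE MODE;  -- fails: insufficient privileges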


2. Querying a table owned by a common user across all PDBs.

Consider the following scenario:

– The container database has several pluggable databases, for example a separate PDB for each office location of the company.
– Each PDB has a SALES table that tracks the sales of that office, so the SALES table in each PDB contains different sales information.
– The root container also has an empty SALES table.
– The SALES table in each container is owned by the same common user.

To run a query that returns all of the sales across the company, connect to each PDB as a common user and create a view with the following statement:

CREATE OR REPLACE VIEW sales AS SELECT * FROM sales;

The common user that owns the view must be the same common user that owns the sales table in the root. After you run this statement in each PDB, the common user has a view named sales in each PDB.
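The per-PDB step can be scripted from a single session; a minimal sketch, assuming two PDBs called PDB1 and PDB2 and a common user C##SALES_ADMIN with the SET CONTAINER privilege (all names hypothetical):

-- As the common user that owns SALES in the root:
ALTER SESSION SET CONTAINER = PDB1;
CREATE OR REPLACE VIEW sales AS SELECT * FROM sales;

ALTER SESSION SET CONTAINER = PDB2;
CREATE OR REPLACE VIEW sales AS SELECT * FROM sales;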

With the root as the current container and the common user as the current user, run the following query with the CONTAINERS clause to return all of the sales in the sales table in all PDBs:

SELECT * FROM CONTAINERS(sales);

You can also query the view in specific containers. For example, the following SQL statement queries the view in the containers with a CON_ID of 3 and 4:

SELECT * FROM CONTAINERS(sales) WHERE CON_ID IN (3,4);

3. Delegate.

Something else: starting with 12.1.0.2, when granting a role to a user, you can specify the WITH DELEGATE OPTION clause. The grantee can then do the following two things (see the sketch after the list):

A) Grant the role to a program unit in the grantee’s schema
B) Revoke the role from a program unit in the grantee’s schema
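An illustrative sketch, using the 12c syntax for granting roles to program units (the role and procedure names are hypothetical):

-- As the original grantor:
GRANT sales_admin TO julian WITH DELEGATE OPTION;

-- JULIAN can now attach the role to a program unit in his schema...
GRANT sales_admin TO PROCEDURE julian.process_sales;

-- ...and detach it again:
REVOKE sales_admin FROM PROCEDURE julian.process_sales;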


New Features of Oracle NoSQL

In DBA, NoSQL, Oracle database, Oracle Engineered Systems on July 4, 2015 at 16:44

“In open source, we feel strongly that to really do something well, you have to get a lot of people involved.” Linus Torvalds

Currently, there are about 150 NoSQL (= Not Only SQL) databases.

There are 5 major NoSQL data models: Collection, Columnar, Document-oriented, Graph and Key-value.

Oracle NoSQL, based on Berkeley DB (first released in 1994), was recently named by Forrester Research a leader in the NoSQL key-value database market, and Oracle NoSQL Database was called out as having strong adoption and maturity. A very good study and comparison of several NoSQL databases, entitled 21 NoSQL Innovators to Look for in 2020, was written by Gary MacFadden.


Here are a few examples:

Collection/Multi-model: OrientDB, FoundationDB, ArangoDB, Alchemy Database, CortexDB
Columnar: Accumulo, Cassandra, Druid, HBase, Vertica
Document-oriented: Lotus Notes, Clusterpoint, Apache CouchDB, Couchbase, HyperDex, MarkLogic, MongoDB, OrientDB, Qizx
Graph: Allegro, Neo4J, InfiniteGraph, OrientDB, Virtuoso, Stardog
Key-value: Redis, CouchDB, Oracle NoSQL Database, Dynamo, FoundationDB, HyperDex, MemcacheDB, Riak, FairCom c-treeACE, Aerospike, OrientDB, MUMPS

Last month (June 2015), Oracle announced Oracle NoSQL Database Version 3.3.4. This release offers new security features, including user roles and table-level authorization, new language interfaces for Node.js and Python, and integration with Oracle Database Mobile Server. The prior release brought Big Data SQL support, a RESTful API, a C Table Driver, SQL-like DDL, Apache Hive support and much more.

A good starting point in order to get deeper into the NoSQL and Big Data world is the Oracle Big Data Learning Library.

Oracle recently announced Big Data SQL for Oracle NoSQL Database. This feature allows Oracle Database users to connect to external data repositories like Oracle NoSQL Database or Hadoop, and to fetch data from any or all of the repositories (at once) through a single SQL query.

Oracle Big Data SQL is an innovation from Oracle only available on Oracle Big Data Appliance. It is a new architecture for SQL on Hadoop, seamlessly integrating data in Hadoop and NoSQL with data in Oracle Database. Using Oracle Big Data SQL one can:

• Combine data from Oracle Database, Hadoop and NoSQL in a single SQL query
• Query and analyze data in Hadoop and NoSQL
• Integrate big data analysis into existing applications and architectures
• Extend security and access policies from Oracle Database to data in Hadoop and NoSQL
• Maximize query performance on all data using Smart Scan
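As a hedged illustration of the single-query idea, assuming Big Data SQL is already configured and the NoSQL data has been mapped to a Hive table (all object names below are hypothetical, and the exact access-parameter syntax depends on the Big Data SQL version):

-- External table over the Hive-mapped NoSQL store:
CREATE TABLE sales_ext (store_id NUMBER, amount NUMBER)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_HIVE
  DEFAULT DIRECTORY default_dir
  ACCESS PARAMETERS (com.oracle.bigdata.tablename: nosql_sales)
)
REJECT LIMIT UNLIMITED;

-- One SQL query combining Oracle Database data with the external source:
SELECT o.region, SUM(s.amount)
FROM   sales_ext s JOIN offices o ON o.store_id = s.store_id
GROUP  BY o.region;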


The recent update to Oracle REST Data Services enables a consistent RESTful interface to Oracle Database’s relational tables, JSON document store, and also enables access to Oracle NoSQL Database tables.

I still recommend reading the excellent article by Gwen Shapira entitled Hadoop and NoSQL Mythbusting.

Here are some useful links:

NoSQL Database Administrator’s Guide
Getting Started with NoSQL Database Table API
NoSQL Database Run Book
NoSQL Database Security Guide
Oracle NoSQL Database Availability and Failover
Download Oracle NoSQL Database, Server


It is interesting to note that according to Wikipedia, 12.1.3.3.4 is the first stable Oracle NoSQL release.

Check the DBMS popularity broken down by database model!

[chart: DBMS popularity broken down by database model]

Table Temperature, Big Tables In-Memory and Automatic Big Table Caching

In DBA, Init.ora, Oracle database on April 3, 2015 at 16:10

If you have never heard of Oracle object temperature and how this relates to caching, in-memory and big tables, here is a short note perhaps worth reading.

And all this is purely Oracle database related, nothing to do with Oracle Lighting Products.


Background: Oracle 12.1.0.2 came with 5 new init.ora parameters (actually 12, but the other 7 are In-Memory related):

1. common_user_prefix
2. dbfips_140
3. enable_goldengate_replication
4. exafusion_enabled
5. db_big_table_cache_percent_target

The first 4 are rather on/off-type parameters, but the last one, db_big_table_cache_percent_target, is worth looking into.

A table is considered small if the number of blocks in its segment is less than or equal to the value of the parameter _small_table_threshold. In 12c, this parameter defaults to 2% of the buffers in the cache; to be more precise, 2% of _db_block_buffers. For details, check Jonathan Lewis's Small tables. Another very good one is by Tanel Poder, entitled Optimizer statistics-driven direct path read decision for full table scans.

Otherwise, we call the table big.


In 12.1.0.2, a DBA can enable full database caching with the following command: ALTER DATABASE FORCE FULL DATABASE CACHING;
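Note that the command is accepted only while the database is mounted but not open; a minimal sketch:

SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE FORCE FULL DATABASE CACHING;
ALTER DATABASE OPEN;

-- Verify the mode:
SELECT force_full_db_caching FROM v$database;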

In normal operation, the Oracle database decides what data to cache in the buffer cache. If there is not enough room, data can be aged out of the cache. If Oracle determines that the buffer cache is big enough to hold the entire database, it caches all blocks.

However, we seldom have enough memory to cache the whole database. In this case, the parameter db_big_table_cache_percent_target can be used, and of course for "big tables" only.

The default value is 0, so under normal circumstances automatic big table caching is not enabled. You can decide on your own if this is Poor Man's In-Memory Caching, but even if it isn't, the benefits are rather questionable, as we will see below.
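Enabling it is just a matter of setting the parameter to a nonzero value; a quick sketch (40 is an arbitrary example, and the V$BT_SCAN_CACHE columns are as documented for 12.1.0.2):

ALTER SYSTEM SET db_big_table_cache_percent_target = 40;

-- Status of the big table cache section:
SELECT bt_cache_target, object_count, memory_buf_alloc
FROM   v$bt_scan_cache;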

We have 3 tables: SALES, OLD_SALES and VERY_OLD_SALES.


SALES and OLD_SALES are huge, and SALES is in-memory, that is, the In-Memory option has been enabled for the SALES table. VERY_OLD_SALES is a rather small table, but still considered big by Oracle, as the small table threshold is 401 (2% of 20090):

select a.ksppinm name, b.ksppstvl value
from x$ksppi a, x$ksppcv b
where a.indx = b.indx
and a.ksppinm like '%small_table%';

NAME                     VALUE
-----------------------  -----
_small_table_threshold   401

select a.ksppinm name, b.ksppstvl value
from x$ksppi a, x$ksppcv b
where a.indx = b.indx
and a.ksppinm like '%block_buffer%';

NAME                     VALUE
-----------------------  -----
_db_block_buffers        20090

Let us see if the table is In-Memory enabled and how many blocks it has:

[screenshot: In-Memory status and block counts of the SALES, OLD_SALES and VERY_OLD_SALES tables]
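One way to check both from the dictionary (a sketch; the tables are assumed to live in the current schema, and BLOCKS reflects the last statistics gathering):

SELECT table_name, inmemory, blocks
FROM   user_tables
WHERE  table_name IN ('SALES', 'OLD_SALES', 'VERY_OLD_SALES');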

3 new dynamic views give us details on big table caching:
– V$BTS_STAT
– V$BT_SCAN_CACHE shows the parameters and status of the big table cache section.
– V$BT_SCAN_OBJ_TEMPS shows the active objects currently tracked by the big table cache.

As SALES is an IM-table (DATAOBJ# is 91985), regardless of how often you scan it, no buffers will go to the big table cache (I have set db_big_table_cache_percent_target to 50):

[screenshot: V$BT_SCAN_OBJ_TEMPS output showing that SALES stays out of the big table cache]

If you do a full table scan on OLD_SALES, it is of no use either, as the table is way too big for the BT-cache (= Big Table Cache). I have tested it, and at least V$BT_SCAN_OBJ_TEMPS shows nothing.

However, when I scan only a subset of OLD_SALES, the BT-cache is used, as those buffers do fit.

Now, let us look at the temperature of the tables. Oracle assigned a temperature of 2000 to VERY_OLD_SALES (DATAOBJ# is 92326) and 1000 to OLD_SALES (DATAOBJ# is 92322), which is only partially loaded into the BT-cache. After updating VERY_OLD_SALES, its temperature grows to 5000:

[screenshot: table temperatures in V$BT_SCAN_OBJ_TEMPS]
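The temperatures can be watched with a query along these lines (a sketch based on the 12.1.0.2 columns of the view):

-- POLICY shows whether an object is fully cached (MEM_ONLY),
-- partially cached (MEM_PART) or not cached (DISK):
SELECT dataobj#, size_in_blks, temperature, policy, cached_in_mem
FROM   v$bt_scan_obj_temps
ORDER  BY temperature DESC;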

The more SQL & DML we do on the table, the higher the temperature gets. A detailed study by Mahmoud Hatem can be found here.

In short:

– It is not possible to control this feature on a per-table basis: there are no attributes or parameters like the KEEP or STORAGE clauses on table level.
– If we dedicate real memory just to this type of caching, it will not always be used: mega huge tables will not fit anyway, so only subset retrievals benefit.
– The memory could be more useful for the IM option, where the benefits are much more clear and obvious.
– The BT-cache is also populated by full index scans and range scans.
– The CBO is not aware that segments are being cached.
– One cannot manually flush the BT-cache, but it is gone after flushing the buffer cache.
– If the BT-cache is not big enough for several objects, the ones with the highest temperature are given priority.

Big table cache: an optional, integrated portion of the database buffer cache that uses a temperature-based, object-level replacement algorithm instead of the traditional LRU-based, block-level replacement algorithm.
