Dontcheff

Archive for the ‘Databases’ Category

Licensing Types of the Oracle Database

In Cloud, Database options, Databases, DBA, New features, Oracle database on May 16, 2021 at 13:18

After being asked all kinds of questions on Oracle Database Licensing on a daily basis, as time goes by, you sort of understand it. Sort of, because the Oracle Database Licensing Guide is 602 pages long and is updated often. The latest one is from April 2021 – now it is mid-May.

Moreover, you have perhaps seen all Oracle certifications but if you search for one on licensing you will find what I did – there isn’t one.

What I am trying to do now is to summarize Database Licensing in a short blog post – this might help many to at least understand the concept.

There are 3 types of licenses for the Oracle Database – Packs, Options and Features – and 9 Oracle Database Offerings: Standard Edition 2, Enterprise Edition, Oracle Database Appliance, Exadata, Exadata Cloud Service and Cloud@Customer, Database Cloud Service Standard Edition, Database Cloud Service Enterprise Edition, Database Cloud Service Enterprise Edition – High Performance and Database Cloud Service Enterprise Edition – Extreme Performance (you can see their abbreviations in the table below).

1. Packs: there are 5 different packs for the Oracle Database:

2. Options: there are 15 database options for the Oracle Database:

  • Oracle Active Data Guard
  • Oracle Advanced Compression
  • Oracle Advanced Security
  • Oracle Database In-Memory
  • Oracle Database Vault
  • Oracle Label Security
  • Oracle Machine Learning
  • Oracle Multitenant
  • Oracle On-Line Analytical Processing (OLAP)
  • Oracle Partitioning
  • Oracle RAC One Node
  • Oracle Real Application Clusters (Oracle RAC)
  • Oracle Real Application Testing
  • Oracle Spatial and Graph
  • Oracle TimesTen Application-Tier Database Cache

Here are the ones related to Consolidation, HA, Manageability and Performance:

3. Features: there are 131 features that can be licensed with the Oracle Database, out of which 105 are for EE and 123 are for Exadata. As you can see, there are 3 features available for Exadata, ExaCS and ExaC@C falling under the functional category of Autonomous:

If you would like to drill down into the details, use the Database Feature and Licensing tool, which is available online without the need to register or have an Oracle account.

Moreover, the Oracle Enterprise Manager Licensing Manual is 366 pages long, so there is more to read once you are done with the Database Licensing Manual.

You might think that is way too much, and perhaps it is, but the situation is very similar with other database vendors. Let us look at AWS and GCP for instance:

AWS has more than 10 database offerings:

Amazon Aurora
Amazon RDS
Amazon Redshift
Amazon DynamoDB
Amazon ElastiCache
Amazon DocumentDB (with MongoDB compatibility)
Amazon Keyspaces (for Apache Cassandra)
Amazon Neptune
Amazon Timestream
Amazon Quantum Ledger Database (QLDB)
AWS Database Migration Service (DMS)

GCP also has more than 10 database offerings:

Relational: Bare Metal Solution for Oracle workloads
Cloud SQL: Managed MySQL, PostgreSQL and SQL Server
Cloud Spanner and BigQuery
Key value: Cloud Bigtable
Document: Firestore and Firebase Realtime Database
In-memory: Memorystore
NoSQL: MongoDB Atlas and managed offerings from open source partner network including MongoDB, Datastax, Redis Labs, and Neo4j

And, after all, Azure is not far behind:

Azure SQL Database
Azure SQL Managed Instance
SQL Server on Virtual Machines
Azure Database for PostgreSQL
Azure Database for MySQL
Azure Database for MariaDB
Azure Cosmos DB
Azure Cache for Redis
Azure Database Migration Service
Azure Managed Instance for Apache Cassandra

After all, being an expert in database licensing is a skill of its own!

Migrating databases with several database links

In Cloud, Consolidation, Databases, DBA, Oracle database, Replication on April 1, 2021 at 09:08

In a couple of recent database migration cases, one of the main questions raised was how to figure out all outgoing and incoming database links, as they have to be modified after massive migrations.

Finding outgoing database links is simple: DBA_DB_LINKS describes all database links in the database, and this view has been part of the database (at least) since 7.3.4.

The tricky part is finding all incoming database links – at least before 12.2, where the new view DBA_DB_LINK_SOURCES shows information about the source databases that opened database links to the local database.

So, how about the databases that are version 12.1 and below?

An Oracle community discussion on the MOS DBA forum gives several ideas:

Option 1: Bruno suggests to “start from the listener logfile; with some “awk/sed/vi” work it should be possible to extract the list of “origins” of the connections… -> From this list, identify the database servers -> Search database links on relevant databases on these servers”.

It might work, but it could be rather tedious if there are hundreds of different servers.

Option 2: Brian suggests “to query V$SESSION to see active sessions from the other database server. Hint…look at the MACHINE column to see if it matches the other database server name. Querying V$SESSION will only work if the link is open when you query it. As such, you may want to add an AFTER LOGON trigger which writes an audit entry to a table if the connection is from that database server.”

If you create a logon trigger to record all incoming connections via database link, note that in 11g you can do that using the value of sys_context('USERENV','DBLINK_INFO'), which gives us all the information. But check Doc ID 2593966.1 first, as there is Bug 18974508: sys_context('userenv', 'dblink_info') returns incomplete information.
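Here is a minimal sketch of such a trigger – the table and trigger names are hypothetical, and keep in mind the caveat from Mark below:

CREATE TABLE dblink_audit (
  captured_at TIMESTAMP DEFAULT SYSTIMESTAMP,
  dblink_info VARCHAR2(4000)
);

CREATE OR REPLACE TRIGGER capture_incoming_dblinks
AFTER LOGON ON DATABASE
BEGIN
  -- DBLINK_INFO is populated only for sessions opened over a database link
  IF SYS_CONTEXT('USERENV','DBLINK_INFO') IS NOT NULL THEN
    INSERT INTO dblink_audit (dblink_info)
    VALUES (SYS_CONTEXT('USERENV','DBLINK_INFO'));
  END IF;
END;
/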

But before 10g there is no DBLINK_INFO, so we must use x$k2gte:

 
select username, osuser, status, sid, serial#, machine,
process, terminal, program from v$session
where saddr in (select k2gtdses from sys.x$k2gte);

The above is documented in Doc ID 332326.1: How to identify a session started by a remote distributed transaction? The fixed table x$k2gte contains 2PC Global Transaction Entry. The column k2gtdses in x$k2gte has the session state object and this can be mapped to the saddr column of v$session.

But as explained by Mark, the problem is that until the trigger finishes, the remote db link session is not considered to exist; only upon successful session connection does Oracle go and update related facts about the session. Oracle does not guarantee read consistency on v$ views, and the v$ views are based on x$ tables, which are really program storage areas. These areas get updated at various points in the logic, so it is possible that a logon trigger may not work in this specific case. An alternate approach would be to run a process at a regular interval that just snapshots what is out there and records new remote queries. After all, you really only need one capture per remote source, whether you care only about database links or about each client server.

One of the top database experts, Mariami Kupatadze, gave us a very elegant way of how to find remote sessions executing over a database link using x$k2gte, x$ktcxb, x$ksuse and v$session_wait in a single SQL statement.

A more detailed version called Identifying database link usage was written by John Hallas in 2015.

Long story short: for databases from 7.3 until 12.1, create a job capturing the distributed transactions based on the script given in Doc ID 104420.1 "Script to show Active Distributed Transactions" – you can modify the scripts if more than just the active remote transactions need to be captured. For 12.2 and after, just use the view DBA_DB_LINK_SOURCES.
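On 12.2 and later, the check can be as simple as this minimal sketch (see the documentation for the full column list of the view):

SELECT * FROM dba_db_link_sources;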


Enhanced Diagnosability of Oracle Database: What is an Attention Log?

In Database tuning, Databases, DBA, Oracle database, Oracle utilities on March 1, 2021 at 07:18

A correct diagnosis is three-fourths the remedy – Mahatma Gandhi

How true this statement is in the database world too! Often it is more difficult to diagnose an issue than to fix it. Going through the alert.log file to find information that needs attention has been simplified by the creation of a single attention.log file.

In the latest Oracle release, diagnosability of database issues is enhanced through a new attention log, as well as classification of information written to database trace files. The new attention log is written in a structured format (XML or JSON) that is much easier to process or interpret and only contains information that requires attention from an administrator.

The attention log is an externally modifiable file that contains information about critical and highly visible database events. We can use the attention log to quickly access information about critical events that need action.

So, where is the new log file introduced by Oracle Database 21c located? Notice that V$DIAG_INFO has a new entry called Attention Log:
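A minimal sketch of how to retrieve it, assuming the V$DIAG_INFO entry is named 'Attention Log':

SELECT name, value
FROM   v$diag_info
WHERE  name = 'Attention Log';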

The attention log contains a pre-determined, translatable series of messages, with one message for each event. There is one attention log for each database instance. There are two ways to view the contents of the attention log.

1. Open the attention.log file that can be found under $ORACLE_BASE/diag/rdbms/database_name/instance_id/log directory. Note that the documentation does not reflect the correct location of the attention.log file.

2. Query the V$DIAG_ALERT_EXT view using the required filters:

The filtering is done through message_level. A message_level of 1 corresponds to critical errors that need immediate action.
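A minimal sketch of such a query, using the standard V$DIAG_ALERT_EXT columns:

SELECT originating_timestamp, message_text
FROM   v$diag_alert_ext
WHERE  message_level = 1
AND    component_id = 'rdbms';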

You can filter V$DIAG_ALERT_EXT on either MESSAGE_TYPE or MESSAGE_LEVEL. As you can see in the example above, I am filtering for only critical errors. Here is what the types and levels mean by number:

MESSAGE_TYPE:

1: UNKNOWN – essentially the NULL type
2: INCIDENT_ERROR – an error for some internal or unexpected reason: must be reported to Oracle Support
3: ERROR – an error of some kind has occurred – not really meaningful
4: WARNING – an action occurred or a condition was discovered that should be reviewed and may require action
5: NOTIFICATION – reports a normal action or event, for example "logon completed"
6: TRACE – output of a diagnostic trace

MESSAGE_LEVEL:

1: CRITICAL – critical errors
2: SEVERE – severe errors
8: IMPORTANT – important message
16: NORMAL – normal message

The attention log has the following attributes, not all documented yet:

– Attention ID: a unique identifier for the message
– Attention type: Error, Warning, Notification or Additional information
– Message text
– Notification
– Error
– Urgency: Immediate, Soon, Deferrable or Information
– Scope: Session, Process, PDB Instance, CDB Instance, CDB Cluster, PDB or CDB
– Info
– Cause
– Action
– Class: Clusterware Admin, CDB admin or PDB admin
– Time

Note that an attention log entry seldom contains all attributes.

Autonomous Robots need Autonomous Databases or Top 100 Automatic Features of the Oracle Database

In Autonomous, Databases, DBA, Oracle database on February 13, 2021 at 17:34

One of the very few words in the English language originating from Slavic languages is the word robot. It comes from the Old Church Slavonic word rabota, which means (servitude or forced) labor/work.

100 years ago, in 1921, Karel Čapek's play RUR introduced the word "robot" to science fiction. And for the Oracle DBAs: he introduced just the robots, not the "Release Update Revisions" 🙂

In a recent paper called "A Realtime Autonomous Robot Navigation Framework for Human like High-level Interaction and Task Planning in Global Dynamic Environment", the authors proposed a self-driving technology framework to perform high-level interactions and human-like tasks:

You can see the overview of the proposed framework: a robot-centric on-demand database and a working memory are used for autonomous navigation, behavior planning, and learning. The robot can download the database from the network or cloud if necessary.

In yet another research paper, called "Autonomous robot system architecture for automation of structural health monitoring", the proposed system uses an autonomous robot and a database, and the proposed architecture is used to integrate all sub-systems for the automation of Structural Health Monitoring.

For such Autonomous Robots, we also need Autonomous Databases.

Here is a new Guide to Oracle's self-driving database, and below you can see Gartner's rankings, where Oracle Autonomous Database is on top of every list. But the aim of this post is not to compare Oracle Autonomous Database with other databases – it is to list all 100 automation/autonomous features of the Oracle database. First, have a look at what is new in the latest release, Oracle 21c:

– Native Blockchain Tables
– Automatic Index Optimization
– Automatic Zone Maps
– Automatic SQL Tuning Sets
– Automatic Indexing Enhancements
– Autonomous Health Framework and Object Activity Tracking System
– Automatic Materialized Views
– Auto-Result Cache

And here are the Top 100 features (in no order whatsoever) that make Oracle database “The Autonomous Database”:

1. Automatic Storage Management (ASM)
2. Automatic Workload Repository (AWR)
3. Automatic Data Optimization (ADO)
4. Automatic Database Diagnostic Monitor (ADDM)
5. Automatic Space Segment Management (ASSM)
6. Automatic Diagnostic Repository (ADR)
7. Automatic In-Memory
8. Automatic Big Table Caching
9. Automatic Degree of Parallelism
10. Automatic Index Optimization
11. Automatic Indexing
12. Automatic Materialized Views
13. Automatic SQL Plan Management
14. Automatic SQL Tuning Set
15. Automatic Zone Maps
16. Autonomous Health Framework
17. Automatic Memory Management
18. Automatic Undo Management
19. Automatic Undo Retention Tuning
20. Automatic Shared Memory Management
21. Automatic PGA Memory Management
22. Automatic Plan Capture with SPM
23. Automatic Refresh of Materialized Views
24. Automatic VIP failback
25. Automatic AWR PDB Autoflush
26. Automatic Segment Advisor
27. Automatic Tablespace Point In Time Recovery
28. Automatic Outage Resolution with Oracle Data Guard
29. Automatic Statistics Gathering
30. Automatically Created Storage Indexes in Exadata
31. Automated Database Upgrades with AutoUpgrade
32. Auto-List Partitioning
33. Automating Diagnostic Collection for Trace File Analyzer
34. Oracle GoldenGate Automatic CDR
35. Automatically Enabled Resource Manager for Database In-Memory
36. Automatic Data Corruption Repair
37. Automatic Oracle restart
38. Automated tracking: Object Activity Tracking System (OATS)
39. Automatic Storage Management Cluster File System
40. SQL*Net Auto-Detection of Support for Out-of-Band Breaks
41. Automatic Service Registration
42. Automatic Purging of Workspaces
43. Automatic DBCA creation of database management policy for RAC
44. Automatic Storage Management Cluster File System (Oracle ACFS) Snapshots
45. Oracle Data Guard Broker Support for Multiple Automatic Failover Targets
46. ACFS Auto-Resize
47. Automatic Background Index Maintenance for Oracle Text
48. Automatically Maintained Standby Databases
49. Automatically Enabling Oracle Database Quality of Service (QoS) Management
50. Automatic Collection of Dynamic Statistics
51. Automatically Synchronize Password Files in Oracle Data Guard Configurations
52. Automatic Data Optimization (ADO) for Hybrid Columnar Compression (HCC)
53. Automatic Locking of Inactive User Accounts
54. Automatic Management of Near Real-Time Indexes
55. Heat Map and Automatic Data Optimization Support for CDBs
56. High-Frequency Automatic Optimizer Statistics Collection
57. Automatic Storage Management (ASM) File Access Control
58. Automatic Support for Both SASL and Non-SASL Active Directory Connections
59. Automatic KDC Discovery When Configuring OCI Clients
60. Automatic Column Group Detection
61. Automatic CDR
62. AutoShrink for ACFS
63. Auto-Tuning Oracle C Client Interface (OCI) Applications
64. Automated Transaction Draining for Oracle Grid Infrastructure Upgrades
65. Automatic CDR Support of Tables with Unique Indexes/Constraints
66. Automatic Real-Time statistics
67. Automatic Correction of Non-logged Blocks at a Data Guard Standby Database
68. Automatic Disaster Recovery with RMAN
69. Automatic Channel Failover
70. Automatic Synchronization of Oracle Text Index
71. Automatic Registration of the Database with the Default Listener
72. Automatic Offlining of the Datafile in Noarchivelog
73. Automatic Datatype Conversion
74. Automatic Tape Drive Cleaning and a Cleaning Tape in a Tape Library
75. Automatic Controlfile Backup
76. Automatic Eject Of Tape After Backup Using Oracle Secure Backup
77. Automatic BackupSet Failover On Missing or Corrupt BackupPieces
78. Automatic BMR (Block Media Recovery)
79. Automatic System Tasks
80. Automatic Database Performance Monitoring
81. Automatic Archiving
82. Automatic Propagation in Replication
83. Automatic Job Scheduling
84. Automatic Resume of DataPump
85. Automatic (Real-Time) ADDM Trigger Control
86. Automatic Identify and Allocate Compute & Storage Resources for ADB
87. Automatic Install and Configure Database Software for ADB
88. Automatic Configure Oracle Grid Infrastructure for ADB
89. Automatic Configure Oracle Real Application Clusters for ADB
90. Automatic System & Storage Optimization for ADB
91. Automatically Derived Settings for ADB
92. Automatic Scripted Scaling for ADB
93. Automatic Health Check for ADB
94. Automatic Updates for ADB
95. Automatic Schema Level Optimization for ADB
96. Fault Detection & Resolution for ADB
97. Automatic Security Updates for ADB
98. Automatic Secondary Indexing for ADB
99. Automatic Database Encryption ADB
100. Automatic Database Backups for ADB

Note please: these are the Top 100 on my list – there are many more!

Automatic Materialized Views in Oracle Database 21c

In Databases, DBA, New features, Oracle database on February 5, 2021 at 15:51

“It’s supposed to be automatic, but actually you have to push this button.” ― John Brunner

With Oracle Database 21c, there is no button for Automatic Materialized Views – it is fully automatic.

DBA_MVIEWS has a new column called AUTO – that is how a DBA can distinguish the auto MVs from the manual ones. The auto naming convention is something like AUTO_MV$$_H3KBHG7DAH6T5 – they all start with AUTO_MV$$.
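A minimal sketch of how to list them, assuming the AUTO column holds YES for the automatic materialized views:

SELECT owner, mview_name
FROM   dba_mviews
WHERE  auto = 'YES';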

The Oracle database automatically collects workload statistics, SQL statements and query execution statistics. Oracle also maintains and purges the history of the workload. Automatic materialized views use workload information provided by the Object Activity Tracking System (OATS) as part of the automated decision-making processes.

All preconfigured / additionally configured parameters can be viewed from DBA_AUTO_MV_CONFIG:
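A simple sketch of the query:

SELECT * FROM dba_auto_mv_config;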

Here is how automatic materialized views work:

– The database automatically detects and collects workload query execution statistics including buffer-gets, database time, estimated cost, and other statistics: DBA_AUTO_MV_ANALYSIS_EXECUTIONS displays information about analysis and tuning executions, including concurrency, degree of parallelism (DOP) requested by the user and actual DOP upon execution finish, status, associated advisor, and informational or error message.

– Oracle creates candidate materialized views hidden from the database workload and verifies that they will deliver the projected performance benefit by test executing a sample of workload queries in the background: DBA_AUTO_MV_ANALYSIS_RECOMMENDATIONS displays recommendations associated with automatic materialized views

– Reports with detailed performance test results, and on which materialized views have been implemented, are provided: DBA_AUTO_MV_ANALYSIS_REPORT reports on analyses and recommendations, including task and execution names, sequence number of the journal entry, and message entry in the journal

– Automatic materialized view refresh is also automatic: DBA_AUTO_MV_REFRESH_HISTORY displays the owner name, view name, date, start and end time, elapsed time, status, and error number (if an error occurred) for each automatic materialized view refresh

Automatic MVs are off by default. As DBAs, we can use the CONFIGURE procedure of the DBMS_AUTO_MV package to configure automatic materialized views creation in the database. The AUTO_MV_MODE parameter enables (IMPLEMENT) or disables (OFF) automatic materialized views, or engages report-only mode (REPORT ONLY). The AUTO_MV_MAINT_TASK parameter activates or deactivates the task performing the maintenance (refreshes, validations, and clean up).

Let us enable it all:
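A minimal sketch (the ENABLE literal for AUTO_MV_MAINT_TASK is my assumption – check the DBMS_AUTO_MV documentation for the exact values):

EXEC DBMS_AUTO_MV.CONFIGURE('AUTO_MV_MODE','IMPLEMENT');
-- the maintenance task: refreshes, validations and clean up ('ENABLE' is an assumed value)
EXEC DBMS_AUTO_MV.CONFIGURE('AUTO_MV_MAINT_TASK','ENABLE');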

Oracle Database 21c includes data dictionary views that display information about automatic materialized views as well as OATS (Object Activity Tracking System). DBAs can use the DBMS_ACTIVITY.CONFIGURE procedure to control the three OATS parameters within a specific database.

ACTIVITY_INTERVAL_MINUTES defines the interval between snapshots:

exec dbms_activity.configure('ACTIVITY_INTERVAL_MINUTES','30');

ACTIVITY_RETENTION_DAYS defines how long snapshots are saved:

exec dbms_activity.configure('ACTIVITY_RETENTION_DAYS','60');

ACTIVITY_SPACE_PERCENT sets how much of available space is reserved for snapshots:

exec dbms_activity.configure('ACTIVITY_SPACE_PERCENT','10');

You will most likely get the following error though, even with alter system set "_exadata_feature_on"=true scope=spfile; and a restart:

ERROR at line 1:
ORA-40216: feature not supported
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_ACTIVITY", line 278
ORA-06512: at "SYS.DBMS_ACTIVITY", line 314
ORA-06512: at line 1

The feature is only available on Exadata and ExaCS.

Here are some additional details:

– Automatic materialized views support partitioned and non-partitioned base tables
– Incremental materialized view refresh is supported
– If partitioned base tables use either range, list, or composite partitioning, then they are eligible for Partition Change Tracking (PCT) view refresh
– If there is a performance advantage, the automatic materialized view recommendations will include a partitioned automatic materialized view based on the partitioning of the base table of the materialized view; the partitioning type supported is auto-list partitioning, which mirrors the partitioning of the fact table
– The automatic materialized view maintenance module decides the type of refresh that is the most beneficial at the time of refresh, and will decide during run time whether to switch from incremental refresh to full refresh
– DBAs can drop automatic materialized views using the dbms_auto_mv.drop_auto_mv procedure

DBAs are a wealth of knowledge!

In Autonomous, Cloud, Databases, DBA on October 19, 2020 at 10:53

Fifteen years ago, in 2005, I remember reading the following speculation about the DBA profession: Oracle 2020 – a Glimpse Into the Future of Database Management. I was curious what would be true 15 years later. Just have a look at some of the predictions:

– 2018: Oracle 14m provides inter-instance sharing of RAM resources. All Oracle instances become self-managing.
– 2019: The first 128-bit processors are introduced.
– 2020: Oracle 16ss introduces solid-state, non-disk database management.

Looking even further we read: “Changing Role of the Oracle DBA in 2020: But the sad reality of server consolidation was that thousands of mediocre Oracle DBAs lost their jobs to this trend. The best DBAs continued to find work, but DBAs who were used for the repetitive tasks of installing upgrades on hundreds of small servers were displaced.” And further: “Inside Oracle 2020: The world of Oracle management is totally different today than it was back in 2004. We no longer have to worry about applying patches to Oracle software, all tuning is fully automated and hundreds of Oracle instances all reside within a single company-wide server.” ADB rings a bell?

Four years ago, another forecast article called The 2020 DBA: A Look Into the Future appeared on dzone.com, predicting that despite the evolving role of the DBA, DevOps has actually made database administrators more relevant than ever before.

True, as the DBA profession is still among the Top 10 jobs in Technology. DBA is #4 in Best Technology Jobs, #15 in Best STEM Jobs and #30 in 100 Best Jobs.

Now, going back to the title of this blog post. Jeff Smith’s, Russ Lowenthal’s and Chris Saxon’s Oracle DBA 2020 Data Masterclasses are something I recommend now to every DBA. Jeff raised an important topic but let us take a step back.

Last month (September 2020), FlashDBA discussed the Evolution of the DBA from 1.0 through 2.0 until 3.0:

DBA 1.0: The (Good) Old Days – clearly the old days are over. Regardless if they were good or bad is a memory-lane discussion.

DBA 2.0: The IT Generalist – I remember when I saw this paper for the first time about 10-11 years ago: Oracle DBA 2.0. ASM, Direct NFS, Clusterware, VMware, Flash, Linux. The DBA had to learn OS, Storage and Network administration. With Exadata, I even heard the term DBA 2.1

DBA 3.0: The Cloud DevOps DBA – new game, rather new set! “A DBA building a database in the public cloud is making decisions which have a direct affect on the (quite possibly massive) monthly bill from AWS/Azure/GCP/OCI”.

Checking the Database Management Predictions from 2019, we can see that we are now close to:

DBA 4.0: Autonomous DBA – more attention on data management, data security, data architecture, machine learning, devops:

“Oracle Autonomous Database can quickly provision, resize, and relocate databases with little human interaction. However, as more database provisioning tasks are automated, DBAs will still need to classify the data.” – Michelle Malcher

“CEOs will force DBAs to step into more-important roles—such as data architects, data managers, and chief data officers—as a company’s data and machine learning algorithms become important drivers of the stock price.” — Rich Niemiec

“Oracle Enterprise Manager Cloud and other third-party tools, in conjunction with Oracle Multitenant will make managing large numbers of databases easier. DBAs will be able to manage 10 times or more databases after consolidation with Oracle Multitenant.” — Anuj Mohan

“DBAs need to understand that there is a true sea change afoot, and there’s no way to stop these market forces. Hopefully, we’ll all be able to embrace this tidal wave and avoid being caught up in the undertow.” — Jim Czuprynski

I believe the adoption of a hybrid infrastructure is inevitable, and the new skills needed (such as cloud set-up, configuration, and monitoring) are mostly cloud-related. DBAs will need to be able to assess the databases and define what they are best suited for. Replication of data and databases will become more complex in hybrid environments—especially when different clouds are involved.

That is coming in a few years at most (when most companies will adopt multi-cloud) – call the profession DBA 5.0 if you prefer, or just DBA. Every decade has its challenges for IT professionals. Challenges are becoming more complex, and DBAs are often on the front line with every new fashionable IT concept. And we can quote Larry Ellison, who said that the computer industry is the only industry that is more fashion-driven than women's fashion.

Finally, here are a few additional recent articles on the topic of the future of the DBA profession (if you are interested in reading more):

My Three Beliefs About The Future Of The DBA Job
The Future for the DBA
Will automated databases kill the DBA position?
What Does the Future Hold for DBAs?
For DBAs In 2020: Understand Your Worth, Seize The Moment
What Happens to DBAs When We Move to the Cloud?
The Future of The DBA in The Era of The Autonomous Database

MySQL DB System on Oracle Cloud Infrastructure

In Cloud, Databases, DBA, IaaS, MySQL, New features on September 1, 2020 at 08:37

MySQL has long been second only to Oracle in the DB-Engines rankings. Moreover, MySQL was their 2019 Database of the Year!

Now, MySQL is also available as DB System on Oracle Cloud Infrastructure. The database version is 8.0.21 with InnoDB Storage Engine. Here is how the set-up works:

1. Create a MySQL DB System:

2. Note that there are different shutdown types:

3. Connect to a MySQL DB system:

It is not possible to connect directly from a remote IP to the MySQL DB System endpoint. You must connect to a Compute Instance, and from the Compute Instance to the MySQL DB System.

We first install MySQL Shell on the Compute instance:

 
[opc@julian ~]$ sudo yum install https://dev.mysql.com/get/mysql80-community-release-el7-3.noarch.rpm
Loaded plugins: ulninfo, versionlock
mysql80-community-release-el7-3.noarch.rpm               |  25 kB     00:00
Examining /var/tmp/yum-root-X8dAdb/mysql80-community-release-el7-3.noarch.rpm: mysql80-community-release-el7-3.noarch
Marking /var/tmp/yum-root-X8dAdb/mysql80-community-release-el7-3.noarch.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package mysql80-community-release.noarch 0:el7-3 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package             Arch   Version
                                  Repository                               Size
================================================================================
Installing:
 mysql80-community-release
                     noarch el7-3 /mysql80-community-release-el7-3.noarch  31 k

Transaction Summary
================================================================================
Install  1 Package

Total size: 31 k
Installed size: 31 k
Is this ok [y/d/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Warning: RPMDB altered outside of yum.
  Installing : mysql80-community-release-el7-3.noarch                       1/1
  Verifying  : mysql80-community-release-el7-3.noarch                       1/1

Installed:
  mysql80-community-release.noarch 0:el7-3

Complete!
[opc@julian ~]$

===

[opc@julian ~]$ sudo yum install mysql-shell
Loaded plugins: ulninfo, versionlock
mysql-connectors-community                               | 2.5 kB     00:00
mysql-tools-community                                    | 2.5 kB     00:00
mysql80-community                                        | 2.5 kB     00:00
ol7_UEKR5                                                | 2.5 kB     00:00
ol7_latest                                               | 2.7 kB     00:00
(1/3): mysql80-community/x86_64/primary_db                 | 115 kB   00:00
(2/3): mysql-connectors-community/x86_64/primary_db        |  62 kB   00:00
(3/3): mysql-tools-community/x86_64/primary_db             |  76 kB   00:00
(1/5): ol7_UEKR5/x86_64/updateinfo                         |  72 kB   00:00
(2/5): ol7_latest/x86_64/group                             | 660 kB   00:00
(3/5): ol7_latest/x86_64/updateinfo                        | 2.9 MB   00:00
(4/5): ol7_UEKR5/x86_64/primary_db                         |  12 MB   00:00
(5/5): ol7_latest/x86_64/primary_db                        |  36 MB   00:00
Excluding 238 updates due to versionlock (use "yum versionlock status" to show them)
Resolving Dependencies
--> Running transaction check
---> Package mysql-shell.x86_64 0:8.0.21-1.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package         Arch       Version             Repository                 Size
================================================================================
Installing:
 mysql-shell     x86_64     8.0.21-1.el7        mysql-tools-community      31 M

Transaction Summary
================================================================================
Install  1 Package

Total download size: 31 M
Installed size: 106 M
Is this ok [y/d/N]: y
Downloading packages:
warning: /var/cache/yum/x86_64/7Server/mysql-tools-community/packages/mysql-shell-8.0.21-1.el7.x86_64.rpm: Header V3 DSA/SHA1 Signature, key ID 5072e1f5: NOKEY
Public key for mysql-shell-8.0.21-1.el7.x86_64.rpm is not installed
mysql-shell-8.0.21-1.el7.x86_64.rpm                        |  31 MB   00:02
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-mysql
Importing GPG key 0x5072E1F5:
 Userid     : "MySQL Release Engineering "
 Fingerprint: a4a9 4068 76fc bd3c 4567 70c8 8c71 8d3b 5072 e1f5
 Package    : mysql80-community-release-el7-3.noarch (@/mysql80-community-release-el7-3.noarch)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-mysql
Is this ok [y/N]: y
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : mysql-shell-8.0.21-1.el7.x86_64                              1/1
  Verifying  : mysql-shell-8.0.21-1.el7.x86_64                              1/1

Installed:
  mysql-shell.x86_64 0:8.0.21-1.el7

Complete!
[opc@julian ~]$

Then connect to your DB System using the MySQL client:

 
mysqlsh ICO@10.0.0.3 

[opc@julian ~]$ mysqlsh ICO@10.0.0.3
Please provide the password for 'ICO@10.0.0.3': ************
MySQL Shell 8.0.21

Copyright (c) 2016, 2020, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.
Creating a session to 'ICO@10.0.0.3'
[opc@julian ~]$

My suggestion is to start from the MySQL Database Service Overview.

MySQL database is also available on GCP, Azure and AWS, but check the pricing:

Do not just Simplify with Databases: Automate, Innovate and Innovate!

In Autonomous, Database tuning, Databases, DB2 database, DBA, PostgreSQL, SQL Server on June 15, 2020 at 09:46

“Automation applied to an inefficient operation will magnify the inefficiency” – Bill Gates
“Innovation distinguishes between a leader and a follower!” – Steve Jobs

In the database industry, simplification, automation and innovation have sort of become buzzwords – we hear them more often in meetings and see them in PowerPoint presentations than in real-world implementations. Databases are being patched, upgraded and migrated, but how often are automation and innovation part of the process?

A recent article entitled “After The Pandemic, Don’t Simply Automate. Innovate” quoted an Accenture report that surveyed 1,500 C-suite executives in 16 industries: 76% of respondents said they were struggling to scale technology across their businesses. The numbers tell the story: a full 84% of C-suite executives believe they must leverage artificial intelligence (AI) to achieve their growth objectives.

A survey of database managers and administrators shows the benefits they expect from automation, plus what they think about DBAs’ current workloads. Here are the highlights:

– 63% expect faster innovation from database automation
– 62% of the data pros expect that data will grow 25% or more annually at their enterprise over the next three years
– 66% said the DBA/data team’s overwork or scheduling issues are the biggest challenge in database deployments
– The majority of companies have not made significant automation gains in any key data management processes
– 69% of the DBAs think automation will make their job more business-centric.

So, why aren’t we seeing those innovative automations being implemented in full? Is it the DBA mindset, the business being behind the technology and the innovation, lack of time and resources, or lack of knowledge and trust in the new?

We can try to look at the issue from the DBA perspective. Recently, Jeff Erickson listed the 3 Can’t-Miss Ways to Turn Your DBA Skills into Gold. Here is my paraphrased version of these 3 ways:

1. Expand DBA skills towards app development – Python, PL/SQL, etc.
2. DBAs should get into the data science game – look more into Oracle Autonomous Database which comes with Oracle ML, a rich library of machine learning algorithms
3. Convert the “A” in DBA from Administrator to Architect – look into big data and data architecture – big data is the driver for innovation in databases

At the end of the article, he quoted Kerry Osborne about dealing with vast and growing data volumes: “That added complexity and scale should sound like a huge opportunity knocking“.

An interesting article by Duncan Harvey entitled What is the innovation, automation dynamic? pointed out how autonomous systems are finally giving people the headspace they need to innovate at speed in the digital era.

So, is there a Database Automation Guide? Yes indeed!

Get a cloud boost with Oracle Autonomous!

Have a look at all the Automatic features that came after Oracle 9i. Oracle 7 and 8 were probably the first releases that added automatic functionality to the database, but the boost started with 9i. 20c is not skipping that part either.

Of all database brands, the Oracle Database is arguably the most advanced in terms of tools, automation features and innovation capabilities. From 11.2 until Oracle 20c alone, there are 133 new features purely focused on automation. But I still see databases where even old automation features (such as Automatic SQL Tuning) have not yet been implemented.
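To illustrate how little effort some of these take to adopt, here is a minimal sketch of enabling the Automatic SQL Tuning task with automatic acceptance of SQL profiles (test carefully before using in production):

-- enable the automatic SQL tuning maintenance task
EXEC DBMS_AUTO_TASK_ADMIN.ENABLE(client_name => 'sql tuning advisor', operation => NULL, window_name => NULL);
-- let the task automatically accept recommended SQL profiles
EXEC DBMS_SQLTUNE.SET_AUTO_TUNING_TASK_PARAMETER('ACCEPT_SQL_PROFILES', 'TRUE');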

Let us look into a few database brands:

SQL Server:

There are dozens of Automatic features and properties embedded into SQL Server:

Automatic Tuning can do a lot of things, such as automated performance tuning of databases, automated verification of performance gains, and automated rollback and self-correction. More importantly, it is the only database besides Oracle to have Automatic Indexing. It is worth checking how automatic index management works in the Azure SQL database: Azure SQL Database analyzes your workload, identifies the queries that could be executed faster if you create an index, identifies indexes that are not used for a long period of time, and identifies duplicated indexes in the database.
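For comparison, here is a sketch of how these are switched on in Azure SQL Database, based on the documented AUTOMATIC_TUNING options (verify the option names against the current Azure documentation):

ALTER DATABASE CURRENT
SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON, CREATE_INDEX = ON, DROP_INDEX = ON);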

Db2:

IBM Db2 also has a relatively good list of automatic features, including self-tuning memory (single-partition databases only), automatic storage, automatic database backups, automatic reorganization and automatic statistics collection. The Db2 documentation claims that the Db2® autonomic computing environment is self-configuring, self-healing, self-optimizing, and self-protecting, but this is far behind what Oracle ADB has to offer. Tuning SQL is more difficult in Db2 than in Oracle, and the single reason for that is the tools and features offered by the two systems.

Redshift:

Amazon’s Redshift has a good set of features. Looking at what automation is like, we find enough for a relatively new database brand (Redshift, not PostgreSQL).

In terms of storage, it is mostly about how Redshift automatically takes care of data formatting and data movement into S3, and how, with managed storage, capacity is added automatically to support workloads of up to 8PB of compressed data.

There is of course Automated provisioning and Automated backups. Plus, Automatic workload management (WLM) uses machine learning to dynamically manage memory and concurrency, helping maximize query throughput.

Also, Amazon Redshift continuously monitors the health of the cluster, and automatically re-replicates data from failed drives and replaces nodes as necessary for fault tolerance.

Very much like Oracle RAC and Oracle Exadata, as of May 2020, Amazon Redshift now leverages Bloom filters to enable early and effective data filtering. Redshift automatically determines what queries are suitable for leveraging Bloom filters at query runtime.

Recently, Amazon Redshift also introduced ATS (Automatic Table Sort), an automated alternative to Vacuum Sort. Automatic table sort complements Automatic Vacuum Delete and Automatic Analyze, and together these capabilities fully automate table maintenance. Automatic table sort is now enabled by default on Redshift tables where a sort key is specified.

Amazon Redshift automatically takes incremental snapshots (backups) of your data every 8 hours or 5 GB per node of data change. You now get more information and control over a snapshot including the ability to control the automatic snapshot’s schedule.

In terms of SQL tuning, Amazon Redshift now automatically and elastically scales query processing power to provide consistently fast performance for hundreds of concurrent queries. The database automatically shuts down Concurrency Scaling resources to save you cost. Also, Amazon Redshift now updates table statistics by running ANALYZE automatically but that is no news for Oracle database users. Amazon Redshift improves query performance by automatically moving read and write queries to the next matching queue without restarting the moved queries.

Snowflake:

Looking into the Top 10 cool things about Snowflake, we see the fact that the JSON documents are stored in a table and optimized automatically in the background for MPP and columnar access. There is Automatic Encryption of Data and Automatic Query Optimization. No Tuning! Really cool as “It is all handled “auto-magically” via a dynamic query optimization engine in our cloud services layer. So, no indexes, no need to figure out partitions and partition keys, no need to pre-shard any data for even distribution, and no need to remember to update statistics.”

I like the “auto-magical” part 🙂

My interest was caught by the deep dive Revolutionary features of Snowflake that sets it apart — A Deep dive: all I found on automation was “Automatic scale down”. No comment here.

PostgreSQL:

If we look into the PostgreSQL Feature Matrix, we see only one feature about automation: Automatic plan invalidation. There is a mention of WAL Buffer auto-tuning and Autovacuum.

For databases such as MySQL, PostgreSQL and MongoDB, check this article: Why is Database Automation Important?

A recent discussion Oracle vs. PostgreSQL basically tells it all – in case you have the patience to read it – here are both sides:

Pro-Oracle: “Comparing Postgres with Oracle is a bit like comparing a rubber duck you might buy your three year old, with a 300000 ton super tanker. Do they both float? Yeah, but that’s about the only similarity.”
Pro-PostgreSQL: “So bottom line, PostgreSQL beats Oracle by far in my opinion, at least as far as installing it and sizes are concerned.”

Looking into the Features and Benefits of EDB Postgres Cloud Database Service, we see Automated Notifications, Automated Monitoring, Automated Backups and Automated Replica Failover.

Oracle:

Of course, Oracle ADB (Autonomous Database) has all automation features embedded into the service, but not all applications are certified for or fit ADB. Not yet, at least. Still, I think that most automation features can be easily implemented (in a non-ADB) with some prior testing, and the benefits are enormous. The list of automation features is long – here are the major ones:

Features in 19c and 20c which are worth implementing (at least looking into and testing) are:

Automatic Indexing
Automatic SQL Plan Management
Automatic Index Optimization
Automatic Zone Maps

Automation and innovation go hand in hand. Database innovation in an enterprise means using the database technology in new ways to create a more efficient organization and improve the alignment between technology and business: completing the same DBA work with smaller teams, and implementing automation features in the database to reduce storage and labor costs and increase system uptime and performance.

Bottom line for DBAs: regardless of the database, there is room for innovation in every database brand!

Six new features of Data Pump in Oracle Database 20c

In Cloud, Data, Databases, DBA, New features, Oracle database, Oracle utilities on June 8, 2020 at 10:55

In Oracle version 20c, the Data Pump export job accepts 54 different parameters and the import job can have up to 59 different parameters. Most of us who used the old-fashioned traditional export/import tools until Oracle 10g (and even afterwards) probably knew the exp/imp syntax by heart and never needed to copy-paste the command from a text file. With more than 50 parameters in 20c, I am sure it is a different story.

There are 6 new features of Data Pump in Oracle 20c and here are their short descriptions:

1. Oracle Data Pump checksums support for cloud migrations

The new parameter CHECKSUM can be used for validation, as a checksum is now added to the dumpfile. Oracle Data Pump can be, and mostly is, used for migrating data from on-premises Oracle Databases into the Oracle Public Cloud. You can use the checksum to help confirm that the file is valid after a transfer to or from the Oracle Cloud object store. Checksums are also useful after saving dumpfiles to on-prem locations to confirm that the dump files have no accidental or malicious changes.

Here is an example of how to use the new CHECKSUM parameter – I am exporting my own schema. The parameter accepts 2 values:

YES – Oracle calculates a file checksum for each dump file in the export dump file set
NO – Oracle does not calculate file checksums

The checksum is calculated at the end of the Data Pump job. Notice that I am not specifying the other new parameter, CHECKSUM_ALGORITHM, below, thus using its default value SHA256. The other optional secure hash algorithms are: CRC32, SHA384 and SHA512.

 
SQL> !expdp julian DIRECTORY=data_pump_dir DUMPFILE=jmd.dmp CHECKSUM=YES

Export: Release 20.0.0.0.0 - Production on Sat May 30 07:20:55 2020
Version 20.2.0.0.0

Copyright (c) 1982, 2020, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 20c EE High Perf Release 20.0.0.0.0 - Production
Starting "JULIAN"."SYS_EXPORT_SCHEMA_01":  julian/********@//localhost:1521/novopdb1.laika7.laika.oraclevcn.com DIRECTORY=data_pump_dir DUMPFILE=jmd.dmp CHECKSUM=YES
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_INDEX/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PASSWORD_HISTORY
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
. . exported "JULIAN"."BLOGS"                            9.983 MB   73991 rows
. . exported "JULIAN"."SALES"                            14.38 MB  295240 rows
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . exported "JULIAN"."RDBMS_BRANDS"                     7.726 KB      12 rows
. . exported "JULIAN"."CLIENTS"                          6.007 KB       2 rows
. . exported "JULIAN"."T"                                5.476 KB       1 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set.
Master table "JULIAN"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
Generating checksums for dump file set
******************************************************************************
Dump file set for JULIAN.SYS_EXPORT_SCHEMA_01 is:
  /u01/app/oracle/admin/ORCL/dpdump/9D45645C541E0B7FE0530206F40AE9E9/jmd.dmp
Job "JULIAN"."SYS_EXPORT_SCHEMA_01" successfully completed at Sat May 30 07:24:59 2020 elapsed 0 00:04:01

You probably noticed the ORA-39173 warning at the end – it is because I did not specify an encryption password while exporting encrypted data. It is just a warning and not a real error.

It goes without saying that COMPATIBLE must be set to at least 20.0.

2. Oracle Data Pump exports from Autonomous Database

Starting with Oracle Database 20c, Data Pump can perform exports from Oracle Autonomous Database into dump files in a cloud object store. Thus, now we can easily migrate data out from an Oracle Autonomous Database and import it into another location.

For how to create object store credentials check either Oracle Cloud : Autonomous Database (ADW or ATP) – Load Data from an Object Store (DBMS_CLOUD) by Tim Hall or DBMS_CLOUD Package – A Reference Guide by Christian Antognini.

New in 20c is the CREDENTIAL parameter, which enables the export to write data to object stores. The CREDENTIAL parameter changes how expdp interprets the text string in DUMPFILE. If the CREDENTIAL parameter is not specified, then the DUMPFILE parameter can specify an optional directory object and file name in directory-object-name:file-name format. If the CREDENTIAL parameter is used, then it provides authentication and authorization for expdp to write to one or more object storage URIs specified by DUMPFILE.

Here is an example assuming that we have already created the credential_name JMD_OBJ_STORE_CRED:

 
expdp julian DUMPFILE=https://swiftobjectstorage.eu-frankfurt-1.oraclecloud.com/v1/juliandon/data_pump/jmd.dmp 
CREDENTIAL=jmd_obj_store_cred

Need more information? The Oracle Cloud Infrastructure User Guide has “only” 5952 pages. Last week, it was 5919. Enjoy reading it 🙂 And while you are reading this blog post, it will probably grow past 5952 pages… I do not think I have ever seen a longer user guide in our industry!

3. Oracle Data Pump includes and excludes in the same operation

Oracle Data Pump 20c can include and exclude objects in the same export or import operation, meaning that Data Pump commands can now contain both the INCLUDE and EXCLUDE parameters in the same operation. By enabling greater specificity about what is being migrated, this enhancement makes it easier to migrate to Oracle Cloud, or to another on-premises Oracle Database.

Note: when you include both parameters in a command, Oracle Data Pump processes the INCLUDE parameter first and includes all objects identified by the parameter. Then it processes the EXCLUDE parameter, eliminating the excluded objects from the included set. Here is an example including only 2 tables (SALES and CLIENTS) but excluding all indexes except the PKs (real use case: you want to enable Oracle Auto Indexing in ADB, and while importing the data you need to drop all indexes except the PKs):

 
expdp julian SCHEMAS=JULIAN DUMPFILE=julian.dmp REUSE_DUMPFILES=YES 
INCLUDE=TABLE:\"IN \(\'CLIENTS\',\'SALES\'\)\" 
EXCLUDE=INDEX:\"LIKE \'IDX\%\'\"

4. Oracle Data Pump parallelizes transportable tablespace metadata operations

Starting with Oracle Database 20c, Data Pump improves transportable tablespace metadata operations with parallelism: transportable tablespace exports can now be done with degrees of parallelism greater than 1.

Parallelism higher than 1 improves TTS export and import performance, especially when there are really a lot of database objects in the data files, including tables, indexes, partitions and subpartitions. We will probably see the real benefit when exporting packaged application schemas from SAP, EBS, etc.
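A sketch of what such a parallel TTS export might look like (the tablespace name is hypothetical, and the tablespaces must be made read-only first):

expdp system DIRECTORY=data_pump_dir DUMPFILE=tts%U.dmp LOGFILE=tts_exp.log TRANSPORT_TABLESPACES=sap_data PARALLEL=4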

5. Oracle Data Pump provides optional index compression

In Oracle Database 20c, Data Pump supports optional index compression on import including the Autonomous Database by introducing a new TRANSFORM parameter clause INDEX_COMPRESSION_CLAUSE. Thus, you can control whether index compression is performed during import.

If NONE is specified in the INDEX_COMPRESSION_CLAUSE, then the index compression clause is omitted (and the index is given the default compression for the tablespace). However, if you use compression, then Oracle recommends that you use COMPRESS ADVANCED LOW. Indexes are created with the specified compression.

If the index compression clause is more than one word, then it must be contained in single or double quotation marks. Also, your operating system can require you to enclose the clause in escape characters, such as the backslash character. Here is an example of how to use the INDEX_COMPRESSION_CLAUSE:

 
TRANSFORM=INDEX_COMPRESSION_CLAUSE:\"COMPRESS ADVANCED LOW\"

Specifying this transform changes the type of compression for all indexes in the job.

6. Oracle Data Pump resumes transportable tablespace jobs

Starting with Oracle Database 20c, Data Pump resumes transportable tablespace export and import jobs that are stopped due to errors or any other problems. Oracle Data Pump’s capacity to resume these stopped jobs helps us save time and makes the system more available.

Transportable jobs are now restartable at or near the point of failure.

To restart the job JMD_EXP_20C, first perform:

expdp system/password attach=jmd_exp_20c

Then restart the job with:

Export> continue_client

For an import job, the syntax is exactly the same.

Automatic Zone Maps in the Oracle Database

In Data, Database tuning, Databases, DBA, New features, Oracle database on May 18, 2020 at 06:18

A zone is a set of contiguous data blocks on disk.

A zone map is an index-like structure built on a table and stores information about the zones of that table.

There are 2 major differences between indexes and zone maps:

– A zone map stores information per zone instead of per row which makes it much more compact than an index
– A zone map is not actively managed the way an index is kept in sync with the DML on the table

Zone maps are closer as a concept to Exadata’s storage indexes than to B-tree indexes.

Before going into how Automatic Zone Maps work in Oracle 20c, let me explain the concept with an example. Consider a small table containing basic information about some relational databases from db-engines.com (rank, score, initial and last release, cloud based):
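The table could look something like this – a hypothetical sketch matching the three columns used in the zone map below:

CREATE TABLE rdbms_brands (
  rdbms_name       VARCHAR2(30),
  db_engines_rank  NUMBER,
  db_engines_score NUMBER,
  initial_release  NUMBER(4),
  last_release     VARCHAR2(30),
  cloud_based      VARCHAR2(3)
);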

The RDBMS_BRANDS segment has 6 data blocks with 2 rows per block:

Let us now create the zonemap on the RDBMS_BRANDS table (on 3 columns only):

 
SQL> CREATE MATERIALIZED ZONEMAP rdbms_zmap ON 
rdbms_brands (db_engines_rank, db_engines_score, initial_release); 

Materialized zonemap RDBMS_ZMAP created.

We now have 3 zones; each zone contains two blocks and stores the minimum and maximum of db_engines_rank, db_engines_score and initial_release:

Next, let us run a query returning all RDBMS brands with ranking score more than 1000:
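The query might look like this, using the hypothetical columns from the sketch above:

SELECT rdbms_name, db_engines_score
FROM   rdbms_brands
WHERE  db_engines_score > 1000;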

Looking at the execution plan below, we see that Oracle is scanning only Zone 1, as the maximum score in all other zones is smaller than 1000:

That is how zone maps work … but what is new in Oracle 20c?

We can now enable automatic creation and maintenance of basic zone maps for both partitioned and non-partitioned tables. For now, though, creation is not available for join zone maps, IOTs, external tables or temporary tables!

In 20c, you can use the new package DBMS_AUTO_ZONEMAP to enable Automatic Zone Maps in the database. Automatic zone map creation is turned off by default.

These four values are allowed for the parameter AUTO_ZONEMAP_MODE:

ON: Turns on auto zone map feature completely. Both for foreground and background zone map creation and maintenance
OFF: Turns off auto zone map feature completely. Both for foreground and background zone map creation and maintenance
FOREGROUND: Turns on only for foreground zone map creation and maintenance
BACKGROUND: Turns on only for background zone map creation and maintenance
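A minimal sketch of turning the feature on completely (assuming CONFIGURE is the setter procedure of DBMS_AUTO_ZONEMAP):

EXEC DBMS_AUTO_ZONEMAP.CONFIGURE('AUTO_ZONEMAP_MODE','ON');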

You may use the ACTIVITY_REPORT function to view auto zone map activity for a given time window. Note that the background job that performs automatic zone map processing starts once per hour and each run may last up to three hours.

 
SET LONG 100000
SELECT dbms_auto_zonemap.activity_report() report FROM dual;

These two zone map related views show the most important information DBAs need:

DBA_ZONEMAPS displays all zone maps in the database
DBA_ZONEMAP_MEASURES displays the measures for all zone maps in the database

On a final note: Automatic Zone Maps are for now available only on Exadata and require the Oracle Partitioning option.