
AI in AI: Artificial Intelligence in Automatic Indexing

In DBA on March 7, 2019 at 17:30

“By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.” — Eliezer Yudkowsky

CNBC: 40% of A.I. start-ups in Europe have almost nothing to do with A.I.

Oracle 19c brings one key feature which does not exist in other database systems: Automatic Indexing. Something very similar does exist in Azure SQL Database, but with some limitations.

For a very long time, both DBAs and developers have been struggling (really struggling) with which indexes should be created, what type of indexes they should be created as, and which indexes should be dropped from the database. Automatic Index creation (AI Creation) means the explicit creation of new indexes, and also the dropping of existing, unused indexes, without human intervention.

In the long run, this will arguably be one of the most important features in the Oracle database. I have already covered the basics in a previous blog post entitled Automatic Indexing in 19c. The expert system works in the following way, passing through the stages of identification, verification and decision making:

Based on the captured workload, Oracle’s expert system first identifies the index candidates, which are created as UNUSABLE & INVISIBLE (metadata only).

Then, there is the verification process. Some indexes will become VALID (physical segments are created) but will still stay INVISIBLE to the optimizer.

Later, Oracle decides if some of these indexes can become VISIBLE and this happens based on how the performance increases and how these new indexes affect other activities in the database.
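You can follow this lifecycle yourself from the data dictionary: in 19c, DBA_INDEXES has a new AUTO column, and the STATUS and VISIBILITY columns show the stage an auto index is currently in. A minimal query sketch:

-- list all auto indexes with their current lifecycle stage
-- (UNUSABLE/INVISIBLE -> VALID/INVISIBLE -> VALID/VISIBLE)
select owner, index_name, status, visibility
from dba_indexes
where auto = 'YES'
order by owner, index_name;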

Look for possible errors using this query:

select EX.execution_type, EX.execution_name,F.message
from DBA_ADVISOR_FINDINGS F, DBA_ADVISOR_EXECUTIONS EX 
WHERE F.EXECUTION_NAME = EX.EXECUTION_NAME AND F.TYPE = 'ERROR';

If you need a detailed report from (say) the last 30 days, here is how to obtain it:

spool report
select dbms_auto_index.report_activity(sysdate-30,null,'text','all','all') report from dual;
spool off

A sample report shows, besides the index candidates, space used and fatal errors, also the overall improvement factor and the SQL statement improvement factor.

When using/implementing the feature, keep the following in mind (see the sketch after this list):

– AUTO_INDEX_MODE must be set in every PDB: even when set at container level, it does not cascade to the pluggable databases
– Manually created indexes are not dropped by default; you need to set AUTO_INDEX_RETENTION_FOR_MANUAL separately
– Follow the expert system runs via CDB_AUTO_INDEX_EXECUTIONS
– Hint for an INVISIBLE VALID index (for example /*+ index(clients SYS_AI_64uvm6wb5168u) */): I have seen the index become VISIBLE in a second (if really useful)
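For illustration, here is the sketch mentioned above: enabling the feature inside a pluggable database and allowing unused manual indexes to be dropped (the PDB name is hypothetical and the 90-day retention is just an example value):

-- switch to the PDB (hypothetical name) and enable Automatic Indexing there
alter session set container = MYPDB;
exec DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','IMPLEMENT');
-- let the expert system drop manually created indexes unused for 90 days
exec DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_RETENTION_FOR_MANUAL','90');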

For more details, check the recent blog post 19c Auto Index: the dictionary views by Franck Pachot.

Automatic Indexing is one of the best examples of Artificial Intelligence and Machine Learning in the IT industry. Really! I still remember a 5TB Oracle database I used to administer (a mission-critical, 24×7 system) where the indexes were almost 4.5TB in size while the real data was only about half a TB.


Automatic Indexing in 19c

In Autonomous, Database tuning, Databases, DBA, Oracle database on February 18, 2019 at 17:38

One of the most impressive new features of Oracle Database 19c is Automatic Indexing. Arguably, this is the most interesting innovation in the database world for a rather long time.

I remember some years ago when a DBA asked me at an Oracle conference: “Julian, why are half of the presentations at Oracle database conferences only about performance tuning? Is the Oracle database performing that badly that people should tune it all the time?” Sigh…

With 19c and ADB (Oracle Autonomous Database), things look very different now, don’t they? Automatic Indexing provides what database systems need: continuous optimization of the database workload, stable & solid performance and almost no human interaction. Let me share some of my early experience with Automatic Indexing and where human interaction is needed.

For now (February 18th, 2019), Oracle 19c is only available on Exadata (Linux 7.4) and in order to enable Automatic Indexing you need to do the following:

EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','IMPLEMENT');

The so-called expert system of Automatic Indexing runs every 15 minutes for up to one hour. Note that I disabled the job from 4:43 till 5:56. The Resource Manager plan limits the task to 1 CPU only, and the next run is skipped if the job has not been completed within the 15 minutes.

Here are the details of how Automatic Indexing works, but what is most important to remember is the following:

– The auto index candidates are created as invisible auto indexes
– If the performance of SQL statements is not improved by the auto indexes, then the indexes are marked as unusable and the corresponding SQL statements are blacklisted
– Auto indexes cannot be used for any first-time SQL run against the database
– Auto indexes are created as single-column, concatenated or function-based indexes, and they all use advanced low compression
– Unused auto indexes are deleted after 373 days (this period can be changed)
– Unused non-auto indexes (manual indexes) are never deleted by the automatic indexing process by default, but they can be deleted automatically if you configure AUTO_INDEX_RETENTION_FOR_MANUAL

Auto Indexing can be disabled at any time, or set to reporting mode (new auto indexes are created as invisible indexes, so that they cannot be used in SQL) with the following commands:

EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','OFF');

 

EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','REPORT ONLY');

Here is a way to ask Oracle to create new auto indexes in a separate tablespace called AUTO_INDEX_TS:

SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_DEFAULT_TABLESPACE','AUTO_INDEX_TS');

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01

You can easily check the configuration of Automatic Indexing for the root container and the PDBs from CDB_AUTO_INDEX_CONFIG.
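For example, a minimal query sketch:

-- per-container Automatic Indexing settings
select con_id, parameter_name, parameter_value
from cdb_auto_index_config
order by con_id, parameter_name;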

If you need a report of what happened during the expert system activity (either for the last 3 days or for the last run), here is a way to generate it:

set long 300000
select DBMS_AUTO_INDEX.REPORT_ACTIVITY(SYSTIMESTAMP-3,SYSTIMESTAMP,'TEXT','ALL','ALL') from dual;
select DBMS_AUTO_INDEX.REPORT_LAST_ACTIVITY('TEXT','ALL','ALL') from dual;

These are the most important views about Auto Indexing:

DBA_AUTO_INDEX_EXECUTIONS: history of execution of automatic indexing tasks
DBA_AUTO_INDEX_STATISTICS: statistics related to auto indexes
DBA_AUTO_INDEX_IND_ACTIONS: actions performed on auto indexes
DBA_AUTO_INDEX_SQL_ACTIONS: actions performed on SQL statements for verifying auto indexes
DBA_AUTO_INDEX_CONFIG: configuration settings related to auto indexes
DBA_AUTO_INDEX_VERIFICATIONS: stats about PLAN_HASH_VALUE, AUTO_INDEX_BUFFER_GETS, etc.

The new package DBMS_AUTO_INDEX can be used for 3 main things:

1. Configuration of the parameters related to Auto Indexing
2. Drop *all* the indexes except the ones used for constraints
3. Report the activity of the “expert system”.
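A minimal sketch of all three usages (the schema name is hypothetical, and DROP_SECONDARY_INDEXES is shown with its owner-level variant):

-- 1. configure an Auto Indexing parameter
exec DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','REPORT ONLY');
-- 2. drop all secondary indexes (those not used for constraints) of a hypothetical schema
exec DBMS_AUTO_INDEX.DROP_SECONDARY_INDEXES('SALES_OWNER');
-- 3. report the activity of the expert system
select DBMS_AUTO_INDEX.REPORT_LAST_ACTIVITY() from dual;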

Finally, here are some additional resources:

Automatic Indexing in Oracle Database 19c
Oracle Database 19c is now available!
Managing Auto Indexes

How to check if I have any auto indexes in my database:

select auto, count(*) from dba_indexes group by auto;

Few interesting facts about Oracle ADB, Redshift and Snowflake

In Autonomous, Data Warehouse, Databases, DBA on January 14, 2019 at 16:17

Building a new data warehouse in the cloud, or migrating an existing one to the cloud, requires careful consideration, and the answer to the question “Which cloud should I use?” is often “It depends”.

An interesting comparison of system properties comparing Amazon Redshift vs. Oracle vs. Snowflake can be found on db-engines.com

There are several other options too: Azure SQL Data Warehouse, Presto, Google BigQuery, etc.

An interesting benchmark paper called “Data Warehouse Benchmark: Redshift, Snowflake, Azure, Presto and BigQuery” by Fivetran is worth reading!

Another comparison called Interactive Analytics: Redshift vs Snowflake vs BigQuery is already more than 2 years old but still interesting.

Recently, things have changed. Oracle’s Autonomous Data Warehouse Cloud has been in GA for almost 1 year (since March 2018). For enterprise loads and mission-critical systems, ADW is arguably the best solution right now.

Viscosity compared both Oracle Autonomous and Amazon Redshift. The result? Check it here: Amazon vs Oracle: Data Warehouse Services, How do They Compare?

In short, the conclusion of the research above is:

– Oracle’s ADW was able to achieve data retrieval at the lowest latencies and achieved the highest volume of queries per hour, in terms of both serial query execution and multi-user query throughput.
– Oracle’s ADW consistently outperformed Redshift by a factor of 4x in both sets of tests.

And do not ignore the db-engines ranking! Only one of the three is in the Top 10.

On top of all the papers above, here are 10 differences, or let us call them less-known technical facts (in no order of importance), between Oracle Autonomous, Amazon Redshift and Snowflake:

1. Snowflake compute usage is billed on a per-second basis, with a minimum of 60 seconds. Amazon Redshift is based on PostgreSQL 8.0.2 and is built on top of technology from the MPP data warehousing company ParAccel. Oracle Autonomous Database is based on Exadata and 18c.

2. In Oracle Autonomous Cloud, you can provision up to 128 CPUs and 128TB directly from the cloud console but you can provision more if needed.

3. Snowflake manages all aspects of how data is stored in S3 including data organization, file sizes, structure, compression, and statistics.

4. The only things needed for BYOL in Oracle Autonomous Database are Multitenant and RAC (only when using more than sixteen OCPUs). The standby option (not yet available) will require Active Data Guard as well.

5. Snowflake does not disclose the information about processing power and memory. Oracle do disclose the information via internal views but you cannot directly define the SGA or PGA size.

6. Redshift is not built as a high-concurrency database with many concurrently running queries; AWS recommends that you execute no more than 15 queries at a time. The number of concurrent user connections that can be made to a cluster is 500.

7. Oracle ADW and ATP allow you to partition both indexes and tables. In Snowflake partitioning is handled internally. Amazon Redshift does not support tablespaces, table partitioning, inheritance, and even certain constraints. Amazon Redshift Spectrum supports table partitioning using the CREATE EXTERNAL TABLE command.

8. The maximum number of tables in Amazon Redshift is 9,900 for large and xlarge cluster node types and 20,000 for 8xlarge cluster node types. The limit includes temporary tables. An Oracle database does not have a limit for the number of tables.

9. Oracle automatically applies all security updates (and online!) to ensure data is not vulnerable to known attack vectors. Additional in-database features like Virtual Private Database and Data Redaction are also available.

10. There is no operation in Snowflake for collecting database statistics. It is handled by the engine. In Oracle, database statistics collection is allowed. Both Oracle Autonomous and Amazon Redshift monitor changes to your workload and automatically update statistics in the background.
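To make point 10 concrete: in Oracle, a manual gather is still a single call away. A sketch with a hypothetical table name:

-- manually gather optimizer statistics for a hypothetical table SALES
exec DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'SALES');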

Finally, here are official URLs of all three products:

Oracle Autonomous Database
Amazon Redshift
Snowflake Database

Autonomous Data Warehouse, Autonomous Transaction Processing or Something Else?

In DBA on November 30, 2018 at 14:37

First things first: there is nothing else. Let me explain why.

Both Forbes and the Wall Street Journal wrote about the top 5 industry early adopters of Autonomous Systems.

According to the article, “in the IT industry, the pioneering product is Oracle’s Autonomous Data Warehouse Cloud, a cloud-based database that configures, optimizes and patches itself with minimal human intervention. Oracle Executive Chairman and CTO Larry Ellison says the machine learning technology that underpins the company’s autonomous data warehouse, as well as autonomous integration, developer, mobile and other platform services that will follow, is as revolutionary as the internet.”

To make it clear, the new Autonomous Data Warehouse and the Autonomous Transaction Processing databases are not based on newly written software. It is the same Oracle database with a lot of automation and mathematical algorithms embedded into the original database software. Think of machine learning and computer intelligence.

If you are looking for something similar among other database brands – good luck! Finding all areas of Self-Securing, Self-Automation and Self-Repairing outside Oracle Autonomous Database Cloud is mission impossible. And here are the areas:

Four Areas of Self-Securing of Autonomous Databases:

1. Self-securing starts with the security of the Oracle Cloud infrastructure and database service. Security patches are automatically applied every quarter or as needed, narrowing the window of vulnerability. Patching includes the full stack: firmware, operating system [OS], clusterware, and database. There are no steps required from the customer side.

2. Oracle encrypt customer data everywhere: in motion, at rest, and in backups. The encryption keys are managed automatically, without requiring any customer intervention. And encryption cannot be turned off.

3. Administrator activity on Oracle Autonomous Data Warehouse Cloud is logged centrally and monitored for any abnormal activities. Oracle have enabled database auditing using predefined policies so that customers can view logs for any abnormal access in UNIFIED_AUDIT_TRAIL (see the query sketch after this list).

4. Built upon Oracle Database Vault (unique to Oracle Cloud): operations personnel have the privileges to do all administrative tasks, but without any ability to ever see any customer data.
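The query sketch mentioned in point 3 could look like the following (standard UNIFIED_AUDIT_TRAIL columns; the one-day filter is arbitrary):

-- recent audited activity, most recent first
select event_timestamp, dbusername, action_name, return_code
from unified_audit_trail
where event_timestamp > systimestamp - interval '1' day
order by event_timestamp desc;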

Four Areas of Self-Automation of Autonomous Databases:

1. Self-Automation: automatic provisioning of pluggable databases and automatic scaling – PDB resource manager.

2. Automatic tuning: SQL Plan Management, Adaptive Plans, SQL Tuning Advisor – Automatic SQL Tuning, Storage Indexes, Automatic Storage Management, automatic detection and correction of regressions due to plan changes, and automatic tuning of memory, processes and sessions.

3. Automatic Fault-Tolerant Failover: RAC and Data Guard. Runaway transactions and SQL are killed automatically, as are inactive sessions.

4. Automatic Backup and Recovery: RMAN, Flashback.

Seven Areas of Self-Repairing of Autonomous Databases:

Both Maria Colgan and Franck Pachot wrote on the differences between ADW and ATP:

How does Autonomous Transaction Processing differ from the Autonomous Data Warehouse? by Maria Colgan

ATP vs ADW – the Autonomous Database lockdown profiles by Franck Pachot

But here, in short, are the four main areas of differences between ADW and ATP:

1. Settings and parameters:
– In ADW: the majority of the memory is allocated to the PGA – joins, aggregations in memory
– In ATP: the majority of the memory is allocated to the SGA – minimize I/O

For DBAs: ADW runs with 94 non-default parameters, out of which 35 are underscore parameters. In ATP, the corresponding numbers are 94 and 36. Not the same 94, though! And these numbers may vary slightly.

2. Data formats:
– In ADW: data is stored in a columnar format as that’s the best format for analytics processing – ADW uses DBIM option features like in-memory columnar flash cache under the covers
– In ATP: data is stored in a row format

3. Statistics/CBO:
– In ADW: statistics are automatically maintained as part of bulk load and DBMS_CLOUD activities
– In ATP: statistics are automatically gathered when the volume of data changes significantly enough to make a difference to the statistics

4. Client services/connections:
– In ADW: only the LOW service runs SQL statements serially; everything else runs in parallel
– In ATP: the PARALLEL service no longer exists (as of November 12, 2018)

FAQ for Oracle Autonomous Database

In order to show the other side of the coin, here are two perspectives, from an IBM and an SAP point of view:

Oracle Autonomous Database – is it truly self-driving? by Danny Arnold

How Real is The Oracle Automated Database? by Shaun Snapp

But if you prefer more neutral reading check Oracle’s next chapter: The Autonomous Database and the DBA and Will Autonomous Database Entice Big Business To The Cloud?

Bottom line: if you need extremely high reliability, top-level security, 100% automation of routine DBA tasks and no funny surprises – start testing and using the Oracle Autonomous Database. Really!

Amazon’s Aurora and Oracle’s Autonomous ATP

In Autonomous, Cloud, DBA, PostgreSQL on August 29, 2018 at 09:26

Databases are very much like wine, cheese and trees: they get better as they age.

Amazon Aurora has existed since 2015. The word aurora comes from Latin and means dawn. The name was borne by the Roman mythological goddess of dawn and by the princess in the fairy tale Sleeping Beauty.

Both Amazon’s “dawn” Aurora and Oracle’s ATP are typical cloud OLTP systems.

The question is: what are their differences, which one is better and meant exactly for my needs?

Oracle ATP is based on Oracle’s database and Exadata, and it adopts the innovations of both systems.

Amazon’s Aurora has 2 flavors: Amazon Aurora MySQL and Amazon Aurora PostgreSQL.

Amazon Aurora MySQL is compatible with MySQL 5.6 using the InnoDB storage engine. Certain MySQL features, like the MyISAM storage engine, are not available with Amazon Aurora. Amazon Aurora PostgreSQL is compatible with PostgreSQL 9.6. The storage layer is virtualized and sits on a proprietary virtualized storage system backed by SSDs. And you pay $0.20 per 1 million IO requests.

Oracle’s Autonomous database comes also in 2 flavors: Oracle ADW and Oracle ATP. Check Franck Pachot’s article ATP vs ADW – the Autonomous Database lockdown profiles to see the differences of both cloud databases.

In general, one can compare Oracle ADW with Amazon Redshift and Oracle ATP with Amazon Aurora.

One way to compare is to look at the ranking provided by DB-Engines: Amazon Aurora vs. Oracle. No-brainer who the leader is: score of 1300 vs score of 5 in favor of Oracle.

Another interesting comparison comes from Amalgam Insights. Check how Oracle Autonomous Transaction Processing lowers barriers to entry for data-driven business. Check out the DBA labor cost involved: 5 times less in favor of Oracle ATP compared to Amazon! All the routine DBA tasks have been totally eliminated.

The message from them is very clear: “Oracle ATP could reduce the cost of cloud-based transactional database hosting by 65%. Companies seeking to build net-new transactional databases to support Internet of Things, messaging, and other new data-driven businesses should consider Oracle ATP and should do due diligence on Oracle Autonomous Database Cloud for reducing long-term Total Cost of Ownership.”

This month (August 2018), there was an interesting article by Den Howlett entitled Oracle introduces autonomous transaction processing database – pounds on AWS. Here are 2 interesting and probably correct statements/quotes from there:

1. It really is hard to get off an established database, even one that can be as expensive as Oracle can turn out to be.
2. Some of the very largest workloads will not go to the public cloud anytime soon. Maybe never which in internet years is after 2030.

As a kind of proof of how reliable and fast Oracle’s Autonomous Transaction Processing database is, consider the following OLTP workload, running non-stop in a balanced way, without any major spikes and without a single queued statement!

No human labor, no human error, and no manual performance tuning!

Migrating Amazon Redshift to Autonomous Data Warehouse Cloud

In Autonomous, Data Warehouse, DBA, Exadata, PostgreSQL on July 4, 2018 at 18:34

“Big Data wins games but Data Warehousing wins championships” says Michael Jordan. Data Scientists create the algorithm, but as Todd Goldman says, if there is no data engineer to put it into production for use by the business, does it have any value?

If you google for Amazon Redshift vs Oracle, you will find lots of articles on how to migrate Oracle to Redshift. Is it worth it? Perhaps in some cases before Oracle Autonomous Data Warehouse Cloud existed.

Now, things look quite different. “Oracle Autonomous Data Warehouse processes data 8-14 times faster than AWS Redshift. In addition, Autonomous Data Warehouse Cloud costs 5 to 8x less than AWS Redshift. Oracle performs in an hour what Redshift does in 10 hours.” At least according to the Oracle Autonomous Data Warehouse Cloud white paper. And I have had nothing but great experiences with ADWC for the past half a year or so.

But, what are the major issues and problems reported by Redshift users?

One of the most common complaints involves how Amazon Redshift handles large updates. In particular, the process of moving massive data sets across the internet requires substantial bandwidth. While Redshift is set up for high performance with large data sets, “there have been some reports of less than optimal performance” for the largest data sets. An article by Alan R. Earls entitled Amazon Redshift review reveals quirks, frustrations claims that reviewers want more from the big data service. So:

Why migrate from Amazon Redshift to Autonomous Data Warehouse Cloud?

1. Amazon Redshift is ranked 2nd in Cloud Data Warehouse with 14 reviews vs Oracle Exadata which is ranked 1st in Data Warehouse with 55 reviews.

The top reviewer of Amazon Redshift writes “It processes petabytes of data and supports many file formats. Restoring huge snapshots takes too long”. The top reviewer of Oracle Exadata writes “Thanks to smart scans, the amount of data transferred from storage to database nodes significantly decreases”.

2. Oracle Autonomous dominates in features and capabilities:

DB-engines shows an excellent system properties comparison of Amazon Redshift vs. Oracle.

In addition, reading through these thoughts on using Amazon Redshift as a replacement for an Oracle Data Warehouse can be worthwhile. It shows how Amazon Redshift compares with a more traditional DW approach. But Enterprises have some Redshift concerns, including:

– The difference between versions of PostgreSQL and the version Amazon uses with Redshift
– The scalability for very large data volumes is limited and performance suffers
– The query interface is not modern and is a bit behind
– Redshift needs more flexibility to create user-defined functions
– Access to the underlying operating system and certain database functions and capabilities aren’t available
– Starting sizes may be too large for some use cases
– Redshift also resides in a single AWS availability zone

3. Amazon Redshift has several limitations: see Limits in Amazon Redshift. On the other hand, you can hardly find a database feature not yet implemented by Oracle.

4. But the most important reason why to migrate to ADWC is that the Oracle Autonomous Database Cloud offers total automation based on machine learning and eliminates human labor, human error, and manual tuning.

How to migrate from Amazon Redshift to Autonomous Data Warehouse Cloud?

Use the SQL Developer Amazon Redshift Migration Assistant which is available with SQL Developer 17.4. It provides easy migration of Amazon Redshift environments on a per-schema basis.

Here are the 5 steps to migrate from Amazon Redshift to Autonomous Data Warehouse Cloud:

1. Connect to Amazon Redshift
2. Start the Cloud Migration Wizard
3. Review and Finish the Amazon Redshift Migration
4. Use the Generated Amazon Redshift Migration Scripts
5. Perform the Post Migration Tasks

Check out what Paul Way says about why Oracle thinks Autonomous IT can ultimately win the Cloud War.

Finally, here is what Amazon CTO Werner Vogels is saying: Our cloud offers any database you need. And I agree with him that a one-size-fits-all database doesn’t fit anyone. But mission- and business-critical enterprise systems with huge requirements and resource needs deserve only the best.

The DBA profession beyond autonomous: a database without a DBA is like a tree without roots

In Autonomous, Cloud, Databases, DBA on May 30, 2018 at 19:41

“To make a vehicle autonomous, you need to gather massive streams of data from loads of sensors and cameras and process that data on the fly so that the car can ‘see’ what’s around it” – Daniel Lyons

Let me add that the data must be stored somewhere, analyzed by some software, monitored and backed up by someone, and so on and so on…

Top 5 Industry Early Adopters Of Autonomous Systems are: (1) Information Technology: Oracle’s Autonomous Data Warehouse Cloud, (2) Automotive, (3) Manufacturing, (4) Retail and (5) Healthcare.

Being an early adopter of ADWC, I must say that it is probably the best product created by Oracle Corporation. For sure part of the Top Five.

This month (May 2018), ComputerWeekly published an article quoting Oracle CEO Mark Hurd that the long-term future of database administrators could be at risk if every enterprise adopts the Oracle 18c autonomous database.

“Hurd said it could take almost a year to get on-premise databases patched, whereas patching was instant with the autonomous version. If everyone had the autonomous database, that would change to instantaneous.”

So where does that leave Oracle DBAs around the world? Possibly in the unemployment queue, at least according to Hurd.

“There are hundreds of thousands of DBAs managing Oracle databases. If all of that moved to the autonomous database, the number would change to zero,” Hurd said at an Oracle media event in Redwood Shores, California.

If you are interested in more detail on this subject, I suggest you read the following articles in the order below:

The Robots are coming by James Anthony: “But surely we’ve been here before? Indeed, a quick Google search brings up the following examples of white papers by Oracle with a reference to the database being self-managing all the way back to 2003.”

Oracle Autonomous Database and the Death of the DBA by Tim Hall: “Myself and many others have been talking about this for over a decade. ”

Death of the DBA, Long Live the DBA by Kellyn Pot’Vin-Gorman: “With DBAs that have been around a while, we know the idea that you don’t need a DBA has been around since Oracle 7, the self-healing database.”

No DBA Required? by Tim Hall: “It will be interesting to see what Oracle actually come up with at the end of all this…”

Self-Driving Databases are Coming: What Next for DBAs? by Maria Colgan: “It’s also important for DBAs to remember that the transition to an autonomous environment is not something that will occur overnight.”

Death of the Oracle DBA (again) by Jonathan Stuart: “Twenty years later I run Claremont’s Managed Services practice and the DBA group is our largest delivery team.”

Don’t Fall For The “Autonomous Database” Distraction by Greg McStravick: a totally different point of view on autonomous databases.

Now, “a picture is worth a thousand words”. Here are 5 screenshots from the Autonomous Data Warehouse Cloud documentation:

1. Who will be creating external tables using the DBMS_CLOUD package?

2. Who will run “alter database property set.. ” in order to create credentials for the Oracle Cloud Infrastructure?

3. Who will restore and recover the database in case of any type of failure? Or failures never happen, right?

4. Who will manage runaway SQL with cs_resource_manager and run “alter system kill session”?

5. Who will manage the CBO statistics and add hints?

As of today, we have 4 Exadata choices, with Autonomous being by far the best (for data warehouse loads, for now). As explained by Alan Zeichick, Autonomous Capabilities Will Make Data Warehouses — And DBAs — More Valuable. “No need for a resume writer: DBAs will still have plenty of work to do.”

So still: a database without a DBA is like a tree without roots.

P.S. Check out the book Human + Machine: Reimagining Work in the Age of AI by Paul R. Daugherty and H. James (Jim) Wilson.

DBA Internals of the Oracle Autonomous Database

In Cloud, DBA, Oracle database, Oracle internals on March 28, 2018 at 07:11

First things first: the word autonomous comes from the Greek word autónomos, which means “with laws of one’s own, independent”.

After starting to use the Autonomous Data Warehouse Cloud, I must say I am pleasantly surprised to see something totally new, simple, uncomplicated and effortless, with no additional tuning or re-architecting of the Oracle databases needed – the underlying Oracle Cloud Infrastructure is super fast and highly reliable.

1. You may connect to ADWC either by using the web interface or as a client (I use SQL Developer 17.4), but for the client connection type choose Cloud PDB and not TNS. Your configuration file is a zip file, not the plain text file DBAs are used to.

2. You cannot create indexes on columns, you cannot partition tables, you cannot create materialized views, etc. Not even database links. You will get an error message: “ORA-00439: feature not enabled: Partitioning” or “ORA-01031: insufficient privileges”.

ADWC lets you create primary keys, unique keys and foreign key constraints in RELY DISABLE NOVALIDATE mode, which means that they are not enforced. These constraints can also be created in enforced mode, so technically you can create constraints as in a non-autonomous Oracle database.

Note that in execution plans primary keys and unique keys will only be used for single-table lookups by the optimizer; they will not be used for joins.
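For illustration, a minimal sketch of such a non-enforced constraint (table and column names are hypothetical):

-- foreign key in RELY DISABLE NOVALIDATE mode: declared for the optimizer, but not enforced
alter table orders
  add constraint fk_orders_customers
  foreign key (customer_id) references customers (customer_id)
  rely disable novalidate;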

But … you can run alter system kill session!

3. The Oracle Autonomous Data Warehouse interface contains all the capabilities a non-professional database user needs to create their own data marts and run analytical reports on the data. You can even run AWR reports.

4. You do not have full DBA control, as Oracle (in my opinion) uses lockdown profiles in order to make the database autonomous. As the ADMIN user, you have 25 roles, including the new DWROLE, which you would normally grant to all ADWC users you create. Among those 25 roles you have GATHER_SYSTEM_STATISTICS, SELECT_CATALOG_ROLE, CONSOLE_ADMIN, etc. You have access to most DBA_ and GV_$ views. Not to mention the 211 system privileges.

5. ADWC configures the database initialization parameters based on the compute and storage capacity you provision. ADWC runs on dozens of non-default init.ora parameters. For example:

parallel_degree_policy = AUTO
optimizer_ignore_parallel_hints = TRUE
result_cache_mode = FORCE
inmemory_size = 1G

You are allowed to change almost no init.ora parameters, except a few NLS_ and PLSQL_ parameters.

And the DB block size is 8K!
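For instance, one of the few things you may still change yourself is an NLS setting (the format mask below is arbitrary):

-- one of the few session parameters that can still be changed
alter session set NLS_DATE_FORMAT = 'YYYY-MM-DD HH24:MI:SS';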

6. I can see 31 underscore parameters which do not have default values; here are a few:

_max_io_size = 33554432 (default is 1048576)
_sqlmon_max_plan = 4000 (default is 0)
_enable_parallel_dml = TRUE (default is FALSE)
_optimizer_answering_query_using_stats = TRUE (default is FALSE)

One of the few alter session commands you can run is “alter session disable parallel dml;”

7. Monitoring SQL is easy:

But there is no Oracle Tuning Pack: you did not expect to have that in an autonomous database, did you? There is no RAT, Data Masking and Subsetting Pack, Cloud Management Pack, Text, Java in DB, Oracle XML DB, APEX, Multimedia, etc.

8. Note that this is (for now) a data warehousing platform. However, DML is surprisingly fast too: I managed to insert more than half a billion records in just about 3 minutes.

Do not try to create nested tables, media or spatial types, or use LONG datatype: not supported. Compression is enabled by default. ADWC uses HCC for all tables by default, changing the compression method is not allowed.
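You can verify the default compression on your own tables with a quick dictionary sketch (standard USER_TABLES columns):

-- check which tables are compressed and with which method
select table_name, compression, compress_for
from user_tables
order by table_name;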

9. The new Machine Learning interface is easy and simple: you can create Notebooks where you have a place for data discovery and analytics. Commands are run in a SQL Query Scratchpad.

10. Users of the Oracle Autonomous database are allowed to analyze the tables and thus influence the Cost Based Optimizer, and hence performance – I think end users should not be able to influence the laws (“νόμος, nomos”) of the database.

Conclusion: The Autonomous Database is one of the best things Oracle have ever made. And they have quite a portfolio of products….

Finally, here is a live demo of the Oracle Autonomous Data Warehouse Cloud:

2018, the year of the Cloud underdog Oracle?

In Cloud, DBA, Oracle database on January 8, 2018 at 10:46

“Without data you’re just another person with an opinion.” – W. Edwards Deming

Let us see, based on data, why the Cloud underdog Oracle can be the winner of 2018 and beyond – especially for databases in the Cloud!

Let us check out the most recent data coming from Forrester, Gartner, Forbes and Accenture:

1. Enterprise Workloads Meet the Cloud (Accenture)

“Simply put, an enterprise system consists of an application and the underlying database and infrastructure. Regardless of whether the solution is on-premises or delivered ‘as a service’, the application relies on those two components. Thus, the performance, uptime and security of an application will depend on how well the infrastructure and databases support those attributes.”

Both Figure 1 and Figure 2 show impressive results: the Oracle Cloud Infrastructure allows more than 3000 transactions per second while the leading cloud provider cannot even reach 400. Even the old Oracle Cloud Infrastructure Classic is at 1300 transactions per second.

The Oracle Cloud Infrastructure latency averages at 0.168ms while the leading cloud providers have about 6 times higher latency in average: 0.962ms.

“Armed with these insights, companies should be ready to consider moving their Oracle mission critical workloads to the Oracle Cloud—and reaping the benefits of greater flexibility and more manageable costs.”

2. The Total Economic Impact Of Oracle Java Cloud Service (Forrester)

Let us move to the Java Cloud Service and check the new Forrester Research.

The costs and benefits for a composite organization with 30 Java developers, based on customer interviews, are:
– Investment costs: $827,384.
– Total benefits: $3,360,871.
– Net cost savings and benefits: $2,533,488.

The composite organization analysis points to benefits of $1,120,290 per year versus investment costs of $275,794, adding up to a net present value (NPV) of $2,533,488 over three years. With Java Cloud Service, developers gained valuable time with near instant development instances and were finally able to provide continuous delivery with applications and functionality for the organization.

3. Market Share Analysis: Public Cloud Services, Worldwide (Gartner)

Table 2, PaaS Public Cloud Service Market Share, 2015-2016 (Millions of U.S. Dollars), ranking by Annual Growth Rate 2016:

1. Oracle 166.9%
2. Amazon 109.1%
3. Alibaba 99.0%
4. Microsoft 46.4%
5. Salesforce 40.2%

Table 3. SaaS Public Cloud Service Market Share, 2015-2016 (Millions of U.S. Dollars), ranking by Annual Growth Rate 2016:

1. Oracle 71.6%
2. Workday 38.8%
3. Dropbox 38.0%
4. Google 37.9%
5. Microsoft 32.6%

4. Oracle And Its Cloud Business Are In Great Shape–And Here Are 10 Reasons Why (Forbes)

For its fiscal Q2 ending Nov. 30, Oracle reported total cloud revenue of $1.5 billion, up 44%, including SaaS revenue of $1.1 billion, up 55%. The combined revenue for cloud and on-premise software was up 9% to $7.8 billion.

Oracle’s Q3 guidance offered growth rates extremely close to those recently posted by salesforce.com: when you add in the highly nontrivial fact that that same company with the $6-billion cloud business also has a $33-billion on-premises business and has rewritten every single bit of that IP for the cloud, with complete compatibility for customers taking the hybrid approach—and the percentage of customers taking the hybrid approach will be somewhere between 98.4% and 100%.

5. Oracle’s Larry Ellison Challenges Amazon, Salesforce And Workday On The Future Of The Cloud (Forbes):

While Salesforce.com’s current SaaS revenue of more than $10 billion is much larger than Oracle’s current SaaS revenue—for the three months ended Aug. 31, Oracle posted SaaS revenue of $1.1 billion—Oracle’s bringing in new SaaS customers and revenue much faster than Salesforce.

The following quote is rather interesting: “Since Larry Ellison has spent the past 40 years competing brashly against and beating rivals large and small, it wasn’t a huge shock to hear him recently rail about how cloud archrival Amazon “has no expertise in database.” But it was a shocker to hear Ellison go on to say that “Amazon runs their entire operation on Oracle [Database]…. They paid us $60 million last year in [database] support and license! And you know who’s not on Amazon? Amazon is not on Amazon.”

And finally, the topic of In-Memory databases is quite hot, and several database brands have their own IMDB.

Artificial stupidity as a DBA limitation of artificial intelligence

In Data, Database tuning, Databases, DBA on December 6, 2017 at 07:47

“Artificial intelligence is no match for natural stupidity” ― Albert Einstein

What about introducing Artificial Intelligence into the database to an extent it tunes itself into all possible dimensions?

You have probably either seen the question above or have already asked yourself if that was at all possible. On Ask Tom, John from Guildford wrote the following:

As for Artificial Intelligence, well Artificial Stupidity is more likely to be true. Humanity is not privy to the algorithm for intelligence. Anyone who’s had the pleasure of dealing with machine generated code knows that software is no more capable of writing a cohesive system than it is of becoming self-aware.

Provided you’re not trying to be a cheap alternative to an automaton you just need to think. That one function alone differentiates us from computers, so do more of it. The most sublime software on the planet has an IQ of zero, so outdoing it shouldn’t be all that hard.

Stephen Hawking thinks computers may surpass human intelligence and take over the world. Fear artificial stupidity, not artificial intelligence!

Einstein is credited with saying (but it was probably Alexandre Dumas or Elbert Hubbard who deserve the recognition): “The difference between genius and stupidity is that genius has its limits.”

Explore artificial stupidity (AS) and/or read Charles Wheelan’s book Naked Statistics to understand this kind of AI danger. By the way, according to Woody Allen, 94.5% of all statistics are made up!

So what are the limitations of AI? Jay Liebowitz argues that “if intelligence and stupidity naturally exist, and if AI is said to exist, then is there something that might be called ‘artificial stupidity’?” According to him, three of these limitations are:

  • Ability to possess and use common sense
  • Development of deep reasoning systems
  • Ability to easily acquire and update knowledge

But does artificial intelligence use a database in order to be an artificial intelligence? A few very interesting answers to that question are given by Douglas Green, Jordan Miller and Ramon Morales; here is a summary:

    Although AI could be built without a database, it would probably be more powerful if a database were added. AI and databases are currently not very well integrated. The database is just a standard tool that the AI uses. However, as AI becomes more advanced, it may become more a part of the database itself.

    I don’t believe you can have an effective Artificial Intelligence without a database or memory structure of some kind.

    While it is theoretically possible to have an artificial intelligence without using a database, it makes things a LOT easier if you can store what the AI knows somewhere convenient.

As Demystifying Artificial Intelligence explains, AI has been embedded into some of the most fundamental aspects of data management, making those critical data-driven processes more celeritous and manageable.

Amazon Mechanical Turk is worth looking into and Oracle are also ready for business with AI.

Matt Johnson, a Navy pilot turned AI researcher, said at a conference this summer that one of the places where we are not making a lot of advances is in that teaming, in that interaction (of humans and AI) – Artificial Stupidity: When Artificial Intelligence + Human = Disaster

Bottom line: if AI uses a database, then the intelligent database should be at least autonomous and have most tasks automated, not relying on artificial stupidity as a DBA limitation of artificial intelligence. Whatever that means… I do not want to curb your enthusiasm, but we need to first fill in the skills gap: we need data engineers who understand databases and data warehouses, infrastructure, and tools that span data cleaning, ingestion, security and predictions. And in this aspect the Cloud is critical and a big differentiator.

P.S. Is Artificial Intelligence Progress a Bubble? was published 4 days after this blog post.