Dontcheff

Archive for the ‘Oracle database’ Category

Oracle Autonomous Database: Dedicated vs Serverless

In Autonomous, Databases, DBA, Exadata, Oracle database, Oracle Engineered Systems on July 22, 2019 at 11:45

This blog post describes the 5 main differences between ATP-D & ATP-S and the 5 main ATP-D physical characteristics and constraints.

Autonomous Transaction Processing Dedicated (ATP-D) is a deployment choice that enables us to provision autonomous databases into dedicated Exadata cloud infrastructure instead of a shared infrastructure with other tenants.

ATP-D can be used for both OLTP and hybrid workloads for databases of any size. ATP-D is particularly good when you need the highest level of governance, consistent performance and operational control.

For now, dedicated infrastructure means a Quarter Exadata Rack. Half and Full Racks will soon be available too.

Besides complete physical storage isolation, ATP-D provides private IP networking, secure connections using transport layer security (TLS) credentials, and customization of software image lifecycle to align with application lifecycle.

The Fleet Administrator (think of a DBA for the Autonomous Cloud) first needs to create the Exadata Infrastructure, then the CDB, and only at the end the PDB. Recall that for ATP-S, you directly create the PDB.

These are the 5 main differences between ATP-D and ATP-S:

– Private IPs are not yet supported for serverless ADB deployments but they are on the short-term roadmap
– Private IPs are supported with ATP-Dedicated

– The Serverless edition has no minimums or maximums in terms of usage
– The Dedicated edition has a minimum term of one month; the minimum OCPU purchase is 1 OCPU per database node, up to the maximum number of OCPUs per rack: $26.88 per hour, which means about $645 per day

– For ATP-S, loading data from object stores via DBMS_CLOUD is the recommended method for loading large data sets (see the sketch after this list)
– Loading data via DBMS_CLOUD is not applicable for ATP-D because the package is not available on ATP-D

– In ATP-D, the database version is 19c, which is required for Auto-Indexing (on by default)
– Support for 19c / Auto-Indexing on ATP-S is on the roadmap

– ADB (serverless) does have auto-scaling – you can select auto scaling during provisioning or later using the Scale Up/Down button on the Oracle Cloud Infrastructure console
– ATP-D does not have auto-scaling support
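
As referenced in the data-loading bullet above, here is a minimal sketch of the DBMS_CLOUD path on ATP-S. The credential name, object store URL, file and table names are all illustrative and need to be adapted to your own tenancy:

BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'OBJ_STORE_CRED',
    username        => 'oci_user',
    password        => 'auth_token_value');

  DBMS_CLOUD.COPY_DATA(
    table_name      => 'SALES',
    credential_name => 'OBJ_STORE_CRED',
    file_uri_list   => 'https://objectstorage.eu-frankfurt-1.oraclecloud.com/n/mytenancy/b/mybucket/o/sales.csv',
    format          => json_object('type' value 'csv', 'skipheaders' value '1'));
END;
/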

Here are the 5 main ATP-D physical characteristics and constraints:

1. Quarter rack X7 Exadata Infrastructure:
– 2 database servers: 92 OCPUs and 1.44TB RAM
– 3 Storage Servers: 76.8TB Flash and 107TB Disk

2. Cluster / Virtual Cloud Network:
– 1 Cluster per quarter rack

3. Autonomous Container Database:
– Maximum of 4 CDBs per Cluster
– The default data and temporary tablespaces for the database are configured automatically
– The name of the default data tablespace is DATA
– The database character set is Unicode AL32UTF8
– Compression is not enabled by default – use the table_compression clause if needed

4. Autonomous Database:
– High Availability SLA – Maximum 200 DBs
– Extreme Availability SLA – Maximum 25 DBs
– Placement Logic – a database is opened on 1 server when it uses fewer than 16 OCPUs

5. Types of Users: Fleet Admin, Database Admin and Database User – Fleet Admin activities are separated from DB Admin activities using IAM privileges

Note that there are now 2 tabs for possible database connections – the serverless-style DB connection and the application connection.

Fleet Administrator: Fleet administrators create, monitor and manage Autonomous Exadata Infrastructure and Autonomous Container Database resources. A fleet administrator must be an Oracle Cloud user whose permissions allow managing these resources and using the networking resources that need to be specified when creating them.

Database Administrator: DBAs create, monitor and manage Autonomous Databases. Additionally, they create and manage Oracle Database users within these databases, and provide others the information necessary to access the database. When creating an Autonomous Database resource, the DBA defines and gains access to the ADMIN administrative user account for the database.

Database User: Database users are the developers who write applications that connect to and use an Autonomous Database to store and access the data. Database users do not need Oracle Cloud accounts: they gain network connectivity to and connection authorization information for the database from the database administrator

A few useful links:

A Using Oracle Database Features in Autonomous Transaction Processing
FAQs for Autonomous Transaction Processing – Dedicated
Data Center Regions for PaaS and IaaS
Oracle broadens the audience for Automated Transaction Database
2 Ways Oracle’s Autonomous Database Just Got More Useful

Bottom line: even with Autonomous, DBAs will be still needed!


ORA-56955: quarantined plan used

In DBA, New features, Oracle database, Security and auditing, SQL on May 29, 2019 at 13:15

“The way that worms and viruses spread on the Internet is not that different from the way they spread in the real world, and the way you quarantine them is not that different, either” – David Ulevitch

And now, in Oracle 19c, you can do the same with SQL:

 SQL> SELECT client, COUNT(*) OVER (PARTITION BY price) CLIENT_COUNT 
 FROM sales WHERE price IN (2, 91984);
  *
 ERROR at line 1:
 ORA-56955: quarantined plan used
 Elapsed: 00:00:00.00

Error “ORA-56955: quarantined plan used” is new to the Oracle database; it is raised when the SQL being run fulfills the quarantine conditions.

It is important to differentiate Oracle SQL Quarantines in 19c from Oracle Object Quarantines in 18c. There is also the concept of Offload Quarantines.

1. A good way to start understanding what SQL quarantines are about is to watch the short video from Rich Niemiec on the topic.

2. Check also page 23 of the Optimizer In Oracle Database 19c white paper. “The new Oracle Database 19c feature SQL Quarantine can be used to eliminate the overhead of runaway queries. When DBRM detects a SQL statement is exceeding a resource or run-time limit, the SQL execution plan used by the statement is quarantined. If the SQL statement is executed again and it is using the same SQL execution plan then it will be terminated immediately. This can significantly reduce the amount of system resource that would otherwise be wasted.”

Think of SQL Quarantines as a way to prevent unnecessary SQL from being run in the database, based of course on your own definition of unnecessary SQL. You can prevent the use of “bad” execution plans and stop runaway statements from exhausting the database’s resources.

In the database, there might be SQL statements with high CPU and I/O utilization: you can prevent them from being started at all, so once they are quarantined they no longer consume system resources because they are terminated prior to execution.

Note that SQL quarantines work only in 19c on Exadata and DBCS/ExaCS. Check out the Database Licensing Information User Manual.

3. You can quarantine a statement based on:

– SQL_ID and one of its execution plans
– SQL_ID and all of its execution plans
– specific SQL_TEXT

You quarantine a statement in 2 steps:

(A) create a SQL Quarantine by using (for example) DBMS_SQLQ.CREATE_QUARANTINE_BY_SQL_ID
(B) add thresholds by using DBMS_SQLQ.ALTER_QUARANTINE
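
A minimal sketch of both steps (the SQL_ID, plan hash value and threshold values are illustrative):

DECLARE
  l_quarantine VARCHAR2(128);
BEGIN
  -- (A) create a quarantine configuration for one execution plan of a SQL_ID
  l_quarantine := DBMS_SQLQ.CREATE_QUARANTINE_BY_SQL_ID(
                    sql_id          => '8vu7s907prbgr',
                    plan_hash_value => 3724264953);
  -- (B) add the thresholds that trigger the quarantine (values in seconds)
  DBMS_SQLQ.ALTER_QUARANTINE(
    quarantine_name => l_quarantine,
    parameter_name  => 'CPU_TIME',
    parameter_value => '10');
  DBMS_SQLQ.ALTER_QUARANTINE(
    quarantine_name => l_quarantine,
    parameter_name  => 'ELAPSED_TIME',
    parameter_value => '20');
END;
/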

Here are some examples and some more.

4. For some interesting and non-documented stuff check the article by Mahmoud Hatem entitled Oracle 19c : The QUARANTINE hint.

For instance, it shows how you can test by setting “_exadata_feature_on”=true in order to get the SQL Quarantine feature to work on a non-Exadata box.

The following threshold parameters determine when a quarantine kicks in:

CPU_TIME
ELAPSED_TIME
IO_MEGABYTES
IO_REQUESTS
IO_LOGICAL
PHV

There is also the special value called ALWAYS_QUARANTINE.

5. All details can be of course found in the SQL Quarantine documentation.

The following columns of the V$SQL and GV$SQL views show the quarantine information of execution plans of SQL statements:

– SQL_QUARANTINE: This column shows the name of the quarantine configuration for an execution plan of a SQL statement.
– AVOIDED_EXECUTIONS: This column shows the number of times an execution plan of a SQL statement was prevented from running after it was quarantined.

There is a new view in 19c called DBA_SQL_QUARANTINE which displays information about quarantine configurations.
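
A quick way to check both, as a sketch (assuming you already have quarantined statements in the instance):

SQL> SELECT sql_id, sql_quarantine, avoided_executions
       FROM v$sql
      WHERE sql_quarantine IS NOT NULL;

SQL> SELECT name, sql_text, enabled
       FROM dba_sql_quarantine;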

Good news also for admin users of the Autonomous Database: you have full access to the feature.

And note that a DBA can also transfer quarantine configurations from one database to another database using the DBMS_SQLQ package subprograms: CREATE_STGTAB_QUARANTINE, PACK_STGTAB_QUARANTINE, and UNPACK_STGTAB_QUARANTINE.
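
A hedged sketch of such a transfer (the staging table name is illustrative; PACK and UNPACK are functions returning the number of configurations processed):

-- on the source database
DECLARE
  l_packed NUMBER;
BEGIN
  DBMS_SQLQ.CREATE_STGTAB_QUARANTINE(staging_table_name => 'QUARANTINE_STG');
  l_packed := DBMS_SQLQ.PACK_STGTAB_QUARANTINE(staging_table_name => 'QUARANTINE_STG');
END;
/

-- move QUARANTINE_STG to the target database (for example with Data Pump), then
DECLARE
  l_unpacked NUMBER;
BEGIN
  l_unpacked := DBMS_SQLQ.UNPACK_STGTAB_QUARANTINE(staging_table_name => 'QUARANTINE_STG');
END;
/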

If you plan to visit Oracle OpenWorld this year (September 16-19, 2019), as of now, there are a couple of presentations on SQL Quarantine:

– Oracle Database 19c: SQL Tuning Using Plan Stability Methods SPM/SQL Quarantine: by Soumendra Paik, Senior Principal Technical Support Engineer, Oracle
– What’s New in Oracle Optimizer, by Nigel Bayliss, Senior Principal Product Manager, Oracle

Question: how do you quarantine a statement based on a sub-string of the query? Like, how can you quarantine statements starting with ‘select *‘?

What else besides Automatic Indexing is new in Oracle Database 19c?

In DBA, Init.ora, New features, Oracle database, Security and auditing, SQL on April 23, 2019 at 10:53

Automatic Indexing (AI) is probably the most important new feature of Oracle Database 19c, and AI is arguably one of the best examples of AI in the IT industry. But there is much more that came along with 19c.

Here is my choice of the top 10 least known (for now at least) new features of Oracle Database 19.2 (in no order whatsoever):

1. Security: if the database password file name or location has been changed and the metadata cache needs to be refreshed with the details of the updated database password file, you can do it simply with the following command:

SQL> alter system flush passwordfile_metadata_cache;

System altered.

In 19c, most of the Oracle Database supplied schema-only accounts now have their passwords removed to prevent users from authenticating to these accounts. But DBAs can still assign passwords to the default schema-only accounts.
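
A small illustration of switching an account to schema-only and back (the account name and password are hypothetical):

SQL> ALTER USER app_owner NO AUTHENTICATION;

User altered.

SQL> ALTER USER app_owner IDENTIFIED BY "Str0ng#Passw0rd1";

User altered.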

2. SQL: the LISTAGG aggregate function now supports duplicate elimination by using the new DISTINCT keyword.
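
A small sketch of the new syntax (table and column names are illustrative):

SQL> SELECT client,
            LISTAGG(DISTINCT product, ', ') WITHIN GROUP (ORDER BY product) AS products
       FROM sales
      GROUP BY client;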

There are also new bit vector SQL operators that can be used to speed up COUNT(DISTINCT) operations within a SQL query.

3. Active Data Guard DML redirection: You can now run DML on the Active Data Guard standby database. When run on the standby side, the update is passed to the primary database where it is executed, the resulting redo of the transaction updates the standby, and control is then returned to the application.
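
A minimal sketch of enabling the redirection (assuming a 19c Active Data Guard configuration):

-- on the standby, for the current session only
SQL> ALTER SESSION ENABLE ADG_REDIRECT_DML;

-- or instance-wide, via the new init.ora parameter
SQL> ALTER SYSTEM SET ADG_REDIRECT_DML = TRUE;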

4. There are 6 new init.ora parameters in 19c. DBAs can now tune the Data Guard wait times by using DATA_GUARD_MAX_IO_TIME and DATA_GUARD_MAX_LONGIO_TIME. You can check the details for all six:

ADG_REDIRECT_DML
DATA_GUARD_MAX_IO_TIME
DATA_GUARD_MAX_LONGIO_TIME
ENABLE_IMC_WITH_MIRA
LOB_SIGNATURE_ENABLE
MAX_DATAPUMP_PARALLEL_PER_JOB

5. You can now create Materialized Views containing bitmap-based COUNT(DISTINCT) functions. This means that the MVs are based on SQL aggregate functions that use bitmap representation to express the computation of COUNT(DISTINCT) operations.

SQL> create materialized view MV_SALES as
select CLIENT, 
BITMAP_CONSTRUCT_AGG(BITMAP_BIT_POSITION(PRICE),'RAW') bm_price
from SALES
group by CLIENT,BITMAP_BUCKET_NUMBER(PRICE);  

Materialized view created.

6. Looks like there is now automatic resolution of SQL plan regressions: “SQL plan management searches for SQL statements in the Automatic Workload Repository (AWR). Prioritizing by highest load, it looks for alternative plans in all available sources, adding better-performing plans to the SQL plan baseline. Oracle Database also provides a plan comparison facility and improved hint reporting.”

7. Real-Time Statistics is also a new cool feature. “Oracle automatically gathers online statistics during conventional DML operations. Statistics can go stale between execution of DBMS_STATS statistics gathering jobs. By gathering some statistics automatically during DML operations, the database augments the statistics gathered by DBMS_STATS. Fresh statistics enable the optimizer to produce more optimal plans.”
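
A hedged way to spot such statistics: to my knowledge, columns with statistics gathered during conventional DML are marked in the NOTES column of the *_TAB_COL_STATISTICS views with the value shown below:

SQL> SELECT table_name, column_name, notes, last_analyzed
       FROM user_tab_col_statistics
      WHERE notes = 'STATS_ON_CONVENTIONAL_DML';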

8. Hybrid Partitioned Tables: now large portions of a table can reside in external partitions, for example in the Oracle Cloud. With this new feature, you can also easily move non-active partitions to external files, such as Oracle Data Pump files, for a cheaper storage solution. Hybrid partitioned tables support all existing external table types for external partitions: ORACLE_DATAPUMP, ORACLE_LOADER, ORACLE_HDFS, ORACLE_HIVE.
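
A hedged sketch of the syntax (the directory object, file and table names are illustrative and must exist beforehand):

SQL> CREATE TABLE sales_hybrid
     ( prod_id  NUMBER,
       sale_day DATE,
       amount   NUMBER )
     EXTERNAL PARTITION ATTRIBUTES (
       TYPE ORACLE_LOADER
       DEFAULT DIRECTORY sales_archive_dir
       ACCESS PARAMETERS ( FIELDS TERMINATED BY ',' )
       REJECT LIMIT UNLIMITED )
     PARTITION BY RANGE (sale_day)
     ( PARTITION p_2017 VALUES LESS THAN (DATE '2018-01-01')
         EXTERNAL LOCATION ('sales_2017.csv'),
       PARTITION p_current VALUES LESS THAN (MAXVALUE) );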

9. Data Pump:

– Oracle Data Pump allows tablespaces to stay read-only during TTS import.
– Oracle Data Pump can work in test mode for transportable tablespaces.
– Oracle Data Pump supports resource usage limitations with the introduction of two new parameters: MAX_DATAPUMP_JOBS_PER_PDB and MAX_DATAPUMP_PARALLEL_PER_JOB.
– Data Pump no longer enables secure, password-protected roles by default. Beginning with 19c, you must explicitly enable password-protected roles for an individual export or import job. A new command-line parameter, ENABLE_SECURE_ROLES=YES | NO, can be used to explicitly enable or disable these types of roles for an individual export or import job.
– The new Data Pump command-line parameter CREDENTIAL enables secure import into a managed service from dump files in the Oracle Object Store Service.

10. SQL Quarantine: “SQL statements that are terminated by Oracle Database Resource Manager due to their excessive consumption of CPU and I/O resources can be automatically quarantined. The execution plans associated with the terminated SQL statements are quarantined to prevent them from being executed again.”

Check out the new Oracle package DBMS_SQLQ – cool stuff!

Automatic Indexing in 19c

In Autonomous, Database tuning, Databases, DBA, Oracle database on February 18, 2019 at 17:38

One of the most impressive new features of Oracle Database 19c is Automatic Indexing. Arguably, this is the most interesting innovation in the database world for a rather long time.

I remember some years ago when a DBA asked me at an Oracle conference: “Julian, why are half of the presentations at Oracle database conferences only about performance tuning? Is the Oracle database performing that badly that people should tune it all the time?” Sigh…

With 19c and ADB (Oracle Autonomous Database), things look very different now, don’t they? Automatic Indexing provides what database systems need: continuous optimization of the database workload, stable & solid performance and almost no human interaction. Let me share some of my early experience with Automatic Indexing and where human interaction is needed.

For now (February 18th, 2019), Oracle 19c is only available on Exadata (Linux 7.4) and in order to enable Automatic Indexing you need to do the following:

EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','IMPLEMENT');

As you can guess from the picture above, the so-called expert system of Automatic Indexing runs every 15 minutes and can run for as long as one hour. Note that I disabled the job from 4:43 till 5:56. The Resource Manager plan limits the task to 1 CPU only, and the next run is skipped if the job has not completed within the 15 minutes.

Here are details on how Automatic Indexing works but what is most important to remember is as follows:

– The auto index candidates are created as invisible auto indexes
– If the performance of SQL statements is not improved from the auto indexes, then the indexes are marked as unusable and the corresponding SQL statements are blacklisted
– Auto indexes cannot be used for any SQL run for the first time against the database
– Auto indexes are created as either single-column or concatenated indexes, or as function-based indexes, and they all use advanced low compression
– The unused auto indexes are deleted after 373 days (this period can be changed, see the sketch right after this list)
– The unused non-auto indexes (manual indexes) are never deleted by the automatic indexing process by default, but a retention period can be configured so that they are deleted automatically if needed
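
As mentioned in the last two bullets, the retention periods are controlled with DBMS_AUTO_INDEX.CONFIGURE. A minimal sketch (the values, in days, are illustrative):

EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_RETENTION_FOR_AUTO','90');

EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_RETENTION_FOR_MANUAL','180');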

Auto Indexing can be disabled at any time, or it can be set to reporting mode (new auto indexes are created as invisible indexes, so that they cannot be used in SQL) with the following commands:

EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','OFF');

 

EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','REPORT ONLY');

Here is a way to ask Oracle to create new auto indexes in a separate tablespace called AUTO_INDEX_TS:

SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_DEFAULT_TABLESPACE','AUTO_INDEX_TS');

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01

You can easily check the configuration of Automatic Indexing for the root container and the PDBs from CDB_AUTO_INDEX_CONFIG.

If you need a report of what happened during the expert system activity (either last 3 days or during the last activity), here is a way to generate it:

set long 300000
select DBMS_AUTO_INDEX.REPORT_ACTIVITY(SYSTIMESTAMP-3,SYSTIMESTAMP,'TEXT','ALL','ALL') from dual;
select DBMS_AUTO_INDEX.REPORT_LAST_ACTIVITY('TEXT','ALL','ALL') from dual;

These are the most important views about Auto Indexing:

DBA_AUTO_INDEX_EXECUTIONS: history of execution of automatic indexing tasks
DBA_AUTO_INDEX_STATISTICS: statistics related to auto indexes
DBA_AUTO_INDEX_IND_ACTIONS: actions performed on auto indexes
DBA_AUTO_INDEX_SQL_ACTIONS: actions performed on SQL statements for verifying auto indexes
DBA_AUTO_INDEX_CONFIG: configuration settings related to auto indexes
DBA_AUTO_INDEX_VERIFICATIONS: stats about PLAN_HASH_VALUE, AUTO_INDEX_BUFFER_GETS, etc.

The new package DBMS_AUTO_INDEX can be used for 3 main things:

1. Configuration of the parameters related to Auto Indexing
2. Drop *all* the indexes except the ones used for constraints (see the example after this list)
3. Report the activity of the “expert system”
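
For item 2, a hedged example (the schema and table names are hypothetical):

SQL> EXEC DBMS_AUTO_INDEX.DROP_SECONDARY_INDEXES('SALES_OWNER','SALES');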

Finally, here are some additional resources:

Automatic Indexing in Oracle Database 19c
Oracle Database 19c is now available!
Managing Auto Indexes

How to check if I have any auto indexes in my database: select auto, count(*) from dba_indexes group by auto;

PL/SQL quiz

In Oracle database, PL/SQL on October 27, 2018 at 23:42

Years ago, I saw this quiz on dbasupport. Below, we have 2 PL/SQL blocks, have a look:


BEGIN
  WHILE sysdate = sysdate LOOP
    NULL;
  END LOOP;
END;
/

DECLARE
  x DATE;
BEGIN
  LOOP
    BEGIN
      SELECT null INTO x FROM dual WHERE sysdate = sysdate;
    EXCEPTION
      WHEN NO_DATA_FOUND THEN EXIT;
    END;
  END LOOP;
END;
/

Question is: what happens after you run them? Are the loops above both finite, both infinite or is it so that one of them is finite and the other one infinite?

If you cannot answer the question just run them in SQL*Plus, etc. Then, try to explain why – the reason for being finite or infinite.

I will update this blog post after a month or so with the answer.

And I have just run this against ATP (~18c), but the output is the same in previous versions too:

with SALES as (select /*+ materialize */ 0/0 from DUAL)
select count(*) from SALES;

ORA-01476: divisor is equal to zero
01476. 00000 -  "divisor is equal to zero"
*Cause:
*Action:

with SALES as (select /*+ inline */ 0/0 from DUAL)
select count(*) from SALES;

  COUNT(*)
----------
         1

Updated on October 31st:

As you can see from Eugen Iacob’s answer/comments below, the answer is read consistency. The first loop is finite, the second is infinite. Well done Eugen!

In SQL sysdate is evaluated only once – unlike in PL/SQL.

DBA Internals of the Oracle Autonomous Database

In Cloud, DBA, Oracle database, Oracle internals on March 28, 2018 at 07:11

First things first: the word autonomous comes from the Greek word autónomos which means “with laws of one’s own, independent”.

After starting to use the Autonomous Data Warehouse Cloud, I must say I am pleasantly surprised to see something totally new, simple, uncomplicated and effortless, with no additional tuning or re-architecting of the Oracle databases needed – the underlying Oracle Cloud Infrastructure is super fast and highly reliable.

1. You may connect to ADWC either by using the web interface, as you can see above, or as a client (I use SQL Developer 17.4), but for the client connection type choose Cloud PDB and not TNS. Your configuration file is a zip file and not the plain text file that DBAs are used to.

2. You cannot create indexes on columns, you cannot partition tables, you cannot create materialized views, etc. Not even database links. You will get an error message: “ORA-00439: feature not enabled: Partitioning” or “ORA-01031: insufficient privileges”.

ADWC lets you create primary key, unique key and foreign key constraints in RELY DISABLE NOVALIDATE mode, which means that they are not enforced. These constraints can also be created in enforced mode, so technically you can create constraints as in a non-autonomous Oracle database.
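
A small illustration of both variants (table, column and constraint names are hypothetical):

SQL> ALTER TABLE sales ADD CONSTRAINT sales_pk
       PRIMARY KEY (sale_id) RELY DISABLE NOVALIDATE;

SQL> ALTER TABLE sales2 ADD CONSTRAINT sales2_pk
       PRIMARY KEY (sale_id);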

Note that in execution plans primary keys and unique keys will only be used for single table lookups by the optimizer, they will not be used for joins.

But … you can run alter system kill session!

3. The Oracle Autonomous Data Warehouse interface contains all necessary capabilities for a non-professional database user to create its own data marts and run analytical reports on the data. You can even run AWR reports.

4. You do not have full DBA control as Oracle (in my opinion) uses lockdown profiles in order to make the database autonomous. As ADMIN user, you have 25 roles including the new DWROLE which you would normally grant to all ADWC users created by you. Among those 25 roles, you have GATHER_SYSTEM_STATISTICS, SELECT_CATALOG_ROLE, CONSOLE_ADMIN, etc. You have access to most DBA_ and GV_$ views. Not to mention the 211 system privileges.
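
Creating a new ADWC user typically boils down to two statements run as ADMIN (the user name and password are hypothetical):

SQL> CREATE USER analyst IDENTIFIED BY "Str0ngPassword#2018";

SQL> GRANT DWROLE TO analyst;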

5. ADWC configures the database initialization parameters based on the compute and storage capacity you provision. ADWC runs on dozens of non-default init.ora parameters. For example:

parallel_degree_policy = AUTO
optimizer_ignore_parallel_hints = TRUE
result_cache_mode = FORCE
inmemory_size = 1G

You are allowed to change almost no init.ora parameters, except a few NLS_ and PLSQL_ parameters.

And the DB block size is 8K!

6. I can see 31 underscore parameters which do not have default values; here are a few:

_max_io_size = 33554432 (default is 1048576)
_sqlmon_max_plan = 4000 (default is 0)
_enable_parallel_dml = TRUE (default is FALSE)
_optimizer_answering_query_using_stats = TRUE (default is FALSE)

One of the few alter session commands you can run is “alter session disable parallel dml;”

7. Monitoring SQL is easy.

But there is no Oracle Tuning Pack: you did not expect to have that in an autonomous database, did you? There is no RAT, Data Masking and Subsetting Pack, Cloud Management Pack, Text, Java in DB, Oracle XML DB, APEX, Multimedia, etc.

8. Note that this is (for now) a data warehousing platform. However, DML is surprisingly fast too. I managed to insert more than half a billion records in just about 3 minutes.

Do not try to create nested tables, media or spatial types, or use LONG datatype: not supported. Compression is enabled by default. ADWC uses HCC for all tables by default, changing the compression method is not allowed.

9. The new Machine Learning interface is easy and simple:


You can create Notebooks where you have place for data discovery and analytics. Commands are run in a SQL Query Scratchpad.

10. Users of the Oracle Autonomous Database are allowed to analyze the tables and thus influence the Cost Based Optimizer, and hence performance – I think end users should not be able to influence the laws (“νόμος, nomos”) of the database.

Conclusion: The Autonomous Database is one of the best things Oracle have ever made. And they have quite a portfolio of products….

Finally, here is a live demo of the Oracle Autonomous Data Warehouse Cloud:

2018, the year of the Cloud underdog Oracle?

In Cloud, DBA, Oracle database on January 8, 2018 at 10:46

“Without data you’re just another person with an opinion.” – W. Edwards Deming

Let us see, based on data, why the Cloud underdog Oracle can be the winner of 2018 and beyond. Especially, for databases in the Cloud!

Let us check out the most recent data coming from Forrester, Gartner, Forbes and Accenture:

1. Enterprise Workloads Meet the Cloud (Accenture)

“Simply put, an enterprise system consists of an application and the underlying database and infrastructure. Regardless of whether the solution is on-premises or delivered ‘as a service’, the application relies on those two components. Thus, the performance, uptime and security of an application will depend on how well the infrastructure and databases support those attributes.”

Both Figure 1 and Figure 2 show impressive results: the Oracle Cloud Infrastructure allows more than 3000 transactions per second while the leading cloud provider cannot even reach 400. Even the old Oracle Cloud Infrastructure Classic is at 1300 transactions per second.

The Oracle Cloud Infrastructure latency averages 0.168ms, while the leading cloud providers have about 6 times higher latency on average: 0.962ms.

“Armed with these insights, companies should be ready to consider moving their Oracle mission critical workloads to the Oracle Cloud—and reaping the benefits of greater flexibility and more manageable costs.”

2. The Total Economic Impact Of Oracle Java Cloud Service (Forrester)

Let us move to the Java Cloud Service and check the new Forrester Research:

The costs and benefits for a composite organization with 30 Java developers, based on customer interviews, are:
– Investment costs: $827,384.
– Total benefits: $3,360,871.
– Net cost savings and benefits: $2,533,488.

The composite organization analysis points to benefits of $1,120,290 per year versus investment costs of $275,794, adding up to a net present value (NPV) of $2,533,488 over three years. With Java Cloud Service, developers gained valuable time with near instant development instances and were finally able to provide continuous delivery with applications and functionality for the organization.

3. Market Share Analysis: Public Cloud Services, Worldwide (Gartner)

Table 2, PaaS Public Cloud Service Market Share, 2015-2016 (Millions of U.S. Dollars), ranking by Annual Growth Rate 2016:

1. Oracle 166.9%
2. Amazon 109.1%
3. Alibaba 99.0%
4. Microsoft 46.4%
5. Salesforce 40.2%

Table 3. SaaS Public Cloud Service Market Share, 2015-2016 (Millions of U.S. Dollars), ranking by Annual Growth Rate 2016 (Forrester):

1. Oracle 71.6%
2. Workday 38.8%
3. Dropbox 38.0%
4. Google 37.9%
5. Microsoft 32.6%

4. Oracle And Its Cloud Business Are In Great Shape–And Here Are 10 Reasons Why (Forbes)

For its fiscal Q2 ending Nov. 30, Oracle reported total cloud revenue of $1.5 billion, up 44%, including SaaS revenue of $1.1 billion, up 55%. The combined revenue for cloud and on-premise software was up 9% to $7.8 billion.

Oracle’s Q3 guidance offered growth rates extremely close to those recently posted by salesforce.com: when you add in the highly nontrivial fact that that same company with the $6-billion cloud business also has a $33-billion on-premises business and has rewritten every single bit of that IP for the cloud, with complete compatibility for customers taking the hybrid approach—and the percentage of customers taking the hybrid approach will be somewhere between 98.4% and 100%.

5. Oracle’s Larry Ellison Challenges Amazon, Salesforce And Workday On The Future Of The Cloud (Forbes):

While Salesforce.com’s current SaaS revenue of more than $10 billion is much larger than Oracle’s current SaaS revenue—for the three months ended Aug. 31, Oracle posted SaaS revenue of $1.1 billion—Oracle’s bringing in new SaaS customers and revenue much faster than Salesforce.

The following quote is rather interesting: “Since Larry Ellison has spent the past 40 years competing brashly against and beating rivals large and small, it wasn’t a huge shock to hear him recently rail about how cloud archrival Amazon “has no expertise in database.” But it was a shocker to hear Ellison go on to say that “Amazon runs their entire operation on Oracle [Database]…. They paid us $60 million last year in [database] support and license! And you know who’s not on Amazon? Amazon is not on Amazon.

And finally, the topic of In-Memory databases is quite hot. Several database brands have their IMDB. A picture is worth a thousand words:

DBA 3.0 – Database Administration in the Cloud

In Cloud, DBA, OOW, Oracle database on September 23, 2017 at 10:35

“The interesting thing about cloud computing is that we’ve redefined cloud computing to include everything that we already do … The computer industry is the only industry that is more fashion-driven than women’s fashion.” – Larry Ellison, CTO, Oracle

DBA 1.0 -> DBA 2.0 -> DBA 3.0: Definitely, the versioning of DBAs is falling behind the database versions of Oracle, Microsoft, IBM, etc. Mainframe, client-server, internet, grid computing, cloud computing…

The topic on the DBA profession and how it changes, how it evolves and how it expands has been of interest among top experts in the industry:

Penny Arvil, VP of Oracle Database Product Development, stated that DBAs are being asked to understand what businesses do with data rather than just the mechanics of keeping the database healthy and running.

Kellyn Pot’Vin-Gorman claims that DBAs with advanced skills will have plenty of work to keep them busy and if Larry is successful with the bid to rid companies of their DBAs for a period of time, they’ll be very busy cleaning up the mess afterwards.

Tim Hall said that for pragmatic DBAs the role has evolved so much over the years, and will continue to do so. Such DBAs have to continue to adapt or die.

Megan Elphingstone concluded that DBA skills would be helpful, but not required in a DBaaS environment.

Jim Donahoe hosted a discussion about the state of the DBA as the cloud continues to increase in popularity.

The first time I heard about DBA 2.0 was about 10 years ago. At Oracle OpenWorld 2017 (next week or so), I will be listening to what DBA 3.0 is: how the life of a Database Administrator has changed! If you google for DBA 3.0, most likely you will find information about how to play De Bellis Antiquitatis DBA 3.0. Different story…

But if I can also contribute something to the discussion, it is probably the observation that ever since database vendors started automating things in the database, it has only generated more work for DBAs. More DBAs are needed now than ever. The growing size and complexity of IT systems is definitely contributing to that need.

These DBA sessions in San Francisco are quite relevant to the DBA profession (last one on the list will be delivered by me):

– Advance from DBA to Cloud Administrator: Wednesday, Oct 04, 2:00 p.m. – 2:45 p.m. | Moscone West – Room 3022
– Navigating Your DBA Career in the Oracle Cloud: Monday, Oct 02, 1:15 p.m. – 2:00 p.m. | Moscone West – Room 3005
– Security in Oracle Database Cloud Service: Sunday, Oct 01, 3:45 p.m. – 4:30 p.m. | Moscone South – Room 159
– How to Eliminate the Storm When Moving to the Cloud: Sunday, Oct 01, 1:45 p.m. – 2:30 p.m. | Moscone South – Room 160
– War of the Worlds: DBAs Versus Developers: Wednesday, Oct 04, 1:00 p.m. – 1:45 p.m. | Moscone West – Room 3014
– DBA Types: Sunday, Oct 01, 1:45 p.m. – 2:30 p.m. | Marriott Marquis (Yerba Buena Level) – Nob Hill A/B

And finally, a couple of quotes about databases:

– “Database Management System [Origin: Data + Latin basus “low, mean, vile, menial, degrading, counterfeit.”] A complex set of interrelational data structures allowing data to be lost in many convenient sequences while retaining a complete record of the logical relations between the missing items. — From The Devil’s DP Dictionary” ― Stan Kelly Bootle
– “I’m an oracle of the past. I can accurately predict up to 1 minute in the future, by thoroughly investigating the last 2 years of your life. Also, I look like an old database – flat and full of useless info.” ― Will Advise, Nothing is here…

DBA Statements

In DBA, Oracle database, PL/SQL, SQL on September 5, 2017 at 11:51

“A statement is persuasive and credible either because it is directly self-evident or because it appears to be proved from other statements that are so.” Aristotle

In Oracle 12.2, there is a new view called DBA_STATEMENTS. It can help us better understand what SQL we have within our PL/SQL functions, procedures and packages.

There is too little on the Internet and nothing on Metalink about this new view.

PL/Scope was introduced with Oracle 11.1 and covered only PL/SQL. In 12.2, PL/Scope was enhanced by Oracle in order to report on the occurrences of static and dynamic SQL call sites in PL/SQL units.

PL/Scope can help you answer questions such as:
– Where and how is a column x in table y used in the PL/SQL code?
– Is the SQL in my application PL/SQL code compatible with TimesTen?
– What are the constants, variables and exceptions in my application that are declared but never used?
– Is my code at risk for SQL injection and what are the SQL statements with an optimizer hint coded in the application?
– Which SQL has a BULK COLLECT or EXECUTE IMMEDIATE clause?

Details can be found in the PL/Scope Database Development Guide or at Philipp Salvisberg’s blog.

Here is an example: how to find all “execute immediate” statements and all hints used in my PL/SQL units? If needed, you can limit the query to only RULE hints (for example).

1. You need to set the PLSCOPE_SETTINGS parameter and ensure SYSAUX has enough space:


SQL> SELECT SPACE_USAGE_KBYTES FROM V$SYSAUX_OCCUPANTS 
WHERE OCCUPANT_NAME='PL/SCOPE';

SPACE_USAGE_KBYTES
------------------
              1984

SQL> show parameter PLSCOPE_SETTINGS

NAME                                 TYPE        VALUE
------------------------------------ ----------- -------------------
plscope_settings                     string      IDENTIFIERS:NONE

SQL> alter system set plscope_settings='STATEMENTS:ALL' scope=both;

System altered.

SQL> show parameter PLSCOPE_SETTINGS

NAME                                 TYPE        VALUE
------------------------------------ ----------- -------------------
plscope_settings                     string      STATEMENTS:ALL

2. You must compile the PL/SQL units with the PLSCOPE_SETTINGS=’STATEMENTS:ALL’ to collect the metadata. SQL statement types that PL/Scope collects are: SELECT, UPDATE, INSERT, DELETE, MERGE, EXECUTE IMMEDIATE, SET TRANSACTION, LOCK TABLE, COMMIT, SAVEPOINT, ROLLBACK, OPEN, CLOSE and FETCH.


-- start
SQL> select TYPE, OBJECT_NAME, OBJECT_TYPE, HAS_HINT, 
SUBSTR(TEXT,1,LENGTH(TEXT)-INSTR(REVERSE(TEXT), '/*') +2 ) as "HINT" 
from DBA_STATEMENTS
where TYPE='EXECUTE IMMEDIATE' or HAS_HINT='YES';

TYPE              OBJECT_NAME   OBJECT_TYPE  HAS HINT
----------------- ------------- ------------ --- ------------------
EXECUTE IMMEDIATE LASKE_KAIKKI  PROCEDURE    NO  
SELECT            LASKE_KAIKKI  PROCEDURE    YES SELECT /*+ RULE */

-- end

Check also the DBA_STATEMENTS and ALL_STATEMENTS documentation. And the blog post by Jeff Smith entitled PL/Scope in Oracle Database 12c Release 2 and Oracle SQL Developer.

But finally, here is a way to regenerate the SQL statements without the hints:


-- start
SQL> select TEXT from DBA_STATEMENTS where HAS_HINT='YES';

TEXT
------------------------------------------------------------
SELECT /*+ RULE */ NULL FROM DUAL WHERE SYSDATE = SYSDATE

SQL> select 'SELECT '||
TRIM(SUBSTR(TEXT, LENGTH(TEXT) - INSTR(REVERSE(TEXT), '/*') + 2))
as "SQL without HINT"
from DBA_STATEMENTS where HAS_HINT='YES';

SQL without HINT
-------------------------------------------------------------
SELECT NULL FROM DUAL WHERE SYSDATE = SYSDATE

-- end

Cloud Nine

In Cloud, DBA, IaaS, Oracle database on May 3, 2017 at 16:58

“Get happiness out of your work or you may never know what happiness is.” — Elbert Hubbard

According to Amazon and quoted by Fortune Magazine in a recent article entitled “Amazon Data Center Chief schools Oracle CEO on Cloud claims“, AWS executive Andy Jassy said (at AWS Re:Invent two years ago) that every database customer he talks to is unhappy with their vendor: “I haven’t met a database customer that is not looking to flee their vendor.”

Another interesting article by James Hamilton entitled “How many Data Centers needed world-wide” discusses more or less the same topic. Reading the comments after it is worthwhile. Also consider reading these extensive performance results about Cloud performance and TCO of the Oracle database.

A young and extremely smart analyst from my company asked me last week: “Why is the Oracle database better than MySQL or MongoDB?”. Tough question, right? You may ask the same about DB2 or SQL Server. All databases have their pros and cons. And we as people have our preferences, based on experience, knowledge and prejudices.

If you try to find out the explanation of the quote at the top, you might very likely end up with this one: “You have to spend most of your life working, so if you’re unhappy at your work you’re likely to always be unhappy”.

So, I have been happy (if that is the right word) working with the Oracle database. Unlike DB2, you have all the tools, options and automation to tune it. With a couple of hundred MySQL databases at Nokia, we spent more time (thank you Google!) investigating issues than with more than one thousand Oracle databases. SQL Server: if you prefer using the mouse instead of the keyboard, then this is the right database for you! Teradata compared to Exadata: let me not start…

As Forrester say, In-Memory Databases are driving next-generation workloads and use cases. Check out this recent comparison of all vendors.

But back to Cloud. Have a look at what speed and what features Oracle is embedding into its Cloud. By far the best Cloud for Oracle workloads! All these are new additions to the Oracle IaaS:

What’s New for Oracle Compute Cloud Service (IaaS)

– Compute Service: 8 and 16 OCPU Virtual Machines
– CentOS, Ubuntu OS Images
– RHEL via BYOI OS Image
– Multipart Upload: Multipart upload enables uploading an object in parts, enhancing speed of upload and accommodating larger objects
– Audit Service: This new service automatically records calls to all supported BMCS public API endpoints as log events
– Search Domain DHCP
– Terraform Provider: The BMCS Terraform provider is now available. Terraform is an open source infrastructure automation and management software tool
– Developer Tools Enhancements: New versions of BMCS developer tools are now available, including Ruby, Python, and Java SDKs, HDFS Connector, and CLI
– New Instance Shapes
– Windows BYOL: It is now possible to Bring Your Own License (BYOL) for Windows Server
– NVMe Storage: You can now use NVMe SSD disks as ephemeral data disks attached to your instances
– SSD Block Storage: These high-performance volumes can be used for persistent block storage or for bootable volumes
– New Web UI: This new interface can be used to perform basic operations against Storage Cloud resources
– HSM Cloud Service Integration