Dontcheff

Archive for the ‘DBA’ Category

Amazon’s Aurora and Oracle’s Autonomous ATP

In Autonomous, Cloud, DBA, PostgreSQL on August 29, 2018 at 09:26

Databases are very much like wine, cheese and trees: they get better as they age.

Amazon Aurora has existed since 2015. The word aurora comes from Latin and means dawn. The name was borne by the Roman mythological goddess of dawn and by the princess in the fairy tale Sleeping Beauty.

Both Amazon’s “dawn” Aurora and Oracle’s ATP are typical cloud OLTP systems.

The question is: what are their differences, and which one is better and meant exactly for my needs?

Oracle ATP is based on Oracle’s database and Exadata, and it adopts innovations from both systems.

Amazon’s Aurora has 2 flavors: Amazon Aurora MySQL and Amazon Aurora PostgreSQL.

Amazon Aurora MySQL is compatible with MySQL 5.6 using the InnoDB storage engine. Certain MySQL features, like the MyISAM storage engine, are not available with Amazon Aurora. Amazon Aurora PostgreSQL is compatible with PostgreSQL 9.6. The storage layer is virtualized and sits on a proprietary virtualized storage system backed by SSD. And you pay $0.20 per 1 million IO requests.
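
To put that IO pricing in perspective, here is the back-of-the-envelope arithmetic (the workload figure is made up for the illustration):

    SQL> -- a hypothetical Aurora workload of 2.5 billion IO requests per month,
    SQL> -- priced at $0.20 per 1 million requests
    SQL> select 2500000000/1000000 * 0.20 as monthly_io_cost_usd from dual;

    MONTHLY_IO_COST_USD
    -------------------
                    500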

Oracle’s Autonomous database also comes in 2 flavors: Oracle ADW and Oracle ATP. Check Franck Pachot’s article ATP vs ADW – the Autonomous Database lockdown profiles to see the differences between the two cloud databases.

In general, one can compare Oracle ADW with Amazon Redshift and Oracle ATP with Amazon Aurora.

One way to compare them is to look at the ranking provided by DB-Engines: Amazon Aurora vs. Oracle. It is a no-brainer who the leader is: a score of 1300 vs. a score of 5, in favor of Oracle.

Another interesting comparison comes from Amalgam Insights. Check how Oracle Autonomous Transaction Processing lowers barriers to entry for data-driven business, and note the DBA labor cost involved: five times lower for Oracle ATP compared to Amazon, as all the routine DBA tasks have been totally eliminated!

The message from them is very clear: “Oracle ATP could reduce the cost of cloud-based transactional database hosting by 65%. Companies seeking to build net-new transactional databases to support Internet of Things, messaging, and other new data-driven businesses should consider Oracle ATP and should do due diligence on Oracle Autonomous Database Cloud for reducing long-term Total Cost of Ownership.”

This month (August 2018), there was an interesting article by Den Howlett entitled Oracle introduces autonomous transaction processing database – pounds on AWS. Here are 2 interesting and probably correct statements/quotes from there:

1. It really is hard to get off an established database, even one that can be as expensive as Oracle can turn out to be.
2. Some of the very largest workloads will not go to the public cloud anytime soon. Maybe never which in internet years is after 2030.

As a kind of proof of how reliable and fast Oracle’s Autonomous Transaction Processing database is, consider the following OLTP workload, running non-stop in a balanced way, without any major spikes and without a single queued statement!

No human labor, no human error, and no manual performance tuning!


Migrating Amazon Redshift to Autonomous Data Warehouse Cloud

In Autonomous, Data Warehouse, DBA, Exadata, PostgreSQL on July 4, 2018 at 18:34

“Big Data wins games but Data Warehousing wins championships” says Michael Jordan. Data Scientists create the algorithm, but as Todd Goldman says, if there is no data engineer to put it into production for use by the business, does it have any value?

If you google for Amazon Redshift vs Oracle, you will find lots of articles on how to migrate Oracle to Redshift. Is it worth it? Perhaps it was in some cases, before Oracle Autonomous Data Warehouse Cloud existed.

Now, things look quite different. “Oracle Autonomous Data Warehouse processes data 8-14 times faster than AWS Redshift. In addition, Autonomous Data Warehouse Cloud costs 5 to 8x less than AWS Redshift. Oracle performs in an hour what Redshift does in 10 hours.” At least according to the Oracle Autonomous Data Warehouse Cloud white paper. And I have had nothing but great experiences with ADWC for the past half a year or so.

But, what are the major issues and problems reported by Redshift users?

One of the most common complaints involves how Amazon Redshift handles large updates. In particular, the process of moving massive data sets across the internet requires substantial bandwidth. While Redshift is set up for high performance with large data sets, “there have been some reports of less than optimal performance” for the largest data sets. An article by Alan R. Earls entitled Amazon Redshift review reveals quirks, frustrations claims that reviewers want more from the big data service. So:

Why migrate from Amazon Redshift to Autonomous Data Warehouse Cloud?

1. Amazon Redshift is ranked 2nd in Cloud Data Warehouse with 14 reviews vs Oracle Exadata which is ranked 1st in Data Warehouse with 55 reviews.

The top reviewer of Amazon Redshift writes “It processes petabytes of data and supports many file formats. Restoring huge snapshots takes too long”. The top reviewer of Oracle Exadata writes “Thanks to smart scans, the amount of data transferred from storage to database nodes significantly decreases”.

2. Oracle Autonomous dominates in features and capabilities:

DB-engines shows an excellent system properties comparison of Amazon Redshift vs. Oracle.

In addition, reading through these thoughts on using Amazon Redshift as a replacement for an Oracle Data Warehouse can be worthwhile. It shows how Amazon Redshift compares with a more traditional DW approach. But enterprises have some Redshift concerns, including:

– The difference between versions of PostgreSQL and the version Amazon uses with Redshift
– Scalability for very large data volumes is limited and performance suffers
– The query interface is not modern and is a bit behind
– Redshift needs more flexibility to create user-defined functions
– Access to the underlying operating system and to certain database functions and capabilities isn’t available
– Starting sizes may be too large for some use cases
– Redshift also resides in a single AWS availability zone

3. Amazon Redshift has several limitations: see Limits in Amazon Redshift. On the other hand, you can hardly find a database feature not yet implemented by Oracle.

4. But the most important reason to migrate to ADWC is that the Oracle Autonomous Database Cloud offers total automation based on machine learning and eliminates human labor, human error, and manual tuning.

How to migrate from Amazon Redshift to Autonomous Data Warehouse Cloud?

Use the SQL Developer Amazon Redshift Migration Assistant which is available with SQL Developer 17.4. It provides easy migration of Amazon Redshift environments on a per-schema basis.

Here are the 5 steps to migrate from Amazon Redshift to Autonomous Data Warehouse Cloud (a data-loading sketch follows the list):

1. Connect to Amazon Redshift
2. Start the Cloud Migration Wizard
3. Review and Finish the Amazon Redshift Migration
4. Use the Generated Amazon Redshift Migration Scripts
5. Perform the Post Migration Tasks
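
Under the hood, the generated scripts unload the Redshift data to Amazon S3 and load it from there into ADWC. A minimal sketch of the loading side, assuming the credential, table and file names below (they are illustrative, not produced by the migration assistant):

    SQL> begin
      2    -- register the AWS keys ADWC uses to read the unloaded files from S3
      3    DBMS_CLOUD.CREATE_CREDENTIAL(
      4      credential_name => 'S3_CRED',
      5      username        => 'my_aws_access_key_id',
      6      password        => 'my_aws_secret_access_key');
      7    -- load the unloaded Redshift data into an existing ADWC staging table
      8    DBMS_CLOUD.COPY_DATA(
      9      table_name      => 'MY_SALES',
     10      credential_name => 'S3_CRED',
     11      file_uri_list   => 'https://s3-us-west-2.amazonaws.com/mybucket/my_sales.csv',
     12      format          => json_object('delimiter' value ','));
     13  end;
     14  /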

Check out what Paul Way says about why Oracle thinks Autonomous IT can ultimately win the Cloud War.

Finally, here is what Amazon CTO Werner Vogels is saying: Our cloud offers any database you need. And I agree with him that a one-size-fits-all database doesn’t fit anyone. But mission and business critical enterprise systems with huge requirements and resource needs deserve only the best.

The DBA profession beyond autonomous: a database without a DBA is like a tree without roots

In Autonomous, Cloud, Databases, DBA on May 30, 2018 at 19:41

“To make a vehicle autonomous, you need to gather massive streams of data from loads of sensors and cameras and process that data on the fly so that the car can ‘see’ what’s around it,” says Daniel Lyons.

Let me add that the data must be stored somewhere, analyzed by some software, monitored and backed up by someone, and so on and so on…

Top 5 Industry Early Adopters Of Autonomous Systems are: (1) Information Technology: Oracle’s Autonomous Data Warehouse Cloud, (2) Automotive, (3) Manufacturing, (4) Retail and (5) Healthcare.

Being an early adopter of ADWC, I must say that it is probably the best product created by Oracle Corporation – for sure part of the top five.

This month (May 2018), ComputerWeekly published an article quoting Oracle CEO Mark Hurd that the long-term future of database administrators could be at risk if every enterprise adopts the Oracle 18c autonomous database.

“Hurd said it could take almost a year to get on-premise databases patched, whereas patching was instant with the autonomous version. If everyone had the autonomous database, that would change to instantaneous.”

So where does that leave Oracle DBAs around the world? Possibly in the unemployment queue, at least according to Hurd.

“There are hundreds of thousands of DBAs managing Oracle databases. If all of that moved to the autonomous database, the number would change to zero,” Hurd said at an Oracle media event in Redwood Shores, California.

If you are interested in more detail on this subject, I suggest you read the following articles in the order below:

The Robots are coming by James Anthony: “But surely we’ve been here before? Indeed, a quick Google search brings up the following examples of white papers by Oracle with a reference to the database being self-managing all the way back to 2003.”

Oracle Autonomous Database and the Death of the DBA by Tim Hall: “Myself and many others have been talking about this for over a decade.”

Death of the DBA, Long Live the DBA by Kellyn Pot’Vin-Gorman: “With DBAs that have been around a while, we know the idea that you don’t need a DBA has been around since Oracle 7, the self-healing database.”

No DBA Required? by Tim Hall: “It will be interesting to see what Oracle actually come up with at the end of all this…”

Self-Driving Databases are Coming: What Next for DBAs? by Maria Colgan: “It’s also important for DBAs to remember that the transition to an autonomous environment is not something that will occur overnight.”

Death of the Oracle DBA (again) by Jonathan Stuart: “Twenty years later I run Claremont’s Managed Services practice and the DBA group is our largest delivery team.”

Don’t Fall For The “Autonomous Database” Distraction by Greg McStravick: a totally different point of view on autonomous databases.

Now, “a picture is worth a thousand words”. Here are 5 screenshots from the Autonomous Data Warehouse Cloud documentation:

1. Who will be creating external tables using the DBMS_CLOUD package? (a sketch follows these questions)

2. Who will run “alter database property set ...” in order to create credentials for the Oracle Cloud Infrastructure?

3. Who will restore and recover the database in case of any type of failure? Or failures never happen, right?

4. Who will manage runaway SQL with cs_resource_manager and run “alter system kill session”?

5. Who will manage the CBO statistics and add hints?
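
As an illustration of question 1, here is a minimal sketch of the kind of call the documentation describes (the credential, file URI and column list are made up for the example):

    SQL> begin
      2    DBMS_CLOUD.CREATE_EXTERNAL_TABLE(
      3      table_name      => 'SALES_EXT',
      4      credential_name => 'OCI_CRED',  -- created beforehand with DBMS_CLOUD.CREATE_CREDENTIAL
      5      file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/mytenancy/b/mybucket/o/sales.csv',
      6      format          => json_object('delimiter' value ','),
      7      column_list     => 'sale_id NUMBER, sale_date DATE, amount NUMBER');
      8  end;
      9  /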

As of today, we have 4 Exadata choices, with Autonomous being by far the best – for data warehouse loads, for now. As explained by Alan Zeichick, Autonomous Capabilities Will Make Data Warehouses — And DBAs — More Valuable: “No need for a resume writer: DBAs will still have plenty of work to do.”

So still: a database without a DBA is like a tree without roots.

P.S. Check out the book Human + Machine: Reimagining Work in the Age of AI by Paul R. Daugherty and H. James (Jim) Wilson.

DBA Internals of the Oracle Autonomous Database

In Cloud, DBA, Oracle database, Oracle internals on March 28, 2018 at 07:11

First things first: the word autonomous comes from the Greek word autónomos, which means “with laws of one’s own, independent”.

After starting to use the Autonomous Data Warehouse Cloud, I must say I am pleasantly surprised to see something totally new, simple, uncomplicated and effortless, with no additional tuning or re-architecting of the Oracle databases needed – the underlying Oracle Cloud Infrastructure is super fast and highly reliable.

1. You may connect to ADWC either by using the web interface or as a client (I use SQL Developer 17.4), but for the client connection type, choose Cloud PDB and not TNS. Your configuration file is a zip file, not the plain text file DBAs are used to.

2. You cannot create indexes on columns, you cannot partition tables, you cannot create materialized views, etc. Not even database links. You will get an error message: “ORA-00439: feature not enabled: Partitioning” or “ORA-01031: insufficient privileges”.

ADWC lets you create primary key, unique key and foreign key constraints in RELY DISABLE NOVALIDATE mode, which means that they are not enforced. These constraints can also be created in enforced mode, so technically you can create constraints as in a non-autonomous Oracle database.

Note that in execution plans, primary keys and unique keys will only be used for single table lookups by the optimizer; they will not be used for joins.
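
A minimal sketch of both modes (the table and constraint names are mine):

    SQL> -- the ADWC default style: metadata for the optimizer, not enforced
    SQL> alter table sales add constraint sales_pk
      2    primary key (sale_id) rely disable novalidate;

    SQL> -- the traditional, enforced mode also works
    SQL> alter table sales add constraint sales_code_uk
      2    unique (sale_code);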

But … you can run alter system kill session!

3. The Oracle Autonomous Data Warehouse interface contains all the necessary capabilities for a non-professional database user to create their own data marts and run analytical reports on the data. You can even run AWR reports.

4. You do not have full DBA control as Oracle (in my opinion) uses lockdown profiles in order to make the database autonomous. As ADMIN user, you have 25 roles including the new DWROLE which you would normally grant to all ADWC users created by you. Among those 25 roles, you have GATHER_SYSTEM_STATISTICS, SELECT_CATALOG_ROLE, CONSOLE_ADMIN, etc. You have access to most DBA_ and GV_$ views. Not to mention the 211 system privileges.

5. ADWC configures the database initialization parameters based on the compute and storage capacity you provision. ADWC runs on dozens of non-default init.ora parameters. For example:

parallel_degree_policy = AUTO
optimizer_ignore_parallel_hints = TRUE
result_cache_mode = FORCE
inmemory_size = 1G

You are allowed to change almost no init.ora parameters, except for a few NLS_ and PLSQL_ parameters.

And the DB block size is 8K!
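
A sketch of the kind of change that is still allowed (the format mask is arbitrary):

    SQL> alter session set nls_date_format = 'YYYY-MM-DD HH24:MI:SS';

    Session altered.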

6. I can see 31 underscore parameters which do not have default values; here are a few:

_max_io_size = 33554432 (default is 1048576)
_sqlmon_max_plan = 4000 (default is 0)
_enable_parallel_dml = TRUE (default is FALSE)
_optimizer_answering_query_using_stats = TRUE (default is FALSE)

One of the few alter session commands you can run is “alter session disable parallel dml;”

7. Monitoring SQL is easy:

But there is no Oracle Tuning Pack: you did not expect to have that in an autonomous database, did you? There is no RAT (Real Application Testing), Data Masking and Subsetting Pack, Cloud Management Pack, Text, Java in DB, Oracle XML DB, APEX, Multimedia, etc.

8. Note that this is (for now) a data warehousing platform. However, DML is surprisingly fast too. I managed to insert more than half a billion records in just about 3 minutes:

Do not try to create nested tables, media or spatial types, or use the LONG datatype: they are not supported. Compression is enabled by default. ADWC uses HCC (Hybrid Columnar Compression) for all tables by default; changing the compression method is not allowed.
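
A quick way to check the compression attributes from the dictionary (the table name is illustrative):

    SQL> select table_name, compression, compress_for
      2  from   user_tables
      3  where  table_name = 'SALES';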

9. The new Machine Learning interface is easy and simple:


You can create Notebooks where you have space for data discovery and analytics. Commands are run in a SQL Query Scratchpad.

10. Users of the Oracle Autonomous database are allowed to analyze the tables and thus influence the Cost Based Optimizer, and hence performance – I think end users should not be able to influence the laws (“νόμος, nomos”) of the database.

Conclusion: The Autonomous Database is one of the best things Oracle have ever made. And they have quite a portfolio of products….

Finally, here is a live demo of the Oracle Autonomous Data Warehouse Cloud:

2018, the year of the Cloud underdog Oracle?

In Cloud, DBA, Oracle database on January 8, 2018 at 10:46

“Without data you’re just another person with an opinion.” – W. Edwards Deming

Let us see, based on data, why the Cloud underdog Oracle can be the winner of 2018 and beyond – especially for databases in the Cloud!

Let us check out the most recent data coming from Forrester, Gartner, Forbes and Accenture:

1. Enterprise Workloads Meet the Cloud (Accenture)

“Simply put, an enterprise system consists of an application and the underlying database and infrastructure. Regardless of whether the solution is on-premises or delivered ‘as a service’, the application relies on those two components. Thus, the performance, uptime and security of an application will depend on how well the infrastructure and databases support those attributes.”

Both Figure 1 and Figure 2 show impressive results: the Oracle Cloud Infrastructure allows more than 3000 transactions per second while the leading cloud provider cannot even reach 400. Even the old Oracle Cloud Infrastructure Classic is at 1300 transactions per second.

The Oracle Cloud Infrastructure latency averages 0.168ms, while the leading cloud providers have about 6 times higher latency on average: 0.962ms.

“Armed with these insights, companies should be ready to consider moving their Oracle mission critical workloads to the Oracle Cloud—and reaping the benefits of greater flexibility and more manageable costs.”

2. The Total Economic Impact Of Oracle Java Cloud Service (Forrester)

Let us move to the Java Cloud Service and check the new Forrester Research study:

The costs and benefits for a composite organization with 30 Java developers, based on customer interviews, are:
– Investment costs: $827,384.
– Total benefits: $3,360,871.
– Net cost savings and benefits: $2,533,488.

The composite organization analysis points to benefits of $1,120,290 per year versus investment costs of $275,794, adding up to a net present value (NPV) of $2,533,488 over three years. With Java Cloud Service, developers gained valuable time with near instant development instances and were finally able to provide continuous delivery with applications and functionality for the organization.
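
The arithmetic is easy to verify (a sketch; the inputs are the Forrester figures quoted above, treated as already risk-adjusted present values):

    SQL> select 3 * 1120290            as benefits_3yr,
      2         3 * 275794             as costs_3yr,
      3         3 * (1120290 - 275794) as npv_3yr
      4  from dual;

    BENEFITS_3YR  COSTS_3YR    NPV_3YR
    ------------ ---------- ----------
         3360870     827382    2533488

The dollar or two of difference from the quoted three-year totals is rounding of the per-year figures.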

3. Market Share Analysis: Public Cloud Services, Worldwide (Gartner)

Table 2, PaaS Public Cloud Service Market Share, 2015-2016 (Millions of U.S. Dollars), ranking by Annual Growth Rate 2016:

1. Oracle 166.9%
2. Amazon 109.1%
3. Alibaba 99.0%
4. Microsoft 46.4%
5. Salesforce 40.2%

Table 3. SaaS Public Cloud Service Market Share, 2015-2016 (Millions of U.S. Dollars), ranking by Annual Growth Rate 2016:

1. Oracle 71.6%
2. Workday 38.8%
3. Dropbox 38.0%
4. Google 37.9%
5. Microsoft 32.6%

4. Oracle And Its Cloud Business Are In Great Shape–And Here Are 10 Reasons Why (Forbes)

For its fiscal Q2 ending Nov. 30, Oracle reported total cloud revenue of $1.5 billion, up 44%, including SaaS revenue of $1.1 billion, up 55%. The combined revenue for cloud and on-premise software was up 9% to $7.8 billion.

Oracle’s Q3 guidance offered growth rates extremely close to those recently posted by salesforce.com. Add to that the highly nontrivial fact that the same company with the $6-billion cloud business also has a $33-billion on-premises business, has rewritten every single bit of that IP for the cloud with complete compatibility for customers taking the hybrid approach, and that the percentage of customers taking the hybrid approach will be somewhere between 98.4% and 100%.

5. Oracle’s Larry Ellison Challenges Amazon, Salesforce And Workday On The Future Of The Cloud (Forbes):

While Salesforce.com’s current SaaS revenue of more than $10 billion is much larger than Oracle’s current SaaS revenue—for the three months ended Aug. 31, Oracle posted SaaS revenue of $1.1 billion—Oracle’s bringing in new SaaS customers and revenue much faster than Salesforce.

The following quote is rather interesting: “Since Larry Ellison has spent the past 40 years competing brashly against and beating rivals large and small, it wasn’t a huge shock to hear him recently rail about how cloud archrival Amazon “has no expertise in database.” But it was a shocker to hear Ellison go on to say that “Amazon runs their entire operation on Oracle [Database]…. They paid us $60 million last year in [database] support and license! And you know who’s not on Amazon? Amazon is not on Amazon.”

And finally, the topic of In-Memory databases is quite hot. Several database brands have their IMDB. A picture is worth a thousand words:

Artificial stupidity as a DBA limitation of artificial intelligence

In Data, Database tuning, Databases, DBA on December 6, 2017 at 07:47

“Artificial intelligence is no match for natural stupidity” ― Albert Einstein

What about introducing Artificial Intelligence into the database to an extent it tunes itself into all possible dimensions?

You have probably either seen the question above or have already asked yourself if that was at all possible. On Ask Tom, John from Guildford wrote the following:

As for Artificial Intelligence, well Artificial Stupidity is more likely to be true. Humanity is not privy to the algorithm for intelligence. Anyone who’s had the pleasure of dealing with machine generated code knows that software is no more capable of writing a cohesive system than it is of becoming self-aware.

Provided you’re not trying to be a cheap alternative to an automaton you just need to think. That one function alone differentiates us from computers, so do more of it. The most sublime software on the planet has an IQ of zero, so outdoing it shouldn’t be all that hard.

Stephen Hawking thinks computers may surpass human intelligence and take over the world. Fear artificial stupidity, not artificial intelligence!

Einstein is credited with saying (but it was probably Alexandre Dumas or Elbert Hubbard who deserve the recognition): “The difference between genius and stupidity is that genius has its limits.”

Explore artificial stupidity (AS) and/or read Charles Wheelan’s book Naked Statistics to understand this kind of AI danger. By the way, according to Woody Allen, 94.5% of all statistics are made up!

So what are the limitations of AI? Jay Liebowitz argues that “if intelligence and stupidity naturally exist, and if AI is said to exist, then is there something that might be called ‘artificial stupidity’?” According to him, three of these limitations are:

• Ability to possess and use common sense
• Development of deep reasoning systems
• Ability to easily acquire and update knowledge

But does artificial intelligence use a database in order to be artificial intelligence? A few very interesting answers to that question are given by Douglas Green, Jordan Miller and Ramon Morales; here is a summary:

Although AI could be built without a database, it would probably be more powerful if a database were added. AI and databases are currently not very well integrated. The database is just a standard tool that the AI uses. However, as AI becomes more advanced, it may become more a part of the database itself.

I don’t believe you can have an effective Artificial Intelligence without a database or memory structure of some kind.

While it is theoretically possible to have an artificial intelligence without using a database, it makes things a LOT easier if you can store what the AI knows somewhere convenient.

As Demystifying Artificial Intelligence explains, AI has been embedded into some of the most fundamental aspects of data management, making those critical data-driven processes more celeritous and manageable.

Amazon Mechanical Turk is worth looking into, and Oracle are also ready for business with AI.

Matt Johnson, a Navy pilot turned AI researcher, said at a conference this summer that one of the places we are not making a lot of advances is in that teaming, in that interaction (of humans and AI) – see Artificial Stupidity: When Artificial Intelligence + Human = Disaster.

Bottom line: if AI uses a database, then the intelligent database should be at least autonomous and have most tasks automated, not relying on artificial stupidity as a DBA limitation of artificial intelligence. Whatever that means… I do not want to curb your enthusiasm, but we need to first fill in the skills gap: we need data engineers who understand databases and data warehouses, infrastructure and tools that span data cleaning, ingestion, security and predictions. And in this aspect, Cloud is critical and a big differentiator.

P.S. Is Artificial Intelligence Progress a Bubble? was published 4 days after this blog post.

Blockchain for DBAs

In Data, Databases, DBA on October 30, 2017 at 09:25

“Instead of putting the taxi driver out of a job, blockchain puts Uber out of a job and lets the taxi driver work with the customer directly.” – Vitalik Buterin

A blockchain database consists of two kinds of records: transactions and blocks. Blocks contain the lists of the transactions that are hashed and encoded into a hash (Merkle) tree. The linked blocks form a chain, as every block holds the hash pointer to the previous block.

The blockchain can be stored in a flat file or in a database. For example, the Bitcoin core client stores the blockchain metadata using LevelDB (based on Google’s Bigtable database system).

The diagram above can be used to create the schema in PostgreSQL. “As far as what DBMS you should put it in”, says Ali Razeghi, “that’s up to your use case. If you want to analyze the transactions/wallet IDs to see some patterns or do BI work I would recommend a relational DB. If you want to set up a live ingest with multiple cryptocoins I would recommend something that doesn’t need the transaction log, so a MongoDB solution would be good.”

If you want to set up a MySQL database: here are 8 easy steps.

But what is the structure of the block, what does it look like?

The block has 4 fields:

1. Block Size: The size of the block in bytes
2. Block Header: Six fields in the block header
3. Transaction Counter: How many transactions follow
4. Transactions: The transactions recorded in this block

The block header has 6 fields (a relational sketch of the whole block structure follows the list):

1. Version: A version number to track software/protocol upgrades
2. Previous Block Hash: A reference to the hash of the previous (parent) block in the chain
3. Merkle Root: A hash of the root of the merkle tree of this block’s transactions
4. Timestamp: The approximate creation time of this block (seconds from Unix Epoch)
5. Difficulty Target: The proof-of-work algorithm difficulty target for this block
6. Nonce: A counter used for the proof-of-work algorithm
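
Putting the two lists together, a minimal PostgreSQL sketch of such a schema (the table and column names are mine, not taken from the missing diagram):

    -- one row per block; the chain is the self-reference to the parent block
    CREATE TABLE blocks (
        block_hash      TEXT PRIMARY KEY,        -- hash of the block header
        version         BIGINT,                  -- software/protocol version
        prev_block_hash TEXT REFERENCES blocks,  -- link to the previous (parent) block
        merkle_root     TEXT,                    -- root of the Merkle tree of the transactions
        block_time      BIGINT,                  -- seconds from Unix Epoch
        difficulty_bits BIGINT,                  -- proof-of-work difficulty target
        nonce           BIGINT,                  -- proof-of-work counter
        block_size      BIGINT,                  -- size of the block in bytes
        tx_count        BIGINT                   -- the transaction counter field
    );

    -- one row per transaction, hanging off its block
    CREATE TABLE transactions (
        tx_hash    TEXT PRIMARY KEY,
        block_hash TEXT REFERENCES blocks,
        raw_tx     BYTEA                         -- the serialized transaction itself
    );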

More details, like for example details on the block header hash and block height, can be found here.

But how about blockchain vs. relational database: which is right for your application? As you can see, because the term “blockchain” is not clearly defined, you could argue that almost any IT project could be described as using a blockchain.

It is worth reading Guy Harrison’s article Sealing MongoDB documents on the blockchain. Here is a nice quote: “As a database administrator in the early 1990s, I remember the shock I felt when I realized that the contents of the database files were plain text; I’d just assumed they were encrypted and could only be modified by the database engine acting on behalf of a validated user.”

The blockchain technology is a very special kind of a distributed database. Sebastien Meunier’s post concludes that, ironically, there is no consensus on the definition of what blockchain technology is.

I particularly like his last question: is a private blockchain without a token really more efficient than a centralized system? And I would add: a private blockchain, really?

But once more, what is blockchain? Rockford Lhotka gives a very good DBA-friendly definition/characteristics of blockchain:

1. A linked list where each node contains data
2. Immutable:
– Each new node is cryptographically linked to the previous node
– The list and the data in each node are therefore immutable; tampering breaks the cryptography
3. Append-only
– New nodes can be added to the list, though existing nodes can’t be altered
4. Persistent
– Hence it is a data store – the list and nodes of data are persisted
5. Distributed
– Copies of the list exist on many physical devices/servers
– Failure of 1+ physical devices has no impact on the integrity of the data
– The physical devices form a type of networked cluster and work together
– New nodes are only appended to the list if some quorum of physical devices agrees with the cryptography and validity of the node, via consistent algorithms running on all devices.

Kevin Ford’s reply is a good one to conclude with: “Based on this description (above) it really sounds like your (Rockford Lhotka’s) earlier comparison to the hype around XML is spot on. It sounds like in and of itself it isn’t particularly anything except a low level technology until you structure it to meet a particular problem.”

The nature of blockchain technology makes it difficult to work with high transactional volumes.

But DBAs can have a look at (1) BigchainDB, a database with several blockchain characteristics added: high transaction throughput, a decentralized database, immutability and native support for assets, and (2) at Chainfrog, if interested in connecting legacy databases together. As far as I know, as of now they support at least MySQL and SQL Server.

DBA 3.0 – Database Administration in the Cloud

In Cloud, DBA, OOW, Oracle database on September 23, 2017 at 10:35

“The interesting thing about cloud computing is that we’ve redefined cloud computing to include everything that we already do … The computer industry is the only industry that is more fashion-driven than women’s fashion.” – Larry Ellison, CTO, Oracle

DBA 1.0 -> DBA 2.0 -> DBA 3.0: definitely, the versioning of DBAs is falling behind the database versions of Oracle, Microsoft, IBM, etc. Mainframe, client-server, internet, grid computing, cloud computing…

The topic of the DBA profession and how it changes, evolves and expands has been of interest among top experts in the industry:

Penny Avril, VP of Oracle Database Product Development, stated that DBAs are being asked to understand what businesses do with data rather than just the mechanics of keeping the database healthy and running.

Kellyn Pot’Vin-Gorman claims that DBAs with advanced skills will have plenty of work to keep them busy, and if Larry is successful with the bid to rid companies of their DBAs for a period of time, they’ll be very busy cleaning up the mess afterwards.

Tim Hall said that for pragmatic DBAs the role has evolved so much over the years, and will continue to do so. Such DBAs have to continue to adapt or die.

Megan Elphingstone concluded that DBA skills would be helpful, but not required, in a DBaaS environment.

Jim Donahoe hosted a discussion about the state of the DBA as the cloud continues to increase in popularity.

The first time I heard about DBA 2.0 was about 10 years ago. At Oracle OpenWorld 2017 (next week or so), I will be listening to what DBA 3.0 is: how the life of a Database Administrator has changed! If you google for DBA 3.0, most likely you will find information about how to play De Bellis Antiquitatis DBA 3.0. Different story…

But if I can donate something to the discussion, it is probably the observation that whenever a database vendor automated something in the database, it only generated more work for DBAs in the future. More DBAs are needed now than ever, and the growing size and complexity of IT systems is definitely contributing to that need.

These DBA sessions in San Francisco are quite relevant to the DBA profession (the last one on the list will be delivered by me):

– Advance from DBA to Cloud Administrator: Wednesday, Oct 04, 2:00 p.m. – 2:45 p.m. | Moscone West – Room 3022
– Navigating Your DBA Career in the Oracle Cloud: Monday, Oct 02, 1:15 p.m. – 2:00 p.m. | Moscone West – Room 3005
– Security in Oracle Database Cloud Service: Sunday, Oct 01, 3:45 p.m. – 4:30 p.m. | Moscone South – Room 159
– How to Eliminate the Storm When Moving to the Cloud: Sunday, Oct 01, 1:45 p.m. – 2:30 p.m. | Moscone South – Room 160
– War of the Worlds: DBAs Versus Developers: Wednesday, Oct 04, 1:00 p.m. – 1:45 p.m. | Moscone West – Room 3014
– DBA Types: Sunday, Oct 01, 1:45 p.m. – 2:30 p.m. | Marriott Marquis (Yerba Buena Level) – Nob Hill A/B

And finally, a couple of quotes about databases:

– “Database Management System [Origin: Data + Latin basus “low, mean, vile, menial, degrading, counterfeit.”] A complex set of interrelational data structures allowing data to be lost in many convenient sequences while retaining a complete record of the logical relations between the missing items. — From The Devil’s DP Dictionary” ― Stan Kelly-Bootle
– “I’m an oracle of the past. I can accurately predict up to 1 minute in the future, by thoroughly investigating the last 2 years of your life. Also, I look like an old database – flat and full of useless info.” ― Will Advise, Nothing is here…

DBA Statements

In DBA, Oracle database, PL/SQL, SQL on September 5, 2017 at 11:51

“A statement is persuasive and credible either because it is directly self-evident or because it appears to be proved from other statements that are so.” – Aristotle

In Oracle 12.2, there is a new view called DBA_STATEMENTS. It can help us better understand what SQL we have within our PL/SQL functions, procedures and packages.

There is very little on the Internet and nothing on Metalink about this new view.

PL/Scope was introduced with Oracle 11.1 and covered only PL/SQL. In 12.2, PL/Scope was enhanced by Oracle in order to report on the occurrences of static and dynamic SQL call sites in PL/SQL units.

PL/Scope can help you answer questions such as:
– Where and how is a column x in table y used in the PL/SQL code?
– Is the SQL in my application PL/SQL code compatible with TimesTen?
– What are the constants, variables and exceptions in my application that are declared but never used?
– Is my code at risk for SQL injection, and what are the SQL statements with an optimizer hint coded in the application?
– Which SQL has a BULK COLLECT or EXECUTE IMMEDIATE clause? (a sketch for this one closes this post)

Details can be found in the PL/Scope Database Development Guide or at Philipp Salvisberg’s blog.

Here is an example: how to find all “execute immediate” statements and all hints used in my PL/SQL units? If needed, you can limit the query to only RULE hints (for example).

1. You need to set the PLSCOPE_SETTINGS parameter and ensure SYSAUX has enough space:

    
    SQL> SELECT SPACE_USAGE_KBYTES FROM V$SYSAUX_OCCUPANTS 
    WHERE OCCUPANT_NAME='PL/SCOPE';
    
    SPACE_USAGE_KBYTES
    ------------------
                  1984
    
    SQL> show parameter PLSCOPE_SETTINGS
    
    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- -------------------
    plscope_settings                     string      IDENTIFIERS:NONE
    
    SQL> alter system set plscope_settings='STATEMENTS:ALL' scope=both;
    
    System altered.
    
    SQL> show parameter PLSCOPE_SETTINGS
    
    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- -------------------
    plscope_settings                     string      STATEMENTS:ALL
    
    

2. You must compile the PL/SQL units with PLSCOPE_SETTINGS=’STATEMENTS:ALL’ to collect the metadata; a sketch of such a unit follows. The SQL statement types that PL/Scope collects are: SELECT, UPDATE, INSERT, DELETE, MERGE, EXECUTE IMMEDIATE, SET TRANSACTION, LOCK TABLE, COMMIT, SAVEPOINT, ROLLBACK, OPEN, CLOSE and FETCH.
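
For instance, a minimal sketch of a unit that would show up in the query below (the body is my reconstruction from the output that follows; only the procedure name matches it):

    SQL> create or replace procedure laske_kaikki as
      2    l_dummy varchar2(1);
      3  begin
      4    -- static SQL with an optimizer hint: reported with HAS_HINT='YES'
      5    select /*+ RULE */ null into l_dummy from dual where sysdate = sysdate;
      6    -- dynamic SQL: reported with TYPE='EXECUTE IMMEDIATE'
      7    execute immediate 'savepoint sp1';
      8  end;
      9  /

    Procedure created.

    SQL> -- existing units can be recompiled to collect the metadata:
    SQL> alter procedure laske_kaikki compile plscope_settings='STATEMENTS:ALL' reuse settings;

    Procedure altered.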

    
    -- start
    SQL> select TYPE, OBJECT_NAME, OBJECT_TYPE, HAS_HINT, 
    SUBSTR(TEXT,1,LENGTH(TEXT)-INSTR(REVERSE(TEXT), '/*') +2 ) as "HINT" 
    from DBA_STATEMENTS
    where TYPE='EXECUTE IMMEDIATE' or HAS_HINT='YES';
    
    TYPE              OBJECT_NAME   OBJECT_TYPE  HAS HINT
    ----------------- ------------- ------------ --- ------------------
    EXECUTE IMMEDIATE LASKE_KAIKKI  PROCEDURE    NO  
    SELECT            LASKE_KAIKKI  PROCEDURE    YES SELECT /*+ RULE */
    
    -- end
    

Check also the DBA_STATEMENTS and ALL_STATEMENTS documentation. And the blog post by Jeff Smith entitled PL/Scope in Oracle Database 12c Release 2 and Oracle SQL Developer.

But finally, here is a way to regenerate the SQL statements without the hints:

    
    -- start
    SQL> select TEXT from DBA_STATEMENTS where HAS_HINT='YES';
    
    TEXT
    ------------------------------------------------------------
    SELECT /*+ RULE */ NULL FROM DUAL WHERE SYSDATE = SYSDATE
    
    SQL> select 'SELECT '||
    TRIM(SUBSTR(TEXT, LENGTH(TEXT) - INSTR(REVERSE(TEXT), '/*') + 2))
    as "SQL without HINT"
    from DBA_STATEMENTS where HAS_HINT='YES';
    
    SQL without HINT
    -------------------------------------------------------------
    SELECT NULL FROM DUAL WHERE SYSDATE = SYSDATE
    
    -- end
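
And, picking up one more question from the list above, a sketch for finding the BULK COLLECT statements (assuming the HAS_INTO_BULK column that 12.2 documents for DBA_STATEMENTS/ALL_STATEMENTS):

    SQL> select OBJECT_NAME, OBJECT_TYPE, LINE, TEXT
      2  from   DBA_STATEMENTS
      3  where  HAS_INTO_BULK = 'YES';  -- statements with a BULK COLLECT INTO clause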
    

Can you use Oracle RAC on Third-Party Clouds?

In DBA on July 11, 2017 at 14:40

Q: Can you use Oracle RAC on Third-Party Clouds?
A: No.

The Licensing Oracle Software in the Cloud Computing Environment document states:

“This policy applies to cloud computing environments from the following vendors: Amazon Web Services – Amazon Elastic Compute Cloud (EC2), Amazon Relational Database Service (RDS) and Microsoft Azure Platform (collectively, the ‘Authorized Cloud Environments’). This policy applies to these Oracle programs.”

The document that lists “these Oracle Programs” does not include RAC.

For more details, check Markus Michalewicz’s (Oracle RAC Product Manager) white paper entitled How to Use Oracle RAC in a Cloud? – A Support Question. The image above is slide 45/50 from the same paper.

And here is how to use Oracle Real Application Clusters (RAC) in the Oracle Database Cloud Service.

An interesting blog post by Brian Peasland entitled Oracle RAC on Third-Party Clouds concludes: “But if I were looking to move my company’s RAC database infrastructure to the cloud, I would seriously investigate the claims in this Oracle white paper before committing to the AWS solution. That last sentence is the entire point of this blog post.”

For business and mission critical applications, I would by all means recommend Oracle Real Application Clusters on Oracle Bare Metal Cloud.

We should not forget that something working and something being supported are two totally different things. Even for the Oracle Cloud, check the Known issues for Oracle Database Cloud Service document: updating the cloud tooling on a deployment hosting Oracle RAC requires a manual update of the Oracle Database Cloud Backup Module.

Conclusion: Oracle RAC can NOT be licensed (and consequently cannot be used) in the above-mentioned cloud environments, although such claims were published on the internet as recently as yesterday (July 10, 2017).