Dontcheff

The DBMS_CLOUD and UTL_SMTP packages in the Autonomous Database

In Autonomous, Cloud, DBA, PL/SQL on January 16, 2020 at 16:07

New Autonomous Database features are being added all the time. For now, ADB for shared infrastructure supports 18c while 19c can be used only in preview mode.

The preview period for ADB 19c ended yesterday: January 15th, 2020. From now on, it is no longer possible to provision new preview instances or clone existing instances to a preview instance. However, existing preview instances will remain available until January 30th, when the final termination process will happen. And now, we await the Oracle 20c preview version.

So, what else is new in the Autonomous Database (Shared Infrastructure):

The DBMS_CLOUD REST API functions provide a generic API that lets you call any REST API with the following supported cloud services:

Oracle Cloud Infrastructure
Amazon Web Services
Azure Cloud
Oracle Cloud Infrastructure Classic

DBMS_CLOUD supports GET, PUT, POST, HEAD and DELETE HTTP methods. The REST API method to be used for an HTTP request is typically documented in the Cloud REST API documentation.

Check also the summary of the DBMS_CLOUD_ADMIN package and especially the CREATE_DATABASE_LINK procedure as this is the supported way to create a database link in ADB.

To run DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK with a user other than ADMIN you need to grant EXECUTE and CREATE DATABASE LINK privileges to that user. For example, run the following command as ADMIN to grant privileges to JULIAN:

GRANT EXECUTE ON DBMS_CLOUD_ADMIN TO JULIAN;
GRANT CREATE DATABASE LINK TO JULIAN;
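With the privileges in place, the link itself is created through DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK. A sketch is below; the credential, hostname, service name and certificate DN are placeholder values, not from a real instance:

BEGIN
  -- credential holding the username/password of the target database user (hypothetical values)
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'DB_LINK_CRED',
    username        => 'JULIAN',
    password        => '************');
  DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK(
    db_link_name       => 'SALESLINK',
    hostname           => 'adb.example.oraclecloud.com',
    port               => '1522',
    service_name       => 'example_high.adb.oraclecloud.com',
    ssl_server_cert_dn => 'CN=adb.example.oraclecloud.com',
    credential_name    => 'DB_LINK_CRED');
END;
/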

Behind the curtains, Oracle runs C##CLOUD$SERVICE.DBMS_CLOUD_DBLINK_INTERNAL.

Note that packages like DBMS_CLOUD, DBMS_CLOUD_ADMIN, DBMS_CLOUD_CORE, DBMS_CLOUD_DBLINK, etc. are owned by C##CLOUD$SERVICE (which owns 23 packages in total), not by SYS!

In ADB, there are a few restrictions for some PL/SQL packages. Oracle has removed the UTL_TCP package altogether.

UTL_HTTP Restrictions:

– Connections through IP addresses are not allowed
– Only HTTPS connections are allowed (HTTP and HTTP_PROXY are disallowed)
– The only allowed ports are 443 and 8443
– The SET_WALLET and SET_AUTHENTICATION_FROM_WALLET APIs are disallowed
– The WALLET_PATH and WALLET_PASSWORD arguments for the CREATE_REQUEST_CONTEXT, REQUEST, and REQUEST_PIECES APIs are ignored
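Within those restrictions, a plain HTTPS GET from ADB still works as usual, since the wallet is managed by the service itself. A minimal sketch; the URL is just a placeholder:

DECLARE
  req  UTL_HTTP.REQ;
  resp UTL_HTTP.RESP;
  buf  VARCHAR2(32767);
BEGIN
  -- HTTPS only, and only ports 443/8443 are reachable from ADB
  req  := UTL_HTTP.BEGIN_REQUEST('https://objectstorage.eu-frankfurt-1.oraclecloud.com');
  resp := UTL_HTTP.GET_RESPONSE(req);
  BEGIN
    LOOP
      UTL_HTTP.READ_TEXT(resp, buf, 32767);
      DBMS_OUTPUT.PUT_LINE(buf);
    END LOOP;
  EXCEPTION
    WHEN UTL_HTTP.END_OF_BODY THEN
      UTL_HTTP.END_RESPONSE(resp);
  END;
END;
/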

UTL_SMTP Restrictions:

– The only supported email provider is Oracle Cloud Infrastructure Email Delivery service
– Mail with an IP address in the host name is not allowed
– The only allowed ports are 25 and 587

DBMS_NETWORK_ACL_ADMIN Restrictions:

– Granting ACL privileges on IP addresses is not allowed
– The http_proxy and use_passwords ACL privileges are not allowed

This Oracle example uses DBMS_CLOUD.SEND_REQUEST to create and delete an Oracle Cloud Infrastructure Object Storage bucket.
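As a hedged sketch of that idea (the region, namespace, compartment OCID and credential name below are all placeholders), creating a bucket boils down to a POST against the Object Storage REST endpoint:

DECLARE
  resp DBMS_CLOUD_TYPES.RESP;
BEGIN
  resp := DBMS_CLOUD.SEND_REQUEST(
    credential_name => 'OCI_CRED',  -- created beforehand with DBMS_CLOUD.CREATE_CREDENTIAL
    uri    => 'https://objectstorage.eu-frankfurt-1.oraclecloud.com/n/mynamespace/b/',
    method => DBMS_CLOUD.METHOD_POST,
    body   => UTL_RAW.CAST_TO_RAW(
                JSON_OBJECT('name' VALUE 'mybucket',
                            'compartmentId' VALUE 'ocid1.compartment.oc1..example')));
  DBMS_OUTPUT.PUT_LINE(DBMS_CLOUD.GET_RESPONSE_STATUS_CODE(resp));
  DBMS_OUTPUT.PUT_LINE(DBMS_CLOUD.GET_RESPONSE_TEXT(resp));
END;
/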

Here are 2 new additions to the ADB feature list:

1. Access Control Lists Honored By ADB Built-in Tools

You can control and restrict access to your Autonomous Database by setting network access control lists (ACLs). When you provision your Autonomous Database you can either choose the database to be accessible from all IP addresses or you can restrict access to your database to a whitelisted set of clients. You can change your decision after provisioning and set or change the access rules.

Here are 3 links for additional details:

1. VCNs and Subnets for details on Virtual Cloud Networks (VCN).
2. Oracle Services: Service Gateway Access for details on setting up a Service Gateway.
3. Transit Routing: Private Access to Oracle Services for details on Transit Routing.

2. Send Emails from ADB using UTL_SMTP

There are 4 steps you must follow in order to send an email from ADB:

1. Configure Email Delivery Service
2. Allow SMTP Access for ADMIN via an Access Control Entry (ACE)
3. Create a PL/SQL Procedure to Send Email
4. Send a Test Email
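Steps 2 and 3 above can be sketched as follows. This is a simplified illustration only: the SMTP endpoint, approved sender and recipient are placeholders, and a production call against Email Delivery also needs STARTTLS and SMTP AUTH with the service credentials:

-- Step 2: as ADMIN, open SMTP access for the user via an ACE (hypothetical host)
BEGIN
  DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
    host       => 'smtp.email.eu-frankfurt-1.oci.oraclecloud.com',
    lower_port => 587,
    upper_port => 587,
    ace        => xs$ace_type(privilege_list => xs$name_list('SMTP'),
                              principal_name => 'ADMIN',
                              principal_type => xs_acl.ptype_db));
END;
/

-- Step 3: a minimal send (AUTH omitted for brevity)
DECLARE
  c UTL_SMTP.CONNECTION;
BEGIN
  c := UTL_SMTP.OPEN_CONNECTION('smtp.email.eu-frankfurt-1.oci.oraclecloud.com', 587);
  UTL_SMTP.HELO(c, 'example.com');
  UTL_SMTP.MAIL(c, 'approved.sender@example.com');
  UTL_SMTP.RCPT(c, 'recipient@example.com');
  UTL_SMTP.OPEN_DATA(c);
  UTL_SMTP.WRITE_DATA(c, 'Subject: Test from ADB' || CHR(13) || CHR(10) || CHR(13) || CHR(10));
  UTL_SMTP.WRITE_DATA(c, 'Hello from UTL_SMTP in ADB.');
  UTL_SMTP.CLOSE_DATA(c);
  UTL_SMTP.QUIT(c);
END;
/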

Check How to Send an Email using UTL_SMTP in Autonomous Database by Can Tuzla for all the details and examples.

At the end, note that now we have the “Next Maintenance” field which shows the date and time for the upcoming maintenance window. All ADB instances are automatically assigned to a maintenance window and different instances can have different maintenance windows.

Database Innovation and Automation: the DBA employment after 2020

In Autonomous, DBA on December 13, 2019 at 09:57

DBA employment is expected to grow by 11% by 2026:

According to the Bureau of Labor Statistics, employment of database administrators (DBAs) is projected to grow 9 percent from 2018 to 2028, faster than the average for all occupations. Growth in this occupation will be driven by the increased data needs of companies in all sectors of the economy. Database administrators will be needed to organize and present data in a way that makes it easy for analysts and other stakeholders to understand.

As you can see from the DBA job outlook article, the employment of DBAs in the computer systems design and related services industry is projected to grow 21 percent from 2018 to 2028.

In a new study from Intellipaat, the top IT skills required for a high-flying career are Big Data, Data Science, Business Intelligence and ETL, Salesforce, SAP, Oracle Developer and Administrator, Cloud Computing, Artificial Intelligence, Python Developers, and Blockchain Developers. Oracle DBA/Developer is #6 on the list – followed by Cloud Computing and Artificial Intelligence.

Most of us have been witnessing this strange phenomenon in the IT world where anything successful must have been decided from the top and pushed down. In reality, most innovative concepts are implemented by teams at lower levels in organizations, like the DBA teams, and afterwards successfully proven to be among the best decisions taken within the company. Example: automation. This is nothing new, but now it has become more fashionable than ever.

Database innovation in IT is often perceived as jumping from one database brand to another or creating/adding new features into a new database brand. Although that might be true to a certain extent, database innovation is a long, incremental process taking years of research, testing and a series of software improvements.

Autonomous databases are an evolution of the database software, part of the progress. Just as the cost-based optimizer did not radically change the way we write SQL statements, automation will not (and does not) radically change the way DBAs work on a daily basis.

In less than 7 minutes, Connor McDonald answers the most typical DBA question nowadays about the autonomous future: does this mean an autonomous database will make me and my DBA career redundant?

It is impossible to predict what is going to happen; everything is speculation. Just have a look at something that was written on dba-oracle.com about 15 years ago. Oracle 17-3d did not introduce the time dimension to database management, allowing three-dimensional data representation. And as of 2019, there are no computers on the market that are inherently 128-bit machines.

And how many times can you find the word Cloud in the article above: “Oracle 2020: A Glimpse Into the Future of Database Management”?

These ten articles cover the subject in depth:

Autonomous Database: A DBA’s New Best Friend?
2019 Data Management Predictions: Oracle Autonomous Database is in your future
The DBA’s Role and Oracle Autonomous Database
Every DBA Needs to Understand the Impacts of Oracle Autonomous Database
Oracle’s next chapter: The Autonomous Database and the DBA
Will automated databases kill the DBA position?
A Veteran DBA’s Insights On Oracle’s New Autonomous Data Warehouse
The Autonomous Database Wants to Eat Your Lunch (and maybe your job)
Oracle: the autonomous database drives the ‘evolution’ of the DBA
Oracle Autonomous Database features free DBAs from routine tasks

Bottom line: DBAs will still be needed in the autonomous future for (at least) these tasks (in no particular order):

– database recovery
– data security
– user management
– data migration between environments
– capacity planning and sizing
– database provisioning
– scaling, starting, stopping the db services
– monitoring
– performance tuning
– automate more and more .. and more
– migrate databases from non-Oracle brands to ADB 🙂

“The long-term future of database administrators could be at risk if every enterprise adopts the Oracle 18c autonomous database”: very, very long-term indeed!

Once more and finally – it all comes from people, not from AI and software – let me quote LJE: “What is Oracle? A bunch of people. And all of our products were just ideas in the heads of those people – ideas that people typed into a computer, tested, and that turned out to be the best idea for a database or for a programming language.” – Larry Ellison

Take your time! Linear thinking based on a limited short-term perspective does not drive database innovation. In reality, innovation is a set of small steps that eventually lead up to a big deal that changes the game. Like the Autonomous Database.

Memoptimized Rowstore Fast Ingest in Oracle Database 19c

In Data, Database tuning, Databases, DBA, Oracle database, Oracle internals on November 25, 2019 at 15:53

“A filmmaker should never assume he’s superior to his subject. I often find that even the simplest topic remains an enigma. The best film portraits not only evoke that enigma but ingest it in a process that renders what’s invisible visible.” – Damian Pettigrew, a documentary SME

The same statement above holds for database SMEs using the new fast ingest process, because their data might at some point be neither visible nor invisible. From a DBA point of view, I mean…

Memoptimized Rowstore fast ingest is a new feature of Oracle 19c and is all about high-performance data streaming. Here is an example from Oracle which is ultra fast: arguably 25 million inserts per second on a two-socket server:

Memoptimized Rowstore was introduced in Oracle 18c and here are a few good blog posts on the subject:

Oracle Database 18c: Introduction to Memoptimized RowStore (Memory of a Free Festival) by Richard Foote
MemOptimized RowStore in Oracle Database 18c by Philipp Salvisberg

The fast ingest functionality of Memoptimized Rowstore enables fast data inserts into an Oracle Database from applications that ingest small but high volume transactions with almost no transactional overhead. The inserted data is temporarily buffered in the large pool before writing it to disk in bulk in a deferred, asynchronous manner.

Memoptimized Rowstore provides the following two functionalities:

Fast ingest:
– Fast ingest optimizes the processing of high-frequency, single-row data inserts into a database
– Fast ingest uses the large pool for buffering the inserts before writing them to disk, so as to improve data insert performance

Fast lookup:
– Fast lookup enables fast retrieval of data for high-frequency queries
– Fast lookup uses a separate memory area in the SGA called the memoptimize pool for buffering the data queried from tables
– To use fast lookup, you must allocate appropriate memory to the memoptimize pool using the MEMOPTIMIZE_POOL_SIZE initialization parameter
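Sizing the memoptimize pool might look like this; the 2G value is an arbitrary example, and since the parameter is static, an instance restart is required:

ALTER SYSTEM SET MEMOPTIMIZE_POOL_SIZE = 2G SCOPE = SPFILE;
-- restart the instance for the static parameter to take effect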

The memoptimize pool contains two parts: (1) the memoptimize buffer area which occupies 75% of the memoptimize pool and (2) a hash index that is a non-persistent segment data structure having several units containing a number of hash buckets. The hash index occupies 25% of the memoptimize pool.

Memoptimize buffer area

There are two steps for using fast ingest for inserting data:

1. Enable a table for fast ingest: specify the MEMOPTIMIZE FOR WRITE clause in the CREATE TABLE or ALTER TABLE statement: Example on how to enable a table for Fast Ingest

2. Specify a hint for using fast ingest for an insert operation by specifying the MEMOPTIMIZE_WRITE hint in the INSERT statement: Example on specifying a hint for using Fast Ingest for data inserts
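A minimal sketch of the two steps, using a hypothetical table:

-- Step 1: enable the table for fast ingest
CREATE TABLE sensor_readings (
  sensor_id NUMBER,
  reading   NUMBER,
  ts        TIMESTAMP
) MEMOPTIMIZE FOR WRITE;

-- Step 2: the hint routes the insert through the large pool buffer
INSERT /*+ MEMOPTIMIZE_WRITE */ INTO sensor_readings
VALUES (1, 36.6, SYSTIMESTAMP);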

Note that a table can be configured for using both fast ingest and fast lookup but these 4 points should be always considered:

– Because fast ingest buffers data in the large pool, there is a possibility of data loss in the event of a system failure. To avoid data loss, a client must keep a local copy of the data after performing inserts, so that it can replay the inserts in the event of a system failure before the data is written to disk. A client can use the DBMS_MEMOPTIMIZE package subprograms to track the durability of the inserts. After inserts are written to disk, a client can destroy its local copy of the inserted data.

Use the GET_APPLY_HWM_SEQID function to find the low high-water mark of sequence numbers of the records that have been successfully written to disk across all the sessions:

select DBMS_MEMOPTIMIZE.GET_APPLY_HWM_SEQID from dual;

Similarly, use the GET_WRITE_HWM_SEQID function to return the high-water mark sequence number of the record that has been written to the large pool for the current session:

select DBMS_MEMOPTIMIZE.GET_WRITE_HWM_SEQID from dual;

– Queries do not read data from the large pool, hence data inserted using fast ingest cannot be queried until it is written to disk.

If you need to flush all the fast ingest data from the large pool to disk for the current session, just run:

exec DBMS_MEMOPTIMIZE.WRITE_END;

– Parent-child transactions must be synchronized to avoid errors. For example, foreign key inserts and updates of rows inserted into the large pool can return errors, if the parent data is not yet written to disk.

– Index operations are supported by fast ingest similarly to regular inserts. However, for fast ingest, the database performs index operations while writing data to disk, not while writing data into the large pool.

You can disable a table for fast ingest by specifying the NO MEMOPTIMIZE FOR WRITE clause in the ALTER TABLE statement. You can view the fast ingest data in the large pool using the V$MEMOPTIMIZE_WRITE_AREA view. You can also view and control the fast ingest data in the large pool using the subprograms of the packages DBMS_MEMOPTIMIZE and DBMS_MEMOPTIMIZE_ADMIN.

The DBMS_MEMOPTIMIZE_ADMIN package has only one procedure, WRITES_FLUSH, which does not accept any input or output parameters and flushes the fast ingest data of all sessions from the large pool to disk:

exec DBMS_MEMOPTIMIZE_ADMIN.WRITES_FLUSH;

Here is one more example from Oracle:

As you can see, you can lose data using this new technique. Use it with caution unless you can afford the loss or can re-insert the data.

If you need to remove or populate fast lookup data for a certain table from the memoptimize pool, use DBMS_MEMOPTIMIZE.DROP_OBJECT and DBMS_MEMOPTIMIZE.POPULATE respectively.
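A short sketch of both calls, assuming a hypothetical table JULIAN.SENSORS enabled for fast lookup:

-- load the table's data into the memoptimize pool
exec DBMS_MEMOPTIMIZE.POPULATE('JULIAN', 'SENSORS');

-- remove the table's data from the memoptimize pool
exec DBMS_MEMOPTIMIZE.DROP_OBJECT('JULIAN', 'SENSORS');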

Check MemOptimized RowStore in Oracle Database 19c by Philipp Salvisberg for a good performance comparison between PL/SQL and Java Thin.