Dontcheff


On Kafka, JSON, PL/SQL and Advanced Queuing in Oracle Database 20c

In Autonomous, Cloud, Data, DBA, New features, PL/SQL on June 1, 2020 at 05:50

Oracle Corp. starts a new fiscal year today (June 1st, 2020), as the Oracle Database slowly moves towards version 20c (still in preview mode only), which comes with almost 500 init.ora parameters and 5326 hidden/underscore parameters. One of the new 20c parameters is called kafka_config_file. But let us first take a step back and see its connection with Advanced Queuing.


Advanced Queuing is available in all editions of Oracle database, including XE. Since Oracle Advanced Queuing is implemented in database tables, all the operational benefits of high availability, scalability, and reliability are applicable to queue data.

Advanced Queuing can be accessed through several interfaces: PL/SQL, Visual Basic, Java, Java Message Service, JDBC, ODP.NET, OCI (do not read Oracle Cloud Infrastructure – think of the C language), etc.

Using PL/SQL to access Oracle Advanced Queuing is probably the most common method: using the PL/SQL packages DBMS_AQADM and DBMS_AQ.
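
Here is a minimal, self-contained sketch of that flow (the queue and payload names below are made up for illustration): DBMS_AQADM creates and starts a queue, DBMS_AQ enqueues a message into it:

BEGIN
  DBMS_AQADM.CREATE_QUEUE_TABLE(
    queue_table        => 'demo_qt',
    queue_payload_type => 'RAW');
  DBMS_AQADM.CREATE_QUEUE(
    queue_name  => 'demo_q',
    queue_table => 'demo_qt');
  DBMS_AQADM.START_QUEUE('demo_q');
END;
/

DECLARE
  eopt  DBMS_AQ.ENQUEUE_OPTIONS_T;
  mprop DBMS_AQ.MESSAGE_PROPERTIES_T;
  msgid RAW(16);
BEGIN
  DBMS_AQ.ENQUEUE(
    queue_name         => 'demo_q',
    enqueue_options    => eopt,
    message_properties => mprop,
    payload            => UTL_RAW.CAST_TO_RAW('hello from AQ'),
    msgid              => msgid);
  COMMIT;
END;
/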

Reading all that, one might think of Kafka. Kafka is a distributed, partitioned, replicated commit log service providing the functionality of a messaging system, but with a unique design. The aim here is not to position Kafka and Oracle AQ against each other but to show what is new in Oracle Database 20c that brings Kafka and Oracle together, and what is important from a DBA point of view.

Todd Sharp explained recently how to use Kafka Connect with Oracle Streaming Service and Autonomous Database. Let us now see what is new in Oracle Database 20c:

Oracle Database 20c introduces Transactional Event Queues (TEQ), which are partitioned message queues that combine the best of messaging, streaming, direct messages, and publish/subscribe. TEQ operates at scale in the Oracle database: it provides transactional event streaming and runs in the database at a scale of tens to hundreds of billions of messages per day on 2-node to 8-node Oracle RAC databases, both on-premises and in the cloud. TEQ has Kafka client compatibility, which means Kafka producers and consumers can use TEQ in the Oracle database instead of a Kafka broker. Check Advanced Queuing in 20c for more details.

1. Advanced Queuing: Kafka Java Client for Transactional Event Queues

Kafka Java Client for Transactional Event Queues (TEQ) enables Kafka application compatibility with Oracle database. This provides easy migration of Kafka applications to TEQ.

You do not have to manage a separate Kafka infrastructure, and this new feature simplifies event-driven application architectures with an Oracle converged database that now includes events data. Starting from Oracle Database 20c, Kafka Java APIs can connect to the Oracle database server and use Transactional Event Queues (TEQ) as a messaging platform. Developers can migrate an existing Java application that uses Kafka to the Oracle database. A client-side library allows Kafka applications to connect to the Oracle database instead of a Kafka cluster and use the TEQ messaging platform transparently.

Two levels of configuration are required to migrate a Kafka application to the TEQ messaging platform:

– Database level configuration
– Application level configuration

The Kafka application needs to set certain properties which allow the OKafka library to locate the Oracle Database. This is analogous to how a Kafka application provides ZooKeeper information. These connection properties can be set in the following two ways:

– using database user and password provided in plain text
– using JDBC wallet

The following are the prerequisites for configuring and running Kafka Java client for TEQ in an Oracle Database. Create a database user. Grant the following user privileges:

grant connect, resource to user;
grant execute on dbms_aq to user;
grant execute on dbms_aqadm to user;
grant execute on dbms_aqin to user;
grant execute on dbms_aqjms to user;
grant select_catalog_role to user;

Next, set the correct database init.ora parameter to use TEQ:

streams_pool_size=512M

Set the local listener too:

LOCAL_LISTENER= (ADDRESS=(PROTOCOL=TCP)(HOST= )(PORT=))

2. Advanced Queuing Support for JSON Data Type and PL/SQL

Oracle Database Advanced Queuing now supports the JSON data type.

Many client applications and micro-services which use Advanced Queuing for messaging have better performance if they use the JSON data type to handle JavaScript Object Notation (JSON) messages.
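
For example, a queue with a JSON payload can be created like this (a sketch based on the new 20c support; the queue and table names are made up):

BEGIN
  DBMS_AQADM.CREATE_QUEUE_TABLE(
    queue_table        => 'json_msg_qt',
    queue_payload_type => 'JSON');   -- the new JSON data type as the payload type
  DBMS_AQADM.CREATE_QUEUE('json_msg_q', 'json_msg_qt');
  DBMS_AQADM.START_QUEUE('json_msg_q');
END;
/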

In this aspect, PUBLIC is granted EXECUTE privilege on all these types:

AQ$_AGENT, AQ$_AGENT_LIST_T, AQ$_DESCRIPTOR, AQ$_NTFN_DESCRIPTOR, AQ$_NTFN_MSGID_ARRAY, AQ$_POST_INFO, AQ$_POST_INFO_LIST, AQ$_PURGE_OPTIONS_T, AQ$_RECIPIENT_LIST_T,
AQ$_REG_INFO, AQ$_REG_INFO_LIST, AQ$_SUBSCRIBER_LIST_T, DEQUEUE_OPTIONS_T, ENQUEUE_OPTIONS_T, QUEUE_PROPS_T, SEEK_INPUT_T, SEEK_OUTPUT_T, SYS.MSG_PROP_T, MESSAGE_PROPERTIES_T, MESSAGE_PROPERTIES_ARRAY_T, MSGID_ARRAY_T

Regarding the new features of PL/SQL in 20c, check first the interesting example of Steven Feuerstein on extension of loop iterators.

Check the New Features in Release 20c for Oracle Database PL/SQL for more details on PL/SQL extended iterators, PL/SQL qualified expressions enhancements, SQL macros, the new JSON data type and the new pragma SUPPRESSES_WARNING_6009.
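
For instance, stepped range iteration is one of the new iterator controls (a small sketch):

BEGIN
  FOR i IN 1 .. 10 BY 2 LOOP     -- stepped iteration, new in 20c
    DBMS_OUTPUT.PUT_LINE(i);     -- prints 1, 3, 5, 7, 9
  END LOOP;
END;
/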

3. Advanced Queuing: PL/SQL Enqueue and Dequeue Support

The following features are new in this release:

– Kafka Java Client for Transactional Event Queues (TEQ), which enables Kafka application compatibility with Oracle Database, thus providing easy migration of Kafka applications to TEQ
– PL/SQL Enqueue and Dequeue Support for JMS Payload and non-JMS Payload in Transactional Event Queues
– Transactional Event Queues for Performance and Scalability
– Simplified Metadata and Schema in Transactional Event Queues
– Support for Message Retention and Seekable Subscribers
– Advanced Queuing Support for JSON Data Type

For all the details, check the Changes in Oracle Database Advanced Queuing Release 20c.

In terms of performance and scalability, Oracle Transactional Event Queues have their Queue tables partitioned in 20c into multiple Event Streams which are distributed across multiple RAC nodes for high throughput messaging and streaming of events.

Remember that in 10.1, AQ was integrated into Oracle Streams and thus Oracle AQ was called “Oracle Streams AQ”. But in 12.1, Oracle Streams got deprecated and AQ was again named just “Oracle AQ”.

And finally: here is the 546-page Transactional Event Queues and Advanced Queuing User’s Guide along with a few good additional articles:

Oracle + Kafka = Better Architecture by Jonathan Wallace
Streaming data from Oracle into Kafka by Robin Moffatt
Extending Oracle Streaming with Kafka Compatibility by Somnath Lahiri

Defragmentation of Large Objects / SecureFiles LOBs in Oracle Database 20c

In Database tuning, DBA, New features, Oracle database on May 25, 2020 at 06:09

In Oracle 20c, the init.ora parameter DB_SECUREFILE defaults to PREFERRED. This means that all large objects / LOBs are created as SecureFiles unless BASICFILE is explicitly specified in the LOB storage clause or the tablespace is an MSSM (= Manual Segment Space Management) tablespace.

Until 19c, only defragmentation of BasicFile LOBs was possible. Tim Hall showed that, in order to shrink a SecureFile LOB, you need to move it.

SecureFiles defragmentation in 20c provides online defragmentation of allocated and freed space in SecureFiles segments for all types of SecureFiles LOBs – compressed, deduplicated and encrypted.

In an Oracle 20.2.0 database, I have a table called BLOGS. Let us turn on compression, deduplication and encryption:

 
SQL> ALTER TABLE blogs MODIFY LOB (blog_text) 
(COMPRESS HIGH ENCRYPT DEDUPLICATE); 

Table altered.

Defragmentation can be done automatically by a background process, and the segment advisor can estimate the fragmentation levels and how much space can be saved. Note that some temp segment space is needed to hold intermediate results.

Let us try to defragment the SecureFiles LOB column BLOG_TEXT and use the segment space advisor to see what is forecast vs. reality.

In order to defragment the SecureFiles LOBs, we need to use the shrink_clause. The shrink_clause lets us (in general) manually shrink space in a table, index-organized table or its overflow segment, index, partition, subpartition, LOB segment, materialized view, or materialized view log. This clause is valid only for segments in tablespaces with automatic segment management.

By default, Oracle Database compacts the segment, adjusts the high water mark, and releases the recuperated space immediately. Compacting the segment requires row movement. Therefore, you must enable row movement for the object you want to shrink before specifying this clause. Further, if your application has any rowid-based triggers, you should disable them before issuing this clause.

 
SQL> ALTER TABLE blogs ENABLE ROW MOVEMENT;

Table altered.

With release 20c, you can invoke the shrink_clause on SecureFile LOB segments in two ways:

1. Target a specific LOB column and all its partitions:

ALTER TABLE blogs MODIFY LOB (blog_text) (SHRINK SPACE);

2. Cascade the shrink operation for all the LOB columns of the table and its partitions:

ALTER TABLE blogs SHRINK SPACE CASCADE;

Do not attempt to enable row movement for an index-organized table before specifying the shrink_clause. The ROWID of an index-organized table is its primary key, which never changes. Therefore, row movement is neither relevant nor valid for IOTs.

There are 2 important options/keywords with the shrink space syntax:

COMPACT: If you specify COMPACT, then Oracle only defragments the segment space and compacts the table rows for subsequent release. This means Oracle will recover space but will not amend the high water mark (HWM), so the space is not released immediately.

CASCADE: If you specify CASCADE, then Oracle performs the same operations on all dependent objects of the table, including secondary indexes on index-organized tables. This means Oracle will recover space for the object and all dependent objects.
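
For example, a two-phase approach using the table-level form:

ALTER TABLE blogs SHRINK SPACE COMPACT;  -- defragment and compact only; the HWM stays
ALTER TABLE blogs SHRINK SPACE;          -- later (e.g. off-peak): release the space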

Let us follow the steps for the BLOGS table:

1. Run the Segment Space Advisor:

 
DECLARE
  seg_task_id   number;
  seg_task_name varchar2(100);
  seg_task_desc varchar2(500);
BEGIN
  seg_task_name := 'SecureFileDefragmentation1';
  seg_task_desc := 'Manual Segment Advisor Run for table BLOGS';
  dbms_advisor.create_task (
    advisor_name => 'Segment Advisor',
    task_id      => seg_task_id,
    task_name    => seg_task_name,
    task_desc    => seg_task_desc);
END;
/

DECLARE
  obj_id number;
BEGIN
  dbms_advisor.create_object (
    task_name   => 'SecureFileDefragmentation1',
    object_type => 'TABLE',
    attr1       => 'JULIAN',
    attr2       => 'BLOGS',
    attr3       => NULL,
    attr4       => NULL,
    attr5       => NULL,
    object_id   => obj_id);
END;
/

BEGIN
  dbms_advisor.set_task_parameter(
    task_name => 'SecureFileDefragmentation1',
    parameter => 'recommend_all',
    value     => 'TRUE');
END;
/

exec dbms_advisor.execute_task('SecureFileDefragmentation1');

2. Let us check the findings from DBA_ADVISOR_FINDINGS:

 
SQL> select message,more_info from dba_advisor_findings where task_name='SecureFileDefragmentation1';

MESSAGE
-------
MORE_INFO
-----------------------------------------------------------------------
The free space in the object is less than 10MB.
Allocated Space:15728640: Used Space:4013928: Reclaimable Space :180376:

3. Now let us defragment the SecureFile LOB:

 
SQL> select bytes from dba_segments where segment_name='BLOGS';

     BYTES
----------
  15728640

SQL> ALTER TABLE blogs MODIFY LOB (blog_text) (SHRINK SPACE);

Table altered.

SQL> select bytes from dba_segments where segment_name='BLOGS';

     BYTES
----------
  14745600

SQL> ALTER TABLE blogs SHRINK SPACE CASCADE;

Table altered.

SQL> select bytes from dba_segments where segment_name='BLOGS';

     BYTES
----------
   1048576

As you can see, with the simple operations above, we managed to decrease the size of the BLOGS table 15 times: from 15728640 to 1048576 bytes.

The shrink_clause is subject to the following restrictions:
– You cannot combine this clause with any other clauses in the same ALTER TABLE statement.
– You cannot specify this clause for a cluster, a clustered table, or any object with a LONG column
– Segment shrink is not supported for tables with function-based indexes, domain indexes, or bitmap join indexes
– With this clause, Oracle does not shrink mapping tables of index-organized tables, even if you specify CASCADE
– You can specify the shrink_clause for a table with advanced row compression enabled (ROW STORE COMPRESS ADVANCED) but you cannot specify this clause for a table with any other type of table compression enabled
– You cannot shrink a table that is the master table of an ON COMMIT materialized view
– Rowid materialized views must be rebuilt after the shrink operation.

Automatic Zone Maps in the Oracle Database

In Data, Database tuning, Databases, DBA, New features, Oracle database on May 18, 2020 at 06:18

A zone is a set of contiguous data blocks on disk.

A zone map is an index-like structure built on a table and stores information about the zones of that table.

There are 2 major differences between indexes and zone maps:

– A zone map stores information per zone instead of per row which makes it much more compact than an index
– A zone map is not actively managed the way an index is kept in sync with the DML on the table

Zone maps are closer as a concept to Exadata’s storage indexes than to B-tree indexes.

Before going into how Automatic Zone Maps work in Oracle 20c, let me explain the concept with an example. Consider a small table containing basic information about some relational databases from db-engines.com (rank, score, initial and last release, cloud based):

The RDBMS_BRANDS segment has 6 data blocks with 2 rows per block:

Let us now create the zonemap on the RDBMS_BRANDS table (on 3 columns only):

 
SQL> CREATE MATERIALIZED ZONEMAP rdbms_zmap ON 
rdbms_brands (db_engines_rank, db_engines_score, initial_release); 

Materialized zonemap RDBMS_ZMAP created.

We have now 3 zones and each zone contains two blocks and stores the minimum and maximum of db_engines_rank, db_engines_score and initial_release:

Next, let us run a query returning all RDBMS brands with ranking score more than 1000:
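
The query was along these lines (a sketch using the columns from the zone map definition above):

SELECT * FROM rdbms_brands WHERE db_engines_score > 1000;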

Looking at the execution plan below, we see that Oracle is scanning only Zone 1, as the maximum score in all other zones is smaller than 1000:

That is how zone maps work … but what is new in Oracle 20c?

We can now enable automatic creation and maintenance of basic zone maps, for both partitioned and non-partitioned tables. For now, though, creation is not available for join zone maps, IOTs, external tables or temporary tables!

In 20c, you can use the new package DBMS_AUTO_ZONEMAP to enable Automatic Zone Maps in the database. Automatic zone map creation is turned off by default.

These four values are allowed for the parameter AUTO_ZONEMAP_MODE:

ON: Turns on the auto zone map feature completely, for both foreground and background zone map creation and maintenance
OFF: Turns off the auto zone map feature completely, for both foreground and background zone map creation and maintenance
FOREGROUND: Turns it on only for foreground zone map creation and maintenance
BACKGROUND: Turns it on only for background zone map creation and maintenance
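
For example, to turn the feature on completely (a minimal sketch; CONFIGURE is the setter in DBMS_AUTO_ZONEMAP):

EXEC DBMS_AUTO_ZONEMAP.CONFIGURE('AUTO_ZONEMAP_MODE', 'ON')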

You may use the ACTIVITY_REPORT function to view auto zone map activity for a given time window. Note that the background job that performs automatic zone map processing starts once per hour and each run may last up to three hours.

 
SET LONG 100000
SELECT dbms_auto_zonemap.activity_report() report FROM dual;

These 2 zone-map-related views show the most important information DBAs need:

DBA_ZONEMAPS displays all zone maps in the database
DBA_ZONEMAP_MEASURES displays the measures for all zone maps in the database

On a final note: Automatic Zone Maps are available for now only on Exadata and require the Oracle Partitioning option.

Automatic Index Optimization in Oracle Database 20c

In Database tuning, Databases, DBA, New features, Oracle database on May 11, 2020 at 05:47

Oracle database 20c came with 138 new features and one of them is directly related to the indexes in the database. It is called Automatic Index Optimization.

For some DBAs, indexes in the database do not need extra care, and they do not bother much about rebuilding, compressing, coalescing or shrinking them. In the past, I administered a 24×7 mission-critical database of 5TB in which indexes occupied 4.5TB. Correct: the real data was less than 500GB, granted that you also have SYSTEM, SYSAUX, etc.

Automatic Index Optimization does not mean optimization of the Automatic Indexes in the database, but rather that Index Optimization is now an automatic process. Here is how it works and what you have to do in order to enable it and make it work.

First, in order to implement an ILM strategy, you have to enable Heat Maps in the database to track data access and modification. You can enable and disable heat map tracking at the system or session level with the ALTER SYSTEM or ALTER SESSION statement using the HEAT_MAP init.ora parameter, for example:

SQL> alter system set HEAT_MAP = ON;

Like ADO for data segments, Automatic Index Optimization works via ILM on indexes by enabling policies that automatically optimize indexes by compressing, shrinking and rebuilding them. Oracle is using the existing Heat Maps and collects activity statistics on the indexes.

So next, add ADO policies for indexes in order to enable their compression and optimization using the existing Automatic Data Optimization (ADO) framework. You can do it for newly created indexes as well as for already existing indexes.

There are 2 options:

– ADD POLICY TIER in order to perform the operation on, say, a low-cost tier 2 tablespace when tier 1 storage is under space pressure
– ADD POLICY OPTIMIZE in order to kick off the process after a certain number of days passes without accessing the index

Here are few examples:

SQL> create index julian.price_idx ON julian.sales(price)
ILM ADD POLICY OPTIMIZE AFTER 31 DAYS OF NO MODIFICATION;

SQL> alter index julian.price_idx ILM ADD POLICY TIER TO BC_DATA;

SQL> alter index julian.price_idx
ILM ADD POLICY OPTIMIZE AFTER 3 DAYS OF NO ACCESS;

SQL> SELECT POLICY_NAME, POLICY_TYPE, ENABLED FROM DBA_ILMPOLICIES;

POLICY_NAME             POLICY_TYPE    ENA
---------------------   -------------  ---
P1                      DATA MOVEMENT  YES

Note that the Oracle documentation has the tier syntax wrong: instead of “ILM ADD POLICY SEGMENT TIER” use “ILM ADD POLICY TIER”.

The optimization process includes actions such as compressing, shrinking or rebuilding the indexes:

Compress: Compresses portions of the key values in an index segment (~3 times)
Shrink: Merges the contents of index blocks where possible to free blocks for reuse
Rebuild: Rebuilds an index to improve space usage and access speed

Notice that you cannot decide which of the 3 above to use. Oracle automatically determines which action is optimal for the index and implements that action as part of the optimization process.

But can we have Automatic Index optimization for Automatic Indexes and not have to rebuild them manually any longer? I did try indeed and here is the result:

SQL> alter index julian."SYS_AI_abrca2u9qmxt7"
ILM ADD POLICY OPTIMIZE AFTER 7 DAYS OF NO ACCESS;

Error starting at line : 17 in command -
alter index julian."SYS_AI_abrca2u9qmxt7"
ILM ADD POLICY OPTIMIZE AFTER 7 DAYS OF NO ACCESS
Error report -
ORA-65532: cannot alter or drop automatically created indexes

Clearly not possible but the error message is sort of misleading because actually you can alter and even drop automatic indexes:

SQL> alter index julian."SYS_AI_abrca2u9qmxt7" rebuild online;
Index altered.
SQL> alter index julian."SYS_AI_abrca2u9qmxt7" coalesce;
Index altered.
SQL> alter index julian."SYS_AI_abrca2u9qmxt7" shrink space;
Index altered.

How to drop the Auto Index? Just rebuild it into a new tablespace and run “drop tablespace … including contents and datafiles” – yes, it works.

Also, while administering ADO policies for indexes, you cannot manually disable these policies but you can delete an index policy. An ADO policy for indexes executes only one time. After the policy executes successfully, the policy is disabled and is not evaluated again.

SQL> alter index julian.price_idx ILM DELETE POLICY p1;

Moreover, such policies for indexes on partition level are not yet supported. The ADO policy is cascaded to all partitions. So, if we have hybrid tables in the Cloud, we cannot move local indexes automatically to object storage but it should work for global indexes. Note that we can use Automatic Data Optimization (ADO) policies with hybrid partitioned tables under some conditions.

Here are some other limitations:

– ADO does not perform checks for storage space in a target tablespace when using storage tiering
– ADO is not supported on materialized views
– ADO is not supported with index-organized tables or clusters
– ADO concurrency depends on the concurrency of the Oracle scheduler meaning if a policy job for ADO fails more than two times, then the job is marked disabled and the job must be manually enabled later

The feature is available in almost all flavors of the database, i.e., EE, Database Cloud Service, Exadata and ODA, but it requires the Oracle Advanced Compression option.

Tablespaces in Oracle Database 20c

In Databases, DBA, New features, Oracle database on April 8, 2020 at 08:52

Oracle 20c, released recently with 138 new features, is still only available from the cloud, but it brings one major change which DBAs can no longer avoid: using containers and pluggable databases. And this slightly changes the way DBAs look at tablespaces too.

But what is new in 20c about tablespaces? Now, DBAs can set the default tablespace encryption algorithm:

The new TABLESPACE_ENCRYPTION_DEFAULT_ALGORITHM dynamic init.ora parameter defines the default encryption algorithm for tablespace creation operations. The default value in 20.2.0 is AES128.

If you set TABLESPACE_ENCRYPTION_DEFAULT_ALGORITHM to say 3DES168, then all future tablespace creation operations will use 3DES168 as the default encryption algorithm. And this is both for offline and online tablespace encryption operations.

The supported values for the encryption algorithms are AES128, AES192, AES256, ARIA128, ARIA192, ARIA256, GOST256, SEED128 and 3DES168.
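
For example (a sketch; it assumes TDE with a configured keystore and Oracle-managed data files):

ALTER SYSTEM SET tablespace_encryption_default_algorithm = 'AES256' SCOPE = BOTH;

CREATE TABLESPACE enc_data
  DATAFILE SIZE 100M
  ENCRYPTION DEFAULT STORAGE (ENCRYPT);   -- no USING clause: picks up AES256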

The CREATE TABLESPACE command in 20c is not more complex than before. Tablespaces can be permanent, undo or temporary; bigfile (one data file or temp file with up to approximately 4 billion blocks; the maximum size is 128 terabytes (TB) for a tablespace with 32K blocks and 32TB for a tablespace with 8K blocks) or smallfile (a maximum of 1022 data files or temp files, each with a maximum of 4 million blocks).

The concept of tablespace sets is not new but not highly adopted.

But going back to tablespaces and PDBs. Here is the output from DBA_TABLESPACES run from a PDB as SYS:

And here is the the output from DBA_TABLESPACES run from the root container as SYS:

Of course, DBA_TABLESPACES is no longer the place where the DBA will find all tablespaces of the database. The correct one is CDB_TABLESPACES, but if you run “select * from cdb_tablespaces;” as SYS from within the PDB, you will still get the same output.

However, run it as SYS from the root container and you get what you are looking for:

From 20c onwards, DBAs should get used to the new CDB_ views. And then, fewer and fewer people will ask the question “Why do we have 2 tablespaces with the same name?”….

Here is something else which is new in 20c: Oracle automatically shrinks an Oracle ACFS file system based on policy providing there is enough free storage in the volume. ACFS stands for Automatic Storage Management Cluster File System.

In 20c, you can now use File Group templates by customizing and setting default file group properties for automatically created file groups. Without file group templates, if you wanted to change properties for an automatically created file group, you would have to change the properties manually after the associated files were created, which triggers an unnecessary rebalance. The file group templates feature provides a much better option.

Also new is SecureFiles defragmentation: it provides online defragmentation of allocated and freed space in SecureFiles segments for all types of SecureFiles LOBs – compressed, deduplicated and encrypted. Defragmentation can be done automatically by a background process, and the segment advisor can estimate the fragmentation levels and how much space can be saved. Defragmentation is done mostly in-place, with some temp segment space needed to hold intermediate results.

Next, there are 2 new features of Data Pump related to tablespaces:

– Starting with Oracle Database 20c, Data Pump improves transportable tablespace metadata operations with parallelism. Oracle Data Pump now supports parallel export and import operations for Transportable Tablespace (TTS) metadata. This is the information that associates the tablespace data files with the target database in a TTS migration. Parallelism improves TTS export and import performance, especially when there are millions of database objects in the data files, including tables, indexes, partitions and subpartitions.

– Starting with Oracle Database 20c, Data Pump can resume transportable tablespace export and import jobs that are stopped. Due to errors or other problems, you can find that transportable tablespace export or import jobs are stopped; Oracle Data Pump’s capacity to resume these stopped jobs helps to save you time and makes the system more available.

Another new 20c feature is Automatic Zone Maps, which are created and maintained for any user table without any customer intervention. Zone maps allow the pruning of blocks and partitions based on the predicates in the queries. Automatic zone maps are maintained for direct loads, and for any other DML operation they are maintained and refreshed incrementally and periodically in the background. They improve the performance of queries transparently and automatically, without the need for any user action. However, this feature is available only on Exadata.

And finally, something which I find interesting in the good way: starting with Oracle Database 20c, you can specify an expression as the value for some initialization parameters, which enables the database to automatically adjust the parameter value in response to environmental changes. This is especially useful in Oracle Autonomous Database environments.

Example: CPU_COUNT = $SYSTEM_CPU/4 or SESSIONS = MAX(200, PROCESSES * 1.5)

Oracle 20.2.0 new features for DBAs

In Autonomous, Cloud, Databases, New features, Oracle database on February 25, 2020 at 09:38

Oracle 20c is now available in preview mode from the Oracle Public Cloud. Preview version databases are not intended for production use and have limited functionality.

After testing some of the new features of 20c, here is what might be of interest for most DBAs:

1. Provisioning the database is relatively simple. You need an SSH public key, a VCN (Virtual Cloud Network) and a client subnet in your compartment. A hostname prefix is also mandatory. Note that the administrator password must be 9 to 30 characters and contain at least 2 uppercase, 2 lowercase, 2 special, and 2 numeric characters. The special characters must be _, #, or -. You cannot bypass that. The shape type must be “Virtual Machine” and the SMS (Storage Management Software) must be “Logical Volume Manager”.

Once provisioned, you need the IP address, which is under “Nodes” (bottom left, just under “Resources”). As you can see, it is no longer under “General Information”:

And … you can stop the database only manually (as of today, February 25th, 2020) – there is no button for stopping the node. Just terminate it in case you are on a tight budget.

2. Blockchain tables

Blockchain tables are append-only tables in which only insert operations are allowed. Deleting rows is either prohibited or restricted based on time. Rows in a blockchain table are made tamper-resistant by special sequencing & chaining algorithms. Users can verify that rows have not been tampered with. Have a look at an example I used to create a blockchain table:
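
It was something along these lines (a sketch; the table and columns are made up, the retention clauses are the interesting part):

CREATE BLOCKCHAIN TABLE bank_ledger (
  account_no NUMBER,
  amount     NUMBER)
NO DROP UNTIL 31 DAYS IDLE
NO DELETE UNTIL 16 DAYS AFTER INSERT
HASHING USING "SHA2_512" VERSION "v1";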

Here is how to manage blockchain tables.

Most important is to specify the Retention Period for the Blockchain Table by using the NO DROP clause in the CREATE BLOCKCHAIN TABLE statement. Also specify the Retention Period for Rows in the Blockchain Table: use the NO DELETE clause in a CREATE BLOCKCHAIN TABLE statement.

3. A multitenant container database is the only supported architecture in Oracle Database 20c. While the documentation is being revised, legacy terminology may persist. In most cases, “database” and “non-CDB” refer to a CDB or PDB, depending on context. In some contexts, such as upgrades, “non-CDB” refers to a non-CDB from a previous release. Check the changes in Oracle 20c for Oracle Multitenant.

4. Data Pump

– Oracle Data Pump 20c can include and exclude objects in the same export or import operation meaning that now, Oracle Data Pump commands can include both INCLUDE and EXCLUDE parameters in the same operation. By enabling greater specificity about what is being migrated, this enhancement makes it easier to migrate to Oracle Cloud, or to another on-premises Oracle Database.

Note: when you include both parameters in a command, Oracle Data Pump processes the INCLUDE parameter first, and includes all objects identified by the parameter. Then it processes the EXCLUDE parameters, eliminating the excluded objects from the included set. Here is an example of including only 2 tables but excluding all indexes except the PKs (real use case: you want to enable Oracle Auto Indexing in ADB and while importing the data you need to drop all indexes except the PKs):

expdp julian SCHEMAS=JULIAN DUMPFILE=julian.dmp REUSE_DUMPFILES=YES 
INCLUDE=TABLE:\"IN \(\'CLIENTS\',\'SALES\'\)\" 
EXCLUDE=INDEX:\"LIKE \'IDX\%\'\"

– Oracle Data Pump 20c resumes transportable tablespace export and import jobs that are stopped

– Oracle Data Pump 20c supports parallel export and import operations for Transportable Tablespace (TTS) metadata

– Oracle Data Pump 20c supports optional index compression on imports, including for Oracle Autonomous Database

– Oracle Data Pump 20c supports adding, changing and eliminating table compression

– Oracle Database 20c supports index compression as well by introducing a new TRANSFORM parameter clause, INDEX_COMPRESSION_CLAUSE

– Oracle Data Pump 20c can perform exports from Oracle Autonomous Database into dump files in a cloud object store

– Starting with Oracle Database 20c, a checksum is added to the dump file – you can use the checksum to help confirm that the file is valid after a transfer to or from the object store (and also after saving dump files on-premises) and that it has had no accidental or malicious changes

5. Small improvements and changes in 20c:

– The IGNORECASE parameter for the orapwd file is desupported – all newly created password files are case-sensitive

– A new dynamic view called V$PMEM_FILESTORE displays information about Persistent Memory Filestores

– Certain predefined columns of unified audit records from common unified audit policies can be written to the UNIX SYSLOG destination – to enable this feature, you set UNIFIED_AUDIT_COMMON_SYSTEMLOG, a new CDB level init.ora parameter (added in Oracle 19c (19.3) but not included in the References)

– You now can set the TABLESPACE_ENCRYPTION_DEFAULT_ALGORITHM dynamic parameter to define the default encryption algorithm for tablespace creation operations

– Database Vault: a DV_OWNER common user in the CDB root can prevent local users from creating Oracle Database Vault controls on common objects in a PDB

– AutoShrink: Oracle Automatic Storage Management Cluster File System (Oracle ACFS) automatic shrinking automatically shrinks an Oracle ACFS file system based on policy, providing there is enough free storage in the volume

– The Oracle Grid Infrastructure feature Automatic Storage Management Cluster File System (Oracle ACFS) is desupported with Microsoft Windows

– An Oracle Database installation configures all Oracle Database homes in read-only mode by default

– Traditional auditing is deprecated in Oracle Database 20c, thus Oracle recommends that we use unified auditing, which enables selective and more effective auditing inside Oracle Database

– The package DBMS_OBFUSCATION_TOOLKIT is desupported, and replaced with DBMS_CRYPTO

– Older encryption and hashing algorithms contained within DBMS_CRYPTO are deprecated

– The Large Object (LOB) features DBMS_LOB.LOADFROMFILE and LOB buffering are desupported

– You can configure database clients to maintain multiple Secure Sockets Layer (SSL) sessions using different SSL certificates

– In DBMS_ROLLING.set_parameter(), there is a new parameter called BLOCK_UNSUPPORTED – by default, BLOCK_UNSUPPORTED is set to 1 [YES], indicating that operations performed on tables that are unsupported by Transient Logical Standby will be blocked on the primary database. If set to 0 [OFF], then the DBMS_ROLLING package does not block operations on unsupported tables

– In order to coordinate with the Oracle GoldenGate feature OGG EXTRACT, the LOGICAL_REPLICATION clause now provides support for automatic extract of tables

– Two new views, DBA_OGG_AUTO_CAPTURED_TABLES, and USER_OGG_AUTO_CAPTURED_TABLES, provide you with tools to query which tables are enabled for Oracle GoldenGate automatic capture

6. Finally, here are the 7 new init.ora parameters in Oracle 20.2.0:

DBNEST_ENABLE (DbNest is OS resource and file system isolation for PDBs)
DBNEST_PDB_FS_CONF
DIAGNOSTICS_CONTROL (meant to be used with Oracle Support)
MAX_IDLE_BLOCKER_TIME (maximum number of minutes before a blocking session is automatically terminated)
PMEM_FILESTORE
TABLESPACE_ENCRYPTION_DEFAULT_ALGORITHM
OPTIMIZER_REAL_TIME_STATISTICS

What is Preview Versions for Autonomous Database

In Autonomous, Cloud, DBA, Init.ora, New features, Oracle database on October 18, 2019 at 12:14

A first preview is not exactly a pleasant experience for producers, directors and actors. In the IT world, it is exactly the opposite: DBAs can use the new version of the database before it becomes mainstream.

The “Preview Versions for Autonomous Database” has been available since mid-October 2019. The current preview version of Autonomous Database is Oracle Database 19c. More precisely, the version is 19.4.0.

Databases provisioned or cloned with a preview version display the end date of the preview period at the top of the Autonomous Database details page in the Oracle Cloud Infrastructure Console.

Note that the preview for my database below will end on December 2, 2019. All resources created with the preview will be terminated at the end of the preview period.

The basic details of preview versions for Autonomous Database can be found in the Oracle documentation.

Connecting to the preview database is just like connecting to a standard ADB:

Here are some of the default settings in Oracle Database Preview Version 19.4.0 for Autonomous Database:

Real-Time Statistics: Enabled by default
– Oracle automatically gathers online statistics during conventional DML operations
– By gathering some statistics automatically during DML operations, the database augments the statistics gathered by DBMS_STATS
– Fresh statistics enable the optimizer to produce more optimal plans
– EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO; forces the database to write optimizer statistics to the data dictionary
– NO_GATHER_OPTIMIZER_STATISTICS prevents the collection of real-time statistics

High-Frequency Automatic Optimizer Statistics Collection: Enabled by default
– High-Frequency Automatic Optimizer Statistics Collection complements the standard automatic statistics collection job
– By default, the high-frequency statistics collection occurs every 15 minutes 
– Turn on/off: EXEC DBMS_STATS.SET_GLOBAL_PREFS('AUTO_TASK_STATUS','ON');
– Change the default period: EXEC DBMS_STATS.SET_GLOBAL_PREFS('AUTO_TASK_INTERVAL','300');
– Monitor: DBA_AUTO_STAT_EXECUTIONS
– The high-frequency automatic statistics task will not start during the maintenance window

SQL Quarantine: Disabled and not supported in Autonomous Database
– Check ORA-56955: quarantined plan used for details on SQL Quarantine

High-Frequency SQL Plan Management Evolve Advisor Task: Enabled by default
– The high-frequency task complements the standard SPM Evolve Advisor task that runs during the maintenance window
– Turn on/off: EXEC DBMS_SPM.CONFIGURE('AUTO_SPM_EVOLVE_TASK','ON');
– Monitor: DBA_SQL_PLAN_BASELINES

Automatic Indexing: Disabled by default
– Check Automatic Indexing in 19c for more details on Automatic Indexing

New 19c spfile parameters
– All 6 new init.ora parameters in 19c are set to their default values:

ADG_REDIRECT_DML
DATA_GUARD_MAX_IO_TIME
DATA_GUARD_MAX_LONGIO_TIME
ENABLE_IMC_WITH_MIRA
LOB_SIGNATURE_ENABLE
MAX_DATAPUMP_PARALLEL_PER_JOB

Preview is not free of charge. It provides you with test/development capacity and capabilities, so it is charged identically to production environments. Also, preview is not available for free service instances.

Preview instances are fully supported but should not be used for production purposes.

At the end, you may want to check out the new static data dictionary views in 19c:

• ALL_TRIGGERS_AE, DBA_TRIGGERS_AE, and USER_TRIGGERS_AE
• DBA_AUTO_INDEX_CONFIG
• DBA_AUTO_STAT_EXECUTIONS
• DBA_PDB_SNAPSHOTFILE
• DBA_RAT_CAPTURE_SCHEMA_INFO
• DBA_REGISTRY_BACKPORTS
• DBA_SQL_QUARANTINE

and also the new dynamic performance views in 19c:

• GV$AQ_PARTITION_STATS and V$AQ_PARTITION_STATS
• GV$ASM_ACFSAUTORESIZE and V$ASM_ACFSAUTORESIZE
• GV$ASM_DBCLONE_INFO and V$ASM_DBCLONE_INFO
• GV$MEMOPTIMIZE_WRITE_AREA and V$MEMOPTIMIZE_WRITE_AREA
• GV$SQL_TESTCASES and V$SQL_TESTCASES

ORA-56955: quarantined plan used

In DBA, New features, Oracle database, Security and auditing, SQL on May 29, 2019 at 13:15

“The way that worms and viruses spread on the Internet is not that different from the way they spread in the real world, and the way you quarantine them is not that different, either” – David Ulevitch

And now, in Oracle 19c, you can do the same with SQL:

 SQL> SELECT client, COUNT(*) OVER (PARTITION BY price) CLIENT_COUNT 
 FROM sales WHERE price IN (2, 91984);
  *
 ERROR at line 1:
 ORA-56955: quarantined plan used
 Elapsed: 00:00:00.00

The error “ORA-56955: quarantined plan used” is new in the Oracle database; it is raised when the executed SQL fulfills the quarantine conditions.

It is important to differentiate Oracle SQL Quarantines in 19c from Oracle Object Quarantines in 18c. There is also the concept of Offload Quarantines.

1. A good way to start understanding what SQL quarantines are about is to watch the following short video from Rich Niemiec:

2. Check also page 23 of the Optimizer In Oracle Database 19c white paper. “The new Oracle Database 19c feature SQL Quarantine can be used to eliminate the overhead of runaway queries. When DBRM detects a SQL statement is exceeding a resource or run-time limit, the SQL execution plan used by the statement is quarantined. If the SQL statement is executed again and it is using the same SQL execution plan then it will be terminated immediately. This can significantly reduce the amount of system resource that would otherwise be wasted.”

Think of SQL Quarantine as a way to prevent unnecessary SQL from being run in the database, based, of course, on your own definition of unnecessary SQL. You can prevent the use of “bad” execution plans and stop queries from exhausting the database’s resources.

In the database, there might be SQL statements with high utilization of CPU and I/O: you can prevent them from being started at all, as once they are quarantined they are terminated prior to execution and no longer consume system resources.

Note that SQL quarantines work only in 19c on Exadata and DBCS/ExaCS. Check out the Database Licensing Information User Manual:

3. You can quarantine a statement based on:

– SQL_ID and one of its execution plans
– SQL_ID and all of its execution plans
– specific SQL_TEXT

You quarantine a statement in 2 steps:

(A) create a SQL Quarantine by using (for example) DBMS_SQLQ.CREATE_QUARANTINE_BY_SQL_ID
(B) add thresholds by using DBMS_SQLQ.ALTER_QUARANTINE
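
For example, a minimal sketch (the SQL_ID, plan hash value and threshold below are placeholders):

DECLARE
  qname VARCHAR2(128);
BEGIN
  -- (A) create a quarantine configuration for one plan of a statement
  qname := DBMS_SQLQ.CREATE_QUARANTINE_BY_SQL_ID(
             sql_id          => '8vu7s907prbgr',   -- placeholder SQL_ID
             plan_hash_value => 3724264953);       -- placeholder plan hash value
  -- (B) add a threshold: quarantine when CPU time exceeds 5 seconds
  DBMS_SQLQ.ALTER_QUARANTINE(
    quarantine_name => qname,
    parameter_name  => 'CPU_TIME',
    parameter_value => '5');
END;
/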

Here are some examples and some more.

4. For some interesting and non-documented stuff check the article by Mahmoud Hatem entitled Oracle 19c : The QUARANTINE hint.

For instance, it shows how you can test by setting “_exadata_feature_on”=true in order to get SQL QUARANTINE feature to work on a non-Exadata box.

The following parameters can affect a quarantine kick off:

CPU_TIME
ELAPSED_TIME
IO_MEGABYTES
IO_REQUESTS
IO_LOGICAL
PHV

There is also the special value called ALWAYS_QUARANTINE.

5. All details can be of course found in the SQL Quarantine documentation.

The following columns of the V$SQL and GV$SQL views show the quarantine information of execution plans of SQL statements:

– SQL_QUARANTINE: This column shows the name of the quarantine configuration for an execution plan of a SQL statement.
– AVOIDED_EXECUTIONS: This column shows the number of times an execution plan of a SQL statement was prevented from running after it was quarantined.

There is a new view in 19c called DBA_SQL_QUARANTINE which displays information about quarantine configurations.
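
A quick way to see both (a sketch):

SELECT sql_id, sql_quarantine, avoided_executions
FROM v$sql
WHERE sql_quarantine IS NOT NULL;

SELECT name, sql_text FROM dba_sql_quarantine;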

Good news also for admin users of the Autonomous Database: you have full access to the feature:

And note that a DBA can also transfer quarantine configurations from one database to another database using the DBMS_SQLQ package subprograms: CREATE_STGTAB_QUARANTINE, PACK_STGTAB_QUARANTINE, and UNPACK_STGTAB_QUARANTINE.

If you plan to visit Oracle OpenWorld this year (September 16-19, 2019), as of now, there are a couple of presentations on SQL Quarantine:

– Oracle Database 19c: SQL Tuning Using Plan Stability Methods SPM/SQL Quarantine: by Soumendra Paik, Senior Principal Technical Support Engineer, Oracle
– What’s New in Oracle Optimizer, by Nigel Bayliss, Senior Principal Product Manager, Oracle

Question: how do you quarantine a statement based on a sub-string of the query? Like, how can you quarantine statements starting with ‘select *‘?

What else besides Automatic Indexing is new in Oracle Database 19c?

In DBA, Init.ora, New features, Oracle database, Security and auditing, SQL on April 23, 2019 at 10:53

Automatic Indexing (AI) is probably the most important new feature of Oracle Database 19c, and AI is arguably one of the best examples of AI in the IT industry. But there is much more that came along with 19c.

Here is my choice of the top 10 least known (for now at least) new features of Oracle Database 19.2 (in no order whatsoever):

1. Security: if the database password file name or location has been changed and the metadata cache needs to be refreshed with the details of the updated database password file, you can do it simply with the following command:

SQL> alter system flush passwordfile_metadata_cache;

System altered.

In 19c, most of the Oracle Database supplied schema-only accounts now have their passwords removed to prevent users from authenticating to these accounts. But DBAs can still assign passwords to the default schema-only accounts.

2. SQL: the LISTAGG aggregate function now supports duplicate elimination by using the new DISTINCT keyword.
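
For example (a sketch assuming the HR sample schema):

SELECT department_id,
       LISTAGG(DISTINCT job_id, ', ') WITHIN GROUP (ORDER BY job_id) AS jobs
FROM hr.employees
GROUP BY department_id;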

There are also new bit-vector SQL operators that can be used to speed up COUNT DISTINCT operations within a SQL query.

3. Active Data Guard DML redirection: you can now run DML on an Active Data Guard standby database. When run on the standby side, the update is passed to the primary database where it is executed, and the resulting redo of the transaction will update the standby, after which control is returned to the application.
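
A sketch of enabling it instance-wide on the standby, via the ADG_REDIRECT_DML parameter (one of the six new parameters listed in the next item):

ALTER SYSTEM SET adg_redirect_dml = TRUE SCOPE = BOTH;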

4. There are 6 new init.ora parameters in 19c. DBAs can now tune the amount of Data Guard wait time by using DATA_GUARD_MAX_IO_TIME and DATA_GUARD_MAX_LONGIO_TIME. You can check the details for all six:

ADG_REDIRECT_DML
DATA_GUARD_MAX_IO_TIME
DATA_GUARD_MAX_LONGIO_TIME
ENABLE_IMC_WITH_MIRA
LOB_SIGNATURE_ENABLE
MAX_DATAPUMP_PARALLEL_PER_JOB

5. You can now create Materialized Views containing bitmap-based COUNT(DISTINCT) functions. This means that the MVs are based on SQL aggregate functions that use bitmap representation to express the computation of COUNT(DISTINCT) operations.

SQL> create materialized view MV_SALES as
select CLIENT, 
BITMAP_CONSTRUCT_AGG(BITMAP_BIT_POSITION(PRICE),'RAW') bm_price
from SALES
group by CLIENT,BITMAP_BUCKET_NUMBER(PRICE);  

Materialized view created.

6. Looks like there is now automatic resolution of SQL plan regressions: “SQL plan management searches for SQL statements in the Automatic Workload Repository (AWR). Prioritizing by highest load, it looks for alternative plans in all available sources, adding better-performing plans to the SQL plan baseline. Oracle Database also provides a plan comparison facility and improved hint reporting.”

7. Real-Time Statistics is also a new cool feature. “Oracle automatically gathers online statistics during conventional DML operations. Statistics can go stale between execution of DBMS_STATS statistics gathering jobs. By gathering some statistics automatically during DML operations, the database augments the statistics gathered by DBMS_STATS. Fresh statistics enable the optimizer to produce more optimal plans.”

8. Hybrid Partitioned Tables: now large portions of a table can reside in external partitions, for example in the Oracle Cloud. With this new feature, you can also easily move non-active partitions to external files, such as Oracle Data Pump files, for a cheaper storage solution. Hybrid partitioned tables support all existing external table types for external partitions: ORACLE_DATAPUMP, ORACLE_LOADER, ORACLE_HDFS, ORACLE_HIVE.
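
A minimal sketch of a hybrid range-partitioned table (it assumes a directory object sales_dir pointing at an existing CSV file):

CREATE TABLE sales_hybrid (
  sale_id   NUMBER,
  sale_year NUMBER)
EXTERNAL PARTITION ATTRIBUTES (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY sales_dir
  ACCESS PARAMETERS (FIELDS TERMINATED BY ',')
)
PARTITION BY RANGE (sale_year) (
  PARTITION p_2018 VALUES LESS THAN (2019)
    EXTERNAL LOCATION ('sales_2018.csv'),   -- cold data stays outside the database
  PARTITION p_now VALUES LESS THAN (MAXVALUE)
);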

9. Data Pump:

– Oracle Data Pump allows tablespaces to stay read-only during TTS import.
– Oracle Data Pump can work in test mode for transportable tablespaces.
– Oracle Data Pump supports resource usage limitations with the introduction of two new parameters: MAX_DATAPUMP_JOBS_PER_PDB and MAX_DATAPUMP_PARALLEL_PER_JOB.
– Data Pump no longer enables secure, password-protected roles by default. Beginning with 19c, you must explicitly enable password-protected roles for an individual export or import job. A new command-line parameter has been added, ENABLE_SECURE_ROLES=YES | NO that can be used to explicitly enable or disable these types of roles for an individual export or import job.
– The new Data Pump command-line parameter CREDENTIAL enables secure import into a managed service from dump files in the Oracle Object Store Service.

10. SQL Quarantine: “SQL statements that are terminated by Oracle Database Resource Manager due to their excessive consumption of CPU and I/O resources can be automatically quarantined. The execution plans associated with the terminated SQL statements are quarantined to prevent them from being executed again.”

Check out the new Oracle package DBMS_SQLQ – cool stuff!