
An example of the Oracle Autonomous Database Advisor

In Autonomous, DBA, Init.ora, New features, Oracle database on June 22, 2020 at 08:33

Moving the database to the Oracle Autonomous Cloud is a rather simple task, but how do you avoid possible issues after the migration? How do you know in advance which objects will not migrate to Autonomous at all, or which objects will be migrated with some changes? How do you get advice and guidelines on the migration to Autonomous beforehand?

Note from January 13th, 2022: Cloud Premigration Advisor Tool (CPAT) Analyzes Databases for Suitability of Cloud Migration (Doc ID 2758371.1) is the new replacement tool. The link below for the “Oracle Autonomous Database Schema Advisor” is not working and the note is missing.

The new tool called “Oracle Autonomous Database Schema Advisor” can give you the answers to these questions.

All the details are in MOS Doc ID 2462677.1 but here are the basics and an example in which I will be moving my 20c schema to ATP Dedicated.

First, you need to install the advisor, meaning run the script install_adb_advisor.sql (as sysdba), which you download from the MOS note above. The script will create a user, 7 tables, 4 indexes and a package. I have decided to call the user adb_advisor and the password will be tiger:

[oracle@julian ~]$ sqlplus "/ as sysdba"

SQL*Plus: Release 20.0.0.0.0 - Production on Thu Jun 18 07:24:31 2020
Version 20.2.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Connected to:
Oracle Database 20c EE High Perf Release 20.0.0.0.0 - Production
Version 20.2.0.0.0

SQL> @install_adb_advisor.sql adb_advisor tiger

...

Index created.
Package created.
Package body created.
No errors.
SQL>

Second, you run the advisor as the user created in step one. What you need to specify is (1) the database schemas and (2) the ADB type you will be using in the Autonomous Cloud. You can list a maximum of 30 schemas in a single advisor run, or just use schemas=>'ALL'. And these are the 4 options for the ADB type:

– ATP for Autonomous Transaction Processing (Serverless)
– ADW for Autonomous Data Warehouse (Serverless)
– ATPD for Autonomous Transaction Processing (Dedicated)
– ADWD for Autonomous Data Warehouse (Dedicated)

Do not try to run the advisor as SYSDBA, or you will get all sorts of errors (I found that out the hard way), something like:

ORA-06598: insufficient INHERIT PRIVILEGES privilege
PLS-00201: identifier ‘ADB_ADVISOR.REPORT’ must be declared
ORA-00942: table or view does not exist

If the package body fails to compile with a loop index error and ORA-00942, just run GRANT SELECT ON DBA_XML_TABLES TO ADB_ADVISOR; and recompile.

The Advisor will generate a report with the following information:

– The counts of discovered objects and a summary of migration status
– Objects that cannot be migrated due to the restrictions and lockdowns imposed by the Autonomous Database on certain data types, database options and SQL statements
– The objects that will migrate with modifications that are automatically made during the import process or upon the execution of object creation DDL
– Informational section containing certain best practice recommendations and guidance

Here is the output after I ran ADB_ADVISOR.REPORT:

SET SERVEROUTPUT ON FORMAT WRAPPED
SET LINES 200
exec ADB_ADVISOR.REPORT(schemas=>'ALL', adb_type=>'ATPD');
PL/SQL procedure successfully completed.

==========================================================================================
== ATPD SCHEMA MIGRATION REPORT FOR PDBADMIN,pdbuser,JULIAN
==========================================================================================

ADB Advisor Version   : 19.3.0.0.2
Instance Name         : DB0223
Database Name         : DB0223
Host Name             : julian
Database Version      : 20.0.0.0.0
Pluggable Database    : NOVOPDB1
Schemas Analyzed      : PDBADMIN,PDBUSER,JULIAN
Analyzing for         : Autonomous Transaction Processing (Dedicated)
Report Start date/time: 18-JUN-2020 07:47

------------------------------------------------------------------------------------------
-- SECTION 1: SUMMARY
------------------------------------------------------------------------------------------

                                           Objects         Objects         Total
                           Object          Will Not        Will Migrate    Objects
Object Type                Count           Migrate         With Changes    Will Migrate
-------------------------  --------------  --------------  --------------  --------------
CONSTRAINT                 8               0               0               8
INDEX                      4               0               0               4
INDEX PARTITION            4               0               0               4
TABLE                      8               0               2               8
TABLE PARTITION            4               0               0               4
User Objects in SYS        87              87              0               0
User Objects in SYSTEM     0               0               0               0             

------------------------------------------------------------------------------------------
-- SECTION 2: FOLLOWING OBJECTS WILL NOT MIGRATE
------------------------------------------------------------------------------------------

1) User-defined objects in SYS schema will not migrate (Count=87):
------------------------------------------------------------------
Note: User-defined objects were detected in SYS schema. Consider moving them out of SYS prior to migration.

Owner      Object Type                    Object Name
---------- ------------------------------ ----------------------------------------
SYS        INDEX                          SYS_C008044
SYS        TABLE                          AQ_SRVNTFN_TABLE_1
....
SYS        EVALUATION CONTEXT             AQ$_KUPC$DATAPUMP_QUETAB_1_V
...
SYS        MATERIALIZED ZONEMAP           RDBMS_ZMAP
SYS        JOB                            ORA$_ATSK_AUTOZM
SYS        TABLE                          WRI$_ADV_OBJSPACE_TREND_DATA
SYS        TABLE                          WRI$_ADV_OBJSPACE_CHROW_DATA
SYS        TABLE                          WRI$_ADV_SEGADV_SEGROW                  

------------------------------------------------------------------------------------------
-- SECTION 3: FOLLOWING OBJECTS WILL MIGRATE WITH CHANGES
------------------------------------------------------------------------------------------

1) Index Organized table will be created as regular table (Count=1):
--------------------------------------------------------------------
Note: Index Organized tables are disallowed in ADB. When you create
an IOT in ADB, the table gets created as non-IOT (regular table).
When the Data Pump export file contains tables with IOT, use
'dwcs_cvt_iots:y' transformation at import time to transform IOTs
as regular tables.

JULIAN.PTIOT1                    

2) INMEMORY Tables will be created as NO INMEMORY Tables (Count=1):
-------------------------------------------------------------------
Note: Database In-Memory is not enabled in ADB. All In-Memory
tables and partitions will be created with NO INMEMORY clause.

JULIAN.SALES                     

------------------------------------------------------------------------------------------
-- SECTION 4: MIGRATION ADDITIONAL INFO
------------------------------------------------------------------------------------------

1) Database Parameters are detected as modified in the current database but can't be modified in the ADB (Count=13):
--------------------------------------------------------------------------------------------------------------------
Note: The following init parameters are modified in your database
that you would not be able to modify in ADB. Please refer to the
Oracle Autonomous Database documentation on the parameters that
you are allowed to modify.

_exadata_feature_on
_gc_policy_time
_gc_undo_affinity
_zonemap_auto_processing
clonedb
control_management_pack_access
db_block_checksum
enable_ddl_logging
encrypt_new_tablespaces
local_listener
tablespace_encryption_default_algorithm
tde_configuration
use_large_pages                                                                 

------------------------------------------------------------------------------------------
-- END OF REPORT
------------------------------------------------------------------------------------------
Report End Datetime   : 18-JUN-2020 07:47
Report Runtime        : +000000000 03:00:02.125000000
------------------------------------------------------------------------------------------

So, based on the report above, I am facing the following 4 issues:

– I have user-defined objects in the SYS schema. They will not be migrated.
– My IOT table cannot be migrated as-is. It will be migrated as a normal, regular table.
– My INMEMORY table cannot be migrated as-is. It will be created as a NO INMEMORY table.
– I cannot use some of my init.ora parameters in ADB-D.

My recommendation is to run the advisor before you migrate your schemas to the Autonomous Cloud. It will minimize your post-migration hassle.

A few additional comments about the advisor:

By default, the output gets truncated when the number of rows exceeds the maximum limit set in the Advisor package. You can reset the number of rows by running the following command prior to running the Advisor.

SQL> exec ADB_ADVISOR.setmaxrows(500);

You have to reset the max rows every time you run the Advisor as the setting is not saved in the database.

If you want to query the data dictionary for the output, you may try:

SELECT a.owner, a.object_type, a.object_name, c.*
FROM adb_advisor_objects_tmp a,
adb_advisor_rejects_tmp b,
adb_advisor_codes_tmp c
WHERE a.id = b.id AND b.cd = c.cd;

Final note: you can run the advisor on any database version from 10g to 20c!

Do not just Simplify with Databases: Automate and Innovate!

In Autonomous, Database tuning, Databases, DB2 database, DBA, PostgreSQL, SQL Server on June 15, 2020 at 09:46

“Automation applied to an inefficient operation will magnify the inefficiency” – Bill Gates
“Innovation distinguishes between a leader and a follower!” – Steve Jobs

In the database industry, simplification, automation and innovation have sort of become buzzwords – we hear them more often in meetings and see them in PowerPoint presentations than in real-world implementations. Databases are being patched, upgraded and migrated, but how often are automation and innovation part of the process?

A recent article entitled “After The Pandemic, Don’t Simply Automate. Innovate” quoted an Accenture report that surveyed 1,500 C-suite executives in 16 industries: 76% of respondents said they were struggling to scale the technology across their businesses. The numbers tell the story: a full 84% of C-suite executives believe they must leverage artificial intelligence (AI) to achieve their growth objectives.

A survey of database managers and administrators shows the benefits they expect from automation, plus what they think about DBAs’ current workloads. Here are the highlights:

– 63% expect faster innovation from database automation
– 62% of the data pros expect that data will grow 25% or more annually at their enterprise over the next three years
– 66% said the DBA/data team’s overwork or scheduling issues are the biggest challenge in database deployments
– The majority of companies have not made significant automation gains in any key data management processes
– 69% of the DBAs think automation will make their job more business-centric.

So, why aren’t we seeing those innovative automations being implemented in full? Is it the DBA mindset, a business that lags behind the technology and the innovation, lack of time and resources, or lack of knowledge of and trust in the new?

We can try to look into the issue from the DBA perspective. Recently, Jeff Erickson listed the 3 Can’t Miss Ways to Turn Your DBA Skills into Gold. Here is my paraphrased version of these 3 ways:

1. Expand DBA skills towards app development – Python, PL/SQL, etc.
2. DBAs should get into the data science game – look more into Oracle Autonomous Database which comes with Oracle ML, a rich library of machine learning algorithms
3. Convert the “A” in DBA from Administrator to Architect – look into big data and data architecture – big data is the driver for innovation in databases

At the end of the article he quoted Kerry Osborne about dealing with vast and growing data volumes: “That added complexity and scale should sound like a huge opportunity knocking”.

An interesting article by Duncan Harvey, entitled What is the innovation, automation dynamic?, pointed out how autonomous systems are finally giving people the headspace they need to innovate at speed in the digital era.

So, is there a Database Automation Guide? Yes indeed!

Get a cloud boost with Oracle Autonomous!

Have a look at all the Automatic features that came after Oracle 9i. Oracle 7 and 8 were probably the first releases that added automatic functionality to the database, but the boost started with 9i. 20c is not skipping on that part either.

Of all database brands, the Oracle Database is arguably the most advanced in terms of tools, automation features and innovation capabilities. From 11.2 until Oracle 20c alone, there are 133 new features purely focused on automation. But I still see databases where even old enough automation features (such as Automatic SQL Tuning) have not yet been implemented.

Let us look into a few database brands:

SQL Server:

There are dozens of Automatic features and properties embedded into SQL Server:

Automatic Tuning can do a lot of things, such as automated performance tuning of databases, automated verification of performance gains, and automated rollback and self-correction. More importantly, it is the only database besides Oracle to have Automatic Indexing. It is worth checking how automatic index management works in the Azure SQL database: Azure SQL Database analyzes your workload, identifies the queries that could be executed faster if you create an index, identifies indexes that are not used for a longer period of time, and identifies duplicated indexes in the database.

Db2:

IBM Db2 also has a relatively good list of Automatic features including self-tuning memory (single-partition databases only), Automatic storage, Automatic database backups, Automatic reorganization and Automatic statistics collection. The Db2 documentation claims that the Db2® autonomic computing environment is self-configuring, self-healing, self-optimizing, and self-protecting, but this is far behind what Oracle ADB has to offer. Tuning SQL is more difficult in Db2 than in Oracle, and the single reason for that is the tools and features offered by both systems.

Redshift:

Amazon’s Redshift has a good set of features. Looking at what automation is like, we find enough for a relatively new database brand (Redshift, not PostgreSQL).

In terms of storage, it is mostly about how Redshift automatically takes care of data formatting and data movement into S3 and how with managed storage, capacity is added automatically to support workloads up to 8PB of compressed data.

There is of course Automated provisioning and Automated backups. Plus, Automatic workload management (WLM) uses machine learning to dynamically manage memory and concurrency, helping maximize query throughput.

Also, Amazon Redshift continuously monitors the health of the cluster, and automatically re-replicates data from failed drives and replaces nodes as necessary for fault tolerance.

Very much like Oracle RAC and Oracle Exadata, as of May 2020, Amazon Redshift now leverages Bloom filters to enable early and effective data filtering. Redshift automatically determines what queries are suitable for leveraging Bloom filters at query runtime.

Recently, Amazon Redshift also introduced ATS (= Automatic Table Sort), an automated alternative to Vacuum Sort. Automatic table sort complements Automatic Vacuum Delete and Automatic Analyze, and together these capabilities fully automate table maintenance. Automatic table sort is now enabled by default on Redshift tables where a sort key is specified.

Amazon Redshift automatically takes incremental snapshots (backups) of your data every 8 hours or 5 GB per node of data change. You now get more information and control over a snapshot including the ability to control the automatic snapshot’s schedule.

In terms of SQL tuning, Amazon Redshift now automatically and elastically scales query processing power to provide consistently fast performance for hundreds of concurrent queries. The database automatically shuts down Concurrency Scaling resources to save you cost. Also, Amazon Redshift now updates table statistics by running ANALYZE automatically but that is no news for Oracle database users. Amazon Redshift improves query performance by automatically moving read and write queries to the next matching queue without restarting the moved queries.

Snowflake:

Looking into the Top 10 cool things about Snowflake, we see the fact that the JSON documents are stored in a table and optimized automatically in the background for MPP and columnar access. There is Automatic Encryption of Data and Automatic Query Optimization. No Tuning! Really cool as “It is all handled “auto-magically” via a dynamic query optimization engine in our cloud services layer. So, no indexes, no need to figure out partitions and partition keys, no need to pre-shard any data for even distribution, and no need to remember to update statistics.”

I like the “auto-magical” part 🙂

My interest was caught by the deep dive “Revolutionary features of Snowflake that sets it apart — A Deep dive”: all I found on automation was “Automatic scale down”. No comment here.

PostgreSQL:

If we look into the PostgreSQL Feature Matrix, we see only one feature about automation: Automatic plan invalidation. There is a mention of WAL Buffer auto-tuning and Autovacuum.

For databases such as MySQL, PostgreSQL and MongoDB, check this article: Why is Database Automation Important?

A recent discussion Oracle vs. PostgreSQL basically tells it all – in case you have the patience to read it – here are both sides:

Pro-Oracle: “Comparing Postgres with Oracle is a bit like comparing a rubber duck you might buy your three year old, with a 300000 ton super tanker. Do they both float? Yeah, but that’s about the only similarity.”
Pro-PostgreSQL: “So bottom line, PostgreSQL beats Oracle by far in my opinion, at least as far as installing it and sizes are concerned.”

Looking into the Features and Benefits of EDB Postgres Cloud Database Service, we see Automated Notifications, Automated Monitoring, Automated Backups and Automated Replica Failover.

Oracle:

Of course, Oracle ADB (Autonomous Database) has all automation features embedded into the service, but not all applications are certified for or fit ADB. Not yet at least. Still, I think that most automation features can be easily implemented (into a non-ADB database) with some prior testing, and the benefits are enormous. The list of automation features is long – here are the major ones:

Features in 19c and 20c which are worth implementing (at least looking into and testing) are:

– Automatic Indexing
– Automatic SQL Plan Management
– Automatic Index Optimization
– Automatic Zone Maps

Automation and innovation go hand in hand. Database innovation in an enterprise involves using the database technology in new ways in order to create a more efficient organization and improve the alignment between technology and business: completing the same DBA work with smaller teams by implementing automation features in the database to reduce storage and labor costs and increase system uptime and performance.

Bottom line for DBAs: regardless of the database, there is room for innovation in every database brand!

Six new features of Data Pump in Oracle Database 20c

In Cloud, Data, Databases, DBA, New features, Oracle database, Oracle utilities on June 8, 2020 at 10:55

In Oracle version 20c, the Data Pump export job accepts 54 different parameters and the import job can have up to 59 different parameters. Most of us who used the old-fashioned traditional export/import tool until Oracle 10g (and even afterwards) probably knew the exp/imp syntax by heart and never needed to copy-paste the command from a text file. With more than 50 parameters in 20c, I am sure it is a different story.

There are 6 new features of Data Pump in Oracle 20c and here are their short descriptions:

1. Oracle Data Pump checksums support for cloud migrations

The new parameter CHECKSUM can be used for validation, as a checksum is now added to the dump file. Oracle Data Pump can be, and mostly is, used for migrating data from on-premises Oracle Databases into the Oracle Public Cloud. You can use the checksum to help confirm that the file is valid after a transfer to or from the Oracle Cloud object store. Checksums are also useful after saving dump files to on-prem locations, for confirming that the dump files have no accidental or malicious changes.

Here is an example of how to use the new CHECKSUM parameter – I am exporting my own schema. The parameter accepts 2 values:

YES – Oracle calculates a file checksum for each dump file in the export dump file set
NO – Oracle does not calculate file checksums

The checksum is calculated at the end of the data pump job. Notice that I am not specifying below the other new parameter CHECKSUM_ALGORITHM, thus using its default value SHA256. The other optional secure hash algorithms are: CRC32, SHA384 and SHA512.

 
SQL> !expdp julian DIRECTORY=data_pump_dir DUMPFILE=jmd.dmp CHECKSUM=YES

Export: Release 20.0.0.0.0 - Production on Sat May 30 07:20:55 2020
Version 20.2.0.0.0

Copyright (c) 1982, 2020, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 20c EE High Perf Release 20.0.0.0.0 - Production
Starting "JULIAN"."SYS_EXPORT_SCHEMA_01":  julian/********@//localhost:1521/novopdb1.laika7.laika.oraclevcn.com DIRECTORY=data_pump_dir DUMPFILE=jmd.dmp CHECKSUM=YES
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_INDEX/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PASSWORD_HISTORY
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
. . exported "JULIAN"."BLOGS"                            9.983 MB   73991 rows
. . exported "JULIAN"."SALES"                            14.38 MB  295240 rows
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . exported "JULIAN"."RDBMS_BRANDS"                     7.726 KB      12 rows
. . exported "JULIAN"."CLIENTS"                          6.007 KB       2 rows
. . exported "JULIAN"."T"                                5.476 KB       1 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set.
Master table "JULIAN"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
Generating checksums for dump file set
******************************************************************************
Dump file set for JULIAN.SYS_EXPORT_SCHEMA_01 is:
  /u01/app/oracle/admin/ORCL/dpdump/9D45645C541E0B7FE0530206F40AE9E9/jmd.dmp
Job "JULIAN"."SYS_EXPORT_SCHEMA_01" successfully completed at Sat May 30 07:24:59 2020 elapsed 0 00:04:01

You probably noticed the ORA-39173 warning at the end – it is because I did not specify an encryption password while exporting encrypted data. It is just a warning and not a real error.
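If you want a different digest than the default SHA256 mentioned above, you can also set CHECKSUM_ALGORITHM explicitly. Here is a minimal sketch (the dump file name is made up for illustration):

SQL> !expdp julian DIRECTORY=data_pump_dir DUMPFILE=jmd_sha512.dmp CHECKSUM=YES CHECKSUM_ALGORITHM=SHA512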

It goes without saying that COMPATIBLE must be set to at least 20.0.

2. Oracle Data Pump exports from Autonomous Database

Starting with Oracle Database 20c, Data Pump can perform exports from Oracle Autonomous Database into dump files in a cloud object store. Thus, now we can easily migrate data out from an Oracle Autonomous Database and import it into another location.

For how to create object store credentials check either Oracle Cloud : Autonomous Database (ADW or ATP) – Load Data from an Object Store (DBMS_CLOUD) by Tim Hall or DBMS_CLOUD Package – A Reference Guide by Christian Antognini.

What is new in 20c is the CREDENTIAL parameter, which enables the export to write dump files directly to object stores. The CREDENTIAL parameter changes how expdp interprets the text string in DUMPFILE. If the CREDENTIAL parameter is not specified, then the DUMPFILE parameter can specify an optional directory object and file name in directory-object-name:file-name format. If the CREDENTIAL parameter is used, then it provides authentication and authorization for expdp to write to one or more object storage URIs specified by DUMPFILE.
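As a minimal, hedged sketch of that prerequisite (the user name and auth token below are placeholders; DBMS_CLOUD is pre-installed in Autonomous Database), such a credential can be created with DBMS_CLOUD.CREATE_CREDENTIAL:

BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'JMD_OBJ_STORE_CRED',  -- the credential used in the expdp example below
    username        => 'oci_user',            -- placeholder: your cloud user name
    password        => 'my_auth_token');      -- placeholder: an auth token, not the console password
END;
/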

Here is an example assuming that we have already created the credential_name JMD_OBJ_STORE_CRED:

 
expdp julian DUMPFILE=https://swiftobjectstorage.eu-frankfurt-1.oraclecloud.com/v1/juliandon/data_pump/jmd.dmp 
CREDENTIAL=jmd_obj_store_cred

Need more information? The Oracle Cloud Infrastructure User Guide has “only” 5952 pages. Last week, it had 5919. Enjoy reading it 🙂 And by the time you are reading this blog post, there will be more than 5952 pages… I do not think I have ever seen a longer user guide in our industry!

3. Oracle Data Pump includes and excludes in the same operation

Oracle Data Pump 20c can include and exclude objects in the same export or import operation, meaning that Data Pump commands can now contain both the INCLUDE and the EXCLUDE parameter. By enabling greater specificity about what is being migrated, this enhancement makes it easier to migrate to Oracle Cloud, or to another on-premises Oracle Database.

Note: when you include both parameters in a command, Oracle Data Pump processes the INCLUDE parameter first, and includes all objects identified by the parameter. Then it processes the EXCLUDE parameters, eliminating the excluded objects from the included set. Here is an example of including only 2 tables (SALES and CLIENTS) but excluding all indexes except the PKs (real use case: you want to enable Oracle Auto Indexing in ADB and while importing the data you need to drop all indexes except the PKs):

 
expdp julian SCHEMAS=JULIAN DUMPFILE=julian.dmp REUSE_DUMPFILES=YES 
INCLUDE=TABLE:\"IN \(\'CLIENTS\',\'SALES\'\)\" 
EXCLUDE=INDEX:\"LIKE \'IDX\%\'\"

4. Oracle Data Pump parallelizes transportable tablespace metadata operations

Starting with Oracle Database 20c, Data Pump improves transportable tablespace metadata operations with parallelism: transportable tablespace exports and imports can now be done with degrees of parallelism greater than 1.

Parallelism higher than 1 improves TTS export and import performance, especially when there are really a lot of database objects in the data files, including tables, indexes, partitions and subpartitions. We will probably see the real benefit when exporting packaged application schemas from SAP, EBS, etc.
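As a hedged sketch only (the tablespace and file names are made up, and the tablespace must be made read-only before the export), a parallel transportable tablespace export could look like this:

expdp system/password DIRECTORY=data_pump_dir DUMPFILE=tts_%U.dmp TRANSPORT_TABLESPACES=sales_data TRANSPORT_FULL_CHECK=YES PARALLEL=4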

5. Oracle Data Pump provides optional index compression

In Oracle Database 20c, Data Pump supports optional index compression on import, including into the Autonomous Database, by introducing a new TRANSFORM parameter clause, INDEX_COMPRESSION_CLAUSE. Thus, you can control whether index compression is performed during import.

If NONE is specified in the INDEX_COMPRESSION_CLAUSE, then the index compression clause is omitted (and the index is given the default compression for the tablespace). However, if you use compression, then Oracle recommends that you use COMPRESS ADVANCED LOW. Indexes are created with the specified compression.

If the index compression clause is more than one word, then it must be contained in single or double quotation marks. Also, your operating system can require you to enclose the clause in escape characters, such as the backslash character. Here is an example of how to use the INDEX_COMPRESSION_CLAUSE:

 
TRANSFORM=INDEX_COMPRESSION_CLAUSE:\"COMPRESS ADVANCED LOW\"

Specifying this transform changes the type of compression for all indexes in the job.
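Put into a full impdp command (reusing the directory and dump file names from the checksum example above purely as an illustration), it could look like this:

impdp julian DIRECTORY=data_pump_dir DUMPFILE=jmd.dmp TRANSFORM=INDEX_COMPRESSION_CLAUSE:\"COMPRESS ADVANCED LOW\"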

6. Oracle Data Pump resumes transportable tablespace jobs

Starting with Oracle Database 20c, Data Pump resumes transportable tablespace export and import jobs that are stopped due to errors or any other problems. Oracle Data Pump’s capacity to resume these stopped jobs helps us to save time and makes the system more available.

Transportable jobs are now restartable at or near the point of failure.

To restart the job JMD_EXP_20C, first perform:

expdp system/password attach=jmd_exp_20c

Then restart the job with:

Export> continue_client

For an import job, the syntax is exactly the same.

On Kafka, JSON, PL/SQL and Advanced Queuing in Oracle Database 20c

In Autonomous, Cloud, Data, DBA, New features, PL/SQL on June 1, 2020 at 05:50

Oracle Corp. starts a new fiscal year today (June 1st, 2020) as the Oracle Database is slowly moving towards version 20c (still in preview mode only), which comes with almost 500 init.ora parameters and 5326 hidden/underscore parameters. There is a new one in 20c called kafka_config_file. But let us first take a step back and see its connection with Advanced Queuing.


Advanced Queuing is available in all editions of Oracle database, including XE. Since Oracle Advanced Queuing is implemented in database tables, all the operational benefits of high availability, scalability, and reliability are applicable to queue data.

Advanced Queuing can be accessed through several interfaces: PL/SQL, Visual Basic, Java, Java Message Service, JDBC, ODP.NET, OCI (do not read Oracle Cloud Infrastructure – think of the C language), etc.

Using PL/SQL is probably the most common way to access Oracle Advanced Queuing: via the PL/SQL packages DBMS_AQADM and DBMS_AQ.
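For illustration, here is a minimal sketch (the queue names and the payload are made up) of creating a queue with DBMS_AQADM and enqueuing a message with DBMS_AQ:

-- create and start a queue with a simple RAW payload
BEGIN
  DBMS_AQADM.CREATE_QUEUE_TABLE(queue_table => 'demo_qt', queue_payload_type => 'RAW');
  DBMS_AQADM.CREATE_QUEUE(queue_name => 'demo_q', queue_table => 'demo_qt');
  DBMS_AQADM.START_QUEUE(queue_name => 'demo_q');
END;
/

-- enqueue one message into the new queue
DECLARE
  l_enqueue_options    DBMS_AQ.ENQUEUE_OPTIONS_T;
  l_message_properties DBMS_AQ.MESSAGE_PROPERTIES_T;
  l_msgid              RAW(16);
BEGIN
  DBMS_AQ.ENQUEUE(queue_name         => 'demo_q',
                  enqueue_options    => l_enqueue_options,
                  message_properties => l_message_properties,
                  payload            => UTL_RAW.CAST_TO_RAW('Hello, AQ'),
                  msgid              => l_msgid);
  COMMIT;
END;
/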

Reading all that, one might think of Kafka. Kafka is a distributed, partitioned, replicated commit log service providing the functionality of a messaging system, but with a unique design. The aim here is not to position Kafka and Oracle AQ against each other, but to show what is new in Oracle Database 20c that brings Kafka and Oracle together, and what is important from a DBA point of view.

Todd Sharp explained recently how to use Kafka Connect with Oracle Streaming Service and Autonomous Database. Let us now see what is new in Oracle Database 20c:

Oracle Database 20c introduces Transactional Event Queues (TEQ), which are partitioned message queues that combine the best of messaging, streaming, direct messages, and publish/subscribe. TEQ operates at scale on the Oracle database. TEQ provides transactional event streaming and runs in the database at a scale of 10s to 100s of billions of messages per day on 2-node to 8-node Oracle RAC databases, both on-premises and in the cloud. TEQ has Kafka client compatibility, which means Kafka producers and consumers can use TEQ in the Oracle database instead of a Kafka broker. Check Advanced Queuing in 20c for more details.

1. Advanced Queuing: Kafka Java Client for Transactional Event Queues

Kafka Java Client for Transactional Event Queues (TEQ) enables Kafka application compatibility with Oracle database. This provides easy migration of Kafka applications to TEQ.

You do not have to manage a separate Kafka infrastructure, and this new feature simplifies event-driven application architectures with an Oracle converged database that now includes events data. Starting from Oracle Database 20c, Kafka Java APIs can connect to the Oracle database server and use Transactional Event Queues (TEQ) as a messaging platform. Developers can migrate an existing Java application that uses Kafka to the Oracle database. A client-side library allows Kafka applications to connect to the Oracle database instead of a Kafka cluster and use the TEQ messaging platform transparently.

Two levels of configuration are required to migrate a Kafka application to the TEQ messaging platform:

– Database level configuration
– Application level configuration

The Kafka application needs to set certain properties which allow the OKafka library to locate the Oracle Database. This is analogous to how a Kafka application provides ZooKeeper information. These connection properties can be set in the following two ways:

– using database user and password provided in plain text
– using JDBC wallet

The following are the prerequisites for configuring and running the Kafka Java client for TEQ in an Oracle Database. Create a database user and grant it the following privileges:

grant connect, resource to user;
grant execute on dbms_aq to user;
grant execute on dbms_aqadm to user;
grant execute on dbms_aqin to user;
grant execute on dbms_aqjms to user;
grant select_catalog_role to user;

Next, set the correct database init.ora parameter to use TEQ:

streams_pool_size=512M

Set the local listener too:

LOCAL_LISTENER= (ADDRESS=(PROTOCOL=TCP)(HOST= )(PORT=))

2. Advanced Queuing Support for JSON Data Type and PL/SQL

Oracle Database Advanced Queuing now supports the JSON data type.

Many client applications and microservices which use Advanced Queuing for messaging perform better if they use the JSON data type to handle JavaScript Object Notation (JSON) messages.
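As a hedged sketch only (the queue names are made up, and I am assuming 'JSON' is accepted as the queue payload type as described in the 20c Advanced Queuing documentation), a queue holding JSON messages could be created like this:

BEGIN
  -- assumption: 'JSON' is a valid queue_payload_type in 20c
  DBMS_AQADM.CREATE_QUEUE_TABLE(queue_table => 'json_qt', queue_payload_type => 'JSON');
  DBMS_AQADM.CREATE_QUEUE(queue_name => 'json_q', queue_table => 'json_qt');
  DBMS_AQADM.START_QUEUE(queue_name => 'json_q');
END;
/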

In this aspect, PUBLIC is granted EXECUTE privilege on all these types:

AQ$_AGENT, AQ$_AGENT_LIST_T, AQ$_DESCRIPTOR, AQ$_NTFN_DESCRIPTOR, AQ$_NTFN_MSGID_ARRAY, AQ$_POST_INFO, AQ$_POST_INFO_LIST, AQ$_PURGE_OPTIONS_T, AQ$_RECIPIENT_LIST_T,
AQ$_REG_INFO, AQ$_REG_INFO_LIST, AQ$_SUBSCRIBER_LIST_T, DEQUEUE_OPTIONS_T, ENQUEUE_OPTIONS_T, QUEUE_PROPS_T, SEEK_INPUT_T, SEEK_OUTPUT_T, SYS.MSG_PROP_T, MESSAGE_PROPERTIES_T, MESSAGE_PROPERTIES_ARRAY_T, MSGID_ARRAY_T

Regarding the new features of PL/SQL in 20c, check first Steven Feuerstein’s interesting example on the extension of loop iterators.

Check the New Features in Release 20c for Oracle Database PL/SQL for more details on PL/SQL extended iterators, PL/SQL qualified expressions enhancements, SQL macros, the new JSON data type and the new pragma SUPPRESSES_WARNING_6009.

3. Advanced Queuing: PL/SQL Enqueue and Dequeue Support

The following features are new in this release:

– Kafka Java Client for Transactional Event Queues (TEQ), which enables Kafka application compatibility with Oracle Database and thus provides easy migration of Kafka applications to TEQ
– PL/SQL Enqueue and Dequeue Support for JMS Payload and non-JMS Payload in Transactional Event Queues
– Transactional Event Queues for Performance and Scalability
– Simplified Metadata and Schema in Transactional Event Queues
– Support for Message Retention and Seekable Subscribers
– Advanced Queuing Support for JSON Data Type

For all the details, check the Changes in Oracle Database Advanced Queuing Release 20c.

In terms of performance and scalability, Oracle Transactional Event Queues have their Queue tables partitioned in 20c into multiple Event Streams which are distributed across multiple RAC nodes for high throughput messaging and streaming of events.

Remember that in 10.1, AQ was integrated into Oracle Streams and thus Oracle AQ was called “Oracle Streams AQ”. But in 12.1, Oracle Streams got deprecated and AQ was again named just “Oracle AQ”.

And finally: here is the 546-page-long Transactional Event Queues and Advanced Queuing User’s Guide, along with a few good additional articles:

Oracle + Kafka = Better Architecture by Jonathan Wallace
Streaming data from Oracle into Kafka by Robin Moffatt
Extending Oracle Streaming with Kafka Compatibility by Somnath Lahiri