Dontcheff

Archive for the ‘New features’ Category

5 new core DBA features in Oracle Database 21c

In DBA, New features, Oracle database, Oracle internals, PL/SQL on January 8, 2021 at 08:14

If you are wondering what else can come after so many releases of the Oracle Database – well, here is something for system DBAs.

Here are 5 new features related to Database Vault, Syslog, the new SYS_AUTO_STS_MODULE, and the 2 new DBA packages SYS.DBMS_FLASHBACK_ARCHIVE_MIGRATE & SYS.DBMS_ACTIVITY.

1. There is no longer any need to disable Oracle Database Vault in every container before the upgrade! We only need to grant the DV_PATCH_ADMIN role commonly to SYS before performing the upgrade. After the upgrade is complete, the Database Vault controls work as before, and we can then revoke the DV_PATCH_ADMIN role from SYS. Alternatively, we can explicitly disable Oracle Database Vault in all containers before the upgrade and then, after the upgrade, explicitly enable it in all the containers.
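Here is a minimal sketch of the grant and revoke (an assumption about the typical flow: the statements are run from CDB$ROOT by a user holding the DV_OWNER role):

GRANT DV_PATCH_ADMIN TO SYS CONTAINER = ALL;
-- ... perform the upgrade and verify the Database Vault controls ...
REVOKE DV_PATCH_ADMIN FROM SYS CONTAINER = ALL;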

2. SYSLOG destination for common Unified Audit Policies:

Certain predefined columns of unified audit records from common unified audit policies can be written to the UNIX SYSLOG destination.

To enable this new feature, we should set UNIFIED_AUDIT_COMMON_SYSTEMLOG, which is a new CDB-level init.ora parameter. This enhancement enables all audit records from common unified audit policies to be consolidated into a single destination. The feature is available only on UNIX platforms, not on Windows (of course).

The new parameter has no default value; we should set both the facility_clause and the priority_clause values.

Only a subset of unified audit record fields are written to ensure that the audit record entries do not exceed the maximum allowed size for a SYSLOG entry (typically 1024 bytes).
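A minimal sketch of setting the parameter (the LOCAL0.INFO facility/priority pair is only an illustration, and SCOPE=SPFILE assumes the parameter is static in your release – adjust both to your syslog configuration):

ALTER SYSTEM SET UNIFIED_AUDIT_COMMON_SYSTEMLOG = 'LOCAL0.INFO' SCOPE = SPFILE;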

3. SYS_AUTO_STS_MODULE is the new module for the Auto SQL Tuning Sets. The feature has existed since Oracle 19.7, so technically it is not really a new 21c feature, although it is listed as such. Check Automatic SQL Tuning Sets (ASTS) 19c RU 19.7 Onwards (Doc ID 2686869.1).

If you try searching Google or MOS for SYS_AUTO_STS_MODULE, you will most likely get results similar to what I got from my search:

Clearly, Franck Pachot has already come across the auto SQL tuning set module, but that is expected behavior – he is one of the pioneers in new features and database research.

First, what is the Automatic SQL Tuning Set? In short, the auto STS is an automatic repository for historic SQL performance metrics and execution plans. Oracle now has a module handling all of that automatically.

Recently, Doc ID 2733254.1 has started showing up in searches for SYS_AUTO_STS_MODULE, but the note is about a hang when creating text indexes, although SYS_AUTO_STS_MODULE pops up 3 times under “SQL ordered by CPU Time”.

Out of curiosity, you may run this SQL also against your 21c database and observe the actions the module has been taking:

select ACTION, count(*) 
from V_$SQLAREA_PLAN_HASH 
where module = 'SYS_AUTO_STS_MODULE' 
group by ACTION;

In a real production database (ADW), I noticed that after the module stopped appearing in the AWR report, the performance went back to normal:

4. DBMS_FLASHBACK_ARCHIVE_MIGRATE

A new PL/SQL package called DBMS_FLASHBACK_ARCHIVE_MIGRATE enables the migration of Flashback Data Archive enabled tables from a database on any release (in which the package exists) to any database on any release (that supports Flashback Data Archive).

The package has 3 procedures:

– EXPORT exports the given Flashback Archive enabled base tables, their history and related tablespaces
– EXPORT_ANALYZE analyzes the given Flashback Archive enabled base tables, their history, and related tablespaces for self-containment using Transportable Tablespace checks
– IMPORT imports the Flashback Archive enabled base tables that were exported, their history, and related tablespaces

Here are the prerequisites:

Database version >= 11.2
If the database version is 11.2, the following conditions should be met:
shared_pool_size >= 500M
streams_pool_size >= 40M, or sga_target >= 2G

The DBMS_FLASHBACK_ARCHIVE_MIGRATE package must be compiled on both the source and target databases as SYS. The source file is located at ?/rdbms/admin/crefbamig.sql, which can be used to create or compile the package (see the sketch below).

The export and import procedures must be executed as SYS. Since the package uses DBMS_DATAPUMP, DBMS_LOCK, DBMS_SYSTEM, DBMS_SQL and DBMS_SCHEDULER PL/SQL packages, their security models are also applicable.
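A minimal sketch of the preparation step, run as SYS on both the source and the target database (only the script path mentioned above is used; the EXPORT, EXPORT_ANALYZE and IMPORT calls themselves should follow the package specification):

SQL> CONNECT / AS SYSDBA
SQL> @?/rdbms/admin/crefbamig.sql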

5. A new concept in the Oracle Database is the Object Activity Tracking System.

Object Activity Tracking System (OATS) tracks the usage of various types of database objects. Usage includes operations such as access, data manipulation, or refresh.

The DBMS_ACTIVITY PL/SQL package contains functions and procedures for configuring Object Activity Tracking System (OATS) information collection and management. Data collected by OATS is used in analyses performed by automatic materialized views.

DBAs can use the DBMS_ACTIVITY.CONFIGURE procedure to control three OATS parameters within a specific database.

ACTIVITY_INTERVAL defines the interval between snapshots:

exec dbms_activity.configure('ACTIVITY_INTERVAL_MINUTES',30);

ACTIVITY_RETENTION_DAYS defines how long snapshots are saved:

exec dbms_activity.configure('ACTIVITY_RETENTION_DAYS',60);

ACTIVITY_SPACE_PERCENT sets how much of available space is reserved for snapshots:

exec dbms_activity.configure('ACTIVITY_SPACE_PERCENT',10);

And use these tables to monitor the activity:

ACTIVITY_CONFIG$
ACTIVITY_TABLE$
ACTIVITY_MVIEW$
ACTIVITY_SNAP_META$
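These are SYS-owned dictionary tables, so here is a minimal sketch of peeking at the current OATS configuration and snapshot metadata (run as a suitably privileged user):

SQL> select * from SYS.ACTIVITY_CONFIG$;
SQL> select * from SYS.ACTIVITY_SNAP_META$;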

More on 21c? Check:

1. Introducing Oracle Database 21c
2. A glimpse of what is new in Oracle Database 21c

A glimpse of what is new in Oracle Database 21c

In Autonomous, New features, Oracle database on December 4, 2020 at 16:12

Oracle Database 21c will soon be available, first on Oracle Cloud: in the Database Cloud Service and the Autonomous Database Free Tier.

Here is a preview of what we can expect as new features:

1. You can enable automatic indexing at the table level:

The AUTO_INDEX_TABLE configuration setting specifies tables that can use auto indexes. When you enable automatic indexing for a schema, all the tables in that schema can use auto indexes. However, if there is a conflict between the schema level and table level setting, the table level setting takes precedence.

Here is an example of how to instruct Oracle to create auto indexes on the NDA_DOCS table:

exec DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_TABLE','NDA_DOCS',TRUE);

If I would like to add the NDA_DOCS table to the auto index exclusion list, I simply run:

exec DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_TABLE','NDA_DOCS',FALSE);

If later, I decide to remove it from the exclusion list, I will run:

exec DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_TABLE','NDA_DOCS',NULL);

And if at one point, I decide to remove all the tables from the exclusion list, so that all the tables in the database can use auto indexes, I will execute:

exec DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_TABLE',NULL,TRUE);

In Oracle 20c and below, you will get the following error message: ORA-38133: invalid parameter name AUTO_INDEX_TABLE specified.
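To double-check the resulting configuration, the DBA_AUTO_INDEX_CONFIG view (available since automatic indexing was introduced in 19c) can be queried – a minimal sketch:

SQL> select PARAMETER_NAME, PARAMETER_VALUE
     from DBA_AUTO_INDEX_CONFIG
     where PARAMETER_NAME like 'AUTO_INDEX%';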

2. Attention Log

The attention log, which is unique for each database instance, is, according to the documentation, a “structured, externally modifiable file that contains information about critical and highly visible database events”.

We can use the attention log to quickly access information about critical events that need action.

The attention log has the following attributes:

Attention ID: A unique identifier for the message.
Attention type: The type of attention message. Possible values are Error, Warning, Notification, or Additional information. The attention type can be modified dynamically.
Message text
Urgency: Possible values are Immediate, Soon, Deferrable, or Information.
Scope: Possible values are Session, Process, PDB Instance, CDB Instance, CDB Cluster, PDB (for issues in persistent storage that a database restart will not fix), or CDB (for issues in persistent storage that a database restart will not fix).
Target user: The user who must act on this attention log message. Possible values are Clusterware Admin, CDB admin, or PDB admin.
Cause
Action

Here is an example from Oracle:
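To locate the attention log on disk, one option is to query V$DIAG_INFO – a minimal sketch (the 'Attention Log' item and the attention_<SID>.log file name are assumptions based on the default 21c ADR layout):

SQL> select NAME, VALUE
     from V$DIAG_INFO
     where NAME in ('Attention Log', 'Diag Trace');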

3. For Oracle Autonomous Database, the size of the sequence cache is dynamically computed based on the rate of usage of sequence numbers:

“The automatic sequence cache size on each instance is dynamically computed based on the rate of usage of sequence numbers. Each instance caches the maximum of the manually configured sequence cache size and the projected cache size requirement for the next 10 seconds. Based on the sequence usage, the sequence cache size can shrink or grow. The minimum size to which the cache can shrink is the manually configured cache size. To prevent the sequence cache size from growing indefinitely, the cache size and each increment in the cache size is capped.”

4. Traditional auditing is now deprecated and Oracle recommends that we use unified auditing instead.

5. REMOTE_OS_AUTHENT is now desupported – in fact it was deprecated in Oracle 11.1 and afterwards retained only for backward compatibility.

The 21c New Features Guide also lists the following features, which you can also use in Oracle 20c:

– Blockchain Tables
– Database Resident Connection Pooling (DRCP) can be configured at the PDB level:

Connected to:
Oracle Database 20c EE High Perf Release 20.0.0.0.0 - Production
Version 20.2.0.0.0

SQL> exec DBMS_CONNECTION_POOL.START_POOL();
PL/SQL procedure successfully completed.

SQL>  exec DBMS_CONNECTION_POOL.STOP_POOL();
PL/SQL procedure successfully completed.

– Transportable tablespace jobs can be restarted:

expdp system/password attach=jmd_exp_20c
Export> continue_client

– In Oracle Cloud environments, a PDB can be downsized by reducing the value of the CPU_MIN_COUNT parameter. Also, the default value for JOB_QUEUE_PROCESSES across all containers is automatically derived from the number of sessions and CPUs configured in the system. It is the lesser of the two values below (see the query sketch after this list):

CPU_COUNT * 20
SESSIONS / 4
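A quick sketch of computing that derived value yourself from the instance parameters:

SQL> select least(
       (select to_number(value) from v$parameter where name = 'cpu_count') * 20,
       (select to_number(value) from v$parameter where name = 'sessions') / 4
     ) as derived_job_queue_processes
     from dual;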

– The IGNORECASE parameter for the orapwd file is desupported and all newly created password files are case-sensitive.

– SecureFiles segments can be shrunk to improve performance.

Detecting Data Tampering and Measuring Asymmetry and Tailedness of Data in Oracle Database 20c

In DBA, New features, SQL on September 11, 2020 at 11:27

There are seven types of SQL functions in the Oracle database:

1. Single-Row Functions
2. Aggregate Functions
3. Analytic Functions
4. Object Reference Functions
5. Model Functions
6. OLAP Functions
7. Data Cartridge Functions

The single-row functions are the most popular and most used ones, but the aggregate and analytic functions are also extremely popular among Oracle developers (and DBAs too).

Three new analytical and statistical aggregate functions are now available in Oracle Database 20c. Let us use the RDBMS_BRANDS table for the 3 examples below. For the sake of clarity, the last column shows whether the database is only available in the cloud.

1. CHECKSUM computes the checksum of the input values or expression and can be applied on a column, a constant, a bind variable, or an expression involving them. All datatypes except ADT and JSON are supported.

In earlier releases, you can still use DBMS_SQLHASH.GET_HASH in order to check the integrity of result sets, or other options such as the STANDARD_HASH function or the DBMS_CRYPTO package.

Here is an example of how to use CHECKSUM:

SQL> select DB_ENGINES_RANK,DB_ENGINES_SCORE
from RDBMS_BRANDS
where DB_ENGINES_RANK < 50; 

DB_ENGINES_RANK DB_ENGINES_SCORE
--------------- ----------------
              1          1345.44
              2          1282.64
              3          1078.31
              4           514.81
              5           162.64
              8            90.09
             10            73.89
             13            50.54
             15            27.59
             18            20.27
             42             3.73
             48             2.36

12 rows selected.

SQL> select CHECKSUM(DB_ENGINES_RANK), CHECKSUM(DB_ENGINES_SCORE)
from RDBMS_BRANDS where DB_ENGINES_RANK < 50;  

CHECKSUM(DB_ENGINES_RANK) CHECKSUM(DB_ENGINES_SCORE)
------------------------- --------------------------
                   288250                     209742

Let us now update the score of Oracle, which in September went up to 1369.36 points, and observe how the checksum value for DB_ENGINES_SCORE changes from 209742 to 180002!


SQL> update RDBMS_BRANDS set DB_ENGINES_SCORE=1369.36 
where RDBMS_NAME='ORACLE';

1 row updated.

SQL> commit;

Commit complete.

SQL> select 
CHECKSUM(DB_ENGINES_RANK), CHECKSUM(DB_ENGINES_SCORE)
from RDBMS_BRANDS where DB_ENGINES_RANK < 50;  

CHECKSUM(DB_ENGINES_RANK) CHECKSUM(DB_ENGINES_SCORE)
------------------------- --------------------------
                   288250                     180002


Note: NULL values in the column passed to CHECKSUM are ignored. Also, if you roll back the transaction, the checksum value does not change.

There are 2 other analytical functions which are new in Oracle 20c. Skewness and kurtosis describe the shape of a probability distribution, and there are different ways of quantifying them for a theoretical distribution and corresponding ways of estimating them from a sample of a population. Oracle now offers a very easy way of calculating their values by providing the built-in analytical functions in Oracle 20c.

2. SKEWNESS functions SKEWNESS_POP and SKEWNESS_SAMP are measures of asymmetry in data. A positive skewness means the data skews to the right of the center point. A negative skewness means the data skews to the left.

Here is an example of how to use SKEWNESS:

SQL> select CLOUD_ONLY, count(*) from RDBMS_BRANDS group by CLOUD_ONLY;

C   COUNT(*)
- ----------
N          9
Y          3

SQL> select CLOUD_ONLY,
SKEWNESS_POP(DB_ENGINES_RANK), SKEWNESS_POP(DB_ENGINES_SCORE)
from RDBMS_BRANDS
group by CLOUD_ONLY; 

C SKEWNESS_POP(DB_ENGINES_RANK) SKEWNESS_POP(DB_ENGINES_SCORE)
- ----------------------------- ------------------------------
N                     2.0503487                     .584150452
Y                    .685667537                      -.4626607
   

Skewness makes sense in the situation where DB_ENGINES_RANK and DB_ENGINES_SCORE represent the database brand rank and score in the list and you want to determine whether the outliers in data are biased towards the top end or the bottom end of the distribution, that is, if there are more values to the top of the mean when compared to the number of values to the bottom of the mean.

3. KURTOSIS functions KURTOSIS_POP and KURTOSIS_SAMP measure the tailedness of a data set where a higher value means more of the variance within the data set is the result of infrequent extreme deviations as opposed to frequent modestly sized deviations.

Here is an example of how to use KURTOSIS:

SQL> select CLOUD_ONLY, count(*) from RDBMS_BRANDS group by CLOUD_ONLY;

C   COUNT(*)
- ----------
N          9
Y          3

SQL> select CLOUD_ONLY,
KURTOSIS_POP(DB_ENGINES_RANK), KURTOSIS_POP(DB_ENGINES_SCORE)
from RDBMS_BRANDS
group by CLOUD_ONLY;  

C KURTOSIS_POP(DB_ENGINES_RANK) KURTOSIS_POP(DB_ENGINES_SCORE)
- ----------------------------- ------------------------------
N                    2.88880521                     -1.4382391
Y                             0                              0
  

Note that a normal distribution has a kurtosis of zero. Have a look at the KURTOSIS_POP for the cloud-only databases Google BigQuery, Amazon Redshift and Snowflake.

MySQL DB System on Oracle Cloud Infrastructure

In Cloud, Databases, DBA, IaaS, MySQL, New features on September 1, 2020 at 08:37

MySQL has for a long time been second only to Oracle in the DB-Engines rankings. Moreover, MySQL was their 2019 Database of the Year!

Now, MySQL is also available as DB System on Oracle Cloud Infrastructure. The database version is 8.0.21 with InnoDB Storage Engine. Here is how the set-up works:

1. Create a MySQL DB System:

2. Note that there are different shutdown types:

3. Connect to a MySQL DB system:

It is not possible to connect directly from a remote IP to the MySQL DB System endpoint. You must connect to a Compute Instance, and from the Compute Instance to the MySQL DB System.

We first install MySQL Shell on the Compute instance:

 
[opc@julian ~]$ sudo yum install https://dev.mysql.com/get/mysql80-community-release-el7-3.noarch.rpm
Loaded plugins: ulninfo, versionlock
mysql80-community-release-el7-3.noarch.rpm               |  25 kB     00:00
Examining /var/tmp/yum-root-X8dAdb/mysql80-community-release-el7-3.noarch.rpm: mysql80-community-release-el7-3.noarch
Marking /var/tmp/yum-root-X8dAdb/mysql80-community-release-el7-3.noarch.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package mysql80-community-release.noarch 0:el7-3 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package             Arch   Version
                                  Repository                               Size
================================================================================
Installing:
 mysql80-community-release
                     noarch el7-3 /mysql80-community-release-el7-3.noarch  31 k

Transaction Summary
================================================================================
Install  1 Package

Total size: 31 k
Installed size: 31 k
Is this ok [y/d/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Warning: RPMDB altered outside of yum.
  Installing : mysql80-community-release-el7-3.noarch                       1/1
  Verifying  : mysql80-community-release-el7-3.noarch                       1/1

Installed:
  mysql80-community-release.noarch 0:el7-3

Complete!
[opc@julian ~]$

===

[opc@julian ~]$ sudo yum install mysql-shell
Loaded plugins: ulninfo, versionlock
mysql-connectors-community                               | 2.5 kB     00:00
mysql-tools-community                                    | 2.5 kB     00:00
mysql80-community                                        | 2.5 kB     00:00
ol7_UEKR5                                                | 2.5 kB     00:00
ol7_latest                                               | 2.7 kB     00:00
(1/3): mysql80-community/x86_64/primary_db                 | 115 kB   00:00
(2/3): mysql-connectors-community/x86_64/primary_db        |  62 kB   00:00
(3/3): mysql-tools-community/x86_64/primary_db             |  76 kB   00:00
(1/5): ol7_UEKR5/x86_64/updateinfo                         |  72 kB   00:00
(2/5): ol7_latest/x86_64/group                             | 660 kB   00:00
(3/5): ol7_latest/x86_64/updateinfo                        | 2.9 MB   00:00
(4/5): ol7_UEKR5/x86_64/primary_db                         |  12 MB   00:00
(5/5): ol7_latest/x86_64/primary_db                        |  36 MB   00:00
Excluding 238 updates due to versionlock (use "yum versionlock status" to show them)
Resolving Dependencies
--> Running transaction check
---> Package mysql-shell.x86_64 0:8.0.21-1.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package         Arch       Version             Repository                 Size
================================================================================
Installing:
 mysql-shell     x86_64     8.0.21-1.el7        mysql-tools-community      31 M

Transaction Summary
================================================================================
Install  1 Package

Total download size: 31 M
Installed size: 106 M
Is this ok [y/d/N]: y
Downloading packages:
warning: /var/cache/yum/x86_64/7Server/mysql-tools-community/packages/mysql-shell-8.0.21-1.el7.x86_64.rpm: Header V3 DSA/SHA1 Signature, key ID 5072e1f5: NOKEY
Public key for mysql-shell-8.0.21-1.el7.x86_64.rpm is not installed
mysql-shell-8.0.21-1.el7.x86_64.rpm                        |  31 MB   00:02
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-mysql
Importing GPG key 0x5072E1F5:
 Userid     : "MySQL Release Engineering "
 Fingerprint: a4a9 4068 76fc bd3c 4567 70c8 8c71 8d3b 5072 e1f5
 Package    : mysql80-community-release-el7-3.noarch (@/mysql80-community-release-el7-3.noarch)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-mysql
Is this ok [y/N]: y
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : mysql-shell-8.0.21-1.el7.x86_64                              1/1
  Verifying  : mysql-shell-8.0.21-1.el7.x86_64                              1/1

Installed:
  mysql-shell.x86_64 0:8.0.21-1.el7

Complete!
[opc@julian ~]$

Then connect to your DB System using MySQL Shell:

 
mysqlsh ICO@10.0.0.3 

[opc@julian ~]$ mysqlsh ICO@10.0.0.3
Please provide the password for 'ICO@10.0.0.3': ************
MySQL Shell 8.0.21

Copyright (c) 2016, 2020, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.
Creating a session to 'ICO@10.0.0.3'
[opc@julian ~]$

My suggestion is to start from the MySQL Database Service Overview.

MySQL database is also available from GCP, Azure and AWS but check the pricing:

Sharding Advisor and Federated Sharding in Oracle Database 20c

In DBA, New features, Oracle database on August 3, 2020 at 05:58

“Shards of glass can cut and wound or magnify a vision.” – Terry Tempest Williams

Same with databases: a single shared database can be a performance bottleneck, a sharded database can magnify your IT vision strategy.

Oracle RAC and Oracle Data Guard meet more than 99% of the scalability and availability needed by business applications but sometimes infinite availability is also required.

Sharding is an architectural concept which aims at high scalability and absolute availability by splitting the database into a collection of independent physical databases. But this is from the perspective of the DBA. From the perspective of an application, a sharded database looks like a single database; the number of shards, and the distribution of data across those shards, are completely transparent to the application.

Sharding was introduced with Oracle 12c and MOS Doc ID 2226341.1 is the master note for Oracle Sharding. What is important to know is that you can shard database tables by consistent hash (system-managed sharding), by range or list (user-defined sharding), or a combination (composite sharding).
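As a reminder of what the DDL looks like, here is a minimal sketch of system-managed sharding by consistent hash (the table, its columns and the tablespace set ts1 are made up purely for illustration):

CREATE SHARDED TABLE customers
( cust_id  NUMBER NOT NULL,
  name     VARCHAR2(50),
  region   VARCHAR2(20),
  CONSTRAINT customers_pk PRIMARY KEY (cust_id)
)
PARTITION BY CONSISTENT HASH (cust_id)
PARTITIONS AUTO
TABLESPACE SET ts1;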

In Oracle 20c, Oracle came with 2 new advisors: the Oracle Autonomous Database Advisor and the Oracle Sharding Advisor. The Oracle Sharding Reference is a good page to bookmark!

Sharding Advisor is a standalone command-line tool that helps you redesign a database schema so that you can efficiently migrate an existing non-sharded Oracle Database to an Oracle sharding environment. Sharding Advisor analyzes your existing database schema and produces a ranked list of possible sharded database designs.

The Sharding Advisor is an OS command-line tool called gwsadv, and by default the workload is captured from V$SQL_PLAN_STATISTICS_ALL.

Here is how you invoke/run the Oracle Sharding Advisor:

 

[oracle@julian ~]$ gwsadv -n 138.121.142.24:1521 -s novopdb1.laika.oraclevcn.com -u julian -p abc -c -w

********** WELCOME TO THE SHARDING ADVISOR **********

Sharding Advisor: Release 20.0 - Development on Sun Jul 26 2020 07:16:07
Copyright (c) 1982, 2018, Oracle and/or its affiliates. All rights reserved.
  

Altogether, there are 10 possible parameters/flags for the tool; only 3 are mandatory: the username (-u), the password (-p), and -w, which directs Sharding Advisor to use the query workload for sharding configuration generation and ranking. The -c parameter is required only on a first run or on a changed workload. For the remaining 6 non-mandatory flags, check the Sharding Advisor Usage and Options.

Note that the -sch flag specifies the list of schemas for the Sharding Advisor if you want to run it as a different user which you create specifically for the Sharding Advisor. That user will need SELECT privileges on GV$SQL_PLAN and GV$SQL_PLAN_STATISTICS_ALL, and on the DBA_HIST_SQL_PLAN, DBA_HIST_SQLSTAT, and DBA_HIST_SNAPSHOT tables (see the grant sketch below). The user does not need any other special privileges.
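A minimal sketch of creating such a user and granting exactly those SELECT privileges (the user name shadv and its password are hypothetical; run as SYS, where the GV$ views are granted via their GV_$ counterparts):

CREATE USER shadv IDENTIFIED BY "SomeStrongPassword#2020";
GRANT CREATE SESSION TO shadv;
GRANT SELECT ON gv_$sql_plan TO shadv;
GRANT SELECT ON gv_$sql_plan_statistics_all TO shadv;
GRANT SELECT ON dba_hist_sql_plan TO shadv;
GRANT SELECT ON dba_hist_sqlstat TO shadv;
GRANT SELECT ON dba_hist_snapshot TO shadv;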

Once you start the Sharding Advisor with the command above, it will ask you some questions to help with the sharded database design, the first one being how many shards you are planning to have:

 
** The number of shards in a sharded configuration is an important parameter **
** that can influence scalability, fault isolation, and availability. **

ENTER THE ESTIMATED NUMBER OF PRIMARY SHARDS FOR YOUR CURRENT CONFIGURATION (1-1000): 4

*** CAPTURING QUERY WORKLOAD PREDICATES ... ***

*** ANALYZING USER SCHEMA ... ***
  

Usually, a database schema has some tables with multiple foreign key constraints. Sharding has a limitation that a sharded table can only have one foreign key constraint that links it to another table in the same table family. As a result, for every table that has more than one foreign key constraint, a single constraint must be chosen to be enforced in the sharded schema. You will be asked questions like this:

 
** The 'JULIAN.ORDERS' table has multiple foreign key constraints as shown below. **

** 1. 'JULIAN.ORDERS' is related to primary table 'JULIAN.ORDER_TYPE' by foreign key 'ORDER_TYPE_FK' with a fan-out of '1.8' **
** 2. 'JULIAN.ORDERS' is related to primary table 'JULIAN.CLIENTS' by foreign key 'CLIENTS_FK1' with a fan-out of '1.3' **

ENTER WHICH OF THESE CONSTRAINTS YOU WANT TO CONSIDER FOR TABLE FAMILY ENUMERATION
(AS COMMA-SEPARATED RANGES OR ENTER '0' TO CHOOSE NONE): 1
   

Note: Fan-out, in any tree, is the number of pointers to child nodes in a node. In Oracle, in this case, fan-out is the ratio of the number of rows in the referencing table to that of the referenced table (parent/primary table).

To review the Sharding Advisor recommendations, use the following 6 output tables, of which only the first 3 are documented:

SHARDINGADVISOR_CONFIGURATIONS
SHARDINGADVISOR_CONFIGDETAILS
SHARDINGADVISOR_QUERYTYPES
– SHARDINGADVISOR_PREDS
– SHARDINGADVISOR_ECPREDS
– SHARDINGADVISOR_IMPORTANT_TABS

Check also the following Sharding Advisor Output Review SQL Examples.
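As a starting point, here is a minimal sketch of peeking into the documented tables, run as the user that ran the advisor (the RANK ordering assumes the column described in the Sharding Advisor reference):

SQL> select * from SHARDINGADVISOR_CONFIGURATIONS order by RANK;
SQL> select * from SHARDINGADVISOR_QUERYTYPES;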

There are a few restrictions for the Sharding Advisor:

– The source database must be Oracle Database 10g or later
– The Sharding Advisor discovers the table families based on primary key-foreign key relationships, so if the schema does not have any primary key-foreign key constraints, sharding by PARENT clause is recommended
– Sharding Advisor recommends only single-table family, system-managed sharding (sharding by reference) configurations if the source database has foreign key constraints
– Sharding Advisor recommends sharding using the PARENT clause if the source database does not have foreign key constraints

Oracle has declarative rapid deployment automation for Sharding available in the OCI Marketplace, as well as on-premises with Terraform and Kubernetes. Check the Oracle blog article Sharding Oracle Database Cloud Service by Shailesh Dwivedi.

===

In Oracle 20c, you can also create a Sharded Database from multiple existing databases – this is called Federated Sharding.

It is now possible to convert a set of existing physical independent databases running the same application into a sharded database without modifying the database schemas or the application.

Oracle Sharding, in a federated sharding configuration, treats each independent physical database as a shard, and thus you can run multi-shard queries on all the shards.

You can create a federated sharding configuration with minor version mismatches between the shards. For example, the “North America” region could be on Oracle 20.2 and the “Europe” region could be on Oracle 20.3. But all database shards and the shard catalog must be on Oracle Database 20c or later. The databases must have the same schemas and only minor differences are supported. For example, a table can have an extra column or a missing column in one of the databases.

Often there are application upgrades, which means that new tables, new columns, and new constraints are added, or column data types are modified. When part of an overall Federated Sharding configuration, Oracle Sharding handles the schema differences caused by an application upgrade, as long as the overall schema structure stays the same.

To create a Federated Sharding environment using existing databases, you define the database layout just as you would for user-defined sharding, using GDSCTL commands. You need to execute these 7 steps in order to deploy a Federated Sharding configuration.

Just as an example, here is how you convert tables to duplicated or sharded tables:

 
ALTER TABLE catalog EXTERNALLY DUPLICATED;
ALTER TABLE orders EXTERNALLY SHARDED;

These are the limitations on Federated Sharding:

– There is no concept of chunk in a federated sharding configuration, so the GDSCTL MOVE CHUNK command is not supported
– Application sharding key-based routing is not supported
– The existing databases, before being added to a federated sharding configuration, must be upgraded to Oracle Database 20c or later

If you managed to bear reading until this last paragraph, please also check Partitioning vs. Federation vs. Sharding. Its last paragraph too…

PDB Point-in-Time Recovery and Flashback in Oracle 20c

In DBA, New features, Oracle database, RMAN on July 10, 2020 at 14:41

The most significant point about the Oracle 20c database architecture is that upgrades of non-CDB Oracle Databases to the non-CDB architecture are desupported, meaning you need a container database in 20c and your data will reside within a pluggable database.

But then how about if you need to restore one PDB to any time in the recent past?

In Oracle Database 20c, flashback and PITR (point-in-time recovery) are supported when recovering PDBs to ancestor or orphan PDB incarnations. These operations were not possible in 19c and below. Just as a reminder: in Oracle 12.1, flashback database operations were possible only at the root container level and thus affected all PDBs under the root container. Oracle 12.2 started supporting flashback of an individual PDB.

There is one restriction though in 20c: you cannot perform a PDB flashback or PITR operation to a PDB incarnation within an orphan database incarnation. In other words, you can flash back as long as the CDB incarnation does not change. Or restated: you can flash back a PDB to an orphan PDB incarnation that is either within the same CDB incarnation or in an ancestor CDB incarnation. Also, flashback of a PDB to an orphan incarnation is supported only when the database uses local undo.

Now, this might be confusing. Let me first briefly explain what an ancestor incarnation and an orphan incarnation are.

Database incarnations have the following relationships to each other:

– The current incarnation is the one in which the database is currently operating
– The incarnation from which the current incarnation originated after an OPEN RESETLOGS operation is the parent incarnation of the current incarnation
– The parent of the parent incarnation is an ancestor incarnation and any parent of an ancestor incarnation is also an ancestor of the current incarnation
– A noncurrent incarnation that is not a direct ancestor of the current incarnation is called an orphan incarnation

During the flashback of the PDB, Oracle modifies only the data files for that PDB. The data in the other PDBs is not impacted. The point in time for the flashback can be one of the following:

– System Change Number
– Specific time in the past
– CDB restore point
– PDB restore point
– PDB clean restore point
– PDB guaranteed restore point

Here is an example of how flashback to any time in the recent past works in Oracle 20c.

At 2:30pm, we lost a table called RDBMS_BRANDS and a materialized zone map called RDBMS_ZMAP from a pluggable database called NOVOPDB2. We have a restore point called rp1_novo_pdb2, created at 8am in the morning, before the “disaster”. So, let us first flash back and verify that we get the 2 objects back:

SQL> select systimestamp from dual;

SYSTIMESTAMP
-------------------------------------------------------------
05-JUL-20 02.35.30.569344 PM +00:00

SQL> SELECT table_name FROM DBA_TABLES where TABLE_NAME like '%RDBMS%';

no rows selected

SQL> ALTER PLUGGABLE DATABASE novopdb2 CLOSE;

Pluggable database altered.

SQL> FLASHBACK PLUGGABLE DATABASE novopdb2 TO RESTORE POINT rp1_novo_pdb2;

Flashback complete.

SQL> ALTER PLUGGABLE DATABASE novopdb2 OPEN RESETLOGS;

Pluggable database altered.

SQL> SELECT table_name FROM DBA_TABLES where TABLE_NAME like '%RDBMS%';

TABLE_NAME
----------------------------------------------------------------------
RDBMS_BRANDS
RDBMS_ZMAP
  

Well, unfortunately, we now notice that slightly before noon, data was loaded into a new table called RDBMS_HISTORY, which did not exist in the PDB at 8am. All SCNs between 8am and the current time are now on an orphan PDB incarnation. We will flash back again using another restore point, rp2_novo_pdb2, created at 12 o’clock.


SQL> ALTER PLUGGABLE DATABASE novopdb2 CLOSE;

Pluggable database altered.

SQL> FLASHBACK PLUGGABLE DATABASE novopdb2 TO RESTORE POINT rp2_novo_pdb2;

Flashback complete.

SQL> ALTER PLUGGABLE DATABASE novopdb2 OPEN RESETLOGS;

Pluggable database altered.

SQL> SELECT table_name FROM DBA_TABLES where TABLE_NAME like '%RDBMS%';

TABLE_NAME
--------------------------------------------------------------------------------
RDBMS_BRANDS
RDBMS_ZMAP
RDBMS_HISTORY

SQL> select systimestamp from dual;

SYSTIMESTAMP
---------------------------------------------------------------------------
05-JUL-20 02.48.30.569344 PM +00:00

So, we managed to flash back to a point a few hours after we opened with RESETLOGS from the previous flashback. But within the same incarnation of the CDB.

DBAs can follow the process of restore and recovery using the V$SESSION_LONGOPS and V$RECOVERY_PROGRESS views, respectively.

For the restore, the V$SESSION_LONGOPS view’s column OPNAME should be ‘Flashback Database’. Just like this:

SELECT sofar, totalwork, units 
FROM v$session_longops 
WHERE opname = 'Flashback Database';

The column SOFAR shows the data currently read in megabytes while the column TOTALWORK shows the total number of megabytes of flashback logs that must be read.
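Correspondingly, a minimal sketch for the recovery side using V$RECOVERY_PROGRESS (only a handful of its columns are selected here):

SELECT item, units, sofar, total
FROM v$recovery_progress;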

Restore points are created with the following command:

SQL> CREATE RESTORE POINT rp17 FOR PLUGGABLE DATABASE novopdb2;

Restore point created.

The CLOUD_VERIFY_FUNCTION in Autonomous Database Cloud

In Autonomous, Cloud, DBA, New features, Oracle internals, Security and auditing on July 1, 2020 at 12:58

Choosing a hard-to-guess but easy-to-remember password is by far the easiest of all the hard tasks!

1. Database passwords and their complexity:

According to GDPR personal data must be processed “in a manner that ensures appropriate security of personal data including protection against unauthorized or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organizational measures.”

But GDPR does not define any requirements about passwords such as password length, complexity, or how often password should be renewed. Regulation (EU) 2016/679 just stipulates that “a high level of protection of personal data” is required.

One way to enforce strong passwords on database users is by using the following rules:

A minimum of 1 lower case letter [a-z] and
a minimum of 1 upper case letter [A-Z] and
a minimum of 1 numeric character [0-9] and
a minimum of 1 special character: ~`!@#$%^&*()-_+={}[]|\;:”,./?
Passwords must be at least N characters in length
N attempts to block login
Set password expiration to N days

Oracle follows the above-mentioned rules, and the Oracle script catpvf.sql provides several password verification functions for taking care of the verification process:

– ora_complexity_check
– verify_function
– verify_function_11G
– ora12c_verify_function
– ora12c_strong_verify_function
– ora12c_stig_verify_function

Note that the VERIFY_FUNCTION and VERIFY_FUNCTION_11G password verify functions are desupported in Oracle Database 20c. Also, in Oracle 20c, the IGNORECASE parameter for the orapwd file is desupported. All newly created password files are case-sensitive.

2. Non-autonomous databases

Now, how about those who prefer to use less complex passwords for database users? How do you bypass that problem first in a non-autonomous environment?

There are several ways to avoid the verification process enforced by, say, ora12c_verify_function:

– ALTER PROFILE DEFAULT LIMIT PASSWORD_VERIFY_FUNCTION NULL;
– Create a separate profile for the user (see the sketch after this list)
– Edit the catpvf.sql script to use the password verification function that you want, and then run the script to enable it – the script is located in $ORACLE_HOME/rdbms/admin/catpvf.sql
– Modify “CREATE OR REPLACE FUNCTION ora12c_verify_function …” in utlpwdmg.sql, the file used to change the DEFAULT profile to use a different password complexity function – it is located in $ORACLE_HOME/rdbms/admin/utlpwdmg.sql (not in 20c though)
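For the separate-profile option, a minimal sketch (the profile and user names are made up for illustration):

CREATE PROFILE light_pwd_profile LIMIT PASSWORD_VERIFY_FUNCTION NULL;
ALTER USER app_user PROFILE light_pwd_profile;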

Note here that the Oracle documentation says clearly: “Do not modify the admin/catpvf.sql script or the Oracle-supplied password complexity functions. You can create your own functions based on the contents of these files.”

3. Autonomous databases

Next, how about Autonomous, where we have no access to the operating system layer?

The Oracle Autonomous Database Cloud offers a new function (unique to ADB) called CLOUD_VERIFY_FUNCTION. It is not available in the non-autonomous releases, not even in Oracle 20c.

The CLOUD_VERIFY_FUNCTION function is specified in the PASSWORD_VERIFY_FUNCTION attribute of the DEFAULT profile. This function internally calls ORA_COMPLEXITY_CHECK and checks the password entered according to the following specifications.

– The password must not contain the username
– The password must contain 1 or more lowercase characters
– The password must contain 1 or more uppercase characters
– The password must contain 1 or more digits
– The password length must be no less than 12 bytes and no more than 60 bytes

Let us check first what the function CLOUD_VERIFY_FUNCTION looks like:

 
create or replace FUNCTION cloud_verify_function
(username varchar2,
password varchar2,
old_password varchar2)
RETURN boolean IS
   differ integer;
  db_name varchar2(40);
  i integer;
  reverse_user dbms_id;
  canon_username dbms_id := username;
  len integer := nvl (length(password), 0);
BEGIN
  IF (substr(username,1,1) = '"') THEN
    execute immediate 'begin dbms_utility.canonicalize(:p1,  :p2, 128); end;'
                        using IN username, OUT canon_username;
  END IF;
   IF NOT ora_complexity_check(password, 12, null, 1, 1, 1, null) THEN
     RETURN(FALSE);
   END IF;
-- Check password length
   IF len > 60 THEN
     raise_application_error(-20020, 'Password too long');
   END IF;
-- Check if the password contains the username
   IF regexp_instr(password, canon_username, 1, 1, 0, 'i') > 0 THEN
     raise_application_error(-20002, 'Password contains the username');
   END IF;
RETURN(TRUE);
END;
/

We cannot modify the scripts mentioned above as we do not have OS access in ADB – maybe then we can change the default profile or create a new one? But in ATP, a user’s profile will be set to DEFAULT, and you are not allowed to create additional profiles. Autonomous Data Warehouse requires strong passwords – the password must meet the default password complexity rules.

The output below is identical in ADW and ATP:

Well, we are stubborn – so let us try in any case:

In ATP:

 
create profile DBA_PROFILE 
LIMIT PASSWORD_REUSE_MAX 10 PASSWORD_REUSE_TIME 30
ORA-01031: insufficient privileges

In ADW:

 
create profile DBA_PROFILE 
LIMIT PASSWORD_REUSE_MAX 10 PASSWORD_REUSE_TIME 30;
Profile DBA_PROFILE created.

alter profile DBA_PROFILE limit PASSWORD_VERIFY_FUNCTION null;
Profile DBA_PROFILE altered.

alter user admin profile DBA_PROFILE;
ORA-01031: insufficient privileges

create user app_user identified by abc profile DBA_PROFILE
ORA-28219: password verification failed for mandatory profile
ORA-20000: password length less than 12 bytes

create user app_user identified by Exadataa2020 profile DBA_PROFILE;
User APP_USER created.

 
alter user app_user identified by abc
ORA-28219: password verification failed for mandatory profile
ORA-20000: password length less than 12 bytes

Well, the password verify function is still used although we set the app_user’s profile to DBA_PROFILE.

Conclusion: there is no way to modify or bypass the CLOUD_VERIFY_FUNCTION in Autonomous Cloud.

In 20c, the script utlpwdmg.sql has been modified; it no longer contains the creation of the verification functions. I really don’t know if that is intentional.

 
[oracle@julian admin]$ cat utlpwdmg.sql
...

-- This script sets the default password resource parameters
-- This script needs to be run to enable the password features.
-- However the default resource parameters can be changed based
-- on the need.
-- A default password complexity function is provided.

Rem *************************************************************************
Rem BEGIN Password Management Parameters
Rem *************************************************************************

-- This script alters the default parameters for Password Management
-- This means that all the users on the system have Password Management
-- enabled and set to the following values unless another profile is
-- created with parameter values set to different value or UNLIMITED
-- is created and assigned to the user.

ALTER PROFILE DEFAULT LIMIT
PASSWORD_LIFE_TIME 180
PASSWORD_GRACE_TIME 7
PASSWORD_REUSE_TIME UNLIMITED
PASSWORD_REUSE_MAX  UNLIMITED
FAILED_LOGIN_ATTEMPTS 10
PASSWORD_LOCK_TIME 1
INACTIVE_ACCOUNT_TIME UNLIMITED
PASSWORD_VERIFY_FUNCTION ora12c_verify_function;

/**
The below set of password profile parameters would take into consideration
recommendations from Center for Internet Security[CIS Oracle 11g].

ALTER PROFILE DEFAULT LIMIT
PASSWORD_LIFE_TIME 90
PASSWORD_GRACE_TIME 3
PASSWORD_REUSE_TIME 365
PASSWORD_REUSE_MAX  20
FAILED_LOGIN_ATTEMPTS 3
PASSWORD_LOCK_TIME 1
PASSWORD_VERIFY_FUNCTION ora12c_verify_function;
*/

/**
The below set of password profile parameters would take into
consideration recommendations from Department of Defense Database
Security Technical Implementation Guide[STIG v8R1].

ALTER PROFILE DEFAULT LIMIT
PASSWORD_LIFE_TIME 60
PASSWORD_REUSE_TIME 365
PASSWORD_REUSE_MAX  5
FAILED_LOGIN_ATTEMPTS 3
PASSWORD_VERIFY_FUNCTION ora12c_strong_verify_function;
*/

Rem *************************************************************************
Rem END Password Management Parameters
Rem *************************************************************************
[oracle@julian admin]$

On a final note: the hashed values of the passwords in 20c can still be found in the data dictionary – look into the SPARE4 column of the SYS.USER$ table. It is similar to 19c; read this article for all the details.
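A quick sketch of checking that, run as SYS (APP_USER is just the example user created above):

SQL> select NAME, SPARE4 from SYS.USER$ where NAME = 'APP_USER';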

An example of the Oracle Autonomous Database Advisor

In Autonomous, DBA, Init.ora, New features, Oracle database on June 22, 2020 at 08:33

Moving the database to the Oracle Autonomous Cloud is a rather simple task, but how do you avoid possible issues after the migration? How do you know in advance which objects will not migrate to Autonomous at all, or which objects will be migrated with some changes? How do you get advice and guidelines on the migration to Autonomous beforehand?

The new tool called “Oracle Autonomous Database Schema Advisor” can give you the answers to these questions.

All the details are in MOS Doc ID 2462677.1 but here are the basics and an example in which I will be moving my 20c schema to ATP Dedicated.

First, you need to install the advisor, meaning you run the script install_adb_advisor.sql (as SYSDBA), which you download from the MOS note above. The script will create a user, 7 tables, 4 indexes and a package. I have decided to call the user adb_advisor, and the password will be tiger:

 
[oracle@julian ~]$ sqlplus "/ as sysdba"

SQL*Plus: Release 20.0.0.0.0 - Production on Thu Jun 18 07:24:31 2020
Version 20.2.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Connected to:
Oracle Database 20c EE High Perf Release 20.0.0.0.0 - Production
Version 20.2.0.0.0

SQL> @install_adb_advisor.sql adb_advisor tiger

...

Index created.
Package created.
Package body created.
No errors.
SQL>

Second, you run the advisor as the user created in step one. What you need to specify is (1) the database schemas and (2) the ADB type you will be using in the Autonomous Cloud. You can list a maximum of 30 schemas in a single advisor run, or just use schemas=>’ALL’. And these are the 4 options for the ADB type:

– ATP for Autonomous Transaction Processing (Serverless)
– ADW for Autonomous Data Warehouse (Serverless)
– ATPD for Autonomous Transaction Processing (Dedicated)
– ADWD for Autonomous Data Warehouse (Dedicated)

Do not try to run the advisor as SYSDBA, or you will get all sorts of errors (I found out the hard way), something like:

ORA-06598: insufficient INHERIT PRIVILEGES privilege
PLS-00201: identifier ‘ADB_ADVISOR.REPORT’ must be declared
ORA-00942: table or view does not exist

If the package compiles with a loop index error and ORA-00942, just run GRANT SELECT ON DBA_XML_TABLES TO ADB_ADVISOR; and recompile.

The Advisor will generate a report with the following information:

– The counts of discovered objects and a summary of migration status
– Objects that cannot be migrated due to the restrictions and lockdowns imposed by the Autonomous Database on certain data types, database options and SQL statements
– The objects that will migrate with modifications that are automatically made during the import process or upon the execution of object creation DDL
– Informational section containing certain best practice recommendations and guidance

Here is the output after I ran ADB_ADVISOR.REPORT:

 
SET SERVEROUTPUT ON FORMAT WRAPPED
SET LINES 200
exec ADB_ADVISOR.REPORT(schemas=>'ALL', adb_type=>'ATPD');
PL/SQL procedure successfully completed.

==========================================================================================
== ATPD SCHEMA MIGRATION REPORT FOR PDBADMIN,pdbuser,JULIAN
==========================================================================================

ADB Advisor Version   : 19.3.0.0.2
Instance Name         : DB0223
Database Name         : DB0223
Host Name             : julian
Database Version      : 20.0.0.0.0
Pluggable Database    : NOVOPDB1
Schemas Analyzed      : PDBADMIN,PDBUSER,JULIAN
Analyzing for         : Autonomous Transaction Processing (Dedicated)
Report Start date/time: 18-JUN-2020 07:47

------------------------------------------------------------------------------------------
-- SECTION 1: SUMMARY
------------------------------------------------------------------------------------------

                                           Objects         Objects         Total         
                           Object          Will Not        Will Migrate    Objects       
Object Type                Count           Migrate         With Changes    Will Migrate  
-------------------------  --------------  --------------  --------------  --------------
CONSTRAINT                 8               0               0               8             
INDEX                      4               0               0               4             
INDEX PARTITION            4               0               0               4             
TABLE                      8               0               2               8             
TABLE PARTITION            4               0               0               4             
User Objects in SYS        87              87              0               0             
User Objects in SYSTEM     0               0               0               0             

------------------------------------------------------------------------------------------
-- SECTION 2: FOLLOWING OBJECTS WILL NOT MIGRATE
------------------------------------------------------------------------------------------

1) User-defined objects in SYS schema will not migrate (Count=87):
------------------------------------------------------------------
Note: User-defined objects were detected in SYS schema. Consider moving them out of SYS prior to migration.

Owner      Object Type                    Object Name                             
---------- ------------------------------ ----------------------------------------
SYS        INDEX                          SYS_C008044                             
SYS        TABLE                          AQ_SRVNTFN_TABLE_1                   
....     
SYS        EVALUATION CONTEXT             AQ$_KUPC$DATAPUMP_QUETAB_1_V            
...        
SYS        MATERIALIZED ZONEMAP           RDBMS_ZMAP                              
SYS        JOB                            ORA$_ATSK_AUTOZM                        
SYS        TABLE                          WRI$_ADV_OBJSPACE_TREND_DATA            
SYS        TABLE                          WRI$_ADV_OBJSPACE_CHROW_DATA            
SYS        TABLE                          WRI$_ADV_SEGADV_SEGROW                  

------------------------------------------------------------------------------------------
-- SECTION 3: FOLLOWING OBJECTS WILL MIGRATE WITH CHANGES
------------------------------------------------------------------------------------------

1) Index Organized table will be created as regular table (Count=1):
--------------------------------------------------------------------
Note: Index Organized tables are disallowed in ADB. When you create 
an IOT in ADB, the table gets created as non-IOT (regular table). 
When the Data Pump export file contains tables with IOT, use 
'dwcs_cvt_iots:y' transformation at import time to transform IOTs 
as regular tables.

JULIAN.PTIOT1                    

2) INMEMORY Tables will be created as NO INMEMORY Tables (Count=1):
-------------------------------------------------------------------
Note: Database In-Memory is not enabled in ADB. All In-Memory 
tables and partitions will be created with NO INMEMORY clause.

JULIAN.SALES                     

------------------------------------------------------------------------------------------
-- SECTION 4: MIGRATION ADDITIONAL INFO
------------------------------------------------------------------------------------------

1) Database Parameters are detected as modified in the current database but can't be modified in the ADB (Count=13):
--------------------------------------------------------------------------------------------------------------------
Note: The following init parameters are modified in your database 
that you would not be able to modify in ADB. Please refer to the 
Oracle Autonomous Database documentation on the parameters that 
you are allowed to modify.

_exadata_feature_on                                                             
_gc_policy_time                                                                 
_gc_undo_affinity                                                               
_zonemap_auto_processing                                                        
clonedb                                                                         
control_management_pack_access                                                  
db_block_checksum                                                               
enable_ddl_logging                                                              
encrypt_new_tablespaces                                                         
local_listener                                                                  
tablespace_encryption_default_algorithm                                         
tde_configuration                                                               
use_large_pages                                                                 

------------------------------------------------------------------------------------------
-- END OF REPORT
------------------------------------------------------------------------------------------
Report End Datetime   : 18-JUN-2020 07:47
Report Runtime        : +000000000 03:00:02.125000000
------------------------------------------------------------------------------------------

So, based on the report above, I am facing the following 4 issues:

– I have user-defined objects in the SYS schema. They will not be migrated.
– My IOT table cannot be migrated as-is. It will be migrated as a normal, regular table.
– My INMEMORY table cannot be migrated as-is. It will be created as a NO INMEMORY table.
– I cannot use some of my init.ora parameters in ADB-D.

My recommendation is to run the advisor before you migrate your schemas to the Autonomous Cloud. It will minimize your post-migration hassle.

A few additional comments about the advisor:

By default, the output gets truncated when the number of rows exceeds the maximum limit set in the Advisor package. You can reset the number of rows by running the following command prior to running the Advisor.

 
SQL> exec ADB_ADVISOR.setmaxrows(500);

You have to reset the max rows every time you run the Advisor, as the setting is not saved in the database.

If you want to query the data dictionary for the output, you may try:

 
SELECT a.owner, a.object_type, a.object_name, c.* 
FROM adb_advisor_objects_tmp a, 
adb_advisor_rejects_tmp b, 
adb_advisor_codes_tmp c 
WHERE a.id = b.id AND b.cd = c.cd;

Final note: you can run the advisor on any database version from 10g to 20c!

Six new features of Data Pump in Oracle Database 20c

In Cloud, Data, Databases, DBA, New features, Oracle database, Oracle utilities on June 8, 2020 at 10:55

In Oracle version 20c, the Data Pump export job accepts 54 different parameters and the import job can have up to 59 different parameters. Most of us who used the old-fashioned traditional export/import tools until Oracle 10g (and even afterwards) probably knew the exp/imp syntax by heart and never needed to copy-paste the command from a text file. With more than 50 parameters in 20c, I am sure it is a different story.

There are 6 new features of Data Pump in Oracle 20c and here are their short descriptions:

1. Oracle Data Pump checksums support for cloud migrations

The new parameter CHECKSUM can be used for validation, as a checksum is now added to the dump file. Oracle Data Pump can be, and mostly is, used for migrating data from on-premises Oracle Databases into the Oracle Public Cloud. You can use the checksum to help confirm that the file is valid after a transfer to or from the Oracle Cloud object store. Checksums are also useful after saving dump files to on-premises locations, for confirming that the dump files have not suffered any accidental or malicious changes.

Here is an example of how to use the new CHECKSUM parameter – I am exporting my own schema. The parameter accepts 2 values:

YES – Oracle calculates a file checksum for each dump file in the export dump file set
NO – Oracle does not calculate file checksums

The checksum is calculated at the end of the data pump job. Notice that I am not specifying below the other new parameter CHECKSUM_ALGORITHM, thus using its default value SHA256. The other optional secure hash algorithms are: CRC32, SHA384 and SHA512.

 
SQL> !expdp julian DIRECTORY=data_pump_dir DUMPFILE=jmd.dmp CHECKSUM=YES

Export: Release 20.0.0.0.0 - Production on Sat May 30 07:20:55 2020
Version 20.2.0.0.0

Copyright (c) 1982, 2020, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 20c EE High Perf Release 20.0.0.0.0 - Production
Starting "JULIAN"."SYS_EXPORT_SCHEMA_01":  julian/********@//localhost:1521/novopdb1.laika7.laika.oraclevcn.com DIRECTORY=data_pump_dir DUMPFILE=jmd.dmp CHECKSUM=YES
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_INDEX/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PASSWORD_HISTORY
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
. . exported "JULIAN"."BLOGS"                            9.983 MB   73991 rows
. . exported "JULIAN"."SALES"                            14.38 MB  295240 rows
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . exported "JULIAN"."RDBMS_BRANDS"                     7.726 KB      12 rows
. . exported "JULIAN"."CLIENTS"                          6.007 KB       2 rows
. . exported "JULIAN"."T"                                5.476 KB       1 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set.
Master table "JULIAN"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
Generating checksums for dump file set
******************************************************************************
Dump file set for JULIAN.SYS_EXPORT_SCHEMA_01 is:
  /u01/app/oracle/admin/ORCL/dpdump/9D45645C541E0B7FE0530206F40AE9E9/jmd.dmp
Job "JULIAN"."SYS_EXPORT_SCHEMA_01" successfully completed at Sat May 30 07:24:59 2020 elapsed 0 00:04:01

You probably noticed the ORA-39173 warning at the end – it is because I did not specify an encryption password while exporting encrypted data. It is just a warning and not a real error.

It goes without saying that COMPATIBLE must be set to at least 20.0.
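The dump files can also be validated on the import side. If I remember the documentation correctly, the new impdp parameters VERIFY_ONLY (validate the dump file set without importing anything) and VERIFY_CHECKSUM (recalculate and compare the checksums before importing) are meant exactly for that, so treat the sketch below as something to verify against your release:

impdp julian DIRECTORY=data_pump_dir DUMPFILE=jmd.dmp VERIFY_ONLY=YES
impdp julian DIRECTORY=data_pump_dir DUMPFILE=jmd.dmp VERIFY_CHECKSUM=YES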

2. Oracle Data Pump exports from Autonomous Database

Starting with Oracle Database 20c, Data Pump can perform exports from Oracle Autonomous Database into dump files in a cloud object store. Thus, we can now easily migrate data out of an Oracle Autonomous Database and import it into another location.

For how to create object store credentials check either Oracle Cloud : Autonomous Database (ADW or ATP) – Load Data from an Object Store (DBMS_CLOUD) by Tim Hall or DBMS_CLOUD Package – A Reference Guide by Christian Antognini.
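For completeness, here is a minimal sketch of creating such a credential with DBMS_CLOUD.CREATE_CREDENTIAL; the cloud user and auth token below are placeholders:

BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'JMD_OBJ_STORE_CRED',
    username        => 'oci_user@example.com',  -- cloud account user (placeholder)
    password        => 'my-auth-token');        -- auth token, not the console password (placeholder)
END;
/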

New in 20c is the CREDENTIAL parameter, which enables the export to write dump files directly to object stores. The CREDENTIAL parameter changes how expdp interprets the text string in DUMPFILE. If the CREDENTIAL parameter is not specified, then the DUMPFILE parameter can specify an optional directory object and file name in directory-object-name:file-name format. If the CREDENTIAL parameter is used, then it provides authentication and authorization for expdp to write to one or more object storage URIs specified by DUMPFILE.

Here is an example assuming that we have already created the credential_name JMD_OBJ_STORE_CRED:

 
expdp julian DUMPFILE=https://swiftobjectstorage.eu-frankfurt-1.oraclecloud.com/v1/juliandon/data_pump/jmd.dmp 
CREDENTIAL=jmd_obj_store_cred

Need more information? The Oracle Cloud Infrastructure User Guide has “only” 5952 pages. Last week, it had 5919. Enjoy reading it 🙂 By the time you read this blog post, it will probably have grown past 5952 pages… I do not think I have ever seen a longer user guide in our industry!

3. Oracle Data Pump includes and excludes in the same operation

Oracle Data Pump 20c can include and exclude objects in the same export or import operation, meaning that commands can now contain both the INCLUDE and EXCLUDE parameters. By enabling greater specificity about what is being migrated, this enhancement makes it easier to migrate to Oracle Cloud, or to another on-premises Oracle Database.

Note: when you include both parameters in a command, Oracle Data Pump processes the INCLUDE parameter first and includes all objects identified by it. Then it processes the EXCLUDE parameter, eliminating the excluded objects from the included set. Here is an example including only 2 tables (SALES and CLIENTS) but excluding all indexes except the PKs (a real use case: you want to enable Oracle Auto Indexing in ADB, so when moving the data you bring over only the PK indexes):

 
expdp julian SCHEMAS=JULIAN DUMPFILE=julian.dmp REUSE_DUMPFILES=YES 
INCLUDE=TABLE:\"IN \(\'CLIENTS\',\'SALES\'\)\" 
EXCLUDE=INDEX:\"LIKE \'IDX\%\'\"
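The backslash escaping above is only needed on the command line. A convenient alternative (not 20c-specific) is to put the filters into a parameter file, say include_exclude.par, where no operating system escaping is required:

SCHEMAS=JULIAN
DUMPFILE=julian.dmp
REUSE_DUMPFILES=YES
INCLUDE=TABLE:"IN ('CLIENTS','SALES')"
EXCLUDE=INDEX:"LIKE 'IDX%'"

and then simply run: expdp julian PARFILE=include_exclude.par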

4. Oracle Data Pump parallelizes transportable tablespace metadata operations

Starting with Oracle Database 20c, Data Pump improves transportable tablespace metadata operations with parallelism: transportable tablespace exports and imports can now be run with a degree of parallelism greater than 1.

Parallelism higher than 1 improves TTS export and import performance, especially when there are really a lot of database objects in the data files, including tables, indexes, partitions and subpartitions. The real benefit will probably be seen when exporting packaged application schemas such as SAP or EBS.
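A minimal sketch of what such an export could look like (the tablespace name, dump file template and degree of parallelism are placeholders, and the tablespace must of course be made read-only first):

expdp system DIRECTORY=data_pump_dir LOGFILE=tts.log DUMPFILE=tts%U.dmp
TRANSPORT_TABLESPACES=sales_ts TRANSPORT_FULL_CHECK=YES PARALLEL=4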

5. Oracle Data Pump provides optional index compression

In Oracle Database 20c, Data Pump supports optional index compression on import, including into Autonomous Database, by introducing a new TRANSFORM parameter clause, INDEX_COMPRESSION_CLAUSE. Thus, you can control whether index compression is performed during import.

If NONE is specified in the INDEX_COMPRESSION_CLAUSE, then the index compression clause is omitted and the index is given the default compression for the tablespace. Otherwise, indexes are created with the specified compression; if you do use compression, Oracle recommends COMPRESS ADVANCED LOW.

If the index compression clause is more than one word, then it must be contained in single or double quotation marks. Also, your operating system can require you to enclose the clause in escape characters, such as the backslash character. Here is an example of how to use the INDEX_COMPRESSION_CLAUSE:

 
TRANSFORM=INDEX_COMPRESSION_CLAUSE:\"COMPRESS ADVANCED LOW\"

Specifying this transform changes the type of compression for all indexes in the job.
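Put into a full, hypothetical import command (reusing the directory and dump file names from above), it could look like this:

impdp julian DIRECTORY=data_pump_dir DUMPFILE=jmd.dmp
TRANSFORM=INDEX_COMPRESSION_CLAUSE:\"COMPRESS ADVANCED LOW\"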

6. Oracle Data Pump resumes transportable tablespace jobs

Starting with Oracle Database 20c, Data Pump can resume transportable tablespace export and import jobs that were stopped due to errors or other problems. Oracle Data Pump's ability to resume these stopped jobs saves time and makes the system more available.

Transportable jobs are now restartable at or near the point of failure.

To restart the job JMD_EXP_20C, first perform:

expdp system/password attach=jmd_exp_20c

Then restart the job with:

Export> continue_client

For an import job, the syntax is the same, just with impdp and the Import> prompt.

On Kafka, JSON, PL/SQL and Advanced Queuing in Oracle Database 20c

In Autonomous, Cloud, Data, DBA, New features, PL/SQL on June 1, 2020 at 05:50

Oracle Corp. starts a new fiscal year today (June 1st, 2020), as the Oracle Database slowly moves towards version 20c (still in preview mode only), which comes with almost 500 init.ora parameters and 5326 hidden/underscore parameters. There is a new one for 20c called kafka_config_file. But let us first take a step back and see its connection with Advanced Queuing.


Advanced Queuing is available in all editions of Oracle database, including XE. Since Oracle Advanced Queuing is implemented in database tables, all the operational benefits of high availability, scalability, and reliability are applicable to queue data.

Advanced Queuing can be accessed through several interfaces: PL/SQL, Visual Basic, Java, Java Message Service, JDBC, ODP.NET, OCI (do not read Oracle Cloud Infrastructure – think of the C language), etc.

Using PL/SQL to access Oracle Advanced Queuing is probably the most common method: using the PL/SQL packages DBMS_AQADM and DBMS_AQ.

Reading all that, one might think of Kafka. Kafka is a distributed, partitioned, replicated commit log service providing the functionality of a messaging system, but with a unique design. The aim here is not to position Kafka and Oracle AQ against each other, but to show what is new in Oracle Database 20c that brings Kafka and Oracle together, and what is important from a DBA point of view.

Todd Sharp recently explained how to use Kafka Connect with Oracle Streaming Service and Autonomous Database. Let us now see what is new in Oracle Database 20c:

Oracle Database 20c introduces Transactional Event Queues (TEQ), which are partitioned message queues that combine the best of messaging, streaming, direct messages, and publish/subscribe. TEQ operates at scale in the Oracle database: it provides transactional event streaming and runs in the database at a scale of tens to hundreds of billions of messages per day on 2-node to 8-node Oracle RAC databases, both on-premises and in the cloud. TEQ has Kafka client compatibility, which means that Kafka producers and consumers can use TEQ in the Oracle database instead of a Kafka broker. Check Advanced Queuing in 20c for more details.
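From the DBA side, creating and starting such a queue is done with DBMS_AQADM. Note that the procedure name has changed between releases (CREATE_SHARDED_QUEUE earlier, CREATE_TRANSACTIONAL_EVENT_QUEUE later), so the sketch below, with a made-up queue name, should be checked against your exact release:

BEGIN
  DBMS_AQADM.CREATE_TRANSACTIONAL_EVENT_QUEUE(
    queue_name         => 'jmd_teq',   -- placeholder queue name
    multiple_consumers => TRUE);       -- pub/sub style with multiple subscribers
  DBMS_AQADM.START_QUEUE(queue_name => 'jmd_teq');
END;
/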

1. Advanced Queuing: Kafka Java Client for Transactional Event Queues

Kafka Java Client for Transactional Event Queues (TEQ) enables Kafka application compatibility with Oracle database. This provides easy migration of Kafka applications to TEQ.

You do not have to manage a separate Kafka infrastructure, and this new feature simplifies event-driven application architectures with an Oracle converged database that now includes events data. Starting from Oracle Database 20c, Kafka Java APIs can connect to the Oracle database server and use Transactional Event Queues (TEQ) as a messaging platform. Developers can migrate an existing Java application that uses Kafka to the Oracle database. A client-side library allows Kafka applications to connect to the Oracle database instead of a Kafka cluster and use the TEQ messaging platform transparently.

Two levels of configuration are required to migrate a Kafka application to the TEQ messaging platform:

– Database level configuration
– Application level configuration

The Kafka application needs to set certain properties which allow the OKafka library to locate the Oracle Database. This is analogous to how a Kafka application provides ZooKeeper information. These connection properties can be set in the following two ways:

– using database user and password provided in plain text
– using JDBC wallet

The following are the prerequisites for configuring and running the Kafka Java client for TEQ in an Oracle Database. Create a database user and grant it the following privileges:

grant connect, resource to user;
grant execute on dbms_aq to user;
grant execute on dbms_aqadm to user;
grant execute on dbms_aqin to user;
grant execute on dbms_aqjms to user;
grant select_catalog_role to user;

Next, set the correct database init.ora parameter to use TEQ:

streams_pool_size=512M
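If an spfile is in use, this can be changed dynamically, for example:

ALTER SYSTEM SET streams_pool_size=512M SCOPE=BOTH;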

Set the local listener too:

LOCAL_LISTENER= (ADDRESS=(PROTOCOL=TCP)(HOST= )(PORT=))

2. Advanced Queuing Support for JSON Data Type and PL/SQL

Oracle Database Advanced Queuing now supports the JSON data type.

Many client applications and microservices which use Advanced Queuing for messaging perform better if they use the JSON data type to handle JavaScript Object Notation (JSON) messages.

In this aspect, PUBLIC is granted EXECUTE privilege on all these types:

AQ$_AGENT, AQ$_AGENT_LIST_T, AQ$_DESCRIPTOR, AQ$_NTFN_DESCRIPTOR, AQ$_NTFN_MSGID_ARRAY, AQ$_POST_INFO, AQ$_POST_INFO_LIST, AQ$_PURGE_OPTIONS_T, AQ$_RECIPIENT_LIST_T,
AQ$_REG_INFO, AQ$_REG_INFO_LIST, AQ$_SUBSCRIBER_LIST_T, DEQUEUE_OPTIONS_T, ENQUEUE_OPTIONS_T, QUEUE_PROPS_T, SEEK_INPUT_T, SEEK_OUTPUT_T, SYS.MSG_PROP_T, MESSAGE_PROPERTIES_T, MESSAGE_PROPERTIES_ARRAY_T, MSGID_ARRAY_T
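To make the JSON support a bit more concrete, here is a minimal PL/SQL sketch; the queue names and the payload are made up for illustration, and the JSON payload type support should be verified against your release:

BEGIN
  DBMS_AQADM.CREATE_QUEUE_TABLE(
    queue_table        => 'json_qt',   -- placeholder queue table name
    queue_payload_type => 'JSON');     -- the new JSON data type as payload
  DBMS_AQADM.CREATE_QUEUE(
    queue_name  => 'json_q',
    queue_table => 'json_qt');
  DBMS_AQADM.START_QUEUE(queue_name => 'json_q');
END;
/

DECLARE
  l_enqueue_options  DBMS_AQ.ENQUEUE_OPTIONS_T;
  l_message_props    DBMS_AQ.MESSAGE_PROPERTIES_T;
  l_msgid            RAW(16);
  l_payload          JSON;
BEGIN
  l_payload := JSON('{"event":"new_blog_post","id":42}');
  DBMS_AQ.ENQUEUE(
    queue_name         => 'json_q',
    enqueue_options    => l_enqueue_options,
    message_properties => l_message_props,
    payload            => l_payload,
    msgid              => l_msgid);
  COMMIT;
END;
/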

Regarding the new features of PL/SQL in 20c, check first Steven Feuerstein's interesting example on the extension of loop iterators.

Check the New Features in Release 20c for Oracle Database PL/SQL for more details on PL/SQL extended iterators, PL/SQL qualified expressions enhancements, SQL macros, the new JSON data type and the new pragma SUPPRESSES_WARNING_6009.

3. Advanced Queuing: PL/SQL Enqueue and Dequeue Support

The following features are new in this release:

– Kafka Java Client for Transactional Event Queues (TEQ), which enables Kafka application compatibility with Oracle Database, thus providing easy migration of Kafka applications to TEQ
– PL/SQL Enqueue and Dequeue Support for JMS Payload and non-JMS Payload in Transactional Event Queues
– Transactional Event Queues for Performance and Scalability
– Simplified Metadata and Schema in Transactional Event Queues
– Support for Message Retention and Seekable Subscribers
– Advanced Queuing Support for JSON Data Type

For all the details, check the Changes in Oracle Database Advanced Queuing Release 20c.

In terms of performance and scalability, Oracle Transactional Event Queues have their queue tables partitioned in 20c into multiple event streams, which are distributed across multiple RAC nodes for high-throughput messaging and streaming of events.

Remember that in 10.1, AQ was integrated into Oracle Streams and thus Oracle AQ was called “Oracle Streams AQ”. But in 12.1, Oracle Streams got deprecated and AQ was again named just “Oracle AQ”.

And finally: here is the 546-page-long Transactional Event Queues and Advanced Queuing User’s Guide, along with a few good additional articles:

Oracle + Kafka = Better Architecture by Jonathan Wallace
Streaming data from Oracle into Kafka by Robin Moffatt
Extending Oracle Streaming with Kafka Compatibility by Somnath Lahiri