Dontcheff

Archive for the ‘Databases’ Category

DBAs are a wealth of knowledge!

In Autonomous, Cloud, Databases, DBA on October 19, 2020 at 10:53

Fifteen years ago, in 2005, I remember reading the following speculation about the DBA profession: Oracle 2020 – a Glimpse Into the Future of Database Management. I was curious what would be true 15 years later. Just have a look at some of the predictions:

– 2018: Oracle 14m provides inter-instance sharing of RAM resources. All Oracle instances become self-managing.
– 2019: The first 128-bit processors are introduced.
– 2020: Oracle 16ss introduces solid-state, non-disk database management.

Looking even further we read: “Changing Role of the Oracle DBA in 2020: But the sad reality of server consolidation was that thousands of mediocre Oracle DBAs lost their jobs to this trend. The best DBAs continued to find work, but DBAs who were used for the repetitive tasks of installing upgrades on hundreds of small servers were displaced.” And further: “Inside Oracle 2020: The world of Oracle management is totally different today than it was back in 2004. We no longer have to worry about applying patches to Oracle software, all tuning is fully automated and hundreds of Oracle instances all reside within a single company-wide server.” ADB rings a bell?

Four years ago, another forecast article called The 2020 DBA: A Look Into the Future appeared on dzone.com, predicting that despite the evolving role of the DBA, DevOps has actually made database administrators more relevant than ever before.

True, as the DBA profession is still among the Top 10 jobs in Technology. DBA is #4 in Best Technology Jobs, #15 in Best STEM Jobs and #30 in 100 Best Jobs.

Now, going back to the title of this blog post. Jeff Smith’s, Russ Lowenthal’s and Chris Saxon’s Oracle DBA 2020 Data Masterclasses are something I recommend now to every DBA. Jeff raised an important topic but let us take a step back.

Last month (September 2020), FlashDBA discussed the Evolution of the DBA from 1.0 through 2.0 until 3.0:

DBA 1.0: The (Good) Old Days – clearly the old days are over. Whether they were good or bad is a memory-lane discussion.

DBA 2.0: The IT Generalist – I remember when I saw this paper for the first time about 10-11 years ago: Oracle DBA 2.0. ASM, Direct NFS, Clusterware, VMware, Flash, Linux. The DBA had to learn OS, Storage and Network administration. With Exadata, I even heard the term DBA 2.1.

DBA 3.0: The Cloud DevOps DBA – new game, rather new set! “A DBA building a database in the public cloud is making decisions which have a direct effect on the (quite possibly massive) monthly bill from AWS/Azure/GCP/OCI”.

Checking the Database Management Predictions from 2019, we can see that we are now close to:

DBA 4.0: Autonomous DBA – more attention on data management, data security, data architecture, machine learning, DevOps:

“Oracle Autonomous Database can quickly provision, resize, and relocate databases with little human interaction. However, as more database provisioning tasks are automated, DBAs will still need to classify the data.” – Michelle Malcher

“CEOs will force DBAs to step into more-important roles—such as data architects, data managers, and chief data officers—as a company’s data and machine learning algorithms become important drivers of the stock price.” — Rich Niemiec

“Oracle Enterprise Manager Cloud and other third-party tools, in conjunction with Oracle Multitenant will make managing large numbers of databases easier. DBAs will be able to manage 10 times or more databases after consolidation with Oracle Multitenant.” — Anuj Mohan

“DBAs need to understand that there is a true sea change afoot, and there’s no way to stop these market forces. Hopefully, we’ll all be able to embrace this tidal wave and avoid being caught up in the undertow.” — Jim Czuprynski

I believe the adoption of a hybrid infrastructure is inevitable, and the new skills needed (such as cloud set-up, configuration, and monitoring) are mostly cloud-related. DBAs will need to be able to assess the databases and define what they are best suited for. Replication of data and databases will become more complex in hybrid environments—especially when different clouds are involved.

That is coming in a few years at most (when most companies will have adopted multi-cloud); call the profession DBA 5.0 if you prefer, or just DBA. Every decade has its challenges for IT professionals. Challenges are becoming more complex and DBAs are often on the front line with every new fashionable IT concept. And we can quote Larry Ellison, who said that the computer industry is the only industry that is more fashion-driven than women’s fashion.

Finally, here are a few additional recent articles on the future of the DBA profession (if you are interested in reading more on the topic):

My Three Beliefs About The Future Of The DBA Job
The Future for the DBA
Will automated databases kill the DBA position?
What Does the Future Hold for DBAs?
For DBAs In 2020: Understand Your Worth, Seize The Moment
What Happens to DBAs When We Move to the Cloud?
The Future of The DBA in The Era of The Autonomous Database

MySQL DB System on Oracle Cloud Infrastructure

In Cloud, Databases, DBA, IaaS, MySQL, New features on September 1, 2020 at 08:37

MySQL has long been second only to Oracle in the DB-Engines rankings. Moreover, MySQL was their 2019 Database of the Year!

Now, MySQL is also available as DB System on Oracle Cloud Infrastructure. The database version is 8.0.21 with InnoDB Storage Engine. Here is how the set-up works:

1. Create a MySQL DB System:

2. Note that there are different shutdown types:

3. Connect to a MySQL DB system:

It is not possible to connect directly from a remote IP to the MySQL DB System endpoint. You must connect to a Compute Instance, and from the Compute Instance to the MySQL DB System.

We first install MySQL Shell on the Compute instance:

 
[opc@julian ~]$ sudo yum install https://dev.mysql.com/get/mysql80-community-release-el7-3.noarch.rpm
Loaded plugins: ulninfo, versionlock
mysql80-community-release-el7-3.noarch.rpm               |  25 kB     00:00
Examining /var/tmp/yum-root-X8dAdb/mysql80-community-release-el7-3.noarch.rpm: mysql80-community-release-el7-3.noarch
Marking /var/tmp/yum-root-X8dAdb/mysql80-community-release-el7-3.noarch.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package mysql80-community-release.noarch 0:el7-3 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package             Arch   Version
                                  Repository                               Size
================================================================================
Installing:
 mysql80-community-release
                     noarch el7-3 /mysql80-community-release-el7-3.noarch  31 k

Transaction Summary
================================================================================
Install  1 Package

Total size: 31 k
Installed size: 31 k
Is this ok [y/d/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Warning: RPMDB altered outside of yum.
  Installing : mysql80-community-release-el7-3.noarch                       1/1
  Verifying  : mysql80-community-release-el7-3.noarch                       1/1

Installed:
  mysql80-community-release.noarch 0:el7-3

Complete!
[opc@julian ~]$

===

[opc@julian ~]$ sudo yum install mysql-shell
Loaded plugins: ulninfo, versionlock
mysql-connectors-community                               | 2.5 kB     00:00
mysql-tools-community                                    | 2.5 kB     00:00
mysql80-community                                        | 2.5 kB     00:00
ol7_UEKR5                                                | 2.5 kB     00:00
ol7_latest                                               | 2.7 kB     00:00
(1/3): mysql80-community/x86_64/primary_db                 | 115 kB   00:00
(2/3): mysql-connectors-community/x86_64/primary_db        |  62 kB   00:00
(3/3): mysql-tools-community/x86_64/primary_db             |  76 kB   00:00
(1/5): ol7_UEKR5/x86_64/updateinfo                         |  72 kB   00:00
(2/5): ol7_latest/x86_64/group                             | 660 kB   00:00
(3/5): ol7_latest/x86_64/updateinfo                        | 2.9 MB   00:00
(4/5): ol7_UEKR5/x86_64/primary_db                         |  12 MB   00:00
(5/5): ol7_latest/x86_64/primary_db                        |  36 MB   00:00
Excluding 238 updates due to versionlock (use "yum versionlock status" to show them)
Resolving Dependencies
--> Running transaction check
---> Package mysql-shell.x86_64 0:8.0.21-1.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package         Arch       Version             Repository                 Size
================================================================================
Installing:
 mysql-shell     x86_64     8.0.21-1.el7        mysql-tools-community      31 M

Transaction Summary
================================================================================
Install  1 Package

Total download size: 31 M
Installed size: 106 M
Is this ok [y/d/N]: y
Downloading packages:
warning: /var/cache/yum/x86_64/7Server/mysql-tools-community/packages/mysql-shell-8.0.21-1.el7.x86_64.rpm: Header V3 DSA/SHA1 Signature, key ID 5072e1f5: NOKEY
Public key for mysql-shell-8.0.21-1.el7.x86_64.rpm is not installed
mysql-shell-8.0.21-1.el7.x86_64.rpm                        |  31 MB   00:02
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-mysql
Importing GPG key 0x5072E1F5:
 Userid     : "MySQL Release Engineering "
 Fingerprint: a4a9 4068 76fc bd3c 4567 70c8 8c71 8d3b 5072 e1f5
 Package    : mysql80-community-release-el7-3.noarch (@/mysql80-community-release-el7-3.noarch)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-mysql
Is this ok [y/N]: y
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : mysql-shell-8.0.21-1.el7.x86_64                              1/1
  Verifying  : mysql-shell-8.0.21-1.el7.x86_64                              1/1

Installed:
  mysql-shell.x86_64 0:8.0.21-1.el7

Complete!
[opc@julian ~]$

Then connect to your DB System using MySQL Shell:

 
mysqlsh ICO@10.0.0.3 

[opc@julian ~]$ mysqlsh ICO@10.0.0.3
Please provide the password for 'ICO@10.0.0.3': ************
MySQL Shell 8.0.21

Copyright (c) 2016, 2020, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.
Creating a session to 'ICO@10.0.0.3'
[opc@julian ~]$

My suggestion is to start from the MySQL Database Service Overview.

MySQL database is also available from GCP, Azure and AWS but check the pricing:

Do not just Simplify with Databases: Automate, Innovate and Innovate!

In Autonomous, Database tuning, Databases, DB2 database, DBA, PostgreSQL, SQL Server on June 15, 2020 at 09:46

“Automation applied to an inefficient operation will magnify the inefficiency” – Bill Gates
“Innovation distinguishes between a leader and a follower!” – Steve Jobs

In the database industry, simplification, automation and innovation have sort of become buzzwords – we hear them more often in meetings and see them in PowerPoint presentations than in real-world implementations. Databases are being patched, upgraded and migrated, but how often are automation and innovation part of the process?

A recent article entitled “After The Pandemic, Don’t Simply Automate. Innovate” quoted an Accenture report that surveyed 1,500 C-suite executives in 16 industries: 76% of respondents said they were struggling to scale the technology across their businesses. The numbers tell the story: a full 84% of C-suite executives believe they must leverage artificial intelligence (AI) to achieve their growth objectives.

A survey of database managers and administrators shows the benefits they expect from automation, plus what they think about DBAs’ current workloads. Here are the highlights:

– 63% expect faster innovation from database automation
– 62% of the data pros expect that data will grow 25% or more annually at their enterprise over the next three years
– 66% said the DBA/data team’s overwork or scheduling issues are the biggest challenge in database deployments
– The majority of companies have not made significant automation gains in any key data management processes
– 69% of the DBAs think automation will make their job more business-centric.

So, why aren’t we seeing those innovative automations being implemented in full? Is it the DBA mindset, the business lagging behind the technology and the innovation, lack of time and resources, or lack of knowledge of and trust in the new?

We can try to look into the issue from the DBA perspective. Recently, Jeff Erickson listed the 3 Can’t Miss Ways to Turn Your DBA Skills into Gold. Here is my paraphrased version of these 3 ways:

1. Expand DBA skills towards app development – Python, PL/SQL, etc.
2. DBAs should get into the data science game – look more into Oracle Autonomous Database which comes with Oracle ML, a rich library of machine learning algorithms
3. Convert the “A” in DBA from Administrator to Architect – look into big data and data architecture – big data is the driver for innovation in databases

At the end of the article he quoted Kerry Osborne about dealing with vast and growing data volumes: “That added complexity and scale should sound like a huge opportunity knocking“.

An interesting article by Duncan Harvey entitled What is the innovation, automation dynamic? pointed out how autonomous systems are finally giving people the headspace they need to innovate at speed in the digital era.

So, is there a Database Automation Guide? Yes indeed!

Get a cloud boost with Oracle Autonomous!

Have a look at all the Automatic features that came after Oracle 9i. Oracle 7 and 8 were probably the first releases that added automatic functionality to the database, but the boost started with 9i. 20c is not skipping that part either.

Of all database brands, the Oracle Database is arguably the most advanced in terms of tools, automation features and innovation capabilities. Just from 11.2 until Oracle 20c, there are 133 new features purely focused on Automation. But I still see databases where even long-established automation features (such as Automatic SQL Tuning) have not yet been implemented.

Let us look into a few database brands:

SQL Server:

There are dozens of Automatic features and properties embedded into SQL Server:

Automatic Tuning can do a lot of things, such as automated performance tuning of databases, automated verification of performance gains, and automated rollback and self-correction. More importantly, it is the only database besides Oracle to have Automatic Indexing. It is worth checking how Automatic index management works in the Azure SQL database: Azure SQL Database analyzes your workload, identifies the queries that could be executed faster if you create an index, identifies indexes that are not used in a longer period of time, and identifies duplicated indexes in the database.

Db2:

IBM Db2 also has a relatively good list of Automatic features including self-tuning memory (single-partition databases only), Automatic storage, Automatic database backups, Automatic reorganization and Automatic statistics collection. The Db2 documentation claims that the Db2® autonomic computing environment is self-configuring, self-healing, self-optimizing, and self-protecting, but this is far behind what Oracle ADB has to offer. Tuning SQL is more difficult in Db2 than in Oracle, and the single reason for that is the tools and features offered by both systems.

Redshift:

Amazon’s Redshift has a good set of features. Looking at the automation it offers, we find quite a lot for a relatively new database brand (Redshift, that is, not PostgreSQL).

In terms of storage, it is mostly about how Redshift automatically takes care of data formatting and data movement into S3 and how with managed storage, capacity is added automatically to support workloads up to 8PB of compressed data.

There is of course Automated provisioning and Automated backups. Plus, Automatic workload management (WLM) uses machine learning to dynamically manage memory and concurrency, helping maximize query throughput.

Also, Amazon Redshift continuously monitors the health of the cluster, and automatically re-replicates data from failed drives and replaces nodes as necessary for fault tolerance.

Very much like Oracle RAC and Oracle Exadata, as of May 2020, Amazon Redshift now leverages Bloom filters to enable early and effective data filtering. Redshift automatically determines what queries are suitable for leveraging Bloom filters at query runtime.

Recently, Amazon Redshift also introduced ATS (= Automatic Table Sort), an automated alternative to Vacuum Sort. Automatic table sort complements Automatic Vacuum Delete and Automatic Analyze, and together these capabilities fully automate table maintenance. Automatic table sort is now enabled by default on Redshift tables where a sort key is specified.

Amazon Redshift automatically takes incremental snapshots (backups) of your data every 8 hours or 5 GB per node of data change. You now get more information and control over a snapshot including the ability to control the automatic snapshot’s schedule.

In terms of SQL tuning, Amazon Redshift now automatically and elastically scales query processing power to provide consistently fast performance for hundreds of concurrent queries. The database automatically shuts down Concurrency Scaling resources to save you cost. Also, Amazon Redshift now updates table statistics by running ANALYZE automatically but that is no news for Oracle database users. Amazon Redshift improves query performance by automatically moving read and write queries to the next matching queue without restarting the moved queries.

Snowflake:

Looking into the Top 10 cool things about Snowflake, we see that JSON documents are stored in a table and optimized automatically in the background for MPP and columnar access. There is Automatic Encryption of Data and Automatic Query Optimization. No Tuning! Really cool, as “It is all handled “auto-magically” via a dynamic query optimization engine in our cloud services layer. So, no indexes, no need to figure out partitions and partition keys, no need to pre-shard any data for even distribution, and no need to remember to update statistics.”

I like the “auto-magical” part 🙂

My interest was caught by the deep dive Revolutionary features of Snowflake that sets it apart — A Deep dive: all I found there on automation was “Automatic scale down”. No comment here.

PostgreSQL:

If we look into the PostgreSQL Feature Matrix, we see only one feature about automation: Automatic plan invalidation. There is a mention of WAL Buffer auto-tuning and Autovacuum.

For databases such as MySQL, PostgreSQL and MongoDB, check this article: Why is Database Automation Important?

A recent discussion Oracle vs. PostgreSQL basically tells it all – in case you have the patience to read it – here are both sides:

Pro-Oracle: “Comparing Postgres with Oracle is a bit like comparing a rubber duck you might buy your three year old, with a 300000 ton super tanker. Do they both float? Yeah, but that’s about the only similarity.”
Pro-PostgreSQL: “So bottom line, PostgreSQL beats Oracle by far in my opinion, at least as far as installing it and sizes are concerned.”

Looking into the Features and Benefits of EDB Postgres Cloud Database Service, we see Automated Notifications, Automated Monitoring, Automated Backups and Automated Replica Failover.

Oracle:

Of course, Oracle ADB (Autonomous Database) has all automation features embedded into the service, but not all applications are certified for or fit ADB. Not yet at least. Still, I think that most automation features can be easily implemented (into a non-ADB) with some prior testing, and the benefits are enormous. The list of automation features is long – here are the major ones:

Features in 19c and 20c which are worth implementing (at least looking into and testing) are:

– Automatic Indexing
– Automatic SQL Plan Management
– Automatic Index Optimization
– Automatic Zone Maps

Automation and innovation go hand in hand. Database innovation in an enterprise means using the database technology in new ways to create a more efficient organization and to improve alignment between technology and business: completing the same DBA work with smaller teams, implementing automation features in the database to reduce storage and labor costs, and increasing system uptime and performance.

Bottom line for DBAs: regardless of the database, there is room for innovation in every database brand!

Six new features of Data Pump in Oracle Database 20c

In Cloud, Data, Databases, DBA, New features, Oracle database, Oracle utilities on June 8, 2020 at 10:55

In Oracle version 20c, the Data Pump export job accepts 54 different parameters and the import jobs can have up to 59 different parameters. Most of us who used the old-fashioned traditional export/import tool until Oracle 10g (and even afterwards) probably knew the exp/imp syntax by heart and never needed to copy-paste the command from a text file. With more than 50 parameters in 20c, I am sure it is a different story.

There are 6 new features of Data Pump in Oracle 20c and here are their short descriptions:

1. Oracle Data Pump checksums support for cloud migrations

The new parameter CHECKSUM can be used for validation, as a checksum is now added to the dump file. Oracle Data Pump can be, and mostly is, used for migrating data from on-premises Oracle Databases into the Oracle Public Cloud. You can use the checksum to help confirm that the file is valid after a transfer to or from the Oracle Cloud object store. Checksums are also useful after saving dump files to on-prem locations for confirming that the dump files have no accidental or malicious changes.

Here is an example of how to use the new CHECKSUM parameter – I am exporting my own schema. The parameter accepts 2 values:

YES – Oracle calculates a file checksum for each dump file in the export dump file set
NO – Oracle does not calculate file checksums

The checksum is calculated at the end of the data pump job. Notice that I am not specifying below the other new parameter CHECKSUM_ALGORITHM, thus using its default value SHA256. The other optional secure hash algorithms are: CRC32, SHA384 and SHA512.

 
SQL> !expdp julian DIRECTORY=data_pump_dir DUMPFILE=jmd.dmp CHECKSUM=YES

Export: Release 20.0.0.0.0 - Production on Sat May 30 07:20:55 2020
Version 20.2.0.0.0

Copyright (c) 1982, 2020, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 20c EE High Perf Release 20.0.0.0.0 - Production
Starting "JULIAN"."SYS_EXPORT_SCHEMA_01":  julian/********@//localhost:1521/novopdb1.laika7.laika.oraclevcn.com DIRECTORY=data_pump_dir DUMPFILE=jmd.dmp CHECKSUM=YES
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_INDEX/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PASSWORD_HISTORY
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
. . exported "JULIAN"."BLOGS"                            9.983 MB   73991 rows
. . exported "JULIAN"."SALES"                            14.38 MB  295240 rows
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . exported "JULIAN"."RDBMS_BRANDS"                     7.726 KB      12 rows
. . exported "JULIAN"."CLIENTS"                          6.007 KB       2 rows
. . exported "JULIAN"."T"                                5.476 KB       1 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set.
Master table "JULIAN"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
Generating checksums for dump file set
******************************************************************************
Dump file set for JULIAN.SYS_EXPORT_SCHEMA_01 is:
  /u01/app/oracle/admin/ORCL/dpdump/9D45645C541E0B7FE0530206F40AE9E9/jmd.dmp
Job "JULIAN"."SYS_EXPORT_SCHEMA_01" successfully completed at Sat May 30 07:24:59 2020 elapsed 0 00:04:01

You probably noticed the ORA-39173 warning at the end – it is because I did not specify an encryption password while exporting encrypted data. It is just a warning and not a real error.

It goes without saying that COMPATIBLE must be set to at least 20.0.
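On the import side, the dump file checksum can be verified before any data is actually loaded. Here is a minimal sketch, assuming the import-side verification parameters documented for Data Pump (VERIFY_ONLY and VERIFY_CHECKSUM) are available in this release:

impdp julian DIRECTORY=data_pump_dir DUMPFILE=jmd.dmp VERIFY_ONLY=YES

VERIFY_ONLY=YES just validates the dump file set (including its checksums) without importing anything, while VERIFY_CHECKSUM=YES verifies the checksums as part of a normal import.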

2. Oracle Data Pump exports from Autonomous Database

Starting with Oracle Database 20c, Data Pump can perform exports from Oracle Autonomous Database into dump files in a cloud object store. Thus, now we can easily migrate data out from an Oracle Autonomous Database and import it into another location.

For how to create object store credentials check either Oracle Cloud : Autonomous Database (ADW or ATP) – Load Data from an Object Store (DBMS_CLOUD) by Tim Hall or DBMS_CLOUD Package – A Reference Guide by Christian Antognini.

New in 20c is the CREDENTIAL parameter, which enables the export to write dump files to object stores. The CREDENTIAL parameter changes how expdp interprets the text string in DUMPFILE. If the CREDENTIAL parameter is not specified, then the DUMPFILE parameter can specify an optional directory object and file name in directory-object-name:file-name format. If the CREDENTIAL parameter is used, then it provides authentication and authorization for expdp to write to one or more object storage URIs specified by DUMPFILE.

Here is an example assuming that we have already created the credential_name JMD_OBJ_STORE_CRED:

 
expdp julian DUMPFILE=https://swiftobjectstorage.eu-frankfurt-1.oraclecloud.com/v1/juliandon/data_pump/jmd.dmp 
CREDENTIAL=jmd_obj_store_cred

Need more information? The Oracle Cloud Infrastructure User Guide has “only” 5952 pages. Last week, it was 5919. Enjoy reading it 🙂 And by the time you read this blog post, there will be more than 5952 pages… I do not think I have ever seen a longer user guide in our industry!

3. Oracle Data Pump includes and excludes in the same operation

Oracle Data Pump 20c can include and exclude objects in the same export or import operation meaning that now, Oracle Data Pump commands can include both INCLUDE and EXCLUDE parameters in the same operation. By enabling greater specificity about what is being migrated, this enhancement makes it easier to migrate to Oracle Cloud, or to another on-premises Oracle Database.

Note: when you include both parameters in a command, Oracle Data Pump processes the INCLUDE parameter first, and includes all objects identified by the parameter. Then it processes the EXCLUDE parameters, eliminating the excluded objects from the included set. Here is an example of including only 2 tables (SALES and CLIENTS) but excluding all indexes except the PKs (real use case: you want to enable Oracle Auto Indexing in ADB and while importing the data you need to drop all indexes except the PKs):

 
expdp julian SCHEMAS=JULIAN DUMPFILE=julian.dmp REUSE_DUMPFILES=YES 
INCLUDE=TABLE:\"IN \(\'CLIENTS\',\'SALES\'\)\" 
EXCLUDE=INDEX:\"LIKE \'IDX\%\'\"

4. Oracle Data Pump parallelizes transportable tablespace metadata operations

Starting with Oracle Database 20c, Data Pump improves Transportable Tablespace metadata operations with parallelism: transportable tablespace exports can now be done with degrees of parallelism greater than 1.

Parallelism higher than 1 improves TTS export and import performance, especially when there are really a lot of database objects in the data files, including tables, indexes, partitions and subpartitions. We will probably see the real benefit when exporting packaged application schemas from SAP, EBS, etc.

5. Oracle Data Pump provides optional index compression

In Oracle Database 20c, Data Pump supports optional index compression on import including the Autonomous Database by introducing a new TRANSFORM parameter clause INDEX_COMPRESSION_CLAUSE. Thus, you can control whether index compression is performed during import.

If NONE is specified in the INDEX_COMPRESSION_CLAUSE, then the index compression clause is omitted (and the index is given the default compression for the tablespace). However, if you use compression, then Oracle recommends that you use COMPRESS ADVANCED LOW. Indexes are created with the specified compression.

If the index compression clause is more than one word, then it must be contained in single or double quotation marks. Also, your operating system can require you to enclose the clause in escape characters, such as the backslash character. Here is an example of how to use the INDEX_COMPRESSION_CLAUSE:

 
TRANSFORM=INDEX_COMPRESSION_CLAUSE:\"COMPRESS ADVANCED LOW\"

Specifying this transform changes the type of compression for all indexes in the job.

6. Oracle Data Pump resumes transportable tablespace jobs

Starting with Oracle Database 20c, Data Pump resumes transportable tablespace export and import jobs that are stopped due to errors or any other problems. Oracle Data Pump’s capacity to resume these stopped jobs helps us to save time and makes the system more available.

Transportable jobs are now restartable at or near the point of failure.

To restart the job JMD_EXP_20C, first perform:

expdp system/password attach=jmd_exp_20c

Then restart the job with:

Export> continue_client

For an import job, the syntax is exactly the same.

Automatic Zone Maps in the Oracle Database

In Data, Database tuning, Databases, DBA, New features, Oracle database on May 18, 2020 at 06:18

A zone is a set of contiguous data blocks on disk.

A zone map is an index-like structure built on a table and stores information about the zones of that table.

There are 2 major differences between indexes and zone maps:

– A zone map stores information per zone instead of per row which makes it much more compact than an index
– A zone map is not actively managed the way an index is kept in sync with the DML on the table

Zone maps are closer as a concept to Exadata’s storage indexes than to B-tree indexes.

Before going into how Automatic Zone Maps work in Oracle 20c, let me explain the concept with an example. Consider a small table containing basic information about some relational databases from db-engines.com (rank, score, initial and last release, cloud based):

The RDBMS_BRANDS segment has 6 data blocks with 2 rows per block:

Let us now create the zonemap on the RDBMS_BRANDS table (on 3 columns only):

 
SQL> CREATE MATERIALIZED ZONEMAP rdbms_zmap ON 
rdbms_brands (db_engines_rank, db_engines_score, initial_release); 

Materialized zonemap RDBMS_ZMAP created.

We have now 3 zones and each zone contains two blocks and stores the minimum and maximum of db_engines_rank, db_engines_score and initial_release:

Next, let us run a query returning all RDBMS brands with ranking score more than 1000:

Looking at the execution plan below we see that Oracle is scanning only Zone 1, as the maximum score in all other zones is smaller than 1000:

That is how zone maps work … but what is new in Oracle 20c?

We can now enable automatic creation and maintenance of basic zone maps for both partitioned and non-partitioned tables. But for now, the creation is not available for join zone maps, IOTs, external tables or temporary tables!

In 20c, you can use the new package DBMS_AUTO_ZONEMAP to enable Automatic Zone Maps in the database. Automatic zone map creation is turned off by default.

These four values are allowed for the parameter AUTO_ZONEMAP_MODE:

ON: Turns on auto zone map feature completely. Both for foreground and background zone map creation and maintenance
OFF: Turns off auto zone map feature completely. Both for foreground and background zone map creation and maintenance
FOREGROUND: Turns on only for foreground zone map creation and maintenance
BACKGROUND: Turns on only for background zone map creation and maintenance
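For example, enabling or disabling the feature is a single call to the CONFIGURE procedure of the package; a minimal sketch using the mode values listed above:

EXEC DBMS_AUTO_ZONEMAP.CONFIGURE('AUTO_ZONEMAP_MODE', 'ON');
-- and to switch it off again:
EXEC DBMS_AUTO_ZONEMAP.CONFIGURE('AUTO_ZONEMAP_MODE', 'OFF');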

You may use the ACTIVITY_REPORT function to view auto zone map activity for a given time window. Note that the background job that performs automatic zone map processing starts once per hour and each run may last up to three hours.

 
SET LONG 100000
SELECT dbms_auto_zonemap.activity_report() report FROM dual;

These two zone map related views show the most important information DBAs need:

DBA_ZONEMAPS displays all zone maps in the database
DBA_ZONEMAP_MEASURES displays the measures for all zone maps in the database

On a final note: Automatic Zone Maps are for now available only on Exadata and require the Oracle Partitioning option.

Automatic Index Optimization in Oracle Database 20c

In Database tuning, Databases, DBA, New features, Oracle database on May 11, 2020 at 05:47

Oracle database 20c came with 138 new features and one of them is directly related to the indexes in the database. It is called Automatic Index Optimization.

For some DBAs, indexes in the database do not need extra care, and they don’t bother much about rebuilding, compressing, coalescing or shrinking them. In the past I administered a 24×7 mission-critical database of 5TB where indexes were occupying 4.5TB of all that. Correct: the real data was less than 500GB, granted that you also have SYSTEM, SYSAUX, etc.

Automatic Index Optimization does not mean optimization of the Automatic Indexes in the database but rather making now the Index Optimization an automatic process. Here is how it works and what you have to do in order to enable it and make it work.

First, in order to implement an ILM strategy, you have to enable Heat Maps in the database to track data access and modification. You can enable and disable heat map tracking at the system or session level with the ALTER SYSTEM or ALTER SESSION statement using the HEAT_MAP init.ora parameter, for example:

SQL> alter system set HEAT_MAP = ON;

Like ADO for data segments, Automatic Index Optimization works via ILM on indexes by enabling policies that automatically optimize indexes by compressing, shrinking and rebuilding them. Oracle is using the existing Heat Maps and collects activity statistics on the indexes.

So next, add ADO policies for indexes in order to enable their compression and optimization using the existing Automatic Data Optimization (ADO) framework. You can do it for newly created indexes as well as for already existing indexes.

There are 2 options:

– ADD POLICY TIER in order to perform the operation on, say, a low-cost tier 2 tablespace when tier 1 storage is under space pressure
– ADD POLICY OPTIMIZE in order to kick off the process after a certain number of days passes without accessing the index

Here are a few examples:

SQL> create index julian.price_idx ON julian.sales(price)
ILM ADD POLICY OPTIMIZE AFTER 31 DAYS OF NO MODIFICATION;

SQL> alter index julian.price_idx ILM ADD POLICY TIER TO BC_DATA;

SQL> alter index julian.price_idx
ILM ADD POLICY OPTIMIZE AFTER 3 DAYS OF NO ACCESS;

SQL> SELECT POLICY_NAME, POLICY_TYPE, ENABLED FROM DBA_ILMPOLICIES;

POLICY_NAME             POLICY_TYPE    ENA
---------------------   -------------  ---
P1                      DATA MOVEMENT  YES

Note that the Oracle documentation has the tier syntax wrong: instead of “ILM ADD POLICY SEGMENT TIER” use “ILM ADD POLICY TIER”.

The optimization process includes actions such as compressing, shrinking or rebuilding the indexes:

Compress: Compresses portions of the key values in an index segment (~3 times)
Shrink: Merges the contents of index blocks where possible to free blocks for reuse
Rebuild: Rebuilds an index to improve space usage and access speed

Notice that you cannot decide which of the 3 above to use. Oracle automatically determines which action is optimal for the index and implements that action as part of the optimization process.

But can we have Automatic Index optimization for Automatic Indexes and not have to rebuild them manually any longer? I did try indeed and here is the result:

SQL> alter index julian."SYS_AI_abrca2u9qmxt7"
ILM ADD POLICY OPTIMIZE AFTER 7 DAYS OF NO ACCESS;

Error starting at line : 17 in command -
alter index julian."SYS_AI_abrca2u9qmxt7"
ILM ADD POLICY OPTIMIZE AFTER 7 DAYS OF NO ACCESS
Error report -
ORA-65532: cannot alter or drop automatically created indexes

Clearly not possible but the error message is sort of misleading because actually you can alter and even drop automatic indexes:

SQL> alter index julian."SYS_AI_abrca2u9qmxt7" rebuild online;
Index altered.
SQL> alter index julian."SYS_AI_abrca2u9qmxt7" coalesce;
Index altered.
SQL> alter index julian."SYS_AI_abrca2u9qmxt7" shrink space;
Index altered.

How to drop the Auto Index? Just rebuild it into a new tablespace and run "drop tablespace … including contents and datafiles" – yes, it works.
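A minimal sketch of that workaround, assuming OMF (db_create_file_dest) is configured so the data file name can be omitted, and using a hypothetical throw-away tablespace SCRAP_TS:

SQL> create tablespace scrap_ts datafile size 100M;
SQL> alter index julian."SYS_AI_abrca2u9qmxt7" rebuild tablespace scrap_ts online;
SQL> drop tablespace scrap_ts including contents and datafiles;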

Also, while administering ADO policies for indexes, you cannot manually disable these policies but you can delete an index policy. An ADO policy for indexes executes only one time. After the policy executes successfully, the policy is disabled and is not evaluated again.

SQL> alter index julian.price_idx ILM DELETE POLICY p1;

Moreover, such policies for indexes on partition level are not yet supported. The ADO policy is cascaded to all partitions. So, if we have hybrid tables in the Cloud, we cannot move local indexes automatically to object storage but it should work for global indexes. Note that we can use Automatic Data Optimization (ADO) policies with hybrid partitioned tables under some conditions.

Here are some other limitations:

– ADO does not perform checks for storage space in a target tablespace when using storage tiering
– ADO is not supported on materialized views
– ADO is not supported with index-organized tables or clusters
– ADO concurrency depends on the concurrency of the Oracle scheduler meaning if a policy job for ADO fails more than two times, then the job is marked disabled and the job must be manually enabled later

The feature is available in almost all flavors of the database, i.e., EE, Database Cloud Service, Exadata and ODA, but it requires the Oracle Advanced Compression option.

The 20c Database Upgrade Mystery

In Cloud, Databases, DBA, Oracle database on April 27, 2020 at 14:31

Oracle Database 20c is available only for preview. It is not available for production use. Upgrades to or from Oracle Database 20c are not supported.

So, upgrades are not supported but there exists an Oracle Database 20c Upgrade Guide. The 20c upgrade pdf file is 461 pages long!

Let me shed some light on this “mystery”.

As of today, April 27th 2020 (Oracle Database 19c is today 1 year old on Linux), Oracle 20c is only available from the Oracle Public Cloud in preview mode. To try out this 20c preview, you will need an Oracle Public Cloud tenancy with cloud credits (paid or promotional), as the DBCS instance will require an OCI Compute VM. And, by the way, 20c does *not* run on Exadata.

Once 20c becomes generally available (and not only in preview mode), upgrades will also be available.

You can perform a direct upgrade to the new release from the following releases:

• 19c
• 18c
• 12.2.0

Here is what we need to know about compatibility:

– Before upgrading to Oracle Database 20c, you must set the COMPATIBLE initialization parameter to at least 12.2.0
– In Oracle Database 20c, when the COMPATIBLE initialization parameter is not set in your parameter file, the COMPATIBLE parameter value defaults to 20.0.0
– Installing earlier releases of Oracle Database on the same computer that is running Oracle Database 20c can cause issues with client connections
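Before the upgrade, it is worth checking (and, if needed, raising) the setting; a minimal sketch – remember that raising COMPATIBLE requires a restart and cannot be undone:

SQL> show parameter compatible
SQL> alter system set compatible = '19.0.0' scope=spfile;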

The most important point about the 20c architecture is that starting with the first release of Oracle Database 20c, non-CDB Oracle Database upgrades to non-CDB architecture are desupported. This means you need a container database, and your data will be in a pluggable database.

Starting with Oracle 19c you can have up to 3 pluggable databases (PDBs) per container database (CDB) without requiring additional multitenant license. See page 11 of Oracle Database Licensing Information User Manual for all the details (it is a 361 page pdf file).

So, when upgrading to 20c, we have 2 options:

Option 1: Convert the non-CDB to a PDB before upgrade

With this option, you plug in the non-CDB Oracle Database release to the same release CDB. For example, plug in a non-CDB Oracle Database Release 19c into an Oracle Database 19c release CDB. Finish converting the non-CDB Oracle Database to a PDB. Then, upgrade the entire CDB, with its PDBs, to Oracle Database 20c.

Option 2: Plug in the non-CDB, upgrade, and finish converting the non-CDB to a PDB after upgrade

With this option, you plug in a non-CDB Oracle Database release to an Oracle Database 20c CDB. Upgrade the plugged-in non-CDB Oracle Database to Oracle Database 20c. Then, finish converting the non-CDB Oracle Database to a PDB.

Here is what you need to know about the DBUA and the Oracle home:

– Starting with Oracle Database 20c, Database Upgrade Assistant (DBUA) is replaced by the AutoUpgrade utility (see the sketch after this list)
– Starting with Oracle Database 20c, the default network administration directory changes from the previous default in the local Oracle home, Oracle_home/network (for example, /u01/app/oracle/product/19.1.0/dbhome_1/network), to a new location.
– The new default location is the shared Oracle Base Home, in the path ORACLE_BASE/homes/HOME_NAME/network/admin
– Starting with Oracle Database 20c, an Oracle Database installation configures all Oracle Database homes in read-only mode by default
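As a sketch of the AutoUpgrade utility mentioned in the first point above (the config file name is hypothetical), the typical flow is: create a sample config file, analyze, then deploy:

java -jar autoupgrade.jar -create_sample_file config
java -jar autoupgrade.jar -config my20c.cfg -mode analyze
java -jar autoupgrade.jar -config my20c.cfg -mode deploy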

And here is what you need to know before the upgrade about security and the init.ora parameters:

– Starting with Oracle Database 20c, the data types DBMS_CRYPTO_TOOLKIT_TYPES and package DBMS_CRYPTO_TOOLKIT are desupported
– The init.ora parameters UNIFIED_AUDIT_SGA_QUEUE_SIZE, AUDIT_FILE_DEST, AUDIT_SYS_OPERATIONS, AUDIT_SYSLOG_LEVEL and AUDIT_TRAIL have been desupported
– Desupport of IGNORECASE parameter for passwords: starting in Oracle Database 20c, the IGNORECASE parameter for the orapwd file is desupported and all newly created password files are case-sensitive
– Desupport of DISABLE_DIRECTORY_LINK_CHECK: the DISABLE_DIRECTORY_LINK_CHECK parameter is desupported, with no replacement

Further info and links were listed in detail by the King of Database Upgrades Mike Dietrich – his blog is dedicated to Oracle Database upgrades!

Now, how about long term support? Do you care only about LTS database releases when planning upgrades? As Tim Hall says: “It’s important to check out what is happening in the 20c release, because it may alter how you use the earlier releases now. There is no point launching into a new development using a feature that is about to disappear. Remember Oracle Streams anyone?” Yes, indeed Tim: Streams, Advanced Replication… I’d rather be on the latest release and use the new features – live and learn…

Tablespaces in Oracle Database 20c

In Databases, DBA, New features, Oracle database on April 8, 2020 at 08:52

Oracle 20c, released recently with 138 new features, is still only available from the cloud, but it brings one major change which DBAs can no longer avoid: using containers and pluggable databases. And this slightly changes the way DBAs look at tablespaces too.

But what is new in 20c about tablespaces? Now, DBAs can set the default tablespace encryption algorithm:

The new TABLESPACE_ENCRYPTION_DEFAULT_ALGORITHM dynamic init.ora parameter defines the default encryption algorithm for tablespace creation operations. The default value in 20.2.0 is AES128.

If you set TABLESPACE_ENCRYPTION_DEFAULT_ALGORITHM to say 3DES168, then all future tablespace creation operations will use 3DES168 as the default encryption algorithm. And this is both for offline and online tablespace encryption operations.

The supported values for the encryption algorithms are AES128, AES192, AES256, ARIA128, ARIA192, ARIA256, GOST256, SEED128 and 3DES168.
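For example (a minimal sketch; the tablespace name is hypothetical, OMF is assumed so the data file name can be omitted, and a TDE keystore must already be configured):

SQL> alter system set tablespace_encryption_default_algorithm = 'AES256' scope=both;
SQL> create tablespace enc_data datafile size 100M encryption encrypt;
-- the new tablespace is encrypted with AES256, the default set above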

The CREATE TABLESPACE command in 20c is not more complex than before. Tablespaces can be permanent, undo or temporary; bigfile (one data file or temp file up to approx. 4 billion blocks, max. size is 128 terabytes (TB) for a tablespace with 32K blocks and 32TB for a tablespace with 8K blocks) or smallfile (max. 1022 data files or temp files each with max. 4 million blocks).
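For reference, the two flavors look like this (a minimal sketch; the names are hypothetical and OMF is assumed so data file names can be omitted):

SQL> create bigfile tablespace big_data datafile size 10G autoextend on next 1G;
-- bigfile: exactly one data file, which can grow very large
SQL> create tablespace small_data datafile size 1G;
-- smallfile (the default): up to 1022 data files per tablespace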

The concept of tablespace sets is not new but not highly adopted.

But going back to tablespaces and PDBs. Here is the output from DBA_TABLESPACES run from a PDB as SYS:

And here is the output from DBA_TABLESPACES run from the root container as SYS:

Of course, DBA_TABLESPACES is no longer the place from which the DBA will find all tablespaces in the database. The correct one is CDB_TABLESPACES, but if you run "select * from cdb_tablespaces;" as SYS from within the PDB, you will still get the same output.

However, run it as SYS from the root container and you get what you are looking for:

From 20c onwards, DBAs should get used to the new CDB_ views. And then, fewer and fewer people will ask the question "Why do we have 2 tablespaces with the same name?"…
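For example, a query along these lines, run as SYS from the root container, shows at a glance which container each tablespace belongs to (a minimal sketch):

SELECT c.name AS container, t.tablespace_name, t.contents, t.bigfile
FROM   cdb_tablespaces t
JOIN   v$containers c ON c.con_id = t.con_id
ORDER  BY t.con_id, t.tablespace_name;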

Here is something else which is new in 20c: Oracle automatically shrinks an Oracle ACFS file system based on policy providing there is enough free storage in the volume. ACFS stands for Automatic Storage Management Cluster File System.

In 20c, you can now use File Group templates to customize and set default file group properties for automatically created file groups. Without file group templates, if you wanted to change properties for an automatically created file group, you would have to change the properties manually after the associated files are created, which triggers an unnecessary rebalance. The file group templates feature provides a much better option.

Also new is SecureFiles defragmentation: it provides online defragmentation of allocated and freed space in SecureFiles segments for all types of SecureFiles LOBs (compressed, deduplicated, encrypted). Defragmentation can be done automatically by a background process, and the segment advisor can estimate the fragmentation levels and how much space can be saved. Defragmentation can be done mostly in-place, with some temp segment space needed to hold intermediate results.

Next, there are 2 new features of Data Pump related to tablespaces:

– Starting with Oracle Database 20c, Data Pump improves transportable tablespace metadata operations with parallelism. Oracle Data Pump now supports parallel export and import operations for Transportable Tablespace (TTS) metadata. This is the information that associates the tablespace data files with the target database in a TTS migration. Parallelism improves TTS export and import performance, especially when there are millions of database objects in the data files, including tables, indexes, partitions and subpartitions.

– Starting with Oracle Database 20c, Data Pump resumes transportable tablespace export and import jobs that are stopped. Oracle Data Pump has the capacity to resume transportable tablespace export and import jobs. Due to errors or other problems you can find that transportable tablespace export or import jobs are stopped. Oracle Data Pump’s capacity to resume these stopped jobs helps to save you time and makes the system more available.

Another new 20c feature is Automatic Zone Maps which are created and maintained for any user table without any customer intervention. Zone maps allow the pruning of blocks and partitions based on the predicates in the queries without any user intervention. Automatic zone maps are maintained for direct loads and are maintained and refreshed for any other DML operation incrementally and periodically in the background. Automatic zone maps are improving the performance of any query transparently and automatically without the need of any user action. However, this feature is available only on Exadata.

And finally, something which I find interesting in the good way: starting with Oracle Database 20c, you can specify an expression as the value for some initialization parameters, which enables the database to automatically adjust the parameter value in response to environmental changes. This is especially useful in Oracle Autonomous Database environments.

Example: CPU_COUNT = $SYSTEM_CPU/4 or SESSIONS = MAX(200, PROCESSES * 1.5)

Blockchain tables in Oracle Database 20c

In Databases, DBA, Oracle database on March 16, 2020 at 10:58

Blockchain tables are insert-only tables that organize rows into a number of chains; they are a new concept starting with Oracle 20c. Each row in a chain, except the first row, is chained to the previous row in the chain by using a cryptographic hash. For each Oracle RAC instance, a blockchain table contains thirty-two chains, ranging from 0 through 31.

This is an example of how a blockchain table is created in 20c:
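A minimal sketch of such a statement, consistent with the retention values discussed below (the table is the NDA_RECORDS table referenced later; the column names are hypothetical):

CREATE BLOCKCHAIN TABLE nda_records (
  nda_id      NUMBER,
  nda_body    JSON,    -- JSON is a new native datatype in 20c
  signed_date DATE
)
NO DROP UNTIL 365 DAYS IDLE
NO DELETE UNTIL 16 DAYS AFTER INSERT
HASHING USING "SHA2_512" VERSION "v1";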

Let me first point out the main restrictions:

– Blockchain tables cannot be created in the root container and in an application root container: ORA-05729: blockchain table cannot be created in root container
– You cannot update the rows: ORA-05715: operation not allowed on the blockchain table
– In general, you cannot delete rows, truncate the table or drop the blockchain table: ORA-05723: drop blockchain table NDA_RECORDS not allowed
– Don’t even try to drop the tablespace containing blockchain tables, here is what happens:

DROP TABLESPACE BC_DATA INCLUDING CONTENTS AND DATAFILES;
ORA-00604: error occurred at recursive SQL level 1
ORA-05723: drop blockchain table NDA_RECORDS not allowed

The most important new view in 20c related to blockchain tables is DBA_BLOCKCHAIN_TABLES:

The 4 (non-trivial) columns of DBA_BLOCKCHAIN_TABLES contain the following information:

1. ROW_RETENTION: The minimum number of days a row must be retained after it is inserted into the table – if the value of this column is NULL, then rows can never be deleted from the table. In the example above, the row can be deleted after 16 days. Otherwise, you will get: ORA-05715: operation not allowed on the blockchain table

2. ROW_RETENTION_LOCKED: 2 possible values (YES and NO) showing if the row retention period for the blockchain table is locked.

YES: The row retention period is locked. You cannot change the row retention period.
NO: The row retention period is not locked. You can change the row retention period to a value higher than the current value with the SQL statement ALTER TABLE … NO DELETE UNTIL n DAYS AFTER INSERT.

3. TABLE_INACTIVITY_RETENTION: Number of days for which the blockchain table must be inactive before it can be dropped, that is, the number of days that must pass after the most recent row insertion before the table can be dropped. A table with no rows can be dropped at any time, regardless of this column value. In the example above, a year of inactivity must pass before the table can be dropped.

4. HASH_ALGORITHM: The algorithm used for computing the hash value for each table row.

To each row you add/insert to the blockchain table, Oracle adds values to the hidden columns of the blockchain table. Hidden columns are populated after you commit. They are used to implement sequencing of rows and verify that data is tamper-resistant. You can create indexes on hidden columns. In order to view the values of the hidden columns, you should explicitly include their names in the SQL, just like this:
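For instance, a query along these lines shows a few of the hidden columns next to the user data (reusing the hypothetical NDA_RECORDS sketch from above):

SELECT orabctab_inst_id$, orabctab_chain_id$, orabctab_seq_num$,
       orabctab_creation_time$, orabctab_hash$, nda_id
FROM   nda_records;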

Hidden Columns in Blockchain Tables will give you more details about the subject.

The following additional operations are not allowed with blockchain tables:

– Adding, dropping, and renaming columns
– Dropping partitions
– Defining BEFORE ROW triggers that fire for update operations (other triggers are allowed)
– Direct-path loading
– Inserting data using parallel DML
– Converting a regular table to a blockchain table (or vice versa)

There is a new PL/SQL package DBMS_BLOCKCHAIN_TABLE which contains 5 procedures, one of which, VERIFY_ROWS, is used to validate the data in the blockchain table.

Use DBMS_BLOCKCHAIN_TABLE.DELETE_EXPIRED_ROWS to remove rows that are beyond the retention period of the blockchain table.
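A minimal PL/SQL sketch, assuming the documented signature (schema name, table name, an optional cut-off timestamp and an OUT count of deleted rows):

DECLARE
  l_deleted NUMBER;
BEGIN
  DBMS_BLOCKCHAIN_TABLE.DELETE_EXPIRED_ROWS(
    schema_name            => 'JULIAN',
    table_name             => 'NDA_RECORDS',
    before_timestamp       => NULL,   -- NULL: delete all rows past their retention period
    number_of_rows_deleted => l_deleted);
  DBMS_OUTPUT.PUT_LINE(l_deleted || ' expired rows deleted');
END;
/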

For DBAs:

– For each chain in a database instance, periodically save the current hash and the corresponding sequence number outside the database.
– In an Oracle Data Guard environment, consider using the maximum protection mode or maximum availability mode to avoid loss of data.

You can use certificates to verify the signature of a blockchain table row. Check here on how to add and delete certificates to blockchain table rows.

Final note: you really have a good eye if you noticed the new 20c datatype I used in the table creation at the top of this blog post.

Oracle 20.2.0 new features for DBAs

In Autonomous, Cloud, Databases, New features, Oracle database on February 25, 2020 at 09:38

Oracle 20c is now available in preview mode from the Oracle Public Cloud. Preview version databases are not intended for production use and have limited functionality.

After testing some of the new features of 20c, here is what might be of interest for most DBAs:

1. Provisioning the database is relatively simple. You need an SSH Public key, create a VCN (Virtual Cloud Network) and a client subnet in your compartment. A hostname prefix is also mandatory. Note that the administrator password must be 9 to 30 characters and contain at least 2 uppercase, 2 lowercase, 2 special, and 2 numeric characters. The special characters must be _, #, or -. You cannot bypass that. The shape type must be “Virtual Machine” and the SMS (Storage Management Software) must be “Logical Volume Manager”.

Once provisioned you need the IP address which is under “Nodes” (bottom left, just under “Resources”). As you can see, it is no longer under “General Information”:

And … you can stop the database only manually (as of today, February 25th, 2020) – there is no button for stopping the node. Just terminate in case you are on a tight budget.

2. Blockchain tables

Blockchain tables are append-only tables in which only insert operations are allowed. Deleting rows is either prohibited or restricted based on time. Rows in a blockchain table are made tamper-resistant by special sequencing & chaining algorithms. Users can verify that rows have not been tampered with. Have a look at an example I used to create a blockchain table:

Here is how to manage blockchain tables.

Most important is to specify the Retention Period for the Blockchain Table by using the NO DROP clause in the CREATE BLOCKCHAIN TABLE statement. Also specify the Retention Period for Rows in the Blockchain Table: use the NO DELETE clause in a CREATE BLOCKCHAIN TABLE statement.

3. A multitenant container database is the only supported architecture in Oracle Database 20c. While the documentation is being revised, legacy terminology may persist. In most cases, “database” and “non-CDB” refer to a CDB or PDB, depending on context. In some contexts, such as upgrades, “non-CDB” refers to a non-CDB from a previous release. Check the changes in Oracle 20c for Oracle Multitenant.

4. Data Pump

– Oracle Data Pump 20c can include and exclude objects in the same export or import operation meaning that now, Oracle Data Pump commands can include both INCLUDE and EXCLUDE parameters in the same operation. By enabling greater specificity about what is being migrated, this enhancement makes it easier to migrate to Oracle Cloud, or to another on-premises Oracle Database.

Note: when you include both parameters in a command, Oracle Data Pump processes the INCLUDE parameter first, and includes all objects identified by the parameter. Then it processes the EXCLUDE parameters, eliminating the excluded objects from the included set. Here is an example of including only 2 tables but excluding all indexes except the PKs (real use case: you want to enable Oracle Auto Indexing in ADB and while importing the data you need to drop all indexes except the PKs):

expdp julian SCHEMAS=JULIAN DUMPFILE=julian.dmp REUSE_DUMPFILES=YES 
INCLUDE=TABLE:\"IN \(\'CLIENTS\',\'SALES\'\)\" 
EXCLUDE=INDEX:\"LIKE \'IDX\%\'\"

– Oracle Data Pump 20c resumes transportable tablespace export and import jobs that are stopped

– Oracle Data Pump 20c supports parallel export and import operations for Transportable Tablespace (TTS) metadata

– Oracle Data Pump 20c supports optional index compression on imports, including for Oracle Autonomous Database

– Oracle Data Pump 20c supports adding, changing and eliminating table compression

– Oracle Database 20c supports index compression as well by introducing a new TRANSFORM parameter clause, INDEX_COMPRESSION_CLAUSE

– Oracle Data Pump 20c can perform exports from Oracle Autonomous Database into dump files in a cloud object store

– Starting with Oracle Database 20c, a checksum is now added to the dump file – you can use the checksum to help confirm that the file is valid after a transfer to or from the object store, and also after saving dump files to on-premises locations, and that it has no accidental or malicious changes

5. Small improvements and changes in 20c:

– The IGNORECASE parameter for the orapwd file is desupported – all newly created password files are case-sensitive

– A new dynamic view called V$PMEM_FILESTORE displays information about Persistent Memory Filestores

– Certain predefined columns of unified audit records from common unified audit policies can be written to the UNIX SYSLOG destination – to enable this feature, you set UNIFIED_AUDIT_COMMON_SYSTEMLOG, a new CDB level init.ora parameter (added in Oracle 19c (19.3) but not included in the References)

– You now can set the TABLESPACE_ENCRYPTION_DEFAULT_ALGORITHM dynamic parameter to define the default encryption algorithm for tablespace creation operations

– Database Vault: a DV_OWNER common user in the CDB root can prevent local users from creating Oracle Database Vault controls on common objects in a PDB

– AutoShrink: Oracle Automatic Storage Management Cluster File System (Oracle ACFS) automatic shrinking automatically shrinks an Oracle ACFS file system based on policy, providing there is enough free storage in the volume

– The Oracle Grid Infrastructure feature Automatic Storage Management Cluster File System (Oracle ACFS) is desupported with Microsoft Windows

– An Oracle Database installation configures all Oracle Database homes in read-only mode by default

– Traditional auditing is deprecated in Oracle Database 20c, thus Oracle recommends that we use unified auditing, which enables selective and more effective auditing inside Oracle Database

– The package DBMS_OBFUSCATION_TOOLKIT is desupported, and replaced with DBMS_CRYPTO

– Older encryption and hashing algorithms contained within DBMS_CRYPTO are deprecated

– The Large Object (LOB) features DBMS_LOB.LOADFROMFILE and LOB buffering are desupported

– You can configure database clients to maintain multiple Secure Sockets Layer (SSL) sessions using different SSL certificates

– In DBMS_ROLLING.set_parameter(), there is a new parameter called BLOCK_UNSUPPORTED – by default, BLOCK_UNSUPPORTED is set to 1 [YES], indicating that operations performed on tables that are unsupported by Transient Logical Standby will be blocked on the primary database. If set to 0 [OFF], then the DBMS_ROLLING package does not block operations on unsupported tables

– In order to coordinate with the Oracle GoldenGate feature OGG EXTRACT, the LOGICAL_REPLICATION clause now provides support for automatic extract of tables

– Two new views, DBA_OGG_AUTO_CAPTURED_TABLES, and USER_OGG_AUTO_CAPTURED_TABLES, provide you with tools to query which tables are enabled for Oracle GoldenGate automatic capture

6. Finally, here are the 7 new init.ora parameters in Oracle 20.2.0:

DBNEST_ENABLE (DbNest is OS resource and file system isolation for PDBs)
DBNEST_PDB_FS_CONF
DIAGNOSTICS_CONTROL (meant to be used with Oracle Support)
MAX_IDLE_BLOCKER_TIME (maximum number of minutes before a blocking session is automatically terminated; see the example after this list)
PMEM_FILESTORE
TABLESPACE_ENCRYPTION_DEFAULT_ALGORITHM
OPTIMIZER_REAL_TIME_STATISTICS
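As a small example of the fourth parameter above, setting it is a one-liner (a sketch; the 10-minute value is arbitrary):

SQL> alter system set max_idle_blocker_time = 10;
-- idle sessions that block other sessions for more than 10 minutes become candidates for termination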