Dontcheff

Archive for the ‘Cloud’ Category

How to use DBSAT for Oracle 20c and the Autonomous Database

In Autonomous, Cloud, DBA, Oracle database, Security and auditing on November 13, 2020 at 10:13

“There should be no on/off button for security, it should always be on, everything should always be encrypted – there should be no option to turn security off” – Larry Ellison

If you would like to check how secure your Oracle database is, just run the free tool called Oracle Database Security Assessment Tool (DBSAT)!

First, download it for free from MOS Doc ID 2138254.1. Check also the DBSAT documentation.

The Oracle Database Security Assessment Tool is a simple command-line tool that shows how securely your database is configured, who the users are and what their privileges are, what security policies and controls are in place, and where sensitive data resides, with the goal of promoting successful approaches to mitigating potential security risks.

DBSAT has three components: Collector, Reporter, and Discoverer.

Here are the steps for producing the report in Oracle Database 20c. DBSAT also supports Oracle Database versions 10.2 through 19c.

1. First, make sure you have zip, unzip and python on the database server:

 [root@julian ~]# id
 uid=0(root) gid=0(root) groups=0(root)
 [root@julian ~]# cd /home/oracle/
  [root@julian oracle]# yum install -y zip unzip python
 Loaded plugins: ulninfo, versionlock
 mysql-connectors-community                               | 2.6 kB     00:00
 mysql-tools-community                                    | 2.6 kB     00:00
 mysql80-community                                        | 2.6 kB     00:00
 ol7_UEKR5                                                | 2.5 kB     00:00
 ol7_latest                                               | 2.7 kB     00:00
 (1/3): mysql-connectors-community/x86_64/primary_db        |  68 kB   00:00
 (2/3): mysql80-community/x86_64/primary_db                 | 128 kB   00:00
 (3/3): mysql-tools-community/x86_64/primary_db             |  83 kB   00:00
 (1/5): ol7_UEKR5/x86_64/updateinfo                         |  41 kB   00:00
 (2/5): ol7_UEKR5/x86_64/primary_db                         | 7.6 MB   00:00
 (3/5): ol7_latest/x86_64/updateinfo                        | 3.1 MB   00:00
 (4/5): ol7_latest/x86_64/group                             | 660 kB   00:00
 (5/5): ol7_latest/x86_64/primary_db                        |  30 MB   00:00
 Excluding 262 updates due to versionlock (use "yum versionlock status" to show them)
 Package zip-3.0-11.el7.x86_64 already installed and latest version
 Package unzip-6.0-20.el7.x86_64 already installed and latest version
 Package python-2.7.5-86.0.1.el7.x86_64 already installed and latest version
 Nothing to do
 [root@julian oracle]#
   

2. Create a database user for running the Security Assessment Tool. You can also run it as sysdba but I would rather have a separate user:

create user dbsat_user identified by dbsat_user;
grant create session to dbsat_user;
grant select_catalog_role to dbsat_user;
grant select on sys.registry$history to dbsat_user;
grant select on sys.dba_users_with_defpwd to dbsat_user; 
grant select on audsys.aud$unified to dbsat_user; 
grant audit_viewer to dbsat_user; 
grant capture_admin to dbsat_user;
grant dv_secanalyst to dbsat_user;

3. Run the tool:

[oracle@julian dbsat]$ ./dbsat collect dbsat_user/dbsat_user@//localhost:1521/xxxxx.oraclevcn.com dbsat_output

Database Security Assessment Tool version 2.2.1 (May 2020)

This tool is intended to assist you in securing your Oracle database
system. You are solely responsible for your system and the effect and
results of the execution of this tool (including, without limitation,
any damage or data loss). Further, the output generated by this tool may
include potentially sensitive system configuration data and information
that could be used by a skilled attacker to penetrate your system. You
are solely responsible for ensuring that the output of this tool,
including any generated reports, is handled in accordance with your
company's policies.

Connecting to the target Oracle database...

SQL*Plus: Release 20.0.0.0.0 - Production on Tue Nov 10 08:35:00 2020
Version 20.2.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Connected to:
Oracle Database 20c EE High Perf Release 20.0.0.0.0 - Production
Version 20.2.0.0.0

Setup complete.
SQL queries complete.
.......
OS commands complete.
Disconnected from Oracle Database 20c EE High Perf Release 20.0.0.0.0 - Production
Version 20.2.0.0.0
DBSAT Collector completed successfully.

Calling /u01/app/oracle/product/20.0.0/dbhome_1/bin/zip to encrypt dbsat_output.json...

Enter password:
Verify password:
  adding: dbsat_output.json (deflated 88%)
zip completed successfully.
  
[oracle@julian dbsat]$

4. Generate the report:

[oracle@julian dbsat]$ ./dbsat report dbsat_output

Database Security Assessment Tool version 2.2.1 (May 2020)

This tool is intended to assist you in securing your Oracle database
system. You are solely responsible for your system and the effect and
results of the execution of this tool (including, without limitation,
any damage or data loss). Further, the output generated by this tool may
include potentially sensitive system configuration data and information
that could be used by a skilled attacker to penetrate your system. You
are solely responsible for ensuring that the output of this tool,
including any generated reports, is handled in accordance with your
company's policies.

Archive:  dbsat_output.zip
[dbsat_output.zip] dbsat_output.json password:
  inflating: dbsat_output.json
DBSAT Reporter ran successfully.

Calling /usr/bin/zip to encrypt the generated reports...

Enter password:
Verify password:
        zip warning: dbsat_output_report.zip not found or empty
  adding: dbsat_output_report.txt (deflated 78%)
  adding: dbsat_output_report.html (deflated 84%)
  adding: dbsat_output_report.xlsx (deflated 3%)
  adding: dbsat_output_report.json (deflated 82%)
zip completed successfully.

5. Extract (for example) the .html file:

[oracle@julian dbsat]$ unzip dbsat_output_report.zip
Archive:  dbsat_output_report.zip
[dbsat_output_report.zip] dbsat_output_report.txt password:
  inflating: dbsat_output_report.txt
  inflating: dbsat_output_report.html
  inflating: dbsat_output_report.xlsx
  inflating: dbsat_output_report.json

6. View the report. Note that most areas will probably be in PASS status, some will be flagged as LOW RISK, and some might even be classified as HIGH RISK.

Although in ADB we have no OS access, you can still run the Database Security Assessment Tool. Roy Salazar explained on the Pythian blog how to run DBSAT against Autonomous Database.

You need to have the Instant Client installed and then use the ADMIN database user from Autonomous DB.
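Here is a minimal sketch of such a run (the wallet location and the service alias myadb_low are illustrative – use your own wallet and one of your database's TNS aliases):

# assuming the ADB wallet has been downloaded and unzipped here
export TNS_ADMIN=/home/oracle/adb_wallet
./dbsat collect admin@myadb_low dbsat_adb_output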

Clearly, you will get “ORA-20002: Complete without OS Commands”, as the Collector runs on the client machine instead of on the underlying database server, to which we have no access.

The security of Autonomous Database is so high that you most likely will get a very different report from what we usually see on-premises.


DBAs are a wealth of knowledge!

In Autonomous, Cloud, Databases, DBA on October 19, 2020 at 10:53

Fifteen years ago, in 2005, I remember I read the following speculation about the DBA profession: Oracle 2020 – a Glimpse Into the Future of Database Management. I was curious what would be true 15 years later. Just have a look at some of the predictions:

– 2018: Oracle 14m provides inter-instance sharing of RAM resources. All Oracle instances become self-managing.
– 2019: The first 128-bit processors are introduced.
– 2020: Oracle 16ss introduces solid-state, non-disk database management.

Looking even further we read: “Changing Role of the Oracle DBA in 2020: But the sad reality of server consolidation was that thousands of mediocre Oracle DBAs lost their jobs to this trend. The best DBAs continued to find work, but DBAs who were used for the repetitive tasks of installing upgrades on hundreds of small servers were displaced.” And further: “Inside Oracle 2020: The world of Oracle management is totally different today than it was back in 2004. We no longer have to worry about applying patches to Oracle software, all tuning is fully automated and hundreds of Oracle instances all reside within a single company-wide server.” Does ADB ring a bell?

Four years ago, another forecast article called The 2020 DBA: A Look Into the Future appeared on dzone.com, predicting that despite the evolving role of the DBA, DevOps has actually made database administrators more relevant than ever before.

True, as the DBA profession is still among the Top 10 jobs in Technology. DBA is #4 in Best Technology Jobs, #15 in Best STEM Jobs and #30 in 100 Best Jobs.

Now, going back to the title of this blog post. Jeff Smith’s, Russ Lowenthal’s and Chris Saxon’s Oracle DBA 2020 Data Masterclasses are something I now recommend to every DBA. Jeff raised an important topic, but let us take a step back.

Last month (September 2020), FlashDBA discussed the Evolution of the DBA from 1.0 through 2.0 until 3.0:

DBA 1.0: The (Good) Old Days – clearly the old days are over. Regardless if they were good or bad is a memory-lane discussion.

DBA 2.0: The IT Generalist – I remember when I saw this paper for the first time about 10-11 years ago: Oracle DBA 2.0. ASM, Direct NFS, Clusterware, VMware, Flash, Linux. The DBA had to learn OS, storage and network administration. With Exadata, I even heard the term DBA 2.1.

DBA 3.0: The Cloud DevOps DBA – new game, rather new set! “A DBA building a database in the public cloud is making decisions which have a direct affect on the (quite possibly massive) monthly bill from AWS/Azure/GCP/OCI”.

Checking the Database Management Predictions from 2019, we can see that we are now close to:

DBA 4.0: Autonomous DBA – more attention on data management, data security, data architecture, machine learning, DevOps:

“Oracle Autonomous Database can quickly provision, resize, and relocate databases with little human interaction. However, as more database provisioning tasks are automated, DBAs will still need to classify the data.” – Michelle Malcher

“CEOs will force DBAs to step into more-important roles—such as data architects, data managers, and chief data officers—as a company’s data and machine learning algorithms become important drivers of the stock price.” — Rich Niemiec

“Oracle Enterprise Manager Cloud and other third-party tools, in conjunction with Oracle Multitenant will make managing large numbers of databases easier. DBAs will be able to manage 10 times or more databases after consolidation with Oracle Multitenant.” — Anuj Mohan

“DBAs need to understand that there is a true sea change afoot, and there’s no way to stop these market forces. Hopefully, we’ll all be able to embrace this tidal wave and avoid being caught up in the undertow.” — Jim Czuprynski

I believe the adoption of a hybrid infrastructure is inevitable, and the new skills needed (such as cloud set-up, configuration, and monitoring) are mostly cloud-related. DBAs will need to be able to assess the databases and define what they are best suited for. Replication of data and databases will become more complex in hybrid environments—especially when different clouds are involved.

That is coming in a few years at most (when most companies will have adopted multi-cloud); call the profession DBA 5.0 if you prefer, or just DBA. Every decade has its challenges for IT professionals. The challenges are becoming more complex, and DBAs are often on the front line with every new fashionable IT concept. And we can quote Larry Ellison, who said that the computer industry is the only industry that is more fashion-driven than women’s fashion.

Finally, here are a few additional recent articles on the future of the DBA profession (if you are interested in reading more on the topic):

My Three Beliefs About The Future Of The DBA Job
The Future for the DBA
Will automated databases kill the DBA position?
What Does the Future Hold for DBAs?
For DBAs In 2020: Understand Your Worth, Seize The Moment
What Happens to DBAs When We Move to the Cloud?
The Future of The DBA in The Era of The Autonomous Database

How is Oracle Autonomous JSON Database different from Oracle ATP and MongoDB?

In Autonomous, Cloud, DBA, Oracle database on October 1, 2020 at 09:09

Oracle Autonomous JSON Database is an Oracle Cloud service that is specialized for developing NoSQL-style applications that use JavaScript Object Notation (JSON) documents. Autonomous JSON Database (in short AJD) stores the JSON documents in a native tree-oriented binary format making it highly optimized for fast reads (avoiding linear scans) and partial updates (reducing redo/undo log sizes).

Like ADW and ATP, AJD also delivers auto-scaling, automated patching, upgrades, maximum security and auto-tuning. I do agree with Philipp Salvisberg that AJD is a special version of Autonomous Transaction Processing (ATP).

The leader of pure document stores (as of September 2020) is MongoDB. Amazon DynamoDB and Microsoft Azure Cosmos DB are also extremely popular.

Autonomous JSON Database provides all of the same features as Autonomous Transaction Processing and, unlike MongoDB, allows developers to also store up to 20 GB of non-JSON data. There is no storage limit for JSON collections.

An excellent comparison of Oracle AJD and MongoDB can be found in the article entitled Introducing Oracle Autonomous JSON Database for application developers by Beda Hammerschmidt:

Oracle AJD is very similar to Oracle ATP with the major difference that AJD is meant for document databases containing lots of JSON format documents. You can think of ATP as more of a hybrid version of the Autonomous database.

Looking at the init.ora parameters, I compared the spfile parameters of an ATP and an AJD instance (both with 1 OCPU and 1TB of storage) and observed something. Except for the obvious ones like instance_name or service_names, I found only the following differences (a query for collecting such parameters is sketched after the list):

cpu_count: 6 for AJD and 2 for ATP
db_recovery_file_dest_size: 88406716M for AJD and 123789329M for ATP
gcs_server_processes: 4 for AJD and 5 for ATP
pdb_lockdown: JDCS for AJD and OLTP for ATP
resource_manager_cpu_allocation: 92 for AJD and 100 for ATP
shared_pool_reserved_size: 2254857830 for AJD and 3248069017 for ATP
transactions: 66083 for AJD and 66110 for ATP
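For reference, here is a minimal sketch of one way to pull the explicitly set parameters for such a comparison (run it in each database, spool the output and diff the two files):

-- list only the parameters that are not at their default value
SELECT name, value
FROM v$parameter
WHERE isdefault = 'FALSE'
ORDER BY name;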

Clearly, this is not what I expected. Obviously, the biggest difference between AJD and ATP comes from the lockdown profiles.

So, here are the three differences between Oracle AJD and ATP:

1. Lockdown profiles
2. AJD can store at most 20 GB of non-JSON data
3. Small differences in the init.ora parameters

And here are the three differences between Oracle AJD and MongoDB Atlas:

1. Autonomous JSON Database costs 30% less than comparable MongoDB Atlas configurations: $2.74/hr versus $3.95/hr
2. Autonomous JSON database gives you 2x throughput consistently across different workload types and collection sizes
3. Autonomous JSON Database comes with more capabilities than MongoDB Atlas

A good starting point is the documentation of Autonomous JSON Database for Experienced Oracle Database Users. It is mostly about restrictions for SODA and JSON, SQL and other database features.

The SODA and JSON Tutorials are a good starting point for getting used to working with AJD.
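To get a feel for SODA from PL/SQL, here is a minimal sketch (the collection name and document content are illustrative; the SODA_APP role is required, and the pattern follows the SODA for PL/SQL documentation):

DECLARE
  coll   SODA_COLLECTION_T;
  doc    SODA_DOCUMENT_T;
  status NUMBER;
BEGIN
  -- create a document collection (illustrative name)
  coll := DBMS_SODA.create_collection('orders');
  -- insert one JSON document
  doc := SODA_DOCUMENT_T(b_content => utl_raw.cast_to_raw('{"order_id" : 1, "item" : "book"}'));
  status := coll.insert_one(doc);
  COMMIT;
END;
/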

Finally, here are 5 Oracle Autonomous JSON Database use cases:

– Mobile applications
– Applications with dynamic personalized experiences
– Content and catalog management
– Integrated IoT applications
– Digital payment applications

And here is a good article by Maria Colgan on How does Autonomous Transaction Processing differ from the Autonomous Data Warehouse.

MySQL DB System on Oracle Cloud Infrastructure

In Cloud, Databases, DBA, IaaS, MySQL, New features on September 1, 2020 at 08:37

MySQL has long been second only to Oracle in the DB-Engines rankings. Moreover, MySQL was their 2019 Database of the Year!

Now, MySQL is also available as DB System on Oracle Cloud Infrastructure. The database version is 8.0.21 with InnoDB Storage Engine. Here is how the set-up works:

1. Create a MySQL DB System:

2. Note that there are different shutdown types:

3. Connect to a MySQL DB system:

It is not possible to connect directly from a remote IP to the MySQL DB System endpoint. You must connect to a Compute Instance, and from the Compute Instance to the MySQL DB System.

We first install MySQL Shell on the Compute instance:

 
[opc@julian ~]$ sudo yum install https://dev.mysql.com/get/mysql80-community-release-el7-3.noarch.rpm
Loaded plugins: ulninfo, versionlock
mysql80-community-release-el7-3.noarch.rpm               |  25 kB     00:00
Examining /var/tmp/yum-root-X8dAdb/mysql80-community-release-el7-3.noarch.rpm: mysql80-community-release-el7-3.noarch
Marking /var/tmp/yum-root-X8dAdb/mysql80-community-release-el7-3.noarch.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package mysql80-community-release.noarch 0:el7-3 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package             Arch   Version
                                  Repository                               Size
================================================================================
Installing:
 mysql80-community-release
                     noarch el7-3 /mysql80-community-release-el7-3.noarch  31 k

Transaction Summary
================================================================================
Install  1 Package

Total size: 31 k
Installed size: 31 k
Is this ok [y/d/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Warning: RPMDB altered outside of yum.
  Installing : mysql80-community-release-el7-3.noarch                       1/1
  Verifying  : mysql80-community-release-el7-3.noarch                       1/1

Installed:
  mysql80-community-release.noarch 0:el7-3

Complete!
[opc@julian ~]$

===

[opc@julian ~]$ sudo yum install mysql-shell
Loaded plugins: ulninfo, versionlock
mysql-connectors-community                               | 2.5 kB     00:00
mysql-tools-community                                    | 2.5 kB     00:00
mysql80-community                                        | 2.5 kB     00:00
ol7_UEKR5                                                | 2.5 kB     00:00
ol7_latest                                               | 2.7 kB     00:00
(1/3): mysql80-community/x86_64/primary_db                 | 115 kB   00:00
(2/3): mysql-connectors-community/x86_64/primary_db        |  62 kB   00:00
(3/3): mysql-tools-community/x86_64/primary_db             |  76 kB   00:00
(1/5): ol7_UEKR5/x86_64/updateinfo                         |  72 kB   00:00
(2/5): ol7_latest/x86_64/group                             | 660 kB   00:00
(3/5): ol7_latest/x86_64/updateinfo                        | 2.9 MB   00:00
(4/5): ol7_UEKR5/x86_64/primary_db                         |  12 MB   00:00
(5/5): ol7_latest/x86_64/primary_db                        |  36 MB   00:00
Excluding 238 updates due to versionlock (use "yum versionlock status" to show them)
Resolving Dependencies
--> Running transaction check
---> Package mysql-shell.x86_64 0:8.0.21-1.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package         Arch       Version             Repository                 Size
================================================================================
Installing:
 mysql-shell     x86_64     8.0.21-1.el7        mysql-tools-community      31 M

Transaction Summary
================================================================================
Install  1 Package

Total download size: 31 M
Installed size: 106 M
Is this ok [y/d/N]: y
Downloading packages:
warning: /var/cache/yum/x86_64/7Server/mysql-tools-community/packages/mysql-shell-8.0.21-1.el7.x86_64.rpm: Header V3 DSA/SHA1 Signature, key ID 5072e1f5: NOKEY
Public key for mysql-shell-8.0.21-1.el7.x86_64.rpm is not installed
mysql-shell-8.0.21-1.el7.x86_64.rpm                        |  31 MB   00:02
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-mysql
Importing GPG key 0x5072E1F5:
 Userid     : "MySQL Release Engineering "
 Fingerprint: a4a9 4068 76fc bd3c 4567 70c8 8c71 8d3b 5072 e1f5
 Package    : mysql80-community-release-el7-3.noarch (@/mysql80-community-release-el7-3.noarch)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-mysql
Is this ok [y/N]: y
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : mysql-shell-8.0.21-1.el7.x86_64                              1/1
  Verifying  : mysql-shell-8.0.21-1.el7.x86_64                              1/1

Installed:
  mysql-shell.x86_64 0:8.0.21-1.el7

Complete!
[opc@julian ~]$

Then connect to your DB System using MySQL Shell:

 

[opc@julian ~]$ mysqlsh ICO@10.0.0.3
Please provide the password for 'ICO@10.0.0.3': ************
MySQL Shell 8.0.21

Copyright (c) 2016, 2020, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.
Creating a session to 'ICO@10.0.0.3'
[opc@julian ~]$
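Once connected, you can switch MySQL Shell to SQL mode and verify that the DB System works; a minimal sketch (the schema and table names are illustrative):

\sql
CREATE DATABASE appdb;
USE appdb;
CREATE TABLE clients (id INT PRIMARY KEY, name VARCHAR(100));
INSERT INTO clients VALUES (1, 'Julian');
SELECT * FROM clients;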

My suggestion is to start from the MySQL Database Service Overview.

MySQL database is also available from GCP, Azure and AWS, but check the pricing.

The CLOUD_VERIFY_FUNCTION in Autonomous Database Cloud

In Autonomous, Cloud, DBA, New features, Oracle internals, Security and auditing on July 1, 2020 at 12:58

Choosing a hard-to-guess but easy-to-remember password is by far the easiest of all the hard tasks!

1. Database passwords and their complexity:

According to GDPR personal data must be processed “in a manner that ensures appropriate security of personal data including protection against unauthorized or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organizational measures.”

But GDPR does not define any requirements about passwords such as password length, complexity, or how often password should be renewed. Regulation (EU) 2016/679 just stipulates that “a high level of protection of personal data” is required.

One way to enforce strong passwords on database users is by using the following rules:

A minimum of 1 lower case letter [a-z] and
a minimum of 1 upper case letter [A-Z] and
a minimum of 1 numeric character [0-9] and
a minimum of 1 special character: ~`!@#$%^&*()-_+={}[]|\;:",./?
Passwords must be at least N characters in length
N attempts to block login
Set password expiration to N days

Oracle follows the above-mentioned rules, and the Oracle script catpvf.sql provides several password functions taking care of the verification process:

– ora_complexity_check
– verify_function
– verify_function_11G
– ora12c_verify_function
– ora12c_strong_verify_function
– ora12c_stig_verify_function

Note that the VERIFY_FUNCTION and VERIFY_FUNCTION_11G password verify functions are desupported in Oracle Database 20c. Also, in Oracle 20c, the IGNORECASE parameter for the orapwd file is desupported. All newly created password files are case-sensitive.

2. Non-autonomous databases

Now, how about those who prefer to use less complex passwords for database users? How do you bypass that problem first in a non-autonomous environment?

There are several ways to avoid the verification process by, say, the ora12c_verify_function:

– ALTER PROFILE DEFAULT LIMIT PASSWORD_VERIFY_FUNCTION NULL;
– Create a separate profile for the user (see the sketch after this list)
– Edit the catpvf.sql script to use the password verification function that you want, and then run the script to enable it – it is located in $ORACLE_HOME/rdbms/admin/catpvf.sql
– Modify “CREATE OR REPLACE FUNCTION ora12c_verify_function …” in utlpwdmg.sql, a file which is used to change the DEFAULT profile to use different password complexity functions – it is located in $ORACLE_HOME/rdbms/admin/utlpwdmg.sql (not in 20c though)
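Here is a minimal sketch combining the last two approaches for a non-autonomous database (run as SYS; the function, profile and user names are illustrative, and the relaxed rules are only an example):

CREATE OR REPLACE FUNCTION app_verify_function
(username     VARCHAR2,
 password     VARCHAR2,
 old_password VARCHAR2)
RETURN BOOLEAN IS
BEGIN
  -- relaxed policy: at least 8 characters with at least 1 letter and 1 digit
  IF NOT ora_complexity_check(password, chars => 8, letter => 1, digit => 1) THEN
    RETURN FALSE;
  END IF;
  RETURN TRUE;
END;
/

CREATE PROFILE app_profile LIMIT PASSWORD_VERIFY_FUNCTION app_verify_function;
ALTER USER app_user PROFILE app_profile;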

Note here that the Oracle documentation says clearly: “Do not modify the admin/catpvf.sql script or the Oracle-supplied password complexity functions. You can create your own functions based on the contents of these files.”

3. Autonomous databases

Next, how about Autonomous, where we have no access to the operating system layer?

The Oracle Autonomous Database Cloud offers a new function (unique to ADB) called CLOUD_VERIFY_FUNCTION. It is not available in the non-autonomous releases, not even in Oracle 20c.

The CLOUD_VERIFY_FUNCTION function is specified in the PASSWORD_VERIFY_FUNCTION attribute of the DEFAULT profile. This function internally calls ORA_COMPLEXITY_CHECK and checks the password entered according to the following specifications:

– The password must not contain the username
– The password must contain 1 or more lowercase characters
– The password must contain 1 or more uppercase characters
– The password must contain 1 or more digits
– The password must be at least 12 bytes and at most 60 bytes long

Let us check first what the function CLOUD_VERIFY_FUNCTION looks like:

 
create or replace FUNCTION cloud_verify_function
(username varchar2,
password varchar2,
old_password varchar2)
RETURN boolean IS
   differ integer;
  db_name varchar2(40);
  i integer;
  reverse_user dbms_id;
  canon_username dbms_id := username;
  len integer := nvl (length(password), 0);
BEGIN
  IF (substr(username,1,1) = '"') THEN
    execute immediate 'begin dbms_utility.canonicalize(:p1,  :p2, 128); end;'
                        using IN username, OUT canon_username;
  END IF;
   IF NOT ora_complexity_check(password, 12, null, 1, 1, 1, null) THEN
     RETURN(FALSE);
   END IF;
-- Check password length
   IF len > 60 THEN
     raise_application_error(-20020, 'Password too long');
   END IF;
-- Check if the password contains the username
   IF regexp_instr(password, canon_username, 1, 1, 0, 'i') > 0 THEN
     raise_application_error(-20002, 'Password contains the username');
   END IF;
RETURN(TRUE);
END;
/

We cannot modify the scripts mentioned above as we do not have OS access in ADB – maybe then we can change the default profile or create a new one? But in ATP, the user’s profile will be set to DEFAULT, and you are not allowed to create additional profiles. Autonomous Data Warehouse requires strong passwords – the password must meet the default password complexity rules.


Well, we are stubborn – so let us try in any case:

In ATP:

 
create profile DBA_PROFILE 
LIMIT PASSWORD_REUSE_MAX 10 PASSWORD_REUSE_TIME 30
ORA-01031: insufficient privileges

In ADW:

 
create profile DBA_PROFILE 
LIMIT PASSWORD_REUSE_MAX 10 PASSWORD_REUSE_TIME 30;
Profile DBA_PROFILE created.

alter profile DBA_PROFILE limit PASSWORD_VERIFY_FUNCTION null;
Profile DBA_PROFILE altered.

alter user admin profile DBA_PROFILE;
ORA-01031: insufficient privileges

create user app_user identified by abc profile DBA_PROFILE
ORA-28219: password verification failed for mandatory profile
ORA-20000: password length less than 12 bytes

create user app_user identified by Exadataa2020 profile DBA_PROFILE;
User APP_USER created.

 
alter user app_user identified by abc
ORA-28219: password verification failed for mandatory profile
ORA-20000: password length less than 12 bytes

Well, the password verify function is still used although we set the app_user’s profile to DBA_PROFILE.

Conclusion: there is no way to modify or bypass the CLOUD_VERIFY_FUNCTION in Autonomous Cloud.

In 20c, the script utlpwdmg.sql has been modified: it no longer contains the creation of the verification functions. I really do not know if that is intentional.

 
[oracle@julian admin]$ cat utlpwdmg.sql
...

-- This script sets the default password resource parameters
-- This script needs to be run to enable the password features.
-- However the default resource parameters can be changed based
-- on the need.
-- A default password complexity function is provided.

Rem *************************************************************************
Rem BEGIN Password Management Parameters
Rem *************************************************************************

-- This script alters the default parameters for Password Management
-- This means that all the users on the system have Password Management
-- enabled and set to the following values unless another profile is
-- created with parameter values set to different value or UNLIMITED
-- is created and assigned to the user.

ALTER PROFILE DEFAULT LIMIT
PASSWORD_LIFE_TIME 180
PASSWORD_GRACE_TIME 7
PASSWORD_REUSE_TIME UNLIMITED
PASSWORD_REUSE_MAX  UNLIMITED
FAILED_LOGIN_ATTEMPTS 10
PASSWORD_LOCK_TIME 1
INACTIVE_ACCOUNT_TIME UNLIMITED
PASSWORD_VERIFY_FUNCTION ora12c_verify_function;

/**
The below set of password profile parameters would take into consideration
recommendations from Center for Internet Security[CIS Oracle 11g].

ALTER PROFILE DEFAULT LIMIT
PASSWORD_LIFE_TIME 90
PASSWORD_GRACE_TIME 3
PASSWORD_REUSE_TIME 365
PASSWORD_REUSE_MAX  20
FAILED_LOGIN_ATTEMPTS 3
PASSWORD_LOCK_TIME 1
PASSWORD_VERIFY_FUNCTION ora12c_verify_function;
*/

/**
The below set of password profile parameters would take into
consideration recommendations from Department of Defense Database
Security Technical Implementation Guide[STIG v8R1].

ALTER PROFILE DEFAULT LIMIT
PASSWORD_LIFE_TIME 60
PASSWORD_REUSE_TIME 365
PASSWORD_REUSE_MAX  5
FAILED_LOGIN_ATTEMPTS 3
PASSWORD_VERIFY_FUNCTION ora12c_strong_verify_function;
*/

Rem *************************************************************************
Rem END Password Management Parameters
Rem *************************************************************************
[oracle@julian admin]$

On a final note: the hashed values of the passwords in 20c can still be found in the data dictionary – look into the SPARE4 column of the SYS.USER$ table. It is similar to 19c; read this article for all the details.

Six new features of Data Pump in Oracle Database 20c

In Cloud, Data, Databases, DBA, New features, Oracle database, Oracle utilities on June 8, 2020 at 10:55

In Oracle version 20c, the Data Pump export job accepts 54 different parameters and the import job can have up to 59 different parameters. Most of us who used the old-fashioned traditional export/import tools until Oracle 10g (and even afterwards) probably knew the syntax of exp/imp by heart and never needed to copy-paste the command from a text file. With more than 50 parameters in 20c, I am sure it is a different story.

There are 6 new features of Data Pump in Oracle 20c and here are their short descriptions:

1. Oracle Data Pump checksums support for cloud migrations

The new parameter CHECKSUM can be used for validation, as a checksum is now added to the dump file. Oracle Data Pump can be, and mostly is, used for migrating data from on-premises Oracle databases into the Oracle Public Cloud. You can use the checksum to help confirm that the file is valid after a transfer to or from the Oracle Cloud object store. Checksums are also useful after saving dump files to on-premises locations, for confirming that the dump files have no accidental or malicious changes.

Here is an example of how to use the new CHECKSUM parameter – I am exporting my own schema. The parameter accepts 2 values:

YES – Oracle calculates a file checksum for each dump file in the export dump file set
NO – Oracle does not calculate file checksums

The checksum is calculated at the end of the data pump job. Notice that I am not specifying below the other new parameter CHECKSUM_ALGORITHM, thus using its default value SHA256. The other optional secure hash algorithms are: CRC32, SHA384 and SHA512.

 
SQL> !expdp julian DIRECTORY=data_pump_dir DUMPFILE=jmd.dmp CHECKSUM=YES

Export: Release 20.0.0.0.0 - Production on Sat May 30 07:20:55 2020
Version 20.2.0.0.0

Copyright (c) 1982, 2020, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 20c EE High Perf Release 20.0.0.0.0 - Production
Starting "JULIAN"."SYS_EXPORT_SCHEMA_01":  julian/********@//localhost:1521/novopdb1.laika7.laika.oraclevcn.com DIRECTORY=data_pump_dir DUMPFILE=jmd.dmp CHECKSUM=YES
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_INDEX/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PASSWORD_HISTORY
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
. . exported "JULIAN"."BLOGS"                            9.983 MB   73991 rows
. . exported "JULIAN"."SALES"                            14.38 MB  295240 rows
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . exported "JULIAN"."RDBMS_BRANDS"                     7.726 KB      12 rows
. . exported "JULIAN"."CLIENTS"                          6.007 KB       2 rows
. . exported "JULIAN"."T"                                5.476 KB       1 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set.
Master table "JULIAN"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
Generating checksums for dump file set
******************************************************************************
Dump file set for JULIAN.SYS_EXPORT_SCHEMA_01 is:
  /u01/app/oracle/admin/ORCL/dpdump/9D45645C541E0B7FE0530206F40AE9E9/jmd.dmp
Job "JULIAN"."SYS_EXPORT_SCHEMA_01" successfully completed at Sat May 30 07:24:59 2020 elapsed 0 00:04:01

You probably noticed the ORA-39173 warning at the end – it is because I did not specify an encryption password while exporting encrypted data. It is just a warning and not a real error.

It goes without saying that COMPATIBLE must be set to at least 20.0.
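On the import side, the checksum can then be validated. If I am not mistaken, impdp accepts a VERIFY_CHECKSUM parameter (and VERIFY_ONLY, for validating without importing) – a minimal sketch, assuming the dump file from above:

impdp julian DIRECTORY=data_pump_dir DUMPFILE=jmd.dmp VERIFY_CHECKSUM=YES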

2. Oracle Data Pump exports from Autonomous Database

Starting with Oracle Database 20c, Data Pump can perform exports from Oracle Autonomous Database into dump files in a cloud object store. Thus, now we can easily migrate data out from an Oracle Autonomous Database and import it into another location.

For how to create object store credentials check either Oracle Cloud : Autonomous Database (ADW or ATP) – Load Data from an Object Store (DBMS_CLOUD) by Tim Hall or DBMS_CLOUD Package – A Reference Guide by Christian Antognini.

New in 20c is the CREDENTIAL parameter, which enables the export to write dump files directly to object stores. The CREDENTIAL parameter changes how expdp interprets the text string in DUMPFILE. If the CREDENTIAL parameter is not specified, then the DUMPFILE parameter can specify an optional directory object and file name in directory-object-name:file-name format. If the CREDENTIAL parameter is used, then it provides authentication and authorization for expdp to write to one or more object storage URIs specified by DUMPFILE.

Here is an example assuming that we have already created the credential_name JMD_OBJ_STORE_CRED:

 
expdp julian DUMPFILE=https://swiftobjectstorage.eu-frankfurt-1.oraclecloud.com/v1/juliandon/data_pump/jmd.dmp 
CREDENTIAL=jmd_obj_store_cred

Need more information? The Oracle Cloud Infrastructure User Guide has “only” 5952 pages. Last week, it had 5919. Enjoy reading it 🙂 And by the time you read this blog post, there will be more than 5952 pages… I do not think I have ever seen a longer user guide in our industry!

3. Oracle Data Pump includes and excludes in the same operation

Oracle Data Pump 20c can include and exclude objects in the same export or import operation meaning that now, Oracle Data Pump commands can include both INCLUDE and EXCLUDE parameters in the same operation. By enabling greater specificity about what is being migrated, this enhancement makes it easier to migrate to Oracle Cloud, or to another on-premises Oracle Database.

Note: when you include both parameters in a command, Oracle Data Pump processes the INCLUDE parameter first, and includes all objects identified by the parameter. Then it processes the EXCLUDE parameters, eliminating the excluded objects from the included set. Here is an example of including only 2 tables (SALES and CLIENTS) but excluding all indexes except the PKs (real use case: you want to enable Oracle Auto Indexing in ADB and while importing the data you need to drop all indexes except the PKs):

 
expdp julian SCHEMAS=JULIAN DUMPFILE=julian.dmp REUSE_DUMPFILES=YES 
INCLUDE=TABLE:\"IN \(\'CLIENTS\',\'SALES\'\)\" 
EXCLUDE=INDEX:\"LIKE \'IDX\%\'\"

4. Oracle Data Pump parallelizes transportable tablespace metadata operations

Starting with Oracle Database 20c, Data Pump improves transportable tablespace (TTS) metadata operations with parallelism: TTS exports can now be done with degrees of parallelism greater than 1.

Parallelism higher than 1 improves TTS export and import performance, especially when there are really a lot of database objects in the data files, including tables, indexes, partitions and subpartitions. We will probably see the real benefit when exporting packaged application schemas from SAP, EBS, etc.
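A minimal sketch of such a parallel TTS export (the tablespace name and dump file template are illustrative; as with any TTS export, the tablespaces must be made read-only first):

expdp system DIRECTORY=data_pump_dir DUMPFILE=tts%U.dmp LOGFILE=tts_exp.log TRANSPORT_TABLESPACES=users_data PARALLEL=4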

5. Oracle Data Pump provides optional index compression

In Oracle Database 20c, Data Pump supports optional index compression on import including the Autonomous Database by introducing a new TRANSFORM parameter clause INDEX_COMPRESSION_CLAUSE. Thus, you can control whether index compression is performed during import.

If NONE is specified in the INDEX_COMPRESSION_CLAUSE, then the index compression clause is omitted (and the index is given the default compression for the tablespace). However, if you use compression, then Oracle recommends that you use COMPRESS ADVANCED LOW. Indexes are created with the specified compression.

If the index compression clause is more than one word, then it must be contained in single or double quotation marks. Also, your operating system can require you to enclose the clause in escape characters, such as the backslash character. Here is an example of how to use the INDEX_COMPRESSION_CLAUSE:

 
TRANSFORM=INDEX_COMPRESSION_CLAUSE:\"COMPRESS ADVANCED LOW\"

Specifying this transform changes the type of compression for all indexes in the job.

6. Oracle Data Pump resumes transportable tablespace jobs

Starting with Oracle Database 20c, Data Pump resumes transportable tablespace export and import jobs that are stopped due to errors or any other problems. Oracle Data Pump’s capacity to resume these stopped jobs helps us to save time and makes the system more available.

Transportable jobs are now restartable at or near the point of failure.

To restart the job JMD_EXP_20C, first perform:

expdp system/password attach=jmd_exp_20c

Then restart the job with:

Export> continue_client

For an import job, the syntax is exactly the same.

On Kafka, JSON, PL/SQL and Advanced Queuing in Oracle Database 20c

In Autonomous, Cloud, Data, DBA, New features, PL/SQL on June 1, 2020 at 05:50

Oracle Corp. starts a new fiscal year today (June 1st, 2020) as the Oracle Database slowly moves towards version 20c (still in preview mode only), which comes with almost 500 init.ora parameters and 5326 hidden/underscore parameters. There is a new one in 20c called kafka_config_file. But let us first take a step back and see its connection with Advanced Queuing.


Advanced Queuing is available in all editions of Oracle database, including XE. Since Oracle Advanced Queuing is implemented in database tables, all the operational benefits of high availability, scalability, and reliability are applicable to queue data.

Advanced Queuing can be accessed through several interfaces: PL/SQL, Visual Basic, Java, Java Message Service, JDBC, ODP.NET, OCI (do not read Oracle Cloud Infrastructure – think of the C language), etc.

Using PL/SQL to access Oracle Advanced Queuing is probably the most common method: using the PL/SQL packages DBMS_AQADM and DBMS_AQ.
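As a refresher, here is a minimal PL/SQL sketch of the classic enqueue/dequeue pattern (the queue names are illustrative, and a RAW payload keeps the example short; real applications typically use object or JMS payload types):

BEGIN
  DBMS_AQADM.CREATE_QUEUE_TABLE(queue_table => 'demo_qt', queue_payload_type => 'RAW');
  DBMS_AQADM.CREATE_QUEUE(queue_name => 'demo_q', queue_table => 'demo_qt');
  DBMS_AQADM.START_QUEUE(queue_name => 'demo_q');
END;
/

DECLARE
  enq_opts  DBMS_AQ.ENQUEUE_OPTIONS_T;
  deq_opts  DBMS_AQ.DEQUEUE_OPTIONS_T;
  msg_props DBMS_AQ.MESSAGE_PROPERTIES_T;
  msg_id    RAW(16);
  payload   RAW(512);
BEGIN
  -- enqueue one message
  DBMS_AQ.ENQUEUE(queue_name => 'demo_q', enqueue_options => enq_opts,
                  message_properties => msg_props,
                  payload => UTL_RAW.CAST_TO_RAW('hello from AQ'), msgid => msg_id);
  COMMIT;
  -- and dequeue it again
  DBMS_AQ.DEQUEUE(queue_name => 'demo_q', dequeue_options => deq_opts,
                  message_properties => msg_props, payload => payload, msgid => msg_id);
  COMMIT;
END;
/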

Reading all that, one might think of Kafka. Kafka is a distributed, partitioned, replicated commit log service providing the functionality of a messaging system, but with a unique design. The aim here is not to position Kafka and Oracle AQ against each other, but to show what is new in Oracle Database 20c that brings Kafka and Oracle together, and what is important from a DBA point of view.

Todd Sharp explained recently how to use Kafka Connect with Oracle Streaming Service and Autonomous Database. Let us now see what is new in Oracle Database 20c:

Oracle Database 20c introduces Transactional Event Queues (TEQ), which are partitioned message queues that combine the best of messaging, streaming, direct messages, and publish/subscribe. TEQ operates at scale on the Oracle database. TEQ provides transactional event streaming and runs in the database at a scale of tens to hundreds of billions of messages per day on 2-node to 8-node Oracle RAC databases, both on-premises and in the cloud. TEQ has Kafka client compatibility, which means Kafka producers and consumers can use TEQ in the Oracle database instead of a Kafka broker. Check Advanced Queuing in 20c for more details.

1. Advanced Queuing: Kafka Java Client for Transactional Event Queues

Kafka Java Client for Transactional Event Queues (TEQ) enables Kafka application compatibility with Oracle database. This provides easy migration of Kafka applications to TEQ.

You do not have to manage a separate Kafka infrastructure, and this new feature simplifies event-driven application architectures with an Oracle converged database that now includes events data. Starting from Oracle Database 20c, Kafka Java APIs can connect to the Oracle database server and use Transactional Event Queues (TEQ) as a messaging platform. Developers can migrate an existing Java application that uses Kafka to the Oracle database. A client-side library allows Kafka applications to connect to the Oracle database instead of a Kafka cluster and use the TEQ messaging platform transparently.

Two levels of configuration are required to migrate Kafka application to TEQ messaging platform:

– Database level configuration
– Application level configuration

The Kafka application needs to set certain properties which allow the OKafka library to locate the Oracle Database. This is analogous to how a Kafka application provides ZooKeeper information. These connection properties can be set in the following two ways:

– using database user and password provided in plain text
– using JDBC wallet

The following are the prerequisites for configuring and running Kafka Java client for TEQ in an Oracle Database. Create a database user. Grant the following user privileges:

grant connect, resource to user;
grant execute on dbms_aq to user;
grant execute on dbms_aqadm to user;
grant execute on dbms_aqin to user;
grant execute on dbms_aqjms to user;
grant select_catalog_role to user;

Next, set the correct database init.ora parameter to use TEQ:

streams_pool_size=512M

Set the local listener too:

LOCAL_LISTENER= (ADDRESS=(PROTOCOL=TCP)(HOST= )(PORT=))

2. Advanced Queuing Support for JSON Data Type and PL/SQL

Oracle Database Advanced Queuing now supports the JSON data type.

Many client applications and micro-services which use Advanced Queuing for messaging achieve better performance if they use the JSON data type to handle JavaScript Object Notation (JSON) messages.

In this aspect, PUBLIC is granted EXECUTE privilege on all these types:

AQ$_AGENT, AQ$_AGENT_LIST_T, AQ$_DESCRIPTOR, AQ$_NTFN_DESCRIPTOR, AQ$_NTFN_MSGID_ARRAY, AQ$_POST_INFO, AQ$_POST_INFO_LIST, AQ$_PURGE_OPTIONS_T, AQ$_RECIPIENT_LIST_T,
AQ$_REG_INFO, AQ$_REG_INFO_LIST, AQ$_SUBSCRIBER_LIST_T, DEQUEUE_OPTIONS_T, ENQUEUE_OPTIONS_T, QUEUE_PROPS_T, SEEK_INPUT_T, SEEK_OUTPUT_T, SYS.MSG_PROP_T, MESSAGE_PROPERTIES_T, MESSAGE_PROPERTIES_ARRAY_T, MSGID_ARRAY_T

Regarding the new features of PL/SQL in 20c, check first Steven Feuerstein’s interesting example on the extension of loop iterators.

Check the New Features in Release 20c for Oracle Database PL/SQL for more details on PL/SQL extended iterators, PL/SQL qualified expressions enhancements, SQL macros, the new JSON data type and the new pragma SUPPRESSES_WARNING_6009.

3. Advanced Queuing: PL/SQL Enqueue and Dequeue Support

The following features are new in this release:

– Kafka Java Client for Transactional Event Queues (TEQ) which enables Kafka application compatibility with Oracle Database and thus providing easy migration of Kafka applications to TEQ
– PL/SQL Enqueue and Dequeue Support for JMS Payload and non-JMS Payload in Transactional Event Queues
– Transactional Event Queues for Performance and Scalability
– Simplified Metadata and Schema in Transactional Event Queues
– Support for Message Retention and Seekable Subscribers
– Advanced Queuing Support for JSON Data Type

For all the details, check the Changes in Oracle Database Advanced Queuing Release 20c.

In terms of performance and scalability, Oracle Transactional Event Queues have their Queue tables partitioned in 20c into multiple Event Streams which are distributed across multiple RAC nodes for high throughput messaging and streaming of events.

Remember that in 10.1, AQ was integrated into Oracle Streams and thus Oracle AQ was called “Oracle Streams AQ”. But in 12.1, Oracle Streams got deprecated and AQ was again named just “Oracle AQ”.

And finally: here is the 546-page-long Transactional Event Queues and Advanced Queuing User’s Guide, along with a few good additional articles:

Oracle + Kafka = Better Architecture by Jonathan Wallace
Streaming data from Oracle into Kafka by Robin Moffatt
Extending Oracle Streaming with Kafka Compatibility by Somnath Lahiri

SYSDATE and Time Zones in the Autonomous Database

In Autonomous, Cloud, DBA, Oracle database on May 4, 2020 at 06:16

In the Oracle database, SYSDATE is one of the most popular functions, arguably the most popular.

PL/SQL-based applications use the SYSDATE function quite extensively. What will happen when you migrate their data to the Autonomous Cloud (or build a new application on Autonomous), given that by default the time zone is set to Coordinated Universal Time (UTC) in an OCI environment?

You are allowed to change the database and session timezones in ADB, but this does not change the timezone of SYSDATE and SYSTIMESTAMP. So, the PL/SQL packages, procedures and functions, and in particular all SQL using SYSDATE and SYSTIMESTAMP, might not return what you expect. Say you need to check, for instance, what happened during the last one hour (sysdate-1/24) – you might be surprised: you will get the result for the past one hour, but it might not be your last hour.

Currently, it is not possible to change the timezone of SYSDATE and SYSTIMESTAMP since they come from the operating system. The SYSDATE and SYSTIMESTAMP functions simply perform a system call to the server operating system to get the time – the so-called “gettimeofday” call. The server OS (Unix) timezone settings influence the time that the OS passes on to Oracle and that is returned by SYSDATE and SYSTIMESTAMP.

Moreover, there is no Oracle database parameter/init.ora setting that will impact the SYSDATE and SYSTIMESTAMP timezone (that is, give a result in another timezone than the OS timezone setting). Also, SYSDATE and SYSTIMESTAMP do not use the Oracle timezone information (the Oracle RDBMS DST patches) in the database and are NOT affected by the DBTIMEZONE / SESSIONTIMEZONE settings.

On a side note, in the Oracle database there is a dynamic init.ora parameter called FIXED_DATE which enables you to set a constant date that SYSDATE will always return instead of the current date.

As far as I know, there is a project going on inside Oracle to provide a parameter making SYSDATE return the date in a database timezone, but there is no ETA for it.

So, how can we bypass this issue?

The general recommendation is to use CURRENT_DATE and CURRENT_TIMESTAMP, which return the date in the session timezone. If you cannot modify the applications, then there is a workaround: use the SQL Translation Framework functionality (Hello Kerry!) to change SYSDATE to CURRENT_DATE in the queries on the fly. This would require recreating the PL/SQL procedures, so that they can be replaced as well. It also requires creating a schema-level logon trigger to enable the Translation Framework. This needs to be carefully tested, but in general it works.
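A quick way to see the difference in an ADB session (the session time zone below is only an example; SYSDATE and SYSTIMESTAMP will keep showing UTC):

-- set a session time zone different from the server's UTC
ALTER SESSION SET TIME_ZONE = 'Europe/Helsinki';
SELECT sysdate, current_date FROM dual;
SELECT systimestamp, current_timestamp FROM dual;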

UTC is the recommended time zone to use and, in theory at least, this makes a lot of sense in a globally distributed database environment. In reality, everyone likes his/her own time zone!

These 4 links below are worth reading if you are interested in timestamp and time zone issues and functionalities:

How to change the Time Zone in Oracle Database hosted in OCI with an Example: Doc ID 2459830.1
Timestamps and time zones – Frequently Asked Questions: Doc ID 340512.1
How to Change the “Database Time” (SYSDATE and SYSTIMESTAMP) to another Time/Timezone: Doc ID 1988586.1
Datetime Data Types and Time Zone Support

The following are some points to note related to time zone upgrade in a multitenant environment:

– Updating the time zone version of a CDB does not change the time zone version of the PDBs
– Updating the time zone version of a PDB does not change the time zone version of the other PDBs
– A new PDB is always assigned the time zone version of PDB$SEED
– PDB$SEED is always assigned the time zone version at the time of CDB creation
– The time zone version of PDB$SEED cannot be changed

And if you plan on running ALTER DATABASE SET TIME_ZONE, have in mind that:

– The ALTER DATABASE SET TIME_ZONE statement returns an error if the database contains a table with a TIMESTAMP WITH LOCAL TIME ZONE column and the column contains data
– The change does not take effect until the database has been shut down and restarted
– You can find out the database time zone by entering the following query: SELECT dbtimezone FROM DUAL;

SET TIME_ZONE can be done also on session level if needed:

alter session set time_zone='-03:00';

And please note once more that SYSDATE and SYSTIMESTAMP have nothing to do with the DATABASE TIMEZONE (DBTIMEZONE).

On a final note: the information above is valid for all flavors of Autonomous: ADW, ATP, ADW-D and ATP-D.

===

Years ago, I saw this quiz on dbasupport.com – we have the following two PL/SQL blocks:

 

BEGIN
  WHILE sysdate = sysdate LOOP
    NULL;
  END LOOP;
END;
/
 
DECLARE
  x DATE;
BEGIN
  LOOP
    BEGIN
      SELECT null INTO x FROM dual WHERE sysdate = sysdate;
    EXCEPTION
      WHEN NO_DATA_FOUND THEN EXIT;
    END;
  END LOOP;
END;
/

The question is: what happens after you run them? Are both loops finite, both infinite, or is one of them finite and the other one infinite?

If you cannot answer the question, just run them in SQL*Plus. Then try to explain why – the reason for each one being finite or infinite.

The 20c Database Upgrade Mystery

In Cloud, Databases, DBA, Oracle database on April 27, 2020 at 14:31

Oracle Database 20c is available only for preview. It is not available for production use. Upgrades to or from Oracle Database 20c are not supported.

So, upgrades are not supported but there exists an Oracle Database 20c Upgrade Guide. The 20c upgrade pdf file is 461 pages long!

Let me shed some light on this “mystery”.

As of today, April 27th 2020 (Oracle Database 19c is today 1 year old on Linux), Oracle 20c is only available from the Oracle Public Cloud in preview mode. To try out this 20c preview, you will need an Oracle Public Cloud tenancy with cloud credits (paid or promotional), as the DBCS instance will require an OCI Compute VM. And, by the way, 20c does *not* run on Exadata.

Once 20c becomes generally available, and not only in preview mode, upgrades will be supported as well.

You can perform a direct upgrade to the new release from the following releases:

• 19c
• 18c
• 12.2.0

Here is what we need to know about compatibility:

– Before upgrading to Oracle Database 20c, you must set the COMPATIBLE initialization parameter to at least 12.2.0
– In Oracle Database 20c, when the COMPATIBLE initialization parameter is not set in your parameter file, the COMPATIBLE parameter value defaults to 20.0.0
– Installing earlier releases of Oracle Database on the same computer that is running Oracle Database 20c can cause issues with client connections

The most important point about the 20c architecture is that starting with the first release of Oracle Database 20c, non-CDB Oracle Database upgrades to non-CDB architecture are desupported. Meaning you need a container and your data will be in a pluggable database.

Starting with Oracle 19c you can have up to 3 pluggable databases (PDBs) per container database (CDB) without requiring an additional Multitenant license. See page 11 of the Oracle Database Licensing Information User Manual for all the details (it is a 361-page PDF file).

So, when upgrading to 20c, we have 2 options:

Option 1: Convert the non-CDB to a PDB before upgrade

With this option, you plug in the non-CDB Oracle Database release to the same release CDB. For example, plug in a non-CDB Oracle Database Release 19c into an Oracle Database 19c release CDB. Finish converting the non-CDB Oracle Database to a PDB. Then, upgrade the entire CDB, with its PDBs, to Oracle Database 20c.
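A minimal sketch of the plug-in part of Option 1 (the names and paths are illustrative; the 19c non-CDB must be open read-only when the manifest is generated):

-- in the 19c non-CDB (open read-only), generate the XML manifest:
EXEC DBMS_PDB.DESCRIBE(pdb_descr_file => '/tmp/noncdb.xml');

-- in the 19c CDB: plug it in, convert it and open it
-- (assumes OMF or suitable FILE_NAME_CONVERT settings)
CREATE PLUGGABLE DATABASE mypdb USING '/tmp/noncdb.xml' COPY;
ALTER SESSION SET CONTAINER = mypdb;
@$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql
ALTER PLUGGABLE DATABASE mypdb OPEN;

After that, the whole CDB, with its PDBs, can be upgraded to Oracle Database 20c.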

Option 2: Plug in the non-CDB, upgrade, and finish converting the non-CDB to a PDB after upgrade

With this option, you plug in a non-CDB Oracle Database release to an Oracle Database 20c CDB. Upgrade the plugged-in non-CDB Oracle Database to Oracle Database 20c. Then, finish converting the non-CDB Oracle Database to a PDB.

Here is what you need to know about the DBUA and the Oracle home:

– Starting with Oracle Database 20c, Database Upgrade Assistant (DBUA) is replaced by the AutoUpgrade utility
– Starting with Oracle Database 20c, the default network administration directory changes from the previous default in the local Oracle home, Oracle_home/network (for example, /u01/app/oracle/product/19.1.0/dbhome_1/network), to a new location.
– The new default location is the shared Oracle Base Home, in the path ORACLE_BASE/homes/HOME_NAME/network/admin
– Starting with Oracle Database 20c, an Oracle Database installation configures all Oracle Database homes in read-only mode by default

And here is what you need to know before the upgrade about security and the init.ora parameters:

– Starting with Oracle Database 20c, the data types DBMS_CRYPTO_TOOLKIT_TYPES and package DBMS_CRYPTO_TOOLKIT are desupported
– The init.ora parameters UNIFIED_AUDIT_SGA_QUEUE_SIZE, AUDIT_FILE_DEST, AUDIT_SYS_OPERATIONS, AUDIT_SYSLOG_LEVEL and AUDIT_TRAIL have been desupported
– Desupport of IGNORECASE parameter for passwords: starting in Oracle Database 20c, the IGNORECASE parameter for the orapwd file is desupported and all newly created password files are case-sensitive
– Desupport of DISABLE_DIRECTORY_LINK_CHECK: the DISABLE_DIRECTORY_LINK_CHECK parameter is desupported, with no replacement

Further info and links were listed in detail by the King of Database Upgrades Mike Dietrich – his blog is dedicated to Oracle Database upgrades!

Now, how about long-term support? Do you care only about LTS database releases when planning upgrades? As Tim Hall says: “It’s important to check out what is happening in the 20c release, because it may alter how you use the earlier releases now. There is no point launching into a new development using a feature that is about to disappear. Remember Oracle Streams anyone?” Yes, indeed Tim: Streams, Advanced Replication… I would rather be on the latest release and use the new features – live and learn…

Oracle 20.2.0 new features for DBAs

In Autonomous, Cloud, Databases, New features, Oracle database on February 25, 2020 at 09:38

Oracle 20c is now available in preview mode from the Oracle Public Cloud. Preview version databases are not intended for production use and have limited functionality.

After testing some of the new features of 20c, here is what might be of interest for most DBAs:

1. Provisioning the database is relatively simple. You need an SSH public key, a VCN (Virtual Cloud Network) and a client subnet in your compartment. A hostname prefix is also mandatory. Note that the administrator password must be 9 to 30 characters and contain at least 2 uppercase, 2 lowercase, 2 special, and 2 numeric characters. The special characters must be _, #, or -. You cannot bypass that. The shape type must be “Virtual Machine” and the SMS (Storage Management Software) must be “Logical Volume Manager”.

Once provisioned, you need the IP address, which is under “Nodes” (bottom left, just under “Resources”) – no longer under “General Information”.

And … you can stop the database only manually (as of today, February 25th, 2020) – there is no button for stopping the node. Just terminate in case you are on a tight budget.

2. Blockchain tables

Blockchain tables are append-only tables in which only insert operations are allowed. Deleting rows is either prohibited or restricted based on time. Rows in a blockchain table are made tamper-resistant by special sequencing & chaining algorithms. Users can verify that rows have not been tampered. Have a look at an example I used to create a blockchain table:
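A minimal sketch of such a statement (the table and column names, retention periods and hashing options are illustrative):

CREATE BLOCKCHAIN TABLE bank_ledger
(bank VARCHAR2(128), deposit_date DATE, deposit_amount NUMBER)
NO DROP UNTIL 31 DAYS IDLE
NO DELETE UNTIL 16 DAYS AFTER INSERT
HASHING USING "SHA2_512" VERSION "v1";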

Here is how to manage blockchain tables.

Most important is to specify the Retention Period for the Blockchain Table by using the NO DROP clause in the CREATE BLOCKCHAIN TABLE statement. Also specify the Retention Period for Rows in the Blockchain Table: use the NO DELETE clause in a CREATE BLOCKCHAIN TABLE statement.

3. A multitenant container database is the only supported architecture in Oracle Database 20c. While the documentation is being revised, legacy terminology may persist. In most cases, “database” and “non-CDB” refer to a CDB or PDB, depending on context. In some contexts, such as upgrades, “non-CDB” refers to a non-CDB from a previous release. Check the changes in Oracle 20c for Oracle Multitenant.

4. Data Pump

– Oracle Data Pump 20c can include and exclude objects in the same export or import operation meaning that now, Oracle Data Pump commands can include both INCLUDE and EXCLUDE parameters in the same operation. By enabling greater specificity about what is being migrated, this enhancement makes it easier to migrate to Oracle Cloud, or to another on-premises Oracle Database.

Note: when you include both parameters in a command, Oracle Data Pump processes the INCLUDE parameter first, and includes all objects identified by the parameter. Then it processes the EXCLUDE parameters, eliminating the excluded objects from the included set. Here is an example of including only 2 tables but excluding all indexes except the PKs (real use case: you want to enable Oracle Auto Indexing in ADB and while importing the data you need to drop all indexes except the PKs):

expdp julian SCHEMAS=JULIAN DUMPFILE=julian.dmp REUSE_DUMPFILES=YES 
INCLUDE=TABLE:\"IN \(\'CLIENTS\',\'SALES\'\)\" 
EXCLUDE=INDEX:\"LIKE \'IDX\%\'\"

– Oracle Data Pump 20c resumes transportable tablespace export and import jobs that are stopped

– Oracle Data Pump 20c supports parallel export and import operations for Transportable Tablespace (TTS) metadata

– Oracle Data Pump 20c supports optional index compression on imports, including for Oracle Autonomous Database

– Oracle Data Pump 20c supports adding, changing and eliminating table compression

– Oracle Database 20c supports index compression as well by introducing a new TRANSFORM parameter clause, INDEX_COMPRESSION_CLAUSE

– Oracle Data Pump 20c can perform exports from Oracle Autonomous Database into dump files in a cloud object store

– Starting with Oracle Database 20c, a checksum is now added to the dump file – you can use the checksum to help confirm that the file is valid after a transfer to or from the object store, and also, after saving dump files on-premises, to confirm that they contain no accidental or malicious changes

5. Small improvements and changes in 20c:

– The IGNORECASE parameter for the orapwd file is desupported – all newly created password files are case-sensitive

– A new dynamic view called V$PMEM_FILESTORE displays information about Persistent Memory Filestores

– Certain predefined columns of unified audit records from common unified audit policies can be written to the UNIX SYSLOG destination – to enable this feature, you set UNIFIED_AUDIT_COMMON_SYSTEMLOG, a new CDB level init.ora parameter (added in Oracle 19c (19.3) but not included in the References)

– You now can set the TABLESPACE_ENCRYPTION_DEFAULT_ALGORITHM dynamic parameter to define the default encryption algorithm for tablespace creation operations

– Database Vault: a DV_OWNER common user in the CDB root can prevent local users from creating Oracle Database Vault controls on common objects in a PDB

– AutoShrink: Oracle Automatic Storage Management Cluster File System (Oracle ACFS) automatic shrinking automatically shrinks an Oracle ACFS file system based on policy, providing there is enough free storage in the volume

– The Oracle Grid Infrastructure feature Automatic Storage Management Cluster File System (Oracle ACFS) is desupported with Microsoft Windows

– An Oracle Database installation configures all Oracle Database homes in read-only mode by default

– Traditional auditing is deprecated in Oracle Database 20c; thus, Oracle recommends that we use unified auditing, which enables selective and more effective auditing inside Oracle Database

– The package DBMS_OBFUSCATION_TOOLKIT is desupported, and replaced with DBMS_CRYPTO

– Older encryption and hashing algorithms contained within DBMS_CRYPTO are deprecated

– The Large Object (LOB) features DBMS_LOB.LOADFROMFILE and LOB buffering are desupported

– You can configure database clients to maintain multiple Secure Sockets Layer (SSL) sessions using different SSL certificates

– In DBMS_ROLLING.set_parameter(), there is a new parameter called BLOCK_UNSUPPORTED – by default, BLOCK_UNSUPPORTED is set to 1 [YES], indicating that operations performed on tables that are unsupported by Transient Logical Standby will be blocked on the primary database. If set to 0 [OFF], then the DBMS_ROLLING package does not block operations on unsupported tables

– In order to coordinate with the Oracle GoldenGate feature OGG EXTRACT, the LOGICAL_REPLICATION clause now provides support for automatic extract of tables

– Two new views, DBA_OGG_AUTO_CAPTURED_TABLES, and USER_OGG_AUTO_CAPTURED_TABLES, provide you with tools to query which tables are enabled for Oracle GoldenGate automatic capture

6. Finally, here are the 7 new init.ora parameters in Oracle 20.2.0:

DBNEST_ENABLE (DbNest is OS resource and file system isolation for PDBs)
DBNEST_PDB_FS_CONF
DIAGNOSTICS_CONTROL (meant to be used with Oracle Support)
MAX_IDLE_BLOCKER_TIME (maximum number of minutes before a blocking session is automatically terminated)
PMEM_FILESTORE
TABLESPACE_ENCRYPTION_DEFAULT_ALGORITHM
OPTIMIZER_REAL_TIME_STATISTICS
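Of these, MAX_IDLE_BLOCKER_TIME is probably the one most DBAs will try first; a minimal sketch (the 10-minute limit is only an example):

-- terminate sessions that block others while being idle for more than 10 minutes
ALTER SYSTEM SET max_idle_blocker_time = 10;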