Dontcheff

2022: What’s new in the database world?

In Databases, DB2 database, DBA, New features, Oracle database, PostgreSQL, SQL Server on May 23, 2022 at 07:55

The big picture in the global database market is changing rapidly. It is an $80B market right now – the largest software market – and it is growing at double-digit rates year over year.

Gartner has recently published the software market revenue numbers for 2021.

In 2021, the revenue for managed cloud services was more than $39B – roughly half of all global DBMS revenue. Have a look at the drastic change since 2017:

Microsoft and AWS now hold about half of the global database market, while Oracle have slipped to only about one fifth of the overall share. Google have entered the top 5 for the first time, while IBM have just lost their 4th position. SAP are out of the top 5.

In terms of database popularity, Oracle are still #1 based on the DB-Engines rankings:

So, what is new and causing this turbulence?

Google have just announced AlloyDB for PostgreSQL. Compared with standard PostgreSQL, according to Google’s own performance tests, AlloyDB was more than four times faster for transactional workloads, and up to 100 times faster for analytical queries. AlloyDB was also two times faster for transactional workloads than Amazon’s comparable service. This makes AlloyDB a powerful new modernization option for transitioning off of legacy databases. Here are the other offerings from Google Cloud:

The latest from AWS is predominantly around RDS and Redshift (a database based on PostgreSQL). Amazon Redshift RA3 instances are now available in the Asia Pacific (Osaka), Europe (Milan), Middle East (Bahrain) and Africa (Cape Town) regions. Amazon Redshift now offers new enhancements for Audit Logging, which enables faster delivery of logs for analysis by minimizing latency while also adding Amazon CloudWatch as a new log destination. With the latest release, customers can choose to stream audit logs directly to Amazon CloudWatch, which enables customers to perform real-time monitoring. Amazon Redshift now also provides native integration with Microsoft Azure Active Directory (AD), which customers can use for authentication and authorization with tools like Microsoft Power BI. 

Amazon Relational Database Service (Amazon RDS) Performance Insights now makes it easier for you to see the database performance metrics for the exact timeframe you want to analyze, by choosing a custom time window within your retention period. Previously, you could only see metrics in Performance Insights by choosing relative time intervals such as the past 1 hour, the past 24 hours, etc. Amazon RDS for PostgreSQL, MariaDB and MySQL now supports M6i and R6i instances with new instance sizes up to 128 vCPUs and 1,024 GiB RAM. Also, Amazon Aurora Serverless v2 is now generally available.

The latest from Azure is around Change data capture: CDC lets you track all the changes that occur on a database. Though this feature has been available for SQL Server for quite some time, using it with Azure SQL Database is now generally available. When creating a Hyperscale database, you can choose your preferred storage type: read-access geo-redundant storage (RA-GRS), zone-redundant storage (ZRS), or locally redundant storage (LRS) Azure standard storage. The selected storage redundancy option will be used for the lifetime of the database for both data storage redundancy and backup storage redundancy. It is now possible to configure your Azure SQL Database to allow authentication only from Azure Active Directory.
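
For reference, enabling CDC on an Azure SQL Database is plain T-SQL. Here is a minimal sketch, assuming a hypothetical dbo.Orders table with a primary key (needed for the net-changes functions):

-- Enable CDC at the database level (run inside the target Azure SQL Database)
EXEC sys.sp_cdc_enable_db;

-- Enable CDC for a single table; dbo.Orders is a hypothetical table
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'Orders',
    @role_name     = NULL,        -- no gating role for reading change data
    @supports_net_changes = 1;    -- also expose the net-change query functions

-- Verify that CDC is enabled
SELECT name, is_cdc_enabled FROM sys.databases WHERE name = DB_NAME();
SELECT name, is_tracked_by_cdc FROM sys.tables WHERE name = N'Orders';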

IBM i 7.5, previously known as AS/400, is coming with some “interesting” features. There is a new BOOLEAN data type, and the maximum size of a binary radix index has been extended to 16 TB (it used to be 1.7 TB). I do not know many DBAs who enjoy dealing with double-digit-TB indexes. The Db2 for i SQL Query Engine (SQE) provides a Query Supervisor which enables real-time monitoring of resource consumption by SQL and native queries. There is now a RESTRICT ON DROP attribute that can be added or removed using the ALTER TABLE (SQL) statement (ALTER TABLE TABLE_NAME ADD RESTRICT ON DROP;). When RESTRICT ON DROP is added, nobody will be allowed to delete or drop that file. Even users with *ALLOBJ user special authority will not be allowed to delete the file.
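
A small sketch of how this could look on Db2 for i, with a hypothetical table (the removal uses Db2's usual ALTER TABLE ... DROP RESTRICT ON DROP form):

-- Hypothetical Db2 for i table using the new BOOLEAN data type
CREATE TABLE ORDERS_FLAGS (
    ORDER_ID   INTEGER NOT NULL PRIMARY KEY,
    IS_SHIPPED BOOLEAN DEFAULT FALSE
);

-- Protect the file from being deleted or dropped, even by *ALLOBJ users
ALTER TABLE ORDERS_FLAGS ADD RESTRICT ON DROP;

-- Remove the attribute again when the table may be dropped
ALTER TABLE ORDERS_FLAGS DROP RESTRICT ON DROP;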

The only change I am aware of in Oracle Database Release 21c, Version 21.5, is the new AE Analytic Views. AE stands for “All Edition”. Each new AE analytic view corresponds to an existing non-AE analytic view. AE views have the same columns as their non-AE counterparts, plus a column that displays the name of the application edition where the editioned object is defined. For the complete list of changes in Oracle 21c, check this link.
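
Purely as an illustration (I have not verified the exact names): assuming the new views follow the dictionary's existing _AE naming pattern, e.g. a hypothetical ALL_ANALYTIC_VIEWS_AE exposing an extra edition column, the comparison would look something like this:

-- Hypothetical: the AE view with the additional edition column
SELECT owner, analytic_view_name, edition_name
  FROM all_analytic_views_ae;

-- The existing non-AE counterpart
SELECT owner, analytic_view_name
  FROM all_analytic_views;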

Oracle Database 21c Upgrade: good to know

In Databases, DBA, New features on April 25, 2022 at 13:57

The latest release of the Oracle database, 21.3, can be downloaded from here. 21c is available for Linux, HP-UX and Windows. By using Oracle Fleet Patching & Provisioning, you can perform a direct upgrade to Oracle database release 21c from the following releases:

Oracle Fleet Patching and Provisioning is a software lifecycle management method for provisioning and maintaining Oracle homes. Configuring Oracle Fleet Patching and Provisioning (Oracle FPP) involves creating an Oracle Fleet Patching and Provisioning Server, adding gold images to the server, and creating working copies of gold images to provision software.

The Oracle Grid Infrastructure on which the database to be upgraded is running must be at the same release or later than the database release to which you will be upgrading.

The source Oracle home to be upgraded can be either a managed working copy, i.e., an Oracle home provisioned using Fleet Patching and Provisioning, or an unmanaged home, i.e., an Oracle home not provisioned using Fleet Patching and Provisioning. If you are upgrading an unmanaged Oracle home, you have to provide the complete path of the database for upgrade.

Otherwise, without Oracle FPP, you can upgrade directly from 12.2.0.1, any 18c or any 19c.

Here is what you need to know about compatibility:

  • Before upgrading to Oracle Database 21c, you must set the COMPATIBLE initialization parameter to at least 12.2.0 (I am unsure if 11.2.0 is ok as well); a check-and-set sketch follows this list
  • In Oracle Database 21c, when the COMPATIBLE initialization parameter is not set in your parameter file, the COMPATIBLE parameter value defaults to 21.0.0
  • Installing earlier releases of Oracle Database on the same computer that is running Oracle Database 21c can cause issues with client connections
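
As mentioned in the first bullet, here is a minimal check-and-set sketch for COMPATIBLE (the value shown is the documented minimum; remember that raising COMPATIBLE cannot be undone):

-- Check the current setting in the source database
SELECT name, value FROM v$parameter WHERE name = 'compatible';

-- If needed, raise it to the minimum required level before the upgrade
-- (takes effect only after a restart and cannot be lowered afterwards)
ALTER SYSTEM SET COMPATIBLE = '12.2.0' SCOPE = SPFILE;
SHUTDOWN IMMEDIATE
STARTUP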

Starting with Oracle Database 21c, non-CDB Oracle Database upgrades to non-CDB architecture are de-supported:

Option 1: Convert the non-CDB to a PDB before upgrade: with this option, you plug in the non-CDB Oracle Database release to the same release CDB. (For example, plug in a non-CDB Oracle Database Release 19c into an Oracle Database 19c release CDB). Finish converting the non-CDB Oracle Database to a PDB. Then, upgrade the entire CDB, with its PDBs, to Oracle Database 21c.

Option 2: Plug in the non-CDB, upgrade, and finish converting the non-CDB to a PDB after upgrade: with this option, you plug in a non-CDB Oracle Database release to an Oracle Database 21c CDB. Upgrade the plugged-in non-CDB Oracle Database to Oracle Database 21c. Then, finish converting the non-CDB Oracle Database to a PDB.
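
For either option, the plug-in part is plain SQL. Here is a rough sketch (the PDB name and the manifest path are hypothetical):

-- 1. In the source non-CDB (started in read-only mode), generate the XML manifest
EXEC DBMS_PDB.DESCRIBE(pdb_descr_file => '/tmp/noncdb.xml');
SHUTDOWN IMMEDIATE

-- 2. In the target CDB, plug the non-CDB in as a new PDB
CREATE PLUGGABLE DATABASE mypdb USING '/tmp/noncdb.xml' NOCOPY TEMPFILE REUSE;

-- 3. Finish the conversion inside the new PDB and open it
ALTER SESSION SET CONTAINER = mypdb;
@?/rdbms/admin/noncdb_to_pdb.sql
ALTER PLUGGABLE DATABASE OPEN;

With Option 1 the target CDB is at the same release and the whole CDB (with its new PDB) is then upgraded to 21c; with Option 2 the target CDB is already on 21c and the plugged-in database is upgraded first, before running noncdb_to_pdb.sql.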

Here is what you need to know about the DBUA and the Oracle home:

  • Starting with Oracle Database 21c, Database Upgrade Assistant (DBUA) is replaced by the AutoUpgrade utility
  • Starting with Oracle Database 21c, the default network administration directory changes from the previous default in the local Oracle home, Oracle_home/network (for example, /u01/app/oracle/product/19.1.0/dbhome_1/network), to a new location
  • The new default location is the shared Oracle Base Home, in the path ORACLE_BASE/homes/HOME_NAME/network/admin
  • Starting with Oracle Database 21c, an Oracle Database installation configures all Oracle Database homes in read-only mode by default

Lastly, note what is changing around security and parameters:

  • Starting with Oracle Database 21c, the types DBMS_CRYPTO_TOOLKIT_TYPES and the package DBMS_CRYPTO_TOOLKIT are desupported
  • The init.ora parameters UNIFIED_AUDIT_SGA_QUEUE_SIZE, AUDIT_FILE_DEST, AUDIT_SYS_OPERATIONS, AUDIT_SYSLOG_LEVEL and AUDIT_TRAIL have been desupported (a quick pre-upgrade check is sketched after this list)
  • Desupport of IGNORECASE parameter for passwords
    Starting in Oracle Database 21c, the IGNORECASE parameter for the orapwd file is desupported and all newly created password files are case-sensitive
  • Desupport of DISABLE_DIRECTORY_LINK_CHECK
    The DISABLE_DIRECTORY_LINK_CHECK parameter is desupported, with no replacement
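
As noted above, a quick sketch to see which of the desupported audit parameters are still explicitly set in the source database before the upgrade:

-- Traditional-auditing parameters that are gone in 21c
SELECT name, value
  FROM v$parameter
 WHERE name IN ('audit_trail', 'audit_file_dest', 'audit_sys_operations',
                'audit_syslog_level', 'unified_audit_sga_queue_size')
   AND isdefault = 'FALSE';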

Unified Auditing for Top-Level Statements in the Oracle Database

In DBA, Oracle database, PL/SQL, Security and auditing, SQL on April 4, 2022 at 07:26

Auditing a database undoubtedly has its benefits, but for the DBA it can cause serious headaches in terms of managing space and performance – especially in the cloud, where security and auditing are even more important than on premises.

Did you know that there are 239 available audit actions in 19c? select * from AUDIT_ACTIONS; shows them all.

A new database feature introduced in 19c and 21c, called “auditing top-level statements”, can help us minimize the issue with space and performance (caused by auditing) by ignoring SQL statements that run from within PL/SQL procedures, triggers or functions, as they may be less relevant for auditing purposes.

Inside the Oracle database, a top-level statement is a statement that is executed directly by a user, not a statement that is run from within a PL/SQL procedure.

Top-level statements from the user SYS can also be audited. The unified audit trail can grow immensely if all statements are audited, especially when a large number of audit trail records is generated for a single statement in the unified audit policy. By auditing only top-level statements, we can reduce the auditing of recursive SQL statements. An example of this scenario would be the audits for DBMS_STATS.GATHER_DATABASE_STATS, which can generate over 200,000 individual audit records.

The unified audit records are written to a table in the AUDSYS schema called AUD$UNIFIED. Access to the AUDSYS.AUD$UNIFIED table is provided via the view SYS.UNIFIED_AUDIT_TRAIL. Here is the size of AUD$UNIFIED in an autonomous database where I have not configured anything beyond the out-of-the-box auditing. The auditing options enabled by default in ADB are logon, alter user, insert, update and change password:
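
To check the size, something along these lines can be used (just a sketch against the DBA views):

-- Space used by the unified audit trail segments owned by AUDSYS
SELECT segment_name, ROUND(SUM(bytes)/1024/1024) AS size_mb
  FROM dba_segments
 WHERE owner = 'AUDSYS'
 GROUP BY segment_name
 ORDER BY size_mb DESC;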

Note that there are two components to the Audit Trail: (1) the AUDSYS schema in the database and (2) the spillover files on disk. More details in Receiving ORA-55940: An error occurred during execution of ktliIngestExternData by Mike Dietrich.

The X$UNIFIED_AUDIT_TRAIL table is owned by SYS; it is not in the SYSAUD schema! Check Unified Auditing – some insights by Thomas Saviour and X$UNIFIED_AUDIT_TRAIL is slow by Magnus Johansson for more details.

Now, configuring a Unified Audit Policy to capture only Top-Level statements is simple:
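
For example, a minimal sketch (the policy name and the audited actions are made up):

-- Audit DROP TABLE and TRUNCATE TABLE, but only when issued directly
-- by a user session, not from inside PL/SQL
CREATE AUDIT POLICY toplevel_ddl_pol
  ACTIONS DROP TABLE, TRUNCATE TABLE
  ONLY TOPLEVEL;

-- Enable the policy
AUDIT POLICY toplevel_ddl_pol;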

The ONLY TOPLEVEL clause in the CREATE AUDIT POLICY statement enables you to audit only the SQL statements that are directly issued by an end user by honoring the audit configuration in the audit policy.

In my autonomous database, I have decided to top-level audit all actions from SYS and ADMIN:
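
Roughly like this (the policy name is made up):

-- All actions, top-level only
CREATE AUDIT POLICY all_actions_toplevel_pol
  ACTIONS ALL
  ONLY TOPLEVEL;

-- Apply it only to the SYS and ADMIN users
AUDIT POLICY all_actions_toplevel_pol BY SYS, ADMIN;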

Here are other examples of auditing Top-Level statements.

It is important to understand that the ONLY TOPLEVEL clause has no impact on the output for an individual unified audit trail record. The only effect that ONLY TOPLEVEL has on a policy is to limit the number of records generated for the given unified audit policy.
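
An easy way to see the effect is to count the records per policy and action; with ONLY TOPLEVEL, the recursive statements simply never make it into the trail (again, just a sketch):

SELECT unified_audit_policies, action_name, COUNT(*) AS records
  FROM unified_audit_trail
 GROUP BY unified_audit_policies, action_name
 ORDER BY records DESC;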

P.S. 10 years ago I wrote a blog post entitled Auditing vs. Performance in the Oracle Database. This topic is, after all, still relevant and important. About a month ago, a colleague of mine from Australia and I could not figure out why auditing was causing performance issues – after unifying all possible efforts, not records 🙂 The problem could only be reproduced on Exadata. Still a mystery to us.