Dontcheff

Archive for 2022 | Yearly archive page

DBMS_AUTO_PARTITION: Automatic Partitioning in the Autonomous Database

In DBA, Cloud, Databases, Autonomous on September 26, 2022 at 06:33

Choosing a partitioning strategy is often far from straightforward. Range and interval partitioning are useful for organizing similar data, most notably date and time data. Hash partitioning is useful for distributing data randomly across partitions based on a hashing algorithm, rather than grouping similar data – I remember how we managed to boost performance in a huge Siebel database by hash partitioning the big indexes. List partitioning is useful to explicitly map rows to partitions based on a set of values – for example the states in the USA.

But this is a tedious process, and when doing it, regardless of whether you are a DBA or a Developer, functional knowledge of the application and of the SQL using the tables and indexes is of paramount importance.

When using the Oracle Autonomous Database, we prefer things to be automated – that goes without saying. So ADB comes with a handy package called DBMS_AUTO_PARTITION.

First, we need to configure auto partitioning with the option we need. We use the parameter IMPLEMENT (see the screenshot below), but if you want to enable recommendations without implementing them automatically, use the parameter REPORT_ONLY instead (which is the default value).

You can always disable it using the OFF parameter.
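
In SQL, the modes map to simple CONFIGURE calls. A minimal sketch (showing the two modes I am certain about):

-- implement recommendations automatically
EXEC DBMS_AUTO_PARTITION.CONFIGURE('AUTO_PARTITION_MODE', 'IMPLEMENT');

-- switch automatic partitioning off again
EXEC DBMS_AUTO_PARTITION.CONFIGURE('AUTO_PARTITION_MODE', 'OFF');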

Automatic partitioning uses a single-column partition key combined with single-level partitioning. Automatic partitioning does not support more complex partitioning strategies such as multi-column partitioned tables or composite partitioning.

Automatic partitioning chooses from the following 3 partition methods:

  • AUTOMATIC INTERVAL: This choice is best suited for ranges of partition key values
  • LIST AUTOMATIC: This partitioning method applies to distinct partition key values
  • HASH: Applies partitioning on the partition key’s hash values

Next, you can use the AUTO_PARTITION_SCHEMA and AUTO_PARTITION_TABLE settings to specify schemas and tables considered (or not considered) for automatic partitioning. Note that when automatic partitioning is enabled, all schemas and tables (in user-managed schemas) are considered for automatic partitioning if both the inclusion and exclusion lists are empty.

Here is an example where I prefer to manually decide on the partitioning strategy in the JULIAN schema – so I want to exclude it from auto partitioning:
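
In SQL terms, the screenshot boils down to the three-argument CONFIGURE call. A sketch (ALLOW => FALSE puts the schema on the exclusion list):

BEGIN
  -- exclude the JULIAN schema from auto partitioning
  DBMS_AUTO_PARTITION.CONFIGURE(
    parameter_name  => 'AUTO_PARTITION_SCHEMA',
    parameter_value => 'JULIAN',
    allow           => FALSE);
END;
/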

If I wanted to do the opposite, that is add only the schema JULIAN to the auto partitioning list, I should have replaced FALSE with TRUE in the screenshot above. As soon as the inclusion list is no longer empty, only schemas in the inclusion list are considered.

The analysis and verification of automatic partitioning using RECOMMEND_PARTITION_METHOD is potentially a resource-intensive and long-running operation, so avoid doing it in a production database. I would rather use a cloned database for automatic partitioning. The recommendations can be implemented with the APPLY_RECOMMENDATION procedure.

It is good to know that the Autonomous Database automatically collects workload information over time in an internally managed SQL workload repository maintained in the SQL Tuning Set (SYS_AUTO_STS). Recommendations of automatic partitioning generated by the RECOMMEND_PARTITION_METHOD function have a time limit, specified by the TIME_LIMIT parameter, with a default of 1 day. If you are analyzing a large system with many candidate tables, a single invocation may not generate a recommendation for all tables and you will have to invoke the procedure repeatedly to generate recommendations for additional tables. 
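
Putting the two calls together – a sketch only, with my own schema and table names, and assuming the documented parameter names:

DECLARE
  l_reco_id RAW(16);
BEGIN
  -- analyze one table, capping the analysis at 30 minutes instead of the 1-day default
  l_reco_id := DBMS_AUTO_PARTITION.RECOMMEND_PARTITION_METHOD(
                 table_owner => 'JULIAN',
                 table_name  => 'SALES',
                 time_limit  => INTERVAL '30' MINUTE);
  -- implement the generated recommendation
  DBMS_AUTO_PARTITION.APPLY_RECOMMENDATION(l_reco_id);
END;
/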

Here are the data dictionary views for automatic partitioning: DBA_AUTO_PARTITION_CONFIG and DBA_AUTO_PARTITION_RECOMMENDATIONS.

It is important to understand that unlike automatic indexing, automatic partitioning does not run periodically as a background task. Automatic partitioning only runs when you invoke it by using the DBMS_AUTO_PARTITION.RECOMMEND_PARTITION_METHOD function.

Not all tables are candidates for auto partitioning. The function VALIDATE_CANDIDATE_TABLE checks if a given table is a valid candidate for auto partitioning in Autonomous Database (see the example right after this list). There are several conditions for a table to be a valid candidate; here are a few:

  • Table passes inclusion and exclusion tests specified by AUTO_PARTITION_SCHEMA and AUTO_PARTITION_TABLE configuration parameters
  • Table has up-to-date statistics
  • Table is at least 64 GB
  • Table has 5 or more queries in the SQL tuning set that scanned the table
  • Table does not contain a LONG data type column.
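
To check a specific table, you can call the function directly. A minimal sketch (owner and table name are mine):

SELECT DBMS_AUTO_PARTITION.VALIDATE_CANDIDATE_TABLE(
         table_owner => 'JULIAN',
         table_name  => 'SALES') AS candidate_check
FROM dual;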

As you can see below, my SALES table is not a candidate as the statistics are stale:

Last, here are a few underscore/hidden parameters related to auto partitioning (from a 21c database):

_autoptn_costing: DBMS_AUTO_PARTITION is compiling query for cost estimates
_autoptn_flags: DBMS_AUTO_PARTITION flags
_autoptn_translate_table_name: DBMS_AUTO_PARTITION table name translation
_autoptn_workload_id: DBMS_AUTO_PARTITION workload id

The package is, however, missing in 21c:

SQL> desc DBMS_AUTO_PARTITION
ERROR:
ORA-04043: object DBMS_AUTO_PARTITION does not exist

If you are interested in auto partitioning the application as well, then I would recommend a rather scientific paper called Automatic Partitioning of Database Applications. But it is built on the assumption that stored procedures have several disadvantages, and not everyone would agree with that.

How to view the version history of an Oracle database?

In DBA, Oracle database on September 9, 2022 at 10:57

Often an Oracle database, even one now on say 19c, was initially created as 10g and upgraded with scripts over the years. From v$database and v$instance, we can find out the current version, the platform name and when the database was created – but how do we view the version history of that database? Often simple SQL statements can extract the important data we need; we should just know which table or view to query.

Two important views, which actually came with Oracle 10g, can shed some light on the question above: DBA_HIGH_WATER_MARK_STATISTICS and DBA_REGISTRY_HISTORY.

Let us see what information they provide:

SELECT * from DBA_REGISTRY_HISTORY where version is not null order by 4 desc;

I said above “shed some light” and not “answer the question”, as the view came only with 10gR2. So we can see the history from 10.2.0.3 until now (19.13), but whether the database was initially created as 10g, 9i or even as v7 remains an open question.

Note that since 12.1.0.1 Oracle uses DBA_REGISTRY_SQLPATCH instead of DBA_REGISTRY_HISTORY to track PSUs and BPs applied to the database. Check the post by Mike Dietrich called DBA_REGISTRY_HISTORY vs DBA_REGISTRY_SQLPATCH.

The view DBA_HIGH_WATER_MARK_STATISTICS is based on the table WRI$_DBU_HIGH_WATER_MARK:

create table WRI$_DBU_HIGH_WATER_MARK
(name                 varchar2(64)  not null,
 dbid                 number        not null,
 version              varchar2(17)  not null,
 highwater            number,
 last_value           number,
 error_count          number,
 constraint WRI$_DBU_HIGH_WATER_MARK_PK primary key
    (name, dbid, version)
 using index tablespace SYSAUX
) tablespace SYSAUX
/

Next, we run the following query which shows us along with the database size the database version as well:

SELECT * from DBA_HIGH_WATER_MARK_STATISTICS where name = 'DB_SIZE' order by 3 desc;

From DBA_HIGH_WATER_MARK_STATISTICS, we can view several other historical statistics about the database:

  • number of user tables
  • size of the largest segment
  • maximum number of partitions belonging to a user table
  • maximum number of partitions belonging to a user index
  • number of user indexes
  • maximum number of concurrent sessions seen in the database
  • maximum number of datafiles
  • maximum number of tablespaces
  • maximum number of CPUs
  • maximum query length

If the high-water mark statistics are not populated, then manually execute DBMS_FEATURE_USAGE_INTERNAL.SAMPLE_ONE_HWM. The internal package looks like this:

PROCEDURE CLEANUP_DATABASE
Argument Name                  Type                    In/Out Default?
------------------------------ ----------------------- ------ --------
CLEANUP_LOCAL                  PL/SQL BOOLEAN          IN     DEFAULT 

PROCEDURE EXEC_DB_USAGE_SAMPLING
Argument Name                  Type                    In/Out Default?
------------------------------ ----------------------- ------ --------
CURR_DATE                      DATE                    IN             

PROCEDURE SAMPLE_ONE_FEATURE
Argument Name                  Type                    In/Out Default?
------------------------------ ----------------------- ------ --------
FEAT_NAME                      VARCHAR2                IN             

PROCEDURE SAMPLE_ONE_HWM
Argument Name                  Type                    In/Out Default?
------------------------------ ----------------------- ------ --------
HWM_NAME                       VARCHAR2                IN             

If you would like to update the HWM statistics manually as they are gathered once a week, here is an example:

SQL> SELECT name, highwater, last_value FROM dba_high_water_mark_statistics WHERE name = 'USER_TABLES';

NAME                            HIGHWATER LAST_VALUE
------------------------------ ---------- ----------
USER_TABLES                           533        533

SQL> CREATE TABLE JMD (c1 json);

Table created.

SQL> exec dbms_feature_usage_internal.sample_one_hwm('USER_TABLES');

PL/SQL procedure successfully completed.

SQL> SELECT name, highwater, last_value FROM dba_high_water_mark_statistics WHERE name = 'USER_TABLES';

NAME                            HIGHWATER LAST_VALUE
------------------------------ ---------- ----------
USER_TABLES                           534        534

As a post scriptum, I can say, without ever being able to prove it scientifically, that a better performing database is one created as a fresh database and not upgraded with scripts. I would always advise creating a new database and transferring the data and all objects from the previous version rather than just upgrading the database (the data dictionary) with scripts. But with current DB sizes and limited downtime, this is getting more and more difficult to achieve.

How to calculate the size of a database?

In Data, Databases, DBA on August 29, 2022 at 10:03

Often even DBAs are not aligned on what is meant by “database size”. As data and databases grow at a rather high rate, it is important to understand the size of the database estate. Worldwide data is expected to hit 175 zettabytes by 2025, representing a 61% CAGR. Also, 51% of the data will be in data centers and 49% will be in the public cloud. So, when moving to the cloud, it is important to know how much storage is needed for the databases.

Ever wondered which is the biggest database in the world? Here is an interesting link: 10 Largest Databases in the World. “Really? Sure about that? How do you actually know if it’s true? Who is the source of information, and can we trust it?” I was wondering exactly the same when checking the list.

And also: how should all the data be stored? According to Data: A Lenovo Solutions Perspective, the answer is that within the core (i.e. data center or cloud) the classic relational database will still be the dominant approach for many years. Thus, one more reason to properly estimate and calculate the size of your databases.

So, how to calculate the size of, say, an Oracle database (check the links at the end for other database brands)? The most common way is to simply calculate the space which the database files physically consume on disk:

select sum(bytes)/1024/1024/1024 size_in_GB from dba_data_files;

But how about the other files? Like temp, control and redolog files? Are the undo files really part of the database? Do we always want to include the files from SYSTEM and SYSAUX too?

Also, not all this space in the files listed in dba_data_files is necessarily allocated. There could be sections of these files that are not used. Then, we can go with this method:

select sum(bytes)/1024/1024/1024 size_in_GB from dba_segments;

I often prefer using the query which ignores the system, temp and undo data:

select nvl(ceil(sum(bytes)/(1024*1024*1024)),0) size_in_GB
from dba_extents
where tablespace_name not in ('SYSTEM','TEMP','SYSAUX') and tablespace_name not like '%UNDO%';

Another way is to calculate only the size of the real data in each schema of the database – the methods above include the indexes too. This requires recent statistics for all tables (not estimated, but computed at 100%).

select owner, nvl(ceil(sum(num_rows*avg_row_len)/(1024*1024*1024)),0) size_in_GB
from dba_tables
where owner not in ('SYSTEM','OUTLN','SYS') group by owner;

However, checking the overall size including TEMP and REDO is perhaps the best approach. There is a MOS note, How to Calculate the Size of the Database (Doc ID 1360446.1) which is also worth reading. Here is the suggested method to calculate the total database size:

select a.data_size+b.temp_size+c.redo_size+d.cont_size "total_size"
from ( select sum(bytes) data_size
       from dba_data_files ) a,
     ( select nvl(sum(bytes),0) temp_size
       from dba_temp_files ) b,
     ( select sum(bytes) redo_size
       from sys.v_$logfile lf, sys.v_$log l
       where lf.group# = l.group#) c,
     ( select sum(block_size*file_size_blks) cont_size
       from v$controlfile ) d;

If you have enabled block change tracking and want to be really pedantic with the database size, you should also add the BCT file.

Here are some additional links and a query which will help you find indexes which are bigger than tables.

Database Size in Oracle by Bijay Kumar Sahoo

How to calculate current DB size from asktom

Estimate the Size of a Database in SQL Server

How to calculate total size of the database

How to calculate the size of a MySQL database

select owner "TABLE_OWNER", tablename "TABLE_NAME", tablesize "TABLE SIZE (GB)",
       indexname "INDEX_NAME", indexsize "INDEX SIZE (GB)", indexsize/tablesize "INDEX/TABLE"
from
(
with
tabs as (select owner, segment_name tablename, sum(bytes/1024/1024/1024) tablesize
         from dba_segments
         where segment_type='TABLE'
         group by owner, segment_name),
inds as (select i.owner, i.index_name indexname, i.table_name tablename, sum(s.bytes/1024/1024/1024) indexsize
         from dba_indexes i
         join dba_segments s on (i.owner=s.owner and i.index_name=s.segment_name)
         group by i.owner, i.index_name, i.table_name)
select * from tabs natural join inds where indexsize > tablesize and indexsize > 1
)
order by indexsize/tablesize desc;

Finally, there is a view in Oracle, called dba_hist_tbspc_space_usage, which displays historical tablespace usage statistics. So, you can retrieve the historical growth of the tablespaces and thus of the database as a whole. Check this blog post: How to retrieve growth history for Oracle tablespaces. Note that the history is available for as far back as AWR data is retained. It is considered good practice to keep track of the database size on, say, a weekly basis. You will always be able to answer questions on capacity needs, trends, growth, size, etc.

P.S. DBA_HIGH_WATER_MARK_STATISTICS contains a statistic called DB_SIZE which is dependent on the internal package DBMS_SWRF_REPORT_INTERNAL. The definition of DBA_HIGH_WATER_MARK_STATISTICS is:

select dbid, hwm.name, version, highwater, last_value, description
from wri$_dbu_high_water_mark hwm, wri$_dbu_hwm_metadata mt
where hwm.name = mt.name and
hwm.name not like '_HWM_TEST%' and /* filter out test hwm */
bitand(mt.method, 4) != 4          /* filter out disabled hwm */

The table WRI$_DBU_HIGH_WATER_MARK is most likely populated using DBMS_SWRF_REPORT_INTERNAL. The view DBA_HIGH_WATER_MARK_STATISTICS (existing since 10g!) is useful as it tells you the maximum size of the database for each version across the lifespan of the database.

Time to rebuild indexes in the Oracle database?

In Databases, DBA, Oracle database on August 19, 2022 at 14:13

One of the most controversial topics among DBAs is whether to rebuild indexes or not.

But it is so generic that the answer to the question depends on the database version, the index type and the reasons behind doing it.

In most cases, we consider b-tree indexes for a rebuild. Note that the b stands not for “binary” but for “balanced”. We create indexes mostly for performance reasons. If performance is the real concern, we need to first understand, at least at a high level, how those indexes are organized.

The MOS note Lists All Indexes that Benefit from a Rebuild (Doc ID 122008.1) suggests that indexes considered for a rebuild are indexes for which:

– deleted entries represent 20% or more of the current entries
– the index depth is more than 4 levels

And an index was considered a possible candidate for a bitmap index when the distinctiveness was more than 99%:

if ( height > 5 ) OR ( (del_lf_rows/lf_rows) > 0.2 ) then
    dbms_output.put_line (rpad(r_indx.owner,16,' ') || rpad(r_indx.index_name,40,' ') || 
                          lpad(round((del_lf_rows/lf_rows)*100,3),17,' ') || 
                          lpad(height-1,7,' ') || lpad(round((lf_rows-distinct_keys)*100/lf_rows,3),16,' '));
end if;

However, it is a good question whether this still applies to version 11g and above. I have personally seen good benefits from rebuilding indexes satisfying those conditions in lower versions of the database.

Another MOS article, Script to investigate a b-tree index structure (Doc ID 989186.1), provides a script which is quite handy as it verifies the structure of a b-tree index based on the existing table and index statistics.  The script calculates the following items:

– An estimate of the size the index should be, as optimal packing can be specified

– The index layout

This script keeps a history of the data gathered in the INDEX_HIST table. This can be useful to prevent a pattern of repeatedly rebuilding the same indexes. The history log can be user defined.

Yet another MOS article, Index Rebuild, the Need vs the Implications (Doc ID 989093.1), specifies that the most common justifications given for rebuilding an index are:
– index becomes fragmented
– index grows and grows – deleted space is not re-used
– index clustering factor becomes out of sync

However, as noted, the impact of rebuilding the index can be quite significant:

  • Most scripts around depend on the index_stats dynamic table
  • Redo activity may increase and general performance might be impacted as a direct result of rebuilding an index

An index coalesce is often preferred to an index rebuild. It has the following advantages:

  • does not require approximately 2 times the disk storage
  • always online
  • does not restructure the index, but combines index leaf blocks as much as possible, avoiding system overhead

Due to the reasons listed above, it is strongly advised not to rebuild indexes on a regular basis but instead use proper diagnostics.

Next, here is my personal experience from rebuilding indexes: either drop and create or simply rebuild. I have seen 3 main use cases:

  1. For performance reasons: because of too many deleted rows or because of split index nodes. With every new release of Oracle, it looks like we need to spend less and less time on such rebuild operations.
  2. Indexes get fragmented over time and occupy too much space. There was a mission critical production database where the data was about 500MB and the indexes altogether were about 4TB. You can sometimes release quite a lot of space.
  3. Indexes should, in my opinion, be in a separate tablespace, not where the data is. If something goes wrong with the index tablespace: logical or physical corruption, or deleting files unintentionally, then it is just a matter of time to recreate the indexes.

In Oracle 21c, there is a new feature called Automatic Index Optimization. The optimization process includes 3 actions:

• Compress: Compresses portions of the key values in an index segment (~3 times)

• Shrink: Merges the contents of index blocks where possible to free blocks for reuse

• Rebuild: Rebuilds an index to improve space usage and access speed
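
Under the covers this is driven by ADO policies on indexes. A minimal sketch, assuming Heat Map is enabled and using a hypothetical index name (sales_idx):

-- ADO needs Heat Map data to decide when an index qualifies
ALTER SYSTEM SET HEAT_MAP = ON;

-- let ADO compress/shrink/rebuild the index once it has been static long enough
ALTER INDEX sales_idx ILM ADD POLICY OPTIMIZE AFTER 7 DAYS OF NO MODIFICATION;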

For a very long time, both DBAs and Developers have been struggling (really struggling) with which indexes should be created, what type of indexes they should be created as, and which indexes should be dropped from the database. By far the most interesting new feature of Oracle Database 19c is Automatic Index creation (AI Creation). In the long run, this is to be one of the most important features in the Oracle database. Note that you cannot rebuild an auto index! A nice and useful capability of AI is that Oracle automatically rebuilds indexes that are marked as “Unusable”.

For more, check:

Automatic Indexes: Automatically Rebuild Unusable Indexes Part I by Richard Foote

Oracle B-Tree Index Internals: Rebuilding The Truth:

“Generally rebuild index when the clustering factor exceeds eight times the number of dirty blocks in the base table, when the levels exceed two or when there are excessive brown nodes in the index”.

When an index should be rebuilt? by Gouranga

SQL Server: Reorganize and Rebuild Indexes in the Database

“Microsoft recommends fixing index fragmentation issues by rebuilding the index if the fragmentation percentage of the index exceeds 30%, where it recommends fixing the index fragmentation issue by reorganizing the index if the index fragmentation percentage exceeds 5% and less than 30%”

How Often Do You Rebuild Indexes?

Rebuilding Indexes by Jonathan Lewis

Automatic Segment Advisor in the Oracle Database

In Autonomous, Database tuning, DBA, Oracle database, Oracle internals on July 1, 2022 at 06:50

“From my experience, the best advisors help in three ways: encourage you to look at the problem or opportunity from multiple angles; help you balance the tug of the short-term with important long-term priorities; and ask the tough questions you need to know to reach the best solution.” Margo Georgiadis

The Oracle Segment Advisor identifies segments that have space which can be reclaimed. However, the Automatic Segment Advisor can at times be resource-consuming and even slow down your database:

Why is this happening, given that the Automatic Segment Advisor does not analyze every database object? Here is how it works internally: the advisor examines the database statistics, samples segment data, and then selects the following objects to analyze:

  • Tablespaces that have exceeded a critical or warning space threshold
  • Segments that have the most activity
  • Segments that have the highest growth rate

In addition, the Automatic Segment Advisor evaluates tables that are at least 10MB in size and have at least 3 indexes to determine the amount of space saved if the tables are compressed with advanced row compression.

Now, here is the important part: if a database object is selected for analysis by the advisor but the maintenance window expires before the advisor can process the object, the object is included in the next Automatic Segment Advisor run. So, at some point you may end up in a situation where lots of objects have to be analyzed. During the maintenance window, the following clients/tasks are run – these are the predefined automated maintenance tasks:

Sometimes, it makes sense to disable the auto space advisor, as you cannot change the set of tablespaces and segments that the Automatic Segment Advisor selects for analysis. You can, however, enable or disable the Automatic Segment Advisor task, change the times during which it is scheduled to run, or adjust the automated maintenance task system resource utilization. Especially in a situation like this:

Often, ADDM may also point to the problem with the Automatic Segment Advisor task:

Here is how to disable (and enable) the tasks individually. The main switch is controlled by DBMS_AUTO_TASK_ADMIN.DISABLE().
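
For the Automatic Segment Advisor specifically, the calls look roughly like this (the client name is the one listed in DBA_AUTOTASK_CLIENT):

BEGIN
  -- disable only the Automatic Segment Advisor client
  DBMS_AUTO_TASK_ADMIN.DISABLE(
    client_name => 'auto space advisor',
    operation   => NULL,
    window_name => NULL);
END;
/

BEGIN
  -- and enable it back when needed
  DBMS_AUTO_TASK_ADMIN.ENABLE(
    client_name => 'auto space advisor',
    operation   => NULL,
    window_name => NULL);
END;
/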

Even after disabling the entire autotask job by running DBMS_AUTO_TASK_ADMIN.DISABLE(), DBA_AUTOTASK_TASK will still show the autotask client as enabled. For this you need to disable the jobs individually as shown above. And in a multitenant environment, the CDB and the PDBs have their own autotasks; disabling the CDB’s autotask will not affect the PDBs, so you will have to do it for each pluggable database.

Starting with 12.2, there is a parameter called ENABLE_AUTOMATIC_MAINTENANCE_PDB that can be used to enable or disable the running of automated maintenance tasks for all the PDBs in a CDB or for individual PDBs in a CDB. When you change ENABLE_AUTOMATIC_MAINTENANCE_PDB in the CDB root from TRUE to FALSE, the new value FALSE takes effect in the root and in all the PDBs in the CDB.

If you get into a situation where the Automatic Segment Advisor is consuming lots of resources and slowing down the database during the maintenance windows, do one of the following:

  1. Disable the autotask client for the segment advisor and run it manually on a per-need basis.
  2. Increase the maintenance window from the default, which starts at 10 p.m. on Monday to Friday and ends at 2 a.m. Often 4 hours is just not enough. The weekend window is 20 hours long and in most cases long enough (see the sketch right after this list).
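
For the second option, a sketch using DBMS_SCHEDULER and the predefined window names (here the Monday window):

BEGIN
  -- extend the Monday maintenance window from 4 to 8 hours
  DBMS_SCHEDULER.SET_ATTRIBUTE(
    name      => 'SYS.MONDAY_WINDOW',
    attribute => 'DURATION',
    value     => INTERVAL '8' HOUR);
END;
/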

In the Autonomous Database, you have access to dba_autotask_client, etc. and you can disable and enable the auto space advisor task; however, you do not have full visibility on dba_scheduler_window_groups, etc. Some columns just show as NULL. Still, the performance task in ADB can give you some idea of what is going on:

And finally, here are some other situations that might require disabling the Auto Space Advisor Job:

  • ORA-01555 While Running Auto Space Advisor Job on Object wri$_adv_objspace_trend_data (Doc ID 2576430.1)
  • Auto Space Advisor is Taking More Time due to Recursive Query Taking a Long Time (Doc ID 2382419.1)
  • SEGMENT ADVISOR not working as expected for LOB or SYS_LOB SEGMENT (Doc ID 988744.1)

User defined locks: enq: UL – contention

In Database tuning, DBA, Oracle database, Oracle internals on June 13, 2022 at 17:30

DBAs are familiar with most wait events; however, there is one called “enq: UL – contention” which does not pop up that often:

The foreground events above are from an Oracle 19c database. Clearly, we see a lot of user defined lock waits: enq: UL – contention. They usually show up when the DBMS_LOCK package is being used.

There are 3 enqueue types which are defined in Oracle as “User Type” locks, they are:

  1. TM – DML Enqueue called against base tables or partitions for operations that need to be coordinated
  2. TX – Transaction Enqueue used to protect transaction information
  3. UL – User Lock Enqueue used when an application makes use of the DBMS_LOCK package

MOS has a note Resolving Issues Where ‘enq: UL – contention’ Waits are Occurring (Doc ID 1915980.1) which can help us identify what session is holding the lock and what part of the application it is currently executing.

The ‘UL’ (User-defined Lock) enqueue is a lock that is created and defined by a developer (or, seldom, a DBA) using the DBMS_LOCK package. The DBMS_LOCK package allows the application developer to request, convert and release locks which are independent of the locks provided automatically by Oracle. These routines are meant for synchronization within the application and augment the locking already present in the database.

Locks are automatically released at the end of a session. DBMS_LOCK.RELEASE explicitly releases a lock previously acquired using the REQUEST function.
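
As an illustration, here is a minimal sketch of the typical allocate/request/release cycle (the lock name is mine):

DECLARE
  l_handle VARCHAR2(128);
  l_status INTEGER;
BEGIN
  -- map an application lock name to a lock handle (recorded in DBMS_LOCK_ALLOCATED)
  DBMS_LOCK.ALLOCATE_UNIQUE(lockname => 'JMD_APP_LOCK', lockhandle => l_handle);
  -- request it in exclusive mode; other sessions doing the same now wait on 'enq: UL - contention'
  l_status := DBMS_LOCK.REQUEST(lockhandle        => l_handle,
                                lockmode          => DBMS_LOCK.X_MODE,
                                timeout           => 60,
                                release_on_commit => FALSE);
  -- ... serialized application work here ...
  l_status := DBMS_LOCK.RELEASE(lockhandle => l_handle);
END;
/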

Wondering How can I find who has allocated a user lock via dbms_lock?

Here is the query from the link above showing how to find the active blocking situations around DBMS_LOCK:

SELECT blocker.username blocker, blocker.sid blocker_session
     , blocked.username blocked, blocked.sid blocked_session
  FROM v$session blocked
  JOIN v$session blocker
  ON (blocked.blocking_session = blocker.sid)
 WHERE blocked.wait_class = 'Application'
   AND blocked.event='enq: UL - contention';

The DBMS_LOCK_ALLOCATED table is periodically cleared out by calling the allocate_unique() procedure when the modulus of the lockid is 0 and the expiration column is less than the current date. On every 100th attempt to create a new entry, allocate_unique purges expired entries from the table. If the entry already exists, allocate_unique updates the expiry time.

There are a couple important MOS notes related to the DBMS_LOCK_ALLOCATED table:

The first one, called DBMS_LOCK_ALLOCATED Table Keeps Growing Eventually Depleting Tablespace (Doc ID 1054343.6), explains how to delete entries for locks that are no longer being used. The second one, entitled How To Cleanup And Shrink DBMS_LOCK_ALLOCATED? (Doc ID 1065384.1), is a bit similar, and it warns that Bug 2624130 is going to make the delete very slow. In that case, one option is to use DBMS_REDEFINITION: How To Cleanup And Shrink DBMS_LOCK_ALLOCATED? (Doc ID 1065384.1).

The structure of the DBMS_LOCK_ALLOCATED table is rather simple: it has only 3 columns: name, lockid and expiration.

There are a few bugs in the Oracle database related to the “enq: UL – contention” wait event:

Bug 30070584 – 13.3 ‘enq: UL – contention’ wait event (OEM related)
Bug 27282543 – 13.2 Frequent UL contention wait events for the node 2 on EMNPRD repository db (OEM)
Bug 26618817 – UL contention from ProcessBuddyAgent jobs in scheduled state

A recent update from 2020 (Oracle 19.3) by Jonathan Lewis confirmed that PL/SQL execution time includes the time spent waiting for UL enqueues, so we should not forget that we may need to subtract the wait time for ‘enq: UL – contention’ from the PL/SQL time before we start to worry about how much actual work we are doing in PL/SQL.

If you are working with Oracle Text, you may also observe the “enq: UL – contention” wait event. DBMS_LOCK is used internally by ctxsys.drvdml.com_sync_index, so requesting UL is not a problem in itself. However, fragmenting the index composition table of CTXSYS increases the acquisition time of the UL enqueue, and then this issue can appear.

Bottom line: DBMS_LOCK should be used (carefully) in the application code only when there are no other options.

2022: What’s new in the database world?

In Databases, DB2 database, DBA, New features, Oracle database, PostgreSQL, SQL Server on May 23, 2022 at 07:55

The big picture in the global database market is changing rapidly. It is an $80B market right now – the largest software market and growing in double digits year over year.

Gartner has recently published the software market revenue numbers for 2021.

In 2021, the revenue for managed cloud services was more than $39B – which means about half of all global DBMS revenue. Have a look at the drastic change since 2017:

Microsoft and AWS now have about half of the global database market, while Oracle have slipped down to only about one fifth of the overall share. Google have entered the top 5 for the first time, while IBM have just lost their 4th position. SAP are out of the top 5.

In terms of database popularity, Oracle are still #1 based on the DB engines rankings:

So, what is new and causing this turbulence?

Google have just announced AlloyDB for PostgreSQL. Compared with standard PostgreSQL, according to Google’s own performance tests, AlloyDB was more than four times faster for transactional workloads, and up to 100 times faster for analytical queries. AlloyDB was also two times faster for transactional workloads than Amazon’s comparable service. This makes AlloyDB a powerful new modernization option for transitioning off of legacy databases. Here are the other offerings from Google Cloud:

The latest from AWS is predominantly around RDS and Redshift (a database based on PostgreSQL). Amazon Redshift RA3 instances are now available in the Asia Pacific (Osaka), Europe (Milan), Middle East (Bahrain) and Africa (Cape Town) regions. Amazon Redshift now offers new enhancements for Audit Logging, which enables faster delivery of logs for analysis by minimizing latency while also adding Amazon CloudWatch as a new log destination. With the latest release, customers can choose to stream audit logs directly to Amazon CloudWatch, which enables customers to perform real-time monitoring. Amazon Redshift now also provides native integration with Microsoft Azure Active Directory (AD), which customers can use for authentication and authorization with tools like Microsoft Power BI. 

Amazon Relational Database Service (Amazon RDS) Performance Insights now makes it easier for you to see the database performance metrics for the exact timeframe you want to analyze, by choosing a custom time window within your retention period. Previously, you could only see metrics in Performance Insights by choosing relative time intervals such as the past 1 hour, the past 24 hours, etc. Amazon RDS for PostgreSQL, MariaDB and MySQL now supports M6i and R6i instances with new instance sizes up to 128 vCPUs and 1,024 GiB RAM. Also, Amazon Aurora Serverless v2 is now generally available.

The latest from Azure is around Change data capture: CDC lets you track all the changes that occur on a database. Though this feature has been available for SQL Server for quite some time, using it with Azure SQL Database is now generally available. When creating a Hyperscale database, you can choose your preferred storage type: read-access geo-redundant storage (RA-GRS), zone-redundant storage (ZRS), or locally redundant storage (LRS) Azure standard storage. The selected storage redundancy option will be used for the lifetime of the database for both data storage redundancy and backup storage redundancy. It is now possible to configure your Azure SQL Database to allow authentication only from Azure Active Directory.

IBM i 7.5, previously known as AS/400, is coming with some “interesting” features. There is a new BOOLEAN data type, and the maximum size of a binary radix index is extended up to 16 TB – it used to be 1.7 TB. I do not know many DBAs who enjoy dealing with double-digit TB indexes. The Db2 for i SQL Query Engine (SQE) provides a Query Supervisor which enables real-time monitoring of resource consumption by SQL and native queries. There is now a RESTRICT ON DROP attribute that can be added or removed using the ALTER TABLE (SQL) statement (ALTER TABLE TABLE_NAME ADD RESTRICT ON DROP;). When RESTRICT ON DROP is added, nobody will be allowed to delete or drop that file. Even users with *ALLOBJ user special authority will not be allowed to delete the file.

The only change I am aware of in Oracle Database Release 21c, Version 21.5, is the new AE Analytic Views. AE stands for “All Edition”. Each new AE analytic view corresponds to an existing non-AE analytic view. AE views have the same columns as their non-AE counterparts, plus a column that displays the name of the application edition where the editioned object is defined. For the complete list of changes in Oracle 21c, check this link.

Oracle Database 21c Upgrade: good to know

In Databases, DBA, New features on April 25, 2022 at 13:57

The latest release of the Oracle database 21.3 can be downloaded from here. 21c is available for Linux, HP-UX and Windows. By using Oracle Fleet Patching & Provisioning, you can perform a direct upgrade to Oracle database release 21c from the following releases:

Oracle Fleet Patching and Provisioning is a software lifecycle management method for provisioning and maintaining Oracle homes. Configuring Oracle Fleet Patching and Provisioning (Oracle FPP) involves creating an Oracle Fleet Patching and Provisioning Server, adding gold images to the server, and creating working copies of gold images to provision software.

The Oracle Grid Infrastructure on which the database to be upgraded is running must be at least of the same release or later than the database release to which you will be upgrading.

The source Oracle home to be upgraded can be either a managed working copy, i.e., an Oracle home provisioned using Fleet Patching and Provisioning, or an unmanaged home, i.e., an Oracle home not provisioned using Fleet Patching and Provisioning. If you are upgrading an unmanaged Oracle home, you have to provide the complete path of the database for upgrade.

Otherwise, without Oracle FPP, you can upgrade directly from 12.2.0.1, any 18c or any 19c.

Here is what you need to know about compatibility:

  • Before upgrading to Oracle Database 21c, you must set the COMPATIBLE initialization parameter to at least 12.2.0 (I am unsure if 11.2.0 is ok as well)
  • In Oracle Database 21c, when the COMPATIBLE initialization parameter is not set in your parameter file, the COMPATIBLE parameter value defaults to 21.0.0
  • Installing earlier releases of Oracle Database on the same computer that is running Oracle Database 21c can cause issues with client connections
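
A quick sanity check before the upgrade – remember that raising COMPATIBLE takes effect only after a restart and cannot be undone (the value below is just an illustration):

SQL> show parameter compatible

SQL> alter system set compatible='12.2.0' scope=spfile;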

Starting with Oracle Database 21c, non-CDB Oracle Database upgrades to non-CDB architecture are de-supported:

Option 1: Convert the non-CDB to a PDB before upgrade: with this option, you plug in the non-CDB Oracle Database release to the same release CDB. (For example, plug in a non-CDB Oracle Database Release 19c into an Oracle Database 19c release CDB). Finish converting the non-CDB Oracle Database to a PDB. Then, upgrade the entire CDB, with its PDBs, to Oracle Database 21c

Option 2: Plug in the non-CDB, upgrade, and finish converting the non-CDB to a PDB after upgrade: with this option, you plug in a non-CDB Oracle Database release to an Oracle Database 21c CDB. Upgrade the plugged-in non-CDB Oracle Database to Oracle Database 21c. Then, finish converting the non-CDB Oracle Database to a PDB.

Here is what you need to know about the DBUA and the Oracle home:

  • Starting with Oracle Database 21c, Database Upgrade Assistant (DBUA) is replaced by the AutoUpgrade utility
  • Starting with Oracle Database 21c, the default network administration directory changes from the previous default in the local Oracle home, Oracle_home/network (for example, /u01/app/oracle/product/19.1.0/dbhome_1/network), to a new location
  • The new default location is the shared Oracle Base Home, in the path ORACLE_BASE/ homes/HOME_NAME/network/admin
  • Starting with Oracle Database 21c, an Oracle Database installation configures all Oracle Database homes in read-only mode by default

Finally, note what is changing regarding security and parameters:

  • Starting with Oracle Database 21c, the data types DBMS_CRYPTO_TOOLKIT_TYPES and package DBMS_CRYPTO_TOOLKIT are desupported
  • The init.ora parameters UNIFIED_AUDIT_SGA_QUEUE_SIZE, AUDIT_FILE_DEST, AUDIT_SYS_OPERATIONS, AUDIT_SYSLOG_LEVEL and AUDIT_TRAIL have been desupported
  • Desupport of IGNORECASE parameter for passwords
    Starting in Oracle Database 21c, the IGNORECASE parameter for the orapwd file is desupported and all newly created password files are case-sensitive
  • Desupport of DISABLE_DIRECTORY_LINK_CHECK
    The DISABLE_DIRECTORY_LINK_CHECK parameter is desupported, with no replacement

Unified Auditing for Top-Level Statements in the Oracle Database

In DBA, Oracle database, PL/SQL, Security and auditing, SQL on April 4, 2022 at 07:26

Auditing a database undoubtedly has its benefits and more, but for the DBA, it can cause serious headaches in terms of managing space and performance. Especially in the cloud, where security and auditing are even more important than on premises.

Did you know that there are 239 available audit actions in 19c? select * from AUDIT_ACTIONS; shows them all.

A new database feature introduced in 19c and 21c, called “auditing top-level statements”, can help us minimize the issue with space and performance (caused by auditing), by ignoring SQL statements that run from within PL/SQL procedures, triggers or functions, as they may be less relevant for auditing purposes.

Inside the Oracle database, a top-level statement is a statement that is executed directly by a user, not a statement that is run from within a PL/SQL procedure.

Top-level statements from the user SYS can also be audited. The unified audit trail can grow immensely if all statements are audited, especially when a large number of audit trail records are generated for a single statement in the unified audit policy. By auditing only top-level statements, we can reduce the auditing of recursive SQL statements. An example of this scenario would be audits for the DBMS_STATS.GATHER_DATABASE_STATS statement, which can generate over 200,000 individual audit records.

The unified audit records are written to a table in the AUDSYS schema called AUD$UNIFIED. Access to the AUDSYS.AUD$UNIFIED table is provided via the view SYS.UNIFIED_AUDIT_TRAIL. Here is the size of AUD$UNIFIED in an autonomous database where I have not configured any out-of-the-box auditing. The auditing options enabled by default in ADB are logon, alter user, insert, update and change password:

Note that there are two components to the Audit Trail: (1) the AUDSYS schema in the database and (2) the spillover files on disk. More details in Receiving ORA-55940: An error occurred during execution of ktliIngestExternData by Mike Dietrich.

The X$UNIFIED_AUDIT_TRAIL table is owned by SYS; it is not in the AUDSYS schema! Check Unified Auditing – some insights by Thomas Saviour and X$UNIFIED_AUDIT_TRAIL is slow by Magnus Johansson for more details.

Now, configuring a Unified Audit Policy to capture only top-level statements is simple:

The ONLY TOPLEVEL clause in the CREATE AUDIT POLICY statement enables you to audit only the SQL statements that are directly issued by an end user by honoring the audit configuration in the audit policy.

In my autonomous database, I have decided to top-level audit all actions from SYS and ADMIN:
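
The policy from the screenshot boils down to something like this (the policy name is mine):

CREATE AUDIT POLICY toplevel_all_actions
  ACTIONS ALL
  ONLY TOPLEVEL;

AUDIT POLICY toplevel_all_actions BY SYS, ADMIN;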

Here are other examples of auditing Top-Level statements.

It is important to understand that the ONLY TOPLEVEL clause has no impact on the output for an individual unified audit trail record. The only effect that ONLY TOPLEVEL has on a policy is to limit the number of records generated for the given unified audit policy.

P.S. 10 years ago I wrote a blog post entitled Auditing vs. Performance in the Oracle Database. This topic is after all still relevant and important. With a colleague of mine from Australia, about a month ago, we could not figure out why auditing was causing performance issues – after unifying all possible efforts, not records 🙂 The problem could only be reproduced on Exadata. Still a mystery to us.

Running JavaScript from within the Oracle Database using DBMS_MLE

In DBA, New features, PL/SQL on March 8, 2022 at 08:39

JavaScript is by far the most used language according to Github’s Octoverse Report.

As of 2021 JavaScript is the most commonly used programming language among software developers around the world, with nearly 65 percent of respondents stating that they used JavaScript. HTML/CSS, Python, SQL, and Java rounded out the top five most widely used programming languages around the world.

Did you know that you can run JavaScript code from right inside the Oracle Database?

By using the new 21c PL/SQL multilingual engine package called DBMS_MLE, we can now run JavaScript from within the Oracle database.

The new package has 18 procedures and 1 function. So, it might take some time to understand what they do and how they should be used.

As of today, it is not possible to create stored JavaScript procedures in the database. But what we can do for now is store them in a CLOB column in a table and create an executable PL/SQL procedure which will select the CLOB, i.e., the JavaScript code, and execute it via DBMS_MLE.EVAL.

We can not only run JavaScript code inside the Oracle Database but also exchange data seamlessly between PL/SQL and JavaScript. The JavaScript code itself can execute PL/SQL and SQL through built-in JavaScript modules. JavaScript data types are automatically mapped to Oracle Database data types and vice versa.

Well, here is a simple example of how to run JavaScript from the database:
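
A minimal sketch of what such an example can look like (run with SET SERVEROUTPUT ON, as console.log output is routed to DBMS_OUTPUT):

DECLARE
  l_ctx    DBMS_MLE.context_handle_t;
  l_source CLOB := q'~
    // plain JavaScript, executed by the in-database engine
    let greeting = "Hello from JavaScript inside the Oracle database!";
    console.log(greeting);
  ~';
BEGIN
  l_ctx := DBMS_MLE.create_context;
  DBMS_MLE.eval(l_ctx, 'JAVASCRIPT', l_source);
  DBMS_MLE.drop_context(l_ctx);
END;
/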

The user of DBMS_MLE must have the EXECUTE DYNAMIC MLE and EXECUTE ON JAVASCRIPT privileges, otherwise you will get ORA-01031 (insufficient privileges).

For further and more detailed examples, check Lucas Jellema‘s and Stefan Dobre‘s articles.

One might wonder, why run JavaScript code from within Oracle?

Actually, there are several reasons why we would want to run JavaScript from within the Oracle database.

Sean Stacey outlined 3 reasons in his blog post How to Run JavaScript In Oracle Database 21c:

  1. Use JavaScript with your APEX applications: Stefan Dobre wrote an excellent blog post entitled JavaScript as a Server-Side Language in Oracle APEX 20.2
  2. Use your existing JavaScript programs and run them directly against your Oracle database without having to worry about re-writing the logic in PL/SQL. There are about 100K reusable JS libraries that can be leveraged.
  3. Not to use the Oracle database simply as a bag of tables with data. There is no need to move the data out of the database to run your JavaScript code against it.

There are several other reasons as well. There are more software developers interested in writing JS code than PL/SQL. R and Java are already part of the Oracle database, so it is natural that JavaScript is being embedded as well. Most likely Python, Ruby, etc. are also in the pipeline.

As a side note (on the humorous side), here is a Twitter screenshot for the ones watching “Emily in Paris” on Netflix 🙂

Two important additional resources:

Oracle Database Multilingual Engine (MLE): GraalVM in the Database

Executing JavaScript from PL/SQL in Oracle Database 21c with Multi Language

Oracle JavaScript Extension Toolkit (Oracle JET)