Dontcheff

Archive for the ‘DBA’ Category

Automatic Segment Advisor in the Oracle Database

In Autonomous, Database tuning, DBA, Oracle database, Oracle internals on July 1, 2022 at 06:50

“From my experience, the best advisors help in three ways: encourage you to look at the problem or opportunity from multiple angles; help you balance the tug of the short-term with important long-term priorities; and ask the tough questions you need to know to reach the best solution.” Margo Georgiadis

The Oracle Segment Advisor identifies segments that have space which can be reclaimed. However, the Automatic Segment Advisor can at times be resource-intensive and even slow down your database:

Why is this happening, given that the Automatic Segment Advisor does not analyze every database object? Here is how it works internally: the advisor examines the database statistics, samples segment data, and then selects the following objects to analyze:

  • Tablespaces that have exceeded a critical or warning space threshold
  • Segments that have the most activity
  • Segments that have the highest growth rate

In addition, the Automatic Segment Advisor evaluates tables that are at least 10MB in size and have at least 3 indexes to determine the amount of space saved if the tables are compressed with advanced row compression.

Now, here is the important part: if a database object is selected for analysis by the advisor but the maintenance window expires before the advisor can process the object, the object is included in the next Automatic Segment Advisor run. So, at some point you may end up in a situation where lots of objects have to be analyzed. During the maintenance window, the following clients/tasks, which are the predefined automated maintenance tasks, are run:
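
In SQL terms, you can list these clients and their status with a simple query (a minimal sketch; DBA_AUTOTASK_CLIENT is the documented view):

-- List the predefined automated maintenance tasks (clients) and their status
SELECT client_name, status, consumer_group
  FROM dba_autotask_client;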

Sometimes, it makes sense to disable the auto space advisor as you cannot change the set of tablespaces and segments that the Automatic Segment Advisor selects for analysis. You can, however, enable or disable the Automatic Segment Advisor task, change the times during which the Automatic Segment Advisor is scheduled to run, or adjust automated maintenance task system resource utilization. Especially in a situation like this:

Often, ADDM may also point to the problem with the Automatic Segment Advisor task:

Here is how to disable (and enable) the tasks individually. The main switch is controlled by DBMS_AUTO_TASK_ADMIN.DISABLE().
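
For example, to switch off only the Automatic Segment Advisor, and to bring it back later, something along these lines should do (a sketch using the documented client name 'auto space advisor'):

-- Disable only the Automatic Segment Advisor autotask
BEGIN
  DBMS_AUTO_TASK_ADMIN.DISABLE(
    client_name => 'auto space advisor',
    operation   => NULL,
    window_name => NULL);
END;
/

-- Re-enable it later
BEGIN
  DBMS_AUTO_TASK_ADMIN.ENABLE(
    client_name => 'auto space advisor',
    operation   => NULL,
    window_name => NULL);
END;
/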

Even after disabling the entire autotask job by running DBMS_AUTO_TASK_ADMIN.DISABLE(), DBA_AUTOTASK_TASK will still show the autotask client as enabled. For this, you need to disable the jobs individually as shown above. And in a multitenant environment, the CDB and the PDBs have their own autotasks: disabling the CDB's autotask will not affect the PDBs, so you will have to do it for each pluggable database.

Starting with 12.2, there is a parameter called ENABLE_AUTOMATIC_MAINTENANCE_PDB that can be used to enable or disable the running of automated maintenance tasks for all the PDBs in a CDB or for individual PDBs in a CDB. If you change ENABLE_AUTOMATIC_MAINTENANCE_PDB in the CDB root from TRUE to FALSE, the new value takes effect in the root and in all the PDBs in the CDB.
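
A minimal sketch (PDB1 is a made-up pluggable database name):

-- In the CDB root: FALSE turns off automated maintenance tasks in the root and all PDBs
ALTER SYSTEM SET enable_automatic_maintenance_pdb = FALSE SCOPE = BOTH;

-- Or inside one individual PDB only
ALTER SESSION SET CONTAINER = PDB1;
ALTER SYSTEM SET enable_automatic_maintenance_pdb = FALSE SCOPE = BOTH;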

If you get into a situation where the Automatic Segment Advisor is consuming lots of resources and slowing down the database during the maintenance windows, do one of the following:

  1. Disable the autotask client for the segment advisor and run it manually on an as-needed basis.
  2. Increase the maintenance window duration from the default: the weekday windows start at 10 p.m. (Monday to Friday) and end at 2 a.m. Often 4 hours is just not enough. The weekend window is 20 hours long and in most cases long enough (see the sketch below).
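
Here is a sketch of option 2, extending the Monday window from 4 to 8 hours (repeat for the other weekday windows as needed):

-- Extend the Monday maintenance window to 8 hours
BEGIN
  DBMS_SCHEDULER.DISABLE(name => 'SYS.MONDAY_WINDOW', force => TRUE);
  DBMS_SCHEDULER.SET_ATTRIBUTE(
    name      => 'SYS.MONDAY_WINDOW',
    attribute => 'DURATION',
    value     => numtodsinterval(8, 'hour'));
  DBMS_SCHEDULER.ENABLE(name => 'SYS.MONDAY_WINDOW');
END;
/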

In the Autonomous Database, you have access to dba_autotask_client, etc., and you can disable and enable the auto space advisor task; however, you do not have full visibility on dba_scheduler_window_groups, etc. Some columns just show as NULL. Still, the Performance Hub in ADB can give you some idea of what is going on:

And finally, here are some other situations that might require disabling the Auto Space Advisor Job:

  • ORA-01555 While Running Auto Space Advisor Job on Object wri$_adv_objspace_trend_data (Doc ID 2576430.1)
  • Auto Space Advisor is Taking More Time due to Recursive Query Taking a Long Time (Doc ID 2382419.1)
  • SEGMENT ADVISOR not working as expected for LOB or SYS_LOB SEGMENT (Doc ID 988744.1)

User defined locks: enq: UL – contention

In Database tuning, DBA, Oracle database, Oracle internals on June 13, 2022 at 17:30

DBAs are familiar with most wait events; however, there is one called “enq: UL – contention” which does not pop up that often:

The foreground events above are from an Oracle 19c database. Clearly, we see a lot of user-defined lock waits: enq: UL – contention. They usually appear when the DBMS_LOCK package is being used.

There are 3 enqueue types defined in Oracle as “User Type” locks:

  1. TM – DML Enqueue called against base tables or partitions for operations that need to be coordinated
  2. TX – Transaction Enqueue used to protect transaction information
  3. UL – User Lock Enqueue used when an application makes use of the DBMS_LOCK package

MOS has a note Resolving Issues Where ‘enq: UL – contention’ Waits are Occurring (Doc ID 1915980.1) which can help us identify what session is holding the lock and what part of the application it is currently executing.

The ‘UL’ (User-defined Lock) enqueue is a lock created and defined by a developer (or, seldom, a DBA) using the DBMS_LOCK package. The DBMS_LOCK package allows the application developer to request, convert and release locks which are independent of the locks provided automatically by Oracle. These routines are meant for synchronization within the application and augment the locking already provided by the database.

Locks are automatically released at the end of a session. DBMS_LOCK.RELEASE explicitly releases a lock previously acquired using the REQUEST function.
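
For reference, here is a minimal sketch of the typical DBMS_LOCK pattern (the lock name MY_APP_LOCK is made up):

DECLARE
  l_lockhandle VARCHAR2(128);
  l_result     INTEGER;
BEGIN
  -- Map a lock name to a unique lock ID (registered in DBMS_LOCK_ALLOCATED)
  DBMS_LOCK.ALLOCATE_UNIQUE(lockname => 'MY_APP_LOCK', lockhandle => l_lockhandle);
  -- Request the lock in exclusive mode; 0 means success, 1 means timeout
  l_result := DBMS_LOCK.REQUEST(lockhandle        => l_lockhandle,
                                lockmode          => DBMS_LOCK.X_MODE,
                                timeout           => 5,
                                release_on_commit => FALSE);
  -- ... do the work that needs to be serialized ...
  l_result := DBMS_LOCK.RELEASE(lockhandle => l_lockhandle);
END;
/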

Wondering how to find who has allocated a user lock via DBMS_LOCK?

Here is the query from the link above showing how to find the active blocking situations around DBMS_LOCK:

SELECT blocker.username blocker, blocker.sid blocker_session
     , blocked.username blocked, blocked.sid blocked_session
  FROM v$session blocked
  JOIN v$session blocker
  ON (blocked.blocking_session = blocker.sid)
 WHERE blocked.wait_class = 'Application'
   AND blocked.event='enq: UL - contention';

The DBMS_LOCK_ALLOCATED table is periodically cleared out by calling the allocate_unique() procedure when the modulus of the lockid is 0 and the expiration column is less than the current date. On every 100th attempt to create a new entry, allocate_unique purges expired entries from the table. If the entry already exists, allocate_unique updates the expiry time.

There are a couple of important MOS notes related to the DBMS_LOCK_ALLOCATED table:

The first one, called DBMS_LOCK_ALLOCATED Table Keeps Growing Eventually Depleting Tablespace (Doc ID 1054343.6), explains how to delete entries for locks that are no longer being used. The second one, entitled How To Cleanup And Shrink DBMS_LOCK_ALLOCATED? (Doc ID 1065384.1), is a bit similar, and it warns that Bug 2624130 can make the delete very slow. In that case, one option is to use DBMS_REDEFINITION, as described in the same note.

The structure of the DBMS_LOCK_ALLOCATED table is rather simple: it has only 3 columns: name, lockid and expiration.
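
So inspecting it is trivial:

-- List the registered user locks and their expiration dates
SELECT name, lockid, expiration
  FROM sys.dbms_lock_allocated
 ORDER BY expiration;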

There are a few bugs in the Oracle database related to the “enq: UL – contention” wait event:

Bug 30070584 – 13.3 ‘enq: UL – contention’ wait event (OEM related)
Bug 27282543 – 13.2 Frequent UL contention wait events for the node 2 on EMNPRD repository db (OEM)
Bug 26618817 – UL contention from ProcessBuddyAgent jobs in scheduled state

A recent update from 2020 (Oracle 19.3) by Jonathan Lewis confirmed that PL/SQL execution time includes the time spent waiting for UL enqueues, so we should not forget to subtract the wait time for ‘enq: UL – contention’ from the PL/SQL time before we start to worry about how much actual work we are doing in PL/SQL.

If you are working with Oracle Text, you may also observe the “enq: UL – contention” wait event. DBMS_LOCK is used internally by ctxsys.drvdml.com_sync_index, so requesting UL is not a problem in itself. However, fragmentation of the CTXSYS index composition table increases the acquisition time of the UL enqueue, and then this issue appears.

Bottom line: DBMS_LOCK should be used (carefully) in the application code only when there are no other options.

2022: What’s new in the database world?

In Databases, DB2 database, DBA, New features, Oracle database, PostgreSQL, SQL Server on May 23, 2022 at 07:55

The big picture in the global database market is changing rapidly. It is an $80B market right now – the largest software market and growing in double digits year over year.

Gartner has recently published the software market revenue numbers for 2021.

In 2021, the revenue for managed cloud services was more than $39B, which means about half of all global DBMS revenue. Have a look at the drastic change since 2017:

Microsoft and AWS now have about half of the global database market, while Oracle have slipped down to about one fifth of the overall share. Google have entered the top 5 for the first time, while IBM have just lost their 4th position. SAP are out of the top 5.

In terms of database popularity, Oracle are still #1 based on the DB engines rankings:

So, what is new and causing these turbulences?

Google have just announced AlloyDB for PostgreSQL. Compared with standard PostgreSQL, according to Google’s own performance tests, AlloyDB was more than four times faster for transactional workloads, and up to 100 times faster for analytical queries. AlloyDB was also two times faster for transactional workloads than Amazon’s comparable service. This makes AlloyDB a powerful new modernization option for transitioning off of legacy databases. Here are the other offerings from Google Cloud:

The latest from AWS is predominantly around RDS and Redshift (a database based on PostgreSQL). Amazon Redshift RA3 instances are now available in the Asia Pacific (Osaka), Europe (Milan), Middle East (Bahrain) and Africa (Cape Town) regions. Amazon Redshift now offers new enhancements for Audit Logging, which enables faster delivery of logs for analysis by minimizing latency while also adding Amazon CloudWatch as a new log destination. With the latest release, customers can choose to stream audit logs directly to Amazon CloudWatch, which enables customers to perform real-time monitoring. Amazon Redshift now also provides native integration with Microsoft Azure Active Directory (AD), which customers can use for authentication and authorization with tools like Microsoft Power BI. 

Amazon Relational Database Service (Amazon RDS) Performance Insights now makes it easier for you to see the database performance metrics for the exact timeframe you want to analyze, by choosing a custom time window within your retention period. Previously, you could only see metrics in Performance Insights by choosing relative time intervals such as the past 1 hour, the past 24 hours, etc. Amazon RDS for PostgreSQL, MariaDB and MySQL now supports M6i and R6i instances with new instance sizes up to 128 vCPUs and 1,024 GiB RAM. Also, Amazon Aurora Serverless v2 is now generally available.

The latest from Azure is around Change data capture: CDC lets you track all the changes that occur on a database. Though this feature has been available for SQL Server for quite some time, using it with Azure SQL Database is now generally available. When creating a Hyperscale database, you can choose your preferred storage type: read-access geo-redundant storage (RA-GRS), zone-redundant storage (ZRS), or locally redundant storage (LRS) Azure standard storage. The selected storage redundancy option will be used for the lifetime of the database for both data storage redundancy and backup storage redundancy. It is now possible to configure your Azure SQL Database to allow authentication only from Azure Active Directory.

IBM i 7.5, previously known as AS/400, is coming with some “interesting” features. There is a new BOOLEAN data type, and the maximum size of a binary radix index is extended up to 16 TB; it used to be 1.7TB. I do not know many DBAs who enjoy dealing with double-digit TB indexes. The Db2 for i SQL Query Engine (SQE) provides a Query Supervisor which enables real-time monitoring of resource consumption by SQL and native queries. There is now a RESTRICT ON DROP attribute that can be added or removed using the ALTER TABLE (SQL) statement (ALTER TABLE TABLE_NAME ADD RESTRICT ON DROP;). When RESTRICT ON DROP is added, nobody is allowed to delete or drop that file. Even users with *ALLOBJ user special authority will not be allowed to delete the file.

The only change I am aware of in Oracle Database Release 21c, Version 21.5 are the new AE Analytic Views. AE stands for “All Edition”. Each new AE analytic view corresponds to an existing non-AE analytic view. AE views have the same columns as their non-AE counterparts, plus a column that displays the name of the application edition where the editioned object is defined. For the complete list of changes in Oracle 21, check this link.

Oracle Database 21c Upgrade: good to know

In Databases, DBA, New features on April 25, 2022 at 13:57

The latest release of the Oracle database 21.3 can be downloaded from here. 21c is available for Linux, HP-UX and Windows. By using Oracle Fleet Patching & Provisioning, you can perform a direct upgrade to Oracle database release 21c from the following releases:

Oracle Fleet Patching and Provisioning is a software lifecycle management method for provisioning and maintaining Oracle homes. Configuring Oracle Fleet Patching and Provisioning (Oracle FPP) involves creating an Oracle Fleet Patching and Provisioning Server, adding gold images to the server, and creating working copies of gold images to provision software.

The Oracle Grid Infrastructure on which the database to be upgraded is running must be at least of the same release or later than the database release to which you will be upgrading.

The source Oracle home to be upgraded can be either a managed working copy, i.e., an Oracle home provisioned using Fleet Patching and Provisioning, or an unmanaged home, i.e., an Oracle home not provisioned using Fleet Patching and Provisioning. If you are upgrading an unmanaged Oracle home, you have to provide the complete path of the database for upgrade.

Otherwise, without Oracle FPP, you can upgrade directly from 12.2.0.1, any 18c or any 19c.

Here is what you need to know about compatibility:

  • Before upgrading to Oracle Database 21c, you must set the COMPATIBLE initialization parameter to at least 12.2.0 (I am unsure if 11.2.0 is ok as well); see the check after this list
  • In Oracle Database 21c, when the COMPATIBLE initialization parameter is not set in your parameter file, the COMPATIBLE parameter value defaults to 21.0.0
  • Installing earlier releases of Oracle Database on the same computer that is running Oracle Database 21c can cause issues with client connections
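
To verify the current setting before upgrading, and to raise it if needed (the value shown is just an example):

-- Check the current compatibility setting
SELECT name, value FROM v$parameter WHERE name = 'compatible';

-- Raise it if necessary; takes effect after a restart
ALTER SYSTEM SET compatible = '19.0.0' SCOPE = SPFILE;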

Starting with Oracle Database 21c, non-CDB Oracle Database upgrades to non-CDB architecture are desupported:

Option 1: Convert the non-CDB to a PDB before upgrade: with this option, you plug in the non-CDB Oracle Database release to the same release CDB. (For example, plug in a non-CDB Oracle Database Release 19c into an Oracle Database 19c release CDB). Finish converting the non-CDB Oracle Database to a PDB. Then, upgrade the entire CDB, with its PDBs, to Oracle Database 21c

Option 2: Plug in the non-CDB, upgrade, and finish converting the non-CDB to a PDB after upgrade: with this option, you plug in a non-CDB Oracle Database release to an Oracle Database 21c CDB. Upgrade the plugged-in non-CDB Oracle Database to Oracle Database 21c. Then, finish converting the non-CDB Oracle Database to a PDB.

Here is what you need to know about the DBUA and the Oracle home:

  • Starting with Oracle Database 21c, Database Upgrade Assistant (DBUA) is replaced by the AutoUpgrade utility
  • Starting with Oracle Database 21c, the default network administration directory changes from the previous default in the local Oracle home, Oracle_home/network (for example, /u01/app/oracle/product/19.1.0/dbhome_1/network), to a new location
  • The new default location is the shared Oracle Base Home, in the path ORACLE_BASE/homes/HOME_NAME/network/admin
  • Starting with Oracle Database 21c, an Oracle Database installation configures all Oracle Database homes in read-only mode by default

Finally, note what is being changed about security and parameters:

  • Starting with Oracle Database 21c, the DBMS_CRYPTO_TOOLKIT_TYPES data types and the DBMS_CRYPTO_TOOLKIT package are desupported
  • The init.ora parameters UNIFIED_AUDIT_SGA_QUEUE_SIZE, AUDIT_FILE_DEST, AUDIT_SYS_OPERATIONS, AUDIT_SYSLOG_LEVEL and AUDIT_TRAIL have been desupported
  • Desupport of IGNORECASE parameter for passwords
    Starting in Oracle Database 21c, the IGNORECASE parameter for the orapwd file is desupported and all newly created password files are case-sensitive
  • Desupport of DISABLE_DIRECTORY_LINK_CHECK
    The DISABLE_DIRECTORY_LINK_CHECK parameter is desupported, with no replacement

Unified Auditing for Top-Level Statements in the Oracle Database

In DBA, Oracle database, PL/SQL, Security and auditing, SQL on April 4, 2022 at 07:26

Auditing a database undoubtedly has its benefits, but for the DBA it can cause serious headaches in terms of managing space and performance. Especially in the cloud, where security and auditing are even more important than on premises.

Did you know that there are 239 available audit actions in 19c? select * from AUDIT_ACTIONS; shows them all.

A new database feature introduced in 19c and 21c, called “auditing top-level statements”, can help us minimize the space and performance issues caused by auditing by ignoring SQL statements that run from within PL/SQL procedures, triggers or functions, as they may be less relevant for auditing purposes.

Inside the Oracle database, a top-level statement is a statement that is executed directly by a user, not a statement that is run from within a PL/SQL procedure.

Top-level statements from the user SYS can also be audited. The unified audit trail can grow immensely if all statements are audited, especially when a large number of audit trail records are generated for a single statement in the unified audit policy. By auditing only top-level statements, we can reduce the auditing of recursive SQL statements. An example of this scenario would be the audits for a DBMS_STATS.GATHER_DATABASE_STATS call, which can generate over 200,000 individual audit records.

The unified audit records are written to a table in the AUDSYS schema called AUD$UNIFIED. Access to the AUDSYS.AUD$UNIFIED table is provided via the view SYS.UNIFIED_AUDIT_TRAIL. Here is the size of AUD$UNIFIED in an autonomous database where I have not configured any out-of-the-box auditing. The auditing options enabled by default in ADB are logon, alter user, insert, update and change password:
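
The size check behind such a screenshot is a plain dba_segments query, something like:

-- Size of the unified audit trail table in MB
SELECT owner, segment_name, ROUND(bytes/1024/1024) AS size_mb
  FROM dba_segments
 WHERE segment_name = 'AUD$UNIFIED';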

Note that there are two components to the Audit Trail: (1) the AUDSYS schema in the database and (2) the spillover files on disk. More details in Receiving ORA-55940: An error occurred during execution of ktliIngestExternData by Mike Dietrich.

The X$UNIFIED_AUDIT_TRAIL table is owned by SYS; it is not in the AUDSYS schema! Check Unified Auditing – some insights by Thomas Saviour and X$UNIFIED_AUDIT_TRAIL is slow by Magnus Johansson for more details.

Now, configuring a Unified Audit Policy to capture only Top-Level statements is simple:

The ONLY TOPLEVEL clause in the CREATE AUDIT POLICY statement enables you to audit only the SQL statements that are directly issued by an end user by honoring the audit configuration in the audit policy.

In my autonomous database, I have decided to top-level audit all actions from SYS and ADMIN:
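
A hedged sketch of what such a policy could look like (the policy name is made up):

-- Audit all actions, but only top-level statements
CREATE AUDIT POLICY toplevel_all_actions
  ACTIONS ALL
  ONLY TOPLEVEL;

-- Apply the policy to the SYS and ADMIN users
AUDIT POLICY toplevel_all_actions BY SYS, ADMIN;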

Here are other examples of auditing Top-Level statements.

It is important to understand that the ONLY TOPLEVEL clause has no impact on the output for an individual unified audit trail record. The only effect that ONLY TOPLEVEL has on a policy is to limit the number of records generated for the given unified audit policy.

P.S. 10 years ago I wrote a blog post entitled Auditing vs. Performance in the Oracle Database. This topic is, after all, still relevant and important. With a colleague of mine from Australia, about a month ago, we could not figure out why auditing was causing performance issues – after unifying all possible efforts, not records 🙂 The problem could only be reproduced on Exadata. Still a mystery to us.

Running JavaScript from within the Oracle Database using DBMS_MLE

In DBA, New features, PL/SQL on March 8, 2022 at 08:39

JavaScript is by far the most used language according to Github’s Octoverse Report.

As of 2021 JavaScript is the most commonly used programming language among software developers around the world, with nearly 65 percent of respondents stating that they used JavaScript. HTML/CSS, Python, SQL, and Java rounded out the top five most widely used programming languages around the world.

Did you know that you can run JavaScript code from right inside the Oracle Database?

By using the new 21c PL/SQL multilingual engine package called DBMS_MLE, we can now run JavaScript from within the Oracle database.

The new package has 18 procedures and 1 function. So, it might take some time to understand what they do and how they should be used.

As of today, it is not possible to create stored JavaScript procedures in the database. But what we can do for now is store them in a CLOB column in a table and create an executable PL/SQL procedure which will select the CLOB, i.e., the JavaScript procedure, and execute it via DBMS_MLE.EVAL.

We can not only run JavaScript code inside the Oracle Database but also exchange data seamlessly between PL/SQL and JavaScript. The JavaScript code itself can execute PL/SQL and SQL through built-in JavaScript modules. JavaScript data types are automatically mapped to Oracle Database data types and vice versa.

Well, here is a simple example of how to run JavaScript from the database:
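
A minimal sketch of the pattern (the console.log output ends up in DBMS_OUTPUT):

SET SERVEROUTPUT ON

DECLARE
  l_ctx    DBMS_MLE.context_handle_t;
  l_source CLOB;
BEGIN
  -- Create an execution context for the multilingual engine
  l_ctx := DBMS_MLE.create_context();
  -- The JavaScript source to run
  l_source := q'~console.log('Hello from JavaScript inside the Oracle Database!');~';
  -- Evaluate the snippet as JavaScript
  DBMS_MLE.eval(l_ctx, 'JAVASCRIPT', l_source);
  -- Clean up the context
  DBMS_MLE.drop_context(l_ctx);
END;
/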

The user of DBMS_MLE must have the EXECUTE DYNAMIC MLE privilege and EXECUTE ON JAVASCRIPT; otherwise you will get ORA-01031.

For further and more detailed examples, check Lucas Jellema‘s and Stefan Dobre‘s articles.

One might wonder, why run JavaScript code from within Oracle?

Actually, there are several reasons why we would want to run JavaScript from within the Oracle database.

Sean Stacey outlined 3 reasons in his blog post How to Run JavaScript In Oracle Database 21c:

  1. Use JavaScript with your APEX applications: Stefan Dobre wrote an excellent blog post entitled JavaScript as a Server-Side Language in Oracle APEX 20.2
  2. Use your existing JavaScript programs and run them directly against your Oracle database without having to worry about re-writing the logic in PL/SQL. There are about 100K reusable JS libraries that can be leveraged.
  3. Not to use the Oracle database simply as a bag of tables with data. There is no need to move the data out of the database to run your JavaScript code against it.

There are several other reasons as well. There are more software developers interested in writing JS code than PL/SQL. R and Java are already part of the Oracle database, so it is natural that JavaScript is being embedded as well. Most likely, Python, Ruby, etc. are also in the pipeline.

As a side note (on the humorous side), here is a Twitter screenshot for the ones watching “Emily in Paris” on Netflix 🙂

Two important additional resources:

Oracle Database Multilingual Engine (MLE): GraalVM in the Database

Executing JavaScript from PL/SQL in Oracle Database 21c with Multi Language

Oracle JavaScript Extension Toolkit (Oracle JET)


Viewing cost saving recommendations in OCI

In Cloud, DBA, OCI on February 24, 2022 at 15:45

Do not save what is left after spending, but spend what is left after saving – Warren Buffett

Tracking and managing usage and cost in the Cloud is often neglected. It is often seen as complex and difficult. Here is what we can do in Oracle Cloud Infrastructure. On the rightmost console screen of OCI, under Account Center, we now have Billing and Cost Management Savings. It shows how many savings have already been implemented and how many are still pending:

The Best practices framework for Oracle Cloud Infrastructure describes the possible actions:

  • Evaluate the Different Pricing Models
  • Implement a Compartment Structure That Fits Your Organization
  • Set Up Compartment Quota Policies to Control Resource Usage
  • Implement Cost Tracking Tags for Flexible Cost Tracking
  • Define Budgets
  • Enable Block Volume Performance Auto Tuning
  • Implement Object Storage, Object Lifecycle Management
  • Leverage Cost Reports
  • Track and Optimize Your Spending by Using Cost Analysis
  • Implement a Process to Terminate or STOP Unused Resources
  • Evaluate What Compute Shape Fits Your Workload
  • Become Familiar with Cloud Advisor

If you go under recommendations (under Cloud Advisor), the screen will list all types and you can also filter on a category:

As I have several databases on OCI, both ADB and Database Systems, both Oracle and MySQL, spread among a couple of regions, it is rather important for me to manage the cost, as I use the databases on a daily basis and often do not stop them during the week.

As you can see above, my pending recommendation is about defining a lifecycle policy rule that automatically moves Object Storage data to lower-cost tiers when possible, which in practice means archive storage.

It is worth going through all the recommendations; although some of them are rather obvious, they might not always come to mind to implement.

Viewing Autonomous Database usage is straightforward, as you pay only for storage when the database is stopped (the charges below are for February):

but for MySQL, especially if using HeatWave, the charges are slightly different:

Finally, here are some useful links:

Cloud Advisor Overview

Now available: Oracle Cloud Advisor

10 effective ways to save cost in the cloud: Part 1

10 effective ways to save cost in the cloud: Part 2

The new CS_SESSION package and DB_NOTIFICATIONS view in the Autonomous Database

In Autonomous, DBA, Oracle database on January 20, 2022 at 09:19

Two new objects have been recently introduced in the Oracle Autonomous Database on Shared Exadata Infrastructure.

1. The first one is the CS_SESSION package.

When you open a connection to the Autonomous Database, that session is assigned a consumer group. Consumer groups affect concurrency and the DOP (degree of parallelism): the HIGH service runs in parallel while the LOW service runs in serial. So, for example, if you have a PL/SQL procedure or function that has to execute statements in the same session with changing consumer groups, the way to do it is to use the new CS_SESSION.SERVICE_NAME procedure.

The number of concurrent statements run in the database depends on 2 factors: the type of the service name and the number of Oracle CPUs provisioned for the ADB:

The new package CS_SESSION contains only one procedure: SERVICE_NAME. My current version is 19.14; I am not sure if the package/procedure is included in all lower RUs, but most likely it is.

The valid values accepted as an IN parameter are: HIGH, MEDIUM, LOW, TP and TPURGENT.

After running the procedure, Oracle does not reset any session attributes. Anything the user set for their session before calling this procedure will continue as-is. 

The ADMIN user is granted EXECUTE privilege on CS_SESSION with GRANT OPTION. The privilege is also granted to DWROLE without the GRANT OPTION.

Here is how to change the consumer group from LOW to TPURGENT:
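
Something like this, run inside the session whose consumer group should change:

BEGIN
  CS_SESSION.SERVICE_NAME('TPURGENT');
END;
/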

You might get the following error:

ORA-02097: parameter cannot be modified because specified value is invalid
ORA-01031: insufficient privileges
ORA-06512: at “C##CLOUD$SERVICE.CS_SESSION”, line 142

Note there is an open bug for this.

You can create an AFTER SET CONTAINER trigger if you would like to limit the values a user can set for the session's consumer group. You might not want all users to be able to set TPURGENT, but allow them to go with TP only.

2. The second new object is the DB_NOTIFICATIONS view which stores information about maintenance status notifications and timezone version upgrade notifications for the ADB instance. Let us describe the view

and check what is in it:

As we can see, there was a patch run on the data dictionary and there is more maintenance scheduled for the 19th of January.
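
In SQL terms, the checks above are simply (the exact column list may vary by ADB version, hence the plain SELECT *):

DESCRIBE DB_NOTIFICATIONS

SELECT * FROM DB_NOTIFICATIONS;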

3. Good to know that we can now use GitHub Raw URLs with DBMS_CLOUD APIs to access source files that reside on a GitHub Repository.
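
A hypothetical sketch of fetching a file from a public repository via its Raw URL (the user, repository and file names are made up):

-- Copy a file from a public GitHub repo into a database directory
BEGIN
  DBMS_CLOUD.GET_OBJECT(
    credential_name => NULL,  -- public repository; use a credential for private ones
    object_uri      => 'https://raw.githubusercontent.com/myuser/myrepo/main/data.csv',
    directory_name  => 'DATA_PUMP_DIR');
END;
/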

4. Also, the DBMS_PIPE package is now available in the Autonomous Database.

Moving Autonomous Databases across regions

In Autonomous, DBA, OCI, Oracle database, Replication on January 3, 2022 at 13:26

With the new OCI regions, we might want to move our databases to a closer location – in my case moving my databases from Frankfurt to Stockholm.

The first thing that comes to mind is to enable Autonomous Data Guard and then switch over. However, you need a paired region – we cannot create a standby database just anywhere:

The list of Autonomous Data Guard Paired Regions shows that for each Source Region, we have 1-4 Paired Regions.

You need to be subscribed to a region before it appears on the list and you can use it. And note: once you subscribe, there is no unsubscribe! You will need to apply for a limit increase.

I am told that soon, once a good standing payment history has been established, you will be eligible for unlimited regions. 

Another option is simply to create a clone:

Note that if you are using a clone from a backup, you cannot have the clone in another region; no idea why not! As you can see from the screenshot below, I am not getting a drop-down list of regions:

So, let us create the database clone in Stockholm from the current AEG database running in Frankfurt:

Once the database is cloned, you can optionally terminate the copy in Frankfurt. But do not do it before you have verified the new clone as you might get an (unknown) error, something like this:

In case you do not want to deal with support, just create a new database and move the data with Data Pump.

My home region is Germany Central (Frankfurt). So, am I now able to change my home region to Sweden Central (Stockholm) instead of Germany Central (Frankfurt)? Alas, this is not possible. Not yet, at least.

According to the Metalink document Change Home Region in OCI console (Doc ID 2389905.1), the home region of the tenancy is fixed at creation time and cannot be moved. If you still wish to have the region changed, the only way to get it done would be to re-create the tenancy. That would mean removal of the current tenancy with all its resources and creating a new one.

Having the primary database in one region and a standby database in another is good practice in terms of DR. Even more complex scenarios are available. How about having a sharded Oracle Database spread among OCI, AWS and Azure? Possible but I would avoid such a complexity.

Operating System access from within the Autonomous Database

In Autonomous, DBA, Oracle database, OS on December 1, 2021 at 09:47

Oracle Autonomous Database does not let us access the operating system. Not even DBA access is allowed.

So, how can we view the content of ADB log or trace files from the OS? How about the alert.log file? “Not needed as all is autonomous” is the common answer. But that is not good enough for most DBAs.

Since September 2021, SQL Trace is supported in the Autonomous Database. If DBAs enable SQL Trace, the same tracing information saved to the trace file on Cloud Object Store is available in the SESSION_CLOUD_TRACE view in the session where the tracing was enabled.

But how about other trace files? It is still possible via Oracle’s internal views. In fact, we don’t have direct access to the V_$ views but rather to their V$ synonyms.

Full access is available to V$DIAG_OPT_TRACE_RECORDS, V$DIAG_SQL_TRACE_RECORDS, V$DIAG_SESS_SQL_TRACE_RECORDS and V$DIAG_SESS_OPT_TRACE_RECORDS.

The names of the trace files and their content are visible through V$DIAG_TRACE_FILE_CONTENTS:

We can see the contents of a given trace file, say eqp12pod2_p009_193404.trc. The file has 253 lines, so here is a screenshot from the top of the file content:
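
The underlying query is a simple filter on the file name:

-- Read the content of one specific trace file
SELECT payload
  FROM v$diag_trace_file_contents
 WHERE trace_filename = 'eqp12pod2_p009_193404.trc';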

We can view the latest trace files generated by the database from V$DIAG_TRACE_FILE:
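
For example, to list the most recently changed trace files:

SELECT trace_filename, change_time
  FROM v$diag_trace_file
 ORDER BY change_time DESC;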

Here are a couple of blog posts and articles related to the topic:

How about the alert.log file? The content is in X$DBGALERTEXT:

select to_char(ORIGINATING_TIMESTAMP,'YYYYMMDD-HH24:MI:SS'), MESSAGE_TEXT
from X$DBGALERTEXT
order by RECORD_ID;

However, in ADB, we do not have access to X$DBGALERTEXT. The good news is that we have access to V$DIAG_ALERT_EXT, which shows the contents of the XML-based alert log in the Automatic Diagnostic Repository (ADR) for the current container (PDB). V$DIAG_ALERT_EXT, which came in 12.2, is sort of a subset of X$DBGALERTEXT. Here is how you can search for ORA- errors. Use:

SELECT ORIGINATING_TIMESTAMP, MESSAGE_LEVEL, MESSAGE_TEXT, PROBLEM_KEY
FROM V$DIAG_ALERT_EXT
WHERE MESSAGE_TEXT LIKE '%ORA-%' AND ORIGINATING_TIMESTAMP > sysdate-7
ORDER BY ORIGINATING_TIMESTAMP DESC;

to find the ORA- errors during the past week.

Possible message level values are:

1: CRITICAL: critical errors
2: SEVERE: severe errors
8: IMPORTANT: important message
16: NORMAL: normal message

And here is how to see the top of the alert.log file content from the last 24 hours:

SELECT to_char(ORIGINATING_TIMESTAMP,'DD.MM.YYYY-HH24:MI:SS')||': '||MESSAGE_TEXT as "alert.log"
FROM V$DIAG_ALERT_EXT
WHERE ORIGINATING_TIMESTAMP > sysdate-1
ORDER BY ORIGINATING_TIMESTAMP DESC;

Even though we do not have server OS access, we can still create directories from within the Autonomous Database.

CREATE DIRECTORY creates the database directory object and also creates the file system directory if it does not already exist. If the file system directory exists then CREATE DIRECTORY only creates the database directory object.

It is also possible to create subdirectories. For example, the following command creates the database directory object version_patches and the file system directory version/patches:

CREATE DIRECTORY version_patches AS 'version/patches';

You can list the contents of a directory in the Autonomous Database by using the function DBMS_CLOUD.LIST_FILES.

The following query lists the contents of the VERSION directory:

SELECT * FROM DBMS_CLOUD.LIST_FILES('VERSION');

You can copy files between Object Store and a directory in the Autonomous Database.

Starting with Oracle Database 21c, Data Pump can perform exports from Oracle Autonomous Database into dump files in a cloud object store.

Thus, now we can easily migrate data out from an Oracle Autonomous Database and import it into another location.

Here is an example assuming that we have already created the credential_name JMD_OBJ_STORE_CRED:

Details on how to create the credentials can be found in the Oracle Cloud Infrastructure User Guide which has “only” 5952 pages!

Beware of Bug 33323028 – DATA PUMP EXPORT TO OCI OBJECT STORAGE FAILS – which is present even in 21.4: the Object Store ODM library is not enabled by default; see How To Setup And Use DBMS_CLOUD Package (Doc ID 2748362.1).

Conclusion: as you can see, OS access is close to zero. There are very few views that let us see what is residing on the operating system. What I like most is (1) the possibility to run SQL Trace, (2) being able to view the contents of the XML-based alert log and (3) the ability to run an export directly to object store (21c and above). I would like to see access to X$DBGALERTEXT.