Dontcheff

Database links within pluggable databases

In Databases, DBA, Oracle database on November 18, 2022 at 16:15

Sometimes, you might need a database link between 2 schemas within the same (pluggable) database.

Why? There are several reasons. Here is one: maybe you want to refresh one schema from another using Data Pump via a network link? This is a very common practice for development databases. In this blog post, I will show step-by-step how it can be done.

Here is what is needed before you can start: two tnsnames.ora entries pointing to the same service name, just with different names. I will also need a logical directory, say schema_dir, although I will not place anything in it.

I am doing the schema cloning within the same PDB in a 21c CDB, although nothing is preventing us from doing the same in 12c, 18c or 19c.

The schema julian will be duplicated into another schema called kerry:

julian1 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = PDB1.laika2.laika.oraclevcn.com)
)
)

julian2 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = PDB1.laika2.laika.oraclevcn.com)
)
)

[oracle@aeg admin]$ sqlplus sys/password@//localhost:1521/PDB1.laika2.laika.oraclevcn.com as sysdba

SQL*Plus: Release 21.0.0.0.0 - Production on Thu Nov 10 10:48:47 2022
Version 21.1.0.0.0

Copyright (c) 1982, 2020, Oracle. All rights reserved.

Connected to:
Oracle Database 21c EE High Perf Release 21.0.0.0.0 - Production
Version 21.1.0.0.0

SQL> CREATE OR REPLACE DIRECTORY schema_dir AS '/u01/app/oracle/homes/OraDB21Home1/datapump';

Directory created.

SQL> GRANT READ, WRITE ON DIRECTORY schema_dir TO julian;

Grant succeeded.

SQL> conn julian/password@julian1 as sysdba
Connected.

SQL> create user kerry identified by password;

User created.

SQL> grant dba to kerry;

Grant succeeded.

SQL> conn julian/password@julian1
Connected.

-- Now, let us create the database link:

SQL> create database link data_pump_link connect to kerry identified by password using 'julian2';

Database link created.

SQL> select sysdate from dual@data_pump_link;
select sysdate from dual@data_pump_link
*
ERROR at line 1:
ORA-02085: database link DATA_PUMP_LINK.LAIKA2.LAIKA.ORACLEVCN.COM connects to
PDB1.LAIKA2.LAIKA.ORACLEVCN.COM

SQL> show parameter global

NAME                  TYPE        VALUE
--------------------- ----------- -----
allow_global_dblinks  boolean     FALSE
global_names          boolean     TRUE
global_txn_processes  integer     1

SQL> alter system set global_names=false scope=memory;

System altered.

SQL> select sysdate from dual@data_pump_link;

SYSDATE
---------
10-NOV-22

SQL>

-- and now it is time to do the import:

[oracle@aeg datapump]$ impdp julian/password@julian1 DIRECTORY=schema_dir NETWORK_LINK=data_pump_link schemas=julian remap_schema=julian:kerry

Import: Release 21.0.0.0.0 - Production on Thu Nov 10 11:12:22 2022
Version 21.1.0.0.0

Copyright (c) 1982, 2020, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 21c EE High Perf Release 21.0.0.0.0 - Production
Starting "JULIAN"."SYS_IMPORT_SCHEMA_01": julian/@julian1 DIRECTORY=schema_dir NETWORK_LINK=data_pump_link schemas=julian remap_schema=julian:kerry
Estimate in progress using BLOCKS method…
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 2.162 GB
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"KERRY" already exists

Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ORACLE_OBJECT_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PASSWORD_HISTORY
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/DB_LINK
Processing object type SCHEMA_EXPORT/TABLE/TABLE
ORA-39129: Object type TABLE: "JULIAN"."SYS_IMPORT_SCHEMA_01" not imported. Name conflicts with the master table

. . imported "KERRY"."SALES" 37790720 rows
. . imported "KERRY"."BLOGS" 73991 rows
. .
. .
. . imported "KERRY"."RDBMS_BRANDS" 12 rows
. . imported "KERRY"."SHARDINGADVISOR_ECPREDS" 1 rows
. . imported "KERRY"."SHARDINGADVISOR_PREDS" 4 rows
. . imported "KERRY"."SHARDINGADVISOR_CONFIGDETAILS" 0 rows
. . imported "KERRY"."SHARDINGADVISOR_CONFIGURATIONS" 0 rows
. . imported "KERRY"."SHARDINGADVISOR_IMPORTANT_TABS" 0 rows
. . imported "KERRY"."SHARDINGADVISOR_QUERYTYPES" 0 rows
. . imported "KERRY"."USER_TABLE" 0 rows
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
ORA-39083: Object type COMMENT failed to create with error:
ORA-00942: table or view does not exist

Failing sql is:
COMMENT ON TABLE "KERRY"."SYS_IMPORT_SCHEMA_01" IS 'Data Pump Master Table IMPORT SCHEMA '

Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
ORA-39083: Object type INDEX:"KERRY"."SYS_MTABLE_00001374A_IND_3" failed to create with error:
ORA-00942: table or view does not exist

...

Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_INDEX/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Processing object type SCHEMA_EXPORT/MATERIALIZED_ZONEMAP
Job "JULIAN"."SYS_IMPORT_SCHEMA_01" completed with 9 error(s) at Thu Nov 10 11:34:49 2022 elapsed 0 00:22:22

[oracle@aeg datapump]$

Note the errors related to the master table SYS_IMPORT_SCHEMA_01, which Data Pump uses for processing exports and imports. Since the import runs within the same pluggable database, there is a conflict in replacing the master table, and hence these errors can be safely ignored.

Note also the importance of GLOBAL_NAMES when creating the DB link.
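
If you prefer to keep GLOBAL_NAMES set to TRUE, an alternative (a sketch based on the ORA-02085 message above) is to name the database link after the global name of the target database, so that no parameter change is needed:

SQL> create database link PDB1.LAIKA2.LAIKA.ORACLEVCN.COM connect to kerry identified by password using 'julian2';

SQL> select sysdate from dual@PDB1.LAIKA2.LAIKA.ORACLEVCN.COM;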

List of the Oracle Database 23c New Features

In Databases, DBA, New features, Oracle database on November 1, 2022 at 10:28

On October 18th 2022, Oracle announced 23c Beta, the new long-term release.

Here is a compiled list of the new features I am aware of:

OLTP and Core DB:

Accelerate SecureFiles LOB Write Performance
Automatic SecureFiles Shrink
Automatic Transaction Abort
Escrow Column Concurrency Control
Fast Ingest (Memoptimize for Write) Enhancements
Increased Column Limit to 4k
Managing Flashback Database Logs Outside the Fast Recovery Area
Remove One-Touch Restrictions after Parallel DML
Annotations – Define Metadata for Database Objects
SELECT Without the FROM Clause
Usage of Column Alias in GROUP BY and HAVING
Table Value Constructor – Group Multiple Rows of Data in a Single DML or SELECT statement
Better Error Messages to Explain why a Statement Failed to Execute
New Developer Role: dbms_developer_admin.grant_privs('JULIAN');
Schema Level Privileges
RURs are transitioning to MRPs (available on Linux x86-64)
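
As an illustration of a few of the SQL enhancements above, here is a minimal sketch (the sales table and its columns are hypothetical):

-- SELECT without the FROM clause:
SELECT sysdate;

-- column alias in GROUP BY and HAVING:
SELECT lower(region) reg, count(*) cnt FROM sales GROUP BY reg HAVING cnt > 10;

-- table value constructor:
SELECT * FROM (VALUES (1, 'DBA'), (2, 'DEV')) roles (id, name);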

Application Development:

Aggregation over INTERVAL Data Types
Asynchronous Programming
Blockchain Table Enhancements
DEFAULT ON NULL for UPDATE Statements
Direct Joins for UPDATE and DELETE Statements
GROUP BY Column Alias or Position
Introduction to JavaScript Modules and MLE Environments
MLE – Module Calls
New Database Role for Application Developers
OJVM Web Services Callout Enhancement
OJVM Allow HTTP and TCP Access While Disabling Other OS Calls
Oracle Text Indexes with Automatic Maintenance
Sagas for Microservices
SQL Domains
SQL Support for Boolean Datatype
SQL UPDATE RETURN Clause Enhancements
Table Value Constructor
Transparent Application Continuity
Transportable Binary XML
Ubiquitous Search With DBMS_SEARCH Packages
Unicode IVS (Ideographic Variation Sequence) Support

Compression:

Improve Performance and Disk Utilization for Hybrid Columnar Compression
Index-Organized Tables (IOTs) Advanced Low Compression

Data Guard:

Per-PDB Data Guard Integration Enhancements

Event Processing:

Advanced Queuing and Transactional Event Queues Enhancements
OKafka (Oracle’s Kafka implementation)
Prometheus/Grafana Observability for Oracle Database

In-Memory:

Automatic In-Memory enhancements for improving column store performance

Java:

JDBC Enhancements to Transparent Application Continuity
JDBC Support for Native BOOLEAN Datatype
JDBC Support for OAuth2.0 for DB Authentication and Azure AD Integration
JDBC Support for Radius Enhancements (Challenge Response Mode a.k.a. Two Factor Authentication)
JDBC Support for Self-Driven Diagnosability
JDBC-Thin support for longer passwords
UCP Asynchronous Extension

JSON:

JSON-Relational Duality View
JSON SCHEMA

RAC:

Local Rolling Patching
Oracle RAC on Kubernetes
Sequence Optimizations in Oracle RAC
Simplified Database Deployment
Single-Server Rolling Patching
Smart Connection Rebalance

Security:

Ability to Audit Object Actions at the Column Level for Tables and Views
Enhancements to RADIUS Configuration
Increased Oracle Database Password Length: 1024 Byte Password
Schema Privileges to Simplify Access Control
TLS 1.3

Sharding:

JDBC Support for Split Partition Set and Directory based Sharding
New Directory-Based Sharding Method
RAFT Replication
UCP Support for XA Transactions with Oracle Database Sharding

Spatial and Graph:

Native Representation of Graphs in Oracle Database
Spatial: 3D Models and Analytics
Spatial: Spatial Studio UI Support for Point Cloud Features
Support for the ISO/IEC SQL Property Graph Queries (SQL/PGQ) Standard
Use JSON Collections as a Graph Data Source

And here are a few links that give you additional information on what will be coming with the new release:

Oracle Database 23c: New feature highlights by Lucas Jellema

Oracle 23c: New features by René Nyffenegger

23c Twitter Summary by Philipp Salvisberg

P.S. Just found out that road cyclists are moving away from 23C and that there are products called Merkur 23C Safety Razor and Glock Model 23C.

DBA_OPERATOR_ACCESS and SYSDATE_AT_DBTIMEZONE in ADB-S

In Autonomous, DBA, OCI, Oracle database on October 13, 2022 at 08:36

In a blog post in 2020, entitled SYSDATE and Time Zones in the Autonomous Database, I covered the sysdate/systimestamp issue in ADB-S. Basically, you are allowed to change the database and session time zones in ADB, but this does not change what SYSDATE and SYSTIMESTAMP return. So, PL/SQL packages, procedures and functions, and in particular all SQL using SYSDATE and SYSTIMESTAMP, might not return what you expect.

But there is now a parameter called SYSDATE_AT_DBTIMEZONE, available at system level. Depending on its value, you see the date and time either based on the default Autonomous Database time zone, Coordinated Universal Time (UTC), or based on the time zone that you set in your database.

Here is how it works. Let us first check the database timezone:

The value of SYSDATE_AT_DBTIMEZONE is the default, FALSE:

With the default value of FALSE, I see GMT time:

If I change from FALSE to TRUE, then I see database TZ time:

If you decide to change the TZ, then you must restart the Autonomous Database instance for the change to take effect.

So, when SYSDATE_AT_DBTIMEZONE is FALSE in a session, calls to SYSDATE and SYSTIMESTAMP return values based on the default Autonomous Database time zone, Coordinated Universal Time (UTC). When SYSDATE_AT_DBTIMEZONE is TRUE in a session, calls to SYSDATE or SYSTIMESTAMP return the date and time based on the database time zone.

In case you need your application to show the database timezone (or a certain TZ) when calling SYSDATE or SYSTIMESTAMP, then change this new parameter to TRUE, set the correct TZ, if needed, and restart!
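
Here is a minimal sketch of the whole flow (the time zone value is just an example, and whether the parameter can also be set at session level is an assumption on my side):

SQL> SELECT dbtimezone FROM dual;
SQL> ALTER DATABASE SET time_zone = 'Europe/Helsinki';  -- requires a restart of the ADB instance
SQL> ALTER SESSION SET sysdate_at_dbtimezone = TRUE;    -- assumption: session-level setting
SQL> SELECT sysdate, systimestamp FROM dual;            -- now based on the database time zone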

There is also a new view in ADB-S called DBA_OPERATOR_ACCESS. This view stores information on the actions that OCI cloud operations performs on your Autonomous Database. This view will not show results if Oracle Cloud Infrastructure cloud operations hasn’t performed any actions or run any statements in your Autonomous Database instance.

The DBA_OPERATOR_ACCESS view provides information starting on October 4, 2022, the date this feature was introduced. You cannot see anything done before October 4, 2022.

The view is based on the PDB_SYNC$ table:

The view contains the following 4 columns:

1. SQL_TEXT: SQL text of the statement executed by the operator

2. EVENT_TIMESTAMP: Timestamp of the operator action in UTC

3. REQUEST_ID: Request number related to the reason behind the operator action. This could be a bug number, an SR number, or a change ticket request number that provides information on the reason for the action

4. REASON: Reason for the operator action. This provides context for the reason behind the action and may have a value such as: MITIGATION, DIAGNOSTIC COLLECTION, or CUSTOMER REQUEST
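
Given these columns, a simple query to review the most recent operator actions could look like this:

SELECT event_timestamp, request_id, reason, sql_text
FROM dba_operator_access
ORDER BY event_timestamp DESC;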

So, the DBA_OPERATOR_ACCESS view provides good and useful information on the top level SQL statements that OCI cloud operations performs.

DBMS_AUTO_PARTITION: Automatic Partitioning in the Autonomous Database

In Autonomous, Cloud, Databases, DBA on September 26, 2022 at 06:33

Choosing a partitioning strategy is often far from straightforward. Range and interval partitioning are useful for organizing similar data types, especially date and time data. Hash partitioning is useful for randomly distributing data across partitions based on a hashing algorithm, rather than grouping similar data. I remember how we managed to boost performance in a huge Siebel database by hash partitioning the big indexes. List partitioning is useful to explicitly map rows to partitions based on a set of values – for example, the states in the USA.

But this is a tedious process, and when doing it, regardless of whether you are a DBA or a developer, functional knowledge of the application and of the SQL using the tables and indexes is of paramount importance.

When using the Oracle Autonomous Database, we would rather have things automated – that goes without saying. So, ADB comes with a handy package called DBMS_AUTO_PARTITION.

First, we need to configure auto partitioning with the option we need. We use the value IMPLEMENT (see the example below), but if you want to enable recommendations without implementing them automatically, then use REPORT ONLY instead (which is the default value).

You can always disable it using the OFF value.
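
Here is a sketch of the three configuration calls, using the CONFIGURE procedure of DBMS_AUTO_PARTITION:

-- implement recommendations automatically:
EXEC DBMS_AUTO_PARTITION.CONFIGURE('AUTO_PARTITION_MODE', 'IMPLEMENT');

-- only generate recommendations (the default):
EXEC DBMS_AUTO_PARTITION.CONFIGURE('AUTO_PARTITION_MODE', 'REPORT ONLY');

-- disable auto partitioning:
EXEC DBMS_AUTO_PARTITION.CONFIGURE('AUTO_PARTITION_MODE', 'OFF');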

Automatic partitioning uses a single-column partition key combined with single-level partitioning. Automatic partitioning does not support more complex partitioning strategies such as multi-column partitioned tables or composite partitioning.

Automatic partitioning chooses from the following 3 partition methods:

  • AUTOMATIC INTERVAL: This choice is best suited for ranges of partition key values
  • LIST AUTOMATIC: This partitioning method applies to distinct partition key values
  • HASH: Applies partitioning on the partition key’s hash values

Next, you can use the AUTO_PARTITION_SCHEMA and AUTO_PARTITION_TABLE settings to specify schemas and tables considered (or not considered) for automatic partitioning. Note that when automatic partitioning is enabled, all schemas and tables (in user-managed schemas) are considered for automatic partitioning if both the inclusion and exclusion lists are empty.

Here is an example where I prefer to manually decide on the partitioning strategy in the JULIAN schema – so I want to exclude it from auto partitioning. A sketch of the call (assuming the documented ALLOW parameter):
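
EXEC DBMS_AUTO_PARTITION.CONFIGURE('AUTO_PARTITION_SCHEMA', 'JULIAN', ALLOW => FALSE);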

If I wanted to do the opposite, that is, add only the schema JULIAN to the auto partitioning list, I would replace FALSE with TRUE in the call above. As soon as the inclusion list is no longer empty, only schemas in the inclusion list are considered.

The analysis and verification of automatic partitioning using RECOMMEND_PARTITION_METHOD is a potentially resource-intensive and long-running operation, so avoid doing it in a production database. I would rather use a cloned database for the analysis. The recommendations can then be implemented with the APPLY_RECOMMENDATION procedure.
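
A hedged sketch of invoking the two calls (the owner, table name and time limit are just examples; the parameter names follow the documented API):

DECLARE
  l_id RAW(16);
BEGIN
  -- analyze the captured workload for the table and generate a recommendation:
  l_id := DBMS_AUTO_PARTITION.RECOMMEND_PARTITION_METHOD(
            table_owner => 'JULIAN',
            table_name  => 'SALES',
            time_limit  => INTERVAL '30' MINUTE);
  -- implement the generated recommendation:
  DBMS_AUTO_PARTITION.APPLY_RECOMMENDATION(recommendation_id => l_id);
END;
/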

It is good to know that the Autonomous Database automatically collects workload information over time in an internally managed SQL workload repository maintained in the SQL Tuning Set (SYS_AUTO_STS). Recommendations of automatic partitioning generated by the RECOMMEND_PARTITION_METHOD function have a time limit, specified by the TIME_LIMIT parameter, with a default of 1 day. If you are analyzing a large system with many candidate tables, a single invocation may not generate a recommendation for all tables and you will have to invoke the procedure repeatedly to generate recommendations for additional tables. 

Here are the data dictionary views for automatic partitioning: DBA_AUTO_PARTITION_CONFIG and DBA_AUTO_PARTITION_RECOMMENDATIONS.

It is important to understand that unlike automatic indexing, automatic partitioning does not run periodically as a background task. Automatic partitioning only runs when you invoke it by using the DBMS_AUTO_PARTITION.RECOMMEND_PARTITION_METHOD function.

Not all tables are candidates for auto partitioning. The function VALIDATE_CANDIDATE_TABLE checks if a given table is a valid candidate for auto partitioning in the Autonomous Database. There are several conditions for a table to be a valid candidate; here are a few:

  • Table passes inclusion and exclusion tests specified by AUTO_PARTITION_SCHEMA and AUTO_PARTITION_TABLE configuration parameters
  • Table has up-to-date statistics
  • Table is at least 64 GB
  • Table has 5 or more queries in the SQL tuning set that scanned the table
  • Table does not contain a LONG data type column.

As an example, my SALES table is not a candidate because its statistics are stale. Here is a sketch of the check (assuming the documented function signature; the output in the comment is hypothetical):
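
SELECT DBMS_AUTO_PARTITION.VALIDATE_CANDIDATE_TABLE(
         table_owner => 'JULIAN',
         table_name  => 'SALES') AS is_candidate
FROM dual;

-- hypothetical output:
-- INVALID: table does not have up-to-date statistics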

Last, here are a few underscore/hidden parameters related to auto partitioning (from a 21c database):

_autoptn_costing: DBMS_AUTO_PARTITION is compiling query for cost estimates
_autoptn_flags: DBMS_AUTO_PARTITION flags
_autoptn_translate_table_name: DBMS_AUTO_PARTITION table name translation
_autoptn_workload_id: DBMS_AUTO_PARTITION workload id

The package itself, though, is missing in 21c:

SQL> desc DBMS_AUTO_PARTITION
ERROR:
ORA-04043: object DBMS_AUTO_PARTITION does not exist

If you are interested in auto partitioning the application as well, then I would recommend a rather scientific paper called Automatic Partitioning of Database Applications. But it is built on the assumption that stored procedures have several disadvantages, and not everyone would agree with that.

How to view the version history of an Oracle database?

In DBA, Oracle database on September 9, 2022 at 10:57

Often an Oracle database, even one now running say 19c, was initially created as 10g and upgraded with scripts over the years. From v$database and v$instance, we can find out the current version, the platform name and when the database was created, but how can we view the version history of that database? Often simple SQL statements can extract the important data that we need. We should just know which table or view to query.

Two important views, which actually came with Oracle 10g, can shed some light on the question above: DBA_HIGH_WATER_MARK_STATISTICS and DBA_REGISTRY_HISTORY.

Let us see what information they provide:

SELECT * from DBA_REGISTRY_HISTORY where version is not null order by 4 desc;

I said above “shed some light” and not “answer the question”, as the view only came with 10gR2. So we can see the history from 10.2.0.3 until now (19.13), but whether the database was initially created as 10g, 9i or even as v7 remains an open question.

Note that since 12.1.0.1 Oracle uses DBA_REGISTRY_SQLPATCH instead of DBA_REGISTRY_HISTORY to track PSUs and BPs applied to the database. Check the post by Mike Dietrich called DBA_REGISTRY_HISTORY vs DBA_REGISTRY_SQLPATCH.

The view DBA_HIGH_WATER_MARK_STATISTICS is based on the table WRI$_DBU_HIGH_WATER_MARK:

create table WRI$_DBU_HIGH_WATER_MARK
(name                 varchar2(64)  not null,
 dbid                 number        not null,
 version              varchar2(17)  not null,
 highwater            number,
 last_value           number,
 error_count          number,
 constraint WRI$_DBU_HIGH_WATER_MARK_PK primary key
    (name, dbid, version)
 using index tablespace SYSAUX
) tablespace SYSAUX
/

Next, we run the following query, which shows us the database version along with the database size:

SELECT * from DBA_HIGH_WATER_MARK_STATISTICS where name = 'DB_SIZE' order by 3 desc;

From DBA_HIGH_WATER_MARK_STATISTICS, we can view several other historical stats about the database: number of user tables, size of the largest segment, maximum number of partitions belonging to a user table, maximum number of partitions belonging to a user index, number of user indexes, maximum number of concurrent sessions seen in the database, maximum number of datafiles, maximum number of tablespaces, maximum number of CPUs and maximum query length.

If the high-water mark statistics are not populated, then execute DBMS_FEATURE_USAGE_INTERNAL.SAMPLE_ONE_HWM manually. The internal package looks like this:

PROCEDURE CLEANUP_DATABASE
Argument Name                  Type                    In/Out Default?
------------------------------ ----------------------- ------ --------
CLEANUP_LOCAL                  PL/SQL BOOLEAN          IN     DEFAULT 

PROCEDURE EXEC_DB_USAGE_SAMPLING
Argument Name                  Type                    In/Out Default?
------------------------------ ----------------------- ------ --------
CURR_DATE                      DATE                    IN             

PROCEDURE SAMPLE_ONE_FEATURE
Argument Name                  Type                    In/Out Default?
------------------------------ ----------------------- ------ --------
FEAT_NAME                      VARCHAR2                IN             

PROCEDURE SAMPLE_ONE_HWM
Argument Name                  Type                    In/Out Default?
------------------------------ ----------------------- ------ --------
HWM_NAME                       VARCHAR2                IN             

If you would like to update the HWM statistics manually as they are gathered once a week, here is an example:

SQL> SELECT name, highwater, last_value FROM dba_high_water_mark_statistics WHERE name = 'USER_TABLES';

NAME                            HIGHWATER LAST_VALUE
------------------------------ ---------- ----------
USER_TABLES                           533        533

SQL> CREATE TABLE JMD (c1 json);

Table created.

SQL> exec dbms_feature_usage_internal.sample_one_hwm('USER_TABLES');

PL/SQL procedure successfully completed.

SQL> SELECT name, highwater, last_value FROM dba_high_water_mark_statistics WHERE name = 'USER_TABLES';

NAME                            HIGHWATER LAST_VALUE
------------------------------ ---------- ----------
USER_TABLES                           534        534

As a post scriptum, I can say, without ever being able to prove it scientifically, that a better performing database is one created as a fresh database rather than one upgraded with scripts. I would always advise creating a new database and transferring the data and all objects from the previous version, rather than just upgrading the database (the data dictionary) with scripts. But with current DB sizes and limited downtime, this is getting more and more difficult to achieve.

How to calculate the size of a database?

In Data, Databases, DBA on August 29, 2022 at 10:03

Often even DBAs are misaligned on what is meant by “database size”. As data and databases grow at a rather high rate, it is important to understand the size of the database estate. Worldwide data is expected to hit 175 zettabytes by 2025, representing a 61% CAGR. Also, 51% of the data will be in data centers and 49% will be in the public cloud. So, when moving to the cloud, it is important to know how much storage is needed for the databases.

Ever wondered which is the biggest database in the world? Here is an interesting link: 10 Largest Databases in the World. “Really? Sure about that? How do you actually know if it’s true? Who is the source of information, and can we trust it?” I was wondering exactly the same when checking the list.

And also: how should all the data be stored? According to Data: A Lenovo Solutions Perspective, the answer is that within the core (i.e. data center or cloud) the classic relational database will still be the dominant approach for many years. Thus, one more reason to properly estimate and calculate the size of your databases.

So, how to calculate the size of, say, an Oracle database (check the links at the end for other database brands)? The most common way is to simply calculate the space which the database files physically consume on disk:

select sum(bytes)/1024/1024/1024 size_in_GB from dba_data_files;

But how about the other files? Like temp, control and redolog files? Are the undo files really part of the database? Do we always want to include the files from SYSTEM and SYSAUX too?

Also, not all this space in the files listed in dba_data_files is necessarily allocated. There could be sections of these files that are not used. Then, we can go with this method:

select sum(bytes)/1024/1024/1024 size_in_GB from dba_segments;

I often prefer using the query which ignores the system, temp and undo data:

select nvl(ceil(sum(bytes)/(1024*1024*1024)),0) size_in_GB
from dba_extents
where tablespace_name not in ('SYSTEM','TEMP','SYSAUX') and tablespace_name not like '%UNDO%';

Another way is to calculate only the size of the real data in each schema in the database – the methods above include the indexes too. This requires a recent analyze (not estimate but rather compute 100%) of all tables.

select owner, nvl(ceil(sum(num_rows*avg_row_len)/(1024*1024*1024)),0) size_in_GB
from dba_tables
where owner not in ('SYSTEM','OUTLN','SYS') group by owner;

However, checking the overall size including TEMP and REDO is perhaps the best approach. There is a MOS note, How to Calculate the Size of the Database (Doc ID 1360446.1) which is also worth reading. Here is the suggested method to calculate the total database size:

select a.data_size+b.temp_size+c.redo_size+d.cont_size "total_size"
from ( select sum(bytes) data_size
       from dba_data_files ) a,
     ( select nvl(sum(bytes),0) temp_size
       from dba_temp_files ) b,
     ( select sum(bytes) redo_size
       from sys.v_$logfile lf, sys.v_$log l
       where lf.group# = l.group#) c,
     ( select sum(block_size*file_size_blks) cont_size
       from v$controlfile ) d;

If you have enabled block change tracking and want to be really pedantic with the database size, you should also add the BCT file.
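
Its size can be read from V$BLOCK_CHANGE_TRACKING:

select status, filename, bytes/1024/1024/1024 size_in_GB
from v$block_change_tracking;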

Here are some additional links and a query which will help you find indexes which are bigger than tables.

Database Size in Oracle by Bijay Kumar Sahoo

How to calculate current DB size from asktom

Estimate the Size of a Database in SQL Server

How to calculate total size of the database

How to calculate the size of a MySQL database

select owner "TABLE_OWNER", tablename "TABLE_NAME", tablesize "TABLE SIZE (GB)", indexname "INDEX_NAME", indexsize "INDEX SIZE (GB)", indexsize/tablesize "INDEX/TABLE" from
(
with
tabs as (select owner, segment_name tablename,sum(bytes/1024/1024/1024) tablesize from dba_segments where segment_type='TABLE' group by owner, segment_name),
inds as (select i.owner, i.index_name indexname, i.table_name tablename, sum(s.bytes/1024/1024/1024) indexsize from dba_indexes i join dba_segments s on (i.owner=s.owner and i.index_name=s.segment_name) group by i.owner, i.index_name, i.table_name)
select * from tabs natural join inds where indexsize > tablesize and indexsize>1
)
order by indexsize/tablesize desc;

Finally, there is a view in Oracle called dba_hist_tbspc_space_usage, which displays historical tablespace usage statistics. So, you can retrieve the historical growth of the tablespaces and thus of the database as a whole. Check this blog post: How to retrieve growth history for Oracle tablespaces. Note that the history is available as far back as AWR data is retained. It is considered good practice to keep track of the database size on, say, a weekly basis. You will then always be able to answer questions on capacity needs, trends, growth, size, etc.

P.S. DBA_HIGH_WATER_MARK_STATISTICS contains a statistic called DB_SIZE, which is dependent on the internal package DBMS_SWRF_REPORT_INTERNAL. The definition of DBA_HIGH_WATER_MARK_STATISTICS is:

select dbid, hwm.name, version, highwater, last_value, description
from wri$_dbu_high_water_mark hwm, wri$_dbu_hwm_metadata mt
where hwm.name = mt.name and
hwm.name not like '_HWM_TEST%' and /* filter out test hwm */
bitand(mt.method, 4) != 4          /* filter out disabled hwm */

The table WRI$_DBU_HIGH_WATER_MARK is most likely populated using DBMS_SWRF_REPORT_INTERNAL. The view DBA_HIGH_WATER_MARK_STATISTICS (existing since 10g!) is useful as it tells you the maximum size of the database for each version over the lifespan of the database.

Time to rebuild indexes in the Oracle database?

In Databases, DBA, Oracle database on August 19, 2022 at 14:13

One of the most controversial topics among DBAs is whether to rebuild indexes or not.

But the question is so generic that the answer depends on the database version, the index type and the reasons behind the rebuild.

In most cases, we consider b-tree indexes for rebuild. Note that the b stands not for “binary” but for “balanced”. We create indexes mostly for performance reasons. If performance is the real concern, we need to first understand, at least at a high level, how those indexes are organized.

The MOS note Lists All Indexes that Benefit from a Rebuild (Doc ID 122008.1) suggests that indexes considered for rebuild are those for which:

– deleted entries represent 20% or more of the current entries
– the index depth is more than 4 levels

And a possible candidate for a bitmap index was considered when the distinctiveness was more than 99%:

if ( height > 5 ) OR ( (del_lf_rows/lf_rows) > 0.2 ) then
    dbms_output.put_line (rpad(r_indx.owner,16,' ') || rpad(r_indx.index_name,40,' ') || 
                          lpad(round((del_lf_rows/lf_rows)*100,3),17,' ') || 
                          lpad(height-1,7,' ') || lpad(round((lf_rows-distinct_keys)*100/lf_rows,3),16,' '));
end if;

However, it is a good question if this applies to version 11g and above. I have personally seen good benefits of rebuilding indexes satisfying those conditions in lower versions of the database.

Another MOS article, Script to investigate a b-tree index structure (Doc ID 989186.1), provides a quite handy script that verifies the structure of a b-tree index based on the existing table and index statistics. The script calculates the following items:

– Estimate the size the index should be as optimal packing can be specified

– The index layout

This script keeps a history of the data gathered in the INDEX_HIST table, which can be useful for spotting a pattern in index rebuilding. The history log can be user defined.

Yet another MOS article, Index Rebuild, the Need vs the Implications (Doc ID 989093.1), specifies that the most common justifications given for rebuilding an index are:
– index becomes fragmented
– index grows and grows – deleted space is not re-used
– index clustering factor becomes out of sync

However, as noted, the impact of rebuilding the index can be quite significant:

  • Most scripts around depend on the index_stats dynamic table
  • Redo activity may increase and general performance might be impacted as a direct result of rebuilding an index

An index coalesce is often preferred instead of an index rebuild. It has the following advantages:

  • does not require approximately 2 times the disk storage
  • always online
  • does not restructure the index, but combines index leaf blocks as much as possible, avoiding system overhead

Due to the reasons listed above, it is strongly advised not to rebuild indexes on a regular basis but instead to use proper diagnostics.
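
For illustration, here are both operations (the index name is hypothetical):

-- coalesce: combines adjacent index leaf blocks in place, always online:
ALTER INDEX julian.sales_idx COALESCE;

-- rebuild: recreates the index, requiring roughly twice the disk space:
ALTER INDEX julian.sales_idx REBUILD ONLINE;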

Next, here is my personal experience with rebuilding indexes, either by dropping and creating them or by simply rebuilding. I have seen 3 main use cases:

  1. For performance reasons: because of too many deleted rows or because of split index nodes. With every new release of Oracle, it looks like we need less and less time to perform such a rebuild operation.
  2. Indexes get fragmented over time and occupy too much space. There was a mission-critical production database where the data was about 500MB and the indexes altogether were about 4TB. You can sometimes release quite a lot of space.
  3. Indexes should, in my opinion, be in a separate tablespace, not where the data is. If something goes wrong with the index tablespace (logical or physical corruption, or files deleted unintentionally), then recreating the indexes is just a matter of time.

In Oracle 21c, there is a new feature called Automatic Index Optimization. The optimization process includes 3 actions:

• Compress: Compresses portions of the key values in an index segment (~3 times)

• Shrink: Merges the contents of index blocks where possible to free blocks for reuse

• Rebuild: Rebuilds an index to improve space usage and access speed

For a very long time, both DBAs and developers have been struggling (really struggling) with which indexes should be created, what type of indexes they should be created as, and which indexes should be dropped from the database. By far the most interesting new feature of Oracle Database 19c is Automatic Index creation (AI Creation). In the long run, this will be one of the most important features in the Oracle database. Note that you cannot rebuild an auto index! A nice and useful AI capability is that Oracle automatically rebuilds indexes that are marked as “Unusable”.
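
As a minimal sketch, automatic indexing is switched on (or kept in reporting mode) via DBMS_AUTO_INDEX:

-- let AI create auto indexes automatically:
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE', 'IMPLEMENT');

-- or only report candidates (created as invisible indexes):
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE', 'REPORT ONLY');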

For more, check:

Automatic Indexes: Automatically Rebuild Unusable Indexes Part I by Richard Foote

Oracle B-Tree Index Internals: Rebuilding The Truth:

“Generally rebuild index when the clustering factor exceeds eight times the number of dirty blocks in the base table, when the levels exceed two or when there are excessive brown nodes in the index”.

When an index should be rebuilt? by Gouranga

SQL Server: Reorganize and Rebuild Indexes in the Database

“Microsoft recommends fixing index fragmentation issues by rebuilding the index if the fragmentation percentage of the index exceeds 30%, where it recommends fixing the index fragmentation issue by reorganizing the index if the index fragmentation percentage exceeds 5% and less than 30%”

How Often Do You Rebuild Indexes?

Rebuilding Indexes by Jonathan Lewis

Automatic Segment Advisor in the Oracle Database

In Autonomous, Database tuning, DBA, Oracle database, Oracle internals on July 1, 2022 at 06:50

“From my experience, the best advisors help in three ways: encourage you to look at the problem or opportunity from multiple angles; help you balance the tug of the short-term with important long-term priorities; and ask the tough questions you need to know to reach the best solution.” Margo Georgiadis

The Oracle Segment Advisor identifies segments that have space which can be reclaimed. However, the Automatic Segment Advisor can at times be resource-consuming and can even slow down your database.

Why is this happening, given that the Automatic Segment Advisor does not analyze every database object? Here is how it works internally: the advisor examines the database statistics, samples segment data, and then selects the following objects to analyze:

  • Tablespaces that have exceeded a critical or warning space threshold
  • Segments that have the most activity
  • Segments that have the highest growth rate

In addition, the Automatic Segment Advisor evaluates tables that are at least 10MB in size and have at least 3 indexes to determine the amount of space saved if the tables are compressed with advanced row compression.

Now, here is the important part: if a database object is selected for analysis by the advisor but the maintenance window expires before the advisor can process the object, the object is included in the next Automatic Segment Advisor run. So, at some point you may end up in a situation where lots of objects have to be analyzed. During the maintenance window, the predefined automated maintenance tasks (clients) are run.

Sometimes, it makes sense to disable the auto space advisor, as you cannot change the set of tablespaces and segments that the Automatic Segment Advisor selects for analysis. You can, however, enable or disable the Automatic Segment Advisor task, change the times during which it is scheduled to run, or adjust the system resource utilization of the automated maintenance tasks.

Often, ADDM may also point to a problem with the Automatic Segment Advisor task.

Here is how to disable (and enable) the tasks individually. The main switch is controlled by DBMS_AUTO_TASK_ADMIN.DISABLE().
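
For the Segment Advisor client specifically, a sketch (the client name 'auto space advisor' is the one listed in DBA_AUTOTASK_CLIENT):

BEGIN
  DBMS_AUTO_TASK_ADMIN.DISABLE(
    client_name => 'auto space advisor',
    operation   => NULL,
    window_name => NULL);
END;
/

-- and to enable it again:
BEGIN
  DBMS_AUTO_TASK_ADMIN.ENABLE(
    client_name => 'auto space advisor',
    operation   => NULL,
    window_name => NULL);
END;
/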

Even after disabling the entire autotask job by running DBMS_AUTO_TASK_ADMIN.DISABLE(), DBA_AUTOTASK_TASK will still show the autotask client as enabled. For this you need to disable the jobs individually, as shown above. And in a multitenant environment, the CDB and the PDBs have their own autotasks; disabling the CDB's autotask will not affect the PDBs, so you will have to do it for each pluggable database.

Starting with 12.2, there is a parameter called ENABLE_AUTOMATIC_MAINTENANCE_PDB that can be used to enable or disable the running of automated maintenance tasks for all the PDBs in a CDB or for individual PDBs in a CDB. When you change ENABLE_AUTOMATIC_MAINTENANCE_PDB in the CDB root from TRUE to FALSE, the new value FALSE takes effect in the root and in all the PDBs in the CDB.

If you get into a situation where the Automatic Segment Advisor is consuming lots of resources and slowing down the database during the maintenance windows, do one of the following:

  1. Disable the autotask client for the Segment Advisor and run it manually on a per-need basis.
  2. Increase the maintenance window from the default, which starts at 10 p.m. Monday to Friday and ends at 2 a.m. Often 4 hours is just not enough. The weekend window is 20 hours long and in most cases long enough.

In the Autonomous Database, you have access to dba_autotask_client, etc., and you can disable and enable the auto space advisor task; however, you do not have full visibility into dba_scheduler_window_groups, etc. Some columns just show as NULL. Still, the performance task in ADB can give you some idea of what is going on.

And finally, here are some other situations that might require disabling the Auto Space Advisor Job:

  • ORA-01555 While Running Auto Space Advisor Job on Object wri$_adv_objspace_trend_data (Doc ID 2576430.1)
  • Auto Space Advisor is Taking More Time due to Recursive Query Taking a Long Time (Doc ID 2382419.1)
  • SEGMENT ADVISOR not working as expected for LOB or SYS_LOB SEGMENT (Doc ID 988744.1)

User defined locks: enq: UL – contention

In Database tuning, DBA, Oracle database, Oracle internals on June 13, 2022 at 17:30

DBAs are familiar with most wait events, however there is one called “enq: UL – contention”, which does not pop up that often:

The foreground events above are from an Oracle 19c database. Clearly, we see a lot of user-defined lock waits: enq: UL – contention. They usually come up when the DBMS_LOCK package is being used.

There are 3 enqueue types which are defined in Oracle as “User Type” locks; they are:

  1. TM – DML Enqueue called against base tables or partitions for operations that need to be coordinated
  2. TX – Transaction Enqueue used to protect transaction information
  3. UL – User Lock Enqueue used when an application makes use of the DBMS_LOCK package

MOS has a note Resolving Issues Where ‘enq: UL – contention’ Waits are Occurring (Doc ID 1915980.1) which can help us identify what session is holding the lock and what part of the application it is currently executing.

The ‘UL’ (User-defined Lock) enqueue is a lock that is created and defined by a developer (or seldom a DBA) using the DBMS_LOCK package. The DBMS_LOCK package allows the application developer to request, convert and release locks which are independent of the locks provided automatically by Oracle. These routines are meant for synchronization within the application and augment the locking which is already there in the database.

Locks are automatically released at the end of a session. DBMS_LOCK.RELEASE explicitly releases a lock previously acquired using the REQUEST function.
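
As a minimal sketch of that request/release cycle (the lock name is hypothetical):

DECLARE
  l_handle VARCHAR2(128);
  l_status INTEGER;
BEGIN
  -- map a user-defined lock name to a lock handle (stored in DBMS_LOCK_ALLOCATED):
  DBMS_LOCK.ALLOCATE_UNIQUE(lockname => 'MY_APP_LOCK', lockhandle => l_handle);
  -- request the lock in exclusive mode, waiting up to 10 seconds:
  l_status := DBMS_LOCK.REQUEST(lockhandle        => l_handle,
                                lockmode          => DBMS_LOCK.X_MODE,
                                timeout           => 10,
                                release_on_commit => FALSE);
  -- ... do the work that needs application-level serialization ...
  l_status := DBMS_LOCK.RELEASE(lockhandle => l_handle);
END;
/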

Wondering How can I find who has allocated a user lock via dbms_lock?

Here is the query from the link above showing how to find the active blocking situations around DBMS_LOCK:

SELECT blocker.username blocker, blocker.sid blocker_session
     , blocked.username blocked, blocked.sid blocked_session
  FROM v$session blocked
  JOIN v$session blocker
  ON (blocked.blocking_session = blocker.sid)
 WHERE blocked.wait_class = 'Application'
   AND blocked.event='enq: UL - contention';

The DBMS_LOCK_ALLOCATED table is periodically cleared out by calling the allocate_unique() procedure when the modulus of the lockid is 0 and the expiration column is less than the current date. On every 100th attempt to create a new entry, allocate_unique purges expired entries from the table. If the entry already exists, allocate_unique updates the expiry time.

There are a couple of important MOS notes related to the DBMS_LOCK_ALLOCATED table:

The first one, called DBMS_LOCK_ALLOCATED Table Keeps Growing Eventually Depleting Tablespace (Doc ID 1054343.6), explains how to delete entries for locks that are no longer being used. The second one, entitled How To Cleanup And Shrink DBMS_LOCK_ALLOCATED? (Doc ID 1065384.1), is somewhat similar, and it warns that Bug 2624130 makes the delete very slow. In that case, one option is to use DBMS_REDEFINITION, as described in the same note.

The structure of the DBMS_LOCK_ALLOCATED table is rather simple: it has only 3 columns: name, lockid and expiration.

There are a few bugs in the Oracle database related to the “enq: UL – contention” wait event:

Bug 30070584 – 13.3 ‘enq: UL – contention’ wait event (OEM related)
Bug 27282543 – 13.2 Frequent UL contention wait events for the node 2 on EMNPRD repository db (OEM)
Bug 26618817 – UL contention from ProcessBuddyAgent jobs in scheduled state

A recent update from 2020 (Oracle 19.3) by Jonathan Lewis confirmed that PL/SQL execution time includes the time spent waiting for UL enqueues, so we should not forget that we may need to subtract the wait time for ‘enq: UL – contention’ from the PL/SQL time before we start to worry about how much actual work we are doing in PL/SQL.

If you are working with Oracle Text, you may also observe the “enq: UL – contention” wait event. DBMS_LOCK is used internally by ctxsys.drvdml.com_sync_index, so requesting UL is not a problem in itself. However, fragmentation of the CTXSYS index composition table increases the acquisition time of the UL enqueue, and then this issue will appear.

Bottom line: DBMS_LOCK should be used (carefully) in the application code only when there are no other options.

2022: What’s new in the database world?

In Databases, DB2 database, DBA, New features, Oracle database, PostgreSQL, SQL Server on May 23, 2022 at 07:55

The big picture in the global database market is changing rapidly. It is an $80B market right now – the largest software market and growing in double digits year over year.

Gartner has recently published the software market revenue numbers for 2021.

In 2021, the revenue for managed cloud services was more than $39B – which means about half of all global DBMS revenue. Have a look at the drastic change since 2017:

Microsoft and AWS now have about half of the global database market, while Oracle have slipped down to only about one fifth of the overall share. Google have entered the top 5 for the first time, while IBM have just lost their 4th position. SAP are out of the top 5.

In terms of database popularity, Oracle are still #1 based on the DB-Engines rankings:

So, what is new and causing these turbulences?

Google have just announced AlloyDB for PostgreSQL. Compared with standard PostgreSQL, according to Google’s own performance tests, AlloyDB was more than four times faster for transactional workloads, and up to 100 times faster for analytical queries. AlloyDB was also two times faster for transactional workloads than Amazon’s comparable service. This makes AlloyDB a powerful new modernization option for transitioning off of legacy databases. Here are the other offerings from Google Cloud:

The latest from AWS is predominantly around RDS and Redshift (a database based on PostgreSQL). Amazon Redshift RA3 instances are now available in the Asia Pacific (Osaka), Europe (Milan), Middle East (Bahrain) and Africa (Cape Town) regions. Amazon Redshift now offers new enhancements for Audit Logging, which enables faster delivery of logs for analysis by minimizing latency while also adding Amazon CloudWatch as a new log destination. With the latest release, customers can choose to stream audit logs directly to Amazon CloudWatch, which enables customers to perform real-time monitoring. Amazon Redshift now also provides native integration with Microsoft Azure Active Directory (AD), which customers can use for authentication and authorization with tools like Microsoft Power BI. 

Amazon Relational Database Service (Amazon RDS) Performance Insights now makes it easier for you to see the database performance metrics for the exact timeframe you want to analyze, by choosing a custom time window within your retention period. Previously, you could only see metrics in Performance Insights by choosing relative time intervals such as the past 1 hour, the past 24 hours, etc. Amazon RDS for PostgreSQL, MariaDB and MySQL now supports M6i and R6i instances with new instance sizes up to 128 vCPUs and 1,024 GiB RAM. Also, Amazon Aurora Serverless v2 is now generally available.

The latest from Azure is around Change data capture: CDC lets you track all the changes that occur on a database. Though this feature has been available for SQL Server for quite some time, using it with Azure SQL Database is now generally available. When creating a Hyperscale database, you can choose your preferred storage type: read-access geo-redundant storage (RA-GRS), zone-redundant storage (ZRS), or locally redundant storage (LRS) Azure standard storage. The selected storage redundancy option will be used for the lifetime of the database for both data storage redundancy and backup storage redundancy. It is now possible to configure your Azure SQL Database to allow authentication only from Azure Active Directory.

IBM i 7.5, previously known as AS/400, is coming with some “interesting” features. There is a new BOOLEAN data type, and the maximum size of a binary radix index is extended up to 16 TB. It used to be 1.7TB. I do not know many DBAs who enjoy dealing with double-digit TB indexes. The Db2 for i SQL Query Engine (SQE) provides a Query Supervisor which enables real-time monitoring of resource consumption by SQL and native queries. There is now a RESTRICT ON DROP attribute that can be added or removed using the ALTER TABLE (SQL) statement (ALTER TABLE TABLE_NAME ADD RESTRICT ON DROP;). When RESTRICT ON DROP is added, nobody will be allowed to delete or drop that file. Even users with *ALLOBJ user special authority will not be allowed to delete the file.

The only change I am aware of in Oracle Database Release 21c, Version 21.5 is the new AE Analytic Views. AE stands for “All Edition”. Each new AE analytic view corresponds to an existing non-AE analytic view. AE views have the same columns as their non-AE counterparts, plus a column that displays the name of the application edition where the editioned object is defined. For the complete list of changes in Oracle 21c, check this link.