Archive for the ‘Database tuning’ Category

Automatic Segment Advisor in the Oracle Database

In Autonomous, Database tuning, DBA, Oracle database, Oracle internals on July 1, 2022 at 06:50

“From my experience, the best advisors help in three ways: encourage you to look at the problem or opportunity from multiple angles; help you balance the tug of the short-term with important long-term priorities; and ask the tough questions you need to know to reach the best solution.” Margo Georgiadis

The Oracle Segment Advisor identifies segments that have space which can be reclaimed. However, the Automatic Segment Advisor can at times be resource-consuming and even slow down your database:

Why is this happening, given that the Automatic Segment Advisor does not analyze every database object? Here is how it works internally: the advisor examines the database statistics, samples segment data, and then selects the following objects to analyze:

  • Tablespaces that have exceeded a critical or warning space threshold
  • Segments that have the most activity
  • Segments that have the highest growth rate

In addition, the Automatic Segment Advisor evaluates tables that are at least 10MB in size and have at least 3 indexes to determine the amount of space saved if the tables are compressed with advanced row compression.

Now, here is the important part: if a database object is selected for analysis by the advisor but the maintenance window expires before the advisor can process the object, the object is included in the next Automatic Segment Advisor run. So, at one point you may come to a situation where lots of objects have to be analyzed. During the maintenance window, the following clients/tasks are run; these are the predefined automated maintenance tasks: Automatic Optimizer Statistics Collection, Automatic Segment Advisor and Automatic SQL Tuning Advisor.

Sometimes, it makes sense to disable the auto space advisor, as you cannot change the set of tablespaces and segments that the Automatic Segment Advisor selects for analysis. You can, however, enable or disable the Automatic Segment Advisor task, change the times during which it is scheduled to run, or adjust automated maintenance task system resource utilization. Especially in a situation like this:

Often, ADDM may also point to the problem with the Automatic Segment Advisor task:

Here is how to disable (and enable) the tasks individually. The main switch is controlled by DBMS_AUTO_TASK_ADMIN.DISABLE().
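For example, here is a minimal sketch (the client name is the one visible in DBA_AUTOTASK_CLIENT) that disables only the Automatic Segment Advisor client:

BEGIN
  DBMS_AUTO_TASK_ADMIN.DISABLE(
    client_name => 'auto space advisor',
    operation   => NULL,
    window_name => NULL);
END;
/

Re-enabling it is the same call with DBMS_AUTO_TASK_ADMIN.ENABLE.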

Even after disabling the entire autotask job by running DBMS_AUTO_TASK_ADMIN.DISABLE(), DBA_AUTOTASK_TASK will still show the autotask client as enabled. For this, you need to disable the jobs individually as shown above. And in a multitenant environment, the CDB and the PDBs have their own autotasks: disabling the CDB's autotask will not affect the PDBs, so you will have to do this for each pluggable database.

Starting with 12.2, there is a parameter called ENABLE_AUTOMATIC_MAINTENANCE_PDB that can be used to enable or disable the running of automated maintenance tasks for all the PDBs in a CDB or for individual PDBs in a CDB. When you change ENABLE_AUTOMATIC_MAINTENANCE_PDB in the CDB root from TRUE to FALSE, the new value FALSE takes effect in the root and in all the PDBs in the CDB.
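For example, a one-liner sketch to be run in the CDB root:

SQL> ALTER SYSTEM SET ENABLE_AUTOMATIC_MAINTENANCE_PDB = FALSE;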

If you get into a situation where the Automatic Segment Advisor is consuming lots of resources and slowing down the database during the maintenance windows, do one of the following:

  1. Disable the autotask client for the segment advisor and run it manually on a per-need basis.
  2. Increase the maintenance window from the default, which starts at 10 p.m. Monday to Friday and ends at 2 a.m. Often 4 hours is just not enough. The weekend window is 20 hours long and in most cases long enough (see the sketch below for extending a window).
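Here is a sketch of extending one of the predefined weekday windows to 6 hours using DBMS_SCHEDULER (the window names are the SYS-owned MONDAY_WINDOW, TUESDAY_WINDOW, etc.):

BEGIN
  DBMS_SCHEDULER.SET_ATTRIBUTE(
    name      => 'SYS.MONDAY_WINDOW',
    attribute => 'DURATION',
    value     => numtodsinterval(6, 'hour'));
END;
/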

In the Autonomous Database, you have access to DBA_AUTOTASK_CLIENT, etc., and you can disable and enable the auto space advisor task; however, you do not have full visibility on DBA_SCHEDULER_WINDOW_GROUPS, etc. Some columns just show as NULL. Still, the performance task in ADB can give you some idea of what is going on:

And finally, here are some other situations that might require disabling the Auto Space Advisor Job:

  • ORA-01555 While Running Auto Space Advisor Job on Object wri$_adv_objspace_trend_data (Doc ID 2576430.1)
  • Auto Space Advisor is Taking More Time due to Recursive Query Taking a Long Time (Doc ID 2382419.1)
  • SEGMENT ADVISOR not working as expected for LOB or SYS_LOB SEGMENT (Doc ID 988744.1)

User defined locks: enq: UL – contention

In Database tuning, DBA, Oracle database, Oracle internals on June 13, 2022 at 17:30

DBAs are familiar with most wait events; however, there is one called “enq: UL – contention” which does not pop up that often:

The foreground events above are from an Oracle 19c database. Clearly, we see a lot of user-defined lock waits: enq: UL – contention. They usually appear when the DBMS_LOCK package is being used.

There are 3 enqueue types defined in Oracle as “User Type” locks:

  1. TM – DML Enqueue called against base tables or partitions for operations that need to be coordinated
  2. TX – Transaction Enqueue used to protect transaction information
  3. UL – User Lock Enqueue used when an application makes use of the DBMS_LOCK package

MOS has a note Resolving Issues Where ‘enq: UL – contention’ Waits are Occurring (Doc ID 1915980.1) which can help us identify what session is holding the lock and what part of the application it is currently executing.

The ‘UL’ (User-defined Lock) enqueue is a lock that is created and defined by a developer (or, seldom, a DBA) using the DBMS_LOCK package. The DBMS_LOCK package allows the application developer to request, convert and release locks which are independent of the locks provided automatically by Oracle. These routines are meant for synchronization within the application and augment the locking already provided by the database.

Locks are automatically released at the end of a session. DBMS_LOCK.RELEASE explicitly releases a lock previously acquired using the REQUEST function.
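As an illustration only (the lock name MY_APP_LOCK is hypothetical), a typical allocate/request/release cycle looks like this:

DECLARE
  l_handle VARCHAR2(128);
  l_status INTEGER;
BEGIN
  -- map a lock name to a lock handle ('MY_APP_LOCK' is a hypothetical name);
  -- this inserts a row into DBMS_LOCK_ALLOCATED
  DBMS_LOCK.ALLOCATE_UNIQUE(lockname => 'MY_APP_LOCK', lockhandle => l_handle);
  -- request the lock in exclusive mode, waiting at most 10 seconds
  l_status := DBMS_LOCK.REQUEST(lockhandle        => l_handle,
                                lockmode          => DBMS_LOCK.X_MODE,
                                timeout           => 10,
                                release_on_commit => FALSE);
  -- ... do the work that needs to be serialized ...
  l_status := DBMS_LOCK.RELEASE(lockhandle => l_handle);
END;
/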

Wondering how you can find who has allocated a user lock via DBMS_LOCK?

Here is the query from the link above showing how to find the active blocking situations around DBMS_LOCK:

SELECT blocker.username blocker, blocker.sid blocker_session
     , blocked.username blocked, blocked.sid blocked_session
  FROM v$session blocked
  JOIN v$session blocker
  ON (blocked.blocking_session = blocker.sid)
 WHERE blocked.wait_class = 'Application'
   AND blocked.event='enq: UL - contention';

The DBMS_LOCK_ALLOCATED table is periodically cleared out by calling the allocate_unique() procedure when the modulus of the lockid is 0 and the expiration column is less than the current date. On every 100th attempt to create a new entry, allocate_unique purges expired entries from the table. If the entry already exists, allocate_unique updates the expiry time.

There are a couple important MOS notes related to the DBMS_LOCK_ALLOCATED table:

The first one, called DBMS_LOCK_ALLOCATED Table Keeps Growing Eventually Depleting Tablespace (Doc ID 1054343.6), explains how to delete entries for locks that are no longer being used. The second one, entitled How To Cleanup And Shrink DBMS_LOCK_ALLOCATED? (Doc ID 1065384.1), is a bit similar and warns that Bug 2624130 is going to make the delete very slow. In that case, one option is to use DBMS_REDEFINITION.

The structure of the DBMS_LOCK_ALLOCATED table is rather simple: it has only 3 columns: name, lockid and expiration.

There are a few bugs in the Oracle database related to the “enq: UL – contention” wait event:

Bug 30070584 – 13.3 ‘enq: UL – contention’ wait event (OEM related)
Bug 27282543 – 13.2 Frequent UL contention wait events for the node 2 on EMNPRD repository db (OEM)
Bug 26618817 – UL contention from ProcessBuddyAgent jobs in scheduled state

A recent update from 2020 (Oracle 19.3) by Jonathan Lewis confirmed that PL/SQL execution time includes the time spent waiting for UL enqueues, so we should not forget that we may need to subtract the wait time for ‘enq: UL – contention’ from the PL/SQL time before we start to worry about how much actual work we are doing in PL/SQL.

If you are working with Oracle Text, you may also observe the “enq: UL – contention” wait event. DBMS_LOCK is used internally by ctxsys.drvdml.com_sync_index, so requesting UL is not a problem in itself. However, fragmentation of the index composition table of CTXSYS increases the acquisition time of the UL enqueue, so this issue can appear.

Bottom line: DBMS_LOCK should be used (carefully) in the application code only when there are no other options.

Enhanced Diagnosability of Oracle Database: What is an Attention Log?

In Database tuning, Databases, DBA, Oracle database, Oracle utilities on March 1, 2021 at 07:18

A correct diagnosis is three-fourths the remedy – Mahatma Gandhi

How true this statement is in the database world as well! Often it is more difficult to diagnose an issue than to fix it. In the past, going through the alert.log file to find information that needs attention was tedious; this has been simplified by the creation of a single attention.log file.

In the latest Oracle release, diagnosability of database issues is enhanced through a new attention log, as well as classification of information written to database trace files. The new attention log is written in a structured format (XML or JSON) that is much easier to process or interpret and only contains information that requires attention from an administrator.

The attention log is an externally modifiable file that contains information about critical and highly visible database events. We can use the attention log to quickly access information about critical events that need action.

So, where is the location of the new log file introduced by Oracle Database 21c? Notice that V$DIAG_INFO has a new entry called Attention log:
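A query along these lines shows it (entry name as displayed in V$DIAG_INFO):

SQL> SELECT name, value FROM v$diag_info WHERE name = 'Attention Log';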

The attention log contains a pre-determined, translatable series of messages, with one message for each event. There is one attention log for each database instance. There are two ways to view the contents of the attention log.

1. Open the attention.log file that can be found under $ORACLE_BASE/diag/rdbms/database_name/instance_id/log directory. Note that the documentation does not reflect the correct location of the attention.log file.

2. Query the V$DIAG_ALERT_EXT view using the required filters:
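For example, a minimal sketch filtering for critical messages only:

SELECT message_text, message_type, message_level
FROM v$diag_alert_ext
WHERE message_level = 1;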

The filtering is done through message_level. A message_level of 1 corresponds to critical errors that need immediate action.

You can filter V$DIAG_ALERT_EXT on either MESSAGE_TYPE or MESSAGE_LEVEL. As you can see in the example above, I am filtering for only critical errors. Here is what the types and levels mean by number:

MESSAGE_TYPE:

1: UNKNOWN – essentially the NULL type
2: INCIDENT_ERROR – an error for some internal or unexpected reason: must be reported to Oracle Support
3: ERROR – an error of some kind has occurred – not really meaningful
4: WARNING – an action occurred or a condition was discovered that should be reviewed and may require action
5: NOTIFICATION – reports a normal action or event: for example, “logon completed”
6: TRACE – output of a diagnostic trace

MESSAGE_LEVEL:

1: CRITICAL: critical errors
2: SEVERE: severe errors
8: IMPORTANT: important message
16: NORMAL: normal message

The attention log has the following attributes, not all documented yet:

– Attention ID: a unique identifier for the message
– Attention type: Error, Warning, Notification or Additional information
– Message text
– Notification
– Error
– Urgency: Immediate, Soon, Deferrable or Information
– Scope: Session, Process, PDB Instance, CDB Instance, CDB Cluster, PDB or CDB
– Info
– Cause
– Action
– Class: Clusterware Admin, CDB admin or PDB admin
– Time

Note that an attention log entry seldom contains all attributes.

Automatic SQL Tuning Sets (ASTS) in Oracle 19c and 20c

In Autonomous, Database tuning, DBA, Oracle database on August 27, 2020 at 09:26

A well-performing SQL statement all of a sudden starts to misbehave? Fixing such queries is very much embedded into Oracle Autonomous’ strategy, and here is how Automatic SQL Tuning Sets help.

What is the Automatic SQL Tuning Set? – Nigel Bayliss explains it all in a short informative blog post where he describes it as a system-maintained SQL tuning set that is an incredibly useful source of historic SQL performance metrics and execution plans. You can use ASTS to repair SQL performance regressions very quickly using SQL plan management.

An example of how to fix a SQL statement using SQL Plan Management can be found in a previous post of Nigel’s entitled Repairing SQL Performance Regression with SQL Plan Management. Check this github example on how to “fix a misbehaving SQL query”.

Automatic SQL Tuning Sets were introduced in Oracle 19.7 where they were enabled by default. In 19.8 and 20c, they are disabled by default and MOS Doc ID 2686869.1 gives examples on how to enable and disable them:

 
BEGIN
  DBMS_AUTO_TASK_ADMIN.ENABLE(
    client_name => 'Auto STS Capture Task',
    operation   => NULL,
    window_name => NULL);
END;
/

You can check if the task is disabled or enabled by running:

 
SELECT Task_Name, Enabled 
FROM DBA_AUTOTASK_SCHEDULE_CONTROL
WHERE Task_Name = 'Auto STS Capture Task';

The MOS note above also shows how to check the amount of space consumed by all SQL tuning sets, as well as the statement counts per SQL tuning set:

 
SELECT 
Decode(SQLSet_Name,'SYS_AUTO_STS','ASTS','NON-ASTS'), count(*)
FROM DBA_SQLSet_Statements
GROUP BY Decode(SQLSet_Name,'SYS_AUTO_STS','ASTS','NON-ASTS');

Oracle recommends that you enable ASTS and one reason for this is that Automatic Indexing relies on the ASTS.

The view DBA_SQLSET_STATEMENTS from the query above displays information about the SQL statements, along with their statistics, that form all SQL tuning sets in the database. Worth keeping in mind. Franck Pachot showed interesting screenshots on the topic.

In 20c, the data dictionary views related to ASTS changed a bit, so the queries from 19c will not work. Oracle moved the task settings to DBA_AUTOTASK_SETTINGS.

In 20c, run the following:

 
SELECT Task_Name, Enabled, Interval Task_Interval_in_Seconds
FROM DBA_AUTOTASK_SETTINGS
WHERE Task_Name = 'Auto STS Capture Task';

You can check details about the status of the task by running:

 
SELECT Task_Name, Status
FROM DBA_AUTOTASK_SCHEDULE_CONTROL
WHERE Task_Name = 'Auto STS Capture Task';

Do not worry that Oracle captures a lot of SQL statements – that is the idea after all, as it improves the likelihood that you can repair queries that previously ran very quickly and did not show up in AWR. SQL statement performance regressions are repaired automatically: the automatic mode is enabled by setting the DBMS_SPM parameter ALTERNATE_PLAN_BASELINE to AUTO on supported platforms, which is currently equivalent to “AUTOMATIC_WORKLOAD_REPOSITORY+CURSOR_CACHE+SQL_TUNING_SET”.
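A sketch of enabling the automatic mode (DBMS_SPM.CONFIGURE is the documented way to set SPM parameters):

SQL> exec DBMS_SPM.CONFIGURE('ALTERNATE_PLAN_BASELINE', 'AUTO');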

Check also Do you love unexpected surprises? SYS_AUTO_STS in Oracle 19.7.0 by Mike Dietrich.

Do not just Simplify with Databases: Automate, Innovate and Innovate!

In Autonomous, Database tuning, Databases, DB2 database, DBA, PostgreSQL, SQL Server on June 15, 2020 at 09:46

“Automation applied to an inefficient operation will magnify the inefficiency” – Bill Gates
“Innovation distinguishes between a leader and a follower.” – Steve Jobs

In the database industry, simplification, automation and innovation have sort of become buzzwords – we hear them more often in meetings and see them in PowerPoint presentations than in real-world implementations. Databases are being patched, upgraded and migrated, but how often are automation and innovation part of the process?

A recent article entitled “After The Pandemic, Don’t Simply Automate. Innovate” quoted an Accenture report that surveyed 1500 C-suite executives in 16 industries: 76% of respondents said they were struggling to scale the technology across their businesses. The numbers tell the story: a full 84% of C-suite executives believe they must leverage artificial intelligence (AI) to achieve their growth objectives.

A survey of database managers and administrators shows the benefits they expect from automation, plus what they think about DBAs’ current workloads. Here are the highlights:

– 63% expect faster innovation from database automation
– 62% of the data pros expect that data will grow 25% or more annually at their enterprise over the next three years
– 66% said the DBA/data team’s overwork or scheduling issues are the biggest challenge in database deployments
– The majority of companies have not made significant automation gains in any key data management processes
– 69% of the DBAs think automation will make their job more business-centric.

So, why aren’t we seeing those innovative automations implemented in full? Is it the DBA mindset? Is the business behind the technology and the innovation? Lack of time and resources? Lack of knowledge of, and trust in, the new?

We can try to look into the issue from the DBA perspective. Recently, Jeff Erickson listed the 3 Can’t Miss Ways to Turn Your DBA Skills into Gold. Here is my paraphrased version of these 3 ways:

1. Expand DBA skills towards app development – Python, PL/SQL, etc.
2. DBAs should get into the data science game – look more into Oracle Autonomous Database which comes with Oracle ML, a rich library of machine learning algorithms
3. Convert the “A” in DBA from Administrator to Architect – look into big data and data architecture – big data is the driver for innovation in databases

At the end of the article he quoted Kerry Osborne about dealing with vast and growing data volumes: “That added complexity and scale should sound like a huge opportunity knocking”.

An interesting article by Duncan Harvey, entitled What is the innovation, automation dynamic?, pointed out how autonomous systems are finally giving people the headspace they need to innovate at speed in the digital era.

So, is there a Database Automation Guide? Yes indeed!

Get a cloud boost with Oracle Autonomous!

Have a look at all the Automatic features that came after Oracle 9i. Oracle 7 and 8 were probably the first releases that added automatic functionality to the database, but the boost started with 9i. 20c is not skipping on that part either.

Of all database brands, the Oracle Database is arguably the most advanced in terms of tools, automation features and innovation capabilities. From 11.2 until Oracle 20c alone, there are 133 new features purely focused on automation. But I still see databases where even old enough automation features (such as Automatic SQL Tuning) have not yet been implemented.

Let us look into a few database brands:

SQL Server:

There are dozens of Automatic features and properties embedded into SQL Server:

Automatic Tuning can do a lot of things, such as automated performance tuning of databases, automated verification of performance gains, and automated rollback and self-correction. More importantly, it is the only database besides Oracle to have Automatic Indexing. It is worth checking how automatic index management works in the Azure SQL database: Azure SQL Database analyzes your workload, identifies the queries that could be executed faster if you create an index, identifies indexes that are not used over a longer period of time, and identifies duplicated indexes in the database.

Db2:

IBM Db2 also has a relatively good list of Automatic features, including self-tuning memory (single-partition databases only), automatic storage, automatic database backups, automatic reorganization and automatic statistics collection. The Db2 documentation claims that the Db2® autonomic computing environment is self-configuring, self-healing, self-optimizing, and self-protecting, but this is far behind what Oracle ADB has to offer. Tuning SQL is more difficult in Db2 than in Oracle, and the single reason for that is the tools and features offered by the two systems.

Redshift:

Amazon’s Redshift has a good set of features. Looking at what automation is like, we find enough for a relatively new database brand (Redshift, not PostgreSQL).

In terms of storage, it is mostly about how Redshift automatically takes care of data formatting and data movement into S3 and how with managed storage, capacity is added automatically to support workloads up to 8PB of compressed data.

There is of course Automated provisioning and Automated backups. Plus, Automatic workload management (WLM) uses machine learning to dynamically manage memory and concurrency, helping maximize query throughput.

Also, Amazon Redshift continuously monitors the health of the cluster, and automatically re-replicates data from failed drives and replaces nodes as necessary for fault tolerance.

Very much like Oracle RAC and Oracle Exadata, as of May 2020, Amazon Redshift now leverages Bloom filters to enable early and effective data filtering. Redshift automatically determines what queries are suitable for leveraging Bloom filters at query runtime.

Recently, Amazon Redshift also introduced ATS (= Automatic Table Sort), an automated alternative to Vacuum Sort. Automatic table sort complements Automatic Vacuum Delete and Automatic Analyze, and together these capabilities fully automate table maintenance. Automatic table sort is now enabled by default on Redshift tables where a sort key is specified.

Amazon Redshift automatically takes incremental snapshots (backups) of your data every 8 hours or 5 GB per node of data change. You now get more information and control over a snapshot including the ability to control the automatic snapshot’s schedule.

In terms of SQL tuning, Amazon Redshift now automatically and elastically scales query processing power to provide consistently fast performance for hundreds of concurrent queries. The database automatically shuts down Concurrency Scaling resources to save you cost. Also, Amazon Redshift now updates table statistics by running ANALYZE automatically but that is no news for Oracle database users. Amazon Redshift improves query performance by automatically moving read and write queries to the next matching queue without restarting the moved queries.

Snowflake:

Looking into the Top 10 cool things about Snowflake, we see the fact that the JSON documents are stored in a table and optimized automatically in the background for MPP and columnar access. There is Automatic Encryption of Data and Automatic Query Optimization. No Tuning! Really cool as “It is all handled “auto-magically” via a dynamic query optimization engine in our cloud services layer. So, no indexes, no need to figure out partitions and partition keys, no need to pre-shard any data for even distribution, and no need to remember to update statistics.”

I like the “auto-magical” part 🙂

My interest was caught by the deep dive of the Revolutionary features of Snowflake that sets it apart — A Deep dive: all I found on automation was “Automatic scale down”. No comment here.

PostgreSQL:

If we look into the PostgreSQL Feature Matrix, we see only one feature about automation: Automatic plan invalidation. There is a mention of WAL Buffer auto-tuning and Autovacuum.

For databases such as MySQL, PostgreSQL and MongoDB, check this article: Why is Database Automation Important?

A recent discussion Oracle vs. PostgreSQL basically tells it all – in case you have the patience to read it – here are both sides:

Pro-Oracle: “Comparing Postgres with Oracle is a bit like comparing a rubber duck you might buy your three year old, with a 300000 ton super tanker. Do they both float? Yeah, but that’s about the only similarity.”
Pro-PostgreSQL: “So bottom line, PostgreSQL beats Oracle by far in my opinion, at least as far as installing it and sizes are concerned.”

Looking into the Features and Benefits of EDB Postgres Cloud Database Service, we see Automated Notifications, Automated Monitoring, Automated Backups and Automated Replica Failover.

Oracle:

Of course, Oracle ADB (Autonomous Database) has all automation features embedded into the service, but not all applications are certified for or fit ADB. Not yet, at least. Still, I think that most automation features can be easily implemented (into a non-ADB) with some prior testing, and the benefits are enormous. The list of automation features is long – here are the major ones:

Features in 19c and 20c which are worth implementing (at least looking into and testing) are:

– Automatic Indexing
– Automatic SQL Plan Management
– Automatic Index Optimization
– Automatic Zone Maps

Automation and innovation go hand in hand. Database innovation in an enterprise means using database technology in new ways to create a more efficient organization and improve the alignment between technology and business: completing the same DBA work with smaller teams by implementing automation features in the database, reducing storage and labor costs, and increasing system uptime and performance.

Bottom line for DBAs: regardless of the database, there is room for innovation in every database brand!

Defragmentation of Large Objects / SecureFiles LOBs in Oracle Database 20c

In Database tuning, DBA, New features, Oracle database on May 25, 2020 at 06:09

In Oracle 20c, the init.ora parameter DB_SECUREFILE defaults to PREFERRED. This means that all large objects / LOBs are created as SecureFiles unless BASICFILE is explicitly specified in the LOB storage clause or the tablespace is an MSSM (= Manual Segment Space Management) tablespace.

Until 19c, only defragmentation of BasicFile LOBs was possible. Tim Hall showed that in order to shrink a SecureFile LOB you need to move it.

SecureFiles defragmentation in 20c provides online defragmentation of allocated and freed space in SecureFiles segments for all types of SecureFiles LOBs – compressed, deduplicated and encrypted.

In an Oracle 20.2.0 database, I have a table called BLOGS. Let us turn on compression, deduplication and encryption:

 
SQL> ALTER TABLE blogs MODIFY LOB (blog_text) 
(COMPRESS HIGH ENCRYPT DEDUPLICATE); 

Table altered.

Defragmentation can be done automatically by a background process, and the Segment Advisor can estimate the fragmentation levels and how much space can be saved. Note that some temp segment space is needed to hold intermediate results.

Let us try to defragment the SecureFiles LOB column BLOG_TEXT and use the Segment Space Advisor to see the forecast vs. the reality.

In order to defragment the SecureFiles LOBs, we need to use the shrink_clause. The shrink_clause lets us (in general) manually shrink space in a table, index-organized table or its overflow segment, index, partition, subpartition, LOB segment, materialized view, or materialized view log. This clause is valid only for segments in tablespaces with automatic segment management.

By default, Oracle Database compacts the segment, adjusts the high water mark, and releases the recuperated space immediately. Compacting the segment requires row movement. Therefore, you must enable row movement for the object you want to shrink before specifying this clause. Further, if your application has any rowid-based triggers, you should disable them before issuing this clause.

 
SQL> ALTER TABLE blogs ENABLE ROW MOVEMENT;

Table altered.

With release 20c, you can use the shrink_clause on SecureFiles LOB segments in two ways:

1. Target a specific LOB column and all its partitions:

ALTER TABLE blogs MODIFY LOB (blog_text) (SHRINK SPACE);

2. Cascade the shrink operation for all the LOB columns of the table and its partitions:

ALTER TABLE blogs SHRINK SPACE CASCADE;

Do not attempt to enable row movement for an index-organized table before specifying the shrink_clause. The ROWID of an index-organized table is its primary key, which never changes. Therefore, row movement is neither relevant nor valid for IOTs.

There are 2 important options/keywords with the shrink space syntax:

COMPACT: If you specify COMPACT, then Oracle only defragments the segment space and compacts the table rows for subsequent release. Meaning Oracle will recover space but will not amend the high water mark (HWM). So, Oracle does not release the space immediately.

CASCADE: If you specify CASCADE, then Oracle performs the same operations on all dependent objects of table, including secondary indexes on index-organized tables. Meaning Oracle will recover space for the object and all dependent objects.
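For example, here is a sketch of the two-step approach: compact first (no change to the high water mark, minimal disruption), then release the space in a second step:

SQL> ALTER TABLE blogs SHRINK SPACE COMPACT;

SQL> ALTER TABLE blogs SHRINK SPACE;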

Let us follow the steps for the BLOGS table:

1. Run the Segment Space Advisor:

 
DECLARE
  seg_task_id   number;
  seg_task_name varchar2(100);
  seg_task_desc varchar2(500);
BEGIN
  seg_task_name := 'SecureFileDefragmentation1';
  seg_task_desc := 'Manual Segment Advisor Run for table BLOGS';
  dbms_advisor.create_task (
    advisor_name => 'Segment Advisor',
    task_id      => seg_task_id,
    task_name    => seg_task_name,
    task_desc    => seg_task_desc);
END;
/

DECLARE
  obj_id number;
BEGIN
  dbms_advisor.create_object (
    task_name   => 'SecureFileDefragmentation1',
    object_type => 'TABLE',
    attr1       => 'JULIAN',
    attr2       => 'BLOGS',
    attr3       => NULL,
    attr4       => NULL,
    attr5       => NULL,
    object_id   => obj_id);
END;
/

BEGIN
  dbms_advisor.set_task_parameter(
    task_name => 'SecureFileDefragmentation1',
    parameter => 'recommend_all',
    value     => 'TRUE');
END;
/

exec dbms_advisor.execute_task('SecureFileDefragmentation1');

2. Let us check the findings from DBA_ADVISOR_FINDINGS:

 
SQL> select message,more_info from dba_advisor_findings where task_name='SecureFileDefragmentation1';

MESSAGE
-------
MORE_INFO
-----------------------------------------------------------------------
The free space in the object is less than 10MB.
Allocated Space:15728640: Used Space:4013928: Reclaimable Space :180376:

3. Now let us defragment the SecureFile LOB:

 
SQL> select bytes from dba_segments where segment_name='BLOGS';

     BYTES
----------
  15728640

SQL> ALTER TABLE blogs MODIFY LOB (blog_text) (SHRINK SPACE);

Table altered.

SQL> select bytes from dba_segments where segment_name='BLOGS';

     BYTES
----------
  14745600

SQL> ALTER TABLE blogs SHRINK SPACE CASCADE;

Table altered.

SQL> select bytes from dba_segments where segment_name='BLOGS';

     BYTES
----------
   1048576

As you can see, with the simple operations above, we managed to decrease the size of the BLOGS table 15 times: from 15728640 to 1048576 bytes.

The shrink_clause is subject to the following restrictions:
– You cannot combine this clause with any other clauses in the same ALTER TABLE statement.
– You cannot specify this clause for a cluster, a clustered table, or any object with a LONG column
– Segment shrink is not supported for tables with function-based indexes, domain indexes, or bitmap join indexes
– With this clause, Oracle does not shrink mapping tables of index-organized tables, even if you specify CASCADE
– You can specify the shrink_clause for a table with advanced row compression enabled (ROW STORE COMPRESS ADVANCED) but you cannot specify this clause for a table with any other type of table compression enabled
– You cannot shrink a table that is the master table of an ON COMMIT materialized view
– Rowid materialized views must be rebuilt after the shrink operation.

Automatic Zone Maps in the Oracle Database

In Data, Database tuning, Databases, DBA, New features, Oracle database on May 18, 2020 at 06:18

A zone is a set of contiguous data blocks on disk.

A zone map is an index-like structure built on a table and stores information about the zones of that table.

There are 2 major differences between indexes and zone maps:

– A zone map stores information per zone instead of per row which makes it much more compact than an index
– A zone map is not actively managed the way an index is kept in sync with the DML on the table

Zone maps are closer as a concept to Exadata’s storage indexes than to B-tree indexes.

Before going into how Automatic Zone Maps work in Oracle 20c, let me explain the concept with an example. Consider a small table containing basic information about some relational databases from db-engines.com (rank, score, initial and last release, cloud based):

The RDBMS_BRANDS segment has 6 data blocks with 2 rows per block:

Let us now create the zonemap on the RDBMS_BRANDS table (on 3 columns only):

 
SQL> CREATE MATERIALIZED ZONEMAP rdbms_zmap ON 
rdbms_brands (db_engines_rank, db_engines_score, initial_release); 

Materialized zonemap RDBMS_ZMAP created.

We have now 3 zones and each zone contains two blocks and stores the minimum and maximum of db_engines_rank, db_engines_score and initial_release:

Next, let us run a query returning all RDBMS brands with ranking score more than 1000:
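The query is along these lines (column names as used in the zone map definition above):

SQL> SELECT * FROM rdbms_brands WHERE db_engines_score > 1000;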

Looking at the execution plan below, we see that Oracle is scanning only Zone 1, as the maximum score in all other zones is smaller than 1000:

That is how zone maps work … but what is new in Oracle 20c?

We can now enable automatic creation and maintenance of basic zone maps for both partitioned and non-partitioned tables. For now, however, automatic creation is not available for join zone maps, IOTs, external tables or temporary tables!

In 20c, you can use the new package DBMS_AUTO_ZONEMAP to enable Automatic Zone Maps in the database. Automatic zone map creation is turned off by default.

These four values are allowed for the parameter AUTO_ZONEMAP_MODE (see the example after the list):

ON: Turns on auto zone map feature completely. Both for foreground and background zone map creation and maintenance
OFF: Turns off auto zone map feature completely. Both for foreground and background zone map creation and maintenance
FOREGROUND: Turns on only for foreground zone map creation and maintenance
BACKGROUND: Turns on only for background zone map creation and maintenance
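For example, a sketch of turning the feature on completely:

SQL> exec DBMS_AUTO_ZONEMAP.CONFIGURE('AUTO_ZONEMAP_MODE', 'ON');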

You may use the ACTIVITY_REPORT function to view auto zone map activity for a given time window. Note that the background job that performs automatic zone map processing starts once per hour and each run may last up to three hours.

 
SET LONG 100000
SELECT dbms_auto_zonemap.activity_report() report FROM dual;

These 2 zone map related views show the most important information DBAs need:

DBA_ZONEMAPS displays all zone maps in the database
DBA_ZONEMAP_MEASURES displays the measures for all zone maps in the database

On a final note: Automatic Zone Maps are for now available only on Exadata and require the Oracle Partitioning option.

Automatic Index Optimization in Oracle Database 20c

In Database tuning, Databases, DBA, New features, Oracle database on May 11, 2020 at 05:47

Oracle database 20c came with 138 new features and one of them is directly related to the indexes in the database. It is called Automatic Index Optimization.

For some DBAs, indexes in the database do not need extra care, and they do not bother much about rebuilding, compressing, coalescing or shrinking them. I have administered in the past a 24×7 mission-critical database with a size of 5TB where indexes were occupying 4.5TB of all that. Correct: real data was less than 500GB, granted that you also have SYSTEM, SYSAUX, etc.

Automatic Index Optimization does not mean optimization of the Automatic Indexes in the database, but rather making index optimization an automatic process. Here is how it works and what you have to do in order to enable it and make it work.

First, in order to implement an ILM strategy, you have to enable Heat Maps in the database to track data access and modification. You can enable and disable heat map tracking at the system or session level with the ALTER SYSTEM or ALTER SESSION statement using the HEAT_MAP init.ora parameter, for example:

SQL> alter system set HEAT_MAP = ON;

Like ADO for data segments, Automatic Index Optimization works via ILM on indexes by enabling policies that automatically optimize indexes by compressing, shrinking and rebuilding them. Oracle is using the existing Heat Maps and collects activity statistics on the indexes.

So next, add ADO policies for indexes in order to enable their compression and optimization using the existing Automatic Data Optimization (ADO) framework. You can do it for newly created indexes as well as for already existing indexes.

There are 2 options:

– ADD POLICY TIER in order to perform the operation on, say, a low-cost / tier 2 tablespace when tier 1 storage is under space pressure
– ADD POLICY OPTIMIZE in order to kick off the process after a certain number of days passes without accessing the index

Here are a few examples:

SQL> create index julian.price_idx ON julian.sales(price)
ILM ADD POLICY OPTIMIZE AFTER 31 DAYS OF NO MODIFICATION;

SQL> alter index julian.price_idx ILM ADD POLICY TIER TO BC_DATA;

SQL> alter index julian.price_idx
ILM ADD POLICY OPTIMIZE AFTER 3 DAYS OF NO ACCESS;

SQL> SELECT POLICY_NAME, POLICY_TYPE, ENABLED FROM DBA_ILMPOLICIES;

POLICY_NAME             POLICY_TYPE    ENA
---------------------   -------------  ---
P1                      DATA MOVEMENT  YES

Note that the Oracle documentation has the tier syntax wrong: instead of “ILM ADD POLICY SEGMENT TIER” use “ILM ADD POLICY TIER”.

The optimization process includes actions such as compressing, shrinking or rebuilding the indexes:

Compress: Compresses portions of the key values in an index segment (~3 times)
Shrink: Merges the contents of index blocks where possible to free blocks for reuse
Rebuild: Rebuilds an index to improve space usage and access speed

Notice that you cannot decide which of the 3 above to use. Oracle automatically determines which action is optimal for the index and implements that action as part of the optimization process.

But can we have Automatic Index optimization for Automatic Indexes and not have to rebuild them manually any longer? I did try indeed and here is the result:

SQL> alter index julian."SYS_AI_abrca2u9qmxt7"
ILM ADD POLICY OPTIMIZE AFTER 7 DAYS OF NO ACCESS;

Error starting at line : 17 in command -
alter index julian."SYS_AI_abrca2u9qmxt7"
ILM ADD POLICY OPTIMIZE AFTER 7 DAYS OF NO ACCESS
Error report -
ORA-65532: cannot alter or drop automatically created indexes

Clearly not possible but the error message is sort of misleading because actually you can alter and even drop automatic indexes:

SQL> alter index julian."SYS_AI_abrca2u9qmxt7" rebuild online;
Index altered.
SQL> alter index julian."SYS_AI_abrca2u9qmxt7" coalesce;
Index altered.
SQL> alter index julian."SYS_AI_abrca2u9qmxt7" shrink space;
Index altered.

How to drop the Auto Index? Just rebuild it into a new tablespace and run “drop tablespace … including contents and datafiles” – yes, it works.

Also, while administering ADO policies for indexes, you cannot manually disable these policies but you can delete an index policy. An ADO policy for indexes executes only one time. After the policy executes successfully, the policy is disabled and is not evaluated again.

SQL> alter index julian.price_idx ILM DELETE POLICY p1;

Moreover, such policies for indexes at partition level are not yet supported: the ADO policy is cascaded to all partitions. So, if we have hybrid tables in the Cloud, we cannot move local indexes automatically to object storage, but it should work for global indexes. Note that we can use Automatic Data Optimization (ADO) policies with hybrid partitioned tables under some conditions.

Here are some other limitations:

– ADO does not perform checks for storage space in a target tablespace when using storage tiering
– ADO is not supported on materialized views
– ADO is not supported with index-organized tables or clusters
– ADO concurrency depends on the concurrency of the Oracle scheduler meaning if a policy job for ADO fails more than two times, then the job is marked disabled and the job must be manually enabled later

The feature is available in almost all flavors of the database, i.e., EE, Database Cloud Service, Exadata and ODA, but it requires the Oracle Advanced Compression option.

Memoptimized Rowstore Fast Ingest in Oracle Database 19c

In Data, Database tuning, Databases, DBA, Oracle database, Oracle internals on November 25, 2019 at 15:53

“A filmmaker should never assume he’s superior to his subject. I often find that even the simplest topic remains an enigma. The best film portraits not only evoke that enigma but ingest it in a process that renders what’s invisible visible.” – Damian Pettigrew, a documentary SME

The same statement holds for database SMEs using the new fast ingest process, because their data might at one point be neither visible nor invisible. From a DBA point of view, I mean…

Memoptimized Rowstore fast ingest is a new feature of Oracle 19c and is all about high-performance data streaming. Here is an example from Oracle which is ultra fast: arguably 25 million inserts per second on a two-socket server:

Memoptimized Rowstore was introduced in Oracle 18c, and here are a few good blog posts on the subject:

Oracle Database 18c: Introduction to Memoptimized RowStore (Memory of a Free Festival) by Richard Foote
MemOptimized RowStore in Oracle Database 18c by Philipp Salvisberg

The fast ingest functionality of Memoptimized Rowstore enables fast data inserts into an Oracle Database from applications that ingest small but high volume transactions with almost no transactional overhead. The inserted data is temporarily buffered in the large pool before writing it to disk in bulk in a deferred, asynchronous manner.

Memoptimized Rowstore provides the following two functionalities:

Fast ingest:
– Fast ingest optimizes the processing of high-frequency, single-row data inserts into a database
– Fast ingest uses the large pool for buffering the inserts before writing them to disk, so as to improve data insert performance

Fast lookup:
– Fast lookup enables fast retrieval of data for high-frequency queries
– Fast lookup uses a separate memory area in the SGA called the memoptimize pool for buffering the data queried from tables
– For using fast lookup, you must allocate appropriate memory to the memoptimize pool using the MEMOPTIMIZE_POOL_SIZE parameter (see the sketch below)
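MEMOPTIMIZE_POOL_SIZE is a static parameter, so a sizing sketch (the 2G value is only an example) looks like this, followed by an instance restart:

SQL> ALTER SYSTEM SET MEMOPTIMIZE_POOL_SIZE = 2G SCOPE = SPFILE;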

The memoptimize pool contains two parts: (1) the memoptimize buffer area which occupies 75% of the memoptimize pool and (2) a hash index that is a non-persistent segment data structure having several units containing a number of hash buckets. The hash index occupies 25% of the memoptimize pool.

Memoptimize buffer area

There are two steps for using fast ingest for inserting data:

1. Enable a table for fast ingest: specify the MEMOPTIMIZE FOR WRITE clause in the CREATE TABLE or ALTER TABLE statement: Example on how to enable a table for Fast Ingest

2. Specify a hint for using fast ingest for an insert operation by specifying the MEMOPTIMIZE_WRITE hint in the INSERT statement: Example on specifying a hint for using Fast Ingest for data inserts
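Putting the two steps together, here is a minimal sketch (table and column names are hypothetical):

CREATE TABLE iot_readings (   -- hypothetical table for illustration
  device_id NUMBER,
  reading   NUMBER,
  ts        TIMESTAMP)
SEGMENT CREATION IMMEDIATE
MEMOPTIMIZE FOR WRITE;

INSERT /*+ MEMOPTIMIZE_WRITE */ INTO iot_readings
VALUES (42, 36.6, SYSTIMESTAMP);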

Note that a table can be configured for using both fast ingest and fast lookup but these 4 points should be always considered:

– Because fast ingest buffers data in the large pool, there is a possibility of data loss in the event of a system failure. To avoid data loss, a client must keep a local copy of the data after performing inserts, so that it can replay the inserts in the event of a system failure before the data is written to disk. A client can use the DBMS_MEMOPTIMIZE package subprograms to track the durability of the inserts. After inserts are written to disk, a client can destroy its local copy of the inserted data.

Use the GET_APPLY_HWM_SEQID function to find the low high-water mark of sequence numbers of the records that have been successfully written to disk across all the sessions:

select DBMS_MEMOPTIMIZE.GET_APPLY_HWM_SEQID from dual;

Equivalently, use the GET_WRITE_HWM_SEQID function to return the high-water mark sequence number of the record that has been written to the large pool for the current session.

select DBMS_MEMOPTIMIZE.GET_WRITE_HWM_SEQID from dual;

– Queries do not read data from the large pool, hence data inserted using fast ingest cannot be queried until it is written to disk.

If you need to flush all the fast ingest data from the large pool to disk for the current session, just run:

exec DBMS_MEMOPTIMIZE.WRITE_END;

– Parent-child transactions must be synchronized to avoid errors. For example, foreign key inserts and updates of rows inserted into the large pool can return errors, if the parent data is not yet written to disk.

– Index operations are supported with fast ingest similarly to regular inserts. However, with fast ingest, the database performs index operations while writing data to disk, not while writing data into the large pool.

You can disable a table for fast ingest by specifying the NO MEMOPTIMIZE FOR WRITE clause in the ALTER TABLE statement. You can view the fast ingest data in the large pool using the V$MEMOPTIMIZE_WRITE_AREA view. You can also view and control the fast ingest data in the large pool using the subprograms of the packages DBMS_MEMOPTIMIZE and DBMS_MEMOPTIMIZE_ADMIN.

The DBMS_MEMOPTIMIZE_ADMIN package has only one procedure WRITES_FLUSH which does not accept any input or output parameters.

exec DBMS_MEMOPTIMIZE_ADMIN.WRITES_FLUSH;

Here is one more example from Oracle:

As you can see, you can lose data using this new technique. Use it with caution, unless you can afford that or can re-insert the data.

If you need to remove or populate fast lookup data for a certain table from the memoptimize pool, use DBMS_MEMOPTIMIZE.DROP_OBJECT and DBMS_MEMOPTIMIZE.POPULATE respectively.

Check MemOptimized RowStore in Oracle Database 19c by Philipp Salvisberg for a good performance comparison between PL/SQL and Java Thin.

Automatic Indexing in 19c

In Autonomous, Database tuning, Databases, DBA, Oracle database on February 18, 2019 at 17:38

One of the most impressive new features of Oracle Database 19c is Automatic Indexing. Arguably, this is the most interesting innovation in the database world for a rather long time.

I remember some years ago when a DBA asked me at an Oracle conference: “Julian, why are half of the presentations at Oracle database conferences only about performance tuning? Is the Oracle database performing that badly that people should tune it all the time?” Sigh…

With 19c and ADB (Oracle Autonomous Database), things look very different now, don’t they? Automatic Indexing provides what database systems need: continuous optimization of the database workload, stable & solid performance and almost no human interaction. Let me share some of my early experience with Automatic Indexing and where human interaction is needed.

For now (February 18th, 2019), Oracle 19c is only available on Exadata (Linux 7.4) and in order to enable Automatic Indexing you need to do the following:

EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','IMPLEMENT');

As you can guess from the picture above, the so-called expert system of Automatic Indexing runs every 15 minutes, for as long as one hour. Note that I disabled the job from 4:43 till 5:56. The Resource Manager plan limits the task to 1 CPU only, and the next run is skipped if the job has not been completed within the 15 minutes.

Here are details on how Automatic Indexing works, but what is most important to remember is as follows:

– The auto index candidates are created as invisible auto indexes
– If the performance of SQL statements is not improved from the auto indexes, then the indexes are marked as unusable and the corresponding SQL statements are blacklisted
– Auto indexes cannot be used for any first-time SQL run against the database
– Auto indexes are created as either single, concatenated indexes or function-based indexes and they all use advanced low compression
– The unused auto indexes are deleted after 373 days (can be changed)
– The unused non-auto indexes (manual indexes) are never deleted by the automatic indexing process but can be deleted automatically if needed

Auto Indexing can be disabled at any time, or can be set to reporting mode (new auto indexes are created as invisible indexes, so that they cannot be used in SQL) with the following commands:

EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','OFF');

 

EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','REPORT ONLY');

Here is a way to ask Oracle to create new auto indexes in a separate tablespace called AUTO_INDEX_TS:

SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_DEFAULT_TABLESPACE','AUTO_INDEX_TS');

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01

You can easily check the configuration for Automatic Indexing for the root container and the PDBs from CDB_AUTO_INDEX_CONFIG:
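For example:

SQL> SELECT con_id, parameter_name, parameter_value FROM cdb_auto_index_config;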

If you need a report of what happened during the expert system activity (either last 3 days or during the last activity), here is a way to generate it:

set long 300000
select DBMS_AUTO_INDEX.REPORT_ACTIVITY(SYSTIMESTAMP-3,SYSTIMESTAMP,'TEXT','ALL','ALL') from dual;
select DBMS_AUTO_INDEX.REPORT_LAST_ACTIVITY('TEXT','ALL','ALL') from dual;

These are the most important views about Auto Indexing:

DBA_AUTO_INDEX_EXECUTIONS: history of execution of automatic indexing tasks
DBA_AUTO_INDEX_STATISTICS: statistics related to auto indexes
DBA_AUTO_INDEX_IND_ACTIONS: actions performed on auto indexes
DBA_AUTO_INDEX_SQL_ACTIONS: actions performed on SQL statements for verifying auto indexes
DBA_AUTO_INDEX_CONFIG: configuration settings related to auto indexes
DBA_AUTO_INDEX_VERIFICATIONS: stats about PLAN_HASH_VALUE, AUTO_INDEX_BUFFER_GETS, etc.

The new package DBMS_AUTO_INDEX can be used for 3 main things:

1. Configuration of the parameters related to Auto Indexing
2. Drop *all* the indexes except the ones used for constraints
3. Report the activity of the “expert system”:

Finally, here are some additional resources:

Automatic Indexing in Oracle Database 19c
Oracle Database 19c is now available!
Managing Auto Indexes

How to check if I have any auto indexes in my database: select auto, count(*) from dba_indexes group by auto;