Dontcheff

Archive for the ‘Oracle internals’ Category

Automatic Segment Advisor in the Oracle Database

In Autonomous, Database tuning, DBA, Oracle database, Oracle internals on July 1, 2022 at 06:50

“From my experience, the best advisors help in three ways: encourage you to look at the problem or opportunity from multiple angles; help you balance the tug of the short-term with important long-term priorities; and ask the tough questions you need to know to reach the best solution.” Margo Georgiadis

The Oracle Segment Advisor identifies segments that have space which can be reclaimed. However, the Automatic Segment Advisor can be at times resource consuming and even slow down your database:

Why does this happen, given that the Automatic Segment Advisor does not analyze every database object? Here is how it works internally: the advisor examines the database statistics, samples segment data, and then selects the following objects to analyze:

  • Tablespaces that have exceeded a critical or warning space threshold
  • Segments that have the most activity
  • Segments that have the highest growth rate

In addition, the Automatic Segment Advisor evaluates tables that are at least 10MB in size and have at least 3 indexes to determine the amount of space saved if the tables are compressed with advanced row compression.

Now, here is the important part: if a database object is selected for analysis by the advisor but the maintenance window expires before the advisor can process the object, the object is included in the next Automatic Segment Advisor run. So, at some point you may end up in a situation where lots of objects have to be analyzed. During the maintenance window, the following clients/tasks are run – these are the predefined automated maintenance tasks:

Sometimes, it makes sense to disable the auto space advisor as you cannot change the set of tablespaces and segments that the Automatic Segment Advisor selects for analysis. You can, however, enable or disable the Automatic Segment Advisor task, change the times during which the Automatic Segment Advisor is scheduled to run, or adjust automated maintenance task system resource utilization. Especially in a situation like this:

Often, ADDM may also point to a problem with the Automatic Segment Advisor task:

Here is how to disable (and enable) the tasks individually. The main switch is controlled by DBMS_AUTO_TASK_ADMIN.DISABLE().
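
For example, a minimal sketch (assuming the client name 'auto space advisor', as listed in dba_autotask_client) of switching the Segment Advisor autotask off and back on:

BEGIN
  DBMS_AUTO_TASK_ADMIN.DISABLE(
    client_name => 'auto space advisor',
    operation   => NULL,
    window_name => NULL);
END;
/

-- re-enable it later with:
BEGIN
  DBMS_AUTO_TASK_ADMIN.ENABLE(
    client_name => 'auto space advisor',
    operation   => NULL,
    window_name => NULL);
END;
/

-- verify the status:
select client_name, status from dba_autotask_client;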

Even after disabling the entire autotask job by running DBMS_AUTO_TASK_ADMIN.DISABLE(), DBA_AUTOTASK_TASK will still show the autotask client as enabled. For this you need to disable the jobs individually, as shown above. And in a multitenant environment, the CDB and the PDBs have their own autotasks: disabling the CDB’s autotask will not affect the PDBs, so you will have to do this for each pluggable database.

Starting with 12.2, there is a parameter called ENABLE_AUTOMATIC_MAINTENANCE_PDB that can be used to enable or disable the running of automated maintenance tasks for all the PDBs in a CDB or for individual PDBs in a CDB. When ENABLE_AUTOMATIC_MAINTENANCE_PDB is changed in the CDB root from TRUE to FALSE, the new value FALSE takes effect in the root and in all the PDBs in the CDB.
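
A minimal sketch of flipping the parameter (run it in the CDB root to affect the root and all PDBs, or inside an individual PDB to affect only that PDB):

alter session set container = CDB$ROOT;
alter system set enable_automatic_maintenance_pdb = FALSE;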

If you get into a situation where the Automatic Segment Advisor is consuming lots of resources and slowing down the database during the maintenance windows, do one of the following:

  1. Disable the autotask client for the Segment Advisor and run it manually on a per-need basis.
  2. Increase the maintenance window from the default, which starts at 10 p.m. from Monday to Friday and ends at 2 a.m. – often 4 hours is just not enough. The weekend window is 20 hours long and in most cases long enough. Both options are sketched below.
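
Here is a rough sketch of both options; the task name and the SCOTT.EMP table are placeholders, and the window attributes assume the default MONDAY_WINDOW of the MAINTENANCE_WINDOW_GROUP:

-- Option 1: run the Segment Advisor manually for a single table
DECLARE
  l_task_name VARCHAR2(128) := 'MANUAL_SEG_ADV_EMP';
  l_task_id   NUMBER;
  l_obj_id    NUMBER;
BEGIN
  DBMS_ADVISOR.CREATE_TASK(advisor_name => 'Segment Advisor',
                           task_id      => l_task_id,
                           task_name    => l_task_name);
  DBMS_ADVISOR.CREATE_OBJECT(task_name   => l_task_name,
                             object_type => 'TABLE',
                             attr1       => 'SCOTT',
                             attr2       => 'EMP',
                             attr3       => NULL,
                             attr4       => NULL,
                             object_id   => l_obj_id);
  DBMS_ADVISOR.SET_TASK_PARAMETER(task_name => l_task_name,
                                  parameter => 'RECOMMEND_ALL',
                                  value     => 'TRUE');
  DBMS_ADVISOR.EXECUTE_TASK(task_name => l_task_name);
END;
/

-- Option 2: extend the Monday maintenance window from 4 to, say, 8 hours
BEGIN
  DBMS_SCHEDULER.SET_ATTRIBUTE(name      => 'SYS.MONDAY_WINDOW',
                               attribute => 'DURATION',
                               value     => numtodsinterval(8, 'hour'));
END;
/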

In the Autonomous Database, you have access to dba_autotask_client, etc., and you can disable and enable the auto space advisor task; however, you do not have full visibility into dba_scheduler_window_groups, etc. – some columns just show as NULL. Still, the performance task in ADB can give you some idea of what is going on:

And finally, here are some other situations that might require disabling the Auto Space Advisor Job:

  • ORA-01555 While Running Auto Space Advisor Job on Object wri$_adv_objspace_trend_data (Doc ID 2576430.1)
  • Auto Space Advisor is Taking More Time due to Recursive Query Taking a Long Time (Doc ID 2382419.1)
  • SEGMENT ADVISOR not working as expected for LOB or SYS_LOB SEGMENT (Doc ID 988744.1)

User defined locks: enq: UL – contention

In Database tuning, DBA, Oracle database, Oracle internals on June 13, 2022 at 17:30

DBAs are familiar with most wait events, however there is one called “enq: UL – contention”, which does not pop up that often:

The foreground events above are from an Oracle 19c database. Clearly, we see a lot of user defined lock waits: enq: UL – contention. They usually appear when the DBMS_LOCK package is being used.

There are 3 enqueue types defined in Oracle as “User Type” locks:

  1. TM – DML Enqueue called against base tables or partitions for operations that need to be coordinated
  2. TX – Transaction Enqueue used to protect transaction information
  3. UL – User Lock Enqueue used when an application makes use of the DBMS_LOCK package

MOS has a note Resolving Issues Where ‘enq: UL – contention’ Waits are Occurring (Doc ID 1915980.1) which can help us identify what session is holding the lock and what part of the application it is currently executing.

The ‘UL’ (User-defined Lock) enqueue is a lock that is created and defined by a developer (or, seldom, a DBA) using the DBMS_LOCK package. The DBMS_LOCK package allows the application developer to request, convert and release locks that are independent of the locks provided automatically by Oracle. These routines are meant for synchronization within the application and augment the locking already provided by the database.

Locks are automatically released at the end of a session. DBMS_LOCK.RELEASE explicitly releases a lock previously acquired using the REQUEST function.
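
A minimal sketch of the typical request/release cycle (the lock name is hypothetical); sessions stuck in DBMS_LOCK.REQUEST on a handle held by someone else are exactly the ones reporting “enq: UL – contention”:

DECLARE
  l_handle VARCHAR2(128);
  l_status INTEGER;
BEGIN
  -- map an application-defined lock name to a lock handle
  DBMS_LOCK.ALLOCATE_UNIQUE(lockname   => 'MY_APP_BATCH_LOCK',
                            lockhandle => l_handle);
  -- request the lock in exclusive mode, waiting up to 10 seconds
  l_status := DBMS_LOCK.REQUEST(lockhandle        => l_handle,
                                lockmode          => DBMS_LOCK.X_MODE,
                                timeout           => 10,
                                release_on_commit => FALSE);
  -- 0 = success, 1 = timeout, 2 = deadlock, 3 = parameter error,
  -- 4 = already owned, 5 = illegal lock handle
  IF l_status = 0 THEN
    -- the work that must be serialized goes here
    l_status := DBMS_LOCK.RELEASE(lockhandle => l_handle);
  END IF;
END;
/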

Wondering how you can find who has allocated a user lock via dbms_lock?

Here is the query from the link above showing how to find the active blocking situations around DBMS_LOCK:

SELECT blocker.username blocker, blocker.sid blocker_session
     , blocked.username blocked, blocked.sid blocked_session
  FROM v$session blocked
  JOIN v$session blocker
  ON (blocked.blocking_session = blocker.sid)
 WHERE blocked.wait_class = 'Application'
   AND blocked.event='enq: UL - contention';

The DBMS_LOCK_ALLOCATED table is periodically cleared out by calling the allocate_unique() procedure when the modulus of the lockid is 0 and the expiration column is less than the current date. On every 100th attempt to create a new entry, allocate_unique purges expired entries from the table. If the entry already exists, allocate_unique updates the expiry time.

There are a couple important MOS notes related to the DBMS_LOCK_ALLOCATED table:

The first one, called DBMS_LOCK_ALLOCATED Table Keeps Growing Eventually Depleting Tablespace (Doc ID 1054343.6), explains how to delete entries for locks that are no longer being used. The second one, entitled How To Cleanup And Shrink DBMS_LOCK_ALLOCATED? (Doc ID 1065384.1), is similar and warns that Bug 2624130 can make the delete very slow. In that case, one option is to use DBMS_REDEFINITION.

The structure of the DBMS_LOCK_ALLOCATED table is rather simple – it has only 3 columns: name, lockid and expiration.
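
A quick way (as a suitably privileged user) to see how many entries have already expired and are candidates for cleanup:

select name, lockid, expiration
  from dbms_lock_allocated
 where expiration < sysdate
 order by expiration;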

There are a few bugs in the Oracle database related to the “enq: UL – contention” wait event:

Bug 30070584 – 13.3 ‘enq: UL – contention’ wait event (OEM related)
Bug 27282543 – 13.2 Frequent UL contention wait events for the node 2 on EMNPRD repository db (OEM)
Bug 26618817 – UL contention from ProcessBuddyAgent jobs in scheduled state

A recent update from 2020 (Oracle 19.3) by Jonathan Lewis confirmed that PL/SQL execution time includes the time spent waiting for UL enqueues, so we should not forget to subtract the wait time for ‘enq: UL – contention’ from the PL/SQL time before we start worrying about how much actual work we are doing in PL/SQL.

If you are working with Oracle Text, you may also observe the “enq: UL – contention” wait event. DBMS_LOCK is used internally by ctxsys.drvdml.com_sync_index, so requesting UL is not a problem in itself. However, fragmentation of the CTXSYS index composition table increases the time it takes to acquire the UL enqueue, and that is when this wait shows up.

Bottom line: DBMS_LOCK should be used (carefully) in the application code only when there are no other options.

5 new core DBA features in Oracle Database 21c

In DBA, New features, Oracle database, Oracle internals, PL/SQL on January 8, 2021 at 08:14

If you are wondering what new things can come after so many releases of the Oracle Database – well, here is something for system DBAs.

Here are 5 new features related to Database Vault, Syslog, the new SYS_AUTO_STS_MODULE, and the 2 new DBA packages SYS.DBMS_FLASHBACK_ARCHIVE_MIGRATE & SYS.DBMS_ACTIVITY.

1. No need any longer to disable Oracle Database Vault in every container before the upgrade! We only need to grant the DV_PATCH_ADMIN role to SYS commonly before performing the upgrade. After the upgrade is complete the Database Vault controls work as before. Then we can revoke the DV_PATCH_ADMIN role from SYS. Alternatively, we can explicitly disable Oracle Database Vault in all containers before the upgrade and then after the upgrade explicitly enable Oracle Database Vault in all the containers.

2. A new concept in the Oracle Database is the Object Activity Tracking System.

Object Activity Tracking System (OATS) tracks the usage of various types of database objects. Usage includes operations such as access, data manipulation, or refresh.

The DBMS_ACTIVITY PL/SQL package contains functions and procedures for configuring Object Activity Tracking System (OATS) information collection and management. Data collected by OATS is used in analyses performed by automatic materialized views.

DBAs can use the DBMS_ACTIVITY.CONFIGURE procedure to control three OATS parameters within a specific database.

ACTIVITY_INTERVAL defines the interval between snapshots:

exec dbms_activity.configure('ACTIVITY_INTERVAL_MINUTES',30);

ACTIVITY_RETENTION_DAYS defines how long snapshots are saved:

exec dbms_activity.configure('ACTIVITY_RETENTION_DAYS',60);

ACTIVITY_SPACE_PERCENT sets how much of available space is reserved for snapshots:

exec dbms_activity.configure('ACTIVITY_SPACE_PERCENT',10);

And use these tables to monitor the activity:

ACTIVITY_CONFIG$
ACTIVITY_TABLE$
ACTIVITY_MVIEW$
ACTIVITY_SNAP_META$

3. SYSLOG destination for common Unified Audit Policies:

Certain predefined columns of unified audit records from common unified audit policies can be written to the UNIX SYSLOG destination.

To enable this new feature, we should set UNIFIED_AUDIT_COMMON_SYSTEMLOG, a new CDB-level init.ora parameter. This enhancement enables all audit records from common unified audit policies to be consolidated into a single destination. The feature is available only on UNIX platforms, not Windows (of course).

The new parameter has no default value; we should set both the facility_clause and the priority_clause values.
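
A minimal sketch, assuming the syslog facility local1 and priority warning; to be safe, set it in the SPFILE and restart the instance:

alter system set unified_audit_common_systemlog = 'local1.warning' scope=spfile;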

Only a subset of unified audit record fields are written to ensure that the audit record entries do not exceed the maximum allowed size for a SYSLOG entry (typically 1024 bytes).

4. SYS_AUTO_STS_MODULE is the new module for the Automatic SQL Tuning Sets. The feature has existed since Oracle 19.7, so technically it is not really a new 21c feature, although it is listed as such. Check Automatic SQL Tuning Sets (ASTS) 19c RU 19.7 Onwards (Doc ID 2686869.1)

If you try searching Google or MOS for SYS_AUTO_STS_MODULE, then most likely you will get similar to what I got from my search:

Clearly, Franck Pachot has already come across the auto SQL tuning set module – but that is expected, as he is one of the pioneers in new features and database research.

First, what is the Automatic SQL Tuning Set? In short, the auto STS is an automatic repository for historic SQL performance metrics and execution plans. Oracle now has a module handling all of that automatically.

A search for SYS_AUTO_STS_MODULE now also brings up Doc ID 2733254.1, but that note is about a hang when creating text indexes, even though SYS_AUTO_STS_MODULE pops up 3 times under SQL ordered by CPU Time.

Out of curiosity, you may run this SQL also against your 21c database and observe the actions the module has been taking:

select ACTION, count(*) 
from V_$SQLAREA_PLAN_HASH 
where module = 'SYS_AUTO_STS_MODULE' 
group by ACTION;

In a real production database (ADW), I noticed that after the module stopped appearing in the AWR report, the performance went back to normal:

5. DBMS_FLASHBACK_ARCHIVE_MIGRATE

A new PL/SQL package called DBMS_FLASHBACK_ARCHIVE_MIGRATE enables the migration of Flashback Data Archive enabled tables from a database on any release (in which the package exists) to any database on any release (that supports Flashback Data Archive).

The package has 3 procedures:

– EXPORT exports the given Flashback Archive enabled base tables, their history, and related tablespaces
– EXPORT_ANALYZE analyzes the given Flashback Archive enabled base tables, their history, and related tablespaces for self-containment using Transportable Tablespace checks
– IMPORT imports the Flashback Archive enabled base tables that were exported, their history, and related tablespaces

Here are the prerequisites:

Database version >= 11.2
If the database version is 11.2, the following conditions should be met:
shared_pool_size >= 500M
streams_pool_size >= 40M or
sga_target >= 2G

The DBMS_FLASHBACK_ARCHIVE_MIGRATE package must be compiled on both the source and target databases as SYS. The source file is located at ?/rdbms/admin/crefbamig.sql, which can be used to create or compile the package.
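
A minimal sketch of the compile step, to be repeated on the target database as well (? expands to ORACLE_HOME in SQL*Plus):

connect / as sysdba
@?/rdbms/admin/crefbamig.sql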

The export and import procedures must be executed as SYS. Since the package uses DBMS_DATAPUMP, DBMS_LOCK, DBMS_SYSTEM, DBMS_SQL and DBMS_SCHEDULER PL/SQL packages, their security models are also applicable.

More on 21c? Check:

1. Introducing Oracle Database 21c
2. A glimpse of what is new in Oracle Database 21c

The CLOUD_VERIFY_FUNCTION in Autonomous Database Cloud

In Autonomous, Cloud, DBA, New features, Oracle internals, Security and auditing on July 1, 2020 at 12:58

Choosing a hard-to-guess but easy-to-remember password is by far the easiest of all the hard tasks!

1. Database passwords and their complexity:

According to GDPR personal data must be processed “in a manner that ensures appropriate security of personal data including protection against unauthorized or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organizational measures.”

But GDPR does not define any requirements about passwords, such as password length, complexity, or how often passwords should be renewed. Regulation (EU) 2016/679 just stipulates that “a high level of protection of personal data” is required.

One way to enforce strong passwords on database users is by using the following rule:

A minimum of 1 lower case letter [a-z] and
a minimum of 1 upper case letter [A-Z] and
a minimum of 1 numeric character [0-9] and
a minimum of 1 special character: ~`!@#$%^&*()-_+={}[]|\;:",./?
Passwords must be at least N characters in length
N attempts to block login
Set password expiration to N days

Oracle follows the above-mentioned rules, and the Oracle script catpvf.sql provides several password functions for taking care of the verification process:

– ora_complexity_check
– verify_function
– verify_function_11G
– ora12c_verify_function
– ora12c_strong_verify_function
– ora12c_stig_verify_function

Note that the VERIFY_FUNCTION and VERIFY_FUNCTION_11G password verify functions are desupported in Oracle Database 20c. Also, in Oracle 20c, the IGNORECASE parameter for the orapwd file is desupported. All newly created password files are case-sensitive.

2. Non-autonomous databases

Now, how about those who prefer to use less complex passwords for database users? How do you bypass that problem first in a non-autonomous environment?

There are several ways to avoid the verification process by, say, the ora12c_verify_function:

– ALTER PROFILE DEFAULT LIMIT PASSWORD_VERIFY_FUNCTION NULL;
– Create a separate profile for the user (see the sketch after this list)
– Edit the catpvf.sql script to use the password verification function that you want, and then run the script to enable it – it is located in $ORACLE_HOME/rdbms/admin/catpvf.sql
– Modify “CREATE OR REPLACE FUNCTION ora12c_verify_function …” in utlpwdmg.sql, a file which is used to change the DEFAULT profile to use different password complexity functions – it is located in $ORACLE_HOME/rdbms/admin/utlpwdmg.sql (not in 20c though)
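
For the second option, a minimal sketch with hypothetical profile and user names:

create profile app_profile limit password_verify_function null;
alter user app_user profile app_profile;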

Note here that the Oracle documentation says clearly: “Do not modify the admin/catpvf.sql script or the Oracle-supplied password complexity functions. You can create your own functions based on the contents of these files.”

3. Autonomous databases

Next, how about Autonomous, where we have no access to the operating system layer?

The Oracle Autonomous Database Cloud offers a new function (unique to ADB) called CLOUD_VERIFY_FUNCTION. It is not available in the non-autonomous releases, not even in Oracle 20c.

The CLOUD_VERIFY_FUNCTION function is specified in the PASSWORD_VERIFY_FUNCTION attribute of the DEFAULT profile. This function internally calls ORA_COMPLEXITY_CHECK and checks the password entered according to the following specifications.

– The password must not contain the username
– The password must contain 1 or more lowercase characters
– The password must contain 1 or more uppercase characters
– The password must contain 1 or more digits
– The password must be at least 12 bytes and at most 60 bytes in length

Let us check first what the function CLOUD_VERIFY_FUNCTION looks like:

 
create or replace FUNCTION cloud_verify_function
(username varchar2,
password varchar2,
old_password varchar2)
RETURN boolean IS
   differ integer;
  db_name varchar2(40);
  i integer;
  reverse_user dbms_id;
  canon_username dbms_id := username;
  len integer := nvl (length(password), 0);
BEGIN
  IF (substr(username,1,1) = '"') THEN
    execute immediate 'begin dbms_utility.canonicalize(:p1,  :p2, 128); end;'
                        using IN username, OUT canon_username;
  END IF;
   IF NOT ora_complexity_check(password, 12, null, 1, 1, 1, null) THEN
     RETURN(FALSE);
   END IF;
-- Check password length
   IF len > 60 THEN
     raise_application_error(-20020, 'Password too long');
   END IF;
-- Check if the password contains the username
   IF regexp_instr(password, canon_username, 1, 1, 0, 'i') > 0 THEN
     raise_application_error(-20002, 'Password contains the username');
   END IF;
RETURN(TRUE);
END;
/

We cannot modify the scripts mentioned above, as we do not have OS access in ADB – maybe then we can change the default profile or create a new one? But in ATP, a user’s profile will be set to DEFAULT, and you are not allowed to create additional profiles. Autonomous Data Warehouse requires strong passwords – the password must meet the default password complexity rules.

The output below is identical in ADW and ATP:

Well, we are stubborn – so let us try in any case:

In ATP:

 
create profile DBA_PROFILE 
LIMIT PASSWORD_REUSE_MAX 10 PASSWORD_REUSE_TIME 30
ORA-01031: insufficient privileges

In ADW:

 
create profile DBA_PROFILE 
LIMIT PASSWORD_REUSE_MAX 10 PASSWORD_REUSE_TIME 30;
Profile DBA_PROFILE created.

alter profile DBA_PROFILE limit PASSWORD_VERIFY_FUNCTION null;
Profile DBA_PROFILE altered.

alter user admin profile DBA_PROFILE;
ORA-01031: insufficient privileges

create user app_user identified by abc profile DBA_PROFILE
ORA-28219: password verification failed for mandatory profile
ORA-20000: password length less than 12 bytes

create user app_user identified by Exadataa2020 profile DBA_PROFILE;
User APP_USER created.

 
alter user app_user identified by abc
ORA-28219: password verification failed for mandatory profile
ORA-20000: password length less than 12 bytes

Well, the password verify function is still used although we set the app_user’s profile to DBA_PROFILE.

Conclusion: there is no way to modify or bypass the CLOUD_VERIFY_FUNCTION in Autonomous Cloud.

In 20c, the script utlpwdmg.sql has been modified: it no longer contains the creation of the verification functions. I really don’t know if that is intentional.

 
[oracle@julian admin]$ cat utlpwdmg.sql
...

-- This script sets the default password resource parameters
-- This script needs to be run to enable the password features.
-- However the default resource parameters can be changed based
-- on the need.
-- A default password complexity function is provided.

Rem *************************************************************************
Rem BEGIN Password Management Parameters
Rem *************************************************************************

-- This script alters the default parameters for Password Management
-- This means that all the users on the system have Password Management
-- enabled and set to the following values unless another profile is
-- created with parameter values set to different value or UNLIMITED
-- is created and assigned to the user.

ALTER PROFILE DEFAULT LIMIT
PASSWORD_LIFE_TIME 180
PASSWORD_GRACE_TIME 7
PASSWORD_REUSE_TIME UNLIMITED
PASSWORD_REUSE_MAX  UNLIMITED
FAILED_LOGIN_ATTEMPTS 10
PASSWORD_LOCK_TIME 1
INACTIVE_ACCOUNT_TIME UNLIMITED
PASSWORD_VERIFY_FUNCTION ora12c_verify_function;

/**
The below set of password profile parameters would take into consideration
recommendations from Center for Internet Security[CIS Oracle 11g].

ALTER PROFILE DEFAULT LIMIT
PASSWORD_LIFE_TIME 90
PASSWORD_GRACE_TIME 3
PASSWORD_REUSE_TIME 365
PASSWORD_REUSE_MAX  20
FAILED_LOGIN_ATTEMPTS 3
PASSWORD_LOCK_TIME 1
PASSWORD_VERIFY_FUNCTION ora12c_verify_function;
*/

/**
The below set of password profile parameters would take into
consideration recommendations from Department of Defense Database
Security Technical Implementation Guide[STIG v8R1].

ALTER PROFILE DEFAULT LIMIT
PASSWORD_LIFE_TIME 60
PASSWORD_REUSE_TIME 365
PASSWORD_REUSE_MAX  5
FAILED_LOGIN_ATTEMPTS 3
PASSWORD_VERIFY_FUNCTION ora12c_strong_verify_function;
*/

Rem *************************************************************************
Rem END Password Management Parameters
Rem *************************************************************************
[oracle@julian admin]$

On a final note: the hashed values of the passwords in 20c can still be found in the data dictionary – look into the SPARE4 column of the SYS.USER$ table. It is similar to 19c; read this article for all the details.

Memoptimized Rowstore Fast Ingest in Oracle Database 19c

In Data, Database tuning, Databases, DBA, Oracle database, Oracle internals on November 25, 2019 at 15:53

“A filmmaker should never assume he’s superior to his subject. I often find that even the simplest topic remains an enigma. The best film portraits not only evoke that enigma but ingest it in a process that renders what’s invisible visible.” – Damian Pettigrew, a documentary SME

The same statement above holds for database SMEs using the new fast ingest process, because their data might at some point be neither visible nor invisible. From a DBA point of view, I mean…

Memoptimized Rowstore fast ingest is a new feature of Oracle 19c and is all about high-performance data streaming. Here is an example from Oracle which is ultra fast – arguably 25 million inserts per second on a two-socket server:

Memoptimized Rowstore was introduced in Oracle 18c and here are a few good blog posts on the subject:

Oracle Database 18c: Introduction to Memoptimized RowStore (Memory of a Free Festival) by Richard Foote
MemOptimized RowStore in Oracle Database 18c by Philipp Salvisberg

The fast ingest functionality of Memoptimized Rowstore enables fast data inserts into an Oracle Database from applications that ingest small but high volume transactions with almost no transactional overhead. The inserted data is temporarily buffered in the large pool before writing it to disk in bulk in a deferred, asynchronous manner.

Memoptimized Rowstore provides the following two functionalities:

Fast ingest:
– Fast ingest optimizes the processing of high-frequency, single-row data inserts into a database
– Fast ingest uses the large pool for buffering the inserts before writing them to disk, so as to improve data insert performance

Fast lookup:
– Fast lookup enables fast retrieval of data from the database for high-frequency queries
– Fast lookup uses a separate memory area in the SGA called the memoptimize pool for buffering the data queried from tables
– For using fast lookup, you must allocate an appropriate amount of memory to the memoptimize pool using the MEMOPTIMIZE_POOL_SIZE initialization parameter (see the example below)
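
For example (the 2G value is only an illustration; the parameter is static, so it needs an SPFILE change and an instance restart):

alter system set memoptimize_pool_size = 2G scope=spfile;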

The memoptimize pool contains two parts: (1) the memoptimize buffer area which occupies 75% of the memoptimize pool and (2) a hash index that is a non-persistent segment data structure having several units containing a number of hash buckets. The hash index occupies 25% of the memoptimize pool.

Memoptimize buffer area

There are two steps for using fast ingest for inserting data:

1. Enable a table for fast ingest: specify the MEMOPTIMIZE FOR WRITE clause in the CREATE TABLE or ALTER TABLE statement: Example on how to enable a table for Fast Ingest

2. Specify a hint for using fast ingest for an insert operation by specifying the MEMOPTIMIZE_WRITE hint in the INSERT statement: Example on specifying a hint for using Fast Ingest for data inserts
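
A minimal sketch of both steps on a hypothetical table:

create table sensor_readings (
  sensor_id  number,
  reading    number,
  read_time  date
) segment creation immediate
  memoptimize for write;

insert /*+ MEMOPTIMIZE_WRITE */ into sensor_readings
values (1, 36.6, sysdate);
commit;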

Note that a table can be configured for using both fast ingest and fast lookup, but these 4 points should always be considered:

– Because fast ingest buffers data in the large pool, there is a possibility of data loss in the event of a system failure. To avoid data loss, a client must keep a local copy of the data after performing inserts, so that it can replay the inserts in the event of a system failure before the data is written to disk. A client can use the DBMS_MEMOPTIMIZE package subprograms to track the durability of the inserts. After inserts are written to disk, a client can destroy its local copy of the inserted data.

Use the GET_APPLY_HWM_SEQID function to find the low high-water mark of sequence numbers of the records that have been successfully written to disk across all the sessions:

select DBMS_MEMOPTIMIZE.GET_APPLY_HWM_SEQID from dual;

Equivalently, use the GET_WRITE_HWM_SEQID function to return the high-water mark sequence number of the record that has been written to the large pool for the current session.

select DBMS_MEMOPTIMIZE.GET_WRITE_HWM_SEQID from dual;

– Queries do not read data from the large pool, hence data inserted using fast ingest cannot be queried until it is written to disk.

If you need to flush all the fast ingest data from the large pool to disk for the current session, just run:

exec DBMS_MEMOPTIMIZE.WRITE_END;

– Parent-child transactions must be synchronized to avoid errors. For example, foreign key inserts and updates of rows inserted into the large pool can return errors, if the parent data is not yet written to disk.

– Index operations are supported by fast ingest similarly to regular inserts. However, for fast ingest, the database performs index operations while writing data to disk, not while writing data into the large pool.

You can disable a table for fast ingest by specifying the NO MEMOPTIMIZE FOR WRITE clause in the ALTER TABLE statement. You can view the fast ingest data in the large pool using the V$MEMOPTIMIZE_WRITE_AREA view. You can also view and control the fast ingest data in the large pool using the subprograms of the packages DBMS_MEMOPTIMIZE and DBMS_MEMOPTIMIZE_ADMIN.
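
For example, on the hypothetical table from above:

alter table sensor_readings no memoptimize for write;
select * from v$memoptimize_write_area;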

The DBMS_MEMOPTIMIZE_ADMIN package has only one procedure WRITES_FLUSH which does not accept any input or output parameters.

exec DBMS_MEMOPTIMIZE_ADMIN.WRITES_FLUSH;

Here is one more example from Oracle:

As you can see, you can lose data using this new technique. Use it with caution, unless you can afford to lose the data or can re-insert it.

If you need to remove or populate fast lookup data for a certain table from the memoptimize pool, use DBMS_MEMOPTIMIZE.DROP_OBJECT and DBMS_MEMOPTIMIZE.POPULATE respectively.
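
A minimal sketch with hypothetical schema and table names:

exec DBMS_MEMOPTIMIZE.POPULATE(schema_name => 'SCOTT', table_name => 'EMP');
exec DBMS_MEMOPTIMIZE.DROP_OBJECT(schema_name => 'SCOTT', table_name => 'EMP');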

Check MemOptimized RowStore in Oracle Database 19c by Philipp Salvisberg for a good performance comparison between PL/SQL and Java Thin.

DBA Internals of the Oracle Autonomous Database

In Cloud, DBA, Oracle database, Oracle internals on March 28, 2018 at 07:11

First things first: the word autonomous comes from the Greek word autónomos, which means “with laws of one’s own, independent”.

After starting to use the Autonomous Data Warehouse Cloud, I must say I am pleasantly surprised to see something totally new, simple, uncomplicated and effortless, with no additional tuning or re-architecturing of the Oracle databases needed – the underlying Oracle Cloud Infrastructure is super fast and highly reliable.

1. You may connect to ADWC either by using the web interface, as you can see above, or as a client (I use SQL Developer 17.4), but for the client connection type choose Cloud PDB and not TNS. Your configuration file is a zip file, not the plain text file DBAs are used to.

2. You cannot create indexes on columns, you cannot partition tables, you cannot create materialized views, etc. Not even database links. You will get an error message: “ORA-00439: feature not enabled: Partitioning” or “ORA-01031: insufficient privileges”.

ADWC lets you create primary key, unique key and foreign key constraints in RELY DISABLE NOVALIDATE mode, which means that they are not enforced. These constraints can also be created in enforced mode, so technically you can create constraints as in a non-autonomous Oracle database.
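
For example, an unenforced primary key on a hypothetical table looks like this:

alter table sales add constraint sales_pk
  primary key (sale_id) rely disable novalidate;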

Note that in execution plans primary keys and unique keys will only be used for single table lookups by the optimizer; they will not be used for joins.

But … you can run alter system kill session!

3. The Oracle Autonomous Data Warehouse interface contains all the capabilities a non-professional database user needs to create their own data marts and run analytical reports on the data. You can even run AWR reports.

4. You do not have full DBA control, as Oracle (in my opinion) uses lockdown profiles in order to make the database autonomous. As the ADMIN user, you have 25 roles, including the new DWROLE, which you would normally grant to all ADWC users created by you. Among those 25 roles, you have GATHER_SYSTEM_STATISTICS, SELECT_CATALOG_ROLE, CONSOLE_ADMIN, etc. You have access to most DBA_ and GV_$ views – not to mention the 211 system privileges.

5. ADWC configures the database initialization parameters based on the compute and storage capacity you provision. ADWC runs on dozens of non-default init.ora parameters. For example:

parallel_degree_policy = AUTO
optimizer_ignore_parallel_hints = TRUE
result_cache_mode = FORCE
inmemory_size = 1G

You are allowed to change almost no init.ora parameters, except a few NLS_ and PLSQL_ parameters.

And the DB block size is 8K!

6. I can see 31 underscore parameters that do not have default values; here are a few:

_max_io_size = 33554432 (default is 1048576)
_sqlmon_max_plan = 4000 (default is 0)
_enable_parallel_dml = TRUE (default is FALSE)
_optimizer_answering_query_using_stats = TRUE (default is FALSE)

One of the few alter session commands you can run is “alter session disable parallel dml;”

7. Monitoring SQL is easy:

But there is no Oracle Tuning Pack: you did not expect to have that in an autonomous database, did you? There is no RAT, Data Masking and Subsetting Pack, Cloud Management Pack, Text, Java in DB, Oracle XML DB, APEX, Multimedia, etc.

8. Note that this is (for now) a data warehousing platform. However, DML is surprisingly fast too. I managed to insert more than half a billion records in just about 3 minutes:

Do not try to create nested tables, media or spatial types, or use LONG datatype: not supported. Compression is enabled by default. ADWC uses HCC for all tables by default, changing the compression method is not allowed.

9. The new Machine Learning interface is easy and simple:


You can create Notebooks where you have place for data discovery and analytics. Commands are run in a SQL Query Scratchpad.

10. Users of the Oracle Autonomous Database are allowed to analyze tables and thus influence the Cost Based Optimizer and hence performance – I think end users should not be able to influence the laws (“νόμος, nomos”) of the database.

Conclusion: The Autonomous Database is one of the best things Oracle have ever made. And they have quite a portfolio of products….

Finally, here is a live demo of the Oracle Autonomous Data Warehouse Cloud:

Twelve new DBA commands for Oracle 12c

In DBA, Init.ora, Oracle database, Oracle internals, RMAN on January 25, 2014 at 10:47

With more than 500 new features, Oracle Database 12c is designed to give us exactly what we need for easy database administration, consolidation, security and availability. And I cannot deny the complexity it brings into our pluggable world…

12_commands

But straight to the point. Here are 12 new commands for Oracle DBAs:

1. impdp … transform=disable_archive_logging:Y

The new TRANSFORM option DISABLE_ARCHIVE_LOGGING appended to the impdp command line will cause Oracle Data Pump to disable (almost completely) redo logging when loading data into tables and when creating indexes. Check this example.

2. select xmltransform(dbms_qopatch.get_opatch_lsinventory, dbms_qopatch.get_opatch_xslt) from dual;

With the new DBMS_QOPATCH package we can view the installed database patches:

GET_OPATCH_BUGS: Provides a bugs list for a patch
GET_OPATCH_COUNT: Provides the total number of installed patches
GET_OPATCH_LSINVENTORY: Returns whole opatch inventory
GET_OPATCH_PREQS: Provides prerequisite patches for a given patch

Check this example.

3. grant sysdba, sysoper, sysbackup, sysdg, syskm to c##julian identified by password container=all;

If we want to create a common user in the CDB, we must prefix it with c##, because the default common user prefix in 12.1.0 is c##. However, the _common_user_prefix parameter controls that prefix, and one option is to set it to NULL (see the sketch below).
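
A minimal sketch of that option (it is a hidden, static parameter, so change it only if you really must, via the SPFILE plus a restart):

alter system set "_common_user_prefix" = '' scope=spfile;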

SYSBACKUP, SYSDG and SYSKM are also database users. SYSDG is used for Oracle Data Guard operations (startup, shutdown and flashback database), while SYSKM is used for encryption key management operations (it can connect to the database even if the database is not open).

4. create table porter(owner varchar2(128), object_name varchar2(128), created date invisible);

porter

This “trick” does not work with temporary tables, external tables and cluster tables. Check this example.

5. alter system set max_string_size=extended;

extended_size

The new init.ora parameter MAX_STRING_SIZE (default is standard) controls the maximum size of the VARCHAR2, NVARCHAR2, and RAW data types. It can be increased from 4,000 to 32767 bytes. Check this example.
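
The change is one-way and the database must be in UPGRADE mode; a rough sketch for a non-CDB or a single PDB, run as SYSDBA:

shutdown immediate
startup upgrade
alter system set max_string_size = extended;
@?/rdbms/admin/utl32k.sql
shutdown immediate
startup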

6. alter database move datafile 'old_location' to 'new_location';

move_online

In 12.1.0, a data file can now be moved online while it is open and being accessed, all these are possible now:

– Renaming an Online Datafile
– Relocating an Online Datafile
– Copying an Online Datafile
– Relocating an Online Datafile and Overwriting an Existing File
– Relocating an Online Datafile to Oracle ASM
– Moving a File from One ASM Location to Another ASM Location

7. recover table emp until time 'sysdate-1';

table_recovery

RMAN enables us to recover one or more tables or table partitions to a specified point in time without affecting the remaining database objects. We can use previously-created RMAN backups to recover tables and table partitions to a specified point in time. Check this example.

8. alter session set row archival visibility = active;
alter session set row archival visibility = all;

In-Database Archiving enables us to archive rows within a table by marking them as inactive. These inactive rows are in the database and can be optimized using compression, but are not visible to an application. To manage In-Database Archiving for a table, we must enable ROW ARCHIVAL for the table and manipulate the ORA_ARCHIVE_STATE hidden column of the table. Check my old blog post.
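
A minimal sketch on a hypothetical table – ORA_ARCHIVE_STATE is '0' for active rows and any non-zero value marks a row as archived:

alter table dept row archival;

update dept
   set ora_archive_state = '1'
 where deptno = 40;
commit;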

9. alter session set temp_undo_enabled = true;

Check this example.

10. truncate table dept cascade;

TRUNCATE TABLE is enhanced in the following way: a new clause CASCADE allows you to recursively truncate child tables. Check this example.

11. create sequence temp_order_seq start with 100 increment by 10 session;

The new clause SESSION allows you to create a session sequence. Check this example.

12. $ORACLE_HOME/perl/bin/perl catctl.pl -n 8 -l $ORACLE_HOME/diagnostics catupgrd.sql

Oracle Database 12c introduces the Parallel Upgrade Utility (catctl.pl), which enables components that do not have to be upgraded in a specific order to be upgraded at the same time, taking full advantage of CPU capacity. The Parallel Upgrade Utility reduces the total amount of time it takes to perform an upgrade by loading the database dictionary in parallel using multiple SQL processes to upgrade the database.

-n specifies the number of processes to use for parallel operations (default = 4). The maximum is 8. Set this parameter to 0 to run catctl.pl in serial mode.

Oracle in-database archiving and invisible columns in 12c

In Database tuning, DBA, Oracle database, Oracle internals, SQL on September 30, 2013 at 01:15

“The true mystery of the world is the visible, not the invisible.” Oscar Wilde

invisible_woman

In-Database archiving is new to Oracle 12c. The idea behind it is to make rows invisible. This is very similar to the invisible column concept.

The two features, combined together, can make big data in the table look not that big to the application.

Check the demonstration from orafaq to see how it works. It is simple and rather trivial. Another example is given by Contract Oracle Limited.

What is really interesting is John Watson’s comment and example at the very end of the orafaq article. He shows that the predicate on the hidden column is applied as a filter that cannot be satisfied by index access alone. John suggests that “all indexes need to include the ora_archive_state column, perhaps appended to primary and unique key indexes and prepended to other indexes. If you don’t do this, you will find that many operations that could previously be completed without touching the table will require table scans”.
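
Following that suggestion, a sketch with hypothetical table and column names – the hidden column is simply appended to the index key:

create index clients_name_i on clients (last_name, ora_archive_state);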

The AMIS Technology blog pays attention to the very same thing:

The Oracle documentation says the following: “In-Database Archiving enables you to archive rows within a table by marking them as inactive. These inactive rows are in the database and can be optimized using compression, but are not visible to an application. The data in these rows is available for compliance purposes if needed by setting a session parameter.

With In-Database Archiving you can store more data for a longer period of time within a single database, without compromising application performance. Archived data can be compressed to help improve backup performance, and updates to archived data can be deferred during application upgrades to improve the performance of upgrades.”

The AMIS Technology Blog says: Note the emphasis on ‘without compromising performance’. It would seem that such in line archived records are skipped in a smart way during full table scans and other records retrieval operations.

Yong Woo shows in his short note 12c: In-Database Archival …good, but not good, with an example, how a ‘TABLE ACCESS BY INDEX ROWID’ can appear in the execution plan because of the new pseudo column.

What I would like to show is, in a way, also the opposite: that performance can be even better with Oracle In-Database Archiving. I have a table called clients and visibility is set to ACTIVE. Almost all rows of the table are “invisible”. Check how bytes and cost increase once I set the visibility to ALL. And of course, the full table scan!


SQL> select * from clients;

ep1


SQL> alter session set row archival visibility = all;

Session altered.

SQL> select * from clients;

ep2

It is interesting to note that virtual columns can be made invisible:


SQL> alter table clients add (vip_status number 
     generated always as (power(status,status)) virtual);

Table altered.

SQL> alter table clients modify (vip_status invisible);

Table altered.

INVISIBLE columns are not supported in external tables, cluster tables, or temporary tables and you cannot make a system-generated hidden column visible.

You cannot yet make a table invisible. If you start marking all columns of the table as invisible, at the very end you will get:

SQL> alter table clients modify (client_id invisible);
alter table clients modify (client_id invisible)
            *
ERROR at line 1:
ORA-54039: table must have at least one column that is not invisible

On a final note, I wonder what developers will do if they cannot insert a row into a table if a DBA adds an invisible column without a default value declared as NOT NULL 🙂

DBA tips for tuning Siebel on RAC and Exadata

In Database tuning, DBA, Exadata, Oracle database, Oracle internals, RAC, Siebel on April 21, 2013 at 11:27

More than 100 years ago, Mark Twain said that “the elastic heart of youth cannot be compressed into one constrained shape long at a time”.

In modern technology, “the elastic cloud can be compressed with the constraints of long datatypes”. Let me explain the meaning of my statement in terms of Siebel applications running on RAC and Exadata.

COTS_Exadata

Chained and migrated rows are often a big problem in a Siebel database. The issue of chained rows can be resolved by using a bigger block size, while migrated rows require segment reorganization with a higher PCTFREE (20, 30 or even 50). Chained tables in Siebel are often S_ORG_EXT, S_ASSET, S_ORDER_ITEM and S_ORDER_ITEM_OM. At least these will require using a bigger block size.

For LONG and CLOB details in a Siebel CRM, check CLOB Physical Type in Siebel.

But as Joel Goodman noted in How Smart is Your Smart Scan?, there is an Exadata specific situation that causes migrated rows.

When a row is updated in a Hybrid Columnar Compressed (HCC) table, it is migrated to another block in the segment that is managed using “OLTP compression”. Any HCC Compression Unit (CU) containing at least one migrated row will also cause the block containing that row to be accessed by the server using a “cell single block physical read”.

Look at the “Top 5 Timed Foreground Events” in a Siebel on Exadata OLTP DB using HCC:

cell single block physical read

The situation with HCC in OLTP is tricky for the following 3 reasons:

1. Every update of a record stored in HCC format results in a migrated row
2. The new row is stored in a new block that is marked for OLTP compression
3. Non-direct path inserts will be loaded into OLTP compressed blocks as opposed to HCC format

For the above reasons, mixing HCC with DML is not recommended. Partitioning can provide a mechanism for avoiding these issues since each partition can have its own storage format.

Only after decompressing the OLTP tables did the “cell single block physical read” event disappear and the performance improve significantly.

Another good tip for Siebel on RAC is the usage of a high number of hash partitions for hot indexes. High means 256 or more (it should be a power of 2).

Look at the situation with gc buffer busy waits before the high hash partitioning:

gc_buffer_busy_segments

As you can see the “gc buffer busy acquire” was the top event:

Siebel_hot_blocks_before

Looking at issues with these events in MOS might incline you to believe that this is a bug. However, this was not the case: after making the number of hash partitions 256 for some of the indexes, the database performance was back to normal:

Siebel_hot_blocks_after

Note that non-partitioned tables can also have hash partitioned indexes!
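
A sketch of such an index with hypothetical names (the key column and the partition count naturally depend on your own hot index):

create index siebel.s_evt_act_u1 on siebel.s_evt_act (row_id)
  global partition by hash (row_id) partitions 256;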

Another tip: SecureFiles is a feature introduced in Oracle Database 11g that is *specifically* engineered to deliver *high performance* for this type of unstructured data. I have seen several queries get even 10 times faster after migrating LOBs to SecureFiles.
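
A minimal sketch of such a migration (table, LOB column and tablespace names are placeholders; DBMS_REDEFINITION is the online alternative):

alter table siebel.s_order_item move lob (x_note) store as securefile
  (tablespace data_ts);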

About sequences: in RAC/Exadata, using the CACHE and NOORDER options together results in the best performance for a sequence. For example, in a Siebel database the S_DOCK_TXN_LOG_S sequence is used to generate the transaction ID used by the S_DOCK_TXN_LOG table. The default cache size for sequences in Oracle is 20. If you have thousands of concurrent users, Siebel/Oracle suggest you increase the cache size to at least 10000.
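
For example, following the suggestion above:

alter sequence siebel.s_dock_txn_log_s cache 10000 noorder;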

Last 2 things:

– For gathering Siebel database statistics use always the latest version of coe_siebel_stats.sql. As of now, the latest version is 11.4.4.5
– The script coe_siebel_profile.sql provides a list of columns that are not indexed but potentially are good candidates for indexing according to their usage by the Optimizer.

For additional tips, check Oracle’s white paper Siebel on Exadata!

And using Oracle Enterprise Manager makes all tuning so much easier!

OEM

Pseudo cursors and invisible SQL

In DBA, Oracle database, Oracle internals, SQL on February 17, 2013 at 11:39

Do you ever wonder why Oracle Enterprise Manager and AWR reports show the SQL text for some SQL IDs as not available?

SQL_text_not_available

The DBA forums and discussion groups mention that it might be because the SQLs have already been flushed from the shared pool, or ask how you can get and confirm the SQL ID for a query without seeing the text of the query.

The truth, however, is slightly different. These are Oracle internal, or so-called pseudo, cursors, which are just structures providing direct access to database objects while avoiding the overhead of writing an explicit SELECT statement. Most of the time these structures directly access the data dictionary tables, CLOBs and BLOBs – sometimes even NCHAR and NVARCHAR2 columns.

This is the reason why these internal cursors show in OEM and AWR with the legend “SQL Text Not Available”. There is just no SQL.

They are used internally within the database and cannot be described. At least not completely. However, here is what you can do as suggested by MOS 1298471.1:

create or replace view h$pseudo_cursor as
select Pseudo_cursor, sql_id,obj_id hex_obj_id
     ,obj# object_id, u.name owner, o.name object_name
from (select distinct 
             KGLNAOBJ Pseudo_cursor,kglobt03 sql_id
        ,substr(KGLNAOBJ
               ,instr(KGLNAOBJ,'_',1,3)+1
               ,instr(KGLNAOBJ,'_',1,4)-instr(KGLNAOBJ,'_',1,3)-1) obj_id 
       ,(case when 
         replace(translate(substr(upper(KGLNAOBJ)
                                 ,instr(KGLNAOBJ,'_',1,3)+1
                                 ,instr(KGLNAOBJ,'_',1,4)
                                  -instr(KGLNAOBJ,'_',1,3)-1)
                          ,'0123456789ABCDEF','................')
                ,'.') is null then 'Y' else 'N' end) is_safe_to_compare
            from x$kglob) k
   , obj$ o, user$ u
where obj#=decode(is_safe_to_compare,'Y',to_number(obj_id,'xxxxxxxxxx'),0)
   and o.owner#=u.user#;

Now, we can try to find out more about our invisible SQL:

col hex_obj_id  format a10
col pseudo_cursor format a30
col owner format a20
col object_name for A22
select * from h$pseudo_cursor where sql_id='grv54uyqkw2yc';

pseudo_cursors

The second query shows us the pseudo PL/SQL used by Oracle internally. Notice the missing value for SQL ID!

Now, here is a question: how do you tune pseudo cursors? If you run STA (SQL Tuning Advisor), the error message you get will be “The SQL has aged out of the cursor cache”. OEM will notify you with “The SQL statement is not available”:

sql_text_not_available2

So the bottom line is: if you see messages about the SQL text not being available, it is most likely due to Oracle internally accessing LOBs or its own tables.