
Migrating databases with several database links

In Cloud, Consolidation, Databases, DBA, Oracle database, Replication on April 1, 2021 at 09:08

In a couple of recent database migration cases, one of the main questions raised was how to figure out all outgoing and incoming database links, as they have to be modified after the massive migrations.


Finding the outgoing database links is simple: DBA_DB_LINKS describes all database links in the database, and this view has been part of the database (at least) since 7.3.4.

The tricky part is how to find all incoming database links, at least before 12.2, where the new view DBA_DB_LINK_SOURCES shows information about the source databases that have opened database links to the local database.
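For example, the outgoing links, and (on 12.2 and later) the incoming link sources, can be listed with something as simple as the sketch below:

-- outgoing database links (standard columns of DBA_DB_LINKS)
select owner, db_link, username, host, created
  from dba_db_links
 order by owner, db_link;

-- incoming database links on 12.2 and later (the exact column list of
-- DBA_DB_LINK_SOURCES may differ slightly between releases)
select *
  from dba_db_link_sources;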

So, how about the databases that are version 12.1 and below?

An Oracle community discussion on the MOS DBA forum gives several ideas:

Option 1: Bruno suggests that we “start from the listener logfile; with some “awk/sed/vi” work it should be possible to extract the list of “origins” of the connections… -> From this list, identify the database servers -> Search database links on relevant databases on these servers”.

This might work, but it could be rather tedious if there are hundreds of different servers.

Option 2: Brian suggests “to query V$SESSION to see active sessions from the other database server. Hint…look at the MACHINE column to see if it matches the other database server name. Querying V$SESSION will only work if the link is open when you query it. As such, you may want to add an AFTER LOGON trigger which writes an audit entry to a table if the connection is from that database server.”
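A minimal sketch of that V$SESSION check (the machine name below is just a placeholder for the remote database server):

select sid, serial#, username, machine, program
  from v$session
 where machine = 'remote-db-server.example.com';  -- placeholder remote server name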

If you create a logon trigger to record all incoming connections via database link, note that in 11g you can do that using the value of sys_context('USERENV','DBLINK_INFO'), which gives us all the information. But check first Doc ID 2593966.1, as there is Bug 18974508: sys_context('userenv', 'dblink_info') returns incomplete information.
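Here is a minimal sketch of such a trigger on 11g and later (the audit table and trigger names are made up, and it has to be created by a suitably privileged user):

-- made-up audit table for incoming database link connections
create table dblink_logon_audit (
  logon_time   date,
  username     varchar2(128),
  machine      varchar2(128),
  dblink_info  varchar2(4000)
);

create or replace trigger trg_dblink_logon_audit
after logon on database
begin
  -- DBLINK_INFO is only populated for sessions arriving over a database link
  if sys_context('USERENV', 'DBLINK_INFO') is not null then
    insert into dblink_logon_audit
    values (sysdate,
            sys_context('USERENV', 'SESSION_USER'),
            sys_context('USERENV', 'HOST'),
            sys_context('USERENV', 'DBLINK_INFO'));
  end if;
exception
  when others then null;  -- never block logons because of the audit insert
end;
/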

But before 10g there is no DBLINK_INFO, so we must use x$k2gte:

 
select username, osuser, status, sid, serial#, machine,
       process, terminal, program
  from v$session
 where saddr in (select k2gtdses from sys.x$k2gte);

The above is documented in Doc ID 332326.1: How to identify a session started by a remote distributed transaction? The fixed table x$k2gte contains the 2PC (two-phase commit) global transaction entries. The column k2gtdses in x$k2gte holds the session state object, and this can be mapped to the saddr column of v$session.

But as explained by Mark, the problem is that until the logon trigger finishes, the remote database link session is not considered to exist; only upon successful session connection does Oracle go and update the related facts about the session. Oracle does not guarantee read consistency on the V$ views, and the V$ views are based on X$ tables, which are really program storage areas that get updated at various points in the logic. It is therefore possible that a logon trigger may not work in this specific case. An alternative approach would be to run a process at regular intervals that just snapshots what is out there and records new remote queries. After all, you really only need one capture per remote source, whether you care only about database links or about each client server.

One of the top database experts, Mariami Kupatadze, gave us a very elegant way of how to find remote sessions executing over a database link using x$k2gte, x$ktcxb, x$ksuse and v$session_wait in a single SQL statement.

A more detailed version called Identifying database link usage was written by John Hallas in 2015.

Long story short: for databases from 7.3 up to 12.1, create a job capturing the distributed transactions based on the script given in Doc ID 104420.1 "Script to show Active Distributed Transactions". You can modify the script if more than just the active remote transactions need to be captured. For 12.2 and later, just use the view DBA_DB_LINK_SOURCES.
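As a rough sketch of such a capture job (table and job names are made up, it is based on the x$k2gte query above rather than on the MOS script itself, and it needs to run from an account with access to the X$ fixed table and V$SESSION):

-- empty history table with the same shape as the capture query
create table dblink_session_history as
  select sysdate as capture_time, s.username, s.osuser, s.machine, s.program
    from v$session s
   where 1 = 0;

begin
  dbms_scheduler.create_job(
    job_name        => 'CAPTURE_DBLINK_SESSIONS',
    job_type        => 'PLSQL_BLOCK',
    job_action      => q'[begin
                            insert into dblink_session_history
                              select sysdate, s.username, s.osuser, s.machine, s.program
                                from v$session s
                               where s.saddr in (select k2gtdses from sys.x$k2gte);
                            commit;
                          end;]',
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=5',
    enabled         => true);
end;
/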


 

Oracle Cloud Infrastructure (OCI) Database Management

In DBA, OCI, Oracle database on March 18, 2021 at 15:51

Early this year, Oracle announced the general availability of Oracle Cloud Infrastructure Database Management.

With Database Management Cloud Service, DBAs get a unified console for on-premises and cloud databases with lifecycle database management capabilities for monitoring, performance management, tuning and administration.

As of today, Database Management is available to use with external Oracle databases deployed on-premises. Support for Oracle Databases on Oracle Cloud Infrastructure will be coming soon.

The Database Management pricing is quite simple and clear.


This is a deep dive into the new OCI Database Management Service, and here are the basics:

1. From the main menu, go down and search for Database Management.
2. On the right side, go to “Oracle Databases” in order to enable Database Management (opens a new tab).
3. Start registering external pluggable databases – as of today we cannot register cloud databases.
4. Select the compartment and choose a database display name.
5. Select the database and click on Register.

The Database Management service comes with 4 main features:

  • Fleet monitoring and management
  • Database groups
  • Database summary
  • Custom PL/SQL jobs

Before you enable and use Database Management, you must review and complete some prerequisite tasks:

  • Install Management Agents for Database Management
  • Register the Oracle Database with the External Database service
  • Connect the Oracle Database to the External Database handle
  • Create Oracle Cloud Infrastructure IAM user groups

After you complete these prerequisite tasks, you must create policies to assign permissions to user groups in order to enable and use Database Management. 

In Oracle Cloud Infrastructure, policies provide the permissions required to allow users to work in certain ways with specific types of resources in a tenancy or a particular compartment.

A policy is written to determine who can perform what functions on which resources using the following basic syntax:

Allow <subject> to <verb> <resource> in <location>

For example, you can create the following policy to grant the DB-AEG user group the permission to enable Database Management for all External Databases in my tenancy JULIANDON:

Allow group DB-AEG to use external-database-family in JULIANDON

The external-database-family is the family resource-type for the External Database service, which includes the following individual resource-types:

  • external-container-databases
  • external-pluggable-databases
  • external-non-container-databases
  • external-database-connectors
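A policy can also be scoped to just one of these resource-types and to a single compartment rather than the whole tenancy; for instance (the compartment name below is only a placeholder):

Allow group DB-AEG to use external-pluggable-databases in compartment MyCompartment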

For more details, check the Oracle Cloud Infrastructure Documentation.

Enhanced Diagnosability of Oracle Database: What is an Attention Log?

In Database tuning, Databases, DBA, Oracle database, Oracle utilities on March 1, 2021 at 07:18

A correct diagnosis is three-fourths the remedy – Mahatma Gandhi

How true this statement is in the database world as well! Often it is more difficult to diagnose an issue than to fix it. In the past you had to go through the alert.log file to find the information that needs attention; this has now been simplified by the creation of a single attention.log file.

In the latest Oracle release, diagnosability of database issues is enhanced through a new attention log, as well as classification of information written to database trace files. The new attention log is written in a structured format (XML or JSON) that is much easier to process or interpret and only contains information that requires attention from an administrator.

The attention log is an externally modifiable file that contains information about critical and highly visible database events. We can use the attention log to quickly access information about critical events that need action.

So, where is the new log file introduced by Oracle Database 21c located? Notice that V$DIAG_INFO has a new entry called Attention Log:
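A quick way to see it is a query along these lines (the LIKE filter is just a sketch, to be safe about the exact entry name):

select name, value
  from v$diag_info
 where name like 'Attention%';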

The attention log contains a pre-determined, translatable series of messages, with one message for each event. There is one attention log for each database instance. There are two ways to view the contents of the attention log.

1. Open the attention.log file, which can be found in the $ORACLE_BASE/diag/rdbms/database_name/instance_id/log directory. Note that the documentation does not reflect the correct location of the attention.log file.

2. Query the V$DIAG_ALERT_EXT view using the required filters:
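For example, something along these lines (a sketch using the documented columns of the view) returns only the critical messages:

select originating_timestamp, message_type, message_level, message_text
  from v$diag_alert_ext
 where message_level = 1
 order by originating_timestamp;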

The filtering is done through message_level. A message_level of 1 corresponds to critical errors that need immediate action.

You can filter V$DIAG_ALERT_EXT on either MESSAGE_TYPE or MESSAGE_LEVEL. As you can see in the example above, I am filtering for only critical errors. Here is what the types and levels mean by number:

MESSAGE_TYPE:

1: UNKNOWN – essentially the NULL type
2: INCIDENT_ERROR – an error for some internal or unexpected reason; must be reported to Oracle Support
3: ERROR – an error of some kind has occurred; not really meaningful
4: WARNING – an action occurred or a condition was discovered that should be reviewed and may require action
5: NOTIFICATION – reports a normal action or event, for example “logon completed”
6: TRACE – output of a diagnostic trace

MESSAGE_LEVEL:

1: CRITICAL – critical errors
2: SEVERE – severe errors
8: IMPORTANT – important messages
16: NORMAL – normal messages
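To get a quick feel for how these types and levels are distributed in your own instance, a simple breakdown such as this one can help:

select message_type, message_level, count(*) as messages
  from v$diag_alert_ext
 group by message_type, message_level
 order by message_type, message_level;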

The attention log has the following attributes, not all documented yet:

– Attention ID: a unique identifier for the message
– Attention type: Error, Warning, Notification or Additional information
– Message text
– Notification
– Error
– Urgency: Immediate, Soon, Deferrable or Information
– Scope: Session, Process, PDB Instance, CDB Instance, CDB Cluster, PDB or CDB
– Info
– Cause
– Action
– Class: Clusterware Admin, CDB admin or PDB admin
– Time

Note that an attention log entry seldom contains all attributes.