Dontcheff

Archive for the ‘Replication’ Category

SQL Trace and X-ADG in the Oracle Autonomous Database

In Autonomous, DBA, OCI, Oracle database, Replication on October 6, 2021 at 09:12

Two features, very different in nature but equally useful, are now available in the Oracle Autonomous Database:

  1. SQL Tracing in Autonomous Database
  2. Cross-Region Autonomous Data Guard in ADB-S

Here is how to enable and use them:

SQL Trace in ADB:

First, you need a standard bucket, as SQL trace files are only supported with buckets created in the standard storage tier. Also, create an auth token (you can have at most 2 tokens) and do not use your OCI password when creating the credentials.

Next, create a credential for your Cloud Object Storage account. Note the full username below – do not simply use the one you log in to the console with.

BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'JULIANDON_CREDENTIAL',
    username => 'oracleidentitycloudservice/juliandon@yahoo.com', 
    password => 'generated_token'
);
END;
/

PL/SQL procedure successfully completed.

Afterwards, set the database property DEFAULT_LOGGING_BUCKET to specify the Cloud Object Storage URL of the bucket for the SQL trace files:

SET DEFINE OFF;
ALTER DATABASE PROPERTY SET 
   DEFAULT_LOGGING_BUCKET = 'https://objectstorage.eu-frankfurt-1.oraclecloud.com/n/juliandon/b/adbkofa/o/';

Database altered.

Next, specify the credentials to access the Cloud Object Storage. Note that although I am doing this as the ADMIN user, I still have to prefix the credential with ADMIN. Otherwise, you get an error message.

ALTER DATABASE PROPERTY SET DEFAULT_CREDENTIAL = 'ADMIN.JULIANDON_CREDENTIAL';

Database altered.

Before we can enable SQL trace, we set a client identifier and a module name, which the database uses to identify the saved SQL Trace files:

exec DBMS_SESSION.SET_IDENTIFIER('sqltrace_jd');

PL/SQL procedure successfully completed.

exec DBMS_APPLICATION_INFO.SET_MODULE('module_jmd', null);

PL/SQL procedure successfully completed.

ALTER SESSION SET SQL_TRACE = TRUE;

After running the SQLs, disable SQL tracing so that the collected data for the session is written to a table in your session and to a trace file in the bucket you configured when you set up SQL trace.

ALTER SESSION SET SQL_TRACE = FALSE;
ALTER DATABASE PROPERTY SET DEFAULT_LOGGING_BUCKET = '';

The SQL Trace facility writes the trace data collected in the session to Cloud Object Store in the following format:

default_logging_bucket/sqltrace/clientID/moduleName/sqltrace_numID1_numID2.trc
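To verify that the trace files actually land in the bucket, one option is to list the bucket contents with DBMS_CLOUD.LIST_OBJECTS. Here is a minimal sketch reusing the credential and bucket URL from above (the sqltrace/ filter simply matches the path format shown):

SELECT object_name, bytes
  FROM DBMS_CLOUD.LIST_OBJECTS(
         credential_name => 'JULIANDON_CREDENTIAL',
         location_uri    => 'https://objectstorage.eu-frankfurt-1.oraclecloud.com/n/juliandon/b/adbkofa/o/')
 WHERE object_name LIKE 'sqltrace/%';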

When you enable SQL Tracing, the same trace information that is saved to the trace file on Cloud Object Store is available in the SESSION_CLOUD_TRACE view in the session where the tracing was enabled.

SELECT trace FROM SESSION_CLOUD_TRACE ORDER BY row_number;

After you close the session, the data is no longer available in SESSION_CLOUD_TRACE.

DESC SESSION_CLOUD_TRACE

Name       Null? Type
---------- ----- ------------------------------
ROW_NUMBER       NUMBER
TRACE            VARCHAR2(32767)

Check Connor McDonald’s blog entitled SQL trace on your cloud database.

Cross-Region Autonomous Data Guard in ADB-S

Autonomous Data Guard provides a standby database instance either in a different availability domain of the same region or in a different region.

If you create the standby database in the current/local region and the primary instance becomes unavailable, Autonomous Database automatically switches the role of the standby database to primary and begins recreating a new standby database.

ADB currently supports up to 2 standby databases: a local one in the same region and an additional remote one, called cross-region.

So, with the new cross-region standby database, you can perform a manual failover to the standby database if the current region goes down.

A detailed blog by Nilay Panchal entitled Cross-Region Autonomous Data Guard – Your complete Autonomous Database disaster recovery solution! covers in detail how to create the remote standby database and how to manually switch over.

Note that each region has one or a few nearby paired regions in which a remote standby may be created. As you can see from the screenshot above, my tenancy in Frankfurt is subscribed to 3 remote regions in which I can create a remote standby.

It is important to know that ADB-S does not give us access to the standby databases, but after a switchover or failover, the database wallet downloaded in the primary database region can be used in the remote region.

It is extremely simple to manually switch over to the other region (in my case from Frankfurt to Zurich), just with a click of a button:

Simple and elegant!
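After reconnecting with the wallet following a switchover, a quick sanity check of where you have landed could look like this (a sketch, assuming V$DATABASE is queryable by your ADB user, as it is for ADMIN):

SELECT name, database_role, open_mode FROM v$database;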

Migrating databases with several database links

In Cloud, Consolidation, Databases, DBA, Oracle database, Replication on April 1, 2021 at 09:08

In a couple of recent database migration cases, one of the main questions raised was how to figure out all outgoing and incoming database links, as they have to be modified after the massive migrations.


Finding the outgoing database links is simple: DBA_DB_LINKS describes all database links in the database, and this view has been part of the database (at least) since 7.3.4.
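For example, listing all outgoing links together with their target hosts is a one-liner:

SELECT owner, db_link, username, host, created
  FROM dba_db_links
 ORDER BY owner, db_link;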

The tricky part is how to find all incoming database links, at least before 12.2, which introduced a new view called DBA_DB_LINK_SOURCES showing the source databases that have opened database links to the local database.

So, how about the databases that are version 12.1 and below?

An Oracle community discussion on the MOS DBA forum gives several ideas:

Option 1: Bruno suggests we “start from the listener logfile; with some “awk/sed/vi” work it should be possible to extract the list of “origins” of the connections… -> From this list, identify the database servers -> Search database links on relevant databases on these servers”.

This might work but could be rather tedious if there are hundreds of different servers.

Option 2: Brian suggests “to query V$SESSION to see active sessions from the other database server. Hint…look at the MACHINE column to see if it matches the other database server name. Querying V$SESSION will only work if the link is open when you query it. As such, you may want to add an AFTER LOGON trigger which writes an audit entry to a table if the connection is from that database server.”
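A minimal sketch of Brian’s idea follows; the machine name ‘srcdbhost’ is just a placeholder for the server hosting the remote database:

SELECT sid, serial#, username, machine, program, logon_time
  FROM v$session
 WHERE machine = 'srcdbhost';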

If you create a logon trigger to record all incoming connections via database links, note that in 11g you can do that using the value of sys_context(‘USERENV’,’DBLINK_INFO’), which gives us all the information. But first check Doc ID 2593966.1, as there is Bug 18974508: sys_context(‘userenv’, ‘dblink_info’) returns incomplete information.
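Here is a minimal sketch of such a logon trigger (the audit table and trigger names are made up, and keep the bug above in mind):

CREATE TABLE dblink_logon_audit (
  logon_time  DATE,
  db_user     VARCHAR2(128),
  dblink_info VARCHAR2(4000));

CREATE OR REPLACE TRIGGER trg_capture_dblink_logons
AFTER LOGON ON DATABASE
DECLARE
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  -- DBLINK_INFO is only populated for sessions arriving over a database link
  IF SYS_CONTEXT('USERENV', 'DBLINK_INFO') IS NOT NULL THEN
    INSERT INTO dblink_logon_audit
    VALUES (SYSDATE, SYS_CONTEXT('USERENV', 'SESSION_USER'),
            SYS_CONTEXT('USERENV', 'DBLINK_INFO'));
    COMMIT;
  END IF;
END;
/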

But before 10g, there is no DBLINK_INFO, so we must use x$k2gte:

 
select username, osuser, status, sid, serial#, machine, process, terminal, program
  from v$session
 where saddr in (select k2gtdses from sys.x$k2gte);

The above is documented in Doc ID 332326.1: How to identify a session started by a remote distributed transaction? The fixed table x$k2gte contains the 2PC global transaction entries. The column k2gtdses in x$k2gte holds the session state object, and this can be mapped to the saddr column of v$session.

But as explained by Mark, the problem is that until the trigger finishes, the remote database link session is not considered to exist, and only upon a successful session connection does Oracle go and update the related facts about the session. Oracle does not guarantee read consistency on the v$ views, and the v$ views are based on x$ tables, which are really program storage areas that get updated at various points in the logic. It is therefore possible that a logon trigger may not work in this specific case. An alternative approach would be to run a process at regular intervals that just snapshots what is out there and records new remote queries. After all, you really only need one capture per remote source, whether you care only about database links or about each client server.
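A minimal sketch of that snapshot approach follows. All names are made up, the job samples every 10 minutes, and it has to be created by a user with access to v$session and sys.x$k2gte:

-- Empty copy of the columns we want to keep per sample
CREATE TABLE dblink_session_snap AS
  SELECT SYSDATE AS sample_time, s.username, s.machine, s.program, s.sid, s.serial#
    FROM v$session s
   WHERE 1 = 0;

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'CAPTURE_INCOMING_DBLINKS',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN
                          INSERT INTO dblink_session_snap
                            SELECT SYSDATE, s.username, s.machine, s.program, s.sid, s.serial#
                              FROM v$session s
                             WHERE s.saddr IN (SELECT k2gtdses FROM sys.x$k2gte);
                          COMMIT;
                        END;',
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=10',
    enabled         => TRUE);
END;
/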

One of the top database experts, Mariami Kupatadze, gave us a very elegant way to find remote sessions executing over a database link using x$k2gte, x$ktcxb, x$ksuse and v$session_wait in a single SQL statement.

A more detailed version called Identifying database link usage was written by John Hallas in 2015.

Long story short: for databases from 7.3 to 12.1, create a job capturing the distributed transactions based on the script given in Doc ID 104420.1 “Script to show Active Distributed Transactions”. You can modify the script if you need to capture more than just the active remote transactions. For 12.2 and later, just use the view DBA_DB_LINK_SOURCES.
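On 12.2 and later, finding out where incoming links come from is then as simple as this (the exact column list varies slightly between releases, so I leave it open):

SELECT * FROM dba_db_link_sources;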


GoldenGate enters the Oracle database

In DBA, Golden Gate, MySQL, Oracle database, Replication, Sybase on June 1, 2011 at 00:02

A friend of mine claims that GoldenGate is a product that is supposed to be used and to exist outside the Oracle database. And if you run Oracle on a version below 11.2.0.2, the following query for GOLDENGATE objects will probably return “no rows selected”. But here is what happens with Oracle 11.2.0.2:
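The query is along these lines (my sketch, not necessarily the exact one): a search of the data dictionary for GoldenGate-related objects.

SELECT owner, object_name, object_type
  FROM dba_objects
 WHERE object_name LIKE '%GOLDENGATE%'
 ORDER BY owner, object_name;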

Interesting, right?

Note that recently Oracle GoldenGate 11.1.1.1 was released. For Oracle GoldenGate on Oracle versions 10gR2 and higher, Extract now manages log retention by default. With this new feature, the documentation says you should be aware of the following:

1. The upgrade will automatically register each of your Extract processes with the database, which will start retaining the logs based on the current recovery requirements for each process.

2. An Oracle Streams Capture process will be created for each Extract process and will have the same name as that process. These Capture processes are non-functional and can operate concurrently with other local Oracle Streams installations.

3. In order for the upgrade to succeed (on 11.2.0.2), you will need to assign the Extract user some additional privileges.

Note first that you need to commit after executing the procedure in the dbms_goldengate_auth package:

SQL> exec dbms_goldengate_auth.grant_admin_privilege('GGATE');
PL/SQL procedure successfully completed.
SQL> commit work;
Commit complete.

Note additionally that the privilege_type can be CAPTURE, APPLY or BOTH; both (= *) is the default. If you prefer that the GGATE user is also granted the select_catalog_role, you should run the command like this:

SQL> exec dbms_goldengate_auth.grant_admin_privilege('GGATE','*',TRUE);
PL/SQL procedure successfully completed.
SQL> commit work;
Commit complete.
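To verify what the grant has recorded, you can query the dictionary view mentioned in the P.S. below (a sketch; the exact column list depends on the version):

SELECT * FROM dba_goldengate_privileges;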

Oracle GoldenGate version 11.1.1.1 contains improved sequence support. To enable this support, you must install some database procedures that support the new FLUSH SEQUENCE command. I recommend that for details you read Samuel Oleleh’s Oracle Goldengate 11.1.1.1 Sequence Support article.

The (Oracle database related) new features of GoldenGate 11.1.1.1 are very well described in the following blog entries:

1. Oracle GoldenGate version 11.1.1.1 new features by Samuel Oleleh.
2. Oracle GoldenGate version 11.1.1.1 released by Miladin Modrakovic.

For all the MySQL users (are there any?), I will note that there is a new ALTLOGDEST option that was added to TRANLOGOPTIONS to specify the location of the MySQL log index file. This value overrides the database default location. Extract reads the log index file to find the binary log file that it needs to read.

For the Sybase users I will mention that the range of values for DBOPTIONS TDSPACKETSIZE has been changed. Additionally, two formerly internal parameters are now available for use in a TRANLOGOPTIONS statement:
– TRANLOGOPTIONS MAXREADSIZE specifies how many records Extract reads from the transaction log at one time. It can be used to improve performance.
– TRANLOGOPTIONS READQUEUESIZE specifies the internal queue size, in bytes, for transaction data. It can be increased to improve performance.

For Oracle GoldenGate installations please check:

1. Pythian’s Oracle GoldenGate Installation, Part 1
2. Pythian’s Oracle GoldenGate Installation, Part 2

As I hinted above, MySQL and GoldenGate do not, in my opinion, go hand in hand. But I would briefly like to mention that Oracle GoldenGate can be used together with Active Data Guard to achieve almost zero planned and/or unplanned downtime:

I will soon write more on live upgrades, zero downtime and 100% SLAs 🙂

P.S. As of today, Google returns only two pages with results for dba_goldengate_privileges; try for yourself to see if things have already changed:

Watson, IBM InfoSphere Streams and Oracle Streams

In DB2 database, Grid Control, Oracle database, Replication on February 24, 2011 at 19:30

Initially, my plan was to write only on Oracle Streams, but after Watson’s victory on Jeopardy, I got an interesting question from an ex-colleague of mine: Is this supercomputer HA? What if the database crashes and there is no source for answers? Funny, isn’t it? 🙂 After all, Watson is an example of an analytics-workload-optimized system, not an online sales platform.

But first, have a look at how Ken Jennings and Brad Rutter, the show’s two biggest winners, took on their database rival on Feb. 14-16, 2011. Note that according to the president of WABC, New York City’s ABC television affiliate, Monday’s broadcast of the IBM Watson/Jeopardy Challenge was the most watched show on any channel in the NY Metro area.

Watson is a mixture of 10 racks of IBM POWER 750 servers, Linux OS, 15 TB of RAM, 2880 processor cores and operates at 80 teraflops. The data is stored in a DB2 database. Another one of the 4 major technologies Watson is based on is IBM InfoSphere Streams. While IBM InfoSphere Streams enables continuous and super fast analysis of massive volumes of information, Oracle Streams enables continuous and very fast replication of massive volumes of information. Well, at least when it works and there are no errors. Say one like this:

Looking at the details of the error, what do we understand?

As much as we understand from the statistics:

My personal opinion is that Streams is as good as Advanced Replication was. After all, Oracle will not develop it any longer. Oracle bought GoldenGate and will try to use it for all data integration/replication solutions.

P.S. If only Sherlock Holmes had this type of Watson 🙂

The quality of the Oracle database product

In Bugs, DBA, Oracle database, RAC, Replication on January 4, 2011 at 10:19

Two major things have recently caught my attention. They are more strategic and of long-term importance for the Oracle database.

1. Real online patching and database upgrades are still happening mostly in PowerPoint presentations. They should not depend on database type, patch type, version, replication, etc., and should be, in DBA terms, simple.

Top Google searches for Oracle database live upgrades relate rather to certification upgrades, not database upgrades:

Does Oracle have zero-downtime maintenance? Have a look:

Is Oracle Golden Gate similar to QREP in IBM’s DB2?

2. Product quality of all database components is getting worse and worse all the time.

– Look at the 10.2.0.5 bug fixes: there are just too many!
– The following three components throw too many errors and bugs: all types of replication, RAC and Grid Control
– ORA-600 pops up too often nowadays!