Dontcheff

Archive for the ‘RAC’ Category

DBA tips for tuning Siebel on RAC and Exadata

In Database tuning, DBA, Exadata, Oracle database, Oracle internals, RAC, Siebel on April 21, 2013 at 11:27

More than 100 years ago, Mark Twain said that “the elastic heart of youth cannot be compressed into one constrained shape long at a time”.

In modern technology, “the elastic cloud can be compressed with the constraints of long datatypes”. Let me explain the meaning of my statement in terms of Siebel applications running on RAC and Exadata.


Chained and migrated rows are often a big problem in a Siebel database. Chained rows can be resolved by using a bigger block size, while migrated rows require segment reorganization with a higher PCTFREE (20, 30 or even 50). Frequently chained tables in Siebel are S_ORG_EXT, S_ASSET, S_ORDER_ITEM and S_ORDER_ITEM_OM; at the very least, these will require a bigger block size.
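A minimal sketch of both fixes, assuming OMF (db_create_file_dest) is configured; the tablespace name, cache size and PCTFREE value are illustrative, and remember that a MOVE invalidates the table's indexes:

alter system set db_16k_cache_size = 512M scope=both;
create tablespace siebel_16k datafile size 10G blocksize 16K;
-- cure migrated rows: raise PCTFREE, then rebuild the segment in the 16K tablespace
alter table siebel.s_org_ext pctfree 30;
alter table siebel.s_org_ext move tablespace siebel_16k;
-- all indexes on the moved table must be rebuilt afterwards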

For LONG and CLOB details in a Siebel CRM, check CLOB Physical Type in Siebel.

But as Joel Goodman noted in How Smart is Your Smart Scan?, there is an Exadata specific situation that causes migrated rows.

When a row in a Hybrid Columnar Compressed (HCC) table is updated, it is migrated to another block in the segment that is managed using OLTP compression. Any HCC Compression Unit (CU) containing at least one migrated row will also cause the block containing that row to be accessed by the server using a "cell single block physical read".
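Whether rows of an HCC table have already been migrated to OLTP-compressed blocks can be sampled with DBMS_COMPRESSION; a sketch with hypothetical schema and table names:

select decode(dbms_compression.get_compression_type('SIEBEL', 'S_ASSET', rowid),
              1, 'NOCOMPRESS',
              2, 'OLTP',
              4, 'HCC QUERY HIGH',
              8, 'HCC QUERY LOW',
             16, 'HCC ARCHIVE HIGH',
             32, 'HCC ARCHIVE LOW') as compression_type
  from siebel.s_asset
 where rownum <= 10;  -- sample a handful of rows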

Look at the “Top 5 Timed Foreground Events” in a Siebel on Exadata OLTP DB using HCC:

[AWR screenshot: "cell single block physical read" tops the Top 5 Timed Foreground Events]

The situation with HCC in OLTP is tricky for the following 3 reasons:

1. Every update of a record stored in HCC format results in a migrated row
2. The new row is stored in a new block that is marked for OLTP compression
3. Non-direct path inserts will be loaded into OLTP compressed blocks as opposed to HCC format

For the above reasons, mixing HCC with DML is not recommended. Partitioning can provide a mechanism for avoiding these issues since each partition can have its own storage format.
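A sketch of that partitioning approach, with hypothetical names: read-mostly historical partitions keep HCC, while the actively updated partition stays uncompressed:

create table siebel.s_order_item_hist (
  row_id  varchar2(15),
  created date
)
partition by range (created) (
  partition p_2011 values less than (date '2012-01-01') compress for query high,
  partition p_2012 values less than (date '2013-01-01') compress for query high,
  partition p_live values less than (maxvalue) nocompress
);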

Only after decompressing the tables did the "cell single block physical read" event disappear and the performance improve significantly.
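The decompression itself can be as simple as rebuilding the affected segments without compression (hypothetical names; again, the indexes must be rebuilt after the MOVE):

alter table siebel.s_asset move nocompress parallel 16;
alter index siebel.s_asset_p1 rebuild parallel 16;  -- repeat for every index on the table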

Another good tip for Siebel on RAC is to use a high number of hash partitions for hot indexes. High means 256 or more (and it should be a power of 2).

Look at the situation with gc buffer busy waits before the high hash partitioning:

[Screenshot: segments suffering gc buffer busy waits]

As you can see, "gc buffer busy acquire" was the top event:

[Screenshot: hot blocks before hash partitioning the indexes]

Looking up these events in MOS might incline you to believe that this is a bug. However, that was not the case: after making the number of hash partitions for some of the indexes 256, database performance was back to normal:

[Screenshot: hot blocks after hash partitioning the indexes]

Note that non-partitioned tables can also have hash-partitioned indexes!
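A minimal sketch of such an index: a global hash-partitioned index with 256 partitions on a non-partitioned table (all names are hypothetical):

create index siebel.s_evt_act_hot_idx on siebel.s_evt_act (row_id)
  global partition by hash (row_id) partitions 256
  parallel 16;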

Another tip: SecureFiles is a feature introduced in Oracle Database 11g that is *specifically* engineered to deliver *high performance* for unstructured data such as LOBs. I have seen several queries get up to 10 times faster after migrating LOBs to SecureFiles.
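A migration sketch, assuming an ASSM tablespace and hypothetical table and column names (note that COMPRESS on SecureFiles requires the Advanced Compression option):

alter table siebel.s_note move lob (note_text)
  store as securefile (compress medium cache);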

About sequences: in RAC/Exadata, using the CACHE and NOORDER options together gives the best performance for a sequence. For example, in a Siebel database the S_DOCK_TXN_LOG_S sequence is used to generate the transaction ID used by the S_DOCK_TXN_LOG table. The default cache size for sequences in Oracle is 20. If you have thousands of concurrent users, Siebel/Oracle suggest increasing the cache size to at least 10000.
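For example (the schema name is illustrative; the sequence name comes straight from Siebel):

alter sequence siebel.s_dock_txn_log_s cache 10000 noorder;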

Last 2 things:

– For gathering Siebel database statistics, always use the latest version of coe_siebel_stats.sql. As of now, the latest version is 11.4.4.5.
– The script coe_siebel_profile.sql provides a list of columns that are not indexed but potentially are good candidates for indexing according to their usage by the Optimizer.

For additional tips, check Oracle’s white paper Siebel on Exadata!

And using Oracle Enterprise Manager makes all tuning so much easier!



Bloom filters for DBAs

In Database tuning, DBA, Exadata, Oracle database, RAC on August 28, 2012 at 19:59

Lady Bird Johnson said: "Where flowers bloom, so does hope." I would rephrase: where Bloom filters bloom, so does database performance.

A Bloom filter is a probabilistic algorithm for doing existence tests in less memory than a full list of keys would require. In other words, a Bloom filter is a method for representing a set of n elements (also called keys) to support membership queries. John Rose wrote a very detailed blog article about Bloom filters. Another good one is by Christian Antognini entitled simply Bloom Filters.

A number of underscore (hidden) initialization parameters are related to Bloom filters.

The Oracle database makes use of Bloom filters in the following 4 situations:

– To reduce data communication between slave processes in parallel joins: mostly in RAC
– To implement join-filter pruning: in partition pruning, the optimizer analyzes FROM and WHERE clauses in SQL statements to eliminate unneeded partitions when building the partition access list
– To support result caches: when you run a query, Oracle will first see if the results of that query have already been computed and cached by some session or user, and if so, it will retrieve the answer from the server result cache instead of gathering all of the database blocks
– To filter members in different cells in Exadata: Exadata performs joins between large tables and small lookup tables, a very common scenario for data warehouses with star schemas. This is implemented using Bloom filters to determine whether a row is a member of the desired result set.

You can identify Bloom pruning in a plan when you see :BF0000 in the Pstart and Pstop columns of the execution plan and PART JOIN FILTER CREATE in the Operation column:
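An illustrative example with hypothetical star-schema tables; in the DBMS_XPLAN output of such a join you would look for the two markers mentioned above:

select count(*)
  from times t, sales s
 where s.time_id = t.time_id
   and t.fiscal_year = 2012;

-- Expected plan fragments:
--   PART JOIN FILTER CREATE        :BF0000
--   PARTITION RANGE JOIN-FILTER    Pstart = :BF0000, Pstop = :BF0000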

Two hints can be used: px_join_filter and no_px_join_filter; and there are a couple of views for monitoring Bloom filters:

– v$sql_join_filter: information about Bloom filters; number of rows filtered out and probed by Bloom filters
– v$pq_tqstat: check reduced communication due to usage of Bloom filters
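A monitoring sketch against the first view, restricted to the current session acting as query coordinator:

select sql_plan_hash_value, filtered, probed, probed - filtered as rows_sent
  from v$sql_join_filter
 where qc_session_id = sys_context('userenv', 'sid');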

There are also two undocumented Oracle functions: SYS_OP_BLOOM_FILTER and SYS_OP_BLOOM_FILTER_LIST.

In 11gR2 execution plans you may see :BF0000 replaced by KEY(AP). This is due to new functionality that performs partition pruning based on multiple pruning descriptors combined with AND (AP = And Pruning).

In 11.2.0.2 and later, a SQL query on a table might not deliver any results. Unfortunately, it might not deliver any error message either. The workaround is to disable Bloom filters:

alter system set "_bloom_filter_enabled" = false scope=both;

Restrictions and remarks:

– For Exadata customers, I would suggest setting a low degree of parallelism on small tables and indexes in order to take advantage of Bloom filters.
– Bloom filters are not supported in partition-wise joins. This is documented as Bug 14325392 and is expected to be fixed in 12.1.0.
– About small tables with indexes: if Oracle goes for an index range scan of such a small index, it will be done serially rather than in parallel, and thus no Bloom filter will be used (just drop those small indexes).
– Queries using CURRENT_DATE do not use Bloom filters, while queries using SYSDATE do.
– Bloom filters are not used in DML statements: Bug 13801198 notes that a simple two-table join shows use of Bloom filter pruning when running as a query but not when running as part of an INSERT as SELECT. The performance delta is 4:24 vs 0:24.

The Business of Business is Business

In Bugs, Database tuning, DBA, Grid Control, Oracle database, RAC on July 11, 2011 at 21:58

Winston Churchill said: “Some regard private enterprise as if it were a predatory tiger to be shot. Others look upon it as a cow that they can milk. Only a handful see it for what it really is – the strong horse that pulls the whole cart.”

Have a glimpse at Nasdaq's article Upside to Oracle's Dominant Position in Database Software to double-check that Oracle is indeed the strong horse: "Oracle continues to maintain its leadership position in the database software market with a share of close to 50%, thanks to constant innovation in its database business and positive feedback from the Exadata line of servers."

According to Gartner's market share numbers for relational database management systems, Oracle leads worldwide RDBMS software market share and total software revenue, garnering 74.3% market share on Linux and 60.7% on Unix.

But for good horses, you need good bug control:

Let me show you an example: a perfectly well-working RAC database where, all of a sudden, you see "Other" activity in Grid Control.

Drilling down in EM, you witness the unorthodox event “latch: ges resource hash list”. Note that GES resources (GES = Global Enqueue Service) are accessed via a hash array where each resource is protected by a ges resource hash list child latch.
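A quick diagnostic sketch summing the statistics over all child latches of that type:

select name, count(*) as children, sum(gets) as gets, sum(misses) as misses, sum(sleeps) as sleeps
  from v$latch_children
 where name = 'ges resource hash list'
 group by name;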

So, what have we done wrong that we have the privilege of witnessing this horrible histogram below (doesn’t the one above look like the top of the head of Bart Simpson)?

Turns out it is Bug 11690639 – High enqueue activity results in “latch: ges resource hash list” waits (Doc ID 11690639.8). Question: is the document ID named after the bug number or is it vice versa?

This is just an example. Many DBAs are nowadays mildly frustrated with the daily routine of bug fixing, looking for workarounds and possible patches. While some years ago the first thing a DBA would do about a problem was to try to tune the database, improve the SQL, etc., now most DBAs first try to rule out the possibility of a bug.

Bugs are fixed but they re-appear. Fixing one bug can introduce another. Bugs bring systems down. You may have covered the infinite-availability SLA with RAC, GoldenGate, Active Data Guard, etc., but one small bug can cause such a severe performance problem that all your efforts and diagnostic tools count for nothing.

How many database experts talk about bugs at database events? How often do you see blog articles tagged with the word "bug"? Have you ever tried to look on the Internet for books on database bugs? Check out Martin Decker's ora-solutions.net! I enjoy the site a lot!

Oracle keeps all its bugs in a database called BugDB. It is a mission-critical one and was the first internal application in Oracle to be upgraded with edition-based redefinition (EBR). Check this one out: Online Application Upgrade of Oracle's Bug DB with EBR!

"Bug DBA" is probably not a title one would like to carry (unlike "performance tuning DBA"), but this phenomenon in the database world is significantly growing in importance: the faster the DBA spots the bug, the shorter the downtime, compared with the traditional way of opening an SR with MOS/Metalink and waiting for a (possible) resolution.

Bug spotting is of extreme importance nowadays, as hours and days of slow performance or downtime can be devastating for the enterprise. And the work of the bug buster does not finish with the identification of the bug: then (s)he must decide between recommending patching and using a workaround. And then it starts all over again…

The quality of the Oracle database product

In Bugs, DBA, Oracle database, RAC, Replication on January 4, 2011 at 10:19

Two major things have recently caught my attention. Both are strategic and of long-term importance for the Oracle database.

1. Real online patching and database upgrades are still happening mostly in PowerPoint presentations. They should not depend on database type, patch type, version, replication, etc., and they should be, in DBA terms, simple.

Top Google searches for Oracle database live upgrades relate to certification upgrades rather than database upgrades.

Does Oracle have zero-downtime maintenance? Have a look.

Is Oracle GoldenGate similar to Q Replication (QREP) in IBM's DB2?

2. Product quality of all database components is getting worse and worse all the time.

– Look at the 10.2.0.5 bug fixes: there are just too many of them!
– The following three components throw too many errors and bugs: all types of replication, RAC and Grid Control.
– ORA-600 pops up far too often nowadays!