More than 100 years ago, Mark Twain said that “the elastic heart of youth cannot be compressed into one constrained shape long at a time”.
In modern technology, “the elastic cloud can be compressed with the constraints of long datatypes”. Let me explain what I mean in the context of Siebel applications running on RAC and Exadata.
Chained and migrated rows are often a big problem in a Siebel database. Chained rows can be addressed with a bigger block size, while migrated rows require segment reorganization with a higher PCTFREE (20, 30 or even 50). Frequently chained tables in Siebel are S_ORG_EXT, S_ASSET, S_ORDER_ITEM and S_ORDER_ITEM_OM; at a minimum, these will require a bigger block size.
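A sketch of such a reorganization (the tablespace name, datafile location and sizes are assumptions for illustration):

```sql
-- A 16 KB block tablespace needs a matching buffer cache configured first:
ALTER SYSTEM SET db_16k_cache_size = 256M SCOPE=BOTH;

CREATE TABLESPACE siebel_16k
  DATAFILE '+DATA' SIZE 10G AUTOEXTEND ON
  BLOCKSIZE 16K;

-- Reorganize a chained/migrated table into the bigger block size,
-- leaving more free space per block for future row growth:
ALTER TABLE siebel.s_org_ext MOVE TABLESPACE siebel_16k PCTFREE 30;

-- MOVE leaves the table's indexes UNUSABLE; generate the rebuilds:
SELECT 'ALTER INDEX ' || owner || '.' || index_name || ' REBUILD;'
  FROM dba_indexes
 WHERE table_owner = 'SIEBEL'
   AND table_name  = 'S_ORG_EXT'
   AND status      = 'UNUSABLE';
```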
For LONG and CLOB details in a Siebel CRM, check CLOB Physical Type in Siebel.
But as Joel Goodman noted in How Smart is Your Smart Scan?, there is an Exadata specific situation that causes migrated rows.
When a row in a Hybrid Columnar Compressed (HCC) table is updated, it is migrated to another block in the segment, and that block is managed using OLTP compression. Any HCC Compression Unit (CU) containing at least one migrated row will also cause the block containing that row to be accessed by the server using a “cell single block physical read”.
Look at the “Top 5 Timed Foreground Events” in a Siebel on Exadata OLTP DB using HCC:
The situation with HCC in OLTP is tricky for the following 3 reasons:
1. Every update of a record stored in HCC format results in a migrated row
2. The new row is stored in a new block that is marked for OLTP compression
3. Non-direct path inserts will be loaded into OLTP compressed blocks as opposed to HCC format
For the above reasons, mixing HCC with DML is not recommended. Partitioning can provide a mechanism for avoiding these issues since each partition can have its own storage format.
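A minimal sketch of that partitioning approach (table and partition names are assumptions): cold, read-mostly partitions get HCC, while the partition still receiving DML stays uncompressed.

```sql
-- Historical partitions in HCC, the active partition left uncompressed
-- so normal OLTP DML does not produce migrated rows there.
CREATE TABLE activity_hist (
  row_id    VARCHAR2(15),
  created   DATE,
  payload   VARCHAR2(4000)
)
PARTITION BY RANGE (created) (
  PARTITION p2011 VALUES LESS THAN (DATE '2012-01-01')
    COMPRESS FOR QUERY HIGH,   -- HCC: no updates expected here
  PARTITION p2012 VALUES LESS THAN (DATE '2013-01-01')
    COMPRESS FOR QUERY HIGH,
  PARTITION p_current VALUES LESS THAN (MAXVALUE)
    NOCOMPRESS                 -- active partition: safe for DML
);
```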
Only after decompressing the tables did the “cell single block physical read” event disappear and performance improve significantly.
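Decompressing a table is a segment rewrite; one way to sketch it (table name taken from the examples above, assuming an offline maintenance window rather than online redefinition):

```sql
-- Rewrite the segment without compression:
ALTER TABLE siebel.s_asset MOVE NOCOMPRESS;

-- MOVE leaves the indexes UNUSABLE; generate the rebuild statements:
SELECT 'ALTER INDEX ' || owner || '.' || index_name || ' REBUILD;'
  FROM dba_indexes
 WHERE table_owner = 'SIEBEL'
   AND table_name  = 'S_ASSET'
   AND status      = 'UNUSABLE';
```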
Another good tip for Siebel on RAC is to use a high number of hash partitions for hot indexes. High means 256 or more (and it should be a power of 2).
Look at the situation with gc buffer busy waits before the high hash partitioning:
As you can see, “gc buffer busy acquire” was the top event:
Searching MOS for issues with these events might incline you to believe that this is a bug. It was not: after increasing the number of hash partitions for some of the indexes to 256, database performance was back to normal:
Note that non-partitioned tables can also have hash-partitioned indexes!
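A sketch of a hash-partitioned global index on a hot table (the index name, column and tablespace here are assumptions; an existing index would have to be dropped and recreated this way):

```sql
-- 256 hash partitions spread the hot right-hand-side index blocks
-- across many segments, reducing gc buffer busy contention on RAC:
CREATE INDEX siebel.s_evt_act_p1 ON siebel.s_evt_act (row_id)
  GLOBAL PARTITION BY HASH (row_id)
  PARTITIONS 256
  TABLESPACE siebel_idx;
```

This works whether or not the underlying table is itself partitioned.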
Another tip: SecureFiles, a feature introduced in Oracle Database 11g, is *specifically* engineered to deliver *high performance* for unstructured (LOB) data. I have seen several queries get as much as 10 times faster after migrating LOBs to SecureFiles.
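One way to migrate an existing BasicFiles LOB to SecureFiles is a table MOVE (the table, column and tablespace names are assumptions; SecureFiles requires an ASSM tablespace):

```sql
ALTER TABLE siebel.s_doc_att MOVE
  LOB (file_data) STORE AS SECUREFILE (
    TABLESPACE siebel_lob
    CACHE                -- cache LOB reads in the buffer cache
    COMPRESS MEDIUM      -- optional; needs the Advanced Compression option
  );
```

As with any MOVE, the table's indexes become UNUSABLE and must be rebuilt afterwards.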
About sequences: on RAC/Exadata, using the CACHE and NOORDER options together gives the best sequence performance. For example, in a Siebel database the S_DOCK_TXN_LOG_S sequence generates the transaction IDs used by the S_DOCK_TXN_LOG table. The default sequence cache size in Oracle is 20; if you have thousands of concurrent users, Siebel and Oracle suggest increasing the cache size to at least 10000.
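Applied to the sequence named above, that is simply:

```sql
-- NOORDER is the default, but is spelled out here for clarity on RAC:
ALTER SEQUENCE siebel.s_dock_txn_log_s CACHE 10000 NOORDER;
```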
Last 2 things:
- For gathering Siebel database statistics, always use the latest version of coe_siebel_stats.sql. As of now, the latest version is 18.104.22.168
- The script coe_siebel_profile.sql provides a list of columns that are not indexed but potentially are good candidates for indexing according to their usage by the Optimizer.
For additional tips, check Oracle’s white paper Siebel on Exadata!
And using Oracle Enterprise Manager makes all tuning so much easier!