Dontcheff

Archive for the ‘Exadata’ Category

Oracle Exadata Cloud Machine Q&A

In Cloud, Database options, DBA, Exadata, IaaS, Oracle database, Oracle Engineered Systems on March 28, 2017 at 07:25

“The computer industry is the only industry that is more fashion-driven than women’s fashion.” Larry Ellison

Amazon has no on-premises presence; it is cloud only. Microsoft has the Azure Stack, but Azure Stack Technical Preview 3 is being made available only as a Proof of Concept (POC): it must not be used as a production environment and should only be used for testing, evaluation and demonstration.

Oracle Database Exadata Public Cloud Machine, or Exadata Cloud Machine, or ExaCM in short, is a cloud-based Oracle database subscription service available on Oracle Exadata, and deployed in the customer or partner data center behind their firewall. This allows customers to subscribe to fully functional Oracle databases on Exadata, on an OPEX driven consumption model, using agile cloud-based provisioning, while the associated Exadata infrastructure is maintained by Oracle.

This blog post contains the top 10 (5 commercial and 5 technical) facts about the Exadata Cloud Machine:

1. On-premise licenses cannot be transferred to ExaCM.

2. The minimum commitment to both ExaCM and the Oracle Cloud Machine (OCM, see fact 5) is 4 years, and the minimum configuration is an Eighth Rack.

3. The subscription price for Oracle Database Exadata Cloud Machine X6 Eighth Rack is $40,000 per month ($2,500 x 16 OCPUs), and that includes all DB options/features, the Exadata software and the OEM DB packs.

4. Standalone products such as Oracle Secure Backup and Oracle GoldenGate are not included in the ExaCM subscription. Only the database options (such as RAC, In-Memory, Partitioning, Active Data Guard, etc.), the database OEM packs and the Exadata storage server software are included.

5. ExaCM requires an Oracle Cloud Machine to deploy the Exadata Cloud Control Plane (a separate subscription). The OCM subscription requires a minimum term commitment similar to that of ExaCM. If a customer already has an OCM, it can be leveraged to deploy the Exadata Control Plane at no extra cost. One OCM Model 288 can manage 6 ExaCM Full Racks (i.e. 24 ExaCM Quarter Racks or 12 ExaCM Half Racks). Theoretically, one OCM can support a much larger number of ExaCM full racks: about 50.

6. The 1/8th rack SKU is very similar to the on-premises 1/8th rack – i.e. a minimum configuration of 16 OCPUs (cores), 240 GB RAM per database server, 144 TB raw storage (42 TB usable) and 19.2 TB of flash. Compared to the Quarter Rack, it ships with less RAM, disk storage and flash; those will be field-installed if the customer chooses the 1/8th-to-Quarter-Rack upgrade. Note that this 1/8th rack gives customers an entry-level configuration similar to what exists in Exadata Cloud Service.

7. Hourly Online Compute Bursting is supported with ExaCM. The commercial terms are the same as in Exadata Cloud Service – i.e. a 25% premium over the metered rate, calculated on an hourly basis. Customers can scale up or down dynamically. Bursting does not kick in automatically based on load: bursting of OCPUs needs to be configured by customers as needed. Once the customer initiates bursting, the OCPU update is done dynamically without downtime. Customers are billed afterwards for the hours of bursting used. Price: $8.401 per OCPU per hour.

8. If the Cloud Control Plane is down, the availability of steady-state runtime operations is not affected. However, cloud-based management (e.g. the self-service UI and REST API access) will be impacted.

9. Access: the Exadata Cloud Machine compute nodes are each configured with a Virtual Machine (VM). You have root privilege for the Exadata compute node VMs, so you can load and run additional software on the Exadata compute nodes. However, you do not have administrative access to the Exadata infrastructure components, including the physical compute node hardware, network switches, power distribution units (PDUs), integrated lights-out management (ILOM) interfaces, or the Exadata Storage Servers, which are all administered by Oracle.

10. Patching and backups: you can produce a list of available patches using the exadbcpatchmulti command as follows:

# /var/opt/oracle/exapatch/exadbcpatchmulti -list_patches \
  -sshkey=/home/opc/.ssh/id_rsa \
  -oh=hostname1:/u01/app/oracle/product/12.2.0.1/dbhome_1

When you create a database deployment on Exadata Cloud Machine, you must choose from the following automatic backup configuration options:

– Remote Storage Only: uses remote NFS storage to store periodic full (RMAN level 0) backups and daily incremental backups, with a seven day cycle between full backups and an overall retention period of thirty days.
– None: no automatic backups are configured. Automatic backups cannot be configured later if you select the None option when you create a database deployment.

Useful links:

Oracle Exadata Cloud Machine Documentation
Oracle Database Exadata Cloud Machine Data Sheet
Features and Benefits of Oracle ExaCM
Oracle Database Exadata Cloud Machine Pricing
Creating an Exadata Cloud Machine Instance
Known Issues for Oracle Exadata Cloud Machine
Oracle SVP, Juan Loaiza, describes Oracle Database Exadata Cloud Machine

Oracle X5 vs BMW X5

In DBA, Exadata, Oracle Engineered Systems on February 12, 2015 at 13:48

Oracle Exadata X5 is the latest generation of Oracle Engineered Systems and can be compared to a BMW X5. The “equivalent” integrated systems of competitors fall into the category of a mid-class sedan – except for the price. You must be willing to pay less in order to get double and triple the performance with Exadata.


To get more details on the topic, it is worth reading CIO Angle’s Oracle declares war on EMC, VCE: “Oracle will sell its latest generation X5 Engineered Systems hardware, which is Oracle’s term for converged systems, separately from Oracle applications on Linux or Windows at a significantly lower price than market leader VCE’s comparable systems. Ellison announced list prices that are half of VCE’s discounted prices. And he said “We will negotiate. We will discount.”

Ellison also announced an all-flash array that’s part of X5 but sold as a stand-alone unit. Again, the list price is one-third of the price of what Oracle said is a comparable EMC XtremIO array. Significantly, he did not announce a stand-alone server or network switch, although these are also X5 components. This seems to indicate that Oracle is not going after Cisco Systems, at this time.”


Rather than reading, it is worth watching the following presentation by Larry Ellison: The Next Generation of Oracle Engineered Systems.


And here are some useful links:

1. Juan Loaiza – Oracle Next Generation Engineered Systems Launch – theCUBE

2. Gartner: New X5 Generation Will Bring Pricing Improvements to Oracle Exadata

3. Oracle aims to undercut competition with X5 generation of engineered systems

4. Oracle Debuting FS1 Series Flash Storage System

5. Oracle Introduces New Generation of Engineered Systems and New Pricing Strategy

6. Oracle Exadata X5 Data Sheet

7. Oracle Server X5-2 System Architecture


Exadata X5 with the In-Memory option enabled would be an M6 Gran Coupe, if it were a car.

Cross-Platform Transportable Database and Oracle Engineered Systems

In Consolidation, DBA, Exadata, Oracle database, RMAN, SuperCluster on May 25, 2014 at 15:25

Cross-platform transportable database is not the same thing as transportable tablespace. When performing cross-platform transportable database, we copy the entire database, including the SYSTEM and SYSAUX tablespaces, from one platform to another. The usual containment checks are no longer needed and, because the SYSTEM tablespace is also being copied, no metadata Data Pump export/import step is required. But cross-platform transportable database can only be performed between platforms that have the same endian format.


When consolidating a large number of databases onto Exadata or SuperCluster, the work has to be automated as much as possible. When the source and the target platform share the same endianness (see the two endian groups in the image below), the best option is to use the transportable database method.
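As a hedged sketch (the platform name, database name and paths here are for illustration only), the readiness check and the conversion look roughly like this:

-- As SYS on the source database (opened read-only for the check):
SET SERVEROUTPUT ON
DECLARE
  db_ready BOOLEAN;
BEGIN
  -- TRUE means the database can be transported to the named target platform
  db_ready := DBMS_TDB.CHECK_DB('Linux x86 64-bit', DBMS_TDB.SKIP_READONLY);
END;
/

-- Then, in RMAN, generate converted datafiles plus a transport script:
RMAN> CONVERT DATABASE NEW DATABASE 'exadb'
        TRANSPORT SCRIPT '/u01/migrate/transport.sql'
        TO PLATFORM 'Linux x86 64-bit'
        DB_FILE_NAME_CONVERT '/u01/oradata/srcdb/' '/u01/migrate/exadb/';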

Otherwise, there are 3 options:

1. Create a new database on the EXA platform manually, and transport the needed tablespaces from the source database using the Cross Platform Transportable Tablespace method.
2. If downtime during the upgrade is an issue, Cross Platform Incremental Backup is an extremely good option. You can also refresh standby databases using this method.
3. Use the new Full Transportable Export/Import method. Note that full transportable export is available starting 11.2.0.3 while full transportable import is available starting 12.1.0.1.

[Image: the two endian groups of supported platforms]

DBAs should consider the following points before migrating:

– Some parts of the database cannot be transported directly: redo log files and control files from the source database are not transported. New control files and redo log files are created for the new database during the transport process (alter database backup controlfile to trace resetlogs;), and an OPEN RESETLOGS is performed once the new database is created.

– BFILEs are not transported. RMAN provides a list of objects using the BFILE datatype in the output of the CONVERT DATABASE command, but users must copy the BFILEs themselves and fix their locations on the destination database. Execute DBMS_TDB.CHECK_EXTERNAL to identify any external tables, directories or BFILEs (see the sketch after this list).

– Tempfiles belonging to locally managed temporary tablespaces are not transported. The temporary tablespace will be re-created on the target platform when the transport script is run. After opening with resetlogs, run alter tablespace TEMP add tempfile…

– External tables and directories are not transported. RMAN provides a list of affected objects as part of the output of the CONVERT DATABASE command, but users must redefine these on the destination platform. Run select DIRECTORY_NAME, DIRECTORY_PATH from DBA_DIRECTORIES and ensure that the same paths are available on the target system.

– Password files are not transported. If a password file was used with the source database, the output of CONVERT DATABASE includes a list of all usernames and their associated privileges. Create a new password file on the destination database using this information.

– When all is done, run utlirp.sql and then utlrp.sql.
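Before the move, a quick way to inventory these leftovers (a minimal sketch, run as SYS on the source database):

-- Lists external tables, directories and BFILEs that will not be transported:
SET SERVEROUTPUT ON
DECLARE
  external BOOLEAN;
BEGIN
  external := DBMS_TDB.CHECK_EXTERNAL;
END;
/

-- Check that the directory paths also exist on the target system:
select directory_name, directory_path from dba_directories;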

For SAP users, check Note 105047 – Support for Oracle functions in the SAP environment:

86. Transportable Tablespaces: the implicit use is supported in the BRSPACE function “-f dbcreate” (Note 748434) and the “Tablespace Point in Time Recovery” function of BRRECOVER. Explicit use as part of system copying is tolerated.

87. Transportable database: Can be used (Note 1367451).

And finally, here is some more information on how to migrate your databases to Oracle Engineered Systems:

1. Exadata MAA Best Practices Series Session 6: Migrating to Exadata

2. Exadata MAA Best Practices Migrating Oracle Databases

3. Oracle’s SPARC M5-32 and SPARC M6-32 Servers: Domaining Best Practices


Oracle Exadata myths and misconceptions

In DBA, Exadata, Oracle database on February 9, 2014 at 04:47

Exadata experience, what does that actually mean? by Martin Bach and Friday Philosophy – I Am An Exadata Expert? by Martin Widlake raise a few questions about how an Exadata DBA differs from an Oracle DBA.

This chart provides the 3-month moving total of permanent IT jobs citing Oracle Exadata within the UK as a proportion of the total demand within the Database and Business Intelligence category.

[Chart: 3-month moving total of UK IT jobs citing Oracle Exadata]

Clearly, the demand is high and growing. Several Oracle DB experts are now also Exadata experts. There are various internet blogs and discussion forums dedicated to Exadata, and even dedicated conferences such as E4.

The aim of this blog is to point out a few Oracle Exadata myths and misconceptions I have encountered and discussed with Exadata experts during the past year or so.

1. “When logged into the database (SQL*Plus, SQL Worksheet, etc.) there is no way to see if the database is an Exadata one or not.” Actually there is. If select count(*) from (select distinct cell_name from gv$cell_state) returns a number > 0, then the database runs on Exadata storage. This SQL is also used by the SYS.DBMS_FEATURE_EXADATA procedure:

[Screenshot: the gv$cell_state check as used by SYS.DBMS_FEATURE_EXADATA]

The number 7 also tells us that we are using a half Exadata rack: a full X4 Exadata rack has 14 cells, while a full T5-8 SuperCluster rack has 8 cells.
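To run the check yourself – this is exactly the SQL quoted above:

select count(*)
from  (select distinct cell_name from gv$cell_state);

A result greater than 0 means the database is running on Exadata storage.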

2. “Flash Cache compression is enabled by default.” No, it is not. The reason is that clients must have licensed the advanced compression option on all databases that access the compressed flash cache.

You can enable Flash Cache compression with the cell command "alter cell flashCacheCompress=TRUE". On an X3 Exadata you should also run "alter cell flashCacheCompX3Support=TRUE".

Note that Flash Cache compression is supported on X3 and X4 storage servers and requires F40 or F80 cards. With compression, the X4 Flash Cache is expanded to 88 TB (raw) per rack. Monitor Flash Cache compression by using the cell metric FC_BY_USED!
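Put together, as a sketch (the X3-only attribute comes first, where applicable; run on each cell):

CellCLI> alter cell flashCacheCompX3Support=TRUE
CellCLI> alter cell flashCacheCompress=TRUE
CellCLI> list metriccurrent FC_BY_USED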

3. “Exadata is always RAC.” No, it is not. In fact, consider carefully if you want to use the RAC option on your Exadata. Ask for a second opinion too! And a third one as well. You can have more databases on a Quarter or 1/8th Rack if you do not use the RAC option. And if someone brings up the HA excuse for using RAC, ask if (s)he can walk and run on one foot when the other one is injured. With well-implemented instance caging one can run close to 100 database instances on one database node (yes, there are such cases). Often simplicity brings better results.

4. “DBAs should let Oracle decide on what is placed in the Flash Cache.” In general, yes, but I have seen significant improvements when frequently used tables are manually kept in the Flash Cache. Although the Flash Cache is fully automated for reads and writes, DBAs can decide on object caching by setting the CELL_FLASH_CACHE storage attribute. The granularity goes all the way down to the object level, where you can define a higher priority for caching (KEEP) or no caching at all (NONE). The Flash Cache can also differentiate between full table scans and single I/O reads.
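For example (the table names are hypothetical), pinning a hot table and excluding a cold one:

ALTER TABLE siebel.s_order_item STORAGE (CELL_FLASH_CACHE KEEP);
ALTER TABLE siebel.s_audit_item STORAGE (CELL_FLASH_CACHE NONE);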

Cached:

– Frequently accessed data and index blocks
– Control file reads and writes
– File header reads and writes

Not cached:

– I/Os to mirror copies
– Backup-related I/O
– Data Pump I/O
– Data file formatting
Additionally, table scans do not monopolize the cache.

Check Uwe Hesse’s article about the Exadata Flash Cache.

5. “The bigger the SGA, the better.” Although I have often seen how increasing the SGA can really boost application performance, this is not always the case with Exadata. Why? In order to offload the work to the cells, the table has to be big enough. But what is meant by a big table?

“_small_table_threshold” is sized in units of database blocks and is supposed to be set by default to 2% of the size of the database buffer cache (1.5% in 12c, according to my tests), while “_very_large_object_threshold” = 500 (the value in 12c).

Tables bigger than 5 x “_small_table_threshold” are treated as ‘large tables’, and their reads result in direct reads. It is rather unclear how these things work for tables sized between 2 and 5 times the small table threshold. Note that 12c has 108 underscore parameters for different threshold values!
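You can check the values on your own system with the classic hidden-parameter query (as SYS; a sketch, since the fixed tables are undocumented and version-dependent):

select a.ksppinm parameter, b.ksppstvl value
from   x$ksppi a, x$ksppsv b
where  a.indx = b.indx
and    a.ksppinm in ('_small_table_threshold', '_very_large_object_threshold');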

Check out these two articles:

Smart Scan: Why is “_small_table_threshold” Important?
Exadata Optimization Tips


6. “Storage indexes can be managed and maintained.” I wish that were possible. Kellyn says in the above-mentioned paper Exadata Optimization Tips: “Do not count on storage indexes. The feature, when implemented by Oracle is one requiring the choice of a complicated set of algorithms. They can be your savior or your devil.” Well, Oracle tries to maintain storage index summaries during loads, but it is just a best effort – not the most optimal given the memory constraint and the characteristics of writes during loads.

Plus, storage indexes are not persistent: after a cell reboot, the very first query gets no benefit. In simple words, whoever comes to the office first on Monday morning (after a weekend’s cell maintenance) should not expect the usual performance from his/her reports.

You can disable/enable the storage indexes at cell level:

alter cell events = "immediate cellsrv.cellsrv_storidx('disable', 'ALL', 0, 0, 0)";
alter cell events = "immediate cellsrv.cellsrv_storidx('enable', 'ALL', 0, 0, 0)";
alter cell events = "immediate cellsrv.cellsrv_storidx('purge', 'ALL', 0, 0, 0)";


I would really like to have a dynamic view showing the existing storage indexes!

7. “Oracle Database 12c running on Exadata Database Machine requires Exadata version 12.1 or later.” This is the recommended approach but you can still use Oracle Database 12.1.0 with Exadata version 11.2.3.3 or Exadata version 11.2.3.2.1.

Oracle Database 12c running on Exadata 11.2 is subject to the following restrictions:

– The 12.1 offload libraries are not present on Exadata version 11.2. Therefore, smart scan offloaded filtering and storage indexes are disabled for 12.1 databases. Smart scans are still issued, and the initiation of IOs is offloaded, but complete blocks are returned instead of just the selected rows and columns. 11.2 databases on the same system have full offload capabilities.
– I/O Resource Management (IORM) plans for 12.1 database are not enforced.
– Database resource plans (intradatabase resource plans) set on 12.1 databases are not enabled on cells.
– An interdatabase plan cannot manage 12.1 databases, therefore an interdatabase plan cannot be used to manage flash log or flash cache for a 12.1 database.
– Other Exadata features like Smart Flash Cache, Smart Flash Log, Hybrid Columnar Compression, and low latency and high throughput InfiniBand messaging continue to work with 12.1 databases. Note that these restrictions only affect performance of 12.1 databases.
– All 12c database functionality, such as Pluggable Databases, is fully available.

A final comment: I often see the power of the Exachk tool underestimated. Also use the ExaWatcher utility, which is located on each Exadata database server and storage cell under /opt/oracle.ExaWatcher/. ExaWatcher replaces OSWatcher in Exadata software versions 11.2.3.3 and up.

No Data and Big SQL: Oracle Database 12c on Exadata and SuperCluster

In DBA, Exadata, Oracle database on July 25, 2013 at 10:51

“If you want to succeed you should strike out on new paths, rather than travel the worn paths of accepted success.” John D. Rockefeller

No Data: with Exadata, with the help of storage indexes, you can sometimes perform full table scans without any physical I/O! Really, no data is being read or written! Richard Foote explains the details in his article entitled Exadata Storage Indexes – Part I (Beginning To See The Light).

Big SQL: with Oracle Database 12c, on a single AWR container report you can see SQLs from all databases (up to 252) plugged into the container database – that is a big SQL report! We will see a lot of Big SQL with the increased usage of container databases. You can see a screenshot of a 12.1.0 AWR report (SQL ordered by Elapsed Time) at the end of this blog post. Note that in IBM terminology, Big SQL is the SQL interface to its Hadoop-based platform, InfoSphere BigInsights.

How about 12c on Exadata or SuperCluster? Is it supported?


Yes, Oracle 12.1.0 is supported on Exadata and SPARC SuperCluster. The system should run Exadata software version 11.2.3.2.1. You can have both 11gR2 and 12cR1 databases running simultaneously on the same cluster.

The Grid Infrastructure must first be upgraded from 11g to 12c. You may install a fresh 12c GI only if you do not plan to create new 11gR2 databases (that would fail with an SSO wallet incompatibility due to Bug 16844086).


Oracle Database 12c running on Exadata or SuperCluster requires the following:

• Exadata version 11.2.3.2.1 or later installed on Exadata Storage Servers and database servers
• Patch 16547261 applied on Exadata Storage Servers
• Database servers running Oracle Linux Unbreakable Enterprise Kernel (UEK) 2.6.32-400.21.1.el5uek updated to 2.6.32-400.29.1.el5uek. Other Oracle Linux kernels supplied with Exadata 11.2.3.2.1 do not require an update.

Note that Oracle Exadata is sometimes still being referred to as Oracle SAGE (SAGE = Storage Appliance for Grid Environments):

[Screenshot: patch text referring to Oracle SAGE]

Oracle Database 12c running on Exadata Database Machine is subject to the following restrictions:

• The 12.1 offload libraries are not present on Exadata version 11.2.3.2.1. Therefore, Smart Scan offloaded filtering and storage indexes are disabled for 12.1 databases. Smart Scans are still issued, and the initiation of IOs is offloaded, but complete blocks are returned instead of just the selected rows and columns. 11.2 databases on the same system have full offload capabilities. A future release of Exadata storage server software will add 12.1 offload libraries and lift these restrictions.

• I/O Resource Management (IORM) plans for 12.1 database are not enforced.
– Database resource plans (intradatabase resource plans) set on 12.1 databases are not enabled on cells.
– An interdatabase plan cannot manage 12.1 databases, therefore an interdatabase plan cannot be used to manage flash log or flash cache for a 12.1 database.

• Cell metrics for 12.1 databases are reported under OTHER_DATABASE.

Other Exadata features like smart flash cache, smart flash log, hybrid columnar compression, and low latency and high throughput InfiniBand messaging continue to work with 12.1 databases. Note that these restrictions only affect performance of 12.1 databases. All 12c database functionality, such as Pluggable Databases, is fully available.

Not directly related to the topic but look at the AWR report in 12c. It shows the PDB name where the SQL was run:

[Screenshot: 12c AWR report showing the PDB name]

AWR reports in 12.1.0 are slightly different from those in 11g; CDBs and PDBs are the main factor.

Back to Exadata: the new Oracle Exadata Plug-in 12.1.0.4.0 comes with several new features. Now, it supports SPARC SuperCluster, 1/8 Rack and multi-rack certifications. Check the complete list here.

DBA tips for tuning Siebel on RAC and Exadata

In Database tuning, DBA, Exadata, Oracle database, Oracle internals, RAC, Siebel on April 21, 2013 at 11:27

More than 100 years ago, Mark Twain said that “the elastic heart of youth cannot be compressed into one constrained shape long at a time”.

In modern technology, “the elastic cloud can be compressed with the constraints of long datatypes”. Let me explain the meaning of my statement in terms of Siebel applications running on RAC and Exadata.


Chained and migrated rows are often a big problem in a Siebel database. The issue of chained rows can be resolved by using a bigger block size, while migrated rows require segment reorganization with a higher PCTFREE (20, 30 or even 50). Often-chained tables in Siebel are S_ORG_EXT, S_ASSET, S_ORDER_ITEM and S_ORDER_ITEM_OM; at least these will require a bigger block size.
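A hedged sketch of such a reorganization (the tablespace name and sizes are made up; a 16K tablespace requires db_16k_cache_size to be set first):

ALTER SYSTEM SET db_16k_cache_size = 512M SCOPE=BOTH;

CREATE TABLESPACE siebel_16k DATAFILE '+DATA' SIZE 10G AUTOEXTEND ON
  BLOCKSIZE 16K;

-- Rebuild the segment with more free space per block; MOVE invalidates
-- indexes, so rebuild them afterwards.
ALTER TABLE siebel.s_org_ext MOVE TABLESPACE siebel_16k PCTFREE 30;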

For LONG and CLOB details in a Siebel CRM, check CLOB Physical Type in Siebel.

But as Joel Goodman noted in How Smart is Your Smart Scan?, there is an Exadata-specific situation that causes migrated rows.

When a row is updated in a Hybrid Columnar Compressed (HCC) table, it is migrated to another block in the segment that is managed using OLTP compression. Any HCC Compression Unit (CU) containing at least one migrated row will also cause the block containing that row to be accessed by the server using a “cell single block physical read”.

Look at the “Top 5 Timed Foreground Events” in a Siebel on Exadata OLTP DB using HCC:

[Screenshot: Top 5 Timed Foreground Events dominated by cell single block physical read]

The situation with HCC in OLTP is tricky for the following 3 reasons:

1. Every update of a record stored in HCC format results in a migrated row
2. The new row is stored in a new block that is marked for OLTP compression
3. Non-direct path inserts will be loaded into OLTP compressed blocks as opposed to HCC format

For the above reasons, mixing HCC with DML is not recommended. Partitioning can provide a mechanism for avoiding these issues since each partition can have its own storage format.

Only after decompressing the OLTP tables did the event “cell single block physical read” disappear, and performance improved significantly.
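Decompressing such a table is a simple, if I/O-heavy, operation (a sketch with hypothetical object names):

-- Rewrite the segment without HCC; indexes must be rebuilt afterwards.
ALTER TABLE siebel.s_asset MOVE NOCOMPRESS;
ALTER INDEX siebel.s_asset_p1 REBUILD;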

Another good tip for Siebel on RAC is to use a high number of hash partitions for hot indexes. High means 256 or more (it should be a power of 2).

Look at the situation with gc buffer busy waits before the high hash partitioning:

[Screenshot: segments with gc buffer busy waits before hash partitioning]

As you can see the “gc buffer busy acquire” was the top event:

[Screenshot: top timed events led by gc buffer busy acquire]

Looking at issues with these events in MOS might incline you to believe that this is a bug. However, that was not the case: after making the number of hash partitions for some of the indexes 256, database performance was back to normal:

[Screenshot: top timed events after hash partitioning the indexes]

Note that non-partitioned tables can also have hash-partitioned indexes!
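A sketch of such an index (the names are hypothetical; note the power-of-2 partition count):

CREATE INDEX siebel.s_evt_act_f1 ON siebel.s_evt_act (target_per_id)
  GLOBAL PARTITION BY HASH (target_per_id) PARTITIONS 256;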

Another tip: SecureFiles is a feature introduced in Oracle Database 11g that is *specifically* engineered to deliver *high performance* for unstructured data such as LOBs. I have seen several queries get even 10 times faster after migrating LOBs to SecureFiles.
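Migrating an existing BasicFiles LOB to SecureFiles can be as simple as a table move (a sketch with hypothetical names; online redefinition is the low-downtime alternative):

ALTER TABLE siebel.s_note_att MOVE
  LOB (file_buffer) STORE AS SECUREFILE;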

About sequences: in RAC/Exadata, using the CACHE and NOORDER options together gives the best performance for a sequence. For example, in a Siebel database the S_DOCK_TXN_LOG_S sequence is used to generate the transaction ID used by the S_DOCK_TXN_LOG table. The default cache size for sequences in Oracle is 20. If you have thousands of concurrent users, Siebel and Oracle suggest increasing the cache size to at least 10000.
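For example, applying exactly that suggestion:

ALTER SEQUENCE siebel.s_dock_txn_log_s CACHE 10000 NOORDER;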

Last 2 things:

– For gathering Siebel database statistics, always use the latest version of coe_siebel_stats.sql. As of now, the latest version is 11.4.4.5.
– The script coe_siebel_profile.sql provides a list of columns that are not indexed but potentially are good candidates for indexing according to their usage by the Optimizer.

For additional tips, check Oracle’s white paper Siebel on Exadata!

And using Oracle Enterprise Manager makes all tuning so much easier!


The Oracle Platform Administrator

In DBA, Exadata, Oracle database on March 1, 2013 at 10:31

Question: what do the following two pairs have in common:
– Oracle database and non-Oracle hardware/software
– Rowan Atkinson and Audrey Hepburn?

[Image: Rowan Atkinson and Audrey Hepburn]

My answer: they are both good but don’t fit very well into the same picture.

Creating a back-end database system based on multiple vendors’ components makes less and less sense nowadays. Consider the following:

[Image: multiple-vendor vs single-vendor stack]

Which one would you go after, left or right? To make it clearer: choose between Exadata, and IBM POWER7 with AIX on top of EMC storage, monitored with BMC Patrol and backed up with HP Data Protector.

I have seen from experience that hardware and software components from multiple vendors are complex to support. Usually no single supplier is accountable at the system level.

In a consolidated database environment, a complete test of a major change often must include synchronous testing of the multiple workloads, in order to catch situations where a change in one area causes an issue in another. Patching one component, for example, sometimes breaks another one.


With Exadata, Oracle provides a single point of support for the Oracle Exadata Database Machine both for hardware and software. Support problems are resolved much faster than where multiple vendors are involved and can be reported and managed by the DBA. But is the Exadata DBA still just a DBA?

Kevin Closson used the term Oracle Platform Administrator (OPA) in a response to an article from Martin Bach about the meaning of Exadata experience.

Let me quote Kevin:

“It’s nearing the point where an Oracle DBA cannot do everything that is expected. The acronym no longer really fits. Database Administrator? What does that have to do with cellsrv? DBA? What does that have to do with clusters (at the platform level [you mention fencing]), etc, etc, etc.

It’s nearing time that there should be two roles:

1) The Oracle Platform Administrator (OPA)
2) The Oracle Database Administrator

The DBA tends to the *contents* of the database and applications’ needs for data from the database, tuning, etc. The OPA deals with all the activity that was once the domain of the System Admin and Storage Admin and, moreover, the OPA deals with the crushing load of maintaining Oracle software (patching).

I can’t see it any other way. Oracle has moved their technology stack in a direction that essentially warrants this split in responsibilities. Think of any other RDBMS product where the DBA is expected to be a platform administrator as well. I’m not saying good or bad on that matter, but it is high time to consider the split.”

Uwe Hesse has just written an excellent blog post entitled “How Exadata will impact your IT Organization”. The difference between the Exadata Database Administrator (EDBA) and the Oracle Platform Administrator (OPA) is probably close to zero. I see it more like this: the EDBA is just a DBA also managing Exadata, while the OPA is more of a general admin with excellent DBA skills. Semantics!

Obviously, additional training for the DBAs is needed, but by no means the 285 days of new DBA training for existing Oracle DBAs suggested by Allen Licitra and Mark Shainman. I wonder how that number was calculated and if we are talking calendar days or workdays 🙂

I would highly recommend Enkitec’s Course on Oracle Exadata Administration!

Arup Nanda suggests that the role should have the following skillsets:

[Chart: skillsets of the Oracle Platform Administrator]

I would furthermore split the Database Administrator part as follows:

[Chart: split of the Database Administrator role]

I do not know where the meetings and conf. calls fit in.

But as Andrew Carnegie said: “Concentrate your energies, your thoughts and your capital. The wise man puts all his eggs in one basket and watches the basket!” And who else to do that on the Oracle side but the Oracle Platform Administrator.

Exadata’s Total Cost of Ownership

In Cloud, Consolidation, DBA, Exadata, Oracle database on November 29, 2012 at 11:55

According to Elbert Hubbard “One machine can do the work of fifty ordinary men. No machine can do the work of one extraordinary man”.

Of course, neither Exadata nor Bloom filters existed when the Bloomington-born philosopher made the above statement.

Forbes magazine published an interesting article this month: Oracle’s Secret Sauce: Why Exadata Is Rocking the Tech Industry.

The article says: “The new approach is embodied in a technology strategy pioneered by Oracle and recently endorsed/followed by IBM (although IBM’s effort to date is rather modest): building richly integrated and fully optimized systems from the ground up, with hardware and software expressly created to work together to deliver maximum performance.”

This time, I am not only after the technical benefits and advantages of Exadata. I would like to clarify what they bring to business, and see how Oracle Exadata compares to IBM P-Series.

The FactPoint Group created a 30 page cost comparison report for business decision makers: Oracle Exadata Database Machine vs. IBM Power Systems.

In brief, the results of the report are:

• IBM 3 year TCO is 31% higher than Oracle.
• Exadata can be deployed more quickly and easily requiring 59% fewer man-hours than a traditional IBM Power Systems solution.
• Exadata requires 40% fewer sysadmin hours to maintain and operate annually, including quicker support calls because of less finger-pointing and faster service with a single vendor.
• Exadata delivers dramatically higher performance, typically up to a 12x improvement over the prior solution, as described by customers.
• Will become even easier to operate over time as users become more proficient and organize around the benefits of integrated infrastructure.
• Supplies a highly available, highly scalable and robust solution with reserve capacity that makes Exadata easier for IT to operate, because IT administrators can manage proactively, not reactively.

Overall, Exadata operations and maintenance keep IT administrators from “living on the edge.” And it’s pre-engineered for long-term growth.

Check Kerry Osborne’s Oracle Blog for more details about the Exadata vs. IBM P-Series comparison.

I personally think that the benefits of Exadata are even bigger, granted the system is properly configured, which I see is not always the case. But as I said, I will not comment on technical issues this time.

But after all, this is a DBA blog, so this part of the research might be of interest for most DBAs:

“For this emerging Database Machine Administrator (DMA) job category, IT employees are cross-trained to handle tasks currently undertaken by admin specialists in hardware, operating systems, network, applications or storage. IT managers who pursue this adaptive path likely will gain operational efficiencies for managing packaged solutions, although it may take several years as IT administrators are trained in new competencies.

The emergence of the DMA also may help restructure IT departments into more efficient operations, but the full benefits of this development cannot be fully realized until most older systems that demand a stove-piped IT organization are decommissioned and IT organizations adapt. At that time, IT operations managers may be able to reduce headcount. In time, packaged solutions should involve not only fewer workers but also fewer IT groups, which should reduce costs; in the meantime IT will be able to do more without adding headcount.”

That is very important! Let me quote here Paul Vallee, who in a recent discussion predicted that in the near future organizations will need few but very skillful DBAs, an opinion I 100% agree with!

“This change in job roles is not necessarily comfortable for everyone in IT because Exadata marginalizes various system administrators as it empowers the DBA: “The DBAs are doing more hardware tasks and diagnostics because most of the Exadata stuff is geared around database commands, not hardware commands or operating system commands. The gearheads have designed Exadata from the DBA’s perspective—when you look at the sys admin portion, it’s all written by a DBA, not by a Sys Admin,” lamented a System Administrator at a Business Services Co.

Other System Administrators have expressed similar sentiments as many of their traditional responsibilities shift towards the DBA – the source of much of the operational savings we have identified.”

More on the DMA subject from Arup Nanda: Who Manages the Exadata Machine?

For all DBAs: here is an excellent book on Exadata: Expert Oracle Exadata, by Kerry Osborne, Randy Johnson and Tanel Põder.

Tuning Exadata with parallel profiles and SPM baselines

In Database tuning, DBA, Exadata, SQL on October 29, 2012 at 06:34

Harold Geneen used to say: “It is much more difficult to measure nonperformance than performance”.

Without automation, database management becomes increasingly difficult. An Exadata database can strongly benefit, performance-wise, from fully enabled Automatic SQL Tuning:

BEGIN
  DBMS_SQLTUNE.set_tuning_task_parameter(
    task_name => 'SYS_AUTO_SQL_TUNING_TASK',
    parameter => 'ACCEPT_SQL_PROFILES',
    value     => 'TRUE');
END;
/

I will explain why.

The SQL management base (SMB) is a part of the data dictionary that resides in the SYSAUX tablespace. It stores statement logs, plan histories, SQL plan baselines, and SQL profiles.

The difference between SQL Profiles and SQL Plan Management Baselines is very well explained by Maxym Kharchenko and Maria Colgan.

The process of automatic evolution of SQL Plan Baselines is described with a perfect example by Christian Antognini.

Converting baselines into profiles is almost straightforward but not simple; Sumit Bhatia explains how.

Now, for complex and long running SQL statements in Exadata, it is quite important to reduce the amount of data being transferred from the storage nodes to the database and to have the right degree of parallelism.

The former is achieved largely by smart scans, which take place only when there are full scans or direct path reads; the blocks are not cached but are returned directly to the PGA. The latter is even trickier: how do you actually determine the right degree of parallelism for a SQL statement?

Most often parallel_degree_policy is set to MANUAL (the default) and parallel_max_servers is sort of guessed. In my opinion, the right way is to use Oracle’s Auto DOP solution by setting parallel_degree_policy to a non-default value. Let me quote Gwen Shapira:

There are two levels to Auto DOP:

– Limited: when accessing tables and indexes that have been declared with parallel clause, Oracle will decide on the degree of parallelism based on the query and system resources.
– Auto: Oracle will decide on degree of parallelism for every query. In addition two new features are enabled: parallel statement queuing and in-memory parallel execution.
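Enabling it is a one-parameter change (a sketch; the related parallel_min_time_threshold parameter additionally controls which statements qualify for parallelism):

ALTER SYSTEM SET parallel_degree_policy = AUTO SCOPE=BOTH;
-- or LIMITED, to restrict Auto DOP to objects declared with a PARALLEL clause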

Even doing all that, because of Oracle’s unpredictable behavior at times, some SQL statements might not get the right number of parallel slaves or might not go for smart scans. For such statements, creating an SPM baseline is a good thing to start from. Then, during the Automatic SQL Tuning maintenance window, the SQL Tuning Advisor might notice that there is a non-accepted plan which is better than the accepted one. The recommended SQL profile will simply do nothing other than evolve the SPM baseline execution plan into an accepted plan.

All that happens automatically due to the fact that we have advised Oracle to automatically accept SQL Profiles.

But there is more to it: how do you get a good degree of parallelism for the SQL statement? Oracle will automatically do so if needed, by creating a parallel SQL Profile. Note that there are two types of profiles:

– REGULAR_PROFILE – profile without a change to parallel execution
– PX_PROFILE – regular profile with a change to parallel execution

Here is an example of checking the outcome:
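(A minimal sketch: the DBA_SQL_PROFILES view shows which profiles have been created and accepted; the REGULAR_PROFILE/PX_PROFILE designation itself appears in the SQL Tuning Advisor report output.)

SELECT name, type, status, created
FROM   dba_sql_profiles
ORDER  BY created DESC;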

Automatically accepted parallel profiles can really boost Exadata performance. But as PX_PROFILEs can also consume more resources than expected, it is important to have Instance Caging enabled.

The most important step for configuring instance caging is to determine the value of CPU_COUNT for each database instance and the sum of CPU_COUNTs across the server. There are two approaches:

– Partitioned approach, for mission-critical pools: sum(CPU_COUNT) < 75% x total CPUs
– Over-subscribed approach, for non-critical pools: sum(CPU_COUNT) <= up to 3 x total CPUs

Second step for instance caging: ensure resource_manager_plan is enabled.
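Both steps together, as a sketch (the CPU value is hypothetical and DEFAULT_PLAN is just one possible resource plan):

-- Step 1: cap the instance
ALTER SYSTEM SET cpu_count = 8 SCOPE=BOTH;
-- Step 2: caging is only enforced while a resource plan is active
ALTER SYSTEM SET resource_manager_plan = 'DEFAULT_PLAN' SCOPE=BOTH;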

Bottom line: tuning an Exadata database is not exactly the same thing as tuning a traditional database. Actual tuning of an Exadata database is in fact a lot easier, once understood.

And a lot of that tuning work can be really automated!

Bloom filters for DBAs

In Database tuning, DBA, Exadata, Oracle database, RAC on August 28, 2012 at 19:59

Lady Bird Johnson said: “Where flowers bloom so does hope.” I would re-phrase: where Bloom filters bloom, so does database performance.

A Bloom filter is a probabilistic algorithm for doing existence tests in less memory than a full list of keys would require. In other words, a Bloom filter is a method for representing a set of n elements (also called keys) to support membership queries. John Rose wrote a very detailed blog article about Bloom filters. Another good one is by Christian Antognini entitled simply Bloom Filters.

Several underscore parameters are related to Bloom filters; the exact set varies by release.
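As SYS, you can list them from the x$ksppi fixed table (a sketch; note the escaped underscore, since _ is a LIKE wildcard):

select ksppinm
from   x$ksppi
where  ksppinm like '\_bloom%' escape '\';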

The Oracle database makes use of Bloom filters in the following 4 situations:

– To reduce data communication between slave processes in parallel joins: mostly in RAC
– To implement join-filter pruning: in partition pruning, the optimizer analyzes FROM and WHERE clauses in SQL statements to eliminate unneeded partitions when building the partition access list
– To support result caches: when you run a query, Oracle will first see if the results of that query have already been computed and cached by some session or user, and if so, it will retrieve the answer from the server result cache instead of gathering all of the database blocks
– To filter members in different cells in Exadata: Exadata performs joins between large tables and small lookup tables, a very common scenario for data warehouses with star schemas. This is implemented using Bloom filters to determine whether a row is a member of the desired result set.

You can identify Bloom pruning in a plan when you see :BF0000 in the Pstart and Pstop columns of the execution plan and PART JOIN FILTER CREATE in the operations column.

Two hints can be used: px_join_filter and no_px_join_filter; and there are a couple of views for monitoring Bloom filters:

– v$sql_join_filter: information about Bloom filters; number of rows filtered out and probed by Bloom filters
– v$pq_tqstat: check reduced communication due to usage of Bloom filters
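A sketch with hypothetical tables, forcing a Bloom join filter on the probe side of a parallel hash join (without the hint, the optimizer decides by itself):

select /*+ parallel(4) leading(d) use_hash(s) px_join_filter(s) */
       d.dname, count(*)
from   dept d, sales s
where  s.deptno = d.deptno
group  by d.dname;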

There are two undocumented Oracle functions: SYS_OP_BLOOM_FILTER and SYS_OP_BLOOM_FILTER_LIST.

In 11gR2 execution plans you may see :BF0000 replaced by KEY(AP). This is due to new functionality that performs partition pruning based on multiple pruning descriptors combined with AND (“And Pruning”).

In 11.2.0.2 and later, a SQL query on a table might not deliver any results; unfortunately, it might not deliver any error message either. The workaround is to disable Bloom filters:

alter system set "_bloom_filter_enabled" = false scope=both;

Restrictions:

– For Exadata customers, I would suggest setting a low degree of parallelism on small tables and indexes in order to take advantage of Bloom filters.
– Bloom filters are not supported in partition-wise joins. This is documented as Bug 14325392 and will be fixed in 12.1.0.
– About small tables with indexes: if Oracle goes for an index range scan of such a small index, it will be done in serial, not parallel, and thus no Bloom filter will be used (just drop those small indexes).
– CURRENT_DATE does not use Bloom filters, while SYSDATE does.
– Bloom filters are not used in DML statements: Bug 13801198 says that a simple two-table join shows use of Bloom filter/pruning when running as a query, but not when running as part of an INSERT AS SELECT. The performance delta is 4:24 vs 0:24.