Archive for the ‘Database tuning’ Category

Exadata Consolidation: “You must spend money to make money”

In Cloud, Database tuning, DBA, Exadata, Oracle database on July 20, 2012 at 20:28

Titus Maccius Plautus, a Roman poet and philosopher who lived from 254 BC to 184 BC, said: “Wisdom is not attained by years, but by ability”. He also coined the famous quote in the title above.

What do I have in mind? Even midsize companies nowadays have dozens, if not hundreds, of database instances, and it would surprise no one if a corporate IT department supports a couple of thousand databases.

The consolidation of these databases can be achieved in several ways. But here are some general guidelines on how to accomplish this task by using Exadata in the most optimal way in order to maximize cost reduction, high availability, secure separation of administrative duties and ease of performance tuning, management and monitoring.

It is really surprising how slowly companies adopt database consolidation, given the pressure IT management faces from every possible direction. We could speculate for hours about why, but I will concentrate on the technical aspects of Exadata consolidation.

Oracle recommends the creation of Exadata Hardware Pools. Hardware Pool is a machine or group of machines used as the target consolidation platform.

According to Oracle’s white paper “Best Practices For Database Consolidation On Exadata Database Machine“ (part of the Oracle Maximum Availability Architecture Exadata Consolidation Best Practices), an enterprise might create multiple Hardware Pools to make each consolidation target platform more manageable. The recommended minimum Hardware Pool size is an Exadata X2-2 Half Rack, and the maximum recommended Hardware Pool size is two Exadata Database Machine Full Racks (plus additional Exadata storage expansion racks if required). Hardware Pools that fall within this range are the most common Exadata configurations for consolidation and provide sufficient capacity to efficiently achieve database consolidation objectives.

The recommended storage configuration is one shared Exadata storage grid for each Hardware Pool. This storage grid contains all Exadata cells and Exadata disks, and is configured with either ASM high or normal redundancy. The recommended setup for Oracle Grid Infrastructure (which includes Oracle Clusterware and Oracle ASM) is to use one cluster per Hardware Pool.

Oracle has recommended several parameter settings for Exadata database consolidation:

If PageTables in /proc/meminfo grows to more than 2% of the physical memory size, set the operating system parameter HugePages to cover the sum of all shared memory segments. Setting the database initialization parameter USE_LARGE_PAGES=ONLY on all instances then prevents any instance from starting unless sufficient HugePages are available. HugePages can only be used for the SGA, so do not over-allocate. Also, the database parameters MEMORY_MAX_TARGET and MEMORY_TARGET are not compatible with HugePages. This applies to Linux only; on Solaris, large pages are automatically configured and used via intimate shared memory (ISM).
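As a sketch of the database side of this (the sid clause is illustrative, and the instance must be restarted for the change to take effect):

```sql
-- Require the SGA to be allocated from HugePages; with ONLY, the instance
-- refuses to start if not enough HugePages are configured in the OS.
alter system set use_large_pages = 'ONLY' scope = spfile sid = '*';
```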

Operating system setting:

Set the number of shared memory segments (kernel.shmmni) greater than the number of databases.
Set the maximum shared memory segment size (kernel.shmmax) to 85% of physical memory size, which is the default.
Set the maximum total number of system semaphores (SEMMNS) greater than the sum of all database processes.
Set the maximum number of semaphores in a semaphore set (SEMMSL) greater than the largest number of processes in any single database.

Exadata Memory:

1. Exadata X2-2 based on the Sun Fire X4170 Oracle Database Servers (also known as V2) has 72 GB per database server.
2. Exadata X2-2 has 96 gigabytes (GB) of memory in the default configuration, with an option to expand to 144 GB of memory (with the Exadata memory expansion kit).
3. Exadata X2-8 has 1 terabyte (TB) (with the X4800) or 2 TB (with the X4800M2) per database server.

An important rule:

OLTP applications: SUM of databases (SGA_TARGET + PGA_AGGREGATE_TARGET) + 4 MB * (Maximum PROCESSES) < Physical Memory per Database Node
DW/BI applications: SUM of databases (SGA_TARGET + 3 * PGA_AGGREGATE_TARGET) < Physical Memory per Database Node
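To plug one instance's numbers into the rule above, the three inputs can be pulled from v$parameter (a sketch; run it on every instance on the node and sum the results):

```sql
-- Per-instance contribution to the OLTP memory rule, in MB:
-- SGA_TARGET + PGA_AGGREGATE_TARGET + 4 MB * PROCESSES
select to_number(s.value)/1024/1024  as sga_target_mb,
       to_number(p.value)/1024/1024  as pga_aggregate_target_mb,
       to_number(pr.value) * 4       as process_overhead_mb
from   v$parameter s, v$parameter p, v$parameter pr
where  s.name  = 'sga_target'
and    p.name  = 'pga_aggregate_target'
and    pr.name = 'processes';
```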

Remember to enable instance caging!
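Instance caging itself is just two parameters; a minimal sketch (the CPU count of 4 is an example value):

```sql
-- Cap this instance at 4 CPUs; a resource plan must be active
-- for the cap to be enforced.
alter system set cpu_count = 4 scope = both;
alter system set resource_manager_plan = 'DEFAULT_PLAN' scope = both;
```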

For the Oracle ASM instance, set PROCESSES = 50 * MIN(# database instances on the node + 1, 11) + 10 * MAX(# database instances on the node – 10, 0).
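For example, with 5 database instances on the node the formula works out like this (to be run in the ASM instance):

```sql
-- PROCESSES = 50 * MIN(5 + 1, 11) + 10 * MAX(5 - 10, 0)
--           = 50 * 6 + 10 * 0 = 300
alter system set processes = 300 scope = spfile;
```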


On X2-2, the sum of PARALLEL_MAX_SERVERS across all instances should not exceed 240.

Limit the number of processes and connections to the database servers:

For Exadata running newer releases of the Exadata Storage Server Software, configure a maximum of 60,000 processes per Hardware Pool:

– Upper limit target of 7,500 processes per node for X2-2
– Upper limit target of 30,000 processes per node for X2-8

For Exadata running older releases of the Exadata Storage Server Software, configure a maximum of 20,000 processes per Hardware Pool:

– Upper limit target of 2,500 processes per node for X2-2
– Upper limit target of 10,000 processes per node for X2-8
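To see how close each node is to its upper limit target, you can count server processes per instance (instances on the same node then have to be summed):

```sql
-- Current process count per instance across the cluster
select inst_id, count(*) as processes
from   gv$process
group  by inst_id
order  by inst_id;
```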

The temporary tablespace should be:

BigFile Tablespace,
Located in DATA or RECO, whichever one is not HIGH redundancy,
Sized 32GB initially,
Configured with AutoExtend on at 4GB,
Configured with a MaxSize defined to limit out of control growth.
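Put together, the recommendations above amount to something like the following (the tablespace name, disk group and MaxSize are illustrative):

```sql
-- BigFile temp tablespace in the non-HIGH-redundancy disk group,
-- 32 GB initial size, autoextend in 4 GB steps, capped against runaway growth
create bigfile temporary tablespace temp_bf
  tempfile '+RECO' size 32g
  autoextend on next 4g maxsize 256g;
```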

Answers to several additional Exadata related questions can be found in the following blogs/articles:

Who Manages the Exadata Machine? by Arup Nanda
Upgrade Exadata to by Gleb Otochkin from Pythian
Best Practices For Database Consolidation On Exadata by Javier Puerta
Consolidation Strategies for Oracle Exadata and Oracle Database Machine by Dan Norris, X Team, Oracle Exadata Development
Expert Oracle Exadata by Kerry Osborne, Randy Johnson, Tanel Põder
Oracle Exadata – A platform for consolidation by Umesh Tanna

An excellent reference is the Consolidation Parameters Reference Table available at MOS: “Oracle Sun Database Machine Setup/Configuration Best Practices” [ID 1274318.1]

Back to the saying of Plautus: one does not have to wait for years in order to start implementing an Exadata consolidation. The product has the perfect ability to serve its many purposes: this is software, not wine!


Is Oracle Advanced Compression underestimated?

In Database options, Database tuning, DBA, Oracle database on May 17, 2012 at 02:43

Abraham Lincoln said: “I can make more generals, but horses cost money”.

Often all I hear about Oracle Advanced Compression is: “.. but it is an extra option on top of Enterprise Edition”. And thus its hidden power is often neglected.

Advanced Compression is not just about having the data occupy less space on disk, tape or in the network. It is also about better performance of the database, deduplication and the utilization of less hardware resources: memory and CPU. Thus, you can consolidate more databases on the same server and reduce the license costs.
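For reference, enabling OLTP table compression is a one-liner (the table name is hypothetical, and the Advanced Compression option must be licensed):

```sql
-- Compress rows inserted or updated from now on
alter table sales compress for oltp;
-- Or rebuild the table so existing rows are compressed as well
alter table sales move compress for oltp;
```

Note that the plain ALTER only affects future DML; the MOVE variant rewrites the existing rows but invalidates indexes, which then need a rebuild.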

I will try to compress this blog post and make it as short as possible:

1. What and which tables to compress?

Compress the segments that are involved in the most reads: physical, logical or unoptimized. Consider also the segments used in the SQL statements with the most reads.
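One way to shortlist candidates is to rank segments by their read statistics (a sketch using v$segment_statistics):

```sql
-- Top 20 segments by physical reads since instance startup
select *
from ( select owner, object_name, object_type, value as physical_reads
       from   v$segment_statistics
       where  statistic_name = 'physical reads'
       order  by value desc )
where rownum <= 20;
```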

But do not compress all of them! Use DBMS_COMPRESSION.GET_COMPRESSION_RATIO to verify that you will benefit from the compression. Compress only the segments with a compression ratio of 2 to 1 or higher. Here is an example of how you can estimate the compression ratio:

set serveroutput on
declare
  blkcnt_cmp   pls_integer;
  blkcnt_uncmp pls_integer;
  row_cmp      pls_integer;
  row_uncmp    pls_integer;
  cmp_ratio    number;
  comptype_str varchar2(100);
begin
  -- scratch tablespace, owner and table name are examples: adjust to your schema
  DBMS_COMPRESSION.GET_COMPRESSION_RATIO('USERS', 'SCOTT', 'EMP', NULL,
    DBMS_COMPRESSION.COMP_FOR_OLTP,
    blkcnt_cmp, blkcnt_uncmp, row_cmp, row_uncmp, cmp_ratio, comptype_str);
  DBMS_OUTPUT.PUT_LINE('Block count compressed = ' || blkcnt_cmp);
  DBMS_OUTPUT.PUT_LINE('Block count uncompressed = ' || blkcnt_uncmp);
  DBMS_OUTPUT.PUT_LINE('Row count per block compressed = ' || row_cmp);
  DBMS_OUTPUT.PUT_LINE('Row count per block uncompressed = ' || row_uncmp);
  DBMS_OUTPUT.PUT_LINE('Compression type = ' || comptype_str);
  DBMS_OUTPUT.PUT_LINE('Compression ratio = '||round(blkcnt_uncmp/blkcnt_cmp,2)||' to 1');
  DBMS_OUTPUT.PUT_LINE('Compression ratio org = '||cmp_ratio);
end;
/

2. How to measure the performance benefits of compression?

For the SQL statements run awrsqrpt.sql and compare the results:

Before Compression:

After Compression:

3. What comes after compression?

Follow the average CPU and I/O statistics for the compressed tables:

col c0 heading 'Begin|Interval|time' format a8
col c1 heading 'Owner'               format a10
col c2 heading 'Object|Type'         format a10
col c3 heading 'Object|Name'         format a15
col c4 heading 'Average|CPU|Cost'    format 9,999,999,999
col c5 heading 'Average|IO|Cost'     format 9,999,999
select
 to_char(sn.begin_interval_time,'mm-dd hh24') c0,
 p.object_owner                               c1,
 p.object_type                                c2,
 p.object_name                                c3,
 avg(p.cpu_cost)                              c4,
 avg(p.io_cost)                               c5
from
 dba_hist_sql_plan p,
 dba_hist_sqlstat  st,
 dba_hist_snapshot sn
where
 p.object_name = '/BI0/PMATERIAL'
 and p.sql_id = st.sql_id and st.snap_id = sn.snap_id
group by
 to_char(sn.begin_interval_time,'mm-dd hh24'),
 p.object_owner,
 p.object_type,
 p.object_name
order by
 1,2,3 desc;

P.S. About Exadata: do not mix HCC and DML, i.e., do not use HCC in OLTP.

On DBA innovation: who is afraid to fail will keep falling forever

In Cloud, Database tuning, DBA, Exadata, Grid Control, Oracle database on April 24, 2012 at 19:06

Managers always ask software engineers to deliver something sooner rather than waiting to deliver everything later.

How come it is fine to deliver an incomplete, low-quality IT product just because it is delivered on time? Most of us have, on several occasions, been limited by such deadlines and time schedules in terms of creativity, innovation and pro-activeness.

Innovation in database management and database administration has nothing to do with DBAs from whom you often hear phrases like “If it ain’t broke, don’t fix it” or DBAs who follow blindly the KISS principle.

In DBA terms, innovation is the process of introducing and implementing new features in the database and using new options and database products.

A good example is the adoption of Oracle Enterprise Manager Grid/Cloud Control. I have seen so many excuses for not implementing it or delaying its implementation. And yet the benefits and savings of an OEM implementation are manifold!

Do you wonder how quickly and efficiently one can find out all the details of a certain SQL statement without Cloud Control? No other database brand has such a highly sophisticated tool for performance analysis as Oracle does. See all the details offered with one mouse click: Exadata Cell Offload Efficiency (96% in this case), Wait Activity in detail, use of the Result Cache, etc. All that from a single screen!

Accepting new innovative database properties, tools and appliances is hard for many IT architects, IT managers and most of all for DBAs who have the primary responsibility to test, verify and promote these features.

Let us have a look at one other innovation from Oracle. Implementing Exadata can be strongly considered if any of the following 5 points are in the IT roadmap:

1. Implementing a new Business Intelligence solution
2. Datawarehouse licenses are up for renewal
3. Database platform consolidation exercise
4. Storage requirements are increasing and the performance is decreasing
5. Performance of transactional systems requires major improvements

Look at the list below and think how many of these are used by your company or client:

– Exadata
– Enterprise Manager Cloud Control
– Edition-Based Redefinition
– Advanced Compression
– Oracle Database Appliance
– Automatic SQL Tuning
– Total Recall
– Real Application Testing
– One of these init.ora parameters: db_ultra_safe, result_cache_mode, optimizer_capture_sql_plan_baselines, awr_snapshot_time_offset

Although I did not put Enterprise Manager Cloud Control on top of the list, it is still a must for every enterprise using Oracle products. On the light side, I was recently asked if you can see the temperature of a given computer from OEM 🙂 Here we go:

Jon Taplin said it very well in his article on Steve Jobs and Innovation: “At the Innovation Lab we try to inculcate the notion that you can’t be afraid. You can’t be afraid to fail. You can’t be afraid to “be different”. You can’t be afraid to celebrate the weird mix of art and science that is true innovation. Steve Jobs embodied all of those qualities. I wrote a bit about him in my new book and there is a cool video in the book of his graduation speech at Stanford that you will see replayed too often in the next few days.”

As a DBA, one should try to step out of the comfort zone of everyday administrative tasks and reactive performance tuning, and try to make a difference in the enterprise by acting more like a Database Architect than a Database Operator. Prove the complexity and importance of the DBA role!

P.S. Often in IT, the way from Insight to Action is longer than the Way of St. James.

Oracle database 11gR2 upgrades: advanced tips and best practices

In Database tuning, DBA, Oracle database on February 23, 2012 at 20:19

The Strugatsky brothers have a book in which an army officer asks his soldiers if they are ready to attack the enemy. One of the men says yes, but asks: do we have incense? Why would we need incense, asks the officer? Well, in case we meet the devil, says the soldier.

Do I really need incense when upgrading to 11g? Let us see.

According to the 2011 IOUG Survey on Database Manageability, keeping databases at current patch levels is the primary DBA challenge. Of course, in order to have that current level you need to upgrade/patch the database.

For basic tips and best practices on how to upgrade to 11gR2, read carefully Best Practices for Upgrading to Oracle Database 11g Release 2. The issues I describe below come on top of Oracle’s document and are based on hands-on experience; I am far from advertising how simple the 11g upgrade is:

1. Control the size of the SYSAUX tablespace. It grows due to two main reasons: the CBO historical statistics do not get purged automatically and the SQL Plan Baselines consume gigabytes of disk space.

Have a look at these:

Bug 9910484 - SQL Plan Management Capture uses excessive space in SYSAUX [ID 9910484.8]
Bug 8553944 - SYSAUX tablespace grows [ID 8553944.8]
Bug 10279045 - Slow Statistics purging (SYSAUX grows) [ID 10279045.8]
Bug 12958182 - SYSAUX tablespace grows (mon_mods_all$ and col_usage$) [ID 12958182.8]

By default, MMON performs the automatic purge that removes all history older than the older of:

* current time – statistics history retention (31 days by default), and
* time of the most recent analyze in the system – 1

MMON performs the purge of the optimizer stats history automatically, but it has an internal limit of 5 minutes to perform this job. If the operation takes more than 5 minutes, it is aborted, the stats are not purged, and no trace or alert message is reported.

You will have to start the DBMS_STATS.PURGE_STATS procedure manually, or schedule a job to run it on a daily basis.
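A minimal scheduler job for this could look as follows (the job name and schedule are assumptions):

```sql
begin
  dbms_scheduler.create_job(
    job_name        => 'PURGE_OPTSTAT_JOB',   -- hypothetical name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin dbms_stats.purge_stats(sysdate - 31); end;',
    repeat_interval => 'FREQ=DAILY;BYHOUR=3', -- nightly, off peak
    enabled         => TRUE);
end;
/
```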

Handling SM/OPTSTAT component growth in SYSAUX tablespace gives an excellent overview on the problem with the historical CBO statistics.

The bigger issue is with bug 13632540, which is not yet fixed: SQL Plan Baselines can consume more space than the historical CBO data. If you do not have much data in the database, you may witness the unique situation of having a database with more metadata than real application data.

2. Enable Automatic SQL Tuning, Automatic Memory Management (not on Linux of course where HugePages are enabled) and use (carefully) SQL Plan Management.

While the automated features make sense, you might wonder: doesn’t the urge to use SQL Plan Management contradict point 1 above? Yes, it does, and this makes the 11g upgrade tricky. Verify that you can afford some extra space in SYSAUX if you set optimizer_capture_sql_plan_baselines = TRUE. Extra means something like 10-50G; it varies from database to database.
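Before and after enabling capture, it is worth checking what actually occupies SYSAUX:

```sql
-- SYSAUX occupants ranked by space; SM/OPTSTAT is the optimizer stats
-- history, and the SQL management base belongs to SQL Plan Management
select occupant_name, round(space_usage_kbytes/1024) as mb
from   v$sysaux_occupants
order  by space_usage_kbytes desc;
```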

As a workaround, you may create a job that runs on a regular basis: delete from sys.col_usage$ c where not exists (select /*+ unnest */ 1 from sys.obj$ o where o.obj# = c.obj#);

Automatic SQL Tuning is really an underestimated feature in the Oracle database and I am surprised to see so many databases where the feature is not enabled. What I see and read are different blogs and articles on how to enable and disable the feature but almost nothing on real life experience.

For SAP users: SAP note 105047 says that “SQL Plan Management” is allowed by SAP.

3. Use DataPump to populate a fresh database, avoid script upgrade if possible. The Oracle 11gR2 Database Upgrade Guide clearly states the 6 major benefits:

3.1 Defragments the data. You can compress the imported data to improve performance.

3.2 Restructures the database. You can create new tablespaces or modify existing tables, tablespaces, or partitions to be populated by imported data.

3.3 Facilitates side-by-side testing of the old and new versions of Oracle Database because an entirely new database is created.

3.4 Enables the copying of specified database objects or users. Importing only the objects, users, and other items you need is useful for establishing a test environment for the new software on only a subset of the production data. Data Pump Export / Import provides flexible data subsetting capabilities.

3.5 Serves as a backup archive – you can use a full database export as an archive of the current database.

3.6 Enables the upgraded database to be established on an operating system or hardware platform that is different from the one supporting the database being upgraded. Network-based Data Pump Import allows the new Oracle database to be loaded directly across the network from the old database being upgraded. Thus, no intervening dump files are required.

I cannot prove it scientifically, but in my experience fresh/new databases are faster and less buggy than manually upgraded databases. Contributing factors are probably the fragmentation, the messed-up data dictionary artificially modified to a higher version, etc.

4. Read the upgrade documents in advance, or involve someone who has already read them and has strong experience with database upgrades. Avoid exotic parameters in the init.ora file (unless you are running Siebel, EBS, etc.) and gather workload system statistics and fixed table statistics after the upgrade. Check also the “Master Note For Oracle Database Upgrades and Migrations” [ID 1152016.1].

You might think that tip 4 contradicts the title of this post: advanced tips and best practices. What is so advanced about reading the friendly manuals? What indeed 🙂 Note that Oracle lists “Read the FRIENDLY manuals!” as Best Practice #1 in the paper above.

Here is the path diagram:

Finally, some interesting statistics from the IOUG report:

Don’t make plans, make options

In Database tuning, DBA, Oracle database on February 2, 2012 at 21:50

An old Chinese proverb says: “When planning for a year, plant corn. When planning for a decade, plant trees. When planning for life, train and educate people.”

When new IT applications are created, how often does someone plan for the future of the data? Its growth and reorganization, what is being logged, how often unnecessary data is purged and deleted, what is being audited and how? I guess we all know the answer. Here is a picture from The Data Affordability GAP Widens – Year by Year

Building the Knowledge! That is the starting point. Andy Flower, the president of IOUG, wrote an excellent article in the January/February 2012 issue of Oracle Magazine entitled “Building the Knowledgebase”.

Andy Flower says: “30 percent of organizations have seen data grow at an annual rate of more than 25 percent, with 10 percent seeing greater than 50 percent growth. 27 percent of respondents currently have more than 100 TB of data. Nearly 1 out of 10 sites now has data stores in the petabyte range.”

1. Unnecessary data is seldom deleted, purged or archived. I have quite often seen databases where less than 10% of the stored data is used on a daily basis. Oracle AWR reports can provide excellent information on where the physical and logical reads come from, and with what percentage. Here is a simple example:

2. Databases often get fragmented, and they need regular reorganization. An interesting example is a global database that was 7 TB in size (now bigger, I guess) with less than 1 TB of real data. If that is not a classic example of database fragmentation, then please give me a better one. The datafiles had so much white space! You may use this query to check for datafile fragmentation:

set lines 256
set pages 999
col "File Name" for A47
column file_name format a40; 
column highwater format 9999999999; 
SELECT  /*+ RULE */ df.File_id, Substr(df.file_name,1,47) "File Name",
        Round(df.bytes/1024/1024,2) "Size (M)",
        Round(e.used_bytes/1024/1024) "Used (M)",
        Round(f.free_bytes/1024/1024) "Free (M)",
        round((b.maximum+c.blocks-1)*d.db_block_size/(1024*1024)) "HWM (M)"
FROM    dba_data_files df,
        (SELECT file_id, Sum(Decode(bytes,NULL,0,bytes)) used_bytes FROM dba_extents GROUP by file_id) e,
        (SELECT Max(bytes) free_bytes, file_id FROM dba_free_space GROUP BY file_id) f,
        (SELECT file_id, max(block_id) maximum from dba_extents group by file_id) b,
        dba_extents c,
        (SELECT value db_block_size from v$parameter where name='db_block_size') d
WHERE   e.file_id (+) = df.file_id
AND     df.file_id = f.file_id (+)
AND     df.file_id = b.file_id and c.file_id = b.file_id and c.block_id = b.maximum
ORDER BY df.tablespace_name, df.file_name;

I am sorry about the RULE hint but it really runs faster with RBO.

Two key benefits of regular database reorganization are:

– better performance as data and indexes are spread over less blocks and thus less I/O and CPU are needed to access the data (and less memory of course)
– backups take less time and occupy less storage and tape (accordingly restore & recovery will be faster)

3. Databases are copied one-to-one from one environment to another when a data refresh is needed. Just like that: Copy and Paste. Well, not really, but often with RMAN’s “duplicate target database to ..” command. Have a look at what Oracle has to offer these days: the Oracle Test Data Management Pack for Oracle and non-Oracle Databases. The key benefits of the pack are:

– Reduce application storage costs dramatically through data subsetting
– Increase developer productivity through right-sized production copies
– Eliminate labor-intensive error-prone manual process to create test systems by automating application discovery, data relationship modeling, dynamic subset rules and parameter-driven execution

One really doesn’t need all the data from Production in the Development or the Test environment!

4. Data growth is often not monitored, and seldom does one know which data can be deleted. I have just three words for this: “Oracle Cloud Control”.
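If Cloud Control is not yet in place, AWR keeps enough history for a rough growth trend (a sketch assuming a Diagnostics Pack license and an 8K block size):

```sql
-- Tablespace used size per AWR snapshot, in MB
select sn.begin_interval_time, t.name,
       round(u.tablespace_usedsize * 8192 / 1024 / 1024) as used_mb
from   dba_hist_tbspc_space_usage u, dba_hist_snapshot sn, v$tablespace t
where  u.snap_id = sn.snap_id
and    u.tablespace_id = t.ts#
order  by t.name, sn.begin_interval_time;
```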

A very detailed explanation of the problem is described by IBM in Control application data growth before it controls your business. Let me quote part of the executive summary:

“What are these effects? The most obvious are complexity and risk, along with storage and management costs. Data that has accumulated at a significant rate is highly distributed over multiple applications, databases and platforms creating complicated data relationships that are difficult to define, understand, control and house. Also, managing years of historical data in production databases can impact service levels and disaster recovery initiatives. Expanding online and batch processing windows, along with routine maintenance tasks, takes much more time and can reduce application availability significantly.”

Do you wonder why people all of a sudden started talking about Big Data?

Restoring the Oracle database

In Bugs, Database tuning, DBA, Oracle database, RMAN on January 11, 2012 at 06:08

If you are reading this, there is a high chance that someone has been breathing down your neck while you were restoring a database, or mildly harassing you over the phone with the only question that matters: when do you think the database will be open for WebLogic connections?

Indeed, when? There is no way to say precisely. You can guess based on the average time the backups have taken, but that is just an estimate. After the restore, there might be a nasty long sequence of archivelog files to apply (with the hope that none is missing), and only then can you open with resetlogs. Unless you hit a “feature” or two.
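While you wait, v$session_longops at least gives you a percentage to report back:

```sql
-- Progress of currently running RMAN restore/backup operations
select sid, serial#, opname, sofar, totalwork,
       round(sofar/totalwork*100, 1) as pct_done
from   v$session_longops
where  opname like 'RMAN%'
and    totalwork > 0
and    sofar <> totalwork;
```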

There are 5 major reasons why RMAN restore performance may be poor:

1. Missing statistics: Before a restore even begins, the control file is queried, datafile headers are read, media managers are initialised, the catalog is resynced, PL/SQL is generated and compiled, and RMAN metadata in either the control file or the catalog is queried. All this incurs I/O against the control file, the datafile headers and the recovery catalog.

Make sure that the catalog database is analyzed and reorganized on a regular basis. It also requires regular maintenance: crosscheck backup, etc.

Generate statistics on the fixed objects with the GATHER_FIXED_OBJECTS_STATS procedure. This should be done after the database has been running for a while with a generic workload, so that the information in the fixed objects reflects a reasonable state of the database load.

Gather the statistics with the following command:

exec dbms_stats.gather_fixed_objects_stats;

Believe it or not, adding this to the RMAN script can often help both backup and restore run faster:

alter session set optimizer_mode=RULE;

2. The backup is hardware multiplexed: all channels write to a single tape device. If the media manager does not support multiplexing, only one backup piece can be returned at a time. Since each backup piece is multiplexed with all the others, such a restore would require as many tape scans as there are allocated channels, each scan covering up to the whole backup size (with a tape rewind after each scan).

3. Individual files or tablespaces are being restored: a restore of a single file may need a scan of potentially the whole database backup, depending on where the header of the restored file is positioned on the tape. The worst case is when the restored file’s block header happens to be the last block written to the backup piece.

4. A different number of channels is used for the restore than for the backup: the number of channels allocated should equal the number of physical tape drives. Try a couple of things:

Set a big enough large pool: LARGE_POOL_SIZE of at least 256M
Enable the backup_tape_io_slaves by setting BACKUP_TAPE_IO_SLAVES = true

However, some media managers allow hardware multiplexing to a single tape and can parallelize the restore (in the media manager layer) such that backup pieces are returned from a single tape to multiple channels in parallel. In such a situation, you can use the unpublished SET PARALLELMEDIARESTORE OFF command to make RMAN ignore this check.

      RUN {
        SET PARALLELMEDIARESTORE OFF;
        ...
      }

5. RMAN’s relationship with bugs is not a new one. The “mend-it-or-end-it” principle does not apply here. Five years ago, Jaffar posted Known RMAN Performance Problems on his blog. It is worth checking MOS note 247611.1 again.

Yet, here are some Oracle or MML bugs not fixed by January 1st, 2012:

Bug 12363733: restoring backups are very slow (reproduced both with asm and filesystem)
Bug 7573468: RMAN duplicate until time very slow
Bug 11835641: 11gr2 – RMAN restore took more time – duplicate or restore is slow
Bug 10210264: RMAN restore is slower following upgrade to 11.2
Bug 11827990: select name from v$datafile wait on “rdbms ipc reply” on rman restore
Bug 12543119: EM not using correct ASM diskgroup to restore datafiles when creating clone
Bug 11892765: please allow RMAN restore to pre-allocate file extents
Bug 9556740: RMAN job failed when using sudo
Bug 9724316: dbclone not choosing the correct controlfile for the clone
Bug 9845029: OEM clone 11gR2 ASM database fails during recovery
Bug 5381095: RMAN: poor restore performance in nocatalog mode
Bug 10093810: RMAN backup need to maintain an index for file header block for each file
Bug 9918138: RMAN restoring a compressed backup running very slow
Bug 6964596: RMAN single channel restore performance is faster than 6 channels
Bug 5462916: RMAN: recover database skip forever tbs with ORA-01157
Bug 2608118: restore of autobackup controlfile take a long time to complete
Bug 1533686: RMAN hangs when restoring via adsm with an ops target db
Bug 2911857: bad performance of RMAN duplication with large number of files (> 1500)
Bug 6770620: RMAN recover of copy database command consumes CPU

Bug 13109129: after upgrade to 11.2 compressed RMAN backups are slow
Bug 13066081: RMAN unregister database is slow/hangs
Bug 12896388: RMAN backup slower with – check logical – option on compressed tablespace
Bug 13500084: RMAN switch command is taking too long
Bug 13064833: RMAN process hangs after finished backup
Bug 13520050: RMAN session does not terminate- loops -spinning on cpu when destination is full

Some of the bugs will probably never be fixed: as you can see, they may get closed (or suspended) because of vendor OS problems, because the problem cannot be replicated, as duplicates, because information is not available, etc.

Or how about the bugs with status 92: closed, not a bug. For example, bug 11835641 is actually not a bug? Right.

Additional information: 11.2 Oracle Database Backup and Recovery User’s Guide. Check Part VI, Tuning and Troubleshooting.

The Newest init.ora Parameter in the Oracle Database

In Database tuning, DBA, Init.ora, Oracle database on November 6, 2011 at 14:18

Altogether there are 347 documented init.ora parameters in this Oracle Database release; counting the undocumented ones as well, we witness a high number: 2752. The release does not sparkle with any fancy features, and no one was expecting it to, but there is one new init.ora parameter added by Oracle: awr_snapshot_time_offset.

Why is it needed? The answer is very clear in case you have a server with several databases running on it. Consolidation has been fashionable for the past decade, and many scattered corporate databases have perched on the same computer.

You might have observed those spikes at the top of every hour when all the AWR snapshots are taken. How do you avoid this? Say you have 12 databases on the server and want snapshots taken on an hourly basis, but with a 5-minute offset between the databases.

Quite simply, with awr_snapshot_time_offset: set it to n*300 seconds for the n-th database (n = 1..12).
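For one of the databases on such a server, for instance:

```sql
-- Shift this database's AWR snapshots to 10 minutes (600 seconds) past the hour
alter system set awr_snapshot_time_offset = 600 scope = both;
```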

The documentation of awr_snapshot_time_offset gives additional details for the parameter.

And finally, here is how one can search for the values of certain parameters, for example Bloom filter related:

select a.ksppinm name, b.ksppstvl value 
from x$ksppi a, x$ksppcv b 
where a.indx = b.indx 
and a.ksppinm like '%bloom%';

New DBA Features in Oracle Enterprise Manager Cloud Control 12c

In Cloud, Database tuning, DBA, Grid Control, Oracle database on October 22, 2011 at 10:17

During Oracle OpenWorld 2011, Oracle introduced Enterprise Manager Cloud Control 12c Release 1. I have already done the first installation on OEL Linux 6, and I must say I am impressed with the product.

Oracle Enterprise Manager Cloud Control 12c came with new features covered in 12 areas:

– Framework Enhancements
– Database Management Features
– Fusion Middleware Management Features
– Oracle Fusion Applications Management
– Application Performance Management Features
– Cloud Management Features
– Incident Management Features
– My Oracle Support Integration
– Security Features
– Lifecycle Management and Provisioning Features
– Extensibility Support
– Coherence Management Features

In this post, I will present the 20 new DBA features of OEM CC 12c as documented by Oracle (the screenshots in between are from my own installation):

1. Database Creation Via Enterprise Manager Cloud Control

A wizard allows you to create an Oracle Database from within Enterprise Manager Cloud Control. You can create different configurations, including Single Instance and Real Application Clusters (RAC) databases, as well as file system and Automatic Storage Management (ASM) storage options.

2. Database Upgrade Via Enterprise Manager Cloud Control

You can now upgrade Single Instance and Real Application Clusters (RAC) Oracle databases through Cloud Control. This feature makes it possible to manage upgrades from a single console. You no longer have to access each individual database machine to perform upgrades.

3. Database Cloning Enhancements

Cloning procedures have been enhanced to capture configuration properties as well as the software payload. This is particularly useful when migrating databases from development to test to production or the reverse. A new EMCLI verb (clone_database) allows for database cloning using the same job type as the existing Clone Database feature of Cloud Control.

4. Oracle Exadata Server Management Enhancements

Oracle Exadata management capabilities now provide support for full target lifecycle management:

– Automatically discover Exadata targets
– Automatically create an Enterprise Manager System for end-to-end monitoring
– Provide extensive administration capabilities for databases, Exadata cells and InfiniBand switches
– Simplify performance diagnostics with the help of in-depth performance charts covering all Exadata components

5. Manage Automatic Storage Management (ASM) Clusters as a Target

You can now manage clustered ASM resources as a single target, instead of each individual ASM instance having to be managed separately. Incident management and metric monitoring can be managed for the entire cluster.

6. Database Configuration Compliance Standards Support

Oracle database configuration data can now be managed within the new configuration and compliance standards frameworks.

7. Emergency Performance

This feature allows a DBA to diagnose and troubleshoot a hung or extremely slow database using the memory access mode. This mode is designed to bypass the SQL information retrieval layer and read performance statistics directly from the SGA of the target.

8. Database Backup and Restore Enhancements

You can now use Enterprise Manager Cloud Control to centrally maintain the settings for Oracle database and file system backups. This enhancement enables you to apply uniform settings to multiple databases and file systems when backing up multiple objects—all in one operation. Users can perform a backup on multiple databases concurrently or sequentially in one deployment procedure. An Oracle Home OSB tape backup can be restored either to the original or to a different location, and the restored Oracle Home can be reconfigured to function in the newly restored location.

9. Database System Discovery Enhancements

As the database system is now built upon the new target and association model, you can use it to monitor and manage a database’s storage, connectivity, and high availability. This also enables you to easily monitor and manage the applications that are dependent on the database. The database discovery functionality is enhanced to work with the new discovery framework and to provide a reliable workflow to create a database system.

10. Change Plans Added to Change Management Pack

As part of the Oracle Change Management Pack, the new Change Plans function allows application developers and database administrators to encapsulate schema changes needed to be made to a database into a “change plan,” which can be used to document, capture, and apply schema changes. Change Plans are also integrated with developer and DBA tasks into SQL*Developer and Oracle Enterprise Manager task automation. This integration reduces the manual processes between the various stakeholders involved in the process of promoting planned changes across enterprise databases while ensuring the integrity of the process.

11. Compare Period Advisor

This feature compares the performance of a database over two different time ranges. It analyzes changes in performance, workload, configuration, and hardware to highlight changes between the two time periods. The Compare Period Advisor gives the DBA the ability to compare two arbitrary periods of time.

12. Compare Functionality

The Compare functionality has been enhanced with new capabilities such as template support, system level comparison, and change notification. Users can now selectively include or ignore types of differences. Output of a comparison can easily be saved and exported, both in printable (for example, plain text) and data-centric (for example, CSV) formats. Users can select comparison start and end dates and view a history of changes for composite targets.

13. Active Reports

A new Active Reports function allows users to save performance data into an HTML file. Once saved, the report can be used for offline analysis or sent to other users, including Oracle Support. Active Reports enhances the visual representation of performance data and facilitates the convenient exchange of complex data.

14. Real Application Testing and Data Masking Integration

Real Application Testing and Data Masking integration provides users with the ability to perform secure testing in situations where data in production needs to be shared by nonproduction users due to organization or business requirements. Typically testing is done in a nonproduction environment or by a different group or organization. This integration addresses a common requirement that the data used for testing be shared in a manner that adheres to data privacy and compliance regulations.

15. Application Templates for Data Masking and Data Subsetting

This feature provides predefined data masking and data subsetting templates for applications. It allows users to automatically create test systems based on best practices recommendations.

16. Data Subsetting

Data subsetting provides the ability to create a smaller sized copy of the original production data that can be given to developers for testing. While it is a data subset, the referential relationships are preserved so that the data set is complete. This allows enterprises to lower storage costs while making production data available to developers for testing, without having to incur the storage footprint of the entire production database.

17. Application Data Model Support for Data Masking

The application data model (ADM) now stores the sensitive data elements used to generate mask definitions dynamically. Instead of having to manually discover sensitive data, the application data model identifies and stores the sensitive data elements.

18. Reversible Data Masking

Using encryption and decryption algorithms, reversible masking allows encryption of a user’s data deterministically into a format chosen by the user as a regular expression. Unmasking reverses the process to revert back to the original data. This feature is useful in environments where sensitive data needs to be masked and sent to a third party for processing. Coupling integrated masking with the application data model (ADM), an application’s data model is now available for certain packaged applications and can serve as a knowledge base containing sensitive column and data relationships.

19. Performance Diagnostics Enhancements

With the interactive user interface in the Active Session History (ASH) Viewer, users now can visualize the many performance dimensions that were not available to them in earlier releases. The Enhanced Enterprise Manager Performance and Top Activity pages allow users to visualize the multidimensional data in ASH. The ASH viewer enhances the performance troubleshooting capabilities of a DBA by providing the facility to detect skews in workload. Emergency ADDM adds performance diagnostics for databases suffering from severe performance problems.

20. Streams and XStreams Support

Streams and XStreams configurations can now be managed and monitored using Cloud Control. In addition to improvements in configuration and performance monitoring screens, logical change record (LCR) tracking is available for high-level diagnosis of replication issues. Cloud Control also simplifies the management and monitoring of replicated environments.

Oracle Database Appliance and Automatic Bug Fixing in the Cloud

In Bugs, Database tuning, DB2 database, DBA, Grid Control, Oracle database on September 21, 2011 at 19:13

Mathematician Alfred North Whitehead said: “Civilization advances by extending the number of important operations which we can perform without thinking about them.”

Same holds for the Database!

Oracle has just announced the new “Oracle Database Appliance” with self-managing automatic features preconfigured: a simple, highly reliable, affordable database system for small and midsize enterprises and departments.

I have just gathered a list of the automated processes offered by Oracle within the Oracle database (in no particular order):

– Automatic Statistics Collection
– Automatic Tuning Optimizer (ATO)
– Automatic Repair in a Data Guard Configuration
– Automatic Undo Management
– Automatic Undo Retention Tuning
– Automatic Shared Memory Management
– Automatic Space Segment Management (ASSM)
– Automatic PGA Memory Management
– Automatic Memory Management (AMM)
– Automatic Degree of Parallelism
– Automatic Storage Management (ASM)
– Automatic RAC Database Startup/Restart
– Automatic Maintenance Tasks
– Automatic Tablespace Point In Time Recovery
– Automatic Workload Repository (AWR)
– Automatic Service Registration
– Automatic SQL Tuning (my favourite!)
– Automatic Database Diagnostic Monitor
– Automatic Segment Advisor
– Automatic Diagnostic Repository (ADR)
– Automatic Checkpoint Tuning
– Automatic Maintenance Jobs
– Automatic Global Index Maintenance During DDL
– Automatic Client Failover
– Automatic OCR Backup Rotation
– Automatic Plan Capture with SPM
– Automatic Refresh of Materialized Views
– Automatic VIP failback
– Automatic Block Recovery
– Automatic Disaster Recovery with RMAN
– Automatic Channel Failover
– Automatic Synchronization of Oracle Text Index
– Automatic Registration of the Database with the Default Listener
– Automatic Offlining of the Datafile in Noarchivelog
– Automatic Datatype Conversion
– Automatic Tape Drive Cleaning and a Cleaning Tape in a Tape Library
– Automatic Controlfile Backup
– Automatic Eject Of Tape After Backup Using Oracle Secure Backup
– Automatic BackupSet Failover On Missing or Corrupt BackupPieces
– Automatic BMR (Block Media Recovery)
– Automatic System Tasks
– Automatic Database Performance Monitoring
– Automatic Archiving
– Automatic Propagation in Replication
– Automatic Job Scheduling
– Automatic Resume of DataPump

Quite a list I would say, right?

An excellent paper from Oracle called Oracle Database 11g vs. IBM DB2 UDB V9.7 points out the most important trend of database manageability: the self automation of the database product. Let me quote (part of) the conclusion of the paper:

“The Automatic Database Diagnostic Monitor (ADDM), SQL Advisors and Real Application Testing are just some of the unique Oracle Database 11g features that are yet unmatched by DB2 version 9.7. Oracle Database 11g is the only database product available today that automatically collects and manages historical performance data for self-management purposes, periodically and automatically analyses this data and makes tuning recommendations. Oracle Database 11g is also the only RDBMS with rich software quality management features for real workload testing. These distinct technologies are at the core of the next generation of Oracle databases that represent simplicity, ease of management and software quality management while still providing the most robust, reliable and secure of relational databases.”

How true indeed! But what we would like to see in the future is even more: how about automatic bug fixing in the Cloud? Just like this:

1. First, we set the credentials with Oracle Support: How to set the ‘My Oracle Support’ Preferred Credentials in the Grid Console? [ID 1168603.1]
2. The Oracle database creates an incident and it is transferred to Oracle Support via the Enterprise Manager Support Workbench.
3. Then it is internally verified whether the problem is bug related.
4. If it is a bug then Oracle’s own BugDB checks for a patch or workaround which fixes the bug.
5. If there is a patch available, then the patch is automatically uploaded to the client’s cloud environment and then applied (online, of course)!
6. If there is a workaround involving an init.ora parameter, then the “alter system” command is applied automatically. What Oracle Support can remotely run in the client’s database would be controlled by the client with a new init.ora parameter called mos_cloud_permission_level.

That is what I call automation!

Note that something similar is already offered by Oracle via the SQL Repair Advisor. It is a semi-automatic way of patching SQL statements throwing ORA-600 or ORA-7445. But here, by “patch” Oracle means an SQL transformation rather than a standard patch downloadable from MOS.
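For reference, here is a minimal sketch of driving the SQL Repair Advisor through the DBMS_SQLDIAG package; the SQL text and task name below are placeholders:

```sql
DECLARE
  l_task VARCHAR2(64);
BEGIN
  -- create a diagnosis task for a statement failing at execution time
  l_task := DBMS_SQLDIAG.CREATE_DIAGNOSIS_TASK(
              sql_text     => 'SELECT /* placeholder */ * FROM my_table',
              task_name    => 'repair_ora600_task',
              problem_type => DBMS_SQLDIAG.PROBLEM_TYPE_EXECUTION_ERROR);

  -- run the task and print its findings, including any recommended SQL patch
  DBMS_SQLDIAG.EXECUTE_DIAGNOSIS_TASK(l_task);
  DBMS_OUTPUT.PUT_LINE(DBMS_SQLDIAG.REPORT_DIAGNOSIS_TASK(l_task));

  -- if a patch is recommended, DBMS_SQLDIAG.ACCEPT_SQL_PATCH applies it
END;
/
```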

Oracle OpenWorld 2011

In Database tuning, DBA, Grid Control, OOW, Oracle database, Oracle utilities, Personal on September 14, 2011 at 05:45

I am Speaking at Oracle OpenWorld 2011 and I hope you will join me there!

Welcome to my session: Tuning Toolkit for Advanced DBAs: Learn from the Past, and Prepare for the Future. The presentation will be on Tuesday at 03:30 PM, Moscone South – 104.

There will be over 80 Oracle ACEs and ACE Directors who will speak at OOW 2011!

The content catalog shows 75 sessions on database performance and scalability out of the 306 database conference sessions. Just a reminder: 4-5 years ago there were only about 100 database conference sessions in total.

The database stream has now been divided into the following 11 substreams:

– Cloud Consolidation: 39 sessions
– Data Warehousing: 32 sessions
– Database Manageability: 55 sessions
– Database Platforms: 31 sessions
– Database Security: 23 sessions
– High Availability: 47 sessions
– MySQL: 29 sessions
– Oracle Exadata: 57 sessions
– Performance and Scalability: 75 sessions
– Specialty Data and Search: 15 sessions
– Storage Management: 24 sessions

One of my favorite topics, Oracle Enterprise Manager (part of the Cross Stream track), will be covered in 161 conference sessions. I have said it before, I have had it in several final conference slides, I will say it now: Oracle Enterprise Manager Grid Control is probably the best feature/tool/utility in Oracle after 9i.

And for the DBAs, as this is a DBA blog, here is a complete list of all sessions having the word “DBA” in the title:

14642: AWR and ASH in 3-D: Performance Analysis Tools No DBA Has Seen Before
20880: Becoming a Rock Star MySQL DBA
9218: DBA Fusion: An Introduction to Oracle Enterprise Manager 11g Architecture and Installation
13781: Day in the Life of a DBA: Cloud-Ready Management Solutions for Any IT Shop
14641: Extreme Database Administration: New Features for Expert DBAs
13081: Mastering Oracle Data Pump: Technical Deep Dive into Performance/Internals for Hands-on DBAs
15143: MySQL for Oracle DBAs, or How to Speak MySQL for Beginners
13446: Oracle Database Vault: DBA Best Practices
8046: Oracle Exadata Management for Oracle DBAs
14365: Oracle WebLogic Server Management for DBAs: Cross-Tier Visibility from JVM to Database
6681: Trends in Database Administration and the Changing Role of the DBA
14644: Tuning Toolkit for Advanced DBAs: Learn from the Past, and Prepare for the Future
8044: What If Kramer Were Your DBA and Seinfeld Tuned Your Database?
28900: Launching the IOUG Virtualization SIG: 360 Degrees of Virtualization for Oracle DBAs

If you wonder why so many people attend OOW, here are the high-level benefits of attending Oracle OpenWorld according to Oracle:

– Participate in sessions and hands-on workshops led by the world’s foremost IT experts
– Find out firsthand how to streamline upgrades and deployments
– Attend sessions in the all-new Server and Storage Systems stream
– Dig deep into application deployment, scaling, upgrading, and best practices
– Meet with your industry and technology peers
– Share notes in special interest groups, and product and industry sessions
– See hundreds of partners and customers as they present product demos and solutions in three Exhibition Halls

And here is a picture from LJE’s keynote on Cloud Computing, taken last year from my seat.

P.S. I cannot find a single session on Oracle bugs, and we all know they play a major role in the database world. For sure, many DBAs would be very interested in a presentation called “11gR2 top 10 bugs”.