Dontcheff

Archive for the ‘Oracle utilities’ Category

ORAchk replaces EXAchk and RACcheck

In Database tuning, DBA, Oracle utilities on June 21, 2014 at 12:48

The new tool from Oracle, ORAchk, includes the EXAchk and RACcheck functionality and replaces those older tools. It is no longer just a database tool: it also runs health checks on E-Business Suite Financials, the Enterprise Manager Cloud Control repository, GoldenGate and Sun systems.


Going down memory lane, have a look at how Statspack compares with BSTAT/ESTAT. Well, those days are long over. DBAs used to have their own scripts and run their own health checks, but some special scripts can still shed more light on system health.

One such example is Tanel Poder’s Exadata Snapper.

For details on the ORAchk tool, how to download it, and what’s new in ORAchk, read MOS Note 1268927.2. If you do not have access to MOS, here are two blog posts that describe the tool quite well:

1. ORAchk Health Checks for the Oracle Stack (including Solaris) by Gerry Haskins

2. ORAchk Health Checks For The Oracle Stack by Levi Pereira

Here is how System Health Score is calculated:

[Screenshot: the System Health Score calculation from the ORAchk report]

I would recommend glancing at the ORAchk Known Issues (MOS Note 1509746.1) before you start using the tool.

Note that ORAchk’s requirements are as follows:

Supported Platforms:
————–
Linux x86-64* (Enterprise Linux, RedHat and SuSE 9, SuSE 10 and SuSE 11)
Oracle Solaris SPARC (Solaris 10 and 11)
Oracle Solaris x86-64 (Solaris 10 and 11)
AIX **
HPUX**

* 32-bit platforms not supported, no planned support for Linux Itanium
**Requires BASH Shell 3.2 or higher to be installed

Supported Oracle Releases:
————–
10gR2
11gR1
11gR2
12cR1

Still, no Windows…

And do not aim for a 100% score; you will not get it easily. Points get deducted if you have more than one instance on the server:

WARNING OS: Multiple RDBMS instances discovered, observe database consolidation best practices

According to the tool, there is a “risk for shared resource contention leading to instability, performance degradation or scalability problems”. Then it says: “if the database consolidation best practices have already been reviewed and observed then this message can be ignored”.

Not to mention that you might run the report twice in a row and get a different score.

But there is more: ORAchk Collection Manager is a companion application to ORAchk, RACcheck and EXAchk. If you have lots of systems, auditing them all with ORAchk can be a tedious piece of work. ORAchk has long had the ability to upload the results of its audit checks into a database automatically at run time.

However, it was up to us to create a custom front-end to that data for reporting purposes and trend analysis. Now, with ORAchk Collection Manager, Oracle provides an Application Express application that can be used as a dashboard in which you can track your ORAchk, RACcheck and EXAchk collection data in one easy-to-use interface.

Download the ORAchk Collection Manager and go through the ORAchk Collection Manager Users Guide for more details.


Online reorganization enhancements in Oracle 12c

In Database tuning, DBA, Oracle database, Oracle utilities on March 27, 2014 at 18:38

The Online Reorganization and Redefinition feature in Oracle Database offers DBAs significant flexibility not just to modify the physical attributes of a table but also to defragment and compress the table while allowing online access to the database.


A 5TB database can have 1TB of white space; in fact, I have seen databases where it was 4TB! Reclaiming the wasted space is important from both a storage and a performance perspective.
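
To get a first rough estimate of how much white space a database carries, a simple dictionary query like the following sketch can be used. Note that it only counts free space outside segments; fragmentation below the segment high-water marks needs, for example, the Segment Advisor:

 SELECT d.tablespace_name,
        ROUND(SUM(d.bytes)/1024/1024) AS allocated_mb,
        ROUND(NVL(f.free_mb, 0))      AS free_mb
 FROM dba_data_files d
      LEFT JOIN (SELECT tablespace_name, SUM(bytes)/1024/1024 AS free_mb
                 FROM dba_free_space
                 GROUP BY tablespace_name) f
        ON f.tablespace_name = d.tablespace_name
 GROUP BY d.tablespace_name, f.free_mb
 ORDER BY free_mb DESC;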

Before discussing the new 12c features related to online database reorganization, I will show you how easy it is to reorganize a table in 12c while all operations on the table are still allowed. I have a tablespace called EM_DATA with just one datafile, em_data_01.dbf, and a table called EM_OBJECTS with more than 11 million rows. The file is almost 4GB in size. There are no other segments in the EM_DATA tablespace.

[Screenshot: the EM_OBJECTS segment before and after the online reorganization]

As you can see, the segment EM_OBJECTS was 1774MB in size before the reorganization, and after compressing it online we managed to shrink it to 644MB. The data in the table was not artificially ordered in any way whatsoever. Moreover, after the reorganization I was able to resize the datafile down to 640MB and thus release more than 3GB of storage space at the filesystem level!

Clearly, in 12c the complete procedure is very simple: there is no need to explicitly create the interim table, and if an error occurs during the execution, the interim table is dropped and the REDEF_TABLE procedure simply has to be re-executed.
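
For reference, the whole online compression above boils down to a single call, roughly like this minimal sketch (the schema name and the chosen compression type are my assumptions, not taken from the screenshot):

 BEGIN
   DBMS_REDEFINITION.REDEF_TABLE(
     uname                  => 'JULIAN',      -- hypothetical schema owning EM_OBJECTS
     tname                  => 'EM_OBJECTS',
     table_compression_type => 'ROW STORE COMPRESS ADVANCED');
 END;
 /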

Now, here are all the new features related to online redefinition in Oracle 12c:

1. The REDEF_TABLE procedure enables you to perform online redefinition of a table’s storage properties in a single step, when you want to change any of the following:

– Tablespace changes, including a tablespace change for a table, partition, index, or LOB columns
– Compression type changes (as shown in the example above), including a compression type change for a table, partition, index key, or LOB columns
– For LOB columns, a change to SECUREFILE or BASICFILE storage

Check out these examples.

2. In 11g, it was only possible to add or drop columns when redefining tables online. In 12c, we can now also set a column unused online. Check Doc ID 1120704.1 for details, and see the sketch after the next point.

3. New in Oracle Database 12c: DROP INDEX ONLINE, ALTER INDEX VISIBLE/INVISIBLE, ALTER INDEX UNUSABLE ONLINE, and DROP CONSTRAINT ONLINE. I would suggest the master note “An overview of online redefinition of tables” (Doc ID 1357825.1) for detailed examples and known bugs.
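
A few hedged one-liners illustrating the online operations from points 2 and 3 (the table, column, index and constraint names are made up):

 ALTER TABLE em_objects SET UNUSED (notes) ONLINE;
 DROP INDEX em_objects_ix ONLINE;
 ALTER INDEX em_objects_ix2 INVISIBLE;
 ALTER INDEX em_objects_ix3 UNUSABLE ONLINE;
 ALTER TABLE em_objects DROP CONSTRAINT em_objects_fk ONLINE;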

4. Since 10gR2, it has been possible to reorganize a single partition online. With 12c, the redefinition of multiple partitions in a single redefinition session gives us the possibility to reduce the total time needed to redefine several partitions. A sketch follows below.
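
A minimal sketch of such a multi-partition session, assuming a range-partitioned SALES table and one pre-created interim table per partition (all names are hypothetical; in 12c both part_name and int_table accept comma-delimited lists, and partition redefinition uses rowid-based redefinition):

 BEGIN
   DBMS_REDEFINITION.START_REDEF_TABLE(
     uname        => 'JULIAN',
     orig_table   => 'SALES',
     int_table    => 'SALES_INT1, SALES_INT2',   -- one interim table per partition
     part_name    => 'P2013, P2014',             -- 12c: several partitions in one session
     options_flag => DBMS_REDEFINITION.CONS_USE_ROWID);

   DBMS_REDEFINITION.FINISH_REDEF_TABLE(
     uname      => 'JULIAN',
     orig_table => 'SALES',
     int_table  => 'SALES_INT1, SALES_INT2',
     part_name  => 'P2013, P2014');
 END;
 /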

5. Redefinition of tables that have Virtual Private Database (VPD) policies defined on them helps us eliminate downtime when redefining such tables. On top of that, a new function called EXPAND_SQL_TEXT has been added to the DBMS_UTILITY package. It makes it pretty easy to see what SQL is actually executed.
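
A minimal sketch of EXPAND_SQL_TEXT, which expands view references in a statement so you can see the SQL that actually runs (the view name is hypothetical):

 SET SERVEROUTPUT ON
 DECLARE
   l_expanded CLOB;
 BEGIN
   DBMS_UTILITY.EXPAND_SQL_TEXT(
     input_sql_text  => 'SELECT * FROM hr.emp_v',   -- hypothetical view
     output_sql_text => l_expanded);
   DBMS_OUTPUT.PUT_LINE(l_expanded);
 END;
 /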

6. The performance of SYNC_INTERIM_TABLE with materialized view log processing has been significantly improved in 12c. I really hope not to see ORA-32416 any more.

7. There is also improved resilience of FINISH_REDEF_TABLE thanks to better lock management: you can avoid user cancellations or indefinite waits by using the new dml_lock_timeout parameter when running the DBMS_REDEFINITION.FINISH_REDEF_TABLE procedure. Amit Bansal gives more details here, and a sketch follows below.
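
A minimal sketch of the new parameter (the table names are hypothetical; the timeout is in seconds):

 BEGIN
   DBMS_REDEFINITION.FINISH_REDEF_TABLE(
     uname            => 'JULIAN',
     orig_table       => 'EM_OBJECTS',
     int_table        => 'EM_OBJECTS_INT',
     dml_lock_timeout => 120);   -- wait up to 2 minutes for DML locks instead of indefinitely
 END;
 /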

Finally, here is all about DBMS_REDEFINITION in 12c from Dan Morgan’s library.


Online database reorganization is of paramount importance, especially nowadays, when downtime is hard to get and data grows almost exponentially. Removing unnecessary data is something few people in companies care about, although often more than 50% of database data is nothing but junk data. Buying and adding more storage seems to be the easier option, although reorganized and defragmented databases have better performance and require less memory and CPU.

Additional resources:
Understanding, detecting, and eliminating harmful database fragmentation by Craig A. Shallahamer
Reorganizing Tables in Oracle – is it worth the effort? by Uwe Hesse
Oracle table reorganization: Online or Offline by FJ Franken
Oracle Tablespace Reorg to reclaim space by AppsDBA
Using DBMS_REDEFINITION package to reorganize tables online by Marco Vigelini
Reclaiming Unused Space in Datafiles by Tim Hall

On 4 undocumented DataPump parameters

In DBA, Oracle utilities on December 20, 2011 at 18:14

DataPump is probably the best free Oracle utility/tool ever developed. With the exception, of course, of SQL*Plus, which is somehow naturally excluded.

It is like SELECT being the most common command written/typed by DBAs and developers. Guess which is the next one after SELECT? You might not want to believe it, but it seems to be EXIT 🙂

Now, 11g came with several new parameters: COMPRESSION, ENCRYPTION, TRANSPORTABLE, NETWORK_LINK, PARTITION_OPTIONS, DATA_OPTIONS, REUSE_DUMPFILES, REMAP_TABLE, etc., which are probably well known, but here are a few undocumented ones you can keep as an ace up your sleeve.

1. METRICS

You can use the parameter METRICS=Y to include in the log file additional information about the number of objects that were processed and the time it took to process them.

Note that the METRICS parameter does not create any additional trace files. Instead, the extra details are logged in the logfile of the Export Data Pump or Import Data Pump job.

Here is an example:

[oracle@zurich Desktop]$ expdp \'/ as sysdba\' DUMPFILE=data_pump_dir:sysman_%U.dat SCHEMAS=sysman_apm JOB_NAME=sysman_pump METRICS=Y

Export: Release 11.2.0.3.0 - Production on Tue Dec 20 10:46:08 2011

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYS"."SYSMAN_PUMP":  "/******** AS SYSDBA" DUMPFILE=data_pump_dir:sysman_%U.dat SCHEMAS=sysman_apm JOB_NAME=sysman_pump METRICS=Y 
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 320 KB
Processing object type SCHEMA_EXPORT/USER
     Completed 1 USER objects in 0 seconds
...
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
     Completed 1 DEFAULT_ROLE objects in 2 seconds
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
     Completed 1 TABLESPACE_QUOTA objects in 8 seconds
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
     Completed 1 PROCACT_SCHEMA objects in 27 seconds
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
...
. . exported "SYSMAN_APM"."PATCH_RUN"                        0 KB       0 rows
Master table "SYS"."SYSMAN_PUMP" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYSMAN_PUMP is:
  /u01/app/oracle/admin/CLOUD/dpdump/sysman_01.dat
Job "SYS"."SYSMAN_PUMP" successfully completed at 10:49:19

2. ACCESS_METHOD

In some situations, the undocumented parameter ACCESS_METHOD can be used to enforce a specific method to unload or load the data.

If the parameter is not specified, then Data Pump will automatically choose the best method to load or unload the data.

The parameter can only be specified when the Data Pump job is initially started; i.e., it cannot be specified when the job is restarted.

If the parameter is specified, the method of loading or unloading the data is enforced on all tables that need to be loaded or unloaded with the job.

Enforcing a specific method may result in slower overall performance of the Data Pump job.

Here is an example:

expdp \'/ as sysdba\'  ... ACCESS_METHOD=DIRECT_PATH  
expdp \'/ as sysdba\'  ... ACCESS_METHOD=EXTERNAL_TABLE 

impdp \'/ as sysdba\'  ... ACCESS_METHOD=DIRECT_PATH  
impdp \'/ as sysdba\'  ... ACCESS_METHOD=EXTERNAL_TABLE 

3. TRACE

Tracing can be enabled by specifying a 7-digit hexadecimal mask in the TRACE parameter of expdp or impdp. The first three digits enable tracing for a specific DataPump component, while the last four digits are usually 0300. Any leading zeros can be omitted, and the value specified for the TRACE parameter is not case sensitive. TRACE does not add anything to the output of DataPump; instead, it creates additional trace files.

Here is a summary of the Data Pump trace levels:

  10300 SHDW: To trace the Shadow process
  20300 KUPV: To trace Fixed table
  40300 KUPP: To trace Process services
  80300 KUPM: To trace Master Control Process
 100300 KUPF: To trace File Manager
 200300 KUPC: To trace Queue services
 400300 KUPW: To trace Worker process(es)
 800300 KUPD: To trace Data Package
1000300 META: To trace Metadata Package
1FF0300 'all': To trace all components, full tracing
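
Note that the component masks can be combined by summing their leading digits. For example (an assumption based on the usual Data Pump tracing conventions, so verify before relying on it), tracing the Master Control Process and the Worker processes together would be 80300 + 400300 = 480300:

 expdp \'/ as sysdba\' ... TRACE=480300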

As you can see, in order to trace all Data Pump components, level 1FF0300 can be specified. Here is an example:

impdp \'/ as sysdba\' NETWORK_LINK=edw_p SCHEMAS=CLOUD PARALLEL=8 JOB_NAME=net_pump EXCLUDE=statistics TRACE=1FF0300 
...
SHDW: *** GET_STATUS_VERSION call ***
SHDW:     Version = 9999
KUPP:10:58:22.050: Input trace/debug flags: 01FF0300 = 33489664
KUPP:10:58:22.050: Current trace/debug flags: 01FF0300 = 33489664
SHDW:10:58:22.050: Current user = SYS
SHDW:10:58:22.050: Current schema = SYS
SHDW:10:58:22.050: Current language = AMERICAN_AMERICA.AL32UTF8
SHDW:10:58:22.052: Current session address = 0000000077F7AFA0
SHDW:10:58:22.052: *** OPEN call ***
SHDW:10:58:22.052:     operation       = IMPORT
SHDW:10:58:22.052:     job_mode        = schema
SHDW:10:58:22.052:     remote_link     = edw_p
SHDW:10:58:22.052:     version         = 
SHDW:10:58:22.052:     compression     = 2
KUPV:10:58:22.052: Open request for job: SYS.NET_PUMP
KUPV:10:58:22.058: Master Table create statement: CREATE TABLE "SYS"."SYSMAN_PUMP" (process_order NUMBER, duplicate NUMBER, dump_fileid NUMBER, dump_position NUMBER, dump_length NUMBER, dump_orig_length NUMBER 
...

The 3 types of trace files can be found in $ADR_HOME/trace:
– Data Pump Master Control Process trace files have format: SID_dm_process_id.trc
– Data Pump Worker Process trace files have format: SID_dw_process_id.trc
– Data Pump Shadow Process trace files have format: SID_ora_process_id.trc

4. KEEP_MASTER

If the undocumented DataPump parameter KEEP_MASTER=Y is used, then the Master Table is not deleted after the job completes. Dropping the master table, whether automatically at the end of the job or manually afterwards, does not cause any data dictionary corruption.

You might use this parameter if you create transportable tablespaces where the source and the destination are both ASM based.

Here is an example:

expdp \'/ as sysdba\' directory=DUMPDIR DUMPFILE=skat_full.dmp LOGFILE=skat_full.log FULL=Y KEEP_MASTER=Y

One last note: more than 50 bugs related to DataPump have been fixed in Oracle Database 11g Release 2 (11.2.0.3). Check document 1348303.1, “List of Bug Fixes by Problem Type”, available on the My Oracle Support Web site.

Oracle OpenWorld 2011

In Database tuning, DBA, Grid Control, OOW, Oracle database, Oracle utilities, Personal on September 14, 2011 at 05:45

I am Speaking at Oracle OpenWorld 2011 and I hope you will join me there!

Welcome to my session: Tuning Toolkit for Advanced DBAs: Learn from the Past, and Prepare for the Future. The presentation will be on Tuesday at 03:30 PM, Moscone South – 104.

There will be over 80 Oracle ACEs and ACE Directors who will speak at OOW 2011!

The content catalog shows 75 sessions on database performance and scalability out of the 306 database conference sessions. Just a reminder: 4-5 years ago there were only about 100 database conference sessions.

The database stream has now been divided into the following 11 substreams:

– Cloud Consolidation: 39 sessions
– Data Warehousing: 32 sessions
– Database Manageability: 55 sessions
– Database Platforms: 31 sessions
– Database Security: 23 sessions
– High Availability: 47 sessions
– MySQL: 29 sessions
– Oracle Exadata: 57 sessions
– Performance and Scalability: 75 sessions
– Specialty Data and Search: 15 sessions
– Storage Management: 24 sessions

One of my favorite topics, Oracle Enterprise Manager (part of the Cross Stream track), will be covered in 161 conference sessions. I have said it before, I have had it on several final conference slides, and I will say it now: Oracle Enterprise Manager Grid Control is probably the best feature/tool/utility in Oracle since 9i.

And for the DBAs, as this is a DBA blog, here is a complete list of all sessions with the word “DBA” in the title:

14642: AWR and ASH in 3-D: Performance Analysis Tools No DBA Has Seen Before
20880: Becoming a Rock Star MySQL DBA
9218: DBA Fusion: An Introduction to Oracle Enterprise Manager 11g Architecture and Installation
13781: Day in the Life of a DBA: Cloud-Ready Management Solutions for Any IT Shop
14641: Extreme Database Administration: New Features for Expert DBAs
13081: Mastering Oracle Data Pump: Technical Deep Dive into Performance/Internals for Hands-on DBAs
15143: MySQL for Oracle DBAs, or How to Speak MySQL for Beginners
13446: Oracle Database Vault: DBA Best Practices
8046: Oracle Exadata Management for Oracle DBAs
14365: Oracle WebLogic Server Management for DBAs: Cross-Tier Visibility from JVM to Database
6681: Trends in Database Administration and the Changing Role of the DBA
14644: Tuning Toolkit for Advanced DBAs: Learn from the Past, and Prepare for the Future
8044: What If Kramer Were Your DBA and Seinfeld Tuned Your Database?
28900: Launching the IOUG Virtualization SIG: 360 Degrees of Virtualization for Oracle DBAs

If you wonder why so many people attend OOW, here are the high-level benefits of attending Oracle OpenWorld according to Oracle:

– Participate in sessions and hands-on workshops led by the world’s foremost IT experts
– Find out firsthand how to streamline upgrades and deployments
– Attend sessions in the all-new Server and Storage Systems stream
– Dig deep into application deployment, scaling, upgrading, and best practices
– Meet with your industry and technology peers
– Share notes in special interest groups, and product and industry sessions
– See hundreds of partners and customers as they present product demos and solutions in three Exhibition Halls

And here is a picture from LJE’s keynote on Cloud Computing that I took last year from my seat.

P.S. I cannot find a single session on Oracle bugs, and we all know that they play a major role in the database world. For sure, many DBAs would be very interested in a presentation called “11gR2 top 10 bugs”.

Data Pump Legacy Mode

In DBA, Oracle database, Oracle utilities on May 19, 2011 at 16:42

During the past 10 years we have seen 3 new versions of Oracle: 9i, 10g and 11g. I am often asked, and I often ask as well, which is the best version or which are the best features of Oracle. Ask 10 people and I bet there will be 10 different opinions. Here is how I see it:

Real Application Clusters and Data Guard in 9i
Grid Control and Data Pump in 10g
Automatic SQL Tuning and Edition-Based Redefinition in 11g

There have been so many new, very good and important features, but let us say that they are all part of the progress of mankind 🙂 And of the recent boom of IT. Perhaps?

In 11gR2, Oracle decided to introduce Data Pump Legacy Mode in order to provide backward compatibility for scripts and parameter files used with the original export/import utilities. The documentation briefly says: “This feature enables users to continue using original Export and Import scripts with Data Pump Export and Import. Development time is reduced as new scripts do not have to be created.”

If you check AristaDBA’s Oracle Blog and read the first three paragraphs, you will probably see what it is about. I totally agree with everything written there.

And I don’t really get it: why do we have Data Pump Legacy Mode? Do lazy Oracle clients badly need it? Was exp/imp so, so good that we must have it back? How about a RAC Legacy Mode if I want to use the gc_files_to_lock or freeze_db_for_fast_instance_recovery parameters? There really was a parameter called freeze_db_for_fast_instance_recovery, I am not making this up. Run this one:

SELECT kspponm, 
DECODE(ksppoflg, 1,'Obsolete', 2, 'Underscored') as "Status"
FROM x$ksppo
WHERE kspponm like '%freeze%'
ORDER BY kspponm;

Or how about running DBMS_SCHEDULER in DBMS_JOB mode 🙂 And what if you want to run DB2 in Oracle mode?

However, the Data Pump Legacy Mode feature exists, and once you use any conventional export/import parameter, you put Data Pump into legacy mode. Just one parameter is enough. In Oracle terminology, Data Pump enters legacy mode once it determines that a parameter unique to original Export or Import is present. Of course, some parameters, like buffer, commit, compress, object_consistent, recordlength, resumable, statistics, compile, filesize, tts_owners, streams_configuration and streams_instantiation, are simply ignored.

Now, here is a paradox, or a trivial documentation error: deferred_segment_creation is set to TRUE by default in 11gR2. Have a look at the documentation.

It is true: Data Pump does not ignore tables without segments, regardless of the mode it runs in. Note additionally that legacy mode is activated by adding even a single ignorable parameter (like buffer, for example):

 SQL> create table TEST (c1 number, c2 varchar2(10), c3 date) storage (initial 5M);

 Table created.

 SQL> select bytes, blocks, segment_type, segment_name from dba_segments where segment_name='TEST';

 no rows selected

 C:\>expdp julian/password dumpfile=data_pump_dir:abc_%U.dat schemas=julian include=TABLE:in('TEST') logfile=abc.log buffer=1024

 Export: Release 11.2.0.2.0 - Production on Wed May 11 07:53:45 2011

 Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
 
 Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
 With the Partitioning, OLAP, Data Mining and Real Application Testing options
  Legacy Mode Active due to the following parameters:
  Legacy Mode Parameter: "buffer=1024" Location: Command Line, ignored.
  Legacy Mode has set reuse_dumpfiles=true parameter.
 Starting "JULIAN"."SYS_EXPORT_SCHEMA_01":  julian/******** dumpfile=data_pump_dir:abc_%U.dat schemas=julian 
  include=TABLE:in('TEST') logfile=abc.log reuse_dumpfiles=true 
 Estimate in progress using BLOCKS method...
 Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
 Total estimation using BLOCKS method: 0 KB
 Processing object type SCHEMA_EXPORT/TABLE/TABLE
 . . exported "JULIAN"."TEST"                                 0 KB       0 rows
 Master table "JULIAN"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
 ******************************************************************************
 Dump file set for JULIAN.SYS_EXPORT_SCHEMA_01 is:
   C:\ORACLE\ADMIN\JMD\DPDUMP\ABC_01.DAT
 Job "JULIAN"."SYS_EXPORT_SCHEMA_01" successfully completed at 07:54:07

But it is not true that you cannot export tables with no segments using the original exp. Here is the proof:

 C:\>exp userid=julian/abc file=C:\Oracle\admin\JMD\dpdump\a.dmp tables=julian.test

 Export: Release 11.2.0.2.0 - Production on Wed May 11 08:31:08 2011

 Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

 Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
 With the Partitioning, OLAP, Data Mining and Real Application Testing options
 Export done in AL32UTF8 character set and AL16UTF16 NCHAR character set

 About to export specified tables via Conventional Path ...
 . . exporting table                           TEST          0 rows exported
 Export terminated successfully without warnings.

But forget about this Legacy Mode. Do not use it. Pretend it does not exist 🙂

Let us now look at some new features of Data Pump. Note that in 11gR2 a couple of restrictions have already been removed:

– The restriction that in TABLES mode all tables had to reside in the same schema.
– The restriction that only one object (table or partition) could be specified if wildcards were used as part of the object name.

For RAC users: Data Pump worker processes can now be distributed across all Oracle RAC instances, across a subset of instances, or confined to the instance where the Data Pump job is started. It is also now possible to start Data Pump jobs and run them on different Oracle RAC instances simultaneously.

For XMLType column users: there is a new DISABLE_APPEND_HINT value on the DATA_OPTIONS parameter which disables the APPEND hint while loading the data object.

For EBR users: specific editions can be exported and imported. Using the SOURCE_EDITION parameter on export and the TARGET_EDITION parameter on import, you can export a certain edition of the database and import it into a specific edition of the database. Hedged command-line sketches for these three cases follow below.
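
(All directory, dump file, schema, service and edition names below are made up for illustration.)

 expdp system SCHEMAS=hr DIRECTORY=dp_dir DUMPFILE=hr_%U.dmp PARALLEL=8 CLUSTER=Y SERVICE_NAME=dp_svc

 impdp system DIRECTORY=dp_dir DUMPFILE=xml_tabs.dmp DATA_OPTIONS=DISABLE_APPEND_HINT

 expdp hr DIRECTORY=dp_dir DUMPFILE=hr_ed.dmp SOURCE_EDITION=ORA$BASE
 impdp hr DIRECTORY=dp_dir DUMPFILE=hr_ed.dmp TARGET_EDITION=RELEASE_V2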

P.S. I think Data Pump is among the Oracle tools with the fewest bugs! Although it is hard to say nowadays what is a bug, what is a feature, and what is a duplicate enhancement... There are about 100 types of bugs.

P.P.S. I attended the 11g beta OCP exam as soon as it was released. There were more than 300 questions, and not a single one on Data Pump! And 11g came with so many new features.