Dontcheff

Archive for the ‘Database options’ Category

Oracle Exadata Cloud Machine Q&A

In Cloud, Database options, DBA, Exadata, IaaS, Oracle database, Oracle Engineered Systems on March 28, 2017 at 07:25

“The computer industry is the only industry that is more fashion-driven than women’s fashion.” Larry Ellison

Amazon has no on-premises presence; it is cloud only. Microsoft has the Azure Stack, but Azure Stack Technical Preview 3 is available only as a Proof of Concept (POC): it must not be used as a production environment and should only be used for testing, evaluation, and demonstration.

Oracle Database Exadata Public Cloud Machine, or Exadata Cloud Machine, or ExaCM in short, is a cloud-based Oracle database subscription service available on Oracle Exadata and deployed in the customer or partner data center behind their firewall. This allows customers to subscribe to fully functional Oracle databases on Exadata, on an OPEX-driven consumption model, using agile cloud-based provisioning, while Oracle maintains the associated Exadata infrastructure.

This blog post contains the top 10 (5 commercial and 5 technical) facts about the Exadata Cloud Machine:

1. On-premise licenses cannot be transferred to ExaCM.

2. The minimum commitment to both ExaCM and the Oracle Cloud Machine (OCM, see fact 5 below) is 4 years, and the minimum configuration is an Eighth Rack.

3. The subscription price for Oracle Database Exadata Cloud Machine X6 Eighth Rack is $40,000 per month ($2,500 × 16 OCPUs), and that includes all DB options/features, Exadata Software and the OEM DB Packs.

4. Standalone products such as Oracle Secure Backup and Oracle GoldenGate are not included in the ExaCM subscription. Only the database options (such as RAC, In-Memory, Partitioning, Active Data Guard, etc.), the database OEM packs and the Exadata storage server software are included.

5. ExaCM requires an Oracle Cloud Machine to deploy the Exadata Cloud Control Plane (a separate subscription). The OCM subscription requires a similar minimum term commitment as ExaCM. If a customer already has an OCM, it can be leveraged to deploy the Exadata Control Plane at no extra cost. One OCM Model 288 can manage 6 ExaCM Full Racks (i.e. 24 ExaCM Quarter Racks or 12 ExaCM Half Racks). Theoretically, one OCM can support a much larger number of ExaCM Full Racks: about 50.

6. The 1/8th rack SKU is very similar to the on-premises 1/8th rack – i.e. a minimum configuration of 16 OCPUs (cores), 240 GB RAM per database server, 144 TB raw storage (42 TB usable) and 19.2 TB of flash. Compared to the Quarter Rack, it ships with less RAM, disk storage and flash; those are field-installed if the customer chooses the 1/8th to Quarter Rack upgrade. Note that the 1/8th rack gives customers an entry-level configuration similar to what exists in the Exadata Cloud Service.

7. Hourly Online Compute Bursting is supported with ExaCM. The commercial terms are the same as in the Exadata Cloud Service – i.e. a 25% premium over the metered rate, calculated on an hourly basis. Customers can scale up or down dynamically. Bursting does not kick in automatically based on load: bursting of OCPUs must be configured by customers as needed. Once the customer initiates bursting, the OCPU update is done dynamically without downtime. Customers are billed afterwards for the hours of bursting usage. Price: $8.401 per OCPU per hour.

8. If the Cloud Control Plane is down, the availability of steady-state runtime operations is not affected. However, cloud-based management (e.g. the self-service UI and REST API access) will be impacted.

9. Access: the Exadata Cloud Machine compute nodes are each configured with a Virtual Machine (VM). You have root privilege for the Exadata compute node VMs, so you can load and run additional software on the Exadata compute nodes. However, you do not have administrative access to the Exadata infrastructure components, including the physical compute node hardware, network switches, power distribution units (PDUs), integrated lights-out management (ILOM) interfaces, or the Exadata Storage Servers, which are all administered by Oracle.

10. Patching and backups: you can produce a list of available patches using the exadbcpatchmulti command as follows:

# /var/opt/oracle/exapatch/exadbcpatchmulti -list_patches \
  -sshkey=/home/opc/.ssh/id_rsa \
  -oh=hostname1:/u01/app/oracle/product/12.2.0.1/dbhome_1

When you create a database deployment on Exadata Cloud Machine, you must choose from the following automatic backup configuration options:

– Remote Storage Only: uses remote NFS storage to store periodic full (RMAN level 0) backups and daily incremental backups, with a seven-day cycle between full backups and an overall retention period of thirty days.
– None: no automatic backups are configured. Automatic backups cannot be configured later if you select the None option when you create a database deployment.

Useful links:

Oracle Exadata Cloud Machine Documentation
Oracle Database Exadata Cloud Machine Data Sheet
Features and Benefits of Oracle ExaCM
Oracle Database Exadata Cloud Machine Pricing
Creating an Exadata Cloud Machine Instance
Known Issues for Oracle Exadata Cloud Machine
Oracle SVP, Juan Loaiza, describes Oracle Database Exadata Cloud Machine


Database Magic with Oracle Database 12c

In Database options, Oracle database on November 28, 2016 at 09:40

“Science is magic that works” Kurt Vonnegut


Have a look at the following behavior: a query on the SALES table normally takes more than 2 minutes, but setting the database_performance parameter to SUPER_FAST makes it, as expected, super fast: less than 1 second. Setting database_performance to SUPER_SLOW makes the query hang. Again, “as expected”.

So how is all this possible?

Before blogging about it, I showed this “magical trick” to hundreds of people: at Oracle OpenWorld, at several Oracle User Group events and conferences and to many DBAs. Here is the explanation.

Behind the curtains, I am using the Oracle SQL Translation Framework.

exec dbms_sql_translator.create_profile('OOW');

select object_name, object_type from dba_objects where object_name like 'OOW';

exec dbms_sql_translator.register_sql_translation('OOW', -
'SELECT max(price) most_expensive_order from sales', -
'SELECT max(price) most_expensive_order from julian.sales');

exec dbms_sql_translator.register_sql_translation('OOW', -
'alter session set database_performance="SUPER_FAST"', -
'alter session set inmemory_query="ENABLE"');

exec dbms_sql_translator.register_sql_translation('OOW', -
'alter session set database_performance="RATHER_SLOW"', -
'alter session set inmemory_query="DISABLE"');

exec dbms_sql_translator.register_sql_translation('OOW', -
'alter session set database_performance="SUPER_SLOW"', -
'begin uups; end;');
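You can verify what has been registered in the profile with a quick dictionary query (a sketch; DBA_SQL_TRANSLATIONS also has USER_ and ALL_ variants):

select sql_text, translated_text
from dba_sql_translations
where profile_name = 'OOW';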

The procedure uups is just helping us mimic a never-ending loop:

create or replace procedure uups
as  
 x date;
 BEGIN
   LOOP
     BEGIN
       -- sysdate = sysdate is always true, so NO_DATA_FOUND is never raised
       -- and the loop never exits: the session appears to hang
       SELECT null INTO x FROM dual WHERE sysdate = sysdate;
     EXCEPTION
       WHEN NO_DATA_FOUND THEN EXIT;
     END;
   END LOOP;
 END;
/

Then, once connected to the database as SYS, you run the following commands:

set timing on

alter session set sql_translation_profile = OOW
/

alter session set events = '10601 trace name context forever, level 32'
/
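With the profile and the event set, the translations fire transparently. This is roughly what the audience sees (output abridged; the timing matches the run discussed below):

SQL> alter session set database_performance="SUPER_FAST";

Session altered.

SQL> select max(price) most_expensive_order from sales;

MOST_EXPENSIVE_ORDER
--------------------
               91978

Elapsed: 00:00:00.97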
 

The last question is probably: what actually changes the performance? That should be clear from the use of the inmemory_query parameter: I am simply keeping the SALES table in memory. So yes, the in-memory option can be about 137 times faster: 133.22/0.97 ≈ 137.

Here is something more to read if you find the topic interesting:

SQL Translation Framework in Oracle Database 12c by Okcan Yasin Saygili
SQL Translation Framework by Kerry Osborne
Oracle 12c Security – SQL Translation and Last Logins by Pete Finnigan

The 7 Initialization Parameters Related to the IM Column Store

In Database options, DBA, Init.ora, Oracle database on January 14, 2015 at 14:42

I often see and hear claims about how many times faster Oracle 12c IM is compared to whatever. Although the numbers are more realistic than wishful thinking, I must say that all of that strongly depends on how the IM option has been set up.

Besides the way you set up the tables, the MVs and the tablespaces, there are 7 init.ora parameters that directly affect the behavior, performance and speed of the in-memory option.


1. INMEMORY_SIZE

Default: 0

This initialization parameter sets the size of the IM column store in a database instance.

The default value is 0, which means that the IM column store is not used. This initialization parameter must be set to a non-zero value to enable the IM column store. If the parameter is set to a non-zero value, then the minimum setting is 100M.

In a multitenant environment, the setting for this parameter in the root is the setting for the entire multitenant container database (CDB). This parameter can also be set in each pluggable database (PDB) to limit the maximum size of the IM column store for each PDB. The sum of the PDB values can be less than, equal to, or greater than the CDB value. However, the CDB value is the maximum amount of memory available in the IM column store for the entire CDB, including the root and all of the PDBs. Unless this parameter is specifically set for a PDB, the PDB inherits the CDB value, which means that the PDB can use all of the available IM column store for the CDB.

Julian’s tip: the bigger the better: more stuff will fit in.
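For example, to give the column store 4G (the value is just an illustration, size it to your workload; INMEMORY_SIZE is a static parameter, so a restart is needed):

ALTER SYSTEM SET INMEMORY_SIZE = 4G SCOPE=SPFILE;
SHUTDOWN IMMEDIATE
STARTUP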

2. INMEMORY_MAX_POPULATE_SERVERS

Default: Half the effective CPU thread count or the PGA_AGGREGATE_TARGET value divided by 512M, whichever is less.

This initialization parameter specifies the maximum number of background populate servers to use for IM column store population, so that these servers do not overload the rest of the system. Set this parameter to an appropriate value based on the number of cores in the system.

Julian’s tip: use something between the default and its double.
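For example, if the default works out to 4 on your server, doubling it looks like this (the value is illustrative; the parameter is dynamic):

ALTER SYSTEM SET INMEMORY_MAX_POPULATE_SERVERS = 8;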

3. INMEMORY_FORCE

Default: DEFAULT

This initialization parameter can enable tables and materialized views for the IM column store or disable all tables and materialized views for the IM column store.

Set this parameter to DEFAULT, the default value, to let the INMEMORY or NO INMEMORY attribute on each database object determine whether it is populated in the IM column store.

Set this parameter to OFF to specify that all tables and materialized views are disabled for the IM column store.

Julian’s tip: no brainer

4. INMEMORY_CLAUSE_DEFAULT

Default: an empty string

This initialization parameter enables you to specify a default IM column store clause for new tables and materialized views.

Leave this parameter unset or set it to an empty string to specify that there is no default IM column store clause for new tables and materialized views. Setting the value of this parameter to NO INMEMORY has the same effect as setting it to the default value (the empty string).

Set this parameter to a valid INMEMORY clause to specify that the clause is the default for all new tables and materialized views. The clause can include valid clauses for IM column store compression methods and data population options.

If the clause starts with INMEMORY, then all new tables and materialized views, including those without an INMEMORY clause, are populated in the IM column store. If the clause omits INMEMORY, then it only applies to new tables and materialized views that are enabled for the IM column store with an INMEMORY clause during creation.

Julian’s tip: start with an empty string and, depending on your IM strategy, modify it later on.
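For example, to populate all new tables with a given compression level by default (the clause below is just one possibility; align it with your IM strategy):

ALTER SYSTEM SET INMEMORY_CLAUSE_DEFAULT = 'INMEMORY MEMCOMPRESS FOR QUERY LOW';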

5. INMEMORY_QUERY

Default: ENABLE

This initialization parameter specifies whether in-memory queries are allowed. Set this parameter to ENABLE, the default value, to allow queries to access database objects populated in the IM column store, or set this parameter to DISABLE to disable access to the database objects populated in the IM column store.

Julian’s tip: a very useful parameter at session level for testing how fast the IM option is.
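For example, timing the same statement with and without the IM column store in a single session (using the SALES table from the demos on this blog):

set timing on
select max(price) from sales;                  -- served from the IM column store
alter session set inmemory_query="DISABLE";
select max(price) from sales;                  -- buffer cache / disk instead
alter session set inmemory_query="ENABLE";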

6. INMEMORY_TRICKLE_REPOPULATE_SERVERS_PERCENT

Default: 1

This initialization parameter limits the maximum number of background populate servers used for IM column store repopulation, as trickle repopulation is designed to use only a small percentage of the populate servers. The value for this parameter is a percentage of the INMEMORY_MAX_POPULATE_SERVERS initialization parameter value. For example, if this parameter is set to 10 and INMEMORY_MAX_POPULATE_SERVERS is set to 10, then on average one core is used for trickle repopulation.

Julian’s tip: increase it to a level based on your CPU cores and your need for fast repopulation.

7. OPTIMIZER_INMEMORY_AWARE

Default: TRUE

This initialization parameter enables or disables all of the optimizer cost model enhancements for in-memory. Setting the parameter to FALSE causes the optimizer to ignore the in-memory property of tables during the optimization of SQL statements.

Julian’s tip: good for testing purposes to see how IM changes the behaviour of the CBO
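For example, to see what the CBO would do without the in-memory costing, in one session only (again using the SALES table as an illustration):

alter session set optimizer_inmemory_aware = FALSE;
explain plan for select max(price) from sales;
select * from table(dbms_xplan.display);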

Interesting test case: run the same SQL with the IM column store enabled and disabled, and compare the execution plans and timings.

I would always recommend PARALLEL_DEGREE_POLICY = AUTO when using the 12c in-memory option.

And finally some interesting articles on 12c IM:

Off May Not Be Totally Off: Is Oracle In-Memory Database 12c (12.1.0.2.0) Faster? by Craig Shallahamer

12c In-Memory in PDB by Kerry Osborne

Getting started with Oracle Database In-Memory Part I – Installing & Enabling by Maria Colgan

In-Memory Column Store in Oracle Database 12c Release 1 (12.1.0.2) by Tim Hall

Our take on the Oracle Database 12c In-Memory Option by Tanel Poder

Reading Data in Oracle Database 12c

In Cloud, Consolidation, Database options, DBA, Security and auditing, SQL on December 1, 2014 at 18:02

1. For DBAs and developers, the words READ and SELECT have for years been near synonyms. In 12c, is there now a difference?

2. Before pluggable databases, selecting data from the SALES table, for instance, meant selecting data from a table called SALES in a certain schema within the database. What if a table called SALES exists in several pluggable databases under the same schema name?

The aim of this blog post is to shed some light on these new concepts.


1. New READ privilege.

Until Oracle 12.1.0.2, the SELECT object privilege allowed users to perform the following two operations in addition to just reading data from the SALES table:

LOCK TABLE sales IN EXCLUSIVE MODE;
SELECT ... FROM sales FOR UPDATE;

These two commands enabled users to lock the rows of the SALES table.

The READ object privilege does not provide these additional privileges. For better security, grant users the READ object privilege if you want to restrict them to performing queries only.

In addition to the READ object privilege, you can grant users the READ ANY TABLE privilege to enable them to query any table in the database.

When a user who has been granted the READ object privilege wants to perform a query, the user still must use the SELECT statement. There is no accompanying READ SQL statement for the READ object privilege.

The GRANT ALL PRIVILEGES TO user SQL statement includes the READ ANY TABLE system privilege. The GRANT ALL PRIVILEGES ON object TO user statement includes the READ object privilege.

If you want the user only to be able to query tables, views, materialized views, or synonyms, then grant the READ object privilege. For example:

GRANT READ ON SALES TO julian;
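A quick sketch of the difference in practice, connected as julian (prefix SALES with its owner or use a synonym; the exact error codes may vary by version):

SELECT max(price) FROM sales;          -- allowed: READ covers queries
LOCK TABLE sales IN EXCLUSIVE MODE;    -- fails: insufficient privileges
SELECT price FROM sales FOR UPDATE;    -- fails: insufficient privileges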


2. Querying a table owned by a common user across all PDBs.

Consider the following scenario:

– The container database has several pluggable databases, i.e., it has a separate PDB for each different office location of the company.
– Each PDB has a SALES table that tracks the sales of the office, i.e., the SALES table in each PDB contains different sales information.
– The root container also has an empty SALES table.
– The SALES table in each container is owned by the same common user.

To run a query that returns all of the sales across the company, connect to each PDB as a common user and create a view with the following statement:

CREATE OR REPLACE VIEW sales AS SELECT * FROM sales;

The common user that owns the view must be the same common user that owns the sales table in the root. After you run this statement in each PDB, the common user has a view named sales in each PDB.

With the root as the current container and the common user as the current user, run the following query with the CONTAINERS clause to return all of the sales in the sales table in all PDBs:

SELECT * FROM CONTAINERS(sales);

You can also query the view in specific containers. For example, the following SQL statement queries the view in the containers with a CON_ID of 3 and 4:

SELECT * FROM CONTAINERS(sales) WHERE CON_ID IN (3,4);
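The CONTAINERS clause also exposes a CON_ID column in its result set, so you can aggregate per PDB; for example:

SELECT con_id, max(price) most_expensive_order
FROM CONTAINERS(sales)
GROUP BY con_id
ORDER BY con_id;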

3. Delegate.

Something else: starting with 12.1.0.2, when granting a role to a user, you can specify the WITH DELEGATE OPTION clause. The grantee can then do the following two things:

A) Grant the role to a program unit in the grantee’s schema
B) Revoke the role from a program unit in the grantee’s schema
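A minimal sketch, assuming a role named dw_role and a procedure julian.load_sales (both hypothetical names):

GRANT dw_role TO julian WITH DELEGATE OPTION;

-- julian can now grant the role to a program unit in his own schema:
GRANT dw_role TO PROCEDURE julian.load_sales;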


Oracle Database 12.1.0.2 New Features

In Database options, DBA, Oracle database on July 23, 2014 at 14:25

Oracle Database 12.1.0.2 was released yesterday, July 22nd 2014.

I found no issues whatsoever with downloading, installing the software and creating a container database plus enabling the in-memory option.


Here are a few useful links:

1. Oracle Database 12.1.0.2 Software Download
2. Oracle 12cR1 Documentation
3. MOS 1905806.1 about 12.1.0.2
4. Oracle Database Blog: 12.1.0.2 is available!!!

When you set up the IM option, note the In-Memory Area shown in the output below:


SQL> startup
ORACLE instance started.

Total System Global Area  838860800 bytes
Fixed Size		    2929936 bytes
Variable Size		  511707888 bytes
Database Buffers	   50331648 bytes
Redo Buffers		    5455872 bytes
In-Memory Area		  268435456 bytes
Database mounted.
Database opened.
SQL> show parameter inmemory

NAME				     TYPE			       VALUE
------------------------------------ --------------------------------- ------------------------------
inmemory_clause_default 	     string
inmemory_force			     string			       DEFAULT
inmemory_max_populate_servers	     integer			       1
inmemory_query			     string			       ENABLE
inmemory_size			     big integer		       256M
inmemory_trickle_repopulate_servers_ integer			       1
percent
optimizer_inmemory_aware	     boolean			       TRUE


Let me show you how one can see the incredible speed of the in-memory option:


SQL> alter table SALES inmemory;

Table altered.

SQL> select max(price) most_expensive_order from sales;

MOST_EXPENSIVE_ORDER
--------------------
	       91978

Elapsed: 00:00:02.50

SQL> alter session set inmemory_query="DISABLE";

Session altered.

Elapsed: 00:00:00.03

SQL> select max(price) most_expensive_order from sales;

MOST_EXPENSIVE_ORDER
--------------------
	       91978

Elapsed: 00:01:25.51

Check the following new commands and views related to the in-memory option:


SQL> alter table SALES inmemory memcompress for capacity high;

Table altered.

SQL> alter table SALES no inmemory (client);

Table altered.

SQL> select segment_name, inmemory_size, inmemory_compression, bytes/inmemory_size comp_ratio from v$im_segments;

SEGMENT_NAME	     INMEMORY_SIZE INMEMORY_COMPRESSION 	  COMP_RATIO
-------------------- ------------- ------------------------------ ----------
SALES			  24969216 FOR CAPACITY HIGH		  11.6325459

SQL> select table_name, cache, inmemory_compression comp, inmemory_priority priority, inmemory_distribute RAC from dba_tables where table_name = 'SALES';

TABLE_NAME   CACHE	COMP		     PRIORITY	RAC
------------ ---------- -------------------- ---------- ----------
SALES		 N	FOR CAPACITY HIGH    NONE	AUTO

SQL> select view_name from dba_views where view_name like 'V_$IM%';

VIEW_NAME
----------------------------------------
V_$IM_SEGMENTS_DETAIL
V_$IM_SEGMENTS
V_$IM_USER_SEGMENTS
V_$IM_TBS_EXT_MAP
V_$IM_SEG_EXT_MAP
V_$IM_HEADER
V_$IM_COL_CU
V_$IM_SMU_HEAD
V_$IM_SMU_CHUNK
V_$IM_COLUMN_LEVEL

10 rows selected.

Some of the most interesting new features are:

Oracle Database In-Memory
In-Memory Aggregation and In-Memory Column Store
Oracle Big Data SQL
Advanced Index Compression
Automatic Big Table Caching
Zone Maps for full table access
New and optimized SQL function, APPROX_COUNT_DISTINCT() (see the example after this list)
Full Database Caching
Rapid Home Provisioning based on gold images stored in a catalog of pre-created homes
New database parameter: DBFIPS_140
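For instance, APPROX_COUNT_DISTINCT() trades a little accuracy for a lot of speed on large distinct counts; a quick try against the SALES table from the demo above:

select approx_count_distinct(price) approx_distinct_prices from sales;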


Hana and her sisters

In Database options, DBA, Oracle database on November 30, 2013 at 17:50

Often Oracle DBAs are curious about the differences between SAP HANA and Oracle. As of the end of 2013, the current release of Oracle is 12.1.0.1.0 and the current release of SAP HANA is SAP HANA SP6 Rev.69.

The Oracle database has five software editions: Enterprise, Standard, Standard One, Express and Personal. There are also five known SAP HANA software editions: Platform, Enterprise, Extended Enterprise, HANA for SAP BW, and HANA Edge.


Most articles on the Net are rather high level, and very few DBA-level comparisons exist. saphana.com claims that the dependency on the DBA has been reduced to a large extent, although I see that there are HANA DBA job openings. Let us go a bit deeper.

One of the new features in SP6 is the new ACCESSIBLE_VIEWS view, which allows you to see which views a user can access. The closest Oracle equivalent is ALL_VIEWS (USER_VIEWS lists only the views the user owns), and it was certainly present long before Oracle 12c.

A new parameter, reserved_connections, is available in the indexserver.ini configuration file (the Oracle counterpart is init.ora), which you can edit on the Configuration tab of the Administration editor. You can use this parameter to reserve a certain number of connections for administrative access only. When the maximum number of connections minus the number of reserved connections is reached, only an administrator with the system privilege SESSION ADMIN can log on to the system, for example, to resolve blocking situations by canceling sessions.

Oracle does not have such a parameter, but you can use Real-Time ADDM: you can connect and investigate what is going on even when a normal connection is impossible because the database is hanging or extremely slow. This is for real: the connection to the database is made in diagnostic mode, with direct access to the SGA.

From a backup point of view, there are some similarities: in HANA, BACKUP OPERATOR is a new system privilege that allows you to implement a finer-grained separation of duties. In Oracle, the similar privilege is SYSBACKUP. In SP6, the Backup editor of the SAP HANA studio now displays the following additional information: last successful data backup with information on start/end time, duration, size, and throughput. This has been visible in Oracle for many releases.

You can follow several discussions on SAP HANA here.


SAP HANA automatically creates additional files if existing files in a data volume located in an ext3 file system reach the 2 TB limit. The autoextend option in Oracle is much more flexible; there are several options for the DBA.

There is now support for hybrid LOBs (BLOB, CLOB, NCLOB) in SAP HANA. Each LOB now has its own virtual file inside SAP HANA. In Oracle, LOBs are already an old concept; the modern LOBs are called SecureFiles.

Think of the power of all Oracle database features using the in-memory option! It is really a switch: DBAs have to set the new inmemory_size parameter and run alter table .. inmemory; for all tables for which you would use the option!

According to IDC, 80% of SAP’s installations run on Oracle, and Oracle has continued to enhance its leadership position by certifying Exadata for both SAP ERP and BW. However, core database features like multiversion read consistency with row-level locking (introduced in 1988) and a cost-based optimizer (present since 1992) are non-existent in HANA. I have seen claims that “SAP HANA in-memory processing greatly reduces the need for a cost based optimizer and basically eliminating the need for data access query optimization via optimizer hints” but I let you decide if this is true or not.

The reality is that HANA misses dozens of DW features like Bitmap Indexes, Join Indexes, Materialized Views, Parallel DML, Parallel Query, etc. Not to mention security features like TDE, indexing and compression of encrypted data, network encryption, and we can go on and on…

I will let you read The Top 10 SAP HANA Myths Spread by Other Vendors and decide for yourself what is myth and what is not.

Let us move on to monitoring. To improve usability for SAP HANA DBAs, the user-defined filters and layout configuration applied on the several tabs of the Administration editor are automatically saved when the SAP HANA studio is closed and re-applied the next time it is opened, independently of the system. This feature is available on the following sub-tabs of the Performance tab:

○ Sessions
○ SQL Plan Cache
○ Expensive Statements Trace
○ Job Progress

Interesting! Moreover, the SAP HANA studio now provides support for system replication. You can even monitor the status to confirm that both systems are in sync. It is also possible to display zipped diagnosis files, that is, files ending with *.gz, on the Diagnosis Files tab. The file is unzipped to a temporary file in the local SAP HANA studio workspace. Additional configuration options are available for the performance trace, SQL trace, and expensive statements trace. All this information and more can be found in the SAP HANA Platform Release Notes.

At a recent conference, I heard the claim that “if the Oracle database is the Garden of Eden, then DB2 is a dry branch of a tree”. I wonder if someone would claim something similar for SAP HANA Studio/DBA Cockpit and Oracle Enterprise Manager?

Is Oracle Advanced Compression underestimated?

In Database options, Database tuning, DBA, Oracle database on May 17, 2012 at 02:43

Abraham Lincoln said: “I can make more generals, but horses cost money”.

Often all I hear about Oracle Advanced Compression is: “.. but it is an extra option on top of Enterprise Edition”. And thus its hidden power is often neglected.

Advanced Compression is not just about having the data occupy less space on disk, tape or in the network. It is also about better performance of the database, deduplication and the utilization of less hardware resources: memory and CPU. Thus, you can consolidate more databases on the same server and reduce the license costs.

I will try to compress this blog post itself and make it as short as possible:

1. What and which tables to compress?

Compress the segments that are involved in the most reads: physical, logical or unoptimized. Also consider the segments used in the SQL statements with the most reads.

But do not compress all of them! Use DBMS_COMPRESSION.GET_COMPRESSION_RATIO to verify that you will actually benefit from the compression. Compress only the segments with a compression ratio of 2 to 1 or higher. Here is an example of how you can estimate the compression ratio:

set serveroutput on
DECLARE
  blkcnt_cmp   pls_integer;
  blkcnt_uncmp pls_integer;
  row_cmp      pls_integer;
  row_uncmp    pls_integer;
  cmp_ratio    number;         -- NUMBER rather than PLS_INTEGER: the ratio can be fractional
  comptype_str varchar2(100);
BEGIN
  -- estimate OLTP compression for table SAPSR3."/BI0/PMATERIAL",
  -- using tablespace PSAPSR3 as the scratch area
  DBMS_COMPRESSION.GET_COMPRESSION_RATIO ('PSAPSR3', 'SAPSR3', '/BI0/PMATERIAL', '',
    DBMS_COMPRESSION.COMP_FOR_OLTP,
    blkcnt_cmp, blkcnt_uncmp, row_cmp, row_uncmp, cmp_ratio, comptype_str);
  DBMS_OUTPUT.PUT_LINE('Block count compressed           = ' || blkcnt_cmp);
  DBMS_OUTPUT.PUT_LINE('Block count uncompressed         = ' || blkcnt_uncmp);
  DBMS_OUTPUT.PUT_LINE('Row count per block compressed   = ' || row_cmp);
  DBMS_OUTPUT.PUT_LINE('Row count per block uncompressed = ' || row_uncmp);
  DBMS_OUTPUT.PUT_LINE('Compression type                 = ' || comptype_str);
  DBMS_OUTPUT.PUT_LINE('Compression ratio                = ' || round(blkcnt_uncmp/blkcnt_cmp,2) || ' to 1');
  DBMS_OUTPUT.PUT_LINE('Compression ratio org            = ' || cmp_ratio);
END;
/
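If the estimated ratio justifies it, you can then move the table into OLTP-compressed format. A sketch using the same table as above; note that the MOVE marks the table’s indexes UNUSABLE, so they must be rebuilt (the index name below is hypothetical):

ALTER TABLE "SAPSR3"."/BI0/PMATERIAL" MOVE COMPRESS FOR OLTP;

-- rebuild every index on the moved table, e.g.:
ALTER INDEX "SAPSR3"."/BI0/PMATERIAL~0" REBUILD;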

2. How to measure the performance benefits of compression?

For the SQL statements, run awrsqrpt.sql before and after the compression and compare the elapsed time, buffer gets and physical reads.

3. What comes after compression?

Follow the average CPU and I/O statistics for the compressed tables:

col c0 heading 'Begin|Interval|time' format a8
col c1 heading 'Owner'               format a10
col c2 heading 'Object|Type'         format a10
col c3 heading 'Object|Name'         format a15
col c4 heading 'Average|CPU|Cost'    format 9,999,999,999
col c5 heading 'Average|IO|Cost'     format 9,999,999
select
 to_char(sn.begin_interval_time,'mm-dd hh24') c0,  
 p.object_owner                               c1,
 p.object_type                                c2,
 p.object_name                                c3,
 avg(p.cpu_cost)                              c4,
 avg(p.io_cost)                               c5
from
 dba_hist_sql_plan p,
 dba_hist_sqlstat  st,
 dba_hist_snapshot sn
where
 p.object_name = '/BI0/PMATERIAL' 
 and p.sql_id = st.sql_id and st.snap_id = sn.snap_id     
group by
 to_char(sn.begin_interval_time,'mm-dd hh24'),
 p.object_owner,
 p.object_type,
 p.object_name
order by
 1,2,3 desc;  

P.S. About Exadata: do not mix HCC and DML, i.e., do not use HCC in OLTP.