Dontcheff

Archive for the ‘Init.ora’ Category

An example of the Oracle Autonomous Database Advisor

In Autonomous, DBA, Init.ora, New features, Oracle database on June 22, 2020 at 08:33

Moving a database to the Oracle Autonomous Cloud is a rather simple task, but how do you avoid possible issues after the migration? How do you know in advance which objects will not migrate to Autonomous at all, or which objects will be migrated with some changes? And how do you get advice and guidelines on the migration beforehand?

The new tool called “Oracle Autonomous Database Schema Advisor” can give you the answers to these questions.

All the details are in MOS Doc ID 2462677.1 but here are the basics and an example in which I will be moving my 20c schema to ATP Dedicated.

First, you need to install the advisor. That means running the script install_adb_advisor.sql (as SYSDBA), which you download from the MOS note above. The script will create a user, 7 tables, 4 indexes and a package. I have decided to call the user adb_advisor and the password will be tiger:

 
[oracle@julian ~]$ sqlplus "/ as sysdba"

SQL*Plus: Release 20.0.0.0.0 - Production on Thu Jun 18 07:24:31 2020
Version 20.2.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Connected to:
Oracle Database 20c EE High Perf Release 20.0.0.0.0 - Production
Version 20.2.0.0.0

SQL> @install_adb_advisor.sql adb_advisor tiger

...

Index created.
Package created.
Package body created.
No errors.
SQL>

Second, you run the advisor as the user created in step one. What you need to specify is (1) the database schemas and (2) the ADB type you will be using in the Autonomous Cloud. You can list a maximum of 30 schemas in a single advisor run, or just use schemas=>'ALL'. These are the 4 options for the ADB type:

– ATP for Autonomous Transaction Processing (Serverless)
– ADW for Autonomous Data Warehouse (Serverless)
– ATPD for Autonomous Transaction Processing (Dedicated)
– ADWD for Autonomous Data Warehouse (Dedicated)

Do not try to run the advisor as SYSDBA: you will get all sorts of errors (I found that out the hard way), something like:

ORA-06598: insufficient INHERIT PRIVILEGES privilege
PLS-00201: identifier ‘ADB_ADVISOR.REPORT’ must be declared
ORA-00942: table or view does not exist

If the package body compiles with a loop index error and ORA-00942, just run GRANT SELECT ON DBA_XML_TABLES TO ADB_ADVISOR; and recompile, as sketched below.
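Something along these lines should do it (assuming, as above, that both the advisor user and its package are called ADB_ADVISOR):

-- run as SYSDBA
GRANT SELECT ON DBA_XML_TABLES TO ADB_ADVISOR;
ALTER PACKAGE ADB_ADVISOR.ADB_ADVISOR COMPILE BODY;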

The Advisor will generate a report with the following information:

– The counts of discovered objects and a summary of migration status
– Objects that cannot be migrated due to the restrictions and lockdowns imposed by the Autonomous Database on certain data types, database options and SQL statements
– The objects that will migrate with modifications that are automatically made during the import process or upon the execution of object creation DDL
– Informational section containing certain best practice recommendations and guidance

Here is the output after I ran ADB_ADVISOR.REPORT:

 
SET SERVEROUTPUT ON FORMAT WRAPPED
SET LINES 200
exec ADB_ADVISOR.REPORT(schemas=>'ALL', adb_type=>'ATPD');
PL/SQL procedure successfully completed.

==========================================================================================
== ATPD SCHEMA MIGRATION REPORT FOR PDBADMIN,pdbuser,JULIAN
==========================================================================================

ADB Advisor Version   : 19.3.0.0.2
Instance Name         : DB0223
Database Name         : DB0223
Host Name             : julian
Database Version      : 20.0.0.0.0
Pluggable Database    : NOVOPDB1
Schemas Analyzed      : PDBADMIN,PDBUSER,JULIAN
Analyzing for         : Autonomous Transaction Processing (Dedicated)
Report Start date/time: 18-JUN-2020 07:47

------------------------------------------------------------------------------------------
-- SECTION 1: SUMMARY
------------------------------------------------------------------------------------------

                                           Objects         Objects         Total         
                           Object          Will Not        Will Migrate    Objects       
Object Type                Count           Migrate         With Changes    Will Migrate  
-------------------------  --------------  --------------  --------------  --------------
CONSTRAINT                 8               0               0               8             
INDEX                      4               0               0               4             
INDEX PARTITION            4               0               0               4             
TABLE                      8               0               2               8             
TABLE PARTITION            4               0               0               4             
User Objects in SYS        87              87              0               0             
User Objects in SYSTEM     0               0               0               0             

------------------------------------------------------------------------------------------
-- SECTION 2: FOLLOWING OBJECTS WILL NOT MIGRATE
------------------------------------------------------------------------------------------

1) User-defined objects in SYS schema will not migrate (Count=87):
------------------------------------------------------------------
Note: User-defined objects were detected in SYS schema. Consider moving them out of SYS prior to migration.

Owner      Object Type                    Object Name                             
---------- ------------------------------ ----------------------------------------
SYS        INDEX                          SYS_C008044                             
SYS        TABLE                          AQ_SRVNTFN_TABLE_1                   
....     
SYS        EVALUATION CONTEXT             AQ$_KUPC$DATAPUMP_QUETAB_1_V            
...        
SYS        MATERIALIZED ZONEMAP           RDBMS_ZMAP                              
SYS        JOB                            ORA$_ATSK_AUTOZM                        
SYS        TABLE                          WRI$_ADV_OBJSPACE_TREND_DATA            
SYS        TABLE                          WRI$_ADV_OBJSPACE_CHROW_DATA            
SYS        TABLE                          WRI$_ADV_SEGADV_SEGROW                  

------------------------------------------------------------------------------------------
-- SECTION 3: FOLLOWING OBJECTS WILL MIGRATE WITH CHANGES
------------------------------------------------------------------------------------------

1) Index Organized table will be created as regular table (Count=1):
--------------------------------------------------------------------
Note: Index Organized tables are disallowed in ADB. When you create 
an IOT in ADB, the table gets created as non-IOT (regular table). 
When the Data Pump export file contains tables with IOT, use 
'dwcs_cvt_iots:y' transformation at import time to transform IOTs 
as regular tables.

JULIAN.PTIOT1                    

2) INMEMORY Tables will be created as NO INMEMORY Tables (Count=1):
-------------------------------------------------------------------
Note: Database In-Memory is not enabled in ADB. All In-Memory 
tables and partitions will be created with NO INMEMORY clause.

JULIAN.SALES                     

------------------------------------------------------------------------------------------
-- SECTION 4: MIGRATION ADDITIONAL INFO
------------------------------------------------------------------------------------------

1) Database Parameters are detected as modified in the current database but can't be modified in the ADB (Count=13):
--------------------------------------------------------------------------------------------------------------------
Note: The following init parameters are modified in your database 
that you would not be able to modify in ADB. Please refer to the 
Oracle Autonomous Database documentation on the parameters that 
you are allowed to modify.

_exadata_feature_on                                                             
_gc_policy_time                                                                 
_gc_undo_affinity                                                               
_zonemap_auto_processing                                                        
clonedb                                                                         
control_management_pack_access                                                  
db_block_checksum                                                               
enable_ddl_logging                                                              
encrypt_new_tablespaces                                                         
local_listener                                                                  
tablespace_encryption_default_algorithm                                         
tde_configuration                                                               
use_large_pages                                                                 

------------------------------------------------------------------------------------------
-- END OF REPORT
------------------------------------------------------------------------------------------
Report End Datetime   : 18-JUN-2020 07:47
Report Runtime        : +000000000 03:00:02.125000000
------------------------------------------------------------------------------------------

So, based on the report above, I am facing the following 4 issues:

– I have user-defined objects in the SYS schema. They will not be migrated.
– My IOT table cannot be migrated as-is. It will be migrated as a regular (non-index-organized) table.
– My INMEMORY table cannot be migrated as-is. It will be created as a NO INMEMORY table.
– I cannot use some of my init.ora parameters in ADB-D.

My recommendation: before you migrate your schemas to the Autonomous Cloud, run the advisor. It will minimize your post-migration hassle.

A few additional comments about the advisor:

By default, the output gets truncated when the number of rows exceeds the maximum limit set in the Advisor package. You can reset the number of rows by running the following command prior to running the Advisor.

 
SQL> exec ADB_ADVISOR.setmaxrows(500);

You have to reset the max rows every time you run the Advisor, as the setting is not saved in the database.

If you want to query the data dictionary for the output, you may try:

 
SELECT a.owner, a.object_type, a.object_name, c.* 
FROM adb_advisor_objects_tmp a, 
adb_advisor_rejects_tmp b, 
adb_advisor_codes_tmp c 
WHERE a.id = b.id AND b.cd = c.cd;

Final note: you can run the advisor on any database version from 10g to 20c!

What is Preview Versions for Autonomous Database

In Autonomous, Cloud, DBA, Init.ora, New features, Oracle database on October 18, 2019 at 12:14

A first preview is not exactly a pleasant experience for producers, directors and actors. In the IT world, it is exactly the opposite: DBAs can use the new version of the database before it becomes mainstream.

The “Preview Versions for Autonomous Database” has been available since mid-October 2019. The current preview version of Autonomous Database is Oracle Database 19c. More precisely, the version is 19.4.0.

Databases provisioned or cloned with a preview version display the end date of the preview period at the top of the Autonomous Database details page in the Oracle Cloud Infrastructure Console.

Note that the preview for my database will end on December 2, 2019. All resources created with the preview will be terminated at the end of the preview period.

The basic details of preview versions for Autonomous Database can be found in the Oracle documentation.

Connecting to the preview database works in the same way as connecting to a standard ADB.

Here are some of the default settings in Oracle Database Preview Version 19.4.0 for Autonomous Database:

Real-Time Statistics: Enabled by default
– Oracle automatically gathers online statistics during conventional DML operations
– By gathering some statistics automatically during DML operations, the database augments the statistics gathered by DBMS_STATS
– Fresh statistics enable the optimizer to produce more optimal plans
– EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO; forces the database to write optimizer statistics to the data dictionary
– NO_GATHER_OPTIMIZER_STATISTICS prevents the collection of real-time statistics
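Here is a minimal sketch of how these pieces fit together (SALES and SALES_STAGING are made-up tables):

-- flush the in-memory monitoring data so the real-time statistics
-- become visible in the data dictionary
EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

-- statistics gathered via real-time stats are flagged in the NOTES column
SELECT table_name, notes
FROM user_tab_statistics
WHERE notes = 'STATS_ON_CONVENTIONAL_DML';

-- the hint suppresses real-time statistics for a single statement
INSERT /*+ NO_GATHER_OPTIMIZER_STATISTICS */ INTO sales
SELECT * FROM sales_staging;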

High-Frequency Automatic Optimizer Statistics Collection: Enabled by default
– High-Frequency Automatic Optimizer Statistics Collection complements the standard automatic statistics collection job
– By default, the high-frequency statistics collection occurs every 15 minutes 
– Turn on/off: EXEC DBMS_STATS.SET_GLOBAL_PREFS('AUTO_TASK_STATUS','ON');
– Change the default period: EXEC DBMS_STATS.SET_GLOBAL_PREFS('AUTO_TASK_INTERVAL','300');
– Monitor: DBA_AUTO_STAT_EXECUTIONS
– The high-frequency automatic statistics task will not start during the maintenance window
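Put together, the commands from the list above look like this:

-- enable the high-frequency task and shorten the interval to 5 minutes
EXEC DBMS_STATS.SET_GLOBAL_PREFS('AUTO_TASK_STATUS','ON');
EXEC DBMS_STATS.SET_GLOBAL_PREFS('AUTO_TASK_INTERVAL','300');

-- check the recent executions of the automatic statistics tasks
SELECT opid, origin, status, start_time
FROM dba_auto_stat_executions
ORDER BY opid;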

SQL Quarantine: Disabled and not supported in Autonomous Database
– Check ORA-56955: quarantined plan used for details on SQL Quarantine

High-Frequency SQL Plan Management Evolve Advisor Task: Enabled by default
– The task complements the standard SQL plan management evolve task by verifying and evolving SQL plan baselines more frequently
– Turn on/off: EXEC DBMS_SPM.CONFIGURE('AUTO_SPM_EVOLVE_TASK','ON');
– Monitor: DBA_SQL_PLAN_BASELINES

Automatic Indexing: Disabled by default
– Check Automatic Indexing in 19c for more details on Automatic Indexing
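For reference, here is a minimal sketch of turning Automatic Indexing on and checking what it has done:

-- switch automatic indexing to IMPLEMENT mode ('REPORT ONLY' and 'OFF' are the alternatives)
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','IMPLEMENT');

-- summary report of the automatic indexing activity of the last 24 hours
SELECT DBMS_AUTO_INDEX.REPORT_ACTIVITY() FROM dual;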

New 19c spfile parameters
– All 6 new init.ora parameters in 19c are set to their default values; the query after the list shows how to check them:

ADG_REDIRECT_DML
DATA_GUARD_MAX_IO_TIME
DATA_GUARD_MAX_LONGIO_TIME
ENABLE_IMC_WITH_MIRA
LOB_SIGNATURE_ENABLE
MAX_DATAPUMP_PARALLEL_PER_JOB
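A simple way to check their current values:

SELECT name, value, isdefault
FROM v$parameter
WHERE name IN ('adg_redirect_dml',
               'data_guard_max_io_time',
               'data_guard_max_longio_time',
               'enable_imc_with_mira',
               'lob_signature_enable',
               'max_datapump_parallel_per_job');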

Preview is not free of charge: it provides test/development capacity and capabilities, so it is charged identically to production environments. Also, preview is not available for free service instances.

Preview instances are fully supported but should not be used for production purposes.

Finally, you may want to check out the new static data dictionary views in 19c:

• ALL_TRIGGERS_AE, DBA_TRIGGERS_AE, and USER_TRIGGERS_AE
• DBA_AUTO_INDEX_CONFIG
• DBA_AUTO_STAT_EXECUTIONS
• DBA_PDB_SNAPSHOTFILE
• DBA_RAT_CAPTURE_SCHEMA_INFO
• DBA_REGISTRY_BACKPORTS
• DBA_SQL_QUARANTINE

and also the new dynamic performance views in 19c:

• GV$AQ_PARTITION_STATS and V$AQ_PARTITION_STATS
• GV$ASM_ACFSAUTORESIZE and V$ASM_ACFSAUTORESIZE
• GV$ASM_DBCLONE_INFO and V$ASM_DBCLONE_INFO
• GV$MEMOPTIMIZE_WRITE_AREA and V$MEMOPTIMIZE_WRITE_AREA
• GV$SQL_TESTCASES and V$SQL_TESTCASES

What else besides Automatic Indexing is new in Oracle Database 19c?

In DBA, Init.ora, New features, Oracle database, Security and auditing, SQL on April 23, 2019 at 10:53

Automatic Indexing (AI) is probably the most important new feature of Oracle Database 19c, and AI is arguably one of the best examples of AI in the IT industry. But there is much more that came along with 19c.

Here is my choice of the top 10 least known (for now at least) new features of Oracle Database 19.2 (in no order whatsoever):

1. Security: if the database password file name or location has been changed and the metadata cache needs to be refreshed with the details of the updated database password file, you can do it simply with the following command:

SQL> alter system flush passwordfile_metadata_cache;

System altered.

In 19c, most of the Oracle Database supplied schema-only accounts now have their passwords removed to prevent users from authenticating to these accounts. But DBAs can still assign passwords to the default schema-only accounts.

2. SQL: the LISTAGG aggregate function now supports duplicate elimination by using the new DISTINCT keyword.
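A minimal example of the new DISTINCT keyword (EMP is a made-up table):

-- one row per department, each distinct job listed exactly once
SELECT deptno,
       LISTAGG(DISTINCT job, ', ') WITHIN GROUP (ORDER BY job) AS jobs
FROM emp
GROUP BY deptno;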

There are also new bit-vector SQL operators that can be used to speed up COUNT(DISTINCT) operations within a SQL query.

3. Active Data Guard DML redirection: You can now run DML on an Active Data Guard standby database. When run on the standby side, the update is passed to the primary database, where it is executed; the resulting redo of the transaction updates the standby, after which control is returned to the application.
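A minimal sketch of how this looks from a standby session (EMPLOYEES is a made-up table; the parameter can also be set at system level):

-- on the Active Data Guard standby: redirect this session's DML to the primary
ALTER SESSION SET ADG_REDIRECT_DML = TRUE;

UPDATE employees SET salary = salary * 1.1 WHERE employee_id = 100;
COMMIT;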

4. There are 6 new init.ora parameters in 19c. DBAs can now tune the amount of I/O wait time in Data Guard by using DATA_GUARD_MAX_IO_TIME and DATA_GUARD_MAX_LONGIO_TIME. You can check the details for all six:

ADG_REDIRECT_DML
DATA_GUARD_MAX_IO_TIME
DATA_GUARD_MAX_LONGIO_TIME
ENABLE_IMC_WITH_MIRA
LOB_SIGNATURE_ENABLE
MAX_DATAPUMP_PARALLEL_PER_JOB

5. You can now create Materialized Views containing bitmap-based COUNT(DISTINCT) functions. This means that the MVs are based on SQL aggregate functions that use bitmap representation to express the computation of COUNT(DISTINCT) operations.

SQL> create materialized view MV_SALES as
select CLIENT, 
BITMAP_CONSTRUCT_AGG(BITMAP_BIT_POSITION(PRICE),'RAW') bm_price
from SALES
group by CLIENT,BITMAP_BUCKET_NUMBER(PRICE);  

Materialized view created.
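The payoff comes at query time: a COUNT(DISTINCT) can then be answered from the bitmaps, along these lines:

-- number of distinct prices per client, computed from the MV bitmaps
SELECT client,
       SUM(BITMAP_COUNT(bm_price)) AS distinct_prices
FROM mv_sales
GROUP BY client;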

6. Looks like there is now automatic resolution of SQL plan regressions: “SQL plan management searches for SQL statements in the Automatic Workload Repository (AWR). Prioritizing by highest load, it looks for alternative plans in all available sources, adding better-performing plans to the SQL plan baseline. Oracle Database also provides a plan comparison facility and improved hint reporting.”

7. Real-Time Statistics is also a new cool feature. “Oracle automatically gathers online statistics during conventional DML operations. Statistics can go stale between execution of DBMS_STATS statistics gathering jobs. By gathering some statistics automatically during DML operations, the database augments the statistics gathered by DBMS_STATS. Fresh statistics enable the optimizer to produce more optimal plans.”

8. Hybrid Partitioned Tables: now large portions of a table can reside in external partitions, for example in the Oracle Cloud. With this new feature, you can also easily move non-active partitions to external files, such as Oracle Data Pump files, for a cheaper storage solution. Hybrid partitioned tables support all existing external table types for external partitions: ORACLE_DATAPUMP, ORACLE_LOADER, ORACLE_HDFS, ORACLE_HIVE.
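Here is a minimal sketch of a hybrid partitioned table, assuming a directory object SALES_DIR and a CSV file for the cold data:

CREATE TABLE hybrid_sales (
  sale_id   NUMBER,
  sale_year NUMBER
)
EXTERNAL PARTITION ATTRIBUTES (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY sales_dir
  ACCESS PARAMETERS (FIELDS TERMINATED BY ',')
  REJECT LIMIT UNLIMITED
)
PARTITION BY RANGE (sale_year) (
  PARTITION p_archive VALUES LESS THAN (2015)
    EXTERNAL LOCATION ('sales_before_2015.csv'),
  PARTITION p_current VALUES LESS THAN (MAXVALUE)
);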

9. Data Pump:

– Oracle Data Pump allows tablespaces to stay read-only during TTS import.
– Oracle Data Pump can work in test mode for transportable tablespaces.
– Oracle Data Pump supports resource usage limitations with the introduction of two new parameters: MAX_DATAPUMP_JOBS_PER_PDB and MAX_DATAPUMP_PARALLEL_PER_JOB.
– Data Pump no longer enables secure, password-protected roles by default. Beginning with 19c, you must explicitly enable password-protected roles for an individual export or import job. A new command-line parameter has been added, ENABLE_SECURE_ROLES=YES | NO that can be used to explicitly enable or disable these types of roles for an individual export or import job.
– The new Data Pump command-line parameter CREDENTIAL enables secure import into a managed service from dump files in the Oracle Object Store Service.

10. SQL Quarantine: “SQL statements that are terminated by Oracle Database Resource Manager due to their excessive consumption of CPU and I/O resources can be automatically quarantined. The execution plans associated with the terminated SQL statements are quarantined to prevent them from being executed again.”

Check out the new Oracle package DBMS_SQLQ – cool stuff!
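For example, a quarantine configuration can also be created manually for a given SQL_ID (the SQL_ID below is made up) and given a CPU time threshold in seconds:

DECLARE
  l_name VARCHAR2(128);
BEGIN
  l_name := DBMS_SQLQ.CREATE_QUARANTINE_BY_SQL_ID(sql_id => '8vu7s907prbgr');
  DBMS_SQLQ.ALTER_QUARANTINE(
    quarantine_name => l_name,
    parameter_name  => 'CPU_TIME',
    parameter_value => '5');
END;
/

-- quarantine configurations are visible in the new DBA_SQL_QUARANTINE view
SELECT name, sql_text FROM dba_sql_quarantine;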

Table Temperature, Big Tables In-Memory and Automatic Big Table Caching

In DBA, Init.ora, Oracle database on April 3, 2015 at 16:10

If you have never heard of Oracle object temperature and how this relates to caching, in-memory and big tables, here is a short note perhaps worth reading.

And all this is purely Oracle database related, nothing to do with Oracle Lighting Products.


Background: Oracle 12.1.0.2 came with 5 new init.ora parameters (actually 12 but the other 7 are In-Memory related):

1. common_user_prefix
2. dbfips_140
3. enable_goldengate_replication
4. exafusion_enabled
5. db_big_table_cache_percent_target

The first 4 are rather on-off type parameters, but the last one, db_big_table_cache_percent_target, is worth looking into.

A table is considered small if the number of blocks in the segment is lower than or equal to the value of the parameter _small_table_threshold. In 12c, this parameter defaults to 2% of the buffers in the cache, or to be more precise, 2% of _db_block_buffers. For details, check Jonathan Lewis’s Small tables. Another very good one is Tanel Poder’s Optimizer statistics-driven direct path read decision for full table scans.

Otherwise, we call the table big.


In 12.1.0.2, a DBA can enable full database caching with the following command: ALTER DATABASE FORCE FULL DATABASE CACHING;

Under normal operation, the Oracle database decides what data to cache in the buffer cache. If there is not enough room, data can be aged out of the cache. If Oracle determines that the buffer cache is big enough to hold the entire database, it will cache all blocks.

However, we seldom have enough memory to cache the whole database. In this case, the parameter db_big_table_cache_percent_target can be used, and of course for “big tables” only.

The default value is 0, so under normal circumstances automatic big table caching is not enabled. You can decide on your own whether this is Poor Man’s In-Memory Caching, but even if it is not, the benefits are rather questionable. Let us see why.

We have 3 tables: SALES, OLD_SALES and VERY_OLD_SALES.

[screenshot: sales_objects]

SALES and OLD_SALES are huge, and SALES is in-memory, that is, the in-memory option has been enabled for the SALES table. VERY_OLD_SALES is a rather small table, but still considered big by Oracle, as the small table threshold is 401 (2% of 20090):

select a.ksppinm name, b.ksppstvl value 
from x$ksppi a, x$ksppcv b 
where a.indx = b.indx 
and a.ksppinm like '%small_table%';

_small_table_threshold
401

select a.ksppinm name, b.ksppstvl value 
from x$ksppi a, x$ksppcv b 
where a.indx = b.indx 
and a.ksppinm like '%block_buffer%';

_db_block_buffers
20090

Let us check whether the table is In-Memory enabled and how many blocks it has:

[screenshot: sales_tables]

3 new dynamic views give us details on big table caching:
– V$BTS_STAT
– V$BT_SCAN_CACHE shows the parameters and status of the big table cache section.
– V$BT_SCAN_OBJ_TEMPS shows the active objects currently tracked by the big table cache.
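For example, the object temperatures can be monitored like this:

-- objects tracked by the big table cache, hottest first
SELECT dataobj#, size_in_blks, temperature, policy, cached_in_mem
FROM v$bt_scan_obj_temps
ORDER BY temperature DESC;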

As SALES is an IM-table (DATAOBJ# is 91985), regardless of how often you scan it, no buffers will go to the big table cache (I have set db_big_table_cache_percent_target to 50):

[screenshot: big_table_cache]

If you do a full table scan on OLD_SALES, as it is way too big for the BT-cache (= Big Table Cache), there is no use either: I have tested it, and V$BT_SCAN_OBJ_TEMPS shows nothing.

However, when I do a subset scan on OLD_SALES, the BT-cache is used, as the buffers will fit.

Now, let us look at the temperature of the tables. Oracle assigned a temperature of 2000 to VERY_OLD_SALES (DATAOBJ# is 92326) and 1000 to OLD_SALES (DATAOBJ# is 92322), which is only partially loaded into the BT-cache. After updating VERY_OLD_SALES, its temperature grows to 5000:

[screenshot: temperature]

The more SQL & DML we do on the table, the higher the temperature gets. A detailed study by Mahmoud Hatem can be found here.

In short:

– It is not possible to control this feature on a per-table basis; there are no attributes/parameters like KEEP or STORAGE clauses on table level.
– If we dedicate real memory just for this type of caching, it will not always be used: tables too big to fit will never be cached in full, only subset retrievals benefit.
– The memory could be more useful for the IM option, as the benefits there are much more clear and obvious.
– The BT-cache is also populated by full index or range scans.
– The CBO is not aware that segments are being cached.
– One cannot manually flush the BT-cache, but it is gone after flushing the buffer cache.
– If the BT-cache is not big enough for several objects, the ones with the highest temperature are given priority.

Big table cache: an optional, integrated portion of the database buffer cache that uses a temperature-based, object-level replacement algorithm instead of the traditional LRU-based, block-level replacement algorithm.

The 7 Initialization Parameters Related to the IM Column Store

In Database options, DBA, Init.ora, Oracle database on January 14, 2015 at 14:42

I often see and hear claims about how many times faster Oracle 12c IM is compared to whatever. Although the numbers are realistic rather than wishful thinking, I must say that all of that strongly depends on how the IM option has been set up.

Besides the way you set up the tables, the MVs and the tablespaces, there are 7 init.ora parameters that directly affect the behavior, performance and speed of the In-Memory option.


1. INMEMORY_SIZE

Default: 0

This initialization parameter sets the size of the IM column store in a database instance.

The default value is 0, which means that the IM column store is not used. This initialization parameter must be set to a non-zero value to enable the IM column store. If the parameter is set to a non-zero value, then the minimum setting is 100M.

In a multitenant environment, the setting for this parameter in the root is the setting for the entire multitenant container database (CDB). This parameter can also be set in each pluggable database (PDB) to limit the maximum size of the IM column store for each PDB. The sum of the PDB values can be less than, equal to, or greater than the CDB value. However, the CDB value is the maximum amount of memory available in the IM column store for the entire CDB, including the root and all of the PDBs. Unless this parameter is specifically set for a PDB, the PDB inherits the CDB value, which means that the PDB can use all of the available IM column store for the CDB.

Julian’s tip: the bigger the better: more stuff will fit in.

2. INMEMORY_MAX_POPULATE_SERVERS

Default: Half the effective CPU thread count or the PGA_AGGREGATE_TARGET value divided by 512M, whichever is less.

This initialization parameter specifies the maximum number of background populate servers to use for IM column store population, so that these servers do not overload the rest of the system. Set this parameter to an appropriate value based on the number of cores in the system.

Julian’s tip: use something between the default and its double

3. INMEMORY_FORCE

Default: DEFAULT

This initialization parameter can enable tables and materialized views for the IM column store or disable all tables and materialized views for the IM column store.

Set this parameter to DEFAULT, the default value, to allow the INMEMORY or NO INMEMORY attributes on the individual database objects determine if they will be populated in the IM column store.

Set this parameter to OFF to specify that all tables and materialized views are disabled for the IM column store.

Julian’s tip: no brainer

4. INMEMORY_CLAUSE_DEFAULT

Default: an empty string

This initialization parameter enables you to specify a default IM column store clause for new tables and materialized views.

Leave this parameter unset or set it to an empty string to specify that there is no default IM column store clause for new tables and materialized views. Setting the value of this parameter to NO INMEMORY has the same effect as setting it to the default value (the empty string).

Set this parameter to a valid INMEMORY clause to specify that the clause is the default for all new tables and materialized views. The clause can include valid clauses for IM column store compression methods and data population options.

If the clause starts with INMEMORY, then all new tables and materialized views, including those without an INMEMORY clause, are populated in the IM column store. If the clause omits INMEMORY, then it only applies to new tables and materialized views that are enabled for the IM column store with an INMEMORY clause during creation.

Julian’s tip: start with an empty string and, depending on your IM strategy, modify it later on.

5. INMEMORY_QUERY

Default: ENABLE

This initialization parameter specifies whether in-memory queries are allowed. Set this parameter to ENABLE, the default value, to allow queries to access database objects populated in the IM column store, or set this parameter to DISABLE to disable access to the database objects populated in the IM column store.

Julian’s tip: very useful parameter on session level for testing how fast the IM option is

6. INMEMORY_TRICKLE_REPOPULATE_SERVERS_PERCENT

Default: 1

This initialization parameter limits the maximum number of background populate servers used for IM column store repopulation, as trickle repopulation is designed to use only a small percentage of the populate servers. The value for this parameter is a percentage of the INMEMORY_MAX_POPULATE_SERVERS initialization parameter value. For example, if this parameter is set to 10 and INMEMORY_MAX_POPULATE_SERVERS is set to 10, then on average one core is used for trickle repopulation.

Julian’s tip: increase it to a level based on your CPU cores and your need for fast repopulation.

7. OPTIMIZER_INMEMORY_AWARE

Default: TRUE

This initialization parameter enables or disables all of the optimizer cost model enhancements for in-memory. Setting the parameter to FALSE causes the optimizer to ignore the in-memory property of tables during the optimization of SQL statements.

Julian’s tip: good for testing purposes to see how IM changes the behaviour of the CBO

Interesting test case for the same SQL:

[screenshot: select_IM]

I would always recommend PARALLEL_DEGREE_POLICY = AUTO when using the 12c in-memory option.
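To wrap up, here is a minimal sketch of a typical setup (SALES is a made-up table):

-- reserve a 2G column store; the parameter is static, so a restart is needed
ALTER SYSTEM SET INMEMORY_SIZE = 2G SCOPE=SPFILE;

-- after the restart: mark the table for population
ALTER TABLE sales INMEMORY MEMCOMPRESS FOR QUERY HIGH PRIORITY HIGH;

-- verify the population progress
SELECT segment_name, populate_status, bytes_not_populated
FROM v$im_segments;

-- tip 5 in practice: disable IM queries on session level to compare performance
ALTER SESSION SET INMEMORY_QUERY = DISABLE;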

And finally some interesting articles on 12c IM:

Off May Not Be Totally Off: Is Oracle In-Memory Database 12c (12.1.0.2.0) Faster? by Craig Shallahamer

12c In-Memory in PDB by Kerry Osborne

Getting started with Oracle Database In-Memory Part I – Installing & Enabling by Maria Colgan

In-Memory Column Store in Oracle Database 12c Release 1 (12.1.0.2) by Tim Hall

Our take on the Oracle Database 12c In-Memory Option by Tanel Poder

Twelve new DBA commands for Oracle 12c

In DBA, Init.ora, Oracle database, Oracle internals, RMAN on January 25, 2014 at 10:47

With more than 500 new features, Oracle Database 12c is designed to give us exactly what we need for easy database administration, consolidation, security and availability. And I cannot deny the complexity it brings into our pluggable world…


But straight to the point. Here are 12 new commands for Oracle DBAs:

1. impdp … transform=disable_archive_logging:Y

The new TRANSFORM option DISABLE_ARCHIVE_LOGGING appended to the impdp command line will cause Oracle Data Pump to disable (almost completely) redo logging when loading data into tables and when creating indexes. Check this example.

2. select xmltransform(dbms_qopatch.get_opatch_lsinventory, dbms_qopatch.get_opatch_xslt) from dual;

With the new DBMS_QOPATCH package we can view the installed database patches:

GET_OPATCH_BUGS: Provides a bugs list for a patch
GET_OPATCH_COUNT: Provides the total number of installed patches
GET_OPATCH_LSINVENTORY: Returns whole opatch inventory
GET_OPATCH_PREQS: Provides prerequisite patches for a given patch

Check this example.

3. grant sysdba, sysoper, sysbackup, sysdg, syskm to c##julian identified by password container=all;

If we want to create a common user in the CDB, we must prefix it with c##. This is because the default common user prefix in 12.1.0 is c##. However, the _common_user_prefix parameter controls that prefix; one option is to set it to NULL.

SYSBACKUP, SYSDG and SYSKM are also database users. SYSDG is used for Oracle Data Guard operations (startup, shutdown and flashback database), while SYSKM is used for encryption key management operations; it can connect to the database even if the database is not open.

4. create table porter(owner varchar2(128), object_name varchar2(128), created date invisible);

[screenshot: porter]

This “trick” does not work with temporary tables, external tables and cluster tables. Check this example.

5. alter system set max_string_size=extended;

[screenshot: extended_size]

The new init.ora parameter MAX_STRING_SIZE (default is standard) controls the maximum size of the VARCHAR2, NVARCHAR2, and RAW data types. It can be increased from 4,000 to 32767 bytes. Check this example.
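Note that the parameter cannot simply be changed while the database is open; the documented procedure (sketched here for a non-CDB) is roughly:

SHUTDOWN IMMEDIATE
STARTUP UPGRADE
ALTER SYSTEM SET max_string_size=extended;
@?/rdbms/admin/utl32k.sql
SHUTDOWN IMMEDIATE
STARTUP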

6. alter database move datafile 'old_location' to 'new_location';

[screenshot: move_online]

In 12.1.0, a data file can now be moved online while it is open and being accessed; all of the following are now possible:

– Renaming an Online Datafile
– Relocating an Online Datafile
– Copying an Online Datafile
– Relocating an Online Datafile and Overwriting an Existing File
– Relocating an Online Datafile to Oracle ASM
– Moving a File from One ASM Location to Another ASM Location

7. recover table emp until time 'sysdate-1';

[screenshot: table_recovery]

RMAN enables us to recover one or more tables or table partitions to a specified point in time without affecting the remaining database objects. We can use previously-created RMAN backups to recover tables and table partitions to a specified point in time. Check this example.

8. alter session set row archival visibility = active;
alter session set row archival visibility = all;

In-Database Archiving enables us to archive rows within a table by marking them as inactive. These inactive rows are in the database and can be optimized using compression, but are not visible to an application. To manage In-Database Archiving for a table, we must enable ROW ARCHIVAL for the table and manipulate the ORA_ARCHIVE_STATE hidden column of the table. Check my old blog post.
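A minimal sketch, using a made-up EMP table:

-- enable row archival and archive the old rows
ALTER TABLE emp ROW ARCHIVAL;

UPDATE emp
SET ora_archive_state = DBMS_ILM.ARCHIVESTATENAME(1)
WHERE hiredate < DATE '2000-01-01';

-- archived rows are now invisible to queries; make them visible again:
ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ALL;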

9. alter session set temp_undo_enabled = true;

Check this example.

10. truncate table dept cascade;

TRUNCATE TABLE is enhanced in the following way: a new clause CASCADE allows you to recursively truncate child tables. Check this example.

11. create sequence temp_order_seq start with 100 increment by 10 session;

The new clause SESSION allows you to create a session sequence. Check this example.

12. $ORACLE_HOME/perl/bin/perl catctl.pl -n 8 -l $ORACLE_HOME/diagnostics catupgrd.sql

Oracle Database 12c introduces the Parallel Upgrade Utility (catctl.pl), which enables components that do not have to be upgraded in a specific order to be upgraded at the same time, taking full advantage of CPU capacity. The Parallel Upgrade Utility reduces the total amount of time it takes to perform an upgrade by loading the database dictionary in parallel using multiple SQL processes to upgrade the database.

-n specifies the number of processes to use for parallel operations (default = 4). The maximum is 8. Set this parameter to 0 to run catctl.pl in serial mode.

The Newest init.ora Parameter in the Oracle Database

In Database tuning, DBA, Init.ora, Oracle database on November 6, 2011 at 14:18

Altogether there are 347 init.ora parameters in Oracle Database 11.2.0.3. Looking into all documented and undocumented parameters in 11.2.0.3, we witness a much higher number: 2752.

11.2.0.3 does not sparkle with any fancy features, and no one is expecting it to, but there is one new init.ora parameter added by Oracle: awr_snapshot_time_offset.

Why is it needed? The answer is very clear in case you have a server with several databases running on it. Consolidation has been fashionable for the past decade, and many scattered corporate databases have perched on the same computer.

You might have observed those spikes at the top of every hour when all the AWR snapshots are taken. How do you avoid this? Say you have 12 databases on the server and you want snapshots taken on an hourly basis, but staggered by 5 minutes per database.

Quite simply with awr_snapshot_time_offset: set it to n*300 (seconds) for the n-th database (n = 1..12).
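For example, for the third database on the server:

-- snapshots will be taken at 15 minutes past the hour (3*300 seconds offset)
ALTER SYSTEM SET awr_snapshot_time_offset=900 SCOPE=BOTH;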

The documentation of awr_snapshot_time_offset gives additional details for the parameter.

And finally, here is how one can search for the values of certain parameters, for example Bloom filter related:

select a.ksppinm name, b.ksppstvl value 
from x$ksppi a, x$ksppcv b 
where a.indx = b.indx 
and a.ksppinm like '%bloom%';