
Archive for the ‘Init.ora’ Category

What else besides Automatic Indexing is new in Oracle Database 19c?

In DBA, Init.ora, New features, Oracle database, Security and auditing, SQL on April 23, 2019 at 10:53

Automatic Indexing (AI) is probably the most important new feature of Oracle Database 19c, and it is arguably one of the best examples of AI in the IT industry. But there is much more that came along with 19c.

Here is my choice of the top 10 least known (for now at least) new features of Oracle Database 19.2 (in no order whatsoever):

1. Security: if the database password file name or location has been changed and the metadata cache needs to be refreshed with the details of the updated database password file, you can do it simply with the following command:

SQL> alter system flush passwordfile_metadata_cache;

System altered.

In 19c, most of the Oracle Database supplied schema-only accounts now have their passwords removed to prevent users from authenticating to these accounts. But DBAs can still assign passwords to the default schema-only accounts.

2. SQL: the LISTAGG aggregate function now supports duplicate elimination by using the new DISTINCT keyword.
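
A minimal sketch, assuming a classic EMP table with a JOB column (both hypothetical here):

SQL> select DEPTNO,
            listagg(distinct JOB, ', ') within group (order by JOB) JOBS
     from EMP
     group by DEPTNO;

Without DISTINCT, a department with three clerks would list CLERK three times; with it, each job appears once.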

There are also new bit-vector SQL operators that can be used to speed up COUNT DISTINCT operations within a SQL query.

3. Active Data Guard DML redirection: you can now run DML on an Active Data Guard standby database. When run on the standby side, the update is passed to the primary database where it is executed; the resulting redo of the transaction updates the standby, after which control is returned to the application.
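
DML redirection is controlled by the new ADG_REDIRECT_DML initialization parameter, or per session; a short sketch (run on the standby side):

SQL> alter system set adg_redirect_dml = true;

-- or, for the current session only:
SQL> alter session enable adg_redirect_dml;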

4. There are 6 new init.ora parameters in 19c. DBAs can now tune the amount of time Data Guard waits on I/O by using DATA_GUARD_MAX_IO_TIME and DATA_GUARD_MAX_LONGIO_TIME. You can check details for all six (a query sketch follows the list):

ADG_REDIRECT_DML
DATA_GUARD_MAX_IO_TIME
DATA_GUARD_MAX_LONGIO_TIME
ENABLE_IMC_WITH_MIRA
LOB_SIGNATURE_ENABLE
MAX_DATAPUMP_PARALLEL_PER_JOB
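
A quick way to inspect them:

select NAME, VALUE, ISDEFAULT
from V$PARAMETER
where NAME in ('adg_redirect_dml', 'data_guard_max_io_time',
               'data_guard_max_longio_time', 'enable_imc_with_mira',
               'lob_signature_enable', 'max_datapump_parallel_per_job');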

5. You can now create Materialized Views containing bitmap-based COUNT(DISTINCT) functions. This means that the MVs are based on SQL aggregate functions that use bitmap representation to express the computation of COUNT(DISTINCT) operations.

SQL> create materialized view MV_SALES as
select CLIENT, 
BITMAP_CONSTRUCT_AGG(BITMAP_BIT_POSITION(PRICE),'RAW') bm_price
from SALES
group by CLIENT,BITMAP_BUCKET_NUMBER(PRICE);  

Materialized view created.
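
The per-client distinct price count can then be computed from the bitmaps; a sketch using the companion 19c bitmap functions:

select CLIENT,
       BITMAP_COUNT(BITMAP_OR_AGG(bm_price)) price_count  -- = count(distinct PRICE)
from MV_SALES
group by CLIENT;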

6. Looks like there is now automatic resolution of SQL plan regressions: “SQL plan management searches for SQL statements in the Automatic Workload Repository (AWR). Prioritizing by highest load, it looks for alternative plans in all available sources, adding better-performing plans to the SQL plan baseline. Oracle Database also provides a plan comparison facility and improved hint reporting.”

7. Real-Time Statistics is also a new cool feature. “Oracle automatically gathers online statistics during conventional DML operations. Statistics can go stale between execution of DBMS_STATS statistics gathering jobs. By gathering some statistics automatically during DML operations, the database augments the statistics gathered by DBMS_STATS. Fresh statistics enable the optimizer to produce more optimal plans.”
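
Statistics gathered this way are flagged in the dictionary, so you can spot them; a sketch, assuming some conventional DML has already happened:

select TABLE_NAME, COLUMN_NAME, NOTES, LAST_ANALYZED
from USER_TAB_COL_STATISTICS
where NOTES = 'STATS_ON_CONVENTIONAL_DML';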

8. Hybrid Partitioned Tables: now large portions of a table can reside in external partitions, for example in the Oracle Cloud. With this new feature, you can also easily move non-active partitions to external files, such as Oracle Data Pump files, for a cheaper storage solution. Hybrid partitioned tables support all existing external table types for external partitions: ORACLE_DATAPUMP, ORACLE_LOADER, ORACLE_HDFS, ORACLE_HIVE.
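
A minimal sketch of a hybrid partitioned table; the SALES_DIR directory and the CSV file are hypothetical:

create table SALES_HPT (
  SALE_ID   number,
  SALE_DATE date,
  AMOUNT    number
)
external partition attributes (
  type oracle_loader
  default directory SALES_DIR
  access parameters (
    records delimited by newline
    fields terminated by ','
  )
  reject limit unlimited
)
partition by range (SALE_DATE) (
  -- cold data lives in an external file
  partition P_2017 values less than (date '2018-01-01')
    external location ('sales_2017.csv'),
  -- hot data stays in a regular internal partition
  partition P_CURRENT values less than (maxvalue)
);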

9. Data Pump:

– Oracle Data Pump allows tablespaces to stay read-only during TTS import.
– Oracle Data Pump can work in test mode for transportable tablespaces.
– Oracle Data Pump supports resource usage limitations with the introduction of two new parameters: MAX_DATAPUMP_JOBS_PER_PDB and MAX_DATAPUMP_PARALLEL_PER_JOB.
– Data Pump no longer enables secure, password-protected roles by default. Beginning with 19c, you must explicitly enable password-protected roles for an individual export or import job via the new command-line parameter ENABLE_SECURE_ROLES=YES | NO.
– The new Data Pump command-line parameter CREDENTIAL enables secure import into a managed service from dump files in the Oracle Object Store Service.
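
A sketch of such an import; the connect string, credential name and URL are all hypothetical, and the credential itself is created beforehand with DBMS_CLOUD.CREATE_CREDENTIAL:

impdp admin/password@my_adw_high \
  credential=my_obj_store_cred \
  dumpfile=https://objectstorage.eu-frankfurt-1.oraclecloud.com/n/mytenancy/b/mybucket/o/sales.dmp \
  tables=SALES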

10. SQL Quarantine: “SQL statements that are terminated by Oracle Database Resource Manager due to their excessive consumption of CPU and I/O resources can be automatically quarantined. The execution plans associated with the terminated SQL statements are quarantined to prevent them from being executed again.”

Check out the new Oracle package DBMS_SQLQ – cool stuff!
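
A hedged sketch of quarantining one runaway statement (the SQL_ID is hypothetical):

declare
  qname varchar2(128);
begin
  -- create a quarantine configuration for the statement's plan
  qname := dbms_sqlq.create_quarantine_by_sql_id(sql_id => '8vu7s907prbgr');
  -- set the CPU time threshold for the quarantine to 5 seconds
  dbms_sqlq.alter_quarantine(
    quarantine_name => qname,
    parameter_name  => 'CPU_TIME',
    parameter_value => '5');
end;
/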


Table Temperature, Big Tables In-Memory and Automatic Big Table Caching

In DBA, Init.ora, Oracle database on April 3, 2015 at 16:10

If you have never heard of Oracle object temperature and how this relates to caching, in-memory and big tables, here is a short note perhaps worth reading.

And all this is purely Oracle database related, nothing to do with Oracle Lighting Products.


Background: Oracle 12.1.0.2 came with 5 new init.ora parameters (actually 12 but the other 7 are In-Memory related):

1. common_user_prefix
2. dbfips_140
3. enable_goldengate_replication
4. exafusion_enabled
5. db_big_table_cache_percent_target

The first 4 are rather on-off type of parameters but the last one, db_big_table_cache_percent_target, is worth looking into.

A table is considered small if the number of blocks in the segment is lower than or equal to the value of the parameter _small_table_threshold. In 12c, this parameter defaults to 2% of the buffers in the cache; to be more precise, it is 2% of _db_block_buffers. For details, check Jonathan Lewis’s Small tables. Another very good one is by Tanel Poder, entitled Optimizer statistics-driven direct path read decision for full table scans.

Otherwise, we call the table big.


In 12.1.0.2, a DBA can enable full database caching with the following command: ALTER DATABASE FORCE FULL DATABASE CACHING;

Under normal running, the Oracle database decides what data to cache in the buffer cache. If there is not enough room, data can be aged out of the cache. If Oracle determines that the buffer cache is big enough to hold the entire database, it will cache all blocks.

However, we seldom have enough memory to cache the whole database. In this case, the parameter db_big_table_cache_percent_target can be used. And of course, for “big tables” only.

The default value is 0, so under normal circumstances automatic big table caching is not enabled. You can decide on your own if this is Poor Man’s In-Memory Caching, but even if it is not, the benefits are rather questionable. Let us see why.
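
Enabling it is an online one-liner; 50 is the value I use in the test below:

SQL> alter system set db_big_table_cache_percent_target = 50;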

We have 3 tables: SALES, OLD_SALES and VERY_OLD_SALES.


SALES and OLD_SALES are huge, and SALES is in-memory, that is, the In-Memory option has been enabled for the SALES table. VERY_OLD_SALES is a rather small table, but still considered big by Oracle, as the small table threshold is 401 (2% of 20090):

select a.ksppinm name, b.ksppstvl value 
from x$ksppi a, x$ksppcv b 
where a.indx = b.indx 
and a.ksppinm like '%small_table%';

_small_table_threshold
401

select a.ksppinm name, b.ksppstvl value 
from x$ksppi a, x$ksppcv b 
where a.indx = b.indx 
and a.ksppinm like '%block_buffer%';

_db_block_buffers
20090

Let us see if the table is In-Memory enabled and what is the number of its blocks:

[screenshot: INMEMORY attribute and block counts of the three tables]

3 new dynamic views give us details on big table caching:
– V$BTS_STAT
– V$BT_SCAN_CACHE shows the parameters and status of the big table cache section.
– V$BT_SCAN_OBJ_TEMPS shows the active objects currently tracked by the big table cache.

As SALES is an IM-table (DATAOBJ# is 91985), regardless of how often you scan it, no buffers will go to the big table cache (I have set db_big_table_cache_percent_target to 50):

[screenshot: big table cache views after scanning SALES]

If you do a full table scan on OLD_SALES, as it is way too big for the BT-cache (= Big Table Cache), there is no use either: I have tested it, and V$BT_SCAN_OBJ_TEMPS shows nothing.

However, when I do a subset scan on OLD_SALES, then the BT-cache is used as the buffers will fit.

Now, let us look at the temperature of the tables. Oracle assigned a temperature of 2000 to VERY_OLD_SALES (DATAOBJ# is 92326) and 1000 to OLD_SALES (DATAOBJ# is 92322), which is only partially loaded into the BT-cache. After updating VERY_OLD_SALES, its temperature grows to 5000:

[screenshot: object temperatures in V$BT_SCAN_OBJ_TEMPS]
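
To track the temperatures yourself, query V$BT_SCAN_OBJ_TEMPS directly:

select DATAOBJ#, SIZE_IN_BLKS, TEMPERATURE, POLICY, CACHED_IN_MEM
from V$BT_SCAN_OBJ_TEMPS
order by TEMPERATURE desc;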

The more SQL & DML we do on the table, the higher the temperature gets. A detailed study by Mahmoud Hatem can be found here.

In short:

– It is not possible to control this feature on a per-table basis; there are no attributes/parameters like KEEP or STORAGE clauses on table level.
– If we dedicate real memory just for this type of caching, it will not always be used: mega huge tables will not fit anyway, so it only helps subset retrievals.
– The memory could be more useful for the IM option, where the benefits are much more clear and obvious.
– The BT-cache is also populated by full index or range scans.
– The CBO is not aware that segments are being cached.
– One cannot manually flush the BT-cache, but it is gone after flushing the buffer cache.
– If the BT-cache is not big enough for several objects, then the ones with the highest temperature are given priority.

Big table cache: an optional, integrated portion of the database buffer cache that uses a temperature-based, object-level replacement algorithm instead of the traditional LRU-based, block-level replacement algorithm.

The 7 Initialization Parameters Related to the IM Column Store

In Database options, DBA, Init.ora, Oracle database on January 14, 2015 at 14:42

I often see and hear claims about how many times faster Oracle 12c IM is compared to whatever. Although the numbers are realistic rather than wishful thinking, I must say that all of that strongly depends on how the IM option has been set up.

Besides the way you set up the tables, the MVs and the tablespaces, there are 7 init.ora parameters that directly affect the behavior, performance and speed of the In-Memory option.


1. INMEMORY_SIZE

Default: 0

This initialization parameter sets the size of the IM column store in a database instance.

The default value is 0, which means that the IM column store is not used. This initialization parameter must be set to a non-zero value to enable the IM column store. If the parameter is set to a non-zero value, then the minimum setting is 100M.

In a multitenant environment, the setting for this parameter in the root is the setting for the entire multitenant container database (CDB). This parameter can also be set in each pluggable database (PDB) to limit the maximum size of the IM column store for each PDB. The sum of the PDB values can be less than, equal to, or greater than the CDB value. However, the CDB value is the maximum amount of memory available in the IM column store for the entire CDB, including the root and all of the PDBs. Unless this parameter is specifically set for a PDB, the PDB inherits the CDB value, which means that the PDB can use all of the available IM column store for the CDB.

Julian’s tip: the bigger the better: more stuff will fit in.
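
In 12.1.0.2 the parameter is static, so a sketch of enabling it looks like this (4G is an arbitrary example):

SQL> alter system set inmemory_size = 4G scope=spfile;

A restart of the instance is needed for the new column store size to take effect.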

2. INMEMORY_MAX_POPULATE_SERVERS

Default: Half the effective CPU thread count or the PGA_AGGREGATE_TARGET value divided by 512M, whichever is less.

This initialization parameter specifies the maximum number of background populate servers to use for IM column store population, so that these servers do not overload the rest of the system. Set this parameter to an appropriate value based on the number of cores in the system.

Julian’s tip: use something between the default and its double

3. INMEMORY_FORCE

Default: DEFAULT

This initialization parameter can enable tables and materialized views for the IM column store or disable all tables and materialized views for the IM column store.

Set this parameter to DEFAULT, the default value, to let the INMEMORY or NO INMEMORY attributes on the individual database objects determine if they will be populated in the IM column store.

Set this parameter to OFF to specify that all tables and materialized views are disabled for the IM column store.

Julian’s tip: no brainer

4. INMEMORY_CLAUSE_DEFAULT

Default: an empty string

This initialization parameter enables you to specify a default IM column store clause for new tables and materialized views.

Leave this parameter unset or set it to an empty string to specify that there is no default IM column store clause for new tables and materialized views. Setting the value of this parameter to NO INMEMORY has the same effect as setting it to the default value (the empty string).

Set this parameter to a valid INMEMORY clause to specify that the clause is the default for all new tables and materialized views. The clause can include valid clauses for IM column store compression methods and data population options.

If the clause starts with INMEMORY, then all new tables and materialized views, including those without an INMEMORY clause, are populated in the IM column store. If the clause omits INMEMORY, then it only applies to new tables and materialized views that are enabled for the IM column store with an INMEMORY clause during creation.
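
For example, a hedged sketch that makes every new table and MV populate with the lightest query compression:

SQL> alter system set inmemory_clause_default = 'INMEMORY MEMCOMPRESS FOR QUERY LOW' scope=both;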

Julian’s tip: start with an empty string and, depending on your IM strategy, modify it later on

5. INMEMORY_QUERY

Default: ENABLE

This initialization parameter specifies whether in-memory queries are allowed. Set this parameter to ENABLE, the default value, to allow queries to access database objects populated in the IM column store, or set this parameter to DISABLE to disable access to the database objects populated in the IM column store.

Julian’s tip: very useful parameter on session level for testing how fast the IM option is
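
For example, to compare timings with and without the column store within one session:

SQL> alter session set inmemory_query = disable;
-- run the query and note the elapsed time
SQL> alter session set inmemory_query = enable;
-- run the same query again and compare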

6. INMEMORY_TRICKLE_REPOPULATE_SERVERS_PERCENT

Default: 1

This initialization parameter limits the maximum number of background populate servers used for IM column store repopulation, as trickle repopulation is designed to use only a small percentage of the populate servers. The value for this parameter is a percentage of the INMEMORY_MAX_POPULATE_SERVERS initialization parameter value. For example, if this parameter is set to 10 and INMEMORY_MAX_POPULATE_SERVERS is set to 10, then on average one core is used for trickle repopulation.

Julian’s tip: increase it to a level based on your CPU cores and the need for fast repopulation.

7. OPTIMIZER_INMEMORY_AWARE

Default: TRUE

This initialization parameter enables or disables all of the optimizer cost model enhancements for in-memory. Setting the parameter to FALSE causes the optimizer to ignore the in-memory property of tables during the optimization of SQL statements.

Julian’s tip: good for testing purposes to see how IM changes the behaviour of the CBO

Interesting test case for the same SQL:

[screenshot: test case running the same SQL with the In-Memory option on and off]

I would always recommend PARALLEL_DEGREE_POLICY = AUTO when using the 12c in-memory option.

And finally some interesting articles on 12c IM:

Off May Not Be Totally Off: Is Oracle In-Memory Database 12c (12.1.0.2.0) Faster? by Craig Shallahamer

12c In-Memory in PDB by Kerry Osborne

Getting started with Oracle Database In-Memory Part I – Installing & Enabling by Maria Colgan

In-Memory Column Store in Oracle Database 12c Release 1 (12.1.0.2) by Tim Hall

Our take on the Oracle Database 12c In-Memory Option by Tanel Poder

Twelve new DBA commands for Oracle 12c

In DBA, Init.ora, Oracle database, Oracle internals, RMAN on January 25, 2014 at 10:47

With more than 500 new features, Oracle Database 12c is designed to give us exactly what we need for easy database administration, consolidation, security and availability. And I cannot deny the complexity it brings into our pluggable world…


But straight to the point. Here are 12 new commands for Oracle DBAs:

1. impdp … transform=disable_archive_logging:Y

The new TRANSFORM option DISABLE_ARCHIVE_LOGGING appended to the impdp command line will cause Oracle Data Pump to disable (almost completely) redo logging when loading data into tables and when creating indexes. Check this example.

2. select xmltransform(dbms_qopatch.get_opatch_lsinventory, dbms_qopatch.get_opatch_xslt) from dual;

With the new DBMS_QOPATCH package we can view the installed database patches:

GET_OPATCH_BUGS: Provides a bugs list for a patch
GET_OPATCH_COUNT: Provides the total number of installed patches
GET_OPATCH_LSINVENTORY: Returns whole opatch inventory
GET_OPATCH_PREQS: Provides prerequisite patches for a given patch

Check this example.

3. grant sysdba, sysoper, sysbackup, sysdg, syskm to c##julian identified by password container=all;

If we want to create a common user in the CDB, we must prefix it with c##. It is because the default common user prefix in 12.1.0 is c##. However, the _common_user_prefix parameter controls that prefix. One option is to set it to NULL.

SYSBACKUP, SYSDG and SYSKM are also database users. SYSDG is used for Oracle Data Guard operations (startup, shutdown and flashback database), while SYSKM is used for encryption key management operations and can connect to the database even if the database is not open.

4. create table porter(owner varchar2(128), object_name varchar2(128), created date invisible);


This “trick” does not work with temporary tables, external tables and cluster tables. Check this example.

5. alter system set max_string_size=extended;


The new init.ora parameter MAX_STRING_SIZE (default is STANDARD) controls the maximum size of the VARCHAR2, NVARCHAR2, and RAW data types. It can be increased from 4,000 to 32,767 bytes. Check this example.

6. alter database move datafile 'old_location' to 'new_location';


In 12.1.0, a data file can now be moved online while it is open and being accessed; all of the following are now possible:

– Renaming an Online Datafile
– Relocating an Online Datafile
– Copying an Online Datafile
– Relocating an Online Datafile and Overwriting an Existing File
– Relocating an Online Datafile to Oracle ASM
– Moving a File from One ASM Location to Another ASM Location

7. recover table emp until time 'sysdate-1';


RMAN enables us to recover one or more tables or table partitions to a specified point in time without affecting the remaining database objects. We can use previously-created RMAN backups to recover tables and table partitions to a specified point in time. Check this example.

8. alter session set row archival visibility = active;
alter session set row archival visibility = all;

In-Database Archiving enables us to archive rows within a table by marking them as inactive. These inactive rows are in the database and can be optimized using compression, but are not visible to an application. To manage In-Database Archiving for a table, we must enable ROW ARCHIVAL for the table and manipulate the ORA_ARCHIVE_STATE hidden column of the table. Check my old blog post.
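
A minimal sketch (table name and values hypothetical):

create table BIG_ORDERS (ORDER_ID number, STATUS varchar2(10)) row archival;

-- archive one row: with visibility = ACTIVE it disappears from query results
update BIG_ORDERS set ORA_ARCHIVE_STATE = '1' where ORDER_ID = 42;

-- bring it back
update BIG_ORDERS set ORA_ARCHIVE_STATE = '0' where ORDER_ID = 42;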

9. alter session set temp_undo_enabled = true;

Temporary undo keeps the undo generated by DML on temporary tables in the temporary tablespace, so no redo is generated for that undo. Check this example.

10. truncate table dept cascade;

TRUNCATE TABLE is enhanced in the following way: the new clause CASCADE allows you to recursively truncate child tables, provided their foreign keys are defined with ON DELETE CASCADE (a sketch follows). Check this example.
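
A minimal sketch, with hypothetical parent and child tables:

create table DEPT2 (DEPTNO number primary key);
create table EMP2 (
  EMPNO  number primary key,
  DEPTNO number references DEPT2 on delete cascade  -- required for TRUNCATE ... CASCADE
);

truncate table DEPT2 cascade;  -- truncates EMP2 as well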

11. create sequence temp_order_seq start with 100 increment by 10 session;

The new clause SESSION allows you to create a session sequence. Check this example.

12. $ORACLE_HOME/perl/bin/perl catctl.pl -n 8 -l $ORACLE_HOME/diagnostics catupgrd.sql

Oracle Database 12c introduces the Parallel Upgrade Utility (catctl.pl), which enables components that do not have to be upgraded in a specific order to be upgraded at the same time, taking full advantage of CPU capacity. The Parallel Upgrade Utility reduces the total amount of time it takes to perform an upgrade by loading the database dictionary in parallel using multiple SQL processes to upgrade the database.

-n specifies the number of processes to use for parallel operations (default = 4). The maximum is 8. Set this parameter to 0 to run catctl.pl in serial mode.

The Newest init.ora Parameter in the Oracle Database

In Database tuning, DBA, Init.ora, Oracle database on November 6, 2011 at 14:18

Altogether there are 347 init.ora parameters in Oracle Database 11.2.0.3. Looking into all documented and undocumented parameters in 11.2.0.3, we witness a high number: 2752.

11.2.0.3 does not sparkle with any fancy features, and no one is expecting it to, but there is one new init.ora parameter added by Oracle: awr_snapshot_time_offset.

Why is it needed? The answer is very clear in case you have a server with several databases running on it. Consolidation has been fashionable for the past decade, and many scattered corporate databases have perched on the same computer.

You might have observed those spikes at the top of every hour when all the AWR snapshots are taken. How do you avoid this? Say you have 12 databases on the server and you want snapshots taken on an hourly basis, but 5 minutes apart for each database.

Quite simply with awr_snapshot_time_offset: set it to n*300 (seconds) for the n-th database (n = 1..12), as in the sketch below.
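
For example, for the third database on the host:

-- offset this database's AWR snapshots by 15 minutes (3 * 300 seconds)
SQL> alter system set awr_snapshot_time_offset = 900 scope=both;

The special value 1000000 lets Oracle derive a per-database offset automatically.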

The documentation of awr_snapshot_time_offset gives additional details for the parameter.

And finally, here is how one can search for the values of certain parameters, for example Bloom filter related:

select a.ksppinm name, b.ksppstvl value 
from x$ksppi a, x$ksppcv b 
where a.indx = b.indx 
and a.ksppinm like '%bloom%';