Dontcheff

Archive for the ‘Oracle database’ Category

DBA 3.0 – Database Administration in the Cloud

In Cloud, DBA, OOW, Oracle database on September 23, 2017 at 10:35

“The interesting thing about cloud computing is that we’ve redefined cloud computing to include everything that we already do … The computer industry is the only industry that is more fashion-driven than women’s fashion.” – Larry Ellison, CTO, Oracle

DBA 1.0 -> DBA 2.0 -> DBA 3.0: the versioning of DBAs is definitely falling behind the database versions of Oracle, Microsoft, IBM, etc. Mainframe, client-server, internet, grid computing, cloud computing…

How the DBA profession changes, evolves and expands has been a topic of interest among top experts in the industry:

Penny Avril, VP of Oracle Database Product Development, stated that DBAs are being asked to understand what businesses do with data rather than just the mechanics of keeping the database healthy and running.

Kellyn Pot’Vin-Gorman claims that DBAs with advanced skills will have plenty of work to keep them busy, and that if Larry is successful in his bid to rid companies of their DBAs for a period of time, they will be very busy cleaning up the mess afterwards.

Tim Hall said that for pragmatic DBAs the role has evolved so much over the years, and will continue to do so. Such DBAs have to continue to adapt or die.

Megan Elphingstone concluded that DBA skills would be helpful, but not required in a DBaaS environment.

Jim Donahoe hosted a discussion about the state of the DBA as the cloud continues to increase in popularity.

The first time I heard about DBA 2.0 was about 10 years ago. At Oracle OpenWorld 2017 (next week or so), I will be listening to what DBA 3.0 is: how the life of a Database Administrator has changed! If you google DBA 3.0, you will most likely find information about how to play De Bellis Antiquitatis DBA 3.0. A different story…

If I can contribute something to the discussion, it is probably the observation that whenever a database vendor has automated something in the database, it has only generated more work for DBAs later on. More DBAs are needed now than ever. The growing size and complexity of IT systems definitely contributes to that need.

These DBA sessions in San Francisco are quite relevant to the DBA profession (the last one on the list will be delivered by me):

– Advance from DBA to Cloud Administrator: Wednesday, Oct 04, 2:00 p.m. – 2:45 p.m. | Moscone West – Room 3022
– Navigating Your DBA Career in the Oracle Cloud: Monday, Oct 02, 1:15 p.m. – 2:00 p.m. | Moscone West – Room 3005
– Security in Oracle Database Cloud Service: Sunday, Oct 01, 3:45 p.m. – 4:30 p.m. | Moscone South – Room 159
– How to Eliminate the Storm When Moving to the Cloud: Sunday, Oct 01, 1:45 p.m. – 2:30 p.m. | Moscone South – Room 160
– War of the Worlds: DBAs Versus Developers: Wednesday, Oct 04, 1:00 p.m. – 1:45 p.m. | Moscone West – Room 3014
– DBA Types: Sunday, Oct 01, 1:45 p.m. – 2:30 p.m. | Marriott Marquis (Yerba Buena Level) – Nob Hill A/B

And finally, a couple of quotes about databases:

– “Database Management System [Origin: Data + Latin basus “low, mean, vile, menial, degrading, counterfeit.”] A complex set of interrelational data structures allowing data to be lost in many convenient sequences while retaining a complete record of the logical relations between the missing items. — From The Devil’s DP Dictionary” ― Stan Kelly-Bootle
– “I’m an oracle of the past. I can accurately predict up to 1 minute in the future, by thoroughly investigating the last 2 years of your life. Also, I look like an old database – flat and full of useless info.” ― Will Advise, Nothing is here…

DBA Statements

In DBA, Oracle database, PL/SQL, SQL on September 5, 2017 at 11:51

“A statement is persuasive and credible either because it is directly self-evident or because it appears to be proved from other statements that are so.” Aristotle

In Oracle 12.2, there is a new view called DBA_STATEMENTS. It can help us better understand what SQL we have within our PL/SQL functions, procedures and packages.

There is very little on the Internet and nothing on Metalink about this new view.

PL/Scope was introduced with Oracle 11.1 and covered only PL/SQL. In 12.2, PL/Scope was enhanced by Oracle in order to report on the occurrences of static and dynamic SQL call sites in PL/SQL units.

PL/Scope can help you answer questions such as:
– Where and how is a column x of table y used in the PL/SQL code?
– Is the SQL in my application PL/SQL code compatible with TimesTen?
– What are the constants, variables and exceptions in my application that are declared but never used?
– Is my code at risk for SQL injection and what are the SQL statements with an optimizer hint coded in the application?
– Which SQL has a BULK COLLECT or EXECUTE IMMEDIATE clause?
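The first of these questions can already be sketched against the identifier data that PL/Scope collects. A minimal example, assuming the units were compiled with PLSCOPE_SETTINGS='IDENTIFIERS:ALL' (the PRICE column name is purely illustrative):

```sql
-- Where is column PRICE referenced in PL/SQL code?
-- Requires units compiled with PLSCOPE_SETTINGS='IDENTIFIERS:ALL';
-- the column name below is illustrative.
SELECT object_name, object_type, usage, line, col
  FROM all_identifiers
 WHERE type = 'COLUMN'
   AND name = 'PRICE'
 ORDER BY object_name, line;
```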

Details can be found in the PL/Scope Database Development Guide or at Philipp Salvisberg’s blog.

Here is an example: finding all “execute immediate” statements and all hints used in my PL/SQL units. If needed, you can limit the query to RULE hints only (for example).

1. You need to set the PLSCOPE_SETTINGS parameter and ensure SYSAUX has enough space:


SQL> SELECT SPACE_USAGE_KBYTES FROM V$SYSAUX_OCCUPANTS 
WHERE OCCUPANT_NAME='PL/SCOPE';

SPACE_USAGE_KBYTES
------------------
              1984

SQL> show parameter PLSCOPE_SETTINGS

NAME                                 TYPE        VALUE
------------------------------------ ----------- -------------------
plscope_settings                     string      IDENTIFIERS:NONE

SQL> alter system set plscope_settings='STATEMENTS:ALL' scope=both;

System altered.

SQL> show parameter PLSCOPE_SETTINGS

NAME                                 TYPE        VALUE
------------------------------------ ----------- -------------------
plscope_settings                     string      STATEMENTS:ALL

2. You must compile the PL/SQL units with the PLSCOPE_SETTINGS=’STATEMENTS:ALL’ to collect the metadata. SQL statement types that PL/Scope collects are: SELECT, UPDATE, INSERT, DELETE, MERGE, EXECUTE IMMEDIATE, SET TRANSACTION, LOCK TABLE, COMMIT, SAVEPOINT, ROLLBACK, OPEN, CLOSE and FETCH.
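For example, to collect the metadata for an already existing procedure, it is enough to recompile it while the parameter is in effect. A quick sketch (LASKE_KAIKKI is the procedure that appears in the output below):

```sql
-- Enable statement collection for this session and recompile the unit
ALTER SESSION SET plscope_settings = 'STATEMENTS:ALL';
ALTER PROCEDURE laske_kaikki COMPILE;
```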


-- start
SQL> select TYPE, OBJECT_NAME, OBJECT_TYPE, HAS_HINT, 
SUBSTR(TEXT,1,LENGTH(TEXT)-INSTR(REVERSE(TEXT), '/*') +2 ) as "HINT" 
from DBA_STATEMENTS
where TYPE='EXECUTE IMMEDIATE' or HAS_HINT='YES';

TYPE              OBJECT_NAME   OBJECT_TYPE  HAS HINT
----------------- ------------- ------------ --- ------------------
EXECUTE IMMEDIATE LASKE_KAIKKI  PROCEDURE    NO  
SELECT            LASKE_KAIKKI  PROCEDURE    YES SELECT /*+ RULE */

-- end

Check also the DBA_STATEMENTS and ALL_STATEMENTS documentation, and the blog post by Jeff Smith entitled PL/Scope in Oracle Database 12c Release 2 and Oracle SQL Developer.

But finally, here is how to regenerate the SQL statements without the hints:


-- start
SQL> select TEXT from DBA_STATEMENTS where HAS_HINT='YES';

TEXT
------------------------------------------------------------
SELECT /*+ RULE */ NULL FROM DUAL WHERE SYSDATE = SYSDATE

SQL> select 'SELECT '||
TRIM(SUBSTR(TEXT, LENGTH(TEXT) - INSTR(REVERSE(TEXT), '/*') + 2))
as "SQL without HINT"
from DBA_STATEMENTS where HAS_HINT='YES';

SQL without HINT
-------------------------------------------------------------
SELECT NULL FROM DUAL WHERE SYSDATE = SYSDATE

-- end

Cloud Nine

In Cloud, DBA, IaaS, Oracle database on May 3, 2017 at 16:58

“Get happiness out of your work or you may never know what happiness is.” — Elbert Hubbard

According to a recent Fortune Magazine article entitled “Amazon Data Center Chief schools Oracle CEO on Cloud claims“, AWS executive Andy Jassy said (at AWS re:Invent two years ago) that every database customer he talks to is unhappy with their vendor: “I haven’t met a database customer that is not looking to flee their vendor.”

Another interesting article by James Hamilton, entitled “How many Data Centers needed world-wide”, discusses more or less the same topic; the comments that follow it are worth reading too. Also consider reading these extensive performance results about Cloud performance and TCO of the Oracle database.

A young and extremely smart analyst from my company asked me last week: “Why is the Oracle database better than MySQL or MongoDB?”. Tough question, right? You may ask the same about DB2 or SQL Server. All databases have their pros and cons. And we as people have our preferences, based on experience, knowledge and prejudices.

If you try to find the explanation of the quote at the top, you will very likely end up with this one: “You have to spend most of your life working, so if you’re unhappy at your work you’re likely to always be unhappy”.

So, I have been happy (if that is the right word) working with the Oracle database. Unlike DB2, you have all the tools, options and automation to tune it. With a couple of hundred MySQL databases at Nokia, we spent more time (thank you, Google!) investigating issues than with more than one thousand Oracle databases. SQL Server: if you prefer using the mouse instead of the keyboard, then this is the right database for you! Teradata compared to Exadata: let me not even start…

As Forrester says, In-Memory Databases are driving next-generation workloads and use cases. Check out this recent comparison of all vendors.

But back to the Cloud. Have a look at what features Oracle is embedding into its Cloud, and at what speed. By far the best Cloud for Oracle workloads! All of these are new additions to Oracle IaaS:

What’s New for Oracle Compute Cloud Service (IaaS)

– Compute Service: 8 and 16 OCPU Virtual Machines
– CentOS, Ubuntu OS Images
– RHEL via BYOI OS Image
– Multipart Upload: enables uploading an object in parts, enhancing upload speed and accommodating larger objects
– Audit Service: this new service automatically records calls to all supported BMCS public API endpoints as log events
– Search Domain DHCP
– Terraform Provider: the BMCS Terraform provider is now available. Terraform is an open source infrastructure automation and management software tool
– Developer Tools Enhancements: new versions of the BMCS developer tools are now available, including the Ruby, Python, and Java SDKs, the HDFS Connector, and the CLI
– New Instance Shapes
– Windows BYOL: it is now possible to Bring Your Own License (BYOL) for Windows Server
– NVMe Storage: you can now use NVMe SSD disks as ephemeral data disks attached to your instances
– SSD Block Storage: these high-performance volumes can be used for persistent block storage or for bootable volumes
– New Web UI: this new interface can be used to perform basic operations against Storage Cloud resources
– HSM Cloud Service Integration

Oracle Exadata Cloud Machine Q&A

In Cloud, Database options, DBA, Exadata, IaaS, Oracle database, Oracle Engineered Systems on March 28, 2017 at 07:25

“The computer industry is the only industry that is more fashion-driven than women’s fashion.” Larry Ellison

Amazon has no on-premises presence; it is cloud only. Microsoft has the Azure Stack, but Azure Stack Technical Preview 3 is made available only as a Proof of Concept (POC): it must not be used as a production environment and should only be used for testing, evaluation, and demonstration.

Oracle Database Exadata Public Cloud Machine, or Exadata Cloud Machine, or ExaCM in short, is a cloud-based Oracle database subscription service available on Oracle Exadata, and deployed in the customer or partner data center behind their firewall. This allows customers to subscribe to fully functional Oracle databases on Exadata, on an OPEX driven consumption model, using agile cloud-based provisioning, while the associated Exadata infrastructure is maintained by Oracle.

This blog post contains the top 10 (5 commercial and 5 technical) facts about the Exadata Cloud Machine:

1. On-premise licenses cannot be transferred to ExaCM.

2. The minimum commitment to both ExaCM and OCM (Oracle Cloud Machine) is 4 years, and the minimum configuration is an Eighth Rack.

3. The subscription price for Oracle Database Exadata Cloud Machine X6 Eighth Rack is $40,000 per month (= $2,500 X 16) and that includes all DB options/features, Exadata Software and OEM DB Packs.

4. Standalone products such as Oracle Secure Backup and Oracle GoldenGate are not included in the ExaCM subscription. Only the database options (such as RAC, In-Memory, Partitioning, Active Data Guard, etc.), the database OEM packs and the Exadata storage server software are included.

5. ExaCM requires Oracle Cloud Machine to deploy Exadata Cloud Control Plane (separate subscription). OCM subscription requires similar minimum term commitment as ExaCM. If a customer already has an OCM, that can be leveraged to deploy Exadata Control Plane at no extra cost. One OCM Model 288 can manage 6 ExaCM Full Racks (i.e. 24 ExaCM Quarter Racks or 12 ExaCM Half Racks). Theoretically one OCM can support a much larger number of ExaCM full racks: about 50.

6. The 1/8th rack SKU is very similar to the on-premises 1/8th rack – i.e. minimum configuration of 16 OCPUs (cores), 240 GB RAM per database server, 144 TB raw storage (42 TB usable), 19.2TB of Flash. Compared to the Quarter Rack, it ships with less RAM, disk storage and flash. Those will be field installed if the customer chooses to go for the 1/8th to Quarter Rack upgrade. Note that this 1/8th rack enables customers to have an entry level configuration that is similar to what exists in Exadata Cloud Service.

7. Hourly Online Compute Bursting is supported with ExaCM. The commercial terms are the same as in Exadata Cloud Service – i.e. 25% premium over the Metered rate, calculated on an hourly basis. Customers can scale up or down, dynamically. Bursting does not kick in automatically based on load. Bursting of OCPUs needs to be configured by customers as needed. Once customer initiates bursting, the OCPU update is done dynamically without downtime. Customers will be billed later on the hours of bursting usage. Price: $8.401 per OCPU per hour.

8. If the Cloud Control Plane is down, it does not affect the availability of steady-state runtime operations. However, cloud-based management (e.g. the self-service UI and REST API access) will be impacted.

9. Access: the Exadata Cloud Machine compute nodes are each configured with a Virtual Machine (VM). You have root privilege for the Exadata compute node VMs, so you can load and run additional software on the Exadata compute nodes. However, you do not have administrative access to the Exadata infrastructure components, including the physical compute node hardware, network switches, power distribution units (PDUs), integrated lights-out management (ILOM) interfaces, or the Exadata Storage Servers, which are all administered by Oracle.

10. Patching and backups: you can produce a list of available patches using the exadbcpatchmulti command as follows:

# /var/opt/oracle/exapatch/exadbcpatchmulti -list_patches \
  -sshkey=/home/opc/.ssh/id_rsa \
  -oh=hostname1:/u01/app/oracle/product/12.2.0.1/dbhome_1

When you create a database deployment on Exadata Cloud Machine, you must choose from the following automatic backup configuration options:

– Remote Storage Only: uses remote NFS storage to store periodic full (RMAN level 0) backups and daily incremental backups, with a seven-day cycle between full backups and an overall retention period of thirty days.
– None: no automatic backups are configured. Automatic backups cannot be configured later if you select the None option when you create a database deployment.

Useful links:

Oracle Exadata Cloud Machine Documentation
Oracle Database Exadata Cloud Machine Data Sheet
Features and Benefits of Oracle ExaCM
Oracle Database Exadata Cloud Machine Pricing
Creating an Exadata Cloud Machine Instance
Known Issues for Oracle Exadata Cloud Machine
Oracle SVP, Juan Loaiza, describes Oracle Database Exadata Cloud Machine

DBA Productivity and Oracle Database 12.2

In Cloud, DBA, Oracle database on February 9, 2017 at 15:15

“Technology can be our best friend, and technology can also be the biggest party pooper of our lives. It interrupts our own story, interrupts our ability to have a thought or a daydream, to imagine something wonderful, because we’re too busy bridging the walk from the cafeteria back to the office on the cell phone.” Steven Spielberg

The DBA profession was recently rated as #6 among the Best Technology Jobs. Good for all of us who are in this line of business. But notice the stress level: Above Average!

DBAs are often busy people. Is that good or bad? Is “busy the new stupid”?

Automation is not a luxury for DBAs; it is the way in which DBAs execute their job. Of course, there is one thing that cannot be automated, and that is quality, but the best DBAs automate almost everything.

Automating the database is a Win-Win for DBAs and DevOps. The mindset of the Enterprise DBA should be focused on harnessing the power of automation.

The following data shows what tasks are mostly and least automated:

[Image: dba_automation – the most and least automated DBA tasks]

Look at the last row above. I still wonder why Automatic SQL Tuning is so underestimated; it helped the DBA team at Nokia tremendously…

Oracle Database 12cR2 is out, and 12.2 comes with yet another set of new automation-related database features:

– Oracle Data Guard now supports multiple failover targets in a fast-start failover configuration. Previous functionality allowed for only a single fast-start failover target. Multiple failover targets increase high availability by making an automatic failover more likely to occur if there is a primary outage.

– Oracle automatically synchronizes password files in Data Guard configurations: when the passwords of SYS, SYSDG, and so on, are changed, the password file at the primary database is updated and then the changes are propagated to all standby databases in the configuration.

– Online table move: nonpartitioned tables can be moved as an online operation without blocking any concurrent DML operations. A table move operation now also supports automatic index maintenance as part of the move.

– Automatic deployment of Oracle Data Guard: deployment is automatic for Oracle Data Guard physical replication between shards with Oracle Data Guard fast-start failover (automatic database failover): automatic database failover provides high availability for server, database, network, and site outages.

– Automatically set user tablespaces to read-only during upgrade: the new -T option for the parallel upgrade utility (catctl.pl) can be used to automatically set user tablespaces to read-only during an upgrade, and then back to read/write after the upgrade.

– The Oracle Trace File Analyzer (TFA) collector provides the option to automatically collect diagnostic information when TFA detects an incident.

– Oracle Data Guard support for Oracle Diagnostics Pack: this enables you to capture the performance data to the Automatic Workload Repository (AWR) for an Active Data Guard standby database and to run Automatic Database Diagnostic Monitor (ADDM) analysis on the AWR data.

– Automatic Workload Repository (AWR) support for pluggable databases: the AWR can be used in a PDB. This enables the capture and storage of performance data in the SYSAUX tablespace of the PDB.

– The new ENABLE_AUTOMATIC_MAINTENANCE_PDB initialization parameter can be used to enable or disable the running of automated maintenance tasks for all the pluggable databases (PDBs) in a multitenant container database (CDB) or for individual PDBs in a CDB.

– Automatic Data Optimization Support for In-Memory Column Store: Automatic Data Optimization (ADO) enables the automation of Information Lifecycle Management (ILM) tasks. The automated capability of ADO depends on the Heat Map feature that tracks access at the row level (aggregated to block-level statistics) and at the segment level.

– Automatic Provisioning of Kerberos Keytab for Oracle Databases: the new okcreate utility automates the registering of an Oracle database as a Kerberos service principal, creating a keytab for it, and securely copying the keytab to the database for use in Kerberos authentication.

– Role-Based Conditional Auditing: auditing for new users with the DBA role begins automatically when they are granted the role.

– Automatic Locking of Inactive User Accounts: within a user profile, the new INACTIVE_ACCOUNT_TIME parameter controls the maximum time that an account can remain unused. The account is automatically locked if a login does not occur within the specified number of days.
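A couple of the features above are one-liners to try out. A quick sketch, with made-up profile, user and table names:

```sql
-- Automatic locking of inactive accounts: lock after 60 days without a login
CREATE PROFILE app_users LIMIT INACTIVE_ACCOUNT_TIME 60;
ALTER USER scott PROFILE app_users;

-- Online table move: relocate a nonpartitioned table without blocking DML
ALTER TABLE sales MOVE ONLINE TABLESPACE users;
```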

Database Magic with Oracle Database 12c

In Database options, Oracle database on November 28, 2016 at 09:40

“Science is magic that works” Kurt Vonnegut

Have a look at the following series of commands.

A query on the SALES table normally takes more than 2 minutes, but setting the database_performance parameter to SUPER_FAST makes it … as expected, super fast: less than 1 second. Setting the database_performance parameter to SUPER_SLOW makes the query hang; again, “as expected”.

[Image: sqltf – the series of commands and their timings]

So, how come all this is possible?

Before blogging about it, I showed this “magic trick” to hundreds of people: at Oracle OpenWorld, at some Oracle User Group events and conferences, and to many DBAs. Here is the explanation.

Behind the curtains, I am using the Oracle SQL Translation Framework.

exec dbms_sql_translator.create_profile('OOW');

select object_name, object_type from dba_objects where object_name like 'OOW';

begin
  dbms_sql_translator.register_sql_translation('OOW',
    'SELECT max(price) most_expensive_order from sales',
    'SELECT max(price) most_expensive_order from julian.sales');
end;
/

begin
  dbms_sql_translator.register_sql_translation('OOW',
    'alter session set database_performance="SUPER_FAST"',
    'alter session set inmemory_query="ENABLE"');
end;
/

begin
  dbms_sql_translator.register_sql_translation('OOW',
    'alter session set database_performance="RATHER_SLOW"',
    'alter session set inmemory_query="DISABLE"');
end;
/

begin
  dbms_sql_translator.register_sql_translation('OOW',
    'alter session set database_performance="SUPER_SLOW"',
    'begin uups; end;');
end;
/

The uups procedure just helps us mimic a never-ending loop:

create or replace procedure uups
as  
 x date;
 BEGIN
   LOOP
     BEGIN
       SELECT null INTO x FROM dual WHERE sysdate = sysdate;
     EXCEPTION
       WHEN NO_DATA_FOUND THEN EXIT;
     END;
   END LOOP;
 END;
/

Then, once connected to the database as SYS, you run the following commands:

set timing on

alter session set sql_translation_profile = OOW
/

alter session set events = '10601 trace name context forever, level 32'
/
 

The last remaining question is probably: what changes the performance? This should be clear from the use of the inmemory_query parameter: I am simply keeping the SALES table in memory. So yes, the In-Memory option can be 137 times faster (133.22/0.97 ≈ 137).
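By the way, you can verify that it really is the column store at work by checking that SALES is populated in memory. A sketch (the schema name matches my demo; the rest is standard dictionary querying):

```sql
-- Mark the table for the IM column store (this is the real "magic")
ALTER TABLE julian.sales INMEMORY;

-- Confirm the segment is populated before running the demo
SELECT segment_name, inmemory_size, bytes_not_populated, populate_status
  FROM v$im_segments
 WHERE segment_name = 'SALES';
```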

Here is something more to read if you find the topic interesting:

SQL Translation Framework in Oracle Database 12c by Okcan Yasin Saygili
SQL Translation Framework by Kerry Osborne
Oracle 12c Security – SQL Translation and Last Logins by Pete Finnigan

Oracle In-Memory for SAP Databases

In DBA, Oracle database, SAP HANA on March 19, 2016 at 14:02

The Japanese proverb “HANA YORI DANGO” means literally “Dumplings rather than flowers” meaning “to prefer substance over style, as in to prefer to be given functional, useful items (such as dumplings) instead of merely decorative items (such as flowers)”.

HANA is nowadays a very stylish and trendy concept among database professionals, but I would follow the Japanese saying and rather go with substance.

The two different points of view can be easily found on the internet:

SAP: What Oracle won’t tell you about SAP HANA
Oracle: Oracle Database In-Memory vs SAP HANA benchmark results

The following “Facts vs Claims” will most likely perplex and bewilder everyone. Just have a look and decide for yourself. Let us not go into details but consider this statement:

FACT: Oracle makes big data a bigger problem with 4 copies of the data (3 in-memory and 1 on disk).

I have been trying for quite some time to figure out the 3 copies of in-memory data that Oracle supposedly creates, with no success. Perhaps this is a riddle?

A year ago, on March 31st 2015, SAP was certified to run on Oracle Database 12.1.0.2. Rather odd, one would say, as in the past SAP waited for the second release of the Oracle database before certifying it for SAP. At least it is a sign that 12.1.0.2 is mature enough to be used for SAP production systems.

As of June 30th 2015, the Oracle Database In-Memory Option is supported and certified for SAP environments for all SAP products based on SAP NetWeaver 7.x on Unix/Linux, Windows and Oracle Engineered Systems platforms running Oracle Database 12c, in both single instance and Oracle Real Application Clusters deployments.

In-Memory is such a great feature, but (for both Oracle and SAP) the challenge you’ll often face lies in these 2 tricky questions:

1. Which of your tables, partitions, and even columns should you mark for In-Memory column store availability?
2. What should be the values of the SGA, the PGA and the IM column store size? That is, how much memory do I need (global_allocation_limit in SAP and inmemory_size in Oracle)?
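For the second question, at least the current consumption can be checked on a running system. A minimal sketch against the standard dynamic view:

```sql
-- How much of the configured In-Memory area is actually in use?
SELECT pool, alloc_bytes, used_bytes, populate_status
  FROM v$inmemory_area;
```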

Here Oracle has a very strong advantage over SAP. The answers are now easier to find with the new In-Memory Advisor, which is available for download from MOS Note 1965343.1. Use of the In-Memory Advisor on databases where the IM option has not yet been deployed does NOT require an Oracle Tuning Pack license.

For SAP, the Quick Sizer is used for a new implementation of Business Suite powered by SAP HANA. Here is How to Properly Size an SAP In-Memory Database. There are two different approaches for performing the sizing: user-based sizing, which determines sizing requirements based on the number of users in the system, and throughput-based sizing, which bases the sizing on the items being processed. The sizing rules for SAP Business Suite on SAP HANA are outlined in SAP Note 1793345.

Memory Management in the Column Store: SAP HANA aims to keep all relevant data in memory. Standard row tables are loaded into memory when the database is started and remain there as long as it is running; they are not unloaded. Column tables, on the other hand, are loaded on demand, column by column, when they are first accessed. With Oracle, only the tables (or columns/partitions of tables) that need to be in memory are placed in memory, thus avoiding waste of memory and making it possible to do more with less.

The Oracle approach is much more sophisticated but there are 10 major issues DBAs should pay attention to when using the In-Memory option with SAP databases:

1. To use Oracle Database In-Memory with SAP NetWeaver the following technical and business prerequisites must be met:
– Oracle Database 12c Release 1 Patch Set 1 (12.1.0.2)
– UNIX/Linux: Oracle Database SAP Bundle Patch June 2015 (SAP1202P_1506) or newer. Strongly recommended Oracle Database SAP Bundle Patch August 2015 (SAP1202P_1508)
– Windows: Windows DB Bundle Patch 12.1.0.2.6 or newer. Strongly Recommended Windows DB Bundle Patch 12.1.0.2.8
– SAP NetWeaver 7.x Version with minimum SAP Kernel 7.21_EXT

2. Indexes. It is not allowed to make any changes to the standard index design of the SAP installations. However, customer specific index design can be changed. That is, all indexes which belong to the Y or Z namespaces can be changed.

3. Database Buffer Cache. It is not allowed to reduce the size of the database buffer cache and assign the memory to the In-Memory column store.

4. SAP Dictionary Support. Full SAP Dictionary (DDIC) Support of in-memory attributes at the table level starts with the support package SAP_BASIS 7.40 SP12.

5. Individual Columns. It is not supported to load individual columns of an SAP table or partition into the IM column store. It is also not supported to exclude individual columns from an SAP table or partition from the IM column store. An SAP table is a database table used by an SAP application.

6. The database where you want to run In-Memory Advisor must have XDB component installed as the In-Memory Advisor relies on functions provided by XDB. In 12c XDB is installed by default.

7. The In-Memory Advisor is contained in the SAP Bundle Patch as patch 21231656.

8. For SAP applications it is strongly recommended to use a reasonable time window of collected AWR data. At least 2-3 days of AWR data should be used for the In-Memory Advisor. It makes absolutely no sense to use data from a 1-2 hour time window.

9. Use these In-Memory Advisor Parameter Name and Value:

– WRITE_DISADVANTAGE_FACTOR = 0.7
– LOB_BENEFIT_REDUCTION = 1.2
– MIN_INMEMORY_OBJECT_SIZE = 1024000
– READ_BENEFIT_FACTOR = 2

10. For SAP systems therefore the following init.ora parameters should be used:

inmemory_max_populate_servers = 4
inmemory_clause_default = “PRIORITY HIGH”
inmemory_size should be set to the value (+ ~20% for metadata and journals) used in the generate recommendation step of the In-Memory Advisor or set to value calculated by the SAP_IM_ADV Package for all tables/partitions to be loaded into the In-Memory column store.
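Given restriction 5 above (whole SAP tables or partitions only, never individual columns), marking a table for the column store could look like the sketch below. Purely illustrative: the schema and table names are made up, and the priority matches the recommended inmemory_clause_default.

```sql
-- Load an entire SAP table into the IM column store
-- (no per-column include/exclude, per restriction 5 above)
ALTER TABLE sapsr3.vbak INMEMORY MEMCOMPRESS FOR QUERY LOW PRIORITY HIGH;
```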

Using RAT (Real Application Testing) shows the benefit of the In-Memory option: compare the DB time of the 2 replays after going to Exadata with the In-Memory option. DB time is the best, and possibly the only, metric for comparing captures with replays.

[Image: IMDB_Advisor]

SAP Notes: 2178980 – Using Oracle Database In-Memory with SAP NetWeaver
MOS Notes: 1292089.1 – Master Note for Oracle XML Database (XDB) Install / Deinstall & 1965343.1 – Oracle Database In-Memory Advisor
Using SAP NetWeaver with Oracle Database In-Memory

Bottom line: if my SAP application ran on top of an Oracle database, I would rather put it on Exadata with the In-Memory option enabled than move it to SAP HANA.

Oracle Database Cloud Service vs Amazon Relational Database Service

In Cloud, Consolidation, DBA, Oracle database on February 28, 2016 at 15:00

How does Oracle’s Database Public Cloud compare with Amazon’s Relational Database Service (RDS) for enterprise usage? Let us have a look.

The Oracle Database comes in 4 editions: Personal Edition; Express Edition (XE), which is free of charge and used by very small businesses and students; Standard Edition (SE), a light version of Enterprise Edition that lacks most features needed for running production-grade workloads; and Enterprise Edition (EE), which provides the performance, availability, scalability, and security required for mission-critical applications.

In the comparison in this post, we will evaluate Oracle and Amazon in relation to the Enterprise Edition of Oracle’s database.

[Image: Oracle_Public_Database_Cloud]

Oracle Public Database Cloud consists of 4 DB Cloud offerings: DBaaS, Virtual Image, Schema Service and Exadata Service. Here are a few characterizations:

– Oracle supports Exadata, RAC & all DB options
– Simple pricing structure with published costs representing actual costs (unlimited I/Os, etc.)
– Hourly, Monthly & Annual pricing options
– Lowest cloud storage pricing across all major IaaS vendors

Amazon RDS for Oracle Database supports two different licensing models: “License Included” and “Bring-Your-Own-License (BYOL)”. In the “License Included” service model, you do not need separately purchased Oracle licenses. Here are a few characterizations:

Enterprise Edition supports only db.r3.large and larger instance classes, up to db.r3.8xlarge
– Need to choose between Single-AZ (= Availability Zone) Deployment and Multi-AZ Deployment
– For Multi-AZ Deployment, Amazon RDS will automatically provision and manage a “standby” replica in a different Availability Zone (prior to failover you cannot directly access the standby, and it cannot be used to serve read traffic)
– Only 2 instance types support 10 Gigabit network: db.m4.10xlarge and db.r3.8xlarge
– Amazon RDS for Oracle is an exciting option for small to medium-sized clients and includes Oracle Database Standard Edition in its pricing
– Several applications with limited requirements might find Amazon RDS to be a suitable platform for hosting a database
– As the enterprise requirements and resulting degree of complexity of the database solution increase, RDS is gradually ruled out as an option

So, here is a high-level comparison:

Oracle_Cloud_vs_Amazon_RDS
Notes:

– Oracle’s price includes the EE license with all options
– Amazon AWS is BYOL for EE
– Prices above are based on the EU (Frankfurt) region
– Amazon’s Oracle database hour prices vary from $0.290 to $4.555 for Single-AZ Deployments and from $0.575 to $9.105 for Multi-AZ Deployments
– Oracle’s database hour prices vary from $0.672 to $8.569
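To put the hourly rates above into rough monthly terms, here is a minimal Python sketch. The 730 hours/month figure is a common cloud-billing approximation, and the rates are simply the ones quoted above for the EU (Frankfurt) region at the time of writing; both are illustrative only.

```python
HOURS_PER_MONTH = 730  # common cloud-billing approximation (24 * 365 / 12)

def monthly_cost(hourly_rate, hours=HOURS_PER_MONTH):
    """Approximate monthly cost in dollars for a given hourly rate."""
    return round(hourly_rate * hours, 2)

# Hourly rates quoted in this post (EU Frankfurt, illustrative only)
rates = {
    "RDS Single-AZ (min)": 0.290,
    "RDS Single-AZ (max)": 4.555,
    "RDS Multi-AZ (min)":  0.575,
    "RDS Multi-AZ (max)":  9.105,
    "Oracle Cloud (min)":  0.672,
    "Oracle Cloud (max)":  8.569,
}

for name, rate in rates.items():
    print(f"{name}: ${monthly_cost(rate):,.2f}/month")
```

At these rates the smallest Multi-AZ RDS instance works out to roughly $420/month, while the top-end instances in either cloud run into the thousands – which is why the instance class, not the per-hour delta, dominates the comparison.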

Sources:

Oracle Archive Storage Pricing
Amazon Glacier Storage Pricing
Amazon Database Pricing
Oracle Database Pricing
Amazon Options for Oracle Database Engine
Oracle on Amazon RDS Support & Limitations

So, Amazon RDS is not an option if you need any of the following: Real Application Clusters (RAC), Real Application Testing, Data Guard / Active Data Guard, Oracle Enterprise Manager Grid Control, Automated Storage Management, Database Vault, Streams, Java Support, Locator, Oracle Label Security, Spatial, Oracle XML DB Protocol Server or Network access utilities such as utl_http, utl_tcp, utl_smtp, and utl_mail.
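One practical way to use the list above during migration planning is to diff an application’s required features against the RDS-unsupported set. A minimal Python sketch, using the feature names exactly as listed above (the function name and input format are my own, for illustration):

```python
# Features Amazon RDS for Oracle did not support at the time of writing,
# per the list in this post.
RDS_UNSUPPORTED = {
    "Real Application Clusters", "Real Application Testing",
    "Data Guard", "Active Data Guard",
    "Oracle Enterprise Manager Grid Control",
    "Automated Storage Management", "Database Vault", "Streams",
    "Java Support", "Locator", "Oracle Label Security", "Spatial",
    "Oracle XML DB Protocol Server",
    "utl_http", "utl_tcp", "utl_smtp", "utl_mail",
}

def rds_blockers(required_features):
    """Return the sorted subset of required features that rule out RDS."""
    return sorted(f for f in required_features if f in RDS_UNSUPPORTED)

# Example: an application that needs Partitioning, Data Guard and Spatial
print(rds_blockers({"Partitioning", "Data Guard", "Spatial"}))
# → ['Data Guard', 'Spatial']
```

If the function returns an empty list, RDS stays on the table; anything else means Oracle’s own cloud (or an EC2/on-premises deployment) is the remaining option.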

Interesting articles related to this topic:

1. Burning question for Oracle: What’s your response to Amazon? by Barb Darrow
2. Shootout: Oracle DB Cloud vs. Amazon RDS by Jan Navratil
3. The Oracle Database Cloud Service vs Oracle on Amazon RDS by Ranko Mosic
4. A Most Simple Cloud: Is Amazon RDS for Oracle Right for you? by Jeremiah Wilton
5. Oracle RAC and AWS: A Hybrid Cloud Solution by Lindsay Van Thoen
6. How Much Does It Cost to Run Relational Database (RDS) Options on AWS by Yoav Mor
7. Oracle vs. Amazon: The Cloud Wars by Chris Lawless

The James Bond of Database Administration

In Data, DBA, Golden Gate, Oracle database, Oracle Engineered Systems on October 27, 2015 at 07:23

“Defending our systems needs to be as sexy as attacking others. There’s really only one solution: Bond.”

That is what ‘The Guardian’ wrote recently in an article entitled “The Man with the Golden Mouse: why data security needs a James Bond“.

Attending the annual Oracle ACE Director Briefing at Oracle HQ sparked an interesting debate on the following question: What will happen in the near future with the DBA profession? Who is now the James Bond of Database Administration?

JB007

According to TechTarget, big data tools are changing data architectures in many companies. The effect on the skill sets required by database administrators may be moderate, but some new IT tricks are likely to be needed. GoldenGate is the new Streams, Exadata is the new RAC, Sharding the new Partitioning, Big Data is the new data (Texas is an exception), you name it…

Having the privilege to work throughout the years with some of the best database experts in the world has proved to me that Double-O-Sevens are in fact more like Double-O-six-hundreds. Meaning that there are 100s of DBAs who qualify, with no hesitation whatsoever, as the James Bonds of Database Administration. I have learned so much from my ex Nokia colleagues, and from my current Enkitec and Accenture colleagues. Not to mention friends from companies like eDBA, Pythian, Miracle, etc.

A DBA needs to have so many skills. Look for instance at Craig S. Mullins’ suggested 17 skills required of a DBA. Kyle Hunter’s article The evolution of the DBA and the Data Architect is clearly pointing to the emerging skillsets in the Data Revolution.

In the IT business, and in database administration in particular, it is not that important how well you know the old stuff, it is more important how fast you can learn the new things. Here are some of the tools that help every modern Oracle DBA:

Oracle Enterprise Manager 12c
ORAchk
Metalink/MOS
Developer Tools
Oracle Application Express (APEX)
SQL Developer
Oracle JDeveloper
SQL Developer Data Modeler
And last but not least SQL*Plus®

7init

These additional Metalink tools might be often of great help:

Diagnostic Tools Catalog – Note ID 559339.1
OS Watcher (Support Tool) – Note 301137.1
LTOM (Support Tool) – Note 352363.1
HANGFG (Support Tool) – Note 362094.1
SQLT (Support Tool) – Note 215187.1
PLSQL Profiler (Support Script) – Note 243755.1
MSRDT for the Oracle Lite Repository – Note 458350.1
Trace Analyzer TRCANLZR – Note 224270.1
ORA-600/ORA-7445 Error Look-up Tool – Note 153788.1
Statspack (causing more problems than help in 12c)

The Man with the Golden Mouse is the James Bond of Database Administration. The best DBA tools are still knowledge and experience.

GoldenMouse

New Features of Oracle NoSQL

In DBA, NoSQL, Oracle database, Oracle Engineered Systems on July 4, 2015 at 16:44

“In open source, we feel strongly that to really do something well, you have to get a lot of people involved.” Linus Torvalds

Currently, there are about 150 NoSQL (= Not Only SQL) databases.

There are 5 major NoSQL data models: Collection, Columnar, Document-oriented, Graph and Key-value.

Oracle NoSQL Database, based on BerkeleyDB (first released in 1994), was recently named by Forrester Research a leader in the NoSQL key-value database market and called out as having strong adoption and maturity. A very good study and comparison of several NoSQL databases, entitled 21 NoSQL Innovators to Look for in 2020, was written by Gary MacFadden.

Forrester_NoSQL

Here are few examples:

Collection/Multi-model: OrientDB, FoundationDB, ArangoDB, Alchemy Database, CortexDB
Columnar: Accumulo, Cassandra, Druid, HBase, Vertica
Document-oriented: Lotus Notes, Clusterpoint, Apache CouchDB, Couchbase, HyperDex, MarkLogic, MongoDB, OrientDB, Qizx
Graph: Allegro, Neo4J, InfiniteGraph, OrientDB, Virtuoso, Stardog
Key-value: Redis, CouchDB, Oracle NoSQL Database, Dynamo, FoundationDB, HyperDex, MemcacheDB, Riak, FairCom c-treeACE, Aerospike, OrientDB, MUMPS

Last month (June 2015), Oracle announced Oracle NoSQL Database Version 3.3.4. This release offers new security features, including User Roles and Table-level Authorization, new language interfaces for Node.js and Python, and integration with Oracle Database Mobile Server. The prior release offered Big Data SQL support, a RESTful API, a C Table Driver, SQL-like DDL, Apache Hive support and much more.

A good starting point in order to get deeper into the NoSQL and Big Data world is the Oracle Big Data Learning Library.

Oracle recently announced Big Data SQL for Oracle NoSQL Database. This feature will allow Oracle Database users to connect to external data repositories like Oracle NoSQL Database or Hadoop in order to fetch data from any or all of the repositories (at once) through single SQL query.

Oracle Big Data SQL is an innovation from Oracle only available on Oracle Big Data Appliance. It is a new architecture for SQL on Hadoop, seamlessly integrating data in Hadoop and NoSQL with data in Oracle Database. Using Oracle Big Data SQL one can:

• Combine data from Oracle Database, Hadoop and NoSQL in a single SQL query
• Query and analyze data in Hadoop and NoSQL
• Integrate big data analysis into existing applications and architectures
• Extend security and access policies from Oracle Database to data in Hadoop and NoSQL
• Maximize query performance on all data using Smart Scan

BDA_big-data-appliance

The recent update to Oracle REST Data Services provides a consistent RESTful interface to Oracle Database’s relational tables and JSON document store, and also enables access to Oracle NoSQL Database tables.
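For a feel of what that RESTful interface looks like from the client side: a GET on an ORDS-exposed table typically returns a JSON document whose rows sit in an items array. Here is a minimal Python sketch of parsing such a response; the payload, table and column names below are made up for illustration, not taken from a live service.

```python
import json

def extract_rows(payload_text):
    """Pull the row objects out of an ORDS-style JSON response."""
    payload = json.loads(payload_text)
    return payload.get("items", [])

# A canned response in the shape ORDS typically returns for
# GET /ords/<schema>/<table>/  (hypothetical table and columns)
sample = '{"items": [{"empno": 7839, "ename": "KING"}], "hasMore": false}'

rows = extract_rows(sample)
print(rows[0]["ename"])  # → KING
```

In a real deployment the same parsing applies whether the endpoint is backed by a relational table, the JSON document store, or an Oracle NoSQL Database table – which is exactly the consistency the update is about.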

I still recommend reading the excellent article by Gwen Shapira entitled Hadoop and NoSQL Mythbusting.

Here are some useful links:

NoSQL Database Administrator’s Guide
Getting Started with NoSQL Database Table API
NoSQL Database Run Book
NoSQL Database Security Guide
Oracle NoSQL Database Availability and Failover
Download Oracle NoSQL Database, Server

NoSQL_DBs

It is interesting to note that according to Wikipedia, 12.1.3.3.4 is the first stable Oracle NoSQL release.

Check the DBMS popularity broken down by database model!

DB_popularity