Dontcheff

Archive for the ‘DBA’ Category

Oracle Bare Metal Cloud in Action

In Cloud, DBA, IaaS on October 15, 2016 at 08:56

Bare Metal Cloud means non-virtualized physical compute servers, i.e., no hypervisor running to create virtual machines!


The aim of this blog post is to show you how simple it is to provision the Oracle BMC. But first, here are a few links on the subject that you may find useful:

– What’s Inside Oracle’s AWS-Killing Bare Metal Cloud by Craig Matsumoto
– Oracle’s infrastructure business focuses on bare metal to go after AWS by Blair Hanley Frank
– Does Oracle have a shot in the public cloud vs. Amazon and Microsoft? by Brandon Butler
– Virtual or Bare Metal Dedicated Cloud: Which Option is Right for You? by Ashar Baig
– Oracle IaaS Generation 2 by Marcel van den Berg
– Documentation: Oracle Bare Metal Cloud Services
– The Bare Metal Cloud Service on oracle.com

This is how you create a Bare Metal Cloud machine in the Oracle Infrastructure Cloud (it took me less than 15 minutes from scratch to being connected as root):

1. First page [screenshot]

2. Create the VCN (Virtual Cloud Network) [screenshot]

3. Launch the instance [screenshot]

4. In progress (provisioning) [screenshot]

5. Created (36 CPUs) [screenshot]

6. Details [screenshot]

7. Install MongoDB [screenshot]

The Oracle Bare Metal Cloud is based on completely new architectural concepts and modern hardware; it is easy to provision and use and, most importantly, reliable and secure. In addition, Oracle claims that BMC is 11 times faster and 20 percent cheaper than the fastest solution offered by the competition.

13th Oracle Open World in San Francisco

In Cloud, Consolidation, DBA, OOW, Oracle Engineered Systems on August 20, 2016 at 19:57

“Why are our days numbered and not, say, lettered?” Woody Allen

Early tall-building designers, fearing a fire on the 13th floor, or fearing tenants’ superstitions about it, decided to omit the 13th floor from their elevator numbering. This practice became common and eventually found its way into American mainstream culture and building design. If hotel floors were lettered, would you mind staying on floor M?

Next month, thousands of Oracle professionals will come to San Francisco, where Oracle will organize Oracle OpenWorld for the 13th time in a row.

Delivering 13 presentations in honor of this jubilee is technically impossible, but here are the 3 that I will deliver next month in San Francisco:


1. My 13 DBA Mistakes in 13 Years [UGF1127]
Julian Dontcheff, Global Database Lead, Accenture
Sunday, Sep 18, 3:30 p.m. – 4:15 p.m. | Moscone South—102

Abstract: In this session learn about the 13 biggest mistakes of my DBA career. Lessons learned: be careful when you press Enter. Don’t do as I do, or people will make fun of you. It is sad and funny.

2. The Benefits and Simplicity of Oracle Cloud: Infrastructure as a Service [CON1126]
Julian Dontcheff, Global Database Lead, Accenture
Wednesday, Sep 21, 4:15 p.m. – 5:00 p.m. | Moscone South—309

Abstract: I will do a LIVE demo and try to create an Oracle compute instance from scratch in less than 13 minutes. The countdown stops once I am root in the virtual machine.

Infrastructure as a service (IaaS) is the fastest-growing area of public cloud computing. Oracle Cloud IaaS, with built-in security and high availability, offers elastic compute, networking, and storage to help any company quickly reach both value and productivity. This presentation covers the benefits of Oracle IaaS over other cloud providers, and shows how fast and easy it is to set up IaaS services in Oracle Cloud.

3. Lift and Shift onto Oracle Database Exadata Cloud Service Using Database Consolidation Advisor [CON1125]
Julian Dontcheff, Global Database Lead, Accenture
Thursday, Sep 22, 12:00 p.m. – 12:45 p.m. | Marriott Marquis—Salon 12

Abstract: Dedicated to OEM 13c newest feature: Database Consolidation Workbench. I will give 13 database consolidation strategy tips for DBAs.

Oracle Database Exadata Cloud Service provides service instances that contain a full Oracle Database hosted on Oracle Exadata inside Oracle Cloud. This presentation is about best practices on how to migrate and consolidate Oracle databases onto Oracle Database Exadata Cloud Service. It covers the three phases—planning, migration, and validation—of Oracle’s database consolidation workbench that helps in end-to-end consolidation of databases and enables consolidation of more databases on the same Oracle Exadata system, both on premises and in the public cloud.

If you are reading this post, I welcome you to join my talks at OpenWorld. Thank you in advance. Please also join the other presentations from our team, the Accenture Enkitec Group. Here are the remaining talks:

Who Wins the Oracle Database in the Cloud Bake-Off? [CON5672]
Christopher Pasternak, Managing Director, Accenture
Robby Robertson, Sr. Manager, Accenture
Richard Miners, Senior Infrastructure Principal, Accenture
Tuesday, Sep 20, 12:15 p.m. – 1:00 p.m. | Moscone South—309

Maximizing Oracle Exadata Database Machine Reliability with Oracle EXAchk [CON1142]
Andy Colvin, Infrastructure Principal Director, Accenture
Thursday, Sep 22, 10:45 a.m. – 11:30 a.m. | Moscone South—302

Interfacing Raspberry Pi with Oracle Application Express [UGF5667]
Christoph Ruepprich, Programmer / Developer, Accenture Enkitec
Sunday, Sep 18, 2:15 p.m. – 3:00 p.m. | Moscone South—304

SQLd360: SQL Tuning Diagnostics Made Easy [UGF6168]
Mauro Pagano, Infrastructure Senior Principal, Accenture Enkitec Group
Sunday, Sep 18, 2:15 p.m. – 3:00 p.m. | Moscone South—302

Oracle GoldenGate and Baseball: Five Fundamentals Before Jumping to the Cloud [UGF5120]
Bobby Curtis, Infrastructure Principal, Accenture Enkitec Group
Sunday, Sep 18, 8:00 a.m. – 8:45 a.m. | Moscone West—3022

The Best Oracle Database 12c New Features for Developers and DBAs [UGF2028]
Alex Zaballa, Senior Oracle Database Administrator, Accenture Enkitec Group
Sunday, Sep 18, 8:00 a.m. – 8:45 a.m. | Moscone West—2010

Oracle Multitenant: Customer Panel [CON6563]
Randall Wilcox, Manager / Senior Manager, SAS Institute Inc.
Michael Sorrels, Sr. VP, Database Technologies, Regions Bank
Patrick Wheeler, Senior Director, Product Management, Oracle Database, Oracle
Andy Colvin, Infrastructure Principal Director, Accenture
Wednesday, Sep 21, 1:30 p.m. – 2:15 p.m. | Moscone South—301

Leveraging Oracle Database 12c Release 2 Multitenant Features [CON3075]
Kai Yu, Senior Principal Engineer, Oracle ACE Director, Dell, Inc.
Anuj Mohan, Technical Account Manager, Data Intensity
Andy Colvin, Infrastructure Principal Director, Accenture
James Czuprynski, Strategic Solutions Architect, OnX USA LLC
Deiby Gómez Robles, Oracle Database Consultant, Nuvola, S.A.
Thursday, Sep 22, 12:00 p.m. – 12:45 p.m. | Park Central—Concordia

And here is one where several AEG ACE Directors (including me) will present their points of view on new Oracle features, one after another, in just a few minutes each:

EOUC Database ACES Share Their Favorite Database Things: Part I [UGF2630]
Debra Lilley, VP Certus Cloud Services, Certus Solutions Consulting Services Ltd
Ralf Koelling, Senior Consultant, CGI Deutschland Ltd. & Co. KG
David Kurtz, Consultant, Accenture Enkitec Group
Sunday, Sep 18, 1:00 p.m. – 1:45 p.m. | Moscone South—102

EOUC Database ACES Share Their Favorite Database Things: Part II [UGF2632]
Debra Lilley, VP Certus Cloud Services, Certus Solutions Consulting Services Ltd
Bjoern Rost, Principal Consultant, The Pythian Group Inc.
Carl Dudley, Database Administrator, Tradba
Sunday, Sep 18, 2:15 p.m. – 3:00 p.m. | Moscone South—102


P.S. I wonder how many sessions will be delivered altogether by the Enkitec group?

What’s New for Oracle Compute Cloud Service (IaaS)

In Cloud, Consolidation, DBA, IaaS on May 1, 2016 at 11:15

“Behind every cloud is another cloud.” – Judy Garland

“Whenever you find yourself on the side of the majority, it is time to pause and reflect.” – Mark Twain

Oracle Cloud Infrastructure allows large businesses and corporations to run their workloads, replicate their networks, and back up their data and databases in the cloud. And I would say in a much easier and more efficient way than with any other provider!

Oracle provides a free software appliance for accessing cloud storage on premises: the Oracle Storage Cloud Software Appliance is offered free of charge. You do not get this from Amazon. And from Azure, you do not get as much memory per core on a VM as you get from Oracle. In addition to the hourly metered service, Oracle also provides non-metered compute capacity with a monthly subscription, so that you can provision resources up to twice the subscribed capacity. This is a way to control the budget through a predictable monthly fee rather than the less controllable pure pay-as-you-go model.


PCMag.com recently provided an excellent overview of the Oracle Cloud Infrastructure.

Pat Shuff’s Blog describes in detail the steps for creating an Oracle Linux service on the Oracle Compute Cloud. Glynn Foster’s Blog shows how to create Oracle Solaris VMs in the Oracle Cloud Compute Service.

Creating an Oracle Compute Service instance took me (the first time) less than 10 minutes, and accessing it was immediate. It is simple, fast and easy, and most of all I had no issues whatsoever. OK, I did not find lshw, but I installed it in a minute:

yum -y install lshw*
...
Dependency Updated:
  dbus-libs.x86_64 1:1.2.24-8.0.1.el6_6

Complete!

VPN for Engineered Systems: if you need a VPN between Oracle and your own infrastructure, then go to the My Oracle Support Note 2056914.1 and follow its instructions.


Creating an Oracle Storage Volume takes about one minute! Even less if you have done it a few times.


[opc@f24074 ~]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda2      9.4G  1.9G  7.0G  22% /
tmpfs           7.4G     0  7.4G   0% /dev/shm
/dev/xvda1      479M   81M  369M  18% /boot

Note: before connecting to the Oracle VM from any client, remember to add the IP address(es) to the Security IP list and then update the security rules (add a new one).

A few useful links:

Oracle Infrastructure as a Service (IaaS) Training Content
Using Oracle Compute Cloud Service (IaaS)
Accessing an Oracle Linux Instance Using SSH
Frequently Asked Questions for Oracle Compute Cloud Service
Troubleshooting Oracle Compute Cloud Service
Best Practices for Using Oracle Compute Cloud Service
Siebel CRM in Oracle Public Cloud IAAS
Compute Cloud Pricing / Storage Cloud Pricing / Network Cloud Service Pricing
Oracle Cloud Services Delivered in Your Data Center / Cloud Machine Documentation

Oracle Cloud Machine Operations: Roles and Responsibilities:

[Table: Oracle Cloud Machine operations, roles and responsibilities]

What’s New for Oracle Compute Cloud Service (IaaS):

– Both metered and non-metered options of Oracle Compute Cloud Service are now generally available.
– You can no longer subscribe for 50 or 100 OCPU configurations. Instead, you can specify the required number of 1 OCPU subscriptions.
– If you have a non-metered subscription, you can now provision resources up to twice the subscribed capacity. The additional usage will be charged per hour and billed monthly.
– Oracle provides images for Microsoft Windows Server 2012 R2 Standard Edition.
– Oracle provides images for Oracle Solaris 11.3.
– You can clone storage volumes by taking a snapshot of a storage volume and using it to create new storage volumes.
– You can clone an instance by taking a snapshot and using the resulting image to launch new instances.
– You can increase the size of a storage volume, even when it’s attached to an instance.
– You can now find the public and private IP addresses of each instance on the Instances page. Earlier, this information was displayed only on the instance details page of each instance.
– The CLI tool for uploading custom images to Oracle Storage Cloud Service has been updated to support various operating systems. The tool has also been renamed to uploadcli. Earlier it was called upload-img.

For more details, check What’s New for Oracle Compute Cloud Service (IaaS).


And finally, do you wonder what the underlying hardware is?

[root@f24074 ~]# lshw -short
H/W path    Device  Class      Description
==========================================
                    system     HVM domU
/0                  bus        Motherboard
/0/0                memory     96KiB BIOS
/0/1                processor  Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz
/0/2                processor  CPU
/0/3                processor  CPU
/0/4                processor  CPU
/0/5                memory     System Memory
/0/5/0              memory     15GiB DIMM RAM
/0/5/1              memory     15GiB DIMM RAM
/0/6                memory     96KiB BIOS
/0/7                processor  CPU
/0/8                processor  CPU
/0/9                processor  CPU
/0/a                processor  CPU
/0/b                memory     System Memory
/0/c                memory
/0/d                memory
/0/100              bridge     440FX - 82441FX PMC [Natoma]
/0/100/1            bridge     82371SB PIIX3 ISA [Natoma/Triton II]
/0/100/1.1          storage    82371SB PIIX3 IDE [Natoma/Triton II]
/0/100/1.3          bridge     82371AB/EB/MB PIIX4 ACPI
/0/100/2            display    GD 5446
/0/100/3            generic    Xen Platform Device
/1          eth0    network    Ethernet interface

Oracle In-Memory for SAP Databases

In DBA, Oracle database, SAP HANA on March 19, 2016 at 14:02

The Japanese proverb “HANA YORI DANGO” literally means “dumplings rather than flowers”: to prefer substance over style, as in preferring to be given functional, useful items (such as dumplings) instead of merely decorative items (such as flowers).

HANA is nowadays a very stylish and trendy concept among database professionals, but I would follow the Japanese saying and rather go with substance.


The two different points of view can be easily found on the internet:

SAP: What Oracle won’t tell you about SAP HANA
Oracle: Oracle Database In-Memory vs SAP HANA benchmark results

The following “Facts vs Claims” will most likely perplex and bewilder everyone. Just have a look and decide for yourself. Let us not go into details but consider this statement:

FACT: Oracle makes big data a bigger problem with 4 copies of the data (3 in-memory and 1 on disk).

I have been trying for quite some time, with no success, to figure out what the 3 copies of in-memory data that Oracle supposedly creates are. Perhaps this is a riddle?

A year ago, on March 31st 2015, SAP was certified to run on Oracle Database 12.1.0.2. Rather odd, one would say, as in the past SAP used to wait for the second release of the Oracle database before certifying it for SAP. At least it is a sign that 12.1.0.2 is mature enough to be used for SAP production systems.

As of June 30th 2015, the Oracle Database In-Memory option is supported and certified for SAP environments for all SAP products based on SAP NetWeaver 7.x on Unix/Linux, Windows and Oracle Engineered Systems platforms running Oracle Database 12c, in both single-instance and Oracle Real Application Clusters deployments.

In-Memory is such a great feature – but (for both Oracle and SAP) the challenges you will often face are these 2 tricky questions:

1. Which of your tables, partitions, and even columns should you mark for In-Memory column store availability?
2. What should be the value of the SGA, PGA and IMCS size? That is, how much memory do I need (global_allocation_limit in SAP and inmemory_size for Oracle)?

Here Oracle has a very strong advantage over SAP. The answers are now easier to find with the new In-Memory Advisor, which is available for download via MOS Note 1965343.1. Using the In-Memory Advisor on databases where the IM option has not yet been deployed does NOT require an Oracle Tuning Pack license.
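Applying an advisor recommendation and verifying it afterwards is then straightforward. Here is a minimal sketch (the table name SALES and the chosen compression level are illustrative, not actual advisor output):

-- enable the table for the IM column store
ALTER TABLE sales INMEMORY MEMCOMPRESS FOR QUERY LOW PRIORITY HIGH;

-- check what has actually been populated
SELECT segment_name, populate_status, bytes, inmemory_size
FROM v$im_segments;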

For SAP, the Quick Sizer is used for a new implementation of Business Suite powered by SAP HANA. Here is How to Properly Size an SAP In-Memory Database. There are two different approaches to the sizing: user-based sizing, which determines sizing requirements based on the number of users in the system, and throughput-based sizing, which bases the sizing on the items being processed. The sizing rules for SAP Business Suite on SAP HANA are outlined in SAP Note 1793345.

Memory management in the column store: SAP HANA aims to keep all relevant data in memory. Standard row tables are loaded into memory when the database is started and remain there as long as it is running; they are not unloaded. Column tables, on the other hand, are loaded on demand, column by column, when they are first accessed. With Oracle, only the tables (or columns/partitions of a table) that need to be in memory are placed in memory, thus avoiding the waste of memory and giving the ability to do more with less.
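As an illustration of that granularity, a hedged sketch (SALES, its column and the partition name are made-up; note that for SAP tables this column-level control is not supported, as point 5 below explains):

-- keep the whole table in memory except one wide, rarely queried column
ALTER TABLE sales INMEMORY NO INMEMORY (prod_description);

-- or control individual partitions
ALTER TABLE sales MODIFY PARTITION sales_q1_1998 NO INMEMORY;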

The Oracle approach is much more sophisticated but there are 10 major issues DBAs should pay attention to when using the In-Memory option with SAP databases:

1. To use Oracle Database In-Memory with SAP NetWeaver the following technical and business prerequisites must be met:
– Oracle Database 12c Release 1 Patch Set 1 (12.1.0.2)
– UNIX/Linux: Oracle Database SAP Bundle Patch June 2015 (SAP1202P_1506) or newer; strongly recommended is Oracle Database SAP Bundle Patch August 2015 (SAP1202P_1508)
– Windows: Windows DB Bundle Patch 12.1.0.2.6 or newer; strongly recommended is Windows DB Bundle Patch 12.1.0.2.8
– SAP NetWeaver 7.x Version with minimum SAP Kernel 7.21_EXT

2. Indexes. It is not allowed to make any changes to the standard index design of SAP installations. However, customer-specific index design can be changed. That is, all indexes which belong to the Y or Z namespaces can be changed.

3. Database Buffer Cache. It is not allowed to reduce the size of the database buffer cache and assign the memory to the In-Memory column store.

4. SAP Dictionary Support. Full SAP Dictionary (DDIC) Support of in-memory attributes at the table level starts with the support package SAP_BASIS 7.40 SP12.

5. Individual Columns. It is not supported to load individual columns of an SAP table or partition into the IM column store. It is also not supported to exclude individual columns from an SAP table or partition from the IM column store. An SAP table is a database table used by an SAP application.

6. The database where you want to run the In-Memory Advisor must have the XDB component installed, as the In-Memory Advisor relies on functions provided by XDB. In 12c, XDB is installed by default.

7. The In-Memory Advisor is contained in the SAP Bundle Patch as patch 21231656.

8. For SAP applications it is strongly recommended to use a reasonable time window of collected AWR data. At least 2-3 days of AWR data should be used for the In-Memory Advisor; it makes absolutely no sense to use data from a 1-2 hour time window.

9. Use these In-Memory Advisor parameter names and values:

– WRITE_DISADVANTAGE_FACTOR = 0.7
– LOB_BENEFIT_REDUCTION = 1.2
– MIN_INMEMORY_OBJECT_SIZE = 1024000
– READ_BENEFIT_FACTOR = 2

10. Therefore, for SAP systems the following init.ora parameters should be used:

inmemory_max_populate_servers = 4
inmemory_clause_default = "PRIORITY HIGH"
inmemory_size: set to the value (+ ~20% for metadata and journals) from the generate-recommendation step of the In-Memory Advisor, or to the value calculated by the SAP_IM_ADV package for all tables/partitions to be loaded into the In-Memory column store.
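For reference, a sketch of setting these (the 100G figure is hypothetical; use the advisor value plus ~20%). INMEMORY_SIZE is a static parameter, so it goes to the spfile and requires an instance restart:

ALTER SYSTEM SET inmemory_max_populate_servers = 4 SCOPE=BOTH;
ALTER SYSTEM SET inmemory_clause_default = 'PRIORITY HIGH' SCOPE=BOTH;
-- static parameter: takes effect only after the next restart
ALTER SYSTEM SET inmemory_size = 100G SCOPE=SPFILE;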

Using Real Application Testing (RAT) shows the benefit of the In-Memory option: compare the DB time of the 2 replays after moving to Exadata with the In-Memory option. DB time is the best, and possibly the only, metric for comparing captures with replays.
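If you want to pull the DB time figures yourself rather than read them from the replay reports, here is a minimal sketch against AWR (querying AWR requires the Diagnostics Pack license; the value is stored in microseconds):

SELECT snap_id, ROUND(value/1e6) AS db_time_sec
FROM dba_hist_sys_time_model
WHERE stat_name = 'DB time'
ORDER BY snap_id;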


SAP Notes: 2178980 – Using Oracle Database In-Memory with SAP NetWeaver
MOS Notes: 1292089.1 – Master Note for Oracle XML Database (XDB) Install / Deinstall & 1965343.1 – Oracle Database In-Memory Advisor
Using SAP NetWeaver with Oracle Database In-Memory

Bottom line: if my SAP application ran on top of an Oracle database, I would rather put it on Exadata with the In-Memory option enabled than move it to SAP HANA.

Oracle Database Cloud Service vs Amazon Relational Database Service

In Cloud, Consolidation, DBA, Oracle database on February 28, 2016 at 15:00

How does Oracle’s Database Public Cloud compare with Amazon’s Relational Database Service (RDS) for enterprise usage? Let us have a look.

Oracle Database comes in 4 editions: Personal Edition; Express Edition (XE), free of charge and used by very small businesses and students; Standard Edition (SE), a light version of Enterprise Edition, purposely designed to lack most features needed for running production-grade workloads; and Enterprise Edition (EE), which provides the performance, availability, scalability, and security required for mission-critical applications.

In the comparison in this post, we will evaluate Oracle and Amazon in relation to the Enterprise Edition of Oracle’s database.


The Oracle Public Database Cloud consists of 4 DB cloud offerings: DBaaS, Virtual Image, Schema Service and Exadata Service. Here are a few characteristics:

– Oracle supports Exadata, RAC & all DB options
– Simple pricing structure with published costs representing actual costs (unlimited I/Os, etc.)
– Hourly, Monthly & Annual pricing options
– Lowest cloud storage pricing across all major IaaS vendors

Amazon RDS for Oracle Database supports two different licensing models: “License Included” and “Bring-Your-Own-License” (BYOL). In the “License Included” service model, you do not need separately purchased Oracle licenses. Here are a few characteristics:

– Enterprise Edition supports only db.r3.large and larger instance classes, up to db.r3.8xlarge
– Need to choose between Single-AZ (= Availability Zone) Deployment and Multi-AZ Deployment
– For Multi-AZ Deployment, Amazon RDS will automatically provision and manage a “standby” replica in a different Availability Zone (prior to failover you cannot directly access the standby, and it cannot be used to serve read traffic)
– Only 2 instance types support 10 Gigabit network: db.m4.10xlarge and db.r3.8xlarge
– Amazon RDS for Oracle is an exciting option for small to medium-sized clients and includes Oracle Database Standard Edition in its pricing
– Applications with limited requirements might find Amazon RDS to be a suitable platform for hosting a database
– As the enterprise requirements and resulting degree of complexity of the database solution increase, RDS is gradually ruled out as an option

So, here is a high-level comparison:

[Table: high-level comparison of Oracle Database Cloud Service and Amazon RDS]
Notes:

– Oracle’s price includes the EE license with all options
– Amazon AWS is BYOL for EE
– Prices above are based on the EU (Frankfurt) region
– Amazon’s Oracle database hour prices vary from $0.290 to $4.555 for Single-AZ deployments and from $0.575 to $9.105 for Multi-AZ deployments
– Oracle’s database hour prices vary from $0.672 to $8.569

Sources:

Oracle Archive Storage Pricing
Amazon Glacier Storage Pricing
Amazon Database Pricing
Oracle Database Pricing
Amazon Options for Oracle Database Engine
Oracle on Amazon RDS Support & Limitations

So, Amazon RDS is not an option if you need any of the following: Real Application Clusters (RAC), Real Application Testing, Data Guard / Active Data Guard, Oracle Enterprise Manager Grid Control, Automated Storage Management, Database Vault, Streams, Java Support, Locator, Oracle Label Security, Spatial, Oracle XML DB Protocol Server or Network access utilities such as utl_http, utl_tcp, utl_smtp, and utl_mail.

Interesting articles related to this topic:

1. Burning question for Oracle: What’s your response to Amazon? by Barb Darrow
2. Shootout: Oracle DB Cloud vs. Amazon RDS by Jan Navratil
3. The Oracle Database Cloud Service vs Oracle on Amazon RDS by Ranko Mosic
4. A Most Simple Cloud: Is Amazon RDS for Oracle Right for you? by Jeremiah Wilton
5. Oracle RAC and AWS: A Hybrid Cloud Solution by Lindsay Van Thoen
6. How Much Does It Cost to Run Relational Database (RDS) Options on AWS by Yoav Mor
7. Oracle vs. Amazon: The Cloud Wars by Chris Lawless

The James Bond of Database Administration

In Data, DBA, Golden Gate, Oracle database, Oracle Engineered Systems on October 27, 2015 at 07:23

“Defending our systems needs to be as sexy as attacking others. There’s really only one solution: Bond.”

That is what ‘The Guardian’ wrote recently in an article entitled “The Man with the Golden Mouse: why data security needs a James Bond“.

Attending the annual Oracle ACE Director Briefing at Oracle HQ sparked an interesting debate on the following question: what will happen in the near future with the DBA profession? Who is now the James Bond of Database Administration?


According to TechTarget, big data tools are changing data architectures in many companies. The effect on the skill sets required by database administrators may be moderate, but some new IT tricks are likely to be needed. GoldenGate is the new Streams, Exadata is the new RAC, Sharding the new Partitioning, Big Data is the new data (Texas is an exception), you name it…

Having the privilege to work throughout the years with some of the best database experts in the world has proved to me that the Double-O-Sevens are in fact more like Double-O-six-hundreds, meaning that there are hundreds of DBAs who qualify, with no hesitation whatsoever, as the James Bonds of Database Administration. I have learned so much from my ex-Nokia colleagues and from my current Enkitec and Accenture colleagues, not to mention friends from companies like eDBA, Pythian, Miracle, etc.

A DBA needs to have so many skills. Look, for instance, at Craig S. Mullins’ suggested 17 skills required of a DBA. Kyle Hunter’s article The evolution of the DBA and the Data Architect clearly points to the emerging skillsets of the Data Revolution.

In the IT business, and in database administration in particular, it is not that important how well you know the old stuff, it is more important how fast you can learn the new things. Here are some of the tools that help every modern Oracle DBA:

Oracle Enterprise Manager 12c
ORAchk
Metalink/MOS
Developer Tools
Oracle Application Express (APEX)
SQL Developer
Oracle JDeveloper
SQL Developer Data Modeler
And last but not least SQL*Plus®


These additional Metalink tools might often be of great help:

Diagnostic Tools Catalog – Note ID 559339.1
OS Watcher (Support Tool) – Note 301137.1
LTOM (Support Tool) – Note 352363.1
HANGFG (Support Tool) – Note 362094.1
SQLT (Support Tool) – Note 215187.1
PLSQL Profiler (Support Script) – Note 243755.1
MSRDT for the Oracle Lite Repository – Note 458350.1
Trace Analyzer TRCANLZR – Note 224270.1
ORA-600/ORA-7445 Error Look-up Tool – Note 153788.1
Statspack (causing more problems than help in 12c)

The Man with the Golden Mouse is the James Bond of Database Administration. The best DBA tools are still knowledge and experience.


New Features of Oracle NoSQL

In DBA, NoSQL, Oracle database, Oracle Engineered Systems on July 4, 2015 at 16:44

“In open source, we feel strongly that to really do something well, you have to get a lot of people involved.” Linus Torvalds

Currently, there are about 150 NoSQL (= Not Only SQL) databases.

There are 5 major NoSQL data models: Collection, Columnar, Document-oriented, Graph and Key-value.

Oracle NoSQL, based on BerkeleyDB (first released in 1994), was recently named by Forrester Research a leader in the NoSQL key-value database market, and Oracle NoSQL Database was called out as having strong adoption and maturity. A very good study and comparison of several NoSQL databases, entitled 21 NoSQL Innovators to Look for in 2020, was written by Gary MacFadden.


Here are a few examples:

Collection/Multi-model: OrientDB, FoundationDB, ArangoDB, Alchemy Database, CortexDB
Columnar: Accumulo, Cassandra, Druid, HBase, Vertica
Document-oriented: Lotus Notes, Clusterpoint, Apache CouchDB, Couchbase, HyperDex, MarkLogic, MongoDB, OrientDB, Qizx
Graph: Allegro, Neo4J, InfiniteGraph, OrientDB, Virtuoso, Stardog
Key-value: Redis, CouchDB, Oracle NoSQL Database, Dynamo, FoundationDB, HyperDex, MemcacheDB, Riak, FairCom c-treeACE, Aerospike, OrientDB, MUMPS

Last month (June 2015), Oracle announced Oracle NoSQL Database Version 3.3.4. This release offers new security features, including User Roles and Table-level Authorization, new language interfaces for Node.js and Python, and integration with Oracle Database Mobile Server. The prior release offered Big Data SQL support, a RESTful API, a C Table Driver, SQL-like DDL, Apache Hive support and much more.

A good starting point in order to get deeper into the NoSQL and Big Data world is the Oracle Big Data Learning Library.

Oracle recently announced Big Data SQL for Oracle NoSQL Database. This feature allows Oracle Database users to connect to external data repositories like Oracle NoSQL Database or Hadoop and to fetch data from any or all of the repositories (at once) through a single SQL query.

Oracle Big Data SQL is an innovation from Oracle only available on Oracle Big Data Appliance. It is a new architecture for SQL on Hadoop, seamlessly integrating data in Hadoop and NoSQL with data in Oracle Database. Using Oracle Big Data SQL one can:

• Combine data from Oracle Database, Hadoop and NoSQL in a single SQL query
• Query and analyze data in Hadoop and NoSQL
• Integrate big data analysis into existing applications and architectures
• Extend security and access policies from Oracle Database to data in Hadoop and NoSQL
• Maximize query performance on all data using Smart Scan
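To make the first bullet concrete, here is a minimal, hedged sketch (the Hive table, the columns and the directory are hypothetical, and it assumes Big Data SQL is installed and configured on the database side; ORACLE_HIVE is the documented access driver):

-- expose a Hive table to Oracle Database
CREATE TABLE sales_hdfs (
  cust_id NUMBER,
  amount  NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_HIVE
  DEFAULT DIRECTORY default_dir
  ACCESS PARAMETERS (com.oracle.bigdata.tablename: hive_db.sales)
)
REJECT LIMIT UNLIMITED;

-- one query combining Hadoop data with a regular Oracle table
SELECT c.cust_name, SUM(s.amount)
FROM customers c JOIN sales_hdfs s ON s.cust_id = c.cust_id
GROUP BY c.cust_name;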


The recent update to Oracle REST Data Services enables a consistent RESTful interface to Oracle Database’s relational tables and JSON document store, and also enables access to Oracle NoSQL Database tables.

I still recommend reading the excellent article by Gwen Shapira entitled Hadoop and NoSQL Mythbusting.

Here are some useful links:

NoSQL Database Administrator’s Guide
Getting Started with NoSQL Database Table API
NoSQL Database Run Book
NoSQL Database Security Guide
Oracle NoSQL Database Availability and Failover
Download Oracle NoSQL Database, Server


It is interesting to note that according to Wikipedia, 12.1.3.3.4 is the first stable Oracle NoSQL release.

Check the DBMS popularity broken down by database model!

[Chart: DBMS popularity broken down by database model]

Table Temperature, Big Tables In-Memory and Automatic Big Table Caching

In DBA, Init.ora, Oracle database on April 3, 2015 at 16:10

If you have never heard of Oracle object temperature and how this relates to caching, in-memory and big tables, here is a short note perhaps worth reading.

And all this is purely Oracle database related, nothing to do with Oracle Lighting Products.


Background: Oracle 12.1.0.2 came with 5 new init.ora parameters (actually 12, but the other 7 are In-Memory related):

1. common_user_prefix
2. dbfips_140
3. enable_goldengate_replication
4. exafusion_enabled
5. db_big_table_cache_percent_target

The first 4 are rather on-off type parameters, but the last one, db_big_table_cache_percent_target, is worth looking into.

A table is considered small if the number of blocks in the segment is lower than or equal to the value of the parameter _small_table_threshold. In 12c, this parameter defaults to 2% of the buffers in the cache; to be more precise, 2% of _db_block_buffers. For details, check Jonathan Lewis’s Small tables. Another very good one is by Tanel Poder, entitled Optimizer statistics-driven direct path read decision for full table scans.

Otherwise, we call the table big.


In 12.1.0.2, a DBA can enable full database caching with the following command: ALTER DATABASE FORCE FULL DATABASE CACHING;

Under normal operation, the Oracle database decides what data to cache in the buffer cache. If there is not enough room, data can be aged out of the cache. If Oracle determines that the buffer cache is big enough to hold the entire database, it will cache all blocks.
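A quick sketch of switching it on and checking the status (the command must be run while the database is mounted but not open):

-- in MOUNT state:
ALTER DATABASE FORCE FULL DATABASE CACHING;

-- after opening the database, verify:
SELECT force_full_db_caching FROM v$database;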

However, we seldom have enough memory to cache the whole database. In this case, the parameter db_big_table_cache_percent_target can be used, and of course, for “big tables” only.

The default value is 0, so under normal circumstances automatic big table caching is not enabled. You can decide on your own if this is Poor Man’s In-Memory Caching, but even if it is not, the benefits are rather questionable. Let us see why.
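To follow along, here is a minimal sketch of enabling the feature and checking the cache section (50 is the value used in the test below; the V$BT_SCAN_CACHE column names are as in the 12.1 reference):

-- dynamic parameter, no restart needed
ALTER SYSTEM SET db_big_table_cache_percent_target = 50;

SELECT bt_cache_target, object_count, min_cached_temperature
FROM v$bt_scan_cache;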

We have 3 tables: SALES, OLD_SALES and VERY_OLD_SALES.


SALES and OLD_SALES are huge, and SALES is in-memory, that is, the In-Memory option has been enabled for the SALES table. VERY_OLD_SALES is a rather small table, but still considered big by Oracle, as the small table threshold is 401 (2% of 20090):

select a.ksppinm name, b.ksppstvl value 
from x$ksppi a, x$ksppcv b 
where a.indx = b.indx 
and a.ksppinm like '%small_table%';

_small_table_threshold
401

select a.ksppinm name, b.ksppstvl value 
from x$ksppi a, x$ksppcv b 
where a.indx = b.indx 
and a.ksppinm like '%block_buffer%';

_db_block_buffers
20090

Let us see if the table is In-Memory enabled and how many blocks it has:

[Query output: INMEMORY attributes and block counts of the tables]

3 new dynamic views give us details on big table caching:
– V$BTS_STAT
– V$BT_SCAN_CACHE shows the parameters and status of the big table cache section.
– V$BT_SCAN_OBJ_TEMPS shows the active objects currently tracked by the big table cache.
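A typical look at the tracked objects, hottest first (a sketch; the column names are as documented for 12.1):

SELECT dataobj#, size_in_blks, temperature, policy, cached_in_mem
FROM v$bt_scan_obj_temps
ORDER BY temperature DESC;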

As SALES is an IM-table (DATAOBJ# is 91985), regardless of how often you scan it, no buffers will go to the big table cache (I have set db_big_table_cache_percent_target to 50):

[Query output: big table cache contents after scanning SALES]

If you do a full table scan on OLD_SALES, as it is way too big for the BT-cache (= Big Table Cache), there is no use either: I have tested it, and V$BT_SCAN_OBJ_TEMPS shows nothing.

However, when I do a subset scan on OLD_SALES, the BT-cache is used, as the buffers fit.

Now, let us look at the temperature of the tables. Oracle assigned 2000 as the temperature of VERY_OLD_SALES (DATAOBJ# is 92326) and 1000 as the temperature of OLD_SALES (DATAOBJ# is 92322), which is only partially loaded into the BT-cache. After updating VERY_OLD_SALES, the temperature grows to 5000:

[Query output: object temperatures in V$BT_SCAN_OBJ_TEMPS]

The more SQL & DML we do on the table, the higher the temperature gets. A detailed study by Mahmoud Hatem can be found here.

In short:

– It is not possible to control this feature on a per-table basis; there are no attributes/parameters like KEEP or STORAGE clauses at table level.
– If we dedicate real memory just to this type of caching, it will not be used for huge tables that would not fit anyway; it will only be used for subset retrievals.
– The memory could be more useful for the IM option, where the benefits are much more clear and obvious.
– The BT-cache is also populated by full index or range scans.
– The CBO is not aware that segments are being cached.
– One cannot manually flush the BT-cache, but it is gone after flushing the buffer cache.
– If the BT-cache is not big enough for several objects, the ones with the highest temperature are given priority.

Big table cache: an optional, integrated portion of the database buffer cache that uses a temperature-based, object-level replacement algorithm instead of the traditional LRU-based, block-level replacement algorithm.

Oracle X5 vs BMW X5

In DBA, Exadata, Oracle Engineered Systems on February 12, 2015 at 13:48

Oracle Exadata X5 is the latest generation of Engineered Systems and can be compared to a BMW X5. The “equivalent” integrated systems of competitors fall into the category of a mid-class sedan, except for the price: with Exadata you pay less and get double or triple the performance.


To get more details on the topic, it is worth reading CIO Angle’s Oracle declares war on EMC, VCE: “Oracle will sell its latest generation X5 Engineered Systems hardware, which is Oracle’s term for converged systems, separately from Oracle applications on Linux or Windows at a significantly lower price than market leader VCE’s comparable systems. Ellison announced list prices that are half of VCE’s discounted prices. And he said “We will negotiate. We will discount.”

Ellison also announced an all-flash array that’s part of X5 but sold as a stand-alone unit. Again, the list price is one-third of the price of what Oracle said is a comparable EMC XtremIO array. Significantly, he did not announce a stand-alone server or network switch, although these are also X5 components. This seems to indicate that Oracle is not going after Cisco Systems, at this time.”


Rather than reading, it is worth watching the following presentation by Larry Ellison: The Next Generation of Oracle Engineered Systems.


And here are some useful links:

1. Juan Loaiza – Oracle Next Generation Engineered Systems Launch – theCUBE

2. Gartner: New X5 Generation Will Bring Pricing Improvements to Oracle Exadata

3. Oracle aims to undercut competition with X5 generation of engineered systems

4. Oracle Debuting FS1 Series Flash Storage System

5. Oracle Introduces New Generation of Engineered Systems and New Pricing Strategy

6. Oracle Exadata X5 Data Sheet

7. Oracle Server X5-2 System Architecture


Exadata X5 with the In-Memory option enabled would be an M6 Gran Coupe, if it were a car.

The 7 Initialization Parameters Related to the IM Column Store

In Database options, DBA, Init.ora, Oracle database on January 14, 2015 at 14:42

I often see and hear claims about how many times faster Oracle 12c IM is compared to whatever. Although the numbers are more realistic than wishful thinking, I must say that all of that strongly depends on how the IM option has been set up.

Besides the way you set up the tables, the MVs and the tablespaces, there are 7 init.ora parameters that directly affect the behavior, performance and speed of the in-memory option.


1. INMEMORY_SIZE

Default: 0

This initialization parameter sets the size of the IM column store in a database instance.

The default value is 0, which means that the IM column store is not used. This initialization parameter must be set to a non-zero value to enable the IM column store. If the parameter is set to a non-zero value, then the minimum setting is 100M.

In a multitenant environment, the setting for this parameter in the root is the setting for the entire multitenant container database (CDB). This parameter can also be set in each pluggable database (PDB) to limit the maximum size of the IM column store for each PDB. The sum of the PDB values can be less than, equal to, or greater than the CDB value. However, the CDB value is the maximum amount of memory available in the IM column store for the entire CDB, including the root and all of the PDBs. Unless this parameter is specifically set for a PDB, the PDB inherits the CDB value, which means that the PDB can use all of the available IM column store for the CDB.

Julian’s tip: the bigger the better: more stuff will fit in.
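A sketch of growing the column store (the 2G value is arbitrary; in 12.1 the parameter is static, so a restart is needed):

ALTER SYSTEM SET inmemory_size = 2G SCOPE=SPFILE;
SHUTDOWN IMMEDIATE
STARTUP
SHOW PARAMETER inmemory_size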

2. INMEMORY_MAX_POPULATE_SERVERS

Default: Half the effective CPU thread count or the PGA_AGGREGATE_TARGET value divided by 512M, whichever is less.

This initialization parameter specifies the maximum number of background populate servers to use for IM column store population, so that these servers do not overload the rest of the system. Set this parameter to an appropriate value based on the number of cores in the system.

Julian’s tip: use something between the default and its double

3. INMEMORY_FORCE

Default: DEFAULT

This initialization parameter can enable tables and materialized views for the IM column store or disable all tables and materialized views for the IM column store.

Set this parameter to DEFAULT, the default value, to allow the INMEMORY or NO INMEMORY attributes on the individual database objects determine if they will be populated in the IM column store.

Set this parameter to OFF to specify that all tables and materialized views are disabled for the IM column store.

Julian’s tip: no brainer

4. INMEMORY_CLAUSE_DEFAULT

Default: an empty string

This initialization parameter enables you to specify a default IM column store clause for new tables and materialized views.

Leave this parameter unset or set it to an empty string to specify that there is no default IM column store clause for new tables and materialized views. Setting the value of this parameter to NO INMEMORY has the same effect as setting it to the default value (the empty string).

Set this parameter to a valid INMEMORY clause to specify that the clause is the default for all new tables and materialized views. The clause can include valid clauses for IM column store compression methods and data population options.

If the clause starts with INMEMORY, then all new tables and materialized views, including those without an INMEMORY clause, are populated in the IM column store. If the clause omits INMEMORY, then it only applies to new tables and materialized views that are enabled for the IM column store with an INMEMORY clause during creation.

Julian’s tip: start with an empty string and, depending on your IM strategy, modify it later on

5. INMEMORY_QUERY

Default: ENABLE

This initialization parameter specifies whether in-memory queries are allowed. Set this parameter to ENABLE, the default value, to allow queries to access database objects populated in the IM column store, or set this parameter to DISABLE to disable access to the database objects populated in the IM column store.

Julian’s tip: very useful parameter on session level for testing how fast the IM option is
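For example, to time the same statement with and without the column store in a single session (a sketch):

ALTER SESSION SET inmemory_query = DISABLE;
-- run the query, note the elapsed time and the buffer gets
ALTER SESSION SET inmemory_query = ENABLE;
-- run the same query again and compare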

6. INMEMORY_TRICKLE_REPOPULATE_SERVERS_PERCENT

Default: 1

This initialization parameter limits the maximum number of background populate servers used for IM column store repopulation, as trickle repopulation is designed to use only a small percentage of the populate servers. The value for this parameter is a percentage of the INMEMORY_MAX_POPULATE_SERVERS initialization parameter value. For example, if this parameter is set to 10 and INMEMORY_MAX_POPULATE_SERVERS is set to 10, then on average one core is used for trickle repopulation.

Julian’s tip: increase it to a level based on your CPU cores and the need for fast repopulation.

7. OPTIMIZER_INMEMORY_AWARE

Default: TRUE

This initialization parameter enables or disables all of the optimizer cost model enhancements for in-memory. Setting the parameter to FALSE causes the optimizer to ignore the in-memory property of tables during the optimization of SQL statements.

Julian’s tip: good for testing purposes to see how IM changes the behaviour of the CBO
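A sketch of checking the effect on the CBO (the SALES table is hypothetical):

ALTER SESSION SET optimizer_inmemory_aware = FALSE;
EXPLAIN PLAN FOR SELECT COUNT(*) FROM sales;
SELECT * FROM TABLE(dbms_xplan.display);
-- set it back to TRUE, repeat, and compare the plans and their costs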

An interesting test case for the same SQL:

[Query output: the same SQL with and without In-Memory]

I would always recommend PARALLEL_DEGREE_POLICY = AUTO when using the 12c in-memory option.

And finally some interesting articles on 12c IM:

Off May Not Be Totally Off: Is Oracle In-Memory Database 12c (12.1.0.2.0) Faster? by Craig Shallahamer

12c In-Memory in PDB by Kerry Osborne

Getting started with Oracle Database In-Memory Part I – Installing & Enabling by Maria Colgan

In-Memory Column Store in Oracle Database 12c Release 1 (12.1.0.2) by Tim Hall

Our take on the Oracle Database 12c In-Memory Option by Tanel Poder