Dontcheff

Archive for the ‘DBA’ Category

Can you use Oracle RAC on Third-Party Clouds?

In DBA on July 11, 2017 at 14:40

Q: Can you use Oracle RAC on Third-Party Clouds?
A: No.

The Licensing Oracle Software in the Cloud Computing Environment document states:

“This policy applies to cloud computing environments from the following vendors: Amazon Web Services – Amazon Elastic Compute Cloud (EC2), Amazon Relational Database Service (RDS) and Microsoft Azure Platform (collectively, the ‘Authorized Cloud Environments’). This policy applies to these Oracle programs.”

The document that lists “these Oracle Programs” does not include RAC (or Multitenant or In-Memory DB).

For more details, check Markus Michalewicz’s (Oracle RAC Product Manager) white paper entitled How to Use Oracle RAC in a Cloud? – A Support Question. The image above is slide 45/50 from the same paper.

And here is how to use Oracle Real Application Clusters (RAC) in the Oracle Database Cloud Service.

An interesting blog post by Brian Peasland entitled Oracle RAC on Third-Party Clouds concludes: “But if I were looking to move my company’s RAC database infrastructure to the cloud, I would seriously investigate the claims in this Oracle white paper before committing to the AWS solution. That last sentence is the entire point of this blog post.”

For business- and mission-critical applications, I would by all means recommend Oracle Real Application Clusters on Oracle Bare Metal Cloud.

We should not forget that something working and something being supported are two totally different things. Even for the Oracle Cloud, check the Known issues for Oracle Database Cloud Service document: updating the cloud tooling on a deployment hosting Oracle RAC requires a manual update of the Oracle Database Cloud Backup Module.

Conclusion: Oracle RAC can NOT be licensed (and consequently not be used) in the above-mentioned cloud environments, although such claims were published on the internet as recently as yesterday (July 10, 2017).

Twelve new features for Cyber Security DBAs

In Cloud, Data, DBA, Security and auditing on June 2, 2017 at 08:32

In the early years of Oracle, Larry Ellison was asked if clients ever ask for their money back. “Nobody’s asked for their money back yet,” he replied, “a few have asked for their data back though!”

A relatively new Wells Fargo Insurance Cyber Security study shows that companies are more concerned with private data loss than with hackers.

Thus, one of the main roles of the cyber security DBA is to protect and secure the data.

Here is what the latest Oracle release 12cR2 is offering us:

1. A Fully Encrypted Database

To encrypt an entire database, you must encrypt all the tablespaces within this database, including the Oracle-supplied SYSTEM, SYSAUX, UNDO, and TEMP tablespaces (which is now possible in 12.2). For a temporary tablespace, drop it and then recreate it as encrypted – do not specify an algorithm. Oracle recommends that you encrypt the Oracle-supplied tablespaces by using the default tablespace encryption algorithm, AES128. Here is how you do it:

ALTER TABLESPACE system ENCRYPTION ONLINE ENCRYPT 
FILE_NAME_CONVERT=('system01.dbf','system01_enc.dbf'); 

2. TDE Tablespace Live Conversion

You can now encrypt, decrypt, and rekey existing tablespaces with Transparent Data Encryption (TDE) tablespace live conversion. The feature performs initial cryptographic migration for TDE tablespace encryption on the tablespace data in the background so that the tablespace can continue servicing SQL and DML statements like insert, delete, select, merge, and so on. Ensure that you have enough auxiliary space to complete the encryption and run (for example):

ALTER TABLESPACE users ENCRYPTION ONLINE USING 'AES192' ENCRYPT 
FILE_NAME_CONVERT = ('users.dbf', 'users_enc.dbf'); 

3. Support for ARIA, SEED, and GOST algorithms

By default, Transparent Data Encryption (TDE) column encryption uses the Advanced Encryption Standard with a 192-bit length cipher key (AES192), and tablespace and database encryption use the 128-bit length cipher key (AES128). 12.2 provides Advanced Security Transparent Data Encryption (TDE) support for these encryption algorithms:

– SEED (Korea Information Security Agency (KISA)) for South Korea
– ARIA (Academia, Research Institute, and Agency) for South Korea
– GOST (GOsudarstvennyy STandart) for Russia

ALTER TABLE clients REKEY USING 'GOST256'; 

4. TDE Tablespace Offline Conversion

12.2 introduces new SQL commands to encrypt tablespace files in place with no storage overhead. You can do this on multiple instances across multiple cores. Using this feature requires downtime, because you must take the tablespace temporarily offline. With Data Guard configurations, you can either encrypt the physical standby first and switchover, or encrypt the primary database, one tablespace at a time. This feature provides fast offline conversion of existing clear data to TDE encrypted tablespaces. Use the following syntax:

ALTER TABLESPACE users ENCRYPTION OFFLINE ENCRYPT; 

5. Setting Future Tablespaces to be Encrypted

ALTER SYSTEM SET ENCRYPT_NEW_TABLESPACES = CLOUD_ONLY; 

CLOUD_ONLY transparently encrypts the tablespace in the Cloud using the AES128 algorithm if you do not specify the ENCRYPTION clause of the CREATE TABLESPACE SQL statement: it applies only to an Oracle Cloud environment. ALWAYS automatically encrypts the tablespace using the AES128 algorithm if you omit the ENCRYPTION clause of CREATE TABLESPACE, for both the Cloud and on-premises scenarios.
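
To see whether a tablespace created without an ENCRYPTION clause really ended up encrypted, a minimal check could look like this (APP_DATA is a hypothetical tablespace and the datafile path is just an example):

CREATE TABLESPACE app_data DATAFILE '/u02/oradata/app_data01.dbf' SIZE 100M;
SELECT tablespace_name, encrypted FROM dba_tablespaces WHERE tablespace_name = 'APP_DATA';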

6. Role-Based Conditional Auditing

Role-based conditional auditing provides the ability to define unified audit policies that conditionally audit users based on a role in addition to the current capability to audit by users. This feature enables more powerful policy-based conditional auditing by using database roles as the condition for auditing. For example, auditing for new users with the DBA role would begin automatically when they are granted the role:

CREATE AUDIT POLICY role_dba_audit_pol ROLES DBA CONTAINER = ALL; 
AUDIT POLICY role_dba_audit_pol;

7. Strong Password Verifiers by Default and Minimum Authentication Protocols

The newer verifiers use salted hashes, modern SHA-1 and SHA-2 hashing algorithms, and mixed-case passwords.

The allowed_logon_version_server in the sqlnet.ora file is used to specify the minimum authentication protocol allowed when connecting to Oracle Database instances. 
Oracle notes that the term “version” in the allowed_logon_version_server parameter name refers to the version of the authentication protocol.  It does NOT refer to the Oracle release version.

– SQLNET.ALLOWED_LOGON_VERSION_SERVER=8 generates all three password versions 10g, 11g, and 12c
– SQLNET.ALLOWED_LOGON_VERSION_SERVER=12 generates both 11g and 12c password versions, and removes the 10g password version
– SQLNET.ALLOWED_LOGON_VERSION_SERVER=12a generates only the 12c password version
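
After tightening SQLNET.ALLOWED_LOGON_VERSION_SERVER, it is worth checking which password versions each account actually has, so that nobody gets locked out; a quick check against the data dictionary:

SELECT username, password_versions FROM dba_users ORDER BY username;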

8. New init.ora parameter called OUTBOUND_DBLINK_PROTOCOLS

Due to direct SQL*Net Access Over Oracle Cloud, existing applications can now use Oracle Cloud without any code changes. We can easily control the outbound database link options:

– OUTBOUND_DBLINK_PROTOCOLS specifies the allowed network protocols for outbound database link connections: this can be used to restrict database links to use secure protocols
– ALLOW_GLOBAL_DBLINKS allows or disallows global database links, which look up LDAP by default
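
For example, and assuming the 12.2 parameter names as documented (these settings may require an instance restart, hence SCOPE=SPFILE in this sketch):

ALTER SYSTEM SET OUTBOUND_DBLINK_PROTOCOLS = 'TCPS' SCOPE=SPFILE;
ALTER SYSTEM SET ALLOW_GLOBAL_DBLINKS = FALSE SCOPE=SPFILE;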

9. SYSRAC – Separation of Duty in a RAC

SYSRAC is a new role for Oracle Real Application Clusters (Oracle RAC) management. This administrative privilege is the default mode for connecting to the database by the clusterware agent on behalf of the Oracle RAC utilities such as srvctl. For example, we can now create a named administrative account and grant only the administrative privileges needed such as SYSRAC and SYSDG to manage both Oracle RAC and Oracle Data Guard configurations.
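
A minimal sketch of such a named account (the user name is hypothetical; in a multitenant container database a common user would need the C## prefix):

CREATE USER hadmin IDENTIFIED BY "StrongPassword#2017";
GRANT SYSRAC TO hadmin;
GRANT SYSDG TO hadmin;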

10. Automatic Locking of Inactive User Accounts

CREATE PROFILE time_limit LIMIT INACTIVE_ACCOUNT_TIME 60;

Within a user profile, the INACTIVE_ACCOUNT_TIME parameter controls the maximum time that an account can remain unused. The account is automatically locked if a log in does not occur in the specified number of days. Locking inactive user accounts prevents attackers from using them to gain access to the database. The minimum setting is 15 and the maximum is 24855. The default for INACTIVE_ACCOUNT_TIME is UNLIMITED.
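
The profile then needs to be assigned to the accounts in question, or the limit can be changed in an existing profile; for example (APP_BATCH is a hypothetical account):

ALTER USER app_batch PROFILE time_limit;
-- or change the limit in the DEFAULT profile for all accounts using it
ALTER PROFILE default LIMIT INACTIVE_ACCOUNT_TIME 90;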

11. Kerberos-Based Authentication for Direct NFS

Oracle Database now supports Kerberos implementation with Direct NFS communication. This feature solves the problem of authentication, message integrity, and optional encryption over unsecured networks for data exchange between Oracle Database and NFS servers using Direct NFS protocols.

12. Lockdown Profiles

A lockdown profile is a mechanism used to restrict the operations that can be performed by connections to a given PDB, both in the cloud and on premises.

There are three kinds of restrictions you can enable or disable:

Feature: it lets us enable or disable database features for, say, junior DBAs (or cowboy DBAs)
Option: for now, the two options we can enable/disable are “DATABASE QUEUING” and “PARTITIONING”
Statement: we can either enable or disable the statements “ALTER DATABASE”, “ALTER PLUGGABLE DATABASE”, “ALTER SESSION”, and “ALTER SYSTEM”. In addition, we can specify granular options along with these statements. Example:

ALTER LOCKDOWN PROFILE junior_dba_prof STATEMENT = ('ALTER SYSTEM') 
CLAUSE = ('SET')  OPTION= ('OPTIMIZER_INDEX_COST_ADJ');
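
For completeness, a sketch of the full workflow: the profile referenced above is created in the CDB root, restrictions are added, and it is then activated for a PDB via the PDB_LOCKDOWN parameter (PDB1 is a hypothetical PDB name):

CREATE LOCKDOWN PROFILE junior_dba_prof;
ALTER LOCKDOWN PROFILE junior_dba_prof DISABLE OPTION = ('PARTITIONING');
ALTER SESSION SET CONTAINER = PDB1;
ALTER SYSTEM SET PDB_LOCKDOWN = junior_dba_prof;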

But then again, the most secure database is the database with no users connected to it.

GDPR for DBAs

In DBA, Security and auditing on May 25, 2017 at 07:10

Exactly one year from now, from May 25th 2018, all businesses that handle personal data will have to comply with the new General Data Protection Regulation (GDPR) legislation.

At 260 pages in length, with 99 Articles and over 100 pages of explanatory notes known as ‘Annexes’, the GDPR is roughly three times the length of the Data Protection Act 1998 it is replacing.

The requirements for databases are:

– Discovery
– Classification
– Masking
– Monitoring
– Audit reporting
– Incident response and notification

The maximum penalty for non-compliance is 4% of annual revenue or €20 million, whichever is higher. Lower fines of up to 2% are possible for administrative breaches, such as not carrying out impact assessments or not notifying the authorities or individuals in the event of a data breach. This puts data protection penalties into the same category as anti-corruption or competition compliance.

What DBAs should start with now is to account for and identify 100% of the private data located in all databases!

There are 4 major categories where DBAs will be involved. The details can be found in the appendix on page 19/23 entitled Mapping of Oracle Database Security Products to GDPR.

1. Assess (Article 35 and Recital 84)
2. Prevent (Articles 5,6,29,32 and Recitals 26,28,64,83)
3. Detect (Articles 30,33,34)
4. Maximum protection (Articles 25,32)

Article 25 is about data minimization, user access limits and limiting the period of storage and accessibility.
Article 32 is about pseudonymization and encryption, ongoing protection and regular testing and verification.
Articles 33 and 34 are about data breach notification: notification is required within 72 hours of discovering a data breach.
Article 35 is about the data protection impact assessment.
Article 44 covers data transfers to third countries or international organizations: transfers are allowed only to entities in compliance with the regulation.

DBA job ads nowadays include GDPR skills and responsibilities.

The main lawful bases for data processing are consent and necessity. Data can be recognized as a necessity if it:

• Relates to the performance of a contract
• Illustrates compliance with a legal obligation
• Protects the vital interests of the data subject or another person
• Relates to a task that’s in the public interest
• Is used for purposes of legitimate interests pursued by the controller or a third party (except where overridden by the rights of the data subject)

Data subjects’ requests for access should be responded to within a month and without charge. This is new legislation within the GDPR and the same one month time frame applies to rectifying inaccurate data.

Breach notifications should be made within 72 hours of becoming aware. If this time frame isn’t met, a fine of 10M€, or 2% of global turnover, can be issued as a penalty. A breach is any failure of security leading to the destruction, loss, alteration, unauthorized disclosure of/access to personal data. Supervisory authorities must be notified if a breach results in a risk to the rights and freedoms of individuals.

Data held in an encrypted or pseudonymized form isn’t deemed to be personal data and falls outside of the scope of these new rules altogether. Despite this, data that’s encrypted and considered secure using today’s technology may become readable in the future. Therefore it’s worth considering format-preserving encryption/pseudonymization, which renders the data anonymous but still allows selected processing of it.

Here are a few interesting articles meant mostly for DBAs:

Accelerate Your Response to the EU General Data Protection Regulation (GDPR)

Data Privacy and Protection GDPR Compliance for Databases

European Union GDPR compliance for the DBA

SQL Server 2016 – Always Encrypted and the GDPR

How Oracle Security Solutions Can Help the EU GDPR

Top 10 operational impacts of the GDPR

Cloud Nine

In Cloud, DBA, IaaS, Oracle database on May 3, 2017 at 16:58

“Get happiness out of your work or you may never know what happiness is.” — Elbert Hubbard

According to Amazon, as quoted by Fortune Magazine in a recent article entitled “Amazon Data Center Chief schools Oracle CEO on Cloud claims”, AWS executive Andy Jassy said (at AWS Re:Invent two years ago) that every database customer he talks to is unhappy with their vendor: “I haven’t met a database customer that is not looking to flee their vendor.”

Another interesting article by James Hamilton entitled “How many Data Centers needed world-wide” discusses more or less the same topic. Reading the comments after it is worthwhile. Also consider reading these extensive performance results about Cloud performance and TCO of the Oracle database.

A young and extremely smart analyst from my company asked me last week: “Why is the Oracle database better than MySQL or MongoDB?”. Tough question, right? You may ask the same about DB2 or SQL Server. All databases have their pros and cons. And we as people have our preferences, based on experience, knowledge and prejudices.

If you try to find out the explanation of the quote at the top, you might very likely end up with this one: “You have to spend most of your life working, so if you’re unhappy at your work you’re likely to always be unhappy”.

So, I have been happy (if that is the right word) working with the Oracle database. Unlike DB2, you have all the tools, options and automation to tune it. With a couple of hundred MySQL databases at Nokia, we spent more time (thank you, Google!) investigating issues than with more than one thousand Oracle databases. SQL Server: if you prefer using the mouse instead of the keyboard, then this is the right database for you! Teradata compared to Exadata: let me not start…

As Forrester says, In-Memory Databases are driving next-generation workloads and use cases. Check out this recent comparison of all vendors.

But back to Cloud. Have a look at the speed with which Oracle is embedding new features into its Cloud: by far the best Cloud for Oracle workloads! All these are new additions to the Oracle IaaS:

What’s New for Oracle Compute Cloud Service (IaaS)

– Compute Service: 8 and 16 OCPU Virtual Machines
– CentOS and Ubuntu OS images
– RHEL via a BYOI OS image
– Multipart Upload: enables uploading an object in parts, improving upload speed and accommodating larger objects
– Audit Service: this new service automatically records calls to all supported BMCS public API endpoints as log events
– Search Domain DHCP
– Terraform Provider: the BMCS Terraform provider is now available; Terraform is an open source infrastructure automation and management software tool
– Developer Tools Enhancements: new versions of the BMCS developer tools are now available, including the Ruby, Python, and Java SDKs, the HDFS Connector, and the CLI
– New Instance Shapes
– Windows BYOL: it is now possible to Bring Your Own License (BYOL) for Windows Server
– NVMe Storage: you can now use NVMe SSD disks as ephemeral data disks attached to your instances
– SSD Block Storage: these high-performance volumes can be used for persistent block storage or for bootable volumes
– New Web UI: this new interface can be used to perform basic operations against Storage Cloud resources
– HSM Cloud Service Integration

Oracle Exadata Cloud Machine Q&A

In Cloud, Database options, DBA, Exadata, IaaS, Oracle database, Oracle Engineered Systems on March 28, 2017 at 07:25

“The computer industry is the only industry that is more fashion-driven than women’s fashion.” Larry Ellison

Amazon has no on-premises presence; it is cloud only. Microsoft has the Azure Stack, but Azure Stack Technical Preview 3 is made available only as a Proof of Concept (POC): it must not be used as a production environment and should only be used for testing, evaluation, and demonstration.

Oracle Database Exadata Public Cloud Machine, or Exadata Cloud Machine, or ExaCM in short, is a cloud-based Oracle database subscription service available on Oracle Exadata, and deployed in the customer or partner data center behind their firewall. This allows customers to subscribe to fully functional Oracle databases on Exadata, on an OPEX driven consumption model, using agile cloud-based provisioning, while the associated Exadata infrastructure is maintained by Oracle.

This blog post contains the top 10 (5 commercial and 5 technical) facts about the Exadata Cloud Machine:

1. On-premise licenses cannot be transferred to ExaCM.

2. The minimum commitment to both the ExaCM and OCM is 4 years and the minimum configuration is Eighth Rack.

3. The subscription price for Oracle Database Exadata Cloud Machine X6 Eighth Rack is $40,000 per month (= $2,500 X 16) and that includes all DB options/features, Exadata Software and OEM DB Packs.

4. Standalone products such as Oracle Secure Backup and Oracle GoldenGate are not included in the ExaCM subscription. Only the database options (such as RAC, In-Memory, Partitioning, Active Data Guard, etc.), the database OEM packs and the Exadata storage server software are included.

5. ExaCM requires Oracle Cloud Machine to deploy the Exadata Cloud Control Plane (separate subscription). The OCM subscription requires a similar minimum term commitment as ExaCM. If a customer already has an OCM, it can be leveraged to deploy the Exadata Control Plane at no extra cost. One OCM Model 288 can manage 6 ExaCM Full Racks (i.e. 24 ExaCM Quarter Racks or 12 ExaCM Half Racks). Theoretically, one OCM can support a much larger number of ExaCM full racks: about 50.

6. The 1/8th rack SKU is very similar to the on-premises 1/8th rack – i.e. minimum configuration of 16 OCPUs (cores), 240 GB RAM per database server, 144 TB raw storage (42 TB usable), 19.2TB of Flash. Compared to the Quarter Rack, it ships with less RAM, disk storage and flash. Those will be field installed if the customer chooses to go for the 1/8th to Quarter Rack upgrade. Note that this 1/8th rack enables customers to have an entry level configuration that is similar to what exists in Exadata Cloud Service.

7. Hourly Online Compute Bursting is supported with ExaCM. The commercial terms are the same as in Exadata Cloud Service – i.e. 25% premium over the Metered rate, calculated on an hourly basis. Customers can scale up or down, dynamically. Bursting does not kick in automatically based on load. Bursting of OCPUs needs to be configured by customers as needed. Once customer initiates bursting, the OCPU update is done dynamically without downtime. Customers will be billed later on the hours of bursting usage. Price: $8.401 per OCPU per hour.

8. If the Cloud Control Plane is down, it does not affect the availability of steady-state runtime operations. However, cloud-based management (e.g., the self-service UI and REST API access) will be impacted.

9. Access: the Exadata Cloud Machine compute nodes are each configured with a Virtual Machine (VM). You have root privilege for the Exadata compute node VMs, so you can load and run additional software on the Exadata compute nodes. However, you do not have administrative access to the Exadata infrastructure components, including the physical compute node hardware, network switches, power distribution units (PDUs), integrated lights-out management (ILOM) interfaces, or the Exadata Storage Servers, which are all administered by Oracle.

10. Patching and backups: you can produce a list of available patches using the exadbcpatchmulti command as follows

# /var/opt/oracle/exapatch/exadbcpatchmulti -list_patches \
  -sshkey=/home/opc/.ssh/id_rsa \
  -oh=hostname1:/u01/app/oracle/product/12.2.0.1/dbhome_1

When you create a database deployment on Exadata Cloud Machine, you must choose from the following automatic backup configuration options:

– Remote Storage Only: uses remote NFS storage to store periodic full (RMAN level 0) backups and daily incremental backups, with a seven day cycle between full backups and an overall retention period of thirty days.
– None: no automatic backups are configured. Automatic backups cannot be configured later if you select the None option when you create a database deployment.

Useful links:

Oracle Exadata Cloud Machine Documentation
Oracle Database Exadata Cloud Machine Data Sheet
Features and Benefits of Oracle ExaCM
Oracle Database Exadata Cloud Machine Pricing
Creating an Exadata Cloud Machine Instance
Known Issues for Oracle Exadata Cloud Machine
Oracle SVP, Juan Loaiza, describes Oracle Database Exadata Cloud Machine

DBA Productivity and Oracle Database 12.2

In Cloud, DBA, Oracle database on February 9, 2017 at 15:15

“Technology can be our best friend, and technology can also be the biggest party pooper of our lives. It interrupts our own story, interrupts our ability to have a thought or a daydream, to imagine something wonderful, because we’re too busy bridging the walk from the cafeteria back to the office on the cell phone.” Steven Spielberg


The DBA profession was recently rated as #6 among the Best Technology Jobs. Good for all of us who are in this line of business. But notice the stress level: Above Average!

DBAs are often busy people. Is that good or bad? Is “busy the new stupid”?

Automation is not a luxury for DBAs; it is the way in which DBAs execute their job. Of course, there is one thing that cannot be automated, and that is quality, but the best DBAs automate almost everything.

Automating the database is a Win-Win for DBAs and DevOps. The mindset of the Enterprise DBA should be focused on harnessing the power of automation.

Data on DBA task automation shows which tasks are most automated and which are least. I still wonder why Automatic SQL Tuning is so underestimated: it was so powerful in helping the DBA team at Nokia…

Oracle Database 12cR2 is out. And 12.2 comes with yet another new set of automation-related database features:

– Oracle Data Guard now supports multiple failover targets in a fast-start failover configuration. Previous functionality allowed for only a single fast-start failover target. Multiple failover targets increase high availability by making an automatic failover more likely to occur if there is a primary outage.

– Oracle automatically synchronizes password files in Data Guard configurations: when the passwords of SYS, SYSDG, and so on, are changed, the password file at the primary database is updated and then the changes are propagated to all standby databases in the configuration.

– Online table move: nonpartitioned tables can be moved as an online operation without blocking any concurrent DML operations. A table move operation now also supports automatic index maintenance as part of the move.

– Automatic deployment of Oracle Data Guard: deployment is automatic for Oracle Data Guard physical replication between shards with Oracle Data Guard fast-start failover (automatic database failover): automatic database failover provides high availability for server, database, network, and site outages.

– Automatically set user tablespaces to read-only during upgrade: the new -T option for the parallel upgrade utility (catctl.pl) can be used to automatically set user tablespaces to read-only during an upgrade, and then back to read/write after the upgrade.

– The Oracle Trace File Analyzer (TFA) collector provides the option to automatically collect diagnostic information when TFA detects an incident.

– Oracle Data Guard support for Oracle Diagnostics Pack: this enables you to capture the performance data to the Automatic Workload Repository (AWR) for an Active Data Guard standby database and to run Automatic Database Diagnostic Monitor (ADDM) analysis on the AWR data.

– Automatic Workload Repository (AWR) support for pluggable databases: the AWR can be used in a PDB. This enables the capture and storage of performance data in the SYSAUX tablespace of the PDB.

– The new ENABLE_AUTOMATIC_MAINTENANCE_PDB initialization parameter can be used to enable or disable the running of automated maintenance tasks for all the pluggable databases (PDBs) in a multitenant container database (CDB) or for individual PDBs in a CDB.

– Automatic Data Optimization Support for In-Memory Column Store: Automatic Data Optimization (ADO) enables the automation of Information Lifecycle Management (ILM) tasks. The automated capability of ADO depends on the Heat Map feature that tracks access at the row level (aggregated to block-level statistics) and at the segment level.

– Automatic Provisioning of Kerberos Keytab for Oracle Databases: the new okcreate utility automates the registering of an Oracle database as a Kerberos service principal, creating a keytab for it, and securely copying the keytab to the database for use in Kerberos authentication.

– Role-Based Conditional Auditing: auditing for new users with the DBA role would begin automatically when they are granted the role.

– Automatic Locking of Inactive User Accounts: within a user profile, the new INACTIVE_ACCOUNT_TIME parameter controls the maximum time that an account can remain unused. The account is automatically locked if a log in does not occur in the specified number of days.
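
As an example from the list above, the new per-PDB maintenance switch can be toggled from the CDB root or inside an individual PDB. A minimal sketch (PDB1 is a hypothetical PDB name; whether SCOPE=BOTH works without a restart may depend on your exact patch level):

-- in CDB$ROOT: disable the automated maintenance tasks for all PDBs
ALTER SYSTEM SET ENABLE_AUTOMATIC_MAINTENANCE_PDB = FALSE SCOPE=BOTH;

-- in an individual PDB: re-enable them only for that PDB
ALTER SESSION SET CONTAINER = PDB1;
ALTER SYSTEM SET ENABLE_AUTOMATIC_MAINTENANCE_PDB = TRUE SCOPE=BOTH;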


Oracle Bare Metal Cloud in Action

In Cloud, DBA, IaaS on October 15, 2016 at 08:56

Bare Metal Cloud means non-virtualized physical compute servers, i.e., no hypervisor running to create virtual machines!


The aim of this blog post is to show you how simple it is to provision the Oracle BMC. But first, here are a few links on the subject that you may find useful:

– What’s Inside Oracle’s AWS-Killing Bare Metal Cloud by Craig Matsumoto
– Oracle’s infrastructure business focuses on bare metal to go after AWS by Blair Hanley Frank
– Does Oracle have a shot in the public cloud vs. Amazon and Microsoft? by Brandon Butler
– Virtual or Bare Metal Dedicated Cloud: Which Option is Right for You? by Ashar Baig
– Oracle IaaS Generation 2 by Marcel van den Berg
– Documentation: Oracle Bare Metal Cloud Services
– The Bare Metal Cloud Service on oracle.com

This is how you create a Bare Metal Cloud machine in the Oracle Infrastructure Cloud (it took me less than 15 minutes to go from scratch to connecting as root):

1. First page
2. Create the VCN (Virtual Cloud Network)
3. Launch the instance
4. In progress (provisioning)
5. Created (36 CPUs)
6. Details
7. Install MongoDB (a minimal sketch of this step follows below)
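
Here is a minimal sketch of what step 7 can look like on the Oracle Linux image, assuming you use MongoDB’s own yum repository (the repository URL and the 3.2 version are assumptions; check MongoDB’s documentation for current values):

# as root on the new instance: add the MongoDB yum repository
cat > /etc/yum.repos.d/mongodb-org-3.2.repo <<'EOF'
[mongodb-org-3.2]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.2/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.2.asc
EOF

# install the server and tools, then start the service
yum -y install mongodb-org
service mongod start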

The Oracle Bare Metal Cloud is based on totally new architectural concepts and modern hardware; it is easy to provision and use and, most importantly, reliable and secure. In addition, Oracle BMC is supposed to be 11 times faster and 20 percent cheaper than the fastest solution offered by the competition.

13th Oracle Open World in San Francisco

In Cloud, Consolidation, DBA, OOW, Oracle Engineered Systems on August 20, 2016 at 19:57

“Why are our days numbered and not, say, lettered?” Woody Allen

Early tall-building designers, fearing a fire on the 13th floor, or fearing tenants’ superstitions about the rumor, decided to omit having a 13th floor listed on their elevator numbering. This practice became common, and eventually found its way into American mainstream culture and building design. If hotel floors are lettered, would you mind staying at floor M?

Next month, thousands of Oracle professionals will come to San Francisco, where Oracle will organize Oracle OpenWorld for the 13th time in a row.

Having 13 presentations in honor of this jubilee is technically impossible, but here are the 3 that I will deliver next month in San Francisco:


1. My 13 DBA Mistakes in 13 Years [UGF1127]
Julian Dontcheff, Global Database Lead, Accenture
Sunday, Sep 18, 3:30 p.m. – 4:15 p.m. | Moscone South—102

Abstract: In this session, learn about the 13 biggest mistakes in my DBA career. Lessons learned. Be careful when you press Enter. Don’t do as I do, or people will make fun of you. It is sad and funny.

2. The Benefits and Simplicity of Oracle Cloud: Infrastructure as a Service [CON1126]
Julian Dontcheff, Global Database Lead, Accenture
Wednesday, Sep 21, 4:15 p.m. – 5:00 p.m. | Moscone South—309

Abstract: I will do a LIVE demo and try to create from scratch an Oracle compute instance in less than 13 minutes. Countdown stops after I am root in the virtual machine.

Infrastructure as a service (IaaS) is the fastest-growing area of public cloud computing. Oracle Cloud IaaS, with built-in security and high availability, offers elastic compute, networking, and storage to help any company quickly reach both value and productivity. This presentation covers the benefits of Oracle IaaS over other cloud providers, and shows how fast and easy it is to set up IaaS services in Oracle Cloud.

3. Lift and Shift onto Oracle Database Exadata Cloud Service Using Database Consolidation Advisor [CON1125]
Julian Dontcheff, Global Database Lead, Accenture
Thursday, Sep 22, 12:00 p.m. – 12:45 p.m. | Marriott Marquis—Salon 12

Abstract: Dedicated to OEM 13c’s newest feature: the Database Consolidation Workbench. I will give 13 database consolidation strategy tips for DBAs.

Oracle Database Exadata Cloud Service provides service instances that contain a full Oracle Database hosted on Oracle Exadata inside Oracle Cloud. This presentation is about best practices on how to migrate and consolidate Oracle databases onto Oracle Database Exadata Cloud Service. It covers the three phases—planning, migration, and validation—of Oracle’s database consolidation workbench that helps in end-to-end consolidation of databases and enables consolidation of more databases on the same Oracle Exadata system, both on premises and in the public cloud.

If you are reading this post, I welcome you to join my talks at OpenWorld. Thank you in advance. Please also join the other presentations from our team, the Accenture Enkitec Group. Here are the remaining talks:

Who Wins the Oracle Database in the Cloud Bake-Off? [CON5672]
Christopher Pasternak, Managing Director, Accenture
Robby Robertson, Sr. Manager, Accenture
Richard Miners, Senior Infrastructure Principal, Accenture
Tuesday, Sep 20, 12:15 p.m. – 1:00 p.m. | Moscone South—309

Maximizing Oracle Exadata Database Machine Reliability with Oracle EXAchk [CON1142]
Andy Colvin, Infrastructure Principal Director, Accenture
Thursday, Sep 22, 10:45 a.m. – 11:30 a.m. | Moscone South—302

Interfacing Raspberry Pi with Oracle Application Express [UGF5667]
Christoph Ruepprich, Programmer / Developer, Accenture Enkitec
Sunday, Sep 18, 2:15 p.m. – 3:00 p.m. | Moscone South—304

SQLd360: SQL Tuning Diagnostics Made Easy [UGF6168]
Mauro Pagano, Infrastructure Senior Principal, Accenture Enkitec Group
Sunday, Sep 18, 2:15 p.m. – 3:00 p.m. | Moscone South—302

Oracle GoldenGate and Baseball: Five Fundamentals Before Jumping to the Cloud [UGF5120]
Bobby Curtis, Infrastructure Principal, Accenture Enkitec Group
Sunday, Sep 18, 8:00 a.m. – 8:45 a.m. | Moscone West—3022

The Best Oracle Database 12c New Features for Developers and DBAs [UGF2028]
Alex Zaballa, Senior Oracle Database Administrator, Accenture Enkitec Group
Sunday, Sep 18, 8:00 a.m. – 8:45 a.m. | Moscone West—2010

Oracle Multitenant: Customer Panel [CON6563]
Randall Wilcox, Manager / Senior Manager, SAS Institute Inc.
Michael Sorrels, Sr. VP, Database Technologies, Regions Bank
Patrick Wheeler, Senior Director, Product Management, Oracle Database, Oracle
Andy Colvin, Infrastructure Principal Director, Accenture
Wednesday, Sep 21, 1:30 p.m. – 2:15 p.m. | Moscone South—301

Leveraging Oracle Database 12c Release 2 Multitenant Features [CON3075]
Kai Yu, Senior Principal Engineer, Oracle ACE Director, Dell, Inc.
Anuj Mohan, Technical Account Manager, Data Intensity
Andy Colvin, Infrastructure Principal Director, Accenture
James Czuprynski, Strategic Solutions Architect, OnX USA LLC
Deiby Gómez Robles, Oracle Database Consultant, Nuvola, S.A.
Thursday, Sep 22, 12:00 p.m. – 12:45 p.m. | Park Central—Concordia

And here are the sessions where several AEG ACE Directors (including me) will present their points of view on new Oracle features, one after another, in just a few minutes each:

EOUC Database ACES Share Their Favorite Database Things: Part I [UGF2630]
Debra Lilley, VP Certus Cloud Services, Certus Solutions Consulting Services Ltd
Ralf Koelling, Senior Consultant, CGI Deutschland Ltd. & Co. KG
David Kurtz, Consultant, Accenture Enkitec Group
Sunday, Sep 18, 1:00 p.m. – 1:45 p.m. | Moscone South—102

EOUC Database ACES Share Their Favorite Database Things: Part II [UGF2632]
Debra Lilley, VP Certus Cloud Services, Certus Solutions Consulting Services Ltd
Bjoern Rost, Principal Consultant, The Pythian Group Inc.
Carl Dudley, Database Administrator, Tradba
Sunday, Sep 18, 2:15 p.m. – 3:00 p.m. | Moscone South—102


P.S. I wonder how many sessions will be delivered altogether by the Enkitec group?

What’s New for Oracle Compute Cloud Service (IaaS)

In Cloud, Consolidation, DBA, IaaS on May 1, 2016 at 11:15

“Behind every cloud is another cloud.” – Judy Garland

“Whenever you find yourself on the side of the majority, it is time to pause and reflect.” – Mark Twain

Oracle Cloud Infrastructure allows large businesses and corporations to run their workloads, replicate their networks, and back up their data and databases in the cloud. And I would say in a much easier and more efficient way than with any other provider!

Oracle provides a free software appliance for accessing cloud storage on-premise. The Oracle Storage Cloud Software Appliance is offered free of charge. You do not get this from Amazon. And from Azure, you do not get as much memory on a VM for a core as you get from Oracle. In addition to the hourly metered service, Oracle also provides a non-metered compute capacity with a monthly subscription so that you can provision resources up to twice the subscribed capacity. This is a way to control the budget through a predictable monthly fee rather than the less controllable pure pay-as-you-go model.


PCMag.com recently provided an excellent overview of the Oracle Cloud Infrastructure.

Pat Shuff’s Blog describes in detail the steps for creating an Oracle Linux service on the Oracle Compute Cloud. Glynn Foster’s Blog shows how to create Oracle Solaris VMs in the Oracle Cloud Compute Service.

Creating an Oracle Compute Service instance took me (the first time) less than 10 minutes. Accessing it was immediate. This is simple, fast, easy and, most of all, I had no issues whatsoever. OK, I did not find lshw, but I installed it in a minute:

yum -y install lshw*
...
Dependency Updated:
  dbus-libs.x86_64 1:1.2.24-8.0.1.el6_6

Complete!

VPN for Engineered Systems: if you need a VPN between Oracle and your own infrastructure, then go to the My Oracle Support Note 2056914.1 and follow its instructions.


Creating an Oracle Storage Volume takes about one minute! Even less if you have done it a few times.


[opc@f24074 ~]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda2      9.4G  1.9G  7.0G  22% /
tmpfs           7.4G     0  7.4G   0% /dev/shm
/dev/xvda1      479M   81M  369M  18% /boot

Note: before connecting to the Oracle VM from any client, remember to add the IP address(es) to the Security IP list and then update the security rules (add a new one).
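
Once the security rule is in place, connecting is the usual SSH flow; a minimal sketch, assuming the default opc user, the private key matching the public key uploaded at instance creation, and the public IP shown on the Instances page (the IP here is a placeholder):

ssh -i ~/.ssh/id_rsa opc@129.144.x.x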

A few useful links:

Oracle Infrastructure as a Service (IaaS) Training Content
Using Oracle Compute Cloud Service (IaaS)
Accessing an Oracle Linux Instance Using SSH
Frequently Asked Questions for Oracle Compute Cloud Service
Troubleshooting Oracle Compute Cloud Service
Best Practices for Using Oracle Compute Cloud Service
Siebel CRM in Oracle Public Cloud IAAS
Compute Cloud Pricing / Storage Cloud Pricing / Network Cloud Service Pricing
Oracle Cloud Services Delivered in Your Data Center / Cloud Machine Documentation

Oracle Cloud Machine Operations: Roles and Responsibilities


What’s New for Oracle Compute Cloud Service (IaaS):

– Both metered and non-metered options of Oracle Compute Cloud Service are now generally available.
– You can no longer subscribe for 50 or 100 OCPU configurations. Instead, you can specify the required number of 1 OCPU subscriptions.
– If you have a non-metered subscription, you can now provision resources up to twice the subscribed capacity. The additional usage will be charged per hour and billed monthly.
– Oracle provides images for Microsoft Windows Server 2012 R2 Standard Edition.
– Oracle provides images for Oracle Solaris 11.3.
– You can clone storage volumes by taking a snapshot of a storage volume and using it to create new storage volumes.
– You can clone an instance by taking a snapshot and using the resulting image to launch new instances.
– You can increase the size of a storage volume, even when it’s attached to an instance.
– You can now find the public and private IP addresses of each instance on the Instances page. Earlier, this information was displayed only on the instance details page of each instance.
– The CLI tool for uploading custom images to Oracle Storage Cloud Service has been updated to support various operating systems. The tool has also been renamed to uploadcli. Earlier it was called upload-img.

For more details, check What’s New for Oracle Compute Cloud Service (IaaS).


And finally, do you wonder what the underlying hardware is?

[root@f24074 ~]# lshw -short
H/W path    Device  Class      Description
==========================================
                    system     HVM domU
/0                  bus        Motherboard
/0/0                memory     96KiB BIOS
/0/1                processor  Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz
/0/2                processor  CPU
/0/3                processor  CPU
/0/4                processor  CPU
/0/5                memory     System Memory
/0/5/0              memory     15GiB DIMM RAM
/0/5/1              memory     15GiB DIMM RAM
/0/6                memory     96KiB BIOS
/0/7                processor  CPU
/0/8                processor  CPU
/0/9                processor  CPU
/0/a                processor  CPU
/0/b                memory     System Memory
/0/c                memory
/0/d                memory
/0/100              bridge     440FX - 82441FX PMC [Natoma]
/0/100/1            bridge     82371SB PIIX3 ISA [Natoma/Triton II]
/0/100/1.1          storage    82371SB PIIX3 IDE [Natoma/Triton II]
/0/100/1.3          bridge     82371AB/EB/MB PIIX4 ACPI
/0/100/2            display    GD 5446
/0/100/3            generic    Xen Platform Device
/1          eth0    network    Ethernet interface

Oracle In-Memory for SAP Databases

In DBA, Oracle database, SAP HANA on March 19, 2016 at 14:02

The Japanese proverb “HANA YORI DANGO” means literally “Dumplings rather than flowers” meaning “to prefer substance over style, as in to prefer to be given functional, useful items (such as dumplings) instead of merely decorative items (such as flowers)”.

HANA is nowadays a very stylish and trendy concept among database professionals, but I would follow the Japanese saying and rather go with substance.


The two different points of view can be easily found on the internet:

SAP: What Oracle won’t tell you about SAP HANA
Oracle: Oracle Database In-Memory vs SAP HANA benchmark results

The following “Facts vs Claims” will most likely perplex and bewilder everyone. Just have a look and decide for yourself. Let us not go into details but consider this statement:

FACT: Oracle makes big data a bigger problem with 4 copies of the data (3 in-memory and 1 on disk).

I have been trying for quite some time, with no success, to figure out the 3 copies of in-memory data that Oracle supposedly creates. Perhaps this is a riddle?

A year ago, on March 31st 2015, SAP was certified to run on Oracle Database 12.1.0.2. Rather odd, one would say, as in the past SAP would wait for the 2nd release of the Oracle database before certifying it for SAP. At least it is a sign that 12.1.0.2 is mature enough to be used for SAP production systems.

As of June 30th 2015, the Oracle Database In-Memory Option is supported and certified for SAP environments for all SAP products based on SAP NetWeaver 7.x on Unix/Linux, Windows and Oracle Engineered Systems platforms running Oracle Database 12c – in single instance and Oracle Real Application Clusters deployments.

In-Memory is such a great feature – but (for both Oracle and SAP) the challenge you’ll often face comes down to these 2 tricky questions:

1. Which of your tables, partitions, and even columns should you mark for In-Memory column store availability?
2. What should be the value of the SGA, PGA and IMCS size? That is, how much memory do I need (global_allocation_limit in SAP and inmemory_size for Oracle)?

Here you have a very strong advantage of Oracle over SAP. The answers are now easier to find with the new In-Memory Advisor which is available via download from MOS Note:1965343.1. Use of In-Memory Advisor for databases where the IM option has not yet been deployed does NOT require an Oracle Tuning Pack license.

For SAP, the Quick Sizer is used for a new implementation of Business Suite powered by SAP HANA. Here is How to Properly Size an SAP In-Memory Database. There are two different approaches for performing the sizing: user-based sizing, which determines sizing requirements based on the number of users in the system, and throughput-based sizing, which bases the sizing on the items being processed. The sizing rules for SAP Business Suite on SAP HANA are outlined in SAP Note 1793345.

Memory Management in the Column Store: SAP HANA aims to keep all relevant data in memory. Standard row tables are loaded into memory when the database is started and remain there as long as it is running. They are not unloaded. Column tables, on the other hand, are loaded on demand, column by column, when they are first accessed. With Oracle, only the tables (or columns/partitions of a table) that need to be in memory are placed in memory, thus avoiding a waste of memory and giving the ability to do more with less.
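
As a hedged illustration of that selective approach (the table names are placeholders in the customer Z namespace; remember that for SAP tables only whole tables or partitions may be populated, never individual columns):

-- populate one table in the In-Memory column store with high priority
ALTER TABLE sapsr3.zsales_hist INMEMORY MEMCOMPRESS FOR QUERY LOW PRIORITY HIGH;
-- and keep another one out of it
ALTER TABLE sapsr3.zaudit_log NO INMEMORY;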

The Oracle approach is much more sophisticated but there are 10 major issues DBAs should pay attention to when using the In-Memory option with SAP databases:

1. To use Oracle Database In-Memory with SAP NetWeaver the following technical and business prerequisites must be met:
– Oracle Database 12c Release 1 Patch Set 1 (12.1.0.2)
– UNIX/Linux: Oracle Database SAP Bundle Patch June 2015 (SAP1202P_1506) or newer. Strongly recommended Oracle Database SAP Bundle Patch August 2015 (SAP1202P_1508)
– Windows: Windows DB Bundle Patch 12.1.0.2.6 or newer. Strongly Recommended Windows DB Bundle Patch 12.1.0.2.8
– SAP NetWeaver 7.x Version with minimum SAP Kernel 7.21_EXT

2. Indexes. It is not allowed to make any changes to the standard index design of the SAP installations. However, customer specific index design can be changed. That is, all indexes which belong to the Y or Z namespaces can be changed.

3. Database Buffer Cache. It is not allowed to reduce the size of the database buffer cache and assign the memory to the In-Memory column store.

4. SAP Dictionary Support. Full SAP Dictionary (DDIC) Support of in-memory attributes at the table level starts with the support package SAP_BASIS 7.40 SP12.

5. Individual Columns. It is not supported to load individual columns of an SAP table or partition into the IM column store. It is also not supported to exclude individual columns from an SAP table or partition from the IM column store. An SAP table is a database table used by an SAP application.

6. The database where you want to run In-Memory Advisor must have XDB component installed as the In-Memory Advisor relies on functions provided by XDB. In 12c XDB is installed by default.

7. The In-Memory Advisor is contained in the SAP Bundle Patch as patch 21231656.

8. For SAP applications it is strongly recommended to use a reasonable time window of collected AWR data. At least 2-3 days of AWR data should be used for the In-Memory Advisor. It absolutely makes no sense to use data from a 1-2 hour time window.

9. Use these In-Memory Advisor Parameter Name and Value:

– WRITE_DISADVANTAGE_FACTOR = 0.7
– LOB_BENEFIT_REDUCTION = 1.2
– MIN_INMEMORY_OBJECT_SIZE = 1024000
– READ_BENEFIT_FACTOR = 2

10. For SAP systems therefore the following init.ora parameters should be used:

inmemory_max_populate_servers = 4
inmemory_clause_default = “PRIORITY HIGH”
inmemory_size should be set to the value (+ ~20% for metadata and journals) used in the generate recommendation step of the In-Memory Advisor, or set to the value calculated by the SAP_IM_ADV package for all tables/partitions to be loaded into the In-Memory column store.

Using RAT shows the benefit of the In-Memory option. Compare the DB time of the 2 replays after going to Exadata with the In-Memory option. DB time is the best and possibly only metric to compare captures with replays.


SAP Notes: 2178980 – Using Oracle Database In-Memory with SAP NetWeaver
MOS Notes: 1292089.1 – Master Note for Oracle XML Database (XDB) Install / Deinstall & 1965343.1 – Oracle Database In-Memory Advisor
Using SAP NetWeaver with Oracle Database In-Memory

Bottom line: if my SAP application runs on top of an Oracle database, I would rather put it on Exadata with the In-Memory option enabled than move it to SAP HANA.