If you have a tenancy in OCI, you probably know that cost can in most cases be controlled and managed by shutting down the nodes and the databases when they are not being used (this can even be automated), but there is more to it than that.
The main OCI console page contains the basic information about your tenancy (on the far right): name, usage, subscription number, used/remaining credits, etc.:
In terms of operational availability, I am usually interested in the Frankfurt region where my main tenancy resides:
Under the health dashboard (worth checking on a daily basis) there is an “Analyze costs” button which takes you to the “Cost Management” page. It contains links to 4 different sub-pages:
Cost Analysis
Cost and Usage Report
Budget
Scheduled Reports
You can choose a period, here is what my usage report looks like for the whole month of May 2023:
As you can observe, the graphics are extremely clear: you can see, day by day, what you have paid for and how much. Just below the cost graph, you can see the details for the past days:
Cost and usage reports are CSV files generated daily that show usage data for each resource in your tenancy. The CSV files are stored in an object storage bucket that is accessible using a cross-tenancy policy.
The most interesting part is perhaps viewing the recommendations under “Cost saving opportunities”:
There are 3 types/categories of recommendations (see below), and if you need any security recommendations, just use Cloud Guard.
Under “Global recommendations”, you can disable the Cloud Advisor and modify your recommendation profile to customize the logic that Cloud Advisor uses to make recommendations:
Cloud Advisor is not just easy to use; it is also the fastest way to manage and control your tenancy from a cost-savings point of view.
If you are wondering why the “Estimated savings” column on the Recommendations page contains no values: I am using the smallest possible configuration and starting the VMs and the databases only when I need them. I have a pretty small tenancy and use only a few services.
“There are two types of encryption: one that will prevent your sister from reading your diary and one that will prevent your government” – Bruce Schneier
What is Hybrid Tablespace Encryption? With the latest release updates of Oracle 19c (19.16 and above; 19.18 is the latest as of February 2023), it is now possible to decrypt redo operations in a hybrid cloud DR configuration where the cloud database is encrypted with TDE (Transparent Data Encryption) and the on-prem database is not.
We are seeing more and more Oracle on-prem databases using Data Guard with the standby database in the cloud, mostly in OCI. When the on-prem database is not using TDE and the cloud one is, it is important to ensure that redo data is encrypted during transport. The Advanced Security Option plays a key role here, as this new feature is controlled via the (dynamic) TABLESPACE_ENCRYPTION initialization parameter.
V$PARAMETER_VALID_VALUES shows the accepted values of all database init.ora parameters; here is what we have for tablespace encryption (viewable even in ADB):
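A minimal query sketch against the view:

-- list the accepted values of TABLESPACE_ENCRYPTION
select value, isdefault
  from v$parameter_valid_values
 where lower(name) = 'tablespace_encryption';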
In a multitenant environment, remember that you need to change the value of TABLESPACE_ENCRYPTION in the root container. You cannot set TABLESPACE_ENCRYPTION in a PDB.
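As a minimal sketch, setting the parameter in the root container could look like this (DECRYPT_ONLY is just an example value, matching the on-prem-without-ASO case described below):

-- must be set in the root container, not in a PDB
alter session set container = CDB$ROOT;
alter system set tablespace_encryption = 'DECRYPT_ONLY' scope = both;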
Here is how to set up the encryption depending on (1) whether or not you have the Advanced Security Option and (2) how you would like to encrypt:
AUTO_ENABLE is the default for cloud databases. It encrypts all new tablespaces if you have the Oracle Advanced Security license.
If an existing tablespace is not encrypted, then the database writes a warning to the alert log.
Encrypted tablespaces cannot be converted to unencrypted tablespaces.
Because all tablespaces must be encrypted in OCI, setting TABLESPACE_ENCRYPTION to DECRYPT_ONLY or MANUAL_ENABLE will be ignored in the cloud and the database will behave as if the setting is AUTO_ENABLE.
In the primary database, this setting encrypts the new tablespace with an encryption key.
In a standby database, this setting adds a key to the new tablespace and encrypts all blocks.
DECRYPT_ONLY prevents new tablespaces from being encrypted. Use it in your on-prem database if you do not have the Advanced Security Option.
If an existing tablespace is encrypted, then the database writes a warning to the alert log.
Unencrypted tablespaces cannot be changed to encrypted tablespaces!
MANUAL_ENABLE enables you to selectively encrypt tablespaces if the database is licensed for Oracle Advanced Security. This is the default for both on-prem primary and standby databases.
Note that (1) for RAC, set TABLESPACE_ENCRYPTION to the same value for all instances of the primary database, and for all instances of the standby database and (2) modifying TABLESPACE_ENCRYPTION does not affect the master key rotation operations.
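For RAC, a minimal sketch of keeping the value identical across all instances (assuming an spfile is in use):

-- apply the same value to every instance of the database
alter system set tablespace_encryption = 'MANUAL_ENABLE' sid = '*' scope = both;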
On the second topic, Oracle Cloud Guard is useful when you want to maintain a good security posture by detecting weak security configurations and activities that can indicate cloud security risks. Cloud Guard detects security problems within a customer tenancy by ingesting audit and configuration data about resources in each region, processing it based on detector rules, and correlating the problems at the reporting region. Identified problems are used to produce dashboards and metrics and may also trigger one or more provided responders to help resolve the problem.
Here are the key features of Oracle Cloud Guard:
Detects misconfigured resources and identifies risky activities.
Delivers out-of-the-box recipes that apply security best practices with a few clicks.
Provides automated responders to trigger corrective actions.
Gives a global view of customer’s overall security posture of their OCI tenancy, including security and risk scores and activity maps.
Enabling Cloud Guard is simple. The first step is to select/define the policies and then provide some basic information:
The OCI console interface changes rather often and new features are added on a monthly basis. Some features are very intuitive, while for others one needs to consult the OCI documentation or MOS (links included below).
Although I use the OCI console and dashboard on a daily basis, there are still a few actions that are performed only sporadically or even just once.
Here are 10 tips on how to manage certainly not so common operations within OCI:
Tip 1. If you need to change the Tenancy Administrator (Doc ID 2869402.1), you need to do the following 2 things (it is not possible to do this online via the GUI):
– Submit a MOS SR using the Customer Support CSI with the following mandatory information:
Cloud Account (Tenancy) Name:
Current Tenancy Admin: <User name and email>
New/Desired Tenancy Admin: <User name and email>
Order Number and/or Subscription ID:
and
– Approval email from a VP/CIO contact; the attachment must be in email/message format (.msg, .EML, .pdf), not a screenshot.
Tip 2. If you need to increase your limits in OCI (Doc ID 2434814.1), the instructions in the MOS note are not very straightforward; here is how to do it:
Hit the Help button (the question mark) and then choose “Visit the Support Center”:
Then click on the blue “Create Support Request” button. From “Limit Increase”, select the category and then the resource:
For example, if you select FastConnect, you have the following options:
Tip 3. If you need to change the bandwidth for FastConnect (Doc ID 2922934.1), you need to complete the following steps:
Log into the OCI console, select Networking and then FastConnect
Click Edit to update the provisioned bandwidth value
Select the provisioned bandwidth value
Click Save Changes
Note that you have 2 options when selecting the provisioned bandwidth:
The Lifecycle state will be updated to Provisioned once saved.
Here are some additional OCI tips which I find interesting and important:
Tip 4. You might want to find the private IP address consumption of the OCI Load Balancer (Doc ID 2850625.1)
Tip 5. How to change the default DATA_PUMP_DIR to a new directory at PDB level for an OCI DB system (Doc ID 2630666.1)
Tip 6. How to use Data Pump to import from File Systems Storage in OCI DBCS (Doc ID 2511714.1)
Tip 7. How to provide access roles to users in the new unified OCI console (Doc ID 2590671.1)
Tip 8. How to add a new SSH key to an existing DB system instance (Doc ID 2687638.1)
Tip 10. How to change the idle timeout for a network Load Balancer (Doc ID 2921943.1)
Last note: it is not supported by Oracle to change the software edition of a Database Cloud instance by, say, recompiling the binaries. You have to recreate the instance from a backup of the existing instance and choose the appropriate software edition. If you need an edition change with minimum downtime, or need to migrate from single instance to RAC, you can use the Zero Downtime Migration utility.
Choosing a partitioning strategy is often far from straightforward. Range and interval partitioning are useful when organizing similar data types, especially date and time data. Hash partitioning is useful for randomly distributing data across partitions based on a hashing algorithm, rather than grouping similar data. I remember how we managed to boost performance in a huge Siebel database by hash partitioning the big indexes. List partitioning is useful to explicitly map rows to partitions based on a set of values – for example, the states in the USA.
But this is a tedious process, and when doing it, regardless of whether you are a DBA or a developer, functional knowledge of the application and of the SQL using the tables and indexes is of paramount importance.
When using the Oracle Autonomous Database, we would rather have things automated – it goes without saying. So, ADB offers a handy package called DBMS_AUTO_PARTITION.
First, we need to configure auto partitioning with the option we need. We use the value IMPLEMENT (see the screenshot below), but if you want to enable recommendations without implementing them automatically, then use REPORT ONLY instead (which is the default value).
You can always disable it using the OFF value.
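Here is a minimal sketch of all three modes via the CONFIGURE procedure (values as documented for DBMS_AUTO_PARTITION):

-- implement recommendations automatically
exec DBMS_AUTO_PARTITION.CONFIGURE('AUTO_PARTITION_MODE', 'IMPLEMENT');
-- only generate recommendations (the default)
exec DBMS_AUTO_PARTITION.CONFIGURE('AUTO_PARTITION_MODE', 'REPORT ONLY');
-- disable automatic partitioning
exec DBMS_AUTO_PARTITION.CONFIGURE('AUTO_PARTITION_MODE', 'OFF');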
Automatic partitioning uses a single-column partition key combined with single-level partitioning. Automatic partitioning does not support more complex partitioning strategies such as multi-column partitioned tables or composite partitioning.
Automatic partitioning chooses from the following 3 partition methods:
AUTOMATIC INTERVAL: This choice is best suited for ranges of partition key values
LIST AUTOMATIC: This partitioning method applies to distinct partition key values
HASH: Applies partitioning on the partition key’s hash values
Next, you can use the AUTO_PARTITION_SCHEMA and AUTO_PARTITION_TABLE settings to specify schemas and tables considered (or not considered) for automatic partitioning. Note that when automatic partitioning is enabled, all schemas and tables (in user-managed schemas) are considered for automatic partitioning if both the inclusion and exclusion lists are empty.
Here is an example where I prefer to manually decide on the partitioning strategy in the JULIAN schema – so I want to exclude it from auto partitioning:
If I wanted to do the opposite, that is, add only the schema JULIAN to the auto partitioning list, I would have replaced FALSE with TRUE in the screenshot above. As soon as the inclusion list is no longer empty, only schemas in the inclusion list are considered.
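For reference, a sketch of managing the lists with CONFIGURE, where the third argument controls inclusion (TRUE) versus exclusion (FALSE):

-- exclude schema JULIAN from auto partitioning
exec DBMS_AUTO_PARTITION.CONFIGURE('AUTO_PARTITION_SCHEMA', 'JULIAN', FALSE);
-- or the opposite: add JULIAN to the inclusion list
exec DBMS_AUTO_PARTITION.CONFIGURE('AUTO_PARTITION_SCHEMA', 'JULIAN', TRUE);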
The analysis and verification of automatic partitioning using RECOMMEND_PARTITION_METHOD is a potentially resource-intensive and long-running operation, so avoid doing it in a production database; I would rather use a cloned database for automatic partitioning. The recommendations can then be implemented with the APPLY_RECOMMENDATION procedure.
It is good to know that the Autonomous Database automatically collects workload information over time in an internally managed SQL workload repository maintained in the SQL tuning set SYS_AUTO_STS. Recommendations generated by the RECOMMEND_PARTITION_METHOD function have a time limit, specified by the TIME_LIMIT parameter, with a default of 1 day. If you are analyzing a large system with many candidate tables, a single invocation may not generate a recommendation for all tables, and you will have to invoke the function repeatedly to generate recommendations for additional tables.
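A hedged sketch of generating and applying a recommendation for a single table (JULIAN.SALES is illustrative, and the parameter names follow my reading of the package documentation):

declare
  reco_id raw(16);
begin
  -- analyze one table with a 30-minute time limit
  reco_id := DBMS_AUTO_PARTITION.RECOMMEND_PARTITION_METHOD(
               table_owner => 'JULIAN',
               table_name  => 'SALES',
               time_limit  => interval '30' minute);
  -- implement the generated recommendation
  DBMS_AUTO_PARTITION.APPLY_RECOMMENDATION(reco_id);
end;
/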
Here are the data dictionary views for automatic partitioning: DBA_AUTO_PARTITION_CONFIG and DBA_AUTO_PARTITION_RECOMMENDATIONS.
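A quick way to inspect both:

-- current auto partitioning settings
select * from dba_auto_partition_config;
-- recommendations generated so far
select * from dba_auto_partition_recommendations;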
It is important to understand that unlike automatic indexing, automatic partitioning does not run periodically as a background task. Automatic partitioning only runs when you invoke it by using the DBMS_AUTO_PARTITION.RECOMMEND_PARTITION_METHOD function.
Not all tables are candidates for auto partitioning. The function VALIDATE_CANDIDATE_TABLE checks if a given table is a valid candidate for auto partitioning in Autonomous Database. There are several conditions for a table to be a valid candidate; here are a few:
Table passes inclusion and exclusion tests specified by AUTO_PARTITION_SCHEMA and AUTO_PARTITION_TABLE configuration parameters
Table has up-to-date statistics
Table is at least 64 GB
Table has 5 or more queries in the SQL tuning set that scanned the table
Table does not contain a LONG data type column.
As you can see below, my SALES table is not a candidate as its statistics are stale:
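The check itself can be run with a sketch like this (JULIAN.SALES is from my example; the function returns the verdict as a string):

select DBMS_AUTO_PARTITION.VALIDATE_CANDIDATE_TABLE(
         table_owner => 'JULIAN',
         table_name  => 'SALES')
  from dual;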
Last, here are a few underscore/hidden parameters related to auto partitioning (from a 21c database):
_autoptn_costing: DBMS_AUTO_PARTITION is compiling query for cost estimates
_autoptn_flags: DBMS_AUTO_PARTITION flags
_autoptn_translate_table_name: DBMS_AUTO_PARTITION table name translation
_autoptn_workload_id: DBMS_AUTO_PARTITION workload id
The package itself, though, is missing in 21c:
SQL> desc DBMS_AUTO_PARTITION
ERROR:
ORA-04043: object DBMS_AUTO_PARTITION does not exist
If you are also interested in auto partitioning the application, I would recommend a rather scientific paper called Automatic Partitioning of Database Applications. But it is built on the assumption that stored procedures have several disadvantages, and not everyone would agree with that.
Do not save what is left after spending, but spend what is left after saving – Warren Buffett
Tracking and managing usage and cost in the cloud is often neglected, as it is seen as complex and difficult. Here is what we can do in Oracle Cloud Infrastructure. On the far right of the OCI console screen, under Account Center, we now have Billing and Cost Management savings. It shows how many savings have already been implemented and how many are still pending:
Track and Optimize Your Spending by Using Cost Analysis
Implement a Process to Terminate or STOP Unused Resources
Evaluate What Compute Shape Fits Your Workload
Become Familiar with Cloud Advisor
If you go under recommendations (under Cloud Advisor), the screen will list all types and you can also filter on a category:
As I have several databases on OCI, both ADB and DB systems, both Oracle and MySQL, spread across a couple of regions, it is rather important for me to manage the cost, as I use the databases on a daily basis and often do not stop them during the week.
As you can see above, my pending recommendation is about defining a lifecycle policy rule that automatically moves Object Storage data to lower-cost tiers when possible – meaning, in practice, archive storage.
It is worth going through all the recommendations: although some of them are rather obvious, they might not always come to mind.
Viewing Autonomous Database usage is straightforward, as you pay only for storage when the database is stopped (the charges below are for February):
but for MySQL, especially if using HeatWave, the charges are slightly different:
The following question in the Oracle Groundbreakers Developer Community forum made me investigate on how to change the session timeout in the Oracle Cloud console. Have a look, it is still for some reason unanswered:
Those who use Oracle Cloud Infrastructure on a daily basis know that the default session timeout is 480 minutes, although in my case it is an hour – so often, after you switch to the OCI tab, you see the following screen:
Here is how you can change it to a longer period, with 32767 minutes being the maximum allowed.
Step 1: Open the Service User Console
Step 2: Open the Identity Cloud Service Admin Console:
Step 3: From the Dashboard, choose “Settings” and from there “Session Settings”:
Step 4: Set the “Session Duration” to a longer period of time:
Step 5: Save the settings with the new value; I chose the maximum, which is 32767 minutes:
After being asked all kinds of questions on Oracle Database licensing on a daily basis, as time goes by you sort of understand it. Sort of, because the Oracle Database Licensing Guide is 602 pages long and gets updated often. The latest one is from April 2021 – and now it is mid-May.
Moreover, you have perhaps seen all the Oracle certifications, but if you search for one on licensing you will find what I did – there isn’t one.
What I am trying to do now, is to summarize Database Licensing in a short blog post – this might be helpful for many to at least understand the concept.
There are 3 types of licenses for the Oracle Database: Packs, Options and Features and 9 Oracle Database Offerings: Standard Edition 2, Enterprise Edition, Oracle Database Appliance, Exadata, Exadata Cloud Service and Cloud@Customer, Database Cloud Service Standard Edition, Database Cloud Service Enterprise Edition, Database Cloud Service Enterprise Edition – High Performance and Database Cloud Service Enterprise Edition – Extreme Performance (you can see their abbreviations in the table below).
1. Packs: there are 5 different packs for the Oracle Database:
2. Options: there are 15 database options for the Oracle Database:
Oracle Active Data Guard
Oracle Advanced Compression
Oracle Advanced Security
Oracle Database In-Memory
Oracle Database Vault
Oracle Label Security
Oracle Machine Learning
Oracle Multitenant
Oracle On-Line Analytical Processing (OLAP)
Oracle Partitioning
Oracle RAC One Node
Oracle Real Application Clusters (Oracle RAC)
Oracle Real Application Testing
Oracle Spatial and Graph
Oracle TimesTen Application-Tier Database Cache
Here are the ones related to Consolidation, HA, Manageability and Performance:
3. Features: there are 131 features that can be licensed with the Oracle Database, of which 105 are for EE and 123 are for Exadata. As you can see, there are 3 features available for Exadata, ExaCS and ExaC@C falling under the functional category of Autonomous:
If you would like to drill down in detail, use the Database Feature and Licensing tool which is available online without the need to register or have an Oracle account.
You might think that this is way too much, and perhaps it is, but the situation is very similar with other database vendors. Let us look at GCP and Azure, for instance:
Relational: Bare Metal Solution for Oracle workloads; Cloud SQL (managed MySQL, PostgreSQL and SQL Server); Cloud Spanner and BigQuery
Key value: Cloud Bigtable
Document: Firestore and Firebase Realtime Database
In-memory: Memorystore
NoSQL: MongoDB Atlas and managed offerings from the open source partner network, including MongoDB, Datastax, Redis Labs, and Neo4j
Azure SQL Database
Azure SQL Managed Instance
SQL Server on Virtual Machines
Azure Database for PostgreSQL
Azure Database for MySQL
Azure Database for MariaDB
Azure Cosmos DB
Azure Cache for Redis
Azure Database Migration Service
Azure Managed Instance for Apache Cassandra
After all, being an expert in database licensing is a skill of its own!
Most likely, you will first get an email from Oracle to let you know that the patches are already available:
How to apply the patch? One-off patches (now called interim patches) can be applied via the Console, the API, or even manually. To apply an interim patch manually, you can use the OPatch utility. The detailed steps are provided in the Applying one-off patches on Oracle Database 21c documentation. The patches can be applied in any order.
Here is how simple and easy it is:
1. For the database on which you want to apply the patches, just click its name to display details and under Resources, click Updates:
2. Click on “Apply a one-off patch“:
3. Then, in the Apply one-off patch dialog, enter the patch numbers. Use a comma-separated list to enter more than one patch; I applied them one after the other. Paste the patch number and then click Apply.
While the patch is being applied, the database’s status displays as Updating:
A work request is created for the patching operation allowing us to monitor the progress of the operation.
If the operation completes successfully, the database’s status changes to Available:
In a couple of recent database migration cases, one of the main questions raised was how to figure out all outgoing and incoming database links, as they have to be modified after massive migrations.
Outgoing database links are simple: DBA_DB_LINKS describes all database links in the database, and this view has been part of the database (at least) since 7.3.4.
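A minimal query sketch for the outgoing side:

-- all outgoing database links in the database
select owner, db_link, username, host, created
  from dba_db_links
 order by owner, db_link;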
The tricky part is how to find all incoming database links – at least before 12.2, where a new view called DBA_DB_LINK_SOURCES shows information about the source databases that opened database links into the local database.
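On 12.2 and later, the incoming side is then as simple as:

-- source databases that have opened links into this database (12.2+)
select * from dba_db_link_sources;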
So, how about the databases that are version 12.1 and below?
Option 1: Bruno suggests to “start from the listener logfile; with some “awk/sed/vi” work it should be possible to extract the list of “origins” of the connections… -> From this list, identify the database servers -> Search database links on relevant databases on these servers”.
This might work, but it could be rather tedious if there are hundreds of different servers.
Option 2: Brian suggests “to query V$SESSION to see active sessions from the other database server. Hint…look at the MACHINE column to see if it matches the other database server name. Querying V$SESSION will only work if the link is open when you query it. As such, you may want to add an AFTER LOGON trigger which writes an audit entry to a table if the connection is from that database server.”
If you create a logon trigger to record all incoming connections via database links, note that in 11g you can do that using the value sys_context(‘USERENV’,’DBLINK_INFO’), which gives us all the information. But check first Doc ID 2593966.1, as there is Bug 18974508: sys_context(‘userenv’, ‘dblink_info’) returns incomplete information.
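A hedged sketch of such a trigger (the audit table and trigger names are illustrative, and keep in mind the caveats from Mark below):

create table dblink_logon_audit (
  logon_time  date,
  username    varchar2(128),
  machine     varchar2(128),
  dblink_info varchar2(4000)
);

create or replace trigger trg_capture_dblink_logons
after logon on database
begin
  -- DBLINK_INFO is non-null only for sessions arriving over a database link
  if sys_context('USERENV', 'DBLINK_INFO') is not null then
    insert into dblink_logon_audit
    values (sysdate,
            sys_context('USERENV', 'SESSION_USER'),
            sys_context('USERENV', 'HOST'),
            sys_context('USERENV', 'DBLINK_INFO'));
  end if;
end;
/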
But before 10g there is no DBLINK_INFO, so we must use x$k2gte:
select username, osuser, status, sid, serial#, machine,
       process, terminal, program
  from v$session
 where saddr in (select k2gtdses from sys.x$k2gte);
The above is documented in Doc ID 332326.1: How to identify a session started by a remote distributed transaction? The fixed table x$k2gte contains the 2PC Global Transaction Entry. The column k2gtdses in x$k2gte holds the session state object, and this can be mapped to the saddr column of v$session.
But as explained by Mark, the problem is that until the trigger finishes, the remote database link session is not considered to exist; only upon successful session connection does Oracle then go and update the related facts about the session. Oracle does not guarantee read consistency on v$ views, and the v$ views are based on x$ tables, which are really program storage areas. These areas get updated at various points in the logic, so it is possible that a logon trigger may not work in this specific case. An alternative approach would be to run a process at regular intervals that just snapshots what is out there and records new remote queries. After all, you really only need one capture per remote source, whether you care only about database links or about each client server.
One of the top database experts, Mariami Kupatadze, gave us a very elegant way to find remote sessions executing over a database link using x$k2gte, x$ktcxb, x$ksuse and v$session_wait in a single SQL statement.
Long story short: for databases from 7.3 to 12.1, create a job capturing the distributed transactions based on the script given in Doc ID 104420.1 “Script to show Active Distributed Transactions”; you can modify the script if more than just the active remote transactions need to be captured. For 12.2 and later, just use the view DBA_DB_LINK_SOURCES.
“There should be no on/off button for security, it should always be on, everything should always be encrypted – there should be no option to turn security off” – Larry Ellison
The Oracle Database Security Assessment Tool (DBSAT) is a simple command-line tool showing how securely your database is configured, who the users are and what privileges they have, what security policies and controls are in place, and where sensitive data resides, with the goal of promoting successful approaches to mitigate potential security risks.
DBSAT has three components: Collector, Reporter, and Discoverer.
Here are the steps to produce the report in Oracle Database 20c. DBSAT also supports Oracle Database versions 10.2 through 19c.
1. First, make sure you have zip, unzip and python on the database server:
[root@julian ~]# id
uid=0(root) gid=0(root) groups=0(root)
[root@julian ~]# cd /home/oracle/
[root@julian oracle]# yum install -y zip unzip python
Loaded plugins: ulninfo, versionlock
mysql-connectors-community | 2.6 kB 00:00
mysql-tools-community | 2.6 kB 00:00
mysql80-community | 2.6 kB 00:00
ol7_UEKR5 | 2.5 kB 00:00
ol7_latest | 2.7 kB 00:00
(1/3): mysql-connectors-community/x86_64/primary_db | 68 kB 00:00
(2/3): mysql80-community/x86_64/primary_db | 128 kB 00:00
(3/3): mysql-tools-community/x86_64/primary_db | 83 kB 00:00
(1/5): ol7_UEKR5/x86_64/updateinfo | 41 kB 00:00
(2/5): ol7_UEKR5/x86_64/primary_db | 7.6 MB 00:00
(3/5): ol7_latest/x86_64/updateinfo | 3.1 MB 00:00
(4/5): ol7_latest/x86_64/group | 660 kB 00:00
(5/5): ol7_latest/x86_64/primary_db | 30 MB 00:00
Excluding 262 updates due to versionlock (use "yum versionlock status" to show them)
Package zip-3.0-11.el7.x86_64 already installed and latest version
Package unzip-6.0-20.el7.x86_64 already installed and latest version
Package python-2.7.5-86.0.1.el7.x86_64 already installed and latest version
Nothing to do
[root@julian oracle]#
2. Create a database user for running the Security Assessment Tool. You can also run it as SYSDBA, but I would rather have a separate user:
create user dbsat_user identified by dbsat_user;
grant create session to dbsat_user;
grant select_catalog_role to dbsat_user;
grant select on sys.registry$history to dbsat_user;
grant select on sys.dba_users_with_defpwd to dbsat_user;
grant select on audsys.aud$unified to dbsat_user;
grant audit_viewer to dbsat_user;
grant capture_admin to dbsat_user;
grant dv_secanalyst to dbsat_user;
3. Run the tool:
[oracle@julian dbsat]$ ./dbsat collect dbsat_user/dbsat_user@//localhost:1521/xxxxx.oraclevcn.com dbsat_output
Database Security Assessment Tool version 2.2.1 (May 2020)
This tool is intended to assist you in securing your Oracle database
system. You are solely responsible for your system and the effect and
results of the execution of this tool (including, without limitation,
any damage or data loss). Further, the output generated by this tool may
include potentially sensitive system configuration data and information
that could be used by a skilled attacker to penetrate your system. You
are solely responsible for ensuring that the output of this tool,
including any generated reports, is handled in accordance with your
company's policies.
Connecting to the target Oracle database...
SQL*Plus: Release 20.0.0.0.0 - Production on Tue Nov 10 08:35:00 2020
Version 20.2.0.0.0
Copyright (c) 1982, 2019, Oracle. All rights reserved.
Connected to:
Oracle Database 20c EE High Perf Release 20.0.0.0.0 - Production
Version 20.2.0.0.0
Setup complete.
SQL queries complete.
.......
OS commands complete.
Disconnected from Oracle Database 20c EE High Perf Release 20.0.0.0.0 - Production
Version 20.2.0.0.0
DBSAT Collector completed successfully.
Calling /u01/app/oracle/product/20.0.0/dbhome_1/bin/zip to encrypt dbsat_output.json...
Enter password:
Verify password:
adding: dbsat_output.json (deflated 88%)
zip completed successfully.
[oracle@julian dbsat]$
4. Generate the report:
[oracle@julian dbsat]$ ./dbsat report dbsat_output
Database Security Assessment Tool version 2.2.1 (May 2020)
This tool is intended to assist you in securing your Oracle database
system. You are solely responsible for your system and the effect and
results of the execution of this tool (including, without limitation,
any damage or data loss). Further, the output generated by this tool may
include potentially sensitive system configuration data and information
that could be used by a skilled attacker to penetrate your system. You
are solely responsible for ensuring that the output of this tool,
including any generated reports, is handled in accordance with your
company's policies.
Archive: dbsat_output.zip
[dbsat_output.zip] dbsat_output.json password:
inflating: dbsat_output.json
DBSAT Reporter ran successfully.
Calling /usr/bin/zip to encrypt the generated reports...
Enter password:
Verify password:
zip warning: dbsat_output_report.zip not found or empty
adding: dbsat_output_report.txt (deflated 78%)
adding: dbsat_output_report.html (deflated 84%)
adding: dbsat_output_report.xlsx (deflated 3%)
adding: dbsat_output_report.json (deflated 82%)
zip completed successfully.
6. View the report. Note that most areas will probably be in PASS status, some will be flagged as LOW RISK, and some might even be classified as HIGH RISK:
Although in ADB we have no OS access, you can still run the Database Security Assessment Tool. Roy Salazar explained on the Pythian blog how to run DBSAT against Autonomous Database.
You need to have the Instant Client installed and then use the ADMIN database user from Autonomous DB.
Clearly, you will get “ORA-20002: Complete without OS Commands”, as the Collector was executed on the client machine instead of on the underlying DB server, to which we have no access.
The security of Autonomous Database is so high that you most likely will get a very different report from what we usually see on-premises.