Database Administration & Monitoring

Purging Unified Audit Trail in 12cR2

A good thing in 12.2 is that the implementation of Unified Audit Trail has changed a little. It was stored in a weird CLI_SWP$ table, but now we have a normal partitioned table named AUD$UNIFIED. In a previous post I traced the two purge methods: purge all records before a timestamp, or purge all. Here is the same in 12.2.

Purge old

I have quite a few records in the Unified Audit Trail here.
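As an illustration of the purge-before-timestamp method, here is a sketch of the DBMS_AUDIT_MGMT calls (the one-day cutoff is only an example value, not the exact run traced in the post):

begin
  -- mark records up to this point in time as archived
  dbms_audit_mgmt.set_last_archive_timestamp(
    audit_trail_type  => dbms_audit_mgmt.audit_trail_unified,
    last_archive_time => systimestamp - interval '1' day);
  -- purge only the records older than the last archive timestamp
  dbms_audit_mgmt.clean_audit_trail(
    audit_trail_type        => dbms_audit_mgmt.audit_trail_unified,
    use_last_arch_timestamp => true);
end;
/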

How many members for standby redo logs?

I see a lot of databases with two members for redo logs and also two members for standby redo logs. Why not? But when asking around I realized that there is some misunderstanding about it. And what was recommended 10 years ago may be different today.
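To see what you currently have, a quick query sketch on V$LOGFILE (which lists both online and standby redo log members) counts the members per group:

-- members per redo log group, online and standby
select type, group#, count(*) as members
from v$logfile
group by type, group#
order by type, group#;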

Oracle 12cR2 on Windows: Virtual Accounts

Oracle Database 12.2.0.1 is released for Windows, just 2 weeks after the Linux release, and this is very good news. Let’s see something new you will encounter in the first screens of the Oracle 12.2 installer. Don’t worry, the default choice is the right one. But it is better to understand it.

SYSTEM

On Linux, you don’t install Oracle Database as root. You create a user, usually called oracle, which will be the owner of the database files, the instance processes and the shared memory. This looks obvious, but before 12c on Windows the Oracle instance was running as the root equivalent, the built-in SYSTEM user. This was very bad from a security point of view: running software with the most powerful user on the system.
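On Linux you can quickly check which user owns the instance processes (the pmon process name depends on your instance name):

# show the owner of the background processes of the running instances
ps -eo user,pid,cmd | grep '[o]ra_pmon'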

Google Cloud Platform instances and Oracle Database

When it comes to choosing a cloud instance to run Oracle Database, you want to be able to run your workload on the minimum number of CPU cores. This is why in a previous post I measured how many logical reads per second can be achieved with a SLOB workload on AWS, which is often the first cloud considered, and I will probably do the same on Azure in the future. I did the same on the Oracle Cloud, which is the only one where Oracle makes it easy to run and license the Oracle Database.
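A crude way to see the logical-read rate during such a run (a sketch; SLOB itself reports it from AWR) is to sample V$SYSSTAT twice and divide the delta by the elapsed seconds:

-- sample once, wait N seconds, sample again: (value2 - value1) / N = logical reads per second
select value from v$sysstat where name = 'session logical reads';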

12cR2: TNS_ADMIN in env.ora

The network files (sqlnet.ora, tnsnames.ora, listener.ora) are read by default from ORACLE_HOME/network/admin, but you may have several Oracle installations and want only one place for those files. Then you can use the TNS_ADMIN environment variable. But are you sure that it is always set when starting the database? The listener? The client? They must be consistent (see https://blog.dbi-services.com/oracle-12cr2-plsql-new-feature-tnsping-from-the-database/). Then what do we do for the cases where TNS_ADMIN is not set? Define symbolic links from the ORACLE_HOME to the common location. It would be better to just change the default location, and this is what can be done in 12.2 with env.ora.

By default, $ORACLE_HOME/env.ora is empty: it contains only comments.
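To point everything that runs from this ORACLE_HOME to one common network directory, a line like the following is enough (the path is only an example):

$ cat $ORACLE_HOME/env.ora
# common location for sqlnet.ora, tnsnames.ora, listener.ora (example path)
TNS_ADMIN=/u01/app/oracle/network/admin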

Oracle 12cR2, RAC, Cloud, ACFS, PDB thin clones and asmadmin

In the Oracle Public Cloud, fast provisioning shows its full value when creating a RAC database service: in one hour you can get an operational, highly available, multitenant database. You can even create it in Data Guard for Disaster Recovery. Now, Oracle is pushing ACFS to store the datafiles rather than direct ASM, especially in multitenant, because a great feature is thin cloning: CREATE PLUGGABLE DATABASE ... SNAPSHOT COPY. However, I encountered an error when I tried it for the first time.

TDE keystore

SQL> create pluggable database pdb2 from pdb1 snapshot copy;
create pluggable database pdb2 from pdb1 snapshot copy
*
ERROR at line 1:
ORA-28357: password required to open the wallet

Oh yes, in the cloud all tablespaces are encrypted. In 12.2 we can put the keystore password in the command:
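Something like this, where the password is of course a placeholder for the keystore password of your CDB:

SQL> create pluggable database pdb2 from pdb1 snapshot copy keystore identified by "MyKeystorePwd#1";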

Oracle 12cR2: changes for login.sql

If you use a login.sql script to set the SQL*Plus environment from your current working directory, you will see that it does not run anymore in 12.2. This is a security feature, and a good occasion to explain how sqlplus finds the scripts to run, on Linux.

For my test I have login.sql, LOGIN.SQL and script.sql in the following directories:

$ tree /tmp/mytest/
/tmp/mytest/
├── a
│   ├── login.sql
│   ├── LOGIN.SQL
│   └── script.sql
├── b
│   ├── login.sql
│   ├── LOGIN.SQL
│   └── script.sql
├── login.sql
├── LOGIN.SQL
└── script.sql

I’m going to the parent directory:
cd /tmp/mytest

The scripts display their name:

+ head login.sql LOGIN.SQL script.sql
==> login.sql <==
prompt Hello from /tmp/mytest/login.sql

==> LOGIN.SQL <==
prompt Hello from /tmp/mytest/LOGIN.SQL

==> script.sql <==
prompt Hello from /tmp/mytest/script.sql
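The change in 12.2 is that SQL*Plus no longer searches the current working directory for login.sql; it has to be found through ORACLE_PATH (SQLPATH on Windows). A workaround sketch on Linux, if you accept the security implications:

$ cd /tmp/mytest
$ export ORACLE_PATH=.         # let SQL*Plus search the current directory again
$ sqlplus /nolog @script.sql   # login.sql found through ORACLE_PATH now runs at startup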

Misleading wait event names clarified in V$EVENT_NAME

The Oracle wait event names were originally implemented for the Oracle RDBMS developers and are now used by database users to troubleshoot performance issues. The consequence is that the names may be misleading, because they have a meaning from the internal point of view. Here is some clarification about them.

In 12c the clarification is easy because we have a new DISPLAY_NAME column in the V$EVENT_NAME view:
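For example, a quick query sketch to list a few events where the display name differs from the internal name:

select name, display_name
from v$event_name
where name <> display_name
order by name
fetch first 10 rows only;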

Purging Unified Audit Trail in 12cR1

When you want to empty a table you have two methods: delete and truncate. If, for any reason (see previous post), the Unified Audit Trail has become too big, you cannot directly delete or truncate the table. You must call dbms_audit_mgmt.clean_audit_trail. But then you want to know whether it will do slow deletes or quick truncates. Let’s trace it.
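A minimal sketch of how to trace it from the session that runs the purge (assuming you have the privileges for both packages): enable SQL trace, run the full purge, then look in the trace file for DELETE versus TRUNCATE statements.

-- trace this session, including wait events
exec dbms_monitor.session_trace_enable(waits => true, binds => false)
-- purge everything, no timestamp
exec dbms_audit_mgmt.clean_audit_trail(audit_trail_type => dbms_audit_mgmt.audit_trail_unified, use_last_arch_timestamp => false)
exec dbms_monitor.session_trace_disable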

I have filled my Unified Audit Trail with hundreds of thousands of failed logins:
SQL> select unified_audit_policies,action_name,count(*) from unified_audit_trail group by unified_audit_policies,action_name;
 
UNIFIED_AUDIT_POLICIES                   ACTION_NAME            COUNT(*)
---------------------------------------- -------------------- ----------
                                         EXECUTE                       2

CPU_COUNT

When you have fewer CPU threads than the number of processes that have something to run on CPU, the OS will schedule them to share the CPU resource. Increasing the workload at that point will not increase the throughput, because you have reached the capacity of your system, and response time will increase because of queuing. Actually, performance will even decrease because of the overhead of context switching when trying to share the processors.
When you don’t want the OS scheduler to do the resource-sharing job, you can, and should, use Instance Caging. For sure, the database instance can do resource sharing more intelligently than the OS, as it knows the kind of workload and the performance requirements of each process.
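Instance Caging itself is only two parameters (the values here are examples): limit CPU_COUNT and activate a Resource Manager plan, because the cage is only enforced when a plan is active.

-- cage this instance to 8 CPU threads (example value)
alter system set cpu_count = 8 scope = both;
-- a resource manager plan must be active for the cage to be enforced
alter system set resource_manager_plan = 'DEFAULT_PLAN' scope = both;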