Finding the slowest SQL execution of the same query

When running the same query multiple times, several questions come to mind:

  • Does the query always execute in the same amount of time?
  • If some executions are slower, what is the slowest execution time?
  • When did the slowest execution happen?
  • What more can I find out about the slowest execution?

All of this can be answered from data in Active Session History (ASH).

The following query finds the maximum, minimum and average execution times in seconds, as well as the start and end times of the slowest execution:
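One way to write such a query (a sketch rather than the author's original, assuming 11g or later where ASH tags each execution with SQL_EXEC_ID; &sql_id is a placeholder for the statement of interest):

-- Per-execution elapsed times from ASH samples, then min/max/avg
-- and the start/end of the slowest run. ASH samples once per second,
-- so very short executions may be missed or rounded.
with execs as (
  select sql_exec_id,
         min(sample_time) as exec_start,
         max(sample_time) as exec_end,
         round((cast(max(sample_time) as date)
              - cast(min(sample_time) as date)) * 86400) as secs
  from   v$active_session_history
  where  sql_id = '&sql_id'
  and    sql_exec_id is not null
  group  by sql_exec_id
)
select max(secs)                                            as max_secs,
       min(secs)                                            as min_secs,
       round(avg(secs), 1)                                  as avg_secs,
       max(exec_start) keep (dense_rank last order by secs) as slowest_start,
       max(exec_end)   keep (dense_rank last order by secs) as slowest_end
from   execs;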

Sorted Hash Clusters RIP

Sorted Hash Clusters have been around for several years, but I’ve not yet seen them being used, or even investigated in detail. This is a bit of a shame, really, because they seem to be engineered to address a couple of interesting performance patterns.

The basic concept is that data items that look alike are stored together (clustered) by applying a hashing function to generate a block address; but on top of that, if you query the data by “hashkey”, the results are returned in sorted order of a pre-defined “sortkey” without any need for sorting. (On top of everything else, the manuals describing what happens and how it works are wrong).
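For reference, a minimal sketch of the syntax, using a hypothetical call-records example modelled on the documentation (the SORT keyword on the cluster and table columns is what requests the pre-sorted storage):

-- A single-table sorted hash cluster: rows hashed by telephone_number,
-- stored pre-sorted by call_time within each hash key.
create cluster call_cluster (
  telephone_number number,
  call_time        number sort )
hashkeys 10000
hash is telephone_number
size 256
single table;

create table calls (
  telephone_number number,
  call_time        number sort,
  duration_secs    number )
cluster call_cluster (telephone_number, call_time);

-- A query like this should return rows in call_time order
-- without a sort operation appearing in the plan:
select * from calls
where  telephone_number = 5551234
order  by call_time;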

Flashback query FTS costs

There has been some information written on the subject already (see this post by Randolf Geist).

In a nutshell, the way the optimizer costs full table scans when using flashback query makes them look much more expensive than they otherwise would. What further complicates the problem is that index access costs remain the same whether you're using flashback query or not. As a result, you are much more likely to see index access paths when using flashback query.

Consider the following example:



SQL> create table test as
2 select level n, rpad('x', 200, 'x') v
3 from dual
4 connect by level <= 10000;

Table created

SQL> alter table test add constraint pk_test primary key (n);

Table altered

SQL> exec dbms_stats.gather_table_stats(user, 'test');
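To see the costing difference described above, one could then compare the two plans (a sketch; the flashback full scan should report a noticeably higher cost, while access via pk_test is costed the same either way):

-- Plain full table scan:
explain plan for select * from test;
select * from table(dbms_xplan.display);

-- The same statement as a flashback query:
explain plan for
  select * from test as of timestamp (systimestamp - interval '1' minute);
select * from table(dbms_xplan.display);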

Oracle CPU Time

There are 3 kinds of CPU in the Oracle stats.

  1. Oracle CPU used
  2. System CPU used
  3. Oracle demand for CPU

Starting in 10g, Oracle records both the CPU used by the instance and the load on the system in v$sysmetric. This is awesome, as we can see how busy the system is and how much of the CPU Oracle is responsible for:
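For example, a sketch of pulling the relevant metrics from the 60-second group of v$sysmetric:

-- Recent CPU metrics; GROUP_ID = 2 is the 60-second interval group.
select metric_name,
       round(value, 2) as value,
       metric_unit
from   v$sysmetric
where  group_id = 2
and    metric_name in ( 'CPU Usage Per Sec',          -- CPU used by Oracle
                        'Host CPU Utilization (%)' )  -- load on the whole box
order  by metric_name;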

Where to begin with Oracle and SQL

I’m seeing more and more questions along the lines of “where do I start with Oracle if I want to be a DBA?” My perspective is a bit off since I’ve been surrounded by Oracle for over 20 years. I hardly remember what it was like to start with Oracle, and starting with Oracle now in 2013 is quite different from starting with Oracle in 1990.

Here is my list, and everything on it is excellent. I’m sure I missed a few good ones, but maybe people can add them in the comments.

Start with Oracle Docs, they are free and good!

Get the best books and read them

Compressed partitions are not compressed tables

So… you’ve got a big table and you’re lucky enough to have a partitioning license. So far, so good.

Then you read about using compression to make your older, static data faster and smaller and you thought … "Cool!"

Let’s see that in action – first, here is my uncompressed partitioned table:
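Something along these lines (a hypothetical stand-in; the real demo will differ in the details):

-- Hypothetical demo table: plain range partitioning, no compression yet.
create table sales_history (
  sale_id   number,
  sale_date date,
  padding   varchar2(200) )
partition by range (sale_date) (
  partition p2011 values less than (date '2012-01-01'),
  partition p2012 values less than (date '2013-01-01'),
  partition p2013 values less than (date '2014-01-01') );

-- Compression status can then be checked per partition:
select partition_name, compression
from   user_tab_partitions
where  table_name = 'SALES_HISTORY';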

Index Sorting

From time to time I’ve mentioned the fact that the optimizer will sort indexes alphabetically by name as the last tie-breaker when two plans have the same cost. Thanks to an email that arrived a couple of days ago I’ve been alerted to event 10089 (which has been around since at least 8.1.7.4) with the description: “CBO Disable index sorting”.

Setting this event will disable the index sorting (which may result in some tied execution plans changing, of course) and may reduce contention on the row cache objects latches if you’re running 11.2.0.3 or later and doing a lot of hard parsing.
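For reference, setting a numeric event like this at session level follows the usual syntax (a sketch; as with any undocumented event, try it in a sandbox first):

-- Disable CBO index sorting for the current session:
alter session set events '10089 trace name context forever, level 1';

-- And to switch it back off:
alter session set events '10089 trace name context off';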

Elysium…

I’ve just seen Elysium at the cinema…

The Good: Let’s start by saying it is visually stunning! Earth is now a sprawling shanty town with people living hand to mouth. In contrast, Elysium is futuristic, high-tech, clean and inhabited by the ultra-rich. The shots of Elysium in space are great. That’s what sci-fi films should be all about. The special effects in the scenes on Earth have a very real quality and are very reminiscent of District 9, which is hardly surprising as it’s the same writer/director. The contrast between Earth and Elysium is very striking and works well. There are some robots that remind me of the old units in I, Robot, except these ones have a serious attitude problem. All this makes for some very nice stuff I would like to see again.

DBMS_FILE_TRANSFER potentially cool but then it is not

This post is interesting for all those of you who plan to transfer data files between database instances. Why would you consider this? Here’s an excerpt from the official 12.1 package documentation:

The DBMS_FILE_TRANSFER package provides procedures to copy a binary file within a database or to transfer a binary file between databases.

But it gets better:

The destination database converts each block when it receives a file from a platform with different endianness. Datafiles can be imported after they are moved to the destination database as part of a transportable operation without RMAN conversion.

So that’s a way not only to copy data files from one database to another, but also to take a file from SPARC and make it available on Linux!
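A minimal sketch of pulling a file across a database link with GET_FILE (the directory objects SRC_DIR and DEST_DIR and the SOURCE_DB database link are assumed to exist already):

-- Copy users01.dbf from the remote source to the local destination;
-- parameter names follow the documented GET_FILE signature.
begin
  dbms_file_transfer.get_file(
    source_directory_object      => 'SRC_DIR',
    source_file_name             => 'users01.dbf',
    source_database              => 'SOURCE_DB',
    destination_directory_object => 'DEST_DIR',
    destination_file_name        => 'users01.dbf' );
end;
/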

Quick Pluggable Database (PDB) Update…

I’ve been playing around with using SQL Developer and Cloud Control 12cR3 for managing pluggable databases (PDBs) this morning. I’ve added a couple of quick references to my PDB article.

If you understand the DBCA and SQL*Plus approach, the SQL Developer and Cloud Control screens are pretty self-explanatory, so I’ve made my references to them brief. :)

Cheers

Tim…