
Top 60 Oracle Blogs


July 2015

Descending Indexes

I’ve written about optimizer defects with descending indexes before now but a problem came up on the OTN database forum a few days ago that made me decide to look very closely at an example where the arithmetic was clearly defective. The problem revolves around a table with two indexes, one on a date column (TH_UPDATE_TIMESTAMP) and the other a compound index which starts with the same column in descending order (TH_UPDATE_TIMESTAMP DESC, TH_TXN_CODE). The optimizer was picking the “descending” index in cases where it was clearly the wrong index (even after the statistics had been refreshed and all the usual errors relating to date-based indexes had been discounted). Here’s an execution plan from the system which shows that there’s something wrong with the optimizer:
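For context, here is a hedged sketch of the two indexes described above. Only the column names come from the post; the table name and the rest of the table definition are assumptions for illustration:

```sql
-- Assumed table shape; only the two indexed column names come from the post.
create table txn_history (
        th_update_timestamp     date,
        th_txn_code             varchar2(10),
        padding                 varchar2(100)
);

-- Single-column ascending index on the date column
create index th_upd_idx on txn_history(th_update_timestamp);

-- Compound index starting with the same column, but in descending order
create index th_upd_code_idx on txn_history(th_update_timestamp desc, th_txn_code);
```

Note that a descending index in Oracle is implemented internally as a function-based index, which is where the costing arithmetic can go astray.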

I’ve Been Made an Oracle Ace Director

Well, I guess the title of this post says it all. As I tweeted yesterday:

I’m grateful, proud, honoured and overall just Jolly Chuffed to have been made an Oracle Ace Director! #ACED

I can now put this label on my belongings

Wrong Results

It is interesting how a combination of technologies in Oracle can interact to produce a seemingly bizarre outcome.

Consider the following query:

SQL> with v as
(
select 20 n from dual
) select distinct v.n
from v, t
where v.n=t.n(+);

no rows selected

Note that I'm doing a left outer join, yet the query somehow managed to lose the single row I have in the subquery factoring clause (and, just in case you're wondering, I've used subquery factoring for simplicity; replacing it with a real table makes no difference).

The first reaction is how could something as simple as this go so terribly wrong?

Let's take a look at the plan:



SQL> with v as
(
select 20 n from dual
) select distinct v.n
from v, t
where v.n=t.n(+);

Execution Plan

PQ Index anomaly

Here’s an oddity prompted by a question that appeared on Oracle-L last night. The question was basically – “Why can’t I build an index in parallel when it’s single column with most of the rows set to null and only a couple of values for the non-null entries”.

That’s an interesting question, since nothing in the description of the index suggests a reason for anything to go wrong, so I spent a few minutes trying to emulate the problem. I created a table with 10M rows and a column that was 3% ‘Y’ and 0.1% ‘N’ (the rest null), then created and dropped an index on it in parallel a few times. The report I used to prove that the index build had run as a parallel build showed an interesting waste of resources. Here’s the code to build the table and index:
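The original script isn’t reproduced in this excerpt, so what follows is only a minimal sketch of such a setup under the stated assumptions (~3% ‘Y’, ~0.1% ‘N’, remaining rows null); the object names and exact mechanics are guesses:

```sql
-- Hedged sketch only: names and row-generation mechanics are assumptions.
-- 10M rows; flag is ~3% 'Y', ~0.1% 'N', and null for the rest.
create table t1
nologging
as
select
        case
                when mod(rownum, 100)  < 3    then 'Y'   -- ~3% of rows
                when mod(rownum, 1000) = 999  then 'N'   -- ~0.1% of rows
        end                     flag,
        rpad('x', 100)          padding
from
        dual
connect by
        level <= 1e7
;

-- Single-column index built in parallel
create index t1_i1 on t1(flag) parallel 4 nologging;
```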

This Is Not Glossy Marketing But You Still Won’t Believe Your Eyes. EMC XtremIO 4.0 Snapshot Refresh For Agile Test / Dev Storage Provisioning in Oracle Database Environments.

This is just a quick blog post to direct readers to a YouTube video I recently created to help explain to someone how flexible EMC XtremIO Snapshots are. The power of this array capability is probably most appreciated in the realm of provisioning storage for Test and Development environments.

Although this is a silent motion picture, I think it will speak volumes, or at least 1,000 words.

Please note: This is just a video demonstration to show the base mechanisms and how they relate to Oracle Database with Automatic Storage Management. This is not a scale demonstration. XtremIO snapshots are supported into the thousands, and extremely powerful “sibling trees” are fully supported.

Snap Clone on Exadata

Recently I created a new screenwatch that walks you through the new features of Snap Clone on Exadata in Enterprise Manager 12c version 12.1.0.5. The screenwatch covered using Snap Clone on Exadata both from a multi-tenant architecture perspective and from a standard non-container database perspective. Unfortunately, putting both together meant the resultant video was around 20 minutes long.

Generally, we try to make our screenwatches shorter than that, as the evidence tends to show people lose concentration and interest if a screenwatch runs that long. So we decided to split the screenwatch in two: one part for vanilla Snap Clone on Exadata, and the other for Snap Clone Multitenant on Exadata.

How long does a logical IO take?

This is a question that I played with for a long time. There have been statements on logical IO performance (“Logical IO is x times faster than Physical IO”), but nobody could answer the question what the actual logical IO time is. Of course you can see part of it in the system and session statistics (v$sysstat/v$sesstat), statistic name “session logical reads”. However, if you divide the number of logical reads by the total time a query took, the resulting logical IO time is too high, because that assumes all the time the query took was spent doing logical IO, which obviously is not the case: there is also time spent on parsing, perhaps physical IO, and so on. Also, when doing that, you calculate an average, and averages are known to hide actual behaviour.
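The overestimation described above is easy to see with some made-up numbers (these figures are purely illustrative, not measurements): dividing total elapsed time by the logical read count charges every microsecond of the query to logical IO.

```python
# Illustrative arithmetic only; all numbers here are invented, not measured.
# Dividing total elapsed time by "session logical reads" overstates the cost
# of one logical IO, because elapsed time also covers parsing, physical IO,
# and other CPU work.

elapsed_s = 2.0            # total query elapsed time (assumed)
logical_reads = 100_000    # session logical reads (assumed)

# Naive average: attributes *all* elapsed time to logical IO
naive_lio_s = elapsed_s / logical_reads

# Suppose more precise profiling showed only 0.5 s was spent on logical IO
lio_time_s = 0.5
actual_lio_s = lio_time_s / logical_reads

print(naive_lio_s)    # 2e-05  (20 microseconds per LIO -- too high)
print(actual_lio_s)   # 5e-06  (5 microseconds per LIO)
```

The naive figure is four times too high here, and it is still only an average over all the logical reads the query did.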

Bugs Related to SQL Plan Directives Pack and Unpack

SQL plan directives are a new concept introduced in version 12.1. Their purpose is to help the query optimizer cope with misestimates. To do so, they store in the data dictionary information about the predicates that cause misestimates. Simply put, the purpose of SQL plan directives is to instruct the database engine to either use dynamic sampling or automatically create extended statistics (specifically, column groups).

Since the database engine automatically maintains (e.g. creates and purges) SQL plan directives, in some situations it is necessary to copy the SQL plan directives created in one database to another one. This can be done with the help of the DBMS_SPD package.

Here are the key steps for such a copy:
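As a hedged sketch of that flow (the staging table name and the exact parameter names used here are assumptions; check the DBMS_SPD documentation for your release):

```sql
-- On the source database: create a staging table and pack the
-- SQL plan directives into it.
exec dbms_spd.create_stgtab_directive(table_name => 'SPD_STGTAB', table_owner => 'SCOTT')

variable n number
exec :n := dbms_spd.pack_stgtab_directive(table_name => 'SPD_STGTAB', table_owner => 'SCOTT')

-- Move SPD_STGTAB to the target database (e.g. with Data Pump), then
-- unpack the directives into the target's data dictionary.
exec :n := dbms_spd.unpack_stgtab_directive(table_name => 'SPD_STGTAB', table_owner => 'SCOTT')
```

Both PACK_STGTAB_DIRECTIVE and UNPACK_STGTAB_DIRECTIVE return the number of directives processed, which is worth checking before assuming the copy worked.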

Missing Bloom

Here’s a little surprise that came up on the OTN database forum a few days ago. Rather than describe it, I’m just going to create a data set to demonstrate it, initially using 11.2.0.4 although the same thing happens on 12.1.0.2. The target is a query that joins to a range/hash composite partitioned table and uses a Bloom filter to do partition pruning at the subpartition level.  (Note to self: is it possible to see Bloom filters that operate at both the partition and subpartition level from a single join? I suspect not.). Here’s my code to create and populate both the partitioned table and a driving table for the query:
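The original script isn’t reproduced in this excerpt; as a minimal sketch of the kind of objects involved (all names and the partitioning columns here are assumptions, not the post’s actual definitions):

```sql
-- Hedged sketch only: a range/hash composite partitioned table
-- (range on a date column, hash subpartitions on an id column)
-- plus a small driving table for the join.
create table pt_composite (
        id              number,
        created         date,
        padding         varchar2(100)
)
partition by range (created)
subpartition by hash (id) subpartitions 4
(
        partition p2014 values less than (date '2015-01-01'),
        partition p2015 values less than (date '2016-01-01')
);

create table driver
as
select
        rownum          id,
        rpad('x', 100)  padding
from
        dual
connect by
        level <= 1000
;
```

A hash join from the driving table into pt_composite is the shape of query where the optimizer can build a Bloom filter on the join key and use it to prune hash subpartitions.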

Friday Philosophy – Being Rejected by the Prom Queen

If you follow me on twitter (and if you are on twitter, why would you *not* follow me? :-) See the Twitter tag on the right of the page) you will know what the title of this post is all about. I posted the below on my twitter feed a few weeks ago:

Submitting to speak at #OOW15 is like asking out prom queens. You live in hope – but expect rejection :-)

{BTW if prom queens are not your thing and you would rather be asking out the captain of the football/ice hockey/chess team, the vampire slayer or whatever, just substitute as you see fit.}