CBO

Bushy Joins – a closer look

When 12.2 came out most of the (optimizer) focus was on SPD and how to avoid the challenges from 12.1. Still, 12.2 introduced several (less acclaimed) optimizations, including the “Bushy Join” transformation, which is interesting because (I think, corrections welcome) the bushy join concept isn’t necessarily tied to query transformation in general, especially before 12.2 (some references about “manual” bushy joins here and here) or in other RDBMSs (a manual example on SQL Server here).
Anyway, since query transformations are the CBO’s way of improving our code, here we go again.
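To picture what the transformation is aiming for, here is a minimal sketch of a “manual” bushy join, using four hypothetical tables t1 to t4: each pair is joined inside a non-mergeable inline view and the two intermediate results are then joined to each other, producing a bushy rather than a left-deep join tree.

-- hypothetical tables t1..t4; the no_merge hints keep each inline view as a
-- separate unit, so the result of (t1 join t2) is joined to the result of (t3 join t4)
select  /*+ no_merge(v12) no_merge(v34) */
        v12.id, v34.id
from
        (select t1.id from t1, t2 where t2.id = t1.id) v12,
        (select t3.id from t3, t4 where t4.id = t3.id) v34
where
        v34.id = v12.id;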

SQLTXPLAIN vs SQLd360, differences and similarities

When talking about tools to help with SQL tuning, the question of SQLTXPLAIN vs SQLd360 comes up very often. “What’s the difference?”, “Which one should I use?”, “Why another tool?” are probably the top ones.

Character selectivity

A recent OTN posting asked how the optimizer dealt with “like” predicates for character types quoting the DDL and a query that I had published some time ago in a presentation I had done with Kyle Hailey. I thought that I had already given a detailed answer somewhere on my blog (or even in the presentation) but found that I couldn’t track down the necessary working, so here’s a repeat of the question and a full explanation of the working.

The query is very simple, and the optimizer’s arithmetic takes an “obvious” strategy. Here’s the sample query, with the equivalent query that we can use to do the calculation:
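A minimal sketch of the general idea, using a hypothetical table t1 with a character column v1 (not the original test case): for a predicate of the form v1 like 'A%' the optimizer’s cardinality arithmetic is essentially that of the equivalent range predicate shown below it.

-- hypothetical table and column; the optimizer treats the LIKE as a range,
-- converting the boundary strings to numbers to work out the selectivity
select count(*) from t1 where v1 like 'A%';

select count(*) from t1 where v1 >= 'A' and v1 < 'B';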

Index out of range

I’ve waxed lyrical in the past about creating suitable column group statistics whenever you drop an index because even when the optimizer doesn’t use an index in its execution path it might have used the number of distinct keys of the index (user_indexes.distinct_keys) in its estimates of cardinality.
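A minimal sketch of the idea, with a hypothetical table t1 and an index on (col1, col2) that is about to be dropped: create a column group for the indexed columns and gather stats so the optimizer still has a “number of distinct keys” figure to work with.

-- hypothetical table and column names; create_extended_stats returns the
-- system-generated name of the virtual column representing the column group
select dbms_stats.create_extended_stats(user, 't1', '(col1,col2)') from dual;

-- gather stats so the column group acquires a num_distinct value that can
-- stand in for user_indexes.distinct_keys once the index has gone
execute dbms_stats.gather_table_stats(user, 't1')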

min/max Upgrade

A question came up on the OTN database forum a little while ago about a very simple query that was taking different execution paths on two databases with the same table and index definitions and similar data. In one database the plan used the “index full scan (min/max)” operation while the other database used a brute force “index fast full scan” operation.
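A minimal sketch of the kind of query involved, against a hypothetical table t1 with an index on (id): a bare min or max of the leading index column is the shape that can use the “index full scan (min/max)” path, walking down just one edge of the index instead of scanning the whole segment.

-- hypothetical table t1 with an index t1_i1(id); the min/max path reads only
-- one side of the index, while the fast full scan reads every block of it
select max(id) from t1;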

Join Elimination

A question has just appeared on OTN describing a problem where code that works in 11g doesn’t work in 12c (exact versions not specified). The code in question is a C-based wrapper for some SQL, and the problem is a buffer overflow. The query supplied is as follows:


select T1.C1 from T1, T2 where T1.C1 = T2.D1;

The problem is that this works in 11g where the receiving (C) variable is declared as

char myBuffer [31];

but it doesn’t work in 12c unless the receiving variable is declared as:

I don’t know (yet)

Here’s a question that came to mind while reading a recent question on the OTN database forum. It’s a question to which I don’t know the answer and, at present, I don’t really want to bother modelling – although if I were on a customer site and this looked like a likely explanation for a performance anomaly it’s the sort of thing I would create a model for.

Cost is Time (again)

The hoary old question about lower cost queries running faster or slower than higher cost queries has appeared once again on the OTN database forum. It’s one I’ve addressed numerous times in the past – including on this blog – but, the Internet being what it is, the signal keeps getting swamped by the noise. This time around a couple of “new” thoughts crossed my mind when reading the question.

There is a Time column on the standard forms of the execution plan output, and the description of this column is available in the manuals and has been for years (here’s a definition from v$sql_plan from 10gR2, for example):
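As a quick way of seeing the relationship for yourself, here is a minimal sketch against a hypothetical table t1: both Cost and Time come out of the optimizer’s model, so Time is just another presentation of the cost estimate, not a measurement of anything.

explain plan for select * from t1 where id > 0;

-- cost and time are both optimizer estimates and time is derived from cost,
-- so a lower time in the plan does not promise a shorter run time
select id, operation, cost, time
from   plan_table
order by id;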

Long Parsing and PGA limits

Recently I’ve seen some not-so-smart optimizer behavior: one query took a long time to parse and ended with an error after hitting PGA_AGGREGATE_LIMIT within a few minutes; another query was parsed for ages while using a reasonable (under 2G :)) amount of PGA and could still hit PGA_AGGREGATE_LIMIT, but only after much more time, up to an hour.


Both cases were similar and involved queries accessing views whose code is generated by an application using lots of IN lists and other OR conditions. Both are really ugly SQL statements with a text length of ~100K. When Oracle tried to parse them it took a lot of time, and the parse attempt either failed with ORA-4036 fairly quickly or hung for a long time and then failed. Strangely, the incident trace file generated for ORA-4036 doesn’t include a PGA heap breakdown, so you have to manually enable a PGA heapdump on the error to get an idea of what is taking up memory. Here’s what I found in there:
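For reference, enabling that dump is a one-liner. A minimal sketch, assuming you set it in the session that will re-run the failing parse (level 1 requests the top-level PGA heap summary; check the heapdump level values for your version before relying on them):

alter session set events '4036 trace name heapdump level 1';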

"Cost Based Optimizer: Grundlagen – mit Update für Oracle 12c" Artikel (German)

Since yesterday my article "Cost Based Optimizer: Grundlagen – mit Update für Oracle 12c" has been available on the "Informatik Aktuell" site.

Its content also serves as an introduction to my presentation at the IT-Tage 2016 on December 13.
If the topic interests you, I warmly invite you to that presentation, where I will explore the subject in more depth with live demonstrations.