A: The most fundamental difference between hash and nested loop joins

Ok guys, thanks for waiting!

I ended up expanding the article quite a lot compared to what I had originally planned. In fact, I only wrote 50% of what I plan to write; I’ll add the rest… um… later… Instead of just stating the difference between the joins, I took a step back and elaborated on something I often see people doing (and talking about in newsgroups and lists too).

Basically, the most fundamental (or biggest, or most important) difference between nested loops and hash joins is that:

  • Hash joins cannot look up rows from the inner (probed) row source based on values retrieved from the outer (driving) row source; nested loops can.

In other words, when joining tables A and B (A being the driving table, B the probed table), a nested loops join can take the first row from A and perform a lookup into B using that row’s value (of the column(s) you join by). Then it takes the next row from A and performs another lookup into B using the new value. And so on, row by row.

This opens up additional access paths to table B. For example, when joining ORDERS and ORDER_ITEMS by ORDER_ID (where ORDER_ID is the leading column of the primary key in the ORDER_ITEMS table), then for whatever orders are taken from the ORDERS table, we can perform a focused, narrow index range scan on ORDER_ITEMS for every ORDER_ID retrieved from the driving ORDERS table. A hash join can’t do that.
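
To make that mechanism concrete, here is a conceptual PL/SQL sketch of what a nested loops join between ORDERS and ORDER_ITEMS logically does. This is a pseudo-implementation for illustration only, not how the join engine is actually coded:

-- Conceptual sketch only: a nested loops join behaves roughly like a pair
-- of nested cursor loops, where the inner lookup can use each driving
-- row's join value. A hash join has no equivalent of this per-row probe.
begin
    for a_row in (select * from orders) loop              -- outer (driving) row source A
        for b_row in (select *
                      from   order_items
                      where  order_id = a_row.order_id)   -- indexed lookup into B, per driving row
        loop
            null;  -- "emit" the joined row here
        end loop;
    end loop;
end;
/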

Of course this doesn’t mean that hash joins can’t use any indexes on the tables they read – index range scans and unique lookups can still be used under a hash join, but only if there are constant values in the query text (in the form of literals or bind variables). If there are no such constant (filter) conditions under a hash join, then the remaining options for using that index would be an INDEX FULL SCAN (which is a range scan from one end of the index to the other) or an INDEX FAST FULL SCAN (which is like a full table scan through the entire index segment). However, none of these options gives the same benefit as nested loops looking up rows from row source B dynamically, based on what was retrieved from A at runtime.
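
For illustration, here is a hedged sketch of the kind of query meant above. The STATUS column and an index on it are made-up assumptions; the point is that any index range scan on ORDER_ITEMS under the hash join is driven purely by the constant predicate, never by the join values:

select /*+ use_hash(oi) */
       o.order_id, oi.line_no
from   orders o, order_items oi
where  o.order_id = oi.order_id   -- join condition: cannot drive an index lookup under a hash join
and    oi.status  = 'OPEN';       -- constant filter: a (hypothetical) index on
                                  -- order_items(status) can still be range scanned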

Note that this nested loops benefit isn’t limited to indexes on table B; the difference is more fundamental than any specific access path. For example, if table B happens to be a single-table hash cluster or an indexed X$ table, then a nested loops join is also able to do “optimized” lookups from these row sources, based on the values retrieved from table A.
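
As a sketch of the hash cluster case (DDL for illustration only; the names and sizing numbers are made up), a single-table hash cluster lets each nested loops probe hash the join value straight to a data block, with no index involved at all:

-- Illustrative only: ORDER_ID hashes directly to a block address, so each
-- nested loops lookup into this table is a single-block "optimized" access.
create cluster order_items_cl (order_id number)
    size 512 single table hashkeys 100000;

create table order_items_hc (
    order_id  number,
    line_no   number,
    item_data varchar2(100)
) cluster order_items_cl (order_id);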

So, my article with a lot of (loosely) related details is here:

In the comments section of my question, Tom, Bernard Polarski, Christian Antognini and Marc Musette got the closest to what I had in mind when I asked the question. Of course, your mileage may vary somewhat depending on what kinds of problems you have experienced the most over the years. Also, Jonathan Lewis made the valid comment that the answer depends on what exactly one means by “fundamental”, and yeah, this was open to interpretation.

Nevertheless, I wanted to emphasize that there’s a more important difference between nested loops and hash joins than the usual stuff you see in training materials, which talk about implementation details like hash tables and memory allocation…

Some day I will complete that article; I plan to add some design advice in there, like denormalization opportunities for getting the best of both worlds, etc. But now I’m gonna get a beer instead.

Thanks for reading and answering; I was quite impressed by the volume of comments & answers to my question. I must do this more often!

New online seminars – Advanced Oracle Troubleshooting v2.0 Deep Dives

As I mentioned in a previous post, I won’t be doing much flying anymore, so I am changing all my seminar offerings to online seminars.

So, I’ve rearranged my seminar content into self-contained 4-hour deep dives, and thanks to the online format (no travel needed), people can choose which days they want to attend. If you’re interested in latch contention only, for example, you can attend just the Latch Contention deep dive. Or you can still attend all the deep dives. The cool thing is that these deep dive sessions take only half a day (and are priced accordingly). That way you don’t have to skip work for a whole day (or week) and can still get some of your daily work done. Hopefully this makes it a bit easier to get approval to attend the sessions.

As the main feedback from my seminars has been that there’s “too much to learn” within the short 2-3 days I used to run them in, I have arranged the material so that there is more time to go deep into each subject area. Also, I have planned plenty of time for questions & answers (1 hour out of the 4 is reserved for Q&A sessions, and attendees can also ask questions at any time during the lectures & demos).

It looks like I will only offer my Advanced Oracle Troubleshooting v2.0 class online this year. I will probably schedule my Advanced SQL Tuning deep dives for January/February 2011 and the Advanced Troubleshooting class again for March/April, and so on (until I go public with my other business, at which point I won’t have time for full-length training anymore).

You can check out the current schedule and pricing here:

Here’s a brief outline of individual half-day Deep Dives I offer:

  1. AOT deep dive 1: Systematic approach for Advanced Oracle Troubleshooting
  2. AOT deep dive 2: Troubleshooting physical IO and buffer cache issues
  3. AOT deep dive 3: Troubleshooting commit, redo, undo and transaction issues
  4. AOT deep dive 4: Troubleshooting Oracle SGA/PGA/UGA and OS memory issues
  5. AOT deep dive 5: Troubleshooting shared pool and library cache issues
  6. AOT deep dive 6: Troubleshooting enqueue lock waits and deadlocks
  7. AOT deep dive 7: Troubleshooting latch contention
  8. AOT deep dive 8: Troubleshooting Mutex and “cursor: pin” contention
  9. AOT deep dive 9: Troubleshooting complex hangs and spins
  10. AOT deep dive 10: Troubleshooting crashes, bugs and ORA-600/ORA-7445 errors

So, sign up now, seats are limited ;-)

Index degeneration

There’s a thread on OTN that talks about a particular deletion job taking increasing amounts of time each time it is run.

It looks like an example where some thought needs to go into index maintenance, and I’ve contributed a few comments to the thread – so this is a lazy link that saves me from repeating myself on the blog.

Subquery Factoring (4)

I’ve written before about the effects of subquery factoring (common table expressions – or CTEs) on the optimizer, and the way that the optimizer can “lose” some strategies when you start factoring out subquery expressions. Here’s another example I came across quite recently. It involved a join of about 15 tables, so I’ve only extracted a few lines from the SQL and the resulting execution plans.

We start with the original query, which had factored out an aggregate subquery and then used it in place of an inline view:

with max_cust_comm as (
	select
		ccm.order_id,
		max(ccm.comm_date) max_comm_date
	from
		customer_communications ccm
	group by
		ccm.order_id
)
select
	...
from
	...
left join
	max_cust_comm	mcc
on	mcc.order_id = ord.order_id
...

The execution path for this query included the following lines:

|   6 |     HASH JOIN OUTER            |                            |     1 |
|   7 |      NESTED LOOPS              |                            |     1 |
               ...
|  41 |      VIEW                      |                            |   798K|
|  42 |       HASH GROUP BY            |                            |   798K|
|  43 |        TABLE ACCESS FULL       | CUSTOMER_COMMUNICATIONS    |   798K|

You can see that the optimizer has created a result set (VIEW) at line 41 by scanning the entire customer_communications table, for a total of about 800,000 rows, aggregating the data by order_id. This is not very efficient because (a) I happen to have a very useful index on the customer_communications table that contains all the data I need, and (b) there are just a few input rows for which I need to find this max(comm_date).
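
One way to see what the optimizer has “lost” here: written as a correlated scalar subquery instead of a factored-out aggregate (a sketch of the alternative query shape, not necessarily the rewrite used in the real system), the max(comm_date) could be fetched through that useful index for just the few driving rows:

select
	...
	(select max(ccm.comm_date)
	 from   customer_communications ccm
	 where  ccm.order_id = ord.order_id) max_comm_date
from
	...
-- each driving row now does a narrow, targeted probe of the index
-- (potentially a min/max scan), instead of the plan building a
-- 798K-row aggregated VIEW from a full scan of the whole table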

Hotsos Symposium 2010 Presentations

I got an email a few days ago asking if I would provide the scripts from my Hotsos Symposium 2010 presentations. I didn’t even realize the presentations had been posted anywhere, but I managed to find them on my company’s website. So anyway, I decided to go ahead and post a link to the PDFs and the scripts here as well. So click on the pretty pictures to get the PDFs and the cleverly titled text links to get the accompanying zip files with the scripts.

Controlling Execution Plans Zip File

My Favorite Scripts 2010 Zip File

Oh and Bob Sneed as “Disco Duck” (Thanks Marco)

Cardinality One

I think anyone who has read Wolfgang Breitling’s material about the optimizer will be familiar with the concept of Cardinality Feedback and one particular detail: when Oracle gets a cardinality estimate of one for a “driving” table, there’s a good chance that the execution plan will go wrong. (That’s not a rule, by the way, just a fairly common observation after things have gone wrong.)

A recent note on OTN reminded me of a particular scenario where this specific problem can occur. It’s not particularly common, but it may hit people who are building data warehouses from multiple different sources. We start with an unlikely-looking data set and a very simple query:

drop table t1;

create table t1 as
select
	rownum id1,
	rownum id2
from
	all_objects
where
	rownum <= 10000
;

execute dbms_stats.gather_table_stats(user,'t1');

set autotrace traceonly

select
	count(*)
from
	t1
where
	id1 = id2
;

What do you think Oracle’s estimated cardinality will be for this predicate? We know, because we saw the data being built, that we’re going to identify 10,000 rows. But the optimizer doesn’t see it that way – check line 2 of the execution plan. The optimizer thinks it will find just one row:
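
(For the curious, the arithmetic behind that estimate is presumably the standard selectivity for a column-to-column comparison, 1/greatest(num_distinct(id1), num_distinct(id2)): with 10,000 distinct values in each column that gives 1/10,000, and 10,000 rows × 1/10,000 = 1 row.)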

Exadata v2 Smart Scan Performance Troubleshooting article

I finally finished my first Exadata performance troubleshooting article.

It explains a bug I hit while stress testing an Exadata v2 box, which caused smart scans to go very slowly – and how I troubleshot it:

Thanks to my secret startup company I’ve been way too busy to write anything serious lately, but apparently staying up until 6am helped this time! :-) Anyway, maybe next weekend I can repeat this and write Part 2 in the Exadata troubleshooting series ;-)

Enjoy! Comments are welcome on this blog entry, as I haven’t figured out a good way to enable comments on the Google Sites page I’m using…

Oracle Support Sanctions Manually Created SQL Profiles!

I originally titled this post: “SQLT - coe_xfr_sql_profile.sql”

Catchy title, huh? (That’s why I changed it.)

I’ve been promoting the use of SQL Profiles as a plan control mechanism for some time. The basic idea is to use the undocumented procedure dbms_sqltune.import_sql_profile to build a set of hints to be applied behind the scenes via a SQL Profile. The hints can be created any way you can think of, but one of my favorite ways to generate them is to pull the hints from the other_xml field of v$sql_plan. This is a technique originally suggested to me by Randolf Geist. I have used this approach several times in the past, but occasionally I’ve had a few doubts as to whether this is a good idea, or even whether SQL Profiles can apply all valid hints (see Jonathan Lewis’s comments on this post, Why Oracle Isn’t Using My Profile, where he expresses some doubts as well - he’s also written a bit about SQL Profiles on his site, as you might imagine).
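
For reference, here is a minimal sketch of the technique under stated assumptions: the &sql_id substitution variable, the profile name and the FULL() hint text are all placeholders, and since dbms_sqltune.import_sql_profile is undocumented, you should verify the behavior on your own version before relying on it:

-- Step 1 (sketch): pull the outline hints out of other_xml in v$sql_plan.
select  extractvalue(value(h), '/hint') outline_hint
from    v$sql_plan p,
        table(xmlsequence(
            extract(xmltype(p.other_xml), '/*/outline_data/hint'))) h
where   p.sql_id = '&sql_id'
and     p.other_xml is not null;

-- Step 2 (sketch): feed hints like those into a manually created SQL Profile.
declare
    l_sql_text clob;
begin
    select sql_fulltext into l_sql_text
    from   v$sql
    where  sql_id = '&sql_id'
    and    child_number = 0;

    dbms_sqltune.import_sql_profile(
        sql_text    => l_sql_text,
        profile     => sqlprof_attr('FULL(@"SEL$1" "T1"@"SEL$1")'),  -- placeholder hint
        name        => 'manual_profile_1',                           -- placeholder name
        force_match => true);
end;
/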

Ignoring Hints

I’ve previously published a couple of notes (here and here) about the driving_site() hint. The first note pointed out that the hint is deliberately ignored if you write a local CTAS or INSERT that runs a remote query. I’ve just found another case where the hint is ignored – this time in a simple SELECT [...]
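
To illustrate the shape of the first, documented case (a sketch only; the table and database link names are made up):

-- The hint can work in a plain SELECT: the remote site drives the query.
select /*+ driving_site(r) */ l.id, r.data
from   local_table l, remote_table@remote_db r
where  r.id = l.id;

-- But in a CTAS (or INSERT ... SELECT) the hint is deliberately ignored:
-- the local site has to drive, since that's where the rows are written.
create table t_copy as
select /*+ driving_site(r) */ l.id, r.data
from   local_table l, remote_table@remote_db r
where  r.id = l.id;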

double trouble

In the latest Quiz Night, I asked how you could make a query more efficient by changing a two-table join into a three-table join – with the clue that my third table was a repeat of the first table. Gary Myers, in comment 4, provided the type of answer I was looking for. Sometimes [...]