
Frequency Histograms – 6

In an earlier post on frequency histograms I described how Oracle creates an approximate histogram when dealing with character columns, and warned you that the strategy could lead to a couple of anomalies if you were unlucky. I’ve already published a note about one such anomaly that can occur with fairly long character strings; this note describes another anomaly that can appear in less extreme cases. Again, we start by constructing a data set.


create table t1  (v1 varchar2(42));

insert	into t1
select	'next day'
from	all_objects
where 	rownum <= 150
;

insert into t1
select	'same day'
from 	all_objects
where 	rownum <= 350
;

insert into t1
select 	'priority high'
from 	all_objects
where 	rownum <= 500
;

begin
	dbms_stats.gather_table_stats(
		ownname		 => user,
		tabname		 =>'t1',
		estimate_percent => 100,
		method_opt 	 => 'for all columns size 254'
	);
end;
/

select
        endpoint_number,
        endpoint_number - nvl(prev_endpoint,0)  frequency,
        hex_val,
        chr(to_number(substr(hex_val, 2,2),'XX')) ||
        chr(to_number(substr(hex_val, 4,2),'XX')) ||
        chr(to_number(substr(hex_val, 6,2),'XX')) ||
        chr(to_number(substr(hex_val, 8,2),'XX')) ||
        chr(to_number(substr(hex_val,10,2),'XX')) ||
        chr(to_number(substr(hex_val,12,2),'XX')),
        endpoint_actual_value
from    (
        select
                endpoint_number,
                lag(endpoint_number,1) over(
                        order by endpoint_number
                )                                                       prev_endpoint,
                to_char(endpoint_value,'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX')hex_val,
                endpoint_actual_value
        from
                user_tab_histograms
        where
                table_name = 'T1'
        and     column_name = 'V1'
        )
order by
        endpoint_number
;

ENDPOINT_NUMBER  FREQUENCY HEX_VAL                         CHR(TO ENDPOINT_ACTUAL_VALUE
--------------- ---------- ------------------------------- ------ ------------------------------------------
            150        150  6E65787420644D6D50DD04A6400000 next d
            650        500  7072696F72698970D6651967200000 priori
           1000        350  73616D6520645C36B7AD479D600000 same d

3 rows selected.

This example captures a histogram at a point when my data set has only three popular values in it, and those values are sufficiently different that their “6-byte encoding” is enough for Oracle to tell the difference between them, so the histogram doesn’t need to use the endpoint_actual_value column.
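
As an aside, you can get a feel for where that 6-byte limit comes from by rebuilding the endpoint value yourself. The query below is only a sketch of my understanding of the mechanism (it isn’t taken from the original article): it treats the first six bytes of ‘next day’ as the high-order bytes of a 15-byte number. It reproduces the first six bytes of the stored hex value; the trailing bytes differ because Oracle appears to round the full value to roughly 15 significant digits.

select
	to_char(
		sum(
			to_number(
				substr(rawtohex(utl_raw.cast_to_raw('next day')), 2 * level - 1, 2),
				'XX'
			) * power(256, 15 - level)		-- byte N scaled into position N of a 15-byte number
		),
		'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
	)	approx_endpoint_hex
from	dual
connect by
	level <= 6					-- only the first six bytes survive the rounding
;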

But I have a problem. During the course of the working day I introduce some data with different status codes, and all the queries I run are about these status codes, but by the end of the day (when, possibly, the stats will be collected again by the automatic stats collection job) all the “interesting” data has been updated to one of the three popular values.

What does Oracle do about queries for my “rare” values? It depends what they look like. Here are a couple of examples from 10.2.0.3:

select
	count(*)
from
	t1
where
	v1 = 'overnight'
;

select
	count(*)
from
	t1
where
	v1 = 'priority low'
;

set autotrace off

Neither of these values is in the histogram, so I would like Oracle to give a low cardinality estimate to them. But one of the values is very similar to a value that DOES exist in the histogram. This leads to an interesting variation in the numbers that appear in the execution plans:

-----------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost  |
-----------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |    11 |     2 |
|   1 |  SORT AGGREGATE    |      |     1 |    11 |       |
|*  2 |   TABLE ACCESS FULL| T1   |     1 |    11 |     2 |
-----------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("V1"='overnight')

Execution Plan
----------------------------------------------------------
Plan hash value: 3724264953

-----------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost  |
-----------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |    11 |     2 |
|   1 |  SORT AGGREGATE    |      |     1 |    11 |       |
|*  2 |   TABLE ACCESS FULL| T1   |   500 |  5500 |     2 |
-----------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("V1"='priority low')

In the first case (‘overnight’) the optimizer has decided that the value doesn’t exist in the histogram. Since this database is running 10.2.0.3 it has set the resulting cardinality to 1 (this would be 75 in 10.2.0.4 or 11g – half the lowest frequency recorded in the histogram). But in the second case (‘priority low’) the optimizer has done its 6-byte manipulation and decided that ‘priority low’ matches ‘priority high’, and given it a cardinality of 500.

If Oracle had seen ‘priority low’ when creating the histogram it would have used the endpoint_actual_value column and generated two separate rows for ‘priority low’ and ‘priority high’. But it is perfectly feasible that because of timing (the interesting data exists only briefly during the day) or sampling (the interesting data is tiny compared to the popular values) you could see Oracle taking a “large data” execution path when you know that you’re only expecting to handle a small data set.

The solution, of course, is to write code to create a histogram that represents the situation as you want Oracle to see it.
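
Just to give the flavour of that approach, here’s a sketch (not a production-ready script – the counts for the two rare codes are invented, and you should check how bkvals and density behave in your version before relying on it) using dbms_stats.prepare_column_values() and set_column_stats() to install a frequency histogram that includes the rare values with small, hand-picked frequencies:

declare
	m_distcnt	number;
	m_density	number;
	m_nullcnt	number;
	m_avgclen	number;
	srec		dbms_stats.statrec;
	c_array		dbms_stats.chararray;
begin
	dbms_stats.get_column_stats(
		ownname		=> user,
		tabname		=> 't1',
		colname		=> 'v1',
		distcnt		=> m_distcnt,
		density		=> m_density,
		nullcnt		=> m_nullcnt,
		srec		=> srec,
		avgclen		=> m_avgclen
	);

	-- the five values I want the optimizer to know about, in sorted order,
	-- with invented small counts for the two "rare" codes
	c_array		:= dbms_stats.chararray(
		'next day', 'overnight', 'priority high', 'priority low', 'same day'
	);
	srec.bkvals	:= dbms_stats.numarray(150, 10, 500, 10, 350);
	srec.epc	:= 5;

	dbms_stats.prepare_column_values(srec, c_array);

	dbms_stats.set_column_stats(
		ownname		=> user,
		tabname		=> 't1',
		colname		=> 'v1',
		distcnt		=> 5,
		density		=> m_density,	-- may need adjusting for your version
		nullcnt		=> 0,
		srec		=> srec,
		avgclen		=> m_avgclen
	);
end;
/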

Good Nulls

I’ve often been heard warning people about the accidents that can happen when they forget the traps that appear once you start allowing columns to be NULL – but sometimes NULLs are good, especially when they help Oracle understand where the important (i.e. not null) data might be.

An interesting example of this came up on OTN a few months ago, where someone was testing the effects of changing a YES/NO column into a YES/NULL column (which is a nice idea because it allows you to create a very small index on the YESes and avoids the need for a histogram to tell the optimizer that the number of YESes is small).
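
To make the idea concrete, here’s a minimal sketch (the table and index are invented for illustration; they’re not from the OTN thread): single-column B-tree indexes hold no entries for rows where the column is NULL, so the index covers only the rare ‘YES’ rows and stays tiny. Whether the optimizer actually chooses it is, of course, the subject of the puzzle below.

create table t2 (
	id		number		not null,
	processed	varchar2(3),			-- 'YES' for the few rows of interest, NULL for the rest
	padding		varchar2(100)
);

create index t2_proc_idx on t2(processed);

-- the index holds entries only for the 'YES' rows, so it stays very small
select	id
from	t2
where	processed = 'YES'
;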

They were a little puzzled, though, about why their tests showed Oracle using an index to find data in the YES/NO case, but not using the index in the YES/NULL case. I supplied a short explanation on the thread, and was planning to post a description on the blog, but someone on the thread supplied a link to AskTom where Tom Kyte had already answered the question, so I’m just going to leave you with a link to his explanation.

Manual Optimisation

Here’s an example of “creative SQL” that I wrote in response to a question on OTN about combining data from two indexes to optimise access to a table. It demonstrates the principle that you can treat an index as a special case of a table – allowing you to make a query go faster by referencing the same table more than once.

Unfortunately you shouldn’t use this particular example in a production system because it relies on the data appearing in the right order without having an “order by” clause. This type of thing makes me really keen to have a hint that says something like: /*+ qb_name(my_driver) assume_ordered(@my_driver) */ so that you could tell the optimizer that it can assume that the rowset from a given query block will appear in the order of the final “order by” clause.
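
If you just want the general shape of the “index as a table” idea, here’s a sketch (not the query from the thread, and the table, column and index names are invented): an inline view that can be satisfied entirely from an index behaves like a small, narrow table, and you can join its rowids back to the base table to pick up the remaining columns.

select
	/*+ no_merge(ix) */
	t3.*
from	(
	select	/*+ index(t3a t3_status_idx) */
		t3a.rowid	rid			-- index-only probe, treated as a narrow "table"
	from	t3	t3a
	where	t3a.status = 'OPEN'
	)	ix,
	t3
where
	t3.rowid = ix.rid				-- revisit the same table by rowid for the other columns
;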

A: The most fundamental difference between hash and nested loop joins

Ok guys, thanks for waiting!

I ended up expanding the article quite a lot compared to what I had originally planned. In fact I only wrote 50% of what I planned to write; I’ll update the rest… um… later… Instead of just stating the difference between the joins, I took a step back and elaborated on something I often see people doing (and talking about in newsgroups and lists too).

Basically the most fundamental (or biggest or most important) difference between nested loop and hash joins is that:

  • Hash joins cannot look up rows from the inner (probed) row source based on values retrieved from the outer (driving) row source; nested loops can.

In other words, when joining tables A and B (A is the driving table, B is the probed table), a nested loop join can take the first row from A and perform a lookup into B using that value (of the column(s) you join by). Then the nested loop takes the next row from A and performs another lookup into table B using the new value. And so on and so on and so on.

This opens up additional access paths to table B. For example, when joining ORDERS and ORDER_ITEMS by ORDER_ID (and ORDER_ID is the leading column of the PK in the ORDER_ITEMS table), then for whatever orders are taken from the ORDERS table we can perform a focused, narrow index range scan on ORDER_ITEMS for every ORDER_ID retrieved from the driving ORDERS table. A hash join can’t do that.
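
As a concrete illustration (a hypothetical query of my own, not from the article – the non-key column names are invented and the hints are only there to force the shape), this is exactly that pattern: drive from ORDERS and do an index range scan into ORDER_ITEMS for each ORDER_ID:

select	/*+ leading(o) use_nl(oi) index(oi) */
	o.order_id,
	oi.product_id,
	oi.quantity
from
	orders		o,
	order_items	oi
where
	o.order_date	>= trunc(sysdate) - 1	-- picks the driving rows from ORDERS
and	oi.order_id	= o.order_id		-- per-row lookup via the index led by ORDER_ID
;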

Of course this doesn’t mean that hash joins can’t use any indexes for the tables they read – index range scans and unique lookups can still be used under a hash join, but only if there are constant values in the query text (in the form of literals or bind variables). If there are no such constant (filter) conditions under a hash join, then the only other ways to use an index are an INDEX FULL SCAN (which is a range scan from one end of the index to the other) or an INDEX FAST FULL SCAN (which is like a full table scan through the entire index segment). However, none of these options gives the same benefit as a nested loop looking up rows from row source B dynamically, based on what was retrieved from A at runtime.
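
And the flip side, again as a hypothetical sketch (the CREATED column is invented): if you force a hash join, the only way an index range scan on ORDER_ITEMS can help is through a constant filter in the query text – the join condition itself can no longer drive per-row lookups:

select	/*+ leading(o) use_hash(oi) */
	o.order_id,
	oi.product_id
from
	orders		o,
	order_items	oi
where
	o.order_date	>= trunc(sysdate) - 1
and	oi.created	>= trunc(sysdate) - 1	-- constant filter: an index on CREATED could be used here
and	oi.order_id	= o.order_id		-- but this condition can't drive index lookups under a hash join
;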

Note that this nested loops benefit isn’t limited to indexes on table B; the difference is more fundamental than just a specific access path. For example, if table B happens to be a single-table hash cluster or an indexed X$ table, then the nested loop is also able to do “optimized” lookups from these row sources, based on the values retrieved from table A.

So, my article with a lot of (loosely) related details is here:

In the comments section of my question, Tom, Bernard Polarski, Christian Antognini and Marc Musette got the closest to what I had in mind when I asked the question. However, of course, your mileage may vary somewhat depending on what kind of problems you have experienced the most over the years. Also, Jonathan Lewis had a valid comment that the answer depends on what exactly one means by “fundamental” – and yeah, this was open to interpretation.

Nevertheless, I wanted to emphasize that there’s a more important difference between NL and hash joins than the usual stuff you see in training material, which talks about implementation details like hash tables and memory allocation…

Some day I will complete that article; I plan to add some design advice, like denormalization opportunities for getting the best of both worlds, etc. But now I’m gonna get a beer instead.

Thanks for reading my blog and answering – I was quite impressed by the volume of comments & answers to my question. I must do this more often!


New online seminars – Advanced Oracle Troubleshooting v2.0 Deep Dives

As I mentioned in a previous post, I won’t be doing much flying anymore, so I’m changing all my seminar offerings to online seminars.

So, I’ve changed and re-arranged my seminar content into self-contained 4-hour deep dives, and thanks to the online format (no travel needed) people can choose which days they want to attend. If you’re interested in latch contention only, you can attend just the Latch Contention deep dive, for example – or you can still attend all the deep dives. The cool thing is that each deep dive takes only half a day (and is priced accordingly), so you don’t have to skip work for a whole day (or week) and can still get some of your daily work done. Hopefully that makes your life a bit easier when getting approval to attend the sessions.

As the main feedback from my seminars has been that there’s “too much to learn” within the short 2-3 days I used to run them in, I have arranged the material so that there is more time to go deep into each subject area. I have also planned plenty of time for questions & answers (1 hour out of the 4 is scheduled Q&A, and attendees can ask questions at any time during the lectures & demos).

It looks like I will only offer my Advanced Oracle Troubleshooting v2.0 class online this year. I will probably schedule my Advanced SQL Tuning deep dives in January/February 2011 and the Advanced Troubleshooting class again in March/April and so on (until I go public with my other business, when I won’t have time for full length training anymore).

You can check the current schedule and pricing out here:

Here’s a brief outline of individual half-day Deep Dives I offer:

  1. AOT deep dive 1: Systematic approach for Advanced Oracle Troubleshooting
  2. AOT deep dive 2: Troubleshooting physical IO and buffer cache issues
  3. AOT deep dive 3: Troubleshooting commit, redo, undo and transaction issues
  4. AOT deep dive 4: Troubleshooting Oracle SGA/PGA/UGA and OS memory issues
  5. AOT deep dive 5: Troubleshooting shared pool and library cache issues
  6. AOT deep dive 6: Troubleshooting enqueue lock waits and deadlocks
  7. AOT deep dive 7: Troubleshooting latch contention
  8. AOT deep dive 8: Troubleshooting Mutex and “cursor: pin” contention
  9. AOT deep dive 9: Troubleshooting complex hangs and spins
  10. AOT deep dive 10: Troubleshooting crashes, bugs and ORA-600/ORA-7445 errors

So, sign up now, seats are limited ;-)


Index degeneration

There’s a thread on OTN that talks about a particular deletion job taking increasing amounts of time each time it is run.

It looks like an example where some thought needs to go into index maintenance, and I’ve contributed a few comments to the thread – this is a lazy link so that I don’t have to repeat myself on the blog.

Subquery Factoring (4)

I’ve written before about the effects of subquery factoring (common table expressions – or CTEs) on the optimizer, and the way that the optimizer can “lose” some strategies when you start factoring out subquery expressions. Here’s another example I came across quite recently. It involved a join of about 15 tables so I’ve only extracted a few lines from the SQL and resulting execution plans.

We start with the original query, which had factored out an aggregate subquery then used it in place of an inline view:

with max_cust_comm as (
	select
		ccm.order_id,
		max(ccm.comm_date)	max_comm_date
	from
		customer_communications ccm
	group by
		ccm.order_id
)
select
	...
from
	...
left join
	max_cust_comm	mcc
on	mcc.order_id = ord.order_id
...

The execution path for this query included the following lines:

|   6 |     HASH JOIN OUTER            |                            |     1 |
|   7 |      NESTED LOOP               |                            |     1 |
               ...
|  41 |      VIEW                      |                            |   798K|
|  42 |       HASH GROUP BY            |                            |   798K|
|  43 |        TABLE ACCESS FULL       | CUSTOMER_COMMUNICATIONS    |   798K|

You can see that the optimizer has created a result set (VIEW) at line 41 by scanning the entire customer_communications table, for a total of about 800,000 rows, aggregating the data by order_id. This is not very efficient because (a) I happen to have a very useful index on the customer_communications table that contains all the data I need, and (b) there are just a few input rows for which I need to find this max(comm_date).
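
To show the shape I’m after (this is only a sketch of the principle, not the rewrite from the original 15-table query, and the driving table and its filter are invented): a correlated subquery against an index on (order_id, comm_date) would let Oracle do a MIN/MAX index range scan for each of the few driving rows instead of aggregating all 798,000 rows.

select
	ord.order_id,
	(
		select	max(ccm.comm_date)
		from	customer_communications	ccm
		where	ccm.order_id = ord.order_id	-- one focused index probe per driving row
	)	max_comm_date
from
	orders	ord
where
	ord.order_date >= trunc(sysdate)		-- hypothetical predicate limiting the driving rows
;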

Hotsos Symposium 2010 Presentations

I got an email a few days ago asking if I would provide the scripts from my Hotsos Symposium 2010 presentations. I didn’t even realize the presentations had been posted anywhere, but I managed to find them on my company’s website. So anyway, I decided to go ahead and post a link to the PDFs and the scripts here as well. Click on the pretty pictures to get the PDFs and the cleverly titled text links to get the accompanying zip files with the scripts.

Controlling Execution Plans Zip File

My Favorite Scripts 2010 Zip File

Oh and Bob Sneed as “Disco Duck” (Thanks Marco)

Cardinality One

I think anyone who has read Wolfgang Breitling’s material about the optimizer will be familiar with the concept of Cardinality Feedback, and with one particular detail: when Oracle gets a cardinality estimate of one for a “driving” table there’s a good chance that the execution plan will go wrong. (That’s not a rule, by the way, just a fairly common observation after things have gone wrong.)

A recent note on OTN reminded me of a particular scenario where this specific problem can occur. It’s not particularly common, but it may hit people who are building data warehouses from multiple different sources. We start with an unlikely looking data set and very simple query:

drop table t1;

create table t1 as
select
	rownum id1,
	rownum id2
from
	all_objects
where
	rownum <= 10000
;

execute dbms_stats.gather_table_stats(user,'t1');

set autotrace traceonly

select
	count(*)
from
	t1
where
	id1 = id2
;

What do you think Oracle’s estimated cardinality will be for this predicate? We know, because we saw the data being built, that we’re going to identify 10,000 rows. But the optimizer doesn’t see it that way – check line 2 of the execution plan. The optimizer thinks it will find just one row.
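
The plan itself isn’t reproduced in this extract, but the arithmetic behind that single-row estimate is easy to sketch (this is my reading of the standard treatment of a column = column predicate, so treat it as an approximation): the optimizer works from the number of distinct values in each column rather than from any correlation between them, so with 10,000 distinct values the selectivity comes out at 1/10,000, and 10,000 rows * 1/10,000 = 1.

-- the column-level statistics the estimate is based on
select
	column_name, num_distinct, density
from
	user_tab_col_statistics
where
	table_name  = 'T1'
and	column_name in ('ID1', 'ID2')
;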

Exadata v2 Smart Scan Performance Troubleshooting article

I finally finished my first Exadata performance troubleshooting article.

This explains one bug I hit when stress-testing an Exadata v2 box, which caused smart scan to go very slow – and how I troubleshot it:

Thanks to my secret startup company I’ve been way too busy to write anything serious lately, but apparently staying up until 6am helped this time! :-) Anyway, maybe next weekend I can repeat this and write Part 2 in the Exadata troubleshooting series ;-)

Enjoy! Comments are welcome on this blog entry, as I haven’t figured out a good way to enable comments on the Google Sites page I’m using…
