CBOddity

Warning – make sure you read to the end of this post.

Someone sent me an email this morning asking how Oracle calculates the index cardinality of an index range scan. Now, as I’ve often said, I don’t do private email about Oracle – unless it just happens to catch my attention and looks like something that is sufficiently generic to be worth publishing.

Today’s emailer was a lucky winner – he’d sent me a very short email that took about 30 seconds to read, contained a significant error, and (at first sight) probably had the right information in it for me to work with. Here’s the problem, as a cut-n-paste from an SQL*Plus session:

SQL> explain plan for select * from admlxa.QRT_BENCH where QRT_BENCH_DATE < :a3;
Explained. 

Elapsed: 00:00:00.01 

SQL> select * from table(dbms_xplan.display); 

PLAN_TABLE_OUTPUT
-----------------
Plan hash value: 2896103184 

-----------------------------------------------------------------------------------------------
| Id  | Operation                   | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                 |   424K|    81M|  3170   (4)| 00:00:05 |
|   1 |  TABLE ACCESS BY INDEX ROWID| QRT_BENCH       |   424K|    81M|  3170   (4)| 00:00:05 |
|*  2 |   INDEX RANGE SCAN          | IDX_QRT_BENCH_1 | 78876 |       |   303   (5)| 00:00:01 |
----------------------------------------------------------------------------------------------- 

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("QRT_BENCH_DATE"<:A3) 

Look closely at the “Rows” column – there’s clearly a logic error appearing here. If you select 78,876 rowids from an index you can’t possibly acquire 424,000 rows from the table – so where have those two numbers come from?

The supporting information looked like this:

num_rows for QRT_BENCH table = 8480798
num_distinct for QRT_BENCH_DATE column = 458 

num_rows for IDX_QRT_BENCH_1 = 8763975
distinct_keys for IDX_QRT_BENCH_1 = 537

Of course, I really needed to know whether this was a single-column or multi-column index if I wanted to model the problem correctly and do further checks on when the error appeared, but this was good enough to get started. We note, first, that the 424K for the table cardinality is just the standard “5% due to range predicate with unknown value”: round(0.05 * 8480798) = 424040.

    Step 1: since I don’t know where the number 78876 comes from, let’s try to work backwards – use it in a bit of arithmetic and see what drops out. Let’s try dividing it into the table cardinality: 424040 / 78876 = 5.3760

    Step 2: Does 5.376 look familiar? It may be a coincidence, but isn’t it really close to 1% of the number of distinct keys in the index?

At this point I don’t have any time to investigate in detail, but a scratch hypothesis is that Oracle is calculating something like: 5 * (number of rows in table / number of distinct keys in index); and maybe that magic five appears through a piece of code that takes 5%, but for some reason then divides by the 1% selectivity normally associated with function(col).

If I had the time (and the data set) I’d start playing with dbms_stats.set_index_stats() et al. to check the effect of changing num_distinct and num_rows and see if my initial guess was somewhere in the right ballpark. As it is I’ve just emailed this note back to the source.

Erratum:

Before I published this note I got a reply from the original correspondent, with the following comment:

Sure enough – five years ago I had published some details about exactly this “feature” – and that 537 vs. 5.376 really was just a coincidence.

I decided to publish the note anyway for three reasons –

    one: to make sure you realise that I do make mistakes
    two: to show you that simple games with numbers may give you a working hypothesis
    three: to remind you that once you’ve got a working hypothesis it’s often easy to think of ways to demonstrate that your hypothesis is wrong. (A couple of hacks of the statistics would have shown me a constant 0.009 appearing, rather than anything like 5% divided by 1%.)
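
If you want to check the arithmetic, here’s a quick sketch using the statistics quoted above – the 0.009 constant is the one mentioned in point three; the 5%/1% version was my working hypothesis:

select round(5 * 8480798 / 537)  hypothesis from dual;	-- 78965: close to 78876, but not exact
select round(0.009 * 8763975)    feature    from dual;	-- 78876: matches the plan exactly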

Upgrade issues

Here’s an example of how a bug-fix can create problems. It’s a code change in 11.2.x.x and (I believe) 10.2.0.5 relating to the costing of queries involving (but perhaps not restricted to) composite partitioned tables. I first saw this change in an email from Doug Burns, who sent me the 10053 traces from a very simple query that had started using the wrong index after an upgrade from 10.2.0.4 to 11.2.0.2.

As part of his testing he had set the optimizer_features_enable parameter back to 10.2.0.4 and found that not only did the choice of index change back to the expected index, but the costs of the two indexes changed dramatically. (The cost of using the “right” index changed from 15 to something in excess of 9,000 on the upgrade!)

The example I’m going to show you demonstrates the principles of the cost change – but with the data set and statistics I’ve generated you won’t see a change of execution plan. This is typical of some of the models I create – it’s enough to establish a principle, after which it’s simple enough to recognise the problems that the principle can cause.

So here’s a data set. I created it in a default 11.2.0.2 install on Windows 32-bit, so it’s running with 8KB blocks, autoallocate extents, and ASSM – but I’ve disabled CPU costing:

create table t1(
	list_col	number,
	range_col	number,
	v100		varchar2(10),
	v10		varchar2(10),
	vid		varchar2(10),
	padding		varchar2(100)
)
partition by range(range_col)
subpartition by list (list_col)
subpartition template (
	subpartition s0		values(0),
	subpartition s1		values(1),
	subpartition s2		values(2),
	subpartition s3		values(3),
	subpartition s4		values(4),
	subpartition s5		values(5),
	subpartition s6		values(6),
	subpartition s7		values(7),
	subpartition s8		values(8),
	subpartition s9		values(9)
)
(
	partition p_10 values less than (10),
	partition p_20 values less than (20),
	partition p_30 values less than (30),
	partition p_40 values less than (40),
	partition p_50 values less than (50),
	partition p_60 values less than (60),
	partition p_70 values less than (70),
	partition p_80 values less than (80),
	partition p_90 values less than (90),
	partition p_100 values less than (100)
)
;

insert into t1
with generator as (
	select	--+ materialize
		rownum id
	from dual
	connect by
		rownum <= 10000
)
select
	mod(rownum,10),
	mod(rownum,100),
	lpad(mod(rownum,10),10),
	lpad(mod(rownum,100),10),
	lpad(rownum,10),
	rpad('x',100)
from
	generator	v1,
	generator	v2
where
	rownum <= 1000000
;

alter table t1 add constraint t1_pk primary key(v10, vid, v100, range_col, list_col) using index local;

create index t1_one_col on t1(v100) local;

begin
	dbms_stats.gather_table_stats(
		ownname		=> user,
		tabname		=> 'T1',
		method_opt	=> 'for all columns size 1'
	);
end;
/

You’ll notice that one of the indexes I’ve created has only one column, while the other has five columns (including the single column of the smaller index). Since I’ve created only 1,000,000 rows spread across 100 partitions every partition is small and the corresponding index partitions are also small, so I’m going to use dbms_stats.get_index_stats() and dbms_stats.set_index_stats() to make the indexes appear much larger. Specifically I will set the blevel on the single-column index to 2, and the blevel on the multi-column index to 3. (This difference in settings isn’t just whimsy, it helps to emulate Doug’s problem.) I’ve previously published the type of code to make this possible; in this case I set only the table-level (global) index stats because the queries I’m going to use will make Oracle use just those stats.
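
I won’t repeat the whole script, but it’s essentially a get/set pair; a minimal sketch (the variable names are mine, and you’d repeat the exercise with indlevel = 3 for T1_PK) looks like this:

declare
	m_numrows	number;
	m_numlblks	number;
	m_numdist	number;
	m_avglblk	number;
	m_avgdblk	number;
	m_clstfct	number;
	m_indlevel	number;
begin
	-- read the current global stats for the single-column index
	dbms_stats.get_index_stats(
		ownname		=> user,
		indname		=> 'T1_ONE_COL',
		numrows		=> m_numrows,
		numlblks	=> m_numlblks,
		numdist		=> m_numdist,
		avglblk		=> m_avglblk,
		avgdblk		=> m_avgdblk,
		clstfct		=> m_clstfct,
		indlevel	=> m_indlevel
	);

	m_indlevel := 2;		-- fake the blevel

	-- write the (otherwise unchanged) stats back
	dbms_stats.set_index_stats(
		ownname		=> user,
		indname		=> 'T1_ONE_COL',
		numrows		=> m_numrows,
		numlblks	=> m_numlblks,
		numdist		=> m_numdist,
		avglblk		=> m_avglblk,
		avgdblk		=> m_avgdblk,
		clstfct		=> m_clstfct,
		indlevel	=> m_indlevel
	);
end;
/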

select
	/*+ index(t1(v100)) */
	count(*)
from
	t1
where
	v10 = '0000000005'
and	v100 = '0000000005'
;

select
	/*+ index(t1(v10, vid, v100)) */
	count(*)
from
	t1
where
	v10 = '0000000005'
and	v100 = '0000000005'
;

I’m going to show you two sets of execution plans. The first is where I’ve set optimizer_features_enable to ’10.2.0.4′, the second is where I’ve left it at its default.

10.2.0.4 execution plans:
===================
---------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name       | Rows  | Bytes | Cost  | Pstart| Pstop |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |            |     1 |    22 |  2350 |       |       |
|   1 |  SORT AGGREGATE                      |            |     1 |    22 |       |       |       |
|   2 |   PARTITION RANGE ALL                |            |  1000 | 22000 |  2350 |     1 |    10 |
|   3 |    PARTITION LIST ALL                |            |  1000 | 22000 |  2350 |     1 |    10 |
|*  4 |     TABLE ACCESS BY LOCAL INDEX ROWID| T1         |  1000 | 22000 |  2350 |     1 |   100 |
|*  5 |      INDEX RANGE SCAN                | T1_ONE_COL |   100K|       |   330 |     1 |   100 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter("V10"='0000000005')
   5 - access("V100"='0000000005')

------------------------------------------------------------------------------
| Id  | Operation            | Name  | Rows  | Bytes | Cost  | Pstart| Pstop |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |       |     1 |    22 |   100 |       |       |
|   1 |  SORT AGGREGATE      |       |     1 |    22 |       |       |       |
|   2 |   PARTITION RANGE ALL|       |  1000 | 22000 |   100 |     1 |    10 |
|   3 |    PARTITION LIST ALL|       |  1000 | 22000 |   100 |     1 |    10 |
|*  4 |     INDEX RANGE SCAN | T1_PK |  1000 | 22000 |   100 |     1 |   100 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("V10"='0000000005' AND "V100"='0000000005')
       filter("V100"='0000000005')

11.2.0.2 execution plans:
===================
Execution Plan
----------------------------------------------------------
Plan hash value: 3019183742

---------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name       | Rows  | Bytes | Cost  | Pstart| Pstop |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |            |     1 |    22 |  2530 |       |       |
|   1 |  SORT AGGREGATE                      |            |     1 |    22 |       |       |       |
|   2 |   PARTITION RANGE ALL                |            |  1000 | 22000 |  2530 |     1 |    10 |
|   3 |    PARTITION LIST ALL                |            |  1000 | 22000 |  2530 |     1 |    10 |
|*  4 |     TABLE ACCESS BY LOCAL INDEX ROWID| T1         |  1000 | 22000 |  2530 |     1 |   100 |
|*  5 |      INDEX RANGE SCAN                | T1_ONE_COL |   100K|       |   510 |     1 |   100 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter("V10"='0000000005')
   5 - access("V100"='0000000005')

------------------------------------------------------------------------------
| Id  | Operation            | Name  | Rows  | Bytes | Cost  | Pstart| Pstop |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |       |     1 |    22 |   370 |       |       |
|   1 |  SORT AGGREGATE      |       |     1 |    22 |       |       |       |
|   2 |   PARTITION RANGE ALL|       |  1000 | 22000 |   370 |     1 |    10 |
|   3 |    PARTITION LIST ALL|       |  1000 | 22000 |   370 |     1 |    10 |
|*  4 |     INDEX RANGE SCAN | T1_PK |  1000 | 22000 |   370 |     1 |   100 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("V10"='0000000005' AND "V100"='0000000005')
       filter("V100"='0000000005')

Regardless of the setting of optimizer_features_enable, if I hint the single-column index I have a plan that visits each partition and subpartition in turn for the index then table; and if I hint the primary key I do that same walk through each partition and subpartition and visit just the index.

The big difference is in the cost. By default, both execution plans cost more in 11.2.0.2 – but the odd thing is that the two costs have increased by different amounts. The cost of using the single-column index has gone up by 180, the cost of using the primary key index has gone up by 270; and the change in costs can be attributed entirely to the indexes.

Is there a pattern in the change? Yes; it’s not immediately obvious, of course, and needs a few extra tests to confirm it, but the change in cost can be calculated as: (number of range partitions * (number of list partitions – 1) * blevel of index).

    For the single column index this is: 10 * 9 * 2 = 180
    For the primary key index this is: 10 * 9 * 3 = 270

Once you’ve seen this change in arithmetic you don’t need to see an example of a plan actually changing. In certain circumstances the cost of using composite partitions goes up when you upgrade to 11.2 – and the change in cost is proportional to the number of data segments (subpartitions) and the blevel of the indexes.

In Doug’s case he had a table with roughly 4,000 subpartitions in total – and he had a single-column index with a blevel of 2, and a multi-column index with a blevel of 3: so the cost of using the (smaller, but less efficient) index went up by about 8,000 and the cost of using the (larger, but more efficient) index went up by about 12,000. The difference of 4,000 between these two increases was far higher than the original cost of using the bad index – so Oracle switched from the good index to the bad index.

The nasty thing about this problem is that it’s correcting an error – 11.2 is allowing for the cost of probing every single index subpartition; that’s what the extra multiples of the blevel represent. Unfortunately the optimizer’s underlying paradigm of “every block visit is a disk I/O” makes this correction a threat.

Solutions and Workarounds

There is an /*+ optimizer_features_enable(‘x.x.x.x’) */ hint that could be used if the problem applies to just a couple of queries.

If the problem applies to classes of queries involving a couple of big tables and indexes you could use dbms_stats.set_index_stats() to adjust the blevel of the critical indexes.

If the problem appears all over the place you could set the optimizer_features_enable parameter to ’10.2.0.4′ in the parameter file, or in a session logon trigger.
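
A sketch of the logon trigger approach (the trigger name is arbitrary, and in a real system you would probably restrict it to the relevant users):

create or replace trigger t_set_ofe
after logon on database
begin
	execute immediate
		q'{alter session set optimizer_features_enable = '10.2.0.4'}';
end;
/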

If the problem appears all over the place, but there are other features of the 11.2.0.2 optimizer that are very helpful you could take advantage of the “_fix_control” parameter – after talking to Oracle support. This particular problem comes under bug fix 7132684, which appears in v$system_fix_control under 10.2.0.5, with the description “Fix costing for non prefixed local index”. If you’ve got the right symptoms, then the following statement will bypass the problem:

alter session set "_fix_control"='7132684:OFF';
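
You can check whether the fix is present, and how it’s currently set, before you switch it off – something like:

select
	bugno, value, description, optimizer_feature_enable
from
	v$system_fix_control
where
	bugno = 7132684
;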

Final Warning:

This note comes from 30 minutes looking at Doug’s trace files, one hour building a test case, and an hour spent writing this blog. This is not a complete determination of the problem – it’s just an initial outline of what can happen and the symptoms to watch out for, posted as an early warning to help a few people save some time. Do not treat this note as a definitive reference.

Bug

There’s a note on the OTN database forum that’s worth reading if you’re puzzled by some of the Statspack or AWR results in 11g. The author is talking about 11.2.0.2 and walks through a couple of inconsistent reports from Statspack – but this prompted me to take a look at some underlying details using version 11.1.0.6 – and I think there’s a bug (or change) in the way that enqueues (locks) are handled that can make both statspack and AWR reports “lose time”.

Essentially the following sequence of events is possible:

    Session A starts waiting on a TM lock.
    After a few seconds v$system_event records a wait for “enq: TM – contention”.
    AWR (or Statspack) takes a snapshot (which includes v$system_event).
    Some time later session A acquires the TM lock – the total_waits for this event does not get incremented in v$system_event.
    AWR (or Statspack) takes the next snapshot.

When you run the AWR/Statspack report across the interval, the “Top N Timed Events” and the other sections on wait events will show time waited only if the total_waits for the event has changed in the interval. But we incremented the count before the first snapshot, and haven’t incremented it since – so the time we spent waiting in this interval disappears. (The time we spent waiting in the previous interval will be in the report for the previous interval, of course.)
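
If you suspect you’re hitting this, one cheap cross-check is to sample the cumulative figures yourself across the interval – if total_waits stays constant while time_waited keeps climbing you’re probably seeing the effect. A sketch:

select
	event, total_waits, time_waited		-- time_waited is in centiseconds
from
	v$system_event
where
	event = 'enq: TM - contention'
;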

Note – this is not the same as the inconsistencies that used to appear in statspack with statement-level errors, where the total resource usage across multiple periods would be reported against the snapshot where the statement completed. In those cases we finally saw the total resource usage even though it was summed against one interval; in this case we lose track of any time spent waiting after the first snapshot.

Of course the problem is not a total whitewash. The time may appear in other parts of the report – for example it will show up in the “DB Time” total – and the results will only be seriously deceptive for waits that are (a) long and (b) so unluckily synchronised that an interval with a lot of wait time in it doesn’t have a wait event start in the interval and increment the total_waits. The problem may, therefore, only appear as a slight inconsistency in most systems.

Footnote: I haven’t spent much time looking at this, so I don’t know which versions are affected, nor which enqueues the new mechanism applies to (it may apply only to the “TM” enqueue, as other waits tend to time out and restart every few seconds). This note is just an early warning to help you recognise the situation if you ever come across it.

Fix Control

There’s a very useful posting from Coskan Gundogar about tracking down a problem to do with an 11g upgrade.

The method basically revolves around a quick check for “known issues” that might be relevant, by looking at the dynamic performance view v$system_fix_control.

When I read Coskan’s notes I had forgotten that I had written a short item about this myself about a year ago, where I listed the relatively small number of items available in 10.2.0.3. The list is up to 551 in my copy of 11.2.0.2.
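
Checking the size of the list on your own version – and spotting anything that has been changed from its default – is a quick sketch (I believe the is_default column is available in 11g):

select count(*) from v$system_fix_control;

select	bugno, value, description
from	v$system_fix_control
where	is_default = 0
;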


Speaking engagements 2011

Despite my attempts not to travel (much) anymore, I’m still going to speak at two events in the first half of 2011.

The first event is at the Suncoast Oracle User Group meeting in Tampa, FL on Thursday, 27th January. I’ll speak about Advanced Oracle Troubleshooting in 60 minutes.

I will deliver the second presentation at Hotsos Symposium 2011 near Dallas, TX on Wednesday 9th March. I will talk about Troubleshooting the Most Complex Oracle Performance Problem I’ve ever seen. If you think that this performance problem was complex, then be prepared to see an even worse one there!


ASSM Again

While checking my backlog of drafts (currently 75 articles in note form) I came across this one from August 2009 and was a little upset that I hadn’t finished it sooner – it’s a nice example of geek stuff that has the benefit of being useful.

From the Oracle newsgroup comp.databases.oracle.server, here’s an example of how to recreate a performance problem due to maintenance on ASSM bitmaps in 10.2.0.4.

Create a table in a locally managed tablespace with an 8KB block size, uniform 1MB extents, and automatic segment space management (ASSM). Check the newsgroup thread if you want complete details on reproducing the test:

Session 1: Insert 100,000 rows of about 490 bytes into a table using a PL/SQL loop, and commit at the end.
Session 1: Insert 1,000 rows into the table with single SQL inserts and no commits.
Session 1: Delete all the data from the table with a single statement – but do not commit.

Session 2: Insert 1,000 rows into the table with single SQL inserts and no commits – it’s very slow. (A sketch of the whole sequence appears below.)
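
A sketch of the sequence as a script – the table name, tablespace and exact row size here are illustrative, not the thread’s exact code:

rem	Session 1
create table t_assm (padding varchar2(490));	-- in an 8KB ASSM tablespace

begin
	for i in 1..100000 loop
		insert into t_assm values (rpad('x',480));	-- ~490 byte rows
	end loop;
	commit;
end;
/

insert into t_assm values (rpad('x',480));	-- repeat 1,000 times, no commit

delete from t_assm;		-- do NOT commit

rem	Session 2 - repeat 1,000 times: very slow
insert into t_assm values (rpad('x',480));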

As one person on the thread pointed out – it looks as if Oracle is doing a full tablescan of the table, one block at a time, showing “db file sequential read” waits all the way through the table. (If your db_cache_size is large enough you might not see this symptom.)

I simplified the test – inserting just 100,000 of the rows (with the commit), then deleting them all (without the commit), then inserting one row from another session. Taking a snapshot of x$kcbsw and x$kcbwh, I got the following figures for the activity that took place inserting that one extra row (this was on Oracle 10.2.0.3):

---------------------------------
Buffer Cache - 23-Dec 11:50:36
Interval:-  0 seconds
---------------------------------
          Why0          Why1          Why2    Other Wait
          ----          ----          ----    ----------
         1,457             0             0             0 ktspfwh10: ktspscan_bmb
             8             0             0             0 ktspswh12: ktspffc
             1             0             0             0 ktsphwh39: ktspisc
         7,061             0             0             0 ktspbwh1: ktspfsrch
             1             0             0             0 ktuwh01: ktugus
         7,060             0             0             0 ktuwh05: ktugct
             1             0             0             0 ktuwh09: ktugfb
             2             0             0             0 kdswh02: kdsgrp
             2             0             0             0 kdiwh06: kdifbk
             2             0             0             0 kdiwh07: kdifbk
          ----          ----          ----    ----------
        15,595             0             0             0 Total: 10 rows

The figures tell us how much work Oracle has to do to find a table block that could accept a new row. The idea is simple – Oracle checks the first “level 3” bitmap block (which is actually part of the segment header block) to find a pointer to the current “level 2” bitmap block; it checks this level 2 bitmap block to find a pointer to the current “level 1” bitmap block; and finally it checks the “level 1” bitmap block to find a pointer to a data block that shows some free space.

Unfortunately every block in our table appears to be empty – but the space will only really be free once session 1 commits. In this version of Oracle the blocks are all visible as “x% free” in the level 1 bitmaps – but when Oracle visits each block (“ktspbwh1: ktspfsrch”) it checks the ITL entry, which points it to the transaction table slot in the related undo segment header block to Get the Commit Time for the transaction (“ktuwh05: ktugct”), and finds that the transaction is not committed, so the space is not really free. So Oracle has to visit the next block shown as free in the bitmap.

In our “bulk delete / no commit” case, we end up visiting every (or nearly every) block in the entire table before we find a block we can actually use – and, given the nature of the ASSM bitmap implementation, the order of the block visits is the “natural” table order, so we see something that looks like a full tablescan operating through single block reads (apart, perhaps, from a few blocks that are still cached).

I can’t explain why we do 1,457 visits to bitmap blocks (“ktspfwh10: ktspscan_bmb”) in this version of Oracle, but perhaps it’s simply an indication that Oracle picks (say) five “free block” entries from the bitmap block each time it visits it, and therefore has to visit each bitmap block about 12 times if it doesn’t find a data block with space that really is free in its search.

Note – these results will be hugely dependent on the version of Oracle. In an earlier version the bitmap blocks were not updated by the delete until some time after the commit – and this variation of delayed block cleanout produced other unpleasant anomalies; and, just to make life difficult in later versions of Oracle, the x$kcbsw / x$kcbwh objects are not populated properly in 11g.

Footnote: In case you hadn’t realised, ASSM is a mechanism aimed at OLTP systems with a reasonable degree of concurrency – so it’s not too surprising that you can find problems and weak spots if you hit it with processing which is biased towards the DW and batch processing end of the spectrum.

Subquery Selectivity – 2

Here’s an item I thought I’d published a few years ago as a follow-up to an article on a 10g bug-fix for subquery selectivity. I was reminded of my oversight when a question came up on OTN that looked like an example of the bug introduced by the bug-fix – and I found that I couldn’t simply supply a link to explain the problem.

We start with some simple SQL to create a test data set:

create table t1
as
with generator as (
	select	--+ materialize
		rownum 	id
	from	all_objects
	where	rownum <= 3000
)
select
	mod(rownum,729)		id1,
	rownum			id2,
	mod(rownum,11)		n1,
	lpad(rownum,10,'0')	small_vc,
	rpad('x',100)		padding
from
	generator	v1,
	generator	v2
where
	rownum <= 1000000
;

alter table t1 add constraint t1_pk primary key(id1, id2);

create table t2
as
with generator as (
	select	--+ materialize
		rownum 	id
	from	all_objects
	where	rownum <= 3000
)
select
	mod(rownum,737)		id1,
	trunc((rownum-1)/737)	id2,
	mod(rownum,11)		n1,
	lpad(rownum,10,'0')	small_vc,
	rpad('x',100)		padding
from
	generator	v1,
	generator	v2
where
	rownum <= 1000000
;

alter table t2 add constraint t2_pk primary key(id1, id2);

-- gather statistics, compute, no histograms

Having created these data sets, we can run the following query against them. It’s a very simple query with an aggregate subquery. I’ve included a no_unnest hint because many newer versions of Oracle would do a cost-based transformation on something this simple and unnest the subquery. I want to keep the code and numbers simple while demonstrating a principle. (In the earlier article I used a two-table join with two existence subqueries to make a point about how a change in arithmetic could cause a change in execution plan; in this article I’m just going to show you the change in arithmetic.)


select
	*
from
	t1
where
	1 = (
		select
			/*+ no_unnest */
			max(n1)
		from
			t2
		where
			t2.id1 = t1.id1
	)
;

Here are the execution plans from 9.2.0.8 and 11.1.0.6 respectively (the 10.2 plan matches the 11g plan). For consistency I am not using CPU costing (system statistics) in either case:


Execution Plan from 9.2.0.8
-----------------------------------------------------------------------------
| Id  | Operation                     |  Name       | Rows  | Bytes | Cost  |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |             | 10000 |  1201K|    13M|
|*  1 |  FILTER                       |             |       |       |       |
|   2 |   TABLE ACCESS FULL           | T1          | 10000 |  1201K|  2712 |
|   3 |   SORT AGGREGATE              |             |     1 |     7 |       |
|   4 |    TABLE ACCESS BY INDEX ROWID| T2          |  1357 |  9499 |  1363 |
|*  5 |     INDEX RANGE SCAN          | T2_PK       |  1357 |       |     6 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter( (SELECT /*+ NO_UNNEST */ MAX("T2"."N1") FROM "T2" "T2" WHERE
              "T2"."ID1"=:B1)=1)
   5 - access("T2"."ID1"=:B1)

Execution Plan from 11.1.0.6 (matches plan from 10.2.0.3)
-----------------------------------------------------------------------
| Id  | Operation                     | Name  | Rows  | Bytes | Cost  |
-----------------------------------------------------------------------
|   0 | SELECT STATEMENT              |       |    14 |  1722 |   996K|
|*  1 |  FILTER                       |       |       |       |       |
|   2 |   TABLE ACCESS FULL           | T1    |  1000K|   117M|  2712 |
|   3 |   SORT AGGREGATE              |       |     1 |     7 |       |
|   4 |    TABLE ACCESS BY INDEX ROWID| T2    |  1357 |  9499 |  1363 |
|*  5 |     INDEX RANGE SCAN          | T2_PK |  1357 |       |     6 |
-----------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter( (SELECT /*+ NO_UNNEST */ MAX("N1") FROM "T2" "T2" WHERE
              "T2"."ID1"=:B1)=1)
   5 - access("T2"."ID1"=:B1)

You will notice, of course, that the estimated cardinality of the result (line 0) is different, and the estimated cardinality of the scan on t1 (line 2) is different. There was an important code change introduced in 10g relating to the handling of subqueries that explains this effect.

In the previous article I pointed out that an existence subquery introduces a selectivity of 5% – and that the upgrade to 10g changed the point in the optimizer’s arithmetic where this 5% was applied (hence where the plan showed a change in cardinality). In the example above my code is of the form “= (subquery)”, which means Oracle applies a “magic” 1% rather than 5%. (For a range-based predicate – i.e. <, >, etc. – it would have been the 5% that we got for existence.)

So in the 9i plan, line 2 shows a cardinality of 10,000 because that’s 1% of the original 1,000,000 rows but, as I pointed out in the earlier article, this means the optimizer will be working out the rest of the plan based on the estimated effect of a filter subquery that has not yet run. (Do read the previous article if you don’t follow that last comment.)

From 10g onwards (and in our 11.1 plan) the cardinality for line 2 is 1,000,000 because that’s how many rows will appear in the rowsource as the table is scanned. But then you have to ask why Oracle didn’t print a cardinality on line 1 (the FILTER line) because it’s at that line that we should see a cardinality of 10,000 as the filter selectivity of 1% is applied.

But there is another problem. The final cardinality of the 11.1 plan is NOT reported as 10,000; it’s reported as 14. As Oracle has fixed one defect in the optimizer it has introduced another (whether by accident, or by design, I don’t know). The estimate of 14 rows is effectively double-counting. The optimizer has applied the 1% selectivity for the subquery, and then applied the selectivity of the predicate “t1.id1 = {unknown value}”. (If the predicate had been “> (subquery)” or some other range-based operator, or existence, then the arithmetic would have applied a 5% selectivity on top of that “t1.id1 = {unknown value}” selectivity.)
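
You can check that interpretation against the data definition: id1 was generated as mod(rownum,729), so the selectivity of “t1.id1 = {unknown value}” is 1/729, and:

select round(1000000 * 0.01 * (1/729)) final_cardinality from dual;	-- 14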

In this case, of course, it probably doesn’t matter that the final cardinality is wrong – but in more complex cases this error may be introduced somewhere in the middle of the plan, making the optimizer do something very silly from that point onwards.

There is a surprising “workaround” to this issue – which may not be relevant in all cases and may not even work in some. If you force Oracle into running the subquery early (with the push_subq hint) the double-counting doesn’t occur:

select
	/*+ push_subq */	-- 9i hint
	*
from
	t1
where
	1 = (
		select
			/*+ no_unnest push_subq */	-- 10g+ hint (if you don't use query block naming)
			max(n1)
		from
			t2
		where
			t2.id1 = t1.id1
	)
;

-----------------------------------------------------------------------
| Id  | Operation                     | Name  | Rows  | Bytes | Cost  |
-----------------------------------------------------------------------
|   0 | SELECT STATEMENT              |       | 10000 |  1201K|  4075 |
|*  1 |  TABLE ACCESS FULL            | T1    | 10000 |  1201K|  2712 |
|   2 |   SORT AGGREGATE              |       |     1 |     7 |       |
|   3 |    TABLE ACCESS BY INDEX ROWID| T2    |  1357 |  9499 |  1363 |
|*  4 |     INDEX RANGE SCAN          | T2_PK |  1357 |       |     6 |
-----------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter( (SELECT /*+ PUSH_SUBQ NO_UNNEST */ MAX("N1") FROM "T2"
              "T2" WHERE "T2"."ID1"=:B1)=1)
   4 - access("T2"."ID1"=:B1)

It strikes me as a little bizarre that this hint has any effect at all in this case since there is no “earlier point” at which the subquery can run. But it has pushed the optimizer through a different code path, which has produced a plan with a different shape and avoided the double-counting.

Footnote: I’ve just included in the final query text a reminder that the push_subq hint changed in the upgrade from 9i to 10g. Originally it went at the top of the query and caused every subquery to be pushed; from 10g you have to be selective about the subqueries you want pushed. The hint at the top will be “ignored”, and instead you apply the hint to each subquery that you want pushed – either by writing it in the subquery (as I have here), or by naming each subquery with the qb_name hint and then putting the corresponding push_subq(@{your subquery name}) hints at the top of the query.
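
For the record, the named-query-block version of the hinting would look something like this (the name subq is arbitrary):

select
	/*+ push_subq(@subq) */
	*
from
	t1
where
	1 = (
		select
			/*+ qb_name(subq) no_unnest */
			max(n1)
		from
			t2
		where
			t2.id1 = t1.id1
	)
;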

Fake Baselines

SQL Baselines in 11g are the new Stored Outlines – and one of the nicest features of SQL Baselines is that you are allowed to fake them; or rather, it’s legal to generate an execution plan for one query and transfer it to another query using the packaged procedure dbms_spm.load_plans_from_cursor_cache(). This posting is a demonstration of the technique.

We start with a sample data set and a query that is going to do “the wrong thing”. As usual I’ve got a locally managed tablespace with 8KB blocks and 1MB uniform extents, freelist management, and I’m running with CPU Costing disabled (and running 11.1.0.6 in this case):


set serveroutput off

create table t1
as
with generator as (
	select	--+ materialize
		rownum id
	from dual
	connect by
		rownum <= 10000
)
select
	rownum			id,
	mod(rownum,1000)	n1000,
	lpad(rownum,10,'0')	small_vc,
	rpad('x',100)		padding
from
	generator	v1,
	generator	v2
where
	rownum <= 10000
;

create index t1_n1 on t1(n1000);

begin
	dbms_stats.gather_table_stats(
		ownname		 => user,
		tabname		 =>'T1',
		method_opt 	 => 'for all columns size 1'
	);
end;
/
select
	/*+ target_query */
	id
from
	t1
where
	n1000 = 500
;

The final query is the one that I want to fake a baseline for. With my current set up it does an index range scan to pick up 10 rows, but I’m going to make it do a tablescan instead. I’m going to need to pull the exact text of the query from memory in a moment, so I’ll find its sql_id and child_number by searching for the “pseudo-hint” that I’ve included in the text, and I’ll report the execution plan to show that I’ve picked up the right thing (I’m assuming that there’s only one piece of SQL that’s going to include the text “target_query”, of course):

column	sql_id			new_value	m_sql_id_1
column 	plan_hash_value		new_value	m_plan_hash_value_1
column	child_number		new_value	m_child_number_1

select
	sql_id, plan_hash_value, child_number
from
	v$sql
where
	sql_text like '%target_query%'
and	sql_text not like '%v$sql%'
and	rownum = 1
;

select * from table(dbms_xplan.display_cursor('&m_sql_id_1',&m_child_number_1));

SQL_ID  306m98cpu9yz7, child number 0
-------------------------------------
select  /*+ target_query */  id from  t1 where  n1000 = 500

Plan hash value: 1420382924

---------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost  |
---------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |       |       |    11 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |    10 |    80 |    11 |
|*  2 |   INDEX RANGE SCAN          | T1_N1 |    10 |       |     1 |
---------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("N1000"=500)

The next step is to create a new query (and in a production system that might simply mean running a heavily hinted version of the target query) that uses the execution plan I want to see; and I’ll use the same search technique to find it and report its plan:

select
	/*+ full(t1) alternate_query */
	id
from
	t1
where
	n1000 = 500
;

column	sql_id			new_value	m_sql_id_2
column 	plan_hash_value		new_value	m_plan_hash_value_2
column	child_number		new_value	m_child_number_2

select
	sql_id, plan_hash_value, child_number
from
	v$sql
where
	sql_text like '%alternate_query%'
and	sql_text not like '%v$sql%'
and	rownum = 1
;

select * from table(dbms_xplan.display_cursor('&m_sql_id_2',&m_child_number_2));

SQL_ID  bvpb73bb6c6uy, child number 0
-------------------------------------
select  /*+ full(t1) alternate_query */  id from  t1 where  n1000 = 500

Plan hash value: 3617692013

----------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost  |
----------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       |    28 |
|*  1 |  TABLE ACCESS FULL| T1   |    10 |    80 |    28 |
----------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("N1000"=500)

After running and reporting these queries and their plans, I’ve captured the SQL_ID, child_number, and plan_hash_value for each query; and this is more than enough information to make it possible to create an SQL Baseline for one query using the execution plan for the other query.

declare
	m_clob	clob;
begin
	select
		sql_fulltext
	into
		m_clob
	from
		v$sql
	where
		sql_id = '&m_sql_id_1'
	and	child_number = &m_child_number_1
	;

	dbms_output.put_line(m_clob);

	dbms_output.put_line(
		dbms_spm.load_plans_from_cursor_cache(
			sql_id 			=> '&m_sql_id_2',
			plan_hash_value		=> &m_plan_hash_value_2,
			sql_text		=> m_clob,
			fixed			=> 'NO',
			enabled			=> 'YES'
		)
	);

end;
/

I used the SQL_ID and child_number from the first query to get the full SQL text of the query into an in-memory CLOB, then used the SQL_ID and plan_hash_value from the second query to associate the second plan with the first query – storing the result as an SQL Baseline that is enabled and ready for use.
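
If you want to see the result of the call, a quick query against dba_sql_plan_baselines will show it (the generated plan_name on your system will, of course, be different):

select
	sql_handle, plan_name, origin, enabled, accepted, fixed
from
	dba_sql_plan_baselines
where
	sql_text like '%target_query%'
;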

You’ll have to take my word that I haven’t faked the following text, but this is what I get when I re-run the original query (flushing the shared pool first to make sure that I don’t accidentally end up picking up the original child cursor):

alter system flush shared_pool;

select
	/*+ target_query */
	id
from
	t1
where
	n1000 = 500
;

select * from table(dbms_xplan.display_cursor('&m_sql_id_1',null));

SQL_ID  306m98cpu9yz7, child number 1
-------------------------------------
select  /*+ target_query */  id from  t1 where  n1000 = 500

Plan hash value: 3617692013

----------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost  |
----------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       |    28 |
|*  1 |  TABLE ACCESS FULL| T1   |    10 |    80 |    28 |
----------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("N1000"=500)

Note
-----
   - cpu costing is off (consider enabling it)
   - SQL plan baseline SYS_SQL_PLAN_3c0ea7f3dbd90e8e used for this statement

It’s so much easier than the fiddling one had to do for stored outlines – which was quite easy in 8i, but got a little nasty in 9i because of the extra (undocumented) details that looked as if they might have been necessary when the third table appeared in the outln schema. However, in 10g, the dbms_outln package was enhanced to allow you to create outlines from the library cache – see this note from Randolf Geist for more details – but remember that stored outlines will be deprecated in Oracle 12.

Footnote: the dbms_spm package is created by script $ORACLE_HOME/rdbms/admin/dbmsspm.sql, and there is a comment near the top saying:


--           Grant the execute privilege on SPM package to public.           --
--           But ADMINISTER SQL MANAGEMENT OBJECT privilege is checked       --
--           before a user is allowed to execute.                            --

SQL Plan Baselines

Here’s another of my little catalogues of articles – this one on SQL Plan Baselines.

Be a little careful as you read through these notes – there are various changes in internal mechanisms, storage, etc. as you go through different versions of Oracle, so check which version the author is writing about.

Maria Colgan of Oracle Corp.

Christian Antognini

Kerry Osborne

Tim Hall

Jason Arneil

Tracking Down Windows Memory Leaks

This is just a small windows O/S related note, covering how to track down memory usage of a specific windows service. This all happened on Windows Vista. The same steps are likely to work perfectly well on Windows 7 and both flavours of Server 2008. I’d expect them to also work on earlier versions of [...]