Oakies Blog Aggregator

Exadata Storage Server and the Query Optimizer – Part 3

In the first and second posts of this series I shared with you some basics about smart scan and gave some details about projection and restriction. The aim of this post is to cover the third basic technique: join filtering.
Join filtering is not something specific to the Exadata Storage Server. In fact, it is an [...]

Public Demonstration of PFCLScan in Edinburgh Thursday May 13th

We have demonstrated our database security vulnerability scanner product to a lot of our customers and our partners' customers over the last few months, and the reactions and feedback have been amazing. The product has two major modes, which map to....[Read More]

Posted by Pete On 05/05/10 At 01:24 PM

Native Full Outer Join Officially Available in 10.2.0.5

Today I installed for the first time the patchset 10.2.0.5. While reading the README file, I noticed the following piece of information.
To enable a new native full outer join implementation in the database, a user has to set the following underscore parameter:
_optimizer_native_full_outer_join = force
You can set this parameter for the system or for a specific session.
Besides [...]
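
As a quick sketch of both scopes (assuming a 10.2.0.5 database; underscore parameters are normally best left alone unless you've been advised to set them):

alter session set "_optimizer_native_full_outer_join" = force;

-- or, for the whole system:
alter system set "_optimizer_native_full_outer_join" = force;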

Cursor Sharing 3

Here’s a simple piece of code demonstrating an irritating problem. I’ve created a table, a function-based index, collected stats (without histograms), and then run a query that should use that index – but doesn’t.

execute dbms_random.seed(0)

create table t1
as
with generator as (
    select --+ materialize
        rownum id
    from dual
    connect by rownum <= [...]
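
Since the script above is truncated, here is a minimal self-contained sketch of that kind of setup (the table, data and index are hypothetical stand-ins, not the original test case):

create table t1 (v1 varchar2(40));

insert into t1
select dbms_random.string('U', 40)
from dual
connect by rownum <= 10000;

commit;

-- a function-based index
create index t1_i1 on t1(substr(v1, 1, 10));

-- stats, no histograms
exec dbms_stats.gather_table_stats(user, 'T1', method_opt => 'for all columns size 1')

-- a query that ought to be able to use T1_I1
select * from t1 where substr(v1, 1, 10) = 'AAAAAAAAAA';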

A decade ago...

It was ten years ago that asktom was "born" (and five years and a month ago this blog was)...

The first question asked was about Oracle 7.3 on a Sun 5.5.1 machine - in early 2000 (right after we got by that year 2000 thing). Funny, the last time it was updated was.... just a little more than 3 months ago - it is still alive...

Man oh man - have things changed since. I stand at about 12,000 published Q&A's (about 40,000 in total - not all get published for various reasons...) I have 10 new ones in the queue (will get to them soon - have lots of travel planned for today and the week in fact)...

I've learned a lot in the last ten years - things have changed considerably.

Remember what a "big" server looked like in 2000? Remember what they cost? Think about how now you can buy stuff off of the shelf on your credit card that blows them away.

In 1997 - we were doing terabyte-scale tests - a terabyte was such a big deal then that we would put out a press release. Now we have a database machine with 5TB of flash cache just to buffer part of the database.... A terabyte is nothing - my son would be disappointed in any laptop with less than that for storage...

It is only because things keep changing so fast, so much - that this stuff stays interesting. Imagine if, 17 years later, it were still like 1993 (when I joined Oracle). It would be pretty boring...

Anyway, thanks for all of the great questions - looking forward to more...

Exadata Storage Server and the Query Optimizer – Part 2

In the first post of this series I shared with you some basics about smart scan and gave some details about projection. The aim of this post is to cover the second basic technique: restriction. Simply put, the aim of this technique is to offload to a cell the processing of predicates found in the [...]

Statistics on Partitioned Tables - Part 6c - COPY_TABLE_STATS - Bugs and Patches

I wanted to talk about a few of the bugs and patches you need to be aware of if you plan to use DBMS_STATS.COPY_TABLE_STATS. Believe me, when entering the world of stats on (sub-)partitioned objects, you had better be prepared to spend a lot of time on My Oracle Support and a little time applying patches. It's a complex picture but I'll try to simplify it as much as possible here.

10.2.0.3 or 10.2.0.4 (or, late update ... 10.2.0.5)

All of the posts so far have been based on the 10.2.0.4 patchset with additional one-off patches applied where necessary (more below). Although part of me can understand the resistance to applying new patchsets because of the additional testing, change management and outages required, I think it's asking for trouble to be on older patchsets if you're using newer features such as DBMS_STATS.COPY_TABLE_STATS. For example, as far as I'm aware COPY_TABLE_STATS is not truly supported in 10.2.0.3 but I've seen a number of people using it.

These features are being worked on continuously, as you'll see in the rest of this post, so I think you'll want to apply the latest patches on top of the latest patchset where possible.

So you should take my bitching about the new availability of 10.2.0.5 with a pinch of salt. In fact, I found myself quite excited that there might be some fixes for issues I've found. Back to 10.2.0.5 later ...

High/Low Partition Key Values

This is probably the best known of the stats copying bugs. Remember that at the end of part 6a, COPY_TABLE_STATS had copied the statistics for the new P_20100211_GROT subpartition, using the P_20100209_GROT subpartition statistics as the source. Looking at a very small sample of the subpartition column statistics that have been copied :-

*******************************
Subpartition-Level Column Stats
*******************************
SQL> select
  2     a.column_name, a.subpartition_name,
  3     a.num_distinct,
  4     display_raw(a.low_value,b.data_type) as low_val,
  5     display_raw(a.high_value,b.data_type) as high_val,
  6     b.data_type
  7  from
  8     dba_subpart_col_statistics a, dba_tab_cols b
  9  where
 10     a.owner='TESTUSER' and
 11     a.table_name='TEST_TAB1' and
 12     a.table_name=b.table_name and
 13     a.column_name=b.column_name
 14  order by 1, 2
 15  /

COLUMN_NAME                SUBPARTITION_NAME              NUM_DISTINCT LOW_VAL              HIGH_VAL             DATA_TYPE
-------------------------- ------------------------------ ------------ -------------------- -------------------- --------------------
<>
REPORTING_DATE             P_20100211_GROT                           1 20100209             20100209             NUMBER
<>

Spot the deliberate mistake? You would think that DBMS_STATS would be smart enough to know that the partition key column high and low values are implicitly different for each new partition. I added a partition to cover reporting dates 20100210 and 20100211 :-

SQL> ALTER TABLE TEST_TAB1
  2  ADD  PARTITION P_20100211 VALUES LESS THAN (20100212);

So the high and low values for REPORTING_DATE are rather unlikely to be 20100209 ;-)

It's bug number 8318020 and it's mentioned in several places on the net. You can work around it by setting the column stats manually for the partition key, but you might as well just apply the correct one-off patch. Of course, once you start to apply that one-off patch, you will find a whole bunch of DBMS_STATS-related patches, and I would lay good odds that you'll actually end up applying a Merge Label Request (MLR) encompassing a number of them. 8866627 is the particular one that I applied before most of these tests and that we use at work; it also solves the next issue.
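
For reference, here is a sketch of that manual workaround (the values are hypothetical; it reads back the copied stats and replaces only the low/high values for the partition key column):

declare
  l_distcnt number;
  l_density number;
  l_nullcnt number;
  l_avgclen number;
  l_srec    dbms_stats.statrec;
  l_vals    dbms_stats.numarray;
begin
  -- read the stats that COPY_TABLE_STATS wrote for the new subpartition
  dbms_stats.get_column_stats(
    ownname => 'TESTUSER', tabname => 'TEST_TAB1', colname => 'REPORTING_DATE',
    partname => 'P_20100211_GROT',
    distcnt => l_distcnt, density => l_density, nullcnt => l_nullcnt,
    srec    => l_srec,    avgclen => l_avgclen);

  -- replace just the min/max with the values the new subpartition will hold
  l_vals := dbms_stats.numarray(20100211, 20100211);
  l_srec.epc := 2;
  l_srec.bkvals := null;
  dbms_stats.prepare_column_values(l_srec, l_vals);

  dbms_stats.set_column_stats(
    ownname => 'TESTUSER', tabname => 'TEST_TAB1', colname => 'REPORTING_DATE',
    partname => 'P_20100211_GROT',
    distcnt => l_distcnt, density => l_density, nullcnt => l_nullcnt,
    srec    => l_srec,    avgclen => l_avgclen);
end;
/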

Subpartition issues

I deliberately skipped over this one when I wrote part 6a by ensuring I had 8866627 applied, but it's a nasty one: quite easy to miss and not referenced in many places on the web, although I did find at least one useful thread.

If I repeat the partition-level stats copy from Part 6a without any patches applied to 10.2.0.4, this is the result.

SQL> exec dbms_stats.copy_table_stats(ownname => 'TESTUSER', tabname => 'TEST_TAB1', 
                                      srcpartname => 'P_20100209', dstpartname => 'P_20100210');
BEGIN dbms_stats.copy_table_stats(ownname => 'TESTUSER', tabname => 'TEST_TAB1', 
                                      srcpartname => 'P_20100209', dstpartname => 'P_20100210'); END;
*
ERROR at line 1:
ORA-06533: Subscript beyond count
ORA-06512: at "SYS.DBMS_STATS", line 17408
ORA-06512: at line 

As far as I'm aware, and like many of the issues I'm raising, this is only a problem when you have subpartitions as well as partitions. The important point is that I've never seen this bug since applying 8866627, and I can't recreate it on 10.2.0.5 either.

ORA-03113 during aggregation

Unfortunately, 8866627 doesn't seem to fix the next issue and I can't find any references to it on the web either, but I've been able to recreate it on several databases, with different patches applied and on different operating systems. First I'll confirm the patches that 8866627 includes.

Installed Top-level Products (2): 

Oracle Database 10g                                                  10.2.0.1.0
Oracle Database 10g Release 2 Patch Set 3                            10.2.0.4.0

There are 2 products installed in this Oracle Home.

Interim patches (1) :

Patch  8866627      : applied on Fri Apr 23 13:04:42 BST 2010
   Created on 6 Nov 2009, 05:38:04 hrs PST8PDT
   Bugs fixed:
     7475493, 7116357, 7463138, 6718212, 7441785, 6526370, 8318020

Now, having realised that copying the statistics at the partition level doesn't work when I have only one subpartition populated with data (because it doesn't copy the statistics for the unpopulated subpartitions), I'll try copying the statistics for each subpartition one by one. You can try the same yourself using stats_6c.sql (note that you will have to run stats_5_6a.sql first and that this script will generate a trace file) or see my results in stats_6c.txt. The script just repopulates the load table, exchanges it with the GROT subpartition and then copies stats for each of the subpartitions in turn. If this seems a little weird, it really isn't, because it's perfectly feasible that we would have some empty subpartitions.

The stats for each subpartition are copied successfully ....

SQL> exec dbms_monitor.session_trace_enable;

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.copy_table_stats(ownname => 'TESTUSER', tabname => 'TEST_TAB1', 
                     srcpartname => 'P_20100209_GROT', dstpartname => 'P_20100211_GROT');

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.copy_table_stats(ownname => 'TESTUSER', tabname => 'TEST_TAB1', 
                     srcpartname => 'P_20100209_JUNE', dstpartname => 'P_20100211_JUNE');

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.copy_table_stats(ownname => 'TESTUSER', tabname => 'TEST_TAB1', 
                     srcpartname => 'P_20100209_HALO', dstpartname => 'P_20100211_HALO');

PL/SQL procedure successfully completed.

Until we get to the last subpartition. If you've been wading through the earlier posts, then you might be expecting Oracle to aggregate the subpartition stats to generate Aggregated Stats at the Partition level, now that all subpartitions have valid stats (which they should, as I've just copied them). Clearly some sort of aggregation process does kick in, because the session is disconnected at this point.

SQL> alter session set "_minimal_stats_aggregation"=TRUE;

Session altered.

SQL> exec dbms_stats.copy_table_stats(ownname => 'TESTUSER', tabname => 'TEST_TAB1', 
                     srcpartname => 'P_20100209_OTHERS', dstpartname => 'P_20100211_OTHERS');

SQL> show parameter user_dump_dest
ERROR:
ORA-03114: not connected to ORACLE 

The reason I set _minimal_stats_aggregation to TRUE was to make sure that I was using the default value, but setting it to FALSE leads to the same end result, as you'll see from the test output posted earlier.

Because this seemed such a weird error, I decided to trace the session and here's a snippet of the resulting trace file that shows where the error occurs.

PARSING IN CURSOR #44 len=38 dep=1 uid=0 oct=7 lid=0 tim=1242214568228618 hv=624276838 ad='a03acac0'
delete from ind_online$ where obj#= :1
END OF STMT
PARSE #44:c=0,e=43,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=1242214568228606
EXEC #44:c=0,e=141,p=0,cr=3,cu=0,mis=0,r=0,dep=1,og=4,tim=1242214568228961
STAT #44 id=1 cnt=0 pid=0 pos=1 obj=0 op='DELETE  IND_ONLINE$ (cr=3 pr=0 pw=0 time=101 us)'
STAT #44 id=2 cnt=0 pid=1 pos=1 obj=731 op='TABLE ACCESS FULL IND_ONLINE$ (cr=3 pr=0 pw=0 time=66 us)'
EXEC #7:c=0,e=42,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=1,tim=1242214568229627
FETCH #7:c=0,e=158,p=0,cr=12,cu=0,mis=0,r=1,dep=1,og=1,tim=1242214568229851
EXEC #40:c=0,e=167,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=1,tim=1242214568230144
FETCH #40:c=0,e=851,p=0,cr=13,cu=0,mis=0,r=1,dep=1,og=1,tim=1242214568231118
EXEC #38:c=0,e=43,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=1,tim=1242214568231322
FETCH #38:c=0,e=99,p=0,cr=12,cu=0,mis=0,r=1,dep=1,og=1,tim=1242214568231540
Exception signal: 11 (SIGSEGV), code: 1 (Address not mapped to object), addr: 0x0, 
                                PC: [0x1a1baf7, qospMinMaxPartCol()+3135]
*** 2010-04-23 14:01:57.888
ksedmp: internal or fatal error
ORA-07445: exception encountered: core dump [qospMinMaxPartCol()+3135] 
                                  [SIGSEGV] [Address not mapped to object] [0x000000000] [] []
Current SQL statement for this session:
BEGIN dbms_stats.copy_table_stats(ownname => 'TESTUSER', tabname => 'TEST_TAB1', 
                      srcpartname => 'P_20100209_OTHERS', dstpartname => 'P_20100211_OTHERS'); END;
----- PL/SQL Call Stack -----
  object      line  object
  handle    number  name
0xa0625b88       830  package body SYS.DBMS_STATS
0xa0625b88     18292  package body SYS.DBMS_STATS
0x97cbbe60         1  anonymous block 
----- Call Stack Trace -----
calling              call     entry                argument values in hex      
location             type     point                (? means dubious value)
-------------------- -------- -------------------- ----------------------------
ksedst()+31          call     ksedst1()            000000000 ? 000000001 ?
                                                   2B3BC0FACD50 ? 2B3BC0FACDB0 ?
                                                   2B3BC0FACCF0 ? 000000000 ?
ksedmp()+610         call     ksedst()             000000000 ? 000000001 ?
                                                   2B3BC0FACD50 ? 2B3BC0FACDB0 ?
                                                   2B3BC0FACCF0 ? 000000000 ?
ssexhd()+629         call     ksedmp()             000000003 ? 000000001 ?
                                                   2B3BC0FACD50 ? 2B3BC0FACDB0 ?
                                                   2B3BC0FACCF0 ? 000000000 ?
__restore_rt()+0     call     ssexhd()             00000000B ? 2B3BC0FADD70 ?
                                                   2B3BC0FADC40 ? 2B3BC0FACDB0 ?
                                                   2B3BC0FACCF0 ? 000000000 ?
qospMinMaxPartCol()  signal   __restore_rt()       0068986E0 ? 000000000 ?
+3135                                              000000001 ? 000000000 ?
                                                   000000000 ? 0000000C2 ?
spefcmpa()+687       call     qospMinMaxPartCol()  7FFF31FFEDA0 ? 2B3BC13AABB0 ?
                                                   000000000 ? 2B3BC13AAB70 ?
                                                   000000000 ? 2B3BC13AAB30 ?
spefmccallstd()+218  call     spefcmpa()           2B3BC13AAA90 ? 2B3BC13AAAE0 ?
                                                   2B3BC129E500 ? 2B3BC13AAB70 ?
                                                   000000000 ? 2B3BC13AAB30 ?
pextproc()+41        call     spefmccallstd()      7FFF31FFEE00 ? 7FFF31FFEA80 ?

An ORA-07445 error in qospMinMaxPartCol. I tried to find some references to this but haven't been successful so far, so I'm not sure whether it's a new bug or not. I was hoping it would be fixed in 10.2.0.5, but it doesn't appear to be.

Installed Top-level Products (3): 

Oracle Database 10g                                                  10.2.0.1.0
Oracle Database 10g Release 2 Patch Set 3                            10.2.0.4.0
Oracle Database 10g Release 2 Patch Set 4                            10.2.0.5.0
There are 3 products installed in this Oracle Home.


There are no Interim patches installed in this Oracle Home.

The test script leads to the same error. In fact, all of the example output above was generated on 10.2.0.5, but I've recreated the same results on 10.2.0.4 many times now.

Exception signal: 11 (SIGSEGV), code: 1 (Address not mapped to object), addr: 0x0, 
                      PC: [0x1abfe92, qospMinMaxPartCol()+3210]
*** 2010-05-02 13:56:33.603
ksedmp: internal or fatal error
ORA-07445: exception encountered: core dump [qospMinMaxPartCol()+3210] 
                     [SIGSEGV] [Address not mapped to object] [0x000000000] [] []
Current SQL statement for this session:
BEGIN dbms_stats.copy_table_stats(ownname => 'TESTUSER', tabname => 'TEST_TAB1', 
                     srcpartname => 'P_20100206_OTHERS', dstpartname => 'P_20100212_OTHERS'); END;
----- PL/SQL Call Stack -----
  object      line  object
  handle    number  name
0xa07618e8       838  package body SYS.DBMS_STATS
0xa07618e8     19131  package body SYS.DBMS_STATS
0xa02394b8         1  anonymous block
----- Call Stack Trace -----
calling              call     entry                argument values in hex
location             type     point                (? means dubious value)
-------------------- -------- -------------------- ----------------------------
ksedst()+31          call     ksedst1()            000000000 ? 000000001 ?
                                                   2BA713361D50 ? 2BA713361DB0 ?
                                                   2BA713361CF0 ? 000000000 ?
ksedmp()+610         call     ksedst()             000000000 ? 000000001 ?
                                                   2BA713361D50 ? 2BA713361DB0 ?
                                                   2BA713361CF0 ? 000000000 ?
ssexhd()+629         call     ksedmp()             000000003 ? 000000001 ?
                                                   2BA713361D50 ? 2BA713361DB0 ?
                                                   2BA713361CF0 ? 000000000 ?
__restore_rt()+0     call     ssexhd()             00000000B ? 2BA713362D70 ?
                                                   2BA713362C40 ? 2BA713361DB0 ?
                                                   2BA713361CF0 ? 000000000 ?
qospMinMaxPartCol()  signal   __restore_rt()       006AE9A20 ? 000000000 ?
+3210                                              000000001 ? 000000000 ?
                                                   000000000 ? 0000000C2 ?
spefcmpa()+809       call     qospMinMaxPartCol()  7FFFDBEA5280 ? 2BA71361F008 ?
                                                   000000000 ? 2BA71361EFC8 ?
                                                   000000000 ? 2BA71361EF88 ?
spefmccallstd()+1228 call     spefcmpa()           2BA71361EEE8 ? 2BA71361EF38 ?
                                                   2BA7135EE500 ? 2BA71361EFC8 ?
                                                   000000000 ? 2BA71361EF88 ?
pextproc()+41        call     spefmccallstd()      7FFFDBEA52E0 ? 7FFFDBEA4F68 ?
                                                   7FFFDBEA4D10 ? 7FFFDBEA4C78 ?
                                                   000000000 ? 2BA71361EF88 ?

There is a workaround, of course, which is to copy stats at the Partition level after all of the Subpartitions have been populated, and I've shown that at the end of the test script. The problem with that is that there could be quite a long lag before the stats are copied, because copying up-front before the data appears (which would seem sensible) doesn't work too well, based on part 6a.
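
For reference, the deferred copy is just the partition-level call from earlier in this post, run once all of the subpartitions are loaded (partition names as in these tests):

SQL> exec dbms_stats.copy_table_stats(ownname => 'TESTUSER', tabname => 'TEST_TAB1', 
                     srcpartname => 'P_20100209', dstpartname => 'P_20100211');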

I've probably added to the confusion again, rather than reducing it, but I'll leave you with two main messages:

1) You're going to encounter some bugs because some of these features are new, so you need to work hard at identifying them, applying the appropriate patches and, preferably, being on later releases in the first place.

2) Things are probably much more straightforward if you stick to Table and Partition-level stats, but that's not a luxury I have on my current system.

Remastering

For anyone doing complicated things with RAC, there’s a useful little note on Fairlie Rego’s blog about the improved options for handling and reporting dynamic remastering of objects.

Query transformation – Part 1

Query transformation is a set of techniques used by the optimizer to rewrite a query so that it can be optimized better. New optimization paths open up to the optimizer after query transformation. Some query transformations must be costed to be chosen, while others do not need to be costed. For example, if a table can be eliminated completely from a join, then that transformation is applied and the need to cost it is minimal.

Test case

We will use the following test case to illustrate the concepts behind query transformation. Some of the optimizations shown here work from version 11gR1 onwards, so these test cases might not work in versions 10g and below.


create table backup.t1 (n1 number not null primary key, n2 number not null, n3 varchar2(256));
create table backup.t2 (n1 number not null primary key, n2 number not null, n3 varchar2(256));
alter table backup.t1 add constraint t1_fk foreign key (n2) references backup.t2(n1);
insert into backup.t2 select n1, n1, lpad(n1, 250, 'x') from (select level n1 from dual connect by level <= 100);
insert into backup.t1 select n1, n1, lpad(n1, 250, 'x') from (select level n1 from dual connect by level <= 100);

Join elimination (JE)

JE is a technique in which one or more tables can be eliminated from the execution plan without altering functional behavior. In Listing 1-1, the query selects columns from table t1 only, but there is a join predicate between t1 and t2. No columns are selected from table t2, so the join to t2 serves only to verify the existence of the foreign key values. The enabled foreign key constraint between the two tables already guarantees that existence, so there is no need for an explicit existence check in the query, and the optimizer can safely eliminate the join to table t2.

select /*+ gather_plan_statistics */ t1.* from t1, t2 where t1.n2 = t2.n1;
select * from table(dbms_xplan.display_cursor('','','allstats last'));
------------------------------------------------------------------------------------
| Id  | Operation         | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------
|   1 |  TABLE ACCESS FULL| T1   |      1 |     82 |    100 |00:00:00.01 |      13 |
------------------------------------------------------------------------------------

Listing 1-1: JE example case 1

As you can see from Listing 1-1, table T2 has been removed from the execution plan. Since there is a valid foreign key constraint, the optimizer eliminated the join to table t2.

Let’s also discuss another Join Elimination test case. In Listing 1-2, the predicate is “t1.n2 not in (select t2.n1 from t2)”. The enabled foreign key constraint dictates that this predicate can never be true, so no rows will be returned. The optimizer promptly identified this condition and added a filter predicate in step 1 with “NULL IS NOT NULL” as the predicate. Step 1 is executed before step 2, and step 2 is never executed, as the value of the Starts column is zero in the execution plan.

SQL_ID  d09kmzum9wgta, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ t1.* from t1 where t1.n2 not in
(select t2.n1 from t2 )

Plan hash value: 3332582666
---------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |      0 |00:00:00.01 |
|*  1 |  FILTER            |      |      1 |        |      0 |00:00:00.01 |
|   2 |   TABLE ACCESS FULL| T1   |      0 |    100 |      0 |00:00:00.01 |
---------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(NULL IS NOT NULL)

Listing 1-2: JE example case 2

Listing 1-3 provides another variation of JE.

select /*+ gather_plan_statistics */ t1.* from t1 where t1.n2 in
(select t2.n1 from t2 )

Plan hash value: 3617692013
------------------------------------------------------------------------------------
| Id  | Operation         | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |      1 |        |    100 |00:00:00.01 |      14 |
|   1 |  TABLE ACCESS FULL| T1   |      1 |    100 |    100 |00:00:00.01 |      14 |
------------------------------------------------------------------------------------
Listing 1-3: JE example case 3

The following output shows the trace lines from the 10053 trace file.
JE: Considering Join Elimination on query block SEL$5DA710D3 (#1)
*************************
Join Elimination (JE)
*************************
JE: cfro: T1 objn:74684 col#:2 dfro:T2 dcol#:2
JE: cfro: T1 objn:74684 col#:2 dfro:T2 dcol#:2
Query block (0E0D43D0) before join elimination:
SQL:******* UNPARSED QUERY IS *******
SELECT "T1"."N1" "N1","T1"."N2" "N2","T1"."N3" "N3" FROM "CBO3"."T2" "T2","CBO3"."T1" "T1" WHERE "T1"."N2"="T2"."N1"
JE: eliminate table: T2
Registered qb: SEL$14EF7918 0xe0d43d0 (JOIN REMOVED FROM QUERY BLOCK SEL$5DA710D3; SEL$5DA710D3; "T2"@"SEL$2")

Filter Predicate(s) Generation from constraints

Various filter predicates are generated and added to the execution plan using enabled and validated constraints (check and NOT NULL constraints).

In Listing 1-4, columns n1 and n2 have enabled and valid NOT NULL constraints that preclude null values in those columns. The query in Listing 1-4 has the predicate “n1 is null or n2 is null”, which can never be true. The optimizer uses this fact to improve the execution plan: the filter predicate (NULL IS NOT NULL), which is always FALSE, is added in step 1, so step 2 is never executed (its Starts column value is 0) and table T1 is never accessed.

select /*+ gather_plan_statistics */ * from t1 where n1 is null or n2 is null;
select *  from table(dbms_xplan.display_cursor('','','allstats last'));
---------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   |
---------------------------------------------------------------------------
|*  1 |  FILTER            |      |      1 |        |      0 |00:00:00.01 |
|   2 |   TABLE ACCESS FULL| T1   |      0 |    100 |      0 |00:00:00.01 |
---------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(NULL IS NOT NULL)

Listing 1-4: Filter predicate generation from NOT NULL constraint.

Let’s add a check constraint to this column to explain this further. In Listing 1-5, a check constraint is added which specifies that “n1 < 200”. The query predicate n1 > 200 and the generated predicate n1 < 200 nullify each other, leading to an always-FALSE condition. The optimizer identified this condition and added a filter predicate in step 1: NULL IS NOT NULL.


alter table t1 add constraint t1_n1_lt_150 check (n1 < 200);
select /*+ gather_plan_statistics */ * from t1 where n1 > 200;
select *  from table(dbms_xplan.display_cursor('','','allstats last'));
---------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   |
---------------------------------------------------------------------------
|*  1 |  FILTER            |      |      1 |        |      0 |00:00:00.01 |
|*  2 |   TABLE ACCESS FULL| T1   |      0 |     20 |      0 |00:00:00.01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(NULL IS NOT NULL)
   2 - filter("N1">200)

Listing 1-5: Filter predicate generation from a check constraint

The following lines from the trace file generated by event 10053 show that the predicate n1<200 is added; this auto-generated predicate and the existing predicate cancel each other out, leading to an always-FALSE condition.

kkogcp: try to generate transitive predicate from check constraints for SEL$5DA710D3 (#0)
constraint: "T1"."N1"<200
predicates with check contraints: "T1"."N2"="T2"."N1" AND "T1"."N1"<200
after transitive predicate generation: "T1"."N2"="T2"."N1" AND "T1"."N1"<200
finally: "T1"."N2"="T2"."N1"
apadrv-start: call(in-use=1056, alloc=16344), compile(in-use=44792, alloc=46272)
kkoqbc-start

SJC: Set to Join Conversion

In some cases, the optimizer can convert a set operator to a join operator. Interestingly, this feature is not enabled by default (up to 11gR1). In Listing 1-6, we enable it with the underscore parameter, and a MINUS set operation is converted to a join operation.

alter session set "_convert_set_to_join"=true;
select /*+ gather_plan_statistics  */ n2 from t1 minus select n1 from t2
Plan hash value: 3050591313
------------------------------------------------------------------------------------...
| Id  | Operation           | Name         | Starts | E-Rows | A-Rows |   A-Time   |...
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |              |      1 |        |      0 |00:00:00.01 |...
|   1 |  HASH UNIQUE        |              |      1 |     99 |      0 |00:00:00.01 |
|   2 |   NESTED LOOPS ANTI |              |      1 |     99 |      0 |00:00:00.01 |
|   3 |    TABLE ACCESS FULL| T1           |      1 |    100 |    100 |00:00:00.01 |
|*  4 |    INDEX UNIQUE SCAN| SYS_C0010995 |    100 |      1 |    100 |00:00:00.01 |
------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("N2"="N1")

Listing 1-6: SJC

There is also a new hint, set_to_join, associated with this feature.


/*+
  ...
      OPT_PARAM('_convert_set_to_join' 'true')
  ...
      SET_TO_JOIN(@"SET$1")
  ...
  */
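
As a sketch, the MINUS query from Listing 1-6 can request the conversion through hints rather than the session parameter (SET$1 being the default name of the set query block, as in the outline fragment above):

select /*+ gather_plan_statistics
           OPT_PARAM('_convert_set_to_join' 'true')
           SET_TO_JOIN(@"SET$1") */
       n2
from t1
minus
select n1 from t2;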

SU: Subquery Unnesting

Subqueries can be unnested into joins. Listing 1-7 shows a subquery unnested into a view and then joined to the other row sources. In this listing, a correlated subquery is moved into the view VW_SQ_1, unnested and then joined using the Nested Loops join technique. There are many different variations of subquery unnesting, but the crux of the matter is that subqueries can be unnested, joined and then costed.

 select /*+ gather_plan_statistics  */ n1 from t1 where n1 >
	(select max(n2) from t2 where t2.n1 = t1.n1)
Plan hash value: 2311753844
-----------------------------------------------------------------------------------...
| Id  | Operation                       | Name         | Starts | E-Rows | A-Rows |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |              |      1 |        |      0 |
|   1 |  NESTED LOOPS                   |              |      1 |      1 |      0 |
|   2 |   VIEW                          | VW_SQ_1      |      1 |      5 |    100 |
|*  3 |    FILTER                       |              |      1 |        |    100 |
|   4 |     HASH GROUP BY               |              |      1 |      5 |    100 |
|   5 |      TABLE ACCESS BY INDEX ROWID| T2           |      1 |    100 |    100 |
|*  6 |       INDEX RANGE SCAN          | SYS_C0010995 |      1 |    100 |    100 |
|*  7 |   INDEX UNIQUE SCAN             | SYS_C0010992 |    100 |      1 |      0 |
-----------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter(MAX("N2")<200)
   6 - access("T2"."N1"<200)
   7 - filter("T1"."N1">"MAX(N2)")

SU is one reason why there are many performance issues after a database upgrade to 10g and above: the cost of the unnested subquery can go up or down, sometimes leading to the unfortunate choice of a not-so-optimal execution plan.
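
If you suspect SU is behind a post-upgrade regression, one quick test is to disable unnesting for the suspect subquery with the NO_UNNEST hint and compare the plans; a sketch against the tables above:

select /*+ gather_plan_statistics */ n1
from t1
where n1 > (select /*+ NO_UNNEST */ max(n2) from t2 where t2.n1 = t1.n1);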

Use of the ORDERED hint can really play havoc with the SU feature too. For example, in Listing 1-8, we ask for the join between t1 and t2 to come first, followed by the other joins. You would expect the leading table in the join to be T1, but the leading row source is VW_SQ_1.

select /*+ gather_plan_statistics ORDERED  */ t1.n1, t2.n1 from t1 , t2
where t1.n1 = t2.n1 and t1.n1 > (select max(n2) from t2 where t2.n1 =t1.n1)
Plan hash value: 3904485247
------------------------------------------------------------------------------------...
| Id  | Operation                        | Name         | Starts | E-Rows | A-Rows |
------------------------------------------------------------------------------------...
|   0 | SELECT STATEMENT                 |              |      1 |        |      0 |...
|   1 |  NESTED LOOPS                    |              |      1 |      1 |      0 |
|   2 |   NESTED LOOPS                   |              |      1 |      1 |      0 |
|   3 |    VIEW                          | VW_SQ_1      |      1 |      5 |    100 |
|*  4 |     FILTER                       |              |      1 |        |    100 |
|   5 |      HASH GROUP BY               |              |      1 |      5 |    100 |
|   6 |       TABLE ACCESS BY INDEX ROWID| T2           |      1 |    100 |    100 |
|*  7 |        INDEX RANGE SCAN          | SYS_C0010995 |      1 |    100 |    100 |
|*  8 |    INDEX UNIQUE SCAN             | SYS_C0010992 |    100 |      1 |      0 |
|*  9 |   INDEX UNIQUE SCAN              | SYS_C0010995 |      0 |      1 |      0 |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter(MAX("N2")<200)
   7 - access("T2"."N1"<200)
   8 - filter("T1"."N1">"MAX(N2)")
   9 - access("T1"."N1"="T2"."N1")
       filter("T2"."N1"<200)

Is the CBO not honoring our hint? It is honoring our hint, except that the ORDERED hint was applied after the SU transformation, and so the unnested view is the leading row source. A version of the transformed query from the 10053 trace file is printed below. With the ORDERED hint, the CBO must choose the unnested view as the leading row source; use the LEADING hint instead of the ORDERED hint if necessary (a sketch follows the transformed query below).

SELECT /*+ ORDERED */ "T1"."N1" "N1","T2"."N1" "N1" FROM
(SELECT MAX("T2"."N2") "MAX(N2)","T2"."N1" "ITEM_1" FROM "CBO3"."T2" "T2" GROUP BY "T2"."N1") "VW_SQ_2",
"CBO3"."T1" "T1","CBO3"."T2" "T2" WHERE "T1"."N1"="T2"."N1" AND "T1"."N1">"VW_SQ_2"."MAX(N2)" AND "VW_SQ_2"."ITEM_1"="T1"."N1"
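
As a sketch, the LEADING alternative names the desired row sources explicitly, so it is not tied to the post-transformation FROM-clause order:

select /*+ gather_plan_statistics LEADING(t1 t2) */ t1.n1, t2.n1
from t1, t2
where t1.n1 = t2.n1
  and t1.n1 > (select max(n2) from t2 where t2.n1 = t1.n1);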

Summary

There are too many techniques to cover in one blog entry. We will discuss these features further in upcoming posts. This can also be read in a more conventional format: query_transformation_part_1_orainternals

10.2.0.5

If I didn't laugh, I'd cry.

Mid-way through a series of posts, in the next of which I plan to talk about some of the myriad bugs and related patches for DBMS_STATS in 10.2.0.4, I discovered via several blogs that the 10.2.0.5 patchset for Linux x86-64 has been released ;-)