Does the Query Optimizer Cost PX Distribution Methods?

The short answer to this question is “yes”, it does. Unfortunately, the distribution costs are not externalized through the execution plans and, as a result, this limitation (yes, it is really a limitation in the current implementation, not a bug) confuses everyone who carefully looks at the information provided in an execution plan of a [...]

Join Elimination

In some specific situations the query optimizer is able to completely avoid executing a join even if a SQL statement explicitly calls for it. Two cases are currently covered by this optimization technique, which is called join elimination. The first one was introduced in Oracle Database 10g Release 2, the second one in Oracle [...]

Does CREATE INDEX Gather Global Statistics?

You can add the COMPUTE STATISTICS clause to the CREATE INDEX statement. It instructs the database to gather and store index statistics in the data dictionary while creating the index. This is useful because the overhead associated with gathering statistics while executing this SQL statement is negligible. In Oracle9i, the gathering of [...]

Optimizer cleverness

At present I'm quite busy and therefore don't have much time to spend on writing blog notes, but I couldn't resist publishing this small and simple test case.

Often you can read (mostly unqualified) rants in various places and forums about how stupid, unpredictable etc. the Cost Based Optimizer seems to be.

So I think it's time to demonstrate how clever the optimizer sometimes can be.

Consider the following setup:

drop table t_opt_clever purge;

-- Use PCTFREE 99 so that only one row per (leaf) block
-- This can tell us how many "rows" had to be inspected
-- by checking the number of (leaf) blocks accessed
-- Unfortunately Oracle (usually) doesn't provide the information
-- how many rows have been accessed in the execution plan,
-- but only how many rows are returned by an operation
create table t_opt_clever (
id not null constraint pk_opt_clever primary key,
col1 not null,
col2 not null,
col3 not null,
col4 not null,
col5 not null,
filler
)
pctfree 99
pctused 1
as
select
level as id
, round(dbms_random.value(0, 200)) as col1
, round(dbms_random.value(0, 400)) as col2
, case
when level <= 666
then 'FIRST_BUCKET'
when level <= 833
then 'SECOND_BUCKET'
when level <= 1000
then 'THIRD_BUCKET'
end as col3
, round(dbms_random.value(0, 600)) as col4
, round(dbms_random.value(0, 800)) as col5
, rpad('x', 100, 'x') as filler
from
dual
connect by
level <= 1000;

create index idx_opt_clever1 on t_opt_clever (col5, col1, col4, col2) pctfree 99 compute statistics;

create index idx_opt_clever2 on t_opt_clever (col5, col1, col3, col4, col2) pctfree 99 compute statistics;

exec dbms_stats.gather_table_stats(null, 'T_OPT_CLEVER')

-- scale the table and index by factor 1000
exec dbms_stats.set_table_stats(null, 'T_OPT_CLEVER', numrows => 1000000, numblks => 30000)

exec dbms_stats.set_index_stats(null, 'PK_OPT_CLEVER', numrows=> 1000000, numlblks => 2000, numdist=>1000000, clstfct => 100000, indlevel => 3)

exec dbms_stats.set_index_stats(null, 'IDX_OPT_CLEVER1', numrows=> 1000000, numlblks => 14000, numdist=>1000000, clstfct => 1000000, indlevel => 3)

exec dbms_stats.set_index_stats(null, 'IDX_OPT_CLEVER2', numrows=> 1000000, numlblks => 16000, numdist=>1000000, clstfct => 1000000, indlevel => 3)

Basically this simulates a 1,000,000 row table with two suboptimal indexes, given the following Top 100 query:

-- Now which index can be efficiently used by the optimizer?
select
*
from (
select
*
from
t_opt_clever
where
col3 = 'FIRST_BUCKET'
order by
col3, col5, col1, col4, col2
)
where
rownum <= 100;

Now what do you think: can one of these indexes be used efficiently by the optimizer, and if so, which one?

At first sight neither index can be used to satisfy the requested sort order (which would avoid a costly full scan of the data and a corresponding SORT ORDER BY (STOPKEY) operation), nor can they be used efficiently to filter the data, because the filter predicate is not among their leading columns.

Let's check the result:

SQL> select * from table(dbms_xplan.display_cursor(null, null, '+COST ALLSTATS LAST'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID fz6vky8n5a3xq, child number 0
-------------------------------------
select * from ( select * from t_opt_clever where
col3 = 'FIRST_BUCKET' order by col3, col5, col1, col4, col2 ) where
rownum <= 100

Plan hash value: 4203008252

---------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | Reads |
---------------------------------------------------------------------------------------------------------------------------------
|* 1 | COUNT STOPKEY | | 1 | | | 100 |00:00:00.29 | 256 | 100 |
| 2 | VIEW | | 1 | 101 | 109 (0)| 100 |00:00:00.29 | 256 | 100 |
| 3 | TABLE ACCESS BY INDEX ROWID| T_OPT_CLEVER | 1 | 333K| 109 (0)| 100 |00:00:00.29 | 256 | 100 |
|* 4 | INDEX FULL SCAN | IDX_OPT_CLEVER2 | 1 | 101 | 8 (0)| 100 |00:00:00.01 | 156 | 0 |
---------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter(ROWNUM<=100)
4 - access("COL3"='FIRST_BUCKET')
filter("COL3"='FIRST_BUCKET')

24 rows selected.

That is quite interesting: the index IDX_OPT_CLEVER2 is used and no SORT ORDER BY operation can be found in the execution plan, although the index doesn't match the requested sort order. And here comes the cleverness of the optimizer: it recognizes that, due to the filter predicate on COL3, this index can actually be used to satisfy the sort order, because COL3 will always hold the constant value of the filter predicate and is therefore irrelevant for the resulting order. The same applies to IDX_OPT_CLEVER1, by the way.

But IDX_OPT_CLEVER2 is more efficient than IDX_OPT_CLEVER1 because the filter predicate can already be evaluated on the index data, eliminating rows before visiting the table. Depending on the clustering factor this can make a significant difference to the cost of the operation, since random accesses to table rows potentially require reading a different block per row.

This can be seen when forcing the usage of IDX_OPT_CLEVER1:
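The statement used, reconstructed here from the plan output below (note that rowsource statistics need to be enabled, e.g. via STATISTICS_LEVEL = ALL or a GATHER_PLAN_STATISTICS hint, for the A-Rows/A-Time columns of the ALLSTATS LAST format to be populated):

select
*
from (
select /*+ index(t_opt_clever idx_opt_clever1) */
*
from
t_opt_clever
where
col3 = 'FIRST_BUCKET'
order by
col3, col5, col1, col4, col2
)
where
rownum <= 100;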

SQL> select * from table(dbms_xplan.display_cursor(null, null, '+COST ALLSTATS LAST'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 5tgmgfvyyx6z6, child number 0
-------------------------------------
select * from ( select /*+ index(t_opt_clever idx_opt_clever1) */ * from
t_opt_clever where col3 = 'FIRST_BUCKET' order by col3,
col5, col1, col4, col2 ) where rownum <= 100

Plan hash value: 678132971

---------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | Reads |
---------------------------------------------------------------------------------------------------------------------------------
|* 1 | COUNT STOPKEY | | 1 | | | 100 |00:00:00.20 | 310 | 54 |
| 2 | VIEW | | 1 | 101 | 312 (1)| 100 |00:00:00.20 | 310 | 54 |
|* 3 | TABLE ACCESS BY INDEX ROWID| T_OPT_CLEVER | 1 | 101 | 312 (1)| 100 |00:00:00.20 | 310 | 54 |
| 4 | INDEX FULL SCAN | IDX_OPT_CLEVER1 | 1 | 1000K| 8 (0)| 154 |00:00:00.01 | 156 | 0 |
---------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter(ROWNUM<=100)
3 - filter("COL3"='FIRST_BUCKET')

23 rows selected.

Two things can be seen here:

1. The optimizer is again smart and is able to avoid the SORT ORDER BY operation, because the index IDX_OPT_CLEVER1 can also be used to return the data in the requested order, again because COL3 is constant.

2. Using IDX_OPT_CLEVER1 is less efficient because more table rows have to be visited to apply the filter predicate.

The fact that the indexes can only be used efficiently under this special circumstance can be verified by changing the filter predicate so that COL3 can have more than a single value, which makes it no longer possible to avoid a SORT ORDER BY operation:

-- Change the filter predicate and force index
select
*
from (
select /*+ index(t_opt_clever idx_opt_clever2) */
*
from
t_opt_clever
where
col3 in ('FIRST_BUCKET', 'SECOND_BUCKET')
order by
col3, col5, col1, col4, col2
)
where
rownum <= 100;
SQL> select * from table(dbms_xplan.display_cursor(null, null, '+COST ALLSTATS LAST'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID axr6u0yvdk50f, child number 0
-------------------------------------
select * from ( select /*+ index(t_opt_clever idx_opt_clever2) */ * from
t_opt_clever where col3 in ('FIRST_BUCKET', 'SECOND_BUCKET') order by col3, col5, col1,
col4, col2 ) where rownum <= 100

Plan hash value: 2229390605

----------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------------------
|* 1 | COUNT STOPKEY | | 1 | | | 100 |00:00:00.02 | 1835 | | | |
| 2 | VIEW | | 1 | 666K| 703K (1)| 100 |00:00:00.02 | 1835 | | | |
|* 3 | SORT ORDER BY STOPKEY | | 1 | 666K| 703K (1)| 100 |00:00:00.02 | 1835 | 20480 | 20480 |18432 (0)|
| 4 | TABLE ACCESS BY INDEX ROWID| T_OPT_CLEVER | 1 | 666K| 683K (1)| 833 |00:00:00.01 | 1835 | | | |
|* 5 | INDEX FULL SCAN | IDX_OPT_CLEVER2 | 1 | 666K| 16100 (1)| 833 |00:00:00.01 | 1002 | | | |
----------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter(ROWNUM<=100)
3 - filter(ROWNUM<=100)
5 - filter(("COL3"='FIRST_BUCKET' OR "COL3"='SECOND_BUCKET'))

25 rows selected.

Without the index hint the optimizer chooses a full table scan. Forcing e.g. the index IDX_OPT_CLEVER2 shows that indeed all rows had to be processed first and additionally a sort operation was necessary.

So it's interesting to note that the optimizer recognizes special cases where single value predicates allow an index usage that otherwise wouldn't be possible. This is a nice move, since it allows the above query to be performed quite efficiently although the setup is suboptimal (e.g. a different index with COL3 as leading column, or an appropriate IOT, could be more suitable, depending on what else is done with the table). Under these (simulated) circumstances this optimization makes quite a difference compared to the otherwise inevitable full table scan of a 30,000 block table.
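For illustration, here's a sketch (not part of the test case above) of what such a more suitable index could look like; with COL3 as the leading column the filter predicate becomes an access predicate and the remaining columns deliver the requested order directly:

create index idx_opt_clever3 on t_opt_clever (col3, col5, col1, col4, col2);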

By the way, the above results could be reproduced on 10.2.0.4 and 11.1.0.7 Win32 using default system statistics and an 8KB LMT MSSM tablespace.

PLAN_HASH_VALUE - How equal (and stable?) are your execution plans - part 2

Back to part 1

In the previous post I've shown some characteristics of the PLAN_HASH_VALUE information provided by Oracle.

Now if you want greater control over how a hash value for an execution plan is calculated, in particular regarding some PLAN_TABLE columns that are not used to calculate the provided PLAN_HASH_VALUE, e.g. the filter and access predicates, here are some ideas on how to do it yourself.

Obviously we need to calculate a hash value across multiple columns and multiple rows from a given PLAN_TABLE-equivalent source, which could be e.g. V$SQL_PLAN or a real PLAN_TABLE. This requires the following:

- A function calculating a hash value from a given input
- The input needs to represent the columns and rows from the PLAN_TABLE
- Therefore ideally we need to combine the columns into a single expression
- And finally combine the expressions from the different rows again into a single expression that can be fed into the hash value function.

In Oracle 10.2 and later all these things can be done with built-in functions. In previous releases some more work and user-defined functions are required, but it's definitely feasible without too much effort.

A function calculating a hash value from a given input

Apart from any user-defined functions that calculate a hash value there are multiple options available provided by Oracle as built-in functions.

In pre-10.2 versions there are only a few built-in functions that calculate a hash value.

One is DBMS_UTILITY.GET_HASH_VALUE which has already been there since pre-9i releases (at least 8i, but likely even earlier). The other one is DBMS_OBFUSCATION_TOOLKIT.MD5 which has been introduced in 9i.

The main difference is that the MD5 function, as the name suggests, calculates a hash value according to the standard MD5 algorithm, whereas GET_HASH_VALUE simply returns a computed 31-bit hash value.

Version 10 adds some more functions, in particular the built-in ORA_HASH function and the DBMS_CRYPTO package that basically supersedes the DBMS_OBFUSCATION_TOOLKIT with enhanced functionality.

These functions differ in some details but basically can all be used for the given task.
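As a quick illustration (a minimal sketch with an arbitrary input string), ORA_HASH and DBMS_UTILITY.GET_HASH_VALUE can be called directly from SQL:

select
ora_hash('some arbitrary input') as ora_hash_value
, dbms_utility.get_hash_value('some arbitrary input', 0, 2147483647) as get_hash_value
from
dual;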

Here I'm just highlighting some noticeable variations:

The DBMS_OBFUSCATION_TOOLKIT.MD5 function is not very well documented and quite cumbersome to use. It cannot be used directly from SQL (you always get "ORA-06553: PLS-307: too many declarations of 'MD5' match this call"), and it throws an exception when passed a NULL value. Therefore it's probably best wrapped in a user-defined function, e.g. something like this:

create or replace function hashkey(in_string in varchar2) return varchar2 as
begin
if in_string is null then
return to_char(null);
else
return rawtohex(utl_raw.cast_to_raw(sys.dbms_obfuscation_toolkit.md5(input_string=>in_string)));
end if;
end;
/

Note that this version of MD5 takes a string as input but seems to return a raw value declared as VARCHAR2. Therefore some cumbersome conversions are required (cast to RAW and then RAW to hex) to get proper VARCHAR2 output.

The DBMS_CRYPTO package offers some overloaded versions of the HASH function. It's interesting to note that DBMS_CRYPTO.HASH offers the capability to process LOBs as input to calculate the hash value.
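For example (a sketch; the constant 2 corresponds to DBMS_CRYPTO.HASH_MD5, and EXECUTE privilege on DBMS_CRYPTO is required):

select
sys.dbms_crypto.hash(to_clob('some arbitrary input'), 2) as md5_of_clob
from
dual;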

The most interesting function due to its simplicity and versatility is the ORA_HASH function. One special feature is that it is capable of taking nested tables as input in addition to the normal built-in scalar data types.

Here's a simple performance comparison of the different functions (performed on 11.1.0.7 Win32):

SQL>
SQL> drop function hashkey;

Function dropped.

Elapsed: 00:00:00.04
SQL>
SQL> create or replace function hashkey(in_string in varchar2) return varchar2 as
2 begin
3 if in_string is null then
4 return to_char(null);
5 else
6 return rawtohex(utl_raw.cast_to_raw(sys.dbms_obfuscation_toolkit.md5(input_string=>in_string)));
7 end if;
8 end;
9 /

Function created.

Elapsed: 00:00:00.04

SQL> drop table random_data purge;

Table dropped.

Elapsed: 00:00:00.04
SQL>
SQL> exec dbms_random.seed(0)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL>
SQL> create table random_data
2 as
3 select
4 dbms_random.string('A', 100) as the_data
5 from
6 dual
7 connect by
8 level <= 100000;

Table created.

Elapsed: 00:00:31.30
SQL>
SQL> insert /*+ append */ into random_data
2 select * from random_data;

100000 rows created.

Elapsed: 00:00:01.17
SQL>
SQL> commit;

Commit complete.

Elapsed: 00:00:00.01
SQL>
SQL> insert /*+ append */ into random_data
2 select * from random_data;

200000 rows created.

Elapsed: 00:00:01.06
SQL>
SQL> commit;

Commit complete.

Elapsed: 00:00:00.01
SQL>
SQL> select count(the_data)
2 from (
3 select
4 the_data
5 from
6 random_data
7 );

COUNT(THE_DATA)
---------------
400000

Elapsed: 00:00:01.66
SQL>
SQL> select count(the_data)
2 from (
3 select
4 the_data
5 from
6 random_data
7 );

COUNT(THE_DATA)
---------------
400000

Elapsed: 00:00:01.14
SQL>
SQL> select max(the_hash)
2 from (
3 select
4 ora_hash(the_data) as the_hash
5 from
6 random_data
7 );

MAX(THE_HASH)
-------------
4294934713

Elapsed: 00:00:02.09
SQL>
SQL> select max(the_hash)
2 from (
3 select
4 ora_hash(the_data) as the_hash
5 from
6 random_data
7 );

MAX(THE_HASH)
-------------
4294934713

Elapsed: 00:00:01.54
SQL>
SQL> select max(the_hash)
2 from (
3 select
4 dbms_utility.get_hash_value(the_data, 0, 2147483647) as the_hash
5 from
6 random_data
7 );

MAX(THE_HASH)
-------------
2147451066

Elapsed: 00:00:04.48
SQL>
SQL> select max(the_hash)
2 from (
3 select
4 dbms_utility.get_hash_value(the_data, 0, 2147483647) as the_hash
5 from
6 random_data
7 );

MAX(THE_HASH)
-------------
2147451066

Elapsed: 00:00:04.08
SQL>
SQL> select max(the_hash)
2 from (
3 select
4 hashkey(the_data) as the_hash
5 from
6 random_data
7 );

MAX(THE_HASH)
--------------------------------------------------------------------------------
FFFF54FE7872DBDD4E870FFC20C1DDE0

Elapsed: 00:00:15.23
SQL>
SQL> select max(the_hash)
2 from (
3 select
4 hashkey(the_data) as the_hash
5 from
6 random_data
7 );

MAX(THE_HASH)
--------------------------------------------------------------------------------
FFFF54FE7872DBDD4E870FFC20C1DDE0

Elapsed: 00:00:14.19
SQL>
SQL> select max(the_hash)
2 from (
3 select
4 dbms_crypto.hash(utl_raw.cast_to_raw(the_data), 2) as the_hash
5 from
6 random_data
7 );

MAX(THE_HASH)
--------------------------------------------------------------------------------
FFFF54FE7872DBDD4E870FFC20C1DDE0

Elapsed: 00:00:14.42
SQL>
SQL> select max(the_hash)
2 from (
3 select
4 dbms_crypto.hash(utl_raw.cast_to_raw(the_data), 2) as the_hash
5 from
6 random_data
7 );

MAX(THE_HASH)
--------------------------------------------------------------------------------
FFFF54FE7872DBDD4E870FFC20C1DDE0

Elapsed: 00:00:13.66
SQL>

So it's obvious that the ORA_HASH function has the least overhead (no PL/SQL context switch required) and that the standard MD5 based functions have the most. On the other hand the MD5 results are 128-bit hashes and therefore much more robust regarding hash collisions, but that doesn't really matter in our case here.

For our particular case here the DBMS_CRYPTO.HASH and ORA_HASH functions are best suited because they are capable of handling large concatenations. In the case of DBMS_CRYPTO.HASH this is done via support for CLOBs; in the case of ORA_HASH via its support for collections.

The input needs to represent the columns and rows from the PLAN_TABLE

Concatenating the columns of interest to build a single expression from a PLAN_TABLE row is straightforward; however, it poses the potential issue that some of the columns likely to be included are of type VARCHAR2(4000), so a simple VARCHAR2 based concatenation is not possible. You either need to use a CLOB instead or somehow shorten the expression. In order to avoid costly temporary LOB concatenation operations I prefer the latter and again use a hash function on the large columns, shortening the expression significantly and allowing the whole row expression to be represented in a single VARCHAR2(4000).

Building a single expression for an execution plan out of the row expressions is again something that can be addressed by well-known techniques that transform rows to columns (basically a pivot operation which is explicitly supported by the PIVOT operator introduced in 11g), or in this particular case you could call it a "Rows to string" or "String Aggregation" operation.

There are several ways to accomplish this using SQL; some of them are demonstrated e.g. on the SQL Snippets site.

In this particular case again we have the potential issue that the resulting string could exceed the 4,000 bytes limit imposed by the VARCHAR2 data type, and therefore some of the available techniques might not be appropriate.

For instance the "hierarchical query" approach to string aggregation will fail in such cases with "result of string concatenation too long", raised by the SYS_CONNECT_BY_PATH function used to concatenate the string.
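Here's a minimal self-contained sketch of that technique (toy data instead of plan data); with sufficiently long results the SYS_CONNECT_BY_PATH call is exactly where the error is raised:

select
ltrim(max(sys_connect_by_path(val, '-')), '-') as aggregated
from (
select 'A' as val, 1 as rn from dual
union all
select 'B', 2 from dual
union all
select 'C', 3 from dual
)
start with rn = 1
connect by rn = prior rn + 1;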

Using nested tables, the built-in 10g COLLECT aggregate function and a custom TO_STRING function capable of handling and generating CLOBs cannot be used either, because Oracle doesn't guarantee the order of aggregates: the order of the aggregated strings might not be the same for the same input data, leading to different hash values.

In 10g there are a couple of favorable approaches to this:

1. Use the MODEL clause to perform the aggregation. This makes it possible to control the aggregation process and therefore prevent any error caused by strings longer than 4,000 characters/bytes. Furthermore the MODEL clause implicitly defines an order of processing via the dimensions defined in the model. The order is important in our case, since the concatenated values always need to be ordered in the same way, otherwise the same execution plan might result in different hash values.

2. Use some other built-in aggregation method capable of handling CLOBs, e.g. the XMLType based XMLAGG function can return a CLOB from the resulting XMLType. Depending on what you do, be aware of specialties like the treatment of XML special characters (ampersand, less than, greater than, single quote and double quote), which can be addressed using the DBMS_XMLGEN.CONVERT function. In our case this is not an issue, since we simply want to generate a hash value and are not interested in any special character treatment as long as it is done consistently.

3. Use a custom aggregation function, similar to Tom Kyte's STRAGG custom aggregate function. This one also needs special treatment to address the requirement to order the aggregates in a deterministic way.

4. Use nested tables and the ORA_HASH function. This seems to be the most straightforward solution in 10g, since it addresses the sort order issue due to the implicit ordering performed by the ORA_HASH function, avoids any costly LOB operations and requires the least coding effort.

In pre-10g versions only a couple of these options remain:

1. The MODEL clause is not supported in pre-10g releases

2. Use some other built-in aggregation method capable of handling CLOBs, e.g. the XMLAGG function is supported in 9i

3. Use a custom aggregation function

4. ORA_HASH is not supported either

One particular issue in 9i is that it doesn't provide a built-in HASH function that supports large concatenations. Neither DBMS_CRYPTO.HASH nor ORA_HASH are supported.

Examples

So here are some examples of the aforementioned options. If you're going to use V$SQL_PLAN as the source, note that in order to minimize latch contention and the risk of inconsistencies when reading from V$SQL_PLAN (Oracle doesn't guarantee any read consistency on dynamic performance views), you might want to populate a (global temporary) table from V$SQL_PLAN, or at least use the (still undocumented) MATERIALIZE hint of the subquery factoring WITH clause to generate a global temporary table on the fly from its contents.
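A minimal sketch of both variants (the table name is made up):

-- Snapshot V$SQL_PLAN into a global temporary table first
create global temporary table my_sql_plan_gtt
on commit preserve rows
as
select * from v$sql_plan where 1 = 2;

insert into my_sql_plan_gtt
select * from v$sql_plan;

-- Or materialize the contents on the fly
with sql_plan_data as (
select /*+ materialize */ * from v$sql_plan
)
select count(*) from sql_plan_data;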

For the following examples a simple table has been created via CTAS:

create table my_sql_plan
as
select * from v$sql_plan;

Note that the examples cover all available columns of V$SQL_PLAN (version 10.2.0.4 in this case), starting with the OPERATION column up to, but excluding, the OTHER_XML column.

This means that you get different hash values even if e.g. only the BYTES or CPU_COST columns are different. For a reasonable analysis this is probably too granular and you should omit those columns that you don't want to consider for the hash value calculation.
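For instance, a coarser per-row hash that only reflects the plan shape and the predicates, ignoring costs, cardinalities and the like, could look like this (a sketch):

select
hash_value
, child_number
, ora_hash(
operation
|| '-' || options
|| '-' || object_name
|| '-' || id
|| '-' || parent_id
|| '-' || ora_hash(access_predicates)
|| '-' || ora_hash(filter_predicates)
) as hash_path_row
from
my_sql_plan;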

1. Calculating a plan hash value using the MODEL clause for string aggregation

select
hash_value
, plan_hash_value
, child_number
, sys.dbms_crypto.hash(aggr1 || aggr2 || aggr3 || dummy_clob, 2) as the_hash
from (
select
hash_value
, plan_hash_value
, child_number
, aggr1
, aggr2
, aggr3
from (
select
hash_value
, plan_hash_value
, child_number
, id
, dense_rank() over (order by hash_value, plan_hash_value, child_number) as rnk
, sys.dbms_crypto.hash(utl_raw.cast_to_raw(
operation
|| '-' || options
|| '-' || object_node
|| '-' || object#
|| '-' || object_owner
|| '-' || object_name
|| '-' || object_alias
|| '-' || object_type
|| '-' || optimizer
|| '-' || id
|| '-' || parent_id
|| '-' || depth
|| '-' || position
|| '-' || search_columns
|| '-' || cost
|| '-' || cardinality
|| '-' || bytes
|| '-' || other_tag
|| '-' || partition_start
|| '-' || partition_stop
|| '-' || partition_id
|| '-' || dbms_crypto.hash(utl_raw.cast_to_raw(substr(other, 1, 2000)), 2)
|| '-' || dbms_crypto.hash(utl_raw.cast_to_raw(substr(other, 2001)), 2)
|| '-' || distribution
|| '-' || cpu_cost
|| '-' || io_cost
|| '-' || temp_space
|| '-' || dbms_crypto.hash(utl_raw.cast_to_raw(substr(access_predicates, 1, 2000)), 2)
|| '-' || dbms_crypto.hash(utl_raw.cast_to_raw(substr(access_predicates, 2001)), 2)
|| '-' || dbms_crypto.hash(utl_raw.cast_to_raw(substr(filter_predicates, 1, 2000)), 2)
|| '-' || dbms_crypto.hash(utl_raw.cast_to_raw(substr(filter_predicates, 2001)), 2)
|| '-' || dbms_crypto.hash(utl_raw.cast_to_raw(substr(projection, 1, 2000)), 2)
|| '-' || dbms_crypto.hash(utl_raw.cast_to_raw(substr(projection, 2001)), 2)
|| '-' || time
|| '-' || qblock_name
|| '-' || dbms_crypto.hash(utl_raw.cast_to_raw(substr(remarks, 1, 2000)), 2)
|| '-' || dbms_crypto.hash(utl_raw.cast_to_raw(substr(remarks, 2001)), 2)
)
, 2) as hash_path_row
from (
select
*
from
my_sql_plan
)
)
model
return updated rows
partition by (
rnk
)
dimension by (
id
)
measures (
cast(hash_path_row as varchar2(4000)) as aggr1
, cast('' as varchar2(4000)) as aggr2
, cast('' as varchar2(4000)) as aggr3
, hash_value
, plan_hash_value
, child_number
)
rules
iterate (10000)
until presentv(aggr1[ITERATION_NUMBER + 2], 1, 2) = 2 (
aggr1[0] = aggr1[0] ||
case
when length(aggr1[0]) + length(aggr1[ITERATION_NUMBER + 1]) + 1 > 4000
or length(aggr2[0]) + length(aggr1[ITERATION_NUMBER + 1]) + 1 > 4000
then ''
else '-' || aggr1[ITERATION_NUMBER+1]
end,
aggr2[0] = aggr2[0] ||
case
when length(aggr1[0]) + length(aggr1[ITERATION_NUMBER + 1]) + 1 > 4000
and length(aggr2[0]) + length(aggr1[ITERATION_NUMBER + 1]) + 1 < 4000
then '-' || aggr1[ITERATION_NUMBER+1]
else ''
end,
aggr3[0] = aggr3[0] ||
case
when length(aggr2[0]) + length(aggr1[ITERATION_NUMBER + 1]) + 1 > 4000
then '-' || aggr1[ITERATION_NUMBER+1]
else ''
end
)
),
(
select
substr(other_xml, 1, 0) as dummy_clob
from
my_sql_plan
where
rownum <= 1
);

Note that a dummy CLOB is used for the final string concatenation. Obviously other hash functions could be used instead of DBMS_CRYPTO.HASH, but they are limited by the 4,000 byte maximum size of VARCHAR2 (except for ORA_HASH, see below). Due to the RAW input type the columns exceeding 2,000 bytes are split apart, since a RAW value can have a maximum of 2,000 bytes. However I've encountered varying behaviour: under some circumstances the conversion failed when the VARCHAR2 data exceeded 2,000 bytes, while some other tests successfully worked on converted VARCHAR2 values larger than 2,000 bytes.

2. Calculating a plan hash value using the XMLAGG function for string aggregation

select
hash_value
, plan_hash_value
, child_number
, sys.dbms_crypto.hash(
substr(
dbms_xmlgen.convert(
extract(
xmlagg(
xmlelement("V", '-' || rawtohex(hash_path_row)
)
order by rawtohex(hash_path_row)
), '/V/text()'
).getclobval()
, 1
)
, 2
)
, 2
) as the_hash
from (
select
hash_value
, plan_hash_value
, child_number
, hash_path_row
from (
select
hash_value
, plan_hash_value
, child_number
, id
, dense_rank() over (order by hash_value, plan_hash_value, child_number) as rnk
, sys.dbms_crypto.hash(utl_raw.cast_to_raw(
operation
|| '-' || options
|| '-' || object_node
|| '-' || object#
|| '-' || object_owner
|| '-' || object_name
|| '-' || object_alias
|| '-' || object_type
|| '-' || optimizer
|| '-' || id
|| '-' || parent_id
|| '-' || depth
|| '-' || position
|| '-' || search_columns
|| '-' || cost
|| '-' || cardinality
|| '-' || bytes
|| '-' || other_tag
|| '-' || partition_start
|| '-' || partition_stop
|| '-' || partition_id
|| '-' || dbms_crypto.hash(utl_raw.cast_to_raw(substr(other, 1, 2000)), 2)
|| '-' || dbms_crypto.hash(utl_raw.cast_to_raw(substr(other, 2001)), 2)
|| '-' || distribution
|| '-' || cpu_cost
|| '-' || io_cost
|| '-' || temp_space
|| '-' || dbms_crypto.hash(utl_raw.cast_to_raw(substr(access_predicates, 1, 2000)), 2)
|| '-' || dbms_crypto.hash(utl_raw.cast_to_raw(substr(access_predicates, 2001)), 2)
|| '-' || dbms_crypto.hash(utl_raw.cast_to_raw(substr(filter_predicates, 1, 2000)), 2)
|| '-' || dbms_crypto.hash(utl_raw.cast_to_raw(substr(filter_predicates, 2001)), 2)
|| '-' || dbms_crypto.hash(utl_raw.cast_to_raw(substr(projection, 1, 2000)), 2)
|| '-' || dbms_crypto.hash(utl_raw.cast_to_raw(substr(projection, 2001)), 2)
|| '-' || time
|| '-' || qblock_name
|| '-' || dbms_crypto.hash(utl_raw.cast_to_raw(substr(remarks, 1, 2000)), 2)
|| '-' || dbms_crypto.hash(utl_raw.cast_to_raw(substr(remarks, 2001)), 2)
)
, 2) as hash_path_row
from (
select
*
from
my_sql_plan
)
)
)
group by
hash_value
, plan_hash_value
, child_number;

Note that I've added the DBMS_XMLGEN.CONVERT function for convenience, to demonstrate how it can be used to deal with XML special characters like ampersand, greater than etc. Again, different hash functions could be used instead of DBMS_CRYPTO.HASH, but they are limited by the 4,000 byte maximum size of VARCHAR2 (except for ORA_HASH, see below).

3. Calculating a plan hash value using a user-defined aggregation function for string aggregation

drop function aggregate_concat_ord;

drop type agg_concat_ord;

drop type table_of_varchar;

create or replace type table_of_varchar as table of varchar2(4000);
/

create or replace type agg_concat_ord as object
(
vals table_of_varchar,
static function ODCIAggregateInitialize (init_context IN OUT agg_concat_ord) return number,
member function ODCIAggregateIterate (self IN OUT agg_concat_ord, this_value IN varchar2) return number,
member function ODCIAggregateTerminate (self IN agg_concat_ord, result OUT clob, flags IN number) return number,
member function ODCIAggregateMerge (self IN OUT agg_concat_ord, merge_context IN agg_concat_ord) return number
);
/

create or replace type body agg_concat_ord is
static function ODCIAggregateInitialize (init_context in out agg_concat_ord) return number is
begin
init_context := agg_concat_ord (null);
init_context.vals := table_of_varchar();
return ODCIConst.Success;
end;
member function ODCIAggregateIterate (self in out agg_concat_ord, this_value in varchar2) return number is
begin
self.vals.extend;
self.vals (self.vals.last) := this_value;
return ODCIConst.Success;
end;
member function ODCIAggregateTerminate (self IN agg_concat_ord, result OUT clob, flags IN number ) return number is
begin
for r in (select column_value from table (self.vals) order by 1) loop
result := result || r.column_value;
end loop;
return ODCIConst.Success;
end;
member function ODCIAggregateMerge (self IN OUT agg_concat_ord, merge_context IN agg_concat_ord) return number is
i integer;
begin
i := merge_context.vals.first;
while i is not null loop
self.vals.extend;
self.vals (self.vals.last) := merge_context.vals (i);
i := merge_context.vals.next (i);
end loop;
return ODCIConst.Success;
end;
end;
/

create or replace function aggregate_concat_ord (input varchar2) return clob parallel_enable
aggregate using agg_concat_ord;
/

select
hash_value
, plan_hash_value
, child_number
, sys.dbms_crypto.hash(substr(aggregate_concat_ord('-' || rawtohex(hash_path_row)), 2), 2) as the_hash
from (
select
hash_value
, plan_hash_value
, child_number
, hash_path_row
from (
select
hash_value
, plan_hash_value
, child_number
, id
, dense_rank() over (order by hash_value, plan_hash_value, child_number) as rnk
, sys.dbms_crypto.hash(utl_raw.cast_to_raw(
operation
|| '-' || options
|| '-' || object_node
|| '-' || object#
|| '-' || object_owner
|| '-' || object_name
|| '-' || object_alias
|| '-' || object_type
|| '-' || optimizer
|| '-' || id
|| '-' || parent_id
|| '-' || depth
|| '-' || position
|| '-' || search_columns
|| '-' || cost
|| '-' || cardinality
|| '-' || bytes
|| '-' || other_tag
|| '-' || partition_start
|| '-' || partition_stop
|| '-' || partition_id
|| '-' || dbms_crypto.hash(utl_raw.cast_to_raw(substr(other, 1, 2000)), 2)
|| '-' || dbms_crypto.hash(utl_raw.cast_to_raw(substr(other, 2001)), 2)
|| '-' || distribution
|| '-' || cpu_cost
|| '-' || io_cost
|| '-' || temp_space
|| '-' || dbms_crypto.hash(utl_raw.cast_to_raw(substr(access_predicates, 1, 2000)), 2)
|| '-' || dbms_crypto.hash(utl_raw.cast_to_raw(substr(access_predicates, 2001)), 2)
|| '-' || dbms_crypto.hash(utl_raw.cast_to_raw(substr(filter_predicates, 1, 2000)), 2)
|| '-' || dbms_crypto.hash(utl_raw.cast_to_raw(substr(filter_predicates, 2001)), 2)
|| '-' || dbms_crypto.hash(utl_raw.cast_to_raw(substr(projection, 1, 2000)), 2)
|| '-' || dbms_crypto.hash(utl_raw.cast_to_raw(substr(projection, 2001)), 2)
|| '-' || time
|| '-' || qblock_name
|| '-' || dbms_crypto.hash(utl_raw.cast_to_raw(substr(remarks, 1, 2000)), 2)
|| '-' || dbms_crypto.hash(utl_raw.cast_to_raw(substr(remarks, 2001)), 2)
)
, 2) as hash_path_row
from (
select
*
from
my_sql_plan
)
)
)
group by
hash_value
, plan_hash_value
, child_number;

Note that the user-defined aggregation function has two noteworthy characteristics: it ensures an order of the aggregates and it generates a CLOB value.
Again, different hash functions could be used instead of DBMS_CRYPTO.HASH, with the known limitations.

4. Calculating a plan hash value using the ORA_HASH and a user-defined type for string aggregation

create or replace type ntt_varchar2 as table of varchar2(4000);
/

select
hash_value
, plan_hash_value
, child_number
, ora_hash(cast(collect(to_char(hash_path_row, 'TM')) as ntt_varchar2)) as the_hash
from (
select
hash_value
, plan_hash_value
, child_number
, hash_path_row
from (
select
hash_value
, plan_hash_value
, child_number
, id
, dense_rank() over (order by hash_value, plan_hash_value, child_number) as rnk
, ora_hash(
operation
|| '-' || options
|| '-' || object_node
|| '-' || object#
|| '-' || object_owner
|| '-' || object_name
|| '-' || object_alias
|| '-' || object_type
|| '-' || optimizer
|| '-' || id
|| '-' || parent_id
|| '-' || depth
|| '-' || position
|| '-' || search_columns
|| '-' || cost
|| '-' || cardinality
|| '-' || bytes
|| '-' || other_tag
|| '-' || partition_start
|| '-' || partition_stop
|| '-' || partition_id
|| '-' || ora_hash(other)
|| '-' || distribution
|| '-' || cpu_cost
|| '-' || io_cost
|| '-' || temp_space
|| '-' || ora_hash(access_predicates)
|| '-' || ora_hash(filter_predicates)
|| '-' || ora_hash(projection)
|| '-' || time
|| '-' || qblock_name
|| '-' || ora_hash(remarks)
) as hash_path_row
from (
select
*
from
my_sql_plan
)
)
)
group by
hash_value
, plan_hash_value
, child_number;

This example avoids any costly LOB operations. The string aggregation is performed by the COLLECT function that is available from 10g on. Note that the sort order issue caused by the COLLECT function (actually the lack of any defined order within the aggregate) is addressed by the ORA_HASH function, which obviously sorts the passed data when a collection is passed. This can be indirectly noticed when omitting the CAST operation to the NTT_VARCHAR2 custom collection type.
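A quick sanity check of that behaviour (a sketch using the NTT_VARCHAR2 type created above): passing the same elements in a different order should, according to the behaviour just described, yield the same hash value.

select
ora_hash(ntt_varchar2('A', 'B', 'C')) as hash1
, ora_hash(ntt_varchar2('C', 'B', 'A')) as hash2
from
dual;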

Here are some variations suitable for 9i databases:

1. Using a custom HASHKEY function built on top of DBMS_OBFUSCATION_TOOLKIT and the user-defined aggregation function from above

create or replace function hashkey(in_string in varchar2) return varchar2 as
begin
if in_string is null then
return to_char(null);
else
return rawtohex(utl_raw.cast_to_raw(sys.dbms_obfuscation_toolkit.md5(input_string=>in_string)));
end if;
end;
/

select
hash_value
--, plan_hash_value
, child_number
, hashkey(substr(aggregate_concat_ord('-' || hash_path_row), 2, 4000)) as the_hash
from (
select
hash_value
--, plan_hash_value
, child_number
, hashkey(
operation
|| '-' || options
|| '-' || object_node
|| '-' || object#
|| '-' || object_owner
|| '-' || object_name
|| '-' || optimizer
|| '-' || id
|| '-' || parent_id
|| '-' || depth
|| '-' || position
|| '-' || search_columns
|| '-' || cost
|| '-' || cardinality
|| '-' || bytes
|| '-' || other_tag
|| '-' || partition_start
|| '-' || partition_stop
|| '-' || partition_id
|| '-' || hashkey(other)
|| '-' || distribution
|| '-' || cpu_cost
|| '-' || io_cost
|| '-' || temp_space
|| '-' || hashkey(access_predicates)
|| '-' || hashkey(filter_predicates)
) as hash_path_row
from
(select * from my_sql_plan
)
)
group by
hash_value
--, plan_hash_value
, child_number;

Note that we have to restrict the final hash generation to the first 4,000 bytes, since the built-in hash functions in 9i don't support LOBs or collections. This means that without a user-defined hash function that supports LOBs or collections, in 9i we can't differentiate between plans that share the same first 4,000 characters after aggregation. Since this is very unlikely it should matter only in exceptional cases. Furthermore the V$SQL_PLAN view in 9i doesn't include the PLAN_HASH_VALUE column, so you would need to get that e.g. from V$SQL in case you want to show it.

Obviously the 9i V$SQL_PLAN is also missing some of the columns that were added in 10g (OBJECT_ALIAS, OBJECT_TYPE, PROJECTION, TIME, QBLOCK_NAME, REMARKS etc.).

2. Using an ordered collection and a custom TO_STRING function for string aggregation and the DBMS_UTILITY.GET_HASH_VALUE hash function

create or replace type ntt_varchar2 as table of varchar2(4000);
/

create or replace function to_string (
nt_in in ntt_varchar2,
delimiter_in in varchar2 default ','
) return clob is
v_idx pls_integer;
v_str clob;
v_dlm varchar2(10);
begin
v_idx := nt_in.first;
while v_idx is not null loop
v_str := v_str || v_dlm || nt_in(v_idx);
v_dlm := delimiter_in;
v_idx := nt_in.next(v_idx);
end loop;

return v_str;
end to_string;
/

with a as (
select
hash_value
--, plan_hash_value
, child_number
, dbms_utility.get_hash_value(
operation
|| '-' || options
|| '-' || object_node
|| '-' || object#
|| '-' || object_owner
|| '-' || object_name
|| '-' || optimizer
|| '-' || id
|| '-' || parent_id
|| '-' || depth
|| '-' || position
|| '-' || search_columns
|| '-' || cost
|| '-' || cardinality
|| '-' || bytes
|| '-' || other_tag
|| '-' || partition_start
|| '-' || partition_stop
|| '-' || partition_id
|| '-' || dbms_utility.get_hash_value(other, 0, 2147483647)
|| '-' || distribution
|| '-' || cpu_cost
|| '-' || io_cost
|| '-' || temp_space
|| '-' || dbms_utility.get_hash_value(access_predicates, 0, 2147483647)
|| '-' || dbms_utility.get_hash_value(filter_predicates, 0, 2147483647)
, 0, 2147483647) as hash_path_row
from
(select * from my_sql_plan
)
)
select
hash_value
--, plan_hash_value
, child_number
, dbms_utility.get_hash_value(
substr(to_string(cast(multiset(
select
hash_path_row
from
a
where
a.hash_value = b.hash_value
--and a.plan_hash_value = b.plan_hash_value
and a.child_number = b.child_number
order by
1) as ntt_varchar2), '-'), 1, 4000), 0, 2147483647) as the_hash
from (
select distinct
hash_value
--, plan_hash_value
, child_number
from
a
) b;

Again this needs to be limited to the first 4,000 bytes, and the PLAN_HASH_VALUE in 9i is missing. Furthermore the COLLECT aggregate function is not supported in 9i, so we need to generate the collections using the MULTISET subquery method.

Finally, here's an example of what you can do with these new hash values:

select
hash_value
, plan_hash_value
, child_number
, the_hash
, case when plan_hash_value = next_plan_hash_value and the_hash != next_the_hash then 'DIFF!' end as are_hashs_diff
from (
select
hash_value
, plan_hash_value
, child_number
, the_hash
, lead(plan_hash_value, 1) over (partition by hash_value, plan_hash_value order by child_number) as next_plan_hash_value
, lead(the_hash, 1) over (partition by hash_value, plan_hash_value order by child_number) as next_the_hash
from (
select
hash_value
, plan_hash_value
, child_number
, ora_hash(cast(collect(to_char(hash_path_row, 'TM')) as ntt_varchar2)) as the_hash
from (
select
hash_value
, plan_hash_value
, child_number
, hash_path_row
from (
select
hash_value
, plan_hash_value
, child_number
, id
, dense_rank() over (order by hash_value, plan_hash_value, child_number) as rnk
, ora_hash(
operation
|| '-' || options
|| '-' || object_node
|| '-' || object#
|| '-' || object_owner
|| '-' || object_name
|| '-' || object_alias
|| '-' || object_type
|| '-' || optimizer
|| '-' || id
|| '-' || parent_id
|| '-' || depth
|| '-' || position
|| '-' || search_columns
|| '-' || cost
|| '-' || cardinality
|| '-' || bytes
|| '-' || other_tag
|| '-' || partition_start
|| '-' || partition_stop
|| '-' || partition_id
|| '-' || ora_hash(other)
|| '-' || distribution
|| '-' || cpu_cost
|| '-' || io_cost
|| '-' || temp_space
|| '-' || ora_hash(access_predicates)
|| '-' || ora_hash(filter_predicates)
|| '-' || ora_hash(projection)
|| '-' || time
|| '-' || qblock_name
|| '-' || ora_hash(remarks)
) as hash_path_row
from (
select
*
from
my_sql_plan
)
)
)
group by
hash_value
, plan_hash_value
, child_number
)
);

This will mark all child cursors for the same statement that have the same PLAN_HASH_VALUE but are different according to your new hash value. The results can be quite surprising.

Dynamic sampling and set current_schema anomaly

Sometimes when I'm asked to check a particular SQL statement in a production-like environment, at first I only get rather limited access, e.g. a read-only user that has the required privileges to run the SQL statement but doesn't own the objects.

If I'm now supposed to analyse a rather complex SQL statement with dozens of table and/or view references, I like to use the ALTER SESSION SET CURRENT_SCHEMA command to switch any unqualified object references to the schema set via CURRENT_SCHEMA, rather than editing the SQL and qualifying all the object references.

This basically works just fine, but I have recently come across an interesting anomaly in 10.2.0.4: if the optimizer uses dynamic sampling as part of parsing, it no longer does so once CURRENT_SCHEMA has been altered to a non-default schema.

The following simple test case demonstrates the issue in 10.2.0.4 Win32:

SQL>
SQL> drop user dynsamp_test cascade;
drop user dynsamp_test cascade
*
ERROR at line 1:
ORA-01918: user 'DYNSAMP_TEST' does not exist

SQL>
SQL> create user dynsamp_test identified by dynsamp_test;

User created.

SQL>
SQL> grant create session, alter session to dynsamp_test;

Grant succeeded.

SQL>
SQL> drop table dynsamp_test_table purge;
drop table dynsamp_test_table purge
*
ERROR at line 1:
ORA-00942: table or view does not exist

SQL>
SQL> create table dynsamp_test_table
2 as
3 select
4 *
5 from
6 all_objects
7 where
8 rownum <= 1000;

Table created.

SQL>
SQL> grant select on dynsamp_test_table to dynsamp_test;

Grant succeeded.

SQL>
SQL> -- this is the plan we get as schema owner
SQL> explain plan for
2 select /*+ dynamic_sampling(t 4) */
3 *
4 from
5 dynsamp_test_table
6 where
7 owner = 'NOT_A_VALID_OWNER_SINCE_TOO_LONG';

Explained.

SQL>
SQL> -- It uses obviously dynamic sampling as instructed
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 1573551846

----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 128 | 5 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| DYNSAMP_TEST_TABLE | 1 | 128 | 5 (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("OWNER"='NOT_A_VALID_OWNER_SINCE_TOO_LONG')

Note
-----
- dynamic sampling used for this statement

17 rows selected.

SQL>
SQL> rollback;

Rollback complete.

SQL>
SQL> -- We get the same from different schema with qualified reference (or valid synonym)
SQL> connect dynsamp_test/dynsamp_test@orcl
Connected.
SQL>
SQL> alter session set nls_language = 'AMERICAN';

Session altered.

SQL>
SQL> explain plan for
2 select /*+ dynamic_sampling(t 4) */
3 *
4 from
5 cbo_test.dynsamp_test_table
6 where
7 owner = 'NOT_A_VALID_OWNER_SINCE_TOO_LONG';

Explained.

SQL>
SQL> -- Still uses dynamic sampling
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 1573551846

----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 128 | 5 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| DYNSAMP_TEST_TABLE | 1 | 128 | 5 (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("OWNER"='NOT_A_VALID_OWNER_SINCE_TOO_LONG')

Note
-----
- dynamic sampling used for this statement

17 rows selected.

SQL>
SQL> rollback;

Rollback complete.

SQL>
SQL> -- Now switch the current_schema
SQL> alter session set current_schema = cbo_test;

Session altered.

SQL>
SQL> explain plan for
2 select /*+ dynamic_sampling(t 4) */
3 *
4 from
5 dynsamp_test_table
6 where
7 owner = 'NOT_A_VALID_OWNER_SINCE_TOO_LONG';

Explained.

SQL>
SQL> -- No more dynamic sampling!
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 1573551846

----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 10 | 1280 | 5 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| DYNSAMP_TEST_TABLE | 10 | 1280 | 5 (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("OWNER"='NOT_A_VALID_OWNER_SINCE_TOO_LONG')

13 rows selected.

SQL>
SQL> rollback;

Rollback complete.

SQL>
SQL> -- Check the 10053 trace file
SQL> -- It will show that the dynamic samping query fails
SQL> -- and therefore no dynamic sampling will be performed
SQL> -- presumably because the current_schema is not obeyed
SQL> -- by the recursively executed query
SQL> alter session set tracefile_identifier = 'dynamic_sampling_current_schema_10053';

Session altered.

SQL>
SQL> alter session set events '10053 trace name context forever, level 1';

Session altered.

SQL>
SQL> explain plan for
2 select /*+ dynamic_sampling(t 4) */
3 *
4 from
5 dynsamp_test_table
6 where
7 owner = 'NOT_A_VALID_OWNER_SINCE_TOO_LONG';

Explained.

SQL>
SQL> alter session set events '10053 trace name context off';

Session altered.

SQL> -- Check the 10046 trace file
SQL> -- which confirms a ORA-00942 while parsing the dynamic sampling query
SQL> alter session set tracefile_identifier = 'dynamic_sampling_current_schema_10046';

Session altered.

SQL>
SQL> alter session set sql_trace = true;

Session altered.

SQL>
SQL> explain plan for
2 select /*+ dynamic_sampling(t 4) */
3 *
4 from
5 dynsamp_test_table
6 where
7 owner = 'NOT_A_VALID_OWNER_SINCE_TOO_LONG';

Explained.

SQL>
SQL> alter session set sql_trace = false;

Session altered.

SQL>
SQL> rollback;

Rollback complete.

SQL>
SQL> spool off

So as you can see, the simple query uses dynamic sampling as instructed (and in 10.2 would use dynamic sampling by default anyway since the table has no statistics), but when using the CURRENT_SCHEMA trick together with an unqualified reference, dynamic sampling is not performed and the estimate is based on hardcoded defaults.

Checking the 10053 optimizer trace file confirms that the dynamic sampling is attempted, but the recursive query fails, and the 10046 SQL trace shows that an "ORA-00942 table or view does not exist" error is encountered while parsing the recursive dynamic sampling query.

So be careful when using this (rather unusual, I have to admit) setup: the execution plans might be different if you're using a non-default CURRENT_SCHEMA and dynamic sampling is involved.

The issue seems to be fixed in 11.1.0.7 (dynamic sampling is performed as expected when using a non-default CURRENT_SCHEMA); however, I couldn't identify a corresponding bug on Metalink. If anyone comes across a bug number or document ID, that would be helpful.

Locked table statistics and subsequent create index

Just a minor thing to consider: by default in 10g and later, index statistics are generated along with index creation (the COMPUTE STATISTICS option of previous releases is effectively enabled by default), so a newly created index usually has computed statistics.

10g also introduced the option to lock table statistics.

Now if you lock statistics in 10g and later using DBMS_STATS.LOCK_TABLE_STATS or LOCK_SCHEMA_STATS and create an index on a locked table, the statistics for the index will not be generated along with the CREATE INDEX command. Unfortunately there is no corresponding "FORCE" option in CREATE INDEX that I'm aware of to override that behaviour, so it looks like you're left with only two choices:

1. Use a separate DBMS_STATS.GATHER_INDEX_STATS call with the FORCE=>true option to override the lock on the statistics

2. Temporarily unlock the table statistics before creating the index

The first option can be costly if the index is large; the second requires additional steps to be taken, and it obviously needs to be ensured that the table statistics are not modified while they are unlocked (e.g. by the default statistics job in 10g and later).
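In short, both options look like this (a sketch using the object names of the testcase below):

-- Option 1: override the lock explicitly after creating the index
exec dbms_stats.gather_index_stats(null, 'LOCK_TEST_IDX', force => true)

-- Option 2: temporarily unlock, create the index, lock again
exec dbms_stats.unlock_table_stats(null, 'LOCK_TEST')
create index lock_test_idx on lock_test (object_name);
exec dbms_stats.lock_table_stats(null, 'LOCK_TEST')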

A small testcase run on 10.2.0.4 Win32 follows to demonstrate the issue. I got the same result on 11.1.0.7 Win32.

SQL>
SQL> drop table lock_test purge;

Table dropped.

SQL>
SQL> create table lock_test
2 as
3 select
4 *
5 from
6 all_objects
7 where
8 rownum <= 1000;

Table created.

SQL>
SQL> exec dbms_stats.lock_table_stats(null, 'lock_test')

PL/SQL procedure successfully completed.

SQL>
SQL> create index lock_test_idx on lock_test (object_name) compute statistics;
create index lock_test_idx on lock_test (object_name) compute statistics
*
ERROR at line 1:
ORA-38029: object statistics are locked

SQL>
SQL> create index lock_test_idx on lock_test (object_name);

Index created.

SQL>
SQL> select num_rows, last_analyzed from user_ind_statistics where index_name = 'LOCK_TEST_IDX';

NUM_ROWS LAST_ANA
---------- --------

SQL>
SQL> exec dbms_stats.gather_index_stats(null, 'LOCK_TEST_IDX')
BEGIN dbms_stats.gather_index_stats(null, 'LOCK_TEST_IDX'); END;

*
ERROR at line 1:
ORA-20005: object statistics are locked (stattype = ALL)
ORA-06512: at "SYS.DBMS_STATS", line 10872
ORA-06512: at "SYS.DBMS_STATS", line 10896
ORA-06512: at line 1

SQL>
SQL> exec dbms_stats.gather_index_stats(null, 'LOCK_TEST_IDX', force=>true)

PL/SQL procedure successfully completed.

SQL>
SQL> select num_rows, last_analyzed from user_ind_statistics where index_name = 'LOCK_TEST_IDX';

NUM_ROWS LAST_ANA
---------- --------
1000 21.06.09

SQL>
SQL> drop index lock_test_idx;

Index dropped.

SQL>
SQL> exec dbms_stats.unlock_table_stats(null, 'lock_test')

PL/SQL procedure successfully completed.

SQL>
SQL> create index lock_test_idx on lock_test (object_name);

Index created.

SQL>
SQL> exec dbms_stats.lock_table_stats(null, 'lock_test')

PL/SQL procedure successfully completed.

SQL>
SQL> select num_rows, last_analyzed from user_ind_statistics where index_name = 'LOCK_TEST_IDX';

NUM_ROWS LAST_ANA
---------- --------
1000 21.06.09

SQL>
SQL> delete from lock_test where rownum <= 500;

500 rows deleted.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> alter index lock_test_idx rebuild compute statistics;
alter index lock_test_idx rebuild compute statistics
*
ERROR at line 1:
ORA-38029: object statistics are locked

SQL>
SQL> alter index lock_test_idx rebuild;

Index altered.

SQL>
SQL> select num_rows, last_analyzed from user_ind_statistics where index_name = 'LOCK_TEST_IDX';

NUM_ROWS LAST_ANA
---------- --------
1000 21.06.09

SQL>
SQL> exec dbms_stats.unlock_table_stats(null, 'lock_test')

PL/SQL procedure successfully completed.

SQL>
SQL> alter index lock_test_idx rebuild;

Index altered.

SQL>
SQL> exec dbms_stats.lock_table_stats(null, 'lock_test')

PL/SQL procedure successfully completed.

SQL>
SQL> select num_rows, last_analyzed from user_ind_statistics where index_name = 'LOCK_TEST_IDX';

NUM_ROWS LAST_ANA
---------- --------
500 21.06.09

SQL>
SQL> analyze index lock_test_idx compute statistics;
analyze index lock_test_idx compute statistics
*
ERROR at line 1:
ORA-38029: object statistics are locked

SQL>
SQL> spool off

The same applies to index rebuilds obviously.

Consequently the (hopefully no longer used) ANALYZE INDEX ESTIMATE/COMPUTE STATISTICS command cannot be used on locked tables and their indexes either.

Temporary Tablespace Groups

Oracle 10g introduced the concept of temporary tablespace groups.

These allow grouping multiple temporary tablespaces into a single group and assigning a user this group of tablespaces instead of a single temporary tablespace.

This raises some interesting questions, for some of which I can't find answers in the official documentation. Some of these questions are:

- Can a single workarea execution allocate space from more than one temporary tablespace, e.g. to support large serial sort operations?

A workarea belongs to a single operation of an execution plan. There are several different types of operations that require a workarea, among them are sorts, hash joins, group bys and sort/merge joins.

This workarea can fit into available PGA memory, but can also spill to disk in case there is insufficient memory available to support the operation.

Furthermore this implies that an execution of a single SQL statement can require multiple workareas, e.g. a quite simple statement might need two workareas for two hash joins and a third one for a subsequent sort order by operation.

Note that there are other types of operations that don't require a workarea, e.g. a nested loop join doesn't require a workarea (and therefore will never acquire temporary space).

Details about workareas can be obtained from various dynamic performance views, e.g. V$SQL_WORKAREA, V$SQL_WORKAREA_ACTIVE and V$SQL_WORKAREA_HISTOGRAM; see the monitoring query sketched after this list.

- Can multiple workareas of a single session allocate space from different tablespaces?

- According to the documentation different sessions of the same user can use different temporary tablespaces from the group. Is this correct?

- What about parallel execution? The documentation says that the parallel slaves can use different temporary tablespaces. Is this correct?
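As mentioned above, one way to watch where active workareas spill to while a statement is running (a sketch; it requires access to the V$ views) is querying V$SQL_WORKAREA_ACTIVE:

select
sid
, operation_type
, operation_id
, actual_mem_used
, tempseg_size
, tablespace
from
v$sql_workarea_active
where
tempseg_size is not null;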

Here's a simple testcase that creates two small temporary tablespaces and assigns them to the same group. A test user is first assigned a single temporary tablespace and afterwards the group, to see if this makes any difference.

It uses a carefully crafted table for which a sort operation doesn't fit into a single small temporary tablespace but is supposed to fit if both temporary tablespaces of the group can be used.

The session is deliberately using a very small sort_area_size in manual workarea policy mode to force the sort operation to spill to disk.

The following are the results from 11.1.0.7 Win32; similar results can be seen on 10.2.0.4 Win32. The database used an 8KB default block size and a locally managed tablespace with manual segment space management (no ASSM) for the data, but that shouldn't matter much here.

SQL>
SQL> drop tablespace temp1_grp including contents and datafiles;

Tablespace dropped.

SQL>
SQL> drop tablespace temp2_grp including contents and datafiles;

Tablespace dropped.

SQL>
SQL> create temporary tablespace temp1_grp
2 tempfile 'C:\ORACLE\ORADATA\ORCL11\TEMP1_GRP.DBF'
3 size 2M reuse uniform size 100K
4 tablespace group temp_grp;

Tablespace created.

SQL>
SQL> create temporary tablespace temp2_grp
2 tempfile 'C:\ORACLE\ORADATA\ORCL11\TEMP2_GRP.DBF'
3 size 2M reuse uniform size 100K
4 tablespace group temp_grp;

Tablespace created.

SQL>
SQL> drop user tempgrp_user cascade;

User dropped.

SQL>
SQL> create user tempgrp_user identified by tempgrp_user;

User created.

SQL>
SQL> grant create session to tempgrp_user;

Grant succeeded.

SQL>
SQL> exec dbms_random.seed(0)

PL/SQL procedure successfully completed.

SQL>
SQL> drop table test_temp_grp purge;

Table dropped.

SQL>
SQL> create table test_temp_grp
2 as
3 select
4 sys.dbms_random.string('U', 140) as object_name
5 from
6 dual
7 connect by
8 rownum <= 10000;

Table created.

SQL>
SQL> exec dbms_stats.gather_table_stats(null, 'TEST_TEMP_GRP')

PL/SQL procedure successfully completed.

SQL>
SQL> select blocks from dba_tables where table_name = 'TEST_TEMP_GRP';

BLOCKS
----------
205

SQL>
SQL> grant select on test_temp_grp to tempgrp_user;

Grant succeeded.

SQL>
SQL> create or replace synonym tempgrp_user.test_temp_grp for test_temp_grp;

Synonym created.

SQL>
SQL> -- This is the behaviour when using traditional tablespaces
SQL> alter user tempgrp_user temporary tablespace temp1_grp;

User altered.

SQL>
SQL> connect tempgrp_user/tempgrp_user
Connected.
SQL>
SQL> alter session set nls_language = 'AMERICAN';

Session altered.

SQL>
SQL> variable r refcursor
SQL>
SQL> variable t refcursor
SQL>
SQL> -- Force sort on disk
SQL> alter session set workarea_size_policy = manual sort_area_size = 65536;

Session altered.

SQL>
SQL> exec open :r for select * from test_temp_grp order by object_name

PL/SQL procedure successfully completed.

SQL>
SQL> -- Sort doesn't fit into available temp space
SQL> declare
2 rec test_temp_grp%rowtype;
3 begin
4 fetch :r into rec;
5 end;
6 /
declare
*
ERROR at line 1:
ORA-01652: unable to extend temp segment by 13 in tablespace TEMP1_GRP
ORA-06512: at line 4

SQL>
SQL> exec open :r for select * from test_temp_grp where rownum <= 5000 order by object_name

PL/SQL procedure successfully completed.

SQL>
SQL> -- This works
SQL> declare
2 rec test_temp_grp%rowtype;
3 begin
4 fetch :r into rec;
5 end;
6 /

PL/SQL procedure successfully completed.

SQL>
SQL> connect cbo_test/cbo_test
Connected.
SQL>
SQL> alter session set nls_language = 'AMERICAN';

Session altered.

SQL>
SQL> -- Switch to tablespace group
SQL> alter user tempgrp_user temporary tablespace temp_grp;

User altered.

SQL>
SQL> connect tempgrp_user/tempgrp_user
Connected.
SQL>
SQL> alter session set nls_language = 'AMERICAN';

Session altered.

SQL>
SQL> -- Force sort on disk
SQL> alter session set workarea_size_policy = manual sort_area_size = 65536;

Session altered.

SQL>
SQL> exec open :r for select * from test_temp_grp order by object_name

PL/SQL procedure successfully completed.

SQL>
SQL> -- Sort still doesn't fit into available temp space (but it should as you'll see later)
SQL> -- so obviously a single workarea can't use space from different tablespaces
SQL> declare
2 rec test_temp_grp%rowtype;
3 begin
4 fetch :r into rec;
5 end;
6 /
declare
*
ERROR at line 1:
ORA-01652: unable to extend temp segment by 13 in tablespace TEMP1_GRP
ORA-06512: at line 4

SQL>
SQL> -- What if we have multiple workareas per session
SQL> exec open :r for select * from test_temp_grp where rownum <= 5000 order by object_name

PL/SQL procedure successfully completed.

SQL>
SQL> -- First one works
SQL> declare
2 rec test_temp_grp%rowtype;
3 begin
4 fetch :r into rec;
5 end;
6 /

PL/SQL procedure successfully completed.

SQL>
SQL> exec open :t for select * from test_temp_grp where rownum <= 5000 order by object_name

PL/SQL procedure successfully completed.

SQL>
SQL> -- Second one fails
SQL> declare
2 rec test_temp_grp%rowtype;
3 begin
4 fetch :t into rec;
5 end;
6 /
declare
*
ERROR at line 1:
ORA-01652: unable to extend temp segment by 13 in tablespace TEMP1_GRP
ORA-06512: at line 4

SQL>
SQL> -- So it looks like a session is limited to a particular tablespace taken from the group
SQL> -- What happens to a second session of the same user
SQL> -- One tablespace is still in use by cursor R
SQL> -- Check V$SORT_USAGE if interested
SQL>
SQL> -- Run the same query in second session
SQL> -- You might need multiple attempts to get this working
SQL> -- Depending on the tablespace taken from the group
SQL> -- and then close second session to free temp space
SQL> -- Press ENTER to continue
SQL>
SQL> pause

SQL>
SQL> -- This works if the second session is assigned a different tablespace taken from the group
SQL> -- So different sessions of the same user will potentially be assigned to different tablespaces
SQL>
SQL> -- What about parallel execution
SQL> exec close :r

PL/SQL procedure successfully completed.

SQL>
SQL> exec open :r for select /*+ parallel(test_temp_grp, 2) */ * from test_temp_grp order by object_name

PL/SQL procedure successfully completed.

SQL>
SQL> -- This works
SQL> -- So the PX sessions can be assigned to different tablespaces taken from the group
SQL> -- Check V$SORT_USAGE if interested
SQL> declare
2 rec test_temp_grp%rowtype;
3 begin
4 fetch :r into rec;
5 end;
6 /

PL/SQL procedure successfully completed.

SQL>
SQL> spool off

So in summary the following can be deduced:

1. A single workarea execution can't use space from more than a single tablespace, even when the user is assigned a temporary tablespace group. In other words: a single workarea execution can only allocate a single temporary segment, and segments in general can't span multiple tablespaces in Oracle, i.e. they must fit into a single tablespace.

2. Even multiple workareas of a single session can't use space from the multiple temporary tablespaces available. So the temporary tablespace looks like a session attribute: it is assigned to the session, and all workareas of that session will use the single temporary tablespace assigned (taken from the group).

3. Multiple sessions of the same user can be assigned to different temporary tablespaces taken from the group.

4. Parallel execution slave sessions behave in the same way: they can be assigned to different temporary tablespaces taken from the group.

So in order to leverage the power of temporary tablespace groups you need either multiple sessions running serial SQL or the parallel execution option.
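As mentioned in the transcript comments, while the cursors are still open the temporary tablespace actually used per session can be checked - a minimal sketch (from 10g on the same information is also exposed as V$TEMPSEG_USAGE):

select username, session_num, tablespace, segtype, blocks
from v$sort_usage;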

User objects created in the SYS schema and the (cost based) optimizer

The answer to the question that I asked here in the "Weekend Quiz" - as already disclosed by Nicolas Gasparotto - is to run the script as the SYS user and then run the query shown against these objects in the SYS schema (tested against 10g XE, 10.2.0.4 and 11.1.0.7 on Win32).

Note: It's not recommended to create any user objects in the SYS schema, and you should do this (if at all) only in a test database.

All this came up in this recent OTN forum thread where it became obvious that the issue can only be reproduced if the objects are owned by SYS.

There are two interesting points to derive from this (apart from the obvious that one should not create any user objects in the SYS schema):

1. The optimizer seems to treat objects owned by SYS differently, in particular regarding the transformations applied. Note that the crucial point is not that the query is executed as the SYS user, but that the objects are owned by SYS. Granting appropriate privileges on the SYS-owned objects to a non-SYS user makes it possible to reproduce the issue even as a non-SYS user (see the sketch after this list).

2. It's something to keep in mind if you ever need to understand a performance problem with a recursive dictionary query performed on SYS-owned objects. Although you obviously can't influence the SQL generated by Oracle itself, knowing about this different treatment might help you to understand the issue and to take appropriate steps to rectify it.

Oh, by the way, have I already mentioned that it's really a bad idea to create user objects in the SYS schema?
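For illustration, here's a minimal sketch of the grant approach - assuming the quiz script below has been run as SYS (so that TEST1 and TEST2 are owned by SYS) and that a suitable test user such as CBO_TEST exists:

grant select on sys.test1 to cbo_test;
grant select on sys.test2 to cbo_test;

-- connect as cbo_test and run the quiz query against
-- sys.test1 and sys.test2 to reproduce the behaviour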

Weekend quiz

Quiz questions seem to be in vogue nowadays, so here's one for you regarding the cost based optimizer:

Given this simple script (run this on any 10.2.x release (including XE) or 11.1.0.6/7):

create table test1
as
select * from dba_objects;

create table test2
as
select * from dba_objects;

create index i_test1_1 on test1(object_name);

create index i_test1_2 on test1(object_id);

create index i_test2_1 on test2(object_name);

create index i_test2_2 on test2(object_id);

exec dbms_stats.gather_table_stats(null, 'TEST1', estimate_percent=>null, method_opt=>'for all columns size 1');

exec dbms_stats.gather_table_stats(null, 'TEST2', estimate_percent=>null, method_opt=>'for all columns size 1');

And this simple query:

explain plan for
select
*
from
test1
where
object_name='TEST1'
and object_id in (
select /*+ unnest */
object_id
from
test2
);

with this plan:

Plan 1 (the obvious one):

Plan hash value: 2107173885

------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2 | 212 | 6 (0)| 00:00:01 |
| 1 | NESTED LOOPS SEMI | | 2 | 212 | 6 (0)| 00:00:01 |