
New Version Of XPLAN_ASH Utility

A minor update 4.01 to the XPLAN_ASH utility is available for download.

As usual the latest version can be downloaded here.

These are the notes from the change log:

- More info for RAC Cross Instance Parallel Execution: Many sections now show GLOBAL aggregate info in addition to instance-specific data

- The Parallel Execution Server Set detection and ASSUMED_DEGREE info now make use of the undocumented PX_STEP_ID and PX_STEPS_ARG info (bit mask part of the PX_FLAGS column) on 11.2.0.2+

- Since version 4.0 added (from 11.2.0.2 on) the PX *MAX* DOP to the "SQL statement execution ASH Summary", based on the new PX_FLAGS column of ASH, it makes sense to also show a PX *MIN* DOP in that summary, so that a single glance reveals whether different DOPs were used or not

New Version Of XPLAN_ASH Utility

A new version of the XPLAN_ASH tool (detailed analysis of a single SQL statement execution) is available for download. The previous post includes links to video tutorials explaining what the tool is about.

As usual the latest version can be downloaded here.

The new version comes with numerous improvements and new features. The most important ones are:

  • Real-Time SQL Monitoring info included
  • Complete coverage including recursive SQL
  • Improved performance
  • 12c compatible
  • Simplified usage

View Data Volume Estimates

When the optimizer has to estimate the data volume (the BYTES column in the plan output), it usually bases this information on the column statistics, if applicable and available (think of complex expressions). However, whenever there is a VIEW operator in an execution plan that represents an unmerged view, the optimizer obviously "loses" this information and starts applying defaults that are based on the column definition. Depending on the actual content of the columns this can lead to dramatic differences in data volume estimates. Both under- and overestimates are possible, because for character-based columns these defaults seem to be based on an assumed 50% fill grade, so a VARCHAR2(100 BYTE) column counts as 50 bytes of data volume. For multi-byte character sets the same rule applies based on the maximum width of a column using the "char" semantics, so a VARCHAR2(1000 CHAR) column counts as 2000 bytes.
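
To see the effect, a small sketch along these lines could be used (the table and column names are made up for illustration; the original post's test case is not part of this excerpt). The column is defined as VARCHAR2(100) but only ever holds a single character, so the statistics-based estimate and the definition-based default should diverge noticeably:

create table t_bytes
as
select
rownum as id
, cast('x' as varchar2(100)) as c100
from
dual
connect by
level <= 1000
;

exec dbms_stats.gather_table_stats(null, 't_bytes')

-- Direct table access: BYTES should be based on the gathered column statistics
explain plan for
select id, c100 from t_bytes;

select * from table(dbms_xplan.display);

-- Unmerged view (NO_MERGE): a VIEW operator shows up in the plan and, per the
-- behaviour described above, its BYTES should be based on the column definition
-- defaults (roughly 50% of 100 bytes for C100) rather than on the statistics
explain plan for
select * from (select /*+ no_merge */ id, c100 from t_bytes);

select * from table(dbms_xplan.display);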

The Broken Statistics: "parse count (total)" and "session cursor cache hits"

The values provided by the “parse count (total)” and “session cursor cache hits” statistics are subject to several bugs. And, what’s worse, for years Oracle didn’t care to fix them. This is my impression, at least.

Then, when a few weeks ago I read in the Oracle Support note 13837105.8 (Bug 13837105 – statistics “parse count (total)” and “session cursor cache hits” miscounted) that the bug introduced in 11.2.0.3 was fixed, I hoped that other bugs in this area were fixed as well.

Unfortunately, it’s not the case. What a disappointment!

Even though several posts have already been written about this topic (e.g. here and here; check the comments as well…), I thought it could be useful to summarize what the current status is.
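
For reference, the two statistics can be checked for the current session with a simple query like the following (not from the original post, just a quick way to observe the values that are subject to the miscounting):

select
sn.name
, ms.value
from
v$mystat ms
, v$statname sn
where
ms.statistic# = sn.statistic#
and sn.name in ('parse count (total)', 'session cursor cache hits');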

TIMESTAMP WITH TIME ZONE Aggregation

The TIMESTAMP WITH TIME ZONE data type that got introduced a long time ago is known for some oddities; for example, Tony Hasler has a nice summary of some of them here. Here is another oddity that shows up when trying to aggregate on such a data type. Have a look at the following simple example:


create table t
as
select
rownum as id
, date '2000-01-01' + rownum - 1 as some_date
, cast(date '2000-01-01' + rownum - 1 as timestamp) as some_timestamp
, cast(date '2000-01-01' + rownum - 1 as timestamp with local time zone) as some_timestamp_with_local_tz
, cast(date '2000-01-01' + rownum - 1 as timestamp with time zone) as some_timestamp_with_timezone
from
dual
connect by
level <= 100 -- the excerpt truncates at this point; the row count is an assumption
;

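Since the excerpt ends before the actual demonstration, here is a minimal aggregation along the lines the text describes; per the post, the odd behaviour shows up when aggregating on the TIMESTAMP WITH TIME ZONE column, so comparing it to the other column types is instructive:

select
max(some_date)
, max(some_timestamp)
, max(some_timestamp_with_local_tz)
, max(some_timestamp_with_timezone)
from
t;
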
New Version Of XPLAN_ASH Tool - Video Tutorial

A new major release (version 3.0) of my XPLAN_ASH tool is available for download.

You can download the latest version here.

In addition to many changes to the way the information is presented and many other smaller changes to functionality there is one major new feature: XPLAN_ASH now also supports S-ASH, the free ASH implementation.

If you run XPLAN_ASH in an S-ASH repository owner schema, it will automatically detect that and adjust accordingly.

XPLAN_ASH was tested against the latest stable version of S-ASH (2.3). There are some minor changes required to that S-ASH release in order to function properly with XPLAN_ASH. Most of them will be included in the next S-ASH release as they really are only minor and don't influence the general S-ASH functionality at all.

ITL Deadlocks (script)

A reader of this blog, VijayS, asked me to share the script I use to demo ITL deadlocks that I mentioned in this comment. Since other readers might be interested, here is the script.

"Cost-free" joins - 2

In the previous post I've demonstrated an unexpected Nested Loop Join caused by an extreme data distribution. Although unexpected at first sight, the performance of the execution plan selected by the optimizer is decent - provided the estimates are in the right ballpark. Here is another case of an unexpected execution plan, this time about Merge Joins.

Merge Joins

In order to appreciate why the execution plan encountered is unexpected, here is first a quick summary of how Merge Joins work: A Merge Join is essentially a Nested Loop operation from one sorted row source into another sorted row source.
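
As a hedged illustration (the hint usage is mine, not taken from the post, and it borrows the T1 / T2 tables from the setup further below), a Merge Join can be forced and its typical plan shape inspected like this:

explain plan for
select /*+ leading(t1 t2) use_merge(t2) */ count(*)
from
t1
, t2
where
t1.id = t2.id;

select * from table(dbms_xplan.display);

-- Typical plan shape: both row sources are sorted, then the join iterates
-- from the first sorted row source into the second one:
--
-- MERGE JOIN
--   SORT JOIN
--     TABLE ACCESS FULL T1
--   SORT JOIN
--     TABLE ACCESS FULL T2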

"Cost-free" joins - 1

Recently I came across some interesting edge cases regarding the costing of joins. They all have in common that they result in (at first sight) unexpected execution plans, but only some of them are actual threats to performance.

Outer Joins

The first one is about outer joins with an extreme data distribution. Consider the following data setup:


create table t1
as
select
rownum as id
, rpad('x', 100) as filler
, case when rownum > 1e6 then rownum end as null_fk -- always NULL here, since rownum never exceeds 1e6: the extreme data distribution
from
dual
connect by
level <= 1e6
;

exec dbms_stats.gather_table_stats(null, 't1')

create table t2
as
select
rownum as id
, rpad('x', 100) as filler
from
dual
connect by
level <= 1e6
;
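
The excerpt ends before the actual join query; a hypothetical outer join matching this setup could look as follows (the statistics call for T2 is also an assumption). Note that NULL_FK is NULL for every row of T1, so no T1 row can ever find a match in T2 - which is the extreme data distribution mentioned above:

exec dbms_stats.gather_table_stats(null, 't2')

select
count(t2.filler)
from
t1
, t2
where
t1.null_fk = t2.id (+);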

QB_NAME hint query block name length limitation

Oracle 10g introduced the QB_NAME hint that can come in handy in case hints need to be applied to more complex statements, in particular when possibly multiple layers of views / subqueries are involved. Jonathan Lewis has an older blog post that describes more details. Just in case you wonder why sometimes the QB_NAME hint - along with all other hints that refer to the assigned query block name - apparently seems to be ignored: One possible reason is that there appears to be an undocumented length limitation on the query block names that can be assigned - 20 characters seem to be the maximum possible (I haven't checked the effect of multi-byte database character sets). Consider this simple example:


drop table t1;

purge table t1;
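
The excerpt truncates here; to illustrate the described limitation, a sketch along these lines could be used (the table definition and hints are illustrative, not taken from the post). With a name of up to 20 characters the query block name is honoured; with a longer name the QB_NAME seems to be silently dropped, and with it every hint referring to that name:

create table t1
as
select
rownum as id
, rpad('x', 100) as filler
from
dual
connect by
level <= 1000
;

exec dbms_stats.gather_table_stats(null, 't1')

-- 7-character query block name: the "Query Block Name / Object Alias" section
-- of the plan output should show SHORTQB
explain plan for
select /*+ qb_name(shortqb) */ count(*) from t1;

select * from table(dbms_xplan.display(format => 'BASIC +ALIAS'));

-- More than 20 characters: the section should fall back to the default SEL$1,
-- indicating the assigned name was ignored
explain plan for
select /*+ qb_name(a_query_block_name_exceeding_twenty_chars) */ count(*) from t1;

select * from table(dbms_xplan.display(format => 'BASIC +ALIAS'));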