View Data Volume Estimates

When the optimizer has to estimate the data volume (the BYTES column in the plan output), it usually bases this information on the column statistics, if applicable and available (think of complex expressions). However, whenever there is a VIEW operator in an execution plan that represents an unmerged view, the optimizer obviously "loses" this information and starts applying defaults that are based on the column definition. Depending on the actual content of the columns this can lead to dramatic differences in data volume estimates. Both under- and overestimates are possible, because for character-based columns these defaults seem to be based on an assumed 50% fill grade, so a VARCHAR2(100 BYTE) column counts as 50 bytes of data volume. For multi-byte character sets the same rule applies based on the maximum width of a column using the "char" semantics, so a VARCHAR2(1000 CHAR) column counts as 2000 bytes.
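
The effect can be reproduced with a small test case. The following is only a minimal sketch (the table and column names are made up here) comparing the BYTES estimate of a plain table access with that of the same query wrapped in an unmerged view:

create table t_bytes
as
select
        rownum as id
      , cast('x' as varchar2(100)) as padded_col -- short actual values in a wide column definition
from
        dual
connect by
        level <= 10000;

exec dbms_stats.gather_table_stats(null, 't_bytes')

-- estimate based on column statistics (AVG_COL_LEN)
explain plan for
select id, padded_col from t_bytes;

select * from table(dbms_xplan.display);

-- force an unmerged view: the BYTES estimate for the VIEW operator should now
-- be based on the column definition (roughly 50 bytes for the VARCHAR2(100) column)
explain plan for
select * from (select /*+ no_merge */ id, padded_col from t_bytes) v;

select * from table(dbms_xplan.display);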

TIMESTAMP WITH TIME ZONE Aggregation

The TIMESTAMP WITH TIME ZONE data type that was introduced a long time ago is known for some oddities, for example Tony Hasler has a nice summary of some of them here. Here is another oddity that shows up when trying to aggregate on such a data type. Have a look at the following simple example:


create table t
as
select
        rownum as id
      , date '2000-01-01' + rownum - 1 as some_date
      , cast(date '2000-01-01' + rownum - 1 as timestamp) as some_timestamp
      , cast(date '2000-01-01' + rownum - 1 as timestamp with local time zone) as some_timestamp_with_local_tz
      , cast(date '2000-01-01' + rownum - 1 as timestamp with time zone) as some_timestamp_with_timezone
from
        dual
connect by
        level <= 1000; -- row count is an assumption, it is not shown in the excerpt
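
One way to see the oddity is to aggregate on each of these columns in turn and compare the resulting execution plans. The following is only a sketch of such a comparison, without anticipating the symptom discussed in the post:

select some_date, count(*) from t group by some_date;

select some_timestamp, count(*) from t group by some_timestamp;

select some_timestamp_with_timezone, count(*) from t group by some_timestamp_with_timezone;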

New Version Of XPLAN_ASH Tool - Video Tutorial

A new major release (version 3.0) of my XPLAN_ASH tool is available for download.

You can download the latest version here.

In addition to many changes to the way the information is presented and many other smaller changes to functionality, there is one major new feature: XPLAN_ASH now also supports S-ASH, the free ASH implementation.

If you run XPLAN_ASH in an S-ASH repository owner schema, it will automatically detect that and adjust accordingly.

XPLAN_ASH was tested against the latest stable version of S-ASH (2.3). Some minor changes to that S-ASH release are required for it to work properly with XPLAN_ASH. Most of them will be included in the next S-ASH release, as they really are only minor and don't influence the general S-ASH functionality at all.

Exadata Smart Scan Projection Limitation

Here is an interesting limitation to Exadata Smart Scans - if more than 254 columns from a table (not HCC compressed, more on that in a moment) need to be projected, Smart Scans for that particular segment will be disabled and Exadata will fall back to conventional I/O.
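
A quick way to check this for yourself is to compare offload-related session statistics for a projection of 254 columns versus 255 columns. This is only a sketch - T_WIDE and its columns C1 to C255 are made-up names for a table with more than 254 columns that is not HCC compressed:

-- projecting up to 254 columns can still be offloaded via Smart Scan
select /*+ full(t) */ c1, c2, /* ... */ c254 from t_wide t;

-- projecting 255 or more columns is expected to fall back to conventional I/O
-- for this segment; compare session statistics such as
-- "cell physical IO interconnect bytes returned by smart scan" for both runs
select /*+ full(t) */ c1, c2, /* ... */ c255 from t_wide t;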

How dNFS database clone works – part 1

There is a new feature in Oracle 11.2.0.2 called dNFS clone. It has been described by Kevin Closson in his blog post Oracle Database 11g Direct NFS Clonedb Feature, and a very good configuration description has been posted by Tim Hall on his blog: Direct NFS (DNFS) Clonedb in Oracle Database 11g Release 2 (Patchset 11.2.0.2). I played with it just after I found both blogs, but never thought about how Oracle implemented that feature. I came back to it when a colleague asked me if I had ever used it in a production environment and what the performance implications of using it are.
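
For context, the setup described in the articles above boils down to very few steps. The following is only a rough sketch with placeholder paths, not a complete walkthrough:

-- enable the feature in the clone instance
alter system set clonedb=true scope=spfile;

-- after creating the clone controlfile against read-only backup copies of the
-- source datafiles on NFS, point each datafile at a sparse copy-on-write file
exec dbms_dnfs.clonedb_renamefile('/nfs/backup/users01.dbf', '/nfs/clone/users01.dbf');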

HAVING Cardinality

When performing aggregate GROUP BY operations an additional filter on the aggregates can be applied using the HAVING clause. Usually aggregates are one of the last steps executed before the final result set is returned to the client. However, there are various reasons why a GROUP BY operation might be somewhere in the middle of the execution plan, for example because it is part of a view that cannot be merged (or was hinted not to be merged using the NO_MERGE hint), or because in more recent releases (11g+) the optimizer decided to use the GROUP BY PLACEMENT transformation, which can deliberately move the GROUP BY operation to a different execution step of the plan. In such cases, when the GROUP BY operation is input to some other operation, it becomes essential for the overall efficiency of the execution plan preferred by the optimizer that the cardinality estimates are in the right ballpark, as they will influence the estimates and decisions for the remaining steps of the plan.
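
As a sketch of such a situation (the table and column names here are invented), consider an unmerged view containing a GROUP BY with a HAVING filter that is subsequently joined. The cardinality estimated for the view, including the HAVING filter, drives the join method and join order chosen for the rest of the plan:

explain plan for
select
        d.dept_name
      , v.cnt
from
        (
          select /*+ no_merge */
                  dept_id
                , count(*) as cnt
          from
                  emp
          group by
                  dept_id
          having
                  count(*) > 100
        ) v
      , dept d
where
        d.dept_id = v.dept_id;

select * from table(dbms_xplan.display);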

New Version Of XPLAN_ASH Utility

A new version 2.0 of the XPLAN_ASH utility introduced here is available for download. You can download the latest version here.

The change log tracks the following changes:

- Access check
- Conditional compilation for different database versions
- Additional activity summary
- Concurrent activity information (what is/was going on at the same time)
- Experimental stuff: Additional I/O summary
- More pretty printing
- Experimental stuff: I/O added to Average Active Session Graph (renamed to Activity Timeline)
- Top Execution Plan Lines and Top Activities added to Activity Timeline
- Activity Timeline is now also shown for serial execution when TIMELINE option is specified
- From 11.2.0.2 on: We get the ACTUAL DOP from the undocumented PX_FLAGS column

Exchange Partition, Virtual Columns And Column Statistics

Here is an odd bug that can lead to some nasty side effects when using the EXCHANGE PARTITION technique. It has probably been there for a very long time, simply because it depends on the usage of virtual columns, and the basic technique behind virtual columns was introduced way back in the Oracle 8i days with the introduction of Function Based Indexes.

The problem isn't the exchange partition operation itself, but the accompanying swap of object statistics information, in particular the column statistics.

Look at the following sequence of DDL and DML commands and then pay special attention to the output for the column statistics before and after the EXCHANGE PARTITION operation:
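
A rough sketch of the kind of test case meant here (all object names are invented and this is not the post's own sequence) could look like this:

-- partitioned table with a virtual column
create table t_part (
  id        number
, val       number
, virt_val  as (val * 2)
)
partition by range (id) (
  partition p1 values less than (1000000)
);

-- exchange table with the same structure, including the virtual column
create table t_exchange (
  id        number
, val       number
, virt_val  as (val * 2)
);

insert into t_exchange (id, val)
select rownum, rownum from dual connect by level <= 1000;

commit;

exec dbms_stats.gather_table_stats(null, 't_exchange')

-- column statistics before the exchange
select table_name, column_name, num_distinct, low_value, high_value
from   user_tab_col_statistics
where  table_name in ('T_PART', 'T_EXCHANGE');

alter table t_part
  exchange partition p1 with table t_exchange;

-- column statistics after the exchange: compare in particular the entries
-- for the virtual column
select table_name, column_name, num_distinct, low_value, high_value
from   user_tab_col_statistics
where  table_name in ('T_PART', 'T_EXCHANGE');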