Parallel Execution Analysis Using ASH - The XPLAN_ASH Tool

Preface

Note: This blog post actually serves three purposes:

  1. It introduces and describes my latest contribution to the Oracle Community, the "XPLAN_ASH" tool

  2. It accompanies an OTN article on Parallel Execution that will be published some time in the future

  3. It is supposed to act as a teaser for my upcoming "Parallel Execution Masterclass" that will be organized by Oracle University and can be booked later this year

Table Of Contents

    Introduction

    Real-Time SQL Monitoring Overview

    Real-Time SQL Monitoring Shortcomings

    Forced Cursor Sharing And Virtual Columns

    So you have that application that cannot be changed but makes use of some weird expressions that screw up the cardinality estimates of the optimizer.

    Consider this simple example:

    create table t
    as
    select
            rownum as id
          , case when mod(rownum, 100000) = 0 then 'y' else 'n' end as flag
          , rpad('x', 100) as filler
    from
            dual
    connect by level <= 1000000
    ;

    exec dbms_stats.gather_table_stats(null, 't', method_opt => 'for all columns size 1 for columns flag size 2', estimate_percent => 30)

    select flag, count(*) from t group by flag;

    explain plan for
    select count(*) from t where flag = 'y'
    ;

    set linesize 200 tab off pagesize 0

    select * from table(dbms_xplan.display);
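
    As a generic, hedged illustration of the virtual column feature the post's title alludes to (the expression and the column name flag_upper are made up here and are not necessarily what the post goes on to use), a virtual column allows statistics and histograms to be gathered on an expression rather than only on its base columns:

    -- hypothetical virtual column on an arbitrary expression
    alter table t add (flag_upper as (upper(flag)));

    -- gather statistics including a histogram on the virtual column
    exec dbms_stats.gather_table_stats(null, 't', method_opt => 'for all columns size 1 for columns flag_upper size 2')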

    Display System Activity without Enterprise Manager

    I regularly use the system-level activity chart available in Enterprise Manager. In my opinion it is a simple and effective way to see how loaded a specific database is at a specific time. For example, it is an interesting way to observe how a specific load is processed (see this post for an example).
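
    As a rough, hedged sketch of what that chart boils down to (this is not the author's script, and querying ASH requires the Diagnostics Pack license), the average number of active sessions per minute can be approximated directly from V$ACTIVE_SESSION_HISTORY:

    -- v$active_session_history samples active sessions once per second,
    -- so count(*) / 60 per minute approximates average active sessions
    select
            trunc(sample_time, 'mi')   as minute
          , round(count(*) / 60, 1)    as avg_active_sessions
    from
            v$active_session_history
    where
            sample_time > sysdate - 1 / 24
    group by
            trunc(sample_time, 'mi')
    order by
            1;
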
    Unfortunately, the Enterprise Manager chart itself is sometimes not available. The main reasons I have faced in the past are the following:

    Nested Loop Join Costing

    The basic formula for calculating the costs of a Nested Loop Join is pretty straightforward and has been described and published several times.

    In principle it is the cost of acquiring the driving row source plus the cost of acquiring the inner row source as many times as dictated by the cardinality of the driving row source.

    Cost (outer rowsource) + Cost (inner rowsource) * Card (outer rowsource)
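
    As a rough sketch of the basic formula at work (the object names below are made up for illustration and the exact figures depend on version and system statistics), the NESTED LOOPS cost in a plan like the following should come out close to the cost of the full scan of the driving table plus its cardinality times the cost of a single indexed probe into the inner table:

    create table t_outer as
    select rownum as id, rpad('x', 100) as filler
    from dual connect by level <= 100;

    create table t_inner as
    select rownum as id, rpad('x', 100) as filler
    from dual connect by level <= 100000;

    create index t_inner_idx on t_inner (id);

    exec dbms_stats.gather_table_stats(null, 't_outer')
    exec dbms_stats.gather_table_stats(null, 't_inner')

    explain plan for
    select /*+ leading(o) use_nl(i) */ *
    from t_outer o, t_inner i
    where i.id = o.id;

    select * from table(dbms_xplan.display);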

    Obviously there are cases where Oracle has introduced refinements to the above formula, so that it no longer holds. Here is one such case that is probably not uncommon.

    Column Groups - Edge Cases

    Oracle 11g added Extended Statistics support for column groups in order to detect correlated columns for filter predicates using an equal comparison.

    Note that Oracle 11g also added the ability to use the number of distinct keys of a composite index as an upper limit for the cardinality estimates for matching column predicates, which means that the optimizer is now capable of detecting correlated columns without the explicit addition of Extended Statistics / Column Groups.
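
    For reference, a minimal hedged sketch of creating such a column group (the table and column names are purely illustrative):

    -- hypothetical table t1 with two correlated columns col1 / col2
    select dbms_stats.create_extended_stats(null, 't1', '(col1, col2)') from dual;

    -- gather statistics so that the new column group gets populated
    exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1')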

    COMMIT_WAIT and COMMIT_LOGGING

    Recently I used the COMMIT_WAIT and COMMIT_LOGGING parameters for solving (or, better, working around) a problem I faced while optimizing a specific task for one of my customers. Since it was the first time I used them in a production system, I thought I would write this post not only to briefly explain the purpose of these two parameters, but also to show a case where it is sensible to use them.

    The purpose of the two parameters is the following:

    COMMIT_WAIT
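
    As a quick, hedged illustration only (whether such settings are acceptable depends entirely on the requirement, since recent commits may be lost on instance failure), both parameters can be set at session level:

    -- do not wait for the log writer to confirm the redo write on commit
    alter session set commit_wait = 'nowait';

    -- buffer the redo generated by the commit instead of writing it immediately
    alter session set commit_logging = 'batch';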

    Analysing Row Lock Contention with LogMiner

    Recently I had to analyse a row lock contention problem that can be illustrated by the following test case:

    Index Scan with Filter Predicate Based on a Subquery

    Most execution plans can be interpreted by following a few basic rules (in TOP, Chapter 6, I provide such a list of rules). Nevertheless, there are some special cases. One of them is when an index scan, in addition to the access predicate, has a filter predicate applying a subquery.

    The following execution plan, taken from Enterprise Manager 11.2, is an example:
    [Execution plan screenshot from Enterprise Manager]
    Notes:

    Autotrace Polluting The Shared Pool?

    Introduction

    Another random note that I made during the sessions attended at OOW was about the SQL*Plus AUTOTRACE feature. As you're hopefully already aware, this feature has some significant shortcomings, the most obvious being that it doesn't pull the actual execution plan from the Shared Pool after executing the statement but simply runs an EXPLAIN PLAN on the SQL text, which might produce an execution plan that differs from the actual one for various reasons.

    Now the claim was made that, in addition to these shortcomings, the plan generated by the AUTOTRACE feature will stay in the Shared Pool and is eligible for sharing, which would mean that other statement executions could be affected by a potentially bad execution plan generated via AUTOTRACE rather than being re-optimized on their own.
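
    For comparison, a minimal hedged sketch of pulling the actual execution plan from the Shared Pool instead of relying on AUTOTRACE's EXPLAIN PLAN (the statement text is just an illustration):

    set serveroutput off   -- otherwise the last cursor is the DBMS_OUTPUT call

    select count(*) from t where flag = 'y';

    -- actual execution plan of the last statement executed by this session
    select * from table(dbms_xplan.display_cursor);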

    Dynamic Sampling On Multiple Partitions - Bugs

    In a recent OTN thread I've been reminded of two facts about Dynamic Sampling that I already knew but had forgotten in the meantime:

    1. The table level dynamic sampling hint uses a different number of blocks for sampling than the session / cursor level dynamic sampling (a sketch of the two hint variants follows after this list). So even if, for example, level 5 is used in both cases, the number of sampled blocks will be different for most of the 10 levels available (levels 0 and 10 obviously being the exceptions)

    2. The Dynamic Sampling code uses a different approach for partitioned objects if it is faced with the situation that there are more partitions than blocks to sample according to the level (and type table/cursor/session) of Dynamic Sampling
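
    A minimal sketch of the two hint variants referred to above (the table name t is just a placeholder):

    -- table-level dynamic sampling hint: applies to the referenced table only
    select /*+ dynamic_sampling(t 5) */ count(*) from t;

    -- cursor-level dynamic sampling hint: applies to the statement as a whole
    select /*+ dynamic_sampling(5) */ count(*) from t;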

    Note that all of this applies to the case where no statistics have been gathered for the table - I don't cover the case where Dynamic Sampling gets used on top of existing statistics.