Table Functions And Join Cardinality Estimates

If you use Table Functions you should be aware of some limitations in the optimizer's calculations, in particular when a Table Function is joined to other row sources.

As outlined in one of my previous posts, you can and should help the optimizer arrive at a reasonable cardinality estimate when dealing with table functions. However, doing so still doesn't provide all the inputs to the join cardinality calculation that would be available from the statistics of regular tables.

Therefore, even when following the recommended practice regarding cardinality estimates, it is possible to end up with some inaccuracies. This post explains why.
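As a reminder of that recommended practice, here is a minimal sketch of a pipelined table function joined to a regular table, where the (undocumented) CARDINALITY hint supplies the estimate - the type, function and table names (t_num_tab, f_ids, my_table) are made up for illustration:

-- Made-up pipelined table function returning a collection of numbers
create or replace type t_num_tab as table of number;
/

create or replace function f_ids(p_cnt in number) return t_num_tab pipelined
as
begin
  for i in 1..p_cnt loop
    pipe row (i);
  end loop;
  return;
end;
/

-- Without help the optimizer assumes a fixed default cardinality for the
-- table function (block size dependent, e.g. 8168 rows with an 8K block size).
-- The undocumented CARDINALITY hint is one way to supply a sensible estimate
-- for the join cardinality calculation:
select /*+ cardinality(t 100) */ x.*
from   table(f_ids(100)) t
     , my_table x
where  x.id = t.column_value;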

Join Cardinality Basics

Auto DOP And Direct-Path Inserts

This is just a short note about one of the potential side-effects of the new Auto Degree Of Parallelism (DOP) feature introduced in 11.2.

If you happen to have parallel DML enabled in your session along with Auto DOP (and here I refer to the PARALLEL_DEGREE_POLICY = AUTO setting, not LIMITED), then it might take you by surprise that INSERT statements that are neither decorated with a parallel hint nor reference any parallel-enabled objects can be turned into direct-path inserts.
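A minimal sketch of that scenario - the table names (target_tab, source_tab) are made up, and whether the insert actually ends up as a direct-path operation depends on the Auto DOP decision:

-- Enable Auto DOP and parallel DML in the session
alter session set parallel_degree_policy = auto;
alter session enable parallel dml;

-- No APPEND hint, no parallel hint, no parallel-decorated objects - yet with
-- Auto DOP this insert may be executed as a (parallel) direct-path insert,
-- with all the corresponding side effects (exclusive segment lock, data
-- inserted above the high water mark, etc.)
insert into target_tab
select * from source_tab;

commit;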

Logical I/O Evolution - Part 3: 11g

Preface (with apologies to Kevin Closson)

This blog post is too long

Introduction

In the previous part of this series I've already demonstrated that the logical I/O optimization of the Table Prefetching feature depends on the order of the row sources - and 11g takes this approach a big step further.

It is very interesting that 11g does not require any particular feature like Table Prefetching or Nested Loop Join Batching (another new feature introduced in 11g) to take advantage of the Logical I/O optimization - it seems to be available even with the most basic form of a Nested Loop join.
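A minimal sketch of how such a test can be set up - the table names (dim_tab, fact_tab) are made up; the idea is simply to compare the session's "consistent gets" before and after a plain nested loop join:

-- Snapshot of the session's logical I/O before the join
select s.value
from   v$mystat s, v$statname n
where  s.statistic# = n.statistic#
and    n.name = 'consistent gets';

-- A basic nested loop join, forced via hints
select /*+ leading(d) use_nl(f) */ count(f.pad)
from   dim_tab d, fact_tab f
where  f.dim_id = d.id;

-- Snapshot after the join - the difference is the logical I/O of the join
select s.value
from   v$mystat s, v$statname n
where  s.statistic# = n.statistic#
and    n.name = 'consistent gets';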

Multi-Column Joins, Expressions and 11g

Introduction

I've already outlined in one of my previous posts that getting a reasonable cardinality estimate for multi-column joins can be tricky, in particular when dealing with correlated column values in the join columns.

Since Oracle 10g several "Multi-Column Join Cardinality" sanity checks have been introduced that prevent a multi-column join from producing join cardinalities that are too low - this is controlled via the "_optimizer_join_sel_sanity_check" internal parameter, which defaults to true from 10g on.
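A minimal sketch of the kind of join meant here - the table and column names are made up; with perfectly correlated values in col1 and col2 the simple multiplication of the individual join selectivities would underestimate the result, which is where the sanity check steps in:

-- Two-column join: by default the optimizer multiplies the selectivities of
-- the individual join predicates, which can be far too low for correlated
-- column values - the sanity check limits how low the combined value may get
select count(*)
from   t1, t2
where  t1.col1 = t2.col1
and    t1.col2 = t2.col2;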

HCC And Virtual Columns

This is just a short heads-up note to those dealing with HCC-enabled tables (so at present this applies only to Exadata customers).

As already outlined in a previous post about compression restrictions, tables with HCC enabled do not support dropping columns - DROP COLUMN gets silently converted into SET UNUSED, and DROP UNUSED COLUMNS raises an error because it is unsupported.

I've recently come across an interesting variation of this restriction. Obviously Oracle treats virtual columns in this case the same: if you drop a virtual column of an HCC-enabled table it doesn't get dropped but is also silently turned into an unused column - which doesn't really make sense to me, since dropping it doesn't require any physical modification to the underlying data structures.
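A minimal sketch of the behaviour (this obviously requires a platform that supports HCC); the table and column names are made up:

-- Made-up HCC-compressed table
create table t_hcc compress for query high
as
select rownum as id, rpad('x', 100) as pad
from   dual
connect by level <= 1000;

-- Add a virtual column
alter table t_hcc add (id_virt as (id * 2));

-- On the HCC-enabled table this does not actually drop the virtual column -
-- it is silently converted into an unused column instead
alter table t_hcc drop column id_virt;

-- The unused column shows up here
select * from user_unused_col_tabs where table_name = 'T_HCC';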

Now you might wonder why this could be relevant. Well, it can be important for several reasons:

Flashback Query "AS OF" - Tablescan costs

This is just a short note prompted by a recent thread on the OTN forums. In recent versions Oracle changes the costs of a full table scan (FTS or index fast full scan / IFFS) quite dramatically if the "flashback query" clause gets used.

It looks like it simply uses the number of blocks of the segment as the I/O cost for the FTS operation - quite similar to setting "db_file_multiblock_read_count" ("dbfmbrc"), or from 10g on more precisely "_db_file_optimizer_read_count", to 1 for the cost estimate of the segment in question (but be aware of the MBRC setting of WORKLOAD System Statistics, see comments below).

This can lead to some silly plans depending on the other available access paths, as can be seen in the thread mentioned above.
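A minimal sketch of how to reproduce the observation - the table name (big_tab) is made up; simply compare the FTS cost reported with and without the flashback clause:

-- Cost of a plain full table scan
explain plan for
select count(*) from big_tab;

select * from table(dbms_xplan.display);

-- Same statement with the flashback query clause - the I/O cost of the
-- full table scan changes dramatically
explain plan for
select count(*) from big_tab as of timestamp systimestamp - interval '5' minute;

select * from table(dbms_xplan.display);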

Transitive Closure - Outer Joins

Since at least Oracle 9i the Cost Based Optimizer (CBO) has supported the automatic generation of additional predicates based on transitive closure.

In principle this means:

If a = b and b = c then the CBO can infer a = c

As so often with these optimizations, the purpose of the automatically generated additional predicates is to allow the optimizer to find potentially more efficient access paths, such as an index access, or to filter earlier and thereby reduce the amount of data to process.
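A minimal sketch of the idea - the table and column names are made up:

-- The optimizer can infer the additional predicate t1.id = 42 from
-- t1.id = t2.id and t2.id = 42, which may for example enable an index
-- access on t1 or filter t1 before the join
select *
from   t1, t2
where  t1.id = t2.id
and    t2.id = 42;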

ASSM bug reprise - part 2

Introduction

In the first part of this post I explained some of the details and underlying reasons for bug 6918210. The most important point about the bug is that it can only be hit if many row migrations happen during a single transaction. However, having excessive row migrations is usually a sign of poor design, so this point probably can't be stressed enough:

If you don't have excessive row migrations, the bug cannot become significant

Of course, there might be cases where you think you actually have a sound design but due to lack of information about the internal workings it might not be obvious that excessive row migrations could be caused by certain activities.
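To illustrate what "many row migrations during a single transaction" could look like, here is a minimal, made-up sketch (table and column names are invented): rows are inserted small and then expanded by an update within the same transaction, so that many of them no longer fit into their original block and get migrated:

-- Made-up table; rows are initially inserted with an empty PAD column
create table t_migrate (id number, pad varchar2(1000)) pctfree 10;

insert into t_migrate (id)
select rownum from dual connect by level <= 10000;

-- Expanding the rows in the same transaction causes many of them to be
-- migrated to other blocks
update t_migrate set pad = rpad('x', 1000);

commit;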

Things worth to mention and remember (II) - Parallel Execution Control 2

Continuing from the previous installment of this series, I'll cover in this post some of the inevitable classics regarding Parallel Execution Control. So forgive me if you're bored by the repetition of known facts - however, I still see these things used incorrectly far too often, therefore I decided: this is worth to mention and remember!

- Up to and including version 10.2, PARALLEL (without any parameters) and NOLOGGING are valid keywords only in DDL commands

- Applies to all versions: NOLOGGING cannot be used as a hint. It can only be specified as part of DDL, for example ALTER INDEX ... REBUILD PARALLEL NOLOGGING (see the sketch below).
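A minimal sketch contrasting the two cases - the object names (t_idx, target_tab, source_tab) are made up:

-- Valid: PARALLEL and NOLOGGING as DDL keywords
alter index t_idx rebuild parallel nologging;

-- Not valid as intended: there is no NOLOGGING hint, so that part is simply
-- ignored - the insert is direct-path because of APPEND, but it is still
-- fully logged unless the segment itself has the NOLOGGING attribute
insert /*+ append nologging */ into target_tab
select * from source_tab;

commit;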

Parallel DML - Conventional (non-direct-path) Inserts As Select

In a recent discussion I mentioned that I seemed to remember that the DML part of conventional insert-as-select loads will always be executed serially, even with parallel DML enabled and parallel DML execution requested. It's important to understand in this context that this is not the same as the parallel query execution of the SELECT part, which is possible independently of the parallel DML part.

After that discussion I realized that it had been quite some time since I last tested this scenario - probably back then with some 10.2 version.

So I quickly put together a small test case that I ran on 11g versions, and the results were quite surprising, which motivated me to take a closer look.
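A minimal sketch of such a test case - the table names are made up; the NOAPPEND hint keeps the insert conventional (non-direct-path) while parallel DML is enabled and requested:

-- Enable and request parallel DML in the session
alter session enable parallel dml;

-- Conventional (non-direct-path) insert as select: NOAPPEND keeps the load
-- conventional, the PARALLEL hints request parallel execution of both the
-- DML and the query part; the execution plan then reveals whether the
-- LOAD TABLE CONVENTIONAL step actually runs in parallel or serially
insert /*+ noappend parallel(t 4) */ into target_tab t
select /*+ parallel(s 4) */ * from source_tab s;

commit;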