So this will be my Oracle related Christmas present for you: a prototype implementation that extends the DBMS_XPLAN.DISPLAY_CURSOR output, hopefully making it more meaningful and easier to interpret. It is a simple standalone SQL*Plus script with the main functionality performed by a single SQL query. I've also demoed this during my recent "optimizer hacking sessions".
Since Oracle 10g, DBMS_XPLAN.DISPLAY_CURSOR together with the Rowsource Statistics feature (enabled via SQL_TRACE, the GATHER_PLAN_STATISTICS hint, STATISTICS_LEVEL set to ALL, or controlled via the corresponding hidden parameters "_rowsource_execution_statistics" and "_rowsource_statistics_sampfreq") has allowed a sophisticated analysis of the work performed by a single SQL statement.
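For reference, a minimal sketch (not the prototype script itself) of how the Rowsource Statistics can be gathered and displayed for a single statement:

-- Enable rowsource statistics for this statement only via the hint
select /*+ gather_plan_statistics */ count(*) from dual;

-- Display the actual execution statistics of the last statement executed
select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));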
Recently I was involved in a project where I had to trace the database calls of an application based on Oracle Portal 10.1.4. The basic requirements were the following:
Given that Oracle Portal uses a pool of connections and that for each HTTP call it can use several database sessions, statically enabling SQL trace for specific sessions was not an option.
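A common alternative in such situations (a sketch of the general technique, not necessarily the exact solution used in that project; the service and module names are illustrative) is to enable tracing based on the service/module/action attributes rather than on individual sessions:

-- Trace every session running under the given service/module,
-- regardless of which pooled connection it uses (10g and later)
begin
  dbms_monitor.serv_mod_act_trace_enable(
    service_name => 'PORTAL_SVC',   -- hypothetical service name
    module_name  => 'PORTAL',       -- hypothetical module name
    waits        => true,
    binds        => true
  );
end;
/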
On page 189 of TOP I wrote the following piece of text:
In summary, with the initialization parameter optimizer_secure_view_merging set to TRUE, the query optimizer checks whether view merging could lead to security issues. If this is the case, no view merging will be performed, and performance could be suboptimal as a result. For this reason, if you are not using views for security purposes, it is better to set this initialization parameter to FALSE.
What I didn't consider when I wrote it is the implication of predicate move-around related to Virtual Private Database (VPD). In fact, as described in the documentation, that parameter controls view merging as well as predicate move-around.
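To make the context concrete, here is a hypothetical VPD setup of the kind affected by this behavior (all object and function names are made up for illustration):

-- A VPD policy attaches a security predicate to the table at runtime
begin
  dbms_rls.add_policy(
    object_schema   => 'APP',
    object_name     => 'T',
    policy_name     => 'T_SEC',
    function_schema => 'APP',
    policy_function => 'T_SEC_PREDICATE'
  );
end;
/

-- With this parameter set to FALSE, the optimizer is free to move
-- user-supplied predicates around such protected objects
alter system set optimizer_secure_view_merging = false;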
In the initial part of this series I've explained some details regarding logical I/O, using a Nested Loop Join as an example.
To recap, I've shown in particular:
- Oracle can re-visit pinned buffers without performing logical I/O
- There are different variants of consistent gets: a "normal" one involving buffer pin/unpin cycles that requires two latch acquisitions, and a short-cut variant that visits the buffer while holding the corresponding "cache buffers chains" child latch (an "examination") and therefore requires only a single latch acquisition (the snippet after this list shows how to observe the corresponding counters)
- Although two statements use a similar execution plan and produce the same number of logical I/Os, one is significantly faster and scales better than the other
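As mentioned above, a simple way to observe these effects is to check the relevant session statistics before and after running a statement (a minimal sketch; statistic names may vary slightly between versions):

-- Consistent gets, the short-cut "examination" variant and
-- pinned-buffer re-visits as seen by the current session
select sn.name, ms.value
from   v$mystat ms, v$statname sn
where  ms.statistic# = sn.statistic#
and    sn.name in ('consistent gets',
                   'consistent gets - examination',
                   'buffer is pinned count');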
This is the first part in a series of blog posts that shed some light on the enhancements Oracle has introduced in recent releases regarding the optimization of logical I/O.
Before we can appreciate the enhancements, though, we need to understand the baseline. This is what this blog post is about.
The example used throughout this post is based on a simple Nested Loop Join which is one area where Oracle has introduced significant enhancements.
It started its life as a comparison of unique vs. non-unique indexes used as part of a Nested Loop Join and their influence on performance and scalability.
This comparison is very educational on its own and also allows me to demonstrate and explain some of the little details regarding logical I/O.
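A hypothetical setup sketch for such a comparison (table, column and index names are illustrative, not the original test case):

-- Inner row source of the Nested Loop Join
create table t_inner
as
select rownum as id, rpad('x', 100) as filler
from   dual
connect by level <= 100000;

-- Variant 1: unique index on the join column
create unique index t_inner_uq on t_inner (id);

-- Variant 2: non-unique index on the join column (after dropping variant 1)
-- create index t_inner_nuq on t_inner (id);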
A reader of this blog, Paresh, asked me how I was able to find out the logic behind ITL waits without having access to the Oracle code. My reply was: I wrote a test case that reproduces ITL waits and a piece of code that monitors them.
Since other readers might be interested, here is the shell script I wrote. Notice that it takes four parameters as input: user name, password, SID, and how long it has to wait in the monitoring phase.
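The queries at the heart of such a monitor can be as simple as the following sketch (an illustration of the approach, not the script itself; the segment name is hypothetical):

-- Cumulative ITL waits recorded for a given segment
select value
from   v$segment_statistics
where  owner = user
and    object_name = 'T'
and    statistic_name = 'ITL waits';

-- Sessions currently stuck waiting for an ITL slot
select sid, event
from   v$session_wait
where  event = 'enq: TX - allocate ITL entry';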
This is just a short note prompted by a recent thread on the OTN forums. In recent versions Oracle changes the cost of a full table scan (FTS, or index fast full scan / IFFS) quite dramatically if the "flashback query" clause is used.
It looks like it simply uses the number of blocks of the segment as the I/O cost of the FTS operation, quite similar to setting "db_file_multiblock_read_count" ("dbfmbrc"), or from 10g on more precisely "_db_file_optimizer_read_count", to 1 for the cost estimate of the segment in question (but be aware of the MBRC setting of WORKLOAD System Statistics, see comments below).
This can lead to some silly plans, depending on the other available access paths, as can be seen from the thread mentioned.
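The effect is easy to check yourself (a minimal sketch; the table name is hypothetical):

-- Compare the FTS cost of a plain query ...
explain plan for
select * from t;

select * from table(dbms_xplan.display);

-- ... with the cost of the same query using a flashback query clause
explain plan for
select * from t as of timestamp systimestamp - interval '10' minute;

select * from table(dbms_xplan.display);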
Since at least Oracle 9i, the Cost Based Optimizer (CBO) has supported the automatic generation of additional predicates based on transitive closure.
In principle this means:
If a = b and b = c then the CBO can infer a = c
As so often with these optimizations, the purpose of the automatically generated additional predicates is to allow the optimizer to find potentially more efficient access paths, like an index usage, or to filter earlier and thereby reduce the amount of data to process.
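A hypothetical illustration (table and column names are made up):

-- Given the join predicate t1.id = t2.id and the filter t1.id = 42,
-- transitive closure allows the optimizer to add t2.id = 42, which may
-- for example enable an index access path on t2
select *
from   t1, t2
where  t1.id = t2.id
and    t1.id = 42;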
This was meant to be published shortly after my latest quiz night post as an explanatory follow-up, but unfortunately I've only now managed to complete this note.
There is a more or less famous bug in ASSM (see bug 6918210 in MOS as well as Greg Rahn's and Jonathan Lewis' posts) in versions below 11.2 that so far has been classified as showing up only with a combination of larger block sizes (greater than the current default of 8K) and excessive row migration. With such a combination it was reproducible that an UPDATE of the same data pattern residing in an ASSM tablespace caused significantly more work than doing the same in an MSSM tablespace, because apparently ASSM had problems finding suitable blocks to store the migrated rows.
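A hypothetical sketch of the kind of data pattern meant here (not the original test case): rows that are inserted small and then updated to a much larger size, so that a large fraction of them has to migrate:

-- PCTFREE 0 leaves no room for growth, so the subsequent UPDATE
-- forces massive row migration
create table t (id number, filler varchar2(200)) pctfree 0;

insert into t
select rownum, 'x'
from   dual
connect by level <= 100000;

commit;

update t set filler = rpad('x', 200);

commit;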
In recent releases Oracle has silently changed the behavior of ITL waits. The aim of this post is to describe what has changed and why. But, first of all, let's review some essential concepts about ITLs and ITL waits.
Interested Transaction List
The Oracle database engine locks the data modified by a transaction at the row level. To implement this feature, every data block contains a list of all transactions that are modifying it. This list is commonly called the interested transaction list (ITL). Its purpose is twofold. First, it is used to store information identifying a transaction as well as a reference to access the undo data associated with it. Second, it is referenced by every modified or locked row to indicate which transaction it is involved in.
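For completeness, the initial number of ITL slots per block is controlled by the physical attributes of the segment, and the ITL of a block can be inspected with a block dump (object name and file/block numbers are hypothetical):

-- Reserve four ITL slots per block from the start
create table t (id number) initrans 4;

-- Write the contents of a specific block, including its ITL,
-- to the session's trace file
alter system dump datafile 4 block 42;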