This is the third post in a series about unnecessary indexes and some of the challenges that they present.
In this post I want to look at how to quantify the overhead of index maintenance, and estimate the benefit of removing redundant indexes.
Let’s conduct an experiment. I will set up the same table and indexes as in the previous posts in this series, but this time I will place them in the RECYCLE buffer pool, to which I have allocated the minimum possible 4M of memory, and I will allocate 25% free space in the indexes. The idea is not to benefit from the buffer cache, but to suffer the pain of the physical I/O.
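A setup along these lines (the table, column and index names here are illustrative, not the ones from the earlier posts) would look something like this:

```sql
-- Assumes the RECYCLE pool has been sized to the 4M minimum, e.g.:
-- ALTER SYSTEM SET db_recycle_cache_size = 4M;

CREATE TABLE t (
  id      NUMBER NOT NULL,
  val     NUMBER,
  padding VARCHAR2(100)
) STORAGE (BUFFER_POOL RECYCLE);

-- Index placed in the RECYCLE pool, with 25% free space per leaf block
CREATE INDEX t_val_idx ON t (val)
  PCTFREE 25
  STORAGE (BUFFER_POOL RECYCLE);
```

With such a small pool, index blocks touched during DML are quickly aged out again, so the maintenance cost shows up as physical I/O rather than being hidden by the cache.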
This is the second post in a series about unnecessary indexes and some of the challenges that they present.
This is the first post in a series about unnecessary indexes and some of the challenges that they present.
I have always noted that it is remarkably easy to add indexes to systems, especially OLTP systems. However, removing them is generally more difficult.
The reward for removing an unused, or at least unnecessary, index is that you no longer need to maintain it during DML operations, thus reducing I/O and avoiding loading index blocks into the buffer cache. The risk, however, is that performance will degrade somewhere because an execution plan changes.
Here is an odd bug that can lead to some nasty side effects when using the EXCHANGE PARTITION technique. It has probably been there for a very long time, simply because it depends on the use of virtual columns, and the basic virtual column mechanism was introduced way back in Oracle 8i with the introduction of Function Based Indexes.
The problem isn't the exchange partition operation itself, but the accompanying swap of object statistics information, in particular the column statistics.
Look at the following sequence of DDL and DML commands, and pay special attention to the output for the column statistics before and after the EXCHANGE PARTITION operation:
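The original command sequence isn't reproduced here; a minimal sketch of the kind of sequence meant (all object names are illustrative) could look like this:

```sql
-- Partitioned table with a virtual column
CREATE TABLE t_part (
  id  NUMBER,
  val NUMBER,
  v   NUMBER GENERATED ALWAYS AS (val * 2) VIRTUAL
)
PARTITION BY RANGE (id) (
  PARTITION p1 VALUES LESS THAN (MAXVALUE)
);

-- Staging table with the same shape, including the virtual column
CREATE TABLE t_stage (
  id  NUMBER,
  val NUMBER,
  v   NUMBER GENERATED ALWAYS AS (val * 2) VIRTUAL
);

INSERT INTO t_stage (id, val)
SELECT rownum, rownum FROM dual CONNECT BY level <= 1000;

EXEC DBMS_STATS.GATHER_TABLE_STATS(user, 'T_STAGE')

-- Column statistics before the swap
SELECT table_name, column_name, num_distinct, low_value, high_value
FROM   user_tab_cols
WHERE  table_name IN ('T_PART', 'T_STAGE');

ALTER TABLE t_part EXCHANGE PARTITION p1 WITH TABLE t_stage;

-- ...and after the swap, to compare
SELECT table_name, column_name, num_distinct, low_value, high_value
FROM   user_tab_cols
WHERE  table_name IN ('T_PART', 'T_STAGE');
```

The point of interest is whether the column statistics for the virtual column are swapped consistently along with the rest of the statistics.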
So you have an application that cannot be changed but makes use of some weird expressions that screw up the optimizer's cardinality estimates.
Consider this simple example:
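The original example isn't shown here; a sketch of the sort of expression meant (table and column names are made up for illustration) might be:

```sql
CREATE TABLE t AS
SELECT rownum AS id, MOD(rownum, 10) AS flag
FROM   dual CONNECT BY level <= 10000;

EXEC DBMS_STATS.GATHER_TABLE_STATS(user, 'T')

-- Wrapping the column in an expression defeats the column statistics:
-- the optimizer cannot use the stats on FLAG and falls back to a
-- default selectivity guess for the predicate
SELECT *
FROM   t
WHERE  CASE WHEN flag = 0 THEN 1 ELSE 0 END = 1;
```

A plain `flag = 0` predicate would be estimated accurately from the column statistics; the CASE expression hides the column from the optimizer.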
Oracle 11g added Extended Statistics support for column groups in order to detect correlated columns in filter predicates that use equality comparisons.
Note that Oracle 11g also added the ability to use the number of distinct keys of a composite index as an upper limit for the cardinality estimates for matching column predicates, which means that the optimizer is now capable of detecting correlated columns without the explicit addition of Extended Statistics / Column Groups.
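A column group is created via DBMS_STATS; a minimal sketch (table and column names are illustrative) looks like this:

```sql
-- Create a column group so that COL1 and COL2 are costed together
-- rather than by multiplying their individual selectivities
SELECT DBMS_STATS.CREATE_EXTENDED_STATS(user, 'T', '(COL1, COL2)')
FROM   dual;

-- Re-gather statistics so the column group gets populated
EXEC DBMS_STATS.GATHER_TABLE_STATS(user, 'T')

-- A filter on both columns can now use the column group statistics
SELECT * FROM t WHERE col1 = 1 AND col2 = 1;
```

Without the column group (or a matching composite index, as noted above), the optimizer multiplies the individual selectivities and underestimates the cardinality when the columns are correlated.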
I've already outlined in one of my previous posts that getting a reasonable cardinality estimate for multi-column joins can be tricky, in particular when dealing with correlated column values in the join columns.
Since Oracle 10g, several "Multi-Column Join Cardinality" sanity checks have been introduced that prevent a multi-column join from producing a join cardinality that is too low - this is controlled via the "_optimizer_join_sel_sanity_check" internal parameter, which defaults to true from 10g onwards.
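The shape of join these sanity checks apply to is simply a join on more than one column pair; a sketch (table and column names are made up) for experimenting with the parameter mentioned above:

```sql
-- A two-column join: without the sanity check, the optimizer would
-- multiply the selectivities of both join predicates, which can
-- produce a far too low estimate for correlated join columns
SELECT *
FROM   t1, t2
WHERE  t1.col1 = t2.col1
AND    t1.col2 = t2.col2;

-- For comparison only: disabling the sanity check in a test session
-- (internal parameter - not something to change on a real system)
ALTER SESSION SET "_optimizer_join_sel_sanity_check" = FALSE;
```

Comparing the join cardinality in the execution plan with the check enabled and disabled shows how much the sanity check is propping up the estimate.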
This is just a short heads-up note to those dealing with HCC-enabled tables (so at present this applies only to Exadata customers).
As already outlined in a previous post about compression restrictions, tables with HCC enabled do not support dropping columns - DROP COLUMN gets silently converted into SET UNUSED, and DROP UNUSED COLUMNS throws an error because the operation is unsupported.
I've recently come across an interesting variation of this restriction. Obviously Oracle treats virtual columns the same way in this case: if you drop a virtual column of an HCC-enabled table, it doesn't get dropped but is silently turned into an unused column instead - which doesn't really make sense to me, since dropping a virtual column doesn't require any physical modification of the underlying data structures.
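A sketch of the behaviour (object names are illustrative; this requires an HCC-capable platform such as Exadata):

```sql
CREATE TABLE t_hcc (
  id NUMBER,
  c1 NUMBER,
  v1 NUMBER GENERATED ALWAYS AS (c1 + 1) VIRTUAL
) COMPRESS FOR QUERY HIGH;

-- On an HCC table this does not actually drop the virtual column:
-- it gets silently converted into SET UNUSED
ALTER TABLE t_hcc DROP COLUMN v1;

-- The column shows up here as unused instead of being gone
SELECT * FROM user_unused_col_tabs WHERE table_name = 'T_HCC';
```

On a non-HCC table the same DROP COLUMN of a virtual column completes as a genuine drop, which is what makes the HCC behaviour surprising.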
Now you might wonder why this could be relevant. Well, it can be important for several reasons: