System Statistics

System Statistics Gathered in Exadata Mode – When Are They Relevant?

The aim of this post isn’t to explain what the “exadata mode” is. Hence, if you don’t know what it is, have a look at this post published on Kerry Osborne’s blog before reading on. The only thing I would like to add is that the “exadata mode” is available as of 11.2.0.4 or when a patch implementing the enhancement associated with bug 10248538 is installed.

The key information I would like to share with you is that, in some situations, gathering system statistics in “exadata mode” is pointless. Let me explain why… But before doing so, it’s important to review how the query optimizer computes the cost of full scans.

The key formula used by the query optimizer to compute the I/O cost of a full scan is the following:

io_cost = ceil ( blocks / mbrc * mreadtim / sreadtim ) + 1

Where:

- blocks is the number of blocks of the segment (below the high watermark) to be read
- mbrc is the value of the MBRC system statistic (the db_file_multiblock_read_count initialization parameter when workload statistics aren’t available)
- mreadtim is the average time, in milliseconds, to perform a multiblock read
- sreadtim is the average time, in milliseconds, to perform a single block read
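
As a quick sanity check of the formula, take a hypothetical segment of 10,000 blocks with mbrc = 8, mreadtim = 20 and sreadtim = 10 (figures chosen for illustration only):

SQL> SELECT ceil(10000 / 8 * 20 / 10) + 1 AS io_cost
  2  FROM dual;

   IO_COST
----------
      2501

In other words, the 1,250 multiblock reads are expressed as the equivalent number of single block reads (2,500), plus 1.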

Challenges and Chances of the 11g Query Optimizer

Challenges and Chances of the 11g Query Optimizer is the name of a presentation I gave at several events (e.g. Trivadis Performance Days, Oracle OpenWorld, DOAG Konferenz, UKOUG Conference) throughout 2011. Its abstract is the following:

With every new release, the query optimizer is enhanced. Oracle Database 11g Release 1 and Release 2 are no exception to the rule. Specifically, they introduce key improvements in the following areas: indexing, optimization techniques, object statistics and plan stability. The aim of this presentation is to review the new features from a practical point of view as well as to point out challenges related to them. In other words, to let you know what you can expect from the query optimizer when you upgrade to Oracle Database 11g.

Cost Is Time: Next Generation

It looks like Oracle has introduced, with the 11.2.0.2 patch set, a new "cost is time" model for the time estimate of the Cost-Based Optimizer (CBO).

In order to understand the implications let me summarize the evolution of the CBO in terms of cost / time estimate so far:

1. Oracle 7 and 8

The cost estimate generated by the Cost-Based Optimizer (CBO) has always been a time estimate, although expressed in a slightly obscure unit: the number of single block reads.
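
To make the unit concrete with a made-up figure: a plan with a cost of 5,000 stands for the time needed to perform 5,000 single block reads, so assuming a hypothetical 10 ms per single block read, the estimated elapsed time is:

SQL> SELECT 5000 * 10 / 1000 AS estimated_seconds
  2  FROM dual;

ESTIMATED_SECONDS
-----------------
               50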

Workload System Statistics Bug in 11.2

Since the introduction of 11.2 I regularly receive questions related to “strange” workload system statistics. The last email on that topic was sent to me yesterday. So, instead of sending yet another private reply, I decided to write this short post.

What’s wrong with 11.2 and workload system statistics?

Let’s have a look at the output of the following query:

SQL> SELECT pname, pval1
  2  FROM sys.aux_stats$
  3  WHERE sname = 'SYSSTATS_MAIN';

PNAME                  PVAL1
--------------- ------------
CPUSPEEDNW            1596.0
IOSEEKTIM                4.0
IOTFRSPEED            4096.0
SREADTIM             10900.3
MREADTIM              4525.8
CPUSPEED              1603.0
MBRC                     7.0
MAXTHR            17391616.0
SLAVETHR            413696.0

As you can see, the SREADTIM and MREADTIM times are very high. In this case, about three orders of magnitude higher than what you would expect from a regular system.

I’m not aware of the exact cause of this problem, but to me it seems that the statistics externalized through x$kcfio are broken. In any case, several related bugs are filed in MOS (e.g. 9842771 and 9701256). Hence, it’s not a feature (e.g. a change in the unit of measure), it’s a bug. On my Linux test system I’m able to reproduce it on both 11.2.0.1 and 11.2.0.2. According to the bugs mentioned before, the problem is not limited to Linux.

Since the DBMS_STATS package is not able to gather correct statistics, the only advice I can give on this topic is to manually set them to sensible values.
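
For example, a skeleton like the following could be used (the values are placeholders only; pick figures that are sensible for your system):

BEGIN
  -- get rid of the broken workload statistics first
  dbms_stats.delete_system_stats();
  -- manually set the workload statistics to sensible values
  dbms_stats.set_system_stats(pname => 'CPUSPEED', pvalue => 1600);
  dbms_stats.set_system_stats(pname => 'SREADTIM', pvalue => 10);
  dbms_stats.set_system_stats(pname => 'MREADTIM', pvalue => 20);
  dbms_stats.set_system_stats(pname => 'MBRC',     pvalue => 8);
END;
/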

Update 2011-03-23

To fix the problem you can install patch 9842771. It is available for 11.2.0.1 and 11.2.0.2. By the way, since the patch only provides a new version of the dbms_stats_internal package, the statistics externalized through x$kcfio are not broken after all… they just use another unit of measure.

The CPU Costing Model: A Few Thoughts Part V (Reality)

There’s plenty more I could talk about regarding the CBO CPU costing model and system statistics, but I’ll make this my final little comment on this subject for now. As previously discussed, the CPU costing model basically takes the time it takes to perform all the necessary I/O related activities and all the time it takes to [...]

The CPU Costing Model: A Few Thoughts Part IV (Map of the Problematique)

It’s called the CPU Costing model because, among other things, it includes the time associated with performing CPU operations. The CPU Costing model formula once again: (sum of all the single block I/Os x average wait time for a single block I/O + sum of all the multiblock I/Os x average wait time for [...]

The CPU Costing Model: A Few Thoughts Part III (Bang Bang)

One of the advantages of system statistics and the CPU costing model is in how the CBO deals with multiblock reads and the impact of the db_file_multiblock_read_count parameter. When performing a full table (or fast full index) scan, the CBO needs to determine just how many multiblock read operations are likely to be performed so the associated operation can [...]
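
For a hypothetical 10,000 block segment and an MBRC of 8, that estimate is simply:

SQL> SELECT ceil(10000 / 8) AS multiblock_reads
  2  FROM dual;

MULTIBLOCK_READS
----------------
            1250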

The CPU Costing Model – A Few Thoughts Part II

As previously discussed, the formula used by the CBO with the CPU costing model is basically:

(sum of all the single block I/Os x average wait time for a single block I/O +
 sum of all the multiblock I/Os x average wait time for a multiblock I/O +
 sum of all the required CPU cycles / CPU cycles per second)
/
average wait time for a single block I/O [...]
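
To see the formula at work with purely hypothetical figures: no single block reads, 1,250 multiblock reads at 20 ms each, 50 million CPU cycles at a CPUSPEED of 1,600 MHz (i.e. 1,600,000 cycles per millisecond), and a sreadtim of 10 ms:

SQL> SELECT ceil((0 * 10                -- single block I/Os x sreadtim
  2              + 1250 * 20            -- multiblock I/Os x mreadtim
  3              + 50000000 / 1600000   -- CPU cycles / cycles per ms
  4               ) / 10) AS cost       -- all divided by sreadtim
  5  FROM dual;

      COST
----------
      2504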

The CPU Costing Model: A Few Thoughts Part I

In the coming days, I’ll post a series of little entries, each highlighting a specific point in relation to the use of system statistics and the CPU costing model. In my previous post, we looked at how the cost of a FTS is calculated using the CPU costing model and how this generally results in an increase in the associated FTS [...]

Understanding the different modes of System Statistics aka. CPU Costing and the effects of multiple blocksizes - part 4

Back to part 3

Using objects residing in multiple blocksizes

I've already mentioned it several times on my blog, but I would like to take the chance here again to stress the point that the cost based optimizer does a bad job when it comes to calculating costs for full table scans of objects residing in non-default block sizes. It really looks like this feature has been introduced to support transportable tablespaces, but it obviously hasn't been tested very thoroughly when it comes to cost calculation.

Each of the different modes has its deficiencies when dealing with objects in non-default blocksizes. The somewhat odd thing is that the traditional I/O costing does the best job, while all system statistics based calculations are utterly wrong.

Traditional I/O based costing

The traditional I/O based costing simply scales the MBRC up or down according to the non-default blocksize to arrive at the same I/O read request size. So if you e.g. have an MBRC of 8 and a default blocksize of 8KB and now calculate the cost for an object residing in a 2KB tablespace, the MBRC will be multiplied by 4, which results in an MBRC of 32. The I/O cost will nevertheless be different, due to the different adjustment applied to the higher MBRC setting: the adjusted MBRC for 32 is 16.39, whereas the adjusted MBRC for 8 is 6.59, so the calculated cost for the full table scan of the object residing in the 2KB tablespace will be higher. Likewise, the same happens when using an object in a 16KB non-default tablespace: the MBRC will be reduced accordingly to 4 to get the same I/O read size again. Since the adjusted MBRC for MBRC = 4 is 4.17, the calculated cost will actually be lower for the object residing in the 16KB tablespace.
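
Putting hypothetical numbers on this: an 80 MB segment corresponds to 10,240 blocks of 8KB, 40,960 blocks of 2KB or 5,120 blocks of 16KB. With the adjusted MBRC values mentioned above (16.39, 6.59 and 4.17 respectively), the relative full table scan costs come out as follows:

SQL> SELECT round(40960 / 16.39) AS cost_2k,   -- MBRC scaled up to 32
  2         round(10240 / 6.59)  AS cost_8k,   -- default blocksize, MBRC = 8
  3         round(5120 / 4.17)   AS cost_16k   -- MBRC scaled down to 4
  4  FROM dual;

   COST_2K    COST_8K   COST_16K
---------- ---------- ----------
      2499       1554       1228

The very same segment is thus costed quite differently depending solely on the blocksize of the tablespace it resides in.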