Understanding the different modes of System Statistics aka. CPU Costing and the effects of multiple blocksizes - part 4

Back to part 3

Using objects residing in multiple blocksizes

I've already mentioned it several times on my blog, but I would like to take the chance here again to stress the point that the cost based optimizer does a bad job when it comes to calculating costs for full table scans of objects residing in non-default block sizes. It really looks like this feature has been introduced to support transportable tablespaces, but it obviously hasn't been tested very thoroughly when it comes to cost calculation.

Each of the different modes has its deficiencies when dealing with objects in non-default blocksizes. The somewhat odd thing is that the traditional I/O based costing does the best job, whereas all system statistics based calculations are utterly wrong.

Traditional I/O based costing

The traditional I/O based costing simply scales the MBRC up or down according to the non-default blocksize to arrive at the same I/O read request size. So if you e.g. have an MBRC of 8 and a default blocksize of 8KB and now calculate the cost for an object residing in a 2KB tablespace, the MBRC will be multiplied by 4, which results in an MBRC of 32. The I/O cost will nevertheless be different, due to the different adjustment applied to the higher MBRC setting: the adjusted MBRC for 32 is 16.39, whereas the adjusted MBRC for 8 is 6.59, so the calculated cost for the full table scan of the object residing in the 2KB tablespace will be higher. The same happens when using an object in a 16KB non-default tablespace: the MBRC will be reduced accordingly to 4 to get the same I/O read size again. Since the adjusted MBRC for an MBRC of 4 is 4.17, the cost calculated will actually be less for the object residing in the 16KB tablespace.
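As a rough illustration of that arithmetic (my own sketch, assuming the 10,000 block (8KB) table segment used later in this series, which corresponds to roughly 40,000 blocks at 2KB and 5,000 blocks at 16KB, and ignoring the constant offset added to the cost):

-- Traditional I/O cost is roughly blocks / adjusted MBRC
-- (adjusted MBRC values taken from the text)
select round(10000 / 6.59)  as cost_8k_default      -- approx. 1,517
,      round(40000 / 16.39) as cost_2k_tablespace   -- approx. 2,441 => higher
,      round( 5000 / 4.17)  as cost_16k_tablespace  -- approx. 1,199 => lower
from   dual;

So the 2KB object ends up with the highest cost and the 16KB object with the lowest, although all three read exactly the same amount of data.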

Weekend quiz

Quiz questions seem to be in fashion nowadays, so here's one for you regarding the cost based optimizer:

Given this simple script (run this on any 10.2.x release, including XE, or on 11.1.0.6/7):

create table test1
as
select * from dba_objects;

create table test2
as
select * from dba_objects;

create index i_test1_1 on test1(object_name);

create index i_test1_2 on test1(object_id);

create index i_test2_1 on test2(object_name);

create index i_test2_2 on test2(object_id);

exec dbms_stats.gather_table_stats(null, 'TEST1', estimate_percent=>null, method_opt=>'for all columns size 1');

exec dbms_stats.gather_table_stats(null, 'TEST2', estimate_percent=>null, method_opt=>'for all columns size 1');

And this simple query:

explain plan for
select
*
from
test1
where
object_name='TEST1'
and object_id in (
select /*+ unnest */
object_id
from
test2
);

with this plan:

Plan 1 (the obvious one):

Plan hash value: 2107173885

--------------------------------------------------------------------------------
| Id  | Operation          | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |           |     2 |   212 |     6   (0)| 00:00:01 |
|   1 | NESTED LOOPS SEMI  |           |     2 |   212 |     6   (0)| 00:00:01 |

Understanding the different modes of System Statistics aka. CPU Costing and the effects of multiple blocksizes - part 3

Back to part 2 Forward to part 4

System statistics in 9i

In 10g the CPU costing mode is enabled by default and is supported by the default NOWORKLOAD system statistics.

But you can use system statistics already in 9i, although you have to enable them explicitly.

Oracle 9i is out of support, but I guess there are still a lot of systems out there using the 9.2 release, therefore I find it worth mentioning what you can do in 9i with system statistics. This can be helpful if you're considering testing your application with system statistics in the 9i environment before upgrading to 10g.

In most descriptions of 9i and system statistics only WORKLOAD system statistics are mentioned, but 9i also supports NOWORKLOAD system statistics, although not in the same full flavour as 10g.

You can activate CPU costing in 9i by running DBMS_STATS.GATHER_SYSTEM_STATS('NOWORKLOAD'), but this seems to work differently than in 10g and later.

Whereas in 10g and later this actually measures the IOSEEKTIM and IOTFRSPEED values, in 9i it seems to activate something that is comparable to the default NOWORKLOAD system statistics of 10g.

The SYS.AUX_STATS$ table does not show any actual values, but tests revealed that in that case 9i (at least 9.2.0.8) seems to measure the CPU speed (in 10g this is the CPUSPEEDNW value) and uses the same default values for IOSEEKTIM and IOTFRSPEED as 10g does (10ms and 4096 bytes/ms respectively).
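A minimal sketch of such a test in 9i (assuming sufficient privileges to gather system statistics and to query SYS.AUX_STATS$):

-- Enable CPU costing by gathering NOWORKLOAD system statistics
exec dbms_stats.gather_system_stats('NOWORKLOAD')

-- Check what has actually been recorded
select pname, pval1
from   sys.aux_stats$
where  sname = 'SYSSTATS_MAIN';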

Running some tests showed that you arrive at the same I/O cost as you do in 10g with the default NOWORKLOAD system statistics.

exec dbms_stats.delete_system_stats;

Understanding the different modes of System Statistics aka. CPU Costing and the effects of multiple blocksizes - part 2

Back to part 1 Forward to part 3

Before heading on to the remaining modes of system statistics, let's summarize what has been observed in part 1 regarding the default NOWORKLOAD system statistics in 10g and later. The following table shows what the test case from the previous post demonstrated:

Table 1: 8KB MSSM locally managed tablespace, 10,000 blocks table segment,
default NOWORKLOAD system statistics:

MBRC | SREADTIM | MREADTIM | MREADTIM/SREADTIM | NOCPU cost | adjusted MBRC | CPU cost | CPU/NOCPU cost
-----|----------|----------|-------------------|------------|---------------|----------|---------------
   8 |       12 |       26 |              2.16 |      1,518 |          6.59 |    2,709 |           1.78
  16 |       12 |       42 |              3.50 |        962 |         10.39 |    2,188 |           2.27
  32 |       12 |       74 |              6.16 |        610 |         16.39 |    1,928 |           3.16
  64 |       12 |      138 |             11.50 |        387 |         25.84 |    1,798 |           4.64
 128 |       12 |      266 |             22.16 |        245 |         40.82 |    1,732 |           7.07
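As a cross-check, the following sketch (my own, not part of the original test case) recomputes the SREADTIM, MREADTIM and I/O cost columns of Table 1 from the default NOWORKLOAD values (IOSEEKTIM = 10 ms, IOTFRSPEED = 4096 bytes/ms, 8KB blocks, 10,000 blocks), ignoring the small CPU component of the cost:

-- SREADTIM = IOSEEKTIM + db_block_size / IOTFRSPEED
-- MREADTIM = IOSEEKTIM + MBRC * db_block_size / IOTFRSPEED
-- I/O cost ~= (blocks / MBRC) * MREADTIM / SREADTIM
select mbrc
,      10 + 8192 / 4096        as sreadtim
,      10 + mbrc * 8192 / 4096 as mreadtim
,      round((10000 / mbrc) * (10 + mbrc * 8192 / 4096) / (10 + 8192 / 4096)) as io_cost
from   (select column_value as mbrc from table(sys.odcinumberlist(8, 16, 32, 64, 128)));

The results match the CPU cost column above to within a unit or two; the remaining difference is the CPU component of the cost.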

If you happen to have a 16KB default blocksize, the results would look like the following. Note that the table is now only 5,000 blocks in size and the SREADTIM is now a bit longer (10 + 16384/4096 = 14ms instead of 10 + 8192/4096 = 12ms), therefore with the default NOWORKLOAD system statistics the 16KB blocksize calculation makes the full table scan look a bit cheaper to the optimizer.

Table 2: 16KB MSSM locally managed tablespace, 5,000 blocks table segment,
default NOWORKLOAD system statistics:

MBRC | SREADTIM | MREADTIM | MREADTIM/SREADTIM | NOCPU cost | adjusted MBRC | CPU cost | CPU/NOCPU cost

Understanding the different modes of System Statistics aka. CPU Costing and the effects of multiple blocksizes - part 1

Forward to part 2

This is the first part of a series of posts that cover one of the fundamentals of the cost based optimizer in 9i and later. Understanding how the different system statistics modes work is crucial to making the most out of the cost based optimizer, therefore I'll attempt to provide some detailed explanations and samples of the formulas and arithmetic used. Finally I'll show (again) that using multiple block sizes for "tuning" purposes is a bad idea in general, along with detailed examples of why I think this is so.

One of the deficiencies of the traditional I/O based costing was that it simply counted the number of I/O requests, making no differentiation between single-block I/O and multi-block I/O.

System statistics were introduced in Oracle 9i to allow the cost based optimizer to take into account that single-block I/Os and multi-block I/Os should be treated differently in terms of costing and to include a CPU component in the cost calculation.

The system statistics tell the cost based optimizer (CBO), among other things, the time it takes to perform a single-block read request and a multi-block read request. Given this information the optimizer ought to be able to come to estimates that better fit the particular environment the database is running in, and additionally use an appropriate costing for multi-block read requests, which usually take longer than single-block read requests. Given the information about the time it takes to perform the read requests, the calculated cost can be turned into a time estimate.

The cost calculated with system statistics is still expressed in the same units as with traditional I/O based costing, which is in units of single-block read requests.
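For reference, the cost formula documented for CPU costing mode in the Performance Tuning Guide makes this explicit; dividing the total estimated time by SREADTIM expresses the result in units of single-block read requests:

cost = (#SRds * sreadtim + #MRds * mreadtim + #CPUCycles / cpuspeed) / sreadtim

where #SRds is the number of single-block reads, #MRds the number of multi-block reads, and #CPUCycles the number of CPU cycles required.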

Optimizer partition oddities, part 2: List partitioning

Back to part 1

Some time ago on the OTN forum the following table layout was part of a discussion regarding performance issues, and it revealed an interesting anomaly regarding list partition pruning:

If you're using list partitioning with partitions that map multiple values to a single list partition, then the optimizer obviously uses a questionable approach when you specify multiple values on the partition key that prune to a single partition.

Consider the following table layout:

CREATE TABLE XYZ
(
TICKER VARCHAR2(22 BYTE) NOT NULL,
EXCH_CODE VARCHAR2(25 BYTE) NOT NULL,
ID_ISIN VARCHAR2(12 BYTE),
HIGH_52WEEK NUMBER(28,10),
LOW_52WEEK NUMBER(28,10),
PX_OPEN NUMBER(28,10),
PX_HIGH NUMBER(28,10),
BLOOMBERG_FILE_SOURCE VARCHAR2(100 BYTE),
LATEST_VERSION_FLAG CHAR(1 BYTE)
)
PARTITION BY LIST (EXCH_CODE)
(
PARTITION BBO_ASIA VALUES ('SL','IS','SP','JF','JN','PK','KP','VM','JS','IN','TB','KQ','JP','NV','JJ','MK','HK','IJ','JT','TT','PA','CS','JX',
'IB','AU','FS','VN','NZ','KS','PM','CH','BD','JQ','VH','CG','JO')
,
PARTITION BBO_NAMR VALUES ('UO','US','UN','PQ','TR','UD','UP','TX','UL','UB','UU','UX','UT','TN','UQ','UR','UW','UV','TA','CT','CV','UC','CJ','UA','UM','CN','UF','CF')
,
PARTITION BBO_LAMR VALUES ('AR','BS','AC','CR','EG','EK','VB','BN','EQ','PE','AF','CX','KY','CC','MM','BM','TP','BV','BH','UY','BZ','ED','VC','VS','BO','CI','CB','PP','BA','JA','CE')
,
PARTITION BBO_EURO VALUES (DEFAULT)
);
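To illustrate the scenario described above (a hypothetical example of my own, not taken from the original discussion), a query like the following specifies several values on the partition key that all map to the single partition BBO_NAMR:

select *
from   xyz
where  exch_code in ('US', 'UN', 'UQ');

All three values belong to the BBO_NAMR value list, so the optimizer should be able to prune to that single partition.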

I'm now going to populate that table using this sample data: