I'll be presenting at the SIOUG 2009 conference that will take place in Portoroz, from 20th to 23rd September 2009.
The presentation is titled "CBO fundamentals: Understanding the different modes of System Statistics" and will be based on the series already available here on my blog.
So if you're interested in the details of the cost calculations performed by recent releases of the Cost Based Optimizer and would like to hear about this in a different style, this presentation might be something for you.
If you happen to be at the conference I'm looking forward to meeting you there.
Thanks to OakTable fellow Joze Senegacnik for inviting me to the conference.
I just posted "Fundamentals of Software Performance Quick Reference Card" at the Method R company website:
This two-page quick reference card written by Cary Millsap sums up computer software performance the Method R way. The first page lists definitions of the terms you need to know: efficiency, knee, load, response time, and so on. The second page lists ten principles that are vital to your ability to think clearly about software performance. This document contains meaningful insight in a format that's compact enough to hang on your wall.
It's free, and there's no sign-up required. I hope you will enjoy it.
I have been working on a system where many Application Engine programs are running throughout the day, and are frequently collecting Optimizer statistics with the %UpdateStats macro on many working storage tables. Concurrent calls to dbms_stats are typical.
There are two new behaviours in the Oracle 10g RDBMS that, in extreme cases and in combination with a system that calls dbms_stats very frequently, can create a significant performance overhead.
From Oracle 10g, histograms may be collected automatically by default. That means that rows are concurrently deleted from and inserted into histgrm$ and hist_head$, leading to contention and additional consistent read overhead on those dictionary tables.
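One way to mitigate the histogram side of this is to tell dbms_stats explicitly not to build histograms. This is only a sketch: the schema and table names are made up, and how %UpdateStats ultimately invokes dbms_stats depends on your DDL model configuration.

EXECUTE dbms_stats.gather_table_stats(ownname=>'SYSADM', tabname=>'PS_MY_WORK_TBL', method_opt=>'FOR ALL COLUMNS SIZE 1');

With method_opt=>'FOR ALL COLUMNS SIZE 1' basic column statistics are still collected, but no histograms are created.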
The second new behaviour is that, also from Oracle 10g, dbms_stats retains a history of previous optimizer statistics, by default for 31 days:

SELECT dbms_stats.get_stats_history_retention FROM dual;

GET_STATS_HISTORY_RETENTION
---------------------------
                         31
Statistics history was designed to work in conjunction with schema-wide statistics jobs that only refresh stale statistics. There is an option on gather_schema_stats to collect statistics only on tables whose current statistics are stale. However, there is no such option on gather_table_stats: if you have decided to call this procedure for a particular table, then it is assumed you know you need to refresh the statistics. By calling dbms_stats from a batch program you can therefore end up calling it much more frequently than is really necessary.
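For comparison, this is roughly what a stale-only schema-wide gather looks like (a sketch only; the schema name SYSADM is just an example):

EXECUTE dbms_stats.gather_schema_stats(ownname=>'SYSADM', options=>'GATHER STALE');

This regathers statistics only on tables whose monitoring information indicates that a significant proportion of the rows (by default more than 10%) have changed, whereas gather_table_stats always regathers.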
Recommendations
If you call dbms_stats very frequently on working storage tables, consider disabling statistics history retention altogether:

EXECUTE dbms_stats.alter_stats_history_retention(retention=>0);
If you have allowed the statistics history to grow before you disable history retention, then you might like to read John Hallas' posting on Purging statistics from the SYSAUX tablespace.
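If you want an idea of how much space the statistics history is currently occupying before purging it, something along these lines should help (SM/OPTSTAT is the optimizer statistics history occupant in SYSAUX), and dbms_stats.purge_stats can purge history older than a given timestamp:

SELECT occupant_name, space_usage_kbytes FROM v$sysaux_occupants WHERE occupant_name = 'SM/OPTSTAT';

EXECUTE dbms_stats.purge_stats(SYSDATE);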
Sometimes when I'm asked to check a particular SQL statement in a production-like environment, at first I only get rather limited access, e.g. a read-only user that has the required privileges to run the SQL statement but doesn't own the objects.
If I'm then supposed to analyse a rather complex SQL statement with dozens of table and/or view references, I like to use the ALTER SESSION SET CURRENT_SCHEMA command so that any unqualified object references resolve to the schema set via CURRENT_SCHEMA, rather than editing the SQL and qualifying all the object references.
This basically works just fine, but I have recently come across an interesting anomaly in 10.2.0.4: if the optimizer uses dynamic sampling as part of the parsing, it doesn't do so any more once CURRENT_SCHEMA has been altered to a non-default schema.
The following simple test case demonstrates the issue in 10.2.0.4 Win32:
SQL>
SQL> create user dynsamp_test identified by dynsamp_test;
User created.
SQL>
SQL> grant create session, alter session to dynsamp_test;
Grant succeeded.
SQL>
SQL> drop table dynsamp_test_table purge;
drop table dynsamp_test_table purge
*
ERROR at line 1:
ORA-00942: table or view does not exist
SQL>
SQL> create table dynsamp_test_table
2 as
3 select
4 *
5 from
6 all_objects
7 where
8 rownum <= 1000;
Table created.
SQL>
SQL> grant select on dynsamp_test_table to dynsamp_test;
Grant succeeded.
SQL>
SQL> -- this is the plan we get as schema owner
SQL> explain plan for
2 select /*+ dynamic_sampling(t 4) */
3 *
4 from
5 dynsamp_test_table
6 where
7 owner = 'NOT_A_VALID_OWNER_SINCE_TOO_LONG';
Explained.
SQL>
SQL> -- It obviously uses dynamic sampling as instructed
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 1573551846
----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 128 | 5 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| DYNSAMP_TEST_TABLE | 1 | 128 | 5 (0)| 00:00:01 |
----------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("OWNER"='NOT_A_VALID_OWNER_SINCE_TOO_LONG')
Note
-----
- dynamic sampling used for this statement
17 rows selected.
SQL>
SQL> rollback;
Rollback complete.
SQL>
SQL> -- We get the same from different schema with qualified reference (or valid synonym)
SQL> connect dynsamp_test/dynsamp_test@orcl
Connected.
SQL>
SQL> alter session set nls_language = 'AMERICAN';
Session altered.
SQL>
SQL> explain plan for
2 select /*+ dynamic_sampling(t 4) */
3 *
4 from
5 cbo_test.dynsamp_test_table
6 where
7 owner = 'NOT_A_VALID_OWNER_SINCE_TOO_LONG';
Explained.
SQL>
SQL> -- Still uses dynamic sampling
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 1573551846
----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 128 | 5 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| DYNSAMP_TEST_TABLE | 1 | 128 | 5 (0)| 00:00:01 |
----------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("OWNER"='NOT_A_VALID_OWNER_SINCE_TOO_LONG')
Note
-----
- dynamic sampling used for this statement
17 rows selected.
SQL>
SQL> rollback;
Rollback complete.
SQL>
SQL> -- Now switch the current_schema
SQL> alter session set current_schema = cbo_test;
Session altered.
SQL>
SQL> explain plan for
2 select /*+ dynamic_sampling(t 4) */
3 *
4 from
5 dynsamp_test_table
6 where
7 owner = 'NOT_A_VALID_OWNER_SINCE_TOO_LONG';
Explained.
SQL>
SQL> -- No more dynamic sampling!
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 1573551846
----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 10 | 1280 | 5 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| DYNSAMP_TEST_TABLE | 10 | 1280 | 5 (0)| 00:00:01 |
----------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("OWNER"='NOT_A_VALID_OWNER_SINCE_TOO_LONG')
13 rows selected.
SQL>
SQL> rollback;
Rollback complete.
SQL>
SQL> -- Check the 10053 trace file
SQL> -- It will show that the dynamic sampling query fails
SQL> -- and therefore no dynamic sampling will be performed
SQL> -- presumably because the current_schema is not obeyed
SQL> -- by the recursively executed query
SQL> alter session set tracefile_identifier = 'dynamic_sampling_current_schema_10053';
Session altered.
SQL>
SQL> alter session set events '10053 trace name context forever, level 1';
Session altered.
SQL>
SQL> explain plan for
2 select /*+ dynamic_sampling(t 4) */
3 *
4 from
5 dynsamp_test_table
6 where
7 owner = 'NOT_A_VALID_OWNER_SINCE_TOO_LONG';
Explained.
SQL>
SQL> alter session set events '10053 trace name context off';
Session altered.
SQL> -- Check the 10046 trace file
SQL> -- which confirms an ORA-00942 while parsing the dynamic sampling query
SQL> alter session set tracefile_identifier = 'dynamic_sampling_current_schema_10046';
Session altered.
SQL>
SQL> alter session set sql_trace = true;
Session altered.
SQL>
SQL> explain plan for
2 select /*+ dynamic_sampling(t 4) */
3 *
4 from
5 dynsamp_test_table
6 where
7 owner = 'NOT_A_VALID_OWNER_SINCE_TOO_LONG';
Explained.
SQL>
SQL> alter session set sql_trace = false;
Session altered.
SQL>
SQL> rollback;
Rollback complete.
SQL>
SQL> spool off
So as you can see, the simple query uses dynamic sampling as instructed (and in 10.2 would use dynamic sampling by default anyway since the table has no statistics), but when using the CURRENT_SCHEMA trick together with an unqualified reference, dynamic sampling is not performed and the estimate is based on hardcoded defaults.
Checking the 10053 optimizer trace file confirms that the dynamic sampling is attempted, but the recursive query fails, and the 10046 SQL trace shows that an "ORA-00942 table or view does not exist" error is encountered while parsing the recursive dynamic sampling query.
So be careful when using this (rather unusual, I have to admit) setup: the execution plans might be different if you're using a non-default CURRENT_SCHEMA and dynamic sampling is involved.
The issue seems to be fixed in 11.1.0.7 (dynamic sampling is performed as expected when using non-default CURRENT_SCHEMA), however I couldn't identify a corresponding bug on Metalink. If anyone comes across a bug number or document ID this would be helpful.
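As a workaround in such a restricted environment, fully qualifying the reference or creating a private synonym avoids the issue, since as shown above the qualified reference still triggers dynamic sampling. This is only a sketch and assumes the read-only account is allowed to create synonyms, which it may well not be:

CREATE SYNONYM dynsamp_test_table FOR cbo_test.dynsamp_test_table;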
After doing my seminar in Spain last month, one of the attendees sent me a secret note about what the CBO would look like in Oracle 12g. Apparently it’s re-engineered from the ground up and completely different from anything we’ve seen before. It goes beyond being automatic, intelligent and auto-tuning: it’s actually edible too!
The New CBO Thanks to John Ospino for sending me this insider-info ;)
If you haven’t read them yet, here are the previous articles in the Oracle memory troubleshooting series: Part 1, Part 2
The HEAPDUMP dump event in Oracle allows you to dump the contents of various heaps to a trace file. By adding the “level” parameter to this dump event, you can specify which heaps to dump.
Julian Dyke has documented most of the levels here.
There are two little-known but useful level bits for heapdumps – 0x10000000 and 0x20000000. These allow Oracle to dump the five biggest subheaps in a heap, recursively.
When bit 0x10000000 is enabled, Oracle dumps the top five subheaps inside any heap it is instructed to dump.
When bit 0x20000000 is enabled, Oracle dumps the top five subheaps as above, but in addition recursively dumps the top five subheaps inside any subheap dumped that way. So instead of dumping one level of subheaps, Oracle recursively dumps two levels if such sub-subheaps exist.
This reduces the amount of manual work – as Oracle can drill down towards the root cause automatically and dump the relevant information.
Here’s a little test case:
I set the serverout buffer size to unlimited (Oracle 10.2+) so that Oracle would buffer an unlimited amount of dbms_output data in the UGA (this is also a “good” way of using up all the memory in your server, so use “unlimited” with care).
SQL> set serverout on size unlimited
SQL>
SQL> exec for i in 1..1000000 loop dbms_output.put_line(lpad('x',10,'x')); end loop;
Without the recursive top subheap dumping we would see output like this (after processing the tracefile with heapdump_analyzer):
This is the usual way of dumping a target process's private heaps:
SQL> oradebug setorapid 35
Oracle pid: 35, Unix process pid: 26457, image: oracle@linux03
SQL> oradebug dump heapdump 1      <-- level 1 dumps all top level private heaps (top-level PGA, UGA and call heap)
Statement processed.
SQL>
And the output is:
[oracle@linux03 trace]$ heapdump_analyzer LIN11G_ora_26486.trc

-- Heapdump Analyzer v1.00 by Tanel Poder ( )

 Total_size #Chunks  Chunk_size,       From_heap,        Chunk_type,   Alloc_reason
 ---------- ------- ------------ ----------------- ----------------- -----------------
   55065316     841      65476 ,      top uga heap,          freeable,  session heap     <-- session heap is allocated from top uga heap
   41218392    2517      16376 ,      session heap,          freeable,  koh-kghu sessi   <-- koh-kghu session heap is allocated from session heap
   13650208     836      16328 ,      session heap,              free,
      65520       1      65520 ,          pga heap,              free,
      65476       1      65476 ,      top uga heap,          recreate,  session heap
      57736      14       4124 ,      session heap,          freeable,  kxsFrame4kPage
...
We would see that most of the UGA memory (roughly 41MB of 55MB) is allocated with the allocation reason “koh-kghu sessi”. This is the part of the session heap from which allocations for various objects like PL/SQL variables, records and collections are made. So when we want to drill down and see inside that heap, we could use the HEAPDUMP_ADDR dump with that heap descriptor's address as the parameter.
However, with the extra bits mentioned above, Oracle can automatically dump the contents of the biggest subheaps inside the heaps we asked it to dump:
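For example, combining level 1 with the 0x20000000 bit gives level 0x20000001. This is only a sketch based on the bits described above; it assumes your oradebug version accepts the hexadecimal notation, otherwise the decimal equivalent 536870913 can be used:

SQL> oradebug setorapid 35
SQL> oradebug dump heapdump 0x20000001      <-- level 1 (top-level private heaps) plus recursive top-5 subheap dumping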