Oakies Blog Aggregator

Redo over multiple weeks

I’ve always wanted">
I’ve always wanted some sort of calendar view of load where I could see patterns across the same days of the week and same hours of the day, and then be able to pick different periods and diff them:

#2970a6;">
[Image: http://dboptimizer.com/wp-content/uploads/2011/02/diff_diff_new.png]

The first step">
The first step of such a display would be selecting the data in such a way as to represent the graphic. A graphic would be harder to produce than a similar, though less powerful, ASCII representation.

So here is">
So here is a quick sketch of looking at redo generation over time. Right now I have to monitor how well redo can be archived, which depends on how fast it is generated. (Typically what I’m more interested in is general performance, which is what the above graphic addressed.)

The output below">
The output below shows the average redo generated per hour (picking the maximum hour in each bucket; the buckets below are 4 hours long) and the maximum generation at the minute level. I may or may not be able to archive redo at the up-to-the-minute rate, but I do want to make sure I can keep up with each hour.

DAY">
DAY    HOUR  MAX(MB)    w1     w2      w3       w4       w5
SAT     0       9               5       4       4       3
SAT     4       12              4       5       4       4
SAT     8       1               0       0       0       0
SAT     12      0               0       0       0       0
SAT     16      1               0       0       0       0
SAT     20      4               1       0       0       1
SUN     0       10              3       3       1       3
SUN     4       13              6       5       3       5
SUN     8       7               6       6       0       6
SUN     12      7               0       3       0       4
SUN     16      1               0       0       1       0
SUN     20      8               3       2       0       3
MON     0       8               3       2       3       2
MON     4       7               2       3       2       1
MON     8       5               1       0       2       0
MON     12      1               0       0       1       0
MON     16      1               0       0       0       0
MON     20      7               2       2       0       2
TUE     0       14              6       5       7       4
TUE     4       7               1       1       1       2
TUE     8       3               0       0       0       0
TUE     12      1       1       0       0       0       0
TUE     16      1       1       0       0       0       0
TUE     20      3       1       1       1       1       1
WED     0       8       3       2       3       2       2
WED     4       7       2       1       3       2       2
WED     8       8       1       0       0       2       3
WED     12      7       1       0       0       1       1
WED     16      1       1       0       0       0       1
WED     20      4       1       1       1       1       1
THU     0       15      7       8       6       6       6
THU     4       8       2       1       1       1       1
THU     8       1       1       0       0       0       1
THU     12      16      1       11      0       0       1
THU     16      1       1       0       0       0       1
THU     20      4       1       1       1       1       1
FRI     0       11      2       2       2       2       2
FRI     4       8       3       1       1       1       1
FRI     8       4       1       0       0       0       0
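Before looking at the SQL, the bucketing it performs can be sketched in plain Python. This is a minimal sketch with made-up sample rows (the real source is DBA_HIST_SYSMETRIC_SUMMARY): 4-hour buckets per day of week, with the week of year folded into five weekly columns.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical samples: (begin_time, average bytes/sec, max bytes/sec)
samples = [
    (datetime(2011, 2, 5, 1, 15), 4 * 1024 * 1024, 9 * 1024 * 1024),
    (datetime(2011, 2, 5, 3, 45), 5 * 1024 * 1024, 7 * 1024 * 1024),
    (datetime(2011, 2, 12, 2, 30), 3 * 1024 * 1024, 12 * 1024 * 1024),
]

buckets = defaultdict(lambda: {"avg": 0.0, "max": 0.0})
for begin_time, average, maxval in samples:
    hh = 4 * (begin_time.hour // 4)          # 4*trunc(to_char(begin_time,'HH24')/4)
    dy = begin_time.strftime("%a").upper()   # to_char(begin_time,'DY')
    ww = int(begin_time.strftime("%W")) % 5  # mod(to_char(begin_time,'WW'),5)
    b = buckets[(dy, hh, ww)]
    b["avg"] = max(b["avg"], average / 1024**2)  # max(average) per bucket, in MB
    b["max"] = max(b["max"], maxval / 1024**2)   # max(maxval) per bucket, in MB
```

The two Feb 5 samples fall into the same Saturday 0-4h bucket, so only the larger average survives, mirroring the max(average) per group in the query.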
">
set heading off
set feedback off
set pagesize 0
with pivot_data as (
   select
          WW pivot_col
        , DY
        , D
        , HH
        , max(maxval) mv
        , max(average) av
   from
      ( select distinct
               begin_time,
               4*trunc(to_char(begin_time,'HH24')/4)     HH,
               to_char(begin_time,'DY')       DY,
               mod(to_char(begin_time,'D'),7)  D,
               mod(to_char(begin_time,'WW'),5)       WW,
               average/(1024*1024) average,
               maxval/(1024*1024) maxval,
               snap_id
        from   DBA_HIST_SYSMETRIC_SUMMARY
        where  dbid=&&DBID and
               metric_name='Redo Generated Per Sec'
           and begin_time > sysdate - 31
       )
   group by HH,D,DY,WW
)
select DY ||'   '||
       HH ||'   '||
       round(max(mv),0)  ||'  '||
       round(max(w1),0)  ||'    '||
       round(max(w2),0)  ||'    '||
       round(max(w3),0)  ||'    '||
       round(max(w4),0)  ||'    '||
       round(max(w5),0)
from (
   select *
   from  pivot_data
   pivot ( avg(av)
           -- WW is mod(week,5), i.e. 0-4, so pivot on 0-4
           -- (matching the decode() version further down), not 1-5
           for pivot_col in ( 0 as w1, 1 as w2, 2 as w3, 3 as w4, 4 as w5 )
         )
)
group by DY,D,HH
order by D,HH
/
set heading on
set feedback on
set pagesize 30


PS the above">
PS: the above PIVOT syntax is for 11g; for 10g, here is the query without PIVOT:

">
set pagesize 100
col DY for A4
col HH for 99
col mx for 99
col w1 for 99
col w2 for 99
col w3 for 99
col w4 for 99
col w5 for 99
with pivot_data as (
   select
          WW
        , DY
        , D
        , HH
        , max(maxval) mv
        , max(average) av
   from
      ( select distinct
               begin_time,
               4*trunc(to_char(begin_time,'HH24')/4)     HH,
               to_char(begin_time,'DY')       DY,
               mod(to_char(begin_time,'D'),7)  D,
               mod(to_char(begin_time,'WW'),5)       WW,
               average/(1024*1024) average,
               maxval/(1024*1024) maxval,
               snap_id
        from   DBA_HIST_SYSMETRIC_SUMMARY
        where
               metric_name='Redo Generated Per Sec'
           and begin_time > sysdate - 31
       )
   group by HH,D,DY,WW
)
select DY, HH,
      round(max( mv ),0) mx,
      round(max( decode( WW, 0 , av, null ) ),0) w1,
      round(max( decode( WW, 1 , av, null ) ),0) w2,
      round(max( decode( WW, 2 , av, null ) ),0) w3,
      round(max( decode( WW, 3 , av, null ) ),0) w4,
      round(max( decode( WW, 4 , av, null ) ),0) w5
   from  pivot_data
group by DY,D,HH
order by D,HH
/
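For illustration, the max(decode(...)) manual pivot trick can be mimicked in plain Python (made-up rows): decode routes each value into its week column, and max collapses each (day, hour) group to a single output row.

```python
# Hypothetical grouped rows: (DY, HH, WW, av_mb)
rows = [
    ("SAT", 0, 0, 5),
    ("SAT", 0, 1, 4),
    ("SAT", 4, 0, 4),
    ("SAT", 4, 1, 5),
]

pivot = {}
for dy, hh, ww, av in rows:
    # one output row per (DY, HH) group, with five week columns w1..w5
    cols = pivot.setdefault((dy, hh), [None] * 5)
    # decode(WW, k, av, null) keeps av only when WW = k; max() then picks
    # the non-null value for that column within the group
    cols[ww] = max(av, cols[ww] or 0)
```

Each key of `pivot` is one line of the report; the list holds the w1..w5 values, with None where a week bucket had no data.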

Just to see">
Just seeing the redo per hour can be done more simply as:

set pagesize 100">
set pagesize 100
col stat_name format a30
col MB format 9,999.99
select
       btime, stat_name,
       round((end_value-beg_value)/(1024*1024),2) MB
from (
select
       e.stat_name,
       to_char(s.BEGIN_INTERVAL_TIME,'DD-MON-YY HH24:MI')  btime,
       e.value end_value,
       Lag (e.value)
              OVER( PARTITION BY e.stat_name ORDER BY s.snap_id) beg_value
from
       DBA_HIST_SYSSTAT e,
       DBA_HIST_SNAPSHOT s
where
       s.snap_id=e.snap_id
   and s.begin_interval_time > sysdate -2
   and e.stat_name = 'redo size'
order by e.stat_name, begin_interval_time
)
where end_value-beg_value > 0
order by btime;
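The LAG analytic in this query turns cumulative counter values into per-interval deltas. Here is the same pattern in plain Python, with made-up snapshot values:

```python
# Hypothetical snapshots: (snap_id, stat_name, cumulative value in bytes)
snaps = [
    (1, "redo size", 100 * 1024 * 1024),
    (2, "redo size", 130 * 1024 * 1024),
    (3, "redo size", 180 * 1024 * 1024),
]

deltas = []
prev = {}  # last value seen per stat_name (the PARTITION BY key)
for snap_id, stat_name, value in sorted(snaps):
    if stat_name in prev:  # LAG is NULL for the first row of each partition
        mb = (value - prev[stat_name]) / (1024 * 1024)
        if mb > 0:  # the outer WHERE end_value - beg_value > 0
            deltas.append((snap_id, stat_name, round(mb, 2)))
    prev[stat_name] = value
```

The first snapshot produces no row (no prior value to diff against), exactly as the SQL discards the row where LAG returns NULL.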

Mark W. Farnham, New Hampshire Republican for US Senate

Watch this space as I ramp up my primary campaign in July. You should already be able to get a sense of who I am from the existing posts on this blog. But I’m sure you will want to know more before you line up to vote for me on September 9th in the primary election.

Mark

WebLogic 12cR3 articles

WebLogic 12cR3 was released towards the end of last week, so this weekend I had an install-fest.

I also did some minor amendments to some existing articles.

From a newbie administrator perspective, like me, there is very little difference between WebLogic 12cR2 and 12cR3, so most of the time it is business as usual.

To coincide with this new release, Packt are doing a $10 promotion for WebLogic eBooks (WebLogic Partner Community EMEA).

Cheers

Tim…


WebLogic 12cR3 articles was first posted on June 30, 2014 at 11:11 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

Why does the Optimiser not respect my qb_name() hint?

I was recently involved in an investigation of a slow-running report on an Exadata system. This was rather interesting: the raw text file with the query was 33 KB in size, and SQL Developer formatted the query to more than 1,000 lines. There were lots of interesting constructs in the query, and the optimiser did its best to make sense of the inline views and various joins.

This almost led to a different post about the importance of understanding query transformation. You will see in a bit what QT has to do with this post. Although access paths and join methods are still very relevant when trying to understand execution plans, you will be amazed by the extent of the work happening during query transformation. I have said so on Twitter already, but in case you missed it: Oracle has published a SQL Tuning Guide with the 12c documentation set. It is a really, really good, well-written piece of documentation, and it contains a lot of information about query transformation in chapter 5.

Query Blocks

One of the lessons I learned when looking at complex SQL statements is to think in terms of query blocks. Let’s start with an example. I created two test tables, T1 and T2, both with rather random data. T1 is bigger, about 33 million rows; T2 has about 1 million rows. Both are based on Jonathan Lewis’s presentation about “creating tests“. This is Oracle 12.1.0.1.3 on Exadata.

Here is the statement used for this blog post, unhinted. To keep things simple there are no indexes on T1 or T2.

select /*+ gather_plan_statistics */
  /* test0013 */
 t1.id,t1.N1
from t1
where exists ( select 1 from t2 where t1.id = t2.id and t2.id between 10 and 20)
and t1.n1 = (select n1 from t1 where id = 1 and rownum = 1)   -- the subquery would return 32 rows without the count stopkey
union all
select  t2.id, t2.n1 from t2
where t2.id between 10 and 12
union all
select t2.id, t2.n1 from t2
where t2.id between 13 and 15;

The statement does not make sense for humans, but it should be logically correct (this is my first post on the optimiser; I am hoping the more knowledgeable readers will point out any inaccuracies, should there be any). The intent is to introduce a few query blocks.

This results in the following execution plan:

SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last +alias'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  5zrwsqnwjzzry, child number 0
-------------------------------------
select /*+ gather_plan_statistics */   /* test0013 */  t1.id,t1.N1 from
t1 where exists ( select 1 from t2 where t1.id = t2.id and t2.id
between 10 and 20) and t1.n1 = (select n1 from t1 where id = 1 and
rownum = 1)  union all select  t2.id, t2.n1 from t2 where t2.id between 10
and 12 union all select t2.id, t2.n1 from t2 where t2.id between 13 and
15

Plan hash value: 319705157

-----------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |      |      1 |        |      6 |00:00:00.07 |     516K|    516K|       |       |          |
|   1 |  UNION-ALL                               |      |      1 |        |      6 |00:00:00.07 |     516K|    516K|       |       |          |
|*  2 |   HASH JOIN SEMI                         |      |      1 |      1 |      0 |00:00:00.07 |     485K|    485K|  1421K|  1421K|  540K (0)|
|*  3 |    TABLE ACCESS STORAGE FULL             | T1   |      1 |      1 |      0 |00:00:00.07 |     485K|    485K|  1025K|  1025K| 3085K (0)|
|*  4 |     COUNT STOPKEY                        |      |      1 |        |      1 |00:00:00.01 |       4 |      2 |       |       |          |
|*  5 |      TABLE ACCESS STORAGE FULL FIRST ROWS| T1   |      1 |      2 |      1 |00:00:00.01 |       4 |      2 |  1025K|  1025K| 1029K (0)|
|*  6 |    TABLE ACCESS STORAGE FULL             | T2   |      0 |     12 |      0 |00:00:00.01 |       0 |      0 |  1025K|  1025K|          |
|*  7 |   TABLE ACCESS STORAGE FULL              | T2   |      1 |      4 |      3 |00:00:00.01 |   15204 |  15200 |  1025K|  1025K| 3085K (0)|
|*  8 |   TABLE ACCESS STORAGE FULL              | T2   |      1 |      4 |      3 |00:00:00.01 |   15204 |  15200 |  1025K|  1025K| 3085K (0)|
-----------------------------------------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SET$1
   2 - SEL$5DA710D3
   3 - SEL$5DA710D3 / T1@SEL$1
   4 - SEL$3
   5 - SEL$3        / T1@SEL$3
   6 - SEL$5DA710D3 / T2@SEL$2
   7 - SEL$4        / T2@SEL$4
   8 - SEL$5        / T2@SEL$5

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("T1"."ID"="T2"."ID")
   3 - storage(("T1"."ID"<=20 AND "T1"."ID">=10 AND "T1"."N1"=))
       filter(("T1"."ID"<=20 AND "T1"."ID">=10 AND "T1"."N1"=))
   4 - filter(ROWNUM=1)
   5 - storage("ID"=1)
       filter("ID"=1)
   6 - storage(("T2"."ID"<=20 AND "T2"."ID">=10))
       filter(("T2"."ID"<=20 AND "T2"."ID">=10))
   7 - storage(("T2"."ID"<=12 AND "T2"."ID">=10))
       filter(("T2"."ID"<=12 AND "T2"."ID">=10))
   8 - storage(("T2"."ID"<=15 AND "T2"."ID">=13))
       filter(("T2"."ID"<=15 AND "T2"."ID">=13))

54 rows selected.

So far so good, nothing really too surprising here but keep in mind this is a simple example. Now if I wanted to pass hints to the optimiser on how to deal with the subqueries I have to refer to the query blocks. Jonathan Lewis has described this in an older but still relevant blog post, and you can find presentations from others on the subject as well. Easy enough, I simply add a /*+ qb_name(name) */ immediately after every select, provide the query block name in the hint (/*+ full(@block_a t1@block_a) */) and I’m done. So let’s try:

select /*+ qb_name(block_a) gather_plan_statistics */
  /* test0013 */
 t1.id,t1.N1
from t1
where exists ( select /*+ qb_name(block_c) */ 1 from t2 where t1.id = t2.id and t2.id between 10 and 20)
and t1.n1 = (select /*+ qb_name(block_d) */ n1 from t1 where id = 1 and rownum = 1)
union all
select /*+ qb_name(block_e) */ t2.id, t2.n1 from t2
where t2.id between 10 and 12
union all
select /*+ qb_name(block_f) */ t2.id, t2.n1 from t2
where t2.id between 13 and 15;

Now surely I should get query blocks block_[ac-f] in the execution plan, right? (When going over the post I noticed that block_b had gone missing from the SQL statement, by the way; for some reason it disappeared, never mind.)

SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last +alias'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  66ytd4g9xmwxb, child number 0
-------------------------------------
select /*+ qb_name(block_a) gather_plan_statistics */   /* test0012 */
t1.id,t1.N1 from t1 where exists ( select /*+ qb_name(block_c) */ 1
from t2 where t1.id = t2.id and t2.id between 10 and 20) and t1.n1 =
(select /*+ qb_name(block_d) */ n1 from t1 where id = 1 and rownum = 1)
union all select /*+ qb_name(block_e) */ t2.id, t2.n1 from t2 where
t2.id between 10 and 12 union all select /*+ qb_name(block_f) */ t2.id,
t2.n1 from t2 where t2.id between 13 and 15

Plan hash value: 319705157

-----------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |      |      1 |        |      6 |00:00:00.04 |     516K|    516K|       |       |          |
|   1 |  UNION-ALL                               |      |      1 |        |      6 |00:00:00.04 |     516K|    516K|       |       |          |
|*  2 |   HASH JOIN SEMI                         |      |      1 |      1 |      0 |00:00:00.04 |     485K|    485K|  1421K|  1421K|  385K (0)|
|*  3 |    TABLE ACCESS STORAGE FULL             | T1   |      1 |      1 |      0 |00:00:00.04 |     485K|    485K|  1025K|  1025K| 3085K (0)|
|*  4 |     COUNT STOPKEY                        |      |      1 |        |      1 |00:00:00.01 |       4 |      2 |       |       |          |
|*  5 |      TABLE ACCESS STORAGE FULL FIRST ROWS| T1   |      1 |      2 |      1 |00:00:00.01 |       4 |      2 |  1025K|  1025K| 1029K (0)|
|*  6 |    TABLE ACCESS STORAGE FULL             | T2   |      0 |     12 |      0 |00:00:00.01 |       0 |      0 |  1025K|  1025K|          |
|*  7 |   TABLE ACCESS STORAGE FULL              | T2   |      1 |      4 |      3 |00:00:00.01 |   15204 |  15200 |  1025K|  1025K| 3085K (0)|
|*  8 |   TABLE ACCESS STORAGE FULL              | T2   |      1 |      4 |      3 |00:00:00.01 |   15204 |  15200 |  1025K|  1025K| 3085K (0)|
-----------------------------------------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SET$1
   2 - SEL$7CA7EA44
   3 - SEL$7CA7EA44 / T1@BLOCK_A
   4 - BLOCK_D
   5 - BLOCK_D      / T1@BLOCK_D
   6 - SEL$7CA7EA44 / T2@BLOCK_C
   7 - BLOCK_E      / T2@BLOCK_E
   8 - BLOCK_F      / T2@BLOCK_F

Surprise, surprise

Umm, where are the query blocks A and C (remember there is no hint for block_b)? I was hoping to get rid of all the system generated names, but no luck. Actually, that’s not quite right, and to understand this you have to look at the 10053 trace, which is mostly harmless. In the trace you find all the query block names, right at the beginning:

Registered qb: BLOCK_A 0x3bc2578 (HINT BLOCK_A)
---------------------
QUERY BLOCK SIGNATURE
---------------------
  signature (): qb_name=BLOCK_A nbfros=1 flg=0
    fro(0): flg=4 objn=61471 hint_alias="T1"@"BLOCK_A"

Registered qb: BLOCK_C 0x3bc5610 (HINT BLOCK_C)
---------------------
QUERY BLOCK SIGNATURE
---------------------
  signature (): qb_name=BLOCK_C nbfros=1 flg=0
    fro(0): flg=4 objn=61485 hint_alias="T2"@"BLOCK_C"

Registered qb: BLOCK_D 0xfedf8560 (HINT BLOCK_D)
---------------------
QUERY BLOCK SIGNATURE
---------------------
  signature (): qb_name=BLOCK_D nbfros=1 flg=0
    fro(0): flg=4 objn=61471 hint_alias="T1"@"BLOCK_D"

Registered qb: BLOCK_E 0xfedf7290 (HINT BLOCK_E)
---------------------
QUERY BLOCK SIGNATURE
---------------------
  signature (): qb_name=BLOCK_E nbfros=1 flg=0
    fro(0): flg=4 objn=61485 hint_alias="T2"@"BLOCK_E"

Registered qb: BLOCK_F 0xfedf5f50 (HINT BLOCK_F)
---------------------
QUERY BLOCK SIGNATURE
---------------------
  signature (): qb_name=BLOCK_F nbfros=1 flg=0
    fro(0): flg=4 objn=61485 hint_alias="T2"@"BLOCK_F"

Registered qb: SET$1 0xfedf4b28 (PARSER)

So why is there no BLOCK_A in the Query Block Name / Object Alias section of the plan? Now it’s time to scroll down (a lot) or to search for the query block from the execution plan (SEL$7CA7EA44 for example).

I found it in the query transformation part of the optimiser trace, more specifically in the cost-based query transformation part. There you can see how the optimiser tries various optimisations to the SQL statement you issued. Not really surprisingly the optimiser works on the EXISTS part trying to unnest it. Here’s the excerpt from the trace:

*****************************
Cost-Based Subquery Unnesting
*****************************
SU: Unnesting query blocks in query block SET$1 (#1) that are valid to unnest.
Subquery removal for query block BLOCK_D (#5)
RSW: Not valid for subquery removal BLOCK_D (#5)
Subquery unchanged.
Subquery Unnesting on query block BLOCK_A (#4)SU: Performing unnesting that does not require costing.
SU: Considering subquery unnest on query block BLOCK_A (#4).
SU:   Checking validity of unnesting subquery BLOCK_C (#6)
SU:   Passed validity checks.
SU:   Transforming EXISTS subquery to a join.
Registered qb: SEL$7CA7EA44 0xfedfa350 (SUBQUERY UNNEST BLOCK_A; BLOCK_C)
---------------------
QUERY BLOCK SIGNATURE
---------------------
  signature (): qb_name=SEL$7CA7EA44 nbfros=2 flg=0
    fro(0): flg=0 objn=61471 hint_alias="T1"@"BLOCK_A"
    fro(1): flg=0 objn=61485 hint_alias="T2"@"BLOCK_C"

Notice the new query block name: Registered qb: SEL$7CA7EA44 0xfedfa350 (SUBQUERY UNNEST BLOCK_A; BLOCK_C). And there you go, the rewritten part of the SQL statement “replaces” the original blocks A and C.
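What the unnesting achieves can be illustrated outside SQL as well. Below is a plain-Python sketch (made-up data) of a correlated EXISTS versus the unnested semi-join form the optimiser chose, with a set playing the role of the HASH JOIN SEMI hash table:

```python
t1 = [(1, "a"), (10, "b"), (15, "c"), (30, "d")]  # (id, n1)
t2 = [(10,), (15,), (20,), (25,)]                 # (id,)

# Correlated EXISTS: conceptually, re-scan t2 for every t1 row
exists_rows = [r for r in t1
               if any(10 <= s[0] <= 20 and s[0] == r[0] for s in t2)]

# After unnesting: build the set of qualifying keys once (the hash table
# of the HASH JOIN SEMI), then probe it once per t1 row
keys = {s[0] for s in t2 if 10 <= s[0] <= 20}
semi_rows = [r for r in t1 if r[0] in keys]
```

Both forms return the same rows; the transformed version simply avoids re-evaluating the subquery per row, which is why the optimiser merged BLOCK_A and BLOCK_C into the new query block.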

Many thanks go to Karen Morton who helped me along the way and had so many good suggestions!

New Version Of XPLAN_ASH Utility

A new version 4.1 of the XPLAN_ASH utility is available for download.

As usual the latest version can be downloaded here.

This version in particular now supports the new 12c "Adaptive" plan feature - previous versions don't cope very well with those unless you add the "ADAPTIVE" formatting option manually.

Here are the notes from the change log:

- GV$SQL_MONITOR and GV$SQL_PLAN_MONITOR can now be customized in the settings as table names, in case you want to use your own custom monitoring repository that copies data from GV$SQL_MONITOR and GV$SQL_PLAN_MONITOR in order to keep/persist monitoring data. The tables need to have at least those columns that XPLAN_ASH uses from the original views

- The "Activity Timeline based on ASH" for RAC Cross Instance Parallel Execution shows an additional line break for the GLOBAL summary

- Fixed various bugs related to some figures when analyzing Cross Instance RAC Parallel Execution

- The new "GLOBAL" aggregation level for Cross Instance RAC Parallel Execution (see version 4.0 change log below) is now also shown in the "Information on Parallel Degree based on ASH" section

- The "Parallel Distribution ASH" column on execution plan line level now can show process information taken from Real-Time SQL Monitoring for those processes that are not found in ASH samples.
This effectively means that with monitoring information available for every plan line every involved process will now be shown along with its ASH sample count and rows produced

So some processes will show up now with a sample count of 0.

The idea behind this is to provide more information about row distribution even for those lines/processes that are not covered by the ASH samples.
Previously the rowcount produced was only shown for those processes covered in ASH samples

The new behaviour is default - if you find the output messy you can return to previous behaviour (show only rowcounts for processes found in ASH samples) by setting the new configuration switch "show_monitor_rowcount" to any other value than the default of "YES"

- The "Real-Time SQL Monitoring" information on execution plan line level now includes the read and write request information ("ReadReq", "WriteReq")

- The I/O figures based on ASH now include the new "DELTA_READ_MEM_BYTES" information that was added in 12c. This applies to the following sections:
  - SQL Statement I/O Summary based on ASH
  - Parallel Worker activity overview based on ASH
  - Activity Timeline based on ASH

The "Read Mem Bytes" seems to correspond to the "logical read bytes from cache" statistics, so any direct path reads are not covered by this value

- Added some more verbose description in the "Note" sections how to handle long lines. XPLAN_ASH now does a SET TRIMSPOOL ON if you want to spool the output to a file

- Whenever the output referred to DFOs this was changed to "DFO TREE", which is the correct term

- The "Parallel Worker activity overview based on ASH" section now shows a blank line between the sections which should make this section more readable

- Adaptive plans are now supported by XPLAN_ASH

Note they don't work well with previous versions: the formatting of the inactive lines breaks, and the overall information can be misleading unless you manually add the "ADAPTIVE" formatting option

If XPLAN_ASH detects an adaptive plan, it will always force the ADAPTIVE formatting option.
This also means that adaptive plans for the time being won't work with SASH, as SASH doesn't collect the OTHER_XML column from GV$SQL_PLAN.
You could manually add that column to SASH_SQLPLANS and to the "sash_pkg.get_sqlplans" procedure - it is a CLOB column, but INSERT / SELECT should work, I think.
The view SASH_PLAN_TABLE also needs to be modified to select the OTHER_XML column instead of a dummy NULL

Although this output is less readable than the "faked" output that shows only the plan operations that are actually in use, it is the only simple way how ASH/MONITOR data can be related to execution plan lines, as these hold the information with the actual plan line, not the one that is made up by DBMS_XPLAN.DISPLAY* based on the DISPLAY_MAP information in the OTHER_XML column

Hence I decided for the time being to use the same approach as 12c Real-Time SQL Monitoring and always show the full/adaptive shape of the plan

Another challenge for XPLAN_ASH with adaptive plans is the possibly changing PLAN_HASH_VALUE during execution.

XPLAN_ASH extracts the PLAN_HASH_VALUE from ASH/MONITOR when trying to get the plan from DBA_HIST_SQL_PLAN.

Hence XPLAN_ASH now needs to take care to extract the most recent PLAN_HASH_VALUE, previously it didn't matter as it wasn't supposed to change during execution. This seems to work based on my tests, but it's something to keep in mind

- The new "gather stats on load" 12c feature implies for INSERT...SELECT statements that the cursor will immediately be invalidated/removed from the Library Cache after (successful) execution. So now such
INSERT...SELECT behave like CTAS which also gets removed immediately. This is a pity as you won't be able to pick up the plan from the Library Cache after the execution completes using XPLAN_ASH (or any other tool using DBMS_XPLAN.DISPLAY*).

Although V$SQL_PLAN_MONITOR might keep the plan for some time after the execution, it can't be used as input to DBMS_XPLAN.DISPLAY*, hence this isn't a viable workaround. More generally this isn't a good thing, as the SQL and plan information might be missing from AWR / STATSPACK reports due to the immediate invalidation/removal.

For the time being the only viable workaround known to me is to prevent the "gather stats on load" feature, either via the parameter "_optimizer_gather_stats_on_load" or the hint "no_gather_optimizer_statistics", or by using pre-12c optimizer feature settings, which implicitly disables it. That is of course not really a good workaround, as the feature itself might be rather desirable

12c nasty with remote query optimization

We have a fairly common query process where we run a MERGE command to compare a remote table to a local copy of it, as a "poor man's" Golden Gate to bring that table up to date on a regular basis.  [Editor's note: writing MERGEs is more complicated but a lot cheaper than Golden Gate :-)]

After an upgrade to 12c, the performance of some of the MERGE’s went very bad…and you can see what happened with the (sanitised) example below:

The join inside the USING clause is the part we’ll be executing on the remote database (mydb). It’s been hinted to run in a particular way.

SQL>   explain plan
  2    into sys.plan_table$
  3    for
  4    merge 
  5    into local_copy.loc_t1 w
  6    using ( select /*+ leading(t1) index(t2 t2_pk) use_nl_with_index(t2)*/
  7               t1.col1
  8              ,t1.col2
  9              ,t1.col3
        ...
 27            from scott.t1@mydb t1,
 28                 scott.t2@mydb t2
 29            where t1.seq = t2.seq
 30            and ...
 31            ) p
 32     on (  p.seq = w.seq
 33    )
 34    when matched then
        ..
 75    when not matched then
        ..

Explained.

Now we can’t see directly from the explain plan how the query will be run on the remote database – we just get a "REMOTE" line in the plan. However, the additional data in the plan reveals a problem:

------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name        | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Inst   |IN-OUT|
------------------------------------------------------------------------------------------------------------
|   0 | MERGE STATEMENT      |             |  1068K|   696M|       |  2492K  (1)| 00:01:38 |        |      |
|   1 |  MERGE               | LOC_T1      |       |       |       |            |          |        |      |
|   2 |   VIEW               |             |       |       |       |            |          |        |      |
|*  3 |    HASH JOIN OUTER   |             |  1068K|   298M|   210M|  2492K  (1)| 00:01:38 |        |      |
|   4 |     REMOTE           |             |  1068K|    90M|       | 50193   (2)| 00:00:02 | PORAI~ | R->S |
|   5 |     TABLE ACCESS FULL| T1          |    38M|  3625M|       | 91205   (2)| 00:00:04 |        |      |
------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("P"."SEQ"="W"."SEQ"(+))

Look very closely at the query Oracle will be throwing at the remote database:

Remote SQL Information (identified by operation id):
----------------------------------------------------

   4 - SELECT /*+ NO_MERGE LEADING ("T1") USE_NL_WITH_INDEX ("A2") INDEX ("A2" "T2_PK") */
       ...
       FROM "SCOTT"."T1" "A1","SCOTT"."T2" "A2" 
       WHERE "A1"."SEQ"="A2"."SEQ"

The two tables have been re-aliased as A1 and A2, but notice that one of the hints did NOT get corrected – the LEADING hint still refers to "T1", an alias that no longer exists in the shipped statement. The lack of a valid leading hint led to a bad plan on the remote database, and performance problems as a result.  For our case, the solution was to explicitly add a NO_MERGE hint into the original statement:

SQL>   explain plan
  2    into sys.plan_table$
  3    for
  4    merge 
  5    into local_copy.loc_t1 w
  6    using ( select /*+ no_merge leading(t1) index(t2 t2_pk) use_nl_with_index(t2)*/
  7               t1.col1
  8              ,t1.col2
  9              ,t1.col3
        ...
 27            from scott.t1@mydb t1,
 28                 scott.t2@mydb t2
 29            where t1.seq = t2.seq
 30            and ...
 31            ) p
 32     on (  p.seq = w.seq
 33    )
 34    when matched then
        ..
 75    when not matched then
        ..

Explained.

which yielded correctly hinted SQL on the remote database:

Remote SQL Information (identified by operation id):
----------------------------------------------------

   4 - SELECT /*+ NO_MERGE LEADING ("A1") USE_NL_WITH_INDEX ("A2") INDEX ("A2" "T2_PK") */
       ...
       FROM "SCOTT"."T1" "A1","SCOTT"."T2" "A2" 
       WHERE "A1"."SEQ"="A2"."SEQ"

This only came to light on 12c – the previous version we were on (11.2.0.3) was unaffected.
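As a footnote on diagnosis: the "Remote SQL Information" sections above come from DBMS_XPLAN. If you want to run the same check on your own distributed statements after an upgrade, something along these lines should work. This is a sketch based on the example above – the column lists in the MERGE clauses are placeholders, not the real (sanitised) statement:

```sql
-- Explain the distributed MERGE, then dump the plan with the 'ALL'
-- format, which includes the Remote SQL Information section.
explain plan for
  merge into local_copy.loc_t1 w
  using ( select /*+ no_merge leading(t1) index(t2 t2_pk) use_nl_with_index(t2) */
                 t1.col1, t1.col2, t1.col3
          from   scott.t1@mydb t1,
                 scott.t2@mydb t2
          where  t1.seq = t2.seq ) p
  on ( p.seq = w.seq )
  when matched then update set w.col1 = p.col1          -- placeholder clause
  when not matched then insert (seq, col1)              -- placeholder clause
                        values (p.seq, p.col1);

-- Check that every hint in the shipped SQL references the re-aliased
-- table names (A1, A2, ...) rather than the original aliases.
select * from table(dbms_xplan.display(format => 'ALL'));
```

If any hint in the "Remote SQL Information" output still names an alias from your original statement, the remote optimizer will silently ignore it – exactly the problem described above.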

Testing…the surgeon’s approach

I played a lot of volleyball in a bygone life :-) and subsequently ruined my knees to the extent that I needed surgery. I got a shock when the surgeon (after a series of x-rays and checks) said to me: "Of course, we’ll only know once we’re in there".

So here’s a body part (a knee) that’s had hundreds of thousands of years to evolve – you’d expect knees to be pretty much the same worldwide – yet an experienced and qualified surgeon put the "we can’t be 100% sure" caveat before chopping me open.

I wish we could apply the same approach to the testing of IT software.  I remember reading the mantra "it’s harder to fix bugs once they’re in production", and that’s certainly true.  However, somewhere along the way, that became the justification for test cycles being incredibly long and mind-bogglingly detailed.  If my Finance software can’t balance the books, then yes, that’s a big drama.  But if the "Save" button is a shade of blue that doesn’t match the design screenshots – is it really worth holding back the production implementation? There are two problems with a mantra of "we will find every possible defect":

1) You can find defects in software almost ad infinitum.  They just get less and less severe, and your testing cycle bottlenecks your entire IT department.

2) You create a false confidence in the testing.  "Hell, if we spent 12 months in testing, then we’ll never find any bugs once we go to Production." 

So I say: why not take the surgeon’s approach?  Sure, we’ll test the product to a reasonable level of satisfaction, but we’ll readily accept that the ultimate test ground is Production.  "We’ll only know when it’s in there."

Upgrade to 12c … credentials

We did a "real" upgrade to 12c this weekend, where "real" means a production system, as opposed to my laptop, a play VM etc etc :-)

It all went relatively smoothly except for one interesting thing that I can’t say for certain was caused by the upgrade, but it would appear to be the case.

After the upgrade, our scheduler external jobs started failing.  A quick look in the alert log revealed:

Sun Jun 29 09:26:11 2014
ORA-12012: error on auto execute of job "FIRE_SCHED_DAILY"
ORA-27496: credential "LOCAL_OS_ACCT" is disabled

So it’s possible (though not proven) that upgrading to 12c might disable credentials. In this particular case, the database went from standalone to a pluggable database.

The remedy was simply to drop and recreate the credential.
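A minimal sketch of that remedy, using the 12c DBMS_CREDENTIAL package. The credential name comes from the alert log entry above; the OS username and password are placeholders. Since ORA-27496 reports the credential as merely disabled, it may also be worth trying DBMS_CREDENTIAL.ENABLE_CREDENTIAL first before resorting to drop/recreate:

```sql
begin
  -- Drop the disabled credential (force => true so the drop succeeds
  -- even though scheduler jobs still reference it) ...
  dbms_credential.drop_credential(
    credential_name => 'LOCAL_OS_ACCT',
    force           => true);

  -- ... then recreate it with the same OS account details.
  dbms_credential.create_credential(
    credential_name => 'LOCAL_OS_ACCT',
    username        => 'batchuser',       -- placeholder OS account
    password        => 'os_password');    -- placeholder password
end;
/
```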

Oracle Meetup

This is just a temporary note to remind London-based Oracle technical folks that the first free evening event arranged by e-DBA will take place this coming Thursday (3rd July), 6:30 – 9:00 pm.

The event is free and includes breaks for beer and food, but you do have to sign up in advance – places are limited.

The theme for the evening is “Upgrades”: covering general principles (Jonathan Lewis), 12c specifics (Jason Arneil), and tools (Dominic Giles and James Anthony).

Even if you’re not interested in upgrades, you might want to attend if you haven’t heard about Swingbench and SLOB (Silly Little Oracle Benchmark).

Details and registration at this url.
