Oakies Blog Aggregator

RAC Attack in OTN Lounge at OOW11

Want to get your hands on a key technology in both the Exadata Database Machine and the newly announced Oracle Database Appliance?

If you’ll be at OpenWorld – in just 11 days – then the IOUG RAC SIG is putting together a special event for you!  (You might have already heard about this on Twitter or from Justin at the OTN Blog.)

Every day from 9am to 1pm, find our table in the OTN Lounge (on Howard Street) and we’ll help you get an 11gR2 RAC cluster database running inside virtual machines on your own Windows-based laptop. You can experiment boldly – if you make a mistake then you won’t have to start over; we can easily “reset” your virtual machines to any point.

The basic idea is to facilitate some community-driven direct mentoring.  We’ll have experienced RAC DBAs on hand to answer any questions you come up with.  We’ve run limited-seat classes (always full) at several past Collaborate conventions including this year in Orlando; but we’re changing the format at OpenWorld so that more people can participate.

It’s not required, but to have the best experience we recommend bringing a USB-powered external hard drive with 100G of free space – you can buy one online for about $40.   Detailed hardware requirements are in the online lab handbook.  But come visit our table and say hello regardless – besides, who knows what other interesting stuff we might be hacking on…  :)

Hope to see you there!

http://racattack.org/e

Post-Script

If you know a DBA who already has experience with RAC, then they can help us meet and mentor newer DBAs.  The last 5 volunteer slots are still open.  (Each volunteer can sign up for 2 max.)  Pass along the word.  Anyone can sign up for a volunteer slot by directly editing the wiki page at http://racattack.org/e and emailing me.

More oradebug

Alex commented on my post about “oradebug”, regarding the select statement on x$ksmfsv – which, amongst other things, holds a list of all fixed variables – joined to x$ksmmem to get the absolute address in the SGA to....[Read More]
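For context, here is a minimal sketch of that kind of query – run as SYS, since the x$ fixed tables aren’t visible to ordinary users, and treat the details as illustrative rather than authoritative:

select f.ksmfsnam, f.ksmfsadr, m.ksmmmval
  from x$ksmfsv f, x$ksmmem m
 where m.addr = f.ksmfsadr            -- match the fixed variable's address to its SGA memory slot
   and f.ksmfsnam like 'kcbnbh%';     -- e.g. peek at one fixed variable's current value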

Posted by Pete On 21/09/11 At 07:26 PM

Oracle Database Appliance — What Does It Mean for You and Your Business?

When I first heard about Oracle Database Appliance and what it does, I got really excited — I saw great potential in this product. When we got our hands dirty and started testing the appliance, I became confident that this product would be a hit. Now it’s finally the time when I can share my [...]

In the abstract

During conference season, it can be a challenge to come up with abstracts that you can feel passionate about, while making sure to craft them in a way that is both attractive to selection committees and the audience you feel like you want to reach. I often find that tri-purpose (satisfying myself, a committee, and the potential audience) to be daunting and occasionally conflicting — leading to abstract paralysis.

Starting today, I’m going to work harder at it. If you’ve been to any of my presentations in the recent past, you know that I like to spend more time on what I think are technical “culture” issues rather than examples of how to implement or interpret the technical features of the latest software release. It’s an area that I’m passionate about, and it’s one that I feel is drastically underrepresented and underserved at most technical conferences.

The biggest challenge I have with those kinds of presentations is making them selectable and attractive — for the topics mostly concern our ability to collaborate and communicate effectively in support of our business and mission objectives. And in that case, we all feel (myself included) like we’re from Lake Wobegon.

To me, nowhere is this more apparent than in the discussions about the Agile movement in software development, testing and production operations. Fellow Oak Table member Martin Widlake has some excellent examples of these issues in his two recent blog posts on the subject:

“Friday Philosophy – Why Doesn’t Agile Work?” and “In Defense of Agile Development (and their Ilk)”

(I especially like “Ilk”)

In a small, forgotten corner of the Internet, I belong to a Yahoo! Group (yes, they still exist!) on Agile Databases, which has as its description:

Discussion about Database management with regards to Extreme Programming practices and principles.

You can visit the group here.

In a recent discussion, there was a post from Scott Ambler that I found myself violently agreeing with:

A question was asked about coordinating and scheduling changes made by database and ETL teams with the development teams in order to reduce confusion and churn during development.

Question / Comment: While one or more code iterations are taking place in parallel, the data design and ETL are working on their iteration of the db schema and data, which will be consumed by later code iterations.

Scott’s Comment / Answer: Better yet, this could occur in a “whole team” manner where data-experienced people are embedded in the actual team.  This can improve productivity by reducing overall overhead.  Unfortunately this can be difficult in many companies due to the organizational complexities resulting from the cultural impedance mismatch between data and development professionals.

(Emphasis mine)

I feel like I’ve had the privilege of working in places where those organizational complexities and cultural impedance mismatches were overcome, and I’d love to talk about what I think made that happen.

Now just to write some compelling abstracts on the subject — ideas welcome!

Using TKPROF for Analyzing 10046 Extended SQL Trace Files – What is Wrong with this Quote?

September 21, 2011 I have written a couple of blog articles on the topic of reading 10046 extended SQL trace files.  Some of those blog articles are listed below: 10046 Extended SQL Trace Interpretation; 10046 Extended SQL Trace Interpretation 2; 10046 Extended SQL Trace Interpretation 3; Brain Teaser: 10046 Extended SQL Trace Shows “EXEC #435118472:c=15600,e=510”, How is that Possible? [...]

oradebug

Laszlo has published his slides from Hacktivity in Budapest last weekend where he shows how the Oracle undocumented oradebug command can be used to exploit the database; covering turning off authentication, turning off audit and more. His slides are here....[Read More]

Posted by Pete On 21/09/11 At 12:54 PM

APAC OTN Tour: First stop Beijing, China…

With OOW fast approaching, the last thing I wanted to do was be left without a passport, but the week after I return from OOW I have the first leg of the APAC OTN tour in Beijing. A little over a week ago I sent off all my documents, including my passport, and I’ve had a nagging feeling in my guts ever since. This morning I received my passport and a single-entry visa for China, and with them a wave of relief.

Applying for visas is very stressful when you have other trips on the calendar. I know some of the other people on the tour have more visas to apply for and less time to do it, so I hope they can cope with the stress better than me. :)

So I’m £84 + £11 postage out of pocket, but travel approval permitting, I should be fine for Beijing. The Auckland and Perth legs are fine because I don’t need a visa for New Zealand and I’ve already been to Australia this year, so my ETA is still valid.

Now take a deep breath and relax…

Cheers

Tim…

PS. For anyone else travelling to OOW, make sure to apply for your ESTA, or check last year’s is still valid… ;)




In Defense of Agile Development (and Their Ilk)

In my previous post I asked the question “why doesn’t Agile work?”. I’m not sure the nuance of the question came over correctly.

I’d just like to highlight that the question I asked was “Why does Agile not work?”. It was not “Why is Agile rubbish?”. I’ve said a few times in the past couple of weeks that I like the ideology of Agile and I am (and have been for years and years) a strong proponent of prototyping, cyclic development, test-driven design and many other things that are part of the Agile or XP methodologies.

That distinction in the title is really important, and one I’d hoped I’d made clear in my post. Looking back at my post though, I think it is clear I failed :-( . I highlighted reasons why I think Agile does not work, and in my head I was thinking “if we avoid these, Agile could work” – but when you write something down it does not matter what is in your head if it does not reach the paper.

I’m actually frustrated that in the last few years I have not seen Agile really succeed, and that this seems to be the normal situation, judging by the response you get when the topic of Agile comes up with fellow technicians, and by the comments on my own blog.

However, on that post about Agile two people whose opinions I deeply respect came back at me to say “Agile does work!”: Cary Millsap, who many of you will have heard of as the “Method R” guy and the person behind Optimal Flexible Architecture, and Mike Cox, who most of you won’t have heard of, but Mike taught me a lot about sensible development back in the ’90s. He’s one of the best developers I have ever had the pleasure of working with and I know he has had great success with Agile and RED. I’m not sure if they read my post as “Agile is Rubbish” or they are, like me, simply frustrated that it can work but so often does not.

So I’ve been thinking about this a lot this weekend, helped by Cary’s paper on the topic that he mentioned in his comment. I’d highly recommend downloading it, as it is an excellent description not only of why Agile can help, but of how, and of some of the pitfalls {I’d started my own post on that, but go read Cary’s}. I should add, you can see Cary present his case for Agile at the UKOUG conference this year.

So where does this bring me to? Well, I think “Is Agile good or bad?” has become almost an “IT religion” topic; people love it or loathe it, and it is based on what they have seen of the methodology in real life. No, that’s wrong, it is based on what they have seen that has been labelled with that methodology in real life. Or worse, it is based on the anecdotal opinion of those around them. The thing is, if you look at what XP is supposed to consist of, or what Agile programming is supposed to consist of, most of us would agree that a great deal of it makes sense in many situations. I’d disagree with some of the details in Cary’s paper but overall I’m in strong agreement.

Sadly, what Agile and XP are supposed to be is not well matched by what you see on the ground in most cases. So even if these methodologies are right for the situation, what has been implemented is not the methodology but probably more a slap-dash process that simply jettisons documentation, design and proper testing. This whole thread sprang from my lamenting the demise of database design, and several of the comments highlighted that the introduction of Agile seemed to equate, at least in part, with the demise of design. As Mike and Cary say, and as I think anyone who has successfully utilized Agile would say, design is an integral part of Agile and XP methodology.

Agile can and does work. But many things can and do work, such as taking regular exercise to keep healthy or regularly maintaining your house to keep it weathertight. Like Agile, both take effort but the overall benefit is greater than the cost. And like Agile, do it wrong and you can make things worse. If your window frames are starting to rot and you just slap a new layer of top-coat on them all you will do is seal in the damp and rot and hide the problem – until the glass falls out. Going for a regular 5 mile run is good for you – but not if you are 10 stone (60KG) overweight and have not run in years. A 5 mile run is also not a good idea if you want to be a long-jumper. Right training (methodology) for the right aim. Also, just like keeping healthy, house maintenance or anything that takes effort but works, proponents tend towards extremism – probably as a reaction to the constant {perceived} pig-headedness of critics or the failure of people to just do what now seems so sensible to them {think reformed smokers}. I’ll have to buy Cary and Mike pints to make up for that jibe now, and promise them it was not aimed at them personally…

Sadly, the reality is, Agile does not work 90% of the time it is tried. So, does that mean Agile is actually rubbish? Or at least, not fit for purpose, because many companies are not able to use it? Companies are there to achieve something and the IT systems are part of achieving that something. If Agile cannot aid that IT department then Agile is the wrong way for that department and company.

*sigh* I’ve gone on and on about this and still not got to my own main point, which is this.

- Can we identify reasons for Agile and XP failing?
- Having identified the reasons, can we fix them in simple ways?
- Can we create some simple guidelines as to when a project should be more Agile and when it should be more up-front design?

I’d love to know people’s opinions on those three points above.

Real-Time SQL Monitoring - Retention (part 2)

As I mentioned in my last post, I've been looking at increasing the SQL Monitoring Retention at my current site using _sqlmon_max_plan but, as well as confirming with Oracle Support that they're happy for us to do so, it would be nice to know what the resulting memory footprint would be to help us come up with a sensible value. Here is how :-

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select * from v$sgastat where name like '%keswx%' ;

POOL         NAME                            BYTES
------------ -------------------------- ----------
shared pool  keswx:plan en                  645696
shared pool  keswxNotify:tabPlans            16384
shared pool  keswx:batch o                 3646864

Those are the values on a system with _sqlmon_max_plan=320.
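For completeness, here’s a hedged sketch of how the parameter might be set – it’s an underscore parameter, so only change it with Oracle Support’s blessing, and I’m assuming it needs to go in the spfile and take effect at the next restart:

alter system set "_sqlmon_max_plan"=320 scope=spfile;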

Thanks to those who helped out with this - they know who they are.

Coming up with an appropriate value is going to involve considering each system's workload, though, because it's not a time-based retention parameter. If people are interested in statements that ran in the last 12 hours, then the value would be different on each system. But at least now we'll be able to see the impact, which looks pretty reasonable to me.
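As a rough worked example, assuming the keswx areas scale more or less linearly with the number of cached plans: the three pools above total around 4.3MB, which at _sqlmon_max_plan=320 works out to roughly 13KB per monitored plan. A quick way to eyeball it on your own system:

select sum(bytes)/320 bytes_per_plan from v$sgastat where name like '%keswx%';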

Updated later - thanks to Nick Affleck for pointing out the
additional 's' I introduced on the parameter name. Fixed now to read
_sqlmon_max_plan

Display_Cursor

Oracle 10 added the awesome procedure dbms_xplan.display_cursor, but unfortunately the documentation of the package is a bit lacking and the options and output can be confusing, so here are a few clarifications.

The procedure display_cursor gives the *real* execution plan instead of an estimated execution plan, which is a huge relief after “explain plan”, “set autotrace on” and “dbms_xplan.display”, which all output the expected execution plan. Of the three, “set autotrace on” is the most peculiar, as we can actually run the query and yet the output is the expected plan, not the actually executed plan. Pheww – what a relief with dbms_xplan.display_cursor.

But, hold on to the seat of your pants, because that’s not all folks. The coolest thing is that display_cursor will output the actual execution statistics for each row source line in the real execution plan. Now that’s cool. But to access these statistics, one has to enable the collection of the statistics.

To use, run a query in SQL*Plus and include the hint

 /*+ gather_plan_statistics */

then,  immediately following the query execution run:

 select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS'));

Then you will see the following columns in the output:
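(The original post showed the output columns as a colour-coded image. As a rough reconstruction – the exact columns vary with the options used and the plan – an ALLSTATS header typically looks something like this:)

| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | O/1/M |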

A few confusing things about the output: some of the columns are estimated statistics from the optimizer; some of the columns are actual statistics from executing the query; and some of the columns are not per row source statistics, but cumulative statistics that include those of the child row sources.

The columns E-Rows and A-Rows can be used for Tuning by Cardinality Feedback (TCF). TCF refers to the process of comparing “E-Rows” and “A-Rows” to determine whether the optimizer is optimizing correctly. The two columns, which represent estimated rows and actual rows, can be compared to highlight how close or how far off the optimizer’s predictions were. If the optimizer’s predictions are far off, it’s an indication, though not a proof, that the query has been inadequately optimized. To get the optimizer to create an optimal execution path, one can look for ways of bringing the optimizer’s estimated statistics in line with the actual statistics. See Tuning by Cardinality Feedback by Wolfgang Breitling.

In V$sql_plan_statistics  these “should” be equal

  • cardinality ~= output_rows/starts
  • starts*cardinality ~= output_rows

And in the output above

  • E-Rows=cardinality
  • A-Rows=output_rows

Thus in order to compare E-Rows to A-Rows, we have to multiply E-Rows by starts. For example, if a line shows Starts=23,171 and E-Rows=1, the optimizer effectively predicted 23,171 rows in total; if A-Rows is 1,657, the estimate is an order of magnitude too high.

Other display_cursor 3rd argument options

There are other options besides specifying “ALLSTATS” in display_cursor. Here is a list of the options and the fields they show:

By default statistics are totals for all executions of the cursor, but “LAST” can be appended to the 3rd argument to get the stats for the last execution. NOTE: the last four columns, on memory and temp space, always show total or last as indicated in the tables below. Here is an example of using “LAST”:

select * from table(dbms_xplan.display_cursor(null,null,'MEMSTATS LAST'));

Notice that for memory consumption, some of the stats are bytes and some are kilo-bytes.

The following tables are in text for searching and copy/paste.

Arguments giving only optimizer estimates

              Rows   Bytes   TempSpc   Cost   Time
  BASIC
  null          *      *        *       *      *
  TYPICAL       *      *        *       *      *
  SERIAL        *      *        *       *      *
  ALL           *      *        *       *      *
  ADVANCED      *      *        *       *      *

Arguments that give actual row source execution statistics (E-Rows is an optimizer estimate)

(the following two tables show the same arguments, but are broken in two to fit the width)

                  E-Rows   Starts   A-Rows   Buffers   Reads   Writes   A-Time
  MEMSTATS           *        *        *        *
  ALLSTATS           *        *        *        *        *        *        *
  RUNSTATS_LAST      *      LAST     LAST     LAST     LAST     LAST     LAST
  RUNSTATS_TOT       *       TOT      TOT      TOT      TOT      TOT      TOT
  IOSTATS            *        *        *        *        *        *        *

 







                  OMem   1Mem   O/1/M   Used-Mem   Used-Tmp   Max-Tmp
  MEMSTATS          *      *     TOT      LAST       LAST       TOT
  ALLSTATS          *      *     TOT      LAST       LAST       TOT
  RUNSTATS_LAST
  RUNSTATS_TOT
  IOSTATS

(OMem and 1Mem are optimizer estimates.)

Arguments for special cases

              TQ   IN-OUT   PQ Distrib   Pstart   Pstop   Instance
  PARTITION                                 X        X
  PARALLEL     X      X          X
  REMOTE                                                      X

List of arguments for non-statistical output

              predicate info   note   Query Block Name   Column projection   outline   bind vars
  BASIC
  null               X           X
  TYPICAL            X           X
  SERIAL             X           X
  ALL                X           X           X                  X
  ADVANCED           X           X           X                  X                X          X
  PREDICATES         X
  NOTE                           X
  ALIAS                                      X
  PROJECTION                                                    X
  OUTLINES                                                                      X
  PEEK_BINDS                                                                               X
 

Enabling extended row source execution statistics gathering

There are three ways to gather row source executions stats:

  1.  /*+ gather_plan_statistics */
  2. sql_trace=true;
  3. Statistics_level=all

The first two are cheaper (they set _rowsource_statistics_sampfreq=128, i.e. sampled collection) but less accurate than the last option, which, though more accurate, can consume significantly more CPU (_rowsource_statistics_sampfreq=1).
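For example, to enable the most accurate (and most CPU-hungry) collection for just the current session – a minimal sketch; the actual overhead depends on the workload:

alter session set statistics_level = all;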

 Manually querying extended row source execution statistics

The data retrieved with display_cursor can be queried directly from

  • v$sql_plan_statistics
  • v$sql_plan_statistics_all

 

desc v$sql_plan_statistics
ADDRESS
HASH_VALUE
SQL_ID
PLAN_HASH_VALUE
CHILD_ADDRESS
CHILD_NUMBER
OPERATION_ID
EXECUTIONS
LAST_STARTS          STARTS
LAST_OUTPUT_ROWS     OUTPUT_ROWS
LAST_CR_BUFFER_GETS  CR_BUFFER_GETS
LAST_CU_BUFFER_GETS  CU_BUFFER_GETS
LAST_DISK_READS      DISK_READS
LAST_DISK_WRITES     DISK_WRITES
LAST_ELAPSED_TIME    ELAPSED_TIME

and

desc v$sql_plan_statistics_all

(columns marked (est) are optimizer estimates; columns marked (act) are actual execution statistics)

 ADDRESS
 HASH_VALUE
 SQL_ID      PLAN_HASH_VALUE CHILD_ADDRESS CHILD_NUMBER
 TIMESTAMP
 OPERATION
 OPTIONS
 OBJECT_NODE  OBJECT# OBJECT_OWNER OBJECT_NAME OBJECT_ALIAS OBJECT_TYPE
 OPTIMIZER
 ID PARENT_ID DEPTH POSITION
 SEARCH_COLUMNS
 COST  CARDINALITY  BYTES                                  (est)
 OTHER_TAG
 PARTITION_START PARTITION_STOP PARTITION_ID
 OTHER
 DISTRIBUTION
 CPU_COST  IO_COST  TEMP_SPACE                             (est)
 ACCESS_PREDICATES
 FILTER_PREDICATES
 PROJECTION
 TIME (seconds, estimated by the optimizer)                (est)
 QBLOCK_NAME
 REMARKS
 OTHER_XML (bind vars and other info)
 EXECUTIONS                                                (act)
 LAST_STARTS           STARTS                              (act)
 LAST_OUTPUT_ROWS      OUTPUT_ROWS                         (act)
 LAST_CR_BUFFER_GETS   CR_BUFFER_GETS                      (act)
 LAST_CU_BUFFER_GETS   CU_BUFFER_GETS                      (act)
 LAST_DISK_READS       DISK_READS                          (act)
 LAST_DISK_WRITES      DISK_WRITES                         (act)
 LAST_ELAPSED_TIME     ELAPSED_TIME (microseconds)         (act)
 POLICY
 ESTIMATED_OPTIMAL_SIZE (KB)                               (est)
 ESTIMATED_ONEPASS_SIZE (KB)                               (est)
 LAST_MEMORY_USED (KB)                                     (act)
 LAST_EXECUTION (whether the last work area execution was optimal, one pass or multi-pass)  (act)
 LAST_DEGREE
 TOTAL_EXECUTIONS (number of times the work area was active)  (act)
 OPTIMAL_EXECUTIONS                                        (act)
 ONEPASS_EXECUTIONS                                        (act)
 MULTIPASSES_EXECUTIONS                                    (act)
 ACTIVE_TIME (centiseconds the work area was active)       (act)
 MAX_TEMPSEG_SIZE (bytes)                                  (act)
 LAST_TEMPSEG_SIZE (bytes)                                 (act)

so you can write a query to get the data directly such as

col operation for a45
SELECT
      LPAD(' ',depth)||P.OPERATION||'_'||P.OPTIONS||' '||P.OBJECT_NAME  operation
    , last_starts * cardinality e_rows_x_starts
    , last_output_rows  a_rows
    , LAST_CR_BUFFER_GETS bgets
    , LAST_DISK_READS     pread
    , LAST_DISK_WRITES    pwrites
    , LAST_ELAPSED_TIME   elapsed
    , LAST_MEMORY_USED
    , LAST_TEMPSEG_SIZE
    , LAST_EXECUTION
  FROM
       V$SQL_PLAN_statistics_all P
  WHERE
        sql_id='&sql_id'
  order by child_number,id
/

which makes the comparison between estimate and actual easier, because I can include starts*cardinality to get e_rows_x_starts, which I can compare directly to a_rows, i.e. output_rows.

Putting it all together

We can take the TCF ideas further by putting them together in a query such that the output is easier to read:

col cn format 99
col ratio format 99
col ratio1 format A6
--set pagesize 1000
set linesize 140
break on sql_id on cn
col lio_rw format 999
col "operation" format a60
col a_rows for 999,999,999
col e_rows for 999,999,999
col elapsed for 999,999,999

Def v_sql_id=&SQL_ID

select
       -- sql_id,
       --hv,
       childn                                         cn,
       --ptime, stime,
       case when stime - nvl(ptime ,0) > 0 then
          stime - nvl(ptime ,0)
        else 0 end as elapsed,
       nvl(trunc((lio-nvl(plio,0))/nullif(a_rows,0)),0) lio_ratio,
       --id,
       --parent_id,
       --starts,
       --nvl(ratio,0)                                    TCF_ratio,
       ' '||case when ratio > 0 then
                rpad('-',ratio,'-')
             else
               rpad('+',ratio*-1 ,'+')
       end as                                           TCF_GRAPH,
       starts*cardinality                              e_rows,
                                                       a_rows,
       --nvl(lio,0) lio, nvl(plio,0)                      parent_lio,
                                                         "operation"
from (
  SELECT
      stats.LAST_ELAPSED_TIME                             stime,
      p.elapsed                                  ptime,
      stats.sql_id                                        sql_id
    , stats.HASH_VALUE                                    hv
    , stats.CHILD_NUMBER                                  childn
    , to_char(stats.id,'990')
      ||decode(stats.access_predicates,null,null,'A')
      ||decode(stats.filter_predicates,null,null,'F')     id
    , stats.parent_id
    , stats.CARDINALITY                                    cardinality
    , LPAD(' ',depth)||stats.OPERATION||' '||
      stats.OPTIONS||' '||
      stats.OBJECT_NAME||
      DECODE(stats.PARTITION_START,NULL,' ',':')||
      TRANSLATE(stats.PARTITION_START,'(NRUMBE','(NR')||
      DECODE(stats.PARTITION_STOP,NULL,' ','-')||
      TRANSLATE(stats.PARTITION_STOP,'(NRUMBE','(NR')      "operation",
      stats.last_starts                                     starts,
      stats.last_output_rows                                a_rows,
      (stats.last_cu_buffer_gets+stats.last_cr_buffer_gets) lio,
      p.lio                                                 plio,
      trunc(log(10,nullif
         (stats.last_starts*stats.cardinality/
          nullif(stats.last_output_rows,0),0)))             ratio
  FROM
       v$sql_plan_statistics_all stats
       , (select sum(last_cu_buffer_gets + last_cr_buffer_gets) lio,
                 sum(LAST_ELAPSED_TIME) elapsed,
                 child_number,
                 parent_id,
                 sql_id
         from v$sql_plan_statistics_all
         group by child_number,sql_id, parent_id) p
  WHERE
    stats.sql_id='&v_sql_id'  and
    p.sql_id(+) = stats.sql_id and
    p.child_number(+) = stats.child_number and
    p.parent_id(+)=stats.id
)
order by sql_id, childn , id
/

which gives output like

    Enter value for sql_id: g2w9n4gksyys6
    old  59:     stats.sql_id='&v_sql_id'  and
    new  59:     stats.sql_id='g2w9n4gksyys6'  and

     CN   ELAPSED    LIO_RATIO TCF_GRAPH   E_ROWS       A_ROWS operation
    --- ------------ --------- ------ ------------ ------------ ------------------------------------------------------------
      0            0         0                                1 SELECT STATEMENT
           5,720,456         0                   1            1  HASH GROUP BY
              29,711         0                            1,909   NESTED LOOPS
                    0         0  +++              1        1,909    NESTED LOOPS
           1,969,304         0  +++              1        1,909     NESTED LOOPS
                   0         0  +++              1        2,027      NESTED LOOPS
           7,939,649         0  +++              1        1,656       NESTED LOOPS
             716,054         0  +++              1        1,657        NESTED LOOPS
             270,201         0  ++              39       23,171         HASH JOIN
                  23         0                   5            1          JOIN FILTER CREATE :BF0000
                  31         1                   5            1           TABLE ACCESS BY INDEX ROWID PS_PAY_CALENDAR
                  14         2                   5            1            INDEX RANGE SCAN PS0PAY_CALENDAR
             141,467         0              18,503       23,171          VIEW  VW_SQ_1
           3,032,120         0              18,503       23,171           HASH GROUP BY
             152,564         0             163,420       33,020            JOIN FILTER USE :BF0000
             407,746         0             163,420       33,020             MERGE JOIN
                  55         0                   5            1              SORT JOIN
                  12         2                   5            1               INDEX RANGE SCAN PS0PAY_CALENDAR
              79,435         0              40,000       33,020              SORT JOIN
             119,852         0              40,000       40,000               INDEX FAST FULL SCAN WB_JOB
            2,959,031        13  -           23,171        1,657         TABLE ACCESS BY INDEX ROWID WB_JOB
             944,887         1              23,171       23,174          INDEX RANGE SCAN WB_JOB
             102,650         0               1,657        1,656        VIEW PUSHED PREDICATE  VW_SQ_2
              73,769         0               1,657        1,657         SORT AGGREGATE
              25,617         0               1,657        1,657          FIRST ROW
             225,497         1               1,657        1,657           INDEX RANGE SCAN (MIN/MAX) WB_JOB
             357,872         0               3,312        2,027       TABLE ACCESS BY INDEX ROWID WB_RETROPAY_EARNS
           3,655,774         1               3,312        2,027        INDEX RANGE SCAN WB_RETROPAY_EARNS_IDX1
             199,884         0               2,027        1,909      TABLE ACCESS BY INDEX ROWID PS_RETROPAY_RQST
             317,793         1               2,027        1,909       INDEX RANGE SCAN PS_RETROPAY_RQST
              71,534         0               1,909        1,909     INDEX RANGE SCAN PS#RETROPAYPGM_TBL
              18,396         0               1,909        1,909    TABLE ACCESS BY INDEX ROWID PS_RETROPAYPGM_TBL

The 3 important parts of this query are

  • Elapsed is per row source, not cumulative of its children
  • LIO_RATIO
  • TCF_GRAPH

The elapsed time format in the display_cursor output has a huge drawback: each line’s elapsed time includes the elapsed time of all its children, which makes an execution plan difficult to scan to see where the time is being spent. In the above output the elapsed time represents the elapsed time of each row source line on its own.

LIO_RATIO shows the number of buffers accessed per row returned. Ideally 1 buffer or less is accessed per row returned. When the number of buffers per row becomes large, it’s a good indication that there is a more optimal method to get the rows.  The I/O stats include the stats of the child row sources, so the query has to get the I/O from the children and subtract it from the parent’s, making the query a bit more complex.

TCF_GRAPH graphically shows the ratio of estimated rows to actual rows. The estimated value used is cardinality*starts, not just cardinality, so it can be compared directly to actual rows, and the difference in orders of magnitude is shown. Each ‘+’ represents an order of magnitude more actual rows than estimated, and each ‘-’ an order of magnitude fewer. The more orders of magnitude either way, the further off the optimizer’s calculations are, and thus the more likely the plan is suboptimal.

In the above output there are 5 lines where the optimizer expected only 1 row and the actual results were over 1,000, i.e. 3 orders of magnitude difference. These are the five lines with “+++”.
There is one line with “-”, where the actual was an order of magnitude smaller. On that same line we see it’s one of the slower lines (almost 3 seconds) and that there were 13 LIOs per row returned, which is a sign of inefficiency.