
In Defense of Agile Development (and Their Ilk)

In my previous post I asked the question “why doesn’t Agile work?”. I’m not sure the nuance of the question came over correctly.

I’d just like to highlight that the question I asked was “Why does Agile not work?”. It was not “Why is Agile rubbish?”. I’ve said a few times in the past couple of weeks that I like the ideology of Agile and I am (and have been for years and years) a strong proponent of prototyping, cyclic development, test-driven design and many other things that are part of the Agile or XP methodologies.

That distinction in the title is really important, and one I’d hoped I’d made clear in my post. Looking back at it, though, I think it is clear I failed :-( . I highlighted reasons why I think Agile does not work, and in my head I was thinking “if we avoid these, Agile could work” – but when you write something down, it does not matter what is in your head if it does not reach the paper.

I’m actually frustrated that in the last few years I have not seen Agile really succeed, and that this seems to be the normal situation, going on the responses you get when the topic of Agile comes up with fellow technicians, and on the comments on my own blog.

However, on that post about Agile, two people whose opinion I deeply respect came back at me to say “Agile does work!”. One was Cary Millsap, who many of you will know as the “Method R” guy and the person behind the Optimal Flexible Architecture. The other was Mike Cox, who most of you won’t have heard of, but Mike taught me a lot about sensible development back in the 90s. He’s one of the best developers I have ever had the pleasure of working with and I know he has had great success with Agile and RED. I’m not sure if they read my post as “Agile is Rubbish” or they are, like me, simply frustrated that it can work but so often does not.

So I’ve been thinking about this a lot this weekend, helped by Cary’s paper on the topic, which he mentioned in his comment. I’d highly recommend downloading it, as it is an excellent description not only of why Agile can help, but of how, and of some of the pitfalls {I’d started my own post on that, but go read Cary’s}. I should add, you can see Cary present his case for Agile at the UKOUG conference this year.

So where does this bring me to? Well, I think “Is Agile good or bad?” has become almost an “IT religion” topic; people love it or loathe it, and that is based on what they have seen of the methodology in real life. No, that’s wrong, it is based on what they have seen that has been labelled with that methodology in real life. Or worse, it is based on the anecdotal opinions of those around them. The thing is, if you look at what XP or Agile programming is supposed to consist of, most of us would agree that a great deal of it makes sense in many situations. I’d disagree with some of the details in Cary’s paper, but overall I’m in strong agreement. Sadly, what Agile and XP are supposed to be is not well matched by what you see on the ground in most cases. So even if these methodologies are right for the situation, what has been implemented is often not the methodology but more a slap-dash process that simply jettisons documentation, design and proper testing. This whole thread sprang from my lamenting the demise of database design, and several of the comments highlighted that the introduction of Agile seemed to equate, at least in part, with the demise of design. As Mike and Cary say, and as I think anyone who has successfully utilized Agile would say, design is an integral part of the Agile and XP methodologies.

Agile can and does work. But many things can and do work, such as taking regular exercise to keep healthy or regularly maintaining your house to keep it weathertight. Like Agile, both take effort, but the overall benefit is greater than the cost. And like Agile, do it wrong and you can make things worse. If your window frames are starting to rot and you just slap a new layer of top-coat on them, all you will do is seal in the damp and rot and hide the problem – until the glass falls out. Going for a regular 5-mile run is good for you – but not if you are 10 stone (about 64kg) overweight and have not run in years. A 5-mile run is also not a good idea if you want to be a long-jumper. The right training (methodology) for the right aim. Also, just like keeping healthy, house maintenance or anything else that takes effort but works, proponents tend towards extremism – probably as a reaction to the constant {perceived} pig-headedness of critics or the failure of people to just do what now seems so sensible to them {think reformed smokers}. I’ll have to buy Cary and Mike pints to make up for that jibe now, and promise them it was not aimed at them personally…

Sadly, the reality is that Agile does not work 90% of the time it is tried. So, does that mean Agile is actually rubbish? Or at least not fit for purpose, because many companies are not able to use it? Companies exist to achieve something, and their IT systems are part of achieving that something. If Agile cannot aid an IT department, then Agile is the wrong way for that department and company.

*sigh* I’ve gone on and on about this and still not got to my own main point, which is this:

- Can we identify the reasons for Agile and XP failing?
- Having identified the reasons, can we fix them in simple ways?
- Can we create some simple guidelines as to when a project should be more Agile and when it should be more up-front design?

I’d love to know people’s opinions on those three points above.

Real-Time SQL Monitoring - Retention (part 2)

As I mentioned in my last post, I've been looking at increasing the SQL Monitoring retention at my current site using _sqlmon_max_plan but, as well as confirming with Oracle Support that they're happy for us to do so, it would be nice to know what the resulting memory footprint would be, to help us come up with a sensible value. Here is how:

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select * from v$sgastat where name like '%keswx%' ;

POOL         NAME                            BYTES
------------ -------------------------- ----------
shared pool  keswx:plan en                  645696
shared pool  keswxNotify:tabPlans            16384
shared pool  keswx:batch o                 3646864

Those are the values on a system with _sqlmon_max_plan=320.

Thanks to those who helped out with this - they know who they are.

Coming up with an appropriate value is going to involve considering each system's workload, though, because it's not a time-based retention parameter. If people are interested in statements that ran in the last 12 hours, then the value would be different on each system. But at least now we'll be able to see the impact, which looks pretty reasonable to me.
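As a back-of-envelope check: those three pool entries total 4,308,944 bytes, about 4.1MB, for 320 plans, i.e. roughly 13KB per plan. If we take those entries as the whole SQL Monitoring plan footprint and assume it scales linearly (both of which are my assumptions, not something verified in this post), doubling the limit should only cost a few more MB of shared pool. Actually setting the parameter would look something like this sketch (only with Oracle Support's approval, of course):

-- Hypothetical example: raise the SQL Monitoring plan retention limit.
-- Underscore parameters must be double-quoted. If the parameter turns out
-- not to be dynamically modifiable, use SCOPE=SPFILE and restart instead.
alter system set "_sqlmon_max_plan" = 640 scope=both;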

Updated later - thanks to Nick Affleck for pointing out the additional 's' I introduced on the parameter name. Fixed now to read _sqlmon_max_plan.

Display_Cursor

Oracle 10 added the awesome procedure dbms_xplan.display_cursor, but unfortunately the documentation of the package is a bit lacking and the options and output can be confusing, so here are a few clarifications.

The procedure display_cursor gives the *real* execution plan instead of an estimated execution plan, which is a huge relief after “explain plan”, “set autotrace on” and “dbms_xplan.display”, which all output the expected execution plan. Of the three, “set autotrace on” is the most peculiar, as we actually run the query and yet the output is the expected plan, not the actually executed plan. Phew – what a relief with dbms_xplan.display_cursor.

But, hold on to the seat of your pants, because that’s not all, folks. The coolest thing is that display_cursor will output the actual execution statistics for each row source line in the real execution plan. Now that’s cool. But to access these statistics, one has to enable their collection.

To use it, run a query in SQL*Plus and include the hint

 /*+ gather_plan_statistics */

then, immediately following the query execution, run:

 select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS'));

Then you will see columns in the output such as Starts, E-Rows, A-Rows, A-Time, Buffers, Reads, OMem, 1Mem and Used-Mem (the original post showed these color-coded in an image).

A few things about the output are confusing: some of the columns are estimated statistics from the optimizer, some are actual statistics from executing the query, and some are not per-row-source statistics but include the statistics of the child rows.
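For a concrete round trip, the whole thing looks like this (a minimal sketch against a hypothetical table t; any query will do):

-- run the statement with row source statistics collection enabled
select /*+ gather_plan_statistics */ count(*) from t;

-- then, from the same session, fetch the real plan plus execution statistics
select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));

The two nulls mean “the last statement executed by this session”, and 'ALLSTATS LAST' restricts the row source statistics to that last execution.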

The columns E-Rows and A-Rows can be used for Tuning by Cardinality Feedback (TCF). TCF refers to the process of comparing E-Rows and A-Rows to determine whether the optimizer is estimating correctly. The two columns, which represent estimated rows and actual rows, can be compared to highlight how close or how far off the optimizer’s predictions were. If the optimizer’s predictions are far off, it’s an indication, though not a proof, that the query has been inadequately optimized. To get the optimizer to create an optimal execution path, one can look for ways of bringing the optimizer’s estimated statistics in line with the actual statistics. See Tuning by Cardinality Feedback by Wolfgang Breitling.

In v$sql_plan_statistics these “should” be equal:

  • cardinality ~= output_rows/starts
  • starts*cardinality ~= output_rows

And in the output above

  • E-Rows=cardinality
  • A-Rows=output_rows

Thus in order to compare E-Rows to A-Rows, we have to multiply E-Rows by starts.

Other display_cursor 3rd argument options

There are other options besides “ALLSTATS” for display_cursor’s third argument. Here is a list of the options and the fields they show.

By default, statistics are totals over all executions of the cursor, but “LAST” can be appended to the third argument to get the stats for just the last execution. NOTE: the last four columns, on memory and temp space, always show total or last-execution values as indicated in the tables below. Here is an example of using “LAST”:

select * from table(dbms_xplan.display_cursor(null,null,'MEMSTATS LAST'));

Notice that for memory consumption, some of the stats are in bytes and some are in kilobytes.

The following tables are in text for searching and copy/paste (the original post also included them as color-coded images).

Arguments giving only optimizer estimates (asterisks mark the estimate columns shown)

              Rows   Bytes   TempSpc   Cost   Time
 BASIC
 null           *      *                 *      *
 TYPICAL        *      *         *       *      *
 SERIAL         *      *         *       *      *
 ALL            *      *                 *      *
 ADVANCED       *      *                 *      *

Arguments that give actual row source execution statistics (the asterisks under E-Rows, OMem and 1Mem are still estimates)

(the following two tables show the same arguments, but are split in two to fit the width)

                E-Rows  Starts  A-Rows  Buffers  Reads  Writes  A-Time
 MEMSTATS          *      *       *        *
 ALLSTATS          *      *       *        *       *      *       *
 RUNSTATS_LAST     *     LAST    LAST     LAST    LAST   LAST    LAST
 RUNSTATS_TOT      *     TOT     TOT      TOT     TOT    TOT     TOT
 IOSTATS           *      *       *        *       *      *       *

                OMem  1Mem  O/1/M  Used-Mem  Used-Tmp  Max-Tmp
 MEMSTATS         *     *    TOT     LAST      LAST      TOT
 ALLSTATS         *     *    TOT     LAST      LAST      TOT
 RUNSTATS_LAST
 RUNSTATS_TOT
 IOSTATS

Arguments for special cases

             TQ   IN-OUT   PQ Distrib   Pstart   Pstop   Instance
 PARTITION                                 X       X
 PARALLEL     X      X         X
 REMOTE                                                      X

List of arguments for non-statistical output

              predicate  note  Query Block  Column      outline  bind
              info             Name         projection           vars
 BASIC
 null             X        X
 TYPICAL          X        X
 SERIAL           X        X
 ALL              X        X        X           X
 ADVANCED         X        X        X           X           X      X
 PREDICATE        X
 NOTE                      X
 ALIAS                              X
 PROJECTION                                     X
 OUTLINE                                                    X
 PEEKED_BINDS                                                      X
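Not from the original post, but worth knowing: as I understand the dbms_xplan documentation, these keywords can be combined, and the non-statistical sections can be switched on and off individually with + and - modifiers, for example:

-- show just the basic plan, plus the predicate and note sections
select * from table(dbms_xplan.display_cursor(null, null, 'BASIC +PREDICATE +NOTE'));

-- typical output without the predicate section
select * from table(dbms_xplan.display_cursor(null, null, 'TYPICAL -PREDICATE'));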

 

Enabling extended row source execution statistics gathering

There are three ways to gather row source execution stats (each is illustrated in the sketch after this list):

  1. the /*+ gather_plan_statistics */ hint
  2. sql_trace=true
  3. statistics_level=all

The first two are cheaper (they set _rowsource_statistics_sampfreq=128) but less accurate than the last option, which, though more accurate, can consume significantly more CPU (_rowsource_statistics_sampfreq=1).
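For illustration, the three look like this in a SQL*Plus session (a sketch; t is a hypothetical table):

-- 1. per statement, via the hint
select /*+ gather_plan_statistics */ count(*) from t;

-- 2. per session, via SQL trace
alter session set sql_trace = true;

-- 3. per session, via the statistics level (more accurate, more CPU)
alter session set statistics_level = all;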

 Manually querying extended row source execution statistics

The data retrieved with display_cursor can be queried directly from

  • v$sql_plan_statistics
  • v$sql_plan_statistics_all

 

desc v$sql_plan_statistics
ADDRESS
HASH_VALUE
SQL_ID
PLAN_HASH_VALUE
CHILD_ADDRESS
CHILD_NUMBER
OPERATION_ID
EXECUTIONS
LAST_STARTS          STARTS
LAST_OUTPUT_ROWS     OUTPUT_ROWS
LAST_CR_BUFFER_GETS  CR_BUFFER_GETS
LAST_CU_BUFFER_GETS  CU_BUFFER_GETS
LAST_DISK_READS      DISK_READS
LAST_DISK_WRITES     DISK_WRITES
LAST_ELAPSED_TIME    ELAPSED_TIME

and

desc v$sql_plan_statistics_all

(columns marked [est] are optimizer estimates; columns marked [act] are actual execution statistics)

 ADDRESS
 HASH_VALUE
 SQL_ID  PLAN_HASH_VALUE  CHILD_ADDRESS  CHILD_NUMBER
 TIMESTAMP
 OPERATION
 OPTIONS
 OBJECT_NODE  OBJECT#  OBJECT_OWNER  OBJECT_NAME  OBJECT_ALIAS  OBJECT_TYPE
 OPTIMIZER
 ID  PARENT_ID  DEPTH  POSITION
 SEARCH_COLUMNS
 COST  CARDINALITY  BYTES                               [est]
 OTHER_TAG
 PARTITION_START  PARTITION_STOP  PARTITION_ID
 OTHER
 DISTRIBUTION
 CPU_COST  IO_COST  TEMP_SPACE                          [est]
 ACCESS_PREDICATES
 FILTER_PREDICATES
 PROJECTION
 TIME (seconds, estimated by the optimizer)             [est]
 QBLOCK_NAME
 REMARKS
 OTHER_XML (bind variables and other info)
 EXECUTIONS                                             [act]
 LAST_STARTS           STARTS                           [act]
 LAST_OUTPUT_ROWS      OUTPUT_ROWS                      [act]
 LAST_CR_BUFFER_GETS   CR_BUFFER_GETS                   [act]
 LAST_CU_BUFFER_GETS   CU_BUFFER_GETS                   [act]
 LAST_DISK_READS       DISK_READS                       [act]
 LAST_DISK_WRITES      DISK_WRITES                      [act]
 LAST_ELAPSED_TIME     ELAPSED_TIME (microseconds)      [act]
 POLICY
 ESTIMATED_OPTIMAL_SIZE (KB)                            [est]
 ESTIMATED_ONEPASS_SIZE (KB)                            [est]
 LAST_MEMORY_USED (KB)                                  [act]
 LAST_EXECUTION (whether the last work area execution was optimal, one-pass or multi-pass)  [act]
 LAST_DEGREE
 TOTAL_EXECUTIONS (number of times the work area was active)  [act]
 OPTIMAL_EXECUTIONS                                     [act]
 ONEPASS_EXECUTIONS                                     [act]
 MULTIPASSES_EXECUTIONS                                 [act]
 ACTIVE_TIME (centiseconds the work area was active)    [act]
 MAX_TEMPSEG_SIZE (bytes)                               [act]
 LAST_TEMPSEG_SIZE (bytes)                              [act]

so you can write a query to get the data directly, such as:

col operation for a45
SELECT
      LPAD(' ',depth)||P.OPERATION||'_'||P.OPTIONS||' '||P.OBJECT_NAME  operation
    , last_starts * cardinality e_rows_x_starts
    , last_output_rows  a_rows
    , LAST_CR_BUFFER_GETS bgets
    , LAST_DISK_READS     pread
    , LAST_DISK_WRITES    pwrites
    , LAST_ELAPSED_TIME   elapsed
    , LAST_MEMORY_USED
    , LAST_TEMPSEG_SIZE
    , LAST_EXECUTION
  FROM
       V$SQL_PLAN_statistics_all P
  WHERE
        sql_id='&sql_id'
  order by child_number,id
/

which makes the comparison between estimate and actual easier, because I can include last_starts*cardinality to get e_rows_x_starts, which I can compare directly to a_rows, i.e. output_rows.

Putting it all together

We can take the TCF ideas further by putting them together in a query such that the output is easier to read:

col cn format 99
col ratio format 99
col ratio1 format A6
--set pagesize 1000
set linesize 140
break on sql_id on cn
col lio_rw format 999
col "operation" format a60
col a_rows for 999,999,999
col e_rows for 999,999,999
col elapsed for 999,999,999

Def v_sql_id=&SQL_ID

select
       -- sql_id,
       --hv,
       childn                                         cn,
       --ptime, stime,
       case when stime - nvl(ptime ,0) > 0 then
          stime - nvl(ptime ,0)
        else 0 end as elapsed,
       nvl(trunc((lio-nvl(plio,0))/nullif(a_rows,0)),0) lio_ratio,
       --id,
       --parent_id,
       --starts,
       --nvl(ratio,0)                                    TCF_ratio,
       ' '||case when ratio > 0 then
                rpad('-',ratio,'-')
             else
               rpad('+',ratio*-1 ,'+')
       end as                                           TCF_GRAPH,
       starts*cardinality                              e_rows,
                                                       a_rows,
       --nvl(lio,0) lio, nvl(plio,0)                      parent_lio,
                                                         "operation"
from (
  SELECT
      stats.LAST_ELAPSED_TIME                             stime,
      p.elapsed                                  ptime,
      stats.sql_id                                        sql_id
    , stats.HASH_VALUE                                    hv
    , stats.CHILD_NUMBER                                  childn
    , to_char(stats.id,'990')
      ||decode(stats.access_predicates,null,null,'A')
      ||decode(stats.filter_predicates,null,null,'F')     id
    , stats.parent_id
    , stats.CARDINALITY                                    cardinality
    , LPAD(' ',depth)||stats.OPERATION||' '||
      stats.OPTIONS||' '||
      stats.OBJECT_NAME||
      DECODE(stats.PARTITION_START,NULL,' ',':')||
      TRANSLATE(stats.PARTITION_START,'(NRUMBE','(NR')||
      DECODE(stats.PARTITION_STOP,NULL,' ','-')||
      TRANSLATE(stats.PARTITION_STOP,'(NRUMBE','(NR')      "operation",
      stats.last_starts                                     starts,
      stats.last_output_rows                                a_rows,
      (stats.last_cu_buffer_gets+stats.last_cr_buffer_gets) lio,
      p.lio                                                 plio,
      trunc(log(10,nullif
         (stats.last_starts*stats.cardinality/
          nullif(stats.last_output_rows,0),0)))             ratio
  FROM
       v$sql_plan_statistics_all stats
       , (select sum(last_cu_buffer_gets + last_cr_buffer_gets) lio,
                 sum(LAST_ELAPSED_TIME) elapsed,
                 child_number,
                 parent_id,
                 sql_id
         from v$sql_plan_statistics_all
         group by child_number,sql_id, parent_id) p
  WHERE
    stats.sql_id='&v_sql_id'  and
    p.sql_id(+) = stats.sql_id and
    p.child_number(+) = stats.child_number and
    p.parent_id(+)=stats.id
)
order by sql_id, childn , id
/

which gives output like

    Enter value for sql_id: g2w9n4gksyys6
    old  59:     stats.sql_id='&v_sql_id'  and
    new  59:     stats.sql_id='g2w9n4gksyys6'  and

     CN   ELAPSED    LIO_RATIO TCF_GRAPH   E_ROWS       A_ROWS operation
    --- ------------ --------- ------ ------------ ------------ ------------------------------------------------------------
      0            0         0                                1 SELECT STATEMENT
           5,720,456         0                   1            1  HASH GROUP BY
              29,711         0                            1,909   NESTED LOOPS
                    0         0  +++              1        1,909    NESTED LOOPS
           1,969,304         0  +++              1        1,909     NESTED LOOPS
                   0         0  +++              1        2,027      NESTED LOOPS
           7,939,649         0  +++              1        1,656       NESTED LOOPS
             716,054         0  +++              1        1,657        NESTED LOOPS
             270,201         0  ++              39       23,171         HASH JOIN
                  23         0                   5            1          JOIN FILTER CREATE :BF0000
                  31         1                   5            1           TABLE ACCESS BY INDEX ROWID PS_PAY_CALENDAR
                  14         2                   5            1            INDEX RANGE SCAN PS0PAY_CALENDAR
             141,467         0              18,503       23,171          VIEW  VW_SQ_1
           3,032,120         0              18,503       23,171           HASH GROUP BY
             152,564         0             163,420       33,020            JOIN FILTER USE :BF0000
             407,746         0             163,420       33,020             MERGE JOIN
                  55         0                   5            1              SORT JOIN
                  12         2                   5            1               INDEX RANGE SCAN PS0PAY_CALENDAR
              79,435         0              40,000       33,020              SORT JOIN
             119,852         0              40,000       40,000               INDEX FAST FULL SCAN WB_JOB
            2,959,031        13  -           23,171        1,657         TABLE ACCESS BY INDEX ROWID WB_JOB
             944,887         1              23,171       23,174          INDEX RANGE SCAN WB_JOB
             102,650         0               1,657        1,656        VIEW PUSHED PREDICATE  VW_SQ_2
              73,769         0               1,657        1,657         SORT AGGREGATE
              25,617         0               1,657        1,657          FIRST ROW
             225,497         1               1,657        1,657           INDEX RANGE SCAN (MIN/MAX) WB_JOB
             357,872         0               3,312        2,027       TABLE ACCESS BY INDEX ROWID WB_RETROPAY_EARNS
           3,655,774         1               3,312        2,027        INDEX RANGE SCAN WB_RETROPAY_EARNS_IDX1
             199,884         0               2,027        1,909      TABLE ACCESS BY INDEX ROWID PS_RETROPAY_RQST
             317,793         1               2,027        1,909       INDEX RANGE SCAN PS_RETROPAY_RQST
              71,534         0               1,909        1,909     INDEX RANGE SCAN PS#RETROPAYPGM_TBL
              18,396         0               1,909        1,909    TABLE ACCESS BY INDEX ROWID PS_RETROPAYPGM_TBL

The three important parts of this query are:

  • Elapsed is per row source, not cumulative over its children
  • LIO_RATIO
  • TCF_GRAPH

The elapsed time in the normal display_cursor output has a huge drawback: each line’s elapsed time includes the elapsed time of all its children, which makes an execution plan difficult to scan to see where the time is being spent. In the above output, the elapsed time represents the elapsed time of each row source line alone.

LIO_RATIO shows the number of buffers accessed per row returned. Ideally 1 buffer or less is accessed per row returned. When the number of buffers per row becomes large, it’s a good indication that there is a more optimal way to get the rows. The I/O stats include the stats of the child row sources, so the query has to get the I/O from the children and subtract it from the parent, making the query a bit more complex.

TCF_GRAPH graphically shows the ratio of estimated rows to actual rows. The estimated value used is cardinality*starts, not just cardinality, so it can be compared directly to actual rows, and the difference is shown in orders of magnitude. Each ‘+’ represents an order of magnitude larger and each ‘-’ an order of magnitude smaller. The more orders of magnitude, either way, the further off the optimizer’s calculations are, and thus the more likely the plan is suboptimal.

In the above output there are five lines where the optimizer expected only 1 row and the actual results were over 1,000, i.e. three orders of magnitude difference. These are the five lines with “+++”.
There is one line with “-”, where actual was an order of magnitude smaller than estimated. On that same line we see it’s one of the slower lines, at almost 3 seconds, and that there were 13 LIOs per row returned, which is a sign of inefficiency.

Rebuilding Indexes and the Clustering Factor Quiz (One Of The Few)

Today’s question has been prompted by various recent comments regarding the Clustering Factor (CF) of an index and how changing the CF requires a reorg of the underlying table. It used to be quite a common myth that if the CF of an index was greater than “X” or based on some nonsensical formula the [...]

NFS max rsize on Solaris – gotcha

When mounting NFS file systems there is an option to set the max rsize requested. For example:

mount -o rsize=1048576,wsize=1048576,proto=tcp,vers=3  192.168.1.10:/foo /foo

The rsize generally used is 32K, as in the Oracle documentation for example, but for large sequential I/O a larger rsize can make a big difference. In some tests the larger rsize was twice as fast.

Unfortunately, if the client is Solaris, the larger rsize is ignored because of a kernel parameter: nfs3_bsize, which defaults to 32K and limits the rsize. To change the value, either add it to /etc/system to take effect on reboot, or change it immediately with mdb:

mdb -kw
> nfs3_bsize/D
nfs3_bsize:
nfs3_bsize:     32768
> nfs3_bsize/W 100000
nfs3_bsize:     0xd             =       0x100000
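Note that mdb reads the value as hex, so the 100000 above is 0x100000, i.e. 1048576 bytes, matching the rsize in the mount example. For the reboot-persistent route, the /etc/system entry would look something like this (a sketch; the nfs module:variable syntax is my assumption):

* /etc/system: raise the NFSv3 client block size limit to 1MB
set nfs:nfs3_bsize = 0x100000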

Impact of STATISTICS_LEVEL on Cardinality Feedback and Adaptive Cursor Sharing

The STATISTICS_LEVEL parameter controls a bunch of features. In addition to the documentation, the V$STATISTICS_LEVEL view provides a list of the features it controls.
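For example, something like the following (a sketch; STATISTICS_NAME, ACTIVATION_LEVEL and SESSION_SETTABLE are among the view's columns) lists which features are enabled at which setting:

select statistics_name, activation_level, session_settable
from   v$statistics_level
order  by statistics_name;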

NULL

A recent question from OTN:

What’s the difference between “{expression} is null” and “{expression} = null”?

Technically we can say the following:

  • {expression} is null will evaluate to one of TRUE or FALSE
  • {expression} = null will evaluate to NULL

As a follow-on from this: a predicate applied to a dataset will return rows only when it evaluates to TRUE; a constraint will allow rows to be inserted into a table if it doesn’t evaluate to FALSE.

Three-valued logic catches everybody out occasionally.
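A quick demonstration of the difference (a sketch using a hypothetical table t with a nullable column x):

-- returns no rows: "x = null" evaluates to NULL for every row, never TRUE
select * from t where x = null;

-- returns the rows whose x is NULL: "x is null" evaluates to TRUE for them
select * from t where x is null;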

 

UKOUG Oracle Data Security Day presentation slides available

I spoke at the UKOUG special security day event last week at Bletchley Park just outside of Milton Keynes. We had a great agenda for the day which was focused on Data Security. We had Ian Glover of CREST and....[Read More]

Posted by Pete On 19/09/11 At 04:07 PM

Big Tables, Sorts and Indexes Solution (Right On Mother)

My, what a clever lot we have reading this blog. Indeed, most of the work has already been done for me, as we already have a couple of excellent demos in the list of comments. The answer is Yes, the CBO will consider using the index and Yes, there may be a number of scenarios when the [...]

Inline Views – What is Wrong with this Quote?

September 18, 2011 I do not recall putting together any articles about inline views, so let’s start out with a couple of examples before taking a look at a quote.  We need to create a table for this example, so I will reuse a slightly modified table creation script from another article, and also collect [...]