Oakies Blog Aggregator

My New Laptop

One more post that's only vaguely related to Oracle and then back to the stats posts.

I mentioned previously that I'd ordered myself one of these. After suffering performance problems with my 11.2/OEL/VMWare/Parallelism demos at UKOUG, I was determined that wouldn't happen again and that it was time for a new laptop, complete with Core i7, 8G RAM and an SSD. In the event, Dell came up with a 6 week lead-time, so I cancelled the order. Of course, as soon as I did that, they solved their lead-time issues and raised a new order on my behalf which I'm still trying to get returned! But that's another story.

I started to look at alternatives (but there weren't many because the laptop Core i7 hasn't been available long) and came across the Sony VPCF11Z1E. No SSD, but a Blu-Ray drive and I've been more than happy with my previous Vaio over the last couple of years. Better still, my mate Jari checked out John Lewis and I could pick one up the next day. It all looked a bit consumer/multimedia orientated and I was a bit concerned about the dimensions and weight, but performance was the main priority and I could always stick an SSD in it later if I decided that would help. The other issue (and I was talking to another conference speaker who had shared this experience) was that I'd left it late, the conference was approaching and so it was a bit of a hurried purchase. If Apple had a Core i7 model out now, that would have been on the list too.

The next day it turned up with the minimum of fuss (John Lewis are great) and I opened the box excitedly. Only to feel let down :-(

The thing just seemed so big and heavy. Worse still, when I powered it on and started using it, I noticed the fan noise as much as the absolutely stunning screen. Definitely not in the same league as the ISP4400 in the bedroom, but the laptop equivalent. Mmmm. Not the "isn't my new toy great" moment I'd been expecting. It also didn't fit into my existing laptop bag, which was a key part of my smooth operation commuting to London. In fact, it was difficult to see why I'd bothered and I kept looking longingly at my old Vaio as I moved stuff between the two machines.

Mads insisted she thought it was great, as did Jari (but then he would, having talked me into it! LOL), but I was searching for things to like about it. Eventually they started to appear and I've become much happier with it over the past few weeks.

- The screen is simply stunning and one benefit of a wide screen is that it's not a bad replacement for the dual monitor set-up at work. Tom Kyte is right - screen real estate is a big deal.

- Throw in Blu-Ray and, should I ever want to watch a movie, it's amazing, based on my first experience with District 9.

- I love Sony keyboards.

- I hate numeric keypads and offset keyboards on a laptop. Might be useful if I played games a bit more.

- I ran the same Swingbench tests on the old and new laptop against the same VM (and no, I'm not going into details - this is consistent across many different tests). Old laptop - 35,000 Transactions Per Minute. New laptop - 180,000 TPM. Ah! Now I remember why I bought it* ;-)

- Despite it being a bit bigger than I'd like, it's nowhere near as bad as Marco managed to make it look in this carefully-arranged stunt-shot!

By the time I reached Hotsos, I was feeling much better about it and when Kevin Closson was talking about the advantages and disadvantages of SMT, I was keen to try it. Damn! Consumer-grade nonsense again! I've never seen fewer BIOS options in my life! Date/Time, system password and a solitary option to enable VT. So at least Sony let you enable VT now which is an improvement on the past, but ... sigh. I'm going to have to go hunting for ways of accessing more advanced BIOS options so that I can break my next set of demos ;-)

All in all, I'm happy I bought it now, but it's definitely not recommended if you're a road warrior. Even with my new rucksack, it's a little on the heavy side and you can imagine how quickly you can tear through battery power (particularly if you decide to run intensive benchmarks)!

*Top tip, though. It doesn't matter how powerful your new laptop is and how elegantly VMWare allows you to move your demos on to it, if you forget to install the Windows Loopback Adaptor, it will all count for nothing and you'll look and feel like a clown ;-)

P.S. Couldn't resist sneaking a look in that Dell box, wondering if I'd made a mistake. What a relief. It was just as big, seemed heavier, looked pretty but I'd definitely rather have the Sony!

Hotsos 2010 - Summary

[One thing that's great about jet-lag is that it allows you to catch up on blogging and all the email that's built up while you've been away at the conference. Not much else you can do at 2:30 in the morning.]

I'm glad I went to the Hotsos Symposium again this year. Attending conferences is an expensive business when you're a contractor as it means more lost income, so I can't go to every conference I'd like to attend but this one should probably remain near the top of my list.

Lows

- My presentation, obviously. There was a funny moment in the office today when I was looking at the OEM Top Activity page - because, yet again, it had highlighted a performance problem we were previously unaware of - and our architect said perhaps I shouldn't be using it because I couldn't be trusted not to break OEM! ;-)

- Somehow contriving to miss both of Richard Foote's presentations but at least I was able to enjoy his entertaining company at dinner one night.

- The weather - not what I had in mind at all!

Highs

- The people, as always. Not only are there lots of smart speakers and attendees at Hotsos, but the informal setting makes it easier to catch up with them. It was particularly good to get time to speak properly to a few people I've only met briefly before, buy Kerry Osborne that beer and to see Polish Paul Matuszyk there after I recommended it to him 4 years ago! As for Monique, she's one of a kind ;-)

- Disco Night. Can I request that we have more party nights themed on women wearing glittery mini-dresses and Oracle geeks dancing in duck costumes?

- Alex G playing chopsticks on the piano in a large suite. I was expecting something a little more elegant!

- Quite a few presentations, not least, Bob Sneed's CPU QoS, Kevin Closson ranting intelligently about where we are today and where we're heading, Alex G reducing the room to fits of laughter whilst reinforcing the Battle Against Any Guess message and Wolfgang's Anatomy of a SQL Tuning Session.

- Feeling inspired to think about performance even more now that I'm back in the office.

So, expensive or not, I'm glad I went but next year I might stick to text-mode demos that don't need a network connection!

Hotsos 2010 - Day 5 - Training Day with Tanel Poder

I generally wouldn't visit the Hotsos Training Day, mainly because I've been away from home and work for long enough, particularly when you add the travelling time at either end, but this time I was determined to attend because Tanel was presenting.

It was a busy room with a very high percentage of attendees staying for the additional day. I suppose once people were actually allowed out of the office in the current economic climate, they figured they might as well pack in as much learning as they could. Tanel described it as one of the peaks of his career and I can understand that. It's an honour to present at the Symposium, so a whole day must be a pretty special privilege, but he's more than earned it with the number of good presentations and blog posts he's come up with.

It didn't get off to a good start, though, as there was a problem with the focus of the projectors, but that was soon adjusted a little and people were happy as long as it was good enough to read the text.

Speaking of text, he kicked off by spending quite a bit of time talking about using the right tools for the job - sqlplus, basically ;-) - and how we can make our own performance more efficient before we even start looking at performance issues. Make no mistake - despite having a love of pictures these days, I'm fundamentally a command line chap who frequently finds himself doing the things Tanel talked about, which consequently made me chuckle.

- When diagnosing Oracle problems reported by others, I ask them to stop using TOAD, their JDBC application or whatever it is they're using and login to sqlplus. Once the problem is recreated there, I know it's a real problem.

- On Windows I use the CMD prompt version of sqlplus too and can't stand sqlplusw (but I should perhaps keep that quiet because lots of others seem to like it ;-) )

- On Linux he spent a lot of time on RLWRAP and command line completion and there were some really good ideas in there that I don't use enough so I'm going to revisit them.
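The usual rlwrap trick is to wrap sqlplus so you get command history, line editing and word completion for free. A minimal sketch, assuming rlwrap is installed and using a hypothetical keyword file path:

```shell
# Wrap sqlplus with rlwrap: -f adds the words in the named file
# to the completion list (~/oracle_keywords.txt is illustrative).
alias sqlplus='rlwrap -f ~/oracle_keywords.txt sqlplus'
```

Drop that in your shell profile and every subsequent sqlplus session picks up history and completion.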

He showed some neat and entertaining tricks with colours that I took some photos of but they wouldn't really do them justice. I'm expecting him to have a movie playing from sqlplus next year ;-)

While talking about his Snapper tool, he drew comparisons between it and ASH data, which he was pretty positive about, and mentioned how ASH data is like a DW fact table with multiple dimensions that allow flexible and relatively complex drill-down into specific dimensions to help diagnose performance issues. That's something I talk about quite a lot when I'm teaching people about ASH.
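That fact-table analogy translates directly into SQL. A minimal sketch (Diagnostics Pack licence assumed; the choice of dimensions is illustrative):

```sql
-- Slice the last 15 minutes of ASH samples along two dimensions
-- (sql_id and wait event) to see roughly where time is being spent.
select sql_id, event, count(*) as samples
  from v$active_session_history
 where sample_time > sysdate - 15/1440
 group by sql_id, event
 order by samples desc;
```

Swap the GROUP BY columns for any other ASH dimensions (session_id, module, blocking_session, etc.) to drill down along a different axis.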

But he came at it more from the angle of his own Snapper tool, which is cool if you've never taken a look and between that and the section on latchprof and latchprofx, demonstrated pretty effectively that these tools allow you to look at most problems and to limit your investigations to those that matter. For example, for someone who is known for his work on diagnosing latch contention problems (amongst other things) he was very careful to point out that you shouldn't become obsessed by latch contention unless it's causing a problem! He didn't stop there, though, next up was the utility that I know a few people have been waiting for - the Mother Of All Tuning Scripts, or MOATS.

I wouldn't want to steal any of Tanel's thunder by blogging about it in too much detail, as I'm sure he'll do that himself, but MOATS is a bit like a top utility for Oracle. Although other such tools exist, e.g. OraTop, I've had a quick look and I think MOATS is probably more extensive and only requires some PL/SQL and SQL. I was pleased to see him acknowledge that it was something that he worked on with Adrian Billington, who is the guy behind oracle-developer.net and someone who I've enjoyed a few beers with since I started working in London again.

Next up were SQL Performance Tools and thank goodness he re-emphasised that using the EXPLAIN PLAN FOR command is asking for trouble, particularly as there are far better tools these days to look at the plan from the child cursor instead. (My TOAD equivalent for this in the office is to berate people for using that bl**dy ambulance! LOL)
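For context, the child-cursor approach amounts to something like this in sqlplus (a sketch; DBMS_XPLAN format options vary by version):

```sql
-- After running the statement of interest in this session, pull the
-- actual plan of the last executed child cursor, rather than the
-- guess that EXPLAIN PLAN FOR would give you.
select * from table(dbms_xplan.display_cursor(null, null, 'TYPICAL'));

-- Or target a specific cursor found in v$sql:
select * from table(dbms_xplan.display_cursor('&sql_id', &child_no, 'TYPICAL'));
```

The key difference is that DBMS_XPLAN.DISPLAY_CURSOR reports the plan actually compiled for a child cursor, with the real bind peeking and optimiser environment, where EXPLAIN PLAN FOR can silently show you something else.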

Unfortunately there were some growing Production problems at work that demanded my attention, so I had to bail out early to work in my room, which meant that I missed most of the SQL Tuning and Visualisation sections. But I knew the Perfsheet content already and you can count on me revisiting the whole day once I'm back in the office. I particularly liked his course notes - not too detailed to be unwieldy, but just enough detail to remind me of the day.

Although it was a really enjoyable day, I obviously knew a lot of the content already, partly because I have similar interests and partly from reading Tanel's blog. It made me wonder why people who go to Hotsos don't read blogs and stuff the rest of the year, because I sensed a lot of the content was completely new to some others in the room. I suppose now they've found his blog, they might start.

After a few hours working in my room, things were sorted out and I could go to the bar across the street for a burger and a few beers with Marco. As soon as we walked in, Tanel shouted over and there was an interesting little crew in there, with Tanel basking in the glow of a job well done. It was a shame about the incredibly loud karaoke or I might have stayed longer but it's safer to keep me away from karaoke - although I was quite looking forward to the Estonian version ;-) It was a good evening, though, and ended up with quite a few of us in the hotel bar. My main memories are of me defending 'my' pictures and Tanel and I praising Mr. Billington to the heavens.

The "Adrian Billington must be allowed out to a conference" campaign starts here! We won't take no for an answer.

Hinting Dynamically Generated SQL in Application Engine

One of the clever things you can do in Application Engine is dynamically generate parts of a SQL statement.  However, this can produce challenges should you decide that you also need to add an optimiser hint to a SQL statement.  In this post I want to demonstrate how you can use the same techniques to dynamically generate valid optimiser hints.

The following statement was generated by a step in a delivered Application Engine that had not previously been changed.  The PS_XXX tables are custom tables, but the references are in custom meta-data that is used to dynamically generate the SQL.  I have added the hints and comments.

The specific hints and comments are not important here, the point is how I managed to generate them.

Note that:

  • I have added a comment that contains the name of the Application Engine step that generated this SQL.  This can be added automatically with the aeid.sql script on my website.  It can be difficult to identify which step generated which static SQL statement, and it can be impossible to do that with dynamic SQL, so this identifying comment appears in the SQL.
  • I have added a QB_NAME hint to the sub-query, and then the hint can refer to that sub-query (see related blog entry Hinting Sub-Queries on Oracle).
  • Rather than specify the index name in the INDEX hint, I have used the table_name(column_list) construction to specify an index on a named table that starts with the named columns.  This is good general practice: the hint remains valid if an index name changes.  However, it is also useful here because there is no meta-data that I can use to construct the name of the index.
UPDATE /*+INDEX(@SUB1 ln@SUB1 PS_XXX_SPLT_TA2(process_instance,resource_id))
         INDEX(PS_XXX_SPLT_TA1 PS_XXX_SPLT_TA1(process_instance,iu_line_type))*/
       /*ID-IU_PROCESSOR.P150.P150-010.S*/
       PS_XXX_SPLT_TA1
SET    iu_line_type='U'
WHERE  iu_line_type='2'
AND    process_instance=12345678
AND    setid_iu='XXXXX'
AND    ledger_group = 'ACTUALS'
AND    EXISTS (
       SELECT /*+QB_NAME(SUB1)*/ 'X'
       FROM   PS_XXX_SPLT_TA2 ln
       WHERE  ln.EMPLID2 = PS_XXX_SPLT_TA1.EMPLID2
       AND    ln.SEQ_NUM = PS_XXX_SPLT_TA1.SEQ_NUM
       AND    ln.BUSINESS_UNIT = PS_XXX_SPLT_TA1.BUSINESS_UNIT
       AND    ln.RESOURCE_ID = PS_XXX_SPLT_TA1.RESOURCE_ID
       AND    ln.setid_iu = 'XXXXX'
       AND    ln.ledger_group = 'ACTUALS'
       AND    ln.process_instance = 12345678
       GROUP BY ln.BUSINESS_UNIT_GL, ln.ledger, ln.OPERATING_UNIT, ln.BUSINESS_UNIT,
                ln.RESOURCE_ID, ln.EMPLID2, ln.SEQ_NUM
       HAVING SUM(ln.RESOURCE_AMOUNT) <> 0)

So how did I get those hints into the SQL?

First of all, remember that Application Engine doesn't know anything about SQL.  An Application Engine step is just a string of characters that will be submitted to the database.  Any PeopleCode macros are executed as the step is prepared and the resultant string is then sent to the database.

The name of the PeopleSoft record being updated is in a variable, line_wrk_rec.  I can introduce the contents of the variable with the %BIND() macro.  Normally a string bind variable is delimited by single quotes because it is used in a function or predicate, but the quotes can be suppressed with the NOQUOTES option.  I can convert the PeopleSoft record name to the database table name with the %Table() PeopleCode macro.

So, in this example

Code in Application Engine Step          Expansion
---------------------------------------  ---------------
%BIND(line_wrk_rec)                      'XXX_SPLT_TA1'
%BIND(line_wrk_rec,NOQUOTES)             XXX_SPLT_TA1
%Table(%BIND(line_wrk_rec,NOQUOTES))     PS_XXX_SPLT_TA1

Note that delivered PeopleSoft SQL never specifies a row source alias on the table being updated because this would be invalid SQL on SQL Server.  Thus one SQL statement can be used on multiple platforms.  Although it is possible to have platform specific steps in Application Engine, PeopleSoft development avoid this wherever possible because it increases their development overheads.  So the row source alias is the table name. 

I have used the expression %Table(%BIND(line_wrk_rec,NOQUOTES)) twice; once for the table alias and then again when I specify the index.  I want to force the use of an index on PS_XXX_SPLT_TA1 that leads on columns PROCESS_INSTANCE and RESOURCE_ID.  This is the code in the Application Engine step that generates the SQL statement above.

UPDATE /*+INDEX(@SUB1 ln@SUB1 %Table(%BIND(line_wrk2_rec,NOQUOTES))(process_instance,resource_id))
         INDEX(%Table(%BIND(line_wrk_rec,NOQUOTES)) %Table(%BIND(line_wrk_rec,NOQUOTES))(process_instance,iu_line_type))*/
       /*ID-IU_PROCESSOR.P150.P150-010.S*/
       %Table(%BIND(line_wrk_rec,NOQUOTES))
SET    iu_line_type='U'
WHERE  iu_line_type='2'
AND    process_instance=%BIND(process_instance)
AND    setid_iu=%BIND(iu_proc_002_aet.setid_ledger)
AND    ledger_group = %BIND(iu_proc_002_aet.ledger_group)
AND    EXISTS (
       SELECT /*+QB_NAME(SUB1)*/ 'X'
       FROM   %Table(%BIND(line_wrk2_rec,NOQUOTES)) ln
       %BIND(iu_where_aet.iu_where_sql,NOQUOTES)%BIND(iu_group_by_aet.iu_group_by_sql,NOQUOTES)
       %BIND(iu_proc_002_aet.where_bu,NOQUOTES)
       AND    ln.setid_iu = %BIND(iu_proc_002_aet.setid_ledger)
       AND    ln.ledger_group = %BIND(iu_proc_002_aet.ledger_group)
       AND    ln.process_instance = %ProcessInstance
       GROUP BY ln.%BIND(iu_sys_tran_aet.fieldname_bu_gl,NOQUOTES), ln.ledger%BIND(iu_proc_001_aet.iu_group_by_sql,NOQUOTES)
       HAVING SUM(ln.%BIND(iu_sys_tran_aet.fieldname_base_amt,NOQUOTES)) <> 0)

There is a problem here (at least in theory). If this statement executes for a different table, the index hint will instruct the optimiser to look for an index on that different table on the same columns.  The predicates in the where clauses are also derived from dynamic code.  If I were being absolutely rigorous, I would have added some procedural code in preceding steps to build this part of the hints dynamically too.  However, I am also trying to keep the customisation to a minimum in an area of code that is otherwise vanilla.

I have accepted a compromise.  If the step executes for a different table, the hint will probably be invalid because there is probably no such index on these columns on that table.  There is a risk that such an index does exist, and so the hint could be valid but totally inappropriate to the situation because the predicates are totally different.  The result could be very poor performance.  In this case, in practice, the risk is zero and the hint results in appropriate behaviour in all scenarios, but it is something that you need to consider on a case-by-case basis.


Hotsos 2010 – Farewell…

Said farewell today to the guys I met in the lobby of the Omni Mandelay Hotel in Irving, Texas. The final bits of the Hotsos 2010 Symposium. It has been a good run.

By the way, what did you mean Doug, with “a big laptop”, regarding your new Sony VAIO ???

It isn’t that big…

Doug's New VAIO

;-)

Hotsos 2010 – A Training Day with Tanel Poder

My last day already, so I hereby leave you with some impressions from Tanel’s Training Day. Maybe until next year.

If you also want to learn from him, then here’s your chance in Holland. Scripts and tools used by Tanel can be found here.

8-)

Hotsos 2010 - Monique

There she goes, squirreling into the corner of your carry-on.  Ready for another adventure.  Tucked between a fun floppy hat and the fluorescent sun-block, of which she uses plenty, pale creature that she is, Monique is situated to prove to you that a seasoned traveller knows best.  Dressed in a curly mohair (so as not to show the wrinkles that betray one new to the jet set) and sporting a pale chiffon scarf - to double as a disguise in those cities where she is already known for her somewhat silly escapades - she seems to wink at you and beckon you along.  Forget your cares, but by no means your playing cards, and step lightly into whatever dreams your next voyage should hold for you:  let Monique be your guide.

Her very presence alleviates all the maladies brought on by the jostling of trains, the dipping of ships, the swerving of buses and even by the simple fact that you are far from home.  For she has discovered the secret to successful travel: with the exception of the luscious few minutes in the X-ray machine, where she delights in making funny faces at the operators searching for more dangerous contents, Monique never looks back.

Charleen Kinser Designs

(With thanks to Carol Dacko for bringing along an interesting dinner companion for my own cuddly friends. Somehow I doubt they'll ever be the same - she's terribly sophisticated for my lot!)

Hotsos 2010 - What's THAT?

I heard someone bemoaning the lack of Swag at Hotsos - just an event program, a couple of magazines and a small clockwork toy sponsored by Oracle. I don't think I ever come to the Symposium expecting Swag - it's just not that kind of event. But you do tend to get good speaker gifts, including the best speaker gift I ever received. This year, it's a digital photo frame.

As for that clockwork toy, the Cuddly Toys think it's *brilliant*!!! At first, their reaction was "What's THAT?!" then when I wound it up and let it dance in front of them, they fell in love.

The only thing is that it doesn't run for long per wind-up, so my right wrist is aching from them continually begging me to "Do it again, Douglas!"

Then again, their reaction wasn't quite as relaxed as when they met Carol Dacko's friend while we were at the restaurant. Apologies for the quality of the photo, it was dark in there, but at least it proves that Little H and Chris were allowed out for dinner and that not all Cuddly Toys are made the same.

They were just stunned by their new friend. What's THAT?

All will be revealed in the next post ....

fast refresh of join-only MVs: _mv_refresh_use_stats and locking log stats

A devastating performance degradation of materialized view fast refreshes can happen in versions after 9i - and can be healed rather easily by simply setting the hidden parameter _mv_refresh_use_stats or, a bit surprisingly, by locking statistics on the logs. The problem can manifest at least in the currently-latest patchsets of 10g, 11gR1 and 11gR2 (10.2.0.4, 11.1.0.7 and 11.2.0.1), seems to hit a lot of people, and its root cause is the injection of wrong hints by the Oracle refresh engine.
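Concretely, the two workarounds amount to something like this (a sketch: the underscore parameter is hidden and unsupported, so check with Oracle Support before setting it, and the log name follows the MLOG$_ naming of the test case below):

```sql
-- Workaround 1: set the hidden parameter at session level.
alter session set "_mv_refresh_use_stats" = true;

-- Workaround 2: lock the (absent) statistics on the MV log,
-- which changes how the refresh engine hints its internal SQL.
exec dbms_stats.lock_table_stats(user, 'MLOG$_TEST_T1')
```

Either change takes effect for subsequent fast refreshes; no rebuild of the MV is needed.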

We will investigate the join-only MV case only, since this is the case I have investigated after a question by Christo Kutrovsky, factoring in some observations by Taral Desai and some Support notes; I have some clues that something similar may happen for other types of MVs.

The test case sets up this very common scenario for fast refreshes:

1 - two big base tables joined together by the MV;
2 - only a small fraction of rows modified (actually one deleted, two updated, one inserted);
3 - all tables and indexes with fresh statistics collected;
4 - MV logs with no statistics collected AND with statistics not locked;
5 - indexes present on the joined columns;
6 - indexes present on the rowid columns of the MV.
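The scenario above can be sketched as follows (a minimal illustration; the column names and join columns are hypothetical, but the MV and index names are modelled on the plans shown later):

```sql
-- MV logs recording rowids of modified rows (point 2's "fast" prerequisite)
create materialized view log on test_t1 with rowid;
create materialized view log on test_t2 with rowid;

-- Join-only MV: the select list must include the rowid of every base table
create materialized view test_mv
refresh fast on demand as
select t1.rowid as test_t1_rowid, t2.rowid as test_t2_rowid,
       t1.pk, t2.padding
  from test_t1 t1, test_t2 t2
 where t1.j1 = t2.j2;

-- point 5: indexes on the joined columns of the base tables
create index test_t1_j1_idx on test_t1 (j1);
create index test_t2_j2_idx on test_t2 (j2);

-- point 6: indexes on the rowid columns of the MV
create index test_mv_test_t1_rowid on test_mv (test_t1_rowid);
create index test_mv_test_t2_rowid on test_mv (test_t2_rowid);
```

With this in place, `dbms_mview.refresh('TEST_MV', 'F')` performs the fast refresh whose DEL step is examined below.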

Points 1 and 2 make for the ideal scenario for incremental ("fast") refreshes to be effective; 3 is very common as well, since you normally have many other statements issued on the tables; the relevance of 4 will be clear later, but it happens very often in real life, since people might perhaps consider collecting stats on the log, but locking their statistics is usually not done, at least in my experience.

To understand the importance of points 5 and 6, please check this post of mine; note how those indexes are a necessary prerequisite for the sanity of the DEL and INS steps of the MV refresh process. Without them, the refresh cannot be incremental, since it has no physical way to read and propagate only the modified rows and those related to them, and must instead scan (uselessly) most of the base tables and MV. But in order for the refresh to be incremental ("fast"), those indexes have to be actually used...

the issue

Let's illustrate the issue focusing on the DEL step (the easier one to discuss). In the above mentioned post, we have seen that the DEL step uses a single SQL statement whose text, leaving out minor technical details and hints, is:

/* MV_REFRESH (DEL) */
delete from test_mv
 where test_t1_rowid in
       (
select * from
       (
select chartorowid (m_row$$)
  from mlog$_test_t1
 where snaptime$$ > :1
       ) as of snapshot (:2)
       )

In 9.2.0.8, we get this very healthy plan:

-------------------------------------------------
|Id|Operation             |Name                 |
-------------------------------------------------
| 0|DELETE STATEMENT      |                     |
| 1| DELETE               |                     |
| 2|  NESTED LOOPS        |                     |
| 3|   VIEW               |                     |
| 4|    SORT UNIQUE       |                     |
| 5|     TABLE ACCESS FULL|MLOG$_TEST_T1        |
| 6|   INDEX RANGE SCAN   |TEST_MV_TEST_T1_ROWID|
-------------------------------------------------

That is: get the rowids of all modified rows from the log, and use the rowid-based index to delete their "old image" from the MV (inserting their "new image" is the job of the INS step). This is truly incremental, since resource usage and elapsed time are proportional to the number of rows recorded in the MV log, not to the size of the base tables.

In 10.2.0.4, 11.1.0.7 and 11.2.0.1 the plan becomes:

------------------------------------------
|Id|Operation              |Name         |
------------------------------------------
| 0|DELETE STATEMENT       |             |
| 1| DELETE                |TEST_MV      |
| 2|  HASH JOIN RIGHT SEMI |             |
| 3|   TABLE ACCESS FULL   |MLOG$_TEST_T1|
| 4|   MAT_VIEW ACCESS FULL|TEST_MV      |
------------------------------------------

Oops, the indexes are not used ... hence the DEL step overhead is proportional to the size of the MV, which can definitely be unacceptable.

That is due to the engine injecting a HASH_SJ hint into the nested subquery:

... WHERE "TEST_T1_ROWID" IN (SELECT /*+ NO_MERGE  HASH_SJ  */ ...

This is recognized as a bug in many scenarios (start from Oracle Support note 578720.1 and follow the references to explore some of them), although I have not found a single clear and exhaustive note documenting the behaviour.

remedy one: set "_mv_refresh_use_stats"

To get back to the healthy plan, simply set "_mv_refresh_use_stats" to "true" (asking Oracle Support for permission first, of course); this makes for a set of hints much better suited to a fast refresh:

... WHERE "TEST_T1_ROWID" IN (SELECT /*+ NO_MERGE  NO_SEMIJOIN  */ ...

Note: the root cause of this bug is probably the change hinted at in note 875532.1 - in 10.2.0.3 the meaning of _mv_refresh_use_stats was reversed, but not its default value, hence (by mistake?) activating a different piece of the refresh engine code.
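
For reference, the parameter can be set like this (session level shown; the double quotes are required for underscore parameters, and, being a hidden parameter, it should only be touched with Oracle Support's blessing):

```sql
-- hidden parameter: check with Oracle Support before using in production
alter session set "_mv_refresh_use_stats" = true;

-- or persistently, instance-wide (requires an spfile):
alter system set "_mv_refresh_use_stats" = true scope = both;
```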

The very same problem happens for the INS step; I won't go into much detail here (please check the test case spools provided above if interested), but in 9.2.0.8 the modified base table rows are fetched directly using the rowids contained in the log:

-----------------------------------------------------
|Id|Operation                      |Name            |
-----------------------------------------------------
| 0|INSERT STATEMENT               |                |
| 1| TABLE ACCESS BY INDEX ROWID   |TEST_T2         |
| 2|  NESTED LOOPS                 |                |
| 3|   VIEW                        |                |
| 4|    NESTED LOOPS               |                |
| 5|     VIEW                      |                |
| 6|      SORT UNIQUE              |                |
| 7|       TABLE ACCESS FULL       |MLOG$_TEST_T1   |
| 8|     TABLE ACCESS BY USER ROWID|TEST_T1         |
| 9|   INDEX RANGE SCAN            |TEST_T2_J2_1_IDX|
-----------------------------------------------------

Instead, in 10.2.0.4, 11.1.0.7 and 11.2.0.1 we get the following plan:

--------------------------------------------------
|Id|Operation                   |Name            |
--------------------------------------------------
| 0|INSERT STATEMENT            |                |
| 1| TABLE ACCESS BY INDEX ROWID|TEST_T2         |
| 2|  NESTED LOOPS              |                |
| 3|   VIEW                     |                |
| 4|    HASH JOIN RIGHT SEMI    |                |
| 5|     TABLE ACCESS FULL      |MLOG$_TEST_T1   |
| 6|     TABLE ACCESS FULL      |TEST_T1         |
| 7|   INDEX RANGE SCAN         |TEST_T2_J2_1_IDX|
--------------------------------------------------

Its resource consumption is, of course, proportional to the size of the base table.

Here too, this is due to the nasty HASH_SJ hint:

... FROM "TEST_T1" "MAS$" WHERE ROWID IN (SELECT  /*+ HASH_SJ */  ...

If you set _mv_refresh_use_stats, you get the 9.2.0.8 plans back - and thus you are back to incremental refreshes for both the DEL and INS steps. As a side note, a cardinality hint is injected as well, with the cardinality set to the correct value (6 in my test case):

... FROM "TEST_T1" "MAS$" WHERE ROWID IN (SELECT  /*+ CARDINALITY(MAS$ 6) NO_SEMIJOIN ...

 
remedy two: collect and lock statistics on the logs

Very interestingly, instead of setting the hidden parameter, there is another way to get back to the healthy plans: gather statistics on the MV logs while they are empty AND lock them (as suggested in note 578720.1, albeit for a different scenario, and without any need to set the hidden parameter; thanks to Taral Desai for pointing me to the note). In this case, no hint at all is injected besides a NO_MERGE for the DEL step:

... WHERE "TEST_T1_ROWID" IN (SELECT /*+ NO_MERGE  */ ...
... FROM "TEST_T1" "MAS$" WHERE ROWID IN (SELECT  ...

So, the engine trusts the CBO to come up with a good plan, and it does not inject any "intelligent" hint. Possibly, and intriguingly, this is because by locking the statistics I am assuring the engine that they are representative of the data at any point in time. So, locking the statistics is not meant only as a way to prevent dbms_stats from changing them ... it is deeper than that. At least in this case, you are taking responsibility for them, and Oracle will take that into consideration.