Top 60 Oracle Blogs

April 2010

Exadata Storage Server and the Query Optimizer – Part 1

Even though the use of an Exadata Storage Server should be transparent to the query optimizer, when you look at execution plans generated in an environment using one, you might notice slight differences. The purpose of this series of posts is to summarize the differences I was able to observe.
Disclaimer: I do not have access [...]

PowerPoint is the Enemy

I went to Starbucks about an hour ago to treat myself to a mocha (even though it was nonfat). And while waiting for my coffee to arrive I noticed the front page of The New York Times. Right smack in the middle of the page was a horrendous PowerPoint slide and the title of the article underneath read "We Have Met the Enemy and He Is PowerPoint".

That old restart problem again...

Not something entirely new that I learned today - but rather a revisited "something I learned" coupled with an "I didn't necessarily expect it in this case".

It is the old "statement restart" and "evil triggers" issue. I had an asktom question asking why a row seemed to be getting deleted twice. You should read that link, I'll be referring to it here.

I immediately knew what the issue was (I was pretty sure). Before I even read the test case - I had already pointed them to three articles I wrote on the subject a while ago...

But when I looked at their test case and modified it slightly to be smaller and easier to read - I was frankly surprised at what was happening - but ok with the behavior.

It takes a series of bad things happening for the issue the poster was seeing to occur. You have to:

a) use a trigger (evil)
b) do something non-transactional in that trigger - for example, modify a package global variable
c) do slow-by-slow processing (if they did a big mass operation, they could still see a restart, but it would perhaps be less likely)

Here is a snippet of the code again:

ops$tkyte%ORA11GR2> create or replace PACKAGE pacepack
2 as
3 type array is table of number index by varchar2(40);
4 g_data array;
5 g_cnt number;
6 end pacepack;
7 /

Package created.

That package just has some global variables (evil global variables - evil evil - they didn't use globals in their API, they had getter/setter functions - but those behaved like globals as well). We'll create a row trigger to save the rowids of the rows it has processed - even if Oracle rolls back our firing statement, modifications we've made in our trigger to the globals won't roll back.

Now we'll create a table to work against and then code the trigger:

ops$tkyte%ORA11GR2> create table delete_data
2 as
3 select owner, object_name
4 from dba_objects
5 where rownum < 53001;

Table created.

ops$tkyte%ORA11GR2> create or replace trigger delete_data_bt
2 before delete or update on delete_data
3 for each row
4 begin
5 if ( pacepack.g_data.exists(rowidtochar(:old.rowid)) )
6 then
7 dbms_output.put_line( 'doing "' || :old.rowid ||
'" again was called ' || pacepack.g_cnt );
8 else
9 pacepack.g_data(rowidtochar(:old.rowid)) := 1;
10 end if;
11 pacepack.g_cnt := pacepack.g_cnt + 1;
12 end;
13 /

Trigger created.

The trigger checks to see if the PL/SQL global table variable already contains the rowid - if so, it says "I already did this one"; else it sets a flag showing it processed that row for the first time. We also increment a count (you know, to count the rows we've deleted or updated).

Now we process the data:

ops$tkyte%ORA11GR2> declare
2 cursor l_delete_csr is
3 select *
4 FROM delete_data
5 for update ;
6 l_cnt number := 0;
7 begin
8 pacepack.g_data.delete;
9 pacepack.g_cnt := 0;
10 for l_delete_row in l_delete_csr
11 loop
12 update delete_data
13 set owner = lower(owner)
14 where current of L_delete_CSR;
15 l_cnt := l_cnt + 1;
16 end loop;
17 dbms_output.put_line( 'trigger count = ' || pacepack.g_cnt ||
' local count = ' || l_cnt );
18 END;
19 /
doing "AAAYMdAAEAABFrnABH" again was called 8827
doing "AAAYMdAAEAABFwiAAx" again was called 20140
doing "AAAYMdAAEAABFyeACk" again was called 31405
doing "AAAYMdAAEAABFzaABE" again was called 42670
trigger count = 53004 local count = 53000

PL/SQL procedure successfully completed.

As you can see - our trigger was fired 53,004 times - for 53,000 rows. Our counts do not match and we can see the four updates that were restarted.

Why they were restarted is not really relevant - the fact is, they CAN be restarted, we can show that they can be restarted, and it is documented that they can be. Any statement can be restarted - we take a savepoint before executing the statement and, if we deem it necessary (which we obviously did), we roll back to that savepoint and do it over.

So, expect the restart - and stop using triggers. If you do use triggers - be very, very careful to make sure they are not subject to issues with restarts (e.g.: NO autonomous transactions, NO modifications of global variables, NO API calls that have side effects that don't roll back - setter functions, for example)!
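The restart mechanics described here - savepoint, partial work, rollback to savepoint, re-execute - can be sketched outside the database. Below is a minimal Python simulation (hypothetical, not Oracle code; the names `process_rows` and `g_cnt` are made up for illustration): the transactional row changes are rolled back to the savepoint on restart, but the "trigger's" global counter is not, so the two counts diverge just as they did in the transcript above.

```python
# Minimal simulation of a statement restart with a non-transactional
# trigger side effect. g_cnt plays the role of the package global:
# it is incremented by the per-row "trigger" and never rolled back.

def process_rows(rows, restart_after=None):
    g_cnt = 0          # non-transactional: survives the restart
    applied = []       # transactional: rolled back to the savepoint

    def run(allow_restart):
        nonlocal g_cnt
        savepoint = list(applied)          # savepoint before the statement
        for i, row in enumerate(rows):
            g_cnt += 1                     # trigger side effect
            applied.append(row)
            if allow_restart and restart_after is not None and i == restart_after:
                applied[:] = savepoint     # roll back to savepoint...
                return False               # ...and signal a restart
        return True

    if not run(allow_restart=True):        # first attempt may restart
        run(allow_restart=False)           # re-execute from the top
    return g_cnt, len(applied)

# Restart after the 3rd row: the "trigger" fires 10 + 3 = 13 times,
# but only 10 rows end up modified.
trigger_count, row_count = process_rows(list(range(10)), restart_after=2)
print(trigger_count, row_count)   # -> 13 10
```

The local count (rows actually changed) stays correct because it is part of the re-executed work; only state that lives outside the transaction drifts, which is exactly why non-transactional side effects in triggers are dangerous.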

Index ITL

Jonathan Lewis wrote a series of posts on the topic of index ITLs (interested transaction lists), how and when they become huge and make an index bigger - see this post with links to several other articles. As it turned out in a recent forum thread, one of the possible solutions to the issue [...]

WWOUG April meeting sponsored by Blue Gecko

Blue Gecko is proud to sponsor the venue and refreshments for the Western Washington Oracle Users Group (WWOUG) meeting this Thursday (4/29/2010) from 5:30 pm – 7:30 pm.

Guest speaker Tim Gorman - an author, Oracle ACE, and Oak Table member - will present Scaling to Infinity: Partitioning Data Warehouses on Oracle.

Please register for this event if you plan to attend.

Seattle Public Library downtown branch, 1000 4th Ave, Seattle
Room 2, Level 4 (Wright/Ketcham Meeting Room)

5:30 – 6:00 Refreshments and WWOUG announcements
6:00 – 7:00 Tim Gorman: Scaling to Infinity
7:00 – 7:30 Open networking

Related posts:

  1. April Seattle Apps Tech Group Networking Event
  2. “Tech Café” – Blue Gecko Sponsored Event
  3. “Tech Café” – Blue Gecko Sponsored Event!

Execution plan Quiz: Shouldn’t these row sources be the other way around ;-)

Here’s a little trick question. Check out the execution plan below.

What the hell, shouldn’t the INDEX/TABLE access be the other way around?!

Also, how come it’s TABLE ACCESS FULL (and not by INDEX ROWID) in there?

There's a little gotcha in this question, but can you come up with a query that produces such a plan? ;-)

| Id  | Operation          | Name   | E-Rows |
|   0 | SELECT STATEMENT   |        |        |
|*  1 |  INDEX RANGE SCAN  | PK_EMP |      1 |
|*  2 |   TABLE ACCESS FULL| EMP    |      1 |


quotes from Stockholm ...

I spent an afternoon at the Nobel Museum in Stockholm and found myself writing notes furiously at the very first exhibit. "The mere formulation of a problem is often far more essential than its solution, which may be merely a matter of mathematical or experimental skill. To raise a new question, new possibilities, to regard old problems from a new angle requires creative imagination and marks

Oracle Plan Stability (Stored Outlines) in PeopleSoft Global Payroll

A longer version of this posting, with experimental results, is available on my website.

In PeopleSoft for the Oracle DBA, I wrote a page (p. 291) explaining why stored outlines were not suitable for use in PeopleSoft.  Five years later, my view has not significantly changed.  Essentially, stored outlines work best with shared SQL, and there isn't much shared SQL in PeopleSoft, because a lot of it is dynamically generated.

RAC Synchronous Alert File Reader

So you have 3 alert files from your 3-node RAC and you are trying to see what happened in the cluster as a whole. The Rac Synchro Reader will let you view the alert files of all 3 nodes side by side, with the timestamps aligned. When you scroll one alert file, the others will scroll to a matching timestamp.

Windows executable. Others have found it works with listener log files, too. Tested with alert files from versions 9 and 10.

Analyzing a SQL Trace File with SQL Statements

As of Oracle Database 11g, the DBMS_SQLTUNE package provides the SELECT_SQL_TRACE function. Its purpose is to load the content of a SQL trace file into a SQL tuning set. But, as often happens, a feature can be (mis)used for another purpose. The aim of this post is to show how to take advantage of [...]