Top 60 Oracle Blogs


February 2011


I’ve been ignoring the “social network” phenomenon for a long time but I’ve finally given in and, as some people have already discovered, signed up to LinkedIn. I don’t know whether this is a good, bad, or pointless thing to do, but I thought I’d give it a go for a while and see what happens.

At present, though, I’m only linking to people who fall into one of several restrictive categories: I know them “outside” the internet, or have worked for them, or collaborated with them, or had a lengthy technical exchange with them either by email or in person at a technical conference.

So if you send me a “Join my network” request and I don’t do anything about it, please don’t be offended – it just means you don’t belong to a fairly restricted group. (On the other hand, since I’ve averaged about 3 days per site at 40 client sites per year for the last 10 years, it’s possible that you’ve been let down by my appalling memory for names.)

Oracle Support-final update to SR

Just had a really pleasant exchange with Oracle support. I was after a way to purge the repository database of an OEM 11.1 Grid Control installation without having to blow it all away. Unfortunately, there is no such option. However, what I liked was this final update from the support engineer:

Generic Note

From sunny Colorado – blue sky and SNOW! – I do wish we could have provided a better option.

But I do want to thank you so much for your kindness and patience. You are the best kind of customer to work with. That means a lot, in these challenging jobs.

Very best,

The whole SR was managed well and competently by Thom, and at no time did he try to buy time by asking for irrelevant log files or the like. I wish more support staff were like him.

The Myth of Oracle Fusion…

I read a post this morning by Grant Ronald talking about Fusion Apps. In Grant’s post he mentioned things that people have been saying about Fusion over the years. Middleware and Apps are not my specialist field, but I get to hear a lot about them at conferences and ACE Director meetings, so I have been witness to the Oracle Fusion myth from the beginning.

Cast your mind back several years and the whole concept of Fusion was launched at OOW. We were told that the middleware stack was going to become a single coherent product, rather than the buggy rag-tag bunch of technologies we had in 9iAS and AS10g. Sounds good so far, but then all the existing stuff got rebranded as Fusion Middleware when the products it was made up of hadn’t significantly changed. That’s confusing.

Fast forward a bit and we were expecting something like real Fusion Middleware to appear, then the BEA buyout was announced and WebLogic became the core of Fusion Middleware. Oh. So this wonderful coherent product that Oracle had been developing and we were expecting soon was swapped for a best-of-breed app server from an acquisition. Strange and a little disconcerting, but at least we have a better app server now, except that some of the existing features still required you to install the old AS10g stuff. Still the name Fusion is plastered everywhere.

Fast forward a bit more and we have got to a point where applying the term “Fusion” to the middleware stack is less insulting, but if anyone experienced Fusion along the way they would probably have been left with a bad feeling about what Fusion actually means. It’s very hard to overcome a bad first impression. Are Oracle really surprised that the term “Fusion” is associated with myth and confusion?

OK. That’s the Middleware. What about Fusion Apps? Well, the name includes the word “Fusion”, so it takes on all the bad connotations associated with the infancy of Fusion Middleware. Added to that, since the original announcement of Fusion Apps there have been numerous acquisitions, all of which have no doubt added to the confusion about what Fusion Apps actually is. Then we are told there is no natural upgrade from eBusiness Suite to Fusion Apps. It’s a new product and we have to migrate data to it as we would any new ERP. Next we are told that the initial release will only be a subset of the modules we require, so we will have to run it alongside eBusiness Suite. Wow. This is really confusing. That sounds like a half-finished ERP built on a half-finished middleware stack. Once again, are Oracle really surprised people react like this?

Now I’m not saying Fusion Middleware is bad. It’s come a long way. I’m also not saying Fusion Apps are bad. I’ve seen the demos and they look amazing. I’ve also talked to people in that field who are genuinely impressed and excited by it. I believe it will be a big eye-opener and possibly a game-changer for a lot of people. What I’m saying is I can totally understand why people on the outside of our little goldfish bowl have a really bad and confused impression of anything containing the term “Fusion”, because it does have a very long and sordid history.

In my opinion the term Fusion needs to be scrapped and replaced, then perhaps we can forget the history and focus on the now. Kinda like they did with Beehive. :)



Redo over multiple weeks

I’ve always wanted some sort of calendar view of load where I could see patterns across the same days of the week and same hours of the day and then be able to pick different periods and diff them:

The first step of such a display would be selecting the data in a way that represents the graphic. A graphic would be harder to produce than a similar, though less powerful, ASCII representation.

So here is a quick sketch of looking at redo generation over time. Right now I have to monitor how well redo can be archived, which depends on how fast it is generated. (Typically what I’m more interested in is general performance, which is what the above graphic addressed.)

The output below shows the redo generated per hour on average (picking the maximum hour in each bucket – the buckets below are 4 hours long) and the maximum generation at the minute level. I may not be able to keep up with redo archiving minute by minute, but I do want to make sure I can keep up with each hour.
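To make the bucketing concrete, here is a throwaway Python sketch (not part of the original scripts, using made-up per-minute figures): for one 4-hour bucket it reports the busiest hour’s average rate and the single busiest minute – the two columns the report distinguishes.

```python
# Hypothetical illustration of the report's bucketing: given per-minute redo MB
# covering one 4-hour bucket, return the busiest hour's average rate and the
# overall busiest minute.

def bucket_redo(minute_mb):
    """minute_mb: list of per-minute MB figures (a multiple of 60 entries)."""
    hours = [minute_mb[i:i + 60] for i in range(0, len(minute_mb), 60)]
    hourly_avg = [sum(h) / len(h) for h in hours]
    return max(hourly_avg), max(minute_mb)

# a quiet 4-hour bucket with one 16 MB burst minute
data = [1.0] * 240
data[130] = 16.0
busiest_hour_avg, busiest_minute = bucket_redo(data)
```

The point of the two figures is visible here: the hourly average (1.25 MB/s) looks tame, while the minute-level peak (16 MB) shows the burst the archiver actually has to absorb.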

DAY    HOUR  MAX(MB)    w1     w2      w3       w4       w5
SAT     0       9               5       4       4       3
SAT     4       12              4       5       4       4
SAT     8       1               0       0       0       0
SAT     12      0               0       0       0       0
SAT     16      1               0       0       0       0
SAT     20      4               1       0       0       1
SUN     0       10              3       3       1       3
SUN     4       13              6       5       3       5
SUN     8       7               6       6       0       6
SUN     12      7               0       3       0       4
SUN     16      1               0       0       1       0
SUN     20      8               3       2       0       3
MON     0       8               3       2       3       2
MON     4       7               2       3       2       1
MON     8       5               1       0       2       0
MON     12      1               0       0       1       0
MON     16      1               0       0       0       0
MON     20      7               2       2       0       2
TUE     0       14              6       5       7       4
TUE     4       7               1       1       1       2
TUE     8       3               0       0       0       0
TUE     12      1       1       0       0       0       0
TUE     16      1       1       0       0       0       0
TUE     20      3       1       1       1       1       1
WED     0       8       3       2       3       2       2
WED     4       7       2       1       3       2       2
WED     8       8       1       0       0       2       3
WED     12      7       1       0       0       1       1
WED     16      1       1       0       0       0       1
WED     20      4       1       1       1       1       1
THU     0       15      7       8       6       6       6
THU     4       8       2       1       1       1       1
THU     8       1       1       0       0       0       1
THU     12      16      1       11      0       0       1
THU     16      1       1       0       0       0       1
THU     20      4       1       1       1       1       1
FRI     0       11      2       2       2       2       2
FRI     4       8       3       1       1       1       1
FRI     8       4       1       0       0       0       0
set heading off
set feedback off
set pagesize 0

with pivot_data as (
   select
          WW pivot_col
        , DY
        , D
        , HH
        , max(maxval) mv
        , max(average) av
   from ( select distinct
                4*trunc(to_char(begin_time,'HH24')/4)  HH,
                to_char(begin_time,'DY')               DY,
                mod(to_char(begin_time,'D'),7)         D,
                mod(to_char(begin_time,'WW'),5)        WW,
                average/(1024*1024)                    average,
                maxval/(1024*1024)                     maxval
          from   dba_hist_sysmetric_summary
          where  dbid=&&DBID
            and  metric_name='Redo Generated Per Sec'
            and  begin_time > sysdate - 31
        )
   group by HH,D,DY,WW
)
select DY ||'	'||
       HH ||'	'||
       round(max(mv),0)  ||'	'||
       round(max(w1),0)  ||'	'||
       round(max(w2),0)  ||'	'||
       round(max(w3),0)  ||'	'||
       round(max(w4),0)  ||'	'||
       round(max(w5),0)
from (
   select *
   from  pivot_data
   -- WW is mod(...,5), i.e. 0-4, so map 0 to w1 ... 4 to w5
   pivot ( avg(av)
           for pivot_col in ( 0 as w1, 1 as w2, 2 as w3, 3 as w4, 4 as w5 ) )
)
group by DY,D,HH
order by D,HH
/

set heading on
set feedback on
set pagesize 30

PS: the PIVOT syntax above is 11g-only; for 10g, here is the query without PIVOT.

set pagesize 100
col DY for A4
col HH for 99
col mx for 99
col w1 for 99
col w2 for 99
col w3 for 99
col w4 for 99
col w5 for 99

with pivot_data as (
   select
          WW
        , DY
        , D
        , HH
        , max(maxval) mv
        , max(average) av
   from ( select distinct
                4*trunc(to_char(begin_time,'HH24')/4)  HH,
                to_char(begin_time,'DY')               DY,
                mod(to_char(begin_time,'D'),7)         D,
                mod(to_char(begin_time,'WW'),5)        WW,
                average/(1024*1024)                    average,
                maxval/(1024*1024)                     maxval
          from   dba_hist_sysmetric_summary
          where  dbid=&&DBID
            and  metric_name='Redo Generated Per Sec'
            and  begin_time > sysdate - 31
        )
   group by HH,D,DY,WW
)
select DY, HH,
      round(max( mv ),0) mx,
      round(max( decode( WW, 0 , av, null ) ),0) w1,
      round(max( decode( WW, 1 , av, null ) ),0) w2,
      round(max( decode( WW, 2 , av, null ) ),0) w3,
      round(max( decode( WW, 3 , av, null ) ),0) w4,
      round(max( decode( WW, 4 , av, null ) ),0) w5
   from  pivot_data
group by DY,D,HH
order by D,HH
/

Exadata Book 2

I have been getting quite a few questions about our upcoming Exadata Book lately so I thought I would post a quick update. We are working feverishly on it so please give us a break!

Just kidding!

I am actually feeling like we can see the light at the end of the tunnel now. We are well past the halfway mark and I am feeling confident about the content. Well, more than confident actually. I think it’s going to be awesome! In large part that’s due to the fact that I feel like we have the Dream Team working on the project. Tanel Poder has signed on as a co-author. Kevin Closson is the Official Technical Reviewer (and we’re tentatively planning on including a number of his comments in the book – in little “Kevin Says” sidebars). As Kevin was one of the main architects of Exadata, this should provide some interesting perspective. Arup Nanda has volunteered as an unofficial technical reviewer as well. I have to say that Arup has been a great help, and I really appreciate him providing another perspective on what we’re writing about. All three of these guys are fellow Oak Table brethren, by the way. Randy Johnson is the other co-author, and although he generally prefers to keep a low profile, he is extremely knowledgeable about things that the rest of us don’t deal with that much on a day-to-day basis, namely backup and recovery and storage configuration. He has a great RAC and ASM background as well. I also have to say that a guy none of you has ever heard of (Andy Colvin) has been a huge help. He is our in-house Exadata patching guru. Without him I’m not sure we would have been able to do the necessary testing to complete the book.

I must say that I feel honored to be involved in a project with such an accomplished group of guys. And by the way, we have had numerous offers to help with various aspects of the project from people that I have a lot of respect for. I want to thank all of you for those offers, even if we haven’t taken you up on all of them (our little brains can only absorb so much feedback at any one time). The book is already available for pre-order on Amazon (so someone must think we are actually going to finish it pretty soon). I think we’re right on track for a late-spring delivery. :-)

Philosophy – 13

If you see a comment like “X is a bad idea” this does not mean “some mechanism that is vaguely ‘not X’ is a good idea”.

If, for example, I say:

    “Histograms will not work well on character strings that are more than 32 bytes long and generally similar in the first 32 bytes”

that is absolutely not the same as saying

    “It’s a good idea to create histograms on character strings that are less than 32 bytes long.”

If this were a purely mathematical world we could invoke symbolic logic and point out:

(A => B) <=> (¬B => ¬A)

which means my statement is equivalent to:

    “if you have a histogram that is working well then the data is not character strings of more than 32 bytes with generally similar values in the first 32 bytes”
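For the sceptical, the equivalence can be checked mechanically; this throwaway Python snippet (an illustration, not part of the original argument) enumerates all four truth assignments:

```python
# Brute-force check that (A => B) is equivalent to (not B => not A)
# over every combination of truth values.

from itertools import product

def implies(p, q):
    # material implication: A => B is false only when A is true and B is false
    return (not p) or q

equivalent = all(
    implies(a, b) == implies(not b, not a)
    for a, b in product([False, True], repeat=2)
)
```

Since `equivalent` comes out true for all four cases, the original statement and its contrapositive really do say the same thing.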

Of course, being Oracle, you may find that someone, somewhere, has exactly such a histogram that appears to work brilliantly for them – but that will be because the optimizer has messed up the arithmetic so much that they are getting a great execution plan for completely the wrong reason … so they need to watch out for the next upgrade or patch release in case the optimizer gets enhanced.

[The Philosophy Series]


I’m starting this post as an ongoing discussion of research.
Today I received an AWR export and wanted to get a feel for the load on the system. I imported the AWR data and then wanted a quick overview, so I put together a script to give the AAS over time, divided up as CPU, WAIT and IO. A current drawback is that the script uses SYSDATE, which might not be appropriate if the AWR data is from many days ago.
The query seems a bit slow on a month of data, so I will be looking at optimizations in the future.
+ = load demanding CPU
o = load waiting for IO to complete
- = load waiting for something like a lock, latch or other resource
The “8” in the middle of the line is the number of CPU cores. A system that was 100% maxed out and not waiting for IO or other resources would have pluses (“+”) right up to the number of cores, which is 8 in this case. If the pluses went over the number of cores then there would be contention for CPU resources.
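As a rough illustration of how one line of the graph is assembled (in Python rather than the SQL below, with made-up figures), the bar is built from the three components, padded out to the cores line, with the core count printed at that position and any overflow drawn after it:

```python
# Sketch of one AAS graph line: '+' = CPU, 'o' = IO, '-' = other waits.
# bars = characters per 1 AAS; the core count is printed at cores*bars.

def aas_bar(cpu, io, wait, cores, bars=5, width=80):
    raw = ('+' * round(cpu * bars) +
           'o' * round(io * bars) +
           '-' * round(wait * bars)).ljust(cores * bars)
    below = raw[:cores * bars]              # part below the cores line
    above = raw[cores * bars:width - bars]  # overflow past the cores line
    return below + str(cores) + above

line = aas_bar(cpu=1.8, io=1.8, wait=0.1, cores=8)
```

With these numbers the load sits well under the cores line, so the bar is 9 pluses and 9 o’s followed by padding and the “8” marker; a busier hour would push characters past the marker, exactly the contention signal described above.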

TM                  AAS    CPU     IO   WAIT GRAPH
---------------- ------ ------ ------ ------ --------------------------------------------------------------------------------
03-FEB  04:00:00    3.6    1.8    1.8     .1 +++++++++ooooooooo                      8
03-FEB  05:00:00    4.1    1.9    2.1     .1 ++++++++++oooooooooo-                   8
03-FEB  06:00:00    5.1    2.8    1.9     .3 ++++++++++++++oooooooooo--              8
03-FEB  07:00:00    5.2    2.6    2.2     .3 +++++++++++++ooooooooooo--              8
03-FEB  08:00:00    5.9    3.0    2.6     .2 +++++++++++++++ooooooooooooo-           8
03-FEB  09:00:00    6.3    3.2    2.8     .3 ++++++++++++++++oooooooooooooo--        8
03-FEB  10:00:00    6.3    3.0    2.9     .3 +++++++++++++++ooooooooooooooo--        8
03-FEB  11:00:00    5.5    2.9    2.5     .2 ++++++++++++++oooooooooooo-             8
03-FEB  12:00:00    5.3    2.6    2.5     .2 +++++++++++++ooooooooooooo-             8
03-FEB  13:00:00    5.4    2.7    2.4     .3 +++++++++++++oooooooooooo-              8
03-FEB  14:00:00    5.5    2.8    2.4     .3 ++++++++++++++oooooooooooo-             8
03-FEB  15:00:00    6.1    2.8    3.1     .2 ++++++++++++++ooooooooooooooo-          8
03-FEB  16:00:00    6.3    3.1    3.0     .2 +++++++++++++++ooooooooooooooo-         8
03-FEB  17:00:00    5.6    2.7    2.6     .2 ++++++++++++++ooooooooooooo-            8
03-FEB  18:00:00    3.4    2.1    1.1     .1 +++++++++++oooooo-                      8
03-FEB  19:00:00    4.7    2.5    1.6     .6 +++++++++++++oooooooo---                8
03-FEB  20:00:00    4.7    2.8    1.8     .1 ++++++++++++++ooooooooo                 8
03-FEB  21:00:00    7.3    3.6    3.3     .4 ++++++++++++++++++ooooooooooooooooo--   8
03-FEB  22:00:00   19.2    6.4   11.5    1.2 ++++++++++++++++++++++++++++++++oooooooo8ooooooooooooooooooooooooooooooooooooooo
03-FEB  23:00:00   12.7    5.2    7.1     .5 ++++++++++++++++++++++++++oooooooooooooo8oooooooooooooooooooooo--
04-FEB  00:00:00   11.1    4.5    6.3     .3 ++++++++++++++++++++++oooooooooooooooooo8oooooooooooooo--
04-FEB  01:00:00   12.8    4.3    5.9    2.6 ++++++++++++++++++++++oooooooooooooooooo8oooooooooooo-------------
04-FEB  02:00:00    4.2    2.5    1.6     .1 ++++++++++++oooooooo                    8
04-FEB  03:00:00    2.1    1.3     .7     .1 +++++++ooo                              8
04-FEB  04:00:00    2.1    1.3     .9     .0 ++++++oooo                              8
04-FEB  05:00:00    2.3    1.2    1.0     .1 ++++++ooooo                             8
04-FEB  06:00:00    3.7    2.5    1.0     .2 +++++++++++++ooooo-                     8

Def v_days=10   -- amount of days to cover in report
Def v_secs=3600 -- size of bucket in seconds, ie one row represents avg over this interval
Def v_bars=5    -- size of one AAS in characters wide
Def v_graph=80  -- width of graph in characters

undef DBID
col graph format a&v_graph
col aas format 999.9
col total format 99999
col npts format 99999
col wait for 999.9
col cpu for 999.9
col io for 999.9

/* AAS graph from dba_hist_active_sess_history */

select
        to_char(to_date(tday||' '||tmod*&v_secs,'YYMMDD SSSSS'),'DD-MON  HH24:MI:SS') tm,
        --samples npts,
        round(total/&v_secs,1) aas,
        round(cpu/&v_secs,1) cpu,
        round(io/&v_secs,1) io,
        round(waits/&v_secs,1) wait,
        -- draw the graph and trunc it at the # of CPU cores line ...
        substr(
             rpad('+',round((cpu*&v_bars)/&v_secs),'+') ||
             rpad('o',round((io*&v_bars)/&v_secs),'o')  ||
             rpad('-',round((waits*&v_bars)/&v_secs),'-')  ||
             rpad(' ',p.value * &v_bars,' '),0,(p.value * &v_bars)) ||
        -- ... print the # of cores at that position ...
        p.value  ||
        -- ... then draw the whole graph again and cut off the part below the cores line
        substr(
             rpad('+',round((cpu*&v_bars)/&v_secs),'+') ||
             rpad('o',round((io*&v_bars)/&v_secs),'o')  ||
             rpad('-',round((waits*&v_bars)/&v_secs),'-')  ||
             rpad(' ',p.value * &v_bars,' '),(p.value * &v_bars),( &v_graph-&v_bars*p.value) ) graph
from (
   select
       to_char(sample_time,'YYMMDD')                   tday
     , trunc(to_char(sample_time,'SSSSS')/&v_secs)     tmod
     , (max(sample_id) - min(sample_id) + 1 )          samples
     , sum(decode(session_state,'ON CPU',10,decode(session_type,'BACKGROUND',0,10)))  total
     , sum(decode(session_state,'ON CPU' ,10,0))       cpu
       /* for waits subtract out BACKGROUND sessions and the IO events,
          but for CPU count everyone */
     , sum(decode(session_state,'WAITING',10,0)) -
       sum(decode(session_type,'BACKGROUND',decode(session_state,'WAITING',10,0)))    -
       sum(decode(event,'db file sequential read',10,
                        'db file scattered read',10,
                        'db file parallel read',10,
                        'direct path read',10,
                        'direct path read temp',10,
                        'direct path write',10,
                        'direct path write temp',10, 0)) waits
     , sum(decode(session_type,'FOREGROUND',
                  decode(event,'db file sequential read',10,
                               'db file scattered read',10,
                               'db file parallel read',10,
                               'direct path read',10,
                               'direct path read temp',10,
                               'direct path write',10,
                               'direct path write temp',10, 0))) io
   from   dba_hist_active_sess_history
   where  sample_time > sysdate - &v_days
     and  dbid=&&DBID
     and  sample_time < (select min(sample_time) from v$active_session_history)
   group by to_char(sample_time,'YYMMDD'),
            trunc(to_char(sample_time,'SSSSS')/&v_secs)
) ash,
  ( select value from dba_hist_parameter
    where DBID=&&DBID and parameter_name='cpu_count' and rownum < 2 ) p
order by to_date(tday||' '||tmod*&v_secs,'YYMMDD SSSSS')
/

About Database Security...

I'll be doing a webcast on Thursday Feb. 24th at 9:30am (PST) regarding "How Secure is your Enterprise Data?". It'll be just about 30 minutes long.

I won't be the only speaker - I'm on second at 9:30am (PST). Anyone can "attend" (since it is virtual after all).
Hope to "see" you there...

Using a Windows Authenticated Proxy Server with EM11g

Over the years I have had a number of issues with proxy configuration in Enterprise Manager Grid Control. The product is definitely improving in this regard; however, EM 11g still does not correctly integrate with enterprise proxy servers that automatically authenticate users using NTLM – a fairly common configuration given the ubiquity of Active Directory in the marketplace [...]


A quick and temporary note about a couple of links that I’ve just found:


I’ve just popped this note to the top of the stack again to mention a useful article from Craig Shallahamer on “Important Statistical Distributions”. Every DBA and developer should read it – as an introduction to the patterns you find in real-world data it’s short and sweet; it’s also very important. I often see test data which is “the wrong kind of random”: nearly everyone thinks a uniform distribution is okay for testing their systems when lots of data sets exhibit “normal”, “log-normal” or “Poisson” distributions – and it makes a huge difference to the validity of performance tests if you get it wrong.

Update 2

Richard Feynman is one of my “heroes” – so I had to pop this post to the top of the stack when I found this online archive video of him talking about Quantum Electrodynamics at the Douglas Robb Memorial Lectures at the University of Auckland some years ago.