Here’s a little trick I’ve only just discovered (by accident). What’s a tidy way to extract the fractional part from a number ?
Running (I haven’t checked earlier versions yet – or looked at the manuals for the definition of the mod() function):

SQL> select mod(1.34,1) from dual;

MOD(1.34,1)
-----------
        .34

1 row selected.

SQL> select mod(-1.34,1) from dual;

MOD(-1.34,1)
------------
       -.34

1 row selected.
I really didn’t think it would make sense to use an expression like mod(p,1) – but it’s a pleasant surprise that it does what it does.
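For the record, a quick sketch (not from the original post) showing that mod(n,1) preserves the sign of its argument, and gives the same answer as the more obvious n - trunc(n):

```sql
-- mod(n,1) returns the fractional part, keeping the sign of n;
-- n - trunc(n) is the equivalent "long-hand" expression.
select mod( 1.34, 1)           as frac_pos,      -- .34
       mod(-1.34, 1)           as frac_neg,      -- -.34
        1.34 - trunc( 1.34)    as frac_pos_alt,  -- .34
       -1.34 - trunc(-1.34)    as frac_neg_alt   -- -.34
from   dual;
```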

And now for something completely different

The always excellent Cricinfo is in the process of bringing to an end its All Time XI series of articles by selecting an All Time World XI from the world teams previously selected. You can read all about it here. You can also choose your own – if you are at all interested in cricket [...]

Locked Out

I know many of you enjoy my stories about mistakes I’ve made and how I’ve solved them, so here’s another log for the fire…

I’ll start by talking about an event that occurred about a month ago that made me a bit concerned at the time, but not enough for me to take action on it.  Little did I know…

Recently during a development cycle, there was a request to “refresh” the Integrated Test database using the Development database as a source – this is the database after Development in the lifecycle, but before Quality Assurance.  Normally I prefer not to “refresh” a database that’s further upstream in the cycle – I prefer to refresh backwards from Production, but actively deploy incremental changes forwards from Development.  However, since at this location there isn’t a very robust database SCM process, they don’t have a good record of each change in development as it occurs – and unfortunately they don’t want to turn on DDL auditing as it might interfere with their security and auditing processes.  Since they use ETL to populate the data, the decision was made to simply blow away the Integrated Test schema and re-create it using Export / Import (the Data Pump versions) – most likely just the DDL (METADATA_ONLY).

I didn’t like it – I wanted to generate a change script and apply the changes.  However, I didn’t have the time or energy to argue it.

That was about 1 month ago.

Now, in the Integrated Test database we’ve been running weekly 1-hour load tests – the load test lead is competent – which has provided us with some nice baselines and comparative reports.  Since I was out of town last week at OOW, I sat down today to look at last week’s test.

It was a mess – database servers at 100% CPU throughout the test.  Weird queries showing up in the ADDM report.  Emails talking about bouncing the database to “clear it”.

What the heck happened?

I took one of the queries reported by ADDM and started tearing it apart and was flabbergasted at what the optimizer was telling me about the execution plan.  Merge Joins, Cartesian Products, and a Cost that looked like the US Federal Deficit after 2 years of liberal spending.

That’s when the little voice in my head spoke up.  “Remember 2 weeks ago when we were looking at statistics and one of the developers said that the stats appeared out of date?”  “Remember how you said it wasn’t a problem because the auto stats gather job runs every day and brought them up-to-date, that the table in question probably didn’t have any material changes to it?”  “Maybe you should re-examine your assumptions, buddy!”

Ok, ok.

Double-check the tables using ALL_TABLES and notice that several of the tables haven’t been analyzed in a while (over 2 weeks).  That in and of itself doesn’t appear bad – maybe they haven’t changed.  What was the name of the “view” that showed how much tables have changed since the last analyze?  Oh yeah, DBA_TAB_MODIFICATIONS – I remember when we had to set MONITORING manually and would check that view to see when we should kick off a manual statistics gathering job.  So, check DBA_TAB_MODIFICATIONS – and see that lots of tables have tons of inserts, updates and deletes showing – that’s not right – especially if the auto stats gather job is running nightly.
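A sketch of the kind of check described above (the schema name is a placeholder; flushing the monitoring info first makes sure the counts are current):

```sql
-- Push in-memory DML monitoring counts out to the dictionary first.
exec dbms_stats.flush_database_monitoring_info;

-- Which tables have changed the most since their last analyze?
select m.table_name, m.inserts, m.updates, m.deletes,
       t.last_analyzed
from   dba_tab_modifications m
       join dba_tables t
         on  t.owner      = m.table_owner
         and t.table_name = m.table_name
where  m.table_owner = 'INTTEST'   -- placeholder schema name
order  by m.inserts + m.updates + m.deletes desc;
```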

So, let’s check on the auto stats gather job – maybe it’s not running, maybe it’s got an error, maybe something else…

Where is that job?  Oh yeah, the new DBMS_SCHEDULER.  Now I’m looking through the DBA_SCHEDULER_% views and I don’t see it.  Did the DBAs drop the job?  Disable it?  Why on earth would they do that?  No, that doesn’t make sense – no one asked for them to do that, and I’m sure they wouldn’t do it on their own.

Ah, the auto stats gather job isn’t a SCHEDULEd job, it’s (wait for it) an AUTOTASK (we’re on 11gR2).  (I’m sure I could have saved time by looking at Metalink Note 779596.1 – How Can We Find Out Status of Task ‘Auto Optimizer Stats Collection’ – but what’s the fun in that?)

So, we check and it’s enabled.  We check and it’s been run (DBA_AUTOTASK_CLIENT_HISTORY and DBA_AUTOTASK_JOB_HISTORY) – heck, it ran last night!  Oddly it ran in only 4 seconds – with a smiling “SUCCEEDED” next to it.  How could the job succeed and not update the stats?
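A sketch of the checks described above, using the standard 11g autotask views (exact column lists may vary slightly by patch level):

```sql
-- Is the auto stats task enabled at all?
select client_name, status
from   dba_autotask_client
where  client_name = 'auto optimizer stats collection';

-- When did it last run, how long did it take, and did it succeed?
select job_start_time, job_duration, job_status
from   dba_autotask_job_history
where  client_name = 'auto optimizer stats collection'
order  by job_start_time desc;
```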

Alright – something else must be wrong – I’m not convinced that the database is lying or has a bug – there must be something else.  Maybe something about the tables?  I remember that back in the day if a table didn’t have any stats on it in the first place, that updating them would fail – which is why I used to always do a quick analyze on tables immediately after creating them.  That doesn’t appear to be the case here, but maybe something else about the tables…

Hmmmm…  Where to check?  How about Enterprise Manager?  There must be something there.

Ah, Manage Optimizer Statistics.  Hmm…  Object Status link – search for the schema, and run the “report”.

All the tables show their statistics as Stale, and Locked.


How the heck did they get Locked?

Again, I don’t think the DBAs went and locked them – no request was submitted to do so.  And I know they wouldn’t have locked them on their own initiative.  So they must have been locked all along.

How do table statistics get locked on creation?

Maybe the Import Data Pump locked them?  Maybe a default parameter during the import?

Search google for impdp / imp lock statistics… and there it was:


CONTENT

Default: ALL


Enables you to filter what Export unloads: data only, metadata only, or both.

Syntax and Description

  • ALL unloads both data and metadata. This is the default.

  • DATA_ONLY unloads only table row data; no database object definitions are unloaded.

  • METADATA_ONLY unloads only database object definitions; no table row data is unloaded. Be aware that if you specify CONTENT=METADATA_ONLY, then when the dump file is subsequently imported, any index or table statistics imported from the dump file will be locked after the import.
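For anyone hitting the same issue, a hedged sketch of how to confirm and clear the locked statistics (the schema name is a placeholder):

```sql
-- STATTYPE_LOCKED is non-null ('ALL', 'DATA' or 'CACHE') when stats are locked.
select table_name, stattype_locked, last_analyzed
from   dba_tab_statistics
where  owner = 'INTTEST'            -- placeholder schema name
and    stattype_locked is not null;

-- Unlock the whole schema, then regather so the optimizer has fresh stats.
exec dbms_stats.unlock_schema_stats('INTTEST');
exec dbms_stats.gather_schema_stats('INTTEST');
```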


#OOW10 Redux

A few days after my first visit to Oracle OpenWorld in over 15 years, I’ve taken some time to digest the experience and wanted to summarize my thoughts and “analysis”.  Attending with my wife, who still works for Oracle and is a director associated with Exadata POCs, was also a fun time – between her contacts within Oracle and my “contacts” in the outside “Oracle underground” (as my wife refers to things like the OakTable network) I think we were able to get a nice, full picture of the conference.

I thought there were many interesting “themes” at the conference, from the official ones like Exalogic and the Fusion Apps to unofficial ones like how Exadata has been changing the existing dynamic around databases in the enterprise.

Exalogic was an enigma at the conference – as the opening keynote announcement, repeated at the closing keynote, one would have thought that there would have been a lot of buzz about the box.  Instead, I saw a fair amount of confusion as people debated the meaning of “cloud” vs. “box” vs. “appliance” vs. “platform”.  Information about the configuration was scarce, and many people at the conference seemed to ignore it completely.  From initial descriptions (Ellison’s claim, for example, that 2 of them could handle all of Facebook’s web traffic) it appears that Oracle has built a high-performance, high-density Java platform that, coupled with several Exadata database platforms, could easily handle most, if not all, of an enterprise’s application and data hosting needs.  It remains to be seen if organizations could actually run their entire application portfolio on a set of Exa-platforms – and non-Oracle software “support” seemed to be “lacking” on the Exalogic.  (I visited the Informatica booth to ask them if PowerCenter could run on the Exalogic – their response, “What’s an Exalogic?”, was telling…)

I opined at the conference that Ellison was trying to replace the generic hosted “LAMP” stack with an “LJE” stack: Linux, Java, Exa.  And in the main, I think it’s a good idea – the ability to provide a private shared hosting environment within an enterprise is attractive.  (Full disclosure – I used to work for Network Solutions, so I have a fair appreciation of hosting environments.)  It was interesting to see the response to my “tweet” on the subject, as several people shot back how an LJE stack is so much more expensive than a LAMP stack.  And for small applications, they’re right.  However, contrast the size of the Facebook LAMP stack with how Ellison described a hypothetical Facebook LJE stack (2 Exalogic platforms, several Exadata platforms) and I’d guess that the price difference would be less than most people expect – not to mention a lot more energy and space efficient.  As an example of the hosting paradigm, I enjoyed a presentation by Nicholas Tan (Commonwealth Bank of Australia) and Roland Slee (Oracle) in which they combined 300 databases into 1 database and started providing internal database hosting with dramatic cost and time savings.

In any event, the rest of the conference seemed to be more focused on Exadata, as more and more customers are coming to grips with it.  Unfortunately, I noticed several “Exadata” presentations that had little Exadata-specific content – as Cary Millsap tweeted, it appeared to be a magic word that got presentations onto the agenda.  The 2 new Exadata boxes were nice technology refreshes, and Alex Gorbachev has a good side-by-side comparison of them.  On another note, I wondered if Oracle was introducing Exadata models “too quickly” – I heard rumblings from attendees with v1 HP-based Exadata boxes about how they’re being left behind a bit.

As my first conference trying to fully utilize the power of Twitter (I know, I’m late to the party), I was happy with how it semi-allowed me to “attend” several sessions at once vicariously through other people.  Greg Rahn’s and Gwen Shapira’s tweets kept me in the loop around other sessions all day.  In particular, I was happy to be following the analysts during their “private” sessions with Oracle – Merv Adrian and James Kobielus were very insightful and “non-snarky”.  James’ tweets around the Exadata Intelligent Data Warehouse (whereby SAS PMML models could be run directly on Exadata platforms) were especially interesting to me.

In the end, I started to see somewhat of a sea change occurring: more of an emphasis on applying the power of Exadata to problems rather than advice on how to technically configure them.  And I think Ellison agrees with that – one of his statements was that customers shouldn’t be defining themselves on the basis of how unique their configurations are, but rather on what they are doing with them.  Of course, that kind of statement starts to imply a reduction in configuration choices – run whatever you want as long as it’s on an Exadata or is running Oracle Linux or Oracle Solaris (x86).  (I went to a session on Database Flash Cache in which Oracle all but admitted that the feature was limited to operating systems under Oracle’s “control”.)  And Ellison’s pointed comments about Red Hat Linux didn’t go unnoticed by the crowd either.  In any event, choice in the hardware space has been narrowing for some time, as non-Intel and non-IBM architectures continue to decline in the market (with the exception of the ARM-based “systems”).  (BTW, speculation around Oracle and ARM has been entertaining.)  Ellison’s rather clear desire to provide pre-built solution platforms is also a subtle comparison to IBM and its large services arm – it will be fascinating to watch how this plays out in the market.

This narrowing of choice is continuing into the DBA space, as I believe the range of ways a production DBA can affect the performance of applications continues to diminish – and not in a bad way, but rather in the way that the database defaults are getting better and better and reduce the configuration options that DBAs need to manage.  From Greg Rahn’s comments about how Exadata defaults seem to handle most reasonable workloads without difficulty, to Alex Gorbachev’s recommendation to use OMF, to Tom Kyte’s presentation on how smart the optimizer is getting (how it eliminates tables from queries when it determines that they are not necessary), it’s becoming clear to me that the real ability to affect performance is shifting from the production DBA to what I term database engineering and/or development.

Watch Cary Millsap’s interview with Justin Kestelyn and you’ll see what I mean – I think it’s imperative that DBAs who are interested in and talented at performance work become more involved in software development, either by becoming dedicated development DBAs or by forging ahead into database engineering.  I had a good, quick talk with Gwen Shapira about this at the bloggers’ meetup.  And I was also struck by information from the Real World Performance team sessions in which they showed how application architectural choices affected design – Greg Rahn spoke about how necessary it is to start thinking in terms of set processing in order to harness the power of multiple compute and storage nodes; and in an understated demonstration, Andrew Holdsworth showed how fewer connections to the database can result in more scalability.  These all speak to development and application choices in my mind.

Finally, I had a good time trying to attend sessions at the Hilton for the Oracle Develop conference.  Cary and I talked about the wisdom of developing plug-ins for SQL Developer vs. Eclipse, and I was pleasantly surprised to see Quest announce their Toad Plug-In for Eclipse at the conference.  With the demise of SQL*Plus for Windows and the need to integrate more database development tools into the developer experience, these discussions (not to mention the reduction in price of SQL Developer’s Data Modeling tool to free!) really hit home for me – now if only we could get better SCM integration – perhaps Eclipse with the Quest Plug-in, Method-R MR Trace Plug-in and a Mercurial Plug-in will do it for me…

(I like Mercurial because it emphasizes changes and change-sets – which I think is a better match to database refactoring than Subversion – but that’s a topic I’m still researching).

Frequency Histogram 3

Here’s the output I get from querying dba_tab_histograms for a column that has been given a frequency histogram by a call to dbms_stats.gather_table_stats().

ENDPOINT_NUMBER ENDPOINT_VALUE
--------------- --------------
              2             -1
           3190              0
           3813              1
           4310              4
           4480              5
           7507            999
          17210           2000
          17212           2002
          17213           2004
          17215           2006
          17729           2008
          17750           2009
          17752           2010
          17904           2011
          17906           2012
          17909           2017
          17994           5000
          18006           5001
          18009           5002
          18023           5003
          18062           6001
          39885          11000

In an earlier blog I wrote about a query that turned the figures from a frequency histogram into a list of column values and column frequencies, and if I had used that query against the histogram data I would have found that the value 11,000 appears 21,827 times – according to the histogram.

But there’s a problem with this histogram: the value 11,000 actually appeared roughly 2 million times in the real data set – and when I enabled autotrace on the query that I had used to count the number of times that 11,000 appeared, the optimizer’s prediction (the cardinality in the execution plan) matched the actual value fairly closely. So how did the optimizer manage to get from the histogram to the correct cardinality?
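The query from the earlier post isn’t reproduced here, but the underlying technique is straightforward: in a frequency histogram ENDPOINT_NUMBER is a running total, so the frequency of each value is the difference between consecutive endpoint numbers. A sketch (owner, table and column names are placeholders):

```sql
select endpoint_value                                   as column_value,
       endpoint_number
         - lag(endpoint_number, 1, 0)
             over (order by endpoint_number)            as frequency
from   dba_tab_histograms
where  owner       = 'MY_SCHEMA'    -- placeholder
and    table_name  = 'MY_TABLE'     -- placeholder
and    column_name = 'MY_COLUMN'    -- placeholder
order  by endpoint_number;
```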

Oracle Closed World and Unconference Presentations

There are so many things to blog about from these past few days – mainly the cool stuff around OCW and OOW, the sessions I attended (OCW, unconference, OOW), plus the interesting people I’ve met in various areas of expertise. So I’ll be posting some highlights (and a lot of photos) in the next posts.

Last Monday (Sept. 20) I was able to present at the Oracle Closed World @ Thirsty Bear. The full agenda is here

Oracle Closed World Presentations Downloads

Presentations from Oracle Closed World will be posted to

Links to scripts from Tanel Poder’s presentations at

Some photos from OCW 2010

Thanks to all for an awesome Oracle Closed World, and to the OakTable, Miracle Consulting and Embarcadero for sponsoring this event.

PS: stay tuned for the 10 important database ideas black paper that we will be putting together from Oracle Closed World (if you would like to contribute, please email me).
Kerry, Karl, Kyle
Link to a few more OCW 2010 photos from Karl :


WordPress allows me to see statistics about “referrers” – the places that people came from before getting to my blog. This can be interesting, and potentially useful, information.

From time to time the referrer is the “WordPress Dashboard” – which I think means that my blog has appeared on the WordPress front page as one of the more popular blogs. That’s a rather nice feeling. In fact some recent stats on WordPress say that it gets a little over 2 billion page views per month across 11.4 million blogs. Since my blog currently gets about 50,000 page views per month I’m responsible for about 1/40,000 of the total page views … which is well above average.

There is a downside, though, to appearing on the WordPress dashboard – there seems to be an interesting correlation between the days I hit the WordPress dashboard and the days when I see spam comments climb to high values; fortunately Akismet manages to filter out almost all the spam, but I wonder if there are packages on the market that keep checking for popular blogs then automatically start generating spam comments on them.

But let’s move on to a more worrying referral: a couple of days ago the thing that appeared top of the list was a domain name starting with “freesexmovie”. I didn’t try clicking on it to discover what was at the far end – I took a guess, and I’m always a little paranoid about picking up viruses (of the computer kind) from strange web sites. I struggled, though, to find the connection between Oracle and something being free.

Is this another case of clever software finding popular blogs and creating local references to them, perhaps copying unusual words or phrases, so that search engines will lead unsuspecting readers to the wrong sites ?

There’s probably a suitable “bon mot” that I could quote for both these referral observations – but I can’t think of one at present.

OOW 2010 – the highlights

I started my week with a blast by sky diving at Parachute Center – Sacramento, it was an awesome experience!

Then we went to Redwood Shores to check out the Oracle office

OpenWorld 2010: ACE Director’s Meeting – Day 2

Day 2 was full of stuff I can’t speak about again, so I’ll keep this brief. The highlight for me was Wim Coekaerts speaking about Linux and Virtualization. I love listening to Wim speak about technology. It feels like you are chatting with your mate about technology stuff. He’s a real low-temperature guy. There were a few people there who were hearing Wim speak for the first time who are now converts. Perhaps I should start a fanboy club… :)

I intended to go to bed early, but ended up chatting until after midnight.

The following day (Saturday) was an off day. I had originally intended to do some stuff with a couple of the guys, but was so tired I gassed out. I woke up in the morning, got some food and went back to bed. At about 18:00 I ventured out for some food again, then went straight back to bed. Amazingly, I managed to sleep all night as well.