Top 60 Oracle Blogs

November 2013


The APPEND_VALUES hint was introduced in Oracle 11.2 to allow direct-path inserts with bind variables using the VALUES clause, i.e.
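A minimal sketch of the pattern (the table t1 and its id column are hypothetical): populate a PL/SQL collection, then direct-path insert the whole array with FORALL and the APPEND_VALUES hint.

```sql
DECLARE
  TYPE t1_tab IS TABLE OF t1%ROWTYPE;
  l_tab t1_tab := t1_tab();
BEGIN
  -- Populate the bind array (here with generated values)
  FOR i IN 1 .. 1000 LOOP
    l_tab.EXTEND;
    l_tab(i).id := i;
  END LOOP;

  -- Direct-path insert of the whole array in one statement
  FORALL i IN 1 .. l_tab.COUNT
    INSERT /*+ APPEND_VALUES */ INTO t1 VALUES l_tab(i);

  -- The loaded segment is not readable in this session until commit
  COMMIT;
END;
/
```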


The feature was designed to allow bulk inserts of arrays of hundreds or thousands of records in a single INSERT statement. Prior to 11.2, there was no documented way to do a direct-path insert other than with the APPEND hint, which only works on INSERT ... SELECT statements, i.e.
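For comparison, the pre-11.2 form looks like this (again with hypothetical tables t1 and t2):

```sql
-- APPEND only takes effect on INSERT ... SELECT, not on INSERT ... VALUES
INSERT /*+ APPEND */ INTO t1
SELECT * FROM t2;

COMMIT;
```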


Sangam 13 Presentations and Scripts

Thank you to all those who attended my sessions at Sangam13 -- the annual conference of the All India Oracle User Group in Hyderabad. I saw many who attended all seven sessions of mine, including the super hot (literally) one for Big Data in a small room. An audience like this makes the day for any speaker; I am no exception. Your support was very much appreciated.

Here are the presentations and all the scripts for download. All presentations are in PDF format and all scripts are in a zipped file.

Thor: The Dark World

I’ve just got back from watching Thor: The Dark World.

Unlike lots of people, I actually liked the first Thor film. I also liked the character in The Avengers, so I went into the movie with high hopes. Some parts of this film are totally awesome. Some parts are a little bit boring. Some parts are like a low-budget Doctor Who episode.

The dark elves were really cool, a bit spooky and dead tough when they were on every other planet, but put them in London and they just run around looking like dodgy extras in a Power Rangers episode. A few minutes earlier they were shooting everyone with super laser stick things and killing Gods. Now all they can do is chase people? Seriously?

The Inferior Subset

Why Subsets qualify as an inferior good

Why are you sub-setting your data? Even with the cost of spinning disk falling by half every 18 months or so, and the cost and power of flash rapidly catching up, several large customers I’ve encountered in the last three years are investing in large scale or pervasive programs to force their non-prod environments to subset data as a way to save storage space.

However, subsetting also carries several trade-offs and potential issues, including:

Data glut is the problem. Data agility is the solution

Data glut feels like


photo by Christophe Pfeilstücker

Data Agility feels like


Row Movement

Here’s a question that appeared recently on OTN, and it’s one I’ve wondered about a few times – but never spent any time investigating. Are there any overheads to enabling row movement on a table? If not, why is it not enabled by default and eliminated as an option?

Obviously there are costs when a row moves – it will be deleted and re-inserted with all relevant index entries adjusted accordingly – but is there an inherent overhead even if you do nothing to move a single row?

Equally obviously, you’ve made it possible for someone to “accidentally” shrink the table, causing short-term locking problems and longer-term performance problems; similarly it becomes possible to update rows in partitioned tables in a way that causes them to move; but “someone might do it wrong” doesn’t really work as an argument for “de-featurising” something that need not have been a feature in the first place.
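To make the trade-off concrete, here is a sketch (table and column names hypothetical) of the one-line DDL in question and the operations it then unlocks:

```sql
-- The one-time DDL: the open question is whether this alone has any cost
ALTER TABLE t1 ENABLE ROW MOVEMENT;

-- Once enabled, a segment shrink can physically relocate rows
-- (and could be run "accidentally", invalidating stored rowids):
ALTER TABLE t1 SHRINK SPACE;

-- ...and an update to a partition key can silently move a row
-- to a different partition (delete + re-insert under the covers):
UPDATE t1 SET part_key = 42 WHERE id = 1;
```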

Production Possibility Frontier

By: Woody Evans

“The most powerful thing that an organization can do is to enable development and testing to get environments when they need them”

- Gene Kim, author of The Phoenix Project

App Features vs. Data Work

The power of a technology change, especially a disruptive technology shift, is that it creates opportunities to increase efficiencies. The downside is that companies take a long time to realize that someone has moved their cheese. Data virtualization, i.e. automated thin cloning of databases, VM images, App Stacks, etc., alters the production possibility frontier dramatically, provided customers can get past the belief that their IT is already optimized.

An Ideal Frontier

What’s the true price of data?

By: Woody Evans

Data is Scarce

Data faces a scarcity problem. In particular, we have a data clone scarcity problem. Our need for data clones for backup, analytics, and development/QA environments is virtually unlimited and growing. But we live in a world of limited data. We don’t have enough disk space, bandwidth, or hours in the day to get the data where we want it, or to maintain the value of our data by keeping it fresh. And because we have insufficient resources, we’re forced to make trade-offs.

Diagnosing buffer busy waits with the ash_wait_chains.sql script (v0.2)

In my previous post ( Advanced Oracle Troubleshooting Guide – Part 11: Complex Wait Chain Signature Analysis with ash_wait_chains.sql ) I introduced an experimental script for analysing performance from a “top ASH wait chains” perspective. The early version (0.1) of the script didn’t have the ability to select a specific user (session), SQL or module/action performance data for analysis. I just hadn’t figured out how to write the SQL for this (as the blocking sessions could come from any user). It turns out it was just a matter of using “START WITH ” in the connect by loop. Now parameter 2 of this script (whose activity to measure) actually works.
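As an illustration of the idea (a simplified sketch, not the actual script's SQL; the username filter is hypothetical): a START WITH clause restricts where the walk begins, while the CONNECT BY can still hop to blocking sessions belonging to any user.

```sql
-- Start the chain walk only at one user's waiting sessions, then
-- follow whoever is blocking them within the same ASH sample.
SELECT LEVEL, session_id, blocking_session, event
FROM   v$active_session_history
START WITH session_state = 'WAITING'
       AND user_id = (SELECT user_id FROM dba_users
                      WHERE  username = 'APPUSER')
CONNECT BY session_id = PRIOR blocking_session
       AND sample_id  = PRIOR sample_id;
```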

Download the latest version of: