Top 60 Oracle Blogs

Surprising Error

Here’s an unexpected error that appeared recently while I was doing a test on a database. (The fact that I got an error didn’t surprise me; what surprised me was Oracle’s choice of error for the mistake I’d made.)

Cut-n-Paste from a SQL*Plus session:

SQL> alter system set db_cache_size = 256m scope = memory;
alter system set db_cache_size = 256m scope = memory
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-00001: unique constraint (.) violated

If you’re wondering, I was trying to reduce the db_cache_size from 384MB and the value had been set by the startup parameter file – so the ORA-02097 is probably trying to tell me that I can’t reduce the dynamic value below the initial minimum. (I didn’t check this, I just bounced the database with a change of parameter file, but it’s probably a reasonable guess.)
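If you want to check the guess, the dynamic SGA components expose their current and minimum sizes through a v$ view – a quick sketch of the sort of query I could have run before bouncing the database:

```sql
-- Current, minimum and granule sizes of the resizable SGA components.
-- MIN_SIZE shows the smallest size the component has had since startup,
-- which may be why a resize below it is rejected.
select
        component,
        current_size / 1024 / 1024   as current_mb,
        min_size     / 1024 / 1024   as min_mb,
        granule_size / 1024 / 1024   as granule_mb
from
        v$sga_dynamic_components
where
        component = 'DEFAULT buffer cache'
;
```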


BCT

That’s “Block Change Tracking” if you’re not familiar with the acronym – a feature that appeared in 10g with its own background process (CTWR – the change tracking writer) to help rman do faster incremental backups on very large datafiles.
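For reference, enabling the feature is a one-liner (the tracking file name here is just an illustration):

```sql
-- Enable block change tracking; the file location is an example only.
alter database enable block change tracking
        using file '/u01/app/oracle/oradata/orcl/bct.f';

-- Check that the feature is active.
select status, filename from v$block_change_tracking;
```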

It’s very useful if you’re using large tablespaces (i.e., the “one file per tablespace” option) – but, as with many things Oracle produces, odd errors start to appear when you push the limits or move into combinations of features.

Saibabu Devabhaktuni has written a blog note about using BCT with physical standby databases – a relatively new enhancement to BCT, but possibly one you need to be cautious about for reasons he explains.

Vincent Barbarino and the Book of Database Refactoring

Welcome back.  I realize this blog has been dead for a long time, but like Gabe Kotter, I’ve decided to return to my roots a bit and fire this blog back up.

Those of you who have spoken to me lately, or have been following me on Twitter, know that I’ve been pretty passionate about database design and development processes.  I’ve almost never seen a database where the physical infrastructure is perfect, so ongoing changes to database structures and schemas are just a fact of life.

It’s managing that process of changing the database layout that’s been getting me all worked up lately – even when folks know what to change and how they’d like to change it, they don’t have tools or processes to introduce change into the database system in a traceable manner.

In one of my prior jobs, we were adamant about change-tracking all of our database changes – we used a rigorous change control and CM process based on individual script files, one per object change.  Little did I know at the time that what we were practicing was a form of database refactoring…

I had thought that almost everyone understood the importance of maintaining traceability for database changes, but I’ve recently encountered situations where the lack of standards and tools means that changes are applied to databases in a haphazard fashion.  While searching for some good arguments to use when introducing this concept, I came across a book by Scott Ambler and Pramod Sadalage entitled “Refactoring Databases: Evolutionary Database Design”.

Immediately I was happy with the concept: a whole “text-book” of how and why you need to manage the process of database structural change management.  In the remainder of this post, I’ll be reviewing and commenting on this book.

Before I begin, I think it’s interesting to look at the review quotes in the front of the book.  In some ways I wonder if folks know who this book is for – most of the quotes seem to patronize “data professionals”, saying it’s high time that they joined the modern world in embracing agile development techniques.  References to “strong-armed DBAs” holding back projects seem a little harsh.

And yet.

I continue to believe that the jack-of-all-trades DBA moniker is mostly to blame for the sad state of database physical design today.  Ask folks what the primary task of a DBA group is, and chances are you’ll be told that it’s backup and recovery, not physical database design and construction.  I even have a hard time with the term database development as I don’t really feel like I’m writing code when I’m doing physical database layout.  I’ve been advocating the split of the DBA term into Database Operator (traditional DBA), Database Engineer (physical database designer and constructor) and Database Developer (stored procedure and SQL writer).

Using my terms, this book is best targeted at the Database Engineer and Developer.

What’s funny to me about the opprobrium heaped upon DBAs by agile developers is that I don’t think it’s a criticism of database technology in and of itself – but rather frustration with being forced to work with database professionals who lack the temperament, skills and experience to do database engineering and development.  Let’s face it, a conservative operations DBA is (rightly) concerned primarily with system availability and reliability through ensuring proper backups and minimizing potential workload on the server.  These are the DBAs who prefer to have hundreds of small independent databases in production all running at 2% utilization because it plays to their strengths.

It’s far harder to manage a large, multi-application tenant database running at 30-40% utilization experiencing continual structure changes – and that’s where this book starts to help.

The Preface has a nice section on “Why Evolutionary Database Development?” which starts us off into understanding why it’s necessary to resist the desire to have a full and complete set of logical and physical models before performing database development.  Early in my career I participated in efforts to create so-called Enterprise Data Models – which, being constructed by ivory-tower oversight and governance groups, lacked any sort of applicability to business and mission requirements.  And, sadly, they were out of date even when they were eventually completed.  The book authors do a nice job of highlighting the benefits of the incremental approach, and also caution folks about the potential barriers to its adoption.  In particular they point out the severe lack of tools supporting database SCM (this was written in 2006).

They also mention the need for database sandbox environments – they suggest individual developers get their own databases to experiment with.  I’m not a big fan of this approach – I prefer a single development database that allows me to host a lot of data, with each developer getting their own schema to play around in.  I also ALWAYS enable DDL auditing in ALL of my databases – that way I can track potential changes that might need to be promoted to the next environment (I also get to validate that my changes were applied to the higher environment – and, as icing on the cake, I can trap dumb ideas like embedding DDL statements inside transactional operations).
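In Oracle, the simplest way to get that kind of tracking is a database-level DDL trigger – this is just a sketch, and the audit table name and columns are my own invention:

```sql
-- A hypothetical audit table to capture DDL issued against the database.
create table ddl_audit_log (
        audit_date      date,
        os_user         varchar2(128),
        operation       varchar2(30),
        object_owner    varchar2(128),
        object_name     varchar2(128)
);

-- Record every DDL statement, using the system event attribute functions.
create or replace trigger trg_ddl_audit
after ddl on database
begin
        insert into ddl_audit_log (
                audit_date, os_user, operation, object_owner, object_name
        ) values (
                sysdate,
                sys_context('userenv', 'os_user'),
                ora_sysevent,
                ora_dict_obj_owner,
                ora_dict_obj_name
        );
end;
/
```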

Chapter 2 introduces the concept of Database Refactoring, with a quick introduction on refactoring in general (“a disciplined way to restructure code in small steps”).  The authors do a nice job of pointing out that database refactoring is conceptually more difficult than code refactoring – that code refactoring only needs to maintain behavioral semantics, while database refactorings must also maintain informational semantics (pg. 15).  The emphasis here includes the ability to introduce change in a transitional way that allows for multiple applications and multiple versions of applications to continue to run against the same database.  A simple example of moving a column from a parent table to a child table is also included.
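To give a flavour of what a transitional refactoring looks like, here’s a sketch of moving a column from a parent to a child table while old and new application versions co-exist – the table and column names are invented for the example:

```sql
-- Step 1: add the column to the child table and backfill it.
alter table order_lines add (delivery_zone varchar2(10));

update order_lines ol
set    ol.delivery_zone = (
        select o.delivery_zone
        from   orders o
        where  o.order_id = ol.order_id
       );

-- Step 2: keep the two copies in sync during the transition period,
-- while older application versions still write to the parent column.
create or replace trigger trg_sync_delivery_zone
after update of delivery_zone on orders
for each row
begin
        update order_lines
        set    delivery_zone = :new.delivery_zone
        where  order_id = :new.order_id;
end;
/

-- Step 3 (once every application has migrated): retire the old column.
-- drop trigger trg_sync_delivery_zone;
-- alter table orders drop column delivery_zone;
```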

In section 2.3, the authors categorize database refactorings into 6 broad categories: Structural (modifying table definitions), Data Quality (think check constraints), Referential Integrity (capturing rules that might currently be maintained by application code), Architectural (transferring common logic from applications into database procedures to increase their usefulness), Method (stored procedure refactorings), and Non-Refactoring Transformations (evolving the schema to handle new data concepts).

They also introduce the idea of indicators that your database may require refactoring – they call them “Database Smells” :-)

These include common problems like multipurpose columns, multipurpose tables, redundant storage of data items, overloaded columns, and fear of altering the database because it is too complex.

In section 2.6, the authors explain how it is easier to refactor your database schema when you decrease the coupling between applications and the database – through concepts like persistence layers.
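Even without a full persistence framework, a layer of views buys you some of that decoupling – applications query the view, so the tables behind it can be split, merged or renamed without touching application code.  A minimal sketch, with illustrative names:

```sql
-- Applications reference the view, not the base tables, so the
-- underlying structure can be refactored behind it.
create or replace view customer_v as
select
        c.customer_id,
        c.customer_name,
        a.postcode
from
        customers          c,
        customer_addresses a
where
        a.customer_id = c.customer_id
;
```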

Chapter 3 walks you through the basics of a database refactoring process – including giving you a list of process steps.  It also includes some good checks on determining whether or not the change is necessary and worth the effort.  Finally, they talk about version control and visibility.

Chapter 4 is pretty short, and deals with deploying or migrating changes from environment to environment.  This includes the concept of bundling changes together, scheduling and documenting deployments.  Finally, they discuss the actual deployment process, including defining and possibly testing your backout procedures.

In my environments, we’d break up these deployment items into 3 main groups: items that are pre-deployable (i.e., can be deployed ahead of time without affecting current applications), items that require application outages, and items that can be deployed “post-deployment” (perhaps cleanup activities that require the structure change, but aren’t required by the applications).

Chapter 5 discusses strategies (actually lessons learned) for successfully moving database refactorings through your development process, including implementing traceability for database changes, simplifying database change review processes, and hunting down and eliminating duplicate SQL.

The rest of the book, Chapters 6 through 11, goes through specific kinds of refactorings (e.g., Introduce Calculated Column) along with basic pros and cons of each one and example SQL scripts (using the Oracle dialect).  It serves as a reference catalog of database change concepts and is useful from a delineation perspective.  I wish there were more meat in the pro and con section for each transformation, but in the end it’s a useful list.
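To give a flavour of the catalog, “Introduce Calculated Column” might look something like this in the Oracle dialect – a sketch with invented names (on 11g and later a virtual column would do the trigger’s job declaratively):

```sql
-- Add a column to hold the pre-computed value...
alter table invoices add (total_inc_tax number);

-- ...backfill it from the existing columns...
update invoices
set    total_inc_tax = total_ex_tax * (1 + tax_rate);

-- ...and keep it correct as rows are inserted or updated.
create or replace trigger trg_invoices_total
before insert or update on invoices
for each row
begin
        :new.total_inc_tax := :new.total_ex_tax * (1 + :new.tax_rate);
end;
/
```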

Overall I thoroughly enjoyed the book and would recommend it for many development teams – project managers and developers should read at least the first 50 pages so as to understand how to integrate database development into the overall project plan.  Traditional DBAs supporting development teams absolutely must read this – if only to enhance their ability to interact and fully support development activities.

That’s all I have for now – look for shorter, more incisive posts in the future!

- Dom.

Open World

I won’t be attending Open World this year but if you’re going you may want to check out the Google Spreadsheet that Greg Rahn has prepared to list the presentations from Oak Table members.

(I’ll be deleting this post when OOW 2010 is over.)

Salary Survey Weirdness

Well now, here’s an odd thing. In an otherwise frankly insulting article, supposedly about visa fraud, our old friend Don Burleson refers to Oracle Corporation’s salary survey for Oracle professionals, which apparently shows US DBAs earning $97k on average whilst DBA staff in the rest of the world earn close to half of that [...]

Oracle Versions

Having discovered that it’s now easy to create polls, I find that it’s a little addictive.

There have been requests for help on the OTN database forum fairly recently for versions going all the way back to 7.3, so I thought I’d set up a poll to see which versions people have in production. If I’ve got it right you’ll be able to mark multiple choices from the list.

For reasons I don’t understand the order of the version selection is being randomised every time the screen refreshes. (The results show, as required, in the order “newest version first”).

Filter “Bug”

Here’s an example to remind you how important it is to look at the “Predicate Information” supplied with an execution plan. Here are two execution plans that look very similar in shape – a continuous set of steps inwards and downwards from parent to child, with no “multi-child” parent rows:

| Id  | Operation                      | Name    | Rows  | Bytes | Cost  |
|   0 | SELECT STATEMENT               |         |     1 |    19 |     4 |
|   1 |  TABLE ACCESS BY INDEX ROWID   | MIN_MAX |     1 |    19 |     2 |
|*  2 |   INDEX UNIQUE SCAN            | MM_PK   |     1 |       |     1 |
|   3 |    SORT AGGREGATE              |         |     1 |     8 |       |
|   4 |     FIRST ROW                  |         |    10 |    80 |     2 |
|*  5 |      INDEX RANGE SCAN (MIN/MAX)| MM_PK   |    10 |    80 |     2 |

Predicate Information (identified by operation id):
2 - access("MM1"."ID_PARENT"=100 AND "MM1"."ID_CHILD"= (SELECT
5 - access("MM2"."ID_PARENT"=100)
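The plan and its predicates suggest a query of roughly this shape – my reconstruction, not the original SQL, assuming the min_max table and mm_pk index shown in the plan:

```sql
-- A query shape that can produce the plan above: a unique scan on the
-- primary key, driven by a min/max scalar subquery on the same index.
select  *
from    min_max mm1
where   mm1.id_parent = 100
and     mm1.id_child  = (
                select  min(mm2.id_child)
                from    min_max mm2
                where   mm2.id_parent = 100
        );
```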

And then this:


Some time ago I added the “subscribe” option to the right-hand panel to allow people to register for automatic email whenever I post a new article. (I’ve also checked how easy it is to unsubscribe if you change your mind – and it’s very easy.)

Since then I haven’t been paying attention to how many people have subscribed – but I noticed today that the total number of active subscribers has gone over 250, which is quite nice.

If you haven’t noticed the option you might want to think about it:

  • benefit – you don’t have to check for new posts;
  • drawback – you don’t get any notification of comments on posts – unless you also subscribe to individual posts.

Update 31st Aug 2010: Following the comments about RSS feeds, I’ve added a poll to this article to make it easier for people to express an opinion. If you also want to make a comment about which is better, feel free, but if you just want to register a preference it just takes a couple of clicks.

Update 1st Sept 2010: After 24 hours the votes have come in at roughly 2 to 1 in favour of full feeds – so I’ve changed the syndication settings to full feed, showing comment count.

Update 24th November 2010: WordPress keeps adding little details to their product – and I discovered this morning that the latest change is that they can now send email to a blog owner every time someone registers as a new subscriber. This prompted me to check how many subscribers I have – and it’s just gone up to 347. (I’ve decided to tell myself that this is why my viewing stats seem to have dropped off slightly – more people have decided to do the efficient thing by subscribing and then just reading the articles, so they spend less time browsing the blog and following recent comments, and more time doing their job.)


Some readers have noticed that a few links to my blog seem to be broken. Don’t panic – it’s not permanent, it’s just the result of Don Burleson losing his temper.

Let me start by telling you about DMCA, the “Digital Millennium Copyright Act”. DMCA is a mechanism designed to protect Internet service providers (ISPs) from being sued over content published by their customers, by allowing them to act as a pure communication channel and stay out of the line of fire.

The protocol is simple:

  • Person B decides that Person A has copied material for which Person B holds the copyright.
  • Person B emails Person A’s ISP with a message swearing that he (or she) really, truly, believes there is a case for breach of copyright, supplying a link to the original source and (in principle) exact details of the copied material.
  • The ISP takes down the offending article and informs Person A of the “DMCA take-down notice”.
  • At this point Person A simply has to send an email to the ISP swearing that the article really, truly, doesn’t breach Person B’s copyright.
  • The ISP restores the article and sends the DMCA counter-claim to Person B.
  • At this point the ISP is in the clear and it’s up to Person B to pursue the copyright claim against Person A in court.

The protocol is necessary because (in general) you wouldn’t want your ISP to be shut down or bankrupted because of the actions of a single individual who was abusing the services. On the downside, the protocol does enable individuals to make nuisances of themselves – particularly in an arena which should be open to technical discussion and peer review.

Content theft on my blog

I received an email today from Don Burleson, who informed me that a number of my articles have mysteriously appeared on another weblog hosted by WordPress under the name of  For instance:

My post: has reappeared as Even the host names are identical! The next one is a bit better as he managed to replace my “” example with “”: is my but he forgot to add the corrections I made. I recognize my DNS server, auxOEL5 :)

The story goes on.

Copy: from original:

Copy: Original:

Copy: Original:

And many more. It seems my blog is replicated almost 1:1 to his.

I am very disappointed to see that someone else has to resort to such extreme measures to boost his profile. Copyright aside, this is very unprofessional and I would have expected something else. If you are interested in the whole story, have a look at

Oh, and it’s not only me, more established authors such as Alex Gorbachev, Mark Bobak and Alex Fatkulin are also victims of this.

So, that’s it from me about this matter.


After I complained to WordPress and left comments on the copied articles, they mysteriously disappeared overnight. Everyone affected should do the same, and I urge you to file complaints with WordPress to have the blog removed.