May 2012

You Will Be Our Slave – Err, no, I Won’t

For the sake of current clients, this posting has been time-shifted.

I’m looking at the paperwork for a possible new job in front of me. Document seven of thirteen is the Working Time Directive Waiver. It’s the one where you sign on the dotted line saying your proposed new client can demand more than 48 hours of work a week from you. {This may be UK or European Union specific but, frankly, I don’t care}.

I’m not signing it. For one thing, I doubt the legality of the document under EU law – especially in light of the issues the UK government had with this and junior doctors {who often ended up, and still do end up, making life-or-death decisions on patients when they are too tired to play Noughts and Crosses, having worked 80 hours that week}. For another, well, I don’t give a damn. I ain’t signing it.


This recent blog post by Seth Godin reminded me a lot of my introduction to Effective Oracle by Design from a few years ago. What was true then is still just as true today...

Here is an excerpt from my book that mirrors what he just wrote:

I will use yet another analogy to describe how this book will present information. Pretend for a moment that the developer is instead a medical doctor and the application is the patient. There are many types of MDs:

Using your DVD as a yum repository on a RPM based Linux

So, you’ve just gotten a freshly installed Linux system with Oracle Linux or Red Hat Linux from the sysadmin. With Oracle Linux you cannot use the internet (forbidden by company policy is a common reason), or you have Red Hat Linux and cannot use up2date for some reason. Most of the time when installing Oracle products I am allowed to use the root account myself during the install, and most of the time the DVD is still in the drive.

You could mount the DVD and use ‘rpm’ directly to install packages off it. If rpm reports a missing dependency, you resolve it; if that package has dependencies of its own, you resolve those too, and so on. That’s something you could do, but there is an easier way!
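As a sketch, the setup looks roughly like this (the device name, mount point, and the `Server` repodata directory are assumptions based on Oracle Linux / RHEL 5-style installation media – adjust them for your layout, and run as root):

```shell
# Sketch only: assumes the DVD is /dev/sr0 and the package tree with
# repodata lives under /Server on the disc, as on OL/RHEL 5 media.
mkdir -p /media/dvd
mount -o ro /dev/sr0 /media/dvd

# Point yum at the mounted DVD:
cat > /etc/yum.repos.d/dvd.repo <<'EOF'
[dvd]
name=Local DVD repository
baseurl=file:///media/dvd/Server
enabled=1
gpgcheck=0
EOF

# yum now resolves dependencies from the DVD automatically:
yum --disablerepo='*' --enablerepo=dvd install oracle-validated
```

Once you are done, remove the repo file (or set `enabled=0` in it) so yum does not complain after the DVD is ejected.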

Free Book: Introducing Microsoft SQL Server 2012

Michael McLaughlin‘s blog has a post about a free MS SQL Server book (Introducing Microsoft SQL Server 2012) available here. If you are into MS SQL Server, you can’t get a better deal than that. :)

Remember, you can still get hold of a free copy of Tom Kyte‘s book (Expert Oracle Database Architecture, 2nd Edition) from Red Gate.



Performance testing with Virident PCIe SCM

Thanks to a kind introduction from Kevin Closson, I was given the opportunity to benchmark the Virident PCIe flash cards. I have written a little review of the testing conducted, mainly using SLOB. To my great surprise, Virident gave me access to a top-of-the-line Westmere-EP system (2 sockets, 12 cores, 24 threads) with lots of memory.

In summary, the testing shows that the “flash revolution” is happening, and that there are lots of vendors out there building solutions for HPC and Oracle database workloads alike. Have a look at the attached PDF for the full story if you are interested. When looking at the numbers, please bear in mind that it was a two-socket system! I’m confident the server could not max out the cards.

Full article:

Virident testing martin bach consulting

Subquery Factoring

I have a small collection of postings where I’ve described anomalies or limitations in subquery factoring (the “with subquery”, or Common Table Expression (CTE) to give it its official ANSI name). Here’s another example of Oracle’s code not behaving consistently. You may recognise the basic query from yesterday’s example of logical tuning, so I won’t reprint the code to generate the data sets. The examples in this note were created on – we start with a simple query and its execution plan:

When is a foreign key not a foreign key...

I learn or relearn something new every day about Oracle.  Just about every day really!

Last week I was in Belgrade, Serbia, delivering a seminar, and an attendee reminded me of something I knew once but had totally forgotten. It had to do with foreign keys and the dreaded NULL value.

Many of you might think the following isn’t possible. We’ll start with the tables:

ops$tkyte%ORA11GR2> create table p
  2  ( x int,
  3    y int,
  4    z int,
  5    constraint p_pk primary key(x,y)
  6  )
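The transcript is truncated here, so as a hedged sketch, the demonstration typically continues along these lines (the child table name and the inserted values are my own illustration, not the original transcript):

```sql
-- Hypothetical continuation: a child table with a composite foreign key
-- referencing the two-column primary key of P.
create table c
( x int,
  y int,
  constraint c_fk foreign key (x, y) references p (x, y)
);

-- No parent row matches, yet this insert succeeds: under the default
-- MATCH SIMPLE semantics, a composite foreign key is simply not checked
-- when any of its columns is NULL.
insert into c (x, y) values (1, null);
```

That is the surprise: the foreign key quietly stops being a foreign key the moment one column of the composite key is NULL.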

Logical tuning

Here’s a model of a problem I solved quite recently at a client site. The client’s query was much more complex and the volume of data much larger, but this tiny two-table example is sufficient to demonstrate the key principle. (Originally I thought I’d have to use three tables to model the problem, which is why you may find my choice of table names a little odd.) I ran this example on – which was the client’s version:

Linux : HandBrake – Streaming DVDs from my NAS…

When my TV broke a few months ago I made the decision not to replace it. That means I only get to watch DVDs on the computer or stuff streamed on the web (BBC iPlayer, ITV Player or 4OD) using my iPad. I’m pretty happy with the situation as it prevents me wasting too much time in front of the TV. My only issue was being tied to the computer for DVDs. Yesterday I entered the 21st century and started streaming DVDs to my iPad.

A little Googling revealed that HandBrake is about as simple as it gets, where DVD video transcoders on Linux are concerned. With that installed I saved a copy of a DVD (Alien) into the “movies” folder on my NAS, which is pre-configured for streaming videos. That’s nice and simple.
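For anyone who prefers the command line, the same job can be sketched with HandBrakeCLI (the device path, title number, preset name, and output path are assumptions – preset names in particular vary between HandBrake versions):

```shell
# Sketch: rip the main title of a DVD to an iPad-friendly MP4.
# /dev/sr0, title 1, the "iPad" preset and the NAS path are assumptions.
HandBrakeCLI -i /dev/sr0 -t 1 --preset="iPad" -o /nas/movies/alien.m4v
```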

60TB Disk Drives?

I was reading a story where Seagate were talking about 60TB disk drives. That’s all well and good, but how quick can I get data to and from them? If I need a certain number of spindles to get the performance I require, then I’m just going to end up with masses of wasted capacity.

I can picture the scene now. I have a database of “x” terabytes in size and I need “y” number of spindles to get the performance I require, so I end up having to buy disks amounting to “z” petabytes of space to meet my performance needs. Not only is it hard to justify, but you know the “spare” capacity will get used to store stuff that’s got nothing to do with my database.
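The x/y/z arithmetic is easy to sketch. As an illustration (the 200 IOPS-per-spindle figure is my assumption for a 15k RPM drive, not a vendor number):

```python
def spindles_needed(required_iops, iops_per_disk=200):
    """Disks needed to hit an IOPS target; ~200 IOPS per 15k RPM disk assumed."""
    return -(-required_iops // iops_per_disk)  # ceiling division

def wasted_capacity_tb(db_size_tb, required_iops, disk_tb=60, iops_per_disk=200):
    """Capacity bought for performance reasons minus capacity actually needed."""
    return spindles_needed(required_iops, iops_per_disk) * disk_tb - db_size_tb

# A 10 TB database that needs 100,000 IOPS:
print(spindles_needed(100_000))         # 500 spindles...
print(wasted_capacity_tb(10, 100_000))  # ...and 29,990 TB of "spare" capacity
```

Buying half a petabyte-scale array to serve a 10 TB database is exactly the justification problem described above.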

Just think of those 60TB bad-boys in a RAID5 configuration. Shudder. :)