Oakies Blog Aggregator

OakTable Membership…

On a small note… Sometimes there is some confusion about how you become an OakTable member, as could be seen here “SbhOracle” (the post has since been altered)… OakTable membership can ONLY be acquired by INVITE, after being recommended by one or more OakTable members for long-standing achievements in the Oracle …


Oracle OpenWorld 2011–Reflective Thoughts

And so another OpenWorld has come and gone, and while I wasn’t able to attend in person this year, I was able to watch most of the keynotes live while following along with my peeps on Twitter.

It’s always interesting to see whether or not people are “impressed” with the announcements from Oracle during OpenWorld — a lot of that depends on your perspective. While the past couple of OpenWorlds brought us Exadata and Exalogic, I felt that there were a LOT of “engineered systems” announced in both the run-up to OpenWorld (Database Appliance, SPARC SuperCluster) and at OpenWorld itself (Exalytics, Big Data Appliance). If you’re keeping score at home, you now have at least the following set of engineered system components to choose from:

Exalytics — an OBIEE high-performance system (Essbase, OLAP, TimesTen)
Database Appliance (mid-market 2-node RAC in a box)
Big Data Appliance (Hadoop, NoSQL, R and Infiniband connectivity)
Exadata Storage Expansion Rack
SPARC SuperCluster

I predict that integrators and Oracle sales engineers will be very busy putting together solution portfolios and configurations for large customers.

This bigger product set also puts more pressure on Oracle to deliver a solid management console that can oversee multi-engineered-system landscapes. While the jury is still out on Oracle Enterprise Manager 12c, there were several encouraging bits about it, including the ability to customize screens and workflows in a white-label fashion.

Of course, in addition to the Big Data Appliance, which appears to be Oracle’s way of “legitimizing” Hadoop within the enterprise and providing tighter integration through enhanced connectors over Infiniband, there was another Oracle database product “announced” in the form of Oracle NoSQL. From most accounts, the NoSQL product appears to be a well-engineered key-value store based on the Berkeley DB software.

Then we had the Oracle Public Cloud announcement and the theater around their keynote. With the cloud, Oracle emphasized their stance on the open, portable nature of Java and how easily you can move onto and off of their cloud. Two things about the Oracle cloud were particularly interesting to me: the possibility of getting access to “public” data sets on the cloud, and the Oracle Social Network.

Larry Ellison demonstrated the Oracle Social Network used within a company sales process as a collaborative activity streaming tool integrated with Oracle’s Fusion applications — which seemed to resonate well with the enterprise customers in attendance.

All in all, a lot of stuff — and I didn’t even cover the Fusion Apps stuff.

One final intriguing thought — now that Oracle has so many different database products: RDBMS, NoSQL, Essbase, TimesTen and Rdb — it will be interesting to see how they “integrate” them, possibly on their cloud. I can imagine a future in which you don’t choose your product, but rather your feature and usage requirements, something like this:

Oracle Public Cloud Data Configuration

Describe your data requirements:

I need high-volume access to keys and values, I am less concerned about consistency
I need tables and columns that I can use to create relations and views to support ad-hoc queries and analysis
I need faceted, multi-dimensional analysis structures to support numerical analysis
I have a lot of documents and my data is basically unstructured.

Describe how you want to access your data:

I need JDBC / SQL connectivity
I need a RESTful API
I need a SOAP API

And then underneath the covers the cloud provisions the correct product for you, while watching your usage to see if it needs to configure a different product…
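As a thought experiment, the questionnaire above boils down to a mapping from stated requirement to provisioned product. The sketch below is purely my own illustration: the requirement keys and the selection logic are invented, and only the product names come from the post.

```python
# Purely illustrative: maps hypothetical questionnaire answers to the
# Oracle data products mentioned above. Nothing here is a real Oracle API.
REQUIREMENT_TO_PRODUCT = {
    "high_volume_key_value": "Oracle NoSQL Database",
    "relational_ad_hoc": "Oracle Database (RDBMS)",
    "multi_dimensional_analysis": "Essbase / Oracle OLAP",
    "unstructured_documents": "Big Data Appliance (Hadoop)",
}

def provision(requirements):
    """Pick the product(s) matching the stated requirements, in order."""
    return [REQUIREMENT_TO_PRODUCT[req] for req in requirements
            if req in REQUIREMENT_TO_PRODUCT]

print(provision(["high_volume_key_value", "relational_ad_hoc"]))
# → ['Oracle NoSQL Database', 'Oracle Database (RDBMS)']
```

The “watch your usage and reconfigure” part of the idea would of course be the hard bit; this only captures the initial choice.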

Cloud Control 12c R1 Installation on Oracle Linux 5.7 and 6.1…

While I was at Open World I tried a few times to get hold of the new Cloud Control software, but the hotel network wasn’t up to the job, so I had to wait until I got home.

The installation is pretty simple compared to previous versions of Grid Control, and it installs fine on both Oracle Linux 5.x and 6.x. As always it’s a little greedy on the memory front, with the recommendation for a small installation being 4G for Cloud Control and 2G for the repository database, not including the OS requirement. On the subject of the repository database, you can use a number of 10g and 11g versions, but anything before requires additional patches, so I stayed with

You can see what I did here.



The most important news this week

(to me, anyway ....)

It's easy to lose perspective when you're in the midst of OpenWorld conference chaos and partying, but the most important news I heard this week wasn't anything to do with NoSQL or Big Data or OEM, or even Steve Jobs' passing away, but a horrible email I received one morning. If you check out the 'Fanzine' section of the bookmarks on the right of this blog, it was invented for Andy Cowling, because his was the first blog I read that warranted it.

I remember Andy C blogging about the various kinds of friends we have these days: the online variety, acquaintances and real friends. But regardless of that, or how well I know him, he's a top man, so I'm just crossing everything that this goes well for him and his family ....

(No smiley thing to adequately express how I feel ....)

Back from Open World...

I just returned home from Oracle Open World 2011.  It was good to meet some new faces and meet up with some old ones.

I had five sessions this year - one of them was a panel session (no content to post) - and all of them had good audiences and good questions at the end.  I've posted the materials I used here for download.  All of the PowerPoints plus the scripts I used to demonstrate what I was talking about are included.  It should all be on the Open World site as well (only the PowerPoints, no separate scripts), as they collected the presentations from every session as it was happening and uploaded them.

Again - thanks to everyone that attended one of my sessions, I hope you got something useful out of it...

The art of getting security right - an observation

A number of high-profile hacks, recent and not so recent, have caught my attention. Well, I thought, not such a big problem - I don’t have a PS3 and hence don’t have an account that can be hacked. I was still intrigued that the hackers managed to get hold of the passwords. I may be wrong here, as I haven’t followed the developments closely enough (I wasn’t affected), but the question I asked myself was: how could they be obtained? Surely Sony must have used some sort of encryption for passwords. It seems far-fetched that anybody would store passwords in clear text somewhere!

Oh well, Sony has been targeted a number of times, and time and time again the security was breached. The only consolation is that the intruders made it very public when they were successful; otherwise we’d never have learned about the problems Sony has with security.

Now other sites were hacked as well, and somehow I felt the impacts coming closer, such as and others.


Oh well, the world is a bad place and the bad guys are way ahead of the good ones, I thought, for as long as I’m not affected… That held until today, when the ISP and infrastructure provider I host my lab at sent out an email saying their systems had been compromised, and that every customer should change all the passwords used with the administrative, web-based interface as well as for accounts on the servers themselves. I was very happy with Hetzner, as their EQ8 server offering was a system I used extensively.

What can I say? I’m not impressed at all. Again, how can passwords be stored in a system in a way that makes it so easy to compromise them? Was that an Excel sheet? Why can’t passwords be sensibly encrypted so they are just garbage to intruders? I think a global standard has to be put in place, similar to the PCI standard, which makes password protection with strong algorithms mandatory. Better still, failing to do so should be fined, in a way that hurts.
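For what it’s worth, the standard answer today is not to encrypt passwords at all but to store a salted, deliberately slow hash, so a stolen table yields only garbage. A minimal sketch using only Python’s standard library (PBKDF2; the iteration count and salt size here are my choices for illustration, not any standard’s):

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # work factor: tune upward as hardware gets faster

def hash_password(password):
    """Derive a salted PBKDF2-HMAC-SHA256 digest; store (salt, digest), never the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    """Re-derive with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("letmein", salt, digest))                       # False
```

With a random per-user salt and a slow derivation function, even a fully leaked database gives an intruder nothing to work with short of brute force per account.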

For those who are interested, the website has the latest. I was considering moving some of my other domains over to them but may have to rethink that strategy.

Clear text passwords in email

But it’s not only the careless storage of sensitive information on one’s own systems. How many times did you get emails stating “welcome to service xyz, your username is abc and your password is def”? They might as well send your bank details too, including credit card number and expiration date.

There has to be a wake-up call in the industry: it is far too simple to outsmart you! Do use strong encryption to protect customer data and identities. Failing to do so can, and maybe one day will, cripple the online business of many companies, causing so-called financial analysts to spread panic and sell lots of shares, plunging economies into difficult times.

Finally

Passwords aren’t a good enough solution for protecting identity and access to one’s accounts. IMO there should be better ways of preventing unauthorized access to your confidential data. What about a fingerprint reader? Or an iris scan? It sounds like James Bond at the moment, but if we are to trust the infrastructure again, we need to think of alternatives to passwords. To be secure, passwords have to be long, clumsy and hard to memorise, so you end up using one for almost everything. Also, root kits can undermine your home PC and make almost all online banking a very dangerous game. Trojans are able to undermine the security of the iTAN system, yet my bank, HSBC, doesn’t offer one of the only safe options for Internet banking: HBCI. Are we just too naïve? A 16-year-old schoolboy from Germany performed a “safety audit” of many German banks’ online applications and found that most of them were insecure (XSS being the main problem).

Post Scriptum

If anyone knows a reliable, responsible service provider where I can move my domains to, please get in touch!

Friday Philosophy – The start of Computing

This week I finally made a visit to Bletchley Park in the middle of England. Sue and I have been meaning to go there for several years. It is the site of the British code-breaking efforts during the Second World War and, despite difficulties getting any funding, there has been a growing museum there for a number of years. {Hopefully, a grant from the Heritage Lottery Fund, granted only this month, will secure its future}.

Why is Bletchley Park so significant? Well, for us IT types it is significant because Alan Turing did a lot of work there and it was the home of Colossus, one of the very first electrical, programmable computers. More generally, their efforts and success in cracking enemy ciphers during WW2 were incredibly important and beneficial to the UK and the rest of the allies.

In this post, I am not going to touch on Colossus or Alan Turing, but rather a machine called the “Bombe”. The Bombe was used to help discover the daily settings of the German Enigma machines, which were used to encrypt nearly all German and Italian radio messages. All the Bombes were destroyed after the war (at least, all the UK ones were) to help keep secret the work done to crack the ciphers, but at Bletchley Park the volunteers have recreated one. Just like the working model of Babbage’s Difference Engine, it looks more like a work of art than a machine. Here is a slightly rough video I took of it in action:

My slightly rough video of the bombe

{OK, if you want a better video try a clearer video by someone else.}

I had a chat with the gentleman you see in both videos about the machine, and he explained something that the tour we had just been on did not make clear: the Bombe is a parallel processing unit. Enigma machines have three wheels. There are banks of three coloured disks in the Bombe (see the picture below); e.g., in the middle bank the top row of disks are black, the middle are yellow and the bottom are red. Each vertical set of three disks, black-yellow-red, is the equivalent of a single “Enigma machine”. Each trio of disks is set to a different starting position, based on educated guesses as to what the likely start positions for a given message might be. The colour of the disk matches, I think, one of the known sets of wheels the Enigma machines could be set up with. The machine is then set to run the encrypted message through up to 36 “Enigmas” at once. If the output exceeds a certain level of sense (in this case, quite crucially, no letter is ever encrypted back to itself) then the settings might be correct and are worth further investigation. This machine has been set up with the top set of “Enigmas” not in place, either to demonstrate the workings or because the machine is set up for one of the more complex deciphering attempts where only some of the banks can be used.

This is the bombe seen from the front
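That “no letter is ever encrypted back to itself” property was the codebreakers’ cheapest filter: slide a guessed plaintext (a “crib”) along the ciphertext and discard any alignment where the same letter appears in both at the same position. A toy illustration of just that filter (not a model of the Bombe’s actual wheel-stepping logic; the strings are made up):

```python
def possible_alignments(ciphertext, crib):
    """Offsets where a guessed plaintext (crib) could line up with the
    ciphertext. Enigma never enciphered a letter to itself, so any
    alignment where crib and ciphertext share a letter at the same
    position can be rejected without trying a single wheel setting."""
    hits = []
    for offset in range(len(ciphertext) - len(crib) + 1):
        window = ciphertext[offset:offset + len(crib)]
        if all(c != p for c, p in zip(window, crib)):
            hits.append(offset)
    return hits

# Offset 1 is ruled out because "BCD" would sit exactly on top of itself.
print(possible_alignments("ABCDEF", "BCD"))  # → [0, 2, 3]
```

Only the surviving offsets then had to be fed through the expensive search over wheel settings, which is where the Bombe’s 36-way parallelism came in.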

The reason the chap I was talking to really became fascinated with this machine is that, back in about 1999, a home PC programmed to do this work was no faster than the original electro-mechanical machines from 1944. So, as an engineer, he wanted to help build one and find out why it was so fast. This struck a chord with me because back in the late 1990s I came across several examples of bespoke computers designed to do specific jobs (stuff to do with natural gas calorific value, DNA matching or protein folding), but by 2000 or 2002 they had all been abandoned, as a general PC could be programmed to be just as fast as these bespoke machines - because bespoke means specialist, which means longer and more costly development time, which means less bang for your buck.

Admittedly the Bombe is only doing one task, but it did it incredibly fast, in parallel, and as part of a whole deciphering process that some of the best minds of their time had come up with. (Part of the reason the Bletchley Park site was chosen was that it was equidistant between Oxford and Cambridge and, at that time, there were direct train links. {Thanks, Dr Beeching}.)

Tuning and reliability were as important then as they are now. In the picture below of the back of the machine (sorry about the poor quality, it was dim in that room), you can see all the complex wiring in the “door” and, in the back of the machine itself, those three rows of bronze “pipes” are in fact… pipes. Oil pipes. This is a mechanical machine, and they quickly realised that it was worth a lot of effort to keep those disks oiled, both for speed and reliability.

All the workings of the Bombe from the back

Talking of reliability, my guide told me one other thing. These machines are complex and have some ability to cope with failures or errors built into them. But of course, you needed to know they were working properly. When these machines were built and set up, they came with a set of diagnostic tests. These were designed to push the machine, try the edge cases and be as susceptible to mechanical error as possible. The first thing you did to a new or maintained machine was run your tests.

In 1943, you had awesome parallel processing, incredible speed, test-driven development and regression testing. We almost caught up with all of this in the early 21st century.

Installing Oracle Enterprise Manager 12c on OL 5.7

I have been closely involved in the upgrade discussion for my current customer’s Enterprise Manager setup from an engineering point of view. The client uses OEM extensively for monitoring; alerts generated by it are automatically forwarded to an IBM product called Netcool.

Now some of the management servers are still on in certain regions, and for a private cloud project I was involved in, an 11.1 system was needed. The big question was: wait for 12.1 or upgrade to 11.1?

So, to cut a long story short, I had been very keen to get onto the OEM 12c beta programme, but unfortunately wasn’t able to make it. Also, I wasn’t at Open World this year, which means I didn’t get to see any of the demos. You can imagine I was quite curious to get my hands on it, and when it was released a few days ago I downloaded it to my lab machine. I created a new domU for the database, plus latest PSU, and another one for the management server. I assigned 2 CPUs each; the database server got 2G of memory while the OMS received 8G. Don’t take this as a recommendation though, it’s only for lab use! I wouldn’t use less than 24G of memory for a production management server, and it would obviously follow the MAA recommendations and be installed behind an enterprise-grade load balancer etc. Needless to say, I’d use RAC plus Data Guard for the repository database.

Operating System

The installation software comes in the form of 2 zip files which you download from OTN as usual, totalling some 5.5G for x86-64. Unzip them to a favourite location.

I decided to use Oracle Linux 5.7 as the OMS host, since the documentation links were broken when I tried to figure out which platforms were supported, and a more conventional setup can sometimes be less painful. Additionally, I could use the oracle-validated RPM to set up the necessary system parameters, although it later turned out that I was setting some that weren’t needed for OEM. It didn’t hurt either.

Even when using OEM purely in your lab, it’s always a good idea to register your domUs in DNS. Not only do I have to do so because of the SCANs in RAC 11.2, but I also want to ensure that name resolution between the OMS and its targets works without problems.

One of the important things to remember, though, is to deactivate/modify the IPTables service and to set SELinux to permissive or lower. I had a failed “omsca” run because of firewall rules I didn’t spot.

I set 20G of space aside for my /u01 mount point, created as an LVM logical volume from a dedicated “oracle” volume group. Again, that’s not enough for production ;)


With all the prep work done, I started a vncserver process on the OMS machine and launched runInstaller from the unzipped file location. Spot the difference here? That is really it - you don’t have to:

  • Get an ancient, potentially bug-ridden JVM from the Sun archives and deploy it to your machine
  • Get a very old version of WebLogic and install it separately
  • Ask Oracle Support for patch WD7J because the not-so-smart update can’t get it for you

This is a huge step in the right direction, but one wonders why it has taken Oracle so long to improve a convoluted install process. But then you might argue that you only install a few of them, so it might be a bit of an IKEA experience (but without a lot of the fun).

After runInstaller has started, it greets you in its usual friendly way. As is the case these days with Oracle software, it prompts you for your My Oracle Support credentials - I usually skip this step.

I also opted to skip software updates on this next screen:

Since my 20G LVM is mounted at /u01, it was only natural to use /u01/app/oraInventory as the inventory location. I chose oinstall to own the software.

Funnily enough, just by having applied the oracle-validated RPM plus its dependencies, I managed to pass all the tests! What I didn’t know at this time was that the IPTables and SELinux settings weren’t detected - again, make sure you have set SELinux to permissive and modified/deactivated the firewall. Otherwise omsca will fail trying to start the EMGC_OMS1 (sp?) server for deployment of the actual Grid Control software. The only warning I got was about swap space, which I have a habit of ignoring. And honestly, I was only 1M off…

Next I chose the “advanced” installation, with a middleware home set to /u01/gc12.1. As it turned out, the trailing slash in the print screen wasn’t needed, and you shouldn’t use it.

Select the plug-ins you want on the next screen, bearing in mind that you might run into dependency problems until you have selected all relevant items.

On this screen you have the option to assign passwords to the various users needed for WebLogic. Also note the double // in the path - this is not recommended, and a result of my earlier setting of the middleware home location. I actually redid the installation because of the aforementioned IPTables problem and removed the trailing slash, but don’t have print screens for it.

Enter the credentials and connection information for your repository database on this screen. Mine was, check the online documentation for required patches to your database home.

On the next screen you define the repository details; it is fairly self-explanatory. OUI then goes off and runs a number of checks against your database, mainly to see whether certain init.ora parameters match the recommended standard. Interestingly, AMM wasn’t recommended; instead OUI asked me to increase the values for sga_target and pga_aggregate_target. This hasn’t changed much from the OEM GC 11.1 days - ensure that you are happy with the settings, and bounce the instance if needed for them to take effect.

Towards the end of the OUI session you can make a choice of port ranges; experience told me that these are best left at their defaults.

Finally! The summary screen shows what you decided to do; it’s split across the two print screens shown here:

Now it’s time for a coffee or two; the installation process in my lab took about an hour. The interesting part is that you can actually correct errors if they occur. The only MOS note for OEM 12.1 I found describes the location of the log files and should be your first port of call in case of problems.

Luckily there weren’t any problems, so I went on to run the root scripts, which didn’t take any time at all to execute.

Step 9 - success! I now have an Enterprise Manager 12.1 system!

I will need to spend some time experimenting with it to see what has changed - I assume a lot. Should I find some more time, I’ll create a few new posts about installing agents on the machines I’d like to monitor, and other bits and bobs.

Migration to Exadata Session at #OOW11

Considering it was the last session of #OOW11 I was surprised to see a sizable number of folks showing up for my 3rd and final session slated for 3 to 4 PM on Thursday. Thank you for attending and for your questions.

Here is the slide deck. Note: please do not click on the link. Instead, right-click on it, save the file and open it. It’s a PowerPoint show, not a PPT. You can download the free PowerPoint viewer to watch it if you don’t have PowerPoint installed.

Mike Carey: Vicious Circle…

The second in the Felix Castor series by Mike Carey. In Vicious Circle, Felix has to save the ghost of a girl from getting killed (sort of). File under supernatural detective with similarities to The Dresden Files. Pretty cool if you are into this sort of thing, like I am, but not for everyone.

My favorite character has to be Juliette, the succubus. To paraphrase one of her comments regarding attempting to use her abilities on some people suffering from demonic possession, “They should have only been capable of spontaneous orgasm.” It seems demonic possession gets in the way of your average succubus innocently going about their business… :)

Second favorite character has got to be Nikki, a zombie conspiracy theorist. It’s good to know that zombies care about their appearance too. Nikki tries to prevent his corpse from decaying by using industrial air conditioning and avoiding bacterial contamination as much as possible. Nikki is rather surprised by his physical reaction to Juliette, which should be impossible for a zombie, as their congealed blood does not flow. I’ll leave it up to you to guess what happened to him. :)