Oracle Closed World and Unconference Presentations

There are so many things to blog about from these past few days: the cool stuff around OCW and OOW, the sessions I attended (OCW, unconference, OOW), plus the interesting people I've met across various areas of expertise. So I'll be posting some highlights (and a lot of photos) in the next few posts.

Last Monday (Sept. 20) I was able to present at Oracle Closed World @ Thirsty Bear. The full agenda is here: http://www.amiando.com/ocw.html?page=434169

OOW 2010 Session Application Profiling in RAC

Thank you to all who attended my session at OOW. It was the last day of the conference, and I appreciate you taking the time. Here are the slides.

Thank you for attending; I hope you find them useful. Please also see my other blog entry, where I describe the tool I built.

OOW 2010 Session Stats with Confidence

Thank you very much to all those who attended my session "Stats with Confidence". Unfortunately I was delayed by the keynote running late. With the big party coming up, I appreciate the spirit of those brave souls who stayed back. The late start didn't allow me to show the demo completely. But here are the scripts; I hope you will be able to follow along and run them on your own infrastructure.

It contains the presentation as well. Thanks for attending, and I hope you find it useful.

Oracle OpenWorld Day 5 Highlights

Like most at OpenWorld, I started the day a tad later than usual after yesterday’s Appreciation Party. I decided to be a little adventurous with my first session and see a non-database related session on Application Integration Architecture as I have an interest in the issue. It was all actually quite interesting with discussions regarding [...]

Software. Hardware. Complete.

A lot is happening here at Oracle Open World, more than my brain can process right now: Exalogic, ZFS storage, Unbreakable Enterprise Linux, Fusion Apps (finally), Oracle VM for Solaris, etc. Some of those topics aren't that important for me and/or my customers right now because they are simply out of reach; Exalogic, for example. I read that a full box setup will go for just over US$1,000,000, so that is a setup I won't be managing for a while. On the other hand, from Oracle's perspective this is the logical next step: providing a solid, complete solution from the application layer down to the hardware layer, a boxed offering for Oracle's most demanding customers in terms of performance, best of breed, and availability.

Oracle Open World 2010

Exadata has very good technology features that I hope will be opened up to us "general" database consumers in the near future. One Exadata feature I would really like to get my hands on is the in-memory index functionality, not least because it fits perfectly with the way I handle XML data, which is "content" based. Another feature would be, for instance, the storage cell optimizations. Anyway, for now, trying to enable them only comes back with an "Exadata Only" warning, so I will have to wait a little (I hope).

I am currently following Wim Coekaerts' OEL / Unbreakable Enterprise Linux kernel session, "Oracle's Linux Roadmap", being interested mainly in the statement of direction about ZFS, OEL and/or Oracle Linux. As far as I have read, the new kernel will be a fork, offered as an option while installing Oracle Linux. From Oracle Enterprise Linux 5.5 onwards you can choose between a strict Red Hat kernel install or an Oracle-optimized Unbreakable Enterprise Linux kernel install. Although I haven't tried it yet, you can also update an existing system to the Oracle-optimized kernel afterwards.

Some of Oracle's Open Source contributions

So what is this Oracle Unbreakable Linux kernel part all about…?

Until now you could download the freely available binaries or source code of Oracle Enterprise Linux, and applications that were compatible with and supported on Red Hat would run unchanged. No code change was needed, and Oracle would fully support it. As mentioned during the presentation, not a single bug has been reported of Oracle Enterprise Linux being incompatible with Red Hat. Oracle Enterprise Linux is used by an enormous number of big customers, is implemented in the Exadata setups among others, and has cheaper support offerings than, for example, Red Hat support. So far so good; nothing new.

But as was mentioned, the limits of staying strictly Red Hat compatible had some major disadvantages for Oracle, such as waiting for bug fixes in a new Red Hat distribution to work properly in, and with, an Oracle software environment. Red Hat does not validate Oracle software, and Red Hat adopts community efforts very slowly. These were some of the reasons why Oracle has now created the Unbreakable Enterprise Kernel. Be aware that this is not the old Unbreakable Linux support program but an actual new Linux kernel.

So this new Oracle kernel is fast, modern, reliable, and optimized for Oracle software. It is, among other things, used in the Exadata and Exalogic machines to deliver extreme performance. It also allows Oracle to innovate without sacrificing compatibility. In short, regarding the software: the distribution now includes both the Unbreakable Enterprise Kernel and the existing strict Red Hat Compatible Kernel. At boot time you can choose either the strict Red Hat kernel or the Oracle-optimized Unbreakable Enterprise Linux kernel.

As Oracle states, "Oracle now recommends only the Unbreakable Enterprise Linux Kernel for all Oracle software on Linux". You might wonder what this means for certified solutions. My guess is that Oracle will push the Oracle Unbreakable Enterprise Kernel (OUEL?) but will certify on both kernels, not least because a lot of Oracle's On Demand services are still based on Red Hat environments.

So far the new kernel shows huge improvements compared to Red Hat, for example a 400% gain for 8 KB flash cache reads (IOPS). Oracle now also has the ability to support bigger servers: up to 4096 CPUs and 2 TB of memory, up to 4 PB clustered volumes with OCFS2, and advanced NUMA support. CPUs can now stay in a low-power state when the system is idle, and the power management supports ACPI 4.0 and fine-grained CPU and memory resource control.

The Oracle Unbreakable Linux Kernel

All these additions will flow back into the community, which, of course, is what Linux is all about. Other things Oracle has pushed back into the mainline kernel source include better data integrity (detecting in-flight memory corruption so that corrupted data in memory is stopped before it is actually written) and hardware fault management (detecting and logging errors before they affect the OS or applications, and, for example, automatically isolating defective CPUs and memory, which avoids crashes and improves application uptime). Diagnostic tooling has been made less resource-intensive, in the hope that people won't switch it off, so that when a performance or corruption issue happens the trace information is already there. During the presentation the "latencytop" diagnostic tool was mentioned as an example.

The Oracle Unbreakable Enterprise Linux kernel can be downloaded, for OEL 5.5 and onwards, either via public-yum.oracle.com or via the ULN network. Follow the instructions for the public yum server (the public-yum-el5.repo repository file) and apply it on your current 5.5 system; a different way to achieve the same is to run "up2date oracle-linux" if you have already bought/installed/registered for Oracle Unbreakable Linux support.

Wim described it in more detail on his blog:

(1) On ULN we created a new channel “Oracle Linux Latest (ol5_x86_64_latest)” which you can subscribe to. It’s really very simple. Take a standard RHEL5.5 or OEL5.5 installation, register with ULN with the O(E)L5.5 channel and add the Oracle Linux 5 latest on top as well. Then just run up2date -u oracle-linux

the oracle-linux rpm will pull in all the required packages

(2) On our public yum repo we updated the repo file public-yum-el5.repo. So just configure yum with that repo file, enable the Oracle Linux channel ([ol5_u5_base] enabled=1), and run yum install oracle-linux
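
Put together, and assuming an already installed and registered 5.5 system, the two routes look roughly like this (commands, package and channel names as quoted above; the exact download location of the repo file is my assumption):

    # Route 1: via ULN (requires an Unbreakable Linux support subscription).
    # First subscribe the system to the "Oracle Linux Latest (ol5_x86_64_latest)" channel on ULN, then:
    up2date -u oracle-linux      # the oracle-linux rpm pulls in the new kernel and required packages

    # Route 2: via the public yum server (no subscription needed).
    cd /etc/yum.repos.d
    wget http://public-yum.oracle.com/public-yum-el5.repo   # updated repo file (download location assumed)
    # edit public-yum-el5.repo and set enabled=1 in the [ol5_u5_base] section, then:
    yum install oracle-linux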

The OUEL kernel also improves on some miscellaneous things like NFS IPv6 support and RAID 5 to RAID 6 support. There will be a new volume-manager-style GUI that supports LUN creation, deletion, expansion and snapshotting of NAS storage. All the work done is submitted to the mainline kernel, and enhancements will trickle down and be tested in the Enterprise Linux distribution (be aware: Oracle Enterprise Linux is not the same as the Oracle Unbreakable Enterprise Kernel).

Also see Wim's post of today on blogs.oracle.com/wim for more info. I guess the same goes for Sergio Leunissen's blog; he is currently hammering away on his laptop behind me in the audience while I am writing this. Keep watching those blogs for more info :-)

By the way, speaking with Sergio about the "naming" of this software: apparently it was driven by the need for a clear distinction between "Oracle Solaris" and the more general, hardware-based "Oracle Linux" architectures.

HOWTO: Handle Complex XML Schemas (Part 1)

Currently sitting in the Oracle Open World 2010 presentation by Sam Idicula, Consulting Member of Technical Staff, and Mark Drake, Sr. Product Manager for Oracle XML DB. Before getting into the more in-depth topics, Sam explained XML Schema usage for validation via XML Schema validators such as XML Spy or JDeveloper. This is really needed nowadays, because the commonly used XML Schemas, like the really big ones out there such as HL7, have become so large that a good XML Schema validator is essential. An XML Schema registered for binary XML is stored in a post-parse binary format. This has the advantage that Oracle knows about the format when storing the XML document: extra information becomes available to the database by registering the XML Schema that validates the binary XML content.

There can be a lot of recursive dependencies, via the import or include references in an XML Schema, which make it even more difficult to make optimal use of this information. For example, the HL7 (Health Level 7) setup involves over 100 included XML Schemas. Oracle 11gR2 has greatly improved the performance and handling of the huge amounts of metadata stored in such XML Schemas. Streaming schema validation and hints added via xdb:annotations provide the database with even more information on how to handle these structures optimally, so performance can be improved even further. Some of those hints can be used to avoid the creation of objects when using XMLType Object-Relational storage, for instance xdb:defaultTable="" (providing an empty string), or to store parts of the XML document information out of line. By the way, for this last example you should not use JDeveloper, because it will annotate the XML Schema incorrectly (a bug I have reported). Among the improvements in 11.2.0.2.0 are big improvements in cycle detection, so recursive schemas are handled even better in that version.
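
As a minimal sketch of what the registration step can look like, assuming the (annotated) XSD has already been loaded into the XDB repository, registering a schema for binary XML could be done roughly as follows. The schema URL and repository path are my own illustrative values, not ones from the presentation:

    -- Minimal sketch: register an XML Schema for binary XML storage.
    -- Schema URL and repository path below are illustrative assumptions.
    BEGIN
      DBMS_XMLSCHEMA.registerSchema(
        schemaurl => 'http://example.com/po.xsd',
        schemadoc => XDBURIType('/public/po.xsd').getClob(),
        gentypes  => FALSE,                               -- no object types needed for binary XML
        gentables => FALSE,                               -- suppress creation of a default table
        options   => DBMS_XMLSCHEMA.REGISTER_BINARYXML    -- register for binary XML storage
      );
    END;
    /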

On the XML DB home page of the Oracle OTN website, a package of tools is provided ("Oracle XML DB Ease of Use Tools for Structured Storage") which can make your life easier regarding those xdb:annotations, especially for those enormously big XML Schemas. This tool set enables you to automate a lot of the XML Schema optimization hints you would like to make. Via XQuery or other XML DB update statements you are also able to override the naming or storage options generated by the database. With some simple anonymous PL/SQL blocks this can be done very easily, for example via the DBMS_ANNOTATE-style packages contained in this XML tool set, which, as said, is freely available on the XML DB pages of OTN; a rough sketch follows below.

(Screenshot: automation of xdb:annotations)
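
Since the screenshot itself is not reproduced here, below is a purely hypothetical sketch of the kind of anonymous PL/SQL block meant above. The package, procedure, element and table names are my own placeholders, not the tool set's actual API; check the tool set's documentation on OTN for the real calls:

    DECLARE
      -- Assumed: the unannotated XSD has already been loaded into the XDB repository at this path.
      l_schema XMLType := XMLType(XDBURIType('/public/po.xsd').getClob());
    BEGIN
      -- Hypothetical annotation helpers: add xdb:defaultTable / xdb:SQLName hints to the schema
      -- document so a later registration produces sensible object-relational names.
      dbms_annotate_schema.set_default_table(l_schema, 'PurchaseOrder', 'PURCHASEORDER_TAB');
      dbms_annotate_schema.set_sql_name(l_schema, 'LineItem', 'LINE_ITEM');
      -- The annotated schema can then be written back to the repository and registered
      -- with DBMS_XMLSCHEMA.registerSchema as sketched earlier.
    END;
    /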

This tool set also comes with a white paper that demonstrates some of the best XML handling ideas and the experience the Oracle XML DB development team has gathered through years of handling customer use cases. For example, if you know it is not applicable to your XML documents, you can switch off or alter DOM validation handling while storing or handling your XML documents in the database. You can also override element ordering if that is appropriate for your XML Schema; this saves Oracle from checking it, which improves handling. But be very aware that doing this can also be dangerous if the ordering was put in by the person who created the XML Schema without caring whether it is an actual, mandatory requirement in practice.

I have experienced multiple times that even with official XML Schemas the restrictions did not match real-life use, so although automation really helps you to manage your registered XML schemas more easily, you must be aware of those exceptions. XML Schemas can be modelled very loosely on real-life implementations, which can get you into a lot of problems once storage models based on such an XML Schema are used in your database design, because those rules will then be enforced via the XML Schema in the database.

As always, proper design with future needs in mind takes time to do well. This is also the case when creating a good XML Schema.

In Oracle 11 you now have the possibility, via this tool set, to use DBMS_XML_MANAGE (for XMLType Object-Relational storage) to rewrite the table-to-column mapping. It figures the mapping out for you, which makes it much easier to identify and create supporting indexes on ComplexTypes. This has the advantage that you can create indexes with more meaningful names, for example "line_items_uniq_idx_01" or whatever the naming convention within your company might be.

The latest XDB tool set will now also include an XDB_ANALYZE_XMLSCHEMA package, which sorts out all of the scripting and possible options when you feed it the actual XML Schemas. As Mark Drake demonstrated, all the FpML schemas, which have a lot of dependencies on each other, were analyzed, annotated and registered, creating over 100 tables and more than 2500 objects in minutes. Try doing that by hand…

The package will also sort out the proper XML Schema dependencies and the correct order in which all those XML Schemas have to be registered (based on the includes, imports and refs used by Simple- and ComplexTypes). Sometimes you have to break up a CREATE TABLE statement because the maximum number of columns Oracle allows in a single table is "only" 1000. This package helps you figure out how many of those Object-Relational storage items will have to be moved "out of line", and/or at which level of the XML hierarchy to break up the tree, to avoid the 1000-column limitation while also providing the design information needed to get maximum performance.

This tool set, aimed at XMLType Object-Relational storage, is only useful if your XML design is highly relational. If not, your XMLType storage model should be based on binary XML. The advantage of using XMLType Object-Relational storage is that you make full use of Oracle's relational technology and optimizations, which have been available for a very long time; the Cost Based Optimizer, for example, kicks in fully. On the other hand, be aware that if your XML design is really relational, maybe you should have created it by relational means in the first place. There should be a proper use case for working with the XML format at all. My adage is: if it is not XML, don't use Oracle XML DB; if it is, go for Oracle XML DB, not least because it is a no-cost option within your Oracle database and has been designed, since version 9.2.0.3.0, to handle XML optimally in your database.

For further information about choosing the proper XMLType storage model and how to optimally query these structures, have a look at:

HTH

Marco

Oracle OpenWorld Day 4 Highlights

Thankfully after a decent night’s sleep, I approached Day 4 at Oracle OpenWorld with more energy than I did yesterday. I probably had the most productive morning of the whole conference when I visited the various Oracle product booths in the Exhibition Hall at Moscone South and spoke to many of the database product managers and [...]

Report from Oracle Openworld

Openworld 2010, despite the supposedly lagging economy, had record attendance again this year.  No doubt this was the result of Oracle acquiring something like fourteen companies since last year, including Sun in 2009.  The crowds were thick, divided about evenly between geeks in badly-fitting vendor t-shirts and slick sales-side hustlers with dress pants and shiny shoes.  I landed somewhere in the middle of the two (badly-fitting dress shirt, comfortable jeans and loafers), proudly sporting a long dangling codpiece of ribbons from my attendee badge:

Oracle Closed World Presentations Downloads

Presentations from Oracle Closed World will be posted to

Links to scripts from Tanel Poder’s presentations at

Some photos from OCW 2010

Thanks to all for an awesome Oracle Closed World, and to the OakTable, Miracle Consulting and Embarcadero for sponsoring this event.

PS: Stay tuned for the "10 important database ideas" black paper that we will be putting together from Oracle Closed World. (If you would like to contribute, please email me at kyle.hailey@embarcadero.com.)
Kerry, Karl, Kyle
Link to a few more OCW 2010 photos from Karl : http://www.flickr.com/photos/kylehailey/sets/72157625025196338/
