
So good … it will scare your socks off

One of the big releases of 2017 is on our doorstep. Watch the trailer here

Little things worth knowing: scheduler jobs and Instance Caging

While updating material for a training class about Database Resource Management, I briefly wondered whether Instance Caging should apply to scheduler jobs as well. The obvious answer is “yes”, but I wanted proof. I hope I found it in the end, and wanted to share how I got there.

The test environment

My system is based on the shiny new AMD Ryzen 7 1700X Eight-Core Processor, and it shows as 1s8c16t. I really like it! My 12.2.0.1 database named DEMO is patched to August 2017 – 12.2.0.1.170814 to be precise. It’s difficult to test resource management, and specifically Instance Caging, with a dual-core laptop, so this machine should help!

Test setup

Apart from the database workload I am soon going to launch, the system is otherwise idle. I set cpu_count to 4 and made sure I had a resource manager plan active:
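A minimal sketch of that setup (the post does not show the exact commands, and the plan name here is just the database-supplied default, so treat this as illustrative):

```sql
-- Cage the instance to 4 CPUs; cpu_count is a dynamic parameter
alter system set cpu_count = 4;

-- Instance Caging only takes effect while a resource manager plan
-- is active; DEFAULT_PLAN ships with the database
alter system set resource_manager_plan = 'DEFAULT_PLAN';

-- Verify both settings
select name, value
from   v$parameter
where  name in ('cpu_count', 'resource_manager_plan');
```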

Buffer cache hit ratio–blast from the past

I was perusing some old content during a hard drive “spring clean” the other day, and I found an old gem from way back in 2001 – a time when the database community was trying to dispel the myth that all database performance issues could be traced back to, and solved via, the database buffer cache hit ratio. Thankfully, much of that folklore has now passed into the realm of fiction. But I remember at the time, as a means of showing how silly some of the claims were, I published a routine that would generate any buffer cache hit ratio you desired. It simply ran a query to burn through logical I/Os (and burn a hole in your CPU!) until the required number of operations bumped the buffer cache hit ratio up to whatever number you liked.
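The original routine isn’t reproduced here, but the idea can be sketched like this (a hypothetical PL/SQL loop of my own; the real routine calculated how many logical I/Os were needed to reach the target ratio):

```sql
-- Hypothetical sketch: inflate the buffer cache hit ratio by burning
-- logical I/O against data that is certain to be cached.
declare
  l_dummy number;
begin
  for i in 1 .. 1000000 loop
    -- each execution costs a handful of consistent gets, almost all
    -- satisfied from the buffer cache, so the ratio climbs
    select count(*) into l_dummy from dual;
  end loop;
end;
/
```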

UKOUG is coming

Yes it is just a few more weeks until the UKOUG conference swings by.

This has been one of my favourite conferences for years – dating back to my first one in 2002! You can see from the picture at the tail of this post that whilst times have changed in those 15 years, the basic tenets of community, networking and technical content remain unchanged to this day.

The AskTOM team will all be there, and there’s a fantastic agenda spread over 4 days. This user group really knows how to “bake” a good conference. Maybe they followed my recipe.


IP CIDR rules and address ranges

I can never remember the IP address range coverage rules, and I always forget where to look them up.

It’s the wiki!

https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing

and for good reference here is the table:

/32 is a single address

/24 is a range over the last octet: x.x.x.0 – x.x.x.255 (256 addresses)

/16 is a range over the last two octets: x.x.0.0 – x.x.255.255 (65,536 addresses)

In general, a /n prefix covers 2^(32−n) addresses.

Parsing freeform data in flat files

SQL*Loader is a very cool utility that has existed for a long time within Oracle to load flat files into the database. However, some people find the control file syntax quite cryptic, and when it comes to parsing very complicated structures, this can mean control files that are hard to maintain. For me the best solution here is an external table. That way we can combine the power of the SQL*Loader control file syntax, embedded within the external table definition, with the full power of PL/SQL and SQL for additional parsing of the data.

Here is an example where the data is spread across multiple lines and the task is to bring all that data together into a natural form, namely an ID followed by text.

So here is my file that has free format text
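The original file and table definition aren’t shown above, but the general shape looks something like this (the file name, directory and column layout here are hypothetical, not the ones from the post):

```sql
-- Hypothetical external table that exposes each raw line of the flat
-- file as a row, ready for further parsing with SQL and PL/SQL.
create table freeform_ext (
  line varchar2(4000)
)
organization external (
  type oracle_loader
  default directory tmp_dir          -- assumed pre-created directory object
  access parameters (
    records delimited by newline
    fields ( line char(4000) )
  )
  location ('freeform.txt')          -- assumed file name
)
reject limit unlimited;
```

From there, analytic functions or a PL/SQL pipeline can stitch the multi-line records back together into an ID-plus-text form.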

“Oh…another language is too hard”

We had a request on AskTOM a few days ago asking for an implementation of the XIRR function in PL/SQL.

I didn’t really know much about what that function was, or what it did, but a quick web search yielded plenty of examples in Excel, where it is a pre-delivered function, as described here:

“Returns the internal rate of return for a schedule of cash flows that is not necessarily periodic.”
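The AskTOM answer itself isn’t reproduced here, but XIRR is just a root-finding problem – find the rate at which the discounted cash flows sum to zero – so one possible PL/SQL sketch (my own, using Newton’s method and assumed collection types) looks like this:

```sql
-- Hypothetical sketch: XIRR via Newton's method.
-- f(r) = sum( cf_i / (1+r)^((d_i - d_1)/365) ), solve f(r) = 0.
create or replace function xirr(
  p_amounts sys.odcinumberlist,
  p_dates   sys.odcidatelist,
  p_guess   number default 0.1
) return number is
  l_rate number := p_guess;
  l_f    number;   -- f(rate)
  l_df   number;   -- f'(rate)
  l_t    number;   -- years since first cash flow
begin
  for iter in 1 .. 100 loop
    l_f  := 0;
    l_df := 0;
    for i in 1 .. p_amounts.count loop
      l_t  := (p_dates(i) - p_dates(1)) / 365;
      l_f  := l_f  + p_amounts(i) * power(1 + l_rate, -l_t);
      l_df := l_df - l_t * p_amounts(i) * power(1 + l_rate, -l_t - 1);
    end loop;
    exit when abs(l_f) < 1e-9 or l_df = 0;  -- converged (or flat slope)
    l_rate := l_rate - l_f / l_df;
  end loop;
  return l_rate;
end;
/
```

As with any Newton iteration, convergence depends on a reasonable starting guess; Excel uses 10% by default, which is mirrored here.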

Why being wrong can be awesome

OK, now that I’ve started the post with a nice click-bait heading, let’s get down to the business of being wrong.

I did a lot of conference presentations last year, and the great thing about that for me was that I got to meet a lot of new people in the Oracle community in the Developer and DBA space. One of the questions that came up over and over again was about putting one’s knowledge “out there” in the community and how to deal with the repercussions of that.  In particular, “What if you publish something that is proven wrong?”

Here’s the thing about being wrong … there are two likely outcomes:

Best method for tuning sub-optimal execution plans

How do you determine whether the Oracle SQL optimizer has created a sub-optimal execution plan? Re-run statistics and see what happens? Wait for Oracle to find other execution plans? What if neither method helps? Do you read the execution plan? What do you look at – differences between actual and estimated rows? How successful is that? Do you look for full table scans? Do you look at the 10053 trace? How much time and effort does that take, and what do you look at in the 10053 trace? Do you have a systematic methodology that works in almost all cases?

Well, there is a method that is reliable and systematic. It’s laid out in Dan Tow’s book SQL Tuning.

Idle banter

When your car gets a flat tyre, it’s always handy to have a spare. We do the same with the database.