I had a fun request come in from a colleague the other day. They had a simple list of fully qualified file names and they needed to present that data in the familiar hierarchical tree layout.
To demonstrate, I took a little trip down memory lane and grabbed a subset of presentations I’ve done over the years.
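To give a flavour of how such a tree can be built in SQL, here is a minimal sketch, assuming 12c for the LATERAL clause; the FILE_LIST table, its FNAME column and the sample names are illustrative stand-ins for the real data. Each name is expanded into its ancestor directories, and each node is then indented by its depth.

with file_list as (
  -- hypothetical sample data standing in for the real list of names
  select '/u01/presentations/2016/ash_masters.sql'  fname from dual union all
  select '/u01/presentations/2016/partitioning.sql' fname from dual union all
  select '/u01/presentations/2017/lobs.sql'         fname from dual
),
nodes as (
  -- every ancestor directory of every file becomes a node in the tree
  select distinct substr(f.fname, 1, instr(f.fname, '/', 1, l.lvl) - 1) node
  from   file_list f,
         lateral ( select level lvl
                   from   dual
                   connect by level <= regexp_count(f.fname, '/') ) l
  where  l.lvl > 1
  union
  select fname from file_list
)
select lpad(' ', 2 * (regexp_count(node, '/') - 1))
       || substr(node, instr(node, '/', -1) + 1) tree
from   nodes
order  by node;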
There was a time, many moons ago, when CLOB, BLOB and BFILE did not exist as data types. So if you had anything longer than a few kilobytes of data to store, you had to use a LONG or a LONG RAW. But those data types came with all sorts of restrictions and frustrations, and we all embraced the improvements that the LOB data types brought in Oracle 8. But of course, we still carry a lot of that historical "baggage" in the data dictionary.
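A quick way to see how much of that baggage remains is to ask the dictionary about itself; a minimal sketch:

-- SYS-owned dictionary objects that still expose LONG columns
select table_name, column_name
from   dba_tab_columns
where  owner = 'SYS'
and    data_type = 'LONG'
order  by table_name, column_name;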
I love Active Session History (ASH) data because a lot of the work I've done in my consulting life was "after the fact" diagnosis. By this I mean that many of us have been in the situation where the customer tells you about a problem not while it is occurring, but only when you contact them later for some other, potentially unrelated reason. At which point you will hear that dreaded sentence:
“Yeah, the Order Entry screen was really slow a couple of hours ago”
And this is where ASH is an awesome resource. With the ASH data available, there is a good chance you will be able to diagnose the issue without having to make an embarrassing request for the customer to repeat the task so that you can trace the underlying database activity. Because no one likes to be the person who says:
“Yeah that performance must have really sucked for you … Hey, let’s do it again!”
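A minimal sketch of the kind of retrospective query involved (ASH needs the Diagnostics Pack licence, and the module name and time window here are assumptions for illustration):

-- what was the "Order Entry" module doing a couple of hours ago?
select sample_time,
       session_id,
       sql_id,
       event,
       wait_class
from   v$active_session_history
where  module = 'Order Entry'                        -- hypothetical module name
and    sample_time between systimestamp - interval '3' hour
                       and systimestamp - interval '2' hour
order  by sample_time;

If the samples have already aged out of memory, the same predicates against DBA_HIST_ACTIVE_SESS_HISTORY (which keeps one sample in ten) are the fallback.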
Let's say I've been reading about schema separation, and thus I am going to have a schema that owns all of my objects, which I'll call APP_OWNER and which will have no connection privilege, plus a separate schema called APP_ADMIN which will take care of all of the DDL tasks.
Here’s my setup:
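Something along these lines, where the passwords, the quota and the exact list of "ANY" privileges are illustrative assumptions rather than a recommendation:

-- APP_OWNER holds the objects but can never log in
create user app_owner identified by some_long_password
  quota unlimited on users;

-- APP_ADMIN does the DDL on APP_OWNER's behalf
create user app_admin identified by another_long_password;
grant create session to app_admin;
grant create any table,
      create any index,
      create any view,
      create any procedure to app_admin;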
There is a good chance (based on this blog post title) that you're expecting a post on SQL, and that's understandable. But I'll come clean nice and early – that was just to lure you in.
The post is about SUM and DISTINCT, but not in the technical sense.
I had an interesting AskTom question recently where the poster was using SQL Loader to load data into some tables, but wanted to be able to analyze the resultant log file after execution. And of course, what better way to analyze … well … anything … than with a database and some SQL?
So we need to be able to access the log file as a table, and an external table is perfect for that, so let’s start there.
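A minimal sketch of such an external table, where the directory path and log file name are assumptions, is to simply map each line of the file to a single text column:

create or replace directory loader_logs as '/u01/app/sqlldr/logs';

create table sqlldr_log_ext (
  line varchar2(4000)
)
organization external (
  type oracle_loader
  default directory loader_logs
  access parameters (
    records delimited by newline
    nobadfile nologfile nodiscardfile
    fields terminated by x'0A'
    missing field values are null
    ( line char(4000) )
  )
  location ('load_run.log')
)
reject limit unlimited;

From there, the analysis is just SQL against the LINE column.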
Here’s a sample SQL Loader log file (with a little perturbation to preserve anonymity). It’s quite complex because multiple tables were loaded as part of a single SQL Loader run.
Yeah, sure, it would be cool to crank up some big, powerful VMs in the cloud and let rip, but the reality is – if you're starting out on a cloud exploration, you probably want to (initially at least) just dip your toes in the water and start with something small. For example, if I wanted to play with 12c Release 2, I could just sign up for an Exadata Express service and explore the new features without breaking the bank.
But whatever the need, accessing a database as a service as opposed to a server often brings that fear of "handing over the reins", that is, that I'll not be able to do the things I want to do, especially when it comes to OS-level access. And for a developer or DBA, perhaps the thing most likely to raise alarm bells is: "How will I access my trace files?"
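In 12c Release 2, that particular fear is largely unfounded, because trace files are exposed through the data dictionary itself; a minimal sketch (the trace file name is hypothetical):

-- which trace files exist?
select trace_filename, modify_time
from   v$diag_trace_file
order  by modify_time desc;

-- ... and read one of them, no OS access required
select payload
from   v$diag_trace_file_contents
where  trace_filename = 'orcl_ora_12345.trc'        -- hypothetical file name
order  by line_number;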
I didn't see a PL/SQL version on https://rosettacode.org for the Haversine formula, so here's a version for anyone who needs it.
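As a sketch of what such a function can look like (not necessarily the exact version from the post), with distances returned in kilometres using a mean earth radius of 6371 km:

create or replace function haversine_km (
  p_lat1 number, p_lon1 number,
  p_lat2 number, p_lon2 number ) return number is
  c_radius constant number := 6371;                 -- mean earth radius in km
  c_d2r    constant number := acos(-1) / 180;       -- degrees to radians
  l_dlat   number := (p_lat2 - p_lat1) * c_d2r;
  l_dlon   number := (p_lon2 - p_lon1) * c_d2r;
  l_a      number;
begin
  l_a := power(sin(l_dlat / 2), 2)
         + cos(p_lat1 * c_d2r) * cos(p_lat2 * c_d2r) * power(sin(l_dlon / 2), 2);
  return 2 * c_radius * asin(sqrt(l_a));
end;
/

-- BNA to LAX, roughly 2886 km
select haversine_km(36.12, -86.67, 33.94, -118.40) from dual;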