One of the things that sounded kinda neat in SQL Developer 3.1 was the Data Pump Wizards, so I thought I would have a play with them.
As you would expect, they are pretty straightforward. They can't do everything you can do with expdp and impdp, but they are pretty cool for on-the-fly tasks.
You can use the wizard to generate Data Pump job definitions using the PL/SQL API. It would have been a nice touch if it also gave you the option to see a regular command-line or parameter-file definition, since I would be more likely to put that into source control than the PL/SQL API definition of a job. Even so, it's a nice little feature.
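For context, a wizard-generated job definition amounts to a short anonymous PL/SQL block driving the DBMS_DATAPUMP package. The sketch below is illustrative only; the schema, directory, and file names are made-up examples, not anything the wizard itself produced.

```sql
-- Hedged sketch of the kind of DBMS_DATAPUMP job the wizard defines;
-- SCOTT, DATA_PUMP_DIR and the file names are example values.
DECLARE
  h NUMBER;
BEGIN
  -- Create a schema-mode export job and get a handle to it.
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');

  -- Dump file and log file, both in the DATA_PUMP_DIR directory object.
  DBMS_DATAPUMP.ADD_FILE(h, 'scott.dmp', 'DATA_PUMP_DIR');
  DBMS_DATAPUMP.ADD_FILE(h, 'scott_exp.log', 'DATA_PUMP_DIR',
                         filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

  -- Restrict the export to a single schema.
  DBMS_DATAPUMP.METADATA_FILTER(h, 'SCHEMA_EXPR', 'IN (''SCOTT'')');

  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.DETACH(h);
END;
/
```

The rough command-line equivalent, which is what I'd rather keep in source control, would be something like `expdp system SCHEMAS=scott DIRECTORY=DATA_PUMP_DIR DUMPFILE=scott.dmp LOGFILE=scott_exp.log`.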
I’ve recently returned from a great two-week holiday, firstly at the Australian Open tennis (what a final!) and then up at the Gold Coast in not-quite-so-sunny Queensland. Time now to get back to my blog. In my previous IOT examples, we had a very large column called Description which we didn’t [...]
February 8, 2012 Default parameter values are great, as long as it is possible to predict the default values, and the defaults are appropriate for the environment in which they are used. It is sometimes a challenge to remember all of the rules, and the exceptions to those rules, that determine the defaults. I had a [...]
This is just a short note that Oracle has added several nice details in recent releases that can be helpful for troubleshooting.
ASH, PGA Memory And TEMP Consumption
Since 11.2.0.1 the V$ACTIVE_SESSION_HISTORY view (which requires Enterprise Edition plus the Diagnostics Pack license) contains the PGA_ALLOCATED and TEMP_SPACE_ALLOCATED columns.
In particular the latter closes an instrumentation gap that has always bothered me: previously it wasn't easy to answer the question of which session allocated TEMP space at some point in the past. Of course it is easy to answer while the TEMP allocation is still held by a session, by looking at the corresponding V$ views like V$SORT_USAGE, but once the allocation was released, a question like "why was my TEMP space exhausted three hours ago?" couldn't be answered from the information provided by Oracle.
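To make the point concrete, a retrospective TEMP question can now be approached with a query along these lines. This is a sketch, assuming Enterprise Edition plus the Diagnostics Pack; the time window is an arbitrary example.

```sql
-- Sketch: which sessions held the most TEMP space during a past window?
-- TEMP_SPACE_ALLOCATED is sampled per session, so taking the maximum
-- sampled value per session approximates its peak allocation then.
SELECT session_id,
       session_serial#,
       MAX(temp_space_allocated) / 1024 / 1024 AS peak_temp_mb
FROM   v$active_session_history
WHERE  sample_time BETWEEN systimestamp - INTERVAL '4' HOUR
                       AND systimestamp - INTERVAL '3' HOUR
GROUP  BY session_id, session_serial#
ORDER  BY peak_temp_mb DESC;
```

Because ASH is a sampled view, short-lived allocations between samples can still be missed, but for a "who exhausted TEMP three hours ago" question this is usually enough.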
Oh, well ... having decided that I was going to skip the Symposium this year, everything changed.
My friend Randolf Geist had to cancel his attendance so when he saw my previous post he asked if I'd be prepared to step in with a couple of presentations so that he wouldn't feel quite so bad about letting people down. I asked for a little while to think about it (because all of my original reasons for not attending still existed) but in the end was happy to help out.
To be crystal clear, Randolf offered this option to the Hotsos folk because it can be difficult lining up replacement speakers so late in the day and they decided that they wanted to go with it.
My website was down for the best part of an hour yesterday, between 15:30 and 16:30 UK time.
I contacted the hosting provider, but each time they looked at the site it was up, so they closed the ticket. The fact that 30 seconds later it was down again did not seem to worry them. I asked for something a little more substantial than, “It’s working now. Your ticket has been closed”, but unfortunately it was not forthcoming.
Since that incident it seems to have been stable. Fingers crossed, whatever it was will not happen again.
BLOG UPDATE 2012.02.08: I changed the URL of the kit and uploaded a new tar archive with permission changes.
We’ve all been there. You’re facing the need to assess Oracle random physical I/O capability on a given platform in preparation for OLTP/ERP style workloads. Perhaps the storage team has assured you of ample bandwidth for both high-throughput and high I/O operations per second (IOPS). But you want to be sure and measure for yourself so off you go looking for the right test kit.
This is the February 1, 2013 drop of the SLOB tar archive. After downloading it, please compute an md5 checksum on it. For example, using the OS X md5 command you will see the following type of output. The md5 checksum will, of course, be the same whether computed by the Linux md5sum command or any other such tool.
The changes in this tarball are to the cr_db.sql script, which had a faulty alter tablespace command in it. I've also added a very crude yet helpful AWR post-processing script called awr_info.sh under the misc directory. See the README in that directory for more info on the script.
$ ls -l 2013.02.01-slob-kit.tar_.gz
-rw-r--r--@ 1 clossk staff 10040 Feb 1 16:16 2013.02.01-slob-kit.tar_.gz
$ md5 2013.02.01-slob-kit.tar_.gz
MD5 (2013.02.01-slob-kit.tar_.gz) = f157b1c51b553f90df53fcea95f89632
Most execution plans can be interpreted by following a few basic rules (in TOP, Chapter 6, I provide such a list of rules). Nevertheless, there are some special cases. One of them is when an index scan, in addition to the access predicate, has a filter predicate that applies a subquery.
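A minimal, hypothetical setup that can produce this shape of plan might look as follows; the table and index names are invented for illustration, and whether the optimizer actually applies the subquery as a filter predicate on the index scan depends on the version and the statistics.

```sql
-- Illustrative schema: t1 with an index on n, and a subquery on t2
-- that may be applied as a filter predicate on the index scan of t1_n_i.
CREATE TABLE t1 (id NUMBER PRIMARY KEY, n NUMBER, pad VARCHAR2(100));
CREATE INDEX t1_n_i ON t1 (n);
CREATE TABLE t2 (n NUMBER, flag VARCHAR2(1));

SELECT *
FROM   t1
WHERE  t1.n IN (SELECT t2.n FROM t2 WHERE t2.flag = 'Y');
```

In the interesting case, the plan shows both an access predicate on INDEX RANGE SCAN T1_N_I and a filter predicate containing the subquery, which is exactly where the basic "read the children first" rules stop telling the whole story.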
So far we've explored playing around with a few triggers to implement a business rule. In this post I'd like to step back a bit and take a bird's-eye view of the common use cases that I see brought up for introducing triggers into the app-development picture.
The first use case is: we use triggers to assign or modify column values of mutating rows (that is, rows currently being inserted, updated, or deleted). Here are a few typical examples of this use case that, I'm sure, must look familiar to you too.
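The canonical instance of this use case is a row-level BEFORE trigger stamping audit columns. A minimal sketch, with an invented EMP table and column names chosen purely for illustration:

```sql
-- Hypothetical audit-column trigger: assigns values on the mutating row.
-- Table and column names (emp, created_by, created_on) are examples only.
CREATE OR REPLACE TRIGGER emp_bi
BEFORE INSERT ON emp
FOR EACH ROW
BEGIN
  -- :new refers to the row being inserted, i.e. the mutating row.
  :new.created_by := USER;
  :new.created_on := SYSDATE;
END;
/
```

The defining property is that the trigger writes into `:new` of the very row that fired it, rather than touching any other table.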