Dan Tow, in his book SQL Tuning, lays out a simple method for tuning SQL queries. The idea is to diagram the query's tables, joins, and filters, and then use that diagram to find a strong candidate for the optimal execution plan.
The basics are pretty simple and powerful. Of course, many cases get more complex, and Dan goes into these complex cases in his book.
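As a rough illustration of the idea (the tables, filters, and hints below are hypothetical, not from Dan's book): estimate how selective each table's filter is, drive from the table with the best filter, and then follow the join links from there. The hints simply pin the plan such a diagram would suggest.

```sql
-- Hypothetical three-table query. The filter on CUSTOMERS is assumed
-- to be the most selective, so the diagram says: drive from C, then
-- nested-loop out along the join links to O and OD.
select /*+ leading(c o od) use_nl(o) use_nl(od) */
       c.name, o.order_date, od.amount
from   customers     c
       join orders        o  on o.cust_id  = c.cust_id
       join order_details od on od.order_id = o.order_id
where  c.region    = 'WEST'                 -- most selective filter
and    o.order_date > date '2016-01-01';    -- weaker filter
```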
Let’s say I’ve got a partitioned table, and because New Year’s Eve is coming around, I certainly don’t want to be called out at 12:01 a.m. because I forgot to add the required partition for the upcoming year.
Since 11g, I can sleep easy at night by using the INTERVAL partitioning scheme. Here’s my table:
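The original table definition isn't shown here, but an INTERVAL-partitioned table along these lines (names and dates are hypothetical) is all it takes. Oracle creates each new year's partition automatically the first time a row needs it:

```sql
-- Hypothetical table: one range partition to start with; Oracle
-- auto-creates a yearly partition whenever new data needs one.
create table sales
( sale_id   number,
  sale_date date,
  amount    number
)
partition by range (sale_date)
interval (numtoyminterval(1, 'YEAR'))
( partition p_2016 values less than (date '2017-01-01') );

-- This insert silently creates the 2017 partition - no midnight callout.
insert into sales values (1, date '2017-01-01', 100);
```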
Just a little video montage of the fun and learning from UKOUG. A great conference every year.
An interesting question came through on AskTom recently. The requirement was to perform a single pass through a source table, and load the data into three target tables.
Now that’s trivially achieved with a multi-table insert, but there was a subtle “twist” on this requirement. Each of the three target tables may already contain some, none or all of the rows from the source table. Hence the requirement was to “fill in the blanks”.
So here’s a little demo of one way we could achieve this.
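One way to sketch it (table and column names are hypothetical, not the actual AskTom demo): outer-join the source to each target so a null target key flags a missing row, then let a conditional multi-table insert fill only those gaps, all in a single pass of the source.

```sql
-- One pass over SRC; the outer joins detect which targets already
-- hold each row, and the WHEN clauses insert only the missing ones.
insert all
  when t1_pk is null then into t1 (pk, val) values (pk, val)
  when t2_pk is null then into t2 (pk, val) values (pk, val)
  when t3_pk is null then into t3 (pk, val) values (pk, val)
select s.pk, s.val,
       t1.pk as t1_pk,
       t2.pk as t2_pk,
       t3.pk as t3_pk
from   src s
       left join t1 on t1.pk = s.pk
       left join t2 on t2.pk = s.pk
       left join t3 on t3.pk = s.pk;
```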
Upgrading is always stressful – be it a computer, an Oracle database or an iPhone. There’s always a good chance of lost data, and of lost time spent dealing with complications.
So yesterday I picked up a new iPhone 7 from Verizon. The pickup was seamless. I had signed up for an upgrade program when I got the iPhone 6, so now I just walked in, gave them my old iPhone 6, and they gave me a new iPhone 7. It’s a bit scary giving up my old phone before restoring to my new phone, but I had a backup AND I asked Verizon to please not wipe my iPhone 6 for 24 hours in case there were upgrade errors. They normally wipe the phone immediately.
The day was off to a good start. It only took about 10 minutes to get the phone, and I had taken a full backup of my iPhone 6 the day before, so I thought I’d plug in, restore the backup, and, wow, that would be easy.
We’re being asked to store more and more data, yet keep backup windows, query performance and the like unchanged, no matter how much we store. As a result, more and more database shops need to partition their data. The problem is that partitioning data is a significant restructuring of the data, which incurs a large outage and the accompanying planning and coordination.
Unless you’re on 12.2.
Here’s a demo where we take an existing table and convert it to a partitioned table, online.
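A sketch of the 12.2 syntax, assuming a hypothetical populated SALES table: the conversion runs while applications keep reading and writing the table, so there is no big-bang outage to coordinate.

```sql
-- 12.2: convert an existing, populated, non-partitioned table to a
-- range-partitioned one without taking it offline.
alter table sales modify
  partition by range (sale_date)
  ( partition p2016 values less than (date '2017-01-01'),
    partition p2017 values less than (date '2018-01-01')
  ) online;
```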
Long before Streams, long before GoldenGate, if you wanted to keep data between sites synchronised in some fashion, or even allow sites to independently update their data, there was the Advanced Replication facility in Oracle. An “extension” of the concept of simple materialized views (or snapshots, as they were called then), it let you design complete replicated environments across Oracle databases.
But it was a non-trivial exercise to do this. You had to be familiar with replication groups, replication objects, replication sites, master groups, master sites, master definition sites, deferred transactions, quiescing, updatable materialized views, replication catalogs, conflict resolution…the list goes on and on.
Every so often, a DSS query that usually takes 10 minutes ends up taking over an hour (or one that takes an hour never seems to finish).
Why would this happen?
When investigating the DSS query, perhaps with wait event tracing, one finds that the query, which is doing full table scans and should be doing large multi-block reads and waiting on “db file scattered read”, is instead waiting on single-block reads, i.e. “db file sequential read”. What the heck is going on?
Single-block sequential reads during a query that should be doing scattered-read full table scans are a classic sign of reading rollback, and reading rollback can turn a minutes-long full table scan into one that takes hours.
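One way to check for this while the query is running (a sketch; the :sid bind is illustrative, and querying ASH requires the Diagnostics Pack license): see which files the single-block reads are hitting, and compare against the undo datafiles. If the reads land in the UNDO tablespace, the scan is rebuilding consistent-read versions from rollback.

```sql
-- Which file# are the single-block reads hitting for this session?
select p1 as file#, count(*) as reads
from   v$active_session_history
where  event = 'db file sequential read'
and    session_id = :sid
group  by p1
order  by reads desc;

-- Do those file numbers belong to the undo tablespace?
select file_id, tablespace_name
from   dba_data_files
where  tablespace_name like 'UNDO%';
```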
Excited to see the announcement of the Amazon RDS Performance Insights feature for database performance monitoring and tuning.
Having met the team for this project, I can say from my personal viewpoint that the importance and future success of this feature are clear as day to me. The team is awesomely sharp, the architecture is super impressive, and this is by far the most exciting performance monitoring and feedback system I’ve been involved with, surpassing the work I’ve done on Oracle’s performance monitoring and tuning system, Embarcadero’s DB Optimizer, and Quest’s Foglight and Spotlight. Not only does the feature provide its own dashboard, but it will also provide an API to power dashboards that already exist in the industry. I expect to see partners leveraging the API to provide new insights in their already existing database performance monitors.
I would think installing SQL*Plus on the Mac would be: point, click, download; point, click, install; bam, it works.
It did install mostly straightforwardly on my old Mac. Then I got a new Mac, and no dice.
Tried installing it myself, guessing at the downloads. First of all, why isn’t there just one download?
Downloaded instantclient, and instantclient with SQL*Plus, which turns out to be correct, but no dice. Still got errors. Gave up.
Came back to look at it again yesterday and followed this:
It worked like a charm.
Then I ran a shell script, oramon.sh, that used SQL*Plus, and got the error
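Errors like this from a script are often environment related on macOS: the dynamic loader can't find the Instant Client libraries when SQL*Plus is launched outside the interactive shell. A minimal sketch of the usual fix, assuming a hypothetical Instant Client location (the path and version are illustrative, not from the original post):

```shell
# Hypothetical install path - adjust to your Instant Client version/location.
ORACLE_HOME="$HOME/instantclient_12_1"
# Put sqlplus itself on the PATH.
export PATH="$ORACLE_HOME:$PATH"
# macOS resolves SQL*Plus's shared libraries via DYLD_LIBRARY_PATH.
export DYLD_LIBRARY_PATH="$ORACLE_HOME"
echo "DYLD_LIBRARY_PATH=$DYLD_LIBRARY_PATH"
```

Putting these exports at the top of the script (or in the shell profile it runs under) is often all that's needed.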