Summary: SQLT is a tool that collects comprehensive information on all aspects of a SQL performance problem. SQL tuning experts know that EXPLAIN PLAN is only the proverbial tip of the iceberg, but this fact is not well recognized by the Oracle database community, so much evangelization is still necessary.
I remember the time I was trying to solve a production problem a long time ago. I did not have any tools but I was good at writing queries against the Oracle data dictionary. How does one find the PID of an Oracle dedicated server process? Try something like this:
select spid from v$process where addr = (select paddr from v$session where sid = '&sid')
My boss was not amused.
After the incident, he got me a license for Toad.
Writing queries against the data dictionary is macho but it is not efficient.
Tools are in.
Thanks so much to everyone who attended the Ohio Oracle User's Group event with me on July 18. I really enjoyed the opportunity to be there. The group appears to be vibrant and growing and everyone involved made it a top-notch experience (for me at least!).
My topic for the day was "SQL Stuff You Should Know" and was a mash-up of lots of different bits and pieces of information about SQL Tuning and the Oracle Optimizer. Links to my presentation slides will be made available on the OOUG web site, but I thought I'd also provide them here.
During last week's webinar, there were several questions that I did not have a chance to answer, so I wanted to follow up by answering those questions here. I also wanted to provide links to previous webinar recordings, as many people had asked where to find them.
Best Practices for Developing Optimally Performing SQL
Diagnosing SQL Performance Problems (Part 1 of 3 part series)
Thanks to dear Maria Colgan for her intervention, I am publicly admitting the error of my ways and vowing never to use the drug of the optimizer_index_cost_adj parameter again. I publicly admit (again) that I have (on occasion) used an extremely high OICA setting to try to "force" Exadata to use full table scans over index scans. I was wrong and I am sorry.
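To see why an extreme OICA setting is such a blunt instrument, here is a minimal sketch (in Python, not Oracle's actual costing code) of the arithmetic: OPTIMIZER_INDEX_COST_ADJ scales the optimizer's cost estimate for index access paths as a percentage of the default, so a huge value inflates every index path's cost across the board. The function name and sample costs are invented for illustration.

```python
# Hypothetical illustration of the OPTIMIZER_INDEX_COST_ADJ (OICA) effect.
# OICA is a percentage applied to index access path costs; 100 (the
# default) means "no adjustment". This is a sketch, not Oracle internals.

def adjusted_index_cost(index_cost: float, oica: int = 100) -> float:
    """Scale an index access cost by the OICA percentage."""
    return index_cost * oica / 100.0

# With the default, index and full-scan costs compare on even terms:
# an index path costed at 80 beats a full scan costed at 100.
print(adjusted_index_cost(80.0))          # 80.0  -> index wins vs. 100

# An extreme OICA of 10000 inflates the index cost 100x, "forcing"
# the optimizer toward full table scans for every statement -- which
# is exactly why it is a drug best avoided.
print(adjusted_index_cost(80.0, 10000))   # 8000.0 -> full scan wins
```

The catch, of course, is that the adjustment is system-wide: it skews every index access cost, not just the one you were trying to influence.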
I've had a great time at the Enkitec Extreme Exadata Expo (E4) this week in Dallas. I eagerly awaited the presentation of Maria Colgan ("The Optimizer Lady") where she promised to "show through the use of real-world examples what you can do to ensure the Optimizer fully understands the capabilities of the platform it is running on without having to mess with initialization parameters or Optimizer hints."
Join me for the 2nd Embarcadero Community webinar in my Performance Series on August 29 entitled "Making SQL Performance Solutions Stick". Registration is now open and I'd love to see you there!
Don't worry if you didn't attend the first session in July. This session will have plenty of new information and does not depend on your having attended the first event.
I'll be talking about:
Thanks to everyone who joined me for part 1 of my Oracle SQL Performance webinar series! Embarcadero will be posting the recording within the next few days. In the meantime, I thought I'd post the slides and a few scripts that were shown.
Thanks again and I hope to see you again next month for part 2!
You’ve probably heard sayings like “sometimes things aren’t always what they seem” and “people lie”. Well, sometimes execution plans lie. It’s not really by intent, but it is sometimes difficult (or impossible) to represent everything in a query execution tree in a nice tabular format like the one dbms_xplan gives.

One of the optimizations introduced back in 10gR2 was the use of bloom filters. Bloom filters can be used in two ways: 1) for filtering or 2) for partition pruning (bloom pruning), starting with 11g. Frequently the data models used in data warehousing are dimensional models (star or snowflake), and most Oracle warehouses use simple range (or interval) partitioning on the fact table date key column, as that is the filter that yields the largest I/O reduction from partition pruning (most queries in a time-series star schema include a time window, right?). As a result, it is imperative that the join between the date dimension and the fact table results in partition pruning.

Let’s consider a basic two-table join between a date dimension and a fact table. For these examples I’m using STORE_SALES and DATE_DIM, which are TPC-DS tables (I frequently use TPC-DS for experiments as it uses a [...]
[back to Introduction] In over six years of doing data warehouse POCs and benchmarks for clients, there is one area that I frequently see as problematic: “batch jobs”. Most of the time these “batch jobs” take the form of PL/SQL procedures and packages that perform some data load, transformation, processing, or something similar. The reason these are so problematic is that developers have hard-coded “slow” into them. I’m generally certain these developers didn’t know they had done this when they coded their PL/SQL, but nonetheless it happened.

So How Did “Slow” Get Hard-Coded Into My PL/SQL?

Generally “slow” gets hard-coded into PL/SQL because the PL/SQL developer(s) took the business requirements and did a “literal translation” of each rule/requirement one at a time, instead of looking at the “before picture” and the “after picture” and determining the most efficient way to make those data changes. Many times this surfaces as cursor-based row-by-row processing, but it can also appear as PL/SQL running a series of often poorly thought-out SQL commands.

Hard-Coded Slow Case Study

The following is based on a true story. Only the names have been changed to protect the innocent. Here is [...]
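The "literal translation" pattern can be sketched outside the database too. The toy example below (in Python; the data, the discount rule, and both function names are invented — this is not from the case study) contrasts a row-at-a-time loop, the shape a cursor-based PL/SQL procedure takes, with a single set-based pass, the shape a lone UPDATE ... WHERE statement takes. In Python both are loops, but in the database the first shape costs one fetch/execute round trip per row while the second is one statement the optimizer can attack as a whole.

```python
# Hypothetical sketch of "hard-coded slow": the same business rule
# ("discount amounts over a threshold by 10%") implemented two ways.
# Data and rule are invented for illustration.

THRESHOLD = 50_000.0

rows = [{"id": i, "amount": float(i * 10)} for i in range(1, 10_001)]

def row_by_row(rows):
    # Mirrors a PL/SQL cursor loop: fetch a row, test the rule, "update".
    # In the database this means one context switch per row.
    out = []
    for r in rows:
        r = dict(r)
        if r["amount"] > THRESHOLD:
            r["amount"] *= 0.9
        out.append(r)
    return out

def set_based(rows):
    # Mirrors a single UPDATE ... WHERE amount > :threshold:
    # one statement, one pass, the whole "after picture" at once.
    return [
        {**r, "amount": r["amount"] * 0.9} if r["amount"] > THRESHOLD else r
        for r in rows
    ]

# Both translations produce the same "after picture"...
assert row_by_row(rows) == set_based(rows)
# ...but only one of them scales when the row count grows.
```

The point is not the Python, of course: it is that the two versions are behaviorally identical, so the slow one was a translation choice, not a requirement.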
I will be giving two new presentations, both of which are now posted on my website.