The Most Important Tool for SQL Tuning

Summary: SQLT is a tool that collects comprehensive information on all aspects of a SQL performance problem. SQL tuning experts know that EXPLAIN PLAN is only the proverbial tip of the iceberg, but this fact is not well recognized by the Oracle database community, so much evangelization is necessary.

I remember trying to solve a production problem a long time ago. I did not have any tools, but I was good at writing queries against the Oracle data dictionary. How does one find the operating-system PID of an Oracle dedicated server process? Try something like this:

select spid from v$process where addr = (select paddr from v$session where sid = '&sid');

My boss was not amused.

After the incident, he got me a license for Toad.

Writing queries against the data dictionary is macho but it is not efficient.

Tools are in.

Fast forward many years. Now one of my specialties is tuning SQL statements. There is a general lack of appreciation for the vast spectrum of reasons that may lie behind poor performance of a SQL statement. Therefore, the opening gambit is usually “Here is the EXPLAIN PLAN. What’s wrong?”

First of all, note that EXPLAIN PLAN does not take bind variables into account and therefore the EXPLAIN PLAN may not be the plan that was used at run time. Further, plans can change at any time for dozens of reasons: bind variables, statistics, cardinality feedback, adaptive query optimization, etc.
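The difference is easy to demonstrate. Here is a minimal sketch: the query and the sql_id are placeholders (you would look up the real sql_id in V$SQL), and DBMS_XPLAN does the rest.

```sql
-- EXPLAIN PLAN shows what the optimizer predicts, with no knowledge of bind values:
EXPLAIN PLAN FOR
SELECT * FROM employees WHERE department_id = :dept;  -- hypothetical query
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- The plan that was actually used at run time lives in the cursor cache:
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'TYPICAL'));
```

The two plans often disagree, and when they do, it is the cursor-cache plan that explains your performance problem.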

What if the query plan is just fine but the environment is having difficulties?

And don’t expect me to tune a SQL statement without all the information about the tables and indexes involved in the query, all the information that feeds into plan generation, the execution history of the SQL statement, and detailed information about the environment in which the statement is executing.

Does the query refer to views or PL/SQL functions? I need information on them, too.

Why not give me a test case, perhaps even with data?

Carlos Sierra was a genius who worked in the SQL performance team of Oracle Support for many, many years. He realized that the process of collecting the necessary information was unsystematic, long, and drawn out. He therefore created a tool called SQLT (the T stands for Trace) to collect all the required information and present it in the form of an HTML report.

View definitions? Check.

Optimizer settings? Check.

Environment information? Check.

Execution history? Check.

There’s even a 10053 trace and a test case that I can run in my own environment.

Here’s a screen shot of the HTML report that I mentioned.

Outlines and patches and profiles and baselines and directives, oh my!

Security policies? Really?

Since the SQLT tool was produced by Oracle Support, there’s a Support note on it. There’s even a whole book on it, written by Stelios Charalambides, one of Carlos’s partners in crime.

The Case of the Foolish Fix

Performance of many queries had worsened when the database was upgraded from version 11.2.0.3 to 12.1.0.2. In this case, the problem was definitely the query plan. In version 11.2.0.3, the cost-based optimizer picked the right driving table but not in 12.1.0.2. I used a SQLT feature called XPLORE to solve the problem. First, I set up a test case in my own environment. Then I unleashed XPLORE on it. It systematically changed the values of optimizer settings and “fix controls” to see how the optimizer would respond in each case. After an hour and a half of processing—or so it seemed—it hit pay dirt. A certain fix, first inserted into version 11.2.0.4, had corrected the “single table cardinalities” in the presence of transitive predicates. Unfortunately, the CBO was producing worse plans with the corrected cardinalities. While this is certainly cause for consternation, the problem had been identified. Restoring performance was now simply a matter of disabling the fix.
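Once XPLORE fingers a specific fix, disabling it is a one-liner. The fix number below is purely hypothetical; XPLORE reports the actual number in its findings.

```sql
-- Disable one specific optimizer fix for the session (hypothetical fix number):
ALTER SESSION SET "_fix_control" = '9999999:0';

-- Or scope it to a single statement with a hint (hypothetical query):
SELECT /*+ OPT_PARAM('_fix_control' '9999999:0') */ COUNT(*) FROM orders;
```

As with any underscore parameter, this belongs in production only with the blessing of Oracle Support.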

The Case of the Puzzling Policy

A certain query executed lightning fast in the development environment and dog-slow in the production environment. The problem had been puzzling everybody for months. When I reviewed the SQLT report, I saw the curious filter “NULL IS NOT NULL” in the development environment. This told me that a security policy was in effect and that it was disabling part of the query plan in the development environment. ‘Nuff said.
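For the record, “NULL IS NOT NULL” is the always-false filter the optimizer injects when a predicate can never be true, for example when a VPD policy function returns a contradictory predicate, and it prunes that branch of the plan at parse time. A quick way to see which policies apply (the table name here is hypothetical):

```sql
-- Which row-level security (VPD) policies are attached to this table?
SELECT object_owner, object_name, policy_name, function
  FROM dba_policies
 WHERE object_name = 'ORDERS';  -- hypothetical table name
```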

The Case of the Curious Constraints

The DBA insisted that there was no difference between the two environments, and yet the query performed better, much better, in one of them. I used the COMPARE feature of SQLT but could not find any differences between the two environments that could explain the oddity. It would seem that SQLT had failed me. But I was able to write simple queries against the data that SQLT had collected from the two environments, and I found that three table constraints that were enabled in the faster environment were disabled in the slower environment. Why would disabling constraints make queries run slower? Join elimination, you see. The DBA said that he was aware of the disabled constraints but had not considered them important enough to mention. Well, live and learn.
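Join elimination is the optimizer trick in play here: when a join contributes no columns to the result and an enabled, validated foreign key guarantees the join cannot change the row count, the optimizer can drop the joined table from the plan entirely. A minimal sketch with hypothetical EMP and DEPT tables:

```sql
-- No DEPT columns are selected, so with an enabled, validated foreign key
-- from EMP.DEPTNO to DEPT.DEPTNO the optimizer can skip DEPT altogether:
SELECT e.ename
  FROM emp e
  JOIN dept d ON d.deptno = e.deptno;

-- Disable the constraint and the join comes back (and so does its cost):
ALTER TABLE emp MODIFY CONSTRAINT emp_dept_fk DISABLE;  -- hypothetical constraint name
```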

The Case of the Immense Indexes

The customer was loading hundreds of billions of rows of new data into a table using INSERT statements, a process that was expected to last weeks. However, the insertion rate had been slowing down dramatically as the days passed, and what had been expected to take weeks would likely take months. It was clear from the wait-event information in the SQLT report that Oracle was spending a big chunk of time updating two indexes, but what was the reason for the slowdown? There was enough execution-history information in the AWR report for me to graph the slowdown over time and prove that it was directly proportional to the size of the indexes. It was too late to redesign the loading strategy, but at least I was able to explain the problem to the customer.

Always a But

Unfortunately, SQLT is not perfect. The biggest problem is that it is written in PL/SQL, which must be loaded into the target database, and it creates lots of tables in a special schema. This is anathema in production environments. My other pet peeve is that SQLT does not support STATSPACK. The creators of SQLT believed that most customers are licensed for AWR, but I do not believe that’s the case. Finally, SQLT is only available from My Oracle Support, so you’re out of luck if you do not have access to it.

And a Word of Caution

It is easy to get very excited about SQLT, but it’s not the be-all and end-all. There will always be cases where SQLT is not enough. For example, you might find the answer you are looking for in an AWR report. Remember to keep an open mind.

Taking the Plunge

Read the Support note first, then download SQLT, install it in a test environment, and play with it. Don’t forget the COMPARE and XPLORE features. If you’ve got a few spare dollars left over from Christmas shopping, I recommend the book by Stelios Charalambides.

Make learning about SQLT your New Year’s resolution. Happy learning.

Iggy Fernandez is the editor of the NoCOUG Journal, the print journal of the oldest-running Oracle user group in the world, which celebrated thirty years last year. He was an Oracle DBA for many years but now specializes in SQL tuning. He used to speak at Oracle conferences all over the country but is now completely bored with that gig, except in the case of the RMOUG Training Days, which treats its speakers so well, even picking them up at the airport and taking them back.