
SQL Monitoring, Flamegraph and Execution Plan Temperature 2.0

Two of the things that I like the most about SQL Monitoring reports are the ability to quickly spot where in the execution plan the time is spent (Activity% column, thank you ASH) and the fact you can collapse part of the plan. Too bad the two don’t “work” together, meaning if you collapse a part of the plan the Activity% is not rolled up at the collapsed level. I understand why it works that way (it might be confusing otherwise) but I’d still like to be able to collapse a node and get a “subtree Activity%” so I know whether that subtree is something I should be worried about or not (kind of…).
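As a side note, a quick way to pull such a report straight from SQL*Plus is DBMS_SQLTUNE.REPORT_SQL_MONITOR; a minimal sketch (the sql_id below is just a placeholder) looks like this:

select dbms_sqltune.report_sql_monitor(
         sql_id       => '&sql_id',   -- statement of interest
         type         => 'ACTIVE',    -- 'TEXT' and 'HTML' work too
         report_level => 'ALL') as report
  from dual;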

Little help in creating SQL Plan Baselines

One of the main design goals behind SQLd360 is to leave no installation or “evidence” behind in the database, i.e. there is no SQLd360 repository in the database, while there is a SQLTXPLAIN one (this isn’t necessarily bad, it’s just a different approach).

As a consequence, several little things SQLT provided are gone with SQLd360. For example, a few years ago (it’s been disabled by default for a while) SQLT generated a script to create a “custom” SQL Profile for the best performing plan it identified; something similar happened for SQL Plan Baselines, with plans stored in a SQL Tuning Set for quick implementation of baselines down the road.
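For reference, the kind of quick implementation those scripts aimed at boils down to loading a known good plan from the cursor cache into a baseline with DBMS_SPM; a minimal sketch (sql_id and plan_hash_value below are placeholders):

declare
  n pls_integer;
begin
  -- load the chosen plan from the cursor cache as an accepted baseline
  n := dbms_spm.load_plans_from_cursor_cache(
         sql_id          => '&sql_id',
         plan_hash_value => &plan_hash_value);
  dbms_output.put_line(n || ' plan(s) loaded');
end;
/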

Presentations on Slideshare

Every once in a while I get asked if I can email the PPT for a session that I delivered. I always say YES (of course) so I figured why not be proactive and upload the material for the presentations I delivered over the last several months. Under the “Pages” section on the right side of the page there is a new link “Presentations” that takes you to Slideshare.

It’s my first experience with Slideshare and I’m pretty sure I made mistakes along the way, so if you see something wrong just let me know (and let me know how to fix it, PLEASE).

The Most Important Tool for SQL Tuning

Summary: SQLT is a tool that collects comprehensive information on all aspects of a SQL performance problem. SQL tuning experts know that EXPLAIN PLAN is only the proverbial tip of the iceberg, but that fact is not well recognized by the Oracle database community, so much evangelization is necessary.

I remember the time I was trying to solve a production problem a long time ago. I did not have any tools but I was good at writing queries against the Oracle data dictionary. How does one find the PID of an Oracle dedicated server process? Try something like this:

select spid from v$process where addr = (select paddr from v$session where sid = '&sid');

My boss was not amused.

After the incident, he got me a license for Toad.

Writing queries against the data dictionary is macho but it is not efficient.

Tools are in.

Truncated CTAS text and SQL Plan Baselines

This is probably not earth-shattering (not that I ever blog earth-shattering things anyway) for many, but it does answer a question I got today: “Do you think a truncated SQL text for CTAS affects SPM’s ability to give me the desired plan?”

SQL text for CTAS is truncated as a result of bug 17982832 (sister bugs 18705302 and 20308798 affect 10046 and AWR respectively), but does this affect SPM? Can SPM match on the truncated text? Or can SPM perhaps see the whole text and match on the original SQL? Those are the questions I wanted to answer.

As usual a test is worth a thousand expert opinions so here it goes:
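Roughly, the idea is to run a CTAS, then compare the (truncated) text captured in the cursor cache with what SPM stores for it; a minimal sketch, with made-up object names and assuming a baseline gets captured or loaded for the statement:

create table t_copy as select * from dba_objects;

-- text as captured in the cursor cache (truncated by the bug)
select sql_id, sql_text from v$sql where sql_text like 'create table t_copy%';

-- text as stored by SPM for any matching baseline
select sql_handle, sql_text, accepted
  from dba_sql_plan_baselines
 where sql_text like 'create table t_copy%';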

Something new about SQL Plan Directives and 12.2

SQL Plan Directives (SPD) remind me of bind peeking about 10 years ago: a nice feature on paper that worked fine most of the time but caused some major headaches on those occasions when it didn’t. Luckily for bind peeking, social media wasn’t that popular 10 years ago, so it took a while for a troublemaking feature to “make a name” for itself; nowadays a couple of blog posts and many re-tweets are enough to get pretty popular, poor SPD!
DISCLAIMER: I happen to like both features a lot and think positively of them, I just admit the “transparent” behavior was a little too ambitious.

What I forgot and had to relearn about DESC indexes

The title for this blog post is probably more catchy than the post itself, but the election is close so every sort of campaign is allowed, right?
This post is another one of those “I knew it, but I forgot and got bitten back” blog posts, so hopefully next time I run into this behavior I’ll be quicker in recognizing it.

The goal of the SQL is to quickly return the Top-N rows that match some filter condition(s), sorted in descending order by one of those columns. It’s a pretty common requirement if you consider the filter/sort column to be a date (“give me the last day’s worth of transactions, starting with the most recent ones”) and many people would solve it using a DESC index on the date column.
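For illustration, the shape of the index and of the SQL is something like this (table and column names are made up):

create index trans_date_desc_idx on transactions (trans_date desc);

select *
  from (select *
          from transactions
         where trans_date >= sysdate - 1
         order by trans_date desc)
 where rownum <= 10;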

Which Observations would you like to see in SQLd360?

SQLd360 v1617 finally includes a new “Observations” section (section 1F) that I had pushed back for a long, very long time.

How to find file and block# to dump in Exadata

Probably because of my past with Oracle Support, I find myself collecting traces and dumps on a regular basis. For example, every time I see a row source operation that I would expect to perform only multiblock reads (e.g. Full Table Scan) doing single block ones, I collect some block dumps to understand why that is happening. It’s just a matter of looking at the raw SQL trace file (or P1/P2 from ASH for that matter) for the unexpected single block reads, grabbing file# and block#, and dumping the block.
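Outside Exadata that is straightforward; a minimal sketch of the ASH route (the sql_id is a placeholder and the datafile/block numbers in the dump command are example values):

-- P1/P2 of "db file sequential read" are file# and block#
select p1 file#, p2 block#, count(*) samples
  from gv$active_session_history
 where sql_id = '&sql_id'
   and event = 'db file sequential read'
 group by p1, p2
 order by samples desc;

-- then dump the block, e.g.
alter system dump datafile 7 block 12345;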

Single block reads in Exadata become “cell single block physical read” and the P1/P2 for the wait event don’t show the file#/block# but rather the cellhash# and diskhash# where the data came from. This is a little annoying to me because I can’t just grab cellhash#/diskhash# (plus bytes, which matches the block size for a single block read) and dump that, AS FAR AS I KNOW (corrections are very welcome here).

eAdam, SQLd360 hidden gem

The title is actually VERY wrong! eAdam isn’t a gem hidden inside SQLd360, it’s a standalone tool developed by Carlos Sierra and it’s been around for way longer than SQLd360. You can read more about eAdam here but in short its goal is to export AWR data in a portable way that can be restored in another database, something like a raw version of AWR Warehouse (kind of).

Every time you run SQLd360, the tool collects a reduced version of eAdam just for ASH data (both GV$ and DBA_HIST) for the SQL of interest, packs the result into the zip file and links it into the main page under column 5, “eAdam ASH”. The reason for doing so is that SQLd360 has tons of reports built on top of the most important columns of ASH, but what if you want to query another column that is not present in any report? With eAdam you basically have the whole ASH for this SQL ID to do all the data mining you want!
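For example, once the data is restored you can slice the ASH of that SQL by columns that have no dedicated report, the same way you would against DBA_HIST_ACTIVE_SESS_HISTORY (the sql_id is a placeholder and the column choice below is just illustrative):

select machine, blocking_session, count(*) samples
  from dba_hist_active_sess_history
 where sql_id = '&sql_id'
 group by machine, blocking_session
 order by samples desc;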