Oakies Blog Aggregator

Finding the blocking SQL in a lock wait

One of my pet peeves with Oracle is the inability to find out what SQL took out a lock that another user is waiting on. It’s easy to find the waiting user and their SQL with v$session by looking for v$session.event values of “enqueue” (v8 and v9) or “enq: TX – row lock contention” (10g onwards), and then looking up their SQL via v$session.sql_hash_value, which joins to v$sql.hash_value for the v$sql.sql_text.

So far so good, and easy.

The second step – finding the blocker – is really easy starting in 10g, because Oracle has a new field, v$session.blocking_session, which can be joined back to v$session.sid to find information on that user.
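A minimal sketch of that second step, joining the waiter to its blocker (single-instance form; on RAC you would use gv$session and also match blocking_instance against inst_id):

```sql
-- Find each TX-enqueue waiter and the session blocking it (10g+).
-- Single-instance sketch; adjust for RAC with gv$session/blocking_instance.
select w.sid            waiter_sid,
       w.sql_hash_value waiter_hash,
       b.sid            blocker_sid,
       b.username       blocker_user,
       b.status         blocker_status
from   v$session w,
       v$session b
where  w.blocking_session = b.sid
and    w.event like 'enq: TX%';
```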
The rub is that there is no way to find the SQL text that the blocking session ran when it took out the original blocking lock.

For the 2-day course I teach on Active Session History (ASH) and Oracle wait events, I wanted to show students how to actually get the blocking SQL text if they really had to.

I went as far as looking at Log Miner to try and get the blocking SQL text; this works sometimes and sometimes it doesn’t. At that point I gave up, knowing the next step was dumping the redo logs, which was more research than I felt like doing at the time. Luckily someone has picked up the torch – Doug Burns!

On the OakTable email list I shared my research with Doug, and Doug took it even further and posted it on his blog:
  • Diagnosing Locking Problems using ASH – Part 1 – try OEM 10g on a lock problem (http://oracledoug.com/serendipity/index.php?/archives/1477-Diagnosing-Locking-Problems-using-ASH-Part-1.html)
  • Diagnosing Locking Problems using ASH – Part 2 – look at raw ASH data, missing blocker SQL (http://oracledoug.com/serendipity/index.php?/archives/1478-Diagnosing-Locking-Problems-using-ASH-Part-2.html)
  • Diagnosing Locking Problems using ASH – Part 3 – look at raw ASH data, find the blocker SQL (http://oracledoug.com/serendipity/index.php?/archives/1480-Diagnosing-Locking-Problems-using-ASH-Part-3.html)
  • Diagnosing Locking Problems using ASH – Part 4 – misleading data from ASH (http://oracledoug.com/serendipity/index.php?/archives/1481-Diagnosing-Locking-Problems-using-ASH-Part-4.html)
  • Diagnosing Locking Problems using ASH – Part 5 – logminer undependable (http://oracledoug.com/serendipity/index.php?/archives/1484-Diagnosing-Locking-Problems-using-ASH-Part-5.html)
  • Diagnosing Locking Problems using ASH – Part 6 – overview of all the parts (table of contents) (http://oracledoug.com/serendipity/index.php?/archives/1487-Diagnosing-Locking-Problems-using-ASH-Part-6.html)
  • Diagnosing Locking Problems using ASH/LogMiner – Part 7 – redo log dump (http://oracledoug.com/serendipity/index.php?/archives/1488-Diagnosing-Locking-Problems-using-ASHLogMiner-Part-7.html)
  • Diagnosing Locking Problems using ASH/LogMiner – Part 8 – redo log dump lacks “select for update” (http://oracledoug.com/serendipity/index.php?/archives/1491-Diagnosing-Locking-Problems-using-ASHLogMiner-Part-8.html)
  • Diagnosing Locking Problems using ASH/LogMiner – Part 9 – redo log dump – search for traces of “select for update” (http://www.blogger.com/Diagnosing%20Locking%20Problems%20using%20ASH/LogMiner%20%E2%80%93%20Part%209)
Long story short, the best way to try and see what change caused the lock (when there was a change, and not a “select for update”) is to use flashback information. For example, if our contended table was TEST_TAB and the column we knew was modified was VAL1, then we could try to find what it was changed from:

Session 1

update test_tab set val1='aa' where id=1;

Session 2

update test_tab set val1='aaa' where id=1;

Blocking info from ASH where wait is enq: TX – row lock contention

select
      distinct
      lpad(ltrim(to_char(p2,'XXXXXX')),6,'0')||'00'||
      ltrim(to_char(mod(p3,256),'XX'))||
      ltrim(to_char(trunc(p3/256),'XX'))||'0000'
      block_xid,
      to_char(p2,'XXXXXXXX') p2hex,
      to_char(p3,'XXXXXXXX') p3hex,
      trunc(p2/65536) usn,
      mod(p2,65536) slot,
      p3 sqn, xid wait_xid
from v$active_session_history
where event like 'enq: T%'
and sample_time > sysdate - &v_minutes/(60*24)
;

BLOCK_XID	 P2HEX     P3HEX	USN	   SLOT        SQN  WAIT_XID
---------------- --------- --------- ---------- ---------- ---------- ----------------
0A00010072640000     A0001      6472	     10	         1      25714

Data from flashback, after session 1 commits (before the commit there is no data returned)

SELECT        VERSIONS_XID
       ,      VERSIONS_STARTTIME
       ,      VERSIONS_ENDTIME
       ,      VERSIONS_STARTSCN
       ,      VERSIONS_ENDSCN
       ,      VERSIONS_OPERATION
       ,      id
       ,      val1
       FROM   TEST_TAB 
              VERSIONS BETWEEN
              TIMESTAMP MINVALUE AND MAXVALUE
     where VERSIONS_XID=HEXTORAW('0A00010072640000')
      ORDER  BY  VERSIONS_STARTTIME
/
VERSIONS_XID	 VERSIONS_STARTTIME    ENDTIME STARTSCN  ENDSCN  V          VAL1
---------------- --------------------- ------- -------- -------  - --------------
0A00010072640000 15-OCT-13 06.46.30 PM         17042888	         U            aa


Now, that’s not the blocking SQL, but at least you can see the value the blocker changed the field to, so you can guess to some degree what the actual SQL was. Not great, but better than nothing.

Oracle’s In-Memory Database: The True Cost Of Licensing

At last, the long-awaited release of Oracle Database 12.1.0.2 has arrived, including the highly anticipated In-Memory Database Option. I have had the pleasure of being involved in the early beta stages of this feature and I have to say: I'm impressed. This is truly game changing technology, and has been extremely well designed and implemented by […]


In-Memory Column Store and More

It was inevitable that I would have to have a play with the In-Memory Column Store.

It’s just a functional look. I have no idea about performance at the moment, although I should be starting a proof of concept in the next couple of weeks for one of our internal systems, so it will be interesting to see how it holds up. :)

Another thing that has come out of this release is the Full Database Cache Mode. That’s nothing to do with the In-Memory Column Store, but I can definitely see us using this on some of our stuff!
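As a side note, the switch for that mode is a one-liner; a sketch, under my understanding that it must be run as SYSDBA with the database mounted but not open:

```sql
-- Sketch: enable Full Database Cache Mode (run as SYSDBA).
shutdown immediate;
startup mount;
alter database force full database caching;
alter database open;
```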

Finally, I’ve done a quick write up of the Smart Flash Cache stuff for my OCP exam notes.

It’s been out since 11.2.0.2, so I’m not sure why it is considered a 12c new feature, but it is part of the exam, so I thought I better cover it.

There have been a lot of changes to the multitenant option, so I’m probably going to spend some time amending/rewriting those articles to take on board the new stuff. From what I can see so far, the stuff they’ve added has certainly plugged a lot of “holes” in that option. Looks good so far!

Cheers

Tim…


In-Memory Column Store and More was first posted on July 25, 2014 at 10:32 am.
©2012 "The ORACLE-BASE Blog".

12.1.0.2 Released With Cool Indexing Features (Short Memory)

Oracle Database 12.1.0.2 has finally been released and it has a number of really exciting goodies from an indexing perspective which include: Database In-Memory Option, which enables specific portions of the database to be in dual format, in both the existing row based format and additionally into an efficient memory only columnar based format. This in […]

AWR Warehouse, Status

So the AWR Warehouse patches are out, but documentation has not officially caught up to them yet, so we appreciate your patience.  I thought I would post about what I went over in my webinar last week, when I had the wonderful opportunity to speak to the ODTUG members on this feature that everyone is so anxious to get their hands on.

Let’s start with some of the top questions:

1. Can I just load the AWR data into the OMR, (Enterprise Manager Repository) database?

A.  No, it is highly recommended that you do not do this – use a separate database and server to house the AWR Warehouse.

2. What licensing requirements are there?

A.  The AWR Warehouse (AWRW, my new acronym and hashtag, #AWRW) requires the Diagnostic Pack license, and with this license a limited-use EE license is included for the AWR Warehouse.  This is subject to licensing changes in the future, but at this time it is a great opportunity, considering all the great features included in the AWR Warehouse to house, partition, access and report on AWR data via EM12c, etc.

3.  Can I query the AWR Warehouse directly?

A.  It is another Oracle database, so yes, of course you can!

Behind the Scenes

The AWR Warehouse’s data is sourced from target databases in EM12c, providing the ability to retain AWR data indefinitely or for any period of time you choose.  For retail companies or those that do heavy once per year processing, this is gold.  The idea that you can do comparisons on performance a year ago vs. today is incredibly valuable.

This data is loaded via an ETL process using an agent-to-agent direct push of the data, initiated by Enterprise Manager.  The actual export on the source database and import on the AWR Warehouse are performed by a DBMS_Scheduler job local to those servers.

[Image: awrw_etl]

The actual interval on the source database and AWR Warehouse depends on whether you’ve just added the database to the AWR Warehouse (a back-load of data requires “catch up”) or the AWRW ETL load has been disabled for a period of time.  There is a built-in “throttle” to ensure that no more than 500 snapshots are loaded at any given time, at intervals that cause very little to no network traffic in the environment.  During a catch-up that required a full 500 snapshots to load on a VM test environment, I was thrilled to see it took a maximum total execution time of less than 12 minutes and 2GB of data.  The network latency was nominal, too.

For the next sections, you will notice the naming convention of “CAW” at the beginning or middle of job and object names.  CAW stands for Consolidated AWR Warehouse, and you can use %CAW% as a filter in any AWRW-related search, including on source databases (targets).
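For example, a quick sketch of using that %CAW% filter to list the warehouse-related objects:

```sql
-- List AWR Warehouse (CAW) objects owned by DBSNMP.
select object_type, object_name
from   dba_objects
where  owner = 'DBSNMP'
and    object_name like '%CAW%'
order  by object_type, object_name;
```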

Source ETL Job

The job on the source database (target) uses Data Pump to extract the AWR data for the given snapshot(s) to an OS directory location, from which it is “pushed” agent-to-agent onto an AWR Warehouse OS directory location.

DBMS Scheduler Job Name: MGMT_CAW_EXTRACT

Exec Call: begin dbsnmp.mgmt_caw_extract.run_extract; end;

How Often: 3 Hour Intervals if “playing catch up”, otherwise, 24 hr interval.
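A sketch of checking the state and schedule of these CAW scheduler jobs from the data dictionary (run as a suitably privileged user):

```sql
-- Check the CAW ETL jobs' state and schedule.
select job_name, state, repeat_interval, last_start_date
from   dba_scheduler_jobs
where  owner = 'DBSNMP'
and    job_name like 'MGMT_CAW%';
```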

AWR Warehouse Job

This is the job that loads the data from source targets, (databases) to the AWR Warehouse.

DBMS Scheduler Job Name: MGMT_CAW_LOAD

Exec Call: begin dbsnmp.mgmt_caw_load.run_master; end;

How Often: 5 Minute Intervals

Biggest Resource Demand from the “run_master”:

begin dbms_swrf_internal.move_to_awr(schname => :1); end;

EM Job Service

The EM12c comes into play with the ETL job process by performing a direct agent to agent push to the AWR Warehouse via a job submitted to the EM Job Service.  You can view the job in the Job Activity in the EM12c console:

[Image: em_etl_job]

Under the Hood

The additions made to the source database (target) and to the AWR Warehouse – when a database is added or the warehouse is created – are done through the DBSNMP schema.  The objects currently begin with the CAW_ (Consolidated AWR Warehouse) naming convention, so they are easy to locate in the DBSNMP schema.

AWR Warehouse Objects

The additions to the DBSNMP schema are used to support the ETL jobs and ease mapping from the Enterprise Manager to the AWR Warehouse for AWR and ASH reporting.  The AWR schema objects that already exist in the standard Oracle database are updated to be partitioned on ETL loads by DBID, Snapshot ID or a combination of both, depending on what the AWR Warehouse developers found important to assist in performance.

There are a number of objects that are added to the DBSNMP schema to support the AWRW.  Note the object types and counts below:

[Image: caw_object_count]

The table of particular interest – for anyone with AWR queries they want to update to be AWRW-compliant – is the CAW_DBID_MAPPING table:

[Image: dbsnmp_caw_dbid_mapping]

You will primarily be joining the AWR objects’ DBID column to CAW_DBID_MAPPING.NEW_DBID/OLD_DBID to update those AWR scripts.

An example of changes required, would be like the following:

from   dba_hist_sys_time_model stm,
       dba_hist_snapshot s,
       gv$parameter p,
       dbsnmp.caw_dbid_mapping m
where  stm.stat_name in ('DB CPU', 'background cpu time')
and    LOWER(m.target_name) = '&dbname'
and    s.dbid = m.new_dbid
and    s.snap_id = stm.snap_id
and    s.dbid = stm.dbid
and    s.instance_number = stm.instance_number
and    p.name = 'cpu_count'
and    p.inst_id = s.instance_number

Notice that the simple addition of the mapping table, plus the extra predicates in the where clause, is all that’s required to query just the data for the database in question by DBID.

I’ve included some updated scripts to use as examples and hopefully give everyone a quick idea on how to work forward with the AWR Warehouse if you so decide to jump headfirst into querying it directly.

Source Database Objects

There are only a couple of additions to the source databases when they become part of the AWR Warehouse.

[Image: source_target_count]

The objects are only used to manage the AWR extract jobs and track information about the tasks.

CAW_EXTRACT_PROPERTIES : Information on ETL job, dump location and intervals.

CAW_EXTRACT_METADATA : All data about extracts- times, failures, details.
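Since the column layout of these tables isn’t fully documented yet, the safest sketch is simply to sample them on a source database and see what’s tracked:

```sql
-- Inspect the extract bookkeeping on a source (target) database.
-- Sketch: column lists not verified against the released version.
select *
from   dbsnmp.caw_extract_properties;

select *
from   dbsnmp.caw_extract_metadata
order  by 1;
```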

So….

Do you feel educated?  Do you feel overwhelmed?  I hope this was helpful to go over some of the processes, objects and information for AWR queries regarding the AWR Warehouse and I’ll continue to blog about this topic as much as I can!   This feature is brand new and as impressed and excited as I am about it now, I can’t wait for all there is to come!



Copyright © DBA Kevlar [AWR Warehouse, Status], All Right Reserved. 2014.

Excel connect to Oracle – 64bit and 32bit issues

Wow, thanks to Process Monitor (http://technet.microsoft.com/en-us/sysinternals/bb896645) I was able to track down why I couldn’t connect to Oracle from Excel.

I had wanted to try some of the examples Charles Hooper has posted on connecting to and monitoring Oracle, for example:

http://hoopercharles.wordpress.com/2010/01/20/excel-scrolling-oracle-performance-charts/

I kept getting the error “Provider not found”.

Now what kind of trace info is there for an error like this in Excel? None, AFAIK. Time to start guessing.

I’m on Windows 7 64-bit and have 64-bit Oracle 11gR2 installed.  Excel shows up in Task Manager as “EXCEL.EXE *32”. My first guess was, “oh, Excel must want the 32-bit libraries”, so I got the 32-bit Instant Client from Oracle, unzipped it into a directory and put that directory first in the path. Still no dice.

Did a lot of Google searches and turned up that I needed

oraoledb.dll

but this wasn’t in any of the Instant Client zips that I downloaded from

http://www.oracle.com/technetwork/topics/winsoft-085727.html

Turns out it’s in a download halfway down the page:

*Instant Client Package – ODAC: Includes ODP.NET, Oracle Services for MTS, Oracle Providers for ASP.NET, Oracle Provider for OLE DB, and OO4O with Oracle Instant Client

Downloaded this, put all the DLLs in the first directory in my path. Still no dice.

I tried “strace”, but it gave no output. Rrrr.

Then I tried Process Monitor – bingo.

With Process Monitor, I filtered on processes containing “excel”, ran the connect and got tons of output, but knew it was a problem with libraries. I found “oraoledb10.dll” among the output, listed in an old Oracle 10 install directory. Oracle 10 32-bit had initially been installed; that install gave no warnings but bombed out late in the game, so I removed the 10g and installed Oracle 11gR2 64-bit. (Oracle 11gR2 32-bit won’t even begin the install.)

So now I scoured the registry with regedit and found the old oraoledb10.dll here:

HKEY_CLASSES_ROOT\Wow6432Node\CLSID\{3F63C36E-51A3-11D2-BB7D-00C04FA30080}\InprocServer32

I changed this to the new location of oraoledb11.dll, and now it works.
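For anyone hitting the same thing, the fix can be captured as a .reg fragment; note the DLL path below is hypothetical – substitute the bin directory of your own Oracle home containing oraoledb11.dll:

```
Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\Wow6432Node\CLSID\{3F63C36E-51A3-11D2-BB7D-00C04FA30080}\InprocServer32]
@="C:\\app\\oracle\\product\\11.2.0\\client_1\\bin\\oraoledb11.dll"
```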

Oracle Database 12.1.0.2.0 – Turning on the In-Memory Database option

It is indeed as simple as switching a knob to turn it on. To enable it you will have to set a reasonable amount of...
Read More
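The knob in question is the inmemory_size initialization parameter; a sketch (the table name is made up, and since the parameter is static a restart is needed):

```sql
-- Reserve an In-Memory column store area (static parameter; restart required).
alter system set inmemory_size = 1G scope=spfile;

-- After restart, mark a (hypothetical) table for in-memory population:
alter table sales inmemory priority high;
```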

Oracle Database 12.1.0.2.0 – Native JSON Support (1)

Oracle Database 12.1.0.2 now has native built-in support for handling JSON (JavaScript Object Notation) data. Oracle Database supports JSON natively with relational database features, including...
Read More

Recurring Conversations: AWR Intervals (Part 2)

(Reminder, just in case we still need it, that the use of features in this post require Diagnostics Pack license.)

Damn me for taking so long to write blog posts these days. By the time I get around to them, certain very knowledgeable people have commented on part 1 and given the game away! ;-)

I finished the last part by suggesting that a narrow AWR interval makes less sense in a post-10g Diagnostics Pack landscape than it used to when we used Statspack.

Why do people argue for a Statspack/AWR interval of 15 or 30 minutes on important systems? Because when they encounter a performance problem that is happening right now or didn’t last for very long in the past, they can drill into a more narrow period of time in an attempt to improve the quality of the data available to them and any analysis based on it. (As an aside, I’m sure most of us have generated additional Statspack/AWR snapshots manually to *really* reduce the time scope to what is happening right now on the system, although this is not very smart if you’re using AWR and Adaptive Thresholds!)

However, there are better tools for the job these days.

If I have a user complaining about system performance then I would ideally want to narrow down the scope of the performance metrics to that user’s activity over the period of time they’re experiencing a slow-down. That can be a little difficult on modern systems that use complex connection pools, though. Which session should I trace? How do I capture what has already happened as well as what’s happening right now? Fortunately, if I’ve already paid for Diagnostics Pack then I have *Active Session History* at my disposal, constantly recording snapshots of information for all active sessions. In which case, why not look at

- The session or sessions of interest (which could also be *all* active sessions if I suspect a system-wide issue)
- For the short period of time I’m interested in
- To see what they’re actually doing

Rather than running a system-wide report for a 15 minute interval that aggregates the data I’m interested in with other irrelevant data? (To say nothing of having to wait for the next AWR snapshot or take a manual one and screwing up the regular AWR intervals ...)
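As a hedged sketch, this is the kind of narrow ASH query being described – all activity for one session of interest over the last few minutes (&sid is a substitution variable you supply):

```sql
-- Recent ASH samples for one session of interest (sketch).
select sample_time, sql_id, event, session_state
from   v$active_session_history
where  session_id = &sid
and    sample_time > sysdate - 10/(60*24)
order  by sample_time;
```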

When analysing system performance, it’s important to use the most appropriate tool for the job and, in particular, focus your data collection on what is *relevant to the problem under investigation*. The beauty of ASH is that if I’m not sure what *is* relevant yet, I can start with a wide scope of all sessions to help me find the session or sessions of interest and gradually narrow my focus. It has the history that AWR has, but with finer granularity of scope (whether that be sessions, sql statements, modules, actions or one of the many other ASH dimensions). Better still, if the issue turns out to be one long-running SQL statement, then a SQL Monitoring Active Report probably blows all the other tools out of the water!

With all that capability, why are experienced people still so obsessed with the Top 5 Timed Events section of an AWR report as one of their first points of reference? Is it just because they’ve become attached to it over the years of using Statspack? AWR has its uses (see JB’s comments for some thoughts on that, and I’ve blogged about it extensively in the past) but analysing specific performance issues on Production databases is not its strength. In fact, if we’re going to use AWR, why not just use ADDM and let software automatically perform the same type of analysis most DBAs would do anyway (and in many cases, not as well!)?

Remember, there’s a reason behind these Recurring Conversations posts. If I didn’t keep finding myself debating these issues with experienced Oracle techies, I wouldn’t harbour doubts about what seem to be common approaches. In this case, I still think there are far too many people using AWR where ASH or SQL Monitoring are far more appropriate tools. I also think that if we stick with a one hour interval rather than a 15 minute interval, we can retain four times as much *history* in the same space! When it comes to AWR – give me long retention over a shorter interval every time!

P.S. As well as thanking JB for his usual insightful comments, I also want to thank Martin Paul Nash. When I was giving an AWR/ASH presentation at this spring’s OUGN conference, he noticed the bullet point I had on the slide suggesting that we *shouldn’t* change the AWR interval and asked why. Rather than going into it at the time, I asked him to remind me at the end of the presentation, and then, because I had no time to answer, I promised I’d blog about it that weekend. That was almost 4 months ago! Sigh. But at least I got there in the end! ;-)