Why is my SYSAUX Tablespace so Big? Statistics_level=ALL

One of my most popular postings is about why your SYSTEM tablespace could be rather large. Recently I’ve had issues with a SYSAUX tablespace being considerably larger than I expected, so I thought I would do a sister posting on the reason.

The client I was working with at the time was about to go live with a new application and database. For various reasons I was a little anxious about how the Java application (the User Interface) would actually call the stored PL/SQL code I had helped develop. Initial workloads would be low, so I asked that STATISTICS_LEVEL be set to ALL so that bind variables (amongst other things) would be gathered. This was on version 10.2.0.4, by the way: Enterprise Edition and 4-node RAC.

We went live, issues were encountered and resolved, the usual way these things work. Then, a few weeks in and when everything was still very “exciting” from a problem resolution perspective, I got an odd email from the DBA team. Would we like them to add another datafile to the SYSAUX tablespace? Huh? I checked. I’d been watching the size of our application’s tablespaces but not the others {well, I was not supposed to be a DBA and I was watching an awful lot of other things}. Our SYSAUX tablespace was around 160GB in size, having pretty much filled its fifth datafile. Why? I checked to see what was taking up the space in the tablespace:

 select * from
 (select owner,segment_name||'~'||partition_name segment_name,bytes/(1024*1024) size_m
 from dba_segments
 where tablespace_name = 'SYSAUX'
 ORDER BY BLOCKS desc)
 where rownum < 40

OWNER              SEGMENT_NAME                                             SIZE_M
------------------ -------------------------------------------------- ------------
SYS                WRH$_LATCH_CHILDREN~WRH$_LATCH__14459270_3911            27,648
SYS                WRH$_LATCH_CHILDREN_PK~WRH$_LATCH__14459270_3911         26,491
SYS                WRH$_LATCH_CHILDREN~WRH$_LATCH__14459270_3537            23,798
SYS                WRH$_LATCH_CHILDREN_PK~WRH$_LATCH__14459270_3537         22,122
SYS                WRH$_LATCH_CHILDREN~WRH$_LATCH__14459270_4296            17,378
SYS                WRH$_LATCH_CHILDREN_PK~WRH$_LATCH__14459270_4296         16,818
SYS                WRH$_ACTIVE_SESSION_HISTORY~WRH$_ACTIVE_14459270_3          136
                   911
SYS                WRH$_SQLSTAT~WRH$_SQLSTA_14459270_3911                       96
SYS                WRH$_SQLSTAT~WRH$_SQLSTA_14459270_3537                       72
SYS                WRH$_SQLSTAT~WRH$_SQLSTA_14459270_4296                       47
SYS                WRH$_LATCH_MISSES_SUMMARY_PK~WRH$_LATCH__14459270_           45
                   3537
SYS                I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST~                              41
SYS                WRH$_SYSMETRIC_SUMMARY~                                      40
SYS                WRH$_LATCH_MISSES_SUMMARY_PK~WRH$_LATCH__14459270_           37

As you can see, almost all the space is being taken up by WRH$_LATCH_CHILDREN and WRH$_LATCH_CHILDREN_PK partitions. They are massive compared to other objects. A quick google did not come up with much: mostly hits just listing what is in SYSAUX, plus the odd person also seeing SYSAUX filled up by these objects and suggesting ways to clear down space, but nothing about the cause.

I had a chat with the DBAs and we quickly decided that this was going to be something to do with AWR, given the names of the objects – “WRH$_” objects are the tables underlying AWR. The DBA suggested my settings of 15-minute intervals and 35-day retention were too aggressive. I knew this was not the case; I’ve had more aggressive snapshot intervals and longer retention periods on far busier systems than this. I did not have access to Metalink at that point, so I asked the DBAs to raise a ticket, which they duly did.

Oracle Support cogitated for a couple of days and came back with the advice to reduce the retention period. Hmmmm. Via the DBA I asked Oracle Support to explain why those objects were so large when I had not seen this issue on several other systems. Was it a bug? I had by now compared notes with a friend from a previous site, which had 5-minute snapshot intervals and a two-month retention period, and their SYSAUX tablespace was about 10GB all in. I did not want to go changing things when we did not know it would fix the issue; we really wanted to stop the growth of SYSAUX as soon as possible, not just try a hunch.

As you probably realise from the title of this blog, the issue was not the snapshot interval or retention period but STATISTICS_LEVEL=ALL. The one and only hit you get in Metalink if you search on WRH$_LATCH_CHILDREN is note 874518.1. From V10.1.0.2 to V11.1.0.7, setting this parameter to ALL is known to create a lot of data about latch children and not clear it down when the AWR data is purged (Bug 8289729). The advice was to change STATISTICS_LEVEL and make the snapshot interval larger. I’d suggest you just need to alter the STATISTICS_LEVEL, unless you really, really need that extra information gathered. It seemed to take Oracle Support an extra day or two to find that note for us. {I’ve since checked out Metalink directly to confirm all this}.
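If you are in the same position, the check and the change are straightforward; a minimal sketch, assuming you are happy to drop back to the default TYPICAL level across all instances and are using an spfile:

 -- Check the current setting
 show parameter statistics_level

 -- Revert to the default; the SCOPE and SID clauses assume an spfile and RAC
 alter system set statistics_level = TYPICAL scope = both sid = '*';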

So with a known issue we felt confident that altering the initialisation parameter would solve the issue. It took a while for us to change the STATISTICS_LEVEL on the production system – Change Control for that site is rather robust. This allowed us to see some other impacts of this issue.

The mmon process, which looks after AWR data, was becoming a top session in our OEM performance screens. In particular, a statement with SQL id 2prbzh4qfms7u that inserted into the WRH$_LATCH_CHILDREN table was taking several seconds to run each time and was running quite often {I include the SQL ID as it may be the same on many Oracle V10 systems as it is internal code}:


The internal SQL inserting into wrh$_latch_children was becoming demanding

This was doing a lot of IO, by far the majority of the IO on our system at the time – it was a new system and we had been able to tune out a lot of the physical IO.


The physical IO requirements and 15-20 second elapsed time made this our most demanding statement on the system

We also now started to have issues with mmon running out of undo space when it ran at the same time as our daily load. This was particularly unfortunate as it coincided with a period of “intense management interest” in the daily load…

What was happening to the size of the SYSAUX tablespace?

Enter the tablespace (or leave null)> sys

TS_NAME              ORD      SUM_BLKS        SUM_K  MAX_CHNK_K NUM_CHNK
-------------------- ----- ----------- ------------ ----------- --------
SYSAUX               alloc  58,187,904  465,503,232  33,553,408       14
                     free       10,728       85,824      21,504       20
SYSTEM               alloc     128,000    1,024,000   1,024,000        1
                     free       68,360      546,880     546,752        3

4 rows selected.

select * from
(select owner,segment_name||'~'||partition_name segment_name,bytes/(1024*1024) size_m
 from dba_segments
 where tablespace_name = 'SYSAUX'
 ORDER BY BLOCKS desc)
where rownum < 40

OWNER    SEGMENT_NAME                                                     SIZE_M
-------- ------------------------------------------------------------ ----------
SYS      WRH$_LATCH_CHILDREN~WRH$_LATCH__14459270_6201                     30262
         WRH$_LATCH_CHILDREN~WRH$_LATCH__14459270_5817                     29948
         WRH$_LATCH_CHILDREN~WRH$_LATCH__14459270_5435                     28597
         WRH$_LATCH_CHILDREN~WRH$_LATCH__14459270_4675                     28198
         WRH$_LATCH_CHILDREN~WRH$_LATCH__14459270_3911                     27648
         WRH$_LATCH_CHILDREN_PK~WRH$_LATCH__14459270_5817                  27144
         WRH$_LATCH_CHILDREN~WRH$_LATCH__14459270_6585                     26965
         WRH$_LATCH_CHILDREN_PK~WRH$_LATCH__14459270_6201                  26832
         WRH$_LATCH_CHILDREN_PK~WRH$_LATCH__14459270_4675                  26741
         WRH$_LATCH_CHILDREN_PK~WRH$_LATCH__14459270_3911                  26491
         WRH$_LATCH_CHILDREN~WRH$_LATCH__14459270_4296                     26307
         WRH$_LATCH_CHILDREN_PK~WRH$_LATCH__14459270_5435                  26248
         WRH$_LATCH_CHILDREN_PK~WRH$_LATCH__14459270_4296                  25430
         WRH$_LATCH_CHILDREN_PK~WRH$_LATCH__14459270_6585                  25064
         WRH$_LATCH_CHILDREN~WRH$_LATCH__14459270_5058                     24611
         WRH$_LATCH_CHILDREN_PK~WRH$_LATCH__14459270_5058                  23161
         WRH$_LATCH_CHILDREN~WRH$_LATCH__14459270_6966                      9209
         WRH$_LATCH_CHILDREN_PK~WRH$_LATCH__14459270_6966                   8462
         WRH$_SYSMETRIC_SUMMARY~                                             152
         WRH$_ACTIVE_SESSION_HISTORY~WRH$_ACTIVE_14459270_3911               136
         WRH$_SQLSTAT~WRH$_SQLSTA_14459270_3911                               96

@sysaux_conts

OWNER                          OBJ_PART_NAME                                SIZE_M
------------------------------ ---------------------------------------- ----------
SYS                            WRH$_LATCH_CHILDREN-WRH                  231745.063
SYS                            WRH$_LATCH_CHILDREN_PK-WRH               215573.063
SYS                            WRH$_SQLSTAT-WRH                           711.0625
SYS                            WRH$_LATCH_MISSES_SUMMARY_PK-WRH           439.0625
SYS                            WRH$_ACTIVE_SESSION_HISTORY-WRH            437.0625
SYS                            WRH$_LATCH_PARENT-WRH                      292.0625
SYS                            WRH$_LATCH-WRH                             276.0625
SYS                            WRH$_LATCH_MISSES_SUMMARY-WRH              273.0625
SYS                            WRH$_SEG_STAT-WRH                          268.0625
SYS                            WRH$_LATCH_PARENT_PK-WRH                   239.0625
SYS                            WRH$_SYSSTAT_PK-WRH                        237.0625

Yes, that is close to half a terabyte of SYSAUX and it is all used. More partitions have appeared, and the sizes of the largest segments show that WRH$_LATCH_CHILDREN and WRH$_LATCH_CHILDREN_PK make up the vast majority of the space used.
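The sysaux_conts script itself is not shown here; a rough equivalent, grouping segment sizes by object name plus the start of the partition name, might look something like this:

 select owner,
        segment_name||'-'||substr(partition_name,1,3) obj_part_name,
        sum(bytes)/(1024*1024) size_m
 from   dba_segments
 where  tablespace_name = 'SYSAUX'
 group  by owner, segment_name||'-'||substr(partition_name,1,3)
 order  by size_m desc;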

Shortly after, we finally got permission to change the live system. The impact was immediate: mmon dropped from being the most demanding session, that SQL statement dropped down the rankings and the issues with running out of undo ceased.

I was anxious to see if the old data got purged, as the Metalink note had suggested the data would not be purged. Thankfully, that was not the case. The space was slowly released as normal purging of data outside the retention period took place and after just over a month, the SYSAUX tablespace contained a lot less information and was mostly free space:

OWNER      OBJ_PART_NAME                                SIZE_M
---------- ---------------------------------------- ----------
SYS        WRH$_LATCH_MISSES_SUMMARY_PK-WRH           512.0625
SYS        WRH$_LATCH_MISSES_SUMMARY-WRH              350.0625
SYS        WRH$_LATCH-WRH                             304.0625
SYS        WRH$_SQLSTAT-WRH                           280.0625
SYS        WRH$_LATCH_PK-WRH                          259.0625
SYS        WRH$_SYSSTAT_PK-WRH                        247.0625
SYS        WRH$_SERVICE_STAT_PK-WRH                   228.0625
SYS        WRH$_PARAMETER_PK-WRH                      201.0625
SYS        WRH$_PARAMETER-WRH                         169.0625
SYS        WRH$_SYSSTAT-WRH                           169.0625
SYS        WRH$_SEG_STAT-WRH                          161.0625
SYS        WRH$_SYSTEM_EVENT_PK-WRH                   156.0625
SYS        WRH$_SYSMETRIC_SUMMARY-                         152
SYS        WRH$_SYSTEM_EVENT-WRH                      133.0625
SYS        WRH$_SERVICE_STAT-WRH                      123.0625
SYS        WRH$_ACTIVE_SESSION_HISTORY-WRH            115.0625

TS_NAME              ORD      SUM_BLKS        SUM_K  MAX_CHNK_K NUM_CHNK
-------------------- ----- ----------- ------------ ----------- --------
SYSAUX               alloc  58,251,904  466,015,232  33,553,408       15
                     free   57,479,400  459,835,200   4,063,232    1,208
SYSTEM               alloc     128,000    1,024,000   1,024,000        1
                     free       68,048      544,384     544,320        2

Now, how do we get that space back? I left that with the DBA team to resolve.

Oh, one last thing. I mentioned the above to a couple of the Oaktable lot in the pub a few weeks back. Their instant response was to say “You set STATISTICS_LEVEL to ALL on a live system?!? You are mad!”
:-)

{Update, I’ve just spotted this posting by Colbran which is related. Last time I googled this I just got a stub with no information}

I am Neo off the Matrix (apparently)

I know I have mentioned it before, but I am a big fan of the OEM performance screens that are derived from the ASH/AWR information. One of the things I really like about it is the immediate information it gives you, in one glance, that things are “not normal”. Once you notice that things are not normal you can then, within a few seconds, get a feel for what is still probably OK and where you have something that has changed.

As an example of the immediate information, I recently came back to my desk and glanced at my OEM performance screen. It was showing the below:


This will not be an interesting picture to you, but to me it tells me a lot about my system

“data load has just ran” I said to my comrade-in-arms. “which one?” he asked. “The Delta – It ran the quick plan. But it started a bit late, 12:15. Oh, and looks like the transaction view code has swapped back to the full table scan plan and the summary code is not playing up at the moment.”

“you’re turning into Neo you are – can you see a lady in a red dress???” he asked.

That was of course a reference to the “Matrix” films, where at times you see the virtual world displayed on a screen as a stream of characters running down the screen – but once you get used to it you can apparently “see” what is going on.

The screen shot above is not even a very good example of what the performance screens can show you. One of my minor complaints about them is that the graph scales to the greater of the largest peak and the number of CPUs (real or fake) available to you. So if you have more CPU available than you need, you can’t see much detail in the graph. And if you have had a nasty peak of activity, again, all detail is squeezed out. In my case, the box is sized to cope with the load expected in 12 months’ time and the system is new, so activity is scuttling along the bottom of the graph.

However, “poor” though the example is, it told me what was going on across my system at a glance, something about the major tasks we are running, that one problem is currently occurring and that several of the other issues I need to keep an eye out for are not occurring.

That is why I love these screens – I recognise “my” activity patterns from the graph, I now recognise the SQL IDs for my key statements. If I see a pattern in the graph I don’t recognise, I need to check things out immediately. Three or four times over the last two weeks I have spotted an issue, started investigating and found the cause before the Operations desk had even noticed a problem.

Oh, and what is SQL type 189? It is a MERGE statement. Our implementation of OEM is a little old and does not correctly interpret that SQL command type. It might be a little old, but it is still very useful.
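If you want to look up a command type number yourself, more recent versions (11.2 onwards, if memory serves) expose the mapping in V$SQLCOMMAND; a quick sketch:

 select command_type, command_name
 from   v$sqlcommand
 where  command_type = 189;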

Network Events in ASH

Note - using ASH and the Top Activity screen requires the Diagnostics Pack License.

This post was prompted by yet another performance problem identified using pretty pictures and Active Session History data. Although, as you'll see, some pretty old-fashioned tools played their part too!

ASH entries only exist for certain SQL*Net events. As usual, I think the design is very successful as long as you have a basic understanding of how ASH works.

The events all fall into one of three wait classes - Application, Network and Idle.

SQL> select wait_class, name from v$event_name where name like 'SQL*Net%' order by 1, 2;

WAIT_CLASS           NAME
-------------------- ----------------------------------------
Application          SQL*Net break/reset to client
Application          SQL*Net break/reset to dblink
Idle                 SQL*Net message from client
Idle                 SQL*Net vector message from client
Idle                 SQL*Net vector message from dblink
Network              SQL*Net message from dblink
Network              SQL*Net message to client
Network              SQL*Net message to dblink
Network              SQL*Net more data from client
Network              SQL*Net more data from dblink
Network              SQL*Net more data to client
Network              SQL*Net more data to dblink
Network              SQL*Net vector data from client
Network              SQL*Net vector data from dblink
Network              SQL*Net vector data to client
Network              SQL*Net vector data to dblink

16 rows selected.

Hopefully that immediately dismisses the suggestion I've heard a few times that ASH/Top Activity ignores SQL*Net events. However, there are some events that are classed as Idle here that will not be captured in ASH data and will therefore not appear in the Top Activity screen. (In fact, even if you were to set the _ash_sample_all parameter you would not see these events in the Top Activity screen, even though ASH would contain entries for them. There is an additional filter applied to the data before it's displayed. In any case, I would *not* recommend setting _ash_sample_all except for fun.)
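As a quick illustration of what does get captured, here is a rough sketch that counts recent ASH samples for SQL*Net-related events by wait class (Diagnostics Pack licence required); you should only ever see the Application and Network class events come back:

 select wait_class, event, count(*) samples
 from   v$active_session_history
 where  event like 'SQL*Net%'
 and    sample_time > sysdate - 1/24
 group  by wait_class, event
 order  by samples desc;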

The most contentious of these is probably SQL*Net message from client. This event indicates that the Oracle server is waiting for the client to do something. If the client is a user session, Oracle is waiting for the user to do something - perhaps someone has a SQL*Plus session open that they're doing nothing with, or they are filling in an application screen with data. From an Oracle perspective that session is Idle and there isn't much we can do to tune it. Including such waits would heavily skew the data we're using for our analysis and, because events in the Idle class are specifically excluded from both ASH data and the OEM performance pages, such activity won't appear. That seems sensible to me.

However, if the client is actually an application server, those supposedly idle events can tell us something about end-to-end system performance. This blog post describes one real life situation where that event helped me identify the performance problem.

Back to the problem at hand. Sadly, in the midst of a very chaotic work situation, I didn't manage to grab any of the graphs in question, but we had a persistent problem with our critical batch processes spending time waiting on 'SQL*Net more data from client', which showed up very clearly as network class waits.

Now there are a number of reasons why this might be happening. Just a couple off the top of my head:

1) There is a performance problem with the application server so Oracle is expecting more data to arrive more quickly than the app server can deliver it.

2) A network problem is affecting the delivery speed.

Although the issue here turned out to be number 2, I suppose the true underlying issue is that this application is insanely 'chatty' and uses very frequent round trips to deliver data. Maybe the application design could be improved first?

Ultimately we enabled Extended Tracing on the relevant sessions and confirmed that, yes, the sessions were spending most of their time waiting for the next batch of data, and also how long these waits were. In practice, I'm not sure how much value the trace files added in this particular case. In fact, I think this is an excellent example of the underlying beauty of the sampling approach. As an event became *slower* it appeared *more often* in the samples and we had plenty of examples of how long some of the waits were. (But only some - probably the longest ones.)

How did we solve this mysterious problem? With two very useful tools:

- ping/tracert. I simply went on to the app server, pinged the db server and confirmed that the response time (from memory) was in the range of 70-80 milliseconds - far higher than I would expect. One of the Java developers then ran a trace route to the server which highlighted that network performance was great until we hit the db server. It turned out that one of the network interface cards had auto-negotiated down to 100Mb/s and this was the root cause - easily fixed. Sometimes old-fashioned and simple tools are all that you need, once you know where you should focus your attention (and ASH is great at focussing attention in the right place).

- The next tool is one of the reasons I wanted to write this post. What made me try ping? Whilst I would have got there eventually, I thought I would try my performance optimisation secret weapon! One just for the database wizards! Something I've been keeping an eye on is Oracle Support's attempt to give people a methodical approach to solving performance issues via the Oracle Performance Diagnostic Guide on My Oracle Support - Note ID 390374.1. There are several guides on that landing page but here is the link to the Slow Performance PDF. (You'll need an account to be able to access this.) This is an example of the kind of advice there. (Note that this is for SQL*Net message from client, but you would tend to get most events together anyway and there's similar advice for more data from client.)

"Cause Identified: Slow network limiting the response time between client and database.
The network is saturated and this is limiting the ability of the client and database to communicate with each other.
Cause Justification
TKProf:
1. SQL*Net message from client waits are a large part of the overall time (see the overall summary
section)
2. Array operations are used. This is seen when there are more than 5 rows per execution on average (divide total rows by total execution calls for both recursive and non-recursive calls)
3. The average time for a ping is about equal to twice the average time for a SQL*Net message from client wait and this time is more than a few milliseconds. This indicates that most of the client time is spent in the network."

I've been loath to mention this document until now because it looks like a work in progress (albeit not updated since the start of 2009), isn't perfect, has gaps and I can imagine a number of my peers taking issue with some of the content. But as something to aid learning if you're not naturally great at solving performance problems, I can think of much worse starting places.

In the end, what could have been a really tricky performance problem (and might have taken a while to notice) was apparent with a brief glance at OEM and we could then follow up by applying the right analysis tools to the problem.

To finish off, when I searched Google for any examples of 'SQL*Net more data from client', one of the first results that cropped up was a chapter from one of my favourite books - 'Optimizing Oracle Performance' by Cary Millsap and Jeff Holt. Well worth a read ....

Alternative Pictures Demo

Note - features in this post require the Diagnostics Pack license

Not long after I'd finished the last post, I realised I could reinforce the points I was making with a quick post showing another one of the example tests supplied with Swingbench - the Calling Circle (CC) application. Like the Sales Order Entry application, CC is a mixed read/write test consisting of small transactions. As always, there's more information at Dominic's website.

One of the main differences to the SOE test is that the CC test consumes data so you need to generate a new set of data before each test using the supplied ccwizard utility. I won't show you the entire workflow here but enough to give you a flavour of the process. The utility is the same one used to create the necessary CC schema in the first place but the option I'm looking for here is "Generate Data for Benchmark Run".

I'd already decided when I created it that my CC schema would be populated with data for 1 million customers, so I just need to specify the number of transactions the next test will be for. I happen to know that on my particular configuration, a 1000 transaction test will take around 5 minutes to run.

I ran the test twice. I've highlighted the first run here in the Top Activity page.

It should be clear that CC suffers significant log file sync waits on my particular test platform, just like the SOE test. Therefore I'll regenerate the test data set, enable asynchronous commits and re-run the test. Here I've highlighted the second test run.

As well as seeing a similar change in the activity profile according to the ASH samples (the log file sync activity has disappeared, as has the LGWR System I/O), there's a significant difference to the SOE test. Because this test run is based on a specific workload volume, as defined by the size of the test data, rather than a fixed time period, the second test run completed more quickly than the first. The activity only partially fills the 5 minute activity bar, unlike the first test, which filled the whole bar.

If you test a specific and limited workload volume it is much clearer from the Top Activity page which test is processing transactions more quickly, based on the Time axis. That's why I didn't pick this example the first time - it's too obvious what's going on!

That Pictures demo in full

Note - features in this post require the Diagnostics Pack license

With so many potential technical posts in my pile, it was initially difficult to decide where to start again but I figured I should avoid the stats series until I'm back into the swing of things ;-) Instead I decided to fulfill a commitment I made to myself (and others, whether they knew about it or not) almost three months ago.

When I gave the evening demo session in the Amis offices I think the 2 hours went pretty well but, as usual with the OEM presentations, I got a little carried away and didn't conclude the demo properly. (This is also the demo I *would* have done at Hotsos last year if the damn thing had worked first time ;-)) It was a shame because as well as showing the neat and useful side of OEM Performance Pages, it also illustrates one of the common pitfalls in interpreting what the graphs are showing you.

I began by running a 4 concurrent user Sales Order Entry (SOE) test using Dominic Giles' Swingbench utility. I won't go into the details of the SOE test because I don't think it's particularly relevant here, but you can always download and/or read about Swingbench for yourself at Dominic's website.

I ran the test for a fixed period of 5 minutes using no think-time delay.

Using the capability to look at ASH data in the recent past, the OEM Top Activity page looks like this.

- There was a fairly consistent average of 4-5 active sessions over the 5 minute period and, looking at the Top Sessions panel in the bottom right of the screen, these were four SOE sessions of similar activity levels and the LGWR process.

- The majority of time was spent on User I/O, System I/O and Commit Wait Class activity, with a little CPU.

- Three PL/SQL blocks were responsible for most of the Commit activity.

- The LGWR process was responsible for most of the System I/O activity.

I'll leave it there for now and won't drill down into any more detail.

In terms of optimising the performance of this test, what might I consider doing?

The most important aspect is to optimise the application to reduce the resource consumption to the minimum required to achieve our objectives. There's a whole bunch of User I/O activity that could perhaps be eliminated? But I'm going to ask you to accept the big assumption here that this application has been optimised and that I'm just using a Swingbench test as an illustration of the type of system-wide problem you could see. In that case, my eye is drawn to the Commit activity.

When I'm teaching this stuff, I'm usually deliberately simplistic (at least at the end of the process) and highlight that what I'm interested in 'tuning' is whatever most sessions are waiting on according to the ASH samples this screen uses. I used to explain how I'd look for the biggest areas of colour, drill down into those and identify what's going on. Sadly, I later heard that someone (I think it was JB at Oracle*) had already come up with a nifty acronym for this - COBS. Click on the Big Stuff! One day I will come up with a nifty acronym for something too, but you shouldn't hold your breath waiting.

So, if I click on the big stuff here, I can see that the Commit Class waits are log file sync.

How might I reduce the time that sessions are waiting for log file sync? Here are a few reasons why the test sessions might be waiting on log file sync more often or for longer than I'd like.

- Application design - committing too frequently
- CPU starvation
- Slow I/O to online redo log files

Whether waits are predominantly the result of CPU overload or slow I/O can be determined by looking at the underlying log file parallel write wait times on the LGWR process but that's a bigger subject for another time.
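For the curious, a rough sketch of that comparison using instance-wide averages (so treat the numbers with appropriate care):

 select event,
        total_waits,
        round(time_waited_micro / greatest(total_waits, 1) / 1000, 2) avg_ms
 from   v$system_event
 where  event in ('log file sync', 'log file parallel write');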

You can look into all of these in more depth - and should - but as this is designed to be a fun demo of the pretty pictures (it used to be 'the USB stick demo'), I'll simply try to eliminate that activity and re-run the same test. Here's how Top Activity looks now.

Oh. Maybe that wasn't what you expected? OK, the LGWR activity has disappeared, but it seems the system is almost as busy as it was before but that the main bottleneck is now User I/O activity. That's often the way, though - you eliminate one bottleneck in a system and it just shows up somewhere else. It must be good to get rid of log file sync waits though, right? User I/O seems like more productive work and I've managed to make the LGWR activity disappear completely.

But then if you were to look at this graph in terms of Average Active Sessions or DB Time or (as it's more likely to be expressed) how big that spike looks, the two tests would look similarly busy from a system-wide perspective. They were, but the real question is - busy doing *what*? There's some important information missing here and Swingbench is able to provide it.

TotalCompletedTransactions 22,868

Mmmm, so I wonder what that value was for the first run?

TotalCompletedTransactions 12,232

Woo-hoo! *That's* what I call tuning a benchmark - processing almost twice the number of transactions in the same 5 minute period.

So it turns out that the sessions in the database *were* just as busy during the second run (not too surprising seeing as the test has no user think time so keeps hammering the database with as many requests as it can handle) but that they were busy doing the more productive work of reading and processing data rather than just waiting for COMMITs to complete.

I raised this issue of DB Time not showing activity details with Graham Wood* at Oracle in relation to a previous blog post. I think he made the point to me that that's why the OEM Performance Home Page is *not* the Top Activity page. If I take a look at that home page, it shows me the same information as the Swingbench results output did, albeit not as clearly.

Looking at the Throughput graph, I can see that the second test processed around double the number of Transactions per second for the same test running on the same system.

To wrap up (and be a little defensive) ...

- Yes, I could have traced one or more sessions and generated a complete and detailed response time profile that should have lead me to the same conclusion.

- Yes, as this is a controlled test environment and I'm the only 'user', AWR/Statspack would have been an even more powerful analysis tool in the right hands.

- The Top Activity page is not the most appropriate tool for this job but it is handy for illustrating concepts.

- Lest I seem a slavish pictures fan, I'm showing how people might misuse or misunderstand ASH/Top Activity. In this case, the Home Performance Page is a much better tool because we're looking at system-wide data and not drilling into session or SQL details.

Oh, and what is my Top Secret Magic Silver Bullet Tuning Tip for OLTP-type applications? (only to be used by Advanced Oracle Performance Wizards)

alter system set commit_write='BATCH, NOWAIT';

Done! In fact, why not just use this on all of your systems, just in case people are waiting on log file sync?

(Leaves space below for angry responses and my withering humorous retorts)

* This is not name-dropping, this is giving due credit to the people who really know what they're talking about

Time Model Bug

Tasks that are performed via jobs in the database will be double-accounted in the system time model that was introduced with Oracle 10g.

So if you execute significant workload via DBMS_JOB or DBMS_SCHEDULER, any system time model related statistic (DB Time, DB CPU etc.) that gets recorded for that workload gets double-accounted.
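As a reminder, the system-level time model figures in question can be queried directly; a quick sketch (the values are cumulative microseconds since instance startup):

 select stat_name, round(value / 1e6, 1) seconds
 from   v$sys_time_model
 where  stat_name in ('DB time', 'DB CPU',
                      'background elapsed time', 'background cpu time');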

This bug is not particularly relevant since your top workloads will still be the same top workloads, because all other statistics (like Elapsed Time, CPU, Buffer Gets etc.) are not affected by the bug.

I only mention it here since, at the time of writing, the bug (see below for details) can't yet be found in the bug database on My Oracle Support, and I recently came across several AWR reports where the majority of the workload was generated via job processes and therefore the time model statistics were effectively doubled.

It might offer a viable explanation if you sometimes wonder why an AWR or Statspack report only accounts for 50% or less of the recorded total DB Time or DB CPU and where this unaccounted time has gone. If a significant part of the workload during the reporting period was performed by sessions controlled via DBMS_JOB or DBMS_SCHEDULER, then most of that time is probably not unaccounted at all - the time model statistics are simply wrong.

So if you have such an otherwise unexplainable unaccounted DB Time / DB CPU etc. you might want to check if significant workload during the reporting period was executed via the job system. Note that I don't say that this is the only possible explanation of such unaccounted time - there might be other reasons like uninstrumented waits, other bugs etc.

Of course all the percentages that are shown in the AWR / ADDM / Statspack reports that refer to "Percentage of DB Time" or "Percentage of DB CPU" will be too small in such cases.

If the majority of workload during the reporting period has been generated by jobs then you can safely assume that the time model statistics have to be divided by 2 (and the percentages have to be doubled). If you have a mixture of jobs and regular foreground sessions then it will be harder to derive the correct time model statistics.

Note that the "Active Session History" (ASH) is not affected by the bug - the ASH reports always were consistent in my tests regarding the DB Time (respectively the number of samples) and CPU time information.

The following simple test case can be used to reproduce the issue at will. Ideally you should have exclusive access to the test system since any other concurrent activity will affect the test results.

You might want to check (and possibly adjust) the 1,000,000,000 iterations of the simple PL/SQL loop for your particular CPU - on my test system this takes approx. 46 seconds to complete.

The first version assumes that a PERFSTAT user with an installed STATSPACK is present in the database since STATSPACK doesn't require an additional license. An AWR variant follows below.

alter session set nls_language = american nls_territory = america;

store set .settings replace

set echo on timing on define on

define iter="1000000000"

variable snap1 number

exec :snap1 := statspack.snap

declare
n_cnt binary_integer;
begin
n_cnt := 0;
for i in 1..&iter loop
n_cnt := n_cnt + 1;
end loop;
end;
/

variable snap2 number

exec :snap2 := statspack.snap

/* Uncomment this if you want to test via DBMS_JOB
variable job_id number

begin
dbms_job.submit(:job_id, '
declare
n_cnt binary_integer;
n_status integer;
begin
n_cnt := 0;
for i in 1..&iter loop
n_cnt := n_cnt + 1;
end loop;
n_status := dbms_pipe.send_message(''bg_job_complete'');
end;
');
end;
/

commit;
*/

/* Uncomment this if you want to test via DBMS_SCHEDULER */
begin
dbms_scheduler.create_job(
job_name => dbms_scheduler.generate_job_name
, job_type => 'PLSQL_BLOCK'
, job_action => '
declare
n_cnt binary_integer;
n_status integer;
begin
n_cnt := 0;
for i in 1..&iter loop
n_cnt := n_cnt + 1;
end loop;
n_status := dbms_pipe.send_message(''bg_job_complete'');
end;
' , enabled => true);
end;
/

declare
pipe_status integer;
begin
pipe_status := dbms_pipe.receive_message('bg_job_complete');
end;
/

declare
pipe_id integer;
begin
pipe_id := dbms_pipe.remove_pipe('bg_job_complete');
end;
/

variable snap3 number

exec :snap3 := statspack.snap

rem set heading off pagesize 0 feedback off linesize 500 trimspool on termout off echo off verify off

prompt Enter PERFSTAT password

connect perfstat

column dbid new_value dbid noprint

select dbid from v$database;

column instance_number new_value inst_num noprint

select instance_number from v$instance;

column b_id new_value begin_snap noprint
column e_id new_value end_snap noprint

select :snap1 as b_id, :snap2 as e_id from dual;
define report_name=sp_foreground.txt

@?/rdbms/admin/sprepins

column dbid new_value dbid noprint

select dbid from v$database;

column instance_number new_value inst_num noprint

select instance_number from v$instance;

column b_id new_value begin_snap noprint
column e_id new_value end_snap noprint

select :snap2 as b_id, :snap3 as e_id from dual;
define report_name=sp_background.txt

@?/rdbms/admin/sprepins

undefine iter

@.settings

set termout on

Here is the same test case but with AWR reports (requires additional diagnostic license)

alter session set nls_language = american nls_territory = america;

store set .settings replace

set echo on timing on define on

define iter="1000000000"

column snap1 new_value awr_snap1 noprint

select dbms_workload_repository.create_snapshot as snap1 from dual;

declare
n_cnt binary_integer;
begin
n_cnt := 0;
for i in 1..&iter loop
n_cnt := n_cnt + 1;
end loop;
end;
/

column snap2 new_value awr_snap2 noprint

select dbms_workload_repository.create_snapshot as snap2 from dual;

/* Uncomment this if you want to test via DBMS_JOB
variable job_id number

begin
dbms_job.submit(:job_id, '
declare
n_cnt binary_integer;
n_status integer;
begin
n_cnt := 0;
for i in 1..&iter loop
n_cnt := n_cnt + 1;
end loop;
n_status := dbms_pipe.send_message(''bg_job_complete'');
end;
');
end;
/

commit;
*/

/* Uncomment this if you want to test via DBMS_SCHEDULER */
begin
dbms_scheduler.create_job(
job_name => dbms_scheduler.generate_job_name
, job_type => 'PLSQL_BLOCK'
, job_action => '
declare
n_cnt binary_integer;
n_status integer;
begin
n_cnt := 0;
for i in 1..&iter loop
n_cnt := n_cnt + 1;
end loop;
n_status := dbms_pipe.send_message(''bg_job_complete'');
end;
' , enabled => true);
end;
/

declare
pipe_status integer;
begin
pipe_status := dbms_pipe.receive_message('bg_job_complete');
end;
/

declare
pipe_id integer;
begin
pipe_id := dbms_pipe.remove_pipe('bg_job_complete');
end;
/

column snap3 new_value awr_snap3 noprint

select dbms_workload_repository.create_snapshot as snap3 from dual;

set heading off pagesize 0 feedback off linesize 500 trimspool on termout off echo off verify off

spool awr_foreground.html

select
output
from
table(
sys.dbms_workload_repository.awr_report_html(
(select dbid from v$database)
, (select instance_number from v$instance)
, &awr_snap1
, &awr_snap2
)
);

spool off

spool awr_background.html

select
output
from
table(
sys.dbms_workload_repository.awr_report_html(
(select dbid from v$database)
, (select instance_number from v$instance)
, &awr_snap2
, &awr_snap3
)
);

spool off

spool awr_diff.html

select
output
from
table(
sys.dbms_workload_repository.awr_diff_report_html(
(select dbid from v$database)
, (select instance_number from v$instance)
, &awr_snap1
, &awr_snap2
, (select dbid from v$database)
, (select instance_number from v$instance)
, &awr_snap2
, &awr_snap3
)
);

spool off

undefine awr_snap1
undefine awr_snap2
undefine awr_snap3

undefine iter

column snap1 clear
column snap2 clear
column snap3 clear

@.settings

set termout on

And here is a sample snippet from a generated Statspack report on a single CPU system with nothing else running on the system:

Normal foreground execution:

STATSPACK report for

Database DB Id Instance Inst Num Startup Time Release RAC
~~~~~~~~ ----------- ------------ -------- --------------- ----------- ---
orcl112 1 05-Aug-10 08:21 11.2.0.1.0 NO

Host Name Platform CPUs Cores Sockets Memory (G)
~~~~ ---------------- ---------------------- ----- ----- ------- ------------
XXXX Microsoft Windows IA ( 1 0 0 2.0

Snapshot Snap Id Snap Time Sessions Curs/Sess Comment
~~~~~~~~ ---------- ------------------ -------- --------- ------------------
Begin Snap: 13 05-Aug-10 08:34:17 25 1.2
End Snap: 14 05-Aug-10 08:35:05 25 1.2
Elapsed: 0.80 (mins) Av Act Sess: 1.1
DB time: 0.87 (mins) DB CPU: 0.80 (mins)

Cache Sizes Begin End
~~~~~~~~~~~ ---------- ----------
Buffer Cache: 104M Std Block Size: 8K
Shared Pool: 128M Log Buffer: 6,076K

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~ ------------------ ----------------- ----------- -----------
DB time(s): 1.1 2.4 0.09 3.99
DB CPU(s): 1.0 2.2 0.08 3.68

Execution via Job/Scheduler:

STATSPACK report for

Database DB Id Instance Inst Num Startup Time Release RAC
~~~~~~~~ ----------- ------------ -------- --------------- ----------- ---
orcl112 1 05-Aug-10 08:21 11.2.0.1.0 NO

Host Name Platform CPUs Cores Sockets Memory (G)
~~~~ ---------------- ---------------------- ----- ----- ------- ------------
XXXX Microsoft Windows IA ( 1 0 0 2.0

Snapshot Snap Id Snap Time Sessions Curs/Sess Comment
~~~~~~~~ ---------- ------------------ -------- --------- ------------------
Begin Snap: 14 05-Aug-10 08:35:05 25 1.2
End Snap: 15 05-Aug-10 08:35:53 24 1.3
Elapsed: 0.80 (mins) Av Act Sess: 1.9
DB time: 1.55 (mins) DB CPU: 1.54 (mins)

Cache Sizes Begin End
~~~~~~~~~~~ ---------- ----------
Buffer Cache: 104M Std Block Size: 8K
Shared Pool: 128M Log Buffer: 6,076K

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~ ------------------ ----------------- ----------- -----------
DB time(s): 1.9 92.8 0.79 7.74
DB CPU(s): 1.9 92.1 0.78 7.68

As you might have guessed, my single-CPU test system did not gain a second CPU when performing the same task via DBMS_SCHEDULER / DBMS_JOB, yet in that case the time model reports (almost) 2 DB Time / DB CPU seconds and active sessions per second.

I have reproduced the bug on versions 10.2.0.4, 11.1.0.7 and 11.2.0.1 but very likely all versions supporting the time model are affected.

A (non-public) bug "9882245 - DOUBLE ACCOUNTING OF SYS MODEL TIMINGS FOR WORKLOAD RUN THROUGH JOBS" has been filed for it, but the fix is not available yet therefore as far as I know it is not yet part of any available patch set / PSU.

Note that there seems to be a different issue with the DB CPU time model component: if you have a system that reports more CPUs than sockets (for example a Power5, Power6 or Power7 based IBM server that reports 16 sockets / 32 CPUs) then the DB CPU component gets reduced by approximately 50%, which means it is divided by 2.

In combination with the above bug, this means that you end up with a doubled DB Time component for tasks executed via jobs, but the DB CPU time model component ends up in the right ballpark since the doubled DB CPU time gets divided by 2.

I don't know if the bug fix also covers this issue, so you might want to keep this in mind when checking any time model based information.

Graphing AWR Data in Excel

I often use data collected by the Oracle Automatic Workload Repository (AWR) to help diagnose performance problems. However, I often work on performance problems with application teams rather than the DBAs, and it is surprising how often I don't have access to Oracle Enterprise Manager.
Somebody might say that the system was slow at a particular time. I want to get an overview of the database at that time, and I might also want to compare it to another time when the system wasn't slow. Later I may generate AWR reports using particular pairs of snapshots, but I need something to direct me to when an issue occurred, and hence which snapshots to compare.
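As an illustration of the kind of data that can be pulled into a spreadsheet and graphed, here is a minimal sketch of DB Time per AWR snapshot (Diagnostics Pack licence required; note that the cumulative counter resets on instance restart, so deltas across a bounce need care):

 select s.snap_id,
        s.end_interval_time,
        round((t.value - lag(t.value) over (order by s.snap_id)) / 1e6 / 60, 2) db_time_mins
 from   dba_hist_snapshot s
 join   dba_hist_sys_time_model t
   on   t.snap_id = s.snap_id
  and   t.dbid = s.dbid
  and   t.instance_number = s.instance_number
 where  t.stat_name = 'DB time'
 and    s.instance_number = (select instance_number from v$instance)
 order  by s.snap_id;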

Plan stability in 10g - using existing cursors to create Stored Outlines and SQL profiles

If you have the need for plan stability - that is, telling the database to use a particular execution plan no matter what the optimizer otherwise thinks - then you might be in the situation where the "good" execution plan is already available in the shared pool or in the AWR, so it would be handy if you could simply tell Oracle to use that particular execution plan to create a Stored Outline.

Note that in 11g this is all possible using the new SQL Plan Management framework (SPM), but that is not available in 10g, so we need to think differently.

In 10g the DBMS_OUTLN package has been enhanced with the CREATE_OUTLINE procedure to create an outline from an existing child cursor in the shared pool.

Please note that in releases prior to 10.2.0.4 there was a severe bug that caused your session to crash when using DBMS_OUTLN.CREATE_OUTLINE (Bug 5454975 which has been fixed in 10.2.0.4). The workaround is to enable the creation of stored outlines by issuing "alter session set create_stored_outlines = true;" before using DBMS_OUTLN.CREATE_OUTLINE. For more information see the Metalink Notes 463288.1 and 445126.1.
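Usage looks roughly like this; a sketch with a hypothetical hash value and child number taken from V$SQL, with the pre-10.2.0.4 workaround included for completeness:

 -- Workaround for Bug 5454975 on releases prior to 10.2.0.4
 alter session set create_stored_outlines = true;

 begin
   dbms_outln.create_outline(
     hash_value   => 3652517900,   -- hypothetical HASH_VALUE taken from V$SQL
     child_number => 0,            -- child cursor of interest
     category     => 'DEFAULT'
   );
 end;
 /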

Note that from 10g on, the hints required to create an outline are stored in the OTHER_XML column of the plan table as part of the XML detail information.

You can use the ADVANCED or OUTLINE option of the DBMS_XPLAN.DISPLAY* functions to display that OUTLINE information. For more information see e.g. here.
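For example, a quick sketch using DISPLAY_CURSOR against a cursor currently in the shared pool (the &sql_id substitution variable is just a placeholder):

 select * from table(
   dbms_xplan.display_cursor(
     sql_id          => '&sql_id',   -- placeholder: the SQL_ID of interest
     cursor_child_no => null,        -- null means all child cursors
     format          => 'ADVANCED'   -- includes the Outline Data section
   )
 );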

So let's try DBMS_OUTLN.CREATE_OUTLINE in 10.2.0.4:

SQL>
SQL> drop table t_fetch_first_rows purge;

Table dropped.

SQL>
SQL> create table t_fetch_first_rows (
2 id number not null,
3 name varchar2(30) not null,
4 type varchar2(30) not null,
5 measure number
6 );

Table created.