
Swingbench with the AWS Trial

Swingbench is one of the best choices for putting an easy load on a database.

If you use swingbench take note of sbutil

This is going to be a very short post for a change. I have used Swingbench extensively and really love the tool. Many thanks to Dominic Giles!

Recently he announced a new tool on his blog that you can use to inflate your data volume. So instead of using the “-scale” argument when executing oewizard, you can keep the defaults and create as much data as you like later on. Here is an example, which is the reason for this post.
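To give a flavour, a duplication run might look something like the line below. The flags are my assumption from memory, so check the output of "sbutil -h" for your Swingbench version before copying anything:

./sbutil -soe -cs //dbhost/orcl -u soe -p soe -dup 4 -parallel 8

The idea is that sbutil takes the rows already in the SOE schema and multiplies them (here four-fold, using eight parallel threads), which should be far quicker than re-running oewizard at a larger scale.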

Setting the Scene

Little things worth knowing: creating the Swingbench order entry schema “lights out”

This is a little note, primarily to myself I guess, about the creation of the order entry schema (part of Swingbench, written by Dominic Giles) when no VNC sessions are available (although you can almost always use port-forwarding :). Instead, you can create the schema on the command line. I always execute commands on remote systems in screen for increased peace of mind. Should the network drop, the order entry generation will continue as if nothing ever happened.
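As a sketch, a "lights out" creation could look like this; the flags are as I remember them, so verify against "oewizard -h" for your version:

screen -S oewizard
./oewizard -cl -create -cs //dbhost/orcl -u soe -p soe -dba system -dbap manager -ts soe_tbs -scale 1 -v

The -cl flag runs the wizard in command-line mode, so no X display or VNC session is needed, and because everything runs inside screen you can detach from and reattach to the session at will.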

Like many others I use Swingbench during training sessions and presentations to have some activity on a system. Very useful for demonstrating ASH and OEM, and many other things too!

Alternative Pictures Demo

Note - features in this post require the Diagnostics Pack license

Not long after I'd finished the last post, I realised I could reinforce the points I was making with a quick post showing another one of the example tests supplied with Swingbench - the Calling Circle (CC) application. Like the Sales Order Entry application, CC is a mixed read/write test consisting of small transactions. As always, there's more information at Dominic's website.

One of the main differences to the SOE test is that the CC test consumes data so you need to generate a new set of data before each test using the supplied ccwizard utility. I won't show you the entire workflow here but enough to give you a flavour of the process. The utility is the same one used to create the necessary CC schema in the first place but the option I'm looking for here is "Generate Data for Benchmark Run".

I'd already decided that my CC schema would be populated with data for 1 million customers when I created it, so I just need to specify the number of transactions the next test will be for. I happen to know that on my particular configuration, a 1000-transaction test will take around 5 minutes to run.

I ran the test twice. I've highlighted the first run here in the Top Activity page.

It should be clear that CC suffers significant log file sync waits on my particular test platform, just like the SOE test. Therefore I'll regenerate the test data set, enable asynchronous commits and re-run the test. Here I've highlighted the second test run.

As well as seeing a similar change in the activity profile according to the ASH samples (the log file sync activity has disappeared, as has the LGWR System I/O), there's a significant difference to the SOE test. Because this test run is based on a specific workload volume, as defined by the size of the test data, rather than on a fixed time period, the second test run completed more quickly than the first. The activity only partially fills the 5 minute activity bar, unlike the first test, which filled the whole bar.

If you test a specific and limited workload volume, the Time axis of the Top Activity page makes it much clearer which test is processing transactions more quickly. That's why I didn't pick this example the first time - it's too obvious what's going on!

That Pictures demo in full

Note - features in this post require the Diagnostics Pack license

With so many potential technical posts in my pile, it was initially difficult to decide where to start again but I figured I should avoid the stats series until I'm back into the swing of things ;-) Instead I decided to fulfill a commitment I made to myself (and others, whether they knew about it or not) almost three months ago.

When I gave the evening demo session in the Amis offices I think the 2 hours went pretty well but, as usual with the OEM presentations, I got a little carried away and didn't conclude the demo properly. (This is also the demo I *would* have done at Hotsos last year if the damn thing had worked first time ;-)) It was a shame because as well as showing the neat and useful side of OEM Performance Pages, it also illustrates one of the common pitfalls in interpreting what the graphs are showing you.

I began by running a 4 concurrent user Sales Order Entry (SOE) test using Dominic Giles' Swingbench utility. I won't go into the details of the SOE test because I don't think it's particularly relevant here, but you can always download and/or read about Swingbench for yourself at Dominic's website.

I ran the test for a fixed period of 5 minutes using no think-time delay.
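For reference, the charbench invocation would be along these lines (flag names from memory again - "charbench -h" is authoritative):

./charbench -cs //dbhost/orcl -u soe -p soe -uc 4 -min 0 -max 0 -rt 00:05 -v users,tpm,tps

Here -uc 4 gives the four concurrent users, -min 0 -max 0 removes the think time, and -rt 00:05 caps the run at five minutes.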

Using the capability to look at ASH data in the recent past, the OEM Top Activity page looks like this.

- There was a fairly consistent average of 4-5 active sessions over the 5 minute period and, looking at the Top Sessions panel in the bottom right of the screen, these were the four SOE sessions at similar activity levels and the LGWR process.

- The majority of time was spent on User I/O, System I/O and Commit Wait Class activity, with a little CPU.

- Three PL/SQL blocks were responsible for most of the Commit activity.

- The LGWR process was responsible for most of the System I/O activity.

I'll leave it there for now and won't drill down into any more detail.
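If you wanted the same system-wide breakdown in SQL rather than pictures, a minimal sketch against ASH (which, remember, needs the Diagnostics Pack) would be:

select nvl(wait_class, 'ON CPU') as activity, count(*) as ash_samples
from v$active_session_history
where sample_time > sysdate - interval '5' minute
group by wait_class
order by ash_samples desc;

Each ASH sample approximates one second of database time for one active session, which is exactly what the Top Activity graph is painting.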

In terms of optimising the performance of this test, what might I consider doing?

The most important aspect is to optimise the application to reduce the resource consumption to the minimum required to achieve our objectives. There's a whole bunch of User I/O activity that could perhaps be eliminated? But I'm going to ask you to accept the big assumption here that this application has been optimised and that I'm just using a Swingbench test as an illustration of the type of system-wide problem you could see. In that case, my eye is drawn to the Commit activity.

When I'm teaching this stuff, I'm usually deliberately simplistic (at least at the end of the process) and highlight that what I'm interested in 'tuning' is whatever most sessions are waiting on according to the ASH samples this screen uses. I used to explain how I'd look for the biggest areas of colour, drill down into those and identify what's going on. Sadly, I later heard that someone (I think it was JB at Oracle*) had already come up with a nifty acronym for this - COBS. Click on the Big Stuff! One day I will come up with a nifty acronym for something too, but you shouldn't hold your breath waiting.

So, if I click on the big stuff here, I can see that the Commit Class waits are log file sync.

How might I reduce the time that sessions are waiting for log file sync? Here are a few reasons why the test sessions might be waiting on log file sync more often or for longer than I'd like.

- Application design - committing too frequently
- CPU starvation
- Slow I/O to online redo log files

Whether waits are predominantly the result of CPU overload or slow I/O can be determined by looking at the underlying log file parallel write wait times on the LGWR process, but that's a bigger subject for another time.
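As a first pass, though, comparing the system-wide average wait times of the two events gives a useful hint:

select event, total_waits, round(time_waited_micro / nullif(total_waits, 0)) as avg_wait_us
from v$system_event
where event in ('log file sync', 'log file parallel write');

If log file parallel write is fast but log file sync is much slower, commit frequency or CPU starvation are more likely culprits than the redo I/O itself.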

You can look into all of these in more depth - and should - but as this is designed to be a fun demo of the pretty pictures (it used to be 'the USB stick demo'), I'll simply try to eliminate that activity and re-run the same test. Here's how Top Activity looks now.

Oh. Maybe that wasn't what you expected? OK, the LGWR activity has disappeared, but the system seems almost as busy as it was before, with the main bottleneck now being User I/O activity. That's often the way, though - you eliminate one bottleneck in a system and it just shows up somewhere else. It must be good to get rid of log file sync waits though, right? User I/O seems like more productive work and I've managed to make the LGWR activity disappear completely.

But then if you were to look at this graph in terms of Average Active Sessions or DB Time or (as it's more likely to be expressed) how big that spike looks, the two tests would look similarly busy from a system-wide perspective. They were, but the real question is - busy doing *what*? There's some important information missing here and Swingbench is able to provide it.

TotalCompletedTransactions 22,868

Mmmm, so I wonder what that value was for the first run?

TotalCompletedTransactions 12,232

Woo-hoo! *That's* what I call tuning a benchmark - processing almost twice the number of transactions in the same 5 minute period.

So it turns out that the sessions in the database *were* just as busy during the second run (not too surprising seeing as the test has no user think time so keeps hammering the database with as many requests as it can handle) but that they were busy doing the more productive work of reading and processing data rather than just waiting for COMMITs to complete.

I raised this issue of DB Time not showing activity details with Graham Wood* at Oracle in relation to a previous blog post. I think he made the point to me that that's why the OEM Performance Home Page is *not* the Top Activity page. If I take a look at that home page, it shows me the same information as the Swingbench results output did, albeit not as clearly.

Looking at the Throughput graph, I can see that the second test processed around double the number of Transactions per second for the same test running on the same system (22,868 transactions over 5 minutes works out at roughly 76 per second, versus roughly 41 per second for the first run).

To wrap up (and be a little defensive) ...

- Yes, I could have traced one or more sessions and generated a complete and detailed response time profile that should have led me to the same conclusion.

- Yes, as this is a controlled test environment and I'm the only 'user', AWR/Statspack would have been an even more powerful analysis tool in the right hands.

- The Top Activity page is not the most appropriate tool for this job but it is handy for illustrating concepts.

- Lest I seem a slavish pictures fan, I'm showing how people might misuse or misunderstand ASH/Top Activity. In this case, the Performance Home Page is a much better tool because we're looking at system-wide data and not drilling into session or SQL details.

Oh, and what is my Top Secret Magic Silver Bullet Tuning Tip for OLTP-type applications? (only to be used by Advanced Oracle Performance Wizards)

alter system set commit_write='BATCH, NOWAIT';

Done! In fact, why not just use this on all of your systems, just in case people are waiting on log file sync?
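(If you're on 11g or later, incidentally, commit_write is deprecated and split into two parameters, so the modern spelling of the same bad idea would be something like:

alter session set commit_logging='BATCH' commit_wait='NOWAIT';

The durability trade-off is exactly the same, of course.)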

(Leaves space below for angry responses and my withering humorous retorts)

* This is not name-dropping, this is giving due credit to the people who really know what they're talking about

Time Matters: Throughput vs. Response Time - Part 2

In one of my Hotsos Symposium 2010 posts I mentioned that Peter Stalder had plugged some test results from an earlier blog post into Neil Gunther's Universal Scalability Law to see how well the model applied. Peter's posted his slides now and I've added the URL to the comments thread of the original post so people can see another perspective.
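For anyone who hasn't met it, the USL models relative capacity at N concurrent users as

C(N) = N / (1 + α(N - 1) + βN(N - 1))

where α captures contention (serialisation) costs and β captures coherency (crosstalk) costs; fitting those two coefficients to measured throughput is essentially what Peter did with my results.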

He also pointed out a recent blog post discussing similar subjects at Neil Gunther's blog, although I must admit I've only had a quick glance at it because I'm up to my eyeballs in mail at the moment :-(