NFS and IO testing : Bonnie++

Been spending time lately testing out and trying to tune NFS mounts.
For testing IO, I've been using
  • dd
  • iozone
  • bonnie++
  • orion
The first option, dd, is fine for file creation tests and/or sequential reads, but it is less flexible for random reads or random writes.
To test random reads I thought I'd use iozone, but I don't see a purely random read test.
Then I thought I'd use the Oracle-supplied tool, orion, to run random read tests on an NFS-mounted file system, but this doesn't work, at least on AIX 6.1 with my mount settings.

Charles Hooper's Oracle Notes

Miscellaneous Random Oracle Topics: Stop, Think, ... Understand

Workload System Statistics Bug in 11.2

Since the introduction of 11.2 I have received questions about “strange” workload system statistics on a regular basis. The last email on that topic was sent to me yesterday, so, instead of sending yet another private reply, I decided to write this short post.

What’s wrong with 11.2 and workload system statistics?

Let’s have a look at the output of the following query:

SQL> SELECT pname, pval1
  2  FROM sys.aux_stats$
  3  WHERE sname = 'SYSSTATS_MAIN';

PNAME                  PVAL1
--------------- ------------
CPUSPEEDNW            1596.0
IOSEEKTIM                4.0
IOTFRSPEED            4096.0
SREADTIM             10900.3
MREADTIM              4525.8
CPUSPEED              1603.0
MBRC                     7.0
MAXTHR            17391616.0
SLAVETHR            413696.0

As you can see, the SREADTIM and MREADTIM times are very high: in this case about three orders of magnitude higher than what you would expect from a regular system (the SREADTIM of 10900.3 would correspond to a perfectly plausible single-block read time of roughly 10.9 ms if it were out by a factor of 1,000).

I’m not aware of the exact cause of this problem, but it seems to me that the statistics externalized in x$kcfio are broken. In any case, there are several bugs related to it in MOS (9842771 and 9701256). Hence, it’s not a feature (e.g. a change in the unit of measure); it’s a bug. On my Linux test system I’m able to reproduce it on both 11.2.0.1 and 11.2.0.2 and, according to the bugs mentioned above, the problem is not limited to Linux.
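
For context, workload statistics like the ones above are gathered with DBMS_STATS.GATHER_SYSTEM_STATS; a minimal sketch (the 30-minute window is just an example) looks like this:

BEGIN
  -- gather workload system statistics over a 30-minute window
  dbms_stats.gather_system_stats(
    gathering_mode => 'INTERVAL',
    interval       => 30   -- minutes
  );
END;
/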

Since the DBMS_STATS package is not able to gather correct statistics in these releases, the only advice I can give on this topic is to set them manually to sensible values.
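
For example, something along the following lines could be used (the figures are purely illustrative; pick values that are sensible for your own system):

BEGIN
  -- get rid of the bogus workload statistics first
  dbms_stats.delete_system_stats;
  -- then set sensible values manually (illustrative figures only)
  dbms_stats.set_system_stats(pname => 'CPUSPEED', pvalue => 1600);
  dbms_stats.set_system_stats(pname => 'SREADTIM', pvalue => 10);
  dbms_stats.set_system_stats(pname => 'MREADTIM', pvalue => 26);
  dbms_stats.set_system_stats(pname => 'MBRC',     pvalue => 8);
END;
/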

Update 2011-03-23

To fix the problem you can install patch 9842771, which is available for 11.2.0.1 and 11.2.0.2. By the way, since the patch only provides a new version of the dbms_stats_internal package, the statistics externalized in x$kcfio are not actually broken… they just use another unit of measure.

Different Performance from Standard Edition and Enterprise Edition? 3

November 22, 2010 (Back to the Previous Post in the Series) (Forward to the Next Post in the Series) In the previous articles of this series we looked at different performance behavior exhibited by Standard Edition and Enterprise Editions for release versions 10.2.0.5 and 11.2.0.1, when presented with a very simple SQL statement that forced an index access path.  [...]

YPDNGG: You Probably Don’t Need Golden Gate

Before launching into this, I must give due deference to Mogens Nørgaard’s landmark article, You Probably Don’t Need RAC (YPDNR), available here, but originally published Q3 2003 in IOUG Select Journal.  Mogens showed that you can be a friend of Oracle without always agreeing with everything they do.

Index Join

One of the less well known access paths available to the optimizer is the “index join” also known as the “index hash join” path. It’s an access path that can be used when the optimizer decides that it doesn’t need to visit a table to supply the select list because there are indexes on the table that, between them, hold all the required columns. A simple example might look something like the following:


create table indjoin
as
select
	rownum	id,
	rownum	val1,
	rownum	val2,
	rpad('x',500) padding
from
	all_objects
where
	rownum <= 3000
;

create unique index ij_v1 on indjoin(id, val1);
create unique index ij_v2 on indjoin(id, val2);

-- collect stats on the table and indexes

select
	ij.id
from
	indjoin		ij
where
	ij.val1 between 100 and 200
and	ij.val2 between 50 and 150
;

Note that the columns in the where clause appear in (some) indexes, and the column(s) in the select list exist in (at least) some indexes. Under these circumstances the optimizer can produce the following plan (the test script was one I wrote for 8i – but this plan comes from an 11.1 instance):


---------------------------------------------------------------------------
| Id  | Operation              | Name             | Rows  | Bytes | Cost  |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |                  |     3 |    36 |    24 |
|   1 |  VIEW                  | index$_join$_001 |     3 |    36 |    24 |
|*  2 |   HASH JOIN            |                  |       |       |       |
|*  3 |    INDEX FAST FULL SCAN| IJ_V1            |     3 |    36 |    11 |
|*  4 |    INDEX FAST FULL SCAN| IJ_V2            |     3 |    36 |    11 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access(ROWID=ROWID)
   3 - filter("VAL1"<=200 AND "VAL1">=100)
   4 - filter("VAL2"<=150 AND "VAL2">=50)

Column Projection Information (identified by operation id):
-----------------------------------------------------------
   1 - "ID"[NUMBER,22]
   2 - (#keys=1) "ID"[NUMBER,22]
   3 - ROWID[ROWID,10], "ID"[NUMBER,22]
   4 - ROWID[ROWID,10]

We do a fast full scan of the two indexes, extracting the rowid and id from index ij_v1 and just the rowid from index ij_v2. We can then get the result we want by doing a hash join between these two result sets on the rowid values: any time the two rowsources have a rowid in common, it's the rowid of a row where val1 is between 100 and 200 and val2 is between 50 and 150, and the first rowsource is carrying the id, which is the thing we need to report.

There are a couple of little observations that we can make about this example.

    First, although I’ve only used two indexes in this example, Oracle is not limited to just two indexes. The number of indexes that could be used is effectively unlimited.
    Second, the index_join path is strictly limited to cases where the optimizer can see that every column in the query can be found in indexes on the table.
    Third, although my example uses index fast full scans that’s not a necessary feature of the plan. Just like any other hash join, Oracle could use an index range (or full) scan to get some of the data.
    Finally, there are clearly a couple of bugs in the code.

Bugs:

If you check the rows/bytes columns in the plan you'll see that the predicted number of rows selected is the same for both indexes (lines 3 and 4). But we extract the rowid and the id from the first index (projection detail for line 3), so the total data volume expected from line 3 is slightly larger than the total data volume from line 4, where we extract only the rowid; theoretically, therefore, the optimizer has used the indexes in the wrong order: the one supplying the smaller volume of data should have been used as the first (build) rowsource.

More significantly, though, a quick check of the code that generates the data tells you that each index will supply 101 rows to the hash join – and you can even show that for other query execution plans the optimizer will calculate this cardinality (nearly) correctly. In the case of the index join the optimizer seems to have lost the correct individual cardinalities and has decided to use the size of the final result set as the cardinality of the two driving index scans.
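
For instance, since val1 and val2 are both generated from rownum in the script above, a couple of quick counts show what each index scan should return:

select count(*) from indjoin where val1 between 100 and 200;	-- 101 rows
select count(*) from indjoin where val2 between  50 and 150;	-- 101 rows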

There’s more, of course: one of the strangest things about the index join is that if your select list includes the table’s rowid, the optimizer doesn’t consider that to be a column in the index. So even though the plan shows the rowids being projected in the hash join, Oracle won’t use an index join for a query returning the rowid!
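
For example (using the same indjoin table), a query like the following will not be given the index join plan, even though both indexes could in principle supply the rowid:

select
	ij.rowid, ij.id
from
	indjoin		ij
where
	ij.val1 between 100 and 200
and	ij.val2 between 50 and 150
;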

Footnote: The reason I’ve written this brief introduction to the index join is because an interesting question came up at the first E2SN virtual conference.

“If you hint an index hash join, is there any way of telling Oracle the order in which it should use the indexes?”

The answer is no – but there are ways of creating code that will do what you want, and that will be the topic of my next blog.
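
For reference, a hinted version of the example query might look like the sketch below; the index_join hint (which takes the table alias and, optionally, the indexes to consider) lets you ask for the path, but it gives you no control over the order in which those indexes are used:

select
	/*+ index_join(ij ij_v1 ij_v2) */
	ij.id
from
	indjoin		ij
where
	ij.val1 between 100 and 200
and	ij.val2 between 50 and 150
;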

[Further reading on Index Joins]

Different Performance from Standard Edition and Enterprise Edition? 2

November 21, 2010 (Back to the Previous Post in the Series) (Forward to the Next Post in the Series) In the previous article in this series I compared the Standard Edition of Oracle 10.2.0.5 with the Enterprise Edition of Oracle 10.2.0.5 when a simple SQL statement was executed, looking for examples of different performances in the [...]

New Features

… but only a reference.

Arup Nanda, who wrote a series on 10g New Features and then 11g New Features for Oracle Magazine, has just published a short note pointing out that Oracle has been busy doing a little house-keeping on their website(s), and this has resulted in his articles moving.

If you want to read some really good material (the how and the why, as well as the what) on the more significant features of 10g and 11g, then follow this link to Arup’s Blog.

Different Performance from Standard Edition and Enterprise Edition? 1

November 21, 2010 (Forward to the Next Post in the Series) As I am sure that quite a few people are aware, the Standard Edition of Oracle Database is likely to be suitable for many of the smaller databases that you might find in your organization, as long as the following are not needed: Parallel execution [...]

Conference Update

It's the week before the UKOUG conference (or, more accurately, the UKOUG Conference Series Technology and E-Business Suite 2010 - maybe I'll just stick to UKOUG) and, whilst it's an unusual year in as much as I'm not presenting, some traditions hold true. I have a stinking cold - my third in the past month - which proves it can't just be the stress or work associated with upcoming presentations. I'm starting to wonder about Vitamin D deficiency as it seems to be a factor in both psoriasis flare-ups (bad one at the moment) and a poor immune system. Or maybe it's that damn moustache trapping an unusual number of germs? :-(

Whatever, I'll still be heading to Brum, but only for Monday and Tuesday. With no free place on offer, I decided to only take a couple of days away from work this time. Regardless, there's plenty to look forward to although my own personal contribution might not be what I'm most looking forward to! Once Lisa Dobson and Debra Lilley got wind of my fund-raising efforts, it was always going to end in tears. I believe I'll have a part to play in Monday's closing keynote, so please give generously. My agenda looks something like this but I somehow doubt I'll make all these presentations.

Monday

09:00 - 10:00    What's new in Oracle Database Application Development - Tom Kyte
10:25 - 11:10    Sane SAN 2010 - James Morle
11:20 - 12:20    Oracle 'Hidden' Features: Features You Probably Didn't Know About and are FREE - Graham Wood
11:45 - 12:45    Roundtable: Exadata Database Machine - Joel Goodman
13:20 - 14:05    Practical Use of Active Session History - David Kurtz
14:15 - 15:15    Performance Stories from Exadata Migrations - Tanel Poder
15:40 - 16:40    Oracle Optimizer - Top Tips to get Optimal SQL Execution all the Time - Maria Colgan
16:50 - 17:35    Customising ASH and AWR: Beyond ashrpt.sql and awrrpt.sql - Daniel Fink
17:50 - 18:50    Keynote Presentation - David Callaghan plus Movember Shave-off
18:50 - 20:00    Monday evening networking

Tuesday

08:45 - 09:30    Keynote presentation - Graham Wood
09:40 - 10:40    Stabilising Statistics - Tony Hasler
09:40 - 10:40    What else can I do with AWR/ASH data - Tom Kyte
11:05 - 12:05    The Optimizer – what you can expect after upgrading to Oracle Database 11g - Maria Colgan
12:15 - 13:00    Time Series Analysis Techniques for Statspack or AWR - Wolfgang Breitling
14:05 - 14:50    Co-operating with the database - Jonathan Lewis
15:00 - 15:45    Analysis of Oracle ASM Failability - Alex Gorbachev
16:10 - 17:10    Using Constraints to Improve Performance - Joze Senegacnik
17:20 - 18:05    How to Build a System that Doesn't Scale - Graham Wood
18:05 - 20:00    Tuesday evening networking - Sponsored by OTN

As far as actual speaking engagements go, next up will be Hotsos Symposium 2011 - 6th-10th March 2011. Hotsos is one of my favourite conferences so I'm looking forward to that. It also gives me a chance to put the subject of statistics on partitioned objects to bed, for a while at least. (Oh, and expect a moustache to appear on that page at some point!) The more the blog posts have expanded into a sprawling mess and the more people I've pointed towards those posts, the more I've realised that I should write a white paper on the subject to tidy up the mess and hopefully act as a more useful reference. I'll then base the presentation on the paper. That's the plan anyway.

Then I have a couple of other overseas events. First is the OUG Ireland Conference on 30th March 2011. I don't know too much about this one yet, other than that I've offered to give a presentation on Real Time SQL Monitoring that I've done a couple of times at my current client site. I've presented in the past at an OUG Ireland event, but it was more than a few years ago!

Then it's time to go back on a ferry! The last such conference I attended was a blast so I'm expecting to have a great time at the Norwegian Oracle User Group (OUGN) Spring Conference - 7th-9th April 2011. Here's Mark Rittman's post about an earlier conference.

Lots to look forward to, then, not least the removal of my facial hair.