Quiz Night

Many people in the UK like to greet the new year with some sort of party, and in this household the type of party we like includes old-fashioned party games (can you draw “Brief Encounters” in 10 seconds ?); so, for all those members of the Oracle community who haven’t prepared their party games this year, here’s a little quiz:

Which processes could report waits for “log file sequential read”, and why ?

    1.
    2.
    3.
    4.
    5.
    6.
    7.
    8.
    9.
    10.

Warning: the number of answers I have may, or may not, be the 10 suggested by the list.

Update Jan 5th: There were a couple of answers I wasn’t expecting (and would need to check), but some things are fairly easy to check. Here, for example, is a simple script that will identify sessions that have waited for ‘log file sequential read’.

select
	ses.program
from
	v$session_event sev,
	v$session ses
where
	sev.event = 'log file sequential read'
and	ses.sid = sev.sid
;
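
If you want an idea of how much waiting is involved – rather than just which sessions have waited at all – a slightly extended version of the script (just a sketch: the extra columns are the standard total_waits and time_waited figures from v$session_event, with time_waited reported in centiseconds) might look like this:

select
	ses.sid,
	ses.program,
	sev.total_waits,
	sev.time_waited
from
	v$session_event	sev,
	v$session	ses
where
	sev.event = 'log file sequential read'
and	ses.sid = sev.sid
order by
	sev.time_waited desc
;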

Answers 7th Jan:

The obvious one, of course, is ARCH (or ARCn, as they tend to be) – the archiver process(es). Archiving requires something to read the online redo log files and copy them to some other location. Here’s a brief extract from tracing an archive process – showing reads of 1MB (2048 blocks x 512 bytes per block) – just after a log file switch:

WAIT #0: nam='log file sequential read' ela= 38410 log#=0 block#=1 blocks=2048 obj#=-1 tim=101515708285
WAIT #0: nam='Log archive I/O' ela= 183580 count=1 intr=256 timeout=-1 obj#=-1 tim=101515892589
WAIT #0: nam='log file sequential read' ela= 84409 log#=0 block#=2049 blocks=2048 obj#=-1 tim=101515977375
WAIT #0: nam='Log archive I/O' ela= 116641 count=1 intr=256 timeout=-1 obj#=-1 tim=101516094684
WAIT #0: nam='log file sequential read' ela= 31139 log#=0 block#=4097 blocks=2048 obj#=-1 tim=101516126157
WAIT #0: nam='Log archive I/O' ela= 122492 count=1 intr=256 timeout=-1 obj#=-1 tim=101516250399
WAIT #0: nam='log file sequential read' ela= 32197 log#=0 block#=6145 blocks=2048 obj#=-1 tim=101516282965
WAIT #0: nam='Log archive I/O' ela= 122779 count=1 intr=256 timeout=-1 obj#=-1 tim=101516552698
WAIT #0: nam='Log archive I/O' ela= 4 count=0 intr=32 timeout=2147483647 obj#=-1 tim=101516552996
WAIT #0: nam='Log archive I/O' ela= 25 count=1 intr=32 timeout=2147483647 obj#=-1 tim=101516553297

Slightly less obvious is LGWR, the log writer itself. Apart from reading the log files at (normal) startup to check that they’re there, LGWR reads the start of the current and next log files on every log switch. Here’s the trace file of LGWR on a log file switch where there are two files in each group:

WAIT #0: nam='log file sequential read' ela= 32788 log#=0 block#=1 blocks=1 obj#=-1 tim=100016453547
WAIT #0: nam='log file sequential read' ela= 18309 log#=1 block#=1 blocks=1 obj#=-1 tim=100016471921
WAIT #0: nam='log file single write' ela= 518 log#=0 block#=1 blocks=1 obj#=-1 tim=100016472654
WAIT #0: nam='log file single write' ela= 516 log#=1 block#=1 blocks=1 obj#=-1 tim=100016473225
Media recovery not enabled or manual archival only 0x10000
WAIT #0: nam='log file sequential read' ela= 90 log#=0 block#=1 blocks=1 obj#=-1 tim=100016473376
WAIT #0: nam='log file sequential read' ela= 73 log#=1 block#=1 blocks=1 obj#=-1 tim=100016473474
WAIT #0: nam='log file single write' ela= 477 log#=0 block#=1 blocks=1 obj#=-1 tim=100016473981
WAIT #0: nam='log file single write' ela= 856 log#=1 block#=1 blocks=1 obj#=-1 tim=100016474885

By playing around with files per group we can work out that the order of activity is:

    read the current
    write the next
    read the current
    write the current
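
For anyone who wants to reproduce this, here’s a sketch of how such an LGWR trace could be captured – it assumes a SYSDBA connection and uses oradebug, with the log writer’s Oracle pid picked up from v$process first:

rem	Sketch: tracing LGWR waits across a log file switch

select	pid, spid, program
from	v$process
where	program like '%LGWR%'
;

rem	substitute the pid reported by the query above (11 here is just an example)
oradebug setorapid 11
oradebug event 10046 trace name context forever, level 8

alter system switch logfile;

oradebug event 10046 trace name context off
oradebug tracefile_name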

Next up: any process which is involved in (asynchronous) Change Data Capture (CDC) or Streams will be reading the log files, as will any process running a Log Miner session.

A couple of people suggested that SMON and the PX slaves could read the log files on database startup – but as another poster pointed out, it’s the startup process that reads the log files, and if parallel recovery is invoked it passes change vectors to the PX slaves; the slaves don’t read the log files directly. Apart from that, I don’t really know which other processes read the log files – I don’t want to say “process X doesn’t read the log file”, because it may do so at times that I haven’t considered. (And there are lots of processes to look into in 11g.)

However, there is one (generic) process that can read the log files – and that’s ANY process that has to handle data blocks. And that’s really why I wrote this note in the first place. If a process finds at any time that the buffer it’s looking at in memory is corrupt, it may do real-time block recovery if it thinks the corruption has occurred between the moment the block was read from disk and the present. There are various flags and details in the buffer header that tell it whether the block is dirty (i.e. differs from what’s on disk) and give the redo block address (RBA) at which the earliest known change to the in-memory block can be found in the log files.

So any process may, in conditions that shouldn’t happen often (we hope), identify an in-memory corruption, re-read the block from disk, then go back to that starting RBA and walk forward through the redo logs to find the redo needed to bring the block up to date. So if you see excessive amounts of “log file sequential read” coming from your foreground sessions, check what’s going on – it might be an indication of a hardware fault.
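
One way of doing that check – a variation on the script from the Jan 5th update, and again only a sketch – is to restrict the report to foreground sessions (v$session.type = 'USER') and aggregate by program:

select
	ses.program,
	sum(sev.total_waits)	total_waits,
	sum(sev.time_waited)	time_waited
from
	v$session_event	sev,
	v$session	ses
where
	sev.event = 'log file sequential read'
and	ses.sid = sev.sid
and	ses.type = 'USER'
group by
	ses.program
order by
	sum(sev.time_waited) desc
;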

Thanks to anyone who participated – and if anyone runs the little script and finds some new examples of log file sequential reads, please add them to the list.

Update Jan 2011:
Thanks to Pavol Babel in comment 8 and his reply to comment 7 for adding MRP0 and RFS to the list of processes that are expected to read the log file.

ANSI Full Outer Join, Ready or Not?

December 30, 2010 (Modified January 1, 2011) When I read pages 101-103 of the book “Pro Oracle SQL” a couple of days ago, I was reminded of a couple of things.  This section of the book describes full outer joins, showing the ANSI syntax and Oracle syntax to perform a full outer join.  If you read [...]

WordPress 3.0.4 Released…

WordPress 3.0.4 has been released. This is quite an important release because it fixes,

“a core security bug in our HTML sanitation library, called KSES. I would rate this release as “critical.””

Get upgrading soon…

Cheers

Tim…

Existence

Here’s a posting on OTN that demonstrates a piece of SQL that uses inline scalar subqueries which are all “existence” tests to produce (presumably) a set of flags describing the state of a particular item of data.

I’ve linked to it because I contributed a comment about the implications of the cost figures that appeared in the execution plan for two of the “exists” subqueries. Essentially “existence” is optimized as a “first_rows(1)” operation – which results in two lines of the plan showing two different costs for table scans of the same table.

Update 30th Dec:
If you follow the OTN note you’ll see that the original poster was confused by my comments about the relative costs of the two tablescans, so I’ve whipped up a quick script to clarify the point. It uses my typical “reproducible” setup of 1MB uniform extents, an 8KB block size, freelist management, and system statistics disabled. Here’s the starting data set:

create table t1
as
with generator as (
	select	--+ materialize
		rownum id
	from dual
	connect by
		rownum <= 10000
)
select
	rownum			id,
	mod(rownum,1000)	n_1000,
	mod(rownum,5000)	n_5000,
	mod(rownum,10000)	n_10000,
	lpad(rownum,10,'0')	small_vc,
	rpad('x',100)		padding
from
	generator	v1,
	generator	v2
where
	rownum <= 100000
;

begin
	dbms_stats.gather_table_stats(
		ownname		 => user,
		tabname		 =>'T1',
		estimate_percent => 100,
		method_opt 	 => 'for all columns size 1'

	);
end;
/

Notice that any given value of n_1000 appears every 1,000 rows, any given value of n_5000 appears every 5,000 rows, and any given value for n_10000 appears every 10,000 rows. Think about the effect this has on Oracle’s prediction of how much work it has to do when asked to find a particular row under first_rows(1) optimisation (which is the optimisation strategy triggered by the “rownum = 1” predicates below):


set autotrace traceonly explain

select count(*) from t1;
select id from t1 where	n_1000  =  500 and rownum = 1;
select id from t1 where	n_5000  = 2500 and rownum = 1;
select id from t1 where	n_10000 = 5000 and rownum = 1;

set autotrace off

The more rows you have to scan (on average) to find a given value, the more costly the tablescan becomes. (The initial select count(*) is there to demonstrate Oracle’s estimate of the cost of scanning the whole table).

---------------------------------------------------
| Id  | Operation          | Name | Rows  | Cost  |
---------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |   283 |
|   1 |  SORT AGGREGATE    |      |     1 |       |
|   2 |   TABLE ACCESS FULL| T1   |   100K|   283 |
---------------------------------------------------

-----------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost  |
-----------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |     9 |     5 |
|*  1 |  COUNT STOPKEY     |      |       |       |       |
|*  2 |   TABLE ACCESS FULL| T1   |     2 |    18 |     5 |
-----------------------------------------------------------
   1 - filter(ROWNUM=1)
   2 - filter("N_1000"=500)

-----------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost  |
-----------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |     9 |    16 |
|*  1 |  COUNT STOPKEY     |      |       |       |       |
|*  2 |   TABLE ACCESS FULL| T1   |     2 |    18 |    16 |
-----------------------------------------------------------
   1 - filter(ROWNUM=1)
   2 - filter("N_5000"=2500)

-----------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost  |
-----------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |     9 |    30 |
|*  1 |  COUNT STOPKEY     |      |       |       |       |
|*  2 |   TABLE ACCESS FULL| T1   |     2 |    18 |    30 |
-----------------------------------------------------------
   1 - filter(ROWNUM=1)
   2 - filter("N_10000"=5000)
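
As a rough cross-check – this is just back-of-the-envelope arithmetic on my part, not a statement of the optimizer’s internal formula – the three stopkey costs are in the right ballpark for “full tablescan cost scaled by the fraction of the table Oracle expects to scan before hitting the first matching row”:

select
	round(283 * ( 1000/100000))	scan_fraction_n_1000,
	round(283 * ( 5000/100000))	scan_fraction_n_5000,
	round(283 * (10000/100000))	scan_fraction_n_10000
from
	dual
;

That gives 3, 14 and 28 – each a couple short of the reported 5, 16 and 30, so there’s evidently a small fixed overhead on top, but the proportionality is the point.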

And now, check the costs of the tablescans in the existence subqueries for the following:


select
	(select 1 from dual where exists (select null from t1 where n_1000  =  500)) n_1000,
	(select 1 from dual where exists (select null from t1 where n_5000  = 2500)) n_5000,
	(select 1 from dual where exists (select null from t1 where n_10000 = 5000)) n_10000
from
	dual
;

-----------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost  |
-----------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |       |     2 |
|*  1 |  FILTER            |      |       |       |       |
|   2 |   FAST DUAL        |      |     1 |       |     2 |
|*  3 |   TABLE ACCESS FULL| T1   |     2 |     8 |     5 |
|*  4 |  FILTER            |      |       |       |       |
|   5 |   FAST DUAL        |      |     1 |       |     2 |
|*  6 |   TABLE ACCESS FULL| T1   |     2 |     8 |    16 |
|*  7 |  FILTER            |      |       |       |       |
|   8 |   FAST DUAL        |      |     1 |       |     2 |
|*  9 |   TABLE ACCESS FULL| T1   |     2 |     8 |    30 |
|  10 |  FAST DUAL         |      |     1 |       |     2 |
-----------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter( EXISTS (SELECT /*+ */ 0 FROM "T1" "T1" WHERE "N_1000"=500))
   3 - filter("N_1000"=500)
   4 - filter( EXISTS (SELECT /*+ */ 0 FROM "T1" "T1" WHERE "N_5000"=2500))
   6 - filter("N_5000"=2500)
   7 - filter( EXISTS (SELECT /*+ */ 0 FROM "T1" "T1" WHERE "N_10000"=5000))
   9 - filter("N_10000"=5000)

VirtualBox 4.0.0. Changes to VBoxManage Syntax…

I just got home to find a question about my RAC on VirtualBox article. The poster was having problems creating the virtual disks using the commands in the article. A quick scan through the docs reveals a number of changes to the way VirtualBox 4.0.0 handles disks and also changes to the VBoxManage syntax. I’ve amended the article to include a version of the commands for version 4.0.0 which seem to create and attach the shared disks in the same state, but it will be a few days before I get to test this properly.

So what’s happened that affects shared disk setup?

First, you can’t create a shareable disk. You have to create the disk (createhd), then modify it to shareable (modifyhd). That sounds fine, but there is an issue. The manual says,

“Before VirtualBox 4.0, it was necessary to call VBoxManage openmedium before a medium could be attached to a virtual machine; that call “registered” the medium with the global VirtualBox media registry. With VirtualBox 4.0 this is no longer necessary; media are added to media registries automatically. The “closemedium” call has been retained, however, to allow for explicitly removing a medium from a registry”

Well that is not entirely true. When you create a new disk it is not visible in the media manager until it is attached to a VM. That means you have to create the disk, attach it to a VM and then convert it to shareable. If you try to modify it before attaching it to a VM you get told the disk doesn’t exist. This just feels wrong.
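
For what it’s worth, here’s a sketch of the sequence that seems to work under 4.0.0 – the disk name, size, VM name and controller name below are just examples, so adjust them to suit your own setup:

# Create the disk (fixed size, since it's going to be shared).
VBoxManage createhd --filename asm1.vdi --size 10240 --format VDI --variant Fixed

# Attach it to a VM first; until it's attached it doesn't appear in the
# media registry and modifyhd complains that the disk doesn't exist.
VBoxManage storageattach rac1 --storagectl "SATA Controller" --port 1 --device 0 --type hdd --medium asm1.vdi

# Now mark it as shareable.
VBoxManage modifyhd asm1.vdi --type shareable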

As a minor annoyance, the VM detail pane doesn’t notice the “--type” change, so the disks still display as normal unless you restart VirtualBox, or click the “Storage” link for the VM and come straight out, which seems to get the screen to update.

This is not a big deal, it’s just a little annoying as the old syntax was more straightforward. I’m sure it was done for a good reason… :)

Cheers

Tim…

Hash Joins – What is Wrong with this Statement?

December 29, 2010 I started reading the book “Pro Oracle SQL” a couple of days ago, and I am having trouble putting it down.  Other than a couple of minor word substitutions and intended, but not specified, qualifying words, the first 100 pages of the book are fantastic (I suspect that many of these cases [...]

Delphix deals before the end of year


It's "let's make a deal" time at Delphix. If you have ever had the fun of closing an Oracle deal at their fiscal year end, you know what happens. Not only is it fiscal year end, but Delphix is also millimeters away from adding an extra figure to their end-of-quarter total, and I'm betting that now is the last time prices this low will ever be seen - just my viewpoint from being inside the castle.

Excel connect to Oracle – 64bit and 32bit issues

Wow, thanks to

Process Monitor   http://technet.microsoft.com/en-us/sysinternals/bb896645

I was able to track down why I couldn’t connect to Oracle from Excel.

I had wanted to try some of the examples Charles Hooper has posted on connecting to and monitoring Oracle, for example

http://hoopercharles.wordpress.com/2010/01/20/excel-scrolling-oracle-performance-charts/

I kept getting the error “Provider not found”
Now what kind of trace info is there for an error like this in Excel? None AFAIK. Time to start guessing.
I’m on Windows 7 64-bit. I have the 64-bit 11gR2 Oracle installed.  Excel shows up in Task Manager as “EXCEL.EXE *32”. My first guess was, “oh, Excel must want the 32-bit libraries”, so I got the 32-bit instant client from Oracle, unzipped it into a directory and put that directory first in the path. Still no dice.
Did a lot of Google searches and found that I needed

oraoledb.dll

but this wasn’t in any of the instant client zips that I downloaded from

http://www.oracle.com/technetwork/topics/winsoft-085727.html.

Turns out it’s in a download halfway down the page:

*Instant Client Package – ODAC: Includes ODP.NET, Oracle Services for MTS, Oracle Providers for ASP.NET, Oracle Provider for OLE DB, and OO4O with Oracle Instant Client

Downloaded this, put all the dlls in the first directory in my path. Still no dice.

I tried “strace”, but it gave no output. Rrrr.

Then I tried process monitor – bingo.

With process monitor, I filtered on processes containing “excel”, ran the connect, and got tons of output, but I knew it was a problem with libraries. I found “oraoledb10.dll” among the output. It was listed in an old Oracle 10 install directory. Oracle 10 32-bit had initially been installed; that install gave no warnings but bombed out late in the game, so I removed the 10g software and installed Oracle 11gR2 64-bit. (Oracle 11gR2 32-bit won’t even begin the install.)
So now, I scoured the registry with regedit and found the old oraoledb10.dll referenced here:

HKEY_CLASSES_ROOT\Wow6432Node\CLSID\{3F63C36E-51A3-11D2-BB7D-00C04FA30080}\InprocServer32

I changed this to the new location of oraoledb11.dll, and now it works.
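
For anyone who hits the same problem, the change boils down to pointing the default value of that InprocServer32 key at the 32-bit OraOLEDB11.dll. As a .reg sketch – the DLL path below is only an example, use wherever your 32-bit OraOLEDB11.dll actually lives:

Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\Wow6432Node\CLSID\{3F63C36E-51A3-11D2-BB7D-00C04FA30080}\InprocServer32]
@="C:\\instantclient_11_2\\OraOLEDB11.dll"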

TRON: Legacy…

Just got back from TRON: Legacy. The word “Legacy” seems ever so apt.

I went to see the first TRON film when it came out. I loved it. I was blown away. I wanted it all to be real. Remember this was 1982. The year after the ZX81 was released. I had a ZX81 and I was totally amazed by computers. It didn’t seem like a massive leap to think of programs as little people running around inside the box doing amazing things. It was a brave new world.

So 28 years later we have the sequel. It looks amazing and has a brilliant sound track but the fatal flaw is the concept of programs running round like people. They keep mentioning the “grid” as if that brings the concept up to date. It’s not little people running round in the box. It’s little people running round in “the grid”. Oh please. Give me a break. If that were the case there would be two giant fat people called Facebook and Google and a bunch of ants.

If I were them I would have re-imagined it entirely and replaced the original concept with an immersive virtual reality program. Doh! Already been done by eXistenZ and The Matrix.

TRON was amazing in its time. TRON: Legacy isn’t. The best thing that could happen to this craptacular film is they chop it up and turn it into a visually spectacular extended video for the Daft Punk soundtrack. I would probably buy that DVD.

Cheers

Tim…

Migrating to wordpress

I'm migrating to wordpress. In 5 seconds you should be redirected to