
Oakies Blog Aggregator

gc cr disk read

You might encounter the RAC wait event ‘gc cr disk read’ in 11.2 while tuning your applications in a RAC environment. Let’s probe this wait event to understand why a session would wait on it.

Understanding the wait event

Let’s say that a foreground process running in node 1 is trying to access a block using a SELECT statement, and that block is not in the local cache. To maintain read consistency, the foreground process requires a version of the block consistent with the query SCN. The sequence of operations is then (simplified):

  1. The foreground session calculates the master node of the block and requests the LMS process running in the master node for access to the block.
  2. Let’s assume the block is resident in the master node’s buffer cache. If the block is in a consistent state (meaning the block version SCN is lower than (or equal to?) the query SCN), then the LMS process can send the block to the foreground process immediately. Life is not that simple, so let’s assume the requested block has an uncommitted transaction.
  3. Since the block has uncommitted changes, the LMS process cannot send the block immediately. The LMS process must create a CR (Consistent Read) version of the block: it clones the buffer and applies undo records to the cloned buffer, rolling the block back to the SCN consistent with the requested query SCN.
  4. Then the CR block is sent to the foreground process.
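
Before going further, a quick way to gauge how much time an instance is spending on these global cache CR events is to query gv$system_event — a sketch, assuming you can query the gv$ views:

-- time spent per instance on global cache CR events, in milliseconds
select inst_id, event, total_waits,
       round(time_waited_micro / 1000) time_waited_ms
from gv$system_event
where event like 'gc cr%'
order by time_waited_micro desc;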

LMS is a lightweight process

Global cache operations must complete quickly, on the order of milliseconds, to maintain the overall performance of a RAC database. LMS is a critical process and does not perform heavy-lifting tasks such as disk I/O. If the LMS process would have to initiate disk I/O, then instead of initiating the I/O, LMS downgrades the block mode and sends the block to the requesting foreground process (this is known as the Light Works rule). The foreground process then applies undo records to the block to construct the CR version itself.

Now, the foreground process might not find the undo blocks in its local cache, since the transaction happened in the remote instance. A request is sent to the remote LMS process to access the undo block. If the undo block is not in the remote cache either, the remote LMS process sends the foreground process a grant to read the undo block from disk. The foreground process accounts the time spent waiting for these undo segment block grants to the ‘gc cr disk read’ wait event.

There are other reasons why the FG process might have to read undo blocks. One of them is the fairness downconvert triggered by the LMS process. Essentially, if a block is requested too many times, leading to repeated CR block fabrication, then instead of LMS doing more work, the LMS process simply downconverts the block mode, sends the block, and grants the requester access to it. The FG process then applies undo to construct the CR block itself.

gv$cr_block_server can be used to review the number of downconverts, light works, etc. But it is probably not possible to identify the reason for a particular block downconvert after the event.
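
For example, a sketch of such a review, assuming the LIGHT_WORKS and FAIRNESS_DOWN_CONVERTS columns of that view as documented for 11.2:

-- per-instance LMS CR-serving statistics
select inst_id, cr_requests, data_requests, undo_requests,
       light_works, fairness_down_converts
from gv$cr_block_server;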

Why do we need this new event?

There is a very good reason why this event was introduced. Prior to 11g, waits for single-block CR grants were accounted to wait events such as ‘gc cr block 2-way’, ‘gc cr block 3-way’, etc. Waits for grants on remote-instance undo blocks for CR fabrication are special, in the sense that this is additional, unnecessary work from the application point of view. We need to be able to differentiate the time spent waiting for undo block grants for CR fabrication from other types of grants (such as those for data blocks). So Oracle introduced this new event, and I do think it will be very useful for debugging performance issues.

Prior to 11g, you could still differentiate waits for single-block grants on undo blocks using ASH data or trace files, but you would have to use the obj# field for the differentiation: obj# is set to 0 or -1 in the case of undo blocks and undo header blocks.
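
For instance, something along these lines against ASH (a sketch; the obj#-based filter is an inference from observed traces, not documented behaviour):

-- pre-11g style: infer undo-block grant waits from obj# being 0 or -1
select event, current_obj#, count(*)
from gv$active_session_history
where event like 'gc cr%'
and current_obj# in (0, -1)
group by event, current_obj#;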

Test case

Of course, a test case would be nice. Any regular table will do; my table has just two columns (number, varchar2(255)) with 1000 rows or so.

create table rs.t_1000 (n1 number, v1 varchar2(255));
insert into rs.t_1000 select n1, lpad(n1, 255, 'DEADBEAF') from (select level n1 from dual connect by level <= 1000);
commit;
  1. node 1: update rs.t_1000 set v1=v1 where n1=100;
  2. node 2: select * from rs.t_1000 where n1=100; -- just to get the parsing details out of the way.
  3. node 1: alter system flush buffer_cache; -- flush the buffer cache.
  4. node 2: select * from rs.t_1000 where n1=100; -- this SELECT statement suffers from ‘gc cr disk read’ waits.

At step 3 in our test case, I flushed the buffer cache in node 1. When the block was read again from node 2 in step 4, here is an approximate sequence of operations that occurred:

  1. For the SELECT statement in step 4, the foreground process in node 2 sent a block request to the LMS process in node 1; the LMS process in node 1 did not find the block in its buffer cache (since we flushed the buffer cache in step 3).
  2. So, the LMS process in node 1 sent the foreground process a grant to read the data block from disk.
  3. The foreground process read the block from disk and found that the block version is higher than the query environment SCN, and that there is a pending transaction in an ITL entry of the block.
  4. The foreground process cloned the buffer and tried to apply undo records in order to reconstruct a CR version of the block matching the query environment SCN.
  5. But the pending transaction was initiated in node 1, and its undo segment is mastered by node 1. So the FG process sent a request for the undo block to the node 1 LMS process. The LMS process did not find the undo segment block in the node 1 cache and sent back a grant to read the block from disk. Meanwhile, the FG process was waiting, and the time it waited to receive the grant is accounted to ‘gc cr disk read’.

(I formatted the trace lines to improve readability; my comments are inline.)
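
The trace below is a SQL trace with wait events. As a side note, one common way to capture such a trace (a sketch, assuming suitable privileges) is event 10046 at level 8:

alter session set events '10046 trace name context forever, level 8';
-- run the SELECT statement, then:
alter session set events '10046 trace name context off';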


PARSING IN CURSOR #18446741324873694136 len=162 dep=0 uid=0 oct=3 lid=0 tim=1103269602 hv=361365006 ad='73fba918' sqlid='7cmy6msasmzhf'
select dbms_rowid.rowid_relative_fno (rowid) fno,
dbms_rowid.rowid_block_number(rowid) block,
dbms_rowid.rowid_object(rowid) obj, v1 from rs.t_1000 where n1=100
END OF STMT
PARSE :c=0,e=17012,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=479790187,tim=1103269601
EXEC :c=0,e=12393,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=479790187,tim=1103282129
WAIT : nam='SQL*Net message to client' ela= 4 driver id=1650815232 #bytes=1 p3=0 obj#=603 tim=1103288000
WAIT : nam='Disk file operations I/O' ela= 7 FileOperation=2 fileno=4 filetype=2 obj#=603 tim=1103288120
WAIT : nam='db file sequential read' ela= 786 file#=4 block#=2891 blocks=1 obj#=85844 tim=1103289701
WAIT : nam='db file sequential read' ela= 560 file#=4 block#=2892 blocks=1 obj#=85844 tim=1103290450
WAIT : nam='Disk file operations I/O' ela= 4 FileOperation=2 fileno=7 filetype=2 obj#=85844 tim=1103290546
WAIT : nam='db file sequential read' ela= 542 file#=7 block#=2758 blocks=1 obj#=75154 tim=1103291149
-- RS: The following wait is for the undo header block, to find the transaction status.
WAIT : nam='gc cr disk read' ela= 633 p1=6 p2=176 p3=43 obj#=0 tim=1103292048
WAIT : nam='db file sequential read' ela= 662 file#=6 block#=176 blocks=1 obj#=0 tim=1103292843
-- RS: The following read is for the undo block itself, to roll back the transaction changes.
WAIT : nam='gc cr disk read' ela= 483 p1=6 p2=955 p3=44 obj#=0 tim=1103293699
WAIT : nam='db file sequential read' ela= 569 file#=6 block#=955 blocks=1 obj#=0 tim=1103294355
WAIT : nam='library cache pin' ela= 1045 handle address=2043988208 pin address=1969440144 100*mode+namespace=48820893384706 obj#=0 tim=1103295827
FETCH :c=0,e=8033,p=5,cr=6,cu=0,mis=0,r=1,dep=0,og=1,plh=479790187,tim=1103296065
WAIT : nam='SQL*Net message from client' ela= 323 driver id=1650815232 #bytes=1 p3=0 obj#=0 tim=1103296522
FETCH :c=0,e=24,p=0,cr=1,cu=0,mis=0,r=0,dep=0,og=1,plh=479790187,tim=1103296586
STAT id=1 cnt=1 pid=0 pos=1 obj=75154 op='TABLE ACCESS BY INDEX ROWID T_1000 (cr=7 pr=5 pw=0 time=6505 us cost=2 size=261 card=1)'
STAT id=2 cnt=1 pid=1 pos=1 obj=85844 op='INDEX RANGE SCAN T_1000_N1 (cr=3 pr=2 pw=0 time=2494 us cost=1 size=0 card=1)'
WAIT : nam='SQL*Net message to client' ela= 1 driver id=1650815232 #bytes=1 p3=0 obj#=0 tim=1103296728

Trace file analysis

Let’s review a few lines from the trace file. Block 2758 holds the row physically. After reading that block, a ‘gc cr disk read’ wait event is encountered: essentially, the FG process identified that there is a pending transaction and sent a request to the LMS process, accounting the wait time to the ‘gc cr disk read’ event. For this event, p1 is the file_id, p2 is the block_id, and p3 seems to be a counter increasing by 1 for each of these waits; for example, for the next ‘gc cr disk read’, p3 is set to 44. obj#=0 indicates that this is an undo block or undo header block.
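
You can confirm the meaning of the first two parameters from v$event_name (the reading of p3 as a counter is my observation, not documented):

select name, parameter1, parameter2, parameter3
from v$event_name
where name = 'gc cr disk read';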

Notice that the next line shows a physical read of that undo header block, with obj# again set to 0. I verified that it is the undo header block by dumping the block, too.


nam='db file sequential read' ela= 542 file#=7 block#=2758 blocks=1 obj#=75154 tim=1103291149
nam='gc cr disk read' ela= 633 p1=6 p2=176 p3=43 obj#=0 tim=1103292048
nam='db file sequential read' ela= 662 file#=6 block#=176 blocks=1 obj#=0 tim=1103292843
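
Another way to confirm that file 6, block 176 is an undo segment header block, without dumping it, is to check dba_rollback_segs, which records the header file and block of each undo segment — a quick sketch:

select segment_name, file_id, block_id
from dba_rollback_segs
where file_id = 6 and block_id = 176;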

Commit Cleanout

Commit cleanout is another reason why you would encounter this wait event. When a session commits, it revisits the modified blocks to clean out the ITL entries in those blocks. But this cleanout does not happen in all scenarios. For example, if the number of modified blocks exceeds the list of blocks the session maintains in the SGA, the session marks the transaction table as committed without cleaning out the ITL entries in the actual blocks. Commit cleanout also doesn’t happen if the modified block is no longer in the buffer cache (we will use this idea to trigger the commit cleanout in the other node).

The next session reading that block will close out the ITL entry in the block with an upper-bound SCN. Let’s tweak our test case a little to simulate commit cleanouts.

One notable difference in the test case below is that after flushing the buffer cache, we switch back to the session that updated the block, commit the transaction, and flush the buffer cache again. With these two flushes we guarantee that the committing session does not perform the commit cleanout and that the undo blocks are flushed from the cache. The SELECT statement is executed after the commit. When we read the same block from the other node, the reader session must clean out the ITL entry. The cleanout operation needs to identify the transaction state and the upper-bound SCN; to identify whether the transaction is committed or not, the session in node 2 must read the undo header block. Waits to receive the grants for the undo and undo header blocks are accounted to the ‘gc cr disk read’ wait event.

  1. node 1: update rs.t_1000 set v1=v1 where n1=100;
  2. node 2: select * from rs.t_1000 where n1=100; -- just to get the parsing details out of the way.
  3. node 1: alter system flush buffer_cache; -- flush the buffer cache.
  4. node 1: commit from the session that performed the update.
  5. node 1: alter system flush buffer_cache; -- flush again to remove the undo blocks from the cache.
  6. node 2: select * from rs.t_1000 where n1=100;

Trace file

My comments are inline


...
-- block with the row is read below
WAIT : nam='db file sequential read' ela= 417 file#=7 block#=2758 blocks=1 obj#=75154 tim=783492616
WAIT : nam='Disk file operations I/O' ela= 2 FileOperation=2 fileno=6 filetype=2 obj#=75154 tim=783492751
-- The undo header block is read, with the time accounted to gc cr disk read. Note that there are no other reads after this.
-- In the prior test case, there was an additional read for the undo block itself;
-- in this test case there is no such read, since only the commit needs to be cleaned out.
WAIT : nam='gc cr disk read' ela= 12779 p1=6 p2=160 p3=41 obj#=0 tim=783507346
WAIT : nam='db file sequential read' ela= 778 file#=6 block#=160 blocks=1 obj#=0 tim=783508447

So, what can we do?

Is there anything you can do to improve the performance of the application and reduce ‘gc cr disk read’ wait events? There are a few options you can consider, though some are long-term options:
  1. Consider application affinity. If the application runs in the same node where transactions are aggressively modifying the objects, you can reduce or eliminate these grants. Application affinity is important even though cache fusion is in play.
  2. Of course, tune the statement in such a way that the number of block visits is reduced.
  3. Reduce commit cleanouts. For example, if a program excessively modifies blocks in node 1 and a subsequent program reads the same blocks in node 2, you may have to clean out the blocks manually from node 1 by reading them (see the sketch after this list). BTW, parallel queries do not perform commit cleanouts; only serial queries do. Another operation that can get stuck is index creation: if the table has numerous blocks pending commit cleanout, the index build might be slower and can suffer from gc cr disk read events. This is magnified by the fact that these reads are single-block reads, which are much slower for bulk operations such as index rebuilds. Reading through all blocks of the table with SELECT statements might be enough to complete the commit cleanouts.
  4. Don’t keep long-pending transactions on highly active tables in a RAC environment. This typically happens for applications that use tables as queues to pick up jobs, e.g. the fnd_concurrent_requests table in E-Business Suite. Proper configuration of the concurrent managers, with optimal values for sleep and cache, might help here.
  5. For read-only tablespaces, you might want to make sure that there are no pending commit cleanouts before you make the tablespace read-only, since the blocks can never be cleaned out afterwards and so every session will keep trying to clean them.
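
Following up on point 3, a minimal sketch of a manual cleanout using the test table from earlier (the hints simply guard against a parallel plan, since parallel queries skip commit cleanouts):

-- serial full scan to trigger the delayed block cleanout on every block
select /*+ full(t) no_parallel(t) */ count(*) from rs.t_1000 t;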

Summary

In summary, this is a useful event for differentiating CR fabrication performance issues from other performance issues. Using the few techniques mentioned here, you can reduce the impact of this event.

 

OT: Do Search Terms Describe the Visitor?

January 12, 2012 I thought that I would start this slightly off topic blog article with a bit of humor.  Seven months ago I wrote a blog article that refuses to move from the first position in the most visited articles on this blog.  In the process of trying to understand why a mathematics focused article [...]

ASH

You may have noticed that I’m having a little trouble keeping up to date on the blog at the moment – I know I’ve got several comments on Oracle Core to respond to but haven’t had time to look at them yet. Very briefly, though, I thought I’d point to a note that Doug Burns has just posted about John Beresniewicz’s presentation on “outliers” at the UKOUG conference.

Key feature: it supplies a query that could be very useful for capturing short, but nasty, events, and it has links to a couple of documents explaining what it’s trying to do and why. It’s a query that could do with more exercise on production systems, so that Doug can get some feedback to JB about how effective it is and how it could be improved.

Start here.

 

You've Got Framing

The framing clause brings the "window" to window functions. It's an ideal solution when you can arrange a business question such that an answer comes from applying an aggregate function to a range...



Read the full post at www.gennick.com/database.
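
As a quick, hypothetical illustration of a frame clause (the orders table and its columns are invented for this sketch), a seven-row moving sum:

-- moving sum over the current row and the six preceding rows, in date order
select order_date,
       sum(amount) over (order by order_date
                         rows between 6 preceding and current row) moving_sum
from orders;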


RMOUG 2012 – Hello Denver!

On February 14-16, I’ll be at the Colorado Convention Center in Denver, Colorado for RMOUG’s Training Days Conference. This is the largest regional Oracle User Conference in North America and attracts presenters from all around the country and the globe. I’ll be presenting:

Presentation Name: Troubleshooting RAC Background Process

Abstract: RAC background process performance is critical to sustaining application performance. This session will demo techniques to review the performance of RAC background processes such as LMS, LMD, LMON, etc., using various statistics and UNIX tools. The presentation will also discuss why certain background processes must run at higher priority to maintain application performance in RAC.

Presentation Name: A Kind and Gentle Introduction to RAC

Abstract: This session will introduce basic concepts such as cache fusion, conversion to RAC, protocols for interconnect, general architectural overview, GES layer locks, clusterware, etc. The session will also discuss the srvctl command and demo a few of these commands to improve the understanding.

Presentation Name: Parallel Execution in RAC

Abstract: This presentation will discuss and demo parallel server allocation, and intra- and inter-node parallelism aspects. The session will discuss new parallelism features such as parallel statement queuing and Auto DOP, and the interaction of those features with RAC. The session will probe a few critical parameters to improve PQ performance in RAC.

Click here for more information or to register for RMOUG’s Training Days.

Beware of ACFS when upgrading to 11.2.0.3

This post is about a potential pitfall when migrating from 11.2.0.x to the next point release. I stumbled over this one on a two-node cluster.

The operating system is Oracle Linux 5.5 running 11.2.0.2.3, and I wanted to go to 11.2.0.3.0. As you know, Grid Infrastructure upgrades are out-of-place; in other words, they require a separate Oracle home. This is also one of the reasons I wouldn’t want less than 20G on a non-lab environment for the Grid Infrastructure mount points …

Now, when you are upgrading from 11.2.0.x to 11.2.0.3, you need to apply a one-off patch, but the correct one! Search for patch number 12539000 (11203:ASM UPGRADE FAILED ON FIRST NODE WITH ORA-03113) and apply the one that matches your version, and pay attention to the PSUs! There is also the obligatory opatch update to be performed beforehand.

So much for the prerequisites. Oracle 11.2.0.3 is available as patch 10404530, and part 3 is for Grid Infrastructure which has to be done first. This post only covers the GI upgrade, the database part is usually quite uneventful in comparison…

Upgrading Grid Infrastructure

After unzipping the third patch file you start runInstaller. But not before having carefully unset all pointers to the current 11.2.0.2 GRID_HOME (ORACLE_HOME, ORACLE_SID, LD_LIBRARY_PATH, ORA_CRS_HOME, etc)!

Clicking through OUI is mostly a matter of “next”, “next”, “next”, the action starts with the rootupgrade.sh script. Here’s the output from node1:

[root@node1 ~]# /u01/crs/11.2.0.3/rootupgrade.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME=  /u01/crs/11.2.0.3

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/crs/11.2.0.3/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation

ASM upgrade has started on first node.

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'node1'
...
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'node1' has completed
CRS-2673: Attempting to stop 'ora.gpnpd' on 'node1'
CRS-2677: Stop of 'ora.gpnpd' on 'node1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'node1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
OLR initialization - successful
Replacing Clusterware entries in inittab
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
PRCA-1056 : Unable to upgrade ACFS from version 11.2.0.2.0 to version 11.2.0.3.0
PRCT-1011 : Failed to run "advmutil". Detailed error: advmutil:
CLSU-00100: Operating System function: open64 failed with error data: 2advmutil: CLSU-00101: Operating System error message: No such file or directory|advmutil: CLSU-00103: error location: OOF_1|advmutil: CLSU-00104: additional error information: open64 (/dev/asm/orahomevol-315)|advmutil: ADVM-09006: Error opening volume /dev/asm/orahomevol-315
srvctl upgrade model -first ... failed
Failed to perform first node tasks for cluster modeling upgrade at /u01/crs/11.2.0.3/crs/install/crsconfig_lib.pm line 9088.
/u01/crs/11.2.0.3/perl/bin/perl -I/u01/crs/11.2.0.3/perl/lib -I/u01/crs/11.2.0.3/crs/install /u01/crs/11.2.0.3/crs/install/rootcrs.pl execution failed

So that was not too great indeed: my upgrade failed halfway through. Two facts make this bearable:

  1. rootupgrade.sh (and root.sh for that matter) are restartable since 11.2.0.2 at least
  2. A great deal of logging is available in $GRID_HOME/cfgtoollogs/crsconfig/rootcrs_hostname.log

Now advmutil was correct: there were no volumes in /dev/asm/*.

An analysis of the rootcrs_node1.log file showed that the command that failed was this one:

2012-01-06 10:09:10: Executing cmd: /u01/crs/11.2.0.3/bin/srvctl upgrade model  -s 11.2.0.2.0 -d 11.2.0.3.0 -p first
2012-01-06 10:09:12: Command output:
>  PRCA-1056 : Unable to upgrade ACFS from version 11.2.0.2.0 to version 11.2.0.3.0
>  PRCT-1011 : Failed to run "advmutil". Detailed error: advmutil: CLSU-00100: Operating System function: open64 failed with error data: 2|advmutil: CLSU-00101: Operating System error message: No such file or directory|advmutil: CLSU-00103: error location: OOF_1|advmutil: CLSU-00104: additional error information: open64 (/dev/asm/orahomevol-315)|advmutil: ADVM-09006: Error opening volume /dev/asm/orahomevol-315
>End Command output
2012-01-06 10:09:12:   "/u01/crs/11.2.0.3/bin/srvctl upgrade model  -s 11.2.0.2.0 -d 11.2.0.3.0 -p first" failed with status 1.
2012-01-06 10:09:12: srvctl upgrade model -first ... failed

Thinking Clearly

Thinking Clearly is an idea I thought I had adopted from Cary Millsap, but sadly I didn’t apply it here! Lesson learned: don’t assume, check!

However, I assumed that because of the shutdown of the Clusterware stack there wasn’t any Oracle software running on the node, and hence there wouldn’t be an ADVM volume BY DEFINITION. Cluster down, ADVM down too.

Upon checking the log file again, I realised how wrong I was. Most of the lower-stack Clusterware daemons were actually running by the time the srvctl command failed to upgrade ACFS to 11.2.0.3, so the reason for this failure had to be a different one. It quickly turned out that ALL the ACFS volumes were disabled. A quick check with asmcmd verified this:

$ asmcmd volinfo -a

Volume Name: ORAHOMEVOL
Volume Device: /dev/asm/orahomevol-315
State: DISABLED
Size (MB): 15120
Resize Unit (MB): 256
Redundancy: UNPROT
Stripe Columns: 4
Stripe Width (K): 128
Usage: ACFS
Mountpath: /u01/app/oracle/product/11.2.0.2

OK, that explains it all: disabled volumes are obviously NOT presented in /dev/asm/. A call to “asmcmd volenable -a” sorted that problem.
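
If you prefer checking from SQL*Plus, the same information should be visible in v$asm_volume on the ASM instance — a sketch, assuming the STATE and VOLUME_DEVICE columns of that view:

-- run on the ASM instance; DISABLED volumes have no device under /dev/asm
select volume_name, volume_device, state, usage, mountpath
from v$asm_volume;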

Back to point 1: rootupgrade.sh is restartable. I then switched back to the root session and started another attempt at running the script and (drums please) it worked. Now all that was left to do was to run rootupgrade.sh on the second (and last) node, which completed successfully as well. The required patch for the ASM rolling upgrade, by the way, is needed there and then; the rootcrs_lastnode.log file has these lines:

2012-01-10 09:44:10: Command output:
>  Started to upgrade the Oracle Clusterware. This operation may take a few minutes.
>  Started to upgrade the CSS.
>  Started to upgrade the CRS.
>  The CRS was successfully upgraded.
>  Oracle Clusterware operating version was successfully set to 11.2.0.3.0
>End Command output
2012-01-10 09:44:10: /u01/crs/11.2.0.3/bin/crsctl set crs activeversion ... passed
2012-01-10 09:45:10: Rolling upgrade is set to 1
2012-01-10 09:45:10: End ASM rolling upgrade
2012-01-10 09:45:10: Executing as oracle: /u01/crs/11.2.0.3/bin/asmca -silent -upgradeLocalASM -lastNode /u01/crs/11.2.0.2
2012-01-10 09:45:10: Running as user oracle: /u01/crs/11.2.0.3/bin/asmca -silent -upgradeLocalASM -lastNode /u01/crs/11.2.0.2
2012-01-10 09:45:10:   Invoking "/u01/crs/11.2.0.3/bin/asmca -silent -upgradeLocalASM -lastNode /u01/crs/11.2.0.2" as user "oracle"
2012-01-10 09:45:10: Executing /bin/su oracle -c "/u01/crs/11.2.0.3/bin/asmca -silent -upgradeLocalASM -lastNode /u01/crs/11.2.0.2"
2012-01-10 09:45:10: Executing cmd: /bin/su oracle -c "/u01/crs/11.2.0.3/bin/asmca -silent -upgradeLocalASM -lastNode /u01/crs/11.2.0.2"
2012-01-10 09:45:51: Command output:
>
>  ASM upgrade has finished on last node.
>
>End Command output
2012-01-10 09:45:51: end rolling ASM upgrade in last

Note the ROLLING UPGRADE!

Summary

If your rootupgrade.sh script bails out with an advmutil error, check whether your ACFS volumes are enabled; they most likely are not.

Dbvisit Standby…

Several years ago I met Arjen Visser and Bertie Plaatsman from Dbvisit, and they told me about their standby database product, which is a replacement for Data Guard. Now I don’t spend much time on non-Oracle products, but this one was interesting to me as it works on Standard Edition, unlike Data Guard, which requires Enterprise Edition. From that point onward I kept seeing Arjen at conferences and telling myself I really should take a look at the product.

Over the last year I bumped into Arjen at a few conferences, along with some other members of the company (Eric, Mike and Vit). They are a cool group of people, so my interest in their products was ignited again. Finally, after several years of showing interest, I tried out the standby product towards the end of last year, which resulted in the following article.

I held the article back until now because I was waiting for version 6.0.16 to be released so I could check out the revised web interface.

It’s a really nice product. Simple to install. Easy to use. Does exactly what it says it does. Most importantly, it’s backed by a cool group of people. When I tried the previous releases I had a few comments about the documentation and those were taken on board and changes were made. This is why I like dealing with smaller companies. There aren’t endless layers of bureaucracy involved in changing a few sentences in an install document. :)

I’ve said I’ll give their replication product (kinda like GoldenGate) a go, but based on previous experience it will probably take me about 4 years to get round to that. :)

Cheers

Tim…




IOTs by the Oracle Indexing Expert

I’m really pleased to see that Richard Foote has started a series on Index Organized Tables. You can see his introductory post on the topic here. As ever with Richard, he puts in lots of detail and explanation and I’ve been a fan of his blogging style for a long time.

I’ve got a few posts on the topic left to do myself, maybe this competition will spur me to get on and write them!

What I will also be very interested to see is the different ways we cover the same topic. Richard has already put in a block dump and dug into the details a little, at a low level, about how the data is stored, which I have not done. He has also shown how using an IOT instead of a fully overloaded index (where you create an index covering all the columns of the table, to avoid visiting the table for the other columns) not only saves the space taken up by the redundant heap table, but also results in the IOT index being smaller than the fully overloaded index, due to the lack of a rowid.
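
For readers new to the topic, a minimal sketch of the two alternatives being compared (table and column names invented for illustration):

-- heap table plus a fully overloaded index: the data is effectively stored twice
create table orders_heap (order_id number primary key, status varchar2(10));
create index orders_heap_all on orders_heap (order_id, status);

-- index organized table: the primary key index IS the table, and there are no rowids
create table orders_iot (order_id number primary key, status varchar2(10))
organization index;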

I put in occasional pictures and perhaps write more about how the examples match real-world situations. If you want, you can look back at my own introduction to the topic.

I’m sure this is going to be an excellent series and I’ll be following it myself.

Index Organized Tables – An Introduction Of Sorts (Pyramid Song)

Thought it was high time that I covered in a little detail the subject of Index Organized Tables (IOTs). When used appropriately, they can be an extremely useful method of storing and accessing data. Hopefully by the end of this series, you’ll have a better understanding of IOTs, their respective strengths and weaknesses and so perhaps [...]