ITL Waits – Changes in Recent Releases (script)

A reader of this blog, Paresh, asked me how I was able to find out the logic behind ITL waits without having access to the Oracle code. My reply was: I wrote a test case that reproduces ITL waits and a piece of code that monitors them.

Since other readers might be interested, here is the shell script I wrote. Notice that it takes four parameters as input: user name, password, SID, and the length of the monitoring phase.

ITL Waits – Changes in Recent Releases

In recent releases Oracle has silently changed the behavior of ITL waits. The aim of this post is to describe what has changed and why. But first of all, let’s review some essential concepts about ITLs and ITL waits.

Interested Transaction List

The Oracle database engine locks the data modified by a transaction at the row level. To implement this feature, every data block contains a list of all transactions that are modifying it. This list is commonly called the interested transaction list (ITL). Its purpose is twofold. First, it is used to store information that identifies a transaction as well as a reference to access the undo data associated with it. Second, it is referenced by every modified or locked row to indicate which transaction it is involved in.
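An ITL wait occurs when a session needs a free ITL slot in a block and none is available. While the monitoring code mentioned above is not reproduced here, a minimal sketch of one way to spot affected segments (my own query, not the script from the post) is to look at the “ITL waits” statistic in v$segment_statistics:

```sql
-- Hypothetical monitoring query, not the author's script:
-- segments that have experienced ITL waits since instance startup.
-- The corresponding session-level wait event is "enq: TX - allocate ITL entry".
SELECT owner, object_name, object_type, value AS itl_waits
FROM   v$segment_statistics
WHERE  statistic_name = 'ITL waits'
AND    value > 0
ORDER  BY value DESC;
```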


Scripts to Download Documentation

In this post I pointed out that I like to have a copy of the documentation in PDF format on my notebook. In the same post, and in its comments, I also described how I generate the scripts I use to download the files. Recently I updated the scripts and, as a result, I thought I would share them with you. So, below you will find the CMD and SH scripts for the documentation related to 10.2, 11.1 and 11.2.

Integration of Editions with Services in…

There’s a neat new feature for editioning that allows you to associate an edition with a service. I’ve added it to the end of my Edition-Based Redefinition article here.
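For reference, here is a rough sketch of how the association can be made with DBMS_SERVICE (my own example, with hypothetical service and edition names; see the article for the full details):

```sql
-- Hypothetical service and edition names; requires a release
-- where DBMS_SERVICE supports the EDITION parameter.
BEGIN
  DBMS_SERVICE.MODIFY_SERVICE(
    service_name => 'hr_service',
    edition      => 'release_2');
END;
/
```

New sessions connecting through the service then start in the associated edition by default.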



Segment Creation on Demand (Deferred Segment Creation) in Oracle…

It looks like the latest release has addressed most of the original shortfalls that were present in the initial implementation of segment creation on demand.
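As a quick illustration of the basic feature (a sketch with made-up table names), with deferred segment creation no segment exists until the first row is inserted:

```sql
-- Requires 11.2 or later
CREATE TABLE t_deferred (n NUMBER) SEGMENT CREATION DEFERRED;

-- No segment exists yet for the table
SELECT segment_name FROM user_segments WHERE segment_name = 'T_DEFERRED';

-- The segment is created on demand by the first insert
INSERT INTO t_deferred VALUES (1);
COMMIT;

SELECT segment_name FROM user_segments WHERE segment_name = 'T_DEFERRED';
```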



Distinct placement

As a follow-up to a recent post on the different names Oracle can use for intermediate views, here is a quick example of the technique called distinct placement.
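For readers unfamiliar with the transformation, here is a rough sketch (table names are my own): the optimizer can push a DISTINCT down into one of the joined row sources, which shows up in the execution plan as an intermediate view (typically named VW_DTP_xxxx):

```sql
-- Hypothetical tables t1 and t2, joined on id
SELECT DISTINCT t1.id, t2.id
FROM   t1, t2
WHERE  t1.id = t2.id;

-- Conceptually, distinct placement rewrites the query along these lines,
-- eliminating duplicates in t2 before the join:
SELECT DISTINCT t1.id, vw.id
FROM   t1,
       (SELECT DISTINCT id FROM t2) vw
WHERE  t1.id = vw.id;
```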

IS NULL Conditions and B-tree Indexes

On page 383 of my book I wrote the following sentence (BTW, the same information is also provided by Table 9-3 on page 381):

With B-tree indexes, IS NULL conditions can be applied only through composite B-tree indexes when several SQL conditions are applied and at least one of them is not based on IS NULL or an inequality.

The text continues by showing the following examples (notice that in both cases the IS NULL predicate is applied through an access predicate):
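The original examples are not reproduced here; the following is my own sketch of the idea. With a composite index whose leading column carries an equality condition, the IS NULL condition on the second column can be applied as an access predicate, because the equality guarantees that the index entries of interest are not entirely NULL (and therefore are present in the index):

```sql
-- Hypothetical schema
CREATE TABLE t (id NUMBER, n NUMBER, pad VARCHAR2(100));
CREATE INDEX t_i ON t (id, n);

-- Both conditions can be applied through the composite index t_i;
-- the IS NULL on n is usable because of the equality on id
SELECT * FROM t WHERE id = 42 AND n IS NULL;
```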

Oracle 11gR2 on Oracle Linux 6…

With the arrival of Oracle Linux 6 comes the inevitable installation articles.

The Oracle installation on Oracle Linux 6 is certainly smoother than the recent Fedora installations have been. Even Enterprise Manager works fine with no meddling.

The official 11gR2 installation guide has not been updated to include Oracle Linux 6 and I can’t see any notes on MOS about it, so I’ve essentially followed the installation for Oracle Linux 5 and adjusted where necessary. I’m guessing when the official notes are released they are going to be pretty close to this. I can’t see any certifications against Oracle Linux 6, so I guess I would avoid it for production Oracle installations at the moment.




Well, I turned in the HCC chapter of the Exadata book last week and, as is usually the case, I immediately ran across something kind of cool on the topic I had just finished writing about. (We still have several editing passes to go, though, so I can add it later.) Anyway, although I don’t have time to post much these days, I thought this one would make a quick little snippet. So here it is.

The Compression Advisor is part of the DBMS_COMPRESSION package. Specifically, it is the GET_COMPRESSION_RATIO procedure. This procedure is worthy of a separate post, but I won’t discuss it here except to say that you can now use it to test HCC compression ratios on non-Exadata platforms. That’s pretty cool, but what I wanted to tell you about is a handy little function in the same package called GET_COMPRESSION_TYPE. This function can tell you exactly what level of compression has been applied to a single row. This can come in handy for investigating the inner workings of HCC (or OLTP or BASIC compression, for that matter).

As you probably already know, HCC is only applied to records loaded via direct path writes. Any updates cause rows to be migrated out of that storage format into blocks flagged for OLTP compression. Of course, OLTP compression on a block only kicks in when the block is “full”. On top of this, altering a table to change its compression does not actually change the storage format of any existing records (unless you use the MOVE keyword). So you could load some data and then change the designation (say from QUERY LOW to QUERY HIGH). Rows that are inserted after the change will be stored in the new format (assuming the records are loaded via direct path writes, of course). So why am I telling you all this? Well, because I ran across a statement in some Oracle documentation that said you can check which compression method a table is stored with by looking at the COMPRESS_FOR column in the DBA_TABLES view. This column does reveal what the table designation is. However, the setting only tells you how rows inserted in the future will be compressed. It tells you absolutely nothing about the way current rows are stored.
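To make this concrete, here is a short sketch (my own, with hypothetical object names) showing that COMPRESS_FOR reflects only the current designation, not the storage format of existing rows:

```sql
-- Table previously loaded via direct path with QUERY LOW;
-- change the designation without moving the existing rows
ALTER TABLE skew_hcc3 COMPRESS FOR QUERY HIGH;

-- The dictionary now reports QUERY HIGH, even though rows loaded
-- earlier are still physically stored in QUERY LOW format
SELECT table_name, compress_for
FROM   user_tables
WHERE  table_name = 'SKEW_HCC3';
```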

As for the mechanics, it appears that each row has a bit mask associated with it showing what compression format is being used. So I wrote a little script (check_row_comp.sql) to show me what I want to see, using the DBMS_COMPRESSION.GET_COMPRESSION_TYPE function. Here’s an example of its use.

(Note: this listing has been updated to fix the bit mask, as suggested by Greg in the comments – I had it wrong initially.)
SYS@SANDBOX1> !cat check_row_comp.sql
-- There is a bit mask that indicates the level of compression:
-- 10000000 (1)  = no compression
-- 01000000 (2)  = BASIC/OLTP
-- 00100000 (4)  = QUERY HIGH
-- 00010000 (8)  = QUERY LOW
-- 00001000 (16) = ARCHIVE HIGH
-- 00000100 (32) = ARCHIVE LOW
-- (old_rowid is a helper function that converts a rowid to file.block.row format)
col old_rowid for a20
select
old_rowid('&&rowid') old_rowid,
decode(
DBMS_COMPRESSION.GET_COMPRESSION_TYPE ( '&owner', '&table_name', '&&rowid'),
1, 'No Compression',
2, 'Basic/OLTP Compression',
4, 'HCC Query High',
8, 'HCC Query Low',
16, 'HCC Archive High',
32, 'HCC Archive Low',
'Unknown Compression Level') compression_type
from dual;
undefine rowid
SYS@SANDBOX1> select rowid from kso.skew_hcc3 where pk_col = 444444;

ROWID
------------------
AAAWbXAAIAAGAfiBdz

1 row selected.
SYS@SANDBOX1> @check_row_comp
Enter value for owner: KSO
Enter value for table_name: SKEW_HCC3
Enter value for rowid: AAAWbXAAIAAGAfiBdz
OLD_ROWID            COMPRESSION_TYPE
-------------------- -------------------------
1.929.0              HCC Archive Low
1 row selected.
SYS@SANDBOX1> update kso.skew_hcc3 set col1 = col1*2 where pk_col=444444;
1 row updated.
SYS@SANDBOX1> commit;
Commit complete.
SYS@SANDBOX1> select rowid, old_rowid(rowid) old_rowid from kso.skew_hcc3 where pk_col = 444444;
ROWID              OLD_ROWID
------------------ --------------------
AAAWbXAAIAAF7aUAAA 8.1554068.0
1 row selected.
SYS@SANDBOX1> -- row migrated to file 8
SYS@SANDBOX1> @check_row_comp
Enter value for owner: KSO
Enter value for table_name: SKEW_HCC3
Enter value for rowid: AAAWbXAAIAAF7aUAAA
OLD_ROWID            COMPRESSION_TYPE
-------------------- -------------------------
8.1554068.0          No Compression
1 row selected.

I did a previous post (Proof That Whole CU’s Are Not Decompressed) in which I showed row migration to a block flagged for OLTP compression; of course, the OLTP compression would not be applied until the block was sufficiently full. As you can see in the example, the function shows that the record was not actually compressed, even though it had been migrated to a block flagged for OLTP compression.

So the GET_COMPRESSION_TYPE function is handy for showing us how an individual record is actually stored. Of course you can see this by dumping blocks as well, but I think this function is much easier than looking at block dumps. You could also write a script to sample some percentage of the rows in a table to get a feel for how well compressed the rows are, but I will leave that as an exercise for the reader. (Please let me know if you undertake that task, as it is something I will eventually want to do as well.) ;)
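As a starting point for that exercise, here is a rough sketch (my own, reusing the hypothetical table from above) that summarizes the actual storage format of a 1% sample of rows. Note that calling the function once per row can be slow on large tables:

```sql
-- Classify a 1% sample of rows by actual storage format
select compression_type, count(*) row_count
from (
  select decode(
           dbms_compression.get_compression_type('KSO', 'SKEW_HCC3', rowid),
           1,  'No Compression',
           2,  'Basic/OLTP Compression',
           4,  'HCC Query High',
           8,  'HCC Query Low',
           16, 'HCC Archive High',
           32, 'HCC Archive Low',
           'Unknown') compression_type
  from kso.skew_hcc3 sample (1)
)
group by compression_type
order by row_count desc;
```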

Automatic shared memory resize with disabled ASMM/AMM

Thanks to Nigel Antell, who posted a comment on one of Tanel Poder’s blog notes with a link to MOS Doc ID 1269139.1. This note describes a parameter called _MEMORY_IMM_MODE_WITHOUT_AUTOSGA. It allows Oracle 11gR2 to adjust the sizes of shared memory structures even if Automatic (Shared) Memory Management has been explicitly disabled, with the main purpose of avoiding ORA-04031 errors. This is nice.
PS. I wonder if it was done by Oracle development specifically because many customers don’t like ASMM (for example, because of its negative impact on applications) but encounter ORA-04031 from time to time, which results in many opened SRs. Who knows.
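A quick way to check the current setting of this hidden parameter is the usual X$ query (a standard sketch; querying X$ tables requires connecting as SYS):

```sql
-- Check the value of the hidden parameter (connect as SYS)
select i.ksppinm name, v.ksppstvl value, v.ksppstdf is_default
from   x$ksppi i, x$ksppcv v
where  i.indx = v.indx
and    i.ksppinm = '_memory_imm_mode_without_autosga';
```

As always with underscore parameters, change it only under the guidance of Oracle Support.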