
Presenting at TCOUG 2011 Winter Meeting

I just arrived in Minneapolis, MN and will be speaking tomorrow at the Twin Cities Oracle Users Group 2011 Winter Meeting. It’s the first time I’m speaking for this user group, so I’m excited to meet a new audience. I will present a one-hour version of my Grid Infrastructure Internals session. Of course, no demos during the presentation, as one hour is just enough to cover the theory slides, but I’ll leave the demos as homework as usual. :)

Big thanks to Klara Hribkova, Vice President of TCOUG, for picking me up at the airport and organizing a dinner. I had a lovely meal — Bison Pot Roast that melted in my mouth.

I’m staying at the Doubletree Hotel Minneapolis-Park Place. It’s nice and clean with very friendly staff, but what’s interesting about this hotel is that they have DSL modems in the rooms, so each connection goes via its own DSL line. Isn’t that cool? As I logged in, I realized that the complimentary internet is shaped to 512Kbit only and faster connections are charged extra. Oh well, I can’t complain about free. Still, having a DSL modem in the hotel room is kind of unusual! :)

[Photo: DSL modem in the hotel room]

I should also mention that it’s really cold here, with temperatures just like in Ottawa — about -20 °C, or below zero in Fahrenheit. At least there’s no wind chill like in Ottawa.

I want to mention that my experiment in collecting database consolidation stories was a *relative* success — we got only a handful of participants, but I can announce the book winner — Noons, it’s yours! :)

If you are at the TCOUG meeting tomorrow by any chance (well, today already), please catch me to say hello!

New cursor_bind_capture_destination parameter in Oracle 11.2.0.2

I just noticed that there’s a new cursor_bind_capture_destination parameter in Oracle 11.2.0.2 (which is really more like an Oracle 11gR3 release, given the large number of new features in it as opposed to just bugfixes).

This parameter allows you to save some SYSAUX tablespace disk space – useful if the periodically captured bind variable values (from V$SQL_BIND_DATA) take too much space. Normally these bind values (in a packed RAW form) are visible in the DBA_HIST_SQLSTAT.BIND_DATA column, which can take up to 2kB per statement in a snapshot – it’s stored as RAW(2000). Of course, a more convenient way to query the actual bind values is to use DBA_HIST_SQLBIND (you can also use the DBMS_SQLTUNE.EXTRACT_BINDS function for translating the raw payload into meaningful values).
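For example, here is a minimal sketch of pulling the captured values out of the AWR history with the pipelined DBMS_SQLTUNE.EXTRACT_BINDS call (the SQL_ID is a placeholder and the SQL_BIND collection attributes are from memory, so verify them on your version):

select s.snap_id,
       b.name,
       b.position,
       b.value_string          -- the captured value, rendered as a string
from   dba_hist_sqlstat s,
       table(dbms_sqltune.extract_binds(s.bind_data)) b
where  s.sql_id = '&sql_id'    -- placeholder: the statement of interest
order  by s.snap_id, b.position;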

So, if you choose to capture a lot of SQL statements per AWR snapshot (it’s configurable), don’t really care about the sampled bind variable values and want to save the disk space, then you can set cursor_bind_capture_destination to MEMORY – or to OFF if you don’t want to capture bind variable values at all.
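For example (a sketch – the parameter should be dynamically changeable with ALTER SYSTEM, but verify on your version first):

alter system set cursor_bind_capture_destination = memory;

-- or, to stop capturing bind values altogether:
alter system set cursor_bind_capture_destination = off;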

I’m using my pvalid.sql script for checking its valid values (it’s based on the X$ table underlying V$PARAMETER_VALID_VALUES view, so I could see undocumented parameter valid values too):

SQL> @pvalid cursor_bind_capture_destination
Display valid values for multioption parameters matching "cursor_bind_capture_destination"...

  PAR# PARAMETER                                                 ORD VALUE
------ -------------------------------------------------- ---------- ---------------
  2062 cursor_bind_capture_destination                             2 MEMORY
       cursor_bind_capture_destination                             3 MEMORY+DISK
       cursor_bind_capture_destination                             1 OFF

The default is MEMORY+DISK (this is essentially what you get before 11.2.0.2 and you can’t turn it off unless you turn off the AWR flushing of the whole SQLSTATS metrics).

More on Interested Transaction Lists

How Oracle Locking Works
When a Lock is NOT a Lock!

In the last installment of this series, "100 Things You Probably Didn't Know About Oracle," you learned how Oracle locks the rows of a table. Here is what you learned, in a nutshell:

(1) When a transaction modifies a record, the pre-change image is stored in the undo segments. This is required for various things, the most important of which is to provide a read-consistent version of the row when another session wants it.

(2) The transaction is assigned a transaction identifier that shows the undo segment number, slot# and record# of the undo information.

(3) The transaction locks the rows (since it has not committed) by placing a special type of data in the block header known as an Interested Transaction List (ITL) entry. The ITL entry shows the transaction ID and other information.

(4) When a new transaction wants to update the same rows (locked by the previous transaction), it first checks the ITL entries in the block to see if there is a lock.

(5) Since the lock information of rows is stored in the block itself, and the ITL entries in the block refer to the locks on the rows in that block alone, there is no need to have a central lock manager to dispense and handle the release of the locks. This makes the locking process not only immensely scalable but feasible as well since there is no theoretical limit to the number of locks.

[Updated Jan 22, 2011] Thank you, Randolf Geist (info@www.sqltools-plusplus.org), for pointing this out. I follow his blog http://oracle-randolf.blogspot.com/, which is a treasure trove of information.

(6) The information that a row is locked is stored along with the row in the form of a lock byte.

[End of Update Jan 22, 2011]

While the article might have answered some of the vexing questions you had, or brought some clarity to concepts you were somewhat familiar with, I sincerely hope it has piqued your curiosity to learn even more about these concepts. If I was successful in my explanation, you should not be satisfied now; you should have more questions. If you don’t have any, then I completely failed in my explanation.

So, what are the questions? For starters, how do you know what objects are being locked in a transaction? It’s actually quite trivial. The view V$LOCK has provided that information for years, albeit in a convoluted form. A newer view, V$LOCKED_OBJECT, is a bit more user-friendly. Let’s examine it with an example. First, update a row:

SQL> update itltest set col2 = 'CHANGED BY SESSION AGAIN' where col1 = 221
  2  /

1 row updated.

We can check the transaction ID:

SQL> select dbms_transaction.local_transaction_id from dual;

LOCAL_TRANSACTION_ID
--------------------------------------------------------------------------------
2.16.41316

1 row selected.

As you learned from the previous installment in this series, the transaction ID is a series of numbers denoting undo segment number, slot# and record# (also known as sequence#) respectively, separated by periods.
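You can cross-check these components against V$TRANSACTION, which exposes them as separate columns. A quick sketch, run from the updating session while the transaction is still open:

select xidusn, xidslot, xidsqn, status, start_time
from   v$transaction;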

Now, check the view V$LOCKED_OBJECT:

SQL> select * from v$locked_object
  2  /

    XIDUSN    XIDSLOT     XIDSQN  OBJECT_ID SESSION_ID
---------- ---------- ---------- ---------- ----------
ORACLE_USERNAME                OS_USER_NAME
------------------------------ ------------------------------
PROCESS                  LOCKED_MODE
------------------------ -----------
         2         16      41316      95263         56
ARUP                           oracle
13181                              3

The view shows the Undo Segment# (XIDUSN), Undo Slot# (XIDSLOT) and Undo Rec# (XIDSQN), which can be used to construct the transaction ID for joining to V$TRANSACTION to get the details. The view also contains the OBJECT_ID column. Another important column is LOCKED_MODE, which shows the mode in which the rows are locked. In this case it’s “3”, which means Row Exclusive. Here is a script that decodes the modes and reports the object name as well.

select
    owner               object_owner,
    object_name         object_name,
    session_id          oracle_sid,
    oracle_username     db_user,
    decode(LOCKED_MODE,
        0, 'None',
        1, 'Null',
        2, 'Row Share',
        3, 'Row Exclusive',
        4, 'Share',
        5, 'Share Row Exclusive',
        6, 'Exclusive',
        locked_mode
    )                   locked_mode
    from v$locked_object lo,
        dba_objects do
    where
        (xidusn||'.'||xidslot||'.'||xidsqn)
            = ('&transid')
    and
        do.object_id = lo.object_id
/

Save this script and execute it when you need further details on the transaction. The script will ask for the transaction ID which you can pass in the format reported by dbms_transaction.local_transaction_id.
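For instance, if you saved it as locked_obj.sql (a name I just made up), a run would look like this (output omitted):

SQL> @locked_obj.sql
Enter value for transid: 2.16.41316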

Next, let me draw your attention to point #3 above. If there are 10 records in the block and a transaction updated (and therefore locked) all ten of them, how many ITL entries will be used – one or ten?

Good question (I have to say that, since I asked it :). I suppose you can answer it yourself. Ten ITL slots may be feasible; but what if the block holds 10,000 records? Is it possible to have that many ITL slots in the block header? Let’s ponder that for a second. There would be two big issues with that many ITL slots.

First, each ITL slot is 24 bytes long, so 10,000 slots would take up 240,000 bytes – about 234 KB. A typical Oracle block is 8KB (I know, it could be 2K, 4K or 16K; but suppose it is the default 8K). Of course it can’t accommodate 234 KB.

Second, even if the total size of the ITL slots were less than the size of the block, where would the room to hold data be? In addition, there must be some space for the data block overhead; where would that space come from?

Obviously, these are genuine problems that make one ITL slot per row impractical. Therefore Oracle does not create an ITL entry for each locked row. Instead, it creates an ITL entry for each transaction, which may have updated any number of rows. Let me repeat that – each ITL slot in the block header actually refers to a transaction, not to individual rows. That is why you will not find the rowids of the locked rows in the ITL slot. Here is the ITL entry from the block header, again:

Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x000a.019.00007c05  0x00c00288.1607.0e  ----    1  fsc 0x0000.00000000
0x02   0x0003.017.00009e24  0x00c00862.190a.0f  C---    0  scn 0x0000.02234e2b

There is a reference to a transaction ID, but not to a rowid. When a transaction wants to update a row in the block, it checks the ITL entries. If there are none, it means the rows in that block are unlocked. However, if there are some ITL entries, does it mean that some rows in the block are locked? Not necessarily. It simply means that rows in the block were locked earlier; that lock may or may not be active now. To check whether a row is locked, the transaction checks the lock byte stored along with the row.

That brings up an interesting question. If the presence of an ITL slot does not mean a record in the block is locked, when does the ITL slot get cleared so that it can be reused – or when does it disappear? Shouldn’t the ITL slot disappear when the transaction ends with a commit or rollback? That should be the burning question throbbing in your head right now.

Clearing of ITL Slots

To answer that question, consider this scenario: a transaction updates 10,000 records on 10,000 different blocks. Naturally there will be 10,000 ITL slots, one in each block, all pointing to the same transaction ID. The transaction commits and the locks are released. Should Oracle revisit each block and remove the ITL entry corresponding to the transaction as part of the commit operation?

If that were the processing logic, commits would take a very long time. Acquiring the buffers of all 10,000 blocks and updating their ITL entries would not be quick; it would take a very long time, prolonging the commit processing. From part 1 of this series, you learned that commit processing is actually very quick: a flush of the log buffer to the redo logs and the writing of a commit marker in the redo stream. Even a checkpoint to the datafiles is not done as part of commit processing – all the effort goes towards making the process fast, very fast. Had Oracle added the logic of altering ITL slots, commit processing could have become very long. Therefore Oracle does not remove the ITL entries when the transaction ends (by committing or rolling back); the slots are simply left behind as artifacts.

The proof, as they say, is in the pudding. Let’s see with an example:

SQL> create table itltest (col1 number, col2 varchar2(200));

Table created.

SQL> begin
  2     for i in 1..1000 loop
  3             insert into itltest values (
  4                     i,'INITIAL VALUE OF COLUMN');
  5     end loop;
  6  end;
  7  /

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

This inserts 1000 records. Let’s find out the file and block these records go to:

  1  select
  2     dbms_rowid.rowid_relative_fno(rowid) File#,
  3     dbms_rowid.rowid_block_number(rowid) Block#,
  4     count(1)
  5  from itltest
  6  group by
  7     dbms_rowid.rowid_relative_fno(rowid),
  8     dbms_rowid.rowid_block_number(rowid)
  9  order by
 10*    1,2
SQL> /

     FILE#     BLOCK#   COUNT(1)
---------- ---------- ----------
         7       4027        117
         7       4028        223
         7       4029        220
         7       4030        220
         7       4031        220

5 rows selected.

Let’s identify the rows in a specific block, block# 4028, for instance.

SQL> select min(col1), max(col1)
  2  from itltest
  3  where dbms_rowid.rowid_block_number(rowid) = 4028
SQL> /

 MIN(COL1)  MAX(COL1)
---------- ----------
         1        223

1 row selected.

Block 4028 has the rows 1 through 223. That’s all we need to know for now. We will limit our activity to this block alone. We will need to update a single row in this block from a session:

SQL> update itltest set col2 = 'Changed' where col1 = 1;

Do NOT commit; just keep the session at this point. Open a different session and update a different row, e.g. the one with col1 = 2. Since this is a different row, there will be no lock contention. Similarly update about 20 other rows on this block, each from its own session, so that there are around 20 different concurrent transactions on the rows of this table.
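A sketch of those additional sessions (each statement goes into its own session, and none of them commits):

-- session 2:
update itltest set col2 = 'Changed' where col1 = 2;

-- session 3:
update itltest set col2 = 'Changed' where col1 = 3;

-- ... and so on, one session per row, for about 20 sessions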

Let’s examine the innards of the block by dumping it. Before that, we should flush the block to the disk.

SQL> alter system checkpoint;

System altered.

SQL> alter system dump datafile 7 block min 4028 block max 4028;

System altered.

The information will be written to a tracefile. We have to know the SPID of the process to identify the tracefile:

SQL> select p.spid
  2  from v$session s, v$process p
  3  where s.sid = (select sid from v$mystat where rownum < 2)
  4* and p.addr = s.paddr
SQL> /

SPID
------------------------
9537
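As an alternative on 11g, you can ask the database for the tracefile name directly through V$DIAG_INFO, which saves the SPID lookup (a sketch):

select value
from   v$diag_info
where  name = 'Default Trace File';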

We will locate a file called D112D2_ora_9537.trc in the trace directory. Please note that the tracefile is named OracleSID_ora_ProcessID.trc, so the exact name will be different on your system. Open the file and search for “Itl”. Here is an excerpt from the file:

Block header dump:  0x01c00fbc
 Object id on Block? Y
 seg/obj: 0x1741f  csc: 0x00.235a849  itc: 36  flg: E  typ: 1 - DATA
     brn: 0  bdba: 0x1c00fb8 ver: 0x01 opc: 0
     inc: 0  exflg: 0

 Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x0008.00d.0000a1eb  0x00c015d1.1d3c.28  ----    1  fsc 0x0005.00000000
0x02   0x0007.018.00007fab  0x00c01246.180b.21  ----    1  fsc 0x0005.00000000
0x03   0x0003.004.0000a1b0  0x00c005ef.1a18.07  ----    1  fsc 0x0005.00000000
0x04   0x0010.010.00000004  0x00c011ee.0001.10  ----    1  fsc 0x0005.00000000
0x05   0x000e.00e.00000003  0x00c011cb.0001.0f  ----    1  fsc 0x0005.00000000
0x06   0x000c.00e.00000003  0x00c011ab.0001.1b  ----    1  fsc 0x0005.00000000
0x07   0x0013.011.00000004  0x00c00f9c.0001.0f  ----    1  fsc 0x0005.00000000
0x08   0x0002.00a.0000a166  0x00c014d8.1c06.12  ----    1  fsc 0x0005.00000000
0x09   0x0001.010.00007f65  0x00c00cd3.16ae.14  ----    1  fsc 0x0005.00000000
0x0a   0x0014.01b.00000008  0x00c00faa.0003.67  ----    1  fsc 0x0005.00000000
0x0b   0x000f.00f.00000003  0x00c011db.0001.20  ----    1  fsc 0x0005.00000000
0x0c   0x000d.00f.00000004  0x00c011bb.0001.1d  ----    1  fsc 0x0005.00000000
0x0d   0x0012.010.00000003  0x00c00f8b.0001.1c  ----    1  fsc 0x0005.00000000
0x0e   0x000a.00c.00007f76  0x00c003d7.16ea.31  ----    1  fsc 0x0005.00000000
0x0f   0x0011.010.00000004  0x00c011fb.0001.10  ----    1  fsc 0x0005.00000000
0x10   0x0009.000.0000a236  0x00c00e91.1bbe.17  ----    1  fsc 0x0005.00000000
0x11   0x0006.00e.0000a1fc  0x00c0035c.1c24.2d  ----    1  fsc 0x0005.00000000
0x12   0x000b.012.00000003  0x00c01193.0001.1d  ----    1  fsc 0x0005.00000000
0x13   0x0004.00e.00007ff7  0x00c00d01.1771.0a  ----    1  fsc 0x0005.00000000
0x14   0x0005.002.0000a19f  0x00c00f1a.1bd6.1d  ----    1  fsc 0x0005.00000000
0x15   0x0015.000.00000002  0x00c00fba.0000.02  ----    1  fsc 0x0005.00000000
0x16   0x0016.000.00000002  0x00c00fca.0000.02  ----    1  fsc 0x0005.00000000
0x17   0x0017.000.00000002  0x00c00fda.0000.02  ----    1  fsc 0x0005.00000000
0x18   0x0018.000.00000002  0x00c00fea.0000.02  ----    1  fsc 0x0005.00000000
0x19   0x0019.000.00000002  0x00c00ffa.0000.02  ----    1  fsc 0x0005.00000000
0x1a   0x001a.000.00000002  0x00c0100a.0000.02  ----    1  fsc 0x0005.00000000
0x1b   0x001b.000.00000002  0x00c0101a.0000.02  ----    1  fsc 0x0005.00000000
0x1c   0x001c.000.00000002  0x00c0102a.0000.02  ----    1  fsc 0x0005.00000000
0x1d   0x001d.000.00000002  0x00c0103a.0000.02  ----    1  fsc 0x0005.00000000
0x1e   0x001e.002.00000002  0x00c0104a.0000.03  ----    1  fsc 0x0005.00000000
0x1f   0x001f.002.00000002  0x00c0105a.0000.03  ----    1  fsc 0x0005.00000000
0x20   0x0020.000.00000002  0x00c0106a.0000.02  ----    1  fsc 0x0005.00000000
0x21   0x0021.005.00000002  0x00c0107a.0000.08  ----    1  fsc 0x0000.00000000
0x22   0x0022.000.00000002  0x00c0108a.0000.02  ----    1  fsc 0x0005.00000000
0x23   0x0023.000.00000002  0x00c0109a.0000.02  ----    1  fsc 0x0005.00000000
0x24   0x0024.000.00000002  0x00c010aa.0000.02  ----    1  fsc 0x0005.00000000
bdba: 0x01c00fbc
data_block_dump,data header at 0xeb6994

Note the Itl entries – there is an entry for each transaction, marked by its transaction ID, as expected. When the block was created, there were two ITL slots. As the demand for locks increased, additional slots were created and used for these new transactions.

Now go to all these sessions and either commit or roll back to end the transactions. Dump the block and search for “Itl” again. The ITL slots are still there, even though the transactions have ended and the locks have been released. Oracle does not update the ITL entries.

So, when does the ITL entry get cleared? When the block’s buffer is written to disk, the unneeded ITL entries are checked and cleared out. Let’s force a block flush:

SQL> alter system checkpoint;

Now dump the data block once again and examine the ITLs. Here is an excerpt from the tracefiles.

Object id on Block? Y
 seg/obj: 0x1741f  csc: 0x00.235a849  itc: 36  flg: E  typ: 1 - DATA
     brn: 0  bdba: 0x1c00fb8 ver: 0x01 opc: 0
     inc: 0  exflg: 0

 Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x0014.016.00000008  0x00c00fb3.0002.11  C---    0  scn 0x0000.0235a524
0x02   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x03   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x04   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x05   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x06   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x07   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x08   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x09   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x0a   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x0b   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x0c   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x0d   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x0e   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x0f   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x10   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x11   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x12   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x13   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x14   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x15   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x16   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x17   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x18   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x19   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x1a   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x1b   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x1c   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x1d   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x1e   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x1f   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x20   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x21   0x0021.002.00000002  0x00c0107a.0000.05  C---    0  scn 0x0000.0235a807
0x22   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x23   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x24   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
bdba: 0x01c00fbc
data_block_dump,data header at 0x484994

Note the Xid column – the transaction ID – which shows all 0s, meaning no transaction is using those ITL slots. These ITL slots are eligible for reuse. Update two rows from two different sessions, checkpoint, and dump the block once again. Here is the ITL information again:

Itl           Xid                  Uba          Flag  Lck        Scn/Fsc
0x01   0x0014.016.00000008  0x00c00fb3.0002.11  C---    0  scn 0x0000.0235a524
0x02   0x0005.009.0000a1a5  0x00c00f21.1bd6.04  ----    1  fsc 0x0016.00000000
0x03   0x000a.002.00007f7a  0x00c003d8.16ea.13  ----    1  fsc 0x0016.00000000
0x04   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x05   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x06   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
... and so on ...

Two of the Itl slots are now in use again. Note that additional ITL slots are created only in this specific block, as demand requires; all other blocks continue to have the same number of ITL slots.

ITL Waits

From part 2 of this series you learned that ITL slots are not preallocated – at least not all of them. When a transaction needs to lock rows in a block and does not find an unused ITL slot, Oracle creates a new ITL slot for the transaction. Now consider a block where there is no more room for a new ITL entry, with the existing slots held by transactions from two sessions – call them Session1 and Session2. A new transaction comes in to update a third row, Record3. What will happen?

The transaction will have to wait. This is not the same as a row-lock wait, because there is no lock on the row marked Record3. Instead, the session will wait on a special wait event, which you can check in the V$SESSION view.

SQL> select event
  2  from v$session
  3  where sid = 78
  4  /

EVENT
----------------------------------------------------------------
enq: TX - allocate ITL entry

The moment one of the transactions – from either Session1 or Session2 – ends with a commit or rollback, the new transaction can grab that ITL slot and complete the locking operation. You will then see that wait event disappear.

Since ITL waits come and go, how do you capture them – or, more specifically, how will you know which objects are being subjected to this wait? It’s fairly trivial. Since Oracle 9.2, the view V$SEGMENT_STATISTICS has shown various segment-related statistics. Here is an example:

SQL> select statistic_name, value from v$segment_statistics
  2* where object_name = 'ITLTEST';

STATISTIC_NAME                                                        VALUE
---------------------------------------------------------------- ----------
logical reads                                                          7216
buffer busy waits                                                         3
gc buffer busy                                                            0
db block changes                                                       5600
physical reads                                                            0
physical writes                                                          39
physical read requests                                                    0
physical write requests                                                   9
physical reads direct                                                     0
physical writes direct                                                    0
optimized physical reads                                                  0
gc cr blocks received                                                     0
gc current blocks received                                                0
ITL waits                                                                 2
row lock waits                                                            1

Various stats on the segment named ITLTEST are listed here. The one of interest to our discussion is “ITL waits”, which shows 2. It means the table ITLTEST has waited 2 times on ITL allocation (not on legitimate row locking, which is shown in the statistic immediately after it).

Conversely, you may want to find out which segments have been subjected to ITL waits. The following query shows you that:

SQL> select owner, object_name
  2  from v$segment_statistics
  3  where statistic_name = 'ITL waits'
  4* and value > 0;

OWNER                          OBJECT_NAME
------------------------------ ------------------------------
ARUP                           ITLTEST

1 row selected.

The view has many more columns for making filtering easier:

SQL> desc v$segment_statistics
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 OWNER                                              VARCHAR2(30)
 OBJECT_NAME                                        VARCHAR2(30)
 SUBOBJECT_NAME                                     VARCHAR2(30)
 TABLESPACE_NAME                                    VARCHAR2(30)
 TS#                                                NUMBER
 OBJ#                                               NUMBER
 DATAOBJ#                                           NUMBER
 OBJECT_TYPE                                        VARCHAR2(18)
 STATISTIC_NAME                                     VARCHAR2(64)
 STATISTIC#                                         NUMBER
 VALUE                                              NUMBER

Actually selecting from the above view is a bit expensive on the database. The base view is V$SEGSTAT, shown below:

SQL> desc v$segstat
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 TS#                                                NUMBER
 OBJ#                                               NUMBER
 DATAOBJ#                                           NUMBER
 STATISTIC_NAME                                     VARCHAR2(64)
 STATISTIC#                                         NUMBER
 VALUE                                              NUMBER

While V$SEGMENT_STATISTICS shows much more information, it’s a little slow due to all those joins. If you don’t need all that information, you may want to select from V$SEGSTAT instead, which is usually faster. The columns are self-explanatory; but here they are in any case:

TS# - the tablespace number. You can use this to get the tablespace name from the TS$ table, joined by the TS# column.
OBJ# - the object_id from dba_objects. You can get the rest of the details from that view.
DATAOBJ# - the data_object_id from dba_objects. This is usually the same as object_id, except for sub-objects such as partitions, in which case they differ.

One important point about this view: like all V$ views, it shows information accumulated since the start of the instance. When the instance restarts, the values are reset to 0. To get historical information, you should periodically select from this view and store the results in a regular table (see the sketch after the AWR example below). If you have AWR enabled, you can check the historical records from there. Here is an example:

SQL> select snap_id, ITL_WAITS_TOTAL, ITL_WAITS_DELTA
  2  from DBA_HIST_SEG_STAT
  3  where obj# = 95263
  4* order by snap_id;

   SNAP_ID ITL_WAITS_TOTAL ITL_WAITS_DELTA
---------- --------------- ---------------
      5014               2               2

1 row selected.
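If AWR is not available, the roll-your-own history table mentioned above could be as simple as the following sketch (the table name is made up; schedule the INSERT through DBMS_SCHEDULER or a similar mechanism):

-- one-time setup: an empty copy of V$SEGSTAT plus a timestamp column
create table segstat_hist as
select sysdate snap_time, s.*
from   v$segstat s
where  1 = 2;

-- run this periodically to accumulate history
insert into segstat_hist
select sysdate, s.*
from   v$segstat s;

commit;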

Solution

Well, so far I talked about a problem. Is there a solution? Of course there is.

Remember, the cause of ITL waits is simply space inside the block. If there is no space inside the block to grow the ITL list and add more slots, sessions will wait on the ITL wait event. So the solution is to reserve some space for that growth. There are two basic alternatives to solve the ITL wait problem:

(1) INITRANS.

Remember this little clause in table or index creation? Have you ever explicitly set it to a non-default value? Most likely you haven’t. It specifies the number of ITL slots that are initially created in a block. If you specify 10, then 10 ITL slots are created in each block, guaranteeing slots for 10 transactions. The 11th transaction will need to extend the ITL list, or wait if that is not possible.

To check for the INITRANS value of tables, use:

SQL> select ini_trans
  2  from dba_tables
  3  where table_name = 'T';

 INI_TRANS
----------
        10
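Setting it at creation time is just a physical attribute clause after the column list; here is a sketch using the demo table from earlier:

create table itltest (
   col1  number,
   col2  varchar2(200)
)
initrans 10;    -- 10 ITL slots preallocated in every block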

(2) Less Space for Data

The other option is to make sure that there is less data inside each data block, leaving the ITL sufficient free space to grow. You can do this in several ways – by setting a high value of PCTFREE, or by using the MINIMIZE RECORDS_PER_BLOCK clause.
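Here is a sketch of both options (note that MINIMIZE RECORDS_PER_BLOCK works by example – you first populate a block with the desired maximum number of rows and then issue the command – so treat this as an outline rather than a recipe):

-- keep 40% of every new block free
alter table itltest pctfree 40;

-- after inserting the desired maximum number of rows into one block:
alter table itltest minimize records_per_block;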

Obviously, both of these options waste space inside the block, so you should use them only on segments that experience high ITL waits, as seen in AWR reports or your homegrown data collectors. To increase the INITRANS of an existing table, you can issue:

ALTER TABLE ITLTEST INITRANS 10;

Remember, the setting affects new blocks only, not existing ones. You can issue the ALTER TABLE … MOVE command to relocate the rows into new blocks, thereby putting the new setting into effect.
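A sketch (remember that a move leaves the table’s indexes unusable, so they must be rebuilt; the index name below is hypothetical):

alter table itltest move;

-- indexes are marked UNUSABLE by the move and must be rebuilt
alter index itltest_ix rebuild;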

What is the upper limit on ITL slots? It is set by an attribute of the object called MAXTRANS, whose default (and maximum) is 255. If you set it to 20, the ITL list can grow only to 20 slots. However, the attribute has no effect from Oracle 10gR2 onwards: it’s ignored, and the ITL list can grow up to 255 slots.

Takeaways

In this installment you learned:

(1) The ITL itself does not say whether a row is locked or not; the lock byte stored with the row tells that.
(2) When a transaction ends, the corresponding ITL entry is not removed or altered. It gets cleared when the block is flushed to disk.
(3) When the ITL can’t grow due to lack of space in the block, the session waits on the event “enq: TX - allocate ITL entry”.
(4) You can identify the segments that have suffered from this wait by checking the view V$SEGSTAT.
(5) To reduce the possibility of these waits, you should keep sufficient space inside the data block for ITL expansion, either by defining a higher number of initial ITL slots or by forcing less data into each block.

I hope you enjoyed this installment. As always, I would appreciate it if you dropped a line on how you liked it.

Pending Statistics

This is just a quick heads-up to those who plan to use the Pending Statistics feature introduced in Oracle 11.1.

It looks like in all currently available versions that support this feature, Pending Statistics have not been implemented consistently across all possible DBMS_STATS calls, so you have to be very careful which calls you use. Having enabled pending statistics for a particular table, you might start to manipulate the statistics under the impression that the modifications performed are not reflected in the actual dictionary statistics (by "dictionary statistics" in this case I don't mean the statistics of the data dictionary objects themselves, but the actual statistics of database objects stored in the data dictionary) but only in the pending statistics area, allowing you to test statistics modifications in an isolated environment using the OPTIMIZER_USE_PENDING_STATISTICS parameter at session level.

You therefore might be in for a surprise when you find out that this holds true only for a limited set of DBMS_STATS calls, not for all of them.

This effectively means that particular changes to the statistics will take effect immediately, even though pending statistics have been enabled.

In particular, manipulations of the statistics using the SET_*_STATS procedures of DBMS_STATS seem to ignore the pending statistics settings and update the dictionary statistics immediately, without further notice.

This is rather unfortunate, since it means that Pending Statistics cannot be used in a straightforward way to test user-defined statistics – something that can be very helpful under certain circumstances but requires extensive testing before use on a live system.

But other calls, too – like gathering statistics for only a particular set of columns – show unexpected behaviour: it looks like both sets of statistics get modified, the pending statistics area but also the dictionary statistics.

Note that setting the GLOBAL preferences (DBMS_STATS.SET_GLOBAL_PREFS) for PUBLISH to FALSE seems to fix this particular issue - in that case only the pending statistics get updated and the dictionary statistics are left unchanged. Unfortunately, this fix does not apply to the SET_*_STATS procedures; those seem to always update the dictionary statistics.

The worst thing, however, is that the statistics history that has been automatically maintained since Oracle 10g does not properly reflect these (unintended) changes, so you cannot easily recover from the potentially unwanted modifications by calling DBMS_STATS.RESTORE_TABLE_STATS.

Finally, I was apparently able to activate the pending statistics using DBMS_STATS.RESTORE_TABLE_STATS - you can see this behaviour rather clearly when using the SET_GLOBAL_PREFS('PUBLISH', 'FALSE') variant of the following script.

The following is a small demonstration of the issues encountered - please note that it modifies the GLOBAL preferences for the PUBLISH setting if you intend to run this test yourself.

set echo on timing on linesize 130 tab off trimspool on

alter session set nls_date_format = 'DD.MM.YYYY HH24:MI:SS';

drop table t purge;

create table t
as
select * from all_objects
where rownum <= 1000;

exec dbms_stats.set_global_prefs('PUBLISH', 'TRUE')

select dbms_stats.get_prefs('PUBLISH', null, 'T') from dual;

-- Our baseline, no histograms, basic column statistics for all columns
exec dbms_stats.gather_table_stats(null, 'T', estimate_percent => null, cascade => false, method_opt => 'FOR ALL COLUMNS SIZE 1')

-- Verify the result
select
num_distinct
, density
, last_analyzed
, num_buckets
, user_stats
from
user_tab_col_statistics
where
table_name = 'T'
and column_name = 'OBJECT_NAME';

-- Enable pending statistics for table T
-- You can try these different calls
--
-- The GATHER_*_STATS procedures seem to behave correctly
-- only when setting the GLOBAL PREFS to FALSE
--
-- "Correctly" means that the results are reflected in the
-- pending area only but not in the dictionary statistics
--
-- Note that the SET_*_STATS procedures seem to ignore the setting
-- always and publish directly to the dictionary
-- no matter what the PUBLISH setting is on any level (TABLE, GLOBAL)
--
-- exec dbms_stats.set_global_prefs('PUBLISH', 'FALSE')
exec dbms_stats.set_table_prefs(null, 'T', 'PUBLISH', 'FALSE')
-- exec dbms_stats.set_schema_prefs(user, 'PUBLISH', 'FALSE')

-- Verify the current setting, statistics will not be published
select dbms_stats.get_prefs('PUBLISH', null, 'T') from dual;

-- Wait for two seconds to make the LAST_ANALYZED column meaningful
exec dbms_lock.sleep(2)

-- This is supposed to go to the pending statistics area
exec dbms_stats.gather_table_stats(null, 'T', estimate_percent => null, cascade => false, method_opt => 'FOR ALL COLUMNS SIZE 254')

-- Yes, it worked, the dictionary statistics are not modified
select
num_distinct
, density
, last_analyzed
, num_buckets
, user_stats
from
user_tab_col_statistics
where
table_name = 'T'
and column_name = 'OBJECT_NAME';

-- The pending statistics area contains now the new statistics including histograms
select
num_distinct
, density
, last_analyzed
from
user_col_pending_stats
where
table_name = 'T'
and column_name = 'OBJECT_NAME';

select
count(*)
from
user_tab_histgrm_pending_stats
where
table_name = 'T'
and column_name = 'OBJECT_NAME';

select
count(*)
from
user_tab_histograms
where
table_name = 'T'
and column_name = 'OBJECT_NAME';

-- Wait for two seconds to make the LAST_ANALYZED column meaningful
exec dbms_lock.sleep(2)

-- Let's gather statistics only for the OBJECT_NAME column
exec dbms_stats.gather_table_stats(null, 'T', estimate_percent => null, cascade => false, method_opt => 'FOR COLUMNS OBJECT_NAME SIZE 1')

-- Oops, why do my dictionary statistics reflect that change (note the LAST_ANALYZED column)?
-- Unless you set the GLOBAL preferences for PUBLISH to FALSE
select
num_distinct
, density
, last_analyzed
, num_buckets
, user_stats
from
user_tab_col_statistics
where
table_name = 'T'
and column_name = 'OBJECT_NAME';

-- I now have the statistics updated in both the pending statistics and the dictionary statistics
-- Unless you set the GLOBAL preferences for PUBLISH to FALSE
select
num_distinct
, density
, last_analyzed
from
user_col_pending_stats
where
table_name = 'T'
and column_name = 'OBJECT_NAME';

select
count(*)
from
user_tab_histgrm_pending_stats
where
table_name = 'T'
and column_name = 'OBJECT_NAME';

select
count(*)
from
user_tab_histograms
where
table_name = 'T'
and column_name = 'OBJECT_NAME';

-- Wait for two seconds to make the LAST_ANALYZED column meaningful
exec dbms_lock.sleep(2)

-- Let's recreate the histogram only on the OBJECT_NAME column
exec dbms_stats.gather_table_stats(null, 'T', estimate_percent => null, cascade => false, method_opt => 'FOR COLUMNS OBJECT_NAME SIZE 254')

-- Oops, I did it again...
-- Unless you set the GLOBAL preferences for PUBLISH to FALSE
select
num_distinct
, density
, last_analyzed
, num_buckets
, user_stats
from
user_tab_col_statistics
where
table_name = 'T'
and column_name = 'OBJECT_NAME';

select
num_distinct
, density
, last_analyzed
from
user_col_pending_stats
where
table_name = 'T'
and column_name = 'OBJECT_NAME';

select
count(*)
from
user_tab_histgrm_pending_stats
where
table_name = 'T'
and column_name = 'OBJECT_NAME';

select
count(*)
from
user_tab_histograms
where
table_name = 'T'
and column_name = 'OBJECT_NAME';

-- Wait for two seconds to make the LAST_ANALYZED column meaningful
exec dbms_lock.sleep(2)

-- Let's define a manually crafted NDV and DENSITY value
-- Again I expect this to go to the pending statistics area
declare
srec dbms_stats.statrec;
novals dbms_stats.numarray;
distcnt number;
avgclen number;
nullcnt number;
density number;
begin
dbms_stats.get_column_stats(null, 't', 'object_name', distcnt => distcnt, avgclen => avgclen, nullcnt => nullcnt, density => density, srec => srec);
dbms_stats.set_column_stats(
ownname=>null,
tabname=>'t',
colname=>'object_name',
distcnt=>distcnt*100,
nullcnt=>nullcnt,
srec=>srec,
avgclen=>avgclen,
density=>density/100
);
end;
/

-- Nope, no change here
select
num_distinct
, density
, last_analyzed
from
user_col_pending_stats
where
table_name = 'T'
and column_name = 'OBJECT_NAME';

-- But I just changed it in the dictionary statistics
-- Even when setting the GLOBAL preference to FALSE
select
num_distinct
, density
, last_analyzed
, num_buckets
, user_stats
from
user_tab_col_statistics
where
table_name = 'T'
and column_name = 'OBJECT_NAME';

-- And what is even worse: The statistics history does not reflect all these changes to the dictionary statistics
select table_name, stats_update_time from USER_TAB_STATS_HISTORY where table_name = 'T';

exec dbms_stats.restore_table_stats(null, 'T', systimestamp)

-- But which statistics have been restored now?
-- It looks like this actually restored the PENDING statistics
-- according to the LAST_ANALYZED information??
select
num_distinct
, density
, last_analyzed
, num_buckets
, user_stats
from
user_tab_col_statistics
where
table_name = 'T'
and column_name = 'OBJECT_NAME';

select
count(*)
from
user_tab_histgrm_pending_stats
where
table_name = 'T'
and column_name = 'OBJECT_NAME';

select
count(*)
from
user_tab_histograms
where
table_name = 'T'
and column_name = 'OBJECT_NAME';

exec dbms_stats.set_global_prefs('PUBLISH', 'TRUE')

And this is what I get from running this on 11.1.0.7, 11.2.0.1 or 11.2.0.2:

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL>
SQL> alter session set nls_date_format = 'DD.MM.YYYY HH24:MI:SS';

Session altered.

Elapsed: 00:00:00.00
SQL>
SQL> drop table t purge;

Table dropped.

Elapsed: 00:00:00.20
SQL>
SQL> create table t
2 as
3 select * from all_objects
4 where rownum <= 1000;

Table created.

Elapsed: 00:00:00.35
SQL>
SQL> exec dbms_stats.set_global_prefs('PUBLISH', 'TRUE')

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.03
SQL>
SQL> select dbms_stats.get_prefs('PUBLISH', null, 'T') from dual;

DBMS_STATS.GET_PREFS('PUBLISH',NULL,'T')
----------------------------------------------------------------------------------------------------------------------------------
TRUE

Elapsed: 00:00:00.00
SQL>
SQL> -- Our baseline, no histograms, basic column statistics for all columns
SQL> exec dbms_stats.gather_table_stats(null, 'T', estimate_percent => null, cascade => false, method_opt => 'FOR ALL COLUMNS SIZE 1')

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.09
SQL>
SQL> -- Verify the result
SQL> select
2 num_distinct
3 , density
4 , last_analyzed
5 , num_buckets
6 , user_stats
7 from
8 user_tab_col_statistics
9 where
10 table_name = 'T'
11 and column_name = 'OBJECT_NAME';

NUM_DISTINCT DENSITY LAST_ANALYZED NUM_BUCKETS USE
------------ ---------- ------------------- ----------- ---
995 .001005025 18.01.2011 18:58:24 1 NO

Elapsed: 00:00:00.01
SQL>
SQL> -- Enable pending statistics for table T
SQL> -- You can try these different calls
SQL> --
SQL> -- The GATHER_*_STATS procedures seem to behave correctly
SQL> -- only when setting the GLOBAL PREFS to FALSE
SQL> --
SQL> -- "Correctly" means that the results are reflected in the
SQL> -- pending area only but not in the dictionary statistics
SQL> --
SQL> -- Note that the SET_*_STATS procedures seem to ignore the setting
SQL> -- always and publish directly to the dictionary
SQL> -- no matter what the PUBLISH setting is on any level (TABLE, GLOBAL)
SQL> --
SQL> -- exec dbms_stats.set_global_prefs('PUBLISH', 'FALSE')
SQL> exec dbms_stats.set_table_prefs(null, 'T', 'PUBLISH', 'FALSE')

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL> -- exec dbms_stats.set_schema_prefs(user, 'PUBLISH', 'FALSE')
SQL>
SQL> -- Verify the current setting, statistics will not be published
SQL> select dbms_stats.get_prefs('PUBLISH', null, 'T') from dual;

DBMS_STATS.GET_PREFS('PUBLISH',NULL,'T')
----------------------------------------------------------------------------------------------------------------------------------
FALSE

Elapsed: 00:00:00.01
SQL>
SQL> -- Wait for two seconds to make the LAST_ANALYZED column meaningful
SQL> exec dbms_lock.sleep(2)

PL/SQL procedure successfully completed.

Elapsed: 00:00:02.01
SQL>
SQL> -- This is supposed to go to the pending statistics area
SQL> exec dbms_stats.gather_table_stats(null, 'T', estimate_percent => null, cascade => false, method_opt => 'FOR ALL COLUMNS SIZE 254')

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.99
SQL>
SQL> -- Yes, it worked, the dictionary statistics are not modified
SQL> select
2 num_distinct
3 , density
4 , last_analyzed
5 , num_buckets
6 , user_stats
7 from
8 user_tab_col_statistics
9 where
10 table_name = 'T'
11 and column_name = 'OBJECT_NAME';

NUM_DISTINCT DENSITY LAST_ANALYZED NUM_BUCKETS USE
------------ ---------- ------------------- ----------- ---
995 .001005025 18.01.2011 18:58:24 1 NO

Elapsed: 00:00:00.01
SQL>
SQL> -- The pending statistics area contains now the new statistics including histograms
SQL> select
2 num_distinct
3 , density
4 , last_analyzed
5 from
6 user_col_pending_stats
7 where
8 table_name = 'T'
9 and column_name = 'OBJECT_NAME';

NUM_DISTINCT DENSITY LAST_ANALYZED
------------ ---------- -------------------
995 .00101 18.01.2011 18:58:26

Elapsed: 00:00:00.01
SQL>
SQL> select
2 count(*)
3 from
4 user_tab_histgrm_pending_stats
5 where
6 table_name = 'T'
7 and column_name = 'OBJECT_NAME';

COUNT(*)
----------
255

Elapsed: 00:00:00.01
SQL>
SQL> select
2 count(*)
3 from
4 user_tab_histograms
5 where
6 table_name = 'T'
7 and column_name = 'OBJECT_NAME';

COUNT(*)
----------
2

Elapsed: 00:00:00.01
SQL>
SQL> -- Wait for two seconds to make the LAST_ANALYZED column meaningful
SQL> exec dbms_lock.sleep(2)

PL/SQL procedure successfully completed.

Elapsed: 00:00:02.01
SQL>
SQL> -- Let's gather statistics only for the OBJECT_NAME column
SQL> exec dbms_stats.gather_table_stats(null, 'T', estimate_percent => null, cascade => false, method_opt => 'FOR COLUMNS OBJECT_NAME SIZE 1')

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.14
SQL>
SQL> -- Oops, why do my dictionary statistics reflect that change (note the LAST_ANALYZED column)?
SQL> -- Unless you set the GLOBAL preferences for PUBLISH to FALSE
SQL> select
2 num_distinct
3 , density
4 , last_analyzed
5 , num_buckets
6 , user_stats
7 from
8 user_tab_col_statistics
9 where
10 table_name = 'T'
11 and column_name = 'OBJECT_NAME';

NUM_DISTINCT DENSITY LAST_ANALYZED NUM_BUCKETS USE
------------ ---------- ------------------- ----------- ---
995 .001005025 18.01.2011 18:58:29 1 NO

Elapsed: 00:00:00.01
SQL>
SQL> -- I now have the statistics updated in both the pending statistics and the dictionary statistics
SQL> -- Unless you set the GLOBAL preferences for PUBLISH to FALSE
SQL> select
2 num_distinct
3 , density
4 , last_analyzed
5 from
6 user_col_pending_stats
7 where
8 table_name = 'T'
9 and column_name = 'OBJECT_NAME';

NUM_DISTINCT DENSITY LAST_ANALYZED
------------ ---------- -------------------
995 .001005025 18.01.2011 18:58:29

Elapsed: 00:00:00.01
SQL>
SQL> select
2 count(*)
3 from
4 user_tab_histgrm_pending_stats
5 where
6 table_name = 'T'
7 and column_name = 'OBJECT_NAME';

COUNT(*)
----------
0

Elapsed: 00:00:00.01
SQL>
SQL> select
2 count(*)
3 from
4 user_tab_histograms
5 where
6 table_name = 'T'
7 and column_name = 'OBJECT_NAME';

COUNT(*)
----------
2

Elapsed: 00:00:00.01
SQL>
SQL> -- Wait for two seconds to make the LAST_ANALYZED column meaningful
SQL> exec dbms_lock.sleep(2)

PL/SQL procedure successfully completed.

Elapsed: 00:00:02.01
SQL>
SQL> -- Let's recreate the histogram only on the OBJECT_NAME column
SQL> exec dbms_stats.gather_table_stats(null, 'T', estimate_percent => null, cascade => false, method_opt => 'FOR COLUMNS OBJECT_NAME SIZE 254')

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.34
SQL>
SQL> -- Oops, I did it again...
SQL> -- Unless you set the GLOBAL preferences for PUBLISH to FALSE
SQL> select
2 num_distinct
3 , density
4 , last_analyzed
5 , num_buckets
6 , user_stats
7 from
8 user_tab_col_statistics
9 where
10 table_name = 'T'
11 and column_name = 'OBJECT_NAME';

NUM_DISTINCT DENSITY LAST_ANALYZED NUM_BUCKETS USE
------------ ---------- ------------------- ----------- ---
995 .00101 18.01.2011 18:58:32 254 NO

Elapsed: 00:00:00.01
SQL>
SQL> select
2 num_distinct
3 , density
4 , last_analyzed
5 from
6 user_col_pending_stats
7 where
8 table_name = 'T'
9 and column_name = 'OBJECT_NAME';

NUM_DISTINCT DENSITY LAST_ANALYZED
------------ ---------- -------------------
995 .00101 18.01.2011 18:58:32

Elapsed: 00:00:00.01
SQL>
SQL> select
2 count(*)
3 from
4 user_tab_histgrm_pending_stats
5 where
6 table_name = 'T'
7 and column_name = 'OBJECT_NAME';

COUNT(*)
----------
255

Elapsed: 00:00:00.01
SQL>
SQL> select
2 count(*)
3 from
4 user_tab_histograms
5 where
6 table_name = 'T'
7 and column_name = 'OBJECT_NAME';

COUNT(*)
----------
255

Elapsed: 00:00:00.01
SQL>
SQL> -- Wait for two seconds to make the LAST_ANALYZED column meaningful
SQL> exec dbms_lock.sleep(2)

PL/SQL procedure successfully completed.

Elapsed: 00:00:02.01
SQL>
SQL> -- Let's define a manually crafted NDV and DENSITY value
SQL> -- Again I expect this to go to the pending statistics area
SQL> declare
2 srec dbms_stats.statrec;
3 novals dbms_stats.numarray;
4 distcnt number;
5 avgclen number;
6 nullcnt number;
7 density number;
8 begin
9 dbms_stats.get_column_stats(null, 't', 'object_name', distcnt => distcnt, avgclen => avgclen, nullcnt => nullcnt, density => density, srec => srec);
10 dbms_stats.set_column_stats(
11 ownname=>null,
12 tabname=>'t',
13 colname=>'object_name',
14 distcnt=>distcnt*100,
15 nullcnt=>nullcnt,
16 srec=>srec,
17 avgclen=>avgclen,
18 density=>density/100
19 );
20 end;
21 /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.17
SQL>
SQL> -- Nope, no change here
SQL> select
2 num_distinct
3 , density
4 , last_analyzed
5 from
6 user_col_pending_stats
7 where
8 table_name = 'T'
9 and column_name = 'OBJECT_NAME';

NUM_DISTINCT DENSITY LAST_ANALYZED
------------ ---------- -------------------
995 .00101 18.01.2011 18:58:32

Elapsed: 00:00:00.01
SQL>
SQL> -- But I just changed it in the dictionary statistics
SQL> -- Even when setting the GLOBAL preference to FALSE
SQL> select
2 num_distinct
3 , density
4 , last_analyzed
5 , num_buckets
6 , user_stats
7 from
8 user_tab_col_statistics
9 where
10 table_name = 'T'
11 and column_name = 'OBJECT_NAME';

NUM_DISTINCT DENSITY LAST_ANALYZED NUM_BUCKETS USE
------------ ---------- ------------------- ----------- ---
99500 .0000101 18.01.2011 18:58:34 254 YES

Elapsed: 00:00:00.01
SQL>
SQL> -- And what is even worse: The statistics history does not reflect all these changes to the dictionary statistics
SQL> select table_name, stats_update_time from USER_TAB_STATS_HISTORY where table_name = 'T';

TABLE_NAME STATS_UPDATE_TIME
------------------------------ ---------------------------------------------------------------------------
T 18-JAN-11 06.58.24.391000 PM +01:00

Elapsed: 00:00:00.01
SQL>
SQL> exec dbms_stats.restore_table_stats(null, 'T', systimestamp)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.29
SQL>
SQL> -- But which statistics have been restored now?
SQL> -- It looks like this actually restored the PENDING statistics
SQL> -- according to the LAST_ANALYZED information??
SQL> select
2 num_distinct
3 , density
4 , last_analyzed
5 , num_buckets
6 , user_stats
7 from
8 user_tab_col_statistics
9 where
10 table_name = 'T'
11 and column_name = 'OBJECT_NAME';

NUM_DISTINCT DENSITY LAST_ANALYZED NUM_BUCKETS USE
------------ ---------- ------------------- ----------- ---
995 .00101 18.01.2011 18:58:32 254 NO

Elapsed: 00:00:00.01
SQL>
SQL> select
2 count(*)
3 from
4 user_tab_histgrm_pending_stats
5 where
6 table_name = 'T'
7 and column_name = 'OBJECT_NAME';

COUNT(*)
----------
255

Elapsed: 00:00:00.01
SQL>
SQL> select
2 count(*)
3 from
4 user_tab_histograms
5 where
6 table_name = 'T'
7 and column_name = 'OBJECT_NAME';

COUNT(*)
----------
255

Elapsed: 00:00:00.01
SQL>
SQL> exec dbms_stats.set_global_prefs('PUBLISH', 'TRUE')

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL>

It is also interesting to note that, although Oracle has added an array of new dictionary views that allow access to the pending statistics area, these views are inconsistent with the existing statistics-related views in terms of naming conventions, columns and behaviour.

For example, the *_COL_PENDING_STATS views do not have a NUM_BUCKETS column, and the corresponding *_TAB_HISTGRM_PENDING_STATS views for histogram details show 0 rows when basic column statistics have been defined but no histogram, whereas the original *_TAB_HISTOGRAMS views return two rows (representing the low and high value of the column) once basic column statistics have been collected.

Summary

The Pending Statistics feature clearly shows some unexpected behaviour. In particular, you had better not rely on it preventing the dictionary statistics from being updated by DBMS_STATS calls when the PUBLISH attribute is set to FALSE.

Some of this clearly looks like a bug, but I couldn't find a corresponding bug entry in My Oracle Support yet.

Note that this post does not cover the actual usage of Pending Statistics by the optimizer. I haven't done any extensive testing in this regard, but some quick checks showed that it seems to work as expected: the optimizer picks the pending statistics for those tables that have them defined when OPTIMIZER_USE_PENDING_STATISTICS = TRUE is set, but still uses the statistics from the dictionary for those that don't have any pending statistics defined.
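For reference, the quick checks were along these lines (a sketch):

alter session set optimizer_use_pending_statistics = true;

-- explain/run the statements of interest here: tables with pending
-- statistics are costed using those, all others still use the dictionary

alter session set optimizer_use_pending_statistics = false;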

A couple of road trips coming up in the US...

I did quite a bit of international travel last year - not that I ignored North America, but I definitely visited more user groups, conferences and customer events across the ocean than near home.

This year is starting off differently. I'll be in Cincinnati, Vancouver, Seattle and Denver next week (see http://asktom.oracle.com for the links to those events).
Right after that, in February, I'll be in San Diego, Los Angeles, Seattle and Phoenix for an interesting set of IOUG events. The URLs for those will be posted really soon - but mark the calendar for the week of Feb 21st. The format is rather different from the norm - there will be three of us presenting simultaneously. The cast will consist of Andrew Holdsworth (he runs our real world performance group), Graham Wood (the father of ASH and AWR in many respects) and myself. We don't always agree on the nitty gritty details (which makes it more interesting) - but as you'll see, we are ready and willing to talk it out. Once we've done these first four, we'll have more in the making in other parts of the US - working the west coast first but covering the entire country (that is the plan). I'll keep you updated on that.
In March I hit Dallas for the annual Hotsos Symposium. This is a conference I always try to make (I've only missed one of them!) and the list of speakers and topics this year looks awesome. It is definitely something to check out - and the training day given by Karen Morton is sure to be a really good add-on if you can stay the full four days. I've found the training days to be well worth attending: getting a full day from one person, rather than just session after session with different speakers, puts things in a different perspective. In a day, someone can deliver content that just isn't possible in a single-session environment.

ASSM Again

While checking my backlog of drafts (currently 75 articles in note form) I came across this one from August 2009 and was a little upset that I hadn’t finished it sooner – it’s a nice example of geek stuff that has the benefit of being useful.

From the Oracle newsgroup comp.databases.oracle.server, here’s an example of how to recreate a performance problem due to maintenance on ASSM bitmaps in 10.2.0.4.

Create a table in a locally managed tablespace with an 8KB block size, uniform 1MB extents, and automatic segment space management (ASSM). Check the newsgroup thread if you want complete details on reproducing the test; a rough sketch of the steps follows the list below:

Session 1: Insert 100,000 rows of about 490 bytes into a table using a pl/sql loop and commit at end.
Session 1: Insert 1,000 rows into the table with single SQL inserts and no commits
Session 1: Delete all data from the table with a single statement – but do not commit

Session 2: Insert 1,000 rows into the table with single SQL inserts and no commits – it’s very slow.
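A rough sketch of the steps above (table name, row padding and tablespace name are assumptions; the thread has the exact details):

-- tablespace: 8KB blocks, uniform 1MB extents, ASSM
create table t1 (id number, v1 varchar2(490)) tablespace assm_demo;

-- Session 1: bulk insert, committed at the end
begin
    for i in 1..100000 loop
        insert into t1 values (i, rpad('x', 480));
    end loop;
    commit;
end;
/

-- Session 1: 1,000 further single-row inserts, no commit
-- (individual statements in the original thread)
insert into t1 values (100001, rpad('x', 480));
-- ... repeated 1,000 times ...

-- Session 1: delete everything, still no commit
delete from t1;

-- Session 2: single-row inserts again - this is the slow step
insert into t1 values (200001, rpad('x', 480));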

As one person on the thread pointed out, it looks as if Oracle is doing a full tablescan of the table, one block at a time, showing “db file sequential read” waits all the way through the table. (If your db_cache_size is large enough you might not see this symptom.)

I simplified the test – inserting just 100,000 of the rows (with the commit), then deleting them all (without the commit), then inserting one row from another session. Taking a snapshot of x$kcbsw and x$kcbwh, I got the following figures for the activity that took place inserting that one extra row (this was on Oracle 10.2.0.3):

---------------------------------
Buffer Cache - 23-Dec 11:50:36
Interval:-  0 seconds
---------------------------------
          Why0          Why1          Why2    Other Wait
          ----          ----          ----    ----------
         1,457             0             0             0 ktspfwh10: ktspscan_bmb
             8             0             0             0 ktspswh12: ktspffc
             1             0             0             0 ktsphwh39: ktspisc
         7,061             0             0             0 ktspbwh1: ktspfsrch
             1             0             0             0 ktuwh01: ktugus
         7,060             0             0             0 ktuwh05: ktugct
             1             0             0             0 ktuwh09: ktugfb
             2             0             0             0 kdswh02: kdsgrp
             2             0             0             0 kdiwh06: kdifbk
             2             0             0             0 kdiwh07: kdifbk
          ----          ----          ----    ----------
        15,595             0             0             0 Total: 10 rows

The figures tell us how much work Oracle has to do to find a table block that could accept a new row. The idea is simple – Oracle checks the first “level 3” bitmap block (which is actually part of the segment header block) to find a pointer to the current “level 2” bitmap block; it checks this level 2 bitmap block to find a pointer to the current “level 1” bitmap block; and finally it checks the “level 1” bitmap block to find a pointer to a data block that shows some free space.

Unfortunately every block in our table is apparently empty – but that’s only going to be true once session 1 commits. In this version of Oracle the blocks are all visible as “x% free” in the level 1 bitmaps – but when Oracle visits each block (“ktspbwh1: ktspfsrch”) it checks the ITL entry, which points it to the transaction table slot in the related undo segment header block to Get the Commit Time for the transaction (“ktuwh05: ktugct”) and finds that the transaction is not committed so the space is not really free. So Oracle has to visit the next block shown as free in the bitmap.

In our “bulk delete / no commit” case, we end up visiting every (or nearly every) block in the entire table before we find a block we can actually use – and, given the nature of the ASSM bitmap implementation, the order of the block visits is the “natural” table order, so we see something that looks like a full tablescan operating through single block reads (apart, perhaps, from a few blocks that are still cached).

I can’t explain why we do 1,457 visits to bitmap blocks (“ktspfwh10: ktspscan_bmb”) in this version of Oracle, but perhaps it’s simply an indication that Oracle picks (say) five “free block” entries from a bitmap block on each visit, and therefore has to visit each bitmap block about 12 times if it doesn’t find a data block with genuinely free space in its search.

Note – these results will be hugely dependent on the version of Oracle. In an earlier version the bitmap blocks were not updated by the delete until some time after the commit – and this variation of delayed block cleanout produced other unpleasant anomalies; and, just to make life difficult in later versions, the x$kcbsw / x$kcbwh objects are not populated properly in 11g.

Footnote: In case you hadn’t realised, ASSM is a mechanism aimed at OLTP systems with a reasonable degree of concurrency – so it’s not too surprising that you can find problems and weak spots if you hit it with processing that is biased towards the DW and batch-processing end of the spectrum.

Automatic shared memory resize with disabled ASMM/AMM

Thanks to Nigel Antell, who has posted a comment on one of Tanel Poder’s blog notes with a link to MOS Doc ID 1269139.1. This note describes a parameter called _MEMORY_IMM_MODE_WITHOUT_AUTOSGA, which allows Oracle 11gR2 to adjust the sizes of shared memory structures even if Automatic (Shared) Memory Management has been explicitly disabled, with the main purpose of avoiding the ORA-04031 error. This is nice.
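According to the note, the behaviour is controlled by the hidden parameter itself; a sketch of switching it off (my assumption from the note – as always, check with Oracle Support before touching underscore parameters):

alter system set "_memory_imm_mode_without_autosga" = false scope = both;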
PS. I wonder if this was done by Oracle development specifically because many customers don’t like ASMM (for example, because of its negative impact on applications) but encounter ORA-04031 from time to time, which results in many opened SRs. Who knows.

Finding Unnecessary Effective Date Processing in PS/Query

In PeopleSoft for the Oracle DBA (Ch11, p309) I commented on how PS/Query will automatically add effective date criteria to an EFFDT column, even if it is not a key column.
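The generated criteria takes the familiar PeopleSoft effective-date form; here is a sketch with a hypothetical record PS_FOO keyed on EMPLID:

AND A.EFFDT = (
    SELECT MAX(A_ED.EFFDT)
    FROM   PS_FOO A_ED
    WHERE  A_ED.EMPLID = A.EMPLID
    AND    A_ED.EFFDT <= SYSDATE)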

PS/Query does warn you that it has added the criteria, but it won't warn you that the column is not a key column.