undo

Undo Understood

It’s hard to understand all the ramifications of Oracle’s undo handling, and it’s not hard to find cases where the resulting effects are very confusing. A recent post on the OTN database forum resulted in one response insisting that the OP was obviously updating a table with frequent commits from one session while querying it from another, thereby generating a large number of undo reads in the querying session.

Truncate

The old question about truncate and undo (“does a truncate generate undo or not”) appeared on the OTN database forum over the weekend, then devolved into “what really happens on a truncate”, and carried on from there.

The quick answer to the traditional question is essentially this: the actual truncate activity typically generates very little undo (and redo) compared with a full delete of all the data, because all it does is tidy up any space management blocks and update the data dictionary; the undo and redo generated relate only to the metadata, not to the data itself.
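If you want to see the difference for yourself, here’s a minimal sketch (the table names del_t and trn_t are just placeholders, and the figures will vary with data volume and version): create two copies of the same data, delete one, truncate the other, and check the session statistic “undo change vector size” before and after each step.

create table del_t as select * from all_objects;
create table trn_t as select * from all_objects;

-- note the starting value of the session statistic
select	sn.name, ms.value
from	v$statname sn, v$mystat ms
where	ms.statistic# = sn.statistic#
and	sn.name = 'undo change vector size'
;

delete from del_t;		-- undo recorded for every row: the statistic grows roughly with the data volume
commit;

-- re-run the statistic query, then:

truncate table trn_t;		-- undo covers only the space management and dictionary changes

-- re-run the statistic query: the increase for the truncate is tiny by comparison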

Flashback logging

When database flashback first appeared many years ago I commented (somewhere, but don’t ask me where) that it seemed like a very nice idea for full-scale test databases if you wanted to test the impact of changes to batch code, but I couldn’t really see it being a good idea for live production systems because of the overheads.

12c Temporary

Just one of those little snippets to cover something new and remind you of something old. A single session can now have three (or more) temporary tablespaces in use at the same time, for different reasons.
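If you want to see which temporary tablespaces a session is currently using (and for what), here’s a sketch of the sort of query you could run – assuming you have access to the relevant v$ views:

select	s.sid, u.tablespace, u.segtype, u.contents, u.blocks
from	v$tempseg_usage u, v$session s
where	s.saddr = u.session_addr
order by
	s.sid, u.tablespace
;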

used_urec

A question that comes up from time to time on the OTN forums is how much undo you generate when you insert or update a row. It’s not a question that has an immediate answer – it depends on what you’re actually doing, the amount of change you introduce, and the number of indexes affected. However, there are a few guidelines that will point you in the right direction – and the key is to keep an eye on the used_urec and used_ublk columns in v$transaction.
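As a starting point, here’s the sort of query I’d use to watch those two columns for my own session (joining v$transaction to v$session through the taddr column) – just a sketch, and it returns a row only while the session has an open transaction:

select	t.used_urec, t.used_ublk
from	v$transaction t, v$session s
where	t.addr = s.taddr
and	s.sid = sys_context('userenv','sid')
;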

Consider the example of inserting data: when you insert a row you typically generate one undo record (used_urec) for the row, but when you insert many rows using an array insert Oracle has an optimisation on undo record creation that allows it to create a single undo record to cover all the changes made simultaneously to an individual block – so used_urec could be much smaller than the number of rows processed.
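To see the effect you could run something like the following sketch (the table name t_demo is just a placeholder), checking used_urec with the query above after each insert and before each rollback: the row-by-row loop reports roughly one undo record per row, while the single insert/select reports far fewer.

create table t_demo (id number, padding varchar2(100));

begin					-- row-by-row: roughly one undo record per row
	for i in 1..1000 loop
		insert into t_demo values (i, rpad('x',100));
	end loop;
end;
/

-- check used_urec, then:
rollback;

insert into t_demo			-- array insert: one undo record can cover all the rows changed in a block
select	rownum, rpad('x',100)
from	all_objects
where	rownum <= 1000
;

-- check used_urec again, then:
rollback;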

However, if you have indexes in place and normal index maintenance is going on as the data arrives (as it would, for example, on a conventional import), then each table row requires each index to be updated, so you go back to one used_urec per table row plus one used_urec per index maintained per table row.

So, when you look at the “big picture” there’s no obvious correlation between rows inserted and undo generated – until you look at the fine detail of exactly what you’re doing, and whether any optimisations apply. (The details of the optimisation strategies available vary with the chosen insert mechanism and with the version of Oracle.)

Just for reference, here’s a link to a little note I wrote some months ago about monitoring undo – it generated a surprising number of comments.

Consistent Gets

There’s an interesting question on the OTN database forum at present – why does an update of 300,000 rows take a billion buffer visits? (There are 25 indexes on the table – so you might point a finger at them initially, but only one of the indexes is going to be changed by the update, so that should only account for around an extra 10 gets per row in a clean environment.)

The answer to the question hadn’t been reached by the time I wrote this note – and this note isn’t intended as a suggested cause of the issue; it’s just an example of the type of thing that could cause an apparent excess of buffer visits. Here’s a little bit of code I’ve just tested on 11.1.0.6 using an 8KB block size:

drop sequence t1_seq;
create sequence t1_seq;

drop table t1;

create table t1 as
select
	rownum id, rpad('x',10) small_vc
from
	all_objects
where
	rownum <= 11
;

execute dbms_stats.gather_table_stats(user,'t1')

select * from t1;

pause

execute snap_my_stats.start_snap
update t1 set small_vc = upper(small_vc) where id = 11;
execute snap_my_stats.end_snap

(The calls to the “snap_my_stats” package simply record the current contents of v$mystat joined to v$statname before and after the update and print the changes.)

The code simply creates a sequence and a table with 11 rows and no indexes, then updates one specific row in the table. However, where the “pause” appears, I start up 10 separate sessions to do the following:

column seqval new_value m_seq

select t1_seq.nextval seqval from dual;

update t1 set small_vc = upper(small_vc) where id = &m_seq;

pause

exit

So when I hit return on the pause for the first session, there are 10 separate active transactions on the single block in my table, one for each row except row 11. (And now you know what the sequence was for.)

Here’s a subset of the statistics from v$mystat after my update statement – remember, all I’ve done is update one row in one block using a tablescan:


Name                                                                     Value
----                                                                     -----
session logical reads                                                       45
db block gets                                                                3
db block gets from cache                                                     3
consistent gets                                                             42
consistent gets from cache                                                  42
consistent gets from cache (fastpath)                                        3
consistent gets - examination                                               39
db block changes                                                             7
consistent changes                                                          13
calls to kcmgrs                                                             26
calls to kcmgas                                                              1
calls to get snapshot scn: kcmgss                                            7
redo entries                                                                 3
redo size                                                                  764
undo change vector size                                                    236
data blocks consistent reads - undo records applied                         13
active txn count during cleanout                                            20
table scan blocks gotten                                                     1

Note the last statistic – just one block accessed by tablescan – compared with the 45 buffer visits reported as session logical reads.
Those 45 buffer visits come from 3 current (db) block gets and 42 consistent gets.
Of the 42 consistent gets, 39 are examinations, which (in the absence of indexes and hash clusters) are visits to undo blocks.
The 39 undo visits consist of 13 visits to undo blocks to find the 13 undo records to apply, plus 26 visits to undo segment headers to find 13 transaction SCNs.

What you’re seeing is one session doing (relatively speaking) a lot of work to hide the effects of other sessions which have not yet committed their transactions. (This was only a quick test, so I haven’t examined why the larger figures appear in multiples of 13 rather than multiples of 10 – the number of other transactions – and since this note is just trying to demonstrate a concept I won’t be looking into it any further.)

If you have a number of “non-interfering” transactions – i.e. transactions that don’t actually lock each other out – on a single table then you could find that they spend more time hiding each other’s work than they do doing their own work.

Footnote:

The numbers change significantly if I commit the 10 transactions (but wait until they’ve all executed, so they are all active at the same time) before I do the update to the 11th row.

The numbers changed even more surprisingly when I forgot to collect stats on the table in my initial example of the test.

More CR

Following on from yesterday’s post on consistent reads, I thought I’d make the point that the way you work can make an enormous difference to the amount of work you do. Here’s a silly little demo (in 10.2.0.3):

drop table t1 purge;
create table t1 (id number, n1 number);
insert into t1 values (1,0);
insert into t1 values (2,0);
commit;

execute dbms_stats.gather_table_stats(user,'t1')
execute snap_my_stats.start_snap

begin
	for i in 1..1000 loop
		update t1 set n1 = i where id = 1;
	end loop;
end;
/

execute snap_my_stats.end_snap

set doc off
doc

Output - 10.2.0.3 (some rows deleted)
Name                                                                     Value
----                                                                     -----
opened cursors cumulative                                                   11
user calls                                                                   6
recursive calls                                                          1,068
recursive cpu usage                                                          7
session logical reads                                                    4,041
CPU used when call started                                                   7
CPU used by this session                                                     7
DB time                                                                      6
db block gets                                                            1,030
db block gets from cache                                                 1,030
consistent gets                                                          3,011
consistent gets from cache                                               3,011
consistent gets - examination                                                4
db block changes                                                         2,015
change write time                                                            4
free buffer requested                                                    1,014
switch current to new buffer                                             1,000
calls to kcmgas                                                          1,014
calls to get snapshot scn: kcmgss                                        3,009
redo entries                                                               960
redo size                                                              295,160
undo change vector size                                                111,584
no work - consistent read gets                                           1,004
table scans (short tables)                                               1,001
table scan rows gotten                                                   2,002
table scan blocks gotten                                                 1,001
buffer is pinned count                                                   1,000
execute count                                                            1,009

#

I’ve created two rows in a table, then updated one of them 1,000 times – using a table scan to do the update. I haven’t yet committed my transaction. At this point I’m going to use a second session to run the same update loop on the second row in the table:

begin
	for i in 1..1000 loop
		update t1 set n1 = i where id = 2;
	end loop;
end;
/

Name                                                                     Value
----                                                                     -----
opened cursors cumulative                                                    8
user calls                                                                   6
recursive calls                                                          1,009
recursive cpu usage                                                        170
session logical reads                                                  965,999
CPU used when call started                                                 172
CPU used by this session                                                   172
DB time                                                                    173
db block gets                                                            1,030
db block gets from cache                                                 1,030
consistent gets                                                        964,969
consistent gets from cache                                             964,969
consistent gets - examination                                          961,965
db block changes                                                         3,016
consistent changes                                                   1,001,000
free buffer requested                                                    1,015
CR blocks created                                                        1,001
calls to kcmgas                                                          1,015
calls to get snapshot scn: kcmgss                                        3,008
redo entries                                                             1,936
redo size                                                              358,652
undo change vector size                                                111,608
data blocks consistent reads - undo records applied                  1,001,000
cleanouts and rollbacks - consistent read gets                           1,001
immediate (CR) block cleanout applications                               1,001
active txn count during cleanout                                         2,000
cleanout - number of ktugct calls                                        1,001
IMU CR rollbacks                                                        41,041
table scans (short tables)                                               1,001
table scan rows gotten                                                   2,002
table scan blocks gotten                                                 1,001
execute count                                                            1,006

Many of the statistics are (virtually) identical (e.g. “execute count”, “db block gets”, “free buffer requested”); some show an increase of 1,000 (often from zero) – largely because we have to worry 1,000 times about cleaning out the current block and creating a read-consistent version so that we can check whether it can be updated.

But the most noticeable changes are in the CPU time and “consistent gets”, because 1,000 times over we have to apply 1,000 undo records to the block as we create the read-consistent version of the block. The CPU time has gone from 7 (hundredths of a second) to 172 because of (roughly) 1,000,000 “consistent gets – examination”. As I mentioned yesterday, this figure closely matches “data blocks consistent reads – undo records applied”, so we know why the gets are happening. Watch out in your batch jobs – if you have a lot of concurrent update activity going on, a significant fraction of the workload may be the constant re-creation of CR clones.
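If you want a quick (if crude) indication of how much of an instance’s consistent-get workload is going into rebuilding CR clones, you could keep an eye on a few of the statistics used in this note – a sketch only, and remember that these v$sysstat figures are cumulative since instance startup:

select	name, value
from	v$sysstat
where	name in (
		'consistent gets',
		'consistent gets - examination',
		'data blocks consistent reads - undo records applied',
		'CR blocks created'
	)
;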

However, there is another interesting detail to watch out for – what happens if I change the update execution path from a tablescan to an indexed access path:

create table t1 (id number, n1 number);
insert into t1 values (1,0);
insert into t1 values (2,0);
commit;

execute dbms_stats.gather_table_stats(user,'t1')
create index t1_i1 on t1(id);                            --  Make indexed access path available.

Then with an index hint in my update code, I get the following effects (having done the same update loop on row 1 in the first session, of course):


begin
	for i in 1..1000 loop
		update /*+ index(t1) */ t1 set n1 = i where id = 2;     -- indexed access path hinted
	end loop;
end;
/

Name                                                                     Value
----                                                                     -----
opened cursors cumulative                                                    7
user calls                                                                   6
recursive calls                                                          1,006
recursive cpu usage                                                         11
session logical reads                                                    2,036
CPU used when call started                                                  11
CPU used by this session                                                    11
DB time                                                                     11
db block gets                                                            1,030
db block gets from cache                                                 1,030
consistent gets                                                          1,006
consistent gets from cache                                               1,006
consistent gets - examination                                                6
db block changes                                                         2,015
free buffer requested                                                       14
shared hash latch upgrades - no wait                                     1,000
calls to kcmgas                                                             14
calls to get snapshot scn: kcmgss                                        1,004
redo entries                                                               960
redo size                                                              295,144
undo change vector size                                                111,608
index crx upgrade (positioned)                                           1,000
index scans kdiixs1                                                      1,000
execute count                                                            1,005

The difference is astonishing – where did all the ‘create CR copy’ activity go?

I’ve pointed out before now that choosing a different execution plan for an update can have a big impact on performance – this is just another example demonstrating the point.

Consistent Reads

Here’s a quick demo to make a point about consistent reads (prompted by a question on the Oracle-L mailing list):

SQL> drop table t1;

Table dropped.

SQL> create table t1 (n1 number);

Table created.

SQL> insert into t1  values(0);

1 row created.

SQL> begin
  2  for i in 1..1000 loop
  3  update t1 set n1 = i;
  4  end loop;
  5  end;
  6  /

PL/SQL procedure successfully completed.

Note that I haven’t issued a commit in this session, and all I’ve got is a single row in the table (and because it’s my usual demo setup of locally managed tablespaces with uniform extents of 1MB using freelist management I know that that one row is in the first available block of the table).
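If you want to check where that row has ended up, a quick sanity check (just a sketch, using the same t1) is to decode its rowid:

select
	dbms_rowid.rowid_relative_fno(rowid)	file#,
	dbms_rowid.rowid_block_number(rowid)	block#
from	t1
;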

How much work is a second session going to do to scan that table?

SQL> alter system flush buffer_cache;
SQL> execute snap_my_stats.start_snap
SQL> select * from t1;
SQL> set serveroutput on size 1000000 format wrapped
SQL> execute snap_my_stats.end_snap
---------------------------------
Session stats - 18-Apr 11:33:01
Interval:-  2 seconds
---------------------------------
Name                                                                     Value
----                                                                     -----
session logical reads                                                      967
consistent gets                                                            967
consistent gets from cache                                                 967
consistent gets - examination                                              964
consistent changes                                                       1,001
CR blocks created                                                            1
data blocks consistent reads - undo records applied                      1,001
IMU CR rollbacks                                                            41

The snap_my_stats package is similar in concept to Tom Kyte’s “runstats” or Tanel Poder’s “snapper” program to capture changes in values in the dynamic performance views over short time periods. In this case I’ve deleted all but a few of the larger changes, and a couple of small changes.

The figure that stands out (probably) is the “session logical reads” – we’ve done 967 logical I/Os to scan a table of just one block. The reason for this is that we’ve created a read-consistent copy of that one block (“CR blocks created” = 1), and it has taken a lot of work to create that copy. We’ve had to apply 1,001 undo records (“data blocks consistent reads – undo records applied” = 1,001).

Most of those undo records come from individual accesses to undo blocks (accesses of the cheaper “consistent gets – examination” type that need only a single get on the “cache buffers chains” latch), following the “UBA (undo block address)” pointer in the relevant ITL entry of the table block; but since this is a 10g database the last few undo records come out of the “In-memory Undo” of the other session. Basically the cloning operation is something like this:

  1. Get current block
  2. Notice uncommitted transaction
  3. Clone current block – then ignore current block
  4. Get Undo block indicated by uncommitted ITL and apply undo change vector
  5. Repeat step four – 1,000 times
  6. Block is now clear of all uncommitted transactions
  7. There are no committed transactions with a commit SCN later than the start of the query
  8. Display contents of block

It is an interesting point that as the first session created undo records it would pin and fill undo blocks – so it would only do a few current gets (one for each block) on the undo blocks it was using. As another process reverses out the changes in a CR clone it has to get and release each undo block every time it wants a single undo record … applying undo records introduces far more latch and buffer activity than the original generation of the undo.

Footnote

It’s worth knowing that there are three statistics relating to applying undo records:

transaction tables consistent reads - undo records applied        Estimating "old" commit SCNs during delayed block cleanout
data blocks consistent reads - undo records applied               Creating CR clones
rollback changes - undo records applied                           The result of a real "rollback;"
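If you want to watch them for your own session, a sketch of the obvious query (the names are exactly as they appear in v$statname) would be:

select	sn.name, ms.value
from	v$statname sn, v$mystat ms
where	ms.statistic# = sn.statistic#
and	sn.name in (
		'transaction tables consistent reads - undo records applied',
		'data blocks consistent reads - undo records applied',
		'rollback changes - undo records applied'
	)
;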

See this posting on comp.databases.oracle.server for a rough description of transaction table consistent reads; and this elderly posting highlighting a benefit of knowing about rollback changes.

Footnote 2

The second step in the list of actions is: “Notice uncommitted transaction”. It’s probably worth pointing out that another part of the ITL entry holds the transaction id (“xid”), which implicitly identifies the undo segment and the transaction table slot in that segment that has been used to hold the transaction state. The current contents of that slot allow Oracle to determine whether or not (and, if necessary, when) the transaction was committed.
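For reference, the components of the xid are exposed in v$transaction – here’s a sketch of a query that shows them for any currently active transactions (xidusn identifies the undo segment, xidslot the transaction table slot in that segment’s header, and xidsqn the number of times the slot has been re-used):

select	xidusn, xidslot, xidsqn, status, used_urec, used_ublk
from	v$transaction
;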

Private Redo

About this time last year I wrote a short answer on OTN about Private Redo Threads and In-Memory Undo. Thanks to a follow-up question a year later I’ve been prompted to supply a link to my original answer because it was actually pretty good: OTN Thread “In Memory Undo”.