Oakies Blog Aggregator

Creating synthetic and random test data from an existing table

We have all been there at some point. Either we need to run a test but can't bring production data
outside of the production network, or we need to produce a test case for a support organization that
is not allowed to view production data. What to do?

OSWatcher, Tracefile Analyzer, and Oracle 12.2 single instance

I have previously written about TFA, OSWatcher et al. for Oracle 12.1. Since then, a lot of things have happened and an update for 12.2 had been on my to-do list for far too long. Experience teaches me that references to support notes and official documentation get out of date rather quickly, so as always, if you find anything that has changed please let me know via the comments section and I’ll update the post.

This is going to be a 3 part mini-series to save you having to go over 42 pages of text … In this first part I’m going to have a look at single instance Oracle. In part 2 I’ll have a look at Oracle Restart environments, and finally in part 3 I’ll finish the series by looking at a 12.2 RAC system.

The environment

I am using a small VM to install Oracle 12.2.0.1.0 (initially) on Oracle Linux 7.4 with kernel UEK4. As always, my EE database binaries go into /u01/app/oracle/product/12.2.0.1/dbhome_1.

The installation/testing (and reporting) of my findings are following this approach:

  • Install the O/S
  • Install Oracle 12.2.0.1.0 EE
  • Create an EE database (not shown here)
  • Patch binaries and database to 12.2.0.1.180116
  • Upgrade TFA to 12.2.1.3.1 as downloaded from My Oracle Support DOC ID 1513912.1

These were the current versions at the time of writing.

Install Oracle 12.2.0.1.0

The first step after the O/S is provisioned is to install the Oracle software, obviously. I have noticed that TFA is part of the Oracle binaries. Towards the end of the installation process, you are prompted to execute root.sh, as normal. On my system, root.sh had the following contents:

      1 #!/bin/sh
      2 unset WAS_ROOTMACRO_CALL_MADE
      3 . /u01/app/oracle/product/12.2.0.1/dbhome_1/install/utl/rootmacro.sh "$@"
      4 . /u01/app/oracle/product/12.2.0.1/dbhome_1/install/utl/rootinstall.sh
      5 /u01/app/oracle/product/12.2.0.1/dbhome_1/suptools/tfa/release/tfa_home/install/roottfa.sh
      6 /u01/app/oracle/product/12.2.0.1/dbhome_1/install/root_schagent.sh
      7 
      8 #
      9 # Root Actions related to network
     10 #
     11 /u01/app/oracle/product/12.2.0.1/dbhome_1/network/install/sqlnet/setowner.sh
     12 
     13 #
     14 # Invoke standalone rootadd_rdbms.sh
     15 #
     16 /u01/app/oracle/product/12.2.0.1/dbhome_1/rdbms/install/rootadd_rdbms.sh
     17 
     18 /u01/app/oracle/product/12.2.0.1/dbhome_1/rdbms/install/rootadd_filemap.sh

After a few variables are set/defined by sourcing files created during the installation, roottfa.sh is called (see line 5). It allows you to configure TFA to run as a background (daemon) process. I decided to go with that option after consulting chapter 4 in the 12.2 Autonomous Health Framework documentation and reading about the advantages of using TFA as a daemon. This may or may not be the right way to run TFA for you; the documentation is really good and helps you decide. Here is the transcript of my root.sh execution:

[root@server5 ~]#  /u01/app/oracle/product/12.2.0.1/dbhome_1/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/12.2.0.1/dbhome_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Do you want to setup Oracle Trace File Analyzer (TFA) now ? yes|[no] :
yes
Installing Oracle Trace File Analyzer (TFA).
Log File: /u01/app/oracle/product/12.2.0.1/dbhome_1/install/root_server5_2018-01-22_17-21-41-005116657.log
Finished installing Oracle Trace File Analyzer (TFA)

Once that message is shown, TFA is configured and controlled via a systemd unit file:

[root@server5 ~]# systemctl cat oracle-tfa
# /etc/systemd/system/oracle-tfa.service
# Copyright (c) 2014, Oracle and/or its affiliates. All rights reserved.
#
# Oracle TFA startup
#
[Unit]
Description=Oracle Trace File Analyzer
After=syslog.target
[Service]
ExecStart=/etc/init.d/init.tfa run >/dev/null 2>&1 

The service is enabled and running.
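
If you want to double-check this on your own system, the standard systemd commands do the job (a minimal sketch; the output is what I would expect on Oracle Linux 7, your unit state may differ):

[root@server5 ~]# systemctl is-enabled oracle-tfa
enabled
[root@server5 ~]# systemctl is-active oracle-tfa
active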

After the completion of roottfa.sh, TFA resides in $ORACLE_BASE/tfa and its subdirectories. This is documented in the 12.2 Autonomous Health Framework chapter 4.2.3 and has an interesting implication: if you set your environment using oraenv, you might find that you get errors invoking tfactl, such as these on my VM. I have used a “minimum install” for my operating system and quite specifically didn’t add any additional perl modules in my kickstart file. Now, when invoking tfactl after having set my environment using oraenv, I find that there are missing perl modules in my system’s perl installation:

[oracle@server5 ~]$ . oraenv
ORACLE_SID = [NCDB] ? NCDB
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@server5 ~]$ tfactl status
Can't locate Digest/MD5.pm in @INC (@INC contains: 
/usr/local/lib64/perl5 /usr/local/share/perl5 
/usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl
/usr/lib64/perl5 /usr/share/perl5 . /u01/app/oracle/tfa/server5/tfa_home/bin 
/u01/app/oracle/tfa/server5/tfa_home/bin/common /u01/app/oracle/tfa/server5/tfa_home/bin/modules 
/u01/app/oracle/tfa/server5/tfa_home/bin/common/exceptions) 
at /u01/app/oracle/tfa/server5/tfa_home/bin/common/tfactlshare.pm line 7628.
BEGIN failed--compilation aborted at /u01/app/oracle/tfa/server5/tfa_home/bin/common/tfactlshare.pm line 7628.
Compilation failed in require at /u01/app/oracle/tfa/server5/tfa_home/bin/tfactl.pl line 223.
BEGIN failed--compilation aborted at /u01/app/oracle/tfa/server5/tfa_home/bin/tfactl.pl line 223.

The output has been changed for readability (originally I was missing Data::Dumper as well). After studying the documentation (still section 4.2.3 in the aforementioned document), it turns out to be a user mistake. As I said before, after TFA is configured using roottfa.sh as part of the root.sh script execution, it runs in daemon mode and crucially, is available from $ORACLE_BASE/tfa. I found that location being referred to in /etc/init.d/init.tfa as well. When I simply typed “tfactl” into my terminal window, I invoked a different “tfactl”. There is a lot more to be said about this, and I will try and do so in a different post.
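
A quick way to see which copy of tfactl the shell is about to pick up is to ask it directly (a sketch; the path shown is what I would expect after oraenv has put $ORACLE_HOME/bin on the PATH, yours may differ):

[oracle@server5 ~]$ type tfactl
tfactl is /u01/app/oracle/product/12.2.0.1/dbhome_1/bin/tfactl

Comparing that against $ORACLE_BASE/tfa/bin/tfactl makes it obvious which installation you are really invoking.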

NB: the same section 4.2.3 in the documentation states that even if you do not run TFA in daemon mode, you can still make use of “user mode TFA” in $ORACLE_HOME, although there are certain restrictions. I haven’t pursued that route.

Anyway, after switching to the location where TFA is actually installed ($ORACLE_BASE/tfa), all is well. It seems that running roottfa.sh creates a new “Oracle” perl:

[root@server5 ~]# find /u01/app/oracle/tfa -name perl
/u01/app/oracle/tfa/server5/tfa_home/perl
/u01/app/oracle/tfa/server5/tfa_home/perl/bin/perl

I found Digest::MD5 and Data::Dumper in /u01/app/oracle/tfa/server5/tfa_home/perl/lib/5.22.0/x86_64-linux-thread-multi.

So let’s try and get the status of the current installation from $ORACLE_BASE/tfa:

[oracle@server5 ~]$ /u01/app/oracle/tfa/bin/tfactl status

Access Denied: Only TFA Admin can run this command

Nearly there: the perl modules are no longer reported to be missing, the “Oracle” perl installation appears to be used now. But what about this error message? I read in section 4.2.4 “Securing Access to Oracle Trace File Analyzer” (still referring to the Autonomous Health Framework manual) that access to TFA is restricted. However, the RDBMS owner should have been granted access automatically.

Using the commands shown in the manual I checked permissions and it turns out that the oracle user is configured to have access to TFA.

[root@server5 ~]# /u01/app/oracle/tfa/bin/tfactl access lsusers
.---------------------------------.
|       TFA Users in server5      |
+-----------+-----------+---------+
| User Name | User Type | Status  |
+-----------+-----------+---------+
| oracle    | USER      | Allowed |
'-----------+-----------+---------'

In fact, I can query TFA’s status using the “print status” command as oracle (/u01/app/oracle/tfa/bin/tfactl print status). I compared the output of “tfactl -help” between oracle and root, and there are more options available when running as root. This might explain the above error.

What is the status now?

TFA is now set up and working, but using the base release:

[root@server5 ~]# /u01/app/oracle/tfa/bin/tfactl status

.------------------------------------------------------------------------------------------------.
| Host    | Status of TFA | PID   | Port  | Version    | Build ID             | Inventory Status |
+---------+---------------+-------+-------+------------+----------------------+------------------+
| server5 | RUNNING       | 18786 | 41482 | 12.2.1.0.0 | 12210020161122170355 | COMPLETE         |
'---------+---------------+-------+-------+------------+----------------------+------------------'

It should probably be patched to something more recent. I’ll try that in two ways: first by applying the January 2018 RU to see if the version changes. Then, since the standard deployment doesn’t come with OSWatcher, which I’m particularly interested in, I’ll download and apply TFA 12.2.1.3.1. As with all patching, I need to make sure that I have working backups which I’m comfortable restoring should anything go badly wrong.

Status after applying the January RU

A combination of opatch/datapatch later, my system is on the latest RU patchlevel:

[oracle@server5 OPatch]$ opatch lspatches
27105253;Database Release Update : 12.2.0.1.180116 (27105253)

OPatch succeeded.

However, this did not have an effect on the version of TFA in $ORACLE_BASE:

[root@server5 ~]# systemctl restart oracle-tfa
[root@server5 ~]# /u01/app/oracle/tfa/bin/tfactl status

.------------------------------------------------------------------------------------------------.
| Host    | Status of TFA | PID   | Port  | Version    | Build ID             | Inventory Status |
+---------+---------------+-------+-------+------------+----------------------+------------------+
| server5 | RUNNING       | 24042 | 37226 | 12.2.1.0.0 | 12210020161122170355 | COMPLETE         |
'---------+---------------+-------+-------+------------+----------------------+------------------'

Not quite what I expected after reading the docs: the installation of the latest RU should have updated TFA as well. But maybe I got something wrong on my end. The RU readme did not have any reference to TFA that I could find.

Yet it doesn’t matter: I wanted to have all the great support tools anyway (and they aren’t shipped with “stock TFA”), so it was time to install the latest version from MOS.

Upgrading TFA using 12.2.1.3.1 (MOS)

The patch is quite simple and well documented. If TFA is up and running in daemon mode as in my example, the patching tool will recognise that fact and patch the installation in-place. After a couple of minutes on my VM, I have a new version:

[root@server5 stage]# /u01/app/oracle/tfa/bin/tfactl status

.------------------------------------------------------------------------------------------------.
| Host    | Status of TFA | PID   | Port  | Version    | Build ID             | Inventory Status |
+---------+---------------+-------+-------+------------+----------------------+------------------+
| server5 | RUNNING       | 28105 | 39100 | 12.2.1.3.1 | 12213120171215143839 | COMPLETE         |
'---------+---------------+-------+-------+------------+----------------------+------------------'

The MOS version comes with lots of useful tools as well:

[oracle@server5 stage]$ /u01/app/oracle/tfa/bin/tfactl toolstatus

.------------------------------------------------------------------.
|                   TOOLS STATUS - HOST : server5                  |
+----------------------+--------------+--------------+-------------+
| Tool Type            | Tool         | Version      | Status      |
+----------------------+--------------+--------------+-------------+
| Development Tools    | orachk       |   12.2.0.1.3 | DEPLOYED    |
|                      | oratop       |       14.1.2 | DEPLOYED    |
+----------------------+--------------+--------------+-------------+
| Support Tools Bundle | darda        | 2.10.0.R6036 | DEPLOYED    |
|                      | oswbb        |        8.1.2 | RUNNING     |
|                      | prw          | 12.1.13.11.4 | NOT RUNNING |
+----------------------+--------------+--------------+-------------+
| TFA Utilities        | alertsummary |   12.2.1.1.0 | DEPLOYED    |
|                      | calog        |   12.2.0.1.0 | DEPLOYED    |
|                      | changes      |   12.2.1.1.0 | DEPLOYED    |
|                      | dbglevel     |   12.2.1.1.0 | DEPLOYED    |
|                      | events       |   12.2.1.1.0 | DEPLOYED    |
|                      | grep         |   12.2.1.1.0 | DEPLOYED    |
|                      | history      |   12.2.1.1.0 | DEPLOYED    |
|                      | ls           |   12.2.1.1.0 | DEPLOYED    |
|                      | managelogs   |   12.2.1.1.0 | DEPLOYED    |
|                      | menu         |   12.2.1.1.0 | DEPLOYED    |
|                      | param        |   12.2.1.1.0 | DEPLOYED    |
|                      | ps           |   12.2.1.1.0 | DEPLOYED    |
|                      | pstack       |   12.2.1.1.0 | DEPLOYED    |
|                      | summary      |   12.2.1.1.0 | DEPLOYED    |
|                      | tail         |   12.2.1.1.0 | DEPLOYED    |
|                      | triage       |   12.2.1.1.0 | DEPLOYED    |
|                      | vi           |   12.2.1.1.0 | DEPLOYED    |
'----------------------+--------------+--------------+-------------'

Note :-
  DEPLOYED    : Installed and Available - To be configured or run interactively.
  NOT RUNNING : Configured and Available - Currently turned off interactively.
  RUNNING     : Configured and Available.

[oracle@server5 stage]$ 

Since I care a lot about OSWatcher, I was very pleased to see it running.

[oracle@server5 stage]$ ps -ef | grep -i osw
oracle   28344     1  0 10:58 ?        00:00:00 /bin/sh ./OSWatcher.sh 30 48 NONE /u01/app/oracle/tfa/repository/suptools/server5/oswbb/oracle/archive
oracle   28934 28344  0 10:58 ?        00:00:00 /bin/sh ./OSWatcherFM.sh 48 /u01/app/oracle/tfa/repository/suptools/server5/oswbb/oracle/archive
oracle   30662 27252  0 11:01 pts/4    00:00:00 grep --color=auto -i osw
[oracle@server5 stage]$ 

Kindly refer to the documentation for more information about TFA. It’s quite a collection of tools, and it helps you in so many ways…

Index Skip Scan: Potential Use Case or Maybe Not ? (Shine On You Crazy Diamond)

While answering a recent question on a LinkedIn forum, it got me thinking whether there’s a potential use case for using an INDEX SKIP SCAN I hadn’t previously considered. I’ve discussed Index Skip Scans previously (as I did here), a feature introduced around Oracle9i that allows an index to be considered by the CBO even […]

Histogram Threat

Have you ever seen a result like this:


SQL> select sql_id, count(*) from V$sql group by sql_id having count(*) > 1000;

SQL_ID          COUNT(*)
------------- ----------
1dbzmt8gpg8x7      30516

A client of mine, who had recently upgraded to 12.2.0.1 RAC and was using DRCP (database resident connection pooling) for a PHP application, was seeing exactly this type of behaviour for a small number of very simple SQL statements and wanted to find out what was going on, because they were also seeing an undesirable level of contention in the library cache when the system load increased.

In this note I just want to highlight a particular detail of their problem – with an example – showing how easily histograms can introduce problems if you don’t keep an eye out for the dangers.

One of their queries really was as simple as this:

select count(*), sum(skew2) from t1 where skew = :b1;

And I’m going to use this query to model the problem. All I have to do is arrange for a data set that results in a hybrid (or height-balanced) histogram being created on the skew column, and then run the query lots of times with different input bind values. In the case of the client there were around 18,000 possible values for the column, and the number of rows per value varied from 1 to about 20,000 – but whatever the number of rows selected the optimum execution plan was always going to be an indexed access.


rem
rem     Script:         acs_child_cursors.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jan 2018
rem

create table t1 (
        id, skew, skew2, padding
)
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 3e3
)
select
        rownum  id,
        g1.id   id1,
        g2.id   id2,
        rpad('x',100)
from
        generator       g1,
        generator       g2
where
        g2.id <= g1.id     -- > comment to avoid WordPress format issue
order by
        g2.id, g1.id
;

alter table t1 modify skew not null;
alter table t1 modify skew2 not null;

create index t1_skew on t1(skew);

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                method_opt       => 'for all columns size 1 for columns skew size 254'
--              method_opt       => 'for all columns size 1'
        );
end;
/

variable b1 number
exec :b1 := 0;

set termout off
@start_1000
@start_1000
@start_1000
set termout on

set linesize 120
set trimspool on

column sql_text format a55

select
        child_number, plan_hash_value, executions,
        is_bind_sensitive,
        is_bind_aware,
        is_shareable,
        sql_text
from
        v$sql
where   sql_id = 'b82my582cnvut'
;

The data set contains 3,000 distinct values for skew and the way I’ve generated the rows means that the value N will appear N times – so there’s one row with the value 1 and 3,000 rows with the value 3,000, and so on, for a total of 4,501,500 rows. If you want to run the test, the code is likely to take a couple of minutes to complete and requires roughly 700 MB of disk space.

The mechanism of the script start_1000 is something I published a few years ago, and essentially it executes a script called start_1.sql 1,000 times which, for this test, contains the following two lines:


exec :b1 := :b1 + 1

select count(*), sum(skew2) from t1 where skew = :b1;
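
If you don’t have the published start_1000.sql, a stand-in that simply calls start_1.sql 1,000 times can be generated with a few lines of SQL*Plus (a sketch, not the published script):

set heading off feedback off pagesize 0 termout off
spool start_1000.sql
select '@start_1' from dual connect by level <= 1000;
spool off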

The net effect of the 3 calls to start_1000.sql is that my simple SQL statement is called once in turn for each value of skew from 1 to 3,000. The SQL_ID of the statement is ‘b82my582cnvut’ which I’ve used to query v$sql when the run is complete, with the following result:


CHILD_NUMBER PLAN_HASH_VALUE EXECUTIONS I I I SQL_TEXT
------------ --------------- ---------- - - - -------------------------------------------------------
           0      1041516234        498 Y N N select count(*), sum(skew2) from t1 where skew = :b1
           1      1041516234         25 Y Y N select count(*), sum(skew2) from t1 where skew = :b1
           2      1041516234        104 Y Y N select count(*), sum(skew2) from t1 where skew = :b1
           3      1041516234        308 Y Y N select count(*), sum(skew2) from t1 where skew = :b1
           4      1041516234        429 Y Y N select count(*), sum(skew2) from t1 where skew = :b1
           5      1041516234        640 Y Y N select count(*), sum(skew2) from t1 where skew = :b1
           6      1041516234         31 Y Y N select count(*), sum(skew2) from t1 where skew = :b1
           7      1041516234        305 Y Y N select count(*), sum(skew2) from t1 where skew = :b1
           8      1041516234        660 Y Y Y select count(*), sum(skew2) from t1 where skew = :b1

9 rows selected.

I’ve got 9 child cursors, all with the same execution plan, all except the last labelled as not shareable (you may find that you don’t get exactly the same result; on repeated tests I got between 5 and 9 cursors). Given enough time all these cursors except the last (shareable) one would be aged out of the library cache. In the case of the client, who had a shared pool that was probably quite a bit larger than needed, the number of non-shareable cursors could get very large and they were hanging around for ages. Typically most of the cursors would report an execution count less than 30, with many showing just one or two executions and a handful showing execution counts in the region of several hundred (and that handful were the ones that were still marked as shareable).

After eliminating the critical histogram (using dbms_stats.delete_column_stats()) and eliminating the redundant child cursors (using dbms_shared_pool.purge()) the massive proliferation stopped happening and the performance threat disappeared. The only issue then was to change the table preferences for stats collection on this table to add the clause “for columns size 1 skew” so that the histogram would not be recreated on the next gather.
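
For reference, a minimal sketch of those two stats actions against the demo table t1 above (adapt owner, table and method_opt to your own case; the client added the “skew” clause to their existing preference, here I simply use the commented-out method_opt from the script):

begin
        -- drop the column statistics (which removes the histogram) on the skew column
        dbms_stats.delete_column_stats(
                ownname => user,
                tabname => 'T1',
                colname => 'SKEW'
        );
        -- stop the next gather from recreating the histogram
        dbms_stats.set_table_prefs(
                ownname => user,
                tabname => 'T1',
                pname   => 'method_opt',
                pvalue  => 'for all columns size 1'
        );
end;
/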

Further Observations.

I suspect that part of the client’s problem – something that exaggerated the count rather than causing it – could be attributed to using DRCP (database resident connection pooling), which probably still has some quirky bits of behaviour. It was also true that the client’s connection pool was probably much bigger than it needed to be, so if there were any funny little bits of emergent behaviour at scale the client would probably have hit them.

The problem of escalating child cursors is one that Oracle has been working on for quite a long time, and there’s a (hidden) parameter that was introduced late in 11gR2 (though I think that the 11g mechanism first appeared through a fix control) to allow Oracle to mark a parent cursor obsolete if it acquired too many child cursors.  There’s a note on MOS that the client had read on this topic: Doc ID 2298504.1: “Cursor Mutex X Wait Events: After Upgrading To 12.2”, which looked as if it was describing their symptoms, so they had set this parameter (_cursor_obsolete_threshold) from 8192 (the 12.2 default) down to 1024 (the default for 12.1 and earlier versions). This had been of some help with the library cache problem.  When the sql_id at the top of this article reported 30,516 child cursors that would be 29 “obsolete” parent cursors with 1,024 child cursors each and one “live” parent cursor with 820 child cursors.
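
For completeness, changing such an underscore parameter is a single command, but it should only be done on the advice of Oracle Support (a sketch using the values discussed above):

alter system set "_cursor_obsolete_threshold" = 1024 scope = spfile sid = '*';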

You can appreciate that if Oracle has to find a parent cursor and pin it while walking a chain of 30,516 child cursors, that’s likely to take a lot more time than walking a chain of 30 parent cursors (while holding a library cache mutex, perhaps) to find the one non-obsolete parent, then acquiring the parent mutex to walk a chain of 820 child cursors.

I don’t know the exact details of the mechanisms involved with manipulating the various chains – but there are likely to be times when one process needs exclusive mutexes/latches to change a chain while other processes are holding shared mutexes/latches to search the chain. When you’ve got 30,000 child cursors in total the important question becomes: “where’s the trade-off between making the child chains shorter and making the parent search longer?” (I’d expect – or hope – that the Oracle developers had actually found a way to code the obsolescence so that the new parent was first in the search and the rest were never visited, of course.)

One of the suggestions I made to try to alleviate the problem – which I had assumed was due in part to the degree of concurrent execution of the statement – was to mark the cursor as “hot”. This resulted in 36 different sql_ids for the same statement (the client machine had 72 CPUs). This had some effect but ultimately meant that there were 36 chains of parents that would eventually end up with lots of child cursors – the only difference was the rate at which the total cursor count was growing (a lot slower), balanced against the threat that they might all manage to get to 30,000+ child cursors! Instead, as a quick and dirty workaround, I supplied the client with a script that could be run at quieter moments during the day to call dbms_shared_pool.purge() for the sql_id to flush all its cursors from the library cache.
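
The purge call itself is straightforward; a minimal sketch of what such a script could do, using the sql_id of my test statement as the example (the name passed to the purge is the ‘address,hash_value’ pair, the flag ‘C’ identifies a cursor, and you need execute privilege on dbms_shared_pool):

declare
        v_name  varchar2(100);
begin
        -- build the 'address,hash_value' identifier for the statement
        select  address || ',' || hash_value
        into    v_name
        from    v$sqlarea
        where   sql_id = 'b82my582cnvut';

        -- purge all cursors for that parent from the library cache
        dbms_shared_pool.purge(v_name, 'C');
end;
/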

One final oddity – which isn’t from the client site. When I changed my test above to avoid generating the histogram (using the commented out method_opt “for all columns size 1”) I found that I still got two child cursors; the first ended up marked as bind-aware but insensitive and non-shareable, the second appeared (time and time again) when my bind value got to 499 and was marked bind-sensitive, bind-aware and shareable.  I still have no idea why this happened.

Footnote:

When repeating the test I started with a “flush shared_pool” – but I’ve removed this line from the code above in case anyone ran it without considering the possible side effects. You do need to clear previous copies of the key statement from the library cache, though, if you want to avoid confusing the issue on repeated runs of the test.

 

Result Cache: when *not* to use it

I recently encountered a case where the result cache was used incorrectly, leading to high contention when the application hit a peak of load. It was no surprise to see that the function was called with an ‘ID’ as argument, which may have thousands of values in this system. I told the software vendor that the result cache must be used only for functions that are called frequently with the same arguments, not for random values, even if each value has 2 or 3 identical calls. And, to back this up, I looked at the Oracle documentation to link to the part which explains when the result cache can be used and when it should be avoided.

But I found nothing relevant. This is another(*) case where the Oracle Documentation is completely useless. Without explaining how a feature works, it completely fails to get that feature used: most people will not take the risk of using it, and a few will use it in the wrong place before blacklisting the feature for good.

(*) By another case, I’m thinking about Kamil Stawiarski presentation about Pragma UDF and the lack of useful documentation about it.

Oracle documentation

So this is what I found in the Database Performance Tuning Guide about the Benefits of Using the Server Result Cache:

  1. The benefits of using the server result cache depend on the application
  2. OLAP applications can benefit significantly from its use.
  3. Good candidates for caching are queries that access a high number of rows but return a small number, such as those in a data warehouse.

So, this is vague (‘depends’, ‘can benefit’, ‘good candidates’) and doesn’t help you decide when it can be used.
The ‘access a high number of rows but return a small number’ is an indication of why cache hits can be beneficial. However, there is no mention of the most important things, which are:

  • The cached result is invalidated by any DML on the tables the result relies on.
  • A cache miss, when the result has been invalidated, is expensive.
  • A cache miss, when the result is not yet in the result cache, is expensive.
  • ‘Expensive’ here means a scalability issue: it is not detected in unit tests, but it becomes big contention when the load increases (a quick way to observe these counters is shown in the query sketch just below).
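
If you want to watch hits, misses and invalidations on a running system, the result cache statistics view gives a quick overview (a sketch; the statistic names are the ones I expect to find in v$result_cache_statistics):

SQL> select name, value from v$result_cache_statistics
  2  where name in ('Create Count Success','Find Count','Invalidation Count');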

Real things to know

The first thing to know is that the Result Cache memory is protected by a latch:

SQL> select addr,name,gets,misses,sleeps,spin_gets,wait_time from v$latch where name like 'Result Cache%';
 
ADDR             NAME                            GETS     MISSES     SLEEPS  SPIN_GETS  WAIT_TIME
---------------- ------------------------- ---------- ---------- ---------- ---------- ----------
00000000600477D0 Result Cache: RC Latch             2          0          0          0          0
0000000060047870 Result Cache: SO Latch             0          0          0          0          0
0000000060047910 Result Cache: MB Latch             0          0          0          0          0

This latch has no children:

SQL> select * from v$latch_children where name like '%Result Cache%';
 
no rows selected

Only one latch to protect the whole result cache: concurrent sessions – even for different functions – have to serialize their access on the same latch.

This latch is acquired in exclusive mode when the session has to write to the result cache (cache miss, invalidation, …) or – since 11gR2 – in shared mode when reading only. This has been explained by Alex Fatkulin http://afatkulin.blogspot.ch/2012/05/result-cache-latch-in-11gr2-shared-mode.html.

This means that, whatever the Oracle Documentation says, the benefit of the result cache comes only on a cache hit: when the result of the function is already there and has not been invalidated. If you call the same function frequently, always with the same parameter, and with no changes to the related tables, then you are in the good case.

But if there has been a modification to one of the tables, even to rows that have nothing to do with the result, then you will have an overhead: an exclusive latch get. And if you call the function with new values for the arguments, that’s also a cache miss which has to get this exclusive latch. And if you have multiple sessions experiencing a cache miss, then they will spin on CPU to get the exclusive latch. This can be disastrous with a large number of sessions. I have seen this kind of contention for hours with connection pools set to 100 sessions when the function is called frequently with different values.

To show it, I create a demo table (just to have a dependency) and a result_cache function:

SQL> create table DEMO as select rownum n from xmltable('1 to 1000');
Table created.
 
SQL> create or replace function F(n number) return number result_cache as begin for i in (select * from DEMO where DEMO.n=F.n) loop return i.n; end loop; end;
2 /
Function created.

I have just restarted the instance and my latch statistics are reset:

SQL> select addr,name,gets,misses,sleeps,spin_gets,wait_time from v$latch where name like 'Result Cache%';
 
ADDR             NAME                            GETS     MISSES     SLEEPS  SPIN_GETS  WAIT_TIME
---------------- ------------------------- ---------- ---------- ---------- ---------- ----------
00000000600477D0 Result Cache: RC Latch             2          0          0          0          0
0000000060047870 Result Cache: SO Latch             0          0          0          0          0
0000000060047910 Result Cache: MB Latch             0          0          0          0          0

Result Cache Hit

This will call the function always with the same argument, and no change in the table it relies on:
SQL> declare n number; begin for i in 1..1e3 loop n:=n+f(1); end loop; end;
2 /
PL/SQL procedure successfully completed.

So, the first call is a cache miss and the 999 next calls are cache hits. This is the perfect case for Result Cache.

SQL> select addr,name,gets,misses,sleeps,spin_gets,wait_time from v$latch where name like 'Result Cache%';
 
ADDR             NAME                            GETS     MISSES     SLEEPS  SPIN_GETS  WAIT_TIME
---------------- ------------------------- ---------- ---------- ---------- ---------- ----------
00000000600477D0 Result Cache: RC Latch          1009          0          0          0          0
0000000060047870 Result Cache: SO Latch             1          0          0          0          0
0000000060047910 Result Cache: MB Latch             0          0          0          0          0

So, that’s about 1000 latch gets. With cache hits you get the latch once per execution, and this is a shared latch, so no contention here.
Do you want to check that it is a shared latch? Just set a breakpoint with gdb on the ksl_get_shared_latch function (up to 12.1, because 12.2 uses ksl_get_shared_latch_int) and print the arguments (as explained by Stefan Koehler and Frits Hoogland):

As my RC latch is at address 00000000600477D0, I set a breakpoint on ksl_get_shared_latch where the first argument is 0x600477d0 and display the other arguments:

break ksl_get_shared_latch
condition 1 $rdi == 0x600477d0
commands
silent
printf "ksl_get_shared_latch laddr:%x, willing:%d, where:%d, why:%d, mode:%d\n", $rdi, $rsi, $rdx, $rcx, $r8
c
end

Then one call with cache hit displays:

ksl_get_shared_latch laddr:600477d0, willing:1, where:1, why:5358, mode:8

Mode 8 is shared: many concurrent sessions can do the same without waiting. Shared is scalable: cache hits are scalable.

Cache miss – result not in cache

Here each call will have a different value for the argument, so that they are all cache misses (except the first one):

SQL> declare n number; begin for i in 1..1e3 loop n:=n+f(i); end loop; end;
2 /
PL/SQL procedure successfully completed.

Now the ‘RC latch’ statistics have increased further:

SQL> select addr,name,gets,misses,sleeps,spin_gets,wait_time from v$latch where name like 'Result Cache%';
 
ADDR             NAME                            GETS     MISSES     SLEEPS  SPIN_GETS  WAIT_TIME
---------------- ------------------------- ---------- ---------- ---------- ---------- ----------
00000000600477D0 Result Cache: RC Latch          6005          0          0          0          0
0000000060047870 Result Cache: SO Latch             1          0          0          0          0
0000000060047910 Result Cache: MB Latch             0          0          0          0          0

This is about 5000 additional latch gets, which means 5 per execution. And, because it writes, you can expect them to be exclusive.

Here is my gdb script output when I call the function with a value that is not already in cache:

ksl_get_shared_latch laddr:600477d0, willing:1, where:1, why:5358, mode:8
ksl_get_shared_latch laddr:600477d0, willing:1, where:1, why:5347, mode:16
ksl_get_shared_latch laddr:600477d0, willing:1, where:1, why:5358, mode:16
ksl_get_shared_latch laddr:600477d0, willing:1, where:1, why:5374, mode:16

Mode 16 is exclusive. And we have 3 of them in addition to the shared one. You can imagine what happens when several sessions are running this: spin and wait, all sessions on the same resource.

Cache miss – result in cache but invalid

I run the same again, where all values are in cache now:

SQL> declare n number; begin for i in 1..1e3 loop n:=n+f(i); end loop; end;
2 /
PL/SQL procedure successfully completed.

So this is only 1000 additional gets:

SQL> select addr,name,gets,misses,sleeps,spin_gets,wait_time from v$latch where name like 'Result Cache%';
 
ADDR             NAME                            GETS     MISSES     SLEEPS  SPIN_GETS  WAIT_TIME
---------------- ------------------------- ---------- ---------- ---------- ---------- ----------
00000000600477D0 Result Cache: RC Latch          7005          0          0          0          0
0000000060047870 Result Cache: SO Latch             1          0          0          0          0
0000000060047910 Result Cache: MB Latch             0          0          0          0          0

The function depends on DEMO table, and I do some modifications on it:

SQL> insert into DEMO values (0)
1 row created.
SQL> commit;
Commit complete.

This has invalidated all previous results. A new run will be all cache misses:

SQL> declare n number; begin for i in 1..1e3 loop n:=n+f(i); end loop; end;
2 /
PL/SQL procedure successfully completed.

And this is 5000 additional gets:

SQL> select addr,name,gets,misses,sleeps,spin_gets,wait_time from v$latch where name like 'Result Cache%';
 
ADDR             NAME                            GETS     MISSES     SLEEPS  SPIN_GETS  WAIT_TIME
---------------- ------------------------- ---------- ---------- ---------- ---------- ----------
00000000600477D0 Result Cache: RC Latch         12007          0          0          0          0
0000000060047870 Result Cache: SO Latch             1          0          0          0          0
0000000060047910 Result Cache: MB Latch             0          0          0          0          0

So what?

The important thing to know is that each cache miss requires exclusive access to the Result Cache, multiple times. Those must be avoided. The Result Cache is good for a static set of results. It is not a short-term cache to work around an application design where the function is called two or three times with the same values. This is, unfortunately, not explained in the Oracle Documentation. But it becomes obvious when we look at the implementation, or when we load test it with multiple sessions. The consequence can be this kind of high contention lasting minutes or hours:

Top 5 Timed Events                                                      Avg %Total
~~~~~~~~~~~~~~~~~~                                                     wait   Call
Event                                            Waits    Time (s)    (ms)   Time
----------------------------------------- ------------ ----------- ------ ------
latch free                                     858,094   1,598,387    1863   78.8
enq: RC - Result Cache: Contention             192,855     259,563    1346   12.8

Without either knowledge of the implementation or relevant load tests, the risk is that a developer relies on his good results from unit testing and implements the Result Cache in every function. The consequence will be seen too late, in production, at peak load. If this happens to you, you can disable the result cache (DBMS_RESULT_CACHE.BYPASS(TRUE);) but then you risk performance degradation in the ‘good’ cases. Or you can recompile the procedures with RESULT_CACHE removed, but then you may introduce new contention on the library cache.
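
For reference, the bypass and a quick check of the resulting state are one-liners (a sketch; BYPASS(TRUE) stays in effect until you revert it with BYPASS(FALSE)):

SQL> exec dbms_result_cache.bypass(true);
SQL> select dbms_result_cache.status from dual;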

 

The article Result Cache: when *not* to use it first appeared on Blog dbi services.

A look into Oracle redo, part 1: redo allocation latches

This will be a series of posts about Oracle database redo handling. The database in use is Oracle version 12.2.0.1, with PSU 170814 applied. The operating system version is Oracle Linux Server release 7.4. In order to look into the internals of the Oracle database, I use multiple tools; very simple ones like the X$ views and oradebug, but also advanced ones, specifically the Intel Pin tools (https://software.intel.com/en-us/articles/pin-a-dynamic-binary-instrumentation-tool). One of these tools is ‘debugtrace’, which produces output that is pretty usable on its own (an indented list of function calls and returns), from which I essentially filter out some data; another one is ‘pinatrace’, which does not produce directly usable output, because it provides instruction pointer and memory addresses. However, the information it provides is extremely useful (and detailed!) once you translate the addresses to Oracle function names and memory region names. I have written a tool to do that: https://gitlab.com/FritsHoogland/pinatrace_annotate.git

The blog posts are meant for investigating Oracle database internals: this is a microscopic view into how the database works. I do believe that by understanding the mechanics you can assess and resolve issues quickly and easily, because you have a fundamental understanding of how the database works.

First of all, to immediately show the value of looking into the internals, a clear documentation bug: this entry from the online documentation of 12.2: https://docs.oracle.com/en/database/oracle/oracle-database/12.2/cncpt/memory-architecture.html#GUID-C2AD1BF6-A5AE-42E9-9677-0AA08126864B (not sure how long this link will be valid, Oracle is notorious for changing it) states that the log buffer is a single buffer called the log buffer. In fact, the log buffer is not a single buffer, but a buffer that contains at least two public redo strands, which has been the case since at least Oracle 10g. These strands are externalised in x$kcrfstrand:

SQL> select indx,strand_size_kcrfa, first_buf_kcrfa, last_buf_kcrfa from x$kcrfstrand where last_buf_kcrfa != '00';

      INDX STRAND_SIZE_KCRFA FIRST_BUF_KCRFA  LAST_BUF_KCRFA
---------- ----------------- ---------------- ----------------
         0          67108864 00000000D8000000 00000000DBFFFE00
         1          67108864 00000000DC000000 00000000DFFFFE00

The number of public redo buffers is dependent on the undocumented parameter “_log_parallelism_max”. A quick question out on twitter revealed that systems with a high count of processors could have a higher public redo strand count.
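
For reference, the current value of this underscore parameter can be checked with the well-known x$ksppi/x$ksppcv join, run as SYS (a sketch; underscore parameters should not be changed without guidance from Oracle Support):

SQL> select x.ksppinm name, y.ksppstvl value
  2  from   x$ksppi x, x$ksppcv y
  3  where  x.indx = y.indx
  4  and    x.ksppinm = '_log_parallelism_max';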

The public strands are allocated in the ‘Redo Buffers’ memory allocation:

SQL> select "AREA NAME", "START ADDR", to_char((to_number(rawtohex("START ADDR"),'xxxxxxxxxxxxxxxx')+"SEGMENT SIZE"),'XXXXXXXXXXXXXXXX'),pagesize from x$ksmssinfo; 

AREA NAME                        START ADDR       TO_CHAR((TO_NUMBE   PAGESIZE
-------------------------------- ---------------- ----------------- ----------
Variable Size                    0000000061000000          D8000000    2097152
Redo Buffers                     00000000D8000000          E07C6000    2097152
Fixed Size                       0000000060000000          60A00000    2097152
skgm overhead                    00000000E1000000          E1003000       4096

Each public strand is protected by a redo allocation latch:

SQL> select addr, child#, name, gets, misses, immediate_gets, immediate_misses from v$latch_children where name = 'redo allocation';

ADDR                 CHILD# NAME                       GETS     MISSES IMMEDIATE_GETS IMMEDIATE_MISSES
---------------- ---------- -------------------- ---------- ---------- -------------- ----------------
0000000076FE17F0         20 redo allocation               5          0              0                0
0000000076FE1750         19 redo allocation               5          0              0                0
0000000076FE16B0         18 redo allocation               5          0              0                0
0000000076FE1610         17 redo allocation               5          0              0                0
0000000076FE1570         16 redo allocation               5          0              0                0
0000000076FE14D0         15 redo allocation               5          0              0                0
0000000076FE1430         14 redo allocation               5          0              0                0
0000000076FE1390         13 redo allocation               5          0              0                0
0000000076FE12F0         12 redo allocation               5          0              0                0
0000000076FE1250         11 redo allocation               5          0              0                0
0000000076FE11B0         10 redo allocation               5          0              0                0

ADDR                 CHILD# NAME                       GETS     MISSES IMMEDIATE_GETS IMMEDIATE_MISSES
---------------- ---------- -------------------- ---------- ---------- -------------- ----------------
0000000076FE1110          9 redo allocation               5          0              0                0
0000000076FE1070          8 redo allocation               5          0              0                0
0000000076FE0FD0          7 redo allocation               5          0              0                0
0000000076FE0F30          6 redo allocation               5          0              0                0
0000000076FE0E90          5 redo allocation               5          0              0                0
0000000076FE0DF0          4 redo allocation               5          0              0                0
0000000076FE0D50          3 redo allocation               5          0              0                0
0000000076FE0CB0          2 redo allocation            6168          1           1708                2
0000000076FE0C10          1 redo allocation            7267         20          34804               15

Redo allocation latch child# 1 is the latch protecting allocations in public redo buffer 0, child# 2 is the latch protecting allocations in public redo buffer 1. Two questions that you would probably ask yourself:
1: why are there 20 redo allocation latches whilst there are only two public redo buffers?
2: how can you tell child# 1 is bound to public redo buffer 0, and child# 2 is bound to public redo buffer 1?

Answers to these questions:
1: the 18 other redo allocation latches are because of a mechanism called ‘private redo strands’, see https://blog.tanelpoder.com/files/Oracle_Performance_Scalability_10g_11g.pdf, https://fritshoogland.wordpress.com/2016/11/15/redo-a-blogpost/
2: in order to be sure about what that redo allocation latch protects, you need to look at the actual function calls.

For this, I am using a tool from intel, called ‘pin tools’, which is a suite of tools, of which one is called ‘debugtrace’: https://software.intel.com/en-us/articles/pin-a-dynamic-binary-instrumentation-tool

First I am going to look into how a latch is chosen.

Attach a debugtrace to a normal foregrond session that executes an insert and a commit, and filter the output to make it more readable using a custom filter: https://gitlab.com/snippets/1686145

Now search for the latch address (in lowercase) 0x76fe0c10 (the address of the redo allocation latch, see above query from v$latch_children):

 | | | | | | | | | | | | | | | > kcrfw_redo_gen_ext(0x3, 0x7ffd52f513e0, ...)
 | | | | | | | | | | | | | | | | > kcobrh(0x7ffd52f513e0, 0x3, ...)
 | | | | | | | | | | | | | | | | < kcobrh+0x00000000014b returns: 0x1b4
 | | | | | | | | | | | | | | | | > kcb_get_redo_version(0x3, 0x7ffd52f513a0, ...)
 | | | | | | | | | | | | | | | | < kcb_get_redo_version+0x0000000000c4 returns: 0x1
 | | | | | | | | | | | | | | | | > kcmglscn8(0x7ffd52f508d0, 0, ...)
 | | | | | | | | | | | | | | | | | > kcscu8(0x600113b8, 0x7ffd52f508d0, ...)
 | | | | | | | | | | | | | | | | | < kcscu8+0x000000000047 returns: 0x1
 | | | | | | | | | | | | | | | | < kcmglscn8+0x000000000054 returns: 0x756239
 | | | | | | | | | | | | | | | | > kscn_to_ub8_impl(0x7ffd52f50838, 0x7ffd52f508d0, ...)
 | | | | | | | | | | | | | | | | < kscn_to_ub8_impl+0x00000000003e returns: 0
 | | | | | | | | | | | | | | | | > kslgetl(0x76fdebe0, 0, ...)
 | | | | | | | | | | | | | | | | < kslgetl+0x00000000012f returns: 0x1
 | | | | | | | | | | | | | | | | > kslgetl(0x76fe0c10, 0, ...)                             <<<<<<<<
 | | | | | | | | | | | | | | | | < kslgetl+0x00000000012f returns: 0x1
 | | | | | | | | | | | | | | | | > ksvgcls(0x76fe0c10, 0, ...)
 | | | | | | | | | | | | | | | | < ksvgcls+0x000000000021 returns: 0
 | | | | | | | | | | | | | | | | > kcscu8(0x60016260, 0x7ffd52f508d8, ...)
 | | | | | | | | | | | | | | | | < kcscu8+0x000000000047 returns: 0x1
 | | | | | | | | | | | | | | | | > ub8_to_kscn_impl(0x756239, 0xd8eab838, ...)
 | | | | | | | | | | | | | | | | < ub8_to_kscn_impl+0x000000000031 returns: 0
 | | | | | | | | | | | | | | | | > ub8_to_kscn_impl(0x756239, 0xd8eab848, ...)
 | | | | | | | | | | | | | | | | < ub8_to_kscn_impl+0x000000000031 returns: 0
 | | | | | | | | | | | | | | | | > ub8_to_kscn_impl(0x756239, 0x7ffd52f50848, ...)
 | | | | | | | | | | | | | | | | < ub8_to_kscn_impl+0x000000000031 returns: 0
 | | | | | | | | | | | | | | | | > kscnbur3_int_impl(0x756239, 0xd8eab824, ...)
 | | | | | | | | | | | | | | | | < kscnbur3_int_impl+0x000000000030 returns: 0x756239
 | | | | | | | | | | | | | | | | > ub8_to_kscn_impl(0x756231, 0xd8eab840, ...)
 | | | | | | | | | | | | | | | | < ub8_to_kscn_impl+0x000000000031 returns: 0
 | | | | | | | | | | | | | | | | > kslfre(0x76fe0c10, 0xd8eab840, ...)
 | | | | | | | | | | | | | | | | < kslfre+0x0000000001e2 returns: 0

If we look at the work of Andrey Nikolaev: https://andreynikolaev.files.wordpress.com/2016/03/latches_and_mutexes_in_oracle_12c.pdf
We know that the first argument of kslgetl is the latch address, and the second argument is 0 for no-wait or 1 for willing-to-wait mode. What becomes clear is that a foreground process gets the redo allocation latch in no-wait mode. Of course this is widely known and documented. The return code of 0x1 means the latch has been taken successfully. The latch that is fetched before the redo allocation latch is, of course, the redo copy latch:

SQL> @flba 76fdebe0
ADDR             NAME
---------------- ------------------------
0000000076FDEBE0 redo copy
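
If you don’t have a “find latch by address” script like flba.sql of your own, an equivalent lookup could be as simple as this sketch (the substitution variable &1 takes the hex address):

select addr, name from v$latch_parent   where rawtohex(addr) like '%'||upper('&1')
union all
select addr, name from v$latch_children where rawtohex(addr) like '%'||upper('&1');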

If we look again at the figures of the two redo allocation latches protecting the public redo strands:

ADDR                 CHILD# NAME                       GETS     MISSES IMMEDIATE_GETS IMMEDIATE_MISSES
---------------- ---------- -------------------- ---------- ---------- -------------- ----------------
0000000076FE0CB0          2 redo allocation            6168          1           1708                2
0000000076FE0C10          1 redo allocation            7267         20          34804               15

It becomes apparent that child# 1 is used way more than child# 2, which is visible in the column ‘immediate_gets’. How does that work? Let’s take the first redo allocation child latch, and issue an insert again with debugtrace on it.

session 1:

SQL> oradebug setmypid
Statement processed.
SQL> oradebug call kslgetl 0x76FE0C10 0 0 2442
Function returned 1

session 2; debugtrace output:

 | | | | | | | | | | | | | | | | > kslgetl(0x76fdebe0, 0, ...)
 | | | | | | | | | | | | | | | | < kslgetl+0x00000000012f returns: 0x1
 | | | | | | | | | | | | | | | | > kslgetl(0x76fe0c10, 0, ...)
 | | | | | | | | | | | | | | | | < kslgetl+0x00000000012f returns: 0
 | | | | | | | | | | | | | | | | > kslgetl(0x76fe0cb0, 0, ...)
 | | | | | | | | | | | | | | | | < kslgetl+0x00000000012f returns: 0x1

First child# 1 is tried, but when the latch is already taken, indicated by returncode 0, it turns to the second latch.

What will happen when both latches are taken?

session 1:

SQL> oradebug call kslgetl 0x76FE0C10 0 0 2442
Function returned 1
SQL> oradebug call kslgetl 0x76FE0CB0 0 0 2442
Function returned 1

This is what it looks like on in debugtrace output:

 | | | | | | | | | | | | | | | | > kslgetl(0x76fe0cb0, 0, ...)
 | | | | | | | | | | | | | | | | < kslgetl+0x00000000012f returns: 0
 | | | | | | | | | | | | | | | | > kslgetl(0x76fe0c10, 0, ...)
 | | | | | | | | | | | | | | | | < kslgetl+0x00000000012f returns: 0
 | | | | | | | | | | | | | | | | > kslgetl(0x76fe0cb0, 0x1, ...)
 | | | | | | | | | | | | | | | | | > kslges(0x76fe0cb0, 0x7a63c410, ...)
 | | | | | | | | | | | | | | | | | | > ksuSdiInProgress(0x2, 0x7a63c410, ...)
 | | | | | | | | | | | | | | | | | | < ksuSdiInProgress+0x000000000035 returns: 0
 | | | | | | | | | | | | | | | | | | > kskthbwt(0, 0, ...)
 | | | | | | | | | | | | | | | | | | < kskthbwt+0x0000000000a1 returns: 0x10
 | | | | | | | | | | | | | | | | | | > kslwlmod(0x7ffc45d6ab00, 0x7a63c410, ...)
 | | | | | | | | | | | | | | | | | | | > kstmgetsectick(0x7ffc45d6ab00, 0x7a63c410, ...)
 | | | | | | | | | | | | | | | | | | | < kstmgetsectick+0x00000000003a returns: 0x5a4bd4b9
 | | | | | | | | | | | | | | | | | | < kslwlmod+0x0000000000cb returns: 0xffffffff
 | | | | | | | | | | | | | | | | | | > kslwtbctx(0x7ffc45d6a988, 0x7a63c588, ...)

What this shows is the foreground session first trying to get the latch of the first redo allocation child in immediate mode, which failed, then the latch of the second redo allocation child in immediate mode, which failed too, after which it had no other option than to get a redo allocation latch in willing-to-wait mode. For that it performed a redo allocation latch get for the first child latch again, with the second argument set to 1, indicating willing-to-wait mode. Because the latch was still taken, kslgetl called kslges in order to spin for it (which means a series of retries of getting the latch); on Linux the spinning happens inside kslges, so it is not visible in a debugtrace. After it spun 20,000 times, the process registers a wait (kslwtbctx), and then goes to sleep on a semaphore in the wait:

 | | | | | | | | | | | | | | | | | | < kslwtbctx+0x00000000050b returns: 0x77c000a4
 | | | | | | | | | | | | | | | | | | > skgpwwait(0x7ffc45d6abf8, 0x7f910bf92ca0, ...)
 | | | | | | | | | | | | | | | | | | | > kslwait_conv_wait_time(0xffffffff, 0x7f910bf92ca0, ...)
 | | | | | | | | | | | | | | | | | | | < kslwait_conv_wait_time+0x000000000027 returns: 0xffffffff
 | | | | | | | | | | | | | | | | | | | > sskgpwwait(0x7ffc45d6abf8, 0x7f910bf92ca0, ...)
 | | | | | | | | | | | | | | | | | | | | > semop@plt(0x98000, 0x7ffc45d6a888, ...)

This is the modern latch implementation where a session goes to sleep on a semaphore after spinning for a latch in willing-to-wait mode. In order to learn more about that, look at the work of Andrey Nikolaev. The semaphore implementation means that as soon as the latch becomes available, the session is posted and can do its allocation in the public redo strand.

However, this does not tie the child latch to a public strand, it only shows how a session bunny-hops public redo allocation latches in nowait mode until it hits a blind wall (if all have been taken) after which it will start waiting in willing to wait mode, waiting to be posted.

In order to see the connection between the redo allocation latches and the actual public strand memory areas, we need to look one level deeper, which means looking at (main) memory access of a process. This can be done using the pin tools pinatrace tool, and adding annotations to the pinatrace in order to understand what is going on. for this, I created a tool: https://gitlab.com/FritsHoogland/pinatrace_annotate

First, create a pinatrace of a normal insert on a test database. A word of warning: if you think a function call trace generates a lot of detailed data, the pinatrace (alias memory access trace) generates *WAY* more. Even modest things like ‘select count(*) from dual’ generate heaps of data.

Back to the pinatrace of the normal insert; generate the trace, and then annotate it using pinatrace_annotate tool. This should show the insert takes redo allocation latch child #1. To validate, search for the latch address again:

kcrfw_redo_gen_ext+1479:0x0000000060016570(Fixed Size(pgsz:2048k)|+91504 fixed sga|var:kcrfsg_+840 ):R:8:0x76fe0c10/1996360720(Variable Size(pgsz:2048k)|+368970768 shared pool|(child)latch#1:redo allocation+0 shared pool|permanent memor,duration 1,cls perm+16120848 )
kcrfw_redo_gen_ext+1486:0x00007fff5d88efe8():W:8:0x76fe0c10/1996360720(Variable Size(pgsz:2048k)|+368970768 shared pool|(child)latch#1:redo allocation+0 shared pool|permanent memor,duration 1,cls perm+16120848 )
kcrfw_redo_gen_ext+1493:0x00007fff5d88efc8():R:4:0/0()
kcrfw_redo_gen_ext+1516:0x00007fff5d88efe8():R:8:0x76fe0c10/1996360720(Variable Size(pgsz:2048k)|+368970768 shared pool|(child)latch#1:redo allocation+0 shared pool|permanent memor,duration 1,cls perm+16120848 )

What this shows is the kcrfw_redo_gen_ext function reading address 0x60016570, which is in the fixed SGA, which belongs to a ‘fixed sga variable’ called kcrfsg_ at offset 840, reading the value 0x76fe0c10, which we already know is the address of redo allocation latch child #1. The latch address is written into address 0x7fff5d88efe8, which is in the process’ PGA, but we got no annotation of it, because there is nothing between the round brackets. Later, the latch address in PGA is read again.

We know the kcrfsg_ variable/struct from my earlier investigation into how the logwriter works. This is a struct in fixed sga that holds data important to kcrf (kernel cache redo file). My current guess is sg means ‘SGA’.

Only a little further in the trace, we see that the kslgetl function is invoked, and indeed it is trying to get this latch:

kslgetl+69:0x0000000076fe0c1f(Variable Size(pgsz:2048k)|+368970783 shared pool|(child)latch#1:redo allocation+15 shared pool|permanent memor,duration 1,cls perm+16120863 ):R:1:0x37/55()

The way a successful latch get looks like, which is visible in the trace, is this:

kslgetl+138:0x0000000076fe0c10(Variable Size(pgsz:2048k)|+368970768 shared pool|(child)latch#1:redo allocation+0 shared pool|permanent memor,duration 1,cls perm+16120848 ):R:8:0/0()
kslgetl+138:0x0000000076fe0c10(Variable Size(pgsz:2048k)|+368970768 shared pool|(child)latch#1:redo allocation+0 shared pool|permanent memor,duration 1,cls perm+16120848 ):W:8:0x1a/26()

The work of Andrey Nikolaev describes that when a latch is taken, the ora pid is written at offset 0. Here we see that at offset 0 the kslgetl function reads a 0, indicating the latch is free, after which the value 26, the ora pid of the process, is written into it, which means this process has now taken the latch.

The next step is to see if there is any indication of it pointing to the public redo strand. It should be close to the latch get, because a latch should be held for as little time as possible. Essentially, what I did then is search for ‘Redo Buffers’. I found this a little further still in the kcrfw_redo_gen_ext function:

kcrfw_redo_gen_ext+11277:0x0000000076fdf3bc(shared pool|permanent memor,duration 1,cls perm+16114620 ):R:4:0x1256f/75119()
kcrfw_redo_gen_ext+11290:0x0000000076fdf3d8(shared pool|permanent memor,duration 1,cls perm+16114648 ):R:8:0/0()
kcrfw_redo_gen_ext+11300:0x0000000076fdf430(shared pool|permanent memor,duration 1,cls perm+16114736 ):W:8:0/0()
kcrfw_redo_gen_ext+11307:0x0000000076fdf3d0(shared pool|permanent memor,duration 1,cls perm+16114640 ):R:8:0xdeadbeef0000726e/-2401053092612115858()
kcrfw_redo_gen_ext+11312:0x0000000076fdf428(shared pool|permanent memor,duration 1,cls perm+16114728 ):W:8:0xdeadbeef0000726e/-2401053092612115858()
kcrfw_redo_gen_ext+11317:0x0000000076fdf3c8(shared pool|permanent memor,duration 1,cls perm+16114632 ):R:8:0x100010000726e/281479271707246()
kcrfw_redo_gen_ext+11322:0x0000000076fdf420(shared pool|permanent memor,duration 1,cls perm+16114720 ):W:8:0x100010000726e/281479271707246()
kcrfw_redo_gen_ext+11327:0x0000000076fdf3c0(shared pool|permanent memor,duration 1,cls perm+16114624 ):R:8:0x1b800001000/1889785614336(??)
kcrfw_redo_gen_ext+11332:0x0000000076fdf418(shared pool|permanent memor,duration 1,cls perm+16114712 ):W:8:0x1b800001000/1889785614336(??)
kcrfw_redo_gen_ext+11337:0x0000000076fdf3b8(shared pool|permanent memor,duration 1,cls perm+16114616 ):R:8:0x1256f0000b2fe/322633648354046()
kcrfw_redo_gen_ext+11342:0x0000000076fdf410(shared pool|permanent memor,duration 1,cls perm+16114704 ):W:8:0x1256f0000b2fe/322633648354046()
kcrfw_redo_gen_ext+11347:0x0000000076fdf3b0(shared pool|permanent memor,duration 1,cls perm+16114608 ):R:8:0/0()
kcrfw_redo_gen_ext+11352:0x0000000076fdf408(shared pool|permanent memor,duration 1,cls perm+16114696 ):W:8:0/0()
kcrfw_redo_gen_ext+11307:0x0000000076fdf3a8(shared pool|permanent memor,duration 1,cls perm+16114600 ):R:8:0x7270/29296()
kcrfw_redo_gen_ext+11312:0x0000000076fdf400(shared pool|permanent memor,duration 1,cls perm+16114688 ):W:8:0x7270/29296()
kcrfw_redo_gen_ext+11317:0x0000000076fdf3a0(shared pool|permanent memor,duration 1,cls perm+16114592 ):R:8:0xd8e4e200/3638878720(Redo Buffers(pgsz:2048k)|+15000064 redo|PUB_REDO_0+15000064 )
kcrfw_redo_gen_ext+11322:0x0000000076fdf3f8(shared pool|permanent memor,duration 1,cls perm+16114680 ):W:8:0xd8e4e200/3638878720(Redo Buffers(pgsz:2048k)|+15000064 redo|PUB_REDO_0+15000064 )
kcrfw_redo_gen_ext+11327:0x0000000076fdf398(shared pool|permanent memor,duration 1,cls perm+16114584 ):R:8:0/0()
kcrfw_redo_gen_ext+11332:0x0000000076fdf3f0(shared pool|permanent memor,duration 1,cls perm+16114672 ):W:8:0/0()
kcrfw_redo_gen_ext+11337:0x0000000076fdf390(shared pool|permanent memor,duration 1,cls perm+16114576 ):R:8:0x758e9f/7704223()
kcrfw_redo_gen_ext+11342:0x0000000076fdf3e8(shared pool|permanent memor,duration 1,cls perm+16114664 ):W:8:0x758e9f/7704223()
kcrfw_redo_gen_ext+11347:0x0000000076fdf388(shared pool|permanent memor,duration 1,cls perm+16114568 ):R:8:0x758ea0/7704224()
kcrfw_redo_gen_ext+11352:0x0000000076fdf3e0(shared pool|permanent memor,duration 1,cls perm+16114656 ):W:8:0x758ea0/7704224()
kcrfw_redo_gen_ext+11366:0x0000000076fdf438(shared pool|permanent memor,duration 1,cls perm+16114744 ):W:4:0xdeadbeef/3735928559(Redo Buffers(pgsz:2048k)|+112049903 redo|PUB_REDO_1+44941039 )
kcrfw_redo_gen_ext+11380:0x0000000076fdf3a8(shared pool|permanent memor,duration 1,cls perm+16114600 ):R:4:0x7270/29296()
kcrfw_redo_gen_ext+11386:0x0000000076fdf3a8(shared pool|permanent memor,duration 1,cls perm+16114600 ):W:4:0x7271/29297()
kcrfw_redo_gen_ext+11392:0x0000000076fdf468(shared pool|permanent memor,duration 1,cls perm+16114792 ):R:8:0xe0000000/3758096384(Redo Buffers(pgsz:2048k)|+134217728 )

There’s a lot to say about this. The first thing to point out is that these are all memory locations in a chunk of permanently allocated memory in the shared pool, somewhere around 0x76fdf300 / 0x76fdf400. The first public strand is explicitly mentioned (by my annotation of the memory structures):

kcrfw_redo_gen_ext+11317:0x0000000076fdf3a0(shared pool|permanent memor,duration 1,cls perm+16114592 ):R:8:0xd8e4e200/3638878720(Redo Buffers(pgsz:2048k)|+15000064 redo|PUB_REDO_0+15000064 )
kcrfw_redo_gen_ext+11322:0x0000000076fdf3f8(shared pool|permanent memor,duration 1,cls perm+16114680 ):W:8:0xd8e4e200/3638878720(Redo Buffers(pgsz:2048k)|+15000064 redo|PUB_REDO_0+15000064 )

As well as the second public strand:

kcrfw_redo_gen_ext+11366:0x0000000076fdf438(shared pool|permanent memor,duration 1,cls perm+16114744 ):W:4:0xdeadbeef/3735928559(Redo Buffers(pgsz:2048k)|+112049903 redo|PUB_REDO_1+44941039 )

I am confident that the reference to the second public strand here is coincidental: look at the hexadecimal value being written, 0xdeadbeef, a well-known ‘unused’ filler value that just happens to fall inside the address range of PUB_REDO_1.

The second thing to notice is that a lot of information is written really close together (0x76fdf300-0x76fdf400). The only location I know of that holds this kind of redo related information is the previously mentioned kcrfsg_ struct, but this seems to be another location holding critical information about redo, of which all we know is that it sits in the shared pool (not in the fixed SGA) and is allocated as permanent memory.
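
One way to at least see which shared pool chunk covers this address is to look it up in x$ksmsp (as SYS; note that scanning x$ksmsp can be expensive on a busy instance). This is just a sketch, with the address from my trace hard-coded and the usual x$ksmsp column names assumed; the chunk comment (ksmchcom) sometimes gives a hint about who allocated it:

-- find the shared pool chunk that contains address 0x76fdf3a0
select ksmchcom, ksmchcls, ksmchdur, ksmchptr, ksmchsiz
from   x$ksmsp
where  to_number(rawtohex(ksmchptr),'xxxxxxxxxxxxxxxx') <= to_number('76fdf3a0','xxxxxxxxxxxxxxxx')
and    to_number(rawtohex(ksmchptr),'xxxxxxxxxxxxxxxx') + ksmchsiz > to_number('76fdf3a0','xxxxxxxxxxxxxxxx');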

I tried various things to get more information about this memory location: dumping the SGA heap, inspecting the location in a heap dump to see if there is a sub-heap I could dump for more information (there isn’t), asking on Twitter (which sometimes helps), but got no information. I spent a considerable time searching the bug database, and still got no answers….

I decided to prove usage of the second public strand by manually obtaining the latch for the first strand, so that an insert has to take the second latch, and then checking whether that indeed means the second public redo strand is used. This means taking the first latch using oradebug in one session, then starting another database session, attaching a pinatrace to it, and executing an insert.
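
For completeness, this is roughly how the first latch can be taken manually with oradebug (a sketch based on Andrey Nikolaev’s published work; the latch address must be the one from your own instance, and the trailing kslgetl arguments – the ‘why’ and ‘where’ values – are assumptions here). Only do this in a scratch database, and release the latch with kslfre afterwards:

SQL> -- find the address of redo allocation child latch #1
SQL> select child#, addr from v$latch_children where name = 'redo allocation' order by child#;
SQL> oradebug setmypid
SQL> -- take the latch in willing-to-wait mode; keep this session idle afterwards
SQL> oradebug call kslgetl 0x76fe0c10 1 0 0
SQL> -- later, release it again:
SQL> oradebug call kslfre 0x76fe0c10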

Once the pinatrace is annotated, search for 0x76fe0c10 (the first redo allocation latch) again. The start is exactly the same: the latch address is fetched from kcrfsg_ at offset 840.

kcrfw_redo_gen_ext+1479:0x0000000060016570(Fixed Size(pgsz:2048k)|+91504 fixed sga|var:kcrfsg_+840 ):R:8:0x76fe0c10/1996360720(Variable Size(pgsz:2048k)|+368970768 shared pool|(child)latch#1:redo allocation+0 shared pool|permanent memor,duration 1,cls perm+16120848 )
kcrfw_redo_gen_ext+1486:0x00007ffcca22e6e8():W:8:0x76fe0c10/1996360720(Variable Size(pgsz:2048k)|+368970768 shared pool|(child)latch#1:redo allocation+0 shared pool|permanent memor,duration 1,cls perm+16120848 )
kcrfw_redo_gen_ext+1493:0x00007ffcca22e6c8():R:4:0/0()
kcrfw_redo_gen_ext+1516:0x00007ffcca22e6e8():R:8:0x76fe0c10/1996360720(Variable Size(pgsz:2048k)|+368970768 shared pool|(child)latch#1:redo allocation+0 shared pool|permanent memor,duration 1,cls perm+16120848 )

Now look at the same offsets in the function kslgetl (+138) again:

kslgetl+138:0x0000000076fe0c10(Variable Size(pgsz:2048k)|+368970768 shared pool|(child)latch#1:redo allocation+0 shared pool|permanent memor,duration 1,cls perm+16120848 ):R:8:0x27/39()
kslgetl+138:0x0000000076fe0c10(Variable Size(pgsz:2048k)|+368970768 shared pool|(child)latch#1:redo allocation+0 shared pool|permanent memor,duration 1,cls perm+16120848 ):W:8:0x27/39()

Now we see the latch already has a value at offset zero, 39, indicating the latch is taken. Now that the session executing the insert has found the first redo allocation latch busy, the first thing that happens is this:

kcrfw_redo_gen_ext+14267:0x00000000600169c4(Fixed Size(pgsz:2048k)|+92612 fixed sga|var:kcrf_actv_strands_+0 ):R:4:0x1/1()
kcrfw_redo_gen_ext+14273:0x00007ffcca22e6d8():R:4:0x2/2()
kcrfw_redo_gen_ext+14283:0x00000000600169c4(Fixed Size(pgsz:2048k)|+92612 fixed sga|var:kcrf_actv_strands_+0 ):W:4:0x2/2()

Because the latch for the first public redo strand was taken, an additional strand is enabled by setting the fixed SGA variable kcrf_actv_strands_ to 2, indicating the second strand is now active too.
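
If you want to watch this happen without a pinatrace, a rough alternative (assuming the variable is listed under this name in x$ksmfsv) is to look up the address of kcrf_actv_strands_ and peek it with oradebug before and after the test; a value of 2 means two strands are active:

SQL> -- find the fixed SGA address of the active-strands counter
SQL> select ksmfsnam, ksmfsadr from x$ksmfsv where ksmfsnam like 'kcrf_actv_strands%';
SQL> oradebug setmypid
SQL> -- read 4 bytes at the address returned above (0x600169c4 in my case)
SQL> oradebug peek 0x600169c4 4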

Then kcrfsg_ at offset 840 is read again, but a higher value is saved in PGA memory. In fact, the latch size is known (160 bytes), so in order to get to the next latch the session takes the latch address of child #1 and adds 160 to arrive at child #2.
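
The 160-byte spacing is easy to verify from the latch addresses themselves: 0x76fe0cb0 - 0x76fe0c10 = 0xa0 = 160. A small query sketch that computes the gap between consecutive redo allocation child latches (assuming they are allocated contiguously, as they are here):

select child#,
       rawtohex(addr) as addr,
       to_number(rawtohex(addr),'xxxxxxxxxxxxxxxx')
         - lag(to_number(rawtohex(addr),'xxxxxxxxxxxxxxxx')) over (order by child#) as gap_bytes
from   v$latch_children
where  name = 'redo allocation'
order  by child#;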

kcrfw_redo_gen_ext+14335:0x0000000060016570(Fixed Size(pgsz:2048k)|+91504 fixed sga|var:kcrfsg_+840 ):R:8:0x76fe0c10/1996360720(Variable Size(pgsz:2048k)|+368970768 shared pool|(child)latch#1:redo allocation+0 shared pool|permanent memor,duration 1,cls perm+16120848 )
kcrfw_redo_gen_ext+14342:0x00007ffcca22e6e8():W:8:0x76fe0cb0/1996360880(Variable Size(pgsz:2048k)|+368970928 shared pool|(child)latch#2:redo allocation+0 shared pool|permanent memor,duration 1,cls perm+16121008 )

This latch is free, and is taken by this session:

kslgetl+138:0x0000000076fe0cb0(Variable Size(pgsz:2048k)|+368970928 shared pool|(child)latch#2:redo allocation+0 shared pool|permanent memor,duration 1,cls perm+16121008 ):R:8:0/0()
kslgetl+138:0x0000000076fe0cb0(Variable Size(pgsz:2048k)|+368970928 shared pool|(child)latch#2:redo allocation+0 shared pool|permanent memor,duration 1,cls perm+16121008 ):W:8:0x1a/26()

Now let’s look at the same function locations with the first redo allocation latch taken, and the second redo allocation latch acquired:

kcrfw_redo_gen_ext+11277:0x0000000076fdf4e4(shared pool|permanent memor,duration 1,cls perm+16114916 ):R:4:0x1256f/75119()
kcrfw_redo_gen_ext+11290:0x0000000076fdf500(shared pool|permanent memor,duration 1,cls perm+16114944 ):R:8:0/0()
kcrfw_redo_gen_ext+11300:0x0000000076fdf558(shared pool|permanent memor,duration 1,cls perm+16115032 ):W:8:0/0()
kcrfw_redo_gen_ext+11307:0x0000000076fdf4f8(shared pool|permanent memor,duration 1,cls perm+16114936 ):R:8:0xdeadbeef000000e6/-2401053092612144922()
kcrfw_redo_gen_ext+11312:0x0000000076fdf550(shared pool|permanent memor,duration 1,cls perm+16115024 ):W:8:0xdeadbeef000000e6/-2401053092612144922()
kcrfw_redo_gen_ext+11317:0x0000000076fdf4f0(shared pool|permanent memor,duration 1,cls perm+16114928 ):R:8:0x10001000000e6/281479271678182()
kcrfw_redo_gen_ext+11322:0x0000000076fdf548(shared pool|permanent memor,duration 1,cls perm+16115016 ):W:8:0x10001000000e6/281479271678182()
kcrfw_redo_gen_ext+11327:0x0000000076fdf4e8(shared pool|permanent memor,duration 1,cls perm+16114920 ):R:8:0x800/2048()
kcrfw_redo_gen_ext+11332:0x0000000076fdf540(shared pool|permanent memor,duration 1,cls perm+16115008 ):W:8:0x800/2048()
kcrfw_redo_gen_ext+11337:0x0000000076fdf4e0(shared pool|permanent memor,duration 1,cls perm+16114912 ):R:8:0x1256f00012488/322633648383112()
kcrfw_redo_gen_ext+11342:0x0000000076fdf538(shared pool|permanent memor,duration 1,cls perm+16115000 ):W:8:0x1256f00012488/322633648383112()
kcrfw_redo_gen_ext+11347:0x0000000076fdf4d8(shared pool|permanent memor,duration 1,cls perm+16114904 ):R:8:0/0()
kcrfw_redo_gen_ext+11352:0x0000000076fdf530(shared pool|permanent memor,duration 1,cls perm+16114992 ):W:8:0/0()
kcrfw_redo_gen_ext+11307:0x0000000076fdf4d0(shared pool|permanent memor,duration 1,cls perm+16114896 ):R:8:0xe6/230()
kcrfw_redo_gen_ext+11312:0x0000000076fdf528(shared pool|permanent memor,duration 1,cls perm+16114984 ):W:8:0xe6/230()
kcrfw_redo_gen_ext+11317:0x0000000076fdf4c8(shared pool|permanent memor,duration 1,cls perm+16114888 ):R:8:0xdc01ce00/3691105792(Redo Buffers(pgsz:2048k)|+67227136 redo|PUB_REDO_1+118272 )
kcrfw_redo_gen_ext+11322:0x0000000076fdf520(shared pool|permanent memor,duration 1,cls perm+16114976 ):W:8:0xdc01ce00/3691105792(Redo Buffers(pgsz:2048k)|+67227136 redo|PUB_REDO_1+118272 )
kcrfw_redo_gen_ext+11327:0x0000000076fdf4c0(shared pool|permanent memor,duration 1,cls perm+16114880 ):R:8:0/0()
kcrfw_redo_gen_ext+11332:0x0000000076fdf518(shared pool|permanent memor,duration 1,cls perm+16114968 ):W:8:0/0()
kcrfw_redo_gen_ext+11337:0x0000000076fdf4b8(shared pool|permanent memor,duration 1,cls perm+16114872 ):R:8:0x75a1d2/7709138()
kcrfw_redo_gen_ext+11342:0x0000000076fdf510(shared pool|permanent memor,duration 1,cls perm+16114960 ):W:8:0x75a1d2/7709138()
kcrfw_redo_gen_ext+11347:0x0000000076fdf4b0(shared pool|permanent memor,duration 1,cls perm+16114864 ):R:8:0x75a1d2/7709138()
kcrfw_redo_gen_ext+11352:0x0000000076fdf508(shared pool|permanent memor,duration 1,cls perm+16114952 ):W:8:0x75a1d2/7709138()
kcrfw_redo_gen_ext+11366:0x0000000076fdf560(shared pool|permanent memor,duration 1,cls perm+16115040 ):W:4:0xdeadbeef/3735928559(Redo Buffers(pgsz:2048k)|+112049903 redo|PUB_REDO_1+44941039 )
kcrfw_redo_gen_ext+11380:0x0000000076fdf4d0(shared pool|permanent memor,duration 1,cls perm+16114896 ):R:4:0xe6/230()
kcrfw_redo_gen_ext+11386:0x0000000076fdf4d0(shared pool|permanent memor,duration 1,cls perm+16114896 ):W:4:0xe7/231()
kcrfw_redo_gen_ext+11392:0x0000000076fdf590(shared pool|permanent memor,duration 1,cls perm+16115088 ):R:8:0xe0200000/3760193536(Redo Buffers(pgsz:2048k)|+136314880 )

Here we see reads and writes in the same fashion, including the 0xdeadbeef write, but now we see PUB_REDO_1, alias the second public redo strand. This proves that the second redo allocation child latch really protects the second public redo strand!
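
A quick corroboration that does not need a trace at all: snapshot the gets of the redo allocation child latches before and after the insert while child #1 is held by the other session; only child #2 should show additional gets. A minimal sketch:

-- run before and after the insert and compare:
-- the delta should show up against child# 2 only
select child#, gets, misses
from   v$latch_children
where  name = 'redo allocation'
order  by child#;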

Case Study – 1

It has been some time since I wrote an article walking through the analysis of information in an AWR report, but a nice example appeared a few weeks ago on Twitter that broke a big AWR picture into a sequence of bite-sized chunks that made a little story, so here it is, replayed in sync with my ongoing thoughts. The problem started with the (highly paraphrased) question – “How could I get these headline figures when all the ‘SQL ordered by’ sections of the report show captured SQL accounting for 0.0% of Total?”. The report was a 30 minute snapshot from 11.2.0.3, and here’s the Load Profile:

[Load Profile image]

As you can see, the database was doing quite a lot of work as measured by the physical reads and writes, the number of block changes and size of redo, and the fact that the average CPU usage by the foreground sessions in this instance accounts for 9.5 CPUs. Generally speaking the 49 seconds per second of DB time is also a fairly good hint, when combined with the other numbers, that the instance is busy; but, in the absence of any other information, that could be one session holding a blocking lock with 48 other sessions waiting for the lock to be released.

There are a couple of unusual (though not impossible) features to this profile. Given the amount of work, the figures for user calls and executes look very small – again not technically impossible, just unlikely in “normal” database processing given the apparent workload – and bear in mind that the “SQL ordered by ..” figures seem to account for none of the workload. Similarly the figures for logical reads and block changes are a little unusual (for “normal” processing) – on average this instance seems to have modified every block it visited (without any work showing up in the captured SQL).

Next in the set of images was the Time Model:

[Time Model image]

As you can see, the foreground time (DB time) is 85,944 seconds, of which foreground CPU usage (DB CPU) is 16,735 seconds – with about 69,000 seconds unaccounted for! There’s virtually no time spent on PL/SQL or SQL, and RMAN doesn’t even make an appearance (I mention RMAN specifically because there was at least one version of Oracle where the RMAN time was accidentally lost from this summary).

So what does the Top Timed Events look like:

[Top Timed Events image]

It’s nice to see that this is consistent with the Load Profile: the DB CPU matches, and there are a lot of physical reads (and a quick check says that 6,560,642/1,800 = 3,644 … which is pretty close to the 3,746 physical reads per second in the Load Profile). There’s one major anomaly here, though: the huge number of (and time spent on) latch: row cache objects gets. Even though it’s not the single largest component of time it’s the most obvious one to pursue, so the next bit of the report to check is the Dictionary Cache Statistics, with the Tablespace IO Stats and Segments by Physical Reads to follow. I don’t have an image for the dictionary cache stats, but the answer to the question “What were all the rowcache object gets for?” was: “dc_tablespaces (214,796,434)” – which (probably) told me everything I needed to know.
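
As an aside, the Dictionary Cache Statistics in the AWR report are just deltas of v$rowcache, so the same number could be checked interactively with something like this sketch (take a before and after snapshot across the interval):

-- dictionary cache activity for tablespace entries
select parameter, gets, getmisses
from   v$rowcache
where  parameter = 'dc_tablespaces';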

I could show you the specific Instance Activity statistic that I wanted to see next, but I’ll postpone that for a moment and jump to the Tablespace IO Stats – which we were planning to do, and might have done anyway if we hadn’t known the rocket science behind the massive number of gets on dc_tablespaces.

[Tablespace IO Stats image]

That’s a horrendous number of (single block) reads of the undo tablespace – and why would they be happening? The instance was engaged in some massive rollback activity (and the transactions being rolled back are on objects in the GIRO tablespace – which is why it is also suffering a lot of single block reads), and this is the point where we jump to the relevant Instance Activity statistic to confirm the claim:

[Instance Activity – undo records applied image]

There are two other “undo records applied” statistics, but we don’t need to think about them – the match between the count of records applied and the number of gets on dc_tablespaces is nearly perfect. Almost everything this instance is doing is rolling back – there must have been some enormous data change (possibly only one, possibly parallel-enabled) that failed in the previous half hour, and now the entire mess is being cleaned up.
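
For reference, these Instance Activity figures come from v$sysstat; a quick interactive equivalent (a sketch – the LIKE picks up all three “undo records applied” statistics, and the one of interest here is probably “rollback changes - undo records applied”) would be:

-- the other two "undo records applied" statistics relate to consistent reads
select name, value
from   v$sysstat
where  name like '%undo records applied%'
order  by name;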

One little detail to note: the “undo records applied” per second is 122,355, but the Load Profile reported 247,885 “Block changes” per second. The two figures are consistent with each other. Each application of an undo record is two block changes – the first when you modify the original data block, the second when you update the undo record itself to flag it as “applied”: 122,355 * 2 = 244,710, which is a good match for 247,885.

Final Thoughts

There is a second reason why you could see lots of gets on dc_tablespaces – but I think that was a bug in 9i relating to temporary tablespace groups. The phenomenon as a side effect of rolling back was something I discovered in the 8i timeline, and I’ve not actually checked what an AWR report really would look like if I forced a massive rollback to take place as the only workload across the interval – so here’s a quick test I constructed and ran to finish the thread off:


set timing on

create table t1 as
with g as (select rownum id from dual connect by level <= 1000)
select rownum id, rownum n1, rpad ('x',150) padding from g,g
;

create index t1_i1 on t1(id);

begin
        for i in 1..1000000 loop
                update t1 set n1 = n1 where id = i;
        end loop;
end;
/

alter system flush buffer_cache;

pause take AWR snapshot from another session

rollback;

prompt take AWR snapshot from another session

On the small VM I used to run the test it took a little over 60 seconds to run the update, and the same again to complete the rollback. The “DB time” shown in the Time Model section of the AWR report was 62.0 seconds, while the “sql execute elapsed time” was 3.9 seconds (which was the SQL executed while capturing the AWR data).

Conclusion

This was a problem where the database seemed to have done a lot of work that couldn’t be attributed to any SQL. While I had a little rocket science up my sleeve that may have allowed me to identify the source more quickly and with more confidence than the average DBA, all I’ve done in this note is highlight a couple of oddities and big numbers that anyone could have spotted, and follow a couple of simple guesses:

a) DB time was large, but sql (and pl/sql) time was non-existent

b) Top timed events were I/O and latches – so identify the latch

c) The specific latch pointed towards tablespaces – so check the Tablespace I/O and note the UNDO tablespace

d) Look at any Instance Activity statistics about “undo”.

e) Job done – but a quick test case is the icing on the cake.

Oracle database block checksum XOR algorithm explained

Recently I’ve started to write my own clone of BBED to have something handy and useful in extreme cases when you have to go deep and fix stuff on low level (I have only like 2 such cases a year but each time it is really fun and a nice money …

Testing Oracle SQL online

Want to test some DDL, a query, or check an execution plan? You need only a browser. And you can copy-paste, or simply link, your test case in a forum, a tweet, or an e-mail. Here is a small list (expected to grow from your comments) of free online services which can run against an Oracle Database: SQL Fiddle, Rextester, db<>fiddle and Oracle Live SQL.

SQL Fiddle

SQL Fiddle lets you build a schema and run DDL on the following databases:

  • Oracle 11gR2
  • Microsoft SQL Server 2014
  • MySQL 5.6
  • Postgres 9.6 and 9.3
  • SQLite (WebSQL and SQL.js)

As an Oracle user, you will find the Oracle 11gR2 option not very useful, as it is a version from 2010. But there’s a simple reason for that: it’s the latest free version – Oracle XE. And a free online service can run only free software. Now that Oracle plans to release an XE version every year, this should get better soon.

Example: http://sqlfiddle.com/#!4/42960/1/0

[Screenshot: SQL Fiddle]

Rextester

Rextester is a service to compile code online in a lot of languages, and also to run queries against the following databases:

  • Oracle 11gR2
  • Microsoft SQL Server 2014
  • MySQL 5.7
  • PostgreSQL 9.6

Example: http://rextester.com/QCYJF41984

Rextester also has an API where you can run a query and get a JSON answer:

$ curl -s --request POST --data 'LanguageChoice=35&Program=select * from dual' http://rextester.com/rundotnet/api
{"Warnings":null,"Errors":null,"Result":"\u003ctable class=\"sqloutput\"\u003e\u003ctbody\u003e\u003ctr\u003e\u003cth\u003e\u0026nbsp;\u0026nbsp;\u003c/th\u003e\r\n\u003cth\u003eDUMMY\u003c/th\u003e\r\n\u003c/tr\u003e\r\n\u003ctr\u003e\u003ctd\u003e1\u003c/td\u003e\r\n\u003ctd\u003eX\u003c/td\u003e\r\n\u003c/tr\u003e\r\n\u003c/tbody\u003e\u003c/table\u003e\r\n","Stats":"absolute service time: 1,37 sec","Files":null}

The answer has the result as an HTML table:

$ curl -s --request POST --data 'LanguageChoice=35&Program=select * from dual' http://rextester.com/rundotnet/api | jq -r .Result

&nbsp;&nbsp; DUMMY
1 X


Here is my SELECT * FROM DUAL:

$ curl -s --request POST --data 'LanguageChoice=35&Program=select * from dual' http://rextester.com/rundotnet/api | jq -r .Result | lynx -dump -stdin
DUMMY
1 X

[Screenshot: Rextester]

db<>fiddle

db<>fiddle has a very nice interface, easy to link and easy to paste to StackOverflow (click on ‘markdown’). It runs on:

  • Oracle 11gR2
  • SQL Server 2014, 2016 and 2017, and even the 2017 Linux version.
  • MariaDB 10.2
  • SQLite 3.8
  • PostgreSQL 8.4, 9.4, 9.6 and 10

Example: http://dbfiddle.uk/?rdbms=oracle_11.2&fiddle=948a067dd17780ca65b01243751c2cb0

[Screenshot: db<>fiddle]

Oracle Live SQL

Finally, you can also run your tests on the latest release of Oracle, with a service provided by Oracle itself: Live SQL.

  • Oracle 12cR2 (an early build from October 2016)

Example: https://livesql.oracle.com/apex/livesql/s/f6ydueahcslf66dlynagw9s3w

[Screenshot: Oracle Live SQL]

The article Testing Oracle SQL online appeared first on Blog dbi services.