Oakies Blog Aggregator

Tightened security in 20c

If you cannot wait for a fully autonomous offering and you've jumped into the 20c preview release on Oracle Cloud, the first thing you'll probably install is Oracle Application Express.

Unlike autonomous, you'll be installing it manually, which is a quick and easy process, and either during that installation or when adding ORDS later you'll want to set the passwords for the public access accounts (typically APEX_PUBLIC_USER and APEX_REST_PUBLIC_USER).

Here’s what that looks like in Oracle Database 19c



SQL> show user
USER is "SYS"
SQL> alter user APEX_PUBLIC_USER identified by MY_SECRET_PASSWORD;

User altered.

And here is what it will look like in Oracle Database 20c.



SQL> show user
USER is "SYS"
SQL> alter user APEX_PUBLIC_USER identified by MY_SECRET_PASSWORD;
alter user APEX_PUBLIC_USER identified by MY_SECRET_PASSWORD
*
ERROR at line 1:
ORA-01031: insufficient privileges

Since Oracle 11, control over user accounts has been improving, including the ability to identify which accounts exist explicitly to deliver Oracle features (such as APEX, ORDS, Context, JVM and the like). In particular, the ORACLE_MAINTAINED column was added to DBA_USERS to let administrators know which accounts are managed by the database itself.



SQL> desc DBA_USERS

 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 USERNAME                                  NOT NULL VARCHAR2(128)
 USER_ID                                   NOT NULL NUMBER
 PASSWORD                                           VARCHAR2(4000)
 ACCOUNT_STATUS                            NOT NULL VARCHAR2(32)
 LOCK_DATE                                          DATE
 EXPIRY_DATE                                        DATE
 DEFAULT_TABLESPACE                        NOT NULL VARCHAR2(30)
 TEMPORARY_TABLESPACE                      NOT NULL VARCHAR2(30)
 LOCAL_TEMP_TABLESPACE                              VARCHAR2(30)
 CREATED                                   NOT NULL DATE
 PROFILE                                   NOT NULL VARCHAR2(128)
 INITIAL_RSRC_CONSUMER_GROUP                        VARCHAR2(128)
 EXTERNAL_NAME                                      VARCHAR2(4000)
 PASSWORD_VERSIONS                                  VARCHAR2(17)
 EDITIONS_ENABLED                                   VARCHAR2(1)
 AUTHENTICATION_TYPE                                VARCHAR2(8)
 PROXY_ONLY_CONNECT                                 VARCHAR2(1)
 COMMON                                             VARCHAR2(3)
 LAST_LOGIN                                         TIMESTAMP(9) WITH TIME ZONE
 ORACLE_MAINTAINED                                  VARCHAR2(1)
 INHERITED                                          VARCHAR2(3)
 DEFAULT_COLLATION                                  VARCHAR2(100)
 IMPLICIT                                           VARCHAR2(3)
 ALL_SHARD                                          VARCHAR2(3)
 PASSWORD_CHANGE_DATE                               DATE
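
Armed with that column it's easy to see which accounts fall into this category. For example (a query of my own, not part of the installation):

select username, account_status
from   dba_users
where  oracle_maintained = 'Y'
order  by username;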

In 20c, password management has been tightened for Oracle-maintained accounts, which include the APEX accounts. Hence, to update their passwords, you'll need to let the database know that you know you are tinkering with special accounts.



SQL> alter session set "_oracle_script" = true;

Session altered.

SQL> alter user APEX_PUBLIC_USER identified by MY_SECRET_PASSWORD;

User altered.
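
Since "_oracle_script" changes other behaviour as well (for example, objects created while it is set can end up flagged as Oracle-maintained themselves), presumably you'll want to switch it back off as soon as you're done:

SQL> alter session set "_oracle_script" = false;

Session altered.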

Then all that is left to do is to sit back and bask in the joy of Application Express on the latest version of the Oracle Database.

Oracle Fatal Background Processes

When talking about Oracle background processes, there's a term/qualifier: a "fatal" background process. This means that when one of these background processes crashes, whoever detects the disappearance (possibly PMON, LGWR or CLMN) will shut down the instance, as it cannot function normally anymore.
Not all background process crashes take down the whole instance; for example, processes like J000 and P000 are technically background processes (daemons disconnected from the network), but their crash won't take down the instance.
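
As far as I know there is no documented "fatal" flag exposed in the V$ views, but you can at least list the background processes currently running in an instance; a quick sketch:

select name, description
from   v$bgprocess
where  paddr <> hextoraw('00')
order  by name;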


Indexes for Joins on Oracle

Looking at ways to programmatically analyze explain plans to see if adding indexes would help.

The analysis is straightforward for full table scans with strong predicate filters. If there is a strong predicate filter on an explain plan node with a full table scan, then it's a good candidate for an index.

Now a similar situation exists for a join.

Again if the join has a strong predicate filter on a full table scan, that is a candidate.

On the other hand, when the actual join fields look like they would benefit from an index, it's less obvious.

In the queries below T1 and T2 have the same data.

Here is the query

select  max(t1.data) from  t1, t2 
where 
      t1.id = t2.id  
   and t1.clus   = 1  ;

 

First we try with no indexes.

Then we try with an index on the predicate filter on T1.

Then we add an index on the join column of T2.

The idea is that we should see a performance improvement with the addition of each index, so that the final, efficient path uses the predicate filter index on T1 to return a small number of rows, then drives into T2 with that small set of rows via an NL join using the index on the T2 join field.

The surprising thing is that a HJ that simply hashes the predicate-filtered rows from T1, then probes that hash with all the rows from T2, is just as efficient, and we don't have to maintain the index on the T2 join column! That surprised me. Sure, HJs are cool, but I always thought of them as just-in-time indexes (i.e. the hashed result), which is work that could be avoided by having an index supporting an NL join, rather than recreating the hash result on every query execution. (I'll have to do some load testing to see the actual impact of the NL vs HJ query on total database resource usage and load.)

Simply adding an index on the T1 filter column gets us a HJ that is just as efficient as an NL using the T2 join field, but in the HJ case we don't have to maintain the index on the T2 join column.

There is a strong predicate filter on T1.clus (called "clus" because the values are clustered; we can look at the non-clustered case later, i.e. the same query but using the "val" column, which is not clustered).
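
Once the indexes exist, that clustering difference shows up in the index clustering factors; a sketch of the check (an index on "val" is hypothetical here, as this post only creates one on "clus"):

select index_name, clustering_factor, num_rows
from   user_indexes
where  table_name = 'T1';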

In the queries below, an NL driving from the T1 rows left after the predicate filter into T2, using an index on the join column, is efficient.

But the HJ, hashing T1 after the predicate filter and probing the hash with all the rows from T2, is, surprisingly (to me), just as efficient or more so.

The first example starts without any index usage.

Create test data:

drop table seed;
create table seed as
select rownum n from dual connect by level <= 1e6
;

drop table t1;
drop table t2;
create table t1 ( 
    id  int,
    clus int,
    val int,
    data VARCHAR(40)
 );
 
 insert into t1 (
    id, clus , val, data
)
 select 
 n,
 trunc(n/100),
 mod(n,10000),
dbms_random.string('U',dbms_random.value(2,7))
from seed;


select count(*) from t1 where clus = 1;
100
select count(*) from t1 where val = 1;
100

create table t2 as select * from t1;

alter session set Statistics_level=all;
set timing on;

The following 3 examples will use full table scans on both tables.

(I use "+ 0" to turn index use off, so I don't have to drop and recreate indexes for the demo; I had actually already created the indexes before running these queries.)

The first NL below is slower than the following HASH join because the NL has to read the full table T2 for every row from T1, resulting in 300K+ buffer accesses (100 rows after the T1 filter times ~3,000 buffers per row).

The HASH join just has to read T2 once (3K+ buffer accesses) and look up each of those rows in a hash of T1. That's still a lot of rows: it should be 1M, i.e. all the rows in T2, which is what the estimate says, but the actuals say 1212, which I don't understand (the plan shows a HASH JOIN SEMI, and presumably a semi-join can stop scanning the probe side once every build row has found a match, which would explain the early cut-off).

Hashing T2 instead of T1 uses a lot more memory, 50M instead of 1.5M, and takes longer even though it does far fewer lookups driven from the filter on T1 (only 100).

select  /*+ leading(t1 t2)  use_nl(t2) */ max(t1.data) from  t1, t2 
where 
      t1.id = t2.id  + 0 
   and t1.clus + 0  = 1  ;
select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST'));

--------------------------------------------------------------------------------------
| Id  | Operation	    | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |	   |	  1 |	     |	    1 |00:00:06.83 |	 328K|
|   1 |  SORT AGGREGATE     |	   |	  1 |	   1 |	    1 |00:00:06.83 |	 328K|
|   2 |   NESTED LOOPS	    |	   |	  1 |	  36 |	  100 |00:00:06.83 |	 328K|
|*  3 |    TABLE ACCESS FULL| T1   |	  1 |	  36 |	  100 |00:00:00.07 |	3275 |
|*  4 |    TABLE ACCESS FULL| T2   |	100 |	   1 |	  100 |00:00:06.76 |	 325K|
--------------------------------------------------------------------------------------
   3 - filter("T1"."CLUS"+0=1)
   4 - filter("T1"."ID"="T2"."ID"+0)


select max(t1.data) from  t1, t2 
where 
    t1.id = t2.id  + 0 
   and t1.clus + 0  = 1  ;
select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST'));

-----------------------------------------------------------------------------------------------------------------
| Id  | Operation	    | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |	OMem |	1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |	   |	  1 |	     |	    1 |00:00:00.09 |	3281 |	     |	     |		|
|   1 |  SORT AGGREGATE     |	   |	  1 |	   1 |	    1 |00:00:00.09 |	3281 |	     |	     |		|
|*  2 |   HASH JOIN SEMI    |	   |	  1 |	  36 |	  100 |00:00:00.09 |	3281 |	1753K|	1753K| 1414K (0)|
|*  3 |    TABLE ACCESS FULL| T1   |	  1 |	  36 |	  100 |00:00:00.09 |	3275 |	     |	     |		|
|   4 |    TABLE ACCESS FULL| T2   |	  1 |	1000K|	 1212 |00:00:00.01 |	   6 |	     |	     |		|
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("T1"."ID"="T2"."ID"+ 0)
   3 - filter("T1"."CLUS"+ 0=1)


select  /*+ leading(t2 t1) use_hash(t1) */ max(t1.data)  from  t1, t2 
where 
    t1.id = t2.id  + 0 
   and t1.clus + 0  = 1  ;
select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST'));

-----------------------------------------------------------------------------------------------------------------
| Id  | Operation	    | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |	OMem |	1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |	   |	  1 |	     |	    1 |00:00:00.32 |	6526 |	     |	     |		|
|   1 |  SORT AGGREGATE     |	   |	  1 |	   1 |	    1 |00:00:00.32 |	6526 |	     |	     |		|
|*  2 |   HASH JOIN	    |	   |	  1 |	  36 |	  100 |00:00:00.32 |	6526 |	  50M|	9345K|	 49M (0)|
|   3 |    TABLE ACCESS FULL| T2   |	  1 |	1000K|	 1000K|00:00:00.03 |	3251 |	     |	     |		|
|*  4 |    TABLE ACCESS FULL| T1   |	  1 |	  36 |	  100 |00:00:00.08 |	3275 |	     |	     |		|
-----------------------------------------------------------------------------------------------------------------


Add an index on the predicate filter:

create index t1_clus on t1 (clus);

And we get a much faster hash join, as we access far fewer buffers on T1.

An NL driving from T1 still has to do a full table scan of T2 for every row, so it is slow.

 

 

select max(t1.data) from  t1, t2 
where 
    t1.id = t2.id  + 0 
   and t1.clus    = 1  ;
select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST'));


-------------------------------------------------------------------------------------------------------------------------------- 
| Id  | Operation			      | Name	| Starts | E-Rows | A-Rows |   A-Time	| Buffers |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------- 
|   0 | SELECT STATEMENT		      | 	|      1 |	  |	 1 |00:00:00.01 |      10 |	  |	  |
|   1 |  SORT AGGREGATE 		      | 	|      1 |	1 |	 1 |00:00:00.01 |      10 |	  |	  ||
|*  2 |   HASH JOIN SEMI		      | 	|      1 |    100 |    100 |00:00:00.01 |      10 |  1753K|  1753K| 1408K (0)|
|   3 |    TABLE ACCESS BY INDEX ROWID BATCHED| T1	|      1 |    100 |    100 |00:00:00.01 |	4 |	  |	  ||
|*  4 |     INDEX RANGE SCAN		      | T1_CLUS |      1 |    100 |    100 |00:00:00.01 |	3 |	  |	  ||
|   5 |    TABLE ACCESS FULL		      | T2	|      1 |   1000K|   1212 |00:00:00.01 |	6 |	  |	  ||
-------------------------------------------------------------------------------------------------------------------------------- 

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("T1"."ID"="T2"."ID")
4 - access("T1"."CLUS"=1)


select /*+ leading(t1 t2)  use_nl(t2) */  max(t1.data) from  t1, t2 
where 
      t1.id = t2.id + 0
   and t1.clus   = 1  ;
select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS'));

--------------------------------------------------------------------------------------------------------------------
| Id  | Operation			      | Name	| Starts | E-Rows | A-Rows |   A-Time	| Buffers | Reads  |
--------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT		      | 	|      1 |	  |	 1 |00:08:14.23 |    2893K|   2892K|
|   1 |  SORT AGGREGATE 		      | 	|      1 |	1 |	 1 |00:08:14.23 |    2893K|   2892K|
|   2 |   NESTED LOOPS			      | 	|      1 |    100 |    100 |00:08:14.23 |    2893K|   2892K|
|   3 |    TABLE ACCESS BY INDEX ROWID BATCHED| T1	|      1 |    100 |    100 |00:00:00.01 |	4 |	 0 |
|*  4 |     INDEX RANGE SCAN		      | T1_CLUS |      1 |    100 |    100 |00:00:00.01 |	3 |	 0 |
|*  5 |    TABLE ACCESS FULL		      | T2	|    100 |	1 |    100 |00:08:14.23 |    2893K|   2892K|
--------------------------------------------------------------------------------------------------------------------

Adding an index on T2.id and allowing the NL driving from T1 to look up rows in T2 by index gets the NL down to about the same cost as the HASH JOIN.

The HJ on a hash of T2 is faster when accessing T1 by its index on the predicate filter, but hashing all of T2 still slows it down.

I don't understand why the HJ hashing T2 is slower in the second case below, where T2 is read via an index fast full scan of T2_ID instead of a full table scan.

drop index t2_id;

create index t2_id on t2 (id);  

select /*+ leading(t1 t2)  use_nl(t2) */  max(t1.data) from  t1, t2 
where 
      t1.id = t2.id  
   and t1.clus   = 1  ;
select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS'));

-----------------------------------------------------------------------------------------------------------
| Id  | Operation			      | Name	| Starts | E-Rows | A-Rows |   A-Time	| Buffers |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT		      | 	|      1 |	  |	 1 |00:00:00.01 |      17 |
|   1 |  SORT AGGREGATE 		      | 	|      1 |	1 |	 1 |00:00:00.01 |      17 |
|   2 |   NESTED LOOPS SEMI		      | 	|      1 |    100 |    100 |00:00:00.01 |      17 |
|   3 |    TABLE ACCESS BY INDEX ROWID BATCHED| T1	|      1 |    100 |    100 |00:00:00.01 |	4 |
|*  4 |     INDEX RANGE SCAN		      | T1_CLUS |      1 |    100 |    100 |00:00:00.01 |	3 |
|*  5 |    INDEX RANGE SCAN		      | T2_ID	|    100 |   1000K|    100 |00:00:00.01 |      13 |
-----------------------------------------------------------------------------------------------------------

select /*+ leading(t2 t1) use_hash(t1)  */
max(t1.data) from  t1, t2 
where 
      t1.id = t2.id +1 
   and t1.clus = 1  ;
select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST'));

--------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation			      | Name	| Starts | E-Rows | A-Rows |   A-Time	| Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT		      | 	|      1 |	  |	 1 |00:00:00.22 |    3255 |	  |	  ||
|   1 |  SORT AGGREGATE 		      | 	|      1 |	1 |	 1 |00:00:00.22 |    3255 |	  |	  ||
|*  2 |   HASH JOIN			      | 	|      1 |     78 |    100 |00:00:00.22 |    3255 |    50M|  9345K|   49M (0)|
|   3 |    TABLE ACCESS FULL		      | T2	|      1 |   1000K|   1000K|00:00:00.03 |    3251 |	  |	  ||
|   4 |    TABLE ACCESS BY INDEX ROWID BATCHED| T1	|      1 |    100 |    100 |00:00:00.01 |	4 |	  |	  ||
|*  5 |     INDEX RANGE SCAN		      | T1_CLUS |      1 |    100 |    100 |00:00:00.01 |	3 |	  |	  ||
-------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("T1"."ID"="T2"."ID"+1)
   5 - access("T1"."CLUS"=1)

select /*+ leading(t2 t1) use_hash(t1)  */
max(t1.data) from  t1, t2 
where 
      t1.id = t2.id 
   and t1.clus = 1  ;
select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST'));

--------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation			      | Name	| Starts | E-Rows | A-Rows |   A-Time	| Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT		      | 	|      1 |	  |	 1 |00:00:01.25 |    2246 |	  |	  ||
|   1 |  SORT AGGREGATE 		      | 	|      1 |	1 |	 1 |00:00:01.25 |    2246 |	  |	  ||
|*  2 |   HASH JOIN			      | 	|      1 |    100 |    100 |00:00:01.25 |    2246 |    43M|  6111K|   42M (0)|
|   3 |    INDEX FAST FULL SCAN 	      | T2_ID	|      1 |   1000K|   1000K|00:00:00.31 |    2242 |	  |	  ||
|   4 |    TABLE ACCESS BY INDEX ROWID BATCHED| T1	|      1 |    100 |    100 |00:00:00.01 |	4 |	  |	  ||
|*  5 |     INDEX RANGE SCAN		      | T1_CLUS |      1 |    100 |    100 |00:00:00.01 |	3 |	  |	  ||
--------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("T1"."ID"="T2"."ID")
   5 - access("T1"."CLUS"=1)

For reference, the order of NL and HJ access and the corresponding hints:

 

[Screenshot: NL and HJ join order and hints]

Reference:

 

http://datavirtualizer.com/right-deep-left-deep-and-bushy-joins-in-sql/

 

Followup:

The Hash Join put about 7x as much load on the DB (0.22 AAS) as the nested loops (0.03 AAS), with both doing 178 executions/sec.
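
(The AAS figures come from a monitoring tool; a rough way to approximate them yourself from ASH, with placeholder sql_id values, would be something like the following, since ASH samples each active session once per second.)

select sql_id, round(count(*)/60, 2) approx_aas
from   v$active_session_history
where  sample_time > sysdate - interval '1' minute
and    sql_id in ('&hj_sql_id', '&nl_sql_id')
group  by sql_id;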

First, the load chart image with the top SQL drill-down filtered to the efficient HJ version. The HJ plan is shown here.

 

-------------------------------------------------------------------------------------------------------------------------------- 
| Id  | Operation			      | Name	| Starts | E-Rows | A-Rows |   A-Time	| Buffers |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------- 
|   0 | SELECT STATEMENT		      | 	|      1 |	  |	 1 |00:00:00.01 |      10 |	  |	  |
|   1 |  SORT AGGREGATE 		      | 	|      1 |	1 |	 1 |00:00:00.01 |      10 |	  |	  ||
|*  2 |   HASH JOIN SEMI		      | 	|      1 |    100 |    100 |00:00:00.01 |      10 |  1753K|  1753K| 1408K (0)|
|   3 |    TABLE ACCESS BY INDEX ROWID BATCHED| T1	|      1 |    100 |    100 |00:00:00.01 |	4 |	  |	  ||
|*  4 |     INDEX RANGE SCAN		      | T1_CLUS |      1 |    100 |    100 |00:00:00.01 |	3 |	  |	  ||
|   5 |    TABLE ACCESS FULL		      | T2	|      1 |   1000K|   1212 |00:00:00.01 |	6 |	  |	  ||
-------------------------------------------------------------------------------------------------------------------------------- 

 

[Screenshot: load chart and top SQL drill-down for the HJ version]

The second image is the load filtered to the NL join, and here is the plan.

-----------------------------------------------------------------------------------------------------------
| Id  | Operation			      | Name	| Starts | E-Rows | A-Rows |   A-Time	| Buffers |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT		      | 	|      1 |	  |	 1 |00:00:00.01 |      17 |
|   1 |  SORT AGGREGATE 		      | 	|      1 |	1 |	 1 |00:00:00.01 |      17 |
|   2 |   NESTED LOOPS SEMI		      | 	|      1 |    100 |    100 |00:00:00.01 |      17 |
|   3 |    TABLE ACCESS BY INDEX ROWID BATCHED| T1	|      1 |    100 |    100 |00:00:00.01 |	4 |
|*  4 |     INDEX RANGE SCAN		      | T1_CLUS |      1 |    100 |    100 |00:00:00.01 |	3 |
|*  5 |    INDEX RANGE SCAN		      | T2_ID	|    100 |   1000K|    100 |00:00:00.01 |      13 |
-----------------------------------------------------------------------------------------------------------

 

[Screenshot: load chart and top SQL drill-down for the NL version]

 

The following hash join, which hashes T2, was not as efficient as hashing the T1 results, but it confused me: in the explain plan output it looked like reading T2 via its index was more expensive than avoiding the index. It looks like it's the other way around under actual load.

Even though avoiding the T2 index shows an A-Time of 0.22 and using the T2 index shows 1.25, under a concurrent load it looks the opposite.

The first example below avoids the index on T2 (via the "+1" on the join column) and has an AAS of 10.36 and an elapsed time of 3.7 secs (instead of 0.22 in the explain output).

The second, which uses the index on T2, has an AAS of 8.2 and an elapsed time of 3.1 secs (instead of 1.25 in the explain output).

Both are executing 2.24 executions/sec.

 

select /*+ leading(t2 t1) use_hash(t1)  */
max(t1.data) from  t1, t2 
where 
      t1.id = t2.id +1 
   and t1.clus = 1  ;
select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST'));

--------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation			      | Name	| Starts | E-Rows | A-Rows |   A-Time	| Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT		      | 	|      1 |	  |	 1 |00:00:00.22 |    3255 |	  |	  ||
|   1 |  SORT AGGREGATE 		      | 	|      1 |	1 |	 1 |00:00:00.22 |    3255 |	  |	  ||
|*  2 |   HASH JOIN			      | 	|      1 |     78 |    100 |00:00:00.22 |    3255 |    50M|  9345K|   49M (0)|
|   3 |    TABLE ACCESS FULL		      | T2	|      1 |   1000K|   1000K|00:00:00.03 |    3251 |	  |	  ||
|   4 |    TABLE ACCESS BY INDEX ROWID BATCHED| T1	|      1 |    100 |    100 |00:00:00.01 |	4 |	  |	  ||
|*  5 |     INDEX RANGE SCAN		      | T1_CLUS |      1 |    100 |    100 |00:00:00.01 |	3 |	  |	  ||
-------------------------------------------------------------------------------------------------------------------------------


[Screenshot: load chart for the HJ avoiding the T2 index]

--------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation			      | Name	| Starts | E-Rows | A-Rows |   A-Time	| Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT		      | 	|      1 |	  |	 1 |00:00:01.25 |    2246 |	  |	  ||
|   1 |  SORT AGGREGATE 		      | 	|      1 |	1 |	 1 |00:00:01.25 |    2246 |	  |	  ||
|*  2 |   HASH JOIN			      | 	|      1 |    100 |    100 |00:00:01.25 |    2246 |    43M|  6111K|   42M (0)|
|   3 |    INDEX FAST FULL SCAN 	      | T2_ID	|      1 |   1000K|   1000K|00:00:00.31 |    2242 |	  |	  ||
|   4 |    TABLE ACCESS BY INDEX ROWID BATCHED| T1	|      1 |    100 |    100 |00:00:00.01 |	4 |	  |	  ||
|*  5 |     INDEX RANGE SCAN		      | T1_CLUS |      1 |    100 |    100 |00:00:00.01 |	3 |	  |	  ||
--------------------------------------------------------------------------------------------------------------------------------

[Screenshot: load chart for the HJ using the T2 index]

Reference

http://datavirtualizer.com/right-deep-left-deep-and-bushy-joins-in-sql/

http://www.nocoug.org/download/2012-08/Jonathan_Lewis_Creating_Tests.pdf

Virtual Conference Update: Systematic Oracle SQL Optimization in 2020

The state of affairs in the entire world has changed radically since last month, when we announced the next Oracle SQL Optimization event in the virtual conference series. We've been keeping an eye on the trajectory of the developments and decided to make some changes too.
So, here’s the update: we are going to run the Oracle SQL Optimization in 2020 virtual conference twice!
Additional dates: these are the existing + new dates for the event:


Video : Online Segment Shrink for Tables : Free Unused Space

In today’s video we’ll give a demonstration of how to shrink tables that contain a lot of free space. As I say in the video, this is not something you should do regularly. It’s only necessary if you’ve done some drastic one-off maintenance, like a large data purge maybe.
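
For reference, the basic commands look something like this (the table name is illustrative, and row movement must be enabled before a shrink):

alter table big_table enable row movement;

-- Reorganise the rows, but keep the current high water mark.
alter table big_table shrink space compact;

-- Complete the shrink and lower the high water mark.
alter table big_table shrink space;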

There are a few articles this relates to.

The star of today's video is Kellyn Pot'Vin-Gorman, who is returning to her Oracle roots, but on Azure.

ANSI hinting

I've made casual remarks in the past about how "ANSI"-style SQL introduces extra complications in labelling or identifying query blocks, which means it's harder to hint correctly. This is a note showing how the optimizer first transforms "ANSI" SQL into "Oracle" syntax. I'm going to write a simple 4-table join in classic Oracle form and check the execution plan with its query block names and fully qualified table aliases; then I'll translate it into the ANSI equivalent and repeat the check for query block names and aliases; finally I'll rewrite the query in classic Oracle syntax that reproduces the query block names and fully qualified table aliases we got from the ANSI form.

We start by creating and indexing 4 tables (with a script that I’ve been using for various tests for several years, but the results I’ll show come from 19c):

rem
rem     Script:         ansi_hint_3.sql
rem     Author:         Jonathan Lewis
rem     Dated:          June 2014
rem

create table t1
as
select 
        trunc((rownum-1)/4)     t1_n1,
        trunc((rownum-1)/4)     t1_n2,
        case mod(rownum,20) when 0 then rownum else -1 end      flagged1,
        rpad(rownum,180)        t1_v1
from all_objects 
where rownum <= 3000 --> comment to avoid wordpress format issue
;

create table t2
as
select 
        mod(rownum,200)         t2_n1,
        mod(rownum,200)         t2_n2,
        case mod(rownum,20) when 0 then rownum else -1 end      flagged2,
        rpad(rownum,180)        t2_v1
from all_objects 
where rownum <= 3000 --> comment to avoid wordpress format issue
;

create table t3
as
select 
        trunc((rownum-1)/4)     t3_n1,
        trunc((rownum-1)/4)     t3_n2,
        case mod(rownum,20) when 0 then rownum else -1 end      flagged3,
        rpad(rownum,180)        t3_v1
from all_objects 
where rownum <= 3000 --> comment to avoid wordpress format issue
;

create table t4
as
select 
        trunc((rownum-1)/4)     t4_n1,
        trunc((rownum-1)/4)     t4_n2,
        case mod(rownum,20) when 0 then rownum else -1 end      flagged4,
        rpad(rownum,180)        t4_v1
from all_objects 
where rownum <= 3000 --> comment to avoid wordpress format issue
;

create index t1_i1 on t1(t1_n1);
create index t2_i1 on t2(t2_n1);
create index t3_i1 on t3(t3_n1);
create index t4_i1 on t4(t4_n1);

Then we check the execution plan for a simple statement with what looks like a single named query block:


explain plan for
select
        /*+ qb_name(main) */
        *
from
        t1, t2, t3, t4
where
        t2.t2_n1 = t1.t1_n2
and     t3.t3_n1 = t2.t2_n2
and     t4.t4_n1 = t3.t3_n2
;

select * from table(dbms_xplan.display(null,null,'outline alias'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------
Plan hash value: 3619951144

-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |   192K|   140M|    61  (22)| 00:00:01 |
|*  1 |  HASH JOIN           |      |   192K|   140M|    61  (22)| 00:00:01 |
|   2 |   TABLE ACCESS FULL  | T4   |  3000 |   565K|    13   (8)| 00:00:01 |
|*  3 |   HASH JOIN          |      | 48000 |    26M|    41  (13)| 00:00:01 |
|   4 |    TABLE ACCESS FULL | T3   |  3000 |   565K|    13   (8)| 00:00:01 |
|*  5 |    HASH JOIN         |      | 12000 |  4500K|    26   (8)| 00:00:01 |
|   6 |     TABLE ACCESS FULL| T2   |  3000 |   559K|    13   (8)| 00:00:01 |
|   7 |     TABLE ACCESS FULL| T1   |  3000 |   565K|    13   (8)| 00:00:01 |
-----------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - MAIN
   2 - MAIN / T4@MAIN
   4 - MAIN / T3@MAIN
   6 - MAIN / T2@MAIN
   7 - MAIN / T1@MAIN

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      SWAP_JOIN_INPUTS(@"MAIN" "T4"@"MAIN")
      SWAP_JOIN_INPUTS(@"MAIN" "T3"@"MAIN")
      SWAP_JOIN_INPUTS(@"MAIN" "T2"@"MAIN")
      USE_HASH(@"MAIN" "T4"@"MAIN")
      USE_HASH(@"MAIN" "T3"@"MAIN")
      USE_HASH(@"MAIN" "T2"@"MAIN")
      LEADING(@"MAIN" "T1"@"MAIN" "T2"@"MAIN" "T3"@"MAIN" "T4"@"MAIN")
      FULL(@"MAIN" "T4"@"MAIN")
      FULL(@"MAIN" "T3"@"MAIN")
      FULL(@"MAIN" "T2"@"MAIN")
      FULL(@"MAIN" "T1"@"MAIN")
      OUTLINE_LEAF(@"MAIN")
      ALL_ROWS
      DB_VERSION('19.1.0')
      OPTIMIZER_FEATURES_ENABLE('19.1.0')
      IGNORE_OPTIM_EMBEDDED_HINTS
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T4"."T4_N1"="T3"."T3_N2")
   3 - access("T3"."T3_N1"="T2"."T2_N2")
   5 - access("T2"."T2_N1"="T1"."T1_N2")

Note in the Query Block Name / Object Alias information that all 4 tables were “sourced from”, or have aliases qualified by, “@MAIN”, and in the final plan all the tables are used in a query block called MAIN.
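
That consistency is what makes the traditional syntax easy to hint: the names in the plan are the names you supplied. As a purely illustrative sketch (not a recommendation), forcing nested loops through the same query using those names might look like this:

select
        /*+
            qb_name(main)
            leading(@main t1@main t2@main t3@main t4@main)
            use_nl(@main t2@main t3@main t4@main)
        */
        *
from
        t1, t2, t3, t4
where
        t2.t2_n1 = t1.t1_n2
and     t3.t3_n1 = t2.t2_n2
and     t4.t4_n1 = t3.t3_n2
;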

Now look at the basic ANSI equivalent:


explain plan for
select 
        /*+ qb_name(main) */
        *
from
        t1
join 
        t2
on      t2.t2_n1 = t1.t1_n2
join 
        t3
on      t3.t3_n1 = t2.t2_n2
join 
        t4
on      t4.t4_n1 = t3.t3_n2
;

select * from table(dbms_xplan.display(null,null,'outline alias'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------
Plan hash value: 3619951144

-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |   192K|   140M|    61  (22)| 00:00:01 |
|*  1 |  HASH JOIN           |      |   192K|   140M|    61  (22)| 00:00:01 |
|   2 |   TABLE ACCESS FULL  | T4   |  3000 |   565K|    13   (8)| 00:00:01 |
|*  3 |   HASH JOIN          |      | 48000 |    26M|    41  (13)| 00:00:01 |
|   4 |    TABLE ACCESS FULL | T3   |  3000 |   565K|    13   (8)| 00:00:01 |
|*  5 |    HASH JOIN         |      | 12000 |  4500K|    26   (8)| 00:00:01 |
|   6 |     TABLE ACCESS FULL| T2   |  3000 |   559K|    13   (8)| 00:00:01 |
|   7 |     TABLE ACCESS FULL| T1   |  3000 |   565K|    13   (8)| 00:00:01 |
-----------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$43767242
   2 - SEL$43767242 / T4@SEL$3
   4 - SEL$43767242 / T3@SEL$2
   6 - SEL$43767242 / T2@SEL$1
   7 - SEL$43767242 / T1@SEL$1

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      SWAP_JOIN_INPUTS(@"SEL$43767242" "T4"@"SEL$3")
      SWAP_JOIN_INPUTS(@"SEL$43767242" "T3"@"SEL$2")
      SWAP_JOIN_INPUTS(@"SEL$43767242" "T2"@"SEL$1")
      USE_HASH(@"SEL$43767242" "T4"@"SEL$3")
      USE_HASH(@"SEL$43767242" "T3"@"SEL$2")
      USE_HASH(@"SEL$43767242" "T2"@"SEL$1")
      LEADING(@"SEL$43767242" "T1"@"SEL$1" "T2"@"SEL$1" "T3"@"SEL$2" "T4"@"SEL$3")
      FULL(@"SEL$43767242" "T4"@"SEL$3")
      FULL(@"SEL$43767242" "T3"@"SEL$2")
      FULL(@"SEL$43767242" "T2"@"SEL$1")
      FULL(@"SEL$43767242" "T1"@"SEL$1")
      OUTLINE(@"SEL$1")
      OUTLINE(@"SEL$2")
      MERGE(@"SEL$1" >"SEL$2")
      OUTLINE(@"SEL$58A6D7F6")
      OUTLINE(@"SEL$3")
      MERGE(@"SEL$58A6D7F6" >"SEL$3")
      OUTLINE(@"SEL$9E43CB6E")
      OUTLINE(@"MAIN")
      MERGE(@"SEL$9E43CB6E" >"MAIN")
      OUTLINE_LEAF(@"SEL$43767242")
      ALL_ROWS
      DB_VERSION('19.1.0')
      OPTIMIZER_FEATURES_ENABLE('19.1.0')
      IGNORE_OPTIM_EMBEDDED_HINTS
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T4"."T4_N1"="T3"."T3_N2")
   3 - access("T3"."T3_N1"="T2"."T2_N2")
   5 - access("T2"."T2_N1"="T1"."T1_N2")

Check the Plan Hash Value: it gives you a strong clue that the execution plans are the same, and a close examination of the body of the plan and the Predicate Information confirms that the two queries operate in exactly the same way at exactly the same cost. But there's a significant difference in the query blocks and table aliases.

The Query Block Name / Object Alias information tells us that query block "main" has disappeared and the query operates completely from a query block with the internally generated name SEL$43767242; moreover we can see that tables t1 and t2 appear to be sourced from a query block called sel$1, while t3 comes from sel$2 and t4 comes from sel$3.
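
So if you wanted to hint the ANSI version directly you would have to use those internal names; a sketch (SEL$43767242 is the system-generated name taken from the output above, and there's no guarantee it survives changes to the query text):

select
        /*+
            leading(@sel$43767242 t1@sel$1 t2@sel$1 t3@sel$2 t4@sel$3)
            use_nl(@sel$43767242 t2@sel$1 t3@sel$2 t4@sel$3)
        */
        *
from
        t1
join    t2 on t2.t2_n1 = t1.t1_n2
join    t3 on t3.t3_n1 = t2.t2_n2
join    t4 on t4.t4_n1 = t3.t3_n2
;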

Finally here’s a messy Oracle form to reproduce the ANSI query block names and table aliases:


explain plan for
select  /*+ qb_name(main) */
        *
from    (
        select  /*+ qb_name(sel$3) */
                *
        from
                (
                select  /*+ qb_name(sel$2) */
                        *
                from    (
                        select 
                                /*+ qb_name(sel$1) */
                                *
                        from    
                                t1,
                                t2
                        where   t2.t2_n1 = t1.t1_n2
                        ) v1,
                        t3
                where   t3.t3_n1 = v1.t2_n2
                )       v2,
                t4
        where   t4.t4_n1 = v2.t3_n2
        )
;

select * from table(dbms_xplan.display(null,null,'outline alias'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------
Plan hash value: 3619951144

-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |   192K|   140M|    61  (22)| 00:00:01 |
|*  1 |  HASH JOIN           |      |   192K|   140M|    61  (22)| 00:00:01 |
|   2 |   TABLE ACCESS FULL  | T4   |  3000 |   565K|    13   (8)| 00:00:01 |
|*  3 |   HASH JOIN          |      | 48000 |    26M|    41  (13)| 00:00:01 |
|   4 |    TABLE ACCESS FULL | T3   |  3000 |   565K|    13   (8)| 00:00:01 |
|*  5 |    HASH JOIN         |      | 12000 |  4500K|    26   (8)| 00:00:01 |
|   6 |     TABLE ACCESS FULL| T2   |  3000 |   559K|    13   (8)| 00:00:01 |
|   7 |     TABLE ACCESS FULL| T1   |  3000 |   565K|    13   (8)| 00:00:01 |
-----------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$43767242
   2 - SEL$43767242 / T4@SEL$3
   4 - SEL$43767242 / T3@SEL$2
   6 - SEL$43767242 / T2@SEL$1
   7 - SEL$43767242 / T1@SEL$1

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      SWAP_JOIN_INPUTS(@"SEL$43767242" "T4"@"SEL$3")
      SWAP_JOIN_INPUTS(@"SEL$43767242" "T3"@"SEL$2")
      SWAP_JOIN_INPUTS(@"SEL$43767242" "T2"@"SEL$1")
      USE_HASH(@"SEL$43767242" "T4"@"SEL$3")
      USE_HASH(@"SEL$43767242" "T3"@"SEL$2")
      USE_HASH(@"SEL$43767242" "T2"@"SEL$1")
      LEADING(@"SEL$43767242" "T1"@"SEL$1" "T2"@"SEL$1" "T3"@"SEL$2"
              "T4"@"SEL$3")
      FULL(@"SEL$43767242" "T4"@"SEL$3")
      FULL(@"SEL$43767242" "T3"@"SEL$2")
      FULL(@"SEL$43767242" "T2"@"SEL$1")
      FULL(@"SEL$43767242" "T1"@"SEL$1")
      OUTLINE(@"SEL$1")
      OUTLINE(@"SEL$2")
      MERGE(@"SEL$1" >"SEL$2")
      OUTLINE(@"SEL$58A6D7F6")
      OUTLINE(@"SEL$3")
      MERGE(@"SEL$58A6D7F6" >"SEL$3")
      OUTLINE(@"SEL$9E43CB6E")
      OUTLINE(@"MAIN")
      MERGE(@"SEL$9E43CB6E" >"MAIN")
      OUTLINE_LEAF(@"SEL$43767242")
      ALL_ROWS
      DB_VERSION('19.1.0')
      OPTIMIZER_FEATURES_ENABLE('19.1.0')
      IGNORE_OPTIM_EMBEDDED_HINTS
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T4"."T4_N1"="T3"."T3_N2")
   3 - access("T3"."T3_N1"="T2"."T2_N2")
   5 - access("T2"."T2_N1"="T1"."T1_N2")

Again a quick check of the Plan Hash Value confirms that the messier query is a match for the previous query with its ANSI transformation, and the plan body and Query Block Name / Object Alias information confirm that the naming matches throughout.

Any time you write ANSI syntax this layering of nested inline views is what happens to your query before any other transformation is applied – and sometimes (though very rarely in recent versions of Oracle) this can result in unexpected limitations in the way the optimizer subsequently transforms the query.

Apart from “accidents”, though, the big issue with the “ANSI rewrite” comes from the side effects of all the extra query blocks. In anything but the simplest cases you have to work a little harder to figure out the query block names you need to use if you want to apply hints to fix an optimizer problem – you can’t create your own meaningful names for every query block in the query you wrote. Fortunately this task is made a little easier if you check the execution plan of the query after adding the hint /*+ no_query_transformation */, as this tends to produce a plan that looks like a step by step “translation” of the way the query was written (apart from the ANSI transformation, of course). This might be enough to identify the base-level query blocks that the optimizer starts with when you use ANSI syntax.
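
For example, a minimal sketch of that check against the ANSI query above:

explain plan for
select
        /*+ qb_name(main) no_query_transformation */
        *
from
        t1
join    t2 on t2.t2_n1 = t1.t1_n2
join    t3 on t3.t3_n1 = t2.t2_n2
join    t4 on t4.t4_n1 = t3.t3_n2
;

select * from table(dbms_xplan.display(null,null,'outline alias'));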

 

On Working Remotely: An Automattic Reader

How does a distributed company — a group of people with shared business goals but spread out around the world, representing different cultures, family settings, and local health considerations — stick together during a major health crisis like the COVID-19 pandemic?

We don’t intend to make it sound easy. And we are aware — from our families, our communities, the businesses we support, and our customers — that many, if not most companies cannot actually work 100 percent remotely because of the nature of their business.

For those who can transition to distributed work in the wake of this evolving crisis, we wanted to suggest ideas that might help colleagues work well together even when you’re no longer all sharing the same physical space.

We’re lucky that many Automatticians have shared advice and best practices based on their many years of working from home — and we’ve compiled some of these resources below to empower others to listen to and support their coworkers during a difficult and disruptive time.

Of course, from his first post on remote work to his most recent one reflecting on the COVID-19 pandemic, to his Distributed podcast and beyond, founder and CEO Matt Mullenweg is a prominent voice on remote work and distributed culture. To send you off on a lighter note, Matt shares regular “What’s In My Bag” posts.

We hope these resources are helpful to you during these trying times, and that you and everyone in your communities stay safe.

Oracle rowcache fastgets

This blogpost is about the Oracle database row or dictionary cache. This is a separate cache that caches database metadata, like database object properties or user properties.

There is surprisingly little in-depth technical detail available about the row cache. To some degree I understand why: issues with the row cache are rare.

I noticed a column in V$ROWCACHE called ‘FASTGETS’. Whatever FASTGETS means, in my database it is being used:

SQL> select cache#, type, parameter, gets, fastgets, getmisses from v$rowcache;
    CACHE# TYPE        PARAMETER                              GETS   FASTGETS  GETMISSES
---------- ----------- -------------------------------- ---------- ---------- ----------
         3 PARENT      dc_rollback_segments                   1000          9         12
         1 PARENT      dc_free_extents                           0          0          0
         4 PARENT      dc_used_extents                           0          0          0
         2 PARENT      dc_segments                            4374       7308       2496
         0 PARENT      dc_tablespaces                         1232      14466        178
         5 PARENT      dc_tablespace_quotas                      0          0          0
         6 PARENT      dc_files                                  5          0          5
        10 PARENT      dc_users                                953      13473        196
         8 PARENT      dc_objects                            32545      13682       3769
        17 PARENT      dc_global_oids                          723          0        165
        12 PARENT      dc_constraints                            0          0          0
        13 PARENT      dc_sequences                              9          0          9
        16 PARENT      dc_histogram_defs                     60851          0      19613

If you look at the description of FASTGETS in the documentation (currently the 20c version), it says: ‘Reserved for internal use’. That’s not very helpful.

Luckily, there is a trace event to make Oracle print additional information regarding the row cache: event 10222.

Please mind that setting events is an unsupported action unless you have the explicit blessing of Oracle support. Even then, an event should only be set if you are 100% sure you need it, and you should unset it as soon as you can. Also document the exact reason for setting it, and re-evaluate any set events during patching and version upgrades.
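
With that caveat in place, setting and unsetting the event for your own session looks like this (level 1 is my assumption; use whatever level you have been advised to use):

alter session set events '10222 trace name context forever, level 1';

-- run the statements you want traced, then:

alter session set events '10222 trace name context off';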

Using the event, I found that a normal dictionary lookup looks like this:

kqrpre: start hash=6d2c659c mode=S keyIndex=1 dur=CALL opt=FALSE
kqrpre: found cached object po=0x7e5da720 flg=2
kqrmpin : kqrpre: found po Pin po 0x7e5da720 cid=10 flg=2 hash=70b9dd3f
time=3663934334
kqrpre: pinned po=0x7e5da720 flg=2 pso=0x793472c8
kqrpre: done po=0x7e5da720 cid=10 flg=2 eq=0x7e616a20 pso=0x793472c8 dur=CALL
kqrpre: keyIndex=1 hash=70b9dd3f 6d2c659c 0
kqrpre: time=3663934346
kqrpre: returnVal: TRUE
.
kqrprl: eq=0x7e616a20 fpInUse=FALSE
ksedsts()+418<-kqrprl()+1208<-kkdlGetBaseUser()+248<-kzulgt1()+220<-kksLockUserSchema()+125<-kksLoadChild()+1983<-kxsGetRuntimeLock()+2049<-kksfbc()+17007<-kkspsc0()+2222<-kksParseCursor()+123<-opiosq0()+2266<-kpoopr
x()+420<-kpoal8()+838<-opiodr()+1244<-ttcpip()+1239
<-opitsk()+1943<-opiino()+957<-opiodr()+1244<-opidrv()+1091<-sou2o()+191<-opimai_real()+455<-ssthrdmain()+423<-main()+275<-__libc_start_main()+245.

This is a bit cryptic, but it shows a row cache parent read (kqrpre) and release (kqrprl) for cache id (cid) 10 (dc_users, see the v$rowcache output above), for which the needed entry was found ('found cached object'). It even contains a short stack dump of the current place of invocation.

Now see the following lookup:

kqrpre: start hash=70b9dd3f mode=S keyIndex=0 dur=CALL opt=TRUE
kqrpre: optimistic lookup hash=70b9dd3f
kqrpre: KQR_READ_PD_VER: versionAddr=0x628f2a20 version=10
kqrpre: optimistic lookup: success
kqrpre: done po=0x7f651cbb1bf8 cid=10 flg=0 eq=(nil) pso=0x79360288 dur=CALL
kqrpre: keyIndex=0 hash=70b9dd3f 6d2c659c 0
kqrpre: time=3663936115
kqrpre: returnVal: TRUE
.
kqrprl: eq=0x7f651cbb1bf8 fpInUse=TRUE
ksedsts()+418<-kqrprl()+1208<-kkdlGetUserId()+587<-kqlhdlod()+641<-kqlCallback()+120<-kqllod()+667<-kglLoadOnLock()+1107<-kgllkal()+861<-kglLock()+1429<-kglget()+302<-kglgob()+328<-qcdlgbo()+609<-qcdlgob()+982<-qcsfgob()+278<-qcsprfro()+553<-qcsprfro_tree()+380
<-qcsprfro_tree()+150<-qcspafq()+246<-qcspqbDescendents()+281<-qcspqb()+272<-kkmdrv()+192<-opiSem()+1962<-opiDeferredSem()+447<-opitca()+436<-kksFullTypeCheck()+86<-rpiswu2()+583<-kksLoadChild()+7969<-kxsGetRuntimeLock()+2049<-kksfbc()+17007<-kkspsc0()+2222
<-kksParseCursor()+123<-opiosq0()+2266<-kpooprx()+420<-kpoal8()+838<-opiodr()+1244<-ttcpip()+1239<-opitsk()+1943<-opiino()+957<-opiodr()+1244<-opidrv()+1091<-sou2o()+191<-opimai_real()+455<-ssthrdmain()+423<-main()+275<-__libc_start_main()+245.

This is a lookup for cid 10 (dc_users) too, but now "optimistic", as can be seen on the second line. The first line shows "opt=TRUE", which suggests that optimistic reading is not decided dynamically but requested explicitly by the caller. Could this "optimistic" read be what is shown as FASTGETS in v$rowcache?

Let’s look at the actual C functions that are called for such an optimistic row cache read:

> kqrpre1(0xa, 0x7ffc48561380, ...)
| > kqrpre2(0xa, 0x7ffc48561380, ...)
| | > kqrhsh(0x62f471c8, 0x7ffc48561380, ...)
| | | > kgghash(0x7ffc48561380, 0x82, ...)
| | | < kgghash+0x0000000000d0 returns: 0xc34886c9
| | < kqrhsh+0x00000000007d returns: 0xc34886c9
| | > KQR_READ_PD_VER(0x62f471c8, 0x628c7670, ...)
| | < KQR_READ_PD_VER+0x000000000044 returns: 0x4a
| | > kqrFastPoRead(0x62f471c8, 0x7ffc48561380, ...)
| | | > _intel_fast_memcmp(0x7ffc48561380, 0x78fc6834, ...)
| | | < _intel_fast_memcmp+0x000000000053 returns: 0
| | | > kglInstanceIdn(0x7f1bc9f099a0, 0x78fc6800, ...)
| | | <> kglHandleIdn(0x7f1bc9f099a0, 0x7c736fb0, ...)
| | | < kglHandleIdn+0x000000000011 returns: 0x1
| | | > kglPdbInstanceIdn(0x7f1bc9f099a0, 0x7c736fb0, ...)
| | | < kglPdbInstanceIdn+0x000000000024 returns: 0x1
| | < kqrFastPoRead+0x0000000001ac returns: 0x1

Here we see kqrpre1 call kqrpre2, which calls kqrhsh to calculate a hash value to look up the row cache entry. Then we see KQR_READ_PD_VER being called, which is unique to the optimistic lookup, and then kqrFastPoRead. That seems a reasonably definite indication that this is what performs the FASTGET.

The logical question to ask then is what is the actual difference between a fastget and a (regular) get?

Looking through the function calls of a fastget versus a regular get, what strikes me is the absence of concurrency checking and protection in the fastget, alias optimistic, call. A regular row cache get performs:
– obtains the row cache hash bucket mutex
– obtains the row cache parent object mutex
– pins the (row cache entry) memory
– frees the row cache hash bucket mutex
– locks the row cache entry (kqrget)
– frees the row cache parent object mutex

– the row cache data is used (this is the actual purpose of the row cache get)

– obtains the row cache parent object mutex
– unpins the (row cache entry) memory
– frees the row cache parent object mutex
– frees the lock (there is no explicit unlock call, I assume it’s done via the state object)

A fastget simply does not perform a lot of these concurrency protection measures: it checks the version of the entry before and after reading it, and apparently that is enough to be sure of the consistency of the row cache entry. It does lock the entry, and releases it after use. Especially since the number of row cache lookups can easily get very high, this probably saves a lot of CPU cycles.

Looking at the bigger picture, the fastget seems to be done at times when the database executable has already fetched the same entry in the regular way earlier, so it's safe to assume the entry will still be there, and safe to read it without all the concurrency control around it.

Is there anything you can do about it? Not that I know of; this is just a blog post explaining something for which there is no documentation.

In case you're wondering when this appeared in Oracle, the answer is 11.2.0.4: that is the version in which the FASTGETS column appeared, as well as the C function kqrFastPoRead.

There’s also the undocumented parameter “_kqr_optimistic_reads”, which is TRUE in my (Oracle 18) instance.
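
Because it is an underscore parameter it does not show up in V$PARAMETER by default; the usual (equally undocumented) way to look at it is via the X$ views as SYS, something like:

select x.ksppinm  parameter, y.ksppstvl  value
from   x$ksppi x, x$ksppcv y
where  x.indx = y.indx
and    x.ksppinm = '_kqr_optimistic_reads';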