Oakies Blog Aggregator

Tracing Errors

This is a little lesson in trouble-shooting. It assumes you have the privilege to generate and edit trace files, and all I’m going to do is show how I worked out the answer to a fairly simple question that appeared recently on the Oracle Developer Community forum.

I have a table t1 which is reasonably large (1M rows) with a column v30, and I’ve issued the command:

rem
rem     Script:         drop_column.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jan 2003
rem

alter table t1 set unused column v30;
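(The build script isn’t shown in the original; if you want to reproduce this, a hypothetical t1 along these lines – 1M rows, a 30-character v30, plus an invented padding column – would fit the description.)

create table t1
as
select
        rownum                  id,
        rpad('x',30,'x')        v30,
        rpad('x',100,'x')       padding
from
        dual
connect by
        level <= 1e6
;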

After this I’ve issued another command, pressed return, and immediately hit ctrl-C – I’ve got a brief window for this interrupt as the command is going to generate roughly 230MB of redo. If you want to try the experiment it might take a couple of attempts to get the timing right.

alter table t1
        drop unused columns
        checkpoint 1000
/
{ctrl-C}

Then I’ve tried to drop the table with the following result:

drop table t1
           *
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-12986: columns in partially dropped state. Submit ALTER TABLE DROP COLUMNS CONTINUE

This emulates the problem that appeared on the forum but the OP didn’t really want to issue the “continue” command because their table was “large” and the drop had been running for 24 hours when it was interrupted. It’s worth noting that four columns had been specified as unused, which means the drop command was probably generating roughly 1KB of redo per row. It’s also worth mentioning that the table was 600 columns wide – so there may have been a few performance side effects due to the multiple (minimum 3) row-pieces per row and row-chaining.

Ignoring any questions about the possible impact of having 600 columns, the key question in this case is:

  • Why isn’t it possible to drop the table (the manuals suggest it should be possible at this point)?
  • Is there a way to bypass the problem?

This is a repeatable error – we can try to drop the table as many times as we like and we will get the same error reported in a fraction of a second – so an obvious strategy is to enable tracing and see if the trace file can tell us anything about why the error is happening. This is a particularly sensible strategy to follow since error ORA-00604 is telling us that there’s something wrong in the recursive SQL, which means there’s a very good chance that we will actually find in the trace file the recursive SQL statement that triggers the error.
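For example, a minimal way to enable basic tracing from SQL*Plus might look like this (the tracefile_identifier is only there to make the trace file easy to find; event 10046 at level 1 is equivalent to plain sql_trace):

alter session set tracefile_identifier = 'drop_t1';
alter session set events '10046 trace name context forever, level 1';

drop table t1;

alter session set events '10046 trace name context off';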

So, enable sql_trace (or set event 10046), or do whatever you usually do to enable basic tracing (no need for anything above level 1 just yet); try to execute the drop; then search the trace file for the word ERROR at the start of a line (“^ERROR”). Here’s what I found as the first match in my trace file:

ERROR #139987889121800:err=12986 tim=424276176462

Note that the err= value is the 12986 that was reported as the error under the ORA-00604 error. Sometimes it’s necessary to search backwards in the trace file until you find the matching cursor number (#139987889121800), but in this case it was already visible just a few lines further up the file. So here’s the fragment of the file around that ERROR line:


PARSING IN CURSOR #139987889121800 len=72 dep=1 uid=0 oct=15 lid=0 tim=424276175961 hv=1894278903 ad='75c06e38' sqlid='4wv7gq1sfhtrr'
ALTER TABLE "TEST_USER"."T1" RENAME TO "BIN$rhxBELdQGMXgUwEAAH93eQ==$0"
END OF STMT
PARSE #139987889121800:c=2896,e=3035,p=0,cr=4,cu=0,mis=1,r=0,dep=1,og=4,plh=0,tim=424276175960
EXEC #139987889121800:c=186,e=185,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=0,tim=424276176418
ERROR #139987889121800:err=12986 tim=424276176462
CLOSE #139987889121800:c=10,e=9,dep=1,type=0,tim=424276176712
EXEC #139987891478792:c=34686,e=35859,p=0,cr=73,cu=12,mis=0,r=0,dep=0,og=1,plh=0,tim=424276176774
ERROR #139987891478792:err=604 tim=424276176808

The error has come from an SQL statement that is trying to rename my table to some weird and wonderful name starting with the characters “BIN$” – that’s a “drop” command trying to move a table into the recycle bin by renaming it – and you’re not allowed to rename a table that is in a partially dropped state. So that’s why we get the error; and the obvious way to bypass it is “drop table t1 purge;” – which works.
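Sketched as commands, the two options are to bypass the recycle bin, or (accepting the redo volume and elapsed time) to finish the partial drop first:

drop table t1 purge;

-- or, the costly alternative the OP wanted to avoid:
-- alter table t1 drop columns continue;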

You’ll notice that I’ve included a couple of lines after the first ERROR. This is to show you the line that generated the ORA-00604 (err=604) error. It comes from cursor #139987891478792, and searching backwards up the file for that cursor number I get to:

PARSING IN CURSOR #139987891478792 len=13 dep=0 uid=107 oct=12 lid=107 tim=424276140765 hv=202649412 ad='7f517dd3fe00' sqlid='d47kdkn618bu4'
drop table t1
END OF STMT

That’s not a surprise, of course, but it is important to cross-check that you haven’t been chasing the wrong error. There are some cases where the Oracle code does something to see if an error will occur but has an exception handler so that the error never reaches the application, so you do need to check that the error you found first was the one that floated up to the top of the stack.

Footnote

From Oracle 12.2.0.1 you could arrange to read your own trace file – while your session is connected – through the dynamic performance view v$diag_trace_file_contents.
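As a sketch (assuming basic tracing is enabled in your session; depending on version you may need extra predicates, e.g. on the trace file name obtained from v$diag_info):

select  payload
from    v$diag_trace_file_contents
where   payload like 'ERROR%'
;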

 

AWR Flush Levels

From version 12.1.0.2 onward, for taking AWR snapshots, you have the choice between four AWR flush levels: BESTFIT, LITE, TYPICAL and ALL. If you check the Oracle Database documentation, you won’t find much information about the difference between them. The best you will find, in the PL/SQL Packages and Types Reference, is the following:

The flush level can be one of the following:

  • BESTFIT: Uses the default value depending on the type of snapshot being taken.
  • LITE: Lightweight snapshot. Only the most important statistics are collected. This is default for a pluggable database (PDB) and application container.
  • TYPICAL: Regular snapshot. Most of the statistics are collected. This is default for a container database root (CDB root) and non-CDB database.
  • ALL: Heavyweight snapshot. All the possible statistics are collected. This consumes a considerable amount of disk space and takes a long time to create.

Based on that information, and knowing that the default flush level of the CREATE_SNAPSHOT subprograms of the DBMS_WORKLOAD_REPOSITORY package is BESTFIT, we can infer that BESTFIT simply chooses LITE for PDB-level snapshots and TYPICAL for the others.
What I miss is concrete information about which data is stored for LITE, TYPICAL and ALL.
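So, for example, you can override BESTFIT by passing the level explicitly when taking a manual snapshot:

execute dbms_workload_repository.create_snapshot(flush_level => 'ALL')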

Finally, as of 19c, we have a way to learn more about that topic without having to trace what the CREATE_SNAPSHOT subprograms do. In fact, we can take advantage of three new data dictionary views (AWR_CDB_TABLE_SETTINGS, AWR_ROOT_TABLE_SETTINGS, AWR_PDB_TABLE_SETTINGS) to find out the minimum flush level that a snapshot must have to store data into a specific AWR table. Have a look at the following query and its output:

SELECT flush_level_val, count(*)
FROM awr_cdb_table_settings
GROUP BY flush_level_val

FLUSH_LEVEL_VAL   COUNT(*)
--------------- ----------
LITE                    50
TYPICAL                 71
ALL                      7
NOT APPLICABLE          33

Based on it we can infer that:

  • TYPICAL, compared to LITE, stores data into 71 additional tables.
  • ALL, compared to TYPICAL, stores data into 7 additional tables.

It goes without saying that you can also list the names of the tables. For example, the following are the ones associated with each level (please, do not ask how the database engine makes sense of something like “Datafile Group”):

LITE

SELECT table_name
FROM awr_cdb_table_settings
WHERE flush_level_val = 'LITE'
ORDER BY table_name

TABLE_NAME 
--------------------------
Datafile Group
SQL Group
Service Group
Undo Group
WRH$_ACTIVE_SESSION_HISTORY
WRH$_ASM_BAD_DISK
WRH$_ASM_DISKGROUP
WRH$_ASM_DISKGROUP_STAT
WRH$_ASM_DISK_STAT_SUMMARY
WRH$_BG_EVENT_SUMMARY
WRH$_BUFFER_POOL_STATISTICS
WRH$_CELL_CONFIG
WRH$_CELL_CONFIG_DETAIL
WRH$_CELL_DB
WRH$_CELL_DISK_SUMMARY
WRH$_CELL_GLOBAL
WRH$_CELL_GLOBAL_SUMMARY
WRH$_CELL_IOREASON
WRH$_CELL_OPEN_ALERTS
WRH$_CON_SYSSTAT
WRH$_CON_SYS_TIME_MODEL
WRH$_DB_CACHE_ADVICE
WRH$_DYN_REMASTER_STATS
WRH$_ENQUEUE_STAT
WRH$_EVENT_HISTOGRAM
WRH$_LATCH
WRH$_LATCH_MISSES_SUMMARY
WRH$_MEMORY_RESIZE_OPS
WRH$_MEM_DYNAMIC_COMP
WRH$_MTTR_TARGET_ADVICE
WRH$_MUTEX_SLEEP
WRH$_MVPARAMETER
WRH$_OSSTAT
WRH$_PARAMETER
WRH$_PGASTAT
WRH$_PGA_TARGET_ADVICE
WRH$_PROCESS_MEMORY_SUMMARY
WRH$_SERVICE_NAME
WRH$_SGA
WRH$_SGASTAT
WRH$_SQLSTAT
WRH$_SQLTEXT
WRH$_SQL_WORKAREA_HISTOGRAM
WRH$_STREAMS_POOL_ADVICE
WRH$_SYSSTAT
WRH$_SYSTEM_EVENT
WRH$_SYS_TIME_MODEL
WRH$_THREAD
WRH$_UNDOSTAT
WRH$_WAITSTAT

TYPICAL

SELECT table_name
FROM awr_cdb_table_settings
WHERE flush_level_val = 'TYPICAL'
ORDER BY table_name

TABLE_NAME 
--------------------------
IM Segment Group
Resource Manager Metric Group
Segment Group
WRH$_AWR_TEST_1
WRH$_BUFFERED_QUEUES
WRH$_BUFFERED_SUBSCRIBERS
WRH$_CHANNEL_WAITS
WRH$_CLUSTER_INTERCON
WRH$_COMP_IOSTAT
WRH$_CON_SYSMETRIC_SUMMARY
WRH$_CON_SYSTEM_EVENT
WRH$_CR_BLOCK_SERVER
WRH$_CURRENT_BLOCK_SERVER
WRH$_DATAFILE
WRH$_DISPATCHER
WRH$_DLM_MISC
WRH$_FILEMETRIC_HISTORY
WRH$_IC_CLIENT_STATS
WRH$_IC_DEVICE_STATS
WRH$_IM_SEG_STAT
WRH$_IM_SEG_STAT_OBJ
WRH$_INSTANCE_RECOVERY
WRH$_INST_CACHE_TRANSFER
WRH$_INTERCONNECT_PINGS
WRH$_IOSTAT_DETAIL
WRH$_IOSTAT_FILETYPE
WRH$_IOSTAT_FUNCTION
WRH$_JAVA_POOL_ADVICE
WRH$_LIBRARYCACHE
WRH$_LMS_STATS
WRH$_LOG
WRH$_MEMORY_TARGET_ADVICE
WRH$_OPTIMIZER_ENV
WRH$_PERSISTENT_QMN_CACHE
WRH$_PERSISTENT_QUEUES
WRH$_PERSISTENT_SUBSCRIBERS
WRH$_PROCESS_WAITTIME
WRH$_RECOVERY_PROGRESS
WRH$_REPLICATION_TBL_STATS
WRH$_REPLICATION_TXN_STATS
WRH$_RESOURCE_LIMIT
WRH$_ROWCACHE_SUMMARY
WRH$_RSRC_CONSUMER_GROUP
WRH$_RSRC_METRIC
WRH$_RSRC_PDB_METRIC
WRH$_RSRC_PLAN
WRH$_RULE_SET
WRH$_SEG_STAT
WRH$_SEG_STAT_OBJ
WRH$_SERVICE_STAT
WRH$_SERVICE_WAIT_CLASS
WRH$_SESSMETRIC_HISTORY
WRH$_SESS_NETWORK
WRH$_SESS_SGA_STATS
WRH$_SESS_TIME_STATS
WRH$_SGA_TARGET_ADVICE
WRH$_SHARED_POOL_ADVICE
WRH$_SHARED_SERVER_SUMMARY
WRH$_SQL_BIND_METADATA
WRH$_SQL_PLAN
WRH$_SQL_SUMMARY
WRH$_STREAMS_APPLY_SUM
WRH$_STREAMS_CAPTURE
WRH$_SYSMETRIC_HISTORY
WRH$_SYSMETRIC_SUMMARY
WRH$_TABLESPACE
WRH$_TABLESPACE_SPACE_USAGE
WRH$_TABLESPACE_STAT
WRH$_WAITCLASSMETRIC_HISTORY
WRI$_SQLTEXT_REFCOUNT
WRM$_PDB_INSTANCE

ALL

SELECT table_name
FROM awr_cdb_table_settings
WHERE flush_level_val = 'ALL'
ORDER BY table_name

TABLE_NAME 
--------------------------
Tempfile Group
WRH$_CON_SYSMETRIC_HISTORY
WRH$_FILESTATXS
WRH$_LATCH_CHILDREN
WRH$_LATCH_PARENT
WRH$_TEMPFILE
WRH$_TEMPSTATXS

Based on this information you can guess what information is available with each flush level. For example, since WRH$_SQL_PLAN is included in TYPICAL, you can expect to not have execution plans with LITE.
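A quick way to check the requirement for a single table, using the same view as above:

SELECT table_name, flush_level_val
FROM awr_cdb_table_settings
WHERE table_name = 'WRH$_SQL_PLAN'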

Another (undocumented) feature available as of 19c is the possibility to change the minimum flush level associated with each AWR table. Specifically, you can use the MODIFY_TABLE_SETTINGS procedure of the DBMS_WORKLOAD_REPOSITORY package to override the default value of a table or, in case you already changed it, to restore the default value. The following PL/SQL blocks illustrate both.

  • Include the execution plans in the LITE snapshots:

BEGIN
  DBMS_WORKLOAD_REPOSITORY.MODIFY_TABLE_SETTINGS(
    table_name => 'WRH$_SQL_PLAN',
    flush_level => 'LITE'
  );
END;
/

  • Restore the default value for the WRH$_SQL_PLAN table:

BEGIN
  DBMS_WORKLOAD_REPOSITORY.MODIFY_TABLE_SETTINGS(
    table_name => 'WRH$_SQL_PLAN',
    flush_level => 'DEFAULT'
  );
END;
/

How to control resource usage on Autonomous

When you connect to your Autonomous Database, you get to choose from some predefined services. The services available depend on whether you are using a transaction processing (ATP) or a data warehouse (ADW) instance of the database, but, for example, for an ATP database you get the following:

[Screenshot from the Oracle documentation: the predefined database services for an ATP database]

Note: This is a screenshot of the docs as of the time of writing this post. That may change over time, so always be sure to consult the docs directly in future.

Naturally you’re probably going to be using TP for your day-to-day applications, which does not run with any parallelism. However, from time to time you might have an occasional need to run something in parallel, while staying within the confines of typical application usage. For example, your application might be showing data from a materialized view, but an authorised user might be entitled to conduct an on-demand full refresh of that materialized view if they need it to reflect the current state of the underlying raw data. Thus within the application you might want to allow them to refresh that materialized view via a button click, but also enlist all the power of parallel processing for the query that (re)generates that materialized view to make it as quick as possible.
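For example (the materialized view name here is hypothetical), the button click might simply run a complete refresh:

begin
  -- 'C' = complete refresh; MY_APP_MV is a made-up name
  dbms_mview.refresh(list => 'MY_APP_MV', method => 'C');
end;
/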

One of the cool things with ATP/ADW is that you can now accomplish this by temporarily changing your consumer group for these ad-hoc operations. The DBMS_SESSION package lets you change the consumer group without requiring you to reconnect to a different service. For example, let’s assume I have a huge table and an incredibly large join on that table to yield a result:


SQL> create table t as select d.* from dba_Objects d,
  2   ( select 1 from dual connect by level <= 100 );

Table created.

SQL> explain plan for
  2   select t5.owner, t3.object_type, count(*)
  3   from
  4    t t1,
  5    t t2,
  6    t t3,
  7    t t4,
  8    t t5,
  9    t t6
 10  where t1.object_id = t2.object_id
 11  and t2.object_id = t3.object_id
 12  and t3.object_id = t4.object_id
 13  and t4.object_id = t5.object_id
 14  and t5.object_id = t6.object_id
 15  group by t5.owner, t3.object_type;

Explained.

Assuming I’m connecting to the default TP service in ATP, this query runs in serial even though it’s a 6-way join on a 3.5 million row table.


SQL> SELECT * from dbms_xplan.display();

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------
Plan hash value: 632967984

-------------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |           |   821 | 39408 |       |    85M (51)| 00:55:28 |
|   1 |  HASH GROUP BY                    |           |   821 | 39408 |       |    85M (51)| 00:55:28 |
|*  2 |   HASH JOIN                       |           |   688G|    30T|    57M|    46M (10)| 00:30:12 |
|   3 |    TABLE ACCESS STORAGE FULL      | T         |  3558K|    16M|       |   818   (5)| 00:00:01 |
|*  4 |    HASH JOIN                      |           |  6838M|   273G|    57M|    43M  (5)| 00:28:03 |
|   5 |     TABLE ACCESS STORAGE FULL     | T         |  3558K|    16M|       |   818   (5)| 00:00:01 |
|*  6 |     HASH JOIN                     |           |    67M|  2461M|    23M|    43M  (5)| 00:28:01 |
|   7 |      VIEW                         | VW_GBF_71 |   674K|    16M|       |    43M  (5)| 00:28:01 |
|   8 |       HASH GROUP BY               |           |   674K|    14M|   945G|    43M  (5)| 00:28:01 |
|*  9 |        HASH JOIN                  |           |    36G|   738G|    57M|   150K (79)| 00:00:06 |
|  10 |         TABLE ACCESS STORAGE FULL | T         |  3558K|    16M|       |   818   (5)| 00:00:01 |
|* 11 |         HASH JOIN                 |           |   358M|  5807M|    57M|  3228  (39)| 00:00:01 |
|  12 |          TABLE ACCESS STORAGE FULL| T         |  3558K|    16M|       |   818   (5)| 00:00:01 |
|  13 |          TABLE ACCESS STORAGE FULL| T         |  3558K|    40M|       |   818   (5)| 00:00:01 |
|  14 |      TABLE ACCESS STORAGE FULL    | T         |  3558K|    44M|       |   822   (6)| 00:00:01 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("T1"."OBJECT_ID"="T2"."OBJECT_ID")
   4 - access("T2"."OBJECT_ID"="T3"."OBJECT_ID")
   6 - access("T3"."OBJECT_ID"="ITEM_1")
   9 - access("T4"."OBJECT_ID"="T5"."OBJECT_ID")
  11 - access("T5"."OBJECT_ID"="T6"."OBJECT_ID")

30 rows selected.

And for the vast majority of queries in applications connecting via the TP service, this is the correct way to go. A big query alone is not really justification to let your user community all go hammer-and-tongs smashing away at parallel queries. But perhaps for a suitably authorised user, you want to permit them to have more resources available to them. As I mentioned before, it could be refreshing a materialized view for the benefit of all other users. We can change the consumer group to allow this:


SQL> variable x varchar2(100)
SQL> exec dbms_session.switch_current_consumer_group('HIGH',:x,true);

PL/SQL procedure successfully completed.

Now the execution plan reveals the additional parallel facilities that have become available:


SQL> explain plan for ...

Explained.

SQL> SELECT * from dbms_xplan.display();

-----------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                    | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                             |           |   821 | 41871 |       |    22M (10)| 00:14:53 |        |      |            |
|   1 |  PX COORDINATOR                              |           |       |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)                        | :TQ10008  |   821 | 41871 |       |    22M (10)| 00:14:53 |  Q1,08 | P->S | QC (RAND)  |
|   3 |    HASH GROUP BY                             |           |   821 | 41871 |       |    22M (10)| 00:14:53 |  Q1,08 | PCWP |            |
|   4 |     PX RECEIVE                               |           |   821 | 41871 |       |    22M (10)| 00:14:53 |  Q1,08 | PCWP |            |
|   5 |      PX SEND HASH                            | :TQ10007  |   821 | 41871 |       |    22M (10)| 00:14:53 |  Q1,07 | P->P | HASH       |
|   6 |       HASH GROUP BY                          |           |   821 | 41871 |       |    22M (10)| 00:14:53 |  Q1,07 | PCWP |            |
|*  7 |        HASH JOIN                             |           |    20M|   997M|       |    22M (10)| 00:14:53 |  Q1,07 | PCWP |            |
|   8 |         VIEW                                 | VW_GBF_72 |   674K|    16M|       |    11M (10)| 00:07:21 |  Q1,07 | PCWP |            |
|   9 |          HASH GROUP BY                       |           |   674K|    14M|   945G|    11M (10)| 00:07:21 |  Q1,07 | PCWP |            |
|  10 |           PX RECEIVE                         |           |   674K|    14M|       |    11M (10)| 00:07:21 |  Q1,07 | PCWP |            |
|  11 |            PX SEND HASH                      | :TQ10005  |   674K|    14M|       |    11M (10)| 00:07:21 |  Q1,05 | P->P | HASH       |
|  12 |             HASH GROUP BY                    |           |   674K|    14M|   945G|    11M (10)| 00:07:21 |  Q1,05 | PCWP |            |
|* 13 |              HASH JOIN                       |           |    36G|   738G|       | 65679  (99)| 00:00:03 |  Q1,05 | PCWP |            |
|  14 |               PX RECEIVE                     |           |  3558K|    16M|       |   446   (3)| 00:00:01 |  Q1,05 | PCWP |            |
|  15 |                PX SEND BROADCAST             | :TQ10002  |  3558K|    16M|       |   446   (3)| 00:00:01 |  Q1,02 | P->P | BROADCAST  |
|  16 |                 PX BLOCK ITERATOR            |           |  3558K|    16M|       |   446   (3)| 00:00:01 |  Q1,02 | PCWC |            |
|  17 |                  TABLE ACCESS STORAGE FULL   | T         |  3558K|    16M|       |   446   (3)| 00:00:01 |  Q1,02 | PCWP |            |
|* 18 |               HASH JOIN                      |           |   358M|  5807M|       |  1533  (44)| 00:00:01 |  Q1,05 | PCWP |            |
|  19 |                PX BLOCK ITERATOR             |           |  3558K|    40M|       |   446   (3)| 00:00:01 |  Q1,05 | PCWC |            |
|  20 |                 TABLE ACCESS STORAGE FULL    | T         |  3558K|    40M|       |   446   (3)| 00:00:01 |  Q1,05 | PCWP |            |
|  21 |                PX RECEIVE                    |           |  3558K|    16M|       |   446   (3)| 00:00:01 |  Q1,05 | PCWP |            |
|  22 |                 PX SEND BROADCAST            | :TQ10003  |  3558K|    16M|       |   446   (3)| 00:00:01 |  Q1,03 | P->P | BROADCAST  |
|  23 |                  PX BLOCK ITERATOR           |           |  3558K|    16M|       |   446   (3)| 00:00:01 |  Q1,03 | PCWC |            |
|  24 |                   TABLE ACCESS STORAGE FULL  | T         |  3558K|    16M|       |   446   (3)| 00:00:01 |  Q1,03 | PCWP |            |
|  25 |         PX RECEIVE                           |           |  1074K|    26M|       |    11M  (9)| 00:07:32 |  Q1,07 | PCWP |            |
|  26 |          PX SEND BROADCAST                   | :TQ10006  |  1074K|    26M|       |    11M  (9)| 00:07:32 |  Q1,06 | P->P | BROADCAST  |
|  27 |           VIEW                               | VW_GBC_71 |  1074K|    26M|       |    11M  (9)| 00:07:32 |  Q1,06 | PCWP |            |
|  28 |            HASH GROUP BY                     |           |  1074K|    23M|  1078G|    11M  (9)| 00:07:32 |  Q1,06 | PCWP |            |
|  29 |             PX RECEIVE                       |           |  1074K|    23M|       |    11M  (9)| 00:07:32 |  Q1,06 | PCWP |            |
|  30 |              PX SEND HASH                    | :TQ10004  |  1074K|    23M|       |    11M  (9)| 00:07:32 |  Q1,04 | P->P | HASH       |
|  31 |               HASH GROUP BY                  |           |  1074K|    23M|  1078G|    11M  (9)| 00:07:32 |  Q1,04 | PCWP |            |
|* 32 |                HASH JOIN                     |           |    36G|   772G|       | 65681  (99)| 00:00:03 |  Q1,04 | PCWP |            |
|  33 |                 PX RECEIVE                   |           |  3558K|    16M|       |   446   (3)| 00:00:01 |  Q1,04 | PCWP |            |
|  34 |                  PX SEND BROADCAST           | :TQ10000  |  3558K|    16M|       |   446   (3)| 00:00:01 |  Q1,00 | P->P | BROADCAST  |
|  35 |                   PX BLOCK ITERATOR          |           |  3558K|    16M|       |   446   (3)| 00:00:01 |  Q1,00 | PCWC |            |
|  36 |                    TABLE ACCESS STORAGE FULL | T         |  3558K|    16M|       |   446   (3)| 00:00:01 |  Q1,00 | PCWP |            |
|* 37 |                 HASH JOIN                    |           |   358M|  6149M|       |  1536  (44)| 00:00:01 |  Q1,04 | PCWP |            |
|  38 |                  PX BLOCK ITERATOR           |           |  3558K|    44M|       |   449   (4)| 00:00:01 |  Q1,04 | PCWC |            |
|  39 |                   TABLE ACCESS STORAGE FULL  | T         |  3558K|    44M|       |   449   (4)| 00:00:01 |  Q1,04 | PCWP |            |
|  40 |                  PX RECEIVE                  |           |  3558K|    16M|       |   446   (3)| 00:00:01 |  Q1,04 | PCWP |            |
|  41 |                   PX SEND BROADCAST          | :TQ10001  |  3558K|    16M|       |   446   (3)| 00:00:01 |  Q1,01 | P->P | BROADCAST  |
|  42 |                    PX BLOCK ITERATOR         |           |  3558K|    16M|       |   446   (3)| 00:00:01 |  Q1,01 | PCWC |            |
|  43 |                     TABLE ACCESS STORAGE FULL| T         |  3558K|    16M|       |   446   (3)| 00:00:01 |  Q1,01 | PCWP |            |
-----------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   7 - access("ITEM_1"="ITEM_1")
  13 - access("T4"."OBJECT_ID"="T5"."OBJECT_ID")
  18 - access("T5"."OBJECT_ID"="T6"."OBJECT_ID")
  32 - access("T1"."OBJECT_ID"="T2"."OBJECT_ID")
  37 - access("T2"."OBJECT_ID"="T3"."OBJECT_ID")

Note
-----
   - automatic DOP: Computed Degree of Parallelism is 2 because of degree limit

63 rows selected.

In my case, the degree of parallelism was capped at 2 because I’m running a very small autonomous database. Also, this facility is not just for your applications: SQL Developer Web and Application Express also now support altering the consumer group for the session should your users need it.
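When the parallel work is done you would normally switch back; the bind :x captured the previous consumer group in the earlier call, so (as a sketch) you can hand it straight back:

SQL> variable y varchar2(100)
SQL> exec dbms_session.switch_current_consumer_group(:x,:y,true);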

Important Note: Temporarily changing the consumer group is not equivalent to connecting to a different service.

As the docs themselves note:

[Screenshot of the relevant note from the Oracle documentation]

If you have an application that always runs heavy-duty parallel queries and DML, then it naturally makes sense to use an appropriate service that best achieves that. For example, in a data warehouse instance it would make sense to run the resource-intensive Extract-Transform-Load processing in the Medium or High service, while, for that same database, users accessing the warehouse data with their Application Express (APEX) applications would do so in the default Low service. Being able to temporarily change the consumer group then adds the benefit of letting perhaps a couple of key APEX region queries run in parallel in order to give the best response times to customers, whilst keeping an appropriate balance between responsiveness and resource consumption from a holistic perspective. Consult the APEX documentation for how to do this via the “optimizer hint” attribute.

TL;DR: The combination of services and consumer groups gives you more flexibility with both ATP and ADW instances to best serve the needs of your applications and the customers that are using them.

Troubleshooting performance on Autonomous Database

By Franck Pachot

[Screenshot: Performance Hub overview of the unexpected activity]
On my Oracle Cloud Free Tier Autonomous Transaction Processing service, a database that can be used for free with no time limit, I have seen this strange activity. As I’m running nothing scheduled, I was surprised by this pattern and looked at it out of curiosity. And I got the idea to take some screenshots to show you how I look at those things. The easiest performance tool available in the Autonomous Database is the Performance Hub, which shows the activity through time with detail on multiple dimensions for drill-down analysis. This is based on ASH, of course.

In the upper pane, I focus on the part with homogeneous activity because I may view the content without the timeline and then want to compare the activity metric (Average Active Sessions) with the peak I observed. Without this, I might start to look at something that is not significant and waste my time. Here, where the activity is about 1 active session, I want to drill down on dimensions that account for around 0.8 active sessions to be sure to address 80% of the surprising activity. If the selected part included some idle time around it, I would not be able to do this easily.

[Screenshot: Performance Hub drill-down by Consumer Group, Wait Class and Wait Event]

The second pane lets me drill down either on 3 dimensions in a load map (we will see that later), or on one main dimension with the time axis (in this screenshot the dimension is “Consumer Group”) with two other dimensions below displayed without the time detail, here “Wait Class” and “Wait Event”. This is where I want to compare the activity (0.86 average active sessions on CPU) to the load I’m looking at, as I don’t have the time axis to see peaks and idle periods.

  • I see “Internal” for all “Session Attributes” ASH dimensions, like “Consumer Group”, “Module”, “Action”, “Client”, “Client Host Port”.
  • For the “Session Identifiers” ASH dimensions, I still see “internal” for “User Session”, “User Name” and “Program”.
  • “Parallel Process” shows “Serial” and “Session Type” shows “Foreground”, which doesn’t give me more information.

I have more information from “Resource Consumption”:

  • ASH Dimension “Wait Class”: mostly “CPU” and some “User I/O”
  • ASH Dimension “Wait Event”: the “User I/O” is “direct path read temp”

I’ll dig into those details later. There’s no direct detail for the CPU consumption. I’ll look at logical reads of course, and at the SQL plan, but I cannot directly match the CPU time with that, especially from Average Active Sessions where I don’t have the CPU time – I only have samples there. It may be easier with “User I/O” because those waits should show up in other dimensions.
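As a side note, a hedged first stab at the CPU component is to look at the top statements by CPU time in the shared pool (the “fetch first” syntax needs 12c or later):

select sql_id, round(cpu_time/1e6,1) cpu_seconds, buffer_gets, executions
from v$sqlstats
order by cpu_time desc
fetch first 5 rows only;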

There is no “Blocking Session”, but the ASH dimension “Object” gives interesting information:

  • ASH Dimension “Object”: SYS.SYS_LOB0000009134C00039$$ and SYS.SYS_LOB0000011038C00004$$ (LOB)

[Screenshot: Performance Hub ASH “Object” dimension]

I don’t know an easy way to copy/paste from the Performance Hub, so I generated an AWR report and found them in the Top DB Objects section:

Object ID  % Activity  Event             % Event  Object Name (Type)                   Tablespace  Container Name
---------  ----------  ----------------  -------  -----------------------------------  ----------  --------------------
     9135       24.11  direct path read    24.11  SYS.SYS_LOB0000009134C00039$$ (LOB)  SYSAUX      SUULFLFCSYX91Z0_ATP1
    11039       10.64  direct path read    10.64  SYS.SYS_LOB0000011038C00004$$ (LOB)  SYSAUX      SUULFLFCSYX91Z0_ATP1

That’s the beauty of ASH: in addition to showing you the load across multiple dimensions, it links all the dimensions together. Here, without guessing, I know that those objects are responsible for the “direct path read temp” I have seen above.

Let me insist on the numbers. I mentioned that I selected, in the upper chart, a homogeneous activity time window in order to compare the activity number with and without the time axis. My total activity during this time window is a little bit over 1 active session on average (AAS – Average Active Sessions). I can see this on the time chart y-axis, and I can confirm it if I sum up the aggregations on the other dimensions: above, CPU + User I/O was 0.86 + 0.37 = 1.23 when the selected part was around 1.25 active sessions. Here, looking at the “Object” dimension, I see around 0.5 sessions on SYS_LOB0000011038C00004$$ (green) during one minute, then around 0.3 sessions on SYS_LOB0000009134C00039$$ (blue) for 5 minutes, and no activity on objects during 1 minute. That matches approximately the 0.37 AAS on User I/O. In the AWR report this is displayed as “% Event”, and 24.11 + 10.64 = 34.75%, which is roughly the ratio of those 0.37 to 1.25 we had with Average Active Sessions. When looking at sampled activity details, it is important to keep in mind the weight of each component we look at.
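To cross-check those numbers outside the Performance Hub, you can approximate AAS from ASH yourself – v$active_session_history samples once per second, so dividing the sample count by the window length in seconds gives the average active sessions (the 10-minute window below is illustrative):

select decode(session_state, 'ON CPU', 'CPU', wait_class) activity,
       round(count(*) / (10 * 60), 2) aas
from v$active_session_history
where sample_time > systimestamp - interval '10' minute
group by decode(session_state, 'ON CPU', 'CPU', wait_class)
order by aas desc;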

Let’s get more detail about those objects, from SQL Developer Web, or any connection:


DEMO@atp1_tp> select owner,object_name,object_type,oracle_maintained from dba_objects 
where owner='SYS' and object_name in ('SYS_LOB0000009134C00039$$','SYS_LOB0000011038C00004$$');

   OWNER                  OBJECT_NAME    OBJECT_TYPE    ORACLE_MAINTAINED
________ ____________________________ ______________ ____________________
SYS      SYS_LOB0000009134C00039$$    LOB            Y
SYS      SYS_LOB0000011038C00004$$    LOB            Y

DEMO@atp1_tp> select owner,table_name,column_name,segment_name,tablespace_name from dba_lobs 
where owner='SYS' and segment_name in ('SYS_LOB0000009134C00039$$','SYS_LOB0000011038C00004$$');

   OWNER                TABLE_NAME    COLUMN_NAME                 SEGMENT_NAME    TABLESPACE_NAME
________ _________________________ ______________ ____________________________ __________________
SYS      WRI$_SQLSET_PLAN_LINES    OTHER_XML      SYS_LOB0000009134C00039$$    SYSAUX
SYS      WRH$_SQLTEXT              SQL_TEXT       SYS_LOB0000011038C00004$$    SYSAUX

Ok, that’s interesting information. It confirms why I see ‘internal’ everywhere: those are dictionary tables.

WRI$_SQLSET_PLAN_LINES is about SQL Tuning Sets and in 19c, especially with the Auto Index feature, the SQL statements are captured every 15 minutes and analyzed to find index candidates. A look at SQL Tuning Sets confirms this:


DEMO@atp1_tp> select sqlset_name,parsing_schema_name,count(*),dbms_xplan.format_number(sum(length(sql_text))),min(plan_timestamp)
from dba_sqlset_statements group by parsing_schema_name,sqlset_name order by count(*);


    SQLSET_NAME    PARSING_SCHEMA_NAME    COUNT(*)    DBMS_XPLAN.FORMAT_NUMBER(SUM(LENGTH(SQL_TEXT)))    MIN(PLAN_TIMESTAMP)
_______________ ______________________ ___________ __________________________________________________ ______________________
SYS_AUTO_STS    C##OMLIDM                        1 53                                                 30-APR-20
SYS_AUTO_STS    FLOWS_FILES                      1 103                                                18-JUL-20
SYS_AUTO_STS    DBSNMP                           6 646                                                26-MAY-20
SYS_AUTO_STS    XDB                              7 560                                                20-MAY-20
SYS_AUTO_STS    ORDS_PUBLIC_USER                 9 1989                                               30-APR-20
SYS_AUTO_STS    GUEST0001                       10 3656                                               20-MAY-20
SYS_AUTO_STS    CTXSYS                          12 1193                                               20-MAY-20
SYS_AUTO_STS    LBACSYS                         28 3273                                               30-APR-20
SYS_AUTO_STS    AUDSYS                          29 3146                                               26-MAY-20
SYS_AUTO_STS    ORDS_METADATA                   29 4204                                               20-MAY-20
SYS_AUTO_STS    C##ADP$SERVICE                  33 8886                                               11-AUG-20
SYS_AUTO_STS    MDSYS                           39 4964                                               20-MAY-20
SYS_AUTO_STS    DVSYS                           65 8935                                               30-APR-20
SYS_AUTO_STS    APEX_190200                    130 55465                                              30-APR-20
SYS_AUTO_STS    C##CLOUD$SERVICE               217 507K                                               30-APR-20
SYS_AUTO_STS    ADMIN                          245 205K                                               30-APR-20
SYS_AUTO_STS    DEMO                           628 320K                                               30-APR-20
SYS_AUTO_STS    APEX_200100                  2,218 590K                                               18-JUL-20
SYS_AUTO_STS    SYS                        106,690 338M                                               30-APR-20

All gathered by this SYS_AUTO_STS job. And the statements captured were parsed by SYS – a system job works hard because of system statements, as I mentioned when I saw this for the first time:

With this drill-down from the “Object” dimension, I’ve already gone far enough to get an idea about the problem: an internal job is reading the huge SQL Tuning Sets that have been collected by the Auto STS job introduced in 19c (and used by Auto Index). But I’ll continue to look at all other ASH Dimensions. They can give me more detail or at least confirm my guesses. That’s the idea: you look at all the dimensions and once one gives you interesting information, you dig down to more details.

I look at the “PL/SQL” ASH dimension first, because an application should call SQL from procedural code and not the opposite – and, as all this is internal code developed by Oracle, I expect it to be done this way.

  • ASH Dimension “PL/SQL”: I see ‘7322,38’
  • ASH Dimension “Top PL/SQL”: I see ‘19038,5’


Again, to avoid typos, I copied them from the AWR report’s “Top PL/SQL Procedures” section:

PL/SQL Entry Subprogram        % Activity   PL/SQL Current Subprogram     % Current   Container Name
UNKNOWN_PLSQL_ID <19038, 5>         78.72   SQL                               46.81   SUULFLFCSYX91Z0_ATP1
                                            UNKNOWN_PLSQL_ID <7322, 38>       31.21   SUULFLFCSYX91Z0_ATP1
UNKNOWN_PLSQL_ID <13644, 332>        2.13   SQL                                2.13   SUULFLFCSYX91Z0_ATP1
UNKNOWN_PLSQL_ID <30582, 1>          1.42   SQL                                1.42   SUULFLFCSYX91Z0_ATP1

Side note on the numbers: activity was 0.35 AAS on top-level PL/SQL and 0.33 AAS on current PL/SQL; the 0.33 is included within the 0.35, as a session active in PL/SQL is counted under its top-level call. In AWR (where “Entry” means “top-level”) you see them nested, including the SQL activity – this is why you see 78.72% here: it is the SQL + PL/SQL executed under the top-level call. But the procedure (7322,38) itself accounts for 31.21% of the total AAS, which matches the 0.33 AAS.

By the way, I didn’t mention it before, but this part of the AWR report is actually an ASH report included in the AWR HTML report.

Now let’s try to identify those procedures. I think the “UNKNOWN” comes from not finding them among the package procedures:


DEMO@atp1_tp> select * from dba_procedures where (object_id,subprogram_id) in ( (7322,38) , (19038,5) );

no rows selected

but I find them from DBA_OBJECTS:


DEMO@atp1_tp> select owner,object_name,object_id,object_type,oracle_maintained,last_ddl_time from dba_objects where object_id in (7322,19038);

   OWNER           OBJECT_NAME    OBJECT_ID    OBJECT_TYPE    ORACLE_MAINTAINED    LAST_DDL_TIME
________ _____________________ ____________ ______________ ____________________ ________________
SYS      XMLTYPE                      7,322 TYPE           Y                    18-JUL-20
SYS      DBMS_AUTOTASK_PRVT          19,038 PACKAGE        Y                    22-MAY-20

and DBA_PROCEDURES:


DEMO@atp1_tp> select owner,object_name,procedure_name,object_id,subprogram_id from dba_procedures where object_id in(7322,19038);


   OWNER                   OBJECT_NAME    PROCEDURE_NAME    OBJECT_ID    SUBPROGRAM_ID
________ _____________________________ _________________ ____________ ________________
SYS      DBMS_RESULT_CACHE_INTERNAL    RELIES_ON               19,038                1
SYS      DBMS_RESULT_CACHE_INTERNAL                            19,038                0

All this doesn’t match.

FK on delete

This is part 1 of a short reference note about the work load created by referential integrity constraints when you delete from a parent table. It was prompted by a question on the Oracle Developer Community forum about how to handle a very large delete from a table which (a) included a column of type CLOB and (b) had 9 child tables.

The 9 referential integrity constraints were declared with “on delete cascade”, but the delete was taking too long even though all the related child data had been deleted before the parent delete. In outline the process was designed to operate in batches as follows:

  • populate global temporary table with a small set of IDs
  • delete from 9 child tables where parent_id in (select id from gtt)
  • delete from parent where id in (select id from gtt);

The process was running very slowly. At first sight – and without examining any trace files or workload statistics – the “obvious” guess would be that this was something to do with the CLOBs – but a CLOB delete is usually a “logical” delete, i.e. it sets a flag and shouldn’t really be doing a lot of work actually deleting and freeing space, and it doesn’t generate undo for the CLOB itself. Examination of the session statistics showed that the problem was with the work that Oracle was doing to handle the referential integrity – even though all related rows had been deleted in advance of the parent delete.

A highly suggestive statistic from the session stats (v$sesstat for the session) for a controlled test that deleted 1170 parent rows was the “execute count”, which was 10,892. Why would you execute that many statements when all you’re doing is a simple “delete from parent where id in (select id from gtt)”? Check the arithmetic: 1,170 * 9 = 10,530, which is a fairly good indicator that every row deleted from the parent results in 9 statements being executed to delete from the 9 child tables.
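The execute-count arithmetic can be spelled out (a trivial check, in Python for convenience, using the figures from the controlled test above):

```python
# Execute-count arithmetic for the controlled test described above.
parent_rows_deleted = 1_170
child_tables = 9

# One recursive delete is executed per child table per parent row deleted:
cascade_executions = parent_rows_deleted * child_tables
print(cascade_executions)            # 10530

# The observed execute count leaves only a few hundred executions for
# the batch's own statements and other recursive SQL:
observed_execute_count = 10_892
print(observed_execute_count - cascade_executions)   # 362
```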

Foreign Key options

Oracle is a little restrictive in how it allows you to define foreign key constraints – MySQL, for example, allows you to specify the action the database should take on update or delete of a parent row, and it allows 5 possible actions, so you have:


on update [restrict | cascade | set null | no action | set default]
on delete [restrict | cascade | set null | no action | set default]

In comparison Oracle only implements “on delete”, and the only options it allows are “cascade”, “set null”, and “no action” (which is the default, though you can’t actually specify it).

Side note: “No Action” and “Restrict” are very similar – if you try to delete a parent for which a child row exists then the attempt will raise an error. The difference between the two operations is that “no action” will delete the parent then rollback when it finds the child while “restrict” will check to see if the child exists before attempting to delete the parent. (See footnote 1 for detailed examination of how Oracle handles “no action”).
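The difference in ordering can be sketched with a toy model (plain Python, purely illustrative – this models the visible semantics described above, not Oracle’s internal mechanism):

```python
# Toy model of "restrict" vs "no action": same outcome, different ordering.
parent = {10, 20}
child = [(1, 10), (2, 10)]          # (child_id, parent_id) pairs

def delete_parent_restrict(pid):
    # "restrict": check for child rows BEFORE touching the parent row
    if any(p == pid for _, p in child):
        raise ValueError("child record found")
    parent.discard(pid)

def delete_parent_no_action(pid):
    # "no action": delete the parent row, THEN check and roll back
    parent.discard(pid)             # the delete happens first...
    if any(p == pid for _, p in child):
        parent.add(pid)             # ...and is rolled back on finding a child
        raise ValueError("child record found")

for attempt in (delete_parent_restrict, delete_parent_no_action):
    try:
        attempt(10)
    except ValueError as e:
        print(attempt.__name__, "->", e)   # both raise; only the ordering differs
    assert 10 in parent                    # the parent row survives either way
```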

As an initial investigation of the work that Oracle does to handle referential integrity and deletes I want to examine one simple but extreme example. I’m going to create a parent/child pair of tables with “on delete no action”, and delete one parent row. The thing that makes the demonstration “extreme” is that I will first create, and then delete, a large number of rows for that parent row before deleting the parent. Here’s the initial data setup:

rem
rem     Script:         fk_lock_stress.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Nov 2012
rem
rem     Last tested 
rem             19.3.0.0
rem

create table child
as
with generator as (
        select
                rownum id 
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid wordpress format issue
)
select
        trunc((rownum-1)/1200)  n1,
        lpad('x',80)            idx_padding,
        lpad(rownum,8,'0')      small_vc
from
        generator       v1,
        generator       v2
where
        rownum <= 6e4 -- > comment to avoid wordpress format issue
;

create index child_ix on child(n1, idx_padding) pctfree 95;

create table parent as
select
        id,
        lpad(rownum,8,'0')      small_vc,
        lpad('x',80)            padding
from    (
        select 
                distinct n1             id
        from
                child
        )
;

alter table parent add constraint par_pk primary key(id);
alter table child add constraint chi_fk_par foreign key(n1) references parent;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'PARENT',
                method_opt       => 'for all columns size 1'
        );

        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'CHILD',
                method_opt       => 'for all columns size 1'
        );
end;
/


select  index_name, num_rows, distinct_keys, leaf_blocks, avg_leaf_blocks_per_key 
from    user_indexes
where   table_name in ('PARENT','CHILD')
;

select  object_id, object_name
from    user_objects
order by
        object_id
;

delete from child where n1 = 10;
commit;
delete from child where n1 = 20;
commit;
delete from child where n1 = 30;
commit;
delete from child where n1 = 40;
commit;

execute dbms_stats.gather_table_stats(user, 'child', cascade=>true)

alter system flush buffer_cache;

For each parent row there are 1,200 child rows, and I’ve given the “foreign key” index on child a pctfree of 95%, which has resulted in the index needing roughly 600 leaf blocks per key. (When I first created this test my value for x$kcbbf (buffer pins) was 500 and I wanted to see what would happen if I needed to pin more buffers than could fit in the array.)
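A back-of-envelope check shows why pctfree 95 produces several hundred leaf blocks per key. This sketch assumes an 8KB block size and an index entry of very roughly 90 bytes (n1 plus the 80-character idx_padding plus rowid and row overhead) – both figures are assumptions, not measurements:

```python
# Back-of-envelope: why pctfree 95 gives several hundred leaf blocks per key.
total_rows   = 60_000        # rows created in CHILD
rows_per_key = 1_200         # trunc((rownum-1)/1200) gives 1200 rows per n1 value
print(total_rows // rows_per_key)    # 50 distinct parent values

# Assumed 8KB blocks; pctfree 95 leaves only ~5% of each leaf block usable
# for entries at creation time:
block_size, pctfree = 8_192, 95
usable = block_size * (100 - pctfree) // 100
print(usable)                # 409 bytes per leaf block for ~90-byte entries
```

With only a handful of entries fitting per leaf block, the 1,200 entries per key end up spread across several hundred leaf blocks, which is consistent with the “roughly 600 per key” observed.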

After creating the index and gathering stats I’ve deleted all the rows for 4 of the possible parent values from the child table because I want to see how much work it takes to delete a parent for which there are no longer any child rows, as follows:

execute snap_events.start_snap
execute snap_my_stats.start_snap

alter session set events '10046 trace name context forever, level 8';
alter session set tracefile_identifier = 'del';

delete from parent where id = 10;
commit;

delete from parent where id = 40;
commit;

alter session set tracefile_identifier = '';
alter session set events '10046 trace name context off';

execute snap_my_stats.end_snap
execute snap_events.end_snap

The “snap_my_stats / snap_events” packages are just my usual capture of workload information over the interval. I’ve enabled extended tracing at level 8 (waits) so that I can see if Oracle runs any interesting recursive SQL and, since I’ve flushed the buffer cache, this will also let me see which data blocks Oracle has to read from the database.

Here are the key results from a test run on 19.3.0.0:


Statistic                                                Value
---------                                                -----
physical reads                                           1,225

Event                                             Waits   Time_outs           Csec    Avg Csec    Max Csec
-----                                             -----   ---------           ----    --------    --------
db file sequential read                           1,225           0           0.86        .001           1
events in waitclass Other                            11           0           0.01        .001          73

Is it a coincidence that I’ve done 1,225 single block reads when the number of leaf blocks per key in the child index is a little over 600 and I’ve just deleted two parent key values? (Spoiler: NO).

When we apply tkprof to the trace file we find the following summary of the first delete:


SQL ID: 0u6t174agkq27 Plan Hash: 3366423708

delete from parent
where
 id = 10


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.01          0          0          1           0
Execute      1      0.00       0.02        608          1        609           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.03        608          1        610           1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 107
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  DELETE  PARENT (cr=1 pr=608 pw=0 time=23591 us starts=1)
         1          1          1   INDEX UNIQUE SCAN PAR_PK (cr=1 pr=1 pw=0 time=60 us starts=1 cost=0 size=3 card=1)(object id 94905)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                       608        0.00          0.00
  SQL*Net message to client                       1        0.00          0.00
  SQL*Net message from client                     1        0.00          0.00
********************************************************************************

The “Row Source Operation” statistics tell us that it has taken 608 single block reads to access a single row from the parent table by unique index and delete it. So let’s look at the content of the trace file – looking for any “db file sequential read” WAIT relating to this cursor. (I started by looking for the statement in the trace file so that I could extract its cursor number before using the following grep command):


grep "#139773970345352" or19_ora_17541_del.trc | grep "WAIT.*db file sequential read" | sed "s/^.*obj#=//" | sed "s/ .*$//" | sort | uniq -c | sort -n

      2 0
    604 94903
      1 94904
      1 94905

If you’re wondering which objects these numbers correspond to (though you may have made a reasonable guess by now), here are the results from my earlier query against user_objects:


 OBJECT_ID OBJECT_NAME
---------- --------------------
     94902 CHILD
     94903 CHILD_IX
     94904 PARENT
     94905 PAR_PK

I visit one index block in the parent index, then one block in the parent table, then 604 blocks in the child index to check that there are no current, or potential, index entries in each leaf block. I’ve done that with single block reads, walking the child index in order, and I do it without showing any SQL that might help me spot that it’s happening.
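The per-object read counts from the grep tie back neatly to the session statistics reported earlier (a quick arithmetic check):

```python
# Tying the grep and tkprof figures back to the session statistics.
child_leaf_reads   = 604     # CHILD_IX leaf blocks read (obj 94903)
parent_pk_reads    = 1       # PAR_PK (obj 94905)
parent_table_reads = 1       # PARENT (obj 94904)
undo_reads         = 2       # obj#=0 reads in the trace

per_delete = child_leaf_reads + parent_pk_reads + parent_table_reads + undo_reads
print(per_delete)            # 608 -- matching the tkprof "disk" column

# Two parent rows were deleted (id = 10 and id = 40):
print(2 * per_delete)        # 1216 -- plus a handful of other reads gives the 1,225 total
```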

Exactly the same mechanism then comes into play for deleting parent_id = 40: Oracle effectively does a “silent” index range scan of the child index where n1 = 40 to make sure that there are no current child rows.

When I first created this test it was because I wanted to see if Oracle would pin all the buffers holding the current image of the relevant child index leaf blocks in exclusive mode as it walked through the index – this seemed to be the obvious way to ensure that no other session could insert a child row into a leaf block that had already been checked – but when I enabled pin tracing I found 600+ pairs of “Aquire/Release”. This may help to explain Oracle’s choice of “No Action” (rather than “Restrict”): if the parent row is locked and marked for deletion then there is no need to keep the child leaf blocks pinned exclusively as any attempt by another session to introduce a new child would first check the parent and discover that it was marked for deletion and wait for the deleting transaction to commit or rollback.

There is a gap in this hypothesis, though, which shows up when there is no appropriate index on the table. (Which includes the case where the “foreign key” index has been created as a bitmap instead of a B-tree, or where there’s a B-tree index that includes the foreign key column(s) but not at the start of the index definition, or if the foreign key columns are at the start of the index but declared as descending).

If there is no appropriate “foreign key” index there are two changes in the activity. First, the child table will temporarily be locked in mode 4 (share mode) or mode 5 (share row exclusive mode) for the duration of the referential integrity check. Secondly you will see the text of the recursive SQL statement that does the check; here, for example, is the SQL extracted from a 19.3 trace file in a few variants of the original test:


select /*+ all_rows */ count(1)
from
 "TEST_USER"."CHILD" where "N1" = :1

Oracle’s treatment of this statement is cost-based, and in my case, depending on the choice of index, level of compression, and settings for system statistics I found that I could get any one of the following plans:


Bitmap index
Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT AGGREGATE (cr=3 pr=2 pw=0 time=139 us starts=1)
         0          0          0   BITMAP CONVERSION COUNT (cr=3 pr=2 pw=0 time=132 us starts=1 cost=2 size=3600 card=1200)
         0          0          0    BITMAP INDEX RANGE SCAN CHILD_IX (cr=3 pr=2 pw=0 time=128 us starts=1)(object id 95098)

No index at all
Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT AGGREGATE (cr=826 pr=412 pw=0 time=4880 us starts=1)
         0          0          0   TABLE ACCESS FULL CHILD (cr=826 pr=412 pw=0 time=4870 us starts=1 cost=114 size=3600 card=1200)

Index on (idx_padding, n1)
Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT AGGREGATE (cr=826 pr=412 pw=0 time=5601 us starts=1)
         0          0          0   TABLE ACCESS FULL CHILD (cr=826 pr=412 pw=0 time=5591 us starts=1 cost=114 size=3600 card=1200)

Index on (idx_padding, n1) compress 1 (and tweaking MBRC)
Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT AGGREGATE (cr=128 pr=126 pw=0 time=2467 us starts=1)
         0          0          0   INDEX SKIP SCAN CHILD_IX (cr=128 pr=126 pw=0 time=2460 us starts=1 cost=122 size=3600 card=1200)(object id 95111)

You’ll note that I have two cases where the strategy chosen for the query seems to be a close match to the operation performed with the “proper foreign key index” in place – moreover the number of blocks read for the tablescan strategy is less than the number of blocks read when we had the “foreign key” index in place. So what’s the difference that makes it possible for Oracle to avoid locking the table in mode 4/5 when we have that index?

These results, by the way, explain why Mikhail Velikikh (in comment #1) can say that large numbers of buffer gets and disk reads for the parent delete are NOT an indication of a missing foreign key index … when the index is missing the large numbers are correctly reported against the SQL that checks the child, not against the parent delete.

Summary

Even if you’ve deleted all the child data before attempting to delete a parent row, and YOU know that there’s no child data, when ORACLE is told to delete the parent row it still has to check that there’s no child data, and in a large scale delete (particularly where there may be many child rows per parent) you may end up seeing a surprising amount of I/O on single block reads of the “foreign key” indexes on any child tables.

This means that for a very large “house-keeping” or archiving delete with a limited time-window you may want to take special steps, such as rebuilding “foreign key” indexes after deleting large numbers of child rows and before deleting the parents. You may even want to disable/drop the foreign key constraint before the parent delete and re-enable it afterwards, if you think you can avoid getting any bad data into the table while the job is running.

In the next article:  on delete cascade, on delete set null..

Footnote 1

To demonstrate Oracle’s behaviour for the default “no action” foreign key constraint when you attempt to delete a parent row for which child data exists (which will result in an Oracle error of the form: “ORA-02292: integrity constraint (TEST_USER.CHI_FK_PAR) violated – child record found”) you need only set up a small amount of data, attempt the delete, and then dump the redo for the transaction. If nothing else is going on you’ll find the following set of redo change vectors (taken from a trace file generated in 19.3, using the command “grep OP (unknown)”):

CHANGE #1 CON_ID:3 TYP:0 CLS:1 AFN:19 DBA:0x04c00084 OBJ:94828 SCN:0x00000b860f303fa4 SEQ:2 OP:11.3 ENC:0 RBL:0 FLG:0x0000
CHANGE #2 CON_ID:3 TYP:0 CLS:35 AFN:17 DBA:0x04400110 OBJ:4294967295 SCN:0x00000b860f303f98 SEQ:2 OP:5.2 ENC:0 RBL:0 FLG:0x0000
CHANGE #3 CON_ID:3 TYP:0 CLS:1 AFN:19 DBA:0x04c0009c OBJ:94829 SCN:0x00000b860f303fa4 SEQ:2 OP:10.4 ENC:0 RBL:0 FLG:0x0000
CHANGE #4 CON_ID:3 TYP:0 CLS:36 AFN:17 DBA:0x04404564 OBJ:4294967295 SCN:0x00000b860f303f98 SEQ:3 OP:5.1 ENC:0 RBL:0 FLG:0x0000
CHANGE #5 CON_ID:3 TYP:0 CLS:36 AFN:17 DBA:0x04404564 OBJ:4294967295 SCN:0x00000b860f304062 SEQ:1 OP:5.1 ENC:0 RBL:0 FLG:0x0000

CHANGE #1 CON_ID:3 TYP:0 CLS:1 AFN:19 DBA:0x04c0009c OBJ:94829 SCN:0x00000b860f304062 SEQ:1 OP:10.5 ENC:0 RBL:0 FLG:0x0000
CHANGE #2 CON_ID:3 TYP:0 CLS:36 AFN:17 DBA:0x04404564 OBJ:4294967295 SCN:0x00000b860f304062 SEQ:2 OP:5.6 ENC:0 RBL:0 FLG:0x0000

CHANGE #1 CON_ID:3 TYP:0 CLS:1 AFN:19 DBA:0x04c00084 OBJ:94828 SCN:0x00000b860f304062 SEQ:1 OP:11.2 ENC:0 RBL:0 FLG:0x0000
CHANGE #2 CON_ID:3 TYP:0 CLS:35 AFN:17 DBA:0x04400110 OBJ:4294967295 SCN:0x00000b860f304062 SEQ:1 OP:5.11 ENC:0 RBL:0 FLG:0x0000

CHANGE #1 CON_ID:3 TYP:0 CLS:35 AFN:17 DBA:0x04400110 OBJ:4294967295 SCN:0x00000b860f304062 SEQ:2 OP:5.4 ENC:0 RBL:0 FLG:0x0000

11.3  delete row piece
5.2   start transaction
10.4  delete leaf row    (ed: this is for the primary key index)
5.1   create undo record for table block
5.1   create undo record for index leaf block

10.5  restore leaf row during rollback
5.6   mark index undo as applied

11.2  insert rowpiece
5.11  mark table undo as applied

5.4   commit;

So Oracle deletes the parent, (discovers the child) then rolls back the parent delete.


How to make queries to DBA_FREE_SPACE faster

The other day as my PC popped up its usual “Hey, you really should look at buying a bigger disk” Smile message, I was motivated to see if I could clear down some of the space utilization on the myriad of databases on my machine. On the majority of them I encountered no issue doing that, but on one of them, the query to DBA_FREE_SPACE revealed an odd performance issue:


SQL> set timing on
SQL> select tablespace_name,
  2         sum(bytes) tot_free,
  3         count(*) chunks
  4  from dba_free_space
  5  group by tablespace_name  ;

TABLESPACE_NAME        TOT_FREE     CHUNKS
-------------------- ---------- ----------
SYSTEM                  3866624          2
DEMO                  103809024          1
SYSAUX                103677952          1
UNDOTBS1             2698706944        270
ASKTOM                122945536        102
USERS                 198705152         38

7 rows selected.

Elapsed: 00:04:51.22

Nearly 5 minutes just to query DBA_FREE_SPACE! I created a brand new session and re-ran the query to see if there was some sort of I/O issue on my machine, because after all, the poor thing gets slammed every day with all sorts of things, not least of which is my kids demanding to use its GPU for gaming!


SQL> select
  2     event
  3    ,total_waits
  4    ,total_timeouts
  5    ,secs
  6    ,rpad(to_char(100 * ratio_to_report(secs) over (), 'FM000.00') || '%',8)  pct
  7    ,max_wait
  8  from (
  9    select
 10       event
 11      ,total_waits
 12      ,total_timeouts
 13      ,time_waited/100 secs
 14      ,max_wait
 15    from v$session_event
 16    where sid = sys_context('USERENV','SID')
 17    and event not like 'SQL*Net%'
 18  );

EVENT                                        TOTAL_WAITS TOTAL_TIMEOUTS       SECS PCT              MAX_WAIT
-------------------------------------------- ----------- -------------- ---------- -------------- ----------
Disk file operations I/O                               5              0          0  00.00%                 0
log file sync                                          1              0          0  00.00%                 0
db file sequential read                           244352              0     271.48 100.00%                 1
events in waitclass Other                             64              1          0  00.00%                 0

You can see the time was all lost in ‘db file sequential read’ waits, but the maximum wait for these I/O’s was fine – it was just the volume of them. 244 thousand! And if you check back on the number of chunks for each tablespace, it would seem odd that I’d need to do that many reads to find that relatively small number of chunks. None of the databases on my PC are that large in size.
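A quick division confirms that the individual reads are fast and the problem is purely the volume (figures taken from the wait output above):

```python
# Average latency of the 'db file sequential read' waits shown above.
waits = 244_352              # total waits in the session
secs  = 271.48               # total seconds waited
avg_ms = secs / waits * 1000
print(round(avg_ms, 2))      # 1.11 ms per read: healthy I/O, just far too many of them
```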

So just like the advice we give to people on AskTOM, I ran the query with the GATHER_PLAN_STATISTICS hint to see where all this time and all these I/Os were coming from. I also did a fresh DBMS_STATS.GATHER_DICTIONARY_STATS just in case.


SQL> set serverout off
SQL> select /*+ gather_plan_statistics */ tablespace_name,
  2         sum(bytes) tot_free,
  3         count(*) chunks
  4  from dba_free_space
  5  group by tablespace_name  ;

TABLESPACE_NAME        TOT_FREE     CHUNKS
-------------------- ---------- ----------
SYSTEM                  3866624          2
DEMO                  103809024          1
SYSAUX                103677952          1
UNDOTBS1             2698706944        270
ASKTOM                122945536        102
USERS                 198705152         38
LARGETS              1.3993E+10          8

7 rows selected.


SQL_ID  db1x4q4n8kgrs, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ tablespace_name,        sum(bytes)
tot_free,        count(*) chunks from dba_free_space group by
tablespace_name

Plan hash value: 190806552

-----------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name                    | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |                         |      1 |        |      7 |00:00:47.22 |     699K|    244K|       |       |          |
|   1 |  HASH GROUP BY                    |                         |      1 |      9 |      7 |00:00:47.22 |     699K|    244K|  1056K|  1056K| 1013K (0)|
|   2 |   VIEW                            | DBA_FREE_SPACE          |      1 |     14 |    422 |00:00:00.01 |     699K|    244K|       |       |          |
|   3 |    UNION-ALL                      |                         |      1 |        |    422 |00:00:00.01 |     699K|    244K|       |       |          |
|   4 |     NESTED LOOPS                  |                         |      1 |      1 |      0 |00:00:00.01 |      12 |      0 |       |       |          |
|   5 |      NESTED LOOPS                 |                         |      1 |      1 |      0 |00:00:00.01 |      12 |      0 |       |       |          |
|*  6 |       INDEX FULL SCAN             | I_FILE2                 |      1 |      3 |      7 |00:00:00.01 |       1 |      0 |       |       |          |
|*  7 |       TABLE ACCESS CLUSTER        | FET$                    |      7 |      1 |      0 |00:00:00.01 |      11 |      0 |       |       |          |
|*  8 |        INDEX UNIQUE SCAN          | I_TS#                   |      7 |      1 |      7 |00:00:00.01 |       4 |      0 |       |       |          |
|*  9 |      TABLE ACCESS CLUSTER         | TS$                     |      0 |      1 |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|* 10 |       INDEX UNIQUE SCAN           | I_TS#                   |      0 |      1 |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|* 11 |     HASH JOIN                     |                         |      1 |      1 |    307 |00:00:00.01 |      67 |      0 |  1797K|  1797K| 1188K (0)|
|  12 |      NESTED LOOPS                 |                         |      1 |      1 |    307 |00:00:00.01 |      50 |      0 |       |       |          |
|* 13 |       FIXED TABLE FULL            | X$KTFBFE                |      1 |      8 |    307 |00:00:00.01 |      46 |      0 |       |       |          |
|* 14 |       INDEX UNIQUE SCAN           | I_FILE2                 |    307 |      1 |    307 |00:00:00.01 |       4 |      0 |       |       |          |
|* 15 |      TABLE ACCESS FULL            | TS$                     |      1 |      1 |      7 |00:00:00.01 |      17 |      0 |       |       |          |
|  16 |     NESTED LOOPS                  |                         |      1 |     10 |    115 |00:01:55.93 |     699K|    244K|       |       |          |
|* 17 |      HASH JOIN                    |                         |      1 |     17 |     46 |00:00:00.01 |      22 |      0 |  1316K|  1316K|  778K (0)|
|  18 |       NESTED LOOPS                |                         |      1 |     19 |     46 |00:00:00.01 |       5 |      0 |       |       |          |
|  19 |        NESTED LOOPS               |                         |      1 |     72 |     46 |00:00:00.01 |       4 |      0 |       |       |          |
|* 20 |         INDEX FULL SCAN           | I_FILE2                 |      1 |      3 |      7 |00:00:00.01 |       1 |      0 |       |       |          |
|* 21 |         INDEX RANGE SCAN          | RECYCLEBIN$_TS          |      7 |     24 |     46 |00:00:00.01 |       3 |      0 |       |       |          |
|  22 |        TABLE ACCESS BY INDEX ROWID| RECYCLEBIN$             |     46 |      7 |     46 |00:00:00.01 |       1 |      0 |       |       |          |
|* 23 |       TABLE ACCESS FULL           | TS$                     |      1 |      9 |      7 |00:00:00.01 |      17 |      0 |       |       |          |
|* 24 |      FIXED TABLE FULL             | X$KTFBUE                |     46 |      1 |    115 |00:00:47.14 |     699K|    244K|       |       |          |
|  25 |     NESTED LOOPS                  |                         |      1 |      1 |      0 |00:00:00.01 |     284 |      0 |       |       |          |
|  26 |      NESTED LOOPS                 |                         |      1 |      1 |      0 |00:00:00.01 |     284 |      0 |       |       |          |
|  27 |       MERGE JOIN CARTESIAN        |                         |      1 |    138 |    336 |00:00:00.01 |       3 |      0 |       |       |          |
|* 28 |        INDEX FULL SCAN            | I_FILE2                 |      1 |      3 |      7 |00:00:00.01 |       1 |      0 |       |       |          |
|  29 |        BUFFER SORT                |                         |      7 |     48 |    336 |00:00:00.01 |       2 |      0 |  2048 |  2048 | 2048  (0)|
|  30 |         TABLE ACCESS FULL         | RECYCLEBIN$             |      1 |     48 |     48 |00:00:00.01 |       2 |      0 |       |       |          |
|  31 |       TABLE ACCESS CLUSTER        | UET$                    |    336 |      1 |      0 |00:00:00.01 |     281 |      0 |       |       |          |
|* 32 |        INDEX UNIQUE SCAN          | I_FILE#_BLOCK#          |    336 |      1 |     41 |00:00:00.01 |     240 |      0 |       |       |          |
|* 33 |      TABLE ACCESS CLUSTER         | TS$                     |      0 |      1 |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|* 34 |       INDEX UNIQUE SCAN           | I_TS#                   |      0 |      1 |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|  35 |     NESTED LOOPS                  |                         |      1 |      1 |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|  36 |      NESTED LOOPS                 |                         |      1 |      1 |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|  37 |       TABLE ACCESS FULL           | NEW_LOST_WRITE_EXTENTS$ |      1 |      1 |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|* 38 |       TABLE ACCESS CLUSTER        | TS$                     |      0 |      1 |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|* 39 |        INDEX UNIQUE SCAN          | I_TS#                   |      0 |      1 |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|* 40 |      INDEX RANGE SCAN             | I_FILE2                 |      0 |      1 |      0 |00:00:00.01 |       0 |      0 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - filter(("FI"."TS#" IS NOT NULL AND "FI"."RELFILE#" IS NOT NULL))
   7 - filter("F"."FILE#"="FI"."RELFILE#")
   8 - access("F"."TS#"="FI"."TS#")
   9 - filter("TS"."BITMAPPED"=0)
  10 - access("TS"."TS#"="F"."TS#")
  11 - access("TS"."TS#"="KTFBFETSN")
  13 - filter(("CON_ID"=0 OR "CON_ID"=3))
  14 - access("KTFBFETSN"="FI"."TS#" AND "KTFBFEFNO"="FI"."RELFILE#")
       filter(("FI"."TS#" IS NOT NULL AND "FI"."RELFILE#" IS NOT NULL))
  15 - filter((INTERNAL_FUNCTION("TS"."ONLINE$") AND "TS"."CONTENTS$"=0 AND "TS"."BITMAPPED"<>0 AND
              BITAND("TS"."FLAGS",4503599627370496)<>4503599627370496))
  17 - access("TS"."TS#"="RB"."TS#")
  20 - filter(("FI"."TS#" IS NOT NULL AND "FI"."RELFILE#" IS NOT NULL))
  21 - access("RB"."TS#"="FI"."TS#")
  23 - filter((INTERNAL_FUNCTION("TS"."ONLINE$") AND "TS"."CONTENTS$"=0 AND "TS"."BITMAPPED"<>0 AND
              BITAND("TS"."FLAGS",4503599627370496)<>4503599627370496))
  24 - filter((INTERNAL_FUNCTION("CON_ID") AND "KTFBUESEGBNO"="RB"."BLOCK#" AND "KTFBUEFNO"="FI"."RELFILE#" AND "KTFBUESEGFNO"="RB"."FILE#" AND
              "KTFBUESEGTSN"="RB"."TS#"))
  28 - filter(("FI"."TS#" IS NOT NULL AND "FI"."RELFILE#" IS NOT NULL))
  32 - access("U"."TS#"="RB"."TS#" AND "U"."SEGFILE#"="RB"."FILE#" AND "U"."SEGBLOCK#"="RB"."BLOCK#")
       filter(("U"."TS#"="FI"."TS#" AND "U"."SEGFILE#"="FI"."RELFILE#"))
  33 - filter("TS"."BITMAPPED"=0)
  34 - access("TS"."TS#"="U"."TS#")
  38 - filter((INTERNAL_FUNCTION("TS"."ONLINE$") AND BITAND("TS"."FLAGS",4503599627370496)=4503599627370496 AND "TS"."CONTENTS$"=0 AND
              "TS"."BITMAPPED"<>0))
  39 - access("TS"."TS#"="F"."EXTENT_DATAFILE_TSID")
  40 - access("F"."EXTENT_DATAFILE_TSID"="FI"."TS#")
       filter("FI"."TS#" IS NOT NULL)

Notice that line 16 is a NESTED LOOPS join that accounts for the cumulative total of our 244K reads. I tracked that down to line 24, a full scan of the X$KTFBUE structure, but the key thing is that this scan was performed 46 times. Heading up two lines to line 22, you can see that those 46 executions were driven by 46 rows coming out of my RECYCLEBIN$ table.
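If you want to reproduce this kind of analysis yourself, the rowsource statistics in the plan above (Starts, A-Rows, A-Time, Buffers, Reads) come from the gather_plan_statistics hint in the query; a sketch of how to pull such a plan, using the standard DBMS_XPLAN API, is:

```sql
-- After running the hinted query, fetch the plan of the last statement
-- executed in this session, including the rowsource execution statistics.
select *
from   table(dbms_xplan.display_cursor(format => 'ALLSTATS LAST'));
```

The same statistics are available without the hint if statistics_level is set to ALL for the session, at some extra execution cost.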

Thus it looks like the performance (in this database) is related to the recyclebin. To firm up this relationship, I purged my recycle bin and re-tested the free space query


SQL> conn /@db19_pdb1 as sysdba
Connected.
SQL> purge dba_recyclebin;

DBA Recyclebin purged.

SQL> set timing on
SQL> select tablespace_name,
  2         sum(bytes) tot_free,
  3         count(*) chunks
  4  from dba_free_space
  5  group by tablespace_name  ;

TABLESPACE_NAME        TOT_FREE     CHUNKS
-------------------- ---------- ----------
SYSTEM                  3866624          2
DEMO                  103809024          1
SYSAUX                103677952          1
UNDOTBS1             2698706944        270
ASKTOM                122945536        102
USERS                 198705152         38

7 rows selected.

Elapsed: 00:00:00.38

Bingo! An empty recyclebin and my free space query becomes instantaneous.

On this particular database, the effect is reproducible, and the degradation is linear thanks to the nested loop. Here's some code to demonstrate that. In a loop, I create a table as a copy of DUAL and then drop it immediately, thus adding an object to the recycle bin. I then run my free space query and time how long it takes to execute and fetch from it.


SQL> declare
  2    s timestamp;
  3  begin
  4  for i in 1 .. 20 loop
  5    execute immediate 'create table zz'||i||' as select * from dual';
  6    execute immediate 'drop table zz'||i;
  7
  8    s := systimestamp;
  9    for cur in (
 10        select
 11          tablespace_name,
 12          sum(bytes) tot_free,
 13          count(*) chunks
 14        from dba_free_space
 15        group by tablespace_name )
 16    loop
 17      null;
 18    end loop;
 19    dbms_output.put_line(i||'-'||(systimestamp-s));
 20   end loop;
 21   end;
 22   /
1-+000000000 00:00:01.003000000
2-+000000000 00:00:01.964000000
3-+000000000 00:00:02.910000000
4-+000000000 00:00:03.954000000
5-+000000000 00:00:04.903000000
6-+000000000 00:00:05.889000000
7-+000000000 00:00:06.855000000
8-+000000000 00:00:07.729000000
9-+000000000 00:00:09.776000000
10-+000000000 00:00:10.754000000
11-+000000000 00:00:10.604000000
12-+000000000 00:00:11.606000000
13-+000000000 00:00:13.817000000
14-+000000000 00:00:13.992000000
15-+000000000 00:00:16.601000000
16-+000000000 00:00:15.626000000
17-+000000000 00:00:16.564000000
18-+000000000 00:00:19.054000000
19-+000000000 00:00:19.748000000
20-+000000000 00:00:20.553000000

PL/SQL procedure successfully completed.

You can see that each new object in the recycle bin adds roughly a second to the execution of the query on this database.

But here is the interesting thing. I initially discovered this on my Windows 19.6 database. To see if it was a potential bug, I patched the database to 19.8, and the slowdown was still present. However, I also have several other 19.7 and 19.8 databases, both on Windows and on Linux (under VirtualBox VMs), and I did not observe this behaviour on any of them, no matter how many objects were in the recycle bin.

Thus it's reasonable to conclude that some particular set of dictionary statistics leads to this execution plan. In any event, I thought it warranted a blog post, so that if you encounter this slowdown on queries against DBA_FREE_SPACE, you know that a purge of the recycle bin will probably do the trick for you.

TL;DR: If your queries to DBA_FREE_SPACE are slow, then check your recycle bin.
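As a quick diagnostic, something along these lines will tell you whether the recycle bin is a plausible suspect. (Purging your own schema's bin needs no extra privilege; PURGE DBA_RECYCLEBIN requires SYSDBA or the PURGE DBA_RECYCLEBIN system privilege.)

```sql
-- How many dropped objects are sitting in the recycle bin database-wide?
select count(*) from dba_recyclebin;

-- Purge just your own schema's recycle bin ...
purge recyclebin;

-- ... or, as a suitably privileged user, the whole database's.
purge dba_recyclebin;
```

Remember that purging is irreversible: FLASHBACK TABLE ... TO BEFORE DROP is no longer possible for the purged objects.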

Seedlet: A New, Sophisticated Theme Fully Powered By the Block Editor

Is your WordPress.com site ready for a refresh? Today, we’re unveiling Seedlet, a new theme that’s simple yet stylish.

[Screenshot of the Seedlet theme demo home page]

Designed by Kjell Reigstad, Seedlet is a great option for professionals and creatives seeking a sophisticated vibe. Classically elegant typography creates a refined site that gives your writing and images space to breathe — and shine. 

Seedlet was built to be the perfect partner to the block editor, and supports all the latest blocks. Writing, audio, illustrations, photography, video — use Seedlet to engage and direct visitors’ eyes, without the theme getting in the way. And the responsive design shifts naturally between desktop and mobile devices.


Learn more about setting up Seedlet, and explore the demo site to see it in action. 

Our team is hard at work developing new block-powered themes. Watch this space for updates!

Data virtualization on SQL Server with Redgate SQL Clone

By Franck Pachot

.
In the previous blog post I installed SQL Server on the Oracle Cloud. My goal was actually to have a look at Redgate SQL Clone, a product that automates thin cloning. The SQL Server image from the Oracle marketplace meets the SQL Clone prerequisites. There's one small difference in the .NET Framework version (I have 4.6 where 4.7.2 or later is required, but that's fine – if it were not, an update would be easy anyway).

I've downloaded Redgate SQL Clone (14-day trial) and installed it. It requires a logon that you can create easily. Of course, you provide an e-mail, and when you use the trial, someone from Business Development will contact you for your feedback and to ask what your project is. No problem – I already have very good contacts at Redgate, even if I don't work with SQL Server very often. Look at their roadmap if you are interested in other databases.

How to do a GRANT on an entire schema

TL;DR: There is no schema grant command in Oracle, but this routine lets you get close.

Why no schema level grant?

A very common request on the various forums that exist for the Oracle Database is a “schema level grant”. The idea is that you could do something like


grant select on HR to SCOTT;

The concept sounds relatively straightforward, but there would be some nuances to cover off, for example:

  • Should it cover existing objects only or new ones as well?
  • Should it cover “all” objects? For example, if I had some AQ tables or DR$-prefixed tables for text indexes. Do I include them?
  • Should EXECUTE just cover PL/SQL or should it cover object types as well?
  • Should a lower level REVOKE override a schema level grant? What if a schema level grant then followed the revoke?

I'm not saying it can't be done, but there is more to think about than might first appear.

A PL/SQL workaround

In the interim, if you have some firm rules on grants from an owning schema, here is a routine that can assist. By default it will grant the following privileges to the target recipient

  • TABLE – insert, update, delete, select, references (unless the table is external, in which case only select is given)
  • VIEW – insert, update, delete, select
  • SEQUENCE – select
  • PROCEDURE – execute
  • FUNCTION – execute
  • PACKAGE – execute
  • TYPE – execute

SQL> create or replace
  2  procedure schema_grant(p_owning_schema varchar2, p_recipient varchar2) is
  3
  4     errs_found boolean := false;
  5
  6     cursor c_all is
  7        select
  8           owner owner
  9          ,object_name
 10          ,object_type
 11          ,decode(object_type
 12                            ,'TABLE'   ,
 13                                decode(external,'YES','SELECT','SELECT,INSERT,UPDATE,DELETE,REFERENCES')
 14                            ,'VIEW'    ,'SELECT,INSERT,UPDATE,DELETE'
 15                            ,'SEQUENCE','SELECT'
 16                            ,'EXECUTE') priv
 17        from
 18          ( select o.owner,o.object_name,o.object_type,'NO' external
 19            from   dba_objects o
 20            where  o.owner = upper(p_owning_schema)
 21            and    o.object_type in ('FUNCTION','PACKAGE','PROCEDURE','SEQUENCE','TYPE','VIEW')
 22            and    o.generated = 'N'
 23            and    o.secondary = 'N'
 24            and    o.object_name not like 'AQ$%'
 25            union all
 26            select o.owner,o.object_name,o.object_type, t.external
 27            from   dba_objects o,
 28                   dba_tables t
 29            where  o.owner = upper(p_owning_schema)
 30            and    o.object_type  = 'TABLE'
 31            and    o.generated = 'N'
 32            and    o.secondary = 'N'
 33            and    o.object_name not like 'AQ$%'
 34            and    o.owner = t.owner
 35            and    o.object_name = t.table_name
 36          )
 37        order by decode(object_type  -- the order is only so views are granted after any likely
 38                        ,'VIEW', 1     -- objects referenced by them have already been granted
 39                        , 0) asc       -- as the grant would else fail due to view invalidity.
 40                ,owner
 41                ,object_name;
 42
 43     type t_grant_list is table of c_all%rowtype;
 44     r    t_grant_list;
 45
 46     l_ddl_indicator number;
 47
 48     procedure logger(m varchar2) is
 49     begin
 50       dbms_output.put_line(m);
 51     end;
 52
 53  begin
 54    open c_all;
 55    fetch c_all bulk collect into r;
 56    close c_all;
 57
 58    for i in 1 .. r.count loop
 59        dbms_output.put_line(rpad(r(i).object_type,20)||r(i).owner||'.'||r(i).object_name)   ;
 60
 61        begin
 62          logger('grant '||r(i).priv||' on '||r(i).owner||'.'||r(i).object_name||' to '||p_recipient);
 63          execute immediate 'grant '||r(i).priv||' on '||r(i).owner||'.'||r(i).object_name||' to '||p_recipient;
 64
 65        exception
 66          when others then
 67             logger('ERROR: '||sqlerrm);
 68             errs_found := true;
 69        end;
 70    end loop;
 71    if errs_found then
 72      logger('**** ERRORS FOUND ****');
 73    end if;
 74    logger('Finished, record count = '||r.count);
 75
 76  end;
 77  /

Procedure created.

The owner of this procedure will most likely need the GRANT ANY OBJECT PRIVILEGE privilege for it to work. If an error is encountered while granting a privilege on an object, the routine logs it and continues. For example, here is an execution without the appropriate privilege in place:


SQL> set serverout on
SQL> exec schema_grant('HR','SCOTT')
PROCEDURE           HR.ADD_JOB_HISTORY
grant EXECUTE on HR.ADD_JOB_HISTORY to SCOTT
ERROR: ORA-01031: insufficient privileges
TABLE               HR.COUNTRIES
grant SELECT,INSERT,UPDATE,DELETE,REFERENCES on HR.COUNTRIES to SCOTT
ERROR: ORA-01031: insufficient privileges
TABLE               HR.DEPARTMENTS
grant SELECT,INSERT,UPDATE,DELETE,REFERENCES on HR.DEPARTMENTS to SCOTT
ERROR: ORA-01031: insufficient privileges
SEQUENCE            HR.DEPARTMENTS_SEQ
grant SELECT on HR.DEPARTMENTS_SEQ to SCOTT
ERROR: ORA-01031: insufficient privileges
TABLE               HR.EMPLOYEES
grant SELECT,INSERT,UPDATE,DELETE,REFERENCES on HR.EMPLOYEES to SCOTT
ERROR: ORA-01031: insufficient privileges
SEQUENCE            HR.EMPLOYEES_SEQ
grant SELECT on HR.EMPLOYEES_SEQ to SCOTT
ERROR: ORA-01031: insufficient privileges
TABLE               HR.JOBS
grant SELECT,INSERT,UPDATE,DELETE,REFERENCES on HR.JOBS to SCOTT
ERROR: ORA-01031: insufficient privileges
TABLE               HR.JOB_HISTORY
grant SELECT,INSERT,UPDATE,DELETE,REFERENCES on HR.JOB_HISTORY to SCOTT
ERROR: ORA-01031: insufficient privileges
TABLE               HR.LOCATIONS
grant SELECT,INSERT,UPDATE,DELETE,REFERENCES on HR.LOCATIONS to SCOTT
ERROR: ORA-01031: insufficient privileges
SEQUENCE            HR.LOCATIONS_SEQ
grant SELECT on HR.LOCATIONS_SEQ to SCOTT
ERROR: ORA-01031: insufficient privileges
TABLE               HR.REGIONS
grant SELECT,INSERT,UPDATE,DELETE,REFERENCES on HR.REGIONS to SCOTT
ERROR: ORA-01031: insufficient privileges
PROCEDURE           HR.SECURE_DML
grant EXECUTE on HR.SECURE_DML to SCOTT
ERROR: ORA-01031: insufficient privileges
VIEW                HR.EMP_DETAILS_VIEW
grant SELECT,INSERT,UPDATE,DELETE on HR.EMP_DETAILS_VIEW to SCOTT
ERROR: ORA-01031: insufficient privileges
**** ERRORS FOUND ****
Finished, record count = 13

PL/SQL procedure successfully completed.

And here is more typical output that you would hope to see once the appropriate privilege is in place


SQL> grant grant any object privilege to ADMIN

Grant succeeded.

SQL> exec schema_grant('HR','SCOTT')
PROCEDURE           HR.ADD_JOB_HISTORY
grant EXECUTE on HR.ADD_JOB_HISTORY to SCOTT
TABLE               HR.COUNTRIES
grant SELECT,INSERT,UPDATE,DELETE,REFERENCES on HR.COUNTRIES to SCOTT
TABLE               HR.DEPARTMENTS
grant SELECT,INSERT,UPDATE,DELETE,REFERENCES on HR.DEPARTMENTS to SCOTT
SEQUENCE            HR.DEPARTMENTS_SEQ
grant SELECT on HR.DEPARTMENTS_SEQ to SCOTT
TABLE               HR.EMPLOYEES
grant SELECT,INSERT,UPDATE,DELETE,REFERENCES on HR.EMPLOYEES to SCOTT
SEQUENCE            HR.EMPLOYEES_SEQ
grant SELECT on HR.EMPLOYEES_SEQ to SCOTT
TABLE               HR.JOBS
grant SELECT,INSERT,UPDATE,DELETE,REFERENCES on HR.JOBS to SCOTT
TABLE               HR.JOB_HISTORY
grant SELECT,INSERT,UPDATE,DELETE,REFERENCES on HR.JOB_HISTORY to SCOTT
TABLE               HR.LOCATIONS
grant SELECT,INSERT,UPDATE,DELETE,REFERENCES on HR.LOCATIONS to SCOTT
SEQUENCE            HR.LOCATIONS_SEQ
grant SELECT on HR.LOCATIONS_SEQ to SCOTT
TABLE               HR.REGIONS
grant SELECT,INSERT,UPDATE,DELETE,REFERENCES on HR.REGIONS to SCOTT
PROCEDURE           HR.SECURE_DML
grant EXECUTE on HR.SECURE_DML to SCOTT
VIEW                HR.EMP_DETAILS_VIEW
grant SELECT,INSERT,UPDATE,DELETE on HR.EMP_DETAILS_VIEW to SCOTT
Finished, record count = 13

PL/SQL procedure successfully completed.

Keeping privileges fresh

These grants are obviously “point in time” grants, which means objects created after this procedure has been run will not be picked up. You could run this routine at regular intervals, but a grant is DDL and obviously it is generally not a great idea to be running lots of DDL repeatedly on the database.

If your intention is to try to keep the "schema level" grant up to date, for example on a development environment where objects are being created and changed regularly, then here is an extended version of the procedure, with usage notes underneath it.


SQL> create or replace
  2  procedure schema_grant(p_owning_schema varchar2, p_recipient varchar2, p_complete boolean default false) is
  3
  4     errs_found boolean := false;
  5
  6     cursor c_all is
  7        select
  8           owner owner
  9          ,object_name
 10          ,object_type
 11          ,decode(object_type
 12                            ,'TABLE'   ,
 13                                decode(external,'YES','SELECT','SELECT,INSERT,UPDATE,DELETE,REFERENCES')
 14                            ,'VIEW'    ,'SELECT,INSERT,UPDATE,DELETE'
 15                            ,'SEQUENCE','SELECT'
 16                            ,'EXECUTE') priv
 17        from
 18          ( select o.owner,o.object_name,o.object_type,'NO' external
 19            from   dba_objects o
 20            where  o.owner = upper(p_owning_schema)
 21            and    o.object_type in ('FUNCTION','PACKAGE','PROCEDURE','SEQUENCE','TYPE','VIEW')
 22            and    o.generated = 'N'
 23            and    o.secondary = 'N'
 24            and    o.object_name not like 'AQ$%'
 25            union all
 26            select o.owner,o.object_name,o.object_type, t.external
 27            from   dba_objects o,
 28                   dba_tables t
 29            where  o.owner = upper(p_owning_schema)
 30            and    o.object_type  = 'TABLE'
 31            and    o.generated = 'N'
 32            and    o.secondary = 'N'
 33            and    o.object_name not like 'AQ$%'
 34            and    o.owner = t.owner
 35            and    o.object_name = t.table_name
 36          )
 37        order by decode(object_type  -- the order is only so views are granted after any likely
 38                        ,'VIEW', 1     -- objects referenced by them have already been granted
 39                        , 0) asc       -- as the grant would else fail due to view invalidity.
 40                ,owner
 41                ,object_name;
 42
 43     cursor c_partial is
 44        with objs as
 45        (
 46        select /*+ materialize */ owner,object_name,object_type,priv_count
 47        from
 48          (
 49          select /*+ materialize */ owner,object_name,object_type,decode(object_type,'VIEW',4,1) priv_count
 50          from   dba_objects
 51          where  owner = upper(p_owning_schema)
 52          and    object_type in ('FUNCTION','PACKAGE','PROCEDURE','SEQUENCE','TYPE','VIEW')
 53          and    generated = 'N'
 54          and    secondary = 'N'
 55          and    object_name not like 'AQ$%'
 56          and    status != 'INVALID'
 57          union all
 58          select o.owner,o.object_name,o.object_type, decode(t.external,'YES',1,5) priv_count
 59          from   dba_objects o,
 60                 dba_tables t
 61          where  o.owner = upper(p_owning_schema)
 62          and    o.object_type  = 'TABLE'
 63          and    o.generated = 'N'
 64          and    o.secondary = 'N'
 65          and    o.object_name not like 'AQ$%'
 66          and    o.owner = t.owner
 67          and    o.object_name = t.table_name
 68          and    o.status != 'INVALID'
 69          )
 70        ),
 71        obj_privs as (
 72          select o.owner,o.object_name table_name,o.object_type,p.privilege,p.grantee
 73          from   dba_tab_privs p,
 74                 dba_objects o
 75          where  o.owner = p.owner
 76          and    o.object_name = p.table_name
 77          and    o.owner != 'SYS'
 78        )
 79        select
 80                 owner owner
 81                ,object_name
 82                ,object_type
 83                ,decode(object_type
 84                                  ,'TABLE'   ,'SELECT,INSERT,UPDATE,DELETE,REFERENCES'
 85                                  ,'VIEW'    ,'SELECT,INSERT,UPDATE,DELETE'
 86                                  ,'SEQUENCE','SELECT'
 87                                  ,'EXECUTE') priv
 88              from
 89              (
 90                select owner,object_name,object_type
 91                from
 92                (   select owner,object_name,object_type,priv_count
 93                    from   objs
 94                    minus
 95                    select owner, table_name, object_type,
 96                           least(count(*),decode(object_type,'TABLE',5,'VIEW',4,1))
 97                    from   obj_privs
 98                    where  grantee = upper(p_recipient)
 99                    and    privilege in ('SELECT','INSERT','UPDATE','DELETE','REFERENCES','EXECUTE')
100                    and    owner = upper(p_owning_schema)
101                    group by owner, table_name, object_type
102                )
103              )
104              order by decode(object_type  -- the order is only so views are granted after any likely
105                              ,'VIEW', 1     -- objects referenced by them have already been granted
106                              , 0) asc       -- as the grant would else fail due to view invalidity.
107                      ,owner
108                      ,object_name;
109
110     type t_grant_list is table of c_all%rowtype;
111     r    t_grant_list;
112
113     l_ddl_indicator number;
114
115     procedure logger(m varchar2) is
116     begin
117       dbms_output.put_line(m);
118     end;
119
120  begin
121    if p_complete then
122      logger('Starting complete run');
123
124      open c_all;
125      fetch c_all bulk collect into r;
126      close c_all;
127    else
128      logger('Starting partial run');
129
130      begin
131        select 1
132        into   l_ddl_indicator
133        from dba_objects
134        where owner = upper(p_owning_schema)
135        and last_ddl_time > sysdate-1/24
136        and rownum = 1;
137
138        open c_partial;
139        fetch c_partial bulk collect into r;
140        close c_partial;
141      exception
142        when no_data_found then
143          logger('Finished, no ddl in past 60 mins');
144          return;
145      end;
146    end if;
147
148    for i in 1 .. r.count loop
149        dbms_output.put_line(rpad(r(i).object_type,20)||r(i).owner||'.'||r(i).object_name)   ;
150
151        begin
152          logger('grant '||r(i).priv||' on '||r(i).owner||'.'||r(i).object_name||' to '||p_recipient);
153          execute immediate 'grant '||r(i).priv||' on '||r(i).owner||'.'||r(i).object_name||' to '||p_recipient;
154
155        exception
156          when others then
157             logger('ERROR: '||sqlerrm);
158             errs_found := true;
159        end;
160    end loop;
161    if errs_found then
162      logger('**** ERRORS FOUND ****');
163    end if;
164    logger('Finished, record count = '||r.count);
165
166  end;
167  /

Procedure created.

This version can operate in two modes

  • COMPLETE mode – acts as per the other version. It will perform all the grants
  • PARTIAL mode – checks whether any objects have been modified in the last 60 minutes; if so, it determines the list of "missing" privileges and grants only those

Thus typical usage would be to run in COMPLETE mode (p_complete=>true) initially to cover off most of the grants, and then run the routine in PARTIAL mode every "n" minutes (for example, every 10 minutes) so that it picks up any new objects that have been created in your database. In this way, we get something much closer to the utopia of a schema-level grant.
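The periodic PARTIAL run could be automated with the database scheduler. Here is a sketch using the standard DBMS_SCHEDULER API; the job name is arbitrary, and I'm assuming the procedure lives in the schema creating the job:

```sql
begin
  dbms_scheduler.create_job(
    job_name        => 'REFRESH_HR_GRANTS',   -- hypothetical name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin schema_grant(''HR'',''SCOTT''); end;',
    start_date      => systimestamp,
    repeat_interval => 'freq=minutely; interval=10',  -- every 10 minutes
    enabled         => true);
end;
/
```

Because the routine returns immediately when no DDL has occurred in the past hour, the repeated executions are cheap on a quiet database.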

Hopefully you find these useful.

Oracle 19c Automatic Indexing: Dropping Automatic Indexes Part II (New Angels of Promise)

Just a quick update on a previous post on dropping Automatic Indexes. As I discussed previously, one could drop Automatic Indexes by moving them to a new tablespace and then dropping the tablespace. This cumbersome technique was necessary because there was no direct facility to drop Automatic Indexes. Additionally, it’s worth noting this process isn’t […]