Oakies Blog Aggregator

Merry Christmas and Happy New Year !! (“Heroes”)

I would like to take this opportunity to wish all my readers a very Merry Christmas and a most happy, peaceful and prosperous New Year. My gift this year is not David Bowie and Bing Crosby doing their famous Christmas duet but a performance by Bowie of his classic “Heroes” as featured on the same […]

Video : Oracle REST Data Services (ORDS) : HTTP Headers and ORDS-Specific Bind Variables

In today’s video we do a quick run through how to use HTTP headers and ORDS-specific bind variables in Oracle REST Data Services.

The video is based on this article.

As always you can see all my ORDS articles here.

There is also a YouTube playlist just for ORDS here.

The star of today’s video is Mark Rittman, who is one of the OGs of the community, and clearly ecstatic at being on video.

ORDS, SQLcl, SQL Developer and SQL Developer Data Modeler 19.4 Released

If you follow me, Jeff or Kris on Twitter, you will already know ORDS, SQLcl, SQL Developer and SQL Developer Data Modeler version 19.4 have all been released. They became available for download late Friday US time, and I got a DM about it yesterday morning, so fresh from spending 2 days running Data Guard builds, I moved on to running builds of some of this stuff.

Oracle REST Data Services (ORDS)


It was a long day, but this is what I did.

  • Updated all Docker ORDS builds : here
  • Updated all Vagrant database builds that include ORDS: here

As you might expect, the config changes were minimal, but the time spent waiting for all those builds to complete was hellish.

The new version of ORDS has an additional prompt during the installation. You get this question.

Enter a number to select a feature to enable [1] SQL Developer Web [2] REST Enabled SQL [3] None [1]:1

The documentation hasn’t been released yet, so I had to run a manual installation to spot the new parameter which was necessary to make the ORDS builds silent. If you want SQL Developer Web, you have to enable REST Enabled SQL also, which requires these in your parameter file.

restEnabledSql.active=true
feature.sdw=true

You’ll recognise the first if you’ve used REST Enabled SQL before. The second is the new one.

I spent some of the time waiting for builds knocking up a quick run through the SQL Developer Web feature, which you can see here.

If you’ve used Autonomous Data Warehouse (ADW) or Autonomous Transaction Processing (ATP) on Oracle Cloud, you will already know what SQL Developer Web is. I put out the post and told Jeff to take a look. Next thing Kris put it on Twitter. I was going to wait for the docs to arrive before mentioning it in public, but posting early did give me some quick feedback, so I was able to update it on the fly as I was playing with the tool. Kind-of “live blogging” I guess.

Purge Cursor

This is a note I first drafted about 5 years ago (the date stamp says March 2014) and rediscovered a few days ago when the question came up on a Twitter thread.

How do you purge a single SQL statement from the library cache without having to execute “alter system flush shared_pool”?

The answer is in the package dbms_shared_pool, specifically the purge() procedure. This package changes significantly in the upgrade from 11.2 (manual page here) to 12.1 (manual page here), so it’s best to check the reference manual for the version you’re using in case it changes again. In 11.2 (and earlier) there’s just one option for the purge() procedure, but in 12.1 the package gets 3 overloaded versions of the procedure – and one of the overloads gains an extra parameter (edition) by 19c.

Side note: In very early versions of Oracle the package wasn’t installed automatically, so you may have to execute $ORACLE_HOME/rdbms/admin/dbmspool.sql (possibly followed by prvtpool.plb) to install it. The facility to purge a cursor appeared in 11.1 and was then back-ported to 10.2.0.4. The manual pages for the procedure are, however, not up to date and don’t list all the possible flags that tell the procedure what type of object it is supposed to be purging.

The only use I’ve made of the purge() procedure is to purge a cursor from memory, though you can purge other types of object if you want to. Technically you could flush the execution plan from memory without eliminating the cursor, though you would still have to re-optimize the statement so there may be no benefit (or very little benefit) in doing so.

To demonstrate the mechanism I’m going to use three sessions – two to run a query with different optimizer environments, then a third to find and purge the cursors. Here’s the code for the first two sessions:


rem
rem     Script: purge_cursor.sql
rem     Dated:  March 2014
rem     Author: Jonathan Lewis
rem
rem     Last tested
rem             12.2.0.1
rem             11.2.0.4
rem             10.2.0.4  -- with variations to get  two child cursors
rem

/*    To be run by session 1    */

create table t1 as select * from all_objects where rownum <= 10000;
create index t1_i1 on t1(object_id) invisible;

alter session set optimizer_use_invisible_indexes = true;

set serveroutput off
select object_name from t1 where object_id = 250;
select * from table(dbms_xplan.display_cursor);

/*    To be run by session 2    */

set serveroutput off
select object_name from t1 where object_id = 250;
select * from table(dbms_xplan.display_cursor);

You’ll see that I’ve created an invisible index on the table, then allowed one session to use invisible indexes while the other session isn’t allowed to. As a consequence session 1 will produce an execution plan that shows an index range scan for child cursor 0, and session 2 will produce an execution plan that shows a table scan for child cursor 1. I won’t show the output from these two sessions, but my calls to dbms_xplan reported the sql_id as ‘ab08hg3s62rpq’ and I’ve used that as the value for a substitution variable in the code to be run by session 3.

The final demo script is a little messy because it’s going to attempt to report all the execution plans for that sql_id three times, but it will also write and execute a script to call the purge() procedure twice – once to eliminate the plans but leave the cursors in place, then a second time to purge the cursors.

define m_sql_id = 'ab08hg3s62rpq'

spool purge_cursor

prompt  =============================
prompt  Before purge- 2 child cursors
prompt  =============================

select * from table(dbms_xplan.display_cursor('&m_sql_id', null));

spool off

set verify off
set feedback off
set heading off

spool temp_purge.sql

select 
        q'{ execute dbms_shared_pool.purge('}' ||
                address || ',' || hash_value || 
                q'{', 'C', 64) }'
from 
        V$sqlarea 
where 
        sql_id = '&m_sql_id'
;

spool off

@temp_purge

set heading on
set feedback on
set verify on

spool purge_cursor append

prompt  =======================================
prompt  After heap 6 purge- 2 cursors, no plans
prompt  =======================================

select * from table(dbms_xplan.display_cursor('&m_sql_id', null));

spool off

set verify off
set feedback off
set heading off

spool temp_purge.sql

select 
        q'{ execute dbms_shared_pool.purge('}' ||
                address || ',' || hash_value || 
                q'{', 'C') }'
from 
        V$sqlarea 
where 
        sql_id = '&m_sql_id'
;

spool off

@temp_purge

set heading on
set verify on
set feedback on

spool purge_cursor append

prompt  =================================
prompt  After complete purge - no cursors
prompt  =================================

select * from table(dbms_xplan.display_cursor('&m_sql_id', null));

On the first pass the select from v$sqlarea produces a script with the following line:

 execute dbms_shared_pool.purge('000000008D3B4FD0,4032913078', 'C', 64)

On the second pass the select produces a script with the following line:

 execute dbms_shared_pool.purge('000000008D3B4FD0,4032913078', 'C')

The effect of the first call is to purge heap 6 (power(2,6) = 64) for any child cursors that exist for the sql_id. The effect of the second call is to purge the entire set of child cursors. The purge_cursor.lst file ends up with the following results:


=============================
Before purge- 2 child cursors
=============================

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------
SQL_ID  ab08hg3s62rpq, child number 0
-------------------------------------
select object_name from t1 where object_id = 250

Plan hash value: 3320414027

---------------------------------------------------------------------------------------------
| Id  | Operation                           | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |       |       |       |     2 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1    |     1 |    25 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | T1_I1 |     1 |       |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("OBJECT_ID"=250)

SQL_ID  ab08hg3s62rpq, child number 1
-------------------------------------
select object_name from t1 where object_id = 250

Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       |    27 (100)|          |
|*  1 |  TABLE ACCESS FULL| T1   |     1 |    25 |    27   (8)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("OBJECT_ID"=250)


37 rows selected.

=======================================
After heap 6 purge- 2 cursors, no plans
=======================================
old   1: select * from table(dbms_xplan.display_cursor('&m_sql_id', null))
new   1: select * from table(dbms_xplan.display_cursor('ab08hg3s62rpq', null))

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------
SQL_ID  ab08hg3s62rpq, child number 0

select object_name from t1 where object_id = 250

NOTE: cannot fetch plan for SQL_ID: ab08hg3s62rpq, CHILD_NUMBER: 0
      Please verify value of SQL_ID and CHILD_NUMBER;
      It could also be that the plan is no longer in cursor cache (check v$sql_plan)

SQL_ID  ab08hg3s62rpq, child number 1

select object_name from t1 where object_id = 250

NOTE: cannot fetch plan for SQL_ID: ab08hg3s62rpq, CHILD_NUMBER: 1
      Please verify value of SQL_ID and CHILD_NUMBER;
      It could also be that the plan is no longer in cursor cache (check v$sql_plan)


16 rows selected.

=================================
After complete purge - no cursors
=================================
old   1: select * from table(dbms_xplan.display_cursor('&m_sql_id', null))
new   1: select * from table(dbms_xplan.display_cursor('ab08hg3s62rpq', null))

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------
SQL_ID: ab08hg3s62rpq cannot be found


2 rows selected.

As you can see we start with two child cursors and two different execution plans, then the two child cursors “lose” their execution plans, and finally the cursors disappear completely. (And the parent disappears at the same time.)

Presenting the procedure a little more formally, the declaration of the version I’ve been using reads:


procedure purge(
        name    varchar2, 
        flag    char    DEFAULT 'P', 
        heaps   number  DEFAULT 1
)

To purge a cursor we set the flag to ‘C’ (or ‘c’) and the name to ‘{address},{hash_value}’ (make sure you don’t get extra spaces in that expression). If we want to purge just the execution plan we need to target heap 6, which means setting the heaps value to power(2,6). Different types of object use different (sub)heaps, so if you want to purge multiple heaps you need to add together the appropriate power(2,N) values; the default value for heaps is 1, which equates to heap 0, which means the whole object.
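If you don’t need the spool/execute scripting shown in the demo, the same purge can be done in a single anonymous block. This is a minimal sketch of my own (the sql_id is the illustrative one from the demo); the implicit RAW-to-hex conversion of the address column produces exactly the ‘{address},{hash_value}’ string the procedure expects:

```sql
declare
        v_name  varchar2(64);
begin
        -- Build the '{address},{hash_value}' name from the parent cursor
        select address || ',' || hash_value
        into   v_name
        from   v$sqlarea
        where  sql_id = 'ab08hg3s62rpq';

        -- flag 'C' = cursor; default heaps = 1 (heap 0 = the whole object)
        dbms_shared_pool.purge(v_name, 'C');
end;
/
```

Passing 64 as a third argument would restrict the purge to heap 6 (the execution plans), as in the demo above.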

For other types of object there is a reference list (under the keep() procedure) that expands on the manuals to give us the following possible values of flag:


  --        Value        Kind of Object to {keep}
  --        -----        ------------------------
  --          P          package/procedure/function
  --          Q          sequence
  --          R          trigger
  --          T          type
  --          JS         java source
  --          JC         java class
  --          JR         java resource
  --          JD         java shared data
  --          C          cursor

You will note, of course, that I’ve used v$sqlarea rather than v$sql when I was searching for the address and hash value. When you purge a cursor you purge every child cursor – you can’t identify an individual child. Even if you query v$sql instead of v$sqlarea, and use a single child_address instead of the (parent) address, the purge() procedure will still purge every child for the parent.

Warning

There is a comment in my original notes about a bug that was fixed by 11.2.0.4 – if any of the child cursors is currently executing then the purge() procedure will go into a loop waiting on “cursor: pin X”, timing out every 1/100 second. Unfortunately my notes didn’t make any explicit comment about what impact this had on any session trying to parse or execute any of the children for that parent cursor – but I suspect that you’d end up with a nasty race condition and an apparent hang.

 

3 months after the Oracle “Always Free” Tier — unexpected termination.

3 months after the Oracle “Always Free” Tier — unexpected termination. But don’t panic.

3 months ago, when Larry Ellison announced the “Always Free Tier”, I posted a blog about its possibilities and limitations:

The Oracle Cloud Free Tier

I have used the ATP, ADW, and compute instances that I created during that time, and did not expect any termination. But exactly 3 months later, the service is not available.

Autonomous Database

About the Autonomous Databases, I got the same as Dani Schider:

I can’t connect with SQL Developer Web

I can’t connect with sqlplus:

But the Service is up.

Fortunately, nothing is lost: I just restart the service and my data is there.

Perfect. The last login was on December 18th at midnight as I run this from my free tier VM to keep the databases up:

00 00 * * * TNS_ADMIN=/home/opc/wallet sqlplus -s -L demo/"**P455w0rd**"@atp1_low >/tmp/atp1.log <<<'select banner,current_timestamp from v$version;'
00 00 * * * TNS_ADMIN=/home/opc/wallet sqlplus -s -L demo/"**P455w0rd**"@adw1_low >/tmp/adw1.log <<<'select banner,current_timestamp from v$version;'

Oh, but talking about the compute instances, they are down as well:

Compute Instance

I received this notification which proves that:

  • this problem was not expected
  • the free tier is still considered production

Ok, it is really cool that they “are currently working to restore the instance(s) on your behalf”, but the procedure is simple. The instances were terminated but the boot volume is still there. I follow the procedure mentioned:

Compute > Boot Volumes

The name is easy to find: “A (Boot Volume)” is the boot volume for the instance “A” that was terminated:

It is also the occasion to clean up the mess I left: the first two “always free” boot volumes I created are still there even though I’ve terminated the instances.

Create Instance

From the context menu on the boot volume, just click “Create Instance”

Don’t forget to mention the name if you want the same as before

Public network

Do not forget that the default is no interface on the public internet. Click on “Show Shape”, go to “Network and Storage Options” and select the “Public subnet”.

Don’t forget to click on “Assign a public IP address”. You will get a new IP address, so remember to update anything that referred to the old one.

But I can check that I’ve not lost anything:

The last sign of life in /var/log/secure was at Dec 18 03:14:28, when a user “fulford” tried to ssh in from Shanghai… so the server probably crashed around that time.

Finally all good

We were notified about the problem, with a simple way to recover, and no data was lost, so no big damage. But if you relied on a 24/7 service then some manual intervention (“human labor” ;) is required to get the service up and change the IP addresses. Remember that it is a free service and “you get what you pay for”…

Update 21-DEC-2019

I received a notification that they have finished restoring the compute instances:

We have finished restoring the Compute instance(s) listed in this notification that were incorrectly terminated.

Actually, they detected that I had already done it myself:

During recovery, we identified that there were changes already made to your environment. As a result we have taken no further action, to prevent unwanted impact to your tenancy.

What if I didn’t recover myself? It seems that all went fine:

I really like the way this problem was handled by Oracle. Here is the root cause which confirms my guess that a 3-month cleanup didn’t detect the activity:

An automatic cleanup that removes Compute instances from terminated accounts incorrectly identified one or more of your free tier instances as being eligible for termination.

I can understand that. The problem with cloud automation is that when something fails, many environments are affected. Cloud scales everything — even the failures. This free tier was announced at Open World, probably set up quickly and not tested enough. But that concerns only the free tier: there is no automatic termination when you pay with cloud credits.

London March 2020: “Oracle Indexing Internals and Best Practices” and “Oracle Performance Diagnostics and Tuning” Seminars

Places are filling up, but there are still some available at both of my acclaimed seminars that I’ll be running in London, UK in March 2020. The dates and registration links are as follows: 23-24 March 2020: “Oracle Indexing Internals and Best Practices” seminar – Tickets and Registration Link 25-26 March 2020: “Oracle Performance Diagnostics and […]

Unique all the things … including your pluggables

A quick tip just in time for Christmas Smile

I logged on to my database this morning, and things just didn’t look right. In fact, they looked downright alarming. All my objects were gone, my user account had the wrong password… It was almost as if I was connecting to a totally different database!

That’s because I was! Smile Here is how it happened:

  • I have a 19c database, called DB19, and it had a single pluggable database within it called PDB1.  (As you can see, I am just a wizard when it comes to picking meaningful names!). This is my “day to day” database for doing AskTOM questions, research, education etc etc.
  • Over the past few days, I’ve been exploring some options with cloning pluggables, application containers etc, so I needed more pluggables and more databases. I created a second 19c database, called DB19S, which also had a single pluggable database called PDB1.
  • You can probably see what is coming here, but let me reinforce the point by taking a look at the listener on this machine once I had created my second database.

C:\Users\hamcdc>lsnrctl status LISTENER19

LSNRCTL for 64-bit Windows: Version 19.0.0.0.0 - Production on 18-DEC-2019 16:19:03

Copyright (c) 1991, 2019, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1519)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER19
Version                   TNSLSNR for 64-bit Windows: Version 19.0.0.0.0 - Production
Start Date                12-DEC-2019 17:59:32
Uptime                    5 days 22 hr. 19 min. 31 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   C:\oracle\product\19\network\admin\listener.ora
Listener Log File         C:\oracle\diag\tnslsnr\gtx\listener19\alert\log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=1519)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(PIPENAME=\\.\pipe\EXTPROC1519ipc)))
Services Summary...
...
...
Service "pdb1" has 2 instance(s).
  Instance "db19", status READY, has 1 handler(s) for this service...
  Instance "db19s", status READY, has 1 handler(s) for this service...
...
...
The command completed successfully

Once you have two pluggables (each from a different container) with the same name, they are both going to register that name with the listener. Hence the database you get directed to when you connect using that service name is going to be somewhat arbitrary. Probably not what you want Smile

So whilst it is not mandatory, I would strongly recommend that every pluggable database on your server have a unique name. Then you won’t get caught out like I did.

Have a safe and happy festive season everyone.

Wait for Java

This is a note courtesy of Jack van Zanen on the Oracle-L list server, who asked a question about “wait for CPU” and then produced the answer a couple of days later. It’s a simple demonstration of how Java in the database can be very deceptive in terms of indicating CPU usage that isn’t really CPU usage.

Bottom line – when you call Java Oracle knows you’re about to start doing some work on the CPU, but once you’re inside the java engine Oracle has no way of knowing whether the java code is on the CPU or waiting. So if the java starts to wait (e.g. for some slow file I/O) Oracle will still be reporting your session as using CPU.

To demonstrate the principle, I’m going to create a little java procedure that simply goes to sleep – and see what I find in the active session history (ASH) after I’ve been sleeping in java for 10 seconds.

rem
rem     Script:         java_wait_for_cpu.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Nov 2019
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem
rem     Based on an email from Jack van Zanen to Oracle-L
rem

set time on

create or replace procedure milli_sleep(i_milliseconds in number) 
as 
        language java
        name 'java.lang.Thread.sleep(int)';
/

set pagesize 60
set linesize 132
set trimspool on

column sample_time format a32
column event       format a32
column sql_text    format a60
column sql_id      new_value m_sql_id

set echo on
execute milli_sleep(1e4)

select 
        sample_time, sample_id, session_state, sql_id, event 
from 
        v$active_session_history
where 
        session_id = sys_context('userenv','sid')
and     sample_time > sysdate - 1/1440 
order by 
        sample_time
;

select sql_id, round(cpu_time/1e6,3) cpu_time, round(elapsed_time/1e6,3) elapsed, sql_text from v$sql where sql_id = '&m_sql_id';

I’ve set timing on and set echo on so that you can see when my code starts and finishes and correlate it with the report from v$active_session_history for my session. Since I’ve reported the last minute you may find some other stuff reported before the call to milli_sleep() but you should find that you get a report of about 10 seconds “ON CPU” even though your session is really not consuming any CPU at all. I’ve included a report of the SQL that’s “running” while the session is “ON CPU”.

Here (with a little edit to remove the echoed query against v$active_session_history) are the results from a run on 12.2.0.1 (and the run on 19.3.0.0 was very similar):


Procedure created.

18:51:17 SQL> execute milli_sleep(1e4)

PL/SQL procedure successfully completed.

SAMPLE_TIME                       SAMPLE_ID SESSION SQL_ID        EVENT
-------------------------------- ---------- ------- ------------- --------------------------------
16-DEC-19 06.51.11.983 PM          15577837 ON CPU  8r3xn050z2uqm
16-DEC-19 06.51.12.984 PM          15577838 ON CPU  8r3xn050z2uqm
16-DEC-19 06.51.13.985 PM          15577839 ON CPU  8r3xn050z2uqm
16-DEC-19 06.51.14.985 PM          15577840 ON CPU  8r3xn050z2uqm
16-DEC-19 06.51.15.986 PM          15577841 ON CPU  8r3xn050z2uqm
16-DEC-19 06.51.16.996 PM          15577842 ON CPU  8r3xn050z2uqm
16-DEC-19 06.51.17.995 PM          15577843 ON CPU  4jt6zf4nybawp
16-DEC-19 06.51.18.999 PM          15577844 ON CPU  4jt6zf4nybawp
16-DEC-19 06.51.20.012 PM          15577845 ON CPU  4jt6zf4nybawp
16-DEC-19 06.51.21.018 PM          15577846 ON CPU  4jt6zf4nybawp
16-DEC-19 06.51.22.019 PM          15577847 ON CPU  4jt6zf4nybawp
16-DEC-19 06.51.23.019 PM          15577848 ON CPU  4jt6zf4nybawp
16-DEC-19 06.51.24.033 PM          15577849 ON CPU  4jt6zf4nybawp
16-DEC-19 06.51.25.039 PM          15577850 ON CPU  4jt6zf4nybawp
16-DEC-19 06.51.26.047 PM          15577851 ON CPU  4jt6zf4nybawp
16-DEC-19 06.51.27.058 PM          15577852 ON CPU  4jt6zf4nybawp

16 rows selected.

18:51:27 SQL>
18:51:27 SQL> select sql_id, round(cpu_time/1e6,3) cpu_time, round(elapsed_time/1e6,3) elapsed, sql_text from v$sql where sql_id = '&m_sql_id';

SQL_ID          CPU_TIME    ELAPSED SQL_TEXT
------------- ---------- ---------- ------------------------------------------------------------
4jt6zf4nybawp       .004     10.029 BEGIN milli_sleep(1e4); END;


As you can see I had a statement executing for a few seconds before the call to milli_sleep(), but then we see milli_sleep() “on” the CPU for 10 consecutive samples; but when the sleep ends the query for actual usage shows us that the elapsed time was 10 seconds but the CPU usage was only 4 milliseconds.
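A cheap cross-check (my own sketch, not part of the original demo) is the session time model, which records measured CPU time rather than inferring it from session state:

```sql
-- Hedged sketch: "DB CPU" in v$sess_time_model is measured CPU time,
-- so it should barely move across a milli_sleep() call even though
-- ASH samples the session as ON CPU for the whole 10 seconds.
select stat_name,
       round(value/1e6, 3) as seconds
from   v$sess_time_model
where  sid = sys_context('userenv','sid')
and    stat_name in ('DB time', 'DB CPU');
```

Running this before and after the call should show “DB time” growing by roughly 10 seconds while “DB CPU” stays almost unchanged.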

 

Oracle Database 19c Automatic Indexing – Indexed Column Reorder (What Shall We Do Now?)

  I previously discussed how the default column order of an Automatic Index (in the absence of other factors) is based on the Column ID, the order in which the columns are defined in the table. But what if there are “other factors” based on new workloads and the original index column order is no […]

Finding the deleted TYPE when ANYDATA raises ORA-21700: object does not exist or is marked for delete

The current message about Oracle Database is: multi-model database. That’s not new. At the time of Oracle 9i, Object Oriented was the trend, with all the flexibility of polymorphism, but without the mess of unstructured data and without the inconsistency of NoSQL. Oracle added a datatype that can contain any datatype: SYS.ANYDATA. In the same column, you can put a number in row 1, a varchar2 in row 2, a record in row 3, an object in row 4… Any arbitrary object can be stored but, unlike a RAW or a BLOB (or XML or JSON), each object is structured and references a known datatype or a user-created TYPE.

However, it is impossible to enforce the dependency for each row and it can happen that you DROP a TYPE that is used by an ANYDATA object.

Example

I create two types. Very simple ones, and similar for this example, but it can be any complex object definition:

DEMO@//localhost/pdb1> create type DEMO1 as object(a number);
2 /
Type created.
DEMO@//localhost/pdb1> create type DEMO2 as object(a number);
2 /
Type created.

I create a table with a key (NUMBER) and value (ANYDATA):

DEMO@//localhost/pdb1> create table DEMO ( k number, v anydata );
Table created.

I insert two instances of DEMO1

DEMO@//localhost/pdb1> insert into DEMO values(1, 
anydata.convertobject( DEMO1(1)) );
1 row created.
DEMO@//localhost/pdb1> insert into DEMO values(2,
anydata.convertobject( DEMO1(1)) );
1 row created.

and two instances of DEMO2

DEMO@//localhost/pdb1> insert into DEMO values(3,
anydata.convertobject( DEMO2(1)) );
1 row created.
DEMO@//localhost/pdb1> insert into DEMO values(4, 
anydata.convertobject( DEMO2(1)) );
1 row created.

Type name and Dump

I query the table. SQL Developer displays the type but I can also get it with ANYDATA.GETTYPENAME()

select k,v,anydata.getTypeName(v) from demo;

By curiosity, I look at the binary storage:

select k,anydata.getTypeName(v),substr(dump(v,16),1,145) from demo;

This contains the Type Object ID. Here are my types from USER_TYPES:

select * from user_types;

On this example it is clear that the TYPE_OID is there:

99ED99CFEAB04E7FE0531103000A3EA6 is contained in Typ=58 Len=74: 0,1,0,0,0,0,0,1,0,0,0,19,83,df,0,34,48,90,0,2e,0,0,2a,1,85,1,2a,1,1,2,4,0,6c,99,ed,99,cf,ea,b0,4e,7f,e0,53,11,3,0,a,3e,a6,0,1,0,0,

99ED99CFEAB44E7FE0531103000A3EA6 is contained in Typ=58 Len=74: 0,1,0,0,0,0,0,1,0,0,0,19,83,e2,0,34,48,90,0,2e,0,0,2a,1,85,1,2a,1,1,2,4,0,6c,99,ed,99,cf,ea,b4,4e,7f,e0,53,11,3,0,a,3e,a6,0,1,0,0,

Drop the TYPE

Now, I can drop the TYPE without having any error:

drop type DEMO2;

This is not a bug (Bug 14828165 : TYPE IS ALLOWED TO BE DROPPED closed in status 92). With ANYDATA you want flexibility, right?

However, I cannot query a value that references this dropped TYPE:

select * from demo
*
ERROR at line 1:
ORA-21700: object does not exist or is marked for delete

And the problem is that I cannot even know the type name:

select k,anydata.getTypeName(v) from demo;

The only thing that I can see is the Type OID from the dump of the ANYDATA value:

But as the TYPE was dropped, I cannot get the name from USER_TYPES.

Flashback query

Ideally, you can get this metadata information from a Data Pump export (OID is visible in the DDL sqlfile) or from a backup. Here, as the DROP was recent, I’ll simply use Flashback Query.

I cannot use “versions between” on a view, so I first query the SCN from TYPE$:

select toid,versions_endscn,versions_operation
from sys.type$ versions between scn minvalue and maxvalue
where ',99,ed,99,cf,ea,b4,4e,7f,e0,53,11,3,0,a,3e,a6,0,1,0,0,'
like '%,'||regexp_replace(dump(type$.toid,16),'^.* ')||',%'
;

(I passed through a regexp because SQL Developer adds thousand separators which made their way to the substitution variable)

And then I query “as of” the DBA_TYPES for this SCN to get all information:

select *
from dba_types as of scn ( 11980082 -1)
where rawtohex(type_oid)= '99ED99CFEAB44E7FE0531103000A3EA6'

Here I have it: the dropped type referenced by this ANYDATA value is DEMO.DEMO2 and that can help me understand what it was and when it has been dropped. As long as I am in the UNDO retention I can find all information to recreate it (mentioning the OID).

I’ve put all that in a function which takes the ANYDATA value and DUMP() to find the OID and name when the ORA-21700 is encountered:

with function try(x anydata, d varchar2) return varchar2 as
        l_toid varchar2(1000);
        l_scn  number;
        l_name varchar2(1000);
begin
        return anydata.getTypeName(x);
exception when others then
        select rawtohex(toid), versions_endscn
        into   l_toid, l_scn
        from   sys.type$ versions between scn minvalue and maxvalue
        where  d like '%,'||regexp_replace(dump(type$.toid,16),'^.* ')||',%'
        order by versions_endscn
        fetch first 1 rows only;
        select owner||'.'||type_name
        into   l_name
        from   dba_types as of scn (l_scn - 1)
        where  rawtohex(type_oid) = l_toid;
        return sqlerrm||' -> '||l_name;
end;
select k, try(v, dump(v,16)) from demo.demo
/

Basically, ANYDATA stores all known datatypes in their own format, in a record, with an OID to reference the structure metadata. Here is an example where the NUMBER format is visible inside:
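As a quick illustration (a sketch using the built-in anydata.convertnumber() constructor), the internal NUMBER representation is visible when you dump an ANYDATA value built from a number:

```sql
-- Hedged sketch: wrap a NUMBER in ANYDATA and dump the raw storage;
-- the standard internal representation of 42 (bytes c1,2b, as shown
-- by dump(42,16)) should be visible inside the ANYDATA dump.
select anydata.getTypeName(anydata.convertnumber(42)) as type_name,
       dump(anydata.convertnumber(42), 16)            as raw_dump
from   dual;
```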

Who says that there is an impedance mismatch between Relational Databases and Object Oriented models? There is not. You can store objects in a relational database. But there are only a few use cases where you want a column with a generic datatype where you can store ANYDATA. For example, Advanced Queuing uses it for queued messages: you know what you put and you know what you read, but the table can store heterogeneous data without having to define one table queue for each type. Yes, this looks like inheritance and an abstract class, in a relational table.