Oakies Blog Aggregator

Indexing Null Values - Part 1

Indexing null values in Oracle is something that has been written about a lot in the past already. Nowadays it should be common knowledge that Oracle B*Tree indexes don't index entries that are entirely null, but it's possible to include null values in B*Tree indexes when combining them with something guaranteed to be non-null, be it another column or simply a constant expression.

Not too long ago, Jonathan Lewis published a note showing an oddity when dealing with IS NULL predicates that in the end turned out not to be a real threat, but merely a quirk in how Oracle displays the access and filter predicates when accessing an index using IS NULL together with other predicates following after.

However, I've recently come across a rather similar case where this display oddity turns into a real threat. To get things started, let's have a look at the following (this is from 18.3.0, but other recent versions should show similar results):

SQL> create table null_index as select * from dba_tables;

Table created.

SQL> insert /*+ append */ into null_index select a.* from null_index a, (select /*+ no_merge cardinality(100) */ rownum as dup from dual connect by level <= 100);

214700 rows created.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats(null, 'NULL_INDEX', method_opt => 'for all columns size auto for columns size 254 owner')

PL/SQL procedure successfully completed.

SQL> create index null_index_idx on null_index (pct_free, ' ');

Index created.

SQL> set serveroutput off pagesize 5000 arraysize 500

Session altered.

SQL> set autotrace traceonly
SQL>
SQL> select * from null_index where pct_free is null and owner in ('AUDSYS', 'CBO_TEST');

101 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 3608178030

------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 19 | 5852 | 1028 (1)| 00:00:01 |
|* 1 | TABLE ACCESS BY INDEX ROWID BATCHED| NULL_INDEX | 19 | 5852 | 1028 (1)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | NULL_INDEX_IDX | 13433 | | 32 (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST')
2 - access("PCT_FREE" IS NULL)


Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
2178 consistent gets
35 physical reads
0 redo size
7199 bytes sent via SQL*Net to client
372 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
101 rows processed

So this is the known approach of indexing null values by simply adding a constant expression and we can see from the execution plan that indeed the index was used to identify the rows having NULLs.

But we can also see from the execution plan, the number of consistent gets and the rowsource statistics that this access can surely be improved further:

Plan hash value: 3608178030

--------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | Reads |
--------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1028 (100)| 101 |00:00:00.01 | 2178 | 35 |
|* 1 | TABLE ACCESS BY INDEX ROWID BATCHED| NULL_INDEX | 1 | 19 | 1028 (1)| 101 |00:00:00.01 | 2178 | 35 |
|* 2 | INDEX RANGE SCAN | NULL_INDEX_IDX | 1 | 13433 | 32 (0)| 13433 |00:00:00.01 | 30 | 35 |
--------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter(("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST'))
2 - access("PCT_FREE" IS NULL)

Because the additional predicate on OWNER can only be applied on table level, we first identify more than 13,000 rows on index level, visit all those table rows via random access and apply the filter to end up with the final 101 rows.

So obviously we should add OWNER to the index to avoid visiting that many table rows:

SQL> create index null_index_idx2 on null_index (pct_free, owner);

Index created.

SQL> select * from null_index where pct_free is null and owner in ('AUDSYS', 'CBO_TEST');

101 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 3808602675

-------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 19 | 5852 | 40 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID BATCHED| NULL_INDEX | 19 | 5852 | 40 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | NULL_INDEX_IDX2 | 19 | | 38 (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("PCT_FREE" IS NULL)
filter("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST')



Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
137 consistent gets
61 physical reads
0 redo size
33646 bytes sent via SQL*Net to client
372 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
101 rows processed

Plan hash value: 3808602675

---------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | Reads |
---------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 40 (100)| 101 |00:00:00.01 | 137 | 61 |
| 1 | TABLE ACCESS BY INDEX ROWID BATCHED| NULL_INDEX | 1 | 19 | 40 (0)| 101 |00:00:00.01 | 137 | 61 |
|* 2 | INDEX RANGE SCAN | NULL_INDEX_IDX2 | 1 | 19 | 38 (0)| 101 |00:00:00.01 | 36 | 61 |
---------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("PCT_FREE" IS NULL)
filter(("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST'))

So at first sight this looks indeed like an improvement, and compared to the previous execution plan it is - see for example how the number of consistent gets has been reduced. However, there is something odd going on: the index cost part is even greater than in the previous example, and looking more closely at the predicate information section it becomes obvious that the additional predicate on OWNER isn't applied as an access predicate to the index, but only as a filter.

This means that rather than directly identifying the relevant parts of the index by navigating the index structure efficiently using both predicates, only the PCT_FREE IS NULL expression is used to identify the more than 13,000 corresponding index entries, and the filter on OWNER is applied afterwards. While this is better than applying the filter on table level, it can still become a very costly operation, and the question here is: why doesn't Oracle use both expressions to access the index?

The answer looks to me like an implementation restriction - I don't see any technical reason why Oracle shouldn't be capable of doing so. Currently it looks like, in this particular case, an IN predicate (or the equivalent OR predicates) following an IS NULL on index level only gets applied as a filter, similar to predicates following range or unequal comparisons, or to predicates after skipped columns / expressions in a composite index. But for those cases there is a reason why Oracle does so - it can no longer use the sorted index entries for efficient access. I don't see why this should apply to the IS NULL case - and Jonathan's note above shows that in principle, for other kinds of predicates, it works as expected (except for the oddity discussed).

This example highlights another oddity: since the query contains an IN list, ideally we would like to see an INLIST ITERATOR as part of the execution plan, but there is only an INDEX RANGE SCAN operation applying the corresponding filter expression.

By changing the order of the index expressions and having the expression used for the IS NULL predicate as trailing one, we can see the following:

SQL> create index null_index_idx3 on null_index (owner, pct_free);

Index created.

SQL> select * from null_index where pct_free is null and owner in ('AUDSYS', 'CBO_TEST');

101 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 2178707950

--------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 19 | 5852 | 6 (0)| 00:00:01 |
| 1 | INLIST ITERATOR | | | | | |
| 2 | TABLE ACCESS BY INDEX ROWID BATCHED| NULL_INDEX | 19 | 5852 | 6 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | NULL_INDEX_IDX3 | 19 | | 4 (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access(("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST') AND "PCT_FREE" IS NULL)


Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
108 consistent gets
31 physical reads
0 redo size
33646 bytes sent via SQL*Net to client
372 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
101 rows processed

Plan hash value: 2178707950

----------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | Reads |
----------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 6 (100)| 101 |00:00:00.01 | 108 | 31 |
| 1 | INLIST ITERATOR | | 1 | | | 101 |00:00:00.01 | 108 | 31 |
| 2 | TABLE ACCESS BY INDEX ROWID BATCHED| NULL_INDEX | 2 | 19 | 6 (0)| 101 |00:00:00.01 | 108 | 31 |
|* 3 | INDEX RANGE SCAN | NULL_INDEX_IDX3 | 2 | 19 | 4 (0)| 101 |00:00:00.01 | 7 | 31 |
----------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access((("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST')) AND "PCT_FREE" IS NULL)

So this is the expected execution plan, including an INLIST ITERATOR and showing that all predicate expressions get used to access the index efficiently, reducing the number of consistent gets further. Of course, a potential downside here is that this index might not be appropriate if queries are looking for PCT_FREE IS NULL only.

Summary

It looks as if IN / OR predicates following an IS NULL comparison on index level are only applied as filters and therefore also prevent other efficient operations like inlist iterators. In principle the problem can be worked around by putting the IS NULL expression at the end of a composite index, but that could come at the price of requiring an additional index on the IS NULL expression alone when there is a need to search efficiently for just that expression.

In part 2, out of curiosity, I'll have a look at what happens when applying the same approach to bitmap indexes, which include NULL values anyway...

Script used:

set echo on

drop table null_index purge;

create table null_index as select * from dba_tables;

insert /*+ append */ into null_index select a.* from null_index a, (select /*+ no_merge cardinality(100) */ rownum as dup from dual connect by level <= 100);

commit;

exec dbms_stats.gather_table_stats(null, 'NULL_INDEX', method_opt => 'for all columns size auto for columns size 254 owner')

create index null_index_idx on null_index (pct_free, ' ');

set serveroutput off pagesize 5000 arraysize 500

set autotrace traceonly

select * from null_index where pct_free is null and owner in ('AUDSYS', 'CBO_TEST');

create index null_index_idx2 on null_index (pct_free, owner);

select * from null_index where pct_free is null and owner in ('AUDSYS', 'CBO_TEST');

create index null_index_idx3 on null_index (owner, pct_free);

select * from null_index where pct_free is null and owner in ('AUDSYS', 'CBO_TEST');

Oracle Instant Client RPM installation where to find things

Last week I blogged about the option to install Oracle’s Instant Client via the public YUM repository. If you go ahead and try this, there is one thing you will undoubtedly notice: file locations are rather unusual if you have worked with Oracle for a while. This is true at least for the 19c Instant Client, it might be similar for older releases although I didn’t check. I’d like to thank @oraclebase for prompting me to write this short article!

Installing the 19.3 “Basic” Instant Client package

So to start this post I am going to install the 19.3 “Basic” package on my Oracle Linux 7.6 lab environment:

$ sudo yum install oracle-instantclient19.3-basic
Loaded plugins: ulninfo
Resolving Dependencies
--> Running transaction check
---> Package oracle-instantclient19.3-basic.x86_64 0:19.3.0.0.0-1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package                     Arch   Version      Repository                Size
================================================================================
Installing:
 oracle-instantclient19.3-basic
                             x86_64 19.3.0.0.0-1 ol7_oracle_instantclient  51 M

Transaction Summary
================================================================================
Install  1 Package

Total download size: 51 M
Installed size: 225 M
Is this ok [y/d/N]: y
Downloading packages:
oracle-instantclient19.3-basic-19.3.0.0.0-1.x86_64.rpm     |  51 MB   00:09     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : oracle-instantclient19.3-basic-19.3.0.0.0-1.x86_64           1/1 
  Verifying  : oracle-instantclient19.3-basic-19.3.0.0.0-1.x86_64           1/1 

Installed:
  oracle-instantclient19.3-basic.x86_64 0:19.3.0.0.0-1                          

Complete!

With the software installed, let’s have a look at where everything is:

$ rpm -ql oracle-instantclient19.3-basic
/etc/ld.so.conf.d/oracle-instantclient.conf
/usr/lib/oracle
/usr/lib/oracle/19.3
/usr/lib/oracle/19.3/client64
/usr/lib/oracle/19.3/client64/bin
/usr/lib/oracle/19.3/client64/bin/adrci
/usr/lib/oracle/19.3/client64/bin/genezi
/usr/lib/oracle/19.3/client64/lib
/usr/lib/oracle/19.3/client64/lib/libclntsh.so
/usr/lib/oracle/19.3/client64/lib/libclntsh.so.10.1
/usr/lib/oracle/19.3/client64/lib/libclntsh.so.11.1
/usr/lib/oracle/19.3/client64/lib/libclntsh.so.12.1
/usr/lib/oracle/19.3/client64/lib/libclntsh.so.18.1
/usr/lib/oracle/19.3/client64/lib/libclntsh.so.19.1
/usr/lib/oracle/19.3/client64/lib/libclntshcore.so
/usr/lib/oracle/19.3/client64/lib/libclntshcore.so.19.1
/usr/lib/oracle/19.3/client64/lib/libipc1.so
/usr/lib/oracle/19.3/client64/lib/libmql1.so
/usr/lib/oracle/19.3/client64/lib/libnnz19.so
/usr/lib/oracle/19.3/client64/lib/libocci.so.19.1
/usr/lib/oracle/19.3/client64/lib/libociei.so
/usr/lib/oracle/19.3/client64/lib/libocijdbc19.so
/usr/lib/oracle/19.3/client64/lib/liboramysql19.so
/usr/lib/oracle/19.3/client64/lib/network
/usr/lib/oracle/19.3/client64/lib/network/admin
/usr/lib/oracle/19.3/client64/lib/network/admin/README
/usr/lib/oracle/19.3/client64/lib/ojdbc8.jar
/usr/lib/oracle/19.3/client64/lib/xstreams.jar
/usr/share/oracle
/usr/share/oracle/19.3
/usr/share/oracle/19.3/client64
/usr/share/oracle/19.3/client64/doc
/usr/share/oracle/19.3/client64/doc/BASIC_LICENSE
/usr/share/oracle/19.3/client64/doc/BASIC_README

As you can see in the RPM output, the files are found under /usr/lib/oracle. That’s what I meant by unusual file locations. I, for my part, have followed the directory structure suggested by previous releases’ Oracle Universal Installer (OUI) defaults and installed it under /u01/app/oracle/product/version/client_1. The actual location, however, doesn’t really matter.

No more manually calling ldconfig with 19.3

Note that before 19.3 you had to manually run ldconfig after installing the Instant Client RPM file. In 19c this is handled via a post-install script. The RPM adds its configuration in /etc/ld.so.conf.d/oracle-instantclient.conf.

$ rpm -q --scripts oracle-instantclient19.3-basic
postinstall scriptlet (using /bin/sh):
ldconfig
postuninstall scriptlet (using /bin/sh):
ldconfig

This is nicer – at least in my opinion – than setting LD_LIBRARY_PATH in a shell. It seemed to work just fine: I installed the SQL*Plus package (oracle-instantclient19.3-sqlplus) and could start it without any problems.

Network Configuration

If you have a requirement to add a TNS naming file, you should be able to do so by either setting TNS_ADMIN or placing the file in /usr/lib/oracle/19.3/client64/lib/network/admin. I have a tnsnames.ora file pointing to my XE database here:

$ cat /usr/lib/oracle/19.3/client64/lib/network/admin/tnsnames.ora 
xepdb1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oraclexe)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = xepdb1)
    )
  )
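As a minimal sketch of the TNS_ADMIN alternative (the directory below is a made-up example, not the package default), all that’s needed is exporting the variable before starting the client:

```shell
# Hypothetical directory holding a tnsnames.ora; adjust to your environment.
export TNS_ADMIN=/opt/oracle/network/admin
echo "TNS_ADMIN=${TNS_ADMIN}"
```

With TNS_ADMIN set, the client reads the naming files from there instead of the network/admin directory under /usr/lib/oracle.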

I can connect to my database without setting any specific environment variables:

$ env | egrep -i 'tns|ld_library' | wc -l
0
$ sqlplus martin@xepdb1

SQL*Plus: Release 19.0.0.0.0 - Production on Mon May 13 09:38:12 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Enter password: 
Last Successful login time: Mon May 13 2019 09:22:11 -04:00

Connected to:
Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production
Version 18.4.0.0.0

The SQL*Plus package places a symbolic link to /usr/lib/oracle/19.3/client64/bin/sqlplus into /usr/bin, so I don’t even have to adjust the PATH variable.

Summary

Being able to use the Instant Client out of the box is very useful for automated deployments where all you have to do is add a repository followed by a call to yum. This should make a lot of people’s lives much easier.

APEX Connect – A Slightly Different Conference

I wanted to do a write-up for the APEX Connect conference that happened in Bonn, Germany, a few days ago, as it was a slightly different conference from the ones I normally go to, and a slightly different experience for me.

APEX Connect is a German event (put on by DOAG) that is focused on APEX, SQL & PL/SQL, and JavaScript. So more of a developers’ conference. It was an unusual conference for me for a few reasons:

  1. I don’t generally go to developer-focused events, my current roles tend to be performance & architecture based and, despite having been a developer or development DBA for most of the last 30 years, I (wrongly) mentally class myself as a “DBA type”.
  2. I was doing a keynote! I’m not used to that, but I was honoured to be asked.
  3. I was doing only the opening keynote, so once that was out of the way I had pretty much 3 days of being a normal delegate. That made a really nice change.

The conference was well attended and well run. A couple of things they did right were the good catering and the fact that coffee was always available – good coffee! It really makes a difference and it is something some conferences (including the ones I help organise) struggle with. You have no idea how much venues in the UK want to charge to make coffee available all day, let alone *good* coffee!

Something else that struck me was that the audience was a little younger than at many Oracle-focused conferences. This was helped by DOAG’s #NextGen programme, which encourages students and recent graduates to come to conferences and meet professionals working in the industry. I met a few of these students and recent graduates, as they had been tasked with asking all the keynote speakers a question, which I thought was a nice way for them to meet these “stars” and realise we are just normal people doing a job.


Some of the usual suspects! Conferences should be, I believe, half educational and half social.

Another thing was the male:female ratio. Looking at the sessions I was in, it was about 75%:25%, which in our industry is a little unusual – and very encouraging to see. I had a good discussion on the subject of gender balance with Niels de Bruijn, the main organiser, and it is a topic I have discussed a few times with Sabine Heimsath, who organised the SQL & PL/SQL stream and who asked me to present. Niels was pleased with the balance at the conference, and I shared my experiences of trying to increase the balance in the UK (I’d love 25%!). It is not a simple issue and I think (note, these are my words and not theirs) that it is almost a lost cause for my generation. I think things went wrong in the home computer industry in the 80s and continued through the 90s, with IT being portrayed by the general media as typically male and the IT-focused industry keeping to a very male-focused stance. I won’t stop trying to address the balance but I wonder if where we can really make the difference is in events where the audience is younger…

Anyway, APEX Connect had a nicely balanced and lively feel.

As I said earlier, I had been asked to do the opening keynote. My job was to say something that would be relevant to the whole audience, which was not too technically deep, and set the scene for APEX Connect. An added bonus would be for the audience to leave the talk energised for the rest of the conference. My normal talks are about tech… but I do branch out into talks on presenting, disasters and, err, beer. Talks to entertain basically. So that is what I aimed for, to entertain and to energise.

I’m one of those annoying presenters who does not usually get particularly nervous before a talk, I simply step up to (OK, behind) the lectern and go. But for the second time in 6 months (the other being the opening words for UKOUG 2018) I was really nervous for this. I think it is because when you talk on a technical subject, you are describing how something works and, so long as the audience understand, the rest of the talk (little stories, extra facts) are window dressing – enjoying the talk is a good thing to achieve but is secondary. With a keynote the Window Dressing is actually the main thing, and if the audience does not enjoy the talk you have not done the job. I’m glad to say I got a lot of positive feedback and I was very much relieved. I think I hit “peak enjoyment” for this talk when I described my early career (Builder, Oracle 7 PL/SQL developer, server-side developer, Development DBA) and used the graphical slide here.

Server. (on its) Side. Developer.

I have to say, with that talk out the way I was able to really enjoy being simply “at a conference”, seeing some great talks (Toon Koppelaars, Connor McDonald, Kamil Stawiarski, Eric van Roon – I missed Luiza Nowak & Heli Helskyaho but I had seen those talks before), chatting to lots of people, and enjoying the socialising.

I want to say a big Thank You to the organisers – Simone Fischer, Niels de Bruijn, Tobias Arnhold, Kai Donato and all the others behind the scenes. I know what it’s like doing this! And a special Thank You to Sabine Heimsath for asking me to present and for helping me get places and answering odd questions on German culture!

I’ll finish with what I think conferences and the community are all about. This does not just apply to developers of course, but to all of us in the industry:

[Image: “good developers” slide]

Did you forget to allocate Huge Pages on your PostgreSQL server?

This short post is for those who answered ‘keep the default’ to the settings described below, because the default (no huge pages allocated) is not good for a database.

When you install a Linux server, by default, there are no Huge Pages defined until you set vm.nr_hugepages in /etc/sysctl.conf and reboot or ‘sysctl -p’.

When you install PostgreSQL, by default, huge_pages=try which means that the postgres server will start with no error nor warning when huge pages are not available. This is mostly the equivalent of ‘LARGE_PAGES=TRUE’ in Oracle, except that Oracle will try to allocate as much as possible in Huge Pages.

This setting can be considered safer in case of an unplanned reboot: better to start in degraded mode than not to start at all. But the risk is that you do not realize when the shared buffers have been allocated in small pages.

Where?

First, how do you know whether the shared buffers were allocated in small or huge pages? They are shared, and therefore show up in pmap with the ‘s’ mode. Here is my pmap output when they are allocated as huge pages:

$ pmap $(pgrep postgres) |  grep -E -- "-s- .*deleted" | sort -u
00007fa05d600000 548864K rw-s- anon_hugepage (deleted)

Here is the same when allocated as small pages:

$ pmap $(pgrep postgres) |  grep -E -- "-s- .*deleted" | sort -u
00007f129b289000 547856K rw-s- zero (deleted)

As far as I know, there’s no partially allocated shared buffer: if there are not enough huge pages for the total, then none are used.
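To avoid eyeballing the pmap output every time, a small helper along these lines can do the check (the function name and wording are my own sketch); it reads the pmap output from stdin, so it can also be tried on a saved sample:

```shell
# check_hugepages: read `pmap <pid>` output on stdin and report whether the
# shared segment was allocated as huge pages (anon_hugepage) or small pages.
check_hugepages() {
  if grep -q 'rw-s-.*anon_hugepage'; then
    echo "shared buffers in huge pages"
  else
    echo "shared buffers in small pages"
  fi
}

# Example, using the huge-page line shown above:
echo '00007fa05d600000 548864K rw-s- anon_hugepage (deleted)' | check_hugepages
# → shared buffers in huge pages
```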

How?

Setting it is easy: you set the number of 2MB pages in /etc/sysctl.conf and allocate them with ‘sysctl -p’. Here is how I check the size of the memory area, from /proc/meminfo but formatted for humans.

/proc/meminfo formatted for humans
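The screenshot isn’t reproduced here, but a rough shell equivalent could look like this (the function is my own sketch; it takes a file argument so it can be tried on a copy of /proc/meminfo, where Hugepagesize is reported in kB):

```shell
# meminfo_hugepages: print the HugePages_Total/Free lines of a meminfo-style
# file and the total reserved size converted to MB.
meminfo_hugepages() {
  awk '/^HugePages_(Total|Free):/ { print }
       /^HugePages_Total:/ { total = $2 }
       /^Hugepagesize:/    { size_kb = $2 }
       END { printf "Reserved: %d MB\n", total * size_kb / 1024 }' "$1"
}
```

For example, with HugePages_Total: 300 and Hugepagesize: 2048 kB it reports 600 MB reserved.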

How much? That’s simple: Enough and not too much.

Enough means that all shared buffers should fit. Just take the sum of all shared_buffers for all instances in the server. If you have other programs using shared memory and allocating it large pages, they count as well. And don't forget to update when you add a new instance or increase the memory for an existing one.

Not too much because the processes also need to allocate their memory as small pages and what is reserved for huge pages cannot be allocated in small pages. If you do not leave enough small pages, you will have many problems and may even not be able to boot. Like this:

Kernel panic - not syncing: Out of memory and no killable processes - Blog dbi services

In addition to that, PostgreSQL does not support direct I/O and needs some free memory for the filesystem cache, which uses small pages. The documentation still mentions that the postgres shared buffers should leave the same amount of RAM for the filesystem cache (which means double buffering).

Be careful: when looking at /proc/meminfo, the huge pages allocated by PostgreSQL appear free until they are used, so do not rely on HugePages_Free; do your maths from the sum of shared_buffers. Use pmap to verify that they are used just after starting the instance. There may be some other kernel settings to adjust (permissions, memlock) if the allocation didn’t occur.
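The ‘enough and not too much’ arithmetic can be sketched like this (the instance sizes are made-up examples, and the 5% headroom is my own rule of thumb for the extra shared segments, not an official figure):

```shell
# Derive vm.nr_hugepages (2MB pages) from the sum of all instances'
# shared_buffers; the values below are examples, not measured anywhere.
HUGEPAGE_KB=2048
total_kb=0
for sb_mb in 500 1024; do               # shared_buffers of each instance, in MB
  total_kb=$((total_kb + sb_mb * 1024))
done
# Add ~5% headroom and round up to whole huge pages.
pages=$(( (total_kb + total_kb / 20 + HUGEPAGE_KB - 1) / HUGEPAGE_KB ))
echo "vm.nr_hugepages = $pages"
# → vm.nr_hugepages = 801
```

The resulting number goes into /etc/sysctl.conf and is applied with ‘sysctl -p’, as described above.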

Why?

Do not fear it. Once you have set those areas and checked them, they are fixed, and there are no surprises if you take care. And it can make a big difference in performance and memory footprint. The shared buffers have the following properties:

  • they are big, and allocating 1GB, 10GB or 100GB in 4k pages is not reasonable. Huge Pages are 2MB.
  • they are shared, and mapping so many small pages from many processes is not efficient. It takes a lot of memory just to map them and increases the chance of TLB misses,…

I’ll use Kevin Closson’s pgio (https://kevinclosson.net) to show how to test.

Sneak Preview of pgio (The SLOB Method for PostgreSQL) Part IV: How To Reduce The Amount of Memory In The Linux Page Cache For Testing Purposes.

Here’s my pgio.conf:

$ grep -vE "^[[:blank:]]*#|^$" pgio.conf
UPDATE_PCT=0
RUN_TIME=60
NUM_SCHEMAS=2
NUM_THREADS=2
WORK_UNIT=255
UPDATE_WORK_UNIT=8
SCALE=100M
DBNAME=pgio
CONNECT_STRING=pgio
CREATE_BASE_TABLE=TRUE
$ sh ./setup.sh
Job info:      Loading 100M scale into 2 schemas as per pgio.conf->NUM_SCHEMAS.
Batching info: Loading 2 schemas per batch as per pgio.conf->NUM_THREADS.
Base table loading time: 0 seconds.
Waiting for batch. Global schema count: 2. Elapsed: 0 seconds.
Waiting for batch. Global schema count: 2. Elapsed: 1 seconds.
Group data loading phase complete.         Elapsed: 1 seconds.

I have set up two 100M schemas. My shared_buffers is 500MB so all reads are cache hits:

2.7 million LIOPS (logical reads per second) here. The advantage of the pgio benchmark is that I focus exactly on what I want to measure: reading pages from the shared buffers. There’s minimal physical I/O (it should be zero, but there was no warm-up and the test is too short), and minimal processing on the page (and this is why I use pgio and not pgbench here).

I have disabled Huge Pages for this test.

$ grep -E "(shared_buffers|huge_pages).*=" $PGDATA/postgresql.conf
shared_buffers=500MB
#shared_buffers = 128MB # min 128kB
#huge_pages = try # on, off, or try
huge_pages=off

Now enabling them. I keep the ‘try’ default and check that they are used. I could have set huge_pages to on to be sure.

$ sed -ie '/^huge_pages/d' $PGDATA/postgresql.conf
$ grep huge_pages $PGDATA/postgresql.conf
#huge_pages = try # on, off, or try
$ pg_ctl -l $PGDATA/logfile restart
waiting for server to shut down.... done
server stopped
waiting for server to start.... done
server started
$ pmap $(head -1 $PGDATA/postmaster.pid) | sort -hk2 | tail -4 | grep -E "^|-s-"
00007fb824d4e000 20292K r-x-- libicudata.so.50.1.2
00007fb81cc3f000 103592K r---- locale-archive
00007fb7fb200000 548864K rw-s- anon_hugepage (deleted)
total 800476K

Then I run the same test:

Here, even with a small shared memory (500MB) and only 4 threads, the difference is visible: the cache-hit performance with small pages is only 2719719/3016865 = 90% of what is achieved with huge pages.

Those screenshots are from a very small demo to show how to do it. If you need real numbers, use a longer run, like RUN_TIME=600, and run it on your own platform, because the overhead of a large shared memory allocated in small pages depends on your CPU, your OS (with the patches to mitigate the CPU security vulnerabilities), your hypervisor,…

One thing is certain: any database shared buffer cache should be allocated in pages larger than the default. Small (4k) pages were never meant for shared memory allocations in the gigabyte range. The second advantage is that huge pages are never written to swap: you allocate this memory to reduce disk I/O, and consequently you don't want it written to disk.
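To enable them explicitly, the kernel needs enough huge pages reserved. Here is a minimal sizing sketch, assuming the common 2MB huge page size (check Hugepagesize in /proc/meminfo on your system):

```shell
# Round shared_buffers up to whole 2MB pages and add ~10% headroom
# for the other shared memory structures PostgreSQL allocates.
shared_buffers_mb=500
hugepage_mb=2
pages=$(( (shared_buffers_mb + hugepage_mb - 1) / hugepage_mb ))   # 250
pages_with_headroom=$(( pages + pages / 10 ))                      # 275
echo "vm.nr_hugepages = $pages_with_headroom"
```

Set the result with sysctl (or in /etc/sysctl.conf) before restarting the instance; with huge_pages = try, PostgreSQL silently falls back to small pages if the reservation is too small.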

There’s a very nice presentation on this topic by Fernando Laudares Camargos:

FOSDEM 2019 - Hugepages and databases

Installing the Oracle Instant Client RPM via YUM on Oracle Linux 7

Many applications require Oracle’s instant client to enable connectivity with the database. In the past, getting hold of the instant client required you to agree to the license agreement before you could download the software. For a little while now, Oracle offers a YUM repository with the instant client RPMs. There are a couple of announcements to that effect, for example on Oracle’s Linux blog. It’s a great step ahead for usability, and I really appreciate the move. After a small exchange on Twitter I had to give this a go and see how this works. The following article is a summary of what I learned.

Instead of pulling the RPMs straight from the repository location using curl or wget, you can add the repository to your YUM configuration just as easily. How you do this depends on your installed Oracle Linux image. Oracle changed the way the contents of /etc/yum.repos.d is handled earlier this year. If your system is already using the new modular approach – many cloud images for example use the new modular approach already – the steps are slightly different from the monolithic configuration file.

Monolithic public-yum-ol7.repo

My virtualised lab environment (for which I actually do have a backup!) is based on Oracle Linux 7.6 and still uses /etc/yum.repos.d/public-yum-ol7.repo. The files belonging to the oraclelinux-release-el7 RPM are present but disabled. I am conscious of the fact that this configuration file is deprecated.

To enable the instant client repository you can either edit public-yum-ol7.repo with your text editor of choice, or use yum-config-manager. Ansible users can use the ini_file module to do the same thing in code. I used yum-config-manager:

$ sudo yum-config-manager --enable ol7_oracle_instantclient

With the repository enabled you can search for the oracle-instantclient RPMs:

$ sudo yum search oracle-instant
Loaded plugins: ulninfo
ol7_UEKR5                                                | 1.2 kB     00:00     
ol7_latest                                               | 1.4 kB     00:00     
ol7_oracle_instantclient                                 | 1.2 kB     00:00     
(1/4): ol7_oracle_instantclient/x86_64/primary             | 4.4 kB   00:00     
(2/4): ol7_oracle_instantclient/x86_64/updateinfo          |  145 B   00:00     
(3/4): ol7_latest/x86_64/updateinfo                        | 907 kB   00:00     
(4/4): ol7_latest/x86_64/primary                           |  13 MB   00:02     
ol7_latest                                                          12804/12804
ol7_oracle_instantclient                                                  21/21
========================= N/S matched: oracle-instant ==========================
oracle-instantclient18.3-basic.x86_64 : Oracle Instant Client Basic package
oracle-instantclient18.3-basiclite.x86_64 : Oracle Instant Client Light package
oracle-instantclient18.3-devel.x86_64 : Development headers for Instant Client.
oracle-instantclient18.3-jdbc.x86_64 : Supplemental JDBC features for the Oracle
                                     : Instant Client
oracle-instantclient18.3-odbc.x86_64 : Oracle Instant Client ODBC
oracle-instantclient18.3-sqlplus.x86_64 : SQL*Plus for Instant Client.
oracle-instantclient18.3-tools.x86_64 : Tools for Oracle Instant Client
oracle-instantclient18.5-basic.x86_64 : Oracle Instant Client Basic package
oracle-instantclient18.5-basiclite.x86_64 : Oracle Instant Client Light package
oracle-instantclient18.5-devel.x86_64 : Development headers for Instant Client.
oracle-instantclient18.5-jdbc.x86_64 : Supplemental JDBC features for the Oracle
                                     : Instant Client
oracle-instantclient18.5-odbc.x86_64 : Oracle Instant Client ODBC
oracle-instantclient18.5-sqlplus.x86_64 : SQL*Plus for Instant Client.
oracle-instantclient18.5-tools.x86_64 : Tools for Oracle Instant Client
oracle-instantclient19.3-basic.x86_64 : Oracle Instant Client Basic package
oracle-instantclient19.3-basiclite.x86_64 : Oracle Instant Client Light package
oracle-instantclient19.3-devel.x86_64 : Development header files for Oracle
                                      : Instant Client.
oracle-instantclient19.3-jdbc.x86_64 : Supplemental JDBC features for the Oracle
                                     : Instant Client
oracle-instantclient19.3-odbc.x86_64 : Oracle Instant Client ODBC
oracle-instantclient19.3-sqlplus.x86_64 : Oracle Instant Client SQL*Plus package
oracle-instantclient19.3-tools.x86_64 : Tools for Oracle Instant Client

  Name and summary matches only, use "search all" for everything.

As you can see, there are instant clients for 18.3, 18.5 and 19.3 at the time of writing.
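The packages follow the naming scheme oracle-instantclient<version>-<component>, so installing a specific client is just a matter of composing the package name. The version and component below are example choices, not a recommendation:

```shell
# Compose the package name for the version/component you need
# (examples; see the search output above for the full list).
ver=19.3
comp=basic
pkg="oracle-instantclient${ver}-${comp}"
echo "sudo yum -y install $pkg"
```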

Modular YUM configuration

If your system is already configured to use the modular YUM configuration, you need to know which RPM contains the configuration for the instant client repository. The YUM getting started guide comes to the rescue again. If you consult the table in section Installing Software from Oracle Linux Yum Server, you’ll notice that you have to install oracle-release-el7.

$ sudo yum install oracle-release-el7
Loaded plugins: ulninfo
Resolving Dependencies
--> Running transaction check
---> Package oracle-release-el7.x86_64 0:1.0-2.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package                  Arch         Version           Repository        Size
================================================================================
Installing:
 oracle-release-el7       x86_64       1.0-2.el7         ol7_latest        14 k

Transaction Summary
================================================================================
Install  1 Package

Total download size: 14 k
Installed size: 18 k
Is this ok [y/d/N]: y
Downloading packages:
oracle-release-el7-1.0-2.el7.x86_64.rpm                    |  14 kB   00:00     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : oracle-release-el7-1.0-2.el7.x86_64                          1/1 
  Verifying  : oracle-release-el7-1.0-2.el7.x86_64                          1/1 

Installed:
  oracle-release-el7.x86_64 0:1.0-2.el7                                         

Complete!

As part of the YUM command, the instant client repository configuration is laid down and enabled. From this point onward, everything is exactly the same as previously shown in the section detailing the monolithic repository configuration file.

Easy Oracle Cloud wallet location in the JDBC connection string

I wrote about the 19c easy-connect string recently and the possibility to use a wallet with it (and no need for a tnsnames.ora then):

19c EZCONNECT and Wallet (Easy Connect and External Password File)

That was with sqlplus, setting TNS_ADMIN, and it still required sqlnet.ora to set the wallet location directory. This post adds two things:

  • a TNS_ADMIN parameter in the JDBC URL, with no need for
    java -Doracle.net.tns_admin
  • We can add our password to the cloud wallet downloaded from the Autonomous Database (ATP/ADW)

Oracle Cloud user

For this test I’ve created a new user in my Autonomous Transaction Processing cloud service.

The click-path is:

  • Autonomous Database
  • Autonomous Database Details
  • Service Console
  • Administration
  • Manage ML Users (yes, this is the Machine Learning interface)
  • Create User

I’ve created the user Franck with password T1s1s@UserPassword (with an at-sign on purpose to show you that it is annoying but manageable).
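Why is the at-sign annoying? It would otherwise be parsed as the user/password@service separator, so the connect string needs the password wrapped in double quotes, with the double quotes themselves protected from the shell. A small sketch:

```shell
# Wrap the password in double quotes for the connect string, and protect
# those quotes from the shell with an outer layer of single quotes.
PASS='T1s1s@UserPassword'
CONNECT="Franck/\"${PASS}\"@myatp_low"
echo "$CONNECT"   # Franck/"T1s1s@UserPassword"@myatp_low
```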

A side note here, because this is the most hidden part of these autonomous services: you are in the Machine Learning interface, and if you click on the Home button at the top right you can log in with this user and access a Zeppelin Notebook to run some SQL statements.

I’m opening a ‘SQL Script Scratchpad’ here to see my roles. A user created from this Oracle ML interface is a developer user with the roles ‘OML_DEVELOPER’ and ‘DWROLE’ (even if this is ATP and not ADW). This differs from the ADMIN user, which has many more roles.

You can create the same user from the SQL command line or SQL Developer connected as ADMIN. But for this, we need to download the credentials wallet first.

Oracle Cloud wallet

I will connect with this user from my on-premises environment (aka my laptop ;) and for that I need to download the credential wallet, which contains everything required to connect to the service remotely.

The web path is:

  • Autonomous Database
  • Autonomous Database Details
  • Service Console
  • Download Client Credentials (Wallet)

I enter a password for this wallet: T1s1s@WalletPassword and get a .zip file. This is not a password-protected ZIP: the password entered is the wallet password. Note that the wallet is common to all the database services in my compartment and I’ll see all of them in the tnsnames.ora. There’s one wallet per compartment; finer-grained security is handled by each database with user authentication. But the wallet file name contains the name of the service you downloaded it from.

Oracle JDBC client

I unzip the wallet:

[oracle@db193]$ unzip -d /home/oracle/mywallet /tmp/wallet_MYATP.zip
Archive: /tmp/wallet_MYATP.zip
inflating: /home/oracle/mywallet/cwallet.sso
inflating: /home/oracle/mywallet/tnsnames.ora
inflating: /home/oracle/mywallet/truststore.jks
inflating: /home/oracle/mywallet/ojdbc.properties
inflating: /home/oracle/mywallet/sqlnet.ora
inflating: /home/oracle/mywallet/ewallet.p12
inflating: /home/oracle/mywallet/keystore.jks

It contains 3 entries for each of my database services, with low/medium/high alternatives for resource management.

Here I’m using myatp_low:

[oracle@db193]$ grep myatp /home/oracle/mywallet/tnsnames.ora
myatp_low = (description= (address=(protocol=tcps)(port=1522)(host=adb.eu-frankfurt-1.oraclecloud.com))(connect_data=(service_name=vavxrlxx2llql7m_myatp_low.atp.oraclecloud.com))(security=(ssl_server_cert_dn="CN=adwc.eucom-central-1.oraclecloud.com,OU=Oracle BMCS FRANKFURT,O=Oracle Corporation,L=Redwood City,ST=California,C=US")) )
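A quick sanity check of what we are connecting to: the host and service_name can be pulled out of the single-line entry with grep (the entry is shortened below, dropping the ssl_server_cert_dn part for readability):

```shell
# Extract host and service_name from a (shortened) tnsnames.ora entry.
ENTRY='(description=(address=(protocol=tcps)(port=1522)(host=adb.eu-frankfurt-1.oraclecloud.com))(connect_data=(service_name=vavxrlxx2llql7m_myatp_low.atp.oraclecloud.com)))'
echo "$ENTRY" | grep -oE 'host=[^)]+'
echo "$ENTRY" | grep -oE 'service_name=[^)]+'
```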

If I specify only TNS_ADMIN, the tnsnames.ora is found but not the wallet (because the wallet location directory is not set up in the sqlnet.ora):

[oracle@db193]$ TNS_ADMIN=/home/oracle/mywallet tnsping myatp_low
TNS Ping Utility for Linux: Version 19.0.0.0.0 - Production on 08-MAY-2019 20:21:20
Copyright (c) 1997, 2019, Oracle.  All rights reserved.
Used parameter files:
/home/oracle/mywallet/sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (description= (address=(protocol=tcps)(port=1522)(host=adb.eu-frankfurt-1.oraclecloud.com))(connect_data=(service_name=vavxrlxx2llql7m_myatp_low.atp.oraclecloud.com))(security=(ssl_server_cert_dn= CN=adwc.eucom-central-1.oraclecloud.com,OU=Oracle BMCS FRANKFURT,O=Oracle Corporation,L=Redwood City,ST=California,C=US)))
TNS-12560: TNS:protocol adapter error

jdbc:oracle:thin:…?TNS_ADMIN=…

I don’t care about sqlnet.ora here because I’ll use SQLcl with the thin JDBC connection. Since 18.3, the driver supports a TNS_ADMIN parameter in the URL, and it is used to find not only the tnsnames.ora but also the credential files:

[oracle@db193]$ sql Franck/'"T1s1s@UserPassword"'@myatp_low?TNS_ADMIN=/home/oracle/mywallet
SQLcl: Release 19.1 Production on Wed May 08 20:23:38 2019
Copyright (c) 1982, 2019, Oracle.  All rights reserved.
Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.4.0.0.0
SQL> show user
USER is "FRANCK"
SQL> show jdbc
-- Database Info --
Database Product Name: Oracle
Database Product Version: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.4.0.0.0
Database Major Version: 18
Database Minor Version: 0
-- Driver Info --
Driver Name: Oracle JDBC driver
Driver Version: 19.3.0.0.0
Driver Major Version: 19
Driver Minor Version: 3
Driver URL: jdbc:oracle:thin:@myatp_low
Driver Location:
resource: oracle/jdbc/OracleDriver.class
jar: /u01/app/oracle/product/DB193/jdbc/lib/ojdbc8.jar
JarSize: 4210517
JarDate: Fri Apr 05 03:38:42 GMT 2019
resourceSize: 2604
resourceDate: Thu Apr 04 20:38:40 GMT 2019

This is awesome because with previous JDBC driver versions you had to set this from Java, with -Doracle.net.tns_admin or OracleConnection.CONNECTION_PROPERTY_TNS_ADMIN (or SET CLOUDCONFIG in SQLcl).
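Composed as a plain string, the URL shape the driver receives here is simply the following (paths and service name as in this example):

```shell
# The thin-driver URL with TNS_ADMIN passed as a URL parameter (18.3+ drivers).
SERVICE=myatp_low
TNS_ADMIN_DIR=/home/oracle/mywallet
echo "jdbc:oracle:thin:@${SERVICE}?TNS_ADMIN=${TNS_ADMIN_DIR}"
```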

The 18.3 version of the driver is available for JDK 8 (ojdbc8.jar) and JDK 10 (ojdbc10.jar):

JDBC and UCP Downloads page

Passwordless connection

And there is more: an easy way to go passwordless. You already have a wallet, used to store the SSL/TLS certificate and key.

But you can use the same to store your passwords:

[oracle@db193]$ mkstore -wrl . -createCredential myatp_low Franck
Oracle Secret Store Tool Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
Copyright (c) 2004, 2019, Oracle and/or its affiliates. All rights reserved.
Your secret/Password is missing in the command line
Enter your secret/Password: T1s1s@UserPassword
Re-enter your secret/Password: T1s1s@UserPassword
Enter wallet password:

Important note: this is easy for passwordless connection, but then the protection of those files is critical. Before, you needed the wallet and the user password to connect to your service: the wallet to reach the database, and the user password to connect to it. Now, anybody with read access to the wallet can connect to your service with the stored credentials.

Here is the list of credentials stored there:

[oracle@db193]$ mkstore -wrl . -listCredential
Oracle Secret Store Tool Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
Copyright (c) 2004, 2019, Oracle and/or its affiliates. All rights reserved.
Enter wallet password: T1s1s@WalletPassword
List credential (index: connect_string username)
1: myatp_low Franck

And even the stored password is easily visible when you provide the wallet password:

[oracle@db193]$ mkstore -wrl . -viewEntry oracle.security.client.password1
Oracle Secret Store Tool Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
Copyright (c) 2004, 2019, Oracle and/or its affiliates. All rights reserved.
Enter wallet password: T1s1s@WalletPassword
oracle.security.client.password1 = T1s1s@UserPassword

There’s no magic: this is still password authentication, and the password needs to be readable. But that’s better than having it hardcoded in scripts and command lines.

So, now it is easy to connect without mentioning the user and password but only the service and the location of the wallet:

[oracle@db193]$ sql /@myatp_low?TNS_ADMIN=/home/oracle/mywallet
SQLcl: Release 19.1 Production on Wed May 08 20:27:25 2019
Copyright (c) 1982, 2019, Oracle.  All rights reserved.
Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.4.0.0.0
SQL> show user
USER is "FRANCK"
SQL>

The service has been found in the tnsnames.ora and the password in the stored credentials (sqlnet.ora was not used there, so there’s no need to configure it).

PeopleSoft Administrator Podcast: #183 – Effective Performance Monitoring

I recently recorded a podcast with Dan Iverson and Kyle Benson for the PeopleSoft Administrator Podcast, this time about instrumentation, monitoring the performance of PeopleSoft systems, and Performance Monitor. There is also just a little about cursor sharing.

(3 May 2019) #183 – Effective Performance Monitoring

You can listen to the podcast on psadmin.io, or subscribe with your favourite podcast player, or in iTunes.

Learning about Kubernetes: JDBC database connectivity to an Oracle database

In the past few months I have spent some time trying to better understand Kubernetes and how application developers can make use of it in the context of the Oracle database. In this post I’m sharing what I learned along that way. Please be careful: this is very much a moving target, and I wouldn’t call myself an expert in the field. If you find anything in this post that could be done differently/better, please let me know!

By the way, I am going to put something similar together where Oracle REST Data Services (ORDS) will provide a different, more popular, yet potentially more difficult-to-get-right connection method.

My Motivation for this post

The question I wanted to answer for myself was: can I deploy an application into Kubernetes that uses JDBC to connect to Oracle? The key design decision I made was to have the database outside the Kubernetes cluster. From what I read on blogs and social media, the consensus at the time of writing appears to be to keep state out of Kubernetes. I am curious to see if this is going to change in the future…

My environment

As this whole subject of container-based deployment and orchestration is developing at an incredible pace, here’s the specification of my lab environment used for this little experiment so I remember which components I had in use in case something breaks later…

  • Ubuntu 18.04 LTS serves as my host operating environment
  • Local installation of kubectl v1.14.1
  • I am using minikube version: v1.0.1 and it pulled the following default components:
    • Container runtime: docker 18.06.3-ce
    • kubernetes 1.14.1 (kubeadm v1.14.1, kubelet v1.14.1)
  • The database service is provided by Oracle XE (oracle-database-xe-18c-1.0-1.x86_64)
    • Oracle XE 18.4.0 to provide data persistence in a VM (“oraclexe.example.com”)
    • The Pluggable Database (PDB) name is XEPDB1
    • I am using an unprivileged user to connect
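With that environment, the connection string the application inside the cluster will use can be sketched as follows (the listener port 1521 is an assumption, it is not listed above):

```shell
# JDBC thin URL for the XE PDB, reachable from inside the cluster.
DB_HOST=oraclexe.example.com   # the XE VM
DB_PORT=1521                   # assumed default listener port
PDB=XEPDB1
echo "jdbc:oracle:thin:@//${DB_HOST}:${DB_PORT}/${PDB}"
```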

I would like to take the opportunity again to point out that the information you see in this post is my transcript of how I built this proof-of-concept, can-I-even-do-it application. It’s by no means meant to be anything other than that. There are a lot more bells and whistles to be attached to this in order to make it remotely ready for more serious use. But it was super fun putting this all together which is why I’m sharing this anyway.

Preparing Minikube

Since I’m only getting started with Kubernetes I didn’t want to set up a multi-node Kubernetes cluster in the cloud, and instead went with Minikube just to see if my ideas pan out ok or not. As I understand it, Minikube is a development/playground Kubernetes environment and it’s explicitly stated not to be suitable for production. It does seem to give me all that I need to work out my ideas. Since I’ve been using Virtualbox for a long time I went with the Minikube Virtualbox provider and all the defaults. It isn’t too hard to get this to work, I followed the documentation (see earlier link) on how to install kubectl (the main Kubernetes command line tool) and Minikube.

Once kubectl and Minikube were installed, I started the playground environment. On a modern terminal this prints some very fancy characters in the output as you can see:

$ minikube start

Compression Restrictions - Update: Wide Table Compression In 12.1 And Later

I do have a very old post here, once rather popular, that listed various restrictions related to compression. One of the most obvious restrictions in older versions was that basic / OLTP (Advanced Row) heap table compression, based on symbol tables / de-duplication, was limited to tables with 254 columns or less, or more generally to rows with a single row piece.

This limitation was lifted with Oracle 12c, which started to support this compression type for tables having more than 254 columns / multiple row pieces; there is a new internal parameter called "_widetab_comp_enabled" that defaults to TRUE.
So this is nothing really new, since 12.1 was released a long time ago, but it looks like it wasn't mentioned in the official 12c "New Features Guide", only in some other documents as a side note, although it could be an important change for users with such "wide" tables who are not allowed to make use of HCC compression and would like to save space and improve I/O performance.
Now the odd thing is that Oracle obviously found some problems with this change in 12.1.0.2 (apparently revolving around redo apply) and hence decided to disable this feature partially in later PSUs of 12.1.0.2. There are a number of related notes on MyOracleSupport, specifically:
All About Advanced Table Compression (Overview, Usage, Examples, Restrictions) (Doc ID 882712.1)

Advanced Compression Restriction on 255 Columns Lifted For Oracle 12c (Doc ID 1612095.1)

Bug 25498473 : OLTP COMPRESSION DOESN'T WORK IN 12.1.0.2.160719 AND HIGHER WITH > 255 COLUMNS

So the summary of the situation seems to be this:
- In 12.1.0.2 with current PSUs applied you only get compression of tables with multiple row pieces when using Advanced Compression and conventional DML. Direct Path Load / CTAS / ALTER TABLE MOVE reorganisation does not compress, hence basic compression does not work for such "wide" tables.
- Starting with Oracle 12.2 this (artificial) limitation is lifted again, and the Direct Path Load / CTAS / ALTER TABLE MOVE code paths support compression again, which seems to include basic compression, although the documents mentioned above only refer to OLTP (Advanced Row) compression.
Note that the code obviously (according to the mentioned documents) checks the COMPATIBLE setting of the database, so running 12.2+ with COMPATIBLE set to 12.1 means behaviour like 12.1, and hence no basic compression for "wide" tables.
So depending on the patch level of the database, a script populating a 1,000-column table with 10,000 rows of highly repetitive content (you can find it at the end of this post) produces the following output in 12.1.0.2:

Unpatched 12.1.0.2:
SQL> create table many_cols (
2 col001 varchar2(10),
3 col002 varchar2(10),
4 col003 varchar2(10),
.
.
.
1000 col999 varchar2(10),
1001 col1000 varchar2(10)
1002 );

Table created.

SQL>
SQL> declare
2 s_sql1 varchar2(32767);
3 s_sql2 varchar2(32767);
4 begin
5 for i in 1..1000 loop
6 s_sql1 := s_sql1 || 'col' || to_char(i, 'FM9000') || ',';
7 s_sql2 := s_sql2 || '''BLABLA'',';
8 end loop;
9
10 s_sql1 := rtrim(s_sql1, ',');
11 s_sql2 := rtrim(s_sql2, ',');
12
13 s_sql1 := 'insert into many_cols (' || s_sql1 || ') values (' || s_sql2 || ')';
14
15 for i in 1..10000 loop
16 execute immediate s_sql1;
17 end loop;
18
19 commit;
20 end;
21 /

PL/SQL procedure successfully completed.

SQL> select count(*) from many_cols;

COUNT(*)
----------
10000

SQL> exec dbms_stats.gather_table_stats(null, 'many_cols')

PL/SQL procedure successfully completed.

SQL> select blocks from user_tab_statistics where table_name = 'MANY_COLS';

BLOCKS
----------
10003


SQL> alter table many_cols move compress basic;

Table altered.

SQL> exec dbms_stats.gather_table_stats(null, 'many_cols')

PL/SQL procedure successfully completed.

SQL> select blocks from user_tab_statistics where table_name = 'MANY_COLS';

BLOCKS
----------
55

12.1.0.2 with some recent PSU applied:

SQL> create table many_cols (
2 col001 varchar2(10),
3 col002 varchar2(10),
4 col003 varchar2(10),
.
.
.
1000 col999 varchar2(10),
1001 col1000 varchar2(10)
1002 );

Table created.

SQL>
SQL> declare
2 s_sql1 varchar2(32767);
3 s_sql2 varchar2(32767);
4 begin
5 for i in 1..1000 loop
6 s_sql1 := s_sql1 || 'col' || to_char(i, 'FM9000') || ',';
7 s_sql2 := s_sql2 || '''BLABLA'',';
8 end loop;
9
10 s_sql1 := rtrim(s_sql1, ',');
11 s_sql2 := rtrim(s_sql2, ',');
12
13 s_sql1 := 'insert into many_cols (' || s_sql1 || ') values (' || s_sql2 || ')';
14
15 for i in 1..10000 loop
16 execute immediate s_sql1;
17 end loop;
18
19 commit;
20 end;
21 /

PL/SQL procedure successfully completed.

SQL> select count(*) from many_cols;

COUNT(*)
----------
10000

SQL> exec dbms_stats.gather_table_stats(null, 'many_cols')

PL/SQL procedure successfully completed.

SQL> select blocks from user_tab_statistics where table_name = 'MANY_COLS';

BLOCKS
----------
10097


SQL> alter table many_cols move compress basic;

Table altered.

SQL> exec dbms_stats.gather_table_stats(null, 'many_cols')

PL/SQL procedure successfully completed.

SQL> select blocks from user_tab_statistics where table_name = 'MANY_COLS';

BLOCKS
----------
10143

12.2 and later again produce in principle the same output as the unpatched 12.1.0.2; here as an example 18.3 (18.4.1 shows the same behaviour):

SQL> create table many_cols (
2 col001 varchar2(10),
3 col002 varchar2(10),
4 col003 varchar2(10),
.
.
.
1000 col999 varchar2(10),
1001 col1000 varchar2(10)
1002 );

Table created.

SQL>
SQL> declare
2 s_sql1 varchar2(32767);
3 s_sql2 varchar2(32767);
4 begin
5 for i in 1..1000 loop
6 s_sql1 := s_sql1 || 'col' || to_char(i, 'FM9000') || ',';
7 s_sql2 := s_sql2 || '''BLABLA'',';
8 end loop;
9
10 s_sql1 := rtrim(s_sql1, ',');
11 s_sql2 := rtrim(s_sql2, ',');
12
13 s_sql1 := 'insert into many_cols (' || s_sql1 || ') values (' || s_sql2 || ')';
14
15 for i in 1..10000 loop
16 execute immediate s_sql1;
17 end loop;
18
19 commit;
20 end;
21 /

PL/SQL procedure successfully completed.

SQL> select count(*) from many_cols;

COUNT(*)
----------
10000

SQL> exec dbms_stats.gather_table_stats(null, 'many_cols')

PL/SQL procedure successfully completed.

SQL> select blocks from user_tab_statistics where table_name = 'MANY_COLS';

BLOCKS
----------
10003


SQL> alter table many_cols move compress basic;

Table altered.

SQL> exec dbms_stats.gather_table_stats(null, 'many_cols')

PL/SQL procedure successfully completed.

SQL> select blocks from user_tab_statistics where table_name = 'MANY_COLS';

BLOCKS
----------
55

If you want to make use of "wide" table compression in 12.1.0.2 with PSUs applied, the script has to be changed to enable OLTP / Advanced Row compression before the initial population, and then you get this output:

SQL> create table many_cols (
2 col001 varchar2(10),
3 col002 varchar2(10),
4 col003 varchar2(10),
.
.
.
1000 col999 varchar2(10),
1001 col1000 varchar2(10)
1002 )
1003 compress for oltp
1004 ;

Table created.

SQL>
SQL> declare
2 s_sql1 varchar2(32767);
3 s_sql2 varchar2(32767);
4 begin
5 for i in 1..1000 loop
6 s_sql1 := s_sql1 || 'col' || to_char(i, 'FM9000') || ',';
7 s_sql2 := s_sql2 || '''BLABLA'',';
8 end loop;
9
10 s_sql1 := rtrim(s_sql1, ',');
11 s_sql2 := rtrim(s_sql2, ',');
12
13 s_sql1 := 'insert into many_cols (' || s_sql1 || ') values (' || s_sql2 || ')';
14
15 for i in 1..10000 loop
16 execute immediate s_sql1;
17 end loop;
18
19 commit;
20 end;
21 /

PL/SQL procedure successfully completed.

SQL> select count(*) from many_cols;

COUNT(*)
----------
10000

SQL> exec dbms_stats.gather_table_stats(null, 'many_cols')

PL/SQL procedure successfully completed.

SQL> select blocks from user_tab_statistics where table_name = 'MANY_COLS';

BLOCKS
----------
88


SQL> alter table many_cols move compress basic;

Table altered.

SQL> exec dbms_stats.gather_table_stats(null, 'many_cols')

PL/SQL procedure successfully completed.

SQL> select blocks from user_tab_statistics where table_name = 'MANY_COLS';

BLOCKS
----------
10143

which confirms that "wide" table compression works for conventional inserts, but requires the additional "Advanced Compression" license. Another downside of this approach is a potentially huge impact on performance and undo / redo generation due to the repeated block compression, with each compression operation dumping the whole before-image block to undo / redo.
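As an aside, the long hand-written CREATE TABLE statement in the script below can also be generated with a small loop; a sketch:

```shell
# Generate the 1,000-column DDL instead of typing it out.
{
  echo "create table many_cols ("
  for col in $(seq -f 'col%03g' 1 999); do
    echo "  $col varchar2(10),"
  done
  echo "  col1000 varchar2(10)"
  echo ");"
} > many_cols.sql
wc -l < many_cols.sql   # 1002
```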

Script used:
set echo on

drop table many_cols purge;

create table many_cols (
col001 varchar2(10),
col002 varchar2(10),
col003 varchar2(10),
col004 varchar2(10),
col005 varchar2(10),
col006 varchar2(10),
col007 varchar2(10),
col008 varchar2(10),
col009 varchar2(10),
col010 varchar2(10),
col011 varchar2(10),
col012 varchar2(10),
col013 varchar2(10),
col014 varchar2(10),
col015 varchar2(10),
col016 varchar2(10),
col017 varchar2(10),
col018 varchar2(10),
col019 varchar2(10),
col020 varchar2(10),
col021 varchar2(10),
col022 varchar2(10),
col023 varchar2(10),
col024 varchar2(10),
col025 varchar2(10),
col026 varchar2(10),
col027 varchar2(10),
col028 varchar2(10),
col029 varchar2(10),
col030 varchar2(10),
col031 varchar2(10),
col032 varchar2(10),
col033 varchar2(10),
col034 varchar2(10),
col035 varchar2(10),
col036 varchar2(10),
col037 varchar2(10),
col038 varchar2(10),
col039 varchar2(10),
col040 varchar2(10),
col041 varchar2(10),
col042 varchar2(10),
col043 varchar2(10),
col044 varchar2(10),
col045 varchar2(10),
col046 varchar2(10),
col047 varchar2(10),
col048 varchar2(10),
col049 varchar2(10),
col050 varchar2(10),
col051 varchar2(10),
col052 varchar2(10),
col053 varchar2(10),
col054 varchar2(10),
col055 varchar2(10),
col056 varchar2(10),
col057 varchar2(10),
col058 varchar2(10),
col059 varchar2(10),
col060 varchar2(10),
col061 varchar2(10),
col062 varchar2(10),
col063 varchar2(10),
col064 varchar2(10),
col065 varchar2(10),
col066 varchar2(10),
col067 varchar2(10),
col068 varchar2(10),
col069 varchar2(10),
col070 varchar2(10),
col071 varchar2(10),
col072 varchar2(10),
col073 varchar2(10),
col074 varchar2(10),
col075 varchar2(10),
col076 varchar2(10),
col077 varchar2(10),
col078 varchar2(10),
col079 varchar2(10),
col080 varchar2(10),
col081 varchar2(10),
col082 varchar2(10),
col083 varchar2(10),
col084 varchar2(10),
col085 varchar2(10),
col086 varchar2(10),
col087 varchar2(10),
col088 varchar2(10),
col089 varchar2(10),
col090 varchar2(10),
col091 varchar2(10),
col092 varchar2(10),
col093 varchar2(10),
col094 varchar2(10),
col095 varchar2(10),
col096 varchar2(10),
col097 varchar2(10),
col098 varchar2(10),
col099 varchar2(10),
col100 varchar2(10),
col101 varchar2(10),
col102 varchar2(10),
col103 varchar2(10),
col104 varchar2(10),
col105 varchar2(10),
col106 varchar2(10),
col107 varchar2(10),
col108 varchar2(10),
col109 varchar2(10),
col110 varchar2(10),
col111 varchar2(10),
col112 varchar2(10),
col113 varchar2(10),
col114 varchar2(10),
col115 varchar2(10),
col116 varchar2(10),
col117 varchar2(10),
col118 varchar2(10),
col119 varchar2(10),
col120 varchar2(10),
col121 varchar2(10),
col122 varchar2(10),
col123 varchar2(10),
col124 varchar2(10),
col125 varchar2(10),
col126 varchar2(10),
col127 varchar2(10),
col128 varchar2(10),
col129 varchar2(10),
col130 varchar2(10),
col131 varchar2(10),
col132 varchar2(10),
col133 varchar2(10),
col134 varchar2(10),
col135 varchar2(10),
col136 varchar2(10),
col137 varchar2(10),
col138 varchar2(10),
col139 varchar2(10),
col140 varchar2(10),
col141 varchar2(10),
col142 varchar2(10),
col143 varchar2(10),
col144 varchar2(10),
col145 varchar2(10),
col146 varchar2(10),
col147 varchar2(10),
col148 varchar2(10),
col149 varchar2(10),
col150 varchar2(10),
col151 varchar2(10),
col152 varchar2(10),
col153 varchar2(10),
col154 varchar2(10),
col155 varchar2(10),
col156 varchar2(10),
col157 varchar2(10),
col158 varchar2(10),
col159 varchar2(10),
col160 varchar2(10),
col161 varchar2(10),
col162 varchar2(10),
col163 varchar2(10),
col164 varchar2(10),
col165 varchar2(10),
col166 varchar2(10),
col167 varchar2(10),
col168 varchar2(10),
col169 varchar2(10),
col170 varchar2(10),
col171 varchar2(10),
col172 varchar2(10),
col173 varchar2(10),
col174 varchar2(10),
col175 varchar2(10),
col176 varchar2(10),
col177 varchar2(10),
col178 varchar2(10),
col179 varchar2(10),
col180 varchar2(10),
col181 varchar2(10),
col182 varchar2(10),
col183 varchar2(10),
col184 varchar2(10),
col185 varchar2(10),
col186 varchar2(10),
col187 varchar2(10),
col188 varchar2(10),
col189 varchar2(10),
col190 varchar2(10),
col191 varchar2(10),
col192 varchar2(10),
col193 varchar2(10),
col194 varchar2(10),
col195 varchar2(10),
col196 varchar2(10),
col197 varchar2(10),
col198 varchar2(10),
col199 varchar2(10),
col200 varchar2(10),
col201 varchar2(10),
col202 varchar2(10),
col203 varchar2(10),
col204 varchar2(10),
col205 varchar2(10),
col206 varchar2(10),
col207 varchar2(10),
col208 varchar2(10),
col209 varchar2(10),
col210 varchar2(10),
col211 varchar2(10),
col212 varchar2(10),
col213 varchar2(10),
col214 varchar2(10),
col215 varchar2(10),
col216 varchar2(10),
col217 varchar2(10),
col218 varchar2(10),
col219 varchar2(10),
col220 varchar2(10),
col221 varchar2(10),
col222 varchar2(10),
col223 varchar2(10),
col224 varchar2(10),
col225 varchar2(10),
col226 varchar2(10),
col227 varchar2(10),
col228 varchar2(10),
col229 varchar2(10),
col230 varchar2(10),
col231 varchar2(10),
col232 varchar2(10),
col233 varchar2(10),
col234 varchar2(10),
col235 varchar2(10),
col236 varchar2(10),
col237 varchar2(10),
col238 varchar2(10),
col239 varchar2(10),
col240 varchar2(10),
col241 varchar2(10),
col242 varchar2(10),
col243 varchar2(10),
col244 varchar2(10),
col245 varchar2(10),
col246 varchar2(10),
col247 varchar2(10),
col248 varchar2(10),
col249 varchar2(10),
col250 varchar2(10),
col251 varchar2(10),
col252 varchar2(10),
col253 varchar2(10),
col254 varchar2(10),
col255 varchar2(10),
col256 varchar2(10),
col257 varchar2(10),
col258 varchar2(10),
col259 varchar2(10),
col260 varchar2(10),
col261 varchar2(10),
col262 varchar2(10),
col263 varchar2(10),
col264 varchar2(10),
col265 varchar2(10),
col266 varchar2(10),
col267 varchar2(10),
col268 varchar2(10),
col269 varchar2(10),
col270 varchar2(10),
col271 varchar2(10),
col272 varchar2(10),
col273 varchar2(10),
col274 varchar2(10),
col275 varchar2(10),
col276 varchar2(10),
col277 varchar2(10),
col278 varchar2(10),
col279 varchar2(10),
col280 varchar2(10),
col281 varchar2(10),
col282 varchar2(10),
col283 varchar2(10),
col284 varchar2(10),
col285 varchar2(10),
col286 varchar2(10),
col287 varchar2(10),
col288 varchar2(10),
col289 varchar2(10),
col290 varchar2(10),
col291 varchar2(10),
col292 varchar2(10),
col293 varchar2(10),
col294 varchar2(10),
col295 varchar2(10),
col296 varchar2(10),
col297 varchar2(10),
col298 varchar2(10),
col299 varchar2(10),
col300 varchar2(10),
col301 varchar2(10),
col302 varchar2(10),
col303 varchar2(10),
col304 varchar2(10),
col305 varchar2(10),
col306 varchar2(10),
col307 varchar2(10),
col308 varchar2(10),
col309 varchar2(10),
col310 varchar2(10),
col311 varchar2(10),
col312 varchar2(10),
col313 varchar2(10),
col314 varchar2(10),
col315 varchar2(10),
col316 varchar2(10),
col317 varchar2(10),
col318 varchar2(10),
col319 varchar2(10),
col320 varchar2(10),
col321 varchar2(10),
col322 varchar2(10),
col323 varchar2(10),
col324 varchar2(10),
col325 varchar2(10),
col326 varchar2(10),
col327 varchar2(10),
col328 varchar2(10),
col329 varchar2(10),
col330 varchar2(10),
col331 varchar2(10),
col332 varchar2(10),
col333 varchar2(10),
col334 varchar2(10),
col335 varchar2(10),
col336 varchar2(10),
col337 varchar2(10),
col338 varchar2(10),
col339 varchar2(10),
col340 varchar2(10),
col341 varchar2(10),
col342 varchar2(10),
col343 varchar2(10),
col344 varchar2(10),
col345 varchar2(10),
col346 varchar2(10),
col347 varchar2(10),
col348 varchar2(10),
col349 varchar2(10),
col350 varchar2(10),
col351 varchar2(10),
col352 varchar2(10),
col353 varchar2(10),
col354 varchar2(10),
col355 varchar2(10),
col356 varchar2(10),
col357 varchar2(10),
col358 varchar2(10),
col359 varchar2(10),
col360 varchar2(10),
col361 varchar2(10),
col362 varchar2(10),
col363 varchar2(10),
col364 varchar2(10),
col365 varchar2(10),
col366 varchar2(10),
col367 varchar2(10),
col368 varchar2(10),
col369 varchar2(10),
col370 varchar2(10),
col371 varchar2(10),
col372 varchar2(10),
col373 varchar2(10),
col374 varchar2(10),
col375 varchar2(10),
col376 varchar2(10),
col377 varchar2(10),
col378 varchar2(10),
col379 varchar2(10),
col380 varchar2(10),
col381 varchar2(10),
col382 varchar2(10),
col383 varchar2(10),
col384 varchar2(10),
col385 varchar2(10),
col386 varchar2(10),
col387 varchar2(10),
col388 varchar2(10),
col389 varchar2(10),
col390 varchar2(10),
col391 varchar2(10),
col392 varchar2(10),
col393 varchar2(10),
col394 varchar2(10),
col395 varchar2(10),
col396 varchar2(10),
col397 varchar2(10),
col398 varchar2(10),
col399 varchar2(10),
col400 varchar2(10),
col401 varchar2(10),
col402 varchar2(10),
col403 varchar2(10),
col404 varchar2(10),
col405 varchar2(10),
col406 varchar2(10),
col407 varchar2(10),
col408 varchar2(10),
col409 varchar2(10),
col410 varchar2(10),
col411 varchar2(10),
col412 varchar2(10),
col413 varchar2(10),
col414 varchar2(10),
col415 varchar2(10),
col416 varchar2(10),
col417 varchar2(10),
col418 varchar2(10),
col419 varchar2(10),
col420 varchar2(10),
col421 varchar2(10),
col422 varchar2(10),
col423 varchar2(10),
col424 varchar2(10),
col425 varchar2(10),
col426 varchar2(10),
col427 varchar2(10),
col428 varchar2(10),
col429 varchar2(10),
col430 varchar2(10),
col431 varchar2(10),
col432 varchar2(10),
col433 varchar2(10),
col434 varchar2(10),
col435 varchar2(10),
col436 varchar2(10),
col437 varchar2(10),
col438 varchar2(10),
col439 varchar2(10),
col440 varchar2(10),
col441 varchar2(10),
col442 varchar2(10),
col443 varchar2(10),
col444 varchar2(10),
col445 varchar2(10),
col446 varchar2(10),
col447 varchar2(10),
col448 varchar2(10),
col449 varchar2(10),
col450 varchar2(10),
col451 varchar2(10),
col452 varchar2(10),
col453 varchar2(10),
col454 varchar2(10),
col455 varchar2(10),
col456 varchar2(10),
col457 varchar2(10),
col458 varchar2(10),
col459 varchar2(10),
col460 varchar2(10),
col461 varchar2(10),
col462 varchar2(10),
col463 varchar2(10),
col464 varchar2(10),
col465 varchar2(10),
col466 varchar2(10),
col467 varchar2(10),
col468 varchar2(10),
col469 varchar2(10),
col470 varchar2(10),
col471 varchar2(10),
col472 varchar2(10),
col473 varchar2(10),
col474 varchar2(10),
col475 varchar2(10),
col476 varchar2(10),
col477 varchar2(10),
col478 varchar2(10),
col479 varchar2(10),
col480 varchar2(10),
col481 varchar2(10),
col482 varchar2(10),
col483 varchar2(10),
col484 varchar2(10),
col485 varchar2(10),
col486 varchar2(10),
col487 varchar2(10),
col488 varchar2(10),
col489 varchar2(10),
col490 varchar2(10),
col491 varchar2(10),
col492 varchar2(10),
col493 varchar2(10),
col494 varchar2(10),
col495 varchar2(10),
col496 varchar2(10),
col497 varchar2(10),
col498 varchar2(10),
col499 varchar2(10),
col500 varchar2(10),
col501 varchar2(10),
col502 varchar2(10),
col503 varchar2(10),
col504 varchar2(10),
col505 varchar2(10),
col506 varchar2(10),
col507 varchar2(10),
col508 varchar2(10),
col509 varchar2(10),
col510 varchar2(10),
col511 varchar2(10),
col512 varchar2(10),
col513 varchar2(10),
col514 varchar2(10),
col515 varchar2(10),
col516 varchar2(10),
col517 varchar2(10),
col518 varchar2(10),
col519 varchar2(10),
col520 varchar2(10),
col521 varchar2(10),
col522 varchar2(10),
col523 varchar2(10),
col524 varchar2(10),
col525 varchar2(10),
col526 varchar2(10),
col527 varchar2(10),
col528 varchar2(10),
col529 varchar2(10),
col530 varchar2(10),
col531 varchar2(10),
col532 varchar2(10),
col533 varchar2(10),
col534 varchar2(10),
col535 varchar2(10),
col536 varchar2(10),
col537 varchar2(10),
col538 varchar2(10),
col539 varchar2(10),
col540 varchar2(10),
col541 varchar2(10),
col542 varchar2(10),
col543 varchar2(10),
col544 varchar2(10),
col545 varchar2(10),
col546 varchar2(10),
col547 varchar2(10),
col548 varchar2(10),
col549 varchar2(10),
col550 varchar2(10),
col551 varchar2(10),
col552 varchar2(10),
col553 varchar2(10),
col554 varchar2(10),
col555 varchar2(10),
col556 varchar2(10),
col557 varchar2(10),
col558 varchar2(10),
col559 varchar2(10),
col560 varchar2(10),
col561 varchar2(10),
col562 varchar2(10),
col563 varchar2(10),
col564 varchar2(10),
col565 varchar2(10),
col566 varchar2(10),
col567 varchar2(10),
col568 varchar2(10),
col569 varchar2(10),
col570 varchar2(10),
col571 varchar2(10),
col572 varchar2(10),
col573 varchar2(10),
col574 varchar2(10),
col575 varchar2(10),
col576 varchar2(10),
col577 varchar2(10),
col578 varchar2(10),
col579 varchar2(10),
col580 varchar2(10),
col581 varchar2(10),
col582 varchar2(10),
col583 varchar2(10),
col584 varchar2(10),
col585 varchar2(10),
col586 varchar2(10),
col587 varchar2(10),
col588 varchar2(10),
col589 varchar2(10),
col590 varchar2(10),
col591 varchar2(10),
col592 varchar2(10),
col593 varchar2(10),
col594 varchar2(10),
col595 varchar2(10),
col596 varchar2(10),
col597 varchar2(10),
col598 varchar2(10),
col599 varchar2(10),
col600 varchar2(10),
col601 varchar2(10),
col602 varchar2(10),
col603 varchar2(10),
col604 varchar2(10),
col605 varchar2(10),
col606 varchar2(10),
col607 varchar2(10),
col608 varchar2(10),
col609 varchar2(10),
col610 varchar2(10),
col611 varchar2(10),
col612 varchar2(10),
col613 varchar2(10),
col614 varchar2(10),
col615 varchar2(10),
col616 varchar2(10),
col617 varchar2(10),
col618 varchar2(10),
col619 varchar2(10),
col620 varchar2(10),
col621 varchar2(10),
col622 varchar2(10),
col623 varchar2(10),
col624 varchar2(10),
col625 varchar2(10),
col626 varchar2(10),
col627 varchar2(10),
col628 varchar2(10),
col629 varchar2(10),
col630 varchar2(10),
col631 varchar2(10),
col632 varchar2(10),
col633 varchar2(10),
col634 varchar2(10),
col635 varchar2(10),
col636 varchar2(10),
col637 varchar2(10),
col638 varchar2(10),
col639 varchar2(10),
col640 varchar2(10),
col641 varchar2(10),
col642 varchar2(10),
col643 varchar2(10),
col644 varchar2(10),
col645 varchar2(10),
col646 varchar2(10),
col647 varchar2(10),
col648 varchar2(10),
col649 varchar2(10),
col650 varchar2(10),
col651 varchar2(10),
col652 varchar2(10),
col653 varchar2(10),
col654 varchar2(10),
col655 varchar2(10),
col656 varchar2(10),
col657 varchar2(10),
col658 varchar2(10),
col659 varchar2(10),
col660 varchar2(10),
col661 varchar2(10),
col662 varchar2(10),
col663 varchar2(10),
col664 varchar2(10),
col665 varchar2(10),
col666 varchar2(10),
col667 varchar2(10),
col668 varchar2(10),
col669 varchar2(10),
col670 varchar2(10),
col671 varchar2(10),
col672 varchar2(10),
col673 varchar2(10),
col674 varchar2(10),
col675 varchar2(10),
col676 varchar2(10),
col677 varchar2(10),
col678 varchar2(10),
col679 varchar2(10),
col680 varchar2(10),
col681 varchar2(10),
col682 varchar2(10),
col683 varchar2(10),
col684 varchar2(10),
col685 varchar2(10),
col686 varchar2(10),
col687 varchar2(10),
col688 varchar2(10),
col689 varchar2(10),
col690 varchar2(10),
col691 varchar2(10),
col692 varchar2(10),
col693 varchar2(10),
col694 varchar2(10),
col695 varchar2(10),
col696 varchar2(10),
col697 varchar2(10),
col698 varchar2(10),
col699 varchar2(10),
col700 varchar2(10),
col701 varchar2(10),
col702 varchar2(10),
col703 varchar2(10),
col704 varchar2(10),
col705 varchar2(10),
col706 varchar2(10),
col707 varchar2(10),
col708 varchar2(10),
col709 varchar2(10),
col710 varchar2(10),
col711 varchar2(10),
col712 varchar2(10),
col713 varchar2(10),
col714 varchar2(10),
col715 varchar2(10),
col716 varchar2(10),
col717 varchar2(10),
col718 varchar2(10),
col719 varchar2(10),
col720 varchar2(10),
col721 varchar2(10),
col722 varchar2(10),
col723 varchar2(10),
col724 varchar2(10),
col725 varchar2(10),
col726 varchar2(10),
col727 varchar2(10),
col728 varchar2(10),
col729 varchar2(10),
col730 varchar2(10),
col731 varchar2(10),
col732 varchar2(10),
col733 varchar2(10),
col734 varchar2(10),
col735 varchar2(10),
col736 varchar2(10),
col737 varchar2(10),
col738 varchar2(10),
col739 varchar2(10),
col740 varchar2(10),
col741 varchar2(10),
col742 varchar2(10),
col743 varchar2(10),
col744 varchar2(10),
col745 varchar2(10),
col746 varchar2(10),
col747 varchar2(10),
col748 varchar2(10),
col749 varchar2(10),
col750 varchar2(10),
col751 varchar2(10),
col752 varchar2(10),
col753 varchar2(10),
col754 varchar2(10),
col755 varchar2(10),
col756 varchar2(10),
col757 varchar2(10),
col758 varchar2(10),
col759 varchar2(10),
col760 varchar2(10),
col761 varchar2(10),
col762 varchar2(10),
col763 varchar2(10),
col764 varchar2(10),
col765 varchar2(10),
col766 varchar2(10),
col767 varchar2(10),
col768 varchar2(10),
col769 varchar2(10),
col770 varchar2(10),
col771 varchar2(10),
col772 varchar2(10),
col773 varchar2(10),
col774 varchar2(10),
col775 varchar2(10),
col776 varchar2(10),
col777 varchar2(10),
col778 varchar2(10),
col779 varchar2(10),
col780 varchar2(10),
col781 varchar2(10),
col782 varchar2(10),
col783 varchar2(10),
col784 varchar2(10),
col785 varchar2(10),
col786 varchar2(10),
col787 varchar2(10),
col788 varchar2(10),
col789 varchar2(10),
col790 varchar2(10),
col791 varchar2(10),
col792 varchar2(10),
col793 varchar2(10),
col794 varchar2(10),
col795 varchar2(10),
col796 varchar2(10),
col797 varchar2(10),
col798 varchar2(10),
col799 varchar2(10),
col800 varchar2(10),
col801 varchar2(10),
col802 varchar2(10),
col803 varchar2(10),
col804 varchar2(10),
col805 varchar2(10),
col806 varchar2(10),
col807 varchar2(10),
col808 varchar2(10),
col809 varchar2(10),
col810 varchar2(10),
col811 varchar2(10),
col812 varchar2(10),
col813 varchar2(10),
col814 varchar2(10),
col815 varchar2(10),
col816 varchar2(10),
col817 varchar2(10),
col818 varchar2(10),
col819 varchar2(10),
col820 varchar2(10),
col821 varchar2(10),
col822 varchar2(10),
col823 varchar2(10),
col824 varchar2(10),
col825 varchar2(10),
col826 varchar2(10),
col827 varchar2(10),
col828 varchar2(10),
col829 varchar2(10),
col830 varchar2(10),
col831 varchar2(10),
col832 varchar2(10),
col833 varchar2(10),
col834 varchar2(10),
col835 varchar2(10),
col836 varchar2(10),
col837 varchar2(10),
col838 varchar2(10),
col839 varchar2(10),
col840 varchar2(10),
col841 varchar2(10),
col842 varchar2(10),
col843 varchar2(10),
col844 varchar2(10),
col845 varchar2(10),
col846 varchar2(10),
col847 varchar2(10),
col848 varchar2(10),
col849 varchar2(10),
col850 varchar2(10),
col851 varchar2(10),
col852 varchar2(10),
col853 varchar2(10),
col854 varchar2(10),
col855 varchar2(10),
col856 varchar2(10),
col857 varchar2(10),
col858 varchar2(10),
col859 varchar2(10),
col860 varchar2(10),
col861 varchar2(10),
col862 varchar2(10),
col863 varchar2(10),
col864 varchar2(10),
col865 varchar2(10),
col866 varchar2(10),
col867 varchar2(10),
col868 varchar2(10),
col869 varchar2(10),
col870 varchar2(10),
col871 varchar2(10),
col872 varchar2(10),
col873 varchar2(10),
col874 varchar2(10),
col875 varchar2(10),
col876 varchar2(10),
col877 varchar2(10),
col878 varchar2(10),
col879 varchar2(10),
col880 varchar2(10),
col881 varchar2(10),
col882 varchar2(10),
col883 varchar2(10),
col884 varchar2(10),
col885 varchar2(10),
col886 varchar2(10),
col887 varchar2(10),
col888 varchar2(10),
col889 varchar2(10),
col890 varchar2(10),
col891 varchar2(10),
col892 varchar2(10),
col893 varchar2(10),
col894 varchar2(10),
col895 varchar2(10),
col896 varchar2(10),
col897 varchar2(10),
col898 varchar2(10),
col899 varchar2(10),
col900 varchar2(10),
col901 varchar2(10),
col902 varchar2(10),
col903 varchar2(10),
col904 varchar2(10),
col905 varchar2(10),
col906 varchar2(10),
col907 varchar2(10),
col908 varchar2(10),
col909 varchar2(10),
col910 varchar2(10),
col911 varchar2(10),
col912 varchar2(10),
col913 varchar2(10),
col914 varchar2(10),
col915 varchar2(10),
col916 varchar2(10),
col917 varchar2(10),
col918 varchar2(10),
col919 varchar2(10),
col920 varchar2(10),
col921 varchar2(10),
col922 varchar2(10),
col923 varchar2(10),
col924 varchar2(10),
col925 varchar2(10),
col926 varchar2(10),
col927 varchar2(10),
col928 varchar2(10),
col929 varchar2(10),
col930 varchar2(10),
col931 varchar2(10),
col932 varchar2(10),
col933 varchar2(10),
col934 varchar2(10),
col935 varchar2(10),
col936 varchar2(10),
col937 varchar2(10),
col938 varchar2(10),
col939 varchar2(10),
col940 varchar2(10),
col941 varchar2(10),
col942 varchar2(10),
col943 varchar2(10),
col944 varchar2(10),
col945 varchar2(10),
col946 varchar2(10),
col947 varchar2(10),
col948 varchar2(10),
col949 varchar2(10),
col950 varchar2(10),
col951 varchar2(10),
col952 varchar2(10),
col953 varchar2(10),
col954 varchar2(10),
col955 varchar2(10),
col956 varchar2(10),
col957 varchar2(10),
col958 varchar2(10),
col959 varchar2(10),
col960 varchar2(10),
col961 varchar2(10),
col962 varchar2(10),
col963 varchar2(10),
col964 varchar2(10),
col965 varchar2(10),
col966 varchar2(10),
col967 varchar2(10),
col968 varchar2(10),
col969 varchar2(10),
col970 varchar2(10),
col971 varchar2(10),
col972 varchar2(10),
col973 varchar2(10),
col974 varchar2(10),
col975 varchar2(10),
col976 varchar2(10),
col977 varchar2(10),
col978 varchar2(10),
col979 varchar2(10),
col980 varchar2(10),
col981 varchar2(10),
col982 varchar2(10),
col983 varchar2(10),
col984 varchar2(10),
col985 varchar2(10),
col986 varchar2(10),
col987 varchar2(10),
col988 varchar2(10),
col989 varchar2(10),
col990 varchar2(10),
col991 varchar2(10),
col992 varchar2(10),
col993 varchar2(10),
col994 varchar2(10),
col995 varchar2(10),
col996 varchar2(10),
col997 varchar2(10),
col998 varchar2(10),
col999 varchar2(10),
col1000 varchar2(10)
);

declare
  s_sql1 varchar2(32767);
  s_sql2 varchar2(32767);
begin
  -- build the column list (col001 .. col1000) and a matching list of literals
  for i in 1..1000 loop
    s_sql1 := s_sql1 || 'col' || to_char(i, 'FM9000') || ',';
    s_sql2 := s_sql2 || '''BLABLA'',';
  end loop;

  -- strip the trailing comma from both lists
  s_sql1 := rtrim(s_sql1, ',');
  s_sql2 := rtrim(s_sql2, ',');

  s_sql1 := 'insert into many_cols (' || s_sql1 || ') values (' || s_sql2 || ')';

  -- populate the table with 10,000 identical rows
  for i in 1..10000 loop
    execute immediate s_sql1;
  end loop;

  commit;
end;
/

select count(*) from many_cols;

exec dbms_stats.gather_table_stats(null, 'many_cols')

select blocks from user_tab_statistics where table_name = 'MANY_COLS';

-- alter session set "_widetab_comp_enabled" = false;

alter table many_cols move compress basic;

exec dbms_stats.gather_table_stats(null, 'many_cols')

select blocks from user_tab_statistics where table_name = 'MANY_COLS';
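As a quick sanity check (not part of the original script), the data dictionary can confirm that the MOVE actually enabled basic compression; COMPRESSION and COMPRESS_FOR are standard USER_TABLES columns:

```sql
-- hedged addition: confirm the segment is compressed after the MOVE
select compression, compress_for
from   user_tables
where  table_name = 'MANY_COLS';
```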

Execution Plan Puzzle

Here’s an execution plan that’s just been published on the ODC database forum. The plan comes from a call to dbms_xplan.display_cursor() with rowsource execution statistics enabled.

There’s something unusual about the execution statistics that I don’t think I’ve seen before – can anyone else see anything really odd, or (better still) anything which they would expect others to find odd but which they can easily explain?

A couple of hints:

  • It’s nothing to do with the fact that E-Rows and A-Rows don’t match – that’s never a surprise.
  • It’s not really about the fact that huge amounts of time seem to appear out of “nowhere” in the A-Time column.
  • It is something to do with the relationship between A-Rows and Starts.

I’ve inserted a few separator lines to break the plan into smaller pieces that can be examined in isolation. There are two “Load as Select” sections (presumably from “with” subqueries) and the main body of the query.
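As a side note, the “Load as Select” over a SYS_TEMP_ table at the top of a plan is the signature of a materialized “with” subquery. A minimal sketch of the shape (table and column names assumed for illustration; this is not the poster’s SQL):

```sql
-- hedged sketch: a WITH subquery the optimizer materializes appears in the
-- plan as TEMP TABLE TRANSFORMATION / LOAD AS SELECT over a SYS_TEMP_... table
with subq as (
  select /*+ materialize */ object_id
  from   objects
)
select count(*)
from   subq;
```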

We don’t, as at the time of writing, have the SQL or the Oracle version number that produced this plan. [Update: version now reported as 12.1.0.2]
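For anyone who wants to produce this style of output themselves, rowsource execution statistics are typically captured and reported along these lines (a generic sketch, since the poster’s actual statement is unknown):

```sql
-- enable rowsource statistics for a single statement via the hint
-- (alternatively: alter session set statistics_level = all)
select /*+ gather_plan_statistics */ count(*)
from   dual;

-- report the last statement executed in this session, with Starts,
-- E-Rows, A-Rows, A-Time, Buffers etc.
select *
from   table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
```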


---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                       | Name                         | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                                |                              |      1 |        |     50 |00:00:18.00 |     367K|     55 |     55 |       |       |          |
|   1 |  TEMP TABLE TRANSFORMATION                      |                              |      1 |        |     50 |00:00:18.00 |     367K|     55 |     55 |       |       |          |
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   2 |   LOAD AS SELECT                                |                              |      1 |        |      0 |00:00:00.55 |   56743 |      0 |     53 |  1040K|  1040K|          |
|   3 |    TABLE ACCESS BY INDEX ROWID                  | OBJECTS                      |   7785 |      1 |   7785 |00:00:00.03 |    8150 |      0 |      0 |       |       |          |
|*  4 |     INDEX UNIQUE SCAN                           | PK_OBJECTS                   |   7785 |      1 |   7785 |00:00:00.01 |     360 |      0 |      0 |       |       |          |
|   5 |    TABLE ACCESS BY INDEX ROWID BATCHED          | ATTRIBUTES                   |   7785 |      1 |   5507 |00:00:00.05 |   12182 |      0 |      0 |       |       |          |
|*  6 |     INDEX RANGE SCAN                            | UK_ATTR                      |   7785 |      1 |   5507 |00:00:00.03 |    9621 |      0 |      0 |       |       |          |
|*  7 |      TABLE ACCESS FULL                          | ATTRIBUTE_TYPES              |      1 |      1 |      1 |00:00:00.01 |      38 |      0 |      0 |       |       |          |
|   8 |    TABLE ACCESS BY INDEX ROWID BATCHED          | ATTRIBUTES                   |   7785 |      1 |   5507 |00:00:00.03 |   12182 |      0 |      0 |       |       |          |
|*  9 |     INDEX RANGE SCAN                            | UK_ATTR                      |   7785 |      1 |   5507 |00:00:00.02 |    9621 |      0 |      0 |       |       |          |
|* 10 |      TABLE ACCESS FULL                          | ATTRIBUTE_TYPES              |      1 |      1 |      1 |00:00:00.01 |      38 |      0 |      0 |       |       |          |
|  11 |    TABLE ACCESS BY INDEX ROWID BATCHED          | ATTRIBUTES                   |   1366 |      1 |   1366 |00:00:00.02 |    4592 |      0 |      0 |       |       |          |
|* 12 |     INDEX RANGE SCAN                            | IDX_ATTR_NDC_OBJECT_VALUE    |   1366 |      1 |   1366 |00:00:00.01 |    3227 |      0 |      0 |       |       |          |
|* 13 |      INDEX RANGE SCAN                           | NCI_NODES_COVERING_IDX       |   1366 |      1 |   1366 |00:00:00.01 |     595 |      0 |      0 |       |       |          |
|* 14 |    VIEW                                         |                              |      1 |     12 |   7785 |00:00:00.41 |   24174 |      0 |      0 |       |       |          |
|* 15 |     FILTER                                      |                              |      1 |        |   7891 |00:00:00.39 |   19582 |      0 |      0 |       |       |          |
|* 16 |      CONNECT BY WITH FILTERING                  |                              |      1 |        |  66134 |00:00:00.37 |   19144 |      0 |      0 |  7069K|  1062K| 6283K (0)|
|  17 |       TABLE ACCESS BY INDEX ROWID               | NODES                        |      1 |      1 |      1 |00:00:00.01 |       4 |      0 |      0 |       |       |          |
|* 18 |        INDEX UNIQUE SCAN                        | PK_NODES                     |      1 |      1 |      1 |00:00:00.01 |       3 |      0 |      0 |       |       |          |
|  19 |       NESTED LOOPS                              |                              |      9 |     11 |  66133 |00:00:00.19 |   19137 |      0 |      0 |       |       |          |
|  20 |        CONNECT BY PUMP                          |                              |      9 |        |  66134 |00:00:00.01 |       0 |      0 |      0 |       |       |          |
|* 21 |        TABLE ACCESS BY INDEX ROWID BATCHED      | NODES                        |  66134 |     11 |  66133 |00:00:00.15 |   19137 |      0 |      0 |       |       |          |
|* 22 |         INDEX RANGE SCAN                        | NCI_NODES_PARENT_NODE_ID     |  66134 |     11 |  67807 |00:00:00.08 |   12139 |      0 |      0 |       |       |          |
|  23 |         TABLE ACCESS BY INDEX ROWID             | OBJECT_TYPES                 |      1 |      1 |      1 |00:00:00.01 |       3 |      0 |      0 |       |       |          |
|* 24 |          INDEX UNIQUE SCAN                      | UK_IDX_OBJECT_TYPE_NDC       |      1 |      1 |      1 |00:00:00.01 |       2 |      0 |      0 |       |       |          |
|  25 |       TABLE ACCESS BY INDEX ROWID               | OBJECT_TYPES                 |      1 |      1 |      1 |00:00:00.01 |       3 |      0 |      0 |       |       |          |
|* 26 |        INDEX UNIQUE SCAN                        | UK_IDX_OBJECT_TYPE_NDC       |      1 |      1 |      1 |00:00:00.01 |       2 |      0 |      0 |       |       |          |
|* 27 |      TABLE ACCESS BY INDEX ROWID                | OBJECT_TYPES                 |    219 |      1 |      1 |00:00:00.01 |     438 |      0 |      0 |       |       |          |
|* 28 |       INDEX UNIQUE SCAN                         | PK_OBJECT_TYPES              |    219 |      1 |    219 |00:00:00.01 |     219 |      0 |      0 |       |       |          |
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|  29 |   LOAD AS SELECT                                |                              |      1 |        |      0 |00:00:02.86 |   37654 |     53 |      2 |  1040K|  1040K|          |
|  30 |    TABLE ACCESS BY INDEX ROWID                  | OBJECTS                      |    316 |      1 |    316 |00:00:00.01 |     603 |      0 |      0 |       |       |          |
|* 31 |     INDEX UNIQUE SCAN                           | PK_OBJECTS                   |    316 |      1 |    316 |00:00:00.01 |     287 |      0 |      0 |       |       |          |
|  32 |    TABLE ACCESS BY INDEX ROWID BATCHED          | ATTRIBUTES                   |    316 |      1 |    316 |00:00:00.01 |     950 |      0 |      0 |       |       |          |
|* 33 |     INDEX RANGE SCAN                            | UK_ATTR                      |    316 |      1 |    316 |00:00:00.01 |     666 |      0 |      0 |       |       |          |
|* 34 |      TABLE ACCESS FULL                          | ATTRIBUTE_TYPES              |      1 |      1 |      1 |00:00:00.01 |      38 |      0 |      0 |       |       |          |
|  35 |    HASH UNIQUE                                  |                              |      1 |    148 |    316 |00:00:02.86 |   37650 |     53 |      0 |  1041K|  1041K| 1371K (0)|
|* 36 |     FILTER                                      |                              |      1 |        |   5500 |00:00:02.85 |   36097 |     53 |      0 |       |       |          |
|  37 |      MERGE JOIN CARTESIAN                       |                              |      1 |    148 |   5114K|00:00:02.23 |   34073 |     53 |      0 |       |       |          |
|* 38 |       HASH JOIN                                 |                              |      1 |     12 |    657 |00:00:01.05 |   34016 |      0 |      0 |  1003K|  1003K|  728K (0)|
|  39 |        NESTED LOOPS                             |                              |      1 |     69 |    969 |00:00:00.36 |   20145 |      0 |      0 |       |       |          |
|  40 |         NESTED LOOPS                            |                              |      1 |    132 |    970 |00:00:00.36 |   19975 |      0 |      0 |       |       |          |
|  41 |          VIEW                                   |                              |      1 |     12 |    312 |00:00:00.35 |   19582 |      0 |      0 |       |       |          |
|* 42 |           FILTER                                |                              |      1 |        |    312 |00:00:00.35 |   19582 |      0 |      0 |       |       |          |
|* 43 |            CONNECT BY WITH FILTERING            |                              |      1 |        |  66134 |00:00:00.34 |   19144 |      0 |      0 |  6219K|  1010K| 5527K (0)|
|  44 |             TABLE ACCESS BY INDEX ROWID         | NODES                        |      1 |      1 |      1 |00:00:00.01 |       4 |      0 |      0 |       |       |          |
|* 45 |              INDEX UNIQUE SCAN                  | PK_NODES                     |      1 |      1 |      1 |00:00:00.01 |       3 |      0 |      0 |       |       |          |
|  46 |             NESTED LOOPS                        |                              |      9 |     11 |  66133 |00:00:00.18 |   19137 |      0 |      0 |       |       |          |
|  47 |              CONNECT BY PUMP                    |                              |      9 |        |  66134 |00:00:00.01 |       0 |      0 |      0 |       |       |          |
|* 48 |              TABLE ACCESS BY INDEX ROWID BATCHED| NODES                        |  66134 |     11 |  66133 |00:00:00.15 |   19137 |      0 |      0 |       |       |          |
|* 49 |               INDEX RANGE SCAN                  | NCI_NODES_PARENT_NODE_ID     |  66134 |     11 |  67807 |00:00:00.08 |   12139 |      0 |      0 |       |       |          |
|  50 |               TABLE ACCESS BY INDEX ROWID       | OBJECT_TYPES                 |      1 |      1 |      1 |00:00:00.01 |       3 |      0 |      0 |       |       |          |
|* 51 |                INDEX UNIQUE SCAN                | UK_IDX_OBJECT_TYPE_NDC       |      1 |      1 |      1 |00:00:00.01 |       2 |      0 |      0 |       |       |          |
|  52 |             TABLE ACCESS BY INDEX ROWID         | OBJECT_TYPES                 |      1 |      1 |      1 |00:00:00.01 |       3 |      0 |      0 |       |       |          |
|* 53 |              INDEX UNIQUE SCAN                  | UK_IDX_OBJECT_TYPE_NDC       |      1 |      1 |      1 |00:00:00.01 |       2 |      0 |      0 |       |       |          |
|* 54 |            TABLE ACCESS BY INDEX ROWID          | OBJECT_TYPES                 |    219 |      1 |      1 |00:00:00.01 |     438 |      0 |      0 |       |       |          |
|* 55 |             INDEX UNIQUE SCAN                   | PK_OBJECT_TYPES              |    219 |      1 |    219 |00:00:00.01 |     219 |      0 |      0 |       |       |          |
|* 56 |          INDEX RANGE SCAN                       | NCI_NODES_PARENT_NODE_ID     |    312 |     11 |    970 |00:00:00.01 |     393 |      0 |      0 |       |       |          |
|* 57 |         TABLE ACCESS BY INDEX ROWID             | NODES                        |    970 |      6 |    969 |00:00:00.01 |     170 |      0 |      0 |       |       |          |
|* 58 |        VIEW                                     | index$_join$_065             |      1 |     42 |      4 |00:00:00.69 |   13871 |      0 |      0 |       |       |          |
|* 59 |         HASH JOIN                               |                              |      1 |        |    434 |00:00:00.01 |      12 |      0 |      0 |  1519K|  1519K| 1491K (0)|
|  60 |          INDEX FAST FULL SCAN                   | PK_OBJECT_TYPES              |      1 |     42 |    434 |00:00:00.01 |       4 |      0 |      0 |       |       |          |
|  61 |          INDEX FAST FULL SCAN                   | UK_IDX_OBJECT_TYPE_NDC       |      1 |     42 |    434 |00:00:00.01 |       8 |      0 |      0 |       |       |          |
|  62 |       BUFFER SORT                               |                              |    657 |     12 |   5114K|00:00:00.63 |      57 |     53 |      0 |   372K|   372K|  330K (0)|
|  63 |        VIEW                                     |                              |      1 |     12 |   7785 |00:00:00.02 |      57 |     53 |      0 |       |       |          |
|  64 |         TABLE ACCESS FULL                       | SYS_TEMP_0FD9D761B_1445481D  |      1 |     12 |   7785 |00:00:00.02 |      57 |     53 |      0 |       |       |          |
|  65 |      TABLE ACCESS BY INDEX ROWID                | OBJECTS                      |    657 |      1 |    657 |00:00:00.01 |    1068 |      0 |      0 |       |       |          |
|* 66 |       INDEX UNIQUE SCAN                         | PK_OBJECTS                   |    657 |      1 |    657 |00:00:00.01 |     410 |      0 |      0 |       |       |          |
|  67 |      TABLE ACCESS BY INDEX ROWID BATCHED        | ATTRIBUTES                   |    318 |      1 |    318 |00:00:00.01 |     956 |      0 |      0 |       |       |          |
|* 68 |       INDEX RANGE SCAN                          | UK_ATTR                      |    318 |      1 |    318 |00:00:00.01 |     670 |      0 |      0 |       |       |          |
|* 69 |        TABLE ACCESS FULL                        | ATTRIBUTE_TYPES              |      1 |      1 |      1 |00:00:00.01 |      38 |      0 |      0 |       |       |          |
|  70 |   SORT GROUP BY                                 |                              |      1 |      1 |     50 |00:00:14.59 |     273K|      2 |      0 |   619K|   471K|  550K (0)|
|  71 |    VIEW                                         |                              |      1 |      1 |   4375 |00:00:13.31 |     273K|      2 |      0 |       |       |          |
|  72 |     HASH UNIQUE                                 |                              |      1 |      1 |   4375 |00:00:13.31 |     273K|      2 |      0 |  1186K|  1186K| 1400K (0)|
|  73 |      TABLE ACCESS BY INDEX ROWID                | OBJECTS                      |   4606 |      1 |   4606 |00:00:05.59 |   37088 |      0 |      0 |       |       |          |
|* 74 |       INDEX UNIQUE SCAN                         | PK_OBJECTS                   |   4606 |      1 |   4606 |00:00:05.56 |   32472 |      0 |      0 |       |       |          |
|* 75 |      HASH JOIN                                  |                              |      1 |      1 |   4375 |00:00:13.29 |     273K|      2 |      0 |  1410K|  1075K| 1423K (0)|
|  76 |       NESTED LOOPS                              |                              |      1 |      1 |   4375 |00:00:00.07 |   12952 |      2 |      0 |       |       |          |
|  77 |        NESTED LOOPS                             |                              |      1 |      2 |   4375 |00:00:00.06 |   12593 |      2 |      0 |       |       |          |
|  78 |         NESTED LOOPS                            |                              |      1 |      1 |   4375 |00:00:00.05 |   11761 |      2 |      0 |       |       |          |
|* 79 |          HASH JOIN                              |                              |      1 |      1 |   5500 |00:00:00.01 |      60 |      2 |      0 |  1321K|  1321K| 1775K (0)|
|  80 |           VIEW                                  |                              |      1 |     12 |   7785 |00:00:00.01 |      54 |      0 |      0 |       |       |          |
|  81 |            TABLE ACCESS FULL                    | SYS_TEMP_0FD9D761B_1445481D  |      1 |     12 |   7785 |00:00:00.01 |      54 |      0 |      0 |       |       |          |
|  82 |           VIEW                                  |                              |      1 |    148 |    316 |00:00:00.01 |       6 |      2 |      0 |       |       |          |
|  83 |            TABLE ACCESS FULL                    | SYS_TEMP_0FD9D761C_1445481D  |      1 |    148 |    316 |00:00:00.01 |       6 |      2 |      0 |       |       |          |
|  84 |          TABLE ACCESS BY INDEX ROWID BATCHED    | ATTRIBUTES                   |   5500 |      1 |   4375 |00:00:00.04 |   11701 |      0 |      0 |       |       |          |
|* 85 |           INDEX RANGE SCAN                      | IDX_ATTR_NDC_OBJECT_VALUE    |   5500 |      1 |   4375 |00:00:00.02 |    7353 |      0 |      0 |       |       |          |
|* 86 |         INDEX RANGE SCAN                        | NCI_ATTRIBUTE_VALUES_ATTR_ID |   4375 |      2 |   4375 |00:00:00.01 |     832 |      0 |      0 |       |       |          |
|  87 |        TABLE ACCESS BY INDEX ROWID              | ATTRIBUTE_VALUES             |   4375 |      2 |   4375 |00:00:00.01 |     359 |      0 |      0 |       |       |          |
|  88 |       VIEW                                      |                              |      1 |   1730 |   4606 |00:00:13.21 |     260K|      0 |      0 |       |       |          |
|* 89 |        FILTER                                   |                              |      1 |        |   4606 |00:00:00.06 |    2094 |      0 |      0 |       |       |          |
|* 90 |         CONNECT BY WITH FILTERING               |                              |      1 |        |   4922 |00:00:00.05 |    2037 |      0 |      0 |   478K|   448K|  424K (0)|
|  91 |          NESTED LOOPS                           |                              |      1 |    148 |    316 |00:00:00.01 |     953 |      0 |      0 |       |       |          |
|  92 |           NESTED LOOPS                          |                              |      1 |    148 |    316 |00:00:00.01 |     637 |      0 |      0 |       |       |          |
|  93 |            VIEW                                 | VW_NSO_1                     |      1 |    148 |    316 |00:00:00.01 |       3 |      0 |      0 |       |       |          |
|  94 |             HASH UNIQUE                         |                              |      1 |    148 |    316 |00:00:00.01 |       3 |      0 |      0 |  2170K|  2170K| 2517K (0)|
|  95 |              VIEW                               |                              |      1 |    148 |    316 |00:00:00.01 |       3 |      0 |      0 |       |       |          |
|  96 |               TABLE ACCESS FULL                 | SYS_TEMP_0FD9D761C_1445481D  |      1 |    148 |    316 |00:00:00.01 |       3 |      0 |      0 |       |       |          |
|* 97 |            INDEX UNIQUE SCAN                    | PK_NODES                     |    316 |      1 |    316 |00:00:00.01 |     634 |      0 |      0 |       |       |          |
|  98 |           TABLE ACCESS BY INDEX ROWID           | NODES                        |    316 |      1 |    316 |00:00:00.01 |     316 |      0 |      0 |       |       |          |
|  99 |          NESTED LOOPS                           |                              |      2 |   1582 |   4606 |00:00:00.01 |    1081 |      0 |      0 |       |       |          |
| 100 |           CONNECT BY PUMP                       |                              |      2 |        |   4922 |00:00:00.01 |       0 |      0 |      0 |       |       |          |
|*101 |           TABLE ACCESS BY INDEX ROWID BATCHED   | NODES                        |   4922 |     11 |   4606 |00:00:00.01 |    1081 |      0 |      0 |       |       |          |
|*102 |            INDEX RANGE SCAN                     | NCI_NODES_PARENT_NODE_ID     |   4922 |     11 |   4608 |00:00:00.01 |     950 |      0 |      0 |       |       |          |
| 103 |            TABLE ACCESS BY INDEX ROWID          | OBJECT_TYPES                 |      1 |      1 |      1 |00:00:00.01 |       3 |      0 |      0 |       |       |          |
|*104 |             INDEX UNIQUE SCAN                   | UK_IDX_OBJECT_TYPE_NDC       |      1 |      1 |      1 |00:00:00.01 |       2 |      0 |      0 |       |       |          |
| 105 |          TABLE ACCESS BY INDEX ROWID            | OBJECT_TYPES                 |      1 |      1 |      1 |00:00:00.01 |       3 |      0 |      0 |       |       |          |
|*106 |           INDEX UNIQUE SCAN                     | UK_IDX_OBJECT_TYPE_NDC       |      1 |      1 |      1 |00:00:00.01 |       2 |      0 |      0 |       |       |          |
|*107 |         TABLE ACCESS BY INDEX ROWID             | OBJECT_TYPES                 |      3 |      1 |      1 |00:00:00.01 |      57 |      0 |      0 |       |       |          |
|*108 |          INDEX UNIQUE SCAN                      | PK_OBJECT_TYPES              |      3 |      1 |      3 |00:00:00.01 |       3 |      0 |      0 |       |       |          |
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

 Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("O"."OBJECT_ID"=:B1)
   6 - access("A"."OBJECT_ID"=:B1 AND "A"."ATTRIBUTE_TYPE_ID"=)
   7 - filter("NAME_DISPLAY_CODE"='ATTRIBUTE_TYPE.MDR_CODELIST')
   9 - access("A"."OBJECT_ID"=:B1 AND "A"."ATTRIBUTE_TYPE_ID"=)
  10 - filter("NAME_DISPLAY_CODE"='ATTRIBUTE_TYPE.MDR_CODELIST')
  12 - access("A"."NAME_DISPLAY_CODE"='ATTRIBUTE_TYPE.MDR_ACTIVE_STATUS' AND "A"."OBJECT_ID"=)
  13 - access("NOD"."NODE_ID"=:B1)
  14 - filter("GROUP_ACTIVE_STATUS"='LOOKUP_VALUE.ACTIVE_STATUS_A')
  15 - filter(("DELETION_DATE"='01-Jan-1900' AND  IS NOT NULL AND "OBJECT_ID" IS NOT NULL))
  16 - access("N"."PARENT_NODE_ID"=PRIOR NULL)
  18 - access("N"."NODE_ID"=TO_NUMBER(:I_NODE_ID))
  21 - filter("N"."OBJECT_TYPE_ID"<>)
  22 - access("connect$_by$_pump$_029"."PRIOR n.node_id"="N"."PARENT_NODE_ID")
  24 - access("NAME_DISPLAY_CODE"='OBJECT_TYPE.ARCHIVE_CONTAINER')
  26 - access("NAME_DISPLAY_CODE"='OBJECT_TYPE.ARCHIVE_CONTAINER')
  27 - filter("NAME"=:I_SEARCH_OBJ_TYPE)
  28 - access("OBJECT_TYPE_ID"=:B1)
  31 - access("O"."OBJECT_ID"=:B1)
  33 - access("A"."OBJECT_ID"=:B1 AND "A"."ATTRIBUTE_TYPE_ID"=)
  34 - filter("AT"."NAME_DISPLAY_CODE"='ATTRIBUTE_TYPE.MDR_MASTER_UNIQUE_ITEM_ID')
  36 - filter(("CN"."CODELIST"= AND "CN"."CODELIST_MUI"=))
  38 - access("N"."OBJECT_TYPE_ID"="OBJECT_TYPE_ID")
  42 - filter(("DELETION_DATE"='01-Jan-1900' AND  IS NOT NULL))
  43 - access("PARENT_NODE_ID"=PRIOR NULL)
  45 - access("NODE_ID"=TO_NUMBER(:I_NODE_ID))
  48 - filter("OBJECT_TYPE_ID"<>)
  49 - access("connect$_by$_pump$_049"."PRIOR node_id "="PARENT_NODE_ID")
  51 - access("NAME_DISPLAY_CODE"='OBJECT_TYPE.ARCHIVE_CONTAINER')
  53 - access("NAME_DISPLAY_CODE"='OBJECT_TYPE.ARCHIVE_CONTAINER')
  54 - filter(("NAME_DISPLAY_CODE"='OBJECT_TYPE.MDR_CL_CONTAINER' OR "NAME_DISPLAY_CODE"='OBJECT_TYPE.MDR_STUDY_CL_PARENT_CONTAINER'))
  55 - access("OBJECT_TYPE_ID"=:B1)
  56 - access("N"."PARENT_NODE_ID"="NODE_ID")
  57 - filter("N"."DELETION_DATE"='01-Jan-1900')
  58 - filter(("NAME_DISPLAY_CODE"MEMBER OF"PKG_MDR_COMP_RPT_MGR"."F_GET_LOV_MAPPED_OBJTYPES"("VP40"."VARCHAR_TBL"('OBJECT_TYPE.MDR_CL_CODELIST')) OR
              "NAME_DISPLAY_CODE"MEMBER OF"PKG_MDR_COMP_RPT_MGR"."F_GET_LOV_MAPPED_OBJTYPES"("VP40"."VARCHAR_TBL"('OBJECT_TYPE.MDR_CL_SUBSET'))))
  59 - access(ROWID=ROWID)
  66 - access("O"."OBJECT_ID"=:B1)
  68 - access("A"."OBJECT_ID"=:B1 AND "A"."ATTRIBUTE_TYPE_ID"=)
  69 - filter("AT"."NAME_DISPLAY_CODE"='ATTRIBUTE_TYPE.MDR_MASTER_UNIQUE_ITEM_ID')
  74 - access("O"."OBJECT_ID"=NVL(:B1,"PKG_MDR_UTIL"."F_GET_REF_NODE_OBJECT_ID"(:B2,"REQ_INFO_TYPE"("USER_CREDENTIALS_TYPE"(1,NULL,'en-US'),
              "AUDIT_INFO_TYPE"(NULL,NULL,NULL),NULL,NULL,NULL))))
  75 - access("COD"."CL_NODE_ID"="CL"."NODE_ID" AND "AV"."ATTRIBUTE_VALUE"="COD"."OBJ_NAME")
  79 - access("CN"."CODELIST_MUI"="CL"."MUI_VALUE")
  85 - access("A"."NAME_DISPLAY_CODE"='ATTRIBUTE_TYPE.MDR_CODELIST_VALUE' AND "CN"."OBJECT_ID"="A"."OBJECT_ID")
  86 - access("A"."ATTRIBUTE_ID"="AV"."ATTRIBUTE_ID")
  89 - filter(("DELETION_DATE"='01-Jan-1900' AND  IS NOT NULL))
  90 - access("N"."PARENT_NODE_ID"=PRIOR NULL)
  97 - access("N"."NODE_ID"="NODE_ID")
 101 - filter("N"."OBJECT_TYPE_ID"<>)
 102 - access("connect$_by$_pump$_082"."PRIOR n.node_id "="N"."PARENT_NODE_ID")
 104 - access("NAME_DISPLAY_CODE"='OBJECT_TYPE.ARCHIVE_CONTAINER')
 106 - access("NAME_DISPLAY_CODE"='OBJECT_TYPE.ARCHIVE_CONTAINER')
 107 - filter("NAME_DISPLAY_CODE"MEMBER OF"PKG_MDR_COMP_RPT_MGR"."F_GET_LOV_MAPPED_OBJTYPES"("VP40"."VARCHAR_TBL"('OBJECT_TYPE.MDR_CL_CODE')))
 108 - access("OBJECT_TYPE_ID"=:B1)

Any observations welcome. I’m not expecting many people to see the anomaly I see (and there may be further anomalies I haven’t even looked for that others do see straight away), but it’s possible that the pattern is one that some people frequently see and find totally unsurprising.

Update – where’s the anomaly

The anomaly is the presence of operations 73 and 74.

There are two different observations that make these lines stand out. First, operation 72 is a hash unique, which is a "single child" operation that calls its child to supply a rowsource and then reduces that rowsource to a distinct set using a hashing mechanism. But in this plan operation 72 appears to have two child operations – numbers 73 and 75 – so clearly the plan isn't following the pure "standard" pattern.

Secondly, notice that operations 73 and 74 both report 4,606 Starts. An operation that reports "N" starts has to have a parent operation calling it N times, which means the parent operation must have reported (at least) N rows under the A-Rows heading. But we know that the hash unique operation will call its child operation exactly once – and we can see that the hash unique here has only been called once. So something else must be causing the 4,606 Starts.
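(For reference, Starts, A-Rows and A-Time figures like the ones discussed here come from rowsource execution statistics. A minimal sketch of the standard way to capture and display them – assuming a session with the privileges to query the plan of its own last statement – looks like this:)

```sql
-- Run the statement with rowsource statistics enabled
-- (alternatively, set statistics_level = all at session level)
select  /*+ gather_plan_statistics */
        count(*)
from    dual;

-- Report the plan of the last statement executed by this session,
-- including Starts, A-Rows and A-Time for each operation
select  *
from    table(dbms_xplan.display_cursor(null, null, 'allstats last'));
```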

Fortunately we remember that “scalar subqueries in the select list” will report their execution plans above the part of the plan that describes the main body of the query. In fact we can see this several times in the two “load as select” parts of this plan; operations (3,4), (5,6,7), (8,9,10), (11,12,13) describe 4 scalar subqueries that must be embedded in the select list of the first “with” subquery that is described by operations 14 – 28.
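As a reminder of the basic pattern, here's a minimal sketch using the classic scott.emp / scott.dept demo tables (the table names are assumptions, not from the plan above):

```sql
-- A scalar subquery in the select list: in the displayed execution
-- plan, the lines for the subquery against dept typically appear
-- ABOVE the lines describing the main query block against emp.
select
        e.empno,
        e.ename,
        (select d.dname from dept d where d.deptno = e.deptno) as dname
from    emp e;
```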

So we could assume, for the moment, that operations 73 and 74 are in some way an inline scalar subquery in a select list – and that leads to the next step in the problem. A scalar subquery will operate at most once for each row returned in the main rowsource – though the number of starts might be reduced by the effects of scalar subquery caching. Operations 73 and 74 start 4,606 times; the rowsource we feel they ought to be associated with is the hash join immediately below them (operation 75), which returns only 4,375 rows; moreover the first child of the hash join also returns 4,375 rows – so we're not seeing enough rows returned to justify our second attempt at interpreting the plan.

So where can we find something that returns 4,606 (or more) rows that would allow us to say "maybe that's where the scalar subquery goes"?

Look further down the plan – operation 88 (the view operation that constitutes the second child of the hash join) reports A-Rows = 4,606. Operations 73 and 74 really ought to be reported between operations 88 and 89. There's a filter at operation 89 that reduces the 4,922 rows produced by operation 90 to 4,606, and it's after that filter that the scalar subquery runs to add a column to the rowsource before passing it upwards. (We might be able to see this if we had the projection information for the plan.)
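(The projection information is available from dbms_xplan if you add the projection keyword to the format string – a quick sketch:)

```sql
-- Report the last plan with rowsource statistics plus the
-- "Column Projection Information" section, which shows which
-- columns each operation adds to the rowsource it passes up
select  *
from    table(dbms_xplan.display_cursor(null, null, 'allstats last +projection'));
```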

Corroborating this claim we can look at the A-Time for operation 88: it's 13.21 seconds, and there's nothing below it that accounts for that time; but if we insert operations 73 and 74 just below operation 88 we suddenly have 4,606 subquery calls which report 5.59 seconds, and that's a step in the right direction for identifying the 13.21 seconds that appeared "from nowhere" – especially when you notice that the predicate for operation 74 (or 88a) calls a PL/SQL packaged function whose argument list includes what are either three more PL/SQL function calls or the constructors of three user-defined types, probably accounting for a fair amount of unrecorded time.

Conclusion

Scalar subqueries in select lists can dump their execution plans in places you don't expect. We know that the plan for a scalar subquery in the select list of a simple query will report itself above the main body of the query plan. Here we have an example of a scalar subquery that reports itself an extra step further out than we would intuitively expect and, quite possibly, if we hadn't had the rowsource execution statistics to guide us, we wouldn't have been able to work out what the plan was really trying to tell us.

Footnote

Since Andy Sayer had been commenting on the same ODC thread I emailed him a brief version of the above notes last night, and he created, and emailed back to me, a very simple example reproducing this behaviour, which I've also tested on 11.2.0.4.
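His actual testcase isn't reproduced here, but the general shape of a query that can show the effect is something like the following sketch (the tables t1 and t2 are my own hypothetical stand-ins, not his code):

```sql
-- Hypothetical sketch: a scalar subquery in the select list of an
-- inline view that then becomes one input of a join. Depending on
-- version and transformations, the subquery's plan lines may be
-- displayed attached to the "wrong" operation, further out than
-- the view that actually drives it.
select  /*+ gather_plan_statistics */
        v.id, v.cnt
from    t1,
        (select
                t2.id,
                (select count(*) from t1 x where x.id = t2.id) as cnt
         from   t2
        ) v
where   t1.id = v.id;
```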