Oakies Blog Aggregator

Deepen Engagement on Your Website with Loom Video Embeds

Video has quickly become one of the most engaging media on the internet. Whether you have a personal blog, virtual classroom, business vlog, collaboration website for your team, or something in between, there’s no doubt that you’ve come across video as a way to build and engage your following. By 2022, online videos will make up more than 82% of all consumer internet traffic – 15 times higher than in 2017! (Cisco)

Videos come in all shapes and sizes, and we know that the ability to easily add and embed them is important for engaging your audience. We’ve been hard at work making sure that you can add the video content you want to your WordPress.com website.

We’re pleased to share today a new Loom block that supports Loom video embeds on any page or post. Loom is a video messaging tool for work, and now all you need to do is copy and paste a Loom URL directly into a new block to embed it.

Embed videos with blocks

The new Loom block joins a number of other video embed features available to WordPress.com users—for instance, you can copy and paste a video URL from YouTube, TikTok, DailyMotion, or Vimeo, or you can use a Video block to upload your own video files.

3 Simple Steps to Embed a Loom Video on Your Website

  1. Grab your Loom video URL – copy it.
  2. Paste that URL directly into the WordPress editor in a new block.
    • Alternatively, you can search for “Loom” in the block selector, or type /loom on a new line and press Enter to add it quickly, then paste the URL into the block interface.
  3. Publish!

Adding video content to your website is as easy as 1-2-3. Try adding some to a page or post today to kick off deeper engagement with your site’s visitors.

Upgrades : You have to do them. When are you going to learn? (TLSv1.2)

Questions:

  • Do you remember when SSLv3 was a thing?
  • Do you remember when everyone disabled SSLv3 on their websites?
  • Do you remember how loads of people running Oracle database version 11.2.0.2 and lower cried because all their database callouts failed?
  • Do you remember how they were all forced to patch to 11.2.0.3 or 11.2.0.4 to get support for TLS?
  • Do you remember thinking, I’ll never let something like that happen again?

I’m so sick of saying this. I know I sound like a broken record, but it’s like I’m living in the movie Groundhog Day.

There is no such thing as standing still in tech. It’s like swimming upstream in a river. It takes work to remain stationary. The minute you stop for a rest you are actually moving backwards. I’m sure your next response is,

“But Tim, if it ain’t broke, don’t fix it!”

The minute you stop patching and upgrading, your application is already broken. Yesterday you had an up-to-date system. Today you don’t. You have stopped, but the world around you continued to move on, and sometimes what they do will have a direct impact on you.

The security folks have been complaining about TLSv1.0 and TLSv1.1 for ages, but we are now in the position where the world and their dog are switching off those protocols, and the “we don’t need no stinking patches or upgrades” brigade are pissing and moaning again.

You knew this was going to happen. You had plenty of warning. It is your fault things are now failing. The bad decisions you made have led you to this point, so stop blaming other people. IT IS YOUR FAULT!

Where do you go from here?

First things first, start planning your patch cycles and upgrade cycles. That isn’t a “one time and done” plan. That is from now until forever. You’ve got to keep your server operating systems and software up to date.

If you can’t cope with that, then move to a cloud service that will patch your shit for you!

I know upgrades aren’t necessarily a quick fix, as they need some planning, so you will need some sticking plasters to get you through the immediate issues. Things to consider are:

  • Your load balancers and/or reverse proxies can hide some of your crap from the outside world. You can support TLSv1.2+ between the client and the reverse proxy, then drop down to a less secure protocol between your reverse proxy and your servers.
  • You can do a similar thing with database callouts to the outside world. Use an internal proxy between you and the external resource. The connection between your proxy and the outside world will speak on TLSv1.2+, but the callout from the database to your proxy will speak using a protocol your database can cope with.

These are not “fixes”. They are crappy sticking-plaster solutions to hide your incompetence. You need to fix your weak infrastructure, but these will buy you some time…
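To make the first of those sticking plasters concrete, here’s what “TLSv1.2+ on the public side” means in miniature, using Python’s standard ssl module. This is purely an illustration of the protocol floor a reverse proxy would enforce, not a substitute for your load balancer’s actual configuration:

```python
import ssl

# Context for the client-facing side of a proxy: refuse anything below TLSv1.2.
front = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
front.minimum_version = ssl.TLSVersion.TLSv1_2

# The proxy-to-backend hop is where you would (temporarily!) tolerate the
# older protocols your legacy servers still speak, until they are fixed.
print(front.minimum_version.name)  # TLSv1_2
```

Real proxies (nginx, HAProxy, F5, etc.) express the same floor with their own directives, but the principle is identical: the old protocols never cross the public boundary.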

I don’t really care if you think you have a compelling counter argument, because I’m still going to scream “WRONG” at you. If you don’t think patching and upgrades are important, please quit your tech job and go be incompetent somewhere else. Have a nice life and don’t let the door hit you on the ass on your way out!

Cheers

Tim…

PS. You know this is going to happen again soon, when the world decides that anything less than TLSv1.3 is evil.

The post Upgrades : You have to do them. When are you going to learn? (TLSv1.2) first appeared on The ORACLE-BASE Blog.


Upgrades : You have to do them. When are you going to learn? (TLSv1.2) was first posted on October 8, 2020 at 12:43 pm.

Direct Path

This is a little addendum to a note I wrote a couple of days ago about serial direct path reads and KO (fast object checkpoint) enqueue waits.

The original note was prompted by a problem where someone had set the hidden parameter “_serial_direct_read” to ‘always’ because they were running 11g and wanted some “insert as select” statements to use direct path reads on the select portion, and 11g wasn’t co-operating.

Serial direct path reads were introduced as a possibility in (at least) the 8.1.7.4 timeline, but the parameter was set to false until 11gR2 where it changed to auto. (Legal values are: false, true, never, auto, always.)
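As an aside on what ‘auto’ means in practice: the decision is commonly described as a comparison between the segment size and a threshold derived from the buffer cache (the “_small_table_threshold” parameter, roughly 2% of the cache, with segments above about five times that size being candidates). The real logic has more inputs and the exact multiplier varies by version, so treat this little sketch as illustrating only the shape of the heuristic:

```python
def direct_path_candidate(segment_blocks, buffer_cache_blocks, multiplier=5):
    """Toy model: a segment is a candidate for serial direct path reads when
    it is 'big' relative to the buffer cache. The 2% and 5x figures here are
    illustrative assumptions, not version-accurate internals."""
    small_table_threshold = buffer_cache_blocks * 0.02  # ~2% of the cache
    return segment_blocks > multiplier * small_table_threshold

# A 50,000-block segment against a 100,000-block cache comfortably qualifies.
print(direct_path_candidate(50_000, 100_000))
```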

In 11.2, though, even though a simple select statement could use serial direct path reads for segment scans, Oracle would not use the mechanism for “insert as select”.

This note is just a little piece of code to demonstrate the point.  Run it on 11g and (unless your buffer cache is large enough to make the test table “small”) Oracle will use direct path reads on the select, but scattered reads to cache for the insert. Upgrade to 12.1 and Oracle will use direct path reads on both.

rem
rem     Script:         serial_fail.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Oct 2020
rem

create table t1
as
select
        ao.*
from
        all_objects     ao,
        (select rownum from dual connect by level <= 16) mult
/

create table t2
as
select  *
from    t1
where   rownum = 0
/

alter system flush buffer_cache;

prompt  =============
prompt  Simple Select
prompt  =============

execute snap_events.start_snap
select * from t1 where object_id = 98765;
execute snap_events.end_snap

prompt  ================
prompt  Insert as select
prompt  ================

execute snap_events.start_snap
insert into t2
select * from t1 where object_id = 98765;
execute snap_events.end_snap

prompt  =====================
prompt  Insert as select with
prompt  _serial_direct=always
prompt  =====================

alter session set "_serial_direct_read"=always;

execute snap_events.start_snap
insert /* serial direct */ into t2
select * from t1 where object_id = 98765;
execute snap_events.end_snap

alter session set "_serial_direct_read"=auto;

The calls to the snap_events package are there to report the changes in v$session_event for my session during each SQL statement.

You’ll notice I’ve included three main SQL statements rather than two – the third statement (2nd execution of the insert) is to demonstrate that it is possible to get direct path reads on the insert by setting the hidden parameter to ‘always’.

One detail to remember when testing this particular feature (and the same guideline applies to some other features): “direct / not direct” becomes an attribute of the cursor, not an attribute of the execution plan. This is why I’ve added a comment to the 2nd insert; if I hadn’t done so Oracle would have reused the (identical text) cursor from the first insert, which would have resulted in scattered reads being used instead of direct path reads. This distinction between cursor and plan explains why there is no hint that will allow you to force direct path reads for a specific query (not even the infamous opt_param() hint).

Here are the three sets of results from a system running 11.2.0.4:

=============
Simple Select
=============

Event                                             Waits   Time_outs           Csec    Avg Csec    Max Csec
-----                                             -----   ---------           ----    --------    --------
db file sequential read                               1           0           0.10        .100           4
direct path read                                    114           0          20.86        .183           6
SQL*Net message to client                             4           0           0.00        .000           0
SQL*Net message from client                           4           0           0.11        .028     174,435

================
Insert as select
================

Event                                             Waits   Time_outs           Csec    Avg Csec    Max Csec
-----                                             -----   ---------           ----    --------    --------
db file sequential read                              22           0           0.60        .027           4
db file scattered read                              130           0          35.97        .277           5
SQL*Net message to client                             4           0           0.01        .002           0
SQL*Net message from client                           4           0           0.10        .025     174,435

=====================
Insert as select with
_serial_direct=always
=====================

Event                                             Waits   Time_outs           Csec    Avg Csec    Max Csec
-----                                             -----   ---------           ----    --------    --------
direct path read                                    108           0          17.44        .161           6
SQL*Net message to client                             4           0           0.00        .000           0
SQL*Net message from client                           4           0           0.09        .022     174,435


Note the db file scattered read waits in the middle test. If you re-run the test on 12.1.0.x (or later) you’ll find that the middle set of results will change to direct path read waits.

For reference, this limitation is covered by MOS note 13250070.8: Bug 13250070 – Enh: Serial direct reads not working in DML. The actual bug note is not public.

Footnote (a couple of hours later):

A response from Roger MacNicol to my publication tweet tells us that the bug note says only that direct path reads had been restricted unnecessarily, and that the restriction has been removed.

Oracle 19c Automatic Indexing: Indexing With Stale Statistics Part III (Do Anything You Say)

In Part I of this series, we saw how Automatic Indexing will not create a viable Automatic Index if there are stale or missing statistics on the underlining segments. In Part II we saw how these SQL statements effectively become blacklisted and when segment statistics are subsequently collected, Automatic Indexing will still not create viable […]

2020 PASS Summit and the Azure SQL Championship

I barely have enough time to breathe, let alone blog, these days, but lucky for me, I am taking some downtime today, so I can come out here and write!

Anyway, since I’m so busy, I volunteered to help judge the Azure SQL Championship that you can be part of!


This is a great opportunity to level up your skills from on-prem to be more cloud-centric, and you’re not too late!  The competition runs from Oct. 12th-30th, and there will be a new contest challenge each week and daily quizzes that offer you the chance to win badges and prizes, including PASS Summit conference passes for 2020 or 2021!

As for the other win, with all the customers I’m working with migrating to Azure, it’s also a prize to level up your technical skills and know more about Azure SQL for the future of your career.

With that said, the Azure SQL Championship is all about taking what you already know about SQL Server and pivoting it to Azure SQL.  You’ll find out how to do more with your data estate and the future of the relational database.  When I came to Microsoft 2 1/2 years ago, I knew very little about Azure, and I came up to speed quickly thanks to the instructional content available at Microsoft. I have no doubt the championship, with its game-like focus, will be a wonderful way to keep everyone engaged as we level up.

If you haven’t registered, do so now here and let me see what you’ve got!




A Look at How We Updated Fonts on WordPress.com

We recently updated WordPress.com’s fonts and wanted to give you a behind-the-scenes look at how we chose the list. Here’s an example of a few of them in use.


We looked at several criteria when curating our selections to give your site’s visitors the best experience.

First, we looked at the overall popularity and quality of each font, paying close attention to the letterforms of the most common and most quirky characters.  We made sure that these worked well in specific layouts, at different scales, and as part of the entire collection.


To make the cut, fonts had to contain a breadth of styles and weights, including true italics.


It was important to select fonts with broad character and language support beyond basic Latin. We also looked to showcase a diversity of type designers and open source fonts.

Careful consideration of all the criteria mentioned above allowed us to end up with a list that facilitates attractive font pairings for headings and body text.


Here’s a closer look at a few patterns you can create with a keen typographic eye.


For an even closer, hands-on look, you can check them out in Global Styles and the Customizer today. We’re looking forward to seeing how you use custom fonts on WordPress.com! Your feedback will help us continue to grow, refine, and improve the font choices we offer.

CLUSTER

By Franck Pachot

Statistically, my blog posts starting with a single SQL keyword (like COMMIT and ROLLBACK) in the title are not fully technical ones, but about moves. Same here. It is more about community engagement, people sharing. And about a friend. And clusters of course…

In 2020 because of this COVID virus, we try to avoid clusters of people. And everybody suffers from that in the community because we have no, or very few, physical conferences where we can meet. This picture is from 6 years ago, 2014, my first Oracle Open World. I was not a speaker, I was not an ACE. Maybe my first and last conference without being a speaker… I was already a blogger and I was already on Twitter. And on Wednesday evening, after the blogger’s meetup, I missed the Aerosmith concert. For much better: a dinner at Scoma’s with these well-known people I was just meeting for the first time. It is not about being with famous people. It is about talking with smart people, who have long experience in this technical community, and are good inspiration on many topics – tech and soft skills. Look at who is taking the picture, visible in the mirror replica (as in a MAA Gold architecture)…

Why you keep getting "ORA-01653: unable to extend table"

Space management used to be hard

Way back in the good old days… In fact, before I continue, rest assured that whenever a database person says “good old days” what they really mean is a time totally warped by their imagination, where they have managed to block out all of the horror stories and problems they faced, so as to only remember the tiny nugget of positivity about some antiquated version of the database, so that they come to you and say… “Way back in the good old days” :-) But I digress…

Way back in the good old days, things were actually not anywhere near as good as things are today when it came to tablespaces and datafiles. In the early versions of Oracle 7, you picked a size for your datafiles and that was pretty much it. If your tablespaces got close to full, you could manually resize the datafiles or add new ones, but that was 100% your job to keep an eye on things. As a consequence, every DBA of that generation had a script called something along the lines of “freespace.sql” which rang the alarm bell whenever a tablespace got above 80% full, because you didn’t want to be that person that forgot to allocate space in your database before going home for the evening.

Autoextend and automatic space management

Eventually we got autoextend datafiles which meant that as long as our datafiles were not close to their absolute limit, we had a much greater chance of getting an uninterrupted nights sleep! But even autoextend had its issues if it was the crutch that you were leaning on in terms of ensuring consistent application performance.

It obviously takes time to add tens or hundreds of megabytes to a file, and thus, if your single-row transaction was the unfortunate soul that took the datafile over its current size threshold, then you would sit there for precious seconds whilst your datafile grew to its new size.

At the time, this was often the cause of bluster and bravado from DBAs in the industry proudly beating their chests with: “REAL admins do not need to autoextend. We just manage our databases properly”. Of course, this was complete tosh, because not using the features available to you is less proper management and more revealing yourself to be a fool :-)

Many Oracle practitioners still worry about this pause in throughput as a file autoextends, but only because there was never a lot of publicity around a feature that arrived in Oracle 9, namely the SMCO process (Space Management Coordinator). It did precisely what DBAs were doing manually with their own scripts, namely:

  • keep an eye on the rate of growth of segments in the database,
  • based on that rate, initiate an increase in the size of datafiles before they truly needed to autoextend.
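The logic of those two bullets is simple enough to sketch. This toy version (entirely hypothetical; the real coordinator’s algorithm is internal to Oracle) just projects recent growth forward and grows the file ahead of demand:

```python
def preextend_needed(recent_growth_mb, free_mb, lookahead=3):
    """Project the recent per-interval growth over the next few intervals;
    if the projection exceeds current free space, extend the file now."""
    if not recent_growth_mb:
        return False
    avg_growth = sum(recent_growth_mb) / len(recent_growth_mb)
    return avg_growth * lookahead > free_mb

# A tablespace growing ~100MB per interval with only 200MB free should be
# extended before some unlucky transaction is forced to wait on autoextend.
print(preextend_needed([80, 100, 120], 200))
```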

If you’re unfamiliar with the SMCO process, MOS Doc ID 743773.1 is a good reference, or check the docs.

Empty tables are not always empty

But enough background… Even with autoextend, manual checks, and SMCO, if you have bursty or sporadic growth, there may come a moment when a datafile cannot grow any further and you hit the familiar “ORA-01653: unable to extend table .. in tablespace…” error when you were not expecting it. If you are in the situation where you have bursts of growth, it is important to realise that once you get one occurrence of ORA-01653, you might continue to get ORA-01653 errors, even if the transaction that initiated the growth fails and rolls back.

Let’s look at an example of this. I’ll create a 1GB tablespace, and for the sake of covering the autoextending case as well, you can assume it could just as easily have started as a 500MB file and since autoextended out to its maximum size of 1GB.


SQL> create tablespace demo datafile 'X:\ORACLE\ORADATA\DB19\PDB1\DEMO.DBF' size 1000m;

Tablespace created.


SQL> create table t ( pk int, c varchar2(4000) ) tablespace demo  ;

Table created.

Now I am going to load up this table T with far too much data than the tablespace can accommodate.


SQL> insert /*+ APPEND*/ into t
  2  select rownum, rpad(rownum,4000)
  3  from dual
  4  connect by level <= 1000000;
insert /*+ APPEND*/ into t
*
ERROR at line 1:
ORA-01653: unable to extend table MISTER.T by 1024 in tablespace DEMO

So my table T failed to load any data, and thus is still empty. But let us now look at the state of the tablespace.


SQL> @free DEMO

TABLESPACE_NAME          TOT_SIZE                                 TOT_FREE               PCT_USED
------------------------ ---------------------------------------- -------------------- ----------
DEMO                          1000m                                       7m                   99

1 row selected.

Its space is still completely exhausted. By looking at DBA_EXTENTS you can see why this is the case.


SQL> select count(*)
  2  from   dba_extents
  3  where  segment_name = 'T'
  4  and    owner = user;

  COUNT(*)
----------
       195

The table T, which failed to load any rows, still allocated extents as the load progressed. Those allocations are performed by recursive dictionary transactions, which you can think of as being synonymous with autonomous transactions in PL/SQL. They run “separately” from the main loading transaction and thus are not rolled back when the transaction fails. My table T is empty, but it is still 1GB in size. We can safely assume that any activity that requires space in this tablespace is now going to fail as well.

Since we were loading table T from an empty state, there is a simple fix here. I simply truncate the table to release all of its extents back to the free space for the tablespace.


SQL> truncate table t;

Table truncated.

SQL> @free DEMO

TABLESPACE_NAME          TOT_SIZE                                 TOT_FREE               PCT_USED
------------------------ ---------------------------------------- -------------------- ----------
DEMO                          1000m                                     999m                    0

1 row selected.

When truncate cannot be used

But what if the table was not empty before starting the failed load transaction? For example, what if I had already successfully loaded 100 rows into the table as below:


SQL> insert /*+ APPEND*/ into t
  2  select rownum, rpad(rownum,4000)
  3  from dual
  4  connect by level <= 100;

100 rows created.

SQL> commit;

Commit complete.

and then I’d attempted a much bigger load and hit the error.


SQL> insert /*+ APPEND*/ into t
  2  select rownum, rpad(rownum,4000)
  3  from dual
  4  connect by level <= 1000000;
insert /*+ APPEND*/ into t
*
ERROR at line 1:
ORA-01653: unable to extend table MISTER.T by 1024 in tablespace DEMO

Obviously a truncate command is off the table (… see what I did there? :-)). Do I need to perform a complete reorganisation of the table in order to reclaim the space? Fortunately not. A command that has long been available in the Oracle Database, but is often forgotten by DBAs, lets us prune extents from a table if those extents contain no data. ALTER TABLE … DEALLOCATE UNUSED will free up those extents that do not contain any data, whilst not touching any extents that are in “genuine” use.


SQL> alter table t deallocate unused;

Table altered.

My table data is preserved, and I’ve now freed those extents back to the tablespace.


SQL> select count(*) from t;

  COUNT(*)
----------
       100

1 row selected.

SQL> @free DEMO

TABLESPACE_NAME          TOT_SIZE                                 TOT_FREE               PCT_USED
------------------------ ---------------------------------------- -------------------- ----------
DEMO                          1000m                                     998m                    0

1 row selected.

TL;DR: If you are doing data loads in bursts and hitting the ORA-01653 error, you might be able to reclaim space without expensive reorganisations by using the ALTER TABLE DEALLOCATE UNUSED command.

Oracle 19c Automatic Indexing: Indexing With Stale Statistics Part II (Survive)

    In my previous post, I discussed how having stale statistics, usually a bad idea, is especially problematic with regard Automatic Indexes as it usually results in viable automatic indexes only being created in an UNUSABLE/INVISIBLE state. If we were to now to collect the missing statistics:   If we now repeatedly re-run the […]

Index FFS Cost 2

Here’s a little puzzle, highlighting a “bug that’s not a bug” that was “fixed but not fixed” some time in the 10.2 timeline. (If you want specifics about exactly when the fix became available and what patches might be available, they’re in MOS – Bug 5099019: DBMS_STATS DOESN’T COUNT LEAF_BLOCKS CORRECTLY.)

Running 19.3.0.0, with the system statistics as shown:


begin
        dbms_stats.set_system_stats('MBRC',16);
        dbms_stats.set_system_stats('MREADTIM',10);
        dbms_stats.set_system_stats('SREADTIM',5);
        dbms_stats.set_system_stats('CPUSPEED',500);
end;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                cascade          => true,
                estimate_percent => 100,
                method_opt       => 'for all columns size 1'
        );
end;
/

select
        index_name, leaf_blocks
from
        user_indexes
where
        table_name = 'T1'
;

alter system flush buffer_cache;

set autotrace traceonly 

select
        count(small_vc)
from    t1
where   small_vc is not null
;

set autotrace off

For my test run there are no uncommitted transactions, the gathering of table stats with cascade to indexes means all the blocks are clean so there’s no undo activity needed to produce an answer for the final query, and all that the query is going to do is run an index fast full scan to count the number of rows in the table because there’s an index on just (small_vc).

The system stats tell us that Oracle is going to cost the index fast full scan by taking the leaf block count, dividing by 16 (the MBRC) and multiplying by 2 ( mreadtim / sreadtim) and adding a bit for CPU usage. So here are the results from running the query and generating the plan with autotrace:


PL/SQL procedure successfully completed.

INDEX_NAME           LEAF_BLOCKS
-------------------- -----------
T1_V1                        153

1 row selected.

System altered.

1 row selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 274579903

-------------------------------------------------------------------------------
| Id  | Operation             | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |       |     1 |    11 |    22   (5)| 00:00:01 |
|   1 |  SORT AGGREGATE       |       |     1 |    11 |            |          |
|*  2 |   INDEX FAST FULL SCAN| T1_V1 |  9999 |   107K|    22   (5)| 00:00:01 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("SMALL_VC" IS NOT NULL)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       1530  consistent gets
       1522  physical reads
          0  redo size
        558  bytes sent via SQL*Net to client
        425  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

Points to notice:

  • The cost is (as predicted): ceiling(153 leaf_blocks / 16 MBRC) * 2 + a bit = 20 + a bit.
  • The number of physical blocks read is 1522, not the 153 leaf blocks reported in index stats
  • No recursive SQL, and I did say that there were no undo / read consistency issues to worry about
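The arithmetic behind the first bullet can be checked directly from the system statistics set at the top of the script:

```python
from math import ceil

leaf_blocks = 153            # leaf_blocks reported in user_indexes
mbrc = 16                    # MBRC system statistic
mreadtim, sreadtim = 10, 5   # multiblock / single block read times

# I/O cost of the index fast full scan: multiblock reads to cover the leaf
# blocks, weighted by the multiblock/single-block read time ratio.
io_cost = ceil(leaf_blocks / mbrc) * (mreadtim / sreadtim)
print(io_cost)  # 20.0 -- the reported cost of 22 is this plus the CPU component
```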

There is clearly an inconsistency between the size of the index and the (100% gathered) leaf_block count that the optimizer is using, and this is a point I made many years ago in “Cost Based Oracle – Fundamentals”. To gather the leaf block count Oracle has looked only at the leaf blocks that, after cleanout, still hold index entries – and here’s how I prepared this demo:


rem
rem     Script:         index_ffs_cost.sql
rem     Author:         Jonathan Lewis
rem     Dated:          July 2012
rem

create table t1
as
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4 -- > comment to avoid wordpress format issue
)
select
        rownum                  id,
        lpad(rownum,10,'0')     small_vc,
        rpad('x',100)           padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e5 -- > comment to avoid wordpress format issue
;

create index t1_v1 on t1(small_vc) pctfree 80;

delete
        from t1
where
        id between 5000 and 95000
;

commit;

When I gathered stats on the index I had just deleted 90% of the data from the table – so 90% of the leaf blocks in the index were empty but still in the index structure, and only 10% of the leaf blocks held any current index entries. This is why Oracle reported leaf_blocks = 153 while the index fast full scan had to read through nearly 1530 blocks.

This is one of those contradictory problems – for an index fast full scan you need to know the size of the segment that will be scanned because that’s the number of blocks you will have to examine; but in most cases the number of populated index leaf blocks is the number you need to know about when you’re trying to cost an index range scan. Of course in most cases the nature of Oracle’s implementation of B-tree indexes will mean that the two counts will be within one or two percent of each other. But there are a few extreme cases where you could get an index into a state where the segment size is large and the data set is small and you don’t want the optimizer to think that an index fast full scan will be low-cost.

Oracle produced a mechanism for getting the segment block count captured as the leaf_blocks statistic late in 10.2, but it's not enabled by default, and it's not something you can tweak into a query with the opt_param() hint. Fix control 5099019 has the description: "set leaf blocks to the number of blocks in the index extent map", but it's not a run-time fix, it's a fix that has to be in place when you gather the index stats – thus:

alter session set "_fix_control"='5099019:1';

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                cascade          => true,
                estimate_percent => 100,
                method_opt       => 'for all columns size 1'
        );
end;
/

select
        index_name, leaf_blocks
from
        user_indexes
where
        table_name = 'T1'
;

alter system flush buffer_cache;

set autotrace traceonly 

select
        count(small_vc)
from    t1
where   small_vc is not null
;

set autotrace off

===================================================

Session altered.

PL/SQL procedure successfully completed.

INDEX_NAME           LEAF_BLOCKS
-------------------- -----------
T1_V1                       1555

1 row selected.

System altered.

1 row selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 274579903

-------------------------------------------------------------------------------
| Id  | Operation             | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |       |     1 |    11 |   201   (3)| 00:00:01 |
|   1 |  SORT AGGREGATE       |       |     1 |    11 |            |          |
|*  2 |   INDEX FAST FULL SCAN| T1_V1 |  9999 |   107K|   201   (3)| 00:00:01 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("SMALL_VC" IS NOT NULL)

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
       1530  consistent gets
       1522  physical reads
          0  redo size
        558  bytes sent via SQL*Net to client
        439  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

Session altered.

As you can see Oracle has now set leaf_blocks = 1555. This is actually the number of blocks below the highwater mark in the segment. A further check with the dbms_space package showed that the number of blocks in the index structure was actually 1,523, with a further 32 blocks that we could infer were space management blocks. Of the 1,523 blocks in the index structure 157 were reported as "FULL" while 1,366 were reported as "FS2", which possibly ought to mean "available for re-use" (though still in the structure), although this didn't quite seem to be the case a few years ago.
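For reference, the space-usage check can be run with a call like the following – a sketch using the dbms_space.space_usage() procedure (which requires the segment to be in an ASSM tablespace; for index segments Oracle effectively uses only the FULL and FS2 states):

```sql
declare
        m_unformatted_blocks    number;
        m_unformatted_bytes     number;
        m_fs1_blocks            number;
        m_fs1_bytes             number;
        m_fs2_blocks            number;         -- for indexes: blocks free for re-use
        m_fs2_bytes             number;
        m_fs3_blocks            number;
        m_fs3_bytes             number;
        m_fs4_blocks            number;
        m_fs4_bytes             number;
        m_full_blocks           number;         -- blocks holding current index entries
        m_full_bytes            number;
begin
        dbms_space.space_usage(
                segment_owner           => user,
                segment_name            => 'T1_V1',
                segment_type            => 'INDEX',
                unformatted_blocks      => m_unformatted_blocks,
                unformatted_bytes       => m_unformatted_bytes,
                fs1_blocks              => m_fs1_blocks,
                fs1_bytes               => m_fs1_bytes,
                fs2_blocks              => m_fs2_blocks,
                fs2_bytes               => m_fs2_bytes,
                fs3_blocks              => m_fs3_blocks,
                fs3_bytes               => m_fs3_bytes,
                fs4_blocks              => m_fs4_blocks,
                fs4_bytes               => m_fs4_bytes,
                full_blocks             => m_full_blocks,
                full_bytes              => m_full_bytes
        );
        dbms_output.put_line('FULL blocks: ' || m_full_blocks);
        dbms_output.put_line('FS2  blocks: ' || m_fs2_blocks);
end;
/
```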

Although Oracle has supplied a fix to a problem I highlighted in CBO-F, I can understand why it's not enabled by default, and I don't think I'd want to take advantage of it in a production system given that it's a session setting at stats gathering time. On the few occasions when it was likely to matter I'd probably add hints to the SQL to stop the optimizer from using the index incorrectly, or add a call to dbms_stats.set_index_stats() to the stats-gathering job.
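If I did want to adjust the statistic by hand, the set_index_stats() option might be sketched as follows – a hypothetical example that hard-codes the block count I want the optimizer to see (here the 1,523 blocks reported by dbms_space); note that any subsequent stats gathering on the index would overwrite it:

```sql
begin
        dbms_stats.set_index_stats(
                ownname         => user,
                indname         => 'T1_V1',
                numlblks        => 1523,        -- block count taken from dbms_space
                no_invalidate   => false        -- invalidate dependent cursors now
        );
end;
/
```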