Indexing Null Values - Part 1

Indexing null values in Oracle is something that has been written about a lot in the past. Nowadays it should be common knowledge that Oracle B*Tree indexes don't index entries that are entirely null, but it's possible to include null values in B*Tree indexes by combining them with something guaranteed to be non-null, be it another column or simply a constant expression.
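
Since the excerpt only describes the technique, here is a minimal sketch of it; the table and column names (T, STATUS) are made up for illustration. Appending a constant expression to the index key guarantees that no index entry is entirely null, which makes rows with a null STATUS reachable via the index:

CREATE TABLE t (id NUMBER, status VARCHAR2(10));

-- A single-column index on STATUS would not contain the rows where STATUS is null.
-- Adding a constant as a second index expression makes every index entry non-null
-- overall, so a query with "status IS NULL" can use an index range scan.
CREATE INDEX t_status_idx ON t (status, 0);

SELECT * FROM t WHERE status IS NULL;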

Not too long ago Jonathan Lewis published a note that showed an oddity when dealing with IS NULL predicates. In the end it turned out not to be a real threat, but rather an oddity in how Oracle displays the access and filter predicates when an index is accessed using IS NULL together with other, trailing predicates.

Compression Restrictions - Update: Wide Table Compression In 12.1 And Later

I have a very old post here that used to be rather popular and that listed various restrictions related to compression. One of the most obvious restrictions in older versions was that basic / OLTP (Advanced Row) heap table compression, which is based on symbol tables / de-duplication, was limited to tables with 254 columns or fewer - or, more generally, to rows consisting of a single row piece.

This limitation was lifted with Oracle 12c, which started to support this compression type for tables having more than 254 columns / multiple row pieces - and there is a new internal parameter called "_widetab_comp_enabled" that defaults to TRUE.
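
As a hedged sketch of how to verify the parameter mentioned above - this is just the usual x$ query for inspecting underscore parameters and requires SYS access; nothing here is specific to this post:

-- Run as SYS: show the current value of the hidden parameter
SELECT i.ksppinm AS parameter, v.ksppstvl AS value
FROM   x$ksppi i, x$ksppcv v
WHERE  i.indx = v.indx
AND    i.ksppinm = '_widetab_comp_enabled';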

I/O Benchmark Minor Update

I've recently published a new version 1.03 of the I/O benchmark scripts on my github repository (ideally pick the IO_BENCHMARK.ZIP containing all the scripts).

Bloom Filter Efficiency And Cardinality Estimates

I recently came across an interesting observation that I haven't seen documented yet, so I'm publishing a simple example here to demonstrate the issue.

In principle it looks like the efficiency of Bloom Filter operations depends on the optimizer's cardinality estimates. In particular, cardinality under-estimates can make a dramatic difference in how efficiently a Bloom Filter operation based on such an estimate works at runtime. Since Bloom Filters are crucial for efficient processing, in particular when using Exadata or the In-Memory column store, this can have a significant impact on the performance of affected operations.
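
The test case itself isn't part of this excerpt; as a hedged illustration of the kind of setup where this matters (the DIM and FACT table names are made up), a hash join between a small, filtered dimension table and a large fact table is a typical situation where the optimizer adds a Bloom filter, which then shows up in the execution plan as JOIN FILTER CREATE / JOIN FILTER USE operations and SYS_OP_BLOOM_FILTER filter predicates:

-- Typical parallel hash join where the optimizer may add a Bloom filter
SELECT /*+ parallel(4) */ COUNT(*)
FROM   dim d
JOIN   fact f ON f.dim_id = d.id
WHERE  d.attr = 'X';

-- Display the plan of the last statement; look for JOIN FILTER operations
-- and :BF0000 style filter predicates
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR);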

No Asynchronous I/O When Using Shared Server (Also Known As MTS)

I've recently had a case at a client where it was questioned why a particular application was seemingly not making full use of the available I/O capabilities - in comparison to other databases / applications using similar storage.

Basically it ended up in a kind of finger pointing between the application vendor and the IT DBA / storage admins: one side claimed that the infrastructure offered insufficient I/O capabilities (since the most important application tasks were dominated by I/O waits in the database), while the other side claimed that the application didn't make use of the I/O capabilities offered - compared to other databases / applications that showed a significantly higher IOPS rate and/or I/O throughput on the same kind of storage.
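
A quick, hedged sketch of the checks implied above - confirming whether sessions actually connect via shared server and which I/O related parameters are in effect (the views and columns are standard, but the interpretation of course depends on the system at hand):

-- Are the application's sessions connected via shared server (SERVER = 'SHARED')?
SELECT server, COUNT(*) FROM v$session GROUP BY server;

-- I/O and shared server related initialization parameters
SELECT name, value
FROM   v$parameter
WHERE  name IN ('disk_asynch_io', 'filesystemio_options', 'shared_servers', 'dispatchers');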

CDB Views and Query Optimizer Cardinality Estimations

Today I faced a performance problem caused by a bad cardinality estimate involving a CDB view in a 12.1.0.2 multitenant environment. While solving the problem I made a number of observations that I'll try to summarize in this blog post.

First of all, when checking the execution plan of a query that had already been running for more than two hours, I noticed that neither the referenced CDB view nor any of its underlying objects appeared in the plan. The following query (and its execution plan), executed while connected to the CDB, illustrates this (I also added the 12.2.0.1 output to show you the difference in that area):
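
The query and the plans themselves are not included in this excerpt; as a hedged sketch of the kind of statement involved (CDB_TABLES is just an example of a CDB view, not necessarily the one from the post) and of how such a plan is typically displayed:

-- Example query against a CDB view, run while connected to the CDB root
SELECT con_id, owner, COUNT(*)
FROM   cdb_tables
GROUP BY con_id, owner;

-- Show the execution plan of the last statement executed in this session
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR);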

Purging Unused Triples Is a Never-ending Operation

This is a short post to point out a problem I recently discovered and for which I couldn't find information online. Hence, in case you hit it, I hope this short post will save you some time.

The documentation describes a procedure you can use to get rid of dangling triples stored in a semantic network. Simply put, you must execute a PL/SQL block like the following:
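
The PL/SQL block itself is not part of this excerpt; a minimal sketch, assuming the documented SEM_APIS.PURGE_UNUSED_VALUES procedure is the one meant here, would look like this:

BEGIN
  -- Remove values in the semantic network that are no longer referenced by any triple
  sem_apis.purge_unused_values;
END;
/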

SQL_ADAPTIVE_PLAN_RESOLVED Is Broken

You can use the V$SQL.IS_RESOLVED_ADAPTIVE_PLAN column to know whether the execution plan associated with a child cursor is adaptive or not - specifically, whether the query optimizer selected either an adaptive join method or an adaptive star transformation (notice that it is not set when hybrid hash distribution is involved).

The column takes one of the following values:
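
The list of values is cut off in this excerpt; as a hedged example of how the column is typically inspected:

-- Which child cursors were built with an adaptive plan?
SELECT sql_id, child_number, is_resolved_adaptive_plan
FROM   v$sql
WHERE  is_resolved_adaptive_plan IS NOT NULL;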

SPD State Does Not Change If Adaptive Statistics Are Disabled

The aim of this post is to point out an issue (bug?) that I recently discovered. But, before talking about it, I need to go through a rather long introduction about the state of SQL plan directives (SPDs).

As of version 12.1.0.2, an SPD has two pieces of state information. You can see both of them through the CDB/DBA_SQL_PLAN_DIRECTIVES data dictionary views:
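
The excerpt ends before the two states are described; as a hedged sketch of how they can be queried in 12.1.0.2 (the externally visible state is in the STATE column, while the internal state is assumed here to be exposed in the NOTES XML column):

-- Externally visible state vs. internal state of SQL plan directives
SELECT directive_id,
       state,
       extractvalue(notes, '/spd_note/internal_state') AS internal_state
FROM   dba_sql_plan_directives
WHERE  type = 'DYNAMIC_SAMPLING';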