From version 12.1.0.2 onward, when taking AWR snapshots, you can choose between four AWR flush levels: BESTFIT, LITE, TYPICAL and ALL. If you check the Oracle Database documentation, you won't find much information about the differences between them. The best you will find, in the PL/SQL Packages and Types Reference, is the following:
The flush level can be one of the following:
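Whichever level you pick, it is passed via the flush_level parameter of DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT. A minimal sketch, using 'ALL' purely as an example value:

declare
  l_snap_id number;
begin
  -- take a manual AWR snapshot with an explicit flush level
  l_snap_id := dbms_workload_repository.create_snapshot(flush_level => 'ALL');
  dbms_output.put_line('created snapshot ' || l_snap_id);
end;
/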
In the previous post I demonstrated that Oracle has some problems making efficient use of B*Tree indexes if an IS NULL condition is followed by IN / OR predicates also covered by the same index - the following predicates are not used to navigate the index structure efficiently but are applied as filters to all index entries identified by the IS NULL.
In this part I'll show the results I got when repeating the same exercise using Bitmap indexes - after all, they include NULL values anyway, so no special tricks are required to use them for an IS NULL search. Let's start again with the same data set (actually not exactly the same, but very similar) and an index on the single expression that gets searched for via IS NULL - results are again from 18.3.0:
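As a rough sketch of the kind of setup described (table and column names are assumptions, not the author's actual data set or output):

create table t
as
select rownum as id
     , case when mod(rownum, 1000) = 0 then null else mod(rownum, 10) end as col1
     , mod(rownum, 100) as col2
     , rpad('x', 100) as filler
from dual
connect by level <= 100000;

-- a bitmap index includes NULL entries, so it can be used directly for IS NULL
create bitmap index t_col1_idx on t (col1);

-- the interesting part is what happens when additional IN / OR predicates follow
select * from t where col1 is null and col2 in (1, 2);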
Indexing null values in Oracle is something that has already been written about a lot. Nowadays it should be common knowledge that Oracle B*Tree indexes don't contain entries for rows where all indexed columns are null, but it's possible to include null values in B*Tree indexes by combining the nullable column with something guaranteed to be non-null, be it another column or simply a constant expression.
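A minimal sketch of that technique (table and column names are assumptions, and the constant 0 is just one common choice):

create table t2 (id number, col1 number, col2 number);

-- a plain single-column index has no entry for rows where COL1 is NULL
create index t2_col1_only_idx on t2 (col1);

-- appending a constant expression guarantees an index entry for every row,
-- so an IS NULL predicate on COL1 can be resolved via this index
create index t2_col1_const_idx on t2 (col1, 0);

select * from t2 where col1 is null;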
Not too long ago Jonathan Lewis published a note that showed an oddity when dealing with IS NULL predicates, which in the end turned out not to be a real threat but rather an oddity in how Oracle displays the access and filter predicates when an index is accessed via IS NULL together with other predicates following after.
I have a very old post here, which used to be rather popular, that listed various restrictions related to compression. One of the most obvious restrictions in older versions was that basic / OLTP (Advanced Row) heap table compression, based on symbol tables / de-duplication, was limited to tables with 254 columns or less - or, more generally, to rows consisting of a single row piece.
I've recently come across an interesting observation that I haven't seen documented yet, so I'm publishing a simple example here to demonstrate the issue.
In principle it looks like the efficiency of Bloom filter operations depends on the optimizer's cardinality estimates. This means that cardinality under-estimates in particular can make a dramatic difference to how efficiently a corresponding Bloom filter operation works at runtime. Since Bloom filters are crucial for efficient processing, in particular when using Exadata or the In-Memory column store, this can have a significant impact on the performance of affected operations.
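As a rough illustration (not taken from the post itself; table names and the use of the PX_JOIN_FILTER hint are assumptions), a Bloom filter created on the build side of a hash join shows up in the plan as JOIN FILTER CREATE / JOIN FILTER USE, and its runtime efficiency can be checked by comparing estimated and actual row counts:

select /*+ gather_plan_statistics leading(d) use_hash(f) px_join_filter(f) */
       count(*)
from   dim d
     , fact f
where  f.dim_id = d.id
and    d.attr   = 'X';

-- look for the JOIN FILTER CREATE / JOIN FILTER USE operations and compare
-- E-Rows with A-Rows on the probe side of the join
select * from table(dbms_xplan.display_cursor(format => 'ALLSTATS LAST'));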
I've recently had a case at a client where it was questioned why a particular application was seemingly not making full use of the available I/O capabilities - in comparison to other databases / applications using similar storage.
Basically it ended up in a kind of finger pointing between the application vendor and the IT DBA / storage admins, one side saying that the infrastructure used offered insufficient I/O capabilities (since the most important application tasks were dominated by I/O waits in the database), and the other side saying that the application didn't make use of the I/O capabilities offered - compared to other databases / applications that showed a significantly higher IOPS rate and/or I/O throughput using the same kind of storage.
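A quick way to see that kind of profile (a sketch only, and it requires a Diagnostics Pack license) is to group recent Active Session History samples by wait class:

select nvl(wait_class, 'ON CPU') as wait_class
     , count(*)                  as ash_samples
from   v$active_session_history
group  by nvl(wait_class, 'ON CPU')
order  by count(*) desc;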
Today I faced a performance problem caused by a bad cardinality estimate involving a CDB view in a 12.1.0.2 multitenant environment. While solving the problem I made a number of observations that I'll try to summarize in this blog post.
First of all, when checking the execution plan of a query that had already been running for more than two hours, I noticed that the execution plan referenced neither the CDB view nor any of its underlying objects. The following query (and its execution plan), executed while connected to the CDB, illustrates this (I also added the 12.2.0.1 output to show you the difference in that area):
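As a generic sketch of that kind of check (CDB_OBJECTS is used purely as an example, not the author's original query): run a query against a CDB view while connected to the CDB root, then display the actual execution plan and look at which objects show up in it.

select count(*) from cdb_objects;

-- check which objects actually appear in the plan
select * from table(dbms_xplan.display_cursor());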
This is a short post to point out a problem I recently discovered and for which I couldn't find any information online. Hence, in case you hit it, I hope this short post will save you some time.
The documentation describes a procedure you can use to get rid of dangling triples stored in a semantic network. Simply put, you must execute a PL/SQL block like the following: