
Column Groups

There’s a question on the ODC database forum about column groups that throws up an interesting side point. The OP is looking at a query like the following and asking about which column groups might help the optimizer get the best plan:
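
The query itself isn't reproduced in this excerpt, but for anyone unfamiliar with the mechanism, a column group is just a form of extended statistics. Here's a minimal sketch using invented names (t1, col1, col2) rather than anything from the forum thread:

-- create a column group (extended statistics) on the pair of columns,
-- then gather stats so the optimizer can see the combined selectivity
select dbms_stats.create_extended_stats(user, 't1', '(col1, col2)') from dual;

begin
        dbms_stats.gather_table_stats(
                ownname    => user,
                tabname    => 't1',
                method_opt => 'for all columns size 1'
        );
end;
/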

Match_recognise – 2

In my previous post I presented a warning about the potential cost of sorting and the cost of failing to find a match after each pass of a long search. In a comment on that post Stew Ashton reminded me that the cost of repeatedly trying to find a match starting from “the next row down” could be less of a threat than the cost of “back-tracking” before moving to the next row down.

Taking the example from the previous posting to explain – the requirement was for customers who had executed a transaction in September but not October, and a match_recognize() clause suggested on the ODC (formerly OTN) database forum to implement this requirement was as follows:
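
The suggested clause isn't included in this excerpt; as a purely hypothetical sketch of the requirement (my own table and column names – transactions, cust_id, trans_dt – not the forum poster's), it might look something like this:

-- one row per customer whose history contains at least one September 2018
-- transaction and no October 2018 transaction: the anchored pattern must
-- cover the whole partition, so any October row prevents a match
select  cust_id
from    transactions
match_recognize (
        partition by cust_id
        order by trans_dt
        one row per match
        pattern ( ^ bfr* sep+ aft* $ )
        define
                bfr as trans_dt <  date '2018-09-01',
                sep as trans_dt >= date '2018-09-01' and trans_dt < date '2018-10-01',
                aft as trans_dt >= date '2018-11-01'
);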

A look into Oracle redo, part 6: Oracle post-wait commit and the on-disk SCN

This is the sixth part in a blog series about Oracle database redo. The previous posts provided information about the log writer writing; this post is about the process that, after issuing a commit, waits for the log writer to write its redo from the public redo strand. When the database is using post/wait for process commits, the committing process follows this (simplified) procedure:
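
The procedure itself isn't reproduced in this excerpt. From the SQL side, the visible part of that post/wait is the 'log file sync' wait event; a quick way to watch it (my own query, not taken from the post) is:

-- instance-wide view of time spent by committing sessions waiting
-- for the log writer to confirm their redo is on disk
select  event, total_waits, time_waited_micro
from    v$system_event
where   event = 'log file sync';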

18c new Lost Write Protection

There are many layers between the Oracle Database pwrite() calls and the physical sector written on disk: filesystem, logical volume, SAN or NAS, with a lot of smart software running for virtualisation, compression, snapshotting, synchronisation… Are you sure that the changes you made to your data are actually persisted on disk, completely and without any corruption? In the case of a bug or crash in the storage layer, it may happen that only part of the changes was written. In the case of a crash, Oracle ensures that the datafile headers are written at the end, so that recovery can kick in afterwards. Then a partially written block can be detected and restored. With the various checksum settings, you can also check block integrity while writing or reading. But that protects only against fractured blocks. What if a block write simply did not occur? An old version of the block remains, and it is perfectly valid as far as checksums, RMAN, and DBV are concerned.
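
For reference, the pre-18c protections the post alludes to sit behind two instance parameters; a quick sketch (standard parameters, my own query rather than anything from the post) to check how they are set:

-- db_block_checksum guards against fractured or corrupted blocks;
-- db_lost_write_protect asks redo apply / recovery to detect writes that never happened
select  name, value
from    v$parameter
where   name in ('db_block_checksum', 'db_lost_write_protect');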

Conditional SQL – 5

Here’s a note that has been sitting around for more than 3 years (the draft date is Jan 2015), waiting for me to finish it off; and in that time a new version of Oracle has arrived that changes the solution to the problem it presented. (I also managed to write “Conditional SQL – 6” in the intervening period!)

This posting started with a question on the OTN (now ODC) database forum about an execution plan used by 11.2.0.3.  Here’s a model to represent the data and the query:
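
The model isn't included in this excerpt, but “conditional SQL” of this kind typically revolves around a predicate like the one below. This is a made-up table, not the model from the post:

create table t1 as
select  rownum id, lpad(rownum,10,'0') v1, rpad('x',100) padding
from    dual
connect by level <= 10000;

create index t1_i1 on t1(id);

-- the "conditional" predicate: ideally the index is used when :id is
-- supplied and the table is scanned when :id is null
select  *
from    t1
where   (:id is null or id = :id);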

Hotsos Symposium 2018

Just a quick note to say I’m doing a couple of talks at this year’s Hotsos Symposium, which starts next Monday at 8:30 AM. It’s not too late to register.

As an enticement – here is one of my favorite pictures from last year’s event: (Come join the fun!)

[Photo: IMG_0379 – http://kerryosborne.oracle-guy.com/files/2018/03/IMG_0379-e1519959814127...]

255 Columns

It’s the gift that keeps on giving – no matter how many problems you find, there are always more waiting to be found. It’s been some time since I last wrote about tables with more than 255 columns, and I said then that there was more to come. In the last article I described how adding a few columns to a table, or updating a trailing column in a way that made the table’s used column count exceed 255, could result in some strange row-splitting behaviour – in this article I’m going to look at a critical side effect of that behaviour.
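
As a quick check on your own systems, the tables at risk of this row-splitting are easy to list; this query is my own, not from the article (note that dba_tab_columns also reports views, which you can ignore here):

-- objects whose declared column count already exceeds the 255-column
-- limit for a single row piece
select  owner, table_name, count(*) as column_count
from    dba_tab_columns
group by
        owner, table_name
having  count(*) > 255
order by
        column_count desc;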

18c dbms_xplan note about failed SQL Plan Baseline

SQL Plan Baselines are a great feature for plan stability: you capture the plans that you accept. However, if the data model changes and the accepted plans can no longer be reproduced, the optimizer will come up with a new plan. In 18c we have a note from DBMS_XPLAN when the optimizer ‘failed to use SQL plan baseline for this statement’.

I create a table, with an index, and run a query on it using this index:

SQL> create table DEMO as select rownum n from xmltable('1 to 1000');
Table DEMO created.
 
SQL> create index DEMO_N on DEMO(n);
Index DEMO_N created.
 
SQL> select * from DEMO where n=1;
 
N
-
1
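
To see the note you first need an accepted baseline for the statement; a minimal sketch of that next step (the sql_id value is a placeholder, and this is my illustration rather than the post's script) would be:

set serveroutput on
declare
        n pls_integer;
begin
        -- capture the current plan from the cursor cache as an accepted baseline
        n := dbms_spm.load_plans_from_cursor_cache(sql_id => 'placeholder_sqlid');
        dbms_output.put_line(n || ' plan(s) loaded');
end;
/

-- confirm the baseline now exists
select  sql_handle, plan_name, enabled, accepted
from    dba_sql_plan_baselines;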

A look into Oracle redo, part 5: the log writer writing

This is the fifth post in a series of blog posts about Oracle database redo. The previous post looked into the ‘null write’ (kcrfw_do_null_write, actually) function inside kcrfw_redo_write_driver, which does housekeeping like updating SCNs and posting processes if needed; this post looks into what happens when the log writer is actually posted by a process, or when public redo strand buffers have been written into. In part 3 of this blog series (the log writer working cycle) it can be seen that when a session posts the log writer, it returns from the semaphore-related functions and calls ‘kcrfw_redo_write_driver’ directly, which otherwise is called inside ksbcti.

Inside the kcrfw_redo_write_driver function, the first thing of interest is executed only when the log writer is posted and the kcrfw_redo_write_driver function is called directly after returning from ksarcv and ksl_exit_main_loop_wait:

Duplicating a Database on ODA

Introduction

Every so often, we get a request to duplicate a database for one of our customers using an Oracle Database Appliance (ODA). The process for doing that is relatively straightforward, but there are a couple of nuances along the way, so I thought I’d write it up as a blog post in case it’s of use to others. Obviously, I have obfuscated any customer-specific information to protect their identity.

Configuration

The first nuance is to understand which database is being used as the source for the clone. Generally, a request for cloning will look something like this:

“We are creating a new environment which needs a new database to be set up. Please copy P1_SRV_T and restore as P1_SRV_F”.
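
A name like P1_SRV_T may well be a service name rather than the actual DB_NAME (that's an assumption on my part for this sketch), so a sensible first step is simply to confirm which database actually provides it:

-- run on the suspected source database
select  name, db_unique_name
from    v$database;

-- list the services it offers, to match against the name in the request
select  name
from    dba_services
order by name;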