Dump logfile

Here’s a little procedure I’ve been using since Oracle 8i to dump the contents of the current log file – I’ve mentioned it several times in the past but never published it, so I’ll be checking for references to it and linking to it.

The code hasn’t changed in a long time, although I did add a query to get the full tracefile name from v$process when that became available. There’s also an (optional) call to dbms_support.my_sid to pick up the SID of the current session, which slid into the code when that package became available.
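
A minimal sketch of the approach (not the original code) might look something like the following. The procedure name dump_current_log is invented for illustration, and it assumes the owning schema has direct grants on v$log, v$logfile, v$session and v$process plus the alter system privilege, since role-based grants don’t apply inside a definer’s rights procedure.

create or replace procedure dump_current_log
as
        m_logfile       varchar2(513);
        m_tracefile     varchar2(1024);
begin
        -- pick one member of the current online redo log group
        select  member
        into    m_logfile
        from    v$logfile
        where   group# = (select group# from v$log where status = 'CURRENT')
        and     rownum = 1;

        -- dump its contents into the session's trace file
        execute immediate 'alter system dump logfile ''' || m_logfile || '''';

        -- report the full trace file name (v$process.tracefile is the modern option;
        -- dbms_support.my_sid is an alternative way of identifying the current session)
        select  p.tracefile
        into    m_tracefile
        from    v$process p, v$session s
        where   p.addr = s.paddr
        and     s.sid  = sys_context('userenv','sid');

        dbms_output.put_line('Dump is in: ' || m_tracefile);
end;
/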

Index Splits – 3

This is stored only for reference, and in case anyone wants to wade through the details. It’s the redo log dump from the 90/10 index leaf block split test from the previous blog posts running on 11.2.0.4 on Linux. The first part is the full block dump, the second part is an extract of the Record and Change vector headings with the embedded opcode (opc:) for the undo records in the redo vectors, and a tiny note of what each change vector is doing.

Index Splits – 2

In yesterday’s article I described the mechanism that Oracle uses for an index leaf block split when you try to insert a new entry into a leaf block that is already full, and I demonstrated that the “50-50” split and the “90-10” split work in the same way, namely:

Index splits

After writing this note I came to the conclusion that it will be of no practical benefit to anyone …  but I’m publishing it anyway because it’s just an interesting little observation about the thought processes of some Oracle designer/developer. (Or maybe it’s an indication of how it’s sensible to re-use existing code rather than coding for a particular boundary case, or maybe it’s an example of how to take advantage of “dead time” to add a little icing to the cake when the extra time required might not get noticed). Anyway, the topic came up in a recent thread on the OTN/ODC database forum and since the description given there wasn’t quite right I thought I’d write up a correction and a few extra notes.

Direct IOT

A recent (automatic ?) tweet from Connor McDonald highlighted an article he’d written a couple of years ago about an enhancement introduced in 12c that allowed for direct path inserts to index organized tables (IOTs). The article included a demonstration that seemed to suggest that direct path loads to IOTs were of no benefit, and ended with a comment (which could be applied to any Oracle feature): “Direct mode insert is a very cool facility, but it doesn’t mean that it’s going to be the best option in every situation.”
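
As a rough idea of the kind of test involved (not Connor’s demonstration; the table names t_heap and t_iot are invented, and I’m assuming 12c or later): create a heap table as a source, create an index organized table, then insert with the append hint. If the load really has gone direct path, querying the IOT before committing should raise ORA-12838 (“cannot read/modify an object after modifying it in parallel”).

create table t_heap (
        id      number(10)      not null,
        padding varchar2(100)
);

insert into t_heap
select rownum, rpad('x',100,'x')
from   dual
connect by level <= 10000
;
commit;

create table t_iot (
        id      number(10)      not null,
        padding varchar2(100),
        constraint t_iot_pk primary key(id)
)
organization index
;

-- direct path (append) load into the IOT
insert /*+ append */ into t_iot
select * from t_heap
;

-- this should raise ORA-12838 until we commit, if the load really went direct path
select count(*) from t_iot;

commit;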

Validate FK

A comment arrived yesterday on an earlier posting about an enhancement to the truncate command in 12c that raised the topic of what Oracle might do to validate a foreign key constraint. Despite being sure I had the answer written down somewhere (maybe on a client site or in a report to a client) I couldn’t find anything I’d published about it, so I ran up a quick demo script to show that all Oracle does is construct a simple SQL statement that will check the data – and then let the optimizer do whatever it can to produce the fastest possible plan.

Here’s the script – with a few variations to show what happens if you start tweaking features to change the plan.
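
As a minimal sketch of that kind of demo (with invented table names parent_t and child_t, not the original script): create a parent/child pair, add the foreign key with “enable novalidate”, then trace the session while validating the constraint and check the trace file for the recursive SQL and the plan the optimizer chose for it.

create table parent_t (
        id      number(10)      not null,
        constraint par_pk primary key(id)
);

create table child_t (
        id_p            number(10)      not null,
        small_vc        varchar2(10)
);

insert into parent_t select rownum from dual connect by level <= 1000;
insert into child_t  select mod(rownum,1000) + 1, lpad(rownum,10) from dual connect by level <= 10000;
commit;

-- declare the constraint without checking the existing data
alter table child_t add constraint chi_fk_par
        foreign key (id_p) references parent_t(id)
        enable novalidate
;

begin
        dbms_stats.gather_table_stats(user, 'PARENT_T');
        dbms_stats.gather_table_stats(user, 'CHILD_T');
end;
/

-- trace the validation, then look in the trace file for the SQL that checks the data
alter session set events '10046 trace name context forever, level 4';
alter table child_t modify constraint chi_fk_par validate;
alter session set events '10046 trace name context off';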

Truncate upgrade

Connor McDonald produced a tweet yesterday linking to a short video he’d created about an enhancement to the truncate command in 12c. If you have referential integrity declared between a parent and child table then in 12c you can truncate the parent table and Oracle will truncate the child table for you – rather than raising an error. The feature requires the foreign key constraint to be declared “on delete cascade” – which is an option that I don’t see used very often. Unfortunately if you try to change an existing foreign key constraint to meet this requirement you’ll find that you can’t (yet) use “alter table modify constraint” to make the necessary change. As Connor pointed out, you’ll have to drop and recreate the constraint – which leaves you open to bad data getting into the system, or an outage while you do the drop and recreate.
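
A minimal sketch of the feature (invented table names parent_t and child_t, assuming 12c or later):

create table parent_t (
        id      number(10)      not null,
        constraint par_pk primary key(id)
);

-- the "on delete cascade" clause has to be there from the start, since (as noted
-- above) you can't currently add it to an existing constraint with a simple
-- "alter table modify constraint"
create table child_t (
        id_p    number(10)      not null,
        constraint chi_fk_par foreign key (id_p)
                references parent_t(id)
                on delete cascade
);

insert into parent_t values(1);
insert into child_t  values(1);
commit;

-- a plain "truncate table parent_t" would fail with ORA-02266 because of the
-- enabled foreign key; the cascade option truncates child_t as well
truncate table parent_t cascade;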

Cursor_sharing force

Prompted by a recent ODC (OTN) question I’ve just written up an example of one case where setting the cursor_sharing parameter to force doesn’t work as you might expect. It’s a specific example of what I believe is a theme that can appear in several different circumstances: if your SQL mixes “genuine” bind variables with literals then the literals may not be substituted.

Here’s a simple data set to start with:
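
As a minimal sketch of the sort of test involved (an invented table t1 rather than the original data set): run a statement that mixes an explicit bind variable with a literal while cursor_sharing is set to force, then check v$sql to see whether the literal has been replaced by a system-generated bind.

create table t1
as
select
        rownum          id,
        mod(rownum,100) n1,
        lpad(rownum,10) v1
from    dual
connect by level <= 10000
;

alter session set cursor_sharing = force;

variable b1 number
exec :b1 := 50

select  /*+ cursor_sharing_marker */
        count(v1)
from    t1
where   n1 = :b1
and     id > 9900
;

-- if the literal 9900 still appears in the sql_text then it has not been substituted
select  sql_id, sql_text
from    v$sql
where   sql_text like '%cursor_sharing_marker%'
and     sql_text not like '%v$sql%'
;

alter session set cursor_sharing = exact;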

dbms_random

In a recent ODC thread someone had a piece of SQL that was calling dbms_random.string(‘U’,20) to generate random values for a table of 100,000,000 rows. The thread was about how to handle the ORA-30009 error (not enough memory for operation) that is almost inevitable when you use the “select from dual connect by level <= n” strategy for generating very large numbers of rows, but this example of calling dbms_random.string() so frequently prompted me to point out an important CPU saving, and then to publicise through this blog a little-known fact (or deduction) about the dbms_random.string() function.
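
The standard workaround for the ORA-30009 problem is to cross join two (or more) much smaller row generators rather than asking a single “connect by” to produce the whole volume. A minimal sketch (the table name and column list are invented for illustration, and the row count is kept down to a million to keep the dbms_random cost sane):

create table t1
nologging
as
with generator as (
        select rownum id
        from   dual
        connect by level <= 10000
)
select
        rownum                          id,
        dbms_random.string('U',20)      v1
from
        generator       g1,
        generator       g2
where
        rownum <= 1e6
;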

255 Columns

It’s the gift that keeps on giving – no matter how many problems you find there are always more waiting to be found. It’s been some time since I last wrote about tables with more than 255 columns, and I said then that there was more to come. In the last article I described how adding a few columns to a table, or updating a trailing column in a way that made the table’s used column count exceed 255, could result in some strange row-splitting behaviour – in this article I’m going to look at a critical side effect of that behaviour.
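
As a minimal sketch of the kind of setup involved (an invented table t_260, not the test case from the article): create a table with more than 255 columns, insert a row that populates only an early column, then update a trailing column so that the used column count goes past 255 and the row has to be split into multiple row pieces.

declare
        m_sql   varchar2(32767) := 'create table t_260 (';
begin
        -- build and execute a create table statement with 260 numeric columns
        for i in 1 .. 260 loop
                m_sql := m_sql || 'c' || to_char(i,'fm000') || ' number(10,0), ';
        end loop;
        m_sql := rtrim(m_sql, ', ') || ')';
        execute immediate m_sql;
end;
/

-- a row using only the first column is stored as a single row piece
insert into t_260 (c001) values (1);
commit;

-- populating a trailing column takes the used column count over 255,
-- which is where the row-splitting behaviour appears
update t_260 set c260 = 1 where c001 = 1;
commit;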