Announcing SLOB 2.5 for Download at Github.

This is just a quick blog post to announce that SLOB 2.5 is now available for downloading at: https://github.com/therealkevinc/SLOB_distribution/tree/SLOB_2.5.0.

There is an important bug fix in this release that corrects redo logging payload generation when testing with a non-zero slob.conf->UPDATE_PCT. I recommend downloading and using this release accordingly. The bug is described in the release notes.

A special thanks to Maris Elsins for finding, blogging about, and reporting the bug.

If you adopt this release there is no need to reload SLOB (via setup.sh): data loaded with SLOB 2.4 is compatible with SLOB 2.5. Simply deploy the tar archive, bring over your slob.conf, and you’re ready to test with SLOB 2.5.

Updatable Join Views

Here’s a quick “how to”.

If you want to update a column in table A with a column value from table B, then there’s a simple way to check if the required result can be achieved through an updatable join view.

Step 1: write a query that joins table A to table B and reports the rows in table A that you want to update, with the value from table B that should be used to update them, e.g.
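As a sketch with hypothetical tables t1 playing table A and t2 playing table B (the names and columns here are mine), the check query and the matching updatable join view might look like this. Remember that table B must be key-preserved, i.e. the join column of t2 needs a unique constraint:

-- Step 1: report the t1 rows to be updated, with the t2 value to use.
select t1.id, t1.small_vc, t2.new_vc
from   t1, t2
where  t2.id = t1.id
and    t1.flag = 'Y'
;

-- If that reports exactly the intended rows and values, the update
-- can be written through the join view (t2.id protected by a unique
-- constraint, so t1 is key-preserved in the join):
update (
        select t1.small_vc, t2.new_vc
        from   t1, t2
        where  t2.id = t1.id
        and    t1.flag = 'Y'
       )
set    small_vc = new_vc
;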

Little sleeps

A peripheral question in a recent comment (made in response to me asking whether a loop had been written with a sleep time of 1/100th or 1/1000th of a second) asked “How do you sleep for 1/1000th of a second in pure PL/SQL?”

The answer starts with “How pure is pure?” Here’s a “pure” PL/SQL solution that “cheats” by calling one of the routines in Oracle’s built-in Java library:
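A sketch of the idea, assuming a wrapper procedure around java.lang.Thread.sleep() (the procedure name is mine):

-- java.lang.Thread.sleep() takes milliseconds, so this wrapper
-- can sleep in units of 1/1000th of a second.
create or replace procedure milli_sleep(p_millis in number)
as language java
name 'java.lang.Thread.sleep(long)';
/

-- Sleep for one millisecond:
begin
        milli_sleep(1);
end;
/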

Nologging

Bobby Durrett recently published a note about estimating the volume of non-logged blocks written by an instance with the aim of getting some idea of the extra redo that would be generated if a database were switched to “force logging”.

Since my most recent blog notes have included various extracts and summaries from the symbolic dumps of redo logs it occurred to me that another strategy for generating the same information would be to dump the redo generated by Oracle when it wanted to log some information about non-logged blocks. This may sound like a contradiction, of course, but it’s the difference between data and meta-data: if Oracle wants to write data blocks to disc without logging their contents it needs to write a note into the redo log saying “there is no log of the contents of these blocks”.
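As a sketch of that strategy (and it is an assumption on my part that the relevant record is layer 19, opcode 2, the “Direct Loader invalidate block range” entry), the symbolic dump can be restricted to just those records:

-- Dump only the redo records that describe non-logged block ranges
-- (output goes to the session trace file; the file path is an example).
-- Assumption: layer 19, opcode 2 = "Direct Loader invalidate block range".
alter system dump logfile '/u01/oradata/TEST/redo01.log' layer 19 opcode 2;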

19c High-Frequency statistics gathering and Real-Time Statistics

These are two exciting new optimizer statistics features that arrived in the latest release of the 12cR2 family: 19c. Less exciting is that we are not allowed to use them on any platform other than Exadata:

https://apex.oracle.com/database-features/

But let’s cross our fingers and hope that they will be released for other platforms in the future, because they solve real-life problems such as out-of-range queries. Here is a little example involving both of them: a table starts empty and grows during the day, so relying only on the statistics gathered during the maintenance window will give bad estimates, and dynamic sampling may not sample the right blocks.
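For reference, a minimal sketch of switching the high-frequency task on (the preference names come from the 19c dbms_stats package; the values are just examples):

-- Enable high-frequency statistics gathering, running the task
-- every 300 seconds for at most 60 seconds per run.
exec dbms_stats.set_global_prefs('AUTO_TASK_STATUS','ON')
exec dbms_stats.set_global_prefs('AUTO_TASK_INTERVAL','300')
exec dbms_stats.set_global_prefs('AUTO_TASK_MAX_RUN_TIME','60')

Real-Time Statistics needs no such switch: where it is available, it is maintained automatically during conventional DML.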

Update restarts

Somewhere I think I’ve published a note about an anomaly that has been bugging me since at least Oracle 10g, but if it is somewhere on the Internet it’s hiding itself very well and I can’t find it, though I have managed to find a few scripts on my laptop that make a casual reference to the side effects of the problem. [Ed: a tweet from Timur Akhmadeev has identified a conversation in the comments on an older post that could explain why I thought I’d written something about the issue.]

Anyway, I’ve decided to create some new code and write the article (all over again, maybe). The problem is a strange overhead that can appear when you do a simple but large update driving off a tablescan.
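To give a flavour of the kind of test involved, here is my own minimal skeleton (not the article’s code):

-- A large update driven by a tablescan. Comparing before/after
-- figures from v$mystat (e.g. undo and redo generated) with what a
-- single pass should cost is where the strange overhead shows up.
create table t1 as
select
        rownum          id,
        rpad('x',100)   padding
from    dual
connect by
        level <= 1e6
;

update t1 set id = id + 1;
commit;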

Quiz Night

Upgrades cause surprises. Here’s a pair of results from a model that I constructed more than 15 years ago and ran today on 12.2, then modified and ran again, then ran on 11.2.0.4, then on 12.1.0.2. It’s very simple: I just create a table, gather stats, then update every row.
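The skeleton of the model is no more than the following (the column definitions are mine; the surprise is in how the results differ across versions):

-- Create a table, gather stats, then update every row.
create table t1 as
select rownum id, rpad('x',45) v1
from   dual
connect by level <= 1e4
;

exec dbms_stats.gather_table_stats(user, 't1')

update t1 set v1 = v1;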

Truncate AWR tables (unsupported)

When WRH$ tables grow so large that they cannot be purged

This is not supported; please look at the My Oracle Support notes for a supported way to purge AWR when it has grown too big, such as re-creating AWR (which needs the database started in restricted mode) or purging with the normal procedure (which can take a long time, as it runs a delete). And do not copy-paste my statements: this is just an example.

When some tables grow too large, the purge job does not work correctly (because some things, such as the partitioning, are done at the end). Then SYSAUX grows. And worse: the next upgrade may take hours if it has to change something in the AWR tables.
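As a sketch of the (again, unsupported) idea: find the offending segments first, then truncate them. The table named here is an example only:

-- Find the largest WRH$ segments in SYSAUX.
select segment_name, round(sum(bytes)/1024/1024) size_mb
from   dba_segments
where  tablespace_name = 'SYSAUX'
and    segment_name like 'WRH$%'
group by segment_name
order by size_mb desc
;

-- Example only (unsupported) -- do not copy-paste:
truncate table sys.wrh$_active_session_history;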

Upgrade time

Here is an example of an upgrade from 11g to 19c that took hours, and here is how I open the upgrade logs with “less” to find the Top-10 longest statement executions:
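A sketch of the kind of pipeline I mean, assuming SQL*Plus timing lines of the form “Elapsed: HH:MM:SS.cc” in the catupgrd*.log files (the file names and pattern are assumptions, not the exact command):

# Top-10 longest elapsed times across the upgrade logs; the
# fixed-width time format sorts correctly as plain text.
grep -h "Elapsed:" catupgrd*.log | sort -k2 -r | head -10

# Then open the log at the matching time to see the statement,
# e.g. with less and its -p (start-at-pattern) option:
less -p "00:15:42.33" catupgrd0.log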

Little things worth knowing: keeping enq: TM enqueue at bay during direct path inserts

Direct path inserts are commonly found in processing where data are shifted from one source to another. There are many permutations of the theme; this post details the way SQL*Loader (sqlldr) behaves.

I have previously written about sqlldr and concurrent direct path inserts. The behaviour for releases <= 12.1 is described in the first post; the most welcome changes in 12.2 went into the second post. Since the fastest way of doing something is not to do it at all, I thought it might be worth demonstrating a useful technique to keep the dreaded TM enqueue at bay. Please note that this does not apply only to sqlldr; inserts with the append hint, for example, behave the same way in my experience.
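A sketch of the kind of technique I mean, using partition-extended syntax on a hypothetical range-partitioned table (all names are made up): naming the target partition lets Oracle take the exclusive TM enqueue at partition level, with only a row-X lock on the table, so concurrent loads into different partitions no longer queue behind each other.

-- Session 1 loads one partition while session 2 loads another.
insert /*+ append */ into t1 partition (p_2019)
select * from staging_2019;

commit;

-- The sqlldr equivalent goes into the control file:
--   load data ... into table t1 partition (p_2019) ...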

An Oracle Auto Index function to drop secondary indexes - what is a “secondary” index?

dbms_auto_index.drop_secondary_indexes

In 19.4 the Auto Index package has four procedures/functions.
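A minimal way to list them is to ask the dictionary (assuming you have access to the dba_procedures view):

-- List the callable units of DBMS_AUTO_INDEX.
select procedure_name
from   dba_procedures
where  object_name = 'DBMS_AUTO_INDEX'
and    procedure_name is not null
order by procedure_name
;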