redo

Nologging

Bobby Durrett recently published a note about estimating the volume of non-logged blocks written by an instance with the aim of getting some idea of the extra redo that would be generated if a database were switched to “force logging”.

Since my most recent blog notes have included various extracts and summaries from the symbolic dumps of redo logs it occurred to me that another strategy for generating the same information would be to dump the redo generated by Oracle when it wanted to log some information about non-logged blocks. This may sound like a contradiction, of course, but it’s the difference between data and meta-data: if Oracle wants to write data blocks to disc without logging their contents it needs to write a note into the redo log saying “there is no log of the contents of these blocks”.
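For example, a symbolic dump can be restricted to just those records. A hedged sketch (the file name here is hypothetical, and the filter assumes the layer 19 “direct loader” change vectors, whose opcode 2 records mark ranges of blocks invalidated by non-logged writes):

alter system dump logfile '/u01/oradata/TEST/redo01.log' layer 19 opcode 2;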

Update restarts

Somewhere I think I’ve published a note about an anomaly that’s been bugging me since at least Oracle 10g – but if it is somewhere on the Internet it’s hiding itself very well and I can’t find it, though I have managed to find a few scripts on my laptop that make a casual reference to the side effects of the problem. [Ed: a tweet from Timur Akhmadeev has identified a conversation in the comments on an older post that could explain why I thought I’d written something about the issue.]

Anyway, I’ve decided to create some new code and write the article (all over again, maybe). The problem is a strange overhead that can appear when you do a simple but large update driving off a tablescan.

Quiz Night

Upgrades cause surprises – here’s a pair of results from a model that I constructed more than 15 years ago, and ran today on 12.2, then modified and ran again, then ran on 11.2.0.4, then on 12.1.0.2. It’s very simple, I just create a table, gather stats, then update every row.
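A minimal sketch of that kind of model (the table name and sizes here are my own illustrative choices, not necessarily the original’s):

create table t1 as
select  rownum id, rpad('x',100) padding
from    dual
connect by level <= 1e5;

begin
        dbms_stats.gather_table_stats(user, 'T1');
end;
/

-- the step where the surprise appears: update every row
update t1 set id = id + 1;
commit;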

Redo Dumps

A thread started on the Oracle-L list-server a few days ago asking for help analysing a problem where a simple “insert values()” (that handled millions of rows per day) was running very slowly. There are many reasons why this might happen, ranging from the trivial (someone has locked the table in exclusive mode), through the slightly subtle (we’re trying to insert a row that collides on a uniqueness constraint with an uncommitted insert from another session) to the subtle (Oracle has to read through the undo to check current versions of blocks against read-consistent versions) ending up at the esoteric (the ASSM space management blocks are completely messed up again).
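For the trivial end of that range, a quick first check from another session is whether the insert is simply stuck behind a lock (a sketch, assuming suitable privileges on v$session):

-- sessions currently blocked by another session
select  sid, event, blocking_session, seconds_in_wait
from    v$session
where   blocking_session is not null;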

Dump logfile

Here’s a little procedure I’ve been using since Oracle 8i to dump the contents of the current log file – I’ve mentioned it several times in the past but never published it, so I’ll be checking old posts for references to it and linking them here.

The code hasn’t changed in a long time, although I did add a query to get the full tracefile name from v$process when that became available. There’s also an (optional) call to dbms_support.my_sid to pick up the SID of the current session, which slid into the code when that package became available.
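The procedure itself isn’t reproduced in this extract, but a minimal sketch of the same idea (not the original code) would be: find one member of the current log from v$log and v$logfile, dump it, then report the tracefile name from v$process:

create or replace procedure dump_log
as
        m_member        v$logfile.member%type;
        m_trace         v$process.tracefile%type;
begin
        -- pick one member of the current online redo log
        select  lf.member
        into    m_member
        from    v$log lg, v$logfile lf
        where   lg.status = 'CURRENT'
        and     lf.group# = lg.group#
        and     rownum    = 1;

        execute immediate
                'alter system dump logfile ''' || m_member || '''';

        -- report where the dump went (tracefile column available from 11g)
        select  p.tracefile
        into    m_trace
        from    v$process p, v$session s
        where   s.sid  = sys_context('userenv','sid')
        and     p.addr = s.paddr;

        dbms_output.put_line('Dump written to: ' || m_trace);
end;
/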

Index Splits – 3

This is stored only for reference, and in case anyone wants to wade through the details. It’s the redo log dump from the 90/10 index leaf block split test from the previous blog posts running on 11.2.0.4 on Linux. The first part is the full block dump, the second part is an extract of the Record and Change vector headings with the embedded opcode (opc:) for the undo records in the redo vectors, and a tiny note of what each change vector is doing.

18c: some optimization about redo size

Some years ago, at the time of the 12.1 release, I published in the SOUG newsletter some tests to show the amount of redo generated by different operations on a 10,000 row table. I had run them on 12.2 without seeing any differences, and now I’ve run them on 18.1.
I get the statistics from mystat using a script that displays them as columns, with each value being the difference from the previous run. I’ve run the same tests as in the article, and most of the statistics were in the same ballpark.
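That kind of script boils down to snapshotting the session statistics before and after each operation; a minimal sketch of the underlying query (the delta arithmetic is left to the script):

select  sn.name, ms.value
from    v$mystat ms, v$statname sn
where   sn.statistic# = ms.statistic#
and     sn.name in ('redo size', 'redo entries');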

A look into oracle redo, part 11: log writer worker processes

Starting with Oracle 12, in a default-configured database, there are more log writer processes than just the well-known ‘LGWR’ process itself: the ‘LGnn’ processes:

$ ps -ef | grep test | grep lg
oracle   18048     1  0 12:50 ?        00:00:13 ora_lgwr_test
oracle   18052     1  0 12:50 ?        00:00:06 ora_lg00_test
oracle   18056     1  0 12:50 ?        00:00:00 ora_lg01_test

These are the log writer worker processes, the minimum number of which is equal to the number of public redo strands. Worker processes are assigned to a group, and each group is assigned to a public redo strand. The number of worker processes in a group depends on the undocumented parameter “_max_log_write_parallelism”, which is one by default.
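To check that parameter on your own system: hidden parameters are only visible through the x$ structures, so the standard query has to be run as SYS:

select  pi.ksppinm parameter_name, cv.ksppstvl parameter_value
from    x$ksppi pi, x$ksppcv cv
where   cv.indx = pi.indx
and     pi.ksppinm = '_max_log_write_parallelism';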

A look into oracle redo: index and overview

I’ve gotten some requests to provide an overview of the redo series of blogposts I am currently running. Here it is:

A look into oracle redo, part 10: commit_wait and commit_logging

The redo series would not be complete without writing about changing the behaviour of commit. There are two ways to change commit behaviour:

1. Changing whether the session waits for the logwriter to confirm that the generated redo has been persisted. The default is ‘wait’; this can be set to ‘nowait’.
2. Changing the way the logwriter handles the generated redo. The default is ‘immediate’; this can be set to ‘batch’.

There are actually three ways these changes can be made:
1. As an argument of the commit statement: ‘commit’ can be written as ‘commit write wait immediate’ (statement level).
2. As a system level setting: when no explicit commit mode is given with the commit command, the behaviour is taken from the parameters commit_wait (default: wait) and commit_logging (default: immediate).
3. As a session level setting: when no explicit commit mode is given, setting either commit_wait or commit_logging at the session level overrides the system level settings.
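Putting the three levels together (a sketch; ‘nowait batch’ is picked purely for illustration):

-- statement level: override everything for this single commit
commit write nowait batch;

-- system level: change the default for all sessions
alter system set commit_wait = 'NOWAIT';
alter system set commit_logging = 'BATCH';

-- session level: override the system level setting for this session only
alter session set commit_wait = 'WAIT';
alter session set commit_logging = 'IMMEDIATE';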