
The time has come …

… the walrus said, to speak of many things!

2019 was for me, in many ways, my “Annus Horribilis”. My mother passed away in early January, I suffered through chronic pain for a fair chunk of the year, and in mid December my father also passed away.

Given all these things, I have made the relatively easy decision to retire, and effective January 6th, I left the workforce. I want to publicly thank my manager, Peter Underwood, who had held my position open for me since I went on medical leave on December 5th, 2018.

So this is the last post that I will make on PeteWhoDidNotTweet. In the next week or so, I plan on removing the social media accounts I’ve been using. The website will remain up and running till the next time I’m supposed to renew it. I’ve set it to not renew, so at some stage, GoDaddy will no doubt expire it. If you need anything from the site, I suggest you copy it sooner rather than later.

Oracle and the Future

We’ll start with a disclaimer here- this is my experience and my opinion. NO ONE ELSE’S, just as any link to any blog in this post is. Take it or leave it, I really don’t care, never have, never will- never been one to follow any drummer but my own anyway.

Video : Oracle REST Data Services (ORDS) : Including Hyperlinks in JSON Output

In today’s video we’ll demonstrate how to include hyperlinks in JSON output delivered by Oracle REST Data Services (ORDS).

This is based on this article, which includes some more complete examples.

Good API design is not as simple as you might think. Making sure you pass back relevant information, like URLs for navigating through the services, and maybe even links to the service documentation, can make things a lot clearer.
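
As a rough illustration (not the code from the video or the article), one simple way to surface a hyperlink is to build it in the handler query itself, so the URL turns up as a field in the JSON payload. The EMP table, the /ords/hr/employees/ path and the detail_url alias below are all assumptions:

-- Hypothetical handler query for a GET on an /hr/employees/ module.
-- The hyperlink is generated alongside the data, so it appears in the
-- JSON output returned by ORDS.
select e.empno,
       e.ename,
       '/ords/hr/employees/' || e.empno as detail_url
from   emp e
order  by e.empno;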

Advanced usage of gdb for profiling

This post is about using gdb, which is a debugger (very simplistically put, an aid for looking at C programs), as a profiler. I use gdb quite a lot for profiling because, for me, it’s the easiest way to do it.

Lots of people I know use other tools like perf, systemtap and dtrace for the same purpose, and that’s fine. Each tool has its own advantages and disadvantages. One disadvantage of gdb is that it uses ptrace to attach to a process, which makes it dead slow from a machine perspective, because everything the traced process then does goes via another process, the debugger. That is simply how a debugger works.

Also, lots of people use gdb the way I do, sticking to basic functionality: breaking at functions, which makes it possible to find out the sequence in which functions are called, generating backtraces (stack traces) to understand the stack, and maybe looking at function arguments.
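
As a small, hedged illustration of that basic usage (not taken from the post itself), a gdb command file along these lines attaches to a running process, breaks on a function of interest and prints a short backtrace every time it is hit. The PID and the function name are placeholders only:

# save as profile.gdb and run: gdb -x profile.gdb -p <pid>
# kcrf_commit_force_int is just an example; break on any function
# you are interested in from the target executable
set pagination off
break kcrf_commit_force_int
commands
  silent
  backtrace 5
  continue
end
continue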

push_having_to_gby() – 2

The problem with finding something new and fiddling with it and checking to see how you can best use it to advantage is that you sometimes manage to “break” it very quickly. In yesterday’s blog note I introduced the /*+ push_having_to_gby(@qbname) */ hint and explained why it was a useful little enhancement. I also showed a funny little glitch with a missing predicate in the execution plan.

Today I thought I’d do something a little more complex with the example I produced yesterday, and I’ve ended up with a little note that’s not actually about the hint; it’s about something that appeared in my initial testing of the hint and then broke when I pushed it a little further. Here’s a script to create data for the new test:

Where does the log writer spend its time?

The Oracle database log writer is the process that fundamentally influences database change performance. Under normal circumstances the log writer must persist the changes made to the blocks before the actual change is committed. Therefore, it’s vitally important to understand exactly what the log writer is doing. This is widely known in the Oracle database community.

The traditional method for looking at log writer performance is to look at the wait event ‘log file parallel write’ and the CPU time, and compare them to ‘log file sync’, alias “commit wait time”. If ‘log file parallel write’ and ‘log file sync’ roughly match, a commit is waiting on log writer IO latency; if they don’t, the cause is unclear and things get vague.
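
As a minimal sketch of that traditional comparison (an illustration, not the method developed in the post), the instance-wide averages of the two events can be pulled from v$system_event:

-- average wait time in microseconds for the two events, instance-wide
select event,
       total_waits,
       round(time_waited_micro / nullif(total_waits, 0)) as avg_wait_us
from   v$system_event
where  event in ('log file sync', 'log file parallel write');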

push_having_to_gby()

I came across an interesting new hint recently when checking the Outline Data for an execution plan: /*+ push_having_to_gby() */. It’s an example of a “small” change designed to reduce CPU usage by reducing the volume of data that passes through the layers of calls that an execution plan represents. The hint appeared in 18.3, but I’ve run the following on 19.3 as a demonstration of what it does and why it’s a good thing:
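
As a rough sketch, not the demonstration from the post: the hint applies to an aggregate query with a HAVING clause, where the HAVING predicate could in principle be evaluated while the rows are being aggregated rather than by a separate FILTER operation afterwards. The table t1 is an assumption, and sel$1 is simply the default name Oracle gives the first query block:

select /*+ push_having_to_gby(@sel$1) */
       id, max(created)
from   t1
group  by id
having count(*) > 4;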

Oracle REST Data Services (ORDS) 19.4 : A quick life update…


Almost 2 weeks ago I wrote about the release of Oracle REST Data Services (ORDS), SQLcl, SQL Developer and SQL Developer Data Modeler 19.4.

I spent the holidays playing around with ORDS quite a bit, so I came back to work today and pushed it out across all Dev and Test installations.

Oracle wait event ‘log file parallel write’ change

This post is about a change in how the time is measured for the wait event ‘log file parallel write’. This is important for performance tuning of any change activity in an Oracle database, because with the default commit settings a foreground session that commits waits in the wait event ‘log file sync’, which is a wait on log writer activity, and ‘log file parallel write’ has always been the indicator of the time the log writer spends on IO.

Log file sync
First things first: a foreground session normally waits on the wait event ‘log file sync’ when it commits, waiting for its change vectors to be written to the online redo log file(s) by the log writer. It is wrong to assume a ‘log file sync’ will always be present: if, somehow, the log writer manages to increase the ON DISK SCN to or beyond the foreground session’s commit SCN, there will be no ‘log file sync’ wait event.
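
As a small illustration (not from the post), you can check whether a given session has actually recorded any ‘log file sync’ waits; the SID used below is a placeholder:

-- per-session wait figures for 'log file sync' (SID 123 is hypothetical)
select event, total_waits, time_waited_micro
from   v$session_event
where  sid = 123
and    event = 'log file sync';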

Scalar Subq Bug

This is an observation that came up on the Oracle Developer Forum a couple of days ago, starting life as the fairly common problem:

I have a “select” that runs quickly, but when I use it in a “create as select” it runs very slowly.

In many cases this simply means that the query was a distributed query and the plan changed because the driving site changed from the remote to the local server. There are a couple of other reasons, but distributed DML is the one most commonly seen.
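
As a hedged sketch of that common case (the objects and database link are made up): the same query text that drives from the remote site as a plain select can end up driving from the local server once it is wrapped in DML such as a create table as select, and the plan changes accordingly:

-- plain select: the optimizer may choose the remote site as driving site
-- create table as select: the driving site has to be the local server
create table t_copy as
select r.id, r.padding
from   big_table@remote_db r
join   small_lookup l
on     l.id = r.id;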

In this example, though, the query was not a distributed query, it was a fully local query. There were three features to the query that were possibly suspect, though: