Statistics on Load

One of the talks I gave recently on the Groundbreaker Yatra tour was on the changes that arrive in the optimizer when you move to 12c and above. We talked about changes to histograms, all of the adaptive features and so forth, but also one of the simplest yet coolest optimizer improvements when it comes to loading data: when you perform a direct-path insert into an empty table, or a create-table-as-select (CTAS) operation, there is no need for an additional DBMS_STATS call to gather statistics at the conclusion of the load. The load process itself handles that for you. Here's a simple example of that in action.
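
A minimal sketch of that behaviour (the NUM_ROWS and LAST_ANALYZED values below are illustrative):

SQL> create table t as select * from dba_objects;

Table created.

SQL> -- no call to DBMS_STATS, yet the stats are already there
SQL> select num_rows, last_analyzed
  2  from   user_tables
  3  where  table_name = 'T';

  NUM_ROWS LAST_ANAL
---------- ---------
     78742 08-JUN-19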

It’s back!

Yes indeed! Now that the dates and times are available for OpenWorld 2019, it is naturally time for the best data searching, filtering and analysis tool on the planet to step up to the plate, enter the fray and …. hmmm… I've run out of metaphors.

CockroachDB… true distributed system-of-record for the cloud

After exploring the analytics end of distributed NewSQL databases, I decided to explore true system-of-record in the cloud. Without a doubt, the future is moving to the cloud. CockroachDB is at the forefront of this journey to be the backbone database for new microservices in the cloud. Escape the legacy database rat's nest that wasn't designed to be truly distributed or in the cloud.

With CockroachDB, you can domicile data for regulatory reasons and ensure the best possible performance by accessing data locally. CockroachDB allows you to create a SINGLE distributed database across multiple cloud providers and on-prem. Our self-healing not only helps keep your data consistent, but it automates cloud-to-cloud migration without downtime.

HIGH_VALUE (and other LONG columns) revisited

Just a quick post to ensure people don't get caught out by a recent "trick" I saw on an AskTOM question for extracting the HIGH_VALUE out of its LONG storage in the dictionary into a more appropriate datatype. A reviewer (I'm sure in good faith) posted the following mechanism to extract the HIGH_VALUE.
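
The reviewer's mechanism is not reproduced in this excerpt. For context, a conventional way to get a LONG dictionary column into a VARCHAR2 is a small PL/SQL wrapper; the function and parameter names here are hypothetical:

SQL> create or replace function part_high_value(
  2    p_table_name     in varchar2,
  3    p_partition_name in varchar2 ) return varchar2 is
  4    l_high_value long;  -- a LONG column fetches cleanly into a PL/SQL LONG variable
  5  begin
  6    select high_value
  7    into   l_high_value
  8    from   user_tab_partitions
  9    where  table_name     = p_table_name
 10    and    partition_name = p_partition_name;
 11    return substr(l_high_value, 1, 4000);
 12  end;
 13  /

Function created.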

Same dog, different leash – functions in SQL

Let's start with this somewhat odd-looking result. I have an inline function that returns a random number between 0 and 20, and I call that for each row in ALL_OBJECTS and then extract just those rows for which the generated random number is 10. Seems simple enough… but why do I get results for which the value of the second column is most certainly not 10?
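
A sketch of the kind of query being described (function name and bounds are assumed from the description):

SQL> with
  2    function my_rnd return number is
  3    begin
  4      -- random integer between 0 and 20 inclusive
  5      return trunc(dbms_random.value(0, 21));
  6    end;
  7  select owner, my_rnd val
  8  from   all_objects
  9  where  my_rnd = 10
 10  /

The function is not deterministic, so nothing forces the call that was tested in the WHERE clause to be the same call that produced the displayed column value, which is the puzzle the post unpacks.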

MERGE and ORA-8006

I'm sure there will be a slew of post-Kscope wrap-up posts coming out into the blogosphere, so in lieu of that, and the fact that I'm just stuck in an airport waiting for a flight, I'll offer something slightly more technical. I did a post a while back about a curious error, "unable to get a stable set of rows", when using MERGE. Here is another variant which can occur when you allow rows to physically move during a MERGE.

"How is that possible?" I hear you ask. Easy. All we need is a partitioned table with ENABLE ROW MOVEMENT.
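
A minimal sketch of that setup (table name and partitioning scheme are assumed); the key ingredient is a MERGE that updates the partition key, which is what makes a matched row physically move:

SQL> create table t ( pk number, status varchar2(1) )
  2  partition by list ( status )
  3  ( partition p_active values ('A'),
  4    partition p_closed values ('C') )
  5  enable row movement;

Table created.

SQL> -- updating the partition key relocates the row to another
SQL> -- partition mid-statement, giving it a new rowid
SQL> merge into t
  2  using ( select 1 pk, 'C' status from dual ) s
  3  on ( t.pk = s.pk )
  4  when matched then update set t.status = s.status;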

SMON_SCN_TIME and ORA-8161? Digging deeper

In recent versions of the Oracle database, we've had the ability to convert between a System Change Number (SCN) and the approximate time to which that SCN pertains. These functions are unsurprisingly called SCN_TO_TIMESTAMP and TIMESTAMP_TO_SCN. The only potential misnomer here is that even though the functions are called "timestamp" and return a datatype of timestamp, on most platforms you are going to notice that the granularity doesn't run down into fractions of seconds:

SQL> select scn_to_timestamp(14816563713652) from dual;

SCN_TO_TIMESTAMP(14816563713652)
---------------------------------------------------------
08-JUN-19 02.30.59.000000000 AM

This all looks great until you start poking around too far into the past, and you end up in territory like this:
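
The example that followed was not preserved in this excerpt; as a hedged illustration, asking for the timestamp of a sufficiently old SCN typically fails along these lines (the SCN value and exact error are illustrative and version-dependent):

SQL> select scn_to_timestamp(1) from dual;
select scn_to_timestamp(1) from dual
       *
ERROR at line 1:
ORA-08181: specified number is not a valid system change number
ORA-06512: at "SYS.SCN_TO_TIMESTAMP", line 1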

Kscope Sunday will be awesome

Yeah yeah, I know. What kind of dufus cut-pastes a giant image into their blog post? That would be ….. me.

But the reality is, there is just soooooo much going on this year on the Sunday before the "official" start of the Kscope conference that it was too hard to condense it into a few sentences. So I just dumped the image from the website, but you can read about it here.

It's my first Kscope, and it will be a blast to be running a day of high-quality but dirt-cheap learning with Maria, Jeff, Steven and Blaine.

With and without WITH_PLSQL within a WITH SQL statement

OK, let's be honest right up front. The motivation for this post is solely to be able to roll out a tongue-twisting blog post title. But hopefully there's some value in here for you as well if you're hitting the error:

ORA-32034: unsupported use of WITH clause

First, some background. A cool little enhancement to the WITH clause came along in 12c that allowed PL/SQL functions to be defined within the scope of the executing SQL statement. To see the benefit of this, consider the following example that I have a personal affinity with (given my surname).

Let’s say I’ve allowed mixed-case data in a table that holds names.
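
A sketch of that scenario (table name and data are assumed): the inline function is fine in a top-level SELECT, but nest the same construct inside DML and ORA-32034 appears.

SQL> create table names ( surname varchar2(30) );

Table created.

SQL> insert into names values ('mcdonald');
SQL> commit;

SQL> -- legal: the WITH-clause function sits in a top-level SELECT
SQL> with
  2    function tidy(p varchar2) return varchar2 is
  3    begin
  4      return initcap(p);
  5    end;
  6  select tidy(surname) from names
  7  /

TIDY(SURNAME)
------------------------------
Mcdonald

SQL> -- the same construct nested inside DML is rejected
SQL> insert into names
  2  with
  3    function tidy(p varchar2) return varchar2 is
  4    begin
  5      return initcap(p);
  6    end;
  7  select tidy(surname) from names
  8  /
with
*
ERROR at line 2:
ORA-32034: unsupported use of WITH clause

Adding the WITH_PLSQL hint to the top-level statement (insert /*+ WITH_PLSQL */ into names ...) is what lifts that restriction, which is presumably where the post is headed.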