A new use for DML error logging

Many moons ago I did a short video on the DML error logging feature in Oracle. The feature has been around for many years now, and is a great tool for capturing errors during a large load without losing all of the rows that successfully loaded. You can watch that video below if you’re new to DML error logging.

But here is a possible new use case for DML error logging, even if you are not doing large scale loads. Let me describe the problem first, and then show how DML error logging might be a solution.

I’ll create a table with a constraint on one of its columns.
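For example, something along these lines (a minimal sketch; the table, constraint and sample data here are invented purely for illustration):

-- a table with a simple check constraint
create table t ( x number constraint t_x_positive check ( x > 0 ) );

-- companion error logging table (ERR$_T by default)
exec dbms_errlog.create_error_log( dml_table_name => 'T' )

-- rows that violate the constraint are diverted to ERR$_T
-- instead of failing the entire statement
insert into t
select level - 5 from dual connect by level <= 10
log errors into err$_t ('demo load') reject limit unlimited;

-- see what was rejected and why
select ora_err_number$, ora_err_mesg$, x
from   err$_t;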

DDL invalidates your SQL, right?

I stumbled upon this post by optimizer guru Nigel Bayliss last week, so please have a read of that first before proceeding. But I wanted to show a simple demo of how management of cursors continues to improve with each version of the database.
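By way of illustration, the demo is of this general shape (object names invented here, and the exact cursor behaviour will depend on your version and the DDL involved):

create table t as select * from all_objects;

select count(*) from t where object_id = 123;

-- note the cursor and how many times it has been invalidated
select sql_id, child_number, executions, invalidations
from   v$sql
where  sql_text = 'select count(*) from t where object_id = 123';

-- DDL that, in older releases, would have invalidated that cursor outright
create index t_ix on t ( object_id );

-- re-check v$sql to see whether the cursor survived
select sql_id, child_number, executions, invalidations
from   v$sql
where  sql_text = 'select count(*) from t where object_id = 123';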

No more stale statistics in 19c

There is an odd contradiction that we all encounter with most databases, especially those used predominantly during the business day. It stems from the way we obtain and use optimizer statistics on those databases. The contradiction runs like this:

  • To minimize service disruption, we gather statistics at a quiet time, for example in the middle of the night.
  • We then use those statistics during the business day whilst user activity is at its highest.
  • Highest user activity will typically mean the highest frequency of data changes.
  • Hence the statistics are at their peak accuracy when no-one is using them to optimize queries, and they are at their least accurate when everyone is using them to optimize queries!

We can demonstrate this easily with the following script run in 18c.
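A cut-down sketch of the idea (not the exact script from the post; the table name is invented) looks like this:

-- "overnight": the table is quiet, so we gather statistics
create table sales as select * from all_objects where 1=0;
exec dbms_stats.gather_table_stats( user, 'SALES' )

-- "business day": heavy data changes arrive after the stats were taken
insert into sales select * from all_objects;
commit;

-- the dictionary still reflects the overnight picture (num_rows = 0)
select num_rows, last_analyzed
from   user_tables
where  table_name = 'SALES';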

Plugzilla!

Cloning a pluggable database takes time, and for environments where you’d like to use clones as part of unit testing, or other elements of Agile development, it would be nice to be able to bring a clone into operation in the smallest time possible. One mechanism for that is sparse storage clones aka snapshot copy, but depending on your database version and your storage infrastructure, you might hit some limitations.

Enter …. Plugzilla! This PL/SQL package allows you to clone pluggable databases extremely quickly by having pluggable databases pre-cloned in advance.

Example

Let’s say you have a development pluggable database called PDB1. You want to let developers take clones of this as quickly and as often as they like, at various stages in its life cycle. Here is how we might do it with Plugzilla.
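Under the covers, the idea is simply to pay the cloning cost ahead of time. Conceptually it is along these lines (PDB names are invented, and this assumes local undo and Oracle Managed Files so no FILE_NAME_CONVERT clause is needed); the package automates this sort of housekeeping for you:

-- ahead of time, while no-one is waiting, keep spare clones of PDB1 ready
create pluggable database pdb1_spare1 from pdb1;
create pluggable database pdb1_spare2 from pdb1;

alter pluggable database pdb1_spare1 open;
alter pluggable database pdb1_spare2 open;

-- when a developer asks for a clone, they are handed one of the pre-built
-- spares, which is close to instantaneous, and a replacement spare can be
-- cloned in the background ready for the next request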

Autonomous Transaction Processing – your slice of the pie

I grabbed the following screenshot from a slide deck I’ve been giving about Autonomous Transaction Processing (ATP). It shows the services that are made available to you once you create your database. At first glance, it looks like a simple tiered model, where the lower you sit in the tiers, the smaller the slice of the database resource “pie” you get.

[Image: screenshot of the ATP database services referred to above]

Take a COPY out of SQLcl’s book

As the world continues to get smaller, it makes sense for all databases to be defined to handle more than just US7ASCII, which is why the default character set for all recent versions of the Oracle database is AL32UTF8. That way, we can handle the various challenges that come with languages other than English.


SQL> create table t ( x varchar2(30 char));

Table created.

SQL> insert into t values ('안녕미미 99~12900원 시즌오프 원피');

1 row created.

But let’s see what happens when we copy that data using some legacy facilities in SQL*Plus.
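The facility in question is the venerable SQL*Plus COPY command, which looks something like this (the connect string and destination table name here are just placeholders):

SQL> copy from scott/tiger@db18 create t_copy using select * from t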

Statistics on Load

One of the talks I gave recently on the Groundbreaker Yatra tour was on the changes that arrive with the optimizer when you move to 12c and above. We talked about changes to histograms and all of the adaptive features, but also about one of the simplest yet coolest optimizer improvements when it comes to loading data. When you perform a direct-path insert into an empty table, or a create-table-as-select (CTAS) operation, there is no need to perform an additional DBMS_STATS call to gather statistics at the conclusion of the load. The load process itself handles that for you. Here’s a simple example of that in action.
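To give the flavour of it, a sketch like the following (table name invented) shows statistics already in place straight after a CTAS, with no explicit DBMS_STATS call:

create table t_loaded as
select * from all_objects;

-- statistics were gathered as part of the load itself
select num_rows, last_analyzed
from   user_tables
where  table_name = 'T_LOADED';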

It’s back!

Yes indeed! Now that the dates and times are available for OpenWorld 2019, it is naturally time for the best data searching, filtering and analysis tool on the planet to step up to the plate, enter the fray and …. hmmm… I’ve run out of metaphors.

CockroachDB… true distributed system-of-record for the cloud

After exploring the analytics end of distributed NewSQL databases, I decided to explore a true system-of-record for the cloud. Without a doubt, the future is moving to the cloud. CockroachDB is at the forefront of this journey, aiming to be the backbone database for new micro-services in the cloud. Escape the legacy database rat’s nest that was never designed to be truly distributed or to run in the cloud.

With CockroachDB, you can domicile data for regulatory reasons and ensure the best possible performance by accessing data locally. CockroachDB allows you to create a SINGLE distributed database across multiple cloud providers and on-prem. Our self-healing not only helps keep your data consistent, but it also automates cloud-to-cloud migration without downtime.

HIGH_VALUE (and other LONG columns) revisited

Just a quick post to ensure people don’t get caught out by a recent “trick” I saw on an AskTOM question for extracting the HIGH_VALUE out of its LONG storage in the dictionary into a more appropriate datatype. A reviewer (I’m sure in good faith) posted the following mechanism to extract the HIGH_VALUE.
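For background, HIGH_VALUE lives in a LONG column (for example in USER_TAB_PARTITIONS), and a long-standing way of getting at it from SQL is a small PL/SQL wrapper, because a LONG can be read into a PL/SQL variable. The sketch below is illustrative only, and is not necessarily the mechanism posted on that question:

create or replace function high_value_vc2( p_table_name     in varchar2,
                                            p_partition_name in varchar2 )
return varchar2 is
  l_high long;
begin
  select high_value
  into   l_high
  from   user_tab_partitions
  where  table_name     = p_table_name
  and    partition_name = p_partition_name;

  -- inside PL/SQL a LONG variable can be treated much like a VARCHAR2
  return substr( l_high, 1, 4000 );
end;
/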