Oracle stored procedure compilation errors displayed for humans

Here is a script I use a lot, especially when importing a schema with Data Pump and checking for invalid objects. I usually don’t care about compilation errors at compile time; I just run UTL_RECOMP.RECOMP_PARALLEL at the end and check the errors on the objects that remain invalid. Here is an example.

I have imported a schema with Data Pump and got some compilation errors:

I want to resolve them, or at least to understand them.

If I query DBA_ERRORS, I get the following:

This is a small example, but the output can be huge, and it’s not very helpful.
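For reference, a minimal sketch of the sequence described above: recompile everything in parallel, then join DBA_ERRORS to DBA_SOURCE so each message appears next to the offending source line. The DEMO schema name is a placeholder, and this is a sketch rather than the exact script from the post:

exec utl_recomp.recomp_parallel(4)

-- show each error next to the source line it refers to (DEMO is a placeholder)
select e.name, e.type, e.line, s.text as source_line, e.text as error
from dba_errors e
join dba_source s
on s.owner=e.owner and s.name=e.name and s.type=e.type and s.line=e.line
where e.owner='DEMO' and e.attribute='ERROR'
order by e.name, e.type, e.sequence;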

Hash Partitions

Here’s an important thought if you’ve got any large tables which are purely hash partitioned. As a general guideline you should not need partition level stats on those tables. The principle of hash partitioned tables is that the rows are distributed uniformly and randomly based on the hash key so, with the assumption that the number of different hash keys is “large” compared to the number of partitions, any one partition should look the same as any other partition.
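One way to act on that guideline is to gather only global (table-level) statistics and skip the partition level entirely. A minimal sketch, assuming a purely hash partitioned table with the placeholder name T_HASH:

begin
        dbms_stats.gather_table_stats(
                ownname     => user,
                tabname     => 'T_HASH',
                granularity => 'GLOBAL'   -- table-level stats only, no partition-level stats
        );
end;
/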

sys_op_lbid

I’ve made use of the function a few times in the past, for example in this posting on the dangers of using reverse key indexes, but every time I’ve mentioned it I’ve only been interested in the “leaf blocks per key” option. There are actually four different variations of the function, relevant to different types of index and controlled by setting a flag parameter to one of 4 different values.

The call to sys_op_lbid() takes 3 parameters: an index (or index [sub]partition) object id, a flag value, and a table “rowid”, where the flag value can be one of L, R, O, or G. The variations of the call are as follows:
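As a concrete example of the “L” (leaf block id) variation, here is a hedged sketch of the classic “rows per leaf block” query. The table t1, its indexed column id, and the object id 94218 are placeholders; 94218 stands for the index’s object_id from user_objects:

select
        sys_op_lbid(94218, 'L', t.rowid)   as leaf_block,
        count(*)                           as rows_in_block
from t1 t
where id is not null   -- predicate on the indexed column, so every row counted has an index entry
group by sys_op_lbid(94218, 'L', t.rowid);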

Append hint

One of the questions that came up on the CBO Panel Session at the UKOUG Tech2018 conference was about the /*+ append */ hint – specifically how to make sure it was ignored when it came from a 3rd party tool that was used to load data into the database. The presence of the hint resulted in increasing amounts of space in the table being “lost”: as older data was deleted by the application the freed space was never reused, because the inserts always went above the table’s highwater mark; and it wasn’t possible to change the application code.

The first suggestion aired was to create an SQL Patch to associate the hint /*+ ignore_optim_embedded_hints */ with the SQL in the hope that this would make Oracle ignore the append hint. This won’t work, of course, because the append hint is not an optimizer hint, it’s a “behaviour” hint.
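For reference, a minimal sketch of that first suggestion (the one that doesn’t work for append). The sql_id is a placeholder, and the sql_id overload of create_sql_patch assumes 12.2 or later:

declare
        patch_name varchar2(128);
begin
        patch_name := dbms_sqldiag.create_sql_patch(
                sql_id    => 'abcd1234wxyz9',   -- placeholder: the sql_id of the 3rd party insert
                hint_text => 'IGNORE_OPTIM_EMBEDDED_HINTS',
                name      => 'ignore_embedded_hints_patch'
        );
end;
/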

Oracle 19c Data Guard sandbox created by DBCA -createDuplicateDB

Here are the commands I use to create a sandbox on Linux with a CDB1 database in a Data Guard configuration. I use the latest (19c) DBCA features to create the Primary and duplicate to the Standby.
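As a hedged sketch of what the duplicate step looks like (the hostname and password are placeholders, and the flags follow the 19c DBCA documentation):

dbca -silent -createDuplicateDB \
 -gdbName CDB1 -sid CDB1B \
 -primaryDBConnectionString myhost01:1521/CDB1 \
 -sysPassword "***" \
 -createAsStandby -dbUniqueName CDB1B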

I’m doing all of this in a VM which is a Compute Instance provisioned in the Oracle Cloud. In this example, I have an Oracle Linux 7.6 VM.DenseIO2.24 shape with 320GB RAM and 24 cores, but remember that you will not be able to scale up/down, so choose according to your credits...

I have 40GB in the / filesystem

OS and filesystem installation

I’ve installed the prerequisites as root (preinstall package, sudo and HugePages — here 200GB out of the 314GB I have):
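A hedged sketch of those steps (the package name is the Oracle Linux 7 one, and the HugePages count assumes 2MB pages, so 200GB means 102400 pages):

yum install -y oracle-database-preinstall-19c sudo
echo "vm.nr_hugepages=102400" >> /etc/sysctl.conf   # 102400 x 2MB = 200GB
sysctl -p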

19c Observe-Only Data Guard FSFO: no split-brain risk in manual failover

Fast-Start Failover (FSFO) is an amazing feature of Oracle Data Guard Broker which brings High Availability (HA) features in addition to the Disaster Recovery (DR) ones.

Data Guard as an HA solution

By default, a physical standby database protects against disasters (like when your Data Center is on fire or underwater, or there’s a power cut,…). But it requires a manual action to do the failover. Even if the failover is quick (seconds to minutes) and there’s no loss of data (if in SYNC), it cannot be considered HA because of the manual decision, which can take hours. The point of the manual decision is to understand the cause, as it may be better to just wait in case of a transient failure, especially if the standby site is less powerful and application performance would be degraded.
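This is where Fast-Start Failover comes in, and where the 19c observe-only mode helps: the observer only records what it would have done, so nothing is failed over automatically while you build confidence in the configuration. A minimal DGMGRL sketch, assuming an existing broker configuration (the connect identifier CDB1A is a placeholder):

DGMGRL> connect sys@CDB1A
DGMGRL> enable fast_start failover observe only;
DGMGRL> start observer;
DGMGRL> show fast_start failover;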

12c Snapshots

I published a note a few years ago about using the 12c “with function” mechanism for writing simple SQL statements to take deltas of dynamic performance views. The example I supplied was for v$event_histogram, but I’ve just been prompted by a question on ODC to supply a couple more – v$session_event and v$sesstat (joined to v$statname) – so that you can use one session to get an idea of the work done and time spent by another session. The first script reports wait time.
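As context for those scripts, here is a minimal sketch of the underlying v$sesstat/v$statname join, reporting the work done so far by another session (the SID 42 is a placeholder):

select sn.name, ss.value
from v$sesstat ss
join v$statname sn on sn.statistic# = ss.statistic#
where ss.sid = 42
and ss.value != 0
order by ss.value desc;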

Getting locale warnings when logging on to Linux

This blog post is about the reason for, and the solution to, getting the following message (or messages like it) when logging in to a Linux box using ssh:

-bash: warning: setlocale: LC_CTYPE: cannot change locale (UTF-8): No such file or directory

However, this is just a warning. Please mind that such an issue might come up in another way that is more disruptive; at least in the past I had issues running perl because of the same problem:

[root@dmcel01 ~]# /usr/local/bin/ipconf -verify
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = "en_US.UTF-8",
LC_ALL = "UTF-8",
LC_CTYPE = "UTF-8",
LANG = "en_US.UTF-8"
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
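Two common ways to resolve the warning, as a hedged sketch since the right choice depends on your environment, are to make the locale available on the server or to stop ssh forwarding the client’s locale variables:

# 1. make the requested locale available on the server:
localedef -i en_US -f UTF-8 en_US.UTF-8
# 2. or stop ssh forwarding the client's locale variables: comment out
#    "SendEnv LANG LC_*" in /etc/ssh/ssh_config on the client, or
#    "AcceptEnv LANG LC_*" in /etc/ssh/sshd_config on the server, then:
systemctl restart sshd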

My next Conferences in 2019

In my 2019 talks, a lot of performance stuff for DBAs and Developers:

This year started with the amazing OBUG Belgium Tech Days 2019, where I talked about:

The co-location of data and code, in present and future (like the MLE engine running JavaScript or Python in the database)

The most relevant statistics gathering in the latest releases (12cR2, 18c, 19c)

A different view on Join Methods by tracing the internal functions.

The Riga Dev Days in Latvia:

Where I talk about microservices and data/code co-location.

Cartesian Join

I wrote this note a little over 4 years ago (Jan 2015) but failed to publish it for some reason. I’ve just rediscovered it and it’s got a couple of details that are worth mentioning, so I’ve decided to go ahead and publish it now.

A recent [ed: 4 year old] question on the OTN SQL forum asked for help in “multiplying up” data – producing multiple rows from a single row source. This is something I’ve done fairly often when modelling a problem, for example by generating an orders table and then generating an order_lines table from the orders table, and there are a couple of traps to consider.
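As a hedged illustration of the basic technique (the table and column names are placeholders, not the ones from the forum question), a Cartesian join against a small generated row source multiplies each orders row into a fixed number of order_lines rows:

create table order_lines as
select
        o.order_id,
        g.line_no
from
        orders o,
        (select rownum line_no from dual connect by level <= 4) g
;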