Hash Optimisation

Franck Pachot did an interesting presentation at the OBUG (Belgium user group) Tech Days showing how to use one of the O/S debug/trace tools to step through the function calls that Oracle made during different types of joins. This prompted me to ask him a question about a possible optimisation of hash joins as follows:

The hash join operation creates an in-memory hash table from the rowsource produced by its first child operation then probes the hash table with rows from the row source produced by the second child operation; but if there are no rows in the first row source then there’s no need to acquire rows from the second row source, so Oracle doesn’t call the second child operation.
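
A quick way to see this for yourself (a minimal sketch with hypothetical tables T1 and T2, and a predicate that returns no rows from the build table) is to run the join with rowsource statistics enabled and look at the Starts column of the execution plan; the scan of the second child should show zero starts:

SQL> select /*+ leading(t1) use_hash(t2) gather_plan_statistics */ count(t2.padding)
from t1 join t2 on t2.id = t1.id
where t1.status = 'NO_SUCH_STATUS';

SQL> -- the Starts column for the scan of T2 should be 0 when T1 returns no rows
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));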

19c Auto Index: the dictionary views

The abbreviation AI may be misleading, but it has nothing to do with Artificial Intelligence. And you may have been surprised that the ‘A’ stands for ‘Automatic’ rather than ‘Autonomous’, as the latter is constantly used to tag any new database feature since 18c. But this difference is really important: ‘Autonomous’ suggests that you have nothing to do and don’t even need to be notified about what happened. By contrast, ‘Automatic’ means that some things are done without your intervention, in order to help you, but you are still in charge of managing them. And you need to look at the dictionary views to be aware of the findings, recommendations, and implementations. Automatic Indexing is an evolution of the Advisors introduced in 10g and, in the same way, it provides many dictionary views to understand its activity.
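
As a starting point, here is a minimal sketch of how the configuration and a report of recent activity can be queried, using the 19c DBMS_AUTO_INDEX API and the DBA_AUTO_INDEX_CONFIG view (check the exact names and parameters on your version):

SQL> -- current Automatic Indexing configuration
SQL> select parameter_name, parameter_value from dba_auto_index_config;

SQL> -- switch to report-only mode (other values: 'IMPLEMENT', 'OFF')
SQL> exec dbms_auto_index.configure('AUTO_INDEX_MODE','REPORT ONLY')

SQL> -- findings, recommendations and implementations from recent activity
SQL> select dbms_auto_index.report_activity() from dual;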

19c Easy Connect

When TCP/IP became the protocol used almost everywhere, Oracle introduced the EZCONNECT naming method to avoid long connection strings full of parentheses. They said it was a way to avoid tnsnames.ora, but that’s not completely true:

  • you can use full connection strings without a tnsnames.ora
  • you still need a tnsnames.ora for more complex connection strings

But the idea was to replace:

(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=APP.service.net))(ADDRESS=(PROTOCOL=tcp)(HOST=localhost)(PORT=2483)))

by:

localhost:2483/APP.service.net

and even reduce it to a simple hostname when using the default port and default service name.
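
For example (hypothetical credentials), both forms connect to the same service:

sqlplus demo/demo@"(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=APP.service.net))(ADDRESS=(PROTOCOL=tcp)(HOST=localhost)(PORT=2483)))"

sqlplus demo/demo@localhost:2483/APP.service.net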

I use it a lot when connecting manually, but as soon as there’s a need to add multiple hosts for transparent failover or load balancing, or to add some additional parameters, the full connection string is required.

19c DG Broker export/import configuration

This is something I have wanted for a long time: to be able to save a broker configuration so that I can re-create it if needed. What I usually do is maintain a script with all the commands. What I dreamed of was being able to export the configuration as a script. What we have now, in 19c, is the ability to export and import the configuration as an .xml file.
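
As a quick illustration (the file name is hypothetical), the new DGMGRL commands look like this:

DGMGRL> export configuration to 'myconfig.xml';
DGMGRL> import configuration from 'myconfig.xml';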

Actually, the configuration is already stored as XML in the broker configuration files (the .dat ones).

Oracle 19c Hint Usage reporting

One reason we try to avoid hints in our queries is that they are very difficult to use correctly. No error is raised when the syntax is incorrect or when a hint cannot be used semantically. 19c brings an important enhancement to dbms_xplan: it can now report hint usage, at least for optimizer hints.

By default, with the TYPICAL format, DBMS_XPLAN reports only invalid hints.
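
As an illustration (hypothetical table T, and a hint that references a non-existent alias X), the Hint Report section appears at the end of the plan output:

SQL> select /*+ full(x) */ count(*) from t;

SQL> -- with the default TYPICAL format, only the erroneous hint is listed in the Hint Report
SQL> select * from table(dbms_xplan.display_cursor);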

Descending Problem

I’ve written in the past about oddities with descending indexes (here, here, and here, for example) but I’ve just come across a case where I may have to introduce a descending index that really shouldn’t need to exist. As so often happens it’s at the boundary where two Oracle features collide. I have a table that handles data for a large number of customers, who record a reasonable number of transactions per year, and I have a query that displays the most recent transactions for a customer.
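
To give an idea of the pattern (table, column, and index names here are hypothetical), the query is of the "latest N transactions for one customer" form, which is where a descending column in the index ends up being considered:

SQL> create index trn_cust_date_ix on transactions(customer_id, transaction_date desc);

SQL> select *
from transactions
where customer_id = :cust
order by transaction_date desc
fetch first 10 rows only;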

Upcoming Events- #SQLSatCle and @RMOUG_ORG #TD19

As I live in an RV and travel both for my work and to attend events, it often works out that I fly out of one location and fly back into another. This week will be no different.

Since we travel between cities on the weekend, it’s bound to happen that I leave for a SQL Saturday on a Friday from one city and arrive back in a different city on a Sunday.

This Friday I will fly out of the New Orleans, LA airport to speak on Power BI at the SQL Saturday Cleveland event in Ohio, and on Sunday I will fly back into a small airport outside of Pensacola, FL.

Where to check Data Guard gap?

At work, we had a discussion with well-known colleagues Luca Canali and Ludovico Caldara about where to check that Data Guard recovery is working as expected, without any gap. Several views can be queried, depending on the context. Here are a few comments about them.

v$database

This is my preferred one because it relies on the actual state of the database, whatever the recovery process is:

SQL> select scn_to_timestamp(current_scn) 
from v$database;
SCN_TO_TIMESTAMP(CURRENT_SCN)
----------------------------------------------------------
22-JAN-19 03.08.32.000000000 PM

This reads the current System Change Number (DICUR_SCN from X$KCCDI) and maps it to a timestamp (using the SMON_SCN_TIME mapping table).

Oracle numbers in K/M/G/T/P/E

Oracle has been very well instrumented for decades, since a time when measuring memory in bytes was fine. But today we spend a lot of time converting bytes into KB, GB, and TB to read them easily. I would love to see a human-readable format for TO_CHAR, but there isn’t one. Here is a workaround that does not require creating a new function.

DBMS_XPLAN does this when displaying execution plans, and we can access the functions it uses internally. The metrics can be plain numbers, where Kilo, Mega, and Giga apply to powers of 1000. They can be a size in bytes, where we prefer powers of 1024. Or they can be a time in seconds, where we use base 60. And so we have 3 sets of functions.
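
For example, a minimal sketch; the function names below are my assumption about the internal DBMS_XPLAN functions in question, they are undocumented, so verify them on your version:

SQL> -- format_number: powers of 1000, format_size: powers of 1024, format_time_s: base 60
SQL> select dbms_xplan.format_number(123456789) "base 1000",
            dbms_xplan.format_size(123456789)   "base 1024",
            dbms_xplan.format_time_s(123456789) "base 60"
from dual;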

DML Tablescans

This note is a follow-up to a recent comment on a blog note about Row Migration:

So I wonder what is the difference between the two, parallel dml and serial dml with parallel scan, which makes them behave differently while working with migrated rows. Why might the strategy of serial dml with parallel scan case not work in parallel dml case? I am going to make a service request to get some clarifications but maybe I miss something obvious?
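
For context, here is a minimal sketch (hypothetical table T1) of the two cases the comment contrasts: a serial update driven by a parallel scan, and a true parallel DML update:

SQL> -- serial DML with parallel scan: only the query part of the update runs in parallel
SQL> update /*+ parallel(t1 4) */ t1 set padding = rpad('x',100,'x');

SQL> -- parallel DML: the update itself is executed by the parallel servers
SQL> alter session enable parallel dml;
SQL> update /*+ parallel(t1 4) */ t1 set padding = rpad('x',100,'x');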

The comment also referenced a couple of MoS notes: