Oakies Blog Aggregator

Golang chatting with Oracle

Python is awesome. I like Python very much, but there are some cases where Python’s performance is just not enough. What other options do we have?
Well, there is, for example, GoLang.

The problem with GoLang is that there is not a lot on the Internet about using it with Oracle databases. Let me show you the basic steps for configuring the environment.

The first step is to install GoLang – instructions can be found here: https://golang.org/doc/install

To connect to Oracle, you need an Oracle Client (Oracle Instant Client is sufficient).
I will configure my environment on my Mac and use Oracle Instant Client for macOS.

piclerick:~ inter$ mkdir oracle
piclerick:~ inter$ cd oracle
piclerick:oracle inter$ ls
piclerick:oracle inter$ unzip ~/Downloads/instantclient-basic-macos.x64-18.1.0.0.0.zip
Archive:  /Users/inter/Downloads/instantclient-basic-macos.x64-18.1.0.0.0.zip
  inflating: instantclient_18_1/BASIC_README
  inflating: instantclient_18_1/adrci
  inflating: instantclient_18_1/genezi
    linking: instantclient_18_1/libclntsh.dylib  -> libclntsh.dylib.18.1
  inflating: instantclient_18_1/libclntsh.dylib.18.1
  inflating: instantclient_18_1/libclntshcore.dylib.18.1
  inflating: instantclient_18_1/libnnz18.dylib
    linking: instantclient_18_1/libocci.dylib  -> libocci.dylib.18.1
  inflating: instantclient_18_1/libocci.dylib.18.1
  inflating: instantclient_18_1/libociei.dylib
  inflating: instantclient_18_1/libocijdbc18.dylib
  inflating: instantclient_18_1/libons.dylib
  inflating: instantclient_18_1/liboramysql18.dylib
   creating: instantclient_18_1/network/
  inflating: instantclient_18_1/ojdbc8.jar
  inflating: instantclient_18_1/ucp.jar
  inflating: instantclient_18_1/uidrvci
  inflating: instantclient_18_1/xstreams.jar
   creating: instantclient_18_1/network/admin/
  inflating: instantclient_18_1/network/admin/README
finishing deferred symbolic links:
  instantclient_18_1/libclntsh.dylib -> libclntsh.dylib.18.1
  instantclient_18_1/libocci.dylib -> libocci.dylib.18.1
piclerick:oracle inter$ unzip ~/Downloads/instantclient-sqlplus-macos.x64-18.1.0.0.0.zip
Archive:  /Users/inter/Downloads/instantclient-sqlplus-macos.x64-18.1.0.0.0.zip
  inflating: instantclient_18_1/SQLPLUS_README
  inflating: instantclient_18_1/glogin.sql
  inflating: instantclient_18_1/libsqlplus.dylib
  inflating: instantclient_18_1/libsqlplusic.dylib
  inflating: instantclient_18_1/sqlplus
piclerick:oracle inter$ unzip ~/Downloads/instantclient-tools-macos.x64-18.1.0.0.0.zip
Archive:  /Users/inter/Downloads/instantclient-tools-macos.x64-18.1.0.0.0.zip
  inflating: instantclient_18_1/TOOLS_README
  inflating: instantclient_18_1/exp
  inflating: instantclient_18_1/expdp
  inflating: instantclient_18_1/imp
  inflating: instantclient_18_1/impdp
  inflating: instantclient_18_1/libnfsodm18.dylib
  inflating: instantclient_18_1/sqlldr
  inflating: instantclient_18_1/wrc
piclerick:oracle inter$ unzip ~/Downloads/instantclient-sdk-macos.x64-18.1.0.0.0-2.zip
Archive:  /Users/inter/Downloads/instantclient-sdk-macos.x64-18.1.0.0.0-2.zip
   creating: instantclient_18_1/sdk/
   creating: instantclient_18_1/sdk/admin/
  inflating: instantclient_18_1/sdk/admin/oraaccess.xsd
   creating: instantclient_18_1/sdk/demo/
  inflating: instantclient_18_1/sdk/demo/cdemo81.c
  inflating: instantclient_18_1/sdk/demo/demo.mk
  inflating: instantclient_18_1/sdk/demo/occidemo.sql
  inflating: instantclient_18_1/sdk/demo/occidemod.sql
  inflating: instantclient_18_1/sdk/demo/occidml.cpp
  inflating: instantclient_18_1/sdk/demo/occiobj.cpp
  inflating: instantclient_18_1/sdk/demo/occiobj.typ
  inflating: instantclient_18_1/sdk/demo/oraaccess.xml
  inflating: instantclient_18_1/sdk/demo/setuporamysql.sh
   creating: instantclient_18_1/sdk/include/
  inflating: instantclient_18_1/sdk/include/ldap.h
  inflating: instantclient_18_1/sdk/include/nzerror.h
  inflating: instantclient_18_1/sdk/include/nzt.h
  inflating: instantclient_18_1/sdk/include/occi.h
  inflating: instantclient_18_1/sdk/include/occiAQ.h
  inflating: instantclient_18_1/sdk/include/occiCommon.h
  inflating: instantclient_18_1/sdk/include/occiControl.h
  inflating: instantclient_18_1/sdk/include/occiData.h
  inflating: instantclient_18_1/sdk/include/occiObjects.h
  inflating: instantclient_18_1/sdk/include/oci.h
  inflating: instantclient_18_1/sdk/include/oci1.h
  inflating: instantclient_18_1/sdk/include/oci8dp.h
  inflating: instantclient_18_1/sdk/include/ociap.h
  inflating: instantclient_18_1/sdk/include/ociapr.h
  inflating: instantclient_18_1/sdk/include/ocidef.h
  inflating: instantclient_18_1/sdk/include/ocidem.h
  inflating: instantclient_18_1/sdk/include/ocidfn.h
  inflating: instantclient_18_1/sdk/include/ociextp.h
  inflating: instantclient_18_1/sdk/include/ocikpr.h
  inflating: instantclient_18_1/sdk/include/ociver.h
  inflating: instantclient_18_1/sdk/include/ocixmldb.h
  inflating: instantclient_18_1/sdk/include/ocixstream.h
  inflating: instantclient_18_1/sdk/include/odci.h
  inflating: instantclient_18_1/sdk/include/orastruc.h
  inflating: instantclient_18_1/sdk/include/oratypes.h
  inflating: instantclient_18_1/sdk/include/oraxml.h
  inflating: instantclient_18_1/sdk/include/oraxml.hpp
  inflating: instantclient_18_1/sdk/include/oraxmlcg.h
  inflating: instantclient_18_1/sdk/include/oraxsd.h
  inflating: instantclient_18_1/sdk/include/oraxsd.hpp
  inflating: instantclient_18_1/sdk/include/ori.h
  inflating: instantclient_18_1/sdk/include/orid.h
  inflating: instantclient_18_1/sdk/include/orl.h
  inflating: instantclient_18_1/sdk/include/oro.h
  inflating: instantclient_18_1/sdk/include/ort.h
  inflating: instantclient_18_1/sdk/include/xa.h
  inflating: instantclient_18_1/sdk/include/xml.h
  inflating: instantclient_18_1/sdk/include/xml.hpp
  inflating: instantclient_18_1/sdk/include/xmlctx.hpp
  inflating: instantclient_18_1/sdk/include/xmldav.h
  inflating: instantclient_18_1/sdk/include/xmldf.h
  inflating: instantclient_18_1/sdk/include/xmlerr.h
  inflating: instantclient_18_1/sdk/include/xmlev.h
  inflating: instantclient_18_1/sdk/include/xmlotn.h
  inflating: instantclient_18_1/sdk/include/xmlotn.hpp
  inflating: instantclient_18_1/sdk/include/xmlproc.h
  inflating: instantclient_18_1/sdk/include/xmlsch.h
  inflating: instantclient_18_1/sdk/include/xmlsoap.h
  inflating: instantclient_18_1/sdk/include/xmlsoap.hpp
  inflating: instantclient_18_1/sdk/include/xmlsoapc.hpp
  inflating: instantclient_18_1/sdk/include/xmlurl.h
  inflating: instantclient_18_1/sdk/include/xmlxptr.h
  inflating: instantclient_18_1/sdk/include/xmlxsl.h
  inflating: instantclient_18_1/sdk/include/xmlxvm.h
  inflating: instantclient_18_1/sdk/ott
 extracting: instantclient_18_1/sdk/ottclasses.zip
  inflating: instantclient_18_1/sdk/SDK_README

To connect to Oracle from Go, we will use goracle – a package that implements a database/sql/driver.Driver for connecting to Oracle Database, built on Anthony Tuininga’s excellent OCI wrapper, ODPI-C.

Let’s get the package:

piclerick:~ inter$ go get gopkg.in/goracle.v2

Now we can set the environment variables and create our source code:

piclerick:~ inter$ export  ORACLE_HOME=/Users/inter/oracle/instantclient_18_1
piclerick:~ inter$ export DYLD_LIBRARY_PATH=/Users/inter/oracle/instantclient_18_1
piclerick:~ inter$ vim chatt_with_oracle.go

The base for our script will look like this:

package main

import (
        "fmt"
        "database/sql"
        "log"
        _ "gopkg.in/goracle.v2"
)

func main() {

}

Please note that we have to import goracle a bit differently than the other packages – with the "_" (blank identifier) prefix. That’s because we will connect to the database like this: sql.Open("goracle", conn). The goracle package isn’t referenced directly anywhere in the code; importing it for its side effects registers the driver under the name "goracle", which we then pass to database/sql as a string.

Let’s create a connection to the database and execute a simple query:

package main

import (
        "fmt"
        "database/sql"
        "log"
        _ "gopkg.in/goracle.v2"
)

func main() {
  //connect to database
  db, err := sql.Open("goracle", "hr/hr@192.168.56.18:1521/rokoko")

  //close database connection at the end of program execution
  defer db.Close()

  //check if there were some errors during connection
  if err != nil {
        log.Panic(err)
  }

  //execute a query and get pointer to the rows
  rows, err := db.Query("select count(1) from employees")

  //check if there were some errors during execution
  if err != nil {
        log.Panic(err)
  }

  //define variable for the output
  var emps_cnt int

  //if there are some rows - put the output to the variable address by reference
  if rows.Next() {
        rows.Scan(&emps_cnt)
  }

  fmt.Println(emps_cnt)

}

Now we can compile our program and execute it:

piclerick:~ inter$ go build chatt_with_oracle.go
piclerick:~ inter$ ./chatt_with_oracle
107
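
As a side note, for a single-value query like this the database/sql package also provides QueryRow, which skips the explicit rows.Next() check. Here is a minimal sketch of that variant (my addition, not from the original post, reusing the same connection string as above):

package main

import (
        "database/sql"
        "fmt"
        "log"

        _ "gopkg.in/goracle.v2"
)

func main() {
  //open the connection pool
  db, err := sql.Open("goracle", "hr/hr@192.168.56.18:1521/rokoko")
  if err != nil {
        log.Panic(err)
  }
  defer db.Close()

  //QueryRow expects at most one row; Scan returns any query error
  var empsCnt int
  if err := db.QueryRow("select count(1) from employees").Scan(&empsCnt); err != nil {
        log.Panic(err)
  }

  fmt.Println(empsCnt)
}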

Great! Let’s add more rows to the output and an additional bind variable:

package main

import (
	"fmt"
	"database/sql"
	"log"
	_ "gopkg.in/goracle.v2"
)

func main() {
  //connect to database
  db, err := sql.Open("goracle", "hr/hr@192.168.56.18:1521/rokoko")

  //close database connection at the end of program execution
  defer db.Close()

  //check if there were some errors during connection
  if err != nil {
	log.Panic(err)
  }

  //execute a query and get pointer to the rows
  deptId := 20
  rows, err := db.Query("select first_name, last_name, salary from employees where department_id=:1", deptId)

  //check if there were some errors during execution
  if err != nil {
	log.Panic(err)
  }

  //define variable for the output
  var salary int
  var fname, lname string

  //if there are some rows - put the output to the variable address by reference
  for rows.Next() {
  	rows.Scan(&fname, &lname, &salary)
        fmt.Println(fname, lname, salary)
  }

}

And here’s the result:

piclerick:~ inter$ ./chatt_with_oracle
Michael Hartstein 26000
Pat Fay 12000
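
One detail worth pointing out: the loop above ignores the error returned by rows.Scan and never closes the rows. A slightly more defensive drop-in replacement for just that loop (my sketch, not from the original post) could look like this:

  //make sure the cursor is released when we are done
  defer rows.Close()

  //scan each row, checking for scan errors
  for rows.Next() {
        if err := rows.Scan(&fname, &lname, &salary); err != nil {
                log.Panic(err)
        }
        fmt.Println(fname, lname, salary)
  }

  //rows.Err() reports any error that ended the iteration early
  if err := rows.Err(); err != nil {
        log.Panic(err)
  }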

If you want to execute a PL/SQL function or procedure and get the results back, you can use the sql.Out interface. For example, let’s create a sample PL/SQL function:

SQL> ed
Wrote file afiedt.buf

  1  create or replace function f_get_dept_avg_sal(p_dept_id number) return number is
  2    v_avg_sal number;
  3  begin
  4    select avg(salary) into v_avg_sal
  5    from employees e
  6    where e.department_id=p_dept_id;
  7    return v_avg_sal;
  8* end;
  9  /

Function created.

Now we can add a simple PL/SQL block to our Go program and use the Exec method with the sql.Out interface:

package main

import (
	"fmt"
	"database/sql"
	"log"
	_ "gopkg.in/goracle.v2"
)

func main() {
  //connect to database
  db, err := sql.Open("goracle", "hr/hr@192.168.56.18:1521/rokoko")

  //close database connection at the end of program execution
  defer db.Close()

  //check if there were some errors during connection
  if err != nil {
	log.Panic(err)
  }

  //execute a query and get pointer to the rows
  deptId := 20
  rows, err := db.Query("select first_name, last_name, salary from employees where department_id=:1", deptId)

  //check if there were some errors during execution
  if err != nil {
	log.Panic(err)
  }

  //define variable for the output
  var salary int
  var fname, lname string

  //if there are some rows - put the output to the variable address by reference
  for rows.Next() {
	rows.Scan(&fname, &lname, &salary)
        fmt.Println(fname, lname, salary)
  }

  //define variable for average salary
  var avgSal float64

  //execute anonymous block and use sql.Out interface for extracting bind variable value
  db.Exec("begin :1 := f_get_dept_avg_sal(:2); end;", sql.Out{Dest: &avgSal}, deptId)

  fmt.Printf("AVG salary in department %d is %f\n", deptId, avgSal)

}

And here’s the result:

piclerick:~ inter$ go build chatt_with_oracle.go
piclerick:~ inter$ ./chatt_with_oracle
Michael Hartstein 26000
Pat Fay 12000
AVG salary in department 20 is 19000.000000
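
One small caveat: the anonymous block above is executed without checking the error returned by Exec. A slightly more defensive version of that single call (my sketch, not from the original post) would be:

  if _, err := db.Exec(
        "begin :1 := f_get_dept_avg_sal(:2); end;",
        sql.Out{Dest: &avgSal}, deptId,
  ); err != nil {
        log.Panic(err)
  }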

And it’s as simple as that.

Using dbca to create a physical standby database

While investigating the new dbca options I discovered for the previous article, I noticed that it’s now possible to use Oracle’s Database Configuration Assistant (dbca) to create a physical standby database using the -createDuplicateDB flag.

I wanted to know how easily this can be done on my Oracle 18.5.0 system. I have 2 VMs, server3 and server4 running Oracle Linux 7.6. I created the primary database on server3. The database name is NCDB with the db_unique_name set to SITEA. I also use SITEA as the database SID to prevent confusion. My physical standby database with db_unique_name SITEB will reside on server4.

The setup is kept as simple as possible to avoid any unintended complications. In other words, this is a lab setup, not something you’d entertain using for any other purpose. As always, this is a tech post and not a piece about licencing. It is up to you to ensure you aren’t performing any tasks that you aren’t licensed for.

Creating the primary database

The primary database can be created using dbca as always. Unlike the last post, this one uses ASM rather than a file system, to show how easy it is to do both.

The primary database is a non-Container Database (NCDB). I intend to create the primary instance as SITEA (Site A) with the standby named SITEB. This way no one gets confused when the database runs in primary role in the designated disaster recovery location. I haven’t seen gdbName <> sid/db_unique_name in many single instance deployments and appreciate that might be unfamiliar but bear with me. I’ll try to make this as transparent as possible.

Invoking dbca

Enough talking, let’s create the database:

[oracle@server3 ~]$ dbca -silent -createDatabase \
> -gdbName NCDB \
> -templateName lab_template.dbc \
> -sid SITEA \
> -enableArchive true -archiveLogMode auto \
> -memoryMgmtType auto_sga \
> -createAsContainerDatabase false \
> -recoveryAreaDestination +RECO \
> -totalMemory 2048 \
> -datafileDestination +DATA \
> -initParams db_unique_name=SITEA,dg_broker_start=true \
> -customScripts add_srls_dbca.sql

[...]

Prepare for db operation
10% complete
Registering database with Oracle Restart
14% complete
Copying database files
43% complete
Creating and starting Oracle instance
45% complete
49% complete
53% complete
56% complete
62% complete
Completing Database Creation
68% complete
70% complete
71% complete
Executing Post Configuration Actions
100% complete
Database creation complete. For details check the logfiles at:
/u01/app/oracle/cfgtoollogs/dbca/SITEA.
Database Information:
Global Database Name:SITEA
System Identifier(SID):SITEA
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/SITEA/SITEA1.log" for further details.

A few words of explanation: as I said, I use SITEA as db_unique_name and the database’s SID. In Oracle Managed Files (OMF) deployments a file name references the db_unique_name, and I wanted my data files, temp files and log files all to be in +DATA/SITEA/. db_unique_name defaults to db_name, and not changing it would have resulted in the database being created in +DATA/NCDB – and that’s not what I wanted.

I decided to include starting the Data Guard broker as part of the initialisation parameters. Archiving is enabled right from the start as well.

The custom script you see referenced creates the necessary number of standby redo logs. This will save me a bit of work, because RMAN is smart: when duplicating the database for use as a standby it creates standby redo logs (SRLs) for me if it finds SRLs in the target database. One less thing for me to worry about.

Implications of using gdbname and sid (+ db_unique_name) in dbca

If you haven’t used a SID <> gDBName combination before, you might feel unsure about how Oracle creates things. First of all, it created the database with the db_name you passed to dbca as gdbname. In this post that’s NCDB. Normally, if you grep’d the operating system’s process table for NCDB you would find all the instance’s background processes. With my dbca command however (where an Oracle SID <> gdbname is provided) you won’t find anything. Grep for the ORACLE_SID “SITEA” instead. Likewise, if you want to create a bequeath connection to the database, set your ORACLE_SID to SITEA.

Since this is Oracle Restart the database is registered with Grid Infrastructure under its db_unique_name/ORACLE_SID combination in the RDBMS home:

[oracle@server3 ~]$ srvctl config database
SITEA
[oracle@server3 ~]$ srvctl config database -db SITEA
Database unique name: SITEA
Database name: NCDB
Oracle home: /u01/app/oracle/product/18.0.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/SITEA/PARAMETERFILE/spfile.273.1003226559
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Disk Groups: DATA,RECO
Services:
OSDBA group:
OSOPER group:
Database instance: SITEA
[oracle@server3 ~]$

Interacting with the database via srvctl requires you to use the db_unique_name SITEA as shown.

This is my preferred way to set things up these days. I’d like to invite you to critically reflect on the approach I took. I prefer it over other designations such as “PROD” and “STDBY”.

Creating the standby database

With the primary database created, archivelog mode enabled plus a working Fast Recovery Area I can move on to creating the physical standby database.

Before I do this a few small changes have to be made to my network configuration. I always add TNS entries for both primary and standby database in the “primary” host’s $ORACLE_HOME/network/admin/tnsnames.ora file and ship this across to the “standby” host. They are needed later. With the preparation done, I proceed to calling dbca:

[oracle@server4 ~]$ dbca -silent -createDuplicateDB \
> -gdbName NCDB \
> -sid SITEB \
> -initParams db_create_file_dest=+DATA,db_recovery_file_dest=+RECO,dg_broker_start=true \
> -primaryDBConnectionString server3:1521/SITEA \
> -createAsStandby \
> -dbUniqueName SITEB

[ ... ]

Prepare for db operation
22% complete
Listener config step
44% complete
Auxiliary instance creation
67% complete
RMAN duplicate
89% complete
Post duplicate database operations
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/SITEB/SITEB1.log" for further details.

So far so good. What did dbca do? And more importantly, how did it create the standby database? I didn’t have a backup of NCDB/SITEA at the time I called dbca on server4 so it must have performed a duplicate … from active database. Looking at the dbca trace in $ORACLE_BASE/cfgtoollogs/dbca/SITEB I can confirm this. After wading through a lot of messages, here’s the relevant bit:

INFO: Mar 11, 2019 12:14:04 PM oracle.assistants.dbca.driver.backend.steps.RmanDuplicateStep executeImpl
INFO: Running rman duplicate command:run {
set newname for datafile 1 to new;
set newname for datafile 3 to new;
set newname for datafile 4 to new;
set newname for datafile 7 to new;
duplicate target database
for standby
from active database
dorecover
nofilenamecheck;
}

This might be an issue for some users because it requires a direct connection to the source database. I don’t know if RMAN might use the “from service … using backupset …” clause if you provided an offline backup.

The files I care about are all in the right place:

SQL> select name from v$datafile union all
2 select name from v$tempfile union all
3 select member from v$logfile union all
4 select value from v$parameter2 where name = 'control_files';

NAME
-------------------------------------------------------
+DATA/SITEB/DATAFILE/system.268.1003228125
+DATA/SITEB/DATAFILE/sysaux.273.1003228127
+DATA/SITEB/DATAFILE/undotbs1.272.1003228129
+DATA/SITEB/DATAFILE/users.271.1003228129
+DATA/SITEB/TEMPFILE/temp.262.1003228149
+DATA/SITEB/ONLINELOG/group_3.266.1003228137
+RECO/SITEB/ONLINELOG/group_3.310.1003228137
+DATA/SITEB/ONLINELOG/group_2.267.1003228137
+RECO/SITEB/ONLINELOG/group_2.258.1003228137
+DATA/SITEB/ONLINELOG/group_1.270.1003228135
+RECO/SITEB/ONLINELOG/group_1.259.1003228135
+DATA/SITEB/ONLINELOG/group_4.265.1003228137
+RECO/SITEB/ONLINELOG/group_4.311.1003228137
+DATA/SITEB/ONLINELOG/group_5.264.1003228139
+RECO/SITEB/ONLINELOG/group_5.312.1003228139
+DATA/SITEB/ONLINELOG/group_6.274.1003228139
+RECO/SITEB/ONLINELOG/group_6.313.1003228141
+DATA/SITEB/ONLINELOG/group_7.263.1003228147
+RECO/SITEB/ONLINELOG/group_7.314.1003228147
+DATA/SITEB/CONTROLFILE/current.260.1003228117
+RECO/SITEB/CONTROLFILE/current.260.1003228117

21 rows selected.

As reported by other bloggers, dbca creates a “temporary” listener that it doesn’t remove after the database creation process is completed. I specifically wanted to see if this is the case with Oracle Restart as well and yes, it is:

[oracle@server4 SITEB]$ ps -ef | grep tnslsnr
oracle 1125 1 0 10:27 ? 00:00:00 /u01/app/oracle/product/18.0.0/dbhome_1/bin/tnslsnr LISTENER20190318102721 -inherit
grid 5837 1 0 09:01 ? 00:00:00 /u01/app/grid/product/18.0.0/grid/bin/tnslsnr LISTENER -no_crs_notify -inherit
oracle 7588 31809 0 10:44 pts/1 00:00:00 grep --color=auto tnslsnr

The extra listener didn’t survive a reboot, and the processes should disappear eventually. dbca persists the configuration into $ORACLE_HOME/network/admin/listener.ora though, and those entries should probably be removed. This is especially true with Oracle Restart (and RAC for that matter) because I want all listeners to be controlled from the GRID home, not the RDBMS home.

There are other leftover files in $ORACLE_HOME/dbs as well, which I’d hope dbca removes one day.

On the plus side dbca registers the database in Oracle Restart for me as part of the work it did:

[oracle@server4 ~]$ srvctl config database -db SITEB
Database unique name: SITEB
Database name: NCDB
Oracle home: /u01/app/oracle/product/18.0.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/SITEB/PARAMETERFILE/spfile.275.1003228201
Password file: +DATA/SITEB/orapwsiteb
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Disk Groups: DATA,RECO
Services:
OSDBA group:
OSOPER group:
Database instance: SITEB

It’s good to see that I don’t need to worry about storing the password file and spfile in ASM myself – that job is already taken care of. A few other things need to be changed though. The database is still considered a primary (which I know it isn’t) and the start option is therefore wrong, too. There are licensing implications to opening a standby read-only rather than just mounting it when turning on redo apply. Refer to the documentation for more details.

Out of habit I changed the role to PHYSICAL_STANDBY and the start option to MOUNT. This is what I ended up with in my lab; your mileage may vary depending on how you are licensed.

[oracle@server4 ~]$ srvctl config database -db SITEB
Database unique name: SITEB
Database name: NCDB
Oracle home: /u01/app/oracle/product/18.0.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/SITEB/PARAMETERFILE/spfile.275.1003228201
Password file: +DATA/SITEB/orapwsiteb
Domain:
Start options: mount
Stop options: immediate
Database role: PHYSICAL_STANDBY
Management policy: AUTOMATIC
Disk Groups: DATA,RECO
Services:
OSDBA group:
OSOPER group:
Database instance: SITEB

Creating the Broker configuration

The standby redo logs have been created by RMAN and with dg_broker_start set to true by dbca for both databases I can now create a Data Guard Broker configuration. With Oracle Restart I don’t have to statically register the databases with the listener anymore. Clusterware takes care of starting/stopping them during role changes.

[oracle@server3 ~]$ dgmgrl
DGMGRL for Linux: Release 18.0.0.0.0 - Production on Mon Mar 18 11:11:55 2019 Version 18.5.0.0.0

Copyright (c) 1982, 2018, Oracle and/or its affiliates. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys@SITEA
Password:
Connected to "SITEA"
Connected as SYSDBA.
DGMGRL> create configuration autoconf
> primary database is 'SITEA'
> connect identifier is 'SITEA';
Configuration "autoconf" created with primary database "SITEA"
DGMGRL> add database 'SITEB' as connect identifier is 'SITEB';
Database "SITEB" added
DGMGRL> edit database 'SITEA' set property standbyfilemanagement=auto;
Property "standbyfilemanagement" updated
DGMGRL> edit database 'SITEB' set property standbyfilemanagement=auto;
Property "standbyfilemanagement" updated
DGMGRL> enable configuration
Enabled.
DGMGRL> show configuration

Configuration - autoconf

Protection Mode: MaxPerformance
Members:
SITEA - Primary database
SITEB - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS (status updated 17 seconds ago)
DGMGRL>

You might want to increase the protection level from MaxPerformance to something that better suits your requirements. There might be additional changes you want to make within the broker as well. The output you just saw is the bare minimum I put into place.

Summary

All in all this has been pretty straightforward, without too many hiccups along the way. If you want to, you can put a lot of those steps into your favourite automation toolkit and you’ll end up with a standby database reasonably quickly, provided your environment supports this kind of setup.

PostgresConf 2019 Training Days

It feels like PostgresConf in New York is in full swing, even though the main tracks haven’t even started yet!

(Oh, and by the way, as of this morning I heard there are still day-passes available for those who haven’t yet registered for the conference… and then you can come hear a great session about Wait Events in PostgreSQL this Thursday at 4:20pm!)

The first two days of PostgresConf are summits, tutorials and training sessions. A good chunk of my day today was helping out with Scott Mead’s intensive 3 hour hands-on lab Setting up PostgreSQL for Production in AWS – but outside of that I’ve managed to drop in to a number of other sessions that sounded interesting. I did my best to take down some notes so I could share a few highlights.

Monday March 18

Personally, my favorite session on Monday was Brent Bigonger’s session.  He’s a database engineer at Amazon who was involved in migrating their Inventory Management System to Aurora PostgreSQL (slides available here). I always love hearing good stories (part of why I’ve always been a fan of user groups) – this presentation gave a nice high level overview of the business, a review of the planning and execution process for the migration, and lots of practical lessons learned.

  • Some of the tips were things people are generally familiar with – like NULLs behaving differently and the importance of performance management with a tool like Performance Insights.
  • My favorite tip is getting better telemetry by instrumenting SQL with comments (SELECT /* my-service-call-1234 */ …) which reminded me of something I also read in Baron Schwartz’s recently updated e-book on observable systems: “including implicit data in SQL.”
  • A great new tip (to me) was the idea of creating a heartbeat table as one more safety check in a replication process.  You can get a sense for lag by querying the table and you can also use it during a cutover to get an extra degree of assurance that no data was missed.
  • Another general point I really resonated with: Brent gave a nice reminder that a simple solution which meets the business requirements is better than a sophisticated or complex solution that goes beyond what the business really needs.  I feel tempted on occasion to leverage architectures because they are interesting – and I always appreciate hearing this reiterated!

On the AWS track, aside from Brent’s session, I caught a few others: Jim Mlodgenski giving a deep dive on Aurora PostgreSQL architecture and Jim Finnerty giving a great talk on Aurora PostgreSQL performance tuning and query plan management.  It’s funny, but I think my favorite slide from Finnerty’s talk was actually one of the simplest and most basic; he had a slide that just had a high-level list of steps for performance tuning.  I don’t remember the exact list on that slide at the moment, but the essential process: (1) identify the top SQL (2) EXPLAIN to get the plan (3) make improvements to the SQL and (4) test and verify whether the improvements actually had the intended effect.

Other sessions I dropped into:

  • Alvaro Hernandez giving an Oracle to PostgreSQL Migration Tutorial.  I love live demos (goes along with loving hands on labs) and so this session was a hit with me – I wasn’t able to catch the whole thing but I did catch a walk-through of ora2pg.
  • Avinash Vallarapu giving an Introduction to PostgreSQL for Oracle and MySQL DBAs. When I slipped in, he was just wrapping up a section on hot physical backups in PostgreSQL with the pg_basebackup utility.  After that, Avi launched into a section on MVCC in PostgreSQL – digging into transaction IDs and vacuum, illustrated with block dumps and the pageinspect extension.  The part of this session I found most interesting was actually a few of the participant discussions – I heard lively discussions about what extensions are and about comparisons with RMAN and older versions of Oracle.

Tuesday March 19

As I said before, a good chunk of my morning was in Scott’s hands-on lab. If you ever do a hands-on lab with Scott then you’d better look out… he did something clever there: somewhere toward the beginning, if you followed the instructions correctly, then you would be unable to connect to your database!  Turns out this was on purpose (and the instructions actually tell you this) – since people often have this particular problem connecting when they first start out on RDS, Scott figured he’d just teach everyone how to fix it.  I won’t tell you what the problem actually is though – you’ll have to sign up for a lab sometime and learn for yourself.  :)

As always, we had a lot of really interesting discussions with participants in the hands-on lab.  We talked about the DBA role and the shared responsibility model, about new tools used to administer RDS databases in lieu of shell access (like Performance Insights and Enhanced Monitoring), and about how RDS helps implement industry best practices like standardization and automation. On a more technical level, people were interested to learn about the “pgbench” tool provided with PostgreSQL.

In addition to the lab, I also managed to catch part of Simon Riggs’ session Essential PostgreSQL11 Database Administration – in particular, the part about PostgreSQL 11 new features.  One interesting new thing I learned was about some work done specifically around the performance of indexes on monotonically increasing keys.

Interesting Conversations

Of course I learned just as much outside of the sessions as I learned in the sessions.  I ended up eating lunch with Alexander Kukushkin who helped facilitate a 3 hour hands-on session today about Understanding and implementing PostgreSQL High Availability with Patroni and enjoyed hearing a bit more about PostgreSQL at Zalando. Talked with a few people from a government organization who were a long-time PostgreSQL shop and interested to hear more about Aurora PostgreSQL. Talked with a guy from a large financial and media company about flashback query, bloat and vacuum, pg_repack, parallel query and partitioning in PostgreSQL.

And of course lots of discussions about the professional community. Met PostgresConf conference volunteers from California to South Africa and talked about how they got involved in the community.  Saw Lloyd and chatted about the Seattle PostgreSQL User Group.

The training and summit days are wrapping up and now it’s time to get ready for the next three days: keynotes, breakout sessions, exposition, a career fair and more!  I can’t wait.  :)

IM_DOMAIN$

A few months ago Franck Pachot wrote about a recursive SQL statement that kept appearing in the library cache. I discovered the note today because I had just found a client site where the following statement suddenly appeared near the top of the “SQL ordered by Executions” section of their AWR reports after they had upgraded to 18c.


select domain# from sys.im_domain$ where objn = :1 and col# = :2

I found Franck’s article by the simple expedient of typing the entire query into a Google search – his note was the first hit on the list, and he had a convenient example (based on the SCOTT schema) to demonstrate the effect, so I built the tables from the schema and ran a simple test with extended SQL tracing (event 10046) enabled.

Here’s an extract (with no deletions) from the resulting trace file:

PARSING IN CURSOR #139819795591784 len=110 dep=0 uid=104 oct=3 lid=104 tim=31306461773 hv=3572295767 ad='6bf8b8a0' sqlid='8n2bcc3aftu2r'
select /*+ leading(EMP DEPT) USE_HASH(DEPT) USE_HASH(BONUS) */ * from DEPT natural join EMP natural join BONUS
END OF STMT
PARSE #139819795591784:c=59,e=59,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=4262704031,tim=31306461772

PARSING IN CURSOR #139819795585328 len=64 dep=1 uid=0 oct=3 lid=0 tim=31306461966 hv=1240924087 ad='69a8b760' sqlid='0b639nx4zdzxr'
select domain# from sys.im_domain$ where objn = :1 and col# = :2
END OF STMT
PARSE #139819795585328:c=37,e=37,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=2321277860,tim=31306461965
EXEC #139819795585328:c=32,e=31,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=2321277860,tim=31306462058
FETCH #139819795585328:c=17,e=17,p=0,cr=1,cu=0,mis=0,r=0,dep=1,og=4,plh=2321277860,tim=31306462098
STAT #139819795585328 id=1 cnt=0 pid=0 pos=1 obj=10422 op='TABLE ACCESS BY INDEX ROWID IM_DOMAIN$ (cr=1 pr=0 pw=0 str=1 time=21 us cost=0 size=39 card=1)'
STAT #139819795585328 id=2 cnt=0 pid=1 pos=1 obj=10423 op='INDEX UNIQUE SCAN IM_DOMAIN_UK (cr=1 pr=0 pw=0 str=1 time=18 us cost=0 size=0 card=1)'
CLOSE #139819795585328:c=5,e=5,dep=1,type=1,tim=31306462287

EXEC #139819795591784:c=484,e=484,p=0,cr=1,cu=0,mis=0,r=0,dep=0,og=1,plh=4262704031,tim=31306462316
FETCH #139819795591784:c=0,e=804,p=0,cr=44,cu=0,mis=0,r=0,dep=0,og=1,plh=4262704031,tim=31306463191
STAT #139819795591784 id=1 cnt=0 pid=0 pos=1 obj=0 op='HASH JOIN  (cr=45 pr=0 pw=0 str=1 time=1222 us cost=72 size=97 card=1)'
STAT #139819795591784 id=2 cnt=4 pid=1 pos=1 obj=0 op='HASH JOIN  (cr=45 pr=0 pw=0 str=1 time=1001 us cost=70 size=232 card=4)'
STAT #139819795591784 id=3 cnt=4 pid=2 pos=1 obj=117764 op='TABLE ACCESS FULL EMP (cr=22 pr=0 pw=0 str=1 time=259 us cost=35 size=152 card=4)'
STAT #139819795591784 id=4 cnt=4 pid=2 pos=2 obj=117765 op='TABLE ACCESS FULL DEPT (cr=22 pr=0 pw=0 str=1 time=81 us cost=35 size=80 card=4)'
STAT #139819795591784 id=5 cnt=0 pid=1 pos=2 obj=117766 op='TABLE ACCESS FULL BONUS (cr=0 pr=0 pw=0 str=1 time=4 us cost=2 size=39 card=1)'
CLOSE #139819795591784:c=24,e=24,dep=0,type=1,tim=31306508552

PARSE #139819795591784:c=41,e=42,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=4262704031,tim=31306508798
PARSE #139819795585328:c=21,e=22,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=2321277860,tim=31306509010
EXEC #139819795585328:c=132,e=132,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=2321277860,tim=31306509220
FETCH #139819795585328:c=20,e=19,p=0,cr=1,cu=0,mis=0,r=0,dep=1,og=4,plh=2321277860,tim=31306509415
CLOSE #139819795585328:c=8,e=8,dep=1,type=3,tim=31306509494
EXEC #139819795591784:c=682,e=704,p=0,cr=1,cu=0,mis=0,r=0,dep=0,og=1,plh=4262704031,tim=31306509558
FETCH #139819795591784:c=588,e=1246,p=0,cr=44,cu=0,mis=0,r=0,dep=0,og=1,plh=4262704031,tim=31306511014
CLOSE #139819795591784:c=23,e=22,dep=0,type=3,ti

As you can see, every time I do a parse call for the query against the SCOTT schema (PARSE #139819795591784), Oracle does a parse/exec/fetch/close for the query against im_domain$ (PARSE #139819795585328) – and this happens even when the SCOTT query is in the session cursor cache!

As Franck points out, this looks like something to do with the In Memory option even though the option wasn’t enabled in his database, and wasn’t enabled in my client’s database. Once you’ve got a reproducible example of a problem, though, you can start fiddling to see if you can bypass it. In this case I decided to check all the parameters to do with the in-memory option – which is a bit like hard work because there are 208 parameters that include the word “inmemory”. After checking the descriptions of the first twenty or so I decided there was an easier option – if Oracle is asking about “domains” for columns, possibly it’s something to do with the relatively new “join group” feature for in-memory columnar compression, so I ran a query to produce the list of parameter names and descriptions for parameters with the words “join” and “group” in their names – there are two:


_sqlexec_join_group_aware_hj_enabled              enable/disable join group aware hash join
_sqlexec_join_group_aware_hj_unencoded_rowset     minimum number of unencoded rowsets processed before adaptation 

The first one looks rather promising – it has a default value of TRUE, and it can be changed by “alter session” or “alter system”. So I executed:


alter session set "_sqlexec_join_group_aware_hj_enabled" = false;
alter system flush shared_pool;

Then I ran my test again and voila! there it wasn’t. No more queries against im_domain$.

Problem solved (until the client decides they want to use the in-memory option, perhaps).

There may be other reasons why this recursive query appears which aren’t to do with hash joins, of course, but join groups are specifically there to allow efficient hash joins with the in-memory option (it’s a trick to do with common encoding for compression that allows Bloom filtering to eliminate CUs without decoding), so I’m hoping I won’t have to track down and eliminate another source of the query.

 

Oracle Database 19c Automatic Indexing: Predicted Back In 2008 (A Better Future)

I’ve recently received a number of correspondences regarding one of my most popular blog posts, dating back to February 2008: Index Rebuild vs. Coalesce vs. Shrink Space (Pigs – 3 Different Ones). In the comments section, there’s an interesting discussion where I mention: “If Oracle19 does everything for you and all the various indexes structures get […]

High CPU usage in docker-proxy with chatty database application? Disable userland-proxy!

Or just keep database and application co-located :)

It is well-known from the get-go, but very often overlooked because of ignorance or laziness: the database application must be co-located with the database server. Row-by-row roundtrips between the application and the database are expensive. Not only due to the network latency, but also because of the many CPU cycles wasted to switch the context between the two engines, or the two processes, and maybe the two servers.

In modern architectures, with microservices and containers, this means that a business service must be implemented in one microservice containing the business logic and the business data. Separating the application and the database into two microservices is a wrong design: inefficient, non-scalable, and also non-green because of the unnecessary CPU usage.

Docker

I was building a new demo for this, as in the previous post, where I compare running the procedural code on the client side or the server side of the database. When I was running my database in a Docker container, I saw that the bad performance I wanted to show was even worse than expected:

  • the symptom was high CPU usage in “docker-proxy” process
  • the cause was that I was using the default Docker userland proxy

Here is the related Twitter thread. Thanks to @G_Ceresa, @ochoa_marcelo, and @ofirm for the quick replies about the cause and solution:

SQL, PL/SQL and JavaScript running in the Database Server (Oracle MLE)

In a previous post I measured the CPU usage when running a database transaction in the same engine (SQL), or two engines in the same process (PL/SQL + SQL or JavaScript + SQL) or two processes (Javascript client + server SQL):

ODC Appreciation Day : Reduce CPU usage by running the business logic in the Oracle Database

For the JavaScript + SQL running in the same process, I used the Oracle Multi-Lingual Engine in beta 0.2.7, but there is now a new beta 0.3.0 and this post runs the same (or similar) demo with it.

I’ve installed this MLE in a previous post:

Oracle Multi-Lingual Engine

And here is the demo where I run 400000 amount transfers between accounts. Here are the tables:

SQLcl: Release 18.4 Production on Sun Mar 17 15:42:34 2019
Copyright (c) 1982, 2019, Oracle. All rights reserved.
Last Successful login time: Sun Mar 17 2019 15:42:36 +00:00
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0_MLE - 64bit Beta
15:42:39 SQL> create table CUSTOMERS (
CUSTOMER_ID number generated always as identity
constraint CUSTOMER_PK primary key,
CUSTOMER_NAME varchar2(42)
);
Table created.
15:42:42 SQL> create table ACCOUNTS (
ACCOUNT_ID varchar2(10) constraint ACCOUNT_PK primary key,
CUSTOMER_ID number,
AMOUNT number default 0
);
Table created.
15:42:46 SQL> insert /*+ append */ into CUSTOMERS (CUSTOMER_NAME)
select x from (
select to_char( date'-4712-01-01'+rownum-1,'Jsp') x
from xmltable('1 to 1000000')
) where length(x)=42 and rownum<=4000;
4000 rows created.
15:42:49 SQL> commit;
Commit complete.
15:42:51 SQL> select * from CUSTOMERS 
order by CUSTOMER_ID fetch first 10 rows only;
  CUSTOMER_ID CUSTOMER_NAME
----------- ------------------------------------------
1 Three Thousand Three Hundred Seventy-Three
2 Three Thousand Three Hundred Seventy-Seven
3 Three Thousand Three Hundred Seventy-Eight
4 Three Thousand Seven Hundred Seventy-Three
5 Three Thousand Seven Hundred Seventy-Seven
6 Three Thousand Seven Hundred Seventy-Eight
7 Three Thousand Eight Hundred Seventy-Three
8 Three Thousand Eight Hundred Seventy-Seven
9 Three Thousand Eight Hundred Seventy-Eight
10 Seven Thousand Three Hundred Seventy-Three
10 rows selected.
15:42:54 SQL> insert /*+ append */ into ACCOUNTS                   
(ACCOUNT_ID,CUSTOMER_ID,AMOUNT)
select 'X'||to_char(rownum,'FM0999999'),CUSTOMER_ID,10000
from CUSTOMERS cross join xmltable('1 to 100')
;
400000 rows created.
15:42:57 SQL> commit;
Commit complete.
15:42:58 SQL> commit;
Commit complete.
15:43:15 SQL> select * from ACCOUNTS 
order by ACCOUNT_ID fetch first 10 rows only;
ACCOUNT_ID     CUSTOMER_ID   AMOUNT
---------- ----------- ------
X0000001 1150 10000
X0000002 1151 10000
X0000003 1152 10000
X0000004 1153 10000
X0000005 1154 10000
X0000006 1155 10000
X0000007 1156 10000
X0000008 1157 10000
X0000009 1158 10000
X0000010 1159 10000
10 rows selected.
15:43:16 SQL> select /*+ full(ACCOUNTS) cache(ACCOUNTS) */ 
count(*),avg(amount) from ACCOUNTS;
  COUNT(*)   AVG(AMOUNT)
-------- -----------
400000 10000

I have a ‘show-cpu-seconds-from-ps.sh’ script that displays the cputime delta from ps output.
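
The idea of the script is simple: ‘init’ snapshots the cumulative CPU seconds of every process, and ‘diff’ prints the processes that consumed CPU since the snapshot. Here is a minimal sketch of it, not the exact script, assuming a procps-ng ps that supports the cputimes output format (as on Oracle Linux 7):

#!/bin/sh
# show-cpu-seconds-from-ps.sh -- minimal sketch, not necessarily the original script
# "init": snapshot the cumulative CPU seconds per pid
# "diff": print the processes whose CPU time increased since the snapshot
snap=/tmp/cpu-seconds.prev
case "$1" in
 init) ps -eo pid=,cputimes= > "$snap" ;;
 diff) ps -eo pid=,cputimes=,args= | while read pid secs args
       do
        prev=$(awk -v p="$pid" '$1==p{print $2}' "$snap")
        delta=$(( secs - ${prev:-0} ))
        [ "$delta" -gt 0 ] && echo "$delta cpu seconds in pid= $pid $args"
       done ;;
esac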

Here I run all in one SQL statement: 5 seconds of CPU

15:44:23 SQL> set timing on
15:44:28 SQL> host sh /tmp/sqlcl/show-cpu-seconds-from-ps.sh init to reset cputime counter
15:44:30 SQL> update ACCOUNTS set AMOUNT=
2 case
3 when ACCOUNT_ID='X0000001' then AMOUNT+(select 1*count(*) from ACCOUNTS where ACCOUNT_ID<>'X0000001')
4 else AMOUNT-1
5 end
6 /
400000 rows updated.
Elapsed: 00:00:04.451
15:44:43 SQL> host sh /tmp/sqlcl/show-cpu-seconds-from-ps.sh diff to show the delta cputime
5 cpu seconds in pid= 19971 oracleCDB1 (LOCAL=NO)
15:44:43 SQL> select * from ACCOUNTS order by ACCOUNT_ID fetch first 10 rows only;
ACCOUNT_ID CUSTOMER_ID AMOUNT
X0000001 1150 409999
X0000002 1151 9999
X0000003 1152 9999
X0000004 1153 9999
X0000005 1154 9999
X0000006 1155 9999
X0000007 1156 9999
X0000008 1157 9999
X0000009 1158 9999
X0000010 1159 9999
10 rows selected.
Elapsed: 00:00:00.019
15:44:43 SQL> rollback;
Rollback complete.
Elapsed: 00:00:04.158

This is the actual CPU cost needed to update those 400000 account amounts: 5 seconds. And the rollback is about the same.

Now with a PL/SQL procedure: 30 seconds of CPU (because of the context switches between the PL/SQL and SQL engines)

15:44:47 SQL> create or replace procedure transfer(acc1 varchar2, acc2 varchar2, amount number) as
2 begin
3 -- debit acc1
4 update ACCOUNTS set ACCOUNTS.AMOUNT = ACCOUNTS.AMOUNT - transfer.amount where ACCOUNT_ID=acc1;
5 if sql%rowcount <> 1 then raise_application_error(-20000,'Account '''||acc1||''' unknown'); end if;
6 -- credit acc2
7 update ACCOUNTS set ACCOUNTS.AMOUNT = ACCOUNTS.AMOUNT + transfer.amount where ACCOUNT_ID=acc2;
8 if sql%rowcount <> 1 then raise_application_error(-20000,'Account '''||acc2||''' unknown'); end if;
9 end;
10 /
Procedure created.
Elapsed: 00:00:00.113
15:46:11 SQL> desc transfer
PROCEDURE transfer
Argument Name Type In/Out Default?
ACC1 VARCHAR2 IN
ACC2 VARCHAR2 IN
AMOUNT NUMBER IN
15:46:38 SQL> set timing on
15:46:41 SQL> host sh /tmp/sqlcl/show-cpu-seconds-from-ps.sh init to reset cputime counter
15:46:43 SQL> exec for c in (select * from ACCOUNTS where ACCOUNT_ID<>'X0000001') loop transfer(c.ACCOUNT_ID,'X0000001',1); end
loop;
PL/SQL procedure successfully completed.
Elapsed: 00:00:30.283
15:47:15 SQL> host sh /tmp/sqlcl/show-cpu-seconds-from-ps.sh diff to show the delta cputime
30 cpu seconds in pid= 19971 oracleCDB1 (LOCAL=NO)
15:47:38 SQL> select * from ACCOUNTS order by ACCOUNT_ID fetch first 10 rows only;
ACCOUNT_ID CUSTOMER_ID AMOUNT
X0000001 1150 409999
X0000002 1151 9999
X0000003 1152 9999
X0000004 1153 9999
X0000005 1154 9999
X0000006 1155 9999
X0000007 1156 9999
X0000008 1157 9999
X0000009 1158 9999
X0000010 1159 9999
10 rows selected.
Elapsed: 00:00:00.015
15:47:43 SQL> rollback;
Rollback complete.
Elapsed: 00:00:04.266

Now with a JavaScript client: 157 seconds of CPU in total, split between two processes (the SQLcl JVM running the JavaScript and the database session process running the SQL).

15:48:38 SQL> disconnect
Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0_MLE - 64bit Beta
15:48:54 SQL> script
2 var DriverManager = Java.type("java.sql.DriverManager");
3 if ( con !== undefined ) { con.rollback(); con.close(); }
4 var con = DriverManager.getConnection("jdbc:oracle:thin:@//localhost/PDB1","demo","demo");
5 con.setAutoCommit(false);
6 var sql = con.createStatement();
7 .
15:49:10 SQL> save script01-init.js replace
Wrote file script01-init.js
15:49:16 SQL> @ script01-init.js
Elapsed: 00:00:01.019
15:49:18 SQL> script
2 print("First 10 accounts:");
3 var res=sql.executeQuery(" select ACCOUNT_ID,AMOUNT from ACCOUNTS order by 1 fetch first 10 rows only");
4 while(res.next()){print(" ACCOUNT_ID: "+res.getString(1)+" "+"AMOUNT: "+res.getString(2)); }
5 .
15:49:33 SQL> save script02-query.js replace
Wrote file script02-query.js
15:49:35 SQL> @ script02-query.js
First 10 accounts:
ACCOUNT_ID: X0000001 AMOUNT: 10000
ACCOUNT_ID: X0000002 AMOUNT: 10000
ACCOUNT_ID: X0000003 AMOUNT: 10000
ACCOUNT_ID: X0000004 AMOUNT: 10000
ACCOUNT_ID: X0000005 AMOUNT: 10000
ACCOUNT_ID: X0000006 AMOUNT: 10000
ACCOUNT_ID: X0000007 AMOUNT: 10000
ACCOUNT_ID: X0000008 AMOUNT: 10000
ACCOUNT_ID: X0000009 AMOUNT: 10000
ACCOUNT_ID: X0000010 AMOUNT: 10000
Elapsed: 00:00:00.181
15:49:37 SQL> script
2 var pre1=con.prepareStatement(" update ACCOUNTS set AMOUNT=AMOUNT-? where ACCOUNT_ID=?");
3 var pre2=con.prepareStatement(" update ACCOUNTS set AMOUNT=AMOUNT+? where ACCOUNT_ID=?");
4 function transfer (acc1,acc2,amount) {
5 pre1.setInt(1,amount); pre1.setString(2,acc1); pre1.execute();
6 pre2.setInt(1,amount); pre2.setString(2,acc2); pre2.execute();
7 }
8 var res=sql.executeQuery(" select ACCOUNT_ID from ACCOUNTS where ACCOUNT_ID<>'X0000001'");
9 print("Calling transaction for each account...");var t0=new Date();var cnt=0;
10 while(res.next()){ transfer(res.getString(1),'X0000001',1); cnt++ }
11 print(cnt+" transactions executed in "+(new Date() - t0)/1000+" seconds");
12 .
15:50:17 SQL> save script02-run.js replace
Wrote file script02-run.js
15:50:18 SQL> set timing on
15:50:22 SQL> host sh /tmp/sqlcl/show-cpu-seconds-from-ps.sh init to reset cputime counter
15:50:25 SQL> @    script02-run.js
Calling transaction for each account...
399999 transactions executed in 138.016 seconds
Elapsed: 00:02:18.082
15:52:45 SQL> host sh /tmp/sqlcl/show-cpu-seconds-from-ps.sh diff to show the delta cputime
52 cpu seconds in pid= 19945 /opt/oracle/product/12.2.0.1/dbhome_1/jdk/jre/bin/java -Djava.awt.headless=true -Dapple.awt.UIElement=true -Xss10M -client
105 cpu seconds in pid= 20426 oracleCDB1 (LOCAL=NO)
15:52:56 SQL> @    script02-query.js
First 10 accounts:
ACCOUNT_ID: X0000001 AMOUNT: 409999
ACCOUNT_ID: X0000002 AMOUNT: 9999
ACCOUNT_ID: X0000003 AMOUNT: 9999
ACCOUNT_ID: X0000004 AMOUNT: 9999
ACCOUNT_ID: X0000005 AMOUNT: 9999
ACCOUNT_ID: X0000006 AMOUNT: 9999
ACCOUNT_ID: X0000007 AMOUNT: 9999
ACCOUNT_ID: X0000008 AMOUNT: 9999
ACCOUNT_ID: X0000009 AMOUNT: 9999
ACCOUNT_ID: X0000010 AMOUNT: 9999
Elapsed: 00:00:00.015
15:53:13 SQL> script
2 con.rollback();
3 con.close();
4 .
15:53:20 SQL> save script02-close.js replace
Wrote file script02-close.js
15:53:22 SQL> @ script02-close.js
Elapsed: 00:00:06.198

And finally, running the JavaScript in the MLE engine: 223 seconds of CPU. This MLE is in beta and may not be fully optimized, so the absolute time is not very relevant. The point is that the whole thing runs 100% in the same process.

15:53:31 SQL> connect demo/demo@//localhost/pdb1
Connected.
15:55:34 SQL> create or replace javascript source named "demo.js" as
2 function transfer (acc1,acc2,amount) {
3 var sql = _dbRequire('@oracle/sql');
4 sql.execute(" update ACCOUNTS set AMOUNT=AMOUNT-:amount where ACCOUNT_ID=:acc1",[amount,acc1]);
5 sql.execute(" update ACCOUNTS set AMOUNT=AMOUNT+:amount where ACCOUNT_ID=:acc2",[amount,acc2]);
6 }
7 module.exports.run = function () {
8 var sql = _dbRequire('@oracle/sql');
9 var res=sql.execute(" select ACCOUNT_ID from ACCOUNTS where ACCOUNT_ID<>'X0000001'");
10 for (var row of res.rows) {
11 transfer(row[0],'X0000001',1);
12 }
13 }
14 /
Function created.
Elapsed: 00:00:00.013
15:56:02 SQL> create or replace procedure run as language javascript
2 name 'demo\.js.run()';
3 /
Procedure created.
Elapsed: 00:00:00.032
15:56:14 SQL> select * FROM user_libraries;
no rows selected
Elapsed: 00:00:00.122
15:56:19 SQL> select object_name,procedure_name,object_type from user_procedures;
OBJECT_NAME PROCEDURE_NAME OBJECT_TYPE
RUN PROCEDURE
TRANSFER PROCEDURE
Elapsed: 00:00:00.291
15:56:21 SQL> select object_name,procedure_name,object_type from user_procedures;
OBJECT_NAME PROCEDURE_NAME OBJECT_TYPE
RUN PROCEDURE
TRANSFER PROCEDURE
Elapsed: 00:00:00.012
15:56:36 SQL> set timing on
15:56:51 SQL> host sh /tmp/sqlcl/show-cpu-seconds-from-ps.sh init to reset cputime counter
15:56:53 SQL> call demo.run();
Call completed.
Elapsed: 00:03:32.463
16:00:28 SQL> host sh /tmp/sqlcl/show-cpu-seconds-from-ps.sh diff to show the delta cputime
223 cpu seconds in pid= 20761 oracleCDB1 (LOCAL=NO)

The important point with this MLE engine is that you can write your code without being tied to a platform. You write code (or use libraries of code, copy/paste from StackOverflow…) in the latest trendy languages (JavaScript and Python for the moment, maybe others in the future). And it can run on the client, on the application server, or in the database. The code can then be colocated where it is most efficient, without duplicating the logic in different languages. In summary, the developer thinks “serverless” for simplicity and agility, and operations run “full server” for efficiency and scalability.

But that’s for the future. Follow the MLE and be involved in the community:

Space: Multilingual Engine | Oracle Community

For your curiosity, here is the perf-top for the last run in MLE, showing the work in the oracle and libmle.so engines:

Oracle Multi-Lingual Engine

PI-Day example

Here is a very quick and easy test of the Oracle MLE, the engine that lets you run JavaScript or Python stored procedures in the Oracle Database (currently in beta).

The MLE beta is provided as a Docker image containing Oracle Database 12.2 with the additional MLE libraries. I created a VM on the Oracle Cloud in order to test it and show an end-to-end demo on Oracle Linux 7.

Get software

Here is where to download the database server with MLE beta:

Oracle Database MLE Download

and the SQLcl client

SQLcl Downloads

I’ll download them from my VM, where I have no browser, but downloading Oracle software requires human intervention to accept the license terms. So what I do is start the download from my laptop (where I can sign in with my Oracle account and accept the license), suspend the download, and copy the link, which contains an ‘AuthParam’ value. Then I can use this URL to download the files with wget in my VM, in the /tmp directory:
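
For example, something like this, where the URLs are only placeholders: the real ones, with their time-limited AuthParam token, come from the suspended browser download:

cd /tmp
# paste the full URLs copied from the browser (the AuthParam token expires quickly);
# wget keeps the query string in the saved file name, which is why the commands
# further down reference the files with a trailing wildcard
wget 'https://.../mle-docker-0.3.0.tar.gz?AuthParam=...'
wget 'https://.../sqlcl-18.4.0.007.1818.zip?AuthParam=...'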

Once I have this in the /tmp directory, I install and start docker, unzip SQLcl and create the MLE container:

sudo su
# install docker and java from OL7 Add-Ons
yum-config-manager --enable ol7_addons
yum install -y docker-engine docker-compose java
#start docker
systemctl start docker
docker info | grep -A2 ^Storage
# install SQLcl, the client to connect to the database
unzip -d /tmp /tmp/sqlcl-18.4.0.007.1818.zip*
# start the MLE beta
docker load --input /tmp/mle-docker-0.3.0.tar.gz*
cat > /tmp/mle.yml <<'CAT'
version: '3.1'
services:
  ora122MLE:
    image: mle-docker:0.3.0
    restart: always
    environment:
      ORACLE_SID: CDB1
      ORACLE_PDB: PDB1
      ORACLE_PWD: oracle
    ports:
      - 1521:1521
    volumes:
      - "/var/tmp/oradata:/opt/oracle/oradata"
      - "/tmp:/home/oracle/myproject"
CAT
mkdir -p /var/tmp/oradata
docker-compose -f /tmp/mle.yml up

I could have built a docker image containing SQLcl but I’m not a container junkie. My OS is a virtual machine, SQLcl runs in a Java VM, do I really need to containerize this in between two VMs? Only the database runs in a container because this is how the beta is shipped.

[Update 17-MAR-2019] This is OK only with few round-trips between client and server. The docker-proxy implementation used to connect from the host to the container is not efficient at all, so it is better to install SQLcl in the container itself, or in another linked container.

The creation of the container, at docker-compose up, takes a long time. Be patient. This is how database Docker images work: they have to create the database when creating the container.

The reason is that a database stores persistent data, and can therefore be created only once the container is started. And an Oracle database is a big beast to create. Here, I put the database files in /var/tmp, and the ‘myproject’ directory is mapped to /tmp.

The MLE beta is based on the official Docker image build scripts provided by Oracle. Everything is fine when you see “DATABASE IS READY TO USE”.
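
The docker-compose command above runs in the foreground, so the message shows up in the same terminal. If you prefer to start it detached (adding -d to docker-compose up), you can watch for it with something like:

docker-compose -f /tmp/mle.yml logs -f ora122MLE | grep --line-buffered "DATABASE IS READY TO USE"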

Let’s check with the password I defined in the docker-compose file, and create a DEMO user in the pluggable database:

/tmp/sqlcl/bin/sql sys/oracle@//localhost/pdb1 as sysdba
connect sys/oracle@//localhost/pdb1 as sysdba
grant dba to demo identified by demo;
connect demo/demo@//localhost/pdb1

The documentation of the MLE beta is at: https://oracle.github.io/oracle-db-mle/docker. As I'm doing this on the 14th of March, known as PI-Day in the countries that use the MM/DD date format, I'll create a PI function that rounds π to the number of decimals given as a parameter:

connect demo/demo@//localhost/pdb1
create javascript source named "test1.js" as
module.exports.PI=function (d) {
return Math.round(10**d*Math.PI)/10**d;
}
/
create function PI(decimals in number)
return number as language javascript
name 'test1\.js.PI(decimals number) return number';
/
select PI(2) from dual;

In the previous beta, we had to use an external utility (dbjs) to load the JavaScript code. It is now much easier with the CREATE SOURCE statement.

I’m now ready with an MLE environment for further tests…