Oakies Blog Aggregator

Generate your Oracle Secure External Password Store wallet from your tnsnames.ora

Want to connect passwordless with SQLcl to your databases from a single location? Here is a script that creates Secure External Password Store wallet credentials for each service declared in the tnsnames.ora, as well as shell aliases for them (so that bash autocompletion works). The idea is to put everything (wallet, SQLcl, …) in a single directory, which you must of course protect, because read access to the files is sufficient to connect to your databases.

Download the latest SQLcl from:

SQLcl Downloads

And install the Oracle Client if you do not have it already:

Oracle Instant Client Downloads

Now here is my script that:

  • reads the tnsnames.ora (set its location at the top of the script)
  • defines sqlnet.ora and tnsnames.ora (an ifile pointing to the original one)
  • creates the password wallet
  • generates a script that defines all aliases
  • creates a login.sql

Everything is located in the sqlcl directory (here under my $HOME), and the aliases set TNS_ADMIN and SQLPATH to point there:

# this is where your tnsnames.ora is found
TNS_ADMIN=/etc
# unzip -d ~ sqlcl-19.1.0.094.1619.zip
#
# if "Error Message = no ocijdbc18 in java.library.path" see https://martincarstenbach.wordpress.com/2019/05/20/using-the-secure-external-password-store-with-sqlcl/
#
alias sqlcl='TNS_ADMIN=~/sqlcl SQLPATH=~/sqlcl ~/sqlcl/bin/sql -L -oci'
#
cat > ~/sqlcl/sqlnet.ora <<CAT
WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY="$HOME/sqlcl")))
SQLNET.WALLET_OVERRIDE=TRUE
CAT
#
cat > ~/sqlcl/tnsnames.ora <<CAT
ifile=$TNS_ADMIN/tnsnames.ora
CAT
#
cat > ~/sqlcl/login.sql <<'CAT'
set exitcommit off pagesize 5000 linesize 300 trimspool on sqlprompt "_user'@'_connect_identifier> "
set sqlformat ansiconsole
CAT
#
read -p "Enter SYS password to store in the wallet: " -s PASSWORD
# Create the wallet
mkstore -wrl ~/sqlcl -create <<END
$PASSWORD
$PASSWORD
END
# Add services to wallet
awk -F"," '/^[^ #\t].*=/{sub(/=.*/,""); for (i=1;i<=NF;i++){print $i}}' $TNS_ADMIN/tnsnames.ora | while read service
do
echo "=== Adding $service to wallet for passwordless connection like: /@$service as sysdba"
mkstore -wrl ~/sqlcl -createCredential $service SYS <<END
$PASSWORD
$PASSWORD
$PASSWORD
END
done
# list services from wallet
{
mkstore -wrl ~/sqlcl -listCredential <<END
$PASSWORD
END
} | awk '/^[0-9]+: /{print "alias sysdba_"tolower($2)"="q"TNS_ADMIN=~/sqlcl SQLPATH=~/sqlcl ~/sqlcl/bin/sql -L -oci /@"toupper($2)" as sysdba"q}' q="'" qq='"' | sort | tee ~/sqlcl/services.sh
#
unset PASSWORD

Then just source the generated services.sh to create an alias for each service (like sysdba_xxx). This example creates connections AS SYSDBA with SYS authentication, but it is highly recommended to use your own user. And of course the idea here is that the same password is used on all databases, but that, again, can be customized.
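For example, assuming the tnsnames.ora declares a service named MYDB (an illustrative name), sourcing the file defines an alias that connects straight away:

. ~/sqlcl/services.sh
alias sysdba_mydb
# alias sysdba_mydb='TNS_ADMIN=~/sqlcl SQLPATH=~/sqlcl ~/sqlcl/bin/sql -L -oci /@MYDB as sysdba'
sysdba_mydb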

When I don’t want to use an alias (from a script, for example) I also have a small executable (chmod u+x) script in my PATH that runs SQLcl with this environment:

TNS_ADMIN=~/sqlcl SQLPATH=~/sql ~/sqlcl/bin/sql -L -oci ${@:-/nolog}
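For instance, the wrapper can be created like this (the ~/bin/sqlcl.sh name and location are my choice for illustration, nothing mandates them):

cat > ~/bin/sqlcl.sh <<'EOF'
#!/bin/bash
TNS_ADMIN=~/sqlcl SQLPATH=~/sql ~/sqlcl/bin/sql -L -oci ${@:-/nolog}
EOF
chmod u+x ~/bin/sqlcl.sh
sqlcl.sh /@MYDB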

and SQLcl has also autocompletion for the connect command (from the tnsnames.ora).

If you have a “no ocijdbc18 in java.library.path” message, then look at Martin Bach’s blog:

Using the Secure External Password store with sqlcl

If you have credentials to connect to the Oracle Cloud, use the downloaded wallet instead of creating one with mkstore.

Using the Secure External Password store with sqlcl

Sometimes it is necessary to invoke a SQL script in bash or otherwise in an unattended way. SQLcl has become my tool of choice because it’s really lightweight and can do a lot. If you haven’t worked with it yet, you really should give it a go.

So how does one go about invoking SQL scripts from the command line these days? There’s an age-old problem with unattended execution: how do you authenticate against the database? There are many ways to do so, some better than others. This post shows how to use the Secure External Password Store with SQLcl. As always, there is more than one way to do this, @FranckPachot recently wrote about a different approach on Medium which you might want to check out as well.

Please don’t store passwords in scripts

I have seen passwords embedded in shell scripts far too often, and that’s something I really don’t like for many, many reasons. Thankfully Oracle offers an alternative to storing clear-text passwords in the form of the Secure External Password Store (SEPS). This post explains one of many ways to use a wallet with sqlcl to connect to a database. It assumes that a Secure External Password Store is set up with the necessary credentials. Components referenced in this post are:

  • sqlcl 19.1
  • Instant Client Basic 18.5
  • Oracle XE 18.4

The SEPS wallet is found in /home/oracle/seps with its corresponding tnsnames.ora and sqlnet.ora in /home/oracle/seps/tns. I have set TNS_ADMIN to /home/oracle/seps/tns and ensured that sqlnet.ora points to the correct wallet location.
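For reference, the wallet and sqlnet.ora were created along these lines (a sketch using the standard mkstore syntax; directory names and credentials are from my lab and will differ on your system):

$ mkstore -wrl /home/oracle/seps -create
$ mkstore -wrl /home/oracle/seps -createCredential xepdb1 martin

$ cat /home/oracle/seps/tns/sqlnet.ora
WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/home/oracle/seps)))
SQLNET.WALLET_OVERRIDE=TRUE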

First attempt

The first attempt at using sqlcl with the wallet resulted in the following error:

$ /home/oracle/sqlcl/bin/sql -L /@xepdb1

SQLcl: Release 19.1 Production on Fri May 17 05:56:56 2019

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

  USER          = 
  URL           = jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=oraclexe)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=XEPDB1)))
  Error Message = ORA-01017: invalid username/password; logon denied

I provided the -L flag to prevent sqlcl from asking me for different credentials after a failed login attempt. Using the -verbose flag in the next attempt I confirmed that sqlcl was indeed using my tnsnames.ora file in the directory specified by $TNS_ADMIN.

Investigating

So I started investigating… The first place to go is the documentation; however, I didn’t find anything relevant in the command-line reference or the FAQ shown on the product’s landing page. I then cast my net wider and found a few things on My Oracle Support (they didn’t apply to my version of sqlcl) and in the Oracle forums.

I tried various things to get the thin client to cooperate with the wallet, but didn’t pursue that route further after learning about the option to use the OCI JDBC driver. After experimenting a little more I got on the right track.

Second attempt

The consensus in the Oracle forum posts I found seems to be to use the -oci flag when invoking the tool. So I tried that next:

$ /home/oracle/sqlcl/bin/sql -L -oci /@xepdb1

SQLcl: Release 19.1 Production on Fri May 17 06:09:29 2019

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

  USER          = 
  URL           = jdbc:oracle:oci8:@xepdb1
  Error Message = no ocijdbc18 in java.library.path
  USER          = 
  URL           = jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=oraclexe)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=XEPDB1)))
  Error Message = ORA-01017: invalid username/password; logon denied

No success yet, but there’s an important clue in the output: the first URL indicates that an OCI connection was indeed tried, except that a shared library was missing from Java’s library path. I guessed correctly that ocijdbc18 is part of the Instant Client 18 Basic installation. After installing the RPM for the latest 18c Instant Client I confirmed libocijdbc18.so was part of the package.
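To confirm the library is included, an rpm query helps (the package name below follows Oracle’s repository naming convention; verify against the version you installed):

$ rpm -ql oracle-instantclient18.5-basic | grep ocijdbc
/usr/lib/oracle/18.5/client64/lib/libocijdbc18.so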

From what I understand, Java doesn’t pick up the configuration created by ldconfig, and you either have to set java.library.path manually (as in java -Djava.library.path=…) or set LD_LIBRARY_PATH. The latter is easier, and it gave me the desired result:

$ export LD_LIBRARY_PATH=/usr/lib/oracle/18.5/client64/lib:$LD_LIBRARY_PATH
$ echo "select user from dual" | /home/oracle/sqlcl/bin/sql -L -oci /@xepdb1

SQLcl: Release 19.1 Production on Fri May 17 06:15:29 2019

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Last Successful login time: Fri May 17 2019 06:15:32 -04:00

Connected to:
Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production
Version 18.4.0.0.0

USER                                                                                                                            
------------------------------------------------------------------------
MARTIN                                                                                                                    

Disconnected from Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production
Version 18.4.0.0.0

Result! I can use sqlcl to connect to a database using a wallet.

3200 Clever hackers are in my PC; wow!!

Hackers are clever people; they must be to hack other people, take over their private data and steal identities and money. But I have to draw the line at the number of hackers who claim to be in my PC... [Read More]

Posted by Pete on 19/05/19 at 10:08 PM

Do you know what you are measuring with pgbench?

pgbench flamegraph

pgbench is a benchmark application for PostgreSQL. You define some parameters for the workload (read-only, volume of data, number of threads, cursor sharing, …) and measure the number of transactions per second. pgbench is used a lot when one wants to compare two alternative environments, like different PostgreSQL versions, platforms, or table designs.

However, a scientific approach should go beyond the simple correlation between the observed performance (transactions per seconds) and the configuration. Without a clear analysis and explanation on the cause-consequence, we cannot extrapolate from a single set of observations to a general recommendation. The goal of this post is to show what is behind this ‘transaction per second’ measure.

pgio flamegraphs

I’ll run another benchmark tool focused on the platform: Kevin Closson’s pgio, which is designed exactly for this kind of analysis. Rather than trying to simulate all layers of an application (like pgbench does), we can focus on a specific component: the PostgreSQL shared buffer cache, the OS filesystem cache, the storage access, …

I’m using Brendan Gregg’s FlameGraph here to visualize the full stack sampled by perf record:

brendangregg/FlameGraph

with the following flags:

perf record --call-graph dwarf -F99 -e cpu-cycles -a

I’ve compiled PostgreSQL server with the following flags:

./configure CFLAGS=" -fno-omit-frame-pointer" --enable-debug

pgbench

I’ve initialized the pgbench database with a small scale (about 100MB), as that is the only way to focus the pgbench activity: with a small size, I’ll have no physical reads:

pgbench --initialize --scale=8 pgio

Along the same lines, I run a read-only workload with 12 threads:

pgbench --no-vacuum --select-only --protocol=prepared --client=12 --jobs=12 --time=120 pgio &

Then, after waiting a few minutes for the warm-up, I record perf events:

sudo perf record --call-graph dwarf -F99 -e cpu-cycles -a \
-o /tmp/perf.data sleep 60

The result is parsed to produce a flamegraph of stack samples:

sudo perf script -i /tmp/perf.data | ./stackcollapse-perf.pl | ./flamegraph.pl --width=1200 --hash --cp > /tmp/perf.svg

Here is the result (.svg)

This is what happened in the system during the pgbench test. pgbench, the client, spends its time on PQsendQueryPrepared and PQconsumeInput, which is the minimum that can be done by an OLTP-like, well-tuned application. I ran with ‘--protocol=prepared’ to avoid parsing overhead, which is not what I want to measure.

The postgres process is running the backend, and this is where we realize that the real database work (running DML and committing) is not where pgbench spends its time: less than 15% of samples are in the backend executor (ExecScan) and 6% in CommitTransaction (even for a select-only workload there’s a commit here). What remains is ReadyForQuery and pq_getbyte, which are about frontend-backend communication.

If you run a benchmark to measure something other than the network roundtrips and context switches involved in the client/server communication, then this pgbench workload is not the right tool.

If you benchmark to compare CPU and RAM activity, for example because you want to choose the best compute shape from your cloud provider, then you need to run something that is focused on this activity, in a sustainable way.

pgio

I’ll use Kevin Closson’s ‘pgio’, which takes the same approach as his ‘SLOB’ for Oracle:

SLOB Resources

The settings in pgio.conf are similar in size and number of threads (I don’t want physical I/O, so everything stays in cache):

UPDATE_PCT=0
RUN_TIME=60
NUM_SCHEMAS=1
NUM_THREADS=12
WORK_UNIT=255
UPDATE_WORK_UNIT=8
SCALE=100M
DBNAME=pgio
CONNECT_STRING=pgio
CREATE_BASE_TABLE=TRUE

The setup and run are easy, and again I record perf events after a little warm-up:

sh ./setup.sh
sh ./runit.sh &
sudo perf record --call-graph dwarf -F99 -e cpu-cycles -a \
-o /tmp/perf.data sleep 60

Same flamegraph (using same colors):

sudo perf script -i /tmp/perf.data | ./stackcollapse-perf.pl | ./flamegraph.pl --width=1200 --hash --cp > /tmp/perf.svg

And here is the .svg result:

There’s no frontend work here because everything runs from a PL/pgSQL loop, so no roundtrips, network calls or context switches are there to influence my measures. Most of the activity is in the query executor, accessing the shared buffers. This is what you want if you want to compare platform configurations like:

  • cloud compute shapes
  • NUMA
  • large pages
  • memory settings
  • filesystem cache
  • compression / encryption
  • various Intel security bug mitigation patches

And instead of ‘transaction per second’ pgio will measure the number of buffers read per second and the cache hits.

In summary…

pgbench is not the right tool if you want to measure specific platform components, or the postgres components interfacing with the system (buffer cache, WAL, writer, …). pgbench can be used to test the database for the application. But in any case, one number like ‘transactions per second’ is not sufficient. FlameGraph can help to visualize what is behind this measure.

GG, this approach (pathfinder) can be used with GTT

GG, this approach (pathfinder) can be used with GTT. You can fill relevant data in the script.sql before the statement that is tagged.

Hacking together faster INSERTs

Most developer tools out there have some mechanism to unload a table into a flat file, either as CSV or Excel, and some even allow you to unload the data as INSERT statements. The latter is pretty cool because it’s a nice way of having a self-contained file that does not need Excel or Data Pump or any tool additional to the one you’re probably using to unload the data.

SQLcl and SQL Developer are perhaps the easiest to utilize for such an extract. You simply add the pseudo-hint /*insert*/ to get the output as INSERT statements. For example:


SQLcl: Release 18.4 Production on Wed Apr 17 17:05:49 2019

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Last Successful login time: Wed Apr 17 2019 17:05:49 +08:00

Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.6.0.0.0

SQL> select /*insert*/ * from t where rownum <= 10;
REM INSERTING into T
SET DEFINE OFF;
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','I_FILE#_BLOCK#','INDEX');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','I_OBJ3','INDEX');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','I_TS1','INDEX');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','I_CON1','INDEX');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','IND$','TABLE');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','CDEF$','TABLE');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','C_TS#','CLUSTER');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','I_CCOL2','INDEX');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','I_PROXY_DATA$','INDEX');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','I_CDEF4','INDEX');
...
...

That’s pretty cool, but no matter what tool you use, you ultimately end up with a set of INSERT statements that load one row at a time into the database. For a small number of rows, that is no problem. But for larger numbers of rows you might be sitting around for a long time waiting for that script to finish. There are two things that are going to slow us down here.

1) Parsing

Every insert is a brand new statement since it contains literal values. Even for a simple INSERT, that’s a lot of parsing work for the database. This is easily solved with a couple of ALTER SESSION wrappers at the head and tail of the script.


alter session set cursor_sharing = force;
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','I_FILE#_BLOCK#','INDEX');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','I_OBJ3','INDEX');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','I_TS1','INDEX');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','I_CON1','INDEX');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','IND$','TABLE');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','CDEF$','TABLE');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','C_TS#','CLUSTER');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','I_CCOL2','INDEX');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','I_PROXY_DATA$','INDEX');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','I_CDEF4','INDEX');
...
...
alter session set cursor_sharing = exact;

That will replace the literals with bind variable values and hence all of the INSERTs will be sharable.
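For illustration, with cursor_sharing=force the statement text stored in the shared pool has its literals replaced by system-generated binds, so all of the inserts above collapse into a single sharable cursor along the lines of:

Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values (:"SYS_B_0",:"SYS_B_1",:"SYS_B_2")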

2) Row by Row

A set of INSERT statements is inherently a row-by-row operation. One INSERT = one row. Of course, we developers have all had it drummed into our programming minds over the years that database operations should be done in sets, not row by row. But how can we get to that utopia with a set of INSERT statements? We could totally rewrite the script with bind arrays and the like, but that’s a big job. So here is my quick hack to convert single-row inserts into multiple-row inserts without too much effort.

A set of individual INSERTS such as


Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','I_FILE#_BLOCK#','INDEX');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','I_OBJ3','INDEX');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','I_TS1','INDEX');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','I_CON1','INDEX');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','IND$','TABLE');

can also be written as a multi-table INSERT, where all insert targets are the same table and all WHEN conditions are omitted and hence are implicitly true. The above set of INSERTs can be phrased as a single statement like this:


insert all
into t(OWNER,OBJECT_NAME,OBJECT_TYPE)  values (q'{SYS}',q'{I_FILE#_BLOCK#}',q'{INDEX}')
into t(OWNER,OBJECT_NAME,OBJECT_TYPE)  values (q'{SYS}',q'{I_OBJ3}',q'{INDEX}')
into t(OWNER,OBJECT_NAME,OBJECT_TYPE)  values (q'{SYS}',q'{I_TS1}',q'{INDEX}')
into t(OWNER,OBJECT_NAME,OBJECT_TYPE)  values (q'{SYS}',q'{I_CON1}',q'{INDEX}')
into t(OWNER,OBJECT_NAME,OBJECT_TYPE)  values (q'{SYS}',q'{IND$}',q'{TABLE}')
select * from dual;

You could write some awk/sed etc. to take an existing INSERT script and make it into a multi-table one. I took a slightly different approach and whipped up some PL/SQL to allow pipelining the appropriate multi-table inserts out to a spool file. Here is the function I wrote to do it.


create or replace function as_insert(p_query varchar2, p_batch int default 10) return sys.odcivarchar2list pipelined as
    l_theCursor     integer default dbms_sql.open_cursor;
    l_columnValue   varchar2(4000);
    l_status        integer;
    l_descTbl       dbms_sql.desc_tab;
    l_colCnt        number;
    n number := 0;
    
    l_tname varchar2(200) := substr(p_query,instr(p_query,' ',-1,1)+1); -- assumes the query text ends with the target table name
    l_collist varchar2(32000);
    l_colval varchar2(32000);
    l_dml varchar2(32000);
    
    l_nls sys.odcivarchar2list := sys.odcivarchar2list();
    
begin
   if l_tname is null then l_tname := '@@TABLE@@'; end if;

   select value
   bulk collect into l_nls
   from v$nls_parameters
   where parameter in (   
      'NLS_DATE_FORMAT',
      'NLS_TIMESTAMP_FORMAT',
      'NLS_TIMESTAMP_TZ_FORMAT')
   order by parameter;

    execute immediate 'alter session set nls_date_format=''yyyy-mm-dd hh24:mi:ss'' ';
    execute immediate 'alter session set nls_timestamp_format=''yyyy-mm-dd hh24:mi:ssff'' ';
    execute immediate 'alter session set nls_timestamp_tz_format=''yyyy-mm-dd hh24:mi:ssff tzr'' ';

    dbms_sql.parse(  l_theCursor,  p_query, dbms_sql.native );
    dbms_sql.describe_columns( l_theCursor, l_colCnt, l_descTbl );

    for i in 1 .. l_colCnt loop
        dbms_sql.define_column(l_theCursor, i, l_columnValue, 4000);
        l_collist := l_collist || l_descTbl(i).col_name||',';
    end loop;
    l_collist := 'into '||l_tname||'('||rtrim(l_collist,',')||')';

    l_status := dbms_sql.execute(l_theCursor);

    pipe row('alter session set cursor_sharing = force;');
    while ( dbms_sql.fetch_rows(l_theCursor) > 0 ) loop
       n := n + 1;
  
       if mod(n,p_batch) = 1 then
          pipe row('insert all ');
       end if;
       
        for i in 1 .. l_colCnt loop
            dbms_sql.column_value( l_theCursor, i, l_columnValue );
            if l_columnValue is null then
              l_colval := l_colval || 'null,';
            elsif l_descTbl(i).col_type in (1,8,9,96,112) then
              l_colval := l_colval || 'q''{'||l_columnValue ||'}''' || ',';
            elsif l_descTbl(i).col_type in (2,100,101) then
              l_colval := l_colval || l_columnValue || ',';
            elsif l_descTbl(i).col_type in (12) then
              l_colval := l_colval || 'to_date('''||l_columnValue||''',''yyyy-mm-dd hh24:mi:ss'')' || ',';
            elsif l_descTbl(i).col_type in (180) then
              l_colval := l_colval || 'to_timestamp('''||l_columnValue||''',''yyyy-mm-dd hh24:mi:ssff'')' || ',';
            elsif l_descTbl(i).col_type in (181) then
              l_colval := l_colval ||'to_timestamp_tz('''||l_columnValue||''',''yyyy-mm-dd hh24:mi:ssff tzr'')' || ',';
            elsif l_descTbl(i).col_type in (231) then
              l_colval := l_colval || 'to_timestamp('''||l_columnValue||''',''yyyy-mm-dd hh24:mi:ssff'')' || ',';
            elsif l_descTbl(i).col_type in (182) then
              l_colval := l_colval || 'to_yminterval('''||l_columnValue||''')' || ',';
            elsif l_descTbl(i).col_type in (183) then
              l_colval := l_colval ||'to_dsinterval('''||l_columnValue||''')'  || ',';
            end if;
        end loop;
        l_colval := rtrim(l_colval,',')||')';
        pipe row( l_collist  );
        pipe row( '  values ('||l_colval );
        if mod(n,p_batch) = 0 then
          pipe row('select * from dual;');
        end if;
        l_colval := null;
    end loop;
    if n = 0 then
      pipe row( 'No data found ');
    elsif mod(n,p_batch) != 0 then
      pipe row('select * from dual;');
    end if;
    pipe row('alter session set cursor_sharing = exact;');

    execute immediate 'alter session set nls_date_format='''||l_nls(1)||''' ';
    execute immediate 'alter session set nls_timestamp_format='''||l_nls(2)||''' ';
    execute immediate 'alter session set nls_timestamp_tz_format='''||l_nls(3)||''' ';
    dbms_sql.close_cursor(l_theCursor); -- close the cursor opened in the declaration
    return;
end;
/
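To produce a script with it, you can spool the function output from SQLcl or SQL*Plus; a minimal sketch (the query and batch size are illustrative):

set pagesize 0 linesize 4000 trimspool on feedback off
spool /tmp/multi_insert.sql
select column_value from table(as_insert('select empno, ename, sal from scott.emp', 10));
spool off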

and here’s an example of the output you’d expect to see from it:


alter session set cursor_sharing = force;
insert all
into scott.emp(EMPNO,ENAME,SAL)
  values (7369,q'{SMITH}',800)
into scott.emp(EMPNO,ENAME,SAL)
  values (7499,q'{ALLEN}',1600)
into scott.emp(EMPNO,ENAME,SAL)
  values (7521,q'{WARD}',1250)
into scott.emp(EMPNO,ENAME,SAL)
  values (7566,q'{JONES}',2975)
into scott.emp(EMPNO,ENAME,SAL)
  values (7654,q'{MARTIN}',1250)
into scott.emp(EMPNO,ENAME,SAL)
  values (7698,q'{BLAKE}',2850)
into scott.emp(EMPNO,ENAME,SAL)
  values (7782,q'{CLARK}',2450)
into scott.emp(EMPNO,ENAME,SAL)
  values (7788,q'{SCOTT}',3000)
into scott.emp(EMPNO,ENAME,SAL)
  values (7839,q'{KING}',5000)
into scott.emp(EMPNO,ENAME,SAL)
  values (7844,q'{TURNER}',1500)
select * from dual;
insert all
into scott.emp(EMPNO,ENAME,SAL)
  values (7876,q'{ADAMS}',1100)
into scott.emp(EMPNO,ENAME,SAL)
  values (7900,q'{JAMES}',950)
into scott.emp(EMPNO,ENAME,SAL)
  values (7902,q'{FORD}',3000)
into scott.emp(EMPNO,ENAME,SAL)
  values (7934,q'{MILLER}',1300)
select * from dual;
alter session set cursor_sharing = exact;

Let’s do a performance test on a large sample set. In this test, I’m loading a little over one million rows into an empty table, where the performance harness for each INSERT mechanism looks like the following, outputting the start and end times of the script:


select systimestamp from dual;
alter session set cursor_sharing = force;
set feedback off


[ Set of INSERT, or set of MULTI-TABLE inserts with 50 rows per INSERT]
set feedback on
alter session set cursor_sharing = exact;
select systimestamp from dual;

Running each gave the following results.


SQL> @c:\tmp\normal_insert.sql

SYSTIMESTAMP
---------------------------------------
17-APR-19 03.39.53.189000 PM +08:00


Session altered.


Session altered.


SYSTIMESTAMP
---------------------------------------
17-APR-19 03.43.47.920000 PM +08:00


SQL> @c:\tmp\multi_insert.sql

SYSTIMESTAMP
---------------------------------------
17-APR-19 03.43.40.402000 PM +08:00


Session altered.


Session altered.


SYSTIMESTAMP
---------------------------------------
17-APR-19 03.44.27.319000 PM +08:00

So that’s roughly a five-fold speed boost, down from 3 minutes 54 seconds to less than 50 seconds.

Just to reiterate – I’m not suggesting you need to go to this level for all INSERT scripts. Most of the time I just add the cursor_sharing options to my scripts, and that makes them plenty fast enough for the job. But if you have a large number of inserts to do, then converting them to a multi-table equivalent might give you a nice boost.

Here’s the video version

I ‘fixed’ execution plan regression with optimizer_features_enable, what to do next?

Here is a simple example of using Mauro Pagano’s ‘pathfinder’ tool when you don’t really want to run the query, but just get the execution plan under all variations of optimizer settings. That’s something I have used many times in situations similar to this one:

  • the database was upgraded, say from 11.2.0.4 to 19.3
  • one (or a few) SQL statements have problematic performance regression
  • the execution plan (in 19.3) is different than from the previous version (11.2.0.4) — you get both with SQL Tuning Sets or AWR
  • you set optimizer_features_enable to 11.2.0.4 and the old plan with acceptable performance is back

That’s a quick workaround, thanks to this unique Oracle Optimizer feature which lets us run the latest version of the database with a previous version of the optimizer code. But the goal is not to stay like this for long. Once the service is made acceptable again with this temporary setting, the second step is to understand which bug or feature is responsible for the change. Then, at least, the workaround can be limited to one underscore setting instead of the generic optimizer_features_enable, which sets hundreds of them. The third step, then, will be to fix the root cause, of course, and understanding what was wrong will help.

This post is about the second step — going from the general optimizer_features_enable to a unique focused setting.

This is something I had wanted to write about for a long time, but I was always in a rush when encountering this kind of problem. I’m currently attending Mike Dietrich’s upgrade workshop at the AOUG conference in Vienna, and this change of execution plan is addressed by the exercises. Mike covers the tools that can be used to compare performance before and after the upgrade: capture the statements and performance statistics, compare them, and fix regressions with SQL Plan Management.

The workshop instructions are on Mike’s blog:

HOL 19c - Main Index Page

If you did the workshop you will have seen that the query sql_id=13dn4hkrzfpdy has a different execution plan between 11g and 19c, and the idea of the lab is to fix the previous plan with a SQL Plan Baseline. That’s perfect, but I was curious about the reason for this execution plan change. There are many new features and fixes between 11.2.0.4 and 19.3, and one of them is probably responsible.

This is where Mauro Pagano’s ‘pathfinder’ can be used. Setting optimizer_features_enable is a shortcut to set all the individual feature and fix controls, and pathfinder will try each of them one by one.
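Conceptually, each pathfinder iteration toggles one setting, explains the tagged statement and records the resulting plan hash value; a sketch of one round (the fix number here is a placeholder, not a real one):

alter session set "_fix_control" = '9999999:0'; -- placeholder fix number, disabled
explain plan for
select /* ^^pathfinder_testid */ count(distinct (s_i_id)) from order_line, stock
 where ol_w_id = :b2 and ol_d_id = :b4 and ol_o_id < :b3 and ol_o_id >= (:b3 - 20)
   and s_w_id = :b2 and s_i_id = ol_i_id and s_quantity < :b1;
-- capture the plan hash value, reset the setting, continue with the next one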

The query with a plan regression was:

SQL Details:
-----------------------------
Object ID : 34
Schema Name : TPCC
Container Name : Unknown (con_dbid: 72245725)
SQL ID : 13dn4hkrzfpdy
Execution Frequency : 3273
SQL Text :
SELECT COUNT(DISTINCT (S_I_ID)) FROM ORDER_LINE, STOCK
WHERE OL_W_ID = :B2 AND OL_D_ID = :B4 AND (OL_O_ID < :B3
) AND OL_O_ID >= (:B3 - 20) AND S_W_ID = :B2 AND S_I_ID =
OL_I_ID AND S_QUANTITY < :B1

The plan before and after, as reported by AWR Diff Report are the following:

And my goal is to understand which feature or fix control, when disabled, brings back plan hash value 954326358 instead of 3300316041.

I installed sqldb360 (open sourced by Carlos Sierra and Mauro Pagano), which contains pathfinder:

git clone https://github.com/sqldb360/sqldb360.git
cd ./sqldb360/sql/

I changed the script.sql to put my query there with an EXPLAIN PLAN because I don’t want to execute it (which would require parameters):

alter session set current_schema=TPCC;
explain plan for
SELECT /* ^^pathfinder_testid */
COUNT(DISTINCT (S_I_ID)) FROM ORDER_LINE, STOCK
WHERE OL_W_ID = :B2 AND OL_D_ID = :B4 AND (OL_O_ID < :B3
) AND OL_O_ID >= (:B3 - 20) AND S_W_ID = :B2 AND S_I_ID =
OL_I_ID AND S_QUANTITY < :B1

By default, pathfinder executes the query and gets the execution plan with dbms_xplan.display_cursor, using the tag in the comment to identify it.

Here I’m doing an EXPLAIN PLAN instead, so I changed pathfinder.sql to use dbms_xplan.display. My change in the ‘xplan driver’ is the following:

I left the old query in place, but added the following one to be executed:

-- my addition there
PRO .
PRO SELECT RPAD('explain plan', 11) inst_child, plan_table_output
PRO FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE', NULL, 'ADVANCED'))
-- done
PRO /

Then running pathfinder:

[oracle@hol]$ sqlplus / as sysdba @ pathfinder.sql '"/ as sysdba"'

This takes some time, as it tests all settings for the optimizer underscore parameters (632 of them here in 19.3) and fix controls (1459 here):

The result is a zip file containing an index and the detail of each test.

The index (00001_pathfinder_upgr_20190515_1113_index.html) has one line per combination, and it is easy to search for the plan hash value:

My old plan is chosen when _optimizer_partial_join_eval is set to false:

And now I have a better workaround. Instead of setting optimizer_features_enable, I can set only:

ALTER SESSION SET "_optimizer_partial_join_eval" = FALSE;

Of course, my search for the plan hash value also highlights which versions set the same:

The goal of this post is to show the tool. If you want to know more about Partial Join Evaluation, Google tells me that I blogged about this in the past:

Partial Join Evaluation in Oracle 12c - Blog dbi services

The query here, a count(distinct) on a join, is subject to this optimization which changes the join to a semi-join.

If I can change the query, maybe I’ll prefer to disable it with a hint. If I click on the baseline plan from the pathfinder index, I can see the plan with hints:

Then probably a NO_PARTIAL_JOIN hint can disable this feature.
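A hinted version would be a sketch like the following; the query block and table alias specification must be taken from the outline that pathfinder displays (the ones below are illustrative only):

select /*+ NO_PARTIAL_JOIN(@"SEL$1" "STOCK"@"SEL$1") */ count(distinct (s_i_id))
  from order_line, stock
 where ol_w_id = :b2 and ol_d_id = :b4 and ol_o_id < :b3 and ol_o_id >= (:b3 - 20)
   and s_w_id = :b2 and s_i_id = ol_i_id and s_quantity < :b1;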

Side remark: you can see OPTIMIZER_FEATURES_ENABLE(‘19.1.0’) but I told you that I’m on 19.3, right? And this is the pathfinder baseline without any session setting. I didn’t expect 19.3 there, because Release Updates should not add features that change the execution plan, but I expected something like ‘19.0.0’. The magic of the new release model…

In summary:

  • Pathfinder is easy to run; give it a try when you need to understand why an execution plan has changed.
  • Do the Mike Dietrich hands-on lab: upgrade is something to exercise before doing it in production.
  • Since 8i, the Oracle Optimizer developers have added a flag for every change, in order to give us the possibility to enable or disable the feature or fix. And you can control it at instance, session or query level. This is a unique feature you do not find in other database systems, and it can save your business, because a critical regression can always happen after an upgrade.
  • The AOUG conference had a great idea with the ‘workshop and live-demo’ day before the conference day: fewer attendees and more interaction with the speakers.

Linux Scripting, Part II

In Part I, we started with some scripting basics, as in how to write a script. This included the concept of breaking a script into sections (introduction, body and conclusion).

For Part II, we’ll start with the BASH script “introduction”.

The introduction in a BASH script should begin the same way in all scripts:

  1. Set the shell to be used for the script
  2. Set the response to failure on any step (exit or ignore)
  3. Add a step for testing, but comment it out or remove it in production

For our scripts, we’ll keep to the BASH format that is used by the template scripts, ensuring a repeatable and easy to identify introduction.

#!/bin/bash
set -euo pipefail
# set -eox pipefail #-x for testing
IFS=$'\n\t'

In the next section, we’ll set up dynamic argument requirements. By using this format, we create a very dynamic experience for the person executing the script. You can pass in values as arguments directly, but this way you ask a question, offer examples or values as feedback, and the person executing the script can give a clear answer. The answers are then used to dynamically populate values farther into the commands being executed.

To do this, we first set up the required arguments for our script. The example script we’ll use deploys a full, end-to-end solution in Azure.

usage() { echo "Usage: $0 -g <groupname> -p <password> -h <holname> -l <zone> -d <data> -b <brcksize>" 1>&2; exit 1; }
 declare groupname=""
 declare password=""
 declare holname=""
 declare zone=""
 declare data=""
 declare brcksize=""

Once we have set up the required arguments that must be passed (otherwise the script exits), we set our declarations for each of the arguments so the script can start out with empty values.

Next, we initialize the parameters specified on the command line:

while getopts ":g:p:h:l:d:b:" arg; do
        case "${arg}" in

                g)
                        groupname=${OPTARG}
                        ;;
                p)
                        password=${OPTARG}
                        ;;
                h)
                        holname=${OPTARG}
                        ;;
                l)
                        zone=${OPTARG}
                        ;;
                d)
                        data=${OPTARG}
                        ;;
                b)
                        brcksize=${OPTARG}
                        ;;
                esac
done
shift $((OPTIND-1))

In the above section, we read the value supplied for each option into its (initially blank) variable, and then shift past the processed options.

Passing in Values

We then let the script know we are done setting the variables we’ll be working with in the next section.

For each of the above values, I then choose to interact with the person executing the script and ask questions to fulfill them. I don’t expect them to know what to enter, so this way I can offer insight into the values they’ll want to provide. Let’s dig into the first two entries so you can see an example of how this is scripted:

if [[ -z "$groupname" ]]; then
  echo "What is the name for the resource group to create the deployment in? Example: EDU_Group"
  echo "Enter your Resource group name:"
  read groupname
  [[ "${groupname:?}" ]]
fi

if [[ -z "$password" ]]; then
  echo "Your database login will be sqladmin and you'll need a password for this login."
  echo "Passwords must meet Microsoft requirements, including caps and special characters. Example: SQLAdm1nt3stng!"
  echo "Enter the login password:"
  read password
  [[ "${password:?}" ]]
fi

In the above sections, you’ll note that we start with an IF statement for each variable, make a statement and ask a question using simple ECHO commands, prompt for and read the value, and then exit the IF statement.

For the first example, groupname, the user will see the following on their screen:

What is the name for the resource group to create the deployment in? Example: EDU_Group

Enter your Resource group name.

Then the user will be prompted to enter their value. This makes the experience much easier on the person interacting with the script, providing more value.

Notice in the next example, for password, I’ve not only stated that the SQL Database password requirements must be met, I’ve added an example password, too. Passwords can easily be hardened after the hands-on lab, so it makes sense to make this as easy as possible and set up whoever works with your script for success. I’ve built a second script that hardens the environment anyway. Always script as if a five-year-old is going to be the one executing it, and think ahead with additional enhancements as you go along.
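Once collected, these values feed the deployment commands farther down the script. For example, the resource group creation could look like this hypothetical Azure CLI call (not taken from the original script):

az group create --name "$groupname" --location "$zone"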

I’ll let you absorb this and we’ll move onto the next step soon!

 



Copyright © DBAKevlar [Linux Scripting, Part II], All Right Reserved. 2019.

Indexing Null Values - Part 1

Indexing null values in Oracle is something that has been written about a lot in the past. Nowadays it should be common knowledge that Oracle B*Tree indexes don't index entries that are entirely null, but it's possible to include null values in B*Tree indexes by combining them with something guaranteed to be non-null, be it another column or simply a constant expression.
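As a quick reminder of the pattern in its minimal form before the demo (illustrative table and column names):

create index t_idx on t (col1, ' '); -- the constant ' ' is never null, so rows where col1 is null are indexed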

Jonathan Lewis not too long ago published a note that showed an oddity when dealing with IS NULL predicates which in the end turned out not to be a real threat, and looked more like a quirk in how Oracle displays the access and filter predicates when accessing an index using IS NULL together with other predicates following after.

However, I've recently come across a rather similar case where this display oddity turns into a real threat. To get things started, let's have a look at the following (this is from 18.3.0, but other recent versions should show similar results):

SQL> create table null_index as select * from dba_tables;

Table created.

SQL> insert /*+ append */ into null_index select a.* from null_index a, (select /*+ no_merge cardinality(100) */ rownum as dup from dual connect by level <= 100);

214700 rows created.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats(null, 'NULL_INDEX', method_opt => 'for all columns size auto for columns size 254 owner')

PL/SQL procedure successfully completed.

SQL> create index null_index_idx on null_index (pct_free, ' ');

Index created.

SQL> set serveroutput off pagesize 5000 arraysize 500

Session altered.

SQL> set autotrace traceonly
SQL>
SQL> select * from null_index where pct_free is null and owner in ('AUDSYS', 'CBO_TEST');

101 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 3608178030

------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 19 | 5852 | 1028 (1)| 00:00:01 |
|* 1 | TABLE ACCESS BY INDEX ROWID BATCHED| NULL_INDEX | 19 | 5852 | 1028 (1)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | NULL_INDEX_IDX | 13433 | | 32 (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST')
2 - access("PCT_FREE" IS NULL)


Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
2178 consistent gets
35 physical reads
0 redo size
7199 bytes sent via SQL*Net to client
372 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
101 rows processed

So this is the known approach of indexing null values by simply adding a constant expression and we can see from the execution plan that indeed the index was used to identify the rows having NULLs.

But we can also see from the execution plan, the number of consistent gets and also the Rowsource Statistics that this access can surely be further improved:

Plan hash value: 3608178030

--------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | Reads |
--------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1028 (100)| 101 |00:00:00.01 | 2178 | 35 |
|* 1 | TABLE ACCESS BY INDEX ROWID BATCHED| NULL_INDEX | 1 | 19 | 1028 (1)| 101 |00:00:00.01 | 2178 | 35 |
|* 2 | INDEX RANGE SCAN | NULL_INDEX_IDX | 1 | 13433 | 32 (0)| 13433 |00:00:00.01 | 30 | 35 |
--------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter(("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST'))
2 - access("PCT_FREE" IS NULL)

Because the additional predicate on OWNER can only be applied on table level, we first identify more than 13,000 rows on index level, visit all those table rows via random access and apply the filter to end up with the final 101 rows.

So obviously we should add OWNER to the index to avoid visiting that many table rows:

SQL> create index null_index_idx2 on null_index (pct_free, owner);

Index created.

SQL> select * from null_index where pct_free is null and owner in ('AUDSYS', 'CBO_TEST');

101 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 3808602675

-------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 19 | 5852 | 40 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID BATCHED| NULL_INDEX | 19 | 5852 | 40 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | NULL_INDEX_IDX2 | 19 | | 38 (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("PCT_FREE" IS NULL)
filter("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST')



Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
137 consistent gets
61 physical reads
0 redo size
33646 bytes sent via SQL*Net to client
372 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
101 rows processed

Plan hash value: 3808602675

---------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | Reads |
---------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 40 (100)| 101 |00:00:00.01 | 137 | 61 |
| 1 | TABLE ACCESS BY INDEX ROWID BATCHED| NULL_INDEX | 1 | 19 | 40 (0)| 101 |00:00:00.01 | 137 | 61 |
|* 2 | INDEX RANGE SCAN | NULL_INDEX_IDX2 | 1 | 19 | 38 (0)| 101 |00:00:00.01 | 36 | 61 |
---------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("PCT_FREE" IS NULL)
filter(("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST'))

So at first sight this looks indeed like an improvement, and it is compared to the previous execution plan; see for example how the number of consistent gets has been reduced. However, there is something odd going on: the index cost part is even greater than in the previous example, and looking more closely at the predicate information section it becomes obvious that the additional predicate on OWNER isn't applied as an access predicate to the index, but only as a filter.

This means that rather than directly identifying the relevant parts of the index by navigating the index structure efficiently using both predicates, only the PCT_FREE IS NULL expression gets used to identify the more than 13,000 corresponding index entries, and the filter on OWNER is applied afterwards. While this is better than applying the filter on table level, it can still become a very costly operation, and the question here is: why doesn't Oracle use both expressions to access the index?

The answer to me looks like an implementation restriction - I don't see any technical reason why Oracle shouldn't be capable of doing so. Currently it looks like, in this particular case, an IN predicate (or the equivalent OR predicates) following an IS NULL on index level gets applied only as a filter, similar to predicates following range or unequal comparisons, or when skipping columns / expressions in a composite index. But for those cases there is a reason why Oracle does so - it can no longer use the sorted index entries for efficient access. I don't see why this should apply to this IS NULL case - and Jonathan's note above shows that in principle, for other kinds of predicates, it works as expected (except for the oddity discussed).

This example highlights another oddity: Since it contains an IN list, ideally we would like to see an INLIST ITERATOR used as part of the execution plan, but there is only an INDEX RANGE SCAN operation using this FILTER expression.

By changing the order of the index expressions and having the expression used for the IS NULL predicate as trailing one, we can see the following:

SQL> create index null_index_idx3 on null_index (owner, pct_free);

Index created.

SQL> select * from null_index where pct_free is null and owner in ('AUDSYS', 'CBO_TEST');

101 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 2178707950

--------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 19 | 5852 | 6 (0)| 00:00:01 |
| 1 | INLIST ITERATOR | | | | | |
| 2 | TABLE ACCESS BY INDEX ROWID BATCHED| NULL_INDEX | 19 | 5852 | 6 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | NULL_INDEX_IDX3 | 19 | | 4 (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access(("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST') AND "PCT_FREE" IS NULL)


Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
108 consistent gets
31 physical reads
0 redo size
33646 bytes sent via SQL*Net to client
372 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
101 rows processed

Plan hash value: 2178707950

----------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | Reads |
----------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 6 (100)| 101 |00:00:00.01 | 108 | 31 |
| 1 | INLIST ITERATOR | | 1 | | | 101 |00:00:00.01 | 108 | 31 |
| 2 | TABLE ACCESS BY INDEX ROWID BATCHED| NULL_INDEX | 2 | 19 | 6 (0)| 101 |00:00:00.01 | 108 | 31 |
|* 3 | INDEX RANGE SCAN | NULL_INDEX_IDX3 | 2 | 19 | 4 (0)| 101 |00:00:00.01 | 7 | 31 |
----------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access((("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST')) AND "PCT_FREE" IS NULL)

So this is the expected execution plan, including an INLIST ITERATOR and showing that all predicate expressions get used to access the index efficiently, reducing the number of consistent gets further. Of course, a potential downside here is that this index might not be appropriate if queries are looking for PCT_FREE IS NULL only.

Summary

It looks like IN / OR predicates following an IS NULL comparison on index level are applied only as filters, and therefore also prevent other efficient operations like inlist iterators. In principle the problem can be worked around by putting the IS NULL expression at the end of a composite index, but that could come at the price of requiring an additional index on the IS NULL expression alone, should there be a need to search for just that expression efficiently.

In Part 2, out of curiosity, I'll have a look at what happens when applying the same to bitmap indexes, which include NULL values anyway...

Script used:

set echo on

drop table null_index purge;

create table null_index as select * from dba_tables;

insert /*+ append */ into null_index select a.* from null_index a, (select /*+ no_merge cardinality(100) */ rownum as dup from dual connect by level <= 100);

commit;

exec dbms_stats.gather_table_stats(null, 'NULL_INDEX', method_opt => 'for all columns size auto for columns size 254 owner')

create index null_index_idx on null_index (pct_free, ' ');

set serveroutput off pagesize 5000 arraysize 500

set autotrace traceonly

select * from null_index where pct_free is null and owner in ('AUDSYS', 'CBO_TEST');

create index null_index_idx2 on null_index (pct_free, owner);

select * from null_index where pct_free is null and owner in ('AUDSYS', 'CBO_TEST');

create index null_index_idx3 on null_index (owner, pct_free);

select * from null_index where pct_free is null and owner in ('AUDSYS', 'CBO_TEST');

Oracle Instant Client RPM installation: where to find things

Last week I blogged about the option to install Oracle’s Instant Client via the public YUM repository. If you go ahead and try this, there is one thing you will undoubtedly notice: file locations are rather unusual if you have worked with Oracle for a while. This is true at least for the 19c Instant Client; it might be similar for older releases, although I didn’t check. I’d like to thank @oraclebase for prompting me to write this short article!

Installing the 19.3 “Basic” Instant Client package

So to start this post I am going to install the 19.3 “Basic” package on my Oracle Linux 7.6 lab environment:

$ sudo yum install oracle-instantclient19.3-basic
Loaded plugins: ulninfo
Resolving Dependencies
--> Running transaction check
---> Package oracle-instantclient19.3-basic.x86_64 0:19.3.0.0.0-1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package                     Arch   Version      Repository                Size
================================================================================
Installing:
 oracle-instantclient19.3-basic
                             x86_64 19.3.0.0.0-1 ol7_oracle_instantclient  51 M

Transaction Summary
================================================================================
Install  1 Package

Total download size: 51 M
Installed size: 225 M
Is this ok [y/d/N]: y
Downloading packages:
oracle-instantclient19.3-basic-19.3.0.0.0-1.x86_64.rpm     |  51 MB   00:09     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : oracle-instantclient19.3-basic-19.3.0.0.0-1.x86_64           1/1 
  Verifying  : oracle-instantclient19.3-basic-19.3.0.0.0-1.x86_64           1/1 

Installed:
  oracle-instantclient19.3-basic.x86_64 0:19.3.0.0.0-1                          

Complete!

With the software installed, let’s have a look at where everything is:

$ rpm -ql oracle-instantclient19.3-basic
/etc/ld.so.conf.d/oracle-instantclient.conf
/usr/lib/oracle
/usr/lib/oracle/19.3
/usr/lib/oracle/19.3/client64
/usr/lib/oracle/19.3/client64/bin
/usr/lib/oracle/19.3/client64/bin/adrci
/usr/lib/oracle/19.3/client64/bin/genezi
/usr/lib/oracle/19.3/client64/lib
/usr/lib/oracle/19.3/client64/lib/libclntsh.so
/usr/lib/oracle/19.3/client64/lib/libclntsh.so.10.1
/usr/lib/oracle/19.3/client64/lib/libclntsh.so.11.1
/usr/lib/oracle/19.3/client64/lib/libclntsh.so.12.1
/usr/lib/oracle/19.3/client64/lib/libclntsh.so.18.1
/usr/lib/oracle/19.3/client64/lib/libclntsh.so.19.1
/usr/lib/oracle/19.3/client64/lib/libclntshcore.so
/usr/lib/oracle/19.3/client64/lib/libclntshcore.so.19.1
/usr/lib/oracle/19.3/client64/lib/libipc1.so
/usr/lib/oracle/19.3/client64/lib/libmql1.so
/usr/lib/oracle/19.3/client64/lib/libnnz19.so
/usr/lib/oracle/19.3/client64/lib/libocci.so.19.1
/usr/lib/oracle/19.3/client64/lib/libociei.so
/usr/lib/oracle/19.3/client64/lib/libocijdbc19.so
/usr/lib/oracle/19.3/client64/lib/liboramysql19.so
/usr/lib/oracle/19.3/client64/lib/network
/usr/lib/oracle/19.3/client64/lib/network/admin
/usr/lib/oracle/19.3/client64/lib/network/admin/README
/usr/lib/oracle/19.3/client64/lib/ojdbc8.jar
/usr/lib/oracle/19.3/client64/lib/xstreams.jar
/usr/share/oracle
/usr/share/oracle/19.3
/usr/share/oracle/19.3/client64
/usr/share/oracle/19.3/client64/doc
/usr/share/oracle/19.3/client64/doc/BASIC_LICENSE
/usr/share/oracle/19.3/client64/doc/BASIC_README

As you can see in the RPM output, files are found under /usr/lib/oracle. That’s what I meant by the file locations being unusual. I, for my part, have followed the directory structure suggested by previous releases’ Oracle Universal Installer (OUI) defaults and installed the client under /u01/app/oracle/product/version/client_1. The actual location, however, doesn’t really matter.

No more manually calling ldconfig with 19.3

Note that before 19.3 you had to manually run ldconfig after installing the Instant Client RPM file. In 19c this is handled via a post-install script. The RPM adds its configuration in /etc/ld.so.conf.d/oracle-instantclient.conf.

$ rpm -q --scripts oracle-instantclient19.3-basic
postinstall scriptlet (using /bin/sh):
ldconfig
postuninstall scriptlet (using /bin/sh):
ldconfig

This is nicer – at least in my opinion – than setting LD_LIBRARY_PATH in a shell, and it seemed to work just fine: I installed the SQL*Plus package (oracle-instantclient19.3-sqlplus) and could start it without any problems.
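To double-check that the dynamic linker picked the libraries up, you can inspect the configuration and the linker cache (the file contents shown are what I’d expect; verify on your system):

$ cat /etc/ld.so.conf.d/oracle-instantclient.conf
/usr/lib/oracle/19.3/client64/lib

$ ldconfig -p | grep -i clntsh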

Network Configuration

If you have a requirement to add a TNS naming file, you should be able to do so either by setting TNS_ADMIN or by placing the file in /usr/lib/oracle/19.3/client64/lib/network/admin. I have a tnsnames.ora file pointing to my XE database here:

$ cat /usr/lib/oracle/19.3/client64/lib/network/admin/tnsnames.ora 
xepdb1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oraclexe)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = xepdb1)
    )
  )

I can connect to my database without setting any specific environment variables:

$ env | egrep -i 'tns|ld_library' | wc -l
0
$ sqlplus martin@xepdb1

SQL*Plus: Release 19.0.0.0.0 - Production on Mon May 13 09:38:12 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Enter password: 
Last Successful login time: Mon May 13 2019 09:22:11 -04:00

Connected to:
Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production
Version 18.4.0.0.0

The SQL*Plus package places a symbolic link to /usr/lib/oracle/19.3/client64/bin/sqlplus into /usr/bin, so I don’t even have to adjust the PATH variable.
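A quick ls shows the link (output abbreviated, from my system):

$ ls -l /usr/bin/sqlplus
lrwxrwxrwx. 1 root root ... /usr/bin/sqlplus -> /usr/lib/oracle/19.3/client64/bin/sqlplus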

Summary

Being able to use the Instant Client out of the box is very useful for automated deployments, where all you have to do is add a repository followed by a call to yum. This should make a lot of people’s lives much easier.