Oakies Blog Aggregator

Terraform tips’n’tricks: debugging data sources and testing interpolations

I have previously blogged about the use of Terraform data sources to fetch information concerning Oracle Cloud Infrastructure (OCI) resources. The documentation is pretty good, but sometimes you may want to know more about the data returned. This post describes a potential way to debug output of a data source and to evaluate interpolations.

Do you know Data::Dumper?

Perl is one of the programming languages I have worked with in the past. When I did (it really was a long time ago), there wasn’t a proper IDE allowing me to set nice breakpoints and inspect variables, so I resorted to good old Data::Dumper. It worked pretty much everywhere and showed me the contents of complex data structures when I was a bit at a loss. For example:

#!/usr/bin/env perl

use strict;
use warnings;

use Data::Dumper qw(Dumper);

my %dataStructure = (
    key1 => {
        a => "b",
        c => "d",
    },
    key2 => {
        e => "f",
        g => "h",
    },
);

# ---
# main

print Dumper(\%dataStructure);

When executed, the last line would print the contents of the data structure:

$ perl 
$VAR1 = {
          'key2' => {
                      'g' => 'h',
                      'e' => 'f'
                    },
          'key1' => {
                      'a' => 'b',
                      'c' => 'd'
                    }
        };

Who needs JSON if you can have hashes of hashes (and other data structures) in Perl ;) More seriously though, why is this code relevant to the post? Please read on, there is a feature remarkably similar yet more powerful in Terraform.

Terraform console

When I first read about the Terraform console in the most excellent Terraform: Up and Running, I didn’t pay too much attention to it. In hindsight that was a mistake: the console can do a lot more than I thought it could.

This example demonstrates how I can debug data structures in Terraform using the console. I take the example I blogged about recently: fetching an Oracle Cloud ID (OCID) for an Oracle-provided Linux 7 image. Please refer back to the post for more details. Let’s assume I have already run terraform apply.

Starting the console

That’s as simple as typing terraform console:

$ terraform console
> help
The Terraform console allows you to experiment with Terraform interpolations.
You may access resources in the state (if you have one) just as you would
from a configuration. For example: "aws_instance.foo.id" would evaluate
to the ID of "aws_instance.foo" if it exists in your state.

Type in the interpolation to test and hit <enter> to see the result.

To exit the console, type "exit" and hit <enter>, or use Control-C or
Control-D.

A useful help message for starters :)

Debugging my data source

My Terraform code uses the following data source:

data "oci_core_images" "ol7_latest" {
        compartment_id = var.compartment_ocid

        operating_system         = "Oracle Linux"
        operating_system_version = "7.9"
        shape                    = "VM.Standard.E2.1.Micro"
}

As per the OCI provider’s documentation, the data source returns an object of type images (a list of images). Let’s see if I can dump any of that. I am using the full path to the images object, as in data.data_source_type.data_source_name.images. Note that the output is shortened to the relevant information.

$ terraform console
> data.oci_core_images.ol7_latest.images
tolist([
  {
    "display_name" = "Oracle-Linux-7.9-2021.01.12-0"
    "id" = ""
    "operating_system" = "Oracle Linux"
    "operating_system_version" = "7.9"
    "size_in_mbs" = "47694"
    "state" = "AVAILABLE"
    "time_created" = "2021-01-11 19:05:21.301 +0000 UTC"
  },
  {
    "display_name" = "Oracle-Linux-7.9-2020.11.10-1"
    "id" = ""
    "operating_system" = "Oracle Linux"
    "operating_system_version" = "7.9"
    "size_in_mbs" = "47694"
    "state" = "AVAILABLE"
    "time_created" = "2020-11-11 06:18:05.628 +0000 UTC"
  },
  {
    "display_name" = "Oracle-Linux-7.9-2020.10.26-0"
    "id" = ""
    "operating_system" = "Oracle Linux"
    "operating_system_version" = "7.9"
    "size_in_mbs" = "47694"
    "state" = "AVAILABLE"
    "time_created" = "2020-10-27 06:33:38.068 +0000 UTC"
  },
])

Right, so the data source returns a list of three potential Oracle Linux 7.9 images I could choose from for my always-free VM. That’s nice to know! It doesn’t change my approach of taking the latest, though :)
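The console is also the natural place to test the exact interpolation you would put in your configuration. Assuming, as in my earlier post, that the data source sorts the images by time created in descending order, element 0 is the latest one:

```
> length(data.oci_core_images.ol7_latest.images)
3
> data.oci_core_images.ol7_latest.images[0].display_name
"Oracle-Linux-7.9-2021.01.12-0"
```

Once the expression evaluates the way you want, you can paste it straight into your configuration.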

Inspecting cloud resources

The console’s interpolation support isn’t limited to data sources. You can also look at cloud resources, such as this VCN:

resource "oci_core_vcn" "dummy_vcn" {
        compartment_id = var.compartment_ocid

        cidr_block   = ""
        display_name = "iAmaDemoVCN"
}

Note that you always get a “default security list” when you create a VCN. Please don’t use it, as it opens up SSH from everywhere; see my previous post on the topic for an example.

Let’s create the VCN:

$ terraform apply

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # oci_core_vcn.dummy_vcn will be created
  + resource "oci_core_vcn" "dummy_vcn" {
      + cidr_block               = ""
      + cidr_blocks              = (known after apply)
      + compartment_id           = "ocid1.compartment.oc1..aaaaa..."
      + default_dhcp_options_id  = (known after apply)
      + default_route_table_id   = (known after apply)
      + default_security_list_id = (known after apply)
      + defined_tags             = (known after apply)
      + display_name             = "iAmaDemoVCN"
      + dns_label                = (known after apply)
      + freeform_tags            = (known after apply)
      + id                       = (known after apply)
      + ipv6cidr_block           = (known after apply)
      + ipv6public_cidr_block    = (known after apply)
      + is_ipv6enabled           = (known after apply)
      + state                    = (known after apply)
      + time_created             = (known after apply)
      + vcn_domain_name          = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

oci_core_vcn.dummy_vcn: Creating...
oci_core_vcn.dummy_vcn: Creation complete after 0s []

After Terraform finished creating the VCN I can display the “known after apply” values in the console:

$ terraform console
> oci_core_vcn.dummy_vcn
{
  "cidr_block" = ""
  "cidr_blocks" = tolist([
    "",
  ])
  "compartment_id" = ""
  "default_dhcp_options_id" = ""
  "default_route_table_id" = ""
  "default_security_list_id" = ""
  "defined_tags" = tomap({
    "Oracle-Tags.CreatedBy" = "someuser"
    "Oracle-Tags.CreatedOn" = "2021-02-01T19:08:00.774Z"
  })
  "display_name" = "iAmaDemoVCN"
  "dns_label" = tostring(null)
  "freeform_tags" = tomap({})
  "id" = ""
  "ipv6cidr_block" = tostring(null)
  "ipv6public_cidr_block" = tostring(null)
  "is_ipv6enabled" = tobool(null)
  "state" = "AVAILABLE"
  "time_created" = "2021-02-01T19:08:00.774Z"
  "timeouts" = null /* object */
  "vcn_domain_name" = tostring(null)
}

Note that unlike with data sources, I didn’t have to prefix the address with “resource”. Doing so results in an error telling you that a managed resource “resource” “resource_type” has not been declared in the root module.


Terraform’s console is a very useful tool for debugging your Terraform resources. Remember that when you want to inspect data sources, you need the data. prefix; when looking at managed resources, you simply use the resource_type.resource_name syntax.

Expanded TYPE definitions for PL/SQL 21c

There has always been that odd conflict between the language you use for SQL versus the language you use for PL/SQL when it comes to expressing what appears to be the same thing.

PL/SQL: Do you want a datatype that contains two elements? Use a RECORD.

SQL> declare
  2    type MY_EMPS is record (
  3     empno number,
  4     ename varchar2(10)
  5     );
  6  begin
  7    null;
  8  end;
  9  /

PL/SQL procedure successfully completed.

SQL: Do you want a datatype that contains two elements? Use an OBJECT TYPE.

SQL> create or replace
  2  type MY_EMPS as object
  3  (  empno number,
  4     ename varchar2(10)
  5  );
  6  /

Type created.

That clash of syntax/styles becomes more apparent when you want to start mixing and matching the datatypes that may or may not be supported in both languages.

PL/SQL: Do you want a datatype that contains three elements, one of which is a Boolean? Use a RECORD.

SQL> declare
  2    type MY_EMPS is record (
  3     empno    number,
  4     ename    varchar2(10),
  5     approved boolean
  6     );
  7  begin
  8    null;
  9  end;
 10  /

PL/SQL procedure successfully completed.

SQL: Do you want a datatype that contains three elements, one of which is a Boolean?

Well, let’s not go there :)

Things like the Boolean datatype are one of the reasons why there has always been that separation between SQL and PL/SQL when it comes to type definitions. Any datatype you create with the CREATE TYPE command could ultimately become a column in a table, or even a table itself under the Object Relational features of the database, and thus anything that was not strictly a SQL datatype was blocked. 18c took a small step toward recognising that not all types were intended for tables and columns when non-persistable types were introduced, but we still had restrictions on the available datatypes.

21c now recognises that fact. When you create a database type and you know it will not be used for any persistence, you have more options for the definition of that type.

For example, I can now have a Boolean element in my type definition, as long as I tell the database I won’t be using the type for persistence.

SQL> create or replace
  2  type MY_EMPS_FOR_USE_IN_PLSQL as object
  3  (  empno    number,
  4     ename    varchar2(10),
  5     approved boolean
  6  ) not persistable ;
  7  /

Type created.

Or I can use many of the other PL/SQL datatypes that would not normally be allowed in a type definition:

SQL> create or replace
  2  type MY_EMPS_FOR_USE_IN_PLSQL as object
  3  (  empno     number,
  4     ename     varchar2(10),
  5     approved  boolean,
  6     small_num pls_integer,
  7     big_num   binary_integer
  8  ) not persistable ;
  9  /

Type created.
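Although such a type can never be stored in a table, it can be used like any other object type inside PL/SQL. A minimal sketch (the constructor call and the values are mine, purely illustrative):

```sql
SQL> set serveroutput on
SQL> declare
  2    l_emp MY_EMPS_FOR_USE_IN_PLSQL :=
  3      MY_EMPS_FOR_USE_IN_PLSQL(7369, 'SMITH', true, 42, 4000);
  4  begin
  5    if l_emp.approved then
  6      dbms_output.put_line(l_emp.ename || ' is approved');
  7    end if;
  8  end;
  9  /
```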

Non-persistable types are allowed for nested tables and varrays as well:

SQL> create or replace
  2  type int_list as
  3  varray(10) of (pls_integer)
  4  not persistable;
  5  /

Type created.

But sadly, it looks like it is not yet possible to reference table columns via %TYPE for the elements.

SQL> desc emp
 Name                          Null?    Type
 ----------------------------- -------- -------------
 EMPNO                         NOT NULL NUMBER(4)
 ENAME                                  VARCHAR2(10)
 JOB                                    VARCHAR2(9)
 MGR                                    NUMBER(4)
 HIREDATE                               DATE
 SAL                                    NUMBER(7,2)
 COMM                                   NUMBER(7,2)
 DEPTNO                                 NUMBER(2)

SQL> create or replace
  2  type MY_EMPS_FOR_USE_IN_PLSQL as object
  3  (  empno    emp.empno%type,
  4     ename    emp.ename%type,
  5     approved boolean
  6  ) not persistable ;
  7  /

Warning: Type created with compilation errors.

SQL> sho err

LINE/COL ERROR
-------- -----------------------------------------------------
0/0      PL/SQL: Compilation unit analysis terminated
2/12     PLS-00201: identifier 'EMP.EMPNO' must be declared

In any event, these are nice touches that allow you to bring more consistency to your code base.

If you’re wondering who is in the picture, it is George Boole, who finally gets his rightful place in database types :)

Counting business days between 2 dates

One of the most commonly hit questions on AskTom is how to count the number of work days (Mon => Fri) between a start and end date range.

This is not a particularly tough problem to solve, since we have easy access to the day of the week via TO_CHAR; simply cycling between the start and end date looking for weekdays gives us the answer:

select count(*) 
from ( select rownum rnum
       from dual
       connect by level <= greatest(:start_date,:end_date) - least(:start_date,:end_date)+1 )
where to_char( least(:start_date,:end_date)+rnum-1, 'DY' ) not in ( 'SAT', 'SUN' );

That solution is fine when you have a single pair of inputs. But what if you have a table with thousands or millions of rows, each with a start/end date pair? I don’t want to be cycling through synthesized rows (even if that cycling is just on the DUAL table) because that won’t scale, or at best will burn a hole in one of my server CPU cores! :)

Rather than cycling through the days between a start and end pair, we can apply some arithmetic rules:

  • the total number of complete weeks (days difference / 7) between the dates gives a multiple of 5 work days,
  • but you need to adjust that if the start date falls later in the week than the end date,
  • if the start date falls on a Saturday/Sunday, adjust by a day or two accordingly,
  • if the end date falls on a Saturday/Sunday, adjust by a day or two accordingly.
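Before translating the rules into SQL, it can help to prototype the idea in another language. Here is a rough Python sketch of the same approach: whole weeks contribute five work days each, and the weekend adjustments for the leftover partial week are folded into a short loop over at most six days (the function and its name are mine, for illustration only):

```python
from datetime import date, timedelta

def business_days(d1: date, d2: date) -> int:
    """Inclusive count of Mon-Fri days between two dates, in either order."""
    start, end = min(d1, d2), max(d1, d2)
    total = (end - start).days + 1
    full_weeks, leftover = divmod(total, 7)
    # every complete week contributes exactly 5 work days
    count = full_weeks * 5
    # adjust for the partial week, skipping Saturday (5) and Sunday (6)
    for i in range(leftover):
        if (start + timedelta(days=full_weeks * 7 + i)).weekday() < 5:
            count += 1
    return count
```

The SQL expression in the post does the same job purely with arithmetic and TO_CHAR, which is what allows it to live inside a virtual column.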

After some tinkering, I came up with the following expression:

(I also added greatest/least wrappers so that it didn’t matter if the start date was later than the end date)

   (trunc(abs(:start_date-:end_date)/7) + 
     case when to_number(to_char(least(:start_date,:end_date),'D')) > to_number(to_char(greatest(:start_date,:end_date),'D')) then 1 else 0 end 
   ) * 5 +
   to_number(to_char(greatest(:start_date,:end_date),'D')) - to_number(to_char(least(:start_date,:end_date),'D')) + 
   case when to_number(to_char(least(:start_date,:end_date),'D')) in (1,7) then 0 else 1 end +
   case when to_number(to_char(least(:start_date,:end_date),'D')) = 7 then 1 else 0 end + 
   case when to_number(to_char(greatest(:start_date,:end_date),'D')) = 7 then -1 else 0 end 

Now that this an expression, it can trivially be added as a virtual column to an existing table if needed.

SQL> create table t ( sd date, ed date);

Table created.

SQL> insert into t
  2  select date '2020-01-01'-20+rownum*2, date '2020-01-01'-10+rownum
  3  from dual
  4  connect by level <= 30;

30 rows created.

SQL> select * from t;

SD        ED
--------- ---------
14-DEC-19 23-DEC-19
16-DEC-19 24-DEC-19
18-DEC-19 25-DEC-19
20-DEC-19 26-DEC-19
22-DEC-19 27-DEC-19
24-DEC-19 28-DEC-19
26-DEC-19 29-DEC-19
28-DEC-19 30-DEC-19
30-DEC-19 31-DEC-19
01-JAN-20 01-JAN-20
03-JAN-20 02-JAN-20
05-JAN-20 03-JAN-20
07-JAN-20 04-JAN-20
09-JAN-20 05-JAN-20
11-JAN-20 06-JAN-20
13-JAN-20 07-JAN-20
15-JAN-20 08-JAN-20
17-JAN-20 09-JAN-20
19-JAN-20 10-JAN-20
21-JAN-20 11-JAN-20
23-JAN-20 12-JAN-20
25-JAN-20 13-JAN-20
27-JAN-20 14-JAN-20
29-JAN-20 15-JAN-20
31-JAN-20 16-JAN-20
02-FEB-20 17-JAN-20
04-FEB-20 18-JAN-20
06-FEB-20 19-JAN-20
08-FEB-20 20-JAN-20
10-FEB-20 21-JAN-20

30 rows selected.

SQL> alter table t add business_days number generated always as (
  2     (trunc(abs(sd-ed)/7) +
  3       case when to_number(to_char(least(sd,ed),'D')) > to_number(to_char(greatest(sd,ed),'D')) then 1 else 0 end
  4     ) * 5 +
  5     to_number(to_char(greatest(sd,ed),'D')) - to_number(to_char(least(sd,ed),'D')) +
  6     case when to_number(to_char(least(sd,ed),'D')) in (1,7) then 0 else 1 end +
  7     case when to_number(to_char(least(sd,ed),'D')) = 7 then 1 else 0 end +
  8     case when to_number(to_char(greatest(sd,ed),'D')) = 7 then -1 else 0 end
  9  );

Table altered.

SQL> select * from t;

SD        ED        BUSINESS_DAYS
--------- --------- -------------
14-DEC-19 23-DEC-19             6
16-DEC-19 24-DEC-19             7
18-DEC-19 25-DEC-19             6
20-DEC-19 26-DEC-19             5
22-DEC-19 27-DEC-19             5
24-DEC-19 28-DEC-19             4
26-DEC-19 29-DEC-19             2
28-DEC-19 30-DEC-19             1
30-DEC-19 31-DEC-19             2
01-JAN-20 01-JAN-20             1
03-JAN-20 02-JAN-20             2
05-JAN-20 03-JAN-20             1
07-JAN-20 04-JAN-20             2
09-JAN-20 05-JAN-20             4
11-JAN-20 06-JAN-20             5
13-JAN-20 07-JAN-20             5
15-JAN-20 08-JAN-20             6
17-JAN-20 09-JAN-20             7
19-JAN-20 10-JAN-20             6
21-JAN-20 11-JAN-20             7
23-JAN-20 12-JAN-20             9
25-JAN-20 13-JAN-20            10
27-JAN-20 14-JAN-20            10
29-JAN-20 15-JAN-20            11
31-JAN-20 16-JAN-20            12
02-FEB-20 17-JAN-20            11
04-FEB-20 18-JAN-20            12
06-FEB-20 19-JAN-20            14
08-FEB-20 20-JAN-20            15
10-FEB-20 21-JAN-20            15

30 rows selected.

Note: day-of-week values from TO_CHAR can vary with NLS settings, so you might need to alter this expression slightly if you don’t live in Australia :)
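If you want to remove the NLS dependence altogether, one common trick (my suggestion, not from the original post) is to derive the day number from a fixed, known Monday rather than from TO_CHAR(...,'D'). Note this yields Monday = 1 through Sunday = 7, so the weekend tests in the expression would need adjusting to match:

```sql
-- Day of week independent of NLS_TERRITORY: 1 = Monday ... 7 = Sunday.
-- date '1900-01-01' was a Monday; valid for any date on or after it.
select mod(trunc(sysdate) - date '1900-01-01', 7) + 1 as dow
from   dual;
```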

Video : Partial Indexes for Partitioned Tables

In today’s video we demonstrate partial indexes for partitioned tables, introduced in Oracle database 12.1.

The video is based on this article.

The star of today’s video is Cary Millsap, who is being led astray by a very naughty James Morle.



The post Video : Partial Indexes for Partitioned Tables first appeared on The ORACLE-BASE Blog.


Video: High Performance Block I/O on Linux

Here’s the High Performance Block IO on Linux hacking session video.
It’s based on my recent article about achieving 11M IOPS & 66 GB/s IO on a single ThreadRipper workstation.
Towards the end of the video I also explain how I got up to 11.5M IOPS without having to keep the CPUs 100% busy: with batched I/O submission and completion checks I reduced CPU usage quite noticeably, and by properly configuring I/O polling queues in the Linux kernel I got close to 10M IOPS using just 10 CPUs (leaving almost 70% of CPU time idle).

Oracle 19c Automatic Indexing: Function-Based Indexes? Part II (If You Can See Me)

In my previous post, I discussed how Automatic Indexing does not currently support creating an index based on a function or expression predicate, even if it’s an equality predicate. You must manually create the associated function-based index. However, if you have access to the application, there’s a better strategy when frequently searching on a function-based […]

Oracle 19c Automatic Indexing: Function-Based Indexes? (No Plan)

I previously discussed how Automatic Indexing only currently supports Equality based predicates. The question I have today is does Automatic Indexing support function-based indexes? Let’s take a look. The below DAVID table has the key column NAME which is an effectively unique VARCHAR2 column: If we look at the current details of the table columns: […]

Learn ODA on Oracle Cloud

By Franck Pachot

You want to learn and practice your ODA command line and GUI without having an ODA at home? It should be possible to run the ODA image in VirtualBox, but that’s probably hard work as it is tied to the hardware. As for the configuration, you can run the Oracle Appliance Manager Configurator on your laptop, but I think it is not compatible with the latest odacli. However, Oracle has long provided an ODA simulator, and it is now available in the Oracle Cloud Marketplace for free.

You can get there via: Oracle Cloud Console > Create Compute Instance > Edit > Change Image > Oracle Images > Oracle Database Appliance (ODA) Simulator

I mentioned that this is free. The marketplace does not allow me to run it on an Always Free-eligible shape, but you can take the software and run it elsewhere (you will find the .tar.gz in the opc user’s home directory).

Cleanup and pull images

From the marketplace image, the container is already running, but I clean it up and re-install. The setup script does everything, including installing Docker if it is not already there (you run all this as root):

# cleanup (incl. portainer)
sudo docker rm -f portainer ; sudo docker rmi -f portainer/portainer
yes | sudo ~/simulator*/
# setup (get images and start portainer)
sudo ~/simulator*/

With this you can connect over HTTP on port 9000 to Portainer. Of course, you need to open the port in the Network Security Groups (I opened the range 9000-9100 as I’ll use those ports later). You can connect with user admin and password welcome1… yes, that’s the CHANGE_ON_INSTALL password for ODA :)


First Steps in Spatial Data

This is the introductory blog post in a series about using Spatial Data in the Oracle database.

Caveat: Spatial Data has been part of the Oracle database since at least version 8i. I have been aware of it for many years but have never previously used it myself. Recently I had some spare time and decided to experiment with it; these blogs document my first steps. I have spent a lot of time reading the documentation and using Google to find other people's blogs, and where I found useful material I have provided links to it. It is likely that more experienced developers can point out my mistakes and better methods to achieve results, in which case I will gladly publish comments and make corrections to my material.


  1. Loading GPX data into XML data types
  2. Convert GPX Track to a Spatial Line Geometry

Problem Statement

A map reading stop!

When I am not working with Oracle databases, I am a keen cyclist and I ride with a touring club. I have also always enjoyed maps, having been taught to read Ordnance Survey maps at school. It is no surprise, therefore, that I lead rides for my cycling club. We used to use (and you can still buy) paper maps. By 2005 I was starting to use a GPS; initially, I recorded rides as tracks on a PDA. By 2012, I was regularly using an Android tablet on my handlebar bag for navigation. The market has caught up, and people now attach their phones to their handlebars or have dedicated bike computers with GPS and Bluetooth links to their phones. The cycling club website includes a library of the routes of previous rides; however, you can only search it by the structured data held for each ride. So, for example, I can only find rides in the Chilterns if that word appears in the description. I cannot do a spatial search.

I have also started to use Strava, an internet service for tracking exercise. It is mainly used by cyclists and runners. Activities can be recorded on a phone or other device and then be uploaded, compared and analysed. Every time I go out on the bike I upload the activity, and I have also uploaded my back catalogue of GPS data. As a result of the Coronavirus lockdowns, I bought an indoor trainer that I use with Zwift, which also posts data to Strava. My most recent toy is a heart monitor; both Strava and Zwift capture data from it. Strava will let you see a certain amount of analysis about your activities and how you compare to other people, and more if you pay for their subscription service. They will also allow you to export and download all of your data as a set of structured data in CSV files, together with the GPX files and photographs that you uploaded.

I thought it would be interesting to try to analyse and interrogate that data.  Typical questions might include:

  1. I ride up Swain's Lane in Highgate most days.  How long do I take, and am I getting faster or slower?
  2. I want to go for a ride in the Chilterns, so I would like to see tracks of previous rides to get some route ideas.

So I am going to upload my Strava data into an Oracle database, load the GPS tracks currently in GPX files into the database, convert them to Spatial geometries, and then process them. To answer the first question I will need to provide a working definition of Swain's Lane. For the second, I need definitions of various areas; for example, I will take the Chilterns to be the area designated by Natural England as an Area of Outstanding Natural Beauty, so I will need to import a definition of that and other areas from published data.
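To give a flavour of where this is heading, the second question eventually becomes a simple spatial predicate. A hypothetical sketch (the table and column names are mine, assuming the tracks and the AONB boundary are both stored as SDO_GEOMETRY columns with spatial indexes):

```sql
select a.activity_id, a.name
from   activities a
join   areas b on b.name = 'Chilterns AONB'
where  sdo_anyinteract(a.track, b.boundary) = 'TRUE';
```

SDO_ANYINTERACT returns 'TRUE' for every track geometry that has any spatial interaction with the area, which is exactly the "show me rides in the Chilterns" search the club website cannot do today.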

The following series of blogs illustrate how I dealt with these and other challenges.