Oakies Blog Aggregator

Prepping an Oracle Database for a Cloud Migration

There’s so much I need to get written down these days, but there are only so many hours in a day and days in a week, and I’ve totally failed in this area. Well, I have a little time right now, so I’m going to try to get something down. It only took me four attempts to get this published!

SQL Server on Oracle Cloud

By Franck Pachot

You can create a VM with SQL Server running in the Oracle Cloud. This is easy with a few clicks in the marketplace. Here are the steps:

  • Oracle Cloud -> Marketplace -> Application -> Category -> Database management
  • You have multiple flavors. I’ve chosen the latest and cheapest: “Microsoft SQL 2016 Standard with Windows Server 2016 Standard”
  • Select the compartment and “Launch Instance”
  • Choose a name, and the network

This is fast, but don’t be as impatient as I am: the password is displayed after a while.

After a few minutes, you have the user (opc) and password (mine was ‘BNG8lsxsD6jrD’) that you will have to change at the first connection. This user will allow you to connect with Remote Desktop. This means that you have to open the 3389 TCP port:

  • You find your instance in Oracle Cloud -> Compute -> Instances (don’t forget that this is just easy provisioning for a database running on IaaS; it is not a PaaS managed service)
  • Subnet -> Security List -> Add Ingress Rules -> IP Protocol TCP, Destination port: 3389
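
The console steps above can also be scripted with the OCI CLI. This is a sketch I’ve added, not from the original post: the security list OCID is a placeholder, and note that --ingress-security-rules replaces the entire rule list, so you’d include any existing rules as well. You may also want to restrict the source to your own IP range rather than 0.0.0.0/0:

oci network security-list update \
  --security-list-id ocid1.securitylist.oc1..example \
  --ingress-security-rules '[{
    "protocol": "6",
    "source": "0.0.0.0/0",
    "tcpOptions": {"destinationPortRange": {"min": 3389, "max": 3389}}
  }]'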

Once this port is opened in the ingress rules, you can connect with Remote Desktop and access the machine as the user opc, which is a local administrator. You can install anything there.

There’s something that quickly annoys me when I want to install something there – Internet Explorer and its “enhanced security configuration”. I disable this and take responsibility for what I download there.

SSMS – SQL Server Management Studio – is not installed on the server. You have the command line with “sqlcmd” and the SQL Server Configuration Manager, where you can verify that TCP access to the database is enabled and on the default port 1433.

I have SSMS installed on my laptop, and as I’ve created this VM on the public subnet, I open the ingress TCP port 1433.
But that’s not sufficient, because the installation from the marketplace does not open this port. You need to open it in the Windows firewall.
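
If you prefer not to click through the firewall GUI, the same rule can be created from an elevated PowerShell prompt – a one-liner sketch, where the display name is just a label I’ve chosen:

New-NetFirewallRule -DisplayName "SQL Server TCP 1433" -Direction Inbound -Protocol TCP -LocalPort 1433 -Action Allow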
And… one more thing… the installation from the marketplace allows only Windows Authentication to the database, but if you don’t share a domain, you can’t connect remotely with this.

I’ve created a user:


Microsoft Windows [Version 10.0.14393]
(c) 2016 Microsoft Corporation. All rights reserved.

C:\Windows\system32>sqlcmd

create login franck with password='Test-dbi-2020';
alter server role sysadmin add member franck;
go

Then, in order to connect with a SQL Server user, I had to enable SQL Server authentication.

You may wonder how I enabled it without having SSMS first? I’ve read https://docs.microsoft.com/en-us/sql/database-engine/configure-windows/change-server-authentication-mode?view=sql-server-ver15, which mentions running the following in sqlcmd:


EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',N'Software\Microsoft\MSSQLServer\MSSQLServer',N'LoginMode', REG_DWORD, 2
GO

But I thought I was more clever: I ran regedit.exe and changed HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSSQLServer\MSSQLServer, adding a ‘LoginMode’ value there. It didn’t work. I finally realized that the correct registry key is HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQLServer.
This is a very subtle difference, isn’t it? Ok, now I admit it: I installed SSMS on the cloud VM before finding this.
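
For reference, the same change can be made from PowerShell using the correct key – LoginMode 2 enables mixed mode, matching the xp_instance_regwrite call above, and SQL Server has to be restarted for it to take effect:

Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQLServer' -Name LoginMode -Value 2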

Introducing Packer: building Vagrant base boxes hands-free

I have referred to Packer in some of my cloud-related presentations as an example of a tool for creating immutable infrastructure. In addition to the cloud, Packer supports a great many other build targets as well. Since I work with VirtualBox and Vagrant a lot, Packer’s ability to create Vagrant base boxes is super awesome. Combined with local box versioning I can build new Vagrant systems in almost no time. More importantly though, I can simply kick the process off, grab a coffee, and when I’m back, enjoy a new build of my Oracle Linux Vagrant base box.

The task at hand

I would like to build a new Vagrant base box for Oracle Linux 7.8, completely hands-off. All my processes and workflows therefore need to be defined in software (Infrastructure as Code).

Since I’m building private Vagrant boxes I don’t intend to share, I can ignore the requirements about passwords as documented in the Vagrant documentation, section “Default User Settings”. Instead of the insecure key pair I’m using my own keys as well.

The build environment

I’m using Ubuntu 20.04.1 LTS as my host operating system. Packer 1.5 does all the hard work of provisioning VMs for VirtualBox 6.1.12. Ansible 2.9 helps me configure my systems. Vagrant 2.2.7 will power my VMs after they are created.

Except for VirtualBox and Packer I’m using the stock packages supplied by Ubuntu.

How to get there

Packer works by reading a template file and performs the tasks defined therein. If you are new to Packer, I suggest you visit the website for more information and some really great guides.

As of Packer 1.5 you can also use HCL2, which is nice, as it allows me to reuse (or rather add to) my Terraform skills. However, at the time of writing the documentation warns that HCL2 support is still in beta, which is why I went with the JSON template language.

High-level steps

From a bird’s eye view, my Packer template…

  • Creates a Virtualbox VM from an ISO image
  • Feeds a kickstart file to it for an unattended installation
  • After the VM is up it applies an Ansible playbook to it to install VirtualBox guest additions

The end result should be a fully working Vagrant base box.

Provisioning the VM

The first thing to do is to create the Packer template for my VM image. Properties of a VM are defined in the so-called builders section. As I said before, there are lots of builders available… I would like to create a Virtualbox VM from an ISO image, so I went with virtualbox-iso, which is really super easy to use and well documented. So after a little bit of trial and error I ended up with this:

{
  "builders": [
    {
      "type": "virtualbox-iso",
      "boot_command": [
        "",
        "",
        "linux text inst.ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ol7.ks",
        ""
      ],
      "disk_size": "12288",
      "guest_additions_path": "/home/vagrant/VBoxGuestAdditions.iso",
      "guest_os_type": "Oracle_64",
      "hard_drive_interface": "sata",
      "hard_drive_nonrotational": "true",
      "http_directory": "http",
      "iso_checksum": "1c1471c49025ffc1105d0aa975f7c8e3",
      "iso_checksum_type": "md5",
      "iso_url": "file:///m/stage/V995537-01-ol78.iso",
      "sata_port_count": "5",
      "shutdown_command": "echo 'packer' | sudo -S shutdown -P now",
      "ssh_timeout": "600s",
      "ssh_username": "vagrant",
      "ssh_agent_auth": true,
      "vboxmanage": [
        [
          "modifyvm",
          "{{.Name}}",
          "--memory",
          "2048"
        ],
        [
          "modifyvm",
          "{{.Name}}",
          "--cpus",
          "2"
        ]
      ],
      "vm_name": "packertest"
    }
  ],
  "provisioners": [
    {
      "type": "ansible",
      "playbook_file": "ansible/guest_additions.yml",
      "user": "vagrant"
    }
  ],
  "post-processors": [
    {
      "keep_input_artifact": true,
      "output": "/home/martin/vagrant/boxes/ol7_7.8.2.box",
      "type": "vagrant"
    }
  ]
} 

If you are new to Packer this probably looks quite odd, but it’s actually very intuitive after a little while. I haven’t used Packer much before coming up with this example, which is a great testimony to the quality of the documentation.

Note this template has 3 main sections:

  • builders: the virtualbox-iso builder allows me to create a VirtualBox VM based on an (Oracle Linux 7.8) ISO image
  • provisioners: once the VM has been created to my specification I can run Ansible playbooks against it
  • post-processors: this section is important as it creates the Vagrant base box after the provisioner finishes its work

Contrary to most examples I found I’m using SSH keys for communicating with the VM, rather than a more insecure username/password combination. All you need to do is add the SSH key to the agent via ssh-add before you kick the build off.
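
For example, assuming the build key lives at ~/.ssh/packer_build (a path I’ve made up for illustration):

$ eval $(ssh-agent)
$ ssh-add ~/.ssh/packer_build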

While testing the best approach to building the VM and guest additions I ran into a few issues prompting me to upload the guest additions ISO to the vagrant user’s home directory. This way it wasn’t too hard to refer to it in the Ansible playbook (see below).

Kickstarting Oracle Linux 7

The http_directory directive in the first (builder) block is crucial for automating the build. As soon as Packer starts its work, it will create an HTTP server serving the directory indicated by the variable. This directory must obviously exist.

Red-Hat-based distributions allow admins to install the operating system in a fully automated way using the Kickstart format. You provide the Kickstart file to the system when you boot it for the first time. A common way to do so is via HTTP, which is why I’m so pleased about the HTTP server started by Packer. It couldn’t be easier: thanks to my http_directory a web server is already started, and using the HTTPIP and HTTPPort variables I can refer to files inside the directory from within the template.

As soon as Packer boots the VM the Kickstart file is passed as specified in boot_command. I had to look the syntax up using a search engine. It essentially comes down to simulating a bunch of keystrokes as if you were typing them interactively.

Long story short, I don’t need to worry about the installation, at least as long as my Kickstart file is ok. One way to get the Kickstart file right is to use the one that’s created after a manual operating system installation. I usually end up using /root/anaconda-ks.cfg and customise it.

There are 3 essential tasks to complete in the Kickstart file if you want to create a Vagrant base box:

  1. Create the vagrant user account
  2. Allow password-less authentication to the vagrant account via SSH
  3. Add vagrant to the list of sudoers

First I have to include a directive to create a vagrant user:

user --name=vagrant

The sshkey keyword allows me to inject my SSH key into the user’s authorized_keys file.

sshkey --username=vagrant "very-long-ssh-key"

I also have to add the vagrant account to the list of sudoers. Using the %post directive, I inject the necessary line into /etc/sudoers:

%post --log=/root/ks-post.log

/bin/echo "vagrant        ALL=(ALL)       NOPASSWD: ALL" >> /etc/sudoers

%end 
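
For orientation, here is roughly where those three pieces sit in a complete Kickstart file. Everything apart from the vagrant-specific lines (installation source, language, partitioning, root password handling) is illustrative only – a minimal sketch, not my actual file:

# minimal, illustrative Kickstart skeleton
text
cdrom
lang en_US.UTF-8
keyboard us
rootpw --lock
autopart
reboot

# the vagrant-specific pieces discussed above
user --name=vagrant
sshkey --username=vagrant "very-long-ssh-key"

%post --log=/root/ks-post.log
/bin/echo "vagrant        ALL=(ALL)       NOPASSWD: ALL" >> /etc/sudoers
%end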

Calling the Ansible provisioner

So far I have defined a VM to be created (within the builders section). The installation is completely hands-off thanks to the Kickstart file I provided. However, I’m not done yet: I have yet to install the VirtualBox guest additions. This is done via the Ansible provisioner. It connects as vagrant to the VM and executes the instructions from ansible/guest_additions.yml.

This is a rather simple file:

- hosts: all
  become: yes
  tasks:
  - name: upgrade all packages
    yum:
      name: '*'
      state: latest

  - name: install kernel-uek-devel
    yum:
      name: kernel-uek-devel
      state: present

  - name: reboot to the latest kernel
    reboot:

  # Guest additions are located as per guest_additions_path in 
  # Packer's configuration file
  - name: Mount guest additions ISO read-only
    mount:
      path: /mnt/
      src: /home/vagrant/VBoxGuestAdditions.iso
      fstype: iso9660
      opts: ro
      state: mounted

  - name: execute guest additions
    shell: /mnt/VBoxLinuxAdditions.run 

In plain English: I’m becoming root before updating all software packages. One of the prerequisites for compiling VirtualBox’s guest additions is to install the kernel-uek-devel package.

After that operation completes, the VM reboots into the new kernel before mounting the guest additions ISO, which I asked to be copied to /home/vagrant/VBoxGuestAdditions.iso in the builder section of the template.

Once the ISO file is mounted, I call VBoxLinuxAdditions.run to build the guest additions.

Building the Vagrant base box

Putting it all together, this is the output created by Packer:

$ ANSIBLE_STDOUT_CALLBACK=debug ./packer build oracle-linux-7.8.json 
virtualbox-iso: output will be in this color.

==> virtualbox-iso: Retrieving Guest additions
==> virtualbox-iso: Trying /usr/share/virtualbox/VBoxGuestAdditions.iso
==> virtualbox-iso: Trying /usr/share/virtualbox/VBoxGuestAdditions.iso
==> virtualbox-iso: /usr/share/virtualbox/VBoxGuestAdditions.iso => /usr/share/virtualbox/VBoxGuestAdditions.iso
==> virtualbox-iso: Retrieving ISO
==> virtualbox-iso: Trying file:///m/stage/V995537-01-ol78.iso
==> virtualbox-iso: Trying file:///m/stage/V995537-01-ol78.iso?checksum=md5%3A1c1471c49025ffc1105d0...
==> virtualbox-iso: file:///m/stage/V995537-01-ol78.iso?checksum=md5%3A1c1471c49025ffc1105d0... => /m/stage/V995537-01-ol78.iso
==> virtualbox-iso: Starting HTTP server on port 8232
==> virtualbox-iso: Using local SSH Agent to authenticate connections for the communicator...
==> virtualbox-iso: Creating virtual machine...
==> virtualbox-iso: Creating hard drive...
==> virtualbox-iso: Creating forwarded port mapping for communicator (SSH, WinRM, etc) (host port 2641)
==> virtualbox-iso: Executing custom VBoxManage commands...
    virtualbox-iso: Executing: modifyvm packertest --memory 2048
    virtualbox-iso: Executing: modifyvm packertest --cpus 2
==> virtualbox-iso: Starting the virtual machine...
==> virtualbox-iso: Waiting 10s for boot...
==> virtualbox-iso: Typing the boot command...
==> virtualbox-iso: Using ssh communicator to connect: 127.0.0.1
==> virtualbox-iso: Waiting for SSH to become available...
==> virtualbox-iso: Connected to SSH!
==> virtualbox-iso: Uploading VirtualBox version info (6.1.12)
==> virtualbox-iso: Uploading VirtualBox guest additions ISO...
==> virtualbox-iso: Provisioning with Ansible...
    virtualbox-iso: Setting up proxy adapter for Ansible....
==> virtualbox-iso: Executing Ansible: ansible-playbook -e ...
    virtualbox-iso:
    virtualbox-iso: PLAY [all] *********************************************************************
    virtualbox-iso:
    virtualbox-iso: TASK [Gathering Facts] *********************************************************
    virtualbox-iso: ok: [default]
    virtualbox-iso: [WARNING]: Platform linux on host default is using the discovered Python
    virtualbox-iso: interpreter at /usr/bin/python, but future installation of another Python
    virtualbox-iso: interpreter could change this. See https://docs.ansible.com/ansible/2.9/referen
    virtualbox-iso: ce_appendices/interpreter_discovery.html for more information.
    virtualbox-iso:
    virtualbox-iso: TASK [upgrade all packages] ****************************************************
    virtualbox-iso: changed: [default]
    virtualbox-iso:
    virtualbox-iso: TASK [install kernel-uek-devel] ************************************************
    virtualbox-iso: changed: [default]
    virtualbox-iso:
    virtualbox-iso: TASK [reboot to enable latest kernel] ******************************************
==> virtualbox-iso: EOF
    virtualbox-iso: changed: [default]
    virtualbox-iso:
    virtualbox-iso: TASK [Mount guest additions ISO read-only] *************************************
    virtualbox-iso: changed: [default]
    virtualbox-iso:
    virtualbox-iso: TASK [execute guest additions] *************************************************
    virtualbox-iso: changed: [default]
    virtualbox-iso:
    virtualbox-iso: PLAY RECAP *********************************************************************
    virtualbox-iso: default                    : ok=6    changed=5    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
    virtualbox-iso:
==> virtualbox-iso: Gracefully halting virtual machine...
==> virtualbox-iso: Preparing to export machine...
    virtualbox-iso: Deleting forwarded port mapping for the communicator (SSH, WinRM, etc) (host port 2641)
==> virtualbox-iso: Exporting virtual machine...
    virtualbox-iso: Executing: export packertest --output output-virtualbox-iso/packertest.ovf
==> virtualbox-iso: Deregistering and deleting VM...
==> virtualbox-iso: Running post-processor: vagrant
==> virtualbox-iso (vagrant): Creating a dummy Vagrant box to ensure the host system can create one correctly
==> virtualbox-iso (vagrant): Creating Vagrant box for 'virtualbox' provider
    virtualbox-iso (vagrant): Copying from artifact: output-virtualbox-iso/packertest-disk001.vmdk
    virtualbox-iso (vagrant): Copying from artifact: output-virtualbox-iso/packertest.ovf
    virtualbox-iso (vagrant): Renaming the OVF to box.ovf...
    virtualbox-iso (vagrant): Compressing: Vagrantfile
    virtualbox-iso (vagrant): Compressing: box.ovf
    virtualbox-iso (vagrant): Compressing: metadata.json
    virtualbox-iso (vagrant): Compressing: packertest-disk001.vmdk
Build 'virtualbox-iso' finished.

==> Builds finished. The artifacts of successful builds are:
--> virtualbox-iso: VM files in directory: output-virtualbox-iso
--> virtualbox-iso: 'virtualbox' provider box: /home/martin/vagrant/boxes/ol7_7.8.2.box

This concludes the build of the base box.

Using the newly created base box

I’m not quite done yet though: as you may recall, I’m using (local) box versioning. A quick change of the metadata file ~/vagrant/boxes/ol7.json and a call to vagrant init later, I can use the box.
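
For reference, the metadata file follows Vagrant’s standard box-catalog JSON format. A sketch of what mine might look like after adding the new version entry (checksums omitted for brevity):

{
  "name": "ol7",
  "versions": [
    {
      "version": "7.8.1",
      "providers": [
        { "name": "virtualbox", "url": "file:///home/martin/vagrant/boxes/ol7_7.8.1.box" }
      ]
    },
    {
      "version": "7.8.2",
      "providers": [
        { "name": "virtualbox", "url": "file:///home/martin/vagrant/boxes/ol7_7.8.2.box" }
      ]
    }
  ]
}

Checking whether the running box is up to date: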

$ vagrant box outdated
Checking if box 'ol7' version '7.8.1' is up to date...
A newer version of the box 'ol7' for provider 'virtualbox' is
available! You currently have version '7.8.1'. The latest is version
'7.8.2'. Run `vagrant box update` to update. 

That looks pretty straight-forward, so let’s do it:

$ vagrant box update
==> server: Checking for updates to 'ol7'
    server: Latest installed version: 7.8.1
    server: Version constraints: 
    server: Provider: virtualbox
==> server: Updating 'ol7' with provider 'virtualbox' from version
==> server: '7.8.1' to '7.8.2'...
==> server: Loading metadata for box 'file:///home/martin/vagrant/boxes/ol7.json'
==> server: Adding box 'ol7' (v7.8.2) for provider: virtualbox
    server: Unpacking necessary files from: file:///home/martin/vagrant/boxes/ol7_7.8.2.box
    server: Calculating and comparing box checksum...
==> server: Successfully added box 'ol7' (v7.8.2) for 'virtualbox'! 

Let’s start the environment:

$ vagrant up
Bringing machine 'server' up with 'virtualbox' provider...
==> server: Importing base box 'ol7'...
==> server: Matching MAC address for NAT networking...
==> server: Checking if box 'ol7' version '7.8.2' is up to date...
==> server: Setting the name of the VM: packertest_server_1598013258821_67878
==> server: Clearing any previously set network interfaces...
==> server: Preparing network interfaces based on configuration...
    server: Adapter 1: nat
==> server: Forwarding ports...
    server: 22 (guest) => 2222 (host) (adapter 1)
==> server: Running 'pre-boot' VM customizations...
==> server: Booting VM...
==> server: Waiting for machine to boot. This may take a few minutes...
    server: SSH address: 127.0.0.1:2222
    server: SSH username: vagrant
    server: SSH auth method: private key
==> server: Machine booted and ready!
==> server: Checking for guest additions in VM...
==> server: Setting hostname...
==> server: Mounting shared folders...
    server: /vagrant => /home/martin/vagrant/packertest 

Happy Automating!

Flashback Bug

Here’s a problem with the “flashback versions” technology that showed up at the end of last week. There’s a thread about it on the Oracle Developer Community forum, and a chain of tweets that began as my response to a Twitter alert about it posted by Daniel Stein.

The problem appears somewhere in the 18c timeline – it doesn’t seem to be present in 12.2.0.1 – so if you’re running any version from 18c onwards, here’s a modified version of the test case supplied by Daniel Stein to demonstrate the issue.

rem
rem     Script:         flashback_bug.sql
rem     Author:         Jonathan Lewis / Daniel Stein
rem     Dated:          Aug 2020
rem
rem     Last tested 
rem             19.3.0.0        Wrong result
rem             12.2.0.1        Correct result
rem

create table t1 (n1 number(4)) 
segment creation immediate
;

prompt  =============================================================
prompt  Sleeping 10 seconds after create table to avoid subsequent
prompt  ORA-01466: unable to read data - table definition has changed
prompt  =============================================================


execute dbms_lock.sleep(10)

prompt  =========
prompt  Start SCN
prompt  =========

column current_scn new_value scn_vorher2 
select to_char(current_scn,'9999999999999999') current_scn from V$DATABASE;

insert into t1(n1) values (1);
insert into t1(n1) values (2);
insert into t1(n1) values (3);
insert into t1(n1) values (4);
insert into t1(n1) values (5);
insert into t1(n1) values (6);
insert into t1(n1) values (7);
insert into t1(n1) values (8);
insert into t1(n1) values (9);
insert into t1(n1) values (10);
insert into t1(n1) values (11);
insert into t1(n1) values (12);

delete from t1 where n1 in (2, 5, 8, 11);

commit;

prompt  =======
prompt  End SCN
prompt  =======

column current_scn new_value scn_nachher
select to_char(current_scn,'9999999999999999') current_scn from V$DATABASE;

prompt  =============
prompt  Version Query 
prompt  =============

column VERSIONS_STARTSCN        format 9999999999999999 heading "Start SCN"
column VERSIONS_ENDSCN          format 9999999999999999 heading "End SCN"
column VERSIONS_OPERATION       format A9               heading "Operation"
column SCN_RANGE                format A35              heading "SCN Range"

select 
        n1, rowid, 
        versions_operation, versions_startscn, versions_endscn,
        &scn_vorher2||' and '||&scn_nachher scn_range
from
        t1 versions between scn &scn_vorher2 and &scn_nachher
order by 
        rowid, versions_startscn, versions_operation, versions_endscn
;

prompt  ==================
prompt  Current Table data
prompt  ==================

select n1,rowid from t1;


alter system dump redo scn min &scn_vorher2 scn max &scn_nachher;

I’ve created a table then inserted a few rows and deleted some of them in the same transaction. I’ve captured the database SCN at the start and end of the transaction and then tried to run a “versions between” query across that range of SCNs. Here are the results from the “versions between” query and the final query of the current contents of the table on a test on a database running 19.3:

        N1 ROWID              Operation         Start SCN           End SCN SCN Range
---------- ------------------ --------- ----------------- ----------------- -----------------------------------
         1 AAAXFOAATAAAACDAAA I            12670408139684                   12670408139679 and 12670408139687
         3 AAAXFOAATAAAACDAAC I            12670408139684                   12670408139679 and 12670408139687
         4 AAAXFOAATAAAACDAAD I            12670408139684                   12670408139679 and 12670408139687
         5 AAAXFOAATAAAACDAAE I            12670408139684                   12670408139679 and 12670408139687
         6 AAAXFOAATAAAACDAAF I            12670408139684                   12670408139679 and 12670408139687
         7 AAAXFOAATAAAACDAAG I            12670408139684                   12670408139679 and 12670408139687
         8 AAAXFOAATAAAACDAAH I            12670408139684                   12670408139679 and 12670408139687
         9 AAAXFOAATAAAACDAAI I            12670408139684                   12670408139679 and 12670408139687
        10 AAAXFOAATAAAACDAAJ I            12670408139684                   12670408139679 and 12670408139687
        11 AAAXFOAATAAAACDAAK I            12670408139684                   12670408139679 and 12670408139687

10 rows selected.

        N1 ROWID
---------- ------------------
         1 AAAXFOAATAAAACDAAA
         3 AAAXFOAATAAAACDAAC
         4 AAAXFOAATAAAACDAAD
         6 AAAXFOAATAAAACDAAF
         7 AAAXFOAATAAAACDAAG
         9 AAAXFOAATAAAACDAAI
        10 AAAXFOAATAAAACDAAJ
        12 AAAXFOAATAAAACDAAL

8 rows selected.

There’s a trap in this demonstration. If you follow the link to the forum thread you’ll find that Daniel Stein says there’s a problem with Oracle 19.6 and upwards (but not 12.2) – then various other people test on their versions and find that 19.7, 18.5, and 19.3 are broken. That last one is what made me run the test on the 19.3 I had at hand – and I got the right result, unlike the results above.

If you look at the output from this test run you’ll see that the versioning query says that in the SCN range I’ve inserted 10 rows, and that rows 2 and 12 have never been inserted. Given the SQL I’d run I was expecting to see 16 rows in the output, 12 of them saying I’d inserted values 1 through 12, and then 4 saying I’d deleted 2, 5, 8 and 11 – but (when it works correctly) Oracle manages to combine the insert and delete in the same transaction to cancel the entries. So the versioning query ought to be showing exactly the 8 rows that appear in the current data query.

So I had two puzzles:

  • What’s gone wrong for everyone else ?
  • Why hasn’t it gone wrong for me ?

To answer the second question first: I took a look at my instance parameters and found that I had set statistics_level to all. Clearly this ought to have nothing to do with an error in flashback version queries, but I changed it to typical anyway and re-ran the test: and got the (now expected) wrong results. If you read the twitter thread you’ll find that Oracle support had had an SR opened on this problem for a few days but had been unable to reproduce the problem – perhaps they, too, had a system with statistics_level set to all.
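
For anyone trying to reproduce the flip-flop, the switch is just the parameter setting – statistics_level can be changed at session or system level; I changed it system-wide for my tests:

alter system set statistics_level = typical;
alter system set statistics_level = all;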

Getting back to the first question: what’s gone wrong? I couldn’t think of a good reason why setting the statistics_level should make a difference, so I took the view that the versioning query is running through the redo log to find all the changes that took place in the range, so maybe the answer would appear in the redo. That’s why the last line in the demo above is a command to dump the redo generated between the two SCNs.

Short-circuiting the details of what I looked at, I’m just going to show you the layer 11 (table) redo change vectors for my DML, which I extracted from the trace file using the command grep "OP:11" {filename}.

First the set when statistics_level = all:


CHANGE #3 CON_ID:3 TYP:0 CLS:1 AFN:19 DBA:0x04c00083 OBJ:94544 SCN:0x00000b860f2d3d1b SEQ:2 OP:11.2 ENC:0 RBL:0 FLG:0x0000
CHANGE #2 CON_ID:3 TYP:0 CLS:1 AFN:19 DBA:0x04c00083 OBJ:94544 SCN:0x00000b860f2d3d1b SEQ:3 OP:11.2 ENC:0 RBL:0 FLG:0x0000
CHANGE #2 CON_ID:3 TYP:0 CLS:1 AFN:19 DBA:0x04c00083 OBJ:94544 SCN:0x00000b860f2d3d1c SEQ:1 OP:11.2 ENC:0 RBL:0 FLG:0x0000
CHANGE #2 CON_ID:3 TYP:0 CLS:1 AFN:19 DBA:0x04c00083 OBJ:94544 SCN:0x00000b860f2d3d1c SEQ:2 OP:11.2 ENC:0 RBL:0 FLG:0x0000
CHANGE #2 CON_ID:3 TYP:0 CLS:1 AFN:19 DBA:0x04c00083 OBJ:94544 SCN:0x00000b860f2d3d1c SEQ:3 OP:11.2 ENC:0 RBL:0 FLG:0x0000
CHANGE #2 CON_ID:3 TYP:0 CLS:1 AFN:19 DBA:0x04c00083 OBJ:94544 SCN:0x00000b860f2d3d1c SEQ:4 OP:11.2 ENC:0 RBL:0 FLG:0x0000
CHANGE #2 CON_ID:3 TYP:0 CLS:1 AFN:19 DBA:0x04c00083 OBJ:94544 SCN:0x00000b860f2d3d1c SEQ:5 OP:11.2 ENC:0 RBL:0 FLG:0x0000
CHANGE #2 CON_ID:3 TYP:0 CLS:1 AFN:19 DBA:0x04c00083 OBJ:94544 SCN:0x00000b860f2d3d1c SEQ:6 OP:11.2 ENC:0 RBL:0 FLG:0x0000
CHANGE #2 CON_ID:3 TYP:0 CLS:1 AFN:19 DBA:0x04c00083 OBJ:94544 SCN:0x00000b860f2d3d1c SEQ:7 OP:11.2 ENC:0 RBL:0 FLG:0x0000
CHANGE #2 CON_ID:3 TYP:0 CLS:1 AFN:19 DBA:0x04c00083 OBJ:94544 SCN:0x00000b860f2d3d1c SEQ:8 OP:11.2 ENC:0 RBL:0 FLG:0x0000
CHANGE #2 CON_ID:3 TYP:0 CLS:1 AFN:19 DBA:0x04c00083 OBJ:94544 SCN:0x00000b860f2d3d1c SEQ:9 OP:11.2 ENC:0 RBL:0 FLG:0x0000
CHANGE #2 CON_ID:3 TYP:0 CLS:1 AFN:19 DBA:0x04c00083 OBJ:94544 SCN:0x00000b860f2d3d1c SEQ:10 OP:11.2 ENC:0 RBL:0 FLG:0x0000
CHANGE #3 CON_ID:3 TYP:2 CLS:1 AFN:10 DBA:0x01009c57 OBJ:720 SCN:0x00000b860f2d3cb8 SEQ:1 OP:11.2 ENC:0 RBL:0 FLG:0x0000
CHANGE #2 CON_ID:3 TYP:0 CLS:1 AFN:19 DBA:0x04c00083 OBJ:94544 SCN:0x00000b860f2d3d1c SEQ:11 OP:11.3 ENC:0 RBL:0 FLG:0x0000
CHANGE #2 CON_ID:3 TYP:0 CLS:1 AFN:19 DBA:0x04c00083 OBJ:94544 SCN:0x00000b860f2d3d1d SEQ:1 OP:11.3 ENC:0 RBL:0 FLG:0x0000
CHANGE #2 CON_ID:3 TYP:0 CLS:1 AFN:19 DBA:0x04c00083 OBJ:94544 SCN:0x00000b860f2d3d1d SEQ:2 OP:11.3 ENC:0 RBL:0 FLG:0x0000
CHANGE #2 CON_ID:3 TYP:0 CLS:1 AFN:19 DBA:0x04c00083 OBJ:94544 SCN:0x00000b860f2d3d1d SEQ:3 OP:11.3 ENC:0 RBL:0 FLG:0x0000

What you can see are 13 op codes “11.2” – which is “insert row piece”. Most of these are for OBJ 94544, which is the object_id for my table, and one of them is for OBJ 720, which is a table owned by sys called exp_head$ (something to do with “expression tracking”). After the inserts there are 4 op codes “11.3”, which is “delete row piece”. So there’s no obvious error or complication in the redo.

And now when we set statistics_level to typical and re-run the script (which means we get a new OBJ number):

CHANGE #3 CON_ID:3 TYP:0 CLS:1 AFN:19 DBA:0x04c00083 OBJ:94545 SCN:0x00000b860f2d3d8a SEQ:2 OP:11.2 ENC:0 RBL:0 FLG:0x0000
CHANGE #2 CON_ID:3 TYP:0 CLS:1 AFN:19 DBA:0x04c00083 OBJ:94545 SCN:0x00000b860f2d3d8a SEQ:3 OP:11.2 ENC:0 RBL:0 FLG:0x0000
CHANGE #2 CON_ID:3 TYP:0 CLS:1 AFN:19 DBA:0x04c00083 OBJ:94545 SCN:0x00000b860f2d3d8b SEQ:1 OP:11.2 ENC:0 RBL:0 FLG:0x0000
CHANGE #2 CON_ID:3 TYP:0 CLS:1 AFN:19 DBA:0x04c00083 OBJ:94545 SCN:0x00000b860f2d3d8b SEQ:2 OP:11.2 ENC:0 RBL:0 FLG:0x0000
CHANGE #2 CON_ID:3 TYP:0 CLS:1 AFN:19 DBA:0x04c00083 OBJ:94545 SCN:0x00000b860f2d3d8b SEQ:3 OP:11.2 ENC:0 RBL:0 FLG:0x0000
CHANGE #2 CON_ID:3 TYP:0 CLS:1 AFN:19 DBA:0x04c00083 OBJ:94545 SCN:0x00000b860f2d3d8b SEQ:4 OP:11.2 ENC:0 RBL:0 FLG:0x0000
CHANGE #2 CON_ID:3 TYP:0 CLS:1 AFN:19 DBA:0x04c00083 OBJ:94545 SCN:0x00000b860f2d3d8b SEQ:5 OP:11.2 ENC:0 RBL:0 FLG:0x0000
CHANGE #2 CON_ID:3 TYP:0 CLS:1 AFN:19 DBA:0x04c00083 OBJ:94545 SCN:0x00000b860f2d3d8b SEQ:6 OP:11.2 ENC:0 RBL:0 FLG:0x0000
CHANGE #2 CON_ID:3 TYP:0 CLS:1 AFN:19 DBA:0x04c00083 OBJ:94545 SCN:0x00000b860f2d3d8b SEQ:7 OP:11.2 ENC:0 RBL:0 FLG:0x0000
CHANGE #2 CON_ID:3 TYP:0 CLS:1 AFN:19 DBA:0x04c00083 OBJ:94545 SCN:0x00000b860f2d3d8b SEQ:8 OP:11.2 ENC:0 RBL:0 FLG:0x0000
CHANGE #2 CON_ID:3 TYP:0 CLS:1 AFN:19 DBA:0x04c00083 OBJ:94545 SCN:0x00000b860f2d3d8b SEQ:9 OP:11.2 ENC:0 RBL:0 FLG:0x0000
CHANGE #2 CON_ID:3 TYP:0 CLS:1 AFN:19 DBA:0x04c00083 OBJ:94545 SCN:0x00000b860f2d3d8b SEQ:10 OP:11.2 ENC:0 RBL:0 FLG:0x0000
CHANGE #3 CON_ID:3 TYP:2 CLS:1 AFN:10 DBA:0x01009c57 OBJ:720 SCN:0x00000b860f2d3d1d SEQ:1 OP:11.2 ENC:0 RBL:0 FLG:0x0000
CHANGE #2 CON_ID:3 TYP:0 CLS:1 AFN:19 DBA:0x04c00083 OBJ:94545 SCN:0x00000b860f2d3d8b SEQ:11 OP:11.12 ENC:0 RBL:0 FLG:0x0000

In this case we get 13 “insert row piece” op codes, and then a single “11.12”, which is “delete multiple rows”. So there IS a difference in the redo, and something about that difference is probably behind the error coming and going.

Taking a closer look at the 11.12 and one of the 11.2 redo change vectors, this is what we see:


CHANGE #2 CON_ID:3 TYP:0 CLS:1 AFN:19 DBA:0x04c00083 OBJ:94545 SCN:0x00000b860f2d3d8b SEQ:11 OP:11.12 ENC:0 RBL:0 FLG:0x0000
KTB Redo
op: 0x02  ver: 0x01
compat bit: 4 (post-11) padding: 1
op: C  uba: 0x04400a24.0672.3c
KDO Op code: QMD row dependencies Disabled
  xtype: XA flags: 0x00000000  bdba: 0x04c00083  hdba: 0x04c00082
itli: 1  ispac: 0  maxfr: 4858
tabn: 0 lock: 1 nrow: 4
slot[0]: 1
slot[1]: 4
slot[2]: 7
slot[3]: 10


CHANGE #2 CON_ID:3 TYP:0 CLS:1 AFN:19 DBA:0x04c00083 OBJ:94544 SCN:0x00000b860f2d3d1d SEQ:3 OP:11.3 ENC:0 RBL:0 FLG:0x0000
KTB Redo
op: 0x02  ver: 0x01
compat bit: 4 (post-11) padding: 1
op: C  uba: 0x04400467.0595.2c
KDO Op code: DRP row dependencies Disabled
  xtype: XA flags: 0x00000000  bdba: 0x04c00083  hdba: 0x04c00082
itli: 1  ispac: 0  maxfr: 4858
tabn: 0 slot: 10(0xa)

The 11.12 op code shows us a list of 4 entries in the block’s “row directory” for “table 0”. The 11.3 op code shows us just one – it’s the last of the four deletes, so it’s reporting entry 10 in the “row directory”; the previous three op code 11.3 vectors were reporting slots 1, 4, and 7 respectively.

Obviously we can’t draw any firm conclusions about what’s going wrong; we simply have a few observations that might give us some idea of the type of error. After a few more experiments I decided that the code handling flashback versioning was mis-interpreting the “delete multiple rows” record: it consistently seemed to delete the first row identified by the slot[0] entry, then jump straight to the last slot but add one to the index entry before attempting to apply the vector.

Footnote

I’ve used dbms_lock.sleep() to pause for 10 seconds after creating the table. The sleep() procedure was copied into the dbms_session package in recent versions of Oracle so that end-user code could access it without having to receive privileges on the rather more dangerous dbms_lock package.
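
So on recent versions the pause could equally be written as:

execute dbms_session.sleep(10)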

I’ve referenced the statistics_level parameter as the one affecting the generation of the “delete multiple row” redo vector. In fact you can get the same effect by setting the hidden parameter “_rowsource_execution_statistics” to true. Setting statistics_level to all has the effect of enabling rowsource execution statistics so this isn’t surprising.

I pointed out that the flashback versioning code seemed to “add 1” to the final slot[n] identifier before producing the flashback output. Think about what that means if you change the demonstration to delete the LAST row in the list of rows inserted. Here’s what the versioning query did in a test when I changed 11 to 12 in my delete statement:


ERROR at line 2:
ORA-00600: internal error code, arguments: [ktrvtgr_2], [79691907], [0], [11], [], [], [], [], [], [], [], []

79691907 is the block number of the block where the table rows are stored.
11 is the correct row directory index for the last row (12) in the table – but my hypothesis is that the flashback code was trying to find (non-existent) directory index 12 because it adds one to the supplied index entry.

Update

In the interval between completing the final draft yesterday and publishing the note this morning, Oracle support replied to the forum thread with a bug number (28910586 – not yet publicly visible) and a note that a patch could be available on request if an SR were raised giving the Oracle version and platform. It will be interesting to see whether the patch blocks the creation of the 11.12 op codes or whether it corrects the interpretation of 11.12 codes by the flashback versions code.

Video : Simple Oracle Document Access (SODA) for REST

In today’s video we’ll give a demonstration of Simple Oracle Document Access (SODA) for REST. This is a feature of Oracle REST Data Services (ORDS).

This video is based on the following article.

There are some other resources that might come in handy.

The star of today’s video is the son of Rodrigo Mufalani. Rodrigo is a fellow Oracle ACE and you can check out Rodrigo’s blog here.

Cheers

Tim…

The post Video : Simple Oracle Document Access (SODA) for REST first appeared on The ORACLE-BASE Blog.



How to upgrade to Oracle 19.8 on Windows

As always, you should follow the instructions in the README.html that comes with the Release Update, but because I know many people like to see what the experience of others is like, here’s my run of the patch application.

As always, don’t forget to run your CMD prompt as Administrator, otherwise OPatch will typically have trouble accessing and locking the software inventory.

Prerequisites

Get your environment setup as per the patch notes.


C:\WINDOWS\system32>set ORACLE_SID=db19

C:\WINDOWS\system32>set ORACLE_HOME=C:\oracle\product\19

C:\WINDOWS\system32>set PATH=C:\oracle\product\19\bin;%PATH%

C:\WINDOWS\system32>set PATH=%ORACLE_HOME%\perl\bin;%PATH%

C:\WINDOWS\system32>set PERL5LIB=

C:\WINDOWS\system32>net stop msdtc
The Distributed Transaction Coordinator service is stopping.
The Distributed Transaction Coordinator service was stopped successfully.

Check your OPatch version

For this patch, it needs to be 12.2.0.1.17 or higher. Mine was OK, but my general recommendation is to always download the latest OPatch any time you need to use it. Of course, I was lazy here and did not. :)


C:\WINDOWS\system32>set PATH=%ORACLE_HOME%\Opatch;%PATH%

C:\WINDOWS\system32>opatch version
OPatch Version: 12.2.0.1.19

OPatch succeeded.

Apply the patch

Shut down everything that might be using the Oracle executables in the ORACLE_HOME you are about to patch. Note that on Windows this is more than just running a “shutdown” on the database: you need to go into the Windows Services and ensure that any Oracle services are also stopped, so that nothing is running.
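
A quick way to see what’s still running is to list the Oracle services – a sketch, where the service name OracleServiceDB19 follows the usual OracleService<SID> convention for my SID db19 (yours will differ):

C:\WINDOWS\system32>powershell -Command "Get-Service Oracle* | Format-Table Status, Name"

C:\WINDOWS\system32>net stop OracleServiceDB19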

Now when you apply the patch, there is a good chance it is going to fail. :)



D:\>cd D:\oracle\stage\win198\31247621

D:\oracle\stage\win198\31247621>opatch apply
Oracle Interim Patch Installer version 12.2.0.1.19
Copyright (c) 2020, Oracle Corporation.  All rights reserved.


Oracle Home       : C:\oracle\product\19
Central Inventory : C:\Program Files\Oracle\Inventory
   from           :
OPatch version    : 12.2.0.1.19
OUI version       : 12.2.0.7.0
Log file location : C:\oracle\product\19\cfgtoollogs\opatch\opatch2020-08-18_14-41-35PM_1.log

Verifying environment and performing prerequisite checks...

Conflicts/Supersets for each patch are:

Patch : 31247621

        Bug Superset of 30901317
        Super set bugs are:
        29947145, 29968085, 29293072, 29516727, 28774416, 29766679, 30147473, 29631749, 29434301, 29443250, 29643721, 30114534, ....

        Conflict with 30805684
        Conflict details:
        C:\oracle\product\19\rdbms\admin\oracle.sym
        C:\oracle\product\19\bin\oracle.exe


Following patches have conflicts: [   30805684   31247621 ]
Use the MOS Patch Conflict Checker "https://support.oracle.com/epmos/faces/PatchConflictCheck" to resolve.
See MOS documents 1941934.1 and 1299688.1 for additional information and resolution methods.


UtilSession failed: Please rebuild the superset patch [31247621] to make sure it supersedes all the relevant patch(es) [30901317,30805684].
The rebuild patch should contain bug fix [29994370, 29947145, 30834110, 29968085, 29293072, 29516727, 28774416, 29766679, 30147473, 29631749, ...

Log file location: C:\oracle\product\19\cfgtoollogs\opatch\opatch2020-08-18_14-41-35PM_1.log

OPatch failed with error code = 73

It failed because of a conflict with patch 30805684, and it is at this point I found the following line that I should have paid attention to in the patch README:

If Opatch reports conflicts with an earlier version of OJVM Bundle Patch, perform a rollback of the OJVM patch before following below steps.

And yes indeed, 30805684 is my 19.7 OJVM patch. So you’ll need to head off to My Oracle Support and download the 19.8 OJVM as well (patch 31219897).

Roll back the old OJVM patch


D:\oracle\stage\win198\31247621>opatch rollback -id 30805684
Oracle Interim Patch Installer version 12.2.0.1.19
Copyright (c) 2020, Oracle Corporation.  All rights reserved.


Oracle Home       : C:\oracle\product\19
Central Inventory : C:\Program Files\Oracle\Inventory
   from           :
OPatch version    : 12.2.0.1.19
OUI version       : 12.2.0.7.0
Log file location : C:\oracle\product\19\cfgtoollogs\opatch\opatch2020-08-18_14-46-25PM_1.log


Patches will be rolled back in the following order:
   30805684
The following patch(es) will be rolled back: 30805684

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = 'C:\oracle\product\19')


Is the local system ready for patching? [y|n]
y
User Responded with: Y

Rolling back patch 30805684...

RollbackSession rolling back interim patch '30805684' from OH 'C:\oracle\product\19'

Patching component oracle.rdbms, 19.0.0.0.0...

Patching component oracle.rdbms.dbscripts, 19.0.0.0.0...

Patching component oracle.javavm.server, 19.0.0.0.0...

Patching component oracle.javavm.client, 19.0.0.0.0...
RollbackSession removing interim patch '30805684' from inventory
Log file location: C:\oracle\product\19\cfgtoollogs\opatch\opatch2020-08-18_14-46-25PM_1.log

OPatch succeeded.

Now we can try again to apply the patch.

Subquery with OR

I’ve written a couple of notes in the past about the problems of optimising queries with predicates of the form “or exists {subquery}”. A recent question on the Oracle Developer Community forum brought to my attention an improvement in this area in (very precisely) 12.2, as well as giving me a cute example of how the first cut of a new feature doesn’t always cover every detail, and creating a nice example of how the new technology enhances the old technology.

We start with some data and a simple query running under 12.2.0.1:

rem
rem     Script:         exists_with_or_4.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Aug 2020
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem             12.1.0.2  -- feature not implemented
rem

create table cat_contact(
        contact_method_id       varchar2(1) not null,
        contact_id              number(8,0) not null,
        small_vc                varchar2(10),
        padding                 varchar2(100)
);

alter table cat_contact add constraint cc_pk primary key(contact_id);
create index cc_i1 on cat_contact(contact_method_id);

insert into cat_contact
select
        chr(64 + case when rownum <= 10 then rownum else 26 end),
        rownum,
        lpad(rownum,10),
        rpad('x',100,'x')
from
        all_objects
where
        rownum <= 10000
;

select count(*) from cat_contact where contact_method_id in ('A','B','C');

create table cat_item(
        contact_id      number(8,0) not null,
        item_category   varchar2(1) not null,
        small_vc        varchar2(10),
        padding         varchar2(100),
        constraint ci_ref_cc foreign key(contact_id) references cat_contact
)
;

alter table cat_item add constraint ci_pk primary key(contact_id, item_category);
create index ci_i1 on cat_item(item_category);

insert into cat_item 
select
        rownum,
        chr(88 + case when rownum <= 10 then mod(rownum,2) else 2 end),
        lpad(rownum,10),
        rpad('x',100,'x')
from
        all_objects
where
        rownum <= 10000
;

select count(*) from cat_item where item_category in ('X','Y');

execute dbms_stats.gather_table_stats(user,'cat_contact')
execute dbms_stats.gather_table_stats(user,'cat_item')

I’ve created and populated two tables (the table and column names come from the ODC thread). There’s a foreign key relationship defined between cat_item and cat_contact; both tables have primary keys declared, with a couple of extra columns declared not null.

I’ve populated the two tables with a small amount of data and each table has one column rigged with very skewed data:

  • cat_contact.contact_method_id is mostly ‘Z’ with one row each of ‘A’ to ‘J’,
  • cat_item.item_category (the second column in the primary key) is mostly ‘Z’ with 5 rows each of ‘X’ and ‘Y’

After populating each table I’ve queried it in a way which means the subsequent stats gathering will create frequency histograms on these two columns and the optimizer will be able to take advantage of the skew in its arithmetic, which means it may choose to use the indexes I’ve created on those skewed columns if the right values appear in the queries.
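
If you want to confirm that the frequency histograms really did appear before running the test, a quick dictionary query will show them (using the tables and columns created above):

select  table_name, column_name, histogram
from    user_tab_cols
where   table_name in ('CAT_CONTACT', 'CAT_ITEM')
and     column_name in ('CONTACT_METHOD_ID', 'ITEM_CATEGORY')
order by
        table_name
;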

So here’s the query we’re interested in:

SELECT  /*+ 
                qb_name(main) 
        */ 
        *  
FROM    cat_contact c  
WHERE   (
                exists  (  
                        SELECT  /*+ qb_name(subq) */
                                *  
                        FROM    cat_item i  
                        WHERE   i.contact_id = c.contact_id  
                        AND     i.item_category in ('X', 'Y')  
                )
        OR      c.contact_method_id IN ('A', 'B', 'C')  
        )
;  

select * from table(dbms_xplan.display);

Here’s the default execution plan (in 12.2.0.1 with my settings for system stats and various other optimizer-related figures that MIGHT make a difference) pulled from memory after executing the query to return 10 rows.


-----------------------------------------------------------------------------------
| Id  | Operation           | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |             |       |       |    34 (100)|          |
|*  1 |  FILTER             |             |       |       |            |          |
|   2 |   TABLE ACCESS FULL | CAT_CONTACT | 10000 |  1152K|    34   (6)| 00:00:01 |
|   3 |   INLIST ITERATOR   |             |       |       |            |          |
|*  4 |    INDEX UNIQUE SCAN| CI_PK       |     1 |     6 |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter((INTERNAL_FUNCTION("C"."CONTACT_METHOD_ID") OR  IS NOT NULL))
   4 - access("I"."CONTACT_ID"=:B1 AND (("I"."ITEM_CATEGORY"='X' OR
              "I"."ITEM_CATEGORY"='Y')))

For every row in the cat_contact table Oracle has checked whether or not the contact_method is an ‘A’, ‘B’, or ‘C’ and passed any such rows up to its parent; for all other rows it has then executed the subquery to see if the row with the matching contact_id in cat_item has an ‘X’ or ‘Y’ as the item_category. It’s had to run the subquery 9,997 times (there were only three rows matching ‘A’,’B’,’C’), and the INLIST ITERATOR at operation 3 means that it’s probed the index nearly 20,000 times. This does not look efficient.

I’ve said in previous articles that when you need to optimize queries of this shape you need to rewrite them as UNION ALL queries to separate the two parts of the complex OR predicate, and then make sure that you don’t report any items twice – which you do by making use of the lnnvl() function.
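
To show what that rewrite looks like, here is a hand-written sketch I’ve added (not from the original note); it reuses the tables defined above, and the lnnvl() predicates mirror the form the optimizer itself generates, as you’ll see in the Predicate Information below:

SELECT  *
FROM    cat_contact c
WHERE   c.contact_method_id IN ('A', 'B', 'C')
UNION ALL
SELECT  *
FROM    cat_contact c
WHERE   exists (
                SELECT  *
                FROM    cat_item i
                WHERE   i.contact_id = c.contact_id
                AND     i.item_category in ('X', 'Y')
        )
AND     lnnvl(c.contact_method_id = 'A')
AND     lnnvl(c.contact_method_id = 'B')
AND     lnnvl(c.contact_method_id = 'C')
;

Rather than rewriting by hand, though, let’s do it the lazy “new technology” way by upgrading to 19c and executing the original query there; here’s the plan I got in 19.3.0.0: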


-------------------------------------------------------------------------------------------------------------
| Id  | Operation                                 | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                          |                 |       |       |    14 (100)|          |
|   1 |  VIEW                                     | VW_ORE_231AD113 |    13 |   962 |    14   (8)| 00:00:01 |
|   2 |   UNION-ALL                               |                 |       |       |            |          |
|   3 |    INLIST ITERATOR                        |                 |       |       |            |          |
|   4 |     TABLE ACCESS BY INDEX ROWID BATCHED   | CAT_CONTACT     |     3 |   354 |     4   (0)| 00:00:01 |
|*  5 |      INDEX RANGE SCAN                     | CC_I1           |     3 |       |     3   (0)| 00:00:01 |
|   6 |    NESTED LOOPS                           |                 |    10 |  1240 |    10  (10)| 00:00:01 |
|   7 |     NESTED LOOPS                          |                 |    10 |  1240 |    10  (10)| 00:00:01 |
|   8 |      SORT UNIQUE                          |                 |    10 |    60 |     4   (0)| 00:00:01 |
|   9 |       INLIST ITERATOR                     |                 |       |       |            |          |
|  10 |        TABLE ACCESS BY INDEX ROWID BATCHED| CAT_ITEM        |    10 |    60 |     4   (0)| 00:00:01 |
|* 11 |         INDEX RANGE SCAN                  | CI_I1           |    10 |       |     3   (0)| 00:00:01 |
|* 12 |      INDEX UNIQUE SCAN                    | CC_PK           |     1 |       |     0   (0)|          |
|* 13 |     TABLE ACCESS BY INDEX ROWID           | CAT_CONTACT     |     1 |   118 |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   5 - access(("C"."CONTACT_METHOD_ID"='A' OR "C"."CONTACT_METHOD_ID"='B' OR
              "C"."CONTACT_METHOD_ID"='C'))
  11 - access(("I"."ITEM_CATEGORY"='X' OR "I"."ITEM_CATEGORY"='Y'))
  12 - access("I"."CONTACT_ID"="C"."CONTACT_ID")
  13 - filter((LNNVL("C"."CONTACT_METHOD_ID"='A') AND LNNVL("C"."CONTACT_METHOD_ID"='B') AND
              LNNVL("C"."CONTACT_METHOD_ID"='C')))

The optimizer has used the new “cost-based OR-expansion” transformation to rewrite the query as a UNION ALL query. We can see an efficient access into cat_contact to identify the ‘A’,’B’,’C’ rows, and then we can see that the second branch of the union all handles the existence subquery but the optimizer has unnested the subquery to select the 10 rows from cat_item where the item_category is ‘X’ or ‘Y’ and used those rows in a nested loop to drive into the cat_contact table using the primary key. We can also see the use of the lnnvl() function in operation 13 that ensures we don’t accidentally report the ‘A’,’B’,’C’ rows again.

So let’s go back to 12.2.0.1 and see what happens if we just add the /*+ or_expand(@main) */ hint to the query. Here’s the resulting execution plan:


-------------------------------------------------------------------------------------------------------------
| Id  | Operation                                 | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                          |                 |       |       |    14 (100)|          |
|   1 |  VIEW                                     | VW_ORE_231AD113 |    13 |   962 |    14   (8)| 00:00:01 |
|   2 |   UNION-ALL                               |                 |       |       |            |          |
|   3 |    INLIST ITERATOR                        |                 |       |       |            |          |
|   4 |     TABLE ACCESS BY INDEX ROWID BATCHED   | CAT_CONTACT     |     3 |   354 |     4   (0)| 00:00:01 |
|*  5 |      INDEX RANGE SCAN                     | CC_I1           |     3 |       |     3   (0)| 00:00:01 |
|   6 |    NESTED LOOPS                           |                 |    10 |  1240 |    10  (10)| 00:00:01 |
|   7 |     NESTED LOOPS                          |                 |    10 |  1240 |    10  (10)| 00:00:01 |
|   8 |      SORT UNIQUE                          |                 |    10 |    60 |     4   (0)| 00:00:01 |
|   9 |       INLIST ITERATOR                     |                 |       |       |            |          |
|  10 |        TABLE ACCESS BY INDEX ROWID BATCHED| CAT_ITEM        |    10 |    60 |     4   (0)| 00:00:01 |
|* 11 |         INDEX RANGE SCAN                  | CI_I1           |    10 |       |     3   (0)| 00:00:01 |
|* 12 |      INDEX UNIQUE SCAN                    | CC_PK           |     1 |       |     0   (0)|          |
|* 13 |     TABLE ACCESS BY INDEX ROWID           | CAT_CONTACT     |     1 |   118 |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   5 - access(("C"."CONTACT_METHOD_ID"='A' OR "C"."CONTACT_METHOD_ID"='B' OR
              "C"."CONTACT_METHOD_ID"='C'))
  11 - access(("I"."ITEM_CATEGORY"='X' OR "I"."ITEM_CATEGORY"='Y'))
  12 - access("I"."CONTACT_ID"="C"."CONTACT_ID")
  13 - filter((LNNVL("C"."CONTACT_METHOD_ID"='A') AND LNNVL("C"."CONTACT_METHOD_ID"='B') AND
              LNNVL("C"."CONTACT_METHOD_ID"='C')))

We get exactly the plan we want – with the same cost as the 19c plan, which happens to be less than half the cost of the default plan that we got from 12.2.0.1. So it looks like there may be cases where you will need to hint OR-expansion because it might not appear by default.

Other Observations 1 – ordering

You may have noticed that my query has, unusually for me, put the existence subquery first and the simple filter predicate second in the where clause. I don’t like this pattern, as (over time, and with different developers modifying queries) it’s too easy in more complex cases to “lose” the simple predicate; a one-liner can easily drift, change indents, get bracketed with another predicate that it shouldn’t be connected with, and so on. I’ve actually seen production systems producing wrong results because little editing accidents like this (counting brackets is the classic error) have occurred – so I’m going to rerun the test on 12.2.0.1 with the predicates in the order I would normally write them.

Here’s the “corrected” query with its execution plan:


SELECT  /*+ 
                qb_name(main) 
                or_expand(@main)
        */ 
        *  
FROM    cat_contact c  
WHERE   (
                c.contact_method_id IN ('A', 'B', 'C')  
        OR
                exists  (  
                        SELECT  /*+ qb_name(subq) */
                                *  
                        FROM    cat_item i  
                        WHERE   i.contact_id = c.contact_id  
                        AND     i.item_category in ('X', 'Y')  
                )
        )
;  


-------------------------------------------------------------------------------------------------------------
| Id  | Operation                                 | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                          |                 |       |       |    16 (100)|          |
|   1 |  VIEW                                     | VW_ORE_231AD113 |    13 |   962 |    16   (7)| 00:00:01 |
|   2 |   UNION-ALL                               |                 |       |       |            |          |
|   3 |    NESTED LOOPS                           |                 |    10 |  1240 |    10  (10)| 00:00:01 |
|   4 |     NESTED LOOPS                          |                 |    10 |  1240 |    10  (10)| 00:00:01 |
|   5 |      SORT UNIQUE                          |                 |    10 |    60 |     4   (0)| 00:00:01 |
|   6 |       INLIST ITERATOR                     |                 |       |       |            |          |
|   7 |        TABLE ACCESS BY INDEX ROWID BATCHED| CAT_ITEM        |    10 |    60 |     4   (0)| 00:00:01 |
|*  8 |         INDEX RANGE SCAN                  | CI_I1           |    10 |       |     3   (0)| 00:00:01 |
|*  9 |      INDEX UNIQUE SCAN                    | CC_PK           |     1 |       |     0   (0)|          |
|  10 |     TABLE ACCESS BY INDEX ROWID           | CAT_CONTACT     |     1 |   118 |     1   (0)| 00:00:01 |
|* 11 |    FILTER                                 |                 |       |       |            |          |
|  12 |     INLIST ITERATOR                       |                 |       |       |            |          |
|  13 |      TABLE ACCESS BY INDEX ROWID BATCHED  | CAT_CONTACT     |     3 |   354 |     4   (0)| 00:00:01 |
|* 14 |       INDEX RANGE SCAN                    | CC_I1           |     3 |       |     3   (0)| 00:00:01 |
|  15 |     INLIST ITERATOR                       |                 |       |       |            |          |
|* 16 |      INDEX UNIQUE SCAN                    | CI_PK           |     1 |     6 |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   8 - access(("I"."ITEM_CATEGORY"='X' OR "I"."ITEM_CATEGORY"='Y'))
   9 - access("I"."CONTACT_ID"="C"."CONTACT_ID")
  11 - filter(LNNVL( IS NOT NULL))
  14 - access(("C"."CONTACT_METHOD_ID"='A' OR "C"."CONTACT_METHOD_ID"='B' OR
              "C"."CONTACT_METHOD_ID"='C'))
  16 - access("I"."CONTACT_ID"=:B1 AND (("I"."ITEM_CATEGORY"='X' OR "I"."ITEM_CATEGORY"='Y')))

The execution plan has jumped from 14 lines to 17 lines, the cost has gone up from 14 to 16, and both branches of the plan now report access to cat_contact and cat_item (though only through its primary key index in the second branch). What’s happened?

Oracle 12.2.0.1 has rewritten the query as a UNION ALL working from the bottom up – so in this case the first branch of the rewrite handles the original filter subquery, unnesting it to drive efficiently from cat_item to cat_contact. This means the second branch of the rewrite has to find the ‘A’,’B’,’C’ rows in cat_contact and then check that the filter subquery hadn’t already reported them – so the optimizer has applied the lnnvl() function to the filter subquery – which you can nearly see in the Predicate Information for operation 11.

To make it clearer, here’s what you get as the predicate information for that operation after calling explain plan and dbms_xplan.display():

  11 - filter(LNNVL( EXISTS (SELECT /*+ QB_NAME ("SUBQ") */ 0 FROM "CAT_ITEM" "I" WHERE
              ("I"."ITEM_CATEGORY"='X' OR "I"."ITEM_CATEGORY"='Y') AND "I"."CONTACT_ID"=:B1)))

In 12.2 the order of predicates in your query seems to be important – unless told otherwise the optimizer works from the bottom up (then rewrites top down). But there is hope (though not documented hope). I added the /*+ or_expand(@main) */ hint to the query to force OR-expansion. Checking the Outline Information of the plan I could see that this had been expanded to /*+ or_expand(@main (1) (2)) */. Taking a wild guess at the significance of the numbers and changing the hint to /*+ or_expand(@main (2) (1)) */ I re-ran the test and got back to the more efficient plan – with the filter subquery branch appearing second in the UNION ALL view and the lnnvl() applied to the simpler predicate.
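For completeness, here’s a sketch of that final hinted query. The two numbers come from the expanded outline hint, and my reading of them as branch ordering is a guess that happened to work, so treat the syntax as undocumented:

-- branch numbers guessed from the expanded outline: (2) (1) reverses
-- the order of the two branches in the UNION ALL view
SELECT  /*+ 
                qb_name(main) 
                or_expand(@main (2) (1))
        */ 
        *  
FROM    cat_contact c  
WHERE   (
                c.contact_method_id IN ('A', 'B', 'C')  
        OR
                exists  (  
                        SELECT  /*+ qb_name(subq) */
                                *  
                        FROM    cat_item i  
                        WHERE   i.contact_id = c.contact_id  
                        AND     i.item_category in ('X', 'Y')  
                )
        )
;  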

So the OR-expansion code is not fully cost-based in 12.2.0.1, but you can modify the behaviour through hinting: first, to force it to appear (which may not happen even if it seems to be the lower-cost option), and secondly, to control the ordering of the components of the UNION ALL. As with all things relating to hints, though, act with extreme caution: we do not have sufficient documentation explaining exactly how they work, and with some of them we don’t even know whether the code path is complete yet.

Other Observations 2 – 12cR1

The or_expand() hint and cost-based OR-expansion appeared specifically in 12.2.0.1; prior to that we had a similar option in the use_concat() hint and concatenation – which also attempts to rewrite your query to produce a union all of disjoint data sets. But there are restrictions on what concatenation can do. I rarely remember all of them, but two are critical:

  • first, it will only appear by default if there is an indexed access path available to drive every branch of the rewrite
  • secondly, it will not apply further transformations to the separate branches that it produces

If we try adding the or_expand() hint to our query in 12.1.0.2 it will have no effect, so let’s add a suitable use_concat() hint and see what happens:

explain plan for
SELECT  /*+ 
                qb_name(main) 
                use_concat(@main 8 or_predicates(1))
--              use_concat(@main   or_predicates(1))
        */ 
        *  
FROM    cat_contact c  
WHERE   (
                exists  (  
                        SELECT  /*+ qb_name(subq) */
                                *  
                        FROM    cat_item i  
                        WHERE   i.contact_id = c.contact_id  
                        AND     i.item_category in ('X', 'Y')  
                )
        OR
                c.contact_method_id IN ('A', 'B', 'C')  
        )
;  

select * from table(dbms_xplan.display);

-----------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |             | 10000 |  1152K|    40   (3)| 00:00:01 |
|   1 |  CONCATENATION                        |             |       |       |            |          |
|   2 |   INLIST ITERATOR                     |             |       |       |            |          |
|   3 |    TABLE ACCESS BY INDEX ROWID BATCHED| CAT_CONTACT |     3 |   354 |     4   (0)| 00:00:01 |
|*  4 |     INDEX RANGE SCAN                  | CC_I1       |     3 |       |     3   (0)| 00:00:01 |
|*  5 |   FILTER                              |             |       |       |            |          |
|*  6 |    TABLE ACCESS FULL                  | CAT_CONTACT |  9997 |  1151K|    35   (6)| 00:00:01 |
|   7 |    INLIST ITERATOR                    |             |       |       |            |          |
|*  8 |     INDEX UNIQUE SCAN                 | CI_PK       |     1 |     6 |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("C"."CONTACT_METHOD_ID"='A' OR "C"."CONTACT_METHOD_ID"='B' OR
              "C"."CONTACT_METHOD_ID"='C')
   5 - filter( EXISTS (SELECT /*+ QB_NAME ("SUBQ") */ 0 FROM "CAT_ITEM" "I" WHERE
              ("I"."ITEM_CATEGORY"='X' OR "I"."ITEM_CATEGORY"='Y') AND "I"."CONTACT_ID"=:B1))
   6 - filter(LNNVL("C"."CONTACT_METHOD_ID"='A') AND LNNVL("C"."CONTACT_METHOD_ID"='B') AND
              LNNVL("C"."CONTACT_METHOD_ID"='C'))
   8 - access("I"."CONTACT_ID"=:B1 AND ("I"."ITEM_CATEGORY"='X' OR "I"."ITEM_CATEGORY"='Y'))

26 rows selected.

As you can see, by forcing concatenation I’ve got my “union all” view with lnnvl() applied in the second branch. But the second branch was the “select where exists()” branch and the optimizer has not been able (allowed?) to do the unnesting that would let it drive efficiently from the cat_item table to the cat_contact table. The effect of this is that the plan still ends up with a full tablescan of cat_contact running a filter subquery on virtually every row – so concatenation doesn’t save us anything.

The significance of the “8” in the hint, by the way, is (I believe) that it tells the optimizer to use inlist iterators when possible. If I had omitted the “8” the plan would have had four branches – one each for ‘A’, ‘B’ and ‘C’, and a fourth for the filter subquery. I could also have added a hint /*+ use_concat(@subq or_predicates(1)) */ to replace operations 7 and 8 with a single index range scan with a filter predicate for the ‘X’/’Y’ check (and that might, in any case, be slightly more efficient than the iteration approach) – see the sketch below.
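Here’s a sketch of the statement with that extra hint in place – I haven’t shown the resulting plan, so take it as illustrative rather than proven:

-- the @subq hint is the untested variant described above; the intent is to
-- collapse operations 7 and 8 into one index range scan with a filter on item_category
explain plan for
SELECT  /*+ 
                qb_name(main) 
                use_concat(@main 8 or_predicates(1))
                use_concat(@subq or_predicates(1))
        */ 
        *  
FROM    cat_contact c  
WHERE   (
                exists  (  
                        SELECT  /*+ qb_name(subq) */
                                *  
                        FROM    cat_item i  
                        WHERE   i.contact_id = c.contact_id  
                        AND     i.item_category in ('X', 'Y')  
                )
        OR
                c.contact_method_id IN ('A', 'B', 'C')  
        )
;  

select * from table(dbms_xplan.display);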

Footnote(s)

The “legacy” OR-expansion (“concatenation”, a.k.a. LORE in the optimizer trace file) can be controlled through the hints use_concat() and no_expand().

The new cost-based OR-expansion (now ORE in the optimizer trace file) can be controlled through the hints or_expand() and no_or_expand().
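If you want to block the new mechanism (for example, to fall back to concatenation while testing) the no_or_expand() hint can go straight into the query. A minimal sketch against the tables from the example above – note that passing the query block name to no_or_expand() is my assumption, by analogy with the or_expand(@main) form used earlier:

-- no_or_expand(@main) should stop ORE for this query block; the @main argument
-- is assumed by analogy with or_expand(@main), not checked against the docs
SELECT  /*+ 
                qb_name(main) 
                no_or_expand(@main)
        */ 
        *  
FROM    cat_contact c  
WHERE   c.contact_method_id IN ('A', 'B', 'C')  
OR      exists  (  
                SELECT  /*+ qb_name(subq) */ *  
                FROM    cat_item i  
                WHERE   i.contact_id = c.contact_id  
                AND     i.item_category in ('X', 'Y')  
        )
;  

The legacy mechanism can be blocked in the same way with no_expand().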

The new cost-based OR-expansion has some restrictions, for example it is explicitly blocked in a MERGE statement, even in 19c, as reported in this blog note by Nenad Noveljic. As the blog note shows, concatenation is still possible but you (may) have to disable cost based OR-expansion.

I scanned the executable for the phrase “ORE: bypassed” to see if there were any messages that would suggest other reasons why cost-based OR-expansion would not be used; unfortunately the only relevant string was “ORE: bypassed – %s” [update (see comment 5 below): after ignoring case there was a second option: “ORE: Bypassed for disjunct chain: %s.”] – in other words, all the possible bypass messages would be filled in on demand. I found a list of messages that might be relevant; I’d be a little cautious about trusting it, but if you don’t see the feature appearing when you’re expecting it then it might be worth checking whether one of these could apply.

  • Old OR expansion hints present
  • Semi join hint present
  • QB has FALSE predicate
  • QB marked for NO Execution
  • Full Outer join QB
  • Rownum found in disjunction
  • Anti/semi/outer join in disjunction
  • Opposite Range in disjunction
  • No Index or Partition driver found
  • Predicate chain has all constant predicates
  • Negated predicate found
  • Long bitmap inlist in OR predicate
  • PRIOR expression in OR predicate
  • All 1-row tables found
  • No valid predicate for OR expansion
  • Disjunctive subquery unnesting possible
  • Subquery unnesting possible
  • Subquery coalesced query block
  • Merge view query block

Finally – here’s another reference blog note comparing LORE with ORE from Mohamed Houri.

 

Why I don’t want my presentations recorded!


I was on Twitter a couple of days ago and I mentioned my preference not to be recorded when I’m presenting. That sparked a few questions, so I said I would write a blog post about it. Here it is.

This is a bit of a stream of consciousness, so forgive me if I ramble.

The impact on me!

The primary reason I don’t like being recorded is it has a big impact on me.

I’ve said many times, presenting is not natural for me. I’m very nervous about doing it. I have to do a lot of preparation before an event to try to make it look casual, and almost conversational. It takes a rather large toll on me personally, invading every part of my life for weeks before it happens, and pretty much ruining the day(s) immediately before the event. In my head it’s going to be a complete disaster, and the public humiliation is going to be that much worse because I’m an Oracle ACE and Groundbreaker Ambassador, so I must clearly think I’m the shit, yet I can’t even present and don’t have a clue what I’m talking about. Classic impostor syndrome stuff.

That’s “normal” for me and conferences, which is why I nearly always get a post-conference crash, because of the relief it’s over. But it goes into overdrive if I know the session is going to be recorded, because in my head there will be a permanent record of my screw up.

I have been recorded before, but fortunately not on the sessions where I’ve screwed up… Yet… I don’t think… Recently I’ve decided that I will probably pull out of any event where I’m being recorded, as I can’t keep putting myself through that anymore.

There are other people that will happily fill the conference slot, so me not being there is no big deal.

Editorial control

When I write an article, I constantly go back and revisit things. If my opinion changes, I learn something new, or just don’t like the way I explained something I will rewrite it. I have full control of the content.

When I record a YouTube video I edit it, making sure it contains what I want it to contain. YouTube won’t let you do much in the way of editing a video once it’s posted, but you can make minor changes to the timeline. Even so, if something really annoyed me I could delete it, re-edit it and post it again. Yes I would lose all the views and comments, but ultimately I can do that if I want.

When a user group records a presentation, you no longer have any control of that content. If your opinion changes, or it contains some really dumb stuff, it is there for life. I know nothing is lost on the internet, but at least I should be able to control the “current version” of the content.

I very rarely write for other publications. I like to keep control of my content, so I can decide what to do with it. A lot of this is a throw-back to the previous point about my insecurities, but that’s how I feel about it, and why should I have to compromise about my content?

It’s my content!

Following on from the previous point, it is my content. I wrote it. I rehearsed it. I presented it. And most importantly, I wasn’t being paid to present it! Why should a user group now have control of that content?

Karen López (@datachick) recently posted a really interesting tweet.

“What would you think about an organization who held an event and you spoke at it for free. You signed an agreement to allow distribution to attendees, but they are now selling your content as part of a subscription that you are getting no compensation for?”

@datachick

I’m not saying this is what user groups are planning, but it’s certainly something some might try, now that times are getting harder than usual.

I’m sorry if this sounds really selfish, but I think I’m doing enough for the community and user groups without giving them additional product to sell. I know a lot of user groups find finance difficult, but in the current online model the financial situation is very different. There aren’t any buildings to hire or people to feed.

The audience matters!

My presentation style varies depending on the audience.

If I present in the UK I tend to speak faster and swear a bit. Similar with Australia. When I present in other countries I tend to tone down my language, as some places are really uptight about expletives.

In some countries where English is a second or third language, I slow down a lot and remove some content from the session, because I know there will be a larger number of people who will struggle to keep up. Maybe I’ll miss out a couple of anecdotes, so I can speak more slowly. If there is live translation I have to go a lot slower.

I remember seeing one recording of me presenting with live translation and I sounded really odd, as I was having to present so slowly for the live translation to work. It was kind-of shocking for me to see it back, and I would prefer people not see that version of the talk, as it doesn’t represent me. It’s “adjusted me” to suit the circumstance.

Other things…

OK. Let’s assume other speakers are not self-obsessed control freaks like me for a second…

It’s possible some people would prefer to be selective about what gets recorded. For example, the first time I do a talk I really don’t know how it will turn out. That’s different to the 10th time I give the same talk. For a new talk I doubt I would feel happy about it being recorded, even if I were generally cool with the concept. I may feel better about recording a talk I have done a few times, having had time to adjust and improve it. I think of this like comedians, who go on tour and constantly change their material based on how it works with the audience. At the end of a tour they record their special, only using the best bits. Then it’s time to start preparing for the next tour. I suspect many comedians would be annoyed at being recorded on the first day of a tour. Same idea…

I think recording sessions could be off-putting for new speakers. When you are new to the game there is enough to worry about, without having to think about this too. Maybe other people aren’t as “sensitive” as me, but maybe they are.

I don’t like to be in pictures and videos. It’s just not my thing. I rarely put myself into my videos on YouTube. I’m sure there would be other speakers who would prefer to be judged by what they say, rather than how they look.

I used to be concerned that if someone recorded my session and put it on YouTube, nobody would come to my future sessions on the same subject. I actually don’t think this is a real problem. It seems the audience for blog posts, videos and conferences is still quite different. Yes, there is some crossover, but there is also a large group of people that gravitate to their preferred medium and stick with it.

But what about…

Look, I really do know what the counter arguments to this are.

  • Some people can’t get to your session because of an agenda clash, and they would like to watch it later.
  • This gives the user group members a resource they can look back at to remind themselves what you said.
  • This is a resource for existing user group members who couldn’t make it to the event.
  • For paid events, the attendees are paying money, so they have the right to have access to recordings. (but remember, the speakers are not being paid!)

I know all this and more. I am sorry if people don’t like my view on this. I really am, and I’m happy not to be selected to speak at an event. It really doesn’t bother me. Feel free to pick someone else that fits into your business model. That is fine by me. It really is.

Conclusion

Maybe I’m the only person that feels this way. Maybe other people feel the same, but don’t feel they have a loud enough voice to make a big deal out of it.

At the end of the day, it’s my content and I should have the right to decide if I’m happy about it being recorded or not. I believe conferences should make recording optional, and I’ll opt out. If people believe recording should be mandatory, that’s totally fine. It’s just unlikely I will be involved.

I’m sorry if you don’t like my opinion, but that’s how I feel at this point and it’s my choice. My attitude may change in future. It may not. Either way, it’s still my choice!

Cheers

Tim…

Update: This is not because of any recent conferences. Just thought I better add that in case someone thought it was. I’ve been asking events not to record me for a while now and it’s not been drama. In a recent message for a conference later in the year I was asked to explicitly confirm my acceptance of recording and publishing rights, which is why I mentioned it on Twitter, which then prompted the discussion. Sorry to any recent events if you thought you were the catalyst for this. You weren’t. Love you!

Developer Live wrap up

The Developer Live event for database has concluded. Thank you to the (almost) 2000 people that attended my talk across the USA, Europe and APAC timezones! I very much appreciate you giving up your time to attend the session.

Whilst we were doing some Q&A I tried to add as many useful links into the chat as I could, to help people with onward learning on SQL performance and database performance in general. A few of you then asked if I could publish that chat so as not to lose the information, so here it is below.

From Me to Everyone:  04:06 PM
Hi everyone,

It’s Connor here. Chris Saxon and I are monitoring the Q&A channel, so fire off your questions and we’ll answer them right here. Anything we can’t answer, we’ll tackle at a later date.

Sessions are all recorded and will be made available in the next day or so.
Check back on
https://developer.oracle.com/developer-live/database for the recordings.

From Me to Everyone:  04:36 PM

If you want more details on PLSQL, I had a chat with Ace Director Patrick Barel about PLSQL, which you can check out here on my youtube channel https://www.youtube.com/watch?v=7xcQtxsM7DE
There are more videos on parsing and avoiding the pain of it at my YouTube channel https://www.youtube.com/c/ConnorMcDonaldOracle

If you want a fun way of explaining array fetching to your development team, here’s a funny video on a real world example https://www.youtube.com/watch?v=-Mlxdn5osvs

If you want more info on some of these more sophisticated SQL statements, here’s a playlist on analytic SQL https://www.youtube.com/watch?v=0cjxYMxa1e4&list=PLJMaoEWvHwFIUwMrF4HLnRksF0H8DHGtt

See the full example of how smart the database optimizer is here https://www.youtube.com/watch?v=s1MDfejM0ys

How to use FORCE_MATCHING_SIGNATURE on V$SQL. Check out my blog post on this
https://connor-mcdonald.com/2016/05/30/sql-statements-using-literals/
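In a nutshell, that technique is to group v$sql by the signature to spot statements that differ only in their literals. A minimal sketch of the idea – the cut-off of 10 repetitions is an arbitrary choice:

-- statements sharing a non-zero FORCE_MATCHING_SIGNATURE are identical
-- except for their literal values; a high count suggests missing binds
select  force_matching_signature, count(*) as occurrences
from    v$sql
where   force_matching_signature > 0
group by force_matching_signature
having  count(*) > 10
order by occurrences desc;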

cursor_sharing can also be used to improve DML (insert/update/delete) performance. Here’s an example https://connor-mcdonald.com/2019/05/17/hacking-together-faster-inserts/

More information on bind peeking https://www.youtube.com/watch?v=MnISfllmK74 and some of the problems with bind peeking that were there in earlier versions https://www.youtube.com/watch?v=ynLF6S15J5M

And here is how adaptive cursor sharing improves things https://www.youtube.com/watch?v=m9sjq9TDuTw

If you DO find SQL complex, then here’s a great resource to get up to speed with database development using SQL https://asktom.oracle.com/databases-for-developers.htm

Once again, thanks for attending, and if you missed the session, here it is below

Expert Advice: How to Build an Accessible Education Website on WordPress.com

Learn the basics and best practices of building an accessible and inclusive website for your classroom, school, or class assignment. This is a free, one-hour live webinar open to all, but is especially geared toward educators, teachers, school webmasters, and students.

Date: Thursday, August 27, 2020
Time: 10:00 am PT | 12:00 pm CT | 1:00 pm ET | 17:00 UTC
Registration link: https://zoom.us/webinar/register/2715977718561/WN_RFyhYfGNTOikZxw4aAsMXA
Who’s invited: All are welcome, but this webinar is designed for stakeholders within education, including teachers, educators, school webmasters, students, and parents.

Melissa Silberstang and Fernando Medina are WordPress.com Happiness Engineers and accessibility advocates who have helped thousands of people build websites on WordPress.com. They’ll help you understand what makes a great, accessible website, and what customizations to look out for as you build.

During the last 10-15 minutes of the webinar, attendees will be able to ask questions during the live Q&A portion.

We know you’re busy, so if you can’t make the live event, you’ll be able to watch a recording of the webinar on our YouTube channel.

Live attendance is limited, so be sure to register early. We look forward to seeing you on the webinar!