Oakies Blog Aggregator

Oracle 19c: Adventures with Automatic Indexing

The video of this recent presentation, given as a part of the Oracle Groundbreakers EMEA Tour 2020, is now available.

Automatic Indexing is one of the much-heralded features of Oracle 19c, but it is only available on Engineered Systems: that is, in the Autonomous Database (which is built on Exadata) and on other Exadata platforms. This presentation shares some initial experiences with the feature, based on testing it in conjunction with Swingbench, and discusses how well it performed.


Handling kernel upgrades with Ansible prior to an Oracle installation

As part of the process of setting up VMs in the cloud for use with the Oracle database, it is frequently necessary to update the system to the latest (and hopefully more secure) packages before the Oracle installation can begin. In a similar way I regularly upgrade the (cloud-vendor provided) base image when building a custom image using Packer. In my opinion this calls for an automated process, and Ansible is the right tool for me.

I may have mentioned once or twice that a Spacewalk powered (or equivalent) local repository is best for consistency. You may want to consider using it to ensure all systems are upgraded to the same packages. Applying the same package updates in production as you did in test (after successful regression testing of course) makes testing in lower-tier environments so much more meaningful ;)

Upgrading

My target VM is running in Oracle’s Cloud, and I’m spinning it and its required supporting infrastructure up with a small Terraform script. The playbook is executed immediately after the VM becomes available.

The playbook you are about to see later in the article is only intended for use prior to the initial installation of the Oracle binaries, after the VM has been freshly provisioned.

My playbook determines whether a new kernel-uek has been installed as part of the upgrade process, and optionally reboots the VM if it has to. A reboot is acceptable in my scenario, where I'm building a new VM with the Oracle software to be installed as part of the process. In other cases this approach is not tenable; consider yourself warned. A flag controls the reboot behaviour.

Be advised that not using a local repository can lead to an upgrade from kernel UEK5 to UEK6. The most current oraclelinux-release-el7 package ships with the ol7_UEKR5 repository disabled and the ol7_UEKR6 repository enabled. The playbook therefore enables the UEK5 repository explicitly, and disables ol7_UEKR6, to remain on the UEK5 kernel branch. It also checks for UEK5 and mandates Oracle Linux 7.8 or newer, because anything older really is dated by modern standards …
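
The same repository state can be verified (and, if needed, corrected) by hand; a quick sketch using yum-utils, assuming the ol7_UEKR5/ol7_UEKR6 repository IDs mentioned above:

# list the UEK repositories and whether they are enabled or disabled
yum repolist all | grep -i UEKR

# stay on the UEK5 branch (yum-config-manager is part of the yum-utils package)
yum-config-manager --enable ol7_UEKR5
yum-config-manager --disable ol7_UEKR6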

A few days ago Oracle Linux 7.9 became available, and again, depending on your yum configuration, you might end up upgrading 7.8 to 7.9. Which is exactly what I want, but not necessarily what you want. Please review your configuration carefully to ensure the correct outcome. It goes without saying that testing is a must.

Introducing Ansible for system upgrades

I have been using Ansible a lot over the past years; it's a handy tool to know. The question I wanted to answer is: how can I perform a full system upgrade with Ansible prior to installing the Oracle database?

The Ansible Playbook

Before talking more about the playbook, let’s see it first:

---
#
# Ansible playbook to update all RPM packages on a lab VM (powered by Oracle Linux 7.8 and later)
# prior to the installation of Oracle binaries (only).
#
# NEVER USE THIS PLAYBOOK ON A VM THAT HASN'T JUST FRESHLY BEEN PROVISIONED! 
#
# It is _only_ intended to be used as part of the initial Oracle installation on a fresh VM.
#
# The system upgrade includes the kernel as well, as it's an integral part of the system 
# and newer kernels provide security fixes and performance enhancements.
#
# The installation of a new kernel requires a reboot to become effective. You can control
# whether you want to reboot straight away or later (see variable "reboot"). The default is
# not to reboot the VM as part of the playbook's execution.
#
# The playbook requires the server to be booted into Oracle's Unbreakable Enterprise Kernel
# (UEK) for now, this can easily be changed if needed. 
#
# The reboot module requires ansible >= 2.7. 
#
# Both conditions are enforced. 
#
# As an added safety net, the playbook checks  for the presence of /etc/oraInst.loc and 
# /etc/oratab, which admittedly isn't a perfect way of identifying the presence of Oracle 
# software, but it's better than nothing. 
# It is _your_ responsibility to ensure you don't run this playbook outside the initial Oracle 
# software installation.
#
# PARAMETERS
# - reboot: if set to true the playbook is going to reboot the VM straight away. If set to
#      false it is your responsibility to reboot into the new kernel
# 
# Please refer to https://martincarstenbach.wordpress.com/ for more details
#

- hosts: oracledb
  name: full system upgrade of a lab VM prior to the installation of Oracle 19c
  become: yes
  vars:
    reboot: false

  tasks:
  - name: fail if the O/S is not detected as Oracle Linux 7.8 or later
    fail:
      msg: This playbook is written for Oracle Linux 7.8 and later
    when: ansible_distribution != 'OracleLinux' or ansible_distribution_version is version('7.8', '<')

  - name: fail if Oracle's UEK5 is not in use
    fail:
      msg: this playbook only covers Oracle's Unbreakable Enterprise Kernel UEK Release 5
    when: not ansible_kernel is search ("4.14")

  - name: fail if the Ansible release is older than 2.7
    fail:
      msg: This playbook requires Ansible 2.7 or later
    when: ansible_version.full is version('2.7', '<')

  # no guarantee this detects _your_ Oracle installation, see notes
  - name: try to detect Oracle software
    block:
      - name: try to detect Oracle Universal Installer's inventory pointer 
        stat: path=/etc/oraInst.loc
        register: orainst

      - name: fail if inventory pointer was detected
        fail:
          msg: It appears as if Oracle database software has already been installed, aborting 
        when: orainst.stat.exists | bool

      - name: try to detect the database's oratab file
        stat: path=/etc/oratab
        register: oratab

      - name: fail if an oratab file was detected
        fail:
          msg: It appears as if Oracle database software has already been installed, aborting 
        when: oratab.stat.exists | bool

  #
  # this is where the actual work is done
  # 
  - name: update all packages (remain on the UEK5 branch)
    yum:
      name: '*'
      state: latest
      enablerepo: ol7_UEKR5
      disablerepo: ol7_UEKR6
      update_cache: yes

  - name: get latest kernel UEK installed on disk
    shell: /usr/bin/rpm -q kernel-uek | /usr/bin/sort -V | /usr/bin/tail -1
    register: latest_uek
    args:
      warn: false

  - name: trim the RPM name to make it easier to compare with Ansible's facts
    set_fact:
      latest_uek_rel: "{{ latest_uek.stdout | regex_replace('kernel-uek-(.*)', '\\1') }}"

  - name: print detected kernel releases
    debug:
      msg: |
        This server booted into UEK {{ ansible_kernel }}, 
        the latest kernel on disk is {{ latest_uek_rel }}.

  - name: reboot the VM
    reboot:
      msg: Ansible is rebooting the VM now
    when: ansible_kernel != latest_uek_rel and reboot | bool 

  - name: print reboot reminder
    debug:
      msg: A new kernel has been installed, please remember to reboot the VM at an opportune moment
    when: ansible_kernel != latest_uek_rel and reboot | bool == false 

Although it looks lengthy, the code is straightforward. It updates all packages after a few safety checks. I experimented with another flag, upgrade_kernel, but had to learn the hard way that creating an exclusion list for yum is quite difficult, given the many different packages whose names start with kernel… At the end of the day I decided against its use.

Kernel Update

The hardest part was to come up with a way to compare the boot kernel with the latest installed kernel (on disk). The playbook only concerns itself with the Unbreakable Enterprise Kernel (UEK), ignoring the Red Hat Compatible Kernel (RHCK).

The latest kernel on disk can be found using a combination of the rpm, sort and tail commands. I tried to achieve the same result with Ansible's yum module and the list option, but would have had to spend quite a bit of time working out how to get the latest installed kernel that way. Back to the shell it was! Sort's -V option is magical, as it allows me to sort the kernels by release in ascending order. The last row returned has to be the most current kernel. If this kernel release (after being stripped of the kernel-uek- prefix) doesn't match the boot kernel, a reboot is necessary.
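
In plain shell terms the comparison boils down to something like this (the version strings are purely illustrative):

# the most recent UEK kernel installed on disk
rpm -q kernel-uek | sort -V | tail -1
# e.g. kernel-uek-4.14.35-2025.400.9.el7uek.x86_64

# the kernel the server actually booted into
uname -r
# e.g. 4.14.35-2025.400.8.el7uek.x86_64

# strip the "kernel-uek-" prefix from the first value; if the result differs from
# the output of uname -r, a newer kernel is on disk and a reboot is required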

Depending on whether you set the reboot flag to true or false, the system is rebooted straight away or left alone. If you are building a Packer image, or preparing a VM for an Oracle installation, you may want to consider setting the flag to true. If you don't, a friendly message reminds you of the need to reboot at your convenience.
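
The flag can be flipped at run time without editing the playbook; the inventory and playbook names below are placeholders:

# upgrade and reboot straight away if a newer kernel was installed
ansible-playbook -i inventory.ini upgrade.yml -e reboot=true

# default behaviour: upgrade only, and print a reminder instead of rebooting
ansible-playbook -i inventory.ini upgrade.yml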

Summary

I don't run this playbook in isolation; it has become an integral part of my Ansible-driven Oracle installation toolkit. I prefer to get the software updates out of the way early in the Oracle software installation process, as it's much easier this way.

Application Express – the PSE update

I published a post a couple of days ago about how, thanks to the architecture of PL/SQL and hence Application Express, updates to the core APEX product can be rapidly delivered and deployed, giving timely fixes to the APEX community.

Because a single patch may now evolve over time to contain additional fixes, long-time friend of the APEX community Peter Raganitsch made the following observation on Twitter:

[image: tweet from Peter Raganitsch]

I can't speak for whether this will happen in future, but I decided I could help APEX customers get a little closer to this utopia with some database magic.

Using the preprocessor feature of external tables, it is easy to make the output of any OS command available inside the database as if it were a standard table. Armed with that knowledge, and assuming that you have an Oracle Customer Support Identifier (CSI), which is a given if you are interested in downloading and applying patches, we can interrogate the README file on support.oracle.com that is associated with the APEX patch.

Here is my preprocessor script. I've done this for Windows, but of course it can easily be ported to a Unix equivalent (a sketch follows after the explanation below).


@echo off
cd /d x:\temp
del /q README.txt
wget --quiet --http-user=MY_CSI_EMAIL_USERNAME --http-password=MY_CSI_PASSWORD --no-check-certificate --output-document=README.txt "https://updates.oracle.com/Orion/Services/download?type=readme&aru=23890337"
grep PATCH_VERSION README.txt | awk "{print $4}"

This relatively simple script will

  • connect to the Oracle Support updates page with my CSI details,
  • download the README file,
  • extract the PATCH_VERSION line to get the version details.
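
For reference, a minimal sketch of the Unix equivalent mentioned above (the script name run_os.sh and its working directory are assumptions; the URL and the credential placeholders are the same as in the Windows version):

#!/bin/sh
# run_os.sh - hypothetical Unix port of run_os.bat
cd /u01/app/oracle/temp || exit 1
rm -f README.txt
wget --quiet --http-user=MY_CSI_EMAIL_USERNAME --http-password=MY_CSI_PASSWORD \
     --no-check-certificate --output-document=README.txt \
     "https://updates.oracle.com/Orion/Services/download?type=readme&aru=23890337"
grep PATCH_VERSION README.txt | awk '{print $4}'

Note that awk takes single quotes here, whereas the Windows version needs double quotes, and the external table's preprocessor clause would reference run_os.sh instead of run_os.bat.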

Once I have a working script, I can wrap that into an external table and I’m good to go


SQL> create table apex_pse (
  2   version varchar2(64)
  3  )
  4  organization external
  5  (  type oracle_loader
  6     default directory temp
  7     access parameters
  8     ( records delimited by newline
  9       preprocessor  bin:'run_os.bat'
 10     )
 11     location ( temp:'dummy.txt' )
 12  ) reject limit unlimited ;
 
 Table created.

SQL> select * from apex_pse;

VERSION
-----------------------------
2020.10.29

Now I can easily run a query each day, week, month etc to check on my APEX installation, or of course I could build a little APEX application around it.


SQL> select
  2    case
  3      when installed = available
  4        then 'UP TO DATE'
  5        else 'NEW VERSION EXISTS'
  6     end status
  7  from
  8  ( select max(patch_version) installed
  9    from   apex_patches
 10    where  patch_number = '32006852'
 11  ),
 12  ( select version available from apex_pse )
 13  /

STATUS
------------------
UP TO DATE

Caveat: Obviously we’re only dealing with PSE 32006852 here, and you should always be checking the official APEX documentation for updates on fixes and patches.

(and if you're wondering why the picture for this post is a spider… well, it's because to get the patch information we are using a …. wait for it …. web crawler)

Oracle 19c: Real-Time Statistics & High-Frequency Statistics Collection

The video of this recent presentation, given as a part of the Oracle Groundbreakers EMEA Tour 2020, is now available.

Keeping object statistics up to date is critical to Oracle database performance and stability. Both of these features aim to address the challenge of data that has been significantly updated before the statistics maintenance window has run again. The features are only available on engineered systems, and so are certainly targeted at the Autonomous Database.
  • Real-Time Statistics augment existing statistics. However, they are not quite as real-time as the name suggests. To keep their implementation lightweight they use the table monitoring mechanism, and this limits the information that can be collected.
  • High-Frequency Automatic Optimizer Statistics Collection is effectively a never-ending statistics maintenance window. As your data and statistics change, there are opportunities for SQL execution plans, and therefore application performance, to change. DBAs and developers need to be aware of the implications.


About the oracle database and compiling and linking.

This blogpost is about how the oracle database executable is created or changed during installation and patching. I take linux for the examples, because that is the platform I work with almost exclusively. I think linux is the operating system the vast majority of oracle installations run on, and therefore an explanation based on linux is helpful to most people.

The first thing to understand is the oracle executable is a dynamically linked executable. This is easy to see when you execute the ‘ldd’ utility against the oracle executable:

$ ldd oracle
	linux-vdso.so.1 (0x00007ffd3f5b0000)
	libodm19.so => /u01/app/oracle/product/19/dbhome_1/lib/libodm19.so (0x00007fa693084000)
	libofs.so => /u01/app/oracle/product/19/dbhome_1/lib/libofs.so (0x00007fa692e82000)
	libcell19.so => /u01/app/oracle/product/19/dbhome_1/lib/libcell19.so (0x00007fa692b69000)
	libskgxp19.so => /u01/app/oracle/product/19/dbhome_1/lib/libskgxp19.so (0x00007fa69284d000)
	libskjcx19.so => /u01/app/oracle/product/19/dbhome_1/lib/libskjcx19.so (0x00007fa692604000)
	librt.so.1 => /lib64/librt.so.1 (0x00007fa6923fb000)
	libclsra19.so => /u01/app/oracle/product/19/dbhome_1/lib/libclsra19.so (0x00007fa6921c0000)
	libdbcfg19.so => /u01/app/oracle/product/19/dbhome_1/lib/libdbcfg19.so (0x00007fa691f93000)
	libhasgen19.so => /u01/app/oracle/product/19/dbhome_1/lib/libhasgen19.so (0x00007fa691270000)
	libskgxn2.so => /u01/app/oracle/product/19/dbhome_1/lib/libskgxn2.so (0x00007fa69106d000)
	libocr19.so => /u01/app/oracle/product/19/dbhome_1/lib/libocr19.so (0x00007fa690d49000)
	libocrb19.so => /u01/app/oracle/product/19/dbhome_1/lib/libocrb19.so (0x00007fa690a49000)
	libocrutl19.so => /u01/app/oracle/product/19/dbhome_1/lib/libocrutl19.so (0x00007fa690828000)
	libaio.so.1 => /lib64/libaio.so.1 (0x00007fa690625000)
	libons.so => /u01/app/oracle/product/19/dbhome_1/lib/libons.so (0x00007fa6903d1000)
	libmql1.so => /u01/app/oracle/product/19/dbhome_1/lib/libmql1.so (0x00007fa69016f000)
	libipc1.so => /u01/app/oracle/product/19/dbhome_1/lib/libipc1.so (0x00007fa68fcf5000)
	libdl.so.2 => /lib64/libdl.so.2 (0x00007fa68faf1000)
	libm.so.6 => /lib64/libm.so.6 (0x00007fa68f76f000)
	libpthread.so.0 => /lib64/libpthread.so.0 (0x00007fa68f54f000)
	libnsl.so.1 => /lib64/libnsl.so.1 (0x00007fa68f336000)
	libresolv.so.2 => /lib64/libresolv.so.2 (0x00007fa68f11f000)
	libc.so.6 => /lib64/libc.so.6 (0x00007fa68ed5d000)
	/lib64/ld-linux-x86-64.so.2 (0x00007fa693287000)

The way this works is that a library is defined in the ELF (Executable and Linkable Format, the format of linux executables) header of the oracle executable. This can be seen using the 'readelf' utility:

$ readelf -d oracle

Dynamic section at offset 0x16f03640 contains 45 entries:
  Tag        Type                         Name/Value
 0x0000000000000001 (NEEDED)             Shared library: [libodm19.so]
 0x0000000000000001 (NEEDED)             Shared library: [libofs.so]
 0x0000000000000001 (NEEDED)             Shared library: [libcell19.so]
 0x0000000000000001 (NEEDED)             Shared library: [libskgxp19.so]
 0x0000000000000001 (NEEDED)             Shared library: [libskjcx19.so]
 0x0000000000000001 (NEEDED)             Shared library: [librt.so.1]
 0x0000000000000001 (NEEDED)             Shared library: [libclsra19.so]
 0x0000000000000001 (NEEDED)             Shared library: [libdbcfg19.so]
 0x0000000000000001 (NEEDED)             Shared library: [libhasgen19.so]
 0x0000000000000001 (NEEDED)             Shared library: [libskgxn2.so]
 0x0000000000000001 (NEEDED)             Shared library: [libocr19.so]
 0x0000000000000001 (NEEDED)             Shared library: [libocrb19.so]
 0x0000000000000001 (NEEDED)             Shared library: [libocrutl19.so]
 0x0000000000000001 (NEEDED)             Shared library: [libaio.so.1]
 0x0000000000000001 (NEEDED)             Shared library: [libons.so]
 0x0000000000000001 (NEEDED)             Shared library: [libmql1.so]
 0x0000000000000001 (NEEDED)             Shared library: [libipc1.so]
 0x0000000000000001 (NEEDED)             Shared library: [libdl.so.2]
 0x0000000000000001 (NEEDED)             Shared library: [libm.so.6]
 0x0000000000000001 (NEEDED)             Shared library: [libpthread.so.0]
 0x0000000000000001 (NEEDED)             Shared library: [libnsl.so.1]
 0x0000000000000001 (NEEDED)             Shared library: [libresolv.so.2]
 0x0000000000000001 (NEEDED)             Shared library: [libc.so.6]
 0x000000000000000f (RPATH)              Library rpath: [/u01/app/oracle/product/19/dbhome_1/lib]
 0x000000000000000c (INIT)               0xdb0d80
 0x000000000000000d (FINI)               0x12b54b90
 0x0000000000000019 (INIT_ARRAY)         0x17398c20
 0x000000000000001b (INIT_ARRAYSZ)       8 (bytes)
 0x000000006ffffef5 (GNU_HASH)           0x4002d0
 0x0000000000000005 (STRTAB)             0x9ddb10
 0x0000000000000006 (SYMTAB)             0x528128
 0x000000000000000a (STRSZ)              3567352 (bytes)
 0x000000000000000b (SYMENT)             24 (bytes)
 0x0000000000000015 (DEBUG)              0x0
 0x0000000000000003 (PLTGOT)             0x17504000
 0x0000000000000002 (PLTRELSZ)           26136 (bytes)
 0x0000000000000014 (PLTREL)             RELA
 0x0000000000000017 (JMPREL)             0xdaa768
 0x0000000000000007 (RELA)               0xda9328
 0x0000000000000008 (RELASZ)             5184 (bytes)
 0x0000000000000009 (RELAENT)            24 (bytes)
 0x000000006ffffffe (VERNEED)            0xda9188
 0x000000006fffffff (VERNEEDNUM)         7
 0x000000006ffffff0 (VERSYM)             0xd44a08
 0x0000000000000000 (NULL)               0x0

This shows the names of the needed shared libraries '(NEEDED)'. Some of the needed shared libraries are oracle shared libraries, such as libodm19.so, libofs.so, libcell19.so, libskgxp19.so and so on. Other libraries are operating system libraries, such as libc.so.6, libresolv.so.2, libnsl.so.1, libpthread.so.0, libm.so.6 and so on. The oracle libraries are found because an RPATH (runpath) is included in the header, in my case /u01/app/oracle/product/19/dbhome_1/lib. The operating system libraries are not included with the oracle installation; they are resolved dynamically by the operating system's loader at run time.
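
If you just want those two pieces of information, or want to check where the loader actually resolves a particular library from, the same utilities can be filtered (run from $ORACLE_HOME/bin):

# show only the RPATH and the required libraries from the ELF dynamic section
readelf -d oracle | grep -E 'RPATH|NEEDED'

# ask the dynamic loader where each library would be picked up at run time
ldd oracle | grep -E 'libclsra19|libc\.so'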

So, we got the oracle executable, and we found out it’s a dynamically linked executable, which means that it’s using shared libraries for some of its functionality.

Now let's take one step further. Whenever the oracle database software is installed or patched, it must be linked in order to build the executable with the current state of the software.
You might wonder what I mean with the phrase 'is installed': you probably don't execute a relink all yourself. And that is sensible, because the installer does that for you; you can verify it in $ORACLE_HOME/install/make.log.
I'll get to the manual linking in a bit.

The oracle database executable and compilation

The first thing to discuss now is compilation. Compilation is the process of turning text-based source code into a compiled form; many compilers do not create an executable directly, but an intermediary form called an object. Turning an object or objects into an executable form is called linking. Compiling on linux is done using a compiler, and the default C compiler on Oracle and RedHat linux is gcc. Since Oracle 12.2, a compiler is no longer a requirement for installation. It is documented, but I believe many may have missed this.
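
The compile-then-link flow described here is not Oracle-specific; a trivial, generic illustration:

# create a minimal C source file
cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { puts("hello"); return 0; }
EOF

# compile: turn the source into an object file (an intermediary form, not executable on its own)
gcc -c hello.c -o hello.o

# link: combine one or more objects (and any libraries) into an executable
gcc hello.o -o hello && ./hello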

But isn't there the $ORACLE_HOME/rdbms/lib/config.c file, which is still there, and still used, and isn't there the make target config.o (make -f ins_rdbms.mk config.o)? Yes, both of them are still there. And still, gcc is not a requirement anymore. If you have a pressing need to change the config.c file (which lists the dba, oper, asm, backup, dataguard, keymanagement and RAC group names), you can still change it, and when you remove the config.o file, which used to be generated with gcc, it will now be generated by the 'as' executable (the portable GNU assembler). This is visible in the oracle database executable make target (ioracle):

$ mv config.o config.O
$ make --dry-run -f ins_rdbms.mk ioracle
chmod 755 /u01/app/oracle/product/19/dbhome_1/bin
cd /u01/app/oracle/product/19/dbhome_1/rdbms/lib/; \
/usr/bin/as -o config.o `[ -f config.c ] && echo config.c || echo config.s`; \
/usr/bin/ar r /u01/app/oracle/product/19/dbhome_1/lib/libserver19.a /u01/app/oracle/product/19/dbhome_1/rdbms/lib/config.o
echo
echo " - Linking Oracle "
rm -f /u01/app/oracle/product/19/dbhome_1/rdbms/lib/oracle
/u01/app/oracle/product/19/dbhome_1/bin/orald  -o /u01/app/oracle/product/19/dbhome_1/rdbms/lib/oracle -m64 -z noexecstack -Wl,--disable-new-dtags -L/u01/app/oracle/product/19/dbhome_1/rdbms/lib/ -L/u01/app/oracle/product/19/dbhome_1/lib/ -L/u01/app/oracle/product/19/dbhome_1/lib/stubs/   -Wl,-E /u01/app/oracle/product/19/dbhome_1/rdbms/lib/opimai.o /u01/app/oracle/product/19/dbhome_1/rdbms/lib/ssoraed.o /u01/app/oracle/product/19/dbhome_1/rdbms/lib/ttcsoi.o -Wl,--whole-archive -lperfsrv19 -Wl,--no-whole-archive /u01/app/oracle/product/19/dbhome_1/lib/nautab.o /u01/app/oracle/product/19/dbhome_1/lib/naeet.o /u01/app/oracle/product/19/dbhome_1/lib/naect.o /u01/app/oracle/product/19/dbhome_1/lib/naedhs.o /u01/app/oracle/product/19/dbhome_1/rdbms/lib/config.o  -ldmext -lserver19 -lodm19 -lofs -lcell19 -lnnet19 -lskgxp19 -lsnls19 -lnls19  -lcore19 -lsnls19 -lnls19 -lcore19 -lsnls19 -lnls19 -lxml19 -lcore19 -lunls19 -lsnls19 -lnls19 -lcore19 -lnls19 -lclient19  -lvsnst19 -lcommon19 -lgeneric19 -lknlopt -loraolap19 -lskjcx19 -lslax19 -lpls19  -lrt -lplp19 -ldmext -lserver19 -lclient19  -lvsnst19 -lcommon19 -lgeneric19 `if [ -f /u01/app/oracle/product/19/dbhome_1/lib/libavserver19.a ] ; then echo "-lavserver19" ; else echo "-lavstub19"; fi` `if [ -f /u01/app/oracle/product/19/dbhome_1/lib/libavclient19.a ] ; then echo "-lavclient19" ; fi` -lknlopt -lslax19 -lpls19  -lrt -lplp19 -ljavavm19 -lserver19  -lwwg  `cat /u01/app/oracle/product/19/dbhome_1/lib/ldflags`    -lncrypt19 -lnsgr19 -lnzjs19 -ln19 -lnl19 -lngsmshd19 -lnro19 `cat /u01/app/oracle/product/19/dbhome_1/lib/ldflags`    -lncrypt19 -lnsgr19 -lnzjs19 -ln19 -lnl19 -lngsmshd19 -lnnzst19 -lzt19 -lztkg19 -lmm -lsnls19 -lnls19  -lcore19 -lsnls19 -lnls19 -lcore19 -lsnls19 -lnls19 -lxml19 -lcore19 -lunls19 -lsnls19 -lnls19 -lcore19 -lnls19 -lztkg19 `cat /u01/app/oracle/product/19/dbhome_1/lib/ldflags`    -lncrypt19 -lnsgr19 -lnzjs19 -ln19 -lnl19 -lngsmshd19 -lnro19 `cat /u01/app/oracle/product/19/dbhome_1/lib/ldflags`    -lncrypt19 -lnsgr19 -lnzjs19 -ln19 -lnl19 -lngsmshd19 -lnnzst19 -lzt19 -lztkg19   -lsnls19 -lnls19  -lcore19 -lsnls19 -lnls19 -lcore19 -lsnls19 -lnls19 -lxml19 -lcore19 -lunls19 -lsnls19 -lnls19 -lcore19 -lnls19 `if /usr/bin/ar tv /u01/app/oracle/product/19/dbhome_1/rdbms/lib/libknlopt.a | grep "kxmnsd.o" > /dev/null 2>&1 ; then echo " " ; else echo "-lordsdo19 -lserver19"; fi` -L/u01/app/oracle/product/19/dbhome_1/ctx/lib/ -lctxc19 -lctx19 -lzx19 -lgx19 -lctx19 -lzx19 -lgx19 -lclscest19 -loevm -lclsra19 -ldbcfg19 -lhasgen19 -lskgxn2 -lnnzst19 -lzt19 -lxml19 -lgeneric19 -locr19 -locrb19 -locrutl19 -lhasgen19 -lskgxn2 -lnnzst19 -lzt19 -lxml19 -lgeneric19  -lgeneric19 -lorazip -loraz -llzopro5 -lorabz2 -lorazstd -loralz4 -lipp_z -lipp_bz2 -lippdc -lipps -lippcore  -lippcp -lsnls19 -lnls19  -lcore19 -lsnls19 -lnls19 -lcore19 -lsnls19 -lnls19 -lxml19 -lcore19 -lunls19 -lsnls19 -lnls19 -lcore19 -lnls19 -lsnls19 -lunls19  -lsnls19 -lnls19  -lcore19 -lsnls19 -lnls19 -lcore19 -lsnls19 -lnls19 -lxml19 -lcore19 -lunls19 -lsnls19 -lnls19 -lcore19 -lnls19 -lasmclnt19 -lcommon19 -lcore19  -ledtn19 -laio -lons  -lmql1 -lipc1 -lfthread19    `cat /u01/app/oracle/product/19/dbhome_1/lib/sysliblist` -Wl,-rpath,/u01/app/oracle/product/19/dbhome_1/lib -lm    `cat /u01/app/oracle/product/19/dbhome_1/lib/sysliblist` -ldl -lm   -L/u01/app/oracle/product/19/dbhome_1/lib `test -x /usr/bin/hugeedit -a -r /usr/lib64/libhugetlbfs.so && test -r /u01/app/oracle/product/19/dbhome_1/rdbms/lib/shugetlbfs.o && echo -Wl,-zcommon-page-size=2097152 
-Wl,-zmax-page-size=2097152 -lhugetlbfs`
rm -f /u01/app/oracle/product/19/dbhome_1/bin/oracle
mv /u01/app/oracle/product/19/dbhome_1/rdbms/lib/oracle /u01/app/oracle/product/19/dbhome_1/bin/oracle
chmod 6751 /u01/app/oracle/product/19/dbhome_1/bin/oracle
(if [ ! -f /u01/app/oracle/product/19/dbhome_1/bin/crsd.bin ]; then \
    getcrshome="/u01/app/oracle/product/19/dbhome_1/srvm/admin/getcrshome" ; \
    if [ -f "$getcrshome" ]; then \
        crshome="`$getcrshome`"; \
        if [ -n "$crshome" ]; then \
            if [ $crshome != /u01/app/oracle/product/19/dbhome_1 ]; then \
                oracle="/u01/app/oracle/product/19/dbhome_1/bin/oracle"; \
                $crshome/bin/setasmgidwrap oracle_binary_path=$oracle; \
            fi \
        fi \
    fi \
fi\
);
$ mv config.O config.o

First of all, I am in the $ORACLE_HOME/rdbms/lib directory already. I moved the config.o file to a different name, config.O (uppercase O). This triggers the config.o file to be generated during linking via the makefile, because the make macro for generating the oracle executable checks for the existence of config.o in $ORACLE_HOME/rdbms/lib, and its generation is triggered by it not existing.
I used make with the '--dry-run' option, which means it lists what it WOULD do, without actually doing it.
Now that the make macro doesn't find the $ORACLE_HOME/rdbms/lib/config.o file, it generates it using 'as', the GNU assembler.
After the run, I move the config.O file back to config.o.
Please mind the make target config.o (make -f ins_rdbms.mk config.o) still exists, and it follows the traditional way, using gcc, to create the object file config.o.

For anything other than config.c, Oracle provides objects (the compiled, intermediary form of C) in object files. This has several advantages. First of all, the server oracle is installed on doesn't require a compiler. That also means that there is no discussion about compiler versions: Oracle knows for a fact which compiler version was used. Second, Oracle can use a different compiler than the GNU compiler, as long as it provides objects in Linux X86_64 (ELF) format. In fact, that is what oracle does: for Oracle 19.9, Oracle used a compiler from Intel: Intel(R) C Intel(R) 64 Compiler for applications running on Intel(R) 64, Version 17.0.2.174 Build 20170213
You can obtain that information from the oracle executable using:

$ readelf -p .comment oracle | egrep 'Intel.*Build\ [0-9]*'

I hope that at this point I have made it clear that no compiler is needed anymore for the oracle installation, nor for making changes to the oracle installation, such as patching.

The oracle database executable and objects and object files

It's probably a good idea to show how the oracle executable is built. This happens via the make target 'ioracle', which is visible above (make -f ins_rdbms.mk ioracle).
The macro calls 'orald', which actually is a script in $ORACLE_HOME/bin that calls the operating system 'ld' executable, the GNU linker.
The arguments to 'orald' are mostly passed through to 'ld'. '-o' is the output flag, which names the executable to be built by the linker, and besides options being set, what you mainly see are -L (library path) and -l (library) switches adding libraries (object files and archive files) to build the oracle executable.
There’s a couple of places that are used to get objects to build the oracle executable:
– $ORACLE_HOME/rdbms/lib — oracle database rdbms specific libraries
– $ORACLE_HOME/lib — oracle database general libraries (objects are used by multiple “products” in the $ORACLE_HOME)
– $ORACLE_HOME/lib/stubs — this is a directory with 'stub objects', which are versions of operating system libraries that contain no code, but allow the oracle executable to be built even if an operating system library is missing (see: https://docs.oracle.com/cd/E23824_01/html/819-0690/chapter2-22.html)
– $ORACLE_HOME/ctx/lib — oracle text (not sure why oracle text requires an explicit lookup in $ORACLE_HOME/ctx/lib, while other options are all in $ORACLE_HOME/lib)
– /lib64 — operating system libraries

At this point it's important to realise that the object files for linking oracle come in two forms: as plain object files (.o) and as archive files (.a). Archive files are exactly what the name suggests: archives of object files. You can inspect and manipulate an archive file using 'ar' (the archiver utility), which works much like tar and jar: t=list, x=extract, c=create.
If you take a look at one of the main archives, libserver19.a, you see that it contains 2852 object files:

$ ar -t $ORACLE_HOME/lib/libserver19.a | wc -l
2852

If you do wonder what’s inside, ‘ar -tv’ would be a good way to have an idea:

$ ar -tv $ORACLE_HOME/lib/libserver19.a
rw-r--r-- 54321/54321  11136 Oct 19 21:12 2020 kdr.o
rw-rw-r-- 94110/42424   8376 Apr 17 04:58 2019 upd.o
rw-rw-r-- 94110/42424  41968 Apr 17 04:58 2019 kd.o
...
rw-r--r-- 54321/54321  13248 Oct 19 21:11 2020 qjsntrans.o
rw-r--r-- 54321/54321  20296 Oct 19 21:11 2020 kubsd.o
rw-r--r-- 54321/54321  16720 Oct 19 21:12 2020 kqro.o

The conclusion here is that archive files are logical and sensible; otherwise the library directories would be swamped with huge numbers of object files.

Linking an executable requires object files, and these come as '.o' files or grouped in '.a' archive files. A third type of file is needed when linking an executable that is going to be dynamically linked: the libraries (the '.so' files) the executable is going to use at run time. The linker validates these libraries, which means it inspects them to find the symbols that the objects forming the executable are calling. A library ('.so' file) is an already compiled form; in fact it's pretty much similar to an executable, only it is invoked by the dynamically linked executable that uses it, instead of being started directly.

The object files itself

The text above pretty much describes how executables, libraries, object files and archives fit together, and how the linking creates the oracle executable via the makefile. It describes how oracle has configured the creation of the oracle executable. However, all of this is really flexible and can be done differently, so this is not how it always is or should be; this is how oracle has chosen to do it.

We can look one level deeper into how this works. An object file is in fact already a collection of one or more compiled functions:

$ nm -A opimai.o
opimai.o:                 U dbkc_free_bs_context
opimai.o:                 U dbkc_init
opimai.o:                 U dbktFlush
opimai.o:                 U __intel_new_feature_proc_init
opimai.o:                 U kgeasnmierr
opimai.o:                 U kge_pop_guard_fr
opimai.o:                 U kge_push_guard_fr
opimai.o:                 U kge_report_17099
opimai.o:                 U kgeresl
opimai.o:                 U kge_reuse_guard_fr
opimai.o:                 U ksdwrf
opimai.o:                 U kseini
opimai.o:                 U ksmdsg
opimai.o:                 U ksmgpg_
opimai.o:                 U ksmlsge_phaseone
opimai.o:                 U ksmsgaattached_
opimai.o:                 U kso_save_arg
opimai.o:                 U kso_spawn_ts_save
opimai.o:                 U ksosp_parse
opimai.o:                 U ksuginpr
opimai.o:                 U lfvinit
opimai.o:0000000000000010 T main
opimai.o:                 U opiinit
opimai.o:0000000000000300 t opimai_init
opimai.o:0000000000000140 T opimai_real
opimai.o:                 U opiterm
opimai.o:                 U sdterm
opimai.o:                 U _setjmp
opimai.o:                 U skge_sign_fr
opimai.o:                 U skgmstack
opimai.o:                 U skgp_done_args
opimai.o:                 U skgp_retrieve_args
opimai.o:                 U slgtds
opimai.o:                 U slgts
opimai.o:                 U slkbpi
opimai.o:                 U slkfpi
opimai.o:                 U sou2o
opimai.o:                 U spargs
opimai.o:                 U ssthrdmain

This example takes the object file $ORACLE_HOME/rdbms/lib/opimai.o. This object file contains 3 actual functions (shown by an address and the symbol type 'T' or 't'), and a whole bunch of functions without an address and with symbol type 'U'. The functions with symbol type 'U' are undefined functions, which means that these functions are not defined in this object file, but somewhere else.
The important thing to consider is that a single object file can contain multiple functions.

I chose this object file because it is in fact the object file that contains the main function, the starting function, of the oracle executable. If you obtain a stack trace of an oracle database process, the first function (the 'first frame'), at least on recent linux versions (some other operating systems or versions might show earlier functions), will be main. This is also what the linker uses to build the executable: it follows the symbol information, together with the command line switches, to resolve and obtain all the functions. The linker will generate an error and not build the executable if it can't find or resolve the symbols and get all the information it needs.
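
To see where one of those undefined symbols actually lives, you can let nm search the archives that appear on the link line; a small sketch, using ksuginpr from the 'U' list above:

# look for the definition (symbol type 'T') of ksuginpr in the oracle archives
for a in $ORACLE_HOME/lib/*.a $ORACLE_HOME/rdbms/lib/*.a; do
  nm -A "$a" 2>/dev/null | grep -E ' T ksuginpr$'
done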

Patching

At this point you should have an understanding of what a dynamically linked executable, libraries, object files and archives are, and that the oracle executable is built via a makefile that invokes the linker.

It might be handy and interesting to look at patching. This information about archives and objects should give you more background on the specifics of patching. Oracle patching has many forms, and can actually do and change a lot of things in a lot of ways. What a patch does is traceable by looking at the contents of the patch. But that is not what this post is about.

Especially with one-off patches, in the case of a patch that fixes or changes one or more functions in the oracle executable, what the patch provides are the fixed and thus changed versions of these functions. However, oracle does not provide source code. In general what oracle provides is the object or objects containing the changed functions. In order to get the changed function or functions into the oracle executable, what generally happens is that the current/old version of the object file is removed from the archive it is in and saved in $ORACLE_HOME/.patch_storage, and the patched version of the object file is inserted into the archive.
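
Purely to illustrate the mechanics (opatch does all of this for you, with validation and rollback; never do this by hand on a real installation), the archive manipulation looks roughly like this, using the kdr.o object shown earlier and a hypothetical patched copy of it:

mkdir -p /tmp/patch_work && cd /tmp/patch_work
cp $ORACLE_HOME/lib/libserver19.a .                 # work on a copy of the archive
ar t libserver19.a | grep '^kdr.o$'                 # confirm the object is in the archive
ar x libserver19.a kdr.o && mv kdr.o kdr.o.old      # extract the current version as a backup
cp /path/to/patched/kdr.o .                         # hypothetical patched object from a one-off patch
ar r libserver19.a kdr.o                            # replace the object inside the archive copy
# on the real archive, the executable would then have to be relinked:
#   cd $ORACLE_HOME/rdbms/lib && make -f ins_rdbms.mk ioracle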

But, as we saw, an object file generally contains more than one function, often many more. This is why patches can be incompatible with other patches: if multiple patches change different (or the same) functions in the same objects, the patch applied last will undo the changes of the previous patch(es). This is also why you must request merge patches if patches are incompatible.

Dealing with individual object files, extracting them from an archive and saving them in order to be able to restore them into the archive, is tedious. Also, the archive itself doesn't mind whatever you remove from it or insert into it, even if that breaks linking the oracle executable. Therefore, oracle has created opatch to perform a great deal of validations and checks, and to take the work of dependency checking away from you and fully automate it. In fact, in general, you can take a (one-off) patch and try to apply it; if it applies, oracle can be relinked, and if there is a conflicting patch opatch will tell you. Also, if you want to revert an applied patch, you can simply roll back and let opatch load the previous version into the archive. This is way better than letting us humans deal with it directly.
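
In practice the workflow for a one-off patch is a handful of opatch commands; the patch number and staging directory below are placeholders:

# run as the oracle software owner, with the instance(s) stopped as required by the patch README
cd /u01/stage/12345678
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph .
$ORACLE_HOME/OPatch/opatch apply
$ORACLE_HOME/OPatch/opatch lsinventory          # list the patches applied to this home
# and if the patch has to come out again:
$ORACLE_HOME/OPatch/opatch rollback -id 12345678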

After patching has changed the archives to contain the updated versions of the objects, which contain the updated functions, these must make it into the oracle executable. This is done by relinking the executable, which takes the objects, including the changed ones, from all the object files and archives, and creates a new executable. To my knowledge, the oracle executable itself is never directly modified on linux in recent versions.

I hope this explanation made sense and made a lot of these things which we are dealing with as oracle DBAs more understandable. Any comments or updates are welcome!

Vagrant & Docker Builds : ORDS and SQLcl 20.3

In a previous post I discussed the recent release of APEX 20.2 and the subsequent builds it triggered. Last night I pulled down ORDS 20.3 and SQLcl 20.3, so I updated my Vagrant and Docker builds again.

The ORDS download page is here. At the time of writing, the SQLcl download page is still showing 20.2, but the SQL Developer download page has a link for the 20.3 SQLcl download at the bottom. Both these versions have been available for about a week. Update: It's showing 20.3 on both SQLcl pages now.

Updating my APEX 20.2 installation

One of (or should I say "another of"!) the very cool features of Application Express (APEX) is that, because it is a database-centric piece of software, patching of the product can be done very efficiently and easily by simply loading fresh versions of the underlying PL/SQL source.

The reason loading PL/SQL source is such a good thing is that when you load PL/SQL source that is unchanged the database can simply treat that as a “no-op” which

  • makes loading the unchanged PL/SQL faster,
  • avoids invalidation and recompilation impacts

We can see that with this simple demo.


SQL> create or replace
  2  procedure MY_BIG_PROC is
  3  begin
  4    dbms_output.put_line(1);
  5    dbms_output.put_line(2);
  6    dbms_output.put_line(3);
  7    dbms_output.put_line(4);
...
...
996    dbms_output.put_line(993);
997    dbms_output.put_line(994);
998    dbms_output.put_line(995);
999    dbms_output.put_line(996);
1000    dbms_output.put_line(997);
1001    dbms_output.put_line(998);
1002    dbms_output.put_line(999);
1003    dbms_output.put_line(1000);
1004  end;
1005  /

Procedure created.

SQL>
SQL> select last_ddl_time
  2  from   user_objects
  3  where  object_name = 'MY_BIG_PROC';

LAST_DDL_TIME
-------------------
05/11/2020 11:11:29

1 row selected.

[wait 30 seconds]

SQL> create or replace
  2  procedure MY_BIG_PROC is
  3  begin
  4    dbms_output.put_line(1);
  5    dbms_output.put_line(2);
  6    dbms_output.put_line(3);
  7    dbms_output.put_line(4);
...
...
996    dbms_output.put_line(993);
997    dbms_output.put_line(994);
998    dbms_output.put_line(995);
999    dbms_output.put_line(996);
1000    dbms_output.put_line(997);
1001    dbms_output.put_line(998);
1002    dbms_output.put_line(999);
1003    dbms_output.put_line(1000);
1004  end;
1005  /

Procedure created.

SQL>
SQL> select last_ddl_time
  2  from   user_objects
  3  where  object_name = 'MY_BIG_PROC';

LAST_DDL_TIME
-------------------
05/11/2020 11:11:29

1 row selected.


I re-loaded the PL/SQL procedure, but since it was not altered, no DDL change was recorded. As a consequence, when it comes to patching APEX, the patches that are distributed can be a cumulative set of fixes but applying those patches to your APEX installation will be just as efficient as if you had cherry-picked just the changes that were applicable to your own instance.

But the real benefit here is not the time it takes to apply a patch but that as bugs are reported and subsequently fixed/tested, the affected PL/SQL components can be almost immediately added to a patch and made available to customers. Thus rather than customers having to either

  • download lots of small patches and keep a record of which patches are applied and which are not, or
  • wait weeks or months for a large bundle of fixes to be consolidated and published,

in the majority of instances you can simply apply a single patch that contains timely fixes to issues, and that patch will be expanded over time to cover any new issues that arise in APEX.

For example, I installed APEX on my own machine last week and applied PSE 32006852 during that installation. A couple of days later, I checked the Known Issues documentation and saw that the PSE had been updated, so I downloaded it and applied the updated version. This is all tracked within my APEX instance.


SQL> select PATCH_NUMBER,PATCH_VERSION
  2  from   APEX_PATCHES;

PATCH_NUMBER PATCH_VERSION
------------ -----------------------------
    32006852 2020.10.23
    32006852 2020.10.28

Thus all I need to do to keep my APEX installation fully patched is keep an eye on the Known Issues page and the PSE from time to time.

If you’re wondering: “How do I know if I need to get a new patch version?”, it is stamped right there at the top of the README for the patch.

[image: top of the patch README showing the PATCH_VERSION stamp]

So it looks like there are some fixes available for me, because the patch version in my database is 2020.10.28. So I picked up the fresh PSE and applied it as per the README.


SQL> conn /@db19_pdb1 as sysdba
Connected.

SQL> @catpatch

. ORACLE
.
. Application Express (APEX) 20.2.0.00.20
. Patch Set Exception 32006852
........................................

APEX_VERSION
------------------------------
APEX_SCHEMA
--------------------------------------------------------------------------------------------------------------------------------
20.2.0.00.20
APEX_200200



PL/SQL procedure successfully completed.


Session altered.


APEX_SCHEMA
--------------------------------------------------------------------------------------------------------------------------------
APEX_200200

... Disabling Jobs
... disabling ORACLE_APEX_AUTOMATIONS
... disabling ORACLE_APEX_BACKUP
... disabling ORACLE_APEX_DAILY_MAINTENANCE
... disabling ORACLE_APEX_DICTIONARY_CACHE
... disabling ORACLE_APEX_ISSUE_EMAILS
... disabling ORACLE_APEX_MAIL_QUEUE
... disabling ORACLE_APEX_PURGE_SESSIONS
... disabling ORACLE_APEX_REST_SOURCE_SYNC
... disabling ORACLE_APEX_WS_NOTIFICATIONS

PL/SQL procedure successfully completed.


Session altered.

...patch_32063008.sql

Table altered.


Table altered.


Grant succeeded.

...wwv_flow_db_env_detection.plb

Package body created.

No errors.
... wwv_flow_db_version is up to date

PL/SQL procedure successfully completed.

...wwv_flow_asn1.sql

Package created.

No errors.
...wwv_flow_public_key_util.sql

Package created.

No errors.
...wwv_flow_asn1.plb

Package body created.

No errors.
...wwv_flow_public_key_util.plb

Package body created.

No errors.
...wwv_flow_meta_data

Package body created.

No errors.
...wwv_flow_authentication_social.plb

Package body created.

No errors.
...wwv_flow_error

Package body created.

No errors.
...wwv_flow_escape

Package body created.

No errors.
...wwv_flow_exec_web_src_utils.plb

Package body created.

No errors.
...wwv_flow_ir_render

Package body created.

No errors.

Package body created.

No errors.
...wwv_flow_page

Package body created.

No errors.
...wwv_flow_web_src_sync.plb

Package body created.

No errors.
...wwv_flow_data_export.plb

Package body created.

No errors.
...wwv_flow_xlsx.plb

Package body created.

No errors.
...wwv_flow_card_region

Package body created.

No errors.
...patch_32077747.sql

1 row updated.


Commit complete.

...devpatch.sql

Session altered.


Grant succeeded.


Session altered.

...patch_32077573.sql

View created.


Trigger created.

No errors.
...patch_32078187.sql

1 row updated.


1 row updated.


1 row updated.


Commit complete.

...wwv_flow_web_src_dev.plb

Package body created.

No errors.

PL/SQL procedure successfully completed.

...patch_32001315.sql

1 row updated.


Commit complete.


PL/SQL procedure successfully completed.


Commit complete.


PL/SQL procedure successfully completed.

... Enabling Jobs

PL/SQL procedure successfully completed.

...Validating APEX
...(11:29:34) Starting validate_apex for APEX_200200
...(11:29:34) Checking missing sys privileges
...(11:29:34) Re-generating APEX_200200.wwv_flow_db_version
... wwv_flow_db_version is up to date
...(11:29:34) Checking invalid public synonyms
...(11:29:35) Key object existence check
...(11:29:35) Setting DBMS Registry for APEX to valid
...(11:29:35) Exiting validate_apex

PL/SQL procedure successfully completed.

...Recompiling invalid public synonyms

PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.

timing for: Complete Patch 32006852
Elapsed: 00:00:04.72

Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.8.0.0.0


Because a good portion of those fixes are probably already contained in the previous PSE update, it only took 5 seconds! And easy as you like, I’m now fully up to date.


SQL> select PATCH_NUMBER,PATCH_VERSION
  2  from   APEX_PATCHES;

PATCH_NUMBER PATCH_VERSION
------------ ---------------------------
    32006852 2020.10.23
    32006852 2020.10.28
    32006852 2020.10.29

What’s new with Oracle database 12.2.0.1.200714 versus 12.2.0.1.201020

This blogpost takes a look at the technical differences between Oracle database 12.2.0.1 PSU 200714 (July 2020) and PSU 201020 (October 2020). This gives technical specialists an idea of the differences, and the ability to assess whether the PSU impacts anything.

Functions


code symbol names unique in version 12.2.0.1.200714 versus 12.2.0.1.201020

NAME                                               RESOLVE                                            ANNOTATION                                                                       ARCHIVE_OBJECT                                                 COUNT(*)
-------------------------------------------------- -------------------------------------------------- -------------------------------------------------------------------------------- ------------------------------------------------------------ ----------
kcbo                                               kcbo                                               kernel cache buffers object queue                                                                                                                      1
kkdl                                               kkdl                                               kernel compile dictionary lookup                                                                                                                       1
kmgs                                               kmgs                                               kernel multi threaded/mman manage (sga) space (?)                                                                                                      1
kzpR                                               (kzp)R                                             kernel security privileges ??                                                                                                                          1
qcso                                               qcso                                               query compile semantic analysis (parser) OBJECTS                                                                                                       1
qeae                                               qeae                                               query execute aggregate (order by) elimination (?)                                                                                                     1
qkaI                                               (qka)I                                             query kernel allocation ??                                                                                                                             1
qkex                                               (qke)x                                             query kernel expressions ??                                                                                                                            1
qksh                                               (qks)h                                             query kernel sql ??                                                                                                                                    1
qksp                                               (qks)p                                             query kernel sql ??                                                                                                                                    1
krvx                                               krvx                                               kernel redo recovery extract                                                                                                                           2
kxtt                                               kxtt                                               kernel execution temporary table operations                                                                                                            2
Java                                               Java                                               java classes                                                                                                                                           6

code symbol names unique in version 12.2.0.1.201020 versus 12.2.0.1.200714

NAME                                               RESOLVE                                            ANNOTATION                                                                       ARCHIVE_OBJECT                                                 COUNT(*)
-------------------------------------------------- -------------------------------------------------- -------------------------------------------------------------------------------- ------------------------------------------------------------ ----------
ECGr                                               (EC)Gr                                             elleptic curve cryptography ??                                                   libjavavm12.a:ecl.o                                                   7
Java                                               Java                                               java classes                                                                     libjavavm12.a:ShapeSpanIterator.o                                     7
ec_G                                               (ec_)G                                             elleptic curve cryptography ??                                                   libjavavm12.a:ecp_jac.o                                               7
mp_c                                               (mp_)c                                             multiple precision cryptography ??                                               libjavavm12.a:mpi.o                                                   7
mp_i                                               (mp_)i                                             multiple precision cryptography ??                                               libjavavm12.a:mpi.o                                                   7
mp_t                                               (mp_)t                                             multiple precision cryptography ??                                               libjavavm12.a:mpi.o                                                   7
mp_b                                               (mp_)b                                             multiple precision cryptography ??                                               libjavavm12.a:mp_gf2m.o                                               8
mp_s                                               (mp_)s                                             multiple precision cryptography ??                                               libjavavm12.a:mpi.o                                                  11
ec_G                                               (ec_)G                                             elleptic curve cryptography ??                                                   libjavavm12.a:ecl_gf.o                                               21
s_mp                                               s_mp                                               cryptography multiple precision                                                  libjavavm12.a:mpi.o                                                  49

This is a poor way, yet one of the only ways, to understand what is going on inside the oracle executable.
There aren't that many things removed, which is understandable for an update; 20 functions have been removed in total. The archive:object annotation is missing for these, because I only started capturing that for the latest PSU.
For the functions that were added, the vast majority comes from libjavavm12, and they seem to be cryptography-related functions.

It should be noted that if you compare the added and changed functions with the Oracle 18 ones, a fair share of them are the same, but there are also functions unique to each version. Of course the functions they have in common are probably related to important issues that must be fixed in all versions. The functions that are unique to this version are likely to address (minor-)version specific issues.
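
The listings above come from the author's own annotation tooling, but the underlying idea can be approximated with standard tools; a rough sketch, with placeholder paths for the two Oracle homes:

# dump the defined code symbols ('T'/'t') of each executable, then diff the two sets
nm /u01/app/oracle/product/12.2.0.1.200714/bin/oracle | awk '$2 ~ /^[Tt]$/ {print $3}' | sort -u > 200714.sym
nm /u01/app/oracle/product/12.2.0.1.201020/bin/oracle | awk '$2 ~ /^[Tt]$/ {print $3}' | sort -u > 201020.sym
comm -23 200714.sym 201020.sym    # symbols only in the July PSU (removed)
comm -13 200714.sym 201020.sym    # symbols only in the October PSU (added)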

NoSQL and SQL: key-value access always scale

By Franck Pachot

I have written about some NoSQL myths in previous posts (here and here) and I got some feedback from people mentioning that the test case was on relatively small data. This is true. In order to understand how it works, we need to explain and trace the execution, and that is easier on a small test case. Once the algorithm is understood it is easy to infer how it scales. Then, if readers want to test it on huge data, they can. This may require a lot of cloud credits, and I usually don't feel the need to do this test for a blog post, especially when I include all the code to reproduce it on a larger scale.

But this approach may be biased by the fact that I've been working a lot with RDBMS, where we have all the tools to understand how things work. When you look at the execution plan, you know the algorithm and can extrapolate the numbers to larger tables. When you look at the wait events, you know on which resource they can scale with higher concurrency. But times change: NoSQL databases, especially the ones managed by the cloud providers, provide only a simple API with limited execution statistics. Because that's the goal: simple usage. And this is why people prefer to look at real-scale executions when talking about performance. And, as it is not easy to run a real-scale Proof of Concept, they look at the well-known big data users like Google, Facebook, Amazon…

I was preparing my DOAG presentation “#KnowSQL: Where are we with SQL, NoSQL, NewSQL in 2020?” where I’ll mention a 4TB table I’ve seen at a customer. The table was far too big (30 million extents!) because it had not been managed for years (a purge job failing without being monitored, and the outsourcing company adding datafiles for years without trying to understand why). But the application was still working well, with happy users, because they use key-value access, and this has always been scalable. Here is a common misconception: NoSQL databases didn’t invent new techniques to store data. Hash partitioning and indexes are the way to scale this, and they have existed in RDBMS for a long time. What NoSQL did was provide easy access to this limited API, and restrict data access to this simple API, in order to guarantee predictable performance.

By coincidence, with my presentation in mind, I had access to an Oracle Exadata that was not yet used, and I took the occasion to create a similar table containing billions of items:


SQL> info+ BIG

TABLE: BIG
         LAST ANALYZED:2020-10-29 22:32:06.0
         ROWS         :3049226754
         SAMPLE SIZE  :3049226754
         INMEMORY     :
         COMMENTS     :

Columns

NAME         DATA TYPE      NULL  DEFAULT    LOW_VALUE   HIGH_VALUE   NUM_DISTINCT   HISTOGRAM
------------ -------------- ----- ---------- ----------- ------------ -------------- ---------
*K           RAW(16 BYTE)   No                                            3049226754     HYBRID
 V           BLOB           Yes                                           0              NONE

Indexes
INDEX_NAME             UNIQUENESS   STATUS   FUNCIDX_STATUS   COLUMNS
---------------------- ------------ -------- ---------------- ------
FRANCK.SYS_C00205258   UNIQUE       N/A                       K

Just two columns: one RAW(16) to store the key as a UUID and one BLOB to store any document value. Exactly like a key-value document store today, and similar to the table I’ve seen at my customer. Well, at this customer, it was designed in the past century, with LONG RAW instead of BLOB, but that would make no sense today. And this table was not partitioned because they didn’t expect this size. In my test I did what we should do today for this key-value use case: partition by HASH:


create table BIG ( K RAW(16) primary key using index local, V BLOB ) tablespace users
LOB (V) store as securefile (enable storage in row nocache compress high)
partition by hash (K) partitions 100000 parallel 20

It is probably not useful to have 100000 partitions for a table of a few terabytes, but then this table is ready for a lot more data. And in Oracle, 100000 partitions is far from the limit, which is 1 million partitions. Note that this is a lab: I am not recommending creating 100000 partitions if you don’t need to. I’m just saying that it is easy to create a multi-terabyte table with the performance of really small tables when it is accessed through the partitioning key.
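If you want to verify the partition count for yourself, a query against the dictionary is enough. A small sketch (adjust the owner and table name for your environment):

-- sketch: count the partitions actually created for the table
select count(*) as partition_count
from   dba_tab_partitions
where  table_owner = 'FRANCK'
and    table_name  = 'BIG';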

So, here is the size:


14:58:58 SQL> select segment_type,segment_name,dbms_xplan.format_size(sum(bytes)) "SIZE",count(*)
from dba_segments where owner='FRANCK'
group by grouping sets ((),(segment_type),(segment_type,segment_name))
;

SEGMENT_TYPE                   SEGMENT_NAME                   SIZE         COUNT(*)
------------------------------ ------------------------------ ---------- ----------
LOB PARTITION                  SECURFILE                      781G           100000
LOB PARTITION                                                 781G           100000
INDEX PARTITION                SYS_C00205258                  270G           100000
INDEX PARTITION                SYS_IL0000826625C00002$$       6250M          100000
INDEX PARTITION                                               276G           200000
TABLE PARTITION                BIG                            7691G          100000
TABLE PARTITION                                               7691G          100000
                                                              8749G          400000
8 rows selected.

There’s 8.5 TB in total here. The table, named “BIG”, has 100000 partitions for a total of 7691 GB. The primary key index, “SYS_C00205258”, is 270 GB as it contains the key (16 bytes, plus the ROWID to address the table row, per entry). It is a local index, with the same HASH partitioning as the table. Documents that are larger than the table block are stored in the LOB partitions; here I have mostly small documents, which are stored inline in the table.

I inserted the rows quickly with a bulk load which I didn’t really tune or monitor. But here is an excerpt from AWR report when the insert was running:



Plan Statistics                                          DB/Inst: EXA19C/EXA19C1  Snaps: 16288-16289
-> % Snap Total shows the % of the statistic for the SQL statement compared to the instance total

Stat Name                                Statement   Per Execution % Snap
---------------------------------------- ---------- -------------- -------
Elapsed Time (ms)                        3.2218E+07   16,108,804.2    99.4
CPU Time (ms)                            2.1931E+07   10,965,611.8    99.4
Executions                                        2            1.0     0.0
Buffer Gets                              6.5240E+08  326,201,777.5   115.6
Disk Reads                               4.0795E+07   20,397,517.5   100.2
Parse Calls                                      48           24.0     0.0
Rows                                     5.1202E+08  256,008,970.0     N/A
User I/O Wait Time (ms)                   2,024,535    1,012,267.6    96.9
Cluster Wait Time (ms)                    4,293,684    2,146,841.9    99.9
Application Wait Time (ms)                      517          258.5    20.6
Concurrency Wait Time (ms)                4,940,260    2,470,130.0    96.0
          -------------------------------------------------------------

This is about 16 key-value pairs ingested per millisecond (256,008,970.0 / 16,108,804.2). And it could go further, as I have about 15% buffer contention that I could easily get rid of if I took care of the index definition.
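For the curious, a bulk load of this shape could look like the following sketch. This is not the exact statement I ran; the batch size and the APPEND hint are only there to illustrate the idea of a direct-path insert of random keys and small random documents:

-- sketch only: one batch of a direct-path bulk load, to be repeated (and committed) as needed
insert /*+ append */ into BIG (K, V)
select sys_guid(),
       utl_raw.cast_to_raw(dbms_random.string('p',100))
from   dual
connect by level <= 1e6;

commit;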

After running this a few days, I have nearly 5 billion rows here:

SQL> select count(*) from BIG;

  COUNT(*)
----------
4967207817

Elapsed: 00:57:05.949

The full scan to get the exact count lasted one hour here because I ran it without parallel query (the equivalent of map reduce), so the count was done on one CPU only. Anyway, if counting the rows were a real use case, I would create a materialized view to aggregate such metrics.
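A sketch of that materialized view idea (the name is mine, and I refresh it on demand here; a fast-refreshable version would also need a materialized view log on BIG):

create materialized view BIG_COUNT_MV
  build immediate
  refresh complete on demand
  as select count(*) total_items from BIG;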

Out of curiosity I’ve run the same with parallel query: 6 minutes to count the 5 billion documents with 20 parallel processes.

My goal is to test reads. In order to have predictable results, I flush the buffer cache:


15:10:51 SQL> alter system flush buffer_cache;

System FLUSH altered.

Elapsed: 00:00:04.817

Of course, in real life, there’s a good chance that all the index branches stay in memory.


15:11:11 SQL> select * from BIG where K=hextoraw('B23375823AD741B3E0532900000A7499');

K                                V
-------------------------------- --------------------------------------------------------------------------------
B23375823AD741B3E0532900000A7499 4E2447354728705E776178525C7541354640695C577D2F2C3F45686264226640657C3E5D2453216A

Elapsed: 00:00:00.011

“Elapsed” is the elapsed time in seconds. Here, 11 milliseconds. NoSQL databases advertise their “single digit millisecond” latency and that’s right, because “No SQL” provides a very simple API (key-value access). Any database, NoSQL or RDBMS, can be optimized for this key-value access. An index on the key ensures O(log N) scalability and, when you can hash partition it, you can keep this cost constant as data grows, which is then O(1).

In order to understand not only the time, but also how it scales with more data or high throughput, I look at the execution plan:



15:11:27 SQL> select * from dbms_xplan.display_cursor(format=>'allstats last');

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  gqcazx39y5jnt, child number 21
--------------------------------------
select * from BIG where K=hextoraw('B23375823AD741B3E0532900000A7499')

Plan hash value: 2410449747

----------------------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name          | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads |
----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                   |               |      1 |        |      1 |00:00:00.01 |       3 |     3 |
|   1 |  PARTITION HASH SINGLE             |               |      1 |      1 |      1 |00:00:00.01 |       3 |     3 |
|   2 |   TABLE ACCESS BY LOCAL INDEX ROWID| BIG           |      1 |      1 |      1 |00:00:00.01 |       3 |     3 |
|*  3 |    INDEX UNIQUE SCAN               | SYS_C00205258 |      1 |      1 |      1 |00:00:00.01 |       2 |     2 |
----------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("K"=HEXTORAW('B23375823AD741B3E0532900000A7499'))

I’ve read only 3 “Buffers” here. Thanks to the partitioning (PARTITION HASH SINGLE), each local index partition is small, with a root branch block and a leaf block: 2 buffers read. This B*Tree index (INDEX UNIQUE SCAN) returns the physical address in the table (TABLE ACCESS BY LOCAL INDEX ROWID) in order to get the additional column.
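You can confirm why two index buffers are enough by looking at the height of the local index partitions. A sketch (adjust owner and index name):

-- most partitions of the 100000-way partitioned local index should show
-- blevel = 1, i.e. a root/branch block plus leaf blocks
select blevel, count(*)
from   dba_ind_partitions
where  index_owner = 'FRANCK'
and    index_name  = 'SYS_C00205258'
group  by blevel
order  by blevel;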

Finally, I insert one row:


SQL> set timing on autotrace on
SQL> insert into BIG values(hextoraw('1D15EA5E8BADF00D8BADF00DFF'),utl_raw.cast_to_raw(dbms_random.string('p',1024)));

1 row created.

Elapsed: 00:00:00.04

This takes 40 milliseconds.

The autotrace shows what is behind:


Execution Plan
----------------------------------------------------------

---------------------------------------------------------------------------------
| Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |      |     1 |  1177 |     1   (0)| 00:00:01 |
|   1 |  LOAD TABLE CONVENTIONAL | BIG  |       |       |            |          |
---------------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          1  recursive calls
          8  db block gets
          1  consistent gets
          7  physical reads
       1864  redo size
        872  bytes sent via SQL*Net to client
       1048  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          1  rows processed

There are a few blocks to maintain (db block gets) when adding a new entry into a B*Tree index, especially when some blocks have to be split to allocate more space in the tree. In an RDBMS you should categorize the data ingestion into:

  • high throughput for big data, like metrics and logs from IoT, at the rate of the bulk inserts I used to fill the table
  • fast response time to put one or a few items, and this is milliseconds, scaling thanks to local index partitioning

I’m talking about the roots of NoSQL here: providing the simplest key-value access in order to scale. But the most advanced NoSQL managed services went further, pushing data ingest performance with LSM (log-structured merge) indexes rather than B*Tree in-place index maintenance. They have also implemented many features to autonomously maintain the partitions at their best for storage, performance and high availability. This presentation explains a few of them in the context of AWS DynamoDB.

With DynamoDB you can’t get the execution plan, but you can ask for the ConsumedCapacity to be returned with the result. This helps to validate your understanding of the data access even without running on huge volumes and expensive provisioned capacity. This is what I did in https://blog.dbi-services.com/rdbms-scales-the-algorithm/ measuring the linear increase of RCU on a small 2 million item table, which is sufficient to extrapolate to larger data sets. Key-value access always scales in this way: response time remains constant as data grows. And it can remain constant as the number of users grows as well, by splitting partitions across more storage.

The article NoSQL and SQL: key-value access always scale appeared first on Blog dbi services.

FBI or Virtual

This note has been sitting with the other 800 drafts since some time in May 2019, and started with a comment about following on from “a recent talk on how to engineer indexes properly”. Unfortunately I don’t remember when I wrote it, or why it came about. I mention this only because the note shows you how you can run into irritating limitations when you’re trying to do things properly.

First, a little script to generate some highly skewed data:

rem
rem     Script:         fbi_or_virtual_desc.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2019
rem
rem     Tested on:
rem       19.2.0.0
rem       18.3.0.0
rem       12.2.0.1

create table t1
segment creation immediate
nologging
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                          id1,
        rownum                          id2,
        chr(65+trunc(ln(rownum)))       flag,
        lpad(rownum,10,'0')             v1,
        lpad('x',100,'x')               padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6 -- > comment to avoid WordPress format issue
;

If you check the way I’ve defined the flag column you’ll see that it generates uppercase letters of the alphabet (chr(65) = ‘A’), and the use of the ln() (natural logarithm) function with rownum means the earlier letters in the alphabet have far fewer rows than later letters. Here’s a quick query to show the skew:


select
        flag, count(*)
from
        t1
group by
        flag
order by
        count(*)
/

FLAG   COUNT(*)
---- ----------
A             2
B             5
C            13
D            34
E            94
F           255
G           693
H          1884
I          5123
J         13923
K         37848
L        102880
M        279659
N        557587

14 rows selected.

Imagine I have a requirement to access the rows where flag = ‘D’ very efficiently, and I’d like to do that in descending order of id1. I could create an index on (flag, id1) to help, of course – but that’s going to produce a very large index that (with help from a histogram and literal parameter usage, combined with the excellent clustering pattern) the optimizer would probably use only for the values from ‘A’ to ‘H’.
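For comparison, the conventional approach dismissed above would look something like this sketch (the index name is mine, and the method_opt is just one way of requesting a histogram on flag):

create index t1_flag_id1 on t1(flag, id1);

begin
        dbms_stats.gather_table_stats(
                ownname    => user,
                tabname    => 'T1',
                method_opt => 'for columns flag size 254'
        );
end;
/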

Clearly the solution is to take advantage of a function-based index here to create index entries for just the rows we are interested in and “hide” the rest. Better still, we can create a virtual column and index it, because we would then have a simple column name to use in our SQL, and we could even make it invisible (from 12c) so that it won’t cause problems with sloppy code that does a “select *” or “insert values()” without a list of columns.
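As an aside, once the virtual column exists (it is added a little further down), making it invisible is a one-liner. A sketch, valid from 12c onwards and not part of the test script itself:

alter table t1 modify (id1_c invisible);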

Which is the better choice? Let’s add the virtual column and then create indexes. You’ll notice I created id1 and id2 with identical values – this allows me to test both strategies in a single pass of the script. If I tried to create the virtual column and the function-based index using id1 in both cases I’d get an error message that I was using the same expression twice – though the actual error would depend on the order of creation.

If I created the virtual column first the error on attempting to create a function-based index would be: “ORA-54018: A virtual column exists for this expression”, and if I created the function-based index first the error on trying to create the virtual column would be “ORA-54015: Duplicate column expression was specified”.

So I’m using id1 to test the virtual column approach and id2 to test the function-based index approach.


alter table t1 add (
        id1_c generated always as
                (case when flag = 'D' then id1 end)
                virtual
        )
;

create index t1_i0 on t1(id1);

create index t1_i1  on t1(id1_c);
create index t1_i1d on t1(id1_c desc);

create index t1_i2  on t1((case when flag = 'D' then id2 end));
create index t1_i2d on t1((case when flag = 'D' then id2 end) desc);

And here is what we see if we “set echo on” during the index creation script:


SQL>
SQL> create index t1_i0 on t1(id1);

Index created.

SQL>
SQL> create index t1_i1  on t1(id1_c);

Index created.

SQL> create index t1_i1d on t1(id1_c desc);
create index t1_i1d on t1(id1_c desc)
                          *
ERROR at line 1:
ORA-54034: virtual columns not allowed in functional index expressions

SQL>
SQL> create index t1_i2  on t1((case when flag = 'D' then id2 end));

Index created.

SQL> create index t1_i2d on t1((case when flag = 'D' then id2 end) desc);

Index created.

We can’t create a descending index on the virtual column – even though we can create a descending function-based index on exactly the expression that we wanted to use for the virtual column – so it seems we can get what we want, provided we are prepared to forgo the niceness of naming (and perhaps making invisible) a virtual column.

So are we happy?

No, we’re not – and here’s why:

select
        index_name, num_rows, leaf_blocks
from
        user_indexes
where
        table_name = 'T1'
order by
        index_name
;

INDEX_NAME             NUM_ROWS LEAF_BLOCKS
-------------------- ---------- -----------
T1_I0                   1000000        2226
T1_I1                        34           1
T1_I2                        34           1
T1_I2D                  1000000        1812

To make a column in an index “descending” Oracle takes the “one’s complement” of each byte in turn then appends 0xFF to the result. The effect of this is to turn a NULL into 0xFF which, as a non-null value, has to be stored in the index with a rowid pointing to the table. So an index holding 34 rows and avoiding the waste of 999,966 redundant entries turns into an index using (wasting) space on those 999,966 entries when you try to create a descending version of it. The index we end up with is nearly as big as the index on (flag, id1) would have been.
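You can see the byte-flipping at work with the internal sys_op_descend() function, which is what Oracle uses under the covers for descending index columns. A quick sketch (output not shown):

-- 'A' is 0x41; its descending form is the complemented byte followed by 0xff
select dump('A', 16) plain_form, dump(sys_op_descend('A'), 16) descending_form
from dual;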

Conclusion

We can’t create the descending index we want by first creating a virtual column. On the other hand, if we create it as a simple function-based index it’s going to be just as big as the index we were trying to avoid creating.

Footnote 1

The ‘0xFF’ is why you can’t create a unique, descending index unless at least one of the columns is declared not null – a two-column unique index would allow multiple rows to hold (null, null), but try to recreate it descending and those rows now hold the duplicate value (0xFF, 0xFF), so index creation fails. (Worse still, you may have no problem data when you create the index, and then discover that processes break because they introduce problem data.)
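A minimal sketch of that effect (table and index names are mine):

create table t2 (c1 number, c2 number);

create unique index t2_u on t2(c1, c2);
insert into t2 values (null, null);
insert into t2 values (null, null);     -- allowed: wholly null keys are not indexed
commit;

drop index t2_u;
-- expected to fail with "duplicate keys found": both rows now map to (0xFF, 0xFF)
create unique index t2_d on t2(c1 desc, c2 desc);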

Footnote 2

There’s no such thing as a “descending index” – there are only indexes where some of the columns are stored descending. If you create an index with every single column marked as descending then all you’ve done is increase the size of the index, while confusing the optimizer and disabling some of the clever tricks it can do. The optimizer is perfectly able to read a normal index in descending order. (Which means that this example is a simplified demonstration of the problem – if I wanted the ‘D’ rows in descending order of id1 then I’d have an “order by id1 desc” in my query and the optimizer would produce a plan to walk the ascending (t1_i1) index in descending order.)
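To close with a sketch of that last point, using the virtual column so the predicate matches the indexed expression (the index_desc hint is only there to make the point; left to itself the optimizer may well reach the same plan, and I’d expect it to show an index scan of t1_i1 in descending order):

select  /*+ index_desc(t1 t1_i1) */
        id1_c, v1
from    t1
where   id1_c is not null
order by
        id1_c desc;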