Oakies Blog Aggregator

Europe June 2018 Seminars: Confirmed Venues for Oslo, Munich and Zürich

I’m very excited to confirm all venues for the European June 2018 dates of my popular “Oracle Indexing Internals and Best Practices” seminar. This is a must-attend seminar of benefit not only to DBAs, but also to Developers, Solution Architects and anyone else interested in designing, developing or maintaining high-performance Oracle-based applications. It’s […]

Collaborate 18 Session: Networking for the Cloud DBA

Thank you all for coming to my session today. In my years of experience managing DBAs, I find that the basic concepts of networking are often the weakest link in a DBA's knowledge. In this session you learned how netmasks, CIDR notation and routing tables work, and how they fit together in a cloud world, where a knowledge of networking is not just nice to have but an absolute must.
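As a quick illustration of the kind of arithmetic involved, here is a minimal sketch on a Linux host (this assumes the RHEL-style ipcalc utility; the addresses are made up and your output will differ):

$ ipcalc -np 10.0.5.37 255.255.255.0   # which network does this host belong to?
NETWORK=10.0.5.0
PREFIX=24
$ ip route                             # the routing table: where do packets go?
default via 10.0.5.1 dev eth0
10.0.5.0/24 dev eth0 proto kernel scope link src 10.0.5.37

The netmask 255.255.255.0 and the /24 CIDR prefix are the same thing written two ways, and the routing table then matches each destination address against those prefixes, longest match first.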

Download the preso here. As always, I highly appreciate your feedback, either as comments here or directly to me via Twitter @ArupNanda or by email arup@proligence.com.

Collaborate 18 Session: Microservices Using Python and Oracle

Many thanks to all those who came to my session at Collaborate 18 today. I sincerely hope you enjoyed the session and learned something new in the process.

Here are the files you can download:

1. The presentation
2. The list_avail template
3. The hotel.py python program (download the file and remove the .txt extension)
4. The hoteld.py python program

If you want to visit the Python learning series I wrote, here is where you can find it: bit.ly/python4plsql

As I mentioned, I presented on Oracle and Python at the Oracle Code event a couple of months ago in New York City. Here is where you can get the scripts and the presentation.

Enterprise Manager Silent Installation

Introduction

This is another post in my series on silent installations using response files. In previous posts, I covered silent installations for the Grid Infrastructure, RDBMS, post installation configuration steps, and creating the listener and database using response files. In this post, I’ll cover installing Enterprise Manager using a response file.

Generating the Response Files

Unlike all the other installations and configurations using response files, for some reason Oracle Corporation in its wisdom has seen fit not to simply provide RSP files for an Enterprise Manager installation; they need to be generated (which probably just extracts them from the zip files anyway, or am I being far too cynical?!). To do this, use a command like the following:

[oracle@host1 em]$ ./em13200p1_linux64.bin -getResponseFileTemplates -outputLoc /u01/app/em
-bash: ./em13200p1_linux64.bin: Permission denied

Of course, I had FTP’ed the files to this host, and there was no execute permission on the em13200p1_linux64.bin file:

[oracle@host1 em]$ ls -al
total 6808580
drwxr-xr-x. 2 oracle oinstall       4096 Apr 19 11:04 .
drwxrwxr-x. 6 oracle oinstall       4096 Apr 20 10:39 ..
-rw-r--r--. 1 oracle oinstall  564264960 Apr 13 07:02 em13200p1_linux64-2.zip
-rw-r--r--. 1 oracle oinstall  741526563 Apr 13 12:03 em13200p1_linux64-3.zip
-rw-r--r--. 1 oracle oinstall 2084231936 Apr 13 13:24 em13200p1_linux64-4.zip
-rw-r--r--. 1 oracle oinstall  109191154 Apr 13 11:39 em13200p1_linux64-5.zip
-rw-r--r--. 1 oracle oinstall 2146696423 Apr 13 14:47 em13200p1_linux64-6.zip
-rw-r--r--. 1 oracle oinstall  771426157 Apr 13 12:32 em13200p1_linux64-7.zip
-rw-r--r--. 1 oracle oinstall  554606940 Apr 13 11:49 em13200p1_linux64.bin
[oracle@host1 em]$ chmod u+x *.bin
[oracle@host1 em]$ ls -al
total 6808580
drwxr-xr-x. 2 oracle oinstall       4096 Apr 19 11:04 .
drwxrwxr-x. 6 oracle oinstall       4096 Apr 20 10:39 ..
-rw-r--r--. 1 oracle oinstall  564264960 Apr 13 07:02 em13200p1_linux64-2.zip
-rw-r--r--. 1 oracle oinstall  741526563 Apr 13 12:03 em13200p1_linux64-3.zip
-rw-r--r--. 1 oracle oinstall 2084231936 Apr 13 13:24 em13200p1_linux64-4.zip
-rw-r--r--. 1 oracle oinstall  109191154 Apr 13 11:39 em13200p1_linux64-5.zip
-rw-r--r--. 1 oracle oinstall 2146696423 Apr 13 14:47 em13200p1_linux64-6.zip
-rw-r--r--. 1 oracle oinstall  771426157 Apr 13 12:32 em13200p1_linux64-7.zip
-rwxr--r--. 1 oracle oinstall  554606940 Apr 13 11:49 em13200p1_linux64.bin
[oracle@host1 em]$ ./em13200p1_linux64.bin -getResponseFileTemplates -outputLoc /u01/app/em
** The directory "/tmp" does not have enough space. At least 10241 MB of free space are required.
Please input another directory or [Exit]: /u01/app/em
0%.....[/u01/app/em/em13200p1_linux64-2.zip]
  End-of-central-directory signature not found.
Failed to extract disk 2 (rc 2)
......................................................100%
Launcher log file is /u01/app/em/OraInstall2018-04-20_11-26-31AM/launcher2018-04-20_11-26-31AM.log.
Copying response file template(s)
to /u01/app/em ...
  Copying response file template upgrade.rsp
  Copying response file template software_only.rsp
  Copying response file template new_install.rsp
Finished copying response file template(s)

The log(s) can be found here: /u01/app/em/OraInstall2018-04-20_11-26-31AM.

So what went wrong this time? This is the line that tells you:

End-of-central-directory signature not found.

The first zip file didn’t copy correctly. You can also see that from the size of the file: around 540 MB instead of the nearly 2 GB it should be. Notice that it did extract three response files anyway, but I couldn’t be sure they were complete (and I needed to reload the second file anyway!). So I loaded it again, and again three response files were generated. The Advanced Installation and Configuration Guide tells you to use only new_install.rsp, so ignore the other two files for now. Also, notice the spurious error message about /tmp? If I look at it, it would seem to have plenty of free space:

[oracle@host1 em]$ df -h /tmp
Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/rootvg-tmplv  2.0G   66M  1.8G   4% /tmp

Anyway, I gave it another directory (/u01/app/em/stage), which I had to create before the command would run successfully.
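Incidentally, a quick way to catch a truncated transfer like this before launching the installer is to test each archive; a minimal check using unzip's -t (test) option:

[oracle@host1 em]$ for f in em13200p1_linux64-*.zip; do
>   unzip -tq "$f" > /dev/null 2>&1 && echo "$f: OK" || echo "$f: CORRUPT"
> done

Comparing the byte counts from ls -l against the sizes published on the download page achieves much the same thing.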

Using the Response File

Of all the response files I’ve worked with during this series of posts, I have to say the EM response file was the pickiest, most annoying file of the lot! When you look at the file, you’ll see a lot of variables associated with placeholder values. When you start editing the file, they are in this sort of format:

UNIX_GROUP_NAME=<string>

Make sure you have either changed all the angle-bracketed placeholders to something else or got rid of them entirely, as the stupid installer will complain about them otherwise! I hit a couple of other issues along the way as well.
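Before running the installer, a quick grep will catch any placeholders you have missed; a minimal check, assuming the generated template is new_install.rsp in the current directory:

[oracle@host1 em]$ grep -n "=<" new_install.rsp

Any line this prints still contains an unedited placeholder.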

Collaborate18 Session: Real Life DBA Best Practices

Thank you to all those who came to my session at IOUG Collaborate 18 today. I hope you learned something new and were inspired to make your tasks a bit easier, a bit more efficient and a whole lot more enjoyable.

You can download the presentation here.

Remember, a best practice is something that can be explained as:
1) why it is good, or what specific good comes from following it
2) what happens if you don't follow it
3) under what circumstances it doesn't apply, if any

If you hear a "best practice" without any of these, reject it politely. And most of all, create your own best practices and promote them. We all deserve to learn something new to make our collective life a bit easier.

Friday Philosophy – If I Was a Computer, I Might Be An IBM System 360

So today I turn 50. I spent a few minutes looking up what your typical computer looked like then and I decided the IBM System/360 was about right. It was a business machine. Personal computers did not exist then as, well, it was almost the dark ages…

{Note: it has been pointed out to me that I should say “If I were a computer”. Seems that even at my age I still have things to learn properly… I’ll leave it as is to remind myself…}.

Technology changes and we who work with it change. With tech, after a while any individual part (say a specific computer) becomes obsolete and we throw it away. The technology itself changes and we stop using the parts that are superseded. I don’t remember the last time I used any sort of floppy disc or a Zip drive. Ironically, we still often use a floppy disc icon to identify the Save button.

But us? Do we who work with I.T. become obsolete? Yes. We do so when we stop changing with the times (or “stop working”, but this is not an “I’m old and considering my mortality” type post, you will be relieved to know). I think that if you lose your passion to keep learning something new in computing and/or programming, it’s time to move out of the arena; as, worryingly soon, you are going to become like those old systems that never get updates and you know will be thrown out if they develop a serious flaw or just become too expensive to keep on the payroll – err, I mean plugged in.

I nearly lost interest about 8 or 10 years ago. I think it was partly that I found myself doing the same things over & over again and having the same stupid arguments (sorry, “discussions”) about how not doing things correctly was going to just make everyone’s life harder in the long run. I don’t mean doing things the same, I mean doing the things that make a good system: ensuring it fits the business need, that it is tested before it gets released, and that you do this crazy thing called design. This time it was not that I needed to alter along with the tech; I needed to alter myself a little. I finally realised that, although it was getting worse, the I.T. world has always been a bit like that, and part of the trick to this business is simply walking away from places that are too bad and looking for those that are a bit better. I’m lucky to be able to do that moving about (don’t get me wrong, I did have to put effort into it, and I think that is where some people go wrong: they seem to almost expect an external agent to make things better for them), but then I’m 50 and still in the business. I’ve seen lots of people simply leave the industry when they could not effect that change.

However, doing a bit of the introverted-navel-gazing that comes with Significant Birthdays, I find it interesting that at 20, 25, 30, 35,… 42 (very significant that one) things have always been changing for me.

At 20 I was studying Genetics & Zoology at college and thought I would be a lab scientist. A career in I.T. was not even a consideration.
By 25 I’d taken up computing and fallen into this company called Oracle, and I reckoned I would be with them for a good while, building systems with Forms, ReportWriter, PL/SQL and whatever came next. Oracle would not last for ever…
When I was 30 I was self employed, touting my services to various companies and mostly doing systems design and performance work.
Come 35 and I was back in full-time employment (that was a surprise), working in a science organisation (even more of a surprise), using my degree to some, well, degree (an utter surprise). And presenting at Oracle user group conferences.
At 40 I was self-employed again, but now totally focused on performance, and an Oracle RDBMS Subject Matter Expert (someone who knows a bit about most of it and most of a bit of it).
42. 42 is a great age. You are the answer to everything…
At 45 I was retired. Except when I was not. OK, I had become a Consultant, doing short jobs for different clients. And doing all this User Group stuff. Me! Antisocial, miserable, slightly-autistic me!
Now at 50, I have to confess I am not keeping on top of the technical details of my chosen sphere the way I probably should if my career is still in this area. But I’m not doing badly, and my “job” is now even more as a presenter and a member of the User Group community. I need new skills for that.

So it keeps changing. Sometimes I chose the change and sometimes changes just dropped on me. But I’ll look at the options as they come up. And if no options are coming up and I am not happy in my job, I go look for options. I won’t say I always choose the best option but, heck, it’s worked OK so far.

I wonder what I’ll be doing at 55 and 60? I just hope I am not stuck in a museum with a “do not touch” sign next to me, like all the remaining IBM System/360s.

Creating the Listener and Database in Silent Mode

Introduction

In previous posts, I talked about a customer where I needed to do a silent installation of the Grid Infrastructure and Database, as well as the post-installation configuration steps. For that particular customer, I didn’t create an empty database but used RMAN to duplicate the existing database. But what if you want to create a new listener and database in silent mode as well? That’s what this post covers.

NETCA

Just like my previous posts, I installed the RDBMS using a response file. Since I’ve already covered that earlier, I’m not going to reproduce that here again. Once the RDBMS software is installed, it’s just a matter of creating the listener and the database. Creating the listener is dead easy. Oracle provides a netca.rsp file, which for my simple installation didn’t need any modification at all. Here’s the file:

$ cat netca.rsp
######################################################################
## Copyright(c) 1998, 2014 Oracle Corporation. All rights reserved. ##
##                                                                  ##
## Specify values for the variables listed below to customize your  ##
## installation.                                                    ##
##                                                                  ##
## Each variable is associated with a comment. The comment          ##
## identifies the variable type.                                    ##
##                                                                  ##
## Please specify the values in the following format:               ##
##                                                                  ##
##         Type         Example                                     ##
##         String       "Sample Value"                              ##
##         Boolean      True or False                               ##
##         Number       1000                                        ##
##         StringList   {"String value 1","String Value 2"}         ##
##                                                                  ##
######################################################################
##                                                                  ##
## This sample response file causes the Oracle Net Configuration    ##
## Assistant (NetCA) to complete an Oracle Net configuration during ##
## a custom install of the Oracle12c server which is similar to     ##
## what would be created by the NetCA during typical Oracle12c      ##
## install. It also documents all of the NetCA response file        ##
## variables so you can create your own response file to configure  ##
## Oracle Net during an install the way you wish.                   ##
##                                                                  ##
######################################################################

[GENERAL]
RESPONSEFILE_VERSION="12.1"
CREATE_TYPE="CUSTOM"

#-------------------------------------------------------------------------------
# Name       : SHOW_GUI
# Datatype   : Boolean
# Description: This variable controls appearance/suppression of the NetCA GUI,
# Pre-req    : N/A
# Default    : TRUE
# Note:
# This must be set to false in order to run NetCA in silent mode.
# This is a substitute of "/silent" flag in the NetCA command line.
# The command line flag has precedence over the one in this response file.
# This feature is present since 10.1.0.3.
#-------------------------------------------------------------------------------
#SHOW_GUI=false

#-------------------------------------------------------------------------------
# Name       : LOG_FILE
# Datatype   : String
# Description: If present, NetCA will log output to this file in addition to the
#              standard out.
# Pre-req    : N/A
# Default    : NONE
# Note:
#       This is a substitute of "/log" in the NetCA command line.
# The command line argument has precedence over the one in this response file.
# This feature is present since 10.1.0.3.
#-------------------------------------------------------------------------------
#LOG_FILE=""/oracle12cHome/network/tools/log/netca.log""

[oracle.net.ca]
#INSTALLED_COMPONENTS;StringList;list of installed components
# The possible values for installed components are:
# "net8","server","client","aso", "cman", "javavm"
INSTALLED_COMPONENTS={"server","net8","javavm"}

#INSTALL_TYPE;String;type of install
# The possible values for install type are:
# "typical","minimal" or "custom"
INSTALL_TYPE=""typical""

#LISTENER_NUMBER;Number;Number of Listeners
# A typical install sets one listener
LISTENER_NUMBER=1

#LISTENER_NAMES;StringList;list of listener names
# The values for listener are:
# "LISTENER","LISTENER1","LISTENER2","LISTENER3", ...
# A typical install sets only "LISTENER"
LISTENER_NAMES={"LISTENER"}

#LISTENER_PROTOCOLS;StringList;list of listener addresses (protocols and parameters separated by semicolons)
# The possible values for listener protocols are:
# "TCP;1521","TCPS;2484","NMP;ORAPIPE","IPC;IPCKEY","VI;1521"
# A typical install sets only "TCP;1521"
LISTENER_PROTOCOLS={"TCP;1521"}

#LISTENER_START;String;name of the listener to start, in double quotes
LISTENER_START=""LISTENER""

#NAMING_METHODS;StringList;list of naming methods
# The possible values for naming methods are:
# LDAP, TNSNAMES, ONAMES, HOSTNAME, NOVELL, NIS, DCE
# A typical install sets only: "TNSNAMES","ONAMES","HOSTNAMES"
# or "LDAP","TNSNAMES","ONAMES","HOSTNAMES" for LDAP
NAMING_METHODS={"TNSNAMES","ONAMES","HOSTNAME"}

#NOVELL_NAMECONTEXT;String;Novell Directory Service name context, in double quotes
# A typical install does not use this variable.
#NOVELL_NAMECONTEXT = ""NAMCONTEXT""

#SUN_METAMAP;String; SUN meta map, in double quotes
# A typical install does not use this variable.
#SUN_METAMAP = ""MAP""

#DCE_CELLNAME;String;DCE cell name, in double quotes
# A typical install does not use this variable.
#DCE_CELLNAME = ""CELL""

#NSN_NUMBER;Number;Number of NetService Names
# A typical install sets one net service name
NSN_NUMBER=1

#NSN_NAMES;StringList;list of Net Service names
# A typical install sets net service name to "EXTPROC_CONNECTION_DATA"
NSN_NAMES={"EXTPROC_CONNECTION_DATA"}

#NSN_SERVICE;StringList;Oracle12c database's service name
# A typical install sets Oracle12c database's service name to "PLSExtProc"
NSN_SERVICE={"PLSExtProc"}

#NSN_PROTOCOLS;StringList;list of coma separated strings of Net Service Name protocol parameters
# The possible values for net service name protocol parameters are:
# "TCP;HOSTNAME;1521","TCPS;HOSTNAME;2484","NMP;COMPUTERNAME;ORAPIPE","VI;HOSTNAME;1521","IPC;IPCKEY"
# A typical install sets parameters to "IPC;EXTPROC"
NSN_PROTOCOLS={"TCP;HOSTNAME;1521"}

#SERVICEUSERPASSWORD;String;Windows service user password
# If the oracle home is installed as secure user, supply the password
#SERVICEUSERPASSWORD=""svcpassword""

And to create the listener, you just need:

[oracle@host1 response]$ export ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/db_1
[oracle@host1 response]$ export PATH=$ORACLE_HOME/bin:$PATH
[oracle@host1 response]$ pwd
/u01/app/database/response
[oracle@host1 response]$ netca -silent -responseFile /u01/app/database/response/netca.rsp

Parsing command line arguments:
    Parameter "silent" = true
    Parameter "responsefile" = /u01/app/database/response/netca.rsp
Done parsing command line arguments.
Oracle Net Services Configuration:
Profile configuration complete.
Oracle Net Listener Startup:
    Running Listener Control:
      /u01/app/oracle/product/12.1.0.2/db_1/bin/lsnrctl start LISTENER
    Listener Control complete.
    Listener started successfully.
Listener configuration complete.
Oracle Net Services configuration successful. The exit code is 0
[oracle@host1 response]$ lsnrctl stat

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 20-APR-2018 10:52:01

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=host1)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date                20-APR-2018 10:51:26
Uptime                    0 days 0 hr. 0 min. 35 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/12.1.0.2/db_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/host1/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=host1.acme.com)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
The listener supports no services
The command completed successfully

DBCA

Now that we have the listener started, let’s set up the database using a response file as well. This time, I did need to make some changes to the response file Oracle provided. As you can probably guess from some of my changes, the database we’re creating here is going to be used for the Enterprise Manager repository for this customer. Oh, one thing: obviously I didn’t use “password” as the password for SYS, SYSTEM and DBSNMP when I really ran this!
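The invocation follows the same pattern as NETCA. A minimal sketch, assuming you have edited a copy of the sample dbca.rsp shipped in the same response directory as netca.rsp (which variables you need to change depends on your environment):

[oracle@host1 response]$ dbca -silent -createDatabase -responseFile /u01/app/database/response/dbca.rsp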

18.2 patch… painless for me

18.2 was released a few days ago, so I thought I’d throw it against my 18c instance and see how things played out.  This was just a single instance database, running with a single pluggable.

Step 1 for me was to download the latest OPatch. To be honest, I don’t even check whether I already have the latest version in any installed ORACLE_HOME anymore. If I’m going to apply a patch, I’m always going to download the latest OPatch unless the README explicitly tells me some reason why I should not. I figure getting the latest version reduces the chances I’ll hit problems. For me, I grabbed version “12.2.0.1.13”, but your experience may differ depending on when you come across this post.
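After unzipping the new OPatch into the ORACLE_HOME, it is worth a quick check that the version you expect is the one that will run:

[oracle@host18 ~]$ $ORACLE_HOME/OPatch/opatch version
OPatch Version: 12.2.0.1.13

OPatch succeeded.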

So I then downloaded patch 27676517 (note, this does not include patches for the JVM which are done separately).

From there it was just a case of following the README instructions, and I had no dramas at all.  Software patching went through smoothly:


[oracle@host18 27676517]$ export PATH=$PATH:$ORACLE_HOME/OPatch
[oracle@host18 27676517]$ opatch apply
Oracle Interim Patch Installer version 12.2.0.1.13
Copyright (c) 2018, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/oracle/product/18.0.0/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/18.0.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.13
OUI version       : 12.2.0.4.0
Log file location : /u01/app/oracle/product/18.0.0/dbhome_1/cfgtoollogs/opatch/opatch2018-04-19_08-32-51AM_1.log

Verifying environment and performing prerequisite checks...
OPatch continues with these patches:   27676517  

Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/oracle/product/18.0.0/dbhome_1')


Is the local system ready for patching? [y|n]
y
User Responded with: Y
Backing up files...
Applying interim patch '27676517' to OH '/u01/app/oracle/product/18.0.0/dbhome_1'
ApplySession: Optional component(s) [ oracle.assistants.usm, 18.0.0.0.0 ] , [ oracle.has.crs, 18.0.0.0.0 ] , [ oracle.network.cman, 18.0.0.0.0 ] , [ oracle.assistants.asm, 18.0.0.0.0 ]  not present in the Oracle Home or a higher version is found.

Patching component oracle.network.rsf, 18.0.0.0.0...

Patching component oracle.rdbms.crs, 18.0.0.0.0...

Patching component oracle.assistants.acf, 18.0.0.0.0...

Patching component oracle.sqlplus.ic, 18.0.0.0.0...

Patching component oracle.rdbms.deconfig, 18.0.0.0.0...

Patching component oracle.sqlplus, 18.0.0.0.0...

Patching component oracle.rdbms.util, 18.0.0.0.0...

Patching component oracle.rdbms, 18.0.0.0.0...

Patching component oracle.rdbms.dbscripts, 18.0.0.0.0...

Patching component oracle.assistants.deconfig, 18.0.0.0.0...

Patching component oracle.assistants.server, 18.0.0.0.0...

Patching component oracle.rdbms.install.plugins, 18.0.0.0.0...

Patching component oracle.rdbms.rsf, 18.0.0.0.0...

Patching component oracle.rdbms.rman, 18.0.0.0.0...

Patching component oracle.javavm.client, 18.0.0.0.0...

Patching component oracle.ldap.owm, 18.0.0.0.0...

Patching component oracle.ldap.security.osdt, 18.0.0.0.0...
Patch 27676517 successfully applied.
Log file location: /u01/app/oracle/product/18.0.0/dbhome_1/cfgtoollogs/opatch/opatch2018-04-19_08-32-51AM_1.log

OPatch succeeded.

 

And the datapatch stage to update the database(s) also went through without any problems.


[oracle@host18 OPatch]$ ./datapatch -verbose
SQL Patching tool version 18.0.0.0.0 Production on Thu Apr 19 08:37:40 2018
Copyright (c) 2012, 2018, Oracle.  All rights reserved.

Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_10552_2018_04_19_08_37_40/sqlpatch_invocation.log

Connecting to database...OK
Gathering database info...done

Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)

Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of interim SQL patches:
  No interim patches found

Current state of release update SQL patches:
  Binary registry:
    18.2.0.0.0 Release_Update 1804041635: Installed
  PDB CDB$ROOT:
    No release update patches installed
  PDB PDB$SEED:
    No release update patches installed
  PDB PDB1:
    No release update patches installed

Adding patches to installation queue and performing prereq checks...done
Installation queue:
  For the following PDBs: CDB$ROOT PDB$SEED PDB1
    No interim patches need to be rolled back
    Patch 27676517 (Database Release Update : 18.2.0.0.180417 (27676517)):
      Apply from 18.1.0.0.0 Feature Release to 18.2.0.0.0 Release_Update 1804041635
    No interim patches need to be applied

Installing patches...
Patch installation complete.  Total patches installed: 3

Validating logfiles...done
Patch 27676517 apply (pdb CDB$ROOT): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/27676517/22097537/27676517_apply_DB18_CDBROOT_2018Apr19_08_38_45.log (no errors)
Patch 27676517 apply (pdb PDB$SEED): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/27676517/22097537/27676517_apply_DB18_PDBSEED_2018Apr19_08_39_58.log (no errors)
Patch 27676517 apply (pdb PDB1): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/27676517/22097537/27676517_apply_DB18_PDB1_2018Apr19_08_40_01.log (no errors)
SQL Patching tool complete on Thu Apr 19 08:41:13 2018
[oracle@host18 OPatch]$ 




[oracle@host18 ~]$ sqlplus scott/tiger@pdb1

SQL*Plus: Release 18.0.0.0.0 - Production on Thu Apr 19 08:44:27 2018
Version 18.2.0.0.0

Copyright (c) 1982, 2018, Oracle.  All rights reserved.


Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.2.0.0.0

SQL> 

So all in all, a painless and straightforward experience for me.

One minor annoyance for me was the “Bugs Fixed” document.  The README references this as:

[image: the README's link to the "Bugs Fixed" document]

but that document is not accessible currently.  However, if you follow a different link in the README

[image: a different link in the README]

then you will get to the document which contains the list of fixes etc.

Footnote: If you are not familiar with the new nomenclature of “Patch”, “Release Update”, etc etc, here is a good place to start

https://mikedietrichde.com/2017/11/07/ru-rur-recommendations-facts/

and follow the various links within that post for more information.

Happy upgrading!

Addenda: When it comes to the ‘datapatch’ script, the README is perhaps a little ambiguous.  More on that here

https://mikedietrichde.com/2018/04/19/do-you-have-to-execute-datapatch-when-you-create-a-new-database/

 

Choosing the Right Compression

Choosing the right database table compression matters.  It affects the size of the objects and also the performance of your application.  Compression makes objects smaller, and for many systems, this is an objective in itself, but it also affects performance.  As compressed objects are smaller, less physical I/O is required to access them.  However, that saving is paid for in CPU.  More CPU is required to compress the data when it is written, and then again (albeit usually less CPU) to decompress it when you read it.  The question is whether the saving in I/O pays for the additional CPU.

There are a number of forms of compression to choose between.  They have different characteristics.  The resulting objects are different sizes, and they require different amounts of CPU to compress and decompress.

  • Basic Compression:  This is not really compression, but a form of block-level de-duplication.  It has been available since Oracle 9i.  It is available on all platforms.
  • Hybrid Columnar Compression (HCC) is only available on Engineered Systems, the FS Flash Storage System and the ZFS Storage Appliance (ZFSSA).  There are four levels of compression, using three different compression algorithms.  HCC is frequently positioned as being highly effective at reducing the size of data warehouses, but it also has implications for performance.
    • Query Low: LZO algorithm
    • Query High: ZLIB algorithm
    • Archive Low: ZLIB algorithm higher level with a larger block size.
    • Archive High: BZIP2 algorithm.  The higher compression ratio comes at the expense of significantly increased CPU overhead on decompression.

Which one is right for your application?  It depends.  Ultimately, you need to test it for yourself.  In this blog, I am going to relate my experience of one particular system, including how I tested it.  Our objective was optimising performance, rather than reducing database size.

My Test System

  • SuperCluster M7 (i.e. an Engineered System), Oracle 12.1.0.2, 2-node RAC, running PeopleSoft Financials 9.2.
  • In particular, this system runs a large number of complex PeopleSoft General Ledger nVision reports that use a number of reporting views on summary ledgers. There are materialized views built on the reporting views.  The summary ledgers are updated nightly.  Then the materialized views are fully refreshed in non-atomic mode so the tables are truncated and repopulated in direct-path mode.  Hence, they can be Hybrid Columnar Compressed without needing the Advanced Compression licence.  The materialized views are not indexed.  Thus they can only be full scanned, and these scans can be offloaded to the storage cells (i.e. smart scan).

The metrics presented here are extracted from a test system where the summary ledger tables are updated, the materialized views refreshed, and then 136 General Ledger (GL) report books are run on a regular daily schedule.  This approach ensures that each test is, as far as possible, run with the same initial conditions and that only known parameters are changed each time.  In each test, a different compression level was set on the materialized views (sketched just below).

  • I have not investigated ARCHIVE compression.
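For reference, switching the compression level between tests is just a matter of altering the materialized view's table compression attribute; a minimal sketch with a hypothetical materialized view name (the attribute only takes effect on the next direct-path load, which is exactly what the non-atomic complete refresh performs):

SQL> ALTER MATERIALIZED VIEW ps_xx_sum_gl_mv COMPRESS FOR QUERY LOW;

Materialized view altered.

SQL> EXEC DBMS_MVIEW.REFRESH('PS_XX_SUM_GL_MV', method => 'C', atomic_refresh => FALSE);

PL/SQL procedure successfully completed.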

Writing

Description         Size (MB)   Comp. %   Comp. Ratio   Elapsed   Total DB Time
No Compression      151,868     0%        1.0           1:01      18:13
Basic Compression   34,720      77%       4.4           1:12      22:54
Query Low           22,666      85%       6.7           1:00      18:27
Query High          14,086      91%       10.8          1:05      22:56

(Sizes are of the materialized views; all times expressed in hours:minutes)

  • While basic compression does significantly reduce table sizes (4.4x), Hybrid Columnar Compression does better (6.7x – 10.8x).  
  • HCC Query Low only has a very small overhead while writing data.  The materialized views are refreshed in parallel, and while the total DB time is higher, the elapsed time is lower!
  • Both basic compression and HCC Query High take about 20% longer to write the data in direct path mode.

Reading

                      nVision DB Time (hours:minutes)              Percentage of Total
Description           CPU+CPU Wait   Smart Scan   Others   Total   CPU+CPU Wait   Smart Scan   Others
No Compression        201:26         97:12        18:15    316:54  64%            31%          6%
Basic Compression     466:13         66:03        31:07    563:24  83%            12%          6%
Query Low             241:13         12:34        44:57    298:45  81%            4%           15%
Query High            278:20         4:43         11:38    294:42  94%            2%           4%

(CPU+CPU Wait = time on CPU or waiting for CPU; Smart Scan = cell smart table scan)

The nVision GL reports only read the materialized views (although they do write some temporary working storage tables).

  • As the level of compression of the materialized views increases, the time spent on cell smart table scan (physical read) decreases, mainly because the objects are smaller.
  • The slightly surprising result is that the CPU overhead of basic compression is higher than either form of HCC, and more than twice that of using uncompressed tables.  Hence it significantly increases the total database response time.
  • Both forms of query HCC perform better than the uncompressed table.  The increase in CPU time is more than compensated for by the reduction in time spent on physical I/O.
  • The Query Low test was affected by factors external to the test (a lot more time was spent on other wait events); otherwise, it would have outperformed Query High.  If we look at just CPU and physical read time, it is clearly the best option.

In an offloaded (i.e. smart) scan on an engineered system, decompression as well as the physical I/O is done in the storage cell.  The saving in time spent on physical I/O between Query Low and Query High is small because the disk subsystem performs so well on the engineered system.  On a non-engineered system, by contrast, decompression and I/O must be done by the database server; there is nowhere else to do them.  You may well find that a different level of compression is optimal there.

Conclusion

  • Basic compression exhibits a surprisingly high CPU overhead.  In my test, the performance was poorer than with uncompressed tables.
  • HCC Query Low was generally the best option; the saving in I/O more than paid for the increase in CPU.
  • I also analysed ASH data for the different tests, broken down by materialized view (a sketch of such a query follows below).  I found it was better to leave smaller materialized views uncompressed but to compress the larger ones.  This is not surprising, since HCC adds additional structures.

These results and conclusions are from one set of tests on one particular application, on one particular 'engineered' hardware configuration, using one particular data set.  So you need to conduct your own comparative tests and decide what is best for your system.
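For reference, the per-object breakdown can be pulled from ASH along these lines; a minimal sketch (the object name pattern is hypothetical, and since DBA_HIST_ACTIVE_SESS_HISTORY keeps roughly one sample every 10 seconds per active session, COUNT(*)*10 only approximates seconds of DB time):

SELECT o.object_name,
       NVL(h.event, 'ON CPU') AS event,
       COUNT(*) * 10          AS approx_db_time_secs
FROM   dba_hist_active_sess_history h
       JOIN dba_objects o ON o.object_id = h.current_obj#
WHERE  o.object_name LIKE '%SUM%MV%'   -- hypothetical naming pattern
GROUP  BY o.object_name, NVL(h.event, 'ON CPU')
ORDER  BY approx_db_time_secs DESC;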

    A look into oracle redo: index and overview