Oakies Blog Aggregator

Why SQL Server Shouldn’t Run as Root on Linux

So more than one person contacted me about my last post and stated that they didn’t see a problem with SQL Server on Linux running as the root user.

Me:

Although enhancements have changed Windows installations so that applications run with a unique user, I created an mssql OS user even back on SQL Server 2000 on Windows, as I had a tendency to use similar security practices for all database platforms as a multi-platform DBA.  With that being said, yes, it introduced complexity, but it was for a reason: users should be restricted to the least amount of privileges required.  To grant any application or database "God" powers on a host is akin to granting DBA to every user in the database, but at the host level.  As important as security is to DBAs INSIDE the database, it should be just as important to us OUTSIDE of it, on the host it resides on.

Security is important and has become more complex with the increase in security breaches and the introduction of the cloud.  One of the simplest ways to address this is to ensure that all application owners on a host are granted only the privileges they require.  The application user should only utilize sudo, the sticky bit, iptables, SUID, SGID and proper group creation/allocation if and when required.
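
As an example of keeping an eye on those mechanisms, here is a quick audit sketch (scope and paths are illustrative) for reviewing where those bits are set on a host:

# World-writable directories missing the sticky bit:
find / -xdev -type d -perm -0002 ! -perm -1000 2>/dev/null
# SUID/SGID executables that warrant review:
find / -xdev -type f \( -perm -4000 -o -perm -2000 \) 2>/dev/null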

SQL Server As Root

As an example, let's say that SQL Server is owned by root on a Linux machine, and it's still a standard process for this system to load data via Excel spreadsheets since its OS migration to Linux.  A hacker has discovered this and builds malware to attach to an Excel spreadsheet, and as the files are owned by the same owner as SQL Server, the malware is able to take advantage of the vulnerability, running scripts on the host as root. If SQL Server had been installed as the OS user mssql, with limited privileges and proper group affiliation, this type of attack could be isolated and its impact lessened.

Linux administrators will consistently demand you justify why you need to run anything as root, no matter if it's to update files, create files, modify applications or even execute system tasks.  A proper user/group configuration will:

  1. Allow you to track, via an audit, who has switched over to super user.
  2. Make it less likely that a human mistake turns into a catastrophic command executed as root.
  3. Simplify identifying which processes are part of an application stack.
  4. Provide a second layer of security that could save the company from becoming the next security breach.
  5. Limit the amount of data the application is privy to on the host.

Do It Right, Don’t Do It Twice

Create a proper mssql OS user on Linux, and create DBA and SQLINSTALL groups.  As Microsoft matures its SQL Server offering on Linux, features are going to increase and be enhanced.  Microsoft will embrace mature practices, so get ahead of the game and start now, building proper security from the beginning.
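
A minimal sketch of that setup, using the names above (flags may vary by distribution):

# Create the groups and a dedicated service account for SQL Server.
groupadd dba
groupadd sqlinstall
useradd -m -g dba -G sqlinstall -s /bin/bash mssql
# Hand the install tree to the service account rather than root.
chown -R mssql:dba /opt/mssql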

 

 



Copyright © DBA Kevlar [Why SQL Server Shouldn't Run as Root on Linux], All Rights Reserved. 2018.

The post Why SQL Server Shouldn’t Run as Root on Linux appeared first on DBA Kevlar.

Histograms

I had a sudden urge to go a bit meta – so here’s a relative frequency histogram of my observations of the general use of histograms in Oracle:

 

[Figure: relative frequency histogram, https://jonathanlewis.files.wordpress.com/2018/01/histograms.png]

 

 

Dealing with IP addresses

Kris Rice made mention on Twitter about the free (but limited lifetime) IP address to Country mappings that you can get from Maxmind.


If you ever need to deal with IP addresses, you might find the following utility package I wrote a while back useful.  It lets you convert from the string representation of an IP address to its numeric equivalent, and vice-versa.  It handles IPv4 and IPv6, with one caveat: I didn't bother with the collapsed-zeros notation for IPv6 so I could keep the performance snappy.  Free for your use, without warranty or responsibility.


SQL> create or replace
  2  package ip_util is
  3
  4  function ip_num_from_str(p_ip_str varchar2) return number deterministic;
  5  function ip_str_from_num(p_ipnum number) return varchar2 deterministic;
  6
  7  end;
  8  /

Package created.

SQL>
SQL>
SQL> create or replace
  2  package body ip_util is
  3
  4    --
  5    -- constants need to be fixed, not expressions if you want to avoid ora-4068
  6    --
  7    l_ip41 constant number(12)  := 256;        -- power(256,1);
  8    l_ip42 constant number(12)  := 65536;      -- power(256,2);
  9    l_ip43 constant number(12)  := 16777216;   -- power(256,3);
 10    l_ip44 constant number(12)  := 4294967296; -- power(256,4);
 11
 12    l_ip61 constant number(38)  := 65536;                              --power(65536,1);
 13    l_ip62 constant number(38)  := 4294967296;                         --power(65536,2);
 14    l_ip63 constant number(38)  := 281474976710656;                    --power(65536,3);
 15    l_ip64 constant number(38)  := 18446744073709551616;               --power(65536,4);
 16    l_ip65 constant number(38)  := 1208925819614629174706176;          --power(65536,5);
 17    l_ip66 constant number(38)  := 79228162514264337593543950336;      --power(65536,6);
 18    l_ip67 constant number(38)  := 5192296858534827628530496329220096; --power(65536,7);
 19
 20
 21  function ip_num_from_str(p_ip_str varchar2) return number deterministic is
 22    l_ip_num     number;
 23    l_dot1       pls_integer;
 24    l_dot2       pls_integer;
 25    l_dot3       pls_integer;
 26    l_dot4       pls_integer;
 27
 28    l_colon      pls_integer;
 29    l_colon_cnt  pls_integer;
 30    l_hex        varchar2(32);
 31    l_ip_str     varchar2(64);
 32  begin
 33    if p_ip_str like '%.%' then
 34      l_dot1   := instr(p_ip_str,'.');
 35      l_dot2   := instr(p_ip_str,'.',l_dot1+1);
 36      l_dot3   := instr(p_ip_str,'.',l_dot2+1);
 37      l_dot4   := instr(p_ip_str,'.',l_dot3+1);
 38      if l_dot4 > 0 then
 39         raise_application_error(-20000,'Cannot be resolved to an IP4 address');
 40      end if;
 41
 42      l_ip_num :=  l_ip43*to_number(substr(p_ip_str,1,l_dot1-1)) +
 43                   l_ip42*to_number(substr(p_ip_str,l_dot1+1,l_dot2-l_dot1-1)) +
 44                   l_ip41*to_number(substr(p_ip_str,l_dot2+1,l_dot3-l_dot2-1)) +
 45                   to_number(substr(p_ip_str,l_dot3+1));
 46
 47    elsif p_ip_str like '%:%' then
 48      --
 49      -- Note: The abbreviation of "Consecutive sections of zeroes are replaced with a double colon (::)" is not implemented.
 50      --
 51      l_colon_cnt := length(p_ip_str)-length(replace(p_ip_str,':'));
 52      if l_colon_cnt != 7 then
 53         raise_application_error(-20000,'Cannot be resolved to an IP6 address');
 54      end if;
 55
 56      l_ip_str := p_ip_str||':';
 57      loop
 58        l_colon := instr(l_ip_str,':');
 59        l_hex := l_hex || lpad(substr(l_ip_str,1,l_colon-1),4,'0');
 60        l_ip_str := substr(l_ip_str,l_colon+1);
 61        exit when l_ip_str is null;
 62      end loop;
 63      l_ip_num := to_number(l_hex,'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx');
 64    end if;
 65
 66    return l_ip_num;
 67  end;
 68
 69
 70  function ip_str_from_num(p_ipnum number) return varchar2 deterministic is
 71  begin
 72    if p_ipnum < l_ip44 then
 73      return  mod(trunc(p_ipnum/l_ip43),l_ip41) ||'.'||
 74              mod(trunc(p_ipnum/l_ip42),l_ip41) ||'.'||
 75              mod(trunc(p_ipnum/l_ip41),l_ip41) ||'.'||
 76              mod(p_ipnum,l_ip41);
 77    else
 78      --
 79      -- Note: The abbreviation of "Consecutive sections of zeroes are replaced with a double colon (::)" is not implemented.
 80      --
 81      return  to_char(mod(trunc(p_ipnum/l_ip67),l_ip61),'fmxxxx') ||':'||
 82              to_char(mod(trunc(p_ipnum/l_ip66),l_ip61),'fmxxxx') ||':'||
 83              to_char(mod(trunc(p_ipnum/l_ip65),l_ip61),'fmxxxx') ||':'||
 84              to_char(mod(trunc(p_ipnum/l_ip64),l_ip61),'fmxxxx') ||':'||
 85              to_char(mod(trunc(p_ipnum/l_ip63),l_ip61),'fmxxxx') ||':'||
 86              to_char(mod(trunc(p_ipnum/l_ip62),l_ip61),'fmxxxx') ||':'||
 87              to_char(mod(trunc(p_ipnum/l_ip61),l_ip61),'fmxxxx') ||':'||
 88              to_char(mod(p_ipnum,l_ip61),'fmxxxx');
 89    end if;
 90  end;
 91
 92  end;
 93  /

Package body created.

SQL> select ip_util.ip_num_from_str('192.168.1.2') from dual;

IP_UTIL.IP_NUM_FROM_STR('192.168.1.2')
--------------------------------------
                            3232235778

SQL> select ip_util.ip_str_from_num(3232235778) from dual;

IP_UTIL.IP_STR_FROM_NUM(3232235778)
-----------------------------------------------------------------------------------------------------------------
192.168.1.2

SQL> select ip_util.ip_num_from_str('2001:db8:0:0:0:ff00:42:8329') ip from dual;

                                          IP
--------------------------------------------
      42540766411282592856904265327123268393

SQL> select ip_util.ip_str_from_num(42540766411282592856904265327123268393) from dual;

IP_UTIL.IP_STR_FROM_NUM(42540766411282592856904265327123268393)
-----------------------------------------------------------------------------------------------------------------
2001:db8:0:0:0:ff00:42:8329
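
As a usage sketch tying back to the Maxmind mention: if you loaded an IP-to-country mapping into a table (geo_ip here is a hypothetical table with numeric ip_from/ip_to bounds and a country column), the package lets you do range lookups directly:

SQL> select country
  2  from   geo_ip
  3  where  ip_util.ip_num_from_str('192.168.1.2') between ip_from and ip_to;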


Linux 101 for SQL Server DBAs, Part II

Linux has become a clear area of required learning for SQL Server DBAs as of SQL Server 2016.  What?  You didn't read that right, you say?  Yes, while a Linux edition was introduced with SQL Server 2016, it was pushed to the forefront with the release of 2017, along with Python, in a push towards stronger DevOps initiatives, (one OS to rule them all!) and even stronger analytics trends.

As most database platforms already run on Linux, the SQL Server community has a bit of catching up to do; so, as many advanced DBA Linux posts and classes as there are out there, let's start with some basics.  There's already a Part I to this post, but we'll continue on, repeating a little of what will be needed as part of this post and then going deeper into some more basics.

Privileges: The Least Required

I love the Linux SQL Server 2017 container, but there is one thing I’m really not thrilled with.  Everything is installed under root.

root@4b714989afa3:/opt# ls -ltr
drwxrwxr-x 4 root root 4096 Nov 18 04:55 mssql
drwxrwxr-x 4 root root 4096 Nov 18 04:56 mssql-tools
drwxr-xr-x 3 root root 4096 Nov 18 04:56 microsoft

Every Oracle DBA reading this is saying, "WTF?"  There should be a SQL Server user, most likely mssql, that owns this, along with supporting group ownership, such as DBA and SQLINSTALL.  This would follow the appropriate best practice, (I know, I know, you hate that term…) and anything that required super user privileges, (think ADMINISTRATOR on Windows) would then require sudo, (superuser do) to perform.
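
Under that model, anything needing elevation goes through sudo.  As a sketch of what a sudoers entry might look like (edit with visudo; mssql-server is the standard service name, the rest is illustrative), letting the DBA group bounce the service without a full root shell:

%dba ALL=(root) /usr/bin/systemctl restart mssql-server, /usr/bin/systemctl status mssql-server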

Processes

The ps command is going to be your friend.

Searching for SQL Server.

ps -ef

Now this will show you all the processes that are running and could be a bit more than you might want to see upon initial view.  Maybe you just want to see what’s running for SQL Server:

root@4b714989afa3:/# ps -ef | grep sql
root 1 0 0 Jan21 ? 00:00:00 /bin/sh -c /opt/mssql/bin/sqlservr
root 5 1 0 Jan21 ? 00:00:00 /opt/mssql/bin/sqlservr
root 7 5 53 Jan21 ? 17:02:26 /opt/mssql/bin/sqlservr

root@4b714989afa3:/# ps aux | grep sql
root 1 0.0 0.0 4508 796 ? Ss Jan21 0:00 /bin/sh -c /opt/mssql/bin/sqlservr
root 5 0.0 0.3 80208 13300 ? Sl Jan21 0:00 /opt/mssql/bin/sqlservr
root 7 50.1 17.2 3028796 692196 ? Sl Jan21 1024:57 /opt/mssql/bin/sqlservr
root 6236 0.0 0.0 11284 920 pts/0 S+ 23:46 0:00 grep --color=auto sql

This is much cleaner, and you won't have to view everything running on the server, which, once the server is running in a standard company environment, might be quite littered with processes.  Note the aux option shows you each process's CPU and memory usage along with its state, while -ef shows you the parent process id, (PPID) of each process.
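
If it's the parent/child relationship itself you're after, most Linux builds of ps can also draw the process tree with the --forest option; based on the processes above, the output would look something like this (other processes trimmed):

root@4b714989afa3:/# ps -ef --forest
root 1 0 0 Jan21 ? 00:00:00 /bin/sh -c /opt/mssql/bin/sqlservr
root 5 1 0 Jan21 ? 00:00:00  \_ /opt/mssql/bin/sqlservr
root 7 5 53 Jan21 ? 17:02:26      \_ /opt/mssql/bin/sqlservr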

You can also filter by user:

root@4b714989afa3:/opt# ps -f -u root
 UID PID PPID C STIME TTY TIME CMD
 root 1 0 0 Jan21 ? 00:00:00 /bin/sh -c /opt/mssql/bin/s
 root 5 1 0 Jan21 ? 00:00:00 /opt/mssql/bin/sqlservr
 root 7 5 50 Jan21 ? 17:05:04 /opt/mssql/bin/sqlservr
 root 5840 0 0 21:51 pts/0 00:00:00 bash
 root 6249 5840 0 23:49 pts/0 00:00:00 ps -f -u root

Viewing the Contents of a File

You can do this multiple ways, but these are the most common:

  • cat
  • view
  • vi/vim

That's three bullet points but four commands, with the first two offering to view the contents of the file and vi/vim to edit it.  You can also edit with the "view" option, but since view opens the file read only, you'll need to perform an override to write your changes.  This can be done with a standard vi/vim command of :w! while still inside the file.

We'll go over this in deeper detail in a moment, but let's go back to just cat'ing a file.  cat, short for concatenate, is one of the most common ways to look through a file and is often used to parse through the contents.  We can use it in conjunction with other commands; we often say we "pipe" the commands together, since we use the "|" symbol to connect the two commands.

An example of this might be:

# cat /home/mssql/scripts/perfdata.log | grep -i 'cpu'

In the example above, it searches through the perfdata.log file and will return any line with the word cpu in it.  The -i flag makes the search case-insensitive, and since grep matches substrings, it will capture cpuinfo or cpuwait as well as cpu.

Now editing files with vi/vim takes a bit of getting used to, and it's also considered "old school", but it's where I live.  Before editing any file, consider backing it up with the copy command:

cp <filename> <filename>.bak

Being able to simply edit from the keyboard without having to click on a mouse makes my life easy and has been second nature for as long as I can remember.  Here are the basic keyboard commands you need to know:

Open a file in vi: vi <filename>

You can hit the Escape key to return to command mode and start over on each command.

  • Go down a line: j
  • Go up a line: k
  • Go right one character: l
  • Go left one character: h
  • Insert before the cursor: i
  • Append after the cursor: a
  • Append to the end of a line: A
  • Add a line and insert below your cursor: o
  • Add a line and insert ABOVE your cursor: O
  • Undo: u
  • Quit, (fails if there are unsaved changes): :q
  • Quit and discard unsaved changes: :q!
  • Write to a file: :w
  • Write to a read only file, (override): :w!
  • Write to a file and quit: :wq
  • Write to a read only file, (override) and quit: :wq!
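
Putting it together, a minimal editing session (the file name is just an example) looks like this:

vi /tmp/test.txt     <-- open (or create) the file
i                    <-- enter insert mode and type your text
<Esc>                <-- return to command mode
:wq                  <-- write the file and quit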

 

Well, I think I’ve given you enough to start being dangerous…and how to learn not to be, too.

 

 

 

 

 

 

 



Copyright © DBA Kevlar [Linux 101 for SQL Server DBAs, Part II], All Rights Reserved. 2018.

The post Linux 101 for SQL Server DBAs, Part II appeared first on DBA Kevlar.

Brand new year? Brand new to installing Oracle?

If you’ve stumbled across this blog trying to get started with a local installation of Oracle database for some personal development, then firstly, welcome!

Secondly, it can be a little daunting to get up to speed, so here's a little video that will walk through the process of downloading, installing and getting up and running with Oracle database on a Windows laptop/desktop. It's a lot easier than you think.

Announcement: Oracle Indexing Internals Seminars Coming to New Zealand in March 2018 (Shipyards of New Zealand)

I'm very pleased to announce I've now finalised some dates in New Zealand for my popular and highly acclaimed "Oracle Indexing Internals and Best Practices" seminar. They are:

  • Wellington 12-13 March 2018 (Auldhouse: Level 8 Lumley House, 11 Hunter Street): Tickets and Registration Link
  • Auckland 15-16 March 2018 (Karstens: Level 4, 205 Queen Street): Tickets and Registration Link

[…]

Unplug an Encrypted PDB (ORA-46680: master keys of the container database must be exported)

In the Oracle Database Cloud DBaaS you provision a multitenant database where tablespaces are encrypted. This means that when you unplug/plug the pluggable databases, you also need to export/import the encryption keys. You cannot just copy the wallet, because the wallet contains all CDB keys. Usually you can be guided by the error messages, but this one needs a little explanation and an example.

Here I'll unplug PDB6 from CDB1 and plug it into CDB2.

[oracle@VM122 blogs]$ connect /@CDB1 as sysdba
SQLcl: Release 17.4.0 Production on Fri Jan 19 22:22:44 2018
Copyright (c) 1982, 2018, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
 
22:22:46 SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ ---------- ------------ ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
5 PDB6 READ WRITE NO

Here are the master keys:

SQL> select con_id,tag,substr(key_id,1,6)||'...' "KEY_ID...",creator,key_use,keystore_type,origin,creator_pdbname,activating_pdbname from v$encryption_keys;
 
CON_ID TAG KEY_ID... CREATOR KEY_USE KEYSTORE_TYPE ORIGIN CREATOR_PDBNAME ACTIVATING_PDBNAME
------ --- --------- ------- ------- ------------- ------ --------------- ------------------
1 cdb1 AcyH+Z... SYS TDE IN PDB SOFTWARE KEYSTORE LOCAL CDB$ROOT CDB$ROOT
3 pdb6 Adnhnu... SYS TDE IN PDB SOFTWARE KEYSTORE LOCAL PDB6 PDB6

Export keys and Unplug PDB

Let’s try to unplug PDB6:
22:22:51 SQL> alter pluggable database PDB6 close immediate;
Pluggable database PDB6 altered.
 
22:23:06 SQL> alter pluggable database PDB6 unplug into '/var/tmp/PDB6.xml';
 
Error starting at line : 1 in command -
alter pluggable database PDB6 unplug into '/var/tmp/PDB6.xml'
Error report -
ORA-46680: master keys of the container database must be exported

This message is not clear. You don’t export the container database (CDB) key. You have to export the PDB ones.

Then, I have to open the PDB, switch to it, and export the key:

SQL> alter session set container=PDB6;
Session altered.
 
SQL> administer key management set keystore open identified by "k3yCDB1";
Key MANAGEMENT succeeded.
 
SQL> administer key management
2 export encryption keys with secret "this is my secret password for the export"
3 to '/var/tmp/PDB6.p12'
4 identified by "k3yCDB1"
5 /
 
Key MANAGEMENT succeeded.

Note that I opened the keystore with a password. If you use an autologin wallet, you have to close it in the CDB$ROOT and open it with the password.
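
For reference, a sketch of those two steps (run in CDB$ROOT; the keystore password is the one used throughout this example):

SQL> administer key management set keystore close;
SQL> administer key management set keystore open identified by "k3yCDB1";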

Now I can unplug the database:

SQL> alter pluggable database PDB6 close immediate;
Pluggable database PDB6 altered.
 
SQL> alter pluggable database PDB6 unplug into '/var/tmp/PDB6.xml';
Pluggable database PDB6 altered.

Plug PDB and Import keys

I’ll plug it in CDB2:

SQL> connect /@CDB2 as sysdba
Connected.
SQL> create pluggable database PDB6 using '/var/tmp/PDB6.xml' file_name_convert=('/CDB1/PDB6/','/CDB2/PDB6/');
Pluggable database PDB6 created.

When I open it, I get a warning:

18:05:45 SQL> alter pluggable database PDB6 open;
ORA-24344: success with compilation error
24344. 00000 - "success with compilation error"
*Cause: A sql/plsql compilation error occurred.
*Action: Return OCI_SUCCESS_WITH_INFO along with the error code
 
Pluggable database PDB6 altered.

The PDB is opened in restricted mode and then I have to import the wallet:

SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
6 PDB6 READ WRITE YES
 
SQL> select name,cause,type,status,message,action from pdb_plug_in_violations;
 
NAME CAUSE TYPE STATUS MESSAGE ACTION
---- ----- ---- ------ ------- ------
PDB6 Wallet Key Needed ERROR PENDING PDB needs to import keys from source. Import keys from source.

Then I open the destination CDB wallet and import the PDB keys into it:

SQL> alter session set container=PDB6;
Session altered.
 
SQL> administer key management set keystore open identified by "k3yCDB2";
Key MANAGEMENT succeeded.
 
SQL> administer key management
2 import encryption keys with secret "this is my secret password for the export"
3 from '/var/tmp/PDB6.p12'
4 identified by "k3yCDB2"
5 with backup
6 /
 
Key MANAGEMENT succeeded.

Now the PDB can be opened for all sessions:

SQL> alter session set container=CDB$ROOT;
Session altered.
 
SQL> alter pluggable database PDB6 close;
Pluggable database PDB6 altered.
 
SQL> alter pluggable database PDB6 open;
Pluggable database PDB6 altered.

Here is a confirmation that the PDB has the same key as in the origin CDB:

SQL> select con_id,tag,substr(key_id,1,6)||'...' "KEY_ID...",creator,key_use,keystore_type,origin,creator_pdbname,activating_pdbname from v$encryption_keys;
 
CON_ID TAG KEY_ID... CREATOR KEY_USE KEYSTORE_TYPE ORIGIN CREATOR_PDBNAME ACTIVATING_PDBNAME
------ --- --------- ------- ------- ------------- ------ --------------- ------------------
1 cdb2 AdTdo9... SYS TDE IN PDB SOFTWARE KEYSTORE LOCAL CDB$ROOT CDB$ROOT
4 pdb1 Adnhnu... SYS TDE IN PDB SOFTWARE KEYSTORE LOCAL PDB6 PDB6

 

The post Unplug an Encrypted PDB (ORA-46680: master keys of the container database must be exported) appeared first on Blog dbi services.

Boston Cloud SIG Session: Docker Demystified

Thank you to all who came to my session for the North East Oracle User Group (Boston), more specifically the first ever Cloud SIG meetup at Burlington, MA on Jan 18th. It was heartening to see a 100+ crowd for a session that ran all the way up to 9:30 PM on a weekday. I sincerely hope you enjoyed it.

Here is the presentation for you to download: http://bit.ly/2FU7PRY

Nested MVs

A recent client was seeing a very large redo penalty from refreshing materialized views. Unfortunately they had to be refreshed very frequently, and were being handled with a complete refresh in atomic mode – which means delete every row from every MV then re-insert every row.  The total redo was running at about 5GB per hour, which wasn’t a problem for throughput, but the space for handling backup and recovery was getting a bit extreme.

The requirement consisted of two MVs which extracted and aggregated row and column subsets in two different ways from a single table; then two MVs that aggregated one of the first MVs in two different ways; then two MVs which each joined one of the first-level MVs to one of the second-level MVs.

No problem – join MVs are legal, aggregate MVs are legal, “nested” MVs are legal: all you have to do is create the right MV logs and pick the right refresh command.  Since the client was also running Standard Editions (SE2) there was no need to worry about how to ensure that query rewrite would work (feature not implemented on SE).

So here, simplified and camouflaged, is a minimum subset of just the first few stages of the construction: a base table with MV log, one first-level aggregate MV with its own MV log, and two aggregate MVs based on the first MV.

drop materialized view log on req_line;
drop materialized view log on jpl_req_group_numlines;

drop materialized view jpl_req_group_numlines;
drop materialized view jpl_req_numsel;
drop materialized view jpl_req_basis;

drop table req_line;

-- ----------
-- Base Table
-- ----------

create table req_line(
        eventid         number(10,0),
        selected        number(10,0),
        req             number(10,0),
        basis           number(10,0),
        lnid            number(10,0),
        area            varchar2(10),
        excess          number(10,0),
        available       number(10,0),
        kk_id           number(10,0),
        eventdate       number(10,0),
        rs_id           number(10,0)
)
;

-- --------------------
-- MV log on base table
-- --------------------

create materialized view log 
on
req_line
with rowid(
        req, basis, lnid, eventid, selected, area,
        excess, available, kk_id, eventdate, rs_id
)
including new values
;

-- --------------------
-- Level 1 aggregate MV
-- --------------------

create materialized view jpl_req_group_numlines(
        eventid, selected, 
        row_ct, req_ct, basis_ct, req, basis, 
        maxlnid, excess, numsel, area, available, kk_id, 
        rs_id, eventdate
)
segment creation immediate
build immediate
refresh fast on demand 
as 
select 
        eventid,
        selected,
        count(*)        row_ct,
        count(req)      req_ct,
        count(basis)    basis_ct,
        sum(req)        req,
        sum(basis)      basis,
        max(lnid)       maxlnid,
        excess,
        count(selected) numsel,
        area,
        available,
        kk_id,
        rs_id,
        eventdate
from 
        req_line
group by 
        eventid, selected, area, excess,
        available, kk_id, eventdate, rs_id
;

-- ------------------------
-- MV log on first level MV
-- ------------------------

create materialized view log 
on
jpl_req_group_numlines
with rowid 
(
        eventid, area, selected, available,
        basis, req, maxlnid, numsel
)
including new values
;


-- ----------------------------
-- First "level 2" aggregate MV
-- ----------------------------

create materialized view jpl_req_numsel(
        eventid, selected, 
        row_ct, totalreq_ct, totalbasis_ct, totalreq, totalbasis, 
        maxlnid, numsel_ct, numsel, area
)
segment creation immediate
build immediate
refresh fast on demand
as 
select 
        eventid,
        selected,
        count(*)        row_ct,
        count(req)      req_ct,
        count(basis)    basis_ct,
        sum(req)        req,
        sum(basis)      basis,
        max(maxlnid)    maxlnid,
        count(numsel)   numsel_ct,
        sum(numsel)     numsel,
        area
from 
        jpl_req_group_numlines
group by 
        eventid, selected, area
;


-- -----------------------------
-- Second "level 2" aggregate MV
-- -----------------------------

create materialized view jpl_req_basis(
        eventid, 
        row_ct, totalbasis_ct, totalreq_ct, totalbasis, totalreq, 
        area, selected, available, maxlnid ,
        numsel_ct, numsel
)
segment creation immediate
build immediate
refresh fast on demand
as 
select 
        eventid,
        count(*)        row_ct,
        count(basis)    totalbasis_ct,
        count(req)      totalreq_ct,
        sum(basis)      totalbasis,
        sum(req)        totalreq,
        area,
        selected,
        available,
        max(maxlnid)    maxlnid,
        count(numsel)   numsel,
        sum(numsel)     numsel
from
        jpl_req_group_numlines
group by 
        eventid, area, available, selected
;

Once the table, MV logs and MVs exist we can insert some data into the base table, then try refreshing the views. I have tried three different calls from the dbms_mview package: refresh_all_mviews(), refresh_dependent(), and refresh(), specifying the 'F' (fast) refresh method, atomic refresh, and nested. All three fail in the same way on 12.2.0.1. The code below shows only the refresh_dependent() call.

I’ve included a query to report the current state of the materialized views before and after the calls, and set a two second sleep before the refresh so that changes in “last refresh” time will appear. The final queries are just to check that the expected volume of data has been transferred to the materialized views.


-- ------------------------------------
-- Insert some data into the base table
-- ------------------------------------

begin
        for i in 1..100 loop
                execute immediate 'insert into req_line values( :xxx, :xxx, :xxx, :xxx, :xxx, :xxx, :xxx, :xxx, :xxx, :xxx, :xxx)' 
                using i,i,i,i,i,i,i,i,i,i,i;
                commit;
        end loop;
end;
/

set linesize 144
column mview_name format a40

select
        mview_name, staleness, compile_state, last_refresh_type, 
        to_char(last_refresh_date,'dd-mon hh24:mi:ss')          ref_time
from
        user_mviews
ORDER by
        last_refresh_date, mview_name
;

prompt  Waiting for 2 seconds to allow refresh time to change

execute dbms_lock.sleep(2)

declare
        m_fail_ct       number(6,0);
begin
        dbms_mview.refresh_dependent(
                number_of_failures      => m_fail_ct,
                list                    => 'req_line',
                method                  => 'F',
                nested                  => true,
                atomic_refresh          => true
        );

        dbms_output.put_line('Failures: ' || m_fail_ct);
end;
/

select
        mview_name, staleness, compile_state, last_refresh_type, 
        to_char(last_refresh_date,'dd-mon hh24:mi:ss')          ref_time
from
        user_mviews
order by
        last_refresh_date, mview_name
;

-- --------------------------------
-- Should be 100 rows in each table
-- --------------------------------

select count(*) from jpl_req_basis;
select count(*) from jpl_req_group_numlines;
select count(*) from jpl_req_numsel;

Both of the earlier versions of Oracle (11gR2 and 12cR1) are happy with this code and refresh all three materialized views without fail. Oracle 12.2.0.1 crashes the procedure call with a deadlock error which, when traced, shows itself to be a self-deadlock while attempting to select a data dictionary row for update:


MVIEW_NAME                               STALENESS           COMPILE_STATE       LAST_REF REF_TIME
---------------------------------------- ------------------- ------------------- -------- ------------------------
JPL_REQ_BASIS                            FRESH               VALID               COMPLETE 19-jan 14:03:01
JPL_REQ_GROUP_NUMLINES                   NEEDS_COMPILE       NEEDS_COMPILE       COMPLETE 19-jan 14:03:01
JPL_REQ_NUMSEL                           FRESH               VALID               COMPLETE 19-jan 14:03:01

3 rows selected.

Waiting for 2 seconds to allow refresh time to change

PL/SQL procedure successfully completed.

declare
*
ERROR at line 1:
ORA-00060: deadlock detected while waiting for resource
ORA-06512: at "SYS.DBMS_SNAPSHOT_KKXRCA", line 2952
ORA-06512: at "SYS.DBMS_SNAPSHOT_KKXRCA", line 85
ORA-06512: at "SYS.DBMS_SNAPSHOT_KKXRCA", line 245
ORA-06512: at "SYS.DBMS_SNAPSHOT_KKXRCA", line 1243
ORA-06512: at "SYS.DBMS_SNAPSHOT_KKXRCA", line 2414
ORA-06512: at "SYS.DBMS_SNAPSHOT_KKXRCA", line 2908
ORA-06512: at "SYS.DBMS_SNAPSHOT_KKXRCA", line 3699
ORA-06512: at "SYS.DBMS_SNAPSHOT_KKXRCA", line 3723
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 75
ORA-06512: at line 4


MVIEW_NAME                               STALENESS           COMPILE_STATE       LAST_REF REF_TIME
---------------------------------------- ------------------- ------------------- -------- ------------------------
JPL_REQ_NUMSEL                           NEEDS_COMPILE       NEEDS_COMPILE       COMPLETE 19-jan 14:03:01
JPL_REQ_BASIS                            FRESH               VALID               FAST     19-jan 14:03:04
JPL_REQ_GROUP_NUMLINES                   FRESH               VALID               FAST     19-jan 14:03:04

The deadlock graph from the trace file, with a little extra surrounding information, looks like this:


Deadlock graph:
                                          ------------Blocker(s)-----------  ------------Waiter(s)------------
Resource Name                             process session holds waits serial  process session holds waits serial
TX-00020009-00000C78-A9B090F8-00000000         26      14     X        40306      26      14           X  40306


*** 2018-01-19T14:18:03.925859+00:00 (ORCL(3))
dbkedDefDump(): Starting a non-incident diagnostic dump (flags=0x0, level=1, mask=0x0)
----- Error Stack Dump -----
----- Current SQL Statement for this session (sql_id=2vnzfjzg6px33) -----
select log, oldest, oldest_pk, oldest_oid, oldest_new, youngest+1/86400,  flag, yscn, oldest_seq, oscn, oscn_pk, oscn_oid, oscn_new, oscn_seq  from sys.mlog$ where mowner = :1 and master = :2 for update
----- PL/SQL Stack -----

So far I haven’t been able to spot whether or not I’m doing something wrong, or prohibited, and I haven’t been able to find a matching problem on MoS. Since the code works on 11gR2 and 12cR1 I’m inclined to believe it’s a bug introduced in the 12cR2 timeline – which is a nuisance for my client, but if it is a bug then perhaps a fix will appear fairly promptly.

CDB Views and Query Optimizer Cardinality Estimations

Today I faced a performance problem caused by a bad cardinality estimate involving a CDB view in a 12.1.0.2 multitenant environment. While solving the problem I made a number of observations that I'll try to summarize in this blog post.

First of all, when checking the execution plan of a query that had already been running for more than two hours, I noticed that neither the referenced CDB view nor any of its underlying objects were referenced in the execution plan. The following query (and its execution plan), executed while connected to the CDB, illustrates this (I also added the 12.2.0.1 output to show you the difference in that area):

12.1.0.2

SQL> EXPLAIN PLAN FOR SELECT * FROM cdb_tables;

SQL> SELECT * FROM table(dbms_xplan.display);

Plan hash value: 1439328272

---------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name     | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |          | 20000 |    16M|     1 (100)| 00:00:01 |       |       |        |      |            |
|   1 |  PX COORDINATOR         |          |       |       |            |          |       |       |        |      |            |
|   2 |   PX SEND QC (RANDOM)   | :TQ10000 | 20000 |    16M|     1 (100)| 00:00:01 |       |       |  Q1,00 | P->S | QC (RAND)  |
|   3 |    PX PARTITION LIST ALL|          | 20000 |    16M|     1 (100)| 00:00:01 |     1 |   254 |  Q1,00 | PCWC |            |
|   4 |     FIXED TABLE FULL    | X$CDBVW$ | 20000 |    16M|     1 (100)| 00:00:01 |       |       |  Q1,00 | PCWP |            |
---------------------------------------------------------------------------------------------------------------------------------

12.2.0.1

SQL> EXPLAIN PLAN FOR SELECT * FROM cdb_tables;

SQL> SELECT * FROM table(dbms_xplan.display);

Plan hash value: 1043806087

-------------------------------------------------------------------------------------------------
| Id  | Operation          | Name       | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |            | 20000 |    28M|     1 (100)| 00:00:01 |       |       |
|   1 |  PARTITION LIST ALL|            | 20000 |    28M|     1 (100)| 00:00:01 |     1 |     3 |
|   2 |   CONTAINERS FULL  | DBA_TABLES | 20000 |    28M|     1 (100)| 00:00:01 |       |       |
-------------------------------------------------------------------------------------------------

As you can see, the 12.1.0.2 execution plan doesn’t reference any object related to the CDB_TABLES view or one of its underlying tables. Instead, it uses the generic fixed table X$CDBVW$. Simply put, X$CDBVW$ is a fixed table that gives access to data stored in PDBs. To know more, I advise you to read Laurent Leturgez’s blog post entitled Oracle Database 12c CDB$VIEW function.

In the real query, the CDB view was joined to a number of V$ views. Unfortunately, the query optimizer selected the wrong join method (no surprise, it was a nested loops join instead of a hash join…) and the performance was abysmal. When I saw that the cardinality estimations were wrong, I checked whether the involved objects had statistics. But, because of its particular behavior, the fixed table X$CDBVW$ had no statistics. And, by the way, statistics on it can’t be gathered. If you try, you get the following error:

12.1.0.2

SQL> exec dbms_stats.gather_table_stats('SYS','X$CDBVW$')
BEGIN dbms_stats.gather_table_stats('SYS','X$CDBVW$'); END;

*
ERROR at line 1:
ORA-20000: Unable to analyze TABLE "SYS"."X$CDBVW$", analyzing the table is not supported
ORA-06512: at "SYS.DBMS_STATS", line 35464
ORA-06512: at line 1

12.2.0.1

SQL> exec dbms_stats.gather_table_stats('SYS','X$CDBVW$')
BEGIN dbms_stats.gather_table_stats('SYS','X$CDBVW$'); END;

*
ERROR at line 1:
ORA-20000: Unable to analyze TABLE "SYS"."X$CDBVW$", insufficient privileges or does not exist
ORA-06512: at "SYS.DBMS_STATS", line 36873
ORA-06512: at "SYS.DBMS_STATS", line 36496
ORA-06512: at "SYS.DBMS_STATS", line 36716
ORA-06512: at line 1

As Laurent mentioned in his blog post, the query optimizer uses a default value instead. However, what I noticed is that the estimate wasn't 10000 rows as he mentioned. In my case it was 30000 rows. The difference is probably due to the version. In fact, he wrote his blog post when only 12.1.0.1 was available, but my customer is using 12.1.0.2. So, I did a couple of tests on my own test environment and found out that, as of and including 12.1.0.2, the number of rows increases proportionally as the number of open PDBs increases. The following example illustrates this:

12.1.0.2 / 12.2.0.1

SQL> SELECT con_id, name, open_mode FROM v$pdbs WHERE open_mode LIKE 'READ%';

    CON_ID NAME                           OPEN_MODE
---------- ------------------------------ ----------
         2 PDB$SEED                       READ ONLY

SQL> EXPLAIN PLAN SET STATEMENT_ID = 'seed' FOR SELECT * FROM cdb_tables;

SQL> ALTER PLUGGABLE DATABASE test1 OPEN;

SQL> EXPLAIN PLAN SET STATEMENT_ID = 'test1' FOR SELECT * FROM cdb_tables;

SQL> ALTER PLUGGABLE DATABASE test2 OPEN;

SQL> EXPLAIN PLAN SET STATEMENT_ID = 'test2' FOR SELECT * FROM cdb_tables;

SQL> ALTER PLUGGABLE DATABASE test3 OPEN;

SQL> EXPLAIN PLAN SET STATEMENT_ID = 'test3' FOR SELECT * FROM cdb_tables;

SQL> SELECT con_id, name, open_mode FROM v$pdbs WHERE open_mode LIKE 'READ%';

    CON_ID NAME                           OPEN_MODE
---------- ------------------------------ ----------
         2 PDB$SEED                       READ ONLY
         4 TEST1                          READ WRITE
         5 TEST2                          READ WRITE
         6 TEST3                          READ WRITE

SQL> SELECT statement_id, cardinality FROM plan_table WHERE id = 0;

STATEMENT_ID                   CARDINALITY
------------------------------ -----------
seed                                 20000
test1                                30000
test2                                40000
test3                                50000

Finally, in the real query, because of the join condition based on the CON_ID column, the query optimizer incorrectly adjusted the number of rows returned through the fixed table X$CDBVW$. That led me to do a few tests related to the selectivity estimates for the CON_ID column. As the following example illustrates, the query optimizer uses a default selectivity of 1% for equality predicates, and 5% for range predicates.

12.1.0.2

SQL> EXPLAIN PLAN SET STATEMENT_ID = 'all' FOR SELECT * FROM cdb_tables;

Explained.

SQL> SELECT * FROM table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 1439328272

---------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name     | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |          | 50000 |    40M|     2 (100)| 00:00:01 |       |       |        |      |            |
|   1 |  PX COORDINATOR         |          |       |       |            |          |       |       |        |      |            |
|   2 |   PX SEND QC (RANDOM)   | :TQ10000 | 50000 |    40M|     2 (100)| 00:00:01 |       |       |  Q1,00 | P->S | QC (RAND)  |
|   3 |    PX PARTITION LIST ALL|          | 50000 |    40M|     2 (100)| 00:00:01 |     1 |   254 |  Q1,00 | PCWC |            |
|   4 |     FIXED TABLE FULL    | X$CDBVW$ | 50000 |    40M|     2 (100)| 00:00:01 |       |       |  Q1,00 | PCWP |            |
---------------------------------------------------------------------------------------------------------------------------------

SQL> EXPLAIN PLAN SET STATEMENT_ID = 'eq' FOR SELECT * FROM cdb_tables WHERE con_id = 0;

SQL> EXPLAIN PLAN SET STATEMENT_ID = 'gt' FOR SELECT * FROM cdb_tables WHERE con_id > 0;

SQL> SELECT statement_id, cardinality FROM plan_table WHERE id = 0;

STATEMENT_ID                   CARDINALITY
------------------------------ -----------
all                                  50000
eq                                     500
gt                                    2500

12.2.0.1

SQL> EXPLAIN PLAN SET STATEMENT_ID = 'all' FOR SELECT * FROM cdb_tables;

Explained.

SQL> SELECT * FROM table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 1281079049

-----------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name       | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |            | 50000 |    42M|     1 (100)| 00:00:01 |       |       |        |      |            |
|   1 |  PX COORDINATOR         |            |       |       |            |          |       |       |        |      |            |
|   2 |   PX SEND QC (RANDOM)   | :TQ10000   | 50000 |    42M|     1 (100)| 00:00:01 |       |       |  Q1,00 | P->S | QC (RAND)  |
|   3 |    PX PARTITION LIST ALL|            | 50000 |    42M|     1 (100)| 00:00:01 |     1 |     3 |  Q1,00 | PCWC |            |
|   4 |     CONTAINERS FULL     | DBA_TABLES | 50000 |    42M|     1 (100)| 00:00:01 |       |       |  Q1,00 | PCWP |            |
-----------------------------------------------------------------------------------------------------------------------------------

SQL> EXPLAIN PLAN SET STATEMENT_ID = 'eq' FOR SELECT * FROM cdb_tables WHERE con_id = 0;

SQL> EXPLAIN PLAN SET STATEMENT_ID = 'gt' FOR SELECT * FROM cdb_tables WHERE con_id > 0;

SQL> SELECT statement_id, cardinality FROM plan_table WHERE id = 0;

STATEMENT_ID                   CARDINALITY
------------------------------ -----------
all                                  50000
eq                                     500
gt                                    2500

It goes without saying that such estimates are way off. Good estimates should consider the number of open PDBs…

In summary, if you see wrong estimates related to CDB views, don’t be surprised. In fact, the query optimizer bases its estimations on a number of default values.