Oakies Blog Aggregator

When PDB name conflicts with CDB name

Moving to the multitenant architecture is not a big change. The administration tasks (DBA, monitoring, backups) connect to the CDB and the application connects to the PDB. Even without the multitenant option, it is still recommended to move to the CDB architecture: the non-CDB architecture is deprecated and the multitenant architecture brings interesting features. People often ask how to name the CDB and the PDB, especially when the company has naming rules or policies. My recommendation is to name the PDB the way you are used to naming databases: the name often gives an idea of the data inside, the application, and the environment. The CDB is the container and, in my opinion, you should apply the same naming rules as for servers. Don’t forget that pluggable databases are made to be moved across CDBs, so the CDB name should not depend on its content.

But with single tenant you have a one-to-one relationship between the CDB and the PDB, and the idea may come to give the CDB and the PDB the same name… I’m not sure whether it is supported or not and please, don’t do that.

Service Name

There’s one rule: a service name must be unique on a server, at least among the services registered to the same listener. The PDB name is the default service name registered by the PDB, and the DB_UNIQUE_NAME of the CDB is the default service name registered by the CDB. Therefore the PDB name must be different from the DB_UNIQUE_NAME.

Following this rule, it should be possible to have the same name for the CDB (the DB_NAME) and the PDB, as long as we have set a different DB_UNIQUE_NAME.

Here is an example. The name of my Container Database is CDB1. But as it is part of a Data Guard configuration I changed the unique name to CDB1A (and standby will be CDB1B).
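
For reference, this is roughly how such a configuration can be checked and adjusted. This is only a minimal sketch, assuming a SYSDBA connection to the CDB root; DB_UNIQUE_NAME is a static parameter, so the change takes effect only after an instance restart:

-- current names
select name, db_unique_name from v$database;
-- set a distinct unique name for the Data Guard configuration (spfile only, restart needed)
alter system set db_unique_name='CDB1A' scope=spfile;
-- default services registered by the CDB and its PDBs
select name, network_name, pdb, con_id from v$services;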

Here are the services from my CDB:

SQL> select * from v$services;
 
SERVICE_ID NAME NAME_HASH NETWORK_NAME CREATION_DATE CREATION_DATE_HASH GOAL DTP AQ_HA_NOTIFICATION CLB_GOAL COMMIT_OUTCOME SESSION_STATE_CONSISTENCY GLOBAL PDB SQL_TRANSLATION_PROFILE MAX_LAG_TIME STOP_OPTION FAILOVER_RESTORE DRAIN_TIMEOUT CON_ID
---------- ---- --------- ------------ ------------- ------------------ ---- --- ------------------ -------- ---------------------------------- ------ --- ----------------------- ------------ ----------- ---------------- ------------- ------
7 CDB1A 3104886812 CDB1A 27-AUG-17 1962062146 NONE N NO LONG NO NO CDB$ROOT NONE NONE 0 1
1 SYS$BACKGROUND 165959219 26-JAN-17 1784430042 NONE N NO SHORT NO NO CDB$ROOT NONE NONE 0 1
2 SYS$USERS 3427055676 26-JAN-17 1784430042 NONE N NO SHORT NO NO CDB$ROOT NONE NONE 0 1
0 pdb1 1888881990 pdb1 0 NONE N NO SHORT NO NO PDB1 NONE NONE 0 4
6 CDB1XDB 1202503288 CDB1XDB 27-AUG-17 1962062146 NONE N NO LONG NO NO CDB$ROOT NONE NONE 0 1

All of these are default services: CDB1A is the DB_UNIQUE_NAME, SYS$BACKGROUND is used by background processes, SYS$USERS is used when connecting without a service name, CDB1XDB is used to connect to the XDB dispatchers, and PDB1 is the default service of my pluggable database PDB1.
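
Additional services can of course be defined on top of these defaults. Here is a minimal sketch with DBMS_SERVICE, run while connected to the PDB; the service name MYAPP is made up for the example:

exec dbms_service.create_service(service_name => 'MYAPP', network_name => 'MYAPP');
exec dbms_service.start_service('MYAPP');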

I can also look at the services registered with the listener:


SQL> host lsnrctl status
 
LSNRCTL for Linux: Version 12.2.0.1.0 - Production on 28-AUG-2017 20:34:36
 
Copyright (c) 1991, 2016, Oracle. All rights reserved.
 
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.2.0.1.0 - Production
Start Date 27-AUG-2017 20:41:33
Uptime 0 days 23 hr. 53 min. 3 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Log File /u01/app/oracle/diag/tnslsnr/VM104/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=VM104)(PORT=1521)))
Services Summary...
Service "57c2283990d42152e053684ea8c05ea0" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CDB1A" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CDB1XDB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "pdb1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
The command completed successfully

There is just one additional service here: the GUID of my PDB (see https://blog.dbi-services.com/service-696c6f76656d756c746974656e616e74-has-1-instances/).

ORA-65149

Do you see any service named 'CDB1' here? No. Then I should be able to create a PDB with this name.

SQL> create pluggable database CDB1 admin user admin identified by covfefe file_name_convert=('pdbseed','cdb1');
 
Error starting at line : 1 in command -
create pluggable database CDB1 admin user admin identified by covfefe file_name_convert=('pdbseed','cdb1')
Error report -
ORA-65149: PDB name conflicts with existing service name in the CDB or the PDB
65149. 00000 - "PDB name conflicts with existing service name in the CDB or the PDB"
*Cause: An attempt was made to create a pluggable database (PDB) whose
name conflicts with the existing service name in the container
database (CDB) or the PDB.
*Action: Choose a different name for the PDB.

OK, so this is not possible. However, the error message is not accurate: my PDB name does not conflict with any existing service name. It may conflict with the instance name or the DB_NAME, but not with any service.

NID

As I’m not satisfied with this, I try to find another way to get the same name for the CDB and the PDB. I have a pluggable database named 'PDB1' and I’ll try to change the CDB name to that:


[oracle@VM104 ~]$ nid dbname=PDB1 target=sys/oracle
 
DBNEWID: Release 12.2.0.1.0 - Production on Mon Aug 28 20:40:08 2017
 
Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.
 
Connected to database CDB1 (DBID=926862412)
 
Connected to server version 12.2.0
 
Control Files in database:
/u01/oradata/CDB1A/control01.ctl
/u01/fast_recovery_area/CDB1A/control02.ctl
 
The following datafiles are read-only:
/u01/oradata/CDB1A/PDB1/USERS2.db (17)
These files must be writable by this utility.
 
Change database ID and database name CDB1 to PDB1? (Y/[N]) => Y
 
Proceeding with operation
Changing database ID from 926862412 to 3460932968
Changing database name from CDB1 to PDB1
Control File /u01/oradata/CDB1A/control01.ctl - modified
Control File /u01/fast_recovery_area/CDB1A/control02.ctl - modified
Datafile /u01/oradata/CDB1A/system01.db - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/sysaux01.db - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/undotbs01.db - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/pdbseed/system01.db - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/pdbseed/sysaux01.db - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/users01.db - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/pdbseed/undotbs01.db - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/PDB1/system01.db - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/PDB1/sysaux01.db - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/PDB1/undotbs01.db - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/PDB1/USERS.db - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/PDB1/USERS2.db - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/temp01.db - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/pdbseed/temp012017-08-27_18-30-16-741-PM.db - dbid changed, wrote new name
Datafile /u01/oradata/CDB1A/PDB1/temp012017-08-27_18-30-16-741-PM.db - dbid changed, wrote new name
Control File /u01/oradata/CDB1A/control01.ctl - dbid changed, wrote new name
Control File /u01/fast_recovery_area/CDB1A/control02.ctl - dbid changed, wrote new name
Instance shut down
 
Database name changed to PDB1.
Modify parameter file and generate a new password file before restarting.
Database ID for database PDB1 changed to 3460932968.
All previous backups and archived redo logs for this database are unusable.
Database is not aware of previous backups and archived logs in Recovery Area.
Database has been shutdown, open database with RESETLOGS option.
Succesfully changed database name and ID.
DBNEWID - Completed succesfully.
 
SQL> startup
ORACLE instance started.
 
Total System Global Area 859832320 bytes
Fixed Size 8798552 bytes
Variable Size 784338600 bytes
Database Buffers 58720256 bytes
Redo Buffers 7974912 bytes
ORA-01103: database name 'PDB1' in control file is not 'CDB1'
 
SQL> alter system set db_name=PDB1 scope=spfile;
 
System altered.
 
SQL> shutdown immediate
ORA-01507: database not mounted
 
ORACLE instance shut down.
SQL>
SQL> startup
ORACLE instance started.
 
Total System Global Area 859832320 bytes
Fixed Size 8798552 bytes
Variable Size 784338600 bytes
Database Buffers 58720256 bytes
Redo Buffers 7974912 bytes
Database mounted.
ORA-01589: must use RESETLOGS or NORESETLOGS option for database open
 
SQL> alter database open resetlogs;
 
Database altered.

That’s done.
My CDB is named PDB1:
SQL> select * from v$database;
 
DBID NAME CREATED RESETLOGS_CHANGE# RESETLOGS_TIME PRIOR_RESETLOGS_CHANGE# PRIOR_RESETLOGS_TIME LOG_MODE CHECKPOINT_CHANGE# ARCHIVE_CHANGE# CONTROLFILE_TYPE CONTROLFILE_CREATED CONTROLFILE_SEQUENCE# CONTROLFILE_CHANGE# CONTROLFILE_TIME OPEN_RESETLOGS VERSION_TIME OPEN_MODE PROTECTION_MODE PROTECTION_LEVEL REMOTE_ARCHIVE ACTIVATION# SWITCHOVER# DATABASE_ROLE ARCHIVELOG_CHANGE# ARCHIVELOG_COMPRESSION SWITCHOVER_STATUS DATAGUARD_BROKER GUARD_STATUS SUPPLEMENTAL_LOG_DATA_MIN SUPPLEMENTAL_LOG_DATA_PK SUPPLEMENTAL_LOG_DATA_UI FORCE_LOGGING PLATFORM_ID PLATFORM_NAME RECOVERY_TARGET_INCARNATION# LAST_OPEN_INCARNATION# CURRENT_SCN FLASHBACK_ON SUPPLEMENTAL_LOG_DATA_FK SUPPLEMENTAL_LOG_DATA_ALL DB_UNIQUE_NAME STANDBY_BECAME_PRIMARY_SCN FS_FAILOVER_STATUS FS_FAILOVER_CURRENT_TARGET FS_FAILOVER_THRESHOLD FS_FAILOVER_OBSERVER_PRESENT FS_FAILOVER_OBSERVER_HOST CONTROLFILE_CONVERTED PRIMARY_DB_UNIQUE_NAME SUPPLEMENTAL_LOG_DATA_PL MIN_REQUIRED_CAPTURE_CHANGE# CDB CON_ID PENDING_ROLE_CHANGE_TASKS CON_DBID FORCE_FULL_DB_CACHING
---- ---- ------- ----------------- -------------- ----------------------- -------------------- -------- ------------------ --------------- ---------------- ------------------- --------------------- ------------------- ---------------- -------------- ------------ --------- --------------- ---------------- -------------- ----------- ----------- ------------- ------------------ ---------------------- ----------------- ---------------- ------------ ------------------------- ------------------------ ------------------------ ------------- ----------- ------------- ---------------------------- ---------------------- ----------- ------------ ------------------------ ------------------------- -------------- -------------------------- ------------------ -------------------------- --------------------- ---------------------------- ------------------------- --------------------- ---------------------- ------------------------ ---------------------------- --- ------ ------------------------- -------- ---------------------
3460932968 PDB1 27-AUG-17 1495032 28-AUG-17 1408558 27-AUG-17 ARCHIVELOG 1495035 0 CURRENT 27-AUG-17 2574 1496538 28-AUG-17 NOT ALLOWED 27-AUG-17 READ WRITE MAXIMUM PERFORMANCE MAXIMUM PERFORMANCE ENABLED 3460947145 3460947145 PRIMARY 0 DISABLED NOT ALLOWED DISABLED NONE NO NO NO NO 13 Linux x86 64-bit 3 3 1497050 NO NO NO CDB1A 0 DISABLED 0 NO NO YES 0 NOT APPLICABLE 3460932968 NO

And I have a PDB with the same name:

SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
4 PDB1 MOUNTED
 
SQL> alter pluggable database PDB1 open;
 
Pluggable database PDB1 altered.
 
SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
4 PDB1 READ WRITE NO

What was forbidden, with a misleading error message, turned out to be possible this other way.

So what?

Please do not take this as a solution: there is clearly a problem here. Maybe the documentation and the error message are wrong. Maybe NID has a bug that allows something that should be blocked. Or maybe CREATE PLUGGABLE DATABASE has a bug that blocks something that should be possible. Until this is fixed (an SR is opened), I recommend always keeping the PDB name different from the CDB name, independently of the service names. I would recommend it anyway, because using the same name brings a lot of confusion: when you mention a database name, people will not know whether you are referring to the CDB or the PDB.

 

The article When PDB name conflicts with CDB name appeared first on Blog dbi services.

SystemTap and Oracle RDBMS: Page Faults

Introduction

Now that I am able to aggregate SystemTap probes by Oracle database, let’s focus on page faults.

For this purpose a new SystemTap script (page_faults_per_db.stp) has been created and added to this github repository.

page_faults_per_db

This script tracks the page faults per database. It reports the total number of page faults and splits them into Major or Minor faults as well as Read or Write access.

Usage:

$> stap -g ./page_faults_per_db.stp  

Output example:

$> stap -g ./page_faults_per_db.stp 54321 10000

---------------------------------------------------------------------------------------
DBNAME      : READS_PFLT   WRITES_PFLT  TOTAL_PFLT   MAJOR_PFLT   MINOR_PFLT
---------------------------------------------------------------------------------------
BDT         : 30418        22526        52944        22           52922
NOT_A_DB    : 773          1088         1861         4            1858

Remarks

  • In this post the word database stands for "all the foreground and background processes linked to an Oracle database".
  • In a consolidated environment, having a view per database can be very useful.
  • Should you be interested in this subject, then do read Hatem Mahmoud's post.

Conclusion

We are able to track and group the page faults at the database level.

Behold the full stack developer

Well… sort of.  Well… not really :)

This blog post is really about how with just a little bit of knowledge and patience you can bring several cool technologies and tools together to achieve what you want.

I’m off to OpenWorld at the start of October which will be an awesome event as usual, and in the last few days, the session catalog has been released, detailing the topics and the scheduling for events throughout the OpenWorld conference.

For the typical conference attendee, the schedule builder is perfect for finding the topics you’re interested in and planning out your week.  But for old dinosaurs like myself, my requirements are perhaps different from the norm.  Sure, there will be topics on functionality and features that I’m not familiar with, which I will try to attend so that I can broaden my skill set.  But also, there will be topics I’m already familiar with, but the speaker is a colleague I want to support, or one that I have a lot of respect for.  And sometimes the sessions that colleagues are attending will clash, or two sessions adjacent in time will not be adjacent in location! So for me, it is the combination of topic and speaker and location and scheduled time that drives my decision making when planning out the week (for those brief periods during the conference where as an employee I have time to visit a session!)

And combining speaker and topic into an “at a glance” view using the session catalog is not possible (because I doubt there is much demand for it).  You need to search on a speaker, and then explore topics.  Or you can search on topics and then expand each topic to see the speaker.  I’m simply too lazy for either approach :). In past years, I would use a blank search criteria, then keep scrolling down until all results were shown, then use the “View Source” function in my browser to get all of the raw data, and then use some basic scripting (awk, sed, etc) to get a list of topics and speakers.  This year that technique did not work, because the content is all driven by JavaScript.  So whatever session results are on the screen, the “View Source” simply shows a reference to a stack of JavaScript code and no session content is available.  So I figured I was out of luck.

Then I stumbled upon this blog post by my friend Lucas Jellema from AMIS, where he had encountered the same issue.  Lucas is much more fluent with web site content than I am, and he described using node as a means of pseudo-automatically driving the site via APIs.  If you haven’t read the post, please do – it’s a cracking good read.  Lucas’s script could grab the session catalog detail and store it as a JSON file on a local drive.

So …. task #1.  How to get node running on my laptop?  That was fairly easy – you just download the version you want, unzip it, and you’re good to go.  I then wrote my very first node program.

[screenshot]

Woo hoo! All systems go!  I figured at this point I was ready to be an enterprise node developer :)

I then made some small modifications to Lucas’s script for my own environment, and ran it on my laptop.

[screenshot]

Hmmmm… Not so good.  Maybe I’m not quite ready for enterprise developer status yet :)  So with a little more reading about node modules, npm, and some fresh downloads, I was ready for a second attempt.

[screenshot]

Jackpot !  I now had a JSON file that looked like it contained the details I needed.

[screenshot]

I figured I should now be able to break out the sed and awk as per normal, write up a fresh set of parsing scripts and convert the JSON into a nice simple report for myself.  And then it dawned on me.  I’d spent a good chunk of time this year and last at events talking about all the cool JSON facilities in Oracle 12c, so surely it would make a lot more sense to load that JSON into my database, aka “eat my own dog food” as the expression goes.  And that was remarkably simple to do.  I just created a table with a single CLOB to hold the JSON, and loaded it from the file with some PL/SQL.
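
The load itself can be as simple as the following sketch; the directory object OOW_DIR and the file name sessions.json are made up for the example, and the real code may well differ:

create table oow_sessions ( doc clob check ( doc is json ) );

declare
  src   bfile   := bfilename('OOW_DIR','sessions.json');  -- hypothetical directory object and file
  dst   clob;
  d_off integer := 1;
  s_off integer := 1;
  ctx   integer := dbms_lob.default_lang_ctx;
  warn  integer;
begin
  -- create an empty LOB locator in the table and fill it from the file
  insert into oow_sessions values ( empty_clob() ) returning doc into dst;
  dbms_lob.fileopen(src, dbms_lob.file_readonly);
  dbms_lob.loadclobfromfile(dst, src, dbms_lob.lobmaxsize, d_off, s_off,
                            dbms_lob.default_csid, ctx, warn);
  dbms_lob.fileclose(src);
  commit;
end;
/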

[screenshot]

 

Now that it was loaded into the database, I didn’t have to worry about manually examining the JSON to work out its structure (because this was a 400,000 line JSON document over 7 megabytes in size!).  I just threw JSON_DATAGUIDE at it to get a report on what the JSON structure looked like.
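
Something along these lines, assuming the single-CLOB table sketched above; JSON_DATAGUIDE is an aggregate function that returns a data guide describing the paths and data types found in the documents:

select json_dataguide(doc) from oow_sessions;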

[screenshot]

At that point it was time to head into SQL Developer so that I could do some query experimentation with the JSON_TABLE clause to convert my JSON structure into a simple report.  This took a few iterations because the session catalog JSON contains a lot of cross-references, presumably to make the site navigation simpler.  So speakers had their sessions as “children”, but also sessions had speakers as children, and of course some sessions are actually hands-on labs which repeat over a number of days, etc.  But ultimately, I managed to get the majority of what I needed with a little bit of JSON_TABLE and some analytic functions to weed out duplicates.
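
The query ended up with roughly the following shape. This is only a sketch; the JSON path names used here are invented for illustration, since the real catalog structure is not shown:

select jt.code, jt.title, jt.speaker
from   oow_sessions o,
       json_table( o.doc, '$.sessions[*]'
         columns (
           code    varchar2(20)  path '$.code',
           title   varchar2(200) path '$.title',
           nested path '$.participants[*]'
             columns ( speaker varchar2(100) path '$.name' )
         )
       ) jt;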

[screenshot]

 

Now I had what I wanted – a simple relational view of the OpenWorld session catalog.  So I started adding some basic WHERE clauses to see what my friends Maria and Chris were doing at OpenWorld this year.  And then some predicates on days, times, etc.  And once again, there was a light bulb moment:

“Why not do this with Application Express?”

An interactive report takes about 30 seconds to create and gives me all the filtering facilities with no more queries to write.  Using the cool  /* INSERT */ hint in SQL Developer, it was trivial to create some DML to load my session catalog into a simple relational table.

[screenshot]

Now all I needed was an Apex environment to get my little application created.  And as most people will know, there’s one out there on the internet for everyone to use, and it’s free.  So I requested a fresh workspace on apex.oracle.com and I was off and running.  A few clicks in the App Builder and in the SQL Workshop to load my data and my application was ready.

[screenshot]

 

And there we have it.  My speaker/session cross reference application built with node, JSON, Oracle Database 12c Release 2, SQL Developer, Application Express and of course…the thing that glues it all together:  SQL

Obviously its accuracy is only as good as the time at which I grabbed the data (and we’re still a month away from OpenWorld) and only as good as my JSON parsing capabilities, which is perhaps the larger risk :)  But it will do for me and you’re welcome to explore it as well.

https://apex.oracle.com/pls/apex/f?p=OOW17

Solaris Cluster and Decreasing Resource Needs After Upgrade

Delphix Engineering and Support are pretty amazing folks.  They continue to pursue solutions, no matter how much time it takes, despite the complex challenges of supporting heterogeneous environments, hardware configurations and customer needs.

This post is in support of the effort from our team that resulted in stability for a previously impacted Solaris 11.2 cluster configuration.  The research, patching, testing and the resulting certification from Oracle was a massive undertaking by our team, and I hope this information serves the community, but it is in no way a Delphix recommendation.  It is simply what was done to resolve the problem, after logical decisions about how the system is used by our team.

Challenge

Environment:  Solaris 11.3 (with SRU 17.5) + Oracle 12.2 RAC + ESX 5.5
Situation:
Post an upgrade to 12.2, environments were experiencing significant cluster instability and memory starvation due to the new demands for memory after the upgrade.
Upon inspection, it was found that numerous features required more memory than before and the system simply didn’t have the resources to support it.  As our environment was Solaris with 12.2, there was a documented patch we needed to request from Oracle for RAC performance and node evictions.  The environment was still experiencing node evictions, and the data showed that we’d have to triple the memory on each node to continue using the environment as before.  Our folks aren’t ones to give up that easily, so secondary research was performed to find out whether some of the memory use could be trimmed down.
What we discovered is that what is old can become new again.  My buddy and fellow Oakie Marc Fielding had blogged (with links to other posts, including credit to another Oakie, Jeremy Schneider) about how he’d limited resources back in 2015 after patching to 12.1.0.2, and this post really helped the engineers at Delphix get past the last hump on the environment, even after implementing the patch to address a memory leak.  Much of what you’re going to see here came from that post, focused on its use in a development/test system (Delphix’s sweet spot).

Research

Kernel memory out of control
Starting with kernel memory usage, the mdb -k command can be used to inspect it at a percentage level:
$ echo "::memstat" | mdb -k
Page Summary                Pages                MB  %Tot
------------     ----------------  ----------------  ----
Kernel                     151528              3183   24%
Anon                       185037              1623   12%
...

We can also look at it a second way, breaking down the kernel memory caches with ::kmastat:

::kmastat

cache                        buf    buf    buf    memory     alloc alloc 
name                        size in use  total    in use   succeed  fail 
------------------------- ------ ------ ------ --------- --------- ----- 
kmem_magazine_1               16   3371   3556     57344      3371     0 
kmem_magazine_3               32  16055  16256    524288     16055     0 
kmem_magazine_7               64  29166  29210   1884160     29166     0 
kmem_magazine_15             128   6711   6741    876544      6711     0 
...

Oracle ZFS ARC Cache

Next, Oracle ZFS has a very smart cache layer, also referred to as the ARC (adaptive replacement cache). Both a blessing and a curse, the ARC consumes as much memory as is available, but it is supposed to free memory back to other applications when they need it.  This memory is used to compensate for slow disk I/O.  When inspecting our environment, a significant amount of memory was being over-allocated to the ARC.  This may be due to the newness of Oracle 12.2, but in a cluster, memory starvation is a common cause of node eviction.

We can inspect the size stats for the ARC in the following file:

view /proc/spl/kstat/zfs/arcstats

This assumes the ZFS statistics are exposed under /proc, so your actual arcstats file may reside in a different location than shown above.  Inside the file, review the following values:

  • c is the target size of the ARC in bytes
  • c_max is the maximum size of the ARC in bytes
  • size is the current size of the ARC in bytes

Ours was eating up everything that was left, taking 100% of the remaining memory, as we’ll discuss in the next section of this post.

Oracle Clusterware Memory

The Oracle Clusterware is a third area that was investigated for frivolous memory usage that could be trimmed down.  There are clearly documented steps from Oracle for investigating misconfigurations and feature issues that can assist in identifying many of these.

So, post upgrade and patching, what can you do to trim down memory usage to avoid memory upgrades to support the cluster upgrade?

Changes

From the list of features and installations that weren’t offering a benefit to a development/test environment, these are what made the list and why:
Updates were made to the /etc/system file (this requires a reboot and must be performed as root):
  • Added set user_reserve_hint_pct=80
    • This change limits how much memory ZFS can use for the ARC cache.  There was a significant issue for the customer when CRS processes weren’t able to allocate memory.  80% was the highest value this could be set to without triggering a node reboot, something we all prefer to avoid.
  • Stopped the Cluster Health Monitor (CHM) process.  This is a background process in 12c Clusterware that collects workload data, which is significantly more valuable in a production environment; in development and test it can easily be a drain on CPU and memory that could be better put to use for more virtual databases.
  •  To perform this, the following commands were used as the root user:
$ crsctl stop res ora.crf -init
$ crsctl delete res ora.crf -init
  • Removed the Trace File Analyzer Collector (tfactl).  This background process collects the many trace files Oracle generates into a single location.  Handy for troubleshooting, but it is Java-based, has a significant memory footprint and is subject to Java heap issues.
  • It was uninstalled with the following command as the $ORACLE_HOME owner on each node of the cluster:
$ tfactl uninstall
  • Engineering stopped and disabled the Cluster Verification Utility (CVU).  In previous versions this was a utility that an administrator could run manually, at installation time or afterwards, to troubleshoot issues.  This is another feature that simply eats up resources that could be reallocated to dev and test environments, so it was time to stop and disable it with the following:
$ srvctl cvu stop
$ srvctl cvu disable

Additional Changes

  • Reduced memory allocation for the ASM instance.
    • The ASM instance in 12.2 now uses 1GB of memory, where previously it used 256MB.  That’s a huge change that can impact other features dependent on that memory.
    • Upon research, it was found that 750MB was adequate, so if more memory needs to be reclaimed, consider lowering the memory target on each node to 750MB.
  • To perform this instance-level parameter change, run the following on any of the nodes, then restart each node in turn until the whole cluster has been cycled and the change is in effect:
$ export ORACLE_HOME=

$ export ORACLE_SID=

$ sqlplus / as sysasm
alter system set "_asm_allow_small_memory_target"=true scope=spfile;
alter system set memory_target=750m scope=spfile;
alter system set memory_max_target=750m scope=spfile;

Features with high CPU usage can be troubling for most DBAs, but when they show up on development and test databases, which are often granted fewer resources than production to begin with, a change can often enhance the stability and longevity of these environments.

  • Disabled high-res time ticks in all databases, including ASM instances, regular databases, and the Grid Infrastructure Management Repository database (GIMR, SID -MGMTDB).  High-res ticks are a new feature in 12c, and they seem to cause a lot of CPU usage in cluster time-keeping background processes like VKTM.  Here’s the SQL to disable high-res ticks (it must be run once in each database):
alter system set "_disable_highres_ticks"=TRUE scope=spfile;
The team, after all these changes, found the Solaris kernel was still consuming more memory than before the upgrade, but it was more justifiable:
  • Solaris Kernel: 1GB of RAM
  • ARC Cache: between 1-2GB
  • Oracle Clusterware: 3GB

Memory Upgrade

We did add memory, but not as much as originally expected.
After all the adjustments, we were still using over 5GB of memory for these three areas, so we upped each node from 8GB to 16GB to ensure enough resources to support all dev and test demands post-upgrade.  We wanted to provision as many virtual databases (VDBs) as the development and test groups needed, so having more than 3GB free for databases was going to be required!
The Solaris cluster has, as of this time, experienced no more kernel panics, node evictions or unexpected reboots, which we have to admit is the most important outcome.  It’s more difficult to explain an outage to users than to explain why we shut down and uninstalled unused Oracle features…

get_tab2.sql - Free Tool to show Privileges on an Object Updated

I have a core set of PL/SQL scripts that I use when conducting Oracle security work on customer sites. Most of these have been available on this website for many years. One of them is my script get_tab2.sql, which shows grants....[Read More]

Posted by Pete On 30/08/17 At 12:11 PM

Presenting at UKOUG Tech17 Conference in December (Here Come The Warm Jets)

I presented at the UKOUG Tech14 conference in Liverpool and thoroughly enjoyed the experience. It was one of the better User Group conferences I’ve attended and winning both the Best New Oracle Speaker and Best Oracle Speaker awards made it that bit more special. So it’s with real excitement that I have the opportunity to […]

What Are NULL pname entries in v$process?

I got a message on LinkedIn today from Jijo, who asked why, when he queries v$process, some of the PNAME column values are NULL. I have a simple script vproc.sql that I have used when analysing databases for many years....[Read More]

Posted by Pete On 29/08/17 At 02:35 PM

Hadoop for Database Professionals – St. Louis (7. Sep)

Here’s some more free stuff by Gluent!

We are running another half-day course together with Cloudera, this time in St. Louis on 7. September 2017.

We will use our database background and explain, in database professionals' terminology, why “new world” technologies like Hadoop will take over some parts of enterprise IT, why those platforms are so much better for advanced analytics over big datasets, and how to use the right tool from the Hadoop ecosystem to solve the right problem.

More information below. See you there!

Hadoop for Database Professionals – St. Louis

Also, Michael Rainey will deliver a SQL-on-Hadoop overview session in Portland, OR on 6. Sep 2017

NWOUG Portland Training Day 2017

 

 

NB! If you want to move to the "New World" - offload your data and workloads to Hadoop, without having to re-write your existing applications - check out Gluent. We are making history! ;-)

12c Statistics on load–special cases

One of the cool features in 12c is the automatic collection of optimizer statistics when a table is either created or loaded via direct path from empty.  This makes a lot of sense because it saves us from what used to be the mandatory second step of gathering statistics whenever we loaded an empty table with data.

For example

11.2.0.4


SQL> create table t1 as
  2  select * from dba_objects;

Table created.

SQL>
SQL> select num_rows
  2  from user_tables
  3  where table_name = 'T1';

  NUM_ROWS
----------
(null)

SQL> select count(*) from t1;

  COUNT(*)
----------
     86802

So at this point, we’d then have to scan the table we just loaded with a DBMS_STATS call in order to come up with some representative statistics. This was fixed in 12c with “statistics on load”.

12c 


SQL> create table t1 as
  2  select * from dba_objects;

Table created.

SQL> -- no dbms_stats call
SQL> select num_rows
  2  from user_tables
  3  where table_name = 'T1';

  NUM_ROWS
----------
     78170

But this facility has been blogged about extensively, so I won’t labour the point. This post is more about a couple of things to be aware of when taking advantage of the facility.

Partitioned tables

The 12c feature is for statistics on load of the table. So if you are creating/loading a partitioned table, whilst you will still get statistics collected, they are at the table level only. For example,


SQL> create table t1
  2  partition by range ( object_id )
  3  interval ( 20000 )
  4  (
  5    partition p1 values less than ( 20000 ),
  6    partition p2 values less than ( 200000 )
  7  )
  8  as select d.* from dba_objects d
  9  where object_id is not null;

Table created.

SQL>
SQL> select num_rows
  2  from user_tables
  3  where table_name = 'T1';

  NUM_ROWS
----------
     78165

SQL>
SQL> select partition_name, num_rows
  2  from user_tab_partitions
  3  where table_name = 'T1';

PARTITION_   NUM_ROWS
---------- ----------
P1         (null)
P2         (null)

Notice that the partition-level statistics are not collected. Interestingly, indexes will be fine whether they are created during or after the load.


--
-- indexes created after the load on the table above
--
SQL> create index ix1 on t1 ( object_id ) local;

Index created.

SQL> create index ix2 on t1 ( object_name);

Index created.

SQL>
SQL> select index_name, leaf_blocks
  2  from user_indexes
  3  where index_name in ( 'IX1','IX2');

INDEX_NAME                     LEAF_BLOCKS
------------------------------ -----------
IX1                                    173
IX2                                    535

2 rows selected.

SQL>
SQL> select index_name, partition_name, leaf_blocks
  2  from user_ind_partitions
  3  where index_name in ( 'IX1','IX2');

INDEX_NAME                     PARTITION_ LEAF_BLOCKS
------------------------------ ---------- -----------
IX1                            P1                  43
IX1                            P2                 130

--
-- indexes created as part of the table creation process
--
SQL> create table t1 (
  2     owner
  3    ,object_name
  4    ,subobject_name
  5    ,object_id
  6    ,data_object_id
  7    ,object_type
  8    ,created
  9    ,last_ddl_time
 10    ,timestamp
 11    ,status
 12    ,temporary
 13    ,generated
 14    ,secondary
 15    ,namespace
 16    ,edition_name
 17    ,sharing
 18    ,editionable
 19    ,oracle_maintained
 20    ,application
 21    ,default_collation
 22    ,duplicated
 23    ,sharded
 24    ,created_appid
 25    ,created_vsnid
 26    ,modified_appid
 27    ,modified_vsnid
 28    ,constraint pk primary key ( object_id ) using index local
 29    ,constraint uq unique ( owner, object_id ) using index
 30  )
 31  partition by range ( object_id )
 32  interval ( 20000 )
 33  (
 34    partition p1 values less than ( 20000 ),
 35    partition p2 values less than ( 200000 )
 36  )
 37  as select d.* from dba_objects d
 38  where object_id is not null;

Table created.

SQL>
SQL>
SQL> select index_name, leaf_blocks
  2  from user_indexes
  3  where index_name in ('PK','UQ');

INDEX_NAME                     LEAF_BLOCKS
------------------------------ -----------
PK                                     163
UQ                                     263

2 rows selected.

SQL>
SQL> select index_name, partition_name, leaf_blocks
  2  from user_ind_partitions
  3  where index_name in ('PK','UQ');

INDEX_NAME                     PARTITION_ LEAF_BLOCKS
------------------------------ ---------- -----------
PK                             P1                  41
PK                             P2                 122

2 rows selected.
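
Coming back to the missing partition-level table statistics: if you need them after such a load, a follow-up DBMS_STATS call scoped to partitions fills the gap, for example:

SQL> exec dbms_stats.gather_table_stats(user,'T1',granularity=>'PARTITION')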

CTAS empty is still OK

A common trick for creating an empty table whilst copying the definition of an existing table is to use a predicate with an always false condition. For example,


SQL> create table t1 as
  2  select * from dba_objects
  3  where 1=0;

Table created.

Statistics on load will still take place for such a table, i.e.


SQL> select num_rows
  2  from user_tables
  3  where table_name = 'T1';

  NUM_ROWS
----------
         0

Now you might be worried that, since there are now statistics on the table, a direct load using INSERT /*+ APPEND */, which would normally perform statistics on load, will no longer do so. But there is no need to panic. We’ll still determine that the table is empty, and do another statistics-on-load operation when you perform that direct load.


SQL> insert /*+ APPEND */ into t1
  2  select * from dba_objects;

78171 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> select num_rows
  2  from user_tables
  3  where table_name = 'T1';

  NUM_ROWS
----------
     78171

But this also has implications if you want to retain some existing statistics on the table, because truncating the table re-arms that same statistics-on-load behaviour, and the next direct load will overwrite the statistics you had.


SQL> create table t1 as
  2  select * from dba_objects
  3  where rownum <= 10;

Table created.

SQL>
SQL> select num_rows
  2  from user_tables
  3  where table_name = 'T1';

  NUM_ROWS
----------
        10

SQL>
SQL> truncate table t1;

Table truncated.

SQL>
SQL> insert /*+ APPEND */ into t1
  2  select * from dba_objects;

78171 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> select num_rows
  2  from user_tables
  3  where table_name = 'T1';

  NUM_ROWS
----------
     78171

Similarly, all it takes is one transaction that “brings the table to life” for automatic statistics collection to no longer be active, even if that transaction rolls back. For example:


SQL> create table t1 as
  2  select * from dba_objects
  3  where 1=0;

Table created.

SQL> select num_rows
  2  from user_tables
  3  where table_name = 'T1';

  NUM_ROWS
----------
         0

SQL> -- a normal mode insert of a row
SQL> insert into t1
  2  select * from dba_objects where rownum = 1;

1 row created.

SQL>
SQL> rollback;

Rollback complete.

SQL>
SQL> insert /*+ APPEND */ into t1
  2  select * from dba_objects;

78171 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> select num_rows
  2  from user_tables
  3  where table_name = 'T1';

  NUM_ROWS
----------
         0

All of these behaviours are pretty much how I would expect them to work (your opinion may differ of course :) ), and I know that some people have opted to disable the automatic collection altogether to avoid any ambiguity, but that strikes me as overkill.

If you are really concerned about it, it is easy enough to suffix your load routines with a simple wrapper that checks the statistics and issues the appropriate DBMS_STATS call to make up the shortfall.  And it would be remiss of me not to mention the latest optimizer white paper, which covers in detail other optimizer enhancements in 12c.
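
A minimal sketch of such a wrapper is shown below; the procedure name and the "no statistics means gather" rule are just an illustration, to be adapted to your own loading conventions:

create or replace procedure ensure_stats( p_owner varchar2, p_table varchar2 ) as
  l_num_rows all_tables.num_rows%type;
begin
  select num_rows into l_num_rows
  from   all_tables
  where  owner = p_owner
  and    table_name = p_table;

  -- if the load did not leave usable statistics behind, gather them now
  if l_num_rows is null or l_num_rows = 0 then
    dbms_stats.gather_table_stats( ownname => p_owner, tabname => p_table );
  end if;
end;
/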

Happy stats collecting !

Postgres vs. Oracle access paths XI – Sample Scan

I was going to end this series with the previous post because the last access path available in Postgres is a bit special: a Seq Scan that returns only a sample of the rows, at random. However, it is the occasion to come back to the difference between random and sequential reads.

I’m still working on the same table as in the previous posts, with 10000 rows in 1429 pages. 5% of rows is 500 rows and 5% of blocks is about 72 pages.

Rows

Sometimes you can answer your business question on a sample of rows, when you need an approximate result, a trend or a pattern. Let’s say that you want to sum() over only 5 percent of the rows:
explain (analyze,verbose,costs,buffers) select sum(a) from demo1 tablesample bernoulli(5) ;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
Aggregate (cost=1435.25..1435.26 rows=1 width=8) (actual time=1.940..1.940 rows=1 loops=1)
Output: sum(a)
Buffers: shared hit=1429
-> Sample Scan on public.demo1 (cost=0.00..1434.00 rows=500 width=4) (actual time=0.007..1.890 rows=509 loops=1)
Output: n, a, x
Sampling: bernoulli ('5'::real)
Buffers: shared hit=1429
Planning time: 0.373 ms
Execution time: 1.956 ms

This row sampling reads all rows and picks a sample of them at random. Unfortunately, it still reads all blocks, because you cannot get a good sample if you don’t know how many rows there are in each block. Working on a sample can make sense if you want to apply complex operations on the result. Here the cost in the database is similar to a Seq Scan: 1429 blocks read at seq_page_cost=1, plus 500 tuples from the scan and 1 tuple for the result at cpu_tuple_cost=0.01, plus the sum() applied to 500 rows at cpu_operator_cost=0.0025.
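
Putting those numbers together as a quick sanity check (assuming the default planner cost parameters): 1429×1 + 501×0.01 + 500×0.0025 = 1429 + 5.01 + 1.25 = 1435.26, which matches the cost of the Aggregate node above.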

From the execution statistics, you can see that the result is close to what we asked for: 509 rows were returned for a requested 5% of 10000 rows.

Oracle has a different syntax and different algorithm:
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 1tsadjdd9ddam, child number 0
-------------------------------------
select /*+ */ sum(a) from demo1 sample(5)
-----------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
-----------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 397 (100)| 1 |00:00:00.01 | 581 |
| 1 | SORT AGGREGATE | | 1 | 1 | | 1 |00:00:00.01 | 581 |
| 2 | TABLE ACCESS SAMPLE| DEMO1 | 1 | 500 | 397 (0)| 478 |00:00:00.01 | 581 |
-----------------------------------------------------------------------------------------------------
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - (#keys=0) SUM("A")[22]
2 - (rowset=256) "A"[NUMBER,22]

Here we have not read all the blocks, only about 40% of them. This is faster than the Postgres approach, but the drawback is that the result is not exact: 478 rows were returned here.

Blocks

When we can afford an approximate sampling, we can sample on blocks rather than on rows:
explain (analyze,verbose,costs,buffers) select sum(a) from demo1 tablesample system(5) ;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------
Aggregate (cost=290.25..290.26 rows=1 width=8) (actual time=0.479..0.480 rows=1 loops=1)
Output: sum(a)
Buffers: shared hit=73
-> Sample Scan on public.demo1 (cost=0.00..289.00 rows=500 width=4) (actual time=0.016..0.377 rows=511 loops=1)
Output: n, a, x
Sampling: system ('5'::real)
Buffers: shared hit=73
Planning time: 0.698 ms
Execution time: 0.509 ms

The number of rows is still close to the target here, but the result may depend on which blocks are sampled. Only 73 blocks were read, which is about 5%, and of course the rows may be distributed differently within the blocks. The advantage is that it is faster because it reads fewer blocks. But those blocks being picked at random, they are by definition random reads: 71 pages read at random_page_cost=4 and, as in the previous case, 501 tuples at cpu_tuple_cost and 500 operations at cpu_operator_cost.
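
Again as a rough check with the default cost parameters: 71×4 + 501×0.01 + 500×0.0025 = 284 + 5.01 + 1.25 = 290.26, which matches the cost of the Aggregate node above.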

With block sampling, Oracle reads fewer blocks than with row sampling, but still more than 5% of them, and the number of rows is not exact: 798 rows here:
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID fqgbwqfavgdrn, child number 0
-------------------------------------
select /*+ */ sum(a) from demo1 sample block(5)
-----------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
-----------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 22 (100)| 1 |00:00:00.01 | 134 |
| 1 | SORT AGGREGATE | | 1 | 1 | | 1 |00:00:00.01 | 134 |
| 2 | TABLE ACCESS SAMPLE| DEMO1 | 1 | 500 | 22 (0)| 798 |00:00:00.01 | 134 |
-----------------------------------------------------------------------------------------------------
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - (#keys=0) SUM("A")[22]
2 - (rowset=256) "A"[NUMBER,22]

Again, as with the previous access paths: same concepts, different implementation between Postgres and Oracle. Everything looks similar and easily portable from a high-level overview, but going into the details you see all those little differences which make it not so easy to be database agnostic or easily portable.

Summary

This is the end of this series comparing Postgres access path with Oracle ones. The goal is not to tell you that one is better than the other. They have a different approach, different targets, different price, different history and probably future. But understanding how they work and how they estimate the cost is a good way to learn. I’m myself learning a lot about Postgres while writing those posts, matching things I discover on Postgres with those I know for a while in Oracle.

Here is the list of posts on Access Path:

  1. Postgres vs. Oracle access paths – intro
  2. Postgres vs. Oracle access paths I – Seq Scan
  3. Postgres vs. Oracle access paths II – Index Only Scan
  4. Postgres vs. Oracle access paths III – Partial Index
  5. Postgres vs. Oracle access paths IV – Order By and Index
  6. Postgres vs. Oracle access paths V – FIRST ROWS and MIN/MAX
  7. Postgres vs. Oracle access paths VI – Index Scan
  8. Postgres vs. Oracle access paths VII – Bitmap Index Scan
  9. Postgres vs. Oracle access paths VIII – Index Scan and Filter
  10. Postgres vs. Oracle access paths IX – Tid Scan
  11. Postgres vs. Oracle access paths X – Update
  12. Postgres vs. Oracle access paths XI – Sample Scan

I think my next series will be on Join methods.

 

The article Postgres vs. Oracle access paths XI – Sample Scan appeared first on Blog dbi services.