Oracle

Solaris Cluster and Decreasing Resource Needs After Upgrade

Delphix Engineering and Support are pretty amazing folks.  They continue to pursue solutions, no matter how much time it takes, despite the complex challenges of supporting heterogeneous environments, hardware configurations and customer needs.

This post is in support of our team’s effort that restored stability to a previously impacted Solaris 11.2 cluster configuration.  The research, patching, testing and the resulting certification from Oracle were a massive undertaking, and I hope this information serves the community, but it is in no way a recommendation by Delphix.  It’s simply what was done to resolve the problem, after logical decisions by our team about the use of the system.

Hadoop for Database Professionals – St. Louis (Sep 7)

Here’s some more free stuff from Gluent!

We are running another half-day course together with Cloudera, this time in St. Louis on September 7, 2017.

We will use our database background to explain, in database professionals’ terms, why “new world” technologies like Hadoop will take over some parts of enterprise IT, why those platforms are so much better for advanced analytics over big datasets, and how to pick the right tool from the Hadoop ecosystem for the right problem.

More information below. See you there!

Hadoop for Database Professionals – St. Louis

Postgres vs. Oracle access paths XI – Sample Scan

I was going to end this series with the previous post, because the last access path available in Postgres is a bit special: a Seq Scan that returns only a random sample of the rows. However, it is a good occasion to come back to the difference between random and sequential reads.

I’m still working on the same table as in the previous posts, with 10000 rows in 1429 pages. 5% of rows is 500 rows and 5% of blocks is about 72 pages.
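
As a rough sketch, assuming the same demo1 table, the two Postgres sampling methods map to those two percentages (BERNOULLI samples rows, SYSTEM samples blocks):

explain select * from demo1 tablesample bernoulli(5);
explain select * from demo1 tablesample system(5);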

Postgres vs. Oracle access paths X – Update

In the previous post we saw the cheapest way to get one row: reading only one block, from its physical location. But that is the optimal case, where the row has not moved. I’ll (nearly) conclude this series about access paths with an update.
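
A minimal sketch of the kind of statement examined, assuming the series’ demo1 table and its indexed column n (EXPLAIN ANALYZE executes the update, hence the transaction):

begin;
explain (analyze,buffers) update demo1 set n = n where n = 1000;
rollback;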

Archaic Data Transfer – A DBA Rant

There was a great post by Noel Yuhanna on how he estimates the number of DBAs required in a database environment by the size and number of databases.  This staffing challenge has data platform vendors searching for ways to remove the roadblock and eliminate the skills needed to manage the database tier.

Postgres vs. Oracle access paths IX – Tid Scan

In the previous post we saw how Postgres and Oracle find the table row from the index entry, using the TID / ROWID. I’ll focus on this access path, and with it I will have covered all Postgres access paths to table data.

Oracle ACCESS BY ROWID

I start with Oracle because we have already seen the TABLE ACCESS BY ROWID. I’ll decompose an index access to the table. The first step is getting the ROWID from the index entry:

SQL> select /*+ */ rowid from demo1 where n=1000;
 
ROWID
------------------
AAASPkAAMAAABIaAAF
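
The second step, sketched here with the ROWID literal returned above, is the table access itself:

SQL> select * from demo1 where rowid='AAASPkAAMAAABIaAAF';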

Improving Statspack Experience

A few months ago I published an article in the UKOUG OracleScene magazine on Improving Statspack Experience: quick setup script, changing settings, filling Idle Events, etc. In the article, I used dbms_job to schedule the snapshots, because I have had this script for years and never took the time to redo it with dbms_scheduler. Thanks to Nicolas Jardot, here is the equivalent script using dbms_scheduler.

The idea is to have a script to run on each instance (when in RAC) in order to have a job calling statspack.snap and statspack.purge on each instance.

DECLARE
  instno   NUMBER;        -- current instance number (for RAC)
  snapjob  VARCHAR2(30);  -- name of the job calling statspack.snap
  purgejob VARCHAR2(30);  -- name of the job calling statspack.purge
BEGIN
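
For context, here is a minimal sketch of the kind of dbms_scheduler call such a script makes; the job name and the hourly interval are my assumptions, not the article’s exact script:

BEGIN
  dbms_scheduler.create_job(
    job_name        => 'STATSPACK_SNAP',             -- hypothetical name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin statspack.snap; end;', -- take one snapshot
    repeat_interval => 'FREQ=HOURLY',                -- assumed schedule
    enabled         => TRUE);
END;
/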

The Security State of the Nation

Even though my social media profile is pretty public on Twitter and LinkedIn, I’m significantly more conservative with other personal and financial data online.  The reversal of the Internet Privacy Rule, (I’ve linked to a Fox News article, as there was so much negative coverage of this one…) had everyone pretty frustrated, but it also forces us to look at the security of personal information, especially financial data, and as the security breaches so far in 2017 show, we all have reason to be concerned.

Bequeath connect to PDB: set container in logon trigger?

There are a few changes when you move to the multitenant architecture, and one of them is that you must connect with a service name: you cannot connect directly to a PDB with a bequeath (aka local) connection. This post is about a workaround you may have in mind: create a common user and set a logon trigger to ‘set container’. I do not recommend it, and you should really connect with a service. Here is an example.

Imagine that I have a user connecting with a bequeath connection to a non-CDB, using user/password without a connection string, the database being determined by ORACLE_SID. And I want to migrate to a CDB without changing anything in the client connection configuration. The best idea would be to use a service, explicitly or implicitly with TWO_TASK or LOCAL. But let’s imagine that you don’t want to change anything on the client side.
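
As a rough sketch of that workaround, with hypothetical user, trigger and PDB names:

create user C##USER identified by oracle container=all;
grant create session, set container to C##USER container=all;
create or replace trigger SET_PDB_AT_LOGON after logon on database
begin
  if sys_context('userenv','session_user')='C##USER' then
    execute immediate 'alter session set container=PDB1';
  end if;
end;
/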

Get trace file from server to client

The old way to get a user dump trace file, for sql_trace (10046), Optimizer compilation trace (10053), lock trace (10704), Optimizer execution trace (10507), etc., is to go to the server trace directory. But if you don’t have access to the server (as in the ☁) the modern (12cR2) way is to select from V$DIAG_TRACE_FILE_CONTENTS. Before everybody is on 12.2, I’m sharing here a sqlplus script that I have used for a long time to get the trace file to the client.
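
Until everyone is there, a minimal sketch of the 12.2 idea itself (not the script from the post; the trace file name is a placeholder):

set pagesize 0 linesize 4000 trimspool on heading off feedback off
spool last.trc
select payload from v$diag_trace_file_contents
 where trace_filename='ORCL_ora_12345.trc';
spool off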