October 2017

Dbvisit replicate – SQL Developer chart

Here is a quick SQL Developer report which displays a chart of the Dbvisit replicate lag over the last few hours.

The idea is to have the following chart showing the lag in MINE and APPLY processes. Here is an example where I stopped the replication to show some lag.
[Screenshot: Dbvisit replicate lag chart in SQL Developer]

The query runs against DBVREP.DBRSCOMMON_LAG_STATS on the APPLY side, which displays the wallclock time together with the timestamps from MINE and from APPLY.
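
As a hedged sketch only (the column names wallclock_date, mine_date and apply_date are assumptions, not the table's documented definition), the underlying report query can be along these lines:

-- Sketch: lag in minutes for MINE and APPLY over the last 8 hours
-- Column names are assumed; check the actual DBVREP.DBRSCOMMON_LAG_STATS definition
-- Assumes DATE columns; TIMESTAMP columns would need interval handling instead
SELECT wallclock_date,
       (wallclock_date - mine_date)  * 24 * 60 AS mine_lag_minutes,
       (wallclock_date - apply_date) * 24 * 60 AS apply_lag_minutes
FROM   dbvrep.dbrscommon_lag_stats
WHERE  wallclock_date > SYSDATE - 8/24
ORDER  BY wallclock_date;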

Here is the SQL Developer report .xml:

Automating Stored Procedures When Refreshing Virtual SQL Server Databases

So… you’re using a Delphix virtualization engine to provision SQL Server virtual databases (VDBs), but you’re finding that your SQL account passwords are being reset to those used in production after every refresh?  Likewise, that all of the settings in the application supported by the database are reverted as well?

Provisioning a new virtual database is easy.  Go into the Delphix administrative console, find the dSource you want, find the snapshot for the point-in-time you want on that dSource, click the Provision button, specify the database server where the new database should reside, and hey presto!  In 10 minutes or less, a new virtual database is ready for use.

The APPROX_MEDIAN Function – A Test Case

The aim of this post is not to explain how the APPROX_MEDIAN function works (you can find basic information in the documentation) but to show you the results of a test case I ran to assess how well it works.

Here’s what I did…

I started an instance of version 12.2 in the Oracle Database Public Cloud.

Then I created a table with several numerical columns (the name of each column shows how many distinct values it contains), loaded 150 million rows into it (the size of the segment is 20 GB), and gathered the object statistics.
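
To make the comparison concrete, here is a minimal sketch of the kind of query such a test exercises; the table name t and the column name n_1000 are placeholders for the schema described above, not the author's actual names:

-- Placeholder names: t is the 150-million-row test table, n_1000 a column with 1000 distinct values
SELECT MEDIAN(n_1000)        AS exact_median,
       APPROX_MEDIAN(n_1000) AS approx_median
FROM   t;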

Got published in the Red Stack Magazine

The November 2017 issue of the Red Stack Magazine published by DOAG and AOUG contains one of my articles about Oracle 12c Multitenant.

nVision Performance Tuning: 4. Partitioning of Ledger, Ledger Budget, and Summary Ledger Tables

This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

Note: Partitioning is a licensed option in the Oracle RDBMS and is only available with Enterprise Edition.

nVision queries always contain single-value predicates on LEDGER and FISCAL_YEAR.  They will also always have either a single-value predicate or a range predicate on ACCOUNTING_PERIOD.  Therefore, partitioning the ledger tables on these columns is an effective way to cut down the data to be processed by the query as early as possible.

SELECT … SUM(A.POSTED_BASE_AMT) 
FROM PS_LEDGER A, …
WHERE A.LEDGER='ACTUALS'
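
For illustration only (this is a hedged sketch, not the DDL from this series; the column list is truncated and the partition boundaries are assumptions), composite partitioning along those lines could look like this:

-- Sketch: range partition by FISCAL_YEAR, list sub-partition by ACCOUNTING_PERIOD
CREATE TABLE ps_ledger
( business_unit     VARCHAR2(5)  NOT NULL
, ledger            VARCHAR2(10) NOT NULL
, account           VARCHAR2(10) NOT NULL
, fiscal_year       NUMBER       NOT NULL
, accounting_period NUMBER       NOT NULL
, posted_base_amt   NUMBER
-- remaining ledger columns omitted from this sketch
)
PARTITION BY RANGE (fiscal_year)
SUBPARTITION BY LIST (accounting_period)
SUBPARTITION TEMPLATE
( SUBPARTITION ap_bf  VALUES (0)           -- balance forward
, SUBPARTITION ap_01  VALUES (1)
, SUBPARTITION ap_02  VALUES (2)
-- one sub-partition per remaining accounting period
, SUBPARTITION ap_adj VALUES (998, 999)    -- adjustment periods
)
( PARTITION ledger_2016 VALUES LESS THAN (2017)
, PARTITION ledger_2017 VALUES LESS THAN (2018)
);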

Visualize your STATSPACK reports with Python Plotly

If you work as a consultant, getting remote access to a customer's systems is sometimes a nightmare!
Even when you do get it, there are times when you just want to get your job done and put this nightmare behind you.

For example, there are times when it is easier to dump a statspack repository and do the analysis offline. Unfortunately, that is not always possible. In those circumstances, I ask my customer to generate the appropriate set of reports and send them to me for later analysis. If you know the set of snapshot ids you want to analyze, you can use the following simple script to extract a set of statspack reports:
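
As a minimal SQL*Plus sketch of that idea (not the author's actual script; the snapshot ids and report names below are placeholders), pre-defining the substitution variables lets spreport.sql run without prompting:

-- Sketch: generate STATSPACK reports for consecutive snapshot pairs non-interactively
define begin_snap=100
define end_snap=101
define report_name=sp_100_101.txt
@?/rdbms/admin/spreport.sql

define begin_snap=101
define end_snap=102
define report_name=sp_101_102.txt
@?/rdbms/admin/spreport.sql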

Hadoop for Database Professionals class at NoCOUG Fall Conference on 9th Nov

If you happen to be in the Bay Area on Thursday 9th November, then come check out the NoCOUG Fall Conference at California State University in downtown Oakland, CA.

Gluent is delivering a Hadoop for Database Professionals class as a separate track there (with Michael Rainey and me as speakers) where we’ll explain the basics & concepts of modern distributed data processing platforms and then show a bunch of Hadoop demos too (mostly SQL-on-Hadoop stuff that database folks care about).

Max PDBs in Standard Edition

Here is a small warning. In Standard Edition, you may expect that features which are not available in your edition are blocked, but in 12.2 you can create more than one PDB in Standard Edition, and you should set MAX_PDBS to prevent that.
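
A minimal sketch of that setting, assuming you want to cap the CDB at a single user-created PDB:

-- MAX_PDBS is a 12.2 initialization parameter; 1 keeps Standard Edition within its limit
ALTER SYSTEM SET MAX_PDBS=1 SCOPE=BOTH;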

12cR1

In 12.1 Standard Edition, when you try to create more than one PDB (i.e. with CON_ID > 3) you get an error:

ORA-65010: maximum number of pluggable databases created

12cR2

But it seems that this has been lost in 12.2:

nVision Performance Tuning: 3. Indexing of Ledger, Budget, and Summary Ledger Tables on Non-Engineered Oracle Systems

This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

If you are on a conventional non-engineered Oracle system, then it is not going to be viable to full scan the ledger tables for every query.  You are going to have to use indexes to find your data.  Every customer's nVision reports are unique to that customer and the nature of their business.  Different customers will analyse their data by different combinations of attributes.  Indexes will be needed to match those analysis criteria.
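
As a hedged illustration only (the trailing columns are an assumption about one customer's analysis criteria, not a recommendation from this series), such an index could look like this:

-- Sketch: leading columns match the constant predicates, trailing columns one set of analysis attributes
CREATE INDEX ps_ledger_nv1 ON ps_ledger
( ledger, fiscal_year, accounting_period
, account, deptid, business_unit )
COMPRESS 3;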