DBMS_CLOUD Package – A Reference Guide

Appendix A of the Using Oracle Autonomous Data Warehouse Cloud guide describes the DBMS_CLOUD package. Unfortunately, it documents only a subset of the subroutines, and for some of them the description could also be enhanced. Therefore, while I was testing all the subroutines the DBMS_CLOUD package provides, I took a number of notes. By the end of my tests, I had what I can call my personal reference guide to the package. Since it might help others, here it is…
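
To give a flavour of what such a reference covers, here is one of the documented subroutines in action (a minimal sketch; the credential name and values are made-up placeholders):

begin
  -- create a credential object that the other DBMS_CLOUD subroutines
  -- use to authenticate against the object store
  dbms_cloud.create_credential(
    credential_name => 'MY_CRED',     -- placeholder name
    username        => 'cloud_user',  -- placeholder object store user
    password        => 'auth_token'   -- placeholder auth token
  );
end;
/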

Dumping SGA to read encrypted blocks

After my last article AMM vs security, Martin Berger wrote to me:

well,
even without AMM you can do it:
write your own process which attaches to the same shm segments – and use its memory mapping (?)

My response was that it is also possible with ASMM, but AMM makes it extremely easy, because with AMM you can treat memory as regular binary files.

Today I want to show you how to dump blocks from an SGA configured with ASMM, to get at encrypted data that is also protected by Oracle Database Vault. To set up the environment I will use examples from a previous blog post.
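
As a reminder of the two configurations (a minimal sketch; the sizes are arbitrary, and you would set one variant or the other, not both):

-- AMM: with memory_target set, the SGA granules on Linux are backed by
-- files under /dev/shm, readable like regular binary files
alter system set memory_target = 4G scope = spfile;
-- ASMM: with memory_target at 0 and sga_target set, the SGA lives in
-- System V shared memory segments instead (visible with ipcs -m)
alter system set memory_target = 0 scope = spfile;
alter system set sga_target = 4G scope = spfile;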

Validate FK

A comment arrived yesterday on an earlier posting about an enhancement to the truncate command in 12c, raising the topic of what Oracle might do to validate a foreign key constraint. Despite being sure I had the answer written down somewhere (maybe on a client site or in a report to a client), I couldn’t find anything I’d published about it, so I ran up a quick demo script to show that all Oracle does is construct a simple SQL statement that will check the data – and then do whatever the optimizer does to produce the fastest possible plan.

Here’s the script – with a few variations to show what happens if you start tweaking features to change the plan.
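
The full script is in the original post; a minimal sketch of its shape (the table, column and constraint names here are made up for illustration):

-- a parent/child pair with an unvalidated foreign key
create table parent as
select rownum id, rpad('x',100) padding
from dual connect by level <= 10000;

alter table parent add constraint par_pk primary key (id);

create table child as
select mod(rownum,10000) + 1 id_p, rownum id, rpad('x',100) padding
from dual connect by level <= 100000;

alter table child add constraint chi_fk_par
  foreign key (id_p) references parent
  enable novalidate;

-- tracing this step (e.g. with event 10046) exposes the recursive SQL
-- that Oracle constructs to check the data
alter table child modify constraint chi_fk_par validate;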

Data Guard: always set db_create_file_dest on the standby

The file name convert parameters are not dynamic and require a restart of the instance. An enhancement request was filed in 2011. I mentioned recently on Twitter that it can be annoying with Active Data Guard when a file on the primary server is created on a path that has no file name conversion. However, Ian Baugaard mentioned that there is a workaround for this specific case because db_create_file_dest is dynamic:
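
A minimal sketch of the idea (the directory paths are placeholders):

-- db_create_file_dest is dynamic, so it can be set on the standby without
-- a restart; files created on the primary with no matching conversion
-- are then created under this destination
alter system set db_create_file_dest = '/u01/oradata' scope = both;
-- by contrast, the convert parameters only take effect after a restart
alter system set db_file_name_convert = '/prim/','/stdby/' scope = spfile;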

Historic Stats

If you want to examine historic object stats, Oracle gives you a few procedures in the dbms_stats package to compare sets of stats captured at two different points in time, but there’s no view that you can query to get an idea of how a table’s stats have changed over time. This is a problem that can be addressed once you discover two things:

  • There are views to report pending table, index, column and histogram stats.
  • Pending stats are stored as “historic” stats with a future date.

Once you’ve spotted the second detail, you can acquire the SQL to generate the pending stats views:
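
For example (a sketch; these dictionary view names are the standard ones, but check them against your version):

select view_name, text
from   dba_views
where  view_name in (
         'DBA_TAB_PENDING_STATS',
         'DBA_IND_PENDING_STATS',
         'DBA_COL_PENDING_STATS',
         'DBA_TAB_HISTGRM_PENDING_STATS'
       );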

PostgreSQL Active Session History (ash): welcome to the pg_active_session_history view (part of the pgsentinel extension)

Why active session history?

What if you could record and query a history of the active sessions? Wouldn’t it be useful for performance tuning activities?

With active session history in place you could have a look at the “near” past database activity. You could answer questions like:

  • Which wait event types were taking the most time?
  • Which wait events were taking the most time?
  • Which application name was taking the most time?
  • What was a session doing?
  • What was a SQL statement waiting for?
  • How many sessions were running on CPU?
  • Which database was taking the most time?
  • Which backend type was taking the most time?
  • On which wait event was a session waiting?
  • And so on….

What does it look like?

Let’s have a look at the pg_active_session_history view (more details on how to create it later on):
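
For instance, with a query along these lines (a sketch; the column names follow the pgsentinel documentation and may differ between versions):

-- the ten most recent samples of active sessions
select ash_time, datname, backend_type,
       wait_event_type, wait_event, query
from   pg_active_session_history
order  by ash_time desc
limit  10;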

My tmux scripts to script demos

When I did my first demo using tmux, I wrote the following blog post to explain it: https://blog.dbi-services.com/using-tmux-for-semi-interactive-demos/. Since then I’ve done all my demos with this technique. My scripts and configuration files have evolved, so I have now uploaded them to GitHub to share them: https://github.com/FranckPachot/scripts/tree/master/tmux-demo

The README.md explains the prerequisites (a recent version of tmux), how to download the scripts quickly (I use this to get everything ready on a new cloud host), and how to run them. The ‘demo’ alias starts tmux attached to the same session (in order to see it on the laptop and on the beamer). The .vimrc defines the macros that run lines from the script file as tmux-send commands.

Data Modeling, Dates and DAX

Presenting data in a format that eases visualization is required for any BI product. Power BI provides much of this with Data Analysis Expressions (DAX). As a DBA, I admit to cringing every time a reference was made to how similar it is to functions in Excel or other non-database platforms. I’m a DBA, and I naturally see data at a much larger, more complex level. I love the simplicity of DAX, which granted me the ability to acquire basic skills with it in just a day, but considering Power BI’s ability to pull from multiple data sources, including SQL Server, Oracle, PostgreSQL and even JSON files, the comparison to Excel left me feeling, well, ‘meh.’

18c: Order by in WITH clause is not preserved

For a previous post I ran on 18c a script of mine that gets the V$MYSTAT delta values between two queries. This script (new version available on GitHub) generates the queries to store some values and subtract them on the next execution. But I had to fix it for 18c because I relied on an ORDER BY in a CTE, which is no longer preserved in 18c.
The idea was to get the statistic names in a Common Table Expression (CTE):

with stats as (
  select rownum n, stat_id, name
  from (
    select stat_id, name
    from v$statname
    where name in (&names)
    order by stat_id
  )
)

and query it from different parts of the UNION ALL which generates the script:

select 'select ' from dual
union all
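
Since 18c no longer preserves that inline ORDER BY, one deterministic alternative (a sketch of a possible fix, not necessarily the one in the updated script) is to build the ordering into the numbering itself:

with stats as (
  -- row_number() makes n deterministic regardless of the row order
  -- in which the CTE is materialized
  select row_number() over (order by stat_id) n, stat_id, name
  from   v$statname
  where  name in (&names)
)
select name from stats order by n;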

#Exasol Fail-Safety explained

The building blocks of an Exasol cluster are commodity Intel servers, e.g. a Dell PowerEdge R740 with 96 GB RAM, 12 x 1.2 TB SAS hot-plug hard drives and 2 x 10 Gb Ethernet cards for the private network. That’s sufficient to deliver outstanding performance combined with high availability. The picture below shows a 4+1 cluster, one of our most popular configurations: