Performance tuning

Less Performance Impact with Unified Auditing in #Oracle 12c

There is a new auditing architecture in Oracle Database 12c, called Unified Auditing. Why would you want to use it? Because it has significantly less performance impact than the old approach: audit records are now buffered in the SGA and written asynchronously to disk, and that's the trick.

Other benefits of the new approach are that there is now one centralized way (and a single syntax) to deal with all the various auditing features that have been introduced over time, like Fine Grained Auditing. But the key improvement, in my opinion, is the reduced performance impact, because that often hurt customers in the past. Let's see it in action! First, I will record a baseline without any auditing:
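The baseline itself is not part of this excerpt; a minimal sketch of such a baseline (table name and row count are invented, not taken from the original post) could look like this:

SET TIMING ON

-- Baseline: a bulk insert with no audit policies enabled.
CREATE TABLE audit_demo (id NUMBER, payload VARCHAR2(100));

INSERT INTO audit_demo
SELECT ROWNUM, RPAD('x', 100, 'x')
FROM dual
CONNECT BY LEVEL <= 1000000;

COMMIT;

The same insert can then be repeated with traditional auditing and with a unified audit policy enabled to compare the elapsed times.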

 

Multiple invisible indexes on the same column in #Oracle 12c

After invisible indexes were introduced in 11g, they have been enhanced in 12c: you can now have multiple indexes on the same set of columns. Why would you want to use that? Actually, this is always the first question I ask when I see a new feature; sometimes it's really hard to answer :-)

Here, a plausible use case could be that you expect a new index on the same column to be an improvement over the existing index, but you are not 100% sure. So instead of just dropping the old index, you make it invisible first to see the outcome:
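The example itself is cut off here, but a rough sketch of that approach (table and index names are invented; they are not from the original post) might look like this:

-- In 12c a second index on the same column must differ in some property
-- (for example index type, uniqueness, or partitioning), and only one of
-- the two may be visible at any point in time.
CREATE BITMAP INDEX sales_channel_bix ON sales (channel_id) INVISIBLE;

ALTER INDEX sales_channel_ix  INVISIBLE;  -- hide the existing B-tree index
ALTER INDEX sales_channel_bix VISIBLE;    -- let the optimizer use the new one

-- If the new index turns out to be a bad idea, swap the visibility back:
ALTER INDEX sales_channel_bix INVISIBLE;
ALTER INDEX sales_channel_ix  VISIBLE;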

 

Demos do fail.

I am an ardent believer in the "show me how it works" principle, and I usually have demos in my presentations. So, I was presenting "Tools for advanced debugging in Solaris and Linux", with demos, at IOUG Collaborate 2015 in Las Vegas on April 13, and my souped-up laptop (with 32G of memory, SSD drives, a high-end video processor, etc.) was not responding when I tried to open the folder containing my presentation files.

Sometimes, demos do fail. At least, I managed to complete the demos with zero slides :-) Apologies to the audience for my R-rated rants about laptop issues.

You can download the presentation files from the links below.

Session_145_advanced_debugging_using_UNIX_tools

IOUG Collaborate 2015

I will be presenting on two topics at IOUG Collaborate 2015 in Las Vegas. Use the show planner to add my presentations to your schedule :)

Session #189: Monday, April 13, 9:15 to 10:15 AM. Topic: Oracle Database 12c In-Memory Internals. Room: Palm B.

Session #145: Monday, April 13, 12:45 to 1:45 PM. Topic: Tools and Techniques for Advanced Debugging in Solaris & Linux (mostly live demo). Room: Palm B.

Video Tutorial: XPLAN_ASH Active Session History - Introduction

I finally got around to preparing another part of the XPLAN_ASH video tutorial.

This part is about the main functionality of XPLAN_ASH: SQL statement execution analysis using Active Session History and Real-Time SQL Monitoring.

In this video tutorial I explain what the output of XPLAN_ASH is supposed to mean when using the Active Session History functionality of the script. Before diving into the details of the script output using sample reports, this part provides some overview and introduction that will hopefully make it easier to understand how the output is organized and what it is supposed to mean.

This is the initial, general introduction part. More parts to follow.

In-memory pre-population speed

While presenting at Oaktable World 2014 in San Francisco, I discussed in-memory pre-population speed and indicated that it takes about 30 minutes to 1 hour to load ~300GB of tables. Someone asked me "Why?", and that was a fair question. So, I profiled the in-memory pre-population at startup.

Profiling methods

I profiled all in-memory worker sessions using Tanel's snapper script, and I also profiled the processes at the OS level using the Linux perf tool with a 99Hz sample rate. As there was no other activity on the database server, it was okay to sample everything on the server. Snapper output will indicate where the time is spent; if the time is spent executing on CPU, then the perf report output will tell us the function call stack executing during those CPU cycles. Data from these two profiling methods will help us understand the root cause of the slowness.
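For reference, here is a sketch of what that setup could look like; the snapper arguments, the session filter, and the 60-second sampling window below are my own illustration, not the commands from the original post:

-- Sample the in-memory population worker sessions (background processes
-- named like W00n) with Tanel Poder's snapper script from SQL*Plus.
@snapper ash,stats 60 1 "select sid from v$session where program like '%(W0%'"

-- At the OS level, perf sampled system-wide at 99 Hz (run from a shell):
--   perf record -F 99 -a -g -- sleep 60
--   perf report --stdio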

Inmemory: Not all inmemory_size is usable to store tables.

I have been testing the inmemory column store extensively, and the product is performing well for our workload. However, I have learned a bit more about the inmemory column store and will be blogging about a few of those findings here. BTW, I will be talking about the internals of inmemory in my Oaktable World presentation; if you are at Open World 2014, you can come and see my talk: http://www.oraclerealworld.com/oaktable-world/agenda/

inmemory_size
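The rest of the post is not included in this excerpt, but the point behind the title can be illustrated with a quick query of my own (not taken from the original post): v$inmemory_area shows that the area is split into a 1MB pool for column-formatted table data and a 64KB pool for metadata, so not all of inmemory_size can hold table segments.

-- Show how the inmemory area is divided into its pools.
SELECT pool, alloc_bytes, used_bytes, populate_status
FROM   v$inmemory_area;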

inmemory area is another sub-heap of the top-level SGA heap

I blogged earlier about heap dumps and shared pool heap durations, and I was curious to see how inmemory, a new 12.1.0.2 feature, is implemented. This is a short blog entry discussing the inmemory area heap.

Parameters

I have set the initialization parameters sga_target=32G and inmemory_size=16G, meaning that out of the 32GB SGA, 16GB will be allocated to the inmemory area and the remaining 16GB to the traditional areas such as the buffer cache, shared pool, etc. I was expecting the v$sgastat view to show the memory allocated for the inmemory area; unfortunately, there are no rows marked for the inmemory area (the command "show sga" does show the inmemory area, though). However, a heap dump at level 2 shows that the inmemory area is defined as a sub-heap of the top-level SGA heap. The following are the commands to take a heap dump.
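The command listing is cut off in this excerpt; the usual sequence looks roughly like this (a sketch, not necessarily the author's exact commands):

-- Parameters as described above (changing inmemory_size requires a restart in 12.1).
ALTER SYSTEM SET sga_target    = 32G SCOPE = SPFILE;
ALTER SYSTEM SET inmemory_size = 16G SCOPE = SPFILE;

-- After the restart, dump the SGA heap at level 2; the inmemory area shows up
-- as a sub-heap of the top-level SGA heap in the resulting trace file.
ALTER SESSION SET EVENTS 'immediate trace name heapdump level 2';

-- Locate the trace file that was just written.
SELECT value FROM v$diag_info WHERE name = 'Default Trace File';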

Data visualization, px qref waits, and a kernel bug!

Data visualization is a useful method for identifying performance patterns. In most cases, I pull custom performance metrics from the AWR repository and use Tableau to visualize the data. Of course, you can do the visualization with an Excel spreadsheet too.

Problem definition
We had a huge number of PX qref waits in a database:
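The supporting chart and query did not make it into this excerpt; a sketch of the kind of AWR query that could feed such a visualization (the event name filter and column choices are my assumption, not the author's exact query) is:

-- Per-snapshot delta of 'PX qref latch' waits from the AWR repository,
-- suitable for plotting over time per instance.
SELECT sn.begin_interval_time,
       e.instance_number,
       e.total_waits
         - LAG(e.total_waits) OVER (PARTITION BY e.instance_number
                                    ORDER BY sn.snap_id) AS waits_in_interval
FROM   dba_hist_system_event e
JOIN   dba_hist_snapshot sn
  ON   sn.snap_id         = e.snap_id
 AND   sn.dbid            = e.dbid
 AND   sn.instance_number = e.instance_number
WHERE  e.event_name = 'PX qref latch'
ORDER  BY e.instance_number, sn.begin_interval_time;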

Golden rules of RAC performance diagnostics

After collaborating with many performance engineers on RAC databases, I have come to realize that there are common patterns among the (mis)diagnoses. This blog post discusses those issues. I also talked about this at the Hotsos 2014 conference.

Golden rules

Here are the golden rules of RAC performance diagnostics. These rules may not apply to general RAC configuration issues, though.

  1. Beware of top event tunnel vision
  2. Eliminate infrastructure as an issue
  3. Identify problem-inducing instance
  4. Review send-side metrics also
  5. Use histograms, not just averages (see the sketch after this list)
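As a quick illustration of rule 5 (my own sketch, not taken from the paper): the wait-time histogram of the gc events tells you far more than the average wait time does.

-- Wait-time distribution of the 'gc buffer busy' family of events per instance.
SELECT inst_id, event, wait_time_milli, wait_count
FROM   gv$event_histogram
WHERE  event LIKE 'gc buffer busy%'
ORDER  BY inst_id, event, wait_time_milli;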

It looks like this may be better read as a document, so please use the PDF files of the presentation and the paper. Presentation slide #10 provides in-depth coverage of the gc buffer busy* wait events. I will try to blog about that slide later (hopefully).