
The Oracle ‘db file parallel read’ wait event

This is a small note describing how Oracle implemented the situation that is covered by the ‘db file parallel read’ wait event. This event happens when Oracle knows it must read multiple blocks that are not adjacent (thus from random positions, possibly in different files), and cannot continue processing with the result of a single block. In other words: it occurs when Oracle cannot process something after reading just one block (otherwise Oracle reads a single block, visible as the wait ‘db file sequential read’).

This is how it shows up if you enable sql trace:
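As a rough illustration (the values below are made up, not taken from an actual trace), such a wait appears in the trace file as a WAIT line whose parameters show the number of files, blocks and requests involved:

WAIT #139792524069560: nam='db file parallel read' ela= 14118 files=1 blocks=39 requests=39 obj#=75579 tim=1385308947559373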

Extra huge database IOs, part 2

This is part 2 of a number of blogposts about huge Oracle database IO’s.
If you landed on this blogpost and did not read part 1, please read part 1 here.

In part 1 I showed how database IOs of a full table scan could be bigger than 1MB by increasing the db_file_multiblock_read_count parameter to a number beyond 1MB expressed in Oracle blocks. These bigger IOs only happen with direct path reads, not with buffered multiblock reads.
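If you want to try this yourself, a minimal sketch (not the exact statements from these posts) is to raise the parameter well beyond 128 and force the serial full scan to use direct path reads; note that "_serial_direct_read" is a hidden, unsupported parameter, so only play with it on a test system:

alter session set db_file_multiblock_read_count = 2000;
alter session set "_serial_direct_read" = always;  -- hidden parameter: forces direct path reads
select count(*) from t2;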

But how much bigger can these IOs be? In part 1 I showed Oracle IOs of 1020 blocks. Is that the limit? To investigate this, I created a much bigger table (table T2 in part 1 had a maximum extent size of 1024 blocks, which meant that 1020 blocks was the biggest IO possible from that table).

For the sake of this investigation I created a much bigger table to get larger extents:
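The exact DDL matters less than the extent sizes, so as a sketch (names and sizes made up; the datafile clause without a filename assumes OMF, i.e. db_create_file_dest is set):

create bigfile tablespace bigext
  datafile size 16g
  extent management local uniform size 256m;

create table t3 tablespace bigext as
  select rownum id, rpad('x', 100, 'x') padding
  from dual connect by level <= 1e7;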

Extra huge database IOs

It’s been a while since I presented the first incarnation of my ‘about multiblock reads’ presentation. When I did this at the UKOUG TEBS conference in Birmingham in 2011, Christian Antognini chaired my presentation. After my presentation Christian showed me it’s possible to set the parameter ‘db_file_multiblock_read_count’ higher than 1MB/db_block_size (which is 128 if your blocksize is 8kB), and you could benefit from it if your hardware is sufficient. In fact, Christian showed me AWR reports (could also be statspack reports, not sure) which showed the benefit.

My understanding of the parameter db_file_multiblock_read_count at the time was:

The maximum value is the operating system’s maximum I/O size expressed as Oracle blocks ((max I/O size)/DB_BLOCK_SIZE). If you set this parameter to a value greater than the maximum, Oracle uses the maximum.

Installation of vmware tools on OL6u4 plus vmware shared folders

This post is just a reminder to myself how to install the vmware (fusion) stuff in a freshly installed OL6u4 installation. To be honest, it isn’t entirely “fresh”: I’ve ‘yum update’ed the installation, and I’ve already installed the 11gR2 preinstall package (yum install oracle-rdbms-server-11gR2-preinstall).

1. Reboot
After the yum update and yum install of the oracle database preinstall package everything is updated, but the new kernel is not yet active. This means you need to reboot your VM first.
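In command form, the preparation up to this point boils down to:

# yum update -y
# yum install -y oracle-rdbms-server-11gR2-preinstall
# reboot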

2. Install the kernel headers
Now that the whole system is installed and set up, we can *almost* install the vmware tools. However: the vmware tools need the kernel headers. If you look for the kernel headers, you *think* you see them:
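On OL6 the catch is that the running kernel is usually the UEK one: the headers the vmware tools build actually needs come from kernel-uek-devel matching uname -r, not from the plain kernel-headers package. A sketch of the check and the fix:

# uname -r
# rpm -qa | grep -E 'kernel.*(headers|devel)'
# yum install -y kernel-uek-devel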

Your first steps with the multi-tenancy option

You can’t possibly have avoided all the buzz around the multi-tenancy option until now, so it is probably in order to give you a bit more information about it besides the marketing slides Oracle has shown you so far.

IMPORTANT: This article is a technical article and leaves the pros and cons of the multi-tenancy option to one side. It does the same with the decision to make the CDB a cost option. And it doesn’t advise you on licensing either, so make sure you are appropriately licensed to make use of the features demonstrated in the series.

Series Overview

Since it is nearly impossible to give an overview of Pluggable and Container Databases in one article, this is going to be a series of multiple articles. In this part you can read about creating PDBs and where to look for information about them.
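As a small taste of what is to come, creating a PDB from the seed, opening it and listing the PDBs looks like this (names and paths are made up for illustration):

create pluggable database pdb1 admin user pdbadmin identified by secret
  file_name_convert = ('/u01/oradata/cdb1/pdbseed/', '/u01/oradata/cdb1/pdb1/');
alter pluggable database pdb1 open;
select con_id, name, open_mode from v$pdbs;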

Increasing the maximum I/O size in Linux

This post is really a quick note to myself to remind me how to bump up the maximum IO size on Linux. I have been benchmarking a bit lately, and increasing the maximum size of an I/O request from 512KB to 1024KB looked like something worth doing. Especially since it’s also done in Exadata :)
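The knob in question lives in sysfs, per block device; checking the hardware ceiling and bumping the soft limit looks like this (the device name is an example, and the echo does not survive a reboot):

# cat /sys/block/sda/queue/max_hw_sectors_kb
# cat /sys/block/sda/queue/max_sectors_kb
# echo 1024 > /sys/block/sda/queue/max_sectors_kb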

So why would it matter? It matters most for Oracle DSS systems, actually. Why? Take ORION, for example: although it’s using mindless I/O, as explained by @flashdba and @kevinclosson, at least it gives me a quick way to test the size of a typical I/O request. Let me demonstrate:

Dtrace probes in Oracle 12c… v$kernel_io_outlier is populated by dtrace!!

Oracle 12c certainly has some great features, but for a performance guy like myself, the performance monitoring features are particularly interesting. There are three new v$ tables that track anomalies in the IO path. The idea is to provide more information about really poorly performing IO that lasts more than 500ms.

  • V$IO_OUTLIER : tracks the attributes of an IO; the size and latency, as well as ASM information, are recorded.
  • V$LGWRIO_OUTLIER : tracks information specifically on Log writer IO.

These two tables are going to be useful to monitor when performance issues occur.  I can already see the SQL scripts to monitor this activity starting to pile up.  But, there is one little extra table that dives even further into the IO stack using Dtrace.
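A first look at these views can be as simple as selecting from them (note that v$kernel_io_outlier relies on DTrace, so expect it to be populated only where DTrace is available, such as Solaris):

select * from v$io_outlier;
select * from v$lgwrio_outlier;
select * from v$kernel_io_outlier;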

Troubleshooting high load average on Linux

This morning I was quite surprised to see one of my lab servers suffering from a high load average. To be fair, I haven’t looked at it for a while; it’s just one of these demo systems ticking along… However, this was an interesting case!

High load averages are usually first felt when you get sluggish response from the terminal. I got very sluggish response but attributed it to the SSH connection. Nope, it turned out that the other VMs were a lot more responsive, and I know that they are on the same host.

A few options exist in Linux to see what is going on. The uptime command is a pretty good first indicator.

# uptime
 09:14:05 up 28 days, 11:29,  1 user,  load average: 19.13, 19.15, 19.13

OK, so this has been going on for a little while (hey, remember it’s my lab server!). But what’s contributing? My next stop was top. Here’s the abridged output:
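Whatever top shows, it helps to know that on Linux the load average counts not only runnable (R) processes but also those in uninterruptible sleep (D), which usually means they are stuck waiting on I/O. A quick sketch to list both states:

# ps -eo state,pid,wchan:20,cmd | awk '$1 ~ /^[DR]/'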

Analyzing IO at the Exadata Cell level… a simple tool for IOPS.

Lately I have been drawn into a fair number of discussions about IO characteristics while helping customers run benchmarks.  I have been working with a mix of developers, DBAs, sysadmins, and storage admins.  As I have learned, every group has their own perspective – certainly when it comes to IO and performance.

  • Most DBAs want to see data from the DB point of view, so AWR reports or EM work just fine.
  • Most system admins look at storage from the filesystem or ASM disk level.
  • Storage admins want to see what is going on within the array.
  • Performance geeks like myself like to see everything, up and down the stack :)

As part of pulling back the covers, I came up with a simple little tool for showing IOPS at the cell level.
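The idea can be sketched with CellCLI’s current metrics, run across every cell with dcli (a sketch, not the actual tool; the metric name pattern is quoted from memory of the Exadata docs, so double-check it):

# dcli -g cell_group cellcli -e "list metriccurrent where objectType = 'CELLDISK' and name like 'CD_IO_RQ_.*'"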

An introduction to Policy Managed Databases in 11.2 RAC

I just realised this week that I haven’t really detailed anything about policy managed RAC databases. I remembered having done some research about server pools way back when 11.2.0.1 came out. I promised to spend some time looking at the new type of database that comes with server pools, policy managed databases, but somehow didn’t get around to doing it. Since I’m lazy I’ll refer to these databases as PMDs from now on, as it saves a fair bit of typing.

So how are PMDs different from Administrator Managed Databases?

First of all, you can have PMDs with RAC only, i.e. in a multi-instance active/active configuration. Before 11.2, RAC required you to tie an Oracle instance to a specific cluster node. This is why you see instance prefixes in a RAC spfile. Here is an example from my lab 11.2.0.3.6 cluster:
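Typical instance-prefixed entries have this shape (names made up for illustration, not the actual lab output):

RAC1.instance_number=1
RAC2.instance_number=2
RAC1.thread=1
RAC2.thread=2
RAC1.undo_tablespace='UNDOTBS1'
RAC2.undo_tablespace='UNDOTBS2'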