Exadata gets its performance by letting the storage layer (the Exadata storage servers) participate in query processing, which means part of the processing is done as close as possible to where the data is stored. Because the storage servers take part in query processing, a storage grid can process a smart scan request in a massively parallel fashion (depending on the number of storage servers participating).
Following on from my recent batch of “what I’m doing at the moment” style posts, I just thought I would mention some of the infrastructure I’ve been installing and configuring recently…
We are still part way through a migration from Oracle Application Server to WebLogic 11g. There are many applications to migrate and test, fortunately not by me, but they fit into two main categories.
The purpose of this post is to show what the wait event ‘cell smart table scan’ means, based on reproducible investigation methods.
First of all, if you see the ‘cell smart table scan’ event: congratulations! This means you are using your Exadata as it’s intended to be used: your full table scan is offloaded to the cells (storage layer), and potentially all kinds of optimisations are happening, like column filtering, predicate filtering, storage indexing, etc.
But what exactly is happening when you see the wait event ‘cell smart table scan’? Can we say anything based on this wait, as you can with other wait events?
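As a starting point for that question, you can at least see how often a session hits the wait and whether a statement’s I/O is eligible for offload. Below is a minimal sketch against the standard dynamic performance views v$session_event and v$sql; the SID and SQL_ID are placeholders you would substitute yourself:

-- waits for a specific session (SID 17 is a placeholder)
SELECT event, total_waits, time_waited_micro
FROM v$session_event
WHERE sid = 17
AND event = 'cell smart table scan';

-- offload eligibility and savings for a statement (substitute the sql_id)
SELECT sql_id, io_cell_offload_eligible_bytes, io_cell_offload_returned_bytes
FROM v$sql
WHERE sql_id = '&sql_id';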
This is a quick post about how you can get the debuginfo packages on your Oracle Linux system in the easiest way imaginable: via yum.
I guess most people reading this are familiar with Oracle Linux, and do know how to install it, and how to use the public yum server to install and update packages on Linux right from Oracle’s free internet repository. If you do not know, follow the link and learn.
As a sidestep from the purpose of this blog article: during the ACE Director briefing prior to Oracle OpenWorld, Wim Coekaerts announced that the public-yum repository is now hosted on Akamai instead of somewhere “unofficial” in the Oracle infrastructure. This is really noticeable when using it now: previously I could not get beyond a speed of approximately 500K/s, now I can get speeds of 10M/s.
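Back to the debuginfo packages themselves: one way to set this up is a small yum repository definition. The fragment below is a minimal sketch; the file name is made up, and the baseurl points at Oracle’s public debuginfo location for Oracle Linux 6, so verify it matches your release before using it:

# /etc/yum.repos.d/ol6-debuginfo.repo (file name is just an example)
[ol6_debuginfo]
name=Oracle Linux 6 debuginfo packages
baseurl=https://oss.oracle.com/ol6/debuginfo/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1

After that, a debuginfo package installs like any other package, for example: yum install kernel-uek-debuginfo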
Following from yesterday’s post about Cloud Control 12cR3, Oracle Linux and VMware, I thought I would just mention something I put live yesterday evening.
We have a 3rd party Java-based application that runs on Tomcat 7 and Java 7 that until recently was running on RHEL5 on physical hardware. It runs against an Oracle database, but that is not housed on this server. This application is not that big, but it is *very* high profile as it is what we use to process our REF submissions. If you know anything about higher education in the UK, you’ll know that REF is a very big deal, especially as we are within a couple of months of the next submission.
I mentioned some time ago that I was pushing my current company to move much of their gear on to VMware, mostly because of poor resource utilization on many of the servers. That process is still under way.
One thing I wanted to mention specifically was our use of Cloud Control 12cR3. Until recently, we were using physical kit for this. We had an 11.2 database on HP-UX, with HA provided by HP Service Guard. We had two management servers on physical kit running RHEL5, pointing at this Service Guard package to give us some resiliency of the OMS. It worked, but it was overcomplicated and I was never really happy with it for a number of reasons:
I have come across an interesting situation recently and thought it was worth blogging about. My friend Doug Burns might like it, as it has to do with consolidation.
I have seen quite a few sites in my career where the separation (of duties/listeners/disk space/log destinations) was paramount, and for good reason! In fact, Oracle advocates it as well, as a quick search with your favourite search engine will show. In my example I came across a system that used a different listener per database, which is very common and prevents users from “accidentally” connecting to the wrong system. If you are using such a setup, please read on (a sketch of what I mean follows below). If you are not using Oracle Restart/Clusterware/RAC then this is not immediately relevant to your Oracle estate.
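To illustrate the kind of setup I mean, here is a hypothetical listener.ora fragment with one listener per database; the listener names, host, ports and ORACLE_HOME are made up for the example:

LISTENER_DB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbserver01)(PORT = 1521))
  )

LISTENER_DB2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbserver01)(PORT = 1522))
  )

SID_LIST_LISTENER_DB1 =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = DB1)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1)
    )
  )

SID_LIST_LISTENER_DB2 =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = DB2)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1)
    )
  )

With each database registered only against its own listener and port, a user whose TNS entry points at port 1521 cannot accidentally end up in DB2.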
This post is about database writer (dbwr, mostly seen as dbw0 nowadays) IO.
The test environment in which I made the measurements in this post: Linux X64 OL6u3, Oracle 11.2.0.3 (no BP), Clusterware 11.2.0.3, ASM, all database files in ASM. The test environment is a (VMware Fusion) VM with 2 CPUs.
It might be a good idea to read my previous blog about logwriter IO.
The number of database writers depends on the number of CPUs visible to the instance (when not explicitly set with the DB_WRITER_PROCESSES parameter), and mostly seems to be CEIL(CPU_COUNT/8). There might be other things which could influence the number (NUMA comes to mind). In my case, I’ve got 2 CPUs visible, which means I get one database writer (dbw0).
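A quick way to verify this on your own instance is to compare CPU_COUNT with the database writer processes that have actually been started; a minimal sketch:

-- how many CPUs the instance sees
SELECT value FROM v$parameter WHERE name = 'cpu_count';

-- which database writer processes were started
SELECT pname, spid FROM v$process WHERE pname LIKE 'DBW%';

With 2 CPUs the second query returns a single row for DBW0, in line with CEIL(CPU_COUNT/8).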
(This post is for Jerry. He will know when he reads it)
I have been a great supporter of many flavours of virtualisation, and my earliest experience with Xen goes back to Oracle VM 2, which was based on RHEL 4 and an early version of Xen. Why am I saying this? Because Xen is (was?) simple and elegant, especially for building RAC systems: paravirtualised Linux and a dual-core machine were all you needed, since Xen is very lightweight. Recent advances in processor architecture (nested page tables, single root IO virtualisation, and others) do make it more desirable to use hardware virtualisation with paravirtualised drivers, though. This is what this post is about!
Shared storage in Xen
As you know, you need shared block devices for RAC: voting disks, OCR, data files, redo logs, the lot. In Xen that’s as straightforward as it gets:
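A minimal sketch of the relevant part of a Xen domU configuration file (the volume names and device names are just examples; the ‘w!’ access mode is what allows the same block device to be attached writable to more than one guest):

disk = [
    'phy:/dev/vg_xen/rac1_root,xvda,w',    # local root disk, not shared
    'phy:/dev/vg_xen/rac_asm01,xvdc,w!',   # shared ASM disk, writable from multiple domUs
    'phy:/dev/vg_xen/rac_asm02,xvdd,w!',   # second shared ASM disk
]

The same ‘w!’ entries then go into the configuration file of every node in the cluster, so all RAC nodes see identical shared devices.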
This post is about log writer (lgwr) IO.
It’s good to point out the environment on which I do my testing:
Linux X64 OL6u3, Oracle 11.2.0.3 (no BP), Clusterware 11.2.0.3, ASM, all database files in ASM.
In order to look at what the logwriter is doing, a 10046 trace of the lgwr at level 8 gives an overview.
One way of doing so is using oradebug. Be very careful about using oradebug on production environments: it can cause the instance to crash.
This is how I did it:
SYS@v11203 AS SYSDBA> oradebug setospid 2491
Oracle pid: 11, Unix process pid: 2491, image: firstname.lastname@example.org (LGWR)
SYS@v11203 AS SYSDBA> oradebug unlimit
Statement processed.
SYS@v11203 AS SYSDBA> oradebug event 10046 trace name context forever, level 8
Statement processed.
Of course, 2491 is the Linux process ID of the log writer, as is visible in the “image” field.
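Once the trace is running, you can locate the resulting trace file and switch tracing off again from the same oradebug session. A short sketch (the trace file path shown is only an example of where it typically lands under the diagnostic destination):

SYS@v11203 AS SYSDBA> oradebug tracefile_name
/u01/app/oracle/diag/rdbms/v11203/v11203/trace/v11203_lgwr_2491.trc
SYS@v11203 AS SYSDBA> oradebug event 10046 trace name context off
Statement processed.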