
Why is P1 the only parameter populated in cell smart table scan?

Anyone who has looked at Exadata might ask this question, and I did too. After all, cell smart table scan is in the User I/O wait class, so there should be more parameters, right? This is what you find for a smart scan:

NAME                           PARAMETER1           PARAMETER2           PARAMETER3                     WAIT_CLASS
------------------------------ -------------------- -------------------- ------------------------------ ---------------
cell smart table scan          cellhash#                                                                User I/O
cell smart index scan          cellhash#                                                                User I/O

Compare this to a traditional I/O request:
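The listing comes from V$EVENT_NAME; a sketch of a query that puts the smart scan events next to a conventional I/O event for comparison:

select name, parameter1, parameter2, parameter3, wait_class
  from v$event_name
 where name like 'cell smart%scan'
    or name = 'db file sequential read';

For db file sequential read all three parameters are populated (file#, block# and blocks), which is exactly what the smart scan events are missing.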

Data Guard transport lag in OEM 12c

I have come across this phenomenon a couple of times now, so I thought it was worth writing up.

Consider a scenario where you get an alert because your standby database has an apply lag. The alert is generated by OEM, and when you log in and check, there is indeed an apply lag. Even worse, the apply lag increases with every refresh of the page! I tagged this post as an 11.2 problem, but it is by no means specific to that version.
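Before taking the alert at face value, it is worth querying the lag directly on the standby. V$DATAGUARD_STATS reports both values together with the time they were computed; a sketch:

select name, value, time_computed
  from v$dataguard_stats
 where name in ('transport lag', 'apply lag');

The TIME_COMPUTED column deserves a look here: if the value was computed a while ago, the reported lag can keep growing even though apply itself is perfectly healthy.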

Here is a screenshot of this misery:

[Screenshot: Lag in OEM]

Now there are of course a number of possible causes:

Massive tablespace fragmentation on LMT with ASSM

I have been asked to investigate another interesting problem worth writing about, because it deals with an issue I believed had long since been solved: tablespace fragmentation. In this case, however, the affected tablespace was a locally managed tablespace (LMT) with Automatic Segment Space Management (ASSM) enabled. It should be difficult to end up with fragmentation in one of these, but as you will see it is not impossible.
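A quick way to see whether free space in a tablespace is fragmented is to compare the total free space with the largest contiguous chunk; a sketch, with the tablespace name made up:

select count(*)             as free_extents,
       sum(bytes)/1024/1024 as free_mb,
       max(bytes)/1024/1024 as largest_free_mb
  from dba_free_space
 where tablespace_name = 'PART_TBS';

A large FREE_MB combined with a tiny LARGEST_FREE_MB is the classic signature: plenty of space overall, but no single stretch big enough for the next extent.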

So the story started innocently enough, with an ORA-01653 while moving subpartitions to another tablespace:

Be aware of these environment variables in .bashrc et al.

This is a quick post about one of my pet peeves: statically setting environment variables in .bashrc or other shells' equivalents. I have been bitten by this a number of times. Sometimes it's my own code, as in this story.

Background

Many installation instructions for Oracle version x tell you to add variables to your shell session when you log in. What is meant as a convenience can backfire. Sure, it's nice to have ORACLE_HOME, ORACLE_SID, LD_LIBRARY_PATH, CLASSPATH etc. set automatically without having to find out about them the hard way. However, there are situations where this doesn't help.
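To illustrate, a static block like the following in ~/.bashrc is what I mean (paths and SID made up):

# set statically in ~/.bashrc: every new session is now pinned
# to this one home and SID, whether that is what you want or not
export ORACLE_SID=ORCL
export ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/dbhome_1
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib

Setting the variables per session instead, for example with oraenv and /etc/oratab, avoids values baked into the login scripts silently overriding the environment you actually want to work in.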

Clusterware and listener management gotcha in 11.2

I have come across an interesting situation recently and thought it was worth blogging about. My friend Doug Burns might like it, as it has to do with consolidation.

Background

I have seen quite a few sites in my career where separation (of duties/listeners/disk space/log destinations) was paramount, and for good reason! In fact Oracle promotes it too, as a quick search with your favourite search engine will show. In my example I came across a system that used a different listener per database, which is very common and prevents users from “accidentally” connecting to the wrong system. If you are using such a setup, please read on. If you are not using Oracle Restart, Clusterware or RAC, then this is not immediately relevant to your Oracle estate.
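In 11.2, additional listeners of this kind are best registered with Clusterware rather than maintained by hand in listener.ora; a sketch, with the listener name and port made up:

srvctl add listener -l LISTENER_DB1 -p 1522
srvctl start listener -l LISTENER_DB1
srvctl config listener -l LISTENER_DB1

This way the Clusterware agent owns stopping and starting the listener, rather than an administrator with lsnrctl.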

DBMS_FILE_TRANSFER potentially cool but then it is not

This post is interesting for all those of you who plan to transfer data files between database instances. Why would you consider this? Here's an excerpt from the official 12.1 package documentation:

The DBMS_FILE_TRANSFER package provides procedures to copy a binary file within a database or to transfer a binary file between databases.

But it gets better:

The destination database converts each block when it receives a file from a platform with different endianness. Datafiles can be imported after they are moved to the destination database as part of a transportable operation without RMAN conversion.

So not only is this a way to copy data files from one database to another, it also allows me to take a file from SPARC and make it available on Linux!
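The interface itself is pleasantly small: copying a file to a remote database takes two directory objects and a database link. A sketch, with all names made up:

begin
  dbms_file_transfer.put_file(
    source_directory_object      => 'SRC_DIR',
    source_file_name             => 'users01.dbf',
    destination_directory_object => 'DST_DIR',
    destination_file_name        => 'users01.dbf',
    destination_database         => 'TARGETDB');
end;
/

Here destination_database names a database link on the source pointing at the destination; get_file works the same way in the opposite direction.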

I am speaking at AIM & Database Server combined SIG

If you haven’t thought about attending UKOUG’s AIM & Database Server combined SIG, you definitely should! The agenda is seriously good, with many well-known speakers and lots of networking opportunities. It’s also one of the few events held outside an Oracle office.

I am speaking about how you can use incremental backups to reduce cross-platform transportable tablespace (TTS) downtime:

An introduction to Policy Managed Databases in 11.2 RAC

I realised this week that I haven’t really written anything about policy managed RAC databases. I remember having done some research on server pools way back when 11.2.0.1 came out, and promising to spend some time looking at the new type of database that comes with server pools: policy managed databases. Somehow I never got around to doing it. Since I’m lazy I’ll refer to these databases as PMDs from now on, as it saves a fair bit of typing.

So how are PMDs different from Administrator Managed Databases?

First of all, PMDs are available for RAC only, i.e. in a multi-instance active/active configuration. Before 11.2, RAC required you to tie each Oracle instance to a cluster node, which is why you see instance prefixes in a RAC spfile. Here is an example from my lab 11.2.0.3.6 cluster:
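Such instance-prefixed entries look like this (SIDs made up for illustration):

RAC1.instance_number=1
RAC2.instance_number=2
RAC1.thread=1
RAC2.thread=2
RAC1.undo_tablespace='UNDOTBS1'
RAC2.undo_tablespace='UNDOTBS2'

Each parameter is bound to a named instance, and each instance in turn to a node, which is exactly the coupling PMDs do away with.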

4k sector size and Grid Infrastructure 11.2 installation gotcha

Some days are just too good to be true :) I ran into an interesting problem trying to install 11.2.0.3.0 Grid Infrastructure for a two-node cluster. The storage was presented via iSCSI, which turned out to be a blessing and the inspiration for this blog post. So far I haven’t found out how to create “shareable” LUNs in KVM the same way I did successfully with Xen. I wouldn’t recommend general-purpose iSCSI for anything besides lab setups though. If you want network-based storage, go and use 10GBit/s Ethernet and either FCoE or (direct) NFS.

Here is my setup. Storage is presented as three targets using tgtd on the host:
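A tgtd configuration for that kind of setup typically lives in /etc/tgt/targets.conf and looks roughly like this (IQNs and backing files made up):

<target iqn.2013-05.com.example:ocrvote>
    backing-store /var/lib/tgtd/ocrvote.img
</target>

<target iqn.2013-05.com.example:data>
    backing-store /var/lib/tgtd/data.img
</target>

<target iqn.2013-05.com.example:reco>
    backing-store /var/lib/tgtd/reco.img
</target>

With tgtd restarted, both nodes can log into the three targets with iscsiadm and see the same LUNs.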

Grid Infrastructure And Database High Availability Deep Dive Seminars 2013

This is a little bit of a plug for myself and Enkitec, but I’m running my Grid Infrastructure And Database High Availability Deep Dive Seminars again for Oracle University. This time the events are online, so there is no need to come to a classroom at all.

Here is the short description of the course:

Providing a highly available database architecture fit for today’s fast changing requirements can be a complex task. Many technologies are available to provide resilience, each with its own advantages and possible disadvantages. This seminar begins with an overview of available HA technologies (hard and soft partitioning of servers, cold failover clusters, RAC and RAC One Node) and complementary tools and techniques to provide recovery from site failure (Data Guard or storage replication).