This is a quick post about one of my pet peeves: statically setting environment variables in .bashrc or other shells’ equivalents. I have been bitten by this a number of times. Sometimes it’s my own code, as in this story.
Many installation instructions for Oracle version x tell you to add variables to your shell session when you log in. What’s meant as a convenience can backfire. Sure, it’s nice to have ORACLE_HOME, ORACLE_SID, LD_LIBRARY_PATH, CLASSPATH etc. set automatically without having to find out about them the hard way. However, there are situations where this doesn’t help.
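To make the peeve concrete, here is a sketch of the pattern I mean, next to the alternative I prefer: setting the environment per session with oraenv, which ships with the Oracle installation. The paths and the SID db1 are invented for this example.

# the static approach in ~/.bashrc that tends to bite later (hypothetical paths and SID)
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export ORACLE_SID=db1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$ORACLE_HOME/bin:$PATH

# the dynamic alternative: pick the environment when you need it
$ . oraenv
ORACLE_SID = [oracle] ? db1

The difference matters the moment a second home or database appears on the box: the static variables silently point your tools at the wrong environment, while oraenv makes the choice explicit every time.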
I came across an interesting situation recently and thought it was worth blogging about. My friend Doug Burns might like it, as it has to do with consolidation.
I have seen quite a few sites in my career where separation (of duties/listeners/disk space/log destinations) was paramount, and for good reason! In fact Oracle propagates it too, as a quick search with your favourite search engine will show. In my example I came across a system that used a different listener per database, which is very common and prevents users from “accidentally” connecting to the wrong system. If you are using such a setup, please read on. If you are not using Oracle Restart/Clusterware/RAC then this is not immediately relevant to your Oracle estate.
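To illustrate the listener-per-database pattern, here is a minimal listener.ora sketch; the listener names, host name and ports are invented for this example:

LISTENER_DB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = server1.example.com)(PORT = 1521))
  )

LISTENER_DB2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = server1.example.com)(PORT = 1522))
  )

Each database then points its local_listener parameter at its own listener, so sessions cannot stray into the other system by accident.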
This post is interesting for all those of you who plan to transfer data files between database instances. Why would you consider this? Here’s an excerpt from the official 12.1 package documentation:
The DBMS_FILE_TRANSFER package provides procedures to copy a binary file within a database or to transfer a binary file between databases.
But it gets better:
The destination database converts each block when it receives a file from a platform with different endianness. Datafiles can be imported after they are moved to the destination database as part of a transportable operation without RMAN conversion.
So not only is this a way to copy data files from one database to another, it also allows me to take a file from SPARC and make it available on Linux!
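As a hedged sketch of what this looks like in practice, here is how you might pull a data file from a remote database over a database link. The directory objects SRC_DIR and DEST_DIR, the database link sparcdb and the file name are assumptions for the example; GET_FILE itself is part of the package.

-- a staging directory on the destination side (hypothetical path)
CREATE DIRECTORY dest_dir AS '/u01/app/oracle/stage';

BEGIN
  -- copy users01.dbf from the source database into the local staging area
  DBMS_FILE_TRANSFER.GET_FILE(
    source_directory_object      => 'SRC_DIR',
    source_file_name             => 'users01.dbf',
    source_database              => 'sparcdb',
    destination_directory_object => 'DEST_DIR',
    destination_file_name        => 'users01.dbf');
END;
/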
If you haven’t thought about attending UKOUG’s combined AIM & Database Server SIG, you definitely should! The agenda is seriously good, with many well-known speakers and lots of networking opportunities. It’s also one of the few events held outside an Oracle office.
I am speaking about how you can use incremental backups to reduce cross-platform transportable tablespace (TTS) downtime.
I just realised this week that I haven’t really written anything in detail about policy-managed RAC databases. I remember doing some research about server pools way back when 11.2 came out, and promising to spend some time looking at the new type of database that comes with server pools: policy-managed databases. Somehow I didn’t get around to it. Since I’m lazy I’ll refer to these databases as PMDs from now on, as it saves a fair bit of typing.
So how are PMDs different from Administrator Managed Databases?
First of all, you can have PMDs with RAC only, i.e. in a multi-instance active/active configuration. Before 11.2, RAC required you to tie an Oracle instance to a cluster node. This is why you see instance prefixes in a RAC spfile. Here is an example of the kind of entries you would see in an 11.2 cluster:
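A sketch, assuming a database named RAC with two instances:

RAC1.instance_number=1
RAC2.instance_number=2
RAC1.thread=1
RAC2.thread=2
RAC1.undo_tablespace='UNDOTBS1'
RAC2.undo_tablespace='UNDOTBS2'

Each parameter is bound to a named instance, and that instance in turn is bound to a node, which is exactly the coupling PMDs remove.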
Some days are just too good to be true :) I ran into an interesting problem trying to install Grid Infrastructure for a two node cluster. The storage was presented via iSCSI, which turned out to be a blessing and the inspiration for this blog post. So far I haven’t found out how to create “shareable” LUNs in KVM the same way I did successfully with Xen. I wouldn’t recommend general purpose iSCSI for anything besides lab setups though. If you want network based storage, go for 10GBit/s Ethernet and either FCoE or (direct) NFS.
Here is my setup. Storage is presented in 3 targets using tgtd on the host:
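A sketch of such a three-target configuration in /etc/tgt/targets.conf, with invented IQNs and backing volumes standing in for the real ones:

<target iqn.2012-07.com.example:kvm.ocr>
    backing-store /dev/vg_iscsi/lv_ocr
</target>

<target iqn.2012-07.com.example:kvm.data>
    backing-store /dev/vg_iscsi/lv_data
</target>

<target iqn.2012-07.com.example:kvm.reco>
    backing-store /dev/vg_iscsi/lv_reco
</target>

After restarting tgtd, tgtadm --lld iscsi --mode target --op show should list the three targets with their LUNs.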
So this is a little bit of a plug for myself and Enkitec, but I’m running my Grid Infrastructure And Database High Availability Deep Dive Seminars again for Oracle University. This time the events are online, so there is no need to come to a classroom at all.
Here is the short description of the course:
Providing a highly available database architecture fit for today’s fast changing requirements can be a complex task. Many technologies are available to provide resilience, each with its own advantages and possible disadvantages. This seminar begins with an overview of available HA technologies (hard and soft partitioning of servers, cold failover clusters, RAC and RAC One Node) and complementary tools and techniques to provide recovery from site failure (Data Guard or storage replication).
This might be something very obvious to the reader, but I had an interesting revelation recently when implementing parallel_degree_limit_p1 in a resource consumer group. My aim was to prevent users mapped to that resource consumer group from executing any query in parallel. The environment is fictional, but let’s assume that maintenance operations, for example, can leave indexes and tables decorated with a parallel x attribute. Another common case is restricting PQ resources to prevent users from consuming all of the machine’s resources.
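For reference, a sketch of how such a directive might be created; the plan and consumer group names are invented, and both are assumed to exist already:

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  -- cap the degree of parallelism at 1 for everyone in the group
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan                     => 'DAYTIME_PLAN',
    group_or_subplan         => 'LIMITED_PQ_GROUP',
    comment                  => 'no parallel query for this group',
    parallel_degree_limit_p1 => 1);
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/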
Such a leftover parallel attribute can be the result of an index rebuild, for example, where you add parallelism to speed the operation up. The DOP stays set on the index after the maintenance operation, and you have to explicitly set it back:
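Something along these lines, with a hypothetical index name:

-- the rebuild finishes faster thanks to the parallel slaves...
ALTER INDEX i_t1_pk REBUILD PARALLEL 8;

-- ...but the index keeps DOP 8 until you reset it
ALTER INDEX i_t1_pk NOPARALLEL;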
This is pretty much a note to myself on how to set up the Data Guard broker for RAC 11.2 and later. The tests have been performed on Oracle Linux 5.5 with the Red Hat kernel; the database was 11.2. Sadly my lab server didn’t support more than 2 RAC nodes, so everything has been done on the same cluster. It shouldn’t make a difference though (if it does, please let me know).
WARNING: there are some rather deep changes to the cluster here, so be sure to have proper change control around such amendments as they can cause outages! Nuff said.
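To give an idea of where this is heading, a minimal sketch with invented database names proda (primary) and prodb (standby); note that in a RAC the broker configuration files must live on shared storage, hence the ASM paths:

ALTER SYSTEM SET dg_broker_config_file1='+DATA/proda/dr1proda.dat' SID='*';
ALTER SYSTEM SET dg_broker_config_file2='+DATA/proda/dr2proda.dat' SID='*';
ALTER SYSTEM SET dg_broker_start=TRUE SID='*';

Then, from dgmgrl:

CREATE CONFIGURATION 'racdg' AS
  PRIMARY DATABASE IS 'proda' CONNECT IDENTIFIER IS proda;
ADD DATABASE 'prodb' AS CONNECT IDENTIFIER IS prodb MAINTAINED AS PHYSICAL;
ENABLE CONFIGURATION;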
A few days ago I wrote about my new lab server and the misfortune with the UEK kernel (aka 2.6.32 plus backports). It simply wouldn’t recognise all the memory in the server:
# free -m
             total       used       free     shared    buffers     cached
Mem:          3385        426       2958          0          9        233
-/+ buffers/cache:         184       3200
Swap:          511          0        511
Ouch. Today I gave it another go, especially since my new M4 SSD had arrived. My first idea was to upgrade to UEK2, and indeed, following the instructions on Wim Coekaerts’s blog (see references), it worked:
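In essence the upgrade boils down to the following; this is a sketch, and the repository file name assumes the public-yum layout of the time:

# enable the UEK2 channel in /etc/yum.repos.d/public-yum-ol5.repo first, then:
yum install kernel-uek

# after a reboot, the new kernel should see all of the memory
uname -r
free -m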