Top 60 Oracle Blogs


June 2011

Block size

I knew that failing to have a db_Nk_cache_size setting for your database could cause a statement to fail when it tried to address an object (or tablespace) using a non-standard block size, reporting errors like:

    ORA-29339: tablespace block size 16384 does not match configured block sizes
    ORA-00379: no free buffers available in buffer pool DEFAULT for block size 16K

Here’s an interesting variation on the theme, reported in a note on the OTN database forum. Note particularly the ORA-603 and ORA-604 that wrap the ORA-379; and that the user states that the problem cache is the standard block size for the database. Unfortunately we never saw a resolution to this thread – perhaps it was simply a case of a cache that was too small when the database got very busy.

Footnote: a database can fail to open if it needs to do recovery in a tablespace for which there is no buffer set. Of course this is only likely to happen if you’re running with an init.ora file and have created a non-standard cache with ‘alter system’ calls while the database was previously up. Here’s an extract from an alert log showing the type of report you get:

Fri May 20 17:58:38 2011
Beginning crash recovery of 1 threads
 parallel recovery started with 2 processes
Started redo scan
Completed redo scan
 374 redo blocks read, 98 data blocks need recovery
Fri May 20 17:58:40 2011
Slave exiting with ORA-379 exception
Errors in file c:\oracle\diag\rdbms\d11g\d11g\trace\d11g_p000_2056.trc:
ORA-00379: no free buffers available in buffer pool  for block size 16K
Aborting crash recovery due to slave death, attempting serial crash recovery
Beginning crash recovery of 1 threads
Started redo scan
Completed redo scan
 374 redo blocks read, 98 data blocks need recovery
Aborting crash recovery due to error 379
Errors in file c:\oracle\diag\rdbms\d11g\d11g\trace\d11g_ora_3536.trc:
ORA-00379: no free buffers available in buffer pool  for block size 16K
ORA-379 signalled during: ALTER DATABASE OPEN...
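One way out of this situation (a sketch, assuming the tablespace needing recovery uses a 16K block size, as in the log above, and that 64M is a plausible cache size for the system) is to define the cache before attempting to open the database:

```sql
-- Add the parameter to the init.ora / spfile before the next startup:
-- db_16k_cache_size = 64M

-- Or, with the instance started but the database not yet open,
-- from SQL*Plus as SYSDBA:
ALTER SYSTEM SET db_16k_cache_size = 64M SCOPE = MEMORY;
ALTER DATABASE OPEN;
```

If you are running from an spfile rather than an init.ora file, the setting survives restarts and the problem should not recur.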

Real-World Performance Videos on YouTube – OLTP

Here are some OLTP demos that demonstrate how much performance and throughput can be wasted by poor design and suboptimal database programming.

OLTP Performance – The Trouble with Parsing


OLTP Performance – Concurrent Mid-Tier Connections


Real-World Performance Videos on YouTube – Data Warehousing

Here are some videos of a data warehouse demo that the Real-World Performance Group has been running for a while now and we thought it was time to put them on YouTube. Hope you find them informative.

Migrate a 1TB Data warehouse in 20 Minutes (Part 1)


Migrate a 1TB Data warehouse in 20 Minutes (Part 2)


Migrate a 1TB Data warehouse in 20 Minutes (Part 3)


Migrate a 1TB Data warehouse in 20 Minutes (Part 4)


Open VPN for the Road Warrior

As a consultant it is important to have a test lab, something which is your own, where you can play with new versions and concepts to your heart’s delight without disturbing anyone else or, worse, causing problems for the customer. For this reason I like to have an Internet-facing machine which I can connect to from anywhere. In case the corporate network doesn’t let you out, consider getting mobile broadband on a pay-as-you-go (PAYG) basis; it works a dream!

I have blogged about my system a number of times, with special emphasis on RHEL 5.x and 6.x. Unlike many other Oracle specialists I do not use VirtualBox or VMware for virtualisation, but rather Xen. When I started looking at para-virtualisation I looked at Oracle VM, but at the time it was lacking features I wanted, such as iSCSI-provided storage. OpenSuSE is a great distribution which offers a dom0 kernel out of the box, and this is what I went for. My lab can support a four node cluster plus two Grid Control domUs, which is more than enough for me to work with. And although it’s busy, it doesn’t make working with the machine impossible.

For a long time I was very happy with my machine, and SSH access was all I needed. But when moving to Vista/Windows 7 a problem became apparent: I could no longer use port-forwarding to access my samba server on my backup domU. Microsoft added some other software to listen on the required port, so I started looking at OpenVPN as a solution. This article assumes that you are familiar with OpenVPN; if you are not, then you might want to have a look at the documentation. The howto is a great starting point.

Actually, the OpenVPN tutorial is just fantastic, and it is all you need to set your clients and server up. What it fails to explain is how to access your machines behind the gateway. My dom0 has a public IP address to which OpenVPN binds. This public IP is assigned to my first network bridge, br0. I have three more virtual bridges in the system, not related to a physical interface. VMware calls such a network a “host only” network. I am using br1 as the public interface for my domUs, and br2 as a private interconnect. The final one, br3, is either the storage network for my iSCSI nodes or, for my other systems, an alternative interface to the HAIP resource.

So all in all a reasonable setup, I think. The trouble is that I didn’t know how to access my database servers in the br1 subnet. This is another one of those tips that look so obvious, but took literally hours to work out. I am not concerned with the private networks br2 and br3; after all, those are not meant to be used anyway. A simplified view of my network is shown in this figure:

The workstation is my laptop, obviously. The figure is not 100% correct: the device OpenVPN connections go to is called tun0, but it’s related to my br0 device.

First of all, I needed to allow communication from device tun0, to which OpenVPN listens, to be forwarded to br1. The following iptables rules do exactly that:

# iptables -I FORWARD -i tun0 -o br1 -j ACCEPT
# iptables -I FORWARD -i br1 -o tun0 -j ACCEPT

Translated into English, these two lines instruct the firewall to ACCEPT traffic in the FORWARD chain from device tun0 to br1; in other words, to route packets to the internal network. The second line allows traffic from the internal network to traverse the tunnel device tun0.
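One step the article doesn’t show, but which is required on a typical Linux gateway before any FORWARD rule takes effect, is enabling IP forwarding in the kernel (a sketch; the exact files may differ between distributions):

```
# enable IPv4 forwarding for the running kernel
sysctl -w net.ipv4.ip_forward=1

# make it persistent across reboots
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
```

Without this, the iptables rules above are accepted but no packets are ever forwarded between tun0 and br1.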

With this in place, you can instruct OpenVPN to push routes to the client. In my case this is done in the configuration file used to start the OpenVPN daemon in server mode. My config file, server.conf, contains this line:

push "route"

When a client connects to the OpenVPN server, this route is automatically added to the local routing table with the correct gateway. Note that on Windows with UAC enabled you have to start OpenVPN as administrator, otherwise it won’t be able to initialise the TAP device properly.
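For illustration (the subnet in the original directive was lost above, so 192.168.99.0/24 here is a made-up value, not the author’s actual network), the push directive takes the network and netmask of the subnet behind the gateway:

```
# server.conf – push a route for the (hypothetical) br1 subnet to each client
push "route 192.168.99.0 255.255.255.0"
```

Each connecting client then knows to send traffic for that subnet through the VPN tunnel rather than its default gateway.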

Are we there yet? I’m afraid not.

This is only half the story, as I found out the hard way. After what seemed like endless troubleshooting of routing tables (which were correct all along!) I found that I had to set a default gateway on the domUs, pointing to the dom0’s address on br1. This is as simple as updating /etc/sysconfig/network-scripts/ifcfg-ethx and adding a line of the form:

GATEWAY=<IP address of the br1 device on the dom0>

Either use your command line skills to add the default route or, like I did, bring the interface down and up again.

Hope this saves you a few hours of troubleshooting.

New Blogger – Andy Colvin

Here’s a link to a brand new blog by a very experienced Exadata guy. Andy has done a bunch of Exadata projects over the last year and a half and has done more patching than anyone I know. There should be some interesting info coming from him. His first post is about an X2-8 that he is currently working on (with pictures). And here’s the link to the home page of his blog:

Andy Colvin’s Oracle Blog

Check him out.

Password file maintenance in a Data Guard environment

In a previous posting I wrote about password file maintenance in a clustered ASM and RAC environment. This article raised another question: Is there anything specific about password file maintenance in a Data Guard environment? Yes, updating a password file in a Data Guard environment isn’t as straightforward as one might think. In this […]

Calculate the Distance Between Two Latitude/Longitude Points using Plain SQL

June 13, 2011 (Modified June 14, 2011) A question recently appeared on the Usenet group that oddly made me recall a lesson from a mathematics class that I taught in the early 1990s.  A bit strange how a question related to Oracle Database would trigger such a memory, but it happened.  The question posed [...]
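The linked article works the calculation out in plain SQL; as a quick sketch of the underlying mathematics, the great-circle (haversine) distance between two latitude/longitude points can be computed like this (6371 km is an approximation of the mean Earth radius):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/long points."""
    r = 6371.0  # mean Earth radius in km (approximation)
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    # haversine formula
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * r * asin(sqrt(a))

# Example: London (51.5074, -0.1278) to Paris (48.8566, 2.3522),
# roughly 340-345 km as the crow flies
print(haversine_km(51.5074, -0.1278, 48.8566, 2.3522))
```

The same arithmetic translates directly into Oracle SQL using the SIN, COS, ASIN and SQRT functions, with the angles converted from degrees to radians first.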

The Best Goal Ever !! (Fearless)

Australia (and Canberra specifically) had recently been suffering from two very long and difficult droughts. One had been a severe lack of rain, which left dams at record low levels. After many years, this ended earlier in the year with rain aplenty and with local dams at long last back at 100% capacity. The other drought however [...]

Oracle Open World 2011 – Suggest a Session

Well my lone abstract submission didn’t get selected at Open World this year. But apparently they have a second chance system where you can “Suggest a Session” and users can vote on which papers they’d like to see on the agenda. I went ahead and suggested “Tuning Exadata” – It sounds like something you shouldn’t have to do, but remember that Exadata is not an appliance that has few or no knobs to turn. It has all the power and options of an Oracle database and there are certainly things that you can do wrong that will keep Exadata from performing at its best. So the paper is about how you can know for sure whether Exadata is doing what it should and how to coerce it if you need to.

The site where this voting is supposed to take place is a little difficult to navigate (in my humble opinion) so here’s a direct link to the page where you can see the abstract (and vote if you deem it worthy). ;)

Tuning Exadata

You will have to log in with your Oracle Single Sign-On account (what you use for My Oracle Support – or Metalink for the old guys) or I think you can create a separate account if you want. By the way, Andy Colvin has submitted an abstract for a talk on Exadata Patching, which should be very informative if it gets picked. He’s done more Exadata patching than anyone I am aware of. Here’s a link to his abstract:

Rolling with the Punches – Adventures in Exadata Patching

There will undoubtedly be many deserving abstracts. For example, several of my OakTable brethren have suggested sessions as well. So please look around the site for others of interest. You can vote for as many as you want.

NLS, Part Deux

A guest post today, by Brian Ledbetter, a co-worker at Agilex:

On a customer’s database, we ran across a table that would not migrate.  It was admittedly a log table, containing long chunks of HTTP header data, but whenever we tried importing it into our 11gR2 database, we ended up getting:

IMP-00058: ORACLE error 1461 encountered
ORA-01461: can bind a LONG value only for insert into a LONG column

After looking at the table structure, the first thing we noticed was that there was a VARCHAR2(4000) column in the table.  Considering that this column was already the maximum size (in bytes) for a CHAR-based data type, it became the focus of our attention.

Looking online for solutions, we found references [1] suggesting that Oracle was implicitly converting this column to a VARCHAR2(4000 CHAR) type, creating a column that can contain up to 4 bytes per character.[2]  Because this overflows the 4000 byte limit on column length, Oracle then attempted to implicitly convert the datatype to a LONG VARCHAR2, which is apparently deprecated in 11gR2.[3]  (We’re not sure why Oracle is still trying to make this conversion, if that’s the case.)
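The byte-versus-character arithmetic is easy to demonstrate outside the database (a sketch in Python, not part of the original post): a string of 4000 characters can need far more than 4000 bytes once encoded as multi-byte UTF-8, which is exactly the overflow behind the error above.

```python
# A 4000-character string built from a 2-byte UTF-8 character ('é', U+00E9)
s = "é" * 4000

assert len(s) == 4000                   # 4000 characters...
assert len(s.encode("utf-8")) == 8000   # ...but 8000 bytes in UTF-8

# So a VARCHAR2(4000 CHAR) column can hold values that no longer fit
# within the 4000-byte limit that ORA-01461 is complaining about.
print(len(s.encode("utf-8")))
```

With 4-byte characters (e.g. supplementary-plane characters in AL32UTF8) the expansion factor can reach four, which matches the implicit conversion described above.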

Anyway, we tried precreating the table with a CLOB datatype, and that didn’t work either, so as a workaround, we created a copy of the table with the data trimmed to 1000 characters (leaving plenty of room after UTF8 conversion):

create table tabname_migtmp as select col1, col2, substr(col3,1,1000) col3 from tabname;

We then used exp/imp to copy tabname_migtmp over to the 11gR2 server, and inserted the data from it into the final location.

insert into tabname select * from tabname_migtmp;

drop table tabname_migtmp;




See Also: Technote 444171.1.