
Upgrading to Oracle Linux 6.5

This is a very short post to demonstrate how to upgrade to Oracle Linux 6.5. My lab system was reasonably current: Oracle Linux 6.4 with some security patches (but not all). The upgrade to 6.5 (or “latest”) is very simple, and since Oracle announced they had beefed up connectivity it’s a real joy: instead of 450 KB/s I now get around 9 MB/s. For those of you who want the ISO image: you can’t currently get it from edelivery; the only update method is YUM/ULN, or you download the ISO from My Oracle Support, patch 17860279.
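
For reference, the upgrade itself boils down to a yum update once the machine is registered with ULN or pointed at the public yum “latest” channel; a minimal sketch, not the post's exact steps:

# as root; assumes the ol6_latest channel (public yum) or its ULN equivalent is enabled
yum repolist enabled
yum -y update
cat /etc/oracle-release     # should now report release 6.5
reboot                      # boot into the new kernel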

Hard Drive Predictive Failures on Exadata

This post also applies to non-Exadata systems as hard drives work the same way in other storage arrays too – just the commands you would use for extracting the disk-level metrics would be different.

I just noticed that one of our Exadatas had a disk put into “predictive failure” mode and thought I would show how to measure why the disk is in that mode (as opposed to just replacing it without really understanding the issue ;-)
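
On the storage cell itself CellCLI is the obvious starting point; a minimal sketch of pulling up the disk status and the alert that goes with it (not necessarily the post's exact commands):

# run on the affected storage cell
cellcli -e "list physicaldisk detail"     # look for the status and error counter attributes
cellcli -e "list alerthistory"            # shows the predictive failure alert and its reason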

Inside a RAC 12c GNS cluster

Based on some reader feedback I started looking at GNS again, but this time for RAC 12c. According to the documentation GNS has been enhanced so that you can use it without subdomain delegation. I decided to try the “old-fashioned” way though: DHCP for VIPs, SCAN IPs, subdomain delegation and the like, as it is the most complex setup. I occasionally like complex.

The network setup is exactly the same as I used before in 11.2 and thankfully didn’t require any changes. The cluster I am building is a 2 node system on Oracle Linux 6.4 with the Red Hat-compatible kernel. I have to use that kernel because the Unbreakable Kernel doesn’t know about block devices made available to it via virtio-scsi. I use virtio-scsi for shared block devices very much in the same way I did for Xen.
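
For orientation, the pieces outside the cluster are the DNS delegation of the GNS subdomain and a DHCP scope on the public network; within the cluster, GNS runs on its own VIP. The sketch below uses made-up names and addresses, not the actual lab configuration:

# in the corporate DNS (BIND syntax), the parent zone delegates the subdomain
# to the GNS VIP, roughly:
#   rac12gns.example.com.  IN NS  gns-vip.example.com.
#   gns-vip.example.com.   IN A   192.168.100.51
#
# GNS is normally configured by the installer; it can be added or checked with srvctl:
srvctl add gns -vip 192.168.100.51 -domain rac12gns.example.com
srvctl config gns
srvctl config scan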

Drill Down the I/O stack at UKOUG Tech13

It's just under a week to go before the doors open for the UKOUG Tech13 conference and the adjoining OakTable World UK 2013 sessions, so I thought I would write a very short blog post about what I will be doing there, where I'll be, and what I'm looking forward to. This year I will […]

When the Oracle wait interface isn’t enough

Oracle has done a great job with the wait interface. It has given us the opportunity to profile the time spent in Oracle processes, by keeping track of CPU time and waits (which is time spent not running on CPU). With every new version Oracle has enhanced the wait interface by making the waits more detailed. Tuning typically means trying to get rid of waits as much as possible.
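
That breakdown is visible per session in the wait interface views themselves; a minimal sketch querying V$SESSION_EVENT from a shell here document (the SID is a placeholder):

sqlplus -s / as sysdba <<EOF
set pagesize 100 linesize 120
col event format a40
select event, total_waits, time_waited_micro
from   v\$session_event
where  sid = 123
order  by time_waited_micro desc;
EOF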

Applying PSU 12.1.0.1.1 in the lab environment

Since the first patch for Oracle 12c has been made available, I was of course keen to see how to apply it. For a first test I opted to use my 3 node RAC cluster, which is running on Oracle Linux 6.4 with UEK2. This post is not terribly well polished; it’s more of a log of what I did…

The cluster makes use of some of the new 12c RAC features such as Flex ASM but it is not a Flex Cluster:

[oracle@rac12node3 ~]$ srvctl config asm
ASM home: /u01/app/12.1.0.1/grid
Password file: +OCR/orapwASM
ASM listener: LISTENER
ASM instance count: 2
Cluster ASM listener: ASMNET1LSNR_ASM
[oracle@rac12node3 ~]$

My database is a standard 3 node administrator managed build:
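
On 12.1 the GI PSU is typically applied with opatchauto run as root from the Grid home; a rough sketch only, with the unzipped patch location as a placeholder rather than anything taken from the post:

# as root on each node; the Grid home matches the srvctl output above,
# <patch_dir> stands for the unzipped PSU
/u01/app/12.1.0.1/grid/OPatch/opatchauto apply /u01/stage/<patch_dir>
# afterwards, verify the patch level in each home
/u01/app/12.1.0.1/grid/OPatch/opatch lspatches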

Compressing sqlplus output using a pipe

Recently I have been involved in a project which requires a lot of data to be extracted from Oracle. The size of the data was so huge that the filesystems filled up. Compressing the output (using tar with j (bzip2) or z (gzip)) is an obvious solution, but this can only be done after the files are created. This is why I proposed compressing the output without it ever existing in uncompressed form.

This solution works with a so-called ‘named pipe’, which I know for sure can be created on Linux and UNIX. A named pipe lets two processes transfer data between each other. This solution will look familiar to “older” Oracle DBAs: this is how exports were compressed with the “original” export utility (exp).

I’ve created a small script with sqlplus embedded in it, which executes the sqlplus commands using a “here document”:
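
A minimal sketch of that idea, with credentials, query, and file names made up (not the author's actual script):

#!/bin/bash
# compress sqlplus output on the fly through a named pipe; all names are placeholders
PIPE=/tmp/extract.pipe
mkfifo $PIPE
# the reader end: gzip consumes whatever sqlplus writes into the pipe
gzip -c < $PIPE > /data/extract.csv.gz &

sqlplus -s scott/tiger <<EOF
set heading off feedback off pagesize 0 linesize 4000 trimspool on
spool $PIPE
select empno || ',' || ename from emp;
spool off
exit
EOF

wait          # let gzip finish draining the pipe
rm -f $PIPE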

SGA bigger than the amount of HugePages configured (Linux – 11.2.0.3)

I just learned something new yesterday when demoing large page use on Linux during my AOT seminar.

I had 512 x 2MB hugepages configured in Linux (1024 MB). So I set USE_LARGE_PAGES = TRUE (it actually is the default anyway in 11.2.0.2+). This allows the use of large pages but doesn’t force it; the ONLY setting would force the use of hugepages, and the instance wouldn’t start up without them. Anyway, the previous behavior with hugepages was that if Oracle was not able to allocate the entire SGA from the hugepages area, it would silently allocate the entire SGA from small pages. It was all or nothing. But to my surprise, when I set my SGA_MAX_SIZE bigger than the amount of allocated hugepages in my testing, the instance started up and the hugepages got allocated too!
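
This is easy to observe by comparing the kernel's hugepage counters before and after instance startup; a minimal sketch (the hugepage count matches the post, the rest is generic):

# as root: configure 512 x 2MB hugepages and check the counters
sysctl -w vm.nr_hugepages=512
grep Huge /proc/meminfo        # HugePages_Total, _Free, _Rsvd

# start the instance with SGA_MAX_SIZE > 1024 MB, then look again:
grep Huge /proc/meminfo        # on 11.2.0.3 part of the SGA can still end up in hugepages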

Exadata: what kind of IO requests has a cell been receiving?

When you are administering one or more Exadatas, you probably have multiple databases running on different database or “computing” nodes. In order to understand what kind of IO you are doing, you can look at the statistics of your database and check in the data dictionary what that instance, or instances in the case of RAC, have been doing. When using Exadata there is a near 100% chance you are using either normal or high redundancy, and most people know about the “write amplification” caused by ASM redundancy: the write statistics in the Oracle data dictionary do not reflect the additional writes needed to satisfy normal (#IO times 2) or high (#IO times 3) redundancy. This means there might be a difference between the IOs you measure or think your database is doing and what is actually done at the storage level.
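
The cell side of that picture can be read from the cells' own metrics; a minimal sketch using CellCLI on a storage cell (the metric name pattern is an assumption on my part, not taken from the post):

# current small/large read and write request counters per cell disk
cellcli -e "list metriccurrent where objectType = 'CELLDISK' and name like 'CD_IO_RQ.*'"
# the same counters over time
cellcli -e "list metrichistory where name like 'CD_IO_RQ.*'"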