Oakies Blog Aggregator

to_char()

Here’s an odd little detail about the to_char() function that happened to show up in a little demonstration code I used to create some data for last Friday’s quiz night.

When generating test data I often rely on doing things with rownum, and one of the things I do quite frequently is turn it into a character string. Nowadays I usually use the lpad() function to do this conversion because that lets me specify the defined length of the resulting column. But last Friday, rather than starting from scratch with my standard template script, I created my test case by cloning a script that I'd written in 2003, and that script had used the to_char() function.

So here’s a simple script to create a little table of character data, creating each column in a different way:

create table t as
select
	rownum				id,
	lpad(rownum,5)			lp5,
	rpad(rownum,5)			rp5,
	cast(rownum as varchar2(5))	cvc5,
	cast(rownum as char(5))		cc5,
	to_char(rownum)			tc
from
	all_objects
where
	rownum <= 12
;

And here are the results of a describe followed by a select:

SQL> desc t
 Name                          Null?    Type
 ----------------------------- -------- --------------------
 ID                                     NUMBER
 LP5                                    VARCHAR2(5)
 RP5                                    VARCHAR2(5)
 CVC5                                   VARCHAR2(5)
 CC5                                    CHAR(5)
 TC                                     VARCHAR2(40)

SQL> select * from t order by id;

        ID LP5   RP5   CVC5  CC5   TC
---------- ----- ----- ----- ----- ----------------------------------------
         1     1 1     1     1     1
         2     2 2     2     2     2
         3     3 3     3     3     3
         4     4 4     4     4     4
         5     5 5     5     5     5
         6     6 6     6     6     6
         7     7 7     7     7     7
         8     8 8     8     8     8
         9     9 9     9     9     9
        10    10 10    10    10    10
        11    11 11    11    11    11
        12    12 12    12    12    12

12 rows selected.

As you can see, lpad(), rpad() and cast() allow you to specify the column length you want in the table definition. Cast() pads on the right (where necessary) with spaces, as does rpad() – although you can specify a padding character with rpad() and lpad(). Lpad() pads on the left, and it's the one I use most frequently because it can be convenient to see the character representations sort in numeric order – sometimes I use '0' as the padding character, as in the example below.
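For example, a trivial variant of the script above:

select lpad(rownum,5,'0') id_char
from   all_objects
where  rownum <= 12;

This produces '00001' through '00012' – character strings that sort in numeric order.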

The "trap" comes from to_char(), which results in a column declaration of varchar2(40) but doesn't do any padding, of course. It's not much of a trap, really; but when you've got into the habit of expecting your character strings to take up some space, it's easy to see the varchar2(40) in the describe, assume that every string has been padded to 40 characters, and then waste time, very annoyingly, trying to work out why the arithmetic is wrong. It was interesting to note that one of the people offering an answer to the question did exactly what I had originally intended to do, viz: set up my varchar2(40) to hold 40 characters.

Footnote: it’s crossed my mind that the size used for the column declaration might depend on the NLS settings, and could be affected by the default character set, locale, and use of separators – but I’ll leave that investigation to any readers who have a little spare time.
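For anyone with that spare time, a minimal sketch of such a test – simply compare the describe output before and after changing the session setting (t_nls is my own table name):

alter session set nls_numeric_characters = ',.';

create table t_nls as
select to_char(rownum) tc from all_objects where rownum <= 12;

SQL> desc t_nls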

Ready for UKOUG

Yes, I know the technical content has been non-existent recently but it's conference season and I've started my trip to UKOUG already, despite the fact Birmingham is only about an hour away on the train from my new home! So although a fun weekend in Edinburgh awaits, celebrating a couple of important birthdays before I head into the utter chaos of conference, a quick post seems a good idea. (Train journeys do that to you ...)

I'm very excited about UKOUG this year. It only occurred to me when I was looking back through some old mails to dig out some details that last year hadn't been one of my happiest UKOUG experiences. I don't remember thinking it was that bad and my blog post seems fairly stoic in retrospect, but ....

- The weather in Britain was horrible and I only just managed to make it to Birmingham after a long wait. To give that some context, it took me about as long to get to UKOUG last year as it did to OOW in San Francisco this year ;-) Others who I'd hoped to see didn't make it :-(

- My abstract wasn't accepted so I wasn't presenting which meant a little less pressure but a lot more expense for the conference registration, so I only spent 2 days away from work to make up for that.

- My skin condition was an absolute nightmare and it was only when it started to get better in the first couple of months of this year that I realised just how much it had been getting me down. But I do remember one particularly chilly afternoon trying to find something, anything that would make me more comfortable*.

- Whilst pointing out that one of those things was Baby Oil is Too Much Information (really, I was happy to take anything that Boots had on offer!), it is funny, because the combination of that and too much alcohol created a situation where I found Alex Gorbachev tying my shoe-laces for me one morning!

- Oh, and lest I forget, there was the panto :-( But, on the good side, a lot of money was raised and it was fun really.

I really didn't remember it being an unhappy experience but, looking back, it was probably my worst UKOUG experience.

Well, fake positivity pushed to one side (as usual), I've got a good feeling this will be one of my best.

- I'm presenting a couple of times although part of me is unhappy but unsurprised that the presentation I really care about and think is important will be delivered to 3 people (and I'm including myself) at 9:00 after the party and the one I have almost no confidence in is going to be in Hall 1 immediately before Cary Millsap! Maybe I'm the worst judge of these things anyway, judging by this blog post and particularly the comments, but I've now started to worry about the SPM presentation more than I was. (See what happened there - I tried to be positive for a bit and it didn't work ....)

- I'm actually going to be presenting 3 times now, as this will be the first time a few of the Oak Table Network members attempt a new approach loosely based on the TED talks, called OAK Talks. Each lunchtime, 5 different presenters will speak for 10 minutes each on a subject of their choosing, with no slides allowed. I decided on 'You Probably Don't Need Exadata'. Whether I'll be able to distill 18 months of whining down the pub into 10 coherent minutes is another matter ;-) These will be held in the Unconference area at lunchtime on the three main conference days. I've seen some negative comments on Twitter to the effect that the Oak Table have somehow nabbed the most prestigious lunchtime Unconference slot in advance, when it should be first-come-first-served like other Unconferences, so, just to expand on my recollection of how this came about when it was all organised very late in the day ... I was keen that we should give some of the Oracle visitors from the US as much presentation time as possible, so we discussed doing some talks in a pub somewhere near the ICC. Then someone informed me there was an Unconference this year, which I'd been unaware of. So (and this is the important bit) rather than detract from that, steal slots that new speakers could use and potentially steal attendees away from the ICC, wouldn't it be a marvellous idea to do it at lunchtime, when there weren't going to be any Unconference sessions (because, erm, it's lunchtime)? That way, we could add something new in a space that wasn't scheduled to be used and help advertise and draw attention to the Unconference! Whether you think that's good or bad, my own intentions were completely positive, trying to support other speakers too, even though not everyone sees it that way. Sigh.

- Did I mention? There's an Unconference! Sign up in the Registration area - it's first-come-first-served, which means anyone can present, using styles or subjects that might not have made it onto the main agenda! I've always liked the Unconference at OOW, but attendances can be a little sparse, so give it your support.

- Just for a bit of variety, I won't be staying in the Jurys this year, but trying the Hyatt again after a long absence. It's a small change, but that can be as good as a rest, I believe. Not that I'm planning too much rest, you understand! ;-)

- The agenda, as always, is stunning although I may have to revise this after I've checked out what's happening at the Unconference.

- Although I won't be able to make the whole of Oak Table Sunday because of my travel plans, I'm looking forward to seeing three of this year's Oak Table Movember volunteers - Alex Gorbachev, Dan Norris and James Morle - raise even more money through their final shave-off, which should take place at some point on Sunday (yet to be decided!). Two of them even hit fund-raising targets that led to dyed facial hair! Come along and check it out if you can and, even if you can't make it, you can probably count on some photos ;-)

- Last, but definitely not least, there is another opportunity for anyone interested in a full day of training from Cary Millsap in Birmingham on the Thursday immediately after the conference finishes. If you want to know how much fun I had on the course, check out my thoughts here. Well worth your while if you can spare the day and seems great value to me. If I hadn't attended so recently, I'd be there myself.

A lot to look forward to then and I really can't wait to get going on Sunday and meet up with friends!

* But at least I was taking notes and learning lessons so started treatment a couple of weeks ago. More reason to be happy! ;-)

Ad: Trivadis TechEvent 2011

All Trivadis employees meet twice a year near Zurich for a unique conference called TechEvent.

The TechEvents, with up to 15 parallel sessions, are among the largest IT conferences in Switzerland. Over two days, employees can attend sessions on a wide range of both technical and non-technical subjects.

Watch the following video (in German) to learn more about it…

“Mastering Oracle Trace Data” by Cary Millsap right after the UKOUG Conference in Birmingham

My good friend (and personal hero) Cary Millsap is doing a series of one day classes around the world — Mastering Oracle Trace Data. One of them is conveniently scheduled in Birmingham Thursday next week right after the UKOUG Conference. It’s not far from the Birmingham ICC where the UKOUG Technology and Business Suite Conference [...]

Confio Ignite

People have been asking me recently, “what is the best enterprise database monitoring software?”

Of course for Oracle there is OEM, but what if OEM doesn't find the problem? What if OEM breaks? (I've blogged before on how OEM can break and all the king's men don't seem to be able to make him work again.) What if one wants to access the performance data but the database is down? (Grid Control only has an anemic, modified and transformed extract from AWR.) What if only DBAs have access to OEM and the developers want access to performance information – and not only access, but safe access in a user-friendly and manager-friendly interface? (I'm a strong believer in giving OEM access to developers, but unfortunately the OEM interface is set up as a DBA interface with potentially risky access to database actions, and is not a safe read-only developer browsing interface.) What if one wants to monitor multiple database platforms in the same interface?

In enterprise database monitors I look for a dashboard based on wait time and CPU statistics. I want to see CPU and wait statistics correlated in a way that lets me easily see the load on the database and the breakdown of that time by wait class and CPU. The main databases – Oracle, Sybase, SQL Server and DB2 – are instrumented with wait events (DB2's wait interface support arrived in 9.7; before 9.7 the statistics were less useful).
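In Oracle terms, the kind of correlated breakdown I'm describing can be approximated straight from ASH – a sketch (Diagnostics Pack licence required; each ASH row is a one-second sample, so count(*) approximates seconds of database time):

select
	to_char(sample_time,'HH24:MI')             minute,
	case when session_state = 'ON CPU'
	     then 'CPU' else wait_class end        class,
	count(*)                                   db_time_secs
from
	v$active_session_history
where
	sample_time > sysdate - 1/24
group by
	to_char(sample_time,'HH24:MI'),
	case when session_state = 'ON CPU'
	     then 'CPU' else wait_class end
order by
	1, 2;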

I only know of 3 cross platform enterprise database monitoring products that follow such a methodology:  Confio Ignite, Quest Performance Analyzer and Precise Indepth/I3.

The Precise Indepth/I3 product has fallen off my radar. I haven't heard much from them, and the last time I talked to them they said that installing the product required a consultant. That's a show-stopper for me. Quest's is falling off my radar as well, not for lack of technology but for a lack of focus by the company. The product "Performance Analyzer" doesn't even show up in the top 10 hits on Google. My guess is that they have rolled the product under the hood of Foglight and sell it as a Foglight option, which means more money and more complexity – both drawbacks. As far as technology goes, "Performance Analyzer" was pretty cool and had nice dashboards, but probably its biggest drawback was that it required binaries to be installed on each and every target (at least for Oracle), which can turn into a maintenance nightmare. So who else is out there? The other contender is Confio. One of Confio's great advantages is that they do (at least for most of their history) only monitoring. There is clear focus and enthusiasm. It's a refreshing change from Quest and Precise (and, if you are Oracle only, from OEM).

So let's look at Confio. One of the newest and most exciting things at Confio is the ability to monitor VMware statistics. VMware has hundreds of statistics (I once ran esxtop in batch mode and got 24,000 columns – yes, 24,000). The statistics are shown in nice graphs in ESX vSphere and vCenter, but the statistics have to be chosen from lists, the number of lines in the graphs can become overwhelming, and – the worst part – there is no correlation between statistics of different types, such as CPU, I/O and network, which sit on different graphs. Finally, there has been no way to correlate the VMware statistics with the Oracle database – until now, with Confio.
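(For the curious, the batch-mode run that produced that column count was along these lines – -b is batch mode, -n the number of samples, -d the delay between them:)

esxtop -b -n 10 -d 5 > esxtop.csv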

Here is a Confio enterprise dashboard where I can see my databases. The databases are grouped in this image by those on VMware and those not:

If I drill down onto a database running on VMware I get not only the classic load chart on the database but also correlated graphs from the OS  and VMware:

The top graph is the load on the Oracle database, grouped by top SQL based on active time in the database (which includes wait time and CPU). Below this graph are 3 other graphs:

  1.  DB Instance
    • Signal Waits Percent
    • Instance CPU Utilization
  2. VM/OS
    • VM CPU Usage
    • VM CPU Ready Time
    • O/S CPU Queue Length
  3. Physical Host
    • Host CPU Usage

Now with these graphs on the same page I can easily make correlations.  I can see a spike in my SQL load at noon.

I can correlate this spike in database load with the three graphs.

1. The "DB Instance" graph shows a spike in the CPU used by Oracle. It also shows "Signal Waits Percent", which is a fancy way of saying Oracle is waiting for CPU – i.e. there is CPU contention.

2. The "VM/OS" graph shows CPU usage going up and "CPU Ready Time" going up. "CPU Ready Time" is an important statistic for VMware, yet it's not well documented: it shows how much time the VM wanted CPU from the host but couldn't get it.

3. The “Physical Host” graph shows that there was a spike in CPU used at the HOST level across all the VMs on that host.

Additionally there are event notifications of changes on the system such as adding a new VM to the host. Note  the grey circles with arrows. Pass your mouse over the event icon to get information about the event.

Grey circles are events on other VMs, blue circles are events on this VM.

I find the ability to see all the layers in one view, see the important statistics only and be able to correlate all these statistics invaluable.

On top of the additional VMware monitoring option seen above, Confio offers the classic view of database load through different aggregation groupings:

  • SQL
  • Waits
  • Programs
  • Machines
  • DB Users
  • O/S Users
  • Files
  • Plans
  • Objects
  • Modules
  • Actions

Clicking the Objects tab gives a different perspective:

The above charts use a large granularity, but one can zoom down to intervals as small as 10 seconds:

Ignite also notifies when it finds performance issues:

Drilling down on alerts will point out such useful things as a SQL statement that has had an execution plan change for the worse:

Summary

The above are a few of my first impressions of Confio's Ignite. Ignite fills a clear need in the industry for enterprise cross-platform database monitoring, including the unique additional ability to monitor VMware.

If you are on Oracle only, then it is a cheaper alternative to OEM, and if you have OEM already then Ignite is a good complement. One attractive feature of Ignite is that all the data is collected into a centralized database, allowing one to easily run custom queries and to query across multiple databases. Most importantly, Ignite gives safe access to database performance data to managers and developers – the people who should actually be seeing and understanding database performance.

You can take Confio Ignite for a spin at:

http://demo.confio.com:8143/iwc

login as demo, demo

PS: please share any experiences you have had with the product in the comments. Thanks

Getting started with Xen virtualisation on Ubuntu 11.10

After a long time and lots of problems I decided to abandon openSuSE 11.4 and its xen implementation in favour of the PVOPS kernel and a different distribution.

It's been difficult to choose the right distribution; for now I'm working with Ubuntu 11.10. One reason is that it's said to be user friendly and highly customisable. It comes with all the right ingredients for running different hypervisors, including my favourite: xen.

Important update! See “Security” below.

Background on Xen

For those who don’t know the story behind xen, here’s a short summary (errors and omissions are not intentional!)

Xen allows para-virtualisation of operating systems. Unlike the hardware virtualisation you know from the likes of VirtualBox and the VMware products, a para-virtualised operating system must be aware of the fact that it is virtualised. This requires access to the source code, one reason why you can't run para-virtualised Windows on Linux Xen. Microsoft has its own imitation of xen, called Hyper-V, but it lags behind in usability and functionality.

Xen boots into a privileged domain, called dom0, which has exclusive access to the hardware. Guests (also known as virtual machines, or domUs) interact with the hardware through the hypervisor.

After a while, XenSource, the company behind xen, was bought by Citrix. The xen sources initially weren't part of the official kernel and had to be patched into it. This meant a kernel had to be compiled as well, something not everyone is comfortable doing. For one reason or another, Red Hat stopped doing so with Fedora 9, and Red Hat Enterprise Linux 5.x is the last of their enterprise distributions to include dom0 support. Red Hat 5 is based on kernel 2.6.18.x plus back-ports from upstream development. Kernel 2.6.18 is known as the "xenified kernel" because of said patches.

At that stage it looked as if Xen was dead (at least to me). Processors were increasingly capable of performing the bottlenecking operations in silicon rather than software, speeding ESX up considerably. The benefits of para-virtualisation were less visible.

OpenSuSE is, to my knowledge, the only distribution still carrying the xen patches forward; however their 11.4 release had lots of usability problems, which caused me to abandon it. OpenSuSE 12.1 seems to continue the forward-carrying of patches.

However, after years of hard work, the xen patches were integrated into the mainline kernel; an interesting background story is provided by Wim Coekaerts here: http://blogs.oracle.com/wim/entry/linux_mainline_contains_all_the. The important change in kernel development was the move away from xenified kernels (which are separate from "the" kernel) to paravirtualised ops, or pvops for short. A pvops kernel contains all the infrastructure necessary to act either as a xen kernel or as a "normal" desktop kernel. Fedora 16, Ubuntu and others use the pvops kernel, which requires xen 4.0.

With kernels 2.6.39 and later it is possible, for the first time, to provide a dom0 and domU without patching the kernel.

My Ubuntu experience

I installed a minimal version of Ubuntu 11.10 server in my environment. To benefit from Xen virtualisation, the following packages plus their dependencies were required (a single apt-get command for the lot is shown after the list):

  • xen-hypervisor-4.1-amd64
  • xen-utils-4.1
  • xenwatch
  • xen-tools
  • xen-utils-common
  • xenstore-utils
  • virtinst
  • virt-viewer
  • virt-manager
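On a stock server install, one apt-get line pulls all of these in:

sudo apt-get install xen-hypervisor-4.1-amd64 xen-utils-4.1 xenwatch \
	xen-tools xen-utils-common xenstore-utils virtinst virt-viewer virt-manager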

This creates a new menu entry for grub2 to boot into. The use of grub2 came as a little shock to me after I had mastered lilo and "legacy" grub. To get on friendlier terms with it I installed startupmanager and referred to help.ubuntu.com/community/Grub2, which is a great resource. Apropos "install": being an RPM person, the use of apt-cache, dpkg and apt-get required a little getting used to. Luckily there are many Debian/Ubuntu tutorials out there, including cheat sheets for these tools.
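For fellow RPM converts, a few rough equivalents from my own notes:

apt-cache search xen         # roughly 'yum search xen'
apt-get install <package>    # roughly 'yum install <package>'
dpkg -l | grep xen           # roughly 'rpm -qa | grep xen'
dpkg -L xen-utils-4.1        # roughly 'rpm -ql xen-utils-4.1'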

Using startupmanager I changed the default boot menu entry to my xen kernel and rebooted. Due to a bug in the ACPI implementation on the server I had to add acpi=ht to the kernel command line in /etc/default/grub.

Also, the /etc/xen/xend-config.sxp file needs to be amended to include (xend-unix-server yes)

Time for a reboot! After you get the console back, check that xen and virsh are working:

# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0 23149     8     r-----     16.6
# virsh version
Compiled against library: libvir 0.9.2
Using library: libvir 0.9.2
Using API: Xen 3.0.1
Running hypervisor: Xen 4.1.0

Networking

After the reboot I was surprised to see a “virbr0” interface in the output of ifconfig. I checked the /etc/network/interfaces file where Ubuntu stores its network configuration but couldn’t find any. The libvirt wiki (http://wiki.libvirt.org/page/Networking) describes the virbr0 interface as the “default” network created by libvirt at boot time, using NAT. The configuration file can be found in /usr/share/libvirt/networks/default.xml and it’s used for quick and dirty live CD or other boot media that aren’t used for production. The documentation states quite clearly that you shouldn’t manually add any interfaces to the bridge. The benefit of using the default network is the ease of use: a DHCP server distributes an IP address, and NAT works out of the box including the correct setting of firewall rules. However it’s not what I want.
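If, like me, you don't want the default network hanging around, virsh can inspect and disable it – a sketch, so check against the wiki page above before removing anything:

virsh net-list --all                     # shows the 'default' NAT network
virsh net-dumpxml default                # the XML definition behind virbr0
virsh net-destroy default                # stop it for this boot
virsh net-autostart default --disable    # stop it coming back at next boot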

What I wanted, though, was a setup similar to what I had before: host-only networks, using br0 for the "public" network and br1 and br2 for the RAC interconnect and iSCSI-related storage. The tools to achieve this are in the bridge-utils package. To set up the bridges, edit /etc/network/interfaces and add these directives:

auto br0
iface br0 inet static
pre-up brctl addbr br0
post-down brctl delbr br0
address 192.168.99.10
netmask 255.255.255.0
bridge_stp on
bridge_maxwait 0
bridge_fd 0

auto br1
iface br1 inet static
pre-up brctl addbr br1
post-down brctl delbr br1
address 192.168.100.10
netmask 255.255.255.0
bridge_stp on
bridge_maxwait 0
bridge_fd 0

auto br2
iface br2 inet static
pre-up brctl addbr br2
post-down brctl delbr br2
address 192.168.101.10
netmask 255.255.255.0
bridge_stp on
bridge_maxwait 0
bridge_fd 0
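After saving the file, the bridges can be brought up without a reboot – something like this should do it:

sudo ifup br0 br1 br2
brctl show    # verify the bridges and their attached interfaces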

You could even add eth0 to one of the bridges, but that is not what I intend: my domUs are nicely hidden from the outside world.

More loop devices

Nearly there! I was in the process of adding shared storage to my server when I noticed I only had 8 loop devices available. Normally that's not a big deal – unload the "loop" module and load it again with a different value for max_loop. However, in this kernel the loop module is statically compiled in, so that approach doesn't work; see below for what the module dance would normally look like.
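For reference, the usual routine when loop is built as a module would be something like this – it just doesn't apply here:

sudo modprobe -r loop              # unload the module
sudo modprobe loop max_loop=64     # reload it with more devices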

After a lot of fiddling, and thanks to a helpful comment (see below), I managed to solve the problem by amending /etc/default/grub, which contains a default line for kernel arguments. Mine has the following content:

GRUB_CMDLINE_LINUX_DEFAULT="max_loop=256 console=tty0 acpi=off"

I wouldn't set acpi=off if I didn't have a problem with the server failing to reboot due to a faulty ACPI implementation. It also means I can't experiment with different NUMA settings, which is a shame.
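One step that's easy to forget: changes to /etc/default/grub only take effect once the grub configuration has been regenerated and the box rebooted:

sudo update-grub
sudo reboot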

Security

One thing I had forgotten to mention is the security aspect: my server didn't have a host-based firewall started right away. After trying to understand "SuSEFirewall" and iptables I was very pleasantly surprised to learn about ufw. A few commands secure your environment; have a look at these two great resources for Ubuntu 11.10:
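As a flavour of how little is needed, a minimal setup might look like this – my own sketch, so adjust the allowed services to your needs:

sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh          # keep remote access open before enabling!
sudo ufw enable
sudo ufw status verbose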

Summary

It looks promising! Just need to work out more loop devices and I'll be good to start installing the first operating system on the new system.

UKOUG – A Deep Dive into the SQL Monitoring Report

The UK Oracle User Group Conference 2011 is just around the corner and I just realized that I haven’t yet highlighted this, other than the “I’m speaking” banner on my blog.  I’ll be speaking on one of my favorite and most used reports — the SQL Monitor Report.  Below are the session details [direct link].  Hope to see you there!

Presentation Title: A Deep Dive into the SQL Monitoring Report
Speaker: Mr Greg Rahn
Company: Oracle
Presentation abstract: The SQL Monitoring Report was introduced in Oracle Database 11g and has become the single most used diagnostic report by the Oracle Real-World Performance Group and Oracle database development for SQL statement performance issues. This session will start with a technical overview of the SQL Monitoring Report and what metrics and information it provides. From there we’ll deep dive into numerous examples from the field explaining how this report was used to diagnose and validate performance issues. There is a wealth of information in the SQL Monitoring Report and this session will provide the necessary knowledge to best leverage it.
Presentation begins: 05/12/2011 11:05
Presentation duration: 60
Presentation content level: 2 (1 = in-depth, 5 = strategic overview)
Audience experience: All experience levels
Audience function: Product Expert, Product User
Speaker biography: Greg Rahn is a database performance engineer in the Real-World Performance Group at Oracle Corporation. He joined the Real-World Performance Group in 2004 and has been working with Oracle databases since 1997. His primary focus is performance and scalability for Oracle data warehouses, specializing in Parallel Execution and Query Optimizer subject areas.
Related topics: Oracle RDBMS: Features and Options, Oracle RDBMS: Performance Management
Hall: Hall 5
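If you want a head start before the session, the report can be pulled ad hoc with dbms_sqltune (11g, Tuning Pack licence required; the sql_id below is a placeholder for the statement of interest):

select
	dbms_sqltune.report_sql_monitor(
		sql_id => '&sql_id',   -- placeholder: the statement of interest
		type   => 'TEXT'       -- 'HTML' and 'ACTIVE' are also available
	) report
from dual;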

 

Upcoming Speaking Engagements

James Morle will be presenting at the UKOUG conference this year. He will be presenting full sessions on Oracle load testing during both the OakTable Sunday event and the main conference on the Wednesday. In addition he will be presenting an OAK Talk during Tuesday lunchtime. February 14-16 2012, James will present Sane SAN at [...]

Announcing OAK Talks

This year at the UKOUG conference the OakTable Network will be trying something a little different. In addition to the usual 45-60 minute presentations during the conference, and the special OakTable Sunday event immediately prior to the conference, we will also be trialling a new concept - the OAK Talk. Anybody that has watched a [...]

Back At Oracle Corporation (The Return Of The Thin White Duke)

I’ve been very quiet on the blogging front lately, so apologies for that. One of the main reasons is that after a period of nearly 10 years, I’ve decided to take up a position back at Oracle Corporation as a “Principal Solutions Consultant” in sunny Canberra. So things have been rather hectic, finishing up in [...]