Oakies Blog Aggregator

Stats collection

From time to time I see people asking how they can check how far their system has got in its call to collect stats. The question is often a little ambiguous – they know which call to dbms_stats they’ve used, so they know whether they’re trying to monitor stats collection against a single table (taking advantage of v$session_longops springs to mind) or against an entire schema.

Here’s one simple-minded approach that I whipped up a few years ago – it’s possible you could do better with the latest versions of dbms_stats. Its purpose is simply to give you some way of checking what work dbms_stats needs to do (in this example, for a schema), so that you can check how much has been done and how much remains. The basic method can be modified in various ways to match your tastes.

We start with the simple observation that many calls to dbms_stats have an “options” parameter that allows you to specify things like ‘LIST STALE’, rather than ‘GATHER STALE’. The “list” options populate an object table type with details of the data segments whose statistics would be gathered by the corresponding “gather” option. So we could start with a simple piece of code to do the following:

declare
	m_object_list	dbms_stats.objecttab;
begin

	dbms_stats.gather_schema_stats(
		ownname		=> 'test_user',
		options		=> 'LIST AUTO',
--		options		=> 'LIST STALE',
--		options		=> 'LIST EMPTY',
		objlist		=> m_object_list
	);

	for i in 1..m_object_list.count loop
		dbms_output.put_line(
			rpad(m_object_list(i).ownname,30)     ||
			rpad(m_object_list(i).objtype, 6)     ||
			rpad(m_object_list(i).objname,30)     ||
			rpad(m_object_list(i).partname,30)    ||
			rpad(m_object_list(i).subpartname,30) ||
			lpad(m_object_list(i).confidence,4)
		);
	end loop;
end;
/

Call this before you make your call to gather stats (in general it might be better to use the utl_file package to write to a file rather than using dbms_output and capturing screen output); then, as time passes, you can check the “last_analyzed” column on the relevant view to see which objects are still waiting for their stats to be collected. Of course, this approach is a little clunky, and requires a certain amount of tedious manual labour to get the check done, but once we have the basic principle the rest is easy. Let’s start by using the code in a pipelined function.
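As an aside, the “last_analyzed” check itself can be as simple as something like this – just a sketch, with a hypothetical owner and start time that you would substitute with your own:

```sql
--
--	Which tables still have stats older than the start of the run
--	(or no stats at all)? Replace owner and date with your own values.
--
select	table_name, last_analyzed
from	dba_tables
where	owner = 'TEST_USER'
and	(   last_analyzed is null
	 or last_analyzed < to_date('2011-04-01 09:00:00','yyyy-mm-dd hh24:mi:ss')
	)
order by
	last_analyzed
;
```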

create or replace function list_stale (
	i_option	in	varchar2,
	i_user		in	varchar2 default user
)
return dbms_stats.objecttab pipelined
as
	pragma autonomous_transaction;
	m_object_list	dbms_stats.objecttab;
begin

	if i_option not in (
		'LIST AUTO', 'LIST STALE','LIST EMPTY'
	) then
		null;
	else
		dbms_stats.gather_schema_stats(
			ownname		=> i_user,
			options		=> i_option,
			objlist		=> m_object_list
		);

		for i in 1..m_object_list.count loop
			pipe row (m_object_list(i));
		end loop;

	end if;

	return;
end;
/

You’ll notice that I’ve declared the function to run as an autonomous transaction – the call to dbms_stats does various things (such as updating the mon_mods$ and col_usage$ tables) that you’re not supposed to do in a pipelined function, but you can hide these by using the autonomous_transaction pragma. (Strangely, the error message you get in the absence of the pragma is: “ORA-04092: cannot COMMIT in a trigger”.)

With this definition I can call the function with code like:

select  *
from    table(list_stale('LIST AUTO'))

OWNNAME      OBJTYP OBJNAME         PARTNAME         SUBPARTNAME            CONFIDENCE
------------ ------ --------------- ---------------- ---------------------- ----------
TEST_USER    TABLE  PT_RANGE                                                       100
TEST_USER    TABLE  PT_RANGE        P600                                           100
TEST_USER    TABLE  PT_RL                                                          100
TEST_USER    TABLE  PT_RL           P_2002_MAR                                     100
TEST_USER    TABLE  PT_RL           P_2002_MAR       P_2002_MAR_THE_REST           100
TEST_USER    TABLE  PT_RL           P_2002_MAR       P_2002_MAR_MN                 100
TEST_USER    TABLE  PT_RL           P_2002_MAR       P_2002_MAR_EAST_COAST         100
TEST_USER    TABLE  PT_RL           P_2002_MAR       P_2002_MAR_CA                 100
TEST_USER    TABLE  T1                                                             100
TEST_USER    INDEX  PT_PK                                                          100
TEST_USER    INDEX  PT_PK           P600                                           100
TEST_USER    INDEX  T1_PK                                                          100
        ... etc ...

This will give me a list of all objects in my schema that need stats collected – either because they have no stats, or their stats are stale.

But I can be more subtle than this. I could, for example, write a query that joins this “table” to the view dba_segments and keep running it as the stats collection proceeds to report the objects that are still waiting for stats and the sizes of those objects. The SQL needs just a little thought as the objtype is only reported as “TABLE” or “INDEX” so you have to do a UNION ALL in the join and work out which rows are really for partitions and which for subpartitions. And then you have to mess about a bit with outer joins because Oracle will, for example, want to collect stats on a table when a partition is stale – and there is no data segment at the table level of a partitioned table.

Your code might look something like this:

with list_result as (
	select *
	from	table(list_stale('LIST AUTO'))
)
select
	lst.objname, nvl(seg.segment_type,lst.objtype) segment_type, lst.partname, seg.blocks
from
	list_result	lst,
	dba_segments	seg
where
	lst.partname is null
and	seg.owner(+) = lst.ownname
and	seg.segment_type(+) = lst.objtype
and	seg.segment_name(+) = lst.objname
and	seg.partition_name is null
union all
select
	lst.objname, nvl(seg.segment_type,lst.objtype || ' PARTITION') segment_type, lst.partname, seg.blocks
from
	list_result	lst,
	dba_segments	seg
where
	lst.subpartname is null
and	lst.partname is not null
and	seg.owner(+) = lst.ownname
and	substr(seg.segment_type(+),1,5) = lst.objtype
and	seg.segment_name(+) = lst.objname
and	seg.partition_name(+) = lst.partname
union all
select
	lst.objname, segment_type, lst.partname, seg.blocks
from
	list_result	lst,
	dba_segments	seg
where
	lst.subpartname is not null
and	seg.owner = lst.ownname
and	substr(seg.segment_type,1,5) = lst.objtype
and	seg.segment_name = lst.objname
and	seg.partition_name = lst.subpartname
order by
	1,2,3
/

In a tiny schema, where every data segment is one of my “1MB uniform extents”, this is a sample of the output I got:

OBJNAME         SEGMENT_TYPE       PARTNAME                           BLOCKS
--------------- ------------------ ------------------------------ ----------
PT_PK           INDEX
PT_PK           INDEX PARTITION    P200                                  128
PT_PK           INDEX PARTITION    P400                                  128
PT_PK           INDEX PARTITION    P600                                  128
PT_RANGE        TABLE
PT_RANGE        TABLE PARTITION    P200                                  128
PT_RANGE        TABLE PARTITION    P400                                  128
PT_RANGE        TABLE PARTITION    P600                                  128
PT_RL           TABLE
PT_RL           TABLE PARTITION    P_2002_FEB
PT_RL           TABLE PARTITION    P_2002_MAR
PT_RL           TABLE SUBPARTITION P_2002_FEB                            128
PT_RL           TABLE SUBPARTITION P_2002_FEB                            128
PT_RL           TABLE SUBPARTITION P_2002_FEB                            128
PT_RL           TABLE SUBPARTITION P_2002_FEB                            128
PT_RL           TABLE SUBPARTITION P_2002_MAR                            128
PT_RL           TABLE SUBPARTITION P_2002_MAR                            128
PT_RL           TABLE SUBPARTITION P_2002_MAR                            128
PT_RL           TABLE SUBPARTITION P_2002_MAR                            128
RL_PK           INDEX
RL_PK           INDEX PARTITION    P_2002_FEB
RL_PK           INDEX PARTITION    P_2002_MAR
RL_PK           INDEX SUBPARTITION P_2002_FEB                            128
RL_PK           INDEX SUBPARTITION P_2002_FEB                            128
RL_PK           INDEX SUBPARTITION P_2002_FEB                            128
RL_PK           INDEX SUBPARTITION P_2002_FEB                            128
RL_PK           INDEX SUBPARTITION P_2002_MAR                            128
RL_PK           INDEX SUBPARTITION P_2002_MAR                            128
RL_PK           INDEX SUBPARTITION P_2002_MAR                            128
RL_PK           INDEX SUBPARTITION P_2002_MAR                            128
T1              TABLE                                                    128
T1_PK           INDEX                                                    128
T2              TABLE                                                    128
T2_I1           INDEX                                                    128

Obviously you could do something more complex to associate indexes with their tables, then sum subpartitions back to partitions and partitions back to table level stats – but a simple list of objects with primary physical sizes might be sufficient to give you an idea of how much work is still to be done. If you want to be really elegant (or extravagant) you could even combine this SQL with another piece reporting the objects with last_analyzed more recent than the start of the run including, if you’re not running a parallel collection, the approximate time taken for each object (think lag() and comparing consecutive times).
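That last idea might be sketched like this – assuming a serial (non-parallel) collection, so that consecutive last_analyzed values approximate per-object elapsed time, and using a hypothetical owner and start time:

```sql
--
--	Approximate time per object: the gap between consecutive
--	last_analyzed values. Only meaningful for serial collections.
--
select
	table_name,
	last_analyzed,
	round(86400 * (
		last_analyzed - lag(last_analyzed) over (order by last_analyzed)
	))	seconds_taken
from
	dba_tables
where
	owner = 'TEST_USER'
and	last_analyzed >= to_date('2011-04-01 09:00:00','yyyy-mm-dd hh24:mi:ss')
order by
	last_analyzed
;
```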

Oracle Database XE (Express Edition) 11.2 beta is out!

Oracle Database XE (Express Edition) 11.2 Beta has been released!

Here you go:

The documentation is here (the link at oracle.com/technetwork is broken right now…)

I’m sure you want to know about its limitations. These are documented in the license guide above, but here are the main figures from the Oracle XE 11.2 license guide:

  1. Express Edition is limited to a single instance on any server;
  2. Express Edition may be installed on a multiple CPU server, but may only be executed on one processor in any server;
  3. Express Edition may only be used to support up to 11 GB of user data (not including Express Edition system data);
  4. Express Edition may use up to 1 GB RAM of available memory.

Installing Oracle Linux 6 as a domU in Xen

Despite all recent progress in other virtualisation technologies I am staying faithful to Xen. The reason is simple: it works for me. Paravirtualised domUs are the fastest way to run virtual machines on hardware that doesn’t support virtualisation in the processor.

I read about cgroups yesterday, a feature that appeared in kernel 2.6.38 and apparently was back-ported into RedHat 6. Unfortunately I can’t get hold of a copy, so I decided to use Oracle’s clone instead. I wanted to install the new domU on my existing lab environment, which is a 24G RAM core i7 920 system with 1.5TB of storage. The only limitation I can see is the low number of cores; I wish I could rent an Opteron 6100 series system instead (for the same price).

Creating the domU

The first setback was the failure of virt-manager. Virt Manager is OpenSuSE’s preferred tool to create xen virtual machines. I wrote about virt-manager and OpenSuSE 11.2 some time ago and went back to this post for instructions. However, the logic coded into the application doesn’t seem to handle OL6: it repeatedly failed to start the PV kernel. I assume the directory structure on the ISO has changed, or there is some other configuration issue here – maybe even a PEBKAC.

That was a bit of a problem, because it meant I had to do the legwork all on my own. Thinking about it, virt-manager is not really to blame: OL6 wasn’t yet released when the tool came out. So be it, at least I’ll learn something new today. To start with, I needed to create a new “disk” to contain my root volume group. The way the dom0 is set up doesn’t allow me to use LVM logical volumes – all the space is already allocated. My domUs are all stored in /var/lib/xen/images/domUName. I started off by creating the top level directory for my new OL6 reference domU:

# mkdir /var/lib/xen/images/ol6

Inside the directory I created the sparse file for my first “disk”:

# cd /var/lib/xen/images/ol6
# dd if=/dev/zero of=disk0 bs=1024k seek=8192 count=0

This will create a sparse file (much like a temp file in Oracle) for use as my first virtual disk. The next step was to extract the kernel and initial ramdisk from the ISO image and store it somewhere convenient. My default location for xen kernels is /m/xenkernels/domUName. The new kernel is a PVOPS kernel (but still not dom0 capable!) so copying it from a loopback mounted ISO image’s isolinux/vmlinuz location was enough. There is also only one initrd to copy. You should get them into the xen kernel location as shown here:
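You can verify the sparseness by comparing the file’s apparent size with the blocks actually allocated – a quick demonstration using a throwaway file in /tmp rather than the domU directory:

```shell
# Create an 8GB sparse file: seek to the 8192MB mark, write nothing.
dd if=/dev/zero of=/tmp/sparse_demo bs=1024k seek=8192 count=0

ls -l /tmp/sparse_demo    # apparent size: 8589934592 bytes
du -k /tmp/sparse_demo    # blocks actually allocated: close to zero

rm -f /tmp/sparse_demo
```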

# mkdir /m/xenkernels/ol6
# cp /mnt/ol6/isolinux/{initrd.img,vmlinuz} /m/xenkernels/ol6

Next, I copied the contents of the ISO image to /var/srw/www/htdocs/ol6.

We now need a configuration file to start the domU initially. The config file below worked for me – I have deliberately not chosen a libvirt compatible XML file to keep the example simple. We’ll convert to xenstore later.

# cat /tmp/rhel6ref
name="rhel6ref"
memory=1024
maxmem=4096
vcpus=2
on_poweroff="destroy"
on_reboot="restart"
on_crash="destroy"
localtime=0
builder="linux"
bootargs=""
extra=" "
disk=[ 'file:/var/lib/xen/images/rhel6ref/disk0,xvda,w', ]
vif=[ 'mac=00:16:1e:1b:1d:ef,bridge=br1', ]
vfb=['type=vnc,vncdisplay=11']
kernel = "/m/xenkernels/ol6/vmlinuz"
ramdisk = "/m/xenkernels/ol6/initrd.img"

As I am forwarding the VNC port I needed a fixed one. In my putty session I forwarded local port 5911 to my dom0’s port 5911. Now start the domU using the xm create /tmp/rhel6ref command. Within a few seconds you should be able to point your vncviewer to localhost:11 and connect to the domU. That’s all! From now on the installation is really just a matter of “next”, “next”, “next”. Some things have changed though, so have a look at these selected screenshots.

Walking through the installation

First of all you need to configure the network for the domU to talk to your staging server. I always use a manual configuration, and configure IPv4 only. The URL setup is interesting – I used http://dom0/ol6 as the repository and got further. When in doubt, check the access and error logs in /var/log/apache2.

After the welcome screen I was greeted with a message stating that my xen-vbd-xxx device had to be reinitialised. Huh? But ok, so I did that and progressed. I then entered the hostname and got to the partitioning screen. I chose to “use all space” and ticked the box next to “review and modify partitioning layout”. Remember that ext4 is now the default for all file systems, but OpenSuSE’s pygrub can’t read it. The important step is to ensure that you have a separate /boot partition outside any LVM devices, and that it’s formatted with ext2. The ext3 file system might also work, but I decided to stick with ext2, which I knew pygrub could deal with. I also tend to rename my volume group to rootvg, instead of vg_hostname as the installer suggests.

The VNC interface became a little difficult to use when I was asked to select a timezone, so I deferred that to later. I ran into a bit of a problem when it came to the package selection screen. Suddenly the installer, which had happily read all its data from my apache setup, claimed it couldn’t read the repodata.xml file. I thought that was strange but then manually pointed it to the Server/repodata/repomd.xml file and clicked on the install button. Unfortunately the installer now couldn’t read the first package. The reason was quickly identified in the access log:

192.168.99.124 - - [01/Apr/2011:14:01:42 +0200] "GET /ol6/Server/Packages/alsa-utils-1.0.21-3.el6.x86_64.rpm HTTP/1.1" 403 1036 "-" "Oracle Linux Server (anaconda)/6.0"
192.168.99.124 - - [01/Apr/2011:14:03:10 +0200] "GET /ol6/Server/Packages/alsa-utils-1.0.21-3.el6.x86_64.rpm HTTP/1.1" 403 1036 "-" "Oracle Linux Server (anaconda)/6.0"

HTTP 403 errors (i.e. FORBIDDEN). They were responsible for the problem with the repomd.xml file as well:

192.168.99.124 - - [01/Apr/2011:14:00:53 +0200] "GET /ol6/repodata/repomd.xml HTTP/1.1" 403 1036 "-" "Oracle Linux Server (anaconda)/6.0"

The reason for these could be found in the error log:

[Fri Apr 01 14:00:53 2011] [error] [client 192.168.99.124] Symbolic link not allowed or link target not accessible: /srv/www/htdocs/ol6/repodata

Oha. Where do these come from? Fair enough, the directory structure has changed:

[root@dom0 /srv/www/htdocs/ol6] # ls -l repodata
lrwxrwxrwx 1 root root 15 Apr  1 14:09 repodata -> Server/repodata

Now then, because this is my lab and I’m solely responsible for its security, I changed the Options in my apache’s server root to FollowSymLinks. Do not do this in real life! Create a separate directory, or alias, and don’t compromise your server root. Enough said …
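A safer alternative – a sketch of what the “separate directory or alias” advice might look like, not what I actually configured – is an Alias with its own Directory block, so FollowSymLinks is scoped to the staging tree only:

```apache
# Hypothetical fragment: serve the OL6 staging tree via an alias,
# allowing symlinks inside it without touching the server root Options.
Alias /ol6 /srv/www/htdocs/ol6
<Directory /srv/www/htdocs/ol6>
    Options FollowSymLinks
    Order allow,deny
    Allow from all
</Directory>
```

(The Order/Allow directives are apache 2.2 syntax, matching the apache2 daemon of that era.)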

A reload of the apache2 daemon fixed that problem, but I had to start from scratch. This time however it ran through without problems.

Cleaning Up

When the installer prompts you for a reboot, don’t click on the “OK” button just yet. The configuration file needs to be changed to use the bootloader pygrub. Change the configuration file to something similar to this:

# cat /tmp/rhel6ref
name="rhel6ref"
memory=1024
maxmem=4096
vcpus=2
on_poweroff="destroy"
on_reboot="restart"
on_crash="destroy"
localtime=0
builder="linux"
bootargs=""
extra=" "
disk=[ 'file:/var/lib/xen/images/rhel6ref/disk0,xvda,w', ]
vif=[ 'mac=00:16:1e:1b:1d:ef,bridge=br1', ]
vfb=['type=vnc,vncdisplay=11']
bootloader = "pygrub"

The only change is the replacement of the kernel and ramdisk lines with bootloader. You may have to xm destroy the VM for the change to take effect, a reboot doesn’t seem to trigger a reparse of the configuration file.

With that done, restart the VM and enjoy the final stages of the configuration. If your domU doesn’t start now, you probably forgot to format /boot with ext2 and it is ext4. In that case you have to do some research on Google as to whether or not you can save your installation.

The result is a new Oracle Linux reference machine!

# ssh root@192.168.99.124
The authenticity of host '192.168.99.124 (192.168.99.124)' can't be established.
RSA key fingerprint is 3d:90:d5:ef:33:e1:15:f8:eb:4a:38:15:cd:b9:f1:7e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.99.124' (RSA) to the list of known hosts.
root@192.168.99.124's password:
[root@rhel6ref ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.0 (Santiago)
[root@rhel6ref ~]# uname -a
Linux rhel6ref.localdomain 2.6.32-100.28.5.el6.x86_64 #1 SMP Wed Feb 2 18:40:23 EST
2011 x86_64 x86_64 x86_64 GNU/Linux

If you want, you can register the domU in xenstore – while the domU is up and running, dump the XML file using virsh dumpxml > /tmp/ol6.xml. My configuration file looked like this:


<domain type='xen'>
  <name>ol6ref</name>
  <uuid>530c97e6-3c23-ec9f-d65b-021a79d61585</uuid>
  <memory>4194304</memory>
  <currentMemory>1048576</currentMemory>
  <vcpu>2</vcpu>
  <bootloader>/usr/bin/pygrub</bootloader>
  <os>
    <type>linux</type>
  </os>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/lib64/xen/bin/qemu-dm</emulator>
    <!-- disk, interface and vfb definitions matching the xm config above -->
  </devices>
</domain>

Shut the domU down and register it using a call to “virsh define /tmp/ol6.xml”. Subsequent editing can be done using “virsh edit ol6”.

Scripts to Download Documentation

In this post I pointed out that I like to have a copy of the documentation in PDF format on my notebook. In the same post, and its comments, I also described how I generate the scripts I use to download the files. Recently I updated the scripts and, as a result, I thought I’d share them with you. So, below you find the CMD and SH scripts for the documentation related to 10.2, 11.1 and 11.2.

I hope you find them useful.

Highly Recommended

I've built up quite a blogging back-log that I hope to clear a little over the next week, but I saw something today that's more important than any of the posts to come so I wanted to blog about that quickly first.

John Scott, a pillar of the Oracle Apex community, came up with a great idea. There’ll be no wild emotion round here because I didn’t know any of the people involved well, but I can wholeheartedly recommend people buy that book. Not only can I be fairly certain that it’ll be a high quality book written by people who are knowledgeable and passionate about Apex, but it’s going to help out the families of a couple of good friends of the Apex community. Even my miserable and cloudy personality is struggling to see a down-side there ;-)

I tip my hat to the authors and hope as many people as possible place an order.

Nice idea John!

6th Planboard DBA Symposium – Registration now open

I am pleased to announce that we, the program committee members, just finalized the program for the 6th Planboard DBA Symposium to be held at May 17 2011 in Amstelveen. There are 10 presentations scheduled in 2 parallel tracks by Frits Hoogland, Johan van Veen, Iloon Ellen-Wolff, Paul der Kinderen, Wouter Wethmar, Alex Nuijten, Gert […]

Virgin Media Customer Service…

I have two big customer service rants coming. This one is related to Virgin Media and the next one will be about SCAN. I’ll hold back on the second one while I wait to see if they do the right thing.

I had Virgin Media installed on 9th March and I am completely happy with the cable broadband service, but they were unable to take on my phone line because of capacity issues. That resulted in a complete mess over the billing of my service because of the way Virgin Media process discounts on bundles. I contacted customer services and after a lot of messing about the billing situation was resolved to my satisfaction.

A few weeks later I was notified of my first bill and it was completely wrong. I got in touch with customer services and after a very long time the issue seemed to be resolved AGAIN, but I was told I would be contacted by midday the same day for confirmation by a manager. It is now 6 days later and I’ve not received a call. In that time I’ve been phoning trying to get it resolved.

I’ve just got off the phone and I’ve been assured it is sorted now, but I know it will all be a steaming pile of poo next month when the bill is sorted again.

Each person I’ve talked to has been fine, but the whole process and the systems involved are terrible. It stinks. In no way can this be called good customer services. It is a complete nightmare.

Sometimes bad customer service is because the people answering the phones are morons. Sometimes it is because the procedures and systems they are working with are idiotic. It comes to something when the person on the other end of the phone is unable to tell you how much you will be charged because the systems they are using are so convoluted with a variety of possible discounts that may or may not be applied…

So in conclusion, Virgin Media customer services is a festering pile of donkey crap…

Cheers

Tim…

Oracle11g: Analyze Table Validate Structure Cascade “FAST” (Slow Burn)

I always take notice when Oracle introduces a new “FAST” option, so it was with some excitement that I first noticed in Oracle 11g Rel 1 there was a new FAST option when running the ANALYZE TABLE CASCADE VALIDATE STRUCTURE command. This was described in the manuals as introducing a hashing scheme that was significantly [...]

Congratulations to Yury Velikanov, now an Oracle ACE!

I’m very excited to announce this somewhat old news — we have a brand new Oracle ACE at Pythian — Yury Velikanov. Fantastic addition to Pythian’s team of ACEs and ACE Directors — Fahd Mirza, Gwen Shapira, Jared Still, Christo Kutrovsky and myself. If you want to know more about Oracle ACE Program, the latest issue of Oracle Magazine has an article written by the fellow Oracle ACE Mike Riley — Everything Is Coming Up ACEs!.

I’ve known Yury for a while as I met him online on Oracle-L many years ago. The world is small and, after living on different continents (North America and Europe), providence put us together on a third continent — Australia. As I was building our Australian business in Sydney, it took me a year to convince Yury to join Pythian. But I tell you, it was worth the effort!

Yury is one of our top Senior Oracle Apps DBAs and he is also extremely capable when it comes to Oracle core database technology and middleware, which you would expect from a very good Apps DBA. A decade ago, Yury was already fusing Oracle E-Business Suite with Oracle RAC when most people couldn’t even afford to think about it. In 2003, Yury became one of the first five Oracle Certified Masters in Europe. In other words, Yury is a very talented individual with top notch Oracle EBS and Oracle database skills.

As you probably know, excellent technical skills are not the only required quality of an Oracle ACE. Oracle ACEs are distinguished for their contributions to the community and Yury has always been an enthusiastic Oracle expert willing to both share his knowledge and learn from others. As a result, he is an active contributor to Oracle community, regionally and globally.

Yury founded an Oracle e-Business Suite related mailing list several years ago to help Oracle Apps DBAs around the world exchange their experiences and help others. In Australia, Yury has become an organizer of the Sydney Oracle Meetup – an informal group of Oracle professionals meeting regularly face to face.

Yury also has a long list of conferences he has presented at, including the Hotsos Symposium, UKOUG, AUSOUG and others. Recently, Yury has also started blogging on the Pythian Blog and is planning to do more and more.

I would like to note that Yury is a person who never refuses to help — he will lose sleep and work 42 hours per day if needed, but he won’t step away if asked for help. I’ve been there myself and I know that he is a very approachable individual who will do whatever it takes.

After all this, how could I not nominate him as an Oracle ACE? Right. There was no way! Now, a few months later, Yury has happily joined our Oracle ACE team at Pythian. It actually happened a few weeks ago; it just took me so long to write about it!

Oracle Troubleshooting TV Show: Season 1, Episode 01 ;-)

Ok, it’s official – the first and only Oracle Troubleshooting TV show is live now!

The first show is almost 2 hours about the ORA-4031 errors and shared pool hacking. It’s a recording of the US/EMEA timezone online hacking session I did some days ago.

There are a couple of things to note:

  1. The text still isn’t as sharp as in the original recording, but it’s much better than in my previous upload attempts and is decently readable. I’ll try some more variations with my next shows so I hope the text quality will get better! Or maybe I should just switch to GUI tools or powerpoint slides? ;-)
  2. You probably should view this video in full screen (otherwise the text will be tiny and unreadable)
  3. There’s advertising in the beginning (and maybe end) of this show! I’ll see how much money I’ll make out of this – maybe these shows will start contributing towards the awesome beer selection I’ll have in my fridge some day (right now I have none). Viewing a 30-sec advert is a small price to pay for 2 hours of kick-ass shared pool hacking content!!!
  4. You can download the scripts and tools used in the demos from http://tech.e2sn.com/oracle-scripts-and-tools/
  5. Make sure you check out my online Oracle troubleshooting seminars too (this April and May already)

View the embedded video below or go to my official Oracle Troubleshooting TV show channel:

http://tanelpoder.blip.tv

Enjoy!