Oakies Blog Aggregator

Using ROWNUM in the Where Clause Causes Problems

April 4, 2011

A couple of years ago a very well-written article appeared in Oracle Magazine that described how to use ROWNUM in the WHERE clause, and also why using ROWNUM might not work quite as expected. Let's assume that we want to do something a bit unusual, as described in this forum thread. We set [...]

Participation Inequality in Clubs and Communities…

The subject of parental participation in kids clubs came up in conversation today and it reminded me of the 90-9-1 rule and a recent post about the usage of Twitter.

I've been involved in a number of clubs and classes since university, and in recent years I've watched the clubs that kids of my friends and family attend. In most cases they follow a similar pattern. There is a very small number of highly motivated people who keep the club functioning. A slightly larger number hang around on the periphery, stepping in from time to time to help out, but shying away from any formal commitment. Finally, the vast majority of people do nothing but drop off their kids and let them get on with it.

Sound familiar? :)

I’m sure you’ve seen a similar pattern yourself in clubs, online communities and even in companies you’ve worked for.

Cheers

Tim…

Lots of SSDs, linked with InfiniBand. I built it.

Hadoop no longer feels worth testing any further to me. So instead I put together a cluster storage node packed with SSDs:

Then I linked the nodes with 40Gbps InfiniBand:

And then... I lined up a lot of them:

At that point even 40Gbps InfiniBand stops being enough... This is the weakness of Shared Everything, and it is exactly where the Shared Nothing camp attacks Exadata.

Next I want to test with a motherboard that can take up to three InfiniBand cards. Which CPU could drive 40Gbps x 3???

UltraEdit for Mac/Linux v2.1.0.3…

Followers of the blog will know I’m a big fan of UltraEdit. I have a multi-platform unlimited upgrades license, so I run it on Linux, Mac and occasionally on a Windows VM.

I noticed today that version 2.1.0.3 was released for Mac and Linux about a month ago. Not sure how I missed that in the update notices. :) The changes for Mac are not that big because it was already at version 2.x, but the Linux version had been hanging around at 1.x for some time and was missing a lot of functionality compared to the Mac version. This latest release is a pretty big catch-up for the Linux version, and it now contains pretty much all of the functionality I use on a regular basis.

Both the Mac and Linux versions are still lagging behind the Windows version in terms of total functionality, but who cares about Windows… :)

Cheers

Tim…

Stats collection

From time to time I see people asking how they can check how far their system has got in its call to collect stats. The question is often a little ambiguous: they know which call to dbms_stats they've used, so they know whether they're trying to monitor stats collection against a single table (v$session_longops springs to mind) or against an entire schema.
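As an aside, for the single-table case the v$session_longops check might look something like this - a minimal sketch, and the opname filter is an assumption since the exact operation text varies by version:

select	sid, serial#, opname, target, sofar, totalwork,
	round(100 * sofar / totalwork, 1)	pct_done
from	v$session_longops
where	opname like 'Gather%'		-- assumption: match the stats-gathering operation name
and	sofar < totalwork
order by
	start_time
;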

Here's one simple-minded approach that I whipped up a few years ago; it's possible you could do better with the latest versions of dbms_stats. Its purpose is simply to give you some way of checking what work dbms_stats needs to do (in this example, for a schema), so that you can check how much has been done and how much remains. The basic method can be modified in various ways to match your tastes.

We start with the simple observation that many calls to dbms_stats have an “options” parameter that allows you to specify things like ‘LIST STALE’, rather than ‘GATHER STALE’. The “list” options populate an object table type with details of the data segments whose statistics would be gathered by the corresponding “gather” option. So we could start with a simple piece of code to do the following:

declare
	m_object_list	dbms_stats.objecttab;
begin

	dbms_stats.gather_schema_stats(
		ownname		=> 'test_user',
		options		=> 'LIST AUTO',
--		options		=> 'LIST STALE',
--		options		=> 'LIST EMPTY',
		objlist		=> m_object_list
	);

	for i in 1..m_object_list.count loop
		dbms_output.put_line(
			rpad(m_object_list(i).ownname,30)     ||
			rpad(m_object_list(i).objtype, 6)     ||
			rpad(m_object_list(i).objname,30)     ||
			rpad(m_object_list(i).partname,30)    ||
			rpad(m_object_list(i).subpartname,30) ||
			lpad(m_object_list(i).confidence,4)
		);
	end loop;
end;
/

Call this before you make your call to gather stats (in general it might be better to use the utl_file package to write to a file rather than using dbms_output and capturing screen output). Then, as time passes, you can check the “last_analyzed” column on the relevant view to see which objects are still waiting for their stats to be collected. Of course, this approach is a little clunky, and requires a certain amount of tedious manual labour to get the check done, but once we have the basic principle the rest is easy.
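For example, a quick check of which tables are still waiting might be something like the following - a sketch only; substitute the start time of your gather run for the one-hour cut-off, and repeat the idea against user_indexes, user_tab_partitions and so on:

select	table_name, last_analyzed
from	user_tables
where	last_analyzed is null
or	last_analyzed < sysdate - 1/24	-- assumption: the gather run started an hour ago
order by
	last_analyzed nulls first
;

Let's start by using the code in a pipelined function.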

create or replace function list_stale (
	i_option	in	varchar2,
	i_user		in	varchar2 default user
)
return dbms_stats.objecttab pipelined
as
	pragma autonomous_transaction;
	m_object_list	dbms_stats.objecttab;
begin

	if i_option not in (
		'LIST AUTO', 'LIST STALE','LIST EMPTY'
	) then
		null;
	else
		dbms_stats.gather_schema_stats(
			ownname		=> i_user,
			options		=> i_option,
			objlist		=> m_object_list
		);

		for i in 1..m_object_list.count loop
			pipe row (m_object_list(i));
		end loop;

	end if;

	return;
end;
/

You'll notice that I've declared the function to run as an autonomous transaction: the call to dbms_stats does various things (such as updating the mon_mods$ and col_usage$ tables) that you're not supposed to do in a pipelined function, but you can hide these by using the autonomous_transaction pragma. (Strangely, the error message you get in the absence of the pragma is: “ORA-04092: cannot COMMIT in a trigger”.)

With this definition I can call the function with code like:

select  *
from    table(list_stale('LIST AUTO'));

OWNNAME      OBJTYP OBJNAME         PARTNAME         SUBPARTNAME            CONFIDENCE
------------ ------ --------------- ---------------- ---------------------- ----------
TEST_USER    TABLE  PT_RANGE                                                       100
TEST_USER    TABLE  PT_RANGE        P600                                           100
TEST_USER    TABLE  PT_RL                                                          100
TEST_USER    TABLE  PT_RL           P_2002_MAR                                     100
TEST_USER    TABLE  PT_RL           P_2002_MAR       P_2002_MAR_THE_REST           100
TEST_USER    TABLE  PT_RL           P_2002_MAR       P_2002_MAR_MN                 100
TEST_USER    TABLE  PT_RL           P_2002_MAR       P_2002_MAR_EAST_COAST         100
TEST_USER    TABLE  PT_RL           P_2002_MAR       P_2002_MAR_CA                 100
TEST_USER    TABLE  T1                                                             100
TEST_USER    INDEX  PT_PK                                                          100
TEST_USER    INDEX  PT_PK           P600                                           100
TEST_USER    INDEX  T1_PK                                                          100
        ... etc ...

This will give me a list of all objects in my schema that need stats collected – either because they have no stats, or their stats are stale.

But I can be more subtle than this. I could, for example, write a query that joins this “table” to the view dba_segments and keep running it as the stats collection proceeds to report the objects that are still waiting for stats and the sizes of those objects. The SQL needs just a little thought as the objtype is only reported as “TABLE” or “INDEX” so you have to do a UNION ALL in the join and work out which rows are really for partitions and which for subpartitions. And then you have to mess about a bit with outer joins because Oracle will, for example, want to collect stats on a table when a partition is stale – and there is no data segment at the table level of a partitioned table.

Your code might look something like this:

with list_result as (
	select *
	from	table(list_stale('LIST AUTO'))
)
select
	lst.objname, nvl(seg.segment_type,lst.objtype) segment_type, lst.partname, seg.blocks
from
	list_result	lst,
	dba_segments	seg
where
	lst.partname is null
and	seg.owner(+) = lst.ownname
and	seg.segment_type(+) = lst.objtype
and	seg.segment_name(+) = lst.objname
and	seg.partition_name is null
union all
select
	lst.objname, nvl(seg.segment_type,lst.objtype || ' PARTITION') segment_type, lst.partname, seg.blocks
from
	list_result	lst,
	dba_segments	seg
where
	lst.subpartname is null
and	lst.partname is not null
and	seg.owner(+) = lst.ownname
and	substr(seg.segment_type(+),1,5) = lst.objtype
and	seg.segment_name(+) = lst.objname
and	seg.partition_name(+) = lst.partname
union all
select
	lst.objname, segment_type, lst.partname, seg.blocks
from
	list_result	lst,
	dba_segments	seg
where
	lst.subpartname is not null
and	seg.owner = lst.ownname
and	substr(seg.segment_type,1,5) = lst.objtype
and	seg.segment_name = lst.objname
and	seg.partition_name = lst.subpartname
order by
	1,2,3
/

In a tiny schema, where every data segment is one of my “1MB uniform extents”, this is a sample of the output I got:

OBJNAME         SEGMENT_TYPE       PARTNAME                           BLOCKS
--------------- ------------------ ------------------------------ ----------
PT_PK           INDEX
PT_PK           INDEX PARTITION    P200                                  128
PT_PK           INDEX PARTITION    P400                                  128
PT_PK           INDEX PARTITION    P600                                  128
PT_RANGE        TABLE
PT_RANGE        TABLE PARTITION    P200                                  128
PT_RANGE        TABLE PARTITION    P400                                  128
PT_RANGE        TABLE PARTITION    P600                                  128
PT_RL           TABLE
PT_RL           TABLE PARTITION    P_2002_FEB
PT_RL           TABLE PARTITION    P_2002_MAR
PT_RL           TABLE SUBPARTITION P_2002_FEB                            128
PT_RL           TABLE SUBPARTITION P_2002_FEB                            128
PT_RL           TABLE SUBPARTITION P_2002_FEB                            128
PT_RL           TABLE SUBPARTITION P_2002_FEB                            128
PT_RL           TABLE SUBPARTITION P_2002_MAR                            128
PT_RL           TABLE SUBPARTITION P_2002_MAR                            128
PT_RL           TABLE SUBPARTITION P_2002_MAR                            128
PT_RL           TABLE SUBPARTITION P_2002_MAR                            128
RL_PK           INDEX
RL_PK           INDEX PARTITION    P_2002_FEB
RL_PK           INDEX PARTITION    P_2002_MAR
RL_PK           INDEX SUBPARTITION P_2002_FEB                            128
RL_PK           INDEX SUBPARTITION P_2002_FEB                            128
RL_PK           INDEX SUBPARTITION P_2002_FEB                            128
RL_PK           INDEX SUBPARTITION P_2002_FEB                            128
RL_PK           INDEX SUBPARTITION P_2002_MAR                            128
RL_PK           INDEX SUBPARTITION P_2002_MAR                            128
RL_PK           INDEX SUBPARTITION P_2002_MAR                            128
RL_PK           INDEX SUBPARTITION P_2002_MAR                            128
T1              TABLE                                                    128
T1_PK           INDEX                                                    128
T2              TABLE                                                    128
T2_I1           INDEX                                                    128

Obviously you could do something more complex to associate indexes with their tables, then sum subpartitions back to partitions and partitions back to table-level stats, but a simple list of objects with primary physical sizes might be sufficient to give you an idea of how much work is still to be done. If you want to be really elegant (or extravagant) you could even combine this SQL with another piece reporting the objects whose last_analyzed is more recent than the start of the run, including, if you're not running a parallel collection, the approximate time taken for each object (think lag() and comparing consecutive times).
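A minimal sketch of that last idea, for tables only, assuming a serial collection (the &run_start substitution variable is a placeholder for your own start time):

select
	table_name,
	last_analyzed,
	round(
		(last_analyzed - lag(last_analyzed) over (order by last_analyzed)) * 86400
	)	approx_secs	-- gap to the previous object, in seconds
from	dba_tables
where	owner = 'TEST_USER'
and	last_analyzed >= to_date('&run_start','dd-mon-yyyy hh24:mi:ss')
order by
	last_analyzed
;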

Oracle Database XE (Express Edition) 11.2 beta is out!

Oracle Database XE (Express Edition) 11.2 Beta has been released!

Here you go:

The documentation is here (the link at oracle.com/technetwork is broken right now…)

I'm sure you want to know about its limitations. These are documented in the license guide, but here are the main figures from the Oracle XE 11.2 license guide:

  1. Express Edition is limited to a single instance on any server;
  2. Express Edition may be installed on a multiple CPU server, but may only be executed on one processor in any server;
  3. Express Edition may only be used to support up to 11 GB of user data (not including Express Edition system data);
  4. Express Edition may use up to 1 GB RAM of available memory.

Installing Oracle Linux 6 as a domU in Xen

Despite all recent progress in other virtualisation technologies I am staying faithful to Xen. The reason is simple: it works for me. Paravirtualised domUs are the fastest way to run virtual machines on hardware that doesn’t support virtualisation in the processor.

I read about cgroups yesterday, a feature that appeared in kernel 2.6.38 and apparently was back-ported into RedHat 6. Unfortunately I can't get hold of a copy, so I decided to use Oracle's clone instead. I wanted to install the new domU on my existing lab environment, a Core i7 920 system with 24G of RAM and 1.5TB of storage. The only limitation I can see is the low number of cores; I wish I could rent an Opteron 6100 series system instead (for the same price).

Creating the domU

The first setback was the failure of virt-manager. virt-manager is OpenSuSE's preferred tool for creating Xen virtual machines. I wrote about virt-manager and OpenSuSE 11.2 some time ago and went back to this post for instructions. However, the logic coded into the application doesn't seem to handle OL6: it repeatedly failed to start the PV kernel. I assume the directory structure on the ISO has changed, or there is some other configuration issue here, maybe even a PEBKAC.

That was a bit of a problem, because it meant I had to do the legwork all on my own. Thinking about it, virt-manager is not really to blame: OL6 hadn't been released when the tool came out. So be it, at least I'll learn something new today. To start with, I needed to create a new “disk” to contain my root volume group. The way the dom0 is set up doesn't allow me to use LVM logical volumes; all the space is already allocated. My domUs are all stored in /var/lib/xen/images/domUName. I started off by creating the top-level directory for my new OL6 reference domU:

# mkdir /var/lib/xen/images/ol6

Inside the directory I created the sparse file for my first “disk”:

# cd /var/lib/xen/images/ol6
# dd if=/dev/zero of=disk0 bs=1024k seek=8192 count=0

This will create a sparse file (much like a temp file in Oracle) for use as my first virtual disk. The next step was to extract the kernel and initial ramdisk from the ISO image and store them somewhere convenient. My default location for xen kernels is /m/xenkernels/domUName. The new kernel is a PVOPS kernel (but still not dom0 capable!), so copying it from the isolinux/vmlinuz location of a loopback-mounted ISO image was enough. There is also only one initrd to copy. Get them into the xen kernel location as shown here:

# mkdir /m/xenkernels/ol6
# cp /mnt/ol6/isolinux/{initrd.img,vmlinuz} /m/xenkernels/ol6

Next, I copied the contents of the ISO image to /srv/www/htdocs/ol6.

We now need a configuration file to start the domU initially. The config file below worked for me; I have deliberately not chosen a libvirt-compatible XML file, to keep the example simple. We'll convert to xenstore later.

# cat /tmp/rhel6ref
name="rhel6ref"
memory=1024
maxmem=4096
vcpus=2
on_poweroff="destroy"
on_reboot="restart"
on_crash="destroy"
localtime=0
builder="linux"
bootargs=""
extra=" "
disk=[ 'file:/var/lib/xen/images/rhel6ref/disk0,xvda,w', ]
vif=[ 'mac=00:16:1e:1b:1d:ef,bridge=br1', ]
vfb=['type=vnc,vncdisplay=11']
kernel = "/m/xenkernels/ol6/vmlinuz"
ramdisk = "/m/xenkernels/ol6/initrd.img"

As I am forwarding the VNC port I needed a fixed one. In my putty session I forwarded local port 5911 to my dom0's port 5911. Now start the domU using the xm create /tmp/rhel6ref command. Within a few seconds you should be able to point your vncviewer to localhost:11 and connect to the domU. That's all! From now on the installation is really just a matter of “next”, “next”, “next”. Some things have changed though; have a look at these selected screenshots.
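For reference, the plain OpenSSH equivalent of that putty setup might look something like this - a sketch, assuming the dom0 is reachable as root@dom0:

# on the client: forward local port 5911 to port 5911 on the dom0
ssh -L 5911:localhost:5911 root@dom0

# on the dom0: start the domU from the temporary config file
xm create /tmp/rhel6ref

# back on the client: VNC display 11 equals TCP port 5911
vncviewer localhost:11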

Walking through the installation

First of all you need to configure the network for the domU to talk to your staging server. I always use a manual configuration, and configure IPv4 only. The URL setup is interesting: I used http://dom0/ol6 as the repository and got further. When in doubt, check the access and error logs in /var/log/apache2.

After the welcome screen I was greeted with a message stating that my xen-vbd-xxx device had to be reinitialised. Huh? But OK, I did that and progressed. I then entered the hostname and got to the partitioning screen. I chose to “use all space” and ticked the box next to “review and modify partitioning layout”. Remember that ext4 is now the default for all file systems, but OpenSuSE's pygrub can't read it. The important step is to ensure that you have a separate /boot partition outside any LVM devices, and that it's formatted with ext2. The ext3 file system might also work, but I decided to stick with ext2, which I knew pygrub could deal with. I also tend to rename my volume group to rootvg, instead of the vg_hostname the installer suggests.
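If you want to double-check the result before going any further, something like this from the installed system should confirm the file system type - the device name is an assumption, so adjust it to your layout:

blkid /dev/xvda1	# the /boot partition should report TYPE="ext2"
df -T /boot		# second opinion once the system is up: the Type column should say ext2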

The VNC interface became a little difficult to use when I was asked to select a timezone, so I deferred that until later. I ran into a bit of a problem when it came to the package selection screen. Suddenly the installer, which had happily read all data from my apache setup, claimed it couldn't read the repodata.xml file. I thought that was strange, but then manually pointed it to the Server/repodata/repomd.xml file and clicked on the install button. Unfortunately the installer now couldn't read the first package. The reason was quickly identified in the access log:

192.168.99.124 - - [01/Apr/2011:14:01:42 +0200] "GET /ol6/Server/Packages/alsa-utils-1.0.21-3.el6.x86_64.rpm HTTP/1.1" 403 1036 "-" "Oracle Linux Server (anaconda)/6.0"
192.168.99.124 - - [01/Apr/2011:14:03:10 +0200] "GET /ol6/Server/Packages/alsa-utils-1.0.21-3.el6.x86_64.rpm HTTP/1.1" 403 1036 "-" "Oracle Linux Server (anaconda)/6.0"

HTTP 403 errors (i.e. FORBIDDEN). They were responsible for the problem with the repomd.xml file as well:

192.168.99.124 - - [01/Apr/2011:14:00:53 +0200] "GET /ol6/repodata/repomd.xml HTTP/1.1" 403 1036 "-" "Oracle Linux Server (anaconda)/6.0"

The reason for these could be found in the error log:

[Fri Apr 01 14:00:53 2011] [error] [client 192.168.99.124] Symbolic link not allowed or link target not accessible: /srv/www/htdocs/ol6/repodata

Oha. Where do these come from? Fair enough, the directory structure has changed:

[root@dom0 /srv/www/htdocs/ol6] # ls -l repodata
lrwxrwxrwx 1 root root 15 Apr  1 14:09 repodata -> Server/repodata

Now then, because this is my lab and I'm solely responsible for its security, I changed the Options in my apache server root to FollowSymLinks. Do not do this in real life! Create a separate directory, or an alias, and don't compromise your server root. Enough said...
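The safer alternative hinted at above might look something like this in the apache configuration - a sketch with example paths, scoping FollowSymLinks to a dedicated staging directory instead of the server root:

# serve the staging tree under /ol6 without loosening the server root
Alias /ol6 /srv/stage/ol6
<Directory /srv/stage/ol6>
    Options FollowSymLinks
    Order allow,deny
    Allow from all
</Directory>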

A reload of the apache2 daemon fixed that problem, but I had to start the installation from scratch. This time, however, it ran through without problems.

Cleaning Up

When the installer prompts you for a reboot, don’t click on the “OK” button just yet. The configuration file needs to be changed to use the bootloader pygrub. Change the configuration file to something similar to this:

# cat /tmp/rhel6ref
name="rhel6ref"
memory=1024
maxmem=4096
vcpus=2
on_poweroff="destroy"
on_reboot="restart"
on_crash="destroy"
localtime=0
builder="linux"
bootargs=""
extra=" "
disk=[ 'file:/var/lib/xen/images/rhel6ref/disk0,xvda,w', ]
vif=[ 'mac=00:16:1e:1b:1d:ef,bridge=br1', ]
vfb=['type=vnc,vncdisplay=11']
bootloader = "pygrub"

The only change is the replacement of the kernel and ramdisk lines with bootloader. You may have to xm destroy the VM for the change to take effect; a reboot doesn't seem to trigger a re-parse of the configuration file.
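On the dom0, using the names from this example, that boils down to something like:

# force a re-parse of the changed config: tear the domU down, then start it afresh
xm destroy rhel6ref
xm create /tmp/rhel6ref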

With that done, restart the VM and enjoy the final stages of the configuration. If your domU doesn't start now, you probably forgot to format /boot with ext2 and it is ext4 instead. In that case you'll have to do some research on Google to find out whether you can save your installation.

The result is a new Oracle Linux reference machine!

# ssh root@192.168.99.124
The authenticity of host '192.168.99.124 (192.168.99.124)' can't be established.
RSA key fingerprint is 3d:90:d5:ef:33:e1:15:f8:eb:4a:38:15:cd:b9:f1:7e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.99.124' (RSA) to the list of known hosts.
root@192.168.99.124's password:
[root@rhel6ref ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.0 (Santiago)
[root@rhel6ref ~]# uname -a
Linux rhel6ref.localdomain 2.6.32-100.28.5.el6.x86_64 #1 SMP Wed Feb 2 18:40:23 EST
2011 x86_64 x86_64 x86_64 GNU/Linux

If you want, you can register the domU in xenstore: while the domU is up and running, dump the XML file using virsh dumpxml with the domU name, redirecting the output to /tmp/ol6.xml. My configuration file looked like this:


<domain type='xen'>
  <name>ol6ref</name>
  <uuid>530c97e6-3c23-ec9f-d65b-021a79d61585</uuid>
  <memory>4194304</memory>
  <currentMemory>1048576</currentMemory>
  <vcpu>2</vcpu>
  <bootloader>/usr/bin/pygrub</bootloader>
  <os>
    <type>linux</type>
  </os>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/lib64/xen/bin/qemu-dm</emulator>
    <disk type='file' device='disk'>
      <driver name='file'/>
      <source file='/var/lib/xen/images/rhel6ref/disk0'/>
      <target dev='xvda' bus='xen'/>
    </disk>
    <interface type='bridge'>
      <mac address='00:16:1e:1b:1d:ef'/>
      <source bridge='br1'/>
    </interface>
    <graphics type='vnc' port='5911'/>
  </devices>
</domain>

Shut the domU down and register it using a call to “virsh define /tmp/ol6.xml”. Subsequent editing can be done using “virsh edit ol6”.

Scripts to Download Documentation

In this post I pointed out that I like to have a copy of the documentation in PDF format on my notebook. In the same post, and its comments, I also described how I generate the scripts I use to download the files. Recently I updated the scripts and, as a result, I thought I'd share them with you. So, below you'll find the CMD and SH scripts for the documentation related to 10.2, 11.1 and 11.2.
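To give you an idea, the SH variant boils down to little more than a loop over wget - a minimal sketch; the URLs below are placeholders, not the real links from the scripts:

#!/bin/sh
# download each manual, skipping files that are already present
for url in \
    http://download.oracle.com/docs/.../b12345.pdf \
    http://download.oracle.com/docs/.../b67890.pdf
do
    wget -nc "$url"
done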

I hope you find them useful.

Highly Recommended

I've built up quite a blogging backlog that I hope to clear a little over the next week, but I saw something today that's more important than any of the posts to come, so I wanted to blog about it quickly first.

John Scott, a pillar of the Oracle Apex community, came up with a great idea. There'll be no wild emotion round here because I didn't know any of the people involved well, but I can wholeheartedly recommend people buy that book. Not only can I be fairly certain that it'll be a high-quality book written by people who are knowledgeable and passionate about Apex, but it's going to help out the families of a couple of good friends of the Apex community. Even my miserable and cloudy personality is struggling to see a downside there ;-)

I tip my hat to the authors and hope as many people as possible place an order.

Nice idea John!

6th Planboard DBA Symposium – Registration now open

I am pleased to announce that we, the program committee members, have just finalized the program for the 6th Planboard DBA Symposium, to be held on May 17, 2011 in Amstelveen. There are 10 presentations scheduled in 2 parallel tracks by Frits Hoogland, Johan van Veen, Iloon Ellen-Wolff, Paul der Kinderen, Wouter Wethmar, Alex Nuijten, Gert […]