Poll: Oracle Licensing Audit Results

Easy reading for you today, folks, and literally just a couple of clicks to vote. Cheers!

Rows per block

A recent question on the OTN database forum:

Can anyone please point me to a document or a way to calculate the average number of rows per block in Oracle 10.2.0.3?

One answer would be to collect stats and then approximate as block size / avg_row_len – although you have to worry about things like row overheads, the row directory, and block overheads before you can be sure you’ve got it right. On top of this, the average might not be too helpful anyway. So here’s another (not necessarily fast) option that gives you more information about the blocks that have any rows in them (I picked the source$ table from a 10g system because source$ is often good for some extreme behaviour).
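For the record, the quick approximation might look like the following sketch. The table name T1 and the 8KB block size are assumptions for illustration, and the overheads just mentioned are ignored, so treat the result as a ballpark figure only:

select
	table_name,
	avg_row_len,
	round(8192 / avg_row_len)	approx_rows_per_block
from
	user_tables
where
	table_name = 'T1'
;

And here’s the more informative script: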


break on report

compute sum of tot_blocks on report
compute sum of tot_rows   on report

column avg_rows format 999.99

select
	twentieth,
	min(rows_per_block)			min_rows,
	max(rows_per_block)			max_rows,
	sum(block_ct)				tot_blocks,
	sum(row_total)				tot_rows,
	round(sum(row_total)/sum(block_ct),2)	avg_rows
from
	(
	select
		ntile(20) over (order by rows_per_block) twentieth,
		rows_per_block,
		count(*)			block_ct,
		rows_per_block * count(*)	row_total
	from
		(
		select
			fno, bno, count(*) rows_per_block
		from
			(
			select
				dbms_rowid.rowid_relative_fno(rowid)	as fno,
			dbms_rowid.rowid_block_number(rowid)	as bno
			from
				source$
			)
		group by
			fno, bno
		)
	group by
		rows_per_block
	order by
		rows_per_block
	)
group by
	twentieth
order by
	twentieth
;

I’ve used the ntile() function to split the results into 20 lines; obviously, you might want to change this according to the expected variation in row counts for your target table. In my case the results looked like this:

 TWENTIETH   MIN_ROWS   MAX_ROWS TOT_BLOCKS   TOT_ROWS AVG_ROWS
---------- ---------- ---------- ---------- ---------- --------
         1          1         11       2706       3470     1.28
         2         12         22         31        492    15.87
         3         23         34         30        868    28.93
         4         35         45         20        813    40.65
         5         46         57         13        664    51.08
         6         59         70         18       1144    63.56
         7         71         81         23       1751    76.13
         8         82         91         47       4095    87.13
         9         92        101         79       7737    97.94
        10        102        111        140      14976   106.97
        11        112        121        281      32799   116.72
        12        122        131        326      41184   126.33
        13        132        141        384      52370   136.38
        14        142        151        325      47479   146.09
        15        152        161        225      35125   156.11
        16        162        171        110      18260   166.00
        17        172        181         58      10207   175.98
        18        182        191         18       3352   186.22
        19        193        205         22       4377   198.95
        20        206        222         16       3375   210.94
                                 ---------- ----------
sum                                    4872     284538

Of course, the moment you see a result like this it prompts you to ask more questions.

Is the “bell curve” effect that you can see centred around the 13th ntile indicative of a normal distribution of row lengths? If so, why is the first ntile such an extreme outlier? Is that indicative of a number of peculiarly long rows? Did time of arrival have a special effect? Is it the result of a particular pattern of delete activity? … and so on.
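If you wanted to chase that first question, a quick sketch along these lines (reusing the dbms_rowid calls from the script above, with the 11-row cutoff taken from the max_rows of the first ntile) would show whether the sparsely populated blocks are holding unusually long rows:

select
	fno, bno,
	count(*)			rows_in_block,
	round(avg(vsize(source)),1)	avg_line_len
from
	(
	select
		dbms_rowid.rowid_relative_fno(rowid)	as fno,
		dbms_rowid.rowid_block_number(rowid)	as bno,
		source
	from
		source$
	)
group by
	fno, bno
having
	count(*) <= 11
order by
	rows_in_block
;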

Averages are generally very bad indicators if you’re worried about the behaviour of an Oracle system.

WordPress 3.1.1 Released…

WordPress 3.1.1 has just been released. Here are the changes. Happy upgrading… :)

Cheers

Tim…

Consolidation Is All About Costs

Back in February, Jonathan Gennick asked me if I would be interested in writing a bit of content for an Apress brochure to distribute at RMOUG Training Days. I thought it was a cool idea and chose the topic of database consolidation. I only needed 10 short tips, but when I started to write it was difficult to stop — clearly, expressing ideas in a concise way must not be my strength.

Jonathan did heavy edits and turned my draft into 10 brief tips, and, of course, quite a few details had to go as we shrank the text to a quarter or a third of its size. Since I’d already put the effort into writing, I figured I could share the full version on my blog as well. Thus, welcome to the first post in a series of database consolidation tips. Let’s get down to business…

While a consolidation project often has multiple goals, the main purpose of consolidation is to optimize costs, which usually means minimizing the Total Cost of Ownership (TCO) of the data infrastructure. Your current hardware might be past end of life, you might lack capacity for growth, your change management might be too slow, and so on.

These non-core goals (for lack of a better term for the side effects of consolidation projects) can act as timing triggers, but the success of a consolidation project is defined by cost metrics. In real life there are very few pure consolidation projects, as project success criteria usually contain conditions other than cost cutting.

Tip: Keep costs at the heart of the consolidation project, but don’t get blinded by cost alone! It’s a total failure if a consolidation project delivers a platform with a much lower TCO that is unable to support the required availability and performance SLAs.

It’s also not very popular to run a purely cost-cutting project in a company — people are not overly motivated, especially if it endangers their jobs. Luckily, most healthy businesses have quickly growing IT requirements, and consolidation projects very quickly bust out of the scope of pure cost savings.

Tip: Get your success criteria right and keep cost optimization as the core goal. If required, reduce the scope and split the project into stages, where each stage has its own core goal. This way you can classify some stages as pure consolidation. It’s so much easier to achieve success if there are only a few criteria, and you can tick the success boxes as you go instead of chasing a light at the end of a tunnel that could take years to reach.

If you have anything to share on the scope of consolidation projects — whether past experience or current challenges — please, comment away.

How to Get Started with Amazon EC2 (Oracle 11g XE example)

I’ve just published an Oracle Database 11g Express Edition Amazon EC2 image (AMI), but most of you have never used Amazon EC2… Not until now! This is a guide to walk you through the process of getting your very first EC2 instance up and running. Buckle up — it’s going to be awesome!

  1. Go to Amazon Web Services and open an account. You can use the same one that you buy your books with.
  2. Go to the AWS Management Console for EC2 and sign up for Amazon EC2. You will need your credit card for this. You will not be charged anything unless you either start using EC2 instances or allocate EBS storage and other related items. The sign-up page shows you all the pricing. You will especially like the “Free tier for new AWS customers” section, which gives you 750 hours of Micro instance uptime, 10 GB of EBS storage, some bandwidth, and a few small goodies. This means that you will not be charged anything at the beginning of your experiments. They will also do phone verification — I don’t remember seeing that last time, so it must be reasonably new. It works for cell phones too. Activation usually takes just a few minutes; you’ll get an email confirmation and access to EC2, VPC, S3 and SNS. Direct link to the AWS Management Console for EC2.
  3. Now you can launch your first instance, so let’s start the Oracle 11g XE beta image that I published just recently. Click “Launch Instance”, then select the “Community AMIs” tab. It will start loading the AMI list, which takes ages, so don’t wait for it to finish — search for “pythian” and you will get the pythian-oel-5.6-64bit-Oracle11gXE-beta image with AMI ID ami-e231cc8b, the latest at the time of this writing.

    Select that image.
  4. On the next tab, choose the instance size. It’s enough to use a Micro instance to start playing with Oracle 11g XE, but be prepared that a Micro instance doesn’t guarantee any CPU capacity, so it might be “bursty” — but, hey, it’s free, or costs peanuts if you run out of free time. You could also choose an availability zone closer to you.
  5. On the next screen, leave everything at the defaults. You can select what happens when you shut the instance down from inside the instance: “Stop” will keep your instance and EBS storage allocated, so you can start it again and all your changes will persist; however, you will be charged for the allocated EBS storage (if you go beyond the free 10GB), though it’s very little. “Terminate” will actually release the EBS storage when you shut down your instance. Note that you can always stop and terminate instances from the AWS Management Console. I usually leave the option on “Stop” to avoid accidental data loss. You can skip defining any tags — this is optional metadata to help you find your way around your instances — though I recommend you at least specify a descriptive name to make sure you can clearly distinguish multiple running instances later.
  6. If you didn’t create a Key Pair in the past, you will do that at the next step. This is basically a public/private key pair, and you get to download the private part — save it, keep it safe, and don’t share this .pem file with anybody. Someone with access to it can gain root access to your images! You can always create more than one Key Pair, by the way.
  7. Next, you will need to either select an existing Security Group or create a new one. The default security group doesn’t fit here, because you want to open other ports to access your 11g XE database. You can keep the default group, with SSH access only, if local access from the SQL*Plus command prompt is all you need — it’s also the safest way — but for your playground you might want more flexibility. For an 11g XE instance you will probably want SSH access (port 22), SQL*Net access (port 1521) and APEX access (port 8080). I also like to open ICMP for ping. Be sure you understand what you are doing if you will be placing any sensitive data there. I also open access to the world (source 0.0.0.0/0), so anybody who knows the passwords, or has the correct shared keys set up, can get onto your instance. You can limit it to your current IP only (and you can change the policy online if your IP changes later — use the AWS Management Console). There are a bunch of sites that will tell you your public IP (provided you aren’t coming from another IP via a proxy), like this one. To limit access to that IP only, enter it in the source as xxx.xxx.xxx.xxx/32. Of course, you can enter subnets too, if you know what I’m talking about.
  8. That’s it — all that’s left is to click the “Launch” button.
  9. You will then see your image as “pending” in the console, and usually just seconds later it switches into the “running” state. Note that it will take a minute or so to boot and launch the sshd daemon before you can connect via SSH. You can also check the console log by choosing “Get System Log” from the context menu (it usually takes a few minutes, so it will come back empty until then). The easiest way to connect is to choose “Connect” from the context menu — it will present you with instructions to connect as root using the .pem key file you downloaded when creating your Key Pair earlier on. Note that if you are on Unix, you will need to set the proper permissions to safeguard your key — chmod 600 AlexG.pem.

    You can also get the public IP alias from the instance details as “Public DNS” — just select an instance and scroll through the details in the bottom pane. For that particular image, I also enabled public key authentication, so you can simply add your public key to oracle’s ~/.ssh/authorized_keys file — it’s already there with the correct permissions. This way you don’t have to go via root every time.
    If you are a Windows user using PuTTY, you can convert your .pem file into a PuTTY Private Key (.ppk) file following Marcin’s comment.
  10. The database and listener auto-start, and you can open the 11g XE web interface. In my example it’s http://ec2-50-17-156-24.compute-1.amazonaws.com:8080/apex/apex_admin for administration and http://ec2-50-17-156-24.compute-1.amazonaws.com:8080/apex for the APEX web user interface. Note that it’s not an SSL connection, so you don’t want to use it for any sensitive data unless you reconfigure it to https. This is also the time to change the passwords from the default ones (see the sketch after this list).
  11. You can access your database over SQL*Net via sqlplus, SQL Developer or any other tool, as shown in the sketch after this list.
  12. You will see the instance and the attached EBS volume in your AWS Management Console. If you stop the instance, you will see that the EBS volume is still attached, so your data is still there when you start it again. If you terminate the instance, all your changes and data will be gone, since the EBS volume will be detached and deleted. You can, however, launch another instance from the same AMI as many times as you want. Just make sure you change the passwords after the launch!
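To make steps 10 and 11 concrete, here is a minimal SQL*Plus session. The hostname is the one from my example instance, “pythian” is the image’s default password, and the new passwords are placeholders you should choose yourself:

$ sqlplus system/pythian@//ec2-50-17-156-24.compute-1.amazonaws.com:1521/XE

SQL> -- change the default passwords straight away
SQL> alter user system identified by "YourNewPassword1";
SQL> alter user sys identified by "YourNewPassword2";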

That’s all — you can now start playing with Oracle 11g XE without paying a penny (or paying very little), without consuming any resources on your own laptop/desktop, and with as many instances running as you want. And you can always start from scratch if you screw something up.

Oracle Database 11g XE Beta — Amazon EC2 Image

That’s right, folks! Playing with the latest beta of the free Oracle Database 11g Express Edition couldn’t be any easier than this. If you are using Amazon EC2, you can have a fully working image with 64-bit Oracle Linux and an Oracle 11g XE database running in a matter of a few clicks, plus a minute for the instance to boot.

Image — ami-ae37c8c7
Name — pythian-oel-5.6-64bit-Oracle11gXE-beta-v4
Source — 040959880140/pythian-oel-5.6-64bit-Oracle11gXE-beta-v4

You can find it among the public images; at this point it’s only in the US East region.

If you have never used Amazon EC2 before, see the detailed step-by-step guide on how to get started with EC2, which uses this 11g XE image as its example.

This image works great with the Amazon EC2 Micro instance, and I configured it specifically for the Micro instance. A Micro instance costs you only 2 cents per hour to run, or even less than 1 cent if you are using spot instance requests (and there is a free offer for new AWS users, as Niall mentioned in the comments).

So what’s there?

  • Oracle Enterprise Linux 5.6 64 bit (I started with 5.5 and updated to the latest)
  • Oracle Database 11g XE Beta (oracle-xe-11.2.0-0.5.x86_64)
  • Database created and configured to start on boot
  • APEX coming with 11g XE configured on port 8080 and remote access enabled
  • 10GB root volume on EBS with 5+GB free for user data. You can store up to 11GB of data in 11g XE, and there is a way to grow volumes if you need to, but for anything more critical than a playground I’d allocate separate EBS volumes anyway.


A few things worth mentioning:

  • I enabled public key authentication (“PubkeyAuthentication yes” in /etc/ssh/sshd_config) so you can set up a shared key to log in directly as the oracle OS user – just copy your public key to /home/oracle/.ssh/authorized_keys.
  • The SYS and SYSTEM password is “pythian”. Change it!
  • ADMIN password in APEX is “pythian” — change it on the first login.
  • Micro instance has 613 MB of RAM and no swap — no instance (ephemeral) storage.
  • The Oracle database and listener autostart on boot. You can also use /etc/init.d/oracle-xe stop/start as root.
  • listener.ora has been modified to include (HOST=) so that the listener starts on any hostname/IP.
  • APEX remote access is enabled via DBMS_XDB.SETLISTENERLOCALACCESS(FALSE); see the snippet after this list.
  • Ports 1521 and 8080 are open to the world in the local iptables firewall. You still need to configure a proper Security Group to be able to access those ports.
  • Access APEX at http://{public-ec2-ip}:8080/apex and its admin at http://{public-ec2-ip}:8080/apex/apex_admin. There is currently an issue where APEX stops working after a few minutes of run time, returning a 404 code. It might be a bug in the beta or an installation issue (for example, I run it with no swap on a Micro instance).
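For reference, the remote-access setting from the list above is applied with the call below, run as SYS (it’s already done in this image); the second call is just a sanity check of the HTTP port:

SQL> -- allow HTTP access to APEX from remote hosts, not just localhost
SQL> exec DBMS_XDB.SETLISTENERLOCALACCESS(FALSE);

SQL> -- sanity check: the port the embedded gateway serves HTTP on (8080 here)
SQL> select DBMS_XDB.GETHTTPPORT from dual;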

I will be keeping the AMI up to date as things develop, so the AMI id could change — check back here or just search the public AMIs for the latest image. I set up a short URL for this page — http://bit.ly/Oracle11gXE.

If you don’t know how to use Amazon EC2, I recommend reading the second chapter of Expert Oracle Practices: Oracle Database Administration from the Oak Table. That chapter was written by Jeremiah Wilton, who had been playing with Amazon EC2 for Oracle long before any of us even thought of it.

When a few folks confirm that it works, I’ll submit the image via http://aws.amazon.com/amis/submit.


Update 4-Apr-2011: Created v3 image – fixed a typo in the database passwords, fixed retrieval of the public key for ssh login as root, and changed the startup sequence so that ssh keys are initialized earlier, as is public key retrieval.
Update 4-May-2011: Created v4 image – increased the SGA size to 212M and set large_pool to 32M (automatic SGA management doesn’t do its job properly – this is why APEX was not working: not enough large pool memory was allocated). Enabled direct IO and async IO for the filesystem – buffered IO slowed things down a lot. Now APEX is actually pretty usable on a Micro instance. Remember that you can run it on a Large instance for more comfort, but you’d be overpaying, since a Large instance has 2 CPUs and 7.5GB of RAM while you can’t use more than 1GB. Of course, you could disable direct IO and use OS buffering to take advantage of the extra RAM, but you can’t leverage both cores with APEX (it limits capacity to a single core).
Update 23-Jul-2011: If you need to use networking services from APEX (like web services, sending emails, etc.) then you need to configure network ACLs for the APEX_040000 user.
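A sketch of what that ACL setup could look like, run as SYS; the ACL file name and the open-to-any-host pattern are illustrative only, so narrow them down for real use:

begin
	-- create an ACL granting the 'connect' privilege to APEX_040000
	dbms_network_acl_admin.create_acl(
		acl		=> 'apex_040000_acl.xml',
		description	=> 'Network access for APEX',
		principal	=> 'APEX_040000',
		is_grant	=> true,
		privilege	=> 'connect');
	-- assign the ACL to all hosts (illustration only)
	dbms_network_acl_admin.assign_acl(
		acl		=> 'apex_040000_acl.xml',
		host		=> '*');
	commit;
end;
/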

Latch contention troubleshooting case study and Flashback Database performance issues with LOBs

Steve Bamber has written up a case study of library cache latch contention troubleshooting of an Apex application with LatchProf. I’m happy that others also see the value and have had success with my new LatchProf-based latch contention troubleshooting approach, which takes into account both sides of the contention story (latch waiters and latch holders/blockers), as opposed to the guesswork used previously (hey, if it’s shared pool latch contention, it must be about bad SQL not using bind variables… NOT always…)

Anyway, I’m happy. If you have success stories with LatchProf, please let me know!

As a second topic of interest, Laimutis Nedzinskas has written some good notes about the effect and overhead of the Flashback Database option when you are using and modifying (nocache) LOBs. We’ve exchanged some emails on this topic and, yes, my clients have certainly seen problems with this combination as well. You basically want to keep your LOBs cached when using Flashback Database…
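For reference, a minimal sketch of switching a LOB to cached storage; the table and column names are made up for illustration:

-- hypothetical table and CLOB column: move the LOB from nocache to
-- cache so LOB blocks are read and written through the buffer cache
alter table my_docs modify lob (doc_text) (cache);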