Back in February, Jonathan Gennick asked me if I would be interested in writing a bit of content for an Apress brochure to distribute at RMOUG Training Days. I thought it was a cool idea and chose the topic of database consolidation. I only needed 10 short tips, but when I started to write it was difficult to stop; clearly, expressing ideas concisely is not my strength.
Jonathan did heavy edits and turned my draft into 10 brief tips, and, of course, quite a few details had to go as we shrank the text to a third or a quarter of its original size. Since I'd already put the effort into writing, I figured I could share the full version on my blog as well. Thus, welcome to the first post in a series of database consolidation tips. Let's get down to business…
While a consolidation project often has multiple goals, the main purpose of consolidation is to optimize costs, which usually means minimizing the Total Cost of Ownership (TCO) of the data infrastructure. Your current hardware might be past end of life, you might lack capacity for growth, your change management might be too slow, and so on.
These non-core goals (for lack of a better term for the side effects of consolidation projects) can act as timing triggers, but the success of a consolidation project is defined by cost metrics. In real life there are very few pure consolidation projects, as project success criteria usually contain conditions other than cost cutting.
Tip: Keep costs at the heart of the consolidation project but don’t get blinded by cost alone! It’s a total failure if a consolidation project delivers a platform with much lower TCO but is unable to support the required availability and performance SLAs.
It's also not very popular to run a purely cost-cutting project in a company: people are not overly motivated, especially if it endangers their jobs. Luckily, most healthy businesses have quickly growing IT requirements, and consolidation projects very quickly bust out of the scope of pure cost savings.
Tip: Get your success criteria right and keep cost optimization as the core goal. If required, reduce the scope and split the project into stages, where each stage has its own core goal. This way, you can classify some stages as pure consolidation. It's much easier to achieve success when there are only a few criteria, and you can tick off success boxes as you go instead of chasing a light at the end of a tunnel that could be years away.
If you have anything to share on the scope of consolidation projects — whether past experience or current challenges — please, comment away.
I've just published an Oracle Database 11g Express Edition Amazon EC2 image (AMI), but most of you have never used Amazon EC2… Not until now! This is a guide to walk you through the process of getting your very first EC2 instance up and running. Buckle up: it's going to be awesome!
That's all! You can now start playing with Oracle 11g XE without paying a penny (or paying very little), without consuming any resources on your own laptop or desktop, and you can have as many instances running as you want. And you can always start from scratch if you screw something up.
That's right, folks! Playing with the latest beta of the free Oracle Database 11g Express Edition couldn't be easier. If you are using Amazon EC2, you can have a fully working image with 64-bit Oracle Linux and an Oracle 11g XE database running after just a few clicks and a minute or so for the instance to boot.
Image — ami-ae37c8c7
Name — pythian-oel-5.6-64bit-Oracle11gXE-beta-v4
Source — 040959880140/pythian-oel-5.6-64bit-Oracle11gXE-beta-v4
You can find it among the public images; at this point it's only available in the US East region.
If you've never used Amazon EC2 before, see the detailed step-by-step guide on getting started with EC2, which uses this 11g XE image as its example.
This image works great with the Amazon EC2 Micro instance, and I configured it specifically for the Micro instance. A Micro instance costs only 2 cents per hour to run, or even less than 1 cent if you use spot instance requests (and there is a free offer for new AWS users, as Niall mentioned in the comments).
So what’s there?
A few things worth mentioning:
I will be keeping the AMI up to date as things develop, so the AMI id could change; check back here or just search the public AMIs for the latest image. I set up a short URL for this page: http://bit.ly/Oracle11gXE.
If you don't know how to use Amazon EC2, I recommend reading the second chapter of Expert Oracle Practices: Oracle Database Administration from the Oak Table. That chapter was written by Jeremiah Wilton, who had been playing with Amazon EC2 for Oracle long before any of us even thought of it.
When a few folks confirm that it works, I'll submit the image via http://aws.amazon.com/amis/submit.
Update 4-Apr-2011: Created v3 image. Fixed a typo in the database passwords, fixed retrieval of the public key for ssh login as root, and changed the startup sequence so that ssh keys and public key retrieval are initialized earlier.
Update 4-May-2011: Created v4 image. Increased the SGA size to 212M and set large_pool to 32M (automatic SGA management doesn't do its job properly here; this is why APEX was not working: not enough large pool memory was allocated). Enabled direct I/O and asynchronous I/O for the filesystem, since buffered I/O slowed things down a lot. Now APEX is actually pretty usable on a Micro instance. Remember that you can run the image on a Large instance for more comfort, but you'd be overpaying: a Large instance has 2 CPUs and 7.5GB of RAM, while you can't use more than 1GB. Of course, you could disable direct I/O and use OS buffering to take advantage of the extra RAM, but you still can't leverage both cores with APEX (it limits capacity to a single core).
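For reference, here's a minimal sketch of the v4 settings described above. The parameter values match the update notes, but the exact commands used to configure the image are my assumption:

-- A sketch of the v4 configuration; values are from the update notes,
-- but the exact commands used on the image are assumptions.
alter system set large_pool_size = 32M scope=spfile;
alter system set sga_target = 212M scope=spfile;
-- SETALL enables both direct I/O and asynchronous I/O on the filesystem
alter system set filesystemio_options = setall scope=spfile;
-- restart the instance (SQL*Plus) for the spfile-only changes to take effect
shutdown immediate
startup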
Update 23-Jul-2011: If you need to use networking services from APEX (web services, sending emails, etc.), then you need to configure network ACLs for the APEX_040000 user.
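For example, here's a minimal sketch using the 11g DBMS_NETWORK_ACL_ADMIN package. The ACL file name is made up, and assigning the ACL to every host ('*') is only for a sandbox; you'd normally restrict it to the hosts APEX actually needs:

begin
    -- create an ACL granting the connect privilege to the APEX_040000 user
    dbms_network_acl_admin.create_acl(
        acl         => 'apex_network.xml',      -- hypothetical ACL file name
        description => 'Network access for APEX',
        principal   => 'APEX_040000',
        is_grant    => true,
        privilege   => 'connect'
    );
    -- allow host name resolution as well
    dbms_network_acl_admin.add_privilege(
        acl       => 'apex_network.xml',
        principal => 'APEX_040000',
        is_grant  => true,
        privilege => 'resolve'
    );
    -- assign the ACL to the hosts APEX should reach ('*' is sandbox-only)
    dbms_network_acl_admin.assign_acl(
        acl  => 'apex_network.xml',
        host => '*'
    );
    commit;
end;
/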
Steve Bamber has written up a case study of troubleshooting library cache latch contention in an APEX application with LatchProf. I'm happy that others also see the value and have had success with my new LatchProf-based latch contention troubleshooting approach, which takes into account both sides of the contention story (latch waiters and latch holders/blockers) as opposed to the guesswork used previously (hey, if it's shared pool latch contention, it must be bad SQL not using bind variables… NOT always…).
Anyway, I’m happy. If you have success stories with LatchProf, please let me know!
As a second topic of interest, Laimutis Nedzinskas has written some good notes about the effect and overhead of the Flashback Database option when you are using and modifying (nocache) LOBs. We've exchanged some emails on this topic, and yes, my clients have certainly seen problems with this combination as well. You basically want to keep your LOBs cached when using Flashback Database…
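For illustration, a minimal sketch of switching a LOB column to CACHE storage; the table and column names are hypothetical:

-- hypothetical table/column: switch an existing nocache LOB to CACHE
alter table documents modify lob (doc_body) (cache);

-- or specify caching when the table is created
create table documents (
    id       number primary key,
    doc_body clob
)
lob (doc_body) store as (cache);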
April 4, 2011 A couple of years ago a very well-written article appeared in Oracle Magazine that described how to use ROWNUM in the WHERE clause, and also why using ROWNUM might not work quite as expected. Let's assume that we want to do something a bit unusual, as described in this forum thread. We set [...]
I’ve been involved in a number of clubs and classes since University, and in recent years I’ve been witness to the clubs that kids of my friends and family attend. In most cases, they follow a similar pattern. There are a very small number of highly motivated people who keep the club functioning. A slightly larger number of people who hang around in the periphery, stepping in from time to time to help out, but shying away from any formal commitment. Finally you have the vast majority of people who do nothing but drop off their kids and let them get on with it.
I’m sure you’ve seen a similar pattern yourself in clubs, online communities and even in companies you’ve worked for.
Followers of the blog will know I’m a big fan of UltraEdit. I have a multi-platform unlimited upgrades license, so I run it on Linux, Mac and occasionally on a Windows VM.
I noticed today that a new version was released for Mac and Linux about a month ago. Not sure how I missed that in the update notices. The changes for Mac are not that big because it was already at version 2.x, but the Linux version had been hanging around 1.x for some time and was missing a lot of functionality compared to the Mac version. This latest release is a pretty big catch-up for the Linux version, and it now contains pretty much all of the functionality I use on a regular basis.
Both the Mac and Linux versions are still lagging behind the Windows version in terms of total functionality, but who cares about Windows…
From time to time I see people asking how they can check how far their system has got in its call to collect stats. The question is often a little ambiguous: they know which call to dbms_stats they've used, so the issue is whether they're trying to monitor stats collection against a single table (in which case v$session_longops springs to mind) or against an entire schema.
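For the single-table case, here's a hedged sketch of the v$session_longops check; the 'Gather%' opname filter varies by version, so treat it as an assumption:

-- watch active long-running operations for a stats collection;
-- the opname filter is version-dependent (an assumption)
select
    sid, opname, target, sofar, totalwork,
    round(100 * sofar / nullif(totalwork, 0), 1)  pct_done
from
    v$session_longops
where
    opname like 'Gather%'
and sofar < totalwork
;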
Here's one simple-minded approach that I whipped up a few years ago; it's possible you could do better with the latest versions of dbms_stats. Its purpose is simply to give you some way of checking what work dbms_stats needs to do (in this example, for a schema), so that you can check how much has been done and how much remains. The basic method can be modified in various ways to match your tastes.
We start with the simple observation that many calls to dbms_stats have an “options” parameter that allows you to specify things like ‘LIST STALE’, rather than ‘GATHER STALE’. The “list” options populate an object table type with details of the data segments whose statistics would be gathered by the corresponding “gather” option. So we could start with a simple piece of code to do the following:
declare
    m_object_list   dbms_stats.objecttab;
begin
    dbms_stats.gather_schema_stats(
        ownname => 'test_user',
        options => 'LIST AUTO',
--      options => 'LIST STALE',
--      options => 'LIST EMPTY',
        objlist => m_object_list
    );

    for i in 1..m_object_list.count loop
        dbms_output.put_line(
            rpad(m_object_list(i).ownname,30)     ||
            rpad(m_object_list(i).objtype, 6)     ||
            rpad(m_object_list(i).objname,30)     ||
            rpad(m_object_list(i).partname,30)    ||
            rpad(m_object_list(i).subpartname,30) ||
            lpad(m_object_list(i).confidence,4)
        );
    end loop;
end;
/
Call this before you make your call to gather stats (in general it might be better to use the utl_file package to write to a file rather than using dbms_output and capturing screen output); then, as time passes, you can check the "last_analyzed" column on the relevant view to see which objects are still waiting for their stats to be collected. Of course, this approach is a little clunky, and requires a certain amount of tedious manual labour to get the check done.
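For example, a minimal sketch of that manual check, assuming the collection run started at 9:00 am; the schema name, view, and timestamp are illustrative:

-- which objects still have missing or pre-run stats
-- (owner and start time are examples)
select
    table_name, partition_name, subpartition_name, last_analyzed
from
    dba_tab_statistics
where
    owner = 'TEST_USER'
and (   last_analyzed is null
     or last_analyzed < to_date('04-Apr-2011 09:00','dd-Mon-yyyy hh24:mi')
    )
order by
    last_analyzed
;

Once we have the basic principle, though, the rest is easy. Let's start by using the code in a pipelined function.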
create or replace function list_stale(
    i_option    in varchar2,
    i_user      in varchar2 default user
)
return dbms_stats.objecttab pipelined
as
    pragma autonomous_transaction;
    m_object_list   dbms_stats.objecttab;
begin
    if i_option not in ('LIST AUTO', 'LIST STALE', 'LIST EMPTY') then
        null;
    else
        dbms_stats.gather_schema_stats(
            ownname => i_user,
            options => i_option,
            objlist => m_object_list
        );
        for i in 1..m_object_list.count loop
            pipe row (m_object_list(i));
        end loop;
    end if;
    return;
end;
/
You'll notice that I've declared the function to run as an autonomous transaction: the call to dbms_stats does various things (such as updating the mon_mods$ and col_usage$ tables) that you're not supposed to do in a pipelined function, but you can hide these by using the autonomous_transaction pragma. (Strangely, the error message you get in the absence of the pragma is: "ORA-04092: cannot COMMIT in a trigger".)
With this definition I can call the function with code like:
select * from table(list_stale('LIST AUTO'));

OWNNAME      OBJTYP OBJNAME         PARTNAME         SUBPARTNAME            CONFIDENCE
------------ ------ --------------- ---------------- ---------------------- ----------
TEST_USER    TABLE  PT_RANGE                                                       100
TEST_USER    TABLE  PT_RANGE        P600                                           100
TEST_USER    TABLE  PT_RL                                                          100
TEST_USER    TABLE  PT_RL           P_2002_MAR                                     100
TEST_USER    TABLE  PT_RL           P_2002_MAR       P_2002_MAR_THE_REST           100
TEST_USER    TABLE  PT_RL           P_2002_MAR       P_2002_MAR_MN                 100
TEST_USER    TABLE  PT_RL           P_2002_MAR       P_2002_MAR_EAST_COAST         100
TEST_USER    TABLE  PT_RL           P_2002_MAR       P_2002_MAR_CA                 100
TEST_USER    TABLE  T1                                                             100
TEST_USER    INDEX  PT_PK                                                          100
TEST_USER    INDEX  PT_PK           P600                                           100
TEST_USER    INDEX  T1_PK                                                          100
... etc ...
This will give me a list of all objects in my schema that need stats collected – either because they have no stats, or their stats are stale.
But I can be more subtle than this. I could, for example, write a query that joins this "table" to the view dba_segments and keep running it as the stats collection proceeds, to report the objects that are still waiting for stats and the sizes of those objects. The SQL needs just a little thought, as the objtype is only reported as "TABLE" or "INDEX", so you have to do a UNION ALL in the join and work out which rows are really for partitions and which for subpartitions. And then you have to mess about a bit with outer joins, because Oracle will, for example, want to collect stats on a table when a partition is stale, and there is no data segment at the table level of a partitioned table.
Your code might look something like this:
with list_result as (
    select * from table(list_stale('LIST AUTO'))
)
select
    lst.objname,
    nvl(seg.segment_type, lst.objtype) segment_type,
    lst.partname,
    seg.blocks
from
    list_result  lst,
    dba_segments seg
where
    lst.partname is null
and seg.owner(+) = lst.ownname
and seg.segment_type(+) = lst.objtype
and seg.segment_name(+) = lst.objname
and seg.partition_name is null
union all
select
    lst.objname,
    nvl(seg.segment_type, lst.objtype || ' PARTITION') segment_type,
    lst.partname,
    seg.blocks
from
    list_result  lst,
    dba_segments seg
where
    lst.subpartname is null
and lst.partname is not null
and seg.owner(+) = lst.ownname
and substr(seg.segment_type(+),1,5) = lst.objtype
and seg.segment_name(+) = lst.objname
and seg.partition_name(+) = lst.partname
union all
select
    lst.objname,
    segment_type,
    lst.partname,
    seg.blocks
from
    list_result  lst,
    dba_segments seg
where
    lst.subpartname is not null
and seg.owner = lst.ownname
and substr(seg.segment_type,1,5) = lst.objtype
and seg.segment_name = lst.objname
and seg.partition_name = lst.subpartname
order by
    1,2,3
/
In a tiny schema, where every data segment is one of my "1MB uniform extents", this is a sample of the output I got:
OBJNAME         SEGMENT_TYPE       PARTNAME                           BLOCKS
--------------- ------------------ ------------------------------ ----------
PT_PK           INDEX
PT_PK           INDEX PARTITION    P200                                  128
PT_PK           INDEX PARTITION    P400                                  128
PT_PK           INDEX PARTITION    P600                                  128
PT_RANGE        TABLE
PT_RANGE        TABLE PARTITION    P200                                  128
PT_RANGE        TABLE PARTITION    P400                                  128
PT_RANGE        TABLE PARTITION    P600                                  128
PT_RL           TABLE
PT_RL           TABLE PARTITION    P_2002_FEB
PT_RL           TABLE PARTITION    P_2002_MAR
PT_RL           TABLE SUBPARTITION P_2002_FEB                            128
PT_RL           TABLE SUBPARTITION P_2002_FEB                            128
PT_RL           TABLE SUBPARTITION P_2002_FEB                            128
PT_RL           TABLE SUBPARTITION P_2002_FEB                            128
PT_RL           TABLE SUBPARTITION P_2002_MAR                            128
PT_RL           TABLE SUBPARTITION P_2002_MAR                            128
PT_RL           TABLE SUBPARTITION P_2002_MAR                            128
PT_RL           TABLE SUBPARTITION P_2002_MAR                            128
RL_PK           INDEX
RL_PK           INDEX PARTITION    P_2002_FEB
RL_PK           INDEX PARTITION    P_2002_MAR
RL_PK           INDEX SUBPARTITION P_2002_FEB                            128
RL_PK           INDEX SUBPARTITION P_2002_FEB                            128
RL_PK           INDEX SUBPARTITION P_2002_FEB                            128
RL_PK           INDEX SUBPARTITION P_2002_FEB                            128
RL_PK           INDEX SUBPARTITION P_2002_MAR                            128
RL_PK           INDEX SUBPARTITION P_2002_MAR                            128
RL_PK           INDEX SUBPARTITION P_2002_MAR                            128
RL_PK           INDEX SUBPARTITION P_2002_MAR                            128
T1              TABLE                                                    128
T1_PK           INDEX                                                    128
T2              TABLE                                                    128
T2_I1           INDEX                                                    128
Obviously you could do something more complex to associate indexes with their tables, then sum subpartitions back to partitions and partitions back to table-level stats, but a simple list of objects with their physical sizes might be sufficient to give you an idea of how much work is still to be done. If you want to be really elegant (or extravagant) you could even combine this SQL with another piece reporting the objects with last_analyzed more recent than the start of run, including (if you're not running a parallel collection) the approximate time taken for each object (think lag() and comparing consecutive times).
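A hedged sketch of that lag() idea, valid only for a serial collection; the schema name and start time are examples:

-- approximate per-object collection times: the gap between consecutive
-- last_analyzed values is (roughly) the time spent on each object;
-- only meaningful for a serial (non-parallel) collection
select
    table_name,
    last_analyzed,
    round(
        ( last_analyzed
        - lag(last_analyzed) over (order by last_analyzed)
        ) * 86400
    )   approx_seconds
from
    dba_tables
where
    owner = 'TEST_USER'
and last_analyzed >= to_date('04-Apr-2011 09:00','dd-Mon-yyyy hh24:mi')
order by
    last_analyzed
;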
Oracle Database XE (Express Edition) 11.2 Beta has been released!
Here you go:
The documentation is here (the link at oracle.com/technetwork is broken right now…)
I'm sure you want to know about its limitations; these are documented in the license guide above, but here are the main figures from the Oracle XE 11.2 license guide: