
Virtual Conference on Oracle Performance!

The E2SN Virtual Conferences are webinar-style online training sessions, but just like at conferences, you can listen to multiple top speakers within a single event.
The standard conference consists of 4 x 1.5-hour presentations, each followed by a 30-minute Q & A session. The four sessions are spread across two days, so the conference takes only four hours per day. So, in addition to not even having to leave your desk to learn from the world’s top experts, you can still get your important work done in the office, too!
The first scheduled virtual conference focuses on a walk-through of Systematic Oracle SQL Optimization, with emphasis on practical applications and real-life scenarios.

Systematic Oracle SQL Optimization virtual conference:

  • The speakers are: Cary Millsap, Jonathan Lewis, Kerry Osborne and Tanel Poder

  • The conference takes place on 18 and 19 November (2 x 0.5 days)
  • Sessions run from 08:00am to 12:00pm Pacific Standard Time (PST) on both days
  • For this inaugural event we have a special launch price of 475 USD per attendee!

Click here for the agenda, abstracts and registration…

SnagIt on Mac…

I wrote recently about UltraEdit coming to Mac. Another tool I’ve missed since my Windows days is Snagit. The default screen capture on Mac is OK, and Shutter and Gimp on Linux are both OK too, but none of them compare to Snagit in my opinion.

Today I got an email proclaiming that a production release of Snagit is now available for Mac, and since I have a valid Windows license I get to use it for free on Mac. Joy.

It’s now installed and running quite nicely. Now if only they would do a Linux version… :)

Cheers

Tim…

Database Sizing – How much Disk do I need? (The Easy Way)

How much Disk do I need for my new Oracle database? Answer:-

  • 8-10 times the volume of raw data for an OLTP system
  • 2-4 times the raw data volume for a Data Warehouse.
  • The bigger the database, the nearer you will be to the lower multiplication factors.

{Disclaimer. This is of course just my opinion, based on some experience. If you use the above figures for a real project and get the total disc space you need wrong, don’t blame me. If you use them and they turn out right, then of course you now owe me a beer.}

Many of us have probably had to calculate the expected size of a database before, but the actual database is only one component of all the things you need to run the Oracle part of your system. You need to size the other components too: archived redo logs, the backup staging area, the data-load staging area, external files, the operating system, swap space, the Oracle binaries {which generally get bigger every year but shrink in comparison to the average size of an Oracle DB}, etc.

In a similar way to my thoughts on how much database space you need per person, I also used to check the total disk space taken up by every database I created and every one I came across. {A friend emailed me after my earlier posting to ask if I had an obsession with size. I think the answer must be “yes”}.

First of all, you need to know how much “raw data” you have. By this I mean what will become the table data. Back in the early 90’s this could be the total size of the flat files the old system was using, or even the size of the data as it was in spreadsheets. An Oracle export file of the system gives a pretty good idea of the raw data volume too. Lacking all of these, you need to roughly size your raw data: do a calculation of “number_of_rows*sum_of_columns” for your biggest 10 tables (I might blog more on this later). Don’t be tempted to overestimate; my multipliers allow for the padding.
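
If the data is already in an Oracle database, a rough version of that calculation can be run against the data dictionary. A minimal sketch, assuming statistics have been gathered and using num_rows*avg_row_len as a stand-in for “number_of_rows*sum_of_columns” (the schema name MY_APP is hypothetical):

-- Rough raw-data estimate for the 10 biggest tables in a schema.
-- avg_row_len is in bytes, so the result is an estimate in MB.
select *
from (
   select table_name,
          num_rows,
          avg_row_len,
          round(num_rows * avg_row_len / 1024 / 1024) est_raw_mb
   from   dba_tables
   where  owner = 'MY_APP'
   and    num_rows is not null
   order  by num_rows * avg_row_len desc
)
where rownum <= 10;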

Let us say you have done this and it comes to 60GB of raw data for an OLTP system. Let the storage guys know you will probably want about 500GB of space. They will then mentally put it down as “of no consequence”, as if you have dedicated storage guys you probably have many terabytes of storage. {Oh, I should mention that I am not considering redundancy at all, just the space that is provided. The amount of actual spinning disk comes down to the level and type of RAID your storage guys make you use. That is a whole other discussion}.

If you come up with 5TB of raw data for a DW system then you need around 12-15TB of disk storage.

If you come up with more than a terabyte or so of raw data for an OLTP system, or 10 to 20 terabytes for a DW, then when you give your figures to the storage guys/procurement people they may well go pale and say something like “you have got to be kidding!”. This is part of why the multiplication factor for Data Warehouses, and larger systems in general, is lower: you are forced to be more careful about the space you allocate and how you use it.

The overhead of total disk space over Raw data reduces as the database gets bigger for a number of reasons:

  • The size of the Oracle binaries and the OS does not change as the database gets bigger.
  • The size of swap space does not increase in line with the database as, generally speaking, if you increase the database size from 100GB to 1TB you do not have the luxury of increasing the system memory of your server tenfold; it probably doubles at most.
  • Very large databases tend to have something making them big, like images or embedded documents, which are not indexed. Thus the ratio of table segments to index segments increases.
  • If you have a very large database you start removing indexes (often those that support constraints) to aid performance of data load and management, again improving the ratio of table segments to index segments.
  • Backups become partial or incremental to reduce the size and duration of the backup.
  • As mentioned before, the sheer size of the system is such that you just take more care over cleaning up work areas, paring down the archived redo log areas (those files do compress well) and other areas.
  • If things get extreme, or you have been doing this for donkey’s years {note to non-UK people, this means many, many years}, you start altering PCTFREE and checking over extent sizes.

My best ever ratio of database size to raw data was around 1.6 and it took an awful lot of effort and planning to get there. And an IT manager who made me very, very aware of how much the storage was costing him (it is not the disks, it’s all the other stuff).

I should also just mention that the amount of disk you need is only one consideration. If you want your database to perform well you need to consider the number of spindles. After all, you can create a very large database indeed using a single 2TB disc – but any actual IO will perform terribly.

RHEL 6 Released…

So RHEL 6 was released yesterday. Excuse my lack of interest, but I don’t think I will be bothering with it until Oracle Enterprise Linux 6 is available on eDelivery. Then I will no doubt start a frenzy of installations and testing. :)

Cheers

Tim…

Tool to Add Range Partitions Automatically to a Table

Continuing on my earlier posts on partitioning, here is another tool on this topic. If you have been using partitioned tables, you might have noticed that one of the thorny issues in partitioning is the need to create partitions in advance. If you don't have the partition ready, inserts will fail with the following message:

ERROR at line 1:
ORA-14400: inserted partition key does not map to any partition

This error is pretty disastrous to applications, as they essentially stop functioning, causing an outage, until you add the necessary partition. What can you do to prevent it? Maintain an unblinking vigil, of course, which may quickly become impractical. Oracle 11g has a nice answer: INTERVAL partitioning, which automatically creates partitions as new records come in. What if you are on 10g, or do not want to use INTERVAL partitioning for some reason? Is there a way to simulate interval partitioning?
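
For comparison, this is roughly what the 11g feature looks like; a minimal sketch with hypothetical table and column names:

-- 11g INTERVAL partitioning: partitions beyond the highest defined
-- boundary are created automatically as rows arrive.
create table sales_data (
   sale_id  number,
   sale_dt  date
)
partition by range (sale_dt)
interval (numtoyminterval(1,'MONTH'))
(
   partition p1 values less than (to_date('2010-02-01','yyyy-mm-dd'))
);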

After getting frustrated with the lack of proper partitions more than once, I decided to develop this tool. I perfected the tool over several months to make it more generic. Here it is. Enjoy.

Important: This is provided for educational purposes only. Use it at your own risk. There is no implied or expressed warranty, and the author assumes no responsibility for any adverse effects.

What the Tool does

The tool is implemented as a stored procedure. When executed, it reads a driver table called partition_create_schedule, which records the tables to which the partitions are to be added. The table is created as shown below:

/* ---------------------------------------------------------------------
Name : partition_create_schedule
Purpose : To record the tables to which partitions are to be added.
: This data is used by the partition_create procedure to
: automatically add partitions to individual tables.
----------------------------------------------------------------------*/

create table partition_create_schedule
(
OWNER VARCHAR2(30),
TABLE_NAME VARCHAR2(30),
PARTITION_UNIT VARCHAR2(15),
NUM_PARTS_TO_CREATE NUMBER(4)
)
/
comment on table partition_create_schedule is 'To record the tables for which partitions are to be added automatically'
/
comment on column partition_create_schedule.owner is 'the owner of the table'
/
comment on column partition_create_schedule.table_name is 'the name of the table'
/
comment on column partition_create_schedule.PARTITION_UNIT is 'the partitioning scheme - DAILY/WEEKLY/MONTHLY/QUARTERLY/ANNUALLY'
/
comment on column partition_create_schedule.num_parts_to_create is 'the number of units to skip in creating the partition. For instance, if you want to create the next partition 3 days from now for a DAILY unit, place 3 here'
/

The partitioning unit is important. Remember, in date-range partitioned tables you merely provide the boundary values, not whether the range is monthly, weekly or something like that. Even if the existing ranges look regular, there is no guarantee that they will remain so; you may decide to split one partition or merge two. Therefore it is important that you tell the tool what type of partitioning duration the table is under.
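
This is also why the unit cannot reliably be inferred from the data dictionary: the dictionary stores only the boundary values. You can see what the tool has to work with by querying the high values yourself, e.g. for the ARUP.PTEST1 table used below:

select partition_name, high_value
from   dba_tab_partitions
where  table_owner = 'ARUP'
and    table_name  = 'PTEST1'
order  by partition_position;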

Let's insert some records:

insert into partition_create_schedule values ('ARUP','PTEST1','MONTHLY',1);
insert into partition_create_schedule values ('ARUP','PTEST2','MONTHLY',1);
insert into partition_create_schedule values ('ARUP','PTEST3','QUARTERLY',1);
insert into partition_create_schedule values ('ARUP','PTEST4','DAILY',1);
commit;

Now comes the tool - the procedure. To build it, I considered these objectives:

(1) The intervals could be daily, weekly, monthly, quarterly or yearly.
(2) When the procedure executes, it should automatically compute the boundary of the new partition from the existing partitions; this should not require input from the user.
(3) The procedure will be executed automatically every day. For a DAILY partition a new partition is created on each run, but not for the other ranges: for weekly ranges the new partition should be created only on the first day of the week; for monthly, on the first day of the month; and so on.
(4) The global indexes must be updated automatically.
(5) It must handle the two possibilities for the existing highest partition: a partition with a distinct boundary value, and one with the (MAXVALUE) clause. In the former case a partition is added; in the latter case the maxvalue partition has to be "split", not added. The procedure must take care of both cases automatically.
(6) When the partition is split, the local index partitions become unusable and must be rebuilt.
(7) It should send an email to the DBA on both success and failure.

The tool handles all the above objectives pretty well.

CREATE OR REPLACE procedure SYS.partition_create
as
   cursor sched_cur is
      select *
      from partition_create_schedule;
   l_sched_rec       sched_cur%rowtype;
   l_cutoff_date     date;
   l_hvc             varchar2(2000);
   l_new_hvc         varchar2(2000);
   l_old_hvc         varchar2(2000);
   l_part_name       varchar2(2000);
   l_new_part_name   varchar2(2000);
   l_table_name      varchar2(2000);
   l_hv              date;
   l_new_hv          date;
   l_sql             varchar2(2000);
   l_full            varchar2(32000);
   l_ts              varchar2(2000);
   l_errm            varchar2(2000);
   l_mail_body       varchar2(2000);
   l_out             varchar2(2000);
   l_recipients      varchar2(2000) := 'dba.ops@mycompany.com';
   errc              number(10);
   l_place           varchar2(2);
   l_exec_flag       varchar2(4);
   invalid_partition_unit exception;
begin
   execute immediate 'alter session set smtp_out_server = ''smtp.mycompany.com''';
   -- print the report header
   l_out :=
      rpad('Owner',10)||
      rpad('Table',30)||
      rpad('Part Unit',10)||
      rpad('Old HV',11)||
      rpad('New HV',11)||
      rpad('Exec',4);
   dbms_output.put_line(l_out);
   l_out :=
      rpad('-',9,'-')||' '||
      rpad('-',29,'-')||' '||
      rpad('-',9,'-')||' '||
      rpad('-',10,'-')||' '||
      rpad('-',10,'-')||' '||
      rpad('-',3,'-');
   dbms_output.put_line(l_out);
   open sched_cur;
   loop --{
      fetch sched_cur into l_sched_rec;
      exit when sched_cur%notfound;
      -- get the high value and tablespace of the current highest partition
      select high_value, tablespace_name
        into l_old_hvc, l_ts
        from dba_tab_partitions tp
       where partition_position = (
                select max(partition_position) from dba_tab_partitions
                 where table_name = tp.table_name
                   and table_owner = tp.table_owner
             )
         and table_name = l_sched_rec.table_name
         and table_owner = l_sched_rec.owner;
      if (l_old_hvc = 'MAXVALUE') then --{
         -- PMAX code. Split the partition.
         -- First get the high value of the partition just prior to PMAX.
         select high_value, tablespace_name
           into l_hvc, l_ts
           from dba_tab_partitions tp
          where partition_position = (
                   select max(partition_position) - 1 from dba_tab_partitions
                    where table_name = tp.table_name
                      and table_owner = tp.table_owner
                )
            and table_name = l_sched_rec.table_name
            and table_owner = l_sched_rec.owner;
         execute immediate 'select '||l_hvc||' from dual' into l_hv;
         l_exec_flag := 'NO';
         case l_sched_rec.partition_unit --{
            when 'DAILY' then
               l_new_hv := l_hv + l_sched_rec.num_parts_to_create;
               l_new_part_name := 'Y'||to_char(l_new_hv,'YY')||'M'||to_char(l_new_hv,'MM')||'D'||to_char(l_new_hv,'DD');
               l_exec_flag := 'YES';
            when 'WEEKLY' then
               l_new_hv := l_hv + (7*l_sched_rec.num_parts_to_create);
               l_new_part_name := 'Y'||to_char(l_new_hv,'YY')||'W'||to_char(l_new_hv,'WW');
               if (to_char(sysdate,'d') = '1') then
                  l_exec_flag := 'YES';
               end if;
            when 'MONTHLY' then
               l_new_hv := add_months(l_hv,l_sched_rec.num_parts_to_create);
               if (to_char(sysdate,'dd') = '1') then
                  l_exec_flag := 'YES';
               end if;
               l_new_part_name := 'Y'||to_char(l_new_hv,'YY')||'M'||to_char(l_new_hv,'MM');
            when 'QUARTERLY' then
               l_new_hv := add_months(l_hv,3*l_sched_rec.num_parts_to_create);
               l_new_part_name := 'Y'||to_char(l_new_hv,'YY')||'Q'||to_char(l_new_hv,'Q');
               if (to_char(sysdate,'mm/dd') in ('01/01','04/01','07/01','10/01')) then
                  l_exec_flag := 'YES';
               end if;
            when 'ANNUALLY' then
               l_new_hv := add_months(l_hv,12*l_sched_rec.num_parts_to_create);
               l_new_part_name := 'Y'||to_char(l_new_hv,'YY');
               if to_char(sysdate,'mm/dd') = '01/01' then
                  l_exec_flag := 'YES';
               end if;
            else
               raise invalid_partition_unit;
         end case; --}
         l_new_hvc := to_char(l_new_hv,'mm/dd/yyyy');
         l_sql := 'alter table '||
            l_sched_rec.owner||'.'||
            l_sched_rec.table_name||
            ' split partition pmax '||
            ' at (to_date('''||l_new_hvc||
            ''',''mm/dd/yyyy''))'||
            ' into ( partition '||l_new_part_name||
            ' tablespace '||l_ts||
            ' initrans 11 storage (freelists 11 freelist groups 7)'||
            ' , partition pmax) update global indexes';
      else --}{
         -- non-PMAX code. Add the partition.
         l_hvc := l_old_hvc;
         execute immediate 'select '||l_hvc||' from dual' into l_hv;
         l_exec_flag := 'NO';
         case l_sched_rec.partition_unit --{
            when 'DAILY' then
               l_new_hv := l_hv + l_sched_rec.num_parts_to_create;
               l_new_part_name := 'Y'||to_char(l_new_hv,'YY')||'M'||to_char(l_new_hv,'MM')||'D'||to_char(l_new_hv,'DD');
               l_exec_flag := 'YES';
            when 'WEEKLY' then
               l_new_hv := l_hv + (7*l_sched_rec.num_parts_to_create);
               l_new_part_name := 'Y'||to_char(l_new_hv,'YY')||'W'||to_char(l_new_hv,'WW');
               if (to_char(sysdate,'d') = '1') then
                  l_exec_flag := 'YES';
               end if;
            when 'MONTHLY' then
               l_new_hv := add_months(l_hv,l_sched_rec.num_parts_to_create);
               if (to_char(sysdate,'dd') = '1') then
                  l_exec_flag := 'YES';
               end if;
               l_new_part_name := 'Y'||to_char(l_new_hv,'YY')||'M'||to_char(l_new_hv,'MM');
            when 'QUARTERLY' then
               l_new_hv := add_months(l_hv,3*l_sched_rec.num_parts_to_create);
               l_new_part_name := 'Y'||to_char(l_new_hv,'YY')||'Q'||to_char(l_new_hv,'Q');
               if (to_char(sysdate,'mm/dd') in ('01/01','04/01','07/01','10/01')) then
                  l_exec_flag := 'YES';
               end if;
            when 'ANNUALLY' then
               l_new_hv := add_months(l_hv,12*l_sched_rec.num_parts_to_create);
               l_new_part_name := 'Y'||to_char(l_new_hv,'YY');
               if to_char(sysdate,'mm/dd') = '01/01' then
                  l_exec_flag := 'YES';
               end if;
            else
               raise invalid_partition_unit;
         end case; --}
         l_new_hvc := to_char(l_new_hv,'mm/dd/yyyy');
         l_sql := 'alter table '||
            l_sched_rec.owner||'.'||
            l_sched_rec.table_name||
            ' add partition '||
            l_new_part_name||
            ' values less than '||
            ' (to_date('''||l_new_hvc||
            ''',''mm/dd/yyyy''))'||
            ' tablespace '||l_ts||
            ' initrans 11 storage (freelists 11 freelist groups 7)';
      end if; --}
      l_out :=
         rpad(l_sched_rec.owner,10)||
         rpad(l_sched_rec.table_name,30)||
         rpad(l_sched_rec.partition_unit,10)||
         rpad(to_char(l_hv,'mm/dd/yyyy'),11)||
         rpad(l_new_hvc,11)||
         rpad(l_exec_flag,4);
      dbms_output.put_line(l_out);
      if (l_exec_flag = 'YES') then --{
         dbms_output.put_line('l_new_part_name='||l_new_part_name);
         dbms_output.put_line('l_new_hvc='||l_new_hvc);
         l_mail_body := 'Statement Executed:'||l_sql;
         dbms_output.put_line('l_mail_body='||l_mail_body);
         begin
            execute immediate l_sql;
            l_mail_body := l_mail_body||'...OK';
         exception
            when OTHERS then
               l_errm := SQLERRM;
               l_mail_body := l_mail_body||' ERROR='||l_errm;
               raise;
         end;
         -- table partition split or added. Now rebuild any unusable index partitions.
         for ind_cur in (
            select index_owner, index_name, partition_name
              from dba_ind_partitions
             where (index_owner, index_name) in (
                      select owner, index_name
                        from dba_indexes
                       where table_name = l_sched_rec.table_name
                         and table_owner = l_sched_rec.owner
                   )
               and status = 'UNUSABLE'
         ) loop --{
            dbms_output.put_line('Index Part='||ind_cur.index_owner||'.'||ind_cur.index_name||'.'||ind_cur.partition_name);
            l_sql := 'alter index '||
               ind_cur.index_owner||'.'||
               ind_cur.index_name||' rebuild partition '||
               ind_cur.partition_name||' nologging online';
            l_mail_body := l_mail_body||chr(12)||
               'Statement Executed:'||l_sql;
            dbms_output.put_line('l_mail_body='||l_mail_body);
            begin
               execute immediate l_sql;
               l_mail_body := l_mail_body||'...OK';
            exception
               when OTHERS then
                  l_errm := SQLERRM;
                  l_mail_body := l_mail_body||' ERROR='||l_errm;
                  raise;
            end;
         end loop; --}
         -- index partitions made usable
      end if; --}
   end loop; --}
   close sched_cur;
   dbms_output.put_line(l_full);
   utl_mail.send(
      'oracle@prodb1',l_recipients,
      null, null,
      'Succeeded: PROPRD1 Partition Create',
      l_mail_body
   );
exception
   when OTHERS then
      l_errm := SQLERRM;
      utl_mail.send(
         'oracle@prodb1',l_recipients,
         null, null,
         'Failed: PROPRD1 Partition Create',
         l_mail_body
      );
      raise_application_error(-20001,l_errm);
end;
/
show errors

The code is fairly self-explanatory. Let's see some example output:

SQL> @ex
Owner Table Part Unit Old HV New HV Exec
--------- ----------------------------- --------- ---------- ---------- ---
ARUP PTEST1 MONTHLY 02/01/2010 04/01/2010 NO
ARUP PTEST2 MONTHLY 02/01/2010 04/01/2010 NO
ARUP PTEST3 QUARTERLY 04/01/2010 10/01/2010 NO
ARUP PTEST4 DAILY 01/12/2010 01/14/2010 YES
l_new_part_name=Y10M01D14
l_new_hvc=01/14/2010
l_mail_body=Statement Executed:alter table ARUP.PTEST4 split partition pmax at
(to_date('01/14/2010','mm/dd/yyyy')) into ( partition Y10M01D14 tablespace
MGMT_TABLESPACE initrans 11 storage (freelists 11 freelist groups 7) , partition
pmax) update global indexes
ARUP PTEST5 DAILY 01/04/2010 01/06/2010 YES
l_new_part_name=Y10M01D06
l_new_hvc=01/06/2010
l_mail_body=Statement Executed:alter table ARUP.PTEST5 add partition Y10M01D06
values less than (to_date('01/06/2010','mm/dd/yyyy')) tablespace
MGMT_TABLESPACE initrans 11 storage (freelists 11 freelist groups 7)
ARUP PTEST6 DAILY 01/02/2010 01/04/2010 YES
l_new_part_name=Y10M01D04
l_new_hvc=01/04/2010
l_mail_body=Statement Executed:alter table ARUP.PTEST6 add partition Y10M01D04
values less than (to_date('01/04/2010','mm/dd/yyyy')) tablespace
MGMT_TABLESPACE initrans 11 storage (freelists 11 freelist groups 7)
ARUP PTEST7 DAILY 01/02/2010 01/04/2010 YES
l_new_part_name=Y10M01D04
l_new_hvc=01/04/2010
l_mail_body=Statement Executed:alter table ARUP.PTEST7 add partition Y10M01D04
values less than (to_date('01/04/2010','mm/dd/yyyy')) tablespace
MGMT_TABLESPACE initrans 11 storage (freelists 11 freelist groups 7)

The output is sent to the email address recorded in the stored procedure, in case of both success and failure. Now all you have to do is put it in a Scheduler job and let it run every day. Populate the table PARTITION_CREATE_SCHEDULE as needed, and partitions will be added to those tables automatically.
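
A minimal sketch of such a daily job using DBMS_SCHEDULER (the job name and run time are illustrative):

begin
   dbms_scheduler.create_job (
      job_name        => 'PARTITION_CREATE_JOB',
      job_type        => 'STORED_PROCEDURE',
      job_action      => 'SYS.PARTITION_CREATE',
      start_date      => systimestamp,
      repeat_interval => 'FREQ=DAILY; BYHOUR=6',  -- run every day at 6am
      enabled         => true,
      comments        => 'Adds range partitions ahead of need'
   );
end;
/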

Hope you enjoy it and find it useful. As always, I will appreciate your feedback.

Oracle11g: New Locking Modes When Policing FK Constraints (A Wolf at the Door)

As I’ve been focusing mainly on Oracle 11g at work these days, I thought I might look at a number of Oracle 11g related topics in the coming weeks. To start with, there’s been a subtle but potentially significant change introduced in Oracle 11g (since 11.1.0.6) with regard to the manner in which locks are [...]

Tool to Present a Consolidated View of the Status of RMAN Backups

A very important area of database administration is backups, which are essential to the availability of databases. If you use RMAN to back up your databases (as you should be doing), how do you ensure that all the database backups have completed successfully? One way is to go into each server and check the backup log – a task that is not just tedious but often impractical, considering the large number of databases we have. Is there an easier way – a consolidated output covering all the databases?
The Tool
Yes, there is an easy way – by checking the catalog, assuming of course that you are using a catalog, as you should be. I developed this tool to extract the information from the catalog for each database. Since the repository may be different for each database being backed up, I used views in a common schema, which I named RMAN_COMMON, to union all the relevant RC_* views from all the repositories. Then I wrote a PL/SQL program to pull together a report presenting the information on all the databases. Since my objective was to have a consolidated view of all the backups, whether they succeeded or not, I selected from multiple views to provide an indented output in case of failures, identifying specifically which component failed. I will illustrate with the output shown below. The tool can be customized for the number of days it reports on; here I have chosen the last 2 days.

SQL> conn rman_common/rman_common@catalogdb
Connected.
SQL> @status

******** RMAN Database Backup Report *********

DB Name Start Time End Time Status Time Tak Output S Type
-------- ------------------ ------------------ -------- -------- -------- -------
DEMO1 2010-11-08 00:25:1 2010-11-08 00:50:4 COMPLETE 00:25:27 17.99 DISK
DEMO1 2010-11-09 00:25:1 2010-11-09 00:50:3 COMPLETE 00:25:16 17.99 DISK
ECR 2010-11-07 23:12:0 2010-11-07 23:14:2 COMPLETE 00:02:13 5.55 DISK
ECR 2010-11-08 23:12:0 2010-11-08 23:14:3 COMPLETE 00:02:21 5.58 DISK
EMREP 2010-11-08 01:00:2 2010-11-08 01:02:5 COMPLETE 00:02:35 326.07 DISK
EMREP 2010-11-09 01:00:2 2010-11-09 01:02:5 COMPLETE 00:02:28 353.71 DISK
PROPRD 2010-11-07 23:00:1 2010-11-08 00:04:0 COMPLETE 01:03:49 50.48 DISK
PROPRD 2010-11-08 00:04:2 2010-11-08 11:47:1 COMPLETE 11:42:42 27.59
PROPRD 2010-11-08 10:35:3 2010-11-08 11:20:4 COMPLETE 00:45:12 30.00 DISK
PROPRD 2010-11-08 11:28:5 2010-11-08 12:21:2 COMPLETE 00:52:33 30.00 DISK
PROPRD 2010-11-08 12:23:5 2010-11-08 12:38:5 COMPLETE 00:15:00 10.00 DISK
PROPRD 2010-11-08 12:43:3 2010-11-08 12:43:4 COMPLETE 00:00:07 192.00 DISK
PROPRD 2010-11-08 12:46:1 2010-11-08 12:46:2 COMPLETE 00:00:07 224.00 DISK
PROPRD 2010-11-08 12:48:1 2010-11-08 13:14:0 COMPLETE 00:25:50 20.00 DISK
PROPRD 2010-11-08 13:37:3 2010-11-08 13:58:4 COMPLETE 00:21:11 15.00 DISK
PROPRD 2010-11-08 14:00:2 2010-11-08 14:13:5 COMPLETE 00:13:30 10.00 DISK
PROPRD 2010-11-08 14:29:0 2010-11-08 14:29:0 FAILED 00:00:01 0.00
. Operation Input Status
. -------------------- -------------------- --------------------
. . . FAILED
. Level Status Operation Object Type
. ------ -------- -------------------- --------------------
. > COMPLETE RMAN .
. -> FAILED BACKUP DATAFILE INCR
PROPRD 2010-11-08 23:00:2 2010-11-09 00:21:0 COMPLETE 01:20:43 66.75 DISK
PROPRD 2010-11-09 00:21:2 2010-11-09 14:07:3 COMPLETE 13:46:09 40.25
LIGPRD1 2010-11-08 03:00:0 2010-11-08 03:00:4 COMPLETE 00:00:37 1.74 DISK
LIGPRD1 2010-11-08 03:00:5 2010-11-08 03:04:1 COMPLETE 00:03:12 1.49
LIGPRD1 2010-11-09 03:00:0 2010-11-09 03:00:5 COMPLETE 00:00:45 2.59 DISK
LIGPRD1 2010-11-09 03:01:0 2010-11-09 03:05:1 COMPLETE 00:04:05 1.37
LIGPRD2 2010-11-08 02:00:0 2010-11-08 02:00:3 COMPLETE 00:00:29 1.04 DISK
LIGPRD2 2010-11-08 02:00:4 2010-11-08 02:02:3 COMPLETE 00:01:52 1.34
LIGPRD2 2010-11-09 02:00:0 2010-11-09 02:00:4 COMPLETE 00:00:35 1.67 DISK
LIGPRD2 2010-11-09 02:00:5 2010-11-09 02:03:2 COMPLETE 00:02:26 1.38
SW1 2010-11-08 00:05:0 2010-11-08 00:06:1 COMPLETE 00:01:06 519.17 DISK
SW1 2010-11-09 00:05:0 2010-11-09 00:08:0 COMPLETE 00:03:04 2.01 DISK

PL/SQL procedure successfully completed.

Here you can see the databases in the catalog – DEMO1, ECR, EMREP, PROPRD, LIGPRD1, LIGPRD2 and SW1. The columns “Start Time”, “End Time” and “Time Taken” are self-explanatory. “Output Size” shows the size of the backupset produced. The “Status” column shows the status of the job – the key to this report. If it shows “COMPLETE” then all was well with the job. If it shows “FAILED”, the lines below it show what actually failed. For instance, you can see that on 8th Nov the incremental backup of one datafile of PROPRD failed; that one definitely needs investigating. You get all that important information in just one report. As you add more databases to the same catalog, your reports will become more complete and expansive.

Construction
Now that you have seen the result of the tool, let's look at the code behind it. First I created a user – RMAN_COMMON:
SQL> grant create session, unlimited tablespace, create view to rman_common identified by rman_common;

We need just three views from each repository, so this user needs to be granted select privileges on those only. As the SYS user, grant these:

grant select on rman_PROPRD.rc_rman_backup_job_details to rman_common;
grant select on rman_LIGPRD11.rc_rman_backup_job_details to rman_common;
grant select on rman_LIGPRD21.rc_rman_backup_job_details to rman_common;
grant select on rman_11g.rc_rman_backup_job_details to rman_common;
--
grant select on rman_PROPRD.rc_rman_backup_subjob_details to rman_common;
grant select on rman_LIGPRD11.rc_rman_backup_subjob_details to rman_common;
grant select on rman_LIGPRD21.rc_rman_backup_subjob_details to rman_common;
grant select on rman_11g.rc_rman_backup_subjob_details to rman_common;
--
grant select on rman_PROPRD.rc_rman_status to rman_common;
grant select on rman_LIGPRD11.rc_rman_status to rman_common;
grant select on rman_LIGPRD21.rc_rman_status to rman_common;
grant select on rman_11g.rc_rman_status to rman_common;

Of course, you will need to customize the above script to add more repositories as you add them.
Next, create the views by selecting from the corresponding views in the individual repositories. Instead of creating views with all the columns, I chose only a few. This helps in combining the views from 10g and 11g, where the available columns may differ.

conn rman_common/rman_common
--
create view rc_rman_backup_job_details
as
select
db_name,
start_time,
end_time,
status,
time_taken_display,
output_bytes_display,
output_device_type,
session_key,
session_recid,
session_stamp
from rman_PROPRD.rc_rman_backup_job_details
union all
select
db_name,
start_time,
end_time,
status,
time_taken_display,
output_bytes_display,
output_device_type,
session_key,
session_recid,
session_stamp
from rman_LIGPRD11.rc_rman_backup_job_details
union all
select
db_name,
start_time,
end_time,
status,
time_taken_display,
output_bytes_display,
output_device_type,
session_key,
session_recid,
session_stamp
from rman_LIGPRD21.rc_rman_backup_job_details
union all
select
db_name,
start_time,
end_time,
status,
time_taken_display,
output_bytes_display,
output_device_type,
session_key,
session_recid,
session_stamp
from rman_11g.rc_rman_backup_job_details
/
create view rc_rman_backup_subjob_details
as
select
operation,
input_type,
status,
session_stamp
from rman_PROPRD.rc_rman_backup_subjob_details
union all
select
operation,
input_type,
status,
session_stamp
from rman_LIGPRD11.rc_rman_backup_subjob_details
union all
select
operation,
input_type,
status,
session_stamp
from rman_LIGPRD21.rc_rman_backup_subjob_details
union all
select
operation,
input_type,
status,
session_stamp
from rman_11g.rc_rman_backup_subjob_details
/
create view rc_rman_status
as
select
row_level,
operation,
object_type,
status,
session_key,
session_recid
from rman_PROPRD.rc_rman_status
union all
select
row_level,
operation,
object_type,
status,
session_key,
session_recid
from rman_LIGPRD11.rc_rman_status
union all
select
row_level,
operation,
object_type,
status,
session_key,
session_recid
from rman_LIGPRD21.rc_rman_status
union all
select
row_level,
operation,
object_type,
status,
session_key,
session_recid
from rman_11g.rc_rman_status
/

Now that the views are in place, you can create the actual tool. Here is the PL/SQL code:
/*---------------------------------------------------------------------------
Name : RMAN Job Summary Report
Version : 2.0
Released : Nov 9th, 2010
Author : Arup Nanda.
Description : Creates a report of all backup jobs in a specified number
of days. If anything failed, it also shows the sub-job details.
Tested on 11g; but will work on 10gR2 as well.
Connect to RMAN repository and execute.
For number of days, change the l_days constant.
---------------------------------------------------------------------------*/
prompt
prompt ******** RMAN Database Backup Report *********
prompt
set serveroutput on size unlimited
declare
   l_days constant number := 2;  -- number of days to report on
   l_place                varchar2(400);
   l_db_name              rc_rman_backup_job_details.db_name%type;
   l_start_time           varchar2(40);
   l_end_time             varchar2(40);
   l_status               rc_rman_backup_job_details.status%type;
   l_time_taken_display   rc_rman_backup_job_details.time_taken_display%type;
   l_output_bytes_display rc_rman_backup_job_details.output_bytes_display%type;
   l_output_device_type   rc_rman_backup_job_details.output_device_type%type;
   l_session_key          rc_rman_backup_job_details.session_key%type;
   l_session_recid        rc_rman_backup_job_details.session_recid%type;
   l_session_stamp        rc_rman_backup_job_details.session_stamp%type;

   l_operation            rc_rman_backup_subjob_details.operation%type;
   l_input_type           rc_rman_backup_subjob_details.input_type%type;

   l_command_level        varchar2(9);
   l_object_type          rc_rman_status.object_type%type;

   cursor bjd_cur (p_days in number)
   is
      select
         bjd.db_name,
         to_char(bjd.start_time, 'yyyy-mm-dd hh24:mi:ss'),
         to_char(bjd.end_time, 'yyyy-mm-dd hh24:mi:ss'),
         bjd.status,
         bjd.time_taken_display,
         bjd.output_bytes_display,
         bjd.output_device_type,
         bjd.session_key,
         bjd.session_recid,
         bjd.session_stamp
      from rc_rman_backup_job_details bjd
      where end_time > sysdate - p_days
      order by bjd.db_name, bjd.start_time;
begin
   l_place := 'Place 100';
   -- report header
   dbms_output.put_line(
      rpad('DB Name',8)
      ||' '||
      rpad('Start Time',18)
      ||' '||
      rpad('End Time',18)
      ||' '||
      rpad('Status',8)
      ||' '||
      rpad('Time Taken',8)
      ||' '||
      rpad('Output Size',8)
      ||' '||
      rpad('Type',8)
   );
   dbms_output.put_line(
      rpad('-',8,'-')
      ||' '||
      rpad('-',18,'-')
      ||' '||
      rpad('-',18,'-')
      ||' '||
      rpad('-',8,'-')
      ||' '||
      rpad('-',8,'-')
      ||' '||
      rpad('-',8,'-')
      ||' '||
      rpad('-',8,'-')
   );
   open bjd_cur (l_days);
   loop
      fetch bjd_cur
      into
         l_db_name,
         l_start_time,
         l_end_time,
         l_status,
         l_time_taken_display,
         l_output_bytes_display,
         l_output_device_type,
         l_session_key,
         l_session_recid,
         l_session_stamp;
      exit when bjd_cur%notfound;
      -- one line per backup job
      dbms_output.put_line(
         rpad(l_db_name,8)
         ||' '||
         rpad(l_start_time,18)
         ||' '||
         rpad(l_end_time,18)
         ||' '||
         rpad(l_status,8)
         ||' '||
         rpad(l_time_taken_display,8)
         ||' '||
         rpad(l_output_bytes_display,8)
         ||' '||
         rpad(l_output_device_type,8)
      );
      --
      l_place := 'Place 300';
      if (l_status != 'COMPLETED') then
         -- the job did not complete cleanly; show the sub-job details
         for bsjd_rec in (
            select
               operation,
               input_type,
               status
            from rc_rman_backup_subjob_details
            where session_stamp = l_session_stamp
         ) loop
            l_place := 'Place 400';
            dbms_output.put_line(
               '.'
               ||' '||
               rpad('Operation',20)
               ||' '||
               rpad('Input',20)
               ||' '||
               rpad('Status',20)
            );
            dbms_output.put_line(
               '.'
               ||' '||
               rpad('-',20,'-')
               ||' '||
               rpad('-',20,'-')
               ||' '||
               rpad('-',20,'-')
            );
            dbms_output.put_line(
               '.'
               ||' '||
               rpad(nvl(bsjd_rec.operation,'.'),20)
               ||' '||
               rpad(nvl(bsjd_rec.input_type,'.'),20)
               ||' '||
               rpad(nvl(bsjd_rec.status,'.'),20)
            );
         end loop;
         --
         l_place := 'Place 500';
         dbms_output.put_line(
            '. '||
            rpad('Level',6)
            ||' '||
            rpad('Status',8)
            ||' '||
            rpad('Operation',20)
            ||' '||
            rpad('Object Type',20)
         );
         dbms_output.put_line(
            '. '||
            rpad('-',6,'-')
            ||' '||
            rpad('-',8,'-')
            ||' '||
            rpad('-',20,'-')
            ||' '||
            rpad('-',20,'-')
         );
         -- walk the detailed status rows, indented by command depth
         for status_rec in (
            select
               rpad('-', row_level, '-')||'>' command_level,
               operation,
               object_type,
               status
            from rc_rman_status
            where session_key = l_session_key
            order by row_level, session_recid
         ) loop
            l_place := 'Place 600';
            dbms_output.put_line(
               '. '||
               rpad(nvl(status_rec.command_level,'.'),6)
               ||' '||
               rpad(nvl(status_rec.status,'.'),8)
               ||' '||
               rpad(nvl(status_rec.operation,'.'),20)
               ||' '||
               rpad(nvl(status_rec.object_type,'.'),20)
            );
         end loop;
      end if;
   end loop;
   close bjd_cur;
exception
   when OTHERS then
      dbms_output.put_line(l_place);
      raise;
end;
/

This is the PL/SQL code in the file status.sql. You can adjust the value of the constant l_days to generate the report for as many days as you want. When you run the script, it shows the status of the RMAN jobs over the last l_days days (2 by default).
Hope you find this useful. As always, any feedback will be highly appreciated.

Oracle XMLDB LinkedIn Group

I started an Oracle XMLDB LinkedIn group to provide, via this medium, a place for more high-level discussions and ways to share info. If you are interested, join. This LinkedIn group will be updated and managed by me until Oracle starts an official Oracle XMLDB LinkedIn group. I have noticed that the LinkedIn Group discussions of other vendors and interest groups can be very insightful about what actually moves people in the market.

Current (still somewhat technical) discussions:

  • If you could influence the next new XML features regarding XML(DB) functionality in the Oracle 12 database release, what would be on your wish list?
  • XMLDB Whitepapers and Tooling about Design, Performance and Selectivity
  • What do you think: should the Oracle XMLDB Developers Guide be split up into an “XMLDB Developer” guide and an “XMLDB Administration” guide or an “XMLDB Concepts” guide?

So maybe…

Join us via the Oracle XMLDB LinkedIn group here.

HTH

Marco

Fedora 14 on my desktop…

I wrote a post a few days ago about Fedora 14. Over the weekend I could resist no longer and switched to Fedora 14 as my desktop OS. Prior to this I had been using CentOS 5 for ages.

Now remember, I do almost everything in VMs, so all my Oracle stuff is still on OEL5 x86-64. This is just the desktop I use to run VirtualBox and a browser.

So far so good. The installation went fine and VirtualBox is behaving itself, so all my VMs are running with no problems. For the most part it all feels very similar to CentOS 5, but because all the underlying pieces are up to date I get to run a few extra things, like Chrome as my browser, Shutter for image capture and a newer version of Gimp.

I think Ubuntu is a more natural desktop than Fedora, but I’ve been using Red Hat versions of Linux for years, so I just feel a little happier on them. Fingers crossed this will work out OK.

Cheers

Tim…