Installing RAC 11.2.0.2 on Solaris 10/09 x64

One of the major adventures this time of the year involves installing RAC 11.2.0.2 on Solaris 10 10/09 x86-64. The system setup included EMC PowerPath 5.3 as the multipathing solution to shared storage.

I initially asked for four BL685c G6 blades with 24 cores each, but in the end “only” got two, which is still plenty of resources to experiment with. I especially like the output of this command:

$ /usr/sbin/psrinfo | wc -l
 24

Nice! Actually, it’s 4 Opteron processors:

$ /usr/sbin/prtdiag | less
System Configuration: HP ProLiant BL685c G6
 BIOS Configuration: HP A17 12/09/2009
 BMC Configuration: IPMI 2.0 (KCS: Keyboard Controller Style)
==== Processor Sockets ====================================
Version                          Location Tag
 -------------------------------- --------------------------
 Opteron                          Proc 1
 Opteron                          Proc 2
 Opteron                          Proc 3
 Opteron                          Proc 4

So much for the equipment. The operating system showed 4 NICs, all called bnxen where n was 0 through 3. The first interface, bnxe0, will be used for the public network. The second NIC is to be ignored, and the final two, bnxe2 and bnxe3, will be used for the highly available cluster interconnect feature. This way I can prevent the use of SFRAC, which would inevitably have meant a clustered Veritas file system instead of ASM.

One interesting point to note is that Oracle MOS document 1210883.1 specifies that the interfaces for the private interconnect are all on the same subnet. So node1 will use 192.168.0.1 for bnxe2 and 192.168.0.2 for bnxe3. Similarly, node2 uses 192.168.0.3 for bnxe2 and 192.168.0.4 for bnxe3. The Oracle example is actually a bit more complicated than it needs to be, as it uses a /25 subnet mask. But ipcalc confirms that the addresses used are all well within the subnet:

 Address:   10.1.0.128            00001010.00000001.00000000.1 0000000
 Netmask:   255.255.255.128 = 25  11111111.11111111.11111111.1 0000000
 Wildcard:  0.0.0.127             00000000.00000000.00000000.0 1111111
 =>
 Network:   10.1.0.128/25         00001010.00000001.00000000.1 0000000 (Class A)
 Broadcast: 10.1.0.255            00001010.00000001.00000000.1 1111111
 HostMin:   10.1.0.129            00001010.00000001.00000000.1 0000001
 HostMax:   10.1.0.254            00001010.00000001.00000000.1 1111110
 Hosts/Net: 126                   (Private Internet)

This setup will have some interesting implications which I’ll describe a little later.
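For reference, this is roughly how those private addresses could be made persistent on Solaris 10. It is only a sketch, assuming the interfaces are plumbed the classic way (no IPMP) and using the interface names and addresses from above; your site standards may differ:

# On node1: persist the private interconnect addresses across reboots
echo "192.168.0.1 netmask 255.255.255.0 up" > /etc/hostname.bnxe2
echo "192.168.0.2 netmask 255.255.255.0 up" > /etc/hostname.bnxe3

# Bring them up immediately without a reboot
ifconfig bnxe2 plumb 192.168.0.1 netmask 255.255.255.0 up
ifconfig bnxe3 plumb 192.168.0.2 netmask 255.255.255.0 up

# node2 uses 192.168.0.3 and 192.168.0.4 in the same way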

Part of the test was to find out how mature the port to Solaris on Intel was. So I decided to start off by installing Grid Infrastructure on node 1 first, and extend the cluster to node2 using the addNode.sh script in $ORACLE_HOME/oui/bin.

The installation uses 2 different accounts to store the Grid Infrastructure binaries separately from the RDBMS binaries. Operating system accounts are oragrid and oracle.

Oracle: uid=501(oracle) gid=30275(oinstall) groups=309(dba),2046(asmdba),2047(asmadmin)
OraGrid: uid=502(oragrid) gid=30275(oinstall) groups=309(dba),2046(asmdba),2047(asmadmin)

I started off by downloading files 1, 2 and 3 of patch 10098816 for my platform. The download ratio for this patch was 243 to 751 between x64 and SPARC, so not a massive uptake of this patch set for Solaris x64, it would seem.

As the oragrid user I created user equivalence with RSA and DSA ssh keys. A little utility will do this for you now, but I’m old-school and created the keys and exchanged them between the hosts myself. Not too bad a task on only 2 nodes.
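For the record, the manual approach looks roughly like this. A sketch only, run as oragrid; "node2" is a placeholder host name and the same steps are repeated on the other node:

$ ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
$ ssh-keygen -t dsa -N "" -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_rsa.pub ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
# after generating keys on node2 as well, pull its public keys over:
$ ssh node2 cat .ssh/id_rsa.pub .ssh/id_dsa.pub >> ~/.ssh/authorized_keys
$ scp ~/.ssh/authorized_keys node2:.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys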

The next step was to find out about the shared storage, and that took me a little while, I freely admit: I hadn’t used the EMC PowerPath multipathing software before and found it difficult to approach, mainly for the lack of information about it. Or maybe I just didn’t find it, but device-mapper-multipath on Linux, for instance, is easier to understand. Additionally, the fact that this was Solaris on Intel made it a little more complex. First I needed to know what the device names actually mean. As on Solaris SPARC, /dev/dsk lists the block devices and /dev/rdsk lists the raw devices, so that’s where I’m heading. Next I checked the devices, emcpower0a to emcpower9a. In the course of the installation I found out how to deal with these. First of all, on Solaris Intel, you have to create an fdisk partition on the LUN before it can be dealt with in the SPARC way. So for each device you would like to use, fdisk the emcpowerxp0 device, i.e.

# fdisk /dev/rdsk/emcpower0p0

If there is no partition table yet, simply answer “y” to the question whether you want to use all of the disk for Solaris and exit fdisk. Otherwise, delete the existing partition (AFTER HAVING double/triple CHECKED THAT IT’S REALLY NOT NEEDED!) and create a new one of type “Solaris2”. It didn’t seem necessary to make it active.

Here’s a sample session:

bash-3.00# fdisk /dev/rdsk/emcpower0p0
No fdisk table exists. The default partition for the disk is:
a 100% "SOLARIS System" partition
Type "y" to accept the default partition,  otherwise type "n" to edit the partition table.
Y

Now let’s check the result:

bash-3.00# fdisk /dev/rdsk/emcpower0p0
             Total disk size is 1023 cylinders
             Cylinder size is 2048 (512 byte) blocks

                                               Cylinders
      Partition   Status    Type          Start   End   Length    %
      =========   ======    ============  =====   ===   ======   ===
          1       Active    Solaris2          1  1022    1022    100
SELECT ONE OF THE FOLLOWING:
1. Create a partition
2. Specify the active partition
3. Delete a partition
4. Change between Solaris and Solaris2 Partition IDs
5. Exit (update disk configuration and exit)
6. Cancel (exit without updating disk configuration)
Enter Selection: 6

bash-3.00#

This particular device will be used for my OCRVOTE disk group, which is why it’s only 1G. The next step is identical to SPARC: start the format tool, select partition, change the fourth partition to use the whole disk (with an offset of 3 cylinders at the beginning of the slice) and label it. With that done, exit the format application.

This takes me back to the discussion of the emcpower device names. The letters a through p refer to the slices of the device, while p0 through p4 refer to the fdisk partitions. /dev/emcpowernc is slice 2 of the nth multipathed device, in other words the whole disk. I usually create a slice 4, which translates to emcpowerne.

After having completed the disk initialisation, I had to ensure that the devices I was working on were really shared. Unfortunately the emcpower devices are not consistently named across the cluster: what is emcpower0a on node1 turned out to be emcpower2a on the second node. How to check? The powermt tool to the rescue. Similar to “multipath -ll” on Linux, the powermt command can show the underlying disks which are aggregated under the emcpowern pseudo device. So I wanted to know if my device /dev/rdsk/emcpower0e was shared. What I was really interested in was the native device:

# powermt display dev=emcpower0a | awk \
 > '/c[0-9]t/ {print $3}'
 c1t50000974C00A611Cd6s0
 c2t50000974C00A6118d6s0

Well, does that exist on the other node?

# powermt display dev=all | /usr/sfw/bin/ggrep -B8  c1t50000974C00A611Cd6s0
Pseudo name=emcpower3a
Symmetrix ID=000294900664
Logical device ID=0468
state=alive; policy=SymmOpt; priority=0; queued-IOs=0;
==============================================================================
--------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths    Interf.   Mode    State   Q-IOs Errors
==============================================================================
3072 pci@39,0/pci1166,142@12/pci103c,1708@0/fp@0,0 c1t50000974C00A611Cd6s0 FA  8eA   active  alive       0      0

So yes, it was there. Cool! I checked the 2 other OCR/voting disk LUNs and they were shared as well. The final piece was to change the ownership of the devices to oragrid:asmdba and the permissions to 0660.
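In other words, something like this for each shared device (the device name below is just the example from this system; repeat for every slice you hand over to ASM, on both nodes):

# chown oragrid:asmdba /dev/rdsk/emcpower0e
# chmod 0660 /dev/rdsk/emcpower0e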

Project settings

Another item to look at is the project settings for the grid owner and the oracle account. It’s important to set projects correctly, otherwise the installation will fail when ASM is starting. All newly created users inherit the settings from the default project, so unless the sysadmins have set the default project limits high enough, you will have to change them. To check the settings you can use “prctl -i project default” to list all the values for that project.

I usually create a project for the grid owner, oragrid, as well as for the oracle account. My settings are as follows for a maximum SGA size of around 20G:

projadd -c "Oracle Grid Infrastructure" 'user.oracle'
projmod -s -K "process.max-sem-nsems=(privileged,256,deny)" 'user.oracle'
projmod -s -K "project.max-shm-memory=(privileged,20GB,deny)" 'user.oracle'
projmod -s -K "project.max-shm-ids=(privileged,256,deny)" 'user.oracle'

Repeat this for the oragrid user, then log in as oragrid and check that the project is actually assigned:

# id -p oragrid
uid=223(oragrid) gid=30275(oinstall) projid=100(user.oragrid)
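To double-check the limits actually in force, prctl can query the project directly. A sketch; the same check works for user.oracle:

# prctl -n project.max-shm-memory -i project user.oragrid
# prctl -n process.max-sem-nsems -i project user.oragrid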

Installing Grid Infrastructure

Finally ready to start the installer! The Solaris installation isn’t any different from Linux, except for the aforementioned fiddling with the raw devices.

The installation went smoothly, and I ran orainstroot.sh and root.sh without any problems. If anything, it was a bit slow, taking 10 minutes to complete root.sh on node1. You can tail the rootcrs_node1.log file in /data/oragrid/product/11.2.0.2/cfgtoollogs/crsconfig to see what’s going on behind the scenes. This is certainly one of the biggest improvements over 10g and 11g Release 1.

Extending the cluster

The MOS document I was alluding to earlier suggested, as I said, having all the private NIC IP addresses in the same subnet. That isn’t necessarily to the liking of cluvfy: pinging the remote host’s bnxe3 address fails from either node, as these examples show. Tests executed from node1:

bash-3.00# ping 192.168.0.1
192.168.0.1 is alive
bash-3.00# ping 192.168.0.2
192.168.0.2 is alive
bash-3.00# ping 192.168.0.3
192.168.0.3 is alive
bash-3.00# ping 192.168.0.4
^C
192.168.0.4 is not replying

Tests executed on node 2

bash-3.00# ping 192.168.0.1
192.168.0.1 is alive
bash-3.00# ping 192.168.0.2
^C
bash-3.00# ping 192.168.0.3
192.168.0.3 is alive
bash-3.00# ping 192.168.0.4
192.168.0.4 is alive

I decided to ignore this for now, and sure enough, the cluster extension didn’t fail. As I’m not using GNS, the command to add the node was

$ ./addNode.sh -debug -logLevel finest "CLUSTER_NEW_NODES={loninengblc208}" \
  "CLUSTER_NEW_VIRTUAL_HOSTNAMES={loninengblc208-vip}"

This is actually a little more verbose than I needed, but it’s always good to be prepared for a SR with Oracle.

However, addNode.sh performs a pre-requisite check before the actual call to runInstaller, and that check repeatedly failed, complaining about connectivity on the bnxe3 network. Checking the contents of the addNode.sh script I found an environment variable, IGNORE_PREADDNODE_CHECKS, which can be set to “Y” to make the script skip the pre-requisite checks. With that set, the addNode operation succeeded.
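In other words, something along these lines (reusing the node names from above):

$ export IGNORE_PREADDNODE_CHECKS=Y
$ ./addNode.sh -debug -logLevel finest "CLUSTER_NEW_NODES={loninengblc208}" \
  "CLUSTER_NEW_VIRTUAL_HOSTNAMES={loninengblc208-vip}"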

RDBMS installation

This is actually not worth reporting in detail, as it’s pretty much the same as on Linux. However, a small caveat is specific to Solaris x86-64: many files in the Oracle inventory didn’t have the correct permissions, and when launching runInstaller to install the binaries, I was bombarded with complaints about them.

For example, oraInstaller.properties had the wrong permissions on Solaris Intel:

# ls -l oraInstaller.properties
 -rw-r--r--   1 oragrid  oinstall     317 Nov  9 15:01 oraInstaller.properties

On Linux:

$ ls -l oraInstaller.properties
 -rw-rw---- 1 oragrid oinstall 345 Oct 21 12:44 oraInstaller.properties

There were a few more; I fixed them using these commands:

$ chmod 770 ContentsXML
$ chmod 660 install.platform
$ chmod 770 oui
$ chmod 660 ContentsXML/*
$ chmod 660 oui/*

Once the permissions were fixed the installation succeeded.

DBCA

Nothing to report here, it’s the same as for Linux.

Local Indexes

In a recent article about list partitioned tables I raised some questions about the cases where the optimizer can’t (yet) do partition pruning even when the opportunity is clearly visible to the human eye. The most important example was the case where each partition was defined to hold rows for just one partition key value, but the optimizer could not prune out the redundant partition for a query like: “partition_key != {constant}”.

I recently came across a situation where this really made a big difference. The system had a huge table that had been list partitioned as follows (with some camouflage):


partition by list (status) (
	partition p_state01 values  ('STATE01'),
	partition p_state02 values  ('STATE02'),
	    ....
	partition p_state25 values  ('STATE25'),
	partition p_handled values  ('Completed')
)

The table was defined to allow row movement, and every day there would be tens of thousands of rows moving through various states until they reached the “Completed” state.

There are various pros and cons to this setup. The most significant con is that when you update the status of a row Oracle actually has to update the row “in situ”, then delete it from what is now the wrong partition and insert it into the right partition. The most dramatic pro is that if the rows you’re interested in are (almost always) the ones that haven’t got to the “Completed” state, you’ve put all the boring old garbage out of the way where it doesn’t cause any problems. (In fact, if you’re running 11.2 you might choose to declare some of the “Completed” partitions of any local indexes as unusable and save yourself a lot of space – and by the time I’ve finished this article you might think this is a truly wonderful idea.) In the case of the client, there were about 200 million rows in the completed partition, and barely 2 million spread over the other partitions.

There was a bit of a problem, though. Some of the indexes on this table had been created as local indexes (arguably they should all have been local) and this resulted in some odd optimisation side effects. Here’s a little bit of code to build a table that demonstrates an interesting issue:

create table t1 (
	id,
	n1,
	small_vc,
	padding
)
partition by list (n1) (
	partition p0 values(0),
	partition p1 values(1),
	partition p2 values(2),
	partition p3 values(3),
	partition p4 values(4),
	partition p5 values(5)
)
as
with generator as (
	select	--+ materialize
		rownum id
	from dual
	connect by
		rownum <= 10000
)
select
	rownum			id,
	trunc(log(10,rownum))	n1,
	lpad(rownum,10,'0')	small_vc,
	rpad('x',100)		padding
from
	generator	v1,
	generator	v2
where
	rownum <= 999999
;

create index t1_n1 on t1(n1, small_vc) local nologging;
create index t1_id on t1(id, small_vc) local nologging;

begin
	dbms_stats.gather_table_stats(
		ownname		 => user,
		tabname		 =>'T1',
		estimate_percent => 100,
		method_opt 	 => 'for all columns size 1'
	);
end;
/

break on index_name skip 1

select
	index_name, partition_name, blevel, num_rows, leaf_blocks
from
	user_ind_partitions    -- but see comment #1 below from Tony Sleight
order by
	index_name, partition_name
;

Thanks to the log definition of column n1, you will see a very skewed distribution of data across the partitions, and the output from the query against the index partitions shows this quite dramatically. Since the code sample uses a 100% sample on the stats, you should get the following figures for the indexes (with a little variation in leaf blocks, perhaps, depending on your version and tablespace definitions. I was using 11.1.0.6 with 8KB blocks, locally managed tablespaces, freelists, and 1MB uniform extents.)

INDEX_NAME           PARTITION_NAME           BLEVEL LEAF_BLOCKS   NUM_ROWS
-------------------- -------------------- ---------- ----------- ----------
T1_ID                P0                            0           1          9
                     P1                            0           1         90
                     P2                            1           4        900
                     P3                            1          33       9000
                     P4                            1         339      90000
                     P5                            2        3384     900000

T1_N1                P0                            0           1          9
                     P1                            0           1         90
                     P2                            1           4        900
                     P3                            1          32       9000
                     P4                            1         314      90000
                     P5                            2        3136     900000

So here’s important question number 1: What do you think the blevel will be at the global level for the two indexes?

Important question number 2: If you have a query with a where clause like “id between 100 and 400 and n1 != 5” (which is designed very precisely to exclude the last partition), what value of blevel is Oracle going to use when considering the cost of using the index t1_id to access the data?

My answers are in this follow-up post.
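If you want to check for yourself before reading the follow-up, here’s a minimal sketch against the t1 table built above (dbms_xplan output varies a little by version):

select  index_name, blevel
from    user_indexes
where   table_name = 'T1';

explain plan for
select  count(small_vc)
from    t1
where   id between 100 and 400
and     n1 != 5;

select * from table(dbms_xplan.display(null, null, 'basic +cost +partition'));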

Virtual Conference on Oracle Performance!

The E2SN Virtual Conferences are webinar-style online training sessions, but just like at conferences, you can listen to multiple top speakers within a single event.
The standard conference consists of 4 x 1.5-hour presentations with 30-minute Q&A sessions after each presentation. The four sessions are spread across two days, so the conference takes only four hours per day. So, in addition to not even having to leave your desk to learn from the world’s top experts, you can still get your important work done in the office, too!
The first scheduled virtual conference is focusing on a walk-through on Systematic Oracle SQL Optimization, with emphasis on practical applications and real-life scenarios.

Systematic Oracle SQL Optimization virtual conference:

  • The speakers are: Cary Millsap, Jonathan Lewis, Kerry Osborne and Tanel Poder

  • The conference takes place on 18 and 19 November (2 x 0.5 days)
  • The time of the conference sessions is: 08:00am – 12:00pm Pacific Standard Time (PST) on both days
  • For this inaugural event we have a special launch price of 475 USD per attendee!

Click here for agenda, abstract and registration….

SnagIt on Mac…

I wrote recently about UltraEdit coming to Mac. Another tool I’ve missed since my Windows days is Snagit. The default screen capture on Mac is OK, and Shutter and Gimp on Linux are both OK too, but none of them compare to Snagit in my opinion.

Today I got an email proclaiming that a production release of Snagit is now available for Mac, and since I have a valid Windows license I get to use it for free on Mac. Joy.

It’s now installed and running quite nicely. Now if only they would do a Linux version… :)

Cheers

Tim…

Database Sizing – How much Disk do I need? (The Easy Way)

How much Disk do I need for my new Oracle database? Answer:-

  • 8-10 times the volume of raw data for an OLTP system
  • 2-4 times the raw data volume for a Data Warehouse.
  • The bigger the database, the nearer you will be to the lower multiplication factors.

{Disclaimer. This is of course just my opinion, based on some experience. If you use the above figures for a real project and get the total disc space you need wrong, don’t blame me. If you do and it is right, then of course you now owe me a beer.}

Many of us have probably had to calculate the expected size of a database before, but the actual database is only one component of all the things you need to run the Oracle component of your system. You need to size the other components too – archived redo logs, backup staging area, data load staging area, external files, the operating system, swap space, the Oracle binaries {which generally get bigger every year but shrink in comparison to the average size of an Oracle DB} etc…

In a similar way to my thoughts on how much database space you need for a person, I also used to check the total disk space taken up by every database I created and those that I came across. {A friend emailed me after my earlier posting to ask if I had an obsession about size. I think the answer must be “yes”}.

First of all, you need to know how much “raw data” you have. By this I mean what will become the table data. Back in the early 90’s this could be the total size of the flat files the old system was using, or even the size of the data as it was in spreadsheets. An Oracle export file of the system gives a pretty good idea of the raw data volume too. Lacking all of these, you need to roughly size your raw data yourself: do a calculation of “number_of_rows * sum_of_column_sizes” for your biggest 10 tables (I might blog more on this later). Don’t be tempted to overestimate; my multipliers allow for the padding.
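If the legacy data happens to be sitting in an Oracle schema already, the same rough calculation can be pulled straight from the dictionary. A sketch only, assuming reasonably fresh statistics; OLDAPP is just a placeholder schema name:

select *
from (
      select table_name,
             round(num_rows * avg_row_len / 1024 / 1024) estimated_mb
      from   all_tables
      where  owner = 'OLDAPP'
      order  by num_rows * avg_row_len desc nulls last
     )
where rownum <= 10;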

Let us say you have done this and it comes to 60GB of raw data for an OLTP system. Let the storage guys know you will probably want about 500GB of space. They will then mentally file it under “of no consequence”, as if you have dedicated storage guys you probably have many terabytes of storage. {Oh, I should mention that I am not considering redundancy at all, just the space that is provided. The amount of actual spinning disk is down to the level and type of RAID your storage guys make you use. That is a whole other discussion}.

If you come up with 5TB of raw data for a DW system then you need around 12-15TB of disk storage.

If you come up with more than a terabyte or so of raw data for an OLTP system, or 10 to 20 terabytes for a DW, then when you give your figures to the storage guys/procurement people they may well go pale and say something like “you have got to be kidding!”. This is part of why the multiplication factor for data warehouses, and larger systems in general, is lower: you are forced to be more careful about the space you allocate and how you use it.

The overhead of total disk space over Raw data reduces as the database gets bigger for a number of reasons:

  • The size of the Oracle binaries and the OS does not change as the database gets bigger.
  • The size of swap space does not increase in line with the database because, generally speaking, if you increase the database size from 100GB to 1TB you do not have the luxury of increasing the system memory of your server tenfold; it probably only doubles.
  • Very large databases tend to have something making them big, like images or embedded documents, which are not indexed. Thus the ratio of table segments to index segments increases.
  • If you have a very large database you start removing indexes (often those that support constraints) to aid performance of data load and management, again improving the ratio of table segments to index segments.
  • Backups become partial or incremental to reduce the size and duration of the backup.
  • As mentioned before, the sheer size of the system is such that you just take more care over cleaning up work areas, paring down the archived redo log areas (those files do compress well) and other areas.
  • If things get extreme, or you have been doing this for donkey’s years {note to non-UK people: this means many, many years}, you start altering PCTFREE and checking over extent sizes.

My best ever ratio of database size to raw data was around 1.6 and it took an awful lot of effort and planning to get there. And an IT manager who made me very, very aware of how much the storage was costing him (it is not the disks, it’s all the other stuff).

I should also just mention that the amount of disk you need is only one consideration. If you want your database to perform well you need to consider the number of spindles. After all, you can create a very large database indeed using a single 2TB disc – but any actual IO will perform terribly.

RHEL 6 Released…

So RHEL 6 was released yesterday. Excuse my lack of interest, but I don’t think I will be bothering with it until Oracle Enterprise Linux 6 is available on eDelivery. Then I will no doubt start a frenzy of installations and testing. :)

Cheers

Tim…

Tool to Add Range Partitions Automatically to a Table

Continuing my earlier posts on partitioning, here is another tool on this topic. If you have been using partitioned tables, you might have noticed that one of the thorny issues in partitioning is the need to create partitions ahead of time. If you don't have the partition ready, inserts will fail with the following message:

ERROR at line 1:
ORA-14400: inserted partition key does not map to any partition

This error is pretty disastrous to applications, as they essentially stop functioning, causing an outage, until you add the necessary partition. What can you do to prevent it? Maintain an unblinking vigil, of course, which may quickly become impractical. Oracle 11g has a nice answer - INTERVAL partitioning, which automatically creates partitions when new records come in. What if you are on 10g, or do not want to use INTERVAL partitioning for some reason? Is there a way to simulate interval partitioning?
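For reference, this is roughly what the 11g answer looks like; the table and column names below are made up purely for illustration:

create table sales_demo (
   trans_dt  date,
   amount    number
)
partition by range (trans_dt)
interval (numtodsinterval(1,'DAY'))
(
   partition p_initial values less than (to_date('2010-01-01','yyyy-mm-dd'))
);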

After getting frustrated with the lack of proper partitions more than once, I decided to develop this tool. I perfected the tool over several months to make it more generic. Here it is. Enjoy.

Important: This is provided for educational purpose only. Use it at your own risk. There is no implied or expressed warranty and the author assumes no responsibility for any adverse effect.

What the Tool does

The tool is implemented as a stored procedure. When executed, it reads a driver table called partition_create_schedule, which records the tables to which the partitions are to be added. The table is created as shown below:

/* ---------------------------------------------------------------------
Name    : partition_create_schedule
Purpose : To record the tables to which partitions are to be added.
        : This data is used by the partition_create procedure to
        : automatically add partitions.
----------------------------------------------------------------------*/

create table partition_create_schedule
(
   OWNER                VARCHAR2(30),
   TABLE_NAME           VARCHAR2(30),
   PARTITION_UNIT       VARCHAR2(15),
   NUM_PARTS_TO_CREATE  NUMBER(4)
)
/
comment on table partition_create_schedule is 'To record the tables for which partitions are to be added automatically'
/
comment on column partition_create_schedule.owner is 'the owner of the table'
/
comment on column partition_create_schedule.table_name is 'the name of the table'
/
comment on column partition_create_schedule.PARTITION_UNIT is 'the partitioning scheme - DAILY/WEEKLY/MONTHLY/QUARTERLY/ANNUALLY'
/
comment on column partition_create_schedule.num_parts_to_create is 'the number of units to skip in creating the partition. For instance, if you want to create the next partition 3 days from now for a DAILY unit, place 3 here'
/

The partitioning unit is important. Remember, in date-range partitioned tables you merely provide the range boundaries, not whether the range is monthly, weekly or something like that. Even if the existing ranges look regular, there is no guarantee they will remain so: you may decide to split a partition or merge two. Therefore it is important that you tell the tool what type of partitioning duration the table uses.

Let's insert some records:

insert into partition_create_schedule values ('ARUP','PTEST1','MONTHLY',1);
insert into partition_create_schedule values ('ARUP','PTEST2','MONTHLY',1);
insert into partition_create_schedule values ('ARUP','PTEST3','QUARTERLY',1);
insert into partition_create_schedule values ('ARUP','PTEST4','DAILY',1);
commit;

Now comes the tool - the procedure. To build it, I considered these objectives:

(1) The intervals could be daily, weekly, monthly, quarterly and yearly
(2) When this procedure executes, it should automatically compute the boundary of the partition to add from the existing partitions. This should not be expected to be input from the user.
(3) This procedure will be executed every day automatically. In case of a DAILY partition, a new partition will be created; but not for other ranges. For weekly ranges, the new partition should be created only on the first day of the week; for monthly, on the first day of the month, etc.
(4) The global indexes must be updated automatically
(5) It must handle the two possibilities of the existing highest partition, e.g. a partition with a distinct boundary value and one with (MAXVALUE) clause. In the former case, a partition is added. In the latter case, the maxvalue partition has to be "split"; not added. The procedure must take care of both cases automatically.
(6) When the partition is split, the local indexes get invalidated and must be rebuilt.
(7) It should send an email to the DBA in both success and failure.

The tool handles all the above objectives pretty well.

CREATE OR REPLACE procedure SYS.partition_create
as
cursor sched_cur is
select *
from partition_create_schedule
;
l_sched_rec sched_cur%rowtype;
l_cutoff_date date;
l_hvc varchar2(2000);
l_new_hvc varchar2(2000);
l_old_hvc varchar2(2000);
l_part_name varchar2(2000);
l_new_part_name varchar2(2000);
l_table_name varchar2(2000);
l_hv date;
l_new_hv date;
l_sql varchar2(2000);
l_full varchar2(32000);
l_ts varchar2(2000);
l_errm varchar2(2000);
l_mail_body varchar2(2000);
l_out varchar2(2000);
l_recipients varchar2(2000) :=
'dba.ops@mycompany.com';
errc number(10);
l_place varchar2(2);
l_exec_flag varchar2(4);
invalid_partition_unit exception;
begin
execute immediate 'alter session set smtp_out_server = ''smtp.mycompany.com''';
l_out :=
rpad('Owner',10)||
rpad('Table',30)||
rpad('Part Unit',10)||
rpad('Old HV',11)||
rpad('New HV',11)||
rpad('Exec',4)
;
dbms_output.put_line(l_out);
l_out :=
rpad('-',9,'-')||' '||
rpad('-',29,'-')||' '||
rpad('-',9,'-')||' '||
rpad('-',10,'-')||' '||
rpad('-',10,'-')||' '||
rpad('-',3,'-')
;
dbms_output.put_line(l_out);
open sched_cur;
loop --{
fetch sched_cur into l_sched_rec;
exit when sched_cur%notfound;
select high_value, tablespace_name
into l_old_hvc, l_ts
from dba_tab_partitions tp
where partition_position = (
select max(partition_position) from dba_tab_partitions
where table_name = tp.table_name
and table_owner = tp.table_owner
)
and table_name = l_sched_rec.table_name
and table_owner = l_sched_rec.owner;
if (l_old_hvc = 'MAXVALUE') then --{
-- PMAX code. Split the partition
-- first get the hign value of the partition just prior to PMAX
select high_value, tablespace_name
into l_hvc, l_ts
from dba_tab_partitions tp
where partition_position = (
select max(partition_position) - 1 from dba_tab_partitions
where table_name = tp.table_name
and table_owner = tp.table_owner
)
and table_name = l_sched_rec.table_name
and table_owner = l_sched_rec.owner;
execute immediate 'select '||l_hvc||' from dual' into l_hv;
l_exec_flag := 'NO';
case l_sched_rec.partition_unit --{
when 'DAILY' then
l_new_hv := l_hv + l_sched_rec.num_parts_to_create;
l_new_part_name := 'Y'||to_char(l_new_hv,'YY')||'M'||to_char(l_new_hv,'MM')||'D'||to_char(l_new_hv,'DD');
l_exec_flag := 'YES';
when 'WEEKLY' then
l_new_hv := l_hv + (7*l_sched_rec.num_parts_to_create);
l_new_part_name := 'Y'||to_char(l_new_hv,'YY')||'W'||to_char(l_new_hv,'WW');
if (to_char(sysdate,'d') = '1') then
l_exec_flag := 'YES';
end if;
when 'MONTHLY' then
l_new_hv := add_months(l_hv,l_sched_rec.num_parts_to_create);
if (to_char(sysdate,'dd') = '1') then
l_exec_flag := 'YES';
end if;
l_new_part_name := 'Y'||to_char(l_new_hv,'YY')||'M'||to_char(l_new_hv,'MM');
when 'QUARTERLY' then
l_new_hv := add_months(l_hv,3*l_sched_rec.num_parts_to_create);
l_new_part_name := 'Y'||to_char(l_new_hv,'YY')||'Q'||to_char(l_new_hv,'Q');
if (to_char(sysdate,'mm/dd') in ('01/01','04/01','07/01','10/01')) then
l_exec_flag := 'YES';
end if;
when 'ANNUALLY' then
l_new_hv := add_months(l_hv,12*l_sched_rec.num_parts_to_create);
l_new_part_name := 'Y'||to_char(l_new_hv,'YY');
if to_char(sysdate,'mm/dd') = '01/01' then
l_exec_flag := 'YES';
end if;
else
raise invalid_partition_unit;
end case; --}
l_new_hvc := to_char(l_new_hv,'mm/dd/yyyy');
l_sql := 'alter table '||
l_sched_rec.owner||'.'||
l_sched_rec.table_name||
' split partition pmax '||
' at (to_date('''||l_new_hvc||
''',''mm/dd/yyyy''))'||
' into ( partition '||l_new_part_name||
' tablespace '||l_ts||
' initrans 11 storage (freelists 11 freelist groups 7)'||
' , partition pmax) update global indexes';
else --}{
-- non PMAX code. Add the partition
l_hvc := l_old_hvc;
execute immediate 'select '||l_hvc||' from dual' into l_hv;
l_exec_flag := 'NO';
case l_sched_rec.partition_unit --{
when 'DAILY' then
l_new_hv := l_hv + l_sched_rec.num_parts_to_create;
l_new_part_name := 'Y'||to_char(l_new_hv,'YY')||'M'||to_char(l_new_hv,'MM')||'D'||to_char(l_new_hv,'DD');
l_exec_flag := 'YES';
when 'WEEKLY' then
l_new_hv := l_hv + (7*l_sched_rec.num_parts_to_create);
l_new_part_name := 'Y'||to_char(l_new_hv,'YY')||'W'||to_char(l_new_hv,'WW');
if (to_char(sysdate,'d') = '1') then
l_exec_flag := 'YES';
end if;
when 'MONTHLY' then
l_new_hv := add_months(l_hv,l_sched_rec.num_parts_to_create);
if (to_char(sysdate,'dd') = '1') then
l_exec_flag := 'YES';
end if;
l_new_part_name := 'Y'||to_char(l_new_hv,'YY')||'M'||to_char(l_new_hv,'MM');
when 'QUARTERLY' then
l_new_hv := add_months(l_hv,3*l_sched_rec.num_parts_to_create);
l_new_part_name := 'Y'||to_char(l_new_hv,'YY')||'Q'||to_char(l_new_hv,'Q');
if (to_char(sysdate,'mm/dd') in ('01/01','04/01','07/01','10/01')) then
l_exec_flag := 'YES';
end if;
when 'ANNUALLY' then
l_new_hv := add_months(l_hv,12*l_sched_rec.num_parts_to_create);
l_new_part_name := 'Y'||to_char(l_new_hv,'YY');
if to_char(sysdate,'mm/dd') = '01/01' then
l_exec_flag := 'YES';
end if;
else
raise invalid_partition_unit;
end case; --}
l_new_hvc := to_char(l_new_hv,'mm/dd/yyyy');
l_sql := 'alter table '||
l_sched_rec.owner||'.'||
l_sched_rec.table_name||
' add partition '||
l_new_part_name||
' values less than '||
' (to_date('''||l_new_hvc||
''',''mm/dd/yyyy''))'||
' tablespace '||l_ts||
' initrans 11 storage (freelists 11 freelist groups 7)';
end if; --}
l_out :=
rpad(l_sched_rec.owner,10)||
rpad(l_sched_rec.table_name,30)||
rpad(l_sched_rec.partition_unit,10)||
rpad(to_char(l_hv,'mm/dd/yyyy'),11)||
rpad(l_new_hvc,11)||
rpad(l_exec_flag,4)
;
dbms_output.put_line(l_out);
if (l_exec_flag = 'YES') then --{
dbms_output.put_line('l_new_part_name='||l_new_part_name);
dbms_output.put_line('l_new_hvc='||l_new_hvc);
l_mail_body := 'Statement Executed:'||l_sql;
dbms_output.put_line('l_mail_body='||l_mail_body);
begin
execute immediate l_sql;
l_mail_body := l_mail_body ||'...OK';
exception
when OTHERS then
l_errm := SQLERRM;
l_mail_body := l_mail_body||' ERROR='||l_errm;
raise;
end;
-- table partition split. Now index.
for ind_cur in (
select index_owner, index_name, partition_name
from dba_ind_partitions
where (index_owner, index_name) in (
select owner, index_name
from dba_indexes
where table_name = l_sched_rec.table_name
and table_owner = l_sched_rec.owner
)
and status = 'UNUSABLE'
) loop --{
dbms_output.put_line('Index Part='||ind_cur.index_owner||'.'||ind_cur.index_name||'.'||ind_cur.partition_name);
l_sql := 'alter index '||
ind_cur.index_owner||'.'||
ind_cur.index_name||' rebuild partition '||
ind_cur.partition_name||' nologging online';
l_mail_body := l_mail_body||chr(12)||
'Statement Executed:'||l_sql;
dbms_output.put_line('l_mail_body='||l_mail_body);
begin
execute immediate l_sql;
l_mail_body := l_mail_body||'...OK';
exception
when OTHERS then
l_errm := SQLERRM;
l_mail_body := l_mail_body||' ERROR='||l_errm;
raise;
end;
end loop; --}
-- index partitions made usable
end if; --}
end loop; --}
close sched_cur;
dbms_output.put_line(l_full);
utl_mail.send(
'oracle@prodb1',l_recipients,
null, null,
'Succeeded: PROPRD1 Partition Create',
l_mail_body
);
exception
when OTHERS then
l_errm := SQLERRM;
utl_mail.send(
'oracle@prodb1',l_recipients,
null, null,
'Failed: PROPRD1 Partition Create',
l_mail_body
);
raise_application_error(-20001,l_errm);
end;
/
show error

The code is self-explanatory. Let's see some example output:

SQL> @ex
Owner Table Part Unit Old HV New HV Exec
--------- ----------------------------- --------- ---------- ---------- ---
ARUP PTEST1 MONTHLY 02/01/2010 04/01/2010 NO
ARUP PTEST2 MONTHLY 02/01/2010 04/01/2010 NO
ARUP PTEST3 QUARTERLY 04/01/2010 10/01/2010 NO
ARUP PTEST4 DAILY 01/12/2010 01/14/2010 YES
l_new_part_name=Y10M01D14
l_new_hvc=01/14/2010
l_mail_body=Statement Executed:alter table ARUP.PTEST4 split partition pmax at
(to_date('01/14/2010','mm/dd/yyyy')) into ( partition Y10M01D14 tablespace
MGMT_TABLESPACE initrans 11 storage (freelists 11 freelist groups 7) , partition
pmax) update global indexes
ARUP PTEST5 DAILY 01/04/2010 01/06/2010 YES
l_new_part_name=Y10M01D06
l_new_hvc=01/06/2010
l_mail_body=Statement Executed:alter table ARUP.PTEST5 add partition Y10M01D06
values less than (to_date('01/06/2010','mm/dd/yyyy')) tablespace
MGMT_TABLESPACE initrans 11 storage (freelists 11 freelist groups 7)
ARUP PTEST6 DAILY 01/02/2010 01/04/2010 YES
l_new_part_name=Y10M01D04
l_new_hvc=01/04/2010
l_mail_body=Statement Executed:alter table ARUP.PTEST6 add partition Y10M01D04
values less than (to_date('01/04/2010','mm/dd/yyyy')) tablespace
MGMT_TABLESPACE initrans 11 storage (freelists 11 freelist groups 7)
ARUP PTEST7 DAILY 01/02/2010 01/04/2010 YES
l_new_part_name=Y10M01D04
l_new_hvc=01/04/2010
l_mail_body=Statement Executed:alter table ARUP.PTEST7 add partition Y10M01D04
values less than (to_date('01/04/2010','mm/dd/yyyy')) tablespace
MGMT_TABLESPACE initrans 11 storage (freelists 11 freelist groups 7)

The output is sent to the mail address recorded in the stored procedure, in case of both success and failure. Now all you have to do is put it in a Scheduler job and let it run every day. Populate the table PARTITION_CREATE_SCHEDULE as needed and partitions will be added to those tables automatically.
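A daily Scheduler job to drive it could look like this. A sketch only; the job name, run time and comment are arbitrary:

begin
   dbms_scheduler.create_job(
      job_name        => 'PARTITION_CREATE_JOB',
      job_type        => 'STORED_PROCEDURE',
      job_action      => 'SYS.PARTITION_CREATE',
      start_date      => systimestamp,
      repeat_interval => 'FREQ=DAILY;BYHOUR=6',
      enabled         => true,
      comments        => 'Adds range partitions ahead of demand'
   );
end;
/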

Hope you enjoy and find it useful. As always, I will appreciate your feedback.

Oracle11g: New Locking Modes When Policing FK Constraints (A Wolf at the Door)

As I’ve been focusing mainly on Oracle 11g at work these days, I thought I might look at a number of Oracle 11g related topics in the coming weeks. To start with, there’s been a subtle but potentially significant change introduced in Oracle 11g (since 11.1.0.6) with regard to the manner in which locks are [...]

Tool to Present a Consolidated View of the Status of RMAN Backups

A very important area of database administration is backups, which are essential to the availability of the databases. If you use RMAN to back up your databases (as you should be doing), how do you ensure that all the database backups have completed successfully? One way is to go into each server and check the backup log, a task that is not just tedious but often impractical considering the large number of databases we have. Is there an easier way, perhaps a consolidated view across all databases?
The Tool
Yes, there is an easy way: by checking the catalog, assuming of course that you are using a catalog, as you should be. I developed this tool to extract the information from the catalog for each database. Since the repository may be different for each database being backed up, I used views to UNION ALL the RC_* views from all the repositories into a common schema which I named RMAN_COMMON. Then I wrote a PL/SQL program to pull together a report presenting the information on all databases. Since my objective was to have a consolidated view of all the backups, whether they succeeded or not, I selected from multiple views to provide an indented output in case of failures, identifying specifically which component failed. I will illustrate with the output shown below. The tool can be customized for the number of days it reports on; here I have chosen the last 2 days.

SQL> conn rman_common/rman_common@catalogdb
Connected.
SQL> @status

******** RMAN Database Backup Report *********

DB Name Start Time End Time Status Time Tak Output S Type
-------- ------------------ ------------------ -------- -------- -------- -------
DEMO1 2010-11-08 00:25:1 2010-11-08 00:50:4 COMPLETE 00:25:27 17.99 DISK
DEMO1 2010-11-09 00:25:1 2010-11-09 00:50:3 COMPLETE 00:25:16 17.99 DISK
ECR 2010-11-07 23:12:0 2010-11-07 23:14:2 COMPLETE 00:02:13 5.55 DISK
ECR 2010-11-08 23:12:0 2010-11-08 23:14:3 COMPLETE 00:02:21 5.58 DISK
EMREP 2010-11-08 01:00:2 2010-11-08 01:02:5 COMPLETE 00:02:35 326.07 DISK
EMREP 2010-11-09 01:00:2 2010-11-09 01:02:5 COMPLETE 00:02:28 353.71 DISK
PROPRD 2010-11-07 23:00:1 2010-11-08 00:04:0 COMPLETE 01:03:49 50.48 DISK
PROPRD 2010-11-08 00:04:2 2010-11-08 11:47:1 COMPLETE 11:42:42 27.59
PROPRD 2010-11-08 10:35:3 2010-11-08 11:20:4 COMPLETE 00:45:12 30.00 DISK
PROPRD 2010-11-08 11:28:5 2010-11-08 12:21:2 COMPLETE 00:52:33 30.00 DISK
PROPRD 2010-11-08 12:23:5 2010-11-08 12:38:5 COMPLETE 00:15:00 10.00 DISK
PROPRD 2010-11-08 12:43:3 2010-11-08 12:43:4 COMPLETE 00:00:07 192.00 DISK
PROPRD 2010-11-08 12:46:1 2010-11-08 12:46:2 COMPLETE 00:00:07 224.00 DISK
PROPRD 2010-11-08 12:48:1 2010-11-08 13:14:0 COMPLETE 00:25:50 20.00 DISK
PROPRD 2010-11-08 13:37:3 2010-11-08 13:58:4 COMPLETE 00:21:11 15.00 DISK
PROPRD 2010-11-08 14:00:2 2010-11-08 14:13:5 COMPLETE 00:13:30 10.00 DISK
PROPRD 2010-11-08 14:29:0 2010-11-08 14:29:0 FAILED 00:00:01 0.00
. Operation Input Status
. -------------------- -------------------- --------------------
. . . FAILED
. Level Status Operation Object Type
. ------ -------- -------------------- --------------------
. > COMPLETE RMAN .
. -> FAILED BACKUP DATAFILE INCR
PROPRD 2010-11-08 23:00:2 2010-11-09 00:21:0 COMPLETE 01:20:43 66.75 DISK
PROPRD 2010-11-09 00:21:2 2010-11-09 14:07:3 COMPLETE 13:46:09 40.25
LIGPRD1 2010-11-08 03:00:0 2010-11-08 03:00:4 COMPLETE 00:00:37 1.74 DISK
LIGPRD1 2010-11-08 03:00:5 2010-11-08 03:04:1 COMPLETE 00:03:12 1.49
LIGPRD1 2010-11-09 03:00:0 2010-11-09 03:00:5 COMPLETE 00:00:45 2.59 DISK
LIGPRD1 2010-11-09 03:01:0 2010-11-09 03:05:1 COMPLETE 00:04:05 1.37
LIGPRD2 2010-11-08 02:00:0 2010-11-08 02:00:3 COMPLETE 00:00:29 1.04 DISK
LIGPRD2 2010-11-08 02:00:4 2010-11-08 02:02:3 COMPLETE 00:01:52 1.34
LIGPRD2 2010-11-09 02:00:0 2010-11-09 02:00:4 COMPLETE 00:00:35 1.67 DISK
LIGPRD2 2010-11-09 02:00:5 2010-11-09 02:03:2 COMPLETE 00:02:26 1.38
SW1 2010-11-08 00:05:0 2010-11-08 00:06:1 COMPLETE 00:01:06 519.17 DISK
SW1 2010-11-09 00:05:0 2010-11-09 00:08:0 COMPLETE 00:03:04 2.01 DISK

PL/SQL procedure successfully completed.

Here you can see the databases in the catalogs: DEMO1, ECR, EMREP, PROPRD, LIGPRD1, LIGPRD2 and SW1. The columns "Start Time", "End Time" and "Time Taken" are self-explanatory. "Output Size" shows the size of the backup set produced. The "Status" column shows the status of the job, which is the key to this report. If it shows "COMPLETE" (the fixed-width formatting truncates "COMPLETED"), then all was well with the job. If it shows "FAILED", the lines below it show what actually failed. For instance, you can see that on 8th Nov the incremental backup of one datafile of PROPRD failed. That one definitely needs investigating. You get all that important information in just one report. As you add more databases to the same catalogs, your reports will become more complete and expansive.
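Once the RMAN_COMMON views described below are in place, you can also drill into a failed job by hand using the session_key behind the report line; the 12345 below is just a placeholder value:

select row_level, operation, object_type, status
from   rman_common.rc_rman_status
where  session_key = 12345
order  by row_level, session_recid;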

Construction
Now that you have seen the result of the tool, let's look at the code behind it. First I created a user, RMAN_COMMON:
SQL> grant create session, unlimited tablespace, create view to rman_common identified by rman_common;

We need just three views from each repository, so this user needs to be granted select privileges on those only. As the SYS user, grant these:

grant select on rman_PROPRD.rc_rman_backup_job_details to rman_common;
grant select on rman_LIGPRD11.rc_rman_backup_job_details to rman_common;
grant select on rman_LIGPRD21.rc_rman_backup_job_details to rman_common;
grant select on rman_11g.rc_rman_backup_job_details to rman_common;
--
grant select on rman_PROPRD.rc_rman_backup_subjob_details to rman_common;
grant select on rman_LIGPRD11.rc_rman_backup_subjob_details to rman_common;
grant select on rman_LIGPRD21.rc_rman_backup_subjob_details to rman_common;
grant select on rman_11g.rc_rman_backup_subjob_details to rman_common;
--
grant select on rman_PROPRD.rc_rman_status to rman_common;
grant select on rman_LIGPRD11.rc_rman_status to rman_common;
grant select on rman_LIGPRD21.rc_rman_status to rman_common;
grant select on rman_11g.rc_rman_status to rman_common;

Of course, you will need to customize the above script as you add more repositories.
Next, create the views by selecting from the corresponding views in the individual repositories. Instead of creating views with all the columns, I chose only a few. This will help in combining the views from 10g and 11g, where the columns could be different.

conn rman_common/rman_common
--
create view rc_rman_backup_job_details
as
select
db_name,
start_time,
end_time,
status,
time_taken_display,
output_bytes_display,
output_device_type,
session_key,
session_recid,
session_stamp
from rman_PROPRD.rc_rman_backup_job_details
union all
select
db_name,
start_time,
end_time,
status,
time_taken_display,
output_bytes_display,
output_device_type,
session_key,
session_recid,
session_stamp
from rman_LIGPRD11.rc_rman_backup_job_details
union all
select
db_name,
start_time,
end_time,
status,
time_taken_display,
output_bytes_display,
output_device_type,
session_key,
session_recid,
session_stamp
from rman_LIGPRD21.rc_rman_backup_job_details
union all
select
db_name,
start_time,
end_time,
status,
time_taken_display,
output_bytes_display,
output_device_type,
session_key,
session_recid,
session_stamp
from rman_11g.rc_rman_backup_job_details
/
create view rc_rman_backup_subjob_details
as
select
operation,
input_type,
status,
session_stamp
from rman_PROPRD.rc_rman_backup_subjob_details
union all
select
operation,
input_type,
status,
session_stamp
from rman_LIGPRD11.rc_rman_backup_subjob_details
union all
select
operation,
input_type,
status,
session_stamp
from rman_LIGPRD21.rc_rman_backup_subjob_details
union all
select
operation,
input_type,
status,
session_stamp
from rman_11g.rc_rman_backup_subjob_details
/
create view rc_rman_status
as
select
row_level,
operation,
object_type,
status,
session_key,
session_recid
from rman_PROPRD.rc_rman_status
union all
select
row_level,
operation,
object_type,
status,
session_key,
session_recid
from rman_LIGPRD11.rc_rman_status
union all
select
row_level,
operation,
object_type,
status,
session_key,
session_recid
from rman_LIGPRD21.rc_rman_status
union all
select
row_level,
operation,
object_type,
status,
session_key,
session_recid
from rman_11g.rc_rman_status
/
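A quick sanity check that the combined views return data from every repository (a sketch; your database names and counts will obviously differ):

select db_name, count(*) backup_jobs, max(end_time) last_backup
from   rc_rman_backup_job_details
group  by db_name
order  by db_name;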

Now that the views are in place, you can create the actual tool. Here is the PL/SQL code:
/*---------------------------------------------------------------------------
Name : RMAN Job Summary Report
Version : 2.0
Released : Nov 9th, 2010
Author : Arup Nanda.
Description : Creates a report of all backup jobs in a specified number
of days. If anything failed, it also shows the sub-job details.
Tested on 11g; but will work on 10gR2 as well.
Connect to RMAN repository and execute.
For number of days, change the l_days constant.
---------------------------------------------------------------------------*/
prompt
prompt ******** RMAN Database Backup Report *********
prompt
set serveroutput on size unlimited
declare
l_days constant number := 2;
l_place varchar2(400);
l_db_name rc_rman_backup_job_details.db_name%type;
l_start_time varchar2(40);
l_end_time varchar2(40);
l_status rc_rman_backup_job_details.status%type;
l_time_taken_display rc_rman_backup_job_details.time_taken_display%type;
l_output_bytes_display rc_rman_backup_job_details.output_bytes_display%type;
l_output_device_type rc_rman_backup_job_details.output_device_type%type;
l_session_key rc_rman_backup_job_details.session_key%type;
l_session_recid rc_rman_backup_job_details.session_recid%type;
l_session_stamp rc_rman_backup_job_details.session_stamp%type;

l_operation rc_rman_backup_subjob_details.operation%type;
l_input_type rc_rman_backup_subjob_details.input_type%type;

l_command_level varchar2(9);
l_object_type rc_rman_status.object_type%type;
cursor bjd_cur
(
p_days in number
)
is
select
bjd.db_name,
to_char(bjd.start_time, 'yyyy-mm-dd hh24:mi:ss'),
to_char(bjd.end_time, 'yyyy-mm-dd hh24:mi:ss'),
bjd.status,
bjd.time_taken_display,
bjd.output_bytes_display,
bjd.output_device_type,
bjd.session_key,
bjd.session_recid,
bjd.session_stamp
from rc_rman_backup_job_details bjd
where end_time > sysdate - p_days
order by bjd.db_name, bjd.start_time;

begin
l_place := 'Place 100';
dbms_output.put_line(
rpad('DB Name',8)
||' '||
rpad('Start Time',18)
||' '||
rpad('End Time',18)
||' '||
rpad('Status',8)
||' '||
rpad('Time Taken',8)
||' '||
rpad('Output Size',8)
||' '||
rpad('Type',8)
);
dbms_output.put_line(
rpad('-',8,'-')
||' '||
rpad('-',18,'-')
||' '||
rpad('-',18,'-')
||' '||
rpad('-',8,'-')
||' '||
rpad('-',8,'-')
||' '||
rpad('-',8,'-')
||' '||
rpad('-',8,'-')
);
open bjd_cur (l_days);
loop
fetch bjd_cur
into
l_db_name,
l_start_time,
l_end_time,
l_status,
l_time_taken_display,
l_output_bytes_display,
l_output_device_type,
l_session_key,
l_session_recid,
l_session_stamp
;
exit when bjd_cur%notfound;
dbms_output.put_line(
rpad(l_db_name ,8)
||' '||
rpad(l_start_time ,18)
||' '||
rpad(l_end_time ,18)
||' '||
rpad(l_status ,8)
||' '||
rpad(l_time_taken_display ,8)
||' '||
rpad(l_output_bytes_display ,8)
||' '||
rpad(l_output_device_type,8)
);
--
--
l_place := 'Place 300';
if (l_status != 'COMPLETED') then
for bsjd_rec in (
select
operation,
input_type,
status
from rc_rman_backup_subjob_details
where session_stamp = l_session_stamp
) loop
l_place := 'Place 400';
dbms_output.put_line(
'.'
||' '||
rpad('Operation',20)
||' '||
rpad('Input',20)
||' '||
rpad('Status',20)
);
dbms_output.put_line(
'.'
||' '||
rpad('-',20,'-')
||' '||
rpad('-',20,'-')
||' '||
rpad('-',20,'-')
);
dbms_output.put_line(
'.'
||' '||
rpad(nvl(bsjd_rec.operation,'.') ,20)
||' '||
rpad(nvl(bsjd_rec.input_type,'.') ,20)
||' '||
rpad(nvl(bsjd_rec.status,'.') ,20)
);
end loop;
--
l_place := 'Place 500';
dbms_output.put_line(
'. '||
rpad('Level' ,6)
||' '||
rpad('Status' ,8)
||' '||
rpad('Operation' ,20)
||' '||
rpad('Object Type' ,20)
);
dbms_output.put_line(
'. '||
rpad('-' ,6,'-')
||' '||
rpad('-' ,8,'-')
||' '||
rpad('-' ,20,'-')
||' '||
rpad('-' ,20,'-')
);
for status_rec in (
select
rpad('-', row_level, '-')||'>' command_level,
operation,
object_type,
status
from rc_rman_status
where session_key = l_session_key
order by row_level, session_recid
) loop
l_place := 'Place 600';
dbms_output.put_line(
'. '||
rpad(nvl(status_rec.command_level,'.') ,6)
||' '||
rpad(nvl(status_rec.status,'.') ,8)
||' '||
rpad(nvl(status_rec.operation,'.') ,20)
||' '||
rpad(nvl(status_rec.object_type,'.') ,20)
);
end loop;
end if;
end loop;
exception
when OTHERS then
dbms_output.put_line(l_place);
raise;
end;
/

This is the PL/SQL code in the file status.sql. You can adjust the value of the constant l_days to generate the report for as many days as you want. When you run the script it shows the status of the RMAN jobs in the last 2 days.
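To have the report land in your inbox every morning, a small wrapper around status.sql can be scheduled from cron. A sketch only, with the paths, ORACLE_HOME and mail address all placeholders (the connect string is the one used above):

#!/bin/bash
# Hypothetical cron wrapper for the daily RMAN status report
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
export PATH=$ORACLE_HOME/bin:$PATH

# add an "exit" at the end of status.sql so sqlplus returns to the shell
sqlplus -s rman_common/rman_common@catalogdb @/home/oracle/scripts/status.sql \
    > /tmp/rman_status_$(date +%Y%m%d).txt

mailx -s "Daily RMAN backup status" dba-team@example.com \
    < /tmp/rman_status_$(date +%Y%m%d).txt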
Hope you find this useful. As always, any feedback will be highly appreciated.