Oakies Blog Aggregator

Aggregating & Purging Batch Timings

Application Engine can collect timing information for the programs being executed. These 'batch timings' can be written to a log file and/or to tables in the database. I always recommend that this is enabled in all environments. The runtime overhead is very low, and the data is extremely valuable for determining the performance of a system over a period of time, and for identifying the pieces of SQL or PeopleCode that account for the most time. The timing data collected for individual processes can be viewed directly within the Process Monitor component.
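
Batch timings are controlled by the Application Engine trace flags in the Process Scheduler configuration file. The following is an illustrative sketch only; the bit values are quoted from memory and should be verified in PeopleBooks for your PeopleTools release.

; psprcs.cfg (Process Scheduler configuration) -- illustrative sketch only
; TraceAE is a bit mask. As I recall:
;   128  = write statement timings to the trace (.AET) file
;   1024 = write statement timings to the PS_BAT_TIMINGS% tables
TraceAE=1152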

The Process Scheduler purge process does not delete batch timings, so this data remains in the database indefinitely, although it can no longer be accessed via the Process Monitor. Over time, on a busy system, a large volume of data can accumulate. In some ways this is a good thing: there are good reasons to purge the Process Scheduler as aggressively as the business will permit, and the batch timings can still be analysed by direct SQL query. However, the sheer volume of data is likely to result in queries that take quite a while to execute. After a while, you are less likely to be interested in the performance of individual processes, and more likely to want to aggregate the data. So it makes sense to hold the data at least partly aggregated.
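
For example, a query along the following lines reports where the time went, by process. This is a hedged sketch: the column names (PROCESS_NAME, TIME_ELAPSED) are assumptions for illustration and should be checked against the BAT_TIMINGS record definitions in Application Designer.

select process_name
,      count(*)          processes
,      sum(time_elapsed) total_time_elapsed
from   ps_bat_timings_log
group by process_name
order by total_time_elapsed desc;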

I have produced a very simple Application Engine program (GFC_TIM_ARCH) to address this problem. The program is available for download from the Go-Faster website as a PeopleTools Application Designer project. Please note that it has been written using Oracle RDBMS-specific SQL syntax.

  • Application Engine stores batch timing data in three tables: PS_BAT_TIMINGS_LOG, PS_BAT_TIMINGS_FN and PS_BAT_TIMINGS_DTL.
  • I have created three new tables: PS_GFC_TIMINGS_LOG, PS_GFC_TIMINGS_FN and PS_GFC_TIMINGS_DTL.
  • GFC_TIM_ARCH aggregates the data in each of these tables by the day on which the process began, by the process name, and (where applicable) by the 'detail_id' column. The aggregated data is inserted into the GFC_TIMINGS% tables, and the original data is deleted from the BAT_TIMINGS% tables (see the sketch after this list).
  • It finds the earliest three days for which timing data exists that is older than the longest Process Scheduler retention limit. The idea is that the process should be run daily (the delivered 'Daily Purge' recurrence is suitable), but if it doesn't run for some reason it will catch up the next day.
  • Only one instance of GFC_TIM_ARCH is permitted to run concurrently.
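
The aggregation itself is conceptually simple. The following is a minimal sketch of the idea for the detail table only; the column names are assumptions for illustration, and the real program also handles the LOG and FN tables and its own restart behaviour.

-- Hedged sketch only, not the actual GFC_TIM_ARCH code.
INSERT INTO ps_gfc_timings_dtl
       (process_date, bat_program_name, detail_id, time_elapsed)
SELECT TRUNC(l.dttm_stamp), l.bat_program_name, d.detail_id
,      SUM(d.time_elapsed)
FROM   ps_bat_timings_dtl d
,      ps_bat_timings_log l
WHERE  l.process_instance = d.process_instance
AND    l.dttm_stamp < TRUNC(SYSDATE) - :retention_days
GROUP BY TRUNC(l.dttm_stamp), l.bat_program_name, d.detail_id;

DELETE FROM ps_bat_timings_dtl d
WHERE  EXISTS (SELECT 'x'
               FROM   ps_bat_timings_log l
               WHERE  l.process_instance = d.process_instance
               AND    l.dttm_stamp < TRUNC(SYSDATE) - :retention_days);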

On one system, where I tested this process, BAT_TIMINGS_DTL was growing by over 1 million rows per day. This became around 5000 rows per day in GFC_TIMINGS_DTL.

If you have been running with batch timings for a while, then when you first introduce this process you will probably have a large backlog of data to be aggregated and purged. The easiest option is to run the process repeatedly until the backlog has been processed (possibly using the recurrence that causes a program to run every minute). After the backlog has been cleared, the BAT_TIMINGS% tables should be rebuilt or shrunk in order to release the space left behind by the deleted rows. This will help queries that scan the BAT_TIMINGS_DTL record; otherwise those scans must still read the empty blocks below the tables' High Water Marks.
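
On Oracle 10g and later, something along these lines will do it (the shrink option assumes the table is in an ASSM tablespace):

-- Option 1: online segment shrink (requires row movement and ASSM)
ALTER TABLE ps_bat_timings_dtl ENABLE ROW MOVEMENT;
ALTER TABLE ps_bat_timings_dtl SHRINK SPACE;

-- Option 2: rebuild the table; the move leaves its indexes UNUSABLE,
-- so they must then be rebuilt.
ALTER TABLE ps_bat_timings_dtl MOVE;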

Once the backlog has been cleared, the GFC_TIM_ARCH process can run daily and, notwithstanding variations in the load on the system, the rolling volume of data retained in the BAT_TIMINGS% tables should be fairly constant, and it should not be necessary to rebuild them frequently. Space freed by the daily delete should be reused by new rows inserted as Application Engine processes run.

ACE Directors Product Briefing '09

One of the most valuable benefits of being an Oracle ACE Director is the briefings by Oracle Product Managers at Oracle HQ. This year the briefing was on Friday, Oct 9th, at the Oracle Conference Center rather than the customary Hilton Hotel.

While I was a little disappointed by the coverage of the database topics, I quickly recovered from the alphabet soup that makes up the netherworld of middleware and tools. A surprise visit by Thomas Kurian to address questions from the audience about the various product roadmaps was a testament that Oracle is dead serious about the ACE Program. It proves the commitment Oracle has made to the user community - very heartening.

As always, Vikky Lira and Lillian Buziak did a wonderful job of organizing the event. Considering there were about 100 ACE Directors from 20+ countries, that is no small task. Perhaps the highlight of the organization was the detailed briefing sheet Lillian prepared for each attendee individually, down to what car service one takes and when - simply superb! No amount of thanks will be enough. From the bottom of my heart, thank you, Vikky and Lillian. And thank you, Justin Kestelyn - for kicking off and running the event year after year.

Open World 09 Starts

Oracle Open World 2009 has officially started with the User Group sessions today, and I am presenting one of them. I started off by registering and getting my cool Blogger badge holder, hanging off the even cooler ACE Director lanyard.

I went off to the first session of the day in the IOUG track - Workload Management by Alex Gorbachev. Alex is one of those people who know their stuff, so there is always something to be learned from him. He successfully demonstrated the difference between Connection Load Balancing and Server Side Listener Load Balancing, using a pmon trace to show how the sessions are balanced. It sheds light on the perennial question - why isn't Oracle balancing the workload?

If you didn't attend this, you should definitely download the presentation and check it out later.

Off to Oracle OpenWorld

I've been a bit busy with things recently - getting ready for Oracle OpenWorld in particular. I haven't been able to pay as much attention to asktom as I normally do - that'll change the week after OpenWorld (I'll be taking questions again that week; until then, probably not).

ID#: S312577
Title: Keynote: Oracle Develop "What Are We Still Doing Wrong"
Track: Oracle Develop: Database
Date: 11-OCT-09
Time: 09:00 - 10:00
Venue: Hilton Hotel
Room: Grand Ballroom B

ID#: S311235
Title: All About Metadata: Why Telling the Database About Your Schema Matters
Track: Oracle Develop: Database
Date: 12-OCT-09
Time: 11:30 - 12:30
Venue: Hilton Hotel
Room: Imperial Ballroom B

ID#: S311322
Title: DBA 2.0: Battle of the DBAs Revisited
Track: Database
Date: 12-OCT-09
Time: 17:30 - 18:30
Venue: Moscone South
Room: Room 103

ID#: S311234
Title: The top 10 - No, 11 - New Features of Oracle Database 11g Release 2
Track: Database
Date: 13-OCT-09
Time: 17:30 - 18:30
Venue: Moscone South
Room: Room 103

ID#: S311236
Title: Efficient PL/SQL: Why and How to Use PL/SQL to Its Greatest Effect
Track: Database
Date: 14-OCT-09
Time: 17:00 - 18:00
Venue: Moscone South
Room: Room 103

I'll be in the OTN lounge on Wednesday from 12:45pm till 1:30pm with the "Heavy Hitters" program. Anyone is free to attend that session.

Also, I'll be in the database demo grounds in Moscone West - in the middle of the demo area. It'll say "database overview" in big letters on top and "Oracle Database 11g Release 2" underneath. I'll be manning a station there on Tuesday from 10:30am till 11:30am and on Wednesday from 9am till 10:15am.

Looking forward to OpenWorld - both the beginning and the end :) It'll be nice to get back home after a week at a conference.

Memorization vs. Understanding

This graphic was recently pointed out to me (thanks Jared!) and I loved it...so I thought I'd share.

The key point (as noted in the upper right) is:

Understanding how something works can drastically reduce the need to memorize a bunch of seemingly arbitrary facts.

I believe this is an accurate statement. I've been around many people who could spout the most arcane command syntax or provide the "textbook" definition of virtually any topic. But when a problem required more than knowing a definition or being able to quote the documentation, they quickly got lost and didn't know what to do or how to proceed.

Certification exams, for instance, ask questions that can be answered correctly if you've memorized a lot of facts. I've always jokingly said that if I gave my mother (sweet southern homemaker that she is) enough material to memorize, she could likely pass the Oracle certification exams in 1 or 2 tries. And although she'd never indulge me by putting that theory to the test, I suspect it's close to the truth. She's a bright woman with a great memory, so I think she could memorize the material and do pretty well on the tests.

I once met a kid who had just graduated from college and started his first real job with a client I was doing some work for several years ago. The company paid for him to attend a two-week "boot camp" course and take the certification exams. So, with about two months of on-the-job experience, he went to the two-week class, took all of the exams at the end of it, passed them all, and returned to work a "certified expert". Of course, there was a big production flame-out right after he got back, and he got razzed pretty good about "not knowing" what to do since he was now an "expert".

The bottom line, as I see it, is that memorized knowledge can (and likely should) be part of your foundation. But it will only get you so far. It's the ability to really understand something and how it works that will help you solve issues that just aren't solvable with memorized facts alone.

http://asktom.oracle.com/tkyte/

This page will be published and republished over the next couple of days and into the future. It will contain pointers to the material that was previously hosted on http://asktom.oracle.com/tkyte. I'll be reformatting those pages to fit into blogspot here and hosting the data supporting the pages in my "files" tab on asktom. After that is all done, I'll be updating the questions and answers on asktom to map to the new URLs for everything.

So - beware - you will see a flurry of activity on my blog for a bit. It won't be anything new, but it might be something you didn't see before...

Selected Utilities/Frequently Asked Questions

http://asktom.oracle.com/tkyte/flat

How Can I unload data to a flat file

Many times we are asked:

  • "Does anyone know an easy way of dumping the data from an Oracle table into a delimited(comma, tab etc) ascii file?"
  • "Does anyone know an easy way to unload data in a format for sqlldr to reload later?"

Well, here is a way to do it into Excel or any spreadsheet that understands the industry-standard SYLK file format.

Here is a Pro*C program that does it to a flat file very fast.

And here is a PL/SQL routine that uses UTL_FILE to do the same.
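
The links from the original page are lost in this aggregated copy, so here is a minimal sketch of the UTL_FILE approach, hard-coded for one table and assuming a directory object called DATA_DIR already exists:

create or replace procedure dump_dept_to_file
as
    l_file  utl_file.file_type;
begin
    -- DATA_DIR is an assumed directory object, e.g.
    --   create directory data_dir as '/tmp';
    l_file := utl_file.fopen( 'DATA_DIR', 'dept.dat', 'w' );
    for r in ( select deptno, dname, loc from dept )
    loop
        utl_file.put_line( l_file, r.deptno || '|' || r.dname || '|' || r.loc );
    end loop;
    utl_file.fclose( l_file );
end;
/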

A reader (Andy Rivenes) offers this more "robust" PL/SQL implementation based on the original code.

And lastly, SQL*Plus can do this quite easily, but it is a pain to have to write a script per table. What I've done is set up scripts for UNIX and NT that allow you to execute things like this from the command line:

$ sqlldr_exp scott/tiger dept
LOAD DATA
INFILE *
INTO TABLE dept
REPLACE
FIELDS TERMINATED BY '|'
(
deptno
,dname
,loc
)
BEGINDATA
10|ACCOUNTING|NEW YORK
20|RESEARCH|DALLAS
30|SALES|RESTON
40|OPERATIONS|BOSTON

As you can see, this script unloaded the scott.dept table into a format that sqlldr can easily reload. All you would need to do is execute:

$ sqlldr_exp scott/tiger dept > dept.ctl

to create a control file that can be moved somewhere else and reloaded back into a dept table very quickly.

I also use a slight modification of this script called "flat". Flat does mostly the same thing as sqlldr_exp, except that it dumps the data in a tab-delimited format without the sqlldr extras at the top. This makes the extract usable in spreadsheets and such.

In both cases, some things you need to be aware of with regard to these scripts:

  • There is an absolute limit of 2000 bytes in 7.x and 4000 bytes in 8.x per line/row for unloaded data. The total size of the unloaded data is unlimited -- the maximum size of an individual row of data is what is limited.
  • It makes no attempt to unload dates with the century or time component -- you must change your default NLS_DATE_FORMAT if this is a problem (see the example after this list).
  • Beware of data with pipes or tabs in it!
  • Beware of data with newlines as well...
  • The NT scripts (.cmd files) need modification if your command-line SQL*Plus is not called SQLPLUS (e.g. it's plus33 or something similar).
  • On NT, you need to set your SQLPATH environment variable and put these files into that directory OR you need to run flat and sqlldr_exp from those directories so sqlplus can find the corresponding flat.sql and sqlldr_exp.sql files.
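
For the date caveat above, a session-level fix is as simple as (the format mask is just an example):

alter session set nls_date_format = 'YYYY-MM-DD HH24:MI:SS';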

download that script here

http://asktom.oracle.com/tkyte/who_called_me

How Can I find out who called me or what my name is

Many times we are asked "in a procedure/function, can I find out who called me" or "can I dynamically figure out the name of the procedure or package that is currently executing".

You can find it in the call stack returned by dbms_utility.format_call_stack. I wrote a small routine called who_called_me that returns this sort of information (it doesn't tell you who you are; it lets you know who called you). If you wrap who_called_me with a function who_am_i, you'll get what you need. Once you create the who_called_me/who_am_i routines, you'll be able to do this:


SQL> create or replace procedure demo
  2  as
  3  begin
  4      dbms_output.put_line( who_am_i );
  5  end;
  6  /

Procedure created.

SQL> exec demo;
TKYTE.DEMO

In current releases of the database, this code has been incorporated into the OWA_UTIL package - you probably already have it in your database. If not, you can use this really old version that might need a tweak or two to work in your database release:


create or replace procedure who_called_me( owner    out varchar2,
                                           name     out varchar2,
                                           lineno   out number,
                                           caller_t out varchar2 )
as
    call_stack  varchar2(4096) default dbms_utility.format_call_stack;
    n           number;
    found_stack BOOLEAN default FALSE;
    line        varchar2(255);
    cnt         number := 0;
begin
    loop
        n := instr( call_stack, chr(10) );
        exit when ( cnt = 3 or n is NULL or n = 0 );

        line       := substr( call_stack, 1, n-1 );
        call_stack := substr( call_stack, n+1 );

        if ( NOT found_stack ) then
            if ( line like '%handle%number%name%' ) then
                found_stack := TRUE;
            end if;
        else
            cnt := cnt + 1;
            -- cnt = 1 is ME
            -- cnt = 2 is MY Caller
            -- cnt = 3 is Their Caller
            if ( cnt = 3 ) then
                lineno := to_number(substr( line, 13, 6 ));
                line   := substr( line, 21 );
                if ( line like 'pr%' ) then
                    n := length( 'procedure ' );
                elsif ( line like 'fun%' ) then
                    n := length( 'function ' );
                elsif ( line like 'package body%' ) then
                    n := length( 'package body ' );
                elsif ( line like 'pack%' ) then
                    n := length( 'package ' );
                elsif ( line like 'anonymous%' ) then
                    n := length( 'anonymous block ' );
                else
                    n := null;
                end if;
                if ( n is not null ) then
                    caller_t := ltrim(rtrim(upper(substr( line, 1, n-1 ))));
                else
                    caller_t := 'TRIGGER';
                end if;

                line  := substr( line, nvl(n,1) );
                n     := instr( line, '.' );
                owner := ltrim(rtrim(substr( line, 1, n-1 )));
                name  := ltrim(rtrim(substr( line, n+1 )));
            end if;
        end if;
    end loop;
end;
/

create or replace function who_am_i return varchar2
is
    l_owner  varchar2(30);
    l_name   varchar2(30);
    l_lineno number;
    l_type   varchar2(30);
begin
    who_called_me( l_owner, l_name, l_lineno, l_type );
    return l_owner || '.' || l_name;
end;
/

http://asktom.oracle.com/tkyte/unindex

Unindexed Foreign Keys

Having unindexed foreign keys can be a performance issue. There are two problems associated with them. The first is that a table lock will result if you update the parent record's primary key (very, very unusual) or if you delete the parent record while the child's foreign key is not indexed.

To read about this issue, please see the Concepts Guide, in the section on Maintaining Data Integrity / Concurrency Control, Indexes, and Foreign Keys.

The second issue has to do with the general performance of a parent-child relationship. Consider what happens if you have an ON DELETE CASCADE and have not indexed the child table's foreign key (e.g. EMP is a child of DEPT; deleting deptno = 10 should cascade to EMP; if deptno in EMP is not indexed, you get a full table scan). That full scan is probably undesirable, and if you delete many rows from the parent table, the child table will be scanned once for each parent row deleted.
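
The fix for the cascade case is simply to index the foreign key column on the child table, for example:

create index emp_deptno_idx on emp(deptno);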

Also consider that in most (not all, but most) parent-child relationships, we query from the 'master' table down to the 'detail' table. The glaring exception to this is a code table (a short code joined to its long description). For master-detail relationships, if you do not index the foreign key, a full scan of the child table will result.

So, how do you easily discover whether you have unindexed foreign keys in your schema? This script can help (a sketch of such a script appears after the sample report below). When you run it, it generates a report such as:


SQL> @unindex

STAT TABLE_NAME                     COLUMNS              COLUMNS
---- ------------------------------ -------------------- --------------------
**** APPLICATION_INSTANCES          AI_APP_CODE
ok   EMP                            DEPTNO               DEPTNO

The **** in the first row shows that I have an unindexed foreign key on the table APPLICATION_INSTANCES. The "ok" in the second row shows that the table EMP has an indexed foreign key.
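
The script itself is not reproduced in this aggregated copy, but a minimal sketch in the same spirit follows. It flags a foreign key unless some index on the table has the constraint's columns as its leading columns, in the same order (a simplification of the original report format):

-- Hedged sketch, not the original unindex script.
select c.table_name, c.constraint_name
from   user_constraints c
where  c.constraint_type = 'R'
and    not exists
       ( select null
         from   user_cons_columns cc, user_ind_columns ic
         where  cc.constraint_name = c.constraint_name
         and    ic.table_name      = c.table_name
         and    ic.column_name     = cc.column_name
         and    ic.column_position = cc.position
         group  by ic.index_name
         having count(*) = ( select count(*)
                             from   user_cons_columns c2
                             where  c2.constraint_name = c.constraint_name ) )
order  by c.table_name;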

http://asktom.oracle.com/tkyte/hexdec

Base Conversion Routines

This set of routines is useful to convert between various 'bases' in Oracle. Once you install these functions, you will be able to perform operations such as:


SQL> select to_bin( 123 ) bin, to_hex( 123 ) hex, to_oct( 123 ) oct from dual
  2  /

BIN             HEX             OCT
--------------- --------------- ---------------
1111011         7B              173

SQL>
SQL> select to_dec( '1111011', 2 ) base2, to_dec( '7B' ) base16,
  2         to_dec( '173', 8 ) base8
  3    from dual
  4  /

     BASE2     BASE16      BASE8
---------- ---------- ----------
       123        123        123

Of course, with current releases you would never use TO_HEX, since TO_CHAR already does this (faster). Likewise, TO_NUMBER would be used to convert hex to decimal these days, as shown in the second example below.


ops$tkyte%ORA10GR2> select to_char( '1234', 'XXXX' ) from dual;

TO_CH
-----
4D2
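
And the reverse direction with TO_NUMBER:

ops$tkyte%ORA10GR2> select to_number( '4D2', 'XXXX' ) from dual;

TO_NUMBER('4D2','XXXX')
-----------------------
                   1234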


Here is the original code:


create or replace function to_base( p_dec in number, p_base in number )
return varchar2
is
    l_str  varchar2(255) default NULL;
    l_num  number        default p_dec;
    l_hex  varchar2(16)  default '0123456789ABCDEF';
begin
    if ( p_dec is null or p_base is null )
    then
        return null;
    end if;
    if ( trunc(p_dec) <> p_dec OR p_dec < 0 ) then
        raise PROGRAM_ERROR;
    end if;
    loop
        l_str := substr( l_hex, mod(l_num,p_base)+1, 1 ) || l_str;
        l_num := trunc( l_num/p_base );
        exit when ( l_num = 0 );
    end loop;
    return l_str;
end to_base;
/

create or replace function to_dec
( p_str       in varchar2,
  p_from_base in number default 16 ) return number
is
    l_num  number       default 0;
    l_hex  varchar2(16) default '0123456789ABCDEF';
begin
    if ( p_str is null or p_from_base is null )
    then
        return null;
    end if;
    for i in 1 .. length(p_str) loop
        l_num := l_num * p_from_base
                 + instr( l_hex, upper(substr(p_str,i,1)) ) - 1;
    end loop;
    return l_num;
end to_dec;
/
show errors

create or replace function to_hex( p_dec in number ) return varchar2
is
begin
    return to_base( p_dec, 16 );
end to_hex;
/
create or replace function to_bin( p_dec in number ) return varchar2
is
begin
    return to_base( p_dec, 2 );
end to_bin;
/
create or replace function to_oct( p_dec in number ) return varchar2
is
begin
    return to_base( p_dec, 8 );
end to_oct;
/