The Helsinki approaches to WoD application development

[continuing from my previous post] In much the same way as I did here for MVC, the Helsinki UI/BL/DL code classes can be mapped across the client, middle, and data tiers too. What I do differently here, compared to the earlier display of the MVC mapping across the tiers, is that whenever the M is distributed across two tiers, I divide the M into BL and DL. The guideline for how to split up the M is now

Read Consistency, "ORA-01555 snapshot too old" errors and the SCN_ASCENDING hint

For its read consistency model, Oracle uses a true multi-versioning approach in which readers do not block writers and writers do not block readers. Obviously this great feature, which allows highly concurrent processing, doesn't come for free, since the information needed to build multiple versions of the same data has to be stored somewhere.

Oracle uses the so-called undo information not only to roll back ongoing transactions but also to reconstruct old versions of blocks when required. Put very simply, when reading data Oracle knows the point in time (which corresponds to an internal counter called the SCN, System Change Number) with which the data needs to be consistent. In the default READ COMMITTED isolation mode this point in time is defined when a statement starts to execute; you could also say that at the moment a statement starts to run, its result is pre-ordained. When Oracle processes a block it checks whether the block is "old" enough. If it discovers that the block content is too new (it has been changed by other sessions, but the current access is not supposed to see this updated content according to the point in time assigned to the statement execution), it creates a copy of the block and uses the information available in the corresponding undo segment to reconstruct an older version of the block. Note that this process can be iterative: if, after reconstructing the older version of the block, it is still not sufficiently old, more undo information is applied to go further back in time.
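As a hedged illustration of this SCN-based versioning (table T and its contents are hypothetical, and this shows ordinary flashback query, not the SCN_ASCENDING hint itself), a flashback query asks Oracle to rebuild rows as of an earlier SCN using the very same undo mechanism:

    -- Capture the current SCN using the standard DBMS_FLASHBACK API.
    VARIABLE scn NUMBER
    EXEC :scn := dbms_flashback.get_system_change_number

    -- Another session (or this one) changes and commits data in T.
    UPDATE t SET val = 'new' WHERE id = 1;
    COMMIT;

    -- Query T as of the captured SCN: Oracle walks the undo chain to
    -- reconstruct older block versions, iterating until the block is
    -- old enough, exactly as described above.
    SELECT id, val FROM t AS OF SCN :scn WHERE id = 1;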

Oracle Discoverer - helping people write ugly code :)

There's been a discussion going on among some of my friends about all this horrible-looking (and often badly performing) auto-generated SQL coming out of Discoverer and other tools. Here are some of the comments made during the discussion, and some of my memories of how I got started with Oracle with the help of my good friend Mogens Egan...

=======================

Me:

"Oracle Discoverer - helping developers write ugly code for more than a decade."

=======================

NN:

"no no no!

What would you do with 8 disks?

Yesterday, David Best posted this question at Oracle-L:

If you had 8 disks in a server, what would you do? From watching this list I can see a lot of people using RAID 5, but I'm wary of the performance implications. (http://www.miracleas.com/BAARF/)

I was thinking maybe RAID 5 (3 disks) for the OS, software and backups. RAID 10 (4 disks + 1 hot spare) for the database files.

Any thoughts?

I do have some thoughts about it.

There are four dimensions I have to consider as I answer this question:

  1. Volume
  2. Flow
  3. Availability
  4. Change

Just about everybody understands at least a little bit about #1: the reason you bought 8 disks instead of 4 or 16 has something to do with how many bytes of data you're going to store. Most people are clever enough to figure out that if you need to store N bytes of data, then you need to buy N + M bytes of capacity, for some M > 0 (grin).
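To put rough numbers on the volume dimension, here is a small sketch of the usable-capacity arithmetic for the layout David proposed (the 300 GB per-disk size is an assumption, not from the post):

    SET SERVEROUTPUT ON
    DECLARE
      disk_gb CONSTANT NUMBER := 300;  -- hypothetical per-disk capacity
    BEGIN
      -- RAID 5 over n disks spends one disk's worth of space on parity.
      dbms_output.put_line('RAID 5,  3 disks: '|| (3 - 1) * disk_gb ||' GB usable');
      -- RAID 10 mirrors everything, so half the striped capacity is usable;
      -- the hot spare adds nothing until a disk fails.
      dbms_output.put_line('RAID 10, 4 disks: '|| (4 / 2) * disk_gb ||' GB usable');
    END;
    /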

Statistics Management for Partitioned Objects in PeopleSoft

I have implemented partitioned objects in a number of PeopleSoft systems on Oracle. Recently, I was working on a system where a table was partitioned into weekly range partitions, and I encountered a performance problem when Oracle's automatic maintenance window job to collect statistics did not run between populating the new partition for the first time and running a batch process that referenced that partition. Oracle, understandably, produced an execution plan for a statement that assumed the partition was empty, but as the partition actually held quite a lot of data, the statement ran for a long time.

The solution was to tell Oracle the truth by gathering statistics for that partition. However, I didn't want to refresh the statistics for the whole table. There were many partitions of historical data that had not changed, so I didn't need to refresh those partitions; I only needed to refresh the stale ones, and here is the problem. Unfortunately, the dbms_stats package will let you gather stale and missing statistics for all tables in a given schema, or for the whole database, but not for a named table. That is not completely unreasonable: if you are targeting a single table, then you ought to know what needs to be refreshed.
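As a sketch of the first half of the solution, standard dbms_stats calls can refresh a single named partition without touching the rest of the table (the schema, table, and partition names below are hypothetical):

    BEGIN
      -- Gather statistics for one stale partition only, leaving the
      -- unchanged historical partitions alone.
      dbms_stats.gather_table_stats(
        ownname     => 'SYSADM',         -- hypothetical schema
        tabname     => 'PS_WEEKLY_TXN',  -- hypothetical partitioned table
        partname    => 'P_WEEK_14',      -- the newly populated partition
        granularity => 'PARTITION');     -- partition-level statistics only
    END;
    /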

Statistics Management for PeopleSoft Temporary Records in Application Engine Programs

Last year, I wrote about Oracle Optimizer Statistics and Optimizer Dynamic Sampling with PeopleSoft Temporary Records. Earlier this year, I encountered a situation where Optimizer Dynamic Sampling was not sufficient, and I did need properly gathered statistics on an object. I modified my PL/SQL packaged procedure wrapper, which can be called by the %UpdateStats PeopleCode macro via a customised DDL model, to collect statistics.

I still recommend locking statistics on PeopleSoft Temporary Records, so that those tables are omitted from schema- or database-wide operations to refresh statistics. If the statistics on a table are locked, and it is not a Global Temporary Table, then the wrapper package will force collection and update of statistics on the table (previously it suppressed gathering of statistics on tables with locked statistics).

However, when the Application Engine program completes, any statistics collected on those temporary tables are no longer needed. Worse, the statistics refer to data that will be deleted and replaced by some future program, and if the table were not reanalysed, the statistics would be misleading, and could cause the database to produce an inappropriate execution plan. Some temporary records are shared by multiple programs, so you cannot guarantee that statistics will always be refreshed when the table is next used.
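A minimal sketch of the two dbms_stats operations such a wrapper needs, assuming a hypothetical temporary record PS_MY_TMP_TBL owned by SYSADM (the author's actual wrapper and DDL model are more elaborate):

    BEGIN
      -- At %UpdateStats time: honour the request even though the
      -- statistics are locked, by unlocking, gathering, and re-locking.
      dbms_stats.unlock_table_stats(ownname => 'SYSADM', tabname => 'PS_MY_TMP_TBL');
      dbms_stats.gather_table_stats(ownname => 'SYSADM', tabname => 'PS_MY_TMP_TBL');
      dbms_stats.lock_table_stats(ownname => 'SYSADM', tabname => 'PS_MY_TMP_TBL');

      -- When the Application Engine program completes: drop the now
      -- misleading statistics; force => TRUE overrides the lock.
      dbms_stats.delete_table_stats(ownname => 'SYSADM',
                                    tabname => 'PS_MY_TMP_TBL',
                                    force   => TRUE);
    END;
    /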

OS Thread Startup

Recently I encountered a performance problem scenario where a simple sqlplus "/ as sysdba" took about 2 minutes to finish. This was critical to the client's business because they have a local C program that loads Call Detail Reports into the database, making use of local authentication for most of its operations and SQL*Loader to load the data, so those 2 minutes of waiting, accumulated across many operations, consumed significant time and had a real impact on the business.

When I arrived at the client site, I first checked the alert logs of both ASM (they have a separate home for ASM) and the RDBMS; there were no errors…

Then I checked the server to see if there were any CPU, I/O, memory, swap, or network bottlenecks going on:

The CPU run queue was zero, and the CPUs were 90% idle most of the time

The disks were also idle most of the time

Memory utilization was low, with 430 MB free
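With the OS looking healthy, the database wait interface is the obvious next place to look. A minimal query against v$system_event, assuming the post's title refers to the 'os thread startup' wait event:

    -- System-wide waits for the event; TIME_WAITED is in centiseconds.
    SELECT event, total_waits, time_waited
    FROM   v$system_event
    WHERE  LOWER(event) = 'os thread startup';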

Issues with current trend

In my last post I introduced the code classification used by the Helsinki declaration (as opposed to the MVC classification used by JEE):

  User Interface (UI) code: all code that creates the UI and responds to events in the UI; the same as JEE's View and Control
  Data Logic (DL) code: all code that maintains data integrity constraints; a well-defined subset of JEE's Model
  Business Logic (BL) code: all other code, the