Oakies Blog Aggregator

PeopleTools 8.54: %SelectDummyTable Meta-SQL

This is part of a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.
  
PeopleTools simply evaluates this meta-SQL as 'DUAL' on Oracle.

In Oracle, DUAL is just a convenience table.  You don't need to use it; you can use anything you want. PeopleSoft applications often use PS_INSTALLATION when they need a single-row source in a query.  This is another example of platform agnosticism: PS_INSTALLATION is available on every platform and in every PeopleSoft installation, whereas DUAL is not.

Instead of coding this (the example is taken from ESPP_REF_REVMAIN.PSHUP.Do When):

%Select(IF_FLAG)
SELECT 'X'
FROM PS_INSTALLATION
WHERE %Bind(ST_SEND_SRC) = 'N'

You can now code this instead:

%Select(IF_FLAG)
SELECT 'X'
FROM %SelectDummyTable
WHERE %Bind(ST_SEND_SRC) = 'N'

Which resolves to:

%Select(IF_FLAG)
SELECT 'X'
FROM DUAL
WHERE %Bind(ST_SEND_SRC) = 'N'

There are two advantages to using DUAL.

  • In the database, the Oracle optimizer knows that DUAL is a special one-row, one-column table.  When you use it in queries, the optimizer uses this knowledge when generating the execution plan.
  • If you used a real table, there was a risk that it could have no rows, or more than one row.  Either could cause application problems.  In previous versions of Oracle, when DUAL was a real physical table, that problem could also occur with it.  However, since Oracle 10g it is just a memory structure in the database, and accessing it involves neither a logical nor a physical read (see the sketch below).
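
You can see the second point for yourself in SQL*Plus (a quick sketch; the exact statistics vary slightly by version):

SET AUTOTRACE TRACEONLY EXPLAIN STATISTICS
SELECT 'X' FROM DUAL;

-- The execution plan shows the FAST DUAL operation, and the autotrace
-- statistics report 0 consistent gets and 0 physical reads.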

Conclusion

PeopleSoft have created a meta-SQL that evaluates as DUAL, so that this Oracle optimization is implemented in a platform-generic manner.
I would not bother to go back and change existing code to use it, unless perhaps I was visiting the code anyway, but I would certainly recommend its use going forward.

RMOUG Training Days 2015

Yet again, it was a fantastic time at the RMOUG Training Days 2015 conference, as it has been every other year I have attended it. That is in no small measure due to the incredible work of the organizing committee, and in particular the Training Days Director, my colleague Kellyn Pot’Vin-Gorman of dbakevlar.com fame. For me personally, the travel to get to Denver, Colorado was somewhat more daunting than in previous years (see my earlier post for why!), but once I got there it all went relatively smoothly. I flew in on the Sunday before the conference started to allow me to get over any problems from the trip, but as it turned out everything was just fine.

I had three abstracts accepted for the conference.

The first two were two hour sessions on the first day of the conference (Tuesday), which was dedicated to deep dive sessions. I had originally planned these to be short presentations by me, followed by hands on labs to set up each of the different subjects. That fell through a few weeks before the conference when the hardware I had planned to use was taken away for upgrade, so instead I put together longer presentations with demos. As it turned out, that was a Good Thing (TM) as I had way more attendees (particularly at the PDBaaS presentation) than I would have had hardware for anyway! The third session was a much more traditional presentation – a bunch of slides followed by a shorter demo, and again it was very well attended. Lots of interest from attendees for all three, so from that perspective I was very happy.

Unfortunately I did have some technical issues with my laptop. I had some problems getting the material from my laptop onto the presentation screen, both from PowerPoint and the demo itself, so I’m going to have to spend some time sorting that out after I get home. :(

Having said that, the conference was still a blast. As I always do, I thoroughly enjoyed the interactions with attendees, but this time I also had the added enjoyment of interacting with a bunch of my colleagues from the Strategic Customer Program in the Enterprise Manager product management team – Kellyn Pot’Vin-Gorman, Courtney Llamas, Andrew Bulloch, and Werner de Gruyter. It’s when I interact with people like these guys that I realize just how much I still need to learn about Enterprise Manager as a product, particularly around the high end infrastructure architectures that the Strategic Customer Program folks normally work with.

Once the conference was finished, it was time to head up into the mountains at Breckenridge for some very relaxing R&R time. When I finish here it will be back to Intergalactic Headquarters near San Francisco for a week before I head back home again to be with family again. All in all, a fantastic conference as always, so thanks Kellyn and all the organizing committee!

PeopleTools 8.54: %SQLHint Meta-SQL

This is part of a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.
 
This new meta-SQL macro searches a SQL statement for the nth instance of a SQL command keyword and inserts a string after it.

%SqlHint(SQL_cmd, index, hint_text, DB_platform [, {ENABLE | DISABLE}])

It is particularly effective with the %InsertSelect meta-SQL.  Previously, the only way to put a hint into the main SELECT was with a variable assignment, but that didn't work if the DISTINCT keyword was used, because the hint then appeared after the DISTINCT rather than directly after the SELECT.

%InsertSelect(DISTINCT, DMK,JOB J, EMPLID= /*+LEADING(J)*/ J.EMPLID)
FROM PS_JOB J

which resolves to:

INSERT INTO PS_DMK (EMPLID
, EMPL_RCD
, EFFDT
, EFFSEQ
, SETID_DEPT
, DEPTID)
SELECT DISTINCT /*+LEADING(J)*/ J.EMPLID
, J.EMPL_RCD
, J.EFFDT
, J.EFFSEQ
, J.SETID_DEPT
, J.DEPTID
FROM PS_JOB J

Here is a deliberately contrived example of how to use the command.

  • I have created a separate SQL object, DMK_CURJOB, to hold effective date/sequence sub-queries which I will reference from an application engine SQL.
%P(2).EFFDT = (
SELECT MAX(%P(3).EFFDT)
FROM %Table(%P(1)) %P(3)
WHERE %P(3).EMPLID = %P(2).EMPLID
AND %P(3).EMPL_RCD = %P(2).EMPL_RCD
AND %P(3).EFFDT <= %CurrentDateIn)
AND %P(2).EFFSEQ = (
SELECT MAX(%P(4).EFFSEQ)
FROM %Table(%P(1)) %P(4)
WHERE %P(4).EMPLID = %P(2).EMPLID
AND %P(4).EMPL_RCD = %P(2).EMPL_RCD
AND %P(4).EFFDT = %P(2).EFFDT)
  • I want my insert statement to run in direct-path mode, so I am putting an APPEND hint into the INSERT statement.
  • I am going to put different hints into each of the different SQL query blocks, including the sub-queries in the SQL object.
%SqlHint(INSERT,1,'/*+APPEND*/',ORACLE,ENABLE)
%SqlHint(INSERT,1,'/*Developer Comment*/',ORACLE,DISABLE)
%SqlHint(SELECT,1,'/*+LEADING(J)*/',ORACLE)
%SqlHint(SELECT,2,'/*+UNNEST(J1)*/',ORACLE)
%SqlHint(SELECT,3,'/*+UNNEST(J2)*/',ORACLE)
%InsertSelect(DISTINCT, DMK,JOB J)
FROM PS_JOB J
WHERE %Sql(DMK_CURJOB,JOB,J,J1,J2)

Which resolves to:

INSERT /*+APPEND*/ INTO PS_DMK (EMPLID
, EMPL_RCD
, EFFDT
, EFFSEQ
, SETID_DEPT
, DEPTID)
SELECT /*+LEADING(J)*/ DISTINCT J.EMPLID
, J.EMPL_RCD
, J.EFFDT
, J.EFFSEQ
, J.SETID_DEPT
, J.DEPTID
FROM PS_JOB J
WHERE J.EFFDT = (
SELECT /*+UNNEST(J1)*/ MAX(J1.EFFDT)
FROM PS_JOB J1
WHERE J1.EMPLID = J.EMPLID
AND J1.EMPL_RCD = J.EMPL_RCD
AND J1.EFFDT <= TO_DATE(TO_CHAR(SYSDATE,'YYYY-MM-DD'),'YYYY-MM-DD'))
AND J.EFFSEQ = (
SELECT /*+UNNEST(J2)*/ MAX(J2.EFFSEQ)
FROM PS_JOB J2
WHERE J2.EMPLID = J.EMPLID
AND J2.EMPL_RCD = J.EMPL_RCD
AND J2.EFFDT = J.EFFDT)

The %SqlHint processing appears to be done after all other expansions, so the search and insert can reach into %SQL objects.  Previously we had to put hints into the SQL object itself, although sometimes we could avoid that by using query block naming hints.  Now I can place any hint after any SQL command, and I can choose to apply a hint in just one step that references a SQL object, rather than in the SQL object itself, which would affect every step that references it.

If you put multiple substitutions in for the same SQL command, only the last enabled one is processed.

I frequently find that developers love to put comments into SQL, which then appear in log files and Oracle monitoring tools.  I hate that.  Comments in SQL that runs on the database are an unnecessary overhead, and they turn up later in SQL monitoring and tuning tools.  It is worth noting that comments that are not hints are stripped out of SQL in PL/SQL procedures.  Perhaps developers should put their comments in a disabled %SqlHint so they do not appear in the final SQL?

Oracle SQL Outlines/Profiles/Patches/Baselines

All this talk of adding hints to source code is going to cause an Oracle DBA to ask why not use the techniques provided by Oracle to control execution plans of application engine SQL.  The problem is that those techniques are frequently thwarted by the dynamic nature of the SQL created by PeopleSoft (though see the sketch after this list).

  • Bind variables can become literals when the SQL is generated, though profiles and baselines can handle this.
  • Different instances of temporary records are different tables in the database. You would have to handle each table (or every combination of tables if you have multiple temporary records in one statement).
  • Many delivered processes have dynamically generated SQL and you would again have to handle every variation separately.
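
That said, where a statement's SQL text is stable, a SQL patch can still inject a hint without touching the code.  A minimal sketch using the undocumented 11g interface (the statement, hint and patch name here are hypothetical, and the call requires SYS; later releases document DBMS_SQLDIAG.CREATE_SQL_PATCH for the same purpose):

BEGIN
  -- Attach the FULL hint to every execution of this exact SQL text
  sys.dbms_sqldiag_internal.i_create_patch(
    sql_text  => 'SELECT COUNT(*) FROM PS_JOB',
    hint_text => 'FULL(PS_JOB)',
    name      => 'demo_sql_patch');
END;
/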

Conclusion

The %SqlHint meta-SQL brings a huge advantage for PeopleSoft developers on Oracle.  Although it is possible to create platform-specific application engine sections, there is huge reluctance to do this in development at Oracle.  This is entirely reasonable, as it results in having to develop, test and maintain separate code lines.  Many of the meta-SQL macros are designed precisely to overcome SQL differences between the supported database platforms.  Now, using %SqlHint, developers can put Oracle-specific optimizer hints into platform-generic application engine steps, safe in the knowledge that the hints will only affect Oracle platforms.

This is going to be a very useful feature.  Simple.  Effective.  I look forward to hinting the places that other techniques cannot reach!


PeopleTools 8.54: Table/Index Partitioning

This is part of a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.

Partitioning in Oracle

Partitioning of table (and index) segments involves breaking them into several smaller segments, where certain data values only occur in certain segments.  Thus, if a query is looking for a certain data value, it may be able to eliminate some partitions without having to scan them, because by definition those values cannot occur in those partitions.  This saves logical and physical reads, and improves performance.  It is called partition elimination, or pruning, and it is often the principal reason for partitioning a table.
Physically, each partition in a partitioned table is a table, but logically the partitions form a single table.  There should be no need to change application code in order to use partitioning, but the way the code is written will affect Oracle's ability to perform partition elimination.
The following diagram is taken from the Oracle 11g Database VLDB and Partitioning Guide

If a query is only looking for data in March, it can eliminate the January and February partitions without inspecting them.  However, if it filters data by another column, it will still have to inspect all three partitions.  Application design will determine whether, and if so how, to partition a table.
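
Here is a minimal sketch of the idea behind that diagram (the table, columns and boundary dates are hypothetical):

CREATE TABLE sales_range
(sale_dt DATE NOT NULL
,amount NUMBER
)
PARTITION BY RANGE (sale_dt)
(PARTITION jan VALUES LESS THAN (TO_DATE('2015-02-01','YYYY-MM-DD'))
,PARTITION feb VALUES LESS THAN (TO_DATE('2015-03-01','YYYY-MM-DD'))
,PARTITION mar VALUES LESS THAN (TO_DATE('2015-04-01','YYYY-MM-DD'))
)
/

-- A predicate on the partitioning column permits elimination:
-- only partition MAR is scanned.
SELECT SUM(amount)
FROM sales_range
WHERE sale_dt >= TO_DATE('2015-03-01','YYYY-MM-DD')
AND sale_dt < TO_DATE('2015-04-01','YYYY-MM-DD')
/

-- A predicate on another column cannot eliminate any partitions:
SELECT SUM(amount)
FROM sales_range
WHERE amount > 100
/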
NB: I can't mention partitioning without also saying that the Partitioning option is a separately licensed feature of Oracle Database Enterprise Edition.

Partitioning in PeopleTools prior to 8.54

I have to declare an interest.  I have been using partitioning in PeopleSoft since PeopleTools 7.5, when it was introduced in Oracle 8i.  The line from PeopleSoft was that you could introduce partitioning without invalidating your support (the same is not true of E-Business Suite).  Application Designer won't generate partition DDL, so you were supposed to give your DDL scripts to the DBA, who would add the partition clauses.  So if you wanted to use partitioning, you were plunged into a hellish world of manual scripting.  One of the key benefits of Application Designer is that it generates the DDL for you.
Since 2001, I have developed a PL/SQL utility that effectively reverse-engineers the Application Designer functionality that builds DDL scripts, but then adds the partitioning clauses.  It also adds partitions, and has been extended to assist with partition-wise data archive/purge.  It is in use at a number of sites using Global Payroll (for which it was originally designed) and Financials (see Managing Oracle Table Partitioning in PeopleSoft Applications with GFC_PSPART Package).
So, in investigating the new partitioning feature of PeopleTools 8.54, I had two questions:

  • Is my partitioning utility now obsolete, or should I continue to use it?
  • How would I be able to retrofit existing partitioning into PeopleTools?

Partitioning in PeopleTools 8.54

I am going to illustrate the behaviour of the new partition support with a number of examples.

Example 1: Range Partitioning PSWORKLIST

In this example, I will range partition table PSWORKLIST on the INSTSTATUS column. The valid statuses for this column are:

INSTSTATUS  Description
0           Available
1           Selected
2           Worked
3           Cancelled
  • the first partition will only contain statuses 0 and 1, which are the open worklist items,
  • the other partition will contain the other statuses, 2 and 3, which are the closed items.

The application repeatedly queries this table looking for worklist items to be processed, using the criterion INSTSTATUS < 2.  Over time, unless data is archived, the vast majority of entries are closed.  This partitioning strategy will enable the application to find the open worklist items quickly, by eliminating the larger closed partition and querying only the smaller open-item partition.  As items are worked or cancelled, their statuses are updated to 2 or 3, and they will automatically be moved to the other partition.
This is something that I have actually done on a customer site, and it produced a considerable performance improvement.
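
The kind of query that benefits looks like this (a sketch; the bind name is hypothetical, and the columns are taken from the delivered indexes):

SELECT busprocname, worklistname, instanceid
FROM psworklist
WHERE oprid = :oprid
AND inststatus < 2
/

-- With range partitioning on INSTSTATUS, Oracle prunes this query to the
-- open partition and never touches the much larger closed partition.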
PeopleSoft provides a component that allows you to configure the partitioning strategy for a record.  However, I immediately ran into my first problem. 

  • The Partitioning Utility component will only permit me to partition by a PeopleSoft unique key column.  If a query doesn't have a predicate on the partitioning column, then Oracle will certainly not be able to prune any partitions, and the query will perform no better than if the table had not been partitioned.  While a column frequently used in selective criteria is often the subject of an index, and sometimes of the unique key, this is not always the case.  It does not make sense for this utility component to assume it.
  • In this case, INSTSTATUS is not part of any delivered index, though I added it to index B.  I have seen that the application frequently queries the PSWORKLIST table by INSTSTATUS, so it does make sense to partition it on that column.

However, I can customise the component to get around this.  The key field is validated by the view PPMU_RECKEYS_VW.

SELECT A.RECNAME
,A.FIELDNAME
FROM PSRECFIELDALL A
WHERE %DecMult(%Round(%DECDIV(A.USEEDIT,2),0 ) , 2) <> A.USEEDIT

I can change the view as follows:

SELECT A.RECNAME
,A.FIELDNAME
FROM PSRECFIELDALL A /* WHERE %DecMult(%Round(%DECDIV(A.USEEDIT,2),0 ) , 2) <> A.USEEDIT*/
, PSDBFIELD B
WHERE A.FIELDNAME = B.FIELDNAME
AND B.FIELDTYPE IN(0,2,3,4,5,6)

So now I can specify the partitioning for this table in the Partitioning Utility component.

I notice that I can leave the tablespace blank in the component, but the tablespace keyword is still generated, so I have to put a tablespace in.  It is valid to omit physical attributes at partition level, in which case they are inherited from table level, and similarly attributes omitted at table level are inherited from the tablespace defaults.

  • The component automatically adds a MAXVALUE partition.  This means that it is valid to put any value into the partitioning column; without it, a value beyond the highest defined partition boundary would cause an error.  However, it might not be what I want.
  • The component also adds a table storage clause, overriding anything specified on the record, with a fixed PCTFREE 20 that applies to all partitions.  Again, this might not be what I want.  The appropriate value of PCTFREE depends on whether and how I update data in the table.
  • There are a number of things that I can't control in this component:
    • The name of MAXVALUE partition
    • The tablespace of the MAXVALUE partition, which defaults to be the same tablespace as the last defined partition, which might not be what I want.
    • Any other physical attribute of any of the partitions; for example, I might want a different PCTFREE on partitions containing data that will not be updated.
  • The component adds a clause to enable row movement.  This permits Oracle to move rows between partitions when the value of the partitioning key column is updated, and it is essential in this case, because worklist items move from the first partition to the other as they are completed (see the sketch below).  ALTER TABLE ... SHRINK also requires row movement, so it is useful to enable it generally.
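
A sketch of why row movement matters here (using the column and partitioning scheme of this example):

-- Without row movement, an update that changes which partition a row
-- belongs to fails:
ALTER TABLE psworklist DISABLE ROW MOVEMENT
/
UPDATE psworklist SET inststatus = 2 WHERE inststatus = 1
/
-- ORA-14402: updating partition key column would cause a partition change

ALTER TABLE psworklist ENABLE ROW MOVEMENT
/
-- Now the same update succeeds, and Oracle physically moves the rows
-- from the open partition to the closed one.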

The partitioning definition can be viewed in Application Designer under Tools -> Data Administration -> Partitioning.

The create table script (PSBUILD.SQL) does not contain any partition DDL.  So first you build the table, and then you alter it to add the partitioning.  That is not unreasonable, as you would often build the table first and decide to partition it later.  I do the same in my own utility.

-- Start the Transaction


-- Create temporary table

CREATE TABLE PSYPSWORKLIST (BUSPROCNAME VARCHAR2(30) DEFAULT ' ' NOT
NULL,

DESCR254_MIXED VARCHAR2(254) DEFAULT ' ' NOT NULL) PARTITION BY
RANGE (INSTSTATUS)
(
PARTITION OPEN VALUES LESS THAN (2) TABLESPACE PTTBL,
PARTITION PE_MAXVALUE VALUES LESS THAN (MAXVALUE) TABLESPACE PTTBL
)
PCTFREE 20 ENABLE ROW MOVEMENT
/

-- Copy from source to temp table

INSERT INTO PSYPSWORKLIST (
BUSPROCNAME,

DESCR254_MIXED)
SELECT
BUSPROCNAME,

DESCR254_MIXED
FROM PSWORKLIST
/

-- CAUTION: Drop Original Table

DROP TABLE PSWORKLIST
/

-- Rename Table

RENAME PSYPSWORKLIST TO PSWORKLIST
/

-- Done

CREATE UNIQUE INDEX PS_PSWORKLIST ON PSWORKLIST (BUSPROCNAME,
ACTIVITYNAME,
EVENTNAME,
WORKLISTNAME,
INSTANCEID)
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 40960 NEXT 106496 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "PSINDEX"
/
ALTER INDEX PS_PSWORKLIST NOPARALLEL LOGGING
/

CREATE INDEX PSBPSWORKLIST ON PSWORKLIST (OPRID,
INSTSTATUS)
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 40960 NEXT 106496 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "PSINDEX"
/
ALTER INDEX PSBPSWORKLIST NOPARALLEL LOGGING
/
CREATE INDEX PSBWORKLIST ON PSWORKLIST
('') LOCAL TABLESPACE PTTBL
/

The DDL to create the partitioned index does not seem to appear properly.  The first CREATE INDEX command was generated by Application Designer extracting it from the catalogue with DBMS_METADATA.  This functionality was introduced in PeopleTools 8.51 to preserve existing configuration.  The second CREATE INDEX comes from the partitioning definition.

  • The index column list is missing; it should come from the column list that is defined in Application Designer.
  • The locally partitioned index is in the same tablespace as the table, instead of the tablespace defined on the index.
    • I would not normally keep indexes in the same tablespace as the table (the rationale is that if I only had to recover a tablespace containing indexes, I could rebuild them instead of recovering them).

I also note that the table is not altered NOPARALLEL, yet the indexes are made NOPARALLEL.  The default degree of parallelism on a partitioned table is equal to the number of partitions, so parallel query will be invoked on the table access.

  • I strongly recommend against generally allowing parallel query for all SQL that references a partitioned table in an OLTP system, which is what PeopleSoft is.  There are occasions where parallel query is the right thing to do, and in those cases I would use a hint, a SQL profile, or a SQL patch.

If I leave the Partitioning Utility component and then go back to a record where I have previously created partition DDL, then the partition DDL is still there, but all the other information that I typed in has disappeared.

If you trace the SQL generated by this component while entering partition details and generating partition DDL, the only two tables that are updated at all are PS_PTTBLPARTDDL and PS_PTIDXPARTDDL.  They are both keyed on RECNAME and PLATFORMID, and have just one other column, a CLOB to hold the DDL (see the sketch after the list below).

  • The partition information disappears because there is nowhere to hold it persistently; the component cannot extract it from the DDL, and it was being entered into a derived work record.
    • So it is not going to be much help when I want to adjust partitioning in a table that is already partitioned.  For example, over time, I might want to add new partitions, compress static partitions, or purge old ones.
  • It is also clear that there is no intention to support different partitioning strategies for different indexes on the same table.  There are certainly cases where a table will have one or more locally partitioned indexes and some global indexes that may or may not be partitioned.
  • Even these two tables are not fully integrated into Application Designer.  There is a throwaway line in Appendix E of the Data Management Guide - Administering Databases on Oracle: "Record and index partitioning is not migrated as part of the IDE project. If you want to migrate the partitioning metadata along with the record, you will need to…" copy it yourself, and it goes on to recommend creating a Data Migration Project in the Data Migration Workbench.
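
You can confirm this by querying the two tables directly (a sketch; beyond the key columns described above I have not listed their column names, so I select everything rather than guess):

SELECT * FROM ps_pttblpartddl WHERE recname = 'PSWORKLIST'
/
SELECT * FROM ps_ptidxpartddl WHERE recname = 'PSWORKLIST'
/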

Example 2: Import Existing Partitioning

Sticking with PSWORKLIST, I have partitioned it exactly the way I want.  The partition DDL was generated by my own partitioning utility.  I have added INSTSTATUS to index B.

CREATE TABLE sysadm.psworklist
(busprocname VARCHAR2(30) NOT NULL

,descr254_mixed VARCHAR2(254) NOT NULL
)
TABLESPACE PTTBL
PCTFREE 10 PCTUSED 80
PARTITION BY RANGE(INSTSTATUS)
(PARTITION psworklist_select_open VALUES LESS THAN ('2')
,PARTITION psworklist_worked_canc VALUES LESS THAN (MAXVALUE) PCTFREE 1 PCTUSED 90
)
ENABLE ROW MOVEMENT
PARALLEL
NOLOGGING
/

ALTER TABLE sysadm.psworklist LOGGING NOPARALLEL MONITORING
/

CREATE INDEX sysadm.ps0psworklist ON sysadm.psworklist
(transactionid
,busprocname
,activityname
,eventname
,worklistname
,instanceid
)
TABLESPACE PSINDEX
PCTFREE 10
PARALLEL
NOLOGGING
/

CREATE INDEX sysadm.psbpsworklist ON sysadm.psworklist
(oprid
,inststatus
)
LOCAL
(PARTITION psworklistbselect_open
,PARTITION psworklistbworked_canc PCTFREE 1
)
TABLESPACE PSINDEX
PCTFREE 10
PARALLEL
NOLOGGING
/
ALTER INDEX sysadm.psbpsworklist LOGGING
/
ALTER INDEX sysadm.psbpsworklist NOPARALLEL
/

The DDL in the Maintain Partitioning box in Application Designer is extracted from the data dictionary using the Oracle-supplied DBMS_METADATA package.  Application Designer has done this since PeopleTools 8.51 for index build scripts, but now you can see the DDL directly in the tool.
When I generate an alter table script, I still get two CREATE INDEX commands for the partitioned index.  The second one comes from the generated partition DDL and is not correct, because it still doesn't have a column list.

CREATE INDEX PSBPSWORKLIST ON PSWORKLIST (OPRID,
INSTSTATUS)
PCTFREE 10 INITRANS 2 MAXTRANS 255 LOGGING
STORAGE(
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "PSINDEX" LOCAL
(PARTITION "PSWORKLISTBSELECT_OPEN"
PCTFREE 10 INITRANS 2 MAXTRANS 255 LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS
2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "PSINDEX" ,
PARTITION "PSWORKLISTBWORKED_CANC"
PCTFREE 1 INITRANS 2 MAXTRANS 255 LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS
2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "PSINDEX" )
/
ALTER INDEX PSBPSWORKLIST NOPARALLEL LOGGING
/
CREATE INDEX PSBPSWORKLIST ON PSWORKLIST ('') LOCAL TABLESPACE
PTTBL
/

Example 3: GP_RSLT_ACUM

I have now chosen to partition one of the Global Payroll result tables.  This is often the largest table in a payroll system.  I have seen more than 1 billion rows in this table at one customer.  In a Global Payroll system, I usually:

  • range partition payroll tables on EMPLID to match the streamed processing (in GP, streaming means concurrently running several COBOL or Application Engine programs to process different ranges of employees), so there is a 1:1 relationship between payroll processes and physical partitions
  • sub-partition the largest result tables on CAL_RUN_ID, so each payroll period is in a separate physical partition.  Later, I can archive historical payroll data by partition.

Here, I have swapped the partitioning over.  I have partitioned by CAL_RUN_ID and sub-partitioned by EMPLID.  I explain why below.

    And this is the table DDL that the utility generated:

    PARTITION BY RANGE (CAL_RUN_ID)
    SUBPARTITION BY RANGE (EMPLID)
    SUBPARTITION TEMPLATE
    (
    SUBPARTITION SUB1 VALUES LESS THAN ('K9999999999'),
    SUBPARTITION SUB2 VALUES LESS THAN ('KE999999999'),
    SUBPARTITION SUB3 VALUES LESS THAN ('KM999999999'),
    SUBPARTITION SUB4 VALUES LESS THAN ('KT999999999') ,
    SUBPARTITION PE_MAXVALUE VALUES LESS THAN (MAXVALUE)
    )
    (
    PARTITION STRM1 VALUES LESS THAN ('K9999999999') TABLESPACE GPPART1,
    PARTITION STRM2 VALUES LESS THAN ('KE999999999') TABLESPACE GPPART2,
    PARTITION STRM3 VALUES LESS THAN ('KM999999999') TABLESPACE GPPART3,
    PARTITION STRM4 VALUES LESS THAN ('KT999999999') TABLESPACE GPPART4,
    PARTITION PE_MAXVALUE VALUES LESS THAN (MAXVALUE) TABLESPACE GPPART4
    )
    PCTFREE 20 ENABLE ROW MOVEMENT

    • Use of the sub-partition template clause simplifies the SQL; there is certainly a lot less of it.  However, it means you get all the sub-partitions within all partitions.  That might not be what you want.  In this demo database, both employees and calendars are prefixed by something that corresponds to legislature, so some sub-partitions will be empty.  They won't take up any physical space, due to deferred segment creation, but it would be better not to build them at all.
    • I can specify a tablespace on the sub-partitions in the utility component, but there is no tablespace on the sub-partition template in the DDL.  I care more about putting different payroll periods into different tablespaces than different ranges of employees (so I can compress and purge data later), so I swapped the partition key columns and have range partitioned on CAL_RUN_ID and sub-partitioned on EMPLID.

    In Global Payroll, partitioning is required to support streamed processing.  Tables are range partitioned on EMPLID to match the stream definitions.  In UK payroll, there are 45 tables that are updated or heavily referenced by streamed payroll processes that should all have similar range partitioning on EMPLID. 
    In PeopleTools 8.54, it is not possible to define a single partitioning strategy and apply it consistently to several tables.  Even if the data entered into the partition utility component were retained, I would have to enter it again for each table.

    Conclusion

    When I heard that PeopleTools would have native support for partitioning, if only in Oracle, I was hopeful that we would get something that would bring the process of migrating and building partitioned tables in line with normal tables. Unfortunately, I have to say that I don't think the partitioning support that I have seen so far is particularly useful.

    • There is no point typing in a lot of partition data into a utility component that doesn't retain the data.
    • As with materialized views, table partitioning is something on which DBAs will have to advise, and which they will probably implement and maintain.  This component doesn't really help them do anything they can't already do with a text editor!
    • Even the minimal partition data that the utility component does retain is not migrated between environments by Application Designer when you migrate the record.

    Again, I think the problems stem from PeopleTools development trying to minimize the level of alteration to the Application Designer.  The partitioning utility component looks good because it sets out a lot of the partition attributes and metadata that you do need to consider, but there is no data structure behind that to hold it.
    I would like to see PeopleTools tables hold partitioning metadata for tables and indexes, and Application Designer build DDL scripts to create and alter partitioned tables, to add partitions to existing tables, and then to migrate those definitions between environments.
    One positive that I can take from this is that Oracle has now clearly stated that it is reasonable to introduce partitioning into your PeopleSoft application without invalidating your support. The position hasn't actually changed, but now there is clarity.

    Data Guard Logical Standby – what does it mean?

    With Data Guard, you have the choice between Physical and Logical Standby databases. Let’s see the differences! My demo starts with a Physical Standby, which is then converted into a Logical Standby (hence the name of the database):

    [oracle@uhesse1 ~]$ dgmgrl sys/oracle@prima
    DGMGRL for Linux: Version 11.2.0.1.0 - 64bit Production
    
    Copyright (c) 2000, 2009, Oracle. All rights reserved.
    
    Welcome to DGMGRL, type "help" for information.
    Connected.
    DGMGRL> show configuration;
    
    Configuration - myconf
    
      Protection Mode: MaxPerformance
      Databases:
        prima - Primary database
        logst - Physical standby database
    
    Fast-Start Failover: DISABLED
    
    Configuration Status:
    SUCCESS
    

    For now, logst is still a Physical Standby. It is called that because the datafiles of prima and logst are physically identical. I can even restore them from one side to the other:

    DGMGRL> edit database logst set state=apply-off;
    Succeeded.
    DGMGRL> exit
    [oracle@uhesse1 ~]$ sqlplus sys/oracle@prima as sysdba
    
    SQL*Plus: Release 11.2.0.1.0 Production on Fri Feb 20 11:43:07 2015
    
    Copyright (c) 1982, 2009, Oracle.  All rights reserved.
    
    
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    
    SYS@prima > select name from v$datafile where file#=4;
    
    NAME
    --------------------------------------------------
    /u01/app/oracle/oradata/prima/users01.dbf
    
    SYS@prima > alter database datafile 4 offline;
    
    Database altered.
    

    Now I copy the datafile from the standby server uhesse2 to the primary server uhesse1 – there are different ways to do that, but scp is one:

    SYS@logst > select name from v$datafile where file#=4;
    
    NAME
    --------------------------------------------------
    /u01/app/oracle/oradata/logst/users01.dbf
    
    SYS@logst > exit
    Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    [oracle@uhesse2 ~]$ scp /u01/app/oracle/oradata/logst/users01.dbf uhesse1:/u01/app/oracle/oradata/prima/users01.dbf
    The authenticity of host 'uhesse1 (192.168.56.10)' can't be established.
    RSA key fingerprint is e9:e7:5b:8b:b2:33:42:26:89:03:54:0c:16:0d:98:57.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'uhesse1,192.168.56.10' (RSA) to the list of known hosts.
    oracle@uhesse1's password: 
    users01.dbf                                                                                               100% 5128KB   5.0MB/s   00:00    
    [oracle@uhesse2 ~]$ 
    

    When I try to online the datafile again on prima, it is as if I had restored it from a backup:

    SYS@prima > alter database datafile 4 online;
    alter database datafile 4 online
    *
    ERROR at line 1:
    ORA-01113: file 4 needs media recovery
    ORA-01110: data file 4: '/u01/app/oracle/oradata/prima/users01.dbf'
    
    
    SYS@prima > recover datafile 4;
    Media recovery complete.
    SYS@prima > alter database datafile 4 online;
    
    Database altered.
    

    The datafiles and also the archived logfiles are physically identical on both sites here, only the controlfiles are different. v$database (like v$datafile, by the way) derives its content from the controlfile:

    SYS@prima > select name,dbid,database_role from v$database;
    
    NAME							 DBID DATABASE_ROLE
    -------------------------------------------------- ---------- ----------------
    PRIMA						   2012613220 PRIMARY
    
    SYS@prima > connect sys/oracle@logst as sysdba
    Connected.
    SYS@logst > select name,dbid,database_role from v$database;
    
    NAME							 DBID DATABASE_ROLE
    -------------------------------------------------- ---------- ----------------
    PRIMA						   2012613220 PHYSICAL STANDBY
    

    Now I will convert it into Logical Standby:

    DGMGRL> edit database logst set state=apply-off;
    Succeeded.
    DGMGRL> exit
    [oracle@uhesse1 ~]$ sqlplus sys/oracle@prima as sysdba
    
    SQL*Plus: Release 11.2.0.1.0 Production on Fri Feb 20 17:29:16 2015
    
    Copyright (c) 1982, 2009, Oracle.  All rights reserved.
    
    
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    
    SYS@prima > exec dbms_logstdby.build
    
    PL/SQL procedure successfully completed.
    
    SYS@prima > connect sys/oracle@logst as sysdba
    Connected.
    SYS@logst > alter database recover to logical standby logst;
    
    Database altered.
    
    SYS@logst > shutdown immediate
    ORA-01507: database not mounted
    
    
    ORACLE instance shut down.
    SYS@logst > startup mount
    ORACLE instance started.
    
    Total System Global Area  521936896 bytes
    Fixed Size		    2214936 bytes
    Variable Size		  314573800 bytes
    Database Buffers	  201326592 bytes
    Redo Buffers		    3821568 bytes
    Database mounted.
    SYS@logst > alter database open resetlogs;
    
    Database altered.
    
    SYS@logst > select name,dbid,database_role from v$database;
    
    NAME							 DBID DATABASE_ROLE
    -------------------------------------------------- ---------- ----------------
    LOGST						   3156487356 LOGICAL STANDBY
    
    SYS@logst > exit
    Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    [oracle@uhesse1 ~]$ dgmgrl sys/oracle@prima
    DGMGRL for Linux: Version 11.2.0.1.0 - 64bit Production
    
    Copyright (c) 2000, 2009, Oracle. All rights reserved.
    
    Welcome to DGMGRL, type "help" for information.
    Connected.
    DGMGRL> remove database logst;
    Removed database "logst" from the configuration
    DGMGRL> add database logst as connect identifier is logst;
    Database "logst" added
    DGMGRL> enable database logst;
    Enabled.
    DGMGRL> show configuration;
    
    Configuration - myconf
    
      Protection Mode: MaxPerformance
      Databases:
        prima - Primary database
        logst - Logical standby database
    
    Fast-Start Failover: DISABLED
    
    Configuration Status:
    SUCCESS
    

    One significant change is that the DBID and the name are now different from the primary database, as you can see above. And the datafiles are no longer physically identical:

    DGMGRL> edit database logst set state=apply-off;
    Succeeded.
    DGMGRL> exit
    [oracle@uhesse1 ~]$ sqlplus sys/oracle@prima as sysdba
    
    SQL*Plus: Release 11.2.0.1.0 Production on Fri Feb 20 17:38:56 2015
    
    Copyright (c) 1982, 2009, Oracle.  All rights reserved.
    
    
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    
    SYS@prima > alter database datafile 4 offline;
    
    Database altered.
    
    SYS@prima > select name from v$datafile where file#=4;
    
    NAME
    --------------------------------------------------
    /u01/app/oracle/oradata/prima/users01.dbf
    
    SYS@prima > host cp /u01/app/oracle/oradata/prima/users01.dbf /u01/app/oracle/oradata/prima/users01.old
    

    I copy the original file because I know that the restore from logst will not work. It is just to show my point:

    [oracle@uhesse2 ~]$ scp /u01/app/oracle/oradata/logst/users01.dbf uhesse1:/u01/app/oracle/oradata/prima/users01.dbf
    oracle@uhesse1's password: 
    users01.dbf                                                                                   100% 5128KB   5.0MB/s   00:00    
    SYS@prima > alter database datafile 4 online;
    alter database datafile 4 online
    *
    ERROR at line 1:
    ORA-01122: database file 4 failed verification check
    ORA-01110: data file 4: '/u01/app/oracle/oradata/prima/users01.dbf'
    ORA-01206: file is not part of this database - wrong database id
    

    Exactly. logst is now an autonomous database that just happens to do (nearly) the same DML as prima. It is no longer the same as prima at the Oracle block level. The rowids from prima have no meaning on logst any more:

    DGMGRL> edit database logst set state=apply-on;
    Succeeded.
    SYS@prima > insert into scott.dept values (50,'TEST','TEST');
    insert into scott.dept values (50,'TEST','TEST')
                      *
    ERROR at line 1:
    ORA-00376: file 4 cannot be read at this time
    ORA-01110: data file 4: '/u01/app/oracle/oradata/prima/users01.dbf'
    
    
    SYS@prima > host cp /u01/app/oracle/oradata/prima/users01.old /u01/app/oracle/oradata/prima/users01.dbf
    
    SYS@prima > alter database datafile 4 online;
    alter database datafile 4 online
    *
    ERROR at line 1:
    ORA-01113: file 4 needs media recovery
    ORA-01110: data file 4: '/u01/app/oracle/oradata/prima/users01.dbf'
    
    
    SYS@prima > recover datafile 4;
    Media recovery complete.
    SYS@prima > alter database datafile 4 online;
    
    Database altered.
    
    SYS@prima > insert into scott.dept values (50,'TEST','TEST');
    
    1 row created.
    
    SYS@prima > commit;
    Commit complete.
    
    SYS@prima > select rowid,dept.* from scott.dept where deptno=50;
    
    ROWID		       DEPTNO DNAME	     LOC
    ------------------ ---------- -------------- -------------
    AAADS8AAEAAAACNAAE	   50 TEST	     TEST
    

    This rowid is what we normally record in the redo entries and it would be sufficient to retrieve that row on the primary and also on a physical standby where we do “Redo Apply” (another term for “recover database”). But that rowid is different on logst:

    SYS@logst > connect sys/oracle@logst as sysdba
    Connected.
    SYS@logst > select rowid,dept.* from scott.dept where deptno=50;
    
    ROWID		       DEPTNO DNAME	     LOC
    ------------------ ---------- -------------- -------------
    AAADS8AAEAAAACOAAA	   50 TEST	     TEST
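
    The difference becomes obvious if you decode the components of each rowid with DBMS_ROWID – run this sketch on both sites and compare:

    SELECT dbms_rowid.rowid_object(rowid) AS object#
         , dbms_rowid.rowid_relative_fno(rowid) AS file#
         , dbms_rowid.rowid_block_number(rowid) AS block#
         , dbms_rowid.rowid_row_number(rowid) AS row#
    FROM scott.dept
    WHERE deptno = 50;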
    

    That is why we need to put additional information – supplemental log data – into the redo entries on the primary. It will help the SQL Apply mechanism to retrieve the row there:

    [Diagram: Logical Standby Architecture]

    The supplemental log data contains, in addition, at least the primary/unique key, as shown in the diagram. In the absence of primary/unique keys, every column of a modified row is written into the redo logs, which may impact the performance of the primary database. Another serious drawback of Logical Standby is that not every datatype and not every operation on the primary is supported by the SQL Apply mechanism. The number of unsupported datatypes decreases version by version, though.
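
    The DBMS_LOGSTDBY.BUILD call in the demo above is what enables this supplemental logging on the primary; you can verify it like this:

    SELECT supplemental_log_data_min
         , supplemental_log_data_pk
         , supplemental_log_data_ui
    FROM v$database;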

    The demo and the sketch above are from my presentation about Transient Logical Standby at the Oracle University Expert Summit 2015 in Dubai – really an amazing location! Hope you find it useful :-)


    Learning for free – UK User Group Meetings Coming Up

    There are a few user group meetings coming up in the UK over the next week or two.

    Note, you need to register to attend any of these, follow the links.

    First is Club Oracle London, which are evening sessions held in London with 3 talks plus free beer and pizza. The next meeting is Thursday Feb 26th at 103a Oxford Street, kicking off at 18:30. You can register for this free event here and also see more details. In brief, Simon Haslam is talking about Oracle Database Appliance, Martin Bach on 12C new features the marketing guys don’t push and finally Phil Brown giving a virtualisation case study, how NOT to do it.

    On the 3rd March there is the 8th meeting of Oracle Midlands – again an evening event, but this one is in Birmingham. Again, it’s free, and samosas {lots of samosas!} turn up half way through to keep you going. Held at Innovation Birmingham near Aston University, there will be Chris Saxon on using Edition Based Redefinition to release PL/SQL to busy systems (ie the PL/SQL code is constantly being executed) and Tim Hall talking about Pluggable Databases and why this new feature can “break” things. Tim is good enough to say how you fix what gets broken (I wonder if the single logwriter will crop up?). They give away some free stuff too!

    Finally, on the 4th March and back in London there is the next UKOUG RAC, Cloud, Infrastructure & Availability SIG. Bit of a mouthful, that, but we can’t come up with a snappy name that covers the remit of the SIG. We are trying a new format for this SIG, inspired by the two events above. We still have a full day’s worth of content, but we start at 15:00 and go on into 20:00 in the evening. We hope that this will allow more people to attend without feeling they need to lose the whole working day to it. NB this event is free to UKOUG members, but you can pay to come if you (or usually your employer) are not a member. We have talks by Jason Arneil on 12C ASM, Dave Burnham talking about free text searching, David Hickson presenting on Linux OS resource management and Jamie Wallis from Oracle on TFA – Diagnostics for the Cloud. Plus a panel session where we discuss whatever you want to discuss!

    I’ll be along to Club Oracle London and the RAC CIA SIG but sadly not Oracle Midlands – just can’t squeeze it in which is a real shame as it’s an excellent event.

    If time, travel requirements and inclination allow, there are usually beers at a local pub after all the above where you can continue to talk about oracle stuff. Or not. It’s usually a mix :-)

    Finally, a quick plug for the OUG Ireland on 19th March. This is a full, one-day conference with more speakers than I can cover, held in Dublin. There are loads of excellent presenters, many are Oracle Aces, Oaktable members and experts in their field. I’ll be there doing my favorite talk, but I’ll do a separate blog about that.

    12c Parallel Execution New Features: Hybrid Hash Distribution - Part 2

    In the second part of this post (go to part 1) I want to focus on the hybrid distribution for skewed join expressions.

    2. Hybrid Distribution For Skewed Join Expressions

    The HYBRID HASH distribution allows addressing, to some degree, data distribution skew in the case of HASH distributions, which I've described in detail already in the past. A summary post that links to all other relevant articles regarding Parallel Execution Skew can be found here, an overview of the relevant feature can be found here, and a detailed description can be found here.

    One other side effect of the truly hybrid distribution in case of skew (a mixture of BROADCAST / HASH for one row source and ROUND-ROBIN / HASH for the other row source) is that HASH distributions following such a hybrid distribution need to redistribute again, even if the same join / distribution keys get used by the following joins. If these were regular HASH distributions, the data would already be suitably distributed and no further redistribution would be required.

    Here's an example of this, using the test case setup mentioned here:


    -- Here the HYBRID SKEW distribution works for B->C
    -- But the (B->C)->A join is affected by the same skew
    -- So the HASH re-distribution of the resulting B.ID is skewed, too
    -- And hence the HASH JOIN/SORT AGGREGATE (operation 4+5) are affected by the skew
    -- The big question is: Why is there a re-distribution (operation 12+11)?
    -- The data is already distributed on B.ID??
    -- If there wasn't a re-distribution no skew would happen
    -- In 11.2 no-redistribution happens no matter if C is probe or hash row source
    -- So it looks like a side-effect of the hybrid distribution
    -- Which makes sense as it is not really HASH distributed, but hybrid
    select count(t_2_filler) from (
    select /*+ monitor
    leading(b c a)
    use_hash(c a)
    swap_join_inputs(a)
    no_swap_join_inputs(c)
    pq_distribute(a hash hash)
    pq_distribute(c hash hash)
    --optimizer_features_enable('11.2.0.4')
    pq_skew(c)
    */
    a.id as t_1_id
    , a.filler as t_1_filler
    , c.id as t_2_id
    , c.filler as t_2_filler
    from t_1 a
    , t_1 b
    , t_2 c
    where
    c.fk_id_skew = b.id
    and a.id = b.id
    );

    -- 11.2 plan
    ----------------------------------------------------------------------------
    | Id | Operation | Name | TQ |IN-OUT| PQ Distrib |
    ----------------------------------------------------------------------------
    | 0 | SELECT STATEMENT | | | | |
    | 1 | SORT AGGREGATE | | | | |
    | 2 | PX COORDINATOR | | | | |
    | 3 | PX SEND QC (RANDOM) | :TQ10003 | Q1,03 | P->S | QC (RAND) |
    | 4 | SORT AGGREGATE | | Q1,03 | PCWP | |
    |* 5 | HASH JOIN | | Q1,03 | PCWP | |
    | 6 | PX RECEIVE | | Q1,03 | PCWP | |
    | 7 | PX SEND HASH | :TQ10000 | Q1,00 | P->P | HASH |
    | 8 | PX BLOCK ITERATOR | | Q1,00 | PCWC | |
    | 9 | TABLE ACCESS FULL | T_1 | Q1,00 | PCWP | |
    |* 10 | HASH JOIN | | Q1,03 | PCWP | |
    | 11 | PX RECEIVE | | Q1,03 | PCWP | |
    | 12 | PX SEND HASH | :TQ10001 | Q1,01 | P->P | HASH |
    | 13 | PX BLOCK ITERATOR | | Q1,01 | PCWC | |
    | 14 | TABLE ACCESS FULL| T_1 | Q1,01 | PCWP | |
    | 15 | PX RECEIVE | | Q1,03 | PCWP | |
    | 16 | PX SEND HASH | :TQ10002 | Q1,02 | P->P | HASH |
    | 17 | PX BLOCK ITERATOR | | Q1,02 | PCWC | |
    | 18 | TABLE ACCESS FULL| T_2 | Q1,02 | PCWP | |
    ----------------------------------------------------------------------------

    -- 12.1 plan
    -------------------------------------------------------------------------------------
    | Id | Operation | Name | TQ |IN-OUT| PQ Distrib |
    -------------------------------------------------------------------------------------
    | 0 | SELECT STATEMENT | | | | |
    | 1 | SORT AGGREGATE | | | | |
    | 2 | PX COORDINATOR | | | | |
    | 3 | PX SEND QC (RANDOM) | :TQ10004 | Q1,04 | P->S | QC (RAND) |
    | 4 | SORT AGGREGATE | | Q1,04 | PCWP | |
    |* 5 | HASH JOIN | | Q1,04 | PCWP | |
    | 6 | PX RECEIVE | | Q1,04 | PCWP | |
    | 7 | PX SEND HYBRID HASH | :TQ10002 | Q1,02 | P->P | HYBRID HASH|
    | 8 | STATISTICS COLLECTOR | | Q1,02 | PCWC | |
    | 9 | PX BLOCK ITERATOR | | Q1,02 | PCWC | |
    | 10 | TABLE ACCESS FULL | T_1 | Q1,02 | PCWP | |
    | 11 | PX RECEIVE | | Q1,04 | PCWP | |
    | 12 | PX SEND HYBRID HASH | :TQ10003 | Q1,03 | P->P | HYBRID HASH|
    |* 13 | HASH JOIN BUFFERED | | Q1,03 | PCWP | |
    | 14 | PX RECEIVE | | Q1,03 | PCWP | |
    | 15 | PX SEND HYBRID HASH | :TQ10000 | Q1,00 | P->P | HYBRID HASH|
    | 16 | STATISTICS COLLECTOR | | Q1,00 | PCWC | |
    | 17 | PX BLOCK ITERATOR | | Q1,00 | PCWC | |
    | 18 | TABLE ACCESS FULL | T_1 | Q1,00 | PCWP | |
    | 19 | PX RECEIVE | | Q1,03 | PCWP | |
    | 20 | PX SEND HYBRID HASH (SKEW)| :TQ10001 | Q1,01 | P->P | HYBRID HASH|
    | 21 | PX BLOCK ITERATOR | | Q1,01 | PCWC | |
    | 22 | TABLE ACCESS FULL | T_2 | Q1,01 | PCWP | |
    -------------------------------------------------------------------------------------

    Note that both joins, to A and to C, are based on B.ID. As you can see from the 11.2 plan, the final hash join (operation ID 5) therefore doesn't need the output of the previous hash join (operation ID 10) redistributed, since the data is already distributed in a suitable way (as a consequence, both joins will be affected by skewed values in T2.FK_ID_SKEW, but no BUFFERED join variant is required).

    Now look at the 12c plan when skew is detected: since the skew handling in fact leads to a potential mixture of HASH / BROADCAST and HASH / ROUND-ROBIN distribution, the data gets redistributed again for the final join (operation IDs 11 + 12), which has several bad side effects. First, it adds the overhead of an additional redistribution; as a side effect, this then turns one of the hash joins into its BUFFERED variant; and since the SKEW distribution (at present) is only supported if the right side of the join is a table (and not the result of another join), this following join will actually be affected by the very skew that was just addressed by the special SKEW handling in the join before (assuming the HYBRID HASH distributions in operation IDs 6 + 7 / 11 + 12 operate in HASH / HASH, not BROADCAST / ROUND-ROBIN mode)...

    PeopleTools 8.54: Global Temporary Tables

    This is part of a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.

    Database Feature Overview

    Global temporary tables were introduced in Oracle 8i.  They can be used where an application temporarily needs a working storage table.  They are named:

    • Global because the content is private
    • Temporary because the definition is permanent

    Or if you prefer

    • Global because the definition is available to everyone
    • Temporary because 
      • physical instantiation of the table is temporary, in the temporary segment (so it isn't redo logged and so isn't recoverable),
      • but it does generate undo in the undo segment, and there is redo on the undo.
      • Each session gets its own private copy of the table in the temp segment, so you cannot see what is in another session's temporary table, which can make application debugging difficult (see the sketch below).
      • The physical instantiation of the table is removed either 
        • when the session disconnects - on commit preserve
        • or when the transaction is terminated with a commit or rollback - on commit delete
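
    A minimal sketch of that session privacy (the table name is hypothetical):

    CREATE GLOBAL TEMPORARY TABLE my_gtt (n NUMBER) ON COMMIT PRESERVE ROWS
    /
    INSERT INTO my_gtt VALUES(1)
    /
    COMMIT
    /
    -- In a second session the same query returns 0: each session sees only
    -- the rows in its own private copy of the table.
    SELECT COUNT(*) FROM my_gtt
    /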

    This is a very useful database feature (I have been using it in PeopleSoft applications ever since it was introduced).

    • Can be used for temporary records in Application Engines where restart is disabled.
    • Can be implemented without any application code change.
    • Only Application Designer temporary records can be built as global temporary tables.  You cannot make a SQL Table record global temporary.
    • The reduction in redo generation during intensive batch processes, such as payroll processing, can bring significant performance benefits.  There is no point logging redo information for temporary working storage tables that you do not ever need to restore.
    • Shared temporary tables, such as those used in the GP calculation process GPPDPRUN, which is written in COBOL.  If using payroll streaming (multiple concurrent processes to process employees in parallel), concurrent delete/update can cause read-consistency problems when using a normal table; but with global temporary tables, each session has its own physical table, so reading a global temporary table never requires read-consistent recovery of blocks changed by another session.
    • Global temporary tables are also an effective way to resolve table high-water-mark issues that can occur on non-shared temporary tables in on-line application engine programs.  The PeopleTools %TruncateTable macro still resolves to a DELETE.  You never get high-water-mark problems with global temporary tables, because they are physically created afresh for each new session.
    • There is often a reduction in database size, because the tables are not retained after the session terminates, although there will be an increased demand for temporary tablespace while the global temporary tables are in use.
    • I have occasionally seen performance problems when PeopleSoft systems very frequently truncate tables and experience contention on the RO enqueue.  This problem does not occur with global temporary tables.

    Global temporary tables are not a separately licensed database feature, and are also available in Standard Edition.

    Global Temporary Tables in PeopleTools

    This is the create table DDL generated by Application Designer:

    DROP TABLE PS_ST_RM2_TAO
    /
    CREATE GLOBAL TEMPORARY TABLE PS_ST_RM2_TAO (PROCESS_INSTANCE
    DECIMAL(10) NOT NULL,
    EMPLID VARCHAR2(11) NOT NULL,
    GRANT_NBR VARCHAR2(10) NOT NULL,
    VEST_DT DATE,
    SHARES_REMAINDER DECIMAL(21, 9) NOT NULL,
    DEC_PLACES SMALLINT NOT NULL) ON COMMIT PRESERVE ROWS TABLESPACE
    STWORK
    /
    CREATE UNIQUE INDEX PS_ST_RM2_TAO ON PS_ST_RM2_TAO (PROCESS_INSTANCE,
    EMPLID,
    GRANT_NBR,
    VEST_DT)
    /

    The first thing to point out is the specification of a tablespace.  This is a new feature in Oracle 11g.  It is not mandatory in Oracle, but it is coded into the PeopleSoft DDL model, so you must specify a temporary tablespace on the record, otherwise it will fail to build.  A new temporary tablespace, PSGTT01, is delivered by Oracle when you upgrade to 8.54, or you could just use an existing temporary tablespace.

    This new feature has been implemented using two new DDL models (statement types 6 and 7).

    SELECT * FROM psddlmodel WHERE statement_type IN(6,7);

    STATEMENT_TYPE PLATFORMID SIZING_SET  PARMCOUNT
    -------------- ---------- ---------- ----------
    MODEL_STATEMENT
    ------------------------------------------------------------------------
                 6          2          0          0
    CREATE GLOBAL TEMPORARY TABLE [TBNAME] ([TBCOLLIST]) ON COMMIT PRESERVE
    ROWS TABLESPACE [TBSPCNAME];

                 7          2          0          0
    CREATE [UNIQUE] INDEX [IDXNAME] ON [TBNAME] ([IDXCOLLIST]);
    • All tables are created ON COMMIT PRESERVE ROWS, but on-line instances could be ON COMMIT DELETE ROWS (a theory subject to testing), and that would apply to ALL Application Engine programs, even if restart is enabled, because commits are suppressed in on-line Application Engine programs; instead, the commit is issued by the component.

    If you try adding a global temporary table to an Application Engine program that is not restart-disabled, you quite rightly get the following error message.  The table will be added, but the program will not execute correctly.

    "Global Temporary Tables allocated to this restart enabled AE program will not retain any data when program exits."

    Problems:

    • There has always been a 13-character limit on the names of temporary records, because there used to be a maximum of 99 non-shared instances and 2 of the 15 characters in a record name were reserved for the instance number.  If you try to set the number of instances to greater than 99 in an Application Engine program (I tried GP_GL_PREP) you now get the warning message
    "Do not support more than 99 instances when select the Temp Table which are not attributed as GTT"
    • There is now a maximum length of 11 characters for the name of a record built as a global temporary table, because from PeopleTools 8.54 there can be up to 9999 non-shared instances of the record, so 4 characters are reserved for the instance number.  The restriction applies irrespective of how many instances you are actually using. 
      • I have yet to encounter a system where I need more than 99 instances of a temporary table.  I can just about imagine needing 100 non-shared instances, but not 1000.  
      • This means that I cannot retrofit global temporary tables into existing Application Engine processes without changing record names.  There are existing delivered Application Engine programs with 12- and 13-character temporary record names that cannot now be switched to use global temporary tables managed by Application Designer.  I don't need to support more instances just because the table is global temporary.
        • For example, GP_GL_SEGTMP in GP_GL_PREP is a candidate to be made global temporary because GP_GL_PREP is a streamed Global Payroll process.  When I tried, I got a 'record name too long' error!
    "Record Name is too long. (47,67)"
        • Really, if the table is global temporary you don't need lots of instances.  Everyone could use the shared instance, because Oracle gives each session a private physical copy of the table anyway. 
          • You could do this by removing the record name from the list of temporary records in the Application Engine program; the %Table() macro will then generate the table name without an instance number.
          • There would be a question of how to handle optimizer statistics.  Statistics collected on a global temporary table in one session could end up being used in another, because there is only one place to store them in the data dictionary.
          • The answer is not to collect statistics at all and to rely on Optimizer Dynamic Sampling (see the sketch after this list).  There is a further enhancement in Oracle 12c, where global temporary tables can have session-private statistics, so statistics from different sessions are kept separate.
      • When Application Designer builds an alter script it cannot tell whether the object is a global temporary or a normal table, so it does not rebuild the table if you change it from one to the other.
      • The only real runtime downside of global temporary tables is that if you want to debug a process, the data is not left behind after the process terminates.  Even while the process is running, you cannot query the contents of a global temporary table in use by another session from your own session.
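      As a hedged sketch of the no-statistics approach (the schema and record names are taken from the earlier example), you could delete and lock the statistics on the global temporary table so that the optimizer falls back on dynamic sampling:

      -- Remove any stored statistics, then lock them so that nothing
      -- (including scheduled maintenance jobs) can gather new ones;
      -- the optimizer will then use dynamic sampling for this table.
      EXEC DBMS_STATS.DELETE_TABLE_STATS(ownname=>'SYSADM', tabname=>'PS_ST_RM2_TAO')
      EXEC DBMS_STATS.LOCK_TABLE_STATS(ownname=>'SYSADM', tabname=>'PS_ST_RM2_TAO')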

      My Recommendation

      Support for global temporary tables is welcome and long overdue.  It can bring significant run-time performance and system benefits due to the reduction in redo generation and read-consistency overhead.  It can be implemented without any code change. 

      We just need to sort out the 11 character record name length restriction.

      255 columns

      You all know that having more than 255 columns in a table is a Bad Thing ™ – and surprisingly you don’t even have to get to 255 to hit the first bad thing about wide tables. If you’ve ever wondered what sorts of problems you can have, here are a few:

      • If you’re still running 10g and gather stats on a table with more than roughly 165 columns then the query Oracle uses to collect the stats will only handle about 165 of them at a time; so you end up doing multiple (possibly sampled) tablescans to gather the stats. The reason why I can’t give you an exact figure for the number of columns is that it depends on the type and nullity of the columns – Oracle knows that some column types are fixed length (e.g. date types, char() types) and if any columns are declared not null then Oracle doesn’t have to worry about counting nulls – so for some of the table columns Oracle will be able to eliminate one or two of the related columns it normally includes in the stats-gathering SQL statement – which means it can gather stats on a few more table columns.  The 165-ish limit doesn’t apply in 11g – though I haven’t checked to see if there’s a larger limit before the same thing happens.
      • If you have more than 255 columns in a row Oracle will split it into multiple row pieces of 255 columns each, plus one row piece for “the rest”; but the split counts from the end, so if you have a table with 256 columns the first row-piece has one column and the second row-piece has 255 columns. This is bad news for all sorts of operations because Oracle will have to expend extra CPU chasing the row pieces to make use of any column not in the first row piece. The optimists among you might have expected “the rest” to be in the last row piece. If you want to be reminded how bad row-chaining can get for wide tables, just have a look at an earlier blog note of mine (starting at this comment).
      • A particularly nasty side effect of the row split comes with direct path tablescans – and that’s what Oracle does automatically when the table is large. In many cases all the row pieces for a row will be in the same block; but they might not be, and if a continuation row-piece is in a different block Oracle will do a “db file sequential read” to read that block into the buffer cache and it won’t be cached (see 1st comment below).  As an indication of how badly this can affect performance, the results I got at a recent client site showed “select count(col1) from wide_table” taking 10 minutes while “select count(column40) from wide_table” took 22 minutes because roughly one row in a hundred required a single block read to follow the chain (see the query sketch after this list).
      • An important side effect of the split point is that you really need to put the columns you’re going to index near the start of the table to minimise the risk of this row chaining overhead when you create or rebuild an index.
      • On top of everything else, of course, it takes a surprisingly large amount of extra CPU to load a large table if the rows are chained. Another client test reported 140 CPU seconds to load 5M rows of 256 columns, but only 20 CPU seconds to load 255.
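      If you want to measure this effect on your own data (a sketch; WIDE_TABLE and COLUMN256 are hypothetical names, and the session needs access to V$MYSTAT), compare the session statistic “table fetch continued row” before and after scanning a column that lives in a later row piece:

      -- Check the continued-row fetch count for this session
      SELECT sn.name, ms.value
      FROM   v$mystat ms, v$statname sn
      WHERE  ms.statistic# = sn.statistic#
      AND    sn.name = 'table fetch continued row';

      -- Force a scan that has to reach a later row piece
      SELECT /*+ full(t) */ COUNT(column256) FROM wide_table t;

      -- Re-run the first query; the increase is the number of extra
      -- row-piece fetches incurred by the wide-table layout.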

      If you are going to have tables with more than 255 columns, think very carefully about column order – if you can get all the columns that are almost always null at the end of the row you may get lucky and find that you never need to create a secondary row piece. A recent client had about 290 columns in one table of 16M rows, and 150 columns were null for all 16M rows – unfortunately they had a mandatory “date_inserted” column at the end of the row, but with a little column re-arrangement they eliminated row chaining and saved (more than) 150 bytes storage per row.  Of course, if they have to add and back-fill a non-null column to the table they’re going to have to rebuild the table to insert the column “in the middle”, otherwise all new data will be chained and wasting 150 bytes per row, and any old data that gets updated will suffer a row migration/chain catastrophe.
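      If you are looking for re-ordering candidates (a sketch that relies on reasonably fresh optimizer statistics; the table name is hypothetical), the data dictionary already records a null count for every column:

      -- Columns whose num_nulls equals the row count are candidates to
      -- move to the end of the row, where trailing nulls take no space.
      SELECT column_id, column_name, num_nulls
      FROM   user_tab_columns
      WHERE  table_name = 'WIDE_TABLE'
      ORDER  BY num_nulls DESC, column_id;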

      PeopleTools 8.54: Materialized Views

      This is part of a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.

      Materialized Views in the Database

      Snapshots were introduced in Oracle 7.  They were a way of building and maintaining a physical table to hold the results of a SQL query.  We are well used to a view being the results of a query, but here the results are materialised into a physical table.  Hence the feature was renamed materialized views in Oracle 8i.

      Today materialized views are one among many replication technologies.  They are available in standard edition Oracle, and there is no licensing implication in their use.

      Materialized views can generally be put into two categories

      • A simple, single-table materialized view. 
        • Often used to replicate data across a database link from another database.  Or to produce a subset of the data.
        • Can be refreshed incrementally using a PL/SQL package supplied by Oracle
          •  A materialized view log is created on the source table to record all the changes made to the source table.  It holds the primary key of the changed row, or the rowid (physical address of the row).  It can optionally hold additional columns.  It is populated by a database trigger on the source table (since Oracle 8i that trigger is hidden).
      • Multi-table materialized view
        • Usually done within a single database rather than across a database link.
        • Usually can only be refreshed by being completely rebuilt, although fast refresh of a multi-table materialized view is possible in certain restricted cases. 
        • If the same query as is used in a materialized view is submitted, Oracle can rewrite the query to use the materialized view instead.  Query rewrite only occurs subject to configuration and certain prerequisites being met.

      Materialized Views can be populated immediately when they are built, or later on demand.  They can be refreshed on demand, on a regular schedule by a database job, or immediately when an update to a source table is committed.  Materialized views can be put into Refresh Groups.  All the materialized views in a refresh group are refreshed in the same database transaction so the data in the materialized views is consistent.
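      For example (a sketch using the Oracle-supplied DBMS_MVIEW package; the view name is taken from the examples below), an on-demand refresh is just a procedure call:

      -- '?' lets Oracle choose a fast refresh where possible, else complete;
      -- use 'F' to force fast and 'C' to force complete.
      EXEC DBMS_MVIEW.REFRESH('PS_DMK', method => '?')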

      Materialized views can be updatable and used for bidirectional replication.  I am not going to talk about that here.

      When you introduce materialized views into an application you need to consider what you are trying to achieve, and make design decisions accordingly.

      Materialized Views in PeopleTools 8.54

      Using this new feature in PeopleSoft is deceptively easy, but quite a lot is going on behind the scenes.
      PeopleSoft Documentation (the term PeopleBooks seems to have been replaced by PeopleSoft Online Help): Data Management, Using Materialized Views provides an overview.

      There are two new PeopleTools tables:

      • PSPTMATVWDEFN - additional definition fields for the materialized view: build, refresh, staleness, stats.  It doesn't contain the query; that is held in PSSQLTEXTDEFN, as it is for all other views.
      • PSPTMATVWDEP - lists the tables upon which the materialized view depends.  PeopleSoft seems to work this out for itself by parsing the SQL query.

      I am going to demonstrate some aspects of the feature by running through some examples.

      Example 1: Replicate part of PS_JOB across a database link

      In this example I am going to use a materialized view to replicate a table from another Oracle database across a database link.

      If I select SQL View, the Materialized View check box appears; if I check that box, the Materialized View Options appear.

      This is the build script generated by Application Designer:

      DROP VIEW PS_DMK
      /
      CREATE VIEW PS_DMK (EMPLID, EMPL_RCD, EFFDT, EFFSEQ, SETID_DEPT,
      DEPTID) AS SELECT EMPLID , EMPL_RCD , EFFDT , EFFSEQ , SETID_DEPT ,
      DEPTID FROM PS_JOB@HROTHER
      /
      UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 0, PTMAT_REFRESHSTAT = 0,
      PTMAT_LASTREFRESH = TO_TIMESTAMP('1900-01-01-00.00.00.000000'
      ,'YYYY-MM-DD-HH24.MI.SS.FF'), PTMAT_STALENESS = 'STALE' WHERE RECNAME
      = 'DMK'
      /
      DELETE FROM MV_CAPABILITIES_TABLE WHERE MVNAME = 'PS_DMK'
      /
      DROP VIEW PS_DMK
      /
      CREATE MATERIALIZED VIEW PS_DMK (EMPLID, EMPL_RCD, EFFDT, EFFSEQ,
      SETID_DEPT, DEPTID) TABLESPACE PSMATVW BUILD DEFERRED REFRESH FAST
      ON DEMAND DISABLE QUERY REWRITE AS SELECT EMPLID , EMPL_RCD , EFFDT ,
      EFFSEQ , SETID_DEPT , DEPTID FROM PS_JOB@HROTHER
      /
      UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 1, PTMAT_REFRESHSTAT = 0,
      PTMAT_LASTREFRESH = TO_TIMESTAMP('1900-01-01-00.00.00.000000'
      ,'YYYY-MM-DD-HH24.MI.SS.FF'), PTMAT_STALENESS = 'STALE' WHERE RECNAME
      = 'DMK'
      /

      However, if the materialized view already exists, the script drops the materialized view, creates and drops an ordinary view, and then recreates the materialized view.

      DROP MATERIALIZED VIEW PS_DMK
      /
      UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 0, PTMAT_REFRESHSTAT = 0,
      PTMAT_LASTREFRESH = TO_TIMESTAMP('1900-01-01-00.00.00.000000'
      ,'YYYY-MM-DD-HH24.MI.SS.FF'), PTMAT_STALENESS = 'STALE' WHERE RECNAME
      = 'DMK'
      /
      CREATE VIEW PS_DMK (EMPLID, EMPL_RCD, EFFDT, EFFSEQ, SETID_DEPT,
      DEPTID) AS SELECT EMPLID , EMPL_RCD , EFFDT , EFFSEQ , SETID_DEPT ,
      DEPTID FROM PS_JOB@HROTHER
      /
      UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 0, PTMAT_REFRESHSTAT = 0,
      PTMAT_LASTREFRESH = TO_TIMESTAMP('1900-01-01-00.00.00.000000'
      ,'YYYY-MM-DD-HH24.MI.SS.FF'), PTMAT_STALENESS = 'STALE' WHERE RECNAME
      = 'DMK'
      /
      DELETE FROM MV_CAPABILITIES_TABLE WHERE MVNAME = 'PS_DMK'
      /
      DROP VIEW PS_DMK
      /
      CREATE MATERIALIZED VIEW PS_DMK (EMPLID, EMPL_RCD, EFFDT, EFFSEQ,
      SETID_DEPT, DEPTID) TABLESPACE PSMATVW BUILD IMMEDIATE REFRESH FAST
      ON DEMAND DISABLE QUERY REWRITE AS SELECT EMPLID , EMPL_RCD , EFFDT ,
      EFFSEQ , SETID_DEPT , DEPTID FROM PS_JOB@HROTHER
      /
      UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 1, PTMAT_REFRESHSTAT = 1,
      (PTMAT_LASTREFRESH, PTMAT_STALENESS) = (SELECT LAST_REFRESH_DATE,
      STALENESS FROM USER_MVIEWS WHERE MVIEW_NAME = 'PS_DMK') WHERE RECNAME
      = 'DMK'
      /
      • The Application Designer build script creates the materialized view using primary-key-based replication.  No WITH PRIMARY KEY clause is specified because it is the default.  There appears to be no way to get Application Designer to generate a WITH ROWID clause, so it is not possible to replicate a single table that has no unique key (see the sketch after this list for a manual alternative).  You might question whether that is useful, but it is possible in Oracle.
      • If there is no primary key on the source table, you will need to add one.  If the source is on another system, or a pre-8.54 PeopleSoft system, you will need to do this manually; otherwise you will get this error message:
      ERROR at line 4:
      ORA-12014: table 'PS_JOB' does not contain a primary key constraint
      • If you specify any key columns on the materialized view, they do not result in the usual indexes that you get on tables.  Also, it is not possible to add additional user-specified indexes to the materialized view - the option is greyed out.  This is rather disappointing, because you might want to do exactly that so you can query the materialized view in different ways to the underlying table.
        • You will get a unique index on a materialized view that is replicated by primary key, because the primary key will be inherited from the underlying table.
      • Nor is it possible to specify partitioning on a materialized view in Application Designer.
      • You can specify storage options on the materialized view via Record DDL, but the storage options do not appear in the CREATE MATERIALIZED VIEW statement in the build script.  This is rather disappointing, because you don't need to provide free space for updates in a materialized view that is completely refreshed each time, but you might if you refresh it incrementally.
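      For completeness, here is a hedged sketch of what a manually created rowid-based materialized view might look like (the source table name is hypothetical); this is the case that Application Designer cannot generate:

      -- A rowid-based materialized view log supports fast refresh of a
      -- single-table materialized view even when there is no unique key.
      CREATE MATERIALIZED VIEW LOG ON ps_some_keyless_tbl WITH ROWID;

      CREATE MATERIALIZED VIEW ps_dmk_rowid_mvw
      REFRESH FAST WITH ROWID
      AS SELECT * FROM ps_some_keyless_tbl;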

      Example 2: Replicate part of PS_JOB locally

      In this example, I am again only replicating 6 named columns into my materialized view.

      DROP VIEW PS_DMK
      /
      CREATE VIEW PS_DMK (EMPLID, EMPL_RCD, EFFDT, EFFSEQ, SETID_DEPT,
      DEPTID) AS SELECT EMPLID , EMPL_RCD , EFFDT , EFFSEQ , SETID_DEPT ,
      DEPTID FROM PS_JOB
      /
      UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 0, PTMAT_REFRESHSTAT = 0,
      PTMAT_LASTREFRESH = TO_TIMESTAMP('1900-01-01-00.00.00.000000'
      ,'YYYY-MM-DD-HH24.MI.SS.FF'), PTMAT_STALENESS = 'STALE' WHERE RECNAME
      = 'DMK'
      /
      ALTER TABLE PS_JOB DROP CONSTRAINT PS_JOB_PK
      /
      DROP MATERIALIZED VIEW LOG ON PS_JOB
      /
      DELETE FROM PSPTMATVWDEP WHERE RECNAME = 'DMK' AND PTMAT_BASETBL =
      'PS_JOB'
      /
      ALTER TABLE PS_JOB ADD CONSTRAINT PS_JOB_PK PRIMARY KEY (EFFDT, EFFSEQ
      , EMPLID, EMPL_RCD)
      /
      CREATE MATERIALIZED VIEW LOG ON PS_JOB TABLESPACE PSMATVW WITH PRIMARY
      KEY, ROWID, SEQUENCE(DEPTID, SETID_DEPT) INCLUDING NEW VALUES PURGE
      IMMEDIATE
      /
      INSERT INTO PSPTMATVWDEP(RECNAME, PTMAT_BASETBL) VALUES('DMK',
      'PS_JOB')
      /
      DELETE FROM MV_CAPABILITIES_TABLE WHERE MVNAME = 'PS_DMK'
      /
      DROP VIEW PS_DMK
      /
      UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 0, PTMAT_REFRESHSTAT = 0,
      PTMAT_LASTREFRESH = TO_TIMESTAMP('1900-01-01-00.00.00.000000'
      ,'YYYY-MM-DD-HH24.MI.SS.FF'), PTMAT_STALENESS = 'STALE' WHERE RECNAME
      = 'DMK'
      /

      I don't know why it rebuilds the record as an ordinary view and drops and recreates the primary key constraint every time, but it does.  You might not want to do this every time for a large materialized view that takes time to build.
      If the materialized view log has already been built, the next time you generate the build script it creates and drops the view, and then builds the materialized view.

      CREATE VIEW PS_DMK (EMPLID, EMPL_RCD, EFFDT, EFFSEQ, SETID_DEPT,
      DEPTID) AS SELECT EMPLID , EMPL_RCD , EFFDT , EFFSEQ , SETID_DEPT ,
      DEPTID FROM PS_JOB
      /
      DELETE FROM PSPTMATVWDEP WHERE RECNAME = 'DMK' AND PTMAT_BASETBL =
      'PS_JOB'
      /
      UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 0, PTMAT_REFRESHSTAT = 0,
      PTMAT_LASTREFRESH = TO_TIMESTAMP('1900-01-01-00.00.00.000000'
      ,'YYYY-MM-DD-HH24.MI.SS.FF'), PTMAT_STALENESS = 'STALE' WHERE RECNAME
      = 'DMK'
      /
      ALTER TABLE PS_JOB DROP CONSTRAINT PS_JOB_PK
      /
      DROP MATERIALIZED VIEW LOG ON PS_JOB
      /
      DELETE FROM PSPTMATVWDEP WHERE RECNAME = 'DMK' AND PTMAT_BASETBL =
      'PS_JOB'
      /
      ALTER TABLE PS_JOB ADD CONSTRAINT PS_JOB_PK PRIMARY KEY (EFFDT, EFFSEQ
      , EMPLID, EMPL_RCD)
      /
      CREATE MATERIALIZED VIEW LOG ON PS_JOB TABLESPACE PSMATVW WITH PRIMARY
      KEY, ROWID, SEQUENCE(DEPTID, SETID_DEPT) INCLUDING NEW VALUES PURGE
      IMMEDIATE
      /
      INSERT INTO PSPTMATVWDEP(RECNAME, PTMAT_BASETBL) VALUES('DMK',
      'PS_JOB')
      /
      DELETE FROM MV_CAPABILITIES_TABLE WHERE MVNAME = 'PS_DMK'
      /
      DROP VIEW PS_DMK
      /
      CREATE MATERIALIZED VIEW PS_DMK (EMPLID, EMPL_RCD, EFFDT, EFFSEQ,
      SETID_DEPT, DEPTID) TABLESPACE PSMATVW BUILD DEFERRED REFRESH FAST
      ON DEMAND DISABLE QUERY REWRITE AS SELECT EMPLID , EMPL_RCD , EFFDT ,
      EFFSEQ , SETID_DEPT , DEPTID FROM PS_JOB
      /
      UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 1, PTMAT_REFRESHSTAT = 1,
      (PTMAT_LASTREFRESH, PTMAT_STALENESS) = (SELECT LAST_REFRESH_DATE,
      STALENESS FROM USER_MVIEWS WHERE MVIEW_NAME = 'PS_DMK') WHERE RECNAME
      = 'DMK'
      /
      • It is rather odd to see a build script update a PeopleTools table. Application Designer also updates the PSPTMATVWDEFN table itself every time it generates the build script.  Note that the script doesn't issue an explicit commit, so if you execute the build script in SQL*Plus remember to commit to release the row level lock on PSPTMATVWDEFN.  
      • Application Designer flip-flops between these two build scripts that will repeatedly drop and create the materialized view and materialized view log.  Unless you are very careful you might not know whether you have the objects in the desired state.
      • The materialized view and materialized view log are always created in tablespace PSMATVW.  This is a new tablespace delivered in the standard tablespace script.  It is not possible to set the tablespace to something else, as you would for a normal table, because the record is defined as a view.  This is unfortunate because I might not want all my materialized views in the same tablespace.
      • Even though the materialized view is replicated by primary key, the materialized view log also contains the rowid and the supplementary columns.  This is overkill.  The materialized view log as built by Application Designer contains every length-bounded column in the source table.  This can significantly increase the overhead of maintaining the materialized view log as other processes update the source table.
      SQL> desc mlog$_ps_job
      Name                          Null?    Type
      ----------------------------- -------- -------------------
      EFFDT                                  DATE
      EFFSEQ                                 NUMBER
      EMPLID                                 VARCHAR2(11 CHAR)
      EMPL_RCD                               NUMBER
      DEPTID                                 VARCHAR2(10 CHAR)
      SETID_DEPT                             VARCHAR2(5 CHAR)
      M_ROW$$                                VARCHAR2(255 CHAR)
      SEQUENCE$$                             NUMBER
      SNAPTIME$$                             DATE
      DMLTYPE$$                              VARCHAR2(1 CHAR)
      OLD_NEW$$                              VARCHAR2(1 CHAR)
      CHANGE_VECTOR$$                        RAW(255)
      XID$$                                  NUMBER
        • If I just created the materialized view log as follows
      CREATE MATERIALIZED VIEW LOG ON PS_JOB TABLESPACE PSMATVW 
      WITH PRIMARY KEY
      --, ROWID, SEQUENCE(DEPTID, SETID_DEPT)
      INCLUDING NEW VALUES PURGE IMMEDIATE
      /
        • then the materialized view log contains fewer columns
      Name                          Null?    Type
      ----------------------------- -------- --------------------
      EFFDT                                  DATE
      EFFSEQ                                 NUMBER
      EMPLID                                 VARCHAR2(11 CHAR)
      EMPL_RCD                               NUMBER
      SNAPTIME$$                             DATE
      DMLTYPE$$                              VARCHAR2(1 CHAR)
      OLD_NEW$$                              VARCHAR2(1 CHAR)
      CHANGE_VECTOR$$                        RAW(255)
      XID$$                                  NUMBER
      • The materialized view inherits the primary key from the source table because it is a single table materialized view replicated using the primary key.  Therefore there is also a unique index on this materialised view.
      SELECT constraint_name, constraint_type, table_name, index_name
      FROM user_constraints
      WHERE table_name = 'PS_DMK'
      AND constraint_type != 'C'
      /

      CONSTRAINT_NAME C TABLE_NAME INDEX_NAME
      -------------------- - ---------- ----------
      PS_JOB_PK1 P PS_DMK PS_JOB_PK1

      SELECT index_name, index_type, uniqueness
      FROM user_indexes
      WHERE table_name = 'PS_DMK'
      /

      INDEX_NAME INDEX_TYPE UNIQUENES
      ---------- ---------- ---------
      PS_JOB_PK1 NORMAL UNIQUE

      SELECT index_name, column_position, column_name, descend
      FROM user_ind_columns
      WHERE table_name = 'PS_DMK'
      /

      INDEX_NAME COLUMN_POSITION COLUMN_NAME DESC
      ---------- --------------- -------------------- ----
      PS_JOB_PK1 1 EFFDT ASC
      PS_JOB_PK1 2 EFFSEQ ASC
      PS_JOB_PK1 3 EMPLID ASC
      PS_JOB_PK1 4 EMPL_RCD ASC
    • The build script clears out MV_CAPABILITIES_TABLE when it drops the materialized view.  This table is used to hold the output from DBMS_MVIEW.EXPLAIN_MVIEW (see the Oracle Database Data Warehousing Guide), which Application Designer executes when the materialized view record is saved (see the sketch below).
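    You can run the same analysis yourself (a sketch; MV_CAPABILITIES_TABLE must first have been created in your schema by running $ORACLE_HOME/rdbms/admin/utlxmv.sql):

    -- Analyse an existing materialized view and list its capabilities
    EXEC DBMS_MVIEW.EXPLAIN_MVIEW('PS_DMK')

    SELECT capability_name, possible, msgtxt
    FROM   mv_capabilities_table
    WHERE  mvname = 'PS_DMK'
    ORDER  BY seq;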
    Example 3: DMK_DPT_SEC_MVW is a materialized view cloned from security view DEPT_SEC_SRCH

      This view references various tables (I have edited out column lists and predicates for readability)

      SELECT …
      FROM PS_DEPT_TBL DEPT
      , PSOPRDEFN OPR
      WHERE EXISTS (
      SELECT 'X'
      FROM PS_SJT_DEPT SEC
      , PS_SJT_CLASS_ALL CLS
      , PS_SJT_OPR_CLS SOC
      …)
      OR EXISTS (
      SELECT 'X'
      FROM PS_SJT_DEPT SEC
      , PS_SJT_CLASS_ALL CLS
      , PS_SJT_OPR_CLS SOC
      …)
      OR EXISTS (
      SELECT 'X'
      FROM PS_SJT_DEPT SEC
      , PS_SJT_CLASS_ALL CLS
      , PS_SJT_OPR_CLS SOC
      …)

      But only four tables appear in PSPTMATVWDEP; PS_SJT_DEPT was not added.

      SELECT * FROM psptmatvwdep WHERE recname = 'DMK_DPT_SEC_MVW'
      /

      RECNAME PTMAT_BASETBL
      --------------- ------------------
      DMK_DPT_SEC_MVW PSOPRDEFN
      DMK_DPT_SEC_MVW PS_DEPT_TBL
      DMK_DPT_SEC_MVW PS_SJT_CLASS_ALL
      DMK_DPT_SEC_MVW PS_SJT_OPR_CLS

      I think this is because it tried and failed to add a primary key constraint and a materialized view log to PS_SJT_DEPT, because that record has a 'duplicate key' defined in Application Designer.  The following errors are found in the build log, even if the build script is not executed.

      ALTER TABLE PS_SJT_DEPT ADD CONSTRAINT PS_SJT_DEPT_PK PRIMARY KEY 
      (SCRTY_KEY1, SCRTY_KEY2, SCRTY_KEY3, SCRTY_TYPE_CD, SETID)
      Error: DMK_DPT_SEC_MVW - SQL Error. Error Position: 39 Return: 2437
      - ORA-02437: cannot validate (SYSADM.PS_SJT_DEPT_PK) - primary key violated

      CREATE MATERIALIZED VIEW LOG ON PS_SJT_DEPT TABLESPACE PSMATVW
      WITH PRIMARY KEY, ROWID, SEQUENCE (DEPTID, EFFDT_NOKEY)
      INCLUDING NEW VALUES PURGE IMMEDIATE
      Error: DMK_DPT_SEC_MVW - SQL Error. Error Position: 39 Return: 2437
      - ORA-02437: cannot validate (SYSADM.PS_SJT_DEPT_PK) - primary key violated

      Application Designer worked out that PS_SJT_DEPT was referenced in the materialized view query, but it didn't check whether the table has a unique key defined in PeopleTools.

      We didn't get as far as creating the materialized view.  However, Application Designer passed the CREATE MATERIALIZED VIEW command to the EXPLAIN_MVIEW function in order to populate MV_CAPABILITIES_TABLE:

      EXECUTE DBMS_MVIEW.EXPLAIN_MVIEW (q'[CREATE MATERIALIZED VIEW PS_DMK_DPT_SEC_MVW (SETID, OPRID, DEPTID, DESCR
      , DESCRSHORT, SETID_LOCATION, LOCATION, MANAGER_ID, COMPANY, USE_BUDGETS, USE_ENCUMBRANCES)
      TABLESPACE PSMATVW BUILD DEFERRED REFRESH FAST ON DEMAND AS
      SELECT DEPT.SETID, OPR.OPRID, DEPT.DEPTID , DEPT.DESCR , DEPT.DESCRSHORT , DEPT.SETID_LOCATION
      , DEPT.LOCATION , DEPT.MANAGER_ID , DEPT.COMPANY , DEPT.USE_BUDGETS , DEPT.USE_ENCUMBRANCES
      FROM PS_DEPT_TBL DEPT , PSOPRDEFN OPR
      WHERE EXISTS (
      SELECT 'X' FROM PS_SJT_DEPT SEC , PS_SJT_CLASS_ALL CLS , PS_SJT_OPR_CLS SOC
      WHERE SEC.SETID = DEPT.SETID AND SEC.DEPTID = DEPT.DEPTID AND SEC.EFFDT_NOKEY = DEPT.EFFDT
      AND CLS.SCRTY_SET_CD = 'PPLJOB' AND CLS.SCRTY_TYPE_CD = '001' AND CLS.TREE = 'Y'
      AND CLS.SCRTY_KEY1 = SEC.SCRTY_KEY1 AND CLS.SCRTY_KEY2 = SEC.SCRTY_KEY2
      AND CLS.SCRTY_KEY3 = SEC.SCRTY_KEY3 AND SOC.OPRID

      Example 4: DMK_JOB_CUR_MVW is a materialized view cloned from JOB_CURR_ALL_VW

      In this case I will try to create a materialized view on a complex query, but this time the underlying table has a unique key.  When I try to build the materialized view I get the following entries in the error log.  These warnings were obtained from the entries in MV_CAPABILITIES_TABLE which was populated by an attempt to describe the query.

      SQL Build process began on 16/02/2015 at 21:05:30 for database HR92U011. 
      Error: Cannot create Materialized View on record DMK_JOB_CUR_MVW.
      Warning: | PS_DMK_JOB_CUR_MVW | REFRESH_COMPLETE| Y | | |
      Warning: | PS_DMK_JOB_CUR_MVW | REFRESH_FAST| N | | |
      Warning: | PS_DMK_JOB_CUR_MVW | REWRITE| N | | |
      Warning: | PS_DMK_JOB_CUR_MVW | REFRESH_FAST_AFTER_INSERT| N | aggregate function in mv | |
      Warning: | PS_DMK_JOB_CUR_MVW | REFRESH_FAST_AFTER_INSERT| N | multiple instances of the same table or view | |
      Warning: | PS_DMK_JOB_CUR_MVW | REFRESH_FAST_AFTER_ONETAB_DML| N | see the reason why REFRESH_FAST_AFTER_INSERT is disabled | |
      Warning: | PS_DMK_JOB_CUR_MVW | REFRESH_FAST_AFTER_ANY_DML| N | see the reason why REFRESH_FAST_AFTER_ONETAB_DML is disabled | |
      Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_FULL_TEXT_MATCH| N | Oracle error: see RELATED_NUM and RELATED_TEXT for details |expression not supported for query rewrite |
      Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_FULL_TEXT_MATCH| N | Oracle error: see RELATED_NUM and RELATED_TEXT for details |expression not supported for query rewrite |
      Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_FULL_TEXT_MATCH| N | query rewrite is disabled on the materialized view | |
      Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_PARTIAL_TEXT_MATCH| N | materialized view cannot support any type of query rewrite | |
      Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_PARTIAL_TEXT_MATCH| N | query rewrite is disabled on the materialized view | |
      Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_GENERAL| N | subquery present in the WHERE clause | |
      Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_GENERAL| N | materialized view cannot support any type of query rewrite | |
      Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_GENERAL| N | query rewrite is disabled on the materialized view | |

      SQL Build process ended on 16/02/2015 at 21:05:30.
      1 records processed, 1 errors, 15 warnings.
      SQL Build script for all processes written to file C:\Temp\PSBUILD.SQL.
      SQL executed online.
      SQL Build log file written to C:\Temp\PSBUILD.LOG.
      • So, Application Designer does try to prevent you from creating materialized views that Oracle cannot manage, but the messages it reports are a little obscure.
      • If I change the refresh mode to Complete, Application Designer does not create materialized view logs. 
      CREATE MATERIALIZED VIEW PS_DMK_JOB_CUR_MVW (EMPLID, EMPL_RCD,
      ACTION_DT, BUSINESS_UNIT, EMPL_STATUS, HR_STATUS, DEPTID, JOBCODE,
      LOCATION, POSITION_NBR, ACTION, ACTION_REASON, COMP_FREQUENCY,
      COMPRATE, CURRENCY_CD, SAL_ADMIN_PLAN, GRADE, COMPANY, PAY_SYSTEM_FLG
      , PAYGROUP, REG_TEMP, FULL_PART_TIME, SETID_DEPT, SETID_JOBCODE,
      SETID_LOCATION, PER_ORG) TABLESPACE PSMATVW BUILD IMMEDIATE REFRESH
      COMPLETE ON DEMAND AS SELECT A.EMPLID ,A.EMPL_RCD ,A.EFFDT
      ,A.BUSINESS_UNIT ,A.EMPL_STATUS ,A.HR_STATUS ,A.DEPTID ,A.JOBCODE
      ,A.LOCATION ,A.POSITION_NBR ,A.ACTION ,A.ACTION_REASON
      ,A.COMP_FREQUENCY ,A.COMPRATE ,A.CURRENCY_CD ,A.SAL_ADMIN_PLAN
      ,A.GRADE ,A.COMPANY ,A.PAY_SYSTEM_FLG ,A.PAYGROUP ,A.REG_TEMP
      ,A.FULL_PART_TIME ,A.SETID_DEPT ,A.SETID_JOBCODE ,A.SETID_LOCATION
      ,A.PER_ORG FROM PS_JOB A WHERE A.EFFDT = ( SELECT MAX (C.EFFDT) FROM
      PS_JOB C WHERE C.EMPLID = A.EMPLID AND C.EMPL_RCD = A.EMPL_RCD AND
      ((C.EFFDT <= TO_DATE(TO_CHAR(SYSDATE,'YYYY-MM-DD'),'YYYY-MM-DD')) OR
      (A.EFFDT > TO_DATE(TO_CHAR(SYSDATE,'YYYY-MM-DD'),'YYYY-MM-DD') AND
      TO_DATE(TO_CHAR(SYSDATE,'YYYY-MM-DD'),'YYYY-MM-DD') < ( SELECT
      MIN(J2.EFFDT) FROM PS_JOB J2 WHERE J2.EMPLID = A.EMPLID AND
      J2.EMPL_RCD = A.EMPL_RCD) ) )) AND A.EFFSEQ = ( SELECT MAX(D.EFFSEQ)
      FROM PS_JOB D WHERE D.EMPLID = A.EMPLID AND D.EMPL_RCD = A.EMPL_RCD
      AND D.EFFDT = A.EFFDT)
      /
      UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 1, PTMAT_REFRESHSTAT = 1,
      (PTMAT_LASTREFRESH, PTMAT_STALENESS) = (SELECT LAST_REFRESH_DATE,
      STALENESS FROM USER_MVIEWS WHERE MVIEW_NAME = 'PS_DMK_JOB_CUR_MVW')
      WHERE RECNAME = 'DMK_JOB_CUR_MVW'
      /
      • Also, it doesn't create a primary key constraint on either the underlying table or the materialized view, so this materialized view doesn't have any indexes.

      Query ReWrite

      One common use of complex materialized views is to allow the optimizer to rewrite a query to use the materialized view when it sees the same query as was used to create the materialized view.  Optionally, the optimizer will also check that the view is up to date.  I have added the ENABLE QUERY REWRITE clause.

      DROP MATERIALIZED VIEW PS_DMK_PER_DEP_MVW
      /
      CREATE MATERIALIZED VIEW PS_DMK_PER_DEP_MVW (SETID_DEPT, DEPTID, EFFDT
      , DESCR) TABLESPACE PSMATVW
      BUILD IMMEDIATE REFRESH COMPLETE ON DEMAND
      enable query rewrite
      AS SELECT A.SETID ,A.DEPTID ,A.EFFDT ,A.DESCR FROM PS_DEPT_TBL A
      WHERE A.EFFDT= ( SELECT MAX(B.EFFDT) FROM PS_DEPT_TBL B WHERE A.SETID
      =B.SETID AND A.DEPTID= B.DEPTID AND B.EFFDT<=TO_DATE(TO_CHAR(SYSDATE
      ,'YYYY-MM-DD'),'YYYY-MM-DD'))
      /

      However, expressions - in this case one generated to determine the current effective-dated department - are not supported for query rewrite.

      =B.SETID AND A.DEPTID= B.DEPTID AND B.EFFDT<=TO_DATE(TO_CHAR(SYSDATE
      *
      ERROR at line 7:
      ORA-30353: expression not supported for query rewrite

      This could make it very difficult to use the feature in PeopleSoft. If you want to use the materialized view you are likely to have to reference it explicitly in the code.
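      If you need to diagnose exactly why a rewrite is not happening, DBMS_MVIEW can explain that too (a sketch; REWRITE_TABLE must first have been created by running $ORACLE_HOME/rdbms/admin/utlxrw.sql, and the query here is deliberately simplified):

      BEGIN
        -- Explain why this query is not rewritten to use the materialized view
        DBMS_MVIEW.EXPLAIN_REWRITE(
          query => 'SELECT A.SETID, A.DEPTID, A.EFFDT, A.DESCR FROM PS_DEPT_TBL A',
          mv    => 'PS_DMK_PER_DEP_MVW');
      END;
      /
      SELECT message FROM rewrite_table;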

      Refreshing Materialized Views

      There is a new component to manage the refresh frequency of materialized views.

      PeopleTools -> Utilities -> Administration -> Oracle Materialized Views -> Materialized View Maintenance
      This component schedules an Application Engine process called PTMATREFVW.

      What this Application Engine actually does is set the refresh frequency for the materialized view:

      &AlterSQL = "alter materialized view " | &mview_name 
      | " REFRESH NEXT SYSDATE + (" | &MatRecords.PTMAT_REFINT.Value | "/86400)";

      So the command issued is just

      alter materialized view PS_DMK REFRESH NEXT SYSDATE + (4242/86400)

      Effectively, for each materialized view, this creates a refresh group and a database job that refreshes it.

      SELECT rname, next_date, interval FROM user_refresh
      /

      RNAME NEXT_DATE INTERVAL
      ---------- --------- -------------------------
      PS_DMK 24-JAN-15 SYSDATE + (4242/86400)

      SELECT name, type, rname, job, next_date, interval FROM user_refresh_children
      /

      NAME TYPE RNAME JOB NEXT_DATE INTERVAL
      ---------- ---------- ---------- ---------- --------- -------------------------
      PS_DMK SNAPSHOT PS_DMK 21 24-JAN-15 SYSDATE + (4242/86400)

      SELECT job, next_date, next_Sec, interval, what FROM dba_jobs
      /

      JOB NEXT_DATE NEXT_SEC INTERVAL
      ---------- --------- -------- -------------------------
      WHAT
      --------------------------------------------------
      21 24-JAN-15 11:48:52 SYSDATE + (4242/86400)
      dbms_refresh.refresh('"SYSADM"."PS_DMK"');
      • But I might want to group related materialized views together into a single refresh group (see the sketch after this list).
      • I might want to refresh the job at a particular time, which can be done with a more sophisticated function in the interval. 
      • I might prefer to refresh a materialized view at a particular point in a batch schedule.  So I might prefer to code that into an Application Engine program, or have it submit a job that fires only once and does not resubmit (depending on whether I want to wait for the refresh).
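      As an illustration of the first two points (a sketch; the group name is hypothetical and the view names are from the examples above), a refresh group refreshes its members in a single consistent transaction, and the interval can be an expression that fires at a fixed time:

      BEGIN
        DBMS_REFRESH.MAKE(
          name      => 'SYSADM.HR_MV_GROUP',
          list      => 'SYSADM.PS_DMK, SYSADM.PS_DMK_PER_DEP_MVW',
          next_date => SYSDATE,
          interval  => 'TRUNC(SYSDATE)+1+6/24');  -- 06:00 every morning
      END;
      /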

      My Recommendations

      • Good Things
        • The removal of the descending indexes and the creation of the primary key is a good thing.
        • It is useful to be able to define the materialized view query in PeopleTools along with the rest of the application.
        • The use of EXPLAIN_MVIEW to test the validity of the materialized view and to populate MV_CAPABILITIES_TABLE is clever, but the messages are obscure and should be better documented.
      • Bad Things
        • There is no check that a source table in the local database has a unique key that can support a primary key constraint.
        • I can't build indexes on a materialized view in Application Designer.  The primary key is inherited automatically on single-table materialized views, but anything else has to be handled manually outside PeopleTools.
        • There is no support for rowid-based materialized views.
        • The materialized view logs created by Application Designer are totally overblown - there is far too much data being logged.  They should be either primary key or rowid (if a primary key is not possible), but not both, and I cannot see the point of the additional columns.  I think they are a waste of resources.
        • The flip-flopping of the build script is confusing; you will never be completely sure what you have in the database. 
        • The script drops the materialized view unnecessarily, which will drop any indexes that you have created manually!
      • I think some of the problems stem from trying to graft materialized views onto the existing view record type, instead of creating a new record type and building it into Application Designer properly and giving it the attributes of both a table and a view.
      • There is not enough control over when a materialized view is refreshed.  Just a time interval is not good enough.  In most systems, you need better control.
      • It is clearly going to be difficult getting database query rewrite to work with complex materialized views in PeopleSoft, especially if effective-date logic is required.  However, that is a reflection on the application code rather than the way support for materialized views has been implemented.

      When you design materialized views into your application you need to make careful choices about:

      • Refresh method
        • Complete - pretty much mandatory for multi-table views, although there are some exceptions to this rule described in the Oracle database documentation.
        • Fast - only for single-table views; rarely used within a single database, more commonly used for moving data between databases.  In that case you need a database link, the materialized view log is created in the source database, and the materialized view in the target.
      • Refresh mode
        • On commit - this is potentially dangerous because the refresh could happen too often and there won't be a saving.  A possible exception is when the underlying table is only ever updated by a batch process.
        • On demand - manually issue refresh at a specific point in a batch
        • On schedule by a database job.
      • Build options
        • Immediate - the materialized view is populated when it is created
        • Deferred - the materialized view is not populated when it is created, and will have to be completely refreshed at some point in the future.

      Choice of refresh method and mode is often a functional question: how stale can the materialized view be allowed to be, especially if you can't get query rewrite to work?  Thoughtful design is required.  I have seen systems severely degraded by the query and redo overhead of excessive refresh. 
      PeopleTools support of materialized views certainly has some good things, but as it stands it is of limited use when it still leaves you with a lot of manual administration to do. 

      In most systems it is the DBAs who will have to manage the materialized views.  They are generally resistant to using PeopleSoft specific tools to do database administration.  That is going to be even more challenging when only a part of the job can be done in PeopleSoft.