Oakies Blog Aggregator

ASH presentations from John Beresniewicz

 

John Beresniewicz and I worked hand in hand on developing the performance pages in Oracle Enterprise Manager. John is one of the foremost experts on Active Session History (ASH) queries, and he has recently posted some of his ASH presentations to SlideShare.


Renaming an Oracle Apache Target in EM12c

When installing Enterprise Manager 12c, the host value can come from a number of places for different applications/tiers.  For most, it comes from the environment variable $ORACLE_HOSTNAME (on Windows servers, %ORACLE_HOSTNAME%).

The OHS1 target, aka Oracle Apache, in the middle tier of the EM12c environment pulls its value from the /etc/hosts file (on Unix as well as Windows), so it is vulnerable when a virtual host name is used or a host name change occurs.  It can, however, be updated post-installation when the OHS1 target fails to return an active status in the EM12c console.

Update the Configuration File

The file that controls the configuration of the OHS1 target is topology.xml, located at $OMS_HOME/user_projects/domains/GCDomain/opmn/topology.xml.

Edit the topology.xml file and replace/add the following entries in bold text, replacing Virtual_Cluster_name with the name of the cluster:

- " port="6701">
- host="">

 Save the file with the new changes.
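As a sketch (not from the original post), the host swap could be scripted rather than hand-edited. The function below backs up topology.xml and then replaces the old host name throughout; the path and host names you pass in are placeholders for your own values:

```python
# Hypothetical helper (not from the post): back up topology.xml and swap
# the old host name for the new one throughout the file.
import shutil
from pathlib import Path

def update_topology_host(topology: Path, old_host: str, new_host: str) -> None:
    """Keep a .bak copy, then replace every occurrence of old_host."""
    shutil.copy(topology, topology.with_suffix(".xml.bak"))  # safety copy first
    topology.write_text(topology.read_text().replace(old_host, new_host))
```

You would call it against the $OMS_HOME/user_projects/domains/GCDomain/opmn/topology.xml path given above, then verify the file before restarting.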

Remove the OHS1 Target

Log into your EM12c console as the SYSMAN user (or another user with appropriate privileges) and click on All Targets.  Either search for the OHS1 target or scan down and double-click on it.  The target will show as down and display the incorrect targets associated with the HTTP Server:

ohs_tgt_wrong

You will need to remove and re-add the target so that EM12c picks up the updated host name from the topology.xml file.

To do this, click on Oracle HTTP Server –> Target Setup –> Remove Target. The target for the Oracle Apache server/HTTP Server, along with its dependents, has now been removed.

Refresh the Weblogic Domain

To re-add the OHS1 target, we are going to use a job already built into EM12c.  Go back to All Targets via the Targets drop-down.  At the very top you will commonly see EMGC_GCDomain (Grid Control Domain; yes, it's still referred to as that... :))  Log into this target.  There are two "levels" to this target: the parent and the farm.  Either one will offer a Refresh Weblogic Domain job in its drop-down.

weblogic_refresh

Once you click on this job, it will ask you to remove or add targets.  You can simply choose Add Targets and the job will first search for any missing targets that need to be re-added.  In this case it located 12 and displayed a list of the targets it wishes to add.  You will note that the OHS1 target now displays the CORRECT host name.

Close the window and complete the wizard steps to add these targets to the Weblogic domain.

Return to All Targets and access the OHS1 target to verify that it now displays an active status; it may take up to one collection interval for the target status to update.

 

 



Copyright © DBA Kevlar [Renaming an Oracle Apache Target in EM12c], All Rights Reserved. 2014.

Dealing with Wi-Fi Problems the DBA Kevlar Way

As much as KSCOPE 14 fixed any and all Low-T levels, it couldn't do anything for the WiFi problems we were experiencing back in May. I've been a telecommuter for almost 4 straight years now and was quite frustrated when our WiFi service became dismal back then.

We called Comcast and worked through troubleshooting with their support multiple times, yet we were still unable to download files or watch Netflix well into the first three weeks of June.  Tim purchased a new modem after Comcast confirmed our model was at "end of support", and yet still no improvement.  After more research and attempted fixes yielded no improvement, I was getting tired of having to commute to telecommute.

In the middle of June, I switched service from Comcast to Centurylink, hoping to solve the problem, but just continued to experience dismal WiFi from another provider.  The situation had become unbearable for me (and also for Tim, who was working from home full-time, too...) and sent me right into one of my "take no prisoners" moods.   I've never liked guessing my way through anything and wanted some real information and a solution, so I thought I would share what I ended up doing to truly solve the WiFi problem, as it may save others the headache and time of trying to figure out what is really going on.

After a bit of research, I located a program called inSSIDer.  This application, once downloaded and purchased for $20, will give you a clear, graphic display of the WiFi networks in your vicinity, including unnamed (hidden) ones.  It shows you what ranges, channels and bands (2.4GHz or 5GHz) they are in.  The app can also show what hardware is being used and whether networks overlap or interfere with your own WiFi signal, and then make recommendations about channel changes that can improve performance.

Keep in mind: you'll now be viewing our "resolved" WiFi configuration, but the app is cool, and it will make a lot of sense once you are able to visualize how this application works.

wifi1

If you'll notice, I'm getting quite a bit of speed now that I've corrected the problem we were experiencing, but note how the application displays information about all WiFi networks in range.  It shows the channels and security info, along with the speed of each. With my new configuration, you can see that my 5GHz network is very fast, followed by the 2.4GHz and the kids' extended one.

What's really nice when viewing all of this data during a troubleshooting scenario is that the overlap, and the channels used by you and your neighboring networks, are shown both visually via graphs and in tables.

Below are the 2.4GHz networks in just my area.  If you are on a small cul-de-sac like I am, it can be very frustrating to see how many are running not just on the same channel, but overlapping in frequency:

wifi2

You can then click on one of the networks in the list and view detailed data about that WiFi network:

wifi7

Above you can note that there is distinct overlap with other networks, four in all, but primarily one that is completely impacting the quality of the WiFi network.  The application also graphs the conflict caused by the strongest overlap, from the 5th neighbor's WiFi onto the 3rd neighbor's.

One of the things that became quickly apparent was that there wasn't ONE 5GHz WiFi network in our area, so this was our cue to purchase a dual-band modem and set it up.  Since the range of 5GHz is more limited than that of 2.4GHz, we added a repeater to the configuration, and now we can see via the inSSIDer app that we have the whole 5GHz range to ourselves in our neighborhood:

wifi3

If we look at the detailed information, the only overlap is coming from the repeater, (expected…)  and the Max Data Rate is to die for! :)

wifi4

Choosing a New Modem

As you can see above, we ended up with a dual-band modem, capable of both 5GHz and 2.4GHz.  This may seem like overkill to most, but consider how much uses WiFi these days.  Most people purchase the cheapest modem from a local retailer or the default modem recommended by their provider.  None I have seen were dual-band (both 2.4GHz and 5GHz), as having a wide range of coverage is often more attractive than having a powerful connection.  This might sound like the same thing, but it isn't: 2.4GHz provides a less powerful signal than 5GHz, but it will cover more area.  Your provider will, of course, sell you a modem that covers a wider range in your household.

This has a catch-22 involved with it:

1.  The range will cover a wider area, resulting in more overlap with your neighbors' Wi-Fi.

2.  Channels for a 2.4GHz modem range from 1 to 11 (in the US) and are most often set to 1, 6, 10 or 11, which means there is a high chance you will be set to the same range and channel as your neighbor's service.

3.  This overlap and these channel conflicts are not avoided by having a different Wi-Fi provider.  All providers are restricted to the same channels and ranges.
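The channel arithmetic behind these points can be sketched in a few lines. This is a toy model assuming 22MHz-wide channels centred 5MHz apart starting at 2412MHz (the classic 802.11b layout); the real picture varies by regulatory domain and 802.11 variant:

```python
# Toy model of 2.4GHz channel overlap: channels are centred 5MHz apart but
# each is ~22MHz wide, which is why only 1, 6 and 11 are mutually clear.
def centre_mhz(ch: int) -> int:
    """Centre frequency of a 2.4GHz channel (channel 1 = 2412MHz)."""
    return 2412 + 5 * (ch - 1)

def overlaps(ch_a: int, ch_b: int, width: int = 22) -> bool:
    # Two channels interfere when their centres are closer than one width apart.
    return abs(centre_mhz(ch_a) - centre_mhz(ch_b)) < width

print(overlaps(1, 6))   # False: 2412 vs 2437, 25MHz apart
print(overlaps(1, 4))   # True: 2412 vs 2427, only 15MHz apart
```

This is why two neighbors on channels 1 and 4 still stomp on each other even though the channel numbers differ.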

With this valuable information at hand, we purchased a dual-band modem that provided both 2.4GHz and 5GHz Wi-Fi networks.  We set up the 2.4GHz network on the channel that had the least traffic and only used it for lighter traffic.

Since setting up the 5GHz network (with 2.4GHz for the kids), we've had no issues with WiFi, and I can happily report that Tim and I work from home without problems, downloading, uploading and just clogging the internet as often as we like without impacting anyone in our household... :)



Copyright © DBA Kevlar [Dealing with Wi-Fi Problems the DBA Kevlar Way], All Rights Reserved. 2014.

To Hint or not to hint (Application Engine), that is the question

Over the years Oracle has provided a number of plan stability technologies to control how SQL statements are executed.  At the risk of over-simplification: Outlines (deprecated in 11g), Profiles, Baselines and Patches work by injecting a set of hints into a SQL statement at parse time.  There is quite a lot of advice from Oracle to use these technologies to fix errant execution plans rather than hint the application.  I think it is generally good advice; however, there are times when this approach does not work well with PeopleSoft, and that is due to the behaviour and structure of PeopleSoft rather than the Oracle database.

It is possible to produce a SQL profile from a plan captured by AWR.  Part of the distribution for the SQLT Diagnostic Tool (Doc ID 215187.1) is a script called coe_xfr_sql_profile.sql, written by Carlos Sierra.
The only thing I would change in the delivered script (for use with PeopleSoft, and as suggested in a comment) is to create the profile with FORCE_MATCHING so that similar statements with different literal values still match.
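To illustrate why FORCE_MATCHING handles differing literals but not differing table names, here is a toy sketch of signature matching. This only mimics the idea behind DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE with force matching; Oracle's real normalisation is considerably more involved:

```python
# Toy FORCE_MATCHING: normalise literals away before hashing the SQL text,
# so statements differing only in literal values share a signature, while
# statements against different tables do not.
import hashlib
import re

def toy_signature(sql: str) -> str:
    s = re.sub(r"'[^']*'", ":lit", sql)   # string literals -> placeholder
    s = re.sub(r"\b\d+\b", ":lit", s)     # numeric literals -> placeholder
    s = " ".join(s.upper().split())       # normalise case and whitespace
    return hashlib.sha256(s.encode()).hexdigest()

a = toy_signature("SELECT * FROM T WHERE PROCESS_INSTANCE = 51381955")
b = toy_signature("SELECT * FROM T WHERE PROCESS_INSTANCE = 51377796")
print(a == b)   # True: different literals, same signature
```

Swap T for another table name, however, and the signature changes, which is exactly the problem the rest of this post deals with.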

The Slings and Arrows of outrageous execution plans

Let's take an example of a statement (from the vanilla Financials product, which has not been customised) that performed poorly because it didn't generate a good execution plan (I have cut out most of the statement for readability).  Note that it references instance 5 of the PeopleTools temporary record CA_SUM_TAO5.

INSERT INTO PS_CA_SUM_RC_TAO5 (…) SELECT

FROM PS_CA_PR_SUMM A, PS_CA_SUM_TAO5 B , PS_CA_SUM_IN_USE C WHERE

B.PROCESS_INSTANCE = 51381955 AND C.IN_USE_FLAG = 'Y'

Plan hash value: 2039212279
----------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | | 14424 (100)| |
| 1 | LOAD TABLE CONVENTIONAL | | | | | |
| 2 | NESTED LOOPS | | | | | |
| 3 | NESTED LOOPS | | 1 | 430 | 14424 (1)| 00:02:54 |
| 4 | NESTED LOOPS | | 1 | 318 | 14421 (1)| 00:02:54 |
| 5 | TABLE ACCESS FULL | PS_CA_SUM_IN_USE | 1 | 85 | 14420 (1)| 00:02:54 |
| 6 | TABLE ACCESS BY INDEX ROWID| PS_CA_SUM_TAO5 | 1 | 233 | 1 (0)| 00:00:01 |
| 7 | INDEX UNIQUE SCAN | PS_CA_SUM_TAO5 | 1 | | 0 (0)| |
| 8 | INDEX RANGE SCAN | PSACA_PR_SUMM | 1 | | 2 (0)| 00:00:01 |
| 9 | TABLE ACCESS BY INDEX ROWID | PS_CA_PR_SUMM | 1 | 112 | 3 (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------

However, below is the plan we get on instance 4.  We get this plan because a profile was applied in the past, but now we are on a different non-shared instance of the temporary table, so the profile cannot match: we are on different objects, and we get the same problem on different non-shared instances of the temporary record.  Different literal values, such as those for Process Instance, can be handled by FORCE_MATCHING, but different tables cannot.  That is a totally different SQL statement.

SQL_ID 5gtxdqbx0d0c3
--------------------
INSERT INTO PS_CA_SUM_RC_TAO4 (…) SELECT

FROM PS_CA_PR_SUMM A, PS_CA_SUM_TAO4 B , PS_CA_SUM_IN_USE C WHERE

B.PROCESS_INSTANCE = 51377796 AND C.IN_USE_FLAG = 'Y'

Plan hash value: 3552771247

---------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | | 36361 (100)| |
| 1 | LOAD TABLE CONVENTIONAL | | | | | |
| 2 | HASH JOIN | | 1 | 430 | 36361 (3)| 00:07:17 |
| 3 | TABLE ACCESS FULL | PS_CA_SUM_IN_USE | 1 | 85 | 14347 (1)| 00:02:53 |
| 4 | NESTED LOOPS | | | | | |
| 5 | NESTED LOOPS | | 1 | 345 | 22014 (3)| 00:04:25 |
| 6 | TABLE ACCESS FULL | PS_CA_PR_SUMM | 5268K| 562M| 21539 (1)| 00:04:19 |
| 7 | INDEX UNIQUE SCAN | PS_CA_SUM_TAO4 | 1 | | 0 (0)| |
| 8 | TABLE ACCESS BY INDEX ROWID| PS_CA_SUM_TAO4 | 1 | 233 | 1 (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------

Note
-----
- SQL profile "coe_gn3n77gs6xj2a_3552771247" used for this statement

Of course, the statement on instance 4 had a profile because it was added as a short-term fix and then left in situ long term.  It worked fine until a process error left the non-shared instance of the temporary record allocated to that process instance, so PeopleSoft allocated instance 5 on the next execution.
So we could just create another profile using the coe_xfr_sql_profile.sql script:

SPO coe_xfr_sql_profile_5gtxdqbx0d0c3_3552771247.log;
SET ECHO ON TERM ON LIN 2000 TRIMS ON NUMF 99999999999999999999;
REM
REM $Header: 215187.1 coe_xfr_sql_profile_5gtxdqbx0d0c3_3552771247.sql 11.4.1.4 2014/08/13 csierra $
REM
REM Copyright (c) 2000-2010, Oracle Corporation. All rights reserved.
REM
REM AUTHOR
REM carlos.sierra@oracle.com
REM
REM SCRIPT
REM coe_xfr_sql_profile_5gtxdqbx0d0c3_3552771247.sql
REM
REM DESCRIPTION
REM This script is generated by coe_xfr_sql_profile.sql
REM It contains the SQL*Plus commands to create a custom
REM SQL Profile for SQL_ID 5gtxdqbx0d0c3 based on plan hash
REM value 3552771247.
REM The custom SQL Profile to be created by this script
REM will affect plans for SQL commands with signature
REM matching the one for SQL Text below.
REM Review SQL Text and adjust accordingly.
REM
REM PARAMETERS
REM None.
REM
REM EXAMPLE
REM SQL> START coe_xfr_sql_profile_5gtxdqbx0d0c3_3552771247.sql;
REM
REM NOTES
REM 1. Should be run as SYSTEM or SYSDBA.
REM 2. User must have CREATE ANY SQL PROFILE privilege.
REM 3. SOURCE and TARGET systems can be the same or similar.
REM 4. To drop this custom SQL Profile after it has been created:
REM EXEC DBMS_SQLTUNE.DROP_SQL_PROFILE('coe_5gtxdqbx0d0c3_3552771247');
REM 5. Be aware that using DBMS_SQLTUNE requires a license
REM for the Oracle Tuning Pack.
REM
WHENEVER SQLERROR EXIT SQL.SQLCODE;
REM
VAR signature NUMBER;
REM
DECLARE
sql_txt CLOB;
h SYS.SQLPROF_ATTR;
BEGIN
sql_txt := q'[
INSERT INTO PS_CA_SUM_RC_TAO4 (PROCESS_INSTANCE, BUSINESS_UNIT, PROJECT_ID, ACTIVITY_ID, ANALYSIS_TYPE, RESOURCE_TYPE, RESOURCE_CATEGORY, RESOURCE_SUB_CAT, BI_DISTRIB_STATUS, GL_DISTRIB_STATUS, FOREIGN_CURRENCY, CONTRACT_CURRENCY, CONTRACT_NUM, CONTRACT_LINE_NUM, CA_FEE_STATUS, RESOURCE_QUANTITY, FOREIGN_AMOUNT_BSE, FOREIGN_AMOUNT_INC, FOREIGN_AMOUNT, CONTRACT_AMT_BSE, CONTRACT_AMT_INC, CONTRACT_AMT, MIN_TRANS_DT, MAX_TRANS_DT, CAND_MIN_TRANS_DT, CAND_MAX_TRANS_DT) SELECT B.PROCESS_INSTANCE, A.BUSINESS_UNIT, A.PROJECT_ID, A.ACTIVITY_ID, A.ANALYSIS_TYPE, A.RESOURCE_TYPE, A.RESOURCE_CATEGORY, A.RESOURCE_SUB_CAT, A.BI_DISTRIB_STATUS, A.GL_DISTRIB_STATUS,
A.FOREIGN_CURRENCY, A.CONTRACT_CURRENCY, A.CONTRACT_NUM, A.CONTRACT_LINE_NUM, A.CA_FEE_STATUS, (A.RESOURCE_QUANTITY+B.RESOURCE_QUANTITY), A.FOREIGN_AMOUNT, B.FOREIGN_AMOUNT, (A.FOREIGN_AMOUNT+B.FOREIGN_AMOUNT), A.CONTRACT_AMT, B.CONTRACT_AMT, (A.CONTRACT_AMT+B.CONTRACT_AMT), A.MIN_TRANS_DT, A.MAX_TRANS_DT, B.CAND_MIN_TRANS_DT, B.CAND_MAX_TRANS_DT FROM PS_CA_PR_SUMM A, PS_CA_SUM_TAO4 B , PS_CA_SUM_IN_USE C WHERE B.BUSINESS_UNIT = C.BUSINESS_UNIT AND B.PROJECT_ID = C.PROJECT_ID AND B.ACTIVITY_ID = C.ACTIVITY_ID AND B.ANALYSIS_TYPE = C.ANALYSIS_TYPE AND B.RESOURCE_TYPE = C.RESOURCE_TYPE AND B.RESOURCE_CATEGORY = C.RESOURCE_CATEGORY AND B.RESOURCE_SUB_CAT =
C.RESOURCE_SUB_CAT AND B.BI_DISTRIB_STATUS = C.BI_DISTRIB_STATUS AND B.GL_DISTRIB_STATUS = C.GL_DISTRIB_STATUS AND B.FOREIGN_CURRENCY = C.FOREIGN_CURRENCY AND B.CONTRACT_CURRENCY = C.CONTRACT_CURRENCY AND B.CONTRACT_NUM = C.CONTRACT_NUM AND B.CONTRACT_LINE_NUM = C.CONTRACT_LINE_NUM AND B.CA_FEE_STATUS = C.CA_FEE_STATUS AND A.BUSINESS_UNIT = B.BUSINESS_UNIT AND A.PROJECT_ID = B.PROJECT_ID AND A.ACTIVITY_ID = B.ACTIVITY_ID AND A.ANALYSIS_TYPE = B.ANALYSIS_TYPE AND A.RESOURCE_TYPE = B.RESOURCE_TYPE AND A.RESOURCE_CATEGORY = B.RESOURCE_CATEGORY AND A.RESOURCE_SUB_CAT = B.RESOURCE_SUB_CAT AND A.BI_DISTRIB_STATUS = B.BI_DISTRIB_STATUS AND A.GL_DISTRIB_STATUS =
B.GL_DISTRIB_STATUS AND A.FOREIGN_CURRENCY = B.FOREIGN_CURRENCY AND A.CONTRACT_CURRENCY = B.CONTRACT_CURRENCY AND A.CONTRACT_NUM = B.CONTRACT_NUM AND A.CONTRACT_LINE_NUM = B.CONTRACT_LINE_NUM AND A.CA_FEE_STATUS = B.CA_FEE_STATUS AND B.PROCESS_INSTANCE = 51377796 AND C.IN_USE_FLAG = 'Y'
]';
h := SYS.SQLPROF_ATTR(
q'[BEGIN_OUTLINE_DATA]',
q'[IGNORE_OPTIM_EMBEDDED_HINTS]',
q'[OPTIMIZER_FEATURES_ENABLE('11.2.0.3')]',
q'[DB_VERSION('11.2.0.3')]',
q'[OPT_PARAM('_unnest_subquery' 'false')]',
q'[OPT_PARAM('optimizer_dynamic_sampling' 4)]',
q'[OPT_PARAM('_optimizer_cost_based_transformation' 'off')]',
q'[ALL_ROWS]',
q'[OUTLINE_LEAF(@"SEL$1")]',
q'[OUTLINE_LEAF(@"INS$1")]',
q'[FULL(@"INS$1" "PS_CA_SUM_RC_TAO4"@"INS$1")]',
q'[FULL(@"SEL$1" "A"@"SEL$1")]',
q'[INDEX(@"SEL$1" "B"@"SEL$1" ("PS_CA_SUM_TAO4"."PROCESS_INSTANCE" "PS_CA_SUM_TAO4"."BUSINESS_UNIT" "PS_CA_SUM_TAO4"."PROJECT_ID" "PS_CA_SUM_TAO4"."ACTIVITY_ID" "PS_CA_SUM_TAO4"."ANALYSIS_TYPE" "PS_CA_SUM_TAO4"."RESOURCE_TYPE" "PS_CA_SUM_TAO4"."RESOURCE_CATEGORY" "PS_CA_SUM_TAO4"."RESOURCE_SUB_CAT" "PS_CA_SUM_TAO4"."BI_DISTRIB_STATUS" "PS_CA_SUM_TAO4"."GL_DISTRIB_STATUS" "PS_CA_SUM_TAO4"."FOREIGN_CURRENCY" "PS_CA_SUM_TAO4"."CONTRACT_CURRENCY" "PS_CA_SUM_TAO4"."CONTRACT_NUM" ]',
q'[ "PS_CA_SUM_TAO4"."CONTRACT_LINE_NUM" "PS_CA_SUM_TAO4"."CA_FEE_STATUS"))]',
q'[FULL(@"SEL$1" "C"@"SEL$1")]',
q'[LEADING(@"SEL$1" "A"@"SEL$1" "B"@"SEL$1" "C"@"SEL$1")]',
q'[USE_NL(@"SEL$1" "B"@"SEL$1")]',
q'[NLJ_BATCHING(@"SEL$1" "B"@"SEL$1")]',
q'[USE_HASH(@"SEL$1" "C"@"SEL$1")]',
q'[SWAP_JOIN_INPUTS(@"SEL$1" "C"@"SEL$1")]',
q'[END_OUTLINE_DATA]');
:signature := DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE(sql_txt);
DBMS_SQLTUNE.IMPORT_SQL_PROFILE (
sql_text => sql_txt,
profile => h,
name => 'coe_5gtxdqbx0d0c3_3552771247',
description => 'coe 5gtxdqbx0d0c3 3552771247 '||:signature||'',
category => 'DEFAULT',
validate => TRUE,
replace => TRUE,
force_match => TRUE /* TRUE:FORCE (match even when different literals in SQL). FALSE:EXACT (similar to CURSOR_SHARING) */ );
END;
/
WHENEVER SQLERROR CONTINUE
SET ECHO OFF;
PRINT signature
PRO
PRO ... manual custom SQL Profile has been created
PRO
SET TERM ON ECHO OFF LIN 80 TRIMS OFF NUMF "";
SPO OFF;
PRO
PRO COE_XFR_SQL_PROFILE_5gtxdqbx0d0c3_3552771247 completed

But then we must manually change the table and index names from 4 to 5:

DECLARE
sql_txt CLOB;
h SYS.SQLPROF_ATTR;
BEGIN
sql_txt := q'[
INSERT INTO PS_CA_SUM_RC_TAO5 (PROCESS_INSTANCE, BUSINESS_UNIT, PROJECT_ID, ACTIVITY_ID, ANALYSIS_TYPE, RESOURCE_TYPE, RESOURCE_CATEGORY, RESOURCE_SUB_CAT, BI_DISTRIB_STATUS, GL_DISTRIB_STATUS, FOREIGN_CURRENCY, CONTRACT_CURRENCY, CONTRACT_NUM, CONTRACT_LINE_NUM, CA_FEE_STATUS, RESOURCE_QUANTITY, FOREIGN_AMOUNT_BSE, FOREIGN_AMOUNT_INC, FOREIGN_AMOUNT, CONTRACT_AMT_BSE, CONTRACT_AMT_INC, CONTRACT_AMT, MIN_TRANS_DT, MAX_TRANS_DT, CAND_MIN_TRANS_DT, CAND_MAX_TRANS_DT) SELECT B.PROCESS_INSTANCE, A.BUSINESS_UNIT, A.PROJECT_ID, A.ACTIVITY_ID, A.ANALYSIS_TYPE, A.RESOURCE_TYPE, A.RESOURCE_CATEGORY, A.RESOURCE_SUB_CAT, A.BI_DISTRIB_STATUS, A.GL_DISTRIB_STATUS,
A.FOREIGN_CURRENCY, A.CONTRACT_CURRENCY, A.CONTRACT_NUM, A.CONTRACT_LINE_NUM, A.CA_FEE_STATUS, (A.RESOURCE_QUANTITY+B.RESOURCE_QUANTITY), A.FOREIGN_AMOUNT, B.FOREIGN_AMOUNT, (A.FOREIGN_AMOUNT+B.FOREIGN_AMOUNT), A.CONTRACT_AMT, B.CONTRACT_AMT, (A.CONTRACT_AMT+B.CONTRACT_AMT), A.MIN_TRANS_DT, A.MAX_TRANS_DT, B.CAND_MIN_TRANS_DT, B.CAND_MAX_TRANS_DT FROM PS_CA_PR_SUMM A, PS_CA_SUM_TAO5 B , PS_CA_SUM_IN_USE C WHERE B.BUSINESS_UNIT = C.BUSINESS_UNIT AND B.PROJECT_ID = C.PROJECT_ID AND B.ACTIVITY_ID = C.ACTIVITY_ID AND B.ANALYSIS_TYPE = C.ANALYSIS_TYPE AND B.RESOURCE_TYPE = C.RESOURCE_TYPE AND B.RESOURCE_CATEGORY = C.RESOURCE_CATEGORY AND B.RESOURCE_SUB_CAT =
C.RESOURCE_SUB_CAT AND B.BI_DISTRIB_STATUS = C.BI_DISTRIB_STATUS AND B.GL_DISTRIB_STATUS = C.GL_DISTRIB_STATUS AND B.FOREIGN_CURRENCY = C.FOREIGN_CURRENCY AND B.CONTRACT_CURRENCY = C.CONTRACT_CURRENCY AND B.CONTRACT_NUM = C.CONTRACT_NUM AND B.CONTRACT_LINE_NUM = C.CONTRACT_LINE_NUM AND B.CA_FEE_STATUS = C.CA_FEE_STATUS AND A.BUSINESS_UNIT = B.BUSINESS_UNIT AND A.PROJECT_ID = B.PROJECT_ID AND A.ACTIVITY_ID = B.ACTIVITY_ID AND A.ANALYSIS_TYPE = B.ANALYSIS_TYPE AND A.RESOURCE_TYPE = B.RESOURCE_TYPE AND A.RESOURCE_CATEGORY = B.RESOURCE_CATEGORY AND A.RESOURCE_SUB_CAT = B.RESOURCE_SUB_CAT AND A.BI_DISTRIB_STATUS = B.BI_DISTRIB_STATUS AND A.GL_DISTRIB_STATUS =
B.GL_DISTRIB_STATUS AND A.FOREIGN_CURRENCY = B.FOREIGN_CURRENCY AND A.CONTRACT_CURRENCY = B.CONTRACT_CURRENCY AND A.CONTRACT_NUM = B.CONTRACT_NUM AND A.CONTRACT_LINE_NUM = B.CONTRACT_LINE_NUM AND A.CA_FEE_STATUS = B.CA_FEE_STATUS AND B.PROCESS_INSTANCE = 51377796 AND C.IN_USE_FLAG = 'Y'
]';
h := SYS.SQLPROF_ATTR(
q'[BEGIN_OUTLINE_DATA]',
q'[IGNORE_OPTIM_EMBEDDED_HINTS]',
q'[OPTIMIZER_FEATURES_ENABLE('11.2.0.3')]',
q'[DB_VERSION('11.2.0.3')]',
q'[OPT_PARAM('_unnest_subquery' 'false')]',
q'[OPT_PARAM('optimizer_dynamic_sampling' 4)]',
q'[OPT_PARAM('_optimizer_cost_based_transformation' 'off')]',
q'[ALL_ROWS]',
q'[OUTLINE_LEAF(@"SEL$1")]',
q'[OUTLINE_LEAF(@"INS$1")]',
q'[FULL(@"INS$1" "PS_CA_SUM_RC_TAO5"@"INS$1")]',
q'[FULL(@"SEL$1" "A"@"SEL$1")]',
q'[INDEX(@"SEL$1" "B"@"SEL$1" ("PS_CA_SUM_TAO5"."PROCESS_INSTANCE" "PS_CA_SUM_TAO5"."BUSINESS_UNIT" "PS_CA_SUM_TAO5"."PROJECT_ID" "PS_CA_SUM_TAO5"."ACTIVITY_ID" "PS_CA_SUM_TAO5"."ANALYSIS_TYPE" "PS_CA_SUM_TAO5"."RESOURCE_TYPE" "PS_CA_SUM_TAO5"."RESOURCE_CATEGORY" "PS_CA_SUM_TAO5"."RESOURCE_SUB_CAT" "PS_CA_SUM_TAO5"."BI_DISTRIB_STATUS" "PS_CA_SUM_TAO5"."GL_DISTRIB_STATUS" "PS_CA_SUM_TAO5"."FOREIGN_CURRENCY" "PS_CA_SUM_TAO5"."CONTRACT_CURRENCY" "PS_CA_SUM_TAO5"."CONTRACT_NUM" ]',
q'[ "PS_CA_SUM_TAO5"."CONTRACT_LINE_NUM" "PS_CA_SUM_TAO5"."CA_FEE_STATUS"))]',
q'[FULL(@"SEL$1" "C"@"SEL$1")]',
q'[LEADING(@"SEL$1" "A"@"SEL$1" "B"@"SEL$1" "C"@"SEL$1")]',
q'[USE_NL(@"SEL$1" "B"@"SEL$1")]',
q'[NLJ_BATCHING(@"SEL$1" "B"@"SEL$1")]',
q'[USE_HASH(@"SEL$1" "C"@"SEL$1")]',
q'[SWAP_JOIN_INPUTS(@"SEL$1" "C"@"SEL$1")]',
q'[END_OUTLINE_DATA]');
:signature := DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE(sql_txt);
DBMS_SQLTUNE.IMPORT_SQL_PROFILE (
sql_text => sql_txt,
profile => h,
name => 'coe_5gtxdqbx0d0c3_3552771247',
description => 'coe 5gtxdqbx0d0c3 3552771247 '||:signature||'',
category => 'DEFAULT',
validate => TRUE,
replace => TRUE,
force_match => TRUE /* TRUE:FORCE (match even when different literals in SQL). FALSE:EXACT (similar to CURSOR_SHARING) */ );
END;
/
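Hand-editing the generated script is error-prone, so a small helper could clone it with the instance suffix swapped. This is a hypothetical sketch, not part of the SQLT distribution; note also that you would likely give the new profile a distinct name, since reusing the same name with replace => TRUE would overwrite the instance 4 profile:

```python
# Hypothetical helper: clone a generated coe_xfr_sql_profile script for the
# next temporary-table instance by swapping the instance suffix everywhere.
from pathlib import Path

def clone_profile_script(src: Path, old_inst: str, new_inst: str) -> Path:
    """Write a copy of src with old_inst (e.g. "TAO4") replaced by new_inst,
    returning the path of the new script."""
    dst = src.with_name(src.stem + "_" + new_inst + src.suffix)
    dst.write_text(src.read_text().replace(old_inst, new_inst))
    return dst
```

You would still review the output by eye, particularly the profile name passed to DBMS_SQLTUNE.IMPORT_SQL_PROFILE, before running it.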

Or to take Arms against a Sea of statements,

The profile has the advantage that it can be applied quickly without a code change.  It is the perfect tool for the DBA with a production performance problem. However:
  • If applying it to a statement that references a PeopleSoft temporary record, then we need to apply the profile to all instances of the record (both the non-shared instances and the shared instance).
  • We were lucky that we referenced instance 5 of both temporary records. However, you could get a situation where a statement references different instances of different temporary records: perhaps instance 5 of one table and instance 6 of another.  In that case, you might also get instance 6 of the first table and instance 5 of the other.  A SQL profile could be needed for each permutation.
  • Bear in mind also that some areas of PeopleSoft use dynamically generated SQL.  So you get similar SQL statements which are sufficiently different for the profile not to match.  
  • Any changes to the expansion of Application Engine and PeopleCode MetaSQL will also prevent matching.
  • There is also the challenge of dealing with code changes as the system is upgraded, patched and customised.  A small code change, perhaps just an extra field in the select clause, can result in a performance regression because the profile stops matching. Of course, this challenge is not limited to PeopleSoft systems! 
However, profiles are effective if no PeopleSoft temporary records are present.  So you can generally use them in COBOL and SQR processes and in the online application (other than in online Application Engine processes).
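The permutation problem above can be made concrete with a toy count. The five-instance figure is an assumption for illustration; PeopleSoft's actual instance count per temporary record is configurable:

```python
# Sketch of the permutation problem: with two temporary records and five
# non-shared instances each, every (instance, instance) pairing is a distinct
# pair of table names, and hence potentially a distinct SQL profile.
from itertools import product

instances = range(1, 6)                        # non-shared instances 1..5
pairings = list(product(instances, repeat=2))  # (TAOx, TAOy) combinations
print(len(pairings))                           # 25 profiles in the worst case
```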

Aye, there's the rub,

I would use a profile (or a set of profiles) as a short-term fix that is easier to introduce into production, and then add hints to the source code, thereby fixing all instances of the code, not just the ones that have been profiled. Of course, that entails a code change and everything that goes with it.  One strong argument against making a code change is that you have to change the code again to remove the hint if it becomes unnecessary at some future time after a significant change, such as an Oracle upgrade.  However, on balance, I think it is better than the scenario where the profile stops working one day without warning.

The rest is silence.

Unless you add a comment.

A.FOREIGN_CURRENCY, A.CONTRACT_CURRENCY, A.CONTRACT_NUM, A.CONTRACT_LINE_NUM, A.CA_FEE_STATUS, (A.RESOURCE_QUANTITY+B.RESOURCE_QUANTITY), A.FOREIGN_AMOUNT, B.FOREIGN_AMOUNT, (A.FOREIGN_AMOUNT+B.FOREIGN_AMOUNT), A.CONTRACT_AMT, B.CONTRACT_AMT, (A.CONTRACT_AMT+B.CONTRACT_AMT), A.MIN_TRANS_DT, A.MAX_TRANS_DT, B.CAND_MIN_TRANS_DT, B.CAND_MAX_TRANS_DT FROM PS_CA_PR_SUMM A, PS_CA_SUM_TAO4 B , PS_CA_SUM_IN_USE C WHERE B.BUSINESS_UNIT = C.BUSINESS_UNIT AND B.PROJECT_ID = C.PROJECT_ID AND B.ACTIVITY_ID = C.ACTIVITY_ID AND B.ANALYSIS_TYPE = C.ANALYSIS_TYPE AND B.RESOURCE_TYPE = C.RESOURCE_TYPE AND B.RESOURCE_CATEGORY = C.RESOURCE_CATEGORY AND B.RESOURCE_SUB_CAT =
C.RESOURCE_SUB_CAT AND B.BI_DISTRIB_STATUS = C.BI_DISTRIB_STATUS AND B.GL_DISTRIB_STATUS = C.GL_DISTRIB_STATUS AND B.FOREIGN_CURRENCY = C.FOREIGN_CURRENCY AND B.CONTRACT_CURRENCY = C.CONTRACT_CURRENCY AND B.CONTRACT_NUM = C.CONTRACT_NUM AND B.CONTRACT_LINE_NUM = C.CONTRACT_LINE_NUM AND B.CA_FEE_STATUS = C.CA_FEE_STATUS AND A.BUSINESS_UNIT = B.BUSINESS_UNIT AND A.PROJECT_ID = B.PROJECT_ID AND A.ACTIVITY_ID = B.ACTIVITY_ID AND A.ANALYSIS_TYPE = B.ANALYSIS_TYPE AND A.RESOURCE_TYPE = B.RESOURCE_TYPE AND A.RESOURCE_CATEGORY = B.RESOURCE_CATEGORY AND A.RESOURCE_SUB_CAT = B.RESOURCE_SUB_CAT AND A.BI_DISTRIB_STATUS = B.BI_DISTRIB_STATUS AND A.GL_DISTRIB_STATUS =
B.GL_DISTRIB_STATUS AND A.FOREIGN_CURRENCY = B.FOREIGN_CURRENCY AND A.CONTRACT_CURRENCY = B.CONTRACT_CURRENCY AND A.CONTRACT_NUM = B.CONTRACT_NUM AND A.CONTRACT_LINE_NUM = B.CONTRACT_LINE_NUM AND A.CA_FEE_STATUS = B.CA_FEE_STATUS AND B.PROCESS_INSTANCE = 51377796 AND C.IN_USE_FLAG = 'Y'
]';
h := SYS.SQLPROF_ATTR(
q'[BEGIN_OUTLINE_DATA]',
q'[IGNORE_OPTIM_EMBEDDED_HINTS]',
q'[OPTIMIZER_FEATURES_ENABLE('11.2.0.3')]',
q'[DB_VERSION('11.2.0.3')]',
q'[OPT_PARAM('_unnest_subquery' 'false')]',
q'[OPT_PARAM('optimizer_dynamic_sampling' 4)]',
q'[OPT_PARAM('_optimizer_cost_based_transformation' 'off')]',
q'[ALL_ROWS]',
q'[OUTLINE_LEAF(@"SEL$1")]',
q'[OUTLINE_LEAF(@"INS$1")]',
q'[FULL(@"INS$1" "PS_CA_SUM_RC_TAO4"@"INS$1")]',
q'[FULL(@"SEL$1" "A"@"SEL$1")]',
q'[INDEX(@"SEL$1" "B"@"SEL$1" ("PS_CA_SUM_TAO4"."PROCESS_INSTANCE" "PS_CA_SUM_TAO4"."BUSINESS_UNIT" "PS_CA_SUM_TAO4"."PROJECT_ID" "PS_CA_SUM_TAO4"."ACTIVITY_ID" "PS_CA_SUM_TAO4"."ANALYSIS_TYPE" "PS_CA_SUM_TAO4"."RESOURCE_TYPE" "PS_CA_SUM_TAO4"."RESOURCE_CATEGORY" "PS_CA_SUM_TAO4"."RESOURCE_SUB_CAT" "PS_CA_SUM_TAO4"."BI_DISTRIB_STATUS" "PS_CA_SUM_TAO4"."GL_DISTRIB_STATUS" "PS_CA_SUM_TAO4"."FOREIGN_CURRENCY" "PS_CA_SUM_TAO4"."CONTRACT_CURRENCY" "PS_CA_SUM_TAO4"."CONTRACT_NUM" ]',
q'[ "PS_CA_SUM_TAO4"."CONTRACT_LINE_NUM" "PS_CA_SUM_TAO4"."CA_FEE_STATUS"))]',
q'[FULL(@"SEL$1" "C"@"SEL$1")]',
q'[LEADING(@"SEL$1" "A"@"SEL$1" "B"@"SEL$1" "C"@"SEL$1")]',
q'[USE_NL(@"SEL$1" "B"@"SEL$1")]',
q'[NLJ_BATCHING(@"SEL$1" "B"@"SEL$1")]',
q'[USE_HASH(@"SEL$1" "C"@"SEL$1")]',
q'[SWAP_JOIN_INPUTS(@"SEL$1" "C"@"SEL$1")]',
q'[END_OUTLINE_DATA]');
:signature := DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE(sql_txt);
DBMS_SQLTUNE.IMPORT_SQL_PROFILE (
sql_text => sql_txt,
profile => h,
name => 'coe_5gtxdqbx0d0c3_3552771247',
description => 'coe 5gtxdqbx0d0c3 3552771247 '||:signature||'',
category => 'DEFAULT',
validate => TRUE,
replace => TRUE,
force_match => TRUE /* TRUE:FORCE (match even when different literals in SQL). FALSE:EXACT (similar to CURSOR_SHARING) */ );
END;
/
WHENEVER SQLERROR CONTINUE
SET ECHO OFF;
PRINT signature
PRO
PRO ... manual custom SQL Profile has been created
PRO
SET TERM ON ECHO OFF LIN 80 TRIMS OFF NUMF "";
SPO OFF;
PRO
PRO COE_XFR_SQL_PROFILE_5gtxdqbx0d0c3_3552771247 completed

But then we must manually change the table and index names from 4 to 5, and also give the profile a new name: the two statements have different signatures and need separate profiles, so with the same name and replace => TRUE the second script would simply overwrite the instance-4 profile.

DECLARE
sql_txt CLOB;
h SYS.SQLPROF_ATTR;
BEGIN
sql_txt := q'[
INSERT INTO PS_CA_SUM_RC_TAO5 (PROCESS_INSTANCE, BUSINESS_UNIT, PROJECT_ID, ACTIVITY_ID, ANALYSIS_TYPE, RESOURCE_TYPE, RESOURCE_CATEGORY, RESOURCE_SUB_CAT, BI_DISTRIB_STATUS, GL_DISTRIB_STATUS, FOREIGN_CURRENCY, CONTRACT_CURRENCY, CONTRACT_NUM, CONTRACT_LINE_NUM, CA_FEE_STATUS, RESOURCE_QUANTITY, FOREIGN_AMOUNT_BSE, FOREIGN_AMOUNT_INC, FOREIGN_AMOUNT, CONTRACT_AMT_BSE, CONTRACT_AMT_INC, CONTRACT_AMT, MIN_TRANS_DT, MAX_TRANS_DT, CAND_MIN_TRANS_DT, CAND_MAX_TRANS_DT) SELECT B.PROCESS_INSTANCE, A.BUSINESS_UNIT, A.PROJECT_ID, A.ACTIVITY_ID, A.ANALYSIS_TYPE, A.RESOURCE_TYPE, A.RESOURCE_CATEGORY, A.RESOURCE_SUB_CAT, A.BI_DISTRIB_STATUS, A.GL_DISTRIB_STATUS,
A.FOREIGN_CURRENCY, A.CONTRACT_CURRENCY, A.CONTRACT_NUM, A.CONTRACT_LINE_NUM, A.CA_FEE_STATUS, (A.RESOURCE_QUANTITY+B.RESOURCE_QUANTITY), A.FOREIGN_AMOUNT, B.FOREIGN_AMOUNT, (A.FOREIGN_AMOUNT+B.FOREIGN_AMOUNT), A.CONTRACT_AMT, B.CONTRACT_AMT, (A.CONTRACT_AMT+B.CONTRACT_AMT), A.MIN_TRANS_DT, A.MAX_TRANS_DT, B.CAND_MIN_TRANS_DT, B.CAND_MAX_TRANS_DT FROM PS_CA_PR_SUMM A, PS_CA_SUM_TAO5 B , PS_CA_SUM_IN_USE C WHERE B.BUSINESS_UNIT = C.BUSINESS_UNIT AND B.PROJECT_ID = C.PROJECT_ID AND B.ACTIVITY_ID = C.ACTIVITY_ID AND B.ANALYSIS_TYPE = C.ANALYSIS_TYPE AND B.RESOURCE_TYPE = C.RESOURCE_TYPE AND B.RESOURCE_CATEGORY = C.RESOURCE_CATEGORY AND B.RESOURCE_SUB_CAT =
C.RESOURCE_SUB_CAT AND B.BI_DISTRIB_STATUS = C.BI_DISTRIB_STATUS AND B.GL_DISTRIB_STATUS = C.GL_DISTRIB_STATUS AND B.FOREIGN_CURRENCY = C.FOREIGN_CURRENCY AND B.CONTRACT_CURRENCY = C.CONTRACT_CURRENCY AND B.CONTRACT_NUM = C.CONTRACT_NUM AND B.CONTRACT_LINE_NUM = C.CONTRACT_LINE_NUM AND B.CA_FEE_STATUS = C.CA_FEE_STATUS AND A.BUSINESS_UNIT = B.BUSINESS_UNIT AND A.PROJECT_ID = B.PROJECT_ID AND A.ACTIVITY_ID = B.ACTIVITY_ID AND A.ANALYSIS_TYPE = B.ANALYSIS_TYPE AND A.RESOURCE_TYPE = B.RESOURCE_TYPE AND A.RESOURCE_CATEGORY = B.RESOURCE_CATEGORY AND A.RESOURCE_SUB_CAT = B.RESOURCE_SUB_CAT AND A.BI_DISTRIB_STATUS = B.BI_DISTRIB_STATUS AND A.GL_DISTRIB_STATUS =
B.GL_DISTRIB_STATUS AND A.FOREIGN_CURRENCY = B.FOREIGN_CURRENCY AND A.CONTRACT_CURRENCY = B.CONTRACT_CURRENCY AND A.CONTRACT_NUM = B.CONTRACT_NUM AND A.CONTRACT_LINE_NUM = B.CONTRACT_LINE_NUM AND A.CA_FEE_STATUS = B.CA_FEE_STATUS AND B.PROCESS_INSTANCE = 51377796 AND C.IN_USE_FLAG = 'Y'
]';
h := SYS.SQLPROF_ATTR(
q'[BEGIN_OUTLINE_DATA]',
q'[IGNORE_OPTIM_EMBEDDED_HINTS]',
q'[OPTIMIZER_FEATURES_ENABLE('11.2.0.3')]',
q'[DB_VERSION('11.2.0.3')]',
q'[OPT_PARAM('_unnest_subquery' 'false')]',
q'[OPT_PARAM('optimizer_dynamic_sampling' 4)]',
q'[OPT_PARAM('_optimizer_cost_based_transformation' 'off')]',
q'[ALL_ROWS]',
q'[OUTLINE_LEAF(@"SEL$1")]',
q'[OUTLINE_LEAF(@"INS$1")]',
q'[FULL(@"INS$1" "PS_CA_SUM_RC_TAO5"@"INS$1")]',
q'[FULL(@"SEL$1" "A"@"SEL$1")]',
q'[INDEX(@"SEL$1" "B"@"SEL$1" ("PS_CA_SUM_TAO5"."PROCESS_INSTANCE" "PS_CA_SUM_TAO5"."BUSINESS_UNIT" "PS_CA_SUM_TAO5"."PROJECT_ID" "PS_CA_SUM_TAO5"."ACTIVITY_ID" "PS_CA_SUM_TAO5"."ANALYSIS_TYPE" "PS_CA_SUM_TAO5"."RESOURCE_TYPE" "PS_CA_SUM_TAO5"."RESOURCE_CATEGORY" "PS_CA_SUM_TAO5"."RESOURCE_SUB_CAT" "PS_CA_SUM_TAO5"."BI_DISTRIB_STATUS" "PS_CA_SUM_TAO5"."GL_DISTRIB_STATUS" "PS_CA_SUM_TAO5"."FOREIGN_CURRENCY" "PS_CA_SUM_TAO5"."CONTRACT_CURRENCY" "PS_CA_SUM_TAO5"."CONTRACT_NUM" ]',
q'[ "PS_CA_SUM_TAO5"."CONTRACT_LINE_NUM" "PS_CA_SUM_TAO5"."CA_FEE_STATUS"))]',
q'[FULL(@"SEL$1" "C"@"SEL$1")]',
q'[LEADING(@"SEL$1" "A"@"SEL$1" "B"@"SEL$1" "C"@"SEL$1")]',
q'[USE_NL(@"SEL$1" "B"@"SEL$1")]',
q'[NLJ_BATCHING(@"SEL$1" "B"@"SEL$1")]',
q'[USE_HASH(@"SEL$1" "C"@"SEL$1")]',
q'[SWAP_JOIN_INPUTS(@"SEL$1" "C"@"SEL$1")]',
q'[END_OUTLINE_DATA]');
:signature := DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE(sql_txt);
DBMS_SQLTUNE.IMPORT_SQL_PROFILE (
sql_text => sql_txt,
profile => h,
name => 'coe_5gtxdqbx0d0c3_3552771247_5',
description => 'coe 5gtxdqbx0d0c3 3552771247 '||:signature||'',
category => 'DEFAULT',
validate => TRUE,
replace => TRUE,
force_match => TRUE /* TRUE:FORCE (match even when different literals in SQL). FALSE:EXACT (similar to CURSOR_SHARING) */ );
END;
/

Or to take Arms against a Sea of statements,

The profile has the advantage that it can be applied quickly without a code change.  It is the perfect tool for the DBA with a production performance problem. However,
  • If applying to a statement that references a PeopleSoft temporary record, then we need to apply the profile to all instances of the record (both the non-shared instances and the shared instance).
  • We were lucky that we referenced instance 5 of two temporary records. However, you could get a situation where a statement references different instances of different temporary records.  So perhaps instance 5 of one table and instance 6 of another.  In which case, you might also get instance 6 of the first table and instance 5 of the other.  A SQL profile could be needed for each permutation.
  • Bear in mind also that some areas of PeopleSoft use dynamically generated SQL.  So you get similar SQL statements which are sufficiently different for the profile not to match.  
  • Any changes to the expansion of Application Engine and PeopleCode MetaSQL will also prevent matching.
  • There is also the challenge of dealing with code changes as the system is upgraded, patched and customised.  A small code change, perhaps just an extra field in the select clause, can result in a performance regression because the profile stops matching. Of course, this challenge is not limited to PeopleSoft systems! 
However, profiles are effective if there are no PeopleSoft temporary records present.  So you can generally use them in COBOL and SQR processes and the on-line application (other than in on-line Application Engine processes).
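One way to keep an eye on whether profiles are actually being picked up (a sketch; V$SQL exposes the name of a used profile in its SQL_PROFILE column) is:

```sql
-- Which cached statements are currently running with one of the COE profiles?
SELECT sql_id, plan_hash_value, sql_profile, executions
FROM   v$sql
WHERE  sql_profile LIKE 'coe_%';
```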

Aye, there's the rub,

I would use a profile (or a set of profiles) as a short-term temporary fix that is easier to introduce into production, and then add hints to the source code, thereby fixing all instances of the code, not just the ones that have been profiled. Of course, that does entail a code change, and everything that goes with that.  One strong argument against making a code change is that you have to change the code again to remove the hint if it becomes unnecessary at some time in the future after a significant change, such as an Oracle upgrade.  However, on balance, I think it is better than the scenario where the profile stops working one day without warning.

The rest is silence.

Unless you add a comment.

The Expendables 3

It’s a holiday weekend and I just got back from watching The Expendables 3.

OK. You don’t go to a film like this expecting an Oscar winning script, but come on! During the end credits, when it said “Story by Sylvester Stallone”, I thought of a scene from Boogie Nights where they discuss the plot of an adult movie about 30 seconds before they start to shoot, which goes something like this…

“Enter Reed to audition for Amber. They go at it.

Dirk enters. Meets with Becky. They go at it.”

In the case of The Expendables 3, the same sort of thing must have happened, but something like this…

“Sylvester enters the room. He shoots sh*t up.

Terry Crews enters. Sees the bad guys. Shoots sh*t up.”

It’s also worth remembering that physics works completely differently in The Expendables 3 world… :)

That aside, it’s a good laugh. I’m not really sure how they can sustain the series, because really this is just a rehash of the previous two. Actually, you could probably just rearrange the fight scenes from these three films to create “new” films every year for a while. That way you wouldn’t even have to waste time filming…

Antonio Banderas, who was described by a lady behind me as, “… so damn sexy!”, played the only character that had any sort of back-story, but I always think of that cat from Shrek when I hear his voice… :)

Cheers

Tim…


The Expendables 3 was first posted on August 25, 2014 at 5:58 pm.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

Don’t go directly to Maximum Protection!

With a Data Guard Configuration in Maximum Performance protection mode, don’t go to Maximum Protection directly, because that leads to a restart of the primary database:

 Attention!

DGMGRL> show configuration;

Configuration - myconf

  Protection Mode: MaxPerformance
  Databases:
  prima  - Primary database
    physt  - Physical standby database
      physt2 - Physical standby database (receiving current redo)

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> edit configuration set protection mode as maxprotection;
Operation requires shutdown of instance "prima" on database "prima"
Shutting down instance "prima"...
Database closed.
Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "prima" on database "prima"
Starting instance "prima"...
ORACLE instance started.
Database mounted.
Database opened.

Instead, go to Maximum Availability first and then to Maximum Protection:

DGMGRL> edit configuration set protection mode as maxperformance;
Succeeded.
DGMGRL> edit configuration set protection mode as maxavailability;
Succeeded.
DGMGRL> edit configuration set protection mode as maxprotection;
Succeeded.

The demo was done with 12c, involving a cascading standby database, but the behavior is the same in 11g already. The odd thing about it is that DGMGRL will restart the primary without warning. I had wanted to share that with the Oracle community for years but somehow never got around to it.
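A related prerequisite, not shown in the demo above: both higher protection modes require synchronous redo transport, so if the standby receiving redo is still configured for ASYNC you would first raise its transport mode (LogXptMode is the documented Broker property name; the database name below matches the configuration shown earlier):

```sql
DGMGRL> edit database physt set property LogXptMode='SYNC';
DGMGRL> edit configuration set protection mode as maxavailability;
DGMGRL> edit configuration set protection mode as maxprotection;
```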

Tagged: Data Guard, High Availability

ALS Ice Bucket Challenge : Keeping It In The Family

If you watched my (No) Ice Bucket Challenge yesterday, you will know I nominated my Oracle family.

The first family member to step up to the plate was the daughter, Heli “Hell-Squirrel” Helskyaho. She shared the video on Facebook, which you can see here if you are a friend. We have to be careful with the parental controls to keep her safe!

Next up was the wife, Debra Lilley, who used the opportunity to do an Oscar acceptance speech.

That left only my dad, Graham Wood. This really is a case of saving the best for last. Thanks Dad for proving you really are the daddy! When I grow up I want to be just like you…

Thanks to the whole family for getting involved. It’s enough to bring a tear to your eye! :)

Cheers

Tim…

PS. Please don’t forget to donate!


ALS Ice Bucket Challenge : Keeping It In The Family was first posted on August 25, 2014 at 10:13 am.

In-memory Aggregation

The title of this piece is the name given to a new feature in 12.1.0.2, and since I’ve recently blogged about a limitation of the in-memory option I thought I’d pick this feature as the next obvious thing to blog about. This is a bit of a non sequitur, though, as the feature seems to have nothing whatsoever to do with the in-memory option; instead it’s a cunning mechanism combining aspects of the star-transformation (but without the bitmap indexes), Bloom filters, and “group-by” placement to minimise the cost of aggregation over high-volume joins.

Here’s a small data set I’ll use to demonstrate the feature:

create table towns
as
select
        rownum                                          id,
        trunc(dbms_random.value(1,51))                  id_state,
        rpad(dbms_random.string('U',3),12)              name,
        cast (rpad('x',trunc(dbms_random.value(50,60)),'x') as varchar2(60))    padding
from
        all_objects
where
        rownum <= 2000
;

alter table towns add constraint to_pk primary key(id);
create index to_i1 on towns(name);

create table people(
        id_town_work    number(6,0)     not null
                constraint pe_fk_wo references towns,
        id_town_home    number(6,0)     not null
                constraint pe_fk_ho references towns,
        dummy1          varchar2(10),
        dummy2          varchar2(10),
        padding         varchar2(110)
);

insert /*+ append */  into people
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        trunc(dbms_random.value(1,2001)),
        trunc(dbms_random.value(1,2001)),
        lpad(rownum,10),
        lpad(rownum,10),
        cast (rpad('x',trunc(dbms_random.value(50,60)),'x') as varchar2(60))    padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e5
;

commit;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'Towns',
                method_opt       => 'for all columns size 1'
        );
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'People',
                method_opt       => 'for all columns size 1'
        );
end;
/

I have a “large” table of people, and people can live in one town and work in another. Towns are in states and I’m interested in a report about people who live in one specific state but work in another (e.g. New Hampshire vs. Massachusetts). There are a couple of “padding” columns to represent the data associated with each town and person that I might want in a report. To keep things simple I haven’t extended the query out to select the name of the state. Here’s the query I might use to get the report I want:

select
        wt.padding,
        ht.padding,
        max(pe.padding)
from
        towns   wt,
        towns   ht,
        people  pe
where
        wt.id_state     = 1
and     pe.id_town_work = wt.id
and     ht.id_state     = 2
and     pe.id_town_home = ht.id
group by
        wt.padding,
        ht.padding
;

You might expect something like the following as the execution plan:

-------------------------------------------------------------------------------
| Id  | Operation            | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |        |    40 |  7600 |   179   (6)| 00:00:01 |
|   1 |  HASH GROUP BY       |        |    40 |  7600 |   179   (6)| 00:00:01 |
|*  2 |   HASH JOIN          |        |    40 |  7600 |   178   (6)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL | TOWNS  |    40 |  2520 |     5   (0)| 00:00:01 |
|*  4 |    HASH JOIN         |        |  2000 |   248K|   173   (6)| 00:00:01 |
|*  5 |     TABLE ACCESS FULL| TOWNS  |    40 |  2520 |     5   (0)| 00:00:01 |
|   6 |     TABLE ACCESS FULL| PEOPLE |   100K|  6250K|   165   (4)| 00:00:01 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("PE"."ID_TOWN_HOME"="HT"."ID")
   3 - filter("HT"."ID_STATE"=2)
   4 - access("PE"."ID_TOWN_WORK"="WT"."ID")
   5 - filter("WT"."ID_STATE"=1)

The order of operation (row source generation) is: 3,5,6,4,2,1 – we build a hash table from the towns in state 2; build a hash table from the towns in state 1; scan the people table and probe the state 1 hash table, any row that survives is used to probe the state 2 hash table, and the rows that survive the second probe are aggregated to produce the answer.

When you do this type of thing with very large data sets one of the potential performance threats comes from the volume of data you have to aggregate. As we’ve joined the three tables the row length grows significantly before we finally aggregate (admittedly my data set is small, and the number of rows we’re going to aggregate also appears to be very small according to the predictions). There’s also (in the early stages at least) the potential for passing a very large number of rows from the fact table through the first (and possibly subsequent) hash join, doing a lot of work to eliminate the rows you don’t need.

In 12c the optimizer can choose to minimise both these threat points using “vector transformation”. (The name may also reflect the possibility that the code path will take advantage of vector processing (SIMD) operations if they’re available in the CPU.) Here’s the execution path I got when I added the /*+ vector_transform(@sel$1) */ hint to my query – it’s not sensible for this tiny data set, of course, but the hint is a way of learning what Oracle can do:

-----------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name                      | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |                           |    71 | 15975 |   184   (6)| 00:00:01 |

|   1 |  TEMP TABLE TRANSFORMATION    |                           |       |       |            |          |

|   2 |   LOAD AS SELECT              | SYS_TEMP_0FD9D6661_31399B |       |       |            |          |
|   3 |    VECTOR GROUP BY            |                           |    10 |   670 |     6  (17)| 00:00:01 |
|   4 |     KEY VECTOR CREATE BUFFERED| :KV0000                   |    40 |  2680 |     6  (17)| 00:00:01 |
|*  5 |      TABLE ACCESS FULL        | TOWNS                     |    40 |  2520 |     5   (0)| 00:00:01 |

|   6 |   LOAD AS SELECT              | SYS_TEMP_0FD9D6662_31399B |       |       |            |          |
|   7 |    VECTOR GROUP BY            |                           |    10 |   670 |     6  (17)| 00:00:01 |
|   8 |     KEY VECTOR CREATE BUFFERED| :KV0001                   |    40 |  2680 |     6  (17)| 00:00:01 |
|*  9 |      TABLE ACCESS FULL        | TOWNS                     |    40 |  2520 |     5   (0)| 00:00:01 |

|  10 |   HASH GROUP BY               |                           |    71 | 15975 |   172   (6)| 00:00:01 |
|* 11 |    HASH JOIN                  |                           |    71 | 15975 |   171   (5)| 00:00:01 |
|  12 |     TABLE ACCESS FULL         | SYS_TEMP_0FD9D6662_31399B |    10 |   670 |     2   (0)| 00:00:01 |
|* 13 |     HASH JOIN                 |                           |    71 | 11218 |   169   (5)| 00:00:01 |
|  14 |      TABLE ACCESS FULL        | SYS_TEMP_0FD9D6661_31399B |    10 |   670 |     2   (0)| 00:00:01 |
|  15 |      VIEW                     | VW_VT_C444E4CB            |    71 |  6461 |   167   (5)| 00:00:01 |
|  16 |       HASH GROUP BY           |                           |    71 |  5112 |   167   (5)| 00:00:01 |
|  17 |        KEY VECTOR USE         | :KV0000                   |    71 |  5112 |   167   (5)| 00:00:01 |
|  18 |         KEY VECTOR USE        | :KV0001                   |  2000 |   132K|   167   (5)| 00:00:01 |
|* 19 |          TABLE ACCESS FULL    | PEOPLE                    |   100K|  6250K|   165   (4)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   5 - filter("WT"."ID_STATE"=1)
   9 - filter("HT"."ID_STATE"=2)
  11 - access("ITEM_10"=INTERNAL_FUNCTION("C0") AND "ITEM_11"="C2")
  13 - access("ITEM_8"=INTERNAL_FUNCTION("C0") AND "ITEM_9"="C2")
  19 - filter(SYS_OP_KEY_VECTOR_FILTER("PE"."ID_TOWN_HOME",:KV0001) AND
              SYS_OP_KEY_VECTOR_FILTER("PE"."ID_TOWN_WORK",:KV0000))

There are three critical components to this plan: first, we create a couple of “Key Vectors” from the towns table, then we use those key vectors while scanning the people table and aggregate a minimal data set, finally we join back to the data associated with the key vectors. Reprising my introductory paragraph: the creation and use of the key vectors is similar to the Bloom filter approach; the final join-back is similar to the strategy used in Star Transformations (especially the ones where temp tables appear), and the key vector allows the high-volume fact data to be aggregated as much as possible before adding extra row-length from the dimensions.

In outline Oracle does the following:

  • scan the towns table to extract the id, and padding columns for id_state = 1 / work town – this produced 50 rows with my data set
  • manipulate the result to extract the distinct values of padding, and give each value a unique numeric identifier – this is the information that goes into the temp table (with one extra column) – this produced 10 rows
  • manipulate the result again to produce an in-memory array of (town.id, temp_table.identifier) – this is the key vector, containing 50 elements.

The second temp table and key vector (for id_state = 2 / home town) will be created in the same way.

As the fact table is scanned Oracle can apply the key vectors very efficiently (we hope) to pick out the people rows that would be involved in the final aggregate and associate with each relevant row the two padding identifiers that belong to that row (this step is a bit like doing 2 hash joins – but presumably much more efficient; Bloom filtering does something very similar). After selecting the minimum number of rows we can aggregate them on the  two padding identifiers (an example of the “aggregate early”/”place group by” principle – aggregate before joining); finally we join back to the two temporary tables to translate the short padding identifiers into the long padding values (just as we do in star transformations with temporary table transformation).
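As a purely conceptual illustration (this is not the SQL Oracle runs internally), the content of the first temp table corresponds to something like the following, with the key vector then being the in-memory mapping from towns.id to the short identifier:

```sql
-- Hypothetical sketch of the first temp table's content: one short numeric
-- identifier per distinct padding value for the work-town state.
SELECT DISTINCT
       dense_rank() OVER (ORDER BY padding) AS pad_id,
       padding
FROM   towns
WHERE  id_state = 1;
```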

Strangely we aggregate again after the join-back. I don’t think it’s necessary in this case because I’m fairly sure that the join back is on a unique set of columns – but perhaps this is a generic strategy allowing for variations in the mechanism, including such things as cases where the vector transform is only applied to a subset of the dimension tables.

Technically you could almost emulate this strategy in any version of Oracle (and I probably have at various times over the last few years) with the major limitation that the “KEY VECTOR USE” operations at lines 17 and 18 would have to be replaced with hash joins; no doubt, though, the major CPU saving of this approach is the difference between consecutive hash joins and what appears to be (from the execution stats) concurrent vector filtering. At some point – if a client needs the extra performance edge before they get to 12c – I’ll have to see if I can engineer an example in 11g that emulates the whole plan but uses Bloom filtering to approximate the key vector filtering.

 

AWR Warehouse

I just noticed last week that there is a new patch for Enterprise Manager which enables the AWR Warehouse feature. Note ID 1901202.1 describes the bundle patch for OEM 12c Release 4.

Today I had a chance to install it in my lab and now I can start testing the new OEM feature.
There is some documentation here and on Kellyn's blog.


It is not configured by default, so the first task is to configure the AWR Warehouse repository. In my case I will use the same database that is used for the OEM repository.

The retention period and the staging area for snapshot files have to be configured as well.

 After these two steps the AWR Warehouse configuration job is started, and when it finishes the AWR Warehouse will be ready to use.
 

When the repository is ready, we can start adding databases that will be sources of AWR data.
 

To add a new database to the warehouse, it has to be already configured in OEM and have default credentials set.

If all conditions are met, the database is added successfully.

Now it's time to play with this new feature and see what we can achieve using it.
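Once source databases are uploading snapshots, the consolidated data sits in the warehouse repository in the standard AWR views, keyed by DBID. A quick sanity check (a sketch; assumes you are connected to the warehouse repository database) might be:

```sql
-- One row per source database: snapshot count and the covered time range.
SELECT dbid,
       COUNT(*)                 AS snapshots,
       MIN(begin_interval_time) AS earliest,
       MAX(end_interval_time)   AS latest
FROM   dba_hist_snapshot
GROUP  BY dbid;
```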

regards,
Marcin