Oakies Blog Aggregator

DB Optimizer - "the wow factor of the InSync conference"

Just recovering from a whirlwind two weeks in APAC (Asia-Pacific).
Japan:
First off was 48 hours in Tokyo, where I was warmly welcomed by Oaktable member Ichiro Obata. Check out Ichiro's blog.
Ichiro not only welcomed me warmly, treated me to sushi and Sapporo, and discussed Exadata, data warehousing, and performance tuning, but he also took me to the crazy neighborhood of Akihabara: sort of a mix of computer stores, nerds, manga, goth, and geisha.
China
Then I traveled to China, where I gave another presentation to a packed house and visited numerous clients, as well as having Peking Duck for the first time (not to mention pigs' feet and jellyfish, two new ones on me).
Australia
Finished up with an awesome time in Melbourne at the Insync conference, where I got to listen to great presentations by Richard Foote and Connor McDonald, among others, as well as giving my own presentation on simplifying database performance monitoring and tuning with graphics. After my presentation, one of the participants called my section on Visual SQL Tuning (VST) "the wow factor of the conference!"

My first PLOG

[image: My first plog]

SQL Joins Graphically

I've noticed this printout in a number of cubes at work, and I think that attests to its attractiveness and/or usefulness. I don't use most of these join methods, but it is a nice, complete-feeling list with graphic representations, and we might expand our join graphics in DB Optimizer to include similar graphics.
See:

http://www.codeproject.com/KB/database/Visual_SQL_Joins.aspx?msg=2919602
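For illustration, here is roughly how two of the join shapes from that chart look in SQL. This is just a sketch against two hypothetical tables, a and b, each with an id key column:

    -- Inner join: only rows with a matching id in both tables
    select a.id, a.val, b.val
    from   a
    inner join b on b.id = a.id;

    -- Left outer join with exclusion: rows of a that have no match in b
    select a.id, a.val
    from   a
    left outer join b on b.id = a.id
    where  b.id is null;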

Oracle Database 11g Release 2 Patchset 1 (11.2.0.2) Is Now Available, But This Is Not Just An Announcement Blog Entry.

BLOG UPDATE: I should have peeked at my blog aggregator before posting this. I just found that my friend Greg Rahn posted about the same content on his blog earlier today. Hmmm… plagiarism! Oracle Database 11g Release 2 Patchset 1 (11.2.0.2 Part Number E15732-03) is available as of today for x86 and x86_64 Linux as per [...]

Simulated ASH 2.1 (SASH 2.1)

Thanks to Marcin Przepiorowski for improving the SASH code!

V2.1 - Installs on Windows or Unix on Standard Edition
I would like to present a new version of the S-ASH package. It is based on Kyle's version 2 and has been modified by me to meet a customer's requirement to minimize changes on the production database; some bugs have been fixed and new functionality added. In addition, a new project called OraSASH has been created on SourceForge to keep track of all changes and bug fixes.

In version 1 and version 2 the collection job was implemented on the production database, and the PL/SQL package had to be deployed there too. In the new version the collection job runs on the repository database and accesses the target database via a database link, so the changes required on the production database are limited to one user and one view.
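Conceptually, the repository-side collection job boils down to something like the sketch below: sample the target's active sessions over the database link and store the rows in a local repository table. This is only an illustration of the idea, not the actual OraSASH code; the sash_samples table and its columns are made up here, though the link name sashprod matches the one used in the installation steps.

    -- Sample active user sessions on the target via the DB link
    insert into sash_samples (sample_time, sid, serial#, sql_id, event)
    select sysdate, s.sid, s.serial#, s.sql_id, s.event
    from   v$session@sashprod s
    where  s.status = 'ACTIVE'
    and    s.type = 'USER';
    commit;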

————————————————————————————

Installation

Please find a list of installation steps:

  1. create the SASH user - run repo_0_user.sql - as user SYS on the repository database
  2. add a SYS procedure to kill jobs - run repo_1_sys_procedure.sql - as user SYS on the repository database
  3. create the repository - run repo_1_tables.sql - as user SASH on the repository database
  4. create the repository management package - run repo_2_pkg.sql - as user SASH on the repository database
  5. create the user and view on the target - run targ_1_userview_9i.sql for 9i or targ_1_userview_10g.sql for 10g and above - as user SYS on the database being monitored
  6. create the database link - run repo_3_dblink.sql - as user SASH on the repository database
  7. create the SASH packages - run repo_4_packages.sql - as user SASH on the repository database
  8. fill the setup tables - log in to SQL*Plus as the SASH user and execute the following statements

    exec sash_pkg.set_dbid;
    exec sash_pkg.get_event_names;
    exec sash_pkg.get_users;
    exec sash_pkg.get_params;
    exec sash_pkg.get_data_files;
    commit;


  9. If the target database is version 9i, fill the wait classes table - run repo_5_waitgroups.sql - as user SASH on the repository database
  10. If the target database is version 10g or above, run the following statement as the SASH user on the repository database

    update sash_event_names sen set sen.wait_class = ( select wg.wait_class from gv$event_name@sashprod wg where wg.name=sen.name);
    commit;


  11. set up the jobs - log in to SQL*Plus as the SASH user and execute the following statement
    exec sash_repo.setup_jobs;

—————————————–
Maintenance
All commands have to be executed as the SASH user connected to the repository database.

Starting the collection job
exec sash_repo.start_collecting_jobs;

Stopping the collection job
exec sash_repo.stop_collecting_jobs;

Data retention
The collection job uses a "poor man's" partitioning trick: the number of tables has been increased from 7 in version 2 to 31 in version 2.1, which allows it to keep performance data for one month.
The whole solution has been tested on a database with 300 connected sessions, where it gathered about 100 MB of performance data daily.

Retention setup:
exec sash_repo.set_retention('x');

where x is one of:
- d - last week
- w - last month
- h - last 24 h
- m - last 30 minutes
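For example, to keep the last month of performance data:

    exec sash_repo.set_retention('w');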

If you have any problems or questions, feel free to add comments here or contact me directly.
Marcin Przepiorowski

11.2.0.2 Patch Set For Oracle Database Server

Update: Mon Sep 13 16:02:36 PDT 2010 The 11.2.0.2 patch is now available for download (for real). If you downloaded it on Friday, there is no need to re-download it. It was mistakenly made public before it could be validated (test MD5 sums, etc.), but has since been validated. Enjoy! Just a quick post that the 11.2.0.2 patch set for Oracle Database Server has been released for x86 and x86-64 platforms. The patch set number is 10098816 and it is available for download from My Oracle Support. Be sure to give note 1189783.1, Important Changes to Oracle Database Patch Sets Starting With 11.2.0.2, a read, as several things have changed in the patching process.

Oracle 11.2.0.2 – Small but important changes

From this version onward the default storage model for XMLType is no longer CLOB but binary XML. This has the advantage that, because the format of the XML document or instance is known, Oracle can optimize XML handling. Oracle can also, where possible, transport data in the binary XML format, which is smaller in size, into and out of the database when the binary XML APIs are used. Another advantage when handling content-driven queries is the possibility of query rewrites and of optimized in-memory objects, the so-called XOB objects, which are more efficient and smaller in size.

See the Oracle XML DB Developer's Guide 11.2.0.x for more info.
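As a quick illustration (the table names here are made up), the storage model can also be chosen explicitly, so you can keep the old behaviour or opt into binary XML regardless of the default:

    -- Explicitly request the pre-11.2.0.2 default, CLOB storage
    create table xml_docs_clob of xmltype
      xmltype store as clob;

    -- Explicitly request binary XML storage (the new default)
    create table xml_docs_binary of xmltype
      xmltype store as binary xml;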

…and this concludes my first post via an Android phone…

;)

The patching nightmares are over (11.2.0.2.0)…

One of the things that continually annoys me is that to get the latest version of the database you have to install the base release and then instantly patch it to the latest patch set. Not any more.

“Starting with the 11.2.0.2 patch set, Oracle Database patch sets are full installations of the Oracle Database software. This means that you do not need to install Oracle Database 11g Release 2 (11.2.0.1) before installing Oracle Database 11g Release 2 (11.2.0.2).”

You don’t understand how happy this makes me. In addition, the installer also downloads and applies mandatory patches, so even when you’re mid-way through the lifecycle of a patch set, your new installations are still up to date. :)

There is a bunch of new functionality already listed in the new features manual.

Happy downloading and upgrading.

Notes.

  • Read the patch notes before you start downloading. You probably don’t need all the zip files (4.8G). :)
  • Out-of-place patching (new ORACLE_HOME) is the recommended method now, so there is no real difference between patch sets and upgrades. Grid infrastructure *must* be patched out-of-place.
  • I guess OFA directories should now include the first 4 digits of the version (11.2.0 -> 11.2.0.2) as those directories will only ever contain that patch set.

Cheers

Tim…

Oracle Data Mining Primer (2 of 3): Viewing Models

Now that we have a mining model in the database, we can interrogate the database to understand what has been discovered during the training process.
Three catalog views contain the high-level information for mining models: ALL_MINING_MODELS, ALL_MINING_MODEL_ATTRIBUTES, and ALL_MINING_MODEL_SETTINGS.
By querying ALL_MINING_MODELS, we retrieve the list of all mining models available to the current user.

select model_name, mining_function, algorithm
from all_mining_models;
PREDICT_INCOME    CLASSIFICATION    SUPPORT_VECTOR_MACHINES

By querying ALL_MINING_MODEL_ATTRIBUTES, we retrieve the list of column names that were relevant when training the model, as well as other per-attribute information.

select attribute_name, attribute_type, target
from all_mining_model_attributes
where model_name = 'PREDICT_INCOME';
AGE               NUMERICAL      NO
WORKCLASS         CATEGORICAL    NO
FNLWGT            NUMERICAL      NO
EDUCATION         CATEGORICAL    NO
EDUCATION_NUM     NUMERICAL      NO
MARITAL_STATUS    CATEGORICAL    NO
OCCUPATION        CATEGORICAL    NO
RELATIONSHIP      CATEGORICAL    NO
RACE              CATEGORICAL    NO
SEX               CATEGORICAL    NO
CAPITAL_GAIN      NUMERICAL      NO
CAPITAL_LOSS      NUMERICAL      NO
HOURS_PER_WEEK    NUMERICAL      NO
NATIVE_COUNTRY    CATEGORICAL    NO
INCOME            CATEGORICAL    YES
By querying ALL_MINING_MODEL_SETTINGS, we retrieve the list of model settings used during training.  Some of these settings may have been specified by the user; others were automatically computed while training.  For the support vector machine algorithm, Oracle supports two kernel functions: Gaussian and linear.  In this instance, based on the shape and content of the data, Oracle chose to use the linear kernel, as evidenced by the setting value SVMS_LINEAR.
select setting_name, setting_value
from all_mining_model_settings
where model_name = 'PREDICT_INCOME';
ALGO_NAME               ALGO_SUPPORT_VECTOR_MACHINES
SVMS_ACTIVE_LEARNING    SVMS_AL_ENABLE
PREP_AUTO               ON
SVMS_COMPLEXITY_FACTOR  0.495891
SVMS_KERNEL_FUNCTION    SVMS_LINEAR
SVMS_CONV_TOLERANCE     .001

The above three catalog views provide high-level information that is relevant to most Oracle Data Mining algorithms.  Additional, deeper insight is often available by querying the details of the model.  Each algorithm tends to use a different structure to represent what was learned while training, and therefore the structure of this returned information will vary from one algorithm to the next (possibly from one flavor of a given algorithm to the next).  For SVM with linear kernel, the model will contain a set of coefficients, as shown below for the PREDICT_INCOME model.

select class, attribute_name, attribute_value, coefficient 
from table(dbms_data_mining.get_model_details_svm('PREDICT_INCOME')) a, table(a.attribute_set) b
order by abs(coefficient) desc;
 >50K    CAPITAL_GAIN      (null)          8.1179930161383904
 >50K    (null)            (null)         -4.1469740381933802
 >50K    EDUCATION_NUM     (null)          1.85498650687918
 >50K    HOURS_PER_WEEK    (null)          1.80588516494733
 >50K    CAPITAL_LOSS      (null)          1.28361583304225
 >50K    AGE               (null)          1.20889883984869
 >50K    EDUCATION         Doctorate       1.1139153328993401
 >50K    NATIVE_COUNTRY    Nicaragua      -1.0957665557355201
 >50K    WORKCLASS         Without-pay    -0.99178110036931799
 >50K    NATIVE_COUNTRY    Columbia       -0.99178110036931699
 >50K    RELATIONSHIP      Wife            0.99046458006739702
 >50K    NATIVE_COUNTRY    Hungary        -0.973898034330827
 ...

In order to understand the information from the above query, it is necessary to scratch the surface of the Support Vector Machine algorithm.  As a result of training the model, we generate an equation which includes the following snippet:
a₁x₁ + a₂x₂ + a₃x₃ + ... + aₙxₙ
where aᵢ is a coefficient from the above query result and xᵢ is the corresponding value for the attribute in the dataset.  For example, for a record that has HOURS_PER_WEEK of 10, the resulting contribution of HOURS_PER_WEEK in determining income level is (10*1.80588516494733).  For a categorical attribute, we assign 1 for the value of x for the attribute_value that is present in the incoming record, and 0 for all the others.  In this case, if the incoming record represents a person from Nicaragua, then we will include -1.0957665557355201 in the equation (1*-1.0957665557355201), but will not include anything related to people from Columbia, Hungary, etc., as their respective contributions are 0.
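To make the arithmetic concrete, the same worked example can be run as a query (the coefficient values are simply copied from the query output above):

    -- Contributions of HOURS_PER_WEEK = 10 and NATIVE_COUNTRY = 'Nicaragua'
    -- to the SVM score, using the coefficients listed above
    select 10 * 1.80588516494733     as hours_per_week_part,
           1 * (-1.0957665557355201) as nicaragua_part
    from dual;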

So, what does all this mean?
This means that, according to the trained model, larger values of HOURS_PER_WEEK will tend to make it more likely that the resulting income level is >50K (the class value identified in the above query result).  This also means that residents of Nicaragua are slightly less likely to have an income level >50K than residents of Columbia (the contribution away from the high income class is larger for Nicaragua since the magnitude of the coefficient is larger and both are negative).

By taking information from all of the attributes together, the model is able to yield a prediction as to whether or not a particular individual is likely to earn more than 50K.  The final part of this ODM primer will demonstrate how the database provides these resulting predictions.

PeopleSoft Run Control Purge Utility

Run Control records are used to pass parameters into scheduled processes. These tables tend to grow and are rarely purged. Generally, once created, a run control is not deleted. When operator accounts are deleted, their Run Controls remain, but are no longer accessible to anyone else.

I have worked on systems where new Run Controls, whose IDs contain either a date or a sequence number, are generated for each process. The result is that the Run Control tables, especially the child tables, grow quickly, and if not regularly managed they will become very large. On one system, I found 18 million rows in one table!

RECNAME         FIELDNAME            NUM_ROWS     BLOCKS
--------------- ------------------ ---------- ----------
TL_RUN_CTRL_GRP RUN_CNTL_ID          18424536     126377
AEREQUESTPARM   RUN_CNTL_ID           1742676      19280
AEREQUESTTBL    RUN_CNTL_ID            333579       3271
XPQRYRUNPARM    RUN_CNTL_ID            121337       1630
TL_TA_RUNCTL    RUN_CNTL_ID            112920        622

I have written a simple Application Engine process, GFC_RC_ARCH, that purges old Run Controls from these tables.  It can be downloaded from my website.

Run Control records are easily identified. They are characterised by:

  • the first column of these tables is always OPRID, and the second is either RUNCNTLID or RUN_CNTL_ID,
  • these two columns are also the first two columns of the unique key,
  • the Run Control tables appear on pages of components that are declared as the process security component for that process.

I have decided that if the combination of OPRID and RUN_CNTL_ID does not appear in the process scheduler request table, PSPRCSRQST, then the Run Control record should be deleted. Thus, just as the delivered Process Scheduler purge process, PRCSPURGE, deletes rows from the Process Scheduler tables, my purge process deletes rows from the Run Control tables.
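As a sketch of that criterion (this is not the actual GFC_RC_ARCH code, and I am just using PS_TL_TA_RUNCTL from the table above as an example record), the delete for each Run Control table looks something like this:

    -- Delete run controls that never appear in the process scheduler request table
    delete from ps_tl_ta_runctl rc
    where not exists (
          select 'x'
          from   psprcsrqst rq
          where  rq.oprid     = rc.oprid
          and    rq.runcntlid = rc.run_cntl_id);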

I have chosen to make these two Application Engine processes mutually exclusive, so the Process Scheduler will not run both at the same time, but that configuration cannot be delivered in an Application Designer project.