Oakies Blog Aggregator

Quest vs Embarcadero: SQL Tuning

I get asked regularly what Embarcadero's SQL tuner has that is any different from Quest's SQL tuner. The HUGE difference between Quest's and Embarcadero's approach is Visual SQL Tuning (VST). Without VST, most automated SQL tuning turns into dead ends. Why dead ends? Because in general databases are going to optimize queries correctly; queries are rarely incorrectly optimized. In those rare instances where a database gets it wrong, the approach of blindly generating and testing alternative execution plans can be helpful, but only in cases where the query is small enough that many alternative plans can be tested.

If, on the other hand, the query is long running, say a day, then testing alternative cases will not produce results for several days. With a VST diagram, however, we can quickly identify the best execution path and guide the database to use it.
Also, for queries that the database can't optimize because they are inherently slow, intractable or badly written, what do you do? Generating alternative execution cases is pointless. What is going to help you quickly and easily understand the query and its problems? The solution is VST diagrams, which only Embarcadero has in DB Optimizer.

Not only does DB Optimizer generate and test alternative execution plans, analyze indexes, suggest missing indexes, verify table statistics, analyze columns as histogram candidates and provide the powerful VST diagrams; it also has database load profiling, detail drilldown, load testing and SQL editing with error flagging, automated error fixing, type-ahead and code templates. All of this is wrapped in the Eclipse IDE as a single executable supporting the same interface across Oracle, SQL Server, Sybase and DB2.

When are Exadata’s storage indexes used?

Exadata’s storage indexes provide data pruning at the storage layer. The storage indexes eliminate disk IO by storing a summary of the data distribution on the disks. MOS Note ID 1094934.1 provides the following summary:

Each disk in the Exadata storage cell is divided into equal sized pieces called storage regions (default 1MB). There is an index entry for every storage region (1MB of data stored on disk). Each entry contains the minimum and maximum value for columns seen in ‘where’ clause predicates. Information for up to 8 columns can be stored. The index is then used to eliminate disk IO by identifying which storage regions don’t match the ‘where’ clause of a query.
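To make the pruning mechanics concrete, here is a minimal sketch. Assume a hypothetical table region_index(region_id, col_min, col_max) holding the per-region minimum and maximum for one column (the real structure lives inside the storage cells and cannot be queried like this). Pruning a predicate such as pba_id between :lo and :hi then amounts to skipping every storage region whose min/max range cannot overlap the predicate:

-- Hypothetical illustration only; region_index is not a real Exadata object.
-- A region needs disk IO only when its min/max range can overlap the predicate.
SELECT region_id
  FROM region_index
 WHERE NOT (col_max < :lo OR col_min > :hi);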

The question I’ll try to help answer in this blog post is when Oracle can take advantage of the storage indexes, by sharing some of the test results and findings I came across while testing them. The data used for these tests is a simple version of dba_tab_columns with an added id column. The tests were performed on a quarter rack Exadata database machine (2 db nodes with 16 cores each, and 3 storage servers).

Create sorted and unsorted test data for storage index testing
 SQL> create table pba_order AS SELECT rownum pba_id, column_name, owner, table_name
            from (SELECT column_name, owner, table_name
                          from dba_tab_columns where rownum <= 10000),
                      (select rownum from dual CONNECT BY LEVEL <=20000)
          order by pba_id;
Table created.

SQL> select count(*) from pba_order;
          COUNT(*)  --  200,000,000
 
SQL> select count(distinct(pba_id)) from pba_order;
           COUNT(DISTINCT(PBA_ID))   --      200,000,000
 
SQL> create table pba_rnd as  select * from pba_order order by DBMS_RANDOM.RANDOM;
Table created.
 
Create a few null value entries for id column
SQL> update pba_rnd set PBA_ID = null where pba_id between 100 and 120;
21 rows updated.
 
SQL> update PBA_ORDER set PBA_ID = null where pba_id between 100 and 120;
21 rows updated.
SQL> commit;
 
Collect CBO statistics for tables
SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS('JA2','PBA_RND');
SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS('JA2','PBA_ORDER');
PL/SQL procedure successfully completed.

 SQL> select TABLE_NAME, NUM_ROWS, SAMPLE_SIZE, LAST_ANALYZED from user_tables where table_name like '%PBA%'; 

TABLE_NAME    NUM_ROWS    SAMPLE_SIZE  LAST_ANALYZED
PBA_RND      200000000    200000000    13-SEP-10
PBA_ORDER    200000000    200000000    13-SEP-10


Check the storage session statistics before testing

Name Value
cell physical IO bytes saved by storage index 0
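For reference, the Name/Value figures shown throughout this post come from the session statistics; a query along these lines (the standard v$mystat to v$statname join) returns the current value:

SQL> SELECT n.name, s.value
       FROM v$statname n, v$mystat s
      WHERE s.statistic# = n.statistic#
        AND n.name = 'cell physical IO bytes saved by storage index';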
 
Storage indexes work on the columns in your where clause, so let’s check out a few different predicates. Please notice that I have not created any indexes on either of the tables.
SQL> select count(*) from PBA_RND where pba_id is null;     -- IS NULL on random sorted data
  COUNT(*)  --    21
 Elapsed: 00:00:00.04
 
Name Value
cell physical IO bytes saved by storage index  10303389184
 
SQL>  select count(*) from PBA_ORDER where pba_id is null;    -- IS NULL on sorted data
    COUNT(*)   --     21
Elapsed: 00:00:00.04 
 
Name Value
cell physical IO bytes saved by storage index  20866007040
So the entry for the storage index column can contain a NULL value, unlike regular b-tree indexes, which is also what Kerry Osborne’s blog post on Exadata offload confirms: http://kerryosborne.oracle-guy.com/2010/06/exadata-offload-the-secret-sauce/#more-2531

Check the storage session statistics before testing

Name Value
cell physical IO bytes saved by storage index  72634597376


SQL> select avg(pba_id) from pba_order where pba_id between 1000000 and 1000999;   -- Range scan on sorted data.

    AVG(PBA_ID)  --    1000499.5
1 row selected.
Elapsed: 00:00:00.05 
 
Name Value
cell physical IO bytes saved by storage index  82996625408
 
 SQL> select avg(pba_id) from pba_rnd where pba_id between 1000000 and 1000999;  -- Range scan on random sorted data.
AVG(PBA_ID)  --    1000499.5
 Elapsed: 00:00:05.68 
 
Name Value
cell physical IO bytes saved by storage index  82996625408

I tried a few more test scenarios, forcing direct reads and using parallel query, but it did not make any difference for this SQL. For the between test, I could only get the storage index to kick in for the ordered data.

Check the storage session statistics before testing

Name Value
cell physical IO bytes saved by storage index  93359243264


SQL> select avg(pba_id) from pba_order where pba_id <100;      -- LESS than on sorted data

AVG(PBA_ID)  --    50
Elapsed: 00:00:00.04
 
Name Value
cell physical IO bytes saved by storage index  103721861120
 
 SQL> select avg(pba_id) from pba_rnd where pba_id <100;       -- LESS than on unsorted data
    AVG(PBA_ID)  --        50
 Elapsed: 00:00:00.13
 
Name Value
cell physical IO bytes saved by storage index 113980342272
 
 SQL> select avg(pba_id) from pba_order where pba_id > 199999000;   -- greater than on sorted data
    AVG(PBA_ID)  --     199999501
Elapsed: 00:00:00.06
 
Name Value
cell physical IO bytes saved by storage index 144962977792
 
 SQL> select avg(pba_id) from pba_rnd where pba_id > 199999000;  -- greater than on unsorted data.
AVG(PBA_ID)  --    199999501
 Elapsed: 00:00:00.62
 
Name Value
cell physical IO bytes saved by storage index 154317127680

So we can confirm that Oracle can use storage indexes for both less-than and greater-than predicates.

Check the storage session statistics before testing

Name Value
cell physical IO bytes saved by storage index 174033272832
 SQL> select pba_id from pba_order where pba_id = 100000000;   -- equal sign on sorted data
    PBA_ID  --   100000000
 Elapsed: 00:00:00.05 
 
Name Value
cell physical IO bytes saved by storage index 184395268096
 
SQL> select pba_id from pba_rnd where pba_id = 100000000;    -- equal sign on unsorted data
    PBA_ID  -- 100000000
 Elapsed: 00:00:05.66
 
Name Value
cell physical IO bytes saved by storage index 184395268096

Again I tried a few more test scenarios, forcing direct reads and using parallel query, but it did not make any difference in this case. For the equality predicate test, I could only get the storage index to kick in for the ordered data.

Before finishing up, let’s check if storage indexes kick in on tables compressed with EHCC (Exadata Hybrid Columnar Compression).
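PBA_HCC was presumably created along these lines (a sketch; the exact DDL isn’t shown here):

SQL> create table pba_hcc compress for query high   -- EHCC compression level
     as select * from pba_order;
SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS('JA2','PBA_HCC');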

Segment Name   Size MB
PBA_ORDER        9,937
PBA_HCC            856

 

Table Name  Last Analyzed  Par  Flash_C  Cell_fl  Compressed  Compressed_for
PBA_HCC     14-SEP-10      No   Default  Default  Enabled     Query High
PBA_ORDER   14-SEP-10      No   Default  Default  Disabled    Default

Check the storage session statistics before testing 

Name Value
cell physical IO bytes saved by storage index 0

SQL>  select avg(pba_id) from pba_order where pba_id <100;

     AVG(PBA_ID) -- 50
 Elapsed: 00:00:00.13 
 
Name Value
cell physical IO bytes saved by storage index 10362617856
 

SQL> select avg(pba_id) from pba_hcc  where pba_id <100;
      AVG(PBA_ID)  --  50
Elapsed: 00:00:00.04  
 
Name Value
cell physical IO bytes saved by storage index 11097702400
 

If you would like to read more on EHCC, I have a blog post with a few test cases and conclusions at http://www.oaktable.net/blog/what-can-ehcc-do-you

Conclusion

In Exadata, the storage index can evaluate predicates like <, <=, =, !=, >=, >, IS NULL and IS NOT NULL. As you saw above, though, I was not able to get the storage indexes to kick in for the between and equality predicates on the randomly sorted data. It may work in other circumstances or test cases; currently it is very hard to evaluate, as there is no public tracing tool available.

In general we saw marginally better IO savings in the test cases with sorted data, and storage indexes also worked on EHCC compressed tables. I have also performed the same tests on a global temporary table, and the storage indexes worked there as well.

Kerry Osborne has a good blog post confirming that storage indexes also work with bind variables: http://kerryosborne.oracle-guy.com/2010/09/do-storage-indexes-work-with-bind-variables/

The performance gained from the storage indexes really rocks my world and it made me review our design and index strategy for our warehouse.

More History

In a recent Oracle Magazine there’s a question from Marco Pinzuti about why Oracle Magazine articles are available for the past 5 years only. The editors respond they do not host old articles because technology moves quite fast and there’s no reason to keep old content which is most likely not relevant to current users. [...]

Birmingham to Frankfurt…

Birmingham Airport

Whilst people watching in Birmingham Airport I discovered a new category of traveller I had never noticed before, namely the “control freak”.

The person in question was an older gentleman having a bit of a tizzy because he didn’t think the allocated 25 minutes boarding time was enough to get everyone on the plane. During the next few minutes, the sky was apparently falling in because of this disastrous planning by the airport. I can only conclude that:

  • His life is timetabled down to the last second.
  • He probably uses MS Project to track the slippage caused when the kettle takes 30 seconds longer to boil than he planned for.
  • He probably reads the Daily Mail. Did you know if there are more than two teenagers together they are probably planning to murder you?
  • Things were much better in his day!

Birmingham to Frankfurt

We boarded in plenty of time. It seems 25 minutes was actually more than enough. The cabin doors were shut, then the captain announced that flights to Frankfurt were delayed because of the wind. Frankfurt was apparently down to 2/3 capacity, so that meant a 60 minute wait on the ground in Birmingham. Sigh. Of course, 60 minutes became 70 minutes before we took off. This meant we would land at the same time as boarding was planned to start for my connection to San Francisco.

I arrived in Frankfurt to the good news I hadn’t missed my flight. Joy quickly turned to misery when I was told my flight had been delayed by 4 hours. :( On the plus side I did get a €10 food voucher, which I accidentally spent on loads of diet coke and chocolate. In my defence, the lady in the shop wasn’t too happy about me spending less than €10, so she kept telling me to pick up more items. For the sake of international relations my only option was to take her advice and fill my bag with chocolate.

I’ve been collared by security twice in Frankfurt Airport. The first time focused mainly on my sachets of Lemsip. I’m not sure if they were hoping to confiscate my cold cure, thereby saving themselves about £5 at the pharmacy, or whether they thought it was something naughty. The second encounter was a random check to see if I was carrying weapons. I’m not sure quite how I could manage that after going through security in Birmingham and Frankfurt, but I would rather they tried too hard than not hard enough. I wasn’t sure if the vast quantity of chocolate I was carrying qualified as a deadly weapon or not. It’s certainly going to kill me if I don’t give it up.

It’s a couple more hours until I fly…

Cheers

Tim…

Update: Just bumped into Hajo Normann and Andrejus Baranovskis. They were just about to board their flight to SFO that was running on time. I’m still waiting for mine, even though it was meant to leave 3 hours before theirs. :(

MOTS Mini – Nasty weather, for absent friends


Is it a blog, plog or phlog? I don’t know, but I’m hereby trying out the WordPress Android app to see if this works for more lasting, fast tweet-like updates. The start of this “conference series” for MOTS and OOW was wet: really horrible weather above Holland, and my mistake in interpreting 12.45 pm didn’t make it any better. But hey, no stress, right?

My flight with Delta Airlines boarded almost 2 hours before departure, which also wasn’t fun at all, but you learn to get by. One big coffee, sandwiches and a “Nicotinel” gum make life easier. Would they also have a gum for whiskey addicts…

;-)

Anyway. Ann Arbor here I come.

Performance Overhead of Multiple SQL calls in SQR

I was asked to look at a fairly simple SQR program that reported on payroll data, pivoting the data for each employee on which it reported. It made 21 calls to two procedures, each of which obtains a value by summing data across different sets of data in one of two payroll result tables.

The ASH data shows that most of the database time, 184 out of 192 seconds, is spent in the two statements that aggregate that data. These statements are in the procedures that are called within the driving query.
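As a sketch of how figures like the ones below can be derived (assuming the run’s samples are still in v$active_session_history and the session is identified by :sid and :serial), the per-statement ASH seconds can be approximated by counting the one-second samples:

SELECT sql_id, sql_plan_hash_value, COUNT(*) ash_secs
  FROM v$active_session_history
 WHERE session_id = :sid
   AND session_serial# = :serial
 GROUP BY sql_id, sql_plan_hash_value
 ORDER BY ash_secs DESC;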

SQL_ID        SQL_PLAN_HASH_VALUE  EXEC_SECS   ASH_SECS
------------- ------------------- ---------- ----------
515d3buvaf8us          1162018321        615        133
55a20fnkwv0ht          3972836246        615         51
...
                                             ----------
sum                                                 192

However, more significantly, only 192 seconds out of a total elapsed run time of 615 seconds is spent in the database. That is just 31%. So most of the time is spent executing code within the SQR program.

We need to look at the code to see exactly what is happening.

This is the driving query. It returns each employee who is paid in a specified payroll, and then for each row the procedure get_gp_acum is used to fetch the sum of certain accumulators for that payroll and employee.

Begin-Select
a.emplid
a.cal_run_id
a.cal_id
a.gp_paygroup
a.orig_cal_run_id
a.rslt_seg_num
a.tax_code_uk
a.gpgb_ni_category
n.name

Let $pin_name = 'GBR AC GRTX SEG' !Taxable Gross
do get_gp_acum
Let #Taxable_gross = &b.calc_rslt_val

Let $pin_name = 'GBR AC NIBL SEG' !NIable Gross
do get_gp_acum
Let #Niable_gross1 = &b.calc_rslt_val


from ps_gpgb_ee_rslt a, ps_person_name n
where a.emplid = n.emplid
and a.cal_run_id = $PNL_CAL_RUN_ID1
and a.empl_rcd = 0
and a.gp_paygroup = $PNL_PYGRP1
and a.cal_id = $PNL_CAL_ID1
and a.orig_cal_run_id = a.cal_run_id
order by a.emplid,a.gp_paygroup
End-Select
End-Procedure

This is one of the two procedures that are called to obtain each value. It simply sums the data for that employee.

Begin-Procedure get_gp_acum
begin-select
sum(b.calc_rslt_val) &b.calc_rslt_val
from ps_xx_gpacum_rpt1 b, ps_gp_pin c
where b.emplid = &a.emplid
and b.empl_rcd = 0
and b.cal_run_id = &a.cal_run_id
and b.gp_paygroup = &a.gp_paygroup
and b.cal_id = &a.cal_id
and b.orig_cal_run_id = &a.orig_cal_run_id
and b.rslt_seg_num = &a.rslt_seg_num
and b.orig_cal_run_id = b.cal_run_id
and b.pin_num = c.pin_num
and c.pin_nm = $pin_name
end-select
End-Procedure

This code is very clear, well structured and easy to maintain. The only trouble is that it is slow. Each SQL call makes SQR do a lot of work, and that takes time.

In this case there was not much procedural code in the SQR and so I was able to coalesce the SQL from the called procedures into the driving query.

If the called procedures had been simple single-row look-ups I could have used an outer join. However, as they use a group function (sum), I put each query into a scalar query (a query within the select clause that returns only one row and one column). Each call to a procedure was replaced with a separate scalar query. I ended up with 21 scalar queries.

During this rewrite I encountered an SQR quirk: if the scalar query is placed directly in the main select clause, SQR produces errors because it is expecting an expression, and it complains that the SELECT keyword is not a variable. I therefore had to wrap the query in an in-line view. Each scalar query must be given a column alias, and that column alias can then be referenced in the SQR select clause.

Begin-Procedure MAIN-REPORT
Begin-Select
a.emplid
a.cal_run_id
a.cal_id
a.gp_paygroup
a.orig_cal_run_id
a.rslt_seg_num
a.tax_code_uk
a.gpgb_ni_category
n.name

a.gbr_ac_grtx_seg
Let #Taxable_gross = &a.gbr_ac_grtx_seg

a.gbr_ac_nibl_seg
Let #Niable_gross1 = &a.gbr_ac_nibl_seg


from (
select a.emplid, a.cal_run_id, a.cal_id, a.gp_paygroup, a.orig_cal_run_id, a.rslt_seg_num, a.tax_code_uk, a.gpgb_ni_category
,NVL((SELECT sum(b.calc_rslt_val) from ps_xx_gpacum_rpt1 b, ps_gp_pin c
where b.emplid = a.emplid and b.empl_rcd = 0 and b.cal_run_id = a.cal_run_id
and b.gp_paygroup = a.gp_paygroup and b.cal_id = a.cal_id
and b.orig_cal_run_id = a.orig_cal_run_id and b.rslt_seg_num = a.rslt_seg_num
and b.orig_cal_run_id = a.cal_run_id and b.pin_num = c.pin_num
and c.pin_nm = 'GBR AC GRTX SEG'),0) gbr_ac_grtx_seg
,NVL((SELECT sum(b.calc_rslt_val) from ps_xx_gpacum_rpt1 b, ps_gp_pin c
where b.emplid = a.emplid and b.empl_rcd = 0 and b.cal_run_id = a.cal_run_id
and b.gp_paygroup = a.gp_paygroup and b.cal_id = a.cal_id
and b.orig_cal_run_id = a.orig_cal_run_id and b.rslt_seg_num = a.rslt_seg_num
and b.orig_cal_run_id = a.cal_run_id and b.pin_num = c.pin_num
and c.pin_nm = 'GBR AC NIBL SEG'),0) gbr_ac_nibl_seg

from ps_gpgb_ee_rslt a
where a.empl_rcd = 0
and a.orig_cal_run_id = a.cal_run_id
) a
,ps_person_name n
where a.cal_run_id = $PNL_CAL_RUN_ID1
and a.gp_paygroup = $PNL_PYGRP1
and a.cal_id = $PNL_CAL_ID1
and a.emplid = n.emplid
order by a.emplid,a.gp_paygroup
End-Select
End-Procedure

The SQL looks much more complicated, as does the execution plan.  However, the effect on performance was dramatic.

SQL_ID        SQL_PLAN_HASH_VALUE  EXEC_SECS   ASH_SECS
------------- ------------------- ---------- ----------
f9d03ffbftv81          1694704409        154        114
5d2x9mqvvyrjk           989254841        154          2
3v550ghn6z8jv          1521271881        154          1
                                             ----------
sum                                                 117

The database time of just the combined SQL, at 117 seconds, is better than the 192 seconds for the separate SQLs. Much more significantly, the amount of time spent in SQR (rather than the database) has fallen from 423 seconds to just 37. Therefore, roughly 90% of the SQR response time was spent on submitting the SQL calls in the called procedures.

Conclusions

SQL calls in SQR are expensive. The cost of making lots of calls inside a loop or another driving query can add up to a significant amount of time. SQRs that consume time in this way will also be consuming CPU and memory on the server where the Process Scheduler is located. 

In this case, combining SQL statements also improved SQL performance, but that will not always be the case.

There are times when better performance can be achieved at the cost of more convoluted code. In each case there is a judgement to be made as to whether improvement in performance is worth the increase in complexity.


Exadata

Inevitably there’s still excitement and uncertainty in the air about Exadata – and/or the Database Machine. For ease of reference and a jolly good read I’ve started collecting a few blog postings about it:

Pythian

Kerry Osborne

Kevin Closson

OOW10 Bloggers Meetup Agenda — T-shirts are Back and More…

Almost time for the Annual Bloggers Meetup @ OOW. Counting down. The details are finally organized — this year, we have not one, but TWO great prizes at the Oracle OpenWorld Bloggers Meetup.

1) T-shirt art contest on stylish Pythian designer t-shirts — one lucky blogger will receive an HP X310 Data Vault, generously sponsored again this year by HP.

2) For the best, most creative blog post about the meetup itself, Pythian is giving away an Apple TV. But, there are a few small rules:

  1. the blog post must use as many names of people in attendance as possible.
  2. the blog post must be readable. It needs to make sense to someone who wasn’t there. It must be a story and not a list.
  3. the blog post must contain one other small tidbit of information about each individual that you’ll uncover during your “networking” as you weave your fascinating stories of the evening (like their blog name, a contact detail, favorite color, title, hobby, cat’s name, certification, what they were drinking, how much they had to drink…).
  4. the entry must be posted by midnight on Sunday the 26th of September. Please reference this post in your blog and it should be automatically picked up by the blog engine and posted in the comments as a track-back (if it doesn’t appear in the comments for some reason — please do post the link in your comment!)

Can you tell we’re trying to encourage a little more mingling? :)

And last but not least — all this couldn’t be happening without Vanessa Simmons who’s been orchestrating all of the fun this year. Thanks Vanessa!


Oracle DB 11.2.0.2 patch set is here … Again…

The incredible disappearing patch set has reappeared on MOS. Download it quick before it gets pulled again. :)

Cheers

Tim…

PS. Remember, read the patch note before you download it. You don’t need all 4.8G. Also, check out the new recommended patching instructions.