Partitions

nVision Performance Tuning: 9. Using Compression without the Advanced Compression Licence

This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

Table compression can significantly decrease the size of tables and reduce the volume of I/O required to retrieve data.  Compression of the ledger, ledger budget, and summary ledger tables can significantly improve the performance of scans in nVision.
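
Basic table compression (direct-path compression) is included in Enterprise Edition and does not require the Advanced Compression licence. As a minimal sketch (the partition name below is invented, not a delivered PeopleSoft structure), an individual ledger partition could be rebuilt compressed like this:

-- Rebuild one ledger partition with basic compression.  Basic compression only
-- compresses rows inserted by direct-path operations, such as this
-- ALTER TABLE ... MOVE, so the whole partition is compressed by the rebuild.
ALTER TABLE ps_ledger MOVE PARTITION ledger_2023 COMPRESS BASIC;

-- The move leaves local index partitions unusable, so rebuild them.
ALTER TABLE ps_ledger MODIFY PARTITION ledger_2023 REBUILD UNUSABLE LOCAL INDEXES;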

nVision Performance Tuning: 4. Partitioning of Ledger, Ledger Budget, and Summary Ledger Tables

This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

Note: Partitioning is a licenced option in Oracle RDBMS, and is only available on Enterprise Edition.

nVision queries always contain single-value predicates on LEDGER and FISCAL_YEAR.  They will also always have either a single-value predicate or a range predicate on ACCOUNTING_PERIOD.  Therefore, partitioning the ledger tables on these columns is an effective way to cut down the data to be processed by the query as early as possible.

SELECT … SUM(A.POSTED_BASE_AMT) 
FROM PS_LEDGER A, …
WHERE A.LEDGER='ACTUALS'
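
A sketch of what such partitioning might look like is shown below. This is illustrative DDL only: the column list is heavily abbreviated and the partition and sub-partition names are invented, so it is not the delivered PeopleSoft ledger definition.

-- Hypothetical sketch: range partition the ledger table by FISCAL_YEAR and
-- list sub-partition it by ACCOUNTING_PERIOD, so that nVision's predicates
-- on both columns can prune partitions.
CREATE TABLE ps_ledger
( business_unit     VARCHAR2(5)   NOT NULL
, ledger            VARCHAR2(10)  NOT NULL
, account           VARCHAR2(10)  NOT NULL
, fiscal_year       NUMBER(4)     NOT NULL
, accounting_period NUMBER(3)     NOT NULL
, posted_base_amt   NUMBER(26,3)  NOT NULL
)
PARTITION BY RANGE (fiscal_year)
SUBPARTITION BY LIST (accounting_period)
SUBPARTITION TEMPLATE
( SUBPARTITION ap_00    VALUES (0)
, SUBPARTITION ap_01    VALUES (1)
, SUBPARTITION ap_02    VALUES (2)
, SUBPARTITION ap_03    VALUES (3)
  -- periods 4 to 12 would each get their own sub-partition in the same way
, SUBPARTITION ap_other VALUES (DEFAULT)
)
( PARTITION ledger_2022 VALUES LESS THAN (2023)
, PARTITION ledger_2023 VALUES LESS THAN (2024)
);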

PeopleTools 8.54: Table/Index Partitioning

This is part of a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.

Partitioning in Oracle

Partitioning of table (and index) segments involves breaking them into several smaller segments, where certain data values only occur in certain segments.  Thus, if a query is looking for a certain data value, it may be able to eliminate some partitions without having to scan them, because by definition those values cannot occur in those partitions.  That saves logical and physical reads and improves performance.  This is called partition elimination, or pruning, and it is often the principal reason for partitioning a table.
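
To make the pruning concrete, here is a small, self-contained illustration (the table is invented for the example): a single-value predicate on the partitioning key restricts the scan to one partition, which shows up as a narrowed Pstart/Pstop range in the execution plan.

-- Invented example table, range partitioned by year.
CREATE TABLE sales_by_year
( sale_year NUMBER(4) NOT NULL
, amount    NUMBER(12,2)
)
PARTITION BY RANGE (sale_year)
( PARTITION p2022 VALUES LESS THAN (2023)
, PARTITION p2023 VALUES LESS THAN (2024)
, PARTITION pmax  VALUES LESS THAN (MAXVALUE)
);

EXPLAIN PLAN FOR
SELECT SUM(amount) FROM sales_by_year WHERE sale_year = 2023;

-- The plan shows PARTITION RANGE SINGLE with Pstart = Pstop = 2, i.e. only
-- partition p2023 is scanned; p2022 and pmax are pruned without being read.
SELECT * FROM TABLE(dbms_xplan.display);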

Dropped Partitions do not go in the Recycle Bin

If you alter table TEST drop partition Q1, does it go in the recycle bin?

That is the question I was asked today. “Of course it….” Pause. More pause.

No, I did not know. I knew I’d seen partitions in the recycle bin on Oracle, but I thought there was no option to state PURGE when you dropped a partition.

So, a quick test was needed.

First I tried a drop of a partition I knew I could live without {NB this is on version 11.2.0.3, I am 99.8% sure this is the same on 10}
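
For reference, a minimal version of that kind of test (a sketch, not the author's original script) would be something like:

-- Throwaway partitioned table: does a dropped partition show up in the recycle bin?
CREATE TABLE test_rb (id NUMBER, qtr NUMBER)
PARTITION BY RANGE (qtr)
( PARTITION q1 VALUES LESS THAN (2)
, PARTITION q2 VALUES LESS THAN (3)
);

ALTER TABLE test_rb DROP PARTITION q1;

-- If dropped partitions went to the recycle bin, we would expect a row here.
SELECT object_name, original_name, type FROM user_recyclebin;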

Exadata, to index or not to index, that is the question

That is what we will take a look at in this blog post, by testing several different approaches and comparing the elapsed time and the statistics for each scenario.

The tests were performed on a quarter-rack Exadata database machine (2 db nodes with 16 cores each, and 3 storage servers). The database is set up for a data warehouse implementation and had been patched with bundle patch 5 at the time of testing.

The tests were executed on a table with 403M rows distributed over 14 range partitions: 7 non-compressed and 7 compressed with the HCC query option.  Each test spans two partitions covering 57.5M rows.  Please note the dimension tables contain 4,000 rows or fewer.  The data is production data and is event based, meaning rows are generated when certain events occur.

TABLE_NAME  PARTITION_NAME COMPRESS COMPRESS_FOR LAST_ANALYZED  SAMPLE_SIZE   NUM_ROWS
EVENT_PART  TEST_20100901  DISABLED              27-SEP-10         28793000   28793000
EVENT_PART  TEST_20100902  DISABLED              27-SEP-10         28793000   28793000
EVENT_PART  TEST_20100903  DISABLED              27-SEP-10         28793000   28793000
EVENT_PART  TEST_20100904  DISABLED              27-SEP-10         28793000   28793000
EVENT_PART  TEST_20100905  DISABLED              27-SEP-10         28793000   28793000
EVENT_PART  TEST_20100906  DISABLED              27-SEP-10         28793000   28793000
EVENT_PART  TEST_20100907  DISABLED              27-SEP-10         28793000   28793000
EVENT_PART  TEST_20100908  ENABLED  QUERY HIGH   27-SEP-10         28793000   28793000
EVENT_PART  TEST_20100909  ENABLED  QUERY HIGH   27-SEP-10         28793000   28793000
EVENT_PART  TEST_20100910  ENABLED  QUERY HIGH   27-SEP-10         28793000   28793000
EVENT_PART  TEST_20100911  ENABLED  QUERY HIGH   27-SEP-10         28793000   28793000
EVENT_PART  TEST_20100912  ENABLED  QUERY HIGH   27-SEP-10         28793000   28793000
EVENT_PART  TEST_20100913  ENABLED  QUERY HIGH   27-SEP-10         28793000   28793000
EVENT_PART  TEST_20100914  ENABLED  QUERY HIGH   27-SEP-10         28793000   28793000
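
A listing like the one above can be pulled straight from the data dictionary; a query along these lines (a sketch against user_tab_partitions) would produce it:

-- Partition-level compression settings and optimizer statistics.
SELECT table_name, partition_name, compression, compress_for,
       last_analyzed, sample_size, num_rows
  FROM user_tab_partitions
 WHERE table_name = 'EVENT_PART'
 ORDER BY partition_name;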
 

Each test case/SQL statement has been executed five times, once under each of the following scenarios:

(1) Without any bitmap or regular indexes on non-compressed partitions
    Storage FTS on event_part and dimensions
(2) Without any bitmap or regular indexes on HCC partitions
    Storage FTS on event_part and dimensions
(3) With primary key constraints on dimension tables only
    Storage FTS on event_part with primary key lookup on dimensions
(4) With bitmap and primary key indexes on non-compressed partitions
    Bitmap index lookup on event_part and primary key lookup on dimensions
(5) With bitmap and primary key indexes on HCC partitions
    Bitmap index lookup on event_part and primary key lookup on dimensions

The test cases are from a warehouse environment, and I have modified the column and table names. For the bitmap test cases I had to hint the queries to ensure the bitmap indexes were actually used.
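
The hinting for the bitmap scenarios might look something like the sketch below; the index names are invented and the actual hints used in the tests are not shown in the post:

-- Force the optimiser to combine bitmap indexes on the fact table rather
-- than use a full scan (index names are hypothetical).
select /*+ INDEX_COMBINE(ep ep_s_id_bix ep_cc_id_bix) */
       st.s_name, cct.pc_name, count(ep.c_id)
  from event_part ep, s_test st, cc_test cct
 where ep.s_id = st.s_id
   and ep.cc_id = cct.c_id
   and ep.s_id = 1
   and ep.cc_id = 7
 group by st.s_name, cct.pc_name;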

The output from the test cases is over 20K lines, so I have summarised the elapsed time and a few statistics in the tables below to provide a better overview.

select /*+ MONITOR basic */
       st.s_name, pt.p_name, cct.pc_name, ep.c_level, count(ep.c_id)
  from event_part ep, t_test tt, s_test st, p_test pt, cc_test cct
 where tt.t_id between 20100901 and 20100902
   and ep.t_id = tt.t_id
   and ep.t_id between 20100901 and 20100902
   and ep.s_id = st.s_id
   and ep.p_id = pt.p_id
   and ep.cc_id = cct.c_id
 group by st.s_name, pt.p_name, cct.pc_name, ep.c_level
 order by st.s_name, pt.p_name, cct.pc_name, ep.c_level;
 

Table figure for basic breakdown – Test case 1

Stats / Tests                                                             1              2              3              4              5
Elapsed time sec                                                       8.87          12.00           9.22         185.05         149.55
cell physical IO bytes saved by storage index                             0              0              0              0              0
cell physical IO bytes eligible for predicate offload         5,209,325,568              0  5,209,325,568              0              0
cell physical IO interconnect bytes returned by smart scan    2,562,203,600              0  2,562,201,584              0              0
cell flash cache read hits                                                9             26              9          9,290          2,063
CC Total Rows for Decompression                                           -     57,586,000              -              -     57,586,000

This is a basic query, which is used for high-level summaries, and it serves as a good baseline for the other test cases.  There is no selective where-clause predicate in this test case, so we will not benefit from any storage indexes here.  The first three tests, which have no indexes on the fact table, perform much better than tests 4 and 5, and we should of course not expect the CBO to choose that path anyway.  It is evident for tests 1 and 3 that the performance gain comes from storage server offloading and smart scans.  The CC stats above for test 2 tell us that the db node performs the decompression, so this test has to burn extra CPU cycles compared to tests 1 and 3.  There is more to be said about test 2, but I’ll try to cover that in the conclusion.
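
For completeness, the cell statistics quoted in these tables are ordinary session statistics; the post does not show how they were captured, but they can be snapshotted before and after each run with a query like this:

-- Exadata session statistics used in the result tables above.
select sn.name, ms.value
  from v$mystat ms, v$statname sn
 where ms.statistic# = sn.statistic#
   and sn.name in ('cell physical IO bytes saved by storage index',
                   'cell physical IO bytes eligible for predicate offload',
                   'cell physical IO interconnect bytes returned by smart scan',
                   'cell flash cache read hits',
                   'CC Total Rows for Decompression');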

select /*+ MONITOR lc breakdown */
       st.s_name, pt.p_name, cct.pc_name, ep.c_level, count(ep.c_id)
  from event_part ep, t_test tt, s_test st, p_test pt, cc_test cct
 where tt.t_id between 20100903 and 20100904
   and ep.t_id = tt.t_id
   and ep.t_id between 20100903 and 20100904
   and ep.s_id = st.s_id
   and ep.p_id = pt.p_id
   and ep.cc_id = cct.c_id
   and ep.c_id = 7
 group by st.s_name, pt.p_name, cct.pc_name, ep.c_level
 order by st.s_name, pt.p_name, cct.pc_name, ep.c_level;
 

Table figure for LC breakdown - Test case 2

Stats / Tests                                                             1              2              3              4              5
Elapsed time sec                                                       2.05          19.17           1.84          30.33          36.58
cell physical IO bytes saved by storage index                 4,186,636,288              0  4,186,636,288              0              0
cell physical IO bytes eligible for predicate offload         5,209,292,800              0  5,209,292,800              0              0
cell physical IO interconnect bytes returned by smart scan      317,496,848              0    317,497,280              0              0
cell flash cache read hits                                               18             59             36          1,043            219
CC Total Rows for Decompression                                           0     57,782,554              0              0      7,842,364

Similar findings to the 1st test case; however, in this test case we are performing the breakdown for a certain ID, and therefore the performance of tests 1 and 3 improved further thanks to the I/O saved by the storage index.  For this test case I ran tests 1 and 3 on the same partitions more than once, and it is worth noticing that the second time around the savings from the storage index improved; so the storage indexes are further maintained/improved as we select data from the tables and partitions.

select /*+ MONITOR lp breakdown */
       st.s_name, pt.p_name, cct.pc_name, ep.c_level, count(ep.c_id)
  from event_part ep, t_test tt, s_test st, p_test pt, cc_test cct
 where tt.t_id between 20100905 and 20100906
   and ep.t_id = tt.t_id
   and ep.t_id between 20100905 and 20100906
   and ep.s_id = st.s_id
   and ep.p_id = pt.p_id
   and ep.cc_id = cct.c_id
   and ep.p_id = 4611686019802841877
 group by st.s_name, pt.p_name, cct.pc_name, ep.c_level
 order by st.s_name, pt.p_name, cct.pc_name, ep.c_level;
 

Table figure for lp breakdown - Test case 3

Stats / Tests                                                             1              2              3              4              5
Elapsed time sec                                                       2.99           6.01           2.72          49.22          39.29
cell physical IO bytes saved by storage index                 2,623,143,936              0  2,623,799,296              0              0
cell physical IO bytes eligible for predicate offload         5,209,325,568              0  5,209,325,568              0              0
cell physical IO interconnect bytes returned by smart scan      674,439,456              0    674,436,288              0              0
cell flash cache read hits                                               64             44             10          2,113            635
CC Total Rows for Decompression                                           0     57,979,108              0              0     15,582,048

Similar findings to the 2nd test case; this test is just performed on a different ID, which has a higher distinct count than the ID we tested in test case 2.  As a result of that, and of how the data was sorted during insert, we see less I/O saved by the storage index.

select /*+ MONITOR spcl breakdown */
       st.s_name, pt.p_name, cct.pc_name, ep.c_level, count(ep.c_id)
  from event_part ep, t_test tt, s_test st, p_test pt, cc_test cct
 where tt.t_id between 20100906 and 20100907
   and ep.t_id = tt.t_id
   and ep.t_id between 20100906 and 20100907
   and ep.s_id = st.s_id
   and ep.p_id = pt.p_id
   and ep.cc_id = cct.c_id
   and ep.s_id = 1
   and ep.cc_id = 7
   and ep.p_id = 4611686019802841877
 group by st.s_name, pt.p_name, cct.pc_name, ep.c_level
 order by st.s_name, pt.p_name, cct.pc_name, ep.c_level;
 

Table figure for spcl breakdown – Test case 4

Stats / Tests                                                             1              2              3              4              5
Elapsed time sec                                                       1.67          13.69           1.14          12.77           7.90
cell physical IO bytes saved by storage index                 4,531,191,808              0  4,532,174,848              0              0
cell physical IO bytes eligible for predicate offload         5,209,325,568              0  5,209,325,568              0              0
cell physical IO interconnect bytes returned by smart scan      237,932,736              0    237,933,312              0              0
cell flash cache read hits                                               73             52             10            594            183
CC Total Rows for Decompression                                           0     57,782,554              0              0      5,614,752

This test case is performed with a where clause on multiple IDs.  Again tests 1 and 3 are taking advantage of the Exadata features and are performing well.  Tests 4 and 5 are still not close to tests 1 or 3, but have definitely become a bit more competitive.  Comparing the two HCC tests (2 and 5), test 5 seems to do better, as it only has to burn CPU cycles for about 10% of the result set of test 2.  A valid question to ask here is why we are not seeing any benefit from either storage offloading or storage indexes in test 2, but again I’ll defer that discussion to the conclusion.

select /*+ MONITOR ttl */
       st.s_name, cct.pc_name, ep.c_level, count(ep.c_id) row_count,
       round(min((ep.ceg - ep.cbg) / 60), 2) min_g_min,
       round(avg((ep.ceg - ep.cbg) / 60.0), 2) avg_g_min,
       round(max((ep.ceg - ep.cbg) / 60), 2) max_g_min
  from event_part ep, t_test tt, s_test st, cc_test cct
 where tt.t_id between 20100901 and 20100902
   and ep.t_id = tt.t_id
   and ep.t_id between 20100901 and 20100902
   and ep.s_id = st.s_id
   and ep.cc_id = cct.c_id
 group by st.s_name, cct.pc_name, ep.c_level
 order by st.s_name, cct.pc_name, ep.c_level;
 

Table figure for ttl breakdown - Test case 5

Stats / Tests                                                             1              2              3              4              5
Elapsed time sec                                                      12.82          15.50          11.67         254.26         304.92
cell physical IO bytes saved by storage index                             0              0              0              0              0
cell physical IO bytes eligible for predicate offload         5,209,325,568              0  5,209,325,568              0              0
cell physical IO interconnect bytes returned by smart scan    2,328,566,432              0  2,328,567,440              0              0
cell flash cache read hits                                                9             15              9        132,467          2,341
CC Total Rows for Decompression                                           0     57,586,000              0              0     61,643,318

Very similar findings to the 1st test case; the only difference is that this query also reports timings, namely the minimum, average, and maximum of (ceg - cbg)/60 for each group.

select /*+ MONITOR ttlsc */
       st.s_name, cct.pc_name, ep.c_level, count(ep.c_id) row_count,
       round(min((ep.ceg - ep.cbg) / 60), 2) min_g_min,
       round(avg((ep.ceg - ep.cbg) / 60.0), 2) avg_g_min,
       round(max((ep.ceg - ep.cbg) / 60), 2) max_g_min
  from event_part ep, t_test tt, s_test st, cc_test cct
 where tt.t_id between 20100903 and 20100904
   and ep.t_id = tt.t_id
   and ep.t_id between 20100903 and 20100904
   and ep.s_id = st.s_id
   and ep.cc_id = cct.c_id
   and ep.s_id = 2
   and ep.cc_id = 6
 group by st.s_name, cct.pc_name, ep.c_level
 order by st.s_name, cct.pc_name, ep.c_level;
 

Table figure for ttlsc breakdown - Test case 6

Stats / Tests                                                             1              2              3              4              5
Elapsed time sec                                                       1.16           4.57           1.01          12.71           3.87
cell physical IO bytes saved by storage index                 4,697,096,192              0  4,698,832,896              0              0
cell physical IO bytes eligible for predicate offload         5,209,292,800              0  5,209,292,800              0              0
cell physical IO interconnect bytes returned by smart scan       55,906,960              0     55,906,384              0              0
cell flash cache read hits                                                9             31             10          3,891            107
CC Total Rows for Decompression                                           0     57,749,795              0              0      1,998,299

Very similar findings to the 4th test case.

Conclusion

Most warehouse-like queries I have run in our Exadata environment do well without indexes on the fact tables, so it is no surprise to me to hear that more and more people are dropping most of their indexes and taking advantage of the Exadata features.  If you like to keep the primary key indexes on your dimension tables to save yourself the hassle of resolving duplicate key issues, that seems a valid option as well.

In my environment I have yet to find a case where a bitmap index search could compete with the no-index approach; and even if we found such a case, it would still have to show significant improvements before I would choose that path.  Consider the benefit of not having to maintain the bitmap indexes after each load.  There are also several restrictions on bitmap indexes that it would be nice not to have to worry about.

Now, I mentioned that I would get back to the test 2 results, which were based on storage full table scans of partitions compressed with the HCC query option.  In the past I have performed queries on HCC tables and have seen I/O savings from the storage indexes.

Initially I suspected the test 2 results observed above to be a bug, or alternatively to be related to the fact that my HCC-compressed partitions are only 29 MB apiece versus 2.4 GB uncompressed.  Oracle support/development has confirmed it is related to the data size, as we can see from the statistic "cell physical IO bytes eligible for predicate offload", which does not get bumped up by the query.  The reason is that, after partition pruning, the table is too small for predicate push to kick in, and since predicate push doesn't kick in, the storage indexes won't kick in either.
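
The size difference mentioned above is easy to check from the data dictionary; something along these lines (using the fact table name from the test setup) lists the partition sizes side by side:

-- Compare segment sizes of the compressed and uncompressed partitions.
SELECT partition_name, ROUND(bytes/1024/1024) AS size_mb
  FROM user_segments
 WHERE segment_name = 'EVENT_PART'
   AND segment_type = 'TABLE PARTITION'
 ORDER BY partition_name;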

Please be aware I don't know the storage index internals, but I look forward to learning.

Statistics on Partitioned Tables - Part 6b - COPY_TABLE_STATS

Sigh ... these posts have become a bit of a mess.

There are so many different bits and pieces I want to illustrate and I've been trying to squeeze them in around normal work. Worse still, because I keep leaving them then coming back to them and re-running tests it's easy to lose track of where I was, despite using more or less the same test scripts each time (any new scripts tend to be sections of the main test script). I suspect my decision to only pull out the more interesting parts of the output has contributed to the difficulties too, but with around 18.5 thousand lines of output, I decided that was more or less essential.

It has got so bad that I noticed the other day that there were a couple of significant errors in the last post which are easy to miss when you're looking at detailed output and must be even less obvious if you're looking at it for the first time.

The fact that no one said much about these errors reinforces my argument with several bloggers that fewer people read and truly absorb the more technical stuff than they think. They just pick up the messages they need and take more on trust than you might imagine!

So what were the errors? Possibly more important, why did they appear? The mistakes are often as instructive as the successes.

Error 1

This is the tail-end of the subpartition stats at the end of part 5

Statistics Management for Partitioned Objects in PeopleSoft

I have implemented partitioned objects in a number of PeopleSoft systems on Oracle. Recently, I was working on a system where a table was partitioned into weekly range partitions, and I encountered a performance problem when Oracle's automatic maintenance window job to collect statistics did not run between populating the new partition for the first time and running a batch process that referenced that partition. Oracle, understandably, produced an execution plan for a statement that assumed the partition was empty, but as the partition actually held quite a lot of data, the statement ran for a long time.

The solution was to tell Oracle the truth by gathering statistics for that partition. However, I didn't want to refresh the statistics for the whole table. There were many partitions with historical data that had not changed, so I didn't need to refresh those partitions; I only needed to refresh the stale ones, and here is the problem. Unfortunately, the dbms_stats package will let you gather stale and missing statistics for all tables in a given schema, or for the whole database, but not for a named table. That is not completely unreasonable: if you are targeting a single table, then you ought to know what needs to be refreshed.
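
Gathering statistics for just the freshly populated partition is straightforward once you name it explicitly; a minimal sketch (the schema, table, and partition names are illustrative):

-- Refresh statistics for one named partition only.  GRANULARITY => 'PARTITION'
-- gathers partition-level statistics without regathering the global table stats.
BEGIN
  dbms_stats.gather_table_stats(
    ownname     => 'SYSADM',
    tabname     => 'PS_MY_PART_TABLE',
    partname    => 'MY_PART_20231120',
    granularity => 'PARTITION',
    cascade     => TRUE);
END;
/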
