
Accessing HCC compressed objects using an index

Problem

I came across another strange SQL performance issue: a SQL statement was running for over 3 hours in a User Acceptance (UA) database, compared to 1 hour in a development database. I ruled out the usual culprits such as statistics, degree of parallelism, etc. Reviewing the SQL Monitor output posted below, you can see that the SQL statement has already done 6 billion buffer gets and that steps 21 through 27 have been executed 3 billion times so far.

Statistics and execution plan

Making the Most of Oracle Exadata – A Technical Review

class="tweetmeme_button" style="float: right; margin-left: 10px;"> /> />

Over the past few weeks several people have asked me about an Exadata article entitled “Making the Most of Oracle Exadata” by Marc Fielding of Pythian. Overall it’s an informative article and touches on many of the key points of Exadata. However, even though I read (skimmed is a much better word) and briefly commented on the article back in August, on further review I found some technical inaccuracies in it, so I wanted to take the time to clarify this information for the Exadata community.

Exadata Smart Scans

Marc writes:

style="text-decoration: underline;">Smart scans: Smart scans are Exadata’s headline feature. They provide three main benefits: reduced data transfer volumes from storage servers to databases, CPU savings on database servers as workload is transferred to storage servers, and improved buffer cache efficiency thanks to column projection. Smart scans use helper processes that function much like parallel query processes but run directly on the storage servers. Operations off-loadable through smart scans include the following:

  • Predicate filtering – processing WHERE clause comparisons to literals, including logical operators and most SQL functions.
  • Column projection – by looking at a query’s SELECT clause, storage servers return only the columns requested, which is a big win for wide tables.
  • Joins – storage servers can improve join performance by using Bloom filters to recognize rows matching join criteria during the table scan phase, avoiding most of the I/O and temporary space overhead involved in the join processing.
  • Data mining model scoring – for users of Oracle Data Mining, scoring functions like PREDICT() can be evaluated on storage servers.

I personally would not choose a specific number of benefits from Exadata Smart Scan. Simply stated, the design goal behind Smart Scan is to reduce the amount of data that is sent from the storage nodes (or storage arrays) to the database nodes (why move data that is not needed?). Smart Scan does this in two ways: it applies the appropriate column projection and row restriction rules to the data as it streams off of disk. However, projection is not limited to just the columns in the SELECT clause, as Marc mentions; it also includes columns in the WHERE clause. Obviously JOIN columns need to be projected to perform the JOIN in the database nodes. The one area that Smart Scan does not help with at all is improved buffer cache efficiency. The reason for this is quite simple: Smart Scan returns data in blocks that were created on the fly just for that given query; they contain only the needed columns (projections) and have rows filtered out by the predicates (restrictions). Those blocks could not be reused unless someone ran the exact same query (think of those blocks as custom built just for that query). The other thing is that Smart Scans use direct path reads (cell smart table scan) and these reads are done into the PGA space, not the shared SGA space (buffer cache).
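
A quick way to confirm that a statement was actually serviced by Smart Scan is to compare the session-level cell statistics before and after running it. A minimal sketch, assuming access to V$MYSTAT and V$STATNAME (the statistic names are the standard ones, and the same ones reported in the test tables further down this page):

-- Hedged sketch: check the current session's Smart Scan counters.
-- Snapshot these values, run the candidate query, then snapshot again and diff.
select sn.name, ms.value
from   v$mystat ms, v$statname sn
where  ms.statistic# = sn.statistic#
and    sn.name in ('cell physical IO bytes eligible for predicate offload',
                   'cell physical IO interconnect bytes returned by smart scan');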

As most know, Exadata can easily push down simple predicate filters (WHERE c1 = ‘FOO’) that can be applied as restrictions with Smart Scan. In addition, Bloom Filters can be applied as restrictions for simple JOINs, like those commonly found in Star Schemas (Dimensional Data Models). These operations can be observed in the query execution plan by the JOIN FILTER CREATE and JOIN FILTER USE row sources. What is very cool is that Bloom Filters can also pass their list of values to Storage Indexes to aid in further I/O reduction if there is natural clustering on those columns or if the filter eliminates significant amounts of data (as with a highly selective set of values). Even if there isn’t significant data elimination via Storage Indexes, a Smart Scan Bloom Filter can be applied post-scan to prevent unneeded data from being sent to the database servers.
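
The Bloom filter row sources are easy to spot once you pull the plan from the cursor cache. A minimal sketch (dim_date and fact_sales are hypothetical tables used purely for illustration; with a parallel hash join the plan typically shows the JOIN FILTER CREATE / JOIN FILTER USE operations mentioned above):

-- Hedged sketch: a star-style join that should produce a Bloom filter.
select /*+ monitor parallel(4) */ d.calendar_month, sum(f.amount)
from   dim_date d, fact_sales f
where  f.date_id = d.date_id
and    d.calendar_year = 2010
group  by d.calendar_month;

-- Fetch the plan of the last statement and look for JOIN FILTER CREATE/USE.
select * from table(dbms_xplan.display_cursor(format => 'BASIC'));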

Exadata Storage Indexes

Marc writes:

style="text-decoration: underline;">Storage indexes: Storage indexes reduce disk I/O volumes by tracking high and low values in memory for each 1-megabyte storage region. They can be used to give partition pruning benefits without requiring the partition key in the WHERE clause, as long as one of these columns is correlated with the partition key. For example, if a table has order_date and processed_date columns, is partitioned on order_date, and if orders are processed within 5 days of receipt, the storage server can track which processed_date values are included in each order partition, giving partition pruning for queries referring to either order_date or processed_date. Other data sets that are physically ordered on disk, such as incrementing keys, can also benefit.

In Marc’s example he states there is a correlation between the two columns PROCESSED_DATE and ORDER_DATE, where PROCESSED_DATE = ORDER_DATE + [0..5 days]. That’s fine and all, but to claim that partition pruning takes place when specifying ORDER_DATE (the partition key column) or PROCESSED_DATE (a non partition key column) in the WHERE clause, because the Storage Index can be used for PROCESSED_DATE, is inaccurate. The reality is that partition pruning can only take place when the partition key, ORDER_DATE, is specified, regardless of whether a Storage Index is used for PROCESSED_DATE.

Partition Pruning and Storage Indexes are completely independent of each other, and Storage Indexes know absolutely nothing about partitions, even if the partition key column and another column have some type of correlation, as in Marc’s example. The Storage Index simply tracks which Storage Regions do or do not have rows that match the predicate filters and eliminates reading the unneeded Storage Regions.
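
This independence is easy to verify from the execution plan: pruning (or the lack of it) shows up in the Pstart/Pstop columns regardless of what a Storage Index may later do at the cell level. A minimal sketch, using a hypothetical orders table partitioned on ORDER_DATE as in Marc’s example:

-- Hedged sketch: only the partition key predicate produces pruning in the plan.
explain plan for
select count(*) from orders
where  processed_date = date '2010-09-01';  -- non partition key: Pstart/Pstop span all partitions

select * from table(dbms_xplan.display);

explain plan for
select count(*) from orders
where  order_date = date '2010-09-01';      -- partition key: Pstart/Pstop identify a single partition

select * from table(dbms_xplan.display);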

Exadata Hybrid Columnar Compression

Marc writes:

style="text-decoration: underline;">Columnar compression: Hybrid columnar compression (HCC) introduces a new physical storage concept, the compression unit. By grouping many rows together in a compression unit, and by storing only unique values within each column, HCC provides storage savings in the range of 80 90% based on the compression level selected. Since data from full table scans remains compressed through I/O and buffer cache layers, disk savings translate to reduced I/O and buffer cache work as well. HCC does, however, introduce CPU and data modification overhead that will be discussed in the next section.

The Compression Unit (CU) for Exadata Hybrid Columnar Compression (EHCC) is actually a logical construct, not a physical storage concept. The compression gains from EHCC come from the column-major organization of the rows contained in the CU and the encoding and transformations (compression) that can be done because of that organization (like values are more common within the same column across rows than across different columns in the same row). To say EHCC only stores unique values within each column is inaccurate; however, the encoding and transformation algorithms use various techniques that yield very good compression by attempting to represent the column values with as few bytes as possible.

Data from an EHCC full table scan only remains fully compressed if the table scan is not a Smart Scan, in which case the compressed CUs are passed directly up to the buffer cache and the decompression is then done by the database servers. If the EHCC full table scan is a Smart Scan, however, then only the columns and rows being returned to the database nodes are decompressed by the Exadata servers; predicate evaluation can be performed directly on the EHCC compressed data.

Read more: Exadata Hybrid Columnar Compression Technical White Paper
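
To get a feel for the different EHCC levels on your own data, one simple (if brute force) approach is to build a copy of the table at each level and compare segment sizes. A minimal sketch (my_fact is a hypothetical source table; the COMPRESS FOR syntax is the same one used in the experiment at the bottom of this post):

-- Hedged sketch: one copy per compression level, then compare sizes.
create table my_fact_qlow  compress for query low    as select * from my_fact;
create table my_fact_qhigh compress for query high   as select * from my_fact;
create table my_fact_ahigh compress for archive high as select * from my_fact;

select segment_name, round(bytes/1024/1024) mb
from   dba_segments
where  segment_name in ('MY_FACT', 'MY_FACT_QLOW', 'MY_FACT_QHIGH', 'MY_FACT_AHIGH')
order  by bytes desc;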

Marc also writes:

Use columnar compression judiciously: Hybrid columnar compression (HCC) in Exadata has the dual advantages of reducing storage usage and reducing I/O for large reads by storing data more densely. However, HCC works only when data is inserted using bulk operations. If non-compatible operations like single-row inserts or updates are attempted, Exadata reverts transparently to the less restrictive OLTP compression method, losing the compression benefits of HCC. When performing data modifications such as updates or deletes, the entire compression unit must be uncompressed and written in OLTP-compressed form, involving an additional disk I/O penalty as well.

EHCC does require bulk direct path load operations to work. This is because the compression algorithms used for EHCC need sets of rows as input, not single rows. What is incorrect in Marc’s comments is that when a row in a CU is modified (UPDATE or DELETE), the entire CU is not uncompressed and changed to non-EHCC compression; only the rows that are UPDATED are migrated to non-EHCC compression. For DELETEs no row migration takes place at all. This is easily demonstrated by tracking ROWIDs, as in the example at the bottom of this post.

Exadata Smart Flash Cache

Marc writes:

style="text-decoration: underline;">Flash cache: Exadata s flash cache supplements the database servers buffer caches by providing a large cache of 384 GB per storage server and up to 5 TB in a full Oracle Exadata Database Machine, considerably larger than the capacity of memory caches. Unlike generic caches in traditional SAN storage, the flash cache understands database-level operations, preventing large non-repeated operations such as backups and large table scans from polluting the cache. Since flash storage is nonvolatile, it can cache synchronous writes, providing performance benefits to commit-intensive applications.

While flash (SSD) storage is indeed non-volatile, the Exadata Smart Flash Cache is volatile: it loses all of its contents if the Exadata server is power cycled. Also, since the Exadata Smart Flash Cache is currently a write-through cache, it offers no direct performance advantage to commit-intensive applications. It does, however, offer an indirect performance advantage by servicing read requests that would otherwise be serviced by the HDDs, thus allowing the HDDs to service more write operations.

Read more: Exadata Smart Flash Cache Technical White Paper
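
For what it’s worth, the flash cache behaviour can be influenced per segment and observed from the session statistics. A minimal sketch (my_hot_table is a hypothetical table; CELL_FLASH_CACHE is the segment storage attribute and 'cell flash cache read hits' is the statistic that tracks reads satisfied from flash):

-- Hedged sketch: ask the cells to keep this segment in the flash cache ...
alter table my_hot_table storage (cell_flash_cache keep);

-- ... and check how many reads this session satisfied from flash.
select sn.name, ms.value
from   v$mystat ms, v$statname sn
where  ms.statistic# = sn.statistic#
and    sn.name = 'cell flash cache read hits';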

EHCC UPDATE and DELETE Experiment

--
-- EHCC UPDATE example - only modified rows migrate
--

SQL> create table order_items1
  2  compress for query high
  3  as
  4  select rownum as rnum, x.*
  5  from order_items x
  6  where rownum <= 10000;

Table created.

SQL> create table order_items2
  2  as
  3  select rowid as rid, x.*
  4  from order_items1 x;

Table created.

SQL> update order_items1
  2  set quantity=10000
  3  where rnum in (1,100,1000,10000);

4 rows updated.

SQL> commit;

Commit complete.

SQL> select b.rnum, b.rid before_rowid, a.rowid after_rowid
  2  from order_items1 a, order_items2 b
  3  where a.rnum(+) = b.rnum
  4  and (a.rowid != b.rid
  5    or a.rowid is null)
  6  order by b.rnum
  7  ;

           RNUM BEFORE_ROWID       AFTER_ROWID
--------------- ------------------ ------------------
              1 AAAWSGAAAAAO1aTAAA AAAWSGAAAAAO1aeAAA
            100 AAAWSGAAAAAO1aTABj AAAWSGAAAAAO1aeAAB
           1000 AAAWSGAAAAAO1aTAPn AAAWSGAAAAAO1aeAAC
          10000 AAAWSGAAAAAO1aXBEv AAAWSGAAAAAO1aeAAD

--
-- EHCC DELETE example - no rows migrate
--

SQL> create table order_items1
  2  compress for query high
  3  as
  4  select rownum as rnum, x.*
  5  from order_items x
  6  where rownum <= 10000;

Table created.

SQL> create table order_items2
  2  as
  3  select rowid as rid, x.*
  4  from order_items1 x;

Table created.

SQL> delete from order_items1
  2  where rnum in (1,100,1000,10000);

4 rows deleted.

SQL> commit;

Commit complete.

SQL> select b.rnum, b.rid before_rowid, a.rowid after_rowid
  2  from order_items1 a, order_items2 b
  3  where a.rnum(+) = b.rnum
  4  and (a.rowid != b.rid
  5    or a.rowid is null)
  6  order by b.rnum
  7  ;

           RNUM BEFORE_ROWID       AFTER_ROWID
--------------- ------------------ ------------------
              1 AAAWSIAAAAAO1aTAAA
            100 AAAWSIAAAAAO1aTABj
           1000 AAAWSIAAAAAO1aTAPn
          10000 AAAWSIAAAAAO1aXBEv
name="googleone_share_1" style="position:relative;z-index:5;float: right; margin-left: 10px;">{lang: 'ar'} class='wpfblike' >

Exadata, to index or not to index, that is the question

We will take a look at this question in this blog post by testing several different approaches and comparing the elapsed time and the statistics for each scenario.

The tests have been performed on a quarter rack Exadata Database Machine (2 db nodes with 16 cores each and 3 storage servers). The database is set up for a data warehouse implementation and had been patched with Bundle Patch 5 at the time of testing.

The tests were executed on a table with 403M rows distributed over 14 range partitions: 7 non-compressed and 7 compressed with the HCC query option. Each test spans two partitions covering 57.5M rows. Please note the dimension tables contain 4,000 or fewer rows. The data is production, event-based data, meaning rows are generated when certain events occur.

TABLE_NAME  PARTITION_NAME COMPRESS COMPRESS_FOR LAST_ANALYZED  SAMPLE_SIZE   NUM_ROWS
EVENT_PART  TEST_20100901  DISABLED              27-SEP-10         28793000   28793000
EVENT_PART  TEST_20100902  DISABLED              27-SEP-10         28793000   28793000
EVENT_PART  TEST_20100903  DISABLED              27-SEP-10         28793000   28793000
EVENT_PART  TEST_20100904  DISABLED              27-SEP-10         28793000   28793000
EVENT_PART  TEST_20100905  DISABLED              27-SEP-10         28793000   28793000
EVENT_PART  TEST_20100906  DISABLED              27-SEP-10         28793000   28793000
EVENT_PART  TEST_20100907  DISABLED              27-SEP-10         28793000   28793000
EVENT_PART  TEST_20100908  ENABLED  QUERY HIGH   27-SEP-10         28793000   28793000
EVENT_PART  TEST_20100909  ENABLED  QUERY HIGH   27-SEP-10         28793000   28793000
EVENT_PART  TEST_20100910  ENABLED  QUERY HIGH   27-SEP-10         28793000   28793000
EVENT_PART  TEST_20100911  ENABLED  QUERY HIGH   27-SEP-10         28793000   28793000
EVENT_PART  TEST_20100912  ENABLED  QUERY HIGH   27-SEP-10         28793000   28793000
EVENT_PART  TEST_20100913  ENABLED  QUERY HIGH   27-SEP-10         28793000   28793000
EVENT_PART  TEST_20100914  ENABLED  QUERY HIGH   27-SEP-10         28793000   28793000
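
(For reference, a listing like the one above can be pulled straight from the data dictionary; a minimal sketch below, assuming the 11.2 DBA_TAB_PARTITIONS column names.)

select table_name, partition_name, compression, compress_for,
       last_analyzed, sample_size, num_rows
from   dba_tab_partitions
where  table_name = 'EVENT_PART'
order  by partition_name;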
 

Each test case/SQL has been executed five times, once under each of the following scenarios:

(1) Without any bitmap or regular indexes on non-compressed partitions
    Storage FTS on event_part and dimensions
(2) Without any bitmap or regular indexes on HCC partitions
    Storage FTS on event_part and dimensions
(3) With primary key constraints on dimension tables only
    Storage FTS on event_part with primary key lookup on dimensions
(4) With bitmap and primary key indexes on non-compressed partitions
    Bitmap index lookup on event_part and primary key lookup on dimensions
(5) With bitmap and primary key indexes on HCC partitions
    Bitmap index lookup on event_part and primary key lookup on dimensions

The test cases used are from a warehouse environment, and I have modified the column and table names. For the bitmap test cases I had to hint the queries to ensure the bitmap indexes were actually used.
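
The hinting itself was nothing exotic; a minimal sketch of the kind of hint that forces the bitmap access path (the index name is hypothetical):

-- Hedged sketch: force bitmap access on the fact table via INDEX_COMBINE.
select /*+ monitor index_combine(ep event_part_sid_bix) */
       st.s_name, count(ep.c_id)
from   event_part ep, s_test st
where  ep.s_id = st.s_id
and    ep.s_id = 1
group  by st.s_name;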

The output from the test cases is over 20K lines, so I have summed up the elapsed time and a few statistics in the tables below to provide a better overview.

select /*+ MONITOR basic */
       st.s_name, pt.p_name, cct.pc_name, ep.c_level, count(ep.c_id)
from   event_part ep, t_test tt, s_test st, p_test pt, cc_test cct
where  tt.t_id between 20100901 and 20100902
and    ep.t_id = tt.t_id
and    ep.t_id between 20100901 and 20100902
and    ep.s_id = st.s_id
and    ep.p_id = pt.p_id
and    ep.cc_id = cct.c_id
group  by st.s_name, pt.p_name, cct.pc_name, ep.c_level
order  by st.s_name, pt.p_name, cct.pc_name, ep.c_level;
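
Each statement carries a MONITOR hint, so the full execution details can be pulled afterwards with the SQL Monitor API; a minimal sketch (substitute the real SQL_ID of the statement you want to inspect):

-- Hedged sketch: fetch the SQL Monitor report for a monitored statement.
select dbms_sqltune.report_sql_monitor(sql_id => '&sql_id', type => 'TEXT') as report
from   dual;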
 

Table figure for basic breakdown – Test case 1

Stats / Tests                                               1              2              3              4              5
Elapsed time sec                                            8.87           12.00          09.22          185.05         149.55
cell physical IO bytes saved by storage index               0              0              0              0              0
cell physical IO bytes eligible for predicate offload       5,209,325,568  0              5,209,325,568  0              0
cell physical IO interconnect bytes returned by smart scan  2,562,203,600  0              2,562,201,584  0              0
cell flash cache read hits                                  9              26             9              9,290          2,063
CC Total Rows for Decompression                                            57,586,000                                   57,586,000

This is a basic query, which is used for high-level summaries and serves as a good baseline to compare the other test cases against. There is no additional filter in the WHERE clause for this test case, so we will not benefit from any storage indexes here. The first 3 tests are without any indexes on the fact table and perform much better than tests 4 and 5, and we should of course not expect the CBO to follow this path anyway. It is evident for tests 1 and 3 that the performance gained is supported by the storage server offloading and the smart scans. The CC stats above for test 2 tell us that the db node performs the decompression, so this test has to burn extra CPU cycles compared to tests 1 and 3. There is more to be mentioned for test 2, but I’ll try to cover that in the conclusion.

select /*+ MONITOR lc breakdown */
       st.s_name, pt.p_name, cct.pc_name, ep.c_level, count(ep.c_id)
from   event_part ep, t_test tt, s_test st, p_test pt, cc_test cct
where  tt.t_id between 20100903 and 20100904
and    ep.t_id = tt.t_id
and    ep.t_id between 20100903 and 20100904
and    ep.s_id = st.s_id
and    ep.p_id = pt.p_id
and    ep.cc_id = cct.c_id
and    ep.c_id = 7
group  by st.s_name, pt.p_name, cct.pc_name, ep.c_level
order  by st.s_name, pt.p_name, cct.pc_name, ep.c_level;
 

Table figure for LC breakdown - Test case 2

Stats / Tests                                               1              2              3              4              5
Elapsed time sec                                            2.05           19.17          1.84           30.33          36.58
cell physical IO bytes saved by storage index               4,186,636,288  0              4,186,636,288  0              0
cell physical IO bytes eligible for predicate offload       5,209,292,800  0              5,209,292,800  0              0
cell physical IO interconnect bytes returned by smart scan  317,496,848    0              317,497,280    0              0
cell flash cache read hits                                  18             59             36             1,043          219
CC Total Rows for Decompression                             0              57,782,554     0              0              7,842,364

Similar findings to the first test case; however, in this test case we are performing the breakdown for a specific ID, and therefore the performance of tests 1 and 3 improved further from the IO saved by the Storage Index. For this test case I ran tests 1 and 3 on the same partitions, and it is worth noticing that the second time around the savings from the Storage Index improved; so the storage indexes are further maintained/improved as we select data from the tables and partitions.

select /*+ MONITOR lp breakdown */
       st.s_name, pt.p_name, cct.pc_name, ep.c_level, count(ep.c_id)
from   event_part ep, t_test tt, s_test st, p_test pt, cc_test cct
where  tt.t_id between 20100905 and 20100906
and    ep.t_id = tt.t_id
and    ep.t_id between 20100905 and 20100906
and    ep.s_id = st.s_id
and    ep.p_id = pt.p_id
and    ep.cc_id = cct.c_id
and    ep.p_id = 4611686019802841877
group  by st.s_name, pt.p_name, cct.pc_name, ep.c_level
order  by st.s_name, pt.p_name, cct.pc_name, ep.c_level;
 

Table figure for lp breakdown - Test case 3

Stats / Tests                                               1              2              3              4              5
Elapsed time sec                                            2.99           6.01           2.72           49.22          39.29
cell physical IO bytes saved by storage index               2,623,143,936  0              2,623,799,296  0              0
cell physical IO bytes eligible for predicate offload       5,209,325,568  0              5,209,325,568  0              0
cell physical IO interconnect bytes returned by smart scan  674,439,456    0              674,436,288    0              0
cell flash cache read hits                                  64             44             10             2,113          635
CC Total Rows for Decompression                             0              57,979,108     0              0              15,582,048

Similar findings to the second test case; this test is just performed on a different ID, which has a higher distinct count than the ID we tested in test case 2. As a result of that, and of how the data was sorted during insert, we are seeing less IO saved by the storage index.

select /*+ MONITOR spcl breakdown */
       st.s_name, pt.p_name, cct.pc_name, ep.c_level, count(ep.c_id)
from   event_part ep, t_test tt, s_test st, p_test pt, cc_test cct
where  tt.t_id between 20100906 and 20100907
and    ep.t_id = tt.t_id
and    ep.t_id between 20100906 and 20100907
and    ep.s_id = st.s_id
and    ep.p_id = pt.p_id
and    ep.cc_id = cct.c_id
and    ep.s_id = 1
and    ep.cc_id = 7
and    ep.p_id = 4611686019802841877
group  by st.s_name, pt.p_name, cct.pc_name, ep.c_level
order  by st.s_name, pt.p_name, cct.pc_name, ep.c_level;
 

Table figure for spcl breakdown – Test case 4

Stats / Tests                                               1              2              3              4              5
Elapsed time sec                                            1.67           13.69          01.14          12.77          7.90
cell physical IO bytes saved by storage index               4,531,191,808  0              4,532,174,848  0              0
cell physical IO bytes eligible for predicate offload       5,209,325,568  0              5,209,325,568  0              0
cell physical IO interconnect bytes returned by smart scan  237,932,736    0              237,933,312    0              0
cell flash cache read hits                                  73             52             10             594            183
CC Total Rows for Decompression                             0              57,782,554     0              0              5,614,752

This test case is performed with a WHERE clause on multiple IDs. Again tests 1 and 3 are taking advantage of the Exadata features and are performing well. Tests 4 and 5 are still not close to tests 1 or 3, but have definitely become a bit more competitive. Comparing the two HCC tests (2 and 5), test 5 seems to do better as it only has to burn CPU cycles for about 10% of the result set of test 2. A valid question to ask here would be why we are not seeing any benefit from either storage offloading or indexing in test 2, but again I’ll defer that discussion to the conclusion.

select /*+ MONITOR ttl */
       st.s_name, cct.pc_name, ep.c_level, count(ep.c_id) row_count,
       round(min((ep.ceg - ep.cbg) / 60), 2) min_g_min,
       round(avg((ep.ceg - ep.cbg) / 60.0), 2) avg_g_min,
       round(max((ep.ceg - ep.cbg) / 60), 2) max_g_min
from   event_part ep, t_test tt, s_test st, cc_test cct
where  tt.t_id between 20100901 and 20100902
and    ep.t_id = tt.t_id
and    ep.t_id between 20100901 and 20100902
and    ep.s_id = st.s_id
and    ep.cc_id = cct.c_id
group  by st.s_name, cct.pc_name, ep.c_level
order  by st.s_name, cct.pc_name, ep.c_level;
 

Table figure for ttl breakdown - Test case 5

Stats / Tests                                               1              2              3              4              5
Elapsed time sec                                            12.82          15.50          11.67          254.26         304.92
cell physical IO bytes saved by storage index               0              0              0              0              0
cell physical IO bytes eligible for predicate offload       5,209,325,568  0              5,209,325,568  0              0
cell physical IO interconnect bytes returned by smart scan  2,328,566,432  0              2,328,567,440  0              0
cell flash cache read hits                                  9              15             9              132,467        2,341
CC Total Rows for Decompression                             0              57,586,000     0              0              61,643,318

Very similar findings to the first test case; the only difference is that this query also examines how long something took, via the min/avg/max columns.

select /*+ MONITOR ttlsc */
       st.s_name, cct.pc_name, ep.c_level, count(ep.c_id) row_count,
       round(min((ep.ceg - ep.cbg) / 60), 2) min_g_min,
       round(avg((ep.ceg - ep.cbg) / 60.0), 2) avg_g_min,
       round(max((ep.ceg - ep.cbg) / 60), 2) max_g_min
from   event_part ep, t_test tt, s_test st, cc_test cct
where  tt.t_id between 20100903 and 20100904
and    ep.t_id = tt.t_id
and    ep.t_id between 20100903 and 20100904
and    ep.s_id = st.s_id
and    ep.cc_id = cct.c_id
and    ep.s_id = 2
and    ep.cc_id = 6
group  by st.s_name, cct.pc_name, ep.c_level
order  by st.s_name, cct.pc_name, ep.c_level;
 

Table figure for ttlsc breakdown - Test case 6

Stats / Tests                                               1              2              3              4              5
Elapsed time sec                                            1.16           4.57           1.01           12.71          03.87
cell physical IO bytes saved by storage index               4,697,096,192  0              4,698,832,896  0              0
cell physical IO bytes eligible for predicate offload       5,209,292,800  0              5,209,292,800  0              0
cell physical IO interconnect bytes returned by smart scan  55,906,960     0              55,906,384     0              0
cell flash cache read hits                                  9              31             10             3,891          107
CC Total Rows for Decompression                             0              57,749,795     0              0              1,998,299

 Very similar findings as we saw for the 4th test case.

 Conclusion

Most warehouse-like queries I have performed in our Exadata environment do well without indexes on the fact tables. So it is no surprise to me to hear that more and more people are dropping most of their indexes and taking advantage of the Exadata features. If you would like to keep the primary key indexes on your dimension tables to avoid the hassle of resolving duplicate key issues, that seems to be a valid option as well.

In my environment I have yet to find a case where the bitmap index search could compete with the no-index approach; and even if we did find such a case, it would still have to show significant improvements before I would choose that path. Consider the benefit of not having to maintain the bitmap indexes after each load. There are also several restrictions with bitmap indexes that it would be nice not to have to worry about.

Now, I mentioned that I would get back to the test 2 results, which were based on a Storage FTS on partitions compressed with the HCC query option. In the past I have performed queries on HCC tables and have seen IO savings from the Storage Indexes.

Initially I suspected the test 2 results observed above to be a bug, or alternatively to be related to the fact that my HCC compressed partitions are only 29 MB apiece versus 2.4 GB uncompressed. Oracle support/development has confirmed it to be related to the data size, as we can see from the stat "cell physical IO bytes eligible for predicate offload", which doesn't get bumped up by the query. The reason is that after partition pruning the table is too small for predicate push to kick in, and since predicate push doesn't kick in, the Storage Indexes won't kick in either.
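
The size difference itself is easy to confirm from the dictionary; a minimal sketch comparing one non-compressed and one HCC partition of the test table:

-- Hedged sketch: segment sizes of a non-compressed vs. an HCC compressed partition.
select partition_name, round(bytes/1024/1024, 1) mb
from   dba_segments
where  segment_name = 'EVENT_PART'
and    partition_name in ('TEST_20100901', 'TEST_20100908')
order  by partition_name;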

Please be aware that I don't know the Storage Index internals, but I look forward to learning.

What can EHCC do for you?

By now you have probably heard about Exadata Hybrid Columnar Compression (EHCC), but what benefit can EHCC give you in terms of storage and performance savings?
 
As always, it depends on your data. Below I’ll share some of the test results I came across when testing EHCC. The data used for these tests is a short version of a fact table. The tests were performed on a quarter rack Exadata Database Machine (2 db nodes with 16 cores each and 3 storage servers).
 
As you may already have noticed, English is my second language, so please excuse me for spelling, grammar and whatever errors you may find in this post ;-)
 
- Test block compression against EHCC compressed tables
First let’s look at the time it took to create the test data and the compression rate.
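
(As an aside, rather than building full copies, the expected compression ratio can also be estimated up front with the compression advisor; a minimal sketch, assuming the 11.2 DBMS_COMPRESSION interface and a scratch tablespace named USERS.)

-- Hedged sketch: estimate the QUERY HIGH ratio for the load table before creating copies.
set serveroutput on
declare
  l_blkcnt_cmp   pls_integer;
  l_blkcnt_uncmp pls_integer;
  l_row_cmp      pls_integer;
  l_row_uncmp    pls_integer;
  l_cmp_ratio    number;
  l_comptype_str varchar2(100);
begin
  dbms_compression.get_compression_ratio(
    scratchtbsname => 'USERS',
    ownname        => user,
    tabname        => 'HE100_LOAD_TEST',
    partname       => null,
    comptype       => dbms_compression.comp_for_query_high,
    blkcnt_cmp     => l_blkcnt_cmp,
    blkcnt_uncmp   => l_blkcnt_uncmp,
    row_cmp        => l_row_cmp,
    row_uncmp      => l_row_uncmp,
    cmp_ratio      => l_cmp_ratio,
    comptype_str   => l_comptype_str);
  dbms_output.put_line('Estimated compression ratio: ' || l_cmp_ratio);
end;
/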
 
-- Non-compressed table
SQL> create table he100 PARALLEL 64 as select /*+ PARALLEL 64 */ * from he100_load_test;
Table created.
 
Elapsed: 00:00:33.97  
Did not seem to be CPU bound - Saw CPU utilization around 10 - 25%.
 
-- Block compressed table
SQL> create table he100_block compress for all operations PARALLEL 64 as select /*+ PARALLEL 64 */ *
                      from he100_load_test;
 
Elapsed: 00:00:28.51
Noticed CPU utilization to be around 40 - 70%
 
-- Table creation with EHCC query option