Oakies Blog Aggregator

Rule Rules

Everybody knows you shouldn’t be using the Rule-based optimizer (RBO) any more – everyone, that is, except some of the folks at Oracle Corp. I had a conversation a few weeks ago with someone who was having a problem with their standby database on 10.2 because a query against v$archive_gap was taking a very long [...]

Funny Developer Tricks - first_rows(999999999)

I ran across a funny SQL statement recently (funny strange, not funny ha ha - well actually funny ha ha too I guess). It had a first_rows hint like so:

 
 
select /*+ FIRST_ROWS (999999999)  */ 
"MANDT" , "OPBEL" , "OPUPW" , "OPUPK" , "OPUPZ" , "BUKRS" , "GSBER" , "BUPLA" , "SEGMENT" , 
"AUGST" , "GPART" , "VTREF" , "VTRE2" , "VKONT" , "ABWBL" , "ABWTP" , "ABWKT" , "APPLK" , 
"HVORG" , "TVORG" , "KOFIZ" , "SPART" , "HKONT", "MWSKZ" , "MWSZKZ" , "XANZA" , "STAKZ" , 
"BLDAT" , "BUDAT" , "OPTXT" , "WAERS" , "FAEDN", "FAEDS" , "VERKZ" , "STUDT" , "SKTPZ" , 
"XMANL" , "KURSF" , "BETRH" , "BETRW" , "BETR2" , "BETR3" , "SKFBT" , "SBETH" , "SBETW" , 
"SBET2" , "SBET3" , "MWSKO" , "MWVKO" , "TXRUL" , "SPZAH" , "PYMET" , "PYBUK" , "PERNR" , 
"GRKEY" , "PERSL" , "XAESP" , "AUGDT" , "AUGBL" , "AUGBD" , "AUGRD" , "AUGWA" , "AUGBT" , 
"AUGBS" , "AUGSK" , "AUGVD" , "AUGOB" , "WHANG" , "WHGRP" , "XEIPH" , "MAHNV" , "MANSP" , 
"XAUGP" , "ABRZU" , "ABRZO" , "FDGRP" , "FDLEV" , "FDZTG", "FDWBT" , "XTAUS" , "AUGRS" , 
"PYGRP" , "PDTYP" , "SPERZ" , "INFOZ" , "TXJCD" , "TXDAT" ,"VBUND" , "KONTT" , "KONTL" , 
"OPSTA" , "BLART" , "EMGPA" , "EMBVT" , "EMADR" , "IKEY" , "EUROU" , "XRAGL" , "ASTKZ" , 
"ASBLG" , "XBLNR" , "INKPS" , "RNDPS" , "QSSKZ" , "QSSEW" , "QSPTP" , "QSSHB" , "QBSHB" , 
"QSZNR" , "RFUPK" , "STRKZ" , "FITPR" , "XPYOR" , "LANDL" , "INTBU", "EMCRD" , "C4EYE" , 
"C4EYP" , "SCTAX" , "STTAX" , "STZAL" , "ORUPZ" , "NEGBU" , "SUBAP" , "PSWSL" , "PSWBT" , 
"PSWTX" , "PSGRP" , "FINRE" , "RDSTA" , "RDSTB" , "DEAKTIV" , "SGRKEY", "SOLLDAT" , "RECPT" , 
"TOCOLLECT" , "EINMALANF" , "VORAUSZAHL" , "APERIODIC" , "ABRABS" , "GRBBP" , "ASMETH" , 
"INT_CROSSREFNO" , "ETHPPM" , "PAYFREQID" , "INVOICING_PARTY" , "PPMST" , "LOGNO" , "APERIODICT" , 
"ADD_REFOBJ" , "ADD_REFOBJID" , "ADD_SERVICE" , "ZZAGENCY" , "ZZ_EXT_REF" , "ZZ_PAY_AGENT" , 
"ZZFUNDSOURCE" , "ZZINSTALLMENT" , "Z_PROD_ID" , "ZZUSERNAME" , "ZZWF_STAT" , "ZZPAYCHANNEL" 
FROM "DFKKOP" 
WHERE "MANDT" = :A0 -- NDV=1
AND "BUKRS" = :A1 -- NDV=1 
AND "AUGST" = :A2 -- NDV=2 
AND "FAEDN" < :A3 -- less than today probably all records
AND ( "PYMET" = :A4 OR "PYMET" = :A5 ) -- NDV=8
AND ROWNUM <= :A6; -- less than 1B

Yes - that’s a first rows hint with about a billion as the number of rows to optimize for.

The reason I noticed it is that it ran for 15 hours before failing with a Snapshot Too Old error. The attempted solution was to restart it the next day (thinking maybe it would run better the second time, I guess). The table has roughly 100M rows. There was no index on PYMET, which is unfortunate, as the two values requested account for only about 0.15% of the rows (not 15%, 0.15%). The optimizer chooses an index on MANDT, BUKRS, AUGST, FAEDN and, as you might expect, it doesn’t work very well (see the NDV comments I added to the statement).
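The arithmetic behind "doesn't work very well" is easy to sketch. Here's a rough back-of-the-envelope model (in Python) using only the figures quoted above - 100M rows, the NDV comments, and the 0.15% PYMET selectivity; the uniform-distribution assumption is mine, for illustration:

```python
# Rough selectivity arithmetic for the DFKKOP query, using the figures
# quoted in the post (100M rows, NDV comments, 0.15% for PYMET).
total_rows = 100_000_000

# Chosen index is on MANDT, BUKRS, AUGST, FAEDN. Assuming uniform value
# distribution, each equality predicate filters ~1/NDV of the rows
# (NDV=1, 1, 2), and FAEDN < :A3 matches "probably all records" (~1.0).
index_sel = (1 / 1) * (1 / 1) * (1 / 2) * 1.0
rows_from_index = round(total_rows * index_sel)   # rowids the index plan visits

# The PYMET filter only applies after the table access.
final_rows = round(total_rows * 0.0015)           # rows actually wanted

print(rows_from_index, final_rows)
```

Visiting on the order of 50M rows by rowid to keep roughly 150K of them is why the index plan loses so badly to the one-hour full scan, and why an index leading on PYMET would have changed the picture entirely.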

Funny things:

The First_Rows hint is requesting that the Oracle optimizer return the first billion records as fast as possible (even though there are only 100M rows in the table).

The documentation for the First_Rows hint in 11g looks like this:

The FIRST_ROWS hint instructs Oracle to optimize an individual SQL statement for fast response, choosing the plan that returns the first n rows most efficiently. For integer, specify the number of rows to return.

For example, the optimizer uses the query optimization approach to optimize the following statement for best response time:

SELECT /*+ FIRST_ROWS(10) */ employee_id, last_name, salary, job_id
FROM employees
WHERE department_id = 20;

In this example each department contains many employees. The user wants the first 10 employees of department 20 to be displayed as quickly as possible.

So I can see where the developers might have interpreted this as the ever elusive “Go Fast” hint.

The developers also added “and rownum < 999999999” to the where clause, which limits the number of rows that can be returned. I’m not sure whether they knew it or not, but this clause also has the same effect as the hint. That is, the clause causes the optimizer to modify its calculations as if a first_rows_N hint had been applied. Maybe the developers weren’t getting the “go fast” behavior they expected from the hint and, after doing some research, found that the “rownum <” syntax would do basically the same thing. I’m guessing that’s the case because I can’t see why they would really want to limit the number of rows coming back, but I’m not sure.

It's a very odd statement, because the First_Rows hint tends to push the optimizer towards index usage, and this statement was behaving badly precisely because it was using an index (a full table scan took only about 1 hour). Regardless of what the developers were trying to do, the fact that they used such a big number caused the optimizer to ignore the hint anyway. Since the table only had 100M rows and the parameter was 1B, the hint was ignored (or at least the "First K Rows" modifications to the optimizer calculations were not used). The same applies to the "rownum <"-induced behavior, by the way.

Here are excerpts from a couple of 10053 trace files showing some of the details:

First a trace file from a select using a first_rows(10).

blah blah blah
...
First K Rows: Setup begin
...
First K Rows: Setup end
...
***************************************
OPTIMIZER STATISTICS AND COMPUTATIONS
***************************************
GENERAL PLANS
***************************************
Considering cardinality-based initial join order.
Permutations for Starting Table :0
***********************
Join order[1]:  SKEW[A]#0
***********************
Best so far: Table#: 0  cost: 10258.1471  card: 32000004.0000  bytes: 352000044
First K Rows: K = 10.00, N = 32000004.00
First K Rows: old pf = -1.0000000, new pf = 0.0000003
***************************************
SINGLE TABLE ACCESS PATH (First K Rows)
  -----------------------------------------
  BEGIN Single Table Cardinality Estimation
  -----------------------------------------
  Table: SKEW  Alias: A
    Card: Original: 10  Rounded: 10  Computed: 10.00  Non Adjusted: 10.00
  -----------------------------------------
  END   Single Table Cardinality Estimation
  -----------------------------------------
  Access Path: TableScan
    Cost:  2.00  Resp: 2.00  Degree: 0
      Cost_io: 2.00  Cost_cpu: 9321
      Resp_io: 2.00  Resp_cpu: 9321
kkofmx: index filter:"A"."COL1">0
  Access Path: index (RangeScan)
    Index: SKEW_COL1
    resc_io: 13.00  resc_cpu: 96279
    ix_sel: 1  ix_sel_with_filters: 1
    Cost: 13.02  Resp: 13.02  Degree: 1
  Access Path: index (skip-scan)
    SS sel: 1  ANDV (#skips): 10
    SS io: 10.00 vs. table scan io: 2.00
    Skip Scan rejected
  Access Path: index (FullScan)
    Index: SKEW_COL2_COL1
    resc_io: 14.00  resc_cpu: 103400
    ix_sel: 1  ix_sel_with_filters: 1
    Cost: 14.02  Resp: 14.02  Degree: 1
  ****** trying bitmap/domain indexes ******
  ****** finished trying bitmap/domain indexes ******
  Best:: AccessPath: IndexRange  Index: SKEW_COL1
         Cost: 13.02  Degree: 1  Resp: 13.02  Card: 10.00  Bytes: 11
First K Rows: unchanged join prefix len = 1
***********************
Join order[1]:  SKEW[A]#0
***********************
Best so far: Table#: 0  cost: 13.0227  card: 10.0000  bytes: 110
*********************************
Number of join permutations tried: 1
*********************************
Final - First K Rows Plan:  Best join order: 1
  Cost: 13.0227  Degree: 1  Card: 10.0000  Bytes: 110
  Resc: 13.0227  Resc_io: 13.0000  Resc_cpu: 96279
  Resp: 13.0227  Resp_io: 13.0000  Resc_cpu: 96279
kkoipt: Query block SEL$1 (#0)
******* UNPARSED QUERY IS *******
SELECT /*+ FIRST_ROWS (10) */ "A"."PK_COL" "PK_COL" FROM "KSO"."SKEW" "A" WHERE "A"."COL1">0
kkoqbc-subheap (delete addr=0xf62bdf2c, in-use=23752, alloc=24592)
kkoqbc-end
          : call(in-use=21112, alloc=49112), compile(in-use=39416, alloc=40744)
apadrv-end: call(in-use=21112, alloc=49112), compile(in-use=39936, alloc=40744)
 
sql_id=3n4vu47jvx7qg.
Current SQL statement for this session:
select /*+ first_rows(10) */ pk_col from kso.skew a where col1 > 0
 
============
Plan Table
============
------------------------------------------------+-----------------------------------+
| Id  | Operation                    | Name     | Rows  | Bytes | Cost  | Time      |
------------------------------------------------+-----------------------------------+
| 0   | SELECT STATEMENT             |          |       |       |    13 |           |
| 1   |  TABLE ACCESS BY INDEX ROWID | SKEW     |    10 |   110 |    13 |  00:00:01 |
| 2   |   INDEX RANGE SCAN           | SKEW_COL1|       |       |     3 |  00:00:01 |
------------------------------------------------+-----------------------------------+

And now an example with a number larger than the number of rows in the table (the optimizer recognizes the hint but ignores it once it determines that the parameter is larger than the expected number of rows).

blah blah blah
...
First K Rows: Setup begin
...
First K Rows: Setup end
...
***************************************
OPTIMIZER STATISTICS AND COMPUTATIONS
***************************************
GENERAL PLANS
***************************************
Considering cardinality-based initial join order.
Permutations for Starting Table :0
***********************
Join order[1]:  SKEW[A]#0
***********************
Best so far: Table#: 0  cost: 10258.1471  card: 32000004.0000  bytes: 352000044
*********************************
Number of join permutations tried: 1
*********************************
Final - All Rows Plan:  Best join order: 1
  Cost: 10258.1471  Degree: 1  Card: 32000004.0000  Bytes: 352000044
  Resc: 10258.1471  Resc_io: 8323.0000  Resc_cpu: 8195767863
  Resp: 10258.1471  Resp_io: 8323.0000  Resc_cpu: 8195767863
kkoipt: Query block SEL$1 (#0)
******* UNPARSED QUERY IS *******
SELECT /*+ FIRST_ROWS (33000000) */ "A"."PK_COL" "PK_COL" FROM "KSO"."SKEW" "A" WHERE "A"."COL1">0
kkoqbc-subheap (delete addr=0xf62bdf2c, in-use=13620, alloc=16344)
kkoqbc-end
          : call(in-use=17088, alloc=49112), compile(in-use=37632, alloc=40744)
apadrv-end: call(in-use=17088, alloc=49112), compile(in-use=38152, alloc=40744)
 
sql_id=bfzsrf3z0nbr9.
Current SQL statement for this session:
select /*+ first_rows(33000000) */ pk_col from kso.skew a where col1 > 0
 
============
Plan Table
============
-------------------------------------+-----------------------------------+
| Id  | Operation          | Name    | Rows  | Bytes | Cost  | Time      |
-------------------------------------+-----------------------------------+
| 0   | SELECT STATEMENT   |         |       |       |   10K |           |
| 1   |  TABLE ACCESS FULL | SKEW    |   31M |  336M |   10K |  00:01:23 |
-------------------------------------+-----------------------------------+

So the optimizer knows about the hint but doesn’t do anything with it.
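The behaviour in the two traces can be mimicked with a tiny model. The "pf" lines in the first trace (K = 10.00, N = 32000004.00, new pf = 0.0000003) suggest the First K Rows machinery scales its estimates by roughly K/N, but only when K is below the expected row count N. This sketch is my interpretation of the trace output, not Oracle's actual code:

```python
def first_k_rows_factor(k, n):
    """Toy model of the First K Rows prefix factor seen in a 10053 trace.

    Returns the scaling factor applied to the cost/cardinality estimates,
    or None when the hint is effectively ignored (k >= n), matching the
    'Final - All Rows Plan' in the second trace.
    """
    if k >= n:
        return None      # hint recognized but the First K Rows costing is skipped
    return k / n

# First trace: first_rows(10) against ~32M rows -> new pf = 0.0000003
pf = first_k_rows_factor(10, 32_000_004)

# Second trace: first_rows(33000000) against ~32M rows -> ignored
ignored = first_k_rows_factor(33_000_000, 32_000_004)

print(pf, ignored)
```

Under this model the billion-row hint in the production statement, like the 33M example, falls straight into the "ignored" branch.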

Well, that’s my story for today. The First K Rows modifications to the optimizer calculations are interesting in their own right, by the way. There are a couple of good posts on the subject here:

Dion Cho on FIRST_ROWS vs. ALL_ROWS and ROWNUM predicate

Jonathan Lewis on first_rows_N

And the most thorough discussion of the First_Rows hint I have seen comes from Randolf Geist. Here’s a link to a presentation he did on the topic:

“Everything You Wanted To Ask About FIRST_ROWS_N But Were Afraid To Ask”

So What Is A Good Cardinality Estimate For A Bitmap Index Column ? (Song 2)

As I’ve discussed previously, using a Bitmap index on a unique column makes little sense as the underlying index must be larger than a corresponding B-tree index due to the implicit additional overheads associated with Bitmap indexes. As such, Oracle doesn’t permit the use of a Bitmap Index on a declared unique column or to [...]

Statistics on Partitioned Tables - Part 6a - COPY_TABLE_STATS - Intro

[Phew. At last. The first draft of this was dated more than two weeks ago .... One of the problems with blogging about copying stats was the balance between explaining it and pointing out some of the problems I've encountered. So I've broken up this post, with a little explanation first ...]

Oracle have a lot of on-site consultants, designers and developers who are out working with customers on very large databases and they no doubt have their own internal systems too, so you shouldn't make the mistake of thinking they're unaware of the problems that people face in gathering accurate and timely statistics on large partitioned tables. I don't say that based on any inside knowledge but by

  • Attending conferences, listening to presentations, reading White Papers and blog posts; and
  • Because they are constantly introducing new features to try to address the issues.

Over the next few posts, I'll look at some of those new features, some of which appear more successful than others. First of all, copying statistics using DBMS_STATS.COPY_TABLE_STATS.

Copying stats is a variation of a technique I've seen used on a couple of Oracle Data Warehouse projects in the past where we decided that spending time and system resources gathering object statistics wasn't useful. Think about some of the problems you'll face when gathering stats on large partitioned objects that I've highlighted so far :-

  • It takes time.
  • It sucks system resources.
  • As you change your strategy and the parameters to reduce the resource and time consumption, the stats are likely to be less accurate.

Then consider why we're gathering these stats in the first place :-

  • To describe database objects and data to the optimiser by setting values in the data dictionary (e.g. Number of Rows, Number of Distinct Values, etc.)
  • To help the optimiser identify the optimal execution plan

If we know the precise contents of specific objects at all times (unlikely, but bear with me), why bother trawling through the database examining the contents when we could just set the relevant values in the data dictionary manually? DBMS_STATS has several procedures that can help you achieve that, for example GET_COLUMN_STATS, SET_COLUMN_STATS, SET_TABLE_STATS etc.

We could set the stats for specific tables, indexes or partitions manually and then choose not to gather stats for those objects. As I said, you really need to know your data for this to work, although I would suggest that it's not as important that the stats are absolutely precise as that they are an accurate enough description of the data to lead to optimal plans.
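As a sketch of what that manual approach might look like (the values here are invented for illustration, not taken from the test case in this post), the stats for a new partition could be fixed once at creation time:

```sql
-- Hypothetical illustration only: fix partition-level stats manually
-- instead of gathering them. The numrows/numblks values are invented
-- and would come from your own knowledge of the data.
BEGIN
  dbms_stats.set_table_stats(
    ownname  => 'TESTUSER',
    tabname  => 'TEST_TAB1',
    partname => 'P_20100210',
    numrows  => 1000000,
    numblks  => 12800);
END;
/
```

The trick, as the next paragraph says, is that the numbers only need to be accurate enough to lead the optimiser to the right plans.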

There are a couple of cases where this might prove particularly useful.

  • You add a new partition each month and so the partition starts out empty and fills up gradually over the course of the month. If you think about it, this is likely to lead to fluctuating execution plans as the contents of the partition change. That's what a cost-based optimiser does - calculates new plans based on changing data distribution - but it also implies plan instability and you might want to avoid that if possible. By setting stats manually you can 'pretend' that you start off with a full partition and have the optimiser evaluate plans based on that fixed assumption.
  • You add new partitions very frequently and have very little time to gather statistics before users are likely to report against the data. It might prove useful to set some approximate synthetic stats quickly to get reasonable plans and then gather more detailed stats later when time allows.

The latter is similar to the situation we're faced with on the system I'm working on at the moment, so we were very keen to investigate copying stats as a means to reduce the stats gathering workload and improve the stats quality into the bargain.

COPY_TABLE_STATS, when used on partitioned tables, is designed to achieve a similar effect to setting stats manually, at a similarly low cost but with a little more intelligence. For example, if we are adding a new partition for each new REPORTING_DATE on our example table, maybe it's reasonable to assume that the data volumes and distribution of values are similar to those for the previous REPORTING_DATE? If so, rather than setting fixed stats based on fixed assumptions about the contents of a partition, why not copy the stats from a previous partition to the new partition?

Here is a very basic example of how this might look, based on my example table.

At the end of Part 5, the stats on TEST_TAB1 looked like this (column stats excluded to simplify things). Important Note - the bold text in this output has been edited to correct an earlier error in this post. See part 6b for explanation.

SQL> select  table_name, global_stats, last_analyzed, num_rows
  2  from dba_tables
  3  where table_name='TEST_TAB1'
  4  and owner='TESTUSER'
  5  order by 1, 2, 4 desc nulls last;

TABLE_NAME                     GLO LAST_ANALYZED          NUM_ROWS                                           
------------------------------ --- -------------------- ----------                                          
TEST_TAB1                      NO                                                                           

SQL> select  table_name, partition_name, global_stats, last_analyzed, num_rows
  2  from dba_tab_partitions
  3  where table_name='TEST_TAB1'
  4  and table_owner='TESTUSER'
  5  order by 1, 2, 4 desc nulls last;

TABLE_NAME                     PARTITION_NAME                 GLO LAST_ANALYZED          NUM_ROWS           
------------------------------ ------------------------------ --- -------------------- ----------           
TEST_TAB1                      P_20100131                     NO                                            
TEST_TAB1                      P_20100201                     NO                                            
TEST_TAB1                      P_20100202                     NO                                            
TEST_TAB1                      P_20100203                     NO                                            
TEST_TAB1                      P_20100204                     NO                                            
TEST_TAB1                      P_20100205                     NO                                            
TEST_TAB1                      P_20100206                     NO                                            
TEST_TAB1                      P_20100207                     NO                                            
TEST_TAB1                      P_20100209                     NO  28-MAR-2010 15:38:33         12           

9 rows selected.

SQL> select  table_name, subpartition_name, global_stats, last_analyzed, num_rows
  2  from dba_tab_subpartitions
  3  where table_name='TEST_TAB1'
  4  and table_owner='TESTUSER'
  5  order by 1, 2, 4 desc nulls last;

TABLE_NAME                     SUBPARTITION_NAME              GLO LAST_ANALYZED          NUM_ROWS           
------------------------------ ------------------------------ --- -------------------- ----------           
TEST_TAB1                      P_20100131_GROT                NO                                            
TEST_TAB1                      P_20100131_HALO                NO                                            
TEST_TAB1                      P_20100131_JUNE                NO                                            
TEST_TAB1                      P_20100131_OTHERS              NO                      

<>                                              
 
TEST_TAB1                  P_20100209_GROT            NO  28-MAR-2010 15:38:32      3           
TEST_TAB1                  P_20100209_HALO            NO  28-MAR-2010 15:38:32      3           
TEST_TAB1                  P_20100209_JUNE            NO  28-MAR-2010 15:38:32      3           
TEST_TAB1                  P_20100209_OTHERS          NO  28-MAR-2010 15:38:33      3           

36 rows selected.

Next I'm going to add a new partition (and by implication, the related subpartitions) to contain rows with a REPORTING_DATE of 20100210, then empty LOAD_TAB1 and insert some appropriate rows into it.

SQL> ALTER TABLE TEST_TAB1
  2  ADD  PARTITION P_20100210 VALUES LESS THAN (20100211);

Table altered.

SQL> TRUNCATE TABLE LOAD_TAB1;

Table truncated.

SQL> INSERT INTO LOAD_TAB1 VALUES (20100210, 'GROT', 1000, 'P');

1 row created.

SQL> INSERT INTO LOAD_TAB1 VALUES (20100210, 'GROT', 30000, 'P');

1 row created.

SQL> INSERT INTO LOAD_TAB1 VALUES (20100210, 'GROT', 2000, 'P');

1 row created.

SQL> INSERT INTO LOAD_TAB1 VALUES (20100210, 'GROT', 10000, 'N');

1 row created.

SQL> INSERT INTO LOAD_TAB1 VALUES (20100210, 'GROT', 2400, 'P');

1 row created.

SQL> INSERT INTO LOAD_TAB1 VALUES (20100210, 'GROT', 500, 'P');

1 row created.

SQL> INSERT INTO LOAD_TAB1 VALUES (20100210, 'GROT', 1200, 'P');

1 row created.

SQL> INSERT INTO LOAD_TAB1 VALUES (20100210, 'GROT', 400, 'P');

1 row created.

SQL> INSERT INTO LOAD_TAB1 VALUES (20100210, 'GROT', 600, 'P');

1 row created.

SQL> INSERT INTO LOAD_TAB1 VALUES (20100210, 'GROT', 700, 'Z');

1 row created.

SQL> COMMIT;

Commit complete.

There are 10 rows that will be loaded into the new partition, all with a SOURCE_SYSTEM of GROT, so all will be added to that subpartition. Now I'll use partition exchange to move the data in LOAD_TAB1 into the P_20100210_GROT subpartition.

SQL> ALTER TABLE test_tab1 EXCHANGE SUBPARTITION P_20100210_GROT WITH TABLE load_tab1;

Table altered.

SQL> ALTER TABLE test_tab1 MODIFY SUBPARTITION  P_20100210_GROT REBUILD UNUSABLE LOCAL INDEXES;

Table altered.

Now to copy some stats. First I'll try copying the statistics from the previous partition (P_20100209) to the new partition, i.e. I am performing a Partition-level copy. Important Note - the bold text in this output has been edited to correct an earlier error in this post. See part 6b for explanation.

SQL> exec dbms_stats.copy_table_stats(ownname => 'TESTUSER', tabname => 'TEST_TAB1', srcpartname => 'P_20100209', dstpartname => 'P_20100210');

PL/SQL procedure successfully completed.

Let's see what has been copied

SQL> select  table_name, global_stats, last_analyzed, num_rows
  2  from dba_tables
  3  where table_name='TEST_TAB1'
  4  and owner='TESTUSER'
  5  order by 1, 2, 4 desc nulls last;

TABLE_NAME                     GLO LAST_ANALYZED          NUM_ROWS                                       
------------------------------ --- -------------------- ----------                                       
TEST_TAB1                      NO                                                                        

SQL>
SQL> select  table_name, partition_name, global_stats, last_analyzed, num_rows
  2  from dba_tab_partitions
  3  where table_name='TEST_TAB1'
  4  and table_owner='TESTUSER'
  5  order by 1, 2, 4 desc nulls last;

TABLE_NAME                     PARTITION_NAME                 GLO LAST_ANALYZED          NUM_ROWS        
------------------------------ ------------------------------ --- -------------------- ----------        
TEST_TAB1                      P_20100131                     NO                                         
TEST_TAB1                      P_20100201                     NO                                         
TEST_TAB1                      P_20100202                     NO                                         
TEST_TAB1                      P_20100203                     NO                                         
TEST_TAB1                      P_20100204                     NO                                         
TEST_TAB1                      P_20100205                     NO                                         
TEST_TAB1                      P_20100206                     NO                                         
TEST_TAB1                      P_20100207                     NO                                         
TEST_TAB1                      P_20100209                     NO  28-MAR-2010 15:38:38         12        
TEST_TAB1                      P_20100210                     NO                                         

10 rows selected.

SQL> select  table_name, subpartition_name, global_stats, last_analyzed, num_rows
  2  from dba_tab_subpartitions
  3  where table_name='TEST_TAB1'
  4  and table_owner='TESTUSER'
  5  order by 1, 2, 4 desc nulls last;

TABLE_NAME                     SUBPARTITION_NAME              GLO LAST_ANALYZED          NUM_ROWS        
------------------------------ ------------------------------ --- -------------------- ----------        
TEST_TAB1                      P_20100131_GROT                NO                                         
TEST_TAB1                      P_20100131_HALO                NO                                         

<>

TEST_TAB1                  P_20100209_GROT            NO  28-MAR-2010 15:38:32      3        
TEST_TAB1                  P_20100209_HALO            NO  28-MAR-2010 15:38:32      3        
TEST_TAB1                  P_20100209_JUNE            NO  28-MAR-2010 15:38:32      3        
TEST_TAB1                  P_20100209_OTHERS          NO  28-MAR-2010 15:38:33      3        
TEST_TAB1                  P_20100210_GROT            NO  28-MAR-2010 15:38:33      3    
TEST_TAB1                      P_20100210_HALO                NO                                         
TEST_TAB1                      P_20100210_JUNE                NO                                         
TEST_TAB1                      P_20100210_OTHERS              NO                                         

40 rows selected.

Mmmm, that's quite interesting. Although there were valid stats on all of the P_20100209 subpartitions, it looks like nothing was copied. It appears that in our situation, where we only gather stats on subpartitions and allow them to aggregate up to the partition and table levels, I need to copy the statistics subpartition-by-subpartition, which is what I'll start looking at in the next post.

Important Note - Please go on to read part 6b. An error on my part in the output I pasted above made it look like there were no statistics copied at all. In fact, subpartition statistics for P_20100210_GROT were copied, but it's still a confusing picture: I was using a Partition-level copy, and yet there are no copied statistics for three of the subpartitions, even though the source subpartitions had valid stats, nor for the partition itself.

Which means it's still true that the Partition-level copy will not serve our particular purposes, so next I'll try copying statistics at the individual subpartition level.

I've updated the date of this post so hopefully it will re-appear in aggregators for those who read it earlier.

If You Are Going to MOW…

(just on a side-note)

…and would really like to attend Anjo Kolk and Tommy Pedersen’s presentation: “Accessing the Oracle Database from Google (Apps, App Engine, Spreadsheets)” then do yourself a favor and also attend my presentation on XMLDB based out-of-the-box interfacing (“Boost your environment with Oracle XMLDB“) so you have an idea how it all hooks in…

See the MOW agenda for Friday

;-)

An Evening with Oracle Database Security Expert: Pete Finnigan


AMIS Technology School is proud to present, in collaboration with Miracle Benelux Masterclasses:

Miracle Benelux and Pete Finnigan have agreed to do an extra special session at the AMIS premises on the evening just before Pete’s 2-day Masterclass in Utrecht starts (for the 2-day Masterclass agenda, see also the following URL). During this AMIS Query, besides the free food and the normal setup of such an AMIS Query event, Pete will give a presentation on Oracle security, and there will be plenty of room for informal discussion during and after the session. There are still some places left if you would like to learn from one of the best in Oracle database security.

Pete Finnigan

More details on those masterclasses can be found on the Miracle Benelux site. Hopefully this will be the first of a series… More details about this event will follow shortly.

About Pete Finnigan…

Pete is a world-renowned expert in the area of Oracle security, providing consultancy, design, security audits and training. Pete is a member of the OakTable Network and has spoken regularly all over the world at various conferences such as UKOUG, PSOUG, BlackHat and Risk. Pete is a published author on Oracle security and researches and writes about the subject regularly. Pete also runs his website, www.petefinnigan.com, dedicated to Oracle security.

About Miracle Benelux…

Miracle Benelux specializes in database-related performance and stability problems. We have years of experience troubleshooting performance problems in large-scale Oracle-based systems, which is why Miracle Benelux is frequently asked to analyze where the performance problems lie when others have given up and “everything” has been tried. Miracle Benelux presents the solution quickly and efficiently - straight to the point. You can also consult Miracle Benelux when you and your supplier have differing points of view. Miracle Benelux has customers such as Ahold, Free Record Shop, NUON, Eneco, Ziggo, TenneT, het Kadaster etc.

About AMIS…

AMIS creates ICT solutions to help ensure the continued success of our customers. We take on challenges together, with a passion for our profession, trust in each other and the ambition to keep growing. Committed to ICT. Involved in people. We want to help the customer progress by utilizing high-quality information technology. It is our aim to realize targets in cooperation with our customers thanks to standard software, our custom-made applications and our services. In short, to achieve results with IT.

We care about applying our knowledge for the customer’s benefit in the best possible way. This passion for our job could be said to be a compulsory element in our culture within AMIS. Getting results by doing what you love is great, but realizing them together gives them even more value. Cooperation is something we cherish. We hone our knowledge and experiences by sharing them with colleagues and customers. In our view this makes knowledge more valuable. Thus everyone is very welcome to attend the knowledge sessions that we organize or take part in discussions in our technology blog.


You can register for this event here:

Event Registration Form (Dutch)

HTH, Marco

Call for Papers

The call for papers for the Tech Server and E-Business event in the UKOUG Conference Series is only a couple of weeks from closing, so I’m just bouncing this email to the top of the pile: Once again we are launching call for papers for the annual Technology & E-Business Suite user group conference. This [...]

UKOUG 2010 CFP is now open!

Can you believe it? Already?? Yes, that’s right, the Call for Papers for the UKOUG Technology and E-Business Suite conference is already open! In my opinion, this is one of the best conferences out there! Wide variety of speakers, great topics, and it’s not too big! This coming year will be my fifth (or is it sixth?) year attending, and it’s a trip I’ve always enjoyed, and never regretted taking the time (or money) to attend.

The CFP is open through Monday, August 2nd, 2010. The conference itself is coming up on November 29th – December 1st, 2010, in Birmingham, England.

More information is available here.

Record-Breakers

Browsing around the internet recently I came across this result: “During February, 2010, jonathanlewis.wordpress.com was positioned by Compete.com as the 33 most visited website in the United States. In order to be ranked in traffic in number 33, jonathanlewis.wordpress.com had 25,165,482 visits.” Pretty impressive, isn’t it. On the other hand, WordPress tells me that I [...]

Michigan OakTable Symposium 2010

The terrific agenda of the first Michigan OakTable Symposium is online.
I plan to give the following presentations:

Edition-Based Redefinition
Interpreting Execution Plans
Transaction Management Internals

The abstracts are available on my Public Appearances page as well as on the event’s site. By the way, the early bird registration ends April 30.