Oakies Blog Aggregator

A Good Weekend

OK, so it's more personal stuff but I had an excellent end to last weekend so I don't care ;-) Skip over this one if you don't care what's going on in my life. That's cool.

I finished work on Thursday lunchtime and then made what seemed a ridiculously long 'short' trip to Schiphol via the joys of London Gatwick and a taxi on landing that seemed to go on forever. Which meant I was pretty late when I was met by Marco Gralike and his daughter, Athina. They had a gift for me which was very sweet (more on that later) and I had bought the very orange Mr. Tickle who had decided he wanted to move to the Netherlands.

By the time we got to the Amis office in Nieuwegein, near Utrecht, I just about had time to set up for the informal presentation. As Marco introduced me he presented me with a gift that he and Eva had organised. This is it under Marco's care.

Marco with my gift

I suspect a few things are obvious.

1) It's very carefully and prettily wrapped (thanks Eva :-))
2) It's very Orange!

As Marco later described it, it's a Dutch Football Supporter's Survival Kit. Including some of the noisiest, craziest, and Orangest toys I've seen in a while ;-) There were also some sweets, Stroopwafels (that are scrummy and have gone down well at work) and this guy at the back.

More Cuddly Toys

At the front is another new cuddly toy, Mr. Hollandia, the gift from Marco and Athina. So, let's get this straight. Mr. Tickle decides to move to Holland and free up some space on the comfy seat and then I come home with two more! LOL. Speaking of Cuddly Toys, this is me introducing Gordon and Pierre to the audience.

People were very understanding ;-) I had fun seeing Jacco again and some people I'd heard about but never met.

In the end, once I get talking about those pictures .... well, it went on for about 2 hours I think and although I initially thought it went well, it subsequently hit me that I didn't complete a demo in the way I intended, but I'll make up for that with a blog post soon.

One funny moment was when Echofon popped up a notification to show me how many tweets had arrived because I had a network connection. As friends know, I never use Twitter so that gave Marco a chuckle ;-)

Marco had lined up a really nice hotel for us in Utrecht, but we were both too beat to do much more than head off to bed before the course the following day.

The course was probably less of a success from my perspective. Not disastrous, but pretty difficult to talk about so many different things in one day, but the feedback I got from Marco and Jacco was pretty good. People stayed late on a Friday afternoon so they can't have been completely sick of me.

Time to party. We went to a bar that was probably quieter than we expected to watch the England match and there were times when I wasn't sure it was really happening or we hadn't drunk way too much beer (we had). It was a rare chance to spend some time with Marco and Jacco away from a conference, where things are hectic, so it was one of the best nights I'd had for quite a while. Only to be beaten the next day when Marco drove us over to Jacco's place in Deventer. Marco and I have a running joke about how cool Jacco is - he wears nice clothes, completely unlike a techie! Sure enough, his apartment was completely lovely, his partner's lovely, the cat is lovely, Deventer is fantastic. This man has landed on his feet!

It wasn't even a 5 minute walk to the bar where we would do what I'd really come for - watch Holland play in a bar in a Dutch town. Marvellous experience. They go crazy for football. A couple of St. George's Crosses doesn't cover it, folks.

After more and more beers, Jacco's partner Margot turned up from work and it was back to his place for .... meat and potatoes. But I should say *extremely* tasty casserole. There had been much discussion about my diet because Margot is a great cook so it was difficult finding something that I would eat that she could also be proud of. Job done! ;-)

A comfy night's sleep with a cool cat for company and I was done.

I had a great time so special thanks to Marco, Jacco, Eva, Margot, all of the AMIS people and those who came along to either presentation.


(Thanks also to Wesley Sneijder for being the first goal scorer. If only the Netherlands could have added two more goals, I would have been in the money ...)

DB Optimizer Demo Webinar June 22 @ 9am PST

We had almost 2,000 people sign up for the webinar on SQL tuning with Jonathan Lewis and me. After the webinar many people had questions on how to harness DB Optimizer for performance tuning on their database, whether DB2, SQL Server, Oracle or Sybase, so tomorrow, June 22 at 9am PST, I'll be giving a live demo of the different parts of DB Optimizer and how to use them.
To view this webinar, sign up at

DBMS_STATS - Gather table statistics with many columns

Here is another good reason why you probably don't want to use tables with too many columns.

The first good reason is that Oracle stores rows of conventional heap tables with more than 255 columns (where at least one column value after the 255th is non-null) in multiple row pieces, even when the entire row would fit into a single block (and up to 11.2 basic and OLTP heap table compression is not supported for tables with more than 255 columns). This leads to something sometimes called "intra-row chaining", which means that Oracle needs to follow the row piece pointer to access columns after the 255th one, resulting in multiple logical I/Os per row - up to four for a row approaching the hard limit of 1,000 columns.

Of course such a multiple-piece row easily can result in actual row chaining where the row pieces are scattered across different blocks now potentially leading to additional random single block I/O, but I digress...
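As a side note, whether a query actually has to follow those row piece pointers can be checked from the session statistics. A minimal sketch (not part of the original test case) - run it before and after the query in question and compare the two values:

select sn.name, ms.value
from   v$statname sn, v$mystat ms
where  sn.statistic# = ms.statistic#
and    sn.name = 'table fetch continued row';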

This simple example allows you to see this (and the other, yet to be described) effect in action - by default it creates a table with 1,000 columns and 10,000 rows where each row resides in a separate block (and therefore only intra-row chaining should occur). I used an 8K block size tablespace with Manual Segment Space Management (MSSM) in order to avoid the ASSM overhead in this case:

-- Parameters and defaults: 1 = number of columns (1000), 2 = PCTFREE (99), 3 = PCTUSED (1),
-- 4 = tablespace (TEST_8K), 5/6/7/8 = columns to populate (001, 256, 512, 768), 9 = number of rows (10000)
-- Create a table MANY_X_COLS with a configurable number of columns
-- and populate four columns of it using skewed data (normal distribution)

set termout off verify off
REM Save current settings
store set .settings replace

REM Command line handling

REM Preparation for default value handling
column 1 new_value 1
column 2 new_value 2
column 3 new_value 3
column 4 new_value 4
column 5 new_value 5
column 6 new_value 6
column 7 new_value 7
column 8 new_value 8
column 9 new_value 9

select
  '' as "1"
, '' as "2"
, '' as "3"
, '' as "4"
, '' as "5"
, '' as "6"
, '' as "7"
, '' as "8"
, '' as "9"
from
  dual
where
  rownum = 0;

REM Default values
select
  nvl('&&1', '1000')    as "1"
, nvl('&&2', '99')      as "2"
, nvl('&&3', '1')       as "3"
, nvl('&&4', 'TEST_8K') as "4"
, nvl('&&5', '001')     as "5"
, nvl('&&6', '256')     as "6"
, nvl('&&7', '512')     as "7"
, nvl('&&8', '768')     as "8"
, nvl('&&9', '10000')   as "9"
from
  dual;

define n_iter = &1
define n_pctfree = &2
define n_pctused = &3
define s_tblspace = &4
define s_col1 = &5
define s_col2 = &6
define s_col3 = &7
define s_col4 = &8
define n_num_rows = &9

set termout on echo on verify on

declare
  s_sql1 varchar2(32767);
  s_sql  varchar2(32767);

  e_table_or_view_not_exists exception;
  pragma exception_init(e_table_or_view_not_exists, -942);
begin
  -- build the column list (col001 ... col1000 by default)
  for i in 1..&n_iter loop
    s_sql1 := s_sql1 || 'col' || to_char(i, 'FM9000') || ' varchar2(10),';
  end loop;

  s_sql1 := rtrim(s_sql1, ',');

  -- drop a previous incarnation of the table, ignore if it doesn't exist
  s_sql := 'drop table many_x_cols purge';

  begin
    execute immediate s_sql;
  exception
    when e_table_or_view_not_exists then
      null;
  end;

  s_sql := 'create table many_x_cols (' || s_sql1 || ') pctfree &n_pctfree pctused &n_pctused tablespace &s_tblspace';

  execute immediate s_sql;

  -- populate the four chosen columns with skewed data (normal distribution)
  s_sql := 'insert /*+ append */ into many_x_cols (
  col&s_col1
, col&s_col2
, col&s_col3
, col&s_col4
)
select
  trunc(sys.dbms_random.normal * &n_num_rows.) as col&s_col1
, trunc(sys.dbms_random.normal * &n_num_rows.) as col&s_col2
, trunc(sys.dbms_random.normal * &n_num_rows.) as col&s_col3
, trunc(sys.dbms_random.normal * &n_num_rows.) as col&s_col4
from
  dual
connect by
  level <= &n_num_rows';

  execute immediate s_sql;

  commit;
end;
/

set termout off
REM Restore and cleanup
@.settings
undefine 1 2 3 4 5 6 7 8 9
column 1 clear
column 2 clear
column 3 clear
column 4 clear
column 5 clear
column 6 clear
column 7 clear
column 8 clear
column 9 clear
set termout on

If you have created the table with the defaults and now run some simple queries on it, for example:
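For illustration only (not necessarily the exact statement that was used), an aggregate on one of the populated columns before the 255th - COL001 with the default settings - shows the effect:

set autotrace traceonly statistics

select count(col001) from many_x_cols;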



you should see approximately 10,000 consistent gets after some initial runs to avoid the overhead of hard parsing. Your output should look similar to the following:


0 recursive calls
0 db block gets
10004 consistent gets
10000 physical reads
0 redo size
422 bytes sent via SQL*Net to client
415 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

However if you start to access columns after the 255th one (and you've used the defaults or suitable values to populate at least one of these columns), then you'll notice that the number of consistent gets increases whereas the number of physical reads stays more or less the same:
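Again for illustration, the same kind of aggregate on one of the populated columns after the 255th - for example COL512 with the default settings:

select count(col512) from many_x_cols;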




0 recursive calls
0 db block gets
30004 consistent gets
10000 physical reads
0 redo size
427 bytes sent via SQL*Net to client
415 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

It is interesting to note that in some versions it sometimes happened that the consistent gets jumped directly from 10,000 to 30,000 and then increased to 40,000 and 50,000 when accessing later columns, whereas in others the expected progression of 20,000, 30,000 and 40,000 was seen.

It is also interesting to note that the statistic "table fetch continued row" only increases once per row, which could be another interesting variation of the theme "How does Oracle record this statistic".

OK, back to the main topic for today: DBMS_STATS and such tables with many columns.

The other good reason is that DBMS_STATS might switch to a multi-pass mode if a table exceeds a certain number of columns. This means that even if you request only basic column statistics (METHOD_OPT=>"FOR ... COLUMNS SIZE 1...") without any additional histograms (which require additional passes anyway), DBMS_STATS.GATHER_TABLE_STATS will perform multiple passes (usually full segment scans) against the same segment. This is true even if basic column statistics are requested for only a limited set of columns (for example "FOR COLUMNS SIZE 1 COL1, COL2, COL3"), since Oracle always gathers some column statistics no matter what options are passed to DBMS_STATS (very likely in order to calculate, for example, the average row length for the table statistics).
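Such a gather call could look like the following - a sketch using the standard DBMS_STATS parameters rather than the exact call used for these tests:

begin
  dbms_stats.gather_table_stats(
    ownname          => user,
    tabname          => 'MANY_X_COLS',
    estimate_percent => dbms_stats.auto_sample_size,
    -- basic column statistics only, no histograms
    method_opt       => 'FOR ALL COLUMNS SIZE 1',
    cascade          => true
  );
end;
/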

The threshold at which DBMS_STATS switches to multiple passes depends on the options passed to DBMS_STATS and differs between Oracle 11g (11.1 and 11.2) and versions prior to 11g - and I'm not talking about the multiple passes that are used in versions prior to 11g to determine the optimal auto sample size (DBMS_STATS.AUTO_SAMPLE_SIZE, see below for more information).

In releases prior to 11g DBMS_STATS is capable of handling a minimum of approximately 160 columns per pass if basic column statistics are requested, and a maximum of approximately 440 columns per pass if no column statistics are requested, which means that it requires between three and seven passes to gather statistics for a 1,000 columns table. Below is what such a typical query looks like - and you'll find up to seven of them, each with a different column list, in a SQL trace file of a DBMS_STATS call against a 1,000 columns table.
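To see those passes yourself, the gather call can simply be traced, for example like this (a generic sketch; the trace file identifier is arbitrary):

alter session set tracefile_identifier = 'gather_many_x_cols';

exec dbms_monitor.session_trace_enable(waits => true, binds => false)

exec dbms_stats.gather_table_stats(user, 'MANY_X_COLS', method_opt => 'FOR ALL COLUMNS SIZE 1')

exec dbms_monitor.session_trace_disable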

Notice that in these releases DBMS_STATS gathers five different values per column for basic column statistics:

- Number of non-null values
- Number of distinct values
- Column size
- Minimum value
- Maximum value

If no column statistics are requested, the "Number of non-null values" and "Column size" per column are still gathered, which explains why DBMS_STATS is capable of handling more columns per pass - simply less information per column is requested.

select /*+ no_parallel(t) no_parallel_index(t) dbms_stats
cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring
*/ count(*),count("COL257"),count(distinct "COL257"),
count(distinct "COL259"),sum(sys_op_opnsize("COL259")),

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.20 0.19 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 1.21 3.07 9713 40010 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 1.41 3.26 9713 40010 0 1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 61 (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
1 SORT GROUP BY (cr=40010 pr=9713 pw=0 time=3070537 us)
10000 TABLE ACCESS FULL MANY_X_COLS (cr=40010 pr=9713 pw=0 time=1820245 us)

Tracing the DBMS_STATS.GATHER_TABLE_STATS call furthermore reveals that Oracle performs at least one additional query when using DBMS_STATS.AUTO_SAMPLE_SIZE (the default) which is probably used for determining the optimal sample size because Oracle uses an iterative approach prior to 11g to determine the auto sample size. The sample size is increased in steps until the number of distinct values per column passes some statistical checks. The interesting thing however seems to be that these additional queries are repeated per pass:

select /*+ no_parallel(t) no_parallel_index(t) dbms_stats
cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring
*/ count(*)
"CBO_TEST"."MANY_X_COLS" sample block ( 1.0000000000,1) t

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 7 0.01 0.00 0 0 0 0
Execute 7 0.00 0.00 0 0 0 0
Fetch 7 0.12 3.67 1971 764 0 7
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 21 0.13 3.67 1971 764 0 7

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 61 (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
1 SORT AGGREGATE (cr=119 pr=332 pw=0 time=530678 us)
108 TABLE ACCESS SAMPLE MANY_X_COLS (cr=119 pr=332 pw=0 time=646431 us)


select /*+ no_parallel(t) no_parallel_index(t) dbms_stats
cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring
*/ count(*)
from
  "CBO_TEST"."MANY_X_COLS" t

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 7 0.00 0.00 0 0 0 0
Execute 7 0.00 0.00 0 0 0 0
Fetch 7 3.41 15.85 65858 70070 0 7
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 21 3.41 15.86 65858 70070 0 7

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 61 (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
1 SORT AGGREGATE (cr=10010 pr=9445 pw=0 time=2228265 us)
10000 TABLE ACCESS FULL MANY_X_COLS (cr=10010 pr=9445 pw=0 time=816529 us)

This means that in total between six (no column statistics) and at least 14 queries (basic column statistics for all columns) are used to gather the statistics for a 1,000 columns table in these releases with DBMS_STATS.AUTO_SAMPLE_SIZE.

Using a fixed sample size (or compute) it takes between three and seven queries to gather the statistics for a 1,000 columns table. When computing the basic statistics for all columns this means a minimum of 220,000 consistent gets to gather the statistics on a 10,000 blocks table with 1,000 columns, in comparison to approximately 20,000 consistent gets to perform the same on a 255 columns table - not taking into account the CPU overhead of performing the aggregates (count distinct, min, max, etc.) on all those columns.
Note that these consistent gets exclude the work required to maintain the dictionary and the statistics history information.

The additional queries executed when using DBMS_STATS.AUTO_SAMPLE_SIZE in these releases increase the number of consistent gets to 300,000 for the 1,000 columns / 10,000 blocks table (in this case the automatism ends up in a compute) and to 40,000 consistent gets for the 255 columns / 10,000 blocks table.
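The consistent gets attributable to a gather call can be verified with a before/after snapshot of the session statistics - a minimal sketch:

select ms.value
from   v$statname sn, v$mystat ms
where  sn.statistic# = ms.statistic#
and    sn.name = 'consistent gets';

Run the query once before and once after the DBMS_STATS call and take the difference of the two values.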

Of course the intention of AUTO_SAMPLE_SIZE is to find an optimal sample size, not to actually increase the amount of work compared to a compute gather statistics, but in this particular case that is exactly what happened.

This significant difference in consistent gets between the 1,000 columns and 255 columns table both having 10,000 blocks has three reasons:

1. It requires more consistent gets per pass due to the multiple row pieces (up to four times more consistent gets)
2. DBMS_STATS performs many more passes on the 1,000 columns table (when gathering basic column statistics for all columns at least seven on the 1,000 columns table compared to at least two on the 255 columns table)
3. Obviously DBMS_STATS doesn't make a systematic attempt to minimize the number of consistent gets per pass by trying to access only leading columns per pass (so as to require only 10,000 / 20,000 / 30,000 / 40,000 gets per pass). It looks more like the columns accessed per pass are chosen randomly, often causing the maximum number of consistent gets per pass

The same applies to versions 11.1 and 11.2 as long as fixed sample sizes are used. However, when using DBMS_STATS.AUTO_SAMPLE_SIZE the whole gather statistics logic has been revised, in particular due to the introduction of the improvements regarding the approximation of the number of distinct column values and incremental partition statistics, which are described in great detail by Amit Poddar here.

Due to these improvements, when using DBMS_STATS.AUTO_SAMPLE_SIZE Oracle 11.1 and 11.2 are capable of handling up to 1,000 columns in a single pass if no column statistics are requested, and a minimum of approximately 260 columns per pass if basic column statistics are requested. This means it takes four passes to gather basic column statistics on all columns of a 1,000 columns table. Furthermore, due to the revised logic, the additional queries used in earlier releases for DBMS_STATS.AUTO_SAMPLE_SIZE are no longer used.

Tracing the DBMS_STATS.GATHER_TABLE_STATS call in 11.1 and 11.2 also reveals that a special operation is activated that is not represented in the SQL text: DBMS_STATS seems to activate an "APPROXIMATE NDV AGGREGATE" operation, accompanied by a trailing comment in the statement outlining what this special NDV operation is supposed to do. It is very likely used for the aforementioned improvement to the NDV algorithm.

Such a query looks then like the following:

select /*+ full(t) no_parallel(t) no_parallel_index(t) dbms_stats
cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring
no_substrb_pad */to_char(count("COL174")),to_char(substrb(dump(min("COL174")

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.11 0.10 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 1.01 2.24 9845 30010 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 1.12 2.35 9845 30010 0 1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 93 (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
1 SORT AGGREGATE (cr=30010 pr=9845 pw=0 time=0 us)
8755 APPROXIMATE NDV AGGREGATE (cr=30010 pr=9845 pw=0 time=2336556 us cost=2 size=121114 card=82)
10000 TABLE ACCESS FULL MANY_X_COLS (cr=30010 pr=9845 pw=0 time=300223 us cost=2 size=121114 card=82)

The following differences to 10g become obvious:

1. The SQL part has been reduced to

- Number of non-null values
- Minimum value
- Maximum value

for basic column statistics. The number of distinct values is obviously determined as part of the special "APPROXIMATE NDV AGGREGATE" operation, as can be seen in the "Row Source Operation" (for further details refer to Amit Poddar's document). Note that in the case of no column statistics this is reduced to a single expression per column, the "Number of non-null values".

2. The special operation "APPROXIMATE NDV AGGREGATE" appears in the execution plan

3. The trailing comment section of the statement describes what the "APPROXIMATE NDV AGGREGATE" is supposed to perform, for more details refer to the aforementioned document by Amit Poddar

4. All expressions are converted to strings using TO_CHAR

5. A new hint NO_SUBSTRB_PAD has been introduced (it can be seen in the example above)

Probably the fact that the number of expressions in the SQL part has been reduced allows Oracle 11g to process more columns per pass.

This improved variant requires approximately 150,000 consistent gets (four passes) to gather basic column statistics for all columns of a 10,000 blocks / 1,000 columns table with DBMS_STATS.AUTO_SAMPLE_SIZE, in comparison to 10,000 consistent gets for a table with 255 columns (single pass).

So even with the improved approximate NDV algorithm of 11g, multiple passes will still be required when requesting basic column statistics (which, generally speaking, is recommended) for tables with more than approximately 260 columns. If you think about the consequence of that for a table that not only has many columns but also many rows (which are then at least intra-row chained), the potential overhead becomes quite obvious.

So in summary the best way to deal with such tables is to avoid them in the first place - or at least attempt to keep the number of columns below approximately 160 in Oracle 10g, respectively 260 in 11g, when using DBMS_STATS.AUTO_SAMPLE_SIZE to ensure a single-pass operation covering all columns.

If you really need to use that many columns it might be worth the effort to write a custom gather statistics procedure that explicitly lists the columns to gather statistics for in the METHOD_OPT parameter in order to minimize the number of passes required - of course keeping in mind the potential side effects on the cost based optimizer when column statistics are missing but become relevant for optimization. In 11g with DBMS_STATS.AUTO_SAMPLE_SIZE this potentially allows you to reduce the number of passes to a single one.
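Restricting basic column statistics to the handful of columns that actually matter to the optimizer might then look like this - a sketch only, the column list is obviously application-specific:

begin
  dbms_stats.gather_table_stats(
    ownname          => user,
    tabname          => 'MANY_X_COLS',
    estimate_percent => dbms_stats.auto_sample_size,
    -- basic statistics (no histograms) only for the columns used in predicates and joins
    method_opt       => 'FOR COLUMNS SIZE 1 COL001, COL256, COL512, COL768',
    cascade          => true
  );
end;
/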

And as a side note - using the deprecated ANALYZE ... STATISTICS command requires exactly a single pass to gather basic column statistics for all 1,000 columns of the 10,000 rows table - it even records only 10,000 consistent gets (and no "table fetch continued row"), which is quite interesting since it is unclear how the different row pieces are accessed in that case. It might, however, simply be an instrumentation issue. Of course in most cases it doesn't make sense to revert to ANALYZE ... STATISTICS due to its shortcomings (to name just a few: no parallelism, no statistics history, no global statistics on partitioned tables and no approximate NDV improvements in 11g) and its differences to DBMS_STATS that might lead to plan changes - see also the other notes linked to in that note by Jonathan Lewis.
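For reference, the deprecated command used for that comparison boils down to:

analyze table many_x_cols compute statistics;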

Russian Forum

This is probably something that anyone who needed to know already knew, but there is a very active forum in Russian for Oracle users – it’s actually just one part of a much larger Russian forum, but it’s the Oracle bit that catches my eye occasionally. If you haven’t come across it before, the lead-in [...]

I learned I don't think I like this...

I was reading around, stumbled on an article from Dr Dobb's online (I have a long history with Dr Dobb's - had it not been for them - you wouldn't be reading this!). The article described a 'feature' of Windows 7 that I have mixed feelings on. The same sort of mixed (well, not so mixed, I lean far to one side on the use of this feature) feelings I have for cursor_sharing being set to anything other than EXACT.

Here is the article.

Ok, so why don't I like it? It seems to be a way to 'self correct' a program. It "seems" like a "good thing".

I don't like it because it won't help get the problem fixed (same with cursor_sharing :( ). In fact, it will promote *more* code being developed that suffers from heap overwrites. It lets bad developers develop bad code even faster and distribute it - thinking they are seriously good developers. That is, it leads to delusion and the bad coders getting more senior without learning from their mistakes.

In short, it instills a false sense of "hey, I'm pretty good" in developers that probably shouldn't have that sense.

It could definitely lead to some really strange issues - think about it, a program that used to crash - stops crashing - for a while - then crashes more (as the overwrite occasionally gets bigger than normal, requiring more "pad" bytes). And who knows how allowing a memory overwrite to propagate into other bits of the code will affect it. I prefer code that works right - not code that sometimes seems to work.

I'm reminded of a code review I did some 20 years ago. I asked the developer "why do you have this massive static array defined - we don't seem to use it". The answer "if you take it out, the program crashes, the compiler must have a bug". The look on my face - I wish I had a picture - it would have been priceless.

I'm not a fan of this "let's try to fix it for you and let you pretend you know what you're doing" approach to software. We rely on software way too much.

Oh well, just a 30 second thought - I just read the article and felt the need to gripe... Things like this scare me.

I have to say - I wrote something similar myself many years ago. It was a C library I called xalloc. It replaced the malloc, calloc, realloc, free, etc. functions of C. It worked by allocating (or freeing) the requested memory and adding a few bytes. It would set some bytes at the FRONT and the END of the allocated memory, set some bytes to represent the source code file and line number that allocated the memory, and return a pointer to the memory to be used by the program. Every time you called any of the xalloc functions - it would inspect the allocated memory (all of it) and CRASH the program if any of the magic bytes in front/at the end of the memory block had been changed. When the program exited - it would report on all allocated memory that wasn't freed. You could turn off the checking with an environment variable if you wanted, but it was always ready to be "on". I made everyone that worked in my team use it - it saved us countless hours (and it found the bug in the code of the person that needed to allocate that big array in a few seconds)...

My approach differed from Windows 7 in that I would prefer a program to crash and burn immediately rather than live for another second if it made such a grievous error. I'd still rather the program die than continue today...


I usually think about locks (and the meaning of the lmode and request columns in v$lock) in terms of numbers, and never seem to remember which number goes with which name – apart from mode 6 = exclusive. So I’ve finally put up a little list somewhere I’ll always be able to find it if [...]

Little Things Doth Crabby Make – Part XIII. When Startup Means Shutdown And Stay That Way.

This seemed worthy of Little Things Doth Crabby Make status mostly because I was surprised to see that Oracle Database 11g STARTUP command worked this way… Consider the following text box. I purposefully tried to do a STARTUP FORCE specifying a PFILE that doesn’t exist. Well, it did exactly what I told it to do, [...]

I should've been in movies...

Have you seen the movie Avatar? If not, this will just look like a crazily distorted blue photo of me. But, if you saw the movie, you'll see me as a Na'vi. Cool, huh?!?

I loved the movie (it was fabulous in 3D!) and if you haven't seen it yet, check out the trailer.

I wish (2)

Continuing my little list of things in SQL Server that I’d like to see in Oracle: 1. Index statistics: when SQL Server collects index statistics, it collects stats not only on the full index definition, but also on every “prefix” of the index, i.e. if you have a 4-column index, you will get stats [...]

Instrumentation rocks!!

Those who have read my previous posts know I've got a few favorite topics:
1. Data Models
2. Measuring
3. Variance
4. Instrumentation
This post will be short but it touches on all of my favorite things. Over the last year and a half, I've had the opportunity to redesign a not-so-great application working with the amazingly great developers at Method-R. In the process, we redesigned the data model,