Oakies Blog Aggregator

Dynamic logging with global application context

Controlling logging output across sessions using global application context. September 2007 (updated April 2009)

Collaborate 09: Don’t miss these sessions

Collaborate 09 starts on Sunday, May 3 (a few days from now!) in Orlando. I’ve been offline for several weeks (more on that later), but will be returning to the world of computers and technology in full force in Orlando. I’ve had a few inquiries about whether or not I’ll be at Collaborate, so I thought I’d resurrect my blog with a post about where I’ll be and some of the highlights I see at Collaborate 09.

First, where I’ll be presenting:

  • Monday, 10:45-11:45am, #301, “Avoiding Common RAC Problems”
  • Tuesday, 9:45am-12pm, #332, “Installing RAC From The Ground Up”
  • Wednesday, 9:45-10:45am, #121, “Troubleshooting Oracle Clusterware”

I’m also currently the President of the Oracle RAC Special Interest Group (RAC SIG). The RAC SIG is hosting several great sessions (I’m moderating a couple of these panels) at Collaborate 09 as well:

  • Sunday, 6-7:30pm, IOUG/SIG Welcome Reception (each SIG will have representatives there–this is open to all IOUG attendees)
  • Monday, 8-9am, RAC SIG Orientation
  • Tuesday, 12:15-1:15pm, RAC SIG Birds of a Feather
  • Tuesday, 4:30-5:30pm, RAC SIG Expert Panel
  • Wednesday, 4:30-5:30pm, RAC SIG Customer Panel (not in online scheduler at the moment, check again later)
  • Thursday, 8:30am-12pm, RAC Attack (University Session – Additional fee required)

The RAC SIG has also assembled this list of RAC-related sessions at Collaborate 09 to help you plan your conference agenda.

Be sure to set up your personal agenda using the agenda builder and add these sessions. I believe that if sessions are in your agenda and their details (like dates or room assignments) change, you'll be notified via email (I'm not certain, but I think that's how it works).

Also, you can follow @IOUG on Twitter (follow me too if you’d like) and that will help you find where the action is during the event next week. It’s going to be a great event and I look forward to seeing you there!

The Most Common Performance Problem I See

At the Percona Performance Conference in Santa Clara this week, the first question an audience member asked our panel was, "What is the most common performance problem you see in the field?"

I figured, being an Oracle guy at a MySQL conference, this might be my only chance to answer something, so I went for the mic. Here is my answer.

The most common performance problem I see is people who think there's a most-common performance problem that they should be looking for, instead of measuring to find out what their actual performance problem is.

It's a meta answer, but it's a meta problem. The biggest performance problems I see, and the ones I see most often, are not problems with machines or software. They're problems with people who don't have a reliable process of identifying the right thing to work on in the first place.

That's why the definition of Method R doesn't mention Oracle, or databases, or even computers. It's why Optimizing Oracle Performance spends the first 69 pages talking about red rocks and informed consent and Eli Goldratt instead of Oracle, or databases, or even computers.

The most common performance problem I see is that people guess instead of knowing. The worst cases are when people think they know because they're looking at data, but they really don't know, because they're looking at the wrong data. Unfortunately, every case of guessing that I ever see is this worst case, because nobody in our business goes very far without consulting some kind of data to justify his opinions. Tim Cook from Sun Microsystems pointed me yesterday to a blog post that gives a great example of that illusion of knowing when you really don't.

Understanding the different modes of System Statistics aka. CPU Costing and the effects of multiple blocksizes - part 1

Forward to part 2

This is the first part of a series of posts covering one of the fundamentals of the cost based optimizer in 9i and later. Understanding how the different system statistics modes work is crucial to making the most of the cost based optimizer, so I'll attempt to provide detailed explanations and examples of the formulas and arithmetic used. Finally I'll show (again) that using multiple block sizes for "tuning" purposes is generally a bad idea, along with detailed examples of why I think so.

One of the deficiencies of the traditional I/O based costing was that it simply counted the number of I/O requests, making no differentiation between single-block I/O and multi-block I/O.

System statistics were introduced in Oracle 9i to allow the cost based optimizer to take into account that single-block I/Os and multi-block I/Os should be treated differently in terms of costing and to include a CPU component in the cost calculation.

The system statistics tell the cost based optimizer (CBO), among other things, the time it takes to perform a single-block read request and a multi-block read request. Given this information, the optimizer ought to be able to come up with estimates that better fit the particular environment the database is running in, and additionally apply an appropriate cost to multi-block read requests, which usually take longer than single-block read requests. Given the time it takes to perform the read requests, the calculated cost can be turned into a time estimate.

The cost calculated with system statistics is still expressed in the same units as with traditional I/O based costing, which is in units of single-block read requests.
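Because the cost is in units of single-block reads, converting it to the Time column shown in 10g execution plans just means multiplying by SREADTIM. The Python helper below is my own sketch, not an Oracle API; I assume the estimate is rounded up to whole seconds, which matches the plans shown later in this post:

```python
import math

def cost_to_time_seconds(cost, sreadtim_ms):
    """Turn an optimizer cost (expressed in units of single-block read
    requests) into a time estimate by multiplying with the single-block
    read time SREADTIM (in ms), rounding up to whole seconds."""
    return math.ceil(cost * sreadtim_ms / 1000)

# With the synthesized SREADTIM of 12 ms derived below, a full table
# scan cost of 2709 corresponds to about 33 seconds, matching the
# Time column (00:00:33) of the first CPU costing plan in the test case.
print(cost_to_time_seconds(2709, 12))
```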

Although the mode using system statistics is also known as "CPU costing", despite the name the system statistics have their most significant impact on the I/O cost calculated for full table scans, due to the different measure MREADTIM used for multi-block read requests.

Starting with Oracle 10g you actually have a choice of three different modes of system statistics, also known as CPU costing:

1. Default NOWORKLOAD system statistics
2. Gathered NOWORKLOAD system statistics
3. Gathered WORKLOAD system statistics

The important point to understand here is that starting with Oracle 10g system statistics are enabled by default (using the default NOWORKLOAD system statistics), and you can only disable them by either downgrading your optimizer (using the OPTIMIZER_FEATURES_ENABLE parameter) or using undocumented parameters or hints (the "_optimizer_cost_model" parameter, or the CPU_COSTING and NOCPU_COSTING hints).

This initial part of the series will focus on the default NOWORKLOAD system statistics introduced with Oracle 10g.

Default NOWORKLOAD system statistics

The default NOWORKLOAD system statistics measure only the CPU speed (CPUSPEEDNW). The two remaining values used for NOWORKLOAD system statistics, IOSEEKTIM (seek time) and IOTFRSPEED (transfer speed), use default values: 10 milliseconds seek time and 4,096 bytes per millisecond transfer speed.

Using these default values for the I/O part the SREADTIM (single-block I/O read time) and MREADTIM (multi-block I/O read time) values are synthesized for cost calculation by applying the following formula:

SREADTIM = IOSEEKTIM + db_block_size / IOTFRSPEED

MREADTIM = IOSEEKTIM + mbrc * db_block_size / IOTFRSPEED

where "db_block_size" represents your database standard block size in bytes and "mbrc" is either the value of "db_file_multiblock_read_count" if it has been set explicitly, or a default of 8 if left unset. From 10.2 on this is controlled internally by the undocumented parameter "_db_file_optimizer_read_count". This means that in 10.2 and later the "mbrc" used by the optimizer to calculate the cost can be different from the "mbrc" actually used at runtime when performing multi-block read requests. If you leave the "db_file_multiblock_read_count" unset in 10.2 and later then Oracle uses a default of 8 for cost calculation but uses the largest possible I/O request size depending on the platform, which is usually 1MB (e.g. 128 blocks when using a block size of 8KB). In 10.2 and later this is controlled internally by the undocumented parameter "_db_file_exec_read_count".
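The synthesized values are easy to reproduce. The following Python sketch (my own helper, with the default IOSEEKTIM and IOTFRSPEED values mentioned above as defaults) implements the two formulas:

```python
def noworkload_read_times(db_block_size, mbrc, ioseektim=10, iotfrspeed=4096):
    """Synthesize SREADTIM and MREADTIM (in ms) from default NOWORKLOAD
    system statistics: seek time plus the transfer time for the number
    of bytes read per request."""
    sreadtim = ioseektim + db_block_size / iotfrspeed
    mreadtim = ioseektim + mbrc * db_block_size / iotfrspeed
    return sreadtim, mreadtim

# 8KB block size, mbrc = 8 -> SREADTIM = 12 ms, MREADTIM = 26 ms
print(noworkload_read_times(8192, 8))
```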

Assuming a default block size of 8KB (8192 bytes) and "db_file_multiblock_read_count" left unset, this results in the following calculation:

SREADTIM = 10 + 8192 / 4096 = 10 + 2 = 12ms

MREADTIM = 10 + 8 * 8192 / 4096 = 10 + 16 = 26ms

These values are then used to calculate the I/O cost of single-block and multi-block read requests according to the execution plan (number of single-block reads + number of multi-block reads * MREADTIM / SREADTIM), which means that the I/O cost with system statistics aka. CPU costing is expressed in units of single-block reads.
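For a full table scan this amounts to the number of multi-block read requests (blocks / mbrc) weighted by MREADTIM / SREADTIM. The exact rounding Oracle applies is not documented; the Python sketch below (my own helper) rounds up, which reproduces all but one of the plan costs in the test case that follows (the MBRC = 64 case comes out one lower than the plan shows, presumably due to internal rounding):

```python
import math

def fts_io_cost(blocks, mbrc, sreadtim, mreadtim):
    """Approximate full table scan I/O cost under NOWORKLOAD CPU costing:
    the multi-block reads needed are weighted by MREADTIM / SREADTIM,
    so the result is expressed in units of single-block reads."""
    multi_block_reads = blocks / mbrc
    return math.ceil(multi_block_reads * mreadtim / sreadtim)

# 10,000 blocks, mbrc = 8, SREADTIM = 12 ms, MREADTIM = 26 ms -> cost 2709,
# matching the CPU costing plan for db_file_multiblock_read_count = 8 below
print(fts_io_cost(10000, 8, 12, 26))
```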

You can derive from the above formula that, with system statistics, the cost of a full table scan operation is going to be higher by approximately the factor MREADTIM / SREADTIM compared to the traditional I/O based costing used by default before 10g; therefore system statistics usually tend to favor index access a bit more.

Note that the above factor MREADTIM / SREADTIM is not entirely correct, since the traditional I/O costing introduces an efficiency reduction factor at higher MBRC settings, presumably to reflect that the larger the number of blocks per I/O request, the higher the chance that the full request size cannot be used due to blocks already being in the buffer cache or hitting extent boundaries.

So with an MBRC setting of 8 the adjusted MBRC used for calculation is actually 6.59. A very high setting of 128 will actually use 40.82 for calculation. So the higher the setting, the more the MBRC used for calculation is reduced.
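These adjusted values can be back-calculated from the plan costs in the test case that follows: since the traditional I/O cost of the full table scan is blocks / adjusted MBRC, the adjusted MBRC is simply blocks / cost. The Python sketch below (my own helper, fed with the costs from the 10.2.0.4 plans shown later) checks that the observed cost ratio matches MREADTIM / SREADTIM divided by the MBRC adjustment factor:

```python
# Full table scan costs observed in the 10.2.0.4 test case below
# (10,000 blocks, synthesized SREADTIM = 12 ms):
# {mbrc: (traditional I/O cost, CPU costing cost, MREADTIM in ms)}
OBSERVED = {
    8:   (1518, 2709, 26),
    16:  (962,  2188, 42),
    32:  (610,  1928, 74),
    64:  (387,  1798, 138),
    128: (245,  1732, 266),
}

def factors(blocks=10000, sreadtim=12):
    """For each MBRC return: the adjusted MBRC (blocks / traditional cost),
    the predicted factor (MREADTIM/SREADTIM) / (MBRC / adjusted MBRC),
    and the observed ratio CPU costing cost / traditional I/O cost."""
    result = {}
    for mbrc, (io_cost, cpu_cost, mreadtim) in OBSERVED.items():
        adjusted_mbrc = blocks / io_cost
        predicted = (mreadtim / sreadtim) / (mbrc / adjusted_mbrc)
        result[mbrc] = (round(adjusted_mbrc, 2),
                        round(predicted, 2),
                        round(cpu_cost / io_cost, 2))
    return result

print(factors()[8])   # adjusted MBRC 6.59; predicted and observed factor agree
```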

The following test case demonstrates the difference between traditional I/O costing and CPU costing, and the factor MREADTIM / SREADTIM, when using different "db_file_multiblock_read_count" settings. The test case was run against 10.2.0.4 on Win32.

Note that the test case removes your current system statistics, so be cautious if your database currently has non-default system statistics.

Furthermore the test case assumes an 8KB default database block size and a locally managed tablespace with 1MB uniform extent size using manual segment space management (no ASSM).

drop table t1;

-- Create a table consisting of 10,000 blocks / 1 row per block
-- in an 8KB tablespace with manual segment space management (no ASSM)
create table t1
pctfree 99
pctused 1
-- tablespace test_2k
-- tablespace test_4k
tablespace test_8k
-- tablespace test_16k
as
with generator as (
select --+ materialize
rownum id
from all_objects
where rownum <= 3000
)
select
/*+ ordered use_nl(v2) */
rownum id,
trunc(100 * dbms_random.normal) val,
rpad('x',100) padding
from
generator v1,
generator v2
where
rownum <= 10000
;

begin
dbms_stats.gather_table_stats(
user,
't1',
cascade => true,
estimate_percent => null,
method_opt => 'for all columns size 1'
);
end;
/

-- Use default NOWORKLOAD system statistics
-- for test but ignore CPU cost component
-- by using an artificially high CPU speed
begin
dbms_stats.delete_system_stats;
dbms_stats.set_system_stats('CPUSPEEDNW',1000000);
end;
/

-- In order to verify the formula against the
-- optimizer calculations
-- don't increase the table scan cost by one
-- which is done by default from 9i on
alter session set "_table_scan_cost_plus_one" = false;

alter session set db_file_multiblock_read_count = 8;

-- Assumption due to formula is that CPU costing
-- increases FTS cost by MREADTIM/SREADTIM, but
-- traditional I/O based costing introduces an
-- efficiency penalty the higher the MBRC is
-- therefore the factor is not MREADTIM/SREADTIM
-- but MREADTIM/SREADTIM/(MBRC/adjusted MBRC)
--
-- NOWORKLOAD synthesized SREADTIM = 12, MREADTIM = 26
-- MREADTIM/SREADTIM = 26/12 = 2.16
-- Factor CPU Costing / traditional I/O costing
-- 2,709/1,518 = 1.78
-- MBRC = 8, adjusted MBRC = 10,000 / 1,518 = 6.59
-- 8/6.59 = 1.21
-- 2.16 / 1.21 = 1.78

select /*+ nocpu_costing */ max(val)
from t1;

-----------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
-----------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 4 | 1518 |
| 1 | SORT AGGREGATE | | 1 | 4 | |
| 2 | TABLE ACCESS FULL| T1 | 10000 | 40000 | 1518 |
-----------------------------------------------------------

select /*+ cpu_costing */ max(val)
from t1;

---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 4 | 2709 (0)| 00:00:33 |
| 1 | SORT AGGREGATE | | 1 | 4 | | |
| 2 | TABLE ACCESS FULL| T1 | 10000 | 40000 | 2709 (0)| 00:00:33 |
---------------------------------------------------------------------------

alter session set db_file_multiblock_read_count = 16;

-- Assumption due to formula is that CPU costing
-- increases FTS cost by MREADTIM/SREADTIM, but
-- traditional I/O based costing introduces an
-- efficiency penalty the higher the MBRC is
-- therefore the factor is not MREADTIM/SREADTIM
-- but MREADTIM/SREADTIM/(MBRC/adjusted MBRC)
--
-- NOWORKLOAD synthesized SREADTIM = 12, MREADTIM = 42
-- MREADTIM/SREADTIM = 42/12 = 3.5
-- Factor CPU Costing / traditional I/O costing
-- 2,188/962 = 2.27
-- MBRC = 16, adjusted MBRC = 10,000 / 962 = 10.39
-- 16/10.39 = 1.54
-- 3.5 / 1.54 = 2.27

select /*+ nocpu_costing */ max(val)
from t1;

-----------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
-----------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 4 | 962 |
| 1 | SORT AGGREGATE | | 1 | 4 | |
| 2 | TABLE ACCESS FULL| T1 | 10000 | 40000 | 962 |
-----------------------------------------------------------

select /*+ cpu_costing */ max(val)
from t1;

---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 4 | 2188 (0)| 00:00:27 |
| 1 | SORT AGGREGATE | | 1 | 4 | | |
| 2 | TABLE ACCESS FULL| T1 | 10000 | 40000 | 2188 (0)| 00:00:27 |
---------------------------------------------------------------------------

alter session set db_file_multiblock_read_count = 32;

-- Assumption due to formula is that CPU costing
-- increases FTS cost by MREADTIM/SREADTIM, but
-- traditional I/O based costing introduces an
-- efficiency penalty the higher the MBRC is
-- therefore the factor is not MREADTIM/SREADTIM
-- but MREADTIM/SREADTIM/(MBRC/adjusted MBRC)
--
-- NOWORKLOAD synthesized SREADTIM = 12, MREADTIM = 74
-- MREADTIM/SREADTIM = 74/12 = 6.16
-- Factor CPU Costing / traditional I/O costing
-- 1,928/610 = 3.16
-- MBRC = 32, adjusted MBRC = 10,000 / 610 = 16.39
-- 32/16.39 = 1.95
-- 6.16 / 1.95 = 3.16

select /*+ nocpu_costing */ max(val)
from t1;

-----------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
-----------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 4 | 610 |
| 1 | SORT AGGREGATE | | 1 | 4 | |
| 2 | TABLE ACCESS FULL| T1 | 10000 | 40000 | 610 |
-----------------------------------------------------------

select /*+ cpu_costing */ max(val)
from t1;

---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 4 | 1928 (0)| 00:00:24 |
| 1 | SORT AGGREGATE | | 1 | 4 | | |
| 2 | TABLE ACCESS FULL| T1 | 10000 | 40000 | 1928 (0)| 00:00:24 |
---------------------------------------------------------------------------

alter session set db_file_multiblock_read_count = 64;

-- Assumption due to formula is that CPU costing
-- increases FTS cost by MREADTIM/SREADTIM, but
-- traditional I/O based costing introduces an
-- efficiency penalty the higher the MBRC is
-- therefore the factor is not MREADTIM/SREADTIM
-- but MREADTIM/SREADTIM/(MBRC/adjusted MBRC)
--
-- NOWORKLOAD synthesized SREADTIM = 12, MREADTIM = 138
-- MREADTIM/SREADTIM = 138/12 = 11.5
-- Factor CPU Costing / traditional I/O costing
-- 1,798/387 = 4.64
-- MBRC = 64, adjusted MBRC = 10,000 / 387 = 25.84
-- 64/25.84 = 2.48
-- 11.5 / 2.48 = 4.64

select /*+ nocpu_costing */ max(val)
from t1;

-----------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
-----------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 4 | 387 |
| 1 | SORT AGGREGATE | | 1 | 4 | |
| 2 | TABLE ACCESS FULL| T1 | 10000 | 40000 | 387 |
-----------------------------------------------------------

select /*+ cpu_costing */ max(val)
from t1;

---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 4 | 1798 (0)| 00:00:22 |
| 1 | SORT AGGREGATE | | 1 | 4 | | |
| 2 | TABLE ACCESS FULL| T1 | 10000 | 40000 | 1798 (0)| 00:00:22 |
---------------------------------------------------------------------------

alter session set db_file_multiblock_read_count = 128;

-- Assumption due to formula is that CPU costing
-- increases FTS cost by MREADTIM/SREADTIM, but
-- traditional I/O based costing introduces an
-- efficiency penalty the higher the MBRC is
-- therefore the factor is not MREADTIM/SREADTIM
-- but MREADTIM/SREADTIM/(MBRC/adjusted MBRC)
--
-- NOWORKLOAD synthesized SREADTIM = 12, MREADTIM = 266
-- MREADTIM/SREADTIM = 266/12 = 22.16
-- Factor CPU Costing / traditional I/O costing
-- 1,732/245 = 7.07
-- MBRC = 128, adjusted MBRC = 10,000 / 245 = 40.82
-- 128/40.82 = 3.13
-- 22.16 / 3.13 = 7.07

select /*+ nocpu_costing */ max(val)
from t1;

-----------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
-----------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 4 | 245 |
| 1 | SORT AGGREGATE | | 1 | 4 | |
| 2 | TABLE ACCESS FULL| T1 | 10000 | 40000 | 245 |
-----------------------------------------------------------

select /*+ cpu_costing */ max(val)
from t1;

---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 4 | 1732 (0)| 00:00:21 |
| 1 | SORT AGGREGATE | | 1 | 4 | | |
| 2 | TABLE ACCESS FULL| T1 | 10000 | 40000 | 1732 (0)| 00:00:21 |
---------------------------------------------------------------------------

So as you can see, the I/O costs for a full table scan are significantly different when using default NOWORKLOAD system statistics. You can also see that the derived SREADTIM and MREADTIM values are quite different for different "db_file_multiblock_read_count" settings. Furthermore, the difference between traditional I/O based costing and CPU costing is not the factor MREADTIM / SREADTIM as suggested by the formula, but is reduced by the adjustment applied to the MBRC in traditional I/O costing.

The next part of the series will cover the remaining system statistics modes.

Unloading data using external tables in 10g

External tables can write as well as read in 10g. May 2005

Helsinki code layers in the DBMS

Ok, let's continue with the second part of "The Helsinki Declaration". That would be the part where I zoom in on the DBMS and show you how best to do this database centric thing. We have seen that the DBMS is the most stable component in everybody's technology landscape. We have also concluded that the DBMS has been designed to handle WoD application BL-code and DL-code. And current DBMS's are

Advanced Oracle Troubleshooting by Tanel Poder in Singapore

When I first saw that Tanel would conduct his seminar in Singapore, I told myself that I would even spend my own money just to be at that training! I've already read performance books like Optimizing Oracle Performance, Oracle 8i Internal Services, Forecasting Oracle Performance… And after that I still want more, and I still have questions that need to be answered. Well, if you're on a tight budget you just opt to download some more docs/books to do multiple reads coupled with research/test cases and also reading through others' blogs…
But thanks to my boss for the funding, I was there!

Oracle ACE

I've recently been invited by Oracle to accept the Oracle ACE award.

So I'm now an Oracle ACE. You can check my Oracle ACE profile here.

Thanks to Oracle ACE H.Tonguç Yılmaz and special thanks to Oracle ACE Dion Cho, who nominated me for the Oracle ACE award.

Some statistics (since I'm a CBO guy :-):

- I'm truly honored to be Oracle ACE no. 210 in the world
- There are at present 57 Oracle ACEs in the "Database Management & Performance" category (53 in "Database App Development" and 10 in "Business Intelligence")
- There are 7 ACEs from Germany at present

Maxine Johnson

I want to introduce you to Maxine Johnson, assistant manager of men's sportswear at Nordstrom Galleria Dallas. The reason I think Maxine is important is because she taught my son and me about customer service. I met her several months ago. I still have her card, and I'm still grateful to her. Here's what happened.

A few months ago, my wife and I were in north Dallas with some time to spare, and I convinced her to go with me to pick out one or two pairs of dress slacks. I felt like I was wearing the same pants over and over again when I traveled, and I could use an extra pair or two. We usually go to Nordstrom for that, and so we did again. After some time, I had two pairs of trousers that we both liked, and so we had them measured for hemming and picked them up a few days later.

A week or two passed, and then I packed a pair of my new pants for a trip to Zürich. I put them on in the hotel the first morning I was supposed to speak at an event. On my few-block walk from the hotel to the train station, I caught my reflection in a store window, and—hmmp—my pants were just not... really... quite... long enough. Every step, the whole cuff would come way up above the tops of my shoes. I stopped and tugged them down, and then they seemed alright, but then as soon as I started walking again, they'd ride back up and look too short.

They weren't bad enough that anyone said anything, but I was a little self-conscious about it. I kept tugging at them all day.

When I hung them back up in my closet at home, I noticed that when I folded them over the hanger, they didn't reach as far as the other pants that I really liked. Sure enough, when I lined up the waists, these new pants were about an inch shorter than my favorite ones that I had bought at Nordstrom probably four years ago.

Now, pants at Nordstrom cost a little more than maybe at a lot of other places, but they're worth to me what I pay for them because they're nice, and they last a long time. But these new ones made me feel bad, because they were just a little bit off. I could already foresee a future of two new pairs of slacks hanging in my closet for years, never really making the starting rotation because they're just a little bit off, but never making the garage sale pile, either, because they had cost too much.

My wife agreed. They were shorter than the others. They were shorter than they should be. I needed to get them fixed.

Now, this is the part I always hate. Having made the decision, the next step is that step where you take the thing back and try to get the problem fixed. I hate that part. My wife doesn't mind it so much, but these were my pants, and so I was the one that had to go back and put them on so someone could fix them. I really dreaded it though, because I knew that the only way they could fix those pants was to take off the cuff.

It's late in the evening by the time my wife helps me build up a little head of steam, and we both decide (well, she decides, but she's right) that tonight is the perfect night for me to go on a 20-mile drive across town to Nordstrom to get my pants fixed. As a matter of fact, it'd be good if my older boy went with me. That makes it a little more fun, because he's good company for me.

It's late enough by now that before I could leave, I had to phone ahead, just to make sure the store was still open. A nice lady answered the phone. I said my name and told the nice lady that I was having some trouble with some slacks I had bought a few weeks ago, and how late did they stay open? She told me to come right on over.

So my boy and I got into the car, and I drove right on over.

A half hour later, I walked into the store, thankful that the doors were still open, carrying two pairs of slacks on a hanger, with my son walking beside me. A smiling nice lady approached me as I entered the men's department. "Mr. Millsap?" Yes, I am. It surprises me anytime someone remembers my name from that one phase of the conversation where I say real fast, "My name is Cary Millsap, and blah blah blah blah blah," and tell my whole story. The person on the phone hadn't asked me again what my name was. She had caught it in the blur at the beginning of my story.

She proceeded to explain to me what was going to happen. I was going to try on the slacks in the dressing room. The tailor would be there waiting for me. She and the tailor would look them over. If there was enough fabric to make them longer, then they'd do that tonight. If there weren't, then she was going to find two new pairs of slacks for me, and the tailor would have them ready for me tomorrow. If for any reason, those didn't work, then she'd keep preparing new trousers for me until I was satisfied.

Mmm, ok. I was probably grinning a little bit by now, because this was pretty fantastic news. I wasn't going to have to get my pants de-cuffed. I was still a little nervous, though, that when I came out of the dressing room, everyone was going to look at me like, "So what's the problem? I don't see any problem. Those are long enough."

When I came out, Maxine Johnson crossed her arms, put her hand to her chin, shook her head a little, and immediately said something to the effect of, "Oh my, no. That won't do at all." So she brought me two new pairs, which I tried on, and which the tailor measured for me. She gave me a reclaim ticket for the next day. As usual, I had missed her name when she introduced herself as I first entered the men's department. (As you probably already figured out, I have a bad habit of not paying enough attention to that part of the conversation that I think of as "the blur.") I did have the good sense to ask for her business card, which is why I know her name is Maxine Johnson.

My boy and I talked the whole ride home that what we had seen that night had been some real, first-class retail customer care right there, and that we all knew where we'd be buying my next pairs of pants. When I had gotten into the car an hour or so before, I had been very apprehensive about what might happen. I had been especially nervous about how I'd perform during the proving-what's-wrong part of the project. But Maxine Johnson put me completely at ease during my experience. She didn't just do the right thing, she did it in such a manner that I felt glad the whole problem had happened. Here's the thing:

Maxine Johnson made me feel like it was not just ok that I brought the pants back for repair, she made me feel like she was delighted by the opportunity to show me what Nordstrom could do for me under pressure.

I hope that the way Maxine Johnson made me feel is the way that my employees and I make our customers feel. I hope it's the way my children make their customers feel someday when they go to work.

Thank you, Maxine Johnson. Thank you.

People ask the wrong question

People who know me, know that I am enthusiastic about Apex. But I am certainly not an Apex expert. By far not. The DBMS is where my knowledge is. But because they know of my enthusiasm, I often get the question whether Apex is mature enough for building a critical or large-scale WoD application.I then (sigh and) reply by saying: "You are asking the wrong question."Pay attention please.In the