In the previous post, I showed that even though a partition had been “removed” (i.e., exchanged out) from a table, a query already running against the table could still successfully run to completion.
But of course, once that partition is exchanged out, it is a table in its own right…and is subject to the whims of whatever a DBA may wish to do with it. If that table is dropped, or truncated, then as you might expect, our query is going to struggle to find that data!
Here’s an example of what happens when the query cannot successfully run:
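The output from that run isn’t reproduced in this extract, but a sketch of the DDL that gets you into this state might look like the following (the table names T and T_EXCH and the column definitions are my own illustrative choices, not from the original post):

```sql
-- hypothetical setup: a range-partitioned table, plus a plain table
-- with a matching structure to receive the exchanged partition
create table t ( x int, y int )
  partition by range ( x )
  ( partition p1 values less than (10),
    partition p2 values less than (20) );

create table t_exch ( x int, y int );

-- exchange the partition out: p1's segment now belongs to T_EXCH
alter table t exchange partition p1 with table t_exch;

-- a query started before the exchange can still run to completion,
-- but if the DBA now drops (or truncates) T_EXCH, the segment is gone...
drop table t_exch;
-- ...and the in-flight query will typically fail, for example with
-- ORA-08103: object no longer exists
```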
You might be thinking “Nothing is more frustrating than encountering a string column that is full of dates”.
But there is something worse than that…and that is when that string column is full of potential dates, and your job is to sift the good data from the bad. For example, if your table looks like this:
SQL> select * from T;

DATE_STR
--------------------
qwe
01/01/2000
31/02/2000
12-jan-14
20001212
Jan 14, 2016

6 rows selected.
Some of those values are definitely not dates, some of them definitely are dates, and some of them might be dates. The only real way of knowing is to try to convert those strings to dates and see what happens.
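One common way of doing that “try it and see” (a sketch only — the function name IS_DATE and the format mask are my own choices) is a small PL/SQL function that attempts the conversion and traps the error:

```sql
-- returns 'Y' if the string converts cleanly with the given mask, else 'N'
create or replace function is_date ( p_str in varchar2,
                                     p_fmt in varchar2 ) return varchar2 is
  l_date date;
begin
  l_date := to_date(p_str, p_fmt);
  return 'Y';
exception
  -- any conversion error (ORA-01839, ORA-01858, etc) means "not a date"
  when others then return 'N';
end;
/

-- sift the good from the bad for one candidate format
select date_str,
       is_date(date_str, 'DD/MM/YYYY') as ok
from   t;
```

If you are on 12.2 or later, the built-in VALIDATE_CONVERSION function does this natively without a helper function, e.g. `where validate_conversion(date_str as date, 'DD/MM/YYYY') = 1`.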
The OTN database forum supplied a little puzzle a few days ago – starting with the old, old, question: “Why is the plan with the higher cost taking less time to run?”
The standard (usually correct) answer to this question is that the optimizer doesn’t know all it needs to know to predict what’s going to happen, and even if it had perfect information about your data, the model it uses isn’t perfect anyway. This was the correct answer in this case, but with a twist in the tail that made it a little more entertaining. Here’s the query, with the two execution plans and the execution statistics from autotrace:
It’s sad that I could not capture in pictures the amazing sights and sounds from both inside the conference and outside in the streets of these amazing places.
I was too busy just absorbing it myself and neglected to take enough pictures.
But again, a truly wonderful couple of days.
Subtle variations in the way you work with statistics can have a significant impact on the plans the optimizer produces…so you need to take care.
Let’s look at the following example:
SQL> create table T (
  2    x varchar2(20), y varchar2(100));

Table created.

SQL> insert into T
  2  select 'x', rpad('z',100) from all_objects;

94117 rows created.

SQL> insert into T
  2  select 'X', rpad('z',100) from all_objects;

94117 rows created.

SQL> exec dbms_stats.gather_table_stats(user,'T',cascade=>true)

PL/SQL procedure successfully completed.

SQL> create index TX on T ( upper(x) );

Index created.
So you can see that in reality, ALL of the rows have a single value for UPPER(X), namely “X”. Let’s look at an execution plan.
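The plan itself isn’t reproduced in this extract, but the trap is worth spelling out: the stats were gathered *before* the index existed, and a function-based index creates a hidden virtual column (SYS_NC…$) to hold the expression. That column has no statistics yet, so the optimizer falls back to a default guess for the selectivity of UPPER(X) = 'X'. A sketch of the remedy, under that assumption, is simply to re-gather after creating the index:

```sql
-- re-gathering table stats now also collects stats on the hidden
-- virtual column that the function-based index TX created
exec dbms_stats.gather_table_stats(user, 'T')

-- the optimizer can now see that UPPER(X) has only one distinct value,
-- so a full scan (not the index) is the sensible plan for this predicate
select * from t where upper(x) = 'X';
```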
For “normal” indexes, USER_IND_COLUMNS will contain the column name(s) for an index, but things (appear to) go astray when looking at function-based indexes.
SQL> create table blah ( x varchar2(30));

Table created.

SQL> create index blah_ix on blah ( upper(x));

Index created.

SQL> select column_name from user_ind_columns
  2  where index_name = 'BLAH_IX'
  3  /

COLUMN_NAME
------------------------------
SYS_NC00002$
Don’t panic. Simply take a look at USER_IND_EXPRESSIONS to find the function that you used.
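For example, against the BLAH_IX index above, USER_IND_EXPRESSIONS reports the expression behind that system-generated column name:

```sql
select column_expression
from   user_ind_expressions
where  index_name = 'BLAH_IX';

-- COLUMN_EXPRESSION
-- --------------------
-- UPPER("X")
```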
Most people are aware of this standard referential integrity control, namely that you can’t delete a parent row if child rows exist.
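A minimal sketch of that control (the table names PAR and CHD are my own, not from the original post):

```sql
create table par ( p int primary key );
create table chd ( c int references par );

insert into par values (1);
insert into chd values (1);

-- attempting to delete the parent while a child row exists fails,
-- typically with:
-- ORA-02292: integrity constraint violated - child record found
delete from par where p = 1;
```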