November 2010

phpBB 3.0.8 Released

phpBB 3.0.8 has been released. Happy upgrading.

Cheers

Tim…

MOW2011

Miracle Openworld 2011
Event date: 
Wed, 2011-04-13 - Fri, 2011-04-15

MOW2011 planning progresses. On April 13 to 15 we hold the conference again at Hotel LEGOLAND in Billund, with accommodation at Lalandia. 
MOW2011 will have four main categories: Oracle (two tracks), Microsoft (two tracks), a Java track, and an open-source track. 
 

Seminar in Bucharest

Thank you to all those who attended my seminars in Bucharest, and thank you for spending two days with me. I hope it has been worthwhile. I sincerely apologize for the delay in the start of the first day. It was entirely my fault - I messed up the time difference calculations.

Please access the scripts and other materials from http://www.proligence.com/advrman_bucharest. You should have the userid and password from the class. If you have forgotten, please let me know.
Here is the additional resource I was referring to, which presents a consolidated view of the status of all RMAN jobs in a single catalog: http://arup.blogspot.com/2010/11/tool-to-present-consolidated-view-of.html

Quiz Night

Apart from the fact that the “Rows” figure for the FILTER operation at line 6 is blank, what’s the obvious error in this extract from an execution plan:

-------------------------------------------------------------------------
| Id  | Operation                            | Name             | Rows  |
-------------------------------------------------------------------------
        ...
|   5 |      NESTED LOOPS                    |                  |  3864 |
|   6 |       FILTER                         |                  |       |
|   7 |        HASH JOIN OUTER               |                  |  3864 |
|   8 |         HASH JOIN OUTER              |                  |   282K|
|   9 |          TABLE ACCESS BY INDEX ROWID | PRODUCT          |   282K|
|  10 |           INDEX RANGE SCAN           | PRD_SUPP_I1      |   282K|
|  11 |          VIEW                        |                  |  2293K|
|  12 |           HASH GROUP BY              |                  |  2293K|
|  13 |            PARTITION LIST SINGLE     |                  |  5790K|
|  14 |             TABLE ACCESS FULL        | PRODUCT_PRICING  |  5790K|
|  15 |         VIEW                         |                  |  2307K|
|  16 |          HASH GROUP BY               |                  |  2307K|
|  17 |           PARTITION LIST SINGLE      |                  |  5703K|
|  18 |            TABLE ACCESS FULL         | PRODUCT_PRICING  |  5703K|
        ...
-------------------------------------------------------------------------

Update 21/Nov/2010:
Once again I am reminded of two things – it’s important to be precise in your use of language if you want people to understand the question; and you can see a lot if you look carefully.

If you start to think about the activity that the plan represents, and the SQL that might have produced it, there are some ideas you might get about re-writing the query to be more efficient – but the point I was trying to make is that there is clearly an error in the content that the optimizer is displaying. The error suggests either that the optimizer has done the wrong arithmetic, or that the output is not a correct copy of the results produced by the optimizer.

The answer I was expecting comes from line 7. Stripping the error back to the bare minimum we see this:

--------------------------------------------------------------------------
| Id  | Operation                             | Name             | Rows  |
--------------------------------------------------------------------------
        ...
|   7 |        HASH JOIN OUTER                |                  |  3864 |
|   8 |         rowsource 1 (HASH JOIN OUTER) |                  |   282K|
|  15 |         rowsource 2 (VIEW)            |                  |  2307K|
        ...
--------------------------------------------------------------------------

As Milo points out in comment 3, in an outer join the result set cannot have fewer rows than the “preserved” rowsource (which, in this case, is the result set from line 8). I mentioned the fact that the “Rows” figure for the FILTER operation at line 6 was blank – it’s just possible that the optimizer has overwritten the figure in line 7 with the figure that should have been in line 6; there are cases where a FILTER operation and the operation you would normally think of as its first child are combined, so it’s possible that a little storage glitch has appeared in some cases where the combination rule doesn’t apply.
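The outer-join rule is easy to check mechanically. Here is a minimal sketch using Python's sqlite3 module as a stand-in for Oracle (table names are made up): the count from a left outer join can never be lower than the count from the preserved rowsource, because every preserved row appears at least once.

```python
import sqlite3

# Two toy rowsources: "preserved" keeps all its rows in a left outer join.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE preserved (id INTEGER)")
cur.execute("CREATE TABLE optional (id INTEGER)")
cur.executemany("INSERT INTO preserved VALUES (?)", [(i,) for i in range(5)])
cur.executemany("INSERT INTO optional VALUES (?)", [(1,), (1,), (3,)])

n_preserved = cur.execute("SELECT COUNT(*) FROM preserved").fetchone()[0]
n_joined = cur.execute(
    "SELECT COUNT(*) FROM preserved p LEFT JOIN optional o ON o.id = p.id"
).fetchone()[0]

# id=1 matches twice, so the join produces 6 rows from 5 preserved rows --
# the result can grow, but it can never shrink below n_preserved.
print(n_preserved, n_joined)
```

This is exactly why a HASH JOIN OUTER estimated at 3864 rows sitting on top of a 282K-row preserved rowsource cannot be right.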

Someone did mention the FILTER operation and pointed out that it wasn’t filtering any data. The commonest forms of FILTER operation essentially check that some predicate is true for each row in their first child rowsource – and it is possible for someone to write code that has a filter that doesn’t eliminate any rows. In fact, though, this plan is probably saying: “line 7 will produce 282K rows, and the filter at line 6 will reduce that to 3,864.” (There’s also a comment about a “group by” not reducing the size of the rowsource – that comment was caused by a parallax error, but it is possible, of course, for Oracle to decide that a “group by” is going to produce an output with just as many rows as the input.)

Sean Molloy’s opening comment asks how you can get two different estimates from the same tablescan — and follows up with one answer, which is that since we are looking at PARTITION LIST SINGLE the two tablescans could be of different partitions. But it’s only a puzzle if there are no predicates on the tablescans and, as Pavol points out in comment 7, there are no “star flags” in the Id column to suggest the presence of any predicates – in fact there are no stars anywhere – yet there have to be some predicates in the plan, since you can’t do a hash join, index range scan, or filter without a predicate. As Timur points out, you don’t get the predicate section in the report from dbms_xplan.display_awr(), so you don’t get the stars.

Speaking of missing information, Dave Costa in comment 4 suggests that the user made an error in choosing which bits of the plan to copy. I did the choosing – how could it possibly be wrong! Given the number of times I’ve said “you must include the predicate section”, why isn’t it there? (Answer: it’s display_awr.) In fact lines 5 and 6 were redundant as far as the “obvious problem” was concerned – but I thought that the blank and the reappearance of the same cardinality might be a helpful visual clue.

The SQL:

Several people have commented on the rationale for code that does two outer join aggregations on the same table. It does look a little unusual, but it’s not possible to come to any conclusion about whether it’s a good thing or a bad thing without knowing the data and the intent of the SQL. For example the intent could be something like:


select
        product_name, min(offer_price), max(offer_price) ...
from

(You can assume that in the actual code, the min() and max() would be hidden inside a couple of inline views.)
In this case code which visits the product_pricing table twice might be a very good idea – because there is a special “index (min/max) range scan” optimisation that works very well if you (a) have the right indexes in place, (b) use a nested loop, and (c) only ask for min, or max, but not both. Perhaps our problem is that the optimizer is doing a hash join when it should be doing a nested loop.
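To make the "ask for min or max, but not both" point concrete, here is a minimal sqlite3 sketch (table and index names are made up; the index (min/max) range scan itself is an Oracle optimisation, sqlite3 is just standing in to show the query shape). Each query requests a single aggregate over an indexed column, which is the shape that lets an optimizer probe one end of the index instead of scanning the range.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE product_pricing (product_id INTEGER, offer_price REAL)")
# An index on (product_id, offer_price) means the min and max prices for a
# product sit at the two ends of one index range.
cur.execute("CREATE INDEX pp_i1 ON product_pricing (product_id, offer_price)")
cur.executemany("INSERT INTO product_pricing VALUES (?, ?)",
                [(1, p) for p in (9.0, 11.0, 14.0)])

# Two separate single-aggregate queries -- each one is eligible for an
# index-endpoint probe; asking for MIN and MAX together generally is not.
lo = cur.execute("SELECT MIN(offer_price) FROM product_pricing "
                 "WHERE product_id = 1").fetchone()[0]
hi = cur.execute("SELECT MAX(offer_price) FROM product_pricing "
                 "WHERE product_id = 1").fetchone()[0]
print(lo, hi)  # 9.0 14.0
```

Which is why a query plan that visits the pricing table twice, once per aggregate, is not automatically a mistake.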

Another possibility is that we have something like:


select
        product_name, min(gbp_offer_price), min(usd_offer_price) ...
from

(Again you can assume the min() and max() would be hidden inside inline views, and the different column names would be derived names rather than being from two separate columns in a table.)
Notice that the plan shows list partitioning – maybe we have one currency per partition, and we’ve written the query to maximise the benefits of partition elimination (as well as leaving the data nicely normalised, thus maximising efficiency of maintenance).
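A minimal sqlite3 sketch of this second shape (all names hypothetical; a currency column stands in for the list partitions): two aggregating inline views, one per currency, each outer-joined back to the product table – the same shape as the two VIEW / HASH GROUP BY branches in the plan.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE product (product_id INTEGER PRIMARY KEY, product_name TEXT)")
cur.execute("CREATE TABLE product_pricing (product_id INTEGER, currency TEXT, offer_price REAL)")
cur.executemany("INSERT INTO product VALUES (?, ?)", [(1, "widget"), (2, "gadget")])
cur.executemany("INSERT INTO product_pricing VALUES (?, ?, ?)",
                [(1, "GBP", 10.0), (1, "GBP", 12.0), (1, "USD", 15.0), (2, "GBP", 7.0)])

# Each inline view touches only one "partition" (currency); the outer joins
# keep products that have no price in a given currency.
rows = cur.execute("""
    SELECT p.product_name, g.min_gbp, u.min_usd
    FROM product p
    LEFT JOIN (SELECT product_id, MIN(offer_price) AS min_gbp
               FROM product_pricing WHERE currency = 'GBP'
               GROUP BY product_id) g ON g.product_id = p.product_id
    LEFT JOIN (SELECT product_id, MIN(offer_price) AS min_usd
               FROM product_pricing WHERE currency = 'USD'
               GROUP BY product_id) u ON u.product_id = p.product_id
    ORDER BY p.product_id
""").fetchall()
print(rows)  # gadget has no USD price, so its min_usd is NULL
```

In the Oracle case each WHERE currency = ... filter is what the PARTITION LIST SINGLE steps would satisfy by elimination.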

Bottom line on the SQL – in this case I wasn’t asking people to guess what was wrong with the code; but it’s very interesting to see how many different topics of thought can come out from a starting point of a few lines extracted from a plan.

Large ASM Adoptions and Lessons Learned

Seems like December came quickly this year…  and UKOUG is only one week from Monday!

This will be my first year attending UKOUG – and I will be giving a new presentation called “Large Scale ASM Adoptions and Lessons Learned.”  I was personally involved in a very large ASM adoption project and I’m also talking to a few other acquaintances with similar experiences.  I will be summarizing our collective stories and lessons learned in this presentation.  My session will be Monday morning (Nov 29) at 10:25 am – please stop by!

However there’s another way you are invited to participate in my UKOUG session.  Do you know anybody who has been involved in an ASM adoption?  Have you been involved in one?  I have created a web survey with the questions that I’m asking my acquaintances about their experiences with ASM.

Single Sign-On Security Issue in 11.2.0.2 Patchset

November 18, 2010 I just browsed through Metalink (MOS) to see which operating system platforms are currently supported for Oracle Database 11.2.0.2.  I noticed that the 11.2.0.2 patchset for 64 bit Linux was updated yesterday, and more digging revealed that all of the 11.2.0.2 patches for the other supported operating systems were also recently updated. Take [...]

Configuring Linux Hugepages for Oracle Database Is Just Too Difficult! Isn’t It? Part – III. Do You Really Want To Configure The Absolute Minimum Hugepages?

In part I of my recent blog series on Linux hugepages and modern Oracle releases I closed the post by saying that future installments would materialize if I found any pitfalls. I don’t like to blog about bugs, but in cases where there is little material on the matter provided elsewhere I think it adds value. First, however, I’d like to offer links to parts I and II in the series:

The pitfall I’d like to bring to readers’ attention is a situation that can arise in the case where the Oracle Database 11g Release 2 11.2.0.2 parameter use_large_pages is set to “only” thus forcing the instance to either successfully allocate all shared memory from the hugepages pool or fail to boot. As I pointed out in parts I and II this is a great feature. However, after an instance is booted it stands to reason that other processes (e.g., Oracle instances) may in fact use hugepages thus drawing down the amount of free hugepages. In fact, it stands to reason that other uses of hugepages could totally deplete the hugepages pool.

So what happens to a running instance that successfully allocated its shared memory from the hugepages pool and hugepages are later externally drawn down? The answer is nothing. An instance can plod along just fine after instance startup even if hugepages continue to get drawn down to the point of total depletion. But is that the end of the story?

What Goes Up, Must (be able to) Come Down
OK, so for anyone that finds themselves in a situation where an instance is up and happy but HugePages_Free is zero the following is what to expect:

$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Wed Sep 29 17:32:32 2010

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Connected to an idle instance.

SQL>
SQL> HOST grep -i huge /proc/meminfo
HugePages_Total:  4663
HugePages_Free:      0
HugePages_Rsvd:     10
Hugepagesize:     2048 kB

SQL> shutdown immediate
ORA-01034: ORACLE not available
ORA-27102: out of memory
Linux-x86_64 Error: 12: Cannot allocate memory
Additional information: 1
Additional information: 6422533
SQL>

Pay particular attention to the fact that sqlplus is telling us that it is attached to an idle instance! I assure you, this is erroneous. The instance is indeed up.

Yes, this is bug 10159556 (I filed it, for what it is worth). The solution is to have ample hugepages as opposed to precisely enough. Note, in another shell a privileged user can dynamically allocate more hugepages (even a single hugepage) and the instance will then be able to be shut down cleanly. As an aside, an instance in this situation can be shut down with abort. I don’t aim to insinuate that this is some sort of zombie instance that will not go away.
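Spotting this condition before attempting a shutdown is straightforward. Here is a small hypothetical Python helper (not part of the post) that parses the HugePages_* counters that the HOST command above greps out of /proc/meminfo:

```python
def parse_hugepages(meminfo_text):
    """Return a dict of the HugePages_* counters from /proc/meminfo text."""
    counters = {}
    for line in meminfo_text.splitlines():
        if line.startswith("HugePages_"):
            name, value = line.split(":")
            counters[name.strip()] = int(value.strip())
    return counters

# Sample taken from the SQL*Plus HOST output shown above.
sample = """HugePages_Total:  4663
HugePages_Free:      0
HugePages_Rsvd:     10
Hugepagesize:     2048 kB"""

counters = parse_hugepages(sample)
print(counters)
# With HugePages_Free at zero, a shutdown immediate can hit ORA-27102.
if counters["HugePages_Free"] == 0:
    print("warning: hugepages pool depleted; shutdown may fail")
```

On a live system you would pass in `open("/proc/meminfo").read()` instead of the sample string.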

Filed under: oracle

I might have to move...

Nothing about technology or the database here. If you aren't interested in my thoughts about things outside of that - stop reading, go away.

I lived next to New Jersey for much of my life (1972-1987).

I lived in New Jersey for more than a year (commuted there for a year, lived there for a year - 1992-1993).

I might have to move back. Because of this (NJ is not the only one, Minnesota is looking into it as well). Bravo I say. I fly *a lot* and I love flying *out* of country - because I know all of my flights that week (after the first one) - until the last one back home - will be "nice" ones.

Because:

I don't have to take off my shoes.
I don't have to empty my entire backpack for someone.
I don't have to be naked (in a picture for someone, somewhere).
I don't have to be felt up all over.
I don't have to be insulted in many ways and forms.

Until I come home, that is - leaving and coming back, that is the problem these days. It didn't used to be that way, but it is now. It is easier for me to go from one foreign country to another than it is to go from state to state in the US. Seriously. I'd rather fly from place to place in another country than in the US - it is that much of a hassle. I feel the need to get to an airport at least 2 hours early, if not 3 to be safe, "just in case something new and special" happens to me that time.

It isn't about safety anymore - it's about "let us react to the latest thing that happened and pretend that makes everyone feel better about themselves"

I've flown in/out of Israel more than a few times. They have it right. I've never stood in their lines (and they have lots of traffic) for more than 30 minutes. And I've never been subjected to what I am in the US. Just saying. They seem to be pretty darn secure. I've seen every layer of their security and experienced it. Pretty innocuous - yet daunting. But definitely not embarrassing or intrusive.

Autotrace oddity

I got an unexpected error message from SQL*Plus a couple of days ago while trying to enable autotrace with statistics:

SQL> set autotrace  traceonly statistics;
ERROR:
ORA-24315: illegal attribute type

SP2-0619: Error while connecting
SP2-0611: Error enabling STATISTICS report
SQL>

This was a little puzzling, but since I work across a lot of different sites, databases, and instances, I did a quick check to make sure that the plustrace role had been created on this system and that the schema I was using to log on had been granted the role. Everything looked okay … except for this error message. So I reconnected (almost as good as rebooting as a method for solving problems) and got the following error message:

SQL> connect XXXXXXXX/XXXXXXXX@XXXXXXXX       -- no names, no pack drill
ERROR:
ORA-28002: the password will expire within 7 days

Problem solved. When you enable autotrace statistics SQL*Plus tries to create a second session through your current process (which is one of the reasons why v$session can end up with more rows than v$process). By an unfortunate coincidence of timing, my password had reached the expiry warning point while my initial session was logged on, so the recursive session got error ORA-28002 when it tried to connect, and SQL*Plus reported this back in the only way that seemed appropriate.

Once I’d worked out what the problem was all I did was change my password, end all my active sessions, and then logon again. If you want a demonstration of the effect, here’s a simple demo of a very similar problem (cut-n-paste from an SQL*Plus session connected to an 11gR1 database – using schema test_user, when the password wasn’t “fred”):

SQL> alter user test_user identified by fred;

User altered.

SQL> set autotrace traceonly statistics;
ERROR:
ORA-01017: invalid username/password; logon denied

SP2-0619: Error while connecting
SP2-0611: Error enabling STATISTICS report
SQL>

SLB - The Silly Little Benchmark

New SLB Kit
I’d like to point out a couple of things about the new SLB tar archive.

  1. The code has changed so results from this kit are not comparable to prior kits.
  2. The kit now performs 30 seconds of random memory reads followed by 30 seconds of random memory writes.
  3. The kit includes a wrapper script called runit.sh that runs SLB processes, each with 512MB of physical memory. The argument to runit.sh is a loop control of how many SLB processes to run upon each invocation of the benchmark.
  4. The kit includes a README that shows how to compile the kit and also offers further explanation of item #3 in this list.
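For readers who haven't looked inside the kit, a hypothetical Python sketch of what a wrapper like runit.sh does (the command and its arguments below are stand-ins, not taken from the kit's README): launch N copies of a benchmark command concurrently and wait for them all to finish.

```python
import subprocess
import sys

def run_benchmark(n_processes, cmd):
    """Start n_processes copies of cmd concurrently; return their exit codes."""
    procs = [subprocess.Popen(cmd) for _ in range(n_processes)]
    return [p.wait() for p in procs]

# Stand-in command; the real wrapper would invoke the SLB binary (with its
# 512MB memory-size setting) instead of this no-op Python process.
codes = run_benchmark(3, [sys.executable, "-c", "pass"])
print(codes)  # [0, 0, 0]
```

The loop-control argument in runit.sh corresponds to n_processes here.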