Oakies Blog Aggregator

Oaktable Website

Thanks to the valiant work of James Morle there is an awesome new website aggregating the work and postings of the members of the OakTable Network. Check it out: www.oaktable.net

Fun with Oracle Exadata V2

Well I’ve been holed up playing with an Exadata V2 machine for the past several weeks. Wow. Very interesting technology.

I must say that I believe the concept of offloading SQL processing to the storage layer is a game changer and I wouldn’t be at all surprised to see this as a standard feature a few years from now. What that means for other storage vendors is unclear at this point. So for this first post on the topic let me just describe the configuration (and some potential upgrades).

The basic architecture consists of a set of database servers and a set of storage servers.

Database Servers:

  • Sun x4170 (1RU x64 server)
  • 2 - Quad-core Intel Xeon E5540 2.53GHz processors
  • 72G RAM (18 × 4G DIMMs - max of 144G using 8G DIMMs)
  • Dual-Port QDR InfiniBand Host Channel Adapter
  • HBA with 512MB Battery Backed Write Cache (only for internal disks???)
  • 4 - 146G internal drives (SAS 10,000 RPM)
  • dual hot swappable power supplies
  • no spare/empty slots!

Here’s what the Database Servers look like:

Storage Servers:

  • Sun x4275 (2RU x64 server)
  • 2 - Quad-core Intel Xeon E5540 (2.53GHz) processors
  • 24G RAM
  • Dual-Port QDR InfiniBand Host Channel Adapter
  • HBA with 512MB Battery Backed Write Cache (only for internal disks???)
  • dual hot swappable power supplies
  • 4 - 96G Sun Flash PCIe Cards (total of 384 GB)
  • 12 - 600 GB 15,000 RPM SAS or 2 TB 7,200 RPM SATA drives


Here’s what the Storage Servers look like:

In addition to these two components there are two InfiniBand switches (redundant) to tie the whole thing together. There is also a KVM and a couple of Cisco 4948 switches. And the whole thing comes in a pretty rack with a fancy Exadata logo. Here are a couple of real pictures taken by a real crew member:

Oracle is big on the balanced configuration. That is to say, they have spent a lot of time trying to eliminate bottlenecks in any one component. That said, it seems to me that at some point they will start to allow more customizable configurations. For example, 72G on the database servers is not a huge amount of memory these days, especially for a consolidation platform that will house several instances. So here are some thoughts on potential à la carte upgrades that may be possible at some point.

  1. The database servers are capable of using any Intel 5500 series chips. They ship with 2.53GHz 5540s, which are middle of the road for the series. The 5540s could potentially be replaced with 5570s, which are clocked at 2.93GHz.
  2. The database servers come with 18 4G DIMMs for a total of 72G of RAM. 8G DIMMs are available, which would increase the RAM to 144G.
  3. The database servers (Sun x4170) have no spare slots, making it difficult to attach external devices such as tape backup units and other fiber-based storage. Swapping the 1U x4170s for the 2U x4275s would provide additional slots, allowing more connection options.
  4. Adding additional storage servers is possible regardless of which option is chosen (quarter rack, half rack, etc.). Oracle has already agreed to this, as long as they are not mounted in the Exadata rack enclosure.
  5. External database servers can be configured to access the storage servers. This would require InfiniBand cards on the external servers, and of course you'd need to be running 11gR2 to take advantage of the storage servers' "special sauce".

Here are a couple of pertinent statements regarding upgrades out of the Oracle Exadata FAQ:

Can the Sun Oracle Exadata Storage Server hardware or software be modified or customized?

Customization or modification of the Sun Oracle Exadata Server Hardware or the Oracle Exadata Server Software is not allowed or supported. If a customer modifies, customizes, or installs additional hardware or software on the Sun Oracle Exadata Storage Server, the Sun Oracle Exadata Storage Server will no longer be supported.


What additional software can be installed on the database servers in a Sun Oracle Database Machine?

For optimal performance and supportability, Oracle recommends that only the Oracle Database and database options be installed on the Sun Oracle Database Machine. Other than the Oracle Database software, we recommend that only agent software, for services like backup or system management, be installed on the Sun Oracle Database Machine.

What are the consequences of making Sun Oracle Database Machine hardware modifications?

Oracle does not currently allow changes to the hardware in the Sun Oracle Database Machine including re-racking of the servers and switches. The Sun Oracle Database machine is a standard configuration that is assembled, tested, and supported as a unit. For many years, Oracle has supported configurations in the market in which the customer can choose any hardware platform and configuration that they desire. With the Sun Oracle Database machine Oracle is taking a different approach and standardizing the complete configuration. We believe a standard configuration will allow customers to achieve stable, high-performance production deployments much faster and with lower cost.

Can a customer add additional storage cells, to an existing Quarter Rack or Half Rack Sun Oracle Database Machine?

Additional Exadata cells should be added to a Database Machine by installing them in a separate adjacent rack. You should not attempt to add Exadata cells to a Half or Quarter rack as it will change the environmental characteristics of those Database Machines and prevent use of the standard Database Machine upgrades.

Can the Oracle Database use Exadata Storage simultaneously with traditional non-Exadata storage products?

Yes, databases can be concurrently deployed on Exadata storage and non-Exadata storage. Tablespaces can exist on Exadata storage, non-Exadata storage, or a combination of the two and this is transparent to database applications. However, Exadata Smart Scan offload processing is enabled only if the entire tablespace being queried resides on Exadata storage.

Can the Database Machine hardware be modified or customized? For example, can I add more Ethernet Cards or Fibre Channel HBAs?

The Sun Oracle Database Machine hardware cannot be modified or customized. There is one exception: the only hardware modification to the Database Machine that is allowed is to the administrative 48-port Cisco 4948 Gigabit Ethernet switch that is included in the Database Machine. Customers may choose to:

  1. Replace the Gigabit Ethernet switch, at their own expense, with an equivalent 1U 48-port Gigabit Ethernet switch that conforms to their internal datacenter network standards. This replacement must be performed by the customer after delivery of the Database Machine at their expense and labor. If the customer chooses to make this change, Sun and Oracle cannot make or assist with this change given the numerous possible scenarios involved and because it is not scoped as part of the standard installation. The customer must supply the replacement hardware and make or arrange for this change through other means.
  2. Remove the CAT5 cables connected to the Cisco 4948 Ethernet switch and connect them to the customer’s network through an external switch or patch panel. The customer must perform these changes at their expense and labor. In this case, the Cisco 4948 Ethernet switch in the rack can be turned off and left unconnected to the data center network.

Well, that’s it for now. I’ll try to post soon on some of the more interesting features of the platform. Let me know what you think or if you have any specific things you’d like me to investigate.

A new thing about sql*plus

What I learned today was a new feature in the venerable old tool SQL*Plus.

I was asked about trapping the SP2 errors in SQL*Plus, those you get when you have a bad SQL*Plus command - like this:

ops$tkyte%ORA11GR2> selct * from dual;
SP2-0734: unknown command beginning "selct * fr..." - rest of line ignored.

In that case - neither of the SQL*Plus error handling bits:

OSERROR
SQLERROR

will work for you - it is not an OS error like "unable to open spool file", and it is not a SQL error - selct is not SQL, so it never got to the SQL layer. That SP2 error is uncatchable. And I wrote as much.
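To see why, here's a minimal sketch (the script itself is hypothetical; the directives are standard SQL*Plus). WHENEVER SQLERROR fires only for errors returned by the database, so the misspelled command below never triggers it:

WHENEVER SQLERROR EXIT SQL.SQLCODE
selct * from dual;    -- SP2-0734 is printed, but the error branch never fires
select * from dual;   -- runs normally; sqlplus carries on and exits with success
exit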

Things change - my answer is dead on correct, for version 10gR2 and before (hey, the question was from someone using 10gR2 - so technically, I was correct ;) ). It is not technically true in 11g and above.

SQL*Plus in 11g added an "error logging" facility. A session may issue:

SQL> set errorlogging on

and have any SQL, OS or SP2 errors logged into a logging table, similar to DML error logging. Additionally, you can have your errors tagged with an identifier, making it easy to find your error records. So you can now check (using SQL) at various points in time to see if you've hit an error - or a program that runs sqlplus against a script can easily check whether any errors occurred in the session.
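As a quick sketch of the tagging side (the identifier value below is made up; query sperrorlog to see exactly how yours is stored):

SQL> set errorlogging on identifier my_batch
SQL> selct * from dual;
SP2-0734: unknown command beginning "selct * fr..." - rest of line ignored.
SQL> select statement, message from sperrorlog where identifier = 'my_batch';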

Thanks to Enrique Aviles for pointing it out, and thanks to Arup Nanda for writing it up (ctrl-F for "SQL*Plus Error Logging" on that page).

Note that you need the 11g SQL*Plus, not just an 11g database with an old sqlplus connected to it! This is a feature of SQL*Plus.

On the flip side, though, this means it is available for older database releases! You can connect to 9i with 11g SQL*Plus and use this:


[tkyte@dellpe ~]$ sqlplus scott/tiger@ora9ir2

SQL*Plus: Release 11.2.0.1.0 Production on Fri Apr 23 15:36:51 2010

Copyright (c) 1982, 2009, Oracle. All rights reserved.


Connected to:
Oracle9i Enterprise Edition Release 9.2.0.8.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.8.0 - Production

scott%ORA9IR2> set errorlogging on
scott%ORA9IR2> selct * from dual;
SP2-0734: unknown command beginning "selct * fr..." - rest of line ignored.
scott%ORA9IR2> select timestamp, username, script, statement, message
2 from sperrorlog;

TIMESTAMP
---------------------------------------------------------------------------
USERNAME
-------------------------------------------------------------------------------
SCRIPT
-------------------------------------------------------------------------------
STATEMENT
-------------------------------------------------------------------------------
MESSAGE
-------------------------------------------------------------------------------
23-APR-10 03.37.02.000000 PM
SCOTT

selct * from dual;
SP2-0734: unknown command beginning "selct * fr..." - rest of line ignored.

The Core Performance Fundamentals Of Oracle Data Warehousing – Data Loading

[back to Introduction] Getting flat file data into your Oracle data warehouse is likely a daily (or possibly more frequent) task, but it certainly does not have to be a difficult one. Bulk loading data rates are governed by the following operations and hardware resources:

  • How fast can the data be read
  • How fast can the data be written out
  • How much CPU power is available

I'm always a bit amazed (and depressed) when I hear people complain that their data loading rates are slow and they proceed to tell me things like:

  • The source files reside on a shared NFS filer (or similar) and it has just a single GbE (1 Gigabit Ethernet) network path to the Oracle database host(s).
  • The source files reside on an internal disk volume which consists of a two disk mirror (or a volume with very few spindles).

Maybe it's not entirely obvious, so let me spell it out (as I did in this tweet): one cannot load data into a database faster than it can be delivered from the source. Database systems must obey the laws of physics! Or, putting it another way: don't fall victim to slow data loading because of a slow performing data source. [...]
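As a hedged illustration of the read and write sides (all object names below are hypothetical, not from the original post), a typical bulk load pairs an external table over the flat file with a direct-path insert:

-- External table: lets the database read the flat file directly
create table sales_ext (
  sale_id  number,
  amount   number
)
organization external (
  type oracle_loader
  default directory load_dir         -- hypothetical directory object
  access parameters (
    records delimited by newline
    fields terminated by ','
  )
  location ('sales.csv')             -- hypothetical source file
);

-- Direct-path insert: writes formatted blocks above the high-water mark
insert /*+ APPEND */ into sales select * from sales_ext;
commit;

Even with this approach, if sales.csv sits behind a single GbE link, the load cannot run faster than that link can deliver the bytes.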

Types of Data Visualization


I'm fascinated by data visualization and here is a fun little widget - the periodic chart of visualization methods. Make sure to pass your mouse over each cell to see the example graphic pop up.

Quiz: Explaining index creation

Did you know that it's possible to use EXPLAIN PLAN FOR CREATE INDEX i ON table(col1,col2,col3) syntax to explain what exactly would be done when an index is created?

That's useful, for example, for seeing Oracle's estimated index size without having to actually create the index.

You can also use EXPLAIN PLAN FOR ALTER INDEX i REBUILD to see whether this operation would use a FULL TABLE SCAN or a FAST FULL INDEX SCAN (offline index rebuilds of valid indexes can use this method).
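For example (using a hypothetical table and index, following the syntax above):

SQL> explain plan for create index t1_idx on t1(owner, object_name);

Explained.

SQL> select * from table(dbms_xplan.display);

The Note section at the bottom of the plan output shows the estimated index size, just as in the quiz output below.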

Anyway, you can experiment with this yourself, but here’s a little quiz (with a little gotcha :)

What kind of index creation statement would create such an execution plan?

SQL> explain plan for create index hack_index on hack_table ....the rest is a secret for now....
Explained.

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 457720527
-------------------------------------------------------------------------------------------------
| Id  | Operation                | Name                 | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------
|   0 | CREATE INDEX STATEMENT   |                      | 88868 |       |   657   (1)| 00:00:12 |
|   1 |  INDEX BUILD NON UNIQUE  | HACK_INDEX           |       |       |            |          |
|   2 |   SORT AGGREGATE         |                      |     1 |       |            |          |
|   3 |    VIEW                  | HACK_VIEW            | 74062 |       |   318   (1)| 00:00:06 |
|*  4 |     HASH JOIN            |                      | 74062 |  1012K|   210   (1)| 00:00:04 |
|   5 |      TABLE ACCESS FULL   | TEST_USERS           |    46 |   368 |     4   (0)| 00:00:01 |
|   6 |      INDEX FAST FULL SCAN| IDX2_INDEXED_OBJECTS | 74062 |   433K|   206   (1)| 00:00:04 |
|   7 |   SORT CREATE INDEX      |                      | 88868 |       |            |          |
|   8 |    TABLE ACCESS FULL     | HACK_TABLE           | 88868 |       |   544   (1)| 00:00:10 |
-------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
4 - access("U"."USERNAME"="O"."OWNER")
Note
-----
- estimated index size: 3145K bytes


An improved OakTable web site

Today the new OakTable web site, www.oaktable.net, has been published: many thanks to Kurt Van Meerbeeck (who, I'm told, worked the most on the site), James Morle and Marco Gralike!

I really like (besides the light and modern look) the aggregator of the OakTable members' blogs - a window on high-quality news and investigations about Oracle ...

Thanks, Kurt (and Marco and James ...)!

The members of the Oak Table Network had long bemoaned the slightly tired look of the old oaktable.net website. James Morle had put a lot of spare-time effort into running it for years, but it really needed some attention in terms of content and presentation.

But, as all discussions about the website seemed to take place over whisky, beer or wine ... nothing much seemed to happen!

Actually, what happened was that a small team of volunteers got together and got to work, each with their own responsibilities. As far as I know, that team consisted of Kurt the Dude guy, Marco Gralike of Amis and James. Apologies if I've forgotten someone or over-emphasised Kurt's role in doing the majority of the Drupal work, but the new site is a joy to behold, relative to the old one, and as it includes Twitter and blog feeds, I think it'll be a more useful daily resource.

Thanks, guys - great job! (Although I'm not surprised ....)

If anyone has any feedback, we're all ears.

OTN

That's the Oak Table Network, not the other OTN. We have a new website which, amongst other things, includes a convenient blog aggregator that will include the blogs of all Oak Table members. I'll probably be taking all the Oak Table members off my blog roll some time soon since I've found it very convenient to [...]

Statistics on Partitioned Tables - Part 6b - COPY_TABLE_STATS - Mistakes

Sigh ... these posts have become a bit of a mess.

There are so many different bits and pieces I want to illustrate, and I've been trying to squeeze them in around normal work. Worse still, because I keep leaving them and then coming back to them and re-running tests, it's easy to lose track of where I was, despite using more or less the same test scripts each time (any new scripts tend to be sections of the main test script). I suspect my decision to pull out only the more interesting parts of the output has contributed to the difficulties too, but with around 18,500 lines of output, I decided that was more or less essential.

It has got so bad that I noticed the other day that there were a couple of significant errors in the last post - errors which are easy to miss when you're looking at detailed output and must be even less obvious if you're looking at it for the first time.

The fact that no-one said much about these errors reinforces my argument with several bloggers that fewer people read and truly absorb the more technical stuff than they think. They just pick up the messages they need and take more on trust than you might imagine!

So what were the errors? And, possibly more important, why did they appear? The mistakes are often as instructive as the successes.

Error 1

This is the tail end of the subpartition stats at the end of part 5 :-

SQL> select  table_name, subpartition_name, global_stats, last_analyzed, num_rows
  2  from dba_tab_subpartitions
  3  where table_name='TEST_TAB1'
  4  and table_owner='TESTUSER'
  5  order by 1, 2, 4 desc nulls last;

TABLE_NAME                     SUBPARTITION_NAME              GLO LAST_ANALYZED          NUM_ROWS
------------------------------ ------------------------------ --- -------------------- ----------
<>                                              
 
TEST_TAB1                      P_20100209_GROT                NO  28-FEB-2010 21:41:47          3
TEST_TAB1                      P_20100209_HALO                NO  28-FEB-2010 21:41:49          3
TEST_TAB1                      P_20100209_JUNE                NO  28-FEB-2010 21:41:49          3
TEST_TAB1                      P_20100209_OTHERS              NO  28-FEB-2010 21:41:50          3

Compare that to the supposedly identical section produced at the start of part 6a :-

TABLE_NAME                     SUBPARTITION_NAME              GLO LAST_ANALYZED          NUM_ROWS           
------------------------------ ------------------------------ --- -------------------- ----------           
<>                                              

TEST_TAB1                      P_20100209_GROT                YES 28-MAR-2010 15:38:32          3           
TEST_TAB1                      P_20100209_HALO                YES 28-MAR-2010 15:38:32          3           
TEST_TAB1                      P_20100209_JUNE                YES 28-MAR-2010 15:38:32          3           
TEST_TAB1                      P_20100209_OTHERS              YES 28-MAR-2010 15:38:33          3           

Spot the difference? Forget the timestamps for now, although I hope it's clear that :-

- The stats were gathered at different times
- I really need to get my blogging act together ;-)

Instead, notice that GLOBAL_STATS is set to NO in part 5 and YES in part 6a. How could that happen?

The first thing to note is that it's probably not as significant as it first appears because what does it mean for Subpartition stats to be Global when there are no underlying sub-components of a Subpartition? In fact I'd argue that all Subpartition stats are Global by implication but I may be missing something important. (Comments welcome ...)

Instead I'll focus on how you can manage to get the two different results. The output from part 5 was the result of gathering statistics on a load table (LOAD_TAB1) and then exchanging it with the relevant subpartition of TEST_TAB1 (as shown in part 5). When you do that, the GLOBAL_STATS flag will be set to NO.

If I take the alternative approach of exchanging LOAD_TAB1 with the subpartition of TEST_TAB1, and only then gathering statistics on the subpartition of TEST_TAB1, then GLOBAL_STATS will be YES for that subpartition. That's the most obvious reason I can think of for the discrepancy, but I can't be certain because the log files I took the output from are history now.

At some point when ripping parts of a master script to run in isolation for each blog post I've changed the stats gathering approach from gather-then-exchange to exchange-then-gather. The output shown in part 5 was correct so I've updated part 6a to reflect that and added a note.
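To make the distinction concrete, here's a sketch of the two orderings, using the object names from this series (the exact exchange syntax is illustrative, not lifted from the original scripts):

-- Gather-then-exchange: the stats arrive via the exchange, so GLOBAL_STATS = NO
exec dbms_stats.gather_table_stats(ownname => 'TESTUSER', tabname => 'LOAD_TAB1');
alter table test_tab1 exchange subpartition p_20100209_grot with table load_tab1;

-- Exchange-then-gather: the stats are gathered in place, so GLOBAL_STATS = YES
alter table test_tab1 exchange subpartition p_20100209_grot with table load_tab1;
exec dbms_stats.gather_table_stats(ownname => 'TESTUSER', tabname => 'TEST_TAB1', -
     partname => 'P_20100209_GROT', granularity => 'SUBPARTITION');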

Error 2

I think this one is worse, because it's down to me mixing up some pastes - the original results looked wrong when they were, in fact, right. It's extremely rare for me to edit results and I regret doing it here. Whenever you start tampering with the evidence, you're asking for trouble!

When I'd been pasting in the final example output, showing that subpartition stats had been copied for the new P_20100210_GROT subpartition, I saw another example where the subpartition stats hadn't been copied, so I decided I was mistaken and "fixed" the post. But the original was correct, so I've put it back the way it should be and added a further note.

If you weren't confused already, you have my permission to be utterly confused now ;-)

Summary

On a more serious note, let's recap what I'm trying to do here and what does and doesn't work.

I've added a new partition for the 20100210 reporting date and tried to copy the partition and subpartition stats from the previous partition (P_20100209) to the new partition. I attempted a Partition-level copy :-

SQL> exec dbms_stats.copy_table_stats(ownname => 'TESTUSER', tabname => 'TEST_TAB1', 
                                      srcpartname => 'P_20100209', dstpartname => 'P_20100210');

PL/SQL procedure successfully completed.

in the expectation that DBMS_STATS would copy the partition stats as well as all of the subpartition stats from the P_20100209 partition. I'll repeat how the stats looked here ...

SQL> select  table_name, global_stats, last_analyzed, num_rows
  2  from dba_tables
  3  where table_name='TEST_TAB1'
  4  and owner='TESTUSER'
  5  order by 1, 2, 4 desc nulls last;

TABLE_NAME                     GLO LAST_ANALYZED          NUM_ROWS                                       
------------------------------ --- -------------------- ----------                                       
TEST_TAB1                      NO                                                                        

SQL>
SQL> select  table_name, partition_name, global_stats, last_analyzed, num_rows
  2  from dba_tab_partitions
  3  where table_name='TEST_TAB1'
  4  and table_owner='TESTUSER'
  5  order by 1, 2, 4 desc nulls last;

TABLE_NAME                     PARTITION_NAME                 GLO LAST_ANALYZED          NUM_ROWS        
------------------------------ ------------------------------ --- -------------------- ----------        
TEST_TAB1                      P_20100131                     NO                                         
TEST_TAB1                      P_20100201                     NO                                         
TEST_TAB1                      P_20100202                     NO                                         
TEST_TAB1                      P_20100203                     NO                                         
TEST_TAB1                      P_20100204                     NO                                         
TEST_TAB1                      P_20100205                     NO                                         
TEST_TAB1                      P_20100206                     NO                                         
TEST_TAB1                      P_20100207                     NO                                         
TEST_TAB1                      P_20100209                     NO  28-MAR-2010 15:38:38         12        
TEST_TAB1                      P_20100210                     NO                                         

10 rows selected.

SQL> select  table_name, subpartition_name, global_stats, last_analyzed, num_rows
  2  from dba_tab_subpartitions
  3  where table_name='TEST_TAB1'
  4  and table_owner='TESTUSER'
  5  order by 1, 2, 4 desc nulls last;

TABLE_NAME                     SUBPARTITION_NAME              GLO LAST_ANALYZED          NUM_ROWS        
------------------------------ ------------------------------ --- -------------------- ----------        
TEST_TAB1                      P_20100131_GROT                NO                                         
TEST_TAB1                      P_20100131_HALO                NO                                         

<>

TEST_TAB1                      P_20100209_GROT                NO  28-MAR-2010 15:38:32          3        
TEST_TAB1                      P_20100209_HALO                NO  28-MAR-2010 15:38:32          3        
TEST_TAB1                      P_20100209_JUNE                NO  28-MAR-2010 15:38:32          3        
TEST_TAB1                      P_20100209_OTHERS              NO  28-MAR-2010 15:38:33          3        
TEST_TAB1                      P_20100210_GROT                NO  28-MAR-2010 15:38:33          3    
TEST_TAB1                      P_20100210_HALO                NO                                         
TEST_TAB1                      P_20100210_JUNE                NO                                         
TEST_TAB1                      P_20100210_OTHERS              NO                                         

40 rows selected.

So here's where we are

- No sign of partition statistics for P_20100210, despite the P_20100209 'source' partition having valid stats.
- The subpartition stats have been copied from P_20100209_GROT to P_20100210_GROT.
  - The subpartition stats have not been copied for the other three P_20100210 subpartitions.

Weird, right? I've checked this over and over and I'm pretty sure I'm right, but I decided to upload the entire script and output here in case others can spot some mistake I'm making.

Getting back down to earth, though, this isn't the end of the world. It just means that, for our particular stats collection strategy of loading data into load tables, exchanging them with subpartitions and then copying earlier stats, we need to make sure we're working at the subpartition level all the time - and that's what I'll look at next.
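As a preview, a subpartition-level copy just means passing subpartition names to the same procedure - a sketch, one call per subpartition:

SQL> exec dbms_stats.copy_table_stats(ownname => 'TESTUSER', tabname => 'TEST_TAB1', -
          srcpartname => 'P_20100209_GROT', dstpartname => 'P_20100210_GROT');

and similarly for the _HALO, _JUNE and _OTHERS subpartitions.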

Finally, I'll re-emphasise that this is not the only strategy, and it's fair to say it's flushing out some unusual effects that you might never see if you work primarily with Table- and Partition-level stats!