August 2010

Broken(ish) Links…

A couple of days ago Sten Vesterli tweeted about the URL changes on OTN. The previous base URL of “http://www.oracle.com/technology” has been replaced by “http://www.oracle.com/technetwork”. There are redirects in place, so for many of the top level pages this isn’t a problem, but some of the deeper links result in “page not found” errors or redirect to rather generic pages.

Fortunately, most of the links from my site are to the Oracle docs, whose URLs haven’t changed, but there are also plenty of OTN links. I’m trying to clean up the problem links, but it’s going to take a little time. If you spot any broken links, or links that don’t look like they point to the intended information, feel free to contact me and I’ll do my best to sort them.

Cheers

Tim…

OakTable member!

Just like Tim Hall, I was nominated and approved as a member of the OakTable Network. I am very happy to have been accepted into a group of such gifted individuals!

OakTable…

I was recently nominated and approved as a member of the OakTable Network.

Do you ever get that feeling that one day people are going to realize you don’t have a clue what you are talking about? I think that day just got a little closer. :)

Cheers

Tim…

Michigan Oaktable Symposium 2010 promo

Register here!

Cardinality One

I think anyone who has read Wolfgang Breitling’s material about the optimizer will be familiar with the concept of Cardinality Feedback, and one particular detail: when Oracle gets a cardinality estimate of one for a “driving” table, there’s a good chance that the execution plan will go wrong. (That’s not a rule, by the way, just a fairly common observation after things have gone wrong.)

A recent note on OTN reminded me of a particular scenario where this specific problem can occur. It’s not particularly common, but it may hit people who are building data warehouses from multiple different sources. We start with an unlikely looking data set and very simple query:

drop table t1;

create table t1 as
select
	rownum id1,
	rownum id2
from
	all_objects
where
	rownum <= 10000
;

execute dbms_stats.gather_table_stats(user,'t1');

set autotrace traceonly

select
	count(*)
from
	t1
where
	id1 = id2
;

What do you think Oracle’s estimated cardinality will be for this predicate? We know, because we saw the data being built, that we’re going to identify 10,000 rows. But the optimizer doesn’t see it that way – check line 2 of the execution plan. The optimizer thinks it will find just one row:
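One way to see where an estimate of a single row comes from is to look at the column statistics the optimizer is working from. The arithmetic below is my reconstruction of the standard formula for a “colA = colB” predicate on a single table, not something quoted from a trace:

```sql
-- Check the column statistics the optimizer uses
-- (user_tab_cols is the standard Oracle dictionary view):
select column_name, num_distinct
from   user_tab_cols
where  table_name = 'T1';

-- Both id1 and id2 have num_distinct = 10000, so with the usual
-- formula for a "colA = colB" predicate on a single table:
--   selectivity = 1 / greater(num_distinct(id1), num_distinct(id2)) = 1/10000
--   cardinality = num_rows * selectivity = 10000 * 1/10000 = 1
```

The data was built so that id1 = id2 on every row, but the statistics hold no information about the correlation between the two columns, so the arithmetic collapses to a single row.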


More PC support…

One of my Yoga buddies was given a laptop by his dad and wanted to get it connected over wireless. His dad also gave him a wireless ADSL router, but couldn’t get it set up. This sounds like a job for Captain Support…

The router wasn’t able to connect to the internet. It turned out the router wasn’t working properly and needed a firmware update. The next issue was that the wireless connection between the router and the laptop was kinda funky. The connection would never work when any form of encryption was turned on. In the end I had to turn off encryption and stop the router from broadcasting its SSID, in an attempt to reduce the chances of people piggy-backing on it.

How are normal folk meant to cope with this? The answer is they don’t and they need Captain Support… :)

Cheers

Tim…

In-Memory nonParallel + NOCOMPRESS reaches 16,000 qph

Continuing with the TPC-H benchmark.

My conclusion so far:

In-Memory nonParallel contributed a lot to the overall TPC-H score, but for a real data warehouse I confirmed that Parallel Query is the better approach.

To push the overall score a little higher, I added some more In-Memory nonParallel tests.

This time I tried removing the COMPRESS option, which normally speeds up Parallel Query.
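I’d guess the NOCOMPRESS variant was set up with something like the following; the table and index names are assumptions for illustration, not taken from the original post:

```sql
-- Hypothetical setup: rebuild a TPC-H style table without compression.
-- (Table and index names are assumptions, not from the original post.)
alter table lineitem move nocompress;

-- A MOVE invalidates the table's indexes, so rebuild them afterwards:
alter index lineitem_pk rebuild;
```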

1 session, COMPRESS:

With NOCOMPRESS:

4 concurrent sessions, COMPRESS:

With NOCOMPRESS:

8 concurrent sessions, COMPRESS:

With NOCOMPRESS:

And the best score so far, using Result_Cache:

With NOCOMPRESS:
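For reference, the Result_Cache runs presumably rely on the standard `result_cache` hint. The query below is only an illustrative sketch of the form, not the actual benchmark query:

```sql
-- Illustrative only: a TPC-H Q1-style aggregate. The first execution
-- populates the server result cache; repeat executions are served
-- from the cache without re-reading the table.
select /*+ result_cache */
       l_returnflag, l_linestatus,
       sum(l_quantity) as sum_qty
from   lineitem
group by
       l_returnflag, l_linestatus;
```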

I think the improvement comes from eliminating the CPU overhead of uncompressing COMPRESSed Oracle blocks.
Normally the disk I/O saved by COMPRESS is worth several times the UNCOMPRESS cost, but since there was zero disk I/O this time, the UNCOMPRESS overhead stood out.

These results are reference values only.
In a large data warehouse environment, COMPRESS is very effective for making Parallel Query efficient.
In-Memory nonParallel was a convenient technique for raising the TPC-H benchmark score, but it throws away the benefit of the SSD RAID 0 setup.

Incidentally, the Exadata 1 TPC-H benchmark uses sub-partitioning (composite keys), so COMPRESS shouldn’t have been usable there; and since it’s V1, column COMPRESS isn’t available either.