Oakies Blog Aggregator

Oracle acquisitions

Here is a graphic of the size, type and timeline of Oracle acquisitions:

Follow-up:
I played with the data a little myself. Unfortunately not for as long as I would like, since I was using Tableau and my Tableau license runs out today. Tableau was a fun way to play with the data, though I didn't find all the knobs and dials I wanted. Here is what I had so far:

Stable operation with _db_file_direct_io_count

Last time, with db_file_multiblock_read_count=64, the results were not stable.
I wrote then that there must be something beyond disk throughput alone ("disk throughput plus alpha").

_db_file_direct_io_count : Sequential I/O buf size

This test is run with BLOCK_SIZE=8K=8192B.
After optimizing this I/O buffer size, I ran the test again:

db_file_multiblock_read_count = 64, _db_file_direct_io_count = 524288
db_file_multiblock_read_count = 128, _db_file_direct_io_count = 1048576
db_file_multiblock_read_count = 256, _db_file_direct_io_count = 2097152
db_file_multiblock_read_count = 512, _db_file_direct_io_count = 4194304
db_file_multiblock_read_count = 1024, _db_file_direct_io_count = 8388608

Result: db_file_multiblock_read_count = 64 was now stable as well, and the results were almost the same as with 128.

If its buffer size is not optimized, direct path read generates unnecessary Temporary tablespace access.
Adjust db_file_multiblock_read_count with that in mind.
Just because it is a DWH does not mean you should blindly increase db_file_multiblock_read_count.
Incidentally, the Exadata 1 TPC-H benchmark is configured with:
db_file_multiblock_read_count=64
db_block_size=32K

Finally,
the setting of _db_file_direct_io_count must always correspond to db_file_multiblock_read_count. For example, if multiblock_read_count=512 but you set this value as if for 1024, the I/O is no longer optimized and disk response time suffers. In this example BlockSize=8K=8192, and the value is calculated by multiplying that by multiblock_read_count.
Incidentally, these appear to be controlled by _adaptive_direct_read = TRUE.
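To make the pairing concrete, here is a minimal sketch, assuming the value is simply db_block_size * db_file_multiblock_read_count. Note that _db_file_direct_io_count is an undocumented parameter, so treat this as an experiment for a test instance only:

-- Sketch only: pair the hidden buffer size with the multiblock read count.
-- 1048576 = 8192 (db_block_size) * 128 (db_file_multiblock_read_count)
ALTER SYSTEM SET db_file_multiblock_read_count = 128 SCOPE = SPFILE;
ALTER SYSTEM SET "_db_file_direct_io_count" = 1048576 SCOPE = SPFILE;
-- restart, then verify the visible parameters
SELECT name, value
  FROM v$parameter
 WHERE name IN ('db_block_size', 'db_file_multiblock_read_count');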

Oracle’s next blockbuster deal…

I got an email today which, I think, is for once a genuine one… Besides that, just before Oracle Open World it's always interesting to guess what Mr. Ellison is about to announce…

I have checked out the site/post and, apart from the obvious ones (PR), I couldn't find any hidden traps (always a bit of, I hope healthy, paranoia about this kind of stuff).

From: Stephen Jannise
Sent: Tuesday 3 August 2010 19:57
To: Marco Gralike
Subject: Editorial question about your blog

Hi Marco,

I thought you would be interested in an article I’ve written about Oracle’s next blockbuster deal. The company’s surprising acquisition of Sun suggests that Oracle is willing and able to make major deals in unexpected areas. I’m interested to see where they go next, so I’m hosting a poll on my blog at: http://www.softwareadvice.com/articles/manufacturing/oracle-mergers-acquisitions-whos-next-1080310/.

I’ve presented a list of thirteen potential targets for readers to vote on. Rather than guess these targets at random, I’ve done some research into the past five years of Oracle acquisitions as well as studied the current market to make some educated suggestions about possible future acquisitions.

Responses are trickling in, so I’m reaching out to a few bloggers to spread the word and drive more responses. Would you mind posting a brief entry about this on your blog? I would really appreciate your help. Please let me know what you think.

Thanks,

Stephen

Stephen said about his article in one of his emails to me:

The article traces the last five years of Oracle acquisitions in an attempt to pinpoint the strategies behind Oracle’s blockbuster purchases. Based on this analysis and a review of potential targets, the article suggests thirteen companies that Oracle may conceivably be interested in over the upcoming months. There are a few obvious choices along with some more unlikely options, and readers are encouraged to voice their own opinions and participate in the poll. You can find the article and the poll here:

http://www.softwareadvice.com/articles/manufacturing/oracle-mergers-acquisitions-whos-next-1080310/.

My idea about it, if I were allowed to gamble (getting into Mr. Ellison's head), is that Stephen is wrong. The companies mentioned aren't, IMHO, the ones that Oracle needs in order to keep pace with Google, IBM or Microsoft, to name a few, although I think "VMware" is an interesting one.

I would still vote for a company like Amazon (probably too expensive, but then again I have seen stranger things happen…), which isn't on Stephen's list, but it would give Oracle the opportunity to get into the Cloud with its virtualization software, deploy apps like OpenOffice, EBS or Fusion Apps and Middleware, etc.

Being an Oracle DBA "nerd" for 15+ years, I know that Mr. Ellison is always looking waaaaay ahead into the future. Remember Video on Demand via "nCube", or the "raw iron" project…? He also sticks to it… It's all long-term stuff; some of it paid off and on some he / Oracle lost, but you could say that via JRockit / OEL some of it, also, still became a "raw iron" reality. With some data centers in place, "Oracle Docs" or "Oracle Apps" could become the next reality…

Guessing can be fun…

8-)

Oracle 5 Installation Steps

While browsing my old pictures, I discovered two small directories with some installation snapshots I once made. Because there is almost no info left about this topic (besides in people's heads), I leave it here for "past" reference…

Double click on it to go to the bigger "version" on Picasa.

Product Design : VST

I think the VST diagrams are powerful on their own, but my original goal was to show the execution path on top of the diagram in order to look at two execution plans side by side and quickly see the differences, which isn't possible with textual explain plans. Here is an example of two explain plans, first in text and then graphically.

This graphic was one of my attempts to show executions on top of the VST diagram, and I've been working on how best to show the order. Because the diagram layout stays the same and the order is overlaid on top, it's easy to compare them side by side. I can also see why the original execution plan, on the left, was wrong. The original execution started at E, which joins to C producing 198,201,422 rows (as seen on the join line), yet the query only returns 44,000 rows (not shown). Also, table E really has no filtering, since 99% of its rows are returned after applying the filtering condition. The filtering is represented in blue to the bottom right of the table as a percentage of rows returned from the table. The query should instead start at A, where there is a 2% filter ratio. After starting at A we should join to C or B (which is a subquery), because those result set sizes are the smallest, 85K and 642K respectively. Making this change (via hints) took this query from running for over 24 hours to 5 minutes.
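The post doesn't show the hint text itself, so purely as an illustration (table and column names here are hypothetical), forcing the join order to start at the well-filtered table could look something like this:

-- Hypothetical sketch: start the join at A (the 2% filter), then C, as the diagram suggests.
SELECT /*+ LEADING(a c e) */ a.id, e.amount
  FROM a
  JOIN c ON c.a_id = a.id
  JOIN e ON e.c_id = c.id
 WHERE a.status = 'OPEN';   -- the selective filter on A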

Designing VST diagrams has been fun, exciting and challenging. I wonder if you can imagine this: have you ever been rock climbing? And I'm not talking about the gym, but out on real cliffs? If so, you probably know that 99% of the time people use a route map that shows exactly where to go, so that there is indeed a way to climb up and off the cliff and you won't get stuck halfway up with nowhere to go. Climbing these routes is super scary and exciting, but the amazing thing is that someone climbed them without a map, not knowing if they would get up 1000 feet and find that there is no way to finish, and somehow have to get down, which might not even be possible given the gear. Seriously life threatening.

That's what I think about writing a completely new way of tuning SQL. Of course it's not mortally life threatening, but years of my life and my job are on the line. I have no idea where the end is, or what all the road blocks will be. I certainly didn't when I started, but I'm far enough along to see that the diagrams provide powerful information rapidly to the user in an easy-to-understand graphic way. Graphics can be misused, even abused, resulting in worse information instead of better, but when graphics are used well they are much more powerful than textual quantitative data. The graphical diagrams are a great way to understand the relationships in the query faster than reading the query text. It's sort of like looking at a Google map and seeing the route drawn, versus having to read the directions. Both are important, but I can understand the route on the map much faster than the directions, though I might want to read the directions after reading the map. The map with the route, though, is often all I need.

I'm not able to store vast amounts of information and see all the permutations like some Oracle experts can. This can be a curse and a blessing. It's a blessing because it makes me look for ways to understand the problem space without having to buffer tons of information, and such solutions are valuable to the general public. On the other hand it's scary in that I don't see all the possible problems, but that can be a blessing as well, because I try to concentrate on the solutions that have the biggest return, the biggest bang for the buck, the issues that can be most easily solved, instead of getting distracted by the overwhelming possibilities and issues.

In some ways the SQL optimization space seems overwhelming, and in other ways it seems really small. I think VST diagrams will help make the problem space seem much smaller, more manageable and more understandable as they mature.


It’s the End of the World As We Know It (NoSQL Edition)

This post originally appeared over at Pythian. There are also some very smart comments over there that you shouldn’t miss, go take a look!

Everyone knows that seminal papers need a simple and descriptive title. "A Relational Model for Large Shared Data Banks", for example. I think Michael Stonebraker overshot the target in his 2007 paper titled "The End of an Architectural Era".

Why is this The End? According to Michael Stonebraker, "current RDBMS code lines, while attempting to be a 'one size fits all' solution, in fact excel at nothing. Hence, they are 25 year old legacy code lines that should be retired in favor of a collection of 'from scratch' specialized engines".

He makes his point by stating that traditional RDBMS design is already being replaced by a variety of specialized solutions: data warehouses, stream processing, text and scientific databases. The only uses left for the RDBMS are OLTP and hybrid systems.

The provocatively named paper is simply a description of a system designed from scratch for modern OLTP requirements, and a demonstration that this system gives better performance than a traditional RDBMS on an OLTP-type load. The conclusion is that since RDBMS can't even excel at OLTP – it must be destined for the garbage pile. I'll ignore the fact that hybrid systems are far from extinct and look at the paper itself.

The paper starts with a short review of the design considerations behind traditional RDBMS, before proceeding to list the design considerations behind the new OLTP system, HStore:

  1. The OLTP database should fit entirely in memory. There should be no disk writes at all. Based on TPC-C size requirements this should be possible, if not now then within a few years.
  2. The OLTP database should be single threaded - no concurrency at all. This should be possible since OLTP transactions are all sub-millisecond. In a memory-only system they should be even faster. This will remove the need for complex algorithms and data structures and will improve performance even more. Ad-hoc queries will not be allowed.
  3. It should be possible to add capacity to an OLTP system without any downtime. This means incremental expansion – it should be possible to grow the system by adding nodes transparently.
  4. The system should be highly available, with a peer-to-peer configuration – the OLTP load should be distributed across multiple machines, and inter-machine replication should be used for availability. According to the paper, in such a system redo and undo logging become unnecessary. This paper references another paper that argues that rebuilding a failed node over the network is as efficient as recovering from a redo log. Obviously, eliminating redo logs eliminates one of the worst OLTP bottlenecks, where data is written to disk synchronously.
  5. No DBAs. Modern systems should be completely self-tuning.

In other sections Stonebraker describes a few more properties of the system:

  1. With persistent redo logs gone, and locks/latches gone, the overhead of the JDBC interface is likely to be the next bottleneck. Therefore the application code should be in the form of stored procedures inside the data store. The only command run externally should be "execute transaction X".
  2. Given that the DB will be distributed and replicated, and network latencies still take milliseconds, the two-phase commit protocol should be avoided.
  3. There will be no ad-hoc queries. The entire workload will be specified in advance.
  4. SQL is an old legacy language with serious problems that were exposed by Chris Date two decades ago. Modern OLTP systems should be programmable in a modern lightweight language such as Ruby. Currently the system is queried with C++.

The requirements seem mostly reasonable and very modern – use replication as a method of high availability and scalability, avoid disks and their inherent latencies, avoid the complications of concurrency, avoid ad-hoc queries, avoid SQL and avoid annoying DBAs. If Stonebraker can deliver on his promise, if he can do all of the above without sacrificing the throughput and durability of the system, this sounds like a database we'll all enjoy.
In the rest of the paper, the authors describe some special properties of OLTP workloads, and then explain how HStore exploits these properties to implement a very efficient distributed OLTP system. In the last part of the paper, the authors use HStore to run a TPC-C-like benchmark and compare the results with an RDBMS.

Here, in very broad strokes, is the idea. The paper explains in some detail how things are done, while I only describe what is done:

The system is distributed, with each object partitioned over the nodes. You can specify how many copies of each row will be distributed, and this provides high availability (if one node goes down you will still have all the data available on other nodes).

Each node is single threaded. Once a SQL query arrives at a node, it is executed to the end without interruption. There are no physical files. The data objects are stored as Btrees in memory, with the Btree block sized to match the L2 cache line.

The system will have a simple cost-based optimizer. It can be simple because OLTP queries are simple. When multi-way joins happen, they always involve identifying a single tuple and then joining tuples to that record in a small number of 1-to-n joins. Group by and aggregation don't happen in OLTP systems.

Query plans can either run completely on one of the nodes, be decomposed into a set of independent transactions that each run completely on one node, or require results to be communicated between nodes.

The way to make all this efficient is a "database designer": since the entire workload is known in advance, the database designer's job is to make sure that most queries in the workload can run completely on a single node. It does this by smartly partitioning the tables, placing parts that are frequently used together on the same node, and copying read-only tables (or just specific columns) everywhere. A rough analogy in ordinary SQL is sketched below.
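HStore's designer is internal to the system, so this is only an analogy in familiar DDL (table and column names are made up): co-locating rows that are queried together, by partitioning related tables on the same key so that a single-customer transaction touches a single partition, is the same basic idea.

-- Analogy only, not HStore syntax: partition both tables on the same key
-- so rows that are used together land in the same partition/node.
CREATE TABLE orders (
  order_id    NUMBER,
  customer_id NUMBER,
  order_date  DATE
) PARTITION BY HASH (customer_id) PARTITIONS 8;

CREATE TABLE order_lines (
  order_id    NUMBER,
  customer_id NUMBER,
  product_id  NUMBER,
  qty         NUMBER
) PARTITION BY HASH (customer_id) PARTITIONS 8;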

Since there are at least two copies of each row and each table, there must be a way to update them consistently. Queries that can complete on a single node can just be sent to all relevant nodes, and we can be confident that they will all complete with identical results. The only complication is that each node must wait a few milliseconds before running the latest transaction, to allow for receiving prior transactions from other nodes. The order in which transactions run is determined by timestamps and node ids. This gives an identical order of execution on all nodes and is what makes the results consistent.

In the case of transactions that span multiple sites and involve changes that affect other transactions (i.e., the order in which they execute relative to other transactions matters), one way to achieve consistency would be to lock the data sources for the duration of the transaction. HStore uses another method: each worker node receives its portion of the transaction from a coordinator. If there are no conflicting transactions with lower timestamps, the transaction runs and the worker sends the coordinator an "ok"; otherwise the worker aborts and notifies the coordinator. The transaction failed, and it's up to the application to recover from this. Of course, some undo must be used to roll back the successful nodes.

The coordinator monitors the number of aborts, and if there are too many unsuccessful transactions, it increases the wait between the time a transaction arrives at a node and the time the node attempts to run it. If there are still too many failures, a more advanced abort strategy is used. In short, this is a very optimistic database where failure is preferred to locking.

I’ll skip the part where a modified TPC-C proves that HStore is much faster than a traditional RDBMS tuned for 3 days by an expert. We all know that all benchmarks are institutionalized cheating.

What do I think of this database?

  1. It may be too optimistic in its definition of OLTP. I’m not sure we are all there with the pre-defined workload. Especially since adding queries can require a complete rebuild of the data-store.
  2. I’m wondering how he plans to get a consistent image of the data stored there to another system to allow querying. ETL hooks are clearly required, but it is unclear how they can be implemented.
  3. Likewise, there is no clear solution on how to migrate existing datasets into this system.
  4. HStore seems to depend quite heavily on the assumption that networks never fail or slow down. Not a good assumption from my experience.
  5. If Stonebraker is right and most datasets can be partitioned in a way that allows SQL and DML to almost always run on a single node, this can be used to optimize OLTP systems on RAC.
  6. I like the idea of memory only systems. I like the idea of replication providing recoverability and allowing us to throw away the redo logs. I’m not sure we are there yet, but I want to be there.
  7. I also like the idea of a system allowing only stored procedures to run.
  8. I'm rather skeptical about systems without DBAs; I've yet to see any large system keep working without someone responsible for it.
  9. I’m even more skeptical about systems without redo logs and how they manage to still be atomic, durable and just plain reliable. Unfortunately this paper doesn’t explain how redo-less systems can be recovered. It references another paper as proof that it can be done.
  10. Stonebraker deserves credit for anticipating the NoSQL boom 3 years in advance. Especially the replication and memory-only components.

I hope that in the next few months I'll add a few more posts reviewing futuristic systems. I enjoy keeping in touch with industry trends and cutting-edge ideas.

db_file_multiblock_read_count with db_block_size=8K

Continuing the TPC-H benchmark, I tested:
db_file_multiblock_read_count=64
db_file_multiblock_read_count=128
db_file_multiblock_read_count=256
db_file_multiblock_read_count=512
db_file_multiblock_read_count=1024
db_file_multiblock_read_count=128 gave the best results.

SSD response time (almost constant):

db_file_multiblock_read_count    x 8K      SSD response time (ms)
64                                512K     3
128                              1024K     6
256                              2048K     10
512                              4096K     16
1024                             8192K     25

And then....
With db_file_multiblock_read_count > 128, writes to the Temporary tablespace began.

pga_aggregate_target=5G; memory_target is not used.
The amount written grows in proportion to db_file_multiblock_read_count.
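As a hedged way to watch for these spills, the cumulative direct temp I/O statistics can be sampled before and after a run (statistic names as they appear in 10g/11g v$sysstat; verify on your version):

SELECT name, value
  FROM v$sysstat
 WHERE name IN ('physical writes direct temporary tablespace',
                'physical reads direct temporary tablespace');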

Through this painstaking investigation I am looking for a way to work out DB_BLOCK_SIZE, the trump card of future tuning.
If you are sloppy about it, unnecessary Temporary access occurs and you step right into the pattern that inter-node Parallel Query handles worst.

Finally,
with db_block_size=8K, values above 1024 did not appear to be supported.
Is 8MB the limit, or is 1024 the limit?

With db_file_multiblock_read_count=64, TPC-H was not stable.
However, disk throughput easily exceeded 430MB/s, the highest value recorded so far.
A direct path read benchmark involves something extra ("plus alpha") that cannot be measured by disk throughput alone.

7% more disk throughput with PROTOCOL=IPC

Continuing the TPC-H benchmark.

Looking at the previous SQL, it really is TPC-H, completely different from TPC-C.
The queries also return a lot of result rows:

With TPC-C it was:

Let's widen the network buffers.
sqlnet.ora:


DEFAULT_SDU_SIZE=32768
RECV_BUF_SIZE=524288
SEND_BUF_SIZE=524288


From the Oracle glossary:

Session data unit (SDU)
A buffer into which Oracle Net places data before transferring it across the network. Oracle Net sends the data in the buffer either when a data transmission is requested or when the buffer fills up with data.

Run the TPC-H benchmark again with hammerora and look at the disk throughput:

The previous near-maximum of 400MB/s improved to a stable 420MB/s-plus. That's a 5% increase.
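One hedged way to cross-check that larger SDU/buffer settings really change the client traffic pattern is to compare the SQL*Net statistics before and after a benchmark run (cumulative values from v$sysstat):

SELECT name, value
  FROM v$sysstat
 WHERE name IN ('bytes sent via SQL*Net to client',
                'bytes received via SQL*Net from client',
                'SQL*Net roundtrips to/from client');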

Going a bit further, I also tested an IPC connection (adding ORACLE2 to tnsnames.ora):

ORACLE =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = oracle)
)
)

LISTENER_ORACLE =
(ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))


ORACLR_CONNECTION_DATA =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
)
(CONNECT_DATA =
(SID = CLRExtProc)
(PRESENTATION = RO)
)
)

ORACLE2 =
(DESCRIPTION=
(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = oracle)
)
)


The gain went up from 5% to 7%.
However, since IPC connections can only be used in limited environments, I'll settle for 5% this time.

Finally,
the IPC result was not that different from TCP.
In a UNIX environment IPC is an OS feature, but in a Windows environment it is a capability simulated by Oracle.
Therefore....

Data Breach Survey Results

Lindsay Hamilton of Cervello Consultants has just started a new blog aimed at data security, data breaches and data security vulnerability scanning and activity monitoring. This should be worth watching, as data breaches are certainly an "in" topic at the moment with....[Read More]

Posted by Pete On 03/08/10 At 04:18 PM

Notes on Learning MySQL (as an Oracle DBA)

This post originally appeared over at Pythian. There are also some very smart comments over there that you shouldn’t miss, go take a look!

I spent some time last month getting up to speed on MySQL. One of the nice perks of working at Pythian is the ability to study during the workday. They could have easily said "You are an Oracle DBA, you don't need to know MySQL. We have enough REAL MySQL experts", but they didn't, and I appreciate it.

So how does an Oracle DBA go about learning MySQL?
Obviously you start by reading the docs. Specifically, I looked for the MySQL equivalent of the famous Oracle “Concepts Guide”.
Unfortunately, it doesn't exist. I couldn't find any similar overview of the architecture and the ideas behind the database. The first chapter of "High Performance MySQL" has a high-level architecture review, which was useful, but being just one chapter in a book it lacked many of the details I wanted to learn. Peter Zaitsev's "InnoDB Architecture" presentation had the kind of information I needed, but covered just InnoDB.

That's really too bad, because I definitely feel the lack: while I can easily tell you what Oracle does when you connect to a database, run a select, an update, a commit or a rollback, I can't say the same about MySQL. So far I have managed without this knowledge, but I have a constant worry that this will come back and bite me later.

Lacking a concepts guide, I read the documentation I had access to: Sheeri has nice presentations available for Pythian employees (and probably customers too; I'm not sure if she ever released them to the whole world). The official documentation is not bad either: it covers syntax without obvious errors and serves as a decent "how do I do X?" guide.

But reading docs is only half the battle, and the easier half too. So I installed MySQL 5.1 on my Ubuntu box from ready-made packages. Then I installed MySQL 5.5 from the tarball, which was not nearly as much fun, but by the time it worked I knew much more about where everything is located and the various ways one can mis-configure MySQL.

Once the installation was successful, I played a bit with users, schemas and databases. MySQL is weird: schemas are called databases, and users have a many-to-many relation with databases. If a user logs in from a different IP, it is almost like a different user (see the sketch below). If you delete all the data files and restart MySQL, it will create new empty data files instead. You can easily start a new MySQL server on the same physical box by modifying one file and creating a few directories.
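As a small sketch of the user@host behaviour (account names and hosts here are made up), the same username arriving from two different hosts is effectively two separate accounts with their own passwords and privileges:

-- Two distinct accounts despite the shared name:
CREATE USER 'app'@'10.0.0.5' IDENTIFIED BY 'secret1';
CREATE USER 'app'@'%'        IDENTIFIED BY 'secret2';
GRANT SELECT, INSERT ON sales.* TO 'app'@'10.0.0.5';
SHOW GRANTS FOR 'app'@'%';   -- different privileges than 'app'@'10.0.0.5'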

MySQL docs make a very big deal about storage engines. There are only two things that are important to remember, though: MyISAM is non-transactional, has no foreign keys or row-level locks, and is used for the mysql schema (the data dictionary); InnoDB is transactional, has row-level locks and is used everywhere else.

There is a confusing bunch of tools for backing up MySQL. MySQLDump is the MySQL equivalent of Export, except that it creates a file full of the SQL commands required to recreate the database. These files can grow huge very fast, but it is very easy to restore from them, to restore only parts of the schema, or even to modify the data or schema before restoring.
XTRABackup is a tool for consistent backups of InnoDB schemas (remember that MyISAM has no transactions, so a consistent backup is rather meaningless there). It is easy to use: one command to back up, two commands to restore. You can do PITR of sorts with it, and you can restore specific data files. It doesn't try to manage backup policies for you the way RMAN does, so cleaning up old backups is your responsibility.

Replication is considered a basic skill, not an advanced one like in the Oracle world. Indeed, once you know how to restore from a backup, setting up replication is trivial. It took me about 2 hours to configure my first replication in MySQL; I think with Oracle Streams it took me a few days, and that was on top of years of other Oracle experience.
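For reference, a minimal sketch of what that setup looks like in the 5.1/5.5 era (host, user, password and binlog coordinates below are placeholders; take the real values from SHOW MASTER STATUS on the master after restoring its backup on the slave):

-- On the slave:
CHANGE MASTER TO
  MASTER_HOST='master.example.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='replpass',
  MASTER_LOG_FILE='mysql-bin.000042',
  MASTER_LOG_POS=107;
START SLAVE;
SHOW SLAVE STATUS\G   -- check Slave_IO_Running and Slave_SQL_Running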

Having access to experienced colleagues who are happy to spend time teaching a newbie is priceless. I already mentioned Sheeri’s docs. Chris Schneider volunteered around 2 hours of his time to introduce me to various important configuration parameters, innoDB secrets and replication tips and tricks. Raj Thukral helped me by providing step by step installation and replication guidance and helping debug my work. I’m so happy to work with such awesome folks.

To my shock and horror, at that point I felt like I was done. I had learned almost everything important there was to know about MySQL, and it took a month. As an Oracle DBA, after two years I still felt like a complete newbie, and even today there are many areas where I wish I had better expertise. I'm sure it is partially because I don't know how much I don't know, but MySQL really is a rather simple DB: there is less to tweak, less to configure, fewer components and fewer tools to learn.

Jonathan Lewis once said that he was lucky to have learned Oracle with version 6, because back then it was still relatively simple to learn, and the concepts haven't changed much since, so what he learned back then is still relevant today. Maybe in 10 years I'll be saying the same about MySQL.