Oakies Blog Aggregator

Join Me at COLLABORATE 13


I’ll be speaking on Monday, Wednesday and Thursday:

  • Boosting performance on reporting and development databases
    April 8, 2:30-3:30pm (Mile High Ballroom 2B)
  • Database virtualization: accelerating application development
    April 10, 1:00-2:00pm (Mile High Ballroom 2B)
  • NFS tuning for Oracle
    April 11, 8:30-9:30am (Mile High Ballroom 3A)

Attend COLLABORATE online

IOUG Forum for the Best in User-Driven Oracle Education and Networking

Have you made your Oracle training plans for 2013? As an IOUG RUG leader, I’d like to extend a personal invitation to the conference that I continue to attend for truly valuable personal and professional growth: COLLABORATE 13 – IOUG Forum, April 7-11 at the Colorado Convention Center in Denver, CO.

COLLABORATE is interactive: have your most difficult questions answered by other IOUG Oracle experts who’ve been there and come out swinging.

COLLABORATE is diverse: attend education sessions across all of the tracks offered at IOUG Forum, or pick the one that fits you best! Choose from Database, Business Intelligence, Big Data, Exadata, Security and more!

COLLABORATE is personal: tap into the shared resources of hundreds of IOUG members, Oracle ACEs and more! Meet new friends and contacts throughout the week and strengthen ties with existing business partners.

COLLABORATE is an awesome, big conference of over 6,000 attendees, with many opportunities to network, learn new technology and see what is happening in the industry.

By registering specifically with IOUG for COLLABORATE 13, you’ll reap the following benefits:

  • Pre-Conference Workshops
    On Sunday, April 7, IOUG offers an extra, comprehensive day of Oracle training complimentary for IOUG Forum attendees.
  • IOUG Curricula
    The IOUG Curricula are designed to give well-rounded training and education on specialized topics, chosen by peers, throughout COLLABORATE.
  • IOUG IT Strategic Leadership Package
    Get the chance to discuss corporate strategy with senior IT management, improve your skills at influencing without authority, present business cases and help develop business solutions for your company.
  • Access to hands-on labs
    Sessions include “Upgrade to the Latest Generation of Oracle Database and Consolidate your Databases Using Best Practices”, “Hacking, Cracking, Attacking – OH MY!” and “RAC Attack”.
  • Networking
    Admission to IOUG’s exclusive networking activities, including our Sunday evening reception
  • Conference Proceedings
    Online access to conference proceedings before, during and after the event

You will also be able to attend education sessions offered by the OAUG and Quest user groups with your COLLABORATE 13 registration through the IOUG. Make plans to join me and thousands of your Oracle peers to soak in the solutions at this unique Oracle community event.

If you are an Oracle technology professional, be sure to secure your registration through the IOUG to obtain exclusive access to IOUG offerings.

Please contact IOUG headquarters at ioug@ioug.org if you have any questions.

I hope to see you in Denver!

Best Regards,

Kyle Hailey

Delphix Overview

Update: Here’s the link to the recording of the webinar

I’ll be online tomorrow morning (Friday 5th, 9:00 Pacific time, 5:00 pm UK) in a webinar with Kyle Hailey to talk about my first impressions of Delphix, so I thought I’d write up a few notes beforehand.

I’ve actually installed a complete working environment on my laptop to model a production setup. This means I’ve got three virtual machines running under VMWare: my “production” machine (running Oracle 11.2.0.2 on OEL 5, 64-bit), a “development” machine (which has the 11.2.0.2 software installed, again on OEL 5, 64-bit), and a machine which I specified as Open Solaris 10, 64-bit for the Delphix server VM (pre-release bloggers’ version). The two Linux servers are running with 2.5GB of RAM, the Delphix server is running with 8GB RAM, and all three machines are running 2 virtual CPUs. (My laptop has an Intel quad core i7, running two threads per CPU, 16GB RAM, and 2 drives of 500GB each.) The Linux machines were simply clones of another virtual machine I previously prepared and the purpose of the exercise was simply to see how easy it would be to “wheel in” a Delphix server and stick it in the middle. The answer is: “pretty simple”. (At some stage I’ll be writing up a few notes about some of the experiments I’ve done on that setup.)

To get things working I had to create a couple of UNIX accounts for a “delphix” user on the Linux machines, install some software, give a few O/S privileges to the user (mainly to allow it to read and write a couple of Oracle directories), and a few Oracle privileges. The required Oracle privileges vary slightly with the version of Oracle and your preferred method of operation, but basically the delphix user needs to be able to run rman, execute a couple of Oracle packages, and query some of the dynamic performance views. I didn’t have any difficulty with the setup, and didn’t see any threats in the privileges that I had to give to the delphix user. The last step was simply to configure the Delphix server to give it some information about the Linux machines and accounts that it was going to have access to.

The key features about the Delphix server are that it uses a custom file system (DxFS, which is based on ZFS with a number of extensions and enhancements) and it exposes files to client machines through NFS; and there are two major components to the software that make the whole Delphix package very clever.

Oracle-related mechanisms

At the Oracle level, the Delphix server sends calls to the production database server to take rman backups (initially a full backup, then incremental backups “from SCN”); between backup requests it also pulls the archived redo logs from the production server – or can even be configured to copy the latest entries from the online redo logs a few seconds after they’ve been written (which is one of the reasons for requiring privileges to query some of the dynamic performance views, but the feature does depend on the Oracle version).

If you want to make a copy of the database available, you can use the GUI interface on the Delphix server to pick a target machine, invent a SID and service name, and pick an SCN (or approximate timestamp) that you want the database to start from, and within a few minutes the Delphix server will have combined all the necessary backup pieces, applied any relevant redo, and configured your target machine to start up an instance that can use the (NFS-mounted) database that now exists on the Delphix server. I’ll explain in a little while why this is a lot cleverer than a simple rman “restore and recover”.

DxFS

Supporting the Oracle-related features, the other key component of the Delphix server is the Delphix file-system (DxFS). I wrote a little note a few days ago to describe how Oracle can handle “partial” updates to LOB values – the LOB exists in chunks with an index on (lob_id, chunk_number) that allows you to pick the right chunks in order. When you update a chunk in the LOB Oracle doesn’t really update the chunk, it creates a new chunk and modifies the index to point at it. If another session has a query running that should see the old chunk, though, Oracle can read the index “as at SCN” (i.e. it creates a read consistent copy of the required index blocks) and the read-consistent index will automatically be pointing at the correct version of the LOB chunk. DxFS does the same sort of thing – when a user “modifies” a file system block DxFS doesn’t overwrite the original copy, it writes a new copy to wherever there’s some free space and maintains some “indexing” metadata that tells it where all the pieces are. But if you never tell the file system to release the old block you can ask to see the file as at a previous point in time at no extra cost!

But DxFS is even cleverer than that because (in a strange imitation of the “many worlds” interpretation of quantum theory) a single file can have many different futures. Different users can be identified as working in different “contexts”, and the context is part of the metadata describing the location of blocks that belong to the file. Imagine we have a file with 10 blocks sitting on DxFS – in your context you modify blocks 1, 2 and 3, but at the same time I modify blocks 1, 2 and 3 in my context. Under DxFS there are now 16 blocks associated with that file – the original 10, your three modified blocks and my three modified blocks – and, depending on timestamp and context, someone else could ask to see any one of three different versions of that file – the original version, your version, or my version.
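
To make the mechanism concrete, here is a minimal sketch of that copy-on-write idea in Python (purely illustrative: the class and names are mine, not DxFS internals). A write never overwrites a block; it appends a new copy tagged with a timestamp and a context, so any combination of context and point in time can be read back later:

    import itertools

    class CowFile:
        """Toy copy-on-write file: every write keeps the old block copy."""

        def __init__(self, blocks):
            self._clock = itertools.count(1)
            # Per block: list of (timestamp, context, data). Version 0 is
            # the shared base image, visible to every context.
            self._versions = {n: [(0, None, data)]
                              for n, data in enumerate(blocks)}

        def write(self, context, blockno, data):
            # Append a new copy; the old copy stays where it is.
            self._versions[blockno].append((next(self._clock), context, data))

        def read(self, blockno, context=None, as_of=None):
            # Return the newest copy visible to this context at this time.
            limit = float("inf") if as_of is None else as_of
            for ts, ctx, data in reversed(self._versions[blockno]):
                if ts <= limit and ctx in (None, context):
                    return data

    f = CowFile([f"block{n}" for n in range(10)])  # a 10-block file
    f.write("you", 1, "yours")                     # your future...
    f.write("me", 1, "mine")                       # ...and mine
    assert f.read(1, context="you") == "yours"
    assert f.read(1, context="me") == "mine"
    assert f.read(1, as_of=0) == "block1"          # the original file

The original blocks are never touched, which is why exposing the file “as at” an earlier point in time costs nothing extra.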

Now think of that in an Oracle context. If we copy an entire set of database files onto DxFS, then NFS-mount the files on a machine with Oracle installed, we can configure and start up an instance to use those files. At the same time we could NFS-mount the files on another machine, configuring and starting another instance to use the same data files at the same time! Any blocks changed by the first instance would be written to disc as private copies, and any blocks changed by the second instance would be written to disc as private copies – if both instances managed to change 1% of the data in the course of the day then DxFS would end up holding 102% of the starting volume of data: the original datafiles plus the two sets of changed blocks – but each instance would think it was the sole user of its version of the files.

There’s another nice (database-oriented) feature to Delphix, though. The file system has built-in compression that operates at the “block” level. You can specify what you mean by the block size (and for many Oracle sites that would be 8KB) and the file system would transparently apply a data compression algorithm on that block boundary. So when the database writer writes an 8KB block to disc, the actual disc space used might be significantly less than 8KB, perhaps by a factor of 2 to 3. So in my previous example, not only could you get two test databases for the space of 1 and a bit – you might get two test databases for the space of 40% or less of the original database.
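
The space arithmetic is easy to check with a back-of-envelope sketch (the 1% change rate and the 2.5x compression factor are just the illustrative figures from the text):

    base = 1.00     # original datafiles, as a fraction of database size
    changed = 0.01  # each instance rewrites 1% of the blocks in a day
    uncompressed = base + 2 * changed
    print(f"two clones, no compression:  {uncompressed:.0%}")        # 102%
    print(f"with 2.5x block compression: {uncompressed / 2.5:.0%}")  # 41%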

Delphix vs. rman

I suggested earlier on that Delphix can be a lot cleverer than an rman restore and recover. If you take a full backup to Delphix on Sunday, and a daily incremental backup (let’s pretend that’s 1% of the database per day) for the week, then Delphix can superimpose each incremental onto the full backup as it arrives. So on Monday we construct the equivalent of a full Monday backup, on Tuesday we construct the equivalent of a full Tuesday backup, and so on. But since DxFS keeps all the old copies of blocks, this means two things: we can point an instance at a full backup for ANY day of the week simply by passing a suitable “timestamp” to DxFS – and we’ve got 7 full backups for the space of 107% of a single full backup.
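
The same back-of-envelope arithmetic applies to the backup example (again assuming the text’s illustrative 1%-per-day change rate):

    full = 1.00               # Sunday's full backup, as a fraction of DB size
    daily_incremental = 0.01  # 1% of the database per day
    week = full + 7 * daily_incremental
    print(f"a week of mountable full backups in {week:.0%} of one")  # 107%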

There’s lots more to say, but I think it will have to wait for tomorrow’s conversation with Kyle, and for a couple more articles.

Register of Interests / Disclosure

Delphix Corp. paid my consultancy rates and expenses for a visit to the office in Menlo Park to review their product.

And now, ...the video

First, I want to thank everyone who responded to my prior blog post and its accompanying survey, where I asked when video is better than a paper. As I mentioned already in the comment section for that blog post, the results were loud and clear: 53.9% of respondents indicated that they’d prefer reading a paper, and 46.1% indicated that they’d prefer watching a video. Basically a clean 50/50 split.

The comments suggested that people have a lower threshold for “polish” with a video than with a paper, so one way I’ve modified my thinking is to just create decent videos and publish them without expending a lot of effort on editing.

But how?

Well, the idea came to me in the process of agreeing to perform a 1-hour session for my good friends and customers at The Pythian Group. Their education department asked me if I’d mind if they recorded my session, and I told them I would love it if they would. They encouraged me to open it up to the public, which of course I thought (cue light bulb over head) was the best idea ever. So, really, it’s not a video as much as it’s a recorded performance.

I haven’t edited the session at all, so it’ll have rough spots and goofs and imperfect audio... But I’ve been eager ever since the day of the event (2013-02-15) to post it for you.

So here you go. For the “other 50%” who prefer watching a video, I present to you: “The Method R Profiling Ecosystem.” If you would like to follow along in the accompanying paper, you can get it here: “A First Look at Using Method R Workbench Software.”

Not “how”, but “why” should we upgrade to JDeveloper & ADF 11.1.1.7.0 ?

Followers of the blog know I’m an Oracle database guy, but my current job also has me honing my newbie WebLogic 11g skills, setting up a number of servers to deliver ADF and Forms & Reports 11gR2 applications.

As you’ve no doubt heard, Oracle have just released the 11.1.1.7.0 version of JDeveloper and ADF. I tried applying the 11.1.1.7.0 patch to a WebLogic 11g (10.3.6) installation and it worked without any problems (see here).

The real issue is, we currently have developers working hard to get applications converted from AS10g to ADF (11.1.1.6) running on WebLogic 11g (10.3.6). As much as I would like to “force” them to upgrade to 11.1.1.7, it has to be justified. So why should we upgrade to JDeveloper & ADF 11.1.1.7.0?

One of the great things about the Oracle ACE program is the level of access you get to experts in a variety of Oracle technologies. This network of people includes both Oracle ACEs and Oracle employees.

So how did I go about answering my question? Simple! I emailed my buddy Chris Muir (Oracle ADF Product Manager at Oracle), who is far better qualified to answer than me. :) In that email I asked the following three questions:

  1. Assuming we don’t need the extra functionality in ADF 11.1.1.7, what is the advantage of moving to it? Are the bug fixes and maybe browser compatibility changes enough to warrant the upgrade?
  2. Is there a significance as far as support lifecycle is concerned?
  3. Is the upgrade likely to break anything that has already been converted for 11.1.1.6?

I suggested Chris might want to write a blog post based on these questions. He suggested a remote Q&A style post, so this is the “Q” and Chris will supply the “A” here!

Cheers

Tim…



Hotsos Revisited 2013 – Presentation Material

Thanks once more to everyone who attended the once again well-filled, informative & convivial evening during “Hotsos Revisited 2013”. We presenters enjoyed the ambiance. And for those who would like to read through it all again, here is the presentation material from Toon, Jacco, Gerwin, Frits and me… Presentation material in alphabetical order: …

Continue reading »

A High Performance Architecture for Virtual Databases

Delphix completely changes the way companies do business by removing critical bottlenecks in development productivity. Gone are the days when a simple database clone operation from production to development took hours or even days. With Delphix database virtualization, cloning of Oracle databases becomes a fast and practically free operation. Additionally, infrastructure costs can be drastically reduced due to decreased disk utilization in your cloned environments; in fact, the more environments you deploy, the more cost savings you will realize.

The old adage is that your options are Fast, Good, and Cheap; pick two of three. But as a recent performance study between Delphix and IBM shows, all three options are within reach.

  • Fast database deployments due to Delphix database virtualization technology
  • High performance results through shared block caching in Delphix memory
  • Reduced costs through compression, block mapping, and other Delphix core capabilities

Delphix and IBM partnered to design a high performance virtual database implementation that could be used for reporting, development, quality assurance (QA), and user acceptance testing (UAT) at reasonable costs. In the end, the research shows that the virtual database environments provisioned with Delphix can achieve strong performance levels and scalability suitable for any organization.

The average organization provisions multiple copies of a single production database, which can include multiple copies for development, QA, UAT, stress testing, maintenance QA, and others. Each of these copies requires time to build and provision, and the end result is several identical instances with their own copies of largely identical data blocks across several shared memory regions. Delphix achieves a great reduction in disk usage by only saving identical blocks to disk once. But more importantly, Delphix also only caches unique data blocks in memory. This means that read performance across multiple systems that are provisioned from a single source can show dramatic improvements and scalability.

Purpose of the Tests

Delphix is typically used to create non-production environments from production sources. With a Delphix powered infrastructure you can:

  • Enable high performance virtualized databases for reporting, development, and QA which improves productivity and decreases bottlenecks in deployment schedules
  • Dramatically reduce the cost of non-production environments
  • Reduce the impact of cloning and/or reporting on the production database system
  • Reduce the occurrence of application errors due to inadequate testing on out of date or partially populated development and QA systems

By design, Delphix has a small performance overhead because the I/O is supplied not over a dedicated fiber channel connection to storage but over NFS; however, by properly architecting the Delphix infrastructure, environments can be designed that actually run faster than physical deployments. On top of that, the environments come at a lower cost and as you will see from the research, performance actually gets better as the number of users increases.

Test Goals

In the tests run by IBM and Delphix, physical and virtual databases were tested under several scenarios to identify the effective performance benefits and how best to architect the Delphix server to maximize virtual database performance. The goal of the test was to both determine optimal configuration settings for the Delphix infrastructure and to show optimal I/O throughput on virtualized databases compared to physical database environments.

Test Environment

In order to properly test physical and Delphix virtual database environments, tests were performed concurrently at IBM Labs in Research Triangle Park, NC and Delphix Labs in Menlo Park, CA. Both environments used the same configuration and server hardware.

Generally a standard physical Oracle deployment will involve a database host that connects directly to a SAN or other storage device, typically over fiber. As such the physical database test performed by IBM used a direct connection to SAN. For the Delphix tests however, the SAN is connected directly to the Delphix host via fiber and the Oracle host is connected to Delphix via NFS. This allows Delphix to act as a layer between the Oracle database server and the storage device.

[Figure: physical vs. virtual database test setup]

While this extra I/O layer may seem counterproductive, Delphix actually acts as a caching tier (in addition to its many other virtues). The presence of a properly configured Delphix server augments the storage subsystem and improves I/O performance on read requests from the Oracle host. The bigger and ‘warmer’ the cache, the greater the performance gain. And if SSD is used for your dedicated disk area, you can get more money out of your investment thanks to the single-block sharing built into Delphix.

Storage Configuration

For the purposes of this test, both the physical and virtual database testing environments used the same SAN. The SAN configuration consisted of a 5-disk stripe of 10K RPM spindles. Two 200GB LUNs were cut from the stripe set.

IBM Hardware

One of the main goals in this joint test with IBM was to find an optimal configuration for Delphix on hardware that could provide flexibility and power at an attainable price. To this end we chose the IBM x3690 X5 with Intel Xeon E7 chips: a system that is reasonably priced, powerful, and supports up to 2TB of memory. Delphix does not necessarily require a large amount of memory, but the more that is available, the better Delphix and your virtualized databases will scale, with extremely fast response times.

The test x3690 servers were configured with 256GB RAM. VMware ESX 5.1 was installed on the systems to serve as the hypervisor for the virtual environments. The Delphix VM itself (did I mention Delphix can run as a virtual guest?) was configured with 192GB RAM. Additionally, two Linux guests were created and configured with 20GB each to act as a source and a target system respectively.

[Figure: Delphix test architecture]
In this configuration, the source database is the initial provider of data to Delphix. Following the initial instantiation, change data is incorporated into Delphix to keep the environment up to date and for point-in-time deployment purposes. The target system connects to Delphix via NFS mounts and can have a virtualized database provisioned to it at any time.

Load Configuration with Swingbench

Swingbench (a free database loading utility found at http://www.dominicgiles.com/swingbench.html) was used to load and measure transactions per minute (TPM) on the database host. 60GB data sets were used to populate the source Oracle database that filled a 180GB datafile inside the DB. Tests were run using standard OLTP Think Times. User load was varied for more realistic testing between 1 and 60 concurrent users. Each test ran for a 60 second window.

Database Configuration

The Swingbench database served as a test bed for the physical database servers and as the source database for Delphix virtual database provisioning.
Delphix is capable of linking to multiple source databases. In this test, it connected and linked to a Swingbench source database. Once the initial link was made a virtual database was provisioned (extremely quickly) on the target database host.

The Oracle instances on both the physical and virtual systems were set up with a 100MB buffer cache, a fact that is sure to make some administrators cringe with fear. But the buffer cache was intentionally set to a small size to emphasize the impact of database reads. A large cache would not show improvement to database reads, but simply that a cache was in use. In order to show the true power Delphix brings to your I/O configuration, a small cache on the target database instance shows the work that the benchmark has performed at the read level. As such, I/O latency becomes the key factor in benchmark performance. Caching at the virtualization layer will produce good performance, but uncached data could result in very poor I/O results.

Network Configuration

In the virtual database environment provisioned by Delphix, the network is just as important as storage, because the target Oracle host reaches the Delphix server’s attached storage over NFS. The performance of NFS and TCP has a direct impact on the I/O latency of virtualized databases.

To reduce I/O latency and increase I/O bandwidth in the virtual database test, the Oracle Linux guest used for the target is on the same ESX server as Delphix. By keeping the Delphix tier and the target system on the same hardware and VM infrastructure, NFS and TCP communication avoids additional physical network latency, since traffic never has to leave the server through a physical NIC. This architecture is known as Pod Architecture, and it eliminates typical network issues such as suboptimal NICs, routers, or congestion.

In the physical environment the network configuration was not important as processes and communications were all performed locally on the database host and storage on the SAN was accessed via fiber.

High Level Results

All tests were run twice – once with a completely cold cache (no data in cache) and once with a warm cache (the full dataset has been touched and a working set of data in cache has been established).

With a load of 10 users, the throughput measured in TPM was as follows:

  • Physical Database
    • Cold Cache – 1800 TPM
    • Warm Cache – 1800 TPM
  • Virtual Database
    • Cold Cache – 2100 TPM
    • Warm Cache – 4000 TPM

The implications of these tests are absolutely astounding. Not only was Delphix capable of provisioning a target database host in a matter of minutes, but also the virtual database outperformed the physical counterpart with both a cold and warm cache. With nothing in cache, the virtualized database performed very well, and additional caching increased performance dramatically while on the physical database it remained stagnant.

Detailed Results

OLTP Tests

Performance improving by virtue of an increasingly warm cache is great, but the tests revealed much more. During the course of the tests, it was found that an increased number of concurrent users (generally seen as a detriment to performance) improved performance even further. This level of scalability goes beyond standard environments, where high levels of concurrency can bog down the database environment. Instead, higher concurrency improved throughput, scaling with dramatic performance gains as users were added.

With more than five users on a cold cache and testing 1-60 users on a warm cache, the virtual databases outperform the physical counterpart.

Cold Cache TPM Results by # of users:

[Figure: cold cache TPM results by number of users]


Warm Cache TPM Results by number of users:

[Figure: warm cache TPM results by number of users]

Increased users (concurrency) brought greater performance benefits to the virtual database environment. Instead of performance degrading as the number of concurrent users rose, the tests conclusively showed dramatic improvements as more users were added to the testing process. As configured, with a warm cache and 60 concurrent users, Delphix showed a TPM six times greater than the physical databases.

OLTP Load on Multiple Virtual Databases

In order to test the impact of multiple virtual databases sharing a single cache, two virtual databases were provisioned from the same production source. Tests similar to the previous exercise were run against:

  • A single physical database
  • Two concurrent physical databases using the same storage
  • A single virtual database
  • Two virtual databases using the same storage

In this test, we measured both the throughput in TPM and the average I/O latency.

TPM vs. Single Block I/O Latency by number of users:

[Figure: TPM vs. single block I/O latency by number of users]

As you can see from these tests, as the number of a) database instances and b) concurrent users increased, latency degraded exponentially on the physical environment. As concurrency at the I/O layer rises, latency becomes a huge bottleneck in efficient block fetches resulting in flat or decreasing TPM.

On the other hand, the Delphix environment flourished under increased load. Thanks to shared block caching, more users and more virtual databases meant dramatically increased TPM, because shared blocks removed the I/O latency penalty. Just as in the previous test, the scalability of Delphix is entirely unexpected based on traditional Oracle scaling: as systems and users are added, the environment as a whole not only scales, it scales better than linearly.

Performance measured in seconds of a full table scan on the ORDERS table:

 

[Figure: full table scan time on the ORDERS table, in seconds]

It is worth clarifying here that the impacts seen with caching on Delphix for a single database can of course be attained in any environment by caching more on the database host with an increased buffer cache. However, if there are multiple copies of the database running (remember the need for Dev, QA, Stress, UAT, etc.) there will be no benefit on a physical environment due to the inability of Oracle to share cache data. In a Delphix environment this problem disappears due to its dual function as a provisioning tool and I/O caching layer. Blocks in cache on one virtual database target system will be shared in the Delphix shared cache between all virtual databases.

To show the impact of this functionality, throughput tests were run against two different virtual database targets: first against Virtual DB 1 with a cold cache, then Virtual DB 1 with a warm cache, followed by Virtual DB 2 with a cold cache and Virtual DB 2 with a warm cache.

Performance measured in seconds of a full table scan on customers, orders and order_items:

[Figure: cold vs. warm cache full table scan times, in seconds]

Three queries were used for this test, all performing full table scans against different tables in each virtual database. Each query was run as previously described (Virtual DB 1 cold, Virtual DB 1 warm, Virtual DB 2 cold, Virtual DB 2 warm). The purpose of this test was to simulate standard Decision Support System (DSS) type queries across multiple virtual databases to show the effects of block caching in Delphix.

We see that by warming the cache on Virtual DB 1 the query time dropped dramatically (which is to be expected). However, it is also clear that running the query on Virtual DB 2 with a cold cache retained the same caching benefit. Because Virtual DB 1 warmed the cache, Virtual DB 2 was able to utilize the fast I/O of cached data without any pre-warming. This behavior is the core of the exceptional results in the previous TPM tests when a higher user load and target database count was introduced.
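
A toy model shows the effect (a sketch, not Delphix internals; the per-block latencies are invented round numbers). Two virtual databases resolve their reads against the same underlying source blocks, so a scan from the second database finds the cache already warm:

    DISK_MS, CACHE_MS = 8.0, 0.2  # assumed per-block read latencies (ms)

    class SharedCache:
        """One cache of source blocks shared by every virtual database."""
        def __init__(self):
            self.cached = set()

        def read(self, block_id):
            if block_id in self.cached:
                return CACHE_MS       # block already in shared memory
            self.cached.add(block_id)
            return DISK_MS            # first touch has to go to the SAN

    cache = SharedCache()
    orders = range(1000)              # same source blocks behind both VDBs

    vdb1_cold = sum(cache.read(b) for b in orders) / 1000   # seconds
    vdb1_warm = sum(cache.read(b) for b in orders) / 1000
    vdb2_first = sum(cache.read(b) for b in orders) / 1000  # cold VDB, warm cache

    print(f"VDB1 cold: {vdb1_cold:.1f}s, VDB1 warm: {vdb1_warm:.1f}s, "
          f"VDB2 first scan: {vdb2_first:.1f}s")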

Maximizing Resources and Performance

We have all been required at some point to create multiple databases on a single host, and each time it is difficult to decide exactly how to set each SGA to make the best use of the RAM on the server. Make a single database instance’s SGA too large and you take away critical assets from the other databases on the host. Make all of the instances’ SGAs too large and you can seriously hurt the performance of all databases mounted by instances on that server.

RAM is an expensive resource to use; not necessarily in terms of cost, but in terms of importance to the system and limited availability. By sharing the I/O cache between virtual database clones of the same source via Delphix, the RAM usage on the server is optimized in a way that’s simply not possible on a physical database configuration.

Delphix removes the RAM wall that exists in a physical database environment. Multiple instances are able to share data blocks for improved read times and increased concurrency capabilities. While it is possible to share caching on a SAN in a physical database configuration, remember that each database copy will use different cached blocks on the SAN. Additionally, SAN memory is far more expensive than memory on standard commodity x86 hardware. For example, the cost of 1GB RAM on an x86 server is around $30. On an EMC VMAX the same 1GB RAM will cost over $900.*  And that SAN caching will not carry with it all the additional provisioning benefits that Delphix brings to the table.

Even though the tests were constructed to show maximum performance improvements due to a well-architected Delphix infrastructure, the impact of nearly any Delphix deployment will be dramatic. Slower hardware, a smaller cache, or other factors can contribute to a less optimal architecture, but the principal benefits remain. The actual minimum requirement for the Delphix host is 16GB RAM, but the average size among customers is 64GB. This is obviously not a small amount, but it is dramatically smaller than the 192GB used in these tests.

In real use cases of Delphix at our customers’ sites, we have found that on average 60% of all database I/O block requests to Delphix are satisfied by Delphix cache. This means that 60% of customer queries against their development, QA, reporting, and other environments provisioned via Delphix never have to touch the SAN at all. This relieves bottlenecks at the SAN level and improves latency for the I/O that actually does need to occur on disk.

 

* http://www.emc.com/collateral/emcwsca/master-price-list.pdf (prices obtained on pages 897-898): a storage engine for the VMAX 40K with 256GB RAM is ~$393,000; a storage engine for the VMAX 40K with 48GB RAM is ~$200,000. Thus 256GB – 48GB = 208GB and $393,000 – $200,000 = $193,000, so the cost of RAM here is $193,000 / 208GB = $927/GB.
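
For convenience, the footnote’s arithmetic made executable (figures as quoted from the price list):

    vmax_256gb, vmax_48gb = 393_000, 200_000        # VMAX 40K engine prices ($)
    per_gb = (vmax_256gb - vmax_48gb) / (256 - 48)  # extra $ per extra GB of RAM
    print(f"~${int(per_gb)}/GB for VMAX RAM vs ~$30/GB for x86 server RAM")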

Summary

In nearly every test, Delphix outperformed the traditional physical database, a shift that flies in the face of every ‘performance scaling’ fact we have held as true until this point. By breaking outside of the normal optimization configuration (physical database with cache connected to SAN with cache), we are introducing a multipurpose layer which provides incredible shared caching benefits beyond any that can be found on a solitary database host.

Additionally, the more we threw at the Delphix targets the better it got. More databases, more concurrent users, all came back with better than linear scaling; dramatic gains were common as more work was required of the Delphix server. By implementing Delphix, the IBM x3690 we used for testing was capable of so much more than it could normally handle with the added benefit of cheap RAM as the cache layer and incredibly fast provisioning to boot. The architecture as a whole was significantly cheaper than a robust SAN caching configuration on purpose-built hardware while performing and scaling with dramatic improvements.

Other reading

  • The Delphix documentation is online at http://docs.delphix.com
  • For more information on Delphix see http://dboptimizer.com/delphix/
  • A video on database virtualization along with a blog article
  • Oaktable World presentation on Database Virtualization
  • and a few more blog posts on database virtualization

I’m Back !! Oracle Index Internals Seminar Scheduled in Europe For June.

It’s been quite a while but I’ve recently been asked to present my Indexing Seminar in Europe for Oracle University and at this stage, all the planets seem to be aligning themselves to make this a distinct possibility. Seminar details are as follows: Location: Bucharest, Romania When: 17-18 June 2013 Cost: 2000 RON (approx. 450 […]

Oracle Linux : Frequently Asked Questions (FAQs)…

I mentioned in a previous post that my company were planning to move all of our middle tier infrastructure and some of our Oracle databases to Oracle Linux running on a virtual infrastructure. That process is now underway.

Persuading the company to ditch Red Hat Enterprise Linux (RHEL) in favor of Oracle Linux took a bit of effort, partly due to some Fear, Uncertainty and Doubt (FUD) spread by one of the vendors we use. In the process of trying to counter the FUD I put together an Oracle Linux FAQ document. I thought it might come in handy for anyone else in a similar position, so I thought I would make it available on my site.

As I say at the top of the article, this includes some of my opinions as well as facts. This made me a little nervous, so I thought I would run it by an expert before I let it loose.  Big thanks to Lenz Grimmer for giving the article the once-over. His corrections and suggestions were very welcome!

Cheers

Tim…



Ignoring hints

A hint is an instruction to the optimizer

This is what’s written in the Oracle documentation. An instruction is defined as

a code that tells a computer to perform a particular operation

Which means the Oracle CBO must obey the hints and must perform the particular operation. The latter is hard to define correctly and explain precisely, because it involves the logic of a black box (which is what the Cost Based Optimizer is). Some of the operations are mentioned in the standard Oracle documentation, some are scattered across different places, and there are exceptions as usual. I’ll list here the cases which could lead to “ignoring hints”, with links to documentation/blogs.

  • The hint has a syntax error, doesn’t follow a DELETE/INSERT/SELECT/MERGE/UPDATE keyword, or conflicts with other hints (affects: all hints)
    http://docs.oracle.com/cd/E11882_01/server.112/e26088/sql_elements006.htm#sthref482
  • The optimizer ignores FIRST_ROWS in DELETE and UPDATE statement blocks, and in SELECT statement blocks that include any blocking operations, such as sorts or groupings (affects: FIRST_ROWS)
    http://docs.oracle.com/cd/E11882_01/server.112/e26088/sql_elements006.htm#sthref524
  • The LEADING hint is ignored if the tables specified cannot be joined first in the order specified because of dependencies in the join graph; if you specify two or more conflicting LEADING hints, all of them are ignored; the ORDERED hint overrides all LEADING hints (affects: LEADING, ORDERED)
    http://docs.oracle.com/cd/E11882_01/server.112/e26088/sql_elements006.htm#sthref564
  • If a NO_INDEX hint and an index hint (INDEX, INDEX_ASC, INDEX_DESC, INDEX_COMBINE, or INDEX_FFS) both specify the same indexes, the database ignores both hints for the specified indexes and considers those indexes for use during execution of the statement (affects: INDEX*)
    http://docs.oracle.com/cd/E11882_01/server.112/e26088/sql_elements006.htm#sthref589
  • If two or more query blocks have the same name, or the same query block is hinted twice with different names, the optimizer ignores all the names and the hints referencing that query block (affects: QB_NAME)
    http://docs.oracle.com/cd/E11882_01/server.112/e26088/sql_elements006.htm#sthref692
  • If a hint specifies an unavailable access path, the optimizer ignores it (affects: access path hints)
    http://docs.oracle.com/cd/E11882_01/server.112/e16638/hintsref.htm#PFGRF94938
  • If the statement uses an alias for the table, use the alias rather than the table name in the hint (affects: access path hints)
    http://docs.oracle.com/cd/E11882_01/server.112/e16638/hintsref.htm#PFGRF94938
  • The table name within the hint should not include the schema name if the schema name is present in the statement (affects: access path hints)
    http://docs.oracle.com/cd/E11882_01/server.112/e16638/hintsref.htm#PFGRF94938
  • For access path hints, Oracle Database ignores the hint if you specify the SAMPLE option in the FROM clause of a SELECT statement (affects: access path hints)
    http://docs.oracle.com/cd/E11882_01/server.112/e16638/hintsref.htm#PFGRF94938
  • The hints USE_NL and USE_MERGE are ignored if the referenced table is the outer table in the join (affects: join operation hints)
    http://docs.oracle.com/cd/E11882_01/server.112/e16638/hintsref.htm#autoId7
  • Oracle Database ignores global hints that refer to multiple query blocks
    http://docs.oracle.com/cd/E11882_01/server.112/e16638/hintsref.htm#i21188
  • Access path and join hints on referenced views are ignored unless the view contains a single table, or references another view with a single table
    http://docs.oracle.com/cd/E11882_01/server.112/e16638/hintsref.htm#autoId21
  • With nonmergeable views, optimization approach and goal hints inside the view are ignored, and access path hints on the view in the top-level query are ignored
    http://docs.oracle.com/cd/E11882_01/server.112/e16638/hintsref.htm#autoId23
  • If an invalid hint is a valid SQL keyword, it causes other hints to be ignored (affects: all hints)
    https://support.oracle.com/epmos/faces/DocumentDisplay?id=826893.1
  • When parallel_instance_group points to a non-existent service name, the PARALLEL hint will be ignored (affects: PARALLEL)
    https://support.oracle.com/epmos/faces/DocumentDisplay?id=1467447.1
  • An INDEX hint may be “ignored” if materialized view query rewrite produces a plan with a lower cost
    http://jonathanlewis.wordpress.com/2007/02/21/ignoring-hints/
  • Transitive closure and join elimination may produce a plan which ignores a USE_HASH hint (affects: join operation hints)
    http://jonathanlewis.wordpress.com/2010/02/11/ignoring-hints-2/
  • Hints in ANSI joins could be ignored due to query transformation and the introduction of new query blocks
    http://jonathanlewis.wordpress.com/2010/12/03/ansi-argh/
  • An undocumented limit of 20 characters on query block names causes QB_NAME to be ignored (affects: QB_NAME)
    http://oracle-randolf.blogspot.com/2013/02/qbname-hint-query-block-name-length.html

Unsurprisingly, most of the cases are covered by the documentation. Good to know.
PS. Apart from the documentation, an excellent source of information about hinting is the presentation and paper “Hint on Hints” by Jonathan Lewis.

Filed under: CBO, Hints, Oracle

Everyone should write/present because…

Following on from my post about the ACE program, Yuri from Pythian asked what I get out of presenting that makes it worthwhile. In this post I will tell a few little stories to explain why I think writing and presenting are important skills for people, regardless of their ambitions.

Presenting

I mentioned in the previous post that I was originally scared of public speaking. There are only two reactions to that. You either avoid it, or face it head-on. In my case I chose the latter and it worked for me. I’m now really comfortable speaking to large groups of people. It’s always a bit nervy, but in a good way. At UKOUG last year I got up on stage and I could see my hands were shaking, so I pointed it out to the crowd and laughed at myself. Once I acknowledged the fear, I felt pretty calm and got on with it. The confidence to accept this sort of thing only comes if you put yourself through the wringer a few times. Preparation makes life a lot easier, but no amount of practicing in your house can truly prepare you for the first time you get on stage.

If you do your preparation well, you will learn a lot more about your subject area. I spend a lot of time looking at what I am presenting and trying to think about the questions people are likely to ask me. If I come across anything I can’t answer in a convincing manner, I hit the books to find out what the answer is. There are always a few surprises, but you can incorporate those into your presentations to improve them over time.

In a similar vein, learning how to explain things to other people teaches you a lot about your subject. When you have to think of multiple approaches to explain a subject, you often gain more clarity yourself.

“Those who know, do. Those that understand, teach.” – Aristotle

There are pivotal moments in your life when being able to communicate clearly and calmly can have a big impact. I was speaking to some University students a few months back and asked how many of them had done formal presentations. The answer was pretty much zero. So then I posed the question, how do you think you are going to cope in a job interview if you’ve never actually put yourself under that sort of pressure before? I’m not saying presenting in front of your peers or at a conference will make you an interview demon, but these skills are transferable and they will help.

Likewise, when you are in a meeting and you have to present your arguments for following a specific route, if you babble inanely I doubt you will get the result you want. Communicating your thoughts and ideas in a clear manner is a skill everyone needs. Being able to communicate with people of differing technical backgrounds is a great skill too. It allows you to be the glue that binds the teams together. There is nothing worse than working in a company where all the teams are cool, but the interfaces between them are broken.

Above all, when you’ve done a good presentation you are on such a high. You feel like skipping out of the room. :)

Writing

I think everyone should write. Not just technical people, but everyone. I never kept a diary as a kid, but on reflection I wish I had. You don’t need to write fancy prose. Not every article has to be 50 pages long. It’s about ordering your thoughts. You don’t have to make them available on the internet, but I think it helps if you do.

I remember the first time I answered a question on a forum. It was dbasupport.com. I must have reread my answer about 20 times. I read the relevant pages in the documentation several times, making sure I’d not made a mistake. I hit submit and then refreshed the page every few seconds waiting to see if someone would criticise my answer. It was terrifying. The point is, putting your content out for public consumption opens you up to criticism, so you try a bit harder. I recently got one of my colleagues to start blogging. He kept his notes as word documents on a memory stick. In transferring stuff to his blog he commented on how scrappy some of his notes were and how putting them on his blog was forcing him to neaten things up. :) How many times have you looked back at scrappy notes and found them pretty much useless?

I’ve got 12+ years of notes to fall back on. You ask me to do anything, chances are the first thing I will do is read my article on that subject as a refresher. If it doesn’t fill in all the gaps, I’ll add to it. The fact I can rely on my notes is a big confidence boost for me. Without them I would be winging through the manuals desperately hoping I can find the right bit before I make a fool of myself.

If career progression is your thing, ask yourself this question. If you were an employer and you were faced with two candidates of equal ability and one maintained a blog with regular posts of a technical nature and the other didn’t, which would you pick? I would pick the blogger, just because they showed an extra level of enthusiasm for the subject. I would find that an attractive quality in a candidate.

I don’t think your career should be your main motive though. Most of my employers, including my current one, haven’t had a clue about my website when I’ve been hired. My colleagues tend to catch on over time when I follow up every answer to a question with a link to oracle-base.com. :)

OK. So it’s a bit of a raggedy post, but it gives you some idea of why I think presenting and writing are important and what I get out of them. The fact that occasionally people will give you good feedback or make you part of a community program is a nice bonus. :)

Cheers

Tim…

