Oakies Blog Aggregator

UKOUG Speaker Evaluations…

This is a bit of a vanity post, so excuse me blowing my own trumpet.

I just took a look at the speaker evaluation feedback from UKOUG 2012 and it was pretty good. Only 4 of the people left “wordy” feedback:

  • Could have done with more time. Just the sort of presentation I came to conference for v.good.
  • Impressive.
  • Brilliant presenter and useful content, could have been twice as long.
  • Very nice presentation.

As far as the rating feedback went, the scores in each area were out of 6. I got just under 5 for the slides and the rest (content, presentation skills and value of presentation) were all above 5, which I think is pretty good. I was particularly happy with the presentation skills result.

I don’t take this stuff too seriously as these ratings are very subjective. Some people are always mega-critical and some people would give you 6/6 if you tripped and fell off the stage, but it is a nice little ego boost.



PS. It kinda makes up for the terrible job I did last year, so I guess my average rating for UKOUG presentations is now about “meh”… :)

UKOUG Speaker Evaluations… was first posted on December 20, 2012 at 5:19 pm.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

HOWTO: Viewing XML data in V_$CELL_% storage cell SYS views

Got a small question from Frits asking if I could help him make some XML data readable in one of the SYS.V_$CELL_% / V$CELL_% views. I have been a bit busy, in between jobs, but I have been meaning to make some of those XML columns more readable anyway (for myself and others), so… let’s have a go at it. …

Continue reading »

5 Year Anniversary (5 Years)

It’s just occurred to me that I’ve recently passed the 5 year anniversary of when I started this humble little blog !! My first real post on Invisible Indexes was written back on 11 December 2007. Some 256 posts later and I’m still here, posting occasionally on Oracle Indexes with as many David Bowie […]

VirtualBox 4.2.6…

I just saw Fat Bloke‘s post about VirtualBox 4.2.6. You can get the downloads and the changelog from the usual places.

Happy upgrading! :)



VirtualBox 4.2.6… was first posted on December 20, 2012 at 1:02 am.

Debra Lilley’s email disclaimer…

I think I’m going to reply to all of Debra Lilley‘s emails with my own disclaimer that says,

Unless otherwise stated, this email has got nothing to do with Debra Lilley or the massive disclaimer that gets added to every email she sends… Even the iddy-biddy 3 word emails…

That Fujitsu email disclaimer is one serious piece of text!

Heaven forbid you get an out of office reply… :)



Debra Lilley’s email disclaimer… was first posted on December 19, 2012 at 9:11 pm.

Storage Indexes vs Database Indexes Part II: Clustering Factor (Fast Track)

Two posts in two days !! Well, with Christmas just around the corner, I thought I better finish off a couple of blog posts before I get fully immersed in the festive season The Clustering Factor (CF) is the most important index related statistic, with the efficiency of an index performing multi-row range scans very much […]

Some stuff about dates and timestamps in Oracle

set null ~ pagesize 40 linesize 132;
column id format 90;
column datetrunc format a30;
column dateuntrunc format a30;
column timetrunc format a30;
column timeuntrunc format a30;

create table time_size2 (
id number,
datetrunc date,
dateuntrunc date,
timetrunc timestamp,
timeuntrunc timestamp
);

insert into time_size2
select rownum, trunc(sysdate), sysdate, trunc(sysdate), sysdate
from dual;

insert into time_size2 values (2,null,null,null,null);

select * from time_size2;

select id,
to_char(datetrunc,  'YYYY MM DD HH24:MI:SS') "datetrunc",
to_char(dateuntrunc,'YYYY MM DD HH24:MI:SS') "dateuntrunc",
to_char(timetrunc,  'YYYY MM DD HH24:MI:SS.FF9') "timetrunc",
to_char(timeuntrunc,'YYYY MM DD HH24:MI:SS.FF9') "timeuntrunc"
from time_size2;

select vsize(datetrunc),vsize(dateuntrunc),vsize(timetrunc),vsize(timeuntrunc) from time_size2;
-- Notice that vsize reports NULL as the physical length for NULLs, in accordance with the standard
-- (so don't blame Oracle for the stupidity that the physical length of the NULL is known but not accurately reported;
-- to date Oracle has ignored enhancement requests for an rsize function that reports the real size in all cases).
select dump(datetrunc),dump(dateuntrunc),dump(timetrunc),dump(timeuntrunc) from time_size2;
-- dump, on the other hand, could be more informative if Oracle wanted, since that is not part of the standard.
-- The .FF9 format just doesn't work for dates, even as a convenience that would show zeroes anyway,
-- so remember you can't use a single fractional-second format model for both dates and timestamps.
-- If you want to simulate a matching format, you can tack a decimal point and literal zeroes onto a date:
select to_char(dateuntrunc,'YYYY MM DD HH24:MI:SS.FF9') "dateuntrunc" from time_size2;  -- errors: FF is invalid for dates
select to_char(dateuntrunc,'YYYY MM DD HH24:MI:SS')||'.000000000' "dateuntrunc" from time_size2;
-- okay, you need a little fix-up to make that an accurate simulation for NULLs:
select to_char(dateuntrunc,'YYYY MM DD HH24:MI:SS')||decode(dateuntrunc,null,'','.000000000') "dateuntrunc" from time_size2;

Running this will show you some interesting details about dates and timestamps in Oracle. A sample run on r11 is here:

Of course, if you run this on a release prior to timestamps, it will fail horribly. If Oracle ever listens to my enhancement requests for the rsize function and for adding .FF[1..9] as a legal format for dates, the bit about the work-around for a constant date and timestamp output format will be obsolete.


Snapper v3.61 released – and more work is in progress!

Here’s the latest version of my Oracle session level performance Snapper tool:

I’m going to add more stuff to Snapper in coming days, but thought to release something for testing already :)

  1. There are some cosmetic changes, like printing empty lines and header lines for better readability (still controllable by the pagesize parameter, look into the scripts).
  2. I’ve made some changes in the visual “graph” column just for easier readability when glancing at a snapper report:

    • the @-character is used for CPU time (@ looks a bit like the letter C, doesn’t it :)
    • the #-character is for all other Time model time
    • the W-character is for wait time

  3. Snapper now displays the SQL child cursor number by default in the “ASH” breakdown section 
  4. The main new feature is the addition of event count columns from V$SESSION_EVENT, which allows me to calculate session-level wait event duration averages too. For example, next time you see the log file sync wait event taking lots of time, you can immediately check how much these individual waits take on average instead of having to derive it from other metrics like user commits and transaction rollbacks. In the example below I’ve identified one user session performing commits (SID 298) and also LGWRs session (492) and am snapping them at the same time.


Scroll right in the below section to see the wait time average columns (if you want narrower output, look into the script’s “configurable stuff” section to hide individual columns):

SQL> @snapper all 5 1 298,492

Sampling SID 298,492 with interval 5 seconds, taking 1 snapshots...

-- Session Snapper v3.61 by Tanel Poder

    SID, USERNAME  , TYPE, STATISTIC                                                 ,         DELTA, HDELTA/SEC,    %TIME, GRAPH       , NUM_WAITS,  WAITS/SEC,   AVERAGES
    298, SOE       , STAT, Requests to/from client                                   ,            36,        7.2,         ,             ,          ,           ,
    298, SOE       , STAT, opened cursors cumulative                                 ,           379,       75.8,         ,             ,          ,           ,
    298, SOE       , STAT, user commits                                              ,            54,       10.8,         ,             ,          ,           ,
    298, SOE       , STAT, user calls                                                ,            72,       14.4,         ,             ,          ,           ,
    298, SOE       , STAT, recursive calls                                           ,           433,       86.6,         ,             ,          ,           ,

... lots of output removed ...

    298, SOE       , STAT, execute count                                             ,           323,       64.6,         ,             ,          ,           ,
    298, SOE       , STAT, bytes sent via SQL*Net to client                          ,          2859,      571.8,         ,             ,          ,           ,
    298, SOE       , STAT, bytes received via SQL*Net from client                    ,          6972,      1.39k,         ,             ,          ,           ,
    298, SOE       , STAT, SQL*Net roundtrips to/from client                         ,            34,        6.8,         ,             ,          ,           ,
    298, SOE       , STAT, cell flash cache read hits                                ,            44,        8.8,         ,             ,          ,           ,
    298, SOE       , TIME, parse time elapsed                                        ,          1242,    248.4us,      .0%, [          ],          ,           ,
    298, SOE       , TIME, PL/SQL execution elapsed time                             ,         39365,     7.87ms,      .8%, [#         ],          ,           ,
    298, SOE       , TIME, DB CPU                                                    ,        137978,     27.6ms,     2.8%, [@         ],          ,           ,
    298, SOE       , TIME, sql execute elapsed time                                  ,        152021,     30.4ms,     3.0%, [#         ],          ,           ,
    298, SOE       , TIME, DB time                                                   ,        690485,    138.1ms,    13.8%, [##        ],          ,           ,
    298, SOE       , WAIT, latch: cache buffers chains                               ,            27,      5.4us,      .0%, [          ],         1,         .2,       27us
    298, SOE       , WAIT, log file sync                                             ,        530709,   106.14ms,    10.6%, [WW        ],        20,          4,    26.54ms
    298, SOE       , WAIT, SQL*Net message to client                                 ,            65,       13us,      .0%, [          ],        34,        6.8,     1.91us
    298, SOE       , WAIT, SQL*Net message from client                               ,        511002,    102.2ms,    10.2%, [W         ],        34,        6.8,    15.03ms
    298, SOE       , WAIT, PL/SQL lock timer                                         ,       3589765,   717.95ms,    71.8%, [WWWWWWWW  ],       227,       45.4,    15.81ms
    298, SOE       , WAIT, cell single block physical read                           ,         30350,     6.07ms,      .6%, [W         ],        47,        9.4,   645.74us
    298, SOE       , WAIT, events in waitclass Other                                 ,            50,       10us,      .0%, [          ],         1,         .2,       50us

    SID, USERNAME  , TYPE, STATISTIC                                                 ,         DELTA, HDELTA/SEC,    %TIME, GRAPH       , NUM_WAITS,  WAITS/SEC,   AVERAGES
    492, (LGWR)    , STAT, non-idle wait time                                        ,           328,       65.6,         ,             ,          ,           ,
    492, (LGWR)    , STAT, non-idle wait count                                       ,           350,         70,         ,             ,          ,           ,
    492, (LGWR)    , STAT, in call idle wait time                                    ,           148,       29.6,         ,             ,          ,           ,
    492, (LGWR)    , STAT, messages received                                         ,           343,       68.6,         ,             ,          ,           ,
    492, (LGWR)    , STAT, background timeouts                                       ,             1,         .2,         ,             ,          ,           ,
    492, (LGWR)    , STAT, physical write total IO requests                          ,           384,       76.8,         ,             ,          ,           ,
    492, (LGWR)    , STAT, physical write total bytes                                ,       1330688,    266.14k,         ,             ,          ,           ,
    492, (LGWR)    , STAT, cell physical IO interconnect bytes                       ,       2661376,    532.28k,         ,             ,          ,           ,
    492, (LGWR)    , STAT, redo wastage                                              ,        105336,     21.07k,         ,             ,          ,           ,
    492, (LGWR)    , STAT, redo writes                                               ,           344,       68.8,         ,             ,          ,           ,
    492, (LGWR)    , STAT, redo blocks written                                       ,          2604,      520.8,         ,             ,          ,           ,
    492, (LGWR)    , STAT, redo write time                                           ,           332,       66.4,         ,             ,          ,           ,
    492, (LGWR)    , STAT, redo blocks checksummed by FG (exclusive)                 ,           200,         40,         ,             ,          ,           ,
    492, (LGWR)    , TIME, background cpu time                                       ,        117983,     23.6ms,     2.4%, [#         ],          ,           ,
    492, (LGWR)    , TIME, background elapsed time                                   ,       4282180,   856.44ms,    85.6%, [######### ],          ,           ,
    492, (LGWR)    , WAIT, rdbms ipc message                                         ,       1673152,   334.63ms,    33.5%, [WWWW      ],       261,       52.2,     6.41ms
    492, (LGWR)    , WAIT, log file parallel write                                   ,       3068565,   613.71ms,    61.4%, [WWWWWWW   ],       367,       73.4,     8.36ms
    492, (LGWR)    , WAIT, events in waitclass Other                                 ,            55,       11us,      .0%, [          ],         4,         .8,    13.75us

--  End of Stats snap 1, end=2012-12-18 20:23:18, seconds=5

Active% | SQL_ID          | SQL_CHILD | EVENT                               | WAIT_CLASS
    71% |                 | 0         | log file parallel write             | System I/O
     7% |                 |           | log file sync                       | Commit
     5% | c13sma6rkr27c   | 0         | ON CPU                              | ON CPU
     2% | 0y1prvxqc2ra9   | 0         | ON CPU                              | ON CPU

--  End of ASH snap 1, end=2012-12-18 20:23:18, seconds=5, samples_taken=42

PL/SQL procedure successfully completed.

Having a quick way to check session level wait event average durations can speed up your troubleshooting – as it’s an important difference whether your session is responding slowly due to 1000 x 1 ms waits or 1 x 1000 ms wait.
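The AVERAGES idea above is simple arithmetic over two snapshots: delta of time waited divided by delta of wait count for each event. Here is a minimal sketch of that calculation (this is an illustration, not Snapper's actual code; the snapshot structure and numbers are invented, loosely modeled on V$SESSION_EVENT's time and count columns):

```python
# Hypothetical sketch: per-event average wait duration from two snapshots.
# Each snapshot maps event name -> (time_waited_microseconds, total_waits).

def wait_averages(before, after):
    """Return average microseconds per wait for each event in the interval."""
    out = {}
    for event, (t1, w1) in after.items():
        t0, w0 = before.get(event, (0, 0))
        dt, dw = t1 - t0, w1 - w0          # deltas over the snapshot interval
        if dw > 0:                         # only events that actually waited
            out[event] = dt / dw
    return out

# Invented numbers that reproduce the 26.54ms log file sync average above:
before = {"log file sync": (1_000_000, 100)}
after  = {"log file sync": (1_530_709, 120)}
print(wait_averages(before, after))  # {'log file sync': 26535.45}
```

This is exactly why the event count columns were added: with both the time delta and the count delta at session level, the average falls out directly instead of being derived from unrelated metrics.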

Of course averages may hide some detail, but with Snapper these averages are not from system wide, but from session level sources and usually calculated over a snapshot of a few seconds, not minutes or hours.

As a next step, I plan to add more helpful averages (the usual things I currently manually calculate when needed) after the TIME model and some STAT metrics in Snapper output too. This should help to save a few seconds when troubleshooting that super-critical issue next time ;-)

So please test it out and tell me what you think!


(P.S. I have just updated my Oracle performance online seminars page with 2013 info!)

Database Virtualization

What is database virtualization? Database virtualization, as opposed to operating system virtualization, is the sharing of a read-only copy of a source database between clone databases. The clones are called virtual databases (also called thin provisioned clones, as distinct from full physical copy clones). Virtual databases are much more than simple read-only databases: they can also write to the data files. How can a virtual database write to data files that are read-only? Writes are accomplished through one of two basic mechanisms: copy-on-write file systems based on block pointers, or journal file systems. When a virtual database writes to the data files, the changes are not written to the shared data files but are kept in a private area visible only to that virtual database. Each virtual database therefore sees what appears to be a private read/write copy of the database: changes made by a virtual database are seen only by that virtual database, as if it had its own full private copy of the source database.
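The private-write-area mechanism can be sketched in a few lines. This is a toy model, not any vendor's implementation: reads fall through to the shared read-only image unless the clone holds its own private version of that block.

```python
# Toy copy-on-write overlay: shared read-only blocks plus a per-clone
# private layer. Block ids and contents are invented for illustration.

SHARED_BLOCKS = {0: b"source-block-0", 1: b"source-block-1"}  # never modified

class VirtualDB:
    def __init__(self, shared):
        self.shared = shared   # read-only source image, shared by all clones
        self.private = {}      # this clone's private changes only

    def read(self, block_id):
        # Prefer the private version; fall through to the shared image.
        return self.private.get(block_id, self.shared[block_id])

    def write(self, block_id, data):
        # The shared image is untouched; the change is private to this clone.
        self.private[block_id] = data

vdb1, vdb2 = VirtualDB(SHARED_BLOCKS), VirtualDB(SHARED_BLOCKS)
vdb1.write(0, b"vdb1-change")
print(vdb1.read(0))  # b'vdb1-change'    -> its own private version
print(vdb2.read(0))  # b'source-block-0' -> still sees the shared image
```

Each clone believes it has a full read/write copy, while on disk only the changed blocks consume new space.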

Database Virtualization gives:

  1. Enormous storage savings. Clones initially take up almost no space.
  2. Instant provisioning of virtual databases.  Clones can be made instantly since there is no copying of data.

A virtual database can be made in seconds and takes up almost no space since the virtual database only creates new control files, redo log files and a new temporary table space. All the rest of the data is shared initially.


For example, if one wanted to supply a team of 100 developers each with their own copy of a production database, then for every 1TB of production data files it would cost 100TB of disk to support the full copies. Copying 100TB of data would not only require massive amounts of storage but would also be extremely time consuming. Because of the time it takes to create those 100 copies, they would be out of date with production. On the other hand, with virtual databases the amount of disk required would be about 1TB total for all 100 virtual databases, and because of how quickly virtual databases can be created, they would be closely in sync with production.

Coming of age

Database virtualization is finally coming of age. The core technology to virtualize databases has been around for almost two decades, but because virtualizing databases required specialized hardware, expert knowledge and complex scripting, it remained an underutilized technology. An analogy is the usage of the internet. The internet existed before the web browser, and it was possible to share photos, music and documents via the internet, but few people did because the tools were relatively unknown and difficult. Once the web browser came out and sharing became as easy as point and click, usage of the internet exploded.

The same explosion is happening with database virtualization. Delphix, for example, now eliminates the requirements for specialized hardware, expert knowledge and complex scripting. Delphix is so completely turnkey that a developer can provision a fully operational virtual database in seconds with 3 clicks of a mouse. Delphix is not the only one automating database virtualization: Oracle will also be introducing a point and click interface in Oracle 12c.

Barriers to entry

There have been two basic barriers to entry for adoption of database virtualization:

1. Specialized Hardware

One of the barriers to entry to database virtualization has been the requirement for specialized storage systems, such as EMC, NetApp or the Oracle ZFS storage appliance, that are capable of taking snapshots of the file system. The requirement for specialized hardware has been eliminated by new technologies such as the open source ZFS file system, Oracle dNFS CloneDB and the Delphix file system (DxFS).

2. Costly Administration

Even without the requirement for specialized hardware, the adoption of virtualized databases has been low because of the complicated scripting required to manage virtual databases. Managing the process of virtualization has been too high a barrier for major adoption. Some of the steps that have to be managed in a database virtualization system are administering the RMAN backups, incremental backups, archive log files, file system snapshots and file system clones; exporting file systems over Fibre Channel, iSCSI or NFS; mounting those file systems on virtual database host machines; creating proper startup parameters; and recovering the virtual database to a point in time.

Now that the hardware and management barriers are falling with fully automated database virtualization solutions, the adoption of database virtualization is taking off. For the first time, companies such as Delphix are adding the automation steps on top of the storage snapshot technologies. The automation allows anyone to harness the power of database virtualization.

Delphix example

Delphix, for example, is a software stack that implements the Delphix file system (DxFS) and Delphix Datavisor, which automate the whole process of database virtualization and require neither file system snapshots nor EMC or NetApp hardware. It only requires an x86 box with about the same amount of disk space as the database to be virtualized. The source database is backed up onto the Delphix appliance via automated RMAN APIs, the data is automatically compressed, and Delphix automates syncing of the local copy with changes in production and handles the provisioning of virtual databases. A virtual database can be provisioned from any SCN or second in time during the retention window. The retention window is typically 2 weeks.

Oracle is also attempting to bring automation to database virtualization in Oracle 12c with SMU (Snapshot Manager Utility), a pay-for licensed software utility. The utility is based on ZFS, and even though ZFS is open source, Oracle’s implementation of SMU requires an Oracle ZFS appliance, i.e. specialized hardware, which will be a barrier to entry for most IT shops. An even higher barrier to entry for SMU is that it requires the source database data files to be stored on the ZFS appliance. Using the ZFS storage appliance for production databases will all but prohibit the use of SMU for production databases. Delphix, on the other hand, has no such limitations.

Eliminate bureaucracy

Automated virtual database solutions eliminate bureaucracy. What took weeks in the past now takes seconds, by eliminating both the data copying time of the production database and all the time for requesting, discussing, processing and allocating resources. When developers want a clone, they typically have to ask their manager, their DBA, their storage admin etc. The managerial decision making process, administrative tasks and coordination meetings often take weeks. With database virtualization all of that overhead can be eliminated: developers can provision their own virtual databases in seconds, by themselves, whenever they want, with a few clicks of a mouse and no storage overhead.

Speed up development: each developer gets a copy

Because the resource cost of providing database copies is eliminated, a team of developers can go from sharing one copy of production to each having their own private copy of the source database. With a private copy of the database, a developer can change schema and metadata as fast as they want, instead of waiting days or weeks of review time to check in changes to a shared development database.

Saves RAM

Virtualizing databases not only saves disk space but can also save RAM. RAM on the virtual database hosts can be minimized because virtual databases share the same data files, and thus can share the same blocks in the file system cache on Delphix, reducing the amount of memory required to cache data. No longer does each copy require private memory to cache the data: with virtual databases, blocks are shared not only on disk but also in memory. For example, Delphix sits between the back end storage SAN and the virtual database hosts and serves as a shared cache for all the virtual databases. Since the virtual databases use the same underlying copy of the original database, Delphix can hold blocks from the original database in memory, and those cached blocks can be shared between all the virtual databases.

How to virtualize databases

What are the current virtualization technologies and what are the advantages and disadvantages of each ?

  1. EMC
  2. NetApp
  3. Clone DB (Oracle)
  4. ZFS Storage Appliance (Oracle)
  5. Data Director (VMware)
  6. Oracle 12c Snapshot Manager Utility (SMU)
  7. Delphix

Issues to consider in virtualization are

  • Specialized hardware
  • Golden copy problem
  • Performance issues
  • Size restrictions
  • Database  limitations
  • Automation


Specialized Hardware

The largest barrier to entry is specialized hardware.  EMC, Netapp , ZFS Storage Appliance and  Oracle 12c SMU all require specialized storage hardware.

Of these choices, the least interesting is probably EMC, because EMC’s snapshot technology is based on a journaling file system rather than a pointer based file system, and thus one is constrained by the “golden copy” problem (see below).

Of the other three solutions (NetApp, ZFS storage appliance and Oracle 12c SMU), NetApp is the most tried and true and has limited automation such as Snap Manager for Oracle. At the opposite end, with no automation, is the ZFS storage appliance, which requires custom scripting using an Oracle proprietary shell. The scripting requirements are largely removed by the Oracle 12c SMU licensed product, which automates the snapshotting and cloning and works on top of the ZFS storage appliance. One of the biggest issues with 12c SMU is that the source database is required to use storage on the ZFS appliance.

Golden Copy problem

With pointer based file systems one can start with an initial full copy of the source database and then keep a rolling window of incremental changes from the source database. For example one could have incremental backups of the source database everyday for a month. These snapshots would consist of the new changed blocks in the incremental backup as well as the unchanged blocks from the initial full copy backup. As incremental backups fill the retention time window then blocks that are older than the time window can be freed when there are newer copies of those blocks.

Netapp, ZFS and Delphix are all based on pointer based file systems and can have rolling retention windows.

On the other hand CloneDB,  Data Director and EMC all have the golden copy problem. For CloneDB there is only one copy of the source database and if a new copy is required then a new full copy has to be taken of the source databases. For Data Director and EMC they allow snapshots so the original copy can be shared with newer versions of the source database but the number of snapshot is limited and once the limit is reached, the source database has to be copied over in its entirety.
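The rolling retention window described above can be sketched as follows. This is a simplified illustration of the idea, not NetApp, ZFS or Delphix internals: each daily snapshot maps block ids to block versions, and a block version can be freed once no snapshot inside the retention window still references it.

```python
# Toy model of a rolling retention window over pointer-based snapshots.
# Block ids and version labels are invented for illustration.

def referenced_versions(snapshots, retention_days):
    """snapshots: list of dicts (oldest first), each mapping block_id -> version.
    Returns the set of (block_id, version) pairs still inside the window."""
    keep = snapshots[-retention_days:]           # snapshots inside the window
    return {(b, v) for snap in keep for b, v in snap.items()}

day1 = {1: "v0", 2: "v0"}
day2 = {1: "v1", 2: "v0"}   # block 1 changed in that day's incremental backup
day3 = {1: "v1", 2: "v2"}   # block 2 changed the next day
live = referenced_versions([day1, day2, day3], retention_days=2)
all_versions = {(b, v) for snap in (day1, day2, day3) for b, v in snap.items()}
print(sorted(all_versions - live))  # [(1, 'v0')] -> only block 1's oldest version is freeable
```

Because blocks are reference-counted by the snapshots that point to them, aging a snapshot out of the window only frees the block versions that no newer snapshot shares, which is what makes the rolling window cheap to maintain.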


Performance issues

Two of the solutions, Oracle’s CloneDB and VMware’s Data Director, have significant performance drawbacks. VMware’s Data Director performance degrades the more clones share the same snapshot, and the more snapshots are taken, the more performance degrades. As for CloneDB, Oracle doesn’t recommend it for databases where performance is a concern.

Size Restrictions

NetApp has a limit of 255 snapshots. Snaps are limited to the same aggregate (storage pool), and aggregates have size limits depending on the controller; a 32-bit controller is limited to 16TB.

VMware Data Director linked clones are limited to 30 snapshots, and EMC is limited to 16 snapshots.

The Delphix file system (DxFS) and ZFS solutions are, for all practical purposes, unbounded in size, in snapshots and in snapshots of snapshots (i.e. branching clones).

Database limitations

VMware’s Data Director only supports databases on x86 operating systems, such as Linux and OpenSolaris.

The other solutions support any OS version of Oracle.


Automation

The solutions offer different levels of automation, from none to fully automated. Automation is critical for adoption and utilization.

EMC, NetApp and VMware Data Director all provide a limited set of automation, but not enough to significantly drive adoption. Even with automation, EMC and Data Director have the golden copy problem and thus can never be fully automated to create a rolling window of time from which clones can be provisioned. NetApp can be set up to create a rolling window of snapshots, but it lacks an interface for provisioning the clones, so creating clones requires significant work from DBAs and is out of reach for an end user such as a developer. Oracle’s 12c SMU will provide automated provisioning of databases, but as this is an unreleased product, it’s unclear whether this functionality will be accessible to end users such as developers.

Delphix is the only solution that fully automates the system, eliminating the need to know how to run RMAN or how to recover a database, and it gives developers and end users a friendly, slick UI that allows them to create virtual databases.


Here is a list of the technologies out there and their issues:

  • EMC – golden copy issues, hardware lock-in, requires advanced scripting
  • NetApp – hardware lock-in, size limitations, requires advanced scripting
  • Clone DB (Oracle) – golden copy issue, performance issues, requires advanced scripting
  • ZFS Storage Appliance (Oracle)  – hardware lock-in, requires advanced scripting
  • Data Director (VMware) -  golden copy issue, performance issues, x86 databases only
  • Oracle 12c Snapshot Manager Utility (SMU) – hardware lock-in, requires the source database to have its data files located on an Oracle ZFS Appliance
  • Delphix – automated solution for both administrators and end users. Delphix works for Oracle 9, 10 and 11 on RAC, Standard Edition and Enterprise Edition. Fully automated, with time retention windows and end user self service provisioning. Also supports SQL Server databases. With Delphix there are no size restrictions, and clones and snapshots are unlimited. Snapshots can even be taken of snapshots, creating branched versions of source databases.

If you are interested in producing a home grown data virtualization solution, then either set up open source ZFS or buy an Oracle ZFS storage appliance, read Oracle’s white paper listed below in the references, and start coding.

If you are looking for an enterprise level data virtualization system that is fully automated, hardware agnostic, runs itself, and provides self service end user provisioning, then Delphix is the solution.


References:

  • CloneDB
  • ZFS
  • ZFS Appliance
  • Data Director – linked clones
  • EMC
  • NetApp – FlexClone
  • Delphix


Who Stole gettimeofday() System Calls From Oracle strace() sessions?

I've been meaning to write this blog post for a while now but never seemed to find the time. Hopefully this post will be useful for anyone who spends a lot of time tracing Oracle processes from the Linux operating system. I'm one of those people, so it was good to get to […]