Oracle Cloud Control 12cR2 is installed and merrily monitoring one of the test 11gR2 databases running on HP-UX. I’ll probably leave it like that until I come back from Oracle OpenWorld. I don’t want to change the entire administration and monitoring infrastructure just as I leave for a couple of weeks.
As I’m re-familiarizing myself with the 12c way of doing things, I’ve been wondering if this really is a full “Release 2” product, or just 12cR1 with Bundle Patch 2. Not surprisingly, one of my readers asked the same question, pointing out that the version number 12.1.0.2 does not look consistent with a “Release 2” product, which would typically be 12.2.0.1.
I take Oracle version numbers with a pinch of salt. I’m currently using WebLogic version 10.3.5, which is 11g WebLogic. At least WebLogic 12c has been versioned appropriately.
So why the 12cR2 branding, when 12cR1 of the database hasn’t been released yet? My guess is that this is a marketing move for one very specific reason. One of the big marketing messages around Cloud Control was its ability to manage clouds and allow you to charge customers based on their resource usage. While speaking recently to a representative of a large Oracle customer/partner, I found out this functionality plain didn’t work, at least not with their selection of (latest version) Oracle products.
Assuming Cloud Control 12cR2 is now actually capable of delivering on this promise, that represents quite a big change that’s probably worthy of a re-brand, even if the version number doesn’t warrant it.
Of course, this is all speculation on my part. I’m not using it for managing clouds or charging customers. I’m just a regular DBA who likes to watch the performance page every few minutes while doing my administration in SQL*Plus.
Update: See Hans’ comment about the version number. He’s quite correct that this version falls in line with a new release in the GC/CC universe.
I’m working on creating two new courses – one about Oracle structures (tables, indexes, etc.), the other about Oracle mechanisms (undo, redo, etc.). The aim of the courses is to help people do the right thing, avoid the wrong thing, and recognise the difference; the latter, as you might guess, is based loosely on my last book. The courses will probably be ready some time around March next year.
One of the problems of creating the content for courses like this is getting the balance between what people really need to know, what they already know, and what they would really like to know (or know better); so I thought I’d throw the idea open to the public and ask the question: what would you like to learn (more) about if you came on one of these courses?
Generally I’m hoping for responses like: “How Oracle does X”, “Why Oracle does Y”, “What’s the difference between A and B”, “When is P better than Q”, “Why would you use feature F?”, but it would be particularly helpful if you could also include a comment about why you think the point ought to be included.
This year again I'm going to present at the DOAG 2012 conference in November. Since it is a popular topic I decided to talk about "Cost-Based Optimizer Basics" there, too. According to the official schedule the presentation will take place on Thursday, the 22nd of November, "Raum 1" at 13:00.
Like last year, I'll also try to do some Unconference sessions in addition - if it is going to take place, which I don't know yet.
I'll post some updates as soon as I know more details.
In my current "big" project, I am building a network of nested materialized views to transform rows of one schema into rows of another (very different) schema. The former is used by the old (but still live) version of an application of ours, the latter by the new version; our idea is to incrementally (aka "fast") refresh the network daily in order to have the new schema ready when the new version goes live. We need this network because we have only a few hours of allowed downtime, and the transformations are very complex: the MV network is going to be composed of at least 200 MVs, each containing tens of millions of rows.
We have carefully designed all MVs to be fast refreshable, and built a simple PL/SQL engine to refresh them in parallel using Oracle jobs; we are hence confident that we can meet our time constraints. Now I'm looking at optimizing the COMPLETE refresh, both for damage limitation (should some MVs need to be refreshed as complete, due to corruption or to recreation mandated by last-minute functional changes) and to speed up the initial network build.
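The dependency-ordered, parallel refresh described above can be sketched as follows. This is an illustrative Python model, not the actual PL/SQL engine: it groups MVs into "waves", where each wave contains only MVs whose masters were all refreshed in earlier waves, so the members of a wave can safely be submitted as concurrent jobs. The MV names and the dependency map are hypothetical.

```python
def refresh_waves(masters):
    """Group MVs into refresh waves.

    'masters' maps each MV to the list of objects (MVs or base tables)
    it selects from.  An MV joins a wave only once all of its masters
    have been refreshed in earlier waves; everything inside one wave
    is independent and can be refreshed in parallel."""
    # Base tables appear only as masters, never as keys: treat them as done.
    done = {m for deps in masters.values() for m in deps} - set(masters)
    waves = []
    pending = dict(masters)
    while pending:
        wave = sorted(mv for mv, deps in pending.items()
                      if all(d in done for d in deps))
        if not wave:
            raise ValueError("cycle in MV dependencies")
        waves.append(wave)
        done.update(wave)
        for mv in wave:
            del pending[mv]
    return waves

# Hypothetical network: mv1 reads base table t; mv2 and mv3 read mv1.
net = {"mv1": ["t"], "mv2": ["mv1"], "mv3": ["mv1"]}
print(refresh_waves(net))  # [['mv1'], ['mv2', 'mv3']]
```

Each wave would then be handed to DBMS_SCHEDULER (or the older DBMS_JOB) so that MVs within a wave refresh concurrently, while waves themselves run strictly in order.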
One of the optimizations I was considering was to refresh as complete any MV whose masters have previously been completely refreshed, since it is common knowledge that in this case "complete" is much faster than "fast" (pardon the pun) - mostly because using the MV log to identify modified rows is far from cheap, and it's a huge, useless overhead when ALL rows have been modified. But actually I don't need to worry, since I have discovered that in this case Oracle silently turns the fast refresh into a complete one, at least in 11.2.0.3.
The test case (script mv_fast_becomes_complete.sql) builds a chain of three MVs:
create table dellera_t ..
create materialized view dellera_mv1 .. as select .. from dellera_t;
create materialized view dellera_mv2 .. as select .. from dellera_mv1;
create materialized view dellera_mv3 .. as select .. from dellera_mv2;
Then the test case modifies some rows of the master table and asks for fast refresh:
exec dbms_mview.refresh('dellera_mv1', method=>'f', atomic_refresh => true);
exec dbms_mview.refresh('dellera_mv2', method=>'f', atomic_refresh => true);
exec dbms_mview.refresh('dellera_mv3', method=>'f', atomic_refresh => true);
It's no surprise that we get:
SQL> select mview_name, last_refresh_type from user_mviews
     where mview_name like 'DELLERA%' order by mview_name;

MVIEW_NAME           LAST_REFRESH_TYPE
-------------------- ------------------------
DELLERA_MV1          FAST
DELLERA_MV2          FAST
DELLERA_MV3          FAST
But, when the test case refreshes the first alone as complete, keeping the next ones as fast:
exec dbms_mview.refresh('dellera_mv1', method=>'c', atomic_refresh => true);
exec dbms_mview.refresh('dellera_mv2', method=>'f', atomic_refresh => true);
exec dbms_mview.refresh('dellera_mv3', method=>'f', atomic_refresh => true);
MVIEW_NAME           LAST_REFRESH_TYPE
-------------------- ------------------------
DELLERA_MV1          COMPLETE
DELLERA_MV2          COMPLETE
DELLERA_MV3          COMPLETE
Hence, our request for a fast refresh has been silently turned by Oracle into a complete one, for all MVs down the chain. By the way, I have verified that this is true also for join-only and aggregate-only MVs (check the test case if interested): all it takes to trigger this silent optimization is that at least one master has been completely refreshed before.
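The observed behaviour can be stated as a simple rule: a requested fast refresh is escalated to a complete one whenever the MV's master has itself been completely refreshed, and the escalation then cascades down the chain. Here is a small, purely illustrative Python model of that cascade over a linear chain like the test case's (the names and the helper are mine, not Oracle internals):

```python
def effective_refresh(chain, requested):
    """Walk a chain of nested MVs root-first and return the refresh
    method each one actually gets: a requested 'f' (fast) becomes
    'c' (complete) when the MV's master was completely refreshed,
    so a single 'c' propagates all the way down the chain."""
    result = {}
    master_complete = False   # the root's master is the base table
    for mv in chain:
        method = requested[mv]
        if method == 'f' and master_complete:
            method = 'c'      # the silent escalation observed in the trace
        result[mv] = method
        master_complete = (method == 'c')
    return result

chain = ["mv1", "mv2", "mv3"]
# Complete refresh of the root escalates the whole chain:
print(effective_refresh(chain, {"mv1": "c", "mv2": "f", "mv3": "f"}))
# {'mv1': 'c', 'mv2': 'c', 'mv3': 'c'}
```

An all-fast request, by contrast, stays fast end to end, matching the first run of the test case.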
Moreover, the trace file shows the following steps, for the fast-turned-complete refresh of dellera_mv2 (edited for readability):
a) update mlog$_dellera_mv1 set snaptime$$ = :1 where ...;
b) delete from dellera_mv2;
c) delete from mlog$_dellera_mv2;
d) insert into dellera_mv2 (m_row$$, x, y, rid#) select rowid, x, y, rowid from dellera_mv1;
e) delete from mlog$_dellera_mv1 where snaptime$$ <= :1;
This is indeed the signature of a complete refresh: a complete delete (b) of the refreshing MV (dellera_mv2) followed by a straight insert (d) from the master (dellera_mv1), with no visit of the MV log of the master besides the usual initial marking (a) and the final purge (e).
What is also interesting is that the MV log is completely deleted in (c) - with no reference to snaptime$$ - BEFORE the insert; the insert does not populate the MV log, since the kernel trigger is "disabled" during a complete refresh (otherwise we would find a lot of rows in the log, given that the delete happens before the insert rather than after; the script verifies that no row is found). No MV log management overhead, as expected.
It's also interesting to check what changes when the refresh is done with atomic_refresh => false (NB: I have skipped table-locking and index-maintenance operations, and non-pertinent hints, for brevity):
a) same as above
b) same as above
c) truncate table dellera_mv2 purge snapshot log;
d) insert /*+ append */ into dellera_mv2 (m_row$$, x, y, rid#) select rowid, x, y, rowid from dellera_mv1;
e) same as above
That is, the delete has turned into a "truncate ... purge snapshot log" and the insert now runs in append mode; the rest is the same. In passing, (b) looks a tad redundant.
As always, you can find all scripts in the test case .zip above - including of course spool files and traces, the latter also processed with xtrace exec flow to quickly mine the SQL statements of interest.
... well it is for me, anyway.
I know this isn't a universal feeling, but I always look forward to going over to San Francisco for Openworld. There are so many people I'll see that I don't see often enough, lots of cool presentations (if you know where to look) and I expect this year to be a year full of announcements in the areas I care about (and probably in others, too!). I'm gutted that I'll miss the ACE Director briefing because I think I'd be particularly interested this year, but the reality is that it's difficult getting any days away from my current project, never mind a week and a half.
On top of my main agenda presentation (Tuesday, 5pm, Moscone West 3014), I've decided to do a quick 10-minute talk on Tuesday lunchtime at Oracle Closed World, an alternative event organised by Kyle Hailey with assistance from various Oak Table types and sponsored by a number of the best-known Oracle consulting companies. As well as an agenda featuring some of the best speakers around, check out Marco Gralike's cool video on the home page.
This year my main task seems to be as a Party Planner, though. That's something I never thought would happen, and it has given me a new-found respect for all of those poor people who organise the myriad parties that will be going on that week! Still, the great thing about it is that there is a Party at the end.
Can't wait ...
I did an EM Cloud Control 12cR2 installation at work yesterday. The database repository was 11.2.0.3 on HP-UX and the middle tier was installed on RHEL 5.8. The installation was pretty much the same as the 12cR1 version. Over the next few days I’ll be testing out some of the features to decide if we can move across to it permanently.
Today I did two run throughs of single server installations on Oracle Linux 5.8 and 6.3. There are a couple of minor differences, but nothing to worry about. You can see what I did here:
The installations are a little small, so they are not too fast, but they’re good enough to test things out.
Update: It’s been a while since I used the 12c version, so I’ve had to relearn a few simple things. I thought I might as well write them down in case it helps others.
I’m now rockin’ UltraEdit 3.2 on Mac and Linux…
This is the version that is meant to bring the Mac/Linux version in line with the Windows version as far as functionality is concerned. I’m not sure that is the case, but it’s getting ever closer. It certainly does everything I need it to do now.
It’s a major release with a bunch of interesting new features, so I’ve got an interesting few days ahead while I get to grips with it. You can get an idea about what’s in it here.
I posted, as (my) tradition dictates, the Oracle agenda overview of XMLDB sessions during Oracle Open World 2012. I already signed up for the HOL10055 session, but yesterday I also got a quick peek at its contents this year. And yes, I can promise you, it is a must go, if you are dealing with