Oakies Blog Aggregator

PL/SQL New Features in Oracle Database 12c

I recently put some more PL/SQL new features articles live.

I’ve also posted a top-level new features article.

This contains a number of smaller features, as well as links to other articles on the site that discuss some of the new features in greater depth.

I’ve got a couple of PL/SQL books to read and review, but I’ve been holding back because I wanted to get my own take on this subject written before I was influenced by others. I guess I don’t have that excuse any more. :)

Cheers

Tim…


PL/SQL New Features in Oracle Database 12c was first posted on September 25, 2014 at 8:52 pm.

Oracle OpenWorld 2014 : The Journey Begins

I did my normal last minute packing last night. After a quick panic this morning, I was off in the taxi to the airport.

I find it amazing how sense goes out of the window at airports. There was a big sign saying “Put empty trays on rollers”, but people were either leaving them or stacking them up. Either way, they were getting in the way. WTF? RTFM!

The first flight to Frankfurt was fine. While waiting to board I was staring at the guy in front thinking, “I’m sure I could do his fade better than that!” I might have to start hairdresser-base.com… :)

The flight to SFO went without incident. I met Joze Senegacnik, Øyvind Isene, Martin Bach and Harshad Oak during it. :) Once we landed, it was the airport shuttle to the hotel, then straight out with the family for some food. It’s sad that we only get to meet at conferences. :)

Tomorrow is the ACED briefing, so expect lots of tweets telling people I can’t tell them what is going on. :)

Cheers

Tim…


Oracle OpenWorld 2014 : The Journey Begins was first posted on September 25, 2014 at 5:26 pm.

Oaktable World & Delphix Labs on Mon & Tues

Next week during Oracle Open World, be sure to come along on Monday and Tuesday to the free Oaktable World, and on Tuesday to the Delphix hands-on lab and free 90-day trial version at #CloneAttack, at the same venue as Oaktable World. The labs will also be joined by DBvisit for #RepAttack and SolarWinds (Confio) for #MonitorAttack.

 

[Screenshots: Oaktable World / #CloneAttack schedule and “Beer, Meat, Explode” event details]

Is X a Big Data product?

Virtually everyone in the data space today claims that they are a Big Data vendor and that their products are Big Data products. Of course, if you are not in Big Data then you are legacy. So how do you know whether a product really is a Big Data product?

While there might not be fully objective criteria (mainly because the definition of Big Data is still up in the air and people interpret it as they see fit for their purposes), I think I can provide one good suggestion on how to determine when a certain product is NOT a Big Data product. Of course, it will depend on the definition of Big Data that you believe in.

I believe that Big Data is mostly about being “affordable at scale”, quoting Jeff Needham, my good friend and fellow member of the OakTable Network. In practice, that means commodity software, commodity hardware and commodity operations of the solution. I won’t define the thresholds of scale in terabytes or levels of complexity, etc., but I can provide some guidelines.

Talking about commodity hardware, it’s generally based on the x86 architecture (though some say ARM is emerging, it’s been emerging for way too long for my liking) with reasonably priced components. That would typically be dual-socket systems with up to a few hundred GB of RAM, maybe a dozen disks or some SSDs, and cost-effective networking. If we narrow down to Hadoop-like architectures, then a cluster node would typically cost between $4,000 and $10,000. Anything significantly above that is probably overpriced or over-spec’ed.

OK. Now that we are good with hardware, let’s look at software. Obviously, open-source software without any commercial support qualifies as commodity and affordable. If you are Facebook-scale (or getting relatively close), your commercial support can be your own large-scale, capable engineering team. Otherwise, you will most likely have commercial support. Back in the Hadoop world, you should expect to pay for a commercially supported Hadoop distribution (whichever of the three leading distributions it is — Cloudera, Hortonworks or MapR) the same order of magnitude as for the hardware itself. Annually, it would be a fraction of the hardware cost; over three years it would be about the cost of the hardware purchase, or slightly above, depending on the level of support and platform features. You get the idea. Non-open-source products licensed at similar pricing levels are Big Data products too — you don’t have to be open-source to call your technology Big Data.

Let’s take an example of a supposedly Big Data product. If a product has “Big Data” in the name, it surely must be a Big Data product. Eh?

I love quite a few Oracle products, so why don’t I look at their line-up… Big Data Appliance is a prebuilt Hadoop system, or Hadoop appliance, with 18 powerful data nodes per rack and a list price tag of $525K per rack. That gets you to almost $30K per data node, which is quite high, and you would likely not build your own clusters like that. Add to that about $100K per year of support and maintenance for systems and OS (you can check pricing in the public engineered systems price list). Big Data Appliance does include a commercially supported Cloudera distribution, so it might not be that terrible pricing-wise. If you have experience buying Oracle products, you also know that customers don’t pay list prices. Thus, I can accept that Big Data Appliance can actually be called a Big Data product… just.

Now let’s look at another product — Big Data SQL. It has been announced but hasn’t quite been released just yet (or did I miss it?). Awesome product, by the way. Great way to push some data-intensive SQL processing from Oracle Database down to Hadoop. Now, it’s probably not widely known (since it hasn’t really been publicly released and sold yet) that Big Data SQL is licensed per disk spindle, at $4,000 per spindle list price. Add to that the typical 22% annual software support and maintenance from Oracle. If I were to license Big Data SQL for a 100-node Hadoop cluster with 12 disks per node, it would cost me almost $5M at list price. Don’t forget to add 22% annually. That is an order of magnitude more than I would spend on the hardware to build such a cluster. But wait, it looks like Big Data SQL only works with Big Data Appliance. Even in that case, the cost of Big Data SQL for a single-rack appliance is $864K + 22% annually, and that’s just one additional tool for your Big Data platform.
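
To make the arithmetic explicit, here is a quick back-of-the-envelope check of the list prices quoted above (my own sketch, not Oracle’s figures; it runs in SQL*Plus or anything else that can query dual):

select 100 * 12 * 4000        as license_list_price,   -- 100 nodes x 12 spindles x $4,000 list
       100 * 12 * 4000 * 0.22 as annual_support,       -- plus 22% support per year
       525000 / 18            as bda_per_node,         -- $525K BDA rack spread over 18 data nodes
       18 * 12 * 4000         as bds_per_bda_rack      -- Big Data SQL on a single BDA rack
from   dual;

LICENSE_LIST_PRICE ANNUAL_SUPPORT BDA_PER_NODE BDS_PER_BDA_RACK
------------------ -------------- ------------ ----------------
           4800000        1056000   29166.6667           864000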

Based on what I know about Big Data SQL (and assuming it works as advertised when released), I love it — push code to data, scalable massively parallel processing, leveraging great features from the Exadata Storage software. Great job to the folks who developed this product. Unfortunately, I cannot call it a Big Data product — it’s not affordable at scale.

So when you look at other vendors calling their products Big Data — do this costing assessment, and if it doesn’t come out as affordable at scale then it’s not a Big Data product. And feel free to share your assessments with the rest of us here. I’m sure not everyone will share my line of thinking either. Fire away.

New 12c Default: Controlfile Autobackup On – But only for Multitenant

This is a little discovery from my present Oracle Database 12c New Features course in Copenhagen: the default setting for Controlfile Autobackup has changed to ON – but only for Multitenant, apparently:

$ rman target sys/oracle_4U@cdb1

Recovery Manager: Release 12.1.0.1.0 - Production on Wed Sep 24 13:28:39 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

connected to target database: CDB1 (DBID=832467154)

RMAN> select cdb from v$database;

using target database control file instead of recovery catalog
CDB
---
YES

RMAN> show controlfile autobackup;

RMAN configuration parameters for database with db_unique_name CDB1 are:
CONFIGURE CONTROLFILE AUTOBACKUP ON; # default

Above you see the setting for a container database (CDB). Now an ordinary (Non-CDB) 12c Database:

$ rman target sys/oracle_4U@orcl

Recovery Manager: Release 12.1.0.1.0 - Production on Wed Sep 24 13:33:27 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ORCL (DBID=1386527354)

RMAN> select cdb from v$database;

using target database control file instead of recovery catalog
CDB
---
NO

RMAN> show controlfile autobackup;

RMAN configuration parameters for database with db_unique_name ORCL are:
CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default

I really wonder why we have this difference! Is that still so with 12.1.0.2? Don’t believe it, test it! :-)
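
If you want to test this on your own release (12.1.0.2 or later), a minimal sketch along the same lines as the sessions above (the connection string is just a placeholder):

$ rman target sys/your_password@your_db

RMAN> show controlfile autobackup;

RMAN> configure controlfile autobackup on;     # set it explicitly rather than relying on the default
RMAN> configure controlfile autobackup clear;  # or return to the release default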

Tagged: 12c New Features, Backup & Recovery, Multitenant, RMAN

Oracle Open World schedule

Here is my schedule of presentations at Open World this coming week.

I’m most excited about my talk on Sunday. The Phoenix Project is an awesome book everyone should read, and DevOps is the rising star of IT. This talk will empower DBAs to take their careers to the next level with data virtualization: virtualize, govern and deliver data efficiently, boosting your company’s efficiency and bottom line more than almost any other change you could make as a DBA.


Lock Time

Here’s a little detail I was forced to re-learn yesterday; it’s one of those things where it’s easy to say “yes, obviously” AFTER you’ve had it explained, so I’m going to start by posing it as a question. Here are two samples of PL/SQL that use locking to handle a simple synchronisation mechanism; one uses a table as an object that can be locked, the other uses Oracle’s dbms_lock package. I’ve posted the code for each fragment, and a sample of what you see in v$lock if two sessions execute the code one after the other:

Table locking – the second session to run this code will wait for the first session to commit or rollback:


begin
        lock table t1 in exclusive mode;
end;
/

ADDR             KADDR                   SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK
---------------- ---------------- ---------- -- ---------- ---------- ---------- ---------- ---------- ----------
00007FF409E57BF8 00007FF409E57C58         15 TM     157778          0          0          6         65          0
00007FF409E57BF8 00007FF409E57C58        125 TM     157778          0          6          0         91          1

Using dbms_lock.

variable m_handle       varchar2(255);

declare
        n1              number;
begin

        dbms_lock.allocate_unique(
                lockname        => 'Synchronize',
                lockhandle      => :m_handle
        );

        dbms_output.put_line(:m_handle);

        n1 := dbms_lock.request(
                lockhandle              => :m_handle,
                lockmode                => dbms_lock.x_mode,
                timeout                 => dbms_lock.maxwait,
                release_on_commit       => true         -- the default is false !!
        );

        dbms_output.put_line(n1);

end;
/

ADDR             KADDR                   SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK
---------------- ---------------- ---------- -- ---------- ---------- ---------- ---------- ---------- ----------
000000008ED8F738 000000008ED8F790         15 UL 1073742427          0          0          6         42          0
000000008ED902B0 000000008ED90308        125 UL 1073742427          0          6          0        103          1
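
For reference, the two v$lock listings above can be reproduced with a query along these lines (my sketch; the post doesn’t show the exact query, and the TY heading is just a column format applied to the type column):

select  addr, kaddr, sid, type, id1, id2,
        lmode, request, ctime, block
from    v$lock
where   type in ('TM', 'UL')
order by
        sid
;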

The big question is this: although the two code fragments produce the same effects in terms of lock waits and the reports from v$lock, what’s the big difference in the way that they are reported in the AWR report?

The high-level difference appears in the Time Model stats. Here are two extracts showing the difference:

Using dbms_lock.


Statistic Name                                       Time (s) % of DB Time
------------------------------------------ ------------------ ------------
sql execute elapsed time                                 65.4         99.9
PL/SQL execution elapsed time                            63.8         97.4

-> Captured SQL account for    2.8% of Total DB Time (s):              65
-> Captured PL/SQL account for   99.4% of Total DB Time (s):              65

        Elapsed                  Elapsed Time
        Time (s)    Executions  per Exec (s)  %Total   %CPU    %IO    SQL Id
---------------- -------------- ------------- ------ ------ ------ -------------
            63.7              1         63.72   97.3     .0     .0 10u1qbw4a27sp
Module: SQL*Plus
declare n1 number; begin dbms_lock.allocate_unique
( lockname => 'Synchronize', lockhandle
 => :m_handle ); dbms_output.put_line(:m_handle); n
1 := dbms_lock.request( lockhandle => :m_handle,

Table locking method:

Time Model Statistics

Statistic Name                                       Time (s) % of DB Time
------------------------------------------ ------------------ ------------
sql execute elapsed time                                 95.5         99.9
DB CPU                                                    0.9           .9
parse time elapsed                                        0.1           .1
hard parse elapsed time                                   0.1           .1
PL/SQL execution elapsed time                             0.1           .1

SQL ordered by Elapsed Time

-> Captured SQL account for   99.6% of Total DB Time (s):              96
-> Captured PL/SQL account for   98.7% of Total DB Time (s):              96

        Elapsed                  Elapsed Time
        Time (s)    Executions  per Exec (s)  %Total   %CPU    %IO    SQL Id
---------------- -------------- ------------- ------ ------ ------ -------------
            93.9              1         93.88   98.3     .0     .0 8apkdghttmndx
Module: SQL*Plus
begin lock table t1 in exclusive mode; end;

            93.9              1         93.88   98.3     .0     .0 29fwr53agvbc0
Module: SQL*Plus
LOCK TABLE T1 IN EXCLUSIVE MODE

The time spent waiting for the table lock is reported purely as SQL execution time in the Time Model stats; but the time spent waiting for the user-defined lock is reported as SQL execution time AND as PL/SQL execution time. I had forgotten this yesterday so, as I skipped through the various headline figures of an hourly snapshot, I was amazed to see the Time Model stats reporting 33,000 seconds of PL/SQL and 66,000 seconds of SQL – how on earth do you manage to do that much PL/SQL on any Oracle system? (To add to the embarrassment, it had only been a few moments earlier that I’d gone through the Top 5 Timed Events and said something like: “you can ignore all the time spent on ‘enq: UL – contention’, it’s probably just synchronisation code”.)

In this case the SQL ordered by Elapsed Time gives you a strong visual clue about what’s going on – but it won’t always be that obvious.

Bottom Line:

PL/SQL execution time includes the time spent waiting for UL locks, so don’t forget that you may need to subtract the wait time for ‘enq: UL – contention’ from the PL/SQL time before you start to worry about how much actual work you’re doing in PL/SQL.
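
As a rough cross-check (my own sketch; note these are instance-level figures since startup, so for an AWR interval you would difference the snapshots in dba_hist_system_event instead), you can see how much time has gone into UL waits with something like:

select  event, total_waits,
        round(time_waited_micro/1e6) as seconds_waited
from    v$system_event
where   event = 'enq: UL - contention'
;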

 

#Oracle Certification: Always go for the most recent one!

I quite often encounter attendees in my Oracle University courses who strive to become OCP, or sometimes even OCM, asking me whether they should go for an older version’s certificate before they take on the most recent one. The reasoning behind those questions is mostly that it may be easier to do it with the older version. My advice is then always: go for the most recent version! No Oracle Certification exam is easy, but the older version’s certificate is already outdated, and the now most recent one will become outdated sooner than you may think :-)

OCP 12c upgrade

For that reason I really appreciate the option to upgrade from 9i/10g/11g OCA directly to 12c OCP, as discussed in this posting. There is just no point in becoming a new 11g OCP now that 12c is there, in my opinion. What do you think?

Tagged: Oracle Certification

Delphix trial version available for first time : #CloneAttack


 

Update:

Delphix is now hosting a community board for the hands-on lab called Landshark.

——

At Oracle Open World we will be leading hands-on labs to install a fully operational Delphix environment on your laptop.

The labs will be hosted at the Creativity Museum, the same location as Oaktable World, on Tuesday Sept 30 from 10am to 5pm. SolarWinds Confio and DBvisit will also be holding #MonitorAttack and #RepAttack in the same location.

The labs for Delphix will be led by

  • Kyle Hailey (@kylehhailey)
  • Steve Karam (@OracleAlchemist)
  • Adam Bowen (@CloudSurgeon)
  • Ben Prusinski (@benprusinski)
  • Tim Gorman (@timothyjgorman)

We will provide automated install scripts along with 3 OVA files for the following VMs:

  • Linux Source (CentOS 6.5), 1.8 GB OVA file, 19 GB installed
    • Oracle 11gR2 datafiles
  • Linux Target (CentOS 6.5), 1.2 GB OVA file, 4 GB installed
  • Delphix 4.1.2, 3.1 GB OVA file, 10 GB installed
    • Oracle timeflow of a Linux Employee WebDB
    • Application timeflow of a Linux Employee WebApp

These VMs each require 2 GB of memory and 1 vCPU.

Prerequisites

  • Pre-download the Oracle 11.2.0.1 install zips for “Linux x86-64” from the Oracle site
  • Mac: VMware Fusion (free trial version download)
  • Win: VMware Workstation (free trial version download)
  • Win: Cygwin download
  • at least 8 GB RAM
  • at least 50 GB free disk space, but preferably 100 GB free
  • at least 2 GHz CPU, preferably dual-core or better

Steps

To get up and running you will need VMware Fusion for Mac or VMware Workstation for Windows. We will provide these files if you need them, and installation is a double click. You should bring the Oracle 11gR2 (11.2.0.1) install zips. With these zips, our install will automatically install Oracle on the Target and Source machines and start up a database on the source machine. From there you can make virtual databases in minutes from the source, and roll back, refresh etc. those virtual databases. The install will come on a USB stick, and from there all you have to do is:

  1. Import the OVAs into VMware (right click on the OVA, choose “Open with VMware Fusion”)
  2. Note the IP address on the boot screen of each VM (via the console)
  3. Download the Oracle 11.2.0.1 install for “Linux x86-64”
  4. Execute, from the directory containing the Oracle install zip files (see the sketch after this list):
    • On Mac: lsde_install_mac_v1.2.sh
    • On PC: lsde_install_v1.2.sh
    • On Linux: lsde_install_v1.2.sh
  5. Log in to the Delphix Admin portal
  6. Go to Manage->Environments
  7. Choose the appropriate environment and change the Host Address to the IP address noted in step 2
  8. Refresh the environment
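
As an illustration of step 4 on a Mac (the directory and zip file names below are my assumptions; use whatever is actually on the lab USB stick):

$ cd /path/to/cloneattack_usb        # directory holding the install script and the 11.2.0.1 zips
$ ls
linux.x64_11gR2_database_1of2.zip  linux.x64_11gR2_database_2of2.zip  lsde_install_mac_v1.2.sh
$ sh ./lsde_install_mac_v1.2.sh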

Once installed we will have a set of exercises that will show you some of the power of virtualized data.

Reference


 

Here is a video showing the lab setup. A later video will show some exercises we can do with the lab setup.

Here is a video of an exercise to do on the lab VMs after install

 

Changing my focus?

I leave for Oracle OpenWorld 2014 tomorrow, which is normally a time when I’m super amped up, but I feel rather flat and in need of a change at the moment. I just feel like I’m investing a massive amount of my time on things I don’t enjoy and that are not really benefiting me or the community at large, so what’s the point?

What do I enjoy?

  • Playing with technology.
  • Writing articles and blog posts about it.
  • Meeting and talking to people about this stuff.

What don’t I enjoy?

  • I don’t really enjoy answering questions on forums any more. For every one good interaction I have, there seems to be about 10 where people refuse to think for themselves and treat me as their own personal slave.
  • I’m getting an increasing number of people asking me questions on Facebook and Google+ chat. Not only does this have a similar ratio of good:bad interactions, but there is nothing left behind that helps other people. There is no content produced that allows others to answer their own questions from a Google search. This feeds into the dependency cycle.
  • Cleaning up the massive amounts of spam I get.
  • Dealing with the endless stream of people wanting me to help advertise their product or website.

This is not a business for me. It’s fun. When it ceases to be fun, what’s the point?

At the moment, I feel like:

  • Locking the forum.
  • Locking the article comments.
  • Blocking the comments on my blog.
  • Deleting all emails that include technical questions, without even reading them.
  • Ignoring requests for help on all social media.

This would allow me to spend a lot more time playing with Oracle and writing articles, which I feel benefits myself and others to a greater extent than what I’m doing now.

If I look at my web stats, they seem to back up my opinion. The vast majority of hits on my website are for content. Only a very small proportion relate to “interaction”, yet it is this interaction which takes up the vast majority of my time.

I guess some people would see this as going against the whole ethos of the ACE Program, but I don’t think so myself. I think I’m of more use to the community putting out content than doing what I’m doing now.

Maybe I’ll come back from OOW and have a change of heart. Maybe not. Time will tell.

Cheers

Tim…

Update: Some people have taken this post as a message that I’m going to stop writing articles. I think the post is pretty clear on that front. This is about giving me more time to write articles. :)

Update 2: The forums are locked while I’m at OOW. I’ll decide if they will stay locked once I get back to the UK.


Changing my focus? was first posted on September 23, 2014 at 12:32 pm.