Oakies Blog Aggregator

Oracle OpenWorld Rejections : #TeamRejectedByOracleOpenWorld

Once again it is Oracle OpenWorld paper rejection season. :)

Invariably, we conference types start to have a bit of a moan about being rejected, hence my little jibe #TeamRejectedByOracleOpenWorld. In case people lost sight of this being a joke, here was one of my tweets on the subject.

“Setting up a helpline for #TeamRejectedByOracleOpenWorld to deal with all us people who can’t cope with not being heard for 5 mins. :)”

The reaction to these tweets is quite interesting, because some in the community are stunned by the people getting rejected. In reality it shouldn’t be a surprise to anyone. Jonathan Lewis summed the situation up nicely with the following tweet.

“You’re confusing OOW with a user group event. Different organisations, reasons, and balance”

If I’m honest, presenting is not high on my list of desires where OpenWorld is concerned. There is too much to do anyway, without having to worry about preparing for talks. If someone asks me to get involved in a panel session, RAC Attack or some similar thing I’m happy to help out, but if I do none of the above, I will still be rushed off my feet for a week.

The Oracle ACE program is allocated a few slots each year. Some people need to present or their company won’t allow them to attend. Others want the “profile” associated with presenting at OpenWorld. Neither of these things affects me, so I typically don’t submit for the ACE slots. I would rather see them go to someone who really does want them. I get plenty of opportunities to speak. :)

If you really want to speak at conferences, your focus should be on user group events. Getting to speak at something like OOW can be a nice treat for some people, but it probably shouldn’t be your goal. :)

Cheers

Tim…


Oracle OpenWorld Rejections : #TeamRejectedByOracleOpenWorld was first posted on July 11, 2015 at 10:50 am.

YouTube : Rags to Riches in 1 Week?

If you’ve followed me on Twitter you will have seen me posting links to videos on my YouTube channel. You can see me talking about starting the channel in the first video.

One week and five videos in, and I’ve just hit 50 subscribers. Watch out PewDiePie!

One thing I didn’t mention in that video was my hopes/expectations as far as subscribers are concerned. As I said in one of my writing tips posts, Oracle is a niche subject on the internet. If you put out some half-decent content on a YouTube gaming or fitness channel, you would probably expect to get a few thousand subscribers fairly quickly. That’s not the case for an Oracle channel. Before I started this YouTube channel I did a little research, and the biggest Oracle-related channel I could find had about 30,000 subscribers, and that was Oracle’s main YouTube channel. After that, some were knocking around 1,000-4,000 subscribers. Below that were a bunch of channels that were pulling double or triple figures of subscribers. Starting an Oracle-related channel is *not* a good idea if your master plan is to dominate YouTube! :)

OK. With that bullshit out of the way, how have I found my first week?

  • Making YouTube videos is hard! It takes a long time. I’m estimating about one hour of effort per minute of footage. The first 3-minute video took 3 days, but that included learning the technology and getting to grips with editing. Hopefully I’ll get a bit slicker as time goes on. :)
  • Doing the vocal for a video is a nightmare. After a certain number of retakes your voice ends up sounding so flat you start to wonder if you are going to send people to sleep. I listen back to my voice on some of the videos and it makes me cringe. It’s monotone and devoid of personality (insert insult of your choice here). As I get better at the recording thing, I’m hoping the number of retakes will reduce and my vocal will sound less like I’m bored shitless. :)
  • I love the fact I can do “quick” hit-and-run videos and not feel guilty about not including every detail. I’m putting links back to my articles, which contain more detail and most importantly links back to the documentation, so I’m happy that these videos are like little tasters.
  • I’m being a bit guarded about the comments section at the moment. When I look at other channels, their comments are full of spam and haters. I can’t be bothered with that. I’ll see how my attitude to comments develops over time.
  • I’m hoping to do some beginner series for a few areas, which I will build into playlists. This probably won’t be of much interest to regular followers of the blog and website, but it will come in handy for me personally when I’m trying to help people get started, or re-skilled into Oracle. I might be doing some of that at work, hence my interest. :)
  • I’ve tried to release a burst of videos to get the thing rolling, but I don’t know how often I will be able to upload in future. Where Oracle is concerned, the website is my main priority. Then the blog. Then the conference thing. Then YouTube. The day job and general life have to fit in around that somewhere too. This is always going to be a labour of love, not a money spinner, so I have to be realistic about what I can achieve.

So there it is. One week down. Five videos. Four cameos by other members of the Oracle community. Superstardom and wealth beyond my wildest dreams are just around the corner… Not!

Cheers

Tim…

Note to self: Why is this a blog post, not another video? :(


YouTube : Rags to Riches in 1 Week? was first posted on July 11, 2015 at 7:58 am.

SLOB 2.3 User Guide

SLOB 2.3 is releasing within the next 48 hours. In case anyone wants to read about all the new features, here is a link to the SLOB 2.3 User Guide:

SLOB 2.3 User Guide (pdf)

 

Filed under: oracle

VirtualBox 5.0

Oracle VirtualBox 5.0 has been released. You can see the Oracle Virtualization Blog announcement here, which includes a link to the official announcement.

Downloads and changelog in the normal places.

I’m downloading… Now!

Cheers

Tim…

Update: Up and running on my Windows 7 PC at work. Will have to wait until tonight to do it on the Mac and Linux boxes at home… :)

Update 2: Running fine on Mac too. :)


VirtualBox 5.0 was first posted on July 9, 2015 at 4:02 pm.

Oracle Security and Electronics

How do Oracle Security and Electronics mix together? - Well I started my working life in 1979 as an apprentice electrician in a factory here in York, England where I live. The factory designed and built trains for the national....[Read More]

Posted by Pete On 09/07/15 At 11:24 AM

Ten Things in Enterprise Manager to Improve a DBA’s Sleep at Night

Happy Birthday to me!  So for my birthday, I give a present to you…  As I want all DBAs to sleep better at night, here are the top ten features you can use in Enterprise Manager Cloud Control to get a good night’s rest at home instead of napping at your desk during the day… :)


1.  Disable the Default Rule Sets Shipped with Cloud Control.

Yes, you heard me.  I believe you should use them as a starting point or an example, but don’t put them into production.  These were examples created by development to show everything you could be notified about, but what you need to be woken up for is anything mission critical that will SUFFER an outage if you DON’T respond.  Anything that can wait till the morning SHOULD wait till the morning.


Make copies of the default rules and disable the originals.  Plan on making as many copies and edits as necessary to ensure that you are only notified for the targets, life cycle status and line of business that YOU are responsible for keeping up and available to the business.

2.  Implement Monitoring Templates and Default for Important Target Types.

Monitoring templates ensure that you are monitoring each target in the same way and with the same metric thresholds.  This means you start with metric thresholds that make sense for the target type and can be applied to all targets of that type.  Creating a monitoring template is easy: set up one target as an example and use it as the source of your template.

3.  Use Metric Thresholds for Individual Targets and Set Them to Not Be Overridden by Monitoring Templates

Now this might sound like a complete 180 from #2 on this list, but it’s not.  Just like #1, it’s about breaking down and specializing for unique targets that have unique challenges.  This means, if you have a target backup drive that fills up to 97% each night, you shouldn’t be woken up for it.  This is expected behavior, so you can set either a static threshold specific to this target or an adaptive threshold, and have it not be overridden by the monitoring template for this target ONLY.

4.  Utilize Administration Groups

admin_grp1

Administration Groups offer advanced features and scalability to your Cloud Control environment that standard groups, and to a lesser extent Dynamic groups, do not.  Line of business and life cycle management features let you break down notification groups, rule sets and more, while advanced capabilities such as Database as a Service allow you to do more with less.  The natural life of a database environment is one of growth, so thinking ahead one, five and ten years is a great way to add value to the business as a database administrator.

5.  Create Metric Extensions for Unique Monitoring Scenarios

Enterprise Manager 12c is a self-service product.  Often there is a unique situation that the business needs monitored, or the DBA notices something that creates an issue or outage, but there is no default metric for it in EM12c.  It’s easy enough to create a metric extension and take the concern and worry out of the situation, creating more value for the business.
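
For example, a metric extension with a SQL adapter just needs a query that returns the value you want to evaluate.  As a purely hypothetical sketch (the metric name and thresholds are whatever you define in the metric extension wizard), a query that counts sessions stuck behind a blocker might look like this:

-- hypothetical SQL adapter query for a metric extension:
-- number of sessions that have been blocked for over a minute
select count(*) as blocked_sessions
from   v$session
where  blocking_session is not null
and    seconds_in_wait > 60;

Set warning and critical thresholds against the returned column and EM12c will raise incidents from it like any built-in metric.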

6.  Add Corrective Actions, (Jobs)

Often when a problem occurs, a DBA has a simple shell script or SQL statement that corrects it.  If this is the case, why not have Cloud Control monitor for the issue, create an incident in the Incident Manager, send an email, then run the SQL or script as a Corrective Action?  The DBA will still know the problem occurred the next morning, but no one needs to be woken up to do what can be automated in the system.
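
For example, if the usual 3am fix is simply to add space, the corrective action can run SQL along these lines (a sketch that assumes Oracle Managed Files via db_create_file_dest and a hypothetical APP_DATA tablespace; adjust for your storage layout):

-- hypothetical corrective action: extend the tablespace that raised the alert
alter tablespace app_data
  add datafile size 1g autoextend on next 256m maxsize 4g;

The incident and the email still happen, so the DBA knows about it the next morning, but the fix has already run.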


7.  Use Patch Plans and Automate Patching

I understand, really.  Something could somehow, somewhere, some rare time go wrong, but the patch plans you can create in Enterprise Manager are surprisingly robust and full featured.  If you’re still patching the old-fashioned way instead of using the more automated, global patch plans, you’re wasting time, and let’s face it: DBAs rarely have time to waste.  You are a resource that could be used for more important tasks, and quarterly PSU patching is just not one of them.

8.  Centralize Database Jobs to EM Jobs

The common environment is structured with multiple DBAs, often with one DBA as primary on a database environment and the others playing catch-up to figure out how the primary has it set up.  My favorite DBA to work with once told me, “Kellyn, love your shell scripts.  They make the world go ‘round.  I just don’t want to try to figure out how you write shell at 3am, or what kind of scheduler is used on all the OS’s you support!”  I realized I owed it to him to centralize all my environments with an interface that made it easy for ANYONE to manage.  No one had to look at cron, the task scheduler or a third-party scheduling tool anymore.  Everything was in Enterprise Manager, and no matter what the operating system, it all looked very similar, with the logs in the same place, found in the same tab of the UI.  Think about it: this is one you do for the team, so move those jobs inside Enterprise Manager, too…

9.  Put Compliance Framework into Place

Compliance is one of those things that seems a mystery to most.  I’m often asked whether environments really need it and whether it makes sense.  It can seem overwhelming at first, but knowing which database environments, hosts and so on are out of compliance helps you ensure that business best practices are in place: you have a baseline of compliance standards for configuration settings, installations and real-time monitoring, all viewable globally via EM12c.

10.  Plug-ins to Offer a Single Pane of Glass View

A database is a database, or that’s how the business sees it.  I have almost as many years in SQL Server as I do in Oracle, and I’ve worked in Sybase, Informix, Postgres and MySQL.  After being hired for my Oracle DBA skills in every job I’ve held, it never failed: within 6 weeks a mission-critical database environment on a secondary database platform was discovered, implemented by a group often outside of IT, that now needed critical support.  Enterprise Manager offers plug-ins to support all of the above database platforms and more.  It offers plug-ins for engineered systems, storage arrays and other hardware that the DBA is now expected to manage, too.  Why manage all of this from multiple systems when you can easily create a single pane to ensure you’re covered?

So there you have it, my top ten list.  There are, of course, hundreds of other great features in EM12c, but make sure you are taking advantage of the ones on this list!





Copyright © DBA Kevlar [Ten Things in Enterprise Manager to Improve a DBA’s Sleep at Night], All Rights Reserved, 2015.

PK Index

Here’s one of those little details that I might have known once, or maybe it wasn’t true in earlier versions of Oracle, or maybe I just never noticed it and it’s “always” been true; and it’s a detail I’ll probably have forgotten again a couple of years from now.  Consider the following two ways of creating a table with a primary key:


Option 1:

create table orders (
        order_id        number(10,0) not null,
        customer_id     number(10,0) not null,
        date_ordered    date         not null,
        other_bits      varchar2(250),
--      constraint ord_fk_cus foreign key(customer_id) references customers,
        constraint ord_pk primary key(order_id)
)
tablespace TS_ORD
;

Option 2:

create table orders (
        order_id        number(10,0) not null,
        customer_id     number(10,0) not null,
        date_ordered    date         not null,
        other_bits      varchar2(250)
)
tablespace TS_OP_DATA
;

alter table orders add constraint ord_pk primary key(order_id);

There’s a significant difference between the two strategies (at least in 11.2.0.4, I haven’t gone back to check earlier versions): in the first form the implicit primary key index is created in the tablespace of the table, in the second form it’s created in the default tablespace of the user. To avoid the risk of putting something in the wrong place you can always add the “using index” clause, for example:


alter table orders add constraint ord_pk primary key (order_id) using index tablespace TS_OP_INDX;
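
You can confirm where the implicit index actually landed by querying the data dictionary:

select index_name, tablespace_name
from   user_indexes
where  table_name = 'ORDERS';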

Having noticed / reminded myself of this detail I now have on my todo list a task to check the equivalent behaviour when creating partitioned (or composite partitioned) tables – but that’s a task with a very low priority.
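
If anyone wants to run that check before I get to it, a minimal test would be to repeat the experiment with a partitioned table and query the dictionary afterwards; something like the following (names invented for the example, outcome unverified):

create table orders_part (
        order_id        number(10,0) not null,
        date_ordered    date         not null,
        constraint ordp_pk primary key(order_id)
)
tablespace TS_ORD
partition by range (date_ordered) (
        partition p_old values less than (to_date('2015-01-01','yyyy-mm-dd')),
        partition p_max values less than (maxvalue)
);

select index_name, tablespace_name from user_indexes where table_name = 'ORDERS_PART';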

Oracle Midlands : Event #10 Summary

Last night was Oracle Midlands Event #10 with Jonathan Lewis.

The first session was on “Five Hints for Optimizing SQL”. The emphasis was very much on “shaping the query plan” to help the optimizer make the right decisions, not trying to determine every single join and access structure etc.

In the past I’ve seen Jonathan do sessions on hints, which made me realise how badly I was using them. As a result of that I found myself a little scared by them and gravitating to this “shaping” approach, but my version was not anywhere near as well thought out and reasoned as Jonathan’s approach. It’s kind-of nice to see I was on the right path, even if my approach was the mildly pathetic, infantile version of it. :)

The break consisted of food, chatting and loads of prizes. It’s worth coming even if you don’t want to see the sessions, just to get a chance of winning some swag. :) Everyone also got to take home a Red Stack Tech mug, stress bulb and some sweets as well.

The second session was on “Creating Test Data to Model Production”. I sat there smugly thinking I knew what was coming, only to realise I had only considered a fraction of the issues. I think “eye opening” would be the phrase I would use for this one. Lots of lessons learned!

I must say, after nearly 20 years (19 years and 11 months) in the game, it’s rather disconcerting to feel like such a newbie. It seems to be happening quite a lot recently. :)

So that was another great event! Many thanks to Jonathan for taking the time to come and speak to us. Hopefully we’ll get another visit next year? Well done to Mike for keeping this train rolling. Wonderful job! Thanks to all the sponsors of the prize draw and of course, thanks to Red Stack Tech for their support, allowing the event to remain free! Big thanks to all the members of the Oracle Midlands family that came out to support the event. Without your asses on seats it wouldn’t happen!

The next event will be on the 1st September with Christian Antognini, so put it in your diary!

Cheers

Tim…


Oracle Midlands : Event #10 Summary was first posted on July 8, 2015 at 2:26 pm.

JSON support in Exadata 12.1.2.1.0 and later

Some time ago Oracle announced that RDBMS 12.1.0.2 has built-in support for JSON processing. A little later it was also mentioned that you have support for JSON in the Exadata storage servers for offloading. This is probably a lot more exciting to users of JSON than it is to me, as I’m not a developer. However, whenever an announcement such as this one is made, I like to see for myself how much of it is implemented in software. Like I said, I’m not a developer, so apologies for a silly example: what I’m showing you here can probably be done differently and is not the best use of an Exadata. But all I really wanted to test is whether JSON support actually exists. I am using cellsrv 12.1.2.1.0 and RDBMS 12.1.0.2.2 for this test.

JSON

I have to say I struggled a little bit to understand the use case for JSON and therefore did what probably everyone does and consulted the official documentation and oracle-base.com for Tim’s views on JSON. Here’s a summary of links I found useful to get started:

The Test

OK, so that was enough to get me started. I needed data, and a table to store it in. It appeared to me that an Apache log could be a useful source of JSON records, so I converted my webserver’s log file to JSON using libee0 on OpenSuSE (yes I know, but it’s a great virtualisation platform). The converted file was named httpd_access_log.json and had records such as these:

{"host": "192.168.100.1", "identity": "", "user": "", "date": "05/Feb/2015:12:13:05 +0100", "request": "HEAD /ol70 HTTP/1.1", "status": "404", "size": "", "f1": "", "useragent": "Python-urllib/2.7"}
{"host": "192.168.100.1", "identity": "", "user": "", "date": "05/Feb/2015:12:13:25 +0100", "request": "GET / HTTP/1.1", "status": "403", "size": "989", "f1": "", "useragent": "Mozilla/5.0 (X11; Linux x86_64; rv:35.0) Gecko/20100101 Firefox/35.0"}

Sorry for the wide output-it’s an Apache log…

I then created the table to store the data. JSON appears to be pretty unstructured, so this will do:

SQL> create table jsontest (id number,
  2   jdata clob,
  3   constraint jsontest check (jdata is json)
  4  )
  5  lob (jdata) store as securefile (
  6   enable storage in row
  7  );

SQL> create sequence s_jsontest;

Sequence created

If you look closely then you’ll see that the JSON data is stored in an inline CLOB-that’s one of the pre-requisites for offloading LOBs in 12c.

Loading JSON

Now I needed a way to get the data into the table. I think I could have used SQLLDR but since I have rusty perl scripting skills I gave DBD::Oracle on 12.1.0.2 a go. The following script inserts records slow-by-slow or row-by-row into the table and is probably not the most efficient way to do this. But one of the reasons I blog is so that I don’t have to remember everything. If you ever wondered how to write a DBI/DBD::Oracle script here’s a starting point. Note the emphasis on “starting point” since the script has been trimmed for readability-essential error checking is not shown. Whenever you work with data make sure that your error handling is top-notch!

#!/usr/bin/perl

use strict;
use warnings;

use DBI;
use DBD::Oracle;
use Getopt::Long;

# these will be set by GetOpt::Long
my $service;            # has to be in tnsnames.ora
my $username;
my $jsonfile;

GetOptions (
  "service=s"   => \$service,
  "user=s"      => \$username,
  "jsonfile=s"  => \$jsonfile
);
die "usage: load_json.pl --service <service> --jsonfile <file> [--user <username>]\n" if (!defined ($service) || !defined ($jsonfile));

die "$jsonfile is not a file" unless ( -f $jsonfile );

print "connecting to service $service as user $username to load file $jsonfile\n";

# about to start...
my $dbh = DBI->connect ("dbi:Oracle:$service", "$username", "someCleverPasswordOrCatName")
  or die ("Cannot connect to service $service: DBI:errstr!");

print "connection to the database established, trying to load data...\n";

# prepare a cursor to loop over all entries in the file
my $sth = $dbh->prepare(q{
 insert into jsontest (id, jdata) values(s_jsontest.nextval, :json)
});

if (! open JSON, "$jsonfile")  {
  print "cannot open $jsonfile: $!\n";
  $dbh->disconnect();
  die "Cannot continue\n";
}

while (<JSON>) {   # read the JSON file line by line
  chomp;
  $sth->bind_param(":json", $_);
  $sth->execute();
}

$dbh->disconnect();

close JSON;

print "done\n";

This script reads the file and inserts all the data into the table. Again, essential error checking must be added; the script is far from complete. You also need to point the Perl environment variables at the perl installation in $ORACLE_HOME for it to find the DBI and DBD::Oracle drivers.

Offloading or not?

It turned out that the data I inserted was of course not enough to trigger a direct path read that could turn into a Smart Scan, so a little inflation of the table was needed first. Something along these lines does the job (a sketch; repeat until the segment is big enough to trigger direct path reads):
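
-- each execution doubles the row count; a direct path insert
-- must be committed before the table can be read again
insert /*+ append */ into jsontest
select s_jsontest.nextval, jdata from jsontest;

commit;

Once the table was big enough I started to get my feet wet with JSON queries: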

SQL> select jdata from jsontest where rownum < 6;

JDATA
--------------------------------------------------------------------------------
{"host": "192.168.100.1", "identity": "", "user": "", "date": "05/Feb/2015:12:26
{"host": "192.168.100.156", "identity": "", "user": "", "date": "05/Feb/2015:12:
{"host": "192.168.100.156", "identity": "", "user": "", "date": "05/Feb/2015:12:
{"host": "192.168.100.156", "identity": "", "user": "", "date": "05/Feb/2015:12:
{"host": "192.168.100.156", "identity": "", "user": "", "date": "05/Feb/2015:12:

Interesting. Here are a few more examples with my data set. Again, refer to oracle-base.com and the official documentation set for more information about JSON and querying it in the database; it’s by no means an Exadata-only feature.

SQL> select count(*) from jsontest where json_exists(jsontest.jdata, '$.host' false on error);

  COUNT(*)
----------
   2195968

SQL> select count(*) from jsontest where not json_exists(jsontest.jdata, '$.host' false on error);

  COUNT(*)
----------
         0

And finally, here is proof that you can offload JSON data in Exadata; at least for some of the operations it should be possible, judging by the information in v$sqlfn_metadata:

SQL> select name,offloadable from v$sqlfn_metadata where name like '%JSON%'
  2  order by offloadable,name;

NAME                                               OFF
-------------------------------------------------- ---
JSON_ARRAY                                         NO
JSON_ARRAYAGG                                      NO
JSON_EQUAL                                         NO
JSON_OBJECT                                        NO
JSON_OBJECTAGG                                     NO
JSON_QUERY                                         NO
JSON_SERIALIZE                                     NO
JSON_TEXTCONTAINS2                                 NO
JSON_VALUE                                         NO
JSON                                               YES
JSON                                               YES
JSON_EXISTS                                        YES
JSON_QUERY                                         YES
JSON_VALUE                                         YES

14 rows selected.

The two entries named “JSON” are most likely “is JSON” and “is not JSON”.
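
These correspond to the “is json” condition used in the check constraint earlier, which can also be used directly in a query:

select count(*) from jsontest where jdata is json;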

And now with real data volumes on a real system using JSON_EXISTS:

SQL> select /*+ monitor am_I_offloaded */ count(*)
  2  from jsontest where json_exists(jsontest.jdata, '$.host' false on error);                                                                                        

   COUNT(*)
-----------
    2195968

Elapsed: 00:00:04.96

SQL> select * from table(dbms_xplan.display_cursor);                                                                                                        

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------
SQL_ID  6j73xcww7hmcw, child number 0
-------------------------------------
select /*+ monitor am_I_offloaded */ count(*) from jsontest where
json_exists(jsontest.jdata, '$.host' false on error)

Plan hash value: 568818393

---------------------------------------------------------------------------------------
| Id  | Operation                  | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |          |       |       | 91078 (100)|          |
|   1 |  SORT AGGREGATE            |          |     1 |   610 |            |          |
|*  2 |   TABLE ACCESS STORAGE FULL| JSONTEST | 21960 |    12M| 91078   (1)| 00:00:04 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - storage(JSON_EXISTS2("JSONTEST"."JDATA" FORMAT JSON , '$.host' FALSE ON
              ERROR)=1)
       filter(JSON_EXISTS2("JSONTEST"."JDATA" FORMAT JSON , '$.host' FALSE ON
              ERROR)=1)

So the execution plan looks promising-I can see “table access storage full” and a storage() predicate. Looking at V$SQL I get:

SQL> select sql_id, child_number,
  2  case when io_cell_offload_eligible_bytes = 0 then 'NO' else 'YES' end offloaded,
  3  io_cell_offload_eligible_bytes/power(1024,2) offload_eligible_mb,
  4  io_interconnect_bytes/power(1024,2) interconnect_mb,
  5  io_cell_offload_returned_bytes/power(1024,2) returned_mb,
  6  io_cell_offload_returned_bytes/io_cell_offload_eligible_bytes*100 offload_pct
  7   from v$sql where sql_id = '6j73xcww7hmcw';                                                                                                                                                   

SQL_ID        CHILD_NUMBER OFF OFFLOAD_ELIGIBLE_MB INTERCONNECT_MB RETURNED_MB OFFLOAD_PCT
------------- ------------ --- ------------------- --------------- ----------- -----------
6j73xcww7hmcw            0 YES         2606.695313     1191.731941 1191.724129 45.71781455

And to avoid any doubt, I have the SQL Trace as well:

PARSING IN CURSOR #140370400430072 len=120 dep=0 uid=65 oct=3 lid=65 tim=1784977054418 hv=1750582781 ad='5bfcebed8' sqlid='bfwd4t5n5gjgx'
select /*+ monitor am_I_offloaded */ count(*)   from jsontest where json_exists(jsontest.jdata, '$.host' false on error)
END OF STMT
PARSE #140370400430072:c=103984,e=272239,p=909,cr=968,cu=0,mis=1,r=0,dep=0,og=1,plh=568818393,tim=1784977054417
EXEC #140370400430072:c=0,e=105,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=568818393,tim=1784977054587
WAIT #140370400430072: nam='SQL*Net message to client' ela= 4 driver id=1650815232 #bytes=1 p3=0 obj#=96305 tim=1784977054666
WAIT #140370400430072: nam='reliable message' ela= 826 channel context=27059892880 channel handle=27196561216 broadcast message=26855409216 obj#=96305 tim=1784977055727
WAIT #140370400430072: nam='enq: KO - fast object checkpoint' ela= 159 name|mode=1263468550 2=65629 0=1 obj#=96305 tim=1784977055942
WAIT #140370400430072: nam='enq: KO - fast object checkpoint' ela= 229 name|mode=1263468545 2=65629 0=2 obj#=96305 tim=1784977056265
WAIT #140370400430072: nam='cell smart table scan' ela= 196 cellhash#=3249924569 p2=0 p3=0 obj#=96298 tim=1784977057370
WAIT #140370400430072: nam='cell smart table scan' ela= 171 cellhash#=822451848 p2=0 p3=0 obj#=96298 tim=1784977057884
WAIT #140370400430072: nam='cell smart table scan' ela= 188 cellhash#=674246789 p2=0 p3=0 obj#=96298 tim=1784977058461
WAIT #140370400430072: nam='cell smart table scan' ela= 321 cellhash#=3249924569 p2=0 p3=0 obj#=96298 tim=1784977061623
WAIT #140370400430072: nam='cell smart table scan' ela= 224 cellhash#=822451848 p2=0 p3=0 obj#=96298 tim=1784977062053
WAIT #140370400430072: nam='cell smart table scan' ela= 254 cellhash#=674246789 p2=0 p3=0 obj#=96298 tim=1784977062487
WAIT #140370400430072: nam='cell smart table scan' ela= 7 cellhash#=3249924569 p2=0 p3=0 obj#=96298 tim=1784977062969
WAIT #140370400430072: nam='cell smart table scan' ela= 25 cellhash#=822451848 p2=0 p3=0 obj#=96298 tim=1784977063016
WAIT #140370400430072: nam='cell smart table scan' ela= 81 cellhash#=674246789 p2=0 p3=0 obj#=96298 tim=1784977063115
WAIT #140370400430072: nam='cell smart table scan' ela= 1134 cellhash#=674246789 p2=0 p3=0 obj#=96298 tim=1784977065442
WAIT #140370400430072: nam='cell smart table scan' ela= 6 cellhash#=3249924569 p2=0 p3=0 obj#=96298 tim=1784977065883
WAIT #140370400430072: nam='cell smart table scan' ela= 14 cellhash#=822451848 p2=0 p3=0 obj#=96298 tim=1784977065917
WAIT #140370400430072: nam='cell smart table scan' ela= 105 cellhash#=674246789 p2=0 p3=0 obj#=96298 tim=1784977066037
WAIT #140370400430072: nam='cell smart table scan' ela= 12 cellhash#=674246789 p2=0 p3=0 obj#=96298 tim=1784977066207
WAIT #140370400430072: nam='cell smart table scan' ela= 6605 cellhash#=3249924569 p2=0 p3=0 obj#=96298 tim=1784977072866
WAIT #140370400430072: nam='cell smart table scan' ela= 27 cellhash#=674246789 p2=0 p3=0 obj#=96298 tim=1784977073877
WAIT #140370400430072: nam='cell smart table scan' ela= 29 cellhash#=3249924569 p2=0 p3=0 obj#=96298 tim=1784977074903
WAIT #140370400430072: nam='cell smart table scan' ela= 907 cellhash#=674246789 p2=0 p3=0 obj#=96298 tim=1784977077783
WAIT #140370400430072: nam='cell smart table scan' ela= 28 cellhash#=674246789 p2=0 p3=0 obj#=96298 tim=1784977078753
WAIT #140370400430072: nam='cell smart table scan' ela= 24 cellhash#=3249924569 p2=0 p3=0 obj#=96298 tim=1784977080860
WAIT #140370400430072: nam='cell smart table scan' ela= 1077 cellhash#=674246789 p2=0 p3=0 obj#=96298 tim=1784977082935
...

Summary

So yes, it would appear as if JSON is offloaded.

Become an #Oracle Certified Expert for Data Guard!

It is with great pride that I can announce that a new certification is available: Oracle Database 12c: Data Guard Administration.

We wanted this for years and finally got it, after putting much effort and expertise into the development of the exam. It is presently in beta and offered at a discount. Come and get it!

Tagged: Data Guard, Oracle Certification