Oakies Blog Aggregator

Announcing SLOB 2.2 : Think Time and Limited-Scope User-Data Modification

This is a hasty blog post to get SLOB 2.2 out to those who are interested.

In addition to doing away with the cumbersome “seed” table and procedure.sql, this kit introduces 5 new slob.conf parameters. By default these parameters are disabled.

This SLOB distribution does not require re-executing setup.sh. One can simply adopt the kit and use it to test existing SLOB databases. The following explains the new slob.conf parameters:

DO_UPDATE_HOTSPOT=FALSE

When set to TRUE, the modify (UPDATE) SQL will no longer affect random rows spanning each session’s schema. Instead, each session will modify only HOTSPOT_PCT percent of its data.

HOTSPOT_PCT=10

This parameter controls how much of each session’s schema gets modified when UPDATE_PCT is non-zero. The default will limit the scope of each session’s data modifications to a maximum of 10% of their data.

THINK_TM_MODULUS=0

When set to non-zero this is a frequency control on how often sessions will incur think time. For example, if set to 7, every seventh SQL statement will be followed by a sleep (think time) for a random amount of time between THINK_TM_MIN and THINK_TM_MAX. It’s best to assign a prime number to THINK_TM_MODULUS.

THINK_TM_MIN=.1

The low-bound for selection of a random period to sleep when THINK_TM_MODULUS triggers a think time event.

THINK_TM_MAX=.5

The high-bound for selection of a random period to sleep when THINK_TM_MODULUS triggers a think time event.
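
Putting the new parameters together, a hypothetical slob.conf fragment might look like the following. The values are illustrative only, not recommendations (UPDATE_PCT is the pre-existing parameter that controls how much of the workload is modify SQL):

UPDATE_PCT=20
DO_UPDATE_HOTSPOT=TRUE
HOTSPOT_PCT=10
THINK_TM_MODULUS=7
THINK_TM_MIN=.1
THINK_TM_MAX=.5

With these settings each session confines its modifications to a 10% hotspot, and every seventh SQL statement is followed by a random sleep of between 0.1 and 0.5 seconds; if the sleep is uniformly distributed, that averages out to roughly 0.3/7, or about 0.04 seconds of think time per statement.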

Notes About Think Time

The resolution supported for think time is hundredths of a second.

The following is a link to the SLOB 2.2 release tarball (md5: be3612c50d134636a56ef9654b5865c5):

https://my.syncplicity.com/share/5vmflakvyqbawsy/2014.12.05.slob_2.2.1.tar

The additional tarball (at the following link) has a slob.conf, simple.ora and awr.txt that show a way to have 256 sessions produce the following load profile (on a 2s16c32t E5 Xeon):
https://my.syncplicity.com/share/geydubw3q42okrt/think-time-help-files.tar

[Figure: load-profile-think-time]


my Time/Task/Goal/Habit (TTGH) management

There was a post last night on oracle-l on the topic of task/project/time management, and I’m a big fan of productivity hacks and the whole GTD (Getting Things Done) method: http://en.wikipedia.org/wiki/Getting_Things_Done. The reason for adopting this method is that I’ve been in the consulting business my entire career and I always get swamped with work from multiple customers and research, and it’s really difficult to prioritize and focus without a system. For the same reason I learned how to speed read (see Tony Buzan’s book): there’s just not enough time and you want to learn as much as you can. The bottom line is you want to be efficient.

So I thought of sharing my way of dealing with this in general…

Below is my post on oracle-l:

I would start with Tom Limoncelli’s Time Management for System Administrators. He’s got a video presentation here: http://youtu.be/XMc7jw38Bxs?t=2m48s, which became the foundation of how I manage my tasks.

Personally, I have goal, habit, task, and time trackers.

And there are two parts to this: having a system that works for you, and being able to instrument it.

1) You have to have a system for setting your goals and prioritizing your tasks; this is a great response from Quora: http://www.quora.com/Productivity/As-a-startup-CEO-what-is-your-favorite-productivity-hack/answer/Paul-A-Klipp?srid=n2Fg&share=1
  • Now I’m using mind maps for goal setting and kanbanflow.com for task management, which I really like because you can specify swimlanes + colors, making it more multidimensional than a regular calendar.
  • Before that I just used Google Calendar for tasks and “Goals on Track” as a goal/habit tracker, but I unsubscribed from that service and migrated the entire workflow to mind maps (I use FreeMind).
2) You have to have a way of instrumenting your time.
  • On my Windows VM, where I do all my work, I have ManicTime installed. This enables me to track everything I’m doing automatically, without any user input, and it can auto-tag applications: say I open PuTTY, the time I spend in that app will be tagged as “Work”, and I can see where my time went just by graphing the data: https://www.evernote.com/l/ADBlN746vCxDXJykSPwZMT4TFUMQ6xT9oVw
  • On my Mac host I have the free version of RescueTime. I like the weekly productivity percentage being emailed every week; usually I’m about 68%. If I go below that, it means I was pretty lazy that week; above it means I was pretty busy.
  • KanbanFlow, on the other hand, forces you to input your tasks plus the corresponding time you spent on them. So at the end of the day I export the data and graph it on my Tableau dashboard; I just need to open the dashboard I created and it will automatically read the new file. It looks like this: https://www.evernote.com/l/ADD5nUeDwrZLpoc87uhpsqdKeHeNvvMPJcI On that link you’ll see the entire workflow I have, from task management up to visualization.
Some of these may not work for you, but at least you’ll get the high-level idea. A couple of years back, early in my career, I was learning, coming up with my own systems, and using these tools; through the years I’ve kept improving and hacking them, and up until now that has helped me get things done (GTD) and stay motivated.

 

:)

OMS and OMR Performance- Part III, Metrics Page

Before heading off to UKOUG’s Tech 14 conference, I thought I would jump back from the Agents performance page and look into a very important page in the Oracle Management Service (OMS) and Oracle Management Repository (OMR): the one covering metrics.  Standard metric collection is demanding in itself, so when we add plugins, metric extensions, and manual metric collection changes, it’s important to know how these changes can impact the performance of the Enterprise Manager 12c (EM12c) Cloud Control environment.

We have returned to the same location in the EM12c console: Setup –> Manage Cloud Control –> Repository.  The Repository page has three tabs; we’re currently working with the middle tab, Metrics.

[Screenshot: metrics1]

Metric Collections and Rows via Bubble Map

We’ll jump into the first graph, which is a fantastic bubble graph (love seeing us use these more in EM12c console pages!).

[Screenshot: metrics2]

Note how the data is distributed across the graph: Number of Rows Loaded on the left side vs. Number of Collections along the bottom.  This information is important, as it shows that our heaviest hitters are the blue and orange circles.  If we hover our mouse over the blue circle, we get to see a few more details:

[Screenshot: metrics3]

We can now see that the blue circle represents the metric collections for cluster information, and we see not just the two graphing points but also the amount of data loaded, in MB.  We can use this information to make decisions on updating collection intervals to ease stress on the EM12c OMS and OMR.

If we then focus on the orange circle, we can view the same type of detail information:

[Screenshot: metrics4]

So there are a number more collections for the InfiniBand.  This is expected, as this is the network connectivity between the nodes of our engineered systems.  The number of rows is higher, too, but note that the MB of data is no more than what the EM12c had to handle for the cluster data being uploaded.  We can use this data to see whether the collection interval pressure justifies the impact on the OMS and OMR.  As we work through the rest of the data offered on this page, these are two important pain points to keep in the back of our mind: metric data uploaded vs. number of collections.

Investigating Bubble Map Performance Data

Now let’s say we want to dig deeper into the InfiniBand metric info that’s been shown here.  We can double-click on the orange circle in the graph and we’ll be taken to specific detail regarding this one metric, broken down by target and metric.

[Screenshot: metrics5]

Now we see the top 25 metrics’ data load information, not just for the parent metric target type but broken down by metric specifics.  We can quickly see that the Switch Port Configuration Monitor accounts for the most metric data.  As the data is compiled and uploaded as one collection, the bubbles are superimposed on top of each other.

If we switch to a Target view, a very different story is presented:

[Screenshot: metrics6]

Note that six different collection interval schedules are [most likely] displayed here.  In the center, you can see the interconnected bubbles superimposed on top of each other, but the large red bubble is of significant interest.  If we hover our cursor over this bubble:

[Screenshot: metrics7]

One SCAN listener (scan listener 3 for an Exalogic environment) is uploading more data, and more often, than the rest of the environment?  This is over 26% of the total metric impact for InfiniBand on the EM12c.  Investigating this, reviewing agent patch levels, and comparing collection intervals would be a good idea!

Classic View of Metric Data

For those who prefer a more standard graph of the data, the right-side graph displays the high-level data in a second format:

[Screenshot: metrics8]

You also have the option to display the top 15 or 20 metrics instead of the top 10.

Metric Alerts Per Day

Awareness of metric alerts per day can be valuable, especially when there are OMS, OMR, or agent patches missing!  I can commonly look at this graph and quickly tell if an environment has skipped applying important EM12c patches (these can be located in the master note Enterprise Manager Base Platform (All Releases) (Doc ID 822485.1)).

[Screenshot: metrics10]

Now you see the breaks in the graph and may wonder what’s up with that: this environment has been patched and was down for quarterly maintenance.  We can see this when we click on the Table View link and see the Unavailable sections:

[Screenshot: metrics12]

This also gives you quick access to the recent raw data without having to query the MGMT$EVENTS view in the OMR directly or use EM CLI.  Once we close this view, we can go back and highlight the different links below the graph to show advanced options.

[Screenshot: metrics11]

For the Opened, Closed and Backlogged Metric Alerts, we can view Problem Analysis, Metrics Detail or go to the Target home for this metric data.  This is so cool that I’m going to do a separate blog post to do it justice, so be patient on this topic… :)

Top Ten Metric Collection Errors

The last graph on this page is another one that can quickly give away whether patches are missing.

[Screenshot: metrics13]

This is over 30 days, so if you are seeing hundreds of metric collection errors, you should first check to verify that there aren’t any patches that address metric collections of the type you are experiencing.  If that isn’t the case, investigate the error messages for the collections in the MGMT$METRIC_ERROR_HISTORY view in the OMR.  You can start with something as simple as:

SELECT TARGET_NAME, METRIC_NAME, COLL_NAME, ERROR_MESSAGE
FROM MGMT$METRIC_ERROR_HISTORY
WHERE ERROR_TYPE='ERROR'
AND COLLECTION_TIMESTAMP >=sysdate-30;
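
If that simple query returns hundreds of rows, a slight variation on it (same view, same columns) can rank the noisiest collections; this is just a sketch that aggregates what the query above lists row by row:

SELECT TARGET_NAME, METRIC_NAME, COLL_NAME, COUNT(*) AS ERROR_COUNT
FROM MGMT$METRIC_ERROR_HISTORY
WHERE ERROR_TYPE='ERROR'
AND COLLECTION_TIMESTAMP >= sysdate-30
GROUP BY TARGET_NAME, METRIC_NAME, COLL_NAME
ORDER BY ERROR_COUNT DESC;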

There is still a lot to cover in this series, but now it’s time to get ready to head over to Liverpool, England next week!



Copyright © DBA Kevlar [OMS and OMR Performance- Part III, Metrics Page], All Rights Reserved. 2015.

UKOUG TECH14 SQL Quiz

If you are an attendee of UKOUG Tech14, you have a chance to win a free copy of Expert Oracle SQL by Tony Hasler AND a free copy of Troubleshooting Oracle Performance (2nd edition) by me.

If you want to know more, click here.

[Book cover: Expert Oracle SQL]

[Book cover: Troubleshooting Oracle Performance, 2nd Edition]

UKOUG Tech14 Suggestions for Intro Talks and My Picks of the Rest

As I mentioned in my last post, we tried to organise a thread of intro talks into day one and two of this year’s UKOUG Tech14 conference (you can see the agenda grid here). It was not complete, but I thought I should pull it into its own post and add in what I would recommend from the overall agenda for people who are relatively new to Oracle RDBMS.

Monday 8th

  • 08:50 – Welcome and Introduction
    • Get there in time for the intro if you can, as if you are newish to the tech you are probably newish to a conference.
  • 09:00 RMAN the basics, by Michael Abbey.
    • If you are a DBA type, backup/recovery is your number one concern.
  • 10:00 – How Oracle Works in 50 Minutes
    • My attempt to cover the basic architecture in under an hour.
  • 11:30 – All about Joins by Tony Hasler
    • Top presenter, always good content
  • 12:30 – Lunch. Go and talk to people, lots of people, find some people you might like to talk with again.
  • 13:20 – Go to the Oracle Keynote.
    • The keynote itself is shorter than normal and after it there is a panel discussion by technical experts.
  • 14:30 is a bit tricky. Tim Hall on Analytical Functions is maybe a bit advanced, but Tim is a brilliant teacher and it is an intro to the subject. Failing that, I’d suggest the Oracle Enterprise Manager round table hosted by Dev Nayak as Database-centric oracle people should know OEM.
  • 16:00 – Again a bit tricky for someone new but I’d plump for The role of Privileges and Roles in Oracle 12C by Carl Dudley. He lectures (lectured?) in database technology and knows his stuff, but this is a New Feature talk…
  • 17:00 – Tuning by Explain Plan by Arian Stijf
    • This is a step-by-step guide to understanding the most common tool used for performance tuning
  • 17:50 onwards – go to the exhibition drinks, the community drinks and just make friends. One of the best things to come out of conferences is meeting people and swapping stories.

Tuesday 9th

  • 09:30 Maria Colgan, Top 5 things you need to know about Oracle Database In-Memory Option
    • This is actually the Database technical keynote, about one of the key new technologies.
  • 10:30 Introduction to Oracle Application Express by Joel Kallman
    • APEX, as it is often called, is a simple but powerful way to develop applications. It’s probably THE most common thing that DBA-types don’t know and wish they did.
  • 12:00 If you know any Java then Jacob Landlust on What all DBAs need to understand about JDBC Configuration or else Pete Finnigan on Secure, Review & Lock Down your Oracle Database.
  •  14:00 Chris Lawless on Zero Downtime Migrations using logical Replication
    • Good, as he covers the principles of such things, which teaches you a lot
  • 15:00 A bit of a struggle for a general Intro talk so I will plump for…Tim Gorman on RDBMS Forensics: Troubleshooting Using ASH as I know Tim will explain why understanding and solving performance issues is a science, not an art
  • 16:30 Tom Kyte on SQL is the best Development Language for Big Data
    • If you are new to Oracle, you pretty much have to go to at least one Tom Kyte presentation.
  • 17:30 Jonathan Lewis Five Hints for Efficient SQL
    • If you are new to Oracle, you pretty much have to go to at least one Jonathan Lewis presentation :-)

Oh, what the heck…

Wednesday 10th

  • 09:00 Jonathan Lewis Fundamentals of trouble shooting Pt1
  • 10:00  Jonathan Lewis Fundamentals of trouble shooting Pt2
  • 11:30 Tim Gorman on three types of table compression
  • 12:30 Tom Kyte More things about Oracle Database 12C
  • 14:30 Alex Nuijten Oracle 12C for developers
  • 15:30 Neil Chandler Goldengate – Migrating my first TB

 

Each year I struggle more and more to get to all the talks I want to, partly as there are so many clashes of good talks but also I end up in interesting conversations with old friends and suddenly realise I’ve missed a talk. Or my brain hits “full” and I have to take a breather.

However, my intended agenda is:

  • 08:50 Welcome and Intro to delegates prior to…
  • 09:00 Martin Bach on Oracle 12C features that didn’t make the marketing top 10
  • 10:00 Myself, How Oracle Works in 50 Minutes
  • 11:00 Coffee and recovering!
  • 11:30 Hmm, I want to go to four… Maybe Robyn Sands, Why Solid SQL still delivers the best ROI.
  • 13:30 Oracle Keynote panel
  • 14:30 Tim Hall on Analytical Functions… Or maybe Larry Carpenter on Active Data Guard…
  • 16:00 Antti Koskinen, Undocumented 11g/12c Features Internals
  • 17:00 Graham Wood, AWR: looking Beyond the Wait Events and Top SQL

Tuesday

  • 09:30 I have the pleasure of chairing Maria Colgan’s Database Keynote, Top Five Things you need to know about Oracle Database in-Memory option
  • 10:30 Joze Senegacnik, Most common Database Configuration Mistakes
  • 12:00 Richard Foote, Oracle Database 12c New Indexing Features
  • 14:00 Damn… I’ll plump for Maria Colgan on In-Memory and the optimizer. Sorry Tim and Chris.
  • 15:00 Now Tim, on RDBMS Forensics and ASH
  • 16:30 Chris Antognini on adaptive query optimization
  • 17:30 It had better be Pete Sharman, hot over from Aus, doing deployment best practices for private cloud, as I am chairing him

Wednesday

  • 09:00 Patrick Hurley, Adventures in Database Administration.
  • 10:00 Me, on boosting performance by clustering data
  • 11:30 Richard Foote, indexing in Exadata
  • 12:30 Tom Kyte, More things about Oracle 12C
  • 14:30 chairing Ganco Dimitriov on the importance of having an appropriate data segmentation
  • 15:30 Last one, three to choose from… Neil Chandler on Goldengate, I think

Drive home and sleep

 

Updated (XML) Content Section

Once per year I try to update the “XML Content” page that, in principle, should…

STATISTICS COLLECTOR Row Source Operation and LOBs

The aim of the STATISTICS COLLECTOR row source operation, which is used in adaptive plans, is to buffer all data produced by its child operation until it is known whether the inflection point is crossed. It goes without saying that buffering requires memory and, therefore, Oracle Database has to limit the amount of memory that can be allocated for that purpose. As a result, in some situations no adaptive plan can be used because, according to the query optimizer’s estimates, too much memory would be required.

A special case of the previous limitation is when the row source operation under the STATISTICS COLLECTOR operation produces a LOB. In fact, when a LOB is detected, independently of the amount of data that is expected to be produced, the adaptive plan is bypassed by the query optimizer.

Let’s have a look at an example:

  • Create the test table T1
SQL> CREATE TABLE t1 (id, n, pad)
  2  AS
  3  SELECT rownum, rownum, lpad('*',100,'*')
  4  FROM dual
  5  CONNECT BY level <= 10000;

SQL> INSERT INTO t1
  2  SELECT 10000+rownum, 666, lpad('*',100,'*')
  3  FROM dual
  4  CONNECT BY level <= 150;

SQL> COMMIT;

SQL> ALTER TABLE t1 ADD CONSTRAINT t1_pk PRIMARY KEY (id);

SQL> execute dbms_stats.gather_table_stats(user,'t1')
  • Create the test table T2
SQL> CREATE TABLE t2 (id, n, pad)
  2  AS
  3  SELECT rownum, rownum, lpad('*',100,'*')
  4  FROM dual
  5  CONNECT BY level <= 10000;

SQL> ALTER TABLE t2 ADD CONSTRAINT t2_pk PRIMARY KEY (id);

SQL> execute dbms_stats.gather_table_stats(user,'t2')
  • The query optimizer selects an adaptive plan when T1 and T2 are joined
SQL> EXPLAIN PLAN FOR
  2  SELECT *
  3  FROM t1, t2
  4  WHERE t1.id = t2.id
  5  AND t1.n = 666;

SQL> SELECT * FROM table(dbms_xplan.display(format=>'basic +predicate +note +adaptive'));

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------
Plan hash value: 1837274416

-------------------------------------------------
|   Id  | Operation                     | Name  |
-------------------------------------------------
|     0 | SELECT STATEMENT              |       |
|- *  1 |  HASH JOIN                    |       |
|     2 |   NESTED LOOPS                |       |
|     3 |    NESTED LOOPS               |       |
|-    4 |     STATISTICS COLLECTOR      |       |
|  *  5 |      TABLE ACCESS FULL        | T1    |
|  *  6 |     INDEX UNIQUE SCAN         | T2_PK |
|     7 |    TABLE ACCESS BY INDEX ROWID| T2    |
|-    8 |   TABLE ACCESS FULL           | T2    |
-------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("T1"."ID"="T2"."ID")
   5 - filter("T1"."N"=666)
   6 - access("T1"."ID"="T2"."ID")

Note
-----
   - this is an adaptive plan (rows marked '-' are inactive)
  • Create the test table T1L which contains the same data as table T1; notice that a CLOB is used instead of a VARCHAR2
SQL> CREATE TABLE t1l
  2  AS
  3  SELECT id, n, to_clob(pad) AS pad
  4  FROM t1;

SQL> ALTER TABLE t1l ADD CONSTRAINT t1l_pk PRIMARY KEY (id);

SQL> execute dbms_stats.gather_table_stats(user,'t1l')
  • The query optimizer does not select an adaptive plan when T1L and T2 are joined
SQL> EXPLAIN PLAN FOR
  2  SELECT *
  3  FROM t1l, t2
  4  WHERE t1l.id = t2.id
  5  AND t1l.n = 666;

SQL> SELECT * FROM table(dbms_xplan.display(format=>'basic +predicate +note +adaptive'));

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------
Plan hash value: 810895618

----------------------------------------------
| Id  | Operation                    | Name  |
----------------------------------------------
|   0 | SELECT STATEMENT             |       |
|   1 |  NESTED LOOPS                |       |
|   2 |   NESTED LOOPS               |       |
|*  3 |    TABLE ACCESS FULL         | T1L   |
|*  4 |    INDEX UNIQUE SCAN         | T2_PK |
|   5 |   TABLE ACCESS BY INDEX ROWID| T2    |
----------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter("T1L"."N"=666)
   4 - access("T1L"."ID"="T2"."ID")

Note that when the query optimizer detects the limitation just described, in the 10053 trace it writes the following information:

AP: Adaptive joins bypassed for table T2 @ SEL$1 due to LOB-producing table under statistics collector
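
If you want to reproduce that message yourself, one way (a sketch; the 10053 trace can also be enabled through other interfaces) is to switch the event on around the EXPLAIN PLAN and then look for the AP line in the resulting trace file in the diagnostic trace directory:

SQL> ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';

SQL> EXPLAIN PLAN FOR
  2  SELECT *
  3  FROM t1l, t2
  4  WHERE t1l.id = t2.id
  5  AND t1l.n = 666;

SQL> ALTER SESSION SET EVENTS '10053 trace name context off';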

The script I wrote to check this behaviour can be downloaded here.

Upgrades

I have a simple script that creates two identical tables, collects stats (with no histograms) on the pair of them, then executes a join. Here’s the SQL to create the first table:


create table t1
nologging
as
with generator as (
	select	--+ materialize
		rownum id
	from dual
	connect by
		level <= 1e4
)
select
	trunc(dbms_random.value(0,1000))	n_1000,
	trunc(dbms_random.value(0,750))		n_750,
	trunc(dbms_random.value(0,600))		n_600,
	trunc(dbms_random.value(0,400))		n_400,
	trunc(dbms_random.value(0,90))		n_90,
	trunc(dbms_random.value(0,72))		n_72,
	trunc(dbms_random.value(0,40))		n_40,
	trunc(dbms_random.value(0,3))		n_3
from
	generator	v1,
	generator	v2
where
	rownum <= 1e6
;

-- gather stats: no histograms

The two tables have 1,000,000 rows each and t2 is created from t1 with a simple “create as select”. The columns are all defined to be integers, and the naming convention is simple – n_400 holds 400 distinct values with uniform distribution from 0 – 399, n_750 holds 750 values from 0 – 749, and so on.
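
For completeness, a minimal sketch of what that “create as select” might look like (the exact dbms_stats call is my assumption; the point is simply identical data with no histograms):

create table t2
nologging
as
select * from t1
;

execute dbms_stats.gather_table_stats(user, 't2', method_opt => 'for all columns size 1')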

Here’s the simple query:


select
        t1.*, t2.*
from
        t1, t2
where
        t1.n_400 = 0
and     t2.n_72  = t1.n_90
and     t2.n_750 = t1.n_600
and     t2.n_400 = 1
;

Since I’ve created no indexes you might expect the query to do a couple of full tablescans and a hash join to get its result – and you’d be right; but what do you think the predicted cardinality would be?

Here are the results from running explain plan on the query and then reporting the execution plan – for three different versions of Oracle:



9.2.0.8
-------------------------------------------------------------------------
| Id  | Operation            |  Name       | Rows  | Bytes | Cost (%CPU)|
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |             |    96 |  4992 |  1230  (10)|
|*  1 |  HASH JOIN           |             |    96 |  4992 |  1230  (10)|
|*  2 |   TABLE ACCESS FULL  | T1          |  2500 | 65000 |   617  (11)|
|*  3 |   TABLE ACCESS FULL  | T2          |  2500 | 65000 |   613  (10)|
-------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T2"."N_750"="T1"."N_600" AND "T2"."N_72"="T1"."N_90")
   2 - filter("T1"."N_400"=0)
   3 - filter("T2"."N_400"=1)

***************************************************************************

10.2.0.5
---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |   116 |  6032 |  1229  (10)| 00:00:07 |
|*  1 |  HASH JOIN         |      |   116 |  6032 |  1229  (10)| 00:00:07 |
|*  2 |   TABLE ACCESS FULL| T1   |  2500 | 65000 |   616  (11)| 00:00:04 |
|*  3 |   TABLE ACCESS FULL| T2   |  2500 | 65000 |   612  (10)| 00:00:04 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T2"."N_750"="T1"."N_600" AND "T2"."N_72"="T1"."N_90")
   2 - filter("T1"."N_400"=0)
   3 - filter("T2"."N_400"=1)

***************************************************************************

11.2.0.4
---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |  2554 |   139K|  1225  (10)| 00:00:07 |
|*  1 |  HASH JOIN         |      |  2554 |   139K|  1225  (10)| 00:00:07 |
|*  2 |   TABLE ACCESS FULL| T1   |  2500 | 70000 |   612  (10)| 00:00:04 |
|*  3 |   TABLE ACCESS FULL| T2   |  2500 | 70000 |   612  (10)| 00:00:04 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T2"."N_72"="T1"."N_90" AND "T2"."N_750"="T1"."N_600")
   2 - filter("T1"."N_400"=0)
   3 - filter("T2"."N_400"=1)

The change for 11.2.0.4 (which is still there in 12.1.0.2; I didn’t check whether it also appears in 11.1.0.7) is particularly worrying. When you see a simple query like this changing cardinality on an upgrade, you can be fairly confident that some of your more complex queries will change their plans – even if there are no clever new optimizer transformations coming into play.

I’ll write up an explanation of how the optimizer has produced three different estimates some time over the next couple of weeks; but if you want an earlier answer this is one of the things I’ll be covering in my presentation on calculating selectivity at “Super Sunday” at UKOUG Tech 14.

UKOUG 2014

So it’s that time of year when I have to decide on my timetable for the UKOUG annual conference. Of course, I never manage to stick to it, but in principle here are the sessions I’ve highlighted:

Sunday

  • 12:30 – How to Avoid a Salted Banana – Lothar Flatz
  • 13:30 – Calculating Selectivity  – Me
  • 15:00 – Advanced Diagnostics Revisited – Julian Dyke
  • 16:00 – Testing Jumbo Frames for RAC – Neil Johnson

Monday

  • 9:00 – Oracle Indexes Q & A session – Richard Foote
  • 10:00 – How Oracle works in 50 minutes – Martin Widlake
  • 11:30 – Predictive Queries in 12c – Brendan Tierney
  • 14:30 – Oracle Database In-Memory DB and the Query Optimizer – Christian Antognini
  • 16:00 – Instrumenting, Analysing, & Tuning the Performance of Oracle ADF Applications – Frank Houweling
  • 17:00 – Techniques for Strategic Testing – Clive King

Tuesday

  • 9:30 – Top Five Things You Need To Know About Oracle Database In-Memory Option – Maria Colgan
  • 10:30 – How to Write Better PL/SQL – Andrew Clarke
  • 12:00 – Optimizer Round Table – Tony Hasler
  • 14:00 – What we had to Unlearn & Learn when Moving from M9000 to Super Cluster – Philippe Fierens
  • 15:00 – Maximum Availability Architecture: A Recipe for Disaster? – Julian Dyke
  • 16:30 – Chasing the Holy Grail of HA – Implementing Transaction Guard & Application Continuity in Oracle Database 12c – Mark Bobak
  • 17:30 – Five Hints for Efficient SQL – Me

Wednesday

  • 9:00 – Fundamentals of Troubleshooting (without graphics) pt.1 – Me
  • 10:00 – Fundamentals of Troubleshooting (without graphics) pt.2 – Me
  • 11:30 – Indexing in Exadata – Richard Foote

 

 

How do you Explain Oracle in 50 Minutes?

I’ve done a very “brave”* thing. I’ve put forward a talk to this year’s UKOUG Tech14 conference titled “How Oracle Works – in under 50 minutes”. Yes, I really was suggesting I could explain to people how the core of Oracle functions in that time. Not only that, but the talk is aimed at those new to Oracle technology. And it got accepted, so I have to present it. I can’t complain about that too much, I was on the paper selection committee…

* – “brave”, of course, means “stupid” in this context.

As a result I am now strapped to the chair in front of my desk, preparing an attempt to explain the overall structure of an Oracle instance, how data moves in out of storage, how ACID works and a few other things. Writing this blog is just avoidance behaviour on my part as I delay going back to it.

Is it possible? I’m convinced it is.

If you ignore all the additional bits, the things that not all sites use, such as Partitioning, RAC, Resource Manager, Materialized Views etc, etc, etc, then that removes a lot. And if not everyone uses it, then it is not core.
There is no need or intention on my part to talk about details of the core – for example, how the Cost Based aspect of the optimizer works, Oracle permissions or the steps needed for instance recovery. We all use those but the details are ignored by some people for their whole career {not usually people who I would deem competent, despite them holding down jobs as Oracle technicians, but they do}.

You are left with a relatively small set of things going on. Don’t get me wrong, it is still a lot of stuff to talk about and is almost certainly too much for someone to fully take in and digest in the time I have. I’m going to have to present this material as if I am possessed. But my intention is to describe a whole picture that makes sense and will allow people to understand the flow. Then, when they see presentations on aspects of it later in the conference, there is more chance it will stick. I find I need to be taught something 3 or 4 times. The first time simply opens my mind to the general idea, the second time I retain some of the details and the third or fourth time I start integrating it into what I already knew.

My challenge is to say enough so that it makes sense and *no more*. I have developed a very bad habit of trying to cram too much into a presentation and of course this is a real danger here. I’m trying to make it all visual. There will be slides of text, but they are more for if you want to download the talk after the conference. However, drawing pictures takes much, much, much longer than banging down a half dozen bullet points.

One glimmer in the dark is that there is a coffee break after my session. I can go right up to the wire and then take questions after I officially stop, if I am not wrestled to the ground and thrown out the room.

If anyone has any suggestions or comments about what I should or should not include, I’d love to hear them.

This is all part of my intention to provide more conference content for those new to Oracle. As such, this “overview” talk is at the start of the first day of the main conference, 10am Monday. I have to thank my fellow content organisers for allowing me to stick it in where I wanted it. If you are coming to the conference and don’t know much Oracle yet – then I am amazed you read my blog (or any other blog other than maybe AskTom). But if you have colleagues or friends coming who are still relatively new to the tech, tell them to look out for my talk. I really hope it will help them get that initial understanding.

I had hoped to create a fully fledged thread of intro talks running through all of Monday and Tuesday, but I brought the idea up too late. We really needed to promote the idea at the call for papers and then maybe source a couple of talks. However, using the talks that were accepted we did manage to get a good stab at a flow of intro talks through Monday. I would suggest:

  • 08:50 – Welcome and Introduction
    • Get there in time for the intro if you can, as if you are newish to the tech you are probably newish to a conference
  • 09:00 RMAN the basics, by Michael Abbey.
    • If you are a DBA type, backup/recovery is your number one concern.
  • 10:00 – How Oracle Works in 50 Minutes
    • I think I have said enough!
  • 11:30 – All about Joins by Tony Hasler
    • Top presenter, always good content
  • 12:30 – Lunch. Go and talk to people, lots of people, find some people you might like to talk with again. *don’t stalk anyone*
  • 13:20 – Go to the Oracle Keynote.
    • Personally, I hate whole-audience keynotes, I am sick of being told every year how “there has never been a better time to invest in oracle technology” – but this one is short and after it there is a panel discussion by technical experts.
  • 14:30 is a bit tricky. Tim Hall on Analytical Functions is maybe a bit advanced, but Tim is a brilliant teacher and it is an intro to the subject. Failing that, I’d suggest the Oracle Enterprise Manager round table hosted by Dev Nayak as Database-centric oracle people should know OEM.
  • 16:00 – Again a bit tricky for someone new but I’d plump for The role of Privileges and Roles in Oracle 12C by Carl Dudley. He lectures (lectured?) in database technology and knows his stuff, but this is a New Feature talk…
  • 17:00 – Tuning by Explain Plan by Arian Stijf
    • This is a step-by-step guide to understanding the most common tool used for performance tuning
  • 17:50 onwards – go to the exhibition drinks, the community drinks and just make friends. One of the best things to come out of conferences is meeting people and swapping stories.

I better get back to drawing pictures. Each one takes me a day and I need about 8 of them. Whoops!