Oakies Blog Aggregator

Munching the Squarespace Hamburger

Many Squarespace templates indicate mobile menus by the so-called hamburger
icon. Some argue that the hamburger is unintuitive. Templates like Five and
Marquee make it easy to avoid the burger...



Read the full post at www.gennick.com/database.

Ten Year Site Anniversary

I realized yesterday that this site has passed its ten year anniversary. In all funny…

Closure

It’s been a long time since I said anything interesting about transitive closure in Oracle, the mechanism by which Oracle can infer that if a = b and b = c then a = c but only (in Oracle’s case) if one of a, b, or c is a literal constant rather than a column. So with that quick reminder in place, here’s an example of optimizer mechanics to worry you. It’s not actually a demonstration of transitive closure coming into play, but I wanted to remind you of the logic to set the scene.
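
For example, here is a minimal, hypothetical sketch of that rule (the table and column names are invented purely for illustration and are not part of the test case that follows):


select	a.*, b.*
from	tab_a	a,
	tab_b	b
where	a.col1 = b.col1
and	b.col1 = 10	-- literal constant: the optimizer can infer a.col1 = 10
;

Because b.col1 = 10 compares a column with a literal, the optimizer can generate the extra predicate a.col1 = 10 by transitive closure; if b.col1 were compared with another column rather than a literal, no such inference would be made.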

I have three identical tables, one million rows, no indexes. The SQL to create the first table is one I supplied a couple of days ago to demonstrate changes in join cardinality dependent on Oracle version:


create table t1
nologging
as
with generator as (
	select	--+ materialize
		rownum id
	from dual
	connect by
		level <= 1e4
)
select
	trunc(dbms_random.value(0,1000))	n_1000,
	trunc(dbms_random.value(0,750))		n_750,
	trunc(dbms_random.value(0,600))		n_600,
	trunc(dbms_random.value(0,400))		n_400,
	trunc(dbms_random.value(0,90))		n_90,
	trunc(dbms_random.value(0,72))		n_72,
	trunc(dbms_random.value(0,40))		n_40,
	trunc(dbms_random.value(0,3))		n_3
from
	generator	v1,
	generator	v2
where
	rownum <= 1e6
;
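
The statements for t2 and t3 aren't shown here; as a sketch only (the original may simply have repeated the dbms_random CTAS), you could build them as identical copies and gather statistics on all three tables like this:


create table t2 nologging as select * from t1;
create table t3 nologging as select * from t1;

begin
	dbms_stats.gather_table_stats(user, 't1');
	dbms_stats.gather_table_stats(user, 't2');
	dbms_stats.gather_table_stats(user, 't3');
end;
/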

Here’s a simple SQL statement that joins the three tables:


select
	t1.*, t2.*, t3.*
from
	t1, t2, t3
where
	t2.n_90  = t1.n_90
and	t3.n_90  = t2.n_90
and	t3.n_600 = t2.n_600
and	t1.n_400 = 1
and	t2.n_400 = 2
and	t3.n_400 = 3
;

Given the various n_400 = {constant} predicates we should expect to see close to 2,500 rows from each table participating in the join (1,000,000 rows divided by the 400 distinct values of n_400) – and that is exactly what Oracle predicts in the execution plan. The question is: what is the cardinality of the final join? Before showing you the execution plan and its prediction I’m going to bring transitivity into the picture.  Note the lines numbered 6 and 7 of the query.  If t2.n_90 = t1.n_90 and t3.n_90 = t2.n_90 then t3.n_90 = t1.n_90; so I might have written my query slightly differently – note the small change at line 7 below:


select
	t1.*, t2.*, t3.*
from
	t1, t2, t3
where
	t2.n_90  = t1.n_90
and	t3.n_90  = t1.n_90		-- changed
and	t3.n_600 = t2.n_600
and	t1.n_400 = 1
and	t2.n_400 = 2
and	t3.n_400 = 3
;

So here’s the exciting bit. My two queries are logically equivalent, and MUST return exactly the same row set. Check the final cardinality predictions in these two execution plans (from 12.1.0.2, but you get the same results in 11.2.0.4, older versions have other differences):


First Version - note the predicate for operation 3
----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      | 70949 |  5820K|  1869  (10)| 00:00:01 |
|*  1 |  HASH JOIN          |      | 70949 |  5820K|  1869  (10)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL | T1   |  2500 | 70000 |   622  (10)| 00:00:01 |
|*  3 |   HASH JOIN         |      |  2554 |   139K|  1245  (10)| 00:00:01 |
|*  4 |    TABLE ACCESS FULL| T2   |  2500 | 70000 |   622  (10)| 00:00:01 |
|*  5 |    TABLE ACCESS FULL| T3   |  2500 | 70000 |   622  (10)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T2"."N_90"="T1"."N_90")
   2 - filter("T1"."N_400"=1)
   3 - access("T3"."N_90"="T2"."N_90" AND "T3"."N_600"="T2"."N_600")
   4 - filter("T2"."N_400"=2)
   5 - filter("T3"."N_400"=3)

Second Version - note the predicate for operation 1
----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |  3264 |   267K|  1868  (10)| 00:00:01 |
|*  1 |  HASH JOIN          |      |  3264 |   267K|  1868  (10)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL | T1   |  2500 | 70000 |   622  (10)| 00:00:01 |
|*  3 |   HASH JOIN         |      | 10575 |   578K|  1245  (10)| 00:00:01 |
|*  4 |    TABLE ACCESS FULL| T2   |  2500 | 70000 |   622  (10)| 00:00:01 |
|*  5 |    TABLE ACCESS FULL| T3   |  2500 | 70000 |   622  (10)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T2"."N_90"="T1"."N_90" AND "T3"."N_90"="T1"."N_90")
   2 - filter("T1"."N_400"=1)
   3 - access("T3"."N_600"="T2"."N_600")
   4 - filter("T2"."N_400"=2)
   5 - filter("T3"."N_400"=3)

A small change in the way the predicates are presented gives me a factor of 22 difference in the cardinality estimate (70,949 vs. 3,264) – oops!

The actual result with my data was close to 3,000 rows – so one of the estimates in the second version was pretty good; but the point of the blog isn’t that you can “tune” the optimizer by carefully picking your way through transitive closure, the point is that a small “cosmetic” change you might make to a query could result in a significant change in the cardinality calculations which could then make a dramatic difference to the final execution plan. This example, by the way, depends on the same “multi-column sanity check” that showed up in the previous posting.

I will be expanding on this posting some time in the next couple of weeks but, again, the example should come up in my session on calculating selectivity at “Super Sunday” at UKOUG Tech 14.

 

 

Announcing SLOB 2.2 : Think Time and Limited-Scope User-Data Modification

This is a hasty blog post to get SLOB 2.2 out to those who are interested.

In addition to doing away with the cumbersome “seed” table and procedure.sql, this kit introduces 5 new slob.conf parameters. By default these parameters are disabled.

This SLOB distribution does not require re-executing setup.sh. One can simply adopt the kit and use it to test existing SLOB databases. The following explains the new slob.conf parameters:

DO_UPDATE_HOTSPOT=FALSE

When set to TRUE, modify SQL will no longer affect random rows spanning each session’s schema. Instead, each session will only modify HOTSPOT_PCT percent of their data.

HOTSPOT_PCT=10

This parameter controls how much of each session’s schema gets modified when UPDATE_PCT is non-zero. The default will limit the scope of each session’s data modifications to a maximum of 10% of their data.

THINK_TM_MODULUS=0

When set to non-zero this is a frequency control on how often sessions will incur think time. For example, if set to 7, every seventh SQL statement will be followed by a sleep (think time) for a random amount of time between THINK_TM_MIN and THINK_TM_MAX. It’s best to assign a prime number to THINK_TM_MODULUS.

THINK_TM_MIN=.1

The low-bound for selection of a random period to sleep when THINK_TM_MODULUS triggers a think time event.

THINK_TM_MAX=.5

The high-bound for selection of a random period to sleep when THINK_TM_MODULUS triggers a think time event.
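
Putting the new parameters together, an illustrative slob.conf fragment (example values only, not recommendations) might look like this:

UPDATE_PCT=20              # existing parameter; must be non-zero for modify SQL
DO_UPDATE_HOTSPOT=TRUE     # confine each session's updates to a hot spot
HOTSPOT_PCT=10             # modify at most 10% of each session's data
THINK_TM_MODULUS=7         # every 7th SQL statement incurs think time (a prime is best)
THINK_TM_MIN=.1            # lower bound of the random sleep, in seconds
THINK_TM_MAX=.5            # upper bound of the random sleep, in seconds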

Notes About Think Time

The resolution supported for think time is hundredths of a second.

The following is a link to the SLOB 2.2 release tarball (md5 is be3612c50d134636a56ef9654b5865c5):

https://my.syncplicity.com/share/5vmflakvyqbawsy/2014.12.05.slob_2.2.1.tar

The additional tarball (at the following link) has a slob.conf, simple.ora and awr.txt that show a way to have 256 sessions produce the following load profile (on 2s16c32t E5 Xeon):
https://my.syncplicity.com/share/geydubw3q42okrt/think-time-help-files.tar

[Screenshot: load profile with 256 sessions and think time]

Filed under: oracle, SLOB Tagged: Oracle, SLOB

my Time/Task/Goal/Habit (TTGH) management

There was a post last night on oracle-l on the topic of task/project/time management, and I’m a big fan of productivity hacks and the whole GTD idea or method http://en.wikipedia.org/wiki/Getting_Things_Done. The reason for adopting this method is that I’ve been in the consulting business my entire career and I always get swamped with a lot of work from multiple customers or research stuff, and it’s really difficult to prioritize and focus without a system. For the same reason I learned how to speed read (see the Tony Buzan book), because there’s just not enough time and you want to learn as much as you can. The bottom line is you want to be efficient.

So I thought of sharing my way of dealing with this in general…

below is my post on oracle-l 

I would start with Tom Limoncelli’s Time Management for System Administrators, he’s got a video presentation here

http://youtu.be/XMc7jw38Bxs?t=2m48s  that became the foundation on how I manage my tasks

Personally I have Goal, Habit, Task, Time Trackers

And there are two parts to this: having a system that works for you and being able to instrument that

1) you have to have a system for setting your goals and prioritizing your tasks, and this is a great response from Quora http://www.quora.com/Productivity/As-a-startup-CEO-what-is-your-favorite-productivity-hack/answer/Paul-A-Klipp?srid=n2Fg&share=1
  • now I’m using mind maps for goal setting and kanbanflow.com for task management, which I really like because you can specify swimlanes + colors, which makes it kind of multidimensional rather than just a regular calendar
  • before that I just used Google Calendar for tasks and “Goals on Track” as a goal/habit tracker, but I unsubscribed from that service and migrated the entire workflow to mind maps (I use FreeMind)
2) you have to have a way of instrumenting your time
  • on my Windows VM where I do all my work I have ManicTime installed, and this enables me to track everything I’m doing automatically, without any user input. It can also auto-tag applications: let’s say I open PuTTY, the time I spend in that app will be tagged as “Work”, and I can see where my time went just by graphing the data  https://www.evernote.com/l/ADBlN746vCxDXJykSPwZMT4TFUMQ6xT9oVw
  • on my Mac host I have the free version of RescueTime. I like the weekly productivity percentage being emailed every week; usually I’m at about 68% per week. If I go below that, it means I was pretty lazy that week; above it means I was pretty busy
  • KanbanFlow, on the other hand, forces you to input your tasks plus the corresponding time spent on them. So what I do at the end of the day is export the data and graph it on my Tableau dashboard. I just need to open the dashboard I created and it will automatically read the new file, and it looks like this  https://www.evernote.com/l/ADD5nUeDwrZLpoc87uhpsqdKeHeNvvMPJcI ; on that link you’ll see the entire workflow I have for task management, up to visualization
Some of these may not work for you, but at least you’ll get the high-level idea. A couple of years back, early in my career, I was learning and coming up with my own systems and using these tools; through the years I’ve kept improving and hacking them, and up until now that’s what helps me get things done (GTD) and stay motivated.

 

:)

OMS and OMR Performance- Part III, Metrics Page

Before heading off to UKOUG’s Tech 14 conference, I thought I would jump back from the Agents performance page and look into a very important page for the Oracle Management Service (OMS) and Oracle Management Repository (OMR): the metrics page.  Standard metric collection is demanding in itself, so when we add plugins, metric extensions and manual metric collection changes, it’s important to know how these changes can impact the performance of the Enterprise Manager 12c (EM12c) Cloud Control environment.

We have returned to the same location in the EM12c console: Setup –> Manage Cloud Control –> Repository.  The Repository page has three tabs, and we’re currently working with the middle tab, Metrics.

[Screenshot: Repository page, Metrics tab]

Metric Collections and Rows via Bubble Map

We’ll jump into the first graph, which is a fantastic bubble graph (love seeing these used more in EM12c console pages!).

[Screenshot: metric collections and rows bubble graph]

Note how the data is distributed across the graph: the left axis shows Number of Rows Loaded vs. the bottom axis, Number of Collections.  This information is important, as it shows that our heaviest hitters are the blue and the orange circles.  If we hover the mouse over the blue circle, we then get to see a few more details:

[Screenshot: hover detail for the blue circle (cluster metric collections)]

We can now see that the blue circle represents the metric collections for cluster information, and we can see not just the two graphing points but also the amount of data loaded in MB.  We can use this information to make decisions on updating collection intervals to ease stress on the EM12c OMS and OMR.

If we then focus on the orange circle, we can view the same type of detail information:

[Screenshot: hover detail for the orange circle (InfiniBand metric collections)]

So there are quite a few more collections for the InfiniBand.  This is expected, as this is the network connectivity between our nodes on the engineered systems.  The number of rows is higher, too, but note that the MB of data is no more than what the EM12c had to handle for the cluster data being uploaded.  We can use this data to see if the collection interval pressure justifies the impact to the OMS and OMR.  As we work through the rest of the data offered on this page, these are two important pain points to keep in the back of our mind: metric data uploaded vs. number of collections.

Investigating Bubble Map Performance Data

Now let’s say we want to dig deeper into the InfiniBand metric info that’s been shown here. We can double-click on the orange circle in the graph and we’ll be taken to specific detail regarding this one metric, broken down by target and metric details.

[Screenshot: top 25 metrics broken down by metric]

Now we see the top 25 metrics’ data load information, not just for the parent metric target type but broken down by metric specifics.  We can quickly see that the Switch Port Configuration Monitor accounts for the most metric data.  As the data is compiled and uploaded as one collection, the bubbles are superimposed on top of each other.

If we switch to a Target view, a very different story is presented:

[Screenshot: top 25 metrics broken down by target]

Note that six different collection interval schedules are [most likely] displayed here. In the center, you can see the interconnected bubbles superimposed on top of each other, but the large red bubble is of significant interest.  If we hover our cursor over this bubble:

[Screenshot: hover detail for the large red bubble (scan listener)]

One scan listener (scan listener 3 for an Exalogic environment) is uploading more data, and more often, than the rest of the environment?  This is over 26% of the total metric impact for InfiniBand on the EM12c.  Investigating this, reviewing agent patch levels and comparing collection intervals would be a good idea!

 Classic View of Metric Data

For those who prefer a more standard graph of the data, the right-side graph displays the high-level data in a second format:

[Screenshot: top 10 metrics in classic graph format]

You also have the option, instead of the top 10, to display 15 or 20 top metrics.

Metric Alerts Per Day

Awareness of metric alerts per day can be valuable, especially when there are OMS, OMR or agent patches missing!  I can commonly look at this graph and tell quickly if an environment has skipped applying important EM12c patches (they can be located in the master note Enterprise Manager Base Platform (All Releases) (Doc ID 822485.1)).

[Screenshot: metric alerts per day graph]

Now, you may see the breaks in the graph and wonder what’s up with that: this environment has been patched and was down for quarterly maintenance.  We can see this when we click on the Table View link and see the Unavailable sections:

[Screenshot: table view showing Unavailable sections]

This also gives you quick access to the recent, raw data without having to query the MGMT$EVENTS view in the OMR directly or using EM CLI.  Once we close this view, we can go back and highlight the different links below the graph to show advanced options.

[Screenshot: advanced options links below the metric alerts graph]

For the Opened, Closed and Backlogged Metric Alerts, we can view Problem Analysis, Metrics Detail or go to the Target home for this metric data.  This is so cool that I’m going to do a separate blog post to do it justice, so be patient on this topic… :)

Top Ten Metric Collection Errors

The last graph on this page is another one that can quickly give away whether patches are missing.

[Screenshot: top ten metric collection errors graph]

This is over 30 days, so if you are seeing hundreds of metric collection errors, you should first check to verify that there aren’t any patches that address metric collections of the type you are experiencing.  If this isn’t the case, investigate the error messages for the collections in the MGMT$METRIC_ERROR_HISTORY view in the OMR.  You can start with something as simple as:

SELECT TARGET_NAME, METRIC_NAME, COLL_NAME, ERROR_MESSAGE
FROM MGMT$METRIC_ERROR_HISTORY
WHERE ERROR_TYPE='ERROR'
AND COLLECTION_TIMESTAMP >=sysdate-30;
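
If that returns a long list, aggregating the same view by collection makes the worst offenders easier to spot, for example:

SELECT TARGET_NAME, METRIC_NAME, COLL_NAME, COUNT(*) AS ERROR_COUNT
FROM MGMT$METRIC_ERROR_HISTORY
WHERE ERROR_TYPE='ERROR'
AND COLLECTION_TIMESTAMP >= sysdate-30
GROUP BY TARGET_NAME, METRIC_NAME, COLL_NAME
ORDER BY ERROR_COUNT DESC;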

There is still a lot to cover in this series, but now it’s time to get ready to head over to Liverpool, England next week!







Copyright © DBA Kevlar [OMS and OMR Performance- Part III, Metrics Page], All Rights Reserved. 2014.

UKOUG TECH14 SQL Quiz

If you are an attendee of UKOUG Tech14 you have a chance to win a free copy of Expert Oracle SQL by Tony Hasler AND a free copy of Troubleshooting Oracle Performance (2nd edition) by myself.

If you want to know more, click here.

Expert Oracle SQL

Troubleshooting Oracle Performance, 2nd Edition

UKOUG Tech14 Suggestions for Intro Talks and My Picks of the Rest

As I mentioned in my last post, we tried to organise a thread of intro talks into days one and two of this year’s UKOUG Tech14 conference (you can see the agenda grid here). It was not complete, but I thought I should pull it into its own post and add in what I would recommend from the overall agenda for people who are relatively new to Oracle RDBMS.

Monday 8th

  • 08:50 – Welcome and Introduction
    • Get there in time for the intro if you can, as if you are newish to the tech you are probably newish to a conference.
  • 09:00 RMAN the basics, by Michael Abbey.
    • If you are a DBA type, backup/recovery is your number one concern.
  • 10:00 – How Oracle Works in 50 Minutes
    • My attempt to cover the basic architecture in under an hour.
  • 11:30 – All about Joins by Tony Hasler
    • Top presenter, always good content
  • 12:30 – Lunch. Go and talk to people, lots of people, find some people you might like to talk with again.
  • 13:20 – Go to the Oracle Keynote.
    • The keynote itself is shorter than normal and after it there is a panel discussion by technical experts.
  • 14:30 is a bit tricky. Tim Hall on Analytical Functions is maybe a bit advanced, but Tim is a brilliant teacher and it is an intro to the subject. Failing that, I’d suggest the Oracle Enterprise Manager round table hosted by Dev Nayak as Database-centric oracle people should know OEM.
  • 16:00 – Again a bit tricky for someone new but I’d plump for The role of Privileges and Roles in Oracle 12C by Carl Dudley. He lectures (lectured?) in database technology and knows his stuff, but this is a New Feature talk…
  • 17:00 – Tuning by Explain Plan by Arian Stijf
    • This is a step-by-step guide to understanding the most common tool used for performance tuning
  • 17:50 onwards – go to the exhibition drinks, the community drinks and just make friends. One of the best things to come out of conferences is meeting people and swapping stories.

Tuesday 9th

  • 09:30 Maria Colgan, Top 5 things you need to know about Oracle Database In-Memory Option
    • This is actually the Database technical keynote, about one of the key new technologies.
  • 10:30 Introduction to Oracle Application Express by Joel Kallman
    • APEX, as it is often called, is a simple but powerful way to develop applications. It’s probably THE most common thing that DBA-types don’t know and  wish they did?
  • 12:00 If you know any Java then Jacob Landlust on What all DBAs need to understand about JDBC Configuration or else Pete Finnigan on Secure, Review & Lock Down your Oracle Database.
  •  14:00 Chris Lawless on Zero Downtime Migrations using logical Replication
    • Good, as he covers the principles of such things, which teaches you a lot
  • 15:00 A bit of a struggle for a general Intro talk so I will plump for…Tim Gorman on RDBMS Forensics: Troubleshooting Using ASH as I know Tim will explain why understanding and solving performance issues is a science, not an art
  • 16:30 Tom Kyte on SQL is the best Development Language for Big Data
    • If you are new to Oracle, you pretty much have to go to at least one Tom Kyte presentation.
  • 17:30 Jonathan Lewis Five Hints for Efficient SQL
    • If you are new to Oracle, you pretty much have to go to at least one Jonathan Lewis presentation :-)

Oh, what the heck…

Wednesday 10th

  • 09:00 Jonathan Lewis Fundamentals of trouble shooting Pt1
  • 10:00  Jonathan Lewis Fundamentals of trouble shooting Pt2
  • 11:30 Tim Gorman on three types of table compression
  • 12:30 Tom Kyte More things about Oracle Database 12C
  • 14:30 Alex Nuijten Oracle 12C for developers
  • 15:30 Neil Chandler Goldengate – Migrating my first TB

 

Each year I struggle more and more to get to all the talks I want to, partly as there are so many clashes of good talks but also I end up in interesting conversations with old friends and suddenly realise I’ve missed a talk. Or my brain hits “full” and I have to take a breather.

However, my intended agenda is:

  • 08:50 Welcome and Intro to delegates prior to…
  • 09:00 Martin Bach on Oracle 12C features that didn’t make the marketing top 10
  • 10:00 Myself, How Oracle Works in 50 Minutes
  • 11:00 Coffee and recovering!
  • 11:30 Hmm, I want to go to four… Maybe Robyn Sands, Why Solid SQL still delivers the best ROI.
  • 13:30 Oracle Keynote panel
  • 14:30 Tim Hall on Analytical Functions… Or maybe Larry Carpenter on Active Data Guard…
  • 16:00 Antti Koskinen, Undocumented 11g.12c Features Internals
  • 17:00 Graham Wood, AWR: looking Beyond the Wait Events and Top SQL

Tuesday

  • 09:30 I have the pleasure of chairing Maria Colgan’s Database Keynote, Top Five Things you need to know about Oracle Database in-Memory option
  • 10:30 Joze Senegacnik, Most common Database Configuration Mistakes
  • 12:00 Richard Foote, Oracle Database 12c New Indexing Features
  • 14:00 Damn… I’ll plump for Maria Colgan on In-Memory and the optimizer. Sorry Tim and Chris
  • 15:00 Now Tim, on RDBMS Forensics and ASH
  • 16:30 Chris Antognini on adaptive query optimization
  • 17:30 it better be Pete Sharman, hot over from Aus, doing deployment best practices for Private cloud, as I am chairing him

Wednesday

  • 09:00 Patrick Hurley, Adventures in Database Administration.
  • 10:00 Me, on boosting performance by clustering data
  • 11:30 Richard Foote, indexing in Exadata
  • 12:30 Tom Kyte, More things about Oracle 12C
  • 14:30 chairing Ganco Dimitriov on the importance of having an appropriate data segmentation
  • 15:30 Last one, 3 to choose from… Neil Chandler on Goldengate I think

Drive home and sleep

 

Updated (XML) Content Section

Once per year I try to update the “XML Content” page that, in principle, should…

STATISTICS COLLECTOR Row Source Operation and LOBs

The aim of the STATISTICS COLLECTOR row source operation, which is used in adaptive plans, is to buffer all data produced by its child operation until it is known whether the inflection point is crossed. It goes without saying that buffering requires memory and, therefore, Oracle Database has to limit the amount of memory that can be allocated for that purpose. As a result, in some situations no adaptive plans can be used because according to the query optimizer estimations too much memory is required.

A special case of the previous limitation is when the row source operation under the STATISTICS COLLECTOR operation produces a LOB. In fact, when a LOB is detected, independently of the amount of data that is expected to be produced, the adaptive plan is bypassed by the query optimizer.

Let’s have a look at an example:

  • Create the test table T1
SQL> CREATE TABLE t1 (id, n, pad)
  2  AS
  3  SELECT rownum, rownum, lpad('*',100,'*')
  4  FROM dual
  5  CONNECT BY level <= 10000;

SQL> INSERT INTO t1
  2  SELECT 10000+rownum, 666, lpad('*',100,'*')
  3  FROM dual
  4  CONNECT BY level <= 150;

SQL> COMMIT;

SQL> ALTER TABLE t1 ADD CONSTRAINT t1_pk PRIMARY KEY (id);

SQL> execute dbms_stats.gather_table_stats(user,'t1')
  • Create the test table T2
SQL> CREATE TABLE t2 (id, n, pad)
  2  AS
  3  SELECT rownum, rownum, lpad('*',100,'*')
  4  FROM dual
  5  CONNECT BY level <= 10000;

SQL> ALTER TABLE t2 ADD CONSTRAINT t2_pk PRIMARY KEY (id);

SQL> execute dbms_stats.gather_table_stats(user,'t2')
  • The query optimizer selects an adaptive plan when T1 and T2 are joined
SQL> EXPLAIN PLAN FOR
  2  SELECT *
  3  FROM t1, t2
  4  WHERE t1.id = t2.id
  5  AND t1.n = 666;

SQL> SELECT * FROM table(dbms_xplan.display(format=>'basic +predicate +note +adaptive'));

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------
Plan hash value: 1837274416

-------------------------------------------------
|   Id  | Operation                     | Name  |
-------------------------------------------------
|     0 | SELECT STATEMENT              |       |
|- *  1 |  HASH JOIN                    |       |
|     2 |   NESTED LOOPS                |       |
|     3 |    NESTED LOOPS               |       |
|-    4 |     STATISTICS COLLECTOR      |       |
|  *  5 |      TABLE ACCESS FULL        | T1    |
|  *  6 |     INDEX UNIQUE SCAN         | T2_PK |
|     7 |    TABLE ACCESS BY INDEX ROWID| T2    |
|-    8 |   TABLE ACCESS FULL           | T2    |
-------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("T1"."ID"="T2"."ID")
   5 - filter("T1"."N"=666)
   6 - access("T1"."ID"="T2"."ID")

Note
-----
   - this is an adaptive plan (rows marked '-' are inactive)
  • Create the test table T1L which contains the same data as table T1; notice that a CLOB is used instead of a VARCHAR2
SQL> CREATE TABLE t1l
  2  AS
  3  SELECT id, n, to_clob(pad) AS pad
  4  FROM t1;

SQL> ALTER TABLE t1l ADD CONSTRAINT t1l_pk PRIMARY KEY (id);

SQL> execute dbms_stats.gather_table_stats(user,'t1l')
  • The query optimizer does not select an adaptive plan when T1L and T2 are joined
SQL> EXPLAIN PLAN FOR
  2  SELECT *
  3  FROM t1l, t2
  4  WHERE t1l.id = t2.id
  5  AND t1l.n = 666;

SQL> SELECT * FROM table(dbms_xplan.display(format=>'basic +predicate +note +adaptive'));

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------
Plan hash value: 810895618

----------------------------------------------
| Id  | Operation                    | Name  |
----------------------------------------------
|   0 | SELECT STATEMENT             |       |
|   1 |  NESTED LOOPS                |       |
|   2 |   NESTED LOOPS               |       |
|*  3 |    TABLE ACCESS FULL         | T1L   |
|*  4 |    INDEX UNIQUE SCAN         | T2_PK |
|   5 |   TABLE ACCESS BY INDEX ROWID| T2    |
----------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter("T1L"."N"=666)
   4 - access("T1L"."ID"="T2"."ID")

Note that when the query optimizer detects the limitation just described, in the 10053 trace it writes the following information:

AP: Adaptive joins bypassed for table T2 @ SEL$1 due to LOB-producing table under statistics collector
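
If you want to reproduce that message yourself, one simple approach (a sketch; the trace file ends up in the session's trace directory under your diagnostic destination) is to enable the optimizer trace before parsing the query:

SQL> ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';

SQL> EXPLAIN PLAN FOR SELECT * FROM t1l, t2 WHERE t1l.id = t2.id AND t1l.n = 666;

SQL> ALTER SESSION SET EVENTS '10053 trace name context off';

Then search the resulting trace file for the "AP:" lines.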

The script I wrote to check this behaviour can be downloaded here.