

Shared Server – 3

The previous post in this series showed you how v$reqdist summarises the time taken by tasks running through shared servers – but there are several other aspects of shared server activity that we need to examine. One of the more important ones is how much time a task spends queueing before it gets to a shared server and starts running – and Oracle gives us v$queue as the place to find this information:

set linesize 120
set trimspool on
set pagesize 40

column program format a24
column totalq  format 9,999,999,999
column wait    format 9,999,999,999

select
        prc.program,
        que.type,
        que.totalq,
        que.wait,
        round(que.wait/nullif(que.totalq,0),2)  avg_wait
from
        v$queue que,
        v$process prc
where
        prc.addr(+) = que.paddr
/

PROGRAM                  TYPE               TOTALQ          WAIT   AVG_WAIT
------------------------ ---------- -------------- ------------- ----------
oracle@xxxxxxxx (D000)   DISPATCHER    199,171,228     4,405,840        .02
oracle@xxxxxxxx (D001)   DISPATCHER    204,687,986 1,435,915,630       7.02
oracle@xxxxxxxx (D002)   DISPATCHER      2,068,714        29,477        .01
oracle@xxxxxxxx (D003)   DISPATCHER      3,541,736        61,355        .02
                         COMMON        286,032,262 1,105,120,053       3.86

The view usually shows statistics accumulated since instance startup, but it can produce very strange results because it is possible to stop and start dispatchers dynamically. In this example the system had been running for quite a long time with just two dispatchers and I had increased the count to four in the previous 24 hours – hence the two dispatchers with a relatively tiny number of waits in the queue. (I stopped and restarted the dispatchers for subsequent test runs so that I didn’t have to keep working out the deltas.)

The COMMON queue is where incoming requests are queued while they wait for a shared server to service them; the DISPATCHER queues are where a shared server places its result set when it has processed the message. (A session is permanently “connected” to a dispatcher – but the association between a session and a shared server is transient, and each session request could end up being serviced by a different shared server.)

If there are too many concurrent messages for the number of shared servers (or, almost equivalently, too many long-running tasks), then we would expect to see the time spent in the COMMON queue climbing as sessions wait for a shared server to become free. In principle I wouldn’t really expect to see much time spent in the DISPATCHER queues.
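Since the figures are cumulative you really want to work with deltas over a short interval. A quick and dirty sketch for watching just the COMMON queue(s) – not part of the original script, and it assumes you have execute privilege on dbms_lock – might look like this:

variable start_totalq number
variable start_wait   number

begin
        -- capture a starting snapshot of the COMMON queue(s)
        select  sum(totalq), sum(wait)
        into    :start_totalq, :start_wait
        from    v$queue
        where   type = 'COMMON';
end;
/

-- wait a minute between snapshots
execute dbms_lock.sleep(60)

select
        sum(totalq) - :start_totalq     delta_totalq,
        sum(wait)   - :start_wait       delta_wait,
        round(
                (sum(wait)   - :start_wait) /
                nullif(sum(totalq) - :start_totalq, 0),
                2
        )                               avg_wait
from
        v$queue
where
        type = 'COMMON'
;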

Before you ask, I can’t explain how the waits on D001 got to be so long when the similar number of waits on D000 were so short – all the explanations I hypothesised end up with a logical (usually statistical) flaw. Suggestions will be welcomed.

Footnote: Most of the notes I’ve seen about v$queue on the Internet say that there is only ever one COMMON queue, so the number of rows in v$queue will always be one more than the number of rows in v$dispatcher – but my copy of 11.1.0.6 seems to start up with two COMMON queues (dictated by the hidden parameter _shared_server_num_queues) – which are always fairly evenly loaded.
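If you want to check the setting on your own system, the usual (unsupported) approach is to query the x$ tables as SYS – a minimal sketch, not taken from the original post:

select
        i.ksppinm       name,
        v.ksppstvl      value
from
        x$ksppi  i,
        x$ksppsv v
where
        i.indx = v.indx
and     i.ksppinm = '_shared_server_num_queues'
;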

[Further reading on Shared Server / MTS]

Exponential Learning and Mumbai

Many of my endeavors have been accelerated tremendously by the networks of friends and acquaintances I have. One such group is the members of the Oaktable, who have from time to time accelerated my projects with their answers, advice and feedback. For example, back when I wrote a program to directly read Oracle’s SGA memory, many people added bits and pieces that kept me on track. I was originally just dumping memory and looking at the contents when someone pointed me to James Morle’s work in the area, and James pointed me to Jonathan Lewis’s discovery of an offset table, and so on.

Such accelerated learning has been outlined in the book “The Power of Pull” by John Hagel III, John Seely Brown and Lang Davison.

An individual or even a fixed team will, in general, produce linear output when things are going well, but when different groups and far-flung individuals work together in tight feedback loops, output, learning and creation can accelerate.

The acceleration is happening more and more with the communication and collaboration that is enabled and facilitated by the web.

This collaborative tight feedback loop is what I want to see happen in the database performance tuning and graphical monitoring arena, and it is what I think I see the beginnings of in Marcus Mönnig’s Mumbai tool. Marcus has created his own Oracle database tuning tool, but he also pulls in Tanel Poder’s Snapper package and Egor Starostin’s OraSRP.

Is this valid SQL syntax? :-)

I’m talking about this:

select-1from from dual;

Looks invalid, right? Well, let’s run it:

SQL> select-1from from dual;

       ROM
----------
 -1.0E+000

This is because:

  1. Oracle doesn’t need whitespace for tokenizing the SQL statement (differences in character classes will do – as I’ve explained here)
  2. The first from “keyword” in the statement above is broken into two tokens, because an “F” right after a digit means that the preceding number is a FLOAT (and a “D” means DOUBLE); the tokenizer stops right there, knowing that whatever comes after this character (“ROM”) is the next token, which according to Oracle’s SQL syntax rules will be treated as the output column alias

The following funky-looking SQL statements are also valid:

SQL> select.1e2ffrom dual;

     .1E2F
----------
  1.0E+001

SQL> select.1e2fas"."from dual;

         .
----------
  1.0E+001

In the upper example, “.1e2f” means the number .1 * 10^2 (scientific notation) represented internally as a FLOAT, and in the lower one I’ve just added a column alias with the “AS” keyword to make the SQL look a bit crazier.
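By the same tokenizing logic the “D” suffix mentioned above should behave in the same way – a sketch I haven’t reproduced from the post, on the assumption that “D” terminates the literal as a DOUBLE just as “F” does for FLOAT:

-- expected to be read as: select -1d rom from dual
-- i.e. a BINARY_DOUBLE literal -1 with ROM as the column alias
select-1drom from dual;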

:-)


Query Executes in Seconds with the RULE Hint and Several Minutes Otherwise – What Would You Do?

January 10, 2011 An interesting hypothetical question was posed on the OTN forums yesterday.  Consider this situation.  You are using Oracle Database 11.2.0.2 and you find a query that is executing much slower than expected, typically requiring 10 to 15 minutes to execute.  You add a RULE hint to the query and find that the [...]

Cost – again

Browsing through some postings on Tony Hasler’s blog a little while ago I found this response to a note he had posted on some anomalies (i.e. bugs) in the costing of the “(min/max)” index scans:

My current understanding is it is not valid to try to compare costs across different queries (even if you just alter it by adding a hint). In general a better plan will have a lower cost but you cannot rely on this metric. The metric is really for the CBO to choose between alternative plans for this specific query, not to compare plans generated for different queries.

Now I know that this is a statement that pretty much paraphrases something that Tom Kyte wrote on AskTom several years ago – but it’s wrong. As I’ve pointed out in the past, “Cost is Time”. The cost of a query represents the optimizer’s estimate of how long it will take that query to run – so it is perfectly valid to compare the cost of two queries to see which one the optimizer thinks will be faster; but, thanks to limitations and defects in the optimizer, it may not be entirely sensible to do so.

The point I want to address in this post though is the comment that “it’s valid to compare the cost of different plans, but not to compare the cost of two different queries”. Consider the following queries:

select
	t1.v1
from
	t1
where	t1.id in (
		select
			t2.id
		from	t2
		where	t2.n1 = 15
	)
;

select
	t1.v1
from
	t1
where	exists (
		select
			t2.id
		from	t2
		where	t2.n1 = 15
		and	t2.id = t1.id
	)
;

select
	t1.v1
from
	(
		select	distinct t2.id
		from	t2
		where	t2.n1 = 15
	) t2,
	t1
where
	t1.id = t2.id
;

select
	v1.v1
from	(
	select
		distinct
			t1.rowid,
			t2.id,
			t1.v1
	from
		t2, t1
	where
		t2.n1 = 15
	and	t1.id = t2.id
	) v1
;

Tables t1 and t2 have the following definitions, so all four queries are logically equivalent:

Name                    Null?    Type
----------------------- -------- ----------------
ID                      NOT NULL NUMBER
N1                      NOT NULL NUMBER
V1                               VARCHAR2(6)
PADDING                          VARCHAR2(100)

According to the claim, it is not valid to compare the costs that the optimizer gives you for these four different queries – but they are the same query. In principle the optimizer might transform the IN to an EXISTS, it might simply unnest, it might unnest and merge – so when the optimizer is “comparing different costs for the same query”, it will also be “comparing costs of different queries”.
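If you want to see the point for yourself, you can generate a costed plan for each variant and compare them – a minimal sketch, assuming the t1/t2 tables above exist with representative statistics (the statement_id labels are just ones I’ve picked):

explain plan set statement_id = 'IN_SUBQ' for
select  t1.v1
from    t1
where   t1.id in (
                select  t2.id
                from    t2
                where   t2.n1 = 15
        )
;

select  *
from    table(dbms_xplan.display('PLAN_TABLE', 'IN_SUBQ', 'BASIC +COST'));

-- repeat with different statement_id values for the EXISTS, DISTINCT-join
-- and inline-view versions, then compare the Cost shown at operation 0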

Cost IS time – but only in theory. The “trick” to sorting out optimization problems lies in recognising where the optimizer model is not right for your data, or the optimizer arithmetic is too simplistic or has a bug.

Planning failures…

My sister is currently in hospital having some major surgery. The front line staff have been great, but there seems to be a total absence of planning around the user-experience of the hospital that leaves me feeling like it is all a bloody big joke. These things stand out:

  • A big new hospital has been built recently. Unfortunately it is not big enough to house all the departments of the two existing (old and decrepit) hospitals on the same site, or the other hospital about 2 miles away. Now we have 4 hospitals in various states of repair, some of which have the same departments split across multiple sites. This should all be one hospital, then you would know where you have to go. FAIL.
  • The new hospital is up and running, but some of the roads to it are not properly finished yet. FAIL.
  • On the site in question there are 3 distinct hospitals, but there is nowhere near enough parking to cope with them. Some of the staff told me they have trouble parking! Today I spent 30 minutes trying to find a parking space. A good 15 minutes of this time was spent in a car park with no spaces left. The top floor had been closed, reducing the capacity, but the ticket machine still let people in based on the full capacity. There was a guy letting us out for free because they knew there were no spaces, but they didn’t bother telling the queues of people still coming in that they were sitting in a 15 minute queue with no possibility of parking. I was hoping to catch 30 minutes of visiting time with my sister before she went to surgery. As it was I got about 2 seconds as they were putting her in the lift, and I was so out of breath from running up 5 flights of stairs that all I managed to say was have a good one. FAIL.
  • The first car park I tried was really poorly designed, such that people wanting to leave had to drive round in the same queue as people coming in. This added to the congestion and frustration. FAIL.
  • The lifts in the building are so slow, and there are so few of them, that I’ve been forced to use the stairs every time. I don’t mind so much, but it’s a bit off to expect my mom in her sixties to climb 5 very long flights of stairs, which is what we had to do yesterday. FAIL.

Hospitals are about people, so they should be planned and designed to maximize the user experience. A negative user experience affects everyone. The patients are anxious because they wonder where their visitors are. The visitors are sitting in their cars going nuts. The staff have to deal with irate patients and visitors who are freaking out because of factors that are nothing to do with them. The people involved in planning the new Queen Elizabeth Hospital in Birmingham should be totally ashamed of themselves. If they came to me looking for a job and I knew they were involved in this project I would send them packing!

Cheers

Tim…

The Broadband Joke…

I received an email from my broadband supplier telling me that my line has been upgraded and I can now get up to a 24M connection. When I was first upgraded to an 8M line I actually got 8M. I checked today and I am struggling to hit 2M. I live in the center of a big city and I am *really* close to an exchange. Of course I’m not alone. It seems 68% of US broadband connections don’t meet the recent minimum definition of broadband (read). Do they really think I’m going to pay more cash to get a 24M connection that serves content at 2M? I think not.

Cheers

Tim…

Android vs iPhone user base. Can I have some real comparisons please?

I’ve just seen another headline about how the Android user base has overtaken that of the iPhone in the US and it really irks me. Let me start by saying I’ve not got either, so this is not a disgruntled fan-boy post. Also, my argument is based on the situation in the UK market, so if the US market is very different I apologize.

My problem with this sort of headline is the lack of clarity in these comparisons. How do the writers define a smartphone? It seems that where Android is concerned they just mean a phone that can actually run Android. I guess Android does allow for most of the functionality you would commonly associate with a smartphone. OK. So let’s assume this is a fight between the iPhone and anything else that can run Android.

If I do a price comparison from one specific UK retailer, based on their “most popular” contracts here’s what I get:

  • iPhone 4: £139 one-off payment, plus £35.75 contract
  • Android: Free phone. £15.32 per month.

So why is there such a discrepancy? Because you can run Android on some very mediocre phones. In some cases the kit is inferior to the first generation iPhones. I would suggest the cheaper Android options, although capable of doing all the things associated with a smartphone, do them so badly (mostly due to very small screen size) that they shouldn’t really be called smartphones. They certainly shouldn’t be compared with the user experience of the iPhone 4 or the top end Android phones. Also, the basic Android contract listed above comes with no data plan, so it’s a phone with no internet connectivity. That’s right, you have bought a smartphone that is really not smart. In the UK market Android phones span the entire range from “cheap and cheerful” to expensive and bleeding edge.

So when I look at these headlines I have to question what is really being compared. If you buy an iPhone and only use it as a mobile phone you are a bit of a muppet. If you buy a bottom of the range Android phone and contract, I don’t think you should be included as a smartphone user because what you have is not usable in a manner that I consider a smartphone should be.

So what I would like to see is a comparison of people using the iPhone 4 with people using “real” Android smartphones. In the UK at least, I think this would drastically alter the results.

Cheers

Tim…

Snapper 3.52 – With Oracle 9.2 support!

As I promised last year, I have two Christmas gifts for you. I have already forgotten what the other one was supposed to be :), but the first one is Snapper v3.52, which has (the much requested) Oracle 9.2 support!

The syntax is the same, and with Snapper you can now sample ASH-like data on Oracle 9.2 too. Instead of SQL_IDs it will display SQL hash values:

SQL> @snapper ash,ash1=user+sql_id,ash2=sid+event 5 1 all
Sampling SID all with interval 5 seconds, taking 1 snapshots...

-- Session Snapper v3.52 by Tanel Poder @ E2SN ( http://tech.e2sn.com )

--------------------------------------------------
Active% | SQL_HASH_VAL | EVENT
--------------------------------------------------
    19% | 867131449    | db file scattered read
    19% | 1458866975   | db file scattered read
    13% | 1267657304   | db file scattered read
     6% | 884811952    | ON CPU
     6% | 581825634    | db file scattered read
     3% | 867131449    | ON CPU
     3% | 1267657304   | ON CPU
     3% | 1866659945   | ON CPU
     3% | 1671194465   | db file scattered read
     3% | 3021169464   | ON CPU

------------------------------------------------
Active% | USERNAME             | SQL_ID
------------------------------------------------
    23% | SYSTEM               | 867131449
    19% | SYSTEM               | 1458866975
    16% | SYSTEM               | 1267657304
     6% | SYSTEM               | 581825634
     6% | SYSTEM               | 884811952
     3% | SYSTEM               | 1558333473
     3% | SYSTEM               | 1671194465
     3% | SYSTEM               | 1866659945
     3% | SYSTEM               | 1927486197
     3% | SYSTEM               | 2700565926

--------------------------------------------
Active% |    SID | EVENT
--------------------------------------------
    77% |     18 | db file scattered read
    19% |     18 | ON CPU
     3% |     18 | db file sequential read

--  End of ASH snap 1, end=2011-01-10 03:02:58, seconds=5, samples_taken=31

PL/SQL procedure successfully completed.
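
If you need the text behind one of those statements on 9.2, you can look the hash value up in v$sql – a quick sketch (not part of Snapper), using a hash value from the sample output above:

select  sql_text
from    v$sql
where   hash_value = 867131449;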


Other than the 9i change, the rest of Snapper is pretty much the same as before, with some minor bugfixes and additions.

You can download it from here.

If you want to get the most out of snapper, read this article here (and make sure you look inside the script!)

  • http://tech.e2sn.com/oracle-scripts-and-tools/session-snapper

Note: Big thanks to Marcus Mönnig, who wrote the additional 9i support code for Snapper first, and to Jamey Johnston for his additions (and to myself for some final polishing fixes ;-)

Note 2: I've agreed with Marcus that he can include Snapper in his free performance tool distribution in unchanged form – check out his Mumbai tool, which could be useful for Oracle performance monitoring...


Licensing Requirements for SQL Profiles

Jonathan Lewis reminded me on my last post that using SQL Profiles (because they are part of the SQL Tuning Advisor) requires a license for the Oracle Tuning Pack (which in turn requires a license for the Diagnostics Pack). He also mentioned that Baselines do not require any additional license (at least creating and using Baselines on SQL statements). It’s been a while since I worked on a database that didn’t have both packs, and frankly I wasn’t sure I had a good handle on what was allowed and what wasn’t. So I thought it might be worthwhile to check. There is an easy way to do that, by the way. I did a post a while back on Tuning Pack and Diagnostic Pack license requirements for running AWR, and on how to check what was allowed and what wasn’t using the CONTROL_MANAGEMENT_PACK_ACCESS parameter. Here’s a link to the post:

Oracle Management Packs

Here’s an example using the same technique to show that SQL Profiles are indeed disabled by turning off the Diagnostic and Tuning Packs (at least on 11.2.0.2).
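The @flush_pool and @dplan scripts that appear in the listings aren’t reproduced in this post; in essence they boil down to something like the following (a rough sketch rather than the actual scripts):

-- roughly what @flush_pool does: clear the shared pool so the statement is re-parsed
alter system flush shared_pool;

-- roughly what @dplan does: show the plan for a given sql_id and child number
select  *
from    table(dbms_xplan.display_cursor('&sql_id', to_number('&child_no'), 'typical'));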

SQL*Plus: Release 11.2.0.2.0 Production on Fri Jan 7 21:15:21 2011
 
Copyright (c) 1982, 2010, Oracle.  All rights reserved.
 
 
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
 
SYS@SANDBOX1> @parms
Enter value for parameter: management_pack
 
 
Enter value for isset: Enter value for show_hidden: 
NAME                                               VALUE                                                                  ISDEFAUL ISMODIFIED ISSET
-------------------------------------------------- ---------------------------------------------------------------------- -------- ---------- ----------
control_management_pack_access                     DIAGNOSTIC+TUNING                                                      TRUE     TRUE       TRUE
 
SYS@SANDBOX1> @flush_pool
 
System altered.
 
SYS@SANDBOX1> @avgskew
 
AVG(PK_COL)
-----------
 16093748.8
 
1 row selected.
 
SYS@SANDBOX1> /
 
AVG(PK_COL)
-----------
 16093748.8
 
1 row selected.
 
SYS@SANDBOX1> @dplan
Enter value for sql_id: 05cq2hb1r37tr
Enter value for child_no: 
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  05cq2hb1r37tr, child number 0
-------------------------------------
select avg(pk_col) from kso.skew a where col1 > 0
 
Plan hash value: 568322376
 
-----------------------------------------------------------------------------------
| Id  | Operation                  | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |      |       |       | 44486 (100)|          |
|   1 |  SORT AGGREGATE            |      |     1 |    11 |            |          |
|*  2 |   TABLE ACCESS STORAGE FULL| SKEW |    32M|   335M| 44486   (1)| 00:08:54 |
-----------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - storage("COL1">0)
       filter("COL1">0)
 
Note
-----
   - SQL profile PROF_05cq2hb1r37tr_568322376 used for this statement
 
 
24 rows selected.
 
SYS@SANDBOX1> @flush_pool
 
System altered.
 
SYS@SANDBOX1> alter system set control_management_pack_access='none';
 
System altered.
 
SYS@SANDBOX1> @avgskew
 
AVG(PK_COL)
-----------
 16093748.8
 
1 row selected.
 
SYS@SANDBOX1> /
 
AVG(PK_COL)
-----------
 16093748.8
 
1 row selected.
 
SYS@SANDBOX1> @dplan
Enter value for sql_id: 05cq2hb1r37tr
Enter value for child_no: 
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  05cq2hb1r37tr, child number 0
-------------------------------------
select avg(pk_col) from kso.skew a where col1 > 0
 
Plan hash value: 568322376
 
-----------------------------------------------------------------------------------
| Id  | Operation                  | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |      |       |       | 44486 (100)|          |
|   1 |  SORT AGGREGATE            |      |     1 |    11 |            |          |
|*  2 |   TABLE ACCESS STORAGE FULL| SKEW |    32M|   335M| 44486   (1)| 00:08:54 |
-----------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - storage("COL1">0)
       filter("COL1">0)
 
 
20 rows selected.
 
SYS@SANDBOX1> alter system set control_management_pack_access='DIAGNOSTIC+TUNING';
 
System altered.
 
SYS@SANDBOX1> @flush_pool
 
System altered.
 
SYS@SANDBOX1> @avgskew
 
AVG(PK_COL)
-----------
 16093748.8
 
1 row selected.
 
SYS@SANDBOX1> /
 
AVG(PK_COL)
-----------
 16093748.8
 
1 row selected.
 
SYS@SANDBOX1> @dplan
Enter value for sql_id: 05cq2hb1r37tr
Enter value for child_no: 
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  05cq2hb1r37tr, child number 0
-------------------------------------
select avg(pk_col) from kso.skew a where col1 > 0
 
Plan hash value: 568322376
 
-----------------------------------------------------------------------------------
| Id  | Operation                  | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |      |       |       | 44486 (100)|          |
|   1 |  SORT AGGREGATE            |      |     1 |    11 |            |          |
|*  2 |   TABLE ACCESS STORAGE FULL| SKEW |    32M|   335M| 44486   (1)| 00:08:54 |
-----------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - storage("COL1">0)
       filter("COL1">0)
 
Note
-----
   - SQL profile PROF_05cq2hb1r37tr_568322376 used for this statement
 
 
24 rows selected.

So as you can see, there was a SQL Profile on the statement that was used when Diagnostic and Tuning Packs were enabled, but when I set CONTROL_MANAGEMENT_PACK_ACCESS to NONE, the SQL Profile was ignored. Let’s try the same test with a Baseline.
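The @create_baseline script prompts for a sql_id and plan_hash_value and loads that plan from the cursor cache; in essence it wraps the documented DBMS_SPM call below (a rough sketch, not the actual script):

declare
  l_plans  pls_integer;
begin
  -- load the plan for the given cursor into a SQL plan baseline
  l_plans := dbms_spm.load_plans_from_cursor_cache(
               sql_id          => '05cq2hb1r37tr',
               plan_hash_value => 568322376,
               fixed           => 'NO',
               enabled         => 'YES'
             );
  dbms_output.put_line(l_plans || ' plan(s) loaded');
end;
/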

 
SYS@SANDBOX1> @flush_pool
 
System altered.
 
SYS@SANDBOX1> @avgskew
 
AVG(PK_COL)
-----------
 16093748.8
 
SYS@SANDBOX1> @create_baseline
Enter value for sql_id: 05cq2hb1r37tr
Enter value for plan_hash_value: 568322376
Enter value for fixed (NO): 
Enter value for enabled (YES): 
Enter value for plan_name (SQLID_sqlid_planhashvalue): 
sql_id: 05cq2hb1r37tr
plan_hash_value: 568322376
fixed: NO
enabled: YES
 
Baseline SQLID_05cq2hb1r37tr_568322376 created.
 
PL/SQL procedure successfully completed.
 
SYS@SANDBOX1> @avgskew
 
AVG(PK_COL)
-----------
 16093748.8
 
SYS@SANDBOX1> /
 
AVG(PK_COL)
-----------
 16093748.8
 
SYS@SANDBOX1> @dplan
Enter value for sql_id: 05cq2hb1r37tr
Enter value for child_no: 
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  05cq2hb1r37tr, child number 0
-------------------------------------
select avg(pk_col) from kso.skew a where col1 > 0
 
Plan hash value: 568322376
 
-----------------------------------------------------------------------------------
| Id  | Operation                  | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |      |       |       | 44486 (100)|          |
|   1 |  SORT AGGREGATE            |      |     1 |    11 |            |          |
|*  2 |   TABLE ACCESS STORAGE FULL| SKEW |    32M|   335M| 44486   (1)| 00:08:54 |
-----------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - storage("COL1">0)
       filter("COL1">0)
 
Note
-----
   - SQL profile PROF_05cq2hb1r37tr_568322376 used for this statement
   - SQL plan baseline SQLID_05CQ2HB1R37TR_568322376 used for this statement
 
 
25 rows selected.
 
SYS@SANDBOX1> -- so Baseline and SQL Profile Used
SYS@SANDBOX1>
SYS@SANDBOX1> alter system set control_management_pack_access='none';
 
System altered.
 
SYS@SANDBOX1> @flush_pool
 
System altered.
 
SYS@SANDBOX1> @avgskew
 
AVG(PK_COL)
-----------
 16093748.8
 
SYS@SANDBOX1> /
 
AVG(PK_COL)
-----------
 16093748.8
 
SYS@SANDBOX1> @dplan
Enter value for sql_id: 05cq2hb1r37tr
Enter value for child_no: 
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  05cq2hb1r37tr, child number 0
-------------------------------------
select avg(pk_col) from kso.skew a where col1 > 0
 
Plan hash value: 568322376
 
-----------------------------------------------------------------------------------
| Id  | Operation                  | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |      |       |       | 44486 (100)|          |
|   1 |  SORT AGGREGATE            |      |     1 |    11 |            |          |
|*  2 |   TABLE ACCESS STORAGE FULL| SKEW |    32M|   335M| 44486   (1)| 00:08:54 |
-----------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - storage("COL1">0)
       filter("COL1">0)
 
Note
-----
   - SQL plan baseline SQLID_05CQ2HB1R37TR_568322376 used for this statement
 
 
24 rows selected.

So Baselines apparently do not require licenses for the Tuning and Diagnostics Packs; at least, they aren’t disabled by setting the CONTROL_MANAGEMENT_PACK_ACCESS parameter to NONE.
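If you want to see what is actually stored on a system, the dictionary views to check are DBA_SQL_PROFILES and DBA_SQL_PLAN_BASELINES – a minimal look along these lines:

-- SQL profiles currently stored
select  name, status, created
from    dba_sql_profiles
order by created;

-- SQL plan baselines currently stored
select  sql_handle, plan_name, enabled, accepted
from    dba_sql_plan_baselines;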