Oakies Blog Aggregator

IOUG Collaborate 2017

I have a few presentations scheduled at IOUG Collaborate 2017, Las Vegas.

1. Session 621: RAC Clusterware internals
Date/time: Mon, Apr 03, 2017 (01:30 PM – 02:30 PM) Room: Jasmine A

2. Session: RAC cache fusion internals (OakTable track)
Date/time: Wed, Apr 05, 2017 (09:45 AM – 10:45 AM) Room: South Seas C

3. Session 479: An in-depth review of ASM and internals
Date/time: Wed, Apr 05, 2017 (02:45 PM – 03:45 PM) Room: Palm B

Presentation files (updated after the presentations).

Thank you for coming to my presentation(s).

ASM review and internals
clusterware internals
RAC_cache_fusion_internals
rac_training_scripts

Come to Vegas!

Collaborate 2017

Every year I make the trek to Vegas for the large Collaborate conference and 2017 is no different!

It’s a busy schedule for me at Delphix, so buckle up and hang on for this next week’s events!

Sunday, April 2nd, 9am-1pm

Pre-Conference Hands on Lab

Location: South Seas C  Session# 666

Registration is required for this special conference event, and it will give registered attendees a chance to get hands-on experience with database virtualization.  This is an important skill set for database administrators, who will discover how easy it is to migrate to the cloud with virtualized environments, and for developers and testers, who will discover how to get those pesky, monstrous databases to move at the same speed as DevOps.

Monday, April 3rd, 1:30-2:30pm

Database Patching and Upgrading Using Virtualization (IOUGenius Session)

Location:  Jasmine C  Session# 149

Tuesday, April 4th, ALL DAY – OK, for the entire time the exhibitor area is up!

Preparing an Azure Environment - Part I

Azure is the second most popular cloud platform to date, so it’s naturally the second platform Delphix is going to support on our road to the cloud.  As I start to work through the options for deploying Delphix, there are complexities in Azure I need to educate myself on.  As we’re just starting out, there’s a lot to learn and a lot of automation we can take advantage of.  It’s an excellent time for me to get up to speed with this cloud platform, so hopefully everyone will learn right along with me!

We’ll be using Terraform to deploy to Azure, just as we prefer to use it for our AWS deployments.  It’s open source, very robust and has significant support in the community, so we’ll switch between cloud setup and Terraform prep in many of these posts.  Before we can do that, we need to set up our Azure environment after we’ve registered our subscription with Azure.

Azure Consoles

There are the New and the Classic consoles for Azure, but there are also items in the modern, New console that are marked as “Classic” even though they aren’t part of the actual Classic console.  I found this a bit confusing, so it’s good to be clear about the distinction.


Azure’s “New” Portal, with its modern, sleek design


Azure’s “Classic” management interface, with its pale blue and white color scheme, which still serves a very significant purpose

Once you’ve created your Azure account, you’ll find that you need access to the Classic console to perform many of the basic setup tasks, while the New console is better for advanced administration.

Preparation is Key

There are a number of steps you’ll need to perform in preparation for Delphix to deploy to Azure.  The Delphix engine, a source and a target are our goal, so we’ll start simple and work our way out.  Let’s see how much I can figure out and how much I may need to look to others more experienced to get me through.  No matter what, you’ll need both consoles, so keep the links above handy and I’ll refer to the consoles by “New” and “Classic” to help distinguish them as I go along.  Know that in this post, we’ll spend most of our time in the Classic console.

Set up an Account and Add Web App

If you don’t already have one, Microsoft will let you set up an account and will even give you $200 in free credits to use.  Once you sign up, you need to know where to go next.  This is where the “Classic” console comes in, as you need to set up your “application” that will be used for your deployment.

Log into the “Classic” console and click on Active Directory and the Default Directory highlighted in blue.  This will open a new page and you will have the opportunity to click Add at the bottom of the page to add a new Active Directory application.

  • Name the application (open a text editor and copy and paste the name of the app into it – you’ll need this data later)
  • The application type is web app or API
  • Enter a URL/URI – and yes, they can be made up.  They don’t have to be real.

Client and Client Secret

Now that your application is created, you’ll see a tab called Configure.  Click on this tab and you’ll see the client ID displayed.  Copy the Client ID and add that to your text editor, also for later.

Scroll down and you’ll see a section called Keys.  Click on the button that says “Select Duration” and choose 1 or 2 years.  At the very bottom of the screen, you’ll see a Save button, click it and then the Secret passcode will be displayed for you to copy and paste into your text editor.  Do this now, as you won’t be able to get to it later.

Tenant ID

To the left of the Save button, you’ll see “View Endpoints”.  Click on this and you’ll see a number of entries.  The tenant ID is the value repeated at the end of each of the entries.  An example is shown below:

[Screenshot: the endpoint list, with the same tenant ID repeated in each entry]

Copy and paste this into your text editor under a heading of tenant ID.

Add Application to the Active Directory

Now that you’ve created this framework, you need to grant permissions to use it all.  In the Configure tab, scroll to the very bottom where it says “Permissions to Other Applications” and click on Add Application.  Choose Azure Service Management API from the list (if you have a new account, you won’t have much to choose from) and click on the checkmark in the lower right corner of the pane.  This will return you to the previous page.  Click on the delegated permissions drop-down, choose to grant it Access Azure Service Management as organization, and then save.

Subscription Data

Now, log into the New portal and click on Subscriptions on the left hand side.  Click on the Subscription and it will open up to display your Subscription ID, which you’ll need to copy and paste into your text editor.

Click on Access Control (IAM) and click on Add.  You may only see your username at first, but the applications are there – they just aren’t displayed by default.  Type in the application name you saved in your text editor (for example, mine is Web_K_Terra).  Reminder: you must type the name of your app exactly as you did when you created it (it is case sensitive).  Grant the Reader and Contributor roles from the role list, saving between each additional role.

You should now see your user in the list with both roles assigned to it, like the example below for the Web_K_Terra app:

[Screenshot: the Web_K_Terra app listed with the Reader and Contributor roles assigned]

Our configuration is complete and ready to go onto the networking piece.

The first part of my terraform template is ready, too.  All the pertinent data that I required from my build out has been added to it and it looks something like the following:

provider "azurerm" {
  subscription_id = "gxxxxxxx-db34-4gi7-xxxxx-9k31xxxxxxxxxp2"
  client_id       = "d76683b5-9848-4d7b-xxxx-xxxxxxxxxxxx"
  client_secret   = "trKgvXxxxxxXXXXxxxXXXXfNOc8gipHf-xxxxxxXXx="
  tenant_id       = "xxxxxxxx-9706-xxxx-a13a-4a8363bxxxxx"
}

This is a great start to getting us out on Azure.  In Part II, we’ll talk about setting up connectivity between your desktop and Azure for remote access, and recommendations for tools to access it locally.

 




Adding Indexes

The following question came up on the OTN database forum recently:


We have below table with columns,

Table T1
Columns:
-----------
Col_1, Col_2, Col_3, Col_4, Col_5, Col_6, Col_7, Col_8, Col_9, Col_10, Col_11, Col_12, Col_13, Col_14, Col_15

on which below indexes are created.

XXTEST_Col_1    Col_1
XXTEST_Col_2    Col_2
XXTEST_Col_3    Col_3
XXTEST_Col_5    Col_5
XXTEST_Col_6    Col_6
XXTEST_Col_7    Col_7
XXTEST_Col_8    Col_8
XXTEST_Col_8    (Col_4, Col_10, Col_11)

I have requirement to update table T1 as below and it’s taking really long. [JPL: I’m assuming that the naming of the second xxtest_col_8 index is a trivial error introduced while the OP was concealing names.]

UPDATE T1
SET Col_3 = x_value,
    Col_6 = y_value
where Col_4='N'
and Col_3 IS NULL;

To improve performance, created below index and it improved the performance of the update.

 
New Index:  XXTEST_Col_4    (Col_4, Col_3)

My question is it good idea to create the above index with already existing indexes? Is there any other better way to handle this?

There is insufficient information to answer these questions properly but if I had been given this problem there are several things I’d want to check before I came up with any answers. (In the absence of complete information the default answers are “probably not”, “probably”).

What Happened

The first question, of course, is “why did the update take a really long time?”, and the second is “why did the performance improve after the index creation?”.

There are (generally) two execution paths for the update – full tablescan or index access through the index starting with col_4. Whichever path Oracle took, would the update have been faster (and fast enough) if the alternative path had been taken?  (I have to include the “generally” caveat since it’s just possible that with silly settings for a couple of parameters and statistics the optimizer could have been doing an amazingly stupid “index full scan” on any index that contained a column declared as not null.)

Then consider the fact that the update is changing two columns that are indexed – and one of those columns (col_3) is changing from null to a value, so it’s going to increase the size of the index. Is the slow performance due to the time spent maintaining the indexes, and is the improved performance due to the fact that there are no rows left to update because the first run of the test updated all of them ?

It does seem reasonably likely that Oracle would use the new index for the update and that the better precision would be the cause of the improved performance; but a side effect of the index creation is that Oracle would now have some statistics about the combination of (col_4, col_3) that wasn’t previously available. Perhaps gathering stats on the column group (col_4, col_3) would have been sufficient to result in a change in execution path that would have given the required performance without introducing the extra maintenance costs.
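For example, a minimal sketch of that alternative – using the table and column names from the OP’s question, and the same style of gather call that appears later in this note – would be:

begin
        dbms_stats.gather_table_stats(
                ownname    => user,
                tabname    => 't1',
                method_opt => 'for columns (col_4, col_3) size 1'
        );
end;
/

The gather call creates the (col_4, col_3) extension as a side effect, so the optimizer gets statistics about the combination without the cost of maintaining another index.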

What’s the requirement

Before deciding whether or not an index should be used to improve the performance of a statement it’s important to have some idea of the cost/benefit/risk analysis. This often boils down to “how much data should change, how often does it happen, what could possibly go wrong”.

As far as “what could possibly go wrong” – by adding the index (col_4, col_3) we introduce 4 groups of risk:

  • any SQL currently using the col_3 index might change plans
  • any SQL currently using the (col_4, col_10, col_11) index might change plans
  • any query that has a predicate “col_3 is null” might change plans because that predicate could be resolved in the index if col_4 had been declared not null,
  • any query with equality predicates on both col_3 and col_4 might change plans because the new index supplies some statistics about the combination

We don’t have enough information to evaluate the degree of risk – and we haven’t been given any figures about the volume of data that changes or the frequency with which the update runs, so we can’t really say much beyond raising the warning flags.
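If you did have access to the system, though, one way to put some rough numbers against the first two risks would be to check the shared pool for statements whose plans currently reference those indexes – a minimal sketch, assuming the index names from the OP’s list:

select  distinct sql_id
from    v$sql_plan
where   object_owner = user
and     object_name in ('XXTEST_COL_3', 'XXTEST_COL_8')
;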

What does the data look like?

Despite my closing comment above, we have two little clues about the data. First is that one of the predicates is col_4 = ‘N’ – to me this always raises the suspicion that we’re looking at a column which is a Yes/No, or Yes/No/NULL column; secondly the OP has marked the following response as “Correct Answer”:

Depends on your data.
Check count of both queries

select count(*) from T1 where Col_4='N' and Col_3 IS NULL;
select count(*) from T1 where Col_4='N';

If there is a significant difference in count, and delete is executed frequently, it might be good to have

If we assume that this means (a) there is a significant difference in the counts and (b) the delete IS executed frequently then we have some (though not really enough) useful information about what the data looks like and how it is handled. Perhaps over a short period of time a small volume of new data appears where col_3 is null and there is a scheduled mechanism that looks for that data and modifies it.

If that’s the case then an index that gives us high precision when we’re searching for a (very) small amount of data from a very large table would be very helpful, while an index that starts with the first of our search columns (col_4 is the leading column of an existing index) might actually be a liability if we do a large range scan, visit the table for lots of rows, and then discard them because they fail our second predicate.

However, if this IS the case then it’s not a good idea to create a large index that is constantly subject to change – especially if the update from NULL to something else results in updates that scatter the col_3 values randomly around the index. (The NULL rows will be well packed in the index, and may be in a small collection of recent table blocks; but the update will delete one index entry and we have no idea about the impact that inserting the replacement could have.) So, if we’re also allowed to modify the code, let’s create an index that is as small as possible, won’t be used accidentally by any other code, and minimises the workload when updated. Here’s the code to create the index and one option for gathering stats on the resulting hidden column, and the modified update code:


create index xxtest_fbi1 on t1(
        case when col_4='N' and col_3 is null then 0 end
)
;

begin
        dbms_stats.gather_table_stats(
                user,
                't1',
                method_opt=>'for columns (case when col_4=''N'' and col_3 is null then 0 end) size 1'
        );
end;
/

update t1 set
        col_3 = {x_value},
        col_6 = {y_value}
where
        case when col_4='N' and col_3 is null then 0 end = 0
;

This index will contain entries for just those rows that we need to update, and when we update them the index entries will simply disappear. The index is the smallest that we could have, with the minimum workload imposed, and no risk of being hi-jacked by any other process.

As a variation on a theme, and depending on version, we could create a virtual column with this definition (making it invisible in 12c) and creating an index on the virtual column. For preference I think I’d use the FBI in 11g, and the invisible virtual column in 12c.


alter table t1
add (
        wanted_flag invisible
        generated always as (
                case when col_4='N' and col_3 is null then 0 end
        ) virtual
)
;

begin
        dbms_stats.gather_table_stats(
                user,
                't1',
                method_opt=>'for columns wanted_flag size 1'
        );
end;
/

create index xxtest_virt1 on t1(wanted_flag);

update t1 set
        col_3 = {x_value},
        col_6 = {y_value}
where
        wanted_flag = 0
;

The thread raised a couple of other worthwhile points – one post pointed out that if they were going to create a new index they would want to make it as useful for as many other processes as possible: this, of course, is a different view on the cost/benefit/risk assessment, especially (I would say) if there is no option to create a very small, low-cost index that is guaranteed to be safe. The point also came up that there are two or three variations on how one could produce an expression that produced index entries only for the rows where col_4 = ‘N’ and col_3 is null; and a point about further possibilities if there were a few different values for col_4 that resulted in similar queries.

Apex patch for 5.1

If you are running Application Express, there is a new patch available.  There are lots of fixes, which you can read about here:

http://www.oracle.com/technetwork/developer-tools/apex/application-express/apex-511-patch-set-notes-3661846.html

I just downloaded patch 25341386 and followed the installation instructions and it went through with no problems at all in just a few minutes.


Verifying Application Express version...
...have version 5.1.0.00.45

PL/SQL procedure successfully completed.

Verifying Application Express version...
...have version 5.1.0.00.45

PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.


Session altered.

Stopping and disabling APEX jobs

PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.


Session altered.


Grant succeeded.

Verifying Application Express version...
...have version 5.1.0.00.45

PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.


Session altered.


...
...
...

Validating Application Express
(09:02:38) Starting validate_apex for APEX_050100
(09:02:41) Checking missing sys privileges
(09:02:41) Recompiling
(09:02:42) Checking for objects that are still invalid
(09:02:42) Key object existence check
(09:02:42) Setting DBMS Registry for APEX to valid
(09:02:42) Exiting validate_apex

PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.

timing for: Complete Patch
Elapsed: 00:02:33.76
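If you want to double-check the result for yourself, a quick query against the APEX dictionary (assuming the apex_release view) confirms the installed version:

select version_no
from   apex_release;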

Enjoy!

Index out of range

I’ve waxed lyrical in the past about creating suitable column group statistics whenever you drop an index because even when the optimizer doesn’t use an index in its execution path it might have used the number of distinct keys of the index (user_indexes.distinct_keys) in its estimates of cardinality.

I’ve also highlighted various warnings (here (with several follow-on URLs) and here) about when the optimizer declines to use column group statistics. One of those cases is when a predicate on one of the columns goes “out of range” – i.e. is below the column low_value or above the column high_value. Last night it suddenly crossed my mind that if we drop an index and replace it with a column group we might see an example of inconsistent behaviour: what happens when the index exists but the predicate is out of range – would you find that dropping the index and replacing it with a column group would give you different cardinality estimates for out of range predicates?

Here’s the demonstration of what happened when I created a simple test on 12.1.0.2:


rem
rem     Script:         index_v_colgrp.sql
rem     Author:         Jonathan Lewis
rem
rem     Last tested
rem             12.1.0.2
rem

create table t1
nologging
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        rownum                          id,
        mod(rownum-1,100)               n1,
        mod(rownum-1,100)               n2,
        lpad('x',100,'x')               padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6 -- > comment to avoid WordPress format problem
;

begin dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                method_opt       => 'for all columns size 1'
        );
end;
/

create index t1_i1 on t1(n1, n2);

set autotrace traceonly explain

I’ve created a table with 1M rows, where n1 and n2 are perfectly correlated – there are 100 distinct pairs of values (ranging from (0,0) to (99,99)). Now with autotrace enabled I’ll execute three queries – two with the index on the table, of which one will be the baseline plan for predicates that are “in-range” and the other will take the predicates out of range, and the third after I’ve dropped the index and substituted a matching column group to see what I get for the “out of range” plan. The plans may produce different paths as the index disappears, of course, but we’re only interested in the cardinality estimates in this experiment.

Here’s the code to run the three queries:


select  padding
from    t1
where
        n1 = 50
and     n2 = 50
;

select  padding
from    t1
where
        n1 = 110
and     n2 = 110
;

drop index t1_i1;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                method_opt       => 'for columns (n1, n2) size 1'
        );
end;
/

select  padding
from    t1
where
        n1 = 110
and     n2 = 110
;

And the three execution plans:


--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 10000 |  1044K|  2142   (4)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   | 10000 |  1044K|  2142   (4)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("N1"=50 AND "N2"=50)


---------------------------------------------------------------------------------------------
| Id  | Operation                           | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |       |    79 |  8453 |    83   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1    |    79 |  8453 |    83   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | T1_I1 |    79 |       |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("N1"=110 AND "N2"=110)


--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |    79 |  8453 |  2142   (4)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |    79 |  8453 |  2142   (4)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("N1"=110 AND "N2"=110)

In summary:

  • With the index in place and the predicates in range the optimizer used user_indexes.distinct_keys to calculate cardinality.
  • With the index in place and the predicates (or just one of them, in fact) out of range the optimizer used the individual column selectivities with linear decay.
  • With a column group instead of an index the optimizer behaved exactly as it used to with the index in place.

So my concern that substituting column groups for indexes might introduce a change was unfounded – the optimizer was being silly (legal disclaimer: that’s just my opinion) with indexes, and the silly (ditto) behaviour with column groups hasn’t changed anything.
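If you want to see the figures the optimizer is drawing on in the two cases, a quick check of the index statistics and the column group extension (object names as used in the script above) might look like this:

select  index_name, distinct_keys
from    user_indexes
where   table_name = 'T1'
;

select  extension_name, extension
from    user_stat_extensions
where   table_name = 'T1'
;

Before the drop the first query should report 100 distinct keys for T1_I1; after the drop and the second gather call the second query should show the (N1, N2) extension that supplies the equivalent information.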

I’ll have to go back a couple of versions of Oracle to repeat these tests – maybe this behaviour with user_indexes.distinct_keys in place is relatively recent, but it’s another reason why execution plans may change suddenly and badly as time passes when “nothing changed”.

 

JAN17 Proactive Bundle Patch + Adaptive Statistics control

If you have to create a new database now (I’m writing this in March 2017) for a migration of an OLTP database with minimal risk of regression, I would recommend:

  • The latest patchset of Oracle Database 12cR1
  • The latest Proactive Bundle Patch
  • The two patches to get full control over Adaptive statistics

This post gives more detail about it and which patches to apply. It would be nice to have those patches merged into each bundle patch, but that is not the case.

dbi services 12cR2 free event

This month, 12cR2 was released and we immediately informed our customers about the new features that we think are important to know in order to plan when to go to 12.2, and for which projects. We started with the most important one, Adaptive Statistics, which helps to avoid all those performance issues encountered after migrating to 12cR1. We also mentioned that this new behavior has been backported to 12cR1 with two patches, as explained here by Clemens Bleile.

12cR1

This event was to inform about 12.2, but a lot of customers were waiting for it to plan their 12.1 migration. Knowing the roadmap and the new features helps to plan what can wait for a stable 12cR2 (after a few PSUs) and what must be migrated now to 12cR1. This is why we did everything to roll out this event as soon as possible once the main platforms (Linux and Windows) were publicly available.

What to install for 12.1.0.2

Our recommendation for new installations of 12c, for current migrations with minimal risk of regression, is:

  • The latest patchset of Oracle Database 12cR1: 12.1.0.2
  • The latest Proactive Bundle Patch: JAN2017 BP
  • The two patches to get full control over Adaptive statistics: patches 22652097 and 21171382

Nothing is easy with patching, so I’ll detail how to find exactly what to install.

Latest patchset of Oracle Database 12cR1: 12.1.0.2

Now that the latest version is 12.2.0.1, you can no longer find 12.1.0.2 on the oracle.com download page.
You can download 12.1.0.2 from the Patchset 21419221: https://updates.oracle.com/download/21419221.html

Latest Proactive Bundle Patch: JAN2017 BP

Finding the latest patch set update is easy if you follow the MOS Note Quick Reference to Patch Numbers for Database/GI PSU, SPU(CPU), Bundle Patches and Patchsets (Doc ID 1454618.1)

You will download https://updates.oracle.com/download/24968615.html for Linux or https://updates.oracle.com/download/25115951.html for Windows.

Adaptive statistics: patches 22652097 and 21171382

Then you have to find the two patches to apply on top of the JAN17 Proactive BP.

The first one provides separate parameters to enable adaptive plans and adaptive statistics independently: https://updates.oracle.com/download/22652097.html, and you can find it for Oracle 12.1.0.2.170117 Proactive BP.

The second one is for the dbms_stats preference to control statistics extension creation, and you will have two surprises when following the link:
The name is AUTO DOP COMPUTES A HIGH DOP UNNECESSARILY, because the change has been merged with some other changes on concurrent statistics.
It is available only for (listed in the non-chronological order of the platform list on MOS):

  • Oracle 12.1.0.2.13 Proactive BP
  • Oracle 12.1.0.2.0
  • Oracle 12.1.0.2.5
  • Oracle 12.1.0.2.160119 Proactive BP
  • Oracle 12.1.0.2.7 Proactive BP

If you want to go further, you have to open an SR and provide an opatch lsinventory (because it seems that Oracle Support Engineers are not able to get the lsinventory for the latest Proactive BP – the recommended one). And this is where the nightmare starts. The latest we have here is for the JAN16 Proactive Bundle Patch – 12.1.0.2.160119 Proactive BP.

I can’t wait for a relevant answer from MOS support engineers, so I had a look at the patch myself. It is a very simple change actually. In DBMS_STATS it has to check whether AUTO_STAT_EXTENSIONS is ON before creating the column group. This is all in prvtstat.plb, and if we are lucky there have been no changes to it since JAN16.

Before trying it, we can check conflicts in MOS. Here are the 3 patches I would like to apply, in their most recent release for Linux:

[Screenshot: the three patches entered into the MOS conflict checker]

Yes, this is a nice feature of My Oracle Support: you can analyze the conflicts online.

The result of conflict analysis shows that we are lucky:
[Screenshot: conflict analysis result – no conflicts detected]

Ready now to download the files:
[Screenshot: the patch files ready to download]

So the last patch we need, https://updates.oracle.com/download/21171382.html, can be downloaded in its latest Proactive BP version, even if it is 1 year old. And don’t worry about its name: p21171382_12102160119DBEngSysandDBIM_Generic.zip

datapatch and verification

Don’t forget to run datapatch on your databases to be sure that the dictionary is patched.

$ORACLE_HOME/OPatch/datapatch
 
SQL Patching tool version 12.1.0.2.0 Production on Mon Mar 27 09:18:47 2017
Copyright (c) 2012, 2017, Oracle. All rights reserved.
 
Connecting to database...OK
Bootstrapping registry and package to current versions...done
Determining current state...done
Adding patches to installation queue and performing prereq checks...done
Installation queue:
Nothing to roll back
The following patches will be applied:
24732088 (DATABASE BUNDLE PATCH 12.1.0.2.170117)
21171382 (AUTO DOP COMPUTES A HIGH DOP UNNECESSARILY)
22652097 (PROVIDE SEPARATE CONTROLS FOR ADAPTIVE PLANS AND ADAPTIVE STATISTICS FEATURES)
 
Installing patches...
Patch installation complete. Total patches installed: 3
 
Validating logfiles...done
SQL Patching tool complete on Mon Mar 27 09:21:33 2017
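You can also check from the database side that the three patches have been recorded in the dictionary; a quick query against the dba_registry_sqlpatch view, for example:

select patch_id, status, description
from dba_registry_sqlpatch
order by action_time;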

And then connect to check the new Adaptive Statistics behavior:

SQL> show parameter optimizer_adaptive
 
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
optimizer_adaptive_plans             boolean     TRUE
optimizer_adaptive_reporting_only    boolean     FALSE
optimizer_adaptive_statistics        boolean     FALSE
 
SQL> select dbms_stats.get_prefs('AUTO_STAT_EXTENSIONS') from dual;
 
DBMS_STATS.GET_PREFS('AUTO_STAT_EXTENSIONS')
--------------------------------------------------------------------------------
OFF
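The defaults after patching already match the 12cR2-like behavior shown above (adaptive plans on, adaptive statistics off, no automatic creation of column group extensions). If you ever need to set them explicitly – for example on a database that was created with different values – a minimal sketch would be:

alter system set optimizer_adaptive_plans = true scope=both;
alter system set optimizer_adaptive_statistics = false scope=both;

exec dbms_stats.set_global_prefs('AUTO_STAT_EXTENSIONS','OFF')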

Summary for 12.1.0.2 new install

Don’t install a 12c database with only the software and documentation that was released 3 years ago. A lot of migration feedback has improved the stability of such migrations, and this is implemented in patchset updates and MOS notes.
A good source of information is Mike Dietrich’s blog (the best source you can find to benefit from the experience of a lot of migration projects).

Then, get the latest recommended software.

  1. The patchset: https://updates.oracle.com/download/21419221.html for your platform
  2. The patchset update https://updates.oracle.com/download/24968615.html for Unix/Linux
    or https://updates.oracle.com/download/25115951.html for Windows
  3. The Adaptive Statistics instance parameters: https://updates.oracle.com/download/22652097.html for Oracle 12.1.0.2.170117 Proactive BP
  4. The Adaptive Statistics stat preference https://updates.oracle.com/download/21171382.html for Oracle 12.1.0.2.160119 Proactive BP

Don’t forget to run datapatch on all databases, even the newly created ones.

 


12.2 Online Conversion of a Non-Partitioned Table to a Partitioned Table (A Small Plot Of Land)

In my previous post, I discussed how you can now move heap tables online with Oracle Database 12.2 and how this can be very beneficial in helping to address issues with the Clustering Factor of key indexes. A problem with this technique is that it requires the entire table to be effectively reorganised when most of […]

Little help in creating SQL Plan Baselines

One of the main design goals behind SQLd360 is to have no installation nor “evidence” left in the database, i.e. there is no SQLd360 repository in the database while there is a SQLTXPLAIN one (this isn’t necessarily bad, it’s just a different approach).

As a consequence, several little things SQLT provided are gone with SQLd360. For example, a few years ago (it’s been disabled by default for a while) SQLT generated a script to create a “custom” SQL Profile for the best performing plan it identified; something similar happened for SQL Plan Baselines stored in a SQL Tuning Set, for quick implementation of baselines down the road.

There is a new standalone script shipped with SQLd360, sql/sqld360_create_sql_baseline.sql, that aims at providing the same SPM+STS functionality SQLT provided, keeping with the idea of “no evidence left by default”.
The script is NOT executed as part of SQLd360; it can (and needs to) be executed manually at any time, which in turn means there is no need to run SQLd360 for this script to work.
The script requires three parameters:

  • SQL ID for the SQL of interest
  • Oracle Pack available in this database ([N]one, [D]iagnostic or [T]uning)
  • Plan Hash Value you wish to “freeze”

You can provide the parameters directly inline to SQL*Plus (assuming you know the PHV you want to enforce) or just enter them one by one when prompted; the script will list all the PHVs it can find (memory + history) with their performance, helping you in case you don’t recall which PHV you want.
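For the cursor cache part, that list is essentially a grouping of the child cursors by plan hash value; a hypothetical sketch of such a query (v$sql only, ignoring the AWR history side) could be:

select  plan_hash_value,
        sum(executions)                  executions,
        round(sum(elapsed_time)/1e6, 1)  elapsed_seconds
from    v$sql
where   sql_id = '&&sql_id.'
group by
        plan_hash_value
order by
        elapsed_seconds
;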

The script does just two things:

  1. It creates a SQL Tuning Set (named s_<>_<>) and loads the plan details into it; the goal is to “freeze” the plan in time so the info doesn’t get lost if the plan is aged out of memory, purged from AWR, etc.
  2. Provides copy-and-paste instructions to create a Baseline based on the plan in the STS, either in the current system or in a remote one. The script DOES NOT execute those steps, it only prints them on screen. This way you can read, understand, digest and validate them before YOU execute them (a rough manual equivalent is sketched below).
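For reference, a rough manual equivalent of those generated steps – a sketch only, assuming a Tuning Pack licence, and with the SQL Tuning Set name, SQL_ID and plan hash value below being placeholders rather than what the script actually generates:

declare
        l_cursor  dbms_sqltune.sqlset_cursor;
        l_plans   pls_integer;
begin
        -- 1. create a SQL Tuning Set and load the chosen plan from the cursor cache
        dbms_sqltune.create_sqlset(sqlset_name => 'S_EXAMPLE_STS');

        open l_cursor for
                select  value(p)
                from    table(
                                dbms_sqltune.select_cursor_cache(
                                        basic_filter => q'[sql_id = 'abcd1234xyz99' and plan_hash_value = 123456789]'
                                )
                        ) p;

        dbms_sqltune.load_sqlset(
                sqlset_name     => 'S_EXAMPLE_STS',
                populate_cursor => l_cursor
        );

        -- 2. create a SQL Plan Baseline from the plan stored in the STS
        l_plans := dbms_spm.load_plans_from_sqlset(sqlset_name => 'S_EXAMPLE_STS');
end;
/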

Hopefully it will make it a little bit easier to play with SPM.

Feedback, corrections and recommendations are welcome as usual!

Oak Table World, Collaborate 2017 Edition!

 

We weren't sure we'd get Oak Table World for Collaborate going, but Bryn made sure that it happened (better thank him when you see him!).  We'll be in the South Seas C Ballroom all day Wednesday, so come learn the database knowledge that everyone else is too frightened to talk about!

Submit your business card in the bowl at the front door and get a chance to win an AWESOME Nintendo NES Classic System with 30 games!

 

 

Wednesday, April 5

  • 8:30am - 9:30am – Keynote (no sessions)
  • 9:45am - 10:45am – Riyaj Shamsudeen: RAC Cache Fusion internals (demos of RAC cache fusion using Wireshark)
  • 11:00am - 12:00pm – Bryn Llewellyn: Why Use PL/SQL?
  • 12:00pm - 1:30pm – Lunch (see the Ted Talk slots below)
  • 1:30pm - 2:30pm – Tim Gorman: dNFS
  • 2:45pm - 3:45pm – Kellyn Pot'Vin-Gorman: Oracle vs. SQL Server – Indexing Features and Storage, View from the Multi-Platform World
  • 3:45pm - 4:15pm – Break (no sessions)
  • 4:15pm - 5:15pm – Mauro Pagano: Something cool with 10053 Trace Files
  • Oakie Hour with Beer? Reception starts at 7pm; working on this idea with Mark and IOUG

Ted Talks – 12pm-1:30pm (10 minutes long, six total slots)

  • 12:05-12:15 – Jonathan Gennick: Learnings from SQL Server and Power BI: And Why I didn't Use Oracle
  • 12:20-12:30 – Bryn Llewellyn: Ten rules for doing a PL/SQL performance experiment (of course there are many more rules, but a talk has to have a catchy title)
  • 12:40-1:30 – Bryn, Mauro, John King, Tim and Kellyn: DBA Panel

 

Do you have a 10-minute TED talk you'd like to present at lunchtime?  Send an email to dbakevlar at gmail to submit and possibly speak!  Sponsors for this great event?  Check 'em out!