Oakies Blog Aggregator

RMOUG Training Days 2015

I will be speaking at Rocky Mountain Oracle User Group Training Days 2015 (http://www.rmoug.org), with live demos (hopefully there will be no failures in the demos). My topics are:

Feb 17: Deep dive: 3:15PM to 5:15PM – RAC 12c optimization: I will discuss RAC global cache layer in detail with a few demos. You probably can’t find these deep Global Cache layer details anywhere else :)

Feb 19: Wednesday: 2:45PM to 3:45PM – Advanced UNIX tools: I will discuss both Solaris and Linux advanced tools to debug deep performance issues.

Feb 19: Wednesday: 12:15PM – 1:15PM – Exadata SIG panel with Alex Fatkulin.

Come to Denver. Come on, it won’t be cold ( I think :) )

Free Webinar "Oracle Exadata & In-Memory Real-World Performance"

It's webinar time again.

Join me on Wednesday, January 28th at AllThingsOracle.com for a session based on a real world customer experience.

The session starts at 3pm UK time (16:00 Central European). The webinar is totally free and the recording will be made available afterwards.

Here's the link to the official landing page where you can register and below is the official abstract:

Abstract

After a short introduction to what the Oracle Exadata Database Machine is, in this one-hour webinar I will look at an analysis of different database query profiles based on a real-world customer case, and at how these profiles influence the efficiency of Exadata’s “secret sauce” features as well as the new Oracle In-Memory column store option. Based on the analysis, different optimization strategies are presented along with lessons learned.

AWR Warehouse and SYSDBA

I’ve had a few folks ask me a similar question about the AWR Warehouse numerous times this week.

“How can I limit what the user of the AWR Warehouse can view in the AWR Warehouse?”

“How can I add source databases to the AWR Warehouse without DBA privileges?”

This topic bridges the area of confusion between use of the EM12c environment (which I consistently promote for use by DBAs, developers and all of IT) and the AWR Warehouse which, although it shares its user interface as part of Cloud Control, is currently a DBA’s deep analysis and research tool.

The request to limit privileges to add source databases and to limit view access to targets – but also to bind values, full SQL statements and advanced performance data – impacts the purpose of the AWR Warehouse.  I fully understand the security requirements for AWR Warehouse access as stated in Oracle’s support documentation:

“You can add and remove source databases provided you have access to the database target and to database credentials with execute privileges on the sys.dbms_swrf_internal package and the DBA role.”

https://docs.oracle.com/database/121/TDPPT/tdppt_awr_warehouse.htm#CACEEAHD
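For reference, here is a minimal sketch (with a hypothetical grantee called AWR_ADMIN) of the kind of grants that requirement translates into for an account used to add source databases:

grant dba to awr_admin;
grant execute on sys.dbms_swrf_internal to awr_admin;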

Why is this important?

The first reason is space.  AWR data with long-term retention can add up to considerable disk space if just any database is added without careful planning to ensure the right information is retained for the right databases.  The power of the Automatic Workload Repository (AWR) is that it’s always on, but always on means it’s always growing, and this is best left to a DBA resource to ensure that the allocation of space is used wisely.
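As a quick sanity check before adding a source database, a hedged sketch for reviewing its local snapshot interval and retention via the standard AWR view:

select dbid, snap_interval, retention
from   dba_hist_wr_control;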

Second, on limiting the view of data in the AWR Warehouse: when investigating an issue with any SQL execution, there are many factors to take into consideration.  One of the most significant, and often the key to answering why a performance issue occurred, is the difference between the amount of data actually returned by the where clause and objects vs. the data provided to the optimizer.

If we take the following, simple where clause into consideration:

where a.col1=b.col2
and a.col2=:b1
and b.col7=:b2;

When the optimizer uses the data provided to it by statistics, histograms and any dynamic sampling, there are a number of choices it can make from that information:

  1. Hash join on table a and b to address the join or perform nested loop if…
  2. Column a.col2 is unique, making the join quite small or…
  3. Adding b.col7 to a.col2 to the join will make it so unique that a nested loop is sufficient.

Now, what if the optimizer decided to perform a nested loop when 16 million+ rows were returned?

To the untrained eye, some may assume that the optimizer had made a poor choice or that there was a bug, and would walk away.  More often, if you have the values that were passed to the bind variables, along with the data provided to the optimizer, those assumptions fall away and a very different story presents itself.
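The AWR history views expose exactly this kind of evidence; here is a minimal sketch, assuming you already know the SQL_ID in question (supplied here as a SQL*Plus substitution variable), of pulling the captured bind values:

select snap_id, name, datatype_string, value_string, last_captured
from   dba_hist_sqlbind
where  sql_id = '&sql_id'
order by snap_id, position;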

This is why Oracle requires DBA privileges to add a source database to the AWR Warehouse and to work with the data it provides. The feature offers an abundance of data that is most useful to the professional who knows how to work with the Automatic Workload Repository.  That professional, with the access required to perform this type of analysis and research, is the database administrator, so the requirement for the AWR Warehouse makes perfect sense.

Now to return to the query, adding in the values for the bind variables, a new picture develops to research:

where a.col1=b.col2
and a.col2=6002
and b.col7='YES';

We can now verify the statistics behind the values for both a.col2 and b.col7 and accurately diagnose where the optimizer may have been misled due to incorrect data provided to the Cost Based Optimizer.
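A minimal sketch (hypothetical owner and table names) of checking the statistics the optimizer had to work with for those two columns:

select table_name, column_name, num_distinct, density, num_nulls, histogram
from   dba_tab_col_statistics
where  owner = 'APP_OWNER'
and   (   (table_name = 'A' and column_name = 'COL2')
       or (table_name = 'B' and column_name = 'COL7'));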

This may be a simple explanation of why I believe the DBA privilege policy was chosen for source database additions and view options in the AWR Warehouse, but hopefully it sheds a bit of light on the topic.

 



Copyright © DBA Kevlar [AWR Warehouse and SYSDBA], All Right Reserved. 2015.

Spatial space

One thing you (ought to) learn very early on in an Oracle career is that there are always cases you haven’t previously considered. It’s a feature that is frequently the downfall of “I found it on the internet” SQL.  Here’s one (heavily paraphrased) example that appeared on the OTN database forum a few days ago:

select table_name,round((blocks*8),2)||'kb' "size" from user_tables where table_name = 'MYTABLE';

select table_name,round((num_rows*avg_row_len/1024),2)||'kb' "size" from user_tables where table_name = 'MYTABLE';

The result from the first query is 704 kb,  the result from the second is 25.4 kb … fragmentation, rebuild, CTAS etc. etc.

The two queries are perfectly reasonable approximations (for an 8KB block size, with pctfree of zero) for the allocated space and actual data size for a basic heap table – and since the two values here don’t come close to matching it’s perfectly reasonable to consider doing something like a rebuild or shrink space to reclaim space and (perhaps) to improve performance.
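For reference, a hedged sketch of the two space-reclamation options mentioned (for an ordinary heap table in an ASSM tablespace; whether they are worth doing at all is the real question here):

alter table mytable enable row movement;
alter table mytable shrink space;

-- or, alternatively:
alter table mytable move;
-- a move leaves any indexes on the table unusable, so rebuild them afterwards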

In this case it doesn’t look as if the space reclaimed is likely to be huge (less than 1MB), on the other hand it’s probably not going to take much time to rebuild such a tiny table; it doesn’t seem likely that the rebuild could make a significant difference to performance (though apparently it did), but the act of rebuilding might cause execution plans to change for the better because new statistics might appear as the rebuild took place. The figures came from a test system, though, so maybe the table on the production system was much larger and the impact would be greater.

Being cautious about wasting time and introducing risk, I made a few comments about the question –  and learned that one of the columns was of type SDO_GEOMETRY. This makes a big difference to what to do next, because dbms_stats.gather_table_stats() doesn’t process such columns correctly, which results in a massive under-estimate of the avg_row_len (which is basically the sum of avg_col_len for the table). Here’s an example (run on 12c, based on some code taken from the 10gR2 manuals):


drop table cola_markets purge;

CREATE TABLE cola_markets (
  mkt_id NUMBER,
  name VARCHAR2(32),
  shape SDO_GEOMETRY);

INSERT INTO cola_markets VALUES(
  1,
  'cola_a',
  SDO_GEOMETRY(
    2003,  -- two-dimensional polygon
    NULL,
    NULL,
    SDO_ELEM_INFO_ARRAY(1,1003,3), -- one rectangle (1003 = exterior)
    SDO_ORDINATE_ARRAY(1,1, 5,7) -- only 2 points needed to
          -- define rectangle (lower left and upper right) with
          -- Cartesian-coordinate data
  )
);

insert into cola_markets select * from cola_markets;
/
/
/
/
/
/
/
/
/

execute dbms_stats.gather_table_stats(user,'cola_markets')
select
	avg_row_len, num_rows, blocks,
	round(avg_row_len * num_rows / 7200,0) expected_blocks
from user_tables where table_name = 'COLA_MARKETS';

analyze table cola_markets compute statistics;
select
	avg_row_len, num_rows, blocks,
	round(avg_row_len * num_rows / 7200,0) expected_blocks
from user_tables where table_name = 'COLA_MARKETS';

If you care to count the number of times I execute the “insert as select” it’s 10, so the table ends up with 2^10 = 1024 rows. The 7,200 in the calculated column converts bytes to approximate blocks on the assumption of 8KB blocks and pctfree = 10. Here are the results following the two different methods for generating object statistics:


PL/SQL procedure successfully completed.

AVG_ROW_LEN   NUM_ROWS     BLOCKS EXPECTED_BLOCKS
----------- ---------- ---------- ---------------
         14       1024        124               2

Table analyzed.

AVG_ROW_LEN   NUM_ROWS     BLOCKS EXPECTED_BLOCKS
----------- ---------- ---------- ---------------
        109       1024        124              16

Where does the difference in Expected_blocks come from? (The Blocks figure is 124 because I’ve used 1MB uniform extents – 128 blocks – under ASSM, which means 4 space management blocks at the start of the first extent.)

Here are the column lengths after the call to dbms_stats: as you can see the avg_row_len is the sum of avg_col_len.


select column_name, data_type, avg_col_len
from   user_tab_cols
where  table_name = 'COLA_MARKETS'
order by
        column_id
;

COLUMN_NAME          DATA_TYPE                AVG_COL_LEN
-------------------- ------------------------ -----------
MKT_ID               NUMBER                             3
NAME                 VARCHAR2                           7
SYS_NC00010$         SDO_ORDINATE_ARRAY
SHAPE                SDO_GEOMETRY
SYS_NC00008$         NUMBER                             0
SYS_NC00004$         NUMBER                             4
SYS_NC00005$         NUMBER                             0
SYS_NC00006$         NUMBER                             0
SYS_NC00007$         NUMBER                             0
SYS_NC00009$         SDO_ELEM_INFO_ARRAY

The figures from the analyze command are only slightly different, but fortunately the analyze command uses the row directory pointers to calculate the actual row allocation, so it picks up information about the impact of inline varrays, LOBs, etc. that the dbms_stats call might not be able to handle.


COLUMN_NAME          DATA_TYPE                AVG_COL_LEN
-------------------- ------------------------ -----------
MKT_ID               NUMBER                             2
NAME                 VARCHAR2                           6
SYS_NC00010$         SDO_ORDINATE_ARRAY
SHAPE                SDO_GEOMETRY
SYS_NC00008$         NUMBER                             1
SYS_NC00004$         NUMBER                             3
SYS_NC00005$         NUMBER                             1
SYS_NC00006$         NUMBER                             1
SYS_NC00007$         NUMBER                             1
SYS_NC00009$         SDO_ELEM_INFO_ARRAY

As a basic reminder – whenever you do anything slightly non-trivial (e.g. something you couldn’t have done in v5, say) then remember that all those dinky little script things you find on the Internet might not actually cover your particular case.

The info in OTHER_XML of view DBA_HIST_SQL_PLAN

I had some time to spend, killing time, and thought about something that was “on…

Brief introduction to ASM mirroring

Automatic Storage Management (ASM) is becoming the standard for good reasons. Still, the way it mirrors remains a mystery for many customers I encounter, so I decided to cover it briefly here.

ASM Basics: What does normal redundancy mean at all?

ASM normal redundancy

It means that every stripe is mirrored once. There is a primary on one disk and a mirror on another disk. All stripes are spread across all disks. High redundancy would mean that every primary stripe has two mirrors, each on another disk. Obviously, the mirroring reduces the usable capacity: It’s one half of the raw capacity for normal redundancy and one third for high redundancy. The normal redundancy as on the picture safeguards against the loss of any one disk.
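As a reminder of the syntax, a minimal sketch (hypothetical disk paths) of creating a normal redundancy diskgroup and letting ASM handle the mirroring:

create diskgroup data normal redundancy
  disk '/dev/mapper/asmdisk1',
       '/dev/mapper/asmdisk2',
       '/dev/mapper/asmdisk3',
       '/dev/mapper/asmdisk4';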

ASM Basics: Spare capacity

ASM spare capacity usage

When disks are lost, ASM tries to re-establish redundancy. Instead of using spare disks, it uses spare capacity. If enough free space is left in the diskgroup (worth the capacity of one disk), that works as on the picture above.

ASM 11g New Feature: DISK_REPAIR_TIME

What if the disk from the picture above is only temporarily offline and comes back online after a short while? These transient failures were an issue in 10g, because the disk got dropped immediately, followed by a rebalance to re-establish redundancy. Afterwards an administrator needed to add the disk back to the diskgroup, which caused another rebalance. To address these transient failures, Fast Mirror Resync was introduced:

disk_repair_time

No administrator action is required if the disk comes back before DISK_REPAIR_TIME (default is 3.6 hours) is over. If you don’t like that, setting DISK_REPAIR_TIME=0 brings back the 10g behavior.
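A minimal sketch (assuming a diskgroup called DATA) of adjusting the attribute:

alter diskgroup data set attribute 'disk_repair_time' = '3.6h';

-- revert to the 10g behavior (drop an offline disk immediately):
alter diskgroup data set attribute 'disk_repair_time' = '0h';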

ASM 12c New Feature: FAILGROUP_REPAIR_TIME

If you do not specify failure groups explicitly, each ASM disk is its own failgroup. Failgroups are the entities across which mirroring is done. In other words: A mirror must always be in another failgroup. So if you create proper failgroups, ASM can mirror according to your storage layout. Say your storage consists of four disk arrays (each with two disks) like on the picture below:

ASM failure groups

That is not yet the new thing; failgroups have been possible since 10g. What is new is that you can now use the Fast Mirror Resync feature on the failgroup layer as well, with the 12c diskgroup attribute FAILGROUP_REPAIR_TIME. It defaults to 24 hours.

So if maintenance needs to be done on the disk array from the example, it can take up to 24 hours before the failgroup gets dropped.
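A hedged sketch (hypothetical disk names, one failgroup per array) of a diskgroup laid out along those lines, with the 12c attribute set at creation time:

create diskgroup data normal redundancy
  failgroup array1 disk '/dev/array1_disk1', '/dev/array1_disk2'
  failgroup array2 disk '/dev/array2_disk1', '/dev/array2_disk2'
  failgroup array3 disk '/dev/array3_disk1', '/dev/array3_disk2'
  failgroup array4 disk '/dev/array4_disk1', '/dev/array4_disk2'
  attribute 'compatible.asm' = '12.1', 'failgroup_repair_time' = '24h';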

I hope you found the explanation helpful, many more details are here :-)

Tagged: ASM, Brief Introduction

Bind Effects

A couple of days ago I highlighted an optimizer anomaly caused by the presence of an index with a descending column. This was a minor (unrelated) detail that appeared in a problem on OTN where the optimizer was using an index FULL scan when someone was expecting to see an index RANGE scan. My earlier posting supplies the SQL to create the table and indexes I used to model the problem – and in this posting I’ll explain the problem and answer the central question.

Here’s the query and execution plan (from 11.2.0.x) as supplied by the OP – the odd appearance of the sys_op_descend() function calls is the minor detail that I explained in the previous post, but that’s not really relevant to the question of why Oracle is using an index full scan rather than an index range scan. The /*+ first_rows */ hint isn’t something you should be using but it was in the OP’s query, so I’ve included it in my model:


select /*+ FIRST_ROWS gather_plan_statistics scanned */ count(1)      FROM  XXX
where  (((((COL1 = '003' and COL2 >= '20150120') and COL3 >= '00000000') and COL4>= '000000000000' )
or ((COL1 = '003' and COL2 >= '20150120') and COL3> '00000000' )) or (COL1= '003' and COL2> '20150120'))
order by COL1,COL2,COL3,COL4  

Plan hash value: 919851669  

---------------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name   | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |        |      1 |        |  18533 |00:01:47.04 |    156K |  70286 |
|   1 |  TABLE ACCESS BY INDEX ROWID| XXX    |      1 |  7886K |  18533 |00:01:47.04 |    156K |  70286 |
|*  2 |   INDEX FULL SCAN           | XXXXPK |      1 |  7886K |  18533 |00:01:30.36 |    131K |  61153 |
---------------------------------------------------------------------------------------------------------  

Predicate Information (identified by operation id):
---------------------------------------------------
  2 - filter((("COL2">:B2 AND "COL1"=:B1 AND
              SYS_OP_DESCEND("COL2")<SYS_OP_DESCEND(:B2)) OR
              ("COL1"=:B1 AND "COL2">=:B2 AND "COL3">:B3 AND
              SYS_OP_DESCEND("COL2")<=SYS_OP_DESCEND(:B2)) OR
              ("COL1"=:B1 AND "COL2">=:B2 AND "COL3">=:B3 AND "COL4">=:B4 AND
              SYS_OP_DESCEND("COL2")<=SYS_OP_DESCEND(:B2))))

If you look closely you’ll see that the OP has NOT supplied the output from a call to dbms_xplan.display_cursor() – the column and table names are highly suspect (but that’s an allowable cosmetic change for confidentiality reasons); the giveaway is that the SQL statement uses literals but the execution plan is using bind variables (which are of the form B{number}, suggesting that the real SQL is embedded in PL/SQL with PL/SQL variables being used to supply values): the bind variables make a difference.

Let’s go back to my model to demonstrate the problem. Here’s a query with the same predicate structure as the problem query (with several pairs of brackets eliminated to improve readability) showing the actual run-time plan (from 11.2.0.4) when using literals:


select
        /*+ first_rows */
        *
from t1
where
        (C1 = 'DE' and C2 >  'AB')
or      (C1 = 'DE' and C2 >= 'AB' and C3 > 'AA' )
or      (C1 = 'DE' and C2 >= 'AB' and C3 >= 'AA' and C4 >= 'BB')
order by
        C1, C2, C3, C4
;

---------------------------------------------------------------------------------------
| Id  | Operation                   | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |         |       |       |     4 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1      |    21 |  2478 |     4  (25)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T1_IASC |    21 |       |     3  (34)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("C1"='DE')
       filter(((SYS_OP_DESCEND("C2")<SYS_OP_DESCEND('AB') AND "C2">'AB') OR
              (SYS_OP_DESCEND("C2")<=SYS_OP_DESCEND('AB') AND "C3">'AA' AND "C2">='AB') OR
              (SYS_OP_DESCEND("C2")<=SYS_OP_DESCEND('AB') AND "C4">='BB' AND "C2">='AB' AND
              "C3">='AA')))

As you can see, the optimizer has managed to “factor out” the predicate C1 = ‘DE’ from the three disjuncts and has then used it as an access() predicate for an index range scan. Now let’s see what the code and plan look like if we replace the four values by four bind variables:


variable B1 char(2)
variable B2 char(2)
variable B3 char(2)
variable B4 char(2)

begin
        :b1 := 'DE';
        :b2 := 'AB';
        :b3 := 'AA';
        :b4 := 'BB';
end;
/

select
        /*+ first_rows */
        *
from t1
where
        (C1 = :B1  and C2 >  :B2 )
or      (C1 = :B1  and C2 >= :B2 and C3 >  :B3 )
or      (C1 = :B1  and C2 >= :B2 and C3 >= :B3 and C4 >= :B4)
order by C1, C2, C3, C4
;

---------------------------------------------------------------------------------------
| Id  | Operation                   | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |         |       |       |    31 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1      |   437 | 51566 |    31   (4)| 00:00:01 |
|*  2 |   INDEX FULL SCAN           | T1_IASC |   437 |       |    27   (4)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter((("C1"=:B1 AND "C2">:B2) OR ("C1"=:B1 AND "C3">:B3 AND
              "C2">=:B2) OR ("C1"=:B1 AND "C4">=:B4 AND "C2">=:B2 AND "C3">=:B3)))

The optimizer hasn’t factored out the common expression C1 = :B1. The reason for this, I think, is that though WE know that :B1 is supposed to be the same thing in all three occurrences the optimizer isn’t able to assume that that’s the case; in principle :B1 could be the placeholder for 3 different values – so the optimizer plays safe and optimizes for that case. This leaves it with three options: full tablescan with filter predicates, index full scan with filter predicates, or three-part concatenation with index range scans in all three parts. The combination of the /*+ first_rows */ hint and the “order by” clause which matches the t1_iasc index has left the optimizer choosing the index full scan path – presumably to avoid the need to collect all the rows and sort them before returning the first row.
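If you want to see the concatenation alternative for comparison, one hedged option is simply to add the use_concat hint to the bind-variable version and compare the resulting plan and cost with the index full scan above:

select
        /*+ first_rows use_concat */
        *
from t1
where
        (C1 = :B1  and C2 >  :B2 )
or      (C1 = :B1  and C2 >= :B2 and C3 >  :B3 )
or      (C1 = :B1  and C2 >= :B2 and C3 >= :B3 and C4 >= :B4)
order by C1, C2, C3, C4
;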

Given our understanding of the cause of the problem we now have a clue about how we might make the query more efficient – we have to eliminate the repetition of (at least) the :B1 bind variable. In fact we can get some extra mileage by modifying the repetition of the :B2 bind variable. Here’s a rewrite that may help:


select
        /*+ first_rows */
        *
from t1
where
        (C1 = :B1 and C2 >= :B2)
and     (
             C2 > :B2
         or  C3 > :B3
         or (C3 >= :B3 and C4 > :B4)
        )
order by C1, C2, C3, C4
;

---------------------------------------------------------------------------------------
| Id  | Operation                   | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |         |       |       |     4 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1      |   148 | 17464 |     4   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T1_IASC |   148 |       |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("C1"=:B1 AND "C2">=:B2 AND "C2" IS NOT NULL)
       filter(("C2">:B2 OR "C3">:B3 OR ("C4">:B4 AND "C3">=:B3)))

I’ve factored out as much of the C1 and C2 predicates as I can – and the optimizer has used the resulting conditions as the access() predicate on the index (adding in a “not null” predicate on C2 that looks redundant to me – in fact the index was on the primary key in the original, but I hadn’t included that constraint in my model). You’ll notice, by the way, that the cardinality is now 148; compare this with the previous cardinality of 437 and you might (without bothering to look closely at the 10053 trace) do some hand-waving around the fact that 437 = (approximately) 148 * 3, which fits the idea that the optimizer was treating the three :B1 appearances as if they were three different possible values accessing three sets of data.

Miscellaneous.

This isn’t the end of the story; there are always more complications and surprises in store as you look further into the detail. For example, on the upgrade to 12c the execution plan for the query with bind variables was the same (ignoring the sys_op_descend() functions) as the query using literals – the optimizer managed to factor out the C1 predicate: does this mean SQL*Plus got smarter about telling the optimizer about the bind variables, or does it mean the optimizer got smarter about something that SQL*Plus has been doing all along ?

This change might make you think that the optimizer is supposed to assume that bind variables of the same name represent the same thing – but that’s not correct, and it’s easy to show; here’s a trivial example (accessing the same table with a query that, for my data, identifies the first row):


declare
        m_id number := 1;
        m_c1 char(2) := 'BV';
        m_c2 char(2) := 'GF';
        m_n number := 0;
begin
        execute immediate
                'SELECT /*+ FIND THIS */ COUNT(*) FROM T1 WHERE ID = :B1 AND C1 = :B1 AND C2 = :B1'
                into m_n
                using m_id, m_c1, m_c2
        ;
end;
/

select sql_id, sql_text from V$sql where sql_text like 'SELECT%FIND THIS%';

SQL_ID        SQL_TEXT
------------- ----------------------------------------------------------------------------------
9px3nuv54maxp SELECT /*+ FIND THIS */ COUNT(*) FROM T1 WHERE ID = :B1 AND C1 = :B1 AND C2 = :B1

If you were looking at the contents of v$sql, or a trace file, or an AWR report, you might easily be fooled into thinking that this was a query where the same value had been used three times – when we know that it wasn’t.
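If you want to see which values were actually peeked for a statement like this, a minimal sketch (using the sql_id reported above) is to ask dbms_xplan for the peeked binds:

select * from table(dbms_xplan.display_cursor('9px3nuv54maxp', null, 'TYPICAL +PEEKED_BINDS'));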

So, as we upgrade from 11g to 12c my model of the original problem suggests that the problem is going to go away – but, actually, I don’t really know why that’s the case (yet). On the other hand, I have at least recognised a pattern that the 11g optimizer currently has a problem with, and I have a method for helping the optimizer to be a little more efficient.

 

Using Database In-Memory Column Store with Complex Datatypes

From those who are interested, hereby my slide deck I used during UKOUG Tech14, regarding…

Why Automate Target Patching with Enterprise Manager 12c

Every job comes with tasks that no one likes to perform, and database administration is no exception.  Patching is one of those necessary tasks, and when we are expected to do more with less every day, patching another host, another agent, another application is often something no one looks forward to.  It’s not that it goes wrong, but that it’s tedious, and many DBAs know there are a lot of other tasks that would be a better use of their time.  Patching is still an essential and important task that must be performed; we all know that. OPatch and other patching utilities from Oracle make patching easy, but it can still remove a lot of time from a resource’s day.

Enterprise Manager 12c’s automated patching and provisioning, using the Database Lifecycle Management Pack, is gaining more appreciation from the IT community as it assists the DBA with features to search recommended patches, create patch plans, review for conflicts, and allow sharing and re-use of patch plans.

Configuring a Database for Online or Offline Patching

After logging into a target database, you can click on Setup and go to the Offline Patching setup:

patching22

You can then choose to use Online patching with MOS credentials:

patching1

or use Offline Credentials and configure the patching catalog, ensuring you upload all the XMLs for the catalog, which will then be stored locally on a workstation.  Once the upload is complete, run the Refresh From My Oracle Support job.

patching2

The Online configuration is recommended and works with the software library.  It’s what we’ll be talking about today.

Also ensure that you’ve set up the correct privileges to perform patching. Provisioning and patching involve steps that require privileges to run root scripts, so ensure that the credentials used for the patching allow sudo to root or PBrun.

Database Patch Plans

To set up a patch plan for a database, there are a number of steps, but the patch plan wizard makes this very easy to do.  For our example, we’ll choose to patch 11.2.0.4 databases to the latest recommended patches.

First, let’s do a search to find out what patches we’ll need to apply to our 11.2.0.4 databases in our EM environment.

patching3

Our Enterprise menu takes us to the Provisioning and Patching, Patches and Updates.

From this console page, we can view what patch plans are already created in case we can reuse one:

patching4

As there isn’t an existing plan that fits what we need to do, we are going to first search for what patches are recommended with the Recommended Patch Advisor:

patching10

We’ve chosen to perform a search for recommended patches for 11.2.0.4.0 databases on Linux x86-64.  This will return the following four patches:

patching11

We can click on the first Patch Name, which will take us to the patch information, including what bugs are addressed in this patch, along with the option to download or create a patch plan.  For the purpose of this post, we’ll choose to create a patch plan:

patching12

We’ll create a new patch plan for this, as our existing ones currently do not include an 11g database patch plan that would be feasible to add to.  We can see our list of patches on the left, too, so this helps as we proceed to build onto our patch plans.

After clicking on the Add to New, we come to the following:

patching13

Name your patch plan something meaningful (I chose to name the patch for a single instance, “SI”, with the patch number and a note that it’s for 11.2.0.4) and then choose the database from the list you wish to apply the patch to.  You can hold down the CTRL key to choose more than one database and, when finished, click on Create Plan.

The patch plan wizard will then check to see if any other targets monitored by Cloud Control will be impacted and asks you to either add them to the patch plan or to cancel the patch plan for further investigation:

patching14

If you are satisfied with the additions, you can click on Add All to Plan to proceed.  The wizard then checks for any conflicts caused by the additions and will report them:

patching15

In our example above, I’ve added an 11.2.0.3 instance home to show that the wizard notes it and offers to either ignore the warnings and add it or (more appropriately) cancel the patch plan and correct the mistake.

Adding to Patch Plans

In our recommended patch list, we had four recommended patches.  Once we’ve created our first patch plan, we can now choose to add to it with the subsequent patches from the list:

patching16

This allows us to create one patch plan for all four patches and EM will apply them in the proper order as part of the patch deployment process.

Patch Plan Review and Deploy

Once a patch plan is created, the next step is to review and deploy it.  Choose the patch plan from the list that we created earlier:

patching18

Double clicking on it will bring up the validation warning if any exist:

patching17

We can then analyze the validations required and correct any open issues as we review the patch plan before deploying:

patching29

We can see in the above checks that we are missing credentials required for our patches to be successful.  These can now be set by clicking to the right of the “Not Set” entries, after which we proceed with the review of our patch plan.

patching20

Next we add any special scripts that are required (none here…), any notifications on the patching process so we aren’t in the dark while the patch is being applied, rollback options and conflict checks.

These steps give the database administrator a true sense of comfort that allows them to automate, yet have notifications and options that they would choose if they were running the patch interactively.

Once satisfied with the plan, choose the Deploy button and your patch is now ready to be scheduled.

patching21

Once the patching job completes or if it experiences an issue and results in executing the logic placed in the above conflict/rollback steps, the DBA can view the output log to see what issues have occurred before correcting and rescheduling.

Output Log 
Step is being run by operating system user : 'ptch_em_user' 
 
Run privilege of the step is : Normal  

This is Provisioning Executor Script
…
Directive Type is SUB_Perl
…
The output of the directive is:
…
Tue Jan 6 00:15:40 2015 - Found the metadata files; '19121551' is an patch
…
Tue Jan 6 00:15:40 2015 - OPatch from '/u01/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch.pl' 
  will be used to apply the Interim Patch.
…
Tue Jan 6 00:15:52 2015 - Invoking OPatch 11.2.0.4.7
…
Following patches will be rolled back from Oracle Home on application of the patches in the given list :
   4612895
…
Do you want to proceed? [y|n]
Y (auto-answered by -silent)
User Responded with: Y
OPatch continues with these patches:  6458921  

Do you want to proceed? [y|n]
Y (auto-answered by -silent)
User Responded with: Y

Running prerequisite checks...

This is high level, but really, it’s quite easy and the more you automate provisioning and patching, the easier it’ll get and you’ll wonder why you waited so long!

 

 



Copyright © DBA Kevlar [Why Automate Target Patching with Enterprise Manager 12c], All Right Reserved. 2015.

Free Version of Delphix !

Delphix is now available as a 30-day trial direct download! (If you would like a longer version please contact me at kyle@delphix.com; year-long trials and even indefinite trials are potentially possible for partners, bloggers, Oracle Aces, etc.)

Just go to the download link.

The Delphix download trial consists of 3 pre-configured virtual machines downloadable as OVA files

  • source machine with Oracle XE and Postgres  (1.3 GB)
  • Delphix engine (1.2 GB)
  • target machine with Oracle XE and Postgres binaries (1.9 GB)

Just start up the source, target and Delphix VMs and you are ready to go.  After starting up the source, target and Delphix, the lab will automatically link Delphix to the source databases. After a few minutes the source databases will show up in the Delphix console in a browser. Access Delphix via the browser by simply typing in the IP address of the Delphix engine. Once the sources are visible in the Delphix console you can start creating thin clones, i.e. virtual databases, on the target machine. The thin clones only take a couple of minutes to make and take up almost no space.

#555555;">
The Delphix download trial consists of 3 pre-configured virtual machines downloadable as OVA files

    #555555;">
  • source machine with Oracle XE and Postgres  databases (1.3 GB)
  • Delphix engine (1.2 GB)
  • target machine with Oracle XE and Postgres binaries (1.9 GB)

#555555;">Just startup the source, target and Delphix VM and you are ready to go.  After starting up the source, target and Delphix the lab will automatically link Delphix to the source databases. After a few minutes the source databases will show up in the the Delphix console in a browser. Access Delphix console by simply typing in the IP address of Delphix into a browser. Once the sources are visible in the Delphix console you can start creating thin clones, i.e. virtual databases, on the target machine. The thin clones only take a couple minutes to make and take up almost no space.

#555555;">Prerequisites on the machine where the lab is installed:

    #767676;">
  • Mac, Linux or windows #555555;">(laptop or desktop or workstation)
  • O/S virtualization #555555;"> either
    • Virtualbox or
    • VMware
      • Mac or Linux: VMware Fusion (free trial version #24890d;" title="download" href="https://www.google.com/url?q=https://www.vmware.com/products/fusion/fusion-evaluation.html&sa=D&sntz=1&usg=AFrqEzeedP_fYzx0peKjyOfLLQklcEOmuA" target="_blank">download)
      • Win: VMware Workstation (free trial version #24890d;" title="download" href="http://www.google.com/url?q=http://www.vmware.com/products/workstation/workstation-evaluation&sa=D&sntz=1&usg=AFrqEzfwURMexoK4vPN1mmmlLXYLY0GCSg" target="_blank">download)
  • at least 8 GB RAM
  • at least 50 GB free disk space, but preferably 100 GB free
  • at least 2 Ghz CPU, preferably dual-core or better

#555555;">There is a Vimeo channel for videos of the lab at   #990000;" href="https://vimeo.com/channels/landshark" target="_blank" rel="nofollow">https://vimeo.com/channels/landshark

  • #222222;">The “Quick install” for VMWare Workstation: #1155cc;" href="http://vimeo.com/channels/landshark/113624718" target="_blank">http://vimeo.com/channels/landshark/113624718
  • The “Quick install” for VMWare Fusion: #1155cc;" href="http://vimeo.com/channels/landshark/113627544" target="_blank">http://vimeo.com/channels/landshark/113627544
  • #222222;">The “Quick install” for VirtualBox: #1155cc;" href="http://vimeo.com/channels/landshark/113633016" target="_blank">http://vimeo.com/channels/landshark/113633016
  • #222222;">Lab exercise to provision a virtual application and virtual database: #1155cc;" href="http://vimeo.com/channels/landshark/113850959" target="_blank">http://vimeo.com/channels/landshark/113850959

For example the lab setup video for VMware Workstation:

Example of provisioning a virtual database and a virtual application

There is also a full online community where you can find answers and ask questions:

https://community.delphix.com/delphix/categories/delphix_landshark

The Delphix demo lab is nicknamed “landshark” and the hands-on labs given at conferences is called “#CloneAttack”.

Get one-on-one help with installing and running the demo at the nearest Oracle conference to you that has a Clone Attack event, such as RMOUG, Collaborate, UKOUG, DOAG, OUGN, etc.

#2970a6;" src="http://www.oraclerealworld.com/wp-content/uploads/2014/09/Screen-Shot-2014-09-24-at-9.41.34-AM-1024x658.png" alt="Screen Shot 2014-09-24 at 9.41.34 AM" width="464" height="299" />