Oakies Blog Aggregator

Fail Fast

Among movements like Agile, Lean Startup, and Design Thinking these days, you hear the term fail fast. The principle of failing fast is vital to efficiency, but I’ve seen project managers and business partners be offended or even agitated by the term fail fast. I’ve seen it come out like, “Why the hell would I want to fail fast?! I don’t want to fail at all.” The implication, of course: “Failing is for losers. If you’re planning to fail, then I don’t want you on my team.”

I think I can help explain why the principle of “fail fast” is so important, and maybe I can help you explain it, too.

Software developers know about fail fast already, whether they realize it or not. Yesterday was a prime example for me. It was a really long day. I didn’t leave my office until after 9pm, and then I turned my laptop back on as soon as I got home to work another three hours. I had been fighting a bug all afternoon. It was a program that ran about 90 seconds normally, but when I tried a code path that should have been much faster, I could let it run 50 times that long and it still wouldn’t finish.

At home, I ran it again and left it running while I watched the Thunder beat the Spurs, assuming the program would finish eventually, so I could see the log file (which we’re not flushing often enough, which is another problem). My MacBook Pro ran so hard that the fan compelled my son to ask me why my laptop was suddenly so loud. I was wishing the whole time, “I wish this thing would fail faster.” And there it is.

When you know your code is destined to fail, you want it to fail faster. Debugging is hard enough as it is, without your stupid code forcing you to wait an hour just to see your log file, so you might gain an idea of what you need to go fix. If I could fail faster, I could fix my problem earlier, get more work done, and ship my improvements sooner.

But how does that relate to wanting my business idea to fail faster? Well, imagine that a given business idea is in fact destined to fail. When would you rather find out? (a) In a week, before you invest millions of dollars and thousands of hours in the idea? Or (b) in a year, after you’ve invested those millions of dollars and thousands of hours?

I’ll take option (a) a million times out of a million. It’s like asking if I’d like a crystal ball. Um, yes.

The operative principle here is “destined to fail.” When I’m fixing a reported bug, I know that once I create a reproducible test case for that bug, my software will fail. It is destined to fail on that test case. So, of course, I want my process of creating the reproducible test case, my software build process, and my program execution itself to all happen as fast as possible. Even better, I wish I had come up with the reproducible test case a year or two ago, so I wouldn’t be under so much pressure now. Because seeing the failure earlier—failing fast—will help me improve my product earlier.

But back to that business idea... Why would you want a business idea to fail fast? Why would you want it to fail at all? Well, of course, you don’t want it to fail, but it doesn’t matter what you want. What if it is destined to fail? It’s really important for you to know that. So how can you know?

Here’s a little trick I can teach you. Your business idea is destined to fail. It is. No matter how awesome your idea is, if you implement your current vision of some non-trivial business idea that will take you, say, a month or more to implement, not refining or evolving your original idea at all, your idea will fail. It will. Seriously. If your brain won’t permit you to conceive of this as a possibility, then your brain is actually increasing the probability that your idea will fail.

You need to figure out what will make your idea fail. If you can’t find it, then find smart people who can. Then, don’t fear it. Don’t try to pretend that it’s not there. Don’t work for a year on the easy parts of your idea, delaying the inevitable hard stuff, hoping and praying that the hard stuff will work its way out. Attack that hard stuff first. That takes courage, but you need to do it.

Find your worst bottleneck, and make it your highest priority. If you cannot solve your idea’s worst problem, then get a new idea. You’ll do yourself a favor by killing a bad idea before it kills you. If you solve your worst problem, then find the next one. Iterate. Shorter iterations are better. You’re done when you’ve proven that your idea actually works. In reality. And then, because life keeps moving, you have to keep iterating.

That’s what fail fast means. It’s about shortening your feedback loop. It’s about learning the most you can about the most important things you need to know, as soon as possible.

So, when I wish you fail fast, it’s a blessing, not a curse.

FREE Webinar: Efficient techniques to create and maintain your #Oracle Standby Database

Join us for this FREE event on May 19 at 13:00 CET and register now, because seats are limited.


I will be talking about:

  • how to create and maintain a 12c Standby Database in the most efficient way
  • how to do switchover and failover
  • how to keep up client connectivity after role changes

These topics will be demonstrated live – positively no slide-show reading.
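
To give a flavour of the switchover topic, here’s a minimal sketch using the 12c SQL-level switchover (physt is a hypothetical standby name; the Data Guard broker offers a similar one-liner via DGMGRL):

-- on the primary: dry-run the readiness checks first (new in 12c)
alter database switchover to physt verify;

-- then perform the actual role change
alter database switchover to physt;

Keeping clients connected across the role change typically relies on role-based services and a connect descriptor that lists both hosts – presumably what the third bullet point above is about.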

After this major part of the event, we will briefly advertise our digital learning offerings that relate to Oracle Database Core Technology and how you may take advantage of them.

Hope to see YOU in the session :-)

Those pesky dates as strings

You might be thinking “Nothing is more frustrating than encountering a string column that is full of dates”.

But there is something worse than that… and that is when the string column is full of potential dates, and your job is to sift the good data from the bad. For example, if your table looks like this:


SQL> select * from T;

DATE_STR
--------------------
qwe
01/01/2000
31/02/2000
12-jan-14
20001212
Jan 14, 2016

6 rows selected.

Some of those values are definitely not dates, some of them definitely are dates, and some of them might be dates. The only real way of knowing is to try to convert those strings to dates and see what happens. But we cannot just throw a TO_DATE around the column, because the moment we encounter a bad value, our query will crash. Even if all of the data was valid, because the formats of the strings are variable, we’d still get issues with that approach.
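
To see the problem, here’s a minimal sketch against the table above – the first value that doesn’t match the mask aborts the whole query with something like:

SQL> select to_date(date_str,'dd/mm/yyyy') from t;

ERROR:
ORA-01858: a non-numeric character was found where a numeric was expected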

And then comes the kicker (because we get this on AskTom all the time)…

“Can we do it without creating a PLSQL function?”

This always bamboozles me…it is like saying “I need to write a book, but I’m only allowed to use a DVORAK keyboard and my tongue, with one eye closed”.

Why restrict yourself on the facilities available?

Anyway, here is my workaround – and not a stored function in sight :-)


SQL> with
  2    function date_checker(p_str varchar2) return date is
  3      l_format sys.odcivarchar2list :=
  4         sys.odcivarchar2list('dd/mm/yyyy','dd-mon-yyyy','yyyymmdd','Mon DD, YYYY');
  5      l_dte date;
  6    begin
  7      for i in 1 .. l_format.count loop
  8        begin
  9          l_dte := to_date(p_str,l_format(i));
 10          return l_dte;
 11        exception
 12          when others then
 13            if i = l_format.count then return null; end if;
 14        end;
 15      end loop;
 16    end;
 17  select date_str, date_checker(date_str) str_as_date
 18  from t
 19  /

DATE_STR             STR_AS_DA
-------------------- ---------
qwe
01/01/2000           01-JAN-00
31/02/2000
12-jan-14            12-JAN-14
20001212             12-DEC-00
Jan 14, 2016         14-JAN-16

6 rows selected.

Gotta love 12c :-)
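
One hedged aside if you adopt this trick: the WITH-clause function is fine in a top-level SELECT, but if you want to embed it in DML – say, copying just the clean rows somewhere – 12c requires the WITH_PLSQL hint (t_clean is a hypothetical target table):

insert /*+ WITH_PLSQL */ into t_clean
with
  function date_checker(p_str varchar2) return date is
    l_format sys.odcivarchar2list :=
       sys.odcivarchar2list('dd/mm/yyyy','dd-mon-yyyy','yyyymmdd','Mon DD, YYYY');
  begin
    for i in 1 .. l_format.count loop
      begin
        return to_date(p_str, l_format(i));  -- first matching format wins
      exception
        when others then
          if i = l_format.count then return null; end if;
      end;
    end loop;
    return null;
  end;
select date_checker(date_str)
from   t
where  date_checker(date_str) is not null
/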

Speaking at Oracle Midlands on Tuesday 17th May

As the title indicates, I’ll be speaking at the UK Oracle Midlands event on Tuesday evening next week. Details can be found here (and that link should mention the next event if you click it in the future).


I’ll be talking about PL/SQL being called from SQL and how you can “extend” SQL by writing your own functions. That is a relatively well known thing to do but the potential impact on performance and the 12C improvements to reduce that impact are less well known. Maybe even more significantly, calling PL/SQL functions from SQL breaks the point-in-time view most of us take for granted with Oracle. More people are blogging and talking about this but it is still not widely appreciated. Is this a potential issue in any of your systems?
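To give a flavour of the sort of 12c improvement in question (a sketch – f_double is a hypothetical function, not something from the talk): PRAGMA UDF tells the compiler that the function will mainly be called from SQL, which reduces the cost of the SQL-to-PL/SQL context switch.

create or replace function f_double(p_n number) return number is
  pragma udf;   -- 12c: optimise this function for being called from SQL
begin
  return p_n * 2;
end;
/

select f_double(object_id) from all_objects where rownum <= 5;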

Joel Goodman is also presenting, on storage fragmentation. Joel is one of the best presenters on Oracle tech on the circuit and knows his stuff inside out.

I really love the Oracle Midlands user group; I’ve been to a few of the meetings and presented there one-and-a-bit times before. It meets in the evenings and lays on some free refreshments at half time (samosas when I have been there!). It’s a real, dedicated, grass-roots user group. Annoyingly (for me) most of the meetings for the last year or so have been when I could not get up to the Midlands for them (it’s not a hard or long journey; the timing was just always wrong).

Red Stack are good enough to support/sponsor these events and do so with a light touch. You know they are there but it is not a hard sell, so kudos to them. Mike McKay-Dirden is the person behind these meetings and, with this being the 15th such meeting, I must take my hat off to Mike for running such a successful group.

So, if you are able to get to Birmingham (UK! Not USA…) on Tuesday evening, you should do so for an excellent, free learning opportunity. I hope to see some of you there!

NoCOUG Spring Conference and GLOC 2016

I fly out tomorrow for NoCOUG’s Spring Conference, which will be held on Friday the 13th in San Jose, California. If you were thinking of attending and needed an added incentive, I’ll be putting on a FOUR hour Enterprise Manager 13c hands-on lab, so admit it, you’re intrigued…

Oracle Management Cloud – Log Analytics

In this last overview post I will give you a first glance of the Oracle…

dbms_xplan

My favourite format options for dbms_xplan.display_cursor().

This is another of those posts where I tell you about something that I’ve frequently mentioned but never documented explicitly as a good (or, at least, convenient) idea. It’s also another example of how easy it is to tell half the story most of the time when someone asks a “simple” question.

You’re probably familiar with the idea of “tuning by cardinality feedback” – comparing the predicted data volumes with the actual data volumes from an execution plan – and I wrote a short note last week about how to make that comparison. You’re probably also familiar with calling dbms_xplan.display_cursor() after enabling the capture of rowsource execution statistics (in one of three ways) for the execution of the query; the format parameter usually suggested for the call is ‘allstats last’, which reports the execution stats for the most recent execution of the query. I actually like to see the Cost column of the execution plan as well, so I usually add that to the format (with all three strategies shown below for an SQL*Plus environment):

set linesize 180
set trimspool on
set pagesize 60
set serveroutput off

-- Three alternative ways to enable rowsource execution statistics
-- (any one of them is sufficient):

-- 1) the hidden parameter that does the real work:
alter session set "_rowsource_execution_statistics"=true;

-- 2) the official parameter (which also sets the one above):
alter session set statistics_level=all;

-- 3) the per-statement hint:
select /*+ gather_plan_statistics */ * from user_tablespaces;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last cost'));

So what do we often forget to mention:

  • For SQL*Plus it is important to ensure that serveroutput is off
  • The /*+ gather_plan_statistics */ option uses sampling, so may be a bit inaccurate
  • The two accurate strategies may add a significant amount of CPU overhead (300% is my best/worst case on Windows)
  • This isn’t appropriate if the query runs parallel

For a parallel query the “last” execution of a query is typically carried out by the query co-ordinator, so the rowsource execution stats of many (or all) of the parallel execution slaves are likely to disappear from the output. If you’re testing with parallel queries you need to add some “tag” text to the query to make it unique and omit the ‘last’ option from the format string.
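
For example (a sketch – the comment “tag” is arbitrary text whose only job is to give the test statement its own SQL_ID, so the stats come from a single, known execution):

select /*+ parallel(2) */ /* test_tag_001 */
       object_type, sum(object_id)
from   t1
group by object_type
order by object_type;

select * from table(dbms_xplan.display_cursor(null,null,'allstats cost'));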

Now, a common suggestion is that you need to add the ‘all’ format option instead – but this doesn’t mean “all executions”, it means (though doesn’t actually deliver) all the data that’s available about the plan. So here’s an execution plan produced after running a parallel query and using ‘allstats all’ as the format option (t1 is a copy of all_objects, and this demo is running on 12.1.0.2).

SQL_ID  51u5j42rvnnfg, child number 1
-------------------------------------
select  /*+   parallel(2)  */  object_type,  sum(object_id) from t1
group by object_type order by object_type

Plan hash value: 2919148568

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name     | Starts | E-Rows |E-Bytes| Cost (%CPU)| E-Time   |    TQ  |IN-OUT| PQ Distrib | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem |  O/1/M   |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |          |      1 |        |       |   113 (100)|          |        |      |            |     30 |00:00:00.04 |       5 |      0 |       |       |          |
|   1 |  PX COORDINATOR          |          |      1 |        |       |            |          |        |      |            |     30 |00:00:00.04 |       5 |      0 |       |       |          |
|   2 |   PX SEND QC (ORDER)     | :TQ10001 |      0 |     30 |   420 |   113   (9)| 00:00:01 |  Q1,01 | P->S | QC (ORDER) |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|   3 |    SORT GROUP BY         |          |      2 |     30 |   420 |   113   (9)| 00:00:01 |  Q1,01 | PCWP |            |     30 |00:00:00.01 |       0 |      0 |  2048 |  2048 |     2/0/0|
|   4 |     PX RECEIVE           |          |      2 |     30 |   420 |   113   (9)| 00:00:01 |  Q1,01 | PCWP |            |     50 |00:00:00.01 |       0 |      0 |       |       |          |
|   5 |      PX SEND RANGE       | :TQ10000 |      0 |     30 |   420 |   113   (9)| 00:00:01 |  Q1,00 | P->P | RANGE      |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|   6 |       HASH GROUP BY      |          |      2 |     30 |   420 |   113   (9)| 00:00:01 |  Q1,00 | PCWP |            |     50 |00:00:00.05 |    1492 |   1440 |  1048K|  1048K|     2/0/0|
|   7 |        PX BLOCK ITERATOR |          |      2 |  85330 |  1166K|   105   (2)| 00:00:01 |  Q1,00 | PCWC |            |  85330 |00:00:00.03 |    1492 |   1440 |       |       |          |
|*  8 |         TABLE ACCESS FULL| T1       |     26 |  85330 |  1166K|   105   (2)| 00:00:01 |  Q1,00 | PCWP |            |  85330 |00:00:00.01 |    1492 |   1440 |       |       |          |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$1
   8 - SEL$1 / T1@SEL$1

Predicate Information (identified by operation id):
---------------------------------------------------

   8 - access(:Z>=:Z AND :Z<=:Z)

Column Projection Information (identified by operation id):
-----------------------------------------------------------

   1 - "OBJECT_TYPE"[VARCHAR2,23], SUM()[22]
   2 - (#keys=0) "OBJECT_TYPE"[VARCHAR2,23], SUM()[22]
   3 - (#keys=1; rowset=200) "OBJECT_TYPE"[VARCHAR2,23], SUM()[22]
   4 - (rowset=200) "OBJECT_TYPE"[VARCHAR2,23], SYS_OP_MSR()[25]
   5 - (#keys=1) "OBJECT_TYPE"[VARCHAR2,23], SYS_OP_MSR()[25]
   6 - (rowset=200) "OBJECT_TYPE"[VARCHAR2,23], SYS_OP_MSR()[25]
   7 - (rowset=200) "OBJECT_ID"[NUMBER,22], "OBJECT_TYPE"[VARCHAR2,23]
   8 - (rowset=200) "OBJECT_ID"[NUMBER,22], "OBJECT_TYPE"[VARCHAR2,23]

Note
-----
   - Degree of Parallelism is 2 because of hint


48 rows selected.

You’ll notice we’ve reported the “alias” and “projection” information – those are two of the format options that you can use with a + or – to include or exclude if you want. We’ve also got E-Bytes and E-time columns in the body of the plan. In other words (at least in my opinion) we’ve got extra information that makes the output longer and wider and therefore harder to read.

The format string I tend to use for parallel query is ‘allstats parallel cost’ – which (typically) gives something like the following:

SQL_ID  51u5j42rvnnfg, child number 1
-------------------------------------
select  /*+   parallel(2)  */  object_type,  sum(object_id) from t1
group by object_type order by object_type

Plan hash value: 2919148568

-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name     | Starts | E-Rows | Cost (%CPU)|    TQ  |IN-OUT| PQ Distrib | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem |  O/1/M   |
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |          |      1 |        |   113 (100)|        |      |            |     30 |00:00:00.04 |       5 |      0 |       |       |          |
|   1 |  PX COORDINATOR          |          |      1 |        |            |        |      |            |     30 |00:00:00.04 |       5 |      0 |       |       |          |
|   2 |   PX SEND QC (ORDER)     | :TQ10001 |      0 |     30 |   113   (9)|  Q1,01 | P->S | QC (ORDER) |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|   3 |    SORT GROUP BY         |          |      2 |     30 |   113   (9)|  Q1,01 | PCWP |            |     30 |00:00:00.01 |       0 |      0 |  2048 |  2048 |     2/0/0|
|   4 |     PX RECEIVE           |          |      2 |     30 |   113   (9)|  Q1,01 | PCWP |            |     50 |00:00:00.01 |       0 |      0 |       |       |          |
|   5 |      PX SEND RANGE       | :TQ10000 |      0 |     30 |   113   (9)|  Q1,00 | P->P | RANGE      |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|   6 |       HASH GROUP BY      |          |      2 |     30 |   113   (9)|  Q1,00 | PCWP |            |     50 |00:00:00.05 |    1492 |   1440 |  1048K|  1048K|     2/0/0|
|   7 |        PX BLOCK ITERATOR |          |      2 |  85330 |   105   (2)|  Q1,00 | PCWC |            |  85330 |00:00:00.03 |    1492 |   1440 |       |       |          |
|*  8 |         TABLE ACCESS FULL| T1       |     26 |  85330 |   105   (2)|  Q1,00 | PCWP |            |  85330 |00:00:00.01 |    1492 |   1440 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   8 - access(:Z>=:Z AND :Z<=:Z)

Note
-----
   - Degree of Parallelism is 2 because of hint


30 rows selected.

Of course you may prefer ‘allstats all’ – and sometimes I do actually want to see the alias or projection information – but I think there’s so much information available on the execution plan output that anything that makes it a little shorter, cleaner and tidier is a good thing.
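
The format modifiers make that mixing and matching easy – each one can be switched on or off with a + or – prefix, e.g. (a sketch):

-- add the alias information to the leaner format:
select * from table(dbms_xplan.display_cursor(null,null,'allstats parallel cost +alias'));

-- or start from 'all' and strip the extras you don't want:
select * from table(dbms_xplan.display_cursor(null,null,'allstats all -projection -alias'));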

You might have noticed, by the way, that the Buffers, Reads, and A-Time columns have still managed to lose information on the way up from operation 6; information that should have been summed up the plan has simply disappeared. Make sure you do a sanity check for disappearing numbers when you’re looking at more complex plans.

Database Comparisons with AWR Warehouse- Part II, Comparison Period Report

There are two ways to compare one database to another in the AWR Warehouse. I covered the ADDM Comparison Report here, and now we’ll go through the second one, which is much more involved: it has us taking AWR data from two databases in the warehouse and comparing them to each other.


Once the AWR Warehouse is set up, databases that are already monitored as targets by your EM12c or EM13c environment can be added to it, and they will then upload all their AWR snapshots to this central repository.

Comparison Period Report

The AWR Warehouse’s second comparison reporting option is accessible from the drop-down menu in the AWR Warehouse dashboard.


Once you click on Compare Period Report, you’re prompted to choose a baseline or snapshots from the list for the databases you wish to compare.


In my example, I simply chose the DNT database, with a one-hour snapshot window, to compare against an OMR (Oracle Management Repository) database for another one-hour snapshot interval. Clicking on Generate Report will then create an HTML-formatted report.
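
If you ever want the same style of report outside the EM screens, dbms_workload_repository can build the differential report directly – a sketch with placeholder DBIDs and snapshot IDs (substitute values from your own warehouse):

select output
from   table(dbms_workload_repository.awr_diff_report_html(
         1111111111, 1, 1201, 1202,    -- first database: dbid, instance, begin/end snap
         2222222222, 1, 5630, 5631));  -- second database: dbid, instance, begin/end snap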

The Actual Report


In the report summary, not only does the report show that I’m comparing two different databases from two different hosts, but any differences in the main configuration are displayed as well. We can see that although I’m comparing the same amount of time, the average number of users is twice as high and the DB Time is dramatically different between the two databases.

The report will then start comparing the high-level information, including the host, memory and I/O configuration.


The Top Ten Foreground events are displayed for each environment, ensuring nothing is missed that could make the comparison confusing. With a more similar pair of databases (let’s say test against production, or old production vs. a newly consolidated environment) there would be more similarities, and you’d be able to see how the workload had changed between systems.


Each section contains values for each database and then the differences, saving the DBA considerable time manually calculating what has changed. Once you get to the Top SQL, the report updates its format again to display the SQL in order – overall, by elapsed time, by CPU, etc. – and then broken down between the times for each environment (run or not) and the difference.


After breaking down the SQL in every way possible, as commonly seen in an AWR report, but with the added benefit of comparisons between two different AWR data sets and databases, the report digs into each of the Activity Stats and compares all of those.


The report then does comparisons for SGA, PGA, interconnects and even I/O.


Once done with these, it digs into the objects and tablespaces to see whether there are any outliers or odd differences in the objects being called by either database.


As with all AWR reports, it also pulls up all of the initialization parameters and performs a clear comparison of what is set for each database, so you can see whether anything is amiss that would impact performance.


This is an incredibly valuable report for anyone who wants to perform a deep comparative analysis of two databases around performance, workload, migration or consolidation. The comparison reports are among the top features of the AWR Warehouse, yet are so infrequently mentioned as a selling point of the product (and if you already have the Diagnostics and Tuning packs, heck, it comes with its own limited EE license, like the RMAN catalog and Enterprise Manager repository databases), so what are you waiting for??

Copyright © DBA Kevlar [Database Comparisons with AWR Warehouse- Part II, Comparison Period Report], All Rights Reserved. 2016.

Speaker Scores

I published a note this morning that I drafted in January 2015, and I didn’t notice that it had gone back in time to publish itself on the date that I first drafted it – and it’s already been tweeted twice so I can’t move it. So this is a temporary link to pop it to the head of the queue while leaving it where it first appeared.

Database Comparisons Using AWR Warehouse Part 1- ADDM Comparison

A lot of my ideas for blog posts come from questions emailed to me or asked via Twitter. Today’s blog is no different, as someone in the community asked me what the best method would be for comparing databases, using features within AWR, when migrating from one host and OS to another.


There is a lot of planning that must go into a project to migrate a database to another host or consolidate it onto another server, but when we introduce additional changes, such as a different OS, new applications, workload or other demands, these need to be taken into consideration. How do you plan for this, and what kind of testing can you perform to eliminate risk to performance and the user experience once you migrate over?

AWR Warehouse

I won’t lie to any of you: this is where the AWR Warehouse just puts it all to shame. The ability to compare AWR data is the cornerstone of this product, and it’s about to shine here again. For a project of this type, it may very well be worth deploying one and loading the AWR data into the warehouse, especially if you’re taking on a consolidation.

There are two main comparison reports, one focused on AWR (Automatic Workload Repository) data and the other on ADDM (Automatic Database Diagnostic Monitor).


From the AWR Warehouse, once you highlight a database on the main dashboard, you’ll have the option to run either report, and the coolest part of these reports is that you don’t just get to compare time snapshots from the same database – you can compare a snapshot from one database source in the AWR Warehouse to ANOTHER database source that resides in the warehouse!

ADDM Comparison Period

This report is incredibly valuable, offering comparisons that pinpoint many of the issues that are going to create the pain points of a migration. The “just the facts”, crucial information about what is different, what has changed and what doesn’t match the “base” of the comparison is displayed very effectively.

When you choose this report, you’re offered any snapshot interval for the current database, but you can then click on the magnifying glass icon next to the Database to compare to and change the comparison to any database that is loaded into the AWR Warehouse.
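
If you’re not sure which snapshot IDs line up with the windows you want, a quick query against the warehouse repository narrows it down (a sketch):

select dbid, snap_id, begin_interval_time, end_interval_time
from   dba_hist_snapshot
where  begin_interval_time > systimestamp - interval '2' day
order by dbid, snap_id;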

For our example, we’re going to use a day’s difference, over the same timeline, as our Base Period. Once we fill in these options, we can click Run to request the report.

The report is broken down into three sections:

  • A side by side comparison of activity by wait event.
  • Details of differences via tabs and tables.
  • Resource usage graphs, separated by tabs.


Comparing the two activity graphs, we can clearly see that there were more commit waits during the base period, along with more user I/O in the comparison period. During a crisis situation, these graphs can be very beneficial when you need to show waits to less technical team members.


The Configuration tab below the activity graphs quickly displays the differences in OS, initialization parameters, host and other external influences on the database. The Findings tab then goes into the differences in the performance comparisons. Did the SQL perform better or degrade? In the table below, the SQL ID is displayed, along with detailed information about the performance change.

Resources is the last tab, displaying graphs for the important area of resource usage. Was there a difference in CPU usage between one host and the other?


Were there swapping or other memory issues?


In our example, we can clearly see the extended data reads, and for Exadata consolidations, the ever-valuable single block read latency is shown.


Now for those in engineered systems and RAC environments, you’re going to want to know waits for interconnect.  Again, these are simply and clearly compared, then displayed in graph form.


This report will offer very quick answers to

“What Changed?”

“What’s different?”

“What Happened at XXpm?”

The value this report provides is easy to see, but when it lets you compare one database to another, even when they’re on different hosts, you can see how valuable the AWR Warehouse becomes – offering something that even the Consolidation Planner can’t.

Next post, I’ll go over the AWR Warehouse Comparison Period Report.

Copyright © DBA Kevlar [Database Comparisons Using AWR Warehouse Part 1- ADDM Comparison], All Rights Reserved. 2016.