Oakies Blog Aggregator

Nologging redo size

It is fairly easy to calculate an hourly or daily redo rate using AWR data. For example, my script awr_redo_size.sql can be used to calculate the daily redo rate, and awr_redo_size_history.sql the hourly redo rate. The hourly redo rate is especially useful since you can export it to an Excel spreadsheet and graph it to see the redo rate trend.
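
As a rough illustration of the idea behind those scripts (the real scripts handle details such as RAC instances and restarts), the hourly rate boils down to differencing the cumulative 'redo size' statistic between AWR snapshots:

-- Sketch only: delta of the cumulative 'redo size' statistic between
-- consecutive AWR snapshots. Negative deltas indicate an instance
-- restart and should be discarded.
select sn.begin_interval_time,
       round( ( st.value - lag(st.value) over (order by sn.snap_id) )
              / 1048576 ) redo_mb
from   dba_hist_sysstat st, dba_hist_snapshot sn
where  st.snap_id         = sn.snap_id
and    st.dbid            = sn.dbid
and    st.instance_number = sn.instance_number
and    st.stat_name       = 'redo size'
order  by sn.snap_id;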

Update: I added another script, redo_size_archived_log.sql, to calculate the redo rate if you don’t have an AWR license.
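
The idea there is simple: v$archived_log records the size of every archived log, so daily redo volume can be approximated without touching AWR. A minimal sketch (not the linked script itself):

-- Approximate daily redo volume from archived log sizes.
-- Filter on dest_id to avoid double-counting multiplexed destinations.
select trunc(completion_time)                      redo_date,
       round( sum(blocks * block_size) / 1048576 ) redo_mb
from   v$archived_log
where  dest_id = 1
group  by trunc(completion_time)
order  by 1;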

Introduction to Direct Mode Writes

Direct mode operations write directly into the database files, skipping the buffer cache. Minimal redo (aka invalidation redo) is generated if the database is not in force logging mode. Keeping the database in no force logging mode is peachy as long as you don’t use Data Guard, Streams, or GoldenGate.

Suddenly, the business decides to use one of these log-mining-based replication products. This means you must turn on force logging at the database level so that the replication tools can capture (or, in the case of Data Guard, just replay) the redo information correctly and consistently.

But what if your application performs a high volume of direct mode operations, such as insert /*+ append */ statements? Then you need to estimate the redo size to gauge the effect of FORCE LOGGING mode, and that estimation gets a little tricky.

Direct writes

During direct mode operations, blocks are pre-formatted and written directly to disk, bypassing the buffer cache. If the database is altered to force logging mode, direct mode operations still write the blocks directly; in addition, they generate redo for the blocks written, almost as if the whole block were written into the redo log files. This increases the redo size.
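
You can see the effect for yourself by watching your session’s 'redo size' statistic around a direct-path insert, once with the database in no force logging mode and once in force logging mode. A hypothetical sketch (target_tab and source_tab are made-up names):

-- Hypothetical demo: note the session 'redo size' before and after
-- a direct-path insert; the delta is the redo cost of the insert.
select ms.value redo_bytes
from   v$mystat ms, v$statname sn
where  ms.statistic# = sn.statistic#
and    sn.name       = 'redo size';

insert /*+ append */ into target_tab select * from source_tab;
commit;

-- Re-run the query above: in force logging mode the delta approaches
-- block_size * blocks written; otherwise only invalidation redo appears.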

There are a few statistics capturing direct mode writes. Using these statistics, we can estimate the redo size for direct mode operations.

Statistics

The statistic ‘physical writes direct’ comprises mainly three components, as given below.

Physical writes direct = <writes to data files due to direct mode operations> +
                         physical writes direct to temporary tablespace +
                         physical writes direct (LOB)

To identify the size of direct writes to data files, excluding temp files, the formula becomes:

Physical writes to datafiles = block_size * ( physical writes direct -
                                              physical writes direct to temporary tablespace )
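
The same arithmetic can be tried live against v$sysstat; a minimal sketch, assuming a uniform 8192-byte block size:

-- Minimal sketch: instance-wide direct writes to data files, in MB,
-- which approximates the extra redo FORCE LOGGING would generate.
select round( 8192 * ( d.value - t.value ) / 1048576 ) est_extra_redo_mb
from   v$sysstat d, v$sysstat t
where  d.name = 'physical writes direct'
and    t.name = 'physical writes direct to temporary tablespace';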

Script awr_redo_nologging_size.sql uses this formula to estimate the redo size if the database is altered to FORCE LOGGING mode. One caution with this script: it assumes a uniform block size (whatever you specify; 8192 is the default). If you use multiple block sizes in your database, specify the biggest block size in use (or an average). The script will then overestimate, but that is better than underestimating.

awr_redo_nologging_size.sql v1.00 by Riyaj Shamsudeen @orainternals.com

To generate Report about Redo rate from AWR tables

Enter the block size(Null=8192):
Enter past number of days to search for (Null=30):21

DB_NAME   REDO_DATE                redo_size (MB) phy_writes_dir (MB) phy_writes_dir_temp(MB)
--------- ------------------- ------------------- ------------------- -----------------------
...
TEST1      01/09/2012 00:00:00          554,967.92        4,337,470.54            4,048,463.09
TEST1      01/10/2012 00:00:00          725,161.69        7,631,308.52            7,311,254.35
TEST1      01/11/2012 00:00:00        1,417,910.43       11,022,558.04           10,424,339.66
TEST1      01/12/2012 00:00:00          162,109.27        2,756,108.79            2,658,140.35
TEST1      01/13/2012 00:00:00          736,137.74        5,449,356.39            5,107,896.82
TEST1      01/14/2012 00:00:00          880,102.10        3,494,355.88            3,119,470.18
...

In the output above, notice the line for 01/11/2012. The estimated total redo size is ~1,417 GB if we alter the database to FORCE LOGGING mode. Of that ~1,417 GB, roughly 600 GB of redo would be generated by direct mode operations: 11,022 GB of direct writes minus an adjustment of 10,424 GB for direct writes to the temporary tablespace (over 10 TB of writes to temp).

Example #2
In this example, notice 28-DEC-11: an estimated 62 GB of redo if we alter the database to force logging mode. Of that, only about 550 MB (4,650.37 MB − 4,096.75 MB) would be generated by direct mode operations.

DB_NAME   REDO_DATE            redo_size (MB) phy_writes_dir (MB) phy_writes_dir_temp(MB)
--------- --------------- ------------------- ------------------- -----------------------
...
TEST2     24-DEC-11                 19,149.11            2,796.57                2,361.68
TEST2     25-DEC-11                 18,362.74            1,630.83                1,379.95
TEST2     26-DEC-11                 60,097.50            3,867.92                3,303.37
TEST2     27-DEC-11                 55,696.98            3,266.89                2,756.84
TEST2     28-DEC-11                 62,971.37            4,650.37                4,096.75
TEST2     29-DEC-11                 62,167.32            3,839.07                3,255.70
TEST2     30-DEC-11                 64,072.57            4,462.38                3,788.39
...

Summary
In summary, we can estimate the redo size if we alter the database to FORCE LOGGING mode. This is a very useful estimate when implementing these replication tools.

Thanks to Kirti Deshpande and Kalyan Maddali for testing out my script. Of course, any mistake in the script is mine, only mine.

Non-Specific Index Hints

January 24, 2012 (Modified January 25, 2012) As I read the “Troubleshooting Oracle Performance” book for the second time a couple of months ago, I made a note on page 385 that it was possible to specify table columns in an index hint, rather than specifying specific index names (or just specifying the table name). This might be [...]

Dropped Partitions do not go in the Recycle Bin

If you alter table TEST drop partition Q1, does it go in the recycle bin?

That is the question I was asked today. “Of course it….” Pause. More pause.

No, I did not know. I knew I’d seen partitions in the recyclebin in Oracle, but I thought there was no option to state PURGE when you dropped a partition.

So, a quick test was needed.

First, I tried dropping a partition I knew I could live without {NB this is on version 11.2.0.3, I am 99.8% sure this is the same on 10}:

mdw1123> alter table person_call drop partition d_20111205 purge
  2  /
alter table person_call drop partition d_20111205 purge
                                                  *
ERROR at line 1:
ORA-14048: a partition maintenance operation may not be combined with other operations


mdw1123> alter table person_call drop partition d_20111205
  2  /

Table altered.

mdw1123> select count(*) from dba_recyclebin
  2  /
Any Key>

  COUNT(*)
----------
         0

1 row selected.

That’s pretty conclusive then: Oracle does not like the PURGE option when dropping a partition, which suggests the partition does not go into the recyclebin, and in fact my recyclebin remains empty when I drop the partition without it. {NB I had purged the recyclebin before this test}.

However, I want to make sure that it is indeed just the partition operation that does not go to the recyclebin. You can of course turn off the recyclebin, and I have not proven that I have not done so; a quick check would be something like this:
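
mdw1123> show parameter recyclebin

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------
recyclebin                           string      on

With that box ticked, on with the test: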


mdw1123> create table mdw (id number,vc1 varchar2(10))
  2  partition by range (id)
  3  (partition p1 values less than (10)
  4  ,partition p2 values less than (20)
  5  ,partition p3 values less than (30)
  6  ,partition pm values less than (maxvalue)
  7  )
  8
mdw1123> /
Table created.

mdw1123> insert into mdw
  2  select rownum,'AAAAAAAA'
  3  from dual
  4  connect by level <40
  5  /
39 rows created.

mdw1123> select * from dba_recyclebin
  2  /
no rows selected

mdw1123> alter table mdw drop partition p3
  2  /
Table altered.

mdw1123> select * from dba_recyclebin
  2  /
no rows selected

mdw1123> drop table mdw
  2  /
Table dropped.

mdw1123> select * from dba_recyclebin;
Any Key>
OWNER                          OBJECT_NAME                    ORIGINAL_NAME
------------------------------ ------------------------------ --------------------------------
OPERATION TYPE                      TS_NAME                        CREATETIME
--------- ------------------------- ------------------------------ -------------------
DROPTIME               DROPSCN PARTITION_NAME                   CAN CAN    RELATED BASE_OBJECT
------------------- ---------- -------------------------------- --- --- ---------- -----------
PURGE_OBJECT      SPACE
------------ ----------
MDW                            BIN$DI/3ZwiMRneOxLrJ4jidyg==$0 MDW
DROP      Table Partition           INDEX_01                       2012-01-24:16:13:55
2012-01-24:16:15:33    2787392                                  NO  NO       77672       77672
       77672       1024
MDW                            BIN$DI/3ZwiMRneOxLrJ4jidyg==$0 MDW
DROP      Table Partition           INDEX_01                       2012-01-24:16:13:55
2012-01-24:16:15:33    2787392                                  NO  NO       77672       77672
       77672       1024
MDW                            BIN$DI/3ZwiMRneOxLrJ4jidyg==$0 MDW
DROP      Table Partition           INDEX_01                       2012-01-24:16:13:55
2012-01-24:16:15:33    2787392                                  NO  NO       77672       77672
       77672       1024
MDW                            BIN$DI/3ZwiMRneOxLrJ4jidyg==$0 MDW
DROP      TABLE                                                    2012-01-24:16:13:55
2012-01-24:16:15:33    2787393                                  YES YES      77672       77672
       77672

4 rows selected.

So, if you go and drop the wrong partition, you can’t get it back without some sort of restore or recovery (as there is nothing to get it back from), but you can if you drop the whole table. And I was not going mad, I had seen partitions in the recyclebin; there is, after all, a PARTITION_NAME column in the view, and the rows above for the dropped table are typed as “Table Partition”.

This seems a little odd to me; I would imagine that dropping the wrong partition is exactly the sort of thing you would want flashback to fix (as you can with an accidentally dropped table), but there you go. You can’t.
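
For the whole-table case, the rescue would be the standard flashback drop; e.g. (not run as part of the test above):

mdw1123> flashback table mdw to before drop
  2  /

Flashback complete.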

Back to the day job….

Website Updates: Cleanup Complete…

I wrote last month about some issues with my website as a result of a problem with my hosting provider. It’s been a very long and arduous process, but I think I’ve finally finished the last of the cleanup. It’s certainly not something I would like to repeat. Having said that, there are lots of positives about the process.

  • When I started the website, over 11 years ago, my knowledge of HTML was very limited. Going back through all the pages to clean up the logical corruption gave me an opportunity to clean up the HTML as well. I don’t think anyone on the outside looking in would notice the difference, but it makes me happier to know things are a little more in order.
  • I used my web stats to decide which pages to sort out first, figuring it would make sense to focus on the stuff that people are actually reading first. That in itself was a bit of a revelation. There are a lot of very old articles on the site and I kinda figured people don’t look at them much these days. It appears that’s not exactly true. This presented a couple of problems. First, some articles were really bad! I managed to turn a blind eye to this when I thought people weren’t reading them, but once I saw people were still hitting them I decided to rewrite them. That took a lot more time than I expected. In other cases, all that was required were more obvious links to the updated content. Many articles now have “Related articles” links at the top.
  • I use a home-grown CMS to manage the site. This process has forced me to improve it and fix some of the things that were bugging me.

Even with all the positives, I still feel a little frustrated by all this. The time this has eaten up has impacted badly on my to-do list. I started this process the evening after I attended Cary Millsap’s Mastering Oracle Trace Data course. As a result, I’ve not reviewed the material from the course, or even downloaded the free tools yet. That’s pretty high up my list.

Cheers

Tim…

Kscope 12 – Here we Come!

San Antonio is the site for Kscope12. I had a chance to visit the venue, the JW Marriott Hill Country in San Antonio. I thought the conference committee must have been exaggerating, but this is truly a huge, beautiful place. We’re reserving an extra room the weekend before so that some of my family can join us in reveling in the water park, golf course, and nature preserve. Of course, the conference will be outstanding, but this venue is so cool you might find it tempting to “play hooky” and miss a few sessions!

Funny Cloud Computing Video…

If you loved this,

And you loved this,

Then you will probably love this,

Cheers

Tim…

ASH Visualizations: R, ggplot2, Gephi, Jit, HighCharts, Excel, SVG

There is more and more happening in the world of visualization, and of visualizing Oracle performance specifically with v$active_session_history.

Of these visualizations, the one pushing the envelope the most is Marcin Przepiorowski. Marcin is responsible for writing S-ASH, i.e. Simulated ASH, versions 2.1, 2.2 and 2.3.

Here are some examples of what I have seen happening out there on the web with these visualizations, grouped by the visualization tool.

Gephi

The first example uses Gephi. The coolest example of Gephi I’ve seen is Greg Rahn’s analysis of the voting for Oracle OpenWorld Mix sessions.

Here is Marcin’s example using Gephi with ASH data:

JIT

Here are two more examples from Marcin Przepiorowski using JIT, the JavaScript InfoVis Toolkit.

Click on the examples to go to the actual HTML and not just the image. On the actual page from Marcin you can double click on nodes to make them the center of the network.

TOP 5 SQL_IDs, SESSIONS and PROGRAMS joined together, with additional joins to points not included in the top 5, e.g. the TOP 5 SQL_IDs with a list of sessions and programs, the TOP 5 SESSIONS with a list of sql_ids and programs, and the TOP 5 PROGRAMS with a list of sessions and sql_ids.

R

Frits Hoogland has a great blog entry on getting started with R and then using it to analyze 10046 trace files (sort of ASH on steroids).

Here, again from Greg Rahn, is one of the cooler examples of R. In this case it’s the ASH data of a parallel query execution, showing the activity of the different processes:

HighCharts

Here is an example basically reproducing the Top Activity screen with HighCharts, a SQL script on v$session, and a cgi script to feed the data to the web page:
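
The SQL side of such a script is straightforward; a minimal sketch of one polling sample of v$session, grouped the way a Top Activity chart needs (the cgi plumbing and the HighCharts page are left out):

-- Minimal sketch: one "sample" of active foreground sessions by wait
-- class (or CPU); poll every few seconds and feed the counts to the chart.
select case when state = 'WAITING' then wait_class else 'CPU' end activity,
       count(*) active_sessions
from   v$session
where  status = 'ACTIVE'
and    type   = 'USER'
group  by case when state = 'WAITING' then wait_class else 'CPU' end;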

SVG

OEM 12c shows the new face of Enterprise Manager with the load maps.

PL/SQL and SVG: EMlite

(quick apex example: http://ba6.us/node/132)

Excel

Karl Arao has been doing a good bit of ASH visualization using his own Excel work, as well as Tanel’s Excel Perf Sheet (see a video here):

http://karlarao.wordpress.com/2009/06/07/diagnosing-and-resolving-gc-block-lost/

Hotsos Symposium

My favorite conference of the year is just around the corner.

The Hotsos Symposium will be held March 4-8 and will be celebrating its tenth anniversary. I'll be speaking again this year and, as always, am eagerly looking forward to the week. Every year the speakers, topics and opportunities for networking continue to excel, and I'm sure this year will be no different. This year's Training Day will be conducted by Jonathan Lewis, so make sure to stay the extra day to take advantage of spending it with him as he discusses "Designing Optimal SQL".

If you haven't signed up yet, there's still time. I'll look forward to seeing you there!

Underworld: Awakening…

I went to watch the 3D version of Underworld: Awakening yesterday.

I’ve made my feelings known about 3D several times, and this film changes nothing. The 3D element is an expensive gimmick I can live without. What’s more, the 3D in this film is pretty crappy; it looks like a post-production job rather than something filmed in 3D. It just so happens the cinema I went to was only showing it in 3D, so I had little choice.

3D gripe aside, what did I think of the film? I liked it. It doesn’t bring anything new to the table. In fact it’s very much like the previous two films starring Kate Beckinsale, but it is crammed with iconic slow-mo shots that scream cool to people like me who are fans of women kicking ass in films. Just like the Resident Evil films, if they keep making them, I’ll keep watching them, pretty much regardless of plot or quality… :)

Cheers

Tim…

Friday Philosophy – Lead or Lag (When to Upgrade)?

I was involved in a discussion recently with Debra Lilley about which version of Oracle to use. You can see her blog about it here (and she would love any further feedback from others). Oracle now has a policy that it will release the quarterly PSUs for a given point release for only 12 months once that point release is superseded, ie once 11.2.0.3 came out, Oracle will only guarantee to provide PSUs for 11.2.0.2 for 12 months. See “My Oracle Support” note ID 742060.1. However, an older terminal release such as 11.1.0.7 is not superseded and is supported until 2015 – and will get the quarterly PSU updates. This left the customer with an issue: should they start doing their development on the latest and theoretically greatest version of Oracle and be forced to do a point upgrade “soon” to keep getting the PSUs, or use an older version of Oracle and avoid the need to upgrade?

This is in many ways a special case of the perennial issue: should you use the latest version of Oracle (or in fact any complex software solution) or go with the version you know and trust? Plus, should you patch up to the latest version, which in theory gives you protection against bugs and vulnerabilities (along with the CPUs)? Yes, they are two separate issues, but people tend to sit on the same side of both points, for the same reasons.

The arguments to stay using an older version are that it is working, it is stable, you do not need the new features and upgrading is a lot of work and effort. Plus the new version will have new bugs that come along with the new features you do not need and things might be turned on by default that you could do without (like stats collecting or not creating the actual segments when a new table or partition is created). If you remain on your favourite version long enough, you get another issue which is that the latest version of Oracle might not be compatible with your ancient version of the OS or another package or programming language critical to your system (I got caught in a terrible web with old perl, old O/S and old DB that resulted in a need to upgrade all three together – ouch!).

The arguments for moving forward are that you get access to the latest features, that overall older features will have more bugs fixed in newer versions, and that performance will be better {again, overall, exceptions allowing}. Also, if you do hit bugs and problems, there is no issue of having to first upgrade to a fully supported version. Plus, fixes are made for current versions first and then back-ported to older ones. Those back-ported fixes can cause real problems when you DO decide to upgrade.

The big sticking points are the effort involved in upgrading and living with the bugs that you find and Oracle's testing didn’t.

I’ve got a few other considerations to throw into the pot.

Firstly, if you are developing something new, it is not a lot more effort to use the latest version. This allows you to learn the new version and eases the transition of older systems to it.

Secondly, Oracle likes you if you use the latest version, especially if it is the latest-latest version or even a beta. Yeah, the helpdesk will not have a clue about some of your issues, but in my experience you get access to those really smart guys and gals in Oracle who do the third-line support or even the development work.

Thirdly, if you are on the latest version and you do decide to freeze on it for a while, for stability and a quiet life, you have a lot longer before your version (at least at a major level) drops out of support.

Fourthly, dynamic, inquisitive, flexible staff like new things. In my experience, environments that freeze on an old version have a higher percentage of staff who either like it dull and repetitive, or hate it being dull and repetitive – and itch to get out. If I’m in charge, I know which type of staff I like to have more of {NB there are some very good arguments for having some staff who like it dull and repetitive}.

As you can guess, I am in the “be on the latest version” side of the argument. I was ambivalent about it until a few years ago when I noticed a trend:

Sites that like to move forward tend to (a) do it in a controlled manner and (b) have the infrastructure to do proper regression testing.
Sites that like to stay still lack the ability to do regression testing and move forward only when forced – and then in a pressured, unplanned and frankly chaotic manner.

That was it, that was the real key thing for me. The further you lag behind the more likely you are to eventually be forced to upgrade and it won’t be a nice time doing it. I know, there are exceptions, systems still running Oracle 6 absolutely fine on an old DOS6.1 box. In the same way you also get the odd 95-year-old life-long smokers – and thousands of 45-year-old smokers with emphysema.

When I have any sway over the situation I now always strive to be on modern versions of Oracle {OS, language, whatever} and to patch small and regular. To support all this, have very good regression testing. I’ve only a couple of times been able to get the regression testing sorted out as well as I would like, but when you do, the pain of patching and upgrading, as well as developing and integrating, is so much reduced that not patching seems madness.

So to sum up:

  • If it is a new development, go for the very latest version, play with the latest features if potentially beneficial and see if you can get Oracle to be interested in your attempts. ie (B)lead.
  • If you have good regression testing, plan and carry out patch and version upgrades as they become available and stay current. ie Lead
  • If you have a complex solution in place and no/poor regression testing, do not move to a new major release, leave it a while for the worst new bugs to be found and fixed. Then move. ie Lag
  • If your system is old AND critical and all the guys and gals who implemented it are long gone, stay on that version for ever. ie stagnate.

Oh, and if that last one applies to many of your systems – dust off the CV and start reading technical manuals. One day you will need a new job in a hurry.