Can I Fit an 80MB Database Completely in an 80MB Buffer Cache?

This is part of the series “100 Things You Probably Didn’t Know About Oracle”. If you haven’t already, I urge you to read the other parts:

  • Part 1 (Commit does not force writing of buffers into the disk), 
  • Part 2 (How Oracle Locking Works), 
  • Part 3 (More on Interested Transaction Lists).

During the recently concluded Collaborate 11 (http://collaborate11.ioug.org) I was asked a question which led me to the idea for this entry, the fourth in the series. If the database size is 100GB and you have a 100GB buffer cache, all the blocks will be in memory and there will never be any I/O against the datafiles other than the regular DBWR lazy writes, right?
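As a starting point, here is a minimal sketch for comparing the two sizes (assuming v$sgainfo lists the cache under the name 'Buffer Cache Size'):

 SELECT SUM(bytes)/1024/1024 AS db_mb FROM v$datafile;
 SELECT bytes/1024/1024 AS cache_mb FROM v$sgainfo WHERE name = 'Buffer Cache Size';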

Fourth Day at Collaborate 11

Thank you to those who attended my fourth and last session Designing Applications for RAC at Collaborate 11. Considering it was at 8:30 AM on Thursday, right after the big party night, you must all be really committed to your work. Much appreciated.

You can get the slides here.

I look forward to seeing you all at Collab next year.

Second Day at Collaborate 11

Many thanks to those who attended my session RAC for Beginners despite the 8 AM timeslot. You must be really motivated. I hope you found the session useful and learned something you can start using.

You can download the slides here.

Thanks to those who came to my second session of the day - Secure Your Database in a Single Day. Hopefully that met your expectations. The slides are found here.

UKOUG

I’ve just bumped this to the top of the stack because the Call for Papers for the UKOUG annual conference (RDBMS and EBS themes, 5th – 7th Dec) is getting close to the closing date of 5th June. If you’re interested in presenting, you need to go to this link pretty soon.

Importing Multiple Databases’ AWR Repositories

Gavin Soorma has a nice blog post on exporting and importing AWR repositories: http://gavinsoorma.com/2009/07/exporting-and-importing-awr-snapshot-data/
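In short, the technique relies on two scripts shipped with the database, both of which prompt interactively for their inputs:

 @?/rdbms/admin/awrextr.sql  -- run on the source: prompts for DBID, snapshot range and a directory object
 @?/rdbms/admin/awrload.sql  -- run on the target: prompts for the directory, dump file prefix and staging schema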

I thought I would use this technique to load the AWR repositories of two databases I wanted to compare. I loaded the two repositories into the database on my laptop and, lo and behold, I could only find one new DBID in the data. It turns out the two databases, though on different machines, are both copies of the same database, so they have the same DBID. I only realized this after importing both repositories, when the single new DBID gave it away: “oh, they have the same DBID!”
So, OK, I’ll tell awrload to load the data into different schemas.
Well, awrload drops the staging user, so the first step was to stop it from doing that: I went into @?/rdbms/admin/awrload.sql and commented out the drop-user section.
I then ran awrload and tried to give the staging user a unique name:
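Under the covers awrload.sql ends up calling DBMS_SWRF_INTERNAL.AWR_LOAD, roughly like this (a sketch; the dump file name is illustrative):

 begin
   dbms_swrf_internal.awr_load(schname => 'toto',
                               dmpfile => 'awr_export',
                               dmpdir  => 'AWR_DATA');
 end;
 /

That attempt failed with: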

begin
*
ERROR at line 1:
ORA-20102: user name 'toto' is invalid
ORA-06512: at "SYS.DBMS_SWRF_INTERNAL", line 412
ORA-01918: user '' does not exist
ORA-06512: at "SYS.DBMS_SWRF_INTERNAL", line 1713
ORA-06512: at line 3

It only seemed to accept the default user AWR_STAGE.
OK, so I load with AWR_STAGE and try to rename the user using:

update user$ set name='T_NEWNAME' where name='T';

see http://arjudba.blogspot.com/2008/05/how-to-rename-user-through-oracle.html

No go: awrload gives me an “AWR_STAGE already exists” error even after the rename.
OK, so I export the data from the first AWR_STAGE user:

 expdp schemas=awr_stage directory=AWR_DATA dumpfile=usrtst_awr.DMP

create a new user:

 create user usrtst
 identified by usrtst
 default tablespace sysaux
 temporary tablespace temp;
 grant connect to usrtst;
 alter user usrtst quota unlimited on sysaux;

then import the data:
 impdp REMAP_SCHEMA=awr_stage:usrtst directory=AWR_DATA DUMPFILE=usrtst_awr.DMP FULL=y
I use the same technique with the second database, manually dropping the AWR_STAGE user between awrload runs.

 create user chprod
 identified by chprod
 default tablespace sysaux
 temporary tablespace  temp;
 grant connect to chprod;
 alter user chprod quota unlimited on sysaux;
 impdp REMAP_SCHEMA=awr_stage:chprod directory=AWR_DATA DUMPFILE=chprod_awr.DMP FULL=y

Now I have two schemas with the AWR data from the two different databases.
Unfortunately, now I don’t have the benefit of the views on the WRH$ tables, so I have to create them myself.
First I have to grant the new users the privileges they need:

 grant create view to chprod;
 grant create view to usrtst;
 grant select on sys.v_$event_name to usrtst;
 grant select on sys.v_$event_name to chprod;

Now I connect as each user and create an ASH view:

 create or replace view v$ash as
 select
   en.name event,
   decode(ash.wait_time, 0, 'WAITING', 'ON CPU') session_state,
   ash.*
 from
   WRH$_ACTIVE_SESSION_HISTORY ash,
   v$event_name en
 where en.event_id = ash.event_id
 /

The “event_id” column comes in handy: I don’t have to worry about event# mapping to different names across databases.

Now I can run some of my favorite queries on the ASH data, at least. More work to be done, for sure, to figure out the best method for consolidating multiple AWR repositories with the same DBID.
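For example, a minimal sketch of the kind of query this enables, using the v$ash view created above to list the top wait events:

 select event, count(*)
 from v$ash
 where session_state = 'WAITING'
 group by event
 order by count(*) desc;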

New scripts, tools and broken links

I have some broken links in my old blog entries right now, so if you’re looking for something, then download the whole zip file from here:

I have uploaded a .zip file (for Windows) and a .tar.gz file (for Unix/Mac). The scripts are all the same; the differences are in the CR/LF bytes in the files and in init.sql and i.sql, which have some OS-specific commands in them.

I also uploaded the latest PerfSheet there, in which I fixed an annoying bug that complained about missing reference files when opening the file.

I plan to fix the broken links some time between now and my retirement.


Brain Teaser: 10046 Extended SQL Trace Shows “EXEC #435118472:c=15600,e=510”, How is that Possible?

April 14, 2011

From time to time I find interesting details about Oracle Database. I am in the process of organizing material for a presentation, and I encountered something interesting when testing Oracle Database 11.2.0.2 with patch 3 applied. I traced the execution of an application and then processed the 660MB trace file using a trace file [...]

SATA Cables Alone Cost ¥6,000 per Machine

Each of the PCs assembled for “Lots of SSDs, linked together over InfiniBand. We built it.” holds six SSDs and two HDDs, so eight SATA cables were needed per machine. You can buy a SATA cable in Akihabara for ¥180, but we can’t use those for business.

Handling Human Errors

Interesting question on human mistakes was posted on the DBA Managers Forum discussions today.

As human beings, we sometimes make mistakes. How do you make sure that your employees won’t make mistakes and cause downtime/data loss/etc. on your critical production systems?

I don’t think we can avoid this technically; proper working procedures are probably the solution.
I’d like to hear your thoughts.

I typed up my thoughts and, as I was finishing, decided it made sense to post them on the blog too, so here we go…

The keys to preventing mistakes are low stress levels, clear communications and established processes. Not a complete list, but I think these are the top factors in reducing the number of mistakes we make managing data infrastructure, or for that matter working in any critical environment, be it IT administration, aviation engineering or surgery. It’s also a matter of personality fit: depending on the balance you need between tolerance for mistakes and agility, you will favor hiring one individual or another.

Regardless of how much you try, there are still going to be human errors, and you have to account for them in your infrastructure design and processes. The real disasters happen when many things align: several failures combined with a few human mistakes. The challenge is to find the right balance between the effort invested in making no mistakes and the effort invested in making your environment error-proof, to the point where the risk of a human mistake is acceptable to the business.

Those are the general ideas.

Just a few examples of practical measures that prevent mistakes when it comes to Oracle DBA work:

  • test production actions on a test system before applying them in production
  • have a policy of reviewing every production change with another senior member of the team
  • a “watch over my shoulder” policy when working on production environments, i.e. a second pair of eyes at all times
  • employee training, such as database recovery bootcamps
  • the discipline of performing routine work under non-privileged accounts

Some of the items that limit the impact of mistakes:

  • multiple database controlfiles for an Oracle database, in case a DBA manually does something bad to one of them (I saw this happen); see the sketch after this list
  • a standby database with delayed recovery, or flashback database (for Oracle)
  • a no-SPOF architecture
  • Oracle RAC, MySQL high availability setups (like sharding or replication), SQL Server clusters: architecture examples that limit the impact of human mistakes affecting a single hardware component
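As a concrete illustration of the first two Oracle items, a minimal sketch (the paths and retention value are made up for the example):

 -- multiplex the controlfile across two locations; takes effect at the next restart
 alter system set control_files =
   '/u01/oradata/PROD/control01.ctl',
   '/u02/oradata/PROD/control02.ctl' scope=spfile;

 -- enable flashback database so a bad change can be rewound
 alter system set db_flashback_retention_target = 1440 scope=both;  -- minutes
 alter database flashback on;  -- needs a flash recovery area; on older versions the database must be mounted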

Both lists can go on for a very long time. An old article by Paul Vallee is very relevant to this topic: The Seven Deadly Habits of a DBA…and how to cure them.

Feel free to post your thoughts and examples. How do you approach human mistakes in managing production data infrastructure?

ITL Waits – Changes in Recent Releases

In recent releases Oracle has silently changed the behavior of ITL waits. The aim of this post is to describe what has changed and why. But first of all, let’s review some essential concepts about ITLs and ITL waits.

Interested Transaction List

The Oracle database engine locks the data modified by a transaction at the row level. To implement this feature, every data block contains a list of all transactions that are modifying it. This list is commonly called the interested transaction list (ITL). Its purpose is twofold. First, it stores information identifying each transaction as well as a reference for accessing the undo data associated with it. Second, it is referenced by every modified or locked row to indicate which transaction that row is involved in.
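To see the ITL of a specific block you can dump it to a trace file; a minimal sketch (the file and block numbers are illustrative):

 alter system dump datafile 4 block 42;
 -- the trace file then shows one line per ITL slot, under a header like:
 -- Itl           Xid                  Uba         Flag  Lck        Scn/Fsc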

INITRANS