This is in the Series "100 Things You Probably Didn't Know About Oracle". If you haven't already, I urge you to read the other parts -
During the recently concluded Collaborate 11 (http://collaborate11.ioug.org) I was asked a question that led to the idea for this entry, the fourth in the series: if the database is 100GB and you have a 100GB buffer cache, all the blocks will be in memory and there will never be any I/O to the datafiles other than the regular DBWR lazy writes, right?
Thank you to those who attended my fourth and last session Designing Applications for RAC at Collaborate 11. Considering it was at 8:30 AM on Thursday, right after the big party night, you must all be really committed to your work. Much appreciated.
You can get the slides here.
I look forward to seeing you all at Collab next year.
Many thanks to those who attended my session RAC for Beginners despite the 8 AM timeslot. You must be really motivated. I hope you found the session useful and learned something you can start using.
You can download the slides here.
Thanks to those who came to my second session of the day - Secure Your Database in a Single Day. Hopefully that met your expectations. The slides are found here.
Gavin Soorma has a nice blog on exporting and importing AWR repositories http://gavinsoorma.com/2009/07/exporting-and-importing-awr-snapshot-data/
I thought I would use this technique to load the AWR repositories of two databases I wanted to compare. I imported both repositories into the database on my laptop and, lo and behold, could only find one new DBID in the data. It turns out the two databases, though on different machines, are both copies of the same source database, so they have the same DBID.
So, OK, I’ll tell awrload to load the data into different schemas.
Well, awrload drops the staging user when it finishes, so the first step was to stop it from doing that: I went into @?/rdbms/admin/awrload.sql and commented out the drop user section.
I then ran awrload and tried to give the staging user a unique name:
begin
*
ERROR at line 1:
ORA-20102: user name 'toto' is invalid
ORA-06512: at "SYS.DBMS_SWRF_INTERNAL", line 412
ORA-01918: user '' does not exist
ORA-06512: at "SYS.DBMS_SWRF_INTERNAL", line 1713
ORA-06512: at line 3
It only seemed to accept the default user AWR_STAGE.
OK, so I loaded with AWR_STAGE and then tried to rename the user using
update user$ set name='T_NEWNAME' where name='T'; see http://arjudba.blogspot.com/2008/05/how-to-rename-user-through-oracle.html
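For completeness, the unsupported rename hack from that post looks roughly like the sketch below. This edits the data dictionary directly, so it belongs only on a throwaway database, and the new user name here is illustrative:

```sql
-- Unsupported hack: rename a user by updating the dictionary directly.
-- Only attempt this on a scratch database you can afford to lose.
connect / as sysdba

update sys.user$ set name = 'AWR_STAGE_1' where name = 'AWR_STAGE';
commit;

-- The dictionary cache still holds the old name, so flush the shared
-- pool (or bounce the instance) before the new name is visible.
alter system flush shared_pool;
```

As noted below, this did not help in my case anyway.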
No go: awrload still gives me an error that AWR_STAGE already exists, even after the rename.
OK, so I export the data from the first AWR_STAGE user
expdp schemas=awr_stage

and create a new user:

create user usrtst identified by usrtst default tablespace sysaux temporary tablespace temp;
grant connect to usrtst;
alter user usrtst quota unlimited on sysaux;
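Spelled out in full, the export command might look like the sketch below; the directory object and dump file name are my assumptions, chosen to match the names used in the import step:

```
expdp system schemas=AWR_STAGE directory=AWR_DATA dumpfile=usrtst_awr.dmp logfile=usrtst_awr_exp.log
```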
then import the data:
impdp REMAP_SCHEMA=awr_stage:usrtst directory=AWR_DATA DUMPFILE=usrtst_awr.DMP FULL=y
I do this same technique with the second database manually dropping the AWR_STAGE user between awrload loads.
create user chprod identified by chprod default tablespace sysaux temporary tablespace temp;
grant connect to chprod;
alter user chprod quota unlimited on sysaux;

impdp REMAP_SCHEMA=awr_stage:chprod directory=AWR_DATA DUMPFILE=chprod_awr.DMP FULL=y
Now I have two schemas with the AWR data from the two different databases.
Unfortunately now I don’t have the benefit of the views on the WRH tables, so I have to create them myself.
First I have to give the new users access to some views:
grant create view to chprod;
grant create view to usrtst;
grant select on sys.v_$event_name to usrtst;
grant select on sys.v_$event_name to chprod;
Now I connect as each user and create an ASH view
create or replace view v$ash as
select en.name event,
       decode(wait_time, 0, 'WAITING', 'ON CPU') session_state,
       ash.*
from   WRH$_ACTIVE_SESSION_HISTORY ash,
       v$event_name en
where  en.event_id = ash.event_id
/
The “event_id” column comes in handy, and I don’t have to worry about event# mapping to different names.
Now I can at least run some of my favorite queries on the ASH data. More work remains, for sure, to figure out the best way to consolidate multiple AWR repositories that share the same DBID.
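As an example of the kind of query this enables, here is a simple breakdown by event over one of the imported repositories (the schema names are the two created above; run the same query against the other schema to compare):

```sql
-- Top wait events / CPU samples in the imported ASH data for one
-- database; repeat against chprod.v$ash for the other.
select event, session_state, count(*) samples
from   usrtst.v$ash
group  by event, session_state
order  by samples desc;
```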
I have some broken links in my old blog entries right now, so if you’re looking for something, then download the whole zip file from here:
I have uploaded a .zip file (for Windows) and a .tar.gz file (for Unix/Mac). The scripts are the same apart from the CR/LF line endings and from init.sql and i.sql, which contain some OS-specific commands.
I also uploaded the latest PerfSheet there, in which I fixed an annoying bug that complained about missing reference files when opening the file.
I plan to fix the broken links some time between now and my retirement.
April 14, 2011

I find interesting details from time to time about Oracle Database. I am in the process of organizing material for a presentation, and I encountered something interesting when testing Oracle Database 220.127.116.11 with patch 3 applied. I traced the execution of an application and then processed the 660MB trace file using a trace file [...]
Interesting question on human mistakes was posted on the DBA Managers Forum discussions today.
As human beings, we sometimes make mistakes. How do you make sure that your employees won’t make mistakes and cause downtime, data loss, etc. on your critical production systems?
I don’t think we can avoid this technically; proper working procedures are probably the solution.
I’d like to hear your thoughts.
I typed up my thoughts, and as I was finishing, I realized it made sense to post them on the blog too, so here we go…
The keys to preventing mistakes are low stress levels, clear communication and established processes. That's not a complete list, but I think these are the top factors in reducing the number of mistakes we make managing data infrastructure, or for that matter working in any critical environment, be it IT administration, aviation engineering or surgery. It’s also a matter of personality fit: depending on the balance you need between tolerance for mistakes and required agility, you will favor hiring one individual or another.
Regardless of how hard you try, there will still be human errors, and you have to account for them in your infrastructure design and processes. The real disasters happen when many things align, such as several failures combined with a few human mistakes. The challenge is to find the right balance between the effort invested in making no mistakes and the effort invested in making your environment error-proof, to the point where the risk of a human mistake is acceptable to the business.
Those are the general ideas.
Just a few examples of practical ways to prevent mistakes in Oracle DBA work:
Some of the ways to limit the impact of mistakes:
Both lists could go on for a long time. An old article by Paul Vallee is very relevant to this topic: The Seven Deadly Habits of a DBA…and how to cure them.
Feel free to post your thoughts and examples. How do you approach human mistakes in managing production data infrastructure?
In recent releases Oracle has silently changed the behavior of ITL waits. The aim of this post is to describe what has changed and why. But first of all, let’s review some essential concepts about ITLs and ITL waits.
Interested Transaction List
The Oracle database engine locks the data modified by a transaction at the row level. To implement this feature, every data block contains a list of all transactions that are modifying it. This list is commonly called the interested transaction list (ITL). Its purpose is twofold. First, it stores information identifying each transaction as well as a reference for accessing the undo data associated with it. Second, it is referenced by every modified or locked row to indicate which transaction that row is involved in.
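To make this concrete, here is a minimal sketch of where ITLs surface for a DBA: INITRANS sets the initial number of ITL slots in each block, and a session that cannot get a slot waits on the "enq: TX - allocate ITL entry" event. The table name below is illustrative:

```sql
-- Start each block with a single ITL slot and no free space reserved,
-- which makes ITL shortage easy to provoke under concurrent updates.
create table t (n number) initrans 1 pctfree 0;

-- After running a concurrent update workload against T, check how
-- often sessions had to wait for an ITL slot in this segment.
select value as itl_waits
from   v$segment_statistics
where  object_name = 'T'
and    statistic_name = 'ITL waits';
```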