Oakies Blog Aggregator

Oracle Database 11g Release 2 for HP-UX Itanium and AIX (PPC64) Now Available

The HP-UX Itanium and AIX (PPC64) ports of Oracle Database 11g Release 2 can now be downloaded from OTN. Happy Holidays!!!

My Whole System Is Slow. Now What?

At CMG'09 a couple of weeks ago, I presented "Measuring Response Times of Code on Oracle Systems." The paper for this presentation was a subset of "For Developers: Making Friends with the Oracle Database." In the presentation, I spent a few minutes talking about why to measure response times in Oracle, and then I spent a lot of minutes talking about how. As usual, I focused heavily on the importance of measuring response times of individual business tasks executed by individual end users.

At the end of the talk, a group of people came to the podium to ask questions (always a good sign). The first question was the question that a lot of people ask. It was:

My whole system is slow. That's all my users will tell me. So then, how do I begin to do what you're describing?

Here's the answer:

Ask your users to show you what they're doing. Just go look at it.

The results of this simple advice are routinely spectacular. Just go look at it: I'm surprised whenever someone doesn't think of doing that, but I shouldn't be. That's because I didn't do it either, for the longest time. I had to learn to. And that's the story I want to tell you here.

In the early 1990s, I was a consultant with Oracle Corporation visiting clients with performance problems at a pace of more than 30 per year. Back then, I did Oracle performance work the old fashioned way: I checked everything I knew how to check, and then I fixed everything I knew how to fix. All billable by the hour. (Note: When I was doing it this way, I had not yet been taught by Dave Ensor, who changed me forever.)

On weeks when I was lucky, I'd be finished checking and fixing by sometime Wednesday, leaving a couple of days to find out what people thought of my work. If I were lucky again (that's two "lucky"s now), everyone would be thrilled with the results. I'd get my hug (so to speak), and I'd catch my flight.

But I wasn't always lucky. Some weeks, I wouldn't find anything suspicious in my checking and fixing. Some weeks, I'd find plenty, but still not everyone would be thrilled with the work. Having people be less than thrilled with my work caused pain for me, which motivated me to figure out how to take more control of my consulting engagements, to drive luck out of the equation.

The most important thing I figured out was...

People knew before I came on-site how they were going to measure on Thursday whether they liked the results of my work.

And...

They were willing to tell me on Monday.

All I had to do was be honest, like this:

On the day I'm done working here, I'd imagine you're going to want to run something that will demonstrate whether I accomplished what you were hoping for while I was here. Would you mind telling me about that now? Maybe even showing me?

I could ask that on Monday, and people were glad to tell me. I'd watch the things run and record how long they ran, and then I'd know how to prioritize my time on site. I'd record how long they ran so at the end of my engagement, I'd be able to show very clearly what improvements I had made.

Sometimes, there would be thirty different things that people would expect to measure on Thursday. Since I might not have time to fix them all, I needed to make sure that I knew the priority of the things I was being asked to fix.

That one step alone—knowing on Monday that prioritized list of what tasks needed to be fast by Thursday—drastically reduced my reliance on luck as a success factor in my job at these sites. Knowing that list on Monday is just like when your teacher in school tells you exactly what's going to be on your next test. It allows you to focus your attention on exactly what you need to do to optimize your reward for the week. (Note to fellow education enthusiasts: Please don't interpret this paragraph as my advocating the idea that GPA should be a student's sole—or even dominant—optimization constraint.)

So, what I learned is that the very first step of any good performance optimization method is necessarily this:

1. Identify the task that's the most important to you.

When I say "task," think "program" or "click" or "batch job" if you want to. What I mean is "a useful unit of work that makes sense to the business." ...Something that a business user would show you if you just went and watched her work for a few minutes.

Then comes step two:

2. Measure its response time (R). In detail.

Why is response time so important? Because that's what's important to the person who'll be watching it run on Thursday, assessing whether she thinks you've done a good job or not. That person's going to click and then wait. Happiness will be inversely proportional to how long the wait is. That's it. That's what "performance" means at 99% of sites I've ever visited.

(If you're interested in the other 1% of sites I've visited, they're interested in throughput, which I've written about in another blog post.)

Measuring response time is vital. You must be able to measure response time if you're going to nail that test on Thursday.

The key is to understand that the term response time doesn't even have a definition except in the context of a task. You can't measure response time if you don't first decide what task you're going to measure. In other words, you cannot do step 2 before you do step 1. With Oracle, for example, you can collect ASH data (if you're licensed to use it) or even trace data for a whole bunch of Oracle processes, but you won't have a single response time until you define which tasks buried within that data are the ones you want to extract and pay attention to.

You get that by visiting a user and watching what she does.
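Once you know which task matters and the application can tag it (for example with a client identifier - the identifier value below is made up), one way to scope Oracle's extended SQL trace to just that task is DBMS_MONITOR. A minimal sketch:

-- The application tags its sessions for the task, e.g. via
--   dbms_session.set_identifier('order-entry-checkout');
-- Then enable extended SQL trace (with wait events) for that tag only:
exec dbms_monitor.client_id_trace_enable(client_id => 'order-entry-checkout', waits => TRUE, binds => FALSE)

-- ...have the user run the task once, then switch tracing off again:
exec dbms_monitor.client_id_trace_disable(client_id => 'order-entry-checkout')

The resulting trace files then contain only that task's database calls and waits, which is exactly the response time detail that step 2 asks for.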

There are lots of excuses for not watching your users. Like these...

  • "I don't know my users." I know. But you should. You'd do your job better if you did. And your users would, too.
  • "My users aren't here." I know. They're on the web. They're in Chicago and Singapore and Istanbul, buying plane tickets or baseball caps or stock shares. But if you can't watch at least a simulation of the things those users actually do with the system you help manage, then I can't imagine how you would possibly succeed at providing good performance to them.
  • "I'm supposed to be able to manage performance with my dashboard." I know. I was supposed to have a hover car by the year 2000.

The longer you stay mired in excuses like these, the longer it's going to be before you can get the benefit of my point here. Your users are running something, and whatever that is that they're running is your version of my Thursday test. You can check and fix all you want, but unless you get lucky and fix the exact tooth that's hurting, your efforts aren't going to be perceived as "helpful." Checking and fixing everything you can think of is far less efficient and effective than targeting exactly what your user needs you to target.

Lots of performance analysts (DBAs, developers, architects, sysadmins, and so on) assume that when someone says, "The whole system is slow," it means there must be a single parameter somewhere in the bowels of the system that needs adjustment, and if you can just make that adjustment, everything is going to be ok. It might mean that, but in my experience, the overwhelming majority of cases are not that way. (Pages 25–29 of Optimizing Oracle Performance have more information about this.)

The great thing about measuring response time is that no matter what the problem is, you'll see it. If the program you're watching is poorly written, you'll see it. If some other program is hogging too much of a resource that your program needs, you'll see it. If you have a bad disk controller, you'll see it. If some parameter needs adjusting, you'll see it.

Realize that when a business user says "system," she doesn't mean what you would mean if you said "system." She means that the thing she runs is slow. Look at that thing. Maybe there are seventeen of them. And sure, maybe all seventeen suffer from the same root cause. If that's the case, then fantastic, because fixing the first problem will magically fix the other sixteen, too. If it's not, then fantastic anyway, because now all of them are on your prioritized list of tasks to optimize, and you'll probably surprise yourself how quickly you'll be able to pick them all off when you focus on one task at a time.

Compression to the rescue

We've had issues getting enough storage space allocated on the development server our project is intended to use. For this project, our team doesn't have full DBA rights on the box and has only very limited privileges. Even before I joined the team, a request had been made for a little over 3.5 TB of space to be used for the development of a proof-of-concept datamart. The first project deliverable is due this week, and we've only been allocated 200 GB so far. As you can imagine, it's difficult to get the needed data loaded when less than 10% of the space we need is available.

The data is supposed to span a 2-year period. A set of base transaction tables has to be extracted from various sources and loaded, and then that data will be flattened out into a model representing the final design. With only 200 GB currently available, my initial loads of 5 (out of 50) tables took nearly all of it. So, I had to do something to fit as much data as possible into the space we had. Compression to the rescue!

I decided to compress the tables I had extracted using basic compression (just add COMPRESS to the table definition). But first, to get the best compression possible, I collected statistics on the tables I wanted to compress and reviewed the num_distinct column statistic. Using it, I sorted the data into each new compressed table with a column order running from the lowest to the highest number of distinct values.
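Here is the approach as a sketch (the table and column names are made up, not the real project objects):

-- 1. Gather statistics on the uncompressed staging table
exec dbms_stats.gather_table_stats(user, 'TXN_STAGE')

-- 2. Review the number of distinct values per column
select column_name, num_distinct
from user_tab_col_statistics
where table_name = 'TXN_STAGE'
order by num_distinct;

-- 3. Rebuild with basic compression, ordering by the lowest-cardinality
--    columns first so each block holds as many repeated values as possible
create table txn_compressed compress
as
select *
from txn_stage
order by region_cd, txn_type_cd, txn_dt;

Basic compression works at the block level by de-duplicating repeated column values, so clustering similar values together this way generally improves the compression ratio.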

When the new compressed tables were ready, I had achieved 86% compression! Using this method, I was able to load the majority of the key tables I needed to allow progress to continue for our initial project deliverable. I still haven't been able to get everything, but at least I got enough to allow our statisticians to produce some initial findings (the project is intended to create a predictive analytical model).
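A compression figure like that can be checked by comparing segment sizes before and after, along these lines (again with made-up names):

select segment_name, round(bytes/1024/1024) as mb
from user_segments
where segment_name in ('TXN_STAGE', 'TXN_COMPRESSED');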

An additional 2 TB was allocated to us on Friday, but of course it was on a different server, so the initial data has to be exported and imported into the new location (sigh...). Still, the lack of available space gave me an opportunity to use a feature effectively that I probably wouldn't have used if I'd had all the space I needed to start with.

The CPU Costing Model: A Few Thoughts Part III (Bang Bang)

One of the advantages of system statistics and the CPU costing model is in how the CBO deals with multiblock reads and the impact of the db_file_multiblock_read_count parameter. When performing a full table (or fast full index) scan, the CBO needs to determine just how many multiblock read operations are likely to be performed so the associated operation can [...]

TEMP Table Transformation and Remote Procedure Calls

I've recently come across an interesting side effect regarding temp table transformations at one of my clients.

There was a PL/SQL package procedure that worked fine when called locally but somehow "hung" when called from a remote database. The remote call invoked exactly the same package procedure with the same parameters as the local call, yet one of the SQL statements executed as part of the procedure got a suboptimal execution plan that never completed.

Further investigation revealed that the significant difference between the execution plans of the local and the remote execution of the procedure was the different treatment of a contained "WITH" clause.

The interesting point is that the procedure itself didn't perform any "distributed" queries or DML - the only difference was that one time the procedure was called locally, and the other time remotely via database link. All processing within the procedure was local; no activity using database links was involved.

There are (at least) two known areas where Oracle can optionally use a so-called TEMP TABLE TRANSFORMATION as part of the execution plan:

1. Materialization of a subquery factoring clause, also known as a "common table expression" or simply a "WITH clause"

Oracle uses this when the subquery is used more than once in the execution plan, or if forced with the undocumented MATERIALIZE hint as part of the SELECT in the WITH clause. There are a few (not really documented) limitations of this materialization; in particular, if LOBs or LONGs are part of the projection, the TEMP TABLE transformation can't be used.

2. Star transformation with TEMP TABLE transformation

Star transformations can also make use of the TEMP TABLE transformation. This is enabled by default when STAR_TRANSFORMATION_ENABLED is set to TRUE, but can be disabled by setting STAR_TRANSFORMATION_ENABLED to TEMP_DISABLE.
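For reference, the corresponding session-level settings look like this (a minimal sketch; the test case below relies on the first form):

alter session set star_transformation_enabled = true;         -- star transformation allowed, including the TEMP TABLE transformation
alter session set star_transformation_enabled = temp_disable; -- star transformation allowed, but without the TEMP TABLE transformation
alter session set star_transformation_enabled = false;        -- default: no star transformation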

The following test case demonstrates the subtle side effect of the remote procedure call via database link.

Consider the following simple package:

create or replace package pkg_test_materialize_remote authid current_user as
  procedure test_star_transform_statement;
  procedure test_simple_statement;
end pkg_test_materialize_remote;
/

create or replace package body pkg_test_materialize_remote as
  procedure test_star_transform_statement as
    c  sys_refcursor;
    n1 number;
    c1 varchar2(255);
    c2 varchar2(255);
  begin
    execute immediate 'alter session set star_transformation_enabled = true';

    open c for '
      select
        sum(quantity_sold),
        p.prod_subcategory_desc,
        c.cust_gender
      from
        sh.sales s
        join sh.products p ON (s.prod_id = p.prod_id)
        join sh.customers c ON (s.cust_id = c.cust_id)
        join sh.countries d ON (c.country_id = d.country_id)
      where
        p.prod_subcategory_desc = ''Memory'' and
        c.cust_city = ''Oxford'' and
        c.cust_gender = ''F''
      group by
        p.prod_subcategory_desc, c.cust_gender';

    fetch c into n1, c1, c2;

    close c;
  end test_star_transform_statement;

  procedure test_simple_statement as
  begin
    for rec in (
      with a as (
        select /*+ materialize */
          *
        from
          dual
      )
      select
        *
      from
        a
      where
        1 = 2
    ) loop
      null;
    end loop;
  end test_simple_statement;
end pkg_test_materialize_remote;
/

The simple statement is not really useful but probably the simplest form of requesting a TEMP TABLE transformation.

The statement using star transformation generates a TEMP TABLE transformation when using the default demo SH setup. I've only wrapped it in dynamic SQL to take advantage of any enabled roles due to the AUTHID CURRENT_USER clause.

And indeed when calling the two procedures locally like the following:

exec pkg_test_materialize_remote.test_simple_statement

exec pkg_test_materialize_remote.test_star_transform_statement

You get these execution plans, both using a TEMP TABLE transformation:

SQL_ID cdudvb233tdau, child number 0
-------------------------------------
WITH A AS ( SELECT /*+ materialize */ * FROM DUAL ) SELECT * FROM A
WHERE 1 = 2

Plan hash value: 1137659336

-------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 2 (100)| |
| 1 | TEMP TABLE TRANSFORMATION | | | | | |
| 2 | LOAD AS SELECT | | | | | |
| 3 | TABLE ACCESS FULL | DUAL | 1 | 2 | 2 (0)| 00:00:01 |
|* 4 | FILTER | | | | | |
| 5 | VIEW | | 1 | 2 | 2 (0)| 00:00:01 |
| 6 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6617_E0AEB | 1 | 2 | 2 (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

4 - filter(NULL IS NOT NULL)

SQL_ID 5bp3rpfs2f4bw, child number 0
-------------------------------------
select sum(quantity_sold),
p.prod_subcategory_desc, c.cust_gender from
sh.sales s join sh.products p ON (s.prod_id = p.prod_id)
join sh.customers c ON (s.cust_id = c.cust_id) join
sh.countries d ON (c.country_id = d.country_id) where
p.prod_subcategory_desc = 'Memory' and c.cust_city = 'Oxford'
and c.cust_gender = 'F' group by
p.prod_subcategory_desc, c.cust_gender

Plan hash value: 3883759770

------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 435 (100)| | | |
| 1 | TEMP TABLE TRANSFORMATION | | | | | | | |
| 2 | LOAD AS SELECT | | | | | | | |
|* 3 | TABLE ACCESS FULL | CUSTOMERS | 45 | 990 | 405 (1)| 00:00:05 | | |
| 4 | HASH GROUP BY | | 2 | 94 | 30 (10)| 00:00:01 | | |
|* 5 | HASH JOIN | | 27 | 1269 | 24 (9)| 00:00:01 | | |
|* 6 | HASH JOIN | | 27 | 1134 | 22 (5)| 00:00:01 | | |
|* 7 | HASH JOIN | | 27 | 810 | 20 (5)| 00:00:01 | | |
|* 8 | TABLE ACCESS FULL | PRODUCTS | 3 | 54 | 3 (0)| 00:00:01 | | |
| 9 | PARTITION RANGE ALL | | 31 | 372 | 16 (0)| 00:00:01 | 1 | 28 |
| 10 | TABLE ACCESS BY LOCAL INDEX ROWID| SALES | 31 | 372 | 16 (0)| 00:00:01 | 1 | 28 |
| 11 | BITMAP CONVERSION TO ROWIDS | | | | | | | |
| 12 | BITMAP AND | | | | | | | |
| 13 | BITMAP MERGE | | | | | | | |
| 14 | BITMAP KEY ITERATION | | | | | | | |
| 15 | BUFFER SORT | | | | | | | |
|* 16 | TABLE ACCESS FULL | PRODUCTS | 3 | 54 | 3 (0)| 00:00:01 | | |
|* 17 | BITMAP INDEX RANGE SCAN | SALES_PROD_BIX | | | | | 1 | 28 |
| 18 | BITMAP MERGE | | | | | | | |
| 19 | BITMAP KEY ITERATION | | | | | | | |
| 20 | BUFFER SORT | | | | | | | |
| 21 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6616_E0AEB | 45 | 225 | 2 (0)| 00:00:01 | | |
|* 22 | BITMAP INDEX RANGE SCAN | SALES_CUST_BIX | | | | | 1 | 28 |
| 23 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6616_E0AEB | 45 | 540 | 2 (0)| 00:00:01 | | |
| 24 | INDEX FULL SCAN | COUNTRIES_PK | 23 | 115 | 1 (0)| 00:00:01 | | |
------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - filter(("C"."CUST_CITY"='Oxford' AND "C"."CUST_GENDER"='F'))
5 - access("C1"="D"."COUNTRY_ID")
6 - access("S"."CUST_ID"="C0")
7 - access("S"."PROD_ID"="P"."PROD_ID")
8 - filter("P"."PROD_SUBCATEGORY_DESC"='Memory')
16 - filter("P"."PROD_SUBCATEGORY_DESC"='Memory')
17 - access("S"."PROD_ID"="P"."PROD_ID")
22 - access("S"."CUST_ID"="C0")

Note
-----
- star transformation used for this statement

Let's simulate a Remote Procedure Call using a loopback database link:

create database link loopback@local connect to user identified by pwd using 'orcl';

Now get rid of the two execution plans in the Shared Pool (e.g. by flushing it, if this is a test system) and execute the following:

exec pkg_test_materialize_remote.test_simple_statement@loopback@local

exec pkg_test_materialize_remote.test_star_transform_statement@loopback@local

The execution plans now look like this:

SQL_ID cdudvb233tdau, child number 0
-------------------------------------
WITH A AS ( SELECT /*+ materialize */ * FROM DUAL ) SELECT * FROM A
WHERE 1 = 2

Plan hash value: 3752461848

---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 1 (100)| |
|* 1 | FILTER | | | | | |
| 2 | TABLE ACCESS FULL| DUAL | 1 | 2 | 2 (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter(NULL IS NOT NULL)

SQL_ID 5bp3rpfs2f4bw, child number 0
-------------------------------------
select sum(quantity_sold),
p.prod_subcategory_desc, c.cust_gender from
sh.sales s join sh.products p ON (s.prod_id = p.prod_id)
join sh.customers c ON (s.cust_id = c.cust_id) join
sh.countries d ON (c.country_id = d.country_id) where
p.prod_subcategory_desc = 'Memory' and c.cust_city = 'Oxford'
and c.cust_gender = 'F' group by
p.prod_subcategory_desc, c.cust_gender

Plan hash value: 3619028137

-------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 1238 (100)| | | |
| 1 | SORT GROUP BY NOSORT | | 1 | 57 | 1238 (1)| 00:00:15 | | |
|* 2 | HASH JOIN | | 26 | 1482 | 830 (1)| 00:00:10 | | |
|* 3 | HASH JOIN | | 26 | 1352 | 828 (1)| 00:00:10 | | |
|* 4 | HASH JOIN | | 27 | 810 | 423 (1)| 00:00:06 | | |
|* 5 | TABLE ACCESS FULL | PRODUCTS | 3 | 54 | 3 (0)| 00:00:01 | | |
| 6 | PARTITION RANGE ALL | | 31 | 372 | 419 (1)| 00:00:06 | 1 | 28 |
| 7 | TABLE ACCESS BY LOCAL INDEX ROWID| SALES | 31 | 372 | 419 (1)| 00:00:06 | 1 | 28 |
| 8 | BITMAP CONVERSION TO ROWIDS | | | | | | | |
| 9 | BITMAP AND | | | | | | | |
| 10 | BITMAP MERGE | | | | | | | |
| 11 | BITMAP KEY ITERATION | | | | | | | |
| 12 | BUFFER SORT | | | | | | | |
|* 13 | TABLE ACCESS FULL | PRODUCTS | 3 | 54 | 3 (0)| 00:00:01 | | |
|* 14 | BITMAP INDEX RANGE SCAN | SALES_PROD_BIX | | | | | 1 | 28 |
| 15 | BITMAP MERGE | | | | | | | |
| 16 | BITMAP KEY ITERATION | | | | | | | |
| 17 | BUFFER SORT | | | | | | | |
|* 18 | TABLE ACCESS FULL | CUSTOMERS | 45 | 765 | 405 (1)| 00:00:05 | | |
|* 19 | BITMAP INDEX RANGE SCAN | SALES_CUST_BIX | | | | | 1 | 28 |
|* 20 | TABLE ACCESS FULL | CUSTOMERS | 45 | 990 | 405 (1)| 00:00:05 | | |
| 21 | INDEX FULL SCAN | COUNTRIES_PK | 23 | 115 | 1 (0)| 00:00:01 | | |
-------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("C"."COUNTRY_ID"="D"."COUNTRY_ID")
3 - access("S"."CUST_ID"="C"."CUST_ID")
4 - access("S"."PROD_ID"="P"."PROD_ID")
5 - filter("P"."PROD_SUBCATEGORY_DESC"='Memory')
13 - filter("P"."PROD_SUBCATEGORY_DESC"='Memory')
14 - access("S"."PROD_ID"="P"."PROD_ID")
18 - filter(("C"."CUST_CITY"='Oxford' AND "C"."CUST_GENDER"='F'))
19 - access("S"."CUST_ID"="C"."CUST_ID")
20 - filter(("C"."CUST_CITY"='Oxford' AND "C"."CUST_GENDER"='F'))

Note
-----
- star transformation used for this statement

Notice how the TEMP TABLE TRANSFORMATION is gone from both plans?

The odd thing is that so far I haven't been able to pinpoint any optimizer environment settings related to this - in fact the optimizer happily shares the existing plan between the remote and the local execution, so there doesn't seem to be any difference in the optimizer environment (which makes me think that this is more a side effect than an intended feature).

Therefore, to reproduce the issue above, you need to get rid of the plans generated by the local execution.
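On a test system the simplest way is to flush the shared pool; alternatively a single statement can be purged with DBMS_SHARED_POOL.PURGE - a sketch (note that on some 10.2 patch levels cursor purging has to be enabled explicitly before it takes effect):

-- Brute force, acceptable on a test system only
alter system flush shared_pool;

-- Targeted alternative: purge just the one statement from the library cache
-- (substitute the ADDRESS and HASH_VALUE returned by the query)
select address, hash_value from v$sqlarea where sql_id = 'cdudvb233tdau';

exec sys.dbms_shared_pool.purge('<address>,<hash_value>', 'C')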

The behaviour seems to be consistent across 10.2.0.4, 11.1.0.7 and 11.2.0.1.

I could imagine it is related to the fact that a distributed transaction has been started by the RPC call, but since the plans are re-used once they are already in the Shared Pool, this doesn't seem to be a reasonable explanation either.

I couldn't yet find any MetaLink (sorry, MOS) documents that describe this particular issue. I might raise an SR if I find the time; the issue has been worked around on the client side by ensuring particular execution paths for the critical statement.

Update Jan 2011: Thanks to the anonymous poster below - the corresponding official bug is Bug 9399589: "WITH" subqueries cannot be materialized inside a global transaction

Resolving Gaps in Data Guard Apply Using Incremental RMAN Backup

Recently, we had a glitch on a Data Guard (physical standby) database in our infrastructure. This is not a critical database, so the monitoring was relatively lax - and the fact that it is done by an outsourcer does not help either. In any case, the laxness resulted in a failure remaining undetected for quite some time; it was eventually discovered only when the customer complained. This standby database is usually opened for read-only access from time to time. This time, however, the customer saw that the data was significantly out of sync with the primary and raised a red flag. Unfortunately, by then it had become a rather political issue.

Since the DBA in charge couldn’t resolve the problem, I was called in. In this post, I will describe the issue and how it was resolved. In summary, there are two parts to the problem:

(1) What happened
(2) How to fix it

What Happened

Let’s look at the first question – what caused the standby to lag behind. First, I looked for the current SCN numbers of the primary and standby databases. On the primary:

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
1447102

On the standby:

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
1301571

Clearly there is a difference. But this by itself does not indicate a problem, since the standby is expected to lag behind the primary (this is an asynchronous, non-real-time apply setup). The real question is how far it is lagging in terms of wall clock time. To find out, I used the scn_to_timestamp function to translate the SCN to a timestamp:

SQL> select scn_to_timestamp(1447102) from dual;

SCN_TO_TIMESTAMP(1447102)
-------------------------------
18-DEC-09 08.54.28.000000000 AM

I ran the same query to find the timestamp associated with the SCN of the standby database as well (note that I ran it on the primary database, since it would fail on the standby, which is only mounted):

SQL> select scn_to_timestamp(1301571) from dual;

SCN_TO_TIMESTAMP(1301571)
-------------------------------
15-DEC-09 07.19.27.000000000 PM

This shows that the standby is lagging by two and a half days! The data at this point is not just stale; it must be rotten.

The next question is why it would be lagging so far back in the past. This is a 10.2 database, where the FAL server should automatically resolve any gaps in archived logs. Something must have happened that caused the FAL (fetch archived log) process to fail. To get that answer, I first checked the alert log of the standby instance and found these lines, which showed the issue clearly:


Fri Dec 18 06:12:26 2009
Waiting for all non-current ORLs to be archived...
Media Recovery Waiting for thread 1 sequence 700
Fetching gap sequence in thread 1, gap sequence 700-700

Fri Dec 18 06:13:27 2009
FAL[client]: Failed to request gap sequence
GAP - thread 1 sequence 700-700
DBID 846390698 branch 697108460
FAL[client]: All defined FAL servers have been attempted.


Going back in the alert log, I found these lines:

Tue Dec 15 17:16:15 2009
Fetching gap sequence in thread 1, gap sequence 700-700
Error 12514 received logging on to the standby
FAL[client, MRP0]: Error 12514 connecting to DEL1 for fetching gap sequence
Tue Dec 15 17:16:15 2009
Errors in file /opt/oracle/admin/DEL2/bdump/del2_mrp0_18308.trc:
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
Tue Dec 15 17:16:45 2009
Error 12514 received logging on to the standby
FAL[client, MRP0]: Error 12514 connecting to DEL1 for fetching gap sequence
Tue Dec 15 17:16:45 2009
Errors in file /opt/oracle/admin/DEL2/bdump/del2_mrp0_18308.trc:
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor

This clearly showed the issue. On December 15th at 17:16:15, the Managed Recovery Process encountered an error while receiving the log information from the primary. The error was ORA-12514 “TNS:listener does not currently know of service requested in connect descriptor”. This is usually the case when the TNS connect string is incorrectly specified. The primary is called DEL1 and there is a connect string called DEL1 on the standby server.

The connect string works well. Actually, right now there is no issue with the standby getting the archived logs, so the connect string is fine - now. The standby is receiving log information from the primary. There must have been some temporary hiccup that caused that specific archived log not to travel to the standby. Even if that log was somehow skipped (it could have been an intermittent problem), it should have been picked up by the FAL process later on; but that never happened. Since sequence# 700 was not applied, none of the logs received later - 701, 702 and so on - were applied either. This has caused the standby to lag behind ever since.
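As an aside, a quick way to see whether the standby itself currently reports a gap, and what the recovery processes are doing, is to query it directly on the standby - a minimal sketch:

-- Any row returned here means the standby has detected a gap
select thread#, low_sequence#, high_sequence# from v$archive_gap;

-- What the RFS/MRP processes are currently working on
select process, status, sequence# from v$managed_standby;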

So, the fundamental question was why FAL did not fetch the archived log sequence# 700 from the primary. To get to that, I looked into the alert log of the primary instance. The following lines were of interest:


Tue Dec 15 19:19:58 2009
Thread 1 advanced to log sequence 701 (LGWR switch)
Current log# 2 seq# 701 mem# 0: /u01/oradata/DEL1/onlinelog/o1_mf_2_5bhbkg92_.log
Tue Dec 15 19:20:29 2009
Errors in file /opt/oracle/product/10gR2/db1/admin/DEL1/bdump/del1_arc1_14469.trc:
ORA-00308: cannot open archived log '/u01/oraback/1_700_697108460.dbf'
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory
Additional information: 3
Tue Dec 15 19:20:29 2009
FAL[server, ARC1]: FAL archive failed, see trace file.
Tue Dec 15 19:20:29 2009
Errors in file /opt/oracle/product/10gR2/db1/admin/DEL1/bdump/del1_arc1_14469.trc:
ORA-16055: FAL request rejected
ARCH: FAL archive failed.
Archiver continuing
Tue Dec 15 19:20:29 2009
ORACLE Instance DEL1 - Archival Error. Archiver continuing.


These lines showed everything clearly. The issue was:

ORA-00308: cannot open archived log '/u01/oraback/1_700_697108460.dbf'
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory

The archived log simply was not available. The process could not see the file and couldn’t get it across to the standby site.

Upon further investigation I found that the DBA had removed the archived logs to make some room in the filesystem, without realizing that this had removed the most current one, which was yet to be transmitted to the remote site. The mystery of why FAL did not get that log was finally cleared up.

Solution

Now that I knew the cause, the focus was on the resolution. If archived log sequence# 700 had been available on the primary, I could have easily copied it over to the standby, registered the log file and let the managed recovery process pick it up. But unfortunately, the file was gone and I couldn’t just recreate it. Until that logfile was applied, recovery would not move forward. So, what were my options?

One option is of course to recreate the standby - possible, but not feasible considering the time required. The other option is to apply an incremental backup of the primary from that SCN. That’s the key - the backup must start from a specific SCN. I have described the process here since it is not very obvious. The following shows the step-by-step approach for resolving this problem, with each step marked to show where it must be performed - [Standby] or [Primary].

1. [Standby] Stop the managed standby apply process:

SQL> alter database recover managed standby database cancel;

Database altered.

2. [Standby] Shutdown the standby database

3. [Primary] On the primary, take an incremental backup from the SCN number where the standby has been stuck:

RMAN> run {
2> allocate channel c1 type disk format '/u01/oraback/%U.rmb';
3> backup incremental from scn 1301571 database;
4> }

using target database control file instead of recovery catalog
allocated channel: c1
channel c1: sid=139 devtype=DISK

Starting backup at 18-DEC-09
channel c1: starting full datafile backupset
channel c1: specifying datafile(s) in backupset
input datafile fno=00001 name=/u01/oradata/DEL1/datafile/o1_mf_system_5bhbh59c_.dbf

piece handle=/u01/oraback/06l16u1q_1_1.rmb tag=TAG20091218T083619 comment=NONE
channel c1: backup set complete, elapsed time: 00:00:06
Finished backup at 18-DEC-09
released channel: c1

4. [Primary] On the primary, create a new standby controlfile:

SQL> alter database create standby controlfile as '/u01/oraback/DEL1_standby.ctl';

Database altered.

5. [Primary] Copy these files to standby host:

oracle@oradba1 /u01/oraback# scp *.rmb *.ctl oracle@oradba2:/u01/oraback
oracle@oradba2's password:
06l16u1q_1_1.rmb 100% 43MB 10.7MB/s 00:04
DEL1_standby.ctl 100% 43MB 10.7MB/s 00:04

6. [Standby] Bring up the instance in nomount mode:

SQL> startup nomount

7. [Standby] Check the location of the controlfile:

SQL> show parameter control_files

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
control_files string /u01/oradata/standby_cntfile.ctl

8. [Standby] Replace the controlfile with the one you just created on the primary:

9. $ cp /u01/oraback/DEL1_standby.ctl /u01/oradata/standby_cntfile.ctl

10. [Standby] Mount the standby database:

SQL> alter database mount standby database;

11. [Standby] RMAN does not know about these files yet, so you must let it know - by a process called cataloging. Catalog these files:

$ rman target=/

Recovery Manager: Release 10.2.0.4.0 - Production on Fri Dec 18 06:44:25 2009

Copyright (c) 1982, 2007, Oracle. All rights reserved.

connected to target database: DEL1 (DBID=846390698, not open)
RMAN> catalog start with '/u01/oraback';

using target database control file instead of recovery catalog
searching for all files that match the pattern /u01/oraback

List of Files Unknown to the Database
=====================================
File Name: /u01/oraback/DEL1_standby.ctl
File Name: /u01/oraback/06l16u1q_1_1.rmb

Do you really want to catalog the above files (enter YES or NO)? yes
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /u01/oraback/DEL1_standby.ctl
File Name: /u01/oraback/06l16u1q_1_1.rmb

12. [Standby] Recover the database using these cataloged files:

RMAN> recover database;

Starting recover at 18-DEC-09
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00001: /u01/oradata/DEL2/datafile/o1_mf_system_5lptww3f_.dbf
...
channel ORA_DISK_1: reading from backup piece /u01/oraback/05l16u03_1_1.rmb
channel ORA_DISK_1: restored backup piece 1
piece handle=/u01/oraback/05l16u03_1_1.rmb tag=TAG20091218T083619
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07

starting media recovery

archive log thread 1 sequence 8012 is already on disk as file /u01/oradata/1_8012_697108460.dbf
archive log thread 1 sequence 8013 is already on disk as file /u01/oradata/1_8013_697108460.dbf


13. After some time, the recovery fails with the message:

archive log filename=/u01/oradata/1_8008_697108460.dbf thread=1 sequence=8009
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 12/18/2009 06:53:02
RMAN-11003: failure during parse/execution of SQL statement: alter database recover logfile '/u01/oradata/1_8008_697108460.dbf'
ORA-00310: archived log contains sequence 8008; sequence 8009 required
ORA-00334: archived log: '/u01/oradata/1_8008_697108460.dbf'

This happens because we have come to the last of the available archived logs. The expected archived log with sequence# 8009 has not been generated yet.

14. [Standby] At this point, exit RMAN and start the managed recovery process:

SQL> alter database recover managed standby database disconnect from session;

Database altered.

15. Check the SCNs on the primary and the standby:

[Standby] SQL> select current_scn from v$database;

CURRENT_SCN
-----------
1447474
[Primary] SQL> select current_scn from v$database;

CURRENT_SCN
-----------
1447478
Now they are very close to each other. The standby has now caught up.

Collaborate 2010: Penny Lane

Well, so much for my plans to not travel as much next year … Found out earlier in the week I’ve now been selected to present at next year’s Collaborate 10 event in Las Vegas. I’ll be presenting a new version of my Indexing Tricks and Traps presentation on Monday, 19 April from 10:45am to 11:45am [...]

Chinese Edition of TOP Available!

A few weeks ago the Chinese edition of my book, Troubleshooting Oracle Performance, was published by China-Pub! Honestly, this is something I did not expect when publishing the book.
Even though I have yet to touch a copy myself, let me spend a few words about it…
I still remember when I saw for the first time part [...]

Miracle Open World 2010, April 14-16, Legoland

Finally Miracle has published the first official advertisement for the Miracle Open World 2010 event, which will take place in Legoland, Billund, Jutland, Denmark, from 14-16 April 2010. Here is the link to the event site.

I'll be presenting at this spectacular and unique event. Hopefully the weather will permit me to fly to Legoland by myself and land at Billund airport which is just several hundred meters away from the conference venue.

Kernel NFS fights back… Oracle throughput matches Direct NFS with latest Solaris improvements

After my recent series of postings, I was made aware of David Lutz’s blog on NFS client performance with Solaris.  It turns out that you can vastly improve the performance of NFS clients using a new parameter to adjust the number of client connections.

root@saemrmb9> grep rpcmod /etc/system
set rpcmod:clnt_max_conns=8

This parameter was introduced in a patch for various flavors of Solaris.  For details on the various flavors, see David Lutz’s recent blog entry on improving NFS client performance.  Soon, it should be the default in Solaris, making out-of-the-box client performance scream.

DSS query throughput with Kernel NFS

I re-ran the DSS query referenced in my last entry and now kNFS matches the throughput of dNFS with 10gigE.


Kernel NFS throughput with Solaris 10 Update 8 (set rpcmod:clnt_max_conns=8)

This is great news for customers not yet on Oracle 11g.  With this latest fix to Solaris, you can match the throughput of Direct NFS on older versions of Oracle.  In a future post, I will explore the CPU impact of dNFS and kNFS with OLTP style transactions.
