Oakies Blog Aggregator

IO performance: Oracle 9.2.0.7/Solaris 8/6130 SAN/Veritas Volume Manager.

I had a look at a system here in the Netherlands for a company that was having severe I/O performance problems. They tried to switch to direct I/O (filesystemio_options=SETALL), but they discovered that the I/O performance got worse, so they quickly switched back to ASYNCH. The system admins told the DBAs several times that there were no OS or file system issues. However, they had recently migrated from one SAN to another using Volume Manager mirroring, and the local storage was remotely mirrored, supposedly in asynchronous mode.
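For reference, switching an instance to direct and asynchronous I/O like this is normally done with the parameter shown below. This is a minimal sketch of that step, assuming an spfile is in use; the parameter is static, so it only takes effect after an instance restart.

-- Enable direct and asynchronous I/O for datafile access
-- (static parameter, requires an instance restart to take effect).
alter system set filesystemio_options = SETALL scope = spfile;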

So then I had a look at the system. The system has 24 CPUs, 57 GB of internal memory and 14 Oracle databases running on it. According to Statspack, the two most important databases did around 1700 I/Os per second. So I did a quick scan of the physical reads per SQL statement and found that some of the statements were missing indexes (based on Tapio's rules of engagement). I also noticed that the buffer caches were really small.
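That quick scan was essentially a top-N of statements by physical reads. Below is a hedged sketch of that kind of query, using V$SQL with HASH_VALUE rather than SQL_ID since this was a 9.2 system; the threshold and ordering are my own choices, not the original script.

-- Top SQL by physical reads (illustrative threshold of 10,000 disk reads).
select   hash_value, executions, disk_reads, buffer_gets,
         substr(sql_text, 1, 60) sql_text
from     v$sql
where    disk_reads > 10000
order by disk_reads desc;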

After adding some indexes and increasing the buffer cache(s), it was discovered that the writes were still a bit (understatement) slow. To shift the focus away from the database and onto the system and OS, I decided to use the write test program from Jonathan Lewis. The tests performed were 10,000 sequential writes of 1K blocks, 10,000 random writes of 8K blocks and 10,000 sequential writes of 16K blocks. The tests were run on different mount points and some interesting things were observed. The tests also performed slowly on the database mount points. So the database was no longer the root of the problem; something else was. Now the system administrators had to pay attention.

Google Sync for Windows Mobile for Contacts and Calendar

Just started to use this new Google feature; I installed it for my Google Apps domain (www.miraclebenelux.nl) and it works great. I can now sync my Contacts and Google Calendar automatically. It uses the ActiveSync utility from Windows Mobile, and you have to enable this feature for your Google Apps domain.

Now the only thing left to do is the Tasks list from the mail view and I am all set.

Missing. The. Point....

I probably buy 80 to 90% of my non-grocery items online.  Furniture, pictures, gifts, TV's, books, kitchen stuff - whatever I can - all online.  I hate the "store" experience.  Before I go into an actual physical store I usually know exactly what I want - buy it and leave.  It took me about 5 minutes to buy shoes this weekend :)

I buy online for the convenience - and the experience is fairly similar regardless of where you shop.  You typically have to create "that account" (even if you never intend to shop there again...) and you get that form to opt in or out of mailings.  They almost always default to "opt in" and I invariably set it to "opt out".

I just bought some shelves while sitting here in King of Prussia, PA (I live in VA, another benefit of shopping online, just do it when/where-ever you want)... I received two emails.  Email 1 - my receipt (great).  Email 2, well, it was in response to me opting out:

While registering as a shopper with xxxxxx.com, you chose not to receive our promotional Email. This is being sent to confirm that yyyy@yahoo.com will not receive Email from xxxxxx.com.

The decision to receive Email is personal and can be influenced for a variety of reasons. In an attempt to better understand and respond to our customers, we would appreciate it if you would answer a short survey on this topic.

That just strikes me as "missing the point" :)

Can you imagine what my survey comment field might have contained.... The survey did contain:

We value your feedback and encourage you to give us candid answers. Are there any comments you would like to make to xxxx? (Note: Response is limited to 250 characters)

250 characters.  I shall have to choose my words carefully...  I should have it written in Kanji to see if they support multi-byte and truly support 250 characters.  Or if it is really 250 bytes.

Using Amazon CloudFront

Last year, a customer was running into trouble with static product images on their website. The images were stored in an OCFS2 filesystem, and every so often OCFS2 would hang and the site slowly became unresponsive. We had talked a number of times about using a Content Delivery Network (CDN) to also improve the download streams to the client. The number of concurrent downloads from a domain is limited and differs per browser, so increasing the number of domains for your site will help to improve concurrent downloads. While we were discussing this (and after another problem with OCFS2), Amazon AWS sent out an email announcing the availability of CloudFront. Here you can store static content and it will be cached on servers around the world that are closest to the browsers requesting it. So we decided to implement this.

Step 1 was to sign up for the Amazon AWS service. Then we had to create an S3 bucket and upload the static content. There you run into one of the performance issues with the Cloud: uploading large amounts of data to the S3 (European) bucket is limited by your internet UPLOAD speed. Most people use an ADSL connection with high DOWNLOAD speed but a lower UPLOAD speed, so uploading 30 GB of data will take some time (depending on your upload speed). An S3 bucket is basically a raw datastore; you have to tell it which objects in that datastore are directories and which are files. Uploading the data was done with the JetS3t program. This has a command-line interface (CLI) and is written in Java, so it can run on different platforms. After 3 days (a weekend) all the data was online and available from CloudFront. We implemented a monitoring service with Zabbix to track the performance and availability of the CloudFront service. We noticed a couple of performance degradations, but the service has been available 100% of the time.

So was it worth all the effort that was put into it? The reason we did it was that OCFS2 seemed to have a performance and stability problem. Well, 2 months later we discovered the real cause of the problem. There was a firewall between the read-only nodes and the one read-write node of the OCFS2 cluster. This firewall was doing some housekeeping and lost track of connections in the process, so certain connections and messages between the OCFS2 nodes got lost. That caused the hangs and performance degradations. So the original reason for switching has been fixed, but we haven't switched back to the old implementation.

This Amazon CloudFront service turned out to be a nice solution for serving static content.

More to follow later.

Shared Pool Latch Contention

Recently I was looking at a system that had some shared pool instability. Once a week during the day it would start flushing objects and loading them back in when needed. This resulted in large library cache pin and library cache load lock waits. That problem was attacked with some simple changes. One of the problems was that this customer had changed the reserved pool minimum allocation size (a hidden parameter) from the default of 5120 bytes to 51200 bytes. As a result the (large) reserved shared pool wasn't used.
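A hedged way to verify that kind of suspicion is to look at V$SHARED_POOL_RESERVED, which shows how often the reserved list is actually requested; near-zero request activity suggests the reserved pool is sitting idle. This is a sketch of such a check, not the query used on site.

-- How much is the reserved shared pool actually being used?
select requests, request_misses, request_failures,
       free_space, used_space
from   v$shared_pool_reserved;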

Another strange problem was that every hour there was a spike in shared pool latch waits. It turned out that a DBA had built a script to check the shared pool (queries against x$ksmsp), and that caused the problem. While querying this view, Oracle needs to hold the shared pool latch, so if there are many small chunks to walk through, the latch can be held for a long time. When we killed the script, the spikes also disappeared.
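For illustration, below is the kind of x$ksmsp query such a monitoring script typically runs; this is my reconstruction, not the customer's script. X$ views are only visible when connected as SYS, and walking x$ksmsp holds the shared pool latch for the duration, so avoid running it repeatedly on a busy production system.

-- Summarize shared pool chunks by class (holds the shared pool latch while it runs).
select   ksmchcls chunk_class, count(*) chunks, sum(ksmchsiz) total_bytes
from     x$ksmsp
group by ksmchcls;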

Stability is your friend

Oracle, Microsoft and others are putting more and more automatic and self-everything features into their databases. There are of course many reasons why that makes sense (for Oracle and Microsoft), but does it make sense for all Oracle systems and their DBAs? I don't think so. Consider this:

All these automatic and self-tuning features will manage resources and make decisions that can and will change the behavior of your system. Now consider that you are the DBA of a mission-critical Oracle system. Do you want a system that runs well enough and is stable, or do you want a system that sometimes runs perfectly and sometimes runs badly? Let me know.

Xplan utility

A utility to add parent ID and execution order information to plans reported by DBMS_XPLAN. XPlan includes DISPLAY, DISPLAY_CURSOR and DISPLAY_AWR functionality for use in exactly the same way as the DBMS_XPLAN equivalents. Supports versions from 10g onwards. ***Update*** Now available in two formats: 1) as a PL/SQL package and 2) as a collection of three free-standing SQL*Plus scripts (i.e. no installation/database objects needed). January 2009 (updated October 2011)

30 days in the hole ...

So ... not really 30 days (21 to be precise) but it feels like forever. Late November, early December (while I was in the U.K.) several sites were installed with an updated version of one of the tools. Everything seemed to be going well, until all the databases ran out of space roughly around the same time. (We lost a team member a few months ago and he used to monitor sites until they achieved

Ultra-Fast MV Alteration using Prebuilt Table Option

Here is an interesting question that was posed to me once, and for which I had found a solution. After 9 years, I encountered the same question and was shocked to find that many people still don't know about a little trick that can avoid a potential problem later.

Someone asked me how to modify a column of a Materialized View, e.g. from varchar2(20) to varchar2(25), or something similar. Drop and recreate? Not an option. We are talking about a several hundred GB MV with a very complex query that will take days to complete.

Problem

When you want to add a column to a materialized view or modify a column definition, unfortunately there is no command functionally equivalent to ALTER MATERIALIZED VIEW … ADD COLUMN. The only way to alter an MV is to completely drop and recreate it with the alteration. That approach may be acceptable for small MVs; but for larger MVs the cost of rebuilding can make the process quite infeasible. In addition to the time it will take to rebuild the entire MV (which could be days, depending on the size), the redo/undo generation and the surge in logical I/O due to the MV query may seriously affect the performance of the source database. In some cases, large MVs may even fail to be rebuilt, as the undo segments may not have the undo information for long-running queries – causing ORA-01555 errors.

So is there a better approach? Yes, there is. In this document I am going to explain a better approach for creating an MV that makes the alterations possible without rebuilding the MV – a task accomplished in mere seconds as opposed to potentially days.

Concept of Segments

Segments are units of storage in Oracle. A table has a segment; a view does not, since the contents of a view are not stored – only the view definition is. A materialized view, however, stores its contents, so it has a segment.

Actually, the concept of segment goes a little bit further. If the table is partitioned, then each partition is a different segment. So, the relationship between tables and segments is one-to-many.
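As a small hedged illustration of that one-to-many relationship (using a hypothetical partitioned table called SALES), each partition shows up as its own segment in the dictionary:

-- Each partition of SALES is listed as a separate segment.
select segment_name, partition_name, segment_type
from   user_segments
where  segment_name = 'SALES';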

When you create an object that needs storage, such as a table, an MV or an index, Oracle first creates the corresponding segment. Once that is complete, the segment is shrouded by the cover of the object. The segment still continues to exist, but is now connected to the object. Until the segment is completely created and populated, the object technically does not exist. The segment may, in some cases, have a different name from the object. If the segment creation (or population) fails, Oracle automatically cleans up the remnants of the failed segment; but sometimes it may not, leaving behind shards that are eventually cleaned up by the SMON process.

MVs and Segments

Anyway, how is this discussion about segments relevant to our objective here – the fast alteration of MVs?

Plenty. Remember, MVs are nothing but tables behind the covers. Property-wise, MVs and tables are like sisters, not even cousins. You can think of MVs as regular tables with some built-in intelligence about how they were created (the defining query), how often they should be refreshed automatically by a job, and how queries should be transformed to take advantage of the presence of the MV. But apart from that, there is not much difference. You can directly insert into an MV, create indexes on it and so on. As far as the segment is concerned, there is no difference between an MV and a table. In fact, Oracle stores the segment as a table:

SQL> select SEGMENT_TYPE
2 from user_segments
3 where SEGMENT_NAME = 'MV1';

SEGMENT_TYPE
------------------
TABLE

However, the biggest difference is the very issue we are discussing – you can't add or modify columns of an MV, while you can do that freely for a table. If I were to attempt to logically represent tables and MVs, here is how it would look.

The segment is the same. If it was created as an MV, the properties of the MV take control of the segment. If it was created as a table, the properties of a table take control.

Prebuilt Table

Since under the covers the segment is the same for both MV and table, can’t you take advantage of the fact? Suppose you have a table and you now want to convert that to an MV. In other words, you want to repoint that arrow initially pointed at the table to the MV properties:

Can you do it? Yes, of course you can. Since at the segment level they are the same, Oracle allows you to do it. When you create an MV, you can use a special clause, ON PREBUILT TABLE. First, here is how you create an MV in the regular approach:

create materialized view mv1
never refresh as
select cast(count (1) as number(10)) cnt from t1;

If you check the objects created:

SQL> select object_id, data_object_id, object_type
2 from user_objects
3 where object_name = 'MV1';

 OBJECT_ID DATA_OBJECT_ID OBJECT_TYPE
---------- -------------- -------------------
     74842          74842 TABLE
     74843                MATERIALIZED VIEW

So, it creates two objects – a table and an MV - anyway. Note a very important difference though: the DATA_OBJECT_ID for the MV object is null. If you drop the MV and check for the objects:

SQL> drop materialized view mv1;

Materialized View dropped.

SQL> select object_id, data_object_id, object_type
2 from user_objects
3 where object_name = 'MV1';

no rows selected

Even though there were two objects – a table and an MV, when you dropped the MV, both were dropped. The table object didn’t have an independent existence. Dropping the MV drops the table automatically.

Now, in the modified approach, you first create a table with the same name as the MV you are going to create:

SQL> create table mv1 (cnt number(10));

Next you create the MV by adding a new clause called ON PREBUILT TABLE shown below:

create materialized view mv1
on prebuilt table
never refresh
as
select cast(count (1) as number(10)) cnt from t1;

Now there will be two objects as well – one table and one MV. The MV simply took over control of the segment, but since the table already existed, it did not recreate the table object. So there are still only two objects.

One concern: since you created the table manually, can you accidentally drop it? Let’s see:

SQL> drop table mv1;
drop table mv1
*
ERROR at line 1:
ORA-12083: must use DROP MATERIALIZED VIEW to drop "ARUP"."MV1"

That answers it. The table simply loses its independent existence. However, see what happens when you drop the MV:

SQL> DROP MATERIALIZED VIEW mv1;

Materialized view dropped.

Now check the segment:

SQL> select segment_type
2 from user_segments
3 where segment_name = 'MV1';

SEGMENT_TYPE
------------------
TABLE

The segment still exists! When you dropped the MV, the segment was not dropped; it simply reverted to being a table. You can confirm that by checking the objects view:

 OBJECT_ID DATA_OBJECT_ID OBJECT_TYPE
---------- -------------- -------------------
     77432          77432 TABLE

Voila! The object still exists as a table. Previously you saw that dropping the MV removed all the objects and the segment. However, in this approach the segment was preserved. Since it reverted to a table, you can do all the things possible with a table – select from it, create indexes, and – most important – modify a column. You can alter the column to make it NUMBER(11):

SQL> alter table mv1 modify (cnt number(11));

Table altered.

Now, create the MV again:

create materialized view mv1
on prebuilt table
never refresh as
select cast(count (1) as number(11)) cnt from t1;

That's it. The MV is altered. The whole process took only a few seconds, and since you didn't have to recreate the segment, you saved an enormous load on the database. Here is a schematic representation of what happened.

Now you know how powerful the prebuilt table option is. It only affects how you define the MV; nothing else. All other properties of the MV remain intact. The end users don't even know about the prebuilt table option, but for the DBA it remains a powerful tool in the arsenal. As a best practice I recommend creating any MV, regardless of size, with the ON PREBUILT TABLE clause. For small MVs you probably won't see a huge advantage; but what if today's small MV grows into a large one tomorrow? It's better to be safe than sorry.
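To make that best practice concrete, here is a minimal sketch of the pattern for a brand new MV, using a hypothetical MV2 over the same demo table T1; a refreshable MV is assumed here instead of the NEVER REFRESH demo above.

-- 1. Create the container table with the desired structure first.
create table mv2 (cnt number(10));

-- 2. Layer the MV definition on top of it with ON PREBUILT TABLE.
create materialized view mv2
on prebuilt table
refresh complete on demand
as
select cast(count(1) as number(10)) cnt from t1;

-- 3. Populate it with a complete refresh.
exec dbms_mview.refresh('MV2', 'C')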

Conversion to the New Approach

Now that you understand the power of the prebuilt table option, you may be wondering how to convert existing MVs to the new clause. Unfortunately there is no conversion path; you have to drop and recreate the MVs. That is why this time – when we are moving MVs to new tablespaces – we have a golden opportunity.

One approach is to create new tables with new names and then rename them. Here are the steps:

1. Create a table with the nologging clause from the old MV:
create table new_mv1
nologging
as
select * from mv1;

2. Capture the MV definition from the data dictionary:

select dbms_metadata.get_ddl ('MATERIALIZED_VIEW','MV1')
from dual ;

DBMS_METADATA.GET_DDL('MATERIALIZED_VIEW','MV1')
------------------------------------------------
CREATE MATERIALIZED VIEW "ARUP"."MV1" ("CNT")
ORGANIZATION HEAP PCTFREE 10
… and so on …

3. Spool this to a file to be executed later.

4. Edit this file to add the ON PREBUILT TABLE clause:

CREATE MATERIALIZED VIEW "ARUP"."MV1" ("CNT")
ORGANIZATION HEAP ON PREBUILT TABLE PCTFREE 10

5. Take a Data Pump export with the CONTENT=METADATA_ONLY option. This captures all relevant privileges in the export dump file. Keep it aside.

6. Drop the Materialized View MV1.

7. Rename table NEW_MV1 to MV1

8. Execute the script you created earlier to recreate the MV.

9. Import the export dump file. It will recreate all the privileges.

This is slow, but it is the best approach since it generates a minimal amount of redo and undo. Steps 6 to 8 are sketched below for the simple MV1 example used earlier.
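A hedged sketch of steps 6 to 8, reusing the MV1/NEW_MV1 names and the simple defining query from the earlier example; for a real MV you would run the edited DDL captured in step 2 instead.

-- Step 6: drop the old MV (the prebuilt table trick is not in play yet,
-- so this removes the old segment as well).
drop materialized view mv1;

-- Step 7: rename the freshly built copy to the MV name.
rename new_mv1 to mv1;

-- Step 8: recreate the MV on top of the existing table.
create materialized view mv1
on prebuilt table
never refresh
as
select cast(count(1) as number(10)) cnt from t1;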

Hope this is helpful. You may also want to look at an article I wrote: http://www.dbazine.com/oracle/or-articles/nanda2. The article describes, with complete code, how to alter an MV where the refresh occurs across databases.

Making a Shell Variable Read Only

Being inherently lazy, I am always a sucker for shortcuts, neat tricks to cut my work and, most important, not to do the same thing again and again. Here is a tip I find useful.

Have you ever been frustrated to find that some line inside a shell script has changed an important shell variable such as ORACLE_BASE? The list of variables that are important to the safety and efficiency of your shell is a long one - PS1, ORACLE_BASE, PATH, and so on. Using this little-known command, you can easily "protect" a variable. The trick is to make it readonly. First, set the variable:

# export ORACLE_BASE=/opt/oracle

Then make it readonly:

# readonly ORACLE_BASE

Now if you want to set it:

# export ORACLE_BASE=/opt/oracle1
-bash: ORACLE_BASE: readonly variable

You can't. You can't even unset the variable:

# unset ORACLE_BASE
-bash: unset: ORACLE_BASE: cannot unset: readonly variable

This is a cool way to protect important variables.

To get a list of variables that are readonly, use

# declare -r
declare -ar BASH_VERSINFO='([0]="3" [1]="00" [2]="15" [3]="1" [4]="release" [5]="i386-redhat-linux-gnu")'
declare -ir EUID="500"
declare -rx ORACLE_BASE="/opt/oracle"
declare -ir PPID="13204"
declare -r SHELLOPTS="braceexpand:emacs:hashall:histexpand:history:interactive-comments:monitor"
declare -ir UID="500"

Unfortunately there is no command to make it writable again.

In the same way, you can also prevent a specific variable from being set at all. LD_LIBRARY_PATH should not be set during some types of installations. To enforce that:

# export LD_LIBRARY_PATH
# readonly LD_LIBRARY_PATH

Now if you want to assign a value:

# export LD_LIBRARY_PATH=d
-bash: LD_LIBRARY_PATH: readonly variable

You will not be able to. You can also achieve the same goal by:

# declare -r LD_LIBRARY_PATH=

I hope you find it useful.