Oakies Blog Aggregator

Don’t go directly to Maximum Protection!

With a Data Guard Configuration in Maximum Performance protection mode, don’t go to Maximum Protection directly, because that leads to a restart of the primary database:

 Attention!

DGMGRL> show configuration;

Configuration - myconf

  Protection Mode: MaxPerformance
  Databases:
  prima  - Primary database
    physt  - Physical standby database
      physt2 - Physical standby database (receiving current redo)

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> edit configuration set protection mode as maxprotection;
Operation requires shutdown of instance "prima" on database "prima"
Shutting down instance "prima"...
Database closed.
Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "prima" on database "prima"
Starting instance "prima"...
ORACLE instance started.
Database mounted.
Database opened.

Instead, go to Maximum Availability first and then to Maximum Protection (the first command below simply returns the demo configuration to Maximum Performance):

DGMGRL> edit configuration set protection mode as maxperformance;
Succeeded.
DGMGRL> edit configuration set protection mode as maxavailability;
Succeeded.
DGMGRL> edit configuration set protection mode as maxprotection;
Succeeded.
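Both of the higher protection modes expect the standby destination to ship redo synchronously, so before raising the protection mode it is worth checking (and, if necessary, setting) the LogXptMode property of the standby. A minimal sketch, using the standby name from the configuration above:

DGMGRL> show database 'physt' 'LogXptMode';
DGMGRL> edit database 'physt' set property 'LogXptMode'='SYNC';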

The demo was done with 12c, involving a cascading standby database, but the behavior has been the same since 11g. The odd thing about it is that DGMGRL will restart the primary without warning. I have wanted to share that with the Oracle community for years but somehow never got around to it.

Tagged: Data Guard, High Availability

ALS Ice Bucket Challenge : Keeping It In The Family

If you watched my (No) Ice Bucket Challenge yesterday, you will know I nominated my Oracle family.

The first family member to step up to the plate was the daughter, Heli “Hell-Squirrel” Helskyaho. She shared the video on Facebook, which you can see here if you are a friend. We have to be careful with the parental controls to keep her safe!

Next up was the wife, Debra Lilley, who used the opportunity to do an Oscar acceptance speech.

That left only my dad, Graham Wood. This really is a case of saving the best for last. Thanks Dad for proving you really are the daddy! When I grow up I want to be just like you…

Thanks to the whole family for getting involved. It’s enough to bring a tear to your eye! :)

Cheers

Tim…

PS. Please don’t forget to donate!



In-memory Aggregation

The title of this piece is the name given to a new feature in 12.1.0.2, and since I’ve recently blogged about a limitation of the in-memory option I thought I’d pick this feature as the next obvious thing to blog about. This is a bit of a non sequitur, though, as the feature seems to have nothing whatsoever to do with the in-memory option; instead it’s a cunning mechanism combining aspects of the star-transformation (but without the bitmap indexes), Bloom filters, and “group-by” placement to minimise the cost of aggregation over high-volume joins.

Here’s a small data set I’ll use to demonstrate the feature:

create table towns
as
select
        rownum                                          id,
        trunc(dbms_random.value(1,51))                  id_state,
        rpad(dbms_random.string('U',3),12)              name,
        cast (rpad('x',trunc(dbms_random.value(50,60)),'x') as varchar2(60))    padding
from
        all_objects
where
        rownum <= 2000
;

alter table towns add constraint to_pk primary key(id);
create index to_i1 on towns(name);


create table people(
        id_town_work    number(6,0)     not null
                constraint pe_fk_wo references towns,
        id_town_home    number(6,0)     not null
                constraint pe_fk_ho references towns,
        dummy1          varchar2(10),
        dummy2          varchar2(10),
        padding         varchar2(110)
);

insert /*+ append */  into people
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        trunc(dbms_random.value(1,2001)),
        trunc(dbms_random.value(1,2001)),
        lpad(rownum,10),
        lpad(rownum,10),
        cast (rpad('x',trunc(dbms_random.value(50,60)),'x') as varchar2(60))    padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e5
;

commit;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'Towns',
                method_opt       => 'for all columns size 1'
        );
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'People',
                method_opt       => 'for all columns size 1'
        );
end;
/

I have a “large” table of people, and people can live in one town and work in another. Towns are in states and I'm interested in a report about people who live in one specific state but work in another (e.g. New Hampshire vs. Massachusetts). There are a couple of “padding” columns to represent the data associated with each town and person that I might want in a report. To keep things simple I haven't extended the query out to select the name of the state. Here's the query I might use to get the report I want:

select
        wt.padding,
        ht.padding,
        max(pe.padding)
from
        towns   wt,
        towns   ht,
        people  pe
where
        wt.id_state     = 1
and     pe.id_town_work = wt.id
and     ht.id_state     = 2
and     pe.id_town_home = ht.id
group by
        wt.padding,
        ht.padding
;

You might expect something like the following as the execution plan:

-------------------------------------------------------------------------------
| Id  | Operation            | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |        |    40 |  7600 |   179   (6)| 00:00:01 |
|   1 |  HASH GROUP BY       |        |    40 |  7600 |   179   (6)| 00:00:01 |
|*  2 |   HASH JOIN          |        |    40 |  7600 |   178   (6)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL | TOWNS  |    40 |  2520 |     5   (0)| 00:00:01 |
|*  4 |    HASH JOIN         |        |  2000 |   248K|   173   (6)| 00:00:01 |
|*  5 |     TABLE ACCESS FULL| TOWNS  |    40 |  2520 |     5   (0)| 00:00:01 |
|   6 |     TABLE ACCESS FULL| PEOPLE |   100K|  6250K|   165   (4)| 00:00:01 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("PE"."ID_TOWN_HOME"="HT"."ID")
   3 - filter("HT"."ID_STATE"=2)
   4 - access("PE"."ID_TOWN_WORK"="WT"."ID")
   5 - filter("WT"."ID_STATE"=1)

The order of operation (row source generation) is: 3,5,6,4,2,1 – we build a hash table from the towns in state 2; build a hash table from the towns in state 1; scan the people table and probe the state 1 hash table, any row that survives is used to probe the state 2 hash table, and the rows that survive the second probe are aggregated to produce the answer.

When you do this type of thing with very large data sets one of the potential performance threats comes from the volume of data you have to aggregate. As we’ve joined the three tables the row length grows significantly before we finally aggregate (admittedly my data set is small, and the number of rows we’re going to aggregate also appears to be very small according to the predictions). There’s also (in the early stages at least) the potential for passing a very large number of rows from the fact table through the first (and possibly subsequent) hash join, doing a lot of work to eliminate the rows you don’t need.

In 12c the optimizer can choose to minimise both these threat points using “vector transformation”. (The name may also reflect the possibility that the code path will take advantage of vector processing (SIMD) operations if they’re available in the CPU.) Here’s the execution path I got when I added the /*+ vector_transform(@sel$1) */ hint to my query – it’s not sensible for this tiny data set, of course, but the hint is a way of learning what Oracle can do:

-----------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name                      | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |                           |    71 | 15975 |   184   (6)| 00:00:01 |

|   1 |  TEMP TABLE TRANSFORMATION    |                           |       |       |            |          |

|   2 |   LOAD AS SELECT              | SYS_TEMP_0FD9D6661_31399B |       |       |            |          |
|   3 |    VECTOR GROUP BY            |                           |    10 |   670 |     6  (17)| 00:00:01 |
|   4 |     KEY VECTOR CREATE BUFFERED| :KV0000                   |    40 |  2680 |     6  (17)| 00:00:01 |
|*  5 |      TABLE ACCESS FULL        | TOWNS                     |    40 |  2520 |     5   (0)| 00:00:01 |

|   6 |   LOAD AS SELECT              | SYS_TEMP_0FD9D6662_31399B |       |       |            |          |
|   7 |    VECTOR GROUP BY            |                           |    10 |   670 |     6  (17)| 00:00:01 |
|   8 |     KEY VECTOR CREATE BUFFERED| :KV0001                   |    40 |  2680 |     6  (17)| 00:00:01 |
|*  9 |      TABLE ACCESS FULL        | TOWNS                     |    40 |  2520 |     5   (0)| 00:00:01 |

|  10 |   HASH GROUP BY               |                           |    71 | 15975 |   172   (6)| 00:00:01 |
|* 11 |    HASH JOIN                  |                           |    71 | 15975 |   171   (5)| 00:00:01 |
|  12 |     TABLE ACCESS FULL         | SYS_TEMP_0FD9D6662_31399B |    10 |   670 |     2   (0)| 00:00:01 |
|* 13 |     HASH JOIN                 |                           |    71 | 11218 |   169   (5)| 00:00:01 |
|  14 |      TABLE ACCESS FULL        | SYS_TEMP_0FD9D6661_31399B |    10 |   670 |     2   (0)| 00:00:01 |
|  15 |      VIEW                     | VW_VT_C444E4CB            |    71 |  6461 |   167   (5)| 00:00:01 |
|  16 |       HASH GROUP BY           |                           |    71 |  5112 |   167   (5)| 00:00:01 |
|  17 |        KEY VECTOR USE         | :KV0000                   |    71 |  5112 |   167   (5)| 00:00:01 |
|  18 |         KEY VECTOR USE        | :KV0001                   |  2000 |   132K|   167   (5)| 00:00:01 |
|* 19 |          TABLE ACCESS FULL    | PEOPLE                    |   100K|  6250K|   165   (4)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   5 - filter("WT"."ID_STATE"=1)
   9 - filter("HT"."ID_STATE"=2)
  11 - access("ITEM_10"=INTERNAL_FUNCTION("C0") AND "ITEM_11"="C2")
  13 - access("ITEM_8"=INTERNAL_FUNCTION("C0") AND "ITEM_9"="C2")
  19 - filter(SYS_OP_KEY_VECTOR_FILTER("PE"."ID_TOWN_HOME",:KV0001) AND
              SYS_OP_KEY_VECTOR_FILTER("PE"."ID_TOWN_WORK",:KV0000))
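For reference, this is the hinted version of the statement that produced the plan above; it is simply the original query with the hint added to the main query block:

select
        /*+ vector_transform(@sel$1) */
        wt.padding,
        ht.padding,
        max(pe.padding)
from
        towns   wt,
        towns   ht,
        people  pe
where
        wt.id_state     = 1
and     pe.id_town_work = wt.id
and     ht.id_state     = 2
and     pe.id_town_home = ht.id
group by
        wt.padding,
        ht.padding
;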

There are three critical components to this plan: first, we create a couple of “Key Vectors” from the towns table, then we use those key vectors while scanning the people table and aggregate a minimal data set, finally we join back to the data associated with the key vectors. Reprising my introductory paragraph: the creation and use of the key vectors is similar to the Bloom filter approach; the final join-back is similar to the strategy used in Star Transformations (especially the ones where temp tables appear), and the key vector allows the high-volume fact data to be aggregated as much as possible before adding extra row-length from the dimensions.

In outline Oracle does the following:

  • scan the towns table to extract the id and padding columns for id_state = 1 / work town – this produced 50 rows with my data set
  • manipulate the result to extract the distinct values of padding, and give each value a unique numeric identifier – this is the information that goes into the temp table (with one extra column) – this produced 10 rows
  • manipulate the result again to produce an in-memory array of (town.id, temp_table.identifier) – this is the key vector, containing 50 elements.

The second temp table and key vector for (id_state = 2 / home town) will be created in the same way.

As the fact table is scanned Oracle can apply the key vectors very efficiently (we hope) to pick out the people rows that would be involved in the final aggregate and associate with each relevant row the two padding identifiers that belong to that row (this step is a bit like doing 2 hash joins – but presumably much more efficient; Bloom filtering does something very similar). After selecting the minimum number of rows we can aggregate them on the  two padding identifiers (an example of the “aggregate early”/”place group by” principle – aggregate before joining); finally we join back to the two temporary tables to translate the short padding identifiers into the long padding values (just as we do in star transformations with temporary table transformation).

Strangely we aggregate again after the join-back. I don’t think it’s necessary in this case because I’m fairly sure that the join back is on a unique set of columns – but perhaps this is a generic strategy allowing for variations in the mechanism, including such things as cases where the vector transform is only applied to a subset of the dimension tables.

Technically you could almost emulate this strategy in any version of Oracle (and I probably have at various times over the last few years) with the major limitation that the “KEY VECTOR USE” operations at lines 17 and 18 would have to be replaced with hash joins; no doubt, though, the major CPU saving of this approach is the difference between consecutive hash joins and what appears to be (from the execution stats) concurrent vector filtering. At some point – if a client needs the extra performance edge before they get to 12c – I’ll have to see if I can engineer an example in 11g that emulates the whole plan but uses Bloom filtering to approximate the key vector filtering.
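To make that idea a little more concrete, a hand-written rewrite along those lines (aggregating the people data early on the two town ids inside a non-mergeable inline view, then joining back to towns to pick up the padding values) might look something like the following sketch. It reproduces the shape of the strategy, not the key vector mechanics:

select
        wt.padding,
        ht.padding,
        max(pe.max_padding)
from
        (
        select
                /*+ no_merge */
                id_town_work,
                id_town_home,
                max(padding)    max_padding
        from
                people
        where   id_town_work in (select id from towns where id_state = 1)
        and     id_town_home in (select id from towns where id_state = 2)
        group by
                id_town_work,
                id_town_home
        )       pe,
        towns   wt,
        towns   ht
where
        wt.id = pe.id_town_work
and     ht.id = pe.id_town_home
group by
        wt.padding,
        ht.padding
;

Because towns.id is the primary key, each (work town, home town) pair maps to exactly one pair of padding values, so taking the max of the per-pair maxima in the outer query returns the same result as the original statement.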

 

AWR Warehouse

Last week I noticed that there is a new patch for Enterprise Manager which enables the AWR Warehouse feature. MOS note ID 1901202.1 describes the bundle patch for OEM 12c Release 4.

Today I had a chance to install it in my lab, so now I can start testing this new OEM feature.
There is some documentation here and on Kellyn's blog.


It is not configured yet, so the first task is to configure the AWR Warehouse repository. In my case I will use the same database that is used for the OEM repository.

The retention period and the staging area for snapshot files have to be configured as well.

After these two steps the AWR Warehouse configuration job is started, and when it finishes the AWR Warehouse will be ready to use.
 

When the repository is ready we can start adding databases that will be sources of AWR data.
 

To add a new database to the warehouse, it has to be already configured in OEM and have default credentials set.

If all conditions are met, the database is added successfully.
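Once source databases start uploading snapshots, a quick sanity check (just a sketch, to be run against the warehouse repository database) is to see which source DBIDs already have data loaded:

select
        dbid,
        min(begin_interval_time)   first_snap,
        max(end_interval_time)     last_snap,
        count(*)                   snapshots
from    dba_hist_snapshot
group by dbid
order by dbid;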

Now it's time to play with this new feature and see what we can achieve using it.

regards,
Marcin






SLOB Physical I/O Randomness. How Random Is Random? Random!

I recently read a blog post by Kyle Hailey regarding some lack of randomness he detected in the Orion I/O generator tool. Feel free to read Kyle’s post but in short he used dtrace to detect Orion was obliterating a very dense subset of the 96GB file Orion was accessing.

I’ve used Orion for many years and, in fact, wrote my first Orion-related blog entry about 8 years ago. I find Orion to be useful for some things and of course DBAs must use Orion’s cousin CALIBRATE_IO as a part of their job. However, neither of these tools performs database I/O. If you want to see how database I/O behaves on a platform it’s best to use a database. So, SLOB it is. But wait! Is SLOB just another storage cache-poking, randomness-challenged distraction from your day job? No, it isn’t.

But SLOB Is So Very Difficult To Use

It’s quite simple actually. You can see how simple SLOB is to set up and test by visiting my picture tutorial.

How Random Is Random? Random!

SLOB is utterly random. However, there are some tips I’d like to offer in this post to show you how you can choose even higher levels of randomness in your I/O testing.

Kyle used dtrace and some shell commands to group block visits into buckets. Since I’m analyzing the randomness of SLOB I’ll use a 10046 trace on the database sessions. First I’ll run a 96 user SLOB test with slob.conf->UPDATE_PCT=0.

After the SLOB test was completed I scrambled around to find the trace files and worked out a simple set of sed(1) expressions to spit out the block numbers being visited by each I/O of type db file sequential read:

[screenshot: how-random2]

I then grouped the blocks being visited into buckets much the same way Kyle did in his post:

[screenshot: how-random3]
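For anyone who wants to reproduce the exercise, here is a rough shell approximation of that processing. It is my own sketch rather than the exact commands in the screenshots, and it assumes the 10046 trace files are in the current directory:

# pull the block# out of every single-block read wait event
grep "db file sequential read" *.trc |
  sed -e 's/.*block#=\([0-9]*\).*/\1/' > 96u.blocks.txt

# count visits per block, then bucket blocks by visit count:
# first column = number of blocks, second column = times visited
sort -n 96u.blocks.txt | uniq -c | awk '{ print $1 }' | sort -n | uniq -c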

I’ll show some analysis of those buckets later in the post. Yes, SLOB is random, as analysis of 96u.blocks.txt will show, but it can be even more random if one configures a RECYCLE buffer pool. One of the lesser advertised features of SLOB is the fact that all working tables in the SLOB schemas are created with BUFFER_POOL RECYCLE in the storage clause. The idea behind this is to support the caching of index blocks in the SGA buffer pool. When no RECYCLE pool is allocated there is a battle for footprint in the SGA buffer pool, causing even buffers holding index blocks to be reused for buffering table/index blocks of the active transactions. Naturally, when indexes are not cached there will be slight hot spots from constant physical re-reads of the index blocks. The question becomes: what percentage of the I/O do these hot blocks account for?

To determine how hot index blocks are I allocated a recycle buffer pool and ran another 2 minute SLOB test. As per the following screen shot I again grouped block visits into buckets:

[screenshot: how-random4]

After having both SLOB results (with and without RECYCLE buffer pool) I performed a bit of text processing to determine how different the access patterns were in both scenarios. The following shows:

  • The vast majority of blocks are visited 10 or less times in both models
  • The RECYCLE pool model clearly flattens out the re-visit rates, as the hottest block is visited only 12 times compared to the 112 visits for the hottest block in the default case
  • If 12 is the golden standard for sparsity (as per the RECYCLE pool test case) then even the default is quite sparse because dense buckets accounted for only 84,583 physical reads compared to the nearly 14 million reads of blocks in the sparse buckets

[screenshot: how-random5]

The following table presents the data including the total I/O operations traced. The number of sparse visits are those blocks that were accessed less than or equal to 10 times during the SLOB test. I should point out that there will naturally be more I/O during a SLOB test when index accesses are forced physical as is the case with the default buffer pools. That is, the RECYCLE buffer pool case will have a slightly higher logical I/O rate (cache hits) due to index buffer accesses.

[table: traced I/O totals and sparse-visit counts for both test cases]

Summary

If you want to know how database I/O performs on a platform, use a database. If you are using a database to test I/O on a platform, then by all means drive it with SLOB (a database I/O tool).

Regarding randomness, even in the default case SLOB proves itself to be very random. If you want to push for even more randomness then the way forward is to configure db_recycle_cache_size.
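For example, something along these lines; the size is just a placeholder and needs to be large enough to hold the SLOB index blocks for your own scale and session count:

-- 512M is a placeholder value; size the RECYCLE pool for your own SLOB setup
alter system set db_recycle_cache_size = 512M scope=both;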

Enjoy SLOB! The best place to start with SLOB is the SLOB Resources Page.

Filed under: oracle, Oracle I/O Performance, Oracle11g, Oracle11g RAC, SLOB Tagged: Oracle Exadata Storage Server, Oracle I/O Performance, Oracle Performance, Random I/O, SLOB

Crossplatform transportable tablespaces - part 2

It has been some time since I wrote the first post about TTS migration, but I finished that project literally hours before my summer break. Now, after a couple of days enjoying the thermal waters and good wine of Hungary, it's time to write the next post.

As I described in my previous post, I had to migrate a database from HP-UX to Linux and also upgrade it from 10g to 12c. This time it was only a PoC, but my goal was to minimize the downtime of the production database.

Source database datasheet:
- version 10.2.0.4 
- OS - HP-UX 
- existing backup using data files copy 
- there is one backup set per data file 
- daily incremental backups are recovered into the data file copies and kept in the FRA

On the target server, Oracle 12.1.0.1 has been installed and configured with ASM. A new database with the same character set as the source database has been created as well.

Target database datasheet:
- version 12.1.0.1
- OS -Linux 64 bit
- storage - ASM

Transportable tablespaces (TTS) allow us to migrate data between databases, but it is the DBA's responsibility to migrate the rest of the objects, like views and PL/SQL code, using for example Data Pump. Before I started work on TTS I did the following preparation steps:

  1. On source database identify the list of tablespaces and their datafiles to move to the new server
  2. On source database identify owners of objects included in TTS
    select distinct owner from dba_tables where tablespace_name in ('LIST','OF','TABLESPACES','TO','MIGRATE');
  3. On source database verify that tablespaces are self contained
    begin
    SYS.DBMS_TTS.TRANSPORT_SET_CHECK(ts_list => 'LIST,OF,TABLESPACES,TO,MIGRATE', full_check => TRUE);
    end;
    /

    select * from SYS.TRANSPORT_SET_VIOLATIONS;
  4. On target database create owners for all objects included in TTS 

This is the list of steps I performed to achieve my goal.

  1. Copy the existing data file copies to the new server - if a different location is used on the new server, change the script in point 2
  2. Create a script to convert data file from data file copy into data file in new location
    select 'convert datafile ''' || b.name || ''' format ''+DATA/POCDB/TTS/' || REGEXP_REPLACE(f.name,'(/disk\d/oradata/XXX/)','') || ''' from platform ''HP-UX IA (64-bit)'';' from V$BACKUP_COPY_DETAILS b, v$datafile f where f.file# = b.file#;
  3. Convert file using script from point 2. Example output
    convert datafile '/oracle/fra/o1_mf_pocdb_rep_9x3xjcon_.dbf' format '+DATA/POCDB/TTS/reports01.dbf' from platform 'HP-UX IA (64-bit)';
    convert datafile '/oracle/fra/o1_mf_pocdb_rep_aas24412_.dbf' format '+DATA/POCDB/TTS/reports02.dbf' from platform 'HP-UX IA (64-bit)';
    convert datafile '/oracle/fra/o1_mf_pocdb_rep_22ee1445_.dbf' format '+DATA/POCDB/TTS/reports03.dbf' from platform 'HP-UX IA (64-bit)';
    convert datafile '/oracle/fra/o1_mf_pocdb_rep_34ddr545_.dbf' format '+DATA/POCDB/TTS/reports04.dbf' from platform 'HP-UX IA (64-bit)';
  4. Copy the daily incremental backup sets to the new server - if a different location is used on the new server, change the script in point 5
  5. Create a script to apply incremental backupset into new files
    set linesize 600 pagesize 999 feedback off head off trimspool on
    select 'recover from platform ''HP-UX IA (64-bit)'' foreign datafilecopy ''' || name || ''' from backupset ''' || handle || ''';'
    from V$BACKUP_DATAFILE bd, v$datafile d, V$BACKUP_PIECE bp where bd.file# = d.file#
    and bp.set_count = bd.set_count and handle is not null
    and bp.COMPLETION_TIME > sysdate -1
    order by bp.set_count;
  6. Recover data files copies
    recover from platform 'HP-UX IA (64-bit)' foreign datafilecopy '+DATA/POCDB/TTS/reports01.dbf' from backupset '/oracle/fra/POCDB/backupset/2014_07_20/o1_mf_nnnd1_TAG20140720T065649_9wppkp6w_.bkp';
    recover from platform 'HP-UX IA (64-bit)' foreign datafilecopy '+DATA/POCDB/TTS/reports02.dbf' from backupset '/oracle/fra/POCDB/backupset/2014_07_20/o1_mf_nnnd1_TAG20140720T065649_9wppkxg5_.bkp';
    recover from platform 'HP-UX IA (64-bit)' foreign datafilecopy '+DATA/POCDB/TTS/reports03.dbf' from backupset '/oracle/fra/POCDB/backupset/2014_07_20/o1_mf_nnnd1_TAG20140720T065649_9wppk4w9_.bkp';
    recover from platform 'HP-UX IA (64-bit)' foreign datafilecopy '+DATA/POCDB/TTS/reports04.dbf' from backupset '/oracle/fra/POCDB/backupset/2014_07_20/o1_mf_nnnd1_TAG20140720T065649_9wppkbws_.bkp';
  7. Run steps 4 to 6 until cut over date
  8. Run incremental backup on source
  9. Switch all required tablespaces into read-only mode
  10. Export the transportable tablespace metadata using Data Pump with a parameter file like this
    directory=XXX_DPDUMP
    dumpfile=tts_aws1.dmp
    logfile=tts.log
    TRANSPORT_TABLESPACES='TABLESPACES', 'TO', 'EXPORT'
    TRANSPORT_FULL_CHECK=y

    EXPDP command

    expdp parfile=tts.par
  11. Run incremental backup on source
  12. Copy backupsets from point 8 and 11 into new server
  13. Create a script to apply incremental backupset into new files (like in point 5)
  14. Import the transportable tablespaces using the dump file from point 10 and all converted files. In my case the first attempt took very long, as I didn't exclude stats and Oracle was gathering stats during the import process. This operation can be postponed to the next phase using the EXCLUDE option. Example IMPDP parameter file
    directory=AWS
    dumpfile=tts_aws1.dmp
    logfile=aws_tts_import1.log
    exclude=TABLE_STATISTICS,INDEX_STATISTICS
    TRANSPORT_DATAFILES=+DATA/POCDB/TTS/reports01.dbf,
    +DATA/POCDB/TTS/reports02.dbf,
    +DATA/POCDB/TTS/reports03.dbf,
    +DATA/POCDB/TTS/reports04.dbf

    Run IMPDP command

    impdp parfile=imp.par
  15. Export source database code and users
    expdp directory=DPDUMP dumpfile=code.dmp exclude=TABLE_DATA full=y
  16. Import PL/SQL code - a quick, dirty approach - but it was enough for that case
    impdp directory=AWS TABLE_EXISTS_ACTION=SKIP dumpfile=code.dmp log=code_import.log full=y
  17. Perform backup of new database and gather new statistics

 
Performing all the steps above allowed me to migrate a 1 TB database from HP-UX to Linux with 30 minutes of downtime on the source database. As it was a PoC, I left the source database working as the main production database. For a real migration it is necessary to add the time needed to recover the last incremental backup and to import the TTS metadata on the new platform, and also to resolve the issue of the time needed to gather statistics on the new platform. Probably copying the existing stats using PL/SQL will be the solution there, but it has to be checked in the next phase.
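A minimal sketch of that stats-copy idea, using a DBMS_STATS staging table (APP_OWNER is just a placeholder schema name, and this is only an outline to be tested in the next phase):

-- on the source database: stage the schema statistics in a regular table
begin
    dbms_stats.create_stat_table(ownname => user, stattab => 'MIG_STATS');
    dbms_stats.export_schema_stats(ownname => 'APP_OWNER', stattab => 'MIG_STATS', statown => user);
end;
/

-- move the MIG_STATS table to the target (e.g. with Data Pump), then import
begin
    dbms_stats.import_schema_stats(ownname => 'APP_OWNER', stattab => 'MIG_STATS', statown => user);
end;
/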

This post is long enough, so I will leave the lessons learned for the next one.

regards,
Marcin


ALS (No) Ice Bucket Challenge

Ronald Vargas challenged me to the ALS Ice Bucket Challenge. Unfortunately we didn’t have any ice, but the water was cold anyway, as you can see by my reaction at the end. :)

I challenge Debra Lilley, Graham Wood and Heli Helskyaho.

Please don’t forget to donate!

Cheers

Tim…



Why I don’t like the term “Center of Excellence”

I keep hearing about “Center of Excellence”. The term makes no sense to me. What does it mean? The middle (center) of wonderfulness? A place (center) where one goes to become excellent? Lately I've been hearing it used to describe our lab, which is just a collection of computer equipment. In our case it's a bunch of Oracle Engineered Systems. I have even started hearing people refer to it as the Oracle Engineered Systems Center of Excellence, which doesn't really roll off the tongue, does it? Some people don't even say the words, they abbreviate them. So for them it's the “OES COE”. I don't speak TLA very well in the first place, so the whole thing is confusing to me. Anyway, I prefer to call it a Research Lab. That's because that's what we do with the equipment. We test it, we try weird experiments with it, we take it apart with a screwdriver and we put it back together again. And lots of times we break things. But better that we break them in the lab than on a production system at a client site. The lab is a great learning environment for us and our customers. Enkitec has had a pretty solid lab for a while, but now that we've teamed up with Accenture and Oracle it's going to get a lot bigger.

Here’s an abbreviated list of the what we expect to end up with in the lab in the next few weeks.

  • Lot’s of Exadata’s (V2, X2, X3, X4)
  • A SPARC SuperCluster
  • A few Exalytics’s
  • At least one Exalogic
  • A couple of Big Data Appliances (BDAs)
  • Several Oracle Database Appliances (ODAs)
  • A ZFS Appliance or 2
  • An Oracle Virtual Compute Appliance (OVCA)

So here’s to the new and improved Research Lab. Hopefully a few of the new systems will have the cool built in beer shelf like out current BDA.

[photo: BDA beer shelf]

I’ll post some pictures when the new stuff shows up. :)