Oakies Blog Aggregator

PeopleTools 8.54: Performance Monitor Enhancements

This is part of a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.

Transaction History Search Component

There are a number of changes:

  • You can specify multiple system identifiers.  For example, you might be monitoring Portal, HR and CRM.  Now you can search across all of them in a single search.
    • It has always been the case that when you drill into the Performance Monitoring Unit (PMU), by clicking on the tree icon, you would see the whole of a PMU that invoked services from different systems.
  • You can also specify multiple transaction types, rather than have to search each transaction type individually.

This is a useful enhancement when searching for a specific transaction or a small number of transactions.  However, I do not think it will save you from having to query the underlying transactions table.
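If you do need to go to the underlying data, a query along these lines can work. This is only a sketch: PSPMTRANSHIST is the Performance Monitor transaction history table, but the duration column name and its units (milliseconds) are assumptions you should verify against your own system.

```sql
-- Hypothetical sketch: find long-running transactions over the last day
-- directly in the Performance Monitor history table.
SELECT pm_instance_id
,      pm_trans_defn_id
,      pm_mon_strt_dttm
,      pm_trans_duration        -- assumed to be milliseconds
FROM   pspmtranshist
WHERE  pm_mon_strt_dttm >= SYSDATE - 1
AND    pm_trans_duration >= 10000
ORDER BY pm_trans_duration DESC;
```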

PPM Archive Process

The PPM archive process (PSPM_ARCHIVE) has been significantly rewritten in PeopleTools 8.54.  In many places, it still uses this expression to identify rows to be archived or purged:

%DateTimeDiff(X.PM_MON_STRT_DTTM, %CurrentDateTimeIn) >= (PM_MAX_HIST_AGE * 24 * 60)

This expands to

ROUND((CAST(( CAST(SYSTIMESTAMP AS TIMESTAMP)) AS DATE) - CAST((X.PM_MON_STRT_DTTM) AS DATE)) * 1440, 0)
   >= (PM_MAX_HIST_AGE * 24 * 60)

which has no chance of using an index.  This used to cause performance problems when the archive process had not been run for a while and the high water marks on the history tables had built up.
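The underlying problem is generic: once the indexed column is wrapped in functions, the predicate can no longer drive an index range scan. A minimal sketch of the sargable rewrite (the table alias and bind variable are illustrative, not PeopleSoft's actual code):

```sql
-- Non-sargable: a function is applied to the indexed timestamp column,
-- so the optimizer cannot use an index on PM_MON_STRT_DTTM:
-- ... WHERE ROUND((SYSDATE - CAST(x.pm_mon_strt_dttm AS DATE)) * 1440, 0) >= :age_mins

-- Sargable equivalent: the column stands alone on one side of the
-- comparison, so an index range scan becomes possible.
SELECT *
FROM   pspmtranshist x
WHERE  x.pm_mon_strt_dttm <= SYSDATE - (:age_mins / 1440);
```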

The archive process now works hour by hour, and this will use the index on the timestamp column.

"... AND X.PM_MON_STRT_DTTM <= SYSDATE - PM_MAX_HIST_AGE 
and (PM_MON_STRT_DTTM) >= %Datetimein('" | DateTimeValue(&StTime) | "')
and (PM_MON_STRT_DTTM) <= %DateTimeIn('" | DateTimeValue(&EndTime) | "')"

Tuxedo Queuing

Since Performance Monitor was first introduced, event 301 has never reported the length of the inbound message queues in Tuxedo; the reported queue length was always zero.  This may have been fixed in PeopleTools 8.53, but I have only just noticed it.

Java Management Extensions (JMX) Support

There have been some additions to Performance Monitor that suggest it will be possible to extract performance metrics using JMX.  The implication is that the Oracle Enterprise Manager Application Management Pack for PeopleSoft will be able to do this.  However, so far I haven't found any documentation. The new component is not mentioned in the PeopleTools 8.54: Performance Monitor documentation.

  • New Table
  • New Columns
    • PSPMSYSDEFAULTS - PTPHONYKEY.  So far I have only seen it set to 0.
    • PSPMAGENT - PM_JMX_RMI_PORT.  So far I have only seen it set to 1.
  • New Component

    Flashback logging

    When database flashback first appeared many years ago I commented (somewhere, but don’t ask me where) that it seemed like a very nice idea for full-scale test databases if you wanted to test the impact of changes to batch code, but I couldn’t really see it being a good idea for live production systems because of the overheads.

    Features and circumstances change, of course, and someone recently pointed out that if your production system is multi-terabyte and you’re running with a dataguard standby, and some minor catastrophe forces you to switch to the standby, then you don’t really want to be running without a standby for the time it would take to restore and recover an old backup to create a new standby; there may be cases where you could flash back the original primary to before the catastrophe and turn it into the standby from that point onward. Sounds like a reasonable argument to me – but you might still need to think very carefully about how to minimise the impact of enabling database flashback, especially if your database is a datawarehouse, DSS, or mixed system.

    Imagine you have a batch process that revolves around loading data into an empty table with a couple of indexes – it’s a production system so you’re running with archivelog mode enabled, and then you’re told to switch on database flashback. How much impact will that have on your current loading strategies? Here’s a little bit of code to help you on your way – I create an empty table as a clone of the view all_objects, and create one index, then I insert 1.6M rows into it. I’ve generated 4 different sets of results: flashback on or off, then either maintaining the index during loading or marking it unusable then rebuilding it after the load. Here’s the minimum code:

    create table t1 segment creation immediate tablespace test_8k
    as
    select * from all_objects
    where   rownum < 1
    ;

    create index t1_i1 on t1(object_name, object_id) tablespace test_8k_assm_auto;

    -- alter index t1_i1 unusable;

    insert /*+ append */ into t1
    with object_data as (
            select --+ materialize
                    *
            from    all_objects
            where   rownum <= 50000
    ),
    counter as (
            select  --+ materialize
                    rownum id
            from dual
            connect by
                    level <= 32
    )
    select
            /*+ leading (ctr obj) use_nl(obj) */
            obj.*
    from
            counter         ctr,
            object_data     obj
    ;

    -- alter index t1_i1 rebuild;

    Here’s a quick summary of the timings I got before I talk about the effects:

    Flashback off:
    Maintain index in real time: 138 seconds
    Rebuild index at end: 66 seconds

    Flashback on:
    Maintain index in real time: 214 seconds
    Rebuild index at end: 112 seconds

    It is very important to note that these timings do not allow you to draw any generic conclusions about optimum strategies for your systems. The only interpretation you can put on them is that different circumstances may lead to very different timings, so it’s worth looking at what you could do with your own systems to find good strategies for different cases.

    Most significant, probably, is the big difference between the two options where flashback is enabled – if you’ve got to use it, how do you do damage limitation? Here are some key figures, namely the file I/O stats and some instance activity stats, I/O stats first:

    "Real-time" maintenance
    Tempfile Stats - 09-Mar 11:41:57
    file#       Reads      Blocks    Avg Size   Avg Csecs     S_Reads   Avg Csecs    Writes      Blocks   Avg Csecs    File name
    -----       -----      ------    --------   ---------     -------   ---------    ------      ------   ---------    -------------------
        1       1,088      22,454      20.638        .063         296        .000     1,011      22,455        .000    /u01/app/oracle/oradata/TEST/datafile/o1_mf_temp_938s5v4n_.tmp
    Datafile Stats - 09-Mar 11:41:58
    file#       Reads      Blocks    Avg Size   Avg Csecs     S_Reads   Avg Csecs     M_Reads   Avg Csecs         Writes      Blocks   Avg Csecs    File name
    -----       -----      ------    --------   ---------     -------   ---------     -------   ---------         ------      ------   ---------    -------------------
        3      24,802      24,802       1.000        .315      24,802        .315           0        .000          2,386      20,379        .239    /u01/app/oracle/oradata/TEST/datafile/o1_mf_undotbs1_938s5n46_.dbf
        5         718      22,805      31.762        .001           5        .000         713        .002            725      22,814        .002    /u01/app/oracle/oradata/TEST/datafile/o1_mf_test_8k_bcdy0y3h_.dbf
        6       8,485       8,485       1.000        .317       8,485        .317           0        .000            785       6,938        .348    /u01/app/oracle/oradata/TEST/datafile/o1_mf_test_8k__bfqsmt60_.dbf
    Mark Unusable and Rebuild
    Tempfile Stats - 09-Mar 11:53:04
    file#       Reads      Blocks    Avg Size   Avg Csecs     S_Reads   Avg Csecs    Writes      Blocks   Avg Csecs    File name
    -----       -----      ------    --------   ---------     -------   ---------    ------      ------   ---------    -------------------
        1       1,461      10,508       7.192        .100           1        .017       407      10,508        .000    /u01/app/oracle/oradata/TEST/datafile/o1_mf_temp_938s5v4n_.tmp
    Datafile Stats - 09-Mar 11:53:05
    file#       Reads      Blocks    Avg Size   Avg Csecs     S_Reads   Avg Csecs     M_Reads   Avg Csecs         Writes      Blocks   Avg Csecs    File name
    -----       -----      ------    --------   ---------     -------   ---------     -------   ---------         ------      ------   ---------    -------------------
        3          17          17       1.000       5.830          17       5.830           0        .000             28          49       1.636    /u01/app/oracle/oradata/TEST/datafile/o1_mf_undotbs1_938s5n46_.dbf
        5         894      45,602      51.009        .001           2        .002         892        .001            721      22,811        .026    /u01/app/oracle/oradata/TEST/datafile/o1_mf_test_8k_bcdy0y3h_.dbf
        6       2,586       9,356       3.618        .313         264       3.064       2,322        .001          2,443       9,214        .000    /u01/app/oracle/oradata/TEST/datafile/o1_mf_test_8k__bfqsmt60_.dbf

    There are all sorts of interesting differences in these results due to the different ways in which Oracle handles the index. For the “real-time” maintenance the session accumulates the key values and rowids as it writes the table, then sorts them, then does a cache-based bulk update to the index. For the “rebuild” strategy Oracle simply scans the table after it has been loaded, sorts the key values and rowids, then writes the index to disc using direct path writes; you might expect the total work done to be roughly the same in both cases – but it’s not.

    I’ve shown 4 files: the temporary tablespace, the undo tablespace, the tablespace holding the table and the tablespace holding the index; and the first obvious difference is the number of blocks written and read and the change in average read size on the temporary tablespace. Both sessions had to spill to disc for the sort, and both did a “one-pass” sort; the difference in the number of blocks written and read appears because the “real-time” session wrote the sorted data set back to the temporary tablespace one more time than it really needed to – it merged the sorted data in a single pass but wrote the data back to the temporary tablespace before reading it again and applying it to the index (for a couple of points on tracing sorts, see this posting). I don’t know why Oracle chose to use a much smaller read slot size in the second case, though.

    The next most dramatic thing we see is that real-time maintenance introduced 24,800 single block reads with 20,000 blocks written to the undo tablespace (with a few thousand more that would eventually be written by dbwr – I should have included a “flush buffer_cache” in my tests), compared to virtually no activity in the “rebuild” case. The rebuild generates no undo; real-time maintenance (even starting with an empty index) generates undo because (in theory) someone might look at the index and need to see a read-consistent image of it. So it’s not surprising that we see a lot of writes to the undo tablespace – but where did the reads come from? I’ll answer that question later.

    It’s probably not a surprise to see the difference in the number of blocks read from the table’s tablespace. When we rebuild the index we have to do a tablescan to acquire the data; but, again, we can ask why we saw 22,800 blocks read from the table’s tablespace when we were doing the insert with real-time maintenance. On a positive note those reads were multiblock reads, but what caused them? Again, I’ll postpone the answer.

    Finally we see that the number of blocks read (reason again postponed) and written to the index’s tablespace are roughly similar. The writes differ because the rebuild is doing direct path writes, while the real-time maintenance is done in the buffer cache, so there are some outstanding index blocks still to be written. The reads are similar, though one test is exclusively single block reads and the other is doing (small) multiblock reads – which is just a little bit more efficient.  The difference in the number of reads is because the rebuild was at the default pctfree=10 while the index maintenance was a massive “insert in order” which would have packed the index leaf blocks at 100%.

    To start the explanation – here are the most significant activity stats – some for the session, a couple for the instance:

    "Real-time" maintenance
    Name                                                                     Value
    ----                                                                     -----
    physical reads for flashback new                                        33,263
    redo entries                                                           118,290
    redo size                                                          466,628,852
    redo size for direct writes                                        187,616,044
    undo change vector size                                            134,282,356
    flashback log write bytes                                          441,032,704
    Mark Unusable and Rebuild
    Name                                                                     Value
    ----                                                                     -----
    physical reads for flashback new                                           156
    redo entries                                                            35,055
    redo size                                                          263,801,792
    redo size for direct writes                                        263,407,628
    undo change vector size                                                122,156
    flashback log write bytes                                          278,036,480

    The big clue is the “physical reads for flashback new”. When you modify a block, if it hasn’t been dumped into the flashback log recently (as defined by the hidden _flashback_barrier_interval parameter) then the original version of the block has to be written to the flashback log before the change can be applied; moreover, if a block is being “newed” (Oracle-speak for being reformatted for a new use) it will also be written to the flashback log. Given the way that the undo tablespace works it’s not surprising if virtually every block you modify in the undo tablespace has to be written to the flashback log before you use it. The 33,263 blocks read for “flashback new” consist of the 24,800 blocks read from the undo tablespace when we were maintaining the index in real-time plus a further 8,460 from “somewhere” – which, probably not coincidentally, matches the number of blocks read from the index tablespace as we create the index. The odd thing is that we don’t see the 22,800 reads on the table’s tablespace (which don’t occur when flashback is off) reported as “physical reads for flashback new”; this looks like a reporting error to me.

    So the volume of undo requires us to generate a lot of flashback log as well as the usual increase in the amount of redo. As a little side note, we get confirmation from these stats that the index was rebuilt using direct path writes – there’s an extra 75MB of redo for direct writes.


    If you are running with flashback enabled in a system that’s doing high volume data loading, remember that the “physical reads for flashback new” could be a major expense. This is particularly expensive on index maintenance, which can result in a large number of single block reads of the undo tablespace. The undo costs you three times – once for the basic cost of undo (and associated redo), once for the extra reads, and once for writing the flashback log. Although you have to do tablescans to rebuild indexes, the cost of an (efficient, possibly direct path) tablescan may be much less than the penalty of the work relating to flashback.
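    If you want to gauge this overhead on your own system, the statistics quoted above can be sampled from v$sysstat (or v$sesstat for a single session) before and after a load:

    ```sql
    -- Sample the flashback- and redo-related workload statistics;
    -- the names match the instance activity stats shown above.
    select name, value
    from   v$sysstat
    where  name in (
            'physical reads for flashback new',
            'flashback log write bytes',
            'redo size',
            'undo change vector size'
    )
    order by name;
    ```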

    Footnote: since you can’t (officially) load data into a table with an unusable unique index or constraint, you may want to experiment with using non-unique indexes to support unique/PK constraints and disabling the constraints while loading.

    Useful list of Oracle Conferences and Call For Papers

    Do you want to know what Oracle conferences are run, where they are and when? Do you present (or are thinking of presenting) and want to know when the call for papers is open?

    Then go and look at Jan Karremans’ excellent page on oracle conferences.

    It lists most (all?) of the European and US conferences and is a really useful reference – I’ve not come across a similar, maintained list. Below is a static screen shot of part of the list, current as of today – but visit the page to see the full, maintained list.

    Jan's conference list

    If you spot that a conference you know about (or are helping organise!) is missing, then Jan is happy to be contacted via the page and he will add the details.

    Converting non-CDB database to a PDB when TDE is in use

    Converting a non-CDB database to a PDB is a rather straightforward process. However, once TDE (Transparent Data Encryption) gets involved certain things become less obvious, so I've decided to write a small guide on how to accomplish that. In order for a non-CDB database to be converted to a PDB its version needs to be at least

    Encryption Wallet Location

    My encryption wallet location is set to the following (sqlnet.ora):

    ENCRYPTION_WALLET_LOCATION =
      (SOURCE =
        (METHOD = FILE)
        (METHOD_DATA =
          (DIRECTORY = /u01/app/oracle/admin/$ORACLE_SID/wallet/tde)
        )
      )

    Create a non-CDB database

    Let's start by creating a non-CDB database. I will call this database db1 and it will later be converted to a pluggable database called pdb1:

    dbca -silent \
    -createDatabase \
    -templateName New_Database.dbt \
    -gdbName db1 \
    -createAsContainerDatabase false \
    -sysPassword oracle \
    -systemPassword oracle \
    -emConfiguration none \
    -datafileDestination /u02/oradata \
    -redoLogFileSize 128 \
    -recoveryAreaDestination /u02/fra \
    -storageType FS \
    -characterSet al32utf8 \
    -nationalCharacterSet al16utf16 \
    -automaticMemoryManagement false \
    -initParams filesystemio_options=setall \
    -initParams session_cached_cursors=100 \
    -totalMemory 1024
    Creating and starting Oracle instance
    1% complete
    100% complete

    Since I have PSU2 applied I need to run datapatch once the database has been created:

    [oracle@ora12cr1 ~]$ . oraenv
    ORACLE_SID = [oracle] ? db1
    The Oracle base remains unchanged with value /u01/app/oracle
    [oracle@ora12cr1 ~]$ $ORACLE_HOME/OPatch/datapatch -verbose

    Create the wallet and encrypted table

    We're now ready to setup the wallet and create a user with an encrypted table:

    [oracle@ora12cr1 ~]$ mkdir -p /u01/app/oracle/admin/db1/wallet/tde
    [oracle@ora12cr1 ~]$ sqlplus / as sysdba

    SQL*Plus: Release Production on Sat Mar 7 15:51:21 2015

    Copyright (c) 1982, 2014, Oracle. All rights reserved.

    Connected to:
    Oracle Database 12c Enterprise Edition Release - 64bit Production
    With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

    SQL> alter system set encryption key identified by "db1";

    System altered.

    SQL> alter system set db_create_file_dest='/u02/oradata';

    System altered.

    SQL> create tablespace encrypted datafile size 64m encryption using 'AES256' default storage (encrypt);

    Tablespace created.

    SQL> create user encrypt_user identified by "encrypt_user" default tablespace encrypted;

    User created.

    SQL> alter user encrypt_user quota unlimited on encrypted;

    User altered.

    SQL> create table encrypt_user.z_encrypted as
    select dbms_random.string('x', 100) s
    from dual
    connect by level <= 100; 2 3 4

    Table created.

    Note that I did not use the new administer key management syntax on purpose and instead created the wallet the old-style way, as I expect most existing environments to be set up that way. We will explore the significance of this in a moment.

    Create a CDB

    Before our non-CDB database can be converted to a PDB and plugged into a CDB we need to create the container database first:

    [oracle@ora12cr1 ~]$ dbca -silent \
    > -createDatabase \
    > -templateName New_Database.dbt \
    > -gdbName cdb12cr1 \
    > -createAsContainerDatabase true \
    > -sysPassword oracle \
    > -systemPassword oracle \
    > -emConfiguration none \
    > -datafileDestination /u02/oradata \
    > -redoLogFileSize 128 \
    > -recoveryAreaDestination /u02/fra \
    > -storageType FS \
    > -characterSet al32utf8 \
    > -nationalCharacterSet al16utf16 \
    > -automaticMemoryManagement false \
    > -initParams filesystemio_options=setall \
    > -initParams session_cached_cursors=100 \
    > -totalMemory 1024
    Creating and starting Oracle instance
    1% complete
    100% complete

    As before I need to run the datapatch utility:

    [oracle@ora12cr1 ~]$ . oraenv
    ORACLE_SID = [oracle] ? cdb12cr1
    The Oracle base remains unchanged with value /u01/app/oracle
    [oracle@ora12cr1 ~]$ $ORACLE_HOME/OPatch/datapatch -verbose

    Setup CDB with a wallet

    Our newly created CDB needs to be set up with a wallet before we proceed with plugging in a PDB that utilizes TDE. Let's do it now:

    [oracle@ora12cr1 ~]$ mkdir -p /u01/app/oracle/admin/cdb12cr1/wallet/tde
    [oracle@ora12cr1 ~]$ sqlplus / as sysdba

    SQL*Plus: Release Production on Sat Mar 7 17:23:58 2015

    Copyright (c) 1982, 2014, Oracle. All rights reserved.

    Connected to:
    Oracle Database 12c Enterprise Edition Release - 64bit Production
    With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

    SQL> administer key management
    create keystore '/u01/app/oracle/admin/cdb12cr1/wallet/tde'
    identified by "cdb12cr1"; 2 3

    keystore altered.

    SQL> administer key management
    set keystore open
    identified by "cdb12cr1"; 2 3

    keystore altered.

    SQL> administer key management
    set key identified by "cdb12cr1"
    with backup; 2 3

    keystore altered.

    Here I have setup the wallet and the master encryption key using the new syntax.


    Before we move forward let's explore the differences between db1 (created using the old syntax) and cdb12cr1 (created using the new syntax) wallets:

    [oracle@ora12cr1 ~]$ orapki wallet display -wallet /u01/app/oracle/admin/db1/wallet/tde
    Oracle PKI Tool : Version
    Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.

    Enter wallet password:
    Requested Certificates:
    Subject: CN=oracle
    User Certificates:
    Oracle Secret Store entries:
    Trusted Certificates:
    [oracle@ora12cr1 ~]$ orapki wallet display -wallet /u01/app/oracle/admin/cdb12cr1/wallet/tde
    Oracle PKI Tool : Version
    Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.

    Enter wallet password:
    Requested Certificates:
    Subject: CN=oracle
    User Certificates:
    Oracle Secret Store entries:
    Trusted Certificates:

    Notice how db1 has the encryption key listed only under ORACLE.SECURITY.DB.ENCRYPTION, while cdb12cr1 also has the encryption key listed under the new ORACLE.SECURITY.KM.ENCRYPTION.

    In practice what I found is that unless your encryption key is listed under ORACLE.SECURITY.KM.ENCRYPTION it will not be exported when doing administer key management export keys. As a result, when you import the export file the master key required to decrypt the data will not be there, potentially leaving you in a peculiar situation, especially if you have used an in-place conversion. Even if you specifically export the key using the with identifier in syntax, you will not be able to import it, because specific key exports are not permitted to be imported into PDBs.


    This step is only required if you created the wallet without using the new administer key management syntax. Re-keying the wallet will generate a new master key, preserving the old master key necessary to decrypt the data, while adding the missing ORACLE.SECURITY.KM.ENCRYPTION entries at the same time. Execute while connected to db1:

    SQL> administer key management
    set key identified by "db1"
    with backup; 2 3

    keystore altered.

    If we look at the wallet we can see that it now contains the necessary entries:

    [oracle@ora12cr1 ~]$ orapki wallet display -wallet /u01/app/oracle/admin/db1/wallet/tde
    Oracle PKI Tool : Version
    Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.

    Enter wallet password:
    Requested Certificates:
    Subject: CN=oracle
    User Certificates:
    Oracle Secret Store entries:
    Trusted Certificates:

    Prepare db1 to be converted into a PDB

    Before db1 can be plugged into a container database it needs to be converted to a PDB and its encryption keys exported. Shut down db1 and open it read only:

    SQL> show parameter db_name

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    db_name                              string      db1
    SQL> shutdown immediate
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    SQL> startup mount restrict
    ORACLE instance started.

    Total System Global Area 1241513984 bytes
    Fixed Size 2923872 bytes
    Variable Size 452985504 bytes
    Database Buffers 771751936 bytes
    Redo Buffers 13852672 bytes
    Database mounted.
    SQL> alter database open read only;

    Database altered.

    We can now export the encryption keys:

    SQL> administer key management
    set keystore open
    identified by "db1"; 2 3

    keystore altered.

    SQL> administer key management
    export keys with secret "db1"
    to '/u01/app/oracle/admin/cdb12cr1/wallet/tde/db1.exp'
    identified by "db1"; 2 3 4

    keystore altered.

    Note that in order for the export operation to work the wallet needs to be explicitly opened with a password, otherwise you will receive an error. In the case of an auto-login (local) wallet you will have to close the wallet and reopen it with a password.
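    For reference, switching from an auto-opened wallet to a password-opened one looks something like this (a sketch using the same administer key management syntax as above):

    ```sql
    -- Close the auto-opened keystore first...
    administer key management set keystore close;

    -- ...then reopen it explicitly with the password so that
    -- export keys is permitted.
    administer key management
    set keystore open
    identified by "db1";
    ```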

    The next step is to generate metadata necessary for PDB conversion and shutdown db1 database:

    SQL> exec dbms_pdb.describe(pdb_descr_file => '/u01/app/oracle/db1.xml');

    PL/SQL procedure successfully completed.

    SQL> shutdown immediate
    Database closed.
    Database dismounted.
    ORACLE instance shut down.

    Convert and plug db1 into a CDB

    We can now log into cdb12cr1 and perform an in-place conversion of db1, which does not require any datafiles to be copied. Of course, if something were to go wrong with the conversion process you might end up in a situation where you need to restore your original database from a backup, so use this approach with care.

    SQL> show parameter db_name

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    db_name                              string      cdb12cr1
    SQL> create pluggable database pdb1
    using '/u01/app/oracle/db1.xml'
    nocopy tempfile reuse; 2 3

    Pluggable database created.

    Before pdb1 can be opened we need to run the script which will convert pdb1's data dictionary:

    SQL> alter session set container=pdb1;

    Session altered.

    SQL> @?/rdbms/admin/noncdb_to_pdb.sql

    Once the script completes we can open pdb1:

    SQL> alter pluggable database pdb1 open;

    Warning: PDB altered with errors.

    The warning while opening the PDB tells us that the encryption key is missing (the details can be seen in the pdb_plug_in_violations view). Let's go and import the key now:
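    The violation details can be pulled from the view directly, for example:

    ```sql
    -- Show why the PDB opened with errors.
    select name, cause, type, message, status
    from   pdb_plug_in_violations
    where  name = 'PDB1'
    order by time;
    ```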

    [oracle@ora12cr1 ~]$ sqlplus / as sysdba

    SQL*Plus: Release Production on Sat Mar 7 18:22:23 2015

    Copyright (c) 1982, 2014, Oracle. All rights reserved.

    Connected to:
    Oracle Database 12c Enterprise Edition Release - 64bit Production
    With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

    SQL> alter session set container=pdb1;

    Session altered.

    SQL> administer key management
    set keystore open
    identified by "cdb12cr1"; 2 3

    keystore altered.

    SQL> administer key management
    import keys with secret "db1"
    from '/u01/app/oracle/admin/cdb12cr1/wallet/tde/db1.exp'
    identified by "cdb12cr1"
    with backup; 2 3 4 5

    keystore altered.

    SQL> alter pluggable database pdb1 close;

    Pluggable database altered.

    SQL> alter pluggable database pdb1 open;

    Pluggable database altered.

    The encryption keys are now imported and we no longer get an error. Note that as with the export in order to import the keys the wallet must be explicitly opened with a password. We can verify that everything is in order by querying the encrypted table:

    SQL> administer key management
    set keystore open
    identified by "cdb12cr1"; 2 3

    keystore altered.

    SQL> select count(*) from encrypt_user.z_encrypted;


    Friday Philosophy – Do Average to Be a Success

    A few days ago a friend of mine, helifromfinland, tweeted something that exactly matched the topic that I was thinking of doing my next Friday Philosophy on. Heli said:

    I am learning to do things well enough, not always perfect. Even writing that sentence feels so wrong but #babysteps :-D

    That made me smile – I know the feeling myself and I know some people for whom it is all-consuming. It is something that I suspect many people who are active in the Oracle community struggle with. We all try and do the best we can at all we do.

    In our jobs in I.T., what is needed most often is not the perfect solution – or even the best solution we can come up with. It is:

    The best solution that achieves the requirement within the timeframe allowed.

    I think I was lucky in that the principle of “good enough” was explained to me fairly early on – and in an environment where “good enough” is not usually the prescribed wisdom.

    I was at college doing my degree. In academia or school you are usually encouraged to strive for perfection, in the aim of doing the best you can. It seems to me that they don’t teach you what the real world wants. I can’t remember the exact details (it’s almost 3 decades ago!) but I was trying to finish a written assignment in genetics and it was deadline day. I hunted down the professor who had assigned the task and asked if I could have a few more days as I wanted to check up some of the latest papers on it in the library {I know, what a terrible swot {definition – see item two here!} I was}. He did not say no, he did not say yes. Instead he took me into his office and asked me a few questions about the topic and what I had written so far. I think he was checking I had done something rather than was just covering up being lazy. He then asked me what the purpose of the assignment was.


    I started explaining the topic again but he cut me short. It took him a few attempts I think to get to where he was directing me, which was that it was a task to be completed in a time frame, to show I understood the topic. I was not doing original research, I was not trying to prove anything. It was Just A Task. The Prof then explained to me that his wife was not an academic but worked in industry. She had tasks to do in set time frames and others relied on her doing those tasks on time. She had more work to do than she could easily cope with. The Prof asked me “Should she keep asking for more time do them? Should she only do a few tasks to the best of her ability or most of her tasks to a level that everyone was happy with?” I got his point, but surely in academia the aim is always “as good as you can?”. He felt not and I think he was vexed {meaning, “really pissed off”} that many academics see it that way. There are times you need to do the very best you can; to spend the time to prove your theory; to cover off all the alternatives or caveats to your point; to get the lab result that clearly corroborates your point. But most of the time, you are doing tasks. Stop dithering and do them. It’s more pointed in the commercial world but the academic world is fundamentally the same.

    I think he left it to me to decide if I was going to hand the assignment in late or not but I can’t remember what I did (I’ve got my notes from back then, I can probably find out! But I’ve decided this post does not need that level of perfection… :-) ).

    I think we can all agree that, especially in a work environment where others are dependent on us doing our bit in a timely manner, it is better to do an acceptable job on time than constantly overrun. It is also better to get most {aiming unrealistically for “all”} of your work done rather than failing to do tasks that then impact on others. Of course, what is acceptable is all relative, and there is a time/achievement cost-benefit analysis in moving up the poor-acceptable-good-excellent-perfect spectrum.

    Maybe what defines your skill in a role is how far up the poor-acceptable-good-excellent-perfect spectrum you hit on a regular basis.

    The problem is that, for some of us, we are like Heli and we absolutely, totally and utterly want to do a very good job on everything we do. This is an idea that our parents, teachers and society do press upon us in our formative years, after all.

    Of course, your employer will want you to do six impossible things this morning, but most are happy with 4 good things this morning and would prefer that over 2 excellent things by the end of the day with 4 undone.

    I can’t say I’ve always stuck to the principle of limiting a task to the effort or time it deserves – I have a natural tendency to try and do too good {no, complete is a better way to put it} a job, or else I go the opposite way and don’t do the task justice {or even at all!}, so I really empathise with Heli’s tweet. When I first became a contractor I struggled with doing enough average work to keep the client happy; I was just spending too much time on doing the best I could at one or two tasks. In reality, they just wanted lots of stuff done competently. So my Prof had failed to instill the right attitude in me!

    One of the nuances of “good enough”, and my point about getting {nearly} all your work done, is that it is almost an impossible thing to achieve. If you get all your tasks done, what happens? Yes, more work comes your way. Especially as our working society has gone in exactly the opposite direction, both to what many predicted in the 50’s, 60’s & 70’s and to what we, the workers, would want. The plan was we would all be working, but working fewer hours and days for similar pay. But as most of us can testify, we seem to be asked to do more and more. It’s a topic for a different day but, basically, we are all screwed by our employers’ desire to get more out of each one of us to maximise profit – more work done by the same number of people, or fewer, reduces staff pay in relation to output. The reward for getting all your work done on time is that more work will be allocated to you.

    Another nuance is one I know I have gone on about before. If you do a job, especially an unpleasant or hard job, very well – what is your reward? You get to do the job for ever. Or you get the next horrible, hard job to do. The reward for exceeding expectations is to set the bar that people will want you to hit ever higher and higher and higher.

    But you do want some recognition and some promotions.

    So, for goodness sake, do just an acceptable-good job of a slightly-more-than-is-reasonable number of tasks and don’t do the next horrible job you are handed beyond expectation. And if you forget yourself and go and do the horrible task well, remember to make an utter mess of the next one – you must stop that expectation bar rising!

    The final nuance is perhaps the hardest one, and the one I still struggle with despite someone explaining it to me almost 30 years ago. Some tasks really do need to be at the brilliant end of the spectrum and some are fine at being at the average or even poor end. If your role is as a DBA, your backup/recovery protocols need to be towards the brilliant. You may hope to never need to do disaster recovery but one day you will and if it goes wrong, expect to be fired. However, tuning a batch report to run in under an hour? Usually, you are asked for an ideal run time that the business does not need. Under 2 hours is enough and you have a SHED load of other tasks. No one needs the report in under a minute. You should do an average job, even if your soul dies a little in doing so.

    As I mentioned above, as a contractor I initially struggled at times to do lots-of-average-work. As a consultant the requirements and expectations are a little different. You are expected to be excellent, to come up with something the regular team has not. It’s nice if it is achieved quickly but heck, hard takes time :-). Average (ie what the regular team would come up with) is NOT acceptable (*NB not always true). I personally find that the consultant paradigm suits me more; my character and working method are more suited to a slower, more considered approach. I really need to get to be a proper consultant…

    So the take home message on how to get on in the working world is:

    Be just above average at tasks.

    Do 80% of your work but back pedal if you hit 90%.

    If you accidentally do a magnificent job, mess up the next one.

    Occasionally, only occasionally, let rip and blow them all away with your brilliance.

    And please let me know how the above works out for you :-)


    Quick update – a recent xkcd panel that makes the point well :-)

    Getting Psyched for Collaborate 2015

    It’s that time again. Collaborate 2015 is fast approaching and, I have to say, I’m pretty damn psyched! This is the first time my personal company, NEXTGRES, will be demonstrating its game-changing database compatibility features to Oracle DBAs, developers, and users. I’d love for anyone reading my blog to drop by, whether it’s just to say “hi” or to see […]

    Remote graphical installs

    Most of my servers tend to be Linux VMs on VMware ESX without any graphical desktop set up, so it can be disconcerting trying to install Oracle with its graphical “runInstaller” being the gateway we have to cross to achieve installation.  Of course there is the silent install, but to do that requires knowing all the secret handshake responses to make the silent installer work. Often the easiest way to get a response file is to capture the responses from a graphical run of the runInstaller.

    There is another way, and that is using vncserver to run the graphical desktop on another machine. For example I can display the graphical desktop on my Mac.

    Here is the setup:

    On the Linux box:

     cd /etc/yum.repos.d
    yum install oracle-rdbms-server-11gR2-preinstall-1.0-3.el6.x86_64
    wget -O /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
    yum install oracle-rdbms-server-11gR2-preinstall-1.0-3.el6.x86_64
    yum install elfutils-libelf-devel
    yum install glibc.i686
    yum install tigervnc-server
    yum install xterm
    yum install twm
    yum install xsetroot

    # without the following get errors
    # [dix] Could not init font path element /usr/share/X11/fonts/misc, removing from list!
    # [dix] Could not init font path element /usr/share/X11/fonts/100dpi, removing from list!
    # [dix] Could not init font path element /usr/share/X11/fonts/Type1, removing from list!
    # [dix] Could not init font path element /usr/share/fonts/default/Type1, removing from list!
    # [dix] Could not init font path element built-ins, removing from list!
    # Fatal server error:
    # (EE) could not open default font 'fixed' (EE)

    yum install libXfont

    then run


    on my Mac, I install and run
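The commands themselves did not survive into this post, so here is a hedged sketch of the flow; the host name `linuxhost`, the display number `:1`, and the choice of VNC client are assumptions, not details from the original setup:

```shell
# On the Linux box, start a VNC desktop (run as the oracle user):
#   vncserver :1
# A VNC display :N listens on TCP port 5900+N, so display :1 means port 5901:
N=1
echo "point the Mac VNC client at linuxhost:$((5900 + N))"
# On the Mac, any VNC viewer will do, e.g. the built-in client:
#   open vnc://linuxhost:5901
# Finally, in an xterm on the VNC desktop, launch the installer:
#   export DISPLAY=:1
#   ./runInstaller
```

The twm window manager and xterm installed above give the VNC desktop just enough of an environment for runInstaller to draw its windows.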



    And all this makes me wonder why there can’t be a web based installer …


    Distributed Transactions and EM12c Page Performance

    While presenting at HotSos in Dallas, TX, an attendee (thank you Wenju! :)) asked how page performance issues could be identified when the call was being made remotely and not from the Oracle Management Repository (OMR), using the Page Performance console within Cloud Control.

    This is a very good question and, although the answer is not straightforward, I would like to demonstrate what is available to assist in answering the challenge.

    First, I have to add that the answer is provided using release of Enterprise Manager 12c, so if you are still running an earlier version, your results may not render as successful an outcome, but I will also provide you with a command line option to help those of you on earlier releases, too.

    Page Performance Console

    The Page Performance Console is accessed once you’ve entered the Cloud Control Management feature from the Setup –> Manage Cloud Control –> Health Overview.  After entering the Health Overview, at the left, top menu, click on OMS and Repository –> Monitoring –> Page Performance.  There are three tabs available in this console and using the second tab, Page Level Performance, you will view the list of pages that are experiencing the longest waits, in descending order.


    Using the first one in the list, we note that this is the AWR Warehouse.  The calls are made to a remote database, (the AWR Repository is not stored in the same database as the Oracle Management Repository, so of course, the calls are 90%+ remote…) so it’s an excellent choice to use as an example.

    Clicking on the link for the page, I’m brought to the Page Performance Report that has all the details on the performance waits for the console page.  As I’ve been a heavy user of this page, it’s no wonder that it is showing up at the top in page processing time.


    The initial section clearly shows us how much of the performance issue derives from java waits.  Knowing this, you might think we could just bypass it, but having the data that verifies the culprit for the waits is essential for a DBA.   We should never assume or guess when it comes to performance, and this is just one more valuable feature of EM12c that eliminates guesswork from the demanding questions users pose to you each day.  To make the above section easier to understand: if there were additional time distribution for database, agent or network/browser time, it would be displayed similarly to the top sessions and top SQL in the Top Activity:


    When you are viewing the data in the Page Performance report, the translation from Top Activity for the DBA is:

    Time Distribution = Activity(%)

    Even though the database calls are completely insignificant, the data is still provided as part of the report.  Just saying the remote calls are insignificant is not enough; we need to see why they were deemed so.  If we want to see how our distributed (aka remote) calls contribute to the time consumed by the total process, all we have to do is scan down to the Top SQL part of the report to investigate.


    Note that no SQL_IDs are displayed, even though these are all queries.  The reason is that the OMR is not able to pull the SQL_IDs for these remote calls to the AWR Warehouse that are being queried by Cloud Control.  What is displayed is the time CONSUMED by each of the executions until the results are returned to the console, and no Top SQL is listed as nothing was over 10ms.  As I’ve stated, there is incredible value in knowing that database time is still not the main consumer of the remote calls.  The second valuable piece of information is displayed at the very right of the image: the remote session IDs.  These can be used if you wish to investigate further via EM12c in the AWR Repository database, for example with an ASH report for the time this executed, or another session search option (there are a number of options available).

    Older Release Options

    If you are on an earlier version and don’t have the Page Performance Report available to you, there are great command line tools available via the EM CLI.  The following graphic takes you through the steps, and the translation here for the DBA is that it’s similar to TKPROF: you collect the session ID, create a trace of that session, stop the trace, then create the report from the trace file.


    Although the report isn’t as aesthetically pleasing as what I’ve shown you from the Page Performance Report, you are still provided the data you need to answer those important performance questions about console page response, and this data can be very valuable if you are looking into an issue or working with Oracle Support.






    Copyright © DBA Kevlar [Distributed Transactions and EM12c Page Performance], All Right Reserved. 2015.

    Adding An EMC XtremIO Volume As An ASM Disk With Oracle Database 12c On Linux – It Does Not Get Any Easier Than This.

    When Something Is Simple It Must Be Simple To Prove

    Provisioning high-performance storage has always been a chore. Care and concern over spindle count, RAID type, RAID attributes, number of controller arms involved and a long list of other complexities have burdened storage administrators. Some of these troubles were mitigated by the advent of Automatic Storage Management–but not entirely.

    Wouldn’t it be nice if the complexity of storage provisioning could be boiled down to but a single factor? Wouldn’t it be nice if that single factor was, simply, capacity? With EMC XtremIO the only factor storage administrators need to bear in mind when provisioning storage is, indeed, capacity.

    With EMC XtremIO a storage administrator hears there is a need for, say, one terabyte of storage and that is the entirety of information needed. No more questions about the I/O pattern (e.g., large sequential writes ala redo logging, etc). The Database Administrator simply asks for capacity with a very short sentence and the Storage Administrator clicks 3 buttons in the XtremIO GUI and that’s all there is to it.

    Pictures Speak Thousands of Words

    I too enjoy the simplicity of XtremIO in my engineering work. Just the other day I ran short on space in a tablespace while testing Oracle Database 12c intra-node parallel query. I was studying a two-node Real Application Clusters setup attached to an EMC XtremIO array via 8 paths of 8GFC Fibre Channel. The task at hand was a single parallel CTAS (Create Table As Select) but the command failed because my ASM disk group ran out of space when Oracle Database tried to extend the BIGFILE tablespace.

    Since I had to add some space I thought I’d take a few screen shots to show readers of this blog how simple it is to perform the full cycle of tasks required to add space to an active cluster with ASM in an XtremIO environment.

    The following screen shot shows the error I was reacting to:


    Since the following example shows host configuration steps please note the Linux distribution (Oracle Linux) and kernel version (UEK) I was using:


    The following screenshot shows the XtremIO GUI configuration tab. I selected “Add” and then typed a name and size (1TB) of the volume I wanted to create:

    NOTE: Right click the embedded images for greater clarity


    The following screenshot shows how I then selected, from the right-hand column, the initiators (think hosts) that I wanted to see the new volume:


    After I clicked “apply” I could see my new volume in my “12C” folder. With the folder construct I can do things like create zero-overhead, immediate, writable snapshots with a single mouse click. As the following screenshot shows, I highlighted “data5” so I could get details about the volume in advance of performing tasks on the host. The properties tab shows me the only information I need to proceed–the NAA Identifier. Once I had the NAA Identifier I moved on to the task of discovering the new volume on the hosts.



    Host Discovery

    Host discovery consists of three simple steps:

    1. Multipath discovery
    2. Updating the udev rules file with a text editor
    3. Updating udev state with udevadm commands

    Multipath Discovery

    On both nodes of the cluster I executed the following series of commands. This series of commands generates a lot of terminal output so I won’t show that in this blog post.

    # multipath -F ; service multipathd restart ; multipath -r

    After executing the multipath related commands I was able to see the new volume (0002a) on both nodes of the cluster. Notice how the volume has different multipath names (mpathab, mpathai) on the hosts. This is not an issue since the volumes will be controlled by udev:


    Updating Udev Rules File and Udev State

    After verifying the volumes were visible under DM-MPIO I moved on to the udev actions. The following screenshot shows how I added an ACTION line in the udev rules file and copied it to the other RAC host and then executed the udev update commands on both RAC hosts:
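The rule itself only appears in the screenshot, but a rule of roughly this shape would do the job. This is an illustrative sketch: the DM_UUID value below is a placeholder, not the volume's real NAA identifier, and the file name is an assumption.

```
# /etc/udev/rules.d/99-oracle-asmdevices.rules (illustrative sketch; the
# mpath-3... value stands in for "mpath-3" + the NAA identifier shown in
# the XtremIO GUI properties tab)
ACTION=="add|change", KERNEL=="dm-*", ENV{DM_UUID}=="mpath-3514f0c5XXXXXXXXX", \
  SYMLINK+="asmdisk6", OWNER="oracle", GROUP="dba", MODE="0660"
```

After copying the rule to both RAC hosts, `udevadm control --reload-rules` followed by `udevadm trigger` re-evaluates the rules and creates the /dev/asmdisk6 symlink with oracle:dba ownership on each node.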


    I then could see “/dev/asmdisk6” on both RAC hosts:


    Adding The New XtremIO Volume As An ASM Disk

    The next task was to use ASMCA (ASM Configuration Assistant) to add the XtremIO volume to the ASM disk group called “DATA”:


    As the following screenshot shows the volume is visible as /dev/asmdisk6:


    I selected asmdisk6 and the task was complete:


    I then saw evidence of ASM rebalancing in the XtremIO GUI Performance tab:




    With EMC XtremIO you provision capacity and that allows you to speak in very short sentences with the application owners that share space in the array.

    It doesn’t get any easier than this.

    Filed under: oracle

    Oracle University Instructors on the Cruise Ship

    I’m really looking forward to speaking at the Oracle User Group Norway Spring Seminar 2015, together with my dear colleague Joel Goodman! For sure it’s one of the highlights this year in terms of Oracle Events.

    Joel will present about Oracle Automatic Parallel Execution on MAR-12, 6pm and about Oracle 12c Automatic Data Optimization and Heat Map on MAR-13, 9:30am

    Yours sincerely will talk about The Data Guard Broker – Why it is recommended on MAR-12, 6pm and about The Recovery Area – Why it is recommended on MAR-13, 8:30am

    Joel Goodman & Uwe Hesse

    The OUGN board has again gathered an amazing lineup of top-notch speakers for this event, so I will gladly take the opportunity to improve my knowledge :-)

    Tagged: #ougn2015