Oakies Blog Aggregator

My Oracle Life of Clubbing & Harmony

Last year I promised myself I would do more conferences & presenting and that it would include more events further afield, such as in Europe. I can’t say I managed it in 2014 (Liverpool for the UKOUG Tech14 did not count as a foreign country for me, even if I found a couple of the locals hard to understand) but 2015 is proving more successful. I attended the OUG Ireland conference 2 weeks ago, for my first trip to that country, and I learnt recently that I have papers accepted for Harmony 2015. This conference is a joint event between the Oracle user group of Finland, the Oracle user group of Estonia and the Latvian Oracle user group.

The conference is on the 11th and 12th of June in Tallinn, Estonia. I know that a few of my friends I’ve met in the flesh will also be there, but also some people I only know online and who I’m looking forward to meeting for real {and one who I am not sure if I have met in the flesh or not!!!}. That’s part of why I like going to conferences; it is nice to get to know people via electronic means, but there is nothing like actually being in the same room and chatting, especially if it is relaxing over a coffee, beer or a meal.

However, I am particularly happy to be going to Tallinn as my wife has been there and loves it. We are trying to organise it so that she can come over as well, but she has her own travel commitments that vary from week to week. Sue knows how to say “can you punch my ticket” in Estonian – and she assures me this is not a euphemism for anything.

In case Sue cannot make it, she has given me the book she learnt from, so I can learn Estonian myself:

Learn Estonian – in Russian!

First I have to learn Russian though… Yes, it’s a Russian “How to learn Estonian” book.

Have you any idea how much pleasure she took in doing that to me?

So that is the Harmony. What about the Clubbing? That would be Club Oracle London, which is a user group I mention each time there is a meeting. It is in London in the evening and there are 3 talks, beer, pizza and lots of chat between the crowd & the presenters. I’m doing my Disasters talk at the next meeting on the 30th April. Click that link to register and secure your place; it’s free. The other presenters are Svetoslav Gyurov and Dominic Giles. Dom is being particularly brave and is offering to answer any questions people have about the database {“as honestly as I can”}. I’ve known Dom for years; he used to come over to the place I worked when we were doing a lot of beta testing of Oracle. He secured his place in my admiration not only by thoroughly knowing his stuff but also by telling me off for being pathetic, for not pushing the new tech, and for being a wimp. Utter honesty from the vendor works for me.

I’ve currently got nothing else organised for 2015 conference-wise (apart from the small issue of helping define the technical content for UKOUG Tech15! So I guess I will be there. Oh, and probably a couple of SIGs). I keep saying I’ll try to do Bulgaria but again I’d like to get that to work with going with Sue. And of course, I could put in for Oracle Open World 15, but it’s a loooong way to go and costs me a lot. And Larry does not seem to want to talk to me anymore.

DOAG Expertenseminar "Parallel Execution Masterclass" (German)

In two weeks the expert seminar "Parallel Execution Masterclass", which I am running together with DOAG, takes place in Berlin.

There are still a few places available, so if you have the time and inclination to come to Berlin and pick up exclusive knowledge about (not only) the Parallel Execution feature of the Oracle database, I would be delighted to welcome you there together with the other participants and to spend a good and productive time with you!

If you are interested, please get in touch with the DOAG contacts given in the link, where you will also find a more detailed description of the seminar.

Ad: Trivadis DK Opening Event

Trivadis, the company I work for, just opened a new branch in Denmark (the press release, in German, is available here). The opening event takes place on the 14th of April (09:00-15:00) at the Oracle office in Ballerup. Two colleagues of mine and I will be there to deliver the following presentations:

Big Data versus Conventional Techniques – Decision Criteria (Peter Welker, Senior Principal Consultant)

Should you use Big Data technologies in the Business Intelligence and Data Warehouse environment or not? The presentation explains “classic” and “new” requirements for data analysis and compares them against technical and other criteria: latency, throughput, data volume, information quality and structure alone are not enough. User friendliness, maturity, stability, maintainability, or simply the available know-how and the preferences of developers, admins and users also matter. And of course the costs are not to be overlooked.

The Oracle Database In-Memory Option: Challenges and Possibilities (Christian Antognini, Senior Principal Consultant)

Oracle Database In-Memory, which was introduced with version 12.1.0.2, promises to deliver in-memory performance without modifying the application’s code, increasing the complexity of database administration, or jeopardizing the utilization of other Oracle Database functionalities.
The aim of this presentation, after explaining the key concepts the Oracle Database In-Memory option is based on, is to review what you can expect from this new technology. Specifically, we’ll take a look at the situations where you can take advantage of it and what kind of overhead you should expect when enabling it.

The Power To Perform – Managed Services Made Easy (Gerald Klump, Head of Managed Services & Outsourcing)

How many ideas, new business models or process optimisation measures are just waiting to be implemented in your company? Too many? Because your IT department is already at full capacity handling its day-to-day tasks and a host of other projects?
The diversity of challenges that have to be faced today means that involving external IT service providers is the best way, from a business point of view, of mastering all these tasks as a whole on time. The quality of the provider, alongside the quality of the collaboration itself, is the key to successfully achieving this goal. But what are the characteristics of effectively cooperating with an external service provider?
How can you effortlessly tailor services that require support to precisely accommodate your own needs? Trivadis, as a leading IT service provider, can offer you the solutions that you’re looking for. Based on concrete examples of our projects for small to medium-sized companies and key account customers such as Audi, we’ll show you what made-to-measure services actually look like in practice, what the benefits for your company will be, and what you will have to take into consideration.
With our Managed Services, you will enhance your performance as well.

Sound interesting? Or just want to stop by to say hi? Then, do not hesitate to let us know that you want to participate by sending an email to this address. Further information about the event, in Danish, is available here.

Using the Oracle Grid Infrastructure Agents for GoldenGate on Oracle restart

I was testing Oracle GoldenGate on a non-clustered Oracle 12.1.0.2 database with ASM. With ASM, you need to have the grid infrastructure installed. The clusterware for the single-node install is called ‘Oracle Restart’.

The most convenient way I could find to have GoldenGate running at startup was using the Oracle Grid Infrastructure Agents. These agents are not installed by default; you need to download them from the Oracle Technology Network. The download is with the grid infrastructure downloads in the database section.

The installation is very simple: unzip the xagpack_6.zip file, and run the xagsetup.sh script with the ‘--install’ and ‘--directory’ arguments. The directory argument needs to point to a directory outside of the grid infrastructure home. My first choice would have been to put it in the infrastructure home, but the install script does not allow that.

$ ./xagsetup.sh --install --directory /u01/app/oracle/product/xag
Installing Oracle Grid Infrastructure Agents on: ogg-dest
Done.

The control utility, ‘agctl’ can be run without any environment variables set to point to the directory where the utility is installed, or the grid infrastructure home.

$ /u01/app/oracle/product/xag/bin/agctl query releaseversion
The Oracle Grid Infrastructure Agents release version is 6.1.1

To add a GoldenGate resource, use the agctl utility:

$ /u01/app/oracle/product/xag/bin/agctl add goldengate gg1 --gg_home /u01/app/oracle/product/12.1.2/oggcore_1 --instance_type target --oracle_home /u01/app/oracle/product/12.1.0.2/dbhome_1 --databases ora.dest.db

This is a very simple example, where a cluster resource called ‘gg1’ is created, for which we point out the GoldenGate home, the database ORACLE_HOME and the database cluster resource (not the database name, as some documentation from Oracle says).

In my case, GoldenGate was not running. After the GoldenGate cluster resource was added, it shows as offline:

xag.gg1.goldengate
      1        OFFLINE OFFLINE                               STABLE
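
These status snippets come from the clusterware resource listing. To check just this one resource you can query it directly (an aside I've added, not from the original post; crsctl ships with the grid infrastructure):

$ crsctl stat res xag.gg1.goldengate -t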

Now start the resource using the ‘agctl’ utility:

/u01/app/oracle/product/xag/bin/agctl start goldengate gg1

(please mind you could do exactly the same with crsctl: crsctl start res xag.gg1.goldengate)
Starting does not produce any message. Let’s look at the GoldenGate cluster resource:

xag.gg1.goldengate
      1        ONLINE  ONLINE       ogg-dest                 STABLE

What it does is start the GoldenGate manager (mgr) process:

$ /u01/app/oracle/product/12.1.2/oggcore_1/ggsci << H
> info all
> H

Oracle GoldenGate Command Interpreter for Oracle
Version 12.1.2.1.0 OGGCORE_12.1.2.1.0_PLATFORMS_140727.2135.1_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Aug  7 2014 10:21:34
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2014, Oracle and/or its affiliates. All rights reserved.



GGSCI (ogg-dest.local) 1>
Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
REPLICAT    ABENDED     REP1        00:00:00      17:55:53

Let’s add monitoring of the replicat. In real-life scenarios it’s the extracts and replicats that are important; these perform the actual work GoldenGate is supposed to do!

/u01/app/oracle/product/xag/bin/agctl modify goldengate gg1 --monitor_replicats rep1
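
To verify that the modification has been registered, agctl can also display the resource configuration (an aside I've added; the exact output format may vary between agent releases):

$ /u01/app/oracle/product/xag/bin/agctl config goldengate gg1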

The clusterware checks the GoldenGate resource status every 30 seconds. So if you check the cluster immediately after modifying the GoldenGate resource, it may not yet have performed the check that includes the added replicat:

xag.gg1.goldengate
      1        ONLINE  INTERMEDIATE ogg-dest                 ER(s) not running :
                                                             REP1,STABLE

It now neatly reports the replicat group rep1 not running! I guess ‘ER’ means Extract and Replicat.
Now start the replicat, and check the status again:

xag.gg1.goldengate
      1        ONLINE  ONLINE       ogg-dest                 STABLE

A few final notes. This is the simplest possible setup. I couldn’t find a lot of information on the infrastructure agents except for some Oracle-provided documentation, which is why I created this blog. If you apply this on a cluster, I would urge you to look into that Oracle-provided documentation and create the full setup, with GoldenGate writing its files on a cluster filesystem, using an application VIP, etcetera, so GoldenGate can be started on, or failed over to, another host.

GoldenGate is not the only thing these agents can incorporate into the clusterware. The current version (6.1.1) supports, besides GoldenGate: Tomcat, Apache, JD Edwards (JDE), MySQL, various types of PeopleSoft servers, Siebel and WebLogic.

Deinstalling is very simple, but might catch you out. The agents are deinstalled by running the setup script (xagsetup.sh) with the ‘--deinstall’ argument. This of course is quite normal, but it deinstalls the agents from the location where you call xagsetup.sh. This means that if you run xagsetup.sh from the location where you unzipped it for installation, it will ‘deinstall’ (that is, clean out) that directory, leaving the true installation alone. The documentation states that upgrades to the agents are done by removing them and installing the new version.
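
So, to remove the actual installation, call the xagsetup.sh that sits in the installed location rather than the one in the unzip directory (a minimal sketch based on the behaviour described above; the exact location of the script inside the install directory may differ):

$ cd /u01/app/oracle/product/xag
$ ./xagsetup.sh --deinstall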

Tagged: clusterware, grid, grid infra, grid infrastructure agents, install, xag

Little things worth knowing: exp/imp vs expdp and impdp for HCC in Exadata

Do you know the difference between exp/imp and expdp/impdp when it comes to importing HCC compressed data in Exadata?

If not, then follow me through two examples. This is on 11.2.0.3/11.2.3.3.1 but applies to all database releases you can have on Exadata. The task at hand is to export a table (which happens to be non-partitioned and HCC compressed for query high) and import it into a different user’s schema. This is quite a common approach when migrating data from a non-Exadata system into an Exadata system. You could for example pre-create the DDL for the tables and implement HCC before even importing a single row. When importing the data, the partitions’ HCC attributes will be honoured and data will be inserted compressed. Or won’t it?

The table

The table I want to export resides on our V2 system. Since I am (trying to be) a good citizen I want to use dbfs for the dump files. Beginning with 12.1.0.2 Grid Infrastructure you can also have ACFS by the way. Let’s start by creating the directory needed and some meta-information about the source table:

SQL> create or replace directory DBM01 as '/dbfs_direct/FS1/mbach/';

Directory created.

SQL> select owner, table_name, partitioned, num_rows, compress_for from dba_tables where table_name = 'T1_QH';

OWNER                          TABLE_NAME                     PAR   NUM_ROWS COMPRESS_FOR
------------------------------ ------------------------------ --- ---------- ------------
MARTIN                         T1_QH                          NO             QUERY HIGH

SQL> select bytes/power(1024,2) m, blocks from dba_segments where segment_name = 'T1_QH';

         M     BLOCKS
---------- ----------
        72       9216

The table is 72 MB in size, and HCC compressed (I actually ensured that it was by issuing an “alter table t1_qh move;” before the export started).

Data Pump

The first export uses expdp, followed by impdp to get the data back. I am remapping the schema so I don’t have to risk overwriting my source data.

[enkdb01:oracle:DBM011] /home/oracle/mbach
> expdp martin/xxxxxx directory=DBM01 dumpfile=exp_t1_qh.dmp logfile=exp_t1_qh.log tables=t1_qh

Export: Release 11.2.0.3.0 - Production on Tue Mar 24 06:15:34 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Starting "MARTIN"."SYS_EXPORT_TABLE_02":  martin/******** directory=DBM01 dumpfile=exp_t1_qh.dmp logfile=exp_t1_qh.log tables=t1_qh
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 72 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "MARTIN"."T1_QH"                            9.675 GB 10000000 rows
Master table "MARTIN"."SYS_EXPORT_TABLE_02" successfully loaded/unloaded
******************************************************************************
Dump file set for MARTIN.SYS_EXPORT_TABLE_02 is:
  /dbfs_direct/FS1/mbach/exp_t1_qh.dmp
Job "MARTIN"."SYS_EXPORT_TABLE_02" successfully completed at 06:17:16

The interesting bit here is that the table on disk occupies around 72 MB, and yet expdp tells me the 10000000 rows occupy 9.7 GB. Can anyone guess why?

[enkdb01:oracle:DBM011] /home/oracle/mbach
> ls -lh /dbfs_direct/FS1/mbach/exp_t1_qh.dmp
-rw-r----- 1 oracle dba 9.7G Mar 24 06:17 /dbfs_direct/FS1/mbach/exp_t1_qh.dmp

Yes, 10GB. Data apparently is not exported in its compressed form. Now this table is going to be imported:

[enkdb01:oracle:DBM011] /home/oracle/mbach
> impdp imptest/xxxxxx directory=DBM01 dumpfile=exp_t1_qh.dmp logfile=imp_t1_qh.log \
> tables=martin.t1_qh remap_schema=martin:imptest 

Import: Release 11.2.0.3.0 - Production on Tue Mar 24 06:23:44 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Master table "IMPTEST"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "IMPTEST"."SYS_IMPORT_TABLE_01":  imptest/******** directory=DBM01
  dumpfile=exp_t1_qh.dmp logfile=imp_t1_qh.log tables=martin.t1_qh remap_schema=martin:imptest
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "IMPTEST"."T1_QH"                           9.675 GB 10000000 rows
Job "IMPTEST"."SYS_IMPORT_TABLE_01" successfully completed at 06:29:53

This looks all right: all the data is back (the number of rows exported matches those imported). What about the segment size?

SQL> select owner, table_name, partitioned, num_rows, compress_for from dba_tables where table_name = 'T1_QH';

OWNER                          TABLE_NAME                     PAR   NUM_ROWS COMPRESS_FOR
------------------------------ ------------------------------ --- ---------- ------------
MARTIN                         T1_QH                          NO             QUERY HIGH
IMPTEST                        T1_QH                          NO             QUERY HIGH

2 rows selected.

SQL> select bytes/power(1024,2) m, blocks from dba_segments where segment_name = 'T1_QH';

         M     BLOCKS
---------- ----------
        72       9216
        72       9216

2 rows selected.

Identical down to the block: the imported copy is just as compressed as the source.
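
If you want more proof than the segment size, you can also sample a row and ask the database which compression type it is stored with (a small sketch I've added, not from the original post; in 11.2, a return value of 4 corresponds to COMP_FOR_QUERY_HIGH in DBMS_COMPRESSION):

SQL> select dbms_compression.get_compression_type('IMPTEST', 'T1_QH', rowid) comp_type
  2  from imptest.t1_qh where rownum = 1;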

Traditional Export/Import

Despite the fact that exp/imp are deprecated, they are still included with 12.1.0.2, the current release at the time of writing. What if you did the same process with these instead? After all, many DBAs “grew up” with those tools and can use them in their sleep. This, plus some initial deficiencies with Data Pump in 10g, keeps exp/imp high up in the list of tools we like.

Let’s export the table:

[enkdb01:oracle:DBM011] /home/oracle/mbach
> exp martin/xxxxxxx file=/dbfs_direct/FS1/mbach/exp_t1_qh_classic.dmp log=/dbfs_direct/FS1/mbach/exp_t1_qh_classic.log tables=t1_qh

Export: Release 11.2.0.3.0 - Production on Tue Mar 24 06:42:34 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Export done in US7ASCII character set and AL16UTF16 NCHAR character set

About to export specified tables via Conventional Path ...
. . exporting table                          T1_QH   10000000 rows exported
Export terminated successfully without warnings.

The files are more or less identical in size to the ones created before by Data Pump:

[enkdb01:oracle:DBM011] /home/oracle/mbach
> ls -lh /dbfs_direct/FS1/mbach/*classic*
-rw-r--r-- 1 oracle dba 9.7G Mar 24 07:01 /dbfs_direct/FS1/mbach/exp_t1_qh_classic.dmp
-rw-r--r-- 1 oracle dba  537 Mar 24 07:01 /dbfs_direct/FS1/mbach/exp_t1_qh_classic.log

What about the import?

[enkdb01:oracle:DBM011] /home/oracle/mbach
> imp imptest/xxxxxx file=/dbfs_direct/FS1/mbach/exp_t1_qh_classic.dmp log=/dbfs_direct/FS1/mbach/imp_t1_qh_classic.log \
> FROMUSER=martin TOUSER=imptest tables=t1_qh

Import: Release 11.2.0.3.0 - Production on Tue Mar 24 07:27:42 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

Export file created by EXPORT:V11.02.00 via conventional path

Warning: the objects were exported by MARTIN, not by you

import done in US7ASCII character set and AL16UTF16 NCHAR character set
. importing MARTIN's objects into IMPTEST
. . importing table                        "T1_QH"

And this takes a loooong time. What’s happening in the background? I first checked what the session was doing. That’s simple: have a look at the output.

SQL> @scripts/as

   INST_ID   SID    SERIAL# USERNAME      PROG       SQL_ID         CHILD PLAN_HASH_VALUE      EXECS   AVG_ETIME OFF SQL_TEXT
---------- ----- ---------- ------------- ---------- ------------- ------ --------------- ---------- ----------- --- -----------------------------------------
         1   594         23 IMPTEST       imp@enkdb0 fgft1tcrr12ga      0               0      22723         .00 No  INSERT /*+NESTED_TABLE_SET_REFS+*/ INTO "

The EXECS count was interesting, and indeed, after a couple of minutes that counter had gone up a lot:

   INST_ID   SID    SERIAL# USERNAME      PROG       SQL_ID         CHILD PLAN_HASH_VALUE      EXECS   AVG_ETIME OFF SQL_TEXT
---------- ----- ---------- ------------- ---------- ------------- ------ --------------- ---------- ----------- --- -----------------------------------------
         1   594         23 IMPTEST       imp@enkdb0 fgft1tcrr12ga      0               0     639818         .00 Yes INSERT /*+NESTED_TABLE_SET_REFS+*/ INTO "

You can probably guess what’s coming next. After quite a while the import finished:

> imp imptest/xxxxxxx file=/dbfs_direct/FS1/mbach/exp_t1_qh_classic.dmp log=/dbfs_direct/FS1/mbach/imp_t1_qh_classic.log \
> FROMUSER=martin TOUSER=imptest tables=t1_qh

Import: Release 11.2.0.3.0 - Production on Tue Mar 24 07:27:42 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

Export file created by EXPORT:V11.02.00 via conventional path

Warning: the objects were exported by MARTIN, not by you

import done in US7ASCII character set and AL16UTF16 NCHAR character set
. importing MARTIN's objects into IMPTEST
. . importing table                        "T1_QH"   10000000 rows imported
Import terminated successfully without warnings.

And now for the reason for this blog post:

SQL> select owner, bytes/power(1024,2) m, blocks from dba_segments where segment_name = 'T1_QH';

OWNER                                   M     BLOCKS
------------------------------ ---------- ----------
MARTIN                                 72       9216
IMPTEST                             11227    1437056

The newly imported table was not compressed at all when imported using the traditional path. That is consistent with the conventional-path INSERT seen above: HCC compression is only applied by direct-path operations, so imp’s conventional-path inserts leave the data uncompressed.
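
If you do end up with an uncompressed copy after a conventional-path import, one way to get the HCC compression back is to rebuild the segment (a sketch, with the usual caveats: the move takes an exclusive lock and any indexes on the table would need rebuilding afterwards):

SQL> alter table imptest.t1_qh move compress for query high;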

Presentation material & E-learning videos – In-Memory Column Store Workshop with Maria Colgan

You can now download and have another look at the presentations used during the In-Memory…

Oracle Database In-Memory Test Drive Workshop: Canberra 28 April 2015

I’ll be running a free Oracle Database In-Memory Test Drive Workshop locally here in Canberra on Tuesday, 28th April 2015. Just bring a laptop with at least 8G of RAM and I’ll supply a VirtualBox image with the Oracle Database 12c In-Memory environment. Together we’ll go through a number of hands-on labs that cover: Configuring the Product Easily […]

Sqlplus is my second home, part 8: Embedding multiple sqlplus arguments into one variable

I’ve updated some of my ASH scripts to use these 4 arguments in a standard way:

  1. What ASH columns to display (and aggregate by)
  2. Which ASH rows to use for the report (filter)
  3. Time range start
  4. Time range end

So this means whenever I run ashtop (or dashtop), for example, I need to type in all 4 parameters. The example below shows the top SQL_IDs only for user SOE sessions, from the last hour of ASH samples:

SQL> @ashtop sql_id username='SOE' sysdate-1/24 sysdate

    Total
  Seconds     AAS %This   SQL_ID        FIRST_SEEN          LAST_SEEN           DIST_SQLEXEC_SEEN
--------- ------- ------- ------------- ------------------- ------------------- -----------------
     2271      .6   21% | 56pwkjspvmg3h 2015-03-29 13:13:16 2015-03-29 13:43:34               145
     2045      .6   19% | gkxxkghxubh1a 2015-03-29 13:13:16 2015-03-29 13:43:14               149
     1224      .3   11% | 29qp10usqkqh0 2015-03-29 13:13:25 2015-03-29 13:43:32               132
      959      .3    9% | c13sma6rkr27c 2015-03-29 13:13:19 2015-03-29 13:43:34               958
      758      .2    7% |               2015-03-29 13:13:16 2015-03-29 13:43:31                 1

When I want more control and specify a fixed time range, I can just use the ANSI TIMESTAMP (or TO_DATE) syntax:

SQL> @ashtop sql_id username='SOE' "TIMESTAMP'2015-03-29 13:00:00'" "TIMESTAMP'2015-03-29 13:15:00'"

    Total
  Seconds     AAS %This   SQL_ID        FIRST_SEEN          LAST_SEEN           DIST_SQLEXEC_SEEN
--------- ------- ------- ------------- ------------------- ------------------- -----------------
      153      .2   22% | 56pwkjspvmg3h 2015-03-29 13:13:29 2015-03-29 13:14:59                 9
      132      .1   19% | gkxxkghxubh1a 2015-03-29 13:13:29 2015-03-29 13:14:59                 8
       95      .1   14% | 29qp10usqkqh0 2015-03-29 13:13:29 2015-03-29 13:14:52                 7
       69      .1   10% | c13sma6rkr27c 2015-03-29 13:13:31 2015-03-29 13:14:58                69
       41      .0    6% |               2015-03-29 13:13:34 2015-03-29 13:14:59                 1

Note that arguments 3 & 4 above are in double quotes as there’s a space within the timestamp value. Without the double quotes, sqlplus would think the script has a total of 6 arguments due to the spaces.

I don’t like to type too much though (every character counts!) so I was happy to see that the following sqlplus hack works. I just defined pairs of arguments as sqlplus DEFINE variables as seen below (also in init.sql now):

  -- geeky shortcuts for producing date ranges for various ASH scripts
  define      min="sysdate-1/24/60 sysdate"
  define   minute="sysdate-1/24/60 sysdate"
  define     5min="sysdate-1/24/12 sysdate"
  define     hour="sysdate-1/24 sysdate"
  define   2hours="sysdate-1/12 sysdate"
  define  24hours="sysdate-1 sysdate"
  define      day="sysdate-1 sysdate"
  define    today="TRUNC(sysdate) sysdate"
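
Following the same pattern you can add your own windows; for example, a hypothetical 15-minute shortcut (sysdate-1/24/4 being 15 minutes) could be defined like this:

  define    15min="sysdate-1/24/4 sysdate"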

And now I can type just 3 arguments instead of 4 when I run some of my scripts and want some predefined behavior like seeing last 5 minutes’ activity:

SQL> @ashtop sql_id username='SOE' &5min

    Total
  Seconds     AAS %This   SQL_ID        FIRST_SEEN          LAST_SEEN           DIST_SQLEXEC_SEEN
--------- ------- ------- ------------- ------------------- ------------------- -----------------
      368     1.2   23% | gkxxkghxubh1a 2015-03-29 13:39:34 2015-03-29 13:44:33                37
      241      .8   15% | 56pwkjspvmg3h 2015-03-29 13:40:05 2015-03-29 13:44:33                25
      185      .6   12% | 29qp10usqkqh0 2015-03-29 13:39:40 2015-03-29 13:44:33                24
      129      .4    8% | c13sma6rkr27c 2015-03-29 13:39:35 2015-03-29 13:44:32               129
      107      .4    7% |               2015-03-29 13:39:34 2015-03-29 13:44:33                 1

That’s it, I hope this hack helps :-)

By the way – if you’re a command line & sqlplus fan, check out the SQLCL command line “new sqlplus” tool from the SQL Developer team! (you can download it from the SQL Dev early adopter page for now).

 

Video Tutorial: XPLAN_ASH Active Session History - Part 4

The next part of the video tutorial explaining the XPLAN_ASH Active Session History functionality, continuing the actual walk-through of the script output.

More parts to follow.

In-memory – can you REALLY drop those indexes ?

It’s not a hard sell to convince anyone that a whopping huge great chunk of compressed column-based memory is going to assist with those analytic-style queries. The In-Memory option seems a natural fit, especially for those people where the bottleneck in their infrastructure is the capacity to consume data from storage.

What is perhaps a more interesting area of investigation is the claim that In-Memory will also be of benefit in OLTP systems, where (to paraphrase many presenters at OpenWorld 2014) we won’t be needing all those indexes on transactional tables.

So I thought I’d have an initial play around in that space, and see what happens.

We’ll start with a simple test bed – a transactional table "T", being a child of parent table "PAR",  with a few indexes representing those "additional" OLTP indexes.

SQL*Plus: Release 12.1.0.2.0 Production on Sat Mar 28 11:58:03 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


SQL> drop table PAR cascade constraints purge;

Table dropped.

SQL> create table PAR ( p int primary key);

Table created.

SQL> insert into PAR
  2  select rownum-1 from dual
  3  connect by level <= 100;

100 rows created.

SQL>

SQL> commit;

Commit complete.


SQL>

SQL> exec dbms_stats.gather_table_stats('','PAR');

PL/SQL procedure successfully completed.


SQL>

SQL> drop table T purge;

Table dropped.


SQL>

SQL> create table T (
  2    pk int,
  3    dist50  int,
  4    dist1000 int,
  5    dist100000 int,
  6    stuff char(30),
  7    numeric_stuff number(10,2)
  8  );

Table created.


SQL>

SQL> drop sequence seq;

Sequence dropped.


SQL> create sequence seq cache 1000;

Sequence created.


SQL>

SQL> insert /*+ APPEND */ into T
  2  select
  3    seq.nextval,
  4    trunc(dbms_random.value(1,50)),
  5    trunc(dbms_random.value(1,1000)),
  6    trunc(dbms_random.value(1,100000)),
  7    'x',
  8    dbms_random.value(1,1000)
  9  from
 10  ( select 1 from dual connect by level <= 1000 ),
 11  ( select 1 from dual connect by level <= 1000 )
 12  /

1000000 rows created.


SQL>

SQL> commit;

Commit complete.


SQL>

SQL> exec dbms_stats.gather_table_stats('','T');

PL/SQL procedure successfully completed.


SQL>

SQL> alter table T add primary key ( pk );

Table altered.


SQL> create index IX1 on T ( dist50 );

Index created.


SQL> create index IX2 on T ( dist1000 );

Index created.


SQL> create index IX3 on T ( dist100000 );

Index created.


SQL>

SQL> alter table T add constraint FK foreign key ( dist50 )
  2  references PAR ( p );

Table altered.

So at this point, we’ve got our table pre-populated with 1 million rows, with some columns of varying levels of cardinality, each indexed to support adhoc query.
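
As a quick sanity check (a small aside I've added, not part of the original script), you can list the indexes that now sit on T:

SQL> select index_name, column_name
  2    from user_ind_columns
  3   where table_name = 'T'
  4   order by index_name;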

Now we’ll create our "application" that sits on top of this table.  We have a routine for logging a transaction (this is the "OLTP" part of our application)

SQL> create or replace
  2  procedure TRANS_LOGGER is
  3  begin
  4  insert into T values (
  5    seq.nextval,
  6    trunc(dbms_random.value(1,50)),
  7    trunc(dbms_random.value(1,1000)),
  8    trunc(dbms_random.value(1,100000)),
  9    'x',
 10    dbms_random.value(1,1000)
 11  );
 12  commit work write wait immediate;
 13  end;
 14  /

Procedure created.

We’re using the extended commit syntax because, when we run this routine in a PL/SQL loop, we don’t want to ‘cheat’ by using the PL/SQL optimization that avoids waiting for commits to complete. Now we’ll add the other half of our application, the support for adhoc queries on those non-primary key columns.

SQL> create or replace
  2  procedure adhoc_query is
  3    x int;
  4    res number;
  5  begin
  6    x := trunc(dbms_random.value(1,50));
  7    select sum(numeric_stuff) into res
  8    from t where dist50 = x;
  9
 10    x := trunc(dbms_random.value(1,1000));
 11    select sum(numeric_stuff) into res
 12    from t where dist1000 = x;
 13
 14    x := trunc(dbms_random.value(1,100000));
 15    select  sum(numeric_stuff) into res
 16    from t where dist100000 = x;
 17
 18  end;
 19  /

Procedure created.

So there we have it. Our "application" is ready to go :-) Now we need to make it "popular", that is, have lots of people use it. To do this, I’ll create a routine which simulates a moderately busy application server thread. We’ll be firing off lots of these later to crank up the load.

SQL> create or replace 
  2  procedure app_server_thread is 
  3    think_time number := 0.05; 
  4    iterations int := 600 * ( 1 / think_time ); 
  5  begin 
  6    for i in 1 .. iterations loop 
  7      dbms_application_info.set_client_info(i||' of '||iterations); 
  8      trans_logger; 
  9      dbms_lock.sleep(think_time); 
  10    end loop; 
  11  end ; 
  12  / 

Procedure created. 

So the "app_server_thread" procedure, will iterate a number of times, calling TRANS_LOGGER with a little bit of "think time", and will roughly run for 10 minutes (the 600 seconds in the iterations expression above).

We’ll also have another procedure doing a similar thing, but for adhoc query. It will have a little longer think time, reflecting that our app is mainly transaction focused with intermittent query.

SQL> create or replace
  2  procedure app_server_thread_query is
  3    think_time number := 0.25;
  4    iterations int := 600 * ( 1 / think_time );
  5  begin
  6   for i in 1 .. iterations loop
  7     dbms_application_info.set_client_info(i||' of '||iterations);
  8     adhoc_query;
  9     dbms_lock.sleep(think_time);
 10   end loop;
 11  end ;
 12  /

Procedure created.

We’re pretty much ready to go now.  I’m running this on a Linux VM which has 12 “virtual” cores and 128G of RAM, hooked up to a Netapp filer for storage.  Using some shell script, we can fire off concurrent sessions of our “app server” routines.  I’m taking an AWR snapshot before and after so we can poke around in the results.

#!/bin/ksh

awr()
{
print "
connect / as sysdba
exec dbms_workload_repository.create_snapshot
exit" | sqlplus /nolog 1>awr.out 2>&1
}

TxnThread()
{
print "
connect xxx/yyy
set timing on
exec app_server_thread
exit" | sqlplus /nolog 1>txnthread.out.$1 2>&1
}

QueryThread()
{
print "
connect xxx/yyy
set timing on
exec app_server_thread_query
exit" | sqlplus /nolog 1>querythread.out.$1 2>&1
}


echo AWR
awr

echo TxnThreads
for i in 0 1 2 3 4 5 6 7 8 9
do
  TxnThread $i &
done

echo QueryThreads
for i in 1 2 3 4 5
do
  QueryThread $i &
done


wait

echo AWR
awr

echo Done

and wait 10 minutes for it to finish.

The simplest analysis is how long it took – we know that a good chunk of the time is sleep time, but we’re just going to compare this later with an in-memory test.

# grep Elapsed *out*
querythread.out.1:Elapsed: 00:12:09.42
querythread.out.2:Elapsed: 00:12:08.45
querythread.out.3:Elapsed: 00:12:08.20
querythread.out.4:Elapsed: 00:12:09.04
querythread.out.5:Elapsed: 00:12:08.74
txnthread.out.0:Elapsed: 00:10:20.91
txnthread.out.1:Elapsed: 00:10:20.92
txnthread.out.2:Elapsed: 00:10:21.01
txnthread.out.3:Elapsed: 00:10:21.11
txnthread.out.4:Elapsed: 00:10:20.90
txnthread.out.5:Elapsed: 00:10:21.00
txnthread.out.6:Elapsed: 00:10:21.06
txnthread.out.7:Elapsed: 00:10:21.10
txnthread.out.8:Elapsed: 00:10:20.86
txnthread.out.9:Elapsed: 00:10:20.95

So about 130 seconds of work (on top of the 10 minutes of think time) for the query threads, and about 21 seconds for the transactional threads. Now I’ve re-run the exact same setup script above, and added a few more steps:

SQL> alter table T inmemory priority critical;

Table altered.

SQL> alter index ix1 unusable;

Index altered.

SQL> alter index ix2 unusable;

Index altered.

SQL> alter index ix3 unusable;

Index altered.

SQL> select segment_name, bytes from v$im_segments;

SEGMENT_NAME                                  BYTES
---------------------------------------- ----------
T                                          67108864
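
As an aside (a hedged check I've added, not part of the original test), with the secondary indexes now unusable you would expect the adhoc queries to fall back to an in-memory scan of T; the execution plan is a quick way to confirm that before running the benchmark:

SQL> explain plan for
  2  select sum(numeric_stuff) from T where dist1000 = 42;

SQL> select * from table(dbms_xplan.display);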

So now our table is loaded into the in-memory store, and the non-primary key indexes have been tossed out. Now we re-run our benchmark, and see what pops out

# grep Elapsed *out*
querythread.out.1:Elapsed: 00:10:53.10
querythread.out.2:Elapsed: 00:10:53.10
querythread.out.3:Elapsed: 00:10:53.10
querythread.out.4:Elapsed: 00:10:53.07
querythread.out.5:Elapsed: 00:10:53.09
txnthread.out.0:Elapsed: 00:10:20.55
txnthread.out.1:Elapsed: 00:10:21.02
txnthread.out.2:Elapsed: 00:10:20.82
txnthread.out.3:Elapsed: 00:10:20.77
txnthread.out.4:Elapsed: 00:10:20.92
txnthread.out.5:Elapsed: 00:10:20.82
txnthread.out.6:Elapsed: 00:10:20.86
txnthread.out.7:Elapsed: 00:10:20.91
txnthread.out.8:Elapsed: 00:10:20.75
txnthread.out.9:Elapsed: 00:10:20.75

Well, that’s certainly showing some promise. Our transaction times don’t seem to have been adversely affected, and our query performance is better. We can also look at the AWR reports and see how In-Memory has influenced things:

Without in-memory

[AWR report screenshot]

With in-memory

[AWR report screenshot]

One of the positive outcomes is that redo generation shrinks; fewer indexes to update means less redo.  If you’ve got crappy storage and you’re suffering with redo write issues, then perhaps In-Memory is an avenue worth exploring (although you’d want to carefully consider license costs versus storage upgrade costs!)
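
If you want to quantify the redo saving for your own runs without the AWR screenshots, one simple (hedged) approach is to capture the system-wide redo statistic before and after each benchmark and compare the two values:

SQL> select name, value from v$sysstat where name = 'redo size';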

Taking a look at the wait events is also interesting

Without in-memory

[AWR wait events screenshot]

With in-memory

[AWR wait events screenshot]

With in-memory enabled, buffer busy waits have dropped…but they have been "replaced" with a new event "IM buffer busy".  Always remember – Memory is memory – if multiple people want to manipulate it, then some sort of serialisation is going to occur.  Similarly, note the appearance of the "latch free" event once in-memory comes into play.  But (in this simple demo), the enabling of in-memory has not hindered the transactional characteristics of the app, and looks to have yielded some redo and query benefits.

This is by no means a definitive “yes, you are fine to drop those indexes” and similarly, not a definitive “you must keep those indexes”.  Your own application will have its own particular idiosyncrasies.  But the results are promising – if you’ve got the license dollars for In-Memory, you may as well look at exploiting that expenditure in every way possible.