
Oakies Blog Aggregator

Beauty of command line - OEM 12c

Why should all software have a command line and an automation plugin? The answer is simple: if you have to repeat the same operation for a number of different targets, scripts can save you precious time.

I really enjoy the fact that Oracle added a command line to the Oracle Enterprise Manager product line, and now you can script a lot of boring tasks, like adding a new administrator to the list of users who can access Named Credentials.

To add a new admin (przepiorom) it's enough to run the following script:
 

add_privs.sh przepiorom

This is the first draft of the script (no error handling, but it does its job):

#!/bin/bash

# First argument: the administrator to be granted access to all named credentials
NEW_ADMIN=$1

TPID=$$

# Dump the credential names (skipping the header line) into a temp file
emcli list_named_credentials | awk '{ print $1; }' | grep -v Credential > /tmp/priv_$TPID

# Grant full access on every named credential to the new administrator
while read LINE ; do
  echo $LINE
  emcli grant_privs -name="${NEW_ADMIN}" -privilege="FULL_CREDENTIAL;CRED_NAME=${LINE}:CRED_OWNER=sysman"
done < /tmp/priv_$TPID

rm /tmp/priv_$TPID



The next example is another script, this one refreshing the components of a WebLogic domain.
When a new version of an application is deployed, the previous one is still registered as a target and you will see it as down in your OEM.



There is a domain refresh command in the OEM menu, but if you have many systems, clicking through all of them is not what you want. Using the command line and a configuration file you can be done in one line:

emcli login -username=sysman -password=xxxxxxx -force
emcli refresh_wls -input_file=domain_refresh_file:/home/oracle/bin/domain_refresh_file.csv -debug


The content of domain_refresh_file.csv looks like this:

/xxx_soa_mot_domain_soa/soa,R
/xxx_soa_mot_domain_soa/soa,E


There is one line per target, split into two parts.

The first part of the line is the target name and domain name, e.g. /xxx_soa_mot_domain_soa/soa. The second part is the operation:
R - remove targets which no longer exist in the domain
E - enable refresh of the domain (i.e. add new monitoring targets)
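
If you have many domains, the refresh file itself can be generated with a short script. A minimal sketch, assuming the list of domain target names lives in a plain text file (domains.txt is a made-up name here, not something OEM produces):

#!/bin/bash
# Sketch only: domains.txt is an assumed input file containing one target name
# per line, e.g. /xxx_soa_mot_domain_soa/soa

CSV=/home/oracle/bin/domain_refresh_file.csv
> $CSV

while read DOMAIN ; do
  echo "${DOMAIN},R" >> $CSV    # remove targets that no longer exist in the domain
  echo "${DOMAIN},E" >> $CSV    # enable refresh of the domain (add new targets)
done < domains.txt

emcli login -username=sysman -password=xxxxxxx -force
emcli refresh_wls -input_file=domain_refresh_file:$CSV -debug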


regards,
Marcin

Oracle Database 12c Release 12.1.0.2 – My First Observations. Licensed Features Usage Concerns – Part II.

Preface

In this post you'll see that I provide a scenario of accidental paid-feature "use."  The key elements of the scenario are: 1) I enabled the feature (by "accident") but 2) I didn't actually use the feature because I neither created nor altered any tables.

In Part I of this series I aimed to bring to people's attention what I see as a significant variation from the norm when it comes to Oracle licensed-option usage triggers and how to prevent them from being triggered. Oracle Database Enterprise Edition supports several separately licensed options such as Real Application Clusters, Partitioning, and so on.  A feature like Real Application Clusters is very expensive, but if "accidental usage" of this feature is a worry on an administrator's mind there is a simple remedy: unlink it. If the bits aren't in the executable you're safe. Is that a convoluted procedure? No. An administrator simply executes make -f ins_rdbms.mk rac_off and then relinks the Oracle executable. Done.
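
For readers who have never done it, a minimal sketch of that procedure (run as the Oracle software owner on Linux/UNIX, with all instances on the home shut down) looks roughly like this:

# Sketch of the usual "unlink RAC" procedure
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk rac_off     # switch the RAC option off
make -f ins_rdbms.mk ioracle     # relink and install the new oracle binary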

What about other separately licensed options like Partitioning?  As I learned from Paul Bullen, one can use the Oracle-supplied chopt command to remove any chance of using Partitioning if, in fact, one does not want to use Partitioning. I thought chopt might be the solution to the issue of possible, accidental usage of the In-Memory Column Store feature/option. However, I found that chopt, as of this point, does not offer the ability to neutralize the feature, as per the following screenshot.

[screenshot: img5]
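
For comparison, switching Partitioning off with chopt really is that simple (a sketch; shut down any databases running from the home first). In-Memory, as the screenshot shows, simply isn't in chopt's vocabulary in this release:

# Sketch: disabling the Partitioning option with the supplied chopt tool
chopt disable partitioning

# ...and re-enabling it later if required
chopt enable partitioning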

Trivial Pursuit of the Ignoramus or Mountainous Mole Hill?

There is, as yet, no way I know of to prevent accidental use of the In-Memory Column Store feature/option. Am I just making a mountain out of a mole hill? I'll let you be the judge. And if you side with folks who do feel this is a mountainous mole hill you'd be in really good company.

Lest folks think that we Oaktable Network members are a blind, mutual admiration society, allow me to share the rather sizzling feedback I got for raising awareness of this aspect of Oracle Database 12c:

[screenshot: oaktable-email-calls-bs]

Geez!

No, I didn't just want to dismiss this feedback. Instead I pushed the belt-sander off my face and read the words a couple of times. The author of this email asserted I'm conveying misinformation (aka "BS") and to fortify that position it was pointed out that one must:

  1. Set a database (instance initialization) parameter.
  2. Bounce the instance.
  3. Alter any object to use the feature. I’ll interpret that as a DDL action (e.g., ALTER TABLE, CREATE TABLE).

Even before I read this email I knew these assertions were false. We all make mistakes–this I know!  I should point out that unlike every release of Oracle from 5.1.17 to 11gR2, I was not invited to participate in the Beta for this feature. I think a lot of Oaktable Network members were in the program–perhaps even the author of the above email snippet–but I don't know that for certain. Had I encountered this during a Beta test I would have raised it to the Beta manager as an issue and maybe, just maybe, the feature behavior might have changed before first customer ship. Why am I blabbering on about the Beta program? Because, evidently, even Oaktable Network members with pre-release experience with this feature do not know what I'm about to show in the remainder of this post.

What Is An Accident?

Better yet, what is an accident and how full of “BS” must one be to fall prey? Maybe the remainder of the post will answer that rhetorical question. Whether or not  it does, in fact, answer the question I’ll be done with this blog series and move on to the exciting work of performance characterization of this new, incredibly important feature.

Anatomy of a “Stupid Accident.”

Consider a scenario. Let’s say a DBA likes to use the CREATE DATABASE statement to create a database. Imagine that!  Let’s pretend for a moment that DBAs can be very busy and operate in chaotic conditions. In the fog of this chaos, a DBA could, conceivably, pull the wrong database instance initialization file (e.g., init.ora or SPFILE) and use it when creating a database. Let’s pretend for a moment I was that busy, overworked DBA and I’ll show you what happens in the following:

  1. I executed sqlplus from the bash command prompt.
  2. I directed sqlplus to execute a SQL script called cr_db.sql. Many will recognize this as the simple little create script I supply with SLOB.
  3. The cr_db.sql script uses a local initialization parameter file called create.ora
  4. sqlplus finished creating the database. NOTE: this procedure does not create even a single user table.
  5. After the database was created I connected to the instance and forced the feature usage tracking views to be updated (thanks to Morgan’s Library for that know-how as well…remember, I’m a database platform engineer not a DBA so I learn all the time in that space).
  6. I executed a SQL script to report feature usage of only those features that match a predicate such as 'In-%' (a sketch of steps 5 and 6 appears below).
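
A minimal sketch of those last two steps — assuming the Morgan's Library technique of calling DBMS_FEATURE_USAGE_INTERNAL to force the sampling, and a simple query against DBA_FEATURE_USAGE_STATISTICS for the report (the actual features.sql is not reproduced here) — might look like this:

sqlplus / as sysdba <<'EOF'
-- Step 5 (sketch): force the feature usage views to refresh now instead of
-- waiting for the scheduled MMON sample
exec dbms_feature_usage_internal.exec_db_usage_sampling(sysdate);

-- Step 6 (sketch): report any In-Memory related feature usage
column name format a40
select name, detected_usages, currently_used, last_usage_date
from   dba_feature_usage_statistics
where  name like 'In-%';
EOF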

 

[screenshot: img1]

This screenshot shows that the list of three asserted must-happen steps (provided to me by a fellow Oaktable Network member) was not, in fact, the required recipe of doom.  The output of the features.sql script proves that I didn't need to create even a single user table to trigger the feature.

The following screen shot shows what the cr_db.sql script does:

[screenshot: img2]

The following screenshot shows the scripts I used to update the feature usage tracking views and to report against same:

[screenshot: img4]

The “Solution” To The “Puzzle”

Stepping on a landmine doesn't just happen. You have to sort of be on your feet and walking around for that to happen. In the same vein, triggering usage of the separately licensed Oracle Database 12c Release 12.1.0.2 In-Memory Column Store feature/option required me to be "on my feet and walking around" the landmine–as it were. Did I have to jump through hoops and be a raging, bumbling idiot to accidentally trigger usage of this feature? No. Nor, indeed, did I issue a single CREATE TABLE or ALTER TABLE DDL statement. What was my transgression? I simply grabbed the wrong database initialization parameter file from my repository–in the age-old I'm-only-human sort of way these things can happen.

To err to such a degree would certainly not be human, would it?

The following screenshot shows the parameter file I used to prove:

  1. You do not need to alter parameters and bounce an instance to trigger this feature usage in spite of BS-asserting feedback from experts.
  2. You don’t even have to create a single application table to trigger this feature usage.

[screenshot: img3]
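
The parameter file itself is only shown as a screenshot, but as a rough illustration — assuming the relevant line is a non-zero INMEMORY_SIZE, the parameter that enables the In-Memory column store — it needs nothing more exotic than something like this:

# Hypothetical create.ora sketch; every value here is illustrative.
db_name=SLOB
control_files='/u01/app/oracle/oradata/SLOB/control01.ctl'
sga_target=4g
pga_aggregate_target=1g
inmemory_size=1g      # presumably the one line that registers the "usage"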

Summary

This blog thread has made me feel a little like David Litchfield must surely have felt for challenging the Oracle9i-era claims of how Oracle Database was impenetrable by database security breaches. We all know how erroneous those claims were. Unbreakable, can't break it, can't break in?

Folks, I know we all have our different reasons to be fans of Oracle technology–and, indeed, I am a fan. However, I'm not convinced that unconditional love of a supposed omnipotent and omniscient god-like idol is all that healthy for the IT ecosystem. So, for that reason alone I have presented these findings. I hope it makes at least a couple of DBAs aware of how this licensed feature differs from other high-dollar features like Real Application Clusters in exactly what it takes to "use" the feature–and, moreover, how to prevent stepping on a landmine, as it were.

 

…and now, I’m done with this series.

 

 

Filed under: oracle

Finding the blocking SQL in a lock wait

One of my pet peeves on Oracle is the inability to find out what SQL took out a lock that another user is waiting on. It's easy to find the waiting user and their SQL with v$session by looking at v$session.event where the event is an "enqueue" (v8 and v9) or "enq: TX – row lock contention", and then looking up their SQL via v$session.sql_hash_value, which joins to v$sql.hash_value for the v$sql.sql_text.

So far so good and easy.
The second step, finding the blocker, is really easy starting in 10g because Oracle has a new field, v$session.blocking_session, which can be joined back to v$session.sid to find information on that user.
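For example, a quick sketch along these lines lines up each waiter with its blocker via a self-join on v$session (the blocker's sql_id, of course, only shows what it is running right now):

sqlplus -s / as sysdba <<'EOF'
-- Sketch: each waiting session alongside the session blocking it (10g and up)
select w.sid      waiting_sid,
       w.event    wait_event,
       w.sql_id   waiting_sql_id,
       b.sid      blocking_sid,
       b.username blocking_user,
       b.sql_id   blocker_current_sql_id  -- what the blocker runs NOW, not the lock-taking SQL
from   v$session w, v$session b
where  w.blocking_session = b.sid
and    w.event like 'enq: TX%';
EOF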
The rub is that there is no way to find the SQL text that the blocking session ran that took out the original blocking lock.
For the 2-day course I teach on Active Session History (ASH) and Oracle wait events, I wanted to show students how to actually get the blocking SQL text if they really had to.
I went as far as looking at LogMiner to try and get the blocking SQL text, and this works sometimes and sometimes it doesn't. At that point I gave up, knowing the next step was dumping the redo logs, which was more research than I felt like doing at the time. Luckily someone has picked up the torch – Doug Burns!
On the Oaktable email list I shared my research with Doug and Doug took it even farther and posted it on his blog:
  • Diagnosing Locking Problems using ASH – Part 1 – try OEM 10g on Lock Problem (http://oracledoug.com/serendipity/index.php?/archives/1477-Diagnosing-Locking-Problems-using-ASH-Part-1.html)
  • Diagnosing Locking Problems using ASH – Part 2 – look at raw ASH data, missing blocker SQL (http://oracledoug.com/serendipity/index.php?/archives/1478-Diagnosing-Locking-Problems-using-ASH-Part-2.html)
  • Diagnosing Locking Problems using ASH – Part 3 – look at raw ASH data, find the blocker SQL (http://oracledoug.com/serendipity/index.php?/archives/1480-Diagnosing-Locking-Problems-using-ASH-Part-3.html)
  • Diagnosing Locking Problems using ASH – Part 4 – misleading data from ASH (http://oracledoug.com/serendipity/index.php?/archives/1481-Diagnosing-Locking-Problems-using-ASH-Part-4.html)
  • Diagnosing Locking Problems using ASH – Part 5 – logminer undependable (http://oracledoug.com/serendipity/index.php?/archives/1484-Diagnosing-Locking-Problems-using-ASH-Part-5.html)
  • Diagnosing Locking Problems using ASH – Part 6 – overview of all the parts (table of contents) (http://oracledoug.com/serendipity/index.php?/archives/1487-Diagnosing-Locking-Problems-using-ASH-Part-6.html)
  • Diagnosing Locking Problems using ASH/LogMiner – Part 7 – Redo log dump (http://oracledoug.com/serendipity/index.php?/archives/1488-Diagnosing-Locking-Problems-using-ASHLogMiner-Part-7.html)
  • Diagnosing Locking Problems using ASH/LogMiner – Part 8 – Redo log dump lacks "select for update" (http://oracledoug.com/serendipity/index.php?/archives/1491-Diagnosing-Locking-Problems-using-ASHLogMiner-Part-8.html)
  • Diagnosing Locking Problems using ASH/LogMiner – Part 9 – Redo log dump – search for traces of "select for update" (http://www.blogger.com/Diagnosing%20Locking%20Problems%20using%20ASH/LogMiner%20%E2%80%93%20Part%209)
  • Final Post (thanks to JL for pointing out I'd left this off)
Long story short, the best way to try and see what changed (when there was a change and not a "select for update") to cause the lock is to use flashback information. For example, if our contention table was TEST_TAB and the field that we knew was modified was VAL1, then we could try to find what it was changed to:

Session 1

update test_tab set val1='aa' where id=1;

Session 2

update test_tab set val1='aaa' where id=1;

Blocking info from ASH where wait is enq: TX – row lock contention

select
      distinct
      lpad(ltrim(to_char(p2,'XXXXXX')),6,'0')||'00'||
      ltrim(to_char(mod(p3,256),'XX'))||
      ltrim(to_char(trunc(p3/256),'XX'))||'0000'
      block_xid,
      to_char(p2,'XXXXXXXX') p2hex,
      to_char(p3,'XXXXXXXX') p3hex,
      trunc(p2/65536) usn,
      mod(p2,65536) slot,
      p3 sqn, xid wait_xid
from v$active_session_history
where event like 'enq: T%'
and sample_time > sysdate - &v_minutes/(60*24)
;

BLOCK_XID             P2HEX     P3HEX        USN       SLOT        SQN  WAIT_XID
----------------  --------- --------- ---------- ---------- ---------- ----------------
0A00010072640000      A0001      6472         10          1      25714

Data from flashback, after session 1 commits (before the commit there is no data returned)

SELECT        VERSIONS_XID
       ,      VERSIONS_STARTTIME
       ,      VERSIONS_ENDTIME
       ,      VERSIONS_STARTSCN
       ,      VERSIONS_ENDSCN
       ,      VERSIONS_OPERATION
       ,      id
       ,      val1
       FROM   TEST_TAB 
              VERSIONS BETWEEN
              TIMESTAMP MINVALUE AND MAXVALUE
     where VERSIONS_XID=HEXTORAW('0A00010072640000')
      ORDER  BY  VERSIONS_STARTTIME
/
VERSIONS_XID	 VERSIONS_STARTTIME    ENDTIME STARTSCN  ENDSCN  V          VAL1
---------------- --------------------- ------- -------- -------  - --------------
0A00010072640000 15-OCT-13 06.46.30 PM         17042888	         U            aa


Now that’s not the blocking SQL but at least you can see what the value of the field was that the blocker changed it to, so you can guess to some degree what the actual SQL was. Not great, but better than nothing.

Oracle’s In-Memory Database: The True Cost Of Licensing

At last, the long-awaited release of Oracle Database 12.1.0.2 has arrived, including the highly anticipated In-Memory Database Option. I have had the pleasure of being involved in the early beta stages of this feature and I have to say: I'm impressed. This is truly game changing technology, and has been extremely well designed and implemented by […]


In-Memory Column Store and More

It was inevitable that I would have to have a play with the In-Memory Column Store.

It’s just a functional look. I have no idea about performance at the moment, although I should be starting a proof of concept in the next couple of weeks for one of our internal systems, so it will be interesting to see how it holds up. :)

Another thing that has come out of this release is the Full Database Cache Mode. That’s nothing to do with the In-Memory Column Store, but I can definitely see us using this on some of our stuff!

Finally, I’ve done a quick write up of the Smart Flash Cache stuff for my OCP exam notes.

It’s been out since 11.2.0.2, so I’m not sure why it is considered worthy of inclusion in the 12c DBA exam.

There have been a lot of changes to the multitenant option, so I’m probably going to spend some time amending/rewriting those articles to take on board the new stuff. From what I can see so far, the stuff they’ve added has certainly plugged a lot of “holes” in that option. Looks good so far!

Cheers

Tim…



12.1.0.2 Released With Cool Indexing Features (Short Memory)

Oracle Database 12.1.0.2 has finally been released and it has a number of really exciting goodies from an indexing perspective, which include: the Database In-Memory option, which enables specific portions of the database to be kept in dual format, in both the existing row-based format and additionally in an efficient memory-only columnar format. This in […]

AWR Warehouse, Status

So the AWR Warehouse patches are out, but documentation has not officially caught up to them yet, so we appreciate your patience.  I thought I would post about what I went over in my webinar last week, when I had the wonderful opportunity to speak to the ODTUG members on this feature that everyone is so anxious to get their hands on.

Let’s start with some of the top questions:

1. Can I just load the AWR data into the OMR, (Enterprise Manager Repository) database?

A.  No, it is highly recommended that you do not do this – use a separate database and server to house the AWR Warehouse.

2. What licensing requirements are there?

A.  The AWR Warehouse, (AWRW, my new acronym and hashtag, #AWRW) requires the Diagnostic Pack license and with this license, a limited use EE license is included for the AWR Warehouse.  This is subject to licensing changes in the future, but at this time, this is a great opportunity considering all the great features that will be included in the AWR Warehouse to house, partition, access and report on via EM12c, etc.

3.  Can I query the AWR Warehouse directly?

A.  It is another Oracle database, so yes, of course you can!

Behind the Scenes

The AWR Warehouse’s data is sourced from target databases in EM12c, providing the ability to retain AWR data indefinitely or for any period of time you choose.  For retail companies or those that do heavy once per year processing, this is gold.  The idea that you can do comparisons on performance a year ago vs. today is incredibly valuable.

This data is loaded via an ETL process using an agent to agent direct push of the data, initiated by an Enterprise Manager job.  The actual export on the source database and import on the AWR Warehouse are performed by a DBMS_Scheduler job local to those servers.

[diagram: awrw_etl]

The actual interval on the source database and AWR Warehouse depends on whether you've just added the database to the AWR Warehouse, (a back load of data requires "catch up") or whether the AWRW ETL load has been disabled for a period of time.  There is a built-in "throttle" to ensure that no more than 500 snapshots are loaded at any given time, and at intervals that cause very little to no network traffic in the environment.  During a catch-up that required a full 500 snapshots to load on a VM test environment, I was thrilled to see it took a maximum total execution time of less than 12 minutes and moved 2GB of data.  The network latency was nominal, too.

For the next sections, you will notice the naming convention in jobs and objects of "CAW", either at the beginning or in the middle of the name.  CAW stands for Consolidated AWR Warehouse, and you can use %CAW% as a filter to locate objects in any AWRW-related search, including on source databases (targets).
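
For instance, a quick way to see everything the feature has added to a database is a query along these lines (a sketch, run as a suitably privileged user):

sqlplus -s / as sysdba <<'EOF'
-- Sketch: list the AWR Warehouse related objects in the DBSNMP schema
column object_name format a35
select object_type, object_name
from   dba_objects
where  owner = 'DBSNMP'
and    object_name like '%CAW%'
order  by object_type, object_name;
EOF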

Source ETL Job

This is the job on the source databases (targets) that uses Data Pump to extract the AWR data for the given snapshot(s) into an OS directory location, from which it is "pushed" agent to agent onto an OS directory location on the AWR Warehouse.

DBMS Scheduler Job Name: MGMT_CAW_EXTRACT

Exec Call: begin dbsnmp.mgmt_caw_extract.run_extract; end;

How Often: 3-hour intervals if "playing catch up", otherwise a 24-hour interval.

AWR Warehouse Job

This is the job that loads the data from source targets, (databases) to the AWR Warehouse.

DBMS Scheduler Job Name: MGMT_CAW_LOAD

Exec Call: begin dbsnmp.mgmt_caw_load.run_master; end;

How Often: 5 Minute Intervals

Biggest Resource Demand from the “run_master”:

begin dbms_swrf_internal.move_to_awr(schname => :1); end;
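
To check when these scheduler jobs last ran on either side, something along these lines will do (a sketch; the job names are the ones listed above, and only the one relevant to that side will exist):

sqlplus -s / as sysdba <<'EOF'
-- Sketch: status of the AWR Warehouse ETL scheduler jobs owned by DBSNMP
column job_name format a20
select job_name, state, last_start_date, next_run_date
from   dba_scheduler_jobs
where  owner = 'DBSNMP'
and    job_name in ('MGMT_CAW_EXTRACT', 'MGMT_CAW_LOAD');
EOF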

EM Job Service

The EM12c comes into play with the ETL job process by performing a direct agent to agent push to the AWR Warehouse via a job submitted to the EM Job Service.  You can view the job in the Job Activity in the EM12c console:

[screenshot: em_etl_job]

Under the Hood

The additions to the source database (target) and to the AWR Warehouse, made when a database is added to the AWR Warehouse or when the AWR Warehouse is created, are done through the DBSNMP schema.  The objects currently begin with the CAW_ (Consolidated AWR Warehouse) naming convention, so they are easy to locate in the DBSNMP schema.

AWR Warehouse Objects

The additions to the DBSNMP schema are used to support the ETL jobs and ease mapping from the Enterprise Manager to the AWR Warehouse for AWR and ASH reporting.  The AWR schema objects that already exist in the standard Oracle database are updated to be partitioned on ETL loads by DBID, Snapshot ID or a combination of both, depending on what the AWR Warehouse developers found important to assist in performance.

There are a number of objects that are added to the DBSNMP schema to support the AWRW.  Note the object types and counts below:

[screenshot: caw_object_count]

The table of particular interest to those of you with AWR queries you'd like to update to be AWRW-compliant is the CAW_DBID_MAPPING table:

[screenshot: dbsnmp_caw_dbid_mapping]

You will primarily be joining the DBID column of the AWR objects to CAW_DBID_MAPPING.NEW_DBID/OLD_DBID to update those AWR scripts.

An example of the changes required would be like the following:

from   dba_hist_sys_time_model stm, dba_hist_snapshot s, gv$parameter p, dbsnmp.caw_dbid_mapping m
             where stm.stat_name in ('DB CPU','background cpu time')
             and   LOWER(m.target_name) = '&dbname'
             and   s.dbid = m.new_dbid     and   s.snap_id = stm.snap_id
             and   s.dbid = stm.dbid       and   s.instance_number = stm.instance_number
             and   p.name = 'cpu_count'    and   p.inst_id = s.instance_number)

Notice that the simple change of adding the mapping table and the extra predicates to the WHERE clause has resolved the requirement to query just the data for the database in question by DBID.
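
Putting that fragment into a complete query — a sketch only; the CAW_DBID_MAPPING column names are the ones mentioned above, everything else is the standard AWR history views — DB CPU per snapshot for a single source database could be pulled like this:

sqlplus -s / as sysdba <<'EOF'
-- Sketch: DB CPU seconds per snapshot for one source database held in the
-- AWR Warehouse, located via DBSNMP.CAW_DBID_MAPPING instead of a hard-coded DBID
select s.snap_id,
       s.begin_interval_time,
       round(stm.value/1e6,1) db_cpu_seconds
from   dba_hist_sys_time_model stm,
       dba_hist_snapshot       s,
       dbsnmp.caw_dbid_mapping m
where  stm.stat_name        = 'DB CPU'
and    lower(m.target_name) = '&dbname'
and    s.dbid               = m.new_dbid
and    s.snap_id            = stm.snap_id
and    s.dbid               = stm.dbid
and    s.instance_number    = stm.instance_number
order  by s.snap_id;
EOF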

I've included some updated scripts to use as examples, which will hopefully give everyone a quick idea of how to work with the AWR Warehouse if you decide to jump headfirst into querying it directly.

Source Database Objects

There are only a couple of additions to the source databases when they become part of the AWR Warehouse.

[screenshot: source_target_count]

The objects are only used to manage the AWR extract jobs and track information about the tasks.

CAW_EXTRACT_PROPERTIES : Information on ETL job, dump location and intervals.

CAW_EXTRACT_METADATA : All data about extracts – times, failures, details.
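
If you're curious, a quick peek at these on a source database is as simple as the following (a sketch; I'm deliberately not assuming the column lists, so describe them first):

sqlplus -s / as sysdba <<'EOF'
-- Sketch: inspect the extract-tracking tables on a source database
desc dbsnmp.caw_extract_properties
desc dbsnmp.caw_extract_metadata

-- then dump whatever the extract jobs have recorded so far
select * from dbsnmp.caw_extract_properties;
select * from dbsnmp.caw_extract_metadata;
EOF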

So….

Do you feel educated?  Do you feel overwhelmed?  I hope this was a helpful walk through some of the processes, objects and information for AWR queries regarding the AWR Warehouse, and I'll continue to blog about this topic as much as I can!  This feature is brand new, and as impressed and excited as I am about it now, I can't wait for all there is to come!




Excel connect to Oracle – 64bit and 32bit issues

Wow, thanks to Process Monitor (http://technet.microsoft.com/en-us/sysinternals/bb896645) I was able to track down why I couldn't connect to Oracle from Excel.

I had wanted to try some of the examples Charles Hooper has posted on connecting to and monitoring Oracle, for example

http://hoopercharles.wordpress.com/2010/01/20/excel-scrolling-oracle-performance-charts/

I kept getting the error "Provider not found".
Now what kind of trace info is there for an error like this in Excel? None AFAIK. Time to start guessing.
I'm on Windows 7 64-bit. I have the 64-bit 11gR2 Oracle installed.  Excel shows up in task manager as "EXCEL.EXE *32". My first guess was, "oh, Excel must want the 32-bit libraries", so I got the 32-bit instant client from Oracle, unzipped it into a directory and put that directory first in the path. Still no dice.
A lot of Google searches turned up that I needed

oraoledb.dll

but this wasn't in any of the instant client zips that I downloaded from

http://www.oracle.com/technetwork/topics/winsoft-085727.html

Turns out it's in a download halfway down the page:

*Instant Client Package – ODAC: Includes ODP.NET, Oracle Services for MTS, Oracle Providers for ASP.NET, Oracle Provider for OLE DB, and OO4O with Oracle Instant Client

I downloaded this and put all the DLLs in the first directory in my path. Still no dice.

I tried "strace", but it gave no output. Rrrr.

Then I tried Process Monitor – bingo.

With Process Monitor, I filtered on processes containing "excel", ran the connect and got tons of output, but I knew it was a problem with libraries. I found "oraoledb10.dll" among the output. It was listed in an old Oracle 10 install directory. Oracle 10 32-bit had initially been installed; the install gave no warnings but bombed out late in the game, so I removed the 10g and installed Oracle 11gR2 64-bit. (Oracle 11gR2 32-bit won't even begin the install.)
So now I scoured the registry with regedit and found the old oraoledb10.dll here:

HKEY_CLASSES_ROOT\Wow6432Node\CLSID\{3F63C36E-51A3-11D2-BB7D-00C04FA30080}\InprocServer32

I changed this to the new location of oraoledb11.dll
and now it works.

Oracle Database 12.1.0.2.0 – Turning on the In-Memory Database option

It is indeed as simple as switching a knob to turn it on. To enable it you will have to set a reasonable amount of...
Read More