I think I’ve lived through all the ages of Enterprise Manager. I used the Java console version back in the days when admitting you used it got you excommunicated from the church of DBA. I lived through the difficult birth of the web-based Grid Control. I’ve been there since the start of Cloud Control. I’ll no doubt be there when it is renamed to Big Data Cloud Pixie Dust Manager (As A Service).
I was walking from the pool to work this morning, checking my emails on my phone and it struck me (not for the first time) that I’m pretty much a 24 hour DBA these days. I’m not paid to be on call, I’m just a 9-5 guy, but all my Cloud Control notifications come through to my phone and tablet. I know when backups have completed (or failed). I know when a Tnsping takes too long. I know when we have storage issues. I know all this because Cloud Control tells me.
Now you might look on this as a bad thing, but being the control freak I am, I prefer to get a message on a Sunday telling me something is broken, hop on the computer and fix it there and then, rather than coming in on Monday to a complete sh*t-storm of users complaining. I’m not paid to do it, but that’s the way I roll.
While walking down memory lane I was thinking about all the scripting I used to do to check all this stuff. Endless amounts of shell scripts to check services, backups and so on. I hardly do any of that these days. Cloud Control handles it all.
We are a pretty small Oracle shop, but I think life would be a whole lot more difficult without Cloud Control. I’ve mentioned this a number of times, but it’s worth saying again… If you have more than a handful of Oracle databases, you really should be using Cloud Control these days. It’s as simple as that.
Just in case you are wondering, this is how our infrastructure looks this morning…
Oracle 12.1.0.2 is out; after lots of announcements the product has finally been released. I had just extended my 12.1.0.1.3 cluster to 3 nodes and was about to apply the July PSU when I saw the news. So why not try an upgrade to the brand new thing?
What struck me at first was the list of new features… Oracle’s patching strategy has really changed over time. I remember the days when Oracle didn’t usually add features in point releases. Have a look at the new 12.1.0.2 features and it could easily qualify as 12c Release 2…
In summary the upgrade process is actually remarkably simple, and hasn’t changed much since earlier versions of the software. Here are the steps in chronological order.
I don’t know how often I have typed ./ruinInstaller instead of ./runInstaller, but here you go. This is the first wizard screen after the splash screen has disappeared.
Naturally I went for the upgrade of my cluster. Before launching the installer though I made sure that everything was in working order by means of cluvfy. On to the next screen:
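For reference, the pre-upgrade check looks roughly like this; the paths and destination version are from my lab environment and are assumptions, so adjust them for yours (run it as the GI owner from the existing home):

```shell
# Sketch of a cluvfy pre-upgrade check before extending to 12.1.0.2.
# Source and destination homes below are my lab's paths.
/u01/app/12.1.0.1/grid/bin/cluvfy stage -pre crsinst -upgrade -rolling \
    -src_crshome  /u01/app/12.1.0.1/grid \
    -dest_crshome /u01/app/12.1.0.2/grid \
    -dest_version 12.1.0.2.0 \
    -fixup -verbose
```

The -fixup flag generates runnable fixup scripts for anything it can correct itself, which saves a round trip through the installer’s prerequisite screen.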
I always install English only. Troubleshooting Oracle in a different language (especially one I don’t speak or understand) is really hard, so I avoid it in the first place.
Over to the screen that follows and – oops – my SYSDG disk group (containing OCR and voting files) is too small. Bugger. In the end I added 3 new 10GB LUNs and dropped the old ones, but it took me a couple of hours to do so. Worse: it wasn’t even needed, though it proved to be a good learning exercise. The requirement for that much free space is most likely caused by the management repository and related infrastructure.
Back on this screen everything is in order; the screenshot was taken just prior to moving to the next one. Note the button to skip the updates on unreachable nodes. I’m not sure I would want to do that though.
I haven’t got OEM agents on the servers (yet) so I’m skipping the registration for now. You can always do that later.
This screen is familiar; I am keeping my choices from the initial installation. By the way, Grid Infrastructure is owned by the oracle account despite the ASMDBA and ASMADMIN group assignments.
On the screen below you define where on the file system you want to install Grid Infrastructure. Remember that for clustered deployments the ORACLE_HOME cannot be in the path of the ORACLE_BASE. For this to work you have to jump to the command line and create the directory on all servers and grant ownership to the GI owner account (oracle in this case, could be grid as well).
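A minimal sketch of that command-line preparation, assuming the same paths and an oracle GI owner as in my lab (yours may use grid and different directories):

```shell
# On every cluster node, as root: create the new Grid home outside the
# ORACLE_BASE path and hand ownership to the GI software owner.
mkdir -p /u01/app/12.1.0.2/grid
chown -R oracle:oinstall /u01/app/12.1.0.2
chmod -R 775 /u01/app/12.1.0.2
```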
Since I like to be in control I don’t allow Oracle to run the root scripts for me. I didn’t in 12.1.0.1 either:
In that screen you notice the familiar checking of requirements.
In my case there were only a few new ones, shown here. This is a lab server so I don’t plan on using swap, but the check for the kernel parameter “panic_on_oops” is new. I also hadn’t set the reverse path filtering, which I corrected before continuing. Interestingly the installer points out that there is a change in the asm_diskstring and its implications.
One thing I haven’t recorded here (because I am using Oracle Linux 6.5 with UEK3) is the requirement for using a 2.6.39 kernel – that sounds like UEK2 to me. I wonder what that means for Red Hat customers. I needed the RHEL compatible kernel because Oracle hasn’t provided virtio SCSI in UEK 2 (but does in UEK3).
Another interesting case was that the kernel core pattern wasn’t identical on all nodes. It turned out that some nodes had the abrt package installed and the others didn’t. Once the package was installed on all nodes, the warning went away.
In case you are wondering where the summary is: unfortunately I didn’t take a screenshot of it. I went straight into the installation phase:
At the end of which you are prompted to run the upgrade scripts. Remember to run them in screen and pay attention to the order you run them in.
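A sketch of how I run them – node names are from my lab, and the point of screen is that a dropped SSH session doesn’t kill a half-finished upgrade. Wait for each node to complete before starting the next, and run the last node only when OUI tells you to:

```shell
# One node at a time, in the order OUI displays; inside screen so the
# script survives a lost connection.
for node in rac12node1 rac12node2 rac12node3; do
    ssh root@${node} screen -S rootupgrade /u01/app/12.1.0.2/grid/rootupgrade.sh
done
```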
The output from the last node is shown here:
[root@rac12node3 ~]# /u01/app/12.1.0.2/grid/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/12.1.0.2/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]:
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0.2/grid/crs/install/crsconfig_params
2014/07/26 16:15:51 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2014/07/26 16:19:58 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2014/07/26 16:20:02 CLSRSC-464: Starting retrieval of the cluster configuration data
2014/07/26 16:20:51 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2014/07/26 16:20:51 CLSRSC-363: User ignored prerequisites during installation

ASM configuration upgraded in local node successfully.

2014/07/26 16:21:16 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2014/07/26 16:22:51 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.

OLR initialization - successful
2014/07/26 16:26:53 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2014/07/26 16:34:34 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2014/07/26 16:35:55 CLSRSC-478: Setting Oracle Clusterware active version on the last node to be upgraded
2014/07/26 16:35:55 CLSRSC-482: Running command: '/u01/app/12.1.0.2/grid/bin/crsctl set crs activeversion'
Started to upgrade the Oracle Clusterware. This operation may take a few minutes.
Started to upgrade the CSS.
The CSS was successfully upgraded.
Started to upgrade Oracle ASM.
Started to upgrade the CRS.
The CRS was successfully upgraded.
Successfully upgraded the Oracle Clusterware.
Oracle Clusterware operating version was successfully set to 12.1.0.2.0
2014/07/26 16:38:51 CLSRSC-479: Successfully set Oracle Clusterware active version
2014/07/26 16:39:13 CLSRSC-476: Finishing upgrade of resource types
2014/07/26 16:39:26 CLSRSC-482: Running command: 'upgrade model -s 12.1.0.1.0 -d 12.1.0.2.0 -p last'
2014/07/26 16:39:26 CLSRSC-477: Successfully completed upgrade of resource types
2014/07/26 16:40:17 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Did you notice that TFA has been added? Trace File Analyzer is another of these cool things to play with; it first shipped with 11.2.0.4 and was available as a separate download for earlier releases.
Back to OUI to complete the upgrade. After which cluvfy performs a final check and I’m done. Prove it worked:
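Besides the sqlplus banner below, a quick way to prove the cluster itself is on the new version (paths again from my lab):

```shell
# Active version is cluster-wide; software version is per node.
/u01/app/12.1.0.2/grid/bin/crsctl query crs activeversion
/u01/app/12.1.0.2/grid/bin/crsctl query crs softwareversion
```

Both should report 12.1.0.2.0 once the last rootupgrade.sh has completed.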
[oracle@rac12node1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jul 26 17:13:02 2014

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
PL/SQL Release 12.1.0.2.0 - Production
CORE    12.1.0.2.0      Production
TNS for Linux: Version 12.1.0.2.0 - Production
NLSRTL Version 12.1.0.2.0 - Production

SQL>
In another post I’ll detail the upgrade for my databases. I am particularly interested in the unplug/plug way of migrating…
There is more and more happening in the world of visualization and visualizing Oracle performance specifically with v$active_session_history.
Of the people producing these visualizations, the one pushing the envelope the most is Marcin Przepiorowski. Marcin is responsible for writing S-ASH (Simulated ASH) versions 2.1, 2.2 and 2.3.
Here are some examples of what I have seen happening out there on the web with these visualizations, grouped by the visualization tool.
The first example is using Gephi. The coolest example of Gephi I’ve seen is Greg Rahn’s analysis of the voting for Oracle World mix sessions.
Here is Marcin’s example using Gephi with ASH data:
Click on the examples to go to the actual HTML and not just the image. On the actual page from Marcin you can double click on nodes to make them the center of the network.
TOP 5 SQL_ID, SESSIONS and PROGRAMS joined together, with additional joins to points not included in the top 5. For example:
TOP 5 SQL_IDs with the list of sessions and programs
TOP 5 SESSIONS with the list of sql_ids and programs
TOP 5 PROGRAMS with the list of sessions and sql_ids
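To give an idea of the raw data feeding such a graph, a top-5 SQL_ID query against ASH might look something like this. This is my own sketch, not Marcin’s actual script, and the usual Diagnostics Pack licensing caveat for v$active_session_history applies:

```sql
select *
  from (select sql_id, count(*) as samples
          from v$active_session_history
         where sample_time > sysdate - 1/24
           and sql_id is not null
         group by sql_id
         order by samples desc)
 where rownum <= 5;
```

The same shape of query, grouped by session_id or program instead, yields the other two node sets of the network.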
Frits Hoogland has a great blog entry on getting started with R, and then using R to analyze 10046 trace files (sort of ASH on steroids).
Here from Greg Rahn again is one of the cooler examples of R. In this case it’s the ASH data of a parallel query execution showing the activity of the different processes:
Here is an example basically reproducing the Top Activity screen with Highcharts, a SQL script on v$session and a CGI script to feed the data to the web page:
OEM 12c shows the new face of Enterprise Manager with the load maps
(quick apex example: http://ba6.us/node/132)
I did a quick update of my Oracle installation articles for Oracle Linux 7. The last time I ran through them was with the beta version of OL7, before the release of 12.1.0.2.
The installation process of 11.2.0.4 on the production release of Oracle Linux 7 hasn’t changed since the beta. The installation of 12.1.0.2 on Oracle Linux 7 is a lot neater than the 12.1.0.1 installation. It’s totally problem-free for a basic installation.
You can see the articles here.
There is a bold warning on the top of both articles reminding you that the database is not supported on Oracle Linux 7 yet! Please don’t do anything “real” with it until the support is official.
Note: I left the fix-it notes for the 12.1.0.1 installation at the bottom of the 12c article, but now that 12.1.0.2 is available from OTN there is really no need for anyone to be installing 12.1.0.1, other than for reference I guess.
So how do you turn the option off? As a privileged database user: just don’t set the INMEMORY_SIZE parameter to a non-zero value (the default is zero).
This blog post offers proof that you can trigger In-Memory Column Store feature usage with the default INMEMORY_* parameter settings. These parameters are documented as the approach to ensure In-Memory functionality is not used inadvertently–or at least they are documented as the “enabling” parameters.
During the development of this study, Oracle’s Product Manager in charge of the In-Memory feature has cited Bug #19308780 as it relates to my findings.
BLOG UPDATE 2014.07.29: Oracle’s Maria Colgan issued a tweet stating “What u found in you 3rd blog is a bug [...] Bug 19308780.” Click here for a screenshot of the tweet. Also, click here for a Wayback Machine (web.archive.org) copy of the tweet.
This is a post about enabling versus using the Oracle Database 12c Release 12.1.0.2 In-Memory Column Store feature, which is part of the separately licensed Database In-Memory Option of 12c. While reading this, please be mindful that in this situation all that really matters is which of your actions affect the internal tables that track feature usage.
There is a huge kerfuffle regarding the separately licensed In-Memory Column Store feature in Oracle Database 12c Release 12.1.0.2–specifically how the feature is enabled and what triggers usage of the feature.
I pointed out a) the fact that the feature is enabled by default and b) the feature is easily accidentally used. I did that in Part I and Part II in my series on the matter. In Part III I shared how the issue has led to industry journalists quoting–and then removing–said quotes. I’ve endured an ungodly amount of shameful backlash from friends on the Oaktable Network list as they asserted I was making a mole hill out of something that was a total lark (that was a euphemistic way of saying they all but accused me of misleading my readers). I even had friends suggesting this is a friendship-ending issue. Emotion and high-technology are watery-oil like in nature.
About the only thing that hasn’t happened is for anyone to apologize for being totally wrong in their blind-faith rooted feelings about this issue. What did he say? Please read on.
From the start I pointed out that the INMEMORY_QUERY feature is enabled by default–and that it is conceivable that someone could use it accidentally. The backlash from that was along the lines of how many parameters and what user actions are needed for that to be a reality. Maria Colgan–who is Oracle’s PM for the In-Memory Column Store feature–tweeted that I’m confusing people when announcing her blog post on the fact that In-Memory Column Store usage is controlled not by INMEMORY_QUERY but instead INMEMORY_SIZE. Allow me to add special emphasis to this point. In a blog post on oracle.com, Oracle’s PM for this Oracle database feature explicitly states that INMEMORY_SIZE must be changed from the default to use the feature.
If I were to show you everyone else was wrong and I was right, would you think less of me? Please, don’t let it make you feel less of them. We’re just people trying to wade through the confusion.
Here is the truth and I’ll prove it in a screen shot to follow:
In the following screenshot I’ll show that INMEMORY_QUERY is at the default setting of ENABLE and INMEMORY_SIZE is at the default setting of zero. I prove first there is no prior feature usage. I then issue a CREATE TABLE statement specifying INMEMORY. Remember, the feature-blocking INMEMORY_SIZE parameter is zero. If “they” are right I shouldn’t be able to trigger In-Memory Column Store feature usage, right? Observe–or better yet, try this in your own lab:
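If you want to reproduce this in your own lab, the sequence boils down to the following. The table name is mine, and bear in mind that DBA_FEATURE_USAGE_STATISTICS is refreshed periodically by the database rather than in real time, so usage may not appear immediately:

```sql
show parameter inmemory_query   -- default: ENABLE
show parameter inmemory_size    -- default: 0

create table im_demo (c1 number) inmemory;

select name, detected_usages
  from dba_feature_usage_statistics
 where name like 'In-Memory%';
```

On an instance at the default settings, the point of contention is whether that CREATE TABLE … INMEMORY registers as feature usage at all.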
So I proved my point which is any instance with the default initialization parameters can trigger feature usage. I also proved that the words in the following three screenshots are factually incorrect:
Screenshot of blog post on Oracle.com:
Screenshot of email to Oracle-L Email list:
I didn’t want to make a mole hill of this one. It’s just a bug. I don’t expect apologies. That would be too human–almost as human as being completely wrong while wrongly clinging to one’s wrongness because others are equally, well, wrong on the matter.
Filed under: oracle
It was my intention to only write 2 installments on my short series about Oracle Database 12c In-Memory Column Store feature usage. My hopes were quickly dashed when the following developments occurred:
1. A quote from an Oracle spokesman cited on informationweek.com was pulled because (I assume) it corroborated my assertion that the feature is enabled by default. It is, in fact, enabled by default.
The July 26, 2014 version of the Informationweek.com article quoted an Oracle spokesman as having said the following:
Yes, Oracle Database In-Memory is an option and it is on by default, as have been all new options since Oracle Database 11g
2. An email from an Oracle Product Manager appeared on the oracle-l email list and stated the following:
So, it is explicitly NOT TRUE that Database In-Memory is enabled by default – and it’s (IMHO) irresponsible (at best) to suggest otherwise
I stated in Part I that I think the In-Memory Column Store feature is a part of a hugely-important release. But, since the topic is so utterly confusing I have to make some points.
It turns out that neither of the Oracle folks I mention above are correct. Please allow me to explain. Yes, the Oracle spokesman spoke the truth originally to Informationweek.com as reported by Doug Henschen. The truth that was spoken is, yes indeed, the In-Memory Column Store feature/option is enabled by default. Now don’t be confused. There is a difference between enabled and usable and in-use.
In Part II of the series I showed an example of the things that need to be done to put the feature into use–and remember, you’re not charged for it until it is used. I believe that post made it quite clear that there is a difference between enabled and in-use. What does the Oracle documentation say about In-Memory Column Store feature/option default settings? It says it is enabled by default. Full stop. Citation: Top-level initialization parameter enabled by default. I’ve put a screenshot of that documentation here for education sake:
This citation of the documentation means the Oracle spokesman was correct. The feature is enabled by default.
The problem is with the mixing of the words enabled and “use” in the documentation.
Please consider the following screenshot of a session where the top-level INMEMORY_QUERY parameter is set to the default (ENABLE) and the INMEMORY_SIZE parameter is set to grant some RAM to the In-Memory Column Store feature. In the screenshot you’ll see that I didn’t trigger usage of the feature just by enabling it. I did, however, show that you don’t have to “use” the feature to trigger “usage”, so please visit Part II on that matter.
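For completeness, this is roughly what granting memory to the column store looks like. INMEMORY_SIZE is a static parameter with a 100MB minimum, so an instance restart is required before the new value takes effect:

```sql
alter system set inmemory_size = 512M scope=spfile;
-- restart the instance, then verify:
show parameter inmemory_size
```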
So here we sit with wars over words.
Oracle’s Maria Colgan just posted a helpful blog (or, practically a Documentation addendum) going over the initialization parameters needed to fully, really-truly enable the feature–or more correctly how to go beyond enabled to usable. I’ve shown that Oracle’s spokesman was correct in stating the feature is enabled by default (INMEMORY_QUERY enabled by default). Maria and others showed that you have to set 2 parameters to really, truly, gosh-darnit use the feature that is clearly ENABLE(d) by default. I showed you that enabling the feature doesn’t mean you use the feature (as per the DBA_FEATURE_USAGE_STATISTICS view). I also showed you in Part II how one can easily, accidentally use the feature. And using the feature is chargeable and that’s why I assert INMEMORY_QUERY should ship with the default value of DISABLE. It is less confusing and it maps well to prior art such as the handling of Real Application Clusters.
So how does one summarize all of this? I see it as quite simple. Oracle could have shipped Oracle Database 12c 12.1.0.2 with the sole, top-level enabling parameter disabled (e.g., INMEMORY_QUERY=DISABLE). Doing so would be crystal clear because it nearly maps to a trite sentence–the feature is DISABLE(d). Instead we have other involved parameters that are not top level adding to the confusion. And confusion indeed, since the Oracle documentation insinuates INMEMORY_SIZE is treated differently when Automatic Memory Management is in play:
And what is that prior art on the matter? Well, consider Oracle’s (presumably) most profitable separately-licensed feature of all time–Real Application Clusters. How does Oracle treat this desirable feature? It treats it with a crystal-clear top-level, nuance-free disabled state:
So, in summary, the In-Memory feature is not disabled by default. It happens to be that the capacity-sizing parameter INMEMORY_SIZE is set to zero so the feature is unusable. However, setting both INMEMORY_QUERY and INMEMORY_SIZE does not constitute usage of the feature.
Confused? I’m not.
I mentioned in a previous post that I would be revisiting some of my existing multitenant articles to include some of the features introduced in the 12.1.0.2 patchset. Here’s one of them.
Not only have Oracle fixed the bug in 12.1.0.1 that prevented remote cloning, but they’ve also added the ability to clone directly from non-CDB instances, giving you another option for migrating from non-CDBs to PDBs. Pretty darn cool if you ask me!
Some more stuff will be amended over the coming days. If the changes result in some major rewrites I’ll probably blog about them. If not, I’ll just slip them into the articles and make a reference to the fact the specific feature was introduced in 12.1.0.2…
So 12.1.0.2 is out with a number of interesting new features, of which the most noisily touted is the “in-memory columnar storage” feature. As ever the key to making best use of a feature is to have an intuitive grasp of what it gives you, and it’s often the case that a good analogy helps you reach that level of understanding; so here’s the first thought I had about the feature during one of the briefing days run by Maria Colgan.
“In-memory columnar storage gives you bitmap indexes on OLTP systems without the usual disastrous locking side effects.”
Obviously the analogy isn’t perfect … but I think it’s very close: for each column stored you use a compression technique to pack the values for a large number of rows into a very small space, and for each stored row you can derive the rowid by arithmetic. In highly concurrent OLTP systems there’s still room for some contention as the session journals are applied to the globally stored compressed columns (but then, private redo introduces some effects of that sort anyway); and the “indexes” have to be created dynamically as tables are initially accessed (but that’s a startup cost, its timing can be controlled, and it’s basically limited to a tablescan).
Whatever the technical hand-waving it introduces – thinking of the in-memory thing as enabling real-time bitmaps ought to help you think of ways to make good use of the feature.