Oakies Blog Aggregator

Analogy – 2

I suggested a little while ago that thinking about the new in-memory columnar store as a variation on the principle of bitmap indexes was quite a good idea. I’ve had a couple of emails since then asking me to expand on the idea because “it’s wrong” – I will follow that one up as soon as I can, but in the meantime here’s another angle for connecting old technology with new technology:

By default, the in-memory column store keeps all of a table's columns in memory. But it's quite likely that you've got some tables where a subset of the columns is frequently accessed while the other columns are rarely touched, and it might seem a waste of resources to keep all the columns in memory just for the few occasional queries. So the feature allows you to de-select columns with the "no inmemory({list of columns})" option – it's also possible to use different degrees of compression for different columns, of course, which adds another dimension to design and planning – but that's a thought for another day.
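To make that concrete, here's a minimal sketch of the syntax against a hypothetical orders table (table and column names invented for illustration):

-- keep the table in the in-memory column store, but exclude two
-- rarely-queried columns from it
alter table orders inmemory memcompress for query high
  no inmemory (order_notes, delivery_instructions);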

So where else do you see an example of being selective about where you put columns? Index Organized Tables (IOTs) – where you can choose to put the popular columns in the index (IOT_TOP) segment and the rest in the overflow segment, knowing that this can give you good performance for critical queries, but less desirable performance for the less important or less frequent queries. IOTs allow you to specify the (typically short) list of columns you want "in" – it might be quite nice if the same were true for the in-memory option; I can imagine cases where I would want to include a small set of columns and exclude a very large number of them (for reasons that bring me back to the bitmap index analogy).
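For comparison, here's a sketch of the IOT mechanism (names invented): every column up to and including the one named in the INCLUDING clause stays in the index segment, and the rest spill to the overflow segment.

create table order_lines (
  order_id   number,
  line_no    number,
  product_id number,
  qty        number,
  long_notes varchar2(4000),
  constraint pk_order_lines primary key (order_id, line_no)
)
organization index
including qty
overflow tablespace users;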

 

Bedford Thumbnails in Squarespace

Squarespace's Bedford template is wonderful in its bold use of thumbnail images to headline individual blog posts. I love that aspect of the template. The images add character and a splash of fun to my otherwise text-heavy posts. 

Oddly enough, the Bedford template does not show thumbnail images when you view the main page for a blog. Figure 1 shows the default view of my Database and Tech blog. All you see is boring text. But not to worry! Those images are one line of CSS away.


Figure 1. Default listing of blog posts in the Bedford template

The Bedford template does generate the HTML for display of thumbnail images. It's just that it's switched off. Because of that foresight on the template designer's part, you can switch on the display of thumbnails with the following single line of CSS added through the Custom CSS editor:

.excerpt-thumb {display: block !important;}

Figure 2 shows the underlying HTML structure. You can see for yourself by right-clicking in the area of a post title or an excerpt, and selecting Inspect Element from the fly-out menu. Navigate your page structure in the left pane. Find and expand the div named excerpt-thumb. Within that div will be an <a> tag that is a hyperlink, and inside that hyperlink is an <img> tag with a reference to the thumbnail image.
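In other words, the markup you're targeting looks roughly like this (simplified sketch; the exact attributes in your template will vary):

<div class="excerpt-thumb">
  <a href="/blog/your-post-url">
    <img src="your-thumbnail.jpg" alt="" />
  </a>
</div>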

Figure 2. The underlying HTML structure in the listing of blog post excerpts

The right-hand pane shows the excerpt-thumb div's display property set to none. Displaying the div as a block element causes the images to appear as you see in Figure 3. Simple and done!

Figure 3. Default location of thumbnail images once their display is enabled

You can add even more impact by making the images span the entire width of the content column like in the Marquee template. Do that by adding three more property declarations to give the following rule set:

.excerpt-thumb {
  display: block !important; 
  float: none !important;
  width: auto !important;
  margin-right: 0 !important;
}

Figures 4 through 6 step through the additional three declarations and their effects. Removing the float prevents the text from wrapping around the image (Figure 4). Setting the width to auto allows the image to take up the entire column width (Figure 5). Finally, and this one is easy to miss at first, there is a right margin that is specified in the template to prevent text from wrapping too close when the image is floated left, and that margin should be removed. Figure 6 shows the final result, with each blog post's thumbnail image stretching across the entire column. 

Figure 4. After float: none to remove the float

Figure 5. After width: auto to allow the image to expand

Figure 6. The final version after adding margin-right: 0

Well-chosen thumbnail images provide instant impact and help to grab reader attention and focus it toward your posts. Bedford does a nice job in presenting those images at the top of each post, but you might as well have them in your summary listings too. Now you have not one, but two ways of doing that. Use the one-line rule to have images appear at the left of your excerpts, and the longer rule set to have them span across the entire column width.


Just getting started with CSS? See my post on CSS and Squarespace: Getting Started for help on navigating the HTML structure of your pages and finding the correct elements to target. Also check out my book below. It teaches CSS from the ground up with a specific focus on Squarespace. It's the fastest way to get up to speed on CSS and begin to have fun customizing the look and feel of your own Squarespace site.



Learn CSS for Squarespace


Learn CSS for Squarespace is a short, seven-chapter introduction to working with CSS on the Squarespace platform. The book is delivered as a mobi file that is readable from the Kindle Fire HDX or similar devices. A complimentary sample chapter is available. Try before you buy.


How to ask a question: The optician edition

I’ve just returned from a rather awkward and unpleasant visit to the optician…

Let me start by saying this is the same optician I’ve used for the last four years. I don’t think we would ever be capable of being friends, but I don’t have to like someone to “work with them”. That’s what being professional is all about. It’s only once a year, so before now I’ve never felt the need to go elsewhere. That has probably changed now.

Issue 1:

  • Optician: Look at the black spots. Do they look blacker with or without the lens?
  • Me: Neither. They look the same.
  • Optician: That is impossible.
  • Me: With the lens the spots look bigger and more defined, but the "intensity of the blackness" is the same. Do you mean which look clearer, or which actually look "more black"?
  • Optician: Stop overcomplicating it. Do they look more black with or without the lens?
  • Me: Neither. Like I said, the "level of blackness" is the same, but the clarity is different. What are you asking for, the "blackness" or the clarity?
  • Optician: Can you just answer my question?
  • Me: I can’t answer the question unless I understand the question.

Issue 2:

  • Optician: Look at the two sets of stripes. Which look clearer?
  • Me: Where should I be looking? If I look at the vertical stripes, they look very clear and the horizontal stripes look blurred. If I look at the horizontal stripes, they look clear and the vertical stripes look blurred. If I try to look between the two, they look equally clear/blurred. What do you want me to do?
  • Optician: Please just answer my question.

Issue 3:

  • Optician: Look at the "+" symbol. Do the lines above and below line up in the centre of the "+"?
  • Me: The top one does, but the bottom one is kind-of jumping between being in line and being slightly out of line. Also, the bottom one sometimes goes very pale.
  • Optician: Please don't give extra information. Just answer my question. Do they line up?
  • Me: If the bottom line is moving between sometimes lining up and sometimes not, what answer should I give?
  • Optician: Do they line up or don’t they?
  • Me: Sometimes.

Issue 4:

  • Optician: Can you see the improvement by using this lens when reading 5pt font at this distance?
  • Me: Yes.
  • Optician: That means you should probably consider having your glasses adjusted to allow for that.
  • Me: OK, but I never read something that small at that distance, so does it really matter?
  • Optician: Well, I’m not able to test you for 10 hours straight at your normal resolution to see if it is giving you eye strain.
  • Me: I’m looking at a monitor pretty much from the time I wake up to the time I go to bed. If this were an issue, would I feel like I had eye strain?
  • Optician: I’m telling you it’s an issue.
  • Me: I understand that. I’m just trying to get a handle on whether my current reading habits are affected by this, or whether it’s only an issue if I want to do something I never do.
  • Optician: Well, you will probably just change the resolution of your screen to counter it.
  • Me: Well, I’ve not changed the resolution of my screen and I am not feeling any noticeable eye strain, so do you think it’s actually an issue?
  • Optician: I’ve just shown you it is an issue. You said the print looked clearer.
  • Me: Yes, but only when I do something I never do. My point is, is this affecting my “normal” life or are we trying to fix a problem that has never and probably never will manifest itself?

Maybe someone will read this and think I was being a complete jerk, but I was genuinely unable to understand some of the things I was asked to do. What’s more, when I asked for clarification it was not forthcoming.

On my way home I was thinking how similar this situation was to things that happen in the IT world. People are generally really bad at asking questions (see here) and very quick to complain when they don’t get the answer to the question they think they have asked…

Cheers

Tim…

PS. On a brighter note, swimming went well this morning. I’ve started to incorporate sprints into my sessions.


How to ask a question: The optician edition was first posted on August 2, 2014 at 11:53 am.

Visualizing AWR data using python

In an earlier post, I talked about how Tableau can be used to visualize data. In some cases, I find it useful to query the AWR base tables directly using Python and graph the results quickly with the matplotlib package. Since Python is preinstalled on almost all computers, I think this method will be useful for almost everyone. Of course, you may not have all the necessary packages installed on your computer; if not, you can install them (see install python packages). And of course, if you improve the script, please send it to me and I will share it in this blog entry.

The script is available as a zip file: plotdb.py

Usage:

Script usage is straightforward. Unzip the zip file and you will have a .py script in the current directory. Execute the script (after adjusting its permissions) using the format described below:

# To graph the events for the past 60 days, for inst_id=1, connecting to PROD, with username system. 
./plotdb.py -d PROD -u system -n 'latch free' -t e -i 1
# To graph the statistics for the past 60 days, for inst_id=2, connecting to PROD
./plotdb.py -d PROD -u system -n 'physical reads' -t s -i 2

A typical graph from the above script is:

physical_reads
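If you'd rather see the idea before downloading, the core of such a script is only a handful of lines. Here's a minimal sketch, assuming the cx_Oracle and matplotlib packages are installed (the connection details are placeholders, the actual plotdb.py handles options and events too, and querying the DBA_HIST views requires a Diagnostics Pack license):

#!/usr/bin/env python
# minimal sketch: plot one system statistic from AWR for the last 60 days
import cx_Oracle
import matplotlib.pyplot as plt

conn = cx_Oracle.connect("system", "password", "PROD")  # placeholder credentials
cur = conn.cursor()
cur.execute("""
    select s.begin_interval_time, st.value
      from dba_hist_snapshot s
      join dba_hist_sysstat  st
        on st.snap_id = s.snap_id
       and st.dbid = s.dbid
       and st.instance_number = s.instance_number
     where st.stat_name = :name
       and st.instance_number = :inst
       and s.begin_interval_time > sysdate - 60
     order by s.begin_interval_time""",
    name="physical reads", inst=1)

rows = cur.fetchall()
times = [r[0] for r in rows]
values = [r[1] for r in rows]
plt.plot(times, values)
plt.title("physical reads")
plt.show()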

Common Roles get copied upon plug-in with #Oracle Multitenant

What happens when you unplug a pluggable database that has local users who have been granted common roles? The common roles get copied upon plug-in of the PDB to the target container database!

Before Unplug of the PDB

The picture above shows the situation before the unplug command. It has been implemented with these commands:

 

SQL> connect / as sysdba
Connected.
SQL> create role c##role container=all;

Role created.

SQL> grant select any table to c##role container=all;

Grant succeeded.

SQL> connect sys/oracle_4U@pdb1 as sysdba
Connected.
SQL> grant c##role to app;

Grant succeeded.



SQL> grant create session to app;

Grant succeeded.

The local user app has now been granted the common role c##role. Let’s assume that the application depends on the privileges inside the common role. Now the pdb1 is unplugged and plugged in to cdb2:

SQL> shutdown immediate
Pluggable Database closed.
SQL> connect / as sysdba
Connected.
SQL> alter pluggable database pdb1 unplug into '/home/oracle/pdb1.xml';

Pluggable database altered.

SQL> drop pluggable database pdb1;

Pluggable database dropped.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
[oracle@EDE5R2P0 ~]$ . oraenv
ORACLE_SID = [cdb1] ? cdb2
The Oracle base for ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1 is /u01/app/oracle
[oracle@EDE5R2P0 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Tue Jul 29 12:52:19 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> create pluggable database pdb1 using '/home/oracle/pdb1.xml' nocopy;

Pluggable database created.

SQL> alter pluggable database pdb1 open;

Pluggable database altered.

SQL> connect app/app@pdb1
Connected.
SQL> select * from scott.dept;

    DEPTNO DNAME          LOC
---------- -------------- -------------
        10 ACCOUNTING     NEW YORK
        20 RESEARCH       DALLAS
        30 SALES          CHICAGO
        40 OPERATIONS     BOSTON

SQL> select * from session_privs;

PRIVILEGE
----------------------------------------
CREATE SESSION
SELECT ANY TABLE

SQL> connect / as sysdba
Connected.

SQL> select role,common from cdb_roles where role='C##ROLE';

ROLE
--------------------------------------------------------------------------------
COM
---
C##ROLE
YES

As seen above, the common role has been copied upon plug-in, as the picture illustrates:

After plug-in of the PDB

Not surprisingly, the local user app, together with the local privilege CREATE SESSION, was moved to the target container database. But it is not so obvious that the common role is also copied to the target CDB. This is something I found out during delivery of a recent Oracle University LVC about 12c New Features, thanks to a question from one attendee. My guess was that it would lead to an error upon unplug, but this test case proves it doesn't. I thought that behavior might be of interest to the Oracle Community. As always: Don't believe it, test it! :-)
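By the way, if you want to check in advance whether a PDB you are about to unplug has local grants of common roles, a simple query along these lines (run inside the PDB) should tell you:

-- run inside the PDB: local grantees holding common roles
select grantee, granted_role
  from dba_role_privs
 where granted_role in (select role from dba_roles where common = 'YES');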

Tagged: 12c New Features, Multitenant

MySQL Upgrades

I read Wim Coekaerts’ post about the MySQL 5.6.20-4 release this morning. I logged on to my server and ran the following command as root.

# yum update -y

That’ll be the upgrade done then… :)

If you are using MySQL on Linux you can use the MySQL Repository for your distribution, rather than using the bundled MySQL version, to make sure you stay up to date with the latest and greatest. As long as you stay within a point release (5.6, 5.7 etc.) of the latest version, upgrades should really be as simple as a “yum update”.
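Setting up the repository is a one-off step. As a rough sketch for an EL6 box (the exact release RPM name changes over time, so grab the current one from the MySQL Yum repository downloads page first):

# one-off: install the MySQL repository definition (RPM name will vary)
# yum localinstall mysql-community-release-el6-5.noarch.rpm

# from then on, staying current really is just an update away
# yum update -y mysql-community-server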

I’ve started the ball rolling for the upgrades to the MySQL servers at work. That will take a bit longer because of the required testing. :)

Now I know that Oracle is a very different beast to MySQL or SQL Server, but patching MySQL and SQL Server is so much easier than patching Oracle that it’s not surprising people gravitate to them. I’m sure the pluggable database stuff in 12c is going to simplify things somewhat, but it’s still not going to be anywhere near as simple as this stuff.

Cheers

Tim…


MySQL Upgrades was first posted on August 1, 2014 at 11:05 am.

How to get insights into the Linux Kernel

This is probably as much a note-to-self as it can possibly be. Recently I have enjoyed some more in-depth research into how the Linux kernel works. To that end I started fairly low-level. Theoretically speaking, you need to understand the hardware-software interface before you can understand the upper levels, but in practice you get by with less knowledge. Still, if you are truly interested in how computers work you might want to consider reading up on some background. Some very knowledgeable people I deeply respect have recommended books by David A. Patterson and John L. Hennessy. I have these two:

  • Computer Organization and Design, Fifth Edition: The Hardware/Software Interface
  • Computer Architecture, Fifth Edition: A Quantitative Approach

I think I found a few references to the above books in James Morle’s recent blog article about the true cost of licensing the in-memory database option and he definitely refers to the second book in his Sane SAN paper. I complemented these books with The Linux Programming Interface: A Linux and UNIX System Programming Handbook to get an overview of the Linux API. Oh and Linux from Scratch is a great resource too!

The Foundation is set

But now, what next? The Linux kernel evolves rather quickly, and don’t be fooled by version numbers. The “enterprise” kernels keep a rather conservative, static version number scheme. Remember 2.6.18? The kernel shipped with RHEL 5.10 has little in common with the one released years and years ago with RHEL 5.0. SuSE seems to be more aggressive, naming kernels differently. A good discussion of the pros and cons of that approach can be found on LWN: http://lwn.net/Articles/486304/. Long story short: the Linux kernel developers keep pushing the limits with the “upstream” or “vanilla” kernel. You can follow the development on the LKML, or Linux Kernel Mailing List. But that list is busy… The distribution vendors in turn take a stable version of the kernel and add the features they need. That includes back-porting as well, which is why it’s so hard to see what’s going on with a kernel internally. But there are exceptions.

The inner workings

Apologies to all SuSE and Red Hat geeks: I haven’t been able to find a web-repository for the kernel code! If you know of one and have the URL, let me know and I’ll add it here. I don’t want to sound biased but it simply happens to be that I know Oracle Linux best.

Now, to really dive into the internals and implementation you need to look at the source code. When browsing the code it helps to understand the C programming language, and maybe some Assembler. I would love to know more Assembler than I do, but I don’t believe it’s strictly speaking necessary.

Oracle publishes the kernel code in Git repositories on oss.oracle.com.

Oracle also provides patches for Red Hat kernels in the Red Patch project. If I understand things correctly, Red Hat provides changes to the kernel as a massive tarball with the patches already applied. Previously it appears to have shipped the kernel plus patches, which caused some controversy.

The Linux Cross Reference gives you insights into the upstream kernel.

NB: Kernel documentation can be found in the Documentation subdirectory. This is very useful stuff!

Now why would you want to do this?

Here’s my use case: I wanted to find out if, and how, I could do NFS over RDMA. When in doubt, use an Internet search engine and common sense – in this case, the kernel documentation. And sure enough, NFS-RDMA seems possible.

https://www.kernel.org/doc/Documentation/filesystems/nfs/nfs-rdma.txt

The link suggests a few module names and prerequisites for enabling NFS-RDMA. The nfs-utils package must be version 1.1.2 or later, and the kernel NFS server must be built with RDMA support. Using the kernel source RPM you can check the options used for compiling the kernel. Normally you’d use make menuconfig or an equivalent to enable/disable options or to build them as modules (refer to the excellent Linux From Scratch). Except that you don’t do that with the enterprise distributions, of course – building kernels for fun is off limits on these. If you have a problem with the Linux kernel (like a buggy kernel module), your vendor provides the fix, not the Linux engineer. But I digress… Each subtree in the kernel has a Kconfig file that lists the configuration options and their meaning.

For the purpose of NFS-RDMA, InfiniBand support must be enabled (no-brainer), but so must IPoIB and then the RDMA support for NFS (“sunrpc”).

Back to the source RPM: it installs a file called .config in /usr/src/kernels/nameAndVersion/ listing all the build options. Grepping for RDMA in the file shows the following for UEK 3:

[root@rac12node1 3.8.13-35.3.3.el6uek.x86_64]# grep -i rdma .config
CONFIG_RDS_RDMA=m
CONFIG_NET_9P_RDMA=m
CONFIG_CARDMAN_4000=m
CONFIG_CARDMAN_4040=m
# CONFIG_INFINIBAND_OCRDMA is not set
CONFIG_SUNRPC_XPRT_RDMA_CLIENT=m
# CONFIG_SUNRPC_XPRT_RDMA_CLIENT_ALLPHYSICAL is not set
CONFIG_SUNRPC_XPRT_RDMA_SERVER=m

And here is the same for UEK 2:

[root@server1 2.6.39-400.17.1.el6uek.x86_64]# grep -i rdma .config
CONFIG_RDS_RDMA=m
CONFIG_NET_9P_RDMA=m
CONFIG_CARDMAN_4000=m
CONFIG_CARDMAN_4040=m
CONFIG_SUNRPC_XPRT_RDMA=m

So that looks promising – the letter “m” stands for “module”. But what do these options mean? The Kconfig file to the rescue again, but first I have to find the correct one. This example is for UEK 2:

[root@server1 2.6.39-400.17.1.el6uek.x86_64]# for file in $(rpm -qil kernel-uek-devel | grep Kconfig );
> do grep -i SUNRPC_XPRT_RDMA $file /dev/null;
> done
/usr/src/kernels/2.6.39-400.17.1.el6uek.x86_64/net/sunrpc/Kconfig:config SUNRPC_XPRT_RDMA

Found you! Notice that I’m adding /dev/null to the grep command to get the file name where grep found a match. Looking at the file just found:

config SUNRPC_XPRT_RDMA
        tristate
        depends on SUNRPC && INFINIBAND && INFINIBAND_ADDR_TRANS && EXPERIMENTAL
        default SUNRPC && INFINIBAND
        help
          This option allows the NFS client and server to support
          an RDMA-enabled transport.

          To compile RPC client RDMA transport support as a module,
          choose M here: the module will be called xprtrdma.

          If unsure, say N.

All that remained to be done was to check that the other configuration variables (INFINIBAND, INFINIBAND_ADDR_TRANS, etc.) were set in the top-level .config file – and they were.
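That check is just another grep against the same .config file, for example:

# confirm that the options SUNRPC_XPRT_RDMA depends on are enabled as well
grep -E "^CONFIG_(SUNRPC|INFINIBAND|INFINIBAND_ADDR_TRANS)=" .config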

Parallel Execution Skew - Summary

I've published the final part of my video tutorial and the final part of my mini series "Parallel Execution Skew" at AllThingsOracle.com concluding what I planned to publish on the topic of Parallel Execution Skew.

Anyone regularly using Parallel Execution, and/or relying on Parallel Execution for important, time-critical processing, should know this stuff. In my experience, however, almost no one does, and therefore misses potentially huge opportunities for optimizing Parallel Execution performance.

Since all this was published over a longer period of time, this post is a summary with pointers to the material.

If you want to get an idea what the material is about, the following video summarizes the content:

Parallel Execution Skew in less than four minutes

Video Tutorial "Analysing Parallel Execution Skew":

Part 1: Introduction
Part 2: DFOs and DFO Trees
Part 3: Without Diagnostics / Tuning Pack license
Part 4: Using Diagnostics / Tuning Pack license

"Parallel Execution Skew" series at AllThingsOracle.com:

Part 1: Introduction
Part 2: Demonstrating Skew
Part 3: 12c Hybrid Hash Distribution With Skew Detection
Part 4: Addressing Skew Using Manual Rewrites
Part 5: Skew Caused By Outer Joins

Using ASH Analytics to View Blocked Sessions

When concurrency is the crippling factor in a database performance issue, I’m often told that viewing blocked sessions in Enterprise Manager is difficult.  The query behind the view, along with the Flash image generation, can take considerable time to render in any version of Enterprise Manager, and no matter how valuable the view is, the wait is something DBAs just can’t hold out for when they need the answer now.

Blocking Sessions View in OEM

If you’re wondering which feature I’m speaking of, once you log into any database, click on Performance, Blocking Sessions.

blocked_sessions

If there aren’t any blocking sessions, or there isn’t any significant load on the database, it can return quite quickly.  If there is significant load and there are blocking sessions, well, you could be waiting quite some time….

Behind the Scenes

The query that is run behind the scenes is executed by DBSNMP (or whatever user you have configured for communication between the target and OEM) against the database in question, and looks like the following:

select sid, username, serial#, process, nvl(sql_id, 0), sql_address, blocking_session,
       wait_class, event, p1, p2, p3, seconds_in_wait
  from v$session
 where blocking_session_status = 'VALID' OR sid IN
       (select blocking_session
          from v$session
         where blocking_session_status = 'VALID')

So what do you do when you need blocking information quickly and can’t wait for the Enterprise Manager Blocking Sessions screen?  Use ASH Analytics to view blocking session information!

ASH Analytics View Options

Start out by telling me you have installed ASH Analytics in your databases, right?  If not, please do – it’s well worth the short time it takes to install the supporting package and view (done via an EM job) for this valuable feature.

Next, once it’s installed (or if you’ve already installed it), go to any database target home page in EM12c and click on Performance, ASH Analytics.

ash_Access

The default timeline will come up for ASH Analytics.  If the blocking is occurring now, no change to the time window will be required and you’ll simply scroll down to the middle wait events graph.

ash_analytics1

Notice that no filters or session data are present on the current graph, and that it’s focused on the standard wait class data.  You can update it to view blocking sessions, and get very clear information on the sessions and waits involved, with the following quick changes:

  • Switch from Top Activity to the Load Map
  • Switch to Advanced Mode
  • Choose the following dimensions of data to display:
             – Blocking Session
             – User Session

You will see the following data displayed instantly on the screen, without the wait.

blocked_sessions3

You will see the blocking sessions and, below them, the sessions blocked by each.  If more than one session is blocked, they will show as a second, third, fourth box, etc. under the blocking session ID.

Advanced Dimensions for Blocking Sessions

If you want to build out and see what wait events are involved on the blocking session, this can be done as well.  Just move the Dimensions bar below the load map from two dimensions to three.  Then add another dimension to the load map.

ash_analytics_3

I can now see that I have a concurrency issue on one of the blocking sessions (calls against the same objects), and that the second blocking session is waiting on a commit.

An additional advantage of using this method to view blocking session data is that you are not limited to just the current blocking data, as you are when you use the “Blocking Sessions” view in OEM.

bar1

Using ASH Analytics gives you the added option of moving the upper bar to display time in the past, or moving it to view newer, just-refreshed data.

If there is specific data that you are searching for (username, SQL_ID, etc.), change the dimensions to display what you are interested in isolating.  ASH Analytics supports a wide variety of dimensions to answer questions about blocking sessions, along with all the other types of ASH data collected!
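And if you ever want the raw numbers behind the graphics, the same ASH data can be queried directly. A rough sketch (this, like ASH Analytics itself, requires the Diagnostics Pack license):

-- top blockers over the last hour, from in-memory ASH samples
select blocking_session, blocking_session_serial#, count(*) samples
  from gv$active_session_history
 where blocking_session is not null
   and sample_time > sysdate - 1/24
 group by blocking_session, blocking_session_serial#
 order by samples desc;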

 

 

 



Copyright © DBA Kevlar [Using ASH Analytics to View Blocked Sessions], All Rights Reserved. 2014.

Using the Self Service Portal with Schema as a Service in EM 12.1.0.4

In a previous blog post, I covered setting up Schema as a Service for schema level consolidation. In this post I’m going to cover how to use the Self Service Portal with Schema as a Service in EM 12.1.0.4.

Just as it was in my posting on using the Self Service Portal with DBaaS, our first step here is to log in as the Self Service user, so I provide the right username and password and click “Login”:

usingsaas01

By default, this will take me to the Infrastructure Self Service Portal page. I need to select “Databases” from the “Manage” dropdown list:

usingsaas02

Next, I need to request a schema from the Database Service Instances region:

usingsaas03

On the “Select Service Template” pop-up, I select the DBaaS Schema Service with Data template I created earlier and click “Select”:

usingsaas04

On the “Create Schema” page, there are three regions that I need to provide information for:

  1. General – in this region you provide a request name, select the zone the schema will be created in, provide a database service name, and choose a workload size from the workloads we created earlier.
  2. Schedule Request – this is where you define when the request will start and an end date after which the service instance will be removed. You also have the option to keep the service instance indefinitely.
  3. Schema Details – here you define schema details, the master account and the tablespace that will be defined as part of the service instance. While most of the other information you provide is self-explanatory, some of this region may be a bit unclear, so let’s look at these fields in more detail:
    • Schema details – The schemas that will be created as part of this Self Service request, which depends upon the Service Template chosen. You can choose to create multiple schemas at once if your template was based on that, but in this example I only selected the HR schema. Each schema will be remapped to another name based on the provided prefix, so in the example here I will end up with a new schema called “DBAAS_HR”. Note also that you can choose different passwords for each schema if your request has multiple schemas or, alternatively, if you’re lazy like me, you can keep the same password for each schema. Obviously it’s better from a security perspective not to be lazy. :)
    • Master account – the master account is the account which has privileges over all the schemas created as part of this service request.
    • Tablespace – This is the name of a tablespace that will be created to contain the schema objects as part of the service request.

The fields on this page that are marked with an asterisk (*) are mandatory fields, so you need to make sure you provide values for those fields at least. Once I’ve filled those in I just click “Submit” to start the service request processing:

usingsaas05

Back on the Database Cloud Self Service portal page, I can swap the refresh rate from manual to every 30 seconds:

usingsaas06

You should also notice the “Usage” region has been updated to reflect the newly submitted request:

usingsaas07

After a short period of time, you will notice the “HR2_Service” instance has been added to the list of Database Service Instances. If you want to see more details, you can click on the link in the “Status” column of the “Requests” region (depending on the screen refresh timing, you will see either “Running” or “Success” there – the fact that we now have a new instance in the Database Service Instances list is your real indication that the service instance was created successfully). You should also notice that we actually added TWO requests in this case – one to create the service instance and the other to remove it, as I had specified a duration of 7 days for the service instance lifetime:

usingsaas08

If I click on either the word “Running” or “Success”, I can see the Request Details pop-up:

usingsaas09

Selecting any of the execution steps will show more details in the “Execution Details” region for that particular step. You can also see that some steps weren’t needed (like the custom scripts) so they will show a status of “Skipped”. You can click “OK” to close that window.

Tips and Tricks

As with any software, there can be little idiosyncrasies with Oracle software – I know, I know, it’s hard to believe, right? ;) So here are a few tips and tricks to point you to ways around them:

  • There is a known bug where the Create Schema request fails with the error “Tablespace ‘NULL’ does not exist”. This only occurs in fairly obscure configurations where the default tablespace for the schema being exported doesn’t contain any objects. If you hit this, you can probably get around it by setting the default tablespace for the schema you want to export to something like the “EXAMPLE” tablespace that contains the Sample Schema. Alternatively, the bug is fixed in Patch 19176910.
  • On the Create Schema page, you are asked to provide (among other values) a database service name and a new tablespace name for the new service instance. If something goes wrong, you are not allowed to enter the same values again when you retry. Sometimes the new tablespace has been created, but in other situations it has not, so I’m not sure why this problem occurs. I wasn’t able to locate where the database services were kept in order to delete those, but for the tablespace issue you can of course just drop the newly created tablespace, including contents and datafiles (see the sketch after this list).
  • In another example, the processing of the request may end up creating the new schema name and then failing. If you then try the operation again, the operation fails because the new schema already exists. However, the error message you get is:

    Placement Failure: Unable to find targets with sufficient resources.

    This message is a complete red herring.

  • I have logged a bug for these last two cleanup issues. You can work around them, but it would be nicer if the system cleaned up after itself! :)
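For the tablespace-related workarounds above, the statements involved are straightforward. A sketch with invented names (HR as the schema being exported, EXAMPLE as a tablespace that already contains objects, and DBAAS_TBS as the orphaned tablespace left behind by a failed request):

-- workaround for the "Tablespace 'NULL' does not exist" bug: give the
-- exported schema a default tablespace that contains objects
alter user HR default tablespace EXAMPLE;

-- cleanup after a failed request leaves an orphaned tablespace behind
drop tablespace DBAAS_TBS including contents and datafiles;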