
Oakies Blog Aggregator

Open Source is Free – Free like a puppy!

I’m at the ODTUG Kscope conference in Hollywood, Florida, and was just talking with some Oracle folks about Open Source (yes, Oracle has people devoted to working with Open Source tools) and I shared with them my general comments to students and colleagues about Open Source.

There are two kinds of free:

1. Here’s a free cup of coffee (or beer or soda…)

2. Here’s a free puppy

When accepting option 1 (free coffee) you take it, consume it, and enjoy.

When accepting option 2 (free puppy) you take it, you find a place for it to sleep, you take it to the vet, you walk the puppy, you feed the puppy, and oh-yeah — every once in a while the puppy might make a mess on your floor! You’ve made a commitment.
(I cannot take credit for the metaphor; I first heard it used by friend and colleague Jim Cody of Cardinal Directions – thanks Jim!)

Clearly, “free” is not always “completely free”.

I love using Open Source software and working at client sites that use Open Source software; I’m just amused when I hear the old “it’s free” story. Software must be installed, maintained, and updated. Many companies exist today making good profits managing Open Source software for a fee; some even re-package open-source software so that a customer “has a neck to choke” when something goes wrong (again, for a small fee).

Another occasional cost arises when significant changes are made to Open Source software. Vendors who charge for software are expected to make new releases “compatible” with existing code or provide utilities to facilitate the process; if you’re not paying for the software, don’t expect that kind of hand-holding. Many Open Source projects will come up with a great new change, test it, then release it – even though the new release might require a complete rewrite of everything using the software. Your choice: stay with the down-level version (after all, it still works), or do what it takes to make your existing projects interact with the new version (which might require some work).

Again, I love Open Source software and use it every day. I just want to shed a little light on the “it’s free” argument. Few things worth doing are truly free; in the case of Open Source software the initial download/install might be free but, like a puppy, it comes with a commitment.

Performance #datachat

 

(photo by Tom Woodward)

A summary of my responses to questions from the SolarWinds #datachat back in the fall.

1. Why do performance issues happen?


Performance issues happen mainly because of bad design. Performance issues occasionally happen because of hardware undersizing (IO, CPU, memory, network), but that’s the 20 (or less) side of the 80/20 split.
At IOUG last year someone tweeted out a slide that said 80% of performance problems were from IO, and Kevin Closson tweeted that the other 80% are from bad SQL :P . I thought that was pretty funny. In other words, bad SQL, which is part of bad design, is the cause of most performance problems – not I/O per se.
Performance design problems arise primarily from unnecessary work. As I once heard Cary Millsap say, “The fastest query is the query that doesn’t have to be run.”

The #1 problem is the impedance mismatch between the Object Oriented and functional programming paradigms on one side and the relational set-operation paradigm on the other.

My top 5 perf issues

1. Entity Attribute Value schema, aka the stuff table.

One of the worst performance designs is the Entity Attribute Value schema, aka the stuff table, which is ironically the design of many NoSQL implementations.
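To make the cost concrete, here is a minimal SQL sketch (table and column names are hypothetical) of why the stuff table hurts: every attribute you want back costs another self-join, where a conventional design answers the same question with a single-table query.

create table stuff (
  entity_id   number,
  attr_name   varchar2(30),
  attr_value  varchar2(100)
);

-- Reassembling one logical row takes a self-join per attribute,
-- and the optimizer has almost no useful column statistics to work with:
select n.entity_id,
       n.attr_value as name,
       c.attr_value as city
from   stuff n
       join stuff c on c.entity_id = n.entity_id
where  n.attr_name = 'NAME'
and    c.attr_name = 'CITY';

-- The conventional design is a single-table query:
-- select entity_id, name, city from customers;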

2. single row operations

Using databases without leveraging set operations leads to single row inserts followed by commits.
Single row operations are like going to the grocery store, buying one item, coming home, then going back for the next item (see the sketch below).

When fetching data, it’s much more efficient to retrieve multiple rows at a time through an array interface than one row at a time.
“Refactoring SQL Applications” by Stephane Faroult is great for understanding and fixing bad database access code design.
Flip-side corollary: batching too much – collecting an hour of data before inserting, so the application data is up to an hour out of date.
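A minimal PL/SQL sketch of the single-row anti-pattern and its set-based fix, using a hypothetical orders_stage table:

declare
  type t_ids is table of number index by pls_integer;
  l_ids t_ids;
begin
  -- Anti-pattern: one insert and one commit (log file sync) per row.
  for i in 1 .. 10000 loop
    insert into orders_stage values (i, sysdate);
    commit;
  end loop;

  -- Set-based fix: fill an array, bind it once with FORALL, commit once.
  for i in 1 .. 10000 loop
    l_ids(i) := i;
  end loop;
  forall i in 1 .. l_ids.count
    insert into orders_stage values (l_ids(i), sysdate);
  commit;
end;
/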

3. joining tables in code instead of SQL

One of the most disturbing performance issues is coders joining tables in application code instead of in the database.
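The fix is usually trivial – push the join into one SQL statement (hypothetical tables again) instead of issuing one lookup per row from the application:

-- Application-side "join": fetch all orders, then run
--   select customer_name from customers where customer_id = :id
-- once per order row: N+1 round trips, and no chance of a hash join.

-- Database-side join: one statement, one round trip per fetch batch.
select o.order_id, o.amount, c.customer_name
from   orders o
       join customers c on c.customer_id = o.customer_id
where  o.order_date >= trunc(sysdate);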

4. using count(*) to define loop iterations or array sizes

using count(*) to define things like how many loop iterations to make, instead of just looping over a cursor, for example:

running count(*) to find out how many iterations to make in a loop
running count(*) to find out how big an array to make
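A small PL/SQL sketch of the difference, using a hypothetical table t:

declare
  l_cnt pls_integer;
begin
  -- Anti-pattern: a full pass over the data just to size the loop,
  -- followed by a second pass to actually read it.
  select count(*) into l_cnt from t;
  for i in 1 .. l_cnt loop
    null;  -- fetch and process row i somehow
  end loop;

  -- Fix: just loop over the cursor; it ends when the data does.
  for r in (select id from t) loop
    null;  -- process r.id
  end loop;
end;
/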

5. flooding connection pools with new sessions when the application perceives a slowdown

when using a connection pool, increasing the number of sessions when performance goes down, which usually makes performance worse

2. What is the number one performance tuning myth?


Probably the greatest performance myth is that there are silver bullets, like Exadata, that will fix your problem in place of good design
As Cary Millsap once said, the most common problem is people looking for the most common problem instead of measuring for it
myth: it’s such a beaten dead horse but it still comes to mind – that having a good buffer cache hit ratio means I/O performance is good
myth: even bigger than the BCHR myth is the idea that adding CPU, faster disks and other hardware will make performance scale

3. What do you look for first when performance tuning?


According to the theory of constraints “Any improvement not made at the constraint is an illusion.”
The first step in tuning is to find the constraint. To find the constraint you need to find where the most time is being spent

As Kellyn Pot’vin said, “if you’re not tuning for time you are wasting time”
Graphical tools that visualize quantitative information are the most powerful way to find the constraints in Oracle
To find the constraint in Oracle, I use the Average Active Sessions load in the Top Activity screen in OEM, or a tool like Confio Ignite
To find the constraint without graphical tools, look at the Top 5 Timed Events in AWR and/or run ashrpt.sql
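If you want to roll your own without ashrpt.sql, here is a hedged sketch of a top-events query against ASH (requires the Diagnostics Pack licence; each sample approximates one second of DB time):

select nvl(event, 'ON CPU') as event,
       count(*)             as ash_samples
from   v$active_session_history
where  sample_time > sysdate - interval '30' minute
group  by nvl(event, 'ON CPU')
order  by count(*) desc;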

4. What 12c features do you see as having the greatest performance impact?


EM Express where developers can actually see the impact of their code
It is not fair when a DBA sees a performance problem in OEM top activity and then yells at a developer who is flying blind
EM Express can give developers safe visibility into the performance impact and load their code puts on the system
Other things I’m looking forward to in 12c are Real Time ADDM, Compare period ADDM and ASH Analytics
http://www.aioug.org/sangam12/Presentations/20159.pdf
http://allthingsoracle.com/oracle-database-12c-new-features-part-i/
http://allthingsoracle.com/oracle-database-12c-new-features-part-2/

5. How does virtualization impact performance tuning?


Virtualization massively helps performance at the level of the company as a whole
Gene Kim (@RealGeneKim), author of “The Phoenix Project”, points out that provisioning environments is the biggest constraint in IT today
I highly recommend “The Phoenix Project” to DBAs and everyone in IT for understanding how to find the constraints and optimize them
VMware, Puppet, Chef and the like have alleviated enormous constraints on provisioning environments

The remaining major constraint in IT is provisioning copies of databases and data for environments
Delphix eliminates the enormous bureaucracy and resource constraints of provisioning database and data copies by virtualizing data

Performance of the company as a whole is orders of magnitude more important than the performance impact of virtualization on a database
Happy to say though that in my experience VMware overhead at a database performance level is usually negligible
On the other hand, since multiple VMs can run on the same shared hardware resource, other VMs can cause problems
Finding performance problems due to other VMs or the saturation of resources on ESX can be difficult
Confio’s tools that provide direct insight into VM performance correlated with Oracle performance are exciting

 

6. How often do poor configuration options lead to poor performance?


Configuration performance problems are pretty common but usually get ironed out quickly by competent DBAs
Typical configuration issues: redo logs too small, PGA too small, parallel query over-configured, SQL plans changing
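For the first of those, a quick hedged check is to count log switches per hour from v$log_history; a sustained rate much above a handful per hour during normal load suggests the redo logs are undersized:

select to_char(first_time, 'YYYY-MM-DD HH24') as hr,
       count(*)                               as log_switches
from   v$log_history
group  by to_char(first_time, 'YYYY-MM-DD HH24')
order  by hr desc;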

 

Friday Philosophy – Friday Afternoon Phone


There used to be a phrase in the car industry in the UK (I don’t know about elsewhere): a “Friday Afternoon Car“. This is a car which is unusually unreliable, as it was built on Friday afternoon when the workers were tired, the weekend was coming and, heck, they might have been to the pub at lunch. It is occasionally used just to describe something that is a bit crap and unreliable.

I have a Friday Afternoon Phone it would seem. I am fast becoming quite disillusioned with it. You may remember my post about my sitting on said phone to make it work again. It’s an iPhone 5, I bought it as I was finally persuaded that it would be more useful to have a smart phone than the “temporary” cheap Samsung I had bought about 2 years prior to then – as an emergency replacement for my previous web-enabled phone that committed suicide with a jar of pickled onions (it’s a long, hardly believable story). I expected the Samsung to keep me going for a month or two but it was so simple and reliable it just stayed in use for over 2 years.

Your Honour, allow me to present evidence: item A and item B.

Comparison:

                          Phone A                    Phone B
Cost                      £400 or so                 £15 with a free £10 pay-as-you-go top up
Battery (new)             8-12 hours                 A week
Battery (now)             4-5 hours                  A week!
Reliability               Breaks every update        No issues ever
Making calls              6/10                       9/10
Receiving calls           4/10                       9/10
Plays Angry Birds         Yes                        No
Taking pictures           9/10                       0/10
Helps me up a mountain    Yes                        No
Connection to web         6/10                       Are you kidding? But I’m mostly sat at a computer anyway
Impresses friends         No                         Yes, for all the wrong reasons :-)

{There must be a better way to line up text in this wordpress theme!!!}

The Web Enabled Phone that Does not Like to Connect
In some ways the iPhone has been really good. The screen (at the time I bought it) was very good, apps run nice and fast, way too much software is available for it and it can hold a lot of pictures and videos before running out of space. Its size also suits me. But phone and web reception has always been a bit poor and its ability to hold onto a wireless connection seems to be especially questionable – as soon as a few other devices are contending for a router with my iPhone, my iPhone seems to give up its hold on the connection and sulk. I’ve had this in several places in several countries. I’m the only one up? Phone connects fine. 2 others wake up and connect? I’m off the network. I’ve also often been in a busy place (conference, sports event) and everyone else seems to be on the net but my phone just pretends.

Battery Blues
And of course, there is the issue of the battery becoming very poor. It runs on a full charge for only a few hours and if it gets cold it has a tendency to act like a squirrel and hibernate. I now carry around the spare battery pack my wife got given by her work for her work phone use abroad. The good news is, having been put on to it by Neil Chandler, I am now aware my phone has been recalled for a battery replacement. What I am a little irked about is that Apple have my details and the serial number of the phone but have never contacted me directly to let me know. OK, it is not a car (it’s just like a car – a Friday Afternoon Car) so I am unlikely to die as a result of the fault, but if they know it has a fault and it did cost a good whack of cash to buy, they should be being moral and contacting me.

Upsetting Upgrades
But the thing that utterly hacks me off is how it does not handle upgrading to the next version of iOS. I had an upgrade early on in my relationship with the phone and it blew everything off the phone. Not a big issue then as I had not had it long. But it made me cautious about upgrading. About this time last year the phone was insisting it must be upgraded and things were getting flaky (I suspect software manufacturers do this on purpose – I’ve noticed my PC running Windows can start acting odd when an update is due). Before doing anything, I backed it up. Or tried to. The first attempt said it worked but it was too swift to have backed up anything, let alone back up my photos. After that it just refused to back up. But the phone utterly refused to allow me access to the photos from my PC – it should do of course but no, nothing would prise those images out of the phone. I was particularly concerned as I had lots of snaps from a friend’s wedding. Said friend eventually helped me out by pushing all my photos to an iCloud account (It’s Just A Server On The Net) in a way he could access. I then updated the phone and, yep, it failed. It locked the phone, I had to factory reset it, and I lost all the photos. It had also lied about uploading the pictures to the net (which it took hours to not do), so they had gone for good. Grrrrr.

So this time when it started getting dodgy I managed to save all my photos (Huzzah!), backed it up, ran through the update – and it failed and locked up the phone. *sigh!!!!*. Only, this time it won’t respond even after a factory reset. My iTunes is up to date, and it could see the phone OK at the start of the update (because I was doing it via iTunes!), but now it won’t see the phone, and once I try, guess what, iTunes also locks up. So the phone is useless. I can’t help wondering if the battery issue and the failure to ever upgrade smoothly are linked somehow (by, e.g., it being rubbish).

So I pop along to the kitchen drawer with the odds n’ sods in and pull out the old Samsung & charger and plug it in. 20 minutes later, I have a working phone. Turns out I have no credit on it anymore but I can sort that out. It even gets reception in the kitchen (I have to lean out the window of the back bedroom to get the iPhone to pick up a reliable signal at home).

Oh No! I have to Contact Apple!
Now the real fun starts. I contact my local Apple shop. Only I don’t, I access a damned annoying voice system that smugly announces “I understand full sentences” and immediately knows who I am and what my device is and when it was bought (as Apple have my details including home phone) – and it was over 2 years ago and it wants me to agree to a paid support package to go further. Of course it won’t give me options to speak to a human or understand “full sentences” even when I shout “battery issue recall” and “your update killed my phone!” plus various permutations at it. It also did not understand the sentence “I want to speak to a person”.

I eventually trick it by pretending that I will buy a support package. Huzzah, a human to talk to. Said human is helpful, pleasant, a bit hard to understand (usual call center woes of background noise and she has the microphone clipped to her socks). I explain that the phone has a recall on it and I just want that sorted and a proper reset. She’s not sure I can have this without a support package {after all, her job is to sell me some support and I am breaking her script} but she says the battery might be replaced under the recall (she has all my details, she can see the iPhone serial number, she could check!). “So I can drop it off at the store?”.

I expect “yes”. I get “no”. I have to organise an appointment. A 10 minute slot. Why? I want to drop off some kit for you to repair and I’ll come back another day. I am not making an appointment to see a doctor to discuss my piles. No, I have to have an appointment. On Monday at 10:10, or “plrbsburhpcshlurp” as the mike once more slips down the sock. OK, 10:10 Monday; she’s getting tired of me saying “please repeat that”. Then she says what sounded like “and the repair may cost up to £210 if there is a hardware fault”. WHAT?!? I don’t fully understand what she says next – but she understands I am not going to pay £210 to fix a device that has a known fault and has been screwed over again by their software update, so she backs off to “they can look at the device and advise me”.

It’ll be interesting to see how it goes on Monday. At 10:10 am. If they try and charge me hundreds of pounds to reset the damned thing or tell me (after I’ve checked) that they won’t replace the dying battery, I can imagine me becoming one of those ranting, incoherent people you see on YouTube. If they want anything more than the cost of an evening in the pub to get it working, I think it will become a shiny, expensive paperweight.

Meanwhile, welcome back Reliable Samsung Phone. You still seem to make calls just fine. Still not able to play Angry Birds though.

Old ventures and new adventures

I have some news, two items actually.

First, today (it’s still 18th June in California) is my blog’s 8th anniversary!

I wrote my first blog post, about Advanced Oracle Troubleshooting, exactly 8 years ago, on 18th June 2007, and have written 229 blog posts since. I had started writing and accumulating my TPT script collection a couple of years earlier and now it has over 1000 files in it! And no, I don’t remember what all of them do, or even why I wrote some of them. Also, I haven’t yet created an index/documentation for all of them (maybe on the 10th anniversary? ;)

Thanks everyone for your support, reading, commenting and the ideas we’ve exchanged over all these years, it’s been awesome to learn something new every single day!

You may have noticed that I haven’t been too active in online forums nor blogging much in the last couple of years, which brings me to the second news item(s):

I’ve been heavily focusing on Hadoop. It is the future. It will win, for the same reasons Linux won. I moved to the US over a year ago and am currently in San Francisco. The big data hype is at its biggest here – except it’s not hype anymore, and Hadoop is getting enterprise-ready.

I am working on a new startup. I am the CEO who still occasionally troubleshoots stuff (must learn something new every day!). We officially incorporated some months ago, but our first developers in Dallas and London have been busy in the background for over a year. By now we are beta testing with our most progressive customers ;-) We are going to be close partners with old and new friends in the modern data management space, and especially the awesome folks in Accenture Enkitec Group.

The name is Gluent. We glue together the old and new worlds in enterprise IT. Relational databases vs. Hadoop. Legacy ETL vs. Spark. SAN storage vs. the cloud. Jungles of data feeds vs. a data lake. I’m not going to tell you any more as we are still in stealth mode ;-)

Now, where does this leave Oracle technology? Well, I think it still kicks ass and it ain’t going away! In fact we are betting on it. Hadoop is here to stay, but your existing systems aren’t going away any time soon.

I wouldn’t want to run my critical ERP or complex transactional systems on anything other than Oracle. Want real-time in-memory reporting on your existing Oracle OLTP system, with immediate consistency rather than a multi-second lag? Oracle. Oracle is the king of complex OLTP and I don’t see that changing soon.

So, thanks for reading all the way to the end – and expect to hear much more about Gluent in the future! You can follow the @GluentInc Twitter handle to be the first to hear any further news :-)

 

New Monitor… Again…

I’ve just bought myself a Dell U3415W 34-Inch IPS LCD Monitor for use with the laptop. It’s quite an extravagant purchase, but it’s pretty amazing. Having 3440×1440 resolution on a single screen feels much more useful than sitting a couple of smaller monitors next to each other. It feels almost like having 3-4 screens in one.

I bought it to replace the Asus PB298Q 29 inch Widescreen AH-IPS Multimedia Monitor I got about 7 months ago. Its resolution of 2560×1080 is pretty darn decent, but I don’t like having a height of only 1080 pixels. When you are using a wider screen, the limited height feels really restrictive for some reason.

Currently I have both screens plugged into the laptop, but I can’t see it staying that way. I’ve really had no reason to look at the MacBook or ASUS screen yet. I’ll see how I feel over the next few days. If I’m happy to let it go I’ll probably take the ASUS screen to work and use it there. It’s better than either of my work monitors. :)

Ditching the second screen will also free up some room on my desk, which is looking a little crazy at the moment… :(

Cheers

Tim…



12.1.0.5 Hybrid Cloud Cloning

Last week’s release of 12.1.0.5 was a pleasant surprise for everyone out in the Oracle world. This release hit the bulls-eye for another cloud target of Oracle’s, announcing Enterprise Manager 12c’s offering of single pane of glass management for the hybrid cloud. The EM12c team has been trained on and testing out the new features of this release with great enthusiasm, and I have to admit, it’s pretty cool stuff, folks!

Why is the Hybrid Cloud Important?

Many companies are still a bit hesitant to embrace the cloud or, due to sensitive data and security requirements, aren’t able to take advantage of cloud offerings for their production systems. Possessing a powerful tool like Enterprise Manager to help guide them to the cloud could make all the difference.


 

You’re going to start hearing the EM folks use the term “Single Pane of Glass” a lot in the upcoming months, as it’s part of the overall move away from the perception that EM is still just a DBA tool and towards getting everyone to embrace the truth that EM12c has grown into an infrastructure tool.

What is the hybrid cloud?

We’ve discussed the baby steps many companies are taking with the hybrid cloud (vs. others that are jumping in feet first! :)). With it, a company can uphold those requirements and keep its production systems on-premise, while enforcing data masking and subsetting so that sensitive data is never exposed outside the production database (including to the test master database that is used to track the changes in the snap clone copies). Database as a Service then lets them clone development, test and QA environments to a less expensive cloud storage platform without exposing any sensitive data.


Once the data masking or any other pre-clone data cleansing/subsetting is performed, the test master database is created and can be used to create as many snap clones as needed. These snaps can be used for development, QA or testing. The space savings increase as snap clone copies are added, since each clone stores only its changed blocks against the test master database. This can add up to a 90% storage savings over traditional full database copies.
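To put an illustrative (hypothetical) number on that: ten full copies of a 1 TB database need 10 TB of storage, while a 1 TB test master plus ten snap clones that each change around 2% of the blocks need roughly 1 TB + 10 × 20 GB ≈ 1.2 TB – an 88% saving, in line with the up-to-90% figure.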

Hybrid Cloning

The power of hybrid cloning is the Hybrid Cloud Gateway, a secure SSH tunnel that allows seamless communication between on-premise systems and the cloud.


Types of Clones

There are four types of clones currently offered with hybrid cloning:

  • On-premise source cloned to the cloud.
  • Cloud source, cloned to on-premise.
  • Cloud source cloned in the cloud.
  • Migrate from a schema in a database to a PDB in the cloud.

Simplicity is Key

The user interface is simple to use – create a clone or clones, save off templates, and build out a catalog for a self-service portal. When cloning, the status dashboard gives a great quick view of the success of each cloning step.


If deeper investigation of any single step needs to be performed, the logging is no different from inspecting an EM job log (because an EM job is exactly what it is… :)).


I’ll be returning from Europe soon and hope to do more with the product, digging into this great new feature, but until then, here’s a great overview of 12.1.0.5’s brand new star!

 




Oracle Enterprise Manager Cloud Control 12c Release 5 (12.1.0.5) : My first two installations

I’ve done a couple of play installations of EM12c 12.1.0.5, just to get a feel for it. You can see the result of that here.

From an installation perspective, everything was pretty similar to the previous releases. I tried the installation on both OL5 and OL6, in both cases using 12c as the database repository. No dramas there.

A couple of things of note.

  1. The 12c repository template database uses the non-CDB architecture.
  2. The WebLogic installation uses Java 6.

Interesting…

The next step is to try some upgrades from EM 12.1.0.4 (on DB 11.2.0.4) to EM 12.1.0.5, which is what I’ll need for my upgrades at work. The testing is quite time consuming and boring, but it’s got to be done before I can unleash this on the company. :)

Cheers

Tim…

PS. Remember to download from edelivery.oracle.com (in a couple of days) for your production installations. Apparently there is a difference in the license agreement.



Reverse Key

A question came up on the OTN database forum recently asking if you could have a partitioned index on a non-partitioned table.

(Aside: I’m not sure whether it would be quicker to read the manuals or try the experiment – either would probably be quicker than posing the question to the forum. As so often happens in these RTFM questions the OP didn’t bother to acknowledge any of the responses)

The answer to the question is yes – you can create a globally partitioned index, though if it uses range partitioning you have to specify a MAXVALUE partition. The interesting thing about the question, though, is that several people tried to guess why it had been asked and then made suggestions based on the most likely guess (and wouldn’t it have been nice to see some response from the OP). The common guess was that there was a performance problem with the high-value block of a sequence-based (or time-based) index – a frequent source of “buffer busy wait” events and other nasty side effects.

Unfortunately too many people suggested reverse key as a solution to this “right-hand” problem. If you’re licensed for partitioning it’s almost certain that a better option would simply be to use global hash partitioning (with 2^N partitions for some N). Using reverse keys can result in a bigger performance problem than the one you’re trying to avoid – you may end up turning a little time spent on buffer busy waits into a large amount of time spent on db file sequential reads. To demonstrate the issue I’ve created a sample script – and adjusted my buffer cache down to the appropriate scale:

create table t1(
	id	not null
)
nologging
as
with generator as (
	select	--+ materialize
		rownum id 
	from dual 
	connect by 
		rownum <= 1e4
)
select
	1e7 + rownum	id
from
	generator	v1,
	generator	v2
where
	rownum <= 1e7 
;

begin
	dbms_stats.gather_table_stats(
		ownname		 => user,
		tabname		 =>'T1'
	);
end;
/

alter table t1 add constraint t1_pk primary key(id) 
using index 
	reverse 
	nologging 
;

alter system flush buffer_cache;
alter session set events '10046 trace name context forever, level 8';

begin
	for i in 20000001..20010000 loop
		insert into t1 values(i);
	end loop;
end;
/

I’ve created a table with 10,000,000 rows using a sequential value as the primary key, then inserted “the next” 10,000 rows into the table in order. The index occupied about 22,000 blocks, so to make my demonstration show you the type of effect you could get from a busy production system with more tables and many indexes, I ran my test with the buffer cache limited to 6,000 blocks – a fair fraction of the total index size. Here’s a small section of the trace file from the test running 10.2.0.3 on an elderly machine:


WAIT #43: nam='db file sequential read' ela= 13238 file#=6 block#=12653 blocks=1 obj#=63623 tim=3271125590
WAIT #43: nam='db file sequential read' ela=  7360 file#=6 block#=12749 blocks=1 obj#=63623 tim=3271133150
WAIT #43: nam='db file sequential read' ela=  5793 file#=6 block#=12844 blocks=1 obj#=63623 tim=3271139110
WAIT #43: nam='db file sequential read' ela=  5672 file#=6 block#=12940 blocks=1 obj#=63623 tim=3271145028
WAIT #43: nam='db file sequential read' ela= 15748 file#=5 block#=13037 blocks=1 obj#=63623 tim=3271160998
WAIT #43: nam='db file sequential read' ela=  8080 file#=5 block#=13133 blocks=1 obj#=63623 tim=3271169314
WAIT #43: nam='db file sequential read' ela=  8706 file#=5 block#=13228 blocks=1 obj#=63623 tim=3271178240
WAIT #43: nam='db file sequential read' ela=  7919 file#=5 block#=13325 blocks=1 obj#=63623 tim=3271186372
WAIT #43: nam='db file sequential read' ela= 15553 file#=6 block#=13549 blocks=1 obj#=63623 tim=3271202115
WAIT #43: nam='db file sequential read' ela=  7044 file#=6 block#=13644 blocks=1 obj#=63623 tim=3271209420
WAIT #43: nam='db file sequential read' ela=  6062 file#=6 block#=13741 blocks=1 obj#=63623 tim=3271215648
WAIT #43: nam='db file sequential read' ela=  6067 file#=6 block#=13837 blocks=1 obj#=63623 tim=3271221887
WAIT #43: nam='db file sequential read' ela= 11516 file#=5 block#=13932 blocks=1 obj#=63623 tim=3271234852
WAIT #43: nam='db file sequential read' ela=  9295 file#=5 block#=14028 blocks=1 obj#=63623 tim=3271244368
WAIT #43: nam='db file sequential read' ela=  9466 file#=5 block#=14125 blocks=1 obj#=63623 tim=3271254002
WAIT #43: nam='db file sequential read' ela=  7704 file#=5 block#=14221 blocks=1 obj#=63623 tim=3271261991
WAIT #43: nam='db file sequential read' ela= 16319 file#=6 block#=14444 blocks=1 obj#=63623 tim=3271278492
WAIT #43: nam='db file sequential read' ela=  7416 file#=6 block#=14541 blocks=1 obj#=63623 tim=3271286129
WAIT #43: nam='db file sequential read' ela=  5748 file#=6 block#=14637 blocks=1 obj#=63623 tim=3271292163
WAIT #43: nam='db file sequential read' ela=  7131 file#=6 block#=14732 blocks=1 obj#=63623 tim=3271299489
WAIT #43: nam='db file sequential read' ela= 16126 file#=5 block#=14829 blocks=1 obj#=63623 tim=3271315883
WAIT #43: nam='db file sequential read' ela=  7746 file#=5 block#=14925 blocks=1 obj#=63623 tim=3271323845
WAIT #43: nam='db file sequential read' ela=  9208 file#=5 block#=15020 blocks=1 obj#=63623 tim=3271333239
WAIT #43: nam='db file sequential read' ela=  7708 file#=5 block#=15116 blocks=1 obj#=63623 tim=3271341141
WAIT #43: nam='db file sequential read' ela= 15484 file#=6 block#=15341 blocks=1 obj#=63623 tim=3271356807
WAIT #43: nam='db file sequential read' ela=  5488 file#=6 block#=15437 blocks=1 obj#=63623 tim=3271362623
WAIT #43: nam='db file sequential read' ela= 10447 file#=6 block#=15532 blocks=1 obj#=63623 tim=3271373342
WAIT #43: nam='db file sequential read' ela= 12565 file#=6 block#=15629 blocks=1 obj#=63623 tim=3271386741
WAIT #43: nam='db file sequential read' ela= 17168 file#=5 block#=15725 blocks=1 obj#=63623 tim=3271404135
WAIT #43: nam='db file sequential read' ela=  7542 file#=5 block#=15820 blocks=1 obj#=63623 tim=3271411882
WAIT #43: nam='db file sequential read' ela=  9400 file#=5 block#=15917 blocks=1 obj#=63623 tim=3271421514
WAIT #43: nam='db file sequential read' ela=  7804 file#=5 block#=16013 blocks=1 obj#=63623 tim=3271429519
WAIT #43: nam='db file sequential read' ela= 14470 file#=6 block#=16237 blocks=1 obj#=63623 tim=3271444168
WAIT #43: nam='db file sequential read' ela=  5788 file#=6 block#=16333 blocks=1 obj#=63623 tim=3271450154
WAIT #43: nam='db file sequential read' ela=  9630 file#=6 block#=16429 blocks=1 obj#=63623 tim=3271460008
WAIT #43: nam='db file sequential read' ela= 10910 file#=6 block#=16525 blocks=1 obj#=63623 tim=3271471174
WAIT #43: nam='db file sequential read' ela= 15683 file#=5 block#=16620 blocks=1 obj#=63623 tim=3271487065
WAIT #43: nam='db file sequential read' ela=  8094 file#=5 block#=16717 blocks=1 obj#=63623 tim=3271495454
WAIT #43: nam='db file sequential read' ela=  6670 file#=5 block#=16813 blocks=1 obj#=63623 tim=3271502293
WAIT #43: nam='db file sequential read' ela=  7852 file#=5 block#=16908 blocks=1 obj#=63623 tim=3271510360
WAIT #43: nam='db file sequential read' ela= 10500 file#=6 block#=17133 blocks=1 obj#=63623 tim=3271521039
WAIT #43: nam='db file sequential read' ela= 11038 file#=6 block#=17229 blocks=1 obj#=63623 tim=3271532275
WAIT #43: nam='db file sequential read' ela= 12432 file#=6 block#=17325 blocks=1 obj#=63623 tim=3271544974
WAIT #43: nam='db file sequential read' ela=  7784 file#=6 block#=17421 blocks=1 obj#=63623 tim=3271553331
WAIT #43: nam='db file sequential read' ela=  7774 file#=5 block#=17517 blocks=1 obj#=63623 tim=3271561346
WAIT #43: nam='db file sequential read' ela=  6583 file#=5 block#=17613 blocks=1 obj#=63623 tim=3271568146
WAIT #43: nam='db file sequential read' ela=  7901 file#=5 block#=17708 blocks=1 obj#=63623 tim=3271576231
WAIT #43: nam='db file sequential read' ela=  6667 file#=5 block#=17805 blocks=1 obj#=63623 tim=3271583259
WAIT #43: nam='db file sequential read' ela=  9427 file#=6 block#=18029 blocks=1 obj#=63623 tim=3271592988
WAIT #43: nam='db file sequential read' ela= 52334 file#=6 block#=18125 blocks=1 obj#=63623 tim=3271646055
WAIT #43: nam='db file sequential read' ela= 50512 file#=6 block#=18221 blocks=1 obj#=63623 tim=3271697284
WAIT #43: nam='db file sequential read' ela= 10095 file#=6 block#=18317 blocks=1 obj#=63623 tim=3271708095

Check the block numbers for this list of single block reads – we’re jumping through the index about 100 blocks at a time to read the next block where an index entry has to go. The jumps are the expected (and designed) effect of reverse key indexes: the fact that the jumps turn into physical disc reads is the (possibly unexpected) side effect. Reversing an index makes adjacent values look very different (by reversing the bytes) and go to different index leaf blocks: two consecutive keys differ only in their lowest-order byte, so after reversal that byte leads the key and the two entries land far apart in the index. The purpose of the exercise is to scatter concurrent similar inserts across multiple blocks, but if you scatter the index entries you need to buffer a lot more of the index to keep the most recently used values in memory. Reversing the index may eliminate buffer busy waits, but it may increase the time lost on db file sequential reads dramatically.

Here’s a short list of interesting statistics from this test – this time running on 11.2.0.4 on a machine with SSDs – comparing the effects of reversing the index with those of not reversing the index, normal index first:


Normal index
------------
CPU used by this session               83
DB time                                97
db block gets                      40,732
physical reads                         51
db block changes                   40,657
redo entries                       20,174
redo size                       5,091,436
undo change vector size         1,649,648

Repeat with reverse key index
-----------------------------
CPU used by this session              115
DB time                               121
db block gets                      40,504
physical reads                     10,006
db block changes                   40,295
redo entries                       19,973
redo size                       4,974,820
undo change vector size         1,639,232

Because of the SSDs there’s little difference in timing between the two sets of data and, in fact, all the other measures of work done are very similar except for the physical reads; the increase in reads is probably the cause of the extra CPU time, thanks to both the LRU manipulation and the interaction with the operating system.

If you want to check the effect of index reversal you can take advantage of the sys_op_lbid() function to sample a little of your data – in my case I’ve queried the last 10,000 rows (values) in the table:


select 
	/*+ 
		cursor_sharing_exact 
		dynamic_sampling(0) 
		no_monitoring 
		no_expand 
		index_ffs(t1,t1_i1) 
		noparallel_index(t,t1_i1) 
	*/ 
	count (distinct sys_op_lbid( &m_ind_id ,'L',t1.rowid)) as leaf_blocks
from 
	t1
where 
	id between 2e7 + 1 and 2e7 + 1e4
;

The &m_ind_id substitution variable is the object_id of the index t1_i1.

In my case, with an index of 22,300 leaf blocks, my 10,000 consecutive values were scattered over 9,923 leaf blocks. If I want access to “recent data” to be as efficient as possible I need to keep that many blocks of the index cached, compared to an (absolute) worst case of about 100 leaf blocks for my data with a normal index. When you reverse key an index you have to think about how much bigger you have to make your buffer cache to keep the performance constant.
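For comparison, here’s a hedged sketch of the global hash partitioning alternative suggested above (reusing the names from the script, so you’d drop the reverse key constraint and its index first). Within each partition the index stays in key order, so recent values concentrate in a few hot leaf blocks per partition instead of being scattered across the whole index:

create unique index t1_pk on t1(id)
global partition by hash (id)
partitions 4	-- 2^N partitions for some N: concurrent inserts now spread over 4 hot blocks
;

alter table t1 add constraint t1_pk primary key(id) using index t1_pk;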

HOWTO: Check if an XMLType View or Table is Hierarchy Enabled

The following simple code snippet demonstrates how you can check if an XMLType view or…

IO Resource Manager for Pluggable Databases in Exadata

Another interesting topic that goes over and above the CDB Resource Manager Plans I described earlier this week is the implementation of IORM Plans for Pluggable Databases. Pluggable Databases are an interesting object for studies, and I like to research things. When 12c came out there was initially no support for offloading; you need to be on cell software 12.1.x.x.x for full 12c support on Exadata. One aspect I was particularly interested in was PDBs and IORM. Limiting CPU on a per-PDB basis is already a cool thing, but in a hugely consolidated environment it also makes sense to limit I/O based on a consumer group. Another one of the perks of working on Exadata :)

According to what I found out, PDBs are treated as entities within the database, and they probably fall into the category of intra-database IORM. I have previously written about how DBRM plans filter down to the cells and become intra-database resource plans, and that seems to be happening here, too.

IORM metric definitions

When looking at IORM I’m old fashioned and like to rely on the command line. More GUI-oriented people should consider using OEM 12 to get similar data, but in pretty pictures. Since OEM taps into cellcli under the covers, though, it is only fair to try to understand the underlying technology.

The cells provide a lot of performance information in the metriccurrent and metrichistory views. You are shown metrics based on a name and objectType. For IORM the following objectTypes are of interest in general:

[celladmin@enkcel04 ~]$ cellcli -e "list metricdefinition attributes objectType" | grep IORM | sort | uniq
	 IORM_CATEGORY
	 IORM_CONSUMER_GROUP
	 IORM_DATABASE
	 IORM_PLUGGABLE_DATABASE

This was executed on Exadata 12.1.2.1.1; if you try this on an 11.2.3.x Exadata release you won’t see the IORM_PLUGGABLE_DATABASE category. The question is: which metrics are gathered in 12.1.2.1.1?

CELLCLI> list metricdefinition attributes name,description where objectType = 'IORM_PLUGGABLE_DATABASE'
	 PDB_FC_BY_ALLOCATED	 "Number of megabytes allocated in flash cache for this pluggable database"
	 PDB_FC_IO_BY_SEC   	 "Number of megabytes of I/O per second for this pluggable database to flash cache"
	 PDB_FC_IO_RQ       	 "Number of IO requests issued by this pluggable database to flash cache"
	 PDB_FC_IO_RQ_LG    	 "Number of large IO requests issued by this pluggable database to flash cache"
	 PDB_FC_IO_RQ_LG_SEC	 "Number of large IO requests issued by this pluggable database to flash cache per second"
	 PDB_FC_IO_RQ_SEC   	 "Number of IO requests issued by this pluggable database to flash cache per second"
	 PDB_FC_IO_RQ_SM    	 "Number of small IO requests issued by this pluggable database to flash cache"
	 PDB_FC_IO_RQ_SM_SEC	 "Number of small IO requests issued by this pluggable database to flash cache per second"
	 PDB_FD_IO_BY_SEC   	 "Number of megabytes of I/O per second for this pluggable database to flash disks"
	 PDB_FD_IO_LOAD     	 "Average I/O load from this pluggable database for flash disks"
	 PDB_FD_IO_RQ_LG    	 "Number of large IO requests issued by this pluggable database to flash disks"
	 PDB_FD_IO_RQ_LG_SEC	 "Number of large IO requests issued by this pluggable database to flash disks per second"
	 PDB_FD_IO_RQ_SM    	 "Number of small IO requests issued by this pluggable database to flash disks"
	 PDB_FD_IO_RQ_SM_SEC	 "Number of small IO requests issued by this pluggable database to flash disks per second"
	 PDB_FD_IO_TM       	 "The cumulative latency of reading blocks by this pluggable database from flash disks"
	 PDB_FD_IO_TM_RQ    	 "The rate which is the average latency of reading or writing blocks per request by this pluggable database from flash disks"
	 PDB_FD_IO_UTIL     	 "Percentage of flash resources utilized by this pluggable database"
	 PDB_FD_IO_WT_LG    	 "IORM wait time for large IO requests issued to flash disks by this pluggable database"
	 PDB_FD_IO_WT_LG_RQ 	 "Average IORM wait time per request for large IO requests issued to flash disks by this pluggable database"
	 PDB_FD_IO_WT_SM    	 "IORM wait time for small IO requests issued to flash disks by this pluggable database"
	 PDB_FD_IO_WT_SM_RQ 	 "Average IORM wait time per request for small IO requests issued to flash disks by this pluggable database"
	 PDB_IO_BY_SEC      	 "Number of megabytes of I/O per second for this pluggable database to hard disks"
	 PDB_IO_LOAD        	 "Average I/O load from this pluggable database for hard disks"
	 PDB_IO_RQ_LG       	 "Number of large IO requests issued by this pluggable database to hard disks"
	 PDB_IO_RQ_LG_SEC   	 "Number of large IO requests issued by this pluggable database to hard disks per second"
	 PDB_IO_RQ_SM       	 "Number of small IO requests issued by this pluggable database to hard disks"
	 PDB_IO_RQ_SM_SEC   	 "Number of small IO requests issued by this pluggable database to hard disks per second"
	 PDB_IO_TM_LG       	 "The cumulative latency of reading or writing large blocks by this pluggable database from hard disks"
	 PDB_IO_TM_LG_RQ    	 "The rate which is the average latency of reading or writing large blocks per request by this pluggable database from hard disks"
	 PDB_IO_TM_SM       	 "The cumulative latency of reading or writing small blocks by this pluggable database from hard disks"
	 PDB_IO_TM_SM_RQ    	 "The rate which is the average latency of reading or writing small blocks per request by this pluggable database from hard disks"
	 PDB_IO_UTIL_LG     	 "Percentage of disk resources utilized by large requests from this pluggable database"
	 PDB_IO_UTIL_SM     	 "Percentage of disk resources utilized by small requests from this pluggable database"
	 PDB_IO_WT_LG       	 "IORM wait time for large IO requests issued to hard disks by this pluggable database"
	 PDB_IO_WT_LG_RQ    	 "Average IORM wait time per request for large IO requests issued to hard disks by this pluggable database"
	 PDB_IO_WT_SM       	 "IORM wait time for small IO requests issued to hard disks by this pluggable database"
	 PDB_IO_WT_SM_RQ    	 "Average IORM wait time per request for small IO requests issued to hard disks by this pluggable database"

IORM metrics for PDBs

You still can’t “join” metricdefinition to metriccurrent, but it is possible to use the objectType in metriccurrent, too. The way that Oracle externalises information about PDBs is as shown here:

CellCLI> list metriccurrent where objectType = 'IORM_PLUGGABLE_DATABASE' and metricObjectName like 'MBACHMT.*'
	 PDB_FC_BY_ALLOCATED	 MBACHMT.CDB$ROOT   	 1,843 MB
	 PDB_FC_BY_ALLOCATED	 MBACHMT.PDB$SEED   	 41.500 MB
	 PDB_FC_BY_ALLOCATED	 MBACHMT.SWINGBENCH0	 3,772 MB
	 PDB_FC_BY_ALLOCATED	 MBACHMT.SWINGBENCH1	 3,374 MB
	 PDB_FC_BY_ALLOCATED	 MBACHMT.SWINGBENCH2	 3,280 MB
...

PDB_FC_BY_ALLOCATED translates to “Number of megabytes allocated in flash cache for this pluggable database” as per the above translation. The Metric Object Name therefore is made up of the database name (I continue using MBACHMT as my CDB) and the container name. CDB$ROOT stands for the root, PDB$SEED for the seed database, and then the various PDB names you define. In my example I have “user PDBs” defined as swingbench{0,1,2}.

Stressing it

With the basics covered it is time to run some stress testing. I have created a 6 GB table named IOTEST in each of my swingbench* PDBs and will use the same script to run 80 sessions against that table in each PDB. My CDB resource manager plan is still the same, repeated here for your convenience:

begin
 dbms_resource_manager.clear_pending_area;
 dbms_resource_manager.create_pending_area;
 
 dbms_resource_manager.create_cdb_plan(
  plan => 'ENKITC_CDB_PLAN',
  comment => 'A CDB plan for 12c'
 );
 
 dbms_resource_manager.create_cdb_plan_directive(
  plan => 'ENKITC_CDB_PLAN',
  pluggable_database => 'swingbench0',
  shares => 5,
  utilization_limit => 100);
 
 dbms_resource_manager.create_cdb_plan_directive(
  plan => 'ENKITC_CDB_PLAN',
  pluggable_database => 'swingbench1',
  shares => 3,
  utilization_limit => 50);
 
 dbms_resource_manager.create_cdb_plan_directive(
  plan => 'ENKITC_CDB_PLAN',
  pluggable_database => 'swingbench2',
  shares => 1,
  utilization_limit => 30);
 
 dbms_resource_manager.validate_pending_area;
 dbms_resource_manager.submit_pending_area;
end;
/

If I execute the scripts concurrently (80 sessions connecting against each PDB), prefixing my scheduler with the time command, I get the following results:

  • swingbench0 – 143s
  • swingbench1 – 223s
  • swingbench2 – 288s
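A rough sanity check of the share arithmetic (my reading of how shares translate into entitlement): with shares of 5, 3 and 1, swingbench0 is entitled to 5/9 ≈ 56% of the I/O under contention, swingbench1 to 3/9 ≈ 33% (further capped at 50% utilization) and swingbench2 to 1/9 ≈ 11% (capped at 30%). Relative to swingbench0, the elapsed times translate into throughputs of 143/223 ≈ 0.64 and 143/288 ≈ 0.50 for the same amount of work – the ordering matches the plan even if the ratios aren’t exact.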

Interestingly, there is no wait event that would show I/O throttling in 12.1.0.2.2:

SYS:MBACHMT2> select count(*), con_id, event from v$session where username = 'SOE' group by con_id, event;

   COUNT(*)      CON_ID EVENT
----------- ----------- ----------------------------------------------------------------
         80           4 cell smart table scan
         80           5 cell smart table scan
         80           3 cell smart table scan

So all of them are Smart-Scanning. The fact that some of the sessions are throttled is not visible from the wait interface – or at least I haven’t seen a way to externalise I/O throttling – but it does happen. Using one of my favourite tools, metric_iorm.pl (available from MOS), I noticed the following:

Database: MBACHMT
Utilization:     Small=0%    Large=18%
Flash Cache:     IOPS=13087
Disk Throughput: MBPS=361
Small I/O's:     IOPS=4.0    Avg qtime=0.0ms
Large I/O's:     IOPS=346    Avg qtime=1026ms
	Consumer Group: SWINGBENCH1.SWINGBENCH_GROUP
	Utilization:     Small=0%    Large=4%
	Flash Cache:     IOPS=3482
	Disk Throughput: MBPS=91
	Small I/O's:     IOPS=0.0    Avg qtime=0.0ms
	Large I/O's:     IOPS=87.7    Avg qtime=1336ms
	Consumer Group: SWINGBENCH0.SWINGBENCH_GROUP
	Utilization:     Small=0%    Large=13%
	Flash Cache:     IOPS=8886
	Disk Throughput: MBPS=254
	Small I/O's:     IOPS=0.0    Avg qtime=0.0ms
	Large I/O's:     IOPS=244    Avg qtime=906ms
	Consumer Group: CDB$ROOT._ORACLE_LOWPRIBG_GROUP_
	Utilization:     Small=0%    Large=0%
	Flash Cache:     IOPS=0.8
	Disk Throughput: MBPS=0
	Small I/O's:     IOPS=0.0    Avg qtime=0.0ms
	Large I/O's:     IOPS=0.0    Avg qtime=0.0ms
	Consumer Group: CDB$ROOT.ORA$AUTOTASK
	Utilization:     Small=0%    Large=0%
	Flash Cache:     IOPS=1.4
	Disk Throughput: MBPS=0
	Small I/O's:     IOPS=0.4    Avg qtime=0.0ms
	Large I/O's:     IOPS=0.0    Avg qtime=0.0ms
	Consumer Group: CDB$ROOT._ORACLE_BACKGROUND_GROUP_
	Utilization:     Small=0%    Large=0%
	Flash Cache:     IOPS=0.7
	Disk Throughput: MBPS=0
	Small I/O's:     IOPS=3.6    Avg qtime=0.0ms
	Large I/O's:     IOPS=0.0    Avg qtime=0.0ms
	Consumer Group: SWINGBENCH2.SWINGBENCH_GROUP
	Utilization:     Small=0%    Large=0%
	Flash Cache:     IOPS=717
	Disk Throughput: MBPS=15
	Small I/O's:     IOPS=0.0    Avg qtime=0.0ms
	Large I/O's:     IOPS=14.5    Avg qtime=1152ms

These are statistics from a single cell – this X2-2 has 3 of them. I have also gathered some of the raw stats here in case you are interested, again from a single cell:

CellCLI> list metriccurrent where name like 'PDB.*' and metricObjectName like 'MBACHMT.SW.*' and metricValue not like '0.*';
	 PDB_FC_BY_ALLOCATED	 MBACHMT.SWINGBENCH0	 3,779 MB
	 PDB_FC_BY_ALLOCATED	 MBACHMT.SWINGBENCH1	 3,467 MB
	 PDB_FC_BY_ALLOCATED	 MBACHMT.SWINGBENCH2	 3,461 MB
	 PDB_FC_IO_BY_SEC   	 MBACHMT.SWINGBENCH0	 835 MB/sec
	 PDB_FC_IO_BY_SEC   	 MBACHMT.SWINGBENCH1	 635 MB/sec
	 PDB_FC_IO_BY_SEC   	 MBACHMT.SWINGBENCH2	 338 MB/sec
	 PDB_FC_IO_RQ       	 MBACHMT.SWINGBENCH0	 4,807,060 IO requests
	 PDB_FC_IO_RQ       	 MBACHMT.SWINGBENCH1	 4,835,038 IO requests
	 PDB_FC_IO_RQ       	 MBACHMT.SWINGBENCH2	 4,833,804 IO requests
	 PDB_FC_IO_RQ_LG    	 MBACHMT.SWINGBENCH0	 4,797,336 IO requests
	 PDB_FC_IO_RQ_LG    	 MBACHMT.SWINGBENCH1	 4,828,945 IO requests
	 PDB_FC_IO_RQ_LG    	 MBACHMT.SWINGBENCH2	 4,827,506 IO requests
	 PDB_FC_IO_RQ_LG_SEC	 MBACHMT.SWINGBENCH0	 12,738 IO/sec
	 PDB_FC_IO_RQ_LG_SEC	 MBACHMT.SWINGBENCH1	 9,691 IO/sec
	 PDB_FC_IO_RQ_LG_SEC	 MBACHMT.SWINGBENCH2	 5,158 IO/sec
	 PDB_FC_IO_RQ_SEC   	 MBACHMT.SWINGBENCH0	 12,789 IO/sec
	 PDB_FC_IO_RQ_SEC   	 MBACHMT.SWINGBENCH1	 9,721 IO/sec
	 PDB_FC_IO_RQ_SEC   	 MBACHMT.SWINGBENCH2	 5,182 IO/sec
	 PDB_FC_IO_RQ_SM    	 MBACHMT.SWINGBENCH0	 9,724 IO requests
	 PDB_FC_IO_RQ_SM    	 MBACHMT.SWINGBENCH1	 6,093 IO requests
	 PDB_FC_IO_RQ_SM    	 MBACHMT.SWINGBENCH2	 6,298 IO requests
	 PDB_FC_IO_RQ_SM_SEC	 MBACHMT.SWINGBENCH0	 51.2 IO/sec
	 PDB_FC_IO_RQ_SM_SEC	 MBACHMT.SWINGBENCH1	 30.2 IO/sec
	 PDB_FC_IO_RQ_SM_SEC	 MBACHMT.SWINGBENCH2	 23.9 IO/sec
	 PDB_FD_IO_BY_SEC   	 MBACHMT.SWINGBENCH0	 835 MB/sec
	 PDB_FD_IO_BY_SEC   	 MBACHMT.SWINGBENCH1	 635 MB/sec
	 PDB_FD_IO_BY_SEC   	 MBACHMT.SWINGBENCH2	 338 MB/sec
	 PDB_FD_IO_LOAD     	 MBACHMT.SWINGBENCH0	 30.3
	 PDB_FD_IO_LOAD     	 MBACHMT.SWINGBENCH1	 30.4
	 PDB_FD_IO_LOAD     	 MBACHMT.SWINGBENCH2	 28.1
	 PDB_FD_IO_RQ_LG    	 MBACHMT.SWINGBENCH0	 4,797,336 IO requests
	 PDB_FD_IO_RQ_LG    	 MBACHMT.SWINGBENCH1	 4,828,945 IO requests
	 PDB_FD_IO_RQ_LG    	 MBACHMT.SWINGBENCH2	 4,827,506 IO requests
	 PDB_FD_IO_RQ_LG_SEC	 MBACHMT.SWINGBENCH0	 12,738 IO/sec
	 PDB_FD_IO_RQ_LG_SEC	 MBACHMT.SWINGBENCH1	 9,691 IO/sec
	 PDB_FD_IO_RQ_LG_SEC	 MBACHMT.SWINGBENCH2	 5,158 IO/sec
	 PDB_FD_IO_TM       	 MBACHMT.SWINGBENCH0	 69,803,464 us
	 PDB_FD_IO_TM       	 MBACHMT.SWINGBENCH1	 45,061,357 us
	 PDB_FD_IO_TM       	 MBACHMT.SWINGBENCH2	 40,433,099 us
	 PDB_FD_IO_TM_RQ    	 MBACHMT.SWINGBENCH0	 1.0 us/request
	 PDB_FD_IO_TM_RQ    	 MBACHMT.SWINGBENCH1	 1.0 us/request
	 PDB_FD_IO_TM_RQ    	 MBACHMT.SWINGBENCH2	 1.0 us/request
	 PDB_FD_IO_UTIL     	 MBACHMT.SWINGBENCH0	 67 %
	 PDB_FD_IO_UTIL     	 MBACHMT.SWINGBENCH1	 54 %
	 PDB_FD_IO_UTIL     	 MBACHMT.SWINGBENCH2	 29 %
	 PDB_FD_IO_WT_LG    	 MBACHMT.SWINGBENCH0	 830,669 ms
	 PDB_FD_IO_WT_LG    	 MBACHMT.SWINGBENCH1	 717,211 ms
	 PDB_FD_IO_WT_LG    	 MBACHMT.SWINGBENCH2	 221,666 ms
	 PDB_IO_BY_SEC      	 MBACHMT.SWINGBENCH0	 380 MB/sec
	 PDB_IO_BY_SEC      	 MBACHMT.SWINGBENCH1	 305 MB/sec
	 PDB_IO_BY_SEC      	 MBACHMT.SWINGBENCH2	 151 MB/sec
	 PDB_IO_LOAD        	 MBACHMT.SWINGBENCH0	 2.4
	 PDB_IO_LOAD        	 MBACHMT.SWINGBENCH1	 2.6
	 PDB_IO_LOAD        	 MBACHMT.SWINGBENCH2	 2.3
	 PDB_IO_RQ_LG       	 MBACHMT.SWINGBENCH0	 105,784 IO requests
	 PDB_IO_RQ_LG       	 MBACHMT.SWINGBENCH1	 88,549 IO requests
	 PDB_IO_RQ_LG       	 MBACHMT.SWINGBENCH2	 61,617 IO requests
	 PDB_IO_RQ_LG_SEC   	 MBACHMT.SWINGBENCH0	 365 IO/sec
	 PDB_IO_RQ_LG_SEC   	 MBACHMT.SWINGBENCH1	 292 IO/sec
	 PDB_IO_RQ_LG_SEC   	 MBACHMT.SWINGBENCH2	 145 IO/sec
	 PDB_IO_TM_LG       	 MBACHMT.SWINGBENCH0	 3,822,888,945 us
	 PDB_IO_TM_LG       	 MBACHMT.SWINGBENCH1	 3,355,167,650 us
	 PDB_IO_TM_LG       	 MBACHMT.SWINGBENCH2	 2,004,747,904 us
	 PDB_IO_TM_LG_RQ    	 MBACHMT.SWINGBENCH0	 1.0 us/request
	 PDB_IO_TM_LG_RQ    	 MBACHMT.SWINGBENCH1	 1.0 us/request
	 PDB_IO_TM_LG_RQ    	 MBACHMT.SWINGBENCH2	 1.0 us/request
	 PDB_IO_UTIL_LG     	 MBACHMT.SWINGBENCH0	 27 %
	 PDB_IO_UTIL_LG     	 MBACHMT.SWINGBENCH1	 20 %
	 PDB_IO_UTIL_LG     	 MBACHMT.SWINGBENCH2	 9 %
	 PDB_IO_WT_LG       	 MBACHMT.SWINGBENCH0	 108,668,272 ms
	 PDB_IO_WT_LG       	 MBACHMT.SWINGBENCH1	 105,099,717 ms
	 PDB_IO_WT_LG       	 MBACHMT.SWINGBENCH2	 132,192,319 ms
	 PDB_IO_WT_LG_RQ    	 MBACHMT.SWINGBENCH0	 1,655 ms/request
	 PDB_IO_WT_LG_RQ    	 MBACHMT.SWINGBENCH1	 2,979 ms/request
	 PDB_IO_WT_LG_RQ    	 MBACHMT.SWINGBENCH2	 4,921 ms/request

Some of the numbers don’t seem to make sense here – for example PDB_FD_IO_RQ_LG, as the values are very similar. This is actually a feature (really!), because some metrics are cumulative and some are instantaneous:

CELLCLI> list metricdefinition where name = 'PDB_FD_IO_RQ_LG' detail
	 name:              	 PDB_FD_IO_RQ_LG
	 description:       	 "Number of large IO requests issued by this pluggable database to flash disks"
	 metricType:        	 Cumulative
	 objectType:        	 IORM_PLUGGABLE_DATABASE
	 unit:              	 "IO requests"

So this is a cumulative metric. Others, like PDB_IO_RQ_LG_SEC, measure the state “as is”:

CELLCLI> list metricdefinition where name = 'PDB_IO_RQ_LG_SEC' detail
	 name:              	 PDB_IO_RQ_LG_SEC
	 description:       	 "Number of large IO requests issued by this pluggable database to hard disks per second"
	 metricType:        	 Rate
	 objectType:        	 IORM_PLUGGABLE_DATABASE
	 unit:              	 IO/sec
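If you need a rate the cells don’t report directly, you can derive one from a cumulative metric by sampling it twice (illustrative numbers): if PDB_FD_IO_RQ_LG reads 4,797,336 at one collection and 4,810,074 sixty seconds later, the average large-request rate over that minute is (4,810,074 − 4,797,336) / 60 ≈ 212 IO/sec. The metrichistory object keeps these periodic samples for exactly this kind of calculation.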

Have fun!