Oakies Blog Aggregator

12.1.0.5 Hybrid Cloud Cloning

Last week’s release of 12.1.0.5 was a pleasant surprise for everyone out in the Oracle world.  This release hit the bulls-eye for another of Oracle’s cloud targets, introducing Enterprise Manager 12c’s “single pane of glass” management of the hybrid cloud.  The EM12c team has been trained on and testing the new features of this release with great enthusiasm and I have to admit, it’s pretty cool stuff, folks!

Why is the Hybrid Cloud Important?

Many companies are still a bit hesitant to embrace the cloud, or, due to sensitive data and security requirements, aren’t able to take advantage of cloud offerings for their production systems.  Possessing a powerful tool like Enterprise Manager to help guide them to the cloud could make all the difference:

[Image: hybcld1]

 

You’re going to start hearing the EM folks use the term “Single Pane of Glass” a lot in the upcoming months.  It’s part of the overall move to take Enterprise Manager beyond the perception that it is still a DBA tool and to get everyone to embrace the truth that EM12c has grown into an infrastructure tool.

What is the hybrid cloud?

As we’ve discussed, many companies are taking baby-steps with the hybrid cloud, (vs. others that are jumping in, feet first! :)).  With the hybrid cloud, a company can uphold those security requirements and keep its production systems on-premise, while enforcing data masking and subsetting so that sensitive data is never presented outside the production database, (including to the test master database that is used to track the changes in the snapclone copies…).  Database as a Service then allows them to clone development, test and QA environments to a less expensive cloud storage platform without exposing any sensitive data.

[Image: clone_datamask]

Once the data masking or any other pre-clone data cleansing/subsetting is performed, the Test Master database is created and can be used to create as many snap clones as needed.  These snaps can be used for development, QA or testing.  The space savings increase as snapclone copies are added, since each copy stores only its changed blocks against the test master database.  This can add up to a 90% storage savings over traditional full database copies.
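
As a rough illustration of where that number comes from (my own arithmetic, not from the post): beyond the one-off, full-sized test master, ten 1 TB clones that each change 5% of their blocks consume only about 0.5 TB of new storage between them, instead of the 10 TB that ten full copies would require.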

Hybrid Cloning

The power of hybrid cloning is the Hybrid Cloud Gateway, a secure SSH tunnel that allows seamless communication between on-premise systems and the cloud.

[Image: hybcld3]

Types of Clones

There are four types of clones currently offered with hybrid cloning:

  • On-premise source cloned to the cloud.
  • Cloud source, cloned to on-premise.
  • Cloud source cloned in the cloud.
  • Migrate from a schema in a database to a PDB in the cloud.

Simplicity is Key

The user interface is simple to engage: use it to create a clone or clones, save off templates, and build out a catalog to be used for a self-service portal.  When cloning, the status dashboard gives a great quick view of the success of each cloning step:

[Image: hybcld4]

If deeper investigation of any single step needs to be performed, the logging is no different from inspecting an EM job log, (because an EM job is exactly what it is… :)):

[Image: hybcld5]

I’ll be returning from Europe soon and hope to do more with the product, digging into this great new feature, but until then, here’s a great overview of 12.1.0.5’s brand new star!

 




Copyright © DBA Kevlar [12.1.0.5 Hybrid Cloud Cloning], All Rights Reserved. 2015.

Oracle Enterprise Manager Cloud Control 12c Release 5 (12.1.0.5) : My first two installations

I’ve done a couple of play installations of EM12c 12.1.0.5, just to get a feel for it. You can see the result of that here.

From an installation perspective, everything was pretty similar to the previous releases. I tried the installation on both OL5 and OL6, in both cases using 12c as the database repository. No dramas there.

A couple of things of note.

  1. The 12c repository template database uses the non-CDB architecture (a quick check is shown below).
  2. The WebLogic installation uses Java 6.
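
If you want to confirm the repository architecture for yourself, a quick check (this is standard 12c SQL, nothing EM-specific) is:

select name, cdb, con_id from v$database;

A non-CDB reports CDB = 'NO'.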

Interesting…

The next step is to try some upgrades from EM 12.1.0.4 (on DB 11.2.0.4) to EM 12.1.0.5, which is what I’ll need for my upgrades at work. The testing is quite time-consuming and boring, but it’s got to be done before I can unleash this on the company. :)

Cheers

Tim…

PS. Remember to download from edelivery.oracle.com (in a couple of days) for your production installations. Apparently there is a difference to the license agreement.


Oracle Enterprise Manager Cloud Control 12c Release 5 (12.1.0.5) : My first two installations was first posted on June 17, 2015 at 4:55 pm.

Reverse Key

A question came up on the OTN database forum recently asking if you could have a partitioned index on a non-partitioned table.

(Aside: I’m not sure whether it would be quicker to read the manuals or to try the experiment – either would probably be quicker than posing the question to the forum. As so often happens with these RTFM questions, the OP didn’t bother to acknowledge any of the responses.)

The answer to the question is yes – you can create a globally partitioned index, though if it uses range partitioning you have to specify a MAXVALUE partition. The interesting thing about the question, though, is that several people tried to guess why it had been asked and then made suggestions based on the most likely guess (and wouldn’t it have been nice to see some response from the OP). The common guess was that there was a performance problem with the high-value block of a sequence-based (or time-based) index – a frequent source of “buffer busy wait” events and other nasty side effects.
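
For reference, a minimal sketch of the two flavours of DDL (the table and index names here are mine, not from the original question):

create table t(id number not null, padding varchar2(100));

-- a range-partitioned global index must include a MAXVALUE partition
create index t_range_idx on t(id) global
partition by range (id) (
	partition p1 values less than (1000000),
	partition p_max values less than (maxvalue)
);

-- a hash-partitioned global index just needs a partition count
create index t_hash_idx on t(padding) global
partition by hash (padding) partitions 8;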

Unfortunately too many people suggested a reverse key index as a solution to this “right-hand” problem. If you’re licensed for partitioning it’s almost certain that a better option would simply be to use global hash partitioning (with 2^N partitions for some N). Using reverse keys can result in a bigger performance problem than the one you’re trying to avoid – you may end up turning a little time spent on buffer busy waits into a large amount of time spent on db file sequential reads. To demonstrate the issue I’ve created a sample script – and adjusted my buffer cache down to the appropriate scale:

create table t1(
	id	not null
)
nologging
as
with generator as (
	select	--+ materialize
		rownum id 
	from dual 
	connect by 
		rownum <= 1e4
)
select
	1e7 + rownum	id
from
	generator	v1,
	generator	v2
where
	rownum <= 1e7 
;

begin
	dbms_stats.gather_table_stats(
		ownname		 => user,
		tabname		 =>'T1'
	);
end;
/

alter table t1 add constraint t1_pk primary key(id) 
using index 
	reverse 
	nologging 
;

alter system flush buffer_cache;
alter session set events '10046 trace name context forever, level 8';

begin
	for i in 20000001..20010000 loop
		insert into t1 values(i);
	end loop;
end;
/

I’ve created a table with 10,000,000 rows using a sequential value as the primary key, then inserted “the next” 10,000 rows into the table in order. The index occupied about 22,000 blocks, so to make my demonstration show you the type of effect you could get from a busy production system with more tables and many indexes, I ran my test with the buffer cache limited to 6,000 blocks – a fair fraction of the total index size. Here’s a small section of the trace file from the test running 10.2.0.3 on an elderly machine:


WAIT #43: nam='db file sequential read' ela= 13238 file#=6 block#=12653 blocks=1 obj#=63623 tim=3271125590
WAIT #43: nam='db file sequential read' ela=  7360 file#=6 block#=12749 blocks=1 obj#=63623 tim=3271133150
WAIT #43: nam='db file sequential read' ela=  5793 file#=6 block#=12844 blocks=1 obj#=63623 tim=3271139110
WAIT #43: nam='db file sequential read' ela=  5672 file#=6 block#=12940 blocks=1 obj#=63623 tim=3271145028
WAIT #43: nam='db file sequential read' ela= 15748 file#=5 block#=13037 blocks=1 obj#=63623 tim=3271160998
WAIT #43: nam='db file sequential read' ela=  8080 file#=5 block#=13133 blocks=1 obj#=63623 tim=3271169314
WAIT #43: nam='db file sequential read' ela=  8706 file#=5 block#=13228 blocks=1 obj#=63623 tim=3271178240
WAIT #43: nam='db file sequential read' ela=  7919 file#=5 block#=13325 blocks=1 obj#=63623 tim=3271186372
WAIT #43: nam='db file sequential read' ela= 15553 file#=6 block#=13549 blocks=1 obj#=63623 tim=3271202115
WAIT #43: nam='db file sequential read' ela=  7044 file#=6 block#=13644 blocks=1 obj#=63623 tim=3271209420
WAIT #43: nam='db file sequential read' ela=  6062 file#=6 block#=13741 blocks=1 obj#=63623 tim=3271215648
WAIT #43: nam='db file sequential read' ela=  6067 file#=6 block#=13837 blocks=1 obj#=63623 tim=3271221887
WAIT #43: nam='db file sequential read' ela= 11516 file#=5 block#=13932 blocks=1 obj#=63623 tim=3271234852
WAIT #43: nam='db file sequential read' ela=  9295 file#=5 block#=14028 blocks=1 obj#=63623 tim=3271244368
WAIT #43: nam='db file sequential read' ela=  9466 file#=5 block#=14125 blocks=1 obj#=63623 tim=3271254002
WAIT #43: nam='db file sequential read' ela=  7704 file#=5 block#=14221 blocks=1 obj#=63623 tim=3271261991
WAIT #43: nam='db file sequential read' ela= 16319 file#=6 block#=14444 blocks=1 obj#=63623 tim=3271278492
WAIT #43: nam='db file sequential read' ela=  7416 file#=6 block#=14541 blocks=1 obj#=63623 tim=3271286129
WAIT #43: nam='db file sequential read' ela=  5748 file#=6 block#=14637 blocks=1 obj#=63623 tim=3271292163
WAIT #43: nam='db file sequential read' ela=  7131 file#=6 block#=14732 blocks=1 obj#=63623 tim=3271299489
WAIT #43: nam='db file sequential read' ela= 16126 file#=5 block#=14829 blocks=1 obj#=63623 tim=3271315883
WAIT #43: nam='db file sequential read' ela=  7746 file#=5 block#=14925 blocks=1 obj#=63623 tim=3271323845
WAIT #43: nam='db file sequential read' ela=  9208 file#=5 block#=15020 blocks=1 obj#=63623 tim=3271333239
WAIT #43: nam='db file sequential read' ela=  7708 file#=5 block#=15116 blocks=1 obj#=63623 tim=3271341141
WAIT #43: nam='db file sequential read' ela= 15484 file#=6 block#=15341 blocks=1 obj#=63623 tim=3271356807
WAIT #43: nam='db file sequential read' ela=  5488 file#=6 block#=15437 blocks=1 obj#=63623 tim=3271362623
WAIT #43: nam='db file sequential read' ela= 10447 file#=6 block#=15532 blocks=1 obj#=63623 tim=3271373342
WAIT #43: nam='db file sequential read' ela= 12565 file#=6 block#=15629 blocks=1 obj#=63623 tim=3271386741
WAIT #43: nam='db file sequential read' ela= 17168 file#=5 block#=15725 blocks=1 obj#=63623 tim=3271404135
WAIT #43: nam='db file sequential read' ela=  7542 file#=5 block#=15820 blocks=1 obj#=63623 tim=3271411882
WAIT #43: nam='db file sequential read' ela=  9400 file#=5 block#=15917 blocks=1 obj#=63623 tim=3271421514
WAIT #43: nam='db file sequential read' ela=  7804 file#=5 block#=16013 blocks=1 obj#=63623 tim=3271429519
WAIT #43: nam='db file sequential read' ela= 14470 file#=6 block#=16237 blocks=1 obj#=63623 tim=3271444168
WAIT #43: nam='db file sequential read' ela=  5788 file#=6 block#=16333 blocks=1 obj#=63623 tim=3271450154
WAIT #43: nam='db file sequential read' ela=  9630 file#=6 block#=16429 blocks=1 obj#=63623 tim=3271460008
WAIT #43: nam='db file sequential read' ela= 10910 file#=6 block#=16525 blocks=1 obj#=63623 tim=3271471174
WAIT #43: nam='db file sequential read' ela= 15683 file#=5 block#=16620 blocks=1 obj#=63623 tim=3271487065
WAIT #43: nam='db file sequential read' ela=  8094 file#=5 block#=16717 blocks=1 obj#=63623 tim=3271495454
WAIT #43: nam='db file sequential read' ela=  6670 file#=5 block#=16813 blocks=1 obj#=63623 tim=3271502293
WAIT #43: nam='db file sequential read' ela=  7852 file#=5 block#=16908 blocks=1 obj#=63623 tim=3271510360
WAIT #43: nam='db file sequential read' ela= 10500 file#=6 block#=17133 blocks=1 obj#=63623 tim=3271521039
WAIT #43: nam='db file sequential read' ela= 11038 file#=6 block#=17229 blocks=1 obj#=63623 tim=3271532275
WAIT #43: nam='db file sequential read' ela= 12432 file#=6 block#=17325 blocks=1 obj#=63623 tim=3271544974
WAIT #43: nam='db file sequential read' ela=  7784 file#=6 block#=17421 blocks=1 obj#=63623 tim=3271553331
WAIT #43: nam='db file sequential read' ela=  7774 file#=5 block#=17517 blocks=1 obj#=63623 tim=3271561346
WAIT #43: nam='db file sequential read' ela=  6583 file#=5 block#=17613 blocks=1 obj#=63623 tim=3271568146
WAIT #43: nam='db file sequential read' ela=  7901 file#=5 block#=17708 blocks=1 obj#=63623 tim=3271576231
WAIT #43: nam='db file sequential read' ela=  6667 file#=5 block#=17805 blocks=1 obj#=63623 tim=3271583259
WAIT #43: nam='db file sequential read' ela=  9427 file#=6 block#=18029 blocks=1 obj#=63623 tim=3271592988
WAIT #43: nam='db file sequential read' ela= 52334 file#=6 block#=18125 blocks=1 obj#=63623 tim=3271646055
WAIT #43: nam='db file sequential read' ela= 50512 file#=6 block#=18221 blocks=1 obj#=63623 tim=3271697284
WAIT #43: nam='db file sequential read' ela= 10095 file#=6 block#=18317 blocks=1 obj#=63623 tim=3271708095

Check the block numbers in this list of single block reads – we’re jumping through the index about 100 blocks at a time to read the next block where an index entry has to go. The jumps are the expected (and designed) effect of reverse key indexes: the fact that the jumps turn into physical disc reads is the (possibly unexpected) side effect. Reversing an index makes adjacent values look very different (by reversing the bytes) and go to different index leaf blocks: the purpose of the exercise is to scatter concurrent similar inserts across multiple blocks, but if you scatter the index entries you need to buffer a lot more of the index to keep the most recently used values in memory. Reversing the index may eliminate buffer busy waits, but it may dramatically increase the time lost on db file sequential reads.
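
To see why the reversal scatters entries, here’s a little query of my own (not part of the original test): consecutive key values typically differ only in their low-order bytes of internal representation, so once the bytes are reversed they differ in their leading bytes and land in widely separated leaf blocks:

select
	1e7 + rownum		id,
	dump(1e7 + rownum)	byte_representation
from
	dual
connect by
	rownum <= 3
;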

Here’s a short list of interesting statistics from this test – this time running on 11.2.0.4 on a machine with SSDs – comparing the effects of reversing the index with those of not reversing the index, normal index first:


Normal index
------------
CPU used by this session               83
DB time                                97
db block gets                      40,732
physical reads                         51
db block changes                   40,657
redo entries                       20,174
redo size                       5,091,436
undo change vector size         1,649,648

Repeat with reverse key index
-----------------------------
CPU used by this session              115
DB time                               121
db block gets                      40,504
physical reads                     10,006
db block changes                   40,295
redo entries                       19,973
redo size                       4,974,820
undo change vector size         1,639,232

Because of the SSDs there’s little difference in timing between the two sets of data and, in fact, all the other measures of work done are very similar except for the physical reads, and the increase in reads is probably the cause of the extra CPU time, thanks to both the LRU manipulation and the interaction with the operating system.

If you want to check the effect of index reversal you can take advantage of the sys_op_lbid() function to sample a little of your data – in my case I’ve queried the last 10,000 rows (values) in the table:


select 
	/*+ 
		cursor_sharing_exact 
		dynamic_sampling(0) 
		no_monitoring 
		no_expand 
		index_ffs(t1,t1_i1) 
		noparallel_index(t,t1_i1) 
	*/ 
	count (distinct sys_op_lbid( &m_ind_id ,'L',t1.rowid)) as leaf_blocks
from 
	t1
where 
	id between 2e7 + 1 and 2e7 + 1e4
;

The &m_ind_id substitution variable is the object_id of the index t1_i1.
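
To find that object_id you can, for example, query user_objects (assuming the index sits in your own schema):

select
	object_id
from
	user_objects
where
	object_name = 'T1_I1'
and	object_type = 'INDEX'
;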

In my case, with an index of 22,300 leaf blocks, my 10,000 consecutive values were scattered over 9,923 leaf blocks. If I want access to “recent data” to be as efficient as possible I need to keep that many blocks of the index cached, compared to the (absolute) worst case for my data of 100 leaf blocks. When you reverse key an index you have to think about how much bigger you have to make your buffer cache to keep the performance constant.

HOWTO: Check if an XMLType View or Table is Hierarchy Enabled

The following simple, code snippet, demonstrates how you can check if an XMLType view or…

IO Resource Manager for Pluggable Databases in Exadata

Another interesting topic that goes over and above the CDB Resource Manager Plans I described earlier this week is the implementation of IORM Plans for Pluggable Databases. Pluggable Databases are an interesting object for studies, and I like to research things. When 12c came out there was initially no support for offloading; you need to be on cell software 12.1.x.x.x for full 12c support on Exadata. One aspect I was particularly interested in was PDBs and IORM. Limiting CPU on a PDB basis is already a cool thing, but in a hugely consolidated environment it does make sense to limit I/O based on a consumer group as well. Another one of the perks of working on Exadata :)

According to what I found out PDBs are treated as entities within the database, and they probably fall into the category of intra-database IORM. I have previously written about how DBRM plans filter down to the cells and become intra-database resource plans. This seems to be happening here, too.

IORM metric definitions

When looking at IORM I’m old fashioned and like to rely on the command line. More GUI-oriented people should consider using OEM 12c to get similar data, but in pretty pictures. Since OEM taps into cellcli under the covers, though, it is only fair to try to understand the underlying technology.

The cells provide a lot of performance information in the metriccurrent and metrichistory views. You are shown metrics based on a name and objectType. For IORM the following objectTypes are of interest in general:

[celladmin@enkcel04 ~]$ cellcli -e "list metricdefinition attributes objectType" | grep IORM | sort | uniq
	 IORM_CATEGORY
	 IORM_CONSUMER_GROUP
	 IORM_DATABASE
	 IORM_PLUGGABLE_DATABASE

This was executed on Exadata 12.1.2.1.1; if you try it on an 11.2.3.x Exadata release you won’t see the IORM_PLUGGABLE_DATABASE category. The question is: which metrics are gathered in 12.1.2.1.1?

CELLCLI> list metricdefinition attributes name,description where objectType = 'IORM_PLUGGABLE_DATABASE'
	 PDB_FC_BY_ALLOCATED	 "Number of megabytes allocated in flash cache for this pluggable database"
	 PDB_FC_IO_BY_SEC   	 "Number of megabytes of I/O per second for this pluggable database to flash cache"
	 PDB_FC_IO_RQ       	 "Number of IO requests issued by this pluggable database to flash cache"
	 PDB_FC_IO_RQ_LG    	 "Number of large IO requests issued by this pluggable database to flash cache"
	 PDB_FC_IO_RQ_LG_SEC	 "Number of large IO requests issued by this pluggable database to flash cache per second"
	 PDB_FC_IO_RQ_SEC   	 "Number of IO requests issued by this pluggable database to flash cache per second"
	 PDB_FC_IO_RQ_SM    	 "Number of small IO requests issued by this pluggable database to flash cache"
	 PDB_FC_IO_RQ_SM_SEC	 "Number of small IO requests issued by this pluggable database to flash cache per second"
	 PDB_FD_IO_BY_SEC   	 "Number of megabytes of I/O per second for this pluggable database to flash disks"
	 PDB_FD_IO_LOAD     	 "Average I/O load from this pluggable database for flash disks"
	 PDB_FD_IO_RQ_LG    	 "Number of large IO requests issued by this pluggable database to flash disks"
	 PDB_FD_IO_RQ_LG_SEC	 "Number of large IO requests issued by this pluggable database to flash disks per second"
	 PDB_FD_IO_RQ_SM    	 "Number of small IO requests issued by this pluggable database to flash disks"
	 PDB_FD_IO_RQ_SM_SEC	 "Number of small IO requests issued by this pluggable database to flash disks per second"
	 PDB_FD_IO_TM       	 "The cumulative latency of reading blocks by this pluggable database from flash disks"
	 PDB_FD_IO_TM_RQ    	 "The rate which is the average latency of reading or writing blocks per request by this pluggable database from flash disks"
	 PDB_FD_IO_UTIL     	 "Percentage of flash resources utilized by this pluggable database"
	 PDB_FD_IO_WT_LG    	 "IORM wait time for large IO requests issued to flash disks by this pluggable database"
	 PDB_FD_IO_WT_LG_RQ 	 "Average IORM wait time per request for large IO requests issued to flash disks by this pluggable database"
	 PDB_FD_IO_WT_SM    	 "IORM wait time for small IO requests issued to flash disks by this pluggable database"
	 PDB_FD_IO_WT_SM_RQ 	 "Average IORM wait time per request for small IO requests issued to flash disks by this pluggable database"
	 PDB_IO_BY_SEC      	 "Number of megabytes of I/O per second for this pluggable database to hard disks"
	 PDB_IO_LOAD        	 "Average I/O load from this pluggable database for hard disks"
	 PDB_IO_RQ_LG       	 "Number of large IO requests issued by this pluggable database to hard disks"
	 PDB_IO_RQ_LG_SEC   	 "Number of large IO requests issued by this pluggable database to hard disks per second"
	 PDB_IO_RQ_SM       	 "Number of small IO requests issued by this pluggable database to hard disks"
	 PDB_IO_RQ_SM_SEC   	 "Number of small IO requests issued by this pluggable database to hard disks per second"
	 PDB_IO_TM_LG       	 "The cumulative latency of reading or writing large blocks by this pluggable database from hard disks"
	 PDB_IO_TM_LG_RQ    	 "The rate which is the average latency of reading or writing large blocks per request by this pluggable database from hard disks"
	 PDB_IO_TM_SM       	 "The cumulative latency of reading or writing small blocks by this pluggable database from hard disks"
	 PDB_IO_TM_SM_RQ    	 "The rate which is the average latency of reading or writing small blocks per request by this pluggable database from hard disks"
	 PDB_IO_UTIL_LG     	 "Percentage of disk resources utilized by large requests from this pluggable database"
	 PDB_IO_UTIL_SM     	 "Percentage of disk resources utilized by small requests from this pluggable database"
	 PDB_IO_WT_LG       	 "IORM wait time for large IO requests issued to hard disks by this pluggable database"
	 PDB_IO_WT_LG_RQ    	 "Average IORM wait time per request for large IO requests issued to hard disks by this pluggable database"
	 PDB_IO_WT_SM       	 "IORM wait time for small IO requests issued to hard disks by this pluggable database"
	 PDB_IO_WT_SM_RQ    	 "Average IORM wait time per request for small IO requests issued to hard disks by this pluggable database"

IORM metrics for PDBs

You still can’t “join” metricdefinition to metriccurrent, but it is possible to use the objectType in metriccurrent, too. The way that Oracle externalises information about PDBs is as shown here:

CellCLI> list metriccurrent where objectType = 'IORM_PLUGGABLE_DATABASE' and metricObjectName like 'MBACHMT.*'
	 PDB_FC_BY_ALLOCATED	 MBACHMT.CDB$ROOT   	 1,843 MB
	 PDB_FC_BY_ALLOCATED	 MBACHMT.PDB$SEED   	 41.500 MB
	 PDB_FC_BY_ALLOCATED	 MBACHMT.SWINGBENCH0	 3,772 MB
	 PDB_FC_BY_ALLOCATED	 MBACHMT.SWINGBENCH1	 3,374 MB
	 PDB_FC_BY_ALLOCATED	 MBACHMT.SWINGBENCH2	 3,280 MB
...

PDB_FC_BY_ALLOCATED translates to “Number of megabytes allocated in flash cache for this pluggable database”, as per the listing above. The Metric Object Name is therefore made up of the database name (I continue using MBACHMT as my CDB) and the container name. CDB$ROOT stands for the root, PDB$SEED for the seed database, and then come the various PDB names you define. In my example I have “user PDBs” defined as swingbench{0,1,2}.

Stressing it

With the basics covered it is time to run some stress testing. I have created a 6 GB table named IOTEST in all my swingbench* PDBs and will use the same script to issue 80 sessions against that table in each PDB. My CDB resource manager plan is still the same, repeated here for your convenience:

begin
 dbms_resource_manager.clear_pending_area;
 dbms_resource_manager.create_pending_area;
 
 dbms_resource_manager.create_cdb_plan(
  plan => 'ENKITC_CDB_PLAN',
  comment => 'A CDB plan for 12c'
 );
 
 dbms_resource_manager.create_cdb_plan_directive(
  plan => 'ENKITC_CDB_PLAN',
  pluggable_database => 'swingbench0',
  shares => 5,
  utilization_limit => 100);
 
 dbms_resource_manager.create_cdb_plan_directive(
  plan => 'ENKITC_CDB_PLAN',
  pluggable_database => 'swingbench1',
  shares => 3,
  utilization_limit => 50);
 
 dbms_resource_manager.create_cdb_plan_directive(
  plan => 'ENKITC_CDB_PLAN',
  pluggable_database => 'swingbench2',
  shares => 1,
  utilization_limit => 30);
 
 dbms_resource_manager.validate_pending_area;
 dbms_resource_manager.submit_pending_area;
end;
/
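
The load script itself isn’t reproduced in the post. Purely as an illustration of one way to generate such a load (the IOTEST table name comes from the description above; everything else here is my assumption, not the original script), a PL/SQL block like this, run once in each swingbench PDB, would launch 80 concurrent full scans via the scheduler:

begin
 -- launch 80 independent scheduler jobs, each full-scanning IOTEST
 for i in 1 .. 80 loop
  dbms_scheduler.create_job(
   job_name   => 'IOTEST_LOAD_' || i,
   job_type   => 'PLSQL_BLOCK',
   job_action => 'declare n number; begin select /*+ full(t) */ count(*) into n from iotest t; end;',
   enabled    => true,   -- start immediately
   auto_drop  => true);  -- drop each job once it completes
 end loop;
end;
/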

If I execute the scripts concurrently (80 sessions connecting against each PDB), prefixing my scheduler script with the time command, I get the following results:

  • swingbench0 – 143s
  • swingbench1 – 223s
  • swingbench2 – 288s
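
That ordering is what the shares predict: with 5, 3 and 1 shares, swingbench0 is entitled to 5/9 (about 56%) of the contended I/O, swingbench1 to 3/9 (about 33%) and swingbench2 to 1/9 (about 11%), and the utilization_limit caps of 100/50/30 percent push in the same direction – so elapsed times of 143s, 223s and 288s rank exactly as the plan intends.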

Interestingly, there is no event that would show I/O throttling in 12.1.0.2.2:

SYS:MBACHMT2> select count(*), con_id, event from v$session where username = 'SOE' group by con_id, event;

   COUNT(*)      CON_ID EVENT
----------- ----------- ----------------------------------------------------------------
         80           4 cell smart table scan
         80           5 cell smart table scan
         80           3 cell smart table scan

So all of them are Smart-Scanning. The fact that some of the sessions are throttled is not visible from the wait interface, or at least I haven’t seen a way to externalise I/O throttling. But it does happen. Using one of my favourite tools, metric_iorm.pl (available from MOS), I noticed the following:

Database: MBACHMT
Utilization:     Small=0%    Large=18%
Flash Cache:     IOPS=13087
Disk Throughput: MBPS=361
Small I/O's:     IOPS=4.0    Avg qtime=0.0ms
Large I/O's:     IOPS=346    Avg qtime=1026ms
	Consumer Group: SWINGBENCH1.SWINGBENCH_GROUP
	Utilization:     Small=0%    Large=4%
	Flash Cache:     IOPS=3482
	Disk Throughput: MBPS=91
	Small I/O's:     IOPS=0.0    Avg qtime=0.0ms
	Large I/O's:     IOPS=87.7    Avg qtime=1336ms
	Consumer Group: SWINGBENCH0.SWINGBENCH_GROUP
	Utilization:     Small=0%    Large=13%
	Flash Cache:     IOPS=8886
	Disk Throughput: MBPS=254
	Small I/O's:     IOPS=0.0    Avg qtime=0.0ms
	Large I/O's:     IOPS=244    Avg qtime=906ms
	Consumer Group: CDB$ROOT._ORACLE_LOWPRIBG_GROUP_
	Utilization:     Small=0%    Large=0%
	Flash Cache:     IOPS=0.8
	Disk Throughput: MBPS=0
	Small I/O's:     IOPS=0.0    Avg qtime=0.0ms
	Large I/O's:     IOPS=0.0    Avg qtime=0.0ms
	Consumer Group: CDB$ROOT.ORA$AUTOTASK
	Utilization:     Small=0%    Large=0%
	Flash Cache:     IOPS=1.4
	Disk Throughput: MBPS=0
	Small I/O's:     IOPS=0.4    Avg qtime=0.0ms
	Large I/O's:     IOPS=0.0    Avg qtime=0.0ms
	Consumer Group: CDB$ROOT._ORACLE_BACKGROUND_GROUP_
	Utilization:     Small=0%    Large=0%
	Flash Cache:     IOPS=0.7
	Disk Throughput: MBPS=0
	Small I/O's:     IOPS=3.6    Avg qtime=0.0ms
	Large I/O's:     IOPS=0.0    Avg qtime=0.0ms
	Consumer Group: SWINGBENCH2.SWINGBENCH_GROUP
	Utilization:     Small=0%    Large=0%
	Flash Cache:     IOPS=717
	Disk Throughput: MBPS=15
	Small I/O's:     IOPS=0.0    Avg qtime=0.0ms
	Large I/O's:     IOPS=14.5    Avg qtime=1152ms

These are statistics from a single cell – this X2-2 has 3 of them. I have also gathered some of the raw stats here in case you are interested, again from a single cell:

CellCLI> list metriccurrent where name like 'PDB.*' and metricObjectName like 'MBACHMT.SW.*' and metricValue not like '0.*';
	 PDB_FC_BY_ALLOCATED	 MBACHMT.SWINGBENCH0	 3,779 MB
	 PDB_FC_BY_ALLOCATED	 MBACHMT.SWINGBENCH1	 3,467 MB
	 PDB_FC_BY_ALLOCATED	 MBACHMT.SWINGBENCH2	 3,461 MB
	 PDB_FC_IO_BY_SEC   	 MBACHMT.SWINGBENCH0	 835 MB/sec
	 PDB_FC_IO_BY_SEC   	 MBACHMT.SWINGBENCH1	 635 MB/sec
	 PDB_FC_IO_BY_SEC   	 MBACHMT.SWINGBENCH2	 338 MB/sec
	 PDB_FC_IO_RQ       	 MBACHMT.SWINGBENCH0	 4,807,060 IO requests
	 PDB_FC_IO_RQ       	 MBACHMT.SWINGBENCH1	 4,835,038 IO requests
	 PDB_FC_IO_RQ       	 MBACHMT.SWINGBENCH2	 4,833,804 IO requests
	 PDB_FC_IO_RQ_LG    	 MBACHMT.SWINGBENCH0	 4,797,336 IO requests
	 PDB_FC_IO_RQ_LG    	 MBACHMT.SWINGBENCH1	 4,828,945 IO requests
	 PDB_FC_IO_RQ_LG    	 MBACHMT.SWINGBENCH2	 4,827,506 IO requests
	 PDB_FC_IO_RQ_LG_SEC	 MBACHMT.SWINGBENCH0	 12,738 IO/sec
	 PDB_FC_IO_RQ_LG_SEC	 MBACHMT.SWINGBENCH1	 9,691 IO/sec
	 PDB_FC_IO_RQ_LG_SEC	 MBACHMT.SWINGBENCH2	 5,158 IO/sec
	 PDB_FC_IO_RQ_SEC   	 MBACHMT.SWINGBENCH0	 12,789 IO/sec
	 PDB_FC_IO_RQ_SEC   	 MBACHMT.SWINGBENCH1	 9,721 IO/sec
	 PDB_FC_IO_RQ_SEC   	 MBACHMT.SWINGBENCH2	 5,182 IO/sec
	 PDB_FC_IO_RQ_SM    	 MBACHMT.SWINGBENCH0	 9,724 IO requests
	 PDB_FC_IO_RQ_SM    	 MBACHMT.SWINGBENCH1	 6,093 IO requests
	 PDB_FC_IO_RQ_SM    	 MBACHMT.SWINGBENCH2	 6,298 IO requests
	 PDB_FC_IO_RQ_SM_SEC	 MBACHMT.SWINGBENCH0	 51.2 IO/sec
	 PDB_FC_IO_RQ_SM_SEC	 MBACHMT.SWINGBENCH1	 30.2 IO/sec
	 PDB_FC_IO_RQ_SM_SEC	 MBACHMT.SWINGBENCH2	 23.9 IO/sec
	 PDB_FD_IO_BY_SEC   	 MBACHMT.SWINGBENCH0	 835 MB/sec
	 PDB_FD_IO_BY_SEC   	 MBACHMT.SWINGBENCH1	 635 MB/sec
	 PDB_FD_IO_BY_SEC   	 MBACHMT.SWINGBENCH2	 338 MB/sec
	 PDB_FD_IO_LOAD     	 MBACHMT.SWINGBENCH0	 30.3
	 PDB_FD_IO_LOAD     	 MBACHMT.SWINGBENCH1	 30.4
	 PDB_FD_IO_LOAD     	 MBACHMT.SWINGBENCH2	 28.1
	 PDB_FD_IO_RQ_LG    	 MBACHMT.SWINGBENCH0	 4,797,336 IO requests
	 PDB_FD_IO_RQ_LG    	 MBACHMT.SWINGBENCH1	 4,828,945 IO requests
	 PDB_FD_IO_RQ_LG    	 MBACHMT.SWINGBENCH2	 4,827,506 IO requests
	 PDB_FD_IO_RQ_LG_SEC	 MBACHMT.SWINGBENCH0	 12,738 IO/sec
	 PDB_FD_IO_RQ_LG_SEC	 MBACHMT.SWINGBENCH1	 9,691 IO/sec
	 PDB_FD_IO_RQ_LG_SEC	 MBACHMT.SWINGBENCH2	 5,158 IO/sec
	 PDB_FD_IO_TM       	 MBACHMT.SWINGBENCH0	 69,803,464 us
	 PDB_FD_IO_TM       	 MBACHMT.SWINGBENCH1	 45,061,357 us
	 PDB_FD_IO_TM       	 MBACHMT.SWINGBENCH2	 40,433,099 us
	 PDB_FD_IO_TM_RQ    	 MBACHMT.SWINGBENCH0	 1.0 us/request
	 PDB_FD_IO_TM_RQ    	 MBACHMT.SWINGBENCH1	 1.0 us/request
	 PDB_FD_IO_TM_RQ    	 MBACHMT.SWINGBENCH2	 1.0 us/request
	 PDB_FD_IO_UTIL     	 MBACHMT.SWINGBENCH0	 67 %
	 PDB_FD_IO_UTIL     	 MBACHMT.SWINGBENCH1	 54 %
	 PDB_FD_IO_UTIL     	 MBACHMT.SWINGBENCH2	 29 %
	 PDB_FD_IO_WT_LG    	 MBACHMT.SWINGBENCH0	 830,669 ms
	 PDB_FD_IO_WT_LG    	 MBACHMT.SWINGBENCH1	 717,211 ms
	 PDB_FD_IO_WT_LG    	 MBACHMT.SWINGBENCH2	 221,666 ms
	 PDB_IO_BY_SEC      	 MBACHMT.SWINGBENCH0	 380 MB/sec
	 PDB_IO_BY_SEC      	 MBACHMT.SWINGBENCH1	 305 MB/sec
	 PDB_IO_BY_SEC      	 MBACHMT.SWINGBENCH2	 151 MB/sec
	 PDB_IO_LOAD        	 MBACHMT.SWINGBENCH0	 2.4
	 PDB_IO_LOAD        	 MBACHMT.SWINGBENCH1	 2.6
	 PDB_IO_LOAD        	 MBACHMT.SWINGBENCH2	 2.3
	 PDB_IO_RQ_LG       	 MBACHMT.SWINGBENCH0	 105,784 IO requests
	 PDB_IO_RQ_LG       	 MBACHMT.SWINGBENCH1	 88,549 IO requests
	 PDB_IO_RQ_LG       	 MBACHMT.SWINGBENCH2	 61,617 IO requests
	 PDB_IO_RQ_LG_SEC   	 MBACHMT.SWINGBENCH0	 365 IO/sec
	 PDB_IO_RQ_LG_SEC   	 MBACHMT.SWINGBENCH1	 292 IO/sec
	 PDB_IO_RQ_LG_SEC   	 MBACHMT.SWINGBENCH2	 145 IO/sec
	 PDB_IO_TM_LG       	 MBACHMT.SWINGBENCH0	 3,822,888,945 us
	 PDB_IO_TM_LG       	 MBACHMT.SWINGBENCH1	 3,355,167,650 us
	 PDB_IO_TM_LG       	 MBACHMT.SWINGBENCH2	 2,004,747,904 us
	 PDB_IO_TM_LG_RQ    	 MBACHMT.SWINGBENCH0	 1.0 us/request
	 PDB_IO_TM_LG_RQ    	 MBACHMT.SWINGBENCH1	 1.0 us/request
	 PDB_IO_TM_LG_RQ    	 MBACHMT.SWINGBENCH2	 1.0 us/request
	 PDB_IO_UTIL_LG     	 MBACHMT.SWINGBENCH0	 27 %
	 PDB_IO_UTIL_LG     	 MBACHMT.SWINGBENCH1	 20 %
	 PDB_IO_UTIL_LG     	 MBACHMT.SWINGBENCH2	 9 %
	 PDB_IO_WT_LG       	 MBACHMT.SWINGBENCH0	 108,668,272 ms
	 PDB_IO_WT_LG       	 MBACHMT.SWINGBENCH1	 105,099,717 ms
	 PDB_IO_WT_LG       	 MBACHMT.SWINGBENCH2	 132,192,319 ms
	 PDB_IO_WT_LG_RQ    	 MBACHMT.SWINGBENCH0	 1,655 ms/request
	 PDB_IO_WT_LG_RQ    	 MBACHMT.SWINGBENCH1	 2,979 ms/request
	 PDB_IO_WT_LG_RQ    	 MBACHMT.SWINGBENCH2	 4,921 ms/request

Some of the numbers don’t seem to make sense here; for example the PDB_FD_IO_RQ_LG values, which are very similar across the PDBs. This is actually a feature (really!), because some metrics are cumulative, and some are instantaneous:

CELLCLI> list metricdefinition where name = 'PDB_FD_IO_RQ_LG' detail
	 name:              	 PDB_FD_IO_RQ_LG
	 description:       	 "Number of large IO requests issued by this pluggable database to flash disks"
	 metricType:        	 Cumulative
	 objectType:        	 IORM_PLUGGABLE_DATABASE
	 unit:              	 "IO requests"

So this is a cumulative metric. Others, like PDB_IO_RQ_LG_SEC measure the state “as is”:

CELLCLI> list metricdefinition where name = 'PDB_IO_RQ_LG_SEC' detail
	 name:              	 PDB_IO_RQ_LG_SEC
	 description:       	 "Number of large IO requests issued by this pluggable database to hard disks per second"
	 metricType:        	 Rate
	 objectType:        	 IORM_PLUGGABLE_DATABASE
	 unit:              	 IO/sec
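
Since a cumulative metric is only meaningful as the delta between two points in time, one way to work with it (my own suggestion, not something shown in this test) is to pull successive samples from metrichistory and difference them yourself; each sample carries a collectionTime, so the rate over an interval is the change in metricValue divided by the elapsed time:

CellCLI> list metrichistory attributes metricValue,collectionTime where name = 'PDB_FD_IO_RQ_LG' and metricObjectName = 'MBACHMT.SWINGBENCH0'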

Have fun!

A Database Wordfile…

It is not often that something like the following happens on Google while searching for…

Analysing UKOUG Presenters – I Know How You Performed.

These last few days I’ve been analysing how well received previous presentations (since 2006!) have been at the UKOUG tech conferences. It’s interesting to look at the information. I’ve learnt some interesting things about all those well-known names :-)

Like many conferences and user group meetings, during the conference and shortly afterwards the UKOUG ask attendees to feed back on the presentations, keynotes and round tables that people go to. If you chair a session, one of your tasks is to ask people to complete the feedback forms. Talks are judged on several aspects (concept, quality of slides, presentation skills, overall value and a couple more) from 1 (very poor) to 6 (excellent). You can also add a free-text comment. The reason for an even number of possible scores is to prevent a non-committal middle score. Why not 1-8 or 1-10? I don’t really know, but I did see a blog post recently about using a wider range and it seemed not to really add to the overall benefit of the feedback, as the very top and very bottom scores were never used. This information is compiled and fed back to the individual speakers, along with the average scores across the event. Speakers are very keen to know how they did compared to everyone else {no egos involved here :-) } and also to see any specific comments on their efforts. It is important to us speakers; we need to know if you liked or disliked what we did so we can improve. Or sulk.

This is an example of the feedback we get (one of mine, of course).
Speaker Scores

Something that has annoyed me for many years is that the speaker scores are not formally analysed and fed into the speaker selection process for future UKOUG conferences. I used to get really quite vexed by this {ie bad tempered and, well, annoying & complaining to the UKOUG office}. I suspected they were going to do to me what you should always consider doing to some loud-mouthed complainer and say “well, if you are so passionate about this – you damn well do it!”. So I offered to take the data and process it.

The information I received was not the raw feedback forms, but the average scores per talk – the information actually passed back to the presenters. Which is perfect for my purposes. I analysed data from 2006 to 2014 {though 2008 was missing}, so a pretty comprehensive data set. SIG talk feedback is also missing; I’ll work with the UKOUG office to incorporate that for version 2 of the analysis.

Compiling all the data into a single rating per speaker is more demanding than it first seems. Isn’t any data analysis project? Rather than consider all the areas speakers are judged on, I decided to score people on only two dimensions (areas to you and I) – presentation skills and overall value of the session. When we judge papers those are the two main things we want to know – can the person present well and do they usually give a talk people value?

Some of the challenges were:
– What to score speakers on – I’ve just said what I chose.
– People’s names. I need to group talks on this but it’s a free-text field, of course, so I had to clean it. I stripped off titles, edited variations and then reviewed. For women, their surname can alter with marital status {I find it rather archaic that this is still very common and almost exclusively impacts women. But then, I offered to my wife that we use her maiden name as our married one and she was fine to take mine.} But you also get variations on first names, spelling mistakes and alterations of title. I had to solve all of that to group scores.
– The number of feedback forms received for a given presentation. If a presentation only gets one feedback form, how reliable are the scores? If it gets 5 feedback forms, how reliable is that? 35 forms? I came up with a weighting based on the number of feedback forms, where 1 feedback score held less weight than 5, which held less weight than 15… etc. (There is a SQL sketch of the idea after this list.)
– Oddities of eg co-presenters or the “speaker” actually being a facilitator, as is often the case with Round Table sessions.
– The data, as held in Excel, being “damaged”, ie it caused my analysis issues as things had been done to the data to support other purposes – whatever was important to the UKOUG organising the conference that year. Sorting those issues out took up most of my efforts.
– The fact that I was using Excel as the analysis tool. I’m a SQL guy!!!! But the thing is, with a relatively small volume of data and a need to constantly visually check alterations, some things are just much easier in a tool like Excel than in SQL. And some things are way harder.
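
Had the data been in a database rather than Excel, the weighting idea might have looked something like this sketch – the table and column names are entirely hypothetical, it just shows the shape of the calculation:

-- one (hypothetical) row per talk: its average score and its form count
select
	speaker,
	round(
		sum(overall_value * least(feedback_forms, 15)) /
		sum(least(feedback_forms, 15))
	, 2)	as weighted_overall_value
from
	talk_scores
where
	feedback_forms > 0	-- ignore talks with no feedback at all
group by
	speaker
order by
	weighted_overall_value desc
;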

In the end, I got a set of scores that helped us on the Agenda Planning Day (well, it did in the database stream) and hopefully will develop over the years. It would be wrong of me to discuss how specific Oracle Names did, especially any who did poorly, but the scores informed our deliberations this year and should do so for years to come. If you want to contact me directly and ask me how you did – I won’t tell you (or anyone else). But I can talk about more generic things I discovered.

Over all presentations from 2006 to 2014:

  • the average number of feedback forms for a session is around 10
  • the average score for Presentation Skills is 4.6
  • the average score for Overall Value is 4.5

So almost 4.5 out of 6 as the average scores, which is “Good” to “Very Good”.

NB I do not calculate my averages in the same way as the UKOUG office.

Because I weight my scores and remove zero values (and probably a couple of other differences, such as my starting from averages rather than the raw scores), my average scores do not compare to, and are higher than, the ones they issue for events. I think I am a harsher judge :-)

So what were some of the interesting things I discovered?

  • Well, for starters, scores for a given presentation rarely hit as low as 3. In fact, except for a small number of stand-out-bad talks, most scores of 3 were where only 1 feedback score was received, some with 2. We don’t seem to like giving low feedback scores. The same goes for 6. I only saw 6 if the number of feedback forms was 1 or 2. So reliable scores are between 3.01 and 5.99 really.
  • As I was stripping off people’s title by manual replacement runs, I know how many Mrs, Ms, Miss, Dr etc we get. Miss and Ms go up – and down – over the years. It varies a lot, but Ms is becoming more common. What is disappointing is the consistently low number of presentations by women. But I know that in the years I have been involved, the proportion of presentations by women is in proportion to the number of submissions, or even a tad higher. Come on ladies, represent your constituents! Some of the highest speaking scores are by women.
  • Another thing I get from the person title, we get no papers submitted by Professors, Colonels (or any military bigwigs), members of the clergy or peers of the realm. Or members of royal families. They are simply not trying are they?
  • On a personal note – I am Utterly Average. Over 8 years I come in at number 296 and 298 out of 623 speakers for Presentation Skills and Overall Value respectively. Have you any idea how much that damaged my ego?!? I was gutted! Where I am a little more unusual is my average number of feedback scores, which is 21.6 – in the top 15%. I’m massaging my ego with that (it’s all I’ve got!).
    {what is really vexing is I dug out my scores from earlier years and they were better than my running average – and my scores are pulled down by one talk in 2010 where I really bombed. Have you any idea how tempting it was for me to delete that one talk out of the data set?}
  • Some speakers, a small number, always-always-always get high votes, mostly as they are excellent, but with an added slice, I suspect, of, well, being deeply respected. But interestingly, even well known people (what I think of as the ‘B’ list and even a couple of ‘A’ listers in my opinion) can bomb. Some regularly. I mean, if you saw the scores for….no, I won’t say :-).
    But the scores for individual speakers can and do vary. I saw one speaker, who in my opinion is a brilliant technician and a fantastic speaker, be up in the high 5.8’s for one talk and then down in the low 3’s for another. That made me dig in further, and there are several people I know and hold a similar opinion of who have high and low talks. So that makes me feel the user feedback scores are generally reliable, and even respected speakers will get a poor score if the talk misses its mark. The best just simply never miss the mark, or not by much.
  • Not to be too harsh, but if you score 4 or below for either presentation skills or overall value and got 3 or more feedback forms – you bombed.

But bombing occasionally is OK. I’ve bombed (well, this close to bombed) and I’ve learned. Many excellent presenters have bombed. We all alter in our presenting skills over time. Most of you get better over time – I’ve got a tiny bit worse! But if you bomb all the time? Then maybe presenting is not your thing. It is not the only route to spreading the word, maybe try writing. But, again to be harsh, if you can’t present we owe it to the delegates of the conference not to select you to present.

Those of us organising the content know, as a group, who the best speakers are. We ensure that they get slots. And we have a good feel for who the better speakers are and they get looked on “favorably”. We do this as we want the best content and experience for the audience. Eric Postlethwaite may be a genius at VPD and know it inside out, but if they present like a cardboard cut-out with bad breath then the session will be a failure. Judging scores are the top filter but we on the planning committee keep in mind how good a speaker is. What worried me was that this was not scientific, it was word-of-mouth and gut-feel, which is why I spent many days in Excel World to take the raw feedback and convert it into scores. I want the audience feedback to influence the content.

If you speak (or have spoken) for the first time and your scores are below average, don’t worry too much. As you can see from the above, you are up against a pre-selected set of known, excellent speakers. Hitting average is actually something of an achievement (and I would say that as I am Mr Average!).

One thing jumped out at me. I looked at all the comments (and I do mean all) for a couple of years and I noticed that you get the odd person who tries to “make a point” by adding the same comment to all the speakers’ feedback forms they fill in. Don’t do that. The speakers do not deserve your ire at the conference. It’s childish of you. If you want to raise an issue with the conference as a whole, don’t spam it on the speaker feedback forms, you dilbert; be an adult and contact someone involved in the conference organisation directly. Oddly enough, we DO like to have people come and say what they felt did not work, but spamming it on all the speaker feedback forms is just non-directed trolling.

I said I would not name names, it is not fair. But I’m going to name one though, and this is based on MY opinion of what I have seen looking at the stats. This is not official UKOUG opinion. Connor McDonald? Your presentation skills are awesome. I wanted to edit your scores down through pure envy. You are a good presenter, sir.

Feedback from the Oracle documentation team

I got some feedback from the Oracle documentation team, based on my recent post.

GUIDs

One of the concerns I raised was about how the GUIDs would be used in different releases of the documentation. Although I don’t like the look of the GUIDs, I can understand why they might be more convenient than trying to think of a neat, descriptive, human-readable slug. My concern was that the GUID might be unique for every incarnation of the same page. That is, a new GUID for the same page for each patchset, DB version and/or minor text correction. That would make it really hard to flick between versions, as you couldn’t predict what the page was called in each variant.

It seems my worries were unfounded. The intention is the GUID of a specific page will stay the same, regardless of patchset, DB version or document correction. That’s great news!

Broken Links

The team are trying to put some stuff in place to correct the broken links. I think I might know who is developing this solution. :)

The quick fix will be to direct previously broken links to the table of contents page of the appropriate manual. Later, they will attempt to provide topic-to-topic links. No promises here, but it sounds promising.

Conclusion

I’m going to continue to fix the broken links on my site as I want to maintain the direct topic links in the short term, but this sounds like really good news going forward.

It also sounds like the documentation team are feeling our pain and putting stuff in place to prevent this happening in future, which is fantastic news! :)

Note to self: It’s much better to engage with the right people and discuss the issue, rather than just bitch about stuff.

Cheers

Tim…


Feedback from the Oracle documentation team was first posted on June 16, 2015 at 10:36 am.

Oracle Enterprise Manager Cloud Control 12c Release 5 (12.1.0.5) – Just Born

Oracle Enterprise Manager Cloud Control 12c Release 5 (12.1.0.5) was announced a few days ago. I woke up today and checked the interwebs and it’s actually available for download.

I must admit I’m a little nervous about the upgrade. I had a few bad times with upgrades in the early days of Grid Control and Cloud Control and that has left me with a little bit of voodoo lurking in the back of my mind. The last couple of upgrades have been really easy, so I’m sure it will be fine, but that voodoo…

I’ll download it now and do a clean install. Then do a couple of practice upgrades. If all that goes well, I’ll schedule a date to sacrifice a chicken, raise a zombie from the dead to do my bidding, then do the real upgrade.

Cheers

Tim…

Update. Looking at the certification matrix, the repository is now certified on 12.1.0.2, as well as 11.2.0.4 and 11.2.0.3.

Update 2. Pete mentioned in the comments that 12.1.0.2 has been certified for the Cloud Control repository since March, with some restrictions. So it’s not new to this release. See the comments for details.

Update 3. Remember to download from edelivery.oracle.com (in a couple of days) for your production installations. Apparently there is a difference to the license agreement.


Oracle Enterprise Manager Cloud Control 12c Release 5 (12.1.0.5) – Just Born was first posted on June 16, 2015 at 7:49 am.

Indexing and Transparent Data Encryption Part III (You Can’t Do That)

In Part II of this series, we looked at how we can create a B-Tree index on an encrypted column, providing we do not apply salt during encryption. However, this is not the only restriction with regard to indexing an encrypted column using column-based encryption. If we attempt to create an index that is not a […]