Oakies Blog Aggregator

Enterprise Manager 13c and AWS

This posting is to try to clarify what is supported when using Enterprise Manager Cloud Control 13c (EM13c) and Amazon Web Services (AWS). The question came from some of our sales consultants who were finding confusing information about what is actually supported when using EM13c and AWS, so I have asked our Support folks to write a support note to provide the definitive answer on this. While that is being written, I thought I would post a quick blog that could be used to help clarify matters.

So let’s look at what the different scenarios are:

Using on-premises Enterprise Manager to manage public cloud resources without Amazon’s Virtual Private Cloud (VPC): The Hybrid Cloud Management functionality available in Enterprise Manager 12cR5 and beyond uses SSH tunneling without VPC to monitor Oracle Cloud resources. This set of features is ONLY supported for Oracle Cloud. In fact, the code will not allow the gateway agent to be configured for a 3rd party cloud.

NOTE: In this scenario, there is an existing EM plugin for EC2 that can create an Amazon target type and monitor some cloudwatch metrics using REST, but it’s only basic.

Using Enterprise Manager installed on Amazon to monitor resources on Amazon: This is the scenario where you deploy the OMS and all the targets on Amazon. In this case, Oracle has not explicitly certified the deployment scenario and will treat this as just another data center deployment, as long as all the relevant Enterprise Manager pre-requisites (and any over-arching Oracle policies) are being adhered to.

Using VPC: If you have implemented VPC to unify your data center and Amazon at the network layer, and you have http(s) access between the OMS and agents, then you can use Enterprise Manager installed anywhere within the VPC to monitor the resources within the VPC.

To summarize, barring the first scenario for Oracle Cloud, none of the other scenarios are certified, but we have not explicitly banned them either. We treat those scenarios as just forms of data center deployments and as long as the Enterprise Manager pre-requisites (OS versions, packages, network configurations and so on) and any over-arching Oracle policies are satisfied, we remain agnostic.

I hope that will clarify matters for you!


Short-term Answers Vs. The Correct Answer

Enterprise Manager does a LOT.  Sometimes it may do too much.  Customers on forums, through support, or via email and social media may ask us how to address something they see as not working right. The truth is, we could simply answer their question, but often they aren’t using the right tool for what they’re attempting.


The Export Feature in Enterprise Manager

A customer was frustrated as he was performing scheduled exports using the export utility found under the Database, Schema drop-down.  He wanted to perform these exports more than once per hour and was running into limitations in how often the export could run and in the file naming convention.  The scheduling mechanism isn’t quite as robust as he needed, so I understood his frustration, but I also realized that he was using the wrong tool for the job.  It seemed so natural to me that he should be using the EM Job System, but he really didn’t understand why he should use that when exports were right there in the drop-down.

Even though the export utility does offer the following:

  1. Export of a schema or database
  2. Scheduling, immediate or deferred, with repeats
  3. Variable calls in the file naming
  4. A simple GUI interface

Limitations Include:

  • It was never meant to replace the Job System; it was simply enhanced to offer the ability to schedule and repeat jobs.
  • It doesn’t offer all the bells and whistles you’d get if you scripted the export with shell, Perl or another scripting language from the command line.
  • It has no success notification or failure alerting in the job interface.
  • There is no template facility like the Job Library.
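The second limitation above is usually the deciding factor: a scripted export gives you full control over frequency and naming. As a rough sketch (not the EM Job System itself; the schema and directory names are hypothetical, and credentials are deliberately left out), a timestamped Data Pump command could be generated like this:

```python
from datetime import datetime

def build_expdp_command(schema, directory="DATA_PUMP_DIR"):
    """Build a Data Pump export command line with a timestamped dump file.

    Schema and directory names here are placeholders; credentials are
    omitted on purpose (expdp can prompt, or a wallet can supply them).
    """
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    dumpfile = "{}_{}.dmp".format(schema.lower(), stamp)
    logfile = "{}_{}.log".format(schema.lower(), stamp)
    return ("expdp schemas={} directory={} dumpfile={} logfile={}"
            .format(schema, directory, dumpfile, logfile))

print(build_expdp_command("HR"))
```

A cron entry (or an EM Job System OS command job) could then run this as often as needed, with no collision on file names.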

It can be very confusing if you don’t know that we commonly have about 10 ways to skin a cat in Enterprise Manager, and it’s important to review your requirements before choosing which one will meet them, even if the naming convention suggests there is a specific feature for the task.  An infrastructure feature built out to support advanced functionality for everything you have to accomplish may be the correct choice over one aimed at a single specific requirement.

I’m a command line DBA, so I wasn’t even aware of the Export utility in the drop-down menu.  I rarely, if ever, look at the database administration offerings.  I took the time this morning to run the export utility in EM13c against one of my databases so that I knew what it offered, (along with what it didn’t…)

Please, don’t ask me if EM Express offers this.  I really couldn’t tell you, (inside joke… :))







Copyright © DBA Kevlar [Short-term Answers Vs. The Correct Answer], All Rights Reserved. 2016.

Virtual Partitions

Here’s a story of (my) failure prompted by a recent OTN posting.

The OP wants to use composite partitioning based on two different date columns – the table should be partitioned by range on the first date and subpartitioned by month on the second date. Here’s the (slightly modified) table creation script he supplied:

rem     Script: virtual_partition.sql
rem     Dated:  May 2016

CREATE TABLE M_DTX (
        R_ID    NUMBER(3),
        R_AMT   NUMBER(5),
        DATE1   DATE,
        DATE2   DATE,
        VC GENERATED ALWAYS AS (EXTRACT(MONTH FROM DATE2)) VIRTUAL
)
PARTITION BY RANGE (DATE1) interval (numtoyminterval(1,'MONTH'))
SUBPARTITION BY LIST (VC)
        SUBPARTITION TEMPLATE (
                SUBPARTITION M1 VALUES (1),
                SUBPARTITION M2 VALUES (2),
                SUBPARTITION M3 VALUES (3),
                SUBPARTITION M4 VALUES (4),
                SUBPARTITION M5 VALUES (5),
                SUBPARTITION M6 VALUES (6),
                SUBPARTITION M7 VALUES (7),
                SUBPARTITION M8 VALUES (8),
                SUBPARTITION M9 VALUES (9),
                SUBPARTITION M10 VALUES (10),
                SUBPARTITION M11 VALUES (11),
                SUBPARTITION M12 VALUES (12)
        )
(
        PARTITION M_DTX_2015060100 VALUES LESS THAN (TO_DATE('2015-06-01 00:00:01', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
)
;

There’s nothing particularly exciting about this – until you get to the query requirement – the user wants to query on date1 and date2, and doesn’t know about the virtual month column, e.g. (and, I know that there should be a to_date() or ANSI equivalent here):

SELECT * FROM m_dtx WHERE date1 = trunc(sysdate) AND date2 = '01-Jun-2016';

Now, as a general rule, you don’t expect partition elimination to occur unless the partitioning column appears with a predicate that makes elimination possible, so your first response to this query is that it could eliminate on date1, but can’t possibly eliminate on vc because vc isn’t in the where clause. However, it’s possible that the partitioning code might be coded to recognise that the subpartition is on a virtual column that is derived from date2, so perhaps it could generate a new predicate before optimising, for example:

date2 = '01-Jun-2016'  => vc = 6

Unfortunately, your first response is correct – the optimizer doesn’t get this clever, and doesn’t do the sub-partition elimination. Here’s the execution plan for the sample query, followed by the execution plan when I explicitly add the predicate vc = 6.

SQL_ID  8vk1a05uv16mb, child number 0
SELECT /*+ dynamic_sampling(0) */  * FROM m_dtx WHERE date1 =
trunc(sysdate) AND date2 = to_date('01-Jun-2016','dd-mon-yyyy')

Plan hash value: 3104206240

| Id  | Operation              | Name  | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
|   0 | SELECT STATEMENT       |       |       |       |    15 (100)|          |       |       |
|   1 |  PARTITION RANGE SINGLE|       |     1 |    57 |    15   (7)| 00:00:01 |   KEY |   KEY |
|   2 |   PARTITION LIST ALL   |       |     1 |    57 |    15   (7)| 00:00:01 |     1 |    12 |
|*  3 |    TABLE ACCESS FULL   | M_DTX |     1 |    57 |    15   (7)| 00:00:01 |   KEY |   KEY |

Predicate Information (identified by operation id):
   3 - filter(("DATE2"=TO_DATE(' 2016-06-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND

SQL_ID  33q012bdhjrpn, child number 0
SELECT /*+ dynamic_sampling(0) */  * FROM m_dtx WHERE date1 =
trunc(sysdate) AND date2 = to_date('01-Jun-2016','dd-mon-yyyy') and vc
= 6

Plan hash value: 938710559

| Id  | Operation              | Name  | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
|   0 | SELECT STATEMENT       |       |       |       |    15 (100)|          |       |       |
|   1 |  PARTITION RANGE SINGLE|       |     1 |    57 |    15   (7)| 00:00:01 |   KEY |   KEY |
|   2 |   PARTITION LIST SINGLE|       |     1 |    57 |    15   (7)| 00:00:01 |     6 |     6 |
|*  3 |    TABLE ACCESS FULL   | M_DTX |     1 |    57 |    15   (7)| 00:00:01 |   KEY |   KEY |

Predicate Information (identified by operation id):
   3 - filter(("DATE2"=TO_DATE(' 2016-06-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND

Note how the predicate vc = 6 doesn’t show up in the predicate section in either case, but the execution plan shows PARTITION LIST ALL at operation 2 when we omit the predicate and PARTITION LIST SINGLE when we include it (with suitable values also appearing for Pstart and Pstop). (The cost, by the way, is the cost of scanning a whole (range) partition whether or not the optimizer expects to restrict that scan to just one sub-partition.)

So the optimizer isn’t quite clever enough (yet). BUT … the optimizer can be very clever with constraints, combining constraints with predicates and applying transitive closure to produce new predicates – so maybe we could get the optimizer to do this if we helped it a little bit. Given the table definition supplied I’m going to assume that the date2 column is supposed to be non-null, so let’s add some truthful constraints/declarations to the table definition:

alter table m_dtx modify date2 not null;
alter table m_dtx modify vc  not null;
alter table m_dtx add constraint md_ck_vc check (vc = extract(month from date2));

Alas, this didn’t make any difference to the execution plan. But it did do something surprising to my attempts to load data into the table:

insert into m_dtx (r_id, r_amt, date1, date2)
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        mod(rownum, 1000),
        rownum,
        trunc(sysdate,'yyyy') + dbms_random.value(0,365),
        trunc(sysdate,'yyyy') + dbms_random.value(0,365)
from
        generator       v1,
        generator       v2
where
        rownum <= 1e4
;

insert into m_dtx (r_id, r_amt, date1, date2)
ERROR at line 1:
ORA-01400: cannot insert NULL into (???)

So the array insert with the virtual column doesn’t like the NOT NULL constraint on the virtual column because vc is, presumably, still null when the constraint is checked (though there’s no problem with single row inserts with the values() clause – I wonder what happens with the PL/SQL “FORALL” clause) – so let’s remove the not null constraint on vc and see what happens.

insert into m_dtx (r_id, r_amt, date1, date2)
ERROR at line 1:
ORA-02290: check constraint (TEST_USER.MD_CK_VC) violated

Unsurprisingly, given the fact that Oracle didn’t like the not null constraint, the critical check constraint also fails. This, by the way, is odd because a check constraint should accept a row when the constraint doesn’t evaluate to FALSE, so (a) vc can’t have been evaluated at this point or the constraint would evaluate to TRUE – which is not FALSE, and (b) vc at this point can no longer be null or the constraint would evaluate to NULL – which is not FALSE: so what “value” has vc got that makes the constraint check return FALSE ?
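The acceptance rule being applied here can be modelled with simple three-valued logic; a small sketch of the reasoning, with None standing in for NULL:

```python
def check_constraint_accepts(result):
    """Oracle accepts a row unless the check constraint evaluates to FALSE.

    Three-valued logic: TRUE and NULL (modelled here as None) both pass.
    """
    return result is not False

# vc correctly derived from date2 -> constraint evaluates TRUE -> accepted
assert check_constraint_accepts(True)
# vc still NULL -> comparison with extract(month from date2) is NULL -> accepted
assert check_constraint_accepts(None)
# ORA-02290 can only mean the check somehow evaluated to FALSE
assert not check_constraint_accepts(False)
print("ok")
```

Which is exactly why the ORA-02290 is so puzzling: neither a correctly-derived vc nor a still-NULL vc should ever produce a FALSE evaluation.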

Bottom line:

I can see some scope for an optimizer enhancement that tries to find eliminating predicates from virtual columns; and I think there’s a need for ensuring that we can safely add constraints to virtual columns – after all we might want to create an index on a virtual column and sometimes we need a NOT NULL declaration to ensure that an index-only execution path can be found. Unfortunately I have to end this blog without finding an immediate solution for the OP.

Despite this failure, though, there are cases (as I showed a couple of years ago) where the optimizer in 12c can get clever enough to recognize the connection between a queried date column and the virtual partitioning column based on that date column.

New Oracle Security Paper on Non-Production and Delphix

I was asked by Delphix earlier this year to review their product with a particular focus on Oracle security of course. I wrote two papers; the first about Data Masking and Delphix and the second about securing data in non-production…

Posted by Pete On 23/05/16 At 11:23 AM

library cache lock on BUILD$ object

I was testing application performance in 12c, and one job was constantly running slower than in 11g. This post details the steps I took; I hope they will be useful if you encounter a similar issue.


In a one-hour period, over 90% of the DB time was spent waiting for library cache lock waits. Upon investigation, one statement was suffering from excessive waits for the ‘library cache lock’ event. We recreated the problem and investigated it further to understand the issue.

Following is the output of the wait_details_rac.sql script (which I will upload here); many PX query servers are waiting for the ‘library cache lock’ wait event.

   SID PID        EVENT                          USERNAME   OSUSER     STATE               WAIT_TIME   WIS P1_P2_P3_TEXT
------ ---------- ------------------------------ ---------- ---------- ------------------- --------- ----- ----------------------------------------

   276  12445     library cache lock             TST_USR    test       WAITING                     0     1 handle address 399021346904-lock address

   288  12449     library cache lock             TST_USR    test       WAITING                     0     4 handle address 399021346904-lock address

   303  12453     library cache lock             TST_USR    test       WAITING                     0     4 handle address 399021346904-lock address

   315  12457     library cache lock             TST_USR    test       WAITING                     0     4 handle address 399021346904-lock address
 ...Snipped..

Lock address to object

For all sessions waiting, the library cache handle address is the same. That means that the sessions are waiting for one library cache object. Querying x$kgllk confirms that, and we can also identify the object name.

  select ses.sid, ses.serial#,lck.kgllkcnt, lck.kgllkmod,lck.kgllkreq, lck.kglnaobj
  from x$kgllk lck , v$session ses
  where kgllkhdl in
     (select kgllkhdl from x$kgllk where kgllkreq >0)
  and lck.KGLLKUSE = ses.saddr
   SID    SERIAL#   KGLLKCNT   KGLLKMOD   KGLLKREQ KGLNAOBJ
------ ---------- ---------- ---------- ---------- ------------------------------------------------------------
   276       6518          0          0          2 5ce7869058
    84       6429          0          0          2 5ce7869058
   329      19358          0          0          2 5ce7869058
   342      37088          0          0          2 5ce7869058
   407      17734          0          0          2 5ce7869058
    74      26333          0          0          2 5ce7869058

Object name 5ce7869058 is not a typical schema object and seems to be an internal object. We need to identify the owner of the object and that might lead to some clues.

 Two ways to find the row in x$kglob:
1. Convert the handle address to HEX and pad it.
SQL>  select to_char(399021346904,'xxxxxxxxxxxxxx') from dual;   

SQL> select kglnaown, kglnaobj from x$kglob where kglhdpar = hextoraw('0000005CE7869058');

KGLNAOWN KGLNAOBJ
-------- -------------------------
$BUILD$  5ce7869058

2. Join to x$kgllk.
SQL> select kglnaown, kglnaobj from x$kglob where kglhdadr in 
        ( select kgllkhdl from x$kgllk where kgllkreq >0);
KGLNAOWN KGLNAOBJ
-------- -------------------------
$BUILD$  5ce7869058
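The to_char(…,'xxxxxxxxxxxxxx') step in method 1 is just a decimal-to-hex conversion followed by zero padding; the same arithmetic can be checked outside the database:

```python
# The handle address appears in decimal in the wait interface
# (399021346904), but x$kglob wants it as a zero-padded raw hex value.
handle_decimal = 399021346904

handle_hex = format(handle_decimal, "x")         # matches kglnaobj above
padded = format(handle_decimal, "016x").upper()  # argument for hextoraw()

print(handle_hex)
print(padded)
```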

Interesting. This object is owned by the user $BUILD$. But there is no such database user, so this must be an internal object.

At this time, I took a few pstack samples of the process and tried to learn a bit more about the problem. Nothing too interesting; the function call kglLockWait indicates that we are waiting for a library cache lock.

# pstack 12485
#0  0x00007f42737d100a in semtimedop () from /lib64/
#1  0x000000000cda832d in sskgpwwait ()
#2  0x000000000cda5c98 in skgpwwait ()
#3  0x000000000c8f2bab in ksliwat ()
#4  0x000000000c8f1fb1 in kslwaitctx ()
#5  0x0000000000c00810 in ksfwaitctx ()
#6  0x0000000003b6765d in kglLockWait ()
#7  0x000000000cdcd441 in kgllkal ()
#8  0x000000000cdc53e4 in kglLock ()
#9  0x000000000cdbeb32 in kglget ()
#10 0x000000000cb7650c in kksfbc ()
#11 0x000000000cb166ec in opiexe ()
#12 0x00000000021ee529 in kpoal8 ()
#13 0x000000000cb1254d in opiodr ()
#14 0x0000000003117d7e in kpoodr ()
#15 0x000000000cd41b4a in upirtrc ()
#16 0x000000000cd2cde6 in kpurcsc ()
#17 0x000000000cd28014 in kpuexec ()
#18 0x000000000cd420e9 in OCIStmtExecute ()
#19 0x0000000002073a10 in kxfxsStmtExecute ()
#20 0x0000000002073387 in kxfxsExecute ()
#21 0x000000000206df42 in kxfxsp ()
#22 0x000000000206c489 in kxfxmai ()
#23 0x00000000020a91b3 in kxfprdp ()
#24 0x00000000031014a6 in opirip ()
#25 0x0000000001bb0a08 in opidrv ()
#26 0x00000000026c0f71 in sou2o ()
#27 0x0000000000bbd85e in opimai_real ()
#28 0x00000000026cb6bc in ssthrdmain ()
#29 0x0000000000bbd72c in main ()


At this time, I had some information, so I searched for ‘library cache lock $BUILD$’ and, of course, found a bug matching my symptoms. Further investigation from BDE confirmed my analysis. I didn’t want to list the bug numbers, as a similar symptom may have a different root cause, so you will need to work with Support further.

EM13c Monitoring Non-Oracle Databases

This last week I presented at Great Lakes Oracle Conference, (GLOC16) and the discussion on monitoring non-Oracle databases came up while we were on the topic of management packs: how to monitor usage and which ones are required to monitor non-Oracle databases.  I didn’t realize how confusing the topic could be until I received an email while on a layover in Chicago relaying what the attendee had taken away from it.  I was even more alarmed when I read the email again, planning to blog about it today after a full night’s sleep!


You’ll often hear me refer to EM13c as the single pane of glass when discussing hybrid cloud management, or performance management concerning the AWR Warehouse and such, but it can also make a multi-platform environment easier to manage, too.

The difference between managing many Oracle features with EM13c and non-Oracle database platforms is that we need to shift the discussion from Management Packs to Plug-ins.  I hadn’t really thought too much of it when I’d been asked what management packs were needed to manage Microsoft SQL Server, Sybase or DB2.  My brain was solely focused on the topic of management packs and I told the audience how they could verify management packs on any page in EM, (while on the page, click on Settings, Management Packs, Packs Used for This Page) for any database they were monitoring:

[Image: em13c_mssql]

As easily demonstrated in the image above, there aren’t any management packs utilized to access information about the MSSQL_2014 Microsoft SQL Server target, and you can quickly see each user database’s status, CPU usage, IO for reads and writes, along with errors, and even control the agent from this useful EM dashboard.

I can do the same for a DB2_unit6024 database environment:

[Image: em13c_sybase]

You’ll note that the DB2 database dashboard is different from the SQL Server one, displaying the pertinent data for that database platform.

Now, you may be saying, “Kellyn’s right, I don’t need to have any management packs,” (which is true) but then you click on Settings, Extensibility, Plug-ins, and you’ll locate the database plug-ins used to add each one of these databases to Enterprise Manager.

[Image: em13c_dbplugins]

These plug-ins are often offered by third parties and must be licensed through them.  There are often charges from these providers, and I should have been more in tune with the true discussion instead of stuck on the topic of management packs.

Luckily for me, there is a small amount of explanation at the very bottom of the management pack documentation that should clear up any questions. Hope this offers some insight, and thank you to everyone who came to my sessions at GLOC!







ORAchk / EXAchk questions

Yesterday I wrote a post on the ORAchk / EXAchk plug-in in Enterprise Manager Cloud Control 13c, and I promised I would write another post covering some of the more frequently asked questions we’ve been receiving on the plug-in. That’s what this post is, so the rest of the post will be in a Q&A sort of format.

Question: What are the benefits of EXAchk integration with the Enterprise Manager compliance framework, i.e. what can we do with this that we could not in EM12c?

Answer: In EM 12c, we ask customers to set up EXAchk on the target themselves; we just bring the results to EM and show them on the EXAchk target home page. In 13c, these are our main features:

  • Install & Setup ORAchk utility from Enterprise Manager Cloud Control
  • Convert check results into Compliance Standards (CS) violations
  • Associate the ORAchk results (CS violations) with appropriate targets
  • Upgrade of ORAchk utility from Enterprise Manager
  • Release new checks as compliance standards at the same time ORAchk releases their new version.

Question: Can I do a “Create Like” of a standard based on EXAchk i.e. include my own rules into a standard?

Answer: No, the actual checks are still done within the EXAchk utility; we take the results and show violations against the appropriate targets. So our CS rule is just looking for Pass/Fail only.

Question: Does EM rely on the EXAchk executable at all? Does the EXAchk executable (not the rules) need to be updated periodically?

Answer: We fully rely on the EXAchk executable and rules. Yes, it needs to be updated periodically; we have added Self Update entity types to deliver the EXAchk binaries.

Question: Are the rules evaluated on the agent side or the repository side?

Answer: We bring the results of EXAchk to the repository and within our repository rule we check for Pass/Fail only.

Question: How will Enterprise Manager keep up with new revisions of the rules? Are the new revisions automatically applied?

Answer: Our current plan is to deliver two entities – one for EXAchk/ORAchk binaries and one for compliance standard and rules.

Question: I don’t see the Enterprise Manager ORAchk Healthchecks Plug-in in my Enterprise Manager 12c environment. Does it exist?

Answer: Yes, in the environment I just checked on, the plug-in is called “Oracle Engineered Systems Healthchecks” and it’s found under the Engineered Systems plug-ins on the Plug-ins page (go to Setup -> Extensibility -> Plug-ins). If you can’t find it there, it probably means you didn’t select it from the list of plug-ins to install when you were installing Enterprise Manager. In that case, click the “Check Updates” button, click “Plug-In” from the list of types and then either click the “Check Updates” button or choose “Check Updates” from the Actions menu. You will need to download and deploy the plug-in as normal after that.

Hopefully that helps clarify a few areas for you! If you have any questions on ORAchk / EXAchk that haven’t been answered above, feel free to add the question as a comment on this post. I will then update the blog with your question and the answer!


Messed-Up App of the Day: Tables of Numbers

Quick, which database is the biggest space consumer on this system?

Database              Total Size   Total Storage
----------------- --------------- ---------------
SAD99PS                 635.53 GB         1.24 TB
ANGLL                     9.15 TB         18.3 TB
FRI_W1                    2.14 TB         4.29 TB
DEMO                      6.62 TB        13.24 TB
H111D16                   7.81 TB        15.63 TB
HAANT                      1.1 TB          2.2 TB
FSU                       7.41 TB        14.81 TB
BYNANK                    2.69 TB         5.38 TB
HDMI7                   237.68 GB       476.12 GB
SXXZPP                  598.49 GB         1.17 TB
TPAA                      1.71 TB         3.43 TB
MAISTERS                823.96 GB         1.61 TB
p17gv_data01.dbf         800.0 GB         1.56 TB

It’s harder than it looks.

Did you come up with ANGLL? If you didn’t, then you should look again. If you did, then what steps did you have to execute to find the answer?

I’m guessing you did something like I did:

  1. Skim the entire list. Notice that HDMI7 has a really big value in the third column.
  2. Read the column headings. Parse the difference in meaning between “size” and “storage.” Realize that the “storage” column is where the answer to a question about space consumption will lie.
  3. Skim the “Total Storage” column again and notice that the wide “476.12” number I found previously has a GB label beside it, while all the other labels are TB.
  4. Skim the table again to make sure there’s no PB in there.
  5. Do a little arithmetic in my head to realize that a TB is 1000× bigger than a GB, so 476.12 is probably not the biggest number after all, in spite of how big it looked.
  6. Re-skim the “Total Storage” column looking for big TB numbers.
  7. The biggest-looking TB number is 15.63 on the H111D16 row.
  8. Notice the trap on the ANGLL row that there are only three significant digits showing in the “18.3” figure, which looks physically the same size as the three-digit figures “1.24” and “4.29” directly above and below it, but realize that 18.3 (which should have been rendered “18.30”) is an order of magnitude larger.
  9. Skim the column again to make sure I’m not missing another such number.
  10. The answer is ANGLL.

That’s a lot of work. Every reader who uses this table to answer that question has to do it.
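Every one of those skim-and-convert steps is work the table pushed onto its readers. Normalizing the mixed units makes the comparison mechanical; a small sketch, with the figures copied from the table above:

```python
# Storage figures copied from the table above, units exactly as printed.
storage = {
    "SAD99PS": "1.24 TB",  "ANGLL": "18.3 TB",    "FRI_W1": "4.29 TB",
    "DEMO": "13.24 TB",    "H111D16": "15.63 TB", "HAANT": "2.2 TB",
    "FSU": "14.81 TB",     "BYNANK": "5.38 TB",   "HDMI7": "476.12 GB",
    "SXXZPP": "1.17 TB",   "TPAA": "3.43 TB",     "MAISTERS": "1.61 TB",
    "p17gv_data01.dbf": "1.56 TB",
}

GB_PER_UNIT = {"GB": 1, "TB": 1000}   # decimal units, as in the post

def to_gb(text):
    """Convert a '<value> <unit>' string to a plain number of GB."""
    value, unit = text.split()
    return float(value) * GB_PER_UNIT[unit]

biggest = max(storage, key=lambda db: to_gb(storage[db]))
print(biggest)   # ANGLL
```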

Rendering the table differently makes your readers’ (plural!) job much easier:

Database           Size (TB)  Storage (TB)
-----------------  ---------  ------------
SAD99PS                  .64          1.24
ANGLL                   9.15         18.30
FRI_W1                  2.14          4.29
DEMO                    6.62         13.24
H111D16                 7.81         15.63
HAANT                   1.10          2.20
FSU                     7.41         14.81
BYNANK                  2.69          5.38
HDMI7                    .24           .48
SXXZPP                   .60          1.17
TPAA                    1.71          3.43
MAISTERS                 .82          1.61
p17gv_data01.dbf         .80          1.56

This table obeys an important design principle:

The amount of ink it takes to render each number is proportional to its relative magnitude.

I fixed two problems: (i) now all the units are consistent (I have guaranteed this feature by adding a unit label to each header and deleting all labels from the rows); and (ii) I’m showing the same number of significant digits for each number. Now, you don’t have to do arithmetic in your head, and now you can see more easily that the answer is ANGLL, at 18.30 TB.

Let’s go one step further and finish the deal. If you really want to make it as easy as possible for readers to understand your space consumption problem, then you should sort the data, too:

Database           Size (TB)  Storage (TB)
-----------------  ---------  ------------
ANGLL                   9.15         18.30
H111D16                 7.81         15.63
FSU                     7.41         14.81
DEMO                    6.62         13.24
BYNANK                  2.69          5.38
FRI_W1                  2.14          4.29
TPAA                    1.71          3.43
HAANT                   1.10          2.20
MAISTERS                 .82          1.61
p17gv_data01.dbf         .80          1.56
SAD99PS                  .64          1.24
SXXZPP                   .60          1.17
HDMI7                    .24           .48

Now, your answer comes in a glance. Think back to the comprehension steps that I described above. With the table here, you only need:

  1. Notice that the table is sorted in descending numerical order.
  2. Comprehend the column headings.
  3. The answer is ANGLL.

As a reader, you have executed far less code path in your brain to completely comprehend the data that the author wants you to understand.
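The three design rules at work (uniform units, a fixed number of decimals, a descending sort) are trivial to enforce when the table is generated by code; a sketch using the figures from the table above:

```python
# Size/Storage in TB, copied from the corrected table above.
figures = {
    "SAD99PS": (0.64, 1.24),  "ANGLL": (9.15, 18.30),   "FRI_W1": (2.14, 4.29),
    "DEMO": (6.62, 13.24),    "H111D16": (7.81, 15.63), "HAANT": (1.10, 2.20),
    "FSU": (7.41, 14.81),     "BYNANK": (2.69, 5.38),   "HDMI7": (0.24, 0.48),
    "SXXZPP": (0.60, 1.17),   "TPAA": (1.71, 3.43),     "MAISTERS": (0.82, 1.61),
    "p17gv_data01.dbf": (0.80, 1.56),
}

# Sort descending by storage, then render with uniform units and two
# decimals, so the ink used for each number tracks its magnitude.
rows = sorted(figures.items(), key=lambda kv: kv[1][1], reverse=True)

print("{:<17}{:>10}{:>14}".format("Database", "Size (TB)", "Storage (TB)"))
for db, (size, storage) in rows:
    print("{:<17}{:>10.2f}{:>14.2f}".format(db, size, storage))
```

(The only cosmetic difference from the table above is that Python prints a leading zero, e.g. “0.64” rather than “.64”.)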

Good design is a topic of consideration. And even conservation. If spending 10 extra minutes formatting your data better saves 1,000 readers 2 minutes each, then you’ve saved the world 1,990 minutes of wasted effort.

But good design is also a very practical matter for you personally, too. If you want your audience to understand your work, then make your information easier for them to consume—whether you’re writing email, proposals, reports, infographics, slides, or software. It’s part of the pathway to being more persuasive.

A cool thing with EXCHANGE PARTITION (part 2)

In the previous post, I showed that even though a partition was “removed” (ie, exchanged out) from a table, a query running against the table could still run successfully to completion.

However, of course, once that partition is exchanged out, it is now a table in its own right… and is subject to the whims of what a DBA may wish to do with it.  If that table is dropped, or truncated, then as you might expect, our query is going to struggle to find that data!

Here’s an example of what happens when the query cannot successfully run: