Oakies Blog Aggregator

Storing Date Values As Characters (What’s Really Happening)

For something that’s generally considered an extremely bad idea, I’ve lost count of the number of times I’ve come across applications that insist on storing date values as characters within the database. We’ve all seen them … I recently got called in to assist a customer who was having issues with a POC in relation […]

CBO++

While browsing the web recently for articles on the HyperLogLog algorithm that Oracle uses for some of its approximate functions, I came upon a blog post written in Jan 2014 with the title Use Subqueries to Count Distinct 50X Faster. There are various ways that subqueries can be used to rewrite queries for improved performance, but when the title caught my eye I couldn’t think of a way in which they could improve “count distinct”.  It turned out that the word “subquery” was being used (quite correctly) in the sense of “inline view” while my mind had immediately turned to subqueries in the select list or where clause.

The article started by pointing out that if you have a query that does a join then aggregates the result you might be able to improve performance by finding a way of rewriting the query to aggregate before doing the join. (See this note from 2008). The article then went one step further to optimise a “count distinct” by wrapping a “select count” around a “select distinct” inline view as follows:

Original
--------
  select
    dashboard_id,
    count(distinct user_id) as ct
  from time_on_site_logs 
  group by dashboard_id

Rewrite
-------
  select
    inline.dashboard_id,
    count(1) as ct
  from (
    select distinct dashboard_id, user_id
    from time_on_site_logs
  ) as inline
  group by inline.dashboard_id

(I’ve reproduced only the central part of the query being examined and I’ve changed the name of the inline view to eliminate the potential visual confusion due to the word “distinct” appearing in its name in the original).
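For the first of those two ideas – aggregating before the join – here’s a hypothetical sketch of the shape of the rewrite (the table and column names are invented purely for illustration):

-- Join then aggregate:
select   c.region, sum(o.amount) as total
from     customers c
join     orders    o on o.cust_id = c.cust_id
group by c.region;

-- Aggregate then join (only one row per customer enters the join):
select   c.region, sum(o.cust_total) as total
from     customers c
join     (
          select   cust_id, sum(amount) as cust_total
          from     orders
          group by cust_id
         ) o on o.cust_id = c.cust_id
group by c.region;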

The article was written using Postgres SQL, with the comment that the technique was universal; and this brings me to the point of the post: the technique can indeed be applied to Oracle’s dialect of SQL. Both ideas are good ideas whose effectiveness depends on the data patterns, data volume, and (potentially) indexing; but you may not need to rewrite the code because the optimizer is programmed to know that the ideas are good and it can transform your query to the appropriate form internally. The “place group by” transformation appeared in 11.1.0.6 in 2007, and the “transform distinct aggregation” transformation appeared in 11.2.0.1 in 2009.

Here’s a little demo of Oracle handling a variation of the query I’ve shown above:


rem     Script: transform_distinct_agg.sql
rem     Dated:  May 2016
rem     Author: J.P.Lewis

create table t1 nologging 
as 
select  * 
from    all_objects 
where   rownum <= 60000
;
execute dbms_stats.gather_table_stats(user,'t1', method_opt=>'for all columns size 1')

alter session set statistics_level = all;

select owner, count(distinct object_type) from t1 group by owner;
select * from table(dbms_xplan.display_cursor(null,null,'allstats last outline'));

prompt  ===============
prompt  Rewritten query
prompt  ===============

select  owner, count(1)
from    (
         select distinct owner, object_type
         from   t1
        ) distinct_types
group by
        owner
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last outline'));

Here are the two execution plans, pulled from memory – with the outline and some other peripheral lines deleted:


-----------------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |           |      1 |        |      5 |00:00:00.23 |     865 |       |       |          |
|   1 |  HASH GROUP BY       |           |      1 |      5 |      5 |00:00:00.23 |     865 |  1452K|  1452K|  728K (0)|
|   2 |   VIEW               | VM_NWVW_1 |      1 |     78 |     30 |00:00:00.23 |     865 |       |       |          |
|   3 |    HASH GROUP BY     |           |      1 |     78 |     30 |00:00:00.23 |     865 |  4588K|  1708K| 2497K (0)|
|   4 |     TABLE ACCESS FULL| T1        |      1 |  60000 |  60000 |00:00:00.12 |     865 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------

===============
Rewritten query
===============

------------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |      1 |        |      5 |00:00:00.23 |     865 |       |       |          |
|   1 |  HASH GROUP BY       |      |      1 |      5 |      5 |00:00:00.23 |     865 |  1452K|  1452K|  735K (0)|
|   2 |   VIEW               |      |      1 |     78 |     30 |00:00:00.23 |     865 |       |       |          |
|   3 |    HASH UNIQUE       |      |      1 |     78 |     30 |00:00:00.23 |     865 |  4588K|  1708K| 1345K (0)|
|   4 |     TABLE ACCESS FULL| T1   |      1 |  60000 |  60000 |00:00:00.12 |     865 |       |       |          |
------------------------------------------------------------------------------------------------------------------

Apart from the difference between “HASH GROUP BY” (operation 3 of the first plan) and “HASH UNIQUE” (operation 3 of the second), the two plans are the same and use the same resources – the UNIQUE being a special case of the algorithm for the GROUP BY. Here (with some cosmetic editing) is the SQL of the “unparsed query” taken from the 10053 (CBO) trace file. Notice how similar it is to the text suggested by the original article – in particular the inline view that derives the distinct list of owner and object_type (using a group by with no aggregated columns, rather than a distinct):

SELECT 
        VM_NWVW_1.$vm_col_2 OWNER,
        COUNT(VM_NWVW_1.$vm_col_1) COUNT(DISTINCTOBJECT_TYPE)
FROM    (
                SELECT
                        T1.OBJECT_TYPE $vm_col_1,
                        T1.OWNER $vm_col_2
                FROM    TEST_USER.T1 T1
                GROUP BY 
                        T1.OWNER,T1.OBJECT_TYPE
        ) VM_NWVW_1
GROUP BY
        VM_NWVW_1.$vm_col_2
;

The Oracle optimizer is pretty good at finding efficient transformations for the query you wrote, so, rather than rewriting a query (with the attendant risk of making a mistake as you do so), you may only need to add a couple of hints to generate a suitable SQL Plan Baseline that you can attach to the original query.
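As a sketch of that mechanism (a sketch only – the positive hint name is inferred from the blocking hint shown in the footnote below, and the sql_id, plan_hash_value and sql_handle are placeholders you would look up in v$sql and dba_sql_plan_baselines on your own system):

rem     Run a hinted version once to get the desired plan into the
rem     cursor cache ...

select  /*+ qb_name(main) transform_distinct_agg(main) */
        owner, count(distinct object_type)
from    t1
group by
        owner
;

rem     ... then load that plan as an accepted plan for the original
rem     (unhinted) statement's baseline.

variable n number

execute :n := dbms_spm.load_plans_from_cursor_cache( -
                sql_id          => '&hinted_sql_id', -
                plan_hash_value => &hinted_plan_hash, -
                sql_handle      => '&original_sql_handle' -
              )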

Footnote:

Sometimes the optimizer will decide not to transform when it should, or decide to transform when it shouldn’t, so it’s nice to know that there are hints to block transformations – here’s the effect of adding /*+ qb_name(main) no_transform_distinct_agg(main) */ to my query:


----------------------------------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |      5 |00:00:00.25 |     865 |       |       |          |
|   1 |  SORT GROUP BY     |      |      1 |      5 |      5 |00:00:00.25 |     865 |  4096 |  4096 | 4096  (0)|
|   2 |   TABLE ACCESS FULL| T1   |      1 |  60000 |  60000 |00:00:00.12 |     865 |       |       |          |
----------------------------------------------------------------------------------------------------------------

The interesting thing to note here is that, even though the query took a little longer to complete, the memory allocated to run it was only 4K, compared with the 2M needed by the transformed query. (In this example both workareas would have been in existence at the same time – that won’t be true of every query using multiple workareas.) This isn’t significant in this trivial case, but it demonstrates the point that sometimes there is no single best path – you can choose the path that protects the resource that’s under most pressure.

Are Zero Days or Bugs Fixed by CPU The Worst?

I spoke yesterday about compartmentalising Oracle Security, and one element that comes out of this is the need to consider what you are trying to achieve: securing the actual data and also securing the platform. In general applying security patches will....[Read More]

Posted by Pete On 25/05/16 At 12:51 PM

Tech 16 – We Want Your Abstracts on SE, Real-World & Practical Topics

The Call for Papers is open for the UK Oracle User Group annual TECH conference. It’s in Birmingham, in December, and is being co-located with the JDE & Apps conferences too (and the call for papers is open for them also).

If you are a Standard Edition (SE) expert, have a real-world story to tell about how you used Oracle (good or bad!) or want to teach others how to get going with some aspect of Oracle, I really want to see your abstracts.

You can register as a speaker and submit abstracts at this link. You have until June 1st, so that’s just under a week. Plenty of time :-)

I love this event; I’ve been going since 2002. Last year was the best for many years: almost everyone I spoke to felt it had regained the energy of prior years, attendance was up, lots of new faces were there and – a real sign of a good agenda – people complained that they were having to pick between excellent talks.

A couple of things have changed a little in the last two years, which I think have increased the overall appeal of the UKOUG conference.

First is that we now have “introductory streams”. These are talks that need little or no prior knowledge of the topic and give you the information you need to get going. The conference had become a little too “expert-focused”, packed with great talks about esoteric aspects of tuning or internals that many of us love – but not everyone is ready for or interested in. We will still have lots of those, but we are giving more talks for those who are not experts (yet). This will be the third year we are doing this, due to its success. If you are an expert, how about offering a paper that gets people started? Such talks tend to get much larger and more enthusiastic audiences.

Second is the Standard Edition stream. This was really popular last year – the first ever dedicated stream of sessions for SE at any conference. Lots of you use SE but, like the small kid in the schoolyard, it tends to get ignored. Last year we chose introductory talks, for obvious reasons; this year we are aiming for more depth – can you talk for 45 minutes about an aspect of SE and help people really make the most of it?

Third is more emphasis on real-world, experience-based talks. They are always the most popular, especially if they are about things not working out as the theory or the Oracle sales guys would make out. The UKOUG is a User Group; we want to share the good, the bad and the ugly. Personally, I’d love for someone to step up to the mark and give some talks about real Cloud adoption, or why Cloud is NOT the answer to all requirements.

Of course, we are always interested in the latest-greatest, just-released and did-you-know-about type talks too. But to be honest, we get lots of those :-)

Compartmentalised Oracle Security

I have been teaching classes about Oracle Security for many years; they are very popular and I teach many classes per year around the world, mostly in the UK and EEC, but I also venture to the Middle....[Read More]

Posted by Pete On 24/05/16 At 12:43 PM

Enterprise Manager 13c and AWS

This posting is to try to clarify what is supported when using Enterprise Manager Cloud Control 13c (EM13c) and Amazon Web Services (AWS). The question came from some of our sales consultants, who were finding confusing information on the subject, so I have asked our Support folks to write a support note providing the definitive answer. While that is being written, I thought I would post a quick blog that could be used to help clarify matters.

So let’s look at what the different scenarios are:

Using on-premises Enterprise Manager to manage public cloud resources without Amazon’s Virtual Private Cloud (VPC): The Hybrid Cloud Management functionality available in Enterprise Manager 12cR5 and beyond uses SSH tunneling without VPC to monitor Oracle Cloud resources. This set of features is ONLY supported for Oracle Cloud. In fact, the code will not allow the gateway agent to be configured for a 3rd party cloud.

NOTE: In this scenario, there is an existing EM plugin for EC2 that can create an Amazon target type and monitor some cloudwatch metrics using REST, but it’s only basic.

Using Enterprise Manager installed on Amazon to monitor resources on Amazon: This is the scenario where you deploy the OMS and all the targets on Amazon. In this case, Oracle has not explicitly certified the deployment scenario and will treat this as just another data center deployment, as long as all the relevant Enterprise Manager pre-requisites (and any over-arching Oracle policies) are being adhered to.

Using VPC: If you have implemented VPC to unify your data center and Amazon at the network layer, and you have http(s) access between the OMS and agents, then you can use Enterprise Manager installed anywhere within the VPC to monitor the resources within the VPC.

To summarize, barring the first scenario for Oracle Cloud, none of the other scenarios are certified, but we have not explicitly banned them either. We treat those scenarios as just forms of data center deployments and as long as the Enterprise Manager pre-requisites (OS versions, packages, network configurations and so on) and any over-arching Oracle policies are satisfied, we remain agnostic.

I hope that will clarify matters for you!

Short-term Answers Vs. The Correct Answer

Enterprise Manager does a LOT.  Sometimes it may do too much.  Customers – on forums, through support, or via email and social media – may come to us asking how to address something they view as not working right. The truth is, we could simply answer their question, but often they aren’t using the right tool to accomplish what they’re attempting.

The Export Feature in Enterprise Manager

A customer was frustrated as he was performing scheduled exports using the export utility that can be found under the Database, Schema drop-down.  He wanted to perform these exports more than once per hour and was running into issues due to limitations on how often the export could run and on the file naming convention.  The scheduling mechanism isn’t quite as robust as he needed, so I understood his frustration, but I also realized that he was using the wrong tool for the job.  It seemed so natural to me that he should be using the EM Job System, but he really didn’t understand why he should use that when exports were right there in the drop-down.

Even though it lets him do the following:

  1. Export a schema or database
  2. Schedule the export to happen immediately or later, and set it to repeat
  3. Use variable calls in the file naming
  4. Work from a simple GUI interface

Limitations include:

  • It was never meant to replace the job system; it was simply enhanced to offer the ability to schedule and repeat jobs.
  • It doesn’t offer all the bells and whistles you’d get if you scripted with shell, Perl or another scripting language from the command line.
  • It has no success notification or alerting for failure in the job interface.
  • There is no template feature like the Job Library.

It can be very confusing if you don’t know that we commonly have about ten ways to skin a cat in Enterprise Manager, and it’s important to review your requirements before choosing the feature that will meet them, even if the naming convention tells you there is a specific feature for the task.  An infrastructure feature built out to support advanced functionality may be the correct choice for everything you have to accomplish, rather than one aimed at a single specific requirement.

I’m a command-line DBA, so I wasn’t even aware of the Export utility in the drop-down menu.  I rarely, if ever, look at the database administration offerings.  I took the time this morning to run the export utility in EM13c against one of my databases so that I knew what it offered (along with what it didn’t…).

Please, don’t ask me if EM Express offers this.  I really couldn’t tell you, (inside joke… :))

 



Copyright © DBA Kevlar [Short-term Answers Vs. The Correct Answer], All Right Reserved. 2016.

Virtual Partitions

Here’s a story of (my) failure prompted by a recent OTN posting.

The OP wants to use composite partitioning based on two different date columns – the table should be partitioned by range on the first date and subpartitioned by month on the second date. Here’s the (slightly modified) table creation script he supplied:


rem
rem     Script: virtual_partition.sql
rem     Dated:  May 2016
rem

CREATE TABLE M_DTX
(
        R_ID    NUMBER(3),
        R_AMT   NUMBER(5),
        DATE1   DATE,
        DATE2   DATE,
        VC GENERATED ALWAYS AS (EXTRACT(MONTH FROM DATE2))
)
PARTITION BY RANGE (DATE1) interval (numtoyminterval(1,'MONTH'))
SUBPARTITION BY LIST (VC)
        SUBPARTITION TEMPLATE (
                SUBPARTITION M1 VALUES (1),
                SUBPARTITION M2 VALUES (2),
                SUBPARTITION M3 VALUES (3),
                SUBPARTITION M4 VALUES (4),
                SUBPARTITION M5 VALUES (5),
                SUBPARTITION M6 VALUES (6),
                SUBPARTITION M7 VALUES (7),
                SUBPARTITION M8 VALUES (8),
                SUBPARTITION M9 VALUES (9),
                SUBPARTITION M10 VALUES (10),
                SUBPARTITION M11 VALUES (11),
                SUBPARTITION M12 VALUES (12)
        )
        (
        PARTITION M_DTX_2015060100 VALUES LESS THAN (TO_DATE('2015-06-01 00:00:01', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
        )
;

There’s nothing particularly exciting about this – until you get to the query requirement: the user wants to query on date1 and date2, and doesn’t know about the virtual month column, e.g. (and I know that there should be a to_date() or ANSI equivalent here):

SELECT * FROM m_dtx WHERE date1 = trunc(sysdate) AND date2 = '01-Jun-2016';

Now, as a general rule, you don’t expect partition elimination to occur unless the partitioning column appears with a predicate that makes elimination possible, so your first response to this query is that it could eliminate on date1 but can’t possibly eliminate on vc, because vc isn’t in the where clause. However, it’s possible that the partitioning code might be written to recognise that the subpartitioning is on a virtual column derived from date2, so perhaps it could generate a new predicate before optimising, for example:

date2 = '01-Jun-2016'  => vc = 6

Unfortunately, your first response is correct – the optimizer doesn’t get this clever, and doesn’t do the sub-partition elimination. Here’s the execution plan from 12.1.0.2 for the sample query, followed by the execution plan when I explicitly add the predicate vc = 6.


SQL_ID  8vk1a05uv16mb, child number 0
-------------------------------------
SELECT /*+ dynamic_sampling(0) */  * FROM m_dtx WHERE date1 =
trunc(sysdate) AND date2 = to_date('01-Jun-2016','dd-mon-yyyy')

Plan hash value: 3104206240

------------------------------------------------------------------------------------------------
| Id  | Operation              | Name  | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |       |       |       |    15 (100)|          |       |       |
|   1 |  PARTITION RANGE SINGLE|       |     1 |    57 |    15   (7)| 00:00:01 |   KEY |   KEY |
|   2 |   PARTITION LIST ALL   |       |     1 |    57 |    15   (7)| 00:00:01 |     1 |    12 |
|*  3 |    TABLE ACCESS FULL   | M_DTX |     1 |    57 |    15   (7)| 00:00:01 |   KEY |   KEY |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter(("DATE2"=TO_DATE(' 2016-06-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "DATE1"=TRUNC(SYSDATE@!)))



SQL_ID  33q012bdhjrpn, child number 0
-------------------------------------
SELECT /*+ dynamic_sampling(0) */  * FROM m_dtx WHERE date1 =
trunc(sysdate) AND date2 = to_date('01-Jun-2016','dd-mon-yyyy') and vc
= 6

Plan hash value: 938710559

------------------------------------------------------------------------------------------------
| Id  | Operation              | Name  | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |       |       |       |    15 (100)|          |       |       |
|   1 |  PARTITION RANGE SINGLE|       |     1 |    57 |    15   (7)| 00:00:01 |   KEY |   KEY |
|   2 |   PARTITION LIST SINGLE|       |     1 |    57 |    15   (7)| 00:00:01 |     6 |     6 |
|*  3 |    TABLE ACCESS FULL   | M_DTX |     1 |    57 |    15   (7)| 00:00:01 |   KEY |   KEY |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter(("DATE2"=TO_DATE(' 2016-06-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "DATE1"=TRUNC(SYSDATE@!)))


Note how the predicate vc = 6 doesn’t show up in the predicate section in either case, but the execution plan shows PARTITION LIST ALL at operation 2 when we omit the predicate and PARTITION LIST SINGLE when we include it (with suitable values also appearing for Pstart and Pstop). (The cost, by the way, is the cost of scanning a whole (range) partition whether or not the optimizer expects to restrict that scan to just one subpartition.)

So the optimizer isn’t quite clever enough (yet). BUT … the optimizer can be very clever with constraints, combining constraints with predicates and applying transitive closure to produce new predicates – so maybe we could get the optimizer to do this if we helped it a little bit. Given the table definition supplied I’m going to assume that the date2 column is supposed to be non-null, so let’s add some truthful constraints/declarations to the table definition:


alter table m_dtx modify date2 not null;
alter table m_dtx modify vc  not null;
alter table m_dtx add constraint md_ck_vc check (vc = extract(month from date2));

Alas, this didn’t make any difference to the execution plan. But it did do something surprising to my attempts to load data into the table:


insert into m_dtx (r_id, r_amt, date1, date2)
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        mod(rownum, 1000),
        rownum,
        trunc(sysdate,'yyyy') + dbms_random.value(0,365),
        trunc(sysdate,'yyyy') + dbms_random.value(0,365)
from
        generator       v1,
        generator       v2
where
        rownum <= 1e4
;

insert into m_dtx (r_id, r_amt, date1, date2)
*
ERROR at line 1:
ORA-01400: cannot insert NULL into (???)

So the array insert doesn’t like the NOT NULL constraint on the virtual column, presumably because vc is still null at the moment the constraint is checked (though there’s no problem with single-row inserts using the values() clause – I wonder what happens with the PL/SQL “FORALL” clause) – so let’s remove the not null constraint on vc and see what happens.


insert into m_dtx (r_id, r_amt, date1, date2)
*
ERROR at line 1:
ORA-02290: check constraint (TEST_USER.MD_CK_VC) violated

Unsurprisingly, given the fact that Oracle didn’t like the not null constraint, the critical check constraint also fails. This, by the way, is odd because a check constraint should accept a row when the constraint doesn’t evaluate to FALSE, so (a) vc can’t have been evaluated at this point, or the constraint would evaluate to TRUE – which is not FALSE; and (b) vc at this point can no longer be null, or the constraint would evaluate to NULL – which is not FALSE: so what “value” has vc got that makes the constraint check return FALSE?
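For contrast, here’s a minimal sketch of the single-row case mentioned above – with the same table definition in place, an insert like this goes through the constraint checks without complaint:

insert into m_dtx (r_id, r_amt, date1, date2)
values (1, 1, trunc(sysdate), date '2016-06-01');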

Bottom line:

I can see some scope for an optimizer enhancement that tries to derive eliminating predicates from virtual columns; and I think there’s a need to ensure that we can safely add constraints to virtual columns – after all, we might want to create an index on a virtual column, and sometimes we need a NOT NULL declaration to ensure that an index-only execution path can be found (see the sketch below). Unfortunately I have to end this blog without finding an immediate solution for the OP.
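Here’s a quick sketch of that index point, using the table from this post (the index name is invented):

create index m_dtx_vc_idx on m_dtx(vc) local;

-- without a NOT NULL declaration on vc the optimizer cannot use the
-- index alone to answer a query like the following, because any rows
-- where vc is null would be missing from the index:
select count(*) from m_dtx;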

Despite this failure, though, there are cases (as I showed a couple of years ago) where the optimizer in 12c can get clever enough to recognize the connection between a queried date column and the virtual partitioning column based on that date column.

New Oracle Security Paper on Non-Production and Delphix

I was asked by Delphix earlier this year to review their product with a particular focus on Oracle security of course. I wrote two papers; the first about Data Masking and Delphix and the second about securing data in non-production....[Read More]

Posted by Pete On 23/05/16 At 11:23 AM

library cache lock on BUILD$ object

I was testing application performance in 12c, and one job was constantly running slower than it did in 11g. This post details the troubleshooting steps; I hope they will be useful if you encounter a similar issue.

Problem

In a one-hour period, over 90% of the DB time was spent waiting for library cache lock waits. Upon investigation, one statement was suffering from excessive waits for the ‘library cache lock’ event. We recreated the problem and investigated further to understand the issue.

Following is the output of the wait_details_rac.sql script (which I will upload here); many PX query servers are waiting for the ‘library cache lock’ wait event.

   SID PID        EVENT                          USERNAME   OSUSER     STATE               WAIT_TIME   WIS P1_P2_P3_TEXT
------ ---------- ------------------------------ ---------- ---------- ------------------- --------- ----- ----------------------------------------
   276  12445     library cache lock             TST_USR    test       WAITING                     0     1 handle address 399021346904-lock address 2147
   288  12449     library cache lock             TST_USR    test       WAITING                     0     4 handle address 399021346904-lock address 2136
   303  12453     library cache lock             TST_USR    test       WAITING                     0     4 handle address 399021346904-lock address 2136
   315  12457     library cache lock             TST_USR    test       WAITING                     0     4 handle address 399021346904-lock address 2152
...snipped...
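If you don’t have the script to hand, a simplified query along these lines finds the same waiters (a minimal sketch, not the full script; for this event p1 is the handle address and p2 is the lock address):

select sid, state, wait_time, seconds_in_wait,
       p1text, p1, p2text, p2
from   v$session
where  event = 'library cache lock'
order by sid;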

Lock address to object

For all sessions waiting, the library cache handle address is the same, which means the sessions are waiting for a single library cache object. Querying x$kgllk confirms this, and we can also identify the object name.

  -- x$kgllk has one row per library cache lock; kgllkhdl is the handle
  -- address, and kgllkreq > 0 means the session is requesting (waiting).
  select ses.sid, ses.serial#, lck.kgllkcnt, lck.kgllkmod, lck.kgllkreq, lck.kglnaobj
  from x$kgllk lck, v$session ses
  where lck.kgllkhdl in
     (select kgllkhdl from x$kgllk where kgllkreq > 0)
  and lck.kgllkuse = ses.saddr
/
   SID    SERIAL#   KGLLKCNT   KGLLKMOD   KGLLKREQ KGLNAOBJ
------ ---------- ---------- ---------- ---------- ------------------------------------------------------------
   276       6518          0          0          2 5ce7869058
    84       6429          0          0          2 5ce7869058
   329      19358          0          0          2 5ce7869058
   342      37088          0          0          2 5ce7869058
   407      17734          0          0          2 5ce7869058
    74      26333          0          0          2 5ce7869058
...snipped..

Object name 5ce7869058 is not a typical schema object name and appears to be an internal object. We need to identify the owner of the object, which might lead to some clues.

Two ways to find the row in x$kglob:

1. Convert the handle address to hex and pad it:

SQL> select to_char(399021346904,'xxxxxxxxxxxxxx') from dual;
5ce7869058

SQL> select kglnaown, kglnaobj from x$kglob where kglhdpar = hextoraw('0000005CE7869058');
KGLNAOWN KGLNAOBJ
-------- -------------------------
$BUILD$  5ce7869058

2. Join to x$kgllk:

SQL> select kglnaown, kglnaobj from x$kglob where kglhdadr in
        ( select kgllkhdl from x$kgllk where kgllkreq > 0);
KGLNAOWN KGLNAOBJ
-------- -------------------------
$BUILD$  5ce7869058

Interesting. The object is owned by the user $BUILD$ – but there is no such database user, so this must be an internal object.

At this time, I took a few pstack samples of the process and tried to learn a bit more about the problem. Nothing too interesting: the function call kglLockWait indicates that we are waiting for a library cache lock.

# pstack 12485
#0  0x00007f42737d100a in semtimedop () from /lib64/libc.so.6
#1  0x000000000cda832d in sskgpwwait ()
#2  0x000000000cda5c98 in skgpwwait ()
#3  0x000000000c8f2bab in ksliwat ()
#4  0x000000000c8f1fb1 in kslwaitctx ()
#5  0x0000000000c00810 in ksfwaitctx ()
#6  0x0000000003b6765d in kglLockWait ()
#7  0x000000000cdcd441 in kgllkal ()
#8  0x000000000cdc53e4 in kglLock ()
#9  0x000000000cdbeb32 in kglget ()
#10 0x000000000cb7650c in kksfbc ()
#11 0x000000000cb166ec in opiexe ()
#12 0x00000000021ee529 in kpoal8 ()
#13 0x000000000cb1254d in opiodr ()
#14 0x0000000003117d7e in kpoodr ()
#15 0x000000000cd41b4a in upirtrc ()
#16 0x000000000cd2cde6 in kpurcsc ()
#17 0x000000000cd28014 in kpuexec ()
#18 0x000000000cd420e9 in OCIStmtExecute ()
#19 0x0000000002073a10 in kxfxsStmtExecute ()
#20 0x0000000002073387 in kxfxsExecute ()
#21 0x000000000206df42 in kxfxsp ()
#22 0x000000000206c489 in kxfxmai ()
#23 0x00000000020a91b3 in kxfprdp ()
#24 0x00000000031014a6 in opirip ()
#25 0x0000000001bb0a08 in opidrv ()
#26 0x00000000026c0f71 in sou2o ()
#27 0x0000000000bbd85e in opimai_real ()
#28 0x00000000026cb6bc in ssthrdmain ()
#29 0x0000000000bbd72c in main ()

Conclusion

At this time, I had some information, so I searched for ‘library cache lock $BUILD$’ on support.oracle.com and, of course, found a bug matching my symptoms. Further investigation from BDE confirmed my analysis. I don’t want to list the bug numbers, as a similar symptom may have a different root cause – you will need to work with Support further.