Oakies Blog Aggregator

The 2019 Year in Review

It’s that time of year where everyone is doing a “Year in Review” post and why not?


It’s good to gain perspective, plan for 2020 and celebrate all that was wonderful about 2019, (outside of politics, which was an utter failure on all fronts in pretty much every country… :))

The DBAKevlar blog:

  • 46 published posts for 2019
  • Over 800 posts on the site
  • Over 600K visitors for 2019
  • Daily views have increased steadily, with over 2500 visitors yesterday.
  • Somehow, still listed in the top 60 Oracle database blog sites with all that Microsoft content…:)

Microsoft Employment for 2019:

  • My mobile office worked in 25 states, not even going to count how many locations.
  • Traveled to 37 customer locations
  • Working regularly with over 70 higher-ed institutions and a number of K-12 customers on analytics and AI
  • Presented at 3 EdTech conferences
  • Contributed 4 blog posts to the Microsoft SQL Server/Azure blog
  •  2 role redesigns to make the Oracle work I was doing “official”, (you rock, Denny Ramsey- Thank you for being an awesome boss!)
  • Over 70 projects either migrating Oracle from on-prem to Azure IaaS or part of the Oracle Cloud/Azure partnership.
  • Accepted a new engineering role in Customer Success starting on January 1st, 2020.

Webinars:

Conferences:

I love me a great SQL Saturday and this year I presented at 19 of them!  It’s a few down from 2018, when I was an evangelist, but the number is still considerable.  I love getting to present to different audiences, hang out with my #SQLFamily and the opportunity to learn new things.  One of my goals for 2019 at Microsoft was to see if there was interest from the Oracle community in Power BI, so I presented at two Oracle events, RMOUG and KSCOPE, to gauge response.  I really enjoyed these sessions, as I got to do something very new with the technology and also had the opportunity to visit with my Oracle peeps.

I also presented three sessions at PASS Summit this year.  I love this conference and really appreciate that they’ve let me continue to be an official blogger.  I gave up this year on keeping up with the social events-  there are just too many and no one has that amount of energy at my age.  I did what I could, seriously.

Publications:

I started my WIT book, “Crushing the IT Gender Bias” in June, 2018.  I had just started at Microsoft, so it was an awesome time to start a book, too, (not.)  As much as I put in it, of course I can now think of so much more that should have been added.  I was really surprised by the difference in audience and how often I see it in book stores and libraries.  It’s just very different from my technical books in where it will end up, which leaves you feeling proud and vulnerable all at the same time.

I still owe Simple Talk a few more submissions to finish up the Linux series.  The book for Packt came in and stole much of my writing time and at some point, I’ll need to get my content finished, but luckily, each post in the series is standalone, so everything I add is just gravy to the mashed potatoes already posted.

Oracle wait event ‘log file parallel write’ change

This post is about a change in how the time is measured for the event ‘log file parallel write’. This is important for the performance tuning of any change activity in an Oracle database, because with the default commit settings, a foreground session that commits changes waits in the wait event ‘log file sync’, which is a wait on logwriter activity, and for that the wait event ‘log file parallel write’ has always been the indicator of the time spent on IO.

Log file sync
First things first: a foreground session normally waits on the wait event ‘log file sync’ when it commits, waiting for its change vectors to be written to the online redo log file(s) by the logwriter. It is wrong to always assume a ‘log file sync’ will be present. If, somehow, the logwriter manages to increase the ON DISK SCN to or beyond the foreground session’s commit SCN, there will be no ‘log file sync’ wait event.

I found the absence of the ‘log file sync’ wait event by artificially slowing the committing session down. However, with the improvements in parallelism in the Oracle code, and even more with the improvements on the hardware layer for lowering write latency (especially with persistent memory), this might be a situation that is actually visible in the near future. This is true for at least version 11.2.0.4 and higher (probably earlier too, but these are versions that I verified).

Log file parallel write
However, this post is about the event that in a lot of cases is the culprit for the ‘log file sync’ time: the ‘log file parallel write’ wait event. This event takes place in the logwriter when the logwriter itself writes (it can also delegate the writes to the logwriter worker processes, LGnn), and it is what Oracle database tuners normally measure to validate logwriter IO latency when foreground sessions show high ‘log file sync’ waits, indicating they are waiting on the logwriter.
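
As a point of reference (my addition, not part of the original investigation), a minimal sketch of how that latency is usually checked: compare the average wait times of the two events from v$system_event.

-- Minimal sketch: average wait time (in microseconds) of the commit-related events,
-- system-wide since instance startup.
select event,
       total_waits,
       time_waited_micro,
       round(time_waited_micro / nullif(total_waits, 0)) as avg_wait_us
from   v$system_event
where  event in ('log file sync', 'log file parallel write');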

I was looking at the logwriter code path using the ‘debug trace’ tool from Intel Pin tools, and used Tanel Poder’s snapper with begin and end snapshots on the logwriter process, on Linux, non-ASM, Oracle 19.5. I put the logwriter into single logwriter mode (“_use_single_log_writer”=true) and generated redo, which is bound to be picked up by the logwriter.
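
For completeness, a hedged sketch of that setup (my own reconstruction, not the exact commands from the test); the underscore parameter is undocumented, so treat this as test-system-only.

-- "_use_single_log_writer" is an undocumented parameter; I would set it in the spfile
-- and restart the (test) instance. Do not do this on a production system.
alter system set "_use_single_log_writer" = true scope = spfile;

-- after the restart, find the logwriter session to point snapper (and the pin tool) at:
select sid, serial#, program
from   v$session
where  program like '%LGWR%';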

There I noticed the statistics indicated a write had taken place, whilst there was no ‘log file parallel write’ wait event. Weird… So I dug deeper and looked at the functions being executed. These are the functions that I found: https://gitlab.com/snippets/1926207

The important bits are:
– In the function ksfdaio() the IO request is submitted:
ksfdaio>ksfdgo>ksfd_skgfqio>skgfqio>skgfr_lio_listio64>io_submit
– And in the function ksfdblock() the IO request result is read (“reaped”):
ksfdblock>ksfdpoll>ksfdwtio>skgfrwat>skgfrliopo>io_getevents

But no wait event is called *at all!*

This is a clear change from earlier versions, where the logwriter write would always yield a ‘log file parallel write’ wait event. But when did this change happen? To answer that question, I installed version 12.1.0.2 (no patches) to look at the code path and found: https://gitlab.com/snippets/1926211

The important bits are:
– The wait interface is invoked: kslwtbctx()
– In the function ksfdgo() the IO request is submitted:
ksfd_sbio>ksfdgo>ksfd_skgfqio>skgfqio>skgfr_lio_listio64>io_submit
– And in the function ksfdwtio() the IO request result is read (“reaped”):
ksfd_sbio>ksfdwtio>skgfrwat>skgfospo>skgfrliopo>io_getevents
– And the wait interface is ended: kslwtectx()

Ah! So a clear change! So when did this change happen? I installed 12.2.0.1 and found the same ‘waiting’ codepath, then installed 18.3, and found the waiting codepath. So the change probably happened in Oracle 19! So I installed 19.3, and found the waiting codepath once again!! So was it a change in PSU 19.4 or 19.5? I installed 19.5, and found the waiting codepath too.

O.M.G. Why am I seeing different behaviour between two databases on the exact same version? Well, I set some undocumented parameters, so I replicated these in my newly installed database…and it didn’t change.

What else is different? Well…one is a multi-tenant database, and the other isn’t…but surely that doesn’t change the IO codepath?? Of course this is easy to check, so I dropped the non-CDB database and created a CDB one, and…it now showed the same “non waiting” codepath.

So, apparently, using the oracle database in multi-tenant mode changes the IO codepath behaviour. I checked the codepath going down in versions, and it turns out this change appeared in 12.2.0.1.0, so essentially in Oracle 12.2 (Oracle 18 and 19 are actually Oracle 12.2.0.2 and Oracle 12.2.0.3). To be clear on this: I didn’t find this change in 12.1.0.2.

Does this mean the wait event ‘log file parallel write’ does not show up at all? No: if the io_getevents call after io_submit does not return all submitted IOs, the logwriter must wait for them, and at that point it invokes the wait interface and then goes into a blocking io_getevents call, so the actual waiting is timed. This is shown in this snippet: https://gitlab.com/snippets/1926212

This shows ksfdblock>ksfdpoll>ksfdwtio>skgfrwat>skgfrliopo>io_getevents, which returns 0 (meaning that it didn’t find any IOs in the completion queue); execution then returns up to the ksfdpoll function, invokes the wait interface, kslwtbctx, and then dives into io_getevents again, but now in blocking mode to wait for the IO.

I also replayed the tests on ASM, which shows the exact same (non-)waiting behaviour.

Conclusion
For Oracle versions 12.1.0.2 up to 19.5, the wait event ‘log file parallel write’ includes both the submission of the IO request (io_submit call) and the waiting for the submitted IOs to finish (io_getevents call). This means the waiting time of the event in this case is the total latency of the IO request, including operating system processing.

Starting from version 12.2, and only if the multi-tenant option is used, this changes: the wait event ‘log file parallel write’ is shown only if the submitted IOs have not completed when checked after submission, and thus the logwriter process has to wait for them. If the wait is shown, the time in the event is the actual waiting time for the IOs to finish from the perspective of the logwriter process, not the full IO latency time.

Of course if you have multiple log groups on disks with different latencies, this wait event waits for all IOs to finish, which means it times the slowest IO of all submitted IOs.

ADHD and Certification Tests- A Tale of Disaster

I don’t talk too much about my Attention Deficit Hyperactivity Disorder, (ADHD) brain outside of the occasional squirrel joke and more often view it as a super power, but when it is a challenge, I think it’s important to share what we go through.  You can’t solve a problem if you don’t know it is one.

ADHD and Autism

For those that aren’t aware, I was diagnosed with ADHD/on the spectrum for autism back in 2004-2005.  It’s not that I wasn’t before this, I just grew up in a very rural area. I fully demonstrated the traditional traits for a kid with both, it’s just that no one diagnosed me until one of my own children was diagnosed, (which is quite common.)  At that point in my life, I’d developed numerous effective coping mechanisms, but it still created enough challenges that I required medication to address some of the more serious symptoms.

As a kid, I had great difficulty interacting with other children, I became over-stimulated in places with large crowds, bright lights and loud noises, and would easily react with tears.  To this day, if I’m not careful, I can become overwhelmed in public and seek out quiet spaces.  I hyper-focus on things I’m interested in and can exhaust those around me on the topic.  I’m still quite sensitive to tags, seams and stitching in clothing, making it my mission to dress as comfortably as possible.  You will also notice that when I stand, I often rock back and forth, which is a comforting mechanism I never outgrew.

Do The Thing

In my adult years, I’ve learned how to navigate the world around me and protect the part of me that works a little differently than the majority.  It’s happened through extensive trial and error, but only because these were situations that I dealt with regularly and was able to figure out….except for one thing-  taking tests.

Written tests aren’t something you do often once you’re outside of education.  I wasn’t oblivious-  I knew the challenge existed when I would take compliance training at jobs-  which is an annual event at the most.  Where most peers would finish in an hour or two, it would take me five or six hours.  I DREAD annual compliance training not because of the training, but because if there is a test at the end of it, it will take extensive additional time to get through.

What’s ADHD Like?

If you don’t have ADHD, it might be difficult to understand what it’s like, so hopefully the following explanation will help.  When an ADHD’r is engaged-  either through interactive experiences or working on something they’re interested in-  simply put, the activity levels in the temporal lobes are maintained, allowing them focus and control.  This is why those with ADHD are able to do just fine when they are playing a computer game or another activity their frustrated teachers/parents see as “fun”, when the real term should be “engaging”.  When the situation is the opposite-  tedious or monotonous-  activity levels in the temporal lobes decrease, often significantly, and we’re no longer able to maintain.  It’s not that we don’t want to pay attention or stand still-  WE HAVE NO CONTROL OVER EITHER.  This is why we’re prescribed stimulants, which can seem very odd when those with ADHD are identified as hyperactive.  These stimulants are designed to increase the activity levels in the temporal lobes, letting us regain focus and self-control, which in turn lets us control our emotions and actions and decreases the hyperactivity.

For me, when my ADHD is triggered, most noises, including voices, turn into “nails down a chalkboard”.  Those individuals speaking directly to me are drowned out by the distractions, like a bad connection on a conference call, where I receive only half of what they’re saying.   The written word, immediately after being read, can’t be recalled-  I simply can’t process the information.  If I have to transition between screens on the computer, the simple act of switching screens causes me to lose track of what I was switching the screen for in the first place.

It can cause a lot of frustration, as most with ADHD have suffered demoralizing misconceptions about how their brain worked as a child, (“If you’d just pay attention”, “You’re not living up to your potential”, “He/She is just lazy”).  We are embarrassed about the condition and try to force ourselves to focus, which often makes it worse.  This creates a domino effect, making us more frustrated, resulting in less focus in our already distracted brain.  It’s obvious to a bystander when my ADHD is affecting me to this level, as my hyperactivity will override my medication, (and FYI- hyperactivity often presents itself differently in men vs. women) which means the speed of my speech will increase, my foot will tap, my skin will seem irritated, especially my ears, nose and scalp, and I may fiddle with items in front of me.

Many have asked me how I treat my ADHD and I believe in a combination of nutrition, life structure/routine changes, followed by medication.  I decided I did need medication after an experience that left me quite shaken. I was driving down the street, my children were chattering in the backseat of the car and it was a significant distraction- so distracting that I couldn’t focus long enough to tell you if the traffic light in front of me was red or green.  This type of dangerous situation is definitely an indicator you need medication and needing medication for ADHD isn’t a sign of failure-  Treat ADHD for what makes you successful, don’t treat it for what others deem a success.

Timed Tests, Set Up for Failure

Some of the best steps to successfully working through tedious or monotonous tasks that trigger our ADHD symptoms are:

  1. Streamline the process to remove any added frustration or monotony.
  2. Perform the task broken up into smaller sections of time, so a 1-hour task we’ll do in four 15-minute sessions, taking breaks in between.
  3. Insert interesting, short distractions in the middle of the tedious task to raise the activity levels in the temporal lobes and regain our focus.
  4. Have something to eat or drink that can help increase focus, (controlled distraction)
  5. Wear headphones and listen to music/podcast while performing the task, (controlled distraction)

So this is where my difficult week came into play.  I have two certifications, made up of three tests, that are part of my yearly goals this year at Microsoft.  As any of you who’ve taken certification tests know, these tests are done in a very controlled environment and have strict rules that must be adhered to:

  1. Complex and poorly designed UI for logging in, scheduling and authorizing the test which enhances frustration.
  2. Timed test- must be performed beginning to end, no break
  3. No outside interaction- no headphones, drinks, food or interruptions.
  4. Test isn’t interactive in any way, built complex to test your knowledge
  5. Due to travel, I was to perform the test at home, (the RV)

I was aware, through previous compliance testing that often took me two to three tries to get through stupid, mundane questions, that this would not be easy for me.  It didn’t matter if I knew the material, I would have difficulty “absorbing” the questions, even after reading them 2, 3 or more times, and it had less to do with what I knew and more to do with what I could get through.

Happy Girl

My anxiety has seriously decreased while at Microsoft, to the point that Esme, who is a service dog trained to deal with my sleep anxiety attacks, failed to wake me up this last week.  My anxiety has decreased so much, my service dog is out of service. It surprises me how the little things can take some of us down, not the big things you’d expect.  Job change?  Sell your property, sell 95% of your stuff and move into an RV?  Kids move out?  All four at once?  Handle it with grace and calm.  Give me a certification test to pass?  No sleep for a week!

While preparing for the test, I haven’t slept, have had two sleep anxiety attacks, (oh those are lovely, we should talk…) have hardly eaten, and am, all in all, pretty much a tense freakazoid.  Seriously-  major crisis?  I got this.  Certification test?  I don’t got this.

Scalar Subq Bug

This is an observation that came up on the Oracle Developer Forum a couple of days ago, starting life as the fairly common problem:

I have a “select” that runs quickly  but when I use in a “create as select” it runs very slowly.

In many cases this simply means that the query was a distributed query and the plan changed because the driving site changed from the remote to the local server. There are a couple of other reasons, but distributed DML is the one most commonly seen.

In this example, though, the query was not a distributed query, it was a fully local query. There were three features to the query that were possibly suspect, though:

  • “ANSI” syntax
  • scalar subqueries in the select list
  • redundant “order by” clauses in inline views

The OP had supplied the (horrible) SQL in a text format along with images from the Enterprise Manager SQL Monitor screen showing the two execution plans, and two things were obvious from the plans – first that the simple select had eliminated the scalar subqueries (which were redundant) while the CTAS had kept them in the plan, and secondly that most of the elapsed time for the CTAS was spent in lots of executions of the scalar subqueries.

My first thought was that the problem was probably a quirk of how the optimizer translates “ANSI” SQL to Oracle-standard SQL, so I created a model that captured the key features of the problem – starting with 3 tables:

rem
rem     Script:         ctas_scalar_subq.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Dec 2019
rem     Purpose:        
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem             11.2.0.4
rem

create table t1 as
select * from all_objects
where rownum <= 10000 -- > comment to avoid wordpress format issue
;

alter table t1 add constraint t1_pk primary key(object_id);

create table t2 as
select * from t1
;

alter table t2 add constraint t2_pk primary key(object_id);

create table t3 as
select * from all_objects
where rownum <= 500 -- > comment to avoid wordpress format issue
;

alter table t3 add constraint t3_pk primary key(object_id);

begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T1',
                method_opt  => 'for all columns size 1'
        );

        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T2',
                method_opt  => 'for all columns size 1'
        );

        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T3',
                method_opt  => 'for all columns size 1'
        );
end;
/

I’m going to use the small t3 table as the target for a simple scalar subquery in the select list of a query that selects some columns from t2; then I’m going to use that query as an inline view in a join to t1 and select some columns from the result. Here’s the starting query that’s going to become an inline view:


select 
        t2.*,
        (
        select  t3.object_type 
        from    t3 
        where   t3.object_id = t2.object_id
        )       t3_type
from
        t2
order by
        t2.object_id
;

And here’s how I join the result to t1:


explain plan for
        select
                v2.*
        from    (
                select
                        t1.object_id,
                        t1.object_name  t1_name,
                        v1.object_name  t2_name,
                        t1.object_type  t1_type,
                        v1.object_type  t2_type
                from
                        t1
                join (
                        select 
                                t2.*,
                                (
                                select  t3.object_type 
                                from    t3 
                                where   t3.object_id = t2.object_id
                                )       t3_type
                        from
                                t2
                        order by
                                t2.object_id
                )       v1
                on
                        v1.object_id = t1.object_id
                and     v1.object_type = 'TABLE'
                )       v2
;

select * from table(dbms_xplan.display(null,null,'outline alias'));

The initial t2 query becomes an inline view called v1, and that becomes the second table in a join with t1. I’ve got the table and view in this order because initially the OP had an outer (left) join preserving t1 and I thought that that might be significant, but it turned out that it wasn’t.

Having joined t1 and v1 I’ve selected a small number of columns from the t1 and t2 tables and ignored the column that was generated by the inline scalar subquery. (This may seem a little stupid – but the same problem appears when the inline view is replaced with a stored view, which is a more realistic possibility.) Here’s the resulting execution plan (taken from 11.2.0.4 in this case):


-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |   476 | 31416 |    45  (12)| 00:00:01 |
|*  1 |  HASH JOIN           |      |   476 | 31416 |    45  (12)| 00:00:01 |
|   2 |   VIEW               |      |   476 | 15708 |    23  (14)| 00:00:01 |
|   3 |    SORT ORDER BY     |      |   476 | 41888 |    23  (14)| 00:00:01 |
|*  4 |     TABLE ACCESS FULL| T2   |   476 | 41888 |    22  (10)| 00:00:01 |
|   5 |   TABLE ACCESS FULL  | T1   | 10000 |   322K|    21   (5)| 00:00:01 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("V1"."OBJECT_ID"="T1"."OBJECT_ID")
   4 - filter("T2"."OBJECT_TYPE"='TABLE')

I was a little surprised by this plan as I had expected the optimizer to eliminate the in-line “order by” in view v1 – but even when I changed the code to traditional Oracle join syntax the redundant and wasteful sort at operation 3 still took place. (You might note that the data will be reported in an order dictated by the order of the data arriving from the t1 tablescan thanks to the mechanism of the hash join, so the sort is a total waste of effort.)

The plus point, of course, is that the optimizer had been smart enough to eliminate the scalar subquery referencing t3. The value returned from t3 is not needed anywhere in the course of the execution, so it simply disappears.

Now we change from a simple select to a Create as Select which I’ve run, with rowsource execution stats enabled, using Oracle 19.3 for this output:

set serveroutput off
set linesize 156
set trimspool on
set pagesize 60

alter session set statistics_level = all;

create table t4 as
        select  
                v2.*
        from    (
                select
                        t1.object_id,
                        t1.object_name  t1_name,
                        v1.object_name  t2_name,
                        t1.object_type  t1_type,
                        v1.object_type  t2_type
                from
                        t1
                join (
                        select 
                                t2.*,
                                (
                                select  t3.object_type 
                                from    t3 
                                where   t3.object_id = t2.object_id
                                )       t3_type
                        from
                                t2
                        order by 
                                t2.object_id
                )       v1
                on
                        v1.object_id = t1.object_id
                and     v1.object_type = 'TABLE'
                )       v2
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

alter session set statistics_level = typical;

And here’s the run-time execution plan – showing the critical error and statistics to prove that it really happened:

----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Writes |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------
|   0 | CREATE TABLE STATEMENT           |       |      1 |        |      0 |00:00:00.01 |     471 |      3 |       |       |          |
|   1 |  LOAD AS SELECT                  | T4    |      1 |        |      0 |00:00:00.01 |     471 |      3 |  1042K|  1042K| 1042K (0)|
|   2 |   OPTIMIZER STATISTICS GATHERING |       |      1 |    435 |    294 |00:00:00.01 |     414 |      0 |   256K|   256K|  640K (0)|
|*  3 |    HASH JOIN                     |       |      1 |    435 |    294 |00:00:00.01 |     414 |      0 |  1265K|  1265K| 1375K (0)|
|   4 |     VIEW                         |       |      1 |    435 |    294 |00:00:00.01 |     234 |      0 |       |       |          |
|   5 |      TABLE ACCESS BY INDEX ROWID | T3    |    294 |      1 |     50 |00:00:00.01 |      54 |      0 |       |       |          |
|*  6 |       INDEX UNIQUE SCAN          | T3_PK |    294 |      1 |     50 |00:00:00.01 |       4 |      0 |       |       |          |
|   7 |      SORT ORDER BY               |       |      1 |    435 |    294 |00:00:00.01 |     234 |      0 | 80896 | 80896 |71680  (0)|
|*  8 |       TABLE ACCESS FULL          | T2    |      1 |    435 |    294 |00:00:00.01 |     180 |      0 |       |       |          |
|   9 |     TABLE ACCESS FULL            | T1    |      1 |  10000 |  10000 |00:00:00.01 |     180 |      0 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("V1"."OBJECT_ID"="T1"."OBJECT_ID")
   6 - access("T3"."OBJECT_ID"=:B1)
   8 - filter("T2"."OBJECT_TYPE"='TABLE')

You’ll notice that the VIEW at operation 4 reports the inline scalar subquery as operations 5 and 6, and the Starts column shows that the scalar subquery executes 294 times – which is the number of rows returned by the scan of table t2. Although my first thought was that this was an artefact of the transformation from ANSI to Oracle syntax, it turned out that when I modified the two statements to use traditional Oracle syntax the same difference appeared. Finally I re-ran the CTAS after removing the order by clause in the in-line view, and the redundant subquery disappeared from the execution plan.
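
For reference, this is roughly the traditional-syntax CTAS used for that re-test, reconstructed here as a sketch from the ANSI version above (the table name t4b is mine, not from the original test):

create table t4b as
select
        t1.object_id,
        t1.object_name  t1_name,
        v1.object_name  t2_name,
        t1.object_type  t1_type,
        v1.object_type  t2_type
from
        t1,
        (
        select
                t2.*,
                (
                select  t3.object_type
                from    t3
                where   t3.object_id = t2.object_id
                )       t3_type
        from
                t2
        order by
                t2.object_id
        )       v1
where
        v1.object_id   = t1.object_id
and     v1.object_type = 'TABLE'
;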

Tiny Geek bit

It’s not immediately obvious why there should be such a difference between the select and the CTAS in this case, but the 10053 trace files do give a couple of tiny clues: the CTAS trace file includes the lines:

ORE: bypassed - Top query block of a DML.
TE: Bypassed: Top query block of a DML.
SQT:    SQT bypassed: in a transaction.

The first two suggest that we should expect some cases where DML statements optimise differently from simple queries. The last one is a further indication that differences may appear. (SQT – might this be subquery transformation? It doesn’t appear in the list of abbreviations in the trace file.)

Unfortunately the SELECT trace file also included the line:


SQT:     SQT bypassed: Disabled by parameter.

So “SQT” – whatever that is – being in or out of a transaction may not have anything to do with the difference.

Summary

There are cases where optimising a select statement is not sufficient as a strategy for optimising a CTAS statement. In this case it looks as if an inline view which was non-mergeable (thanks to a redundant order by clause) produced the unexpected side-effect that a completely redundant scalar subquery in the select list of the inline view was executed during the CTAS even though it was transformed out of existence for the simple select.

There are some unexpected performance threats in “cut-and-paste” coding, and in re-using stored views if you haven’t checked carefully what they do and how they’re supposed to be used.

 

 

The Shepherd’s Crown


The Shepherd’s Crown is the last book in the Discworld series by Terry Pratchett. I really don’t like the finality of how that sounds, which is probably why I’ve waited a long time to read this book.

Having complained about how the female lead character was used in The Unicorn Project, I figured it was time to read this book and see it done properly again.

I’m not going to include any full-on spoilers, but some things might give the game away, so don’t read this if you’ve not already read the book.

During the first sequence in the book I got a pretty good idea what was coming and I was like, “No. No. Nooooooooo!” I’m not going to say what it was, but it was a major kick in the gonads…

Tiffany Aching is a great character. It would be so easy for a lesser writer to make her a Mary Sue, but Pratchett keeps her multi-dimensional. Sometimes strong, clever, and at times ruthless. Sometimes self-doubting and almost naive.

As you would expect for this part of the disc (world), there are a number of familiar characters. It’s wrong to say any character in Discworld is “my favourite”, as it changes with each book, and sometimes several times in a single book. This book contained several of my favourite characters. Some old and some new.

Friday Philosophy – Community Means So Much

There have been a few things in the last month that have really brought home to me how much I personally receive from the Community. In my case, my Community is primarily the Oracle User Community – The end users who come together to share knowledge, the Oracle employees who support this, and the companies that support the end users. For most of you reading this, you are part of the Oracle User Community, but most of you will be (I hope) in other communities too through your other interests, be they religious, music, hobbies, sports, charities etc.

Tools To Monitor and Work with Oracle on Azure

I’ve been studying for over a week for my certs.  It really is a challenge for my ADHD brain, as I learn by interacting and using a product, not by reading about it, and guess what?  Most of what the certs are on is not in my technical area.  Yeah, this is not fun for me.  I find that my brain hits a limit on what it can absorb before the activity levels in the temporal lobes diminish and I need to take a break, which is what I’m doing right now after a full day of Azure Synapse Analytics, (yes, I know it’s not GA.  Yes, I know I have limited exposure to work with it, which means it’s going to be difficult for me to know it at the deep level I’d need for a cert.  Yes, I want to find the person who added this, along with CosmosDB and Polybase, as the main content for the cert… :))

Now that I’m done with that rant, let’s talk about something that IS in my area of expertise and something that my customers are interested in-  “What tools are available to work with Oracle on Azure?”  Azure is coming up pretty quick in the cross-platform space for databases- new additions to PaaS include PostgreSQL and MongoDB, but for Oracle, I’m currently still building out Linux VMs for my Oracle environments.  For those that are wondering how they can interact with those databases, there are a few tools that I’d like to highlight today and why I recommend them.

Oracle Enterprise Manager Express/Database Cloud Service

This is the stand-alone web-based interface for the Oracle database and the descendant, (although it has been impressively redesigned) of the Database Control product of olden days, and it is available for any Oracle database you build on a VM.


This is for Oracle Database 12c and above; it becomes available once you build a database, and to access it you need to locate the connection information and the port that OEM Express listens on.  Although much of this looks very similar to the Oracle Cloud Control, (or Oracle Enterprise Manager) infrastructure tool, most of what you see was designed by a different team than the EM team at Oracle, so there are some vital differences.
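
As a rough pointer (my addition, not from the original post), the OEM Express port is managed through the XDB configuration package, so something along these lines shows, or sets, the HTTPS port you then browse to at https://<host>:<port>/em:

-- show the currently configured OEM Express HTTPS port (0 means no port configured)
select dbms_xdb_config.gethttpsport() as em_express_port from dual;

-- set a port if one isn't configured; 5500 is just an example value
exec dbms_xdb_config.sethttpsport(5500)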

Benefits:

  • Doesn’t require any additional hardware or infrastructure
  • Great for monitoring performance

Drawbacks:

  • No Notifications
  • If the database is down, so is OEM Express
  • Not for database management like adding datafiles and querying data.

If you’re interested in learning more, check out the Oracle documentation and Azure’s link to using it on a VM.

Oracle Cloud Control, (OEM) on an Azure VM

I’m building out the rest of the environment in Azure VMs, why not the infrastructure tool that the DBA is familiar with?  Oracle Cloud Control 13c, (Oracle Enterprise Manager or OEM) is the most widely used tool to manage Oracle databases.


It has three main components:

  • The OEM database repository
  • The Oracle Management Service, (OMS)
  • The OEM agent

There are a number of other components, but for this post, we’re going to touch on these three, as they are important for the infrastructure build. For small environments, the database repository and the OMS can be located on a single 16 vcore VM, but if you have hundreds of databases, it’s best to separate them onto their own VMs.  The OEM agent is installed on each of the target VM hosts and is used to collect and send information to OEM.  Uploads from the agent into the repository must occur on a regular basis, or thresholds are breached, information goes missing and notifications are sent regarding down targets.

Benefits:

  • It is a full infrastructure tool-  management, monitoring, alerts and notification of the entire stack.
  • Metric extensions to write out your own metrics and alerts to notify on.
  • Uses AWR/ASH data for performance dashboards and reporting.
  • Understands and has built in templates for Oracle centric product monitoring/management
  • Is free for main features if you have the diagnostic and tuning pack for the databases you’re monitoring, (if they’re not Enterprise edition, then you’re out of luck, too.)
  • Building it in Azure cuts down on egress charges from the Azure cloud that would exist if it were on-prem.

Drawbacks:

  • Requires different management packs for advanced features.
  • Requires knowledge to support this unique multi-tier architecture, including backups, patching and MAA.
  • If you desire a single OEM environment, (for on-prem and Azure) there will be egress charges for the data coming from Azure to on-prem.

I worked with this product for a number of years and I still love it-  it has incredible value and yes, it can be deployed and built out in Azure to monitor/manage/notify on your Oracle environment as it can on-prem.

Database Performance Analyzer, (DPA) from Solarwinds

I admit I’m new to this-  When I was with Oracle, I didn’t do the research into competitive products like I did when I was in database cloning.  Thomas LaRock pointed me to it just recently, so I decided to try it and added the DPA service in the Azure marketplace to my Oracle database resource group.


Benefits:

  • Simple to configure-  helpful wizard and very user friendly
  • Only requires TNS to connect to Oracle databases on Azure VMs
  • Provides great insight to performance metrics and monitoring
  • Has an option to create a database repository for retention of historical data
  • Doesn’t require an agent installation to any database VM.

Drawbacks:

  • Does not rely on AWR/ASH data, so the processing can create some performance impact, (per documentation, less than 1% overhead) whereas OEM uses the AWR/ASH buffer and background processes.  At the same time, if you don’t have the diagnostic and tuning pack, this could go into the benefit list.
  • Does require a JDBC driver to each database VM


There are significant features to this product, making it more similar to Oracle Enterprise Manager, and it has the support of a company with considerable investment in the product and experience in Azure.

Oracle SQL Developer

The Oracle SQL Developer product has come a long way since its inception and much of that credit needs to go to the incredible team at Oracle, including those who are prominent in the Oracle community, like Jeff Smith, Kris Rice and Ashley Chen.  Their willingness to listen to the Oracle community and turn its needs into features has been one of the critical reasons for the product’s success.

Although this product is more focused towards the developer, unlike the previous three, I want to point out a few areas that hopefully will convince you there are more similarities than differences.

Benefits:

  • Simple installation on a desktop or VM jump box
  • Performance metric dashboards and data using ASH, AWR, etc.
  • Sweet SQLcl that is like SQL*Plus command line, but on steroids
  • Ability to manage database objects, tablespaces, users, etc., (most of which OEM Express can’t do.)

Drawbacks:

  • No centralized repository to manage alerts, etc.  This can be done through the REST API, but it’s not a full infrastructure tool like OEM Cloud Control.
  • Although great for migrations from other database platforms to Oracle, not as much cross-platform coverage as DPA from Solarwinds.

The truth is, there’s just not that much to complain about here and honestly, this comes from a DBA that doesn’t use much in the way of tools, (I’m missing over 50% of my visual field, so I spend most of my time at the command line, and getting me to rave about a GUI tool is difficult at best… :))  Heck, don’t take my word for it, just go over to Jeff’s blog and read up on all the cool stuff you can do.

The deal is, there are a TON of ways to monitor, manage and interact with Oracle on Azure.  These are just a few, but I wanted to highlight that Oracle DBAs, developers and application folks will find their world hasn’t changed just because the location of the Oracle databases has.  Moving to the cloud can also remove bottlenecks, like the network latency that would exist in many misconfigured hybrid solutions.

OK, enough time avoiding more studying, back to it.

Flashback Archive

A classic example of Oracle’s “mix and match” problem showed up on the Oracle Developer Forum a few days ago. Sometimes you see two features that are going to be really helpful in your application – and when you combine them something breaks. In this case it was the combination of Virtual Private Database (VPD/FGAC/RLS) and Flashback Data Archive (FDA/FBA) that resulted in the security predicate not being applied the way you would expect, hence allowing users to see data they were not supposed to see.

The OP supplied us with a model (based in part on Tim Hall’s FDA article) to demonstrate the issue on 11.2.0.4, and I’ve hacked it about a bit to explain it here, and to test it on 12.2.0.1 and 19.3.0.0 where the same failure occurs.

I’m going to start with just the VPD part of the setup before adding in the FDA. Most of the code has been written to run as the SYS user and it creates a new tablespace and a couple of users so you may want to do some editing before you try any tests. There’s also a short script at the end of the blog to remove the flashback data archive, tablespace, and users – again, something to be run by SYS.

You’ll note that this script assumes you already have a tablespace called test_8k_assm, and a temporary tablespace called temp. The latter may well be a valid assumption, the former probably isn’t.

rem
rem     Script:         vpd_fda_bug.sql
rem     Author:         Duncs (ODC)
rem     Dated:          Dec 2019
rem
rem     Last tested
rem             19.3.0.0
rem             12.2.0.1
rem             11.2.0.4
rem
rem     Notes
rem     Has to be run as SYS
rem
rem     See also
rem     https://community.oracle.com/thread/4307453
rem


create user vpd_test_data_owner identified by Password_1234 
        default tablespace test_8k_assm
        temporary tablespace temp 
        quota unlimited on test_8k_assm
;
 
grant 
        connect,
        resource, 
        create any context
to
        vpd_test_data_owner
;

grant
        execute on dbms_rls
to
        vpd_test_data_owner
;
 
 
create table vpd_test_data_owner.person (
        person_id       number, 
        surname         varchar2(30), 
        unit_id         number
);

insert into  vpd_test_data_owner.person values (-1, 'One',  -1);
insert into  vpd_test_data_owner.person values (-2, 'Two',  -2);
insert into  vpd_test_data_owner.person values (-3, 'Three',-3);
insert into  vpd_test_data_owner.person values (-4, 'Four', -4);
insert into  vpd_test_data_owner.person values (-5, 'Five', -5);

commit;

create user vpd_test_function_owner identified by Password_1234
        default tablespace test_8k_assm 
        temporary tablespace temp 
        quota unlimited on test_8k_assm
;
 
grant 
        connect, 
        resource
to 
        vpd_test_function_owner
;
 
prompt  ============================================
prompt  Create a packaged function to set a context
prompt  that we will use in a VPD security predicate
prompt  ============================================

create or replace package vpd_test_function_owner.context_api_pkg AS

procedure set_parameter(
        p_name  in  varchar2,
        p_value in  varchar2
);

end context_api_pkg;
/
 
create or replace package body vpd_test_function_owner.context_api_pkg IS
 
procedure set_parameter (
        p_name  in  varchar2,
        p_value in  varchar2
) is
begin
        dbms_session.set_context('my_vpd_context', p_name, p_value);
end set_parameter;

end context_api_pkg;
/

prompt  ======================================================
prompt  Allow public to set the context value.  (Not sensible)
prompt  ======================================================

grant execute on vpd_test_function_owner.context_api_pkg to public;

prompt  ===============================================================
prompt  Create a context that can only be set by our packaged procedure
prompt  ===============================================================

create or replace context my_vpd_context 
        using vpd_test_function_owner.context_api_pkg
;

prompt  =====================================================
prompt  Create a security function that generates a predicate
prompt  based on our context, then create a policy to connect
prompt  the function to the test table for select statements.
prompt  =====================================================
 
create or replace function vpd_test_function_owner.test_vpd_function (
    p_schema  in varchar2 default null
  , p_object  in varchar2 default null
)
return varchar2
as
    lv_unit_id number := nvl(sys_context('my_vpd_context','unit_id'), -1);
begin
    return 'unit_id = ' || lv_unit_id;
end test_vpd_function;
/

begin
      dbms_rls.add_policy (
               object_schema    => 'vpd_test_data_owner'
             , object_name      => 'person'
             , policy_name      => 'test_vpd_policy'
             , function_schema  => 'vpd_test_function_owner'
             , policy_function  => 'test_vpd_function'
             , statement_types  => 'select'
      );
end;
/


There are several quick and dirty bits to the script – you shouldn’t be using the connect and resource roles, for example; they exist only for backwards compatibility and don’t even manage that very well any more. Any grants made should be carefully chosen to be the minimum necessary to achieve the required functionality, and you should be defining roles of your own rather than using pre-existing ones.

Generally you don’t expect to set up a security policy that stops the owner of the data from seeing all the data – and I’ve left the policy to default to dynamic, which means the function will execute on every parse and execute of a statement accessing the table (and that’s something to avoid if you can). For convenience I’ve also allowed the owner of the data to execute the function that changes the context that is used by the predicate function – and you don’t really want to allow anyone who is constrained by a security policy to be able to modify their own access rights like this.

Since the code allows a deliberately lax setup on VPD you could at this point do something like the following to check that VPD is actually working before moving on to test the effect of FDA:

connect vpd_test_data_owner/Password_1234
select * from person;

execute vpd_test_function_owner.context_api_pkg.set_parameter('unit_id',-2)
select * from person;

The first execution of the query should show you only the row where unit_id = -1, as “unit_id = -1” is the default return value from the security function. The second execution should return only the row where unit_id = -2, as the call to set_parameter() changes the context value so that when the security function re-executes it generates a new security predicate “unit_id = -2”. (It’s worth noting that one of the options for security policies is to make them context-dependent so that they re-execute only when the relevant context is changed – but in this case the policy defaults to “re-execute the function on every parse and execute”.)  [NOTE: for some clues on the possible performance impact of badly defined VPD, check the comments made in response to this blog note]
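
(As an aside, and purely as a sketch of my own rather than part of the original test: if you did want the context-dependent behaviour, the policy could be recreated with policy_type => dbms_rls.context_sensitive, along these lines. Newer releases also let you tie the policy to a specific context namespace and attribute.)

begin
        dbms_rls.drop_policy(
                object_schema => 'vpd_test_data_owner',
                object_name   => 'person',
                policy_name   => 'test_vpd_policy'
        );
        dbms_rls.add_policy(
                object_schema   => 'vpd_test_data_owner',
                object_name     => 'person',
                policy_name     => 'test_vpd_policy',
                function_schema => 'vpd_test_function_owner',
                policy_function => 'test_vpd_function',
                statement_types => 'select',
                policy_type     => dbms_rls.context_sensitive
        );
end;
/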

Once you’re satisfied that the security policy is working correctly you can move on to the second feature – flashback data archive. Logging on as SYS once again, execute the following code – which, amongst other things, creates a new tablespace. You’ll notice that I’ve got three lines in the “create tablespace” statement naming a datafile (though one of them doesn’t actually supply a name). The names (or absence thereof) correspond to the default naming conventions I have for my sandbox 11g, 12c, and 19c instances in that order. You will want to adjust according to your file-naming conventions.


prompt  ============================
prompt  Setting up Flashback Archive
prompt  ============================

create tablespace fda_ts 
        datafile        
--              no name needed if OMF
--              '/u01/app/oracle/oradata/orcl12c/orcl/fda_ts.dbf'
--              '/u02/data/OR19/orclpdb/fda_ts.dbf'
        size 1m autoextend on next 1m
;

alter user vpd_test_data_owner quota unlimited on fda_ts;

create flashback archive default fda_1year tablespace fda_ts
quota 1g retention 1 year;
 
grant flashback archive on fda_1year to vpd_test_data_owner;
grant flashback archive administer to vpd_test_data_owner;
grant execute on dbms_flashback_archive to vpd_test_data_owner;
 
prompt  Sleeping for 1 minute before adding table to flashback archive
execute dbms_lock.sleep(60);
alter table vpd_test_data_owner.person flashback archive fda_1year;

prompt  Sleeping for 1 minute before updating the data
execute dbms_lock.sleep(60);
update vpd_test_data_owner.person set surname = upper(surname);

commit;

prompt  Sleeping for 5 minutes to give FDA a chance to do its thing.
execute dbms_lock.sleep(300);
alter system flush shared_pool;

prompt  ==================================================
prompt  Now connect to the data owner schema and run the 
prompt  original query then a couple of flashback queries, 
prompt  pulling their plans from memory
prompt  ==================================================

connect vpd_test_data_owner/Password_1234

set linesize 120
set pagesize 50
set trimspool on
set serveroutput off

spool vpd_fda_bug.lst
-- set autotrace on explain

select * from vpd_test_data_owner.person;
select * from vpd_test_data_owner.person AS OF TIMESTAMP SYSDATE-1/1440;
select * from vpd_test_data_owner.person AS OF TIMESTAMP SYSDATE-2/1440;
select * from vpd_test_data_owner.person AS OF TIMESTAMP SYSDATE-3/1440;
select * from vpd_test_data_owner.person AS OF TIMESTAMP SYSDATE-4/1440;
select * from vpd_test_data_owner.person AS OF TIMESTAMP SYSDATE-5/1440;
select * from vpd_test_data_owner.person AS OF TIMESTAMP SYSDATE-6/1440;
select * from vpd_test_data_owner.person AS OF TIMESTAMP SYSDATE-7/1440;  
select * from vpd_test_data_owner.person AS OF TIMESTAMP SYSDATE-8/1440;  
select * from vpd_test_data_owner.person AS OF TIMESTAMP SYSDATE-9/1440;  
select * from vpd_test_data_owner.person AS OF TIMESTAMP SYSDATE-10/1440; 
select * from vpd_test_data_owner.person AS OF TIMESTAMP SYSDATE-15/1440; 
set autotrace off
spool off 

I’ve created a tablespace that I’m going to reserve for the flashback archive and given my data owner a quota on that tablespace; then I’ve created a flashback archive in that tablespace and granted various privileges relating to flashback archive to my data owner.

The next few lines of code include a couple of calls to dbms_lock.sleep() because I want to avoid the risk of getting an Oracle error ORA-01466: unable to read data – table definition has changed, but all I’ve done otherwise is modify the person table to be archiving and then made a little change that will eventually be recorded as part of the archive.

I’ve then introduced a 5 minute wait as it seems to take about 5 minutes before the flashback process takes any action to capture the original table data and copy any related undo; but after that 5 minutes is up I’ve queried the person table directly (which should show you the one row where unit_id = -1), then gradually gone backwards in time re-querying the data.

You should see the same result being produced for a few minutes, then a version of the “pre-update” data (upper case ‘ONE’ changing to mixed case ‘One’), and then you will (I hope) see the entire original data set appearing and finally you should see Oracle raising error “ORA-01466: unable to read data – table definition has changed” when your “as of timestamp” goes back beyond the moment you created the archive. (Except that that doesn’t happen with 11.2.0.4, which manages to report the data as if it had existed long before you created it).

I’ve commented out the “set autotrace on explain” in the above, but if you leave it in, or introduce it for one of the queries, you’ll see what’s going on that allows flashback data archive show you data that should have been hidden by the security predicate. Here’s the execution plan for one run:

-----------------------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name                | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |                     |     2 |    86 |    17  (12)| 00:00:01 |       |       |
|   1 |  VIEW                     |                     |     2 |    86 |    17  (12)| 00:00:01 |       |       |
|   2 |   UNION-ALL               |                     |       |       |            |          |       |       |
|*  3 |    FILTER                 |                     |       |       |            |          |       |       |
|   4 |     PARTITION RANGE SINGLE|                     |     1 |    71 |     7   (0)| 00:00:01 |   KEY |     1 |
|*  5 |      TABLE ACCESS FULL    | SYS_FBA_HIST_353151 |     1 |    71 |     7   (0)| 00:00:01 |   KEY |     1 |
|*  6 |    FILTER                 |                     |       |       |            |          |       |       |
|   7 |     MERGE JOIN OUTER      |                     |     1 |  2083 |    10  (20)| 00:00:01 |       |       |
|   8 |      SORT JOIN            |                     |     1 |    55 |     7  (15)| 00:00:01 |       |       |
|*  9 |       TABLE ACCESS FULL   | PERSON              |     1 |    55 |     6   (0)| 00:00:01 |       |       |
|* 10 |      SORT JOIN            |                     |     5 | 10140 |     3  (34)| 00:00:01 |       |       |
|* 11 |       TABLE ACCESS FULL   | SYS_FBA_TCRV_353151 |     5 | 10140 |     2   (0)| 00:00:01 |       |       |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("TIMESTAMP_TO_SCN"(SYSDATE@!-.004861111111111111111111111111111111111111)<12670390363943)
   5 - filter("ENDSCN" .le. 12670390363943 AND ("OPERATION" IS NULL OR "OPERATION"<>'D') AND
              "ENDSCN">"TIMESTAMP_TO_SCN"(SYSDATE@!-.004861111111111111111111111111111111111111) AND ("STARTSCN" IS
              NULL OR "STARTSCN" .le. "TIMESTAMP_TO_SCN"(SYSDATE@!-.004861111111111111111111111111111111111111)))
   6 - filter("STARTSCN"<="TIMESTAMP_TO_SCN"(SYSDATE@!-.004861111111111111111111111111111111111111) OR
              "STARTSCN" IS NULL)
   9 - filter("UNIT_ID"=(-1) AND ("VERSIONS_OPERATION" IS NULL OR "VERSIONS_OPERATION"<>'D') 
             AND ("VERSIONS_STARTSCN" IS NULL OR "VERSIONS_STARTSCN".le."TIMESTAMP_TO_SCN(SYSDATE@!-.004861111111111111111111111111111111111111))
             AND ("VERSIONS_ENDSCN" IS NULL OR "VERSIONS_ENDSCN">"TIMESTAMP_TO_SCN"(SYSDATE@!-.004861111111111111111111111111111111111111)))
  10 - access("RID"(+)=ROWIDTOCHAR("T".ROWID))
       filter("RID"(+)=ROWIDTOCHAR("T".ROWID))
  11 - filter(("ENDSCN"(+) IS NULL OR "ENDSCN"(+)>12670390363943) AND ("STARTSCN"(+) IS NULL OR
              "STARTSCN"(+)<12670390363943))

Note
-----
   - dynamic sampling used for this statement (level=2)

Notice that the predicate “unit_id = -1″ appears on the full table scan of person at operation 9 – that’s Oracle applying the security predicate to the person table. But the flashback code has replaced the person table with a union all of (some partitions of) the SYS_FBA_HIST_353151 table and a join between the person table and the SYS_FBA_TCRV_353151 table. And the code path that attaches the security predicate fails to attach it to the history table.
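
To make the rewrite easier to visualise, here’s a purely illustrative sketch of the shape of query that the plan above implies – it is not the text Oracle generates internally, the column list is cut down to just unit_id, and the two bind variables are stand-ins for the “current SCN” literal and the timestamp_to_scn() call you can see in the predicate section. The point to note is that the unit_id = -1 predicate appears only in the branch that reads the person table:

select  unit_id
from    sys_fba_hist_353151
where   endscn <= :current_scn
and     (operation is null or operation <> 'D')
and     endscn > :as_of_scn
and     (startscn is null or startscn <= :as_of_scn)
        -- note: no unit_id = -1 predicate in this branch
union all
select  t.unit_id
from    person t, sys_fba_tcrv_353151 v
where   v.rid(+) = rowidtochar(t.rowid)
and     (v.endscn(+)   is null or v.endscn(+)   > :current_scn)
and     (v.startscn(+) is null or v.startscn(+) < :current_scn)
and     t.unit_id = -1             -- the security predicate lands here only
;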

tl;dr

VPD (virtual private database) does not seem to be aware of the query rewrite that takes place if a table has an associated FDA (flashback data archive), so a flashback query may report rows from the “history” table that should have been blocked by the VPD security policy.

Lagniappe

There is another little problem with FDA that might affect you if you try to optimize flashback queries by creating SQL Plan Baselines. If you create a baseline on a test system (that isn’t a backup copy of the production system) and use the export/import facility to move the baseline to production, the baseline won’t work because the sys_fba_hist_nnnnn and sys_fba_tcrv_nnnnn table names are constructed from the object_id of the base table – which means the archive table names (and the associated baseline hints) on the test system are probably going to be different from those on the production system.
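
A quick sanity check before you move a baseline is to compare the generated history table names on the two systems. This is only a sketch – I’m assuming the owner and table name used in this test – but the generated name is reported in dba_flashback_archive_tables:

select  owner_name, table_name, flashback_archive_name, archive_table_name
from    dba_flashback_archive_tables
where   owner_name = 'VPD_TEST_DATA_OWNER'
and     table_name = 'PERSON'
;

If the archive_table_name differs between test and production – and it almost certainly will, since it embeds the object_id – then any baseline hint that names the test system’s history table simply cannot reproduce on production.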

Housekeeping

To clean up the database after you’ve done all this testing, run the following script (modified to match any changes you’ve made in the test) after logging on as SYS:


alter table vpd_test_data_owner.person no flashback archive;

drop flashback archive fda_1year;

drop USER VPD_TEST_FUNCTION_OWNER cascade;
drop USER VPD_TEST_DATA_OWNER cascade;

drop tablespace fda_ts including contents and datafiles;

Philosophy – Treating Illness As A Performance Issue

Firstly, I’m on the mend. This is not a post about “oh woe is me, I am so ill”. But I have been rather ill.  I’ll just give you the highlights: The highs of UKOUG Techfest19 at the start of the month were followed very quickly by me developing full-blown influenza (Type A). After about 5 days of being ill in bed I realised I was fighting for breath just lying there. I analysed the problem and came to the conclusion “something else is very wrong and paracetamol is not the answer”. I was taken into hospital and put on increasingly powerful ventilating machines until they could get enough oxygen into me to keep me (sort of) functioning. I’d developed secondary infections & bronchitis, seriously reducing my lungs’ capacity to exchange oxygen & carbon dioxide.

A normal oxygen (O2) blood saturation level is close to 100%. Below 90% is a cause for concern. A constant level below 85% is medically deeply worrying, as 80% and less is harmful to several organs and confusion/unconsciousness are likely. Below 75% and you are almost certainly unconscious. I do know that when I first got to hospital they could not get me up to 85% and they could not understand how I could still talk and be (mostly) rational. I went from nasal O2 to a face mask to a machine that blew damp, warm O2 up my nose and finally a pressurised face mask. Next step was sedation and full mechanical ventilation – but they did not want to do that. My blood O2 became my main metric and I followed it like a hawk.

So I’m in hospital, very unwell (but not dying), under excellent care. And I’m almost, but not quite, totally incapacitated. To me it was a bit like I was a computer system with a serious deficiency of CPU power. Or a toy robot with failing batteries. Energy conservation is paramount. So what do you do in this extreme situation? Turn off everything you don’t need turned on, and save energy for things you have to do.

Turning things off was relatively easy. After all, I was connected to a load of monitoring technology and breathing kit so I was not going wandering about. I didn’t move much. If I needed to sit up I was not going to use those stomach muscles; the bed was powered and would move me about. If I wanted something from the table beside me (like a drink) I’d relax, breathe deep, get my blood O2 up – and then get it. A little rest and then I’d drink, or look at the phone.

Some things I had to do (or insisted on doing) and I realised how much energy they took:

Having a poo

It takes a lot more energy to have a poo than you think. I was just about mobile so I was allowed to look after my toilet needs. They would bring me a commode, position it so I could get there still attached to the ventilator keeping me going, remove some of the monitoring (but not all, heart monitor and blood O2 had to stay on) and get me ready to swap to the commode. And then leave me alone for 10 mins. I think at first they hung about by the door listening for a thud, and I had a call button. Getting onto the commode was OK (breath-breath-breath, move, pant like you ran 100m for a minute… relax), but the actual job itself uses more abdominal muscles than you probably ever realised. Tiring.

All the monitoring kit was on one arm. The one I normally use to “tidy up” with. Using the other hand was very odd. And again, tiring, I had to take it in stages!

Post event I would need to rest and let the blood O2 lift back above 90%, and then I could shuffle back to the bed and press my button. The nurse would come in, congratulate me, and re-attach any removed monitoring. I’d lie there and wait for the O2 to get back above 90% and for the bloody monitoring machine to stop pinging.

For 3 days this was the hardest thing I did; it was my main exercise…

Eating

Actually chewing & swallowing was easy. But to do that you have to get the food into your mouth. Holding your arms up to eat is hard work! I took to cutting up what I had (if it needed cutting up), having a rest, then eating with one arm, mostly balanced on the table with brief bouts of effort to ferry morsels to my mouth. I was incapable of lifting a pint. Even more incredible – I had no desire to lift a pint!

I’d have a little rest after eating and I found I had to listen carefully to my body about how much to eat. Anything more than about what you would feed a cat in one sitting, and I would lie there, 2 or 3 hours later, conscious of the need to move that food out of my stomach and into my gut. Who knew the mechanical side of digestion needed effort.

Washing

You can’t have a shower if you are attached to breathing machines. And I was not able to leave my room anyway as I was an infection risk to other patients (I had ‘flu, remember). Plus, at first, no way could I do all that waving arms, applying shower gel and the rest of the business. So it was washing with a bowl, cloths, and lots of towels. If I took it steady I could do this myself, except my back and, oddly enough, my legs. Lifting them up was exhausting! Nice nurses did those bits.

Weeing

I’m a man, weeing was not an issue. They give you an endless supply of little bottles with a hole towards the top. So long as you tuck the relevant part into the hole (having sat up using the bed) you just “let go” and pressure does the rest. No energy. The first time is worrying – “what if I miss, what if I fill the bottle”. The bottles are designed to be bigger than a human bladder.

However, if you do what I did and then drop the bottle of wee on yourself, do not attempt to sort it out! I did: I started trying to use a towel to mop up the wet spot and I nearly went unconscious as my O2 plummeted from the effort.

What you do do is call the nurse and say “Nurse, I just poured my own wee over myself and I’m wet”. Nurse will remove your wee-covered clothing (a hospital smock), un-plumb you from some machines, move you to a chair, plumb you back in, and then clean up the bed, bedding, floor etc. They neither laugh at you nor grumble. You just sit there feeling like a pillock.

Nurse will then ask how much you had wee’d. Why? I’m on a high dependency ward, they measure everything. A key thing is fluid in (via saline drips and drinks) and fluid out – blood taken for observations and weeing. I knew I was getting better when the weeing increased compared to drinking. This is because my lungs had swollen with fluid and, as they recovered, they released the fluid. A pair of swollen lungs hold a lot of fluid!

It had been a good wee, I guessed 350ml. It certainly was enough to make me and the bed very wet.

Coughing

Having a good cough (which I did a lot of, of course, what with all the lung issues) would send my blood oxygen plummeting. Again, lots of abdominal muscles and the diaphragm (the sheet of muscle between your lungs and stomach) are used in coughing. If I could, I would build up to a good cough, conserving my strength and getting my O2 up in preparation for a real good go. But if it caught me by surprise, it could drop the blood O2 dramatically. But the good thing was, coughing helped expand the lungs and I recovered quickly and was “better” for half an hour.

Thinking

At rest, your brain uses about 20% of your total energy. This is true even for stupid people like Donald Trump :-). Biologically it’s very interesting that humans have such large brains and put so much energy into them – far more than any other animal (in relation to body size). Our brains make us different to all other animals, i.e. “intelligent”, but at a significant cost in energy and nutrients.

If my blood O2 dropped too low I would start shutting down. This is why people with breathing difficulties pass out: once blood O2 goes below a certain level, your higher brain functions shut down to reduce demand and protect the rest, and unconsciousness comes quickly.

Thinking was hard. I’ve never been one for just sitting there “thinking of nothing” but I did an awful lot of that in hospital. It was my brain saving energy.

Socialising was a real drain. I could do the 2 or 3 mins with the nurses or docs when they came to do observations (oh, so many observations in a high dependency ward!) or put drugs in me, take out my blood (Oi! I wanted that blood! I was using it to ship the small amount of O2 I could absorb!). I had to be really on my game when the docs popped in once or twice a day as this was my opportunity to try and ask smart questions like “so we have a diagnosis, what is the prognosis?”. Don’t worry what it means, medics live by it so asking them makes them think you know some of the secrets… Docs don’t tend to tell you much in my experience, unless you can ask pertinent questions and show you understand the answers. They seem to think ignorance (on the part of the patient) is bliss. If I knew when the docs were going to be in, I’d try and have a pre-visit snooze so I was at the top of my game.

Where it was hard was dealing with Mrs Widlake. Mrs Widlake was wonderful, she would ask me what I wanted and the next day she would bring it in, let me know what was going on, if people had been in touch, who was annoying her. And kept me company. It was very important to me.  But after about 1/2 an hour I would start shutting down; the thinking (and talking) drained my already-low energy reserves. We worked out a solution. She would come in, give me my new book and the bizarre, random items I asked for and chat to me. After 1/2 an hour she would go have a cup of tea in the visitor’s room whilst I zoned out, then came back for more chat. Resource management and time sharing! Sue did not want to leave me alone but after a few days we both accepted that a daily visit in sections, kept to maybe 3 hours, made the best use of the resources available. It was a bit like my batch processing window!

Monitoring

So I was managing my resources and finding out which ones took effort. But like any good system, you need monitoring, real-time feedback. And boy was I monitored! I was on a high dependency/close observation ward. Every hour, every single damned hour, all night too, they would come and take my blood pressure, record my heart rate and O2, measure my wee, note what I had eaten, take my temperature etc., and steal blood.

But the main thing was the machines I was attached to. They constantly monitored. And pinged. Oh god, did they all bloody Ping. If a chest sensor fell off or I sat on the connector, that machine would ping. If a drip (drug or fluids) ran out, it would Ping. Breathing machine ran out of water? Ping Ping Ping! The ventilator had to up its pressure, or I moved too much (I dunno why) – Ping Ping Ping.

But the worst, my nemesis (and also my KPI) was the blood O2 monitor. If I dropped below X, usually 86 or 88, it pinged & pinged & pinged. If I dropped below 85 it would up the volume and multi-ping: “ping-Ping-PING! ping-Ping-PING! ping-Ping-PING!”. You could not even cheat it by taking the monitor off, as then it went ping-bloody-crazy. All those tasks I mentioned above that took effort? They all made this blasted machine ping or ping-Ping-PING!

The single biggest challenge for me during my stay was not boredom, not pain (I was lucky, no pain other than what they inflicted on me putting in drains etc, or headaches due to low O2), not fear – it was trying not to go crazy due to the pinging. I did everything I could to stop the pinging. The only time I really lost it with the nurses was one night as I improved and they changed the warning levels up to 90 and every time, every time I started to fall asleep it bloody ping-Ping-PING’d. I told the nurse to turn the levels back down or else I would rip the damn thing off the wall. She said she could not, as she was not qualified to make the decision. “Well find someone who can as, if it does not let me sleep, I will lose my shit”.  It’s the only time I swore at any of the people looking after me. It got turned down.

This is exactly like having OEM monitor a database and alert on a KPI such as CPU usage every time it spikes, even when the spike is actually fine. Just endless, endless false alarms. What the damn things should have done (in my opinion, for me) was only ping if I was below a limit for over a minute, or dropped to a critical level. Then it should go absolutely crazy.

And it was not just my machines. Other people in the ward had their own pinging machines. They. All. Pinged.

I’m back home now and recovering. I can breathe unaided and slowly, slowly I am able to do more without running out of breath. Like have a shower or make my own cup of tea. Give me a month (I’ve been told it’ll take a month) and I should be back to sort-of normal. I won’t be running marathons or using the axe in the garden for a while. I’m still treating myself like an under-resourced computer and doling out effort where best used. But each day another core comes on line and I can expend the extra effort. I think it’s called getting better.

The Unicorn Project : My Review


The Unicorn Project is a follow-up to The Phoenix Project. Actually, it’s more like the same book again, but written from a different person’s perspective.

I loved The Phoenix Project, but absolutely hated The DevOps Handbook, so I was a little reluctant to start reading The Unicorn Project, as I was really worried I would hate it, and it would tarnish the memory of The Phoenix Project.

Overall it was fine, but IMHO it was nowhere near as good as The Phoenix Project.

I’m not going to talk details here, but instead talk about my feelings about the book. You don’t have to agree.