Oakies Blog Aggregator

Configuration Management Searches in EM13c

With the addition of Configuration Management from OpsCenter to Enterprise Manager 13c, there are some new features to ease the management of changes and drift in Enterprise Manager. I’m going to take these posts in baby steps, as the feature can be a little daunting, and we want to make sure that you understand it well, so we’ll start with configuration searches and search history first.


To access the Configuration Management feature, click on Enterprise and Configuration.

Click on Search to begin your journey into Configuration Management.


From the Search Dashboard, click on Actions, Create and History.  You’ll be taken to the History wizard and you’ll need to fill in the following information:


And then click on Schedule and Notify to build out a schedule to check the database for configuration changes.


For our example, we’ve chosen to run our job once every 10 minutes and have set up a grace period; once satisfied, click on Schedule and Notify. When you’ve returned to the main screen, click on Save.

Now when we click on Enterprise, Configuration, Search, we see the search we created in the list of searches. The one we’ve created is both runnable AND modifiable. The ones that ship with EM13c are locked down and should be treated as templates to be used with the Create Like option.

The job runs every 10 minutes, so if we wait long enough after a change, we can then click on the search from the list and click on Run from the menu above the list:


As I’ve made a change to the database, it shows up immediately in the job, and if I had set this up to notify, it would email me via the notification settings for the user who owns the configuration:


If you highlight a row and click on See Real-Time Observations, you’ll be taken to the reports showing that each of the pluggable databases wasn’t brought back up to an open mode post maintenance, and that they need to be returned to an open status before they will match the original historical configuration.

We can quickly verify that the databases aren’t open.  In fact, one is read only and the other is only mounted:

SQL> select name, open_mode from v$database;

NAME OPEN_MODE
--------- --------------------
CDBKELLY READ WRITE

SQL> select name, open_mode from v$pdbs;

NAME OPEN_MODE
------------------------------ ----------
PDB$SEED READ ONLY
PDBKELLYN MOUNTED
PDBK_CL1 READ ONLY

So let’s open our PDBs (PDBK_CL1 is currently open read only, so it has to be closed before it can be reopened read write) and then we’ll be ready to go :)

ALTER PLUGGABLE DATABASE PDBKELLYN OPEN;
ALTER PLUGGABLE DATABASE PDBK_CL1 CLOSE;
ALTER PLUGGABLE DATABASE PDBK_CL1 OPEN;
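If there are more than a couple of PDBs to deal with, it may be easier to open them all in a single statement. This is a sketch (not tested in this environment) using the ALL clause, where FORCE bounces any PDB that is already open in the wrong mode:

```sql
-- Open every user PDB read write; FORCE closes and reopens any PDB
-- that is already open in another mode (e.g. read only).
-- PDB$SEED is skipped and stays read only by design.
ALTER PLUGGABLE DATABASE ALL OPEN FORCE;

-- Confirm the result
SELECT name, open_mode FROM v$pdbs;
```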

Ahhhh, much better.





Copyright © DBA Kevlar [Configuration Management Searches in EM13c], All Right Reserved. 2016.

Gluent New World #03: Real Time Stream Processing in Modern Enterprises with Gwen Shapira

Update: Added links to video recording and slides below.

It’s time to announce the 3rd episode of Gluent New World webinar series! This time Gwen Shapira will talk about Kafka as a key data infrastructure component of a modern enterprise. And I will ask questions from an old database guy’s viewpoint :)

Apache Kafka and Real Time Stream Processing

Video recording & slides:

Speaker:

  • Gwen Shapira (Confluent)
  • Gwen is a system architect at Confluent helping customers achieve
    success with their Apache Kafka implementation. She has 15 years of
    experience working with code and customers to build scalable data
    architectures, integrating relational and big data technologies. She
    currently specializes in building real-time reliable data processing
    pipelines using Apache Kafka. Gwen is an Oracle Ace Director, an
    author of “Hadoop Application Architectures”, and a frequent presenter
    at industry conferences. Gwen is also a committer on the Apache Kafka
    and Apache Sqoop projects. When Gwen isn’t coding or building data
    pipelines, you can find her pedaling on her bike exploring the roads
    and trails of California, and beyond.

Time:

Abstract:

  • Modern businesses have data at their core, and this data is
    changing continuously. How can we harness this torrent of continuously
    changing data in real time? The answer is stream processing, and one
    system that has become a core hub for streaming data is Apache Kafka.
    This presentation will give a brief introduction to Apache Kafka and
    describe its usage as a platform for streaming data. It will explain
    how Kafka serves as a foundation for both streaming data pipelines and
    applications that consume and process real-time data streams. It will
    introduce some of the newer components of Kafka that help make this
    possible, including Kafka Connect, a framework for capturing continuous
    data streams, and Kafka Streams, a lightweight stream processing
    library. Finally, it will describe some of our favorite use cases of
    stream processing and how they solved interesting data scalability
    challenges.

Register:

See you soon!

NB! If you want to move to the "New World" - offload your data and workloads to Hadoop, without having to re-write your existing applications - check out Gluent. We are making history! ;-)

Hotsos 2016 Recap

I had wanted to go to the Hotsos Symposium for quite some time, having heard so many great topics from there over the years. And every time I was a bit lazy about thinking up what I could talk about. Apparently I thought that everything I know is well covered elsewhere, so why would I be accepted? Plus all the complexities of budget, getting a visa, travel arrangements, jet lag, and personal matters in between. Last year, when the call for papers was still open, I realized that there was a good chance I could make it to Hotsos in 2016: I had a budget, a visa was a doable thing, and most importantly I knew I had a topic to talk about which most likely would not be in competition with other speakers.

I’ve been working on a large Middleware project which has lasted for more than 8 months. And with each and every week of it I was getting confirmation of my theory that many clients, independent of their size, are making silly mistakes in simple things, and as a result they have huge issues just because of these small errors in basic stuff. Fixing simple things would give them a huge benefit in terms of application performance and stability. So I thought this was it: I should talk about simple things done right, and what to expect when you mess them up. I was pretty confident I’d have no issues explaining what I know in whatever time was available. I quickly submitted a very short abstract and waited till December 1st, when I found out that I was accepted. I was happy and worried at the same time: there was no way to go back, and now I had to deliver what I promised, which meant the work had to start immediately. I decided that in December I’d deal with the budget & travel and start the US visa application, not to mention start adding topics for the presentation. From January I was planning to spend 1h every day dedicated to the presentation, plus whatever time was available over the weekend as well, especially when I was on call and had time to think.

After some flight searching I realized the most convenient way for me to get to Dallas would be via Frankfurt, early on Saturday, which would give me some time to fight the jet lag. I also hoped to attend the Training Day with Richard Foote, but unfortunately I had to be back in Moscow by Friday evening, which forced me to get tickets back home for Thursday midday and skip more than 50% of the training day. The US visa application process seemed a bit tricky in the beginning, but in reality it was quite simple: just fill out an online form, pay the application fee and get to the consulate for a so-called interview. The form took me about 3 hours, and it required lots of information, such as all the countries you’ve visited in the last 5 years. Combined with a terrible UI, it was tricky. The most complicated part was getting my photo scanned & uploaded with good quality and small size at the same time. I managed to succeed after something like a dozen attempts. The interview itself is more of a “bring me your passport” and explain what you need to do in the US. It was easy, although it took about 2h of waiting in queues. I just showed printed emails with the invitation to Hotsos and the conference schedule which had my name on it – and that was it. Thinking about the process, it is streamlined and easier than getting a visa to EU countries (those usually require tons of real papers and check them thoroughly), and an order of magnitude easier than getting a Canadian visa (which is also online for the most part but requires you to upload an enormous amount of scanned documents and wait for visa processing for something like 2 months or even more). I got my passport back, ready and delivered in 2 days, giving me a visa for 3 years – awesome speed! And one more step closer to Hotsos as well.

Presentation preparation went well and as planned. I realized that the topic I’d chosen was too complex for a single presentation, and tried to get rid of some things. I hate “bullet point hell” slides, so even though I started with bullet points, I was slowly making changes to remove them as much as possible. Sometimes I added very simple visuals just to make a point easier to understand, as I know how hard it is to digest text information without a picture. Sometimes I had to do some diagrams, and animations were required as well. Those took a lot of time initially, but later it went better and better, so in the end I was OK with animations. Initially I didn’t plan for a demo in the talk, since I thought it would take too much time both in preparation and presentation. In the end I decided that I needed to show at least something, so that the audience would remember it. One of the things I picked up from a quick review of this book was: a good presentation should be about one thing. Unfortunately it was too late to change the whole slide deck. But in the end it went an unexpected way…

My journey to Dallas started at 4AM on Saturday with a quick drive to the airport to take a relatively short 3h flight to Frankfurt. I remember the last time I was there it was very chaotic, with huge lines for the security check, so I was worried about the 1h connection and the really long way between the gates. It appears that early in the morning the airport is empty, and the navigation inside has definitely been improved. My flight, even though delayed by 40 minutes, arrived almost on time. The connection was smooth. For the next leg I tried Lufthansa’s Premium Economy for the first time and was very pleased with how good it is: lots of space everywhere, and a bit more service as well. I spent a few hours watching the end of the 2nd season of Fargo, worked on the presentation for a few hours, and shortly after that my flight was over. It was a perfect 10h on the plane, couldn’t be better! Great start.

Dallas welcomed me with warm weather and no queues at immigration: I spent less than 5 minutes there. I had heard a lot of stories of other people spending hours in line, so I was lucky again. After dark and cold Moscow it was unbelievable to see +22°C outside, so I decided to take the longer route to the hotel by train instead of a taxi. It is a good deal for its price ($5 for a daily ticket), and has lots of space above your head but for some reason very little space between the rows. Apart from that, it is clear that public transport is used by “poor” people with no car:) and in general is underused. On a weekday it may be different though, I don’t know. It was a good experience.

The Omni hotel, like everything in Texas, is big: lobby, rooms, everything. Check-in was a bit odd: apparently they hold an additional $50 per night as a security deposit, which was unexpected. Anyway, when I got up to the room I was tired; it felt like being drunk. I knew I needed to stay up as long as I could, otherwise I wouldn’t be in good shape for the conference. Somehow I managed to stay up till 11PM, which is ~27 hours of uptime. It didn’t help much and I was up in a few hours, at 4AM. I spent some time working on the slides, had breakfast and then went straight to the Sixth Floor Museum. I wish it had had fewer people at that time. Since it is one of the must-visit places in Dallas, I guess that makes it so crowded, which is really unexpected as the city streets are empty. Anyway, the exhibition was quite good, if you can say that about a crime scene. Lots of US history is carefully gathered and presented there, and it takes a few hours to get through – even more with the people around. I thought about going to the Dallas Museum of Art as well, but I was so tired that I decided to go back to the hotel. Sunday finished quickly.

Monday was the real start of Hotsos. As usual I was up early & spent time working on the slides. After a good breakfast and a quick chat with people, the sessions started, and the first was a keynote from Kellyn Pot’vin Gorman and Jeff Smith. In short, they encourage people to use Twitter more, blog regularly, keep your LinkedIn profile up-to-date and use the same style (picture, tag line, etc.) in all your profiles to make your online presence consistent – all these things will give you more options next time you want to change your job.
Then I went to Carlos Sierra‘s session. I’ve used his edb360 a few times and just wanted to see if there was anything I didn’t know about it. Apparently nothing much. This is a really nice tool and it does what is advertised: extracts tons of relevant performance information from a database & visualises it in 1 click – for free.
After a short break I decided to go back to Hall A to hear the news about EM 13c. I don’t know EM well enough to say anything about the content, except that EM has completely changed (again), and there were some technical issues with the projector. Also my impression was that EM development is quite hectic, disorganized and unreliable. The product is free, which is great, and it’s very unstable, which isn’t so good.

After lunch I didn’t know which session to choose, as both were interesting enough to attend. I decided to see Jim Czuprynski. It was the first time I had heard him speak. His speed combined with clear diction is unbelievable; I’ve never heard anything like it. I don’t remember the specifics though, and would need to see the slides again – if I ever happen to use RAT, which is quite unlikely. One thing that crossed my mind during the session was this tweet – not sure if this is still the case for RAT or not.
After Jim everything appears slow, and Vlado Barun was not an exception. He showed different approaches to SQL optimization. I didn’t get many things during the session due to lots of small-font code. The room was apparently quite long and my eyes definitely require glasses :-\ Checking the slides now, I understand a bit more of what was covered in the room. The audience liked this talk as it had lots of interaction with attendees. For some reason the obvious route of rewriting the SQL to get the required data in a single pass was not considered (“can’t change the application” I guess was one of the restrictions, which contradicts the slide recommending to fix the application) – and I would try it first for sure, as properly written SQL is much easier to optimize, or optimization is not required at all.
Later on I went to Friedrich Pfeiffer speaking about SQL Patches. It was a nice detailed review of the feature, with examples and a code wrapper around DBMS_SQLDIAG. One thing which I forgot to ask Friedrich was why he uses OPEN/FETCH/CLOSE cursor calls in PL/SQL – the code looks ancient this way, and requires more typing.
Kerry Osborne closed the 1st day with his Controlling Execution Plans Workshop which was a short live version of the many things he wrote in his blog. If you haven’t read it before, you should definitely spend time reading his posts and trying out his scripts.

That was it for the sessions of Day 1. It finished quickly, I must say, since there were no boring sessions and I had a few interesting conversations during the day as well. Right after Day 1 the organizers invited the Symposium speakers to an appreciation event in the President Suite, to have a drink, relax and talk. It was really nice.

The day finished with a Pythian gang dinner (me, Gleb Otochkin, Simon Pane, Jared Still) at The Keg. It was a pleasure to meet and talk face to face with the Pythian folks; and the steak was sooo good (and big, of course).

The next day started with Tanel Poder showing new 12c features of SQL Monitor, including work still in progress that he’s been doing at Gluent. Good stuff. Unfortunately I’m not doing much with SQL Monitor lately, so I have no idea when/if I’m going to see the improvements in real-life cases.
After that, Finn Jorgensen showed an approach to fixing execution plans with SQL Plan Baselines based on SQL execution statistics. Instead of worrying about why execution plans are changing, they identify the SQL and “fix” its execution plan. It works in their case & application (11.2.0.4), which is good. Obviously there are cases where this approach has limited value: generated SQL, statements that do not use many resources but change their plans in a bad way, etc.
In the next session Andy Weiss and Kirby Sand presented a specific case of SQL for which Oracle builds an inefficient execution plan by default (link to the online test case). Again I couldn’t see much on the screen as the font was too small. Apparently they have a join which uses a BETWEEN condition on data in two table columns, which makes Oracle guess the selectivity estimates & make a mistake of a few orders of magnitude, causing Oracle to choose an inefficient execution plan by default. Looking at the slides and code, I see they make a common mistake with hinting: USE_NL(A B) means USE_NL(A) USE_NL(B), which technically is impossible for Oracle to obey in full (actually the documentation has contained wrong examples for hints for so many years, spreading this nonsense). It was interesting to listen to them. I noticed that the presentations where people discuss their own in-house issues were usually more interesting.
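To make the hint pitfall concrete: USE_NL names the table that should be the inner (driven) table of a nested loop join, so hinting both tables of a two-table join asks for something contradictory. A hypothetical sketch (table and alias names invented for illustration):

```sql
-- USE_NL(a b) is just shorthand for USE_NL(a) USE_NL(b): it asks for
-- BOTH tables to be the inner table of the nested loop, which a single
-- two-table join cannot satisfy in full.
select /*+ use_nl(a b) */ a.id, b.val
from   t1 a, t2 b
where  a.id = b.id;

-- What is usually intended: drive from A, nested loop into B.
select /*+ leading(a) use_nl(b) */ a.id, b.val
from   t1 a, t2 b
where  a.id = b.id;
```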
Next on the agenda was Mauro Pagano showing how to understand Cost Based Query Transformations based on the 10053 trace. Although I don’t spend a lot of time lately with CBO traces, I enjoyed this session a lot, and recommend it for people who want to understand the structure of 10053 trace.

It was time for me to speak. Surprisingly I wasn’t nervous. As I started to prepare my laptop I heard the music playing in the background – and it was the main theme from Requiem for a Dream! That is when I started to worry:) I had only heard good classic rock or current pop music playing between the sessions, and it was great. How such a dramatic tune got into the playlist exactly when I started, I don’t know. That was probably also the moment when my luck ended. I tried to stay positive and started to speak. As I introduced myself I realized that my body didn’t want me standing up & suggested a nap. I felt that with every word my heart rate was going down, and at some point my brain said “good night”. I tried to continue and move around a little to stay awake. It helped only a little, and I didn’t have a microphone with me. I also started getting many questions right from the start – I had suggested that questions could be asked at any time. I repeated every question before answering it (funnily enough, most speakers didn’t do that), and that was probably the best thing I did. It slowed things down considerably and I was thinking about what I could do to speed up. I couldn’t come up with a good plan and decided that I would need to stop early but make it look complete. So in the end I was only able to cover memory management, which was ~60% of the things I wanted to talk about. After it was over I felt horrible. I thought it was a big disaster, and the only positive thing was that the number of people in the room was small. It took me about 10 minutes to write down all the things that I did wrong, and after that I realized it was over; I needed to learn from it and move on. I decided I needed to work more on this presentation and try to repeat it at some European conference, maybe BGOUG or some other venue (by the time I finished writing this post, I had been accepted to the BGOUG Spring conference). We’ll see.
Surprisingly, that day and the following days I didn’t hear anything really bad about my presentation apart from my personal opinion, and I had 3-4 people say it was good, that they liked it and learned something from me – which was a huge relief. I understand that people usually don’t say bad things to your face and keep it private (judging by myself), so I don’t have the full picture of how good or bad it was.

At the end of the day Richard Foote took the stage for a Q&A on indexing. That was the best session of the conference, lots of good stuff delivered in a fun Australian way:)
After a break there was the dinner and celebration – just pure fun. I had lots of interesting discussions over a few glasses of wine. The fact that I’m from Russia led to questions about the country, politics, life and everything. It was good:) I even met a DBA who has a NetCracker DB in their shop (I worked at NetCracker for ~8 years) – a funny coincidence. The 3-4 hours finished really fast and I went to sleep close to midnight.

The next day was the last day of the conference. I didn’t have anything to deliver, so I was very relaxed and enjoyed it as much as I could. It was shorter than the two days before and finished around 3PM. Tanel’s session on Linux tools was the best of the day, and the other sessions were interesting to attend as well.

On Thursday Richard Foote delivered his one-day training on indexes. All attendees got a big printed book with all the slides. I could only attend the first two sessions of the day. Everything he explains follows the same style he uses in his blog, and for the most part it is a compiled version of his blog posts, which is awesome – anyone can get the same ideas online for free. A few hours with Richard on stage went by so fast, and it was so unfortunate that I had to miss the rest of the day to catch my plane.

The airport was again pretty empty and I had no issues passing the security queue. On the way back I flew the usual Economy, which was not as comfortable but still manageable. I also had an interesting man sitting next to me. Usually I don’t talk much with people on a plane, but this time somehow we started to talk and had a long chat for a few hours. This man was around 75 and lives close to Dallas. He was a pilot of a C-130 a long time ago in Europe and Vietnam, and then worked in the Pentagon. He retired around 52, and now he travels about twice a year to get the number of countries he’s visited up to 100 (~80 right now). We talked about travel, some countries, life and everything. I wish I could travel transatlantic at 75 as well:)
The connection back to Moscow was simple, and the flight was on time as well. Hotsos 2016 was finished for me. Thanks a lot to the conference organizers who made it possible & invited me. I had so much fun there!

A few random thoughts based on my experience:
* don’t be shy – submit an abstract if you have anything to share and can attend the conference
* the more specific your experience is, the better
* don’t think that you won’t be accepted – most likely you will be
* allocate twice as much time as you think is enough for preparation
* do not make too many slides; 40-50 tops
* make more visual slides
* try connecting your laptop to the projector & using the microphone in advance
* have a short run before delivering the presentation – 2-3km will wake you up, make your brain work & help you speak fast
* organizers are always looking for new names, new people, new things
* DB 12c is going to be the theme of next year, but it’s not necessary to push it as the main topic of a presentation
* performance stability issues are still a hot topic for many people; 12c will definitely add more headaches & more opportunities to talk about that
* do not make your presentation too hardcore; most attendees are normal people doing pretty normal things at their job; they may rebuild indexes regularly
* do not stay in the Omni hotel as it’s overpriced; I’m used to AirBnb and would choose something close by next time
* arriving in Houston and then going to Dallas in a rental car with a few other people could be easier, cheaper and more convenient
* don’t waste time before the conference & try to talk to as many people as possible
* don’t think that you’ll learn a lot of technical stuff if you are used to reading the famous Oracle blogs and Twitter: most likely you’ve heard everything many times

What I can recommend to the organizers:
* do not share presentations with attendees before the end of the symposium; it really makes no sense to me to show the slides before the session
* try making the conference 2+1 days, as 3+1 days seems like a bit too much
* the conference hall should probably be smaller so that more people can see what is on the screen
* maybe try a different, more casual venue
* collect feedback for all sessions, and make it available at least to the speakers

Would I go to Hotsos 2017? Most likely not: it costs too much to get there – ~$2200 as a speaker – and I can visit 2-3 conferences in Europe for the same budget. Would I go there in 2-3 years? Possibly, and I’d try to combine it with a tourist trip across the US. Before returning to Dallas again I’m going to attend some European conferences: in Croatia, Slovenia, Norway, Finland, Germany and of course the UK, to name a few that are on my wish list for the next few years.

Filed under: Oracle, Performance Tagged: conferences, Hotsos, presentations, travel

Cursor_Sharing problem

Here’s a possible bug (though maybe “not a bug”) that came up over the weekend on the OTN database forum. An application generating lots of “literal string” SQL was tested with cursor_sharing set to force. This successfully forced the use of bind variable substitution, but a particular type of simple insert statement started generating very large numbers of child cursors – introducing a lot of mutex waits and library cache contention. Here’s a (substituted) statement that was offered as an example of the problem:


INSERT INTO schema.tableName (column1,columns2,..)
VALUES (:"SYS_B_0",:"SYS_B_1",:"SYS_B_2",:"SYS_B_3",:"SYS_B_4",:"SYS_B_5",:"SYS_B_6",timestamp:"SYS_B_7",SYSTIMESTAMP,:"SYS_B_8")

Note particularly the slightly odd looking detail: timestamp:”SYS_B_7″; this is how bind variable substitution looks if you’ve used the “compact” ANSI mechanism for handling datetime literals. We were told, in fact, that the tables had only number, varchar2, and date columns – so it looks a little suspicious when we see timestamp values being inserted – but the implied coercion wasn’t the source of the problem. Here’s all it takes to see the problem (tested only on 12.1.0.2):


rem
rem     Script:         ansi_datetime_bug.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2016
rem

create table t1 (
        n1      number(8,0),
        v1      varchar2(10),
        d1      date,
        t1      timestamp
);

insert into t1 values(-1,'x',sysdate, systimestamp);

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                method_opt       => 'for all columns size 1'
        );
end;
/

set serveroutput off
alter session set cursor_sharing = force;

prompt  ============================
prompt  Testing DATE literals
prompt  Expect to see child number 2
prompt  ============================

insert into t1 values(1, 'A', date'2016-01-01', null);
insert into t1 values(1, 'A', date'2016-01-02', null);
insert into t1 values(1, 'A', date'2016-01-03', null);

select * from table(dbms_xplan.display_cursor(null,null,'-note -plan_hash'));

prompt  ============================
prompt  Testing TIMESTAMP literals
prompt  Expect to see child number 2
prompt  ============================

insert into t1 values(1, 'A', null, timestamp'2016-01-01 00:00:00');
insert into t1 values(1, 'A', null, timestamp'2016-01-02 00:00:00');
insert into t1 values(1, 'A', null, timestamp'2016-01-03 00:00:00');

select * from table(dbms_xplan.display_cursor(null,null,'-note -plan_hash'));

prompt  ===============================
prompt  Need privilege to see this view
prompt  Pre-coded for the sql_ids above
prompt  ===============================

break on sql_id skip 1

select
        sql_id, child_number, hash_match_failed
from
        v$sql_shared_cursor
where
        sql_id in ('58udhcm270bhn', 'gssz5cbnt7mgn')
order by
        sql_id, child_number
;

A couple of points – there are several more tests in the script demonstrating things that do NOT cause multiple child cursors to appear. I probably didn’t cover all the options that I could have covered, but I hit a number of common cases to check that it wasn’t simply a case of cursor_sharing being severely broken in 12c. I’ve also allowed a side effect to demonstrate the presence of multiple child cursors rather than explicitly listing the child cursors: if the three statements (of each type) had produced shareable cursors then the child number reported by dbms_xplan.display_cursor() would have been zero in both cases. Here are the actual outputs:


SQL_ID  58udhcm270bhn, child number 2
-------------------------------------
insert into t1 values(:"SYS_B_0", :"SYS_B_1", date:"SYS_B_2", null)

-------------------------------------------------
| Id  | Operation                | Name | Cost  |
-------------------------------------------------
|   0 | INSERT STATEMENT         |      |     1 |
|   1 |  LOAD TABLE CONVENTIONAL | T1   |       |
-------------------------------------------------


SQL_ID  gssz5cbnt7mgn, child number 2
-------------------------------------
insert into t1 values(:"SYS_B_0", :"SYS_B_1", null, timestamp:"SYS_B_2")


-------------------------------------------------
| Id  | Operation                | Name | Cost  |
-------------------------------------------------
|   0 | INSERT STATEMENT         |      |     1 |
|   1 |  LOAD TABLE CONVENTIONAL | T1   |       |
-------------------------------------------------


And, having pre-coded the script with the SQL_IDs of the two guilty statements, here’s the output identifying the cause of the failure to share from v$sql_shared_cursor:


SQL_ID        CHILD_NUMBER H
------------- ------------ -
58udhcm270bhn            0 N
                         1 Y
                         2 Y

gssz5cbnt7mgn            0 N
                         1 Y
                         2 Y

There is a bug on MoS relating to timestamp columns and failure to share cursors – it doesn’t really look like the same problem but it could be related in some way: Bug 13407937 : HIGH VERSION COUNT FOR INSERT WITH CURSOR_SHARING=FORCE AND TIMESTAMP COLUMN. It’s described as “not a bug” :(

Trouble with multiple SCAN listeners in 12c Release 1

Prompted by comments made by readers about my posts describing how to add a second SCAN in 12c Release 1 and problems regarding the listener_networks parameter, I thought it was time to revisit the situation. I’m still running Oracle Linux 7.1/UEK3 (although that should not matter) but upgraded to 12.1.0.2.5. This is the release that is reportedly showing the odd behaviour. I don’t recall my exact version when I wrote the original posts back in April 2014, but looking at them I think it all worked ok at the time. Here is my current patch level after the upgrade to the troublesome PSU.

 SQL> select patch_id,description,status from DBA_REGISTRY_SQLPATCH;

  PATCH_ID DESCRIPTION                                                                      STATUS
---------- -------------------------------------------------------------------------------- ---------------
  21555660 Database PSU 12.1.0.2.5, Oracle JavaVM Component (Oct2015)                       SUCCESS
  21359755 Database Patch Set Update : 12.1.0.2.5 (21359755)                                SUCCESS

This is not the latest patch level! After having seen a presentation by Ludovico Caldara (@ludodba) just this week, I would think that the April 2016 patch, which was current at the time of writing, is the one you want to be on :) I have an upgrade to the April 2016 PSU planned, but first wanted to test against 12.1.0.2.5 to see if I could reproduce the issue.

Problem Statement

The parameter listener_networks should be populated automatically by CRS when the database comes up, and indeed you can see evidence of that in the respective node’s crsd_oraagent_oracle.trc file. However, with two SCANs present, something is not quite right. You can see this in the usual locations:

– SCAN listeners
– listener_networks parameter
– CRSD trace

The most obvious clue is that you cannot connect to the database any more using one of the SCANs. To avoid you having to flick back and forth between my posts, here is the network configuration again:

[oracle@ron12cprinode2 ~]$ srvctl config scan -netnum 1
SCAN name: ron12cpri-scan, Network: 1
Subnet IPv4: 192.168.100.0/255.255.255.0/eth0, static
Subnet IPv6: 
SCAN 0 IPv4 VIP: 192.168.100.65
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes: 
SCAN VIP is individually disabled on nodes: 
SCAN 1 IPv4 VIP: 192.168.100.66
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes: 
SCAN VIP is individually disabled on nodes: 
SCAN 2 IPv4 VIP: 192.168.100.64
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes: 
SCAN VIP is individually disabled on nodes: 

[oracle@ron12cprinode2 ~]$ srvctl config scan -netnum 2
SCAN name: ron12cpri-dg-scan, Network: 2
Subnet IPv4: 192.168.102.0/255.255.255.0/eth2, static
Subnet IPv6: 
SCAN 0 IPv4 VIP: 192.168.102.66
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes: 
SCAN VIP is individually disabled on nodes: 
SCAN 1 IPv4 VIP: 192.168.102.64
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes: 
SCAN VIP is individually disabled on nodes: 
SCAN 2 IPv4 VIP: 192.168.102.65
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes: 
SCAN VIP is individually disabled on nodes: 

It essentially uses the 192.168.100.0/24 network for the “public” traffic and 192.168.102.0/24 for Data Guard. I still use my RAC One Node database RON, which is currently active on node 2. All of my SCAN listeners should know about its services, RON_SRV and RON_DG_SRV. However, that’s not the case, as a quick check reveals:

[oracle@ron12cprinode1 ~]$ lsnrctl status LISTENER_SCAN1

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 14-MAY-2016 19:56:14

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_SCAN1
Version                   TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date                14-MAY-2016 19:21:39
Uptime                    0 days 0 hr. 34 min. 35 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/12.1.0.2/grid/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/ron12cprinode1/listener_scan1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.100.65)(PORT=1521)))
Services Summary...
Service "-MGMTDBXDB" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "RON" has 1 instance(s).
  Instance "RON_1", status READY, has 1 handler(s) for this service...
Service "RONXDB" has 1 instance(s).
  Instance "RON_1", status READY, has 1 handler(s) for this service...
Service "RON_DG_SRV" has 1 instance(s).
  Instance "RON_1", status READY, has 1 handler(s) for this service...
Service "RON_SRV" has 1 instance(s).
  Instance "RON_1", status READY, has 1 handler(s) for this service...
Service "_mgmtdb" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "ron12cpri" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
The command completed successfully
[oracle@ron12cprinode1 ~]$ lsnrctl status DGSCANLSNR_SCAN2_NET2

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 14-MAY-2016 19:56:23

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=DGSCANLSNR_SCAN2_NET2)))
STATUS of the LISTENER
------------------------
Alias                     DGSCANLSNR_SCAN2_NET2
Version                   TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date                14-MAY-2016 19:21:42
Uptime                    0 days 0 hr. 34 min. 41 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/12.1.0.2/grid/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/ron12cprinode1/dgscanlsnr_scan2_net2/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=DGSCANLSNR_SCAN2_NET2)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.102.64)(PORT=1521)))
The listener supports no services
The command completed successfully
[oracle@ron12cprinode1 ~]$ 

Before generating the above output I specifically “reset” the listener_networks settings on both instances, and ensured that they were created dynamically. After the database restart I couldn’t make out ANY entry for listener_networks:

SQL> select inst_id,name,value from gv$parameter where name = 'listener_networks';

   INST_ID NAME                           VALUE
---------- ------------------------------ ------------------------------
         1 listener_networks

SQL> select inst_id,name,value from gv$spparameter where name = 'listener_networks';

   INST_ID NAME                           VALUE
---------- ------------------------------ ------------------------------
         1 listener_networks

Nothing at all. I managed to reach the system using the RON_SRV service that’s known to the first (public) network’s SCAN – the deliberately invalid credentials don’t matter here, since the ORA-01017 proves the listener resolved the service:

[oracle@ron12cprinode1 ~]$ sqlplus a/b@ron12cpri-scan/RON_SRV

SQL*Plus: Release 12.1.0.2.0 Production on Sat May 14 20:18:02 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

ERROR:
ORA-01017: invalid username/password; logon denied


Enter user-name: 
[oracle@ron12cprinode1 ~]$ 

The same test failed for connections against ron12cpri-dg-scan, stating that the listener didn’t know about the service. Checking the CRSD trace (on the node the instance runs on!) I could see the reason:

2016-05-14 19:47:49.637611 : USRTHRD:2023044864: {1:58687:2893} Endp=ron12cpri-dg-scan:1521
2016-05-14 19:47:49.637638 : USRTHRD:2023044864: {1:58687:2893} Final Endp=(( NAME=net2)(LOCAL_LISTENER=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.102.63)(PORT=1523)))))','(( NAME=net2)(REMOTE_LISTENER=ron12cpri-dg-scan:1521ron12cpri-dg-scan:1521ron12cpri-dg-scan:1521)), remoteEndp= ron12cpri-scan:1521','ron12cpri-scan:1521','ron12cpri-scan:1521
2016-05-14 19:47:49.637662 : USRTHRD:2023044864: {1:58687:2893} sqlStmt = ALTER SYSTEM SET REMOTE_LISTENER=' ron12cpri-scan:1521','ron12cpri-scan:1521','ron12cpri-scan:1521' SCOPE=MEMORY SID='RON_1' /* db agent *//* {1:58687:2893} */
2016-05-14 19:47:49.645739 : USRTHRD:2023044864: {1:58687:2893} sqlStmt = ALTER SYSTEM SET LISTENER_NETWORKS='(( NAME=net2)(LOCAL_LISTENER=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.102.63)(PORT=1523)))))','(( NAME=net2)(REMOTE_LISTENER=ron12cpri-dg-scan:1521ron12cpri-dg-scan:1521ron12cpri-dg-scan:1521))' SCOPE=MEMORY SID='RON_1' /* db agent *//* {1:58687:2893} */
2016-05-14 19:47:49.655035 : USRTHRD:2023044864: {1:58687:2893} ORA-02097: parameter cannot be modified because specified value is invalid
ORA-00119: invalid specification for system parameter LISTENER_NETWORKS
ORA-00137: invalid LISTENER_NETWORKS specification with NAME=net2
ORA-00138: all addresses specified for attribute REMOTE_LISTENER are invalid
ORA-00132: syntax error or unresolved network name 'ron12cpri-dg-scan:1521ron12cpri-dg-scan:1521ron12cpri-dg-scan:1521'

2016-05-14 19:47:49.655191 : USRTHRD:2023044864: {1:58687:2893} DbAgent::DedicatedThread::run setRemoteListener Exception OCIException
2016-05-14 19:47:49.655207 : USRTHRD:2023044864: {1:58687:2893} ORA-02097: parameter cannot be modified because specified value is invalid
ORA-00119: invalid specification for system parameter LISTENER_NETWORKS
ORA-00137: invalid LISTENER_NETWORKS specification with NAME=net2
ORA-00138: all addresses specified for attribute REMOTE_LISTENER are invalid
ORA-00132: syntax error or unresolved network name 'ron12cpri-dg-scan:1521ron12cpri-dg-scan:1521ron12cpri-dg-scan:1521'

Looking at this output it appears that setting remote_listener worked, although I thought we’d only set the host once, not thrice? This looks fishy. It appears to work though, as confirmed in v$parameter and the fact that I can connect to the system.

Interestingly, setting listener_networks fails with an ORA-02097: parameter cannot be modified because specified value is invalid. This makes sense: whitespace is missing in the alter system command, and even with the spaces in place the command would still fail. Trying manually confirms that thought:

SQL> ALTER SYSTEM SET LISTENER_NETWORKS='(( NAME=net2)(LOCAL_LISTENER=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.102.63)(PORT=1523)))))','(( NAME=net2)(REMOTE_LISTENER=ron12cpri-dg-scan:1521 ron12cpri-dg-scan:1521 ron12cpri-dg-scan:1521))' SCOPE=MEMORY SID='RON_1';
ALTER SYSTEM SET LISTENER_NETWORKS='(( NAME=net2)(LOCAL_LISTENER=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.102.63)(PORT=1523)))))','(( NAME=net2)(REMOTE_LISTENER=ron12cpri-dg-scan:1521 ron12cpri-dg-scan:1521 ron12cpri-dg-scan:1521))' SCOPE=MEMORY SID='RON_1'
*
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-00119: invalid specification for system parameter LISTENER_NETWORKS
ORA-00137: invalid LISTENER_NETWORKS specification with NAME=net2
ORA-00138: all addresses specified for attribute REMOTE_LISTENER are invalid
ORA-00132: syntax error or unresolved network name 'ron12cpri-dg-scan:1521
ron12cpri-dg-scan:1521 ron12cpri-dg-scan:1521'

OK – that’s all I need to know. When I change the command to look like what I thought it should have looked like in the first place (the remote listener specifying only one host:port), it works:

SQL> ALTER SYSTEM SET LISTENER_NETWORKS='(( NAME=net2)(LOCAL_LISTENER=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.102.63)(PORT=1523)))))','(( NAME=net2)(REMOTE_LISTENER=ron12cpri-dg-scan:1521))' SCOPE=MEMORY SID='RON_1';

System altered.

SQL> alter system register;

System altered.
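The malformed value in the CRSD trace looks like an endpoint list that was concatenated without any separator, while the working value is a single deduplicated host:port entry. A minimal sketch of that difference (hypothetical Python, purely to illustrate the string handling, not the agent’s actual code):

```python
# One entry per SCAN VIP of the second network, as seen in the trace.
endpoints = ["ron12cpri-dg-scan:1521"] * 3

# What the agent appears to build: endpoints glued together with no
# separator, which the database cannot resolve (ORA-00132).
broken = "".join(endpoints)

# What actually works: a single deduplicated host:port entry, since the
# SCAN name itself already round-robins across the VIPs.
fixed = ",".join(sorted(set(endpoints)))

print(broken)  # ron12cpri-dg-scan:1521ron12cpri-dg-scan:1521ron12cpri-dg-scan:1521
print(fixed)   # ron12cpri-dg-scan:1521
```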

What does that mean? Back to my SCAN listeners again:

[oracle@ron12cprinode1 ~]$ lsnrctl status LISTENER_SCAN1

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 14-MAY-2016 20:07:19

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------   
Alias                     LISTENER_SCAN1
Version                   TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date                14-MAY-2016 19:21:39
Uptime                    0 days 0 hr. 45 min. 40 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/12.1.0.2/grid/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/ron12cprinode1/listener_scan1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.100.65)(PORT=1521)))
Services Summary...
Service "-MGMTDBXDB" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "RON" has 1 instance(s).
  Instance "RON_1", status READY, has 1 handler(s) for this service...
Service "RONXDB" has 1 instance(s).
  Instance "RON_1", status READY, has 1 handler(s) for this service...
Service "RON_DG_SRV" has 1 instance(s).
  Instance "RON_1", status READY, has 1 handler(s) for this service...
Service "RON_SRV" has 1 instance(s).
  Instance "RON_1", status READY, has 1 handler(s) for this service...
Service "_mgmtdb" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "ron12cpri" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
The command completed successfully

No negative change there, but would the DG_SCAN listener also pick it up?

[oracle@ron12cprinode1 ~]$ lsnrctl status DGSCANLSNR_SCAN2_NET2

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 14-MAY-2016 20:07:21

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=DGSCANLSNR_SCAN2_NET2)))
STATUS of the LISTENER
------------------------   
Alias                     DGSCANLSNR_SCAN2_NET2
Version                   TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date                14-MAY-2016 19:21:42
Uptime                    0 days 0 hr. 45 min. 39 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/12.1.0.2/grid/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/ron12cprinode1/dgscanlsnr_scan2_net2/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=DGSCANLSNR_SCAN2_NET2)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.102.64)(PORT=1521)))
Services Summary...
Service "RON" has 1 instance(s).
  Instance "RON_1", status READY, has 1 handler(s) for this service...
Service "RONXDB" has 1 instance(s).
  Instance "RON_1", status READY, has 1 handler(s) for this service...
Service "RON_DG_SRV" has 1 instance(s).
  Instance "RON_1", status READY, has 1 handler(s) for this service...
Service "RON_SRV" has 1 instance(s).
  Instance "RON_1", status READY, has 1 handler(s) for this service...
The command completed successfully

Well it seems it did. Now the question is: can I connect?

[oracle@ron12cprinode1 ~]$ sqlplus system/secretPWD@ron12cpri-scan/RON_SRV

SQL*Plus: Release 12.1.0.2.0 Production on Sat May 14 20:08:45 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

ERROR:
ORA-28002: the password will expire within 7 days


Last Successful login time: Sat May 14 2016 19:58:28 +01:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
[oracle@ron12cprinode1 ~]$ sqlplus system/secretPWD@ron12cpri-dg-scan/RON_DG_SRV

SQL*Plus: Release 12.1.0.2.0 Production on Sat May 14 20:08:54 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

ERROR:
ORA-28002: the password will expire within 7 days


Last Successful login time: Sat May 14 2016 20:08:45 +01:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> 

Summary

Well I can! So that solves the problem for the active instance, but problems are bound to happen when the instance restarts. Since I don’t really have control over the instance name in RAC One Node (RON_1 can be started on node 1 and node 2), I can’t hard-code the value for listener_networks into the spfile. As an end result I’d lock myself out just like CRS did. The same issue likely affects multi-node RAC deployments using policy-managed databases.

I have repeated the test with the latest version of the stack (upgraded in place), and got the same result. Here are the version numbers:

[oracle@ron12cprinode2 ~]$ /u01/app/oracle/product/12.1.0.2/dbhome_1/OPatch/opatch lspatches
22674709;Database PSU 12.1.0.2.160419, Oracle JavaVM Component (Apr2016)
22502555;OCW Patch Set Update : 12.1.0.2.160419 (22502555)
22291127;Database Patch Set Update : 12.1.0.2.160419 (22291127)

OPatch succeeded.

[oracle@ron12cprinode2 ~]$ /u01/app/12.1.0.2/grid/OPatch/opatch lspatches
22674709;Database PSU 12.1.0.2.160419, Oracle JavaVM Component (Apr2016)
22502555;OCW Patch Set Update : 12.1.0.2.160419 (22502555)
22291127;Database Patch Set Update : 12.1.0.2.160419 (22291127)

OPatch succeeded.

SQL> select action_time, patch_id,description,status from DBA_REGISTRY_SQLPATCH order by action_time;

ACTION_TIME                                PATCH_ID DESCRIPTION                                                            STATUS
---------------------------------------- ---------- ---------------------------------------------------------------------- ---------------
29-OCT-15 12.46.55.763581                  21555660 Database PSU 12.1.0.2.5, Oracle JavaVM Component (Oct2015)             SUCCESS
29-OCT-15 12.46.55.939750                  21359755 Database Patch Set Update : 12.1.0.2.5 (21359755)                      SUCCESS
14-MAY-16 21.32.15.211167                  21555660 Database PSU 12.1.0.2.5, Oracle JavaVM Component (Oct2015)             SUCCESS
14-MAY-16 21.32.15.233105                  22674709 Database PSU 12.1.0.2.160419, Oracle JavaVM Component (Apr2016)        SUCCESS
14-MAY-16 21.32.15.591460                  22291127 Database Patch Set Update : 12.1.0.2.160419 (22291127)                 SUCCESS

The same values for remote_listener and listener_networks as with 12.1.0.2.5 have been observed, and the error about setting listener_networks in the CRSD log was identical to the previous release. I guess that needs to be raised with Oracle …

Fail Fast

Among movements like Agile, Lean Startup, and Design Thinking these days, you hear the term fail fast. The principle of failing fast is vital to efficiency, but I’ve seen project managers and business partners be offended or even agitated by the term fail fast. I’ve seen it come out like, “Why the hell would I want to fail fast?! I don’t want to fail at all.” The implication, of course: “Failing is for losers. If you’re planning to fail, then I don’t want you on my team.”

I think I can help explain why the principle of “fail fast” is so important, and maybe I can help you explain it, too.

Software developers know about fail fast already, whether they realize it or not. Yesterday was a prime example for me. It was a really long day. I didn’t leave my office until after 9pm, and then I turned my laptop back on as soon as I got home to work another three hours. I had been fighting a bug all afternoon. It was a program that normally ran in about 90 seconds, but when I tried a code path that should have been much faster, I could let it run 50 times that long and it still wouldn’t finish.

At home, I ran it again and left it running while I watched the Thunder beat the Spurs, assuming the program would finish eventually, so I could see the log file (which we’re not flushing often enough, which is another problem). My MacBook Pro ran so hard that the fan compelled my son to ask me why my laptop was suddenly so loud. I was wishing the whole time, “I wish this thing would fail faster.” And there it is.

When you know your code is destined to fail, you want it to fail faster. Debugging is hard enough as it is, without your stupid code forcing you to wait an hour just to see your log file, so you might gain an idea of what you need to go fix. If I could fail faster, I could fix my problem earlier, get more work done, and ship my improvements sooner.
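In code, failing fast usually just means checking for the doomed condition before the expensive work starts, instead of discovering it an hour into the run. A toy sketch (hypothetical names, not the program from the story):

```python
def process_slow(records):
    """Late failure: do all the expensive work, blow up only on the bad record."""
    results = []
    for r in records:
        # imagine ~90 seconds of real work per record here
        results.append(1 / r)  # a zero surfaces only when it is reached
    return results

def process_fail_fast(records):
    """Fail fast: validate every input before any expensive work starts."""
    bad = [i for i, r in enumerate(records) if r == 0]
    if bad:
        raise ValueError(f"records at positions {bad} would divide by zero")
    return [1 / r for r in records]
```

Both functions fail on the same bad input; the second one just tells you immediately, which is the whole point.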

But how does that relate to wanting my business idea to fail faster? Well, imagine that a given business idea is in fact destined to fail. When would you rather find out? (a) In a week, before you invest millions of dollars and thousands of hours in the idea? Or (b) in a year, after you’ve invested millions of dollars and thousands of hours?

I’ll take option (a) a million times out of a million. It’s like asking if I’d like a crystal ball. Um, yes.

The operative principle here is “destined to fail.” When I’m fixing a reported bug, I know that once I create a reproducible test case for that bug, my software will fail. It is destined to fail on that test case. So, of course, I want my process of creating the reproducible test case, my software build process, and my program execution itself to all happen as fast as possible. Even better, I wish I had come up with the reproducible test case a year or two ago, so I wouldn’t be under so much pressure now. Because seeing the failure earlier – failing fast – will help me improve my product earlier.

But back to that business idea... Why would you want a business idea to fail fast? Why would you want it to fail at all? Well, of course, you don’t want it to fail, but it doesn’t matter what you want. What if it is destined to fail? It’s really important for you to know that. So how can you know?

Here’s a little trick I can teach you. Your business idea is destined to fail. It is. No matter how awesome your idea is, if you implement your current vision of some non-trivial business idea that will take you, say, a month or more to implement, not refining or evolving your original idea at all, your idea will fail. It will. Seriously. If your brain won’t permit you to conceive of this as a possibility, then your brain is actually increasing the probability that your idea will fail.

You need to figure out what will make your idea fail. If you can’t find it, then find smart people who can. Then, don’t fear it. Don’t try to pretend that it’s not there. Don’t work for a year on the easy parts of your idea, delaying the inevitable hard stuff, hoping and praying that the hard stuff will work its way out. Attack that hard stuff first. That takes courage, but you need to do it.

Find your worst bottleneck, and make it your highest priority. If you cannot solve your idea’s worst problem, then get a new idea. You’ll do yourself a favor by killing a bad idea before it kills you. If you solve your worst problem, then find the next one. Iterate. Shorter iterations are better. You’re done when you’ve proven that your idea actually works. In reality. And then, because life keeps moving, you have to keep iterating.

That’s what fail fast means. It’s about shortening your feedback loop. It’s about learning the most you can about the most important things you need to know, as soon as possible.

So, when I wish you fail fast, it’s a blessing; not a curse.

FREE Webinar: Efficient techniques to create and maintain your #Oracle Standby Database

Join us for this FREE event on May 19 at 13:00 CET and register now, because seats are limited.

DataGuard_Lunchtime_Webinar

I will be talking about

  • how to create and maintain a 12c Standby Database in the most efficient way
  • how to do switchover and failover
  • how to keep up client connectivity after role changes

These topics will be demonstrated live – positively no slide-show reading.

After this major part of the event, we will briefly advertise our digital learning offerings that relate to Oracle Database Core Technology and how you may take advantage of them.

Hope to see YOU in the session :-)

Those pesky dates as strings

You might be thinking “Nothing is more frustrating than encountering a string column that is full of dates”.

But there is something worse than that… and that is when that string column is full of potential dates, and your job is to sift the good data from the bad. For example, if your table looks like this:


SQL> select * from T;

DATE_STR
--------------------
qwe
01/01/2000
31/02/2000
12-jan-14
20001212
Jan 14, 2016

6 rows selected.

Some of those values are definitely not dates, some of them definitely are dates, and some of them might be dates. The only real way of knowing is to try to convert those strings to dates and see what happens. But we cannot just throw a TO_DATE around the column, because the moment we encounter a bad value, our query will crash. And even if all of the data were valid, the variable formats of the strings would still trip up that approach.

And then comes the kicker (because we get this on AskTom all the time)…

“Can we do it without creating a PLSQL function?”

This always bamboozles me…it is like saying “I need to write a book, but I’m only allowed to use a DVORAK keyboard and my tongue, with one eye closed”.

Why restrict yourself when the facilities are available?

Anyway, here is my workaround – and not a stored function in sight.


SQL> with
  2    function date_checker(p_str varchar2) return date is
  3      l_format sys.odcivarchar2list :=
  4         sys.odcivarchar2list('dd/mm/yyyy','dd-mon-yyyy','yyyymmdd','Mon DD, YYYY');
  5      l_dte date;
  6    begin
  7      for i in 1 .. l_format.count loop
  8        begin
  9          l_dte := to_date(p_str,l_format(i));
 10          return l_dte;
 11        exception
 12          when others then
 13            if i = l_format.count then return null; end if;
 14        end;
 15      end loop;
 16    end;
 17  select date_str, date_checker(date_str) str_as_date
 18  from t
 19  /

DATE_STR             STR_AS_DA
-------------------- ---------
qwe
01/01/2000           01-JAN-00
31/02/2000
12-jan-14            12-JAN-14
20001212             12-DEC-00
Jan 14, 2016         14-JAN-16

6 rows selected.
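The same try-each-known-format idea carries over to any language; here is a quick Python sketch of the equivalent (hypothetical, not from the original post), where each candidate format is attempted in turn and failures are swallowed until one fits:

```python
from datetime import datetime, date

# Candidate formats, mirroring the PL/SQL list above.
FORMATS = ["%d/%m/%Y", "%d-%b-%y", "%Y%m%d", "%b %d, %Y"]

def date_checker(s):
    """Return s as a date if it matches any known format, else None."""
    for fmt in FORMATS:
        try:
            return datetime.strptime(s, fmt).date()
        except ValueError:
            continue  # wrong shape, or an impossible date like 31/02 - try the next
    return None

print(date_checker("31/02/2000"))    # None - right shape, impossible date
print(date_checker("Jan 14, 2016"))  # 2016-01-14
```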

Gotta love 12c.

Speaking at Oracle Midlands on Tuesday 17th May

As the title indicates, I’ll be speaking at the UK Oracle Midlands event on Tuesday evening next week. Details can be found here (and that link should mention the next event if you click it in the future).

oracleMidlands2

I’ll be talking about PL/SQL being called from SQL and how you can “extend” SQL by writing your own functions. That is a relatively well-known thing to do, but the potential impact on performance and the 12c improvements to reduce that impact are less well known. Maybe even more significantly, calling PL/SQL functions from SQL breaks the point-in-time view most of us take for granted with Oracle. More people are blogging and talking about this, but it is still not widely appreciated. Is this a potential issue in any of your systems?

Joel Goodman is also presenting, on storage fragmentation. Joel is one of the best presenters on Oracle tech on the circuit and knows his stuff inside out.

I really love the Oracle Midlands user group; I’ve been to a few of the meetings and presented there one-and-a-bit times before. It meets in the evenings and lays on some free refreshments at half time (samosas when I have been there!). It’s a real, dedicated, grass-roots user group. Annoyingly (for me), most of the meetings for the last year or so have been when I could not get up to the Midlands for them (it’s not a hard or long journey, it was just that the timing was always wrong).

Red Stack are good enough to support/sponsor these events and do so with a light touch. You know they are there but it is not a hard sell, so kudos to them. Mike McKay-Dirden is the person behind these meetings and, with this being the 15th such meeting, I must take my hat off to Mike for running such a successful group.

So, if you are able to get to Birmingham (UK! Not USA…) on Tuesday evening, you should do so for an excellent, free learning opportunity. I hope to see some of you there!

NoCOUG Spring Conference and GLOC 2016

I fly out tomorrow for NoCOUG’s Spring Conference, which will be held on Friday the 13th in San Jose, California. If you were thinking of attending and needed an added incentive, I’ll be putting on a FOUR hour Enterprise Manager 13c hands-on lab, so admit it, you’re intrigued…