Oakies Blog Aggregator

Parse Time

Here’s a little query I wrote some time ago to see where my time went while running a query. It’s nothing sophisticated, just one of those simple things you can do with v$active_session_history (or dba_hist_active_sess_history, if you don’t get to the crime scene in time).


set null CPU

select
        sql_exec_id, in_parse, in_hard_parse, event, count(*)
from
        v$active_session_history
where
        sql_id = '{your choice here}'
group by
        sql_exec_id, in_parse, in_hard_parse, event
order by
        sql_exec_id, in_parse, in_hard_parse, count(*)
;

SQL_EXEC_ID I I EVENT                                            COUNT(*)
----------- - - ---------------------------------------------- ----------
   40341649 N N db file sequential read                                68
   40341649 N N CPU                                                    21
   40341649 N N db file scattered read                                  9
            Y Y CPU                                                     7

I had run the query that I was tracking exactly once, but my ASH query allows for, and separates, multiple executions of the same query by grouping on sql_exec_id (the identifier that SQL Monitor also uses). The last row looks a little odd, though: it doesn’t have a value for sql_exec_id; that’s because those are ASH samples taken while the query was being optimized, not executed – note that I’ve reported the columns in_parse and in_hard_parse – and both are set to “Y” for that row.

So (statistically speaking) it’s probably taken about 7 CPU seconds for Oracle to optimise the statement, and from the rest of the results you can see that it’s taken about 21 CPU seconds to run, with 68 seconds spent on random I/Os and 9 seconds spent on multiblock reads, for a total of 105 seconds elapsed.

Seven seconds sounds like quite a lot of time for parsing – but it was a fairly complex statement. However, the reason I’d been running the statement on a test system (a fairly good clone of production) was that I’d been seeing something stranger on production and I needed to get a baseline on the test system before I started trying to fix the problem. Here are the equivalent ASH reports for the same statement when it had run on production at a time that allowed me to capture its ASH samples.


SQL_EXEC_ID I I EVENT                                            COUNT(*)
----------- - - ---------------------------------------------- ----------
  514257929 N N CPU                                                    21
  514257929 Y Y latch: row cache objects                                1
            Y Y CPU                                                   119

Note the 119 CPU seconds spent parsing to run a 22 second query ! But that wasn’t the worst of it – sometimes the results looked more like this:


SQL_EXEC_ID I I EVENT                                            COUNT(*)
----------- - - ---------------------------------------------- ----------
  523748347 N N db file sequential read                                 1
  523748347 N N CPU                                                    32
            Y Y resmgr:cpu quantum                                     11
            Y Y latch: row cache objects                               38
            Y Y CPU                                                   415

That’s a pretty catastrophic optimisation time – especially since the statement can be optimised in seven seconds in another environment. You might note the resource manager kicking in there: the session is exceeding the CPU limit set for its resource group – though not very often, given how infrequently it seems to be waiting on “resmgr:cpu quantum”. But there’s another important wrinkle to this report – which you can see when you compare v$active_session_history with v$sql.


  1  select
  2     sql_id,
  3     round(cpu_time/1000000,2) cpu,
  4     round(elapsed_time/1000000,2) ela from v$sql
  5  where
  6     sql_text like '{some identifying text}'
  7* and        sql_text not like '%v$sql%'
SQL> /

SQL_ID               CPU        ELA
------------- ---------- ----------
2atyuc3vtpswy     285.81     516.13

The figures from v$sql don’t match very well with the summed results from ASH which has a total sample of 497 seconds and a CPU sample of 447 seconds. I think I can live with a statistical error of 4% ((516-497)/516) in a random sample for total time, but how do you explain the 36% error in the CPU time ?

The samples reporting “resmgr:cpu quantum” are a clue: the machine is overloaded; it’s trying to use far more CPU time than is available. As a result a process that gets pushed off the CPU by the operating system scheduler while it’s running can spend a long time in the run queue waiting to start running again. And if it’s an Oracle process that got pre-empted it doesn’t “know” that it’s not running: it didn’t put itself into a wait state, so all it “knows” is that it’s not in a wait state.

So how do ASH and v$sql differ ? The code that derives the cpu_time for v$sql issues a call to the O/S asking “how much CPU have I used?” The code that takes an ASH sample asks: “is this session active; if so, is it in a wait state; and if it’s not in a wait state then it’s either on the CPU or in the run queue waiting for the CPU”. So when we compare v$sql with ASH, the difference in CPU is (statistically speaking) time spent in the run queue. So of our 447 seconds of CPU recorded by ASH, we spent 161 seconds in the run queue waiting for the CPU.
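As a cross-check you can do the comparison directly in SQL. The following is just a sketch (substitute your own sql_id; it assumes a single child cursor, and remember that ASH samples approximate seconds only in a statistical sense): the ASH “CPU” sample count minus the CPU seconds from v$sql is, roughly, the time spent in the run queue.

select
        ash.cpu_samples,
        round(sq.cpu_time/1000000)                   cpu_seconds,
        ash.cpu_samples - round(sq.cpu_time/1000000) run_queue_seconds
from
        (
        select  count(*) cpu_samples
        from    v$active_session_history
        where   sql_id = '{your choice here}'
        and     event is null           -- the samples reported as CPU above
        )       ash,
        (
        select  cpu_time                -- microseconds of CPU actually consumed
        from    v$sql
        where   sql_id = '{your choice here}'
        and     child_number = 0        -- assumption: a single child cursor
        )       sq
;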

We still have to account for the difference between the 7 CPU seconds on a test system and the variation between 119 CPU seconds and 415 CPU seconds in optimisation on production. In a word – concurrency. Apart from everything else going on at the time there were, in the worst case, 16 slightly different versions of the same statement being called at the same time (on a machine with 24 cores) – all 16 statements were competing violently for the same resources at the same time, and the escalating conflict as more sessions joined in produced an exponential growth in time spent competing for resources rather than doing the job. (I managed to demonstrate the effect quite nicely during the evening by limiting the batch to 4 concurrent executions – and got a typical parse time of 40 CPU seconds).

I’ve often warned people about the problems of concurrency and encouraged them to think about how much time is being spent in competition rather than doing the job; I think this is the most extreme case I’ve seen in a production system. Given how dramatic the variation is, I can’t help wondering if the problem has been exaggerated by some corner case of sessions spinning for mutexes or latches; perhaps even an error in the code that allows resource management to put a session into “resmgr:cpu quantum” while it’s holding a latch or mutex. (I wasn’t able to emulate such an extreme display of the problem on a slightly newer version of Oracle, but I was able to construct a test that demonstrated the effect with a much smaller wastage of CPU.)

The solution (almost certainly): the statements are extremely similar, varying in just one predicate that is using a literal constant. It ought to be a relatively safe and simple change to make the query use a bind variable in that predicate. If the solution is adopted I’d expect to see the (once only) parse time on production drop back to about 7 seconds. Of course, if any other session tries to call the same statement at the same time it ought to end up reporting 7 seconds waiting on “cursor: pin S wait on X” before it starts executing – but that 7 seconds wait is a lot better than an extra 493 CPU seconds trying to optimise the “same” statement at the same time.
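For illustration only – the production statement is far too complex to reproduce here – the shape of the change would be something like the following, using a hypothetical table big_table and column batch_id:

-- Before: each distinct literal is a new statement, so each one
-- pays the full optimisation cost (hypothetical example):
select count(*) from big_table where batch_id = 101;
select count(*) from big_table where batch_id = 102;

-- After: every call shares a single child cursor, so the statement
-- is optimised just once:
variable b1 number
exec :b1 := 101

select count(*) from big_table where batch_id = :b1;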

Footnote:

Running a minor variation on my ASH query to report the sql_plan_hash_value along with the sql_exec_id, I found that the “fix-up” code that updates older ASH rows with information that only becomes available later (e.g. plan_hash_value when optimising, or long time_waited values for a single wait event) only goes back 255 rows – so when I queried ASH for the statements that took 500 seconds to optimise, only 255 of the in_parse rows showed the final sql_plan_hash_value.
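The variation in question is presumably just the original query with sql_plan_hash_value added – a sketch:

select
        sql_exec_id, sql_plan_hash_value, in_parse, in_hard_parse, event, count(*)
from
        v$active_session_history
where
        sql_id = '{your choice here}'
group by
        sql_exec_id, sql_plan_hash_value, in_parse, in_hard_parse, event
order by
        sql_exec_id, sql_plan_hash_value, in_parse, in_hard_parse, count(*)
;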

 

UKOUG Tech14 slides – Exadata Security Best Practices

I think 2 years is long enough to wait between posts!

Today I delivered a session about Oracle Exadata Database Machine Best Practices and promised to post the slides for it (though no one asked about them :). I’ve also posted them to the Tech14 agenda.

Direct download: UKOUG Tech14 Exadata Security slides

Day 1 of UKOUG Tech 14

Monday was the first day of the main part of the UKOUG Tech 14 conference, after SuperSunday’s additional content. I had a great night’s sleep and woke at 7 am, much to the disgust of Richard Foote who had been up for four hours already and didn’t seem to appreciate my #NoJetLagHere hashtag replying to his lack of sleep tweet. :) I managed to get to the conference relatively dry, but realized yet again that you don’t come to Liverpool in December for the weather. :)

The conference itself is at the Arena and Convention Centre in Liverpool. The conference and exhibition are spread over three floors of the conference centre, with the exhibition on floor 0 (it sounds better than the basement, I guess!) and most of the presentations and the speaker’s lounge on the 2nd floor. Some of the presentations are also held in rooms off the exhibition hall. The first session timeslot didn’t hold a lot to interest me specifically, so I went to the speaker lounge and caught up on a bit of email and the like. The second session included a presentation by Joze Senegacnik, a fellow Oak Table member that I had finally met in person the previous night at the Oracle ACE dinner. His presentation was titled “The Evolvement of Oracle Performance Troubleshooting”. When we met at the ACE dinner, he described his paper as a tour of performance since Oracle 2, so that sounded pretty interesting. Not many people have been around to use a lot of those early versions, so it’s always interesting to hear from someone who experienced just how limited some of that early stuff was!

IMG_0495

Unfortunately, Joze’s presentation was in one of the rooms off the exhibition hall and was separated from another one of those rooms by a wall that didn’t reach the roof. The end result is that neither presenter could use a microphone (as they would have drowned out the presenter in the other room) and it was quite difficult to hear, particularly as my hearing isn’t the best anyway. Still, it gave me a heads-up of what it will be like for my two papers today, as both of them are in that same room. I think I’ll have a bit of an advantage over Joze as he is a fairly quietly spoken guy, whereas I’m much more of a “basso profundo”! :)

After Joze’s presentation I finally got to catch up with my lovely colleague Tania Le Voi. Tania and I had worked together in the group I was with in the Enterprise Manager product management team before I moved to the DBaaS team, so it was great to actually spend some time together before sharing lunch with her, James Kao and Jeff Barber (also fellow product managers in the EM team). I also caught up with a customer who has been a reader of my blog, so that’s always fun! Thanks for the positive feedback, Jon! :)

Straight after lunch we had the EM Roundtable slot, which is particularly valuable to us as product managers as these sessions are where we get to hear directly from customers about what they’re doing with the EM product suite, what pain points they might still have and so on. Again, this session was in one of the rooms off the exhibition hall, and if there’s one piece of feedback I would offer the UKOUG Tech 14 organizers, it would be to definitely NOT use these rooms for roundtables, as I had a lot of difficulty hearing some of the issues customers were raising. In fact, it would be better to not use these rooms at all. Those of us who have a hearing loss will realize just how difficult it can be to hear in rooms with competing presenters! The other time I had problems hearing was in sessions on the second floor. From time to time during the day, there were quite heavy periods of rainfall and even hail. I don’t know what sort of material the roof of the convention centre is made from, but the noise level when it was hailing was horrendous!

Unfortunately, I had to spend most of the rest of the afternoon working on my day job, which is one of the downsides of being at a conference. I managed to be finished in time for the exhibition drinks slot, which was held in the exhibition hall. Once again, I enjoyed catching up with a few people over drinks that I hadn’t had the chance to meet in person before. I had to leave after that, so I missed the community drinks session while I went off for dinner with some customers from Barclays, an event organized by Ian Carney, a colleague from my days back in the late 90’s / early 2000’s with Oracle in the US.

All in all, a fairly enjoyable day at UKOUG Tech 14. Looking forward to tomorrow when I have two presentations to do myself, as well as more time catching up with good friends old and new!

Cardinality Change

Here’s an entertaining little change across versions of Oracle, brought to my attention by Tony Hasler during UKOUG Tech 14. It’s a join cardinality estimate, so here are a couple of tables to demonstrate the issue – the only columns needed are the alpha_06 columns, but I reused some code from other demonstrations to create my test case, so there are lots of irrelevant columns in the create table script:


create table t1 nologging as
with generator as (
        select rownum id
        from dual
        connect by rownum <= 1000
)
select
        rownum                                          id,
        mod(rownum-1,200)                               mod_200,
        trunc(dbms_random.value(0,300))                 rand_300,
        mod(rownum-1,10000)                             mod_10000,
        trunc(sysdate) +
                trunc(dbms_random.value(0,1000))        date_1000,
        dbms_random.string('l',6)                       alpha_06,
        dbms_random.string('l',20)                      alpha_20
from
        generator,
        generator
where
        rownum <= 1e6
;

execute dbms_stats.gather_table_stats(user,'t1',method_opt=>'for all columns size 1')

create table t2 nologging as select * from t1;
execute dbms_stats.gather_table_stats(user,'t2',method_opt=>'for all columns size 1')

I’m going to join t1 to t2 with a predicate based on the alpha_06 columns – using a LIKE predicate. Before I do so I’ll point out that there are 1,000,000 rows in the table, and (checking the column stats) 985,920 distinct values for alpha_06. Here’s my query, with the execution plan I got from 11.1.0.7:


select
        count(*)
from
        t1, t2
where
        t2.alpha_06 like t1.alpha_06
;

----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |     1 |    14 |  1122M  (6)|999:59:59 |
|   1 |  SORT AGGREGATE     |      |     1 |    14 |            |          |
|   2 |   NESTED LOOPS      |      |    50G|   651G|  1122M  (6)|999:59:59 |
|   3 |    TABLE ACCESS FULL| T1   |  1000K|  6835K|  1123   (6)| 00:00:06 |
|*  4 |    TABLE ACCESS FULL| T2   | 50000 |   341K|  1122   (6)| 00:00:06 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter("T2"."ALPHA_06" LIKE "T1"."ALPHA_06")

The 50,000 cardinality estimate for t2 looks like the standard 5% guess for “column >= {unknown value}”, following which the join cardinality of 50G is the same 5% guess applied to the Cartesian join between t1 and t2 (1M * 1M * 0.05). It’s not a good estimate in my case because the right answer happens to be close to 1M rows, specifically 1,003,176. So let’s upgrade to 11.2.0.4 and see what we get instead:

----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |     1 |    14 |  1050M  (6)|999:59:59 |
|   1 |  SORT AGGREGATE     |      |     1 |    14 |            |          |
|   2 |   NESTED LOOPS      |      |  2014K|    26M|  1050M  (6)|999:59:59 |
|   3 |    TABLE ACCESS FULL| T1   |  1000K|  6835K|  1051   (6)| 00:00:06 |
|*  4 |    TABLE ACCESS FULL| T2   |     2 |    14 |  1050   (6)| 00:00:06 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter("T2"."ALPHA_06" LIKE "T1"."ALPHA_06")

The estimate has dropped from 50 Billion rows down to 2 Million – a factor of about 25,000: possibly an indicator that the algorithm has changed, and that a few people might find execution plans changing as they upgrade to a newer version of Oracle. The change occurred at 11.2.0.2 as revealed by fix control 9303766 which has the description: “use 1/NDV+1/NROWS for col1 LIKE col2 selectivities”.

Just as a quick check on the arithmetic: there are 1 million rows in table t2, with (as noted above) 985,920 distinct values in the column, so the selectivity should be: 1/1000000 + 1/985920 = 2.014281e-6. Multiply the selectivity by 1e6 and you get 2, the cardinality estimate for t2; multiply the selectivity by 1M*1M (the Cartesian join) and you get 2,014,281, the cardinality estimate of the join. QED.
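If you want to check the arithmetic against your own data, here’s a quick sketch that reads num_rows and the column NDV from the data dictionary and reproduces both estimates:

select
        t.num_rows,
        c.num_distinct,
        round((1/t.num_rows + 1/c.num_distinct) * t.num_rows)              t2_estimate,
        round((1/t.num_rows + 1/c.num_distinct) * t.num_rows * t.num_rows) join_estimate
from
        user_tables    t,
        user_tab_cols  c
where
        t.table_name  = 'T2'
and     c.table_name  = t.table_name
and     c.column_name = 'ALPHA_06'
;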

There are workarounds, of course. One would be to reverse out the fix control, either as an initialisation parameter or in a session logon trigger, another might be to modify the SQL – I think the following would be equivalent:


select
        *
from    t1, t2
where
        t2.alpha_06 like substr(t1.alpha_06,1,length(t1.alpha_06))||'%'
and     t1.alpha_06 is not null
and     t2.alpha_06 is not null

This changes the critical predicate from the form “col1 like col2” to “col1 like {unknown value from function}”, i.e. back to a case where the optimizer uses the 5% guess, and the cardinality estimates go back to the original values.
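For completeness, here’s a sketch of the fix-control workaround mentioned above – switching off fix 9303766 at the session level reverts the optimizer to the old 5% guess (the same string could go into the spfile or a logon trigger, but test carefully first):

alter session set "_fix_control" = '9303766:off';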

Days -1 and 0 of UKOUG Tech 14

The UKOUG user group conference is one conference I have wanted to get to for a long time, but never managed to before. This year, with the change in job roles to being a database architect in the DBaaS team within the Enterprise Manager product management group, I finally got the go ahead to submit some papers. I was notified some time back that one of them had been accepted (Snap Clone) and one shortlisted (DBaaS), and I was asked by one of my colleagues to help out with the Enterprise Manager round table. Just before I came over to the UK, the UKOUG folks asked me to also present the DBaaS presentation, so now I have something on each of the 3 days the conference is scheduled for.

Day -1

This blog post, however, is about getting to the conference, and the SuperSunday session before the main conference starts (hence the Day -1 and 0 part of the title). As you can imagine, the only part of the conference I really wasn’t looking forward to was getting here. Being from Canberra in Australia, the trip was supposed to be about an hour Canberra to Sydney, 15 hours or so Sydney to Dubai, and nearly another 8 from Dubai to Manchester.

The first leg wasn’t too bad. I had about an hour in Qantas Club before getting on the plane to Sydney, so I managed a couple of beers there. :) I must say I was quite surprised by the flight to Sydney. As is often the case around this time of year, afternoon storms were sweeping through so I was expecting either delays or a bumpy ride. However, the plane left on time and it was a pretty smooth flight, so full credit to the Qantas pilots for getting me there relatively comfortably. The plane was unfortunately a Dash 8. The downside of that is I had to check my laptop bag as premium luggage (that is, leave it at the foot of the stairs) and carry my laptop and iPad on board. That’s a bit more problematic at the moment as I’m on crutches with my ongoing knee and back problems, so that slowed down a lot of things.

Once I got the laptop bag back again in Sydney, it was a bit easier going on the crutches, as I could put everything in the laptop bag and hobble around – slow but I get there. Transferred across to the international terminal at Sydney, and hit Qantas Club again. Thankfully I asked what gate the plane to Dubai was from – it was about as far from Qantas Club as you could get! So I asked them to organize a chair to get me there. The plane was delayed by about half an hour because of the storms, so I had time for a couple more beers. :) A lovely young lady arrived with a wheelchair for me, and I was glad she did! I might still be walking there if I had to get there under my own speed!

Once I got on the plane, I found out (as did a lot of other people!) that the plane wasn’t going to Dubai – at least not directly. We flew to Bangkok first. This time it was pretty bumpy most of the way, because of the storms and presumably the back end of the Philippines hurricane. We lost another half hour along the way, so I suppose they were finding their way around the storms as well as they could. I still managed to get quite a bit of sleep along the way (no doubt the mixture of the beers and painkillers helped ;) ).

When we got to Bangkok, we were told we had about an hour stopover, and given the option of either getting off the plane or staying on board. If you left, you not only had to take all your bags with you but you also had to check through Security again. Since I had a good bottle of Australian red in my luggage to share with Lothar Flatz, I decided not to risk losing it at Security (as has happened before) and elected to stay on the plane. A lot of people got off, so there was plenty of room to stretch out on the plane as well. I also managed a reasonable amount of sleep on the Bangkok to Dubai leg. It turned out that the stopover in Bangkok worked fairly well for me. With both back and knee problems raising their ugly heads, two 8 hour flights (well, three if you count Dubai to Manchester as well!) is a much better option than 15 hours plus in one go which it would have been if we’d flown direct to Dubai.

The downside of the two delays getting to Dubai, though, was I now had 24 minutes to get off the plane and get to the gate for the next leg. It’s quite a hike through Dubai airport if you’ve never been there – up two escalators, a fairly circuitous walk and then down two escalators again – but I made it! At a fairly fast trot on crutches, which wasn’t great for either my back or knee, so the first thing I did when I got on the plane was down some more drugs (no time for beers in Qantas Club this time!)

The flight from Dubai to Manchester was probably the worst leg of the lot. I was seated in the pointy end of an A380, which was quite comfortable, but the rows beside me and behind me both had parents with one child each who seemed to have no clue whatsoever how to keep their babies quiet. As a father of three kids, I know how hard that can be but these people didn’t even seem to try to keep the kids quiet! The mother directly behind me was so loud talking to her child that I could hear her (and the little boy) even when I had ear plugs in, headphones on and the music turned up fairly loud. So much for any chance of sleep on that leg!

As I expected when I got to Manchester, my name was called out and I was directed to the baggage counter. I had made the connecting flight but of course the bag hadn’t. Still, 24 minutes isn’t much time to get bags from one plane to another, so I wasn’t exactly surprised. The lady who took all my details was very pleasant, which must be a hard thing to achieve when you probably deal mostly with quite irate people (some of the other people that were having their details taken were much less polite about it than I was!)

Anyway, I wandered out to be met by a good friend who had volunteered to pick me up and take me from Manchester to Liverpool (about an hour’s drive west from Manchester, depending on traffic). It was great to catch up with Phil, and find out what a number of people I hadn’t seen for a while on the Oracle scene in the UK were now up to. He dropped me off fairly close to the hotel as it turned out (the hotel I’m at is in Liverpool One, a pedestrian only region). I think all the rooms are actually apartments, so it’s quite comfortable and more roomy than the normal hotel room you get. I had booked it through the UKOUG web site as it was the closest one (in terms of physical distance) offered, so I was quite annoyed when I got out and about to find there were other hotels MUCH closer still. Normally a 10 minute walk from the hotel doesn’t bother me much, but it’s all a bit harder when you’re on crutches. Of course, being the time of year it is, the weather in Liverpool isn’t fantastic. As I walked around the place today, there were times when I wasn’t sure if it was snowing, hailing, raining, or all three and the damn wind is nearly too strong to walk into. In Australia, we’d call it a lazy wind – it cuts right through you because it’s too darn lazy to go around you. :)

When I got here, the hotel was still getting my apartment ready, so I left my laptop bag with them and wandered off to see where the conference was (always good to know before you have to be there in earnest). It wasn’t open, so I went down to Albert Dock and had a look round there for a while. Nice place, as you can see from the pictures:

IMG_0473

IMG_0474

IMG_0476

IMG_0478

IMG_0482

And of course, being Liverpool you’ve got to have a mention of the Beatles. :) Here’s a picture of them made out of jelly beans.

IMG_0481

I stopped while I was at the Albert Docks for lunch – fish and chips and mushy peas and beer. How English was that! :)

beer

lunch

After some more touristing around Albert Dock, it was back to the hotel, and time to catch up on some sleep. Here endeth Day -1. :)

Day 0

Being a good Catholic lad, day 0 started out with breakfast while I googled for the nearest Catholic church with mass at a reasonable time, which today meant sometime in the morning as SuperSunday at the conference started at 12:30. I found there was a mass at 8:30 at the Liverpool Metropolitan Cathedral, supposedly 20 minutes walk from the hotel. I left 45 minutes before mass, knowing full well I would get totally lost and I was gonna be slow anyway on the crutches. And of course, both were true and I ended up finally getting into the cathedral 10 minutes after mass started. I must say the cathedral itself is quite magnificent. Unfortunately, it was a bit dark in the cathedral itself, so my photos didn’t turn out very well (you can see plenty on Google Images), but here’s one from outside:

cathedral

When I got back to the hotel from the cathedral, my bag had arrived from the airport, complete with my gloves, scarf and hat that I could have done with on the walk back! I unpacked the suitcase, only to find my suit had a hole in it! Not sure if that was there before (the suit is getting a bit old now ;) ) or from the trip over. Anyway, I went back over to the conference to register and attend SuperSunday. SuperSunday is advertised as an additional day of free technical content for Tech 14 attendees. There were a couple of sessions I wanted to see, one on Profiling the Database Writer and Log Writer by Frits Hoogland and the other on Advanced Diagnostics Revisited by Julian Dyke, both co-Oak Table Members with me. I haven’t had a chance to see either present before, though I knew both of them were really good technical people with a deep knowledge of Oracle Database internals.

Frits was first cab off the rank at 12:30, and he warned us at the start that he had a lot of material to cover and had only been able to cover the Log Writer at DOAG a few weeks back. This time he at least managed to get a start on the Database Writer material, but still didn’t have enough time to finish it. Given the level of interest from attendees, I think he could have easily broken the presentation in two to cover the lot. Pity he only had one slot. :) However, what he did manage to cover was covered very well, and it was great to see someone present and back up everything they covered with facts (in this case largely from trace files and strace output). Far too often, you see people presenting “facts” with no backing – you’re just expected to take the presenter’s word for what they say.

Unfortunately, I was a few minutes late when I went to see Julian do his presentation. When I stuck my head in the back of the room, I realized that was all there was room for – my head! I’m almost certain the presentation would have been closed down by fire wardens in the US as there were certainly more people in the room than it was supposed to cater for. So I still haven’t seen Julian speak – oh well, maybe another time.

However, the best value of the day was not in the presentations, interesting though they undoubtedly were. The best part, as is often the case, was the networking you can do. I caught up with Richard Foote, David Kurtz, Marcin Przepiórowski and Bjoern Rost before Frits’s presentation, then spent some time talking with James Morle and Frits before people started streaming out from the second presentation for lunch. I also had time before lunch to say a quick hi to Jared Still and Mark Bobak before they headed into a presentation. While I was having lunch I managed to have a chat with Joel Goodman, Marc Fielding and a few others. Joel and I chatted about his plans for rolling some of the Oracle Expert Summits he’s been organizing throughout the UK and Europe into the Asia Pacific region, so look for some of those to start up sometime in the near future. After that I tried to get into Julian’s talk, but Michael Abbey and I both decided there wasn’t enough room left, so we chatted for some time. It was the first time I’d met Michael in person, but both of us knew of each other even though we hadn’t met before. One thing I hadn’t realized was Michael has been working for Pythian for the past 10 years or so. They certainly have some impressive people at Pythian! Just before I left, Debra Lilley (my Irish wife ;) ) arrived, so I stopped and chatted to her and James Haslam (UKOUG CEO) for a while. James is from a hospitality background rather than an Oracle one, but given his position hospitality is more important anyway!

There were no presentations in the final session at SuperSunday that I wanted to see, so I decided to head back to the hotel to stretch out my back. On the way, I stopped off at a couple of stores – one to get a lock for the laptop as I didn’t have a spare one to bring with me, and the other to get a replacement suit so I can look suitably dashing for my presentations. ;) Then it was back to the Spice Lounge for the Oracle ACE Dinner that Debra Lilley had kindly invited me to. Again, this was a great chance to catch up with some people I knew but hadn’t had the chance to physically meet yet, as well as meet some people that I wasn’t familiar with.

At this stage, it’s probably worth raising an issue that many of us who present regularly would be familiar with. A lot of people know us by name, because they’ve seen us presenting at some conference or another, or at some training event that those of us that have been instructors have run. The end result is that many people can come up to me and greet me by name. If you’re lucky, I’ll remember your face, but a lot of the time I’ll have no clue that I’ve interacted with you before. It’s a problem a lot of us face – I discussed it tonight with Maria Colgan and she hits exactly the same issue. If we’re lucky, people at conferences have their name tags on so we can surreptitiously glance at that to come up with a name. But if we don’t greet you by name, please don’t take it as a personal affront. It’s not meant that way by any means. So if you come up to someone you’ve seen present before, it’s probably worthwhile saying “Hi, I’m Joe Bloggs, I saw you present at XYZ conference in 20xx” so at least we have a means to identify where we met before. Otherwise, I might resort to the generic “Hi, how are you?” sort of greeting (not that there’s anything wrong with that!)

So that was Day -1 and day 0 of UKOUG TECH 14. Tomorrow, there’ll be a lot of interesting presentations to get to, and again, lots of interesting people to meet. I’m looking forward to it, so if you’re at the event, come up and say hi!

Announcing XtremIO Performance Engineering Lab Report: Facts About Redo Logging And NAND Flash.

I invite you to please read this report.

NAND Flash is good for a lot of things but not naturally good with write-intensive workloads. Unless, that is, skillful engineering is involved to mitigate the intrinsic weaknesses of NAND Flash in this regard. I assert EMC XtremIO architecture fills this bill.

Regardless of your current or future plans for adopting non-mechanical storage I hope this lab report will show some science behind how to determine suitability for non-mechanical storage–and NAND Flash specifically–where Oracle Database redo logging is concerned.

Please note: Not all lab tests are aimed at achieving maximum theoretical limits in all categories. This particular lab testing required sequestering precious lab gear for a 104 hour sustained test.

The goal of the testing was not to show limits but, quite to the contrary, to show a specific lack of limits in the area of Oracle Database redo logging. For a more general performance-focused paper please download this paper (click here).  With that caveat aside, please see the following link for the redo logging related lab report:

Link to XtremIO Performance Engineering Lab Report (click here).

 


Munching the Squarespace Hamburger

Many Squarespace templates indicate mobile menus by the so-called hamburger icon. It's the three-bar icon in the upper-left corner of Figure 1. Use of the hamburger is controversial; some find it unintuitive; some prefer a button labeled "Menu". Templates like Five and Marquee make it easy to substitute a menu button like the one you see in Figure 2.

Figure 1. The Five template used by the Seventh-day Adventist congregation in Munising, MI has the hamburger icon at the upper left.

Figure 2. The template's design makes it easy to substitute a menu button via custom CSS, thereby improving the clarity of the navigation.

The Empty Element Method

Five and Marquee are two templates that generate the hamburger icon through an approach I term the Empty Element Method. If you're a Five user, for example, you can bring up a page in your site and view its HTML source to find the following empty tag:

<span id="mobile-navigation-label"></span>

The hamburger icon seen from a phone or a tablet is supplied through a CSS rule such as the following, again from Five. I've elided some of the declarations to focus on the two that combine to generate the icon.

#mobile-navigation-label:before {
  font-family:'squarespace-ui-font';
  ...
  content:"\e030";
}

The rule adds a single character prior to the contents of the targeted span element. The content declaration specifies the character as code point E030, corresponding to the hamburger icon in the particular font created by Squarespace to hold user-interface symbols.

The Benefit

Bizarre as it seems, the Empty Element Method works to your benefit by enabling you to write custom CSS rules that specify any character or string you desire. For example, you can achieve the blue-button effect in Figure 2 by adding the following rules to the Custom CSS Editor of any site built on the Five template:

#mobile-navigation-label:before {
  content: "Menu";
  font-family: "Open Sans";
  font-weight: bold;
  font-size: 18px;
  background-color: #3581d6;
  border-radius: 6px;
  width: 120%;
}
#mobile-navigation-title {
  margin-left: 20px;
}

The same general approach works in the Marquee and any other template implementing the Empty Element Method. Discover the empty element. Write a before rule to specify whatever content you want in place of the hamburger icon. Job done. 

Discovering the Empty Element

How do you know whether your chosen template implements the Empty Element Method? It's easy to check. Here's what to do:

  1. View your site from a browser on a laptop or desktop PC.
  2. Narrow your browser window just enough to show the mobile view.
  3. Right-click the hamburger icon and select Inspect Element.
  4. You should see a result similar to that in Figure 3.
  5. Expand the highlighted element in the left inspector pane.
  6. Look for a before rule. Click it if it's there.
  7. Scroll down the right-side pane and look for a content declaration.  

If you find a before rule having a content declaration, then your template is likely using the Empty Element Method. The highlighted element that you expand in Step 5 is the one to target. 

Figure 3. The Five template's empty element and its associated before rule.

Changing the Content

Look carefully at the before rule in the lower-right pane of Figure 3. Find the content declaration, and scan upward until you see a selector. The template designer targeted the mobile-navigation-label element, so you should do the same:

#mobile-navigation-label:before {
...
}

Now change the font. Squarespace relies upon a font named squarespace-ui-font for various user-interface symbols such as the hamburger. You're replacing the hamburger with text, so choose a font in harmony with the rest of your site.

One approach to choosing a font is to right-click your site title, select Inspect Element, and see what font is being used. Then use the same for your navigation label. (You can inspect one of your heading fonts instead of the site title if you prefer). The site title in Figure 3 is in the Open Sans font, so add that font to the rule:

#mobile-navigation-label:before {
  font-family: "Open Sans";
  ...
}

Don't forget to specify the text to display instead of the hamburger. Do that through a content declaration:

#mobile-navigation-label:before {
  font-family: "Open Sans";
  content: "Menu";
}

Pause to save your changes and view the result, which might or might not look so good depending upon the template you're using. Figure 4 shows the result so far in the Five template.

 

Figure 4. First cut at changing the hamburger icon into a menu button. New text is in place.

 

Do not be dismayed. The overlap in Figure 4 does look awful. In some templates you won't have that problem, but in Five you will have it. The font size is also not pleasing in Figure 4, and the font really ought to be made bold. Finish styling the button first. Then work on adjusting out any remaining problems.

Styling the Content

Some styling is needed to make the word "Menu" in Figure 4 appear as the Menu button in Figure 2. Following is an extension of the CSS rule with additional declarations making the text bold and changing its size. I went with 18px to be consistent with the site title size. The background color is specified as a shade of blue, and the border-radius takes the edge off the button corners.  

#mobile-navigation-label:before {
  font-family: "Open Sans";
  content: "Menu";
  font-weight: bold;
  font-size: 18px;
  background-color: #3581d6;
  border-radius: 6px;
}

Figure 5 shows the result so far. Clearly there's a problem with the button background. The blue square surprised me. I had been expecting the button to encompass the text and be rectangular.

 

Figure 5. The blue menu button takes shape, but is not quite all there yet.

 

Troubleshooting the Design

Sometimes you write custom CSS rules and everything just works as you expect and hope. Not always, though, and not in the case of Figure 5. Sometimes you have to troubleshoot and resolve unexpected side-effects.

My first thought upon seeing the result in Figure 5 was that the width of the span containing the mobile navigation label was fixed. Indeed, that's the case. Inspect the navigation label again as in Figure 3, scroll up just a tad, and you'll see that its width is fixed at 32 pixels. Specify width: 120% to force the button just a bit wider than its text. Doing so adds a bit of "breathing room" as a margin on either side of the button text, and you can adjust the percentage as you see fit for a pleasing result. The rule in its final form is now:

#mobile-navigation-label:before {
  font-family: "Open Sans";
  content: "Menu";
  font-weight: bold;
  font-size: 18px;
  background-color: #3581d6;
  border-radius: 6px;
  width: 120%;
}

Setting the width to 120% solved the square-button problem, giving a nicely-sized and rectangular Menu button. The one remaining issue is to "nudge" the site title a bit to the right so as not to crowd the button. Do that by adding some left margin to the mobile navigation title:

#mobile-navigation-title {
  margin-left: 20px;
}

Figure 6 shows the result, which is the same as in Figure 2. Gone is the unintuitive hamburger, and in its place is a clearly labeled button adding a delightful splash of color. 

 

Figure 6. The final result, showing a blue menu button clearly labeled as to its function.

 

Other Templates

Keep in mind that not all templates implement the Empty Element Method. The approach in this post applies only to a subset of the available templates. Other templates – Bedford and its spin-offs are an example – specify the hamburger in ways that are not amenable to the same approach.

Even templates that are amenable to the approach in this post vary in their implementation. My CSS rules for the Five template won't apply to Marquee, for example. The general approach is sound. You just need to work through the details for your template of choice.


Ten Year Site Anniversary

I realized yesterday that this site has passed its ten year anniversary. In all funny…

Closure

It’s been a long time since I said anything interesting about transitive closure in Oracle, the mechanism by which Oracle can infer that if a = b and b = c then a = c but only (in Oracle’s case) if one of a, b, or c is a literal constant rather than a column. So with that quick reminder in place, here’s an example of optimizer mechanics to worry you. It’s not actually a demonstration of transitive closure coming into play, but I wanted to remind you of the logic to set the scene.
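As a reminder of the literal case, here’s a sketch using the t1 and t2 tables I’m about to create: with the two predicates below, the optimizer can generate the extra predicate t2.n_90 = 15 by transitive closure, because one side of the chain is a literal constant.

select
        count(*)
from
        t1, t2
where
        t1.n_90 = t2.n_90
and     t1.n_90 = 15            -- the optimizer infers t2.n_90 = 15
;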

I have three identical tables, one million rows, no indexes. The SQL to create the first table is one I supplied a couple of days ago to demonstrate changes in join cardinality dependent on Oracle version:


create table t1
nologging
as
with generator as (
	select	--+ materialize
		rownum id
	from dual
	connect by
		level <= 1e4
)
select
	trunc(dbms_random.value(0,1000))	n_1000,
	trunc(dbms_random.value(0,750))		n_750,
	trunc(dbms_random.value(0,600))		n_600,
	trunc(dbms_random.value(0,400))		n_400,
	trunc(dbms_random.value(0,90))		n_90,
	trunc(dbms_random.value(0,72))		n_72,
	trunc(dbms_random.value(0,40))		n_40,
	trunc(dbms_random.value(0,3))		n_3
from
	generator	v1,
	generator	v2
where
	rownum <= 1e6
;

Here’s a simple SQL statement that joins the three tables:


select
	t1.*, t2.*, t3.*
from
	t1, t2, t3
where
	t2.n_90  = t1.n_90
and	t3.n_90  = t2.n_90
and	t3.n_600 = t2.n_600
and	t1.n_400 = 1
and	t2.n_400 = 2
and	t3.n_400 = 3
;

Given the various n_400 = {constant} predicates we should expect to see close to 2,500 rows from each table participating in the join – and that is exactly what Oracle predicts in the execution plan. The question is: what is the cardinality of the final join? Before showing you the execution plan and its prediction I’m going to bring transitivity into the picture.  Note the lines numbered 6 and 7.  If t2.n_90 = t1.n_90 and t3.n_90 = t2.n_90 then t3.n_90 = t1.n_90; so I might have written my query slightly differently – note the small change at line 7 below:


select
	t1.*, t2.*, t3.*
from
	t1, t2, t3
where
	t2.n_90  = t1.n_90
and	t3.n_90  = t1.n_90		-- changed
and	t3.n_600 = t2.n_600
and	t1.n_400 = 1
and	t2.n_400 = 2
and	t3.n_400 = 3
;

So here’s the exciting bit. My two queries are logically equivalent, and MUST return exactly the same row set. Check the final cardinality predictions in these two execution plans (from 12.1.0.2, but you get the same results in 11.2.0.4; older versions have other differences):


First Version - note the predicate for operation 3
----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      | 70949 |  5820K|  1869  (10)| 00:00:01 |
|*  1 |  HASH JOIN          |      | 70949 |  5820K|  1869  (10)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL | T1   |  2500 | 70000 |   622  (10)| 00:00:01 |
|*  3 |   HASH JOIN         |      |  2554 |   139K|  1245  (10)| 00:00:01 |
|*  4 |    TABLE ACCESS FULL| T2   |  2500 | 70000 |   622  (10)| 00:00:01 |
|*  5 |    TABLE ACCESS FULL| T3   |  2500 | 70000 |   622  (10)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T2"."N_90"="T1"."N_90")
   2 - filter("T1"."N_400"=1)
   3 - access("T3"."N_90"="T2"."N_90" AND "T3"."N_600"="T2"."N_600")
   4 - filter("T2"."N_400"=2)
   5 - filter("T3"."N_400"=3)

Second Version - note the predicate for operation 1
----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |  3264 |   267K|  1868  (10)| 00:00:01 |
|*  1 |  HASH JOIN          |      |  3264 |   267K|  1868  (10)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL | T1   |  2500 | 70000 |   622  (10)| 00:00:01 |
|*  3 |   HASH JOIN         |      | 10575 |   578K|  1245  (10)| 00:00:01 |
|*  4 |    TABLE ACCESS FULL| T2   |  2500 | 70000 |   622  (10)| 00:00:01 |
|*  5 |    TABLE ACCESS FULL| T3   |  2500 | 70000 |   622  (10)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T2"."N_90"="T1"."N_90" AND "T3"."N_90"="T1"."N_90")
   2 - filter("T1"."N_400"=1)
   3 - access("T3"."N_600"="T2"."N_600")
   4 - filter("T2"."N_400"=2)
   5 - filter("T3"."N_400"=3)

A small change in the way the predicates are presented gives me a factor of 22 difference in the cardinality estimate – oops!

The actual result with my data was close to 3,000 rows – so one of the estimates in the second version was pretty good; but the point of the blog isn’t that you can “tune” the optimizer by carefully picking your way through transitive closure, the point is that a small “cosmetic” change you might make to a query could result in a significant change in the cardinality calculations which could then make a dramatic difference to the final execution plan. This example, by the way, depends on the same “multi-column sanity check” that showed up in the previous posting.
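If you want to test the effect on your own version of Oracle, here’s a sketch of the method: explain each form of the query and compare the Rows column and the predicate section of the two plans.

explain plan for
select
        t1.*, t2.*, t3.*
from
        t1, t2, t3
where
        t2.n_90  = t1.n_90
and     t3.n_90  = t1.n_90      -- swap to t2.n_90 for the first version
and     t3.n_600 = t2.n_600
and     t1.n_400 = 1
and     t2.n_400 = 2
and     t3.n_400 = 3
;

select * from table(dbms_xplan.display);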

I will be expanding on this posting some time in the next couple of weeks but, again, the example should come up in my session on calculating selectivity at “Super Sunday” at UKOUG Tech 14.