December 2016

How to configure FLASHBACK in #Oracle


Oracle User Groups and the (Lack of) Support from Oracle

UPDATE: I want to thank everyone who supported this post and reached out to me about the challenges of managing and working with user group events. This post, as with others I’ve written on different topics (I have quite the knack to say what most just think, don’t I?

12.2 Index Advanced Compression “High” Part II (One Of My Turns)

In Part I, I introduced the new Index Advanced Compression default value of “HIGH”, which has the potential to significantly compress indexes much more than previously possible. This is due to new index compression algorithms that do more than simply de-duplicate indexed values within a leaf block. Previously, any attempt to completely compress a Unique […]
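
As a minimal sketch of the syntax involved (the table and index names below are invented for illustration, not taken from the post), the new level can be applied when creating or rebuilding an index:

-- Rebuild an existing index with the 12.2 HIGH level (hypothetical index name)
ALTER INDEX sales_pk REBUILD COMPRESS ADVANCED HIGH;

-- Or create a new index compressed from the start
CREATE INDEX sales_cust_ix ON sales (cust_id, sale_date)
  COMPRESS ADVANCED HIGH;

-- Compare the effect via the data dictionary
SELECT index_name, leaf_blocks, compression
FROM   user_indexes
WHERE  index_name IN ('SALES_PK', 'SALES_CUST_IX');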

Queue-based Concurrent Stats Prototype Implementation

This is just a prototype of a queue-based concurrent statistics implementation, using the same basic approach I used a couple of years ago to create indexes concurrently. There are reasons why such an implementation might be useful: in 11.2.0.x the built-in Concurrent Stats feature might turn out to be not that efficient, because it creates lots of jobs that potentially attempt to gather statistics for different sub-objects of the same table at the same time, which can lead to massive contention at the Library Cache level due to the exclusive Library Cache locks required by DDL / DBMS_STATS calls. In 12.1 the Concurrent Stats feature obviously got a major re-write, with more intelligent decisions about what should be processed concurrently and how. Some of the details are exposed via the new view DBA_OPTSTAT_OPERATION_TASKS, but again I've seen it running lots of very
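
For reference, the built-in feature the prototype is being compared against is driven by a DBMS_STATS preference. The owner and table names below are hypothetical and this is only a sketch; 11.2 accepts TRUE/FALSE for the preference, while 12.1 uses MANUAL/AUTOMATIC/ALL/OFF:

-- Enable the built-in concurrent stats gathering, then gather as usual
EXEC DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT', 'ALL');
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'APP_OWNER', tabname => 'BIG_TABLE');

-- 12.1 exposes the per-task breakdown mentioned above
SELECT target, target_type, status, start_time, end_time
FROM   dba_optstat_operation_tasks
ORDER  BY start_time;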

Friday Philosophy – When Tech Fails to Deliver, is it Always a Problem?

I nipped out to the local supermarket this lunch time to get stuff. I use one of those self-use barcode scanners to log all the goods I put in my basket (apart from the bottle of whisky I was stealing). I then go to the payment machine, scan the “finish shopping” barcode and try to pay. I can’t pay.

New mean demo machine

My new notebook is here! I will spend a couple of hours doing the setup. The specs are quite promising:

CPU:
Intel Core i7-6700 | 4 cores | 8 threads | 3.4 – 4.0 GHz
Memory:
32 GB SO-DIMM DDR4 RAM 2400 MHz Crucial Ballistix Sport LT
6 TB Storage:
1 TB M.2 Crucial MX300 SSD
1 TB M.2 Crucial MX300 SSD
2 TB Seagate FireCuda | 5400 rpm | 7 mm
2 TB Seagate FireCuda | 5400 rpm | 7 mm

Okay, it did cost me a fortune. See the new one on the left:


Extended Stats

After my masterclass on indexes at the UKOUG Tech16 conference this morning I got into a conversation about creating extended stats on a table. I had pointed out in the masterclass that each time you drop an index you really ought to be prepared to create a set of extended stats (specifically a column group) on the list of columns that defined the index, just in case the optimizer had been using the distinct_keys statistic from the index to help it calculate cardinalities.
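
As a rough sketch of what that looks like in practice (the table, index, and column names here are invented for illustration), the column group can be created and gathered like this:

-- The index on (col1, col2) is being dropped; keep an equivalent of its
-- distinct_keys by creating a column group on the same columns.
DROP INDEX t1_col1_col2_ix;

SELECT DBMS_STATS.CREATE_EXTENDED_STATS(
         ownname   => user,
         tabname   => 'T1',
         extension => '(COL1,COL2)'
       ) AS extension_name
FROM   dual;

-- Gather stats so the new column group gets its num_distinct figure
EXEC DBMS_STATS.GATHER_TABLE_STATS(user, 'T1');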

#ukoug_tech16 Review with Tweets

UKOUG Tech16 is in its final hours, so it is time for a special review: tweets about it.

It was in Birmingham again – very nice location!

 


Standard Edition is attracting more and more attention, and not only mine

Gluent Podcast with Mark Rittman

Mark Rittman has been publishing his podcast series (Drill to Detail) for a while now, and I sat down with him at the UKOUG Tech 2016 conference to discuss Gluent and its place in the new world.

This podcast episode is about 49 minutes long and explains why I decided to build Gluent a couple of years ago and where I see the enterprise data world going in the future.

It’s worth listening to if you are interested in what we are up to at Gluent and want to hear Mark’s excellent comments about what he sees going on in the modern enterprise world, too!

 

How to reduce Buffer Busy Waits with Hash Partitioned Tables in #Oracle


Large OLTP sites may suffer from Buffer Busy Waits. Hash partitioning is one way to reduce them on both indexes and tables. My last post demonstrated that for indexes; now let’s see how it looks with tables. Initially there is a normal table that is not yet hash partitioned. If many sessions now insert into it simultaneously, the problem shows:
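
As a minimal sketch of the technique the post leads up to (the table name and partition count below are invented, not taken from the original demo), hash partitioning spreads concurrent inserts across many segments:

-- The same kind of table, hash partitioned on its key column so that
-- concurrent inserts hit different partitions (and hence different blocks).
-- A power-of-two partition count gives an even spread across partitions.
CREATE TABLE t_hashed (
  id      NUMBER,
  payload VARCHAR2(100)
)
PARTITION BY HASH (id) PARTITIONS 32;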