I’m surprised to find that Google is not cleanly ranking the helpful set of blog posts by Oracle’s Maria Colgan on the Oracle Database 12c In-Memory Column Store feature so I thought I’d put together this convenient set of links. Google search seems to only return a few of them in random order.
Over time I may add other helpful links regarding Oracle’s new, exciting caching technology.
There are a large number of “moving parts” when performance-tuning or troubleshooting an Enterprise Manager environment. The new EM performance features (available in release 18.104.22.168) are there to help you pinpoint the source of an issue, and can really make a difference for those unfamiliar with the challenges of WebLogic, Java, the network, or the other complexities that make up EM12c and
An email from Yves Colin (who’ll be presenting on the Tuesday of UKOUG Tech14) prompted me to dig out a little script I wrote some years ago and re-run an old test, leading to this simple question: what’s the largest array insert that Oracle will handle?
If you’re tempted to answer, watch out – it’s not exactly a trick question, but there is a bit of a catch.
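The original test script isn’t reproduced in this excerpt, but the mechanism in question is PL/SQL bulk binding. A minimal sketch of an array insert (the table name, column, and row count here are my illustrative assumptions, not the author’s script) looks like this:

```sql
-- Hypothetical stand-in for the array-insert test: load a PL/SQL
-- collection, then insert the whole array in one FORALL call.
CREATE TABLE t1 (id NUMBER);

DECLARE
    TYPE num_tab IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
    m_ids num_tab;
BEGIN
    FOR i IN 1 .. 1000000 LOOP
        m_ids(i) := i;
    END LOOP;
    -- a single FORALL statement sends the entire array to the server
    FORALL i IN 1 .. m_ids.COUNT
        INSERT INTO t1 (id) VALUES (m_ids(i));
    COMMIT;
END;
/
```

Varying the upper bound of the load loop is the obvious way to probe for a limit, which is where the catch the author mentions would show up.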
A fairly important question, and a little surprise, appeared on Oracle-L a couple of days ago. Running 22.214.171.124, a query completed quickly on its first execution, then ran very slowly on the second because Oracle had used cardinality feedback to change the plan. This shouldn’t really be entirely surprising – if you read all the notes that Oracle has published about cardinality feedback – but it’s certainly a little counter-intuitive.
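A quick way to confirm whether this mechanism was involved (a sketch, not taken from the Oracle-L thread) is to pull the plan of the slow execution from the cursor cache; when feedback has kicked in, the Note section of the output says so:

```sql
-- Run immediately after the slow (second) execution in the same session:
-- the Note section will report "cardinality feedback used for this statement"
-- if the optimizer re-costed the plan from the first run's actual row counts.
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```

If the changed plan is the problem, the feature can be switched off for testing with the hidden parameter `"_optimizer_use_feedback"` (set it to FALSE at session level) – an assumption worth verifying against your own version before relying on it.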
Just a quick note that I posted slides for the 2 talks I did at ECO in Raleigh this week:
Great crowd. I really enjoyed myself.
Note: You can also find other presentations on my Whitepapers/Presentations page via the link at the top of the screen.
The new Oracle Enterprise Manager 12c Command Line Interface book is available via a number of locations, including Amazon and directly from
As shared by a well-known Oracle and Big Data performance geek!
SQL> ALTER SYSTEM SET inmemory_size = 5T SCOPE=spfile;
ALTER SYSTEM SET inmemory_size = 5T SCOPE=spfile
*
ERROR at line 1:
ORA-32005: error while parsing size specification [5T]

SQL> ALTER SYSTEM SET inmemory_size = 5120G SCOPE=spfile;

System altered.
One of the worst problems with upgrades is that things sometimes stop working. A particular nuisance is the execution plan that suddenly stops appearing, to be replaced by an alternative plan that is much less efficient.
Apart from the nuisance of the time spent trying to force the old plan to re-appear, plus the time spent working out a way of rewriting the query when you finally decide the old plan simply isn’t going to re-appear, there’s also the worry about WHY the old plan won’t appear. Is it some sort of bug, is it that some new optimizer feature has disabled some older optimizer feature, or is it that someone in the optimizer group realised that the old plan was capable of producing the wrong results in some circumstances … it’s that last possibility that I find most worrying.
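One of the standard tools for the first of those nuisances – forcing a known-good plan to stick across an upgrade – is SQL Plan Management. A hedged sketch (not from the original post; the sql_id and plan_hash_value are placeholders you would take from your own cursor cache or AWR):

```sql
-- Load a specific plan from the cursor cache as an accepted baseline,
-- so the optimizer keeps using it even if it would now cost a new plan
-- more cheaply. Substitute your own identifiers for the placeholders.
DECLARE
    n PLS_INTEGER;
BEGIN
    n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
             sql_id          => '&sql_id',
             plan_hash_value => &plan_hash_value
         );
    DBMS_OUTPUT.PUT_LINE(n || ' plan(s) loaded');
END;
/
```

This buys time, of course; it doesn’t answer the more worrying question of why the old plan stopped appearing in the first place.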
No, not the 10th posting about first_rows() this week – whatever it may seem like – just an example that happens to use the “calculate costs for fetching the first 10 rows” optimizer strategy and does it badly. I think it’s a bug, but it’s certainly a defect that is a poster case for the inherent risk of using anything other than all_rows optimisation. Here’s some code to build a couple of sample tables:
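The build script itself is not included in this excerpt. As a stand-in only – the table names, row counts, and query are my assumptions, not the author’s code – a typical two-table setup for this kind of first_rows(10) demonstration might look like:

```sql
-- Hypothetical sample tables for a first_rows(10) costing demo.
CREATE TABLE t1 AS
SELECT rownum AS id, rpad('x', 100) AS padding
FROM   dual
CONNECT BY level <= 10000;

CREATE TABLE t2 AS
SELECT rownum AS id, rpad('x', 100) AS padding
FROM   dual
CONNECT BY level <= 10000;

-- The point at issue: ask the optimizer to cost for the first 10 rows.
SELECT /*+ first_rows(10) */
       t1.id
FROM   t1
JOIN   t2 ON t2.id = t1.id
WHERE  rownum <= 10;
```

The defect the author describes would show up in how the optimizer costs this “fetch the first 10 rows” strategy, which is one reason all_rows optimisation is the safer default.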
A quick question out to the world. What management tools do you use for MySQL?
We currently have: