Pattern Matching (MATCH_RECOGNIZE) in Oracle Database 12c

I’ve spent the last couple of evenings playing with the new SQL pattern matching feature in Oracle 12c.

I’m doing some sessions on analytic functions at some upcoming conferences and I thought I should look at this stuff. I’m not really going to include much, if anything, about it as my sessions are focussed on beginners and I don’t really want to scare people off. The idea is to ease people in gently, then let them scare themselves once they are hooked on analytics. :) I’m thinking about Hooked on Monkey Fonics now…
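
For a flavour of the syntax, here’s a minimal sketch of a MATCH_RECOGNIZE query. The stock_prices table, its columns and the “fall then rise” pattern are purely illustrative assumptions, not an example from my sessions:

    -- Minimal MATCH_RECOGNIZE sketch: find a price drop followed by a recovery
    -- (illustrative table and column names).
    SELECT *
    FROM   stock_prices
    MATCH_RECOGNIZE (
             PARTITION BY symbol
             ORDER BY price_date
             MEASURES FIRST(down.price_date) AS decline_start,
                      LAST(up.price_date)    AS recovery_end
             ONE ROW PER MATCH
             PATTERN (strt down+ up+)
             DEFINE
               down AS down.price < PREV(down.price),
               up   AS up.price   > PREV(up.price)
           );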

The APPROX_COUNT_DISTINCT Function – A Test Case

The aim of this post is not to explain how the APPROX_COUNT_DISTINCT function works (you can find basic information in the documentation and in this post written by Luca Canali), but to show you the results of a test case I ran to assess how well it works.

Here’s what I did…

I created a table with several numerical columns (the name of the column shows how many distinct values it contains), loaded 100 million rows into it (the size of the segment is 12.7 GB), and gathered the object statistics.
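
The comparison the test case is built around looks something like this sketch. The table name t and the column name n_1000000 are illustrative, following the naming convention described above:

    -- Compare the exact number of distinct values with the approximation
    -- (illustrative names: table t, column n_1000000 with about 1,000,000 distinct values).
    SELECT COUNT(DISTINCT n_1000000)        AS exact_ndv,
           APPROX_COUNT_DISTINCT(n_1000000) AS approx_ndv
    FROM   t;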

Oracle ACE = Oracle’s Bitch?

I got a comment today on my recent Oracle fanboy post, which I thought was very interesting and worthy of a blog post in reply. The commenter started by criticising the Oracle license and support costs (we’ve all had that complaint) as well as the quality of support (yes, I’ve been there too), but that wasn’t the thing that stood out. The final paragraph was as follows…

“One addition. I know you, your past work and you are very brainy person but since last couple of years you became Oracle doctrine submissive person just like most of the rest of ACE Directors. When you were just ACEs, you were more trustworthy than now and you weren’t just Oracle interpreters… And unfortunately I’m not the only person with this opinion, but probably I’m only one who is not affraid to make it public.”

Deadlocks

A recent question on the OTN forum asked about narrowing down the cause of deadlocks, and this prompted me to set up a little example. Here’s a deadlock graph of a not-quite-standard type:
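
For context, the standard deadlock (as opposed to the not-quite-standard graph in this example) comes from two sessions locking the same rows in opposite order. A sketch, using a hypothetical table t1:

    -- Session 1
    UPDATE t1 SET val = 1 WHERE id = 1;
    -- Session 2
    UPDATE t1 SET val = 2 WHERE id = 2;
    -- Session 1: blocks, waiting for session 2's row lock
    UPDATE t1 SET val = 1 WHERE id = 2;
    -- Session 2: blocks on session 1's lock; Oracle detects the cycle
    -- and raises ORA-00060 in one of the sessions
    UPDATE t1 SET val = 2 WHERE id = 1;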

The Ad Hoc Reporting Myth

Empowering users! Giving users access to the information they need, when they need it! Allowing users to decide what they need! These are all great ideas and there are plenty of products out there that can be used to achieve this. The question must be, is it really necessary?

There will always be some users that need this functionality. They will need up-to-the-second ad hoc reporting and will invest their time into getting the most from the tools they are given. There is also a large portion of the user base that will quite happily use what they are given and will *never* invest in the tool set. They don’t see it as part of their job and basically just don’t care.

Plan depth

A recent posting on OTN reminded me that I haven’t been poking Oracle 12c very hard to see which defects in reporting execution plans have been fixed. The last time I wrote something about the problem was about 20 months ago referencing 11.2.0.3; but there are still oddities and irritations that make the nice easy “first child first” algorithm fail because the depth calculated by Oracle doesn’t match the level that you would get from a connect-by query on the underlying plan table. Here’s a simple fail in 12c:
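
The comparison behind that claim is between the DEPTH column Oracle reports and the level a connect-by query derives from the ID/PARENT_ID pairs in the plan table. A sketch of that cross-check (not the specific failing example, and with any statement_id filter omitted):

    -- Compare Oracle's calculated DEPTH with a connect-by level over the same rows.
    -- In a well-behaved plan the two columns agree; the failure is when they don't.
    SELECT id, parent_id, depth,
           level - 1 AS connect_by_depth
    FROM   plan_table
    START WITH parent_id IS NULL
    CONNECT BY PRIOR id = parent_id
    ORDER  BY id;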

Dallas Oracle Users Group Performance Meetup

I spoke at a one-day DOUG meeting yesterday. It was pretty cool. Very small intimate group of about 50. The speakers were Nitin Vengurlekar, Charles Kim, Cary Millsap and myself. All are ACE Directors and either work at Viscosity or Enkitec. As a bonus, Tanel Poder showed up to weigh in on some open discussion. Anyway, I thoroughly enjoyed it. I promised the group I would post my slides and a zip file with some of my scripts that I demoed. So here it is (click on the image to download a zip file with PDF and scripts):

How to setup git as a daemon

This is a quick post on using git on a server. I use my Synology NAS as a fileserver, but also as a git repository server. The default git package for Synology enables git usage on the command line, which means via ssh, or via WebDAV. Both require a logon to do anything with the repository. That is not very handy if you want to clone and pull from the repository in an automated way. Of course there are ways around that (basically setting up password-less authentication, probably via certificates), but I wanted simple, read-only access without authentication. If you install git on a Linux or Unix server you get the binaries but no daemon, which means you can only use ssh if you want to use that server for central git repositories.
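
A sketch of what that looks like with the stock git daemon; the base path and hostname below are placeholders, not my actual Synology setup:

    # Serve everything under /volume1/git read-only over the git protocol,
    # without authentication (placeholder path and hostname).
    git daemon --base-path=/volume1/git \
               --export-all \
               --reuseaddr \
               --detach

    # Clients can then clone anonymously:
    git clone git://nas.example.com/myrepo.git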

ReadyNAS 104

My old NAS went pop a little while ago and I’ve spent the last few weeks backing up to alternate servers while trying to decide what to get to replace it.

Reading the reviews on Amazon is a bit of a nightmare because there are always scare stories, regardless of how much you pay. In the end I decided to go for the “cheap and cheerful” option and bought a ReadyNAS 104. I got the diskless one and bought a couple of 3TB WD Red disks, which were pretty cheap. It supports the 4TB disks, but they are half the price again and I’m mean. Having just two disks means I’ve got a single 3TB RAID 1 volume. I can add a third and fourth disk, which will give me approximately 6 or 9 TB. It switches to RAID 5 by default with more than 2 disks.