The Oracle database, in-memory parallel execution and NUMA

In a previous article, ‘memory allocation on startup’, I touched on the subject of NUMA (Non-Uniform Memory Access). This article is about how to configure NUMA, how to look into NUMA usage, and a real-life case of NUMA optimisation using in-memory parallel execution.

At this point in time (the start of the summer of 2016) we see that the competition for CPU clock speed has stagnated and settled somewhere below 4 GHz; instead, the number of cores and the amount of memory keep growing. The most commonly used server type in the market I work in is the two-socket server. It is not unthinkable that servers with more than two sockets will grow in popularity in the near future, since they increase the (parallel) computing capacity and the maximum amount of memory available.
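As a first pointer, and purely as an illustrative sketch (the parameter name, its default and its behaviour differ between Oracle versions, so treat this as an assumption to verify before relying on it): on recent releases Oracle's NUMA optimisations are commonly governed by the hidden parameter _enable_NUMA_support, which can be inspected and changed as SYS.

-- Sketch only: look at the hidden NUMA parameter. Hidden (underscore)
-- parameters should only be changed after testing and with support guidance.
select a.ksppinm  param_name, b.ksppstvl param_value
from   x$ksppi a, x$ksppcv b
where  a.indx = b.indx
and    a.ksppinm = '_enable_NUMA_support';

-- Enable it in the spfile; an instance restart is needed for it to take effect.
alter system set "_enable_NUMA_support" = true scope = spfile;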

Experimenting with the ZFSSA’s snapshot capability using the simulator part 2

In my last post I wrote down some notes about my experience while experimenting with the ZFSSA simulator. A simulator is a great way to get familiar with the user interface and general usability of a product. What I wanted to find out using the ZFSSA simulator was the answer to the question: “what happens to a clone of a database when I roll the master copy forward?”

In the first part of the series I explained how I created a clone of a database, named CLONE1. It is based on a backup of my NCDB database; on top of that backup I created a snapshot as the basis for the clone. A clone in ZFS(SA) terminology is a writeable snapshot, and CLONE1 uses it. But what would happen to CLONE1 if I modified the source database, NCDB? And can I create a new clone, CLONE2, based on a new backup of the source without modifying the first clone? Let’s try this.

Changing the Source Database

NoSQL workshop at Oracle user group conference

Oracle NoSQL Database has been regularly featured at the conferences of the Northern California Oracle Users Group. But at its most recent conference, the group dared to play outside the Oracle sandbox with a whole-day NoSQL workshop featuring three Oracle competitors: MongoDB, Couchbase, and Cassandra.

Exceptional SQL

SQL's union operators can make queries easy to write and intuitive to read and understand. One of these is the EXCEPT operator that "subtracts" one set of rows from another.

Read the full post at www.gennick.com/database.

Exceptional SQL

Fifth in a series of posts in response to Tim Ford's #EntryLevel Challenge.


SQL implements a number of so-called union operators that, under the right circumstances, can make queries easy to write and intuitive to read and understand. One of these is the EXCEPT operator that "subtracts" one set of rows from another.

Say, for example, that you're doing some work on data quality and want to investigate products that your firm has sold without ever having purchased them. What have you sold but never bought? You can answer that question easily by executing the following EXCEPT query:
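The query itself is not part of this excerpt, but with hypothetical sales and purchases tables that both carry a product_id column, such an EXCEPT query could look like this:

-- Products that appear in sales but never in purchases (hypothetical tables).
select product_id
from   sales
except
select product_id
from   purchases;

Every product_id returned by the first SELECT is removed from the result if it also appears in the second SELECT, leaving only the "sold but never bought" rows.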

perf top ‘Too many events are opened.’ message

This is a small blog post on using ‘perf’. I got an error message when I tried to run ‘perf top’ system-wide:

# perf top
Too many events are opened.
Try again after reducing the number of events

What is actually going on here is described in the perf wiki:

Open file limits
The design of the perf_event kernel interface which is used by the perf tool, is such that it uses one file descriptor per event per-thread or per-cpu.
On a 16-way system, when you do:
perf stat -e cycles sleep 1
You are effectively creating 16 events, and thus consuming 16 file descriptors.

SQL statements using literals

16 years ago, someone “Ask-ed Tom” how to find those SQL statements that were not using bind variables. You can see the question here (because we don’t ever delete stuff), but I’ll paraphrase the answer below:
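The paraphrased answer is not included in this excerpt. As an illustration only (not necessarily the exact approach from the post), one common way to spot literal-laden statements on recent versions is to group V$SQL by FORCE_MATCHING_SIGNATURE, which is identical for statements that differ only in their literals:

-- Statements that collapse to the same signature once literals are stripped;
-- a high count suggests literals are being used where binds belong.
select force_matching_signature,
       count(*)    stmt_variants,
       min(sql_id) sample_sql_id
from   v$sql
where  force_matching_signature > 0
group  by force_matching_signature
having count(*) > 10
order  by stmt_variants desc;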

Storing Date Values As Characters Part II (A Better Future)

In the previous post, I discussed how storing date values within a character data type is a really really bad idea and illustrated how the CBO can easily get its costings totally wrong as a result. A function-based date index helped the CBO get the correct costings and protect the integrity of the date data. During […]
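As an illustration of that idea (the table and column names below are made up, not taken from the post), a function-based index that converts the character column back to a real DATE both gives the optimiser proper date semantics and rejects strings that are not valid dates during index maintenance:

-- Hypothetical table with dates stored as 'YYYYMMDD' strings.
create table sales_char ( sale_id number, sale_date_char varchar2(8) );

-- Function-based index exposing the real date to the optimiser.
create index sales_char_date_i
on sales_char ( to_date(sale_date_char, 'YYYYMMDD') );

-- Queries must use the same expression for the index to be considered.
select *
from   sales_char
where  to_date(sale_date_char, 'YYYYMMDD') = date '2016-05-01';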

MERGE vs UPDATE/INSERT revisited

I wrote a few years back that for single-row operations, MERGE might in fact have a larger overhead than the do-it-yourself approach (i.e., attempt an update and, if it fails to touch a row, do an insert).
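For reference, here is a minimal sketch of the two patterns being compared (the table name and columns are hypothetical, not the demo from the post):

-- Do-it-yourself single-row upsert: update first, insert only if nothing matched.
begin
  update t set val = :val where pk = :pk;
  if sql%rowcount = 0 then
    insert into t (pk, val) values (:pk, :val);
  end if;
end;
/

-- The equivalent single-row MERGE.
merge into t
using (select :pk as pk, :val as val from dual) s
on (t.pk = s.pk)
when matched then update set t.val = s.val
when not matched then insert (pk, val) values (s.pk, s.val);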

Just to show that it’s always good to revisit things as versions change, here’s the same demo (scaled up now because my laptop is faster).

Datatypes for DATES

Richard Foote has written a post about the problems caused by not using the DATE datatype for storing dates.

So I’ve come up with a revolutionary system to store dates as well…using the very efficient RAW datatype.

Here’s a demo
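(The demo itself is cut off in this excerpt; the sketch below is a hypothetical illustration of the idea only, storing the text form of a date inside a RAW column and converting it back on every read.)

-- Tongue-in-cheek sketch: a "date" column using RAW.
create table t_raw_dates ( d raw(16) );

insert into t_raw_dates
values ( utl_raw.cast_to_raw( to_char(sysdate, 'YYYYMMDDHH24MISS') ) );

-- Every query now has to reverse the conversion to get a real DATE back.
select to_date( utl_raw.cast_to_varchar2(d), 'YYYYMMDDHH24MISS' ) as real_date
from   t_raw_dates;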