November 2016

SQL Server on Linux

I thought I’d do something on Oracle this week, but then Microsoft made an announcement that was like an early Christmas present: a SQL Server release for Linux.

Introduction to Intel Pin

This blogpost is an introduction to Intel’s Pin dynamic binary instrumentation framework. Pin and the pintools were brought to my attention by Mahmoud Hatem in his blogpost Tracing Memory access of an oracle process: Intel PinTools. The Pin framework provides an API that abstracts away instruction-set specifics at the CPU layer. Because the instrumentation is applied dynamically to the binary, it requires no recompiling of source code. This means we can use it with programs like the Oracle database executable.
The Pin framework download comes with a set of pre-built tools called ‘Pintools’, some of which are really useful for Oracle investigation and research.
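As a minimal, hedged illustration of what running one of those Pintools looks like (the paths below assume the default layout of the extracted Pin kit, and 12345 is a placeholder process id):

    # count executed instructions in a newly started program
    ./pin -t source/tools/ManualExamples/obj-intel64/inscount0.so -- /bin/ls

    # or attach to an already running process, such as an Oracle server process
    ./pin -pid 12345 -t source/tools/ManualExamples/obj-intel64/inscount0.so

The inscount0 example writes its instruction count to inscount.out once the instrumented program, or the attached session, finishes.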

Building Classrooms in the Cloud

Jumpbox Lab Server

Let’s face it: education without interaction is about as effective as shouting origami instructions at a lumberjack who is cutting down trees. Sure, your informative lessons will come in handy when the product of their work finally becomes paper, but they will be long forgotten and ultimately worthless by then. The only way a student is going to learn is if they can put their hands on what you’re teaching, walk the journey with you, and attach a positive interactive experience to the lesson.

Performing a large correlated update

We had a question on AskTom recently asking for the best way to update a new column on a very large table.  Four in-place update options are at our disposal:

  • Serial update SQL
  • Parallel DML update SQL
  • DBMS_PARALLEL_EXECUTE
  • PL/SQL batching (as long as sensible restart-after-error logic is readily available)

all of which will achieve the required outcome, but all carry a significant redo and undo cost. Also, depending on the size of the new column, such an update might create chaos with row migration, because whilst unlikely, there is definitely the potential for every row to grow beyond the free space available in its block, so that it can no longer be held in situ.
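As a rough sketch of the DBMS_PARALLEL_EXECUTE route (the table BIG_T, column NEW_COL and the chunk settings here are made-up placeholders, not from the original question):

    begin
      dbms_parallel_execute.create_task(task_name => 'UPD_BIG_T');

      -- carve the table into rowid ranges of roughly 10000 rows each
      dbms_parallel_execute.create_chunks_by_rowid(
        task_name   => 'UPD_BIG_T',
        table_owner => user,
        table_name  => 'BIG_T',
        by_row      => true,
        chunk_size  => 10000);

      -- run the update chunk by chunk; :start_id and :end_id are
      -- bound to each chunk's rowid range by the package itself
      dbms_parallel_execute.run_task(
        task_name      => 'UPD_BIG_T',
        sql_stmt       => 'update big_t set new_col = 0
                           where rowid between :start_id and :end_id',
        language_flag  => dbms_sql.native,
        parallel_level => 8);

      dbms_parallel_execute.drop_task(task_name => 'UPD_BIG_T');
    end;
    /

Each chunk commits independently, which keeps the per-transaction undo footprint small, and failed chunks can be retried with RESUME_TASK, giving you much of the restart-after-error logic mentioned above for free.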

Enjoying a great conference at the #DOAG2016

It’s DOAG season again and the Oracle Community is having a good time here! That includes me, presenting on the Multitenant architecture:

[Photo: presenting at #DOAG2016 - https://uhesse.files.wordpress.com/2016/11/doag2016.jpg]

Redo a blogpost

This blogpost is about the Oracle redo log structures and redo efficiency in modern Oracle databases. A lot of subtle things around redo have changed (starting from Oracle 10, actually) and have gone fairly unnoticed. One place where the changes have gone unnoticed is the Oracle documentation: its description of redo is accurate for Oracle 9, not for how redo works in Oracle 10 or today in Oracle 12.1.0.2.

My test environment is a virtual machine with Oracle Linux 7.2 and Oracle 12.1.0.2.161018, and a “bare metal” server running Oracle Linux 6.7 and Oracle 12.1.0.2.160419. Versions are important, as things can change between versions.
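A common starting point for this kind of redo research, though not necessarily the exact method used in the post, is to dump a redo log file to the session trace file and study the output (the path below is a placeholder; pick an actual member from v$logfile):

    -- list the online redo log members, then dump one to the trace file
    select member from v$logfile;
    alter system dump logfile '/u01/oradata/ORCL/redo01.log';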

Optimizer Compatibility, Short Pants and Hot Sauce

OK, so I’m all over the map (technology-wise) right now. One day I’m working with data masking on Oracle, the next it’s SQL Server or MySQL, and the next it’s DB2.

Distributed Trap

Here’s an interesting (and potentially very useful) observation from an OTN database forum thread that appeared at the end of last week. It concerns the problems of pulling data from remote systems, and I’ll start by building some data:
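(The data built in the original post is not reproduced here; the following is only a hypothetical stand-in, with made-up table and database link names, to show the shape of setup the excerpt leads into:)

    -- a local driving table, plus a join to a table over a database link
    create table local_t
    as
    select rownum id, rpad('x',100) padding
    from   dual
    connect by level <= 10000;

    select l.id, r.padding
    from   local_t l,
           remote_t@orcl_remote r
    where  r.id = l.id;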

Demographically correct test data

One of the main reasons for finishing version 1.5.0 of RANDOM_NINJA was to be able to add localization for different countries, so that I could create test data sets that are demographically correct.

So after adding that, the TESTDATA_NINJA package can now create statistically correct data sets for three countries:

  • United States
  • China
  • Denmark

The following ratios of data will be correct according to UN statistics and the CIA World Factbook:

  • Age group - age ratios will be divided into 0-14, 15-64 and 65+
  • Female/Male - gender ratios will be divided within the different age groups, according to the statistics

Identification numbers and birthdays will also be in statistically correct ratios and in valid formats.
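The idea behind such weighted generation can be sketched in plain SQL. The following is only a hypothetical illustration (it is not the TESTDATA_NINJA API, and the 0.19/0.85 cut-offs are placeholder weights, not the UN figures): bucket a uniform random value by cumulative weight, and the buckets come out in the desired ratios.

    -- hypothetical sketch: bucket a uniform random value by cumulative weights
    select age_group, count(*) cnt
    from  (select case
                    when r < 0.19 then '0-14'
                    when r < 0.85 then '15-64'
                    else               '65+'
                  end age_group
           from  (select rownum n,  -- rownum blocks view merging, so r is
                         dbms_random.value r  -- evaluated only once per row
                  from   dual
                  connect by level <= 100000))
    group by age_group
    order by age_group;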