Clean Data = Happy Analytics

I just finished cleaning up the example data that I was offered for my own demos and solutions. Working in education requires that you use education data to ensure what you’re presenting resonates with the users you’re working with. Otherwise you just look silly, presenting something that makes no sense at all to the individual you’re hoping to impress.

Having been given the gift of a large data set from demos and solutions, I quickly took the data in its original form and attempted to use it, “as is”, in Power BI. After a less-than-stellar demonstration, set off by bizarre results in my visuals, I chalked it up to my lack of experience with Power BI. Upon research, a different culprit appeared: incomplete, inaccurate data. After all my years as a DBA, I should have known that it always goes back to the data. If you don’t have clean data and a clean data model, forget it. You’re just wasting your time.

Little things worth knowing: parallel Data Pump export in table mode

I haven’t used Data Pump in a little while but recently needed to do a bit of work involving this very useful utility to export a single table. I know that it is possible to export data in parallel using expdp, but I can’t recall the syntax for doing so off the top of my head when I need it. This post describes a potential approach to exporting a table in parallel. In the next post I will demonstrate an interesting case where using parallelism didn’t help me speed up the export. All of this was tested on 12.2 and 18.4.0, the examples I am sharing originate from my 18.4.0 single instance database (without ASM) running on Linux.
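As a hedged reminder of the syntax (my own example, not the author’s exact command; the credentials, table and directory names are placeholders), a parallel table-mode export combines the PARALLEL parameter with a %U wildcard in DUMPFILE so that each worker process gets its own file to write to:

# placeholder credentials, table and directory; adjust to your environment
expdp martin/secret directory=DATA_PUMP_DIR tables=T1 \
    parallel=4 dumpfile=exp_t1_%U.dmp logfile=exp_t1.log

Without the %U wildcard there are fewer dump files than worker processes, and the export can fail once the single file’s space is exhausted.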

The setup

My lab environment is a bit limited when it comes to storage, so I’ll have to make do with small-ish tables. The basic principles should still apply for larger segments, though. Please note that my tables aren’t partitioned, to keep the individual segment size as large as possible.
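A setup along these lines would fit the description (my own sketch, not the author’s actual script; the table name and source are arbitrary):

-- my own sketch, not the author's script: a non-partitioned heap table,
-- grown by self-insertion until the segment is large enough for the test
create table t1 as select * from dba_objects;

insert /*+ append */ into t1 select * from t1;
commit;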

Understanding Partitioning in #Exasol

Exasol introduced Partitioning in version 6.1. This feature helps to improve the performance of statements accessing large tables. As an example, let’s take these two tables:
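The excerpt cuts off before the tables themselves; as hypothetical stand-ins (my own examples, assuming the 6.1 syntax in which PARTITION BY, like DISTRIBUTE BY, is declared in the column list), they could look like this:

-- hypothetical stand-ins for the two tables, not the ones from the post
CREATE TABLE orders (
    order_id    DECIMAL(18,0),
    customer_id DECIMAL(18,0),
    order_date  DATE,
    PARTITION BY order_date
);

CREATE TABLE customers (
    customer_id DECIMAL(18,0),
    country     VARCHAR(100)
);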


CLI for Amazon RDS Performance Insights

Installing the CLI on Linux

1. Install pip

https://docs.aws.amazon.com/cli/latest/userguide/awscli-install-linux.html#awscli-install-linux-pip

curl -O https://bootstrap.pypa.io/get-pip.py
python get-pip.py --user

2. Install the AWS CLI

https://docs.aws.amazon.com/cli/latest/userguide/installing.html

pip install awscli --upgrade --user
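pip’s --user option installs to ~/.local/bin on Linux, so make sure that directory is on your PATH, then verify the install:

export PATH=$HOME/.local/bin:$PATH
aws --version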

3. Configure the CLI

aws configure

For “aws configure” you will need:

  • AWS Access Key ID:
  • AWS Secret Access Key:

which you can get from the AWS console by going to IAM and creating an access key.
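A typical session looks like this (the key values below are AWS’s documented example placeholders, not real credentials):

$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: json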

Running example

Once “aws” is configured, you can run the CLI.
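For example (a sketch, not the post’s exact command: the Performance Insights calls live under “aws pi”, and the identifier below is a placeholder for your instance’s DbiResourceId):

# placeholder resource id and time window; db.load.avg is the headline
# Performance Insights metric
aws pi get-resource-metrics \
    --service-type RDS \
    --identifier db-EXAMPLERESOURCEID \
    --metric-queries '[{"Metric":"db.load.avg"}]' \
    --start-time 2018-12-01T00:00:00Z \
    --end-time 2018-12-01T01:00:00Z \
    --period-in-seconds 60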

Accelerate your #BI Performance with #Exasol

Do your BI users complain about the slow performance of their analytical queries? Is this your status quo?

Minimal Oracle installation (and Docker image)

A new blog post on the Databases at CERN blog about some research on the minimal Oracle Database installation: https://db-blog.web.cern.ch/blog/franck-pachot/2018-12-minimal-oracle-1

And a bonus here: the Dockerfile which builds this minimal image. You need to build the Oracle XE image (oracle/database:18.4.0-xe) with the buildfiles provided by Oracle (https://github.com/oracle/docker-images/tree/master/OracleDatabase/SingleInstance/dockerfiles/18.4.0), and this Dockerfile will copy only the necessary files:
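The actual Dockerfile appears in the original post; purely as an illustration of the multi-stage idea (the paths below assume the standard 18c XE Oracle Home layout and are not the author’s exact file list):

# illustrative sketch only, NOT the author's actual Dockerfile; paths assume
# the standard 18c XE Oracle Home and the file list is deliberately simplified
FROM oracle/database:18.4.0-xe AS xe

FROM oraclelinux:7-slim
# copy a subset of the Oracle Home rather than the full installation
COPY --from=xe /opt/oracle/product/18c/dbhomeXE/bin /opt/oracle/product/18c/dbhomeXE/bin
COPY --from=xe /opt/oracle/product/18c/dbhomeXE/lib /opt/oracle/product/18c/dbhomeXE/lib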

Case Study

A recent thread on the ODC database forum highlighted a case where the optimizer was estimating 83,000 rows for a particular index full scan while the SQL Monitor output for the operation showed that it was returning 11,000,000 rows.

Apart from the minor detail that the OP didn’t specifically ask a question, the information supplied was pretty good. The OP had given us a list of bind variables, with values, and the SQL statement, followed by the text output of the Monitor’ed SQL and, to get the predicate section of the plan, the output from a call to dbms_xplan. This was followed by the DDL for the critical index and a list of the stats for all the columns in the index.
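For reference, a generic example rather than the OP’s exact call: the predicate section of a cached plan typically comes from dbms_xplan.display_cursor.

-- generic example, not the OP's exact call: show the plan, including the
-- predicate section, for a given sql_id from the cursor cache
select * from table(dbms_xplan.display_cursor('&sql_id', null, 'TYPICAL'));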

Sangam 18: Presentations, Scripts and More

Many, many thanks to those who came to my three sessions at Sangam 18, the largest meeting of Oracle technologists in South Asia.

As I promised, you can download the presentations and scripts for all my sessions here. As always, your feedback will be highly appreciated.

Visual Studio 2017 Incompatibility on *PROJ/SLN Files

Upon joining the Analytics and AI Team at Microsoft to support education, it quickly became apparent that my automation skills were needed more than getting me up to speed on Power BI. My predecessor had built some great solutions, but to make them more scalable it would pay off in the end for me to do the automation work, knowing I’d gain a greater understanding of the products in the long run.

Plans and Trees

Prompted by a question on the ODC database forum – and also because I failed to get to the “Bonus slides” of my presentation on basic execution plans at both the DOAG and UKOUG conferences – here’s a small set of slides demonstrating how to convert a text execution plan into a tree that you can read using the mechanism described in Oracle’s white paper by the phrase: “start from the bottom left and work across and then up”.
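As a quick illustration (my own toy plan, not one from the slides), take a two-table hash join:

--------------------------------------
| Id | Operation           | Name   |
--------------------------------------
|  0 | SELECT STATEMENT    |        |
|  1 |  HASH JOIN          |        |
|  2 |   TABLE ACCESS FULL | DEPT   |
|  3 |   TABLE ACCESS FULL | EMP    |
--------------------------------------

         HASH JOIN (1)
         /         \
   DEPT (2)       EMP (3)

Reading “bottom left, across, then up”: the scan of DEPT (2) runs first to build the hash table, the scan of EMP (3) runs next to probe it, and the HASH JOIN (1) passes its results up to the SELECT STATEMENT (0).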

The file is a Microsoft PowerPoint file (early version).