HCC

Accessing HCC compressed objects using an index

Problem

I came across another strange SQL performance issue: a SQL statement was running for over 3 hours in a User Acceptance (UA) database, compared to 1 hour in a development database. I ruled out the usual culprits such as statistics, degree of parallelism, etc. Reviewing the SQL Monitor output posted below, you can see that the SQL statement has already done 6 billion buffer gets and that steps 21 through 27 have been executed 3 billion times so far.
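For reference, a report like the one being discussed is typically produced with Real-Time SQL Monitoring; here is a minimal sketch, assuming the statement's sql_id is already known (the value below is a placeholder, not the statement from this case):

-- Generate a text-format SQL Monitor report for the long-running statement;
-- the sql_id is a placeholder.
SET LONG 1000000 LONGCHUNKSIZE 1000000 PAGESIZE 0 LINESIZE 250 TRIMSPOOL ON

SELECT dbms_sqltune.report_sql_monitor(
         sql_id       => 'aabbccddeeff0',
         type         => 'TEXT',
         report_level => 'ALL') AS report
FROM   dual;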

Statistics and execution plan

Index Usage

There are some questions about Oracle that are like the mythical Hydra – you think you’ve killed it, but for every head you cut off another two grow. The claim that “the optimizer will switch between using an index and doing a tablescan when you access more than X% of the data” re-appeared on the OTN database forum a little while ago – it doesn’t really matter what the specific value of X was – and it’s a statement that needs to be refuted very firmly because it’s more likely to cause problems than it is to help anyone understand what’s going on.
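To see why a fixed percentage cannot be the rule, here is a minimal sketch of the kind of test case that demonstrates the point; all object names are made up, and the exact switch-over points will vary with block size, system statistics and so on:

-- Two tables, same data, but very different clustering factors on ID:
-- the well-clustered table keeps using its index far beyond the point
-- at which the scattered table switches to a tablescan.
CREATE TABLE t_clustered AS
SELECT level AS id, rpad('x', 100, 'x') AS padding
FROM   dual
CONNECT BY level <= 100000;

CREATE TABLE t_scattered AS
SELECT id, padding FROM t_clustered ORDER BY dbms_random.value;

CREATE INDEX t_clustered_i ON t_clustered(id);
CREATE INDEX t_scattered_i ON t_scattered(id);

EXEC dbms_stats.gather_table_stats(user, 'T_CLUSTERED')
EXEC dbms_stats.gather_table_stats(user, 'T_SCATTERED')

-- Same predicate, same 10% of the rows in each case - compare the two plans:
EXPLAIN PLAN FOR SELECT * FROM t_clustered WHERE id BETWEEN 1 AND 10000;
SELECT * FROM table(dbms_xplan.display);

EXPLAIN PLAN FOR SELECT * FROM t_scattered WHERE id BETWEEN 1 AND 10000;
SELECT * FROM table(dbms_xplan.display);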

Little things worth knowing: exp/imp vs expdp and impdp for HCC in Exadata

Do you know the difference between exp/imp and expdp/impdp when it comes to importing HCC compressed data in Exadata?

If not, then follow me through two examples. This is on 11.2.0.3/11.2.3.3.1 but applies to all database releases you can have on Exadata. The task at hand is to export a table (which happens to be non-partitioned and HCC compressed for query high) and import it into a different user's schema. This is quite a common approach when migrating data from a non-Exadata system into an Exadata system. You could, for example, pre-create the tables from their DDL and implement HCC before importing a single row. When the data is imported, the table's (or partitions') HCC attributes will be honoured and the data will be inserted compressed. Or will it?
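As a rough sketch of the setup under discussion (showing only the Data Pump variant; schema, directory, file and table names are all placeholders): the target table is pre-created with its HCC attribute in the destination schema, then the data is imported into it.

-- Pre-create the target table with the desired HCC attribute; names are placeholders.
CREATE TABLE target_user.t1_hcc
( id         NUMBER,
  payload    VARCHAR2(200),
  created_dt DATE )
COMPRESS FOR QUERY HIGH;

Then export from the source schema and import into the target schema, keeping the pre-created definition (again, credentials and names are placeholders):

expdp source_user/xxx directory=DATA_PUMP_DIR dumpfile=t1_hcc.dmp logfile=exp_t1_hcc.log tables=t1_hcc

impdp system/xxx directory=DATA_PUMP_DIR dumpfile=t1_hcc.dmp logfile=imp_t1_hcc.log remap_schema=source_user:target_user table_exists_action=append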

The table

Compression Units – 6

I received an email recently asking me if I knew how Oracle found specific rows and columns in a compression unit. This is a topic that I've spoken about a couple of times, and I've published several notes on the blog about it, including an image of a critical slide from one of my presentations, and I was expecting to find some notes somewhere about how Oracle catalogues all the bits and pieces.
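For what it's worth, the usual way to poke at this yourself is a block dump; a minimal sketch, assuming a hypothetical HCC table called T_HCC (pick a rowid, find the file and block it points at, then dump that block and read the compression unit header in the session trace file):

-- Find the file and block behind one row of the (hypothetical) HCC table.
SELECT dbms_rowid.rowid_relative_fno(rowid) AS file_no,
       dbms_rowid.rowid_block_number(rowid) AS block_no
FROM   t_hcc
WHERE  rownum = 1;

-- Dump that block to the session trace file (substitute the values returned above).
ALTER SYSTEM DUMP DATAFILE 7 BLOCK 1234;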

HCC error on Exadata after partitioning maintenance

Recently I was asked to investigate the following error on an Exadata system.

ORA-64307: hybrid columnar compression is not supported for tablespaces on this storage type

Well, that’s simple, I thought! It must be (d)NFS-mounted storage, right? Everyone knows that you can only have HCC on Exadata (and a few other storage products). So I looked at the problem and soon found out that the data files in question all resided on the cells; a quick way to confirm that is sketched below.
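This check is purely illustrative; the tablespace name is a placeholder, not the one from the actual system.

-- Which data files back the tablespace that raised ORA-64307?
SELECT t.name AS tablespace_name, d.name AS file_name
FROM   v$tablespace t, v$datafile d
WHERE  d.ts# = t.ts#
AND    t.name = 'APP_DATA';

Here is the sequence of events: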

Exadata Smart Scan Projection Limitation

Here is an interesting limitation of Exadata Smart Scans - if more than 254 columns from a table (not HCC compressed, more on that in a moment) need to be projected, Smart Scans for that particular segment will be disabled and Exadata will fall back to conventional I/O.
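One simple way to see whether a particular run was offloaded or fell back to conventional I/O is to compare session statistics around the query; the statistic names below are standard Exadata statistics, everything else here is illustrative:

-- Run the query in question, then check these counters for the session.
-- If the "returned by smart scan" figure does not increase, the segment
-- was read with conventional I/O (for example because more than 254
-- columns were projected).
SELECT n.name, m.value
FROM   v$mystat m, v$statname n
WHERE  n.statistic# = m.statistic#
AND    n.name IN ('cell physical IO bytes eligible for predicate offload',
                  'cell physical IO interconnect bytes returned by smart scan');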


HCC And Virtual Columns

This is just a short heads-up note to those dealing with HCC-enabled tables (so at present this applies only to Exadata customers).

As already outlined in a previous post about compression restrictions, tables with HCC enabled do not support dropping columns - DROP COLUMN gets silently converted into SET UNUSED, and DROP UNUSED COLUMNS throws an error saying it is unsupported.

I've recently come across an interesting variation of this restriction. Obviously Oracle treats virtual columns in this case the same way: if you drop a virtual column of an HCC-enabled table, it doesn't get dropped but is also silently turned into an unused column - which doesn't really make sense to me, since dropping it doesn't require any physical modification to the underlying data structures.
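A minimal sketch of the behaviour described above, with made-up names (to be run on an HCC-capable system):

-- HCC-enabled table with a virtual column.
CREATE TABLE t_hcc_vc
( id     NUMBER,
  val    NUMBER,
  val_x2 AS (val * 2) )
COMPRESS FOR QUERY HIGH;

-- The DROP COLUMN succeeds, but the column is only marked unused...
ALTER TABLE t_hcc_vc DROP COLUMN val_x2;

-- ...as the dictionary shows:
SELECT * FROM user_unused_col_tabs WHERE table_name = 'T_HCC_VC';

-- ...and physically removing it then raises an error (unsupported for HCC), as described above:
ALTER TABLE t_hcc_vc DROP UNUSED COLUMNS;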

Now you might wonder why this could be relevant. Well, it can be important for several reasons: