compression

Index compression–quick tip

If you’re appropriately licensed and want to use advanced index compression, you can take advantage of setting a tablespace default so that indexes created in it are automatically compressed at a nominated level. From the docs:

Here is an example of that in action. We’ll set our tablespace default accordingly:


SQL> create tablespace demo
  2  datafile 'C:\ORACLE\ORADATA\DB122\DEMO.DBF'
  3  size 100M
  4  default index compress advanced high;

Tablespace created.

Now we’ll create a table and an index on it in that tablespace
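A minimal sketch of that step (object names are illustrative) – the index should inherit the tablespace default, which you can confirm from the COMPRESSION column of USER_INDEXES:


SQL> create table t1 ( x int, y int ) tablespace demo;

Table created.

SQL> create index t1_ix on t1 ( x, y ) tablespace demo;

Index created.

SQL> -- the COMPRESSION column should now report the inherited setting
SQL> select index_name, compression
  2  from   user_indexes
  3  where  index_name = 'T1_IX';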

Debugging

The OTN database forum supplied a little puzzle a few days ago – starting with the old, old question: “Why is the plan with the higher cost taking less time to run?”

The standard (usually correct) answer to this question is that the optimizer doesn’t know all it needs to know to predict what’s going to happen, and even if it had perfect information about your data the model used isn’t perfect anyway. This was the correct answer in this case, but with a twist in the tail that made it a little more entertaining. Here’s the query, with the two execution plans and the execution statistics from autotrace:
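The query itself isn’t reproduced in this excerpt, but for reference, output of the kind quoted comes from SQL*Plus autotrace – a generic sketch, not the forum query:

SQL> set autotrace traceonly explain statistics
SQL> -- run the query under test here; SQL*Plus suppresses the result set
SQL> -- and prints the execution plan plus run-time statistics such as
SQL> -- consistent gets, physical reads and sorts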

Implementing Index Compression (and other Physical Storage Options) via Application Designer

There are some performance improvements that require physical storage options to be set on tables or indexes.
One particular technique that I will take as an example for this article is index compression.  A good example in PeopleSoft is the tree node table, PSTREENODE.  It drives many security and hierarchical queries.  It is not updated very frequently, only as new trees are brought on.  Many of the indexes are good candidates for compression.
This compression works by storing repeated column values only once per index leaf block.  Each distinct set of values in the columns up to the prefix length is stored in a symbol table.  The choice of prefix length can significantly affect the compression.  Oracle can calculate the optimal prefix length using ANALYZE INDEX ... VALIDATE STRUCTURE, as sketched below.
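ANALYZE ... VALIDATE STRUCTURE populates INDEX_STATS with the optimal prefix length and the space it would save (the index name here is illustrative):

SQL> analyze index psapstreenode validate structure;

SQL> -- INDEX_STATS holds one row, for the index just analysed
SQL> select name, opt_cmpr_count, opt_cmpr_pctsave
  2  from   index_stats;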

Index Compression – aargh

The problem with telling people that some feature of Oracle is a “good thing” is that some of those people will go ahead and use it; and if enough people use it some of them will discover a hitherto undiscovered defect. Almost inevitably the bug will turn out to be one of those “combinations” bugs that leaves you thinking: “Why the {insert preferred expression of disbelief here} should {feature X} have anything to do with {feature Y}”.

Here – based on index compression, as you may have guessed from the title – is one such bug. I got it first on 11.1.0.7, but it’s still there on 11.2.0.4 and 12.1.0.1.

12c In-memory

I wrote a note about the 12c “In-Memory” option some time ago on the OTN Database forum and thought I’d posted a link to it from the blog. If I have, I can’t find it now, so to avoid losing it here’s a copy of the comments I made:

Juan Loaiza’s presentation is probably available on the Oracle site by now, but in outline: the in-memory component duplicates data (specified tables – perhaps with a restriction to a subset of columns) in columnar format in a dedicated area of the SGA. The data is kept up to date in real time, but Oracle doesn’t use undo or redo to maintain this copy of the data because it’s never persisted to disc in this form, it’s recreated in-memory (by a background process) if the instance restarts. The optimizer can then decide whether it would be faster to use a columnar or row-based approach to address a query.
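For orientation, the moving parts look like this – a minimal sketch of the syntax as it shipped in 12.1.0.2, with illustrative table and column names:

SQL> -- size the dedicated SGA area (takes effect after a restart)
SQL> alter system set inmemory_size = 2G scope=spfile;

SQL> -- flag a table for the column store, excluding one column
SQL> alter table sales
  2    inmemory memcompress for query low
  3    no inmemory (pad_col);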

MV Refresh

I have a fairly strong preference for choosing simple solutions over complex solutions, and using Oracle-supplied packages over writing custom code – provided the difference in cost (whether that’s in human effort, run-time resources, or licence fees) is acceptable. Sometimes, though, the gap between simplicity and cost is so extreme that a hand-crafted solution is clearly the better choice. Here’s an idea prompted by a recent visit to a site that makes use of materialized views and also happens to be licensed for the partitioning option.
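For scale, the packaged end of that trade-off is a one-liner (the materialized view name is hypothetical):

SQL> -- complete ('C') refresh through the Oracle-supplied package
SQL> exec dbms_mview.refresh('SALES_MV', method => 'C')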

Wasted Space

Here’s a little quiz: If I take the average row length of the rows in a table, multiply by the number of rows, and convert the result to the equivalent number of blocks, how can the total volume of data in the table be greater than the total number of blocks below the table high water mark? I’ve got three tables in a schema, and they’re all in the same (8KB block, 1M uniform extent, locally managed) tablespace, but here’s a query, with results, showing their space utilisation – notice that I gather schema stats immediately before running my query:
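The query itself isn’t reproduced in this excerpt, but one of this general shape shows the comparison (table names are illustrative, and the conversion deliberately ignores block overheads and PCTFREE):

SQL> select table_name, num_rows, avg_row_len, blocks,
  2         ceil(num_rows * avg_row_len / 8192) est_blocks
  3  from   user_tables
  4  where  table_name in ('T1','T2','T3')
  5  order by table_name;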

OLTP compression: migrated rows are compressed

In his articles Compression in Oracle – Part 2: Read-Only Data and Compression in Oracle – Part 3: OLTP Compression, Jonathan Lewis has shown that block (re)compression is never attempted on updates - it is attempted only on inserts (and, of course, only if the used space crosses the PCTFREE threshold).
Now, since a row migration could be considered a row re-insertion, does it trigger a compression attempt on the block that the row is migrated into? The answer is yes, as I will show you in a while.
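A sketch of the set-up for that kind of test (names and sizes are illustrative; it assumes the CHAINED_ROWS table created by $ORACLE_HOME/rdbms/admin/utlchain.sql). Proving that the target blocks were then compressed takes a block dump, which is beyond this sketch:

SQL> create table t ( id number, pad varchar2(1000) )
  2  pctfree 10 compress for oltp;

SQL> insert into t
  2  select rownum, rpad('x',100) from dual connect by level <= 1000;

SQL> commit;

SQL> -- grow the rows so they no longer fit in place and migrate
SQL> update t set pad = rpad('x',1000);

SQL> commit;

SQL> analyze table t list chained rows into chained_rows;

SQL> select count(*) from chained_rows;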

Compression

Originally published Jan 2013

Red Gate have asked me to write a few articles for their Oracle site, so I’ve sent them a short series on “traditional” compression in Oracle – which means I won’t be mentioning Exadata hybrid columnar compression (HCC a.k.a. EHCC). There will be five articles, published at the rate of one per week starting Tuesday (15th Jan). I’ll be supplying links for them as they are published.

What can EHCC do for you?

By now you have probably heard about Exadata Hybrid Columnar Compression (EHCC), but what benefit can EHCC give you in terms of storage and performance savings?
 
As always, it depends on your data. Below I’ll share some of the test results I came across when testing EHCC. The data used for these tests is a short version of a fact table. The tests were performed on a quarter-rack Exadata Database Machine (2 database nodes with 16 cores each, and 3 storage servers).
 
As you may already have noticed, English is my second language, so please excuse me for spelling, grammar and whatever errors you may find in this post ;-)
 
- Test block compression against EHCC-compressed tables
First, let’s look at the time it took to create the test data and the compression ratio.
 
-- Non-compressed table
SQL> create table he100 PARALLEL 64 as select /*+ PARALLEL(64) */ * from he100_load_test;
Table created.
 
Elapsed: 00:00:33.97
Did not seem to be CPU-bound - CPU utilization was around 10-25%.
 
-- Block compressed table
SQL> create table he100_block compress for all operations PARALLEL 64
  2  as select /*+ PARALLEL(64) */ * from he100_load_test;
 
Elapsed: 00:00:28.51
CPU utilization was around 40-70%.
 
-- Table creation with EHCC query option