There was a little conversation on Oracle-L about ASH (active session history) recently, which I thought worth highlighting – partly because it raised a detail that I had got wrong until Tim Gorman corrected me a few years ago.
Once every second Oracle copies information about active sessions from v$session into the dynamic performance view v$active_session_history. (There are a couple of exceptions to this rule – for example, if a session has called dbms_lock.sleep() it will appear in v$session with status = 'ACTIVE', but it will not be recorded in v$active_session_history.) Each of these snapshots is referred to as a “sample” and may hold zero, one, or many rows.
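You can see the one-second cadence (and the “zero, one, or many rows” point) with a simple query – this is just my illustration, and remember that querying the view requires the Diagnostic Pack licence:

select
        sample_id, sample_time, count(*) active_sessions
from
        v$active_session_history
where
        sample_time > systimestamp - interval '1' minute
group by
        sample_id, sample_time
order by
        sample_id
;

Gaps in the sample_id sequence correspond to samples in which no session was active – which is how a sample comes to hold zero rows.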
Some time ago I wrote a blog note describing a hack for refreshing a large materialized view with minimum overhead by taking advantage of a single-partition partitioned table. This note describes how Oracle 12c now gives you an official way of doing something similar – the “out of place” refresh.
I’ll start by creating a materialized view and a couple of indexes on the resulting underlying table, then show you three different calls to refresh the view. The materialized view is based on all_objects so it can’t be made available for query rewrite (ORA-30354: Query rewrite not allowed on SYS relations), and I haven’t created any materialized view logs so there’s no question of fast refreshes – but all I intend to do here is show you the relative impact of a complete refresh.
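For reference, the out-of-place mechanism is invoked through a new parameter to dbms_mview.refresh – here’s a sketch of the kind of call involved (the view name is just a placeholder):

begin
        dbms_mview.refresh(
                list            => 'MV_TEST',   -- placeholder name
                method          => 'C',         -- complete refresh
                atomic_refresh  => false,       -- out-of-place refresh requires non-atomic
                out_of_place    => true         -- the new 12c option
        );
end;
/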
One of the waits that is specific to ASSM (automatic segment space management) is the “enq: FB - contention” wait. You find the following description and wait information for the “FB” enqueue when you query v$lock_type and v$event_name:
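The queries themselves are simple:

select  type, name, id1_tag, id2_tag, description
from    v$lock_type
where   type = 'FB'
;

select  name, parameter1, parameter2, parameter3
from    v$event_name
where   name = 'enq: FB - contention'
;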
When database flashback first appeared many years ago I commented (somewhere, but don’t ask me where) that it seemed like a very nice idea for full-scale test databases if you wanted to test the impact of changes to batch code, but I couldn’t really see it being a good idea for live production systems because of the overheads.
You all know that having more than 255 columns in a table is a Bad Thing™ – and surprisingly you don’t even have to get to 255 to hit the first bad thing about wide tables. If you’ve ever wondered what sorts of problems you can have, here are a few:
A recent thread on the OTN database forum supplied some code that seemed to show that In-memory DB made no difference to performance when compared with the traditional row-store mechanism, and asked why not. (It looked as if the answer was that almost all the time for the tests was spent returning the 3M-row result set to the SQL*Plus client 15 rows at a time.)
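That’s easy to check from SQL*Plus – this is my quick sanity test rather than anything from the thread:

set arraysize 500                     -- the default is 15, i.e. some 200,000 round-trips for 3M rows
set autotrace traceonly statistics    -- execute the query without rendering 3M rows of output

Re-running the query with these settings makes it obvious how much of the “database” time was really client and network time.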
The responses on the thread led to the question: Why would the in-memory (column-store) database be faster than simply having the (row-store) data fully cached in the buffer cache?
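One check worth making before any such comparison – my suggestion, not something from the thread – is to confirm that the segment really has been populated into the column store:

select
        owner, segment_name, populate_status,
        bytes, inmemory_size, bytes_not_populated
from
        v$im_segments
;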
Following on from a recent “check the space” posting, here’s another case of the code not reporting what you thought it would, prompted by a question on the OTN database forum about a huge space discrepancy in LOBs.
There’s a fairly well-known package called dbms_space that can give you a good idea of the space used by a segment stored in a tablespace that’s using automatic segment space management. But what are you supposed to think when a piece of code (written by Tom Kyte, no less) reports stats about your biggest LOB segment that are wildly different from the space you expected it to be using?
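A report like this is typically driven by dbms_space.space_usage – here’s a minimal sketch of the underlying call for an ASSM LOB segment (the segment name is a placeholder):

declare
        m_unformatted_blocks    number;         m_unformatted_bytes     number;
        m_fs1_blocks            number;         m_fs1_bytes             number;
        m_fs2_blocks            number;         m_fs2_bytes             number;
        m_fs3_blocks            number;         m_fs3_bytes             number;
        m_fs4_blocks            number;         m_fs4_bytes             number;
        m_full_blocks           number;         m_full_bytes            number;
begin
        dbms_space.space_usage(
                segment_owner      => user,
                segment_name       => 'MY_LOB_SEGMENT',         -- placeholder
                segment_type       => 'LOB',
                unformatted_blocks => m_unformatted_blocks,
                unformatted_bytes  => m_unformatted_bytes,
                fs1_blocks         => m_fs1_blocks, fs1_bytes => m_fs1_bytes,
                fs2_blocks         => m_fs2_blocks, fs2_bytes => m_fs2_bytes,
                fs3_blocks         => m_fs3_blocks, fs3_bytes => m_fs3_bytes,
                fs4_blocks         => m_fs4_blocks, fs4_bytes => m_fs4_bytes,
                full_blocks        => m_full_blocks,
                full_bytes         => m_full_bytes
        );
        dbms_output.put_line('Unformatted blocks: ' || m_unformatted_blocks);
        dbms_output.put_line('Full blocks:        ' || m_full_blocks);
end;
/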
One thing you (ought to) learn very early on in an Oracle career is that there are always cases you haven’t previously considered. It’s a feature that is frequently the downfall of “I found it on the internet” SQL. Here’s one (heavily paraphrased) example that appeared on the OTN database forum a few days ago:
select table_name, round((blocks*8),2)||'kb' "size" from user_tables where table_name = 'MYTABLE';

select table_name, round((num_rows*avg_row_len/1024),2)||'kb' "size" from user_tables where table_name = 'MYTABLE';
The result from the first query is 704KB, the result from the second is 25.4KB … followed by the usual chorus of fragmentation, rebuild, CTAS etc. etc.
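Neither result, of course, tells you how much space the table is actually using: the first query multiplies a (possibly stale) optimizer statistic by an assumed 8KB block size, while the second multiplies out the column data and ignores every block, row, and free-space overhead. If you want to know what’s really allocated you can ask the data dictionary directly – for example:

select  segment_name, segment_type, blocks, bytes/1024 size_kb
from    user_segments
where   segment_name = 'MYTABLE'
;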
For those who don’t read Oracle-L and haven’t found Nikolay Savvinov’s blog, here’s a little note pulling together a recent question on Oracle-L and a relevant (and probably unexpected) observation from that blog. The question (paraphrased) was:
The developers/data modelers are creating all the tables with varchar2(4000) columns as standard, “just in case we need it”. What do you think of this idea?
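If you want to test the impact for yourself, a minimal experiment – my sketch, not code from the thread or the blog – is to load identical data into two tables that differ only in their declared column lengths and compare the timing and session statistics:

set timing on

create table t_short(id number, v1 varchar2(10));
create table t_long (id number, v1 varchar2(4000));

-- identical data (10 characters per row) – only the declared lengths differ
insert into t_short select rownum, rpad('x',10,'x') from dual connect by level <= 100000;
insert into t_long  select rownum, rpad('x',10,'x') from dual connect by level <= 100000;

If the declared length really didn’t matter you’d expect near-identical figures from the two inserts.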