Oakies Blog Aggregator

Oracle Forms and Reports Services 11gR2 on Oracle Linux 6…

Last year I wrote an article about the installation of Oracle Forms and Reports Services 11gR2 on Oracle Linux 5. I’ve now written the article for Oracle Forms and Reports Services 11gR2 on Oracle Linux 6. The latest patch of F&RS is certified for OL6, along with JDK6 and JDK7.

In addition to the installation articles, I’ve compiled a collection of random notes about post-installation configuration into a separate article. I keep adding to it every time I come across a new (for me) issue.

I’m hoping this will stop me falling into the trap I did with AS10g, where I didn’t write down any of this stuff, assuming I would remember it, only to find I couldn’t remember Jack a few years later. :)

Cheers

Tim…

 


Oracle Forms and Reports Services 11gR2 on Oracle Linux 6… was first posted on March 23, 2013 at 2:45 pm.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

UltraEdit for Linux/Mac v4.0 Beta II

Hot on the heels of the recent UltraEdit v19 release for Windows, comes the UltraEdit v4 Beta II release for Linux/Mac.

I’ve just started using it and so far so good. They usually progress through the betas pretty quickly. I didn’t have time to install Beta I before this one dropped. :)

Cheers

Tim…


UltraEdit for Linux/Mac v4.0 Beta II was first posted on March 23, 2013 at 9:29 am.

LOB Update

This note is about a feature of LOBs that I first described in “Practical Oracle 8i” but have yet to see used in real life. It’s a description of how efficient Oracle can be, and I’ll start with a description of, and a selection from, a table:

create table test_lobs (
	id              number(5),
	bytes           number(38),
	text_content    clob
)
lob (text_content) store as text_lob(
	disable storage in row
	cache
)
;

-- insert a row

SQL> desc test_lobs
 Name                    Null?    Type
 ----------------------- -------- ----------------
 ID                               NUMBER(5)
 BYTES                            NUMBER(38)
 TEXT_CONTENT                     CLOB

SQL> select id, bytes, dbms_lob.getlength(text_content) from test_lobs;

        ID      BYTES DBMS_LOB.GETLENGTH(TEXT_CONTENT)
---------- ---------- --------------------------------
         1     365025                           365025

1 row selected.

I’ve got a table with a single CLOB column holding a single row. The size of the single CLOB is roughly 365KB (or about 45 blocks of 8KB). Old hands who have had to suffer LONG columns will recognise the trick of recording the size of a LONG as a separate column in the table; it’s a strategy that isn’t really necessary with LOBs but old coding habits die hard. It’s quite hard to find details of how much space has been used in a LOB segment (the space_usage procedure in the dbms_space package doesn’t allow you to examine LOBSEGMENTs), but I did a couple of block dumps to check on this LOBSEGMENT and it had allocated 46 blocks on the first insert.
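As a quick sanity check on those numbers, the block arithmetic can be sketched as follows (a rough calculation that ignores the per-chunk overhead Oracle actually imposes in each LOB block):

```python
import math

# A 365,025-byte CLOB stored out of row in 8KB blocks needs at least
# this many LOB data blocks (ignoring per-block chunk overhead).
lob_bytes = 365025
block_size = 8192

data_blocks = math.ceil(lob_bytes / block_size)
print(data_blocks)  # 45 -- the block dumps showed 46 blocks allocated
```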

So here’s the clever bit – how big will the LOBSEGMENT grow when I update that one CLOB ?

It’s common knowledge (to users of LOBs) that the undo mechanism Oracle has for LOBs is simply to leave the old LOB in place and create a new one – so the initial response to the question might be to guess that the LOBSEGMENT will grow to roughly double the size. But it doesn’t have to be like that, at least, not if you update the LOB the way I happen to have done, which is like this:

declare

	m_length	integer;
	m_lob		clob;

begin

	select
		text_content,
		dbms_lob.getlength(text_content)
	into	m_lob, m_length
	from
		test_lobs
	where
		id = 1
	for update
	;

	dbms_output.put_line('Lob size: ' || m_length);

	dbms_lob.write(
		lob_loc	=> m_lob,
		amount	=> 17,
		offset	=> 1,
		buffer	=> 'This is an update'
	);

	commit;

end;
/

My code very specifically changes only the first 17 bytes of the LOB. So how much does Oracle have to do to effect this change ? The LOB-handling mechanisms are smart enough to work out that only the first (of 45) blocks in the LOB need to be changed, so Oracle need only add one block to the segment and write the new version of the first LOB block to that one block. (In fact the segment – which was in a tablespace using freelist management – grew by the “standard” 5 blocks from which Oracle selected just one block to add to the LOB.)

So how does Oracle keep track of the whole LOB if it can change it one piece at a time ? This is where the (notionally invisible and you don’t need to know about it) LOBINDEX comes into play. Oracle maintains an index keyed by (LOB_ID, chunk_number) *** pointing to all the chunks of a LOB in order, so when you update a single chunk Oracle simply creates an updated copy of the chunk and changes the appropriate index entry to point to the new chunk. So here’s an image representing our one LOB value just after we’ve created it and before we’ve updated:

[Image: lob_1]

And then we “modify” the first chunk – which means we have to add a chunk (which in this case is a single block) to the segment, create a new version of the first chunk, modify the index to point to the new block, and add an index entry – keyed by time-stamp – to the end of the index to point to the old chunk; something like this:

[Image: lob_2]

Now, when we run a query to select the LOB, Oracle will follow the index entries in order and pick up the new chunk from the end of the LOBSEGMENT. But the LOBINDEX is protected by undo in the standard fashion, so if another long-running query that started before our update needs to see the old version of the LOB it will create a read-consistent copy of the relevant index leaf block – which means that from its perspective the index will automatically be pointing to the correct LOB chunk.

The index is actually quite an odd one because it serves two functions; apart from pointing to current lobs by chunk number, it also points to “previous” chunks by timestamp (specifically the number of seconds between Midnight of 1st Jan 1970 and the time at which the chunk was “overwritten”). This makes it easy for Oracle to deal with the retention interval for LOBs – any time it needs space in the LOBSEGMENT it need only find the minimum timestamp value in the index and compare it with “sysdate – retention” to see if there are any chunks available for re-use.
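The retention check described above can be sketched very loosely in Python (a conceptual model only; the names and structures here are invented for illustration):

```python
import time

def reusable_chunks(old_chunks, retention_seconds, now=None):
    """Return old chunk addresses whose overwrite timestamp has aged past
    the retention interval.  old_chunks maps overwrite-timestamp (seconds
    since 1st Jan 1970) -> chunk address, mirroring the "previous chunk"
    index entries described above."""
    if now is None:
        now = time.time()
    cutoff = now - retention_seconds
    return [addr for ts, addr in sorted(old_chunks.items()) if ts <= cutoff]

# Two chunks overwritten 900s and 100s ago; with a 600s retention only
# the older one is a candidate for re-use.
now = 1_000_000
print(reusable_chunks({now - 900: 'chunk A', now - 100: 'chunk B'}, 600, now))
# ['chunk A']
```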

To sum up – when you update LOBs (and it’s most beneficial if you have an application which does piece-wise updates) you leave a trail of old chunks in the LOBSEGMENT. The version of the LOB you see is dictated by the version of the index that you generate when you request a copy of the LOB at a given SCN.

 

*** Footnote: My description of the LOBINDEX was an approximation. Each index entry carries a fixed size “payload” listing up to eight lob chunks; so the (LOB_ID, chunk_number) index entries in a LOBINDEX may point to every 8th chunk in the LOB. The significance of the “fixed size” payload is that the payload can be modified in place if the pointer to a LOB chunk has to be changed – and this minimises disruption of the index (at a cost of some wasted space).
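Putting the pieces together, the pointer-swap mechanism can be modelled very loosely (one chunk per index entry, ignoring the eight-chunk payload packing and read consistency; all names here are invented for illustration):

```python
import time

# A toy model of the LOBINDEX pointer swap: updating a chunk allocates a
# new block at the end of the segment, repoints the chunk_no entry, and
# files the old block under a timestamp key for later re-use.
class LobIndexModel:
    def __init__(self, blocks):
        self.current = dict(enumerate(blocks))   # chunk_no -> block address
        self.old = {}                            # overwrite timestamp -> old block
        self.next_block = len(blocks)

    def update_chunk(self, chunk_no, ts=None):
        old_block = self.current[chunk_no]
        self.old[ts if ts is not None else time.time()] = old_block
        self.current[chunk_no] = self.next_block  # new copy at end of segment
        self.next_block += 1

lob = LobIndexModel(range(45))       # 45-block LOB as created above
lob.update_chunk(0, ts=100)          # "modify" the first chunk
print(lob.current[0], lob.old[100])  # 45 0 -- new block 45, old block 0 kept
```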

 

Where to begin with Oracle and SQL

I’m seeing more and more questions along the lines of “where do I start with Oracle if I want to be a DBA?”  My perspective is a bit off since I’ve been surrounded by Oracle for over 20 years.  I hardly remember what it was like to start with Oracle, and starting with Oracle now in 2013 is quite different from starting with Oracle in 1990.

Here is my list and everything on this list is excellent. I’m sure I missed a few good ones, but maybe people can add them in the comments.

Start with Oracle Docs, they are free and good!

Get the best books and read them

A bit old, but this is a great overview of Oracle: Practical Oracle 8i by Jonathan Lewis

After you have read the above, dive into the important details:

If you know SQL but want to get great at SQL performance, read these books

Join Online discussions and follow blogs

Create your own Oracle blog aggregator, here is mine: http://oracle.collected.info/

Go to conferences. First of all your local Oracle user groups and if you have the time and resources then go to the bigger conferences:

  • Hotsos
  • RMOUG
  • UKOUG

Listing Exadata storage cells and their configuration info from V$CELL_CONFIG

If you attended my Exadata hacking session today, you saw me using the cellver.sql script which lists some basic configuration info about the currently connected storage cells:

SQL> @exadata/cellver
Show Exadata cell versions from V$CELL_CONFIG....

CELLNAME             CELLSRV_VERSION      FLASH_CACHE_MODE     CPU_COUNT  UPTIME               KERNEL_VERSION                 MAKE_MODEL
-------------------- -------------------- -------------------- ---------- -------------------- ------------------------------ --------------------------------------------------
192.168.12.10        11.2.3.2.1           WriteBack            24         8 days, 2:07         2.6.32-400.11.1.el5uek         Oracle Corporation SUN FIRE X4270 M2 SERVER SAS
192.168.12.11        11.2.3.2.1           WriteBack            24         8 days, 2:06         2.6.32-400.11.1.el5uek         Oracle Corporation SUN FIRE X4270 M2 SERVER SAS
192.168.12.8         11.2.3.2.1           WriteBack            24         8 days, 2:06         2.6.32-400.11.1.el5uek         Oracle Corporation SUN FIRE X4270 M2 SERVER SAS
192.168.12.9         11.2.3.2.1           WriteBack            24         8 days, 2:06         2.6.32-400.11.1.el5uek         Oracle Corporation SUN FIRE X4270 M2 SERVER SAS

The output is pretty self-explanatory. One thing to note is that the CPU_COUNT is not the number of CPU cores, but just the number of “virtual” CPU threads presented to the OS, in this case 24 threads over 12 cores.

The script itself is simple, just extracting some XML values from the “CELL” type config records:

SQL> l
  1  SELECT
  2      cellname cv_cellname
  3    , CAST(extract(xmltype(confval), '/cli-output/cell/releaseVersion/text()') AS VARCHAR2(20))  cv_cellVersion
  4    , CAST(extract(xmltype(confval), '/cli-output/cell/flashCacheMode/text()') AS VARCHAR2(20))  cv_flashcachemode
  5    , CAST(extract(xmltype(confval), '/cli-output/cell/cpuCount/text()')       AS VARCHAR2(10))  cpu_count
  6    , CAST(extract(xmltype(confval), '/cli-output/cell/upTime/text()')         AS VARCHAR2(20))  uptime
  7    , CAST(extract(xmltype(confval), '/cli-output/cell/kernelVersion/text()')  AS VARCHAR2(30))  kernel_version
  8    , CAST(extract(xmltype(confval), '/cli-output/cell/makeModel/text()')      AS VARCHAR2(50))  make_model
  9  FROM
 10      v$cell_config  -- gv$ isn't needed, all cells should be visible in all instances
 11  WHERE
 12      conftype = 'CELL'
 13  ORDER BY
 14*     cv_cellname
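For what it’s worth, the same extraction can be sketched outside the database with any XML library. Here’s a Python sketch against a hand-made fragment whose shape is inferred from the XPath expressions above (the real confval document will contain more than this):

```python
import xml.etree.ElementTree as ET

# Hypothetical cli-output fragment, shaped as the XPaths in the SQL
# script suggest; the real V$CELL_CONFIG confval has more elements.
confval = """
<cli-output>
  <cell>
    <releaseVersion>11.2.3.2.1</releaseVersion>
    <flashCacheMode>WriteBack</flashCacheMode>
    <cpuCount>24</cpuCount>
  </cell>
</cli-output>
"""

root = ET.fromstring(confval)
print(root.findtext('cell/releaseVersion'))   # 11.2.3.2.1
print(root.findtext('cell/flashCacheMode'))   # WriteBack
```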

I will add some more scripts to the exadata directory over the coming days.

Enjoy! :)


OIC(A) again

Issues with the OICA/OIC (OPTIMIZER_INDEX_COST_ADJ/OPTIMIZER_INDEX_CACHING) parameters have been mentioned many times already. Recently I noticed one more, and since I don’t think I’ve seen this case described anywhere else on the Internet, I’ll share it.


Here’s a simple table T1 holding 1M rows with the indexed column X having 10 distinct values:

drop table t1 cascade constraints purge;
create table t1 (id, x, pad, constraint t1_pk primary key(id))
as
select rownum
     , mod(rownum, 10)
     , lpad('x', 200, 'x')
  from all_source s1, all_source s2
 where rownum <= 1e6;
 
create index t1_indx on t1(x);
exec dbms_stats.gather_table_stats(user, 't1', method_opt=>'for all columns size 1', cascade=>true, no_invalidate=>false)

I run multiple EXPLAIN PLANs for a query accessing T1 by a single X value using T1_INDX range scan. I know it’s a very inefficient way to access the data in this case, but it fits perfectly well for my purposes. I’m using different OICA/OIC settings with the help of OPT_PARAM hint and print the resulting IO cost for the query based on the PLAN_TABLE data:

delete plan_table;
commit;

set serveroutput on
declare
  l_query varchar2(4000);
begin
  for oica in 0..10 loop
    for oic in 0..10 loop
      l_query := 'explain plan set statement_id = ''' || oica*10 || ',' || oic*10 || ''' for ' ||
                  'select /*+ index_rs(t1 t1_indx) ' || 
                            ' opt_param(''optimizer_index_cost_adj'' ' || oica*10 || ') ' ||
                            ' opt_param(''optimizer_index_caching'' ' || oic*10 || ') */ * ' ||
                  '  from t1 ' ||
                  ' where x = :1';
--      dbms_output.put_line(l_query);
      execute immediate l_query;
    end loop;
  end loop;
end;
/

set numwidth 7

select *
  from (select to_number(substr(statement_id, 1, instr(statement_id, ',') - 1)) oica
              ,to_number(substr(statement_id, instr(statement_id, ',') + 1)) oic
              ,io_cost
          from plan_table
         where id = 0) 
  pivot(max(io_cost) for oic in (0  as oic_0
                                ,10 as oic_10
                                ,20 as oic_20
                                ,30 as oic_30
                                ,40 as oic_40
                                ,50 as oic_50
                                ,60 as oic_60
                                ,70 as oic_70
                                ,80 as oic_80
                                ,90 as oic_90
                                ,100 as oic_100))
 order by oica desc;

The results are expected:

   OICA   OIC_0  OIC_10  OIC_20  OIC_30  OIC_40  OIC_50  OIC_60  OIC_70  OIC_80  OIC_90 OIC_100
------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- -------
    100   30499   30499   30499   30499   30499   30499   30499   30499   30499   30499   30499
     90   27449   27449   27449   27449   27449   27449   27449   27449   27449   27449   27449
     80   24399   24399   24399   24399   24399   24399   24399   24399   24399   24399   24399
     70   21349   21349   21349   21349   21349   21349   21349   21349   21349   21349   21349
     60   18299   18299   18299   18299   18299   18299   18299   18299   18299   18299   18299
     50   15250   15250   15250   15250   15250   15250   15250   15250   15250   15250   15250
     40   12200   12200   12200   12200   12200   12200   12200   12200   12200   12200   12200
     30    9150    9150    9150    9150    9150    9150    9150    9150    9150    9150    9150
     20    6100    6100    6100    6100    6100    6100    6100    6100    6100    6100    6100
     10    3050    3050    3050    3050    3050    3050    3050    3050    3050    3050    3050
      0       1       1       1       1       1       1       1       1       1       1       1

Since the table is around 30K blocks and the data associated with any given value of X is spread across all of the table blocks (i.e. the data is poorly clustered), I’m expecting the query cost to be around the number of table blocks, with the table access being the major contributor – and that’s true by default:

---------------------------------------------------------------------------------------
| Id  | Operation                   | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |         |   100K|    19M| 30511   (1)| 00:06:07 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1      |   100K|    19M| 30511   (1)| 00:06:07 |
|*  2 |   INDEX RANGE SCAN          | T1_INDX |   100K|       |   197   (1)| 00:00:03 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("X"=TO_NUMBER(:1))

With the changes of OICA, the query cost changes almost linearly, except for some rounding error. Changes to the OIC do not impact query cost, as the documentation suggests:

OPTIMIZER_INDEX_CACHING lets you adjust the behavior of cost-based optimization to favor nested loops joins and IN-list iterators.

And I have neither an NLJ nor an IN-list iterator here.
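As a cross-check, the OICA column of the table above can be reproduced with trivial arithmetic – a sketch assuming the default I/O cost of 30499 and round-to-nearest (the CBO’s exact rounding may differ slightly):

```python
# I/O cost scales linearly with OICA: default cost * OICA/100, rounded,
# with a floor of 1 (assumed from the OICA=0 row in the table above).
base_cost = 30499  # default I/O cost from the first results table

def scaled_cost(oica):
    return max(1, round(base_cost * oica / 100))

print([scaled_cost(oica) for oica in range(0, 101, 10)])
# [1, 3050, 6100, 9150, 12200, 15250, 18299, 21349, 24399, 27449, 30499]
```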

Now let’s change the table layout: make it index-organized, then run the same PL/SQL test and the same PLAN_TABLE query:

drop table t1 cascade constraints purge;
create table t1 (id, x, pad, constraint t1_pk primary key(id))
organization index
as
select rownum
     , mod(rownum, 10)
     , lpad('x', 200, 'x')
  from all_source s1, all_source s2
 where rownum <= 1e6;
 
create index t1_indx on t1(x);
exec dbms_stats.gather_table_stats(user, 't1', method_opt=>'for all columns size 1', cascade=>true, no_invalidate=>false)
   OICA   OIC_0  OIC_10  OIC_20  OIC_30  OIC_40  OIC_50  OIC_60  OIC_70  OIC_80  OIC_90 OIC_100
------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- -------
    100   30582     255     226     198     170     142     113      85      57      29   30304
     90   27524     230     203     178     153     128     102      77      51      26   27274
     80   24466     204     181     158     136     114      90      68      46      23   24243
     70   21407     179     158     139     119      99      79      60      40      20   21213
     60   18349     153     136     119     102      85      68      51      34      17   18182
     50   15291     128     113      99      85      71      57      43      29      15   15152
     40   12233     102      90      79      68      57      45      34      23      12   12122
     30    9175      77      68      59      51      43      34      26      17       9    9091
     20    6116      51      45      40      34      28      23      17      11       6    6061
     10    3058      26      23      20      17      14      11       9       6       3    3030
      0       1       1       1       1       1       1       1       1       1       1       1

This time the results are amusing. As long as the OIC doesn’t change, everything goes pretty much as it did for the heap-organized table, with the query cost being a little bit higher – that’s OK, as secondary indexes based on logical rowids may be slightly less efficient than indexes on heap tables.
When the OIC changes, things get crazy. By modifying the CBO’s index caching assumption to a mere 10%, the query cost drops by two orders of magnitude! This is a tremendous shift in the costing, which has serious consequences for legacy applications trying to utilize the advantages of IOTs.
It looks like the issue was introduced by bug fix 5236908, which is described in V$SESSION_FIX_CONTROL as “Allow _optimizer_index_caching to affect IOT primary keys” and was introduced in 10.2.0.4. They probably did it wrong: they’ve discounted the unique scan completely, thus dropping a major part of the cost in some cases:

SQL> explain plan for select /*+ index(t1 t1_indx) opt_param('optimizer_index_caching' 10) */ * from t1 where x = :1;

Explained.

SQL> @xp

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------------------------
Plan hash value: 728231638

-----------------------------------------------------------------------------
| Id  | Operation         | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |         |   100K|    19M|   257   (1)| 00:00:04 |
|*  1 |  INDEX UNIQUE SCAN| T1_PK   |   100K|    19M|   257   (1)| 00:00:04 |
|*  2 |   INDEX RANGE SCAN| T1_INDX |   100K|       |   256   (1)| 00:00:04 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("X"=TO_NUMBER(:1))
   2 - access("X"=TO_NUMBER(:1))

SQL> explain plan for select /*+ index(t1 t1_indx) opt_param('optimizer_index_caching' 10) */ * from t1 where id in (:1);

Explained.

SQL> @xp

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------------------------
Plan hash value: 426392120

---------------------------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |       |     1 |   209 |     0   (0)| 00:00:01 |
|*  1 |  INDEX UNIQUE SCAN| T1_PK |     1 |   209 |     0   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("ID"=TO_NUMBER(:1))

SQL> explain plan for select /*+ index(t1 t1_indx) opt_param('optimizer_index_caching' 10) */ * from t1 where id in (:1, :2);

Explained.

SQL> @xp

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------------------
Plan hash value: 1529266855

----------------------------------------------------------------------------
| Id  | Operation          | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |       |     2 |   418 |     2   (0)| 00:00:01 |
|   1 |  INLIST ITERATOR   |       |       |       |            |          |
|*  2 |   INDEX UNIQUE SCAN| T1_PK |     2 |   418 |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("ID"=TO_NUMBER(:1) OR "ID"=TO_NUMBER(:2))

The bug fix can’t be turned off with _fix_control for some reason, and I don’t have an old Oracle database to run the test on, so I can’t confirm whether the bug fix is the culprit or not. Anyway, it looks stupid. I’ll try to submit a bug report for this issue.

Filed under: CBO, Oracle Tagged: 10.2.0.3, indexes, IOT, OIC, OICA

Join Jonathan Lewis for a discussion of Delphix


Kicking the Tires on Delphix

Live webcast: Jonathan Lewis  – An Oracle Expert’s Look at the Delphix Technology

Jonathan Lewis came out to the Delphix offices in Menlo Park, CA to kick the tires on our product for a week.

Join Jonathan Lewis and me for a discussion and sharing of first impressions of Delphix. Jonathan worked closely with the Delphix team investigating performance, comparing technologies, and testing community-requested use cases. In this informal chat, Jonathan and I will discuss the preliminary results of his first-hand experience with Delphix: the functional aims, implementation strategies, and underlying technologies. As part of his hands-on activity, Jonathan started with the simple provisioning benefits of Delphix, and went on to look at the performance implications of various scenarios, including test cases suggested by the Oracle community. For a glimpse into what will be discussed, see Jonathan’s blog post on his visit to Delphix.

 

 

 

Related blog posts on Jonathan’s blog

 

Virtual IOUG Collaborate 13 for Only $299

For only $299, you can access Virtual IOUG Collaborate 13 individually or set up a conference room at your company for the whole team. Two tracks will be broadcast, so if you have demand for both, it makes sense to purchase two access passes and set up two parallel broadcasts in your office so that members of your team can individually choose what to attend. I think that’s a steal even if you purchase this package individually.

Virtual attendance doesn’t completely replace a physical presence, but if you can’t come to Denver in April, this is the next best thing. Relationships that you gain from meeting your peers and speakers face to face are extremely valuable, and you will carry them throughout your whole career. I was recently discussing with Rene Antunez (who has just joined Pythian) how social media and Twitter have revolutionized our professional networking, and we agreed that things like Twitter alone won’t cut it for many of us. While some folks only know most of their peers online, the real relationships are established face to face: discussing a presentation with the speaker during lunch, sharing a drink at the reception, or having a heated discussion in a pub near the convention center. With such relationships, social media becomes a natural extended communication channel.

But I digress… So those of you who can’t come to Denver for one reason or another, do consider getting the Virtual IOUG Collaborate 2013 Pass and enjoy the sessions in the tracks “High Availability, Disaster Recovery, Manageability” and “Performance, Scalability and Internals”. Here are some of the speakers you would have a chance to see over the 4 days of session broadcasts:

  • Kuassi Mensah, Oracle
  • John Beresniewicz, Oracle
  • Michael Abbey, Pythian
  • Yury Velikanov, Pythian
  • Craig Shallahamer, OraPub
  • Karl Arao, Enkitec
  • Carlos Sierra, Oracle
  • Mark W Farnham, Rightsizing
  • Guy Harrison, Dell
  • Frits Hoogland, VX Company
  • Tim Gorman, Evergreen Database Technologies
  • Kyle Hailey, Delphix
  • Gwen Shapira, Pythian

My favorite virtual track is “Performance, Scalability and Internals”, but I will be at the conference physically so I will also be attending lots of sessions on Big Data and Data Science. By the way, as a virtual attendee, you will get access to the recordings as well, so even if you miss certain live sessions, you can watch them later at your convenience.

If your team is dispersed geographically, each member would need a pass to watch the broadcast live but, again, I still think it’s a steal at $299!

I’m looking forward to seeing some of you in Denver and some of you on Twitter talking about what you saw at the virtual conference. If you aren’t coming to Denver and aren’t going to be a virtual attendee, tell me why and what else we can do to plug you into Collaborate. I will relay your message to the IOUG Board of Directors and the Conference Committee.

Next OakTable 2012 video is online

Hi,

This time it’s Alex Gorbachev’s presentation about ASM failures – more videos are on the way.



regards,
Marcin