Oakies Blog Aggregator

Where to begin with Oracle and SQL

I'm seeing more and more questions along the lines of "where do I start with Oracle if I want to be a DBA?" My perspective is a bit skewed since I've been surrounded by Oracle for over 20 years. I hardly remember what it was like to start with Oracle, and starting with Oracle now in 2013 is quite different from starting with Oracle in 1990.

Here is my list; everything on it is excellent. I'm sure I missed a few good ones, but maybe people can add them in the comments.

Start with the Oracle docs; they are free and good!

Get the best books and read them

A bit old, but this is a great overview of Oracle: Practical Oracle 8i by Jonathan Lewis

After you have read the above, dive into the important details:

If you know SQL but want to get great at SQL performance, read these books

Join online discussions and follow blogs

Create your own Oracle blog aggregator; here is mine: http://oracle.collected.info/

Go to conferences. Start with your local Oracle user groups, and if you have the time and resources, go to the bigger conferences:

  • Hotsos
  • RMOUG
  • UKOUG

Listing Exadata storage cells and their configuration info from V$CELL_CONFIG

If you attended my Exadata hacking session today, you saw me using the cellver.sql script, which lists some basic configuration info about the currently connected storage cells:

SQL> @exadata/cellver
Show Exadata cell versions from V$CELL_CONFIG....

CELLNAME             CELLSRV_VERSION      FLASH_CACHE_MODE     CPU_COUNT  UPTIME               KERNEL_VERSION                 MAKE_MODEL
-------------------- -------------------- -------------------- ---------- -------------------- ------------------------------ --------------------------------------------------
192.168.12.10        11.2.3.2.1           WriteBack            24         8 days, 2:07         2.6.32-400.11.1.el5uek         Oracle Corporation SUN FIRE X4270 M2 SERVER SAS
192.168.12.11        11.2.3.2.1           WriteBack            24         8 days, 2:06         2.6.32-400.11.1.el5uek         Oracle Corporation SUN FIRE X4270 M2 SERVER SAS
192.168.12.8         11.2.3.2.1           WriteBack            24         8 days, 2:06         2.6.32-400.11.1.el5uek         Oracle Corporation SUN FIRE X4270 M2 SERVER SAS
192.168.12.9         11.2.3.2.1           WriteBack            24         8 days, 2:06         2.6.32-400.11.1.el5uek         Oracle Corporation SUN FIRE X4270 M2 SERVER SAS

The output is pretty self-explanatory. One thing to note is that CPU_COUNT is not the number of CPU cores, but the number of "virtual" CPU threads presented to the OS; in this case, 24 threads on 12 cores.
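
As a side note, the database nodes expose the same threads-versus-cores distinction through V$OSSTAT, which can be handy for cross-checking. This is just a quick sanity check run on a database instance, not part of the cell script (the statistic names below are the standard ones, but verify them on your version):

-- Threads vs. cores as seen by the OS of the database node (not the cells)
SELECT stat_name, value
  FROM v$osstat
 WHERE stat_name IN ('NUM_CPUS', 'NUM_CPU_CORES', 'NUM_CPU_SOCKETS');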

The script itself is simple; it just extracts some XML values from the "CELL" type config records:

SQL> l
  1  SELECT
  2      cellname cv_cellname
  3    , CAST(extract(xmltype(confval), '/cli-output/cell/releaseVersion/text()') AS VARCHAR2(20))  cv_cellVersion
  4    , CAST(extract(xmltype(confval), '/cli-output/cell/flashCacheMode/text()') AS VARCHAR2(20))  cv_flashcachemode
  5    , CAST(extract(xmltype(confval), '/cli-output/cell/cpuCount/text()')       AS VARCHAR2(10))  cpu_count
  6    , CAST(extract(xmltype(confval), '/cli-output/cell/upTime/text()')         AS VARCHAR2(20))  uptime
  7    , CAST(extract(xmltype(confval), '/cli-output/cell/kernelVersion/text()')  AS VARCHAR2(30))  kernel_version
  8    , CAST(extract(xmltype(confval), '/cli-output/cell/makeModel/text()')      AS VARCHAR2(50))  make_model
  9  FROM
 10      v$cell_config  -- gv$ isn't needed, all cells should be visible in all instances
 11  WHERE
 12      conftype = 'CELL'
 13  ORDER BY
 14*     cv_cellname
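
The same pattern extends easily to other attributes in the cell config XML. Here is a sketch of a variant; I'm assuming the XML also carries status and temperatureReading elements under /cli-output/cell, so inspect a raw CONFVAL on your system before relying on these exact paths:

SELECT
    cellname cv_cellname
  , CAST(extract(xmltype(confval), '/cli-output/cell/status/text()')             AS VARCHAR2(10)) cv_status
  , CAST(extract(xmltype(confval), '/cli-output/cell/temperatureReading/text()') AS VARCHAR2(10)) cv_temp
FROM
    v$cell_config
WHERE
    conftype = 'CELL'
ORDER BY
    cv_cellname;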

I will add some more scripts to the exadata directory over the coming days.

Enjoy! :)


OIC(A) again

Issues with the OICA/OIC (OPTIMIZER_INDEX_COST_ADJ/OPTIMIZER_INDEX_CACHING) parameters have been covered many times. Recently I noticed one more, and since I don't think I've seen this case described anywhere else on the Internet, I'll share it.


Here’s a simple table T1 holding 1M rows with the indexed column X having 10 distinct values:

drop table t1 cascade constraints purge;
create table t1 (id, x, pad, constraint t1_pk primary key(id))
as
select rownum
     , mod(rownum, 10)
     , lpad('x', 200, 'x')
  from all_source s1, all_source s2
 where rownum <= 1e6;
 
create index t1_indx on t1(x);
exec dbms_stats.gather_table_stats(user, 't1', method_opt=>'for all columns size 1', cascade=>true, no_invalidate=>false)

I run EXPLAIN PLAN repeatedly for a query accessing T1 through a T1_INDX range scan on a single value of X. I know it's a very inefficient way to access the data in this case, but it fits my purposes perfectly. I use different OICA/OIC settings via the OPT_PARAM hint and then print the resulting I/O cost of each query from the PLAN_TABLE data:

delete plan_table;
commit;

set serveroutput on
declare
  l_query varchar2(4000);
begin
  for oica in 0..10 loop
    for oic in 0..10 loop
      l_query := 'explain plan set statement_id = ''' || oica*10 || ',' || oic*10 || ''' for ' ||
                  'select /*+ index_rs(t1 t1_indx) ' || 
                            ' opt_param(''optimizer_index_cost_adj'' ' || oica*10 || ') ' ||
                            ' opt_param(''optimizer_index_caching'' ' || oic*10 || ') */ * ' ||
                  '  from t1 ' ||
                  ' where x = :1';
--      dbms_output.put_line(l_query);
      execute immediate l_query;
    end loop;
  end loop;
end;
/

set numwidth 7

select *
  from (select to_number(substr(statement_id, 1, instr(statement_id, ',') - 1)) oica
              ,to_number(substr(statement_id, instr(statement_id, ',') + 1)) oic
              ,io_cost
          from plan_table
         where id = 0) 
  pivot(max(io_cost) for oic in (0  as oic_0
                                ,10 as oic_10
                                ,20 as oic_20
                                ,30 as oic_30
                                ,40 as oic_40
                                ,50 as oic_50
                                ,60 as oic_60
                                ,70 as oic_70
                                ,80 as oic_80
                                ,90 as oic_90
                                ,100 as oic_100))
 order by oica desc;

The results are expected:

   OICA   OIC_0  OIC_10  OIC_20  OIC_30  OIC_40  OIC_50  OIC_60  OIC_70  OIC_80  OIC_90 OIC_100
------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- -------
    100   30499   30499   30499   30499   30499   30499   30499   30499   30499   30499   30499
     90   27449   27449   27449   27449   27449   27449   27449   27449   27449   27449   27449
     80   24399   24399   24399   24399   24399   24399   24399   24399   24399   24399   24399
     70   21349   21349   21349   21349   21349   21349   21349   21349   21349   21349   21349
     60   18299   18299   18299   18299   18299   18299   18299   18299   18299   18299   18299
     50   15250   15250   15250   15250   15250   15250   15250   15250   15250   15250   15250
     40   12200   12200   12200   12200   12200   12200   12200   12200   12200   12200   12200
     30    9150    9150    9150    9150    9150    9150    9150    9150    9150    9150    9150
     20    6100    6100    6100    6100    6100    6100    6100    6100    6100    6100    6100
     10    3050    3050    3050    3050    3050    3050    3050    3050    3050    3050    3050
      0       1       1       1       1       1       1       1       1       1       1       1

The table is around 30K blocks, and the rows for any given value of X are spread across all of the table blocks, so the data is poorly clustered. I therefore expect the query cost to be close to the number of table blocks, with the table access being the major contributor, and that's exactly what happens by default:

---------------------------------------------------------------------------------------
| Id  | Operation                   | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |         |   100K|    19M| 30511   (1)| 00:06:07 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1      |   100K|    19M| 30511   (1)| 00:06:07 |
|*  2 |   INDEX RANGE SCAN          | T1_INDX |   100K|       |   197   (1)| 00:00:03 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("X"=TO_NUMBER(:1))
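
If you want to see where the ~30K figure comes from, recall the classic index range scan costing: cost ≈ blevel + ceil(leaf_blocks × sel) + ceil(clustering_factor × sel), with sel = 1/10 here. A quick sanity-check sketch against the dictionary stats (the formula is the well-known textbook one, not something specific to this test):

SELECT i.blevel
     , CEIL(i.leaf_blocks * 0.1)       leaf_cost    -- sel = 1/10 distinct values of X
     , CEIL(i.clustering_factor * 0.1) table_cost   -- dominates when data is poorly clustered
     , t.blocks                        table_blocks
  FROM user_indexes i, user_tables t
 WHERE i.index_name = 'T1_INDX'
   AND t.table_name = 'T1';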

As OICA changes, the query cost scales almost linearly, give or take rounding (for example, at OICA=40 the cost is 30499 × 0.4 = 12199.6, which rounds to 12200). Changes to OIC do not impact the query cost, which matches the documentation:

OPTIMIZER_INDEX_CACHING lets you adjust the behavior of cost-based optimization to favor nested loops joins and IN-list iterators.

And I have neither a nested loops join nor an IN-list iterator here.

Now let's change the table layout: make the table index-organized and run the same PL/SQL test and PLAN_TABLE query:

drop table t1 cascade constraints purge;
create table t1 (id, x, pad, constraint t1_pk primary key(id))
organization index
as
select rownum
     , mod(rownum, 10)
     , lpad('x', 200, 'x')
  from all_source s1, all_source s2
 where rownum <= 1e6;
 
create index t1_indx on t1(x);
exec dbms_stats.gather_table_stats(user, 't1', method_opt=>'for all columns size 1', cascade=>true, no_invalidate=>false)

And here are the results:

   OICA   OIC_0  OIC_10  OIC_20  OIC_30  OIC_40  OIC_50  OIC_60  OIC_70  OIC_80  OIC_90 OIC_100
------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- -------
    100   30582     255     226     198     170     142     113      85      57      29   30304
     90   27524     230     203     178     153     128     102      77      51      26   27274
     80   24466     204     181     158     136     114      90      68      46      23   24243
     70   21407     179     158     139     119      99      79      60      40      20   21213
     60   18349     153     136     119     102      85      68      51      34      17   18182
     50   15291     128     113      99      85      71      57      43      29      15   15152
     40   12233     102      90      79      68      57      45      34      23      12   12122
     30    9175      77      68      59      51      43      34      26      17       9    9091
     20    6116      51      45      40      34      28      23      17      11       6    6061
     10    3058      26      23      20      17      14      11       9       6       3    3030
      0       1       1       1       1       1       1       1       1       1       1       1

This time the results are amusing. As long as OIC stays at its default, everything behaves much as it did for the heap-organized table, with the query cost being a little higher; that's fine, since secondary indexes based on logical rowids may be slightly less efficient than indexes on heap tables.
Once OIC changes, however, things get crazy: setting the CBO's index caching assumption to a mere 10% drops the query cost by two orders of magnitude! This is a tremendous shift in the costing, with serious consequences for legacy applications trying to exploit the advantages of IOTs.
It looks like the issue was introduced by bug fix 5236908, which is described in V$SESSION_FIX_CONTROL as "Allow _optimizer_index_caching to affect IOT primary keys" and appeared in 10.2.0.4. They probably did it wrong: the INDEX UNIQUE SCAN is discounted completely, which drops a major part of the cost in some cases:

SQL> explain plan for select /*+ index(t1 t1_indx) opt_param('optimizer_index_caching' 10) */ * from t1 where x = :1;

Explained.

SQL> @xp

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------------------------
Plan hash value: 728231638

-----------------------------------------------------------------------------
| Id  | Operation         | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |         |   100K|    19M|   257   (1)| 00:00:04 |
|*  1 |  INDEX UNIQUE SCAN| T1_PK   |   100K|    19M|   257   (1)| 00:00:04 |
|*  2 |   INDEX RANGE SCAN| T1_INDX |   100K|       |   256   (1)| 00:00:04 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("X"=TO_NUMBER(:1))
   2 - access("X"=TO_NUMBER(:1))

SQL> explain plan for select /*+ index(t1 t1_indx) opt_param('optimizer_index_caching' 10) */ * from t1 where id in (:1);

Explained.

SQL> @xp

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------------------------
Plan hash value: 426392120

---------------------------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |       |     1 |   209 |     0   (0)| 00:00:01 |
|*  1 |  INDEX UNIQUE SCAN| T1_PK |     1 |   209 |     0   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("ID"=TO_NUMBER(:1))

SQL> explain plan for select /*+ index(t1 t1_indx) opt_param('optimizer_index_caching' 10) */ * from t1 where id in (:1, :2);

Explained.

SQL> @xp

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------------------
Plan hash value: 1529266855

----------------------------------------------------------------------------
| Id  | Operation          | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |       |     2 |   418 |     2   (0)| 00:00:01 |
|   1 |  INLIST ITERATOR   |       |       |       |            |          |
|*  2 |   INDEX UNIQUE SCAN| T1_PK |     2 |   418 |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("ID"=TO_NUMBER(:1) OR "ID"=TO_NUMBER(:2))
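
For reference, here is how I'd look the fix up and how one would normally try to switch it off. The V$SESSION_FIX_CONTROL query and the _fix_control syntax are the standard ones; whether the ALTER SESSION actually changes the costing is exactly what is in question here:

-- Look up the fix and the release it appeared in
SELECT bugno, value, description, optimizer_feature_enable
  FROM v$session_fix_control
 WHERE bugno = 5236908;

-- The usual way to disable a fix at session level
ALTER SESSION SET "_fix_control" = '5236908:off';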

For some reason the bug fix can't be turned off with _fix_control, and I don't have an older Oracle database to run the test on, so I can't confirm whether the bug fix really is the culprit. Either way, it looks wrong to me. I'll try to submit a bug report for this issue.

Filed under: CBO, Oracle Tagged: 10.2.0.3, indexes, IOT, OIC, OICA

Join Jonathan Lewis for a discussion of Delphix


Kicking the Tires on Delphix

Live webcast: Jonathan Lewis – An Oracle Expert's Look at the Delphix Technology

Jonathan Lewis came out to the Delphix offices in Menlo Park, CA to kick the tires on our product for a week.

Join Jonathan Lewis and me for a discussion of first impressions of Delphix. Jonathan worked closely with the Delphix team investigating performance, comparing technologies, and testing community-requested use cases. In this chat, Jonathan and I will have an informal discussion of the preliminary results of his first-hand experience with Delphix, gained working closely with our team to learn about the functional aims, implementation strategies, and underlying technologies. In his hands-on work, Jonathan started with the simple provisioning benefits of Delphix and went on to look at the performance implications of various scenarios, including test cases suggested by the Oracle community. For a glimpse into what will be discussed, see Jonathan's blog post on his visit to Delphix.

Related posts on Jonathan's blog

Virtual IOUG Collaborate 13 for Only $299


For only $299, you can access Virtual IOUG Collaborate 13 individually or set up a conference room at your company for the whole team. Two tracks will be broadcast, so if you have demand for both, it makes sense to purchase two access passes and show the tracks in parallel in two rooms so that members of your team can individually choose what to attend. I think that's a steal even if you purchase this package just for yourself.

Virtual attendance doesn't completely replace physical presence, but if you can't come to Denver in April, it's the next best thing. The relationships you gain from meeting your peers and speakers face to face are extremely valuable, and you will carry them throughout your whole career. I was recently discussing with Rene Antunez (who has just joined Pythian) how social media and Twitter have revolutionized our professional networking, and we agreed that things like Twitter alone won't cut it for many of us. While some folks know most of their peers only online, the real relationships are established face to face: discussing a presentation with the speaker during lunch, sharing a drink at the reception, or having a heated discussion in a pub near the convention center. With such relationships, social media becomes a natural extended communication channel.

But I digress… So those of you who can't come to Denver for one reason or another, do consider getting a Virtual IOUG Collaborate 2013 pass and enjoy the sessions in the "High Availability, Disaster Recovery, Manageability" and "Performance, Scalability and Internals" tracks. Here are some of the speakers you'll have a chance to see over the four days of session broadcasts:

  • Kuassi Mensah, Oracle
  • John Beresniewicz, Oracle
  • Michael Abbey, Pythian
  • Yury Velikanov, Pythian
  • Craig Shallahamer, OraPub
  • Karl Arao, Enkitec
  • Carlos Sierra, Oracle
  • Mark W Farnham, Rightsizing
  • Guy Harrison, Dell
  • Frits Hoogland, VX Company
  • Tim Gorman, Evergreen Database Technologies
  • Kyle Hailey, Delphix
  • Gwen Shapira, Pythian

My favorite virtual track is "Performance, Scalability and Internals", but I will be at the conference physically, so I will also attend lots of sessions on Big Data and Data Science. By the way, as a virtual attendee you get access to the recordings as well, so even if you miss certain live sessions, you can watch them later at your convenience.

If your team is dispersed geographically, each member will need a pass to watch the broadcast live but, again, I still think it's a steal at $299!

I'm looking forward to seeing some of you in Denver, and some of you on Twitter talking about what you saw at the virtual conference. If you aren't coming to Denver and aren't going to be a virtual attendee, tell me why and what else we can do to plug you into Collaborate. I will relay your message to the IOUG Board of Directors and the Conference Committee.

Next OakTable 2012 video is online

Hi,

This time it's Alex Gorbachev's presentation about ASM failures; more videos are on the way.



regards,
Marcin


Share, share, share…

Sometimes there are days when I need to remember why…

Signed on the dotted line: Enkitec!

I had to think of @OyvindIsene, a great ambassador of the Norwegian Oracle User Group, when I typed the heading for this post. Unlike him, I have not been actively looking for new challenges, but sometimes things just develop, and in my case that was a great turn of events. I am very happy to have signed on the dotted line; in a couple of weeks I will join Enkitec in Europe.

How did that happen? During a break in the busy schedule of an Oracle conference, I met Andy Colvin and some of his colleagues. I already knew and respected Enkitec as a great company with lots of seriously experienced DBAs, and I feel fortunate to know some of them already from email and other social media exchanges.

Andy and I had exchanged a few tweets in the past and I really like his blog, so I was curious to meet him in person. I haven't yet had the opportunity to speak at an American conference, so whenever someone I know from the other side of the Atlantic comes to Europe, I try to meet up. I had a great time but unfortunately had to run because my own talk started a few minutes later. It was quite funny, actually, although I'm not sure my presentation lived up to my own expectations. The conversations I had made a lasting impression on me.

Over the course of the next few months we remained in contact, and a little later I had the great pleasure of meeting Kerry Osborne together with Andy; that was when I seriously started thinking about joining a team I truly admire. Now, with all the paperwork done and dusted and the contract signed, I can't wait to get started.