Oakies Blog Aggregator

VirtualBox 5.0.14

VirtualBox 5.0.14 has been born.

Downloads and changelog are in the usual places.

I’ve not done the installation on Linux yet, but it installed and seems to work fine on Windows 7 and Mac OS X (El Crapitan).

Cheers

Tim…


VirtualBox 5.0.14 was first posted on January 20, 2016 at 9:22 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

Converting LONG to CLOB

Some folks are still stuck with LONG columns and are keen to move to LOBs. Since version 9, we've had a nice facility to do that with a simple ALTER command.

You can now simply issue "alter table T modify (longcol clob)" to perform the conversion. This is a neat tool, but be aware of the space implications before attempting a conversion.

Here is a simple example to demonstrate:



-- 
-- First a very small tablespace so we can monitor temporary space requirements
--

SQL> create tablespace DEMO datafile 'C:\ORACLE\ORADATA\NP12\DEMO01.DBF' size 1m
  2  autoextend on next 256k
  3  extent management local uniform size 128k;

Tablespace created.

SQL> alter user mcdonac default tablespace DEMO;

User altered.


-- 
-- Now a basic table with a LONG, we'll load some data from DBA_VIEWS
--

SQL> create table testlong ( text long );

Table created.

SQL> declare
  2      p varchar2(32767);
  3  begin
  4   for j in 1 .. 10 loop
  5     for i in ( select text from dba_views where text_length < 32500 ) loop
  6        p := i.text;
  7       insert into testlong values (p);
  8     end loop;
  9     commit;
 10   end loop;
 11  end;
 12  /

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.gather_table_stats('','TESTLONG')

PL/SQL procedure successfully completed.

SQL> col bytes format 999,999,999,999
SQL> col name format a60
SQL> set lines 120
SQL> select bytes, name
  2  from v$datafile
  3  where name like '%DEMO01%';

           BYTES NAME
---------------- ------------------------------------------------------------
      80,216,064 C:\ORACLE\ORADATA\NP12\DEMO01.DBF


SQL> select bytes from user_segments
  2  where tablespace_name = 'DEMO';

           BYTES
----------------
      80,084,992

SQL> select NUM_ROWS,BLOCKS,EMPTY_BLOCKS,AVG_ROW_LEN,CHAIN_CNT
  2  from user_tables
  3  where table_name = 'TESTLONG';

  NUM_ROWS     BLOCKS EMPTY_BLOCKS AVG_ROW_LEN  CHAIN_CNT
---------- ---------- ------------ ----------- ----------
     68130       9633            0           0          0


So as you can see, the table in its current form is using about 80M of space, and the tablespace has grown to match. Now we convert that column to a CLOB.



SQL> alter table TESTLONG modify ( text clob);

Table altered.

SQL> col segment_name format a30

SQL> select segment_name, bytes
  2  from user_segments
  3  where tablespace_name = 'DEMO';

SEGMENT_NAME                              BYTES
------------------------------ ----------------
TESTLONG                             49,938,432
SYS_IL0000105876C00001$$                131,072
SYS_LOB0000105876C00001$$            42,074,112


So we have some additional segments (as we would expect) and the amount of space used has grown by about 15%. This seems pretty reasonable. But take a look at the tablespace:


SQL> select bytes, name
  2  from v$datafile
  3  where name like '%DEMO%';

           BYTES NAME
---------------- ------------------------------------------------------------
     172,490,752 C:\ORACLE\ORADATA\NP12\DEMO01.DBF


SQL>

If we look at the tablespace, it has grown to about 170M, so at least temporarily over double the space was required. I'm not saying that this is a bad thing, but it's certainly something to be aware of when you're doing LONG-to-CLOB conversions.
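A quick sanity check of those figures (a Python sketch, using the segment sizes reported by the queries above) shows where the extra space went: during the conversion the old table data and the new table plus LOB segments must coexist, so peak usage is roughly the sum of both.

```python
# Back-of-the-envelope check of the space figures reported above.
# During the LONG-to-CLOB conversion the old table data and the new
# table + LOB segments coexist, so peak usage is roughly the sum.

old_table   = 80_084_992   # TESTLONG with the LONG column (user_segments)
new_table   = 49_938_432   # rebuilt TESTLONG
lob_index   = 131_072      # SYS_IL... LOB index segment
lob_segment = 42_074_112   # SYS_LOB... LOB data segment

new_total = new_table + lob_index + lob_segment
peak      = old_table + new_total

print(f"new segments total: {new_total:,} bytes")   # 92,143,616
print(f"estimated peak:     {peak:,} bytes")        # 172,228,608
```

The datafile finished at 172,490,752 bytes, which is within one 256K autoextend increment of that estimate.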

PL/SQL arrays–the index datatype

You get some interesting (but perhaps not unexpected) results when measuring the speed of array operations in PL/SQL. This is a series of tests comparing "INDEX BY PLS_INTEGER" arrays with "INDEX BY VARCHAR2" arrays. In all cases their speed is pretty much blindingly fast, but the comparison between the two depends on the type of array operation being performed.
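Before the PL/SQL tests, a rough analogy of the underlying cost, sketched in Python (purely illustrative, not Oracle code; the PL/SQL results below are the real measurements): string keys must be built, hashed and compared character by character, where integer keys are essentially free.

```python
# Rough analogy: hash maps keyed by integers vs strings.
# Illustrates why string-indexed lookups tend to cost more.
import timeit

N = 200_000

def fill_int():
    d = {}
    for i in range(N):
        d[i] = i            # integer key: cheap to hash and compare
    return d

def fill_str():
    d = {}
    for i in range(N):
        d[str(i)] = i       # key must be materialised and hashed as a string
    return d

t_int = timeit.timeit(fill_int, number=3)
t_str = timeit.timeit(fill_str, number=3)
print(f"int keys: {t_int:.3f}s   str keys: {t_str:.3f}s")
```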

Anyway … on to the tests

Repeated assignment to the same array element



SQL> set timing off
SQL> set serverout on
SQL> declare
  2    type vtab is table of number
  3      index by varchar2(100);
  4    type ntab is table of number
  5      index by pls_integer;
  6    p ntab;
  7    p1 vtab;
  8    q number;
  9  begin
 10    q := dbms_utility.get_time;
 11    for i in 1 .. 10000000 loop
 12      p(12345) := i;
 13      p(6789) := i;
 14    end loop;
 15    dbms_output.put_line('by number: Ela='||(dbms_utility.get_time-q));
 16    q := dbms_utility.get_time;
 17    for i in 1 .. 10000000 loop
 18      p1('12345') := i;
 19      p1('6789') := i;
 20    end loop;
 21    dbms_output.put_line('by varchar2: Ela='||(dbms_utility.get_time-q));
 22  end;
 23  /
by number: Ela=81
by varchar2: Ela=164

PL/SQL procedure successfully completed.

Repeated assignment to contiguous array elements



SQL> declare
  2    type vtab is table of number
  3      index by varchar2(100);
  4    type ntab is table of number
  5      index by pls_integer;
  6    p ntab;
  7    p1 vtab;
  8    q number;
  9  begin
 10    q := dbms_utility.get_time;
 11    for i in 1 .. 1000000 loop
 12      p(i) := i;
 13    end loop;
 14    dbms_output.put_line('by number: Ela='||(dbms_utility.get_time-q));
 15    q := dbms_utility.get_time;
 16    for i in 1 .. 1000000 loop
 17      p1(i) := i;
 18    end loop;
 19    dbms_output.put_line('by varchar2: Ela='||(dbms_utility.get_time-q));
 20  end;
 21  /
by number: Ela=11
by varchar2: Ela=59

PL/SQL procedure successfully completed.

Repeated assignment to evenly spaced (sparse) array elements



SQL>
SQL> declare
  2    type vtab is table of number
  3      index by varchar2(100);
  4    type ntab is table of number
  5      index by pls_integer;
  6    p ntab;
  7    p1 vtab;
  8    q number;
  9  begin
 10    q := dbms_utility.get_time;
 11    for i in 1 .. 1000000 loop
 12      p(i*345) := i;
 13    end loop;
 14    dbms_output.put_line('by number: Ela='||(dbms_utility.get_time-q));
 15    q := dbms_utility.get_time;
 16    for i in 1 .. 1000000 loop
 17      p1(i*345) := i;
 18    end loop;
 19    dbms_output.put_line('by varchar2: Ela='||(dbms_utility.get_time-q));
 20  end;
 21  /
by number: Ela=50
by varchar2: Ela=63

PL/SQL procedure successfully completed.

Repeated assignment to random array elements



SQL>
SQL>
SQL> set timing off
SQL> set serverout on
SQL> declare
  2    type vtab is table of number
  3      index by varchar2(100);
  4    type ntab is table of number
  5      index by pls_integer;
  6    p ntab;
  7    p1 vtab;
  8    rnd ntab;
  9    q number;
 10  begin
 11    for i in 1 .. 1000000 loop
 12      rnd(i) := trunc(dbms_random.value(1,1000000));
 13    end loop;
 14
 15    q := dbms_utility.get_time;
 16    for i in 1 .. 1000000 loop
 17      p(rnd(i)) := i;
 18    end loop;
 19    dbms_output.put_line('by number: Ela='||(dbms_utility.get_time-q));
 20    q := dbms_utility.get_time;
 21    for i in 1 .. 1000000 loop
 22      p1(rnd(i)) := i;
 23    end loop;
 24    dbms_output.put_line('by varchar2: Ela='||(dbms_utility.get_time-q));
 25  end;
 26  /
by number: Ela=34
by varchar2: Ela=100

PL/SQL procedure successfully completed.

Navigation through contiguous array elements



SQL>
SQL>
SQL>
SQL> set timing off
SQL> set serverout on
SQL> declare
  2    type vtab is table of number
  3      index by varchar2(100);
  4    type ntab is table of number
  5      index by pls_integer;
  6    p ntab;
  7    p1 vtab;
  8    rnd ntab;
  9    q number;
 10    idx pls_integer;
 11  begin
 12    for i in 1 .. 1000000 loop
 13      rnd(i) := trunc(dbms_random.value(1,1000000));
 14    end loop;
 15    for i in 1 .. 1000000 loop
 16      p(rnd(i)) := i;
 17    end loop;
 18    for i in 1 .. 1000000 loop
 19      p1(rnd(i)) := i;
 20    end loop;
 21
 22    q := dbms_utility.get_time;
 23    idx := p.first;
 24    while idx != p.last loop
 25      idx := p.next(idx);
 26    end loop;
 27    dbms_output.put_line('by number: Ela='||(dbms_utility.get_time-q));
 28
 29    q := dbms_utility.get_time;
 30    idx := p1.first;
 31    while idx != p1.last loop
 32      idx := p1.next(idx);
 33    end loop;
 34    dbms_output.put_line('by varchar2: Ela='||(dbms_utility.get_time-q));
 35  end;
 36  /
by number: Ela=8
by varchar2: Ela=42

PL/SQL procedure successfully completed.

SQL>
SQL>

Navigation through sparse array elements



SQL> set timing off
SQL> set serverout on
SQL> declare
  2    type vtab is table of number
  3      index by varchar2(100);
  4    type ntab is table of number
  5      index by pls_integer;
  6    p ntab;
  7    p1 vtab;
  8    rnd ntab;
  9    q number;
 10    idx pls_integer;
 11  begin
 12    for i in 1 .. 100000 loop
 13      rnd(i) := trunc(dbms_random.value(1,1000000));
 14    end loop;
 15    for i in 1 .. 100000 loop
 16      p(rnd(i)) := i;
 17    end loop;
 18    for i in 1 .. 100000 loop
 19      p1(rnd(i)) := i;
 20    end loop;
 21
 22    q := dbms_utility.get_time;
 23    for i in 1 .. 10 loop
 24      idx := p.first;
 25      while idx != p.last loop
 26        idx := p.next(idx);
 27      end loop;
 28    end loop;
 29    dbms_output.put_line('by number: Ela='||(dbms_utility.get_time-q));
 30
 31    q := dbms_utility.get_time;
 32    for i in 1 .. 10 loop
 33      idx := p1.first;
 34      while idx != p1.last loop
 35        idx := p1.next(idx);
 36      end loop;
 37    end loop;
 38    dbms_output.put_line('by varchar2: Ela='||(dbms_utility.get_time-q));
 39  end;
 40  /
by number: Ela=14
by varchar2: Ela=61

PL/SQL procedure successfully completed.

SQL>
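One caveat worth remembering when navigating VARCHAR2-indexed arrays with FIRST/NEXT: the keys are walked in string order, not numeric order, so a key like '10' comes before '2'. A quick Python sketch of the difference (hypothetical key values, just to show the ordering):

```python
# Traversal order: integer-style keys vs string-style keys.
keys = [2, 10, 100, 3]

numeric_order = sorted(keys)                   # PLS_INTEGER-style traversal
string_order  = sorted(str(k) for k in keys)   # VARCHAR2-style traversal

print(numeric_order)   # [2, 3, 10, 100]
print(string_order)    # ['10', '100', '2', '3']
```

The tests above only used numeric-looking keys of similar magnitude, so this didn't show up, but it matters as soon as you rely on traversal order.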

Certification exams

This is an update of a blog item I made nearly 15 years ago, but I think it still holds true.

Is doing the OCP exams worthwhile?
Yes, but not for the reasons you may be thinking. The OCP exams are a relatively cheap way of identifying possible weaknesses in your Oracle knowledge base. For example, when I did the OCP, none of the database sites I had worked on used MTS, so the exams revealed an area that could be "swotted up" on.

What does an OCP mean?
The exams could probably be passed with a moderate understanding of Oracle and some "cramming". This is not to denigrate the exams, or the people that produce them, in any way. It is just an unfortunate reality that gleaning someone's real-world industry knowledge from an exam format is an extremely difficult problem to solve (see alternative 2 below). Any employer that takes on an Oracle professional purely on the basis of their OCP status probably will (and, some might argue, deserves to) get burnt.

Even the certification site itself suggests that the certification is perhaps not the real benefit; rather, the journey toward certification, that is, learning a good breadth of the database technology (emphasis mine), is the goal:

“The process of obtaining Oracle Database Certifications broadens your knowledge and skills by exposing you to an array of important database features, functions and tasks. Through your exam preparation, which includes labs, study and practice, you’ll learn how to perform complex, hands-on activities to build your database skill set.” – https://education.oracle.com

A good DBA will pass the OCP, but someone who passes the OCP is not necessarily a good DBA.  For example, a true story…a question I once saw posted to a newsgroup:

  "What does ROWNUM mean?
   Signed XXX
   OCP"

Are there any alternatives?
1) I heartily recommend downloading the sample exams for the reason mentioned above. You can use them to identify areas upon which to improve.

2) The other option if you’ve got some training budget, is the Oracle Certified Master program.  That is a series of genuine hands-on tasks to demonstrate knowledge in a broad range of areas.

Jenkins Plugin for Delphix

In my last blog I talked about trying out Jenkins. In this blog post I want to talk about the new Jenkins plugin for Delphix.

Delphix plugin

The Delphix plugin is easy to add. Just navigate to "Manage Jenkins" in the top left.

Screen Shot 2016-01-15 at 11.34.17 AM

Then click on “Manage Plugins”

Screen Shot 2016-01-15 at 11.34.27 AM

Then type in “delphix” in the search bar and then click on the “Delphix Jenkins plugin” to install

Screen Shot 2016-01-15 at 11.45.10 AM

Now we can access the Delphix APIs through the Jenkins interface. If you access a job (pick a job and click "Configure" in the top left, or create a new job) and click "Add build step" under Build, you will now see Delphix command options:

Screen Shot 2015-12-28 at 12.34.37 PM

Here is a zoom in to the new “Add build step” options:

Screen Shot 2015-12-28 at 12.34.43 PM

Before we can use these build steps, we have to add the Delphix engine(s) to Jenkins.

To add Delphix engine(s) click on “Manage Jenkins” on the top left

jenkins_add_delphix1

Then you should see a long page. In the middle of this page you should see a “Delphix” section

jenkins_add_delphix2

Add the Delphix engine(s) and hit “save” at the bottom of the screen

jenkins_add_delphix3

Now when I add a build step I can choose a Delphix action like "add environment".

Screen Shot 2016-01-15 at 12.05.34 PM

Now I can click "run now" and it adds the environment.

Getting Your Transaction SCN – USERENV(COMMITSCN)

A few days ago I was introduced (or re-introduced) to USERENV(‘COMMITSCN’) by Jonathan Lewis. This is an internal function that allows limited access to the SCN of your transaction.

I was trying to find a way to get the actual commit SCN easily as it struck me that Oracle would have it to hand somewhere and it would be unique to the change and generated very efficiently. I could not find anything to do it so I asked Jonathan and he pointed me straight to this post he did about it a while back. What a nice chap. However, the post is from 1999 (last CENTURY!) so I thought I should just check it out first…

I’ll say up front that this is an undocumented feature of a now-deprecated function. Well, almost undocumented – older SQL reference manuals mention that ‘commitscn’ returns a NUMBER as opposed to the VARCHAR2 returned by most parameters you can use with USERENV, but it does not list it as a valid parameter for that function.
USERENV has been deprecated since Oracle 10g (see the old 10g documentation link here about USERENV) and you have been instructed to use SYS_CONTEXT(‘userenv’,’parameter‘) as described in the 12c database SQL reference manual here. However, there is no way to get the commit SCN from SYS_CONTEXT that I can find, so I thought I’d check out if USERENV(‘COMMITSCN’) still works. It does, on my version of Oracle 12.1.0.2!

There are some strict limits to this function. To begin with, you can’t select it, you can only use it on insert/update:


-- attempt a simple select of the SCN
mdw> select userenv('commitscn') from dual;
select userenv('commitscn') from dual
               *
ERROR at line 1:
ORA-01725: USERENV('COMMITSCN')  not allowed here

--But I can use in an insert,
mdw> insert into test_scn (seq_no,vc1,scn_no)
  2  values (100,'abcd',userenv('commitscn'))

1 row created.

mdw> select * from test_scn where seq_no=100

    SEQ_NO VC1            SCN_NO
---------- ---------- ----------
       100 abcd         11144739

-- Now commit my new record
mdw> commit;
Commit complete.

mdw> select * from test_scn where seq_no=100
  2  /

    SEQ_NO VC1            SCN_NO
---------- ---------- ----------
       100 abcd         11144753

--LOOK at the value for SCN_NO now! Compare to before the commit!

If you look at the above you will see a couple of things. The first is that, as I said, you cannot SELECT the function USERENV(‘COMMITSCN’).
The other is that, though a value is put into the column when I insert a row using it (and I see that value when I query the information back), it changes when I commit. This is because Oracle is recording something at the point of commit, not at the point the SQL statement runs, and the new SCN is only generated when you commit. A lot could have happened since I did the INSERT; I might have gone for a cup of tea while a batch job kicked off doing a million transactions, each with its own SCN. So though a placeholder of the current SCN is put into your view of the table row, the value put in the actual table is generated at the time of the commit.

Another limiting rule is that you can only reference USERENV('COMMITSCN') once in a transaction, for one row. If I try to create two rows using the function in the same transaction I get an error, and likewise if I try to update more than one row:

mdw> insert into test_scn (seq_no,vc1,scn_no)
  2  values (101,'abcd',userenv('commitscn'))

1 row created.

mdw> insert into test_scn (seq_no,vc1,scn_no)
  2  values (102,'abcd',userenv('commitscn'))

insert into test_scn (seq_no,vc1,scn_no)
            *
ERROR at line 1:
ORA-01721: USERENV(COMMITSCN) invoked more than once in a transaction

-- now test updating several records
mdw> commit;
Commit complete.

mdw> select * from test_scn;

    SEQ_NO VC1            SCN_NO
---------- ---------- ----------
         1 AAAAAAA      11143743
         2 AAAAAAA      11143746
         3 AAAAAAA      11143749
         4 AAAAAAA      11143774
         5 AAAAAAA      11143777
       100 abcd         11144753
       101 abcd         11145543

mdw> update test_scn set scn_no = userenv('commitscn');
update test_scn set scn_no = userenv('commitscn')
       *
ERROR at line 1:
ORA-01721: USERENV(COMMITSCN) invoked more than once in a transaction

-- but one row works
mdw> update test_scn set scn_no = userenv('commitscn') where rownum =1;
1 row updated.

USERENV(‘COMMITSCN’) is old, undocumented and limited in use. So why am I looking at it, let alone telling you all about it? {Apart from the fact that a lot of you *love* this sort of obscure, tid-bitty stuff.} Well, because there is no replacement for it that I am aware of. You can get the current SCN in a couple of ways, the easiest probably being to get it from V$DATABASE:

mdw> select current_scn from v$database;
any key>

CURRENT_SCN
-----------
   11146718

However, that is the last SCN used at the time you check it, not the SCN when you commit; i.e. it is a different thing. I always find it irksome on those odd occasions when something is deprecated in Oracle with nothing really to replace it.

Below I demonstrate again that USERENV(‘COMMITSCN’) is a little special, and not the same as just selecting the SCN from V$DATABASE. Before I go any further: I think the value USERENV(‘COMMITSCN’) puts into the table is the actual commit SCN minus 1. I mostly think this because Jonathan said so :-). Each time I run this test, the first select from V$DATABASE followed by my insert and an immediate commit results in a value in the table 2 higher than the select.

Further iterations (the 2nd and 3rd in this case) show the value selected from V$DATABASE and the value inserted into TEST_SCN immediately after are the same, and are 3 higher than the previous iteration. I anticipated an increase of two: one for the change to the UNDO tablespace for the insert and one for the insert itself. I am not sure where the third one comes from.
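That increase of 3 is easy to verify from the SCN values recorded in the output further down (11147811, 11147814 and 11147817 for the first three iterations); a trivial check of the differences:

```python
# SCNs inserted by the first three iterations of the test output below
scns = [11147811, 11147814, 11147817]

diffs = [b - a for a, b in zip(scns, scns[1:])]
print(diffs)   # [3, 3]: each insert-plus-commit cycle consumed 3 SCNs
```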

However, in the fourth iteration I have a PAUSE in my SQL*Plus script between checking V$DATABASE and doing my insert and, in a second session, I do some simple inserts and commits {it does not matter what, so I don’t show it}. Hence the larger gap between the SCN I collected from V$DATABASE and the value inserted into the table.
Finally, in the fifth iteration, I check the value in V$DATABASE, do the insert, query it back and see the two are the same. And THEN I pause so I can do some changes and commit them in my second session. After I’ve done that I continue my first session which commits my latest insert into TEST_SCN. I check the values actually stored in the table and, just as at the top of this post, you see that the value actually preserved in the table is a later SCN than the placeholder one. It is doing something special.

(the below has the noddy code to create my test table and sequence as well as the test)

-- test_scn1
--create table test_scn (seq_no number,vc1 varchar2(10),scn_no number)
-- create sequence scn_seq;
select current_scn from v$database;
insert into test_scn values (scn_seq.nextval,'AAAAAAA',userenv('commitscn'));
commit;
select * from test_scn;
select current_scn from v$database;
insert into test_scn values (scn_seq.nextval,'AAAAAAA',userenv('commitscn'));
commit;
select * from test_scn;
select current_scn from v$database;
insert into test_scn values (scn_seq.nextval,'AAAAAAA',userenv('commitscn'));
commit;
select * from test_scn;
select current_scn from v$database;
pause 'commit some stuff in second session and then press any key'
insert into test_scn values (scn_seq.nextval,'AAAAAAA',userenv('commitscn'));
commit;
select * from test_scn;
select current_scn from v$database;
insert into test_scn values (scn_seq.nextval,'AAAAAAA',userenv('commitscn'));
pause 'commit some stuff in second session again and then press any key'
select * from test_scn;
commit;
select * from test_scn;
select current_scn from v$database;

-- the output of the test
mdw> @test_scn1

--check V$DATABASE SCN
CURRENT_SCN
-----------
   11147809

-- Create and commit 1 row
1 row created.
Commit complete.

    SEQ_NO VC1            SCN_NO
---------- ---------- ----------
        11 AAAAAAA      11147811
-- note that the inserted SCN is 2 higher than the current SCN.

--Same steps, 2nd iteration
CURRENT_SCN
-----------
   11147814

1 row created.
Commit complete.

    SEQ_NO VC1            SCN_NO
---------- ---------- ----------
        11 AAAAAAA      11147811
        12 AAAAAAA      11147814
-- select SCN and inserted SCN are the same, 3 higher than first iteration

-- same steps, 3rd iteration
CURRENT_SCN
-----------
   11147817

1 row created.
Commit complete.

    SEQ_NO VC1            SCN_NO
---------- ---------- ----------
        11 AAAAAAA      11147811
        12 AAAAAAA      11147814
        13 AAAAAAA      11147817
-- select SCN and inserted SCN are the same, 3 higher than first iteration

-- 4th iteration, a pause
CURRENT_SCN
-----------
   11147820

'commit some stuff in second session and then press any key'
-- I did indeed change and commit some stuff in second session, before I create my record in test_scn

1 row created.
Commit complete.

    SEQ_NO VC1            SCN_NO
---------- ---------- ----------
        11 AAAAAAA      11147811
        12 AAAAAAA      11147814
        13 AAAAAAA      11147817
        14 AAAAAAA      11147831
-- larger gap in SCN (11147817 to 11147831)

-- 5th iteration, pause now after insert and before commit
CURRENT_SCN
-----------
   11147834

1 row created.
'commit some stuff in second session again and then press any key'
-- I did indeed change and commit some stuff in second session 

    SEQ_NO VC1            SCN_NO
---------- ---------- ----------
        11 AAAAAAA      11147811
        12 AAAAAAA      11147814
        13 AAAAAAA      11147817
        14 AAAAAAA      11147831
        15 AAAAAAA      11147834

-- Notice the current_scn from V$DATABASE and the last row in the table match, 11147834..

Commit complete.

    SEQ_NO VC1            SCN_NO
---------- ---------- ----------
        11 AAAAAAA      11147811
        12 AAAAAAA      11147814
        13 AAAAAAA      11147817
        14 AAAAAAA      11147831
        15 AAAAAAA      11147842

-- But after the commit the SCN in row "15" has increased to 11147842.

CURRENT_SCN
-----------
   11147851

-- and the next check of V$DATABASE SCN shows the usual increase of 3 by the commit.

As you can see from the above, USERENV(‘COMMITSCN’) is doing something a little special and, despite all the limitations, I might actually have a use for it…

A little-known ORDER BY extension

Within a CONNECT BY query you can order siblings; that is, under a particular branch of the hierarchy the child entries can be ordered, while the hierarchy itself is still preserved.


SQL> select lpad('*', level, '*' ) || ename ename
  2  from emp
  3  start with mgr is null
  4  connect by prior empno = mgr
  5  order SIBLINGS by ename
  6  /

ENAME
------------------------------
*KING
**BLAKE
***ALLEN      <==| 
***JAMES      <==|
***MARTIN     <==| ordered within the "BLAKE" branch
***TURNER     <==|
***WARD       <==|
**CLARK
***MILLER
**JONES
***FORD
****SMITH
***SCOTT
****ADAMS

14 rows selected.

SQL> select lpad('*', level, '*' ) || ename ename
  2  from emp
  3  start with mgr is null
  4  connect by prior empno = mgr
  5  order SIBLINGS by ename DESC
  6  /

ENAME
------------------------------
*KING
**JONES
***SCOTT
****ADAMS
***FORD
****SMITH
**CLARK
***MILLER
**BLAKE
***WARD
***TURNER
***MARTIN
***JAMES
***ALLEN

14 rows selected.

Multiple partitions

In 12c, one of the nice changes to come along is that partition maintenance operations can now be done on multiple partitions, for example:


ALTER TABLE t1 MERGE PARTITIONS p01, p02, p03, p04 INTO p0;

ALTER TABLE t1 SPLIT PARTITION p0 INTO 
  (PARTITION p01 VALUES LESS THAN (25),
   PARTITION p02 VALUES LESS THAN (50), 
   PARTITION p03 VALUES LESS THAN (75),
   PARTITION p04);

and so on. However, one of the things that you still cannot do is the same thing with SELECT:



SQL> create table T ( x int, y int )
  2  partition by hash ( x )
  3  --partitions 8
  4  (
  5  partition  p1,
  6  partition  p2,
  7  partition  p3,
  8  partition  p4,
  9  partition  p5,
 10  partition  p6,
 11  partition  p7,
 12  partition  p8
 13  )
 14  /

Table created.

SQL>
SQL> insert /*+ APPEND */ into T
  2  select rownum, rownum
  3  from dual
  4  connect by rownum <= 1000000
  5  /

1000000 rows created.

SQL> select * from T partition ( p1,p2);
select * from T partition ( p1,p2)
                          *
ERROR at line 1:
ORA-00933: SQL command not properly ended


SQL>
SQL> select * from T partition2 ( p1,p2);
select * from T partition2 ( p1,p2)
                           *
ERROR at line 1:
ORA-00933: SQL command not properly ended


So if you’re looking for a way of getting access to multiple partitions easily, here’s a little trick you could employ:

  • have a partition-view style SQL statement that accesses ALL of the partitions
  • with each SQL, include a bind variable and some bitwise logic

and it will look something like this



SQL> set autotrace off
SQL>
SQL> variable b number
SQL> exec :b := 17

PL/SQL procedure successfully completed.

SQL>
SQL> set autotrace on stat
SQL> select count(*)
  2  from
  3  (
  4  select * from t partition ( p1 ) where bitand(:b,1) = 1 union all
  5  select * from t partition ( p2 ) where bitand(:b,2) = 2 union all
  6  select * from t partition ( p3 ) where bitand(:b,4) = 4 union all
  7  select * from t partition ( p4 ) where bitand(:b,8) = 8 union all
  8  select * from t partition ( p5 ) where bitand(:b,16) = 16 union all
  9  select * from t partition ( p6 ) where bitand(:b,32) = 32 union all
 10  select * from t partition ( p7 ) where bitand(:b,64) = 64 union all
 11  select * from t partition ( p8 ) where bitand(:b,128) = 128
 12  )
 13  /

  COUNT(*)
----------
    250498

1 row selected.


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        524  consistent gets
          0  physical reads
          0  redo size
        544  bytes sent via SQL*Net to client
        551  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL>
SQL> exec :b := 255

PL/SQL procedure successfully completed.

SQL>
SQL> select count(*)
  2  from
  3  (
  4  select * from t partition ( p1 ) where bitand(:b,1) = 1 union all
  5  select * from t partition ( p2 ) where bitand(:b,2) = 2 union all
  6  select * from t partition ( p3 ) where bitand(:b,4) = 4 union all
  7  select * from t partition ( p4 ) where bitand(:b,8) = 8 union all
  8  select * from t partition ( p5 ) where bitand(:b,16) = 16 union all
  9  select * from t partition ( p6 ) where bitand(:b,32) = 32 union all
 10  select * from t partition ( p7 ) where bitand(:b,64) = 64 union all
 11  select * from t partition ( p8 ) where bitand(:b,128) = 128
 12  )
 13  /

  COUNT(*)
----------
   1000000

1 row selected.


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       2091  consistent gets
          0  physical reads
          0  redo size
        542  bytes sent via SQL*Net to client
        551  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL>
SQL>

You can see from the consistent gets that we are accessing just the partitions of note. A look at the execution plan shows why: we are applying a filter to each SELECT in the UNION ALL, based on the bind variable, to decide whether we need to access the partition.


SQL> set autotrace traceonly explain
SQL> select count(*)
  2  from
  3  (
  4  select * from t partition ( p1 ) where bitand(:b,1) = 1 union all
  5  select * from t partition ( p2 ) where bitand(:b,2) = 2 union all
  6  select * from t partition ( p3 ) where bitand(:b,4) = 4 union all
  7  select * from t partition ( p4 ) where bitand(:b,8) = 8 union all
  8  select * from t partition ( p5 ) where bitand(:b,16) = 16 union all
  9  select * from t partition ( p6 ) where bitand(:b,32) = 32 union all
 10  select * from t partition ( p7 ) where bitand(:b,64) = 64 union all
 11  select * from t partition ( p8 ) where bitand(:b,128) = 128
 12  )
 13  /

Execution Plan
----------------------------------------------------------
Plan hash value: 726641459

------------------------------------------------------------------------------------------
| Id  | Operation                 | Name | Rows  | Cost (%CPU)| Time     | Pstart| Pstop |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |      |     1 |   621   (1)| 00:00:01 |       |       |
|   1 |  SORT AGGREGATE           |      |     1 |            |          |       |       |
|   2 |   VIEW                    |      |  1000K|   621   (1)| 00:00:01 |       |       |
|   3 |    UNION-ALL              |      |       |            |          |       |       |
|*  4 |     FILTER                |      |       |            |          |       |       |
|   5 |      PARTITION HASH SINGLE|      |   125K|    78   (2)| 00:00:01 |     1 |     1 |
|   6 |       TABLE ACCESS FULL   | T    |   125K|    78   (2)| 00:00:01 |     1 |     1 |
|*  7 |     FILTER                |      |       |            |          |       |       |
|   8 |      PARTITION HASH SINGLE|      |   124K|    78   (2)| 00:00:01 |     2 |     2 |
|   9 |       TABLE ACCESS FULL   | T    |   124K|    78   (2)| 00:00:01 |     2 |     2 |
|* 10 |     FILTER                |      |       |            |          |       |       |
|  11 |      PARTITION HASH SINGLE|      |   124K|    78   (2)| 00:00:01 |     3 |     3 |
|  12 |       TABLE ACCESS FULL   | T    |   124K|    78   (2)| 00:00:01 |     3 |     3 |
|* 13 |     FILTER                |      |       |            |          |       |       |
|  14 |      PARTITION HASH SINGLE|      |   124K|    78   (2)| 00:00:01 |     4 |     4 |
|  15 |       TABLE ACCESS FULL   | T    |   124K|    78   (2)| 00:00:01 |     4 |     4 |
|* 16 |     FILTER                |      |       |            |          |       |       |
|  17 |      PARTITION HASH SINGLE|      |   125K|    78   (2)| 00:00:01 |     5 |     5 |
|  18 |       TABLE ACCESS FULL   | T    |   125K|    78   (2)| 00:00:01 |     5 |     5 |
|* 19 |     FILTER                |      |       |            |          |       |       |
|  20 |      PARTITION HASH SINGLE|      |   125K|    78   (2)| 00:00:01 |     6 |     6 |
|  21 |       TABLE ACCESS FULL   | T    |   125K|    78   (2)| 00:00:01 |     6 |     6 |
|* 22 |     FILTER                |      |       |            |          |       |       |
|  23 |      PARTITION HASH SINGLE|      |   125K|    78   (2)| 00:00:01 |     7 |     7 |
|  24 |       TABLE ACCESS FULL   | T    |   125K|    78   (2)| 00:00:01 |     7 |     7 |
|* 25 |     FILTER                |      |       |            |          |       |       |
|  26 |      PARTITION HASH SINGLE|      |   124K|    78   (2)| 00:00:01 |     8 |     8 |
|  27 |       TABLE ACCESS FULL   | T    |   124K|    78   (2)| 00:00:01 |     8 |     8 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - filter(BITAND(TO_NUMBER(:B),1)=1)
   7 - filter(BITAND(TO_NUMBER(:B),2)=2)
  10 - filter(BITAND(TO_NUMBER(:B),4)=4)
  13 - filter(BITAND(TO_NUMBER(:B),8)=8)
  16 - filter(BITAND(TO_NUMBER(:B),16)=16)
  19 - filter(BITAND(TO_NUMBER(:B),32)=32)
  22 - filter(BITAND(TO_NUMBER(:B),64)=64)
  25 - filter(BITAND(TO_NUMBER(:B),128)=128)

Video: Oracle Linux Virtual Machine (VM) on Microsoft Azure

The interface for Microsoft Azure has been re-jigged since I last did screen shots, so I did a run-through of creating an Oracle Linux VM and recorded it for my channel.

I also updated the associated article.

Cheers

Tim…


Video: Oracle Linux Virtual Machine (VM) on Microsoft Azure was first posted on January 18, 2016 at 9:17 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

Drop Column

I published a note on AllthingsOracle a few days ago discussing the options for dropping a column from an existing table. In a little teaser to a future article I pointed out that dropping columns DOESN’T reclaim space; or rather, probably doesn’t, and even if it did you probably won’t like the way it does it.

I will be writing about “massive deletes” for AllthingsOracle in the near future, but I thought I’d expand on the comment about not reclaiming space straight away. The key point is this – when you drop a column you are probably dropping a small fraction of each row. (Obviously there are some extreme variants on the idea – for example, you might have decided to move a large varchar2() to a separate table with a shared primary key.)
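The “move it to a separate table” variant mentioned above can be sketched like this – table and column names here are hypothetical, and you would obviously want to test the whole sequence before running it against production data:

```sql
-- Sketch: relocate a large varchar2() column to a child table that
-- shares the parent's primary key, rather than carrying it in every row.
create table t_notes (
  id    number       not null primary key references t(id),
  notes varchar2(4000)
);

-- Copy only the rows that actually have a value.
insert into t_notes (id, notes)
select id, notes
from   t
where  notes is not null;

commit;

-- Then mark the original column unused (see the discussion below
-- about why an immediate drop may not be what you want).
alter table t set unused column notes;
```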

If you’ve dropped a small fraction of each row you’ve freed up a small fraction of each block, which probably means the block hasn’t been identified as having available free space for inserts. In many cases this is probably a good thing – because it’s quite likely that if every block in your table is suddenly labelled as having sufficient free space for new rows then you could end up with a difficult and ongoing performance problem.

Many large tables have a “time-based” component to their usage – as time passes the most recently entered rows are the ones that get most usage, and older rows are no longer accessed; this means you get a performance benefit from caching because the most useful fractions of such tables are often well cached and the “interesting” data is fairly well clustered.

In a case like this, imagine what will happen if EVERY block in your table suddenly acquires enough free space to accept a couple of new rows – over the next few days the incoming data will be spread across the entire length of the table, and for the next couple of months, or years, you will have to keep the entire table cached in memory if the performance is to stay constant; moreover the clustering_factor of the most useful indexes is likely to jump from “quite small” to “absolutely massive”, and the optimizer will start changing lots of plans because it will decide that your favourite indexes are probably much too expensive to use.

I am, of course, painting a very grim picture – but it is a possible scenario that should be considered before you drop a column from a table. Combined with my observations about the locking and overheads of dropping a column, you might (and probably ought to) decide that you should never drop a column outright; you should only mark it as unused or (better still, if you’re on 12c) mark it invisible for a while before marking it unused. You can worry about space reclamation at a later date, once you have considered all the ramifications of how it might impact performance.
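For reference, the gentler alternatives look like this (table and column names are hypothetical):

```sql
-- 12c onwards: hide the column first, so you can make it visible
-- again if some application turns out to still depend on it.
alter table t modify ( big_col invisible );

-- Later: mark it unused. This is a quick dictionary-only operation,
-- but it is irreversible and reclaims no space.
alter table t set unused column big_col;

-- Much later, in a maintenance window: physically remove all unused
-- columns. This rewrites every block in the table and can generate a
-- large volume of undo and redo; the checkpoint clause limits how much
-- undo is held at any one time (at the cost of leaving the table in an
-- unusable state if the command fails part-way through).
alter table t drop unused columns checkpoint 1000;
```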

Footnote: If you’re still using freelist management then dropping a column won’t put a block on the freelist until the total used space in the block falls below the value dictated by pctused (default 40%); if you’re using ASSM then the block doesn’t become available for reuse until (by default) the free space exceeds 25% of the block’s usable space.
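If you want to see where your blocks sit relative to those ASSM free-space boundaries after a drop, DBMS_SPACE.SPACE_USAGE reports the block counts in each quartile band (the table name here is hypothetical, and the segment must be in an ASSM tablespace):

```sql
set serveroutput on
declare
  l_unf   number; l_unf_b  number;
  l_fs1   number; l_fs1_b  number;   -- 0-25% free
  l_fs2   number; l_fs2_b  number;   -- 25-50% free
  l_fs3   number; l_fs3_b  number;   -- 50-75% free
  l_fs4   number; l_fs4_b  number;   -- 75-100% free
  l_full  number; l_full_b number;
begin
  dbms_space.space_usage(
    segment_owner      => user,
    segment_name       => 'T',
    segment_type       => 'TABLE',
    unformatted_blocks => l_unf,  unformatted_bytes => l_unf_b,
    fs1_blocks         => l_fs1,  fs1_bytes         => l_fs1_b,
    fs2_blocks         => l_fs2,  fs2_bytes         => l_fs2_b,
    fs3_blocks         => l_fs3,  fs3_bytes         => l_fs3_b,
    fs4_blocks         => l_fs4,  fs4_bytes         => l_fs4_b,
    full_blocks        => l_full, full_bytes        => l_full_b
  );
  dbms_output.put_line('Full blocks                 : ' || l_full);
  dbms_output.put_line('Blocks with > 25% free space: ' || (l_fs2 + l_fs3 + l_fs4));
end;
/
```

Blocks reported in the FS2 band and above are the ones that have crossed the (default) 25% threshold and become candidates for new inserts – exactly the effect discussed above.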