Version control a multi-terabyte database?! Yes !

#555555;" align="left">#2970a6;" href="http://www.kylehailey.com/sample-page/current/">

9009_03

Can you version control a database?

How do you manage databases in application development?  For all the advances in application development, such as machine virtualization, agile development, and devops practices, the database still stands as a huge barrier to development agility.

NFS max “rsize” on Solaris – gotcha

#555555;">When mounting NFS file systems there is an option to set the max rsize requested. For example:

#555555;">mount -o rsize=1048576,wsize=1048576,proto=tcp,vers=3  192.168.1.10:/foo /foo

#555555;">The general rsize used is 32K,  for example in Oracle documentation, but for large sequential I/O the larger rsize can make a big difference. In some tests the larger rsize was twice as fast.

Query slower on exact clone

#555555;">A customer  was working with a clone copy of their production database for reports, but the cloned database reports were running slower than the same reports on production and they wanted to know why.

#555555;">Ostensibly the machines were similar and the memory was the same, though it’s always worth verifying.  The first thing I checked was the load profile. If you have OEM you can look at the performance page and/or top activity page to see the load profile.  The load profile of production was much more CPU bound and much less I/O bound than the clone and their was clearly a difference.

#555555;">The customer knew the time they the report was run on the clone  but was unsure exactly when it was run on production, though knew it had been run in the past day.

Dumb triggers part 2 – session level control

In the previous post, I pontificated about triggers that "lock you in" to having them fire, which can create dramas when it comes to doing data patching.

Maybe you can design your application around this, but if you can’t, the last thing you want to be doing is having to take an outage so that you can disable the trigger whilst you do your data maintenance.  Ideally you want the trigger to fire as per normal, except in your session.

And that’s actually pretty easy to do.  A simple example is below.
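
Here is a minimal sketch of one way to get that session-level control (the package and names are my own illustration, not necessarily the exact code from the post): the trigger consults a session-scoped package flag and skips its logic when the flag is cleared.

-- Session-scoped state: package variables are private to each session.
CREATE OR REPLACE PACKAGE trg_ctl AS
  enabled BOOLEAN := TRUE;   -- defaults to "fire as per normal"
END;
/

CREATE OR REPLACE TRIGGER my_trg
  BEFORE INSERT ON my_table
  FOR EACH ROW
BEGIN
  IF trg_ctl.enabled THEN
    :new.my_pk_col := my_seq.nextval;
  END IF;
END;
/

-- In the session doing the data maintenance only:
--   exec trg_ctl.enabled := FALSE;
--   ... do the data patching ...
--   exec trg_ctl.enabled := TRUE;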

Oracle 12c IDENTITY Columns and Default SEQUENCES

As promised, here is the first in a series of posts dealing with Oracle 12c new features.

Creating new table rows often requires assigning a key value. In the past, it has been common to use an Oracle SEQUENCE to generate key values using an Insert trigger. Oracle 12c provides two new options: IDENTITY columns and SEQUENCEs used as column default values.

IDENTITY Columns

IDENTITY columns are new to Oracle, but, not new to the database world. IDENTITY columns use an Oracle SEQUENCE “under the covers” and their creation is automatic rather than manual. Should a table be dropped and recreated, the IDENTITY value starts over again.

Oracle has had SEQUENCES for years; the IDENTITY column allows use of a SEQUENCE as part of a column definition (much like some competitor databases):
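
A minimal sketch of both options (table and column names are mine for illustration):

-- IDENTITY column: Oracle creates and manages the underlying sequence.
CREATE TABLE emp_ident (
  emp_id   NUMBER GENERATED ALWAYS AS IDENTITY,
  emp_name VARCHAR2(30)
);

-- SEQUENCE as a column default: the sequence is created explicitly
-- and referenced in the column definition, no trigger required.
CREATE SEQUENCE emp_seq;

CREATE TABLE emp_defseq (
  emp_id   NUMBER DEFAULT emp_seq.NEXTVAL,
  emp_name VARCHAR2(30)
);

-- The key value is assigned automatically on insert.
INSERT INTO emp_ident (emp_name) VALUES ('SMITH');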

Really dumb triggers

Some people hate triggers, some people love triggers…

I am not that opinionated on them in either direction, but one thing I do hate, whether it be a trigger or not, is dumb code.  And today’s post just happens to be about dumb code in a trigger.

Consider this simple trigger (you see these everywhere pre 12c):

CREATE OR REPLACE TRIGGER MY_TRG
  BEFORE INSERT ON MY_TABLE 
  FOR EACH ROW
BEGIN
   SELECT MY_SEQ.NEXTVAL INTO :NEW.MY_PK_COL FROM DUAL;
END;
/

Seems harmless enough…and I’m not talking about whether we should be using ":new.col := seq.nextval", because under the covers it will do a select-from-dual anyway.

The reason the code is really dumb…is you are totally screwed if you need to do some data maintenance…

Consider the following scenario:
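
Something along these lines (my own illustration of the kind of scenario being alluded to, using a hypothetical column SOME_COL): you need to re-insert a deleted row and preserve its original key, but the trigger silently replaces whatever value you supply.

-- Attempt to restore a row with its original primary key value.
INSERT INTO my_table (my_pk_col, some_col) VALUES (12345, 'restored row');

-- The trigger has already fired, so the row gets the next sequence
-- value instead of 12345; the key you supplied is silently discarded.
SELECT my_pk_col FROM my_table WHERE some_col = 'restored row';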

“external table write” wait events… but I am only running a query?

I was helping a customer debug some external table load problems.  They are developing some code to do massive inserts via external tables.  As the code was being tested, we saw a fair number of tests that were doing simple queries of an external table.  I expected to see “external table read” wait events, but was surprised when we saw more “external table write” wait events than reads.

I thought this was due to writes to the “log” file and possible “bad” file, but I had to be sure.  I searched the docs but could not find reference to this wait event.  I specifically was seeing the following:
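
One way to pin it down is to watch the session’s wait events directly; a rough sketch (the SID value is illustrative):

-- Wait events accumulated by the session running the external table query.
SELECT event, total_waits, time_waited
FROM   v$session_event
WHERE  sid = 123                         -- SID of the test session
AND    event LIKE 'external table%'
ORDER  BY time_waited DESC;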

An accusatory error message

I found this hilarious

SQL> startup
ORACLE instance started.

Total System Global Area 1469792256 bytes
Fixed Size                  2402776 bytes
Variable Size             536872488 bytes
Database Buffers          922746880 bytes
Redo Buffers                7770112 bytes
Database mounted.
ORA-19821: an intentionally corrupt log file was found

Really ? I intentionally corrupted my log file ?  I don’t think so !

:-)

SQL injection

Another big public username and password leak…

http://o.canada.com/technology/bell-canada-security-breach-391451/

Some good reading on how it was done, and on how to ensure your own code isn’t prone to SQL injection, here:

http://www.troyhunt.com/2014/02/heres-how-bell-was-hacked-sql-injection.html
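
The articles cover the specifics; as a generic PL/SQL sketch of the underlying issue (APP_USERS is a hypothetical table), the difference is whether user input becomes part of the SQL text or is passed as a bind variable:

DECLARE
  p_user VARCHAR2(100) := 'x'' or ''1''=''1';   -- hostile input
  l_cnt  NUMBER;
BEGIN
  -- Vulnerable: the input is concatenated into the statement, so the
  -- attacker's quotes and OR clause become part of the SQL itself.
  EXECUTE IMMEDIATE
    'select count(*) from app_users where username = ''' || p_user || ''''
    INTO l_cnt;

  -- Safer: the input is a bind variable and can only ever be a value.
  EXECUTE IMMEDIATE
    'select count(*) from app_users where username = :u'
    INTO l_cnt USING p_user;
END;
/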

Pluggable database and restricted sessions

Once you get into pluggable database territory, you might need to check your usage of "alter system enable restricted session", because unless you’ve patched, there’s a little bug which lets you enable restricted session, but won’t let you get out of it ! :-)

For example:

SQL> alter session set container = pdb12;
 
Session altered.
 
SQL> alter system enable restricted session;
 
System altered.
 
SQL> select logins from v$instance;
 
LOGINS
----------
RESTRICTED
 
SQL> alter system disable restricted session;
alter system disable restricted session
*
ERROR at line 1:
ORA-65144: ALTER SYSTEM DISABLE RESTRICTED SESSION is not permitted
 

You can get out of the predicament by force opening the pluggable database as shown below, but it’s probably best to look at the latest 12c PSU, which contains a fix (unverified).
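
A sketch of the force-open workaround, run from the root container (the output lines are illustrative, and the exact steps in the original post may differ slightly):

SQL> alter session set container = cdb$root;

Session altered.

SQL> alter pluggable database pdb12 open force;

Pluggable database altered.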