Performing a large correlated update

We had a question on AskTom recently asking for the best way to update a new column on a very large table.  Four in-place update options are at our disposal:

  • Serial update SQL
  • Parallel DML update SQL
  • DBMS_PARALLEL_EXECUTE
  • PL/SQL batching (as long as sensible restart-after-error logic is readily available)

all of which will achieve the required outcome, but all of them carry a significant redo and undo cost.  Also, depending on the size of the new column, such an update might create chaos with row migration, because whilst unlikely, there is definitely the potential for every row to grow beyond the free space available in its block and hence no longer fit in situ.
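
As an illustration of the third option, here is a minimal DBMS_PARALLEL_EXECUTE sketch, assuming a table T in which a new column NEW_COL is populated from an existing column OLD_COL (all of these names are illustrative):


begin
  dbms_parallel_execute.create_task('T_UPDATE');

  -- carve the table into rowid ranges of roughly 10000 rows each
  dbms_parallel_execute.create_chunks_by_rowid(
    task_name   => 'T_UPDATE',
    table_owner => user,
    table_name  => 'T',
    by_row      => true,
    chunk_size  => 10000);

  -- run the update across the chunks, 8 jobs at a time;
  -- each chunk is committed independently, keeping undo small
  dbms_parallel_execute.run_task(
    task_name      => 'T_UPDATE',
    sql_stmt       => 'update t set new_col = old_col ' ||
                      'where rowid between :start_id and :end_id',
    language_flag  => dbms_sql.native,
    parallel_level => 8);

  dbms_parallel_execute.drop_task('T_UPDATE');
end;
/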

OTN tour 2016 APAC

The OTN tour came to APAC this year, so it’s been a pleasure and privilege to be able to participate in some of the legs.  Being Perth born and bred, I know all too well that any travel to Australia from … well… anywhere except Australia, is a long haul, so I’m very grateful to the array of overseas speakers who gave up their time and comfort to make the journey.

My first stop was Sydney and I was glad to get back there for a few reasons.  One of them was to catch up with family and offload, oops, I mean “share” my young boys with their grandmother.  The highlight of the trip for them of course was travelling back home to Perth unaccompanied :-) 

Code enhancements without changing code

An interesting suggestion came through on AskTom this week, which prompted the following exploration.

Let us assume you are populating a table with


INSERT INTO EMP SELECT * FROM SOURCE_EMPS

and it falls over with


ORA-12899: value too large for column

Capturing the data that caused that error is not trivial. We could edit the code to perform DML error logging, but that means changing the source code, which means change control, testing, red tape and so on. Is there a better way? Perhaps there is.
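
For reference, the conventional DML error logging approach (the very code change we are trying to avoid) looks something like this, sketched here with the default ERR$_ error table prefix:


-- create the error logging table (named ERR$_EMP by default)
exec dbms_errlog.create_error_log(dml_table_name => 'EMP')

-- offending rows are diverted to ERR$_EMP instead of
-- aborting the whole statement
INSERT INTO EMP
SELECT * FROM SOURCE_EMPS
LOG ERRORS INTO ERR$_EMP ('emp load') REJECT LIMIT UNLIMITED;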

So Long ACED

… and thanks for all the fish.

Today I removed myself from the OTN ACE program. This isn’t a reflection on anything to do with the ACE program – quite the reverse, in fact – it’s because they’re introducing steps to ensure that the ACE Directors can justify their titles. Unfortunately, as anyone who has gone through (e.g.) ISO 9001 certification can tell you, quality assurance tends to translate into paperwork and ticking boxes – and while I can always find time to write up some interesting feature of Oracle, I really find it hard to prioritise time for filling in forms.

In the last 4 months I’ve failed to file my monthly list of relevant activities twice, failed to request funding for two of the international conferences I’ve spoken at, and failed to submit claims against the two for which I had requested and received funding approval – so there really was no hope of me being motivated to collect all the extra details that the new regime requires.

BULK COLLECT into nested table

I had an observation come to me last week about PL/SQL and populating nested tables.

“The BULK COLLECT into statement cannot be used repeatedly to append results into a table. Instead, it silently truncates the target table each time.”

This is true.  However, if you need to use BULK COLLECT to append results into a single nested table data structure, it is trivial to come up with a workaround using the MULTISET syntax.
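
Here is a minimal sketch of that workaround: fetch each batch into a staging collection and append it with MULTISET UNION ALL (the type and row counts are illustrative):


declare
  type num_list is table of number;
  l_rows  num_list := num_list();   -- the accumulated results
  l_batch num_list;                 -- overwritten by each fetch
begin
  for i in 1 .. 3 loop
    select level
    bulk collect into l_batch
    from dual
    connect by level <= 5;

    -- append the new batch rather than losing the prior contents
    l_rows := l_rows multiset union all l_batch;
  end loop;
  dbms_output.put_line(l_rows.count);   -- 15
end;
/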

Truncating a timestamp to the second

We had an interesting AskTom question recently, about why certain TRUNC commands would generate an error when applied to a TIMESTAMP value.  At first glance, TRUNC seems fine.


SQL> select trunc(localtimestamp,'MM') to_the_month from dual;

TO_THE_MO
---------
01-OCT-16

1 row selected.

SQL> select trunc(localtimestamp,'YYYY') to_the_year from dual;

TO_THE_YE
---------
01-JAN-16

1 row selected.

SQL> select trunc(localtimestamp,'DD') to_the_day from dual;

TO_THE_DA
---------
20-OCT-16

1 row selected.

But the moment you try to apply a TRUNC down to the second, things go wrong.
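
One possible workaround (a sketch, not necessarily the approach the post settles on): a DATE already carries precision down to the second, so casting the TIMESTAMP to DATE drops the fractional seconds, and the result can be cast back if the TIMESTAMP datatype must be preserved.


-- the inner cast truncates the fractional seconds
select cast(cast(localtimestamp as date) as timestamp) to_the_second
from dual;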

Taking a peek at SYS_CONTEXT

There was a Twitter discussion about using context variables, accessed via SYS_CONTEXT, within queries. It’s an easy means of passing parameters into a view. The question that got asked was: does a SYS_CONTEXT variable act exactly like a bind variable? Let’s take a look at a simple demo to explore that.

We’ll create a table with 1 row with a value of ‘a’, and 100,000 rows with a value of ‘b’.
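
Here is a sketch of that setup, with illustrative context and package names (a context can only be written via its nominated trusted package):


create table t as
select case when rownum = 1 then 'a' else 'b' end val,
       rpad('x',100) padding
from dual
connect by level <= 100001;

create index t_ix on t ( val );

create or replace context demo_ctx using demo_ctx_api;

create or replace package demo_ctx_api as
  procedure set_val(p_val varchar2);
end;
/

create or replace package body demo_ctx_api as
  procedure set_val(p_val varchar2) is
  begin
    dbms_session.set_context('DEMO_CTX','VAL',p_val);
  end;
end;
/

-- the question: is this treated like a bind variable
-- when the optimizer looks at it?
exec demo_ctx_api.set_val('a')
select count(padding) from t where val = sys_context('DEMO_CTX','VAL');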

Translating SQL (a migration tool)

When you are moving from another database to Oracle, in a perfect world you’d set aside plenty of time to refactor all of your code and re-engineer your applications to get the maximum benefit out of the new database technology.  But there are not many “perfect worlds” out there in the IT landscape, so sometimes we have to make do with the time and budget constraints placed upon us.  To assist in migrations without having to change every piece of SQL code, you can use our translation tool, which will intercept SQL as it is passed to the database and replace it with SQL of your own choosing.
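
Before the demo, here is a minimal sketch of the moving parts, assuming an illustrative profile name and a one-for-one statement swap:


begin
  dbms_sql_translator.create_profile('DEMO_PROFILE');

  -- whenever the application issues the first statement,
  -- the database silently runs the second instead
  dbms_sql_translator.register_sql_translation(
    profile_name    => 'DEMO_PROFILE',
    sql_text        => 'select * from emp',
    translated_text => 'select * from emp2');
end;
/

alter session set sql_translation_profile = DEMO_PROFILE;

-- for client tools such as SQL*Plus, this event is also needed
-- before the translation kicks in
alter session set events = '10601 trace name context forever, level 32';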

Here’s a little demo of the feature called “SQL Translation”.  First we’ll create a user DEMO that our “application” will be running under.  It has two tables – a copy of EMP, and an empty copy of EMP (called

Locked rows and lunch breaks? A simple fix

Even in the new world of stateless web applications, from time to time we still come across the “lunch break locking” problem.  That is, someone or something commences a transaction that locks some rows, and then the session goes idle without committing or rolling back.  The metaphor commonly used is that someone locks a row in a table and then goes out for lunch, totally oblivious to the havoc they have caused until they return.
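
One candidate for a simple fix (a sketch; the plan name is illustrative, and this may differ from the fix the post describes) is the Resource Manager directive MAX_IDLE_BLOCKER_TIME, which terminates a session that sits idle while holding locks others are waiting on:


begin
  dbms_resource_manager.create_pending_area;

  dbms_resource_manager.create_plan(
    plan    => 'LUNCH_PLAN',
    comment => 'limit idle blockers');

  -- any session that blocks others while idle for more
  -- than 300 seconds is terminated
  dbms_resource_manager.create_plan_directive(
    plan                  => 'LUNCH_PLAN',
    group_or_subplan      => 'OTHER_GROUPS',
    comment               => 'idle blockers get 5 minutes',
    max_idle_blocker_time => 300);

  dbms_resource_manager.submit_pending_area;
end;
/

alter system set resource_manager_plan = 'LUNCH_PLAN';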

Generating rowids

We have several posts on AskTom describing how to “carve up” a table into equi-sized chunks in order to (say) perform a task in parallel on that table.  Here is an example of one.

Much of this has nowadays been obsoleted by the DBMS_PARALLEL_EXECUTE package, but in either case one key point remains: when you generate rowids, there is no guarantee that a generated rowid is valid, or that it will return a row from the table, even if you used the xxx_EXTENTS views to build it.

Let’s look at an example.
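
To set the scene, here is a minimal sketch of the idea (the table name T is illustrative, and querying DBA_EXTENTS needs appropriate privileges): build a candidate rowid for the first slot of the first block of each extent with DBMS_ROWID, remembering that nothing guarantees a row actually exists at that address.


select dbms_rowid.rowid_create(
         rowid_type    => 1,                 -- extended rowid
         object_number => o.data_object_id,
         relative_fno  => e.relative_fno,
         block_number  => e.block_id,
         row_number    => 0) candidate_rid
from   dba_extents e,
       user_objects o
where  e.owner        = user
and    e.segment_name = 'T'
and    o.object_name  = 'T';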