SQL Plan Directives (SPDs) remind me of bind peeking about 10 years ago: a nice feature on paper that worked fine most of the time, but caused some major headaches on those occasions when it didn't. Luckily for bind peeking, social media wasn't that popular a decade ago, so it took a while for a troublemaking feature to "make a name" for itself; nowadays a couple of blog posts and many re-tweets are enough to make one notorious. Poor SPD!
DISCLAIMER: I happen to like both features a lot and think positively of them; I just admit the "transparent" behaviour was a little too ambitious.
I’m off to UKOUG again this year. It’s an awesome conference, and I’ll be doing three talks there.
On Monday at 3:30pm it's my first keynote talk, "12 Things You'll Love About the Oracle Database 12.2", followed at 6pm by "AskTom – One Year On".
On Tuesday at 3:30pm I'll be doing a talk for anyone wanting to come up to speed on all of the partitioning features, "Partitioning 101".
So in part 1 and part 2, we looked at creating tables with clustered data. If you're ready to climb aboard the attribute clustering train heading toward Fastville, you might want to take an existing table and cluster it. In part 2 we saw that we had to be extra careful with syntax, and the same rule applies when altering a table to cluster it. Let's start with our SOURCE_DATA table, which was not clustered.
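As a rough sketch of that alteration (the clustering column here is an illustrative choice, not necessarily the one from the original post), adding clustering to an existing table is a two-step affair: declare the clustering directive, then rebuild the segment so the existing rows are physically reorganised.

```sql
-- Declare linear-order clustering (OBJECT_ID chosen purely for illustration).
-- On its own this only influences future direct-path loads...
alter table source_data add clustering by linear order (object_id);

-- ...existing rows are untouched until the segment is rebuilt.
alter table source_data move;
```

Without the MOVE, the data already in the table stays exactly where it was, which is one of the syntax subtleties worth being careful about.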
In the previous post, I demonstrated attribute clustering by first building a randomised SOURCE_DATA table:
SQL> create table source_data as
  2  select d.*
  3  from dba_objects d
  4  where object_id is not null
  5  order by dbms_random.random;
The obvious question is then: "Why not just cluster the data by combining the two operations?", namely
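Presumably (this is a sketch of the idea, not the post's actual code) the combined operation would be a single CTAS that carries the clustering clause, so that the create, the cluster definition, and the load all happen in one statement:

```sql
-- Hypothetical combined operation: define the clustering and load the
-- data in one CTAS (column choice is illustrative).
create table t
  clustering by linear order (object_id)
as
  select d.*
  from   dba_objects d
  where  object_id is not null;
```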
One of the nice facilities in 12c is the ability to instruct the database to co-locate data during loading, in order to reduce resource consumption for subsequent queries.
Here’s an example of it in use. First I’ll create table SOURCE_DATA to mimic some randomised incoming data feed:
SQL> exec dbms_random.seed(0)

PL/SQL procedure successfully completed.

SQL> drop table source_data purge;
drop table source_data purge
           *
ERROR at line 1:
ORA-00942: table or view does not exist

SQL> create table source_data as
  2  select d.*
  3  from dba_objects d
  4  where object_id is not null
  5  order by dbms_random.random;

Table created.
So let’s now populate (create) our table T with a straight copy of the data from SOURCE_DATA, and index a column of interest:
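A minimal sketch of that step (the indexed column is an assumption; any selective column of interest would do):

```sql
-- Straight copy of the randomised feed into T.
create table t as
  select * from source_data;

-- Index a column of interest (OBJECT_ID chosen for illustration).
create index t_ix on t (object_id);
```

Because SOURCE_DATA was deliberately shuffled, rows for any given OBJECT_ID range will be scattered across many blocks of T, which is exactly the situation attribute clustering is meant to improve.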
We had a question on AskTom recently asking for the best way to update a new column on a very large table. Four in-place update options are at our disposal:
all of which will achieve the required outcome, but at a significant redo and undo cost. Also, depending on the size of the new column, such an update might create chaos with row migration: whilst unlikely, there is definitely the potential for every row to grow beyond the block space available to hold it in situ.
The OTN tour came to APAC this year, so it’s been a pleasure and privilege to be able to participate in some of the legs. Being Perth born and bred, I know all too well that any travel to Australia from … well… anywhere except Australia, is a long haul, so I’m very grateful to the array of overseas speakers who gave up their time and comfort to make the journey.
My first stop was Sydney and I was glad to get back there for a few reasons. One of them was to catch up with family and offload, oops, I mean “share” my young boys with their grandmother. The highlight of the trip for them of course was travelling back home to Perth unaccompanied :-)
An interesting suggestion came through on AskTom this week, which prompted the following exploration.
Let us assume you are populating a table with
INSERT INTO EMP SELECT * FROM SOURCE_EMPS
and it falls over with
ORA-12899: value too large for column
Capturing the data that caused that error is not trivial. Of course, we could edit the code to perform DML error logging, but that means changing the source code, which means change control, testing, red tape, etc. Is there a better way? Perhaps there is.
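For reference, the code-change route being avoided here is standard DML error logging: create an error table with DBMS_ERRLOG, then add a LOG ERRORS clause to the insert. The table names follow the example above; the error-table name is the DBMS_ERRLOG default.

```sql
-- One-off setup: creates ERR$_EMP, the default error log table for EMP.
exec dbms_errlog.create_error_log('EMP');

-- The amended insert: rows that raise ORA-12899 (and other errors)
-- land in ERR$_EMP instead of aborting the whole statement.
insert into emp
select * from source_emps
log errors into err$_emp ('load') reject limit unlimited;
```

It works, but it is precisely the kind of source change that drags in the change-control overhead mentioned above.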
… and thanks for all the fish.
Today I removed myself from the OTN ACE program. This isn’t a reflection on anything to do with the ACE program – quite the reverse, in fact – it’s because they’re introducing steps to ensure that the ACE Directors can justify their titles. Unfortunately, as anyone who has gone through (e.g.) ISO 9001 certification can tell you, quality assurance tends to translate into paperwork and ticking boxes – and while I can always find time to write up some interesting feature of Oracle, I really find it hard to prioritise time for filling in forms.
In the last 4 months I’ve failed to file my monthly list of relevant activities twice, failed to request funding for two of the international conferences I’ve spoken at, and failed to submit claims against the two for which I had requested and received funding approval – so there really was no hope of me being motivated to collect all the extra details that the new regime requires.
I had an observation come to me last week about PL/SQL and populating nested tables.
“The BULK COLLECT INTO clause cannot append results to an existing collection. Instead, it silently truncates the target table each time.”
This is true. However, if you need to use BULK COLLECT to append results into a single nested table data structure, it is trivial to come up with a workaround using the MULTISET syntax.
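A minimal sketch of that workaround (the type, query, and loop bounds are all illustrative): fetch each batch into a scratch collection, then append it to an accumulator with MULTISET UNION ALL.

```sql
declare
  type num_list is table of number;
  l_all   num_list := num_list();  -- accumulated results
  l_chunk num_list;                -- scratch target for each fetch
begin
  for i in 1 .. 3 loop
    -- Each BULK COLLECT overwrites l_chunk entirely...
    select object_id bulk collect into l_chunk
    from   all_objects
    where  rownum <= 10;

    -- ...so append the batch to the accumulator instead.
    l_all := l_all multiset union all l_chunk;
  end loop;
  dbms_output.put_line('Total rows collected: ' || l_all.count);
end;
/
```

The accumulator grows with every iteration, neatly sidestepping the truncate-on-fetch behaviour of BULK COLLECT itself.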