Some people get very excited about roles, and quickly the number of roles proliferates to huge numbers… until this happens:
ORA-28031: maximum of 148 enabled roles exceeded
But in helping someone out on AskTom, I just found a nice touch in 18c. I had granted my account 200 roles (since the max_enabled_roles parameter is capped at 148), fully expecting to get the standard error when attempting to connect. But I didn’t – all was fine. You can see what is going on via the following demo.
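Here’s a sketch of that kind of setup (the DEMO_ROLE and DEMO_USER names are illustrative); on earlier versions, the connect attempt is where ORA-28031 would appear.

begin
  for i in 1 .. 200 loop
    execute immediate 'create role demo_role_'||i;
    execute immediate 'grant demo_role_'||i||' to demo_user';
  end loop;
end;
/

-- now connect as the grantee and see how many roles are active
conn demo_user/demo_user
select count(*) from session_roles;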
If you’re using AQ, then it’s easy to set up simple enqueue and dequeue facilities on your local database to provide all sorts of asynchronous-style processing in your applications. As long as your applications are designed and built to handle it, the “fire and forget” model, keeping user applications responsive while all of the “heavy lifting” is done in the background, is a very attractive one.
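Here’s a minimal sketch of that kind of local setup, assuming a simple RAW payload (all of the names are illustrative):

begin
  -- a queue table and queue to hold our messages
  dbms_aqadm.create_queue_table(
    queue_table        => 'demo_qt',
    queue_payload_type => 'RAW');
  dbms_aqadm.create_queue(
    queue_name  => 'demo_q',
    queue_table => 'demo_qt');
  dbms_aqadm.start_queue('demo_q');
end;
/

declare
  l_enq_opt  dbms_aq.enqueue_options_t;
  l_deq_opt  dbms_aq.dequeue_options_t;
  l_props    dbms_aq.message_properties_t;
  l_msgid    raw(16);
  l_payload  raw(100) := utl_raw.cast_to_raw('do this in the background');
begin
  -- the application "fires" the message and gets straight back to the user
  dbms_aq.enqueue('demo_q', l_enq_opt, l_props, l_payload, l_msgid);
  commit;

  -- later, a background session dequeues it and does the heavy lifting
  dbms_aq.dequeue('demo_q', l_deq_opt, l_props, l_payload, l_msgid);
  commit;
end;
/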
You can also use AQ to achieve the same concept across multiple databases, and the database will take care of propagating the messages from one database to the other. Here’s a simple demo of that.
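In outline, the extra moving parts look something like this (a sketch: REMOTE_DB is an assumed database link, and the source queue must live in a multi-consumer queue table):

begin
  -- a subscriber whose address is a queue at the other database
  dbms_aqadm.add_subscriber(
    queue_name => 'src_q',
    subscriber => sys.aq$_agent(null, 'remote_q@remote_db', null));

  -- let the database take care of shipping the messages across
  dbms_aqadm.schedule_propagation(
    queue_name  => 'src_q',
    destination => 'remote_db');
end;
/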
I’m off to UKOUG again this year. It’s an awesome conference, and I’ll be doing three talks there.
On Monday at 3:30 it’s my first keynote talk “12 Things You’ll Love About the Oracle Database 12.2”, followed up at 6pm by “AskTom – One Year On”.
On Tuesday at 3:30 I’ll be doing a talk for anyone wanting to come up to speed on all of the partitioning features, with “Partitioning 101”.
So in part 1 and part 2, we looked at creating tables with clustered data. If you’re ready to climb aboard the attribute clustering train heading toward Fastville, you might want to take an existing table and cluster it. In part 2 we saw how we had to be extra careful with syntax. The same rule applies when altering a table to cluster it. Let’s start with our SOURCE_DATA table, which was not clustered.
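For reference, the syntax we’re working toward looks something like this (a sketch; the clustering column is illustrative, and adding the clustering attribute does not reorganise existing rows by itself):

alter table source_data
  add clustering by linear order ( object_id );

-- existing rows are only physically clustered once the table is moved
alter table source_data move;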
In the previous post, I demonstrated attribute clustering by first building a SOURCE_DATA table holding data in a deliberately random order:
SQL> create table source_data as
  2  select d.*
  3  from dba_objects d
  4  where object_id is not null
  5  order by dbms_random.random;
The obvious question is then “Why not just cluster the data by combining the two operations?”, namely:
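Something along these lines (a sketch; the clustering columns here are purely illustrative):

create table t
clustering by linear order ( owner, object_name )
as select * from source_data;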
One of the nice facilities in 12c is the ability to instruct the database to co-locate data during loading, in order to reduce resource consumption for subsequent queries.
Here’s an example of it in use. First I’ll create table SOURCE_DATA to mimic some randomised incoming data feed:
SQL> exec dbms_random.seed(0)

PL/SQL procedure successfully completed.

SQL> drop table source_data purge;
drop table source_data purge
           *
ERROR at line 1:
ORA-00942: table or view does not exist

SQL> create table source_data as
  2  select d.*
  3  from dba_objects d
  4  where object_id is not null
  5  order by dbms_random.random;

Table created.
So let’s now populate (create) our table T with a straight copy of the data from SOURCE_DATA, and index a column of interest:
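That step looks something like this (picking OBJECT_ID as the indexed column is an assumption on my part):

create table t as
  select * from source_data;

create index t_ix on t ( object_id );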
We had a question on AskTom recently asking for the best way to update a new column on a very large table. Four in-place update options are at our disposal, broadly: a conventional UPDATE, a batched PL/SQL update, DBMS_PARALLEL_EXECUTE, and parallel DML. All of these will achieve the required outcome, but they have a significant redo and undo cost associated with them. Also, depending on the size of the new column, such an update might create chaos with row migration, because whilst unlikely, there is definitely the potential for every row to grow beyond the free space available in its block, so that it can no longer be held in situ.
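As a flavour of what the in-place options involve, here is roughly what the DBMS_PARALLEL_EXECUTE route might look like (a sketch; BIG_TABLE and its columns are hypothetical):

begin
  dbms_parallel_execute.create_task('add_col_task');

  -- carve the table into rowid ranges of roughly 10000 rows each
  dbms_parallel_execute.create_chunks_by_rowid(
    task_name   => 'add_col_task',
    table_owner => user,
    table_name  => 'BIG_TABLE',
    by_row      => true,
    chunk_size  => 10000);

  -- run the update chunk by chunk, 8 jobs at a time
  dbms_parallel_execute.run_task(
    task_name      => 'add_col_task',
    sql_stmt       => 'update big_table set new_col = upper(old_col) '||
                      'where rowid between :start_id and :end_id',
    language_flag  => dbms_sql.native,
    parallel_level => 8);

  dbms_parallel_execute.drop_task('add_col_task');
end;
/

Each chunk commits as it completes, which keeps the undo demand of any single transaction small, but the total redo and undo across the whole exercise is still substantial.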
The OTN tour came to APAC this year, so it’s been a pleasure and privilege to be able to participate in some of the legs. Being Perth born and bred, I know all too well that any travel to Australia from … well… anywhere except Australia, is a long haul, so I’m very grateful to the array of overseas speakers who gave up their time and comfort to make the journey.
My first stop was Sydney and I was glad to get back there for a few reasons. One of them was to catch up with family and offload, oops, I mean “share” my young boys with their grandmother. The highlight of the trip for them of course was travelling back home to Perth unaccompanied :-)
An interesting suggestion came through on AskTom this week, which prompted the following exploration.
Let us assume you are populating a table with
INSERT INTO EMP SELECT * FROM SOURCE_EMPS
and it falls over with
ORA-12899: value too large for column
Capturing the data that caused that error is not trivial. Of course, we could edit the code to perform DML error logging, but that means changing the source code, which means change control, testing, red tape etc etc etc. Is there a better way? Perhaps there is.
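For reference, the code change we’re trying to avoid would look roughly like this (a sketch; ERR$_EMP is the default log table name generated by DBMS_ERRLOG):

-- one-off step: create the error log table for EMP
exec dbms_errlog.create_error_log('EMP')

insert into emp
select * from source_emps
log errors into err$_emp ('emp load') reject limit unlimited;

Any rows that fail (for example, with ORA-12899) land in ERR$_EMP along with their offending column values, rather than aborting the whole insert.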
… and thanks for all the fish.
Today I removed myself from the OTN ACE program. This isn’t a reflection on anything to do with the ACE program – quite the reverse, in fact – it’s because they’re introducing steps to ensure that the ACE Directors can justify their titles. Unfortunately, as anyone who has gone through (e.g.) ISO 9001 certification can tell you, quality assurance tends to translate into paperwork and ticking boxes – and while I can always find time to write up some interesting feature of Oracle, I really find it hard to prioritise time for filling in forms.
In the last 4 months I’ve failed to file my monthly list of relevant activities twice, failed to request funding for two of the international conferences I’ve spoken at, and failed to submit claims against the two for which I had requested and received funding approval – so there really was no hope of me being motivated to collect all the extra details that the new regime requires.