I had someone say to me at an event recently: “We’re going to upgrade to 18c, because 19c is new and is probably less stable”.
Let me sum up that sentiment simply: It’s Wrong.
Now, don’t get me wrong. I am not claiming that every Oracle release is perfect, contains zero bugs, never has a regression, will mow your lawn, take your kids to school, clean your house and sort out all the climate change issues in the world.
A couple of years back (wow… time flies!) I made a video and a post about the de-support of UTL_FILE_DIR in 18c. This was a good thing, because the number of people opting for “utl_file_dir=*” in their init.ora file, and hence opening themselves up to all sorts of risks, seemed to be large! (The post elaborates on this with an example of erasing your database with a short UTL_FILE script.)
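For context, the supported replacement for UTL_FILE_DIR is a directory object, which confines file access to a single, controlled OS path. A minimal sketch, assuming illustrative names and the necessary privileges:

-- Map a directory object to one controlled OS path (illustrative names)
create directory app_files as '/u01/app/files';
grant read, write on directory app_files to app_user;

-- UTL_FILE then references the directory object, not a raw path
declare
  l_file utl_file.file_type;
begin
  l_file := utl_file.fopen('APP_FILES', 'demo.txt', 'w');
  utl_file.put_line(l_file, 'hello');
  utl_file.fclose(l_file);
end;
/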
In the world of DevOps, continuous integration and repeatable test cases, the demand for being able to quickly spin up fresh copies of a database has become far more common. This is one of the many great use cases for pluggable databases, with all of the powerful cloning facilities available. In particular, now that you can take advantage of pluggable databases without incurring additional license fees, there are some great opportunities there… but that is the topic for another post.
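As a quick taste of how lightweight those clones can be, here is a minimal sketch, assuming made-up PDB names and that you are connected to the root container with adequate privileges:

-- Clone an existing PDB to get a disposable test environment
-- (assumes Oracle Managed Files; otherwise add a file_name_convert clause)
create pluggable database pdb_test from pdb_dev;
alter pluggable database pdb_test open;

-- When testing is done, throw the clone away
alter pluggable database pdb_test close immediate;
drop pluggable database pdb_test including datafiles;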
One of the pieces of advice that I often see on the ’net is that undo space is somehow this incredibly precious thing, and as a consequence, one should always keep the volume of uncommitted changes in the database small.
Personally I think that is baloney. (Ed: in reality, as an Australian I have a slightly more powerful choice of term, but let’s keep things PG-rated.)
Let’s start with this somewhat odd looking result. I have an inline function that returns a random number between 0 and 20, and I call that for each row in ALL_OBJECTS, then extract just those rows for which the generated random number is 10. Seems simple enough… but why do I get results for which the value of the second column is most certainly not 10?
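A minimal sketch of the kind of query described (function and alias names are my own). After the optimizer merges the inline view, the non-deterministic function can be evaluated separately for the WHERE clause and the select list, so the displayed value need not be 10:

with
  function my_rnd return number is
  begin
    return round(dbms_random.value(0,20));
  end;
select *
from (
  select object_name, my_rnd val
  from   all_objects
)
where val = 10
/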
In recent versions of the Oracle database, we’ve had the ability to convert between a System Change Number (SCN) and the approximate time to which that SCN pertains. These functions are unsurprisingly called SCN_TO_TIMESTAMP and TIMESTAMP_TO_SCN. The only potential misnomer here is that even though the functions are called “timestamp” and return a TIMESTAMP datatype, on most platforms you are going to notice that the granularity doesn’t run down into fractions of a second:
SQL> select scn_to_timestamp(14816563713652) from dual;
SCN_TO_TIMESTAMP(14816563713652)
---------------------------------------------------------
08-JUN-19 02.30.59.000000000 AM
This all looks great until you start poking around too far into the past, and you end up in territory like this:
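For an SCN old enough that the SCN-to-time mapping is no longer retained by the database, you typically get an error along these lines (illustrative output, using an obviously ancient SCN):

SQL> select scn_to_timestamp(1) from dual;
select scn_to_timestamp(1) from dual
       *
ERROR at line 1:
ORA-08181: specified number is not a valid system change number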
OK, let’s be honest right up front. The motivation for this post is solely to be able to roll out a tongue-twisting blog post title. But hopefully there’s some value in here for you as well if you’re hitting the error:
ORA-32034: unsupported use of WITH clause
First some background. A cool little enhancement to the WITH clause came along in 12c that allowed PL/SQL functions to be defined within the scope of the executing SQL statement. To see the benefit of this, consider the following example that I have a personal affinity with (given my surname).
Let’s say I’ve allowed mixed-case data in a table that holds names.
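Here is a hedged sketch of the idea (table, data and formatting rule are all illustrative). A plain INITCAP would render my surname as “Mcdonald”, whereas a function defined right in the WITH clause can apply smarter rules without needing a stored object:

with
  function format_name(p_name varchar2) return varchar2 is
  begin
    -- illustrative rule: handle the "Mc" prefix that INITCAP gets wrong
    if lower(p_name) like 'mc%' then
      return 'Mc' || initcap(substr(p_name, 3));
    end if;
    return initcap(p_name);
  end;
select format_name(surname) as surname
from   names
/

Note that this construct is only directly supported at the top level of a query; embed it in (say) an INSERT statement and you hit ORA-32034, the documented workaround being the WITH_PLSQL hint on the top-level statement.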
I hesitated to publish this example, because publishing it may be interpreted as an endorsement of this approach, and it certainly isn’t. Over the years there have been plenty of articles describing the long-term pain that typically comes from generic data models. Here are a few to whet your appetite.
Here’s a simple little PL/SQL block where we call an inner procedure PARAMETER_TESTER from its parent block. Pay particular attention to the parameter we pass to the procedure, and its value throughout the execution of that procedure.
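A minimal sketch of that kind of block, using the PARAMETER_TESTER name from above but with my own variable names. Since PL/SQL passes IN parameters by reference, a change to the underlying variable mid-execution is visible through the parameter:

set serveroutput on

declare
  l_value varchar2(20) := 'ORIGINAL';

  procedure parameter_tester(p_param in varchar2) is
  begin
    dbms_output.put_line('on entry: ' || p_param);
    -- change the variable that was passed as the actual parameter
    l_value := 'CHANGED';
    dbms_output.put_line('after   : ' || p_param);
  end;
begin
  parameter_tester(l_value);
end;
/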
I had a request from a client a while back regarding masking of data. They had an application with sensitive data in the Production environment (where access and audit were very tightly controlled), but the issue was how to respect that sensitivity in non-Production environments whilst still preserving full-size data for application testing.
After some conversations about requirements, it turned out that since (even in non-Production environments) all access to application components was logged and audited, the issue was simply protecting against “inadvertent” access to sensitive data. For example, in the application, if I searched for “males with black hair with an interest in technology” I should never see the name “Connor McDonald” on screen in the results, because simply viewing that data could be treated as a breach of privacy.
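One built-in facility aimed at exactly this kind of “never show the real value on screen” requirement is Data Redaction, although I can’t claim it is what was ultimately used here. A sketch under assumed schema, table and column names (note that Data Redaction is part of the separately licensed Advanced Security option):

begin
  dbms_redact.add_policy(
    object_schema => 'APP',
    object_name   => 'PEOPLE',
    policy_name   => 'mask_names',
    expression    => '1=1',            -- redact for every query in this illustration
    column_name   => 'SURNAME',
    function_type => dbms_redact.full  -- replace the value with the default mask
  );
end;
/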