We had an interesting AskTom question recently, about why certain TRUNC commands would generate an error when applied to a TIMESTAMP value. At first glance, TRUNC seems fine.
SQL> select trunc(localtimestamp,'MM') to_the_month from dual;

TO_THE_MO
---------
01-OCT-16

1 row selected.

SQL> select trunc(localtimestamp,'YYYY') to_the_year from dual;

TO_THE_YE
---------
01-JAN-16

1 row selected.

SQL> select trunc(localtimestamp,'DD') to_the_day from dual;

TO_THE_DA
---------
20-OCT-16

1 row selected.
But the moment you try to apply a TRUNC down to the second, things go wrong.
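As a hedged sketch of one common workaround (not necessarily the one the post goes on to recommend): since DATE values have whole-second precision, casting the TIMESTAMP to a DATE drops the fractional seconds, which is effectively a truncation to the second.

```sql
-- sketch: TRUNC with 'MM'/'YYYY'/'DD' works via implicit conversion to DATE,
-- but to get "truncate to the second" behaviour you can cast instead --
-- a DATE cannot hold fractional seconds, so they are discarded
select cast(localtimestamp as date) to_the_second from dual;
```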
There was a Twitter discussion about using context variables, accessed via SYS_CONTEXT, within queries. It’s an easy means of passing parameters into a view. The question that got asked was: does a SYS_CONTEXT variable act exactly like a bind variable? Let’s take a look at a simple demo to explore that.
We’ll create a table with 1 row with a value of ‘a’, and 100,000 rows with a value of ‘b’.
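A minimal sketch of that setup (table, index and context names are assumptions for illustration; a real context also needs a CREATE CONTEXT statement and a trusted package to set its values):

```sql
-- 1 row of 'a', 100,000 rows of 'b', heavily skewed on purpose
create table t as
  select case when rownum = 1 then 'a' else 'b' end x
  from   dual
  connect by level <= 100001;

create index ix on t ( x );

-- "MY_CTX" is a hypothetical context; the question is whether the
-- optimizer treats this like a bind variable when choosing a plan
select count(*) from t where x = sys_context('MY_CTX','VAL');
```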
When you are moving from another database to Oracle, in a perfect world you’d set aside plenty of time to refactor all of your code and re-engineer your applications to get the maximum benefit out of the new database technology. But there are not many “perfect worlds” out there in the IT landscape. So sometimes we have to “make do” with the time and budget constraints that are placed upon us. To assist in migrations without having to change every piece of SQL code, you can use our translation tool, which will intercept SQL as it is passed to the database and replace it with SQL of your own choosing.
Here’s a little demo of the feature called “SQL Translation”. First we’ll create a user DEMO that our “application” will be running under. It has two tables – a copy of EMP, and an empty copy of EMP (called
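As a hedged sketch of the SQL Translation plumbing (profile name, statements and the second table name here are assumptions, not the author’s actual demo):

```sql
-- create a translation profile and register a statement-level rewrite
begin
  dbms_sql_translator.create_profile('DEMO_PROFILE');
  dbms_sql_translator.register_sql_translation(
    profile_name    => 'DEMO_PROFILE',
    sql_text        => 'select * from emp',
    translated_text => 'select * from emp_copy');
end;
/

-- activate the profile for this session
alter session set sql_translation_profile = DEMO_PROFILE;

-- from here, "select * from emp" is silently rewritten by the framework
```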
Even in the new world of stateless web applications, from time to time we still come across the “lunch break locking” problem. That is, someone or something commences a transaction which locks some rows, and then the session goes idle without committing or rolling back the transaction. The metaphor commonly used is that of someone locking a row in a table and then going out for lunch, totally oblivious to the havoc they have caused until they return.
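A minimal two-session sketch of the problem (table EMP and the key value are assumptions):

```sql
-- Session 1: lock a row, then "go to lunch" without ending the transaction
update emp set sal = sal where empno = 7369;
-- (no commit, no rollback ... session goes idle)

-- Session 2: this statement now hangs, waiting on the row lock
-- held by the idle session above
update emp set sal = sal + 100 where empno = 7369;
```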
We have several posts on AskTom describing how to “carve up” a table into equi-sized chunks in order to (say) perform a task on that table in parallel. Here is an example of one.
Much of this has nowadays been made obsolete by the DBMS_PARALLEL_EXECUTE package, but in either case one key point remains: when you are generating rowids, there is no guarantee that a generated rowid is valid, or that it will return a row from the table, even if you used the xxx_EXTENTS views to build that rowid.
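For reference, a hedged sketch of the DBMS_PARALLEL_EXECUTE approach (task and table names are assumptions):

```sql
-- let the package carve the table into rowid ranges for us
begin
  dbms_parallel_execute.create_task('CHUNK_DEMO');
  dbms_parallel_execute.create_chunks_by_rowid(
    task_name   => 'CHUNK_DEMO',
    table_owner => user,
    table_name  => 'T',
    by_row      => true,
    chunk_size  => 10000);
end;
/

-- the generated rowid ranges can then be inspected
select chunk_id, start_rowid, end_rowid
from   user_parallel_execute_chunks
where  task_name = 'CHUNK_DEMO';
```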
Let’s look at an example.
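As a hedged sketch of the kind of rowid generation being discussed (segment name T is an assumption), rowids can be manufactured from the extent map with DBMS_ROWID:

```sql
-- build a low/high rowid pair per extent; nothing guarantees that
-- every block in the range actually contains rows of the table
select dbms_rowid.rowid_create(1, o.data_object_id,
         e.relative_fno, e.block_id, 0)                      lo_rid,
       dbms_rowid.rowid_create(1, o.data_object_id,
         e.relative_fno, e.block_id + e.blocks - 1, 32767)   hi_rid
from   dba_extents e,
       dba_objects o
where  e.segment_name = 'T'
and    e.owner        = user
and    o.object_name  = e.segment_name
and    o.owner        = e.owner;
```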
Information Technology today is fantastic. Never before have we had so much capability to collect, store, and analyze data. Never before have we had so many wonderful tools for presenting data and our analysis of data.
Today, data represents the “crown jewels” of IT. All of our wonderful systems are less useful if the data is incorrect or inaccurate.
Although I normally use the job or scheduler facility to run database-centric style processes, most commonly PL/SQL programs, there is nothing to stop you from using the scheduler to gain control over tasks that might normally need to be done outside of the database. And of course, in the world of virtualization, cloud and other such innovations, the ability to initiate and perform tasks from within the database becomes increasingly useful, since often access to the underlying OS layer is either prohibited or restricted.
For example, here’s a simple demo of performing an original-mode export straight out of the scheduler:
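A hedged sketch of what such a scheduler job might look like (the binary path, job name, credentials and export arguments are all assumptions for illustration):

```sql
-- an EXECUTABLE job that calls the original export utility directly
begin
  dbms_scheduler.create_job(
    job_name            => 'RUN_EXP',
    job_type            => 'EXECUTABLE',
    job_action          => '/u01/app/oracle/product/12.1/db/bin/exp',
    number_of_arguments => 1,
    enabled             => false);

  -- pass the export parameters as a single argument (hypothetical values)
  dbms_scheduler.set_job_argument_value('RUN_EXP', 1,
    'scott/tiger file=/tmp/emp.dmp tables=emp');

  dbms_scheduler.enable('RUN_EXP');
end;
/
```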
In my previous post we saw that the partition number in the execution plan might not align with the partition position in the data dictionary when it comes to interval partitions. As we saw, the partition numbers are preordained based on the low boundary and the interval size.
That also creates an interesting scenario that can catch people out – you might exceed the allowable number of partitions with an empty table!
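A hedged sketch of how that can happen: because partition numbers are preordained by the low boundary and interval size, a sufficiently large key value maps to a partition number beyond the limit of 1024K-1 (1048575) partitions per table, so the statement fails even though nothing was ever stored.

```sql
-- interval of 1, so the partition number grows in step with the value
create table t2 ( x number )
  partition by range (x) interval (1)
  ( partition p0 values less than (1) );

-- this value maps past the 1048575-partition limit, so the insert
-- errors out ... and the table is still empty
insert into t2 values ( 2000000 );
```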
This issue came through from an AskTom question, that turned out to be a known bug. In my talks on optimizer statistics, I’ve often mentioned the benefit of using pending statistics as a risk mitigation facility, and since this bug involved pending statistics, I thought I would bring it to your attention. The issue occurs when optimizing a query that accesses a single partition via pending statistics.
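For context, a hedged sketch of the standard pending-statistics workflow being referred to (table name T is an assumption):

```sql
-- keep freshly gathered stats as "pending" rather than publishing them
exec dbms_stats.set_table_prefs(user, 'T', 'PUBLISH', 'FALSE')
exec dbms_stats.gather_table_stats(user, 'T')

-- opt this session in to the pending stats and test critical queries
alter session set optimizer_use_pending_statistics = true;

-- ... run the workload ... and if all is well, publish them
exec dbms_stats.publish_pending_stats(user, 'T')
```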
Things get a little interesting in the data dictionary when it comes to interval partitions. Consider the following example:
SQL> create table t
  2  partition by range(x) interval(1)
  3  ( partition p0 values less than (1) )
  4  as
  5  select rownum x from dual connect by level <= 5;

Table created.

SQL> select count(*)
  2  from user_tab_partitions
  3  where table_name = 'T';

  COUNT(*)
----------
         6
So our table has 6 partitions. Let’s now check the execution plan for a sample query.