Recently I’ve seen some not-so-smart optimizer behaviour: one query took a long time to parse and ended with an error after hitting PGA_AGGREGATE_LIMIT within a few minutes; another query was parsed for ages while using a reasonable (under 2G :)) amount of PGA, and could still hit PGA_AGGREGATE_LIMIT, but only after much more time – up to an hour.
Both cases were similar and involved queries accessing views whose code is generated by an application using lots of IN lists and other OR conditions. Both are really ugly SQL statements with a text length of ~100K. When Oracle tried to parse them it took a lot of time, and the parse attempt either failed quickly with ORA-4036 or hung for a long time and then failed. Strangely, the incident trace file generated for ORA-4036 doesn’t include a PGA heap breakdown, so you have to manually enable a PGA heapdump on error to get an idea of what is taking up memory. Here’s what I found in there:
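One way to capture that breakdown is to attach a heapdump to the error event. A hedged sketch follows – the exact heapdump level bits vary by Oracle version, and level 1 here only requests a PGA heap summary, so you may want a higher composite level for full detail:

-- dump the PGA heap whenever ORA-4036 is raised
SQL> alter system set events '4036 trace name heapdump level 1';

-- ... reproduce the failing parse, then switch the event off again
SQL> alter system set events '4036 trace name heapdump off';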
Here’s a very simple example: a table called PARENT that is (surprise, surprise) the parent in a referential-integrity relationship to a (drum roll for my choice of name) CHILD table:
SQL> create table parent ( p int, constraint PAR_PK primary key (p) );

Table created.

SQL> create table child ( c int,
  2    p int
  3  );

Table created.

SQL> alter table child add constraint fk1 foreign key ( p ) references parent ( p );

Table altered.
That is all as we would expect. Similarly, if I inadvertently try to add the same foreign key constraint, I’ll get an error:
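A sketch of what that looks like – the constraint name FK2 is my own placeholder, and the error Oracle raises for a duplicate referential constraint is ORA-02275:

SQL> alter table child add constraint fk2 foreign key ( p ) references parent ( p );
alter table child add constraint fk2 foreign key ( p ) references parent ( p )
                                *
ERROR at line 1:
ORA-02275: such a referential constraint already exists in the table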
A couple of people have requested that I explain how to install the entire random_ninja and testdata_ninja packages manually. I have created a gist here with the full and complete order of the files: Full install list. So download the random_ninja zipfile and the testdata_ninja zipfile, and follow the order in the gist to install all the code.
Colleague Jeff Smith published an interesting post the other day about his “rules and regulations” for blogging, but the overriding theme (Ed: – this is my opinion, I’m not speaking for Jeff) was that the “what” he blogs about is anything he’s passionate about, and the “when” is whenever he feels inspired to do so.
That got me thinking about blogging in general. I think it is safe to say
(Because not to do so means you’re in the wrong career)
Here’s one of those little improvements in 12c (including 12.1) that will probably end up being described as a “little-known feature” in about three years’ time. Arguably it’s one of those little things that no-one should care about, because it’s not the sort of thing you should do on a production system, but that doesn’t mean it won’t be seen in the wild.
Rather than simply state the feature I’m going to demonstrate it, starting with a little code to build a couple of tables with referential integrity:
In the previous three parts of this series, a lot of the preparation work needed to configure Data Guard was performed. In this part of the mini-series it all comes to fruition: using the Data Guard broker, a switchover operation will be performed. A couple of new features in 12c make this easier. According to the “Changes in This Release for Oracle Data Guard Concepts and Administration” chapter of the 12.1 Data Guard Concepts and Administration guide:
When [you, ed.] perform a switchover from an Oracle RAC primary database to a physical standby database, it is no longer necessary to shut down all but one primary database instance.
I have always wanted to test that in a quiet moment…
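With the broker in place, the switchover itself comes down to a single DGMGRL command. A sketch, where the connect string and the standby database name STDBY are my own placeholders:

DGMGRL> connect sys@primary
Password:
Connected as SYSDBA.
DGMGRL> switchover to 'STDBY';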
To me, ‘cloud computing’ is renting a compute resource to perform a task. In order to use that compute resource, you need to instruct it to do something, which is typically done via the network. If the task the compute resource needs to fulfil is being an application server, or a database client, or both (as in the case of an application server that uses an Oracle database), then the network latency between the database client and the database server is a critical property.
When dealing with a RANGE partitioned table, the defined partitions dictate all of the data that can be placed into the table. For example, if I have a SALES table as below
SQL> create table SALES
  2  ( cal_year date,
  3    txn_id int,
...
...
 24  )
 25  partition by range ( cal_year )
 26  (
 27    partition p_low values less than ( date '2000-01-01' ),
 28    partition p2000 values less than ( date '2001-01-01' ),
...
...
 34    partition p2016 values less than ( date '2017-01-01' )
 35  );

Table created.
then the existing partitions define a natural upper bound on the value of CAL_YEAR that I can insert into the table.
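A sketch of what happens at that upper bound – assuming the elided columns of SALES are nullable, inserting a 2017 date with no partition defined above P2016 raises ORA-14400:

SQL> insert into SALES ( cal_year, txn_id ) values ( date '2017-06-01', 1 );
insert into SALES ( cal_year, txn_id ) values ( date '2017-06-01', 1 )
            *
ERROR at line 1:
ORA-14400: inserted partition key does not map to any partition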
When tearing down an AWS Delphix Trial, we run the following command with Terraform:
I’ve mentioned before that every time I execute this command, I suddenly feel like I’m in control of the Death Star in Star Wars: