My first OpenWorld was in Australia in… well… I’m not sure when, but it might even have been the late 1990s. Time flies.
But 2016 will be my first OpenWorld as an Oracle employee…and hence, I’ll be busy
Hopefully you can come along to some or all of the sessions I’m involved in…or you can probably catch me during the week at the OTN lounge.
I’ll blog more shortly on how I think you can have an awesome OpenWorld… because I know I will!
I saw this on an AskTom question today, answered by my colleague Chris. Check out this simple example:
SQL> create table T (
  2    x int default 1,
  3    y int default 1
  4    ,z int);

Table created.
It looks like I’ve assigned a default of “1” to both X and Y. But let’s now dump out the default definition from the dictionary.
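A quick way to look is to query the dictionary directly — a sketch, relying on the fact that DATA_DEFAULT stores the default text verbatim as you typed it, whitespace and all:

```sql
SQL> select column_name, data_default
  2  from   user_tab_columns
  3  where  table_name = 'T';
```

Because the stored text is the literal source between “default” and the next comma, the two “identical” defaults on X and Y can come back looking different.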
I was reading a very interesting article on Uber’s move from Postgres to MySQL. I really like it when IT professionals and/or companies take the time to explain their technology decisions. It’s a brave thing to do, because it’s easy for people to jump on the bashing bandwagon (“Ha ha … Company X chose Y and now they’re bust” etc etc). It’s the same reason you rarely see detailed customer reference information about the technology they are using, or how they are succeeding or failing. It’s generally kept pretty quiet. So for Uber engineering to be open about it is impressive, and a lesson for us all.
Not being as familiar with either Postgres or MySQL, one statement really caught my attention (colour emphasis mine):
This is a blog not related to Oracle products in any way.
This post is specific to Apple AirPort Extreme and Express wifi routers. However, in general: if you have multiple (unix/linux) servers, it makes sense to centralise the (sys)logging of these servers, in order to get a better overview of what is happening on them. I would go as far as saying that if you don’t, you are simply not doing it right.
The central logging target can be another syslog daemon receiving the logging, but there are many more products that can receive logging, such as splunk, graylog, logstash and so on. Because this blogpost is about my home wifi routers, I use the simple and limited Synology “Log Center” daemon.
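For ordinary unix/linux servers (as opposed to the routers themselves), forwarding to a central collector can be as little as one rsyslog rule per client; the file path and collector address here are just assumed examples:

```
# /etc/rsyslog.d/forward.conf  (assumed path)
# forward everything to the central collector over UDP port 514
*.* @192.168.1.10:514
```

Use `@@` instead of `@` for TCP if you want delivery to be more reliable than plain UDP.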
Before LATERAL and CROSS APPLY were added (exposed to us) in 12c, a common technique to do correlated joins was using the TABLE/MULTISET technique.
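As a sketch of that older idiom (using the standard DEPT/EMP demo tables), MULTISET builds a collection correlated to each outer row, which TABLE then expands back into rows:

```sql
SQL> select d.dname, e.column_value as ename
  2  from   dept d,
  3         table( cast( multiset(
  4                  select e1.ename
  5                  from   emp e1
  6                  where  e1.deptno = d.deptno )
  7                as sys.odcivarchar2list ) ) e;
```

In 12c, the same correlated join can be written far more directly with LATERAL or CROSS APPLY.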
OpenWorld is just around the corner, and the Ask Tom team will be involved in a number of panels where you can chat to us, ask questions, debate topics and basically have a relaxed 45mins during all the frenzied activity that is OpenWorld. So if you’ve got any questions you would like answered “face to face”, rather than via Ask Tom, either drop them as a comment here, or feel free to post them on the AskTom site and just annotate it in some way that lets us know you’d like to talk about it on the panels during OpenWorld.
See you in San Francisco!
You would think that (with the exception of the V$ tables which are predominantly memory structures reflecting the state of various parts of the database instance) a query on a read-only standby database would have absolutely no interaction with the primary. After all, the standby database needs to be able to run independently of the primary should that primary database be down, or destroyed.
But there’s an exception to the rule. Consider the following example. I create a sequence on my primary database.
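A minimal sketch of the setup (the sequence name is purely illustrative):

```sql
SQL> create sequence seq_test;

Sequence created.
```

Then, over on the read-only standby, try selecting `seq_test.nextval from dual` and see what happens.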
As most of us know, with LAG and LEAD, or more generally any analytic function that may extend past the boundary of the window it is operating on, you can get null as a result.
Here’s a trivial example
SQL> create table t as
  2  select rownum x
  3  from dual
  4  connect by level <= 10;

Table created.

SQL>
SQL> select x, lag(x) over ( order by x ) as lag_Test
  2  from t;

         X   LAG_TEST
---------- ----------
         1
         2          1
         3          2
         4          3
         5          4
         6          5
         7          6
         8          7
         9          8
        10          9

10 rows selected.
We get null for the first row, because we cannot lag “below” x=1 because there is no such value. That is of course trivially solved with an NVL, for example:
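A sketch of that fix, using 0 as the substitute value:

```sql
SQL> select x, nvl(lag(x) over ( order by x ), 0) as lag_test
  2  from t;
```

LAG also takes an optional third argument that supplies the default directly, e.g. `lag(x, 1, 0) over (order by x)`, which avoids the NVL altogether.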
An AskTom contributor brought to my attention that direct mode insert on index organized tables now appears possible in 12c. We can see the difference by running a simple script in both v11 and v12.
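A sketch of such a script (the table name and row count are just illustrative); if the APPEND hint is honoured, querying the table before committing should raise an error:

```sql
SQL> create table t_iot ( x int primary key, y int )
  2  organization index;

SQL> insert /*+ APPEND */ into t_iot
  2  select rownum, rownum from dual connect by level <= 100;

SQL> select count(*) from t_iot;
```

On v11 the final query returns normally, because the hint is silently ignored for IOTs; on v12 you would expect ORA-12838 (“cannot read/modify an object after modifying it in parallel”), evidence that direct mode was actually used.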
One of the nifty things in 12c is the ability to pick up DBMS_OUTPUT output from your scheduler jobs. So if you haven’t built an extensive instrumentation or logging facility, you’ll still have some details you can pick up from the scheduler dictionary views.
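For example, in 12c the run details views expose an OUTPUT column; a sketch, with a hypothetical job name:

```sql
SQL> select output
  2  from   dba_scheduler_job_run_details
  3  where  job_name = 'MY_JOB'
  4  order  by log_date;
```

Whatever the job wrote via DBMS_OUTPUT during its run turns up there, without you having to build any logging of your own.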