
distributed

Working Remotely Isn’t Just About the Work

With COVID-19, the business world has come to a fork in the road: Down one route, shuttered offices. Down the other, companies embracing remote work, showing us how businesses can survive — and thrive! — with a fully remote workforce. And that includes fostering strong team bonds and employee relationships; just because there’s no physical break room or water cooler doesn’t mean companies can’t create opportunities for colleagues to connect.

On Working Remotely: An Automattic Reader

How does a distributed company — a group of people with shared business goals but spread out around the world, representing different cultures, family settings, and local health considerations — stick together during a major health crisis like the COVID-19 pandemic?

We don’t intend to make it sound easy. And we are aware — from our families, our communities, the businesses we support, and our customers — that many, if not most companies cannot actually work 100 percent remotely because of the nature of their business.

For those who can transition to distributed work in the wake of this evolving crisis, we wanted to suggest ideas that might help colleagues work well together even when they're no longer all sharing the same physical space.

Developing Applications with CockroachDB… end-to-end example with SpringBoot, Java, Hibernate, Gradle

Overview

Application development frameworks like SpringBoot have done a good job of giving developers a quick start to developing applications with Java. Layering object-relational mapping on top of SQL to hide proprietary syntax has further simplified the development of database applications.

CockroachDB lets developers take advantage of the evolution toward globally distributed data through a simple SQL interface. But each database brings its own specific syntax, which can cause developers to hesitate when getting started with a new database.

How do you get started?

Demystifying JSON with CockroachDB… Import, Index, and Computed Columns

Overview

Recently, I created and delivered an "Advanced Developer Workshop" for Cockroach Labs. One of the topics dove into how to ingest and use JSON data.

Like many databases, CockroachDB has the ability to use a JSON data type for columns within a table. Basically, you insert a JSONB object into a row, and can then filter and extract the desired data with SQL. The following simple example shows how this is done:
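(The post's original listing doesn't survive in this excerpt; below is a minimal sketch of the pattern in CockroachDB SQL, with illustrative table and field names.)

    -- A table with a JSONB column (illustrative schema)
    CREATE TABLE events (
        id      UUID PRIMARY KEY DEFAULT gen_random_uuid(),
        payload JSONB
    );

    -- Insert a JSONB object into a row
    INSERT INTO events (payload)
    VALUES ('{"user": "maria", "action": "login", "device": "mobile"}');

    -- Filter on a field and extract values with the ->> operator
    SELECT payload->>'user'   AS user_name,
           payload->>'device' AS device
    FROM   events
    WHERE  payload->>'action' = 'login';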

So it is pretty straightforward to use JSONB objects within tables, but how do you load those HUGE JSON files into CockroachDB?

Importing geo-partitioned data… the easy way


Setting the stage

I started at Cockroach Labs back in June 2019 to help others learn how to architect and develop applications using a geo-distributed database. There has been a resurgence in distributed database technology, but the focus on geo-distribution is quite unique to CockroachDB. While the underlying technology is unique, developers and DBAs who come with a wealth of experience need to know how best to use this innovative technology. Given this situation, I thought it would be good to start a blog series exploring various topics facing anyone beginning to architect database solutions with CockroachDB.

To start using a database, the first step is to IMPORT table data so you can begin to see how the database performs and responds.  And thus the IMPORT series has started!
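As a small taste of where the series is headed, a bulk load in CockroachDB looks something like this (the schema and CSV location here are illustrative):

    -- Create the target table, then bulk-load it with IMPORT INTO
    CREATE TABLE users (
        id   UUID PRIMARY KEY,
        name STRING,
        city STRING
    );

    -- The CSV file must live somewhere the cluster nodes can reach
    IMPORT INTO users (id, name, city)
        CSV DATA ('https://example.com/users.csv');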

DB links

A recent thread on the ODC SQL & PL/SQL forum raised the question of how data moves across a database link – is it row by row or array based (or other)? It’s a good question as it’s fairly common knowledge that distributed queries can be very much slower than the same query when executed on an equivalent set of local tables, so it’s worth having a little poke around to see if there’s anything in the way the data moves across the link that could be tweaked to improve performance.
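For context, the kind of distributed query under discussion looks something like this (the link and table names are illustrative, not from the thread):

    rem  A database link and a simple distributed query
    create database link d_link
        connect to test_user identified by test
        using 'orcl';

    rem  Every row this returns has to travel across the link,
    rem  so the fetch mechanism matters for performance
    select  t.id, t.padding
    from    t1@d_link t
    where   t.id between 1 and 1000;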

20 Indexes

If your system has to do a lot of distributed queries there's a limit on indexes that might affect performance: when deriving an execution plan for a distributed query the optimizer will consider a maximum of twenty indexes on each remote table. If you have any tables with a ridiculous number of indexes (various 3rd party accounting and CRM systems spring to mind) and if you drop and recreate indexes on those tables in the wrong order then execution plans may change for the simple reason that the optimizer is considering a different subset of the available indexes.

FBI Limitation

A recent question on the ODC (OTN) database forum prompted me to point out that the optimizer doesn’t consider function-based indexes on remote tables in distributed joins. I then spent 20 minutes trying to find the blog note where I had demonstrated this effect, or an entry in the manuals reporting the limitation – but I couldn’t find anything, so I’ve written a quick demo which I’ve run on 12.2.0.1 to show the effect. First, the SQL to create a couple of tables and a couple of indexes:
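(The listing itself doesn't survive in this excerpt; the following is an illustrative reconstruction of the kind of setup described, not the original code.)

    rem  Two tables and two indexes - t2's is function-based
    create table t1
    as
    select rownum id, rpad('x',100) v1
    from   dual
    connect by level <= 10000;

    create table t2
    as
    select rownum id, rpad('x',100) v1
    from   dual
    connect by level <= 10000;

    create index t1_i1 on t1(id);
    create index t2_f1 on t2(upper(v1));

    rem  Queried over a database link, t2 behaves as a remote table,
    rem  and the optimizer won't consider t2_f1 in the distributed join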

Distributed Trap

Here’s an interesting (and potentially very useful) observation from an OTN database forum thread that appeared at the end of last week. It concerns the problems of pulling data from remote systems, and I’ll start by building some data:
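(The original script isn't reproduced in this excerpt; here is an illustrative sketch of that sort of data build.)

    rem  Illustrative data build - a table to be treated as remote
    create table t1
    as
    select
            rownum                          id,
            trunc(sysdate) - mod(rownum,31) d1,
            rpad('x',100)                   padding
    from    dual
    connect by level <= 1e4;

    begin
            dbms_stats.gather_table_stats(
                    ownname => user,
                    tabname => 'T1'
            );
    end;
    /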

Uniquely parallel

Here’s a surprising (to me) execution plan from 12.1.0.2 – parallel execution to find one row in a table using a unique scan of a unique index – produced by running the following script (data creation SQL to follow):
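(The script itself doesn't appear in this excerpt; the following is an illustrative reconstruction of the kind of setup and query described.)

    rem  A unique index and a table-level parallel setting
    create table t1
    as
    select rownum id, rpad('x',100) padding
    from   dual
    connect by level <= 1e5;

    create unique index t1_pk on t1(id);
    alter table t1 parallel 8;

    rem  A single-row lookup through the unique index - the surprise
    rem  is that the plan can still show parallel execution
    select * from t1 where id = 1000;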