Top 60 Oracle Blogs

Database space usage in layman’s terms

It seems easy enough. An email pops through from your manager and he’s concerned about the space utilization in the database. Then comes the question:

“So what exactly is using up most of the space?”

For the sake of simplicity, let’s assume that you already know that most of the space comes from a single database schema, so you connect to that schema and throw a query against USER_SEGMENTS. And that is when the frustration starts…
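A first attempt typically looks something like this (a sketch against the standard USER_SEGMENTS dictionary view; the output of course depends on your schema):

```sql
-- Space consumed per segment type in the current schema,
-- largest first, converted from bytes to megabytes
select segment_type,
       round(sum(bytes)/1024/1024) as size_mb
from   user_segments
group  by segment_type
order  by size_mb desc;
```

Part of the frustration alluded to is likely that LOB and index segments show up under system-generated names, so mapping the space back to the tables that own it takes extra joins against views such as USER_LOBS and USER_INDEXES.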

MAX_STRING_SIZE – stretching the dictionary

This post cycles back to some other historical posts of mine related to max_string_size, and the fact that the data dictionary uses LONG data type columns for historical reasons related to backward compatibility. Most of us probably have existing databases that have gone through the standard upgrade across various versions of the Oracle Database, and as such, all of our existing databases have a max_string_size of STANDARD. To convert to the larger string size, we set max_string_size to EXTENDED in our SPFILE, shut the database down, re-open it in upgrade mode, and run the appropriate scripts. But what if you are creating a new database?
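For a simple non-CDB, single-instance database, the conversion sequence described above can be sketched as follows (check the Oracle documentation for your exact configuration, especially for CDBs and RAC, before running anything like this):

```sql
-- Run from SQL*Plus as SYSDBA; utl32k.sql ships in
-- $ORACLE_HOME/rdbms/admin
shutdown immediate;
startup upgrade;
alter system set max_string_size = EXTENDED;
@?/rdbms/admin/utl32k.sql
shutdown immediate;
startup;
```

Note that the change is one-way: once a database has been converted to EXTENDED, it cannot be set back to STANDARD.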

DataPump migration to SECUREFILE

These tweets somewhat befuddle me.


Because to be honest, no matter if I say “Yes” or “No”, for someone to then take that response at face value… well… whilst I’m flattered by the level of trust, I’m also a little saddened, because I think our community is best served by everyone always insisting on evidence for claims.

Covering Indexes in Postgres

Covering indexes are where all the columns needed by a query are in the index and, in theory, only the index needs to be accessed. This seemingly straightforward strategy is much trickier on Postgres.

Let’s take TPC-H query 6:

        select
            sum(l_extendedprice * l_discount) as revenue
        from
            lineitem
        where
            l_shipdate >= date '1996-01-01'
        and l_shipdate < cast(date '1996-01-01' + interval '1 year' as date)
        and l_discount between 0.06 - 0.01 and 0.06 + 0.01
        and l_quantity < 24;

We’ve got three predicate filter columns:

  • l_shipdate
  • l_discount
  • l_quantity

None of them is all that selective on its own, but all three are pretty good.

        select count(*)
        from
            lineitem
        where
            l_shipdate >= date '1996-01-01'
        and l_shipdate < cast(date '1996-01-01' + interval '1 year' as date);

→ 1,823,373
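On Postgres 11 and later, a covering index for this query can be declared with the INCLUDE clause, which carries non-key columns in the index entries (the index name is illustrative):

```sql
-- l_shipdate is the only key column (it drives the range scan);
-- the INCLUDE columns are payload, so an index-only scan can
-- evaluate the remaining filters and the aggregate
create index lineitem_shipdate_cov
    on lineitem (l_shipdate)
    include (l_discount, l_quantity, l_extendedprice);
```

Whether Postgres actually performs an index-only scan still depends on the visibility map being reasonably up to date (i.e. the table has been vacuumed recently), which is one reason the strategy is trickier here than it first appears.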

Read only partitions

The ability for part of a table to be read-only while other parts of the same table allow full DML is a cool feature of the Oracle Partitioning option. Perhaps the most common example you will typically see for this is range-based partitioning on a date/timestamp column. As data “ages”, setting older partitions to read-only can yield benefits such as:

  • moving the older partitions to cheaper, or write-once storage
  • guaranteeing that older data cannot be tampered with
  • shrinking backup times because read-only data only needs to be backed up once (or twice to be sure)
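The syntax itself (partition-level read-only arrived in Oracle 12.2) can be sketched like this, using a hypothetical sales table:

```sql
-- The older partition is created read only; the newer one accepts DML
create table sales
( sale_id   number,
  sale_date date )
partition by range ( sale_date )
( partition p2019 values less than ( date '2020-01-01' ) read only,
  partition p2020 values less than ( date '2021-01-01' )
);

-- Existing partitions can also be switched after the fact
alter table sales modify partition p2019 read only;
```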

But if you try this in 18c, you might get a surprise:

From 19.6 to 19.7 on Windows

I must say this Release Update (RU) was probably the smoothest I’ve ever done. Obviously you should always read the patch notes carefully before proceeding on your own systems, but for me, it was a simple exercise. I’m posting this just to cover a couple of things that the patch notes “assume” and don’t explicitly state.

  • Shut down everything Oracle related. I just go to “Services” and look for anything with Oracle in it. Also shut down the “Distributed Transaction Coordinator” service.

This next one is key … I’ve made this mistake so many times. Open a command prompt window as administrator. If you don’t, things will progress OK for a tiny bit and then OPatch is going to throw a wobbly.

I did both the 19.7 RU and the 19.7 OJVM with OPatch, and both went through without incident.

Struggling with productivity?

Today’s blog post is somewhat of a different one. You could perhaps argue that it is even non-technical, but bear with me; there is a strong relationship to technical work on the topic of productivity.

One of the things we often read about in blog posts, see in YouTube videos, or encounter in any other form of social media when it comes to being a technology professional is the importance of taking regular breaks. Even outside the realm of technology, ergonomists constantly tell us about the mental and physical health benefits of taking regular breaks.

The lunchtime nuisance…

We’ve all been there … You’re trying to save some changes to the database, and no matter what you try, your application just hangs. After some to-and-fro with the DBA, or perhaps you are the DBA, you stumble across that uncommitted transaction that “Joe from marketing” commenced at 11:55am.  You pop around to Joe’s desk, and there it is on screen….and Joe has popped out to lunch for an hour ….. grrrrr!
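When you are hunting for such a blocker, a query along these lines against V$SESSION usually finds it (column list trimmed for readability):

```sql
-- Sessions that are currently blocked, and who is blocking them
select sid,
       serial#,
       username,
       blocking_session
from   v$session
where  blocking_session is not null;
```

The BLOCKING_SESSION value is the SID of the session holding the lock, i.e. our friend Joe.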


Code re-use … Not a substitute for brain use

I was doing some code review today of a PL/SQL package, and came across a routine to generate a list of employees for a department. Truth be told, it wasn’t employees and departments, but some anonymity is called for here.

Granularity impact on visualization

The longer the timeline we show in a graph, the more points we have to show or the coarser the granularity we have to use.

For example, if I show 1 hour of data, then I can show 60 points at 1-minute granularity.

If I showed a day at 1-minute granularity, that would be 1,440 points.

Say we wanted to limit the graph to fewer than 300 points so our UI stays responsive no matter the time scale.

Then for 24 hours I can show 5-minute granularity, which is 288 points.

If I show 25 hours, that’s 300 points at 5-minute granularity, so we are no longer under the limit.

At that point I have to jump to the next level of granularity. In my case the next step up is 1 hour.

Using 1-hour granularity will look quite different from 5-minute granularity.
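In SQL terms, changing granularity is just changing the bucket you group by. A sketch in Postgres syntax, assuming a hypothetical metrics table with a ts timestamp column and a value column:

```sql
-- 5-minute buckets over the last 24 hours: at most 288 rows
select to_timestamp(floor(extract(epoch from ts) / 300) * 300) as bucket,
       avg(value) as avg_value
from   metrics
where  ts >= now() - interval '24 hours'
group  by bucket
order  by bucket;

-- For 1-hour buckets, the built-in date_trunc does the flooring
select date_trunc('hour', ts) as bucket,
       avg(value) as avg_value
from   metrics
where  ts >= now() - interval '25 hours'
group  by bucket
order  by bucket;
```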