
Oracle and postgres disk IO performance

This post is about one of the fundamentally important properties of a database: how IO is done. The test case I studied is a simple full table scan of a single large table. In both Oracle and postgres the table has no indexes or constraints, which is not a realistic example, but this doesn't change the principal topic of the study: doing a table scan.

I used a publicly available dataset from the US Bureau of Transportation Statistics called FAF4.5.1_database.zip.
The zipped file is 347MB; unzipped, it is 1.7GB.
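A minimal sketch of such a test could look like this (the staging table, its columns and the CSV file name are assumptions, not the real FAF4.5.1 layout):

-- assumed staging table; the real FAF4.5.1 CSV has many more columns
CREATE TABLE faf451 (dms_orig text, dms_dest text, tons numeric, value numeric);
-- client-side load of the unzipped CSV (file name assumed)
\copy faf451 FROM 'FAF4.5.1.csv' WITH (FORMAT csv, HEADER true)
-- full table scan with no indexes: a sequential scan; BUFFERS shows the IO done
EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM faf451;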

Postgres@CERN

For once, I was at a conference not as a speaker but as a co-organizer. The idea came up only 3 months ago at the Swiss PostgreSQL Users Group dinner organized by dbi services in Milan before the PostgreSQL Conference, when Laetitia Avrot asked me if it would be possible to organize a meetup at CERN.

SwissPUGOrg dinner in Milan, organized by dbi services

My answer was “yes” of course, but I had just resigned from CERN, so it had to happen before February, when I return to consulting life at dbi services. Laetitia organized everything in a short time: finding the sponsors and the speakers, organizing the drinks and food, advertising it,…

Retrieve PostgreSQL variable-length storage information thanks to pageinspect

Introduction

In PostgreSQL, a variable-length datatype value can be stored in-line or out-of-line (in the TOAST table). It can also be stored compressed or uncompressed (see the documentation for more details).

Let’s make use of the pageinspect extension and the information about variable-length datatypes found in postgres.h to build a query that retrieves the tuples’ variable-length storage information.

The query

The query is the following:
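As a rough sketch of the idea (the table name demo is an assumption, and the header-byte decoding follows the little-endian interpretation of the VARATT_IS_* macros in postgres.h; it is only meaningful for variable-length attributes):

SELECT t_ctid,
       CASE
         WHEN get_byte(attr, 0) = 1       THEN 'TOAST pointer (stored out-of-line)'
         WHEN (get_byte(attr, 0) & 1) = 1 THEN '1-byte header (short value, in-line)'
         WHEN (get_byte(attr, 0) & 3) = 2 THEN '4-byte header, compressed in-line'
         ELSE '4-byte header, uncompressed in-line'
       END AS varlena_storage
FROM (
  SELECT t_ctid,
         unnest(tuple_data_split('demo'::regclass, t_data, t_infomask, t_infomask2, t_bits)) AS attr
  FROM heap_page_items(get_raw_page('demo', 0))
) AS raw
WHERE attr IS NOT NULL;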

Get toast chunk_id from the user table tuples or from the toast index thanks to pageinspect

Introduction

TOAST stands for “The Oversized-Attribute Storage Technique” and allows large field values to be broken up into multiple physical rows (see the PostgreSQL documentation for more details).

The goal of this post is to provide a way to retrieve TOAST information from the user table tuples or from the TOAST index, without querying the TOAST table directly.

We will be able to link the user table tuples to the TOAST pages by using user table tuples and TOAST index data (without querying the TOAST table at all).

Build the playground

Create a table with a TOAST-able field:
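A minimal sketch of such a playground (table and column names are assumptions):

-- a table with a TOAST-able (variable-length) field
CREATE TABLE demo_toast (id int, big_field text);
-- force out-of-line, uncompressed storage so the value really goes to the TOAST table
ALTER TABLE demo_toast ALTER COLUMN big_field SET STORAGE EXTERNAL;
-- insert a value well above the ~2kB TOAST threshold
INSERT INTO demo_toast SELECT 1, repeat('x', 10000);
-- the associated TOAST relation (and from there its index) can be found in pg_class
SELECT reltoastrelid::regclass FROM pg_class WHERE relname = 'demo_toast';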

PostgreSQL Invalid Page and Checksum Verification Failed

At the Seattle PostgreSQL User Group meetup this past Tuesday, we got onto the topic of invalid pages in PostgreSQL. It was a fun discussion and it made me realize that it’d be worth writing down a bunch of the stuff we talked about – it might be interesting to a few more people too!

Invalid Page In Block

You see an error message that looks like this:
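For example (the block and relation numbers here are only illustrative):

ERROR:  invalid page in block 1226710 of relation base/16390/16584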

improving performance with stored procedures — a pgbench example.

In a previous post I mentioned that I do not use pgbench to benchmark the platform. But when it comes to measuring a client/server application, pgbench makes complete sense.

I initialize the pgbench schema with a small data set:

pgbench --initialize --init-steps=dtgvpf -h localhost -p 5432 -U postgres franck

And I run the pgbench builtin workload, which does something like TPC-B:

tpcb-like builtin

pgbench --builtin tpcb-like --transactions 30000 --protocol=prepared --jobs=10 --client=10 -h localhost -p 5432 -U postgres franck

I run 30000 transactions per client there, from 10 clients over 10 threads. It runs for more than 4 minutes:
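The improvement in the title is to move the whole tpcb-like transaction into a stored procedure, so the client makes a single call per transaction instead of one round-trip per statement. A minimal sketch of that idea (not necessarily the author's exact code; the tables are the standard pgbench schema):

-- all the tpcb-like statements in one server-side call (PostgreSQL 11+ procedures)
CREATE PROCEDURE tpcb(p_aid int, p_bid int, p_tid int, p_delta int) AS $$
BEGIN
  UPDATE pgbench_accounts SET abalance = abalance + p_delta WHERE aid = p_aid;
  PERFORM abalance FROM pgbench_accounts WHERE aid = p_aid;
  UPDATE pgbench_tellers  SET tbalance = tbalance + p_delta WHERE tid = p_tid;
  UPDATE pgbench_branches SET bbalance = bbalance + p_delta WHERE bid = p_bid;
  INSERT INTO pgbench_history (tid, bid, aid, delta, mtime)
    VALUES (p_tid, p_bid, p_aid, p_delta, CURRENT_TIMESTAMP);
END;
$$ LANGUAGE plpgsql;
-- a custom pgbench script would then simply do: CALL tpcb(:aid, :bid, :tid, :delta);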

PostgreSQL subtransactions, savepoints, and exception blocks

TL;DR: similar syntax but very different transaction semantics between Oracle and PostgreSQL procedural blocks

I posted a tricky quiz on Twitter (unfortunately, I forgot to mention explicitly that I have a unique constraint on DEMO1.N):
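The key point is that in PostgreSQL a PL/pgSQL EXCEPTION clause runs the block in a subtransaction (an implicit savepoint), so only that block's work is rolled back. A minimal sketch (the demo1 table and the inserted values are assumptions based on the hint above):

CREATE TABLE demo1 (n int UNIQUE);
INSERT INTO demo1 VALUES (1);

DO $$
BEGIN
  INSERT INTO demo1 VALUES (1);   -- violates the unique constraint on n
EXCEPTION WHEN unique_violation THEN
  -- the block ran inside an implicit savepoint: only the failed insert
  -- is rolled back and the surrounding transaction carries on
  RAISE NOTICE 'duplicate ignored';
END $$;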

pgbench retry for repeatable read transactions — first (re)tries

Trying a no-patch solution for pgbench running on repeatable read transactions, using a custom script with PL/pgSQL

In a previous post I was running pgbench on YugaByteDB at the serializable isolation level. But Serializable is optimistic and requires that transactions be retried when they fail. pgbench, however, has no retry mode. A patch was proposed in several commit fests for that, but patch acceptance is a long journey in PostgreSQL:

WIP: Pgbench Errors and serialization/deadlock retries