Running pgBench on YugaByteDB 1.3

My first test on this Open Source SQL distributed database.

Did you hear about YugaByteDB, a distributed database with an architecture similar to Google Spanner, using PostgreSQL as the query layer?

I started to follow the project when I heard that Bryn Llewellyn, the well-known PL/SQL and EBR product manager, had left Oracle to become their developer advocate. And YugaByteDB got more attention recently when they announced that their product license is now 100% Open Source.

I like to learn new things by trying and troubleshooting rather than by reading the documentation. Probably because there’s more to learn off the documentation path. And also because troubleshooting is fun. One of the great features here is that the query layer is compatible with PostgreSQL, so I’ll try to run pgBench on YugaByteDB.

It is important to mention here that I’m running all nodes in a single lab VM, so performance is not representative. And I’m testing the YSQL query layer which is still in beta. The goal is to discover and learn about the distributed database challenges, rather than evaluating the product.

Install YugaByteDB

Nothing is easier than the installation of YugaByteDB; it is all documented:

Install YugaByte DB | YugaByte DB Docs

I install it on RHEL 7.6 (actually OEL 7.6, as I’m running my lab in an Oracle Cloud VM). The install is just an un-tar followed by the post-install script, which patches the binaries so that we don’t have to set LD_LIBRARY_PATH. I set PATH to the YugaByte bin directory:

wget -O- https://downloads.yugabyte.com/yugabyte-1.3.0.0-linux.tar.gz | tar -zxvf - 
export PATH=$PATH:$PWD/yugabyte-1.3.0.0/bin
post_install.sh

I create a 3-node cluster on this host:

yb-ctl --rf 3 create
yb-ctl status

Those nodes are created on 127.0.0.1 to 127.0.0.3 here. I access the VM remotely and tunnel the interesting ports with my ssh_config file:

Host yb
HostName 130.61.59.66
User opc
ForwardX11 yes
DynamicForward 8080
#YSQL JDBC
LocalForward 15433 127.0.0.1:5433
LocalForward 25433 127.0.0.2:5433
LocalForward 35433 127.0.0.3:5433
#YCQL API
LocalForward 19042 127.0.0.1:9042
LocalForward 29042 127.0.0.2:9042
LocalForward 39042 127.0.0.3:9042
# Web UI
LocalForward 17000 127.0.0.1:7000
LocalForward 27000 127.0.0.2:7000
LocalForward 37000 127.0.0.3:7000
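The local port numbers above follow a simple convention: the first digit is the node number and the remaining digits are the remote port. As an illustration only (this helper is not part of the setup, just a sketch of the convention), the LocalForward lines can be generated like this:

```python
# Generate the ssh LocalForward lines for a 3-node cluster.
# Convention used above: local port = node number * 10000 + remote port.
PORTS = {"YSQL": 5433, "YCQL": 9042, "Web UI": 7000}

def forward_lines(nodes=3):
    lines = []
    for name, port in PORTS.items():
        lines.append(f"# {name}")
        for n in range(1, nodes + 1):
            lines.append(f"LocalForward {n * 10000 + port} 127.0.0.{n}:{port}")
    return lines

print("\n".join(forward_lines()))
```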

I create a database. The “psql” equivalent here is “ysqlsh”:

ysqlsh
\timing on
drop database if exists franck;
create database franck;
\q

ysqlsh (11.2)

Install PgBench

As I tunneled the 5433 port I can use pgBench from my laptop:

pgbench --host localhost --port 15433 --username postgres franck

But the full PostgreSQL installation is also shipped in ./yugabyte-1.3.0.0/postgres, so I add its bin directory to my PATH:

export PATH=$PATH:$PWD/yugabyte-1.3.0.0/bin:$PWD/yugabyte-1.3.0.0/postgres/bin

Initialize pgBench

I run the initialization:

pgbench --initialize --host localhost -p 5433 -U postgres franck

ERROR: DROP multiple objects not supported yet
HINT: See https://github.com/YugaByte/yugabyte-db/issues/880

Ok, I get an error because pgBench uses the multi-table DROP statement, which is not supported yet. But what’s really nice is that the error message contains a link to the GitHub issue about it.

No problem: I don’t need to drop the tables, and pgBench has an --init-steps option to choose the steps: drop tables, table creation, data generation, vacuum, primary key creation, foreign key creation.
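For reference, the step letters accepted by --init-steps in PostgreSQL 11’s pgBench map to these actions (a small sketch of the mapping; describe() is just an illustrative helper, not a pgBench API):

```python
# pgbench --init-steps letters (PostgreSQL 11).
INIT_STEPS = {
    "d": "drop tables",
    "t": "create tables",
    "g": "generate data",
    "v": "vacuum",
    "p": "create primary keys",
    "f": "create foreign keys",
}

def describe(steps: str):
    """Expand an --init-steps string; unknown letters raise a KeyError."""
    return [INIT_STEPS[s] for s in steps]

# For example, skipping the unsupported multi-table drop:
print(describe("tgvpf"))
```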

pgbench --initialize --init-steps=tgvpf -h localhost -p 5433 -U postgres franck

ERROR: VACUUM not supported yet
HINT: Please report the issue on https://github.com/YugaByte/yugabyte-db/issues

There’s no FILLFACTOR and no VACUUM per se (the storage engine has a transparent garbage collector). YugaByteDB uses the PostgreSQL query layer, but not the same storage layer. My tables are created and the data is generated:

ysqlsh franck
\dt
select count(*) from pgbench_branches;
select count(*) from pgbench_accounts;
select count(*) from pgbench_tellers;
select count(*) from pgbench_history;

Let’s continue without vacuum, only the primary and foreign key definition:

pgbench --initialize --init-steps=pf -h localhost -p 5433 -U postgres franck

ERROR: This ALTER TABLE command is not yet supported.

pgBench adds the constraints with ALTER TABLE, but YugaByteDB supports only inline declarations in CREATE TABLE. You can check the DDL from a plain PostgreSQL database (initialized with the ‘foreign key’ step, which is not in the defaults):

pg_dump --schema-only -h localhost -p 5433 -U postgres franck

Basically, here is what is missing in my YugaByteDB database:

alter table pgbench_branches add primary key (bid);
alter table pgbench_tellers add primary key (tid);
alter table pgbench_accounts add primary key (aid);
alter table pgbench_tellers add constraint pgbench_tellers_bid_fkey foreign key (bid) references pgbench_branches;
alter table pgbench_accounts add constraint pgbench_accounts_bid_fkey foreign key (bid) references pgbench_branches;
alter table pgbench_history add constraint pgbench_history_bid_fkey foreign key (bid) references pgbench_branches;
alter table pgbench_history add constraint pgbench_history_tid_fkey foreign key (tid) references pgbench_tellers;
alter table pgbench_history add constraint pgbench_history_aid_fkey foreign key (aid) references pgbench_accounts;

Re-Create with Primary and Foreign Keys

Finally, here is what I want to run to get everything in a supported way:

ysqlsh franck
drop table if exists pgbench_history;
drop table if exists pgbench_tellers;
drop table if exists pgbench_accounts;
drop table if exists pgbench_branches;
CREATE TABLE pgbench_branches (
bid integer NOT NULL
,bbalance integer
,filler character(88)
,CONSTRAINT pgbench_branches_pkey PRIMARY KEY (bid)
);
CREATE TABLE pgbench_accounts (
aid integer NOT NULL
,bid integer references pgbench_branches
,abalance integer
,filler character(84)
,CONSTRAINT pgbench_accounts_pkey PRIMARY KEY (aid)
);
CREATE TABLE pgbench_tellers (
tid integer NOT NULL
,bid integer references pgbench_branches
,tbalance integer
,filler character(84)
,CONSTRAINT pgbench_tellers_pkey PRIMARY KEY (tid)
);
CREATE TABLE pgbench_history (
tid integer references pgbench_tellers
,bid integer references pgbench_branches
,aid integer references pgbench_accounts
,delta integer
,mtime timestamp without time zone
,filler character(22)
);
\q

This creates the tables without any error. The next step is to generate the data.

Generate data

Now the only “--initialize” step left to do is the generation of data.

Note that this step does the truncate with the multi-table syntax, and that one is already implemented.

So here is the “generate data” step:

pgbench --initialize --init-steps=g -h localhost -p 5433 -U postgres franck

ERROR: Operation only supported in SERIALIZABLE isolation level
HINT: See https://github.com/YugaByte/yugabyte-db/issues/1199

PostgreSQL runs by default in the “read committed” isolation level. As the GitHub issue mentions, YugaByteDB’s support for foreign keys requires “Serializable”. The reason is that the storage engine (DocDB) has no explicit row locking yet to lock the referenced row. With referential integrity, inserting into a child table must lock the parent row in share mode (like a SELECT FOR KEY SHARE) to ensure that it is not concurrently deleted (or its referenced columns updated). So the no-lock solution is to run in a true Serializable isolation level.
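To illustrate the difference, here is a minimal in-memory sketch of optimistic concurrency control (no real database involved; all names are made up for illustration): each transaction remembers the row version it read, and the commit fails if another transaction committed in between. This is exactly the kind of conflict we hit below with the TPC-B workload.

```python
# Minimal optimistic concurrency control sketch: a commit succeeds only
# if the row version is unchanged since the transaction first read it.
class ConflictError(Exception):
    pass

class Row:
    def __init__(self, value):
        self.value, self.version = value, 0

def commit(row, read_version, new_value):
    if row.version != read_version:      # someone committed in between
        raise ConflictError("could not serialize access due to concurrent update")
    row.value, row.version = new_value, row.version + 1

row = Row(100)
v1 = row.version          # transaction T1 reads the row
v2 = row.version          # transaction T2 reads the same row
commit(row, v1, 150)      # T1 commits first: OK
try:
    commit(row, v2, 200)  # T2 must abort (and would have to retry)
except ConflictError as e:
    print("T2 aborted:", e)
```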

About the support of Foreign Keys, read Bryn Llewellyn blog post:

Relational Data Modeling with Foreign Keys in a Distributed SQL Database - The Distributed SQL Blog

Note that I said “true” Serializable isolation level because I’m used to Oracle Database, where this term is used for “Snapshot Isolation” - more about this:

Oracle serializable is not serializable - Blog dbi services

Serializable transaction isolation

So, the current transaction isolation level is “Read Committed”, where the repeatable read of the parent row, required by foreign keys, would need SELECT FOR KEY SHARE:

ysqlsh franck
select current_setting('transaction_isolation');
\q

Then, in order to be able to insert into a table that has foreign keys, I set the default isolation level to Serializable for my database, and re-connect to check it:

ysqlsh franck
alter database franck set default_transaction_isolation=serializable;
\c franck postgres localhost 5433
select current_setting('transaction_isolation');
\q


Update 27-JUL-2019

If you don’t want to change the database default, you can also set the isolation level per client session through libpq:

PGOPTIONS='-c default_transaction_isolation=serializable' pgbench --initialize --init-steps=g -h localhost -p 5433 -U postgres franck

Generate data with serializable transactions

Ok, let’s try to generate data now:

pgbench --initialize --init-steps=g -h localhost -p 5433 -U postgres franck

That is taking a long time, so I attach the debugger to it:

gdb -p $(pgrep pgbench)

https://github.com/YugaByte/yugabyte-db/blob/master/src/postgres/src/bin/pgbench/pgbench.c#L3704

pgbench.c:3704 is the call to PQendcopy(), which waits for the asynchronous COPY completion; pgBench uses COPY to load the large “pgbench_accounts” table. On the YugaByteDB side, it seems that only one Tablet Server is doing the work, 100% in CPU:

The Tablet Server is running code from librocksdb.so (the YugaByteDB storage engine, DocDB, is based on RocksDB):

perf top

While I was waiting I generated a Brendan Gregg flamegraph on the busy Tablet Server, but there’s no visible bottleneck.

sudo perf record -e cpu-cycles -o /tmp/perf.out -F 99 -g -a
^C
git clone https://github.com/brendangregg/FlameGraph.git
sudo perf script -i /tmp/perf.out | ./FlameGraph/stackcollapse-perf.pl | ./FlameGraph/flamegraph.pl /dev/stdin > /tmp/perf.folded.svg

This goes beyond the goal of this post and, anyway, looking at performance is probably not relevant in this version. [21-JUL-2019: the support for COPY has just been added since this test - more info here]

Finally, the initialization of 100000 tuples finished after one hour:

time pgbench --initialize --init-steps=g -h localhost -p 5433 -U postgres franck

So, finally, I have my pgBench schema loaded, with all referential integrity and data.

pgBench simple-update in single-session

I’m running the “simple-update” which basically updates the “abalance” for a row in “pgbench_accounts”, and inserts into “pgbench_history”.

pgbench --no-vacuum --builtin=simple-update --protocol=prepared --time 30 -h localhost -p 5433 -U postgres franck

At least I know that this basic OLTP application can run without any change on YugaByteDB, and that’s a very good point for application transparency. I’ll explain later why I started here with the “simple-update” workload rather than the default. The transactions-per-second rate is not amazing (it is 150x higher on a plain PostgreSQL on the same platform), but that is not what I am testing here.

pgBench simple-update in multi-sessions

Obviously, a distributed database should be scalable when multiple users are working in parallel. For this, I run pgBench with 10 clients from 10 threads:

pgbench --no-vacuum --protocol=prepared --builtin=simple-update --time 30 --jobs=10 --client=10 -h localhost -p 5433 -U postgres franck

Good. I said that we should not look at the elapsed time, but the comparison with the previous run shows the scalability: 10 sessions can run about 10x more transactions per second. I run the 3 YugaByteDB nodes on a 24-core virtual machine here. The servers are multi-threaded:

nTH: number of threads (LWP), P: last used CPU (SMP)

pgBench TPC-B-like in multi-sessions

Demystifying Benchmarks: How to Use Them To Better Evaluate Databases

Actually, the first test I did was the default pgBench workload, which is “TPC-B (sort of)”. In addition to the simple-update statements on “pgbench_accounts” and “pgbench_history”, each transaction also updates the balance in “pgbench_tellers” and “pgbench_branches”.

This is more tricky because multiple clients will concurrently update the same rows. And there’s a high probability of contention given the cardinalities (1 branch and 10 tellers at scale factor 1). There, pessimistic locking would be better than optimistic. I’m still with my foreign keys here, and the Serializable isolation level (which means optimistic locking).

pgbench --no-vacuum --protocol=prepared --builtin=tpcb-like --time 30 --jobs=10 --client=10 -h localhost -p 5433 -U postgres franck

ERROR: could not serialize access due to concurrent update

Quickly, I can see that 9 out of the 10 clients failed with “Conflicts with higher priority transaction”. This is the equivalent of the PostgreSQL “ERROR: could not serialize access due to concurrent update”. With optimistic locking, the application must be ready to retry a failed transaction, but pgBench has no option for that. This is not specific to YugaByteDB: you get the same in PostgreSQL when running the pgBench default workload in the Serializable transaction isolation level.
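An application designed for optimistic locking typically wraps its transactions in a retry loop with backoff. Here is a minimal sketch of the idea (the message matching in is_retryable is an assumption for illustration, not an official API, and flaky_txn is a made-up stand-in for a real database transaction):

```python
import random
import time

def is_retryable(error: Exception) -> bool:
    # Heuristic: match the retryable serialization errors seen in this post.
    msg = str(error)
    return ("could not serialize access" in msg
            or "Conflicts with higher priority transaction" in msg
            or "Try again" in msg)

def with_retry(txn, max_attempts=5):
    """Run txn(); on a retryable error, back off with jitter and try again."""
    for attempt in range(max_attempts):
        try:
            return txn()
        except Exception as e:
            if not is_retryable(e) or attempt == max_attempts - 1:
                raise
            time.sleep(random.uniform(0, 0.01 * 2 ** attempt))

# Demo: a fake transaction that conflicts twice before succeeding.
attempts = {"n": 0}
def flaky_txn():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("could not serialize access due to concurrent update")
    return "committed"

print(with_retry(flaky_txn))  # succeeds on the third attempt
```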

Anyway, given the high probability of collision here, the solution is pessimistic locking.

Read Committed transaction isolation

If I set back the isolation level to “Read Committed” the UPDATE will use pessimistic locking, and then I expect the transactions to be serialized, waiting to see committed changes rather than failing when encountering a concurrent change.

ysqlsh franck
alter database franck set default_transaction_isolation='read committed';
\q

But then I cannot declare the foreign keys, or I’ll get “ERROR: Operation only supported in SERIALIZABLE isolation level” until issue #1199 is fixed.

Without referential integrity constraints

I re-create the tables with the REFERENCES clause commented out:

ysqlsh franck
drop table if exists pgbench_history;
drop table if exists pgbench_tellers;
drop table if exists pgbench_accounts;
drop table if exists pgbench_branches;
CREATE TABLE pgbench_branches (
bid integer NOT NULL
,bbalance integer
,filler character(88)
,CONSTRAINT pgbench_branches_pkey PRIMARY KEY (bid)
);
CREATE TABLE pgbench_accounts (
aid integer NOT NULL
,bid integer --references pgbench_branches
,abalance integer
,filler character(84)
,CONSTRAINT pgbench_accounts_pkey PRIMARY KEY (aid)
);
CREATE TABLE pgbench_tellers (
tid integer NOT NULL
,bid integer --references pgbench_branches
,tbalance integer
,filler character(84)
,CONSTRAINT pgbench_tellers_pkey PRIMARY KEY (tid)
);
CREATE TABLE pgbench_history (
tid integer --references pgbench_tellers
,bid integer --references pgbench_branches
,aid integer --references pgbench_accounts
,delta integer
,mtime timestamp without time zone
,filler character(22)
);
\q

And run the initialization again, which is much faster (2 minutes instead of 1 hour):

time pgbench --initialize --init-steps=g -h localhost -p 5433 -U postgres franck

Then, ready to run my 10-client TPC-B workload:

pgbench --no-vacuum --protocol=prepared --builtin=tpcb-like --time 30 --jobs=10 --client=10 -h localhost -p 5433 -U postgres franck

ERROR: Operation failed. Try again.: Conflicts with higher priority transaction
ERROR: Operation failed. Try again.: Conflicts with committed transaction
ERROR: Error during commit: Operation expired: Transaction expired

This is much better. Among the 23574 transactions, I got only 3 errors, and 7 clients were still running concurrently at the end. I also ran it with only one client, where I had TPS=144; here, with 7 clients remaining, we reach TPS=785.
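As a rough check on scalability from these numbers: 144 TPS with one client versus 785 TPS with the 7 surviving clients is about 78% of linear scaling.

```python
# Scalability estimate from the measured numbers above.
tps_single_client = 144
tps_seven_clients = 785
speedup = tps_seven_clients / tps_single_client  # ~5.45x with 7 clients
efficiency = speedup / 7                         # fraction of linear scaling
print(f"speedup: {speedup:.2f}x, efficiency: {efficiency:.0%}")
```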

Of course, I would expect no “Try again” errors in the Read Committed isolation level. Here I got 3 of the 5 transactional errors we can get from libpq calls (according to the YugaByteDB pg_libpq-test.cc):

  • Transaction expired
  • Conflicts with committed transaction
  • Conflicts with higher priority transaction
  • Restart read required
  • Value write after transaction start

But that’s probably for another post. The important outcomes from my very first test of YugaByteDB are:

  • It is very easy to install and test, so… try it (and you will get a link to get a nice T-shirt shipped to your home)
  • I was able to run pgBench, a simple application written for PostgreSQL, without any change, on YugaByteDB
  • Not all DDL is supported yet, but it is easy to work around, and the errors link to the relevant GitHub issues.
  • Foreign keys are supported, which is a challenge for a distributed database.
  • Transaction concurrency is managed, and the issues to be fixed in later releases are clearly documented as GitHub issues.