
What is a serverless database?

By Franck Pachot

After reading the article at https://cloudwars.co/oracle/oracle-deal-8×8-larry-ellison-picks-amazons-pocket-again/, I am writing down some thoughts about how a database can be serverless and elastic. Of course, a database needs a server to process its data: serverless doesn’t mean that there are no servers.

Oracle Standard Edition on AWS ☁ socket arithmetic

By Franck Pachot

I’ve written about Oracle Standard Edition 2 licensing before, but a few rules have changed. This is written in May 2020.
TL;DR: 4 vCPUs count as 1 socket and 2 sockets count as 1 server, whether hyper-threading is enabled or not.

The SE2 rules

I think the Standard Edition rules are quite clear now: maximum server capacity, cluster limit, minimum NUP, and the processor metric. Oracle documents them in the Database Licensing guideline.

2 socket capacity per server

Oracle Database Standard Edition 2 may only be licensed on servers that have a maximum capacity of 2 sockets.
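
To make the arithmetic concrete, here is a minimal sketch (my own illustration, not from the post) that applies the 4-vCPUs-per-socket and 2-sockets-per-server rules to a cloud instance size; the function names and the example vCPU counts are assumptions.

import math

# Sketch of the SE2 socket arithmetic described above (assumption: in an
# authorized cloud, 4 vCPUs count as 1 socket and SE2 is limited to servers
# with a maximum capacity of 2 sockets). Not an official licensing calculator.

def se2_sockets(vcpus: int) -> int:
    """Number of sockets counted for licensing for a given vCPU count."""
    return math.ceil(vcpus / 4)

def se2_allowed(vcpus: int) -> bool:
    """True when the instance stays within the 2-socket SE2 server capacity."""
    return se2_sockets(vcpus) <= 2

for vcpus in (2, 4, 8, 16):
    print(f"{vcpus:>2} vCPUs -> {se2_sockets(vcpus)} socket(s), SE2 allowed: {se2_allowed(vcpus)}")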

The myth of NoSQL (vs. RDBMS) agility: adding attributes

By Franck Pachot

There are good reasons for NoSQL and semi-structured databases. And there are also many mistakes and myths. If people move from RDBMS to NoSQL for the wrong reasons, they will have a bad experience, and that ultimately hurts the reputation of NoSQL. Those myths were spread by database newbies who never learned SQL and relational databases. Rather than learning the basics of data modeling and the capabilities of SQL for processing data sets, they thought they had invented the next generation of persistence… when they actually went back to what existed before the invention of RDBMS: a hierarchical, semi-structured data model. And now they encounter the same problems that relational databases solved 40 years ago. This blog post is about one of those myths.

AWS Aurora vs. RDS PostgreSQL on frequent commits

This post is the second part of https://blog.dbi-services.com/aws-aurora-xactsync-batch-commit/ where I ran row-by-row inserts on AWS Aurora with different intermediate commit sizes. Unsurprisingly, the commit-each-row anti-pattern has a negative effect on performance. And I mentioned that it is even worse in Aurora, where the session process sends the WAL directly to the network storage and waits, at commit, until it is acknowledged by at least 4 out of the 6 replicas. An Aurora-specific wait event is sampled on these waits: XactSync. At the end of the post I added some CloudWatch statistics for the same workload running in RDS, but with the EBS-based PostgreSQL engine rather than Aurora. The storage is then an EBS volume mounted on the EC2 instance.
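
To illustrate what an intermediate commit size means here, below is a minimal sketch (mine, not code from the post), assuming a psycopg2 connection and a hypothetical demo(n int) table; only the commit frequency differs between the two calls.

import psycopg2

# Row-by-row inserts with different intermediate commit sizes (assumption:
# the connection string and the demo(n int) table are placeholders).
conn = psycopg2.connect("host=... dbname=demo user=demo password=...")
conn.autocommit = False
cur = conn.cursor()

def insert_rows(rows: int, commit_every: int) -> None:
    """Insert rows one by one, committing every commit_every rows."""
    for i in range(rows):
        cur.execute("insert into demo(n) values (%s)", (i,))
        if (i + 1) % commit_every == 0:
            conn.commit()  # each commit waits for WAL sync (IO:XactSync in Aurora)
    conn.commit()

insert_rows(10000, commit_every=1)     # commit-each-row anti-pattern
insert_rows(10000, commit_every=1000)  # intermediate commits every 1000 rows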

AWS Aurora IO:XactSync is not a PostgreSQL wait event

By Franck Pachot

In AWS RDS you can run two flavors of the PostgreSQL managed service: the real PostgreSQL engine, compiled from the community sources and running on EBS storage mounted by the database EC2 instance, and Aurora, which is proprietary and available only on the AWS cloud, where the upper layer has been taken from community PostgreSQL. The storage layer in Aurora is completely different.
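
To observe such wait events yourself, one simple approach is to sample pg_stat_activity while a workload runs in another session; here is a minimal sketch (my own, not from the post), with placeholder connection parameters.

import time
import psycopg2

# Sample wait events from pg_stat_activity once per second (assumption:
# connection parameters are placeholders; run the workload in another session).
conn = psycopg2.connect("host=... dbname=demo user=demo password=...")
conn.autocommit = True
cur = conn.cursor()

for _ in range(10):
    cur.execute("""
        select wait_event_type, wait_event, count(*)
        from pg_stat_activity
        where state = 'active' and wait_event is not null
        group by wait_event_type, wait_event
    """)
    for wait_event_type, wait_event, sessions in cur.fetchall():
        print(wait_event_type, wait_event, sessions)  # e.g. IO / XactSync on Aurora
    time.sleep(1)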

AWS Certified Database Specialty (DBS-C01)

Here is my feedback after preparing for and passing the AWS Database Specialty certification. There are tips about the exam, but also some thoughts that came to mind during the preparation, when I had to shift my mindset from a multi-purpose database system to purpose-built database services.

DynamoDB: adding a Local covering index to reduce the cost

By Franck Pachot

This is a continuation of the previous post, DynamoDB: adding a Global covering index to reduce the cost. I have a DynamoDB table partitioned on “MyKeyPart”,“MyKeySort” and many queries that retrieve a small “MyIndo001” attribute, and less frequent ones that need the large “MyData001” attribute. I have created a Global Secondary Index (GSI) that covers the same key and this small attribute. Now, because the index is prefixed by the partition key, I can create a Local Secondary Index (LSI) to do the same. But there are many limitations. The first one is that I cannot add a local index afterwards: I need to define it at table creation.

Drop table

Here I am in a lab, so I can drop and re-create the table. In real life you may have to create a new one, copy the items, synchronize it (DynamoDB Streams),…
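
Following the approach described above, here is a minimal boto3 sketch (mine, not the post’s code) that re-creates the table with a Local Secondary Index covering the small attribute; the table name, index name, region, attribute types and billing mode are assumptions.

import boto3

# Re-create the table with a Local Secondary Index defined at creation time
# (assumption: names, region, attribute types and billing mode are placeholders).
dynamodb = boto3.client("dynamodb", region_name="eu-west-1")

dynamodb.create_table(
    TableName="Demo",
    AttributeDefinitions=[
        {"AttributeName": "MyKeyPart", "AttributeType": "N"},
        {"AttributeName": "MyKeySort", "AttributeType": "N"},
    ],
    KeySchema=[
        {"AttributeName": "MyKeyPart", "KeyType": "HASH"},
        {"AttributeName": "MyKeySort", "KeyType": "RANGE"},
    ],
    LocalSecondaryIndexes=[
        {
            "IndexName": "DemoLSI",
            "KeySchema": [
                {"AttributeName": "MyKeyPart", "KeyType": "HASH"},
                {"AttributeName": "MyKeySort", "KeyType": "RANGE"},
            ],
            # Covering projection: include only the small attribute so queries
            # through the index never read the large "MyData001" part of the item
            "Projection": {"ProjectionType": "INCLUDE", "NonKeyAttributes": ["MyIndo001"]},
        }
    ],
    BillingMode="PAY_PER_REQUEST",
)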

DynamoDB: adding a Global covering index to reduce the cost

By Franck Pachot

People often think of indexes as a way to optimize row filtering (“get item” faster and cheaper). But indexes are also about columns (“attribute projection”), like some kind of vertical partitioning. In relational (“SQL”) databases we often add more columns to the indexed key. These are called “covering” or “including” indexes, and they avoid reading the whole row. The same is true in NoSQL. I’ll show in this post how, even when an index is not required to filter the items, because the primary key partitioning is sufficient, we may have to create a secondary index to reduce the cost of partial access to the item. Here is an example with AWS DynamoDB, where the cost depends on I/O throughput.
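
As an illustration of what adding such a covering GSI can look like (my sketch, not code from the post), here is a hedged boto3 example that creates an index on the same key and projects only the small attribute; the table name, index name, region, attribute types and on-demand billing are assumptions.

import boto3

# Add a covering Global Secondary Index to an existing table (assumption:
# names, region and attribute types are placeholders, and the table uses
# on-demand billing, so no ProvisionedThroughput is specified for the index).
dynamodb = boto3.client("dynamodb", region_name="eu-west-1")

dynamodb.update_table(
    TableName="Demo",
    AttributeDefinitions=[
        {"AttributeName": "MyKeyPart", "AttributeType": "N"},
        {"AttributeName": "MyKeySort", "AttributeType": "N"},
    ],
    GlobalSecondaryIndexUpdates=[
        {
            "Create": {
                "IndexName": "DemoGSI",
                "KeySchema": [
                    {"AttributeName": "MyKeyPart", "KeyType": "HASH"},
                    {"AttributeName": "MyKeySort", "KeyType": "RANGE"},
                ],
                # Project only the small attribute so reads through the index
                # are billed on a few bytes instead of the whole item
                "Projection": {"ProjectionType": "INCLUDE", "NonKeyAttributes": ["MyIndo001"]},
            }
        }
    ],
)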

Oracle and the Future

We’ll start with a disclaimer here: this is my experience and my opinion, NO ONE ELSE’S, just as any link to any blog in this post is. Take it or leave it, I really don’t care, never have, never will; never been one to follow any drummer but my own anyway.

Importing geo-partitioned data… the easy way

 

setting the stage

I started at Cockroach Labs back in June 2019 to help others learn how to architect and develop applications using a geo-distributed database. There has been a resurgence in distributed database technology, but the focus on geo-distribution is quite unique to CockroachDB. While the underlying technology is unique, developers and DBAs who come with a wealth of experience need to know how to best use this innovative technology. Given this situation, I thought it would be good to start a blog series to explore various topics facing anyone beginning to architect database solutions with CockroachDB.

To start using a database, the first step is to IMPORT table data so you can begin to see how the database performs and responds.  And thus the IMPORT series has started!
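
As a starting point, here is a minimal sketch of such an IMPORT (my own illustration, under assumed names): it uses psycopg2 over CockroachDB’s PostgreSQL wire protocol, and the connection URL, table definition, columns and CSV URL are placeholders.

import psycopg2

# Import CSV data into a CockroachDB table (assumption: the connection URL,
# table, columns and file URL are placeholders; the file must be reachable by the nodes).
conn = psycopg2.connect("postgresql://root@localhost:26257/defaultdb?sslmode=disable")
conn.autocommit = True  # IMPORT cannot run inside an explicit transaction
cur = conn.cursor()

cur.execute("create table if not exists users (id int primary key, name string, region string)")

cur.execute("""
    import into users (id, name, region)
    csv data ('https://example.com/users.csv')
""")
print(cur.fetchone())  # job id, status, rows imported, bytes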