multitenant

Wrong result with multitenant, dba_constraints and current_schema

Multitenant architecture is not such a big change, and this is why I recommend it when you start a project in 12c or when you upgrade to 12.2 – of course after thoroughly testing your application. However, there is a point where you may encounter problems with dictionary queries, because internally it really is a big change. The dictionary separation has several side effects. You should carefully test the queries you run on the dictionary views to get metadata. Here is an example of a bug I recently encountered.

This happened with a combination of things you should not do very often, and not in a critical use case: querying the dictionary for constraints owned by your current schema when it is different from the user you connect with.
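
To make the scenario concrete, here is a minimal sketch of the kind of query involved; the schema name DEMO and the query itself are hypothetical illustrations, not the actual test case from the post:

-- connected as one user, switch the parsing schema to another one
alter session set current_schema=DEMO;
-- then ask the dictionary for the constraints owned by the current schema
select owner, table_name, constraint_name, constraint_type
  from dba_constraints
 where owner = sys_context('userenv','current_schema');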

Active Data Guard services in Multitenant

A database (or the CDB in multitenant) registers its name as the default service. When a standby database is on the same server, or in the same cluster, there is no problem because this default service name is the db_unique_name, which is different between the primary and the standby(s).

In multitenant, in addition to that, each PDB registers its name as a service. But the PDB name is the same in the primary and in the standby database. This means that the same service name is registered for the PDB on both the primary and the standby:

Service "pdb1" has 2 instance(s).
Instance "CDB2A", status READY, has 1 handler(s) for this service...
Instance "CDB2B", status READY, has 1 handler(s) for this service...

We cannot change that, so it is strongly recommended to create different services for the PDB on the primary and on the standby.
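
As an illustration, here is a minimal sketch of creating a dedicated application service in the PDB and starting it only when the database runs in the primary role; the service name PDB1_APP is a hypothetical example, and on RAC or Grid Infrastructure you would rather manage role-based services with srvctl:

alter session set container=PDB1;
exec dbms_service.create_service(service_name=>'PDB1_APP', network_name=>'PDB1_APP');
-- start the application service only when the database runs as primary
begin
  if sys_context('userenv','database_role') = 'PRIMARY' then
    dbms_service.start_service('PDB1_APP');
  end if;
end;
/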

When PDB name conflicts with CDB name

Going to the multitenant architecture is not a big change. Administration tasks (DBA, monitoring, backups) connect to the CDB and the application connects to the PDB. Even without the multitenant option, it is still recommended to go to the CDB architecture: the non-CDB is deprecated and the multitenant architecture brings interesting features. People often ask how to name the CDB and the PDB, especially when they have naming rules or policies in the company. My recommendation is to name the PDB as you are used to naming your databases: the name often gives an idea of the data that is inside, the application, and the environment. The CDB is the container, and in my opinion you should apply the same naming rules as for servers. Don’t forget that pluggable databases are made to be moved across CDBs, so the CDB name should not depend on the content.

Bequeath connect to PDB: set container in logon trigger?

Few things change when you go to the multitenant architecture, and one of them is that you must connect with a service name: you cannot connect directly to a PDB with a bequeath (aka local) connection. This post is about a workaround you may have in mind: create a common user and set a logon trigger to ‘set container’. I do not recommend it, and you should really connect with a service. Here is an example.

Imagine that I have a user connecting with a bequeath connection to a non-CDB, using user/password without a connection string, the database being determined by the ORACLE_SID. And I want to migrate to a CDB without changing anything on the client connection configuration side. The best idea would be to use a service, explicitly or implicitly with TWO_TASK or LOCAL. But let’s imagine that you don’t want to change anything on the client side.
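
For reference, here is a minimal sketch of that workaround (again, not recommended); the common user C##APP and the PDB name PDB1 are hypothetical, and the trigger is created in CDB$ROOT by a privileged common user:

create user C##APP identified by "secret" container=all;
grant create session, set container to C##APP container=all;
create or replace trigger SET_PDB_AT_LOGON after logon on database
begin
  -- move the bequeath session from CDB$ROOT into the application PDB
  if sys_context('userenv','session_user') = 'C##APP' then
    execute immediate 'alter session set container = PDB1';
  end if;
end;
/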

12c Multitenant Internals: VPD for V$ views

I described in an earlier post on AWR views how the dictionary views use metadata links and object links to show information from other containers. But this mechanism cannot work for fixed views (aka V$ views) because they do not have their definition in the dictionary.

The big difference is that most V$ views are available long before the dictionary is opened or even created. Just start an instance in NOMOUNT and you can query the V$ views. Even in multitenant, you can switch to different containers in MOUNT and query V$ views when no dictionary is opened.
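
A quick way to see this behaviour, as a hedged sketch (PDB1 is a hypothetical PDB name): the same V$ view returns rows for all containers when queried from CDB$ROOT, but only the rows of the current container once you switch to a PDB, even with the database only mounted:

startup mount
select con_id, count(*) from v$datafile group by con_id;
-- switch container while only mounted: no dictionary is open, yet the rows are filtered
alter session set container = PDB1;
select con_id, count(*) from v$datafile group by con_id;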

12cR2 Application Containers and Foreign Keys

Application Containers bring a new way to share data among databases, and add a new dimension to referential integrity. A foreign key in an application PDB can reference a row that belongs to a data link table in the application root. But then, should a delete on the root validate that there are no orphans in the application PDBs? And what if those PDBs are closed at the time of this delete? Here is a small example.
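
As an illustration only, here is a hedged sketch of such a setup; the application name MYAPP, the schema APP (assumed to be an existing application common user) and the table names are hypothetical and not the post’s actual example:

-- in the application root, within an application install, create a data link table
alter pluggable database application MYAPP begin install '1.0';
create table APP.COUNTRIES (code varchar2(2) primary key, name varchar2(40)) sharing=data;
insert into APP.COUNTRIES values ('FR','France');
commit;
alter pluggable database application MYAPP end install '1.0';

-- in an application PDB, after 'alter pluggable database application MYAPP sync':
create table APP.CUSTOMERS (
  id number primary key,
  country_code varchar2(2) references APP.COUNTRIES(code)
);
insert into APP.CUSTOMERS values (1,'FR');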

If you run this in 12.2.0.1 you will get an error, because the search for the parent key is done only on the current container. This is considered a bug: 21955394: CDB:ORA-02291 WHEN FOREIGN KEY REFERS TO THE PRIMARY KEY IN DATA LINK

The example that follows is run with the patch applied to fix this bug.

12cR2 PDB refresh as a poor-man standby?

Disclaimer

My goal here is only to show that the Refreshable PDB feature works by shipping and applying redo, and can therefore keep a copy of the datafiles synchronized. I do not recommend using it for disaster recovery in any production environment yet. Even if I am using only supported features, those features were not designed for this usage, and they are quite new and not yet stable. Disaster recovery must use safe and proven technologies, and this is why I’ll stick with Dbvisit standby for disaster recovery in Standard Edition.

This post explains what I had in mind with the following tweet:
[tweet screenshot]
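
To give an idea of the mechanism, here is a minimal sketch of a refreshable clone; the database link CDB1_LINK to the source CDB and the PDB names are hypothetical:

create pluggable database PDB1_SBY from PDB1@CDB1_LINK refresh mode every 2 minutes;
-- the clone stays closed while redo is shipped and applied; open it read only to check the copy
alter pluggable database PDB1_SBY open read only;
-- close it again so that the automatic refresh can resume
alter pluggable database PDB1_SBY close immediate;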

12cR2 Cross-container DML – insert into container()

Multitenant was introduced in 12.1.0.1 with the goal of sharing resources while isolating data. However, having all PDBs in the same root may be convenient for manipulating data in multiple PDBs. In the first patchset, 12.1.0.2, a way to query across containers was introduced for the CDB administrator to see data in other containers. In the second release, 12.2.0.1, this goes further with the introduction of Application Containers and cross-PDB DML. Currently, not all possibilities are documented and not all documented features actually work. This will probably improve in the next patchset. I’ll start here with something simple: an insert from the root into a table which is in a PDB.
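
As a heavily hedged sketch of the kind of statement meant here, assuming a table DEMO.T that exists with the same definition in the root and in the target PDB (CON_ID 3), and with the CONTAINERS clause prerequisites in place (the names, columns and CON_ID are hypothetical; the exact syntax and what actually works is what the post explores):

-- run from CDB$ROOT as a common user: the CON_ID value addresses the target container
insert into containers(DEMO.T) (con_id, id, text) values (3, 1, 'inserted from root');
commit;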

12cR2 partial PDB backup

I had a recent question about the following mention from the 12cR2 Multitenant book, about Partial PDB backups:
[screenshot of the book excerpt]
Here is an example in 12.2 with local undo to illustrate the answer, which may help to understand what a partial PDB backup is.
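
In short, a partial PDB backup is a backup of only a subset of the PDB’s tablespaces. A hedged sketch with hypothetical names (PDB1 and its USERS tablespace), from an RMAN session connected to the CDB root:

RMAN> backup tablespace PDB1:USERS;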

Oracle 12cR2, RAC, Cloud, ACFS, PDB thin clones and asmadmin

In the Oracle Public Cloud, fast provisioning gets all its meaning when creating a RAC database service: in one hour you can get an operational, highly available multitenant database. You can even create it in Data Guard for disaster recovery. Now, Oracle is pushing ACFS to store the datafiles rather than direct ASM, especially in multitenant, because a great feature is thin cloning: CREATE PLUGGABLE DATABASE AS SNAPSHOT COPY. However, I encountered an error when I tried it for the first time.

TDE keystore

SQL> create pluggable database pdb2 from pdb1 snapshot copy;
create pluggable database pdb2 from pdb1 snapshot copy
*
ERROR at line 1:
ORA-28357: password required to open the wallet

Oh yes, in the cloud all tablespaces are encrypted. In 12.2 we can put the keystore password in the command:
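
A sketch of what such a command looks like, with a placeholder keystore password (use the password that was set for the CDB wallet):

SQL> create pluggable database pdb2 from pdb1 snapshot copy keystore identified by "MyKeystorePwd#1";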