Oakies Blog Aggregator

PostgreSQL Active Session History extension testing with Docker

Introduction

You may have noticed that a beta version of the pgsentinel extension (providing active session history for PostgreSQL) is publicly available in this GitHub repository.

We can rely on Docker to facilitate and automate testing of the extension on a particular PostgreSQL version (which has to be 10 or higher).

Let’s share a Dockerfile so that we can easily build a Docker image for testing pgsentinel (on a PostgreSQL version of our choice).

Description

The Dockerfile used to provision a pgsentinel testing Docker image:

  • downloads the PostgreSQL source code (version of your choice)
  • compiles and installs it
  • downloads the pgsentinel extension
  • compiles and installs it
  • compiles and installs the pg_stat_statements extension

Dockerfile GitHub repository

The Dockerfile is available in this GitHub repository.

Usage

Three build arguments can be used:

  • PG_VERSION (default: 10.5)
  • PG_ASH_SAMPLING (default: 1)
  • PG_ASH_MAX_ENTRIES (default: 10000)

Example to build an image

Let’s build a pgsentinel testing Docker image for PostgreSQL version 11beta3:

[root@bdtdocker dockerfile]# docker build -t pgsentinel-testing -f Dockerfile-pgsentinel-testing --build-arg PG_VERSION=11beta3 --force-rm=true --no-cache=true .

Once done, let’s run a container

[root@bdtdocker dockerfile]# docker run -d -p 5432:5432 --name pgsentinel pgsentinel-testing

and verify that the pg_active_session_history view is available

[root@bdtdocker dockerfile]# docker exec -it pgsentinel psql -c "\d pg_active_session_history"
                   View "public.pg_active_session_history"
      Column      |           Type           | Collation | Nullable | Default
------------------+--------------------------+-----------+----------+---------
 ash_time         | timestamp with time zone |           |          |
 datid            | oid                      |           |          |
 datname          | text                     |           |          |
 pid              | integer                  |           |          |
 usesysid         | oid                      |           |          |
 usename          | text                     |           |          |
 application_name | text                     |           |          |
 client_addr      | text                     |           |          |
 client_hostname  | text                     |           |          |
 client_port      | integer                  |           |          |
 backend_start    | timestamp with time zone |           |          |
 xact_start       | timestamp with time zone |           |          |
 query_start      | timestamp with time zone |           |          |
 state_change     | timestamp with time zone |           |          |
 wait_event_type  | text                     |           |          |
 wait_event       | text                     |           |          |
 state            | text                     |           |          |
 backend_xid      | xid                      |           |          |
 backend_xmin     | xid                      |           |          |
 top_level_query  | text                     |           |          |
 query            | text                     |           |          |
 cmdtype          | text                     |           |          |
 queryid          | bigint                   |           |          |
 backend_type     | text                     |           |          |
 blockers         | integer                  |           |          |
 blockerpid       | integer                  |           |          |
 blocker_state    | text                     |           |          |

We can now test the extension’s behavior on the PostgreSQL version of our choice (11beta3 in this example).
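As a quick smoke test, we can generate some activity (with pgbench, for example) and then sample the view. Here is a minimal sketch of such a query, using only the columns listed above (the 5-minute window is an arbitrary choice):

-- top wait events sampled over the last 5 minutes
SELECT wait_event_type, wait_event, count(*) AS samples
FROM pg_active_session_history
WHERE ash_time > now() - interval '5 minutes'
GROUP BY wait_event_type, wait_event
ORDER BY samples DESC
LIMIT 10;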

18c runInstaller -silent

By Franck Pachot

You find two different ‘runInstaller’ scripts under an Oracle Home. The old one, the Oracle Universal Installer, is in $ORACLE_HOME/oui/bin. The new one is in $ORACLE_HOME directly. They have the same name but are completely different. The old one was used to install an Oracle Home from the installation media, but in 18c you don’t use it: it has been used by Oracle to build the Oracle Home image. You now download and unzip your Oracle Home directly, and you only have to configure it and re-link the binaries. This is done by the new runInstaller, which is at the root of the Oracle Home. Actually, it is just a shell script that runs the Perl dbSetup.pl to set up the Oracle Database software. In my opinion, it would be better to call it dbSetup.sh rather than rename it to runInstaller, especially given that the same thing for Grid Infrastructure has been called GridSetup.sh since 12cR2. The Perl script finally runs the Java GUI. It can also be run in command line, aka silent mode, which is the goal of this post. The command line arguments are similar to, but not the same as, those of the old runInstaller.

Prerequisites

You may want to run the prerequisites only to check if your system is ready for the installation. Here is how to do so in command line:

$ $ORACLE_HOME/runInstaller -silent -executePrereqs -responseFile $ORACLE_HOME/inventory/response/db_install.rsp
 
Launching Oracle Database Setup Wizard...
 
[FATAL] [INS-13013] Target environment does not meet some mandatory requirements.
CAUSE: Some of the mandatory prerequisites are not met. See logs for details. /u00/app/oraInventory/logs/InstallActions2018-08-11_06-07-14PM/installActions2018-08-11_06-07-14PM.log
ACTION: Identify the list of failed prerequisite checks from the log: /u00/app/oraInventory/logs/InstallActions2018-08-11_06-07-14PM/installActions2018-08-11_06-07-14PM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.

From there we can check the log for the tests that have failed, as in the following example:

INFO: [Aug 11, 2018 6:08:21 PM] Physical Memory: This is a prerequisite condition to test whether the system has at least 8GB (8388608.0KB) of total physical memory.
INFO: [Aug 11, 2018 6:08:21 PM] Severity:IGNORABLE
INFO: [Aug 11, 2018 6:08:21 PM] OverallStatus:VERIFICATION_FAILED
INFO: [Aug 11, 2018 6:08:21 PM] *********************************************
INFO: [Aug 11, 2018 6:08:21 PM] Run Level: This is a prerequisite condition to test whether the system is running with proper run level.
INFO: [Aug 11, 2018 6:08:21 PM] Severity:CRITICAL
INFO: [Aug 11, 2018 6:08:21 PM] OverallStatus:VERIFICATION_FAILED
INFO: [Aug 11, 2018 6:08:21 PM] *********************************************
INFO: [Aug 11, 2018 6:08:21 PM] OS Kernel Version: This is a prerequisite condition to test whether the system kernel version is at least "2.6.39-400.211.1".
INFO: [Aug 11, 2018 6:08:21 PM] Severity:CRITICAL
INFO: [Aug 11, 2018 6:08:21 PM] OverallStatus:VERIFICATION_FAILED

Software Install

You can pass all parameters on the command line (‘runInstaller -silent -help’ shows all possibilities), but in all cases you need a response file, so I put everything I need in the response file. There’s no mention of the ORACLE_HOME because you already unzipped it in the right place. The most important parameter is the edition, which seems to accept [EE, SEONE, SE2, HP, XP, PE]. I didn’t try SEONE; Standard Edition One is for versions <= 12.1.0.1 anyway.

cd $ORACLE_HOME
 
cat > db18EE.rsp <<END
oracle.install.responseFileVersion=/oracle/install/rspfmt_dbinstall_response_schema_v18.0.0
oracle.install.option=INSTALL_DB_SWONLY
UNIX_GROUP_NAME=oinstall
INVENTORY_LOCATION=/u00/app/oraInventory
ORACLE_BASE=/u00/app/oracle
oracle.install.db.InstallEdition=EE
oracle.install.db.OSDBA_GROUP=dba
oracle.install.db.OSOPER_GROUP=oper
oracle.install.db.OSBACKUPDBA_GROUP=backupdba
oracle.install.db.OSDGDBA_GROUP=dgdba
oracle.install.db.OSKMDBA_GROUP=kmdba
oracle.install.db.OSRACDBA_GROUP=dba
END

There is no need for the oracle.install.db.config variables because I’ll install the software only without creating a database.

Here is how to run the dbSetup. You can use ‘-ignorePrereqFailure’ to ignore the prerequisites if you want to install to a host where some prerequisites fail:

./runInstaller -silent -noconfig -ignorePrereqFailure -responseFile ./db18EE.rsp

 
The installation log goes into oraInventory/logs and, as usual, you have to run root.sh:

As a root user, execute the following script(s):
1. /u00/app/oracle/product/18SE/root.sh
 
Execute /u00/app/oracle/product/18SE/root.sh on the following nodes:
[vmreforatun01]

This new runInstaller can also apply one-off patches with -applyOneOffs, mentioning the patch locations. You can also build a customized Oracle Home image with -createGoldImage -destinationLocation, and even exclude some files or paths with -exclFiles to make it smaller.

 

The post 18c runInstaller -silent appeared first on Blog dbi services.

TRANSPORT_CONNECT_TIMEOUT and RETRY_COUNT

By Franck Pachot

When you have a Data Guard configuration, you want the application to connect to the right server, where the primary is, without taking too much time. The default TCP timeout is 1 minute, which is too long. When you don’t want to configure a virtual IP address (VIP), you can simply list all the addresses in the client connection string. But then you need to reduce the timeout. A short timeout of 1 to 5 seconds will be OK most of the time, but in case of a network issue, you want to give a chance to retry with a longer timeout. This post is about the connection string parameters that define this. Of course, all of this is documented, but the goal of this post is also to show how to quickly test it, because a reliable understanding of how it works relies on both documentation and testing.

Here is a simple client failover configuration where the connection tries 10.10.10.10 and, if it fails, tries 10.10.10.11

DEFAULT=
(DESCRIPTION=
(CONNECT_DATA=(SERVICE_NAME=pdb1))
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))
)
)

The problem with that is that when 10.10.10.10 is down, 10.10.10.11 will be tried only after 60 seconds, the default TCP timeout. You can completely avoid waiting for the timeout by using a virtual IP that will always be up, started on the failed-over server. But you can also reduce the TCP timeout to a few seconds.

Here is a tnsping with the above tnsnames.ora entry and when both servers are down:


$ time tnsping DEFAULT
 
TNS Ping Utility for Linux: Version 18.0.0.0.0 - Production on 10-AUG-2018 15:15:55
 
Copyright (c) 1997, 2018, Oracle. All rights reserved.
 
Used parameter files:
 
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION= (CONNECT_DATA=(SERVICE_NAME=pdb1)) (ADDRESS_LIST= (ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521)) (ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))))
TNS-12535: TNS:operation timed out
 
real 2m0.051s
user 0m0.005s
sys 0m0.011s

That’s 2 minutes because there is a 1 minute timeout for each address.

TRANSPORT_CONNECT_TIMEOUT

Now, just adding the TRANSPORT_CONNECT_TIMEOUT to the connection string description to reduce the timeout to 4 seconds:


DESCRIPTION=
(DESCRIPTION=
(CONNECT_DATA=(SERVICE_NAME=pdb1))
(TRANSPORT_CONNECT_TIMEOUT=4)
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))
)
)

The total time to get the answer from both addresses is 8 seconds – 4 seconds for each:

$ time tnsping DESCRIPTION
 
TNS Ping Utility for Linux: Version 18.0.0.0.0 - Production on 10-AUG-2018 15:15:55
 
Copyright (c) 1997, 2018, Oracle. All rights reserved.
 
Used parameter files:
 
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION= (CONNECT_DATA=(SERVICE_NAME=pdb1)) (TRANSPORT_CONNECT_TIMEOUT=4) (ADDRESS_LIST= (ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521)) (ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))))
TNS-12535: TNS:operation timed out
 
real 0m8.023s
user 0m0.010s
sys 0m0.006s

RETRY_COUNT

If you lower the timeout, you may give a chance to retry a few times with RETRY_COUNT. Here, RETRY_COUNT=2 gives 3 attempts (1 + 2 retries) through the address list.
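Here is the corresponding tnsnames.ora entry (reconstructed from the descriptor shown in the tnsping output below):

RETRY_COUNT=
(DESCRIPTION=
(CONNECT_DATA=(SERVICE_NAME=pdb1))
(TRANSPORT_CONNECT_TIMEOUT=4)
(RETRY_COUNT=2)
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))
)
)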

$ time tnsping RETRY_COUNT
 
TNS Ping Utility for Linux: Version 18.0.0.0.0 - Production on 10-AUG-2018 15:49:34
 
Copyright (c) 1997, 2018, Oracle. All rights reserved.
 
Used parameter files:
 
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION= (CONNECT_DATA=(SERVICE_NAME=pdb1)) (TRANSPORT_CONNECT_TIMEOUT=4) (RETRY_COUNT=2) (ADDRESS_LIST= (ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521)) (ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))))
TNS-12535: TNS:operation timed out
 
real 0m24.049s
user 0m0.011s
sys 0m0.010s

This has tried 10.10.10.10 and then 10.10.10.11 for 4 seconds each, and then retried 2 more times, which in total takes 3 x 2 x 4 = 24 seconds.

DESCRIPTION_LIST

TRANSPORT_CONNECT_TIMEOUT and RETRY_COUNT are used only within a DESCRIPTION. You may want to give several attempts with an increasing timeout. For example: try each address for one second to get a quick connection to the primary, wherever it is, when the network is in good health. Then give two attempts with a 5-second timeout for bad network times. And then one final attempt to each with the default timeout to be sure that the servers are down.

You can use a DESCRIPTION_LIST for this:

INCREASING=
(DESCRIPTION_LIST=
(LOAD_BALANCE=off)
(DESCRIPTION=
(CONNECT_DATA=(SERVICE_NAME=pdb1))
(TRANSPORT_CONNECT_TIMEOUT=1)
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))
)
)
(DESCRIPTION=
(CONNECT_DATA=(SERVICE_NAME=pdb1))
(TRANSPORT_CONNECT_TIMEOUT=5)
(RETRY_COUNT=1)
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))
)
)
(DESCRIPTION=
(CONNECT_DATA=(SERVICE_NAME=pdb1))
(TRANSPORT_CONNECT_TIMEOUT=2)
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))
)
)
)

Rather than just timing the total attempts, I’ll strace each connection attempt:

$ strace -tT tnsping INCREASING 2>&1 | grep -C1 --color=auto -E 'poll.*|inet_addr[()".0-9]*'
 
16:15:49 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000008>
16:15:49 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.10")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000087>
16:15:49 times(NULL) = 434920117 <0.000011>
16:15:49 mmap(NULL, 528384, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7efce31bc000 <0.000013>
16:15:49 poll([{fd=4, events=POLLOUT}], 1, 1000) = 0 (Timeout) <1.001435>
16:15:50 close(4) = 0 <0.000256>
--
16:15:50 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000060>
16:15:50 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.11")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000495>
16:15:50 times(NULL) = 434920218 <0.000062>
16:15:50 poll([{fd=4, events=POLLOUT}], 1, 1000) = 0 (Timeout) <1.000768>
16:15:51 close(4) = 0 <0.000050>
--
16:15:51 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000015>
16:15:51 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.10")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000060>
16:15:51 times(NULL) = 434920318 <0.000010>
16:15:51 poll([{fd=4, events=POLLOUT}], 1, 5000) = 0 (Timeout) <5.005563>
16:15:56 close(4) = 0 <0.000027>
--
16:15:56 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000012>
16:15:56 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.11")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000081>
16:15:56 times(NULL) = 434920819 <0.000015>
16:15:56 poll([{fd=4, events=POLLOUT}], 1, 5000) = 0 (Timeout) <5.006265>
16:16:01 close(4) = 0 <0.000192>
--
16:16:01 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000079>
16:16:01 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.10")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000486>
16:16:01 times(NULL) = 434921320 <0.000087>
16:16:01 poll([{fd=4, events=POLLOUT}], 1, 5000) = 0 (Timeout) <5.004660>
16:16:06 close(4) = 0 <0.000611>
--
16:16:06 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000114>
16:16:06 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.11")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000536>
16:16:06 times(NULL) = 434921822 <0.000097>
16:16:06 poll([{fd=4, events=POLLOUT}], 1, 5000) = 0 (Timeout) <5.008128>
16:16:11 close(4) = 0 <0.000135>
--
16:16:11 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000137>
16:16:11 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.10")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000584>
16:16:11 times(NULL) = 434922323 <0.000079>
16:16:11 poll([{fd=4, events=POLLOUT}], 1, 60000) = 0 (Timeout) <60.053782>
16:17:11 close(4) = 0 <0.000166>
--
16:17:11 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000195>
16:17:11 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.11")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000549>
16:17:11 times(NULL) = 434928329 <0.000488>
16:17:11 poll([{fd=4, events=POLLOUT}], 1, 60000) = 0 (Timeout) <60.007246>
16:18:11 close(4) = 0 <0.000043>

With ‘-T’, strace shows the duration of the poll() system call between angle brackets after the return code. You can see here the 1-second timeout attempts to each address, then two attempts with a 5-second timeout, and finally one attempt to each address with the default 60-second timeout.

Note that I have added (LOAD_BALANCE=OFF) here because the default is ON in a DESCRIPTION_LIST, and here I want to try the descriptions in the order I specified them.

 

The post TRANSPORT_CONNECT_TIMEOUT and RETRY_COUNT appeared first on Blog dbi services.

ATP vs ADW – the Autonomous Database lockdown profiles

By Franck Pachot

The Oracle database has always distinguished two types of workloads: transactional (OLTP) and datawarehouse (VLDB, DWH, DSS, BI, analytics). There is the same idea in the managed Oracle Cloud with two autonomous database services.

To show how old this is, here is how they were defined in the Oracle7 Tuning Book:

[Image: CaptureOLTPvsDSS – OLTP vs. DSS definitions from the Oracle7 Tuning Book]

The definition has not changed a lot. But the technology behind DSS/DWH has improved. Now, with In-Memory Column Store, Smart Scan and Result Cache, we can even see that indexes, materialized views, star transformation, hints, etc. are disabled in the Autonomous Datawarehouse cloud service.

The differences between the two autonomous cloud services, ATP (Autonomous Transaction Processing) for OLTP and ADW (Autonomous Datawarehouse) for analytics, have been described by Maria Colgan after Larry Ellison’s announcement:
https://sqlmaria.com/2018/08/07/how-does-autonomous-transaction-processing-differ-from-the-autonomous-data-warehouse/

PDBaaS

Those autonomous services are PDB as a Service. They use the consolidation and isolation features of 12cR2 multitenant. And we can even see that the ATP and ADW services can run within the same CDB and the same instance. They are different PDBs and they differ only by their resource management plans and PDB lockdown profiles. So let’s see the differences from this point of view: the ATP lockdown profile is called ‘OLTP’ and the ADW lockdown profile is called ‘DWCS’.
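To check which lockdown profile applies to the PDB you are connected to, you can simply query the pdb_lockdown parameter – the same subquery used in the listings below:

select value as pdb_lockdown_profile from v$parameter where name='pdb_lockdown';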

Options

The only difference concerns partitioning, which is enabled for ATP and disabled for ADW:

SQL> select (select value from v$parameter where name='pdb_lockdown') type,v$lockdown_rules.* from v$lockdown_rules where rule_type='OPTION';
 
TYPE RULE_TYPE RULE CLAUSE CLAUSE_OPTION STATUS USERS CON_ID
---- --------- ---- ------ ------------- ------ ----- ------
OLTP OPTION PARTITIONING ENABLE ALL 284
 
SQL> select (select value from v$parameter where name='pdb_lockdown') type,v$lockdown_rules.* from v$lockdown_rules where rule_type='OPTION' order by 1, 2 ,3 nulls first, 4 nulls first,5;
 
TYPE RULE_TYPE RULE CLAUSE CLAUSE_OPTION STATUS USERS CON_ID
---- --------- ---- ------ ------------- ------- ----- ------
DWCS OPTION PARTITIONING DISABLE ALL 73

Features

The feature rules are the same for both services:
Disabled for all users: COMMON_SCHEMA_ACCESS, CONNECTIONS, CTX_LOGGING, NETWORK_ACCESS, OS_ACCESS, SYSTEM_DATA
Disabled for local users only: USER_PASSWORD_VERIFIERS
Enabled for all users: COMMON_USER_CONNECT, TRACE_VIEW_ACCESS, UTL_FILE

Those last ones are good news. We can query V$DIAG_OPT_TRACE_RECORDS, V$DIAG_SQL_TRACE_RECORDS and V$DIAG_TRACE_FILE_CONTENTS to show some diagnostics. Unfortunately, I’ve seen no allowed way to enable SQL Trace.
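For example, a hedged way to peek at trace content through one of those views – just selecting the first rows, because the exact column list may vary with the version:

select * from v$diag_trace_file_contents where rownum <= 10;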

DDL Statements

In the ADW service, the local user cannot create indexes and materialized views; they are supposed to be created autonomously when required (probably by a common user):

SQL> select (select value from v$parameter where name='pdb_lockdown') type,v$lockdown_rules.* from v$lockdown_rules where regexp_like(rule,'(INDEX|MATERIALIZED)') order by 1, 2 ,3 nulls first, 4 nulls first,5;
 
TYPE RULE_TYPE RULE CLAUSE CLAUSE_OPTION STATUS USERS CON_ID
---- --------- ---- ------ ------------- ------ ----- ------
DWCS STATEMENT ALTER INDEX DISABLE LOCAL 73
DWCS STATEMENT ALTER INDEXTYPE DISABLE LOCAL 73
DWCS STATEMENT ALTER MATERIALIZED VIEW DISABLE LOCAL 73
DWCS STATEMENT ALTER MATERIALIZED VIEW LOG DISABLE LOCAL 73
DWCS STATEMENT CREATE INDEX DISABLE LOCAL 73
DWCS STATEMENT CREATE INDEXTYPE DISABLE LOCAL 73
DWCS STATEMENT CREATE MATERIALIZED VIEW DISABLE LOCAL 73
DWCS STATEMENT CREATE MATERIALIZED VIEW LOG DISABLE LOCAL 73
DWCS STATEMENT DROP INDEX DISABLE LOCAL 73
DWCS STATEMENT DROP INDEXTYPE DISABLE LOCAL 73
DWCS STATEMENT DROP MATERIALIZED VIEW DISABLE LOCAL 73
DWCS STATEMENT DROP MATERIALIZED VIEW LOG DISABLE LOCAL 73

The ATP service enables all those for all users.

System parameters

Both services disable all ALTER SYSTEM statements except KILL SESSION. They allow all ALTER SYSTEM SET for the common users but allow only a few parameters to be changed by local users.
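To see exactly which ALTER SYSTEM SET clauses remain open, the same kind of query on v$lockdown_rules can be used – a sketch following the pattern above (the exact output depends on the service):

select (select value from v$parameter where name='pdb_lockdown') type, rule_type, rule, clause, clause_option, status, users
from v$lockdown_rules
where rule='ALTER SYSTEM' and clause='SET'
order by status, users, clause_option;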

Basically, the ADW service sets the following:


_default_pct_free=1
_ldr_io_size=33554432
_ldr_io_size2=33554432
_max_io_size=33554432
_optimizer_allow_all_access_paths=FALSE
_optimizer_answering_query_using_stats=TRUE
optimizer_ignore_hints=TRUE
optimizer_ignore_parallel_hints=TRUE
parallel_degree_policy=AUTO
parallel_min_degree=CPU
result_cache_max_result=1
result_cache_mode=FORCE

The ATP service keeps the defaults.
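We can verify a few of these from within each service by querying the visible (non-underscore) parameters – a quick sketch:

select name, value from v$parameter
where name in ('optimizer_ignore_hints','optimizer_ignore_parallel_hints','parallel_degree_policy','parallel_min_degree','result_cache_mode','result_cache_max_result')
order by name;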

Basically, all the features for OLTP have been in the Oracle Database for years. Under the ‘autonomous’ umbrella we can see some well-known features:

  • Automatic provisioning: pluggable databases
  • Automatic scaling: PDB resource manager
  • Automatic tuning: SQL Plan Management, Adaptive Plans, SQL Tuning Advisor, Storage Indexes
  • Automatic security: Rolling patches, Encryption, Database Vault
  • Automatic Fault Tolerant Failover: RAC, Data Guard
  • Automatic Backup and Recovery: RMAN, Flashback

All these proven features are enhanced to work together in a consolidated multitenant environment. Provisioning is as fast as a CREATE PDB. The ATP and ADW services enable and configure the right set of features for each workload.

 

The post ATP vs ADW – the Autonomous Database lockdown profiles appeared first on Blog dbi services.

MERGE JOIN CARTESIAN: a join method or a join type?

By Franck Pachot

I’ll present on join methods at POUG and DOAG. I’ll show how the different join methods work in order to better understand them. The idea is to show Nested Loops, Hash Join, Sort Merge Join, Merge Join Cartesian on the same query. I’ll run a simple join between DEPT and EMP with the USE_NL, USE_HASH, USE_MERGE and USE_MERGE_CARTESIAN hints. I’ll show the execution plan, with SQL Monitoring in text mode. And I’ll put some gdb breakpoints on the ‘qer’ (query execution rowsource) functions to run the plan operations step by step. Then I’ll do the same on a different query in order to show in detail the 12c adaptive plans.

But wait, I listed Nested Loops, Hash Join, Sort Merge Join, Merge Join Cartesian… but is Merge Join Cartesian really a join method? I mean, my query is not a cartesian join. I have all join predicates here. But for sure you can also do an inner join by starting with a cartesian join and then filtering on the join predicate – doing physically what the old join syntax of Oracle does logically: not putting any predicates in the FROM clause and adding the join predicates in the WHERE clause to filter over it.
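For example, on the SCOTT schema the following two statements are logically equivalent – the second one declares a cartesian product in the FROM clause and filters it in the WHERE clause:

-- ANSI syntax: the join predicate is part of the join declaration
select * from DEPT join EMP using(deptno);
-- old syntax: cartesian product in FROM, join predicate in WHERE
select * from DEPT, EMP where DEPT.deptno=EMP.deptno;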

If I look at the 12.2 documentation, it is listed as a join method:
[Image: CaptureJoinMethods122 – join methods in the 12.2 documentation]

For the definition, a Join Method is how the join will be executed. It is not a decision of the SQL developer because SQL is declarative: you declare the result you want, and the optimizer will decide how to do it. And this is why hints are in comments: they are not part of the declarative syntax. Forcing how to do it is not part of SQL.

Just after listing the join methods, the documentation lists the join types, which are part of the SQL because they declare the join result you expect. Inner join to get all matching rows. Semi join to get only the first matching row. Anti join to get all rows which do not match. Outer join to get the rows which do not match in addition to those which do. The syntax is INNER JOIN, OUTER JOIN, EXISTS or IN, NOT EXISTS or NOT IN. Join type is not ‘how’ but ‘what’.
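On the same schema, the other join types look roughly like this (simple sketches, not taken from the presentation):

-- outer join: matching rows plus the departments without employees
select * from DEPT left outer join EMP using(deptno);
-- semi join: departments having at least one employee
select * from DEPT where exists (select 1 from EMP where EMP.deptno=DEPT.deptno);
-- anti join: departments without any employee
select * from DEPT where not exists (select 1 from EMP where EMP.deptno=DEPT.deptno);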

Ok, so back to the join method. Let’s force it on my inner join between DEPT and EMP:

11g


SQL> alter session set current_schema=SCOTT statistics_level=all;
Session altered.
 
SQL> select /*+ leading(DEPT) USE_MERGE_CARTESIAN(EMP) FULL(DEPT) */ * from DEPT join EMP using(deptno);
 
DEPTNO DNAME LOC EMPNO ENAME JOB MGR HIREDATE SAL COMM
---------- -------------- ------------- ---------- ---------- --------- ---------- --------- ---------- ----------
10 ACCOUNTING NEW YORK 7782 CLARK MANAGER 7839 09-JUN-81 2450
10 ACCOUNTING NEW YORK 7839 KING PRESIDENT 17-NOV-81 5000
10 ACCOUNTING NEW YORK 7934 MILLER CLERK 7782 23-JAN-82 1300
10 RESEARCH DALLAS 7782 CLARK MANAGER 7839 09-JUN-81 2450
10 RESEARCH DALLAS 7839 KING PRESIDENT 17-NOV-81 5000
10 RESEARCH DALLAS 7934 MILLER CLERK 7782 23-JAN-82 1300
10 SALES CHICAGO 7782 CLARK MANAGER 7839 09-JUN-81 2450
10 SALES CHICAGO 7839 KING PRESIDENT 17-NOV-81 5000
10 SALES CHICAGO 7934 MILLER CLERK 7782 23-JAN-82 1300
10 OPERATIONS BOSTON 7782 CLARK MANAGER 7839 09-JUN-81 2450
10 OPERATIONS BOSTON 7839 KING PRESIDENT 17-NOV-81 5000
10 OPERATIONS BOSTON 7934 MILLER CLERK 7782 23-JAN-82 1300
 
12 rows selected.
 
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
 
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------
SQL_ID 1xpfxq6pc30vq, child number 0
-------------------------------------
select /*+ leading(DEPT) USE_MERGE_CARTESIAN(EMP) FULL(DEPT) */ * from
DEPT join EMP using(deptno)
 
Plan hash value: 2034389985
 
------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 12 |00:00:00.01 | 7 | | | |
| 1 | MERGE JOIN CARTESIAN| | 1 | 14 | 12 |00:00:00.01 | 7 | | | |
| 2 | TABLE ACCESS FULL | DEPT | 1 | 4 | 4 |00:00:00.01 | 4 | | | |
| 3 | BUFFER SORT | | 4 | 4 | 12 |00:00:00.01 | 3 | 2048 | 2048 | 2048 (0)|
|* 4 | TABLE ACCESS FULL | EMP | 1 | 4 | 3 |00:00:00.01 | 3 | | | |
------------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
4 - filter("DEPT"."DEPTNO"="EMP"."DEPTNO")
 

Ok, then I declared my result with an inner join query, and I forced the join method with a hint to show that it is possible. But look at the result. 12 rows? Only DEPTNO 10, when the SCOTT schema has employees in departments 10, 20 and 30? And only 3 employees here, repeated once for each of the 4 department names? That’s a wrong result.

NEVER FORCE A CARTESIAN JOIN WITH USE_MERGE_CARTESIAN!

That’s a very old bug: Bug 17064391 “Wrong result with USE_MERGE_CARTESIAN hint”, finally fixed in 12c (12.2, and backported in a 12.1 PSU).

Then how is it fixed?

18c

With the fix, the hint is just ignored and a SORT MERGE JOIN is used here:

SQL> alter session set current_schema=SCOTT statistics_level=all;
Session altered.
 
SQL> select /*+ leading(DEPT) USE_MERGE_CARTESIAN(EMP) FULL(DEPT) */ * from DEPT join EMP using(deptno);
 
DEPTNO DNAME LOC EMPNO ENAME JOB MGR HIREDATE SAL COMM
---------- -------------- ------------- ---------- ---------- --------- ---------- --------- ---------- ----------
10 ACCOUNTING NEW YORK 7782 CLARK MANAGER 7839 09-JUN-81 2450
10 ACCOUNTING NEW YORK 7839 KING PRESIDENT 17-NOV-81 5000
10 ACCOUNTING NEW YORK 7934 MILLER CLERK 7782 23-JAN-82 1300
20 RESEARCH DALLAS 7566 JONES MANAGER 7839 02-APR-81 2975
20 RESEARCH DALLAS 7902 FORD ANALYST 7566 03-DEC-81 3000
20 RESEARCH DALLAS 7876 ADAMS CLERK 7788 23-MAY-87 1100
20 RESEARCH DALLAS 7369 SMITH CLERK 7902 17-DEC-80 800
20 RESEARCH DALLAS 7788 SCOTT ANALYST 7566 19-APR-87 3000
30 SALES CHICAGO 7521 WARD SALESMAN 7698 22-FEB-81 1250 500
30 SALES CHICAGO 7844 TURNER SALESMAN 7698 08-SEP-81 1500 0
30 SALES CHICAGO 7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300
30 SALES CHICAGO 7900 JAMES CLERK 7698 03-DEC-81 950
30 SALES CHICAGO 7698 BLAKE MANAGER 7839 01-MAY-81 2850
30 SALES CHICAGO 7654 MARTIN SALESMAN 7698 28-SEP-81 1250 1400
 
14 rows selected.
 
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
 
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------
SQL_ID 1xpfxq6pc30vq, child number 0
-------------------------------------
select /*+ leading(DEPT) USE_MERGE_CARTESIAN(EMP) FULL(DEPT) */ * from
DEPT join EMP using(deptno)
 
Plan hash value: 1407029907
 
--------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 14 |00:00:00.01 | 12 | 12 | | | |
| 1 | MERGE JOIN | | 1 | 14 | 14 |00:00:00.01 | 12 | 12 | | | |
| 2 | SORT JOIN | | 1 | 4 | 4 |00:00:00.01 | 6 | 6 | 2048 | 2048 | 2048 (0)|
| 3 | TABLE ACCESS FULL| DEPT | 1 | 4 | 4 |00:00:00.01 | 6 | 6 | | | |
|* 4 | SORT JOIN | | 4 | 14 | 14 |00:00:00.01 | 6 | 6 | 2048 | 2048 | 2048 (0)|
| 5 | TABLE ACCESS FULL| EMP | 1 | 14 | 14 |00:00:00.01 | 6 | 6 | | | |
--------------------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
4 - access("DEPT"."DEPTNO"="EMP"."DEPTNO")
filter("DEPT"."DEPTNO"="EMP"."DEPTNO")

So here the result is good, thanks to the fix, and we clearly see how it is fixed: the USE_MERGE_CARTESIAN hint has been ignored.

And the funny thing is that when you look at the 18c documentation, the Merge Join Cartesian is not a join method anymore but a join type:
[Image: CaptureJoinMethods183 – join types in the 18c documentation]

Exactly the same paragraph, but now under join types (the ‘what’) rather than under join methods (the ‘how’).

What or How?

Actually, in my opinion, it is both. When you explicitly want a cartesian join, that’s a join type described by the CROSS JOIN in the ANSI join syntax, or the lack of related predicates in the old syntax. This is ‘what’. But you may also encounter a MERGE JOIN CARTESIAN for a non-cartesian join just because the optimizer decides it is more efficient. When you have very few rows on both sides, it may be faster to start with a cartesian product on small rowsources. This can be part of star transformation where fact rows are joined back to the cartesian product of filtered dimensions in order to project the dimension attributes. This is ‘how’ it will be executed. We also see it when the optimizer underestimates the cardinalities and is followed by a long nested loop.

When?

So, let’s look at the documentation “When the Optimizer Considers Cartesian Joins”:
[Image: CaptureJoinMethods183-when – “When the Optimizer Considers Cartesian Joins”]

  • No join condition exists: that’s when cartesian product is what we want
  • The ORDERED hint specifies a table before its join table is specified: that’s when it is the only join method possible with the specified join order
  • A Cartesian join is an efficient method: then it is used as a join method, even if it is documented under join types.

In conclusion, a cartesian join is a join type. It can also be used as a join method when the optimizer decides to. But you cannot decide it yourself by hinting since 12c, and trying to do so in previous versions is a very bad idea and can return wrong results.

So, for this one I’ll explicitly run a CROSS JOIN:
[Image: CaptureUSEMERGECARTESIAN – query, SQL Monitor and ‘qer’ function backtrace]
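The statement behind this screenshot is essentially a cross join between the two tables, along these lines (a hedged sketch – the exact hints and options used in the demo are only visible in the screenshot):

select /*+ leading(DEPT) USE_MERGE_CARTESIAN(EMP) */ *
from DEPT cross join EMP;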

The query is on top. The SQL Monitor is in the middle, showing that we are currently active reading rows from EMP. The bottom shows the ‘qer’ functions backtrace: the fetch call is propagated from opifch2 for the SELECT STATEMENT, through the MERGE JOIN CARTESIAN (qerjo), the BUFFER SORT (qerso), to the TABLE ACCESS (qertb).

So basically, the goal of this full-demo presentation is to show how to read the execution plan by understanding how it is executed. This qertbFetch on the inner table EMP is executed only on the first row coming from the outer table DEPT. As the rows are returned to a buffer, the further iterations will fetch only from this buffer and will not go further than qersoFetchSimple. The qersoProcessULS (‘process underlying row source’ – see Frits Hoogland’s annotations) is run only once. This is the big difference with Nested Loops, where the inner loop on the underlying rowsource is run for each outer loop iteration: those two loops are nested – thus the name. But the function for the join part is the same for Nested Loop, Sort Merge Join and Merge Join Cartesian: qerjo. Only the underlying operations differentiate the join methods.

Last comment: we don’t see any function which really sorts the rows in this buffer (as we will see for the Sort Merge Join method) because there is no sorting, despite the name of the BUFFER SORT operation. More info on Jonathan Lewis’s blog.

 

The post MERGE JOIN CARTESIAN: a join method or a join type? appeared first on Blog dbi services.

Creating a RAC 12.1 Data Guard Physical Standby environment (3b)

Huh, what is this I hear you ask? Part 3b? Oracle 12.1? Well, there’s a bit of a story to this post. Back in December 2016 I started to write a series of blog posts (part 1 | part 2 | part 3 | part 4) about how I created a standby database on RAC 12.1. For some reason I forgot to post this part. Up until now the step where I am creating the broker configuration was missing. Thanks to a friend I discovered my mistake. I decided to post this article to complete the series. There’s a pretty big logical gap that needed filling :)

I also deliberately decided against making changes to my notes, so this is written in the same style as the articles preceding this one. Also bear in mind that this is Oracle 12.1!

NOTE: As always, this is just a demonstration using VMs in my lab, based on my notes. Your system is most likely different, so in real-life you might take a different approach. The techniques I am using here were suitable for me, and my own small-scale testing. I tried to make sure they are valid, but you may want to allocate more resources in your environment. Test, test, test on your own environment on test kit first!

Here is the original blog post

In a last-minute decision I decided to split part 3 into half: first the database duplication, followed by the Data Guard configuration. It really got a bit too long (even for my standards), and would have put readers off the article. I appreciate your patience though…

What’s the situation?

At the end of part 3 I had a physical standby database ready for use, including its registration in the OCR. It was looking promising, but there was more work to do.

At the moment, the standby database is out of sync with the primary, a fact that needs to be rectified by creating a Data Guard configuration. I prefer to do this on the command line. If you are so inclined you can of course push the mouse around and do it with OEM.

Step 1: Data Guard Broker pre-requisites

Before creating the configuration you should change the location of the broker configuration files to shared storage. This needs to be done on each side – primary and every standby – before you can start the broker.

SQL> alter system set dg_broker_config_file1='+DATA/NCDBA/dr1NCDBA.dat' scope=both sid='*';

System altered.

SQL> alter system set dg_broker_config_file2='+DATA/NCDBA/dr2NCDBA.dat' scope=both sid='*';

System altered.

SQL> show parameter dg_broker

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
dg_broker_config_file1               string      +DATA/NCDBA/dr1NCDBA.dat
dg_broker_config_file2               string      +DATA/NCDBA/dr2NCDBA.dat
dg_broker_start                      boolean     FALSE

SQL> alter system set dg_broker_start=true scope=both sid='*';

System altered.

SQL> 

I had to repeat the step on NCDBB, of course with adapted paths and file names.

Step 2: Creating a Data Guard Broker configuration

Once that’s done, you create the configuration using dgmgrl, the Data Guard Broker Manager Line Mode tool. I named my configuration RACTEST and added both databases. Here is an example of my session:

[oracle@rac12pri1 ~]$ dgmgrl
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys@ncdba
Password:
Connected as SYSDBA.
DGMGRL> create configuration ractest as primary database is 'NCDBA' connect identifier is 'NCDBA';
Configuration "ractest" created with primary database "NCDBA"
DGMGRL> add database 'NCDBB' as connect identifier is 'NCDBB';
Database "NCDBB" added
DGMGRL> show configuration

Configuration - ractest

  Protection Mode: MaxPerformance
  Members:
  NCDBA - Primary database
    NCDBB - Physical standby database 

Fast-Start Failover: DISABLED

Configuration Status:
DISABLED

As a first step I connect as SYSDBA before creating the configuration. You can usually type help <command> to remind you of the exact syntax. After the configuration is created you can start adding (standby) databases. In my case there is only one – physical standby – database, but you can think of environments where you have more than one standby environment, although that is less common. When specifying the connect identifier you use valid Net*8 names found in the default tnsnames.ora (the one in the RDBMS home).

At this point it is important to set standby_file_management to auto. Otherwise managed recovery can abort, and you will have to do extra work managing data files that could easily be avoided. The broker syntax to do so involves changing database properties (type show database verbose 'NCDBA' to show all of them). Many Data Guard broker properties are not easily mapped back to the corresponding initialisation parameter. In the case of standby_file_management it is not hard, and the documentation helps with the rest.

Eventually I enable the configuration and check the status. I have found that it can take a couple of minutes at times to get the SUCCESS message.

DGMGRL> edit database 'NCDBA' set property standbyfilemanagement=auto;
Property "standbyfilemanagement" updated
DGMGRL> edit database 'NCDBB' set property standbyfilemanagement=auto;
Property "standbyfilemanagement" updated
DGMGRL> enable configuration
Enabled.
DGMGRL> show configuration

Configuration - ractest

  Protection Mode: MaxPerformance
  Members:
  NCDBA - Primary database
    NCDBB - Physical standby database 

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 21 seconds ago)

There are other things you must consider in accordance with your company’s (data protection/availability/business continuity/…) standards and other documentation – again my example is rather basic, but deliberately so. It’s my lab playground after all.

For the sake of completeness, here is the current configuration of NCDBA and NCDBB:

DGMGRL> show database 'NCDBA'

Database - NCDBA

  Role:               PRIMARY
  Intended State:     TRANSPORT-ON
  Instance(s):
    NCDBA1
    NCDBA2

Database Status:
SUCCESS

DGMGRL> show database 'NCDBB';

Database - NCDBB

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 1 second ago)
  Apply Lag:          0 seconds (computed 1 second ago)
  Average Apply Rate: 223.00 KByte/s
  Real Time Query:    OFF
  Instance(s):
    NCDBB1
    NCDBB2 (apply instance)

Database Status:
SUCCESS

DGMGRL> exit

Please note that the broker detected the fact that my systems are cluster databases and lists both instances for each database. It also shows that NCDBB2 is the apply instance. I was positively surprised about the broker’s ability to detect both instances automatically. I seem to remember cases in earlier releases where said instance detection occasionally required a little nudging to get it right.

This example is deliberately kept as minimalist as possible: it works. Most likely it won’t satisfy your requirements. Review your Data Guard (and any other relevant) standards and change the configuration accordingly.

Summary

… back to present time :) This post completes the series; all my notes are now transcribed to the blog. I hope you find this useful.

back to part 3 | on to part 4

How much free space can be reclaimed from a segment?

By Franck Pachot

You have the feeling that your table takes more blocks than it should? Here are the queries I use to quickly check the free space. The idea is to call DBMS_SPACE.SPACE_USAGE and infer the minimum free space from the percentages. For example, a block in FS3 (defined as having 50 to 75% free space) is supposed to have at least 50% of free space. Of course it can have more, but you don’t know.

Here is some PL/SQL to do so:

set serveroutput on
declare
unf number; unfb number; fs1 number; fs1b number; fs2 number; fs2b number; fs3 number; fs3b number; fs4 number; fs4b number; full number; fullb number;
begin
for i in (select * from (select * from dba_segments where segment_subtype='ASSM' and segment_type in (
'TABLE','TABLE PARTITION','TABLE SUBPARTITION','CLUSTER','LOB','LOB PARTITION','LOB SUBPARTITION'
) order by bytes desc) where 10>=rownum)
loop
begin
dbms_space.space_usage(i.owner,i.segment_name,i.segment_type,unf,unfb,fs1,fs1b,fs2,fs2b,fs3,fs3b,fs4,fs4b,full,fullb,partition_name=>i.partition_name);
dbms_output.put_line(to_char((unfb+fs1b+fs2b*0.25+fs3b*0.5+fs4b*0.75)/1024/1024/1024,'999G999D999')||' GB free in '||i.segment_type||' "'||i.owner||'"."'||i.segment_name||'" partition "'||i.partition_name||'"');
exception
when others then dbms_output.put_line(i.segment_type||' "'||i.owner||'"."'||i.segment_name||'" partition "'||i.partition_name||'": '||sqlerrm);
end;
end loop;
end;
/

The output looks like:

.001 GB free in INDEX "DEMO"."ACCOUNT_PK" partition ""
.001 GB free in TABLE "APEX_040200"."WWV_FLOW_PAGE_PLUGS" partition ""
.009 GB free in TABLE "SCOTT"."DEMO" partition ""
.000 GB free in TABLE "APEX_040200"."WWV_FLOW_STEP_ITEMS" partition ""
.003 GB free in INDEX "SYS"."WRH$_SYSMETRIC_HISTORY_INDEX" partition ""
.000 GB free in TABLE "MDSYS"."SDO_CS_SRS" partition ""
.002 GB free in INDEX "SYS"."I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST" partition ""
.006 GB free in TABLE "SYS"."WRH$_SYSMETRIC_HISTORY" partition ""
.002 GB free in TABLE "SYS"."WRH$_SQL_PLAN" partition ""

If you are in 12c, an inline function in the query might come in handy:

with function freebytes(segment_owner varchar2, segment_name varchar2, segment_type varchar2,partition_name varchar2) return number as
unf number; unfb number; fs1 number; fs1b number; fs2 number; fs2b number; fs3 number; fs3b number; fs4 number; fs4b number; full number; fullb number;
begin
dbms_space.space_usage(segment_owner,segment_name,segment_type,unf,unfb,fs1,fs1b,fs2,fs2b,fs3,fs3b,fs4,fs4b,full,fullb,partition_name=>partition_name);
return unfb+fs1b+fs2b*0.25+fs3b*0.5+fs4b*0.75;
end;
select round(freebytes(owner,segment_name,segment_type,partition_name)/1024/1024/1024,3) free_GB,segment_type,owner,segment_name,partition_name
from dba_segments where segment_subtype='ASSM' and segment_type in (
'TABLE','TABLE PARTITION','TABLE SUBPARTITION','CLUSTER','LOB','LOB PARTITION','LOB SUBPARTITION'
) order by bytes desc fetch first 10 rows only
/

The result looks like:

FREE_GB SEGMENT_TYPE OWNER SEGMENT_NAME PARTITION_NAME
------- ------------ ----- ------------ --------------
0 TABLE DEMO ACCOUNTS
0.001 INDEX DEMO ACCOUNT_PK
0.001 TABLE APEX_040200 WWV_FLOW_PAGE_PLUGS
0.009 TABLE SCOTT DEMO
0.003 INDEX SYS WRH$_SYSMETRIC_HISTORY_INDEX
0 TABLE APEX_040200 WWV_FLOW_STEP_ITEMS
0.002 INDEX SYS I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST
0 TABLE MDSYS SDO_CS_SRS
0.006 TABLE SYS WRH$_SYSMETRIC_HISTORY
0.002 TABLE SYS WRH$_SQL_PLAN

Future evolution will be published on GitHub:
https://raw.githubusercontent.com/FranckPachot/scripts/master/administration/segment_free_space_plsql.sql
https://raw.githubusercontent.com/FranckPachot/scripts/master/administration/segment_free_space_sql.sql

Note that having free space does not mean that you have to shrink or reorg. Try to understand what happened to your data before, and whether this space will be reused soon.

Update 8-AUG-2018

In the initial post I added all segment types accepted by the dbms_space documentation but finally removed ‘INDEX’,’INDEX PARTITION’,’INDEX SUBPARTITION’ because the meaning of the output is completely different. See Jonathan Lewis’s note about it: https://jonathanlewis.wordpress.com/2013/12/17/dbms_space_usage/

 

The post How much free space can be reclaimed from a segment? appeared first on Blog dbi services.

Conferences 2018

Here are the conferences where I’ll be speaking at the end of this year.


Join Methods: Nested Loop, Hash, Sort, Merge, Adaptive

This is an explanation of the join methods, with live demos on very simple cases, reading execution plan statistics, looking at how it is executed. And how to improve the query performance: hints, partitioning, indexing. The goal is to understand, with modern tracing tools (dbms_xplan and SQL Monitoring), the basic core stuff that is involved in all query tuning: the join methods, the estimations that drive the choice between full scan and index access, the transformations that avoid unnecessary joins.

  • POUG 2018 September 07, 2018

POUG 2018

  • DOAG 2018 VOM 20. BIS 23. NOVEMBER 2018

Vortrag Details

  • Oak Table World 2018 22–23 OCTOBER 2018

Oak Table World 2018 | 22-23 October 2018


Multitenant Security Features Clarify DBA Role in DevOps Cloud

The role of the DBA changes with the cloud and autonomous managed services. The multitenant architecture is the foundation for these new roles. The container database/pluggable database (CDB/PDB) paradigm enforces role segregation between the container DBA and the application. In a DevOps environment, you need the finer access privilege definition that comes in Oracle Database 12.1 (common/local users) and Oracle Database 12.2 (lockdown profiles, PDB isolation), and that was improved in Oracle Database 18c. In this session learn to administer a cloud or on-premises database and give PDB administration rights to application teams. See how to grant database access to a developer on a PDB with Oracle Database 18c. Discover the new features, see a demo, and learn best practices.

  • Oracle Open World October 22–25, 2018 San Francisco, CA

Session Catalog | Oracle OpenWorld 2018


Microservices: Get Rid of Your DBA and Send the DB into Burnout

The database is always a problem: it is a single point of failure for applications; it makes any continuous integration/delivery impossible; the DBA is a drag on your agile developments; and any interaction with the database is done with unmaintainable, complex, old-style SQL queries. Those are the reasons why modern architects want to get all code out of the database and even the data moved out to caches, local to CQRS microservices. But there are drawbacks with this approach, and doing it incorrectly may show the worst once implemented. This session focuses on the antipatterns and shows, with facts and measurements, what can be wrong in the implementation. This knowledge may help you design your applications with scalability in mind.

  • Oracle Code One October 22–25, 2018 San Francisco, CA

Session Catalog | Oracle Code One 2018


Oracle Database on Docker: Lessons Learned

Developers need to quickly get a database up and running without wasting time waiting for the DBA or doing the whole installation themselves. Docker containers are perfect for that. But there are two big problems with Oracle Database. First, the default installation is huge, and you do not want a 20GB image to start a container. Second, the database must persist beyond the container lifecycle. This means that we need to create it in an external volume. But do you accept a 15-minute database creation at ‘docker run’?

There can be different solutions to address different needs: a small database to run on a laptop, a large number of databases running on OpenShift, quick continuous integration ephemeral databases…

This session will show some ideas to run Oracle Database in a container in an efficient way. With some tuning and tweaks, it is possible to provide a small image that can run in seconds. And is that different from, or complementary to, multitenant containers?

  • UKOUG Tech18, Liverpool, 04/12/2018, 11:40–12:25

UKOUG Technology Conference & Exhibition 2018

CQRS, Event Sourcing and the Oracle Database

By Franck Pachot

This blog post relates my thoughts when reading about Command Query Responsibility Separation and Event Sourcing, in the context of the Oracle Database (but it can probably apply to any database). We see those terms in the new software architecture diagrams, but they are actually quite old:

Command-Query separation

Command-Query separation was defined by Bertrand Meyer 15 years ago, not for the database but for the Eiffel language. See page 22-44 of Eiffel: a language for software engineering.

This basically states that a function that returns a result does not change anything (“asking a question should not change the answer”). In PL/SQL you can do whatever you want with FUNCTION and PROCEDURE: a PROCEDURE can return values with OUT parameters and a FUNCTION can modify some data. However, ‘you can’ does not mean that you should. It is easy to define some coding rules where a FUNCTION does not change anything and where a PROCEDURE implements what Bertrand Meyer calls a ‘command’.
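A minimal sketch of such a coding rule, with a hypothetical package on a hypothetical ACCOUNTS table – queries are functions, commands are procedures:

create or replace package account_api as
  -- query: returns a result and changes nothing
  function get_balance(p_account_id number) return number;
  -- command: changes data and returns nothing
  procedure credit(p_account_id number, p_amount number);
end account_api;
/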

In SQL this is clearly enforced. A SELECT is for queries and cannot change any data. Even if you call a function that tries to change some data you will encounter:

ORA-14551: cannot perform a DML operation inside a query

Object Oriented data modeling

This concept came from Eiffel, an object-oriented programming language. The Object Oriented approach was designed for transient data structures: stored in memory (the object identification is the address in memory) and usually not shared (the representation of the same data is a different object in different systems). But the modularity and encapsulation aspects of the OO approach being great, it has been extended to data – the persistent data that we store in the database. Before, the functions and the data model were analyzed separately, which produced monolithic applications that were difficult to maintain and evolve. The Object Oriented approach helps to analyze the subsystems separately.

Create Retrieve Update Delete

However, because the objects were the business objects and not the use-cases or services, the software architects came up with this simplified idea that all interaction with the database is CRUD: you Create (aka INSERT), Retrieve (aka SELECT), Update or Delete. And then you define an Object (mapped to a table) and define those 4 CRUD methods mapped to 4 SQL statements. And you can do everything. And because some ORM frameworks give the possibility to ignore how the data is stored and shared in the database, a new generation of developers was working with data sets as if they were objects in memory.

And this is completely wrong because

  • one object maps to only one row, and the CRUD approach exposes no method for bulk updates – those where the database can be the most efficient
  • the CRUD methods read and update all table columns – ignoring all efficient access paths to a subset of columns
  • mapping a transient object identifier (address in memory) to a shared persistent identifier (primary key) is not easy
  • and finally, all those CRUD operations belong to an object when they should belong to use-cases and transactions

The last point is more obvious between the queries (the R in CRUD) and the insert (the C in CRUD). In an information system, you usually use (query, retrieve) the data in a completely different way than it was entered (inserted). Let’s take an example in an Order Entry system. When the customer starts with an order, it is probably the responsibility of the Orders object. But then, it will be processed for the delivery, then queried by the Customer object, and finally by the Products for sales analysis. Grouping all that in the same object is easier for the Object-Oriented Analysis, for the modeling of the domain objects. But keeping this grouping for the software implementation is a big source of problems. And finally, the same object will be used by completely different use-cases, with different availability, consistency, performance, … requirements. That’s too much responsibility for one object.

Command-Query Responsibility Separation (CQRS)

Because of this (the inefficiency of CRUD row-by-row queries and Object over-responsibility), applications started to cache locally the data used by queries, in order to avoid the row-by-row roundtrips and to store the data in a model more suited to the query than to the way it was inserted – like dozens of data marts trying to keep in sync with the changes done in the database.

Hibernate, for example, can cache the whole data in a second level cache, using object stores like GigaSpaces or Oracle Coherence. But this doesn’t solve the main problem. The object model, which is not relational but hierarchical, cannot be used efficiently by all use-cases. And I’ve seen very few Hibernate developers accepting to have multiple Entity mappings for the different use-cases. Basically, the relational model can cope with all kinds of queries (especially since the Oracle In-Memory Column Store, as you can also do analytics on it). But when querying it with an object model, you need several models.

Then came the idea to split the data model between ‘command’ – the modifications, and ‘query’. This is the CQRS from Martin Fowler: https://martinfowler.com/bliki/CQRS.html

This is, again, not new. The relational model exposes data through views and you can, and should, have different views on your data. The data is stored in one normalized model, modified through views or procedures (Transactional API) and queried through views. Separation of responsibility has always been there. And I totally agree with all those diagrams showing multi-layers, separation, and even micro-services. There’s only one problem with them.

Logical vs. physical separation

I don’t really know when, but at some point, the architects looking at those diagrams forgot the step where the logical model should be implemented in a physical model. And what were logical layers became physical tiers without any reason. Many architects use ‘layer’, ‘level’ and ‘tier’ without even knowing whether they are at the logical or the physical level. It is good to have logical layers, but processing data across multiple physical tiers will exhaust resources in all tiers doing nothing other than waiting on roundtrip latency and expensive context switches.

The CRUD methods had to be coded in an Object Oriented language, and then the idea was Java. Then, the rows have to be shipped between two different processes: one running a JVM, and one running the SQL queries in the RDBMS. Different processes mean context switches for each call. This is latency and overhead, and is not scalable. Oracle has the possibility to run a JVM in the database, but this still means context switches and datatype conversions. Then, to try to scale, more application servers were needed and the data processing part moved to the application server.

And the bad effect is not only on performance and scalability. Because of this confusion, implementing logical layers into different servers, technologies, languages, … we lost the link between the storage of data and the processing of data, which is exactly the opposite of an Object Oriented approach. With the data logic (you can call it business logic as well, as all data processing is there to implement business rules) in the database, you can manage dependencies. The database always keeps track of which function or procedure is using which table, and how.

Event Sourcing

https://martinfowler.com/eaaDev/EventSourcing.html

Oracle can even guarantee consistent reads without changing anything, thanks to MVCC. Of course, SELECT FOR UPDATE can write to the database, not to change data but to write lock information, but you can restrict this by granting only the READ privilege.
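For example, with a hypothetical ORDERS table and reporting user, the READ object privilege (available since 12.1.0.2) allows queries but, unlike SELECT, does not allow SELECT ... FOR UPDATE:

grant read on orders to reporting_user;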

This means that, for example, in PL/SQL we use a PROCEDURE for what Bertrand Meyer calls a ‘command’: changing data. And we use a FUNCTION to query data.

 

The post CQRS, Event Sourcing and the Oracle Database appeared first on Blog dbi services.

Visual Studio- All the Warm and Fuzzy

So I haven’t opened Visual Studio in….oh….let’s just say it’s been a few years…:)

I had a project that I needed to run and was surprised when the Solution Explorer was missing from SSMS 2017.  It’s only fair to say, there was also fair warning from Microsoft.

[Image: sol_ex.png]

Due to this, I opened up Visual Studio to use its Solution Explorer and integration for SSIS and other features required for a large project.  I was both happy with the sheer number of features and left with some constructive feedback to make it more user-friendly.

I love that I can navigate servers, log into SQL Server databases, and manage and verify what’s occurred in my releases.  The properties pane comes in handy, offering valuable information when I’m building out connection strings or looking for data that may not have compiled correctly in a build.

Although the rough instructions were for Solution Explorer in SSMS, I was able, even as rusty as I was, to figure out how to do everything – projects, SSIS and database SQL – in Visual Studio.

The interface is familiar to a Windows user – right click for options, left click to execute the option.  It has links on the left for shortcuts to the SSMS Object Explorer, which allows me to log into my database environments, along with browsing servers that I may also be deploying application code to.

Projects make it easy to build out a full, multi-tier deployment and debug it all from one application, too.  Needless to say, I’m happy to report that even with some missing instructions, I was able to do what needed to be done and do it with some grace.

[Image: vss_dw_deploy.png]

So what do I see that can be improved?

  1. When you copy and paste to update a path, don’t automatically remove the remainder of the file name, etc., that’s been left on purpose.  This can lead to extra human intervention, which then leads to more chance of human error.
  2. The hints when hovering over a button can become a nuisance instead of a help.  Have the hints auto-hide after 5 seconds.  There’s no reason to leave them up when we’re trying to guide our cursor to a small button.
  3. Make debug display all steps, the errors and then shut down automatically when complete.
  4. Make it easier to keep panes open for the SQL Server Object Explorer, Toolbox, etc. vs. auto-hiding.  The information on the pane may be needed for reference as one works on another configuration panel.
  5. The Control Flow on SSIS package execution shouldn’t be blown up so large that you can’t decipher what a package is doing.  Keep it legible.




Copyright © DBAKevlar [Visual Studio- All the Warm and Fuzzy], All Right Reserved. 2018.