Oakies Blog Aggregator

Read only partitions

The ability for part of a table to be read-only while other parts of the same table allow full DML is a cool feature of the Oracle Partitioning option. Perhaps the most common example you will see for this is range-based partitioning on a date/timestamp column. As data “ages”, setting older partitions to read-only can yield benefits such as:

  • moving the older partitions to cheaper, or write-once storage
  • guaranteeing that older data cannot be tampered with
  • shrinking backup times because read-only data only needs to be backed up once (or twice to be sure)

But if you try this in 18c, you might get a surprise:


SQL> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production

SQL>
SQL>  create table t
  2      ( d date,
  3        id int,
  4        data varchar2(20)
  5      )
  6      partition by range (d )
  7      (
  8        partition p1 values less than ( date '2018-01-01' ),
  9        partition p2 values less than ( date '2019-01-01' ),
 10        partition p3 values less than ( date '2020-01-01' ),
 11        partition p4 values less than ( date '2021-01-01' )
 12     );

Table created.

SQL>
SQL> insert into t
  2      select date '2017-01-01' + rownum, rownum, 'data'||rownum
  3      from dual connect by level <= date '2021-01-01' - date '2017-01-01' - 1;

1460 rows created.

--
-- Make older data read only to prevent tampering
--
SQL> alter table t modify partition p1 read only;

Table altered.

SQL>
SQL> alter table t modify partition p2 read only;

Table altered.

--
-- Expose a view for the current year
--

SQL> create or replace
  2      view VW as
  3      select d,id from t
  4      where d > date '2020-01-01';

View created.

--
-- And DML on this view should be fine...
--


SQL> delete from VW where id = 1100;

1 row deleted.

--
-- This *SHOULD* work but it does not in 18c
--

SQL> insert into VW values ( sysdate, 100);
insert into VW values ( sysdate, 100)
            *
ERROR at line 1:
ORA-14466: Data in a read-only partition or subpartition cannot be modified.

Luckily this is not an implementation restriction; it’s just a bug in 18c that has since been fixed. All is well in 19c, which is yet another reason this should be your “go to” release when upgrading.


SQL> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production

SQL>
SQL>  create table t
  2      ( d date,
  3        id int,
  4        data varchar2(20)
  5      )
  6      partition by range (d )
  7      (
  8        partition p1 values less than ( date '2018-01-01' ),
  9        partition p2 values less than ( date '2019-01-01' ),
 10        partition p3 values less than ( date '2020-01-01' ),
 11        partition p4 values less than ( date '2021-01-01' )
 12     );

Table created.

SQL>
SQL> insert into t
  2      select date '2017-01-01' + rownum, rownum, 'data'||rownum
  3      from dual connect by level <= date '2021-01-01' - date '2017-01-01' - 1;

1460 rows created.

SQL> alter table t modify partition p1 read only;

Table altered.

SQL> alter table t modify partition p2 read only;

Table altered.

SQL> create or replace
  2      view VW as
  3      select d,id from t
  4      where d > date '2020-01-01';

View created.

SQL> delete from VW where id = 1100;

1 row deleted.

SQL> insert into VW values ( sysdate, 100);

1 row created.

Views and read-only partitions are a great way of controlling access to particular subsets of data, ensuring your users can’t get themselves into trouble with either performance issues or auditor issues!

Lower Cost Ignored

This is an update on a post I wrote nearly 10 years ago describing how the optimizer could choose to ignore a lower cost indexed access path and use a higher cost index if the lower cost were based on guesswork. The original article used (select {constant} from dual) as a way of supplying an “unpeekable bind” in a predicate, but that mechanism stopped working in 11gR2, hence the update.

The update also goes into a little more detail about event 38036, which can be used to modify this behaviour by defining a “cut-off” percentage at which Oracle will switch back to using the lower cost path.

We start with the code to generate the data, including, in this case, a table that I can query to supply “hidden constants” to the optimizer:

rem
rem     Script:         index_range_2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          June 2020
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem

create table t1
as
with generator as (
        select  --+ materialize
                rownum id 
        from dual 
        connect by 
                rownum <= 10000 -- > comment to avoid wordpress format issue
)
select
        rownum                  id,
        9 + mod(rownum-1,3)     scattered,
        trunc(rownum / 500)     clustered,
        lpad(mod(rownum,2),10)  ind_pad,
        lpad(rownum,10,'0')     small_vc,
        rpad('x',100)           padding
from
        generator       v1,
        generator       v2
where
        rownum <= 40000 -- > comment to avoid wordpress format issue
;

create index t1_equi  on t1(scattered, ind_pad, clustered);
create index t1_range on t1(clustered, scattered);

create table driver(name varchar2(20), value number(4,0));
create index drv_i1 on driver(name);

insert into driver values('Forty',40);
insert into driver values('FortyOne',41);
commit;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                method_opt       => 'for all columns size 1'
        );

        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'DRIVER',
                method_opt       => 'for all columns size 1'
        );
end;
/


The driver table models a pattern that I see fairly frequently: a table holding a set of “constants” that are subject to infrequent changes. (An alternative strategy is to have a PL/SQL package of constants and a function to return a value when supplied with a name.)
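
As an illustration of that alternative (a sketch of my own, not part of this test; the names are hypothetical), such a package might look like this:


create or replace package app_constants as
        -- hypothetical lookup: return the numeric value for a named constant
        function value_of(p_name in varchar2) return number;
end app_constants;
/

create or replace package body app_constants as
        function value_of(p_name in varchar2) return number is
        begin
                case p_name
                        when 'Forty'    then return 40;
                        when 'FortyOne' then return 41;
                        else raise_application_error(-20001, 'Unknown constant: ' || p_name);
                end case;
        end value_of;
end app_constants;
/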

The key columns in the table are:

  • scattered: 3 distinct values evenly but randomly scattered across the table – not a column you would index separately
  • clustered: 500 consecutive rows each for 80 distinct values, so very well clustered data

There are two indexes on the main data table:

  • t1_range (clustered, scattered) — roughly (80 * 3 =) 240 distinct keys
  • t1_equi (scattered, ind_pad, clustered) — roughly (3 * 2 * 80) = 480 distinct keys

If I execute the query: “select * from t1 where clustered between 40 and 41 and scattered = 10” Oracle uses the t1_range index to return an estimated 496 rows at a cost of 36. The estimates aren’t too far out since in my case the query returned 334 rows in 26 buffer visits (when my SQL*Plus arraysize was 500).

------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |          |   496 |  8432 |    36   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1       |   496 |  8432 |    36   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | T1_RANGE |   496 |       |     5   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("CLUSTERED">=40 AND "SCATTERED"=10 AND "CLUSTERED".le.41)
       filter("SCATTERED"=10)

But what happens if we decide to keep the start and end values for clustered in the driver table:


set autotrace traceonly explain

select 
        /*  index(t1 t1_range) */
        small_vc
from
        t1
where
        scattered = 10
and     clustered between (select value from driver where name = 'Forty')
                  and     (select value from driver where name = 'FortyOne')
;

set autotrace off



You’ll notice the “hint that isn’t a hint”. I’ve tested two versions of the query: one without a hint, and one where I inserted the necessary “+” to turn the comment into a hint. Here are the execution plans (with “<=” edited to “.le.” in the Predicate Information):


=======
Default
=======
-------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |         |    33 |   561 |    67   (3)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED  | T1      |    33 |   561 |    63   (4)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                    | T1_EQUI |    60 |       |    56   (4)| 00:00:01 |
|   3 |    TABLE ACCESS BY INDEX ROWID BATCHED| DRIVER  |     1 |    11 |     2   (0)| 00:00:01 |
|*  4 |     INDEX RANGE SCAN                  | DRV_I1  |     1 |       |     1   (0)| 00:00:01 |
|   5 |    TABLE ACCESS BY INDEX ROWID BATCHED| DRIVER  |     1 |    11 |     2   (0)| 00:00:01 |
|*  6 |     INDEX RANGE SCAN                  | DRV_I1  |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("SCATTERED"=10 AND "CLUSTERED">= (SELECT "VALUE" FROM "DRIVER" "DRIVER"
              WHERE "NAME"='Forty') AND "CLUSTERED".le. (SELECT "VALUE" FROM "DRIVER" "DRIVER" WHERE
              "NAME"='FortyOne'))
       filter("CLUSTERED">= (SELECT "VALUE" FROM "DRIVER" "DRIVER" WHERE "NAME"='Forty')
              AND "CLUSTERED".le. (SELECT "VALUE" FROM "DRIVER" "DRIVER" WHERE "NAME"='FortyOne'))
   4 - access("NAME"='Forty')
   6 - access("NAME"='FortyOne')

=============
Hint t1_range
=============
--------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |          |    33 |   561 |    10   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED  | T1       |    33 |   561 |     6   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                    | T1_RANGE |    60 |       |     2   (0)| 00:00:01 |
|   3 |    TABLE ACCESS BY INDEX ROWID BATCHED| DRIVER   |     1 |    11 |     2   (0)| 00:00:01 |
|*  4 |     INDEX RANGE SCAN                  | DRV_I1   |     1 |       |     1   (0)| 00:00:01 |
|   5 |    TABLE ACCESS BY INDEX ROWID BATCHED| DRIVER   |     1 |    11 |     2   (0)| 00:00:01 |
|*  6 |     INDEX RANGE SCAN                  | DRV_I1   |     1 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("CLUSTERED">= (SELECT "VALUE" FROM "DRIVER" "DRIVER" WHERE "NAME"='Forty')
              AND "SCATTERED"=10 AND "CLUSTERED".le. (SELECT "VALUE" FROM "DRIVER" "DRIVER" WHERE
              "NAME"='FortyOne'))
       filter("SCATTERED"=10)
   4 - access("NAME"='Forty')
   6 - access("NAME"='FortyOne')

The big difference occurs at operation 2, and you should notice two things. First, the estimate of rows is wrong by a factor of 10, but that’s not the point I want to chase. Secondly, by default the optimizer has selected the plan using the (far) more expensive index. This is a deliberately coded choice, and if you have access to MOS then you should check Document ID 4112254.8: “CBO may not choose best index for single table access with unpeeked bind/function”.

Rerun the test (unhinted) after issuing:

alter session set events '38036 trace name context forever, level 10'

You will find that the optimizer automatically takes the path using the lower cost index. For event levels of 10 or higher Oracle will pick the lower cost index; for levels of 9 or less the optimizer will stick with the higher cost index.
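
Expressed as commands (the same event at two different levels, simply restating the behaviour described above):

-- level 9: below the cut-off, so the higher cost index is still chosen
alter session set events '38036 trace name context forever, level 9';

-- level 10 and above: the optimizer switches to the lower cost index
alter session set events '38036 trace name context forever, level 10';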

The question, of course, is what’s special about the value 10. Given the clue in the MOS document, and the costs from the plans above, and allowing for some fiddling with the arithmetic we can note the following:

  • Cost of using the expensive index for the table access is 63 (From the 10053 trace it’s actually 62.525996)
  • Cost of using the cheaper index for the table access is 6 (Again, from the 10053 trace, 6.037011)
  • Express 6.037011 / 62.525996 as a percentage and you get 9.655, which rounds to 10.

Probably not a coincidence! (Though it might take a few more tests to decide whether it’s round() or ceiling().)

So if the optimizer is picking the wrong index, and the arithmetic for the index it should be using is dictated by guessing, then calculate the ratio of the two costs for the table access, express it as a percentage, round up (and add some if you want to), and set the event to that level.

I’m not really sure how much use you could, or should, make of this event. Possibly it’s one of those things you use temporarily to allow you to generate a SQL Baseline (the event setting doesn’t get captured as an “opt_param()” hint, unlike other fiddles of this type that you might use). Maybe there are a few special cases in batch jobs where the optimizer is a little inconsistent about which index to pick, and the event is relevant and could be set through an “alter session” command. It’s nice to know it’s there, but it may be hard to use in practice.

Footnote

If you were wondering about the cardinality estimates of 60 rowids from the index and 33 rows from the table: this is the effect of the standard “guesses for unpeeked binds” that the optimizer uses. For a bounded (between) range on an index the selectivity is 0.0045, for a table it’s 0.0025; in this case we then have to include a factor of 1/3 in the arithmetic to allow for the “scattered=10” predicate:

40,000 * 0.0045 * 1/3 = 60

40,000 * 0.0025 * 1/3 = 33.33….

Oracle non-linguistic varchar2 columns to order by without sorting

By Franck Pachot

Sorting data is an expensive operation and many queries declare an ORDER BY. To avoid the sort operation you can build an index, as it maintains a sorted structure. This helps with Top-N queries as you don’t have to read all rows, only those from a range of index entries. However, indexes are sorted by binary values. For NUMBER or DATE datatypes, the internal storage ensures that the order is preserved in the binary format. For character strings, the binary format is ASCII, which follows the English alphabet. That’s fine when your session language (NLS_LANGUAGE) defines an NLS_SORT that follows this BINARY order. But as soon as you set a language that has a specific alphabetical order, having an index on a VARCHAR2 or CHAR column does not help to avoid a SORT operation. However, since Oracle 12.2 we can define the sort order at column level with the SQL Standard COLLATE. One use case is alpha-numeric columns that have nothing to do with any language, like natural keys combining letters and numbers: the user expects them to be listed in alphabetical order but, as they store only 7-bit ASCII characters, you don’t care about linguistic collation.

I am running this on the Oracle 20c preview in the Oracle Cloud.

VARCHAR2

It can happen that a primary key is not a NUMBER but a CHAR or VARCHAR2, like this:


SQL> create table demo (ID constraint demp_pk primary key) as
  2  select cast(dbms_random.string('U',1)||to_char(rownum,'FM0999') as varchar2(5)) ID
  3  from xmltable('1 to 10');

Table created.

SQL> select * from demo order by ID;

      ID
________
K0003
K0009
L0007
L0010
M0008
O0002
S0001
W0005
Y0006
Z0004

10 rows selected.

I query with ORDER BY because sorting can make sense on a natural key.

Index

I have an index on this column, which is sorted, and then the execution plan is optimized:


SQL> select * from dbms_xplan.display_cursor(format=>'basic');

                      PLAN_TABLE_OUTPUT
_______________________________________
EXPLAINED SQL STATEMENT:
------------------------
select * from demo order by ID

Plan hash value: 1955576728

------------------------------------
| Id  | Operation        | Name    |
------------------------------------
|   0 | SELECT STATEMENT |         |
|   1 |  INDEX FULL SCAN | DEMP_PK |
------------------------------------

13 rows selected.

There’s no SORT operation because the INDEX FULL SCAN follows the index entries in order.

NLS_LANGUAGE

However, there are many countries where we don’t speak English:


SQL> alter session set nls_language='French';

Session altered.

In French, as in many languages, we have accented characters and other specificities, so the language-alphabetical order does not always follow the ASCII order.
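
A quick illustration of the difference (my own example, not from the original post): under BINARY order the accented characters sort by their byte values, after ‘z’, while the FRENCH linguistic sort groups them with their base letter.

-- compare the FRENCH ordering here with a plain ORDER BY column_value (BINARY)
select column_value
from   table(sys.odcivarchar2list('cote','côte','coté','côté'))
order  by nlssort(column_value, 'nls_sort=FRENCH');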

I’m running exactly the same query:


SQL> select * from demo order by ID;

      ID
________
K0003
K0009
L0007
L0010
M0008
O0002
S0001
W0005
Y0006
Z0004

10 rows selected.

SQL> select * from dbms_xplan.display_cursor(format=>'basic');

                      PLAN_TABLE_OUTPUT
_______________________________________
EXPLAINED SQL STATEMENT:
------------------------
select * from demo order by ID

Plan hash value: 2698718808

------------------------------------
| Id  | Operation        | Name    |
------------------------------------
|   0 | SELECT STATEMENT |         |
|   1 |  SORT ORDER BY   |         |
|   2 |   INDEX FULL SCAN| DEMP_PK |
------------------------------------

14 rows selected.

This time there’s a SORT operation, even though I’m still reading with an INDEX FULL SCAN.

NLS_SORT

The reason is that, by setting the ‘French’ language, I’ve also set the French sort collating sequence.


SQL> select * from nls_session_parameters;
                 PARAMETER                           VALUE
__________________________ _______________________________
NLS_LANGUAGE               FRENCH
NLS_SORT                   FRENCH

And this is different from the BINARY one that I had when my language was ‘American’.

Actually, only a few languages follow the BINARY order of the ASCII table:


SQL>
  declare
   val varchar2(64);
  begin
    for i in (select VALUE from V$NLS_VALID_VALUES where PARAMETER='LANGUAGE') loop
    execute immediate 'alter session set nls_language='''||i.value||'''';
    select value into val from NLS_SESSION_PARAMETERS where PARAMETER='NLS_SORT';
    if val='BINARY' then dbms_output.put(i.value||' '); end if;
    end loop;
    dbms_output.put_line('');
  end;
/

AMERICAN JAPANESE KOREAN SIMPLIFIED CHINESE TRADITIONAL CHINESE ENGLISH HINDI TAMIL KANNADA TELUGU ORIYA MALAYALAM ASSAMESE GUJARATI MARATHI PUNJABI BANGLA MACEDONIAN LATIN SERBIAN IRISH

PL/SQL procedure successfully completed.

This is fine for real text, but not for my primary key, where ASCII order is exactly what I want. I can set NLS_SORT=BINARY for my session, but that’s too wide a scope, as my problem is only with one column.
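
For completeness, that session-level change would simply be:

alter session set nls_sort=BINARY;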

Or I can create an index for the French collation. Actually, this is what is used internally:


SQL> explain plan for select * from demo order by ID;
Explained.

SQL> select * from dbms_xplan.display(format=>'basic +projection');
                                                      PLAN_TABLE_OUTPUT
_______________________________________________________________________
Plan hash value: 2698718808

------------------------------------
| Id  | Operation        | Name    |
------------------------------------
|   0 | SELECT STATEMENT |         |
|   1 |  SORT ORDER BY   |         |
|   2 |   INDEX FULL SCAN| DEMP_PK |
------------------------------------

Column Projection Information (identified by operation id):
-----------------------------------------------------------

   1 - (#keys=1) NLSSORT("DEMO"."ID",'nls_sort=''GENERIC_M''')[50],
       "DEMO"."ID"[VARCHAR2,5]
   2 - "DEMO"."ID"[VARCHAR2,5]

GENERIC_M is the sort collation for many European languages.
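
For completeness, such a linguistic index is just a function-based index on NLSSORT. A sketch (the index name is mine):

create index demo_id_fr on demo(nlssort(ID, 'nls_sort=FRENCH'));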

But that, again, does not fit the scope of my problem, as I don’t want to create an index for every possible NLS_SORT setting.

COLLATE

The good solution is to define the collation for my table column: this ID is a character string, but it is an ASCII character string which has nothing to do with my language. In 18c I can do that:


SQL> alter table demo modify ID collate binary;

Table altered.

COLLATE is a SQL Standard syntax that exists in other databases, and it came to Oracle in 12cR2.

And that’s all:


SQL> explain plan for select * from demo order by ID;

Explained.

SQL> select * from dbms_xplan.display(format=>'basic +projection');

                                             PLAN_TABLE_OUTPUT
______________________________________________________________
Plan hash value: 1955576728

------------------------------------
| Id  | Operation        | Name    |
------------------------------------
|   0 | SELECT STATEMENT |         |
|   1 |  INDEX FULL SCAN | DEMP_PK |
------------------------------------

Column Projection Information (identified by operation id):
-----------------------------------------------------------
   1 - "DEMO"."ID"[VARCHAR2,5]

No SORT operation needed, whatever the language I set for my session.

Here is the DDL for my table:


SQL> ddl demo

  CREATE TABLE "SYS"."DEMO"
   (    "ID" VARCHAR2(5) COLLATE "BINARY",
         CONSTRAINT "DEMP_PK" PRIMARY KEY ("ID")
  USING INDEX  ENABLE
   )  DEFAULT COLLATION "USING_NLS_COMP" ;

My column explicitly follows the BINARY collation.

Extended Data Types

Now, all seems easy, but there’s a prerequisite:


SQL> show parameter max_string_size

NAME            TYPE   VALUE
--------------- ------ --------
max_string_size string EXTENDED

I have set my PDB to EXTENDED string size.

If I try the same in a PDB with the ‘old’ limit of 4000 bytes:


SQL> alter session set container=PDB1;

Session altered.

SQL> show parameter max_string_size

NAME            TYPE   VALUE
--------------- ------ --------
max_string_size string STANDARD

SQL> drop table demo;

Table dropped.

SQL> create table demo (ID varchar2(5) collate binary constraint demp_pk primary key);

create table demo (ID varchar2(5) collate binary constraint demp_pk primary key)
 *
ERROR at line 1:
ORA-43929: Collation cannot be specified if parameter MAX_STRING_SIZE=STANDARD is set.

This new feature is allowed only with the Extended Data Types introduced in 12c release 2.

ORDER BY COLLATE

Ok, let’s create the table with the default collation:


SQL> create table demo (ID constraint demp_pk primary key) as
  2  select cast(dbms_random.string('U',1)||to_char(rownum,'FM0999') as varchar2(5)) ID
  3  from xmltable('1 to 10');

Table created.

SQL> select * from dbms_xplan.display_cursor(format=>'basic +projection');

                                                   PLAN_TABLE_OUTPUT
____________________________________________________________________
EXPLAINED SQL STATEMENT:
------------------------
select * from demo order by ID

Plan hash value: 2698718808

------------------------------------
| Id  | Operation        | Name    |
------------------------------------
|   0 | SELECT STATEMENT |         |
|   1 |  SORT ORDER BY   |         |
|   2 |   INDEX FULL SCAN| DEMP_PK |
------------------------------------

Column Projection Information (identified by operation id):
-----------------------------------------------------------

   1 - (#keys=1) NLSSORT("DEMO"."ID",'nls_sort=''FRENCH''')[50],
       "DEMO"."ID"[VARCHAR2,5]
   2 - "DEMO"."ID"[VARCHAR2,5]

As my NLS_SORT is ‘French’ there is a SORT operation.

But I can explicitly request a BINARY sort for this:


SQL> select * from demo order by ID collate binary;

      ID
________
D0003
H0002
L0009
N0008
P0010
Q0005
R0004
W0007
Y0001
Z0006

10 rows selected.

SQL> select * from dbms_xplan.display_cursor(format=>'basic +projection');

                                             PLAN_TABLE_OUTPUT
______________________________________________________________
EXPLAINED SQL STATEMENT:
------------------------
select * from demo order by ID collate binary

Plan hash value: 2698718808

------------------------------------
| Id  | Operation        | Name    |
------------------------------------
|   0 | SELECT STATEMENT |         |
|   1 |  SORT ORDER BY   |         |
|   2 |   INDEX FULL SCAN| DEMP_PK |
------------------------------------

Column Projection Information (identified by operation id):
-----------------------------------------------------------

   1 - (#keys=1) "DEMO"."ID" COLLATE "BINARY"[5],
       "DEMO"."ID"[VARCHAR2,5]
   2 - "DEMO"."ID"[VARCHAR2,5]

I have no idea why there is still a SORT operation. I think that the INDEX FULL SCAN already returns the rows in binary order, and that should not require additional sorting for the ORDER BY … COLLATE BINARY.


Oracle 19c Automatic Indexing: A More Complex Example (How Does The Grass Grow)

In this post I’m going to put together in a slightly more complex SQL example a number of the concepts I’ve covered thus far in relation to the current implementation of Oracle Automatic Indexing. I’ll begin by creating three tables, a larger TABLE1 and two smaller TABLE2 and TABLE3 lookup tables. Each table is created […]

From 19.6 to 19.7 on Windows

I must say this Release Update (RU) was probably the smoothest I’ve ever done. Obviously you should always read the patch notes carefully before proceeding on your own systems, but for me, it was a simple exercise. I’m posting this just to cover a couple of things that the patch notes “assume” and don’t explicitly state.

  • Shut down everything Oracle related. I just go to “Services” and look for anything with Oracle. Also shut down the “Distributed Transaction Coordinator” service.

This next one is key … I’ve made this mistake so many times. Open a command prompt window as administrator. If you don’t, things will progress OK for a tiny bit and then OPatch is going to throw a wobbly.

I did both the 19.7 RU and the 19.7 OJVM with OPatch, and both went through without incident.


C:\WINDOWS\system32>set PATH=%ORACLE_HOME%\perl\bin;%PATH%
C:\WINDOWS\system32>set PATH=%PATH%;%ORACLE_HOME%\OPatch
C:\WINDOWS\system32>cd D:\oracle\stage\19.7windows\30901317
C:\WINDOWS\system32>d:

D:\oracle\stage\19.7windows\30901317>opatch apply
Oracle Interim Patch Installer version 12.2.0.1.19
Copyright (c) 2020, Oracle Corporation.  All rights reserved.


Oracle Home       : C:\oracle\product\19
Central Inventory : C:\Program Files\Oracle\Inventory
   from           :
OPatch version    : 12.2.0.1.19
OUI version       : 12.2.0.7.0
Log file location : C:\oracle\product\19\cfgtoollogs\opatch\opatch2020-06-05_18-52-57PM_1.log

Verifying environment and performing prerequisite checks...
OPatch continues with these patches:   30901317

Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = 'C:\oracle\product\19')


Is the local system ready for patching? [y|n]
y
User Responded with: Y
Backing up files...
Applying interim patch '30901317' to OH 'C:\oracle\product\19'
ApplySession: Optional component(s) 
  [ oracle.assistants.usm, 19.0.0.0.0 ] , 
  [ oracle.rdbms.ic, 19.0.0.0.0 ] , 
  [ oracle.rdbms.tg4ifmx, 19.0.0.0.0 ] , 
  [ oracle.has.deconfig, 19.0.0.0.0 ] , 
  [ oracle.has.cfs, 19.0.0.0.0 ] , 
  [ oracle.rdbms.tg4tera, 19.0.0.0.0 ] , 
  [ oracle.network.gsm, 19.0.0.0.0 ] , 
  [ oracle.network.cman, 19.0.0.0.0 ] ,
  [ oracle.rdbms.tg4msql, 19.0.0.0.0 ] , 
  [ oracle.ons.daemon, 19.0.0.0.0 ] , 
  [ oracle.options.olap.awm, 19.0.0.0.0 ] , 
  [ oracle.swd.oui, 12.2.0.4.0 ] , 
  [ oracle.swd.oui.core.min, 12.2.0.4.0 ] , 
  [ oracle.rdbms.tg4db2, 19.0.0.0.0 ] ,
  [ oracle.assistants.asm, 19.0.0.0.0 ] , 
  [ oracle.usm, 19.0.0.0.0 ] , 
  [ oracle.tomcat.crs, 19.0.0.0.0 ] , 
  [ oracle.has.crs, 19.0.0.0.0 ] ,
  [ oracle.tfa, 19.0.0.0.0 ] , 
  [ oracle.has.cvu, 19.0.0.0.0 ] , 
  [ oracle.rdbms.tg4sybs, 19.0.0.0.0 ]  not present in the Oracle Home or a higher version is found.

Patching component oracle.sdo, 19.0.0.0.0...

Patching component oracle.rdbms.rman, 19.0.0.0.0...

Patching component oracle.aspnet_2, 19.0.0.0.0...

Patching component oracle.dbjava.ic, 19.0.0.0.0...

Patching component oracle.ons, 19.0.0.0.0...

Patching component oracle.ntoramts, 19.0.0.0.0...

Patching component oracle.rdbms.dbscripts, 19.0.0.0.0...

Patching component oracle.xdk.parser.java, 19.0.0.0.0...

Patching component oracle.nlsrtl.rsf, 19.0.0.0.0...

Patching component oracle.rdbms, 19.0.0.0.0...

Patching component oracle.blaslapack, 19.0.0.0.0...

Patching component oracle.rdbms.oci, 19.0.0.0.0...

Patching component oracle.assistants.acf, 19.0.0.0.0...

Patching component oracle.duma, 19.0.0.0.0...

Patching component oracle.tfa.db, 19.0.0.0.0...

Patching component oracle.ldap.security.osdt, 19.0.0.0.0...

Patching component oracle.ctx.atg, 19.0.0.0.0...

Patching component oracle.dbjava.jdbc, 19.0.0.0.0...

Patching component oracle.xdk.rsf, 19.0.0.0.0...

Patching component oracle.precomp.common, 19.0.0.0.0...

Patching component oracle.rdbms.hsodbc, 19.0.0.0.0...

Patching component oracle.sqlplus.ic, 19.0.0.0.0...

Patching component oracle.oracore.rsf, 19.0.0.0.0...

Patching component oracle.rsf, 19.0.0.0.0...

Patching component oracle.precomp.common.core, 19.0.0.0.0...

Patching component oracle.network.client, 19.0.0.0.0...

Patching component oracle.precomp.lang, 19.0.0.0.0...

Patching component oracle.install.deinstalltool, 19.0.0.0.0...

Patching component oracle.ctx, 19.0.0.0.0...

Patching component oracle.dbjava.ucp, 19.0.0.0.0...

Patching component oracle.javavm.client, 19.0.0.0.0...

Patching component oracle.network.listener, 19.0.0.0.0...

Patching component oracle.ntoledb.odp_net_2, 19.0.0.0.0...

Patching component oracle.rdbms.util, 19.0.0.0.0...

Patching component oracle.usm.deconfig, 19.0.0.0.0...

Patching component oracle.assistants.server, 19.0.0.0.0...

Patching component oracle.rdbms.deconfig, 19.0.0.0.0...

Patching component oracle.ntoledb, 19.0.0.0.0...

Patching component oracle.has.common, 19.0.0.0.0...

Patching component oracle.assistants.deconfig, 19.0.0.0.0...

Patching component oracle.ldap.rsf, 19.0.0.0.0...

Patching component oracle.ovm, 19.0.0.0.0...

Patching component oracle.rdbms.plsql, 19.0.0.0.0...

Patching component oracle.has.db, 19.0.0.0.0...

Patching component oracle.precomp.rsf, 19.0.0.0.0...

Patching component oracle.xdk, 19.0.0.0.0...

Patching component oracle.sdo.locator, 19.0.0.0.0...

Patching component oracle.rdbms.install.plugins, 19.0.0.0.0...

Patching component oracle.rdbms.olap, 19.0.0.0.0...

Patching component oracle.rdbms.rsf.ic, 19.0.0.0.0...

Patching component oracle.rdbms.scheduler, 19.0.0.0.0...

Patching component oracle.ldap.owm, 19.0.0.0.0...

Patching component oracle.rdbms.rsf, 19.0.0.0.0...

Patching component oracle.network.rsf, 19.0.0.0.0...

Patching component oracle.oraolap, 19.0.0.0.0...

Patching component oracle.clrintg.ode_net_2, 19.0.0.0.0...

Patching component oracle.rdbms.dv, 19.0.0.0.0...

Patching component oracle.has.rsf, 19.0.0.0.0...

Patching component oracle.sqlplus, 19.0.0.0.0...

Patching component oracle.has.common.cvu, 19.0.0.0.0...
Patch 30901317 successfully applied.
Sub-set patch [30445947] has become inactive due to the application of a super-set patch [30901317].
Please refer to Doc ID 2161861.1 for any possible further required actions.
Log file location: C:\oracle\product\19\cfgtoollogs\opatch\opatch2020-06-05_18-52-57PM_1.log

OPatch succeeded.

D:\oracle\stage\19.7windows\30901317>cd ..

D:\oracle\stage\19.7windows>cd jvm

D:\oracle\stage\19.7windows\jvm>cd 30805684

D:\oracle\stage\19.7windows\jvm\30805684>opatch prereq CheckConflictAgainstOHWithDetail -ph .
Oracle Interim Patch Installer version 12.2.0.1.19
Copyright (c) 2020, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : C:\oracle\product\19
Central Inventory : C:\Program Files\Oracle\Inventory
   from           :
OPatch version    : 12.2.0.1.19
OUI version       : 12.2.0.7.0
Log file location : C:\oracle\product\19\cfgtoollogs\opatch\opatch2020-06-05_18-59-04PM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.

D:\oracle\stage\19.7windows\jvm\30805684>
D:\oracle\stage\19.7windows\jvm\30805684>opatch apply
Oracle Interim Patch Installer version 12.2.0.1.19
Copyright (c) 2020, Oracle Corporation.  All rights reserved.


Oracle Home       : C:\oracle\product\19
Central Inventory : C:\Program Files\Oracle\Inventory
   from           :
OPatch version    : 12.2.0.1.19
OUI version       : 12.2.0.7.0
Log file location : C:\oracle\product\19\cfgtoollogs\opatch\opatch2020-06-05_19-12-41PM_1.log

Verifying environment and performing prerequisite checks...
OPatch continues with these patches:   30805684

Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = 'C:\oracle\product\19')


Is the local system ready for patching? [y|n]
y
User Responded with: Y
Backing up files...
Applying interim patch '30805684' to OH 'C:\oracle\product\19'

Patching component oracle.rdbms, 19.0.0.0.0...

Patching component oracle.rdbms.dbscripts, 19.0.0.0.0...

Patching component oracle.javavm.server, 19.0.0.0.0...

Patching component oracle.javavm.client, 19.0.0.0.0...
Patch 30805684 successfully applied.
Log file location: C:\oracle\product\19\cfgtoollogs\opatch\opatch2020-06-05_19-12-41PM_1.log

OPatch succeeded.

D:\oracle\stage\19.7windows\jvm\30805684>

Then it’s a case of starting the databases in upgrade mode and then running datapatch. This is my only criticism of the Windows setup here: if I just start the services, the database is completely open, and hence needs to be shut down entirely again so that it can be started in upgrade mode. I tried the old “oradim” command, which is meant to allow me to start just the service and not the instance, i.e.

oradim -startup -sid db19 -starttype srvc

but I found that this also started the entire instance and database. So you might just have to tough it out and accept that it’s going to need a second shutdown/startup.


SQL> startup upgrade
ORACLE instance started.

Total System Global Area 4294966040 bytes
Fixed Size                  9276184 bytes
Variable Size            2113929216 bytes
Database Buffers         2164260864 bytes
Redo Buffers                7499776 bytes
Database mounted.
Database opened.
SQL> alter pluggable database all open upgrade;

Pluggable database altered.

C:\oracle\product\19\OPatch>datapatch -verbose
SQL Patching tool version 19.3.0.0.0 Production on Fri Jun  5 19:25:22 2020
Copyright (c) 2012, 2019, Oracle.  All rights reserved.

Log file for this invocation: C:\oracle\cfgtoollogs\sqlpatch\sqlpatch_44888_2020_06_05_19_25_22\sqlpatch_invocation.log

Connecting to database...OK
Gathering database info...done

Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)

Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of interim SQL patches:
Interim patch 30805684 (OJVM RELEASE UPDATE: 19.7.0.0.200414 (30805684)):
  Binary registry: Installed
  PDB CDB$ROOT: Not installed
  PDB PDB$SEED: Not installed
  PDB PDB1: Not installed
  PDB PDB2: Not installed

Current state of release update SQL patches:
  Binary registry:
    19.7.0.0.0 Release_Update 200514113449: Installed
  PDB CDB$ROOT:
    Applied 19.6.0.0.0 Release_Update 200104221455 successfully on 20-MAR-20 07.53.55.650000 PM
  PDB PDB$SEED:
    Applied 19.6.0.0.0 Release_Update 200104221455 successfully on 20-MAR-20 07.53.57.153000 PM
  PDB PDB1:
    Applied 19.6.0.0.0 Release_Update 200104221455 successfully on 20-MAR-20 07.53.57.351000 PM
  PDB PDB2:
    Applied 19.6.0.0.0 Release_Update 200104221455 successfully on 20-MAR-20 07.53.57.153000 PM

Adding patches to installation queue and performing prereq checks...done
Installation queue:
  For the following PDBs: CDB$ROOT PDB$SEED PDB1 PDB2
    No interim patches need to be rolled back
    Patch 30901317 (Windows Database Bundle Patch : 19.7.0.0.200414 (30901317)):
      Apply from 19.6.0.0.0 Release_Update 200104221455 to 19.7.0.0.0 Release_Update 200514113449
    The following interim patches will be applied:
      30805684 (OJVM RELEASE UPDATE: 19.7.0.0.200414 (30805684))

Installing patches...
Patch installation complete.  Total patches installed: 8

Validating logfiles...done
Patch 30901317 apply (pdb CDB$ROOT): SUCCESS
  logfile: C:\oracle\cfgtoollogs\sqlpatch\30901317\23471163/30901317_apply_DB19_CDBROOT_2020Jun05_19_27_50.log (no errors)
Patch 30805684 apply (pdb CDB$ROOT): SUCCESS
  logfile: C:\oracle\cfgtoollogs\sqlpatch\30805684\23504850/30805684_apply_DB19_CDBROOT_2020Jun05_19_27_15.log (no errors)
Patch 30901317 apply (pdb PDB$SEED): SUCCESS
  logfile: C:\oracle\cfgtoollogs\sqlpatch\30901317\23471163/30901317_apply_DB19_PDBSEED_2020Jun05_19_28_42.log (no errors)
Patch 30805684 apply (pdb PDB$SEED): SUCCESS
  logfile: C:\oracle\cfgtoollogs\sqlpatch\30805684\23504850/30805684_apply_DB19_PDBSEED_2020Jun05_19_28_34.log (no errors)
Patch 30901317 apply (pdb PDB1): SUCCESS
  logfile: C:\oracle\cfgtoollogs\sqlpatch\30901317\23471163/30901317_apply_DB19_PDB1_2020Jun05_19_28_41.log (no errors)
Patch 30805684 apply (pdb PDB1): SUCCESS
  logfile: C:\oracle\cfgtoollogs\sqlpatch\30805684\23504850/30805684_apply_DB19_PDB1_2020Jun05_19_28_34.log (no errors)
Patch 30901317 apply (pdb PDB2): SUCCESS
  logfile: C:\oracle\cfgtoollogs\sqlpatch\30901317\23471163/30901317_apply_DB19_PDB2_2020Jun05_19_28_41.log (no errors)
Patch 30805684 apply (pdb PDB2): SUCCESS
  logfile: C:\oracle\cfgtoollogs\sqlpatch\30805684\23504850/30805684_apply_DB19_PDB2_2020Jun05_19_28_34.log (no errors)

Automatic recompilation incomplete; run utlrp.sql to revalidate.
  PDBs: CDB$ROOT

SQL Patching tool complete on Fri Jun  5 19:30:10 2020
C:\oracle\product\19\OPatch>


C:\oracle\product\19\OPatch>sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 5 19:33:16 2020
Version 19.7.0.0.0

Copyright (c) 1982, 2020, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.7.0.0.0


SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 4294966040 bytes
Fixed Size                  9276184 bytes
Variable Size            2113929216 bytes
Database Buffers         2164260864 bytes
Redo Buffers                7499776 bytes
Database mounted.
Database opened.
SQL>

One thing to be aware of once your patching is completed (and your experiences may be different) is that because the act of patching resulted in a growth of my SYSTEM, UNDO and SYSAUX tablespaces, the database engine seemed to think that it was in the middle of a large space growth activity. As a result, my space management slaves went absolutely bonkers for 2-3 minutes once I re-opened the database. So I’d recommend you patch in a quiet time just in case you are also impacted. (It is most likely because I run my databases very lean on space, because I have so many of them … I suspect a well managed database is unlikely to encounter this.)
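
My active_sess script is not reproduced here, but a minimal sketch of the kind of query it runs (my approximation; LAST_CALL_ET supplies the seconds-active figure) is:

select sid, serial#,
       username || ' (' || last_call_et || ')' username_seconds_active,
       status, sql_id, program
from   v$session
where  status = 'ACTIVE'
order  by sid;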


SQL> @active_sess

       SID    SERIAL# USERNAME_SECONDS_ACTIVE                       STATUS   SQL_ID        PROGRAM
---------- ---------- --------------------------------------------- -------- ------------- -----------------
         5       2450 SYS (260)                                     ACTIVE                 ORACLE.EXE (W008)
         7      23944 SYS (236)                                     ACTIVE                 ORACLE.EXE (W00G)
       127      61619 SYS (257)                                     ACTIVE                 ORACLE.EXE (W009)
       130      38485 SYS (233)                                     ACTIVE                 ORACLE.EXE (W00H)
       254      33664 SYS (230)                                     ACTIVE                 ORACLE.EXE (W00I)
       369      43820 SYS (0)                                       ACTIVE   gg5tyt1pvnr47 sqlplus.exe
       372      21722 SYS (254)                                     ACTIVE                 ORACLE.EXE (W00A)
       374      46323 SYS (227)                                     ACTIVE                 ORACLE.EXE (W00J)
       495       4987 SYS (224)                                     ACTIVE                 ORACLE.EXE (W00K)
       616       1532 SYS (251)                                     ACTIVE                 ORACLE.EXE (W00B)
       619      63318 SYS (221)                                     ACTIVE                 ORACLE.EXE (W00L)
       739      43427 SYS (218)                                     ACTIVE                 ORACLE.EXE (W00M)
       860      11487 SYS (215)                                     ACTIVE                 ORACLE.EXE (W00N)
       982       7708 SYS (248)                                     ACTIVE                 ORACLE.EXE (W00C)
      1106      52591 SYS (245)                                     ACTIVE                 ORACLE.EXE (W00D)
      1227      15812 SYS (242)                                     ACTIVE                 ORACLE.EXE (W00E)
      1347      39327 SYS (263)                                     ACTIVE                 ORACLE.EXE (W007)
      1349      15601 SYS (239)                                     ACTIVE                 ORACLE.EXE (W00F)
      
      

As part of this exercise I found an awesome MOS note, which covers exactly what is fixed in each RU. Be sure to visit MOS note 2523220.1 for all the details. I am not permitted to cut/paste it here, but the image below gives you a flavour of the contents… You get each major section of the database, the bugs fixed in each RU, plus hyperlinks to the bug descriptions. Very cool as a quick reference for seeing if a bug you are impacted by is fixed in a particular RU.

[Image: excerpt from MOS note 2523220.1, listing for each database component the bugs fixed in each RU, with hyperlinks to the bug descriptions.]

Happy patching everyone!

Addendum, June 17 2020:

There’s some internal discussion about whether the “upgrade” option is required for the “alter pluggable database all open” command before running datapatch. There is some good information on that here: https://mikedietrichde.com/2020/01/23/do-you-need-startup-upgrade-for-ojvm/.

Thanks to Daniel Overby Hansen for bringing this to my attention.

Video : APEX_DATA_PARSER : Convert simple CSV, JSON, XML and XLSX data to rows and columns

Today’s video is a quick demonstration of using the APEX_DATA_PARSER package to convert simple CSV, JSON, XML and XLSX data into rows and columns.

If you want the copy/paste examples and the test files, you can get them from this article.

Yet another reason why you should always install APEX in your databases.
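
To give a flavour of the API (a minimal sketch, assuming a hypothetical MY_FILES table holding the uploaded file as a BLOB; the full copy/paste examples are in the article linked above):

select p.line_number, p.col001, p.col002, p.col003
from   my_files f,
       table( apex_data_parser.parse(
                p_content   => f.blob_content,
                p_file_name => f.file_name ) ) p
where  f.file_name = 'test.csv';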

The star of today’s video is Kosseila.HD, also known as BrokeDBA, complete with sun glasses, basket ball and a rattling watch.

No{Join,GroupBy}SQL – Analytic Views for BI

By Franck Pachot

Advocates of NoSQL can query their structures without having to read a data model first, and without writing long table join clauses. They store and query a hierarchical structure without the need to follow relationships, and without the need to join tables on a foreign key, in order to get a caption or description from a lookup table. The structure, like an XML or JSON document, provides metadata to understand the structure and map it to business objects. The API is a simple ‘put’ and ‘get’, where you can retrieve a whole hierarchy, with aggregates at all levels, ready to drill down from summary to details, without the need to write sum() functions and group by clauses. For analytics, SQL has improved a lot with window functions and grouping sets but, despite being powerful, this makes the API more complex. And, at a time where the acceptable learning curve should reach its highest point after 42 seconds (like watching the first bits of a video or getting to the Stack Overflow top-voted answer), this complexity cannot be adopted easily.

Is SQL too complex? If it is, then something is wrong. SQL was invented for end-users: to query data like in plain English, without the need to know the internal implementation and the procedural algorithms that can make sense of it. If developers are moving to NoSQL because of the complexity of SQL, then SQL has missed something from its initial goal. If they go to NoSQL because “joins are expensive”, it just means that joins should not be exposed to them, because optimizing access paths and expensive operations is the job of the database optimizer, with the help of the database designer, not the front-end developer. However, this complexity is unfortunately there. Today, without a good understanding of the data model (entities, relationships, cardinalities), writing SQL queries is difficult. Joining over many-to-many relationships, or missing a group by clause, can give wrong results. When I see a select with a DISTINCT keyword, I immediately think that there’s an error in the query and that the developer, not being certain of the aggregation level he is working at, has masked it with a DISTINCT because understanding the data model was too time-consuming.

In data warehouses, where the database is queried by the end-user, we try to avoid this risk by building simple star schemas with only one fact table and many-to-one relationships to dimensions. And on top of that, we provide a reporting tool that generates the queries correctly, so that the end-user does not need to define the joins and aggregations. This requires a layer of metadata on top of the database to describe the possible joins, aggregation levels, functions to aggregate measures,… When I was a junior on databases I was fascinated by those tools. On my first Data Warehouse, I built a BusinessObjects (v3) universe. It was so simple: define the “business objects”, which are the attributes mapped to the dimension columns. Define the fact measures, with the aggregation functions that can apply. And for the joins, it was like the aliases in the from clause, a dimension having multiple roles: think about an airport that can be the destination or the origin of a flight. And then we defined multiple objects: all the airport attributes in the destination role, and all the airport attributes as an origin, were different objects for the end-user. Like “origin airport latitude”, rather than “airport latitude” that makes sense only after a join on “origin airport ID”. That simplifies the end-user view on our data a lot: tables are still stored as relational tables to be joined at query time, in order to avoid redundancy, but the view on top of that shows the multiple hierarchies, like in a NoSQL structure, for the ease of simple queries.

But, as I mentioned, this is the main reason for SQL, and it should be done with SQL. All these descriptions I did in the BusinessObjects universe should belong to the database dictionary. And that’s finally possible with Analytic Views. Here is an example on the tables I’ve created in a previous post. I am running on the 20c cloud preview, but this can run on 18c or 19c. After importing the .csv of covid-19 cases per day and country, I’ve built one fact table and a snowflake dimension:


create table continents as select rownum continent_id, continentexp continent_name from (select distinct continentexp from covid where continentexp!='Other');
create table countries as select country_id,country_code,country_name,continent_id,popdata2018 from (select distinct geoid country_id,countryterritorycode country_code,countriesandterritories country_name,continentexp continent_name,popdata2018 from covid where continentexp!='Other') left join continents using(continent_name);
create table cases as select daterep, geoid country_id,cases from covid where continentexp!='Other';
alter table continents add primary key (continent_id);
alter table countries add foreign key (continent_id) references continents;
alter table countries add primary key (country_id);
alter table cases add foreign key (country_id) references countries;
alter table cases add primary key (country_id,daterep);

The dimension hierarchy is on country/continent. I should have created one for time (day/month/quarter/year) but the goal is to keep it simple to show the concept.

When looking at the syntax, it may seem complex. But please understand that the goal is to put more into the static definition so that runtime usage is easier.

Attribute Dimension

I’ll describe the Country/Continent dimension. It can be in one table (Star Schema) or multiple tables (Snowflake Schema). I opted for snowflake to show how it is supported since 18c. In 12c we would have to create a view on it, as the USING clause can only be a table or view identifier.


create or replace attribute dimension COUNTRIES_DIM_ATT
using COUNTRIES a ,CONTINENTS b join path country_continent on a.CONTINENT_ID=b.CONTINENT_ID
attributes ( a.COUNTRY_ID "Country ID", a.COUNTRY_CODE "Country", a.COUNTRY_NAME "Country name", a.CONTINENT_ID "Continent ID", b.CONTINENT_NAME "Continent")
level "Continent"
  key "Continent ID"
  member name         '#'||to_char("Continent ID")
  member caption      upper(substr("Continent",1,3))
  member description  "Continent"
  determines ("Continent")
level "Country"
  key "Country ID"
  member name         "Country ID"
  member caption      "Country"
  member description  "Country name"
  determines ("Country ID","Country", "Country name", "Continent ID", "Continent")
 all member name 'WORLD'
/

Let’s take it simply: I have an internal name for my dimension, COUNTRIES_DIM_ATT, and a USING clause which declares the dimension table, with an optional JOIN PATH for snowflake schemas. Then I’ve declared the attributes, which are projections of those columns. For this example, I decided to use quoted identifiers for the ones I add in this layer, to distinguish them from the table columns. But do as you want.

The most important part here is about levels and dependencies. In a star schema, we denormalize the dimension tables for simplification (and because it is not a problem: there are no updates, and their size is not as large as the fact tables). The metadata we declare here describes the relationships. I have two levels, country and continent, and a many-to-one relationship from country to continent. This is what I declare with the LEVEL and DETERMINES keywords: from all the attributes declared, which ones are functional dependencies of others.

The second important description here is standard naming. In the analytic view, I can query the attributes as columns from the USING clause. But for the ease of querying by simple tools, they also have standard column names. Each level has a MEMBER NAME (I used the 2-letter country code here, which is the COUNTRY_ID primary key in my COUNTRIES dimension table), a MEMBER CAPTION as a short name, and a MEMBER DESCRIPTION for a longer one. Those are standardized names for each level. The idea is to provide a view that can be used without reading the data model: for each level, the end-user can query the name, caption or the description.

The idea is that those hierarchy levels will be selected in the WHERE clause by a LEVEL_NAME instead of mentioning all columns in a GROUP BY clause or in the PARTITION BY windowing clause of an analytic function. Note that there’s also an ALL level for the top-most aggregation, and we can keep the ‘ALL’ name or use a specific one, like the ‘WORLD’ I’ve defined here for all countries.

The most important metadata is defined by the dimension, but we don’t query dimensions directly. We can only look at the definitions in the dictionary:


SQL> select * FROM user_attribute_dimensions;

      DIMENSION_NAME    DIMENSION_TYPE    CACHE_STAR    MAT_TABLE_OWNER    MAT_TABLE_NAME    ALL_MEMBER_NAME    ALL_MEMBER_CAPTION    ALL_MEMBER_DESCRIPTION    COMPILE_STATE    ORIGIN_CON_ID
____________________ _________________ _____________ __________________ _________________ __________________ _____________________ _________________________ ________________ ________________
COUNTRIES_DIM_ATT    STANDARD          NONE                                               'WORLD'                                                            VALID                           3
CALENDAR_DIM_ATT     STANDARD          NONE                                               'ALL'                                                              VALID                           3
DAYS_DIM_ATT         TIME              NONE                                               'ALL'                                                              VALID                           3

SQL> select * FROM user_attribute_dim_attrs;

      DIMENSION_NAME    ATTRIBUTE_NAME    TABLE_ALIAS       COLUMN_NAME    ORDER_NUM    ORIGIN_CON_ID
____________________ _________________ ______________ _________________ ____________ ________________
DAYS_DIM_ATT         Date              CASES          DATEREP                      0                3
COUNTRIES_DIM_ATT    Country ID        A              COUNTRY_ID                   0                3
COUNTRIES_DIM_ATT    Country           A              COUNTRY_CODE                 1                3
COUNTRIES_DIM_ATT    Country name      A              COUNTRY_NAME                 2                3
COUNTRIES_DIM_ATT    Continent ID      A              CONTINENT_ID                 3                3
COUNTRIES_DIM_ATT    Continent         B              CONTINENT_NAME               4                3
CALENDAR_DIM_ATT     Date              CASES          DATEREP                      0                3

SQL> select * FROM user_attribute_dim_levels;

      DIMENSION_NAME    LEVEL_NAME    SKIP_WHEN_NULL    LEVEL_TYPE                MEMBER_NAME_EXPR               MEMBER_CAPTION_EXPR    MEMBER_DESCRIPTION_EXPR    ORDER_NUM    ORIGIN_CON_ID
____________________ _____________ _________________ _____________ _______________________________ _________________________________ __________________________ ____________ ________________
COUNTRIES_DIM_ATT    Continent     N                 STANDARD      '#'||to_char("Continent ID")    upper(substr("Continent",1,3))    "Continent"                           0                3
DAYS_DIM_ATT         Day           N                 DAYS          TO_CHAR("Date")                                                                                         0                3
COUNTRIES_DIM_ATT    Country       N                 STANDARD      "Country ID"                    "Country"                         "Country name"                        1                3
CALENDAR_DIM_ATT     Day           N                 STANDARD      TO_CHAR("Date")                                                                                         0                3

There is more that we can define here. In the same way that levels let us simplify the PARTITION BY clause of analytic functions, we can avoid the ORDER BY clause by declaring an ordering at each level. I keep it simple here.

For drill-down analytics, we query on hierarchies.

Hierarchy

This is a simple declaration of a parent-child relationship between levels:


SQL> 
create or replace hierarchy "Countries"
    using COUNTRIES_DIM_ATT
    ( "Country" child of "Continent")
 /

Hierarchy created.

This is actually a view that we can query, and the best way to understand it is to look at it.

The definition from the dictionary just reflects what we have created:


SQL> select * FROM user_hierarchies;

   HIER_NAME    DIMENSION_OWNER       DIMENSION_NAME    PARENT_ATTR    COMPILE_STATE    ORIGIN_CON_ID
____________ __________________ ____________________ ______________ ________________ ________________
Countries    DEMO               COUNTRIES_DIM_ATT                   VALID                           3

SQL> select * FROM user_hier_levels;

   HIER_NAME    LEVEL_NAME    ORDER_NUM    ORIGIN_CON_ID
____________ _____________ ____________ ________________
Countries    Continent                0                3
Countries    Country                  1                3

We can also query USER_HIER_COLUMNS to see what is exposed by the view, but a simple DESC will show the columns:


SQL> desc "Countries"

                 Name    Role            Type
_____________________ _______ _______________
Country ID            KEY     VARCHAR2(10)
Country               PROP    VARCHAR2(3)
Country name          PROP    VARCHAR2(50)
Continent ID          KEY     NUMBER
Continent             PROP    VARCHAR2(10)
MEMBER_NAME           HIER    VARCHAR2(41)
MEMBER_UNIQUE_NAME    HIER    VARCHAR2(95)
MEMBER_CAPTION        HIER    VARCHAR2(12)
MEMBER_DESCRIPTION    HIER    VARCHAR2(50)
LEVEL_NAME            HIER    VARCHAR2(9)
HIER_ORDER            HIER    NUMBER
DEPTH                 HIER    NUMBER(10)
IS_LEAF               HIER    NUMBER
PARENT_LEVEL_NAME     HIER    VARCHAR2(9)
PARENT_UNIQUE_NAME    HIER    VARCHAR2(95)

This is like a join between COUNTRIES and CONTINENTS (the tables defined in the USING clause of the attribute dimension) with the attributes exposed as columns. But there are also additional columns that carry standard names in all hierarchies: member name/caption/description and level information. And all levels are here, as if we did a UNION ALL over GROUP BY queries at each level.

Additional columns and additional rows for each level. Let’s query it:


SQL> select * from "Countries";

   Country ID    Country                         Country name    Continent ID    Continent    MEMBER_NAME    MEMBER_UNIQUE_NAME    MEMBER_CAPTION                   MEMBER_DESCRIPTION    LEVEL_NAME    HIER_ORDER    DEPTH    IS_LEAF    PARENT_LEVEL_NAME    PARENT_UNIQUE_NAME
_____________ __________ ____________________________________ _______________ ____________ ______________ _____________________ _________________ ____________________________________ _____________ _____________ ________ __________ ____________________ _____________________
                                                                                           WORLD          [ALL].[WORLD]                                                                ALL                       0        0          0
                                                                            1 Asia         #1             [Continent].&[1]      ASI               Asia                                 Continent                 1        1          0 ALL                  [ALL].[WORLD]
AE            ARE        United_Arab_Emirates                               1 Asia         AE             [Country].&[AE]       ARE               United_Arab_Emirates                 Country                   2        2          1 Continent            [Continent].&[1]
AF            AFG        Afghanistan                                        1 Asia         AF             [Country].&[AF]       AFG               Afghanistan                          Country                   3        2          1 Continent            [Continent].&[1]
BD            BGD        Bangladesh                                         1 Asia         BD             [Country].&[BD]       BGD               Bangladesh                           Country                   4        2          1 Continent            [Continent].&[1]
...
VN            VNM        Vietnam                                            1 Asia         VN             [Country].&[VN]       VNM               Vietnam                              Country                  43        2          1 Continent            [Continent].&[1]
YE            YEM        Yemen                                              1 Asia         YE             [Country].&[YE]       YEM               Yemen                                Country                  44        2          1 Continent            [Continent].&[1]
                                                                            2 Africa       #2             [Continent].&[2]      AFR               Africa                               Continent                45        1          0 ALL                  [ALL].[WORLD]
AO            AGO        Angola                                             2 Africa       AO             [Country].&[AO]       AGO               Angola                               Country                  46        2          1 Continent            [Continent].&[2]
BF            BFA        Burkina_Faso                                       2 Africa       BF             [Country].&[BF]       BFA               Burkina_Faso                         Country                  47        2          1 Continent            [Continent].&[2]
...

I’ve removed many rows for clarity, but there is one row for each country (the deepest level), plus one row for each continent, plus one row for the top summary (‘WORLD’). This is how we avoid GROUP BY in the end-user query: we just mention the level: LEVEL_NAME=’ALL’, LEVEL_NAME=’Continent’, LEVEL_NAME=’Country’. Or we query the DEPTH: 0 for the global summary, 1 for continents, 2 for countries. The countries, being the most detailed level, can also be queried with IS_LEAF=1. The attributes may be NULL for non-leaf levels, like “Country name” when at the ‘Continent’ level, or “Continent” when at the ‘ALL’ level.
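As a quick illustration on the same hierarchy view (a sketch, selecting only a couple of attributes), IS_LEAF=1 restricts the rows to the country level:


select "Country name", "Continent"
 from "Countries"
 where IS_LEAF = 1
 order by HIER_ORDER;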

In addition to the attributes, we have the standardized names, so that the user GUI can see the same column names for all dimensions. To keep it short here, I don’t show all countries and I don’t query MEMBER_CAPTION and MEMBER_DESCRIPTION:


SQL>
select MEMBER_NAME,MEMBER_UNIQUE_NAME,LEVEL_NAME,PARENT_LEVEL_NAME,PARENT_UNIQUE_NAME,HIER_ORDER,DEPTH,IS_LEAF
 from "Countries" order by DEPTH,HIER_ORDER fetch first 10 rows only;

   MEMBER_NAME    MEMBER_UNIQUE_NAME    LEVEL_NAME    PARENT_LEVEL_NAME    PARENT_UNIQUE_NAME    HIER_ORDER    DEPTH    IS_LEAF
______________ _____________________ _____________ ____________________ _____________________ _____________ ________ __________
WORLD          [ALL].[WORLD]         ALL                                                                  0        0          0
#1             [Continent].&[1]      Continent     ALL                  [ALL].[WORLD]                     1        1          0
#2             [Continent].&[2]      Continent     ALL                  [ALL].[WORLD]                    45        1          0
#3             [Continent].&[3]      Continent     ALL                  [ALL].[WORLD]                   101        1          0
#4             [Continent].&[4]      Continent     ALL                  [ALL].[WORLD]                   156        1          0
#5             [Continent].&[5]      Continent     ALL                  [ALL].[WORLD]                   165        1          0
AE             [Country].&[AE]       Country       Continent            [Continent].&[1]                  2        2          1
AF             [Country].&[AF]       Country       Continent            [Continent].&[1]                  3        2          1
BD             [Country].&[BD]       Country       Continent            [Continent].&[1]                  4        2          1
BH             [Country].&[BH]       Country       Continent            [Continent].&[1]                  5        2          1

A row can be identified by the level (LEVEL_NAME or DEPTH) and its name, but a unique name is also generated here with the full path (in MDX style). This is MEMBER_UNIQUE_NAME, and we also have PARENT_UNIQUE_NAME if we want to follow the hierarchy.
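If we wanted to follow it, here is a sketch: self-join the hierarchy view on these generated names to attach each member to its parent.


select c.MEMBER_NAME, c.LEVEL_NAME, p.MEMBER_NAME as PARENT_NAME
 from "Countries" c
 left outer join "Countries" p
   on p.MEMBER_UNIQUE_NAME = c.PARENT_UNIQUE_NAME
 order by c.HIER_ORDER;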

Analytic View

Now that I have a view on the hierarchy, I want to join it to the fact table, in order to display the measures at different levels of aggregation. Again, I don’t want the user to think about joins and aggregation functions, and this must be encapsulated in a view, an ANALYTIC VIEW:


create or replace analytic view "COVID cases"
using CASES
dimension by (
  COUNTRIES_DIM_ATT key COUNTRY_ID references "Country ID"
  hierarchies ( "Countries")
 )
measures (
  "Cases"          fact CASES aggregate by sum,
  "Highest cases"  fact CASES aggregate by max
)
/

The USING clause just mentions the fact table. The DIMENSION BY clause lists all the dimensions (I have only one here for the simplicity of the example, but you would have all of them here) and how the fact table joins to each dimension (the foreign key REFERENCES the lowest-level key of the dimension). The MEASURES clause defines the fact columns and the aggregation function to apply to them. This can get complex, because what is stored in one fact column can be exposed as multiple business attributes depending on the aggregation, and each one must always make sense.
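As a sketch of that idea (not run here), the same fact column could be exposed under several aggregations:


measures (
  "Cases"         fact CASES aggregate by sum,
  "Average cases" fact CASES aggregate by avg,
  "Highest cases" fact CASES aggregate by max
)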

There are many functions for calculated measures. For example, in the screenshot you will see at the end, I added the following to show each country’s covid cases as a ratio of its continent’s total.


 "cases/continent" as 
  ( share_of("Cases" hierarchy COUNTRIES_DIM_ATT."Countries"  level "Continent") )
  caption 'Cases Share of Continent' description 'Cases Share of Continent'

But for the moment I keep it simple with only “Cases” and “Highest cases”.

Here is the description:


SQL> desc "COVID cases"

            Dim Name    Hier Name                  Name    Role            Type
____________________ ____________ _____________________ _______ _______________
COUNTRIES_DIM_ATT    Countries    Country ID            KEY     VARCHAR2(10)
COUNTRIES_DIM_ATT    Countries    Country               PROP    VARCHAR2(3)
COUNTRIES_DIM_ATT    Countries    Country name          PROP    VARCHAR2(50)
COUNTRIES_DIM_ATT    Countries    Continent ID          KEY     NUMBER
COUNTRIES_DIM_ATT    Countries    Continent             PROP    VARCHAR2(10)
COUNTRIES_DIM_ATT    Countries    MEMBER_NAME           HIER    VARCHAR2(41)
COUNTRIES_DIM_ATT    Countries    MEMBER_UNIQUE_NAME    HIER    VARCHAR2(95)
COUNTRIES_DIM_ATT    Countries    MEMBER_CAPTION        HIER    VARCHAR2(12)
COUNTRIES_DIM_ATT    Countries    MEMBER_DESCRIPTION    HIER    VARCHAR2(50)
COUNTRIES_DIM_ATT    Countries    LEVEL_NAME            HIER    VARCHAR2(9)
COUNTRIES_DIM_ATT    Countries    HIER_ORDER            HIER    NUMBER
COUNTRIES_DIM_ATT    Countries    DEPTH                 HIER    NUMBER(10)
COUNTRIES_DIM_ATT    Countries    IS_LEAF               HIER    NUMBER
COUNTRIES_DIM_ATT    Countries    PARENT_LEVEL_NAME     HIER    VARCHAR2(9)
COUNTRIES_DIM_ATT    Countries    PARENT_UNIQUE_NAME    HIER    VARCHAR2(95)
                     MEASURES     Cases                 BASE    NUMBER
                     MEASURES     Highest cases         BASE    NUMBER

I have columns from all hierarchies, with KEY and PROPERTY attributes, standardized names from the HIERARCHY, and the measures. You must remember that it is a virtual view: you will never query all columns and all rows. You SELECT the columns, filter (WHERE) the rows and levels, and you get the result you want without GROUP BY and JOIN. If you look at the execution plan, you will see the UNION ALL, JOIN and GROUP BY on the star or snowflake tables. But this is out of the end-user’s concern. As a DBA, you can create some materialized views to pre-build some summaries, and query rewrite will use them.
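For example, here is a minimal sketch of such a summary, assuming the CASES, COUNTRIES and CONTINENTS tables used throughout this post:


create materialized view CASES_BY_CONTINENT
 enable query rewrite
 as
 select CONTINENT_ID, sum(CASES) as CASES
 from CASES join COUNTRIES using (COUNTRY_ID)
 group by CONTINENT_ID;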

We are fully within the initial SQL philosophy: a logical view provides an API that is independent of the physical design, easy to query, and exposed as a simple row/column table that is easy to visualize.

Analytic query

A query on the analytic view is then very simple. In the FROM clause, instead of tables with joins, I mention the analytic view, and instead of mentioning table aliases, I mention the hierarchy. I reference only the standard column names. Only the hierarchy names and the measures are specific. In the where clause, I can also reference the LEVEL_NAME:


SQL> 
select MEMBER_DESCRIPTION, "Cases"
 from "COVID cases" hierarchies ("Countries")
 where ( "Countries".level_name='Country' and "Countries".MEMBER_CAPTION in ('USA','CHN') )
    or ( "Countries".level_name in ('Continent','ALL') )
 order by "Cases";

         MEMBER_DESCRIPTION      Cases
___________________________ __________
Oceania                           8738
China                            84198
Africa                          203142
Asia                           1408945
United_States_of_America       1979850
Europe                         2100711
America                        3488230
                               7209766

Here I wanted to see the total covid-19 cases for all countries (‘ALL’), for each continent, and for only two countries at the country level: USA and China. And this was a simple SELECT … FROM … WHERE … ORDER BY, without joins and group by. Like a query on an OLAP cube.
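The same query shape works with DEPTH instead of LEVEL_NAME; for example, this sketch keeps only the world total and the continents:


select MEMBER_DESCRIPTION, "Cases"
 from "COVID cases" hierarchies ("Countries")
 where "Countries".DEPTH <= 1
 order by "Cases";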

If I had no analytic views, here is how I would have queried the tables:


SQL>
select coalesce(CONTINENT_NAME, COUNTRY_NAME,'ALL'), CASES from (
select CONTINENT_NAME, COUNTRY_NAME, sum(CASES) cases, COUNTRY_CODE, grouping(COUNTRY_CODE) g_country
from CASES join COUNTRIES using(COUNTRY_ID) join CONTINENTS using(CONTINENT_ID)
group by grouping sets ( () , (CONTINENT_NAME) , (COUNTRY_CODE,COUNTRY_NAME) )
)
where COUNTRY_CODE in ('USA','CHN') or g_country >0
order by cases
/

   COALESCE(CONTINENT_NAME,COUNTRY_NAME,'ALL')      CASES
______________________________________________ __________
Oceania                                              8738
China                                               84198
Africa                                             203142
Asia                                              1408945
United_States_of_America                          1979850
Europe                                            2100711
America                                           3488230
ALL                                               7209766

This was with GROUPING SETS to add multiple levels, and the GROUPING() function to detect the level. Without GROUPING SETS, I would have done it with many UNION ALL between GROUP BY subqueries.
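That UNION ALL version would look something like this sketch (same tables, one GROUP BY per level):


select 'ALL' as name, sum(CASES) as cases
 from CASES
 union all
 select CONTINENT_NAME, sum(CASES)
 from CASES join COUNTRIES using (COUNTRY_ID)
            join CONTINENTS using (CONTINENT_ID)
 group by CONTINENT_NAME
 union all
 select COUNTRY_NAME, sum(CASES)
 from CASES join COUNTRIES using (COUNTRY_ID)
 where COUNTRY_CODE in ('USA','CHN')
 group by COUNTRY_NAME
 order by cases;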

Back to roots of SQL

You may think that you don’t need Analytic Views because the same can be done by some BI reporting tools. But this belongs in the database. SQL was invented to provide a simple API to users. If you need an additional layer with a large repository of metadata and complex transformations between the user-defined query and the SQL to execute, then something has been lost from the initial goal. One consequence is people going to NoSQL hierarchical databases with the idea that they are easier to visualize: a simple API (a key-value get) and embedded metadata (as JSON, for example). While SQL was becoming more and more powerful at processing data in the database, the complexity went too far, and developers preferred to come back to their procedural code rather than learn something new. And the first step of many current developments is to move the data out of the database, to NoSQL, or to an OLAP cube in our case.

Analytic views bring back the power of SQL: the view exposes a Data Mart as one simple table with columns and rows, containing all dimensions and levels of aggregation. The metadata that describes the data model is back where it belongs: the data dictionary. My example here is a very simple one but it can go further, with classification to add more metadata for self-documentation, with more hierarchies (and a special one for the time dimension), and many calculated measures.
SQL on it is simplified, and there are also some GUIs over analytic views, like APEX or SQL Developer:
[Screenshot: querying the analytic view in SQL Developer]
And if SQL is still too complex, it seems that we can query Analytic Views with MDX (MultiDimensional eXpressions). The MEMBER_UNIQUE_NAME follows the MDX syntax, and we can find this in the ?/mesg/oraus.msg list of error messages:


/============================================================================
/
/    18200 - 18699 Reserved for Analytic View Sql (HCS) error messages
/
/============================================================================
/
/// 18200 - 18219 reserved for MDX Parser
/

HCS (Hierarchical Cubes) is the initial name of this feature. I’ve not seen other mentions of MDX in the Oracle Database documentation, so I have no idea whether it is already implemented.

The post No{Join,GroupBy}SQL – Analytic Views for BI appeared first on Blog dbi services.

The WordPress.com Referral Program: Empower Others to Start a Website

All of us know interesting people. Some have unique talents. Others have business ideas, write beautiful poetry, or have a passion to change the world for the better. 

Should your mom be sharing her recipes with the world?
Does your roommate have hilarious opinions on current events?
Got a co-worker who needs to pursue her entrepreneurial dreams?
Is your favorite singer-songwriter looking for a better way to make money from his music?

If they’re not online, they should be. If you’ve ever told a friend or family member that they should create a blog, start their own podcast, or sell what they make online, this is your chance to give them the nudge they need.

The WordPress.com Refer-a-Friend Program kicked off this spring. Both you and your connections can earn credits for their new WordPress.com websites. But what’s really exciting is how you’ll give people you know the opportunity to bring their big ideas to life.

How the referral program works

We’ve designed our peer referral program to be mutually beneficial: you simply invite someone — friends, family, casual acquaintances — to build a website. As long as they’re totally new to WordPress.com, they get a US$25 credit towards purchasing a WordPress.com plan.

And every time someone you refer picks a plan, you get a US$25 credit, too! That’s our way of saying “thanks.” The credit will be applied within two months of your referral signing up and making an eligible purchase.

With current pricing, a $25 credit is more than 50% off the first year of a Personal plan and more than 25% off a Premium plan. Plus, your referrals also get a free custom domain name for their first year.

Here’s how to start:

    1. Log in to your WordPress.com account and go to Tools → Earn.
    2. Locate your unique referral link in the Refer-a-Friend section.
    3. Copy the link and share it via email, social media, or text message.


Think about what you’ll say to each person before you pass along that link. Instead of saying, “Click this link and you’ll save money on a website,” tell people why you think they should make that leap. Tell them why you believe in them.

Then, tell them why you use WordPress.com and explain how you think it will help them, too.

A little encouragement goes a long way

Why should you bother? Why should you take the time to tell others about WordPress.com?

It’s not really about saving a few bucks (although that’s nice!). The credit is just a little incentive that convinces your friends they should take that first step towards doing something meaningful. You’re not clipping coupons. You’re encouraging the creation of something new and valuable on the internet.

Think back to when you first built a website. Think about the first blog post you ever published. If you’re like most people, it felt like a big deal. And it was a big deal. You had something to say, a goal you were striving to reach, and your website gave you the power to make it happen. Now it’s time to empower others with that same sense of possibility.

In a world of tweetstorms and 24-hour news cycles, websites help us stop consuming and start creating. They give us a space to be thoughtful and proactive. We need more original artists. We need more thoughtful writers. We need more brilliant entrepreneurs. We need more compassionate community activists.

Everyone at WordPress.com believes in the importance of democratizing online publishing. That means giving small businesses, free thinkers, and creators the tools they need to build an online presence.

But we also need you. We need you to help spread the word about what can be done with a real website. Think about three people you can refer to WordPress.com today and give them the spark they need to get started.

Want to send along some inspiration? Check out the amazing websites and customer stories featured on Discover!

Struggling with productivity?

Today’s blog post is a somewhat different one. You could perhaps argue that it is even non-technical, but bear with me; there is a strong relationship to technical work on the topic of productivity.

One of the things we often read about in blog posts, see in YouTube videos, or encounter in any other form of social media when it comes to being a technology professional is the importance of taking regular breaks. Even outside the realm of technology, ergonomists constantly tell us about the mental and physical health benefits of taking regular breaks.

And (perhaps like you) I have always managed to convince myself that I was indeed taking regular breaks from my work, but in reality that was not happening. When you spend all day in front of a computer, the definition of taking a break becomes very loose. Rather than stepping away from the desk and taking a genuine break, over the years my definition of taking a break has become grabbing a quick cup of coffee, or moving from one (technically-focussed) browser tab to another showing the weather, the news, or some nonsensical attention-grabbing piece that is so easy to find on the Internet.

But with the recent travel restrictions in place due to the COVID-19 pandemic, I decided it would be a good time to get a dog. When I was a child, our family always had pets, either cats or dogs, and I was keen to revisit that – not just for myself but also for the benefit of my children. I have always thought it important for children to learn how to look after a pet, because it teaches them to be more selfless, i.e., that sometimes the needs of others take precedence.

So I have fostered a greyhound, and now my regular breaks actually have to be a genuine break from the computer. It might be just patting the dog or feeding the dog, and every morning and evening the dog gets a walk around the park outside my house. And I’ve made a profound discovery. Unlike my previous definition of “taking a break”, this genuine physical detachment from the computer desk has really transformed my productivity. Mentally I am much more refreshed for longer periods during the day, and I find tackling tougher challenges, such as complicated AskTom questions, to be much more manageable.

Hence my advice is – if you are a computer professional and you are currently housebound due to the COVID-19 pandemic, I can highly recommend considering getting a housemate in the form of a four-legged friend. You may think it would be a disruption to your working life, but in my case (and I think in many people’s cases) it will actually improve your working day.

Please don’t misinterpret this advice. I know that travel restrictions will slowly relax over time and I will probably be travelling again. Do not forget the mantra we were always taught as children when we pestered our parents for a pet: “A pet is forever!” I am already putting in place a support network of friends and family to make sure that my dog will always have a happy environment, even when my travel commitments resume at some unknown time in the future.

Thus, the more legs in your working household, the more productive you may be! And this … is Bailey!

[Photo: Bailey the greyhound]