Oakies Blog Aggregator

Oracle Restart 19c: silent installation and ASM Filter Driver

Oracle 19c has been getting a lot of traction recently, and I have been researching various aspects of its installation and use. One topic that came up recently was the installation of Oracle Restart 19c using ASM Filter Driver (ASMFD). ASM Filter Driver has been around for a little while, but I never really looked at it closely. I found that very little has been written about ASMFD in the context of Oracle 19c either, so I thought I'd buck the trend and write a series of posts about it (maybe I just didn't find the relevant articles; I didn't look too closely).

This blog post, and all those that follow in the series, is by no means an endorsement of the technology! My only goal is to make the documentation more accessible: I found it a little hard to work out the steps and hope to save you some time. As with every new-ish storage technology, it's imperative to make sure (by means of rigorous testing) that it meets your requirements.

It’s not as simple as it seems

There are actually quite a few nuances to the installation process when trying to install ASM with ASMFD from the beginning, which I'll detail in the short series of blog posts to follow. The idea is to install Oracle Restart 19c with ASMFD straight away: no upgrade from ASMLib, no switching from UDEV to ASMFD. And it's a fresh installation, not an upgrade from a previous release.

As always I’m using Oracle Linux as the basis for my tests. And since I’m a good citizen I have updated my KVM VMs to the latest and greatest at the time of writing. More details about the environment used can be found in each of the posts in the series.

How do I install ASMFD together with Oracle Restart 19c?

I have studied the documentation, and the way I see it there are essentially 2 ways of installing Oracle Restart 19c with ASMFD:

  • Using UDEV to change permissions on all future ASM disks
  • Labeling future ASM disks using asmcmd to achieve the same goal
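
To give a flavour of the two approaches before the detailed posts, here is a minimal sketch. The device name (/dev/sdb1), the disk label (DATA1) and the grid software owner (grid:asmadmin) are purely illustrative assumptions; your environment will differ.

# option 1: a udev rule granting the grid owner access to the future ASM disk
# (for example in /etc/udev/rules.d/99-asm-disks.rules)
KERNEL=="sdb1", OWNER="grid", GROUP="asmadmin", MODE="0660"

# option 2: label the disk with asmcmd so that ASMFD can claim it later
# (run as the grid software owner; --init is used before the stack is configured)
asmcmd afd_label DATA1 /dev/sdb1 --init
asmcmd afd_lslbl /dev/sdb1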

According to the certification matrix (MOS 1369107.1), it also matters which Oracle Linux 7/kernel combination you are using.

The easiest approach should be to switch Oracle Linux from UEK to the Red Hat Compatible Kernel (RHCK), and I'm going to write about that first. The simplicity gained by using RHCK is slightly offset by operational caveats such as kernel upgrades. But this is a post about Oracle Restart, not the intricacies of switching from UEK to RHCK …

For quite a while now, UEK 5 has been the default kernel for Oracle Linux 7. If you'd like to install Oracle Restart 19c with ASMFD on UEK 5 you can't do that out of the box; a little magic is necessary.

Over the upcoming days I hope to write a short series of posts, all about the silent installation of Oracle Restart 19c for use with ASMFD.

Happy installing!

Oracle disables your multitenant option when you run on EC2

I have installed Oracle 19.6 on an EC2 instance for our Multitenant Workshop training. And of course, during the workshop we create a lot of PDBs. If you haven't paid for the Enterprise Edition plus the Multitenant option, you can create at most 3 pluggable databases. With this option you can create up to 252 pluggable databases. Is it worth the price? According to the public price list that's USD 47,500 + 17,500 per processor, which means per core because Oracle doesn't apply the core factor when your Intel processors are in the AWS cloud (according to the Authorized Cloud Environments paper). Probably not, because Oracle detects where you run and bridles some features depending on whether you are on the Dark or the Light Side of the public cloud (according to their criteria, of course).

At one point I have 3 pluggable databases in my CDB:


SQL> show pdbs
   CON_ID     CON_NAME    OPEN MODE    RESTRICTED
_________ ____________ ____________ _____________
        2 PDB$SEED     READ ONLY    NO
        3 CDB1PDB01    MOUNTED
        4 CDB1PDB03    MOUNTED
        5 CDB1PDB02    MOUNTED

I want to create a 4th one:


SQL> create pluggable database CDB1PDB04 from CDB1PDB03;

create pluggable database CDB1PDB04 from CDB1PDB03
                          *
ERROR at line 1:
ORA-65010: maximum number of pluggable databases created

It fails. The maximum number of pluggable databases is defined by MAX_PDBS, but I defined nothing in my SPFILE:


SQL> show spparameter max_pdbs
SID NAME     TYPE    VALUE
--- -------- ------- -----
*   max_pdbs integer

I thought that the default was 4098 (which is incorrect anyway as you cannot create more than 4096) but it is actually 5 here:


SQL> show parameter max_pdbs
NAME     TYPE    VALUE
-------- ------- -----
max_pdbs integer 5
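
Before looking at the maths, here is a quick cross-check of how many user PDBs actually exist (a sketch; the user PDBs are the containers in V$PDBS with CON_ID greater than 2):


SQL> select count(*) from v$pdbs where con_id > 2;

With the three PDBs shown above, this returns 3.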

Ok… this parameter is supposed to count the number of user pluggable databases (the ones with CON_ID>2) and I have 3 of them here. The limit is 5, yet I get an error saying that I've reached it. That's not the first time I've seen wrong maths with this parameter. But there's worse: I cannot change it:


SQL> alter system set max_pdbs=6;

alter system set max_pdbs=6
 *
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-65334: invalid number of PDBs specified

I can change it in the SPFILE but it doesn’t help me to create more pluggable databases:


SQL> alter system set max_pdbs=200 scope=spfile;

System altered.

SQL> startup force;

Total System Global Area   2147482744 bytes
Fixed Size                    9137272 bytes
Variable Size               587202560 bytes
Database Buffers           1543503872 bytes
Redo Buffers                  7639040 bytes
Database mounted.
Database opened.

SQL> show parameter max_pdbs
NAME     TYPE    VALUE
-------- ------- -----
max_pdbs integer 200

SQL> create pluggable database CDB1PDB04 from CDB1PDB03;

create pluggable database CDB1PDB04 from CDB1PDB03
                          *
ERROR at line 1:
ORA-65010: maximum number of pluggable databases created

Something is bridling me. There's a MOS note, "ORA-65010 When Oracle Database Hosted on AWS Cloud (Doc ID 2328600.1)", about the same problem, but that's for 12.1.0.2 (before MAX_PDBS was introduced) and is supposed to be fixed in the August 2017 PSU. Yet here I am 3 years later on 19.6 (the January 2020 Release Update for the latest version available on-premises).

So, Oracle limits the number of pluggable databases when you are on a public cloud provider which is not the Oracle Public Cloud. This limitation is not mentioned in the licensing documentation, which gives 252 as the Enterprise Edition limit, and I see nothing about "Authorized Cloud Environments" restrictions for this item. This, and the fact that it can come and go with Release Updates, puts customers running on AWS EC2 at risk: financial risk and availability risk. In the long term, I think there are only two choices when you want to run your database in a cloud: go to Oracle Cloud or leave for another database.

How does the Oracle instance know which public cloud you run on? All cloud platforms provide some metadata through an HTTP API. I straced all sendto() and recvfrom() system calls when starting the instance:


strace -k -e trace=recvfrom,sendto -yy -s 1000 -f -o trace.trc sqlplus / as sysdba <<<'startup force'

And I searched for Amazon and AWS here:

[Screenshot: annotated strace output]

This is clear: the instance has a function to detect the cloud provider (kgcs_clouddb_provider_detect) when initializing the SGA in a multitenant architecture (kpdbInitSga), with the purpose of detecting non-Oracle clouds (kscs_is_non_oracle_cloud). It queries the AWS metadata (documented in Retrieving Instance Metadata):


[oracle@ora-cdb-1 ~]$ curl http://169.254.169.254/latest/meta-data/services/domain
amazonaws.com/

When the Oracle software sees the name of the enemy in the domain name amazonaws.com, it sets an internal limit on the number of pluggable databases that overrides the MAX_PDBS setting. Ok, I don't need this metadata and I'm root on EC2, so my simple workaround is to block this metadata API:


[root@ora-cdb-1 ~]# iptables -A OUTPUT -d 169.254.169.254  -j REJECT
[root@ora-cdb-1 ~]# iptables -L
Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
REJECT     udp  --  anywhere             10.0.0.2             udp dpt:domain reject-with icmp-port-unreachable
REJECT     all  --  anywhere             10.0.0.2             reject-with icmp-port-unreachable

Then restart the instance and it works: I can set or reset MAX_PDBS and create more pluggable databases.

I can remove the rule


[root@ora-cdb-1 ~]# iptables -D OUTPUT -d 169.254.169.254  -j REJECT

if, for whatever reason, I want to revert back.

Finally, because they had many bugs with the MAX_PDBS soft limit, there's a parameter to disable it, and this disables the hard limit as well:


SQL> alter system set "_cdb_disable_pdb_limit"=true scope=spfile;
System altered.

Thanks to Mauricio Melnik for the heads-up on that.

However, with this parameter you can no longer control the maximum number of PDBs, so don't forget to monitor AUX_COUNT in DBA_FEATURE_USAGE_STATISTICS.
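
For example, something along these lines (a sketch; the exact feature name for multitenant usage tracking may differ slightly between versions):


SQL> select name, aux_count, last_sample_date
  2  from   dba_feature_usage_statistics
  3  where  name like '%Multitenant%';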

That was my discovery while preparing the multitenant workshop lab environment. Note that, given the current situation where everybody works from home when possible, we are ready to deliver this training, full of hands-on exercises, through Microsoft Teams and AWS EC2 virtual machines. Two days to get comfortable with the CDB architecture, which is the move to make this year if you plan to stay with Oracle Database for the future versions.

Update 27-MAR-2020

So as not to sound too negative here: this limit on AWS platforms had been removed in the past, and this may be a bug re-introduced with the change from 1 to 3 PDBs in Standard Edition.

The post Oracle disables your multitenant option when you run on EC2 appeared first on Blog dbi services.

Create With Confidence — and Better Blocks

In the last few years, the teams working on the block editor have learned a lot about how people build sites now and how they want to build sites in the future.

The latest version represents the culmination of these discoveries, and the next stage in the editor’s evolution.

With better visuals and more advanced features, it’ll keep designers, developers, writers, and editors productive and happy, and — tension-building drumroll — it’s in your editor right now!

What’s new

With a comprehensive visual refresh, a plethora of new features, and dozens of bug fixes, the new block editor comes with a lot to unpack.

What follows is just a small (but delectable) sample of the many ways we’ve upgraded your editing experience. (You can get the full list of goodies in the release notes.)

We hope you enjoy.

A revamped editor UI

The first thing you’ll notice is the slick UI. Buttons, icons, text, and dropdowns are all sporting a contrast boost, with bolder colors and more whitespace between buttons, text labels, and menu items.

The new block editor’s UI

As you navigate through the editor’s menus, individual items are clearly highlighted, allowing you to quickly identify what you’ve selected.

Active menu items have distinct highlights

The block toolbars are now simpler, displaying the most commonly-used features. For example, paragraph blocks show only bold, italic, and link formatting buttons. You’ll find all the extra options in the dropdown menu.

The block toolbar options are simpler and uncluttered

What’s more, instead of listing blocks within a fixed-height container, the block inserter now spans the height of the window. You’ll now see more blocks and block categories at once with less scrolling.

The block inserter spans the full height of your screen

Introducing block patterns

With the block editor as your canvas you can design almost any layout you can imagine – but building intricate page structures should never get in the way of your creative process.

Here’s where the blocks really shine: along with individual blocks, the editor now includes block patterns, a library of predefined and reusable block layouts that you can use on any page or post.

To check out the list of available patterns, click on the block pattern icon (on the top right) to reveal a collection of pre-built layouts:

Block patterns are groups of individual blocks combined to create elegant layouts

Pick the pattern you want to use, and it will appear in your editor ready for you to customize with your own content.

Right now, you’ll find a few introductory patterns – Two Columns of Text, Two Buttons, Cover, and Two Images Side by Side – but we’ll be adding more and more patterns as they’re available. When the block patterns API opens up to third-party authors, you’ll also be able to develop and share your own.

(Have an idea for a great pattern? The block editor developer community is actively seeking ideas. The more ideas they receive, the better your editor will be!)

Colors, colors everywhere

When it comes to words and columns, websites aren’t newspapers: things don’t have to be black and white.

Use the new Text Color selector tool to change the color of sentences, and even individual words and letters. Highlight the text you’d like to change, then click on the arrow dropdown and select “Text Color.”

Select “Text Color” from the options
Pick the color of your word or character

 

To change the background colors of your columns, select the column and head to the sidebar, to Color settings.

Columns get background colors too!

The road ahead is paved with blocks

There’s still a long way to go, and the editor’s community of contributors hasn’t given its collective keyboards a moment’s rest. Work on polishing UI elements like the sidebar and dropdowns continues along with advancements to block patterns and other exciting features.

Are there ways we could improve the site editing experience even more? Please let us know! We’re always keen to hear how we can make the web a better place for everyone.

How to Move Your Classes Online — and Charge for Them

We are proud to host many websites for language tutors, yoga schools, and personal fitness coaches around the world.

It’s exciting to see how educators and consultants across different industries are getting creative with their online offerings: language teachers conduct 1:1 sessions to help students hone pronunciation, yoga studios livestream group sessions, and instructors lead writing boot camps via Zoom breakout rooms. Even my own strength coach is monitoring my workouts — I launch the camera on my phone, place it against the wall, and do deadlifts while he supervises.

Last year we launched Recurring Payments to support creators, consultants, small businesses, and other professionals in establishing dependable income streams. We were very pleased to discover that online educators using this feature are thriving as well!

Marta, for example, runs Spanish Teacher Barcelona, a Spanish language school located in — you guessed it! — Barcelona. She offers 1:1 sessions and classes in a coworking space in the city’s Gracia neighborhood. For customers that cannot meet in person, she hosts private lessons online, available with a subscription. She offers three subscription plans to meet the variety of needs of her students.


Ready to set up your own subscription-based service or move your existing classes online? Here’s a quick guide to get you set up with the right tools, so you can focus instead on providing the best educational environment possible. 

Set up your online class today

Below, we’ll cover the steps you can take to get your classes or private lessons up and running with the Recurring Payments feature. We’ll also recommend tools to make scheduling 1:1 sessions and operating your classes easier, like the Calendly block and various video conferencing tools. 

1. Create a “Subscribe” page to promote your class or service

You need to convince your customers that your subscription is worth paying for. A typical way to do this is with a “Subscribe” page where you explain the benefits of your services.

Take a look at the “Join” page on Longreads.com, an online publication that publishes and curates nonfiction storytelling on the web and funds stories with memberships:


A few tips to make your offer irresistible:

  • Focus on the benefits for the customer.
  • Provide a few subscription options, such as classes at different frequencies and at different price points.
  • Add testimonials if you can — people love to read reviews.

Create this page by going to My Sites → Pages → Add New.

2. Add a subscription with the Recurring Payments feature

Recurring Payments allows you to create renewable payments. Your subscribers will enter their credit card details, and will then be charged automatically every month or every year.

Recurring Payments is currently available on any of our paid plans. To get started, you’ll need to create an account with Stripe, a global money transfer service. We partner with Stripe to make sure payments end up safely in your bank account.

You can start collecting Recurring Payments in five minutes.

On the “Subscribe” page you created above, search for the “Recurring Payments” block:


After clicking “Connect to Stripe,” you’ll be able to connect your existing Stripe account or create a new one.

Now you can create your first subscription.


Set the price, frequency (we recommend monthly to start), and the title of your subscription, like Writing Bootcamp, 3 breakout sessions/month or Conversational French for Beginners, 4 classes/month.

That’s it! Your subscription is now created. Once you publish the page and activate your Stripe account, your customers will be able to subscribe to this service.

Subscriptions are dependable: your subscribers will be automatically charged at the beginning of the next renewal period (in a month or a year). You don’t have to remind or nudge them, and they also don’t have to remember to pay you — everything is handled.

For more details, please read this Recurring Payments support article.

Would you rather sell access to your services as a one-time purchase? Check out the Simple Payments feature.

3. Schedule your lessons

Your subscribers can set up a time for their lessons using a service like Calendly, a handy tool that allows them to select a free slot in your schedule. We recently created the Calendly block to bring some of the service’s key features to you. While editing your page, search for the “Calendly” block.

Remember to check if the subscription is active

Before hopping on an online meeting, you need to confirm that the person scheduling a call is indeed a paying subscriber. Check the list of your active Recurring Payments subscribers located in your WordPress.com dashboard under My Sites → Earn → Payments.

Read more about managing your list of subscribers.

4. Select a tool to host your class

Video conferencing tools are very useful for teaching. Apart from seeing the other person, you can share your screen, send files, or even host a session for multiple people, lecture-style.

You can use Google Hangouts, Skype, or Zoom (which is what we use for our meetings here at WordPress.com). Zoom has put together a handy tutorial for teachers.

If you’d like additional setup tips on selecting a theme for your website, adding content and media, and adding students as viewers or contributors, read our support tutorial on building a virtual classroom.

What amazing class are you going to launch?

Setting Users Impossible Passwords BY VALUES and Schema Only Accounts

I plan to try and write some Oracle security based blog posts whilst working from home. These promises when I have made them in the past usually end up not coming true due to other work and things getting more....[Read More]

Posted by Pete On 26/03/20 At 02:38 PM

Large objects are larger than expected

Most customers I encounter nowadays are generally sticking to the defaults when creating their databases from 12c onwards. This means that they’ll be using the UTF8 class of charactersets when configuring their database. As a quick segue from the topic at hand, I think this is a good thing. Having a UTF8 characterset avoids a lot of hassles later when applications encounter a need to store extended characters, and let’s face it, the moment you need to store someone’s name, then at some stage you are going to hit this.

But back to large objects. It is important to realise that a UTF8 characterset has implications for how LOBs are stored in the database. In a single-byte characterset database, if you need to store a 1 megabyte text file in a CLOB, it will consume approximately 1 megabyte. But let’s look at what happens when you are using a multi-byte database.


SQL> host dir /s /b c:\temp > c:\temp\file1.txt

SQL> host dir c:\temp\file1.txt
 Volume in drive C is OS
 Volume Serial Number is 66D0-4AB5

 Directory of c:\temp

22/03/2020  07:19 PM           635,750 file1.txt
               1 File(s)        635,750 bytes
               0 Dir(s)  54,663,430,144 bytes free

So I have a text file of about 630K, and I’ll now store that as a single CLOB in the database (which was built with AL32UTF8).


SQL> select property_name, property_value
  2  from   database_properties
  3  where  property_name like 'NLS%';

PROPERTY_NAME                  PROPERTY_VALUE
------------------------------ -----------------------------
NLS_RDBMS_VERSION              19.0.0.0.0
NLS_NCHAR_CONV_EXCP            FALSE
NLS_LENGTH_SEMANTICS           BYTE
NLS_COMP                       BINARY
NLS_DUAL_CURRENCY              $
NLS_TIMESTAMP_TZ_FORMAT        DD-MON-RR HH.MI.SSXFF AM TZR
NLS_TIME_TZ_FORMAT             HH.MI.SSXFF AM TZR
NLS_TIMESTAMP_FORMAT           DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_FORMAT                HH.MI.SSXFF AM
NLS_SORT                       BINARY
NLS_DATE_LANGUAGE              AMERICAN
NLS_DATE_FORMAT                DD-MON-RR
NLS_CALENDAR                   GREGORIAN
NLS_NUMERIC_CHARACTERS         .,
NLS_NCHAR_CHARACTERSET         AL16UTF16
NLS_CHARACTERSET               AL32UTF8
NLS_ISO_CURRENCY               AMERICA
NLS_CURRENCY                   $
NLS_TERRITORY                  AMERICA
NLS_LANGUAGE                   AMERICAN

SQL> create tablespace small_extents 
  2    datafile 'X:\ORACLE\ORADATA\DB19\PDB1\SMALL01.DBF' size 20m
  3    extent management local uniform size 128k;

Tablespace created.

SQL> create table t ( c clob )
  2  lob ( c ) store as basicfile
  3  tablespace small_extents;

Table created.

SQL>
SQL> declare
  2    bf    bfile;
  3    cb    clob;
  4
  5    dest  int := 1;
  6    src   int := 1;
  7    cs    int := 0;
  8    ctx   int := 0;
  9    warn  int := 0;
 10  begin
 11    insert into t (c) values (empty_clob())
 12    returning c into cb;
 13
 14    bf := bfilename('TEMP', 'file1.txt');
 15    dbms_lob.fileopen(bf, dbms_lob.file_readonly);
 16    dbms_lob.loadclobfromfile (
 17      dest_lob      => cb,
 18      src_bfile     => bf,
 19      amount        => dbms_lob.lobmaxsize,
 20      dest_offset   => dest,
 21      src_offset    => src,
 22      bfile_csid    => cs ,
 23      lang_context  => ctx,
 24      warning       => warn);
 25    dbms_lob.fileclose(bf);
 26
 27    commit;
 28  end;
 29  /

PL/SQL procedure successfully completed.

So the CLOB has been stored. Let us take a look at the size of the CLOB in the database.


SQL> select segment_name, bytes
  2  from   user_segments
  3  where  segment_name = 'T'
  4   or segment_name = ( select segment_name from user_lobs where table_name = 'T' )
  5   or segment_name = ( select index_name from user_lobs where table_name = 'T' );

SEGMENT_NAME                        BYTES
------------------------------ ----------
SYS_IL0000082130C00001$           131072
SYS_LOB0000082130C00001$         1572864
T                                  131072

3 rows selected.

SQL>
SQL> select bytes
  2  from   user_extents
  3  where  segment_name = 'SYS_LOB0000082130C00001$';

     BYTES
----------
    131072
    131072
    131072
    131072
    131072
    131072
    131072
    131072
    131072
    131072
    131072
    131072

12 rows selected.

SQL>

Even without the LOB index, the LOB data is just over double the size of the original text file. This might come as a surprise, but it is documented in the Large Object Guide:

CLOB data is stored in a format that is compatible with UCS-2 if the database character set is multibyte

Hence, if you are doing a database sizing estimate and you intend to store text data in CLOBs, make sure to take this into consideration when planning your size. In case you are thinking that this is an unreasonable “tax” to pay, don’t forget that all we are really doing is catering for the potential of the file containing extended characters. For example, if I open the original file with a text editor and save it in UTF16 format, then the file size is similar to that of the CLOB.



SQL> host dir c:\temp\file1*
 Volume in drive C is OS
 Volume Serial Number is 66D0-4AB5

 Directory of c:\temp

22/03/2020  07:19 PM           635,750 file1.txt
22/03/2020  07:33 PM         1,271,500 file1u.txt
               2 File(s)      1,907,250 bytes
               0 Dir(s)  54,639,943,680 bytes free
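
Another way to see the same effect from inside the database is to compare the CLOB length in characters with the bytes allocated for it (a sketch; DBMS_LOB.GETLENGTH returns the length in characters for a CLOB). Roughly 635,750 characters stored at two bytes each lines up reasonably well with the ~1.5MB of allocated LOB extents, once you allow for chunk and extent rounding.


SQL> select dbms_lob.getlength(c) as char_length,
  2         dbms_lob.getlength(c) * 2 as approx_stored_bytes
  3  from   t;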

Expert Advice: How to Make a Great Website for Your Small Business – Webinar

Whether you already own a small business or are exploring the idea of starting one, you’ll come away from this free, 60-minute live webinar with a wealth of actionable advice on how to maximize your digital presence.

Date: Thursday, April 2, 2020
Time: 11:00 am PDT | 1:00 pm CDT | 2:00 pm EDT | 18:00 UTC
Registration link: https://zoom.us/webinar/register/4215849773038/WN_at0PB64eTo2I0zJx-74g2Q
Who’s invited: Business owners, freelancers, entrepreneurs, and anyone interested in starting a small business or side gig.

Hosts Steve Dixon and Kathryn Presner, WordPress.com Happiness Engineers, have many combined years of experience helping small-business owners create and launch successful websites. They’ll give you tips on site design, search engine optimization (SEO), monetization, and mobile optimization. You’ll be able to submit questions beforehand—in the registration form—and during the live webinar.

Everyone is welcome, even if you already have a site, and even if your site wasn’t built on WordPress.com. We know you’re busy, so if you can’t make the live event, you’ll be able to watch a recording of the webinar on our YouTube channel.

Live attendance is limited, so be sure to register early. We look forward to seeing you on the webinar!

CoronaVirus - We are Still Open

Everyone must now be affected in some way about coronavirus. We had an inkling that Boris Johnson and his government would enact a more severe lock down in the UK. So in anticipation I decided on Monday that we needed....[Read More]

Posted by Pete On 25/03/20 At 01:27 PM

Indexes on Joins for MySQL

MySQL is simpler than the Oracle example because MySQL only has Nested Loops joins and doesn’t have a Hash Join.

Here is the query, the same as in the Oracle example from the last blog post:

select max(t1.data) 
from  t1, t2  
where t1.id = t2.id 
and t1.clus  = 1
;

 

So there are only 4 examples below, all of which do NL joins:

  1. Full table scan on T1 and T2: 7.83 secs
  2. Index on the T1 predicate filter column: 7.39 secs
  3. Index on the T2 join column: 0.49 secs
  4. Index on both the T2 join column and the T1 predicate filter column: 0.06 secs

There isn’t an “explain analyze” in MySQL until MySQL 8.0.18, and I did my testing on 8.0.17, so the explain plan costs are estimates rather than actual values, but the elapsed time of the query is an actual value.
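
For what it’s worth, on 8.0.18 or later you could get actual row counts and timings with something like this (a sketch; untested here since I’m on 8.0.17):

explain analyze select max(t1.data) 
from  t1, t2  
where t1.id = t2.id 
and t1.clus  = 1
;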

 

 

drop table seed;
CREATE TABLE seed ( id INT AUTO_INCREMENT PRIMARY KEY, val INT);
 insert into seed(val) values (1);
insert into seed(val) select val from seed;  /*  2 */
insert into seed(val) select val from seed;  /*  4 */
insert into seed(val) select val from seed;  /*  8 */
insert into seed(val) select val from seed;  /*  16 */
insert into seed(val) select val from seed;  /*  32 */
insert into seed(val) select val from seed;  /*  64 */
insert into seed(val) select val from seed;  /*  128 */
insert into seed(val) select val from seed;  /*  256 */
insert into seed(val) select val from seed;  /*  512 */
insert into seed(val) select val from seed;  /*  1024 */
insert into seed(val) select val from seed;  /*  2048 */
insert into seed(val) select val from seed;  /*  4096 */
insert into seed(val) select val from seed;  /*  8192 */
insert into seed(val) select val from seed;  /*  16384 */
insert into seed(val) select val from seed;  /*  32768 */
insert into seed(val) select val from seed;  /*  65536 */
insert into seed(val) select val from seed;  /*  131072 */
insert into seed(val) select val from seed;  /*  262144 */
insert into seed(val) select val from seed;  /*  524288 */
insert into seed(val) select val from seed;  /*  1048576 */
 select count(*) from seed;

 drop table dummy;
 
  create table dummy ( 
    id  int,
    clus int,
    val int,
    data VARCHAR(40)
 );
 
 insert into dummy (
    id, clus , val, data
)
 select 
 id,
 truncate(id/169,0),
 mod(id,10000), 
 CONV(FLOOR(RAND() * 99999999999999), 10, 36) 
from seed
;

select count(*) from dummy where clus = 1;
+----------+
| count(*) |
+----------+
|      106 |
+----------+
select count(*) from dummy where val =1;
+----------+
| count(*) |
+----------+
|      106 |
+----------+

drop table t1;
drop table t2;

create table t1 as select * from dummy;
create table t2 as select * from dummy; 



One strange thing is that there are gaps in the sequence numbers (id) in the seed table, hence the odd value in the truncate expression, truncate(id/169,0), needed to get 106 rows per clus value.
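
For reference, the two indexes used in the later examples (I created them up front) would look something like this – a sketch whose index names match the keys shown in the EXPLAIN output below:

create index t1_clus on t1 (clus);
create index t2_id   on t2 (id);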

First we run the query with no indexes (I created the indexes up front, so I force them off in this example with “USE INDEX ()”).

both nodes have access type

"access_type": "ALL"

so they are doing full table scans.

explain format=json select max(t1.data) 
from 
t1 USE INDEX ()
, t2 USE INDEX ()
where t1.id = t2.id 
and t1.clus = 1
;

1 row in set (7.83 sec)

{
  "query_block": {
    "select_id": 1,
    "cost_info": {
      "query_cost": "18102538.86"
    },
    "nested_loop": [
      {
        "table": {
          "table_name": "t1",
          "access_type": "ALL",
          "rows_examined_per_scan": 1045682,
          "rows_produced_per_join": 172,
          "filtered": "0.02",
          "cost_info": {
            "read_cost": "105376.24",
            "eval_cost": "17.21",
            "prefix_cost": "105393.45",
            "data_read_per_join": "29K"
          },
          "used_columns": [
            "id",
            "clus",
            "data"
          ],
          "attached_condition": "(`kyle`.`t1`.`clus` = 1)"
        }
      },
      {
        "table": {
          "table_name": "t2",
          "access_type": "ALL",
          "rows_examined_per_scan": 1045682,
          "rows_produced_per_join": 172,
          "filtered": "0.00",
          "using_join_buffer": "Block Nested Loop",
          "cost_info": {
            "read_cost": "917.35",
            "eval_cost": "17.21",
            "prefix_cost": "18102538.86",
            "data_read_per_join": "29K"
          },
          "used_columns": [
            "id"
          ],
          "attached_condition": "(`kyle`.`t2`.`id` = `kyle`.`t1`.`id`)"
        }
      }
    ]
  }
}

We can see that the filter on T1 should filter out most of the table (172 estimated rows out of 1045682), so now we add an index on t1.clus, the predicate filter column (I remove the “USE INDEX ()” on t1 so that the index on T1 is now used in the explain). The execution time is a little faster, but not by much: 7.39 secs versus 7.83. We see that T1 is now accessed by index:

explain format=json select max(t1.data) 
from  t1, t2  
USE INDEX ()
where t1.id = t2.id 
and t1.clus    = 1
;
1 row in set (7.39 sec)

{
  "query_block": {
    "select_id": 1,
    "cost_info": {
      "query_cost": "11085148.28"
    },
    "nested_loop": [
      {
        "table": {
          "table_name": "t1",
          "access_type": "ref",
          "possible_keys": [
            "t1_clus"
          ],
          "key": "t1_clus",
          "used_key_parts": [
            "clus"
          ],
          "key_length": "5",
          "ref": [
            "const"
          ],
          "rows_examined_per_scan": 106,
          "rows_produced_per_join": 106,
          "filtered": "100.00",
          "cost_info": {
            "read_cost": "26.50",
            "eval_cost": "10.60",
            "prefix_cost": "37.10",
            "data_read_per_join": "18K"
          },
          "used_columns": [
            "id",
            "clus",
            "data"
          ]
        }
      },
      {
        "table": {
          "table_name": "t2",
          "access_type": "ALL",
          "rows_examined_per_scan": 1045682,
          "rows_produced_per_join": 105,
          "filtered": "0.00",
          "using_join_buffer": "Block Nested Loop",
          "cost_info": {
            "read_cost": "881.98",
            "eval_cost": "10.60",
            "prefix_cost": "11085148.28",
            "data_read_per_join": "18K"
          },
          "used_columns": [
            "id"
          ],
          "attached_condition": "(`kyle`.`t2`.`id` = `kyle`.`t1`.`id`)"
        }
      }
    ]
  }
}

 

On T2 we see that we access 105 rows after examining 1045682, so there is a good opportunity for an index. Now add an index on the join column and we get a significant improvement in execution time: we go from 7.39 secs to 0.49 secs, a 15x improvement (7.39/0.49).

explain format=json select max(t1.data) 
from  t1 USE INDEX () , t2  
where t1.id = t2.id 
and t1.clus   = 1
;
1 row in set (0.49 sec)

{
  "query_block": {
    "select_id": 1,
    "cost_info": {
      "query_cost": "105582.64"
    },
    "nested_loop": [
      {
        "table": {
          "table_name": "t1",
          "access_type": "ALL",
          "rows_examined_per_scan": 1045682,
          "rows_produced_per_join": 172,
          "filtered": "0.02",
          "cost_info": {
            "read_cost": "105376.24",
            "eval_cost": "17.21",
            "prefix_cost": "105393.45",
            "data_read_per_join": "29K"
          },
          "used_columns": [
            "id",
            "clus",
            "data"
          ],
          "attached_condition": "((`kyle`.`t1`.`clus` = 1) and (`kyle`.`t1`.`id` is not null))"
        }
      },
      {
        "table": {
          "table_name": "t2",
          "access_type": "ref",
          "possible_keys": [
            "t2_id"
          ],
          "key": "t2_id",
          "used_key_parts": [
            "id"
          ],
          "key_length": "5",
          "ref": [
            "kyle.t1.id"
          ],
          "rows_examined_per_scan": 1,
          "rows_produced_per_join": 172,
          "filtered": "100.00",
          "using_index": true,
          "cost_info": {
            "read_cost": "171.98",
            "eval_cost": "17.21",
            "prefix_cost": "105582.64",
            "data_read_per_join": "29K"
          },
          "used_columns": [
            "id"
          ]
        }
      }
    ]
  }
}

Now use both indexes, which gives us a further 0.43 second reduction in execution time; as a ratio of the previous run that is 0.49/0.06, an 8x improvement, which is still pretty good:

explain format=json select max(t1.data) 
from  t1, t2  
where t1.id = t2.id 
and t1.clus  = 1
;
1 row in set (0.06 sec)

{
  "query_block": {
    "select_id": 1,
    "cost_info": {
      "query_cost": "153.62"
    },
    "nested_loop": [
      {
        "table": {
          "table_name": "t1",
          "access_type": "ref",
          "possible_keys": [
            "t1_clus"
          ],
          "key": "t1_clus",
          "used_key_parts": [
            "clus"
          ],
          "key_length": "5",
          "ref": [
            "const"
          ],
          "rows_examined_per_scan": 106,
          "rows_produced_per_join": 106,
          "filtered": "100.00",
          "cost_info": {
            "read_cost": "26.50",
            "eval_cost": "10.60",
            "prefix_cost": "37.10",
            "data_read_per_join": "18K"
          },
          "used_columns": [
            "id",
            "clus",
            "data"
          ],
          "attached_condition": "(`kyle`.`t1`.`id` is not null)"
        }
      },
      {
        "table": {
          "table_name": "t2",
          "access_type": "ref",
          "possible_keys": [
            "t2_id"
          ],
          "key": "t2_id",
          "used_key_parts": [
            "id"
          ],
          "key_length": "5",
          "ref": [
            "kyle.t1.id"
          ],
          "rows_examined_per_scan": 1,
          "rows_produced_per_join": 106,
          "filtered": "100.00",
          "using_index": true,
          "cost_info": {
            "read_cost": "105.93",
            "eval_cost": "10.60",
            "prefix_cost": "153.63",
            "data_read_per_join": "18K"
          },
          "used_columns": [
            "id"
          ]
        }
      }
    ]
  }
}

COVID-19: What’s Going To Happen Now

<< COVID-19 Basics. What it is & what it does to us

  COVID-19: What can we do to Reduce Social Distancing >>

I thought I’d record what the scientific evidence and epidemiological modelling is saying about what is going to happen in respect of COVID-19 in the UK (and, to some extent, elsewhere) over the next weeks and months. As with my intro to COVID-19 this post is mostly “for me”. I’m sharing it but please, please, treat all of this post (not the science I link to!) with some scepticism.

The figures are shocking, so I want to spell out right at the start that, if our government does what it needs to do and does it right (and over the last 2 or 3 weeks the UK government has fallen a tad short on this, but it’s improving), in the end over 99% of us will be OK. If they get it wrong, it’s more like 97% of us who will come through this.

And, I feel it is important to say:

90% of even high risk people will also be OK.

I strongly feel that the message is constantly that it is the at-risk people who are dying and not that most people at risk will be OK. Yes, COVID-19 is more of a danger to those over 70 and those with underlying medical conditions, but with the media and government constantly saying “the people who died are old” etc it makes it sound like COVID-19 is a death sentence to them – and it is not.

Yes, I’m quite angry about that poor messaging.

Source of Epidemiological information

My main source is this paper by Imperial College in collaboration with the World Health Organisation and the British Medical Research Council. If you can, please read it. It spells out how COVID-19 will spread and what happens when the NHS intensive care unit (ICU) beds are all full. It’s a hard read in two ways: it is technically dense, and it says things people are still refusing to believe:

  • If we had done nothing and had an infinite number of critical care beds, it would burn through the population of the UK (and all other countries) in 3 months, infecting 81% of people. At that point herd immunity stops it.
  • In the UK 510,000 people would die (COVID-19 kills about 1% of people even with ICU treatment). 2.2M would die in the USA.
  • At the time the report was published, the “mitigation” plans of the UK government would have failed to prevent an even higher death rate (more than 1%), as the NHS would have been overwhelmed by the 2nd week of April.
  • At the peak we would have needed 30 times the number of ICU beds we have.
  • The paper does not fully spell this out, but if you need an ICU bed and there is not one, you will almost certainly die. Thus the death rate would be more like 2.3% {Note, that is my figure, I have not spotted it in the report. It is based on 4.4% of the population needing hospitalisation and 30% of them needing critical care, figures that are in the report}. I’ll let you work that out based on the UK population of 66.5 million. OK, it’s about 1.17 million.

These figures are truly scary. They won’t happen now as it shocked our government enough to ramp up the social isolation. If anyone questions why we need the social isolation, give them the figures. If they refuse to believe them,  tell them to read the paper and various articles based on it and point out where they are significantly wrong. If they won’t, thank them for their baseless “opinion”.

The calculation of 510,000 deaths in the UK did not factor in self-isolating naturally, as we all saw people fall ill and die. That would slow down the disease.

However, if the hospitals are full to absolute bursting capacity with COVID-19 patients, what happens to anyone who needs ICU care for other illnesses (cancer, cardiovascular disease, stroke) or accidents? How do you fit them in? Deaths from other causes will increase.

One thing I am not sure of is that in the paper critical care is stated as “invasive mechanical ventilation or ECMO”. If you need just a ventilator and one is not available, I’m pretty sure you would also be likely to die or suffer brain and other organ damage from oxygen deprivation.

As I understand it, this report is what made the UK and other governments take COVID-19 a lot more seriously and really understand the need to implement strict social isolation.

I’d like to say why I put so much trust in this source:

  1. The three organisations behind it are all highly respected (WHO, MRC, and Imperial College)
  2. They state clearly at the top their assumptions – the R number, incubation period, types of social isolation, the percentage of people who will comply with each one.
  3. They created a model that was then verified by running the numbers and seeing if it predicted what had happened in reality to that point.
  4. The subject matter experts I follow have all endorsed this piece of work.

Mitigation or Suppression

The Imperial College report spells out the distinction between Mitigation and Suppression:

Mitigation is where you reduce the R number (the number of people each infected person in turn infects) down from the natural number of around 2.4, but it is still above 1. At this rate the disease continues to spread and the number of cases per day continues to increase, but more slowly. The idea seems to be that it would lead to herd immunity. This was the UK government’s aim until Monday 16th March.

Suppression is where you reduce the R number below 1. Within a few weeks the disease is no longer spreading. But it is still there in the population. This is what Wuhan did and Italy is making progress on.

To achieve mitigation the government isolated people infected, asked those who had had contact with them to self isolate, and asked us all to wash our hands and keep a distance and think about working from home. The impact on daily life, business, the economy is minimal. Further steps would be introduced later, like closing universities and schools.

The Imperial college report demonstrated that mitigation was a terrible idea as the number of cases would still explode, but just be delayed a little, and the NHS would be absolutely overwhelmed.

The graph at the top of this article shows the mitigation steps being considered and how they only shifted the curve and did not lower the peak to anywhere near the NHS ICU capacity. It was simply not enough.

Isolation involves the sort of steps most of us would have previously thought only an authoritarian regime like China or North Korea could manage. Schools, universities and non-critical business shut, everyone not doing a critical job made to stay at home except to buy food etc. Basically, Wuhan. And now Italy is doing very similar. As of the 23rd March the UK is following suit.

[Graph: ICU cases over time for two suppression strategies, with and without school closures]

Most western countries are now implementing many of the steps needed for isolation levels that will suppress COVID-19, but not all the steps needed.

The graph to the right shows the impact of two implementations of isolation, both implementing several measures, but the orange line does not include closing schools and universities while the green line does. The green line keeps the number of cases within the NHS ICU capacity; the orange does not. That is why schools and universities were closed.

The graph also makes the point about the main problem with Isolation. It is only stopping the virus spreading, it is NOT getting rid of it. Remember, no one is immune unless they have had COVID-19. When the steps to enforce isolation are relaxed, COVID-19 will burst back.

This is potentially the position that China is in. They have locked down Wuhan province tightly and it worked. The number of cases there rocketed even after the lock-down but has since reduced, almost as fast as it increased. China as a whole now has very few new cases. The lock-down is being relaxed as I prepare this post. Epidemiologists expect the number of cases in China to increase again.

The degree to which either mitigation or suppression is enforced obviously impacts society and commerce. The Imperial College report makes the point that they are not addressing those concerns, they are simply saying what social isolation changes will have what effect on COVID_19 spread, deaths, and the ability of the NHS to cope.

Delayed impact.

This next point is being made widely, by both non-scientific observers and the scientific community, but I want to re-iterate it as it is so far being played down by government (which could be changing at the very moment I am typing).

There is no way to avoid the huge increase in COVID-19 cases and deaths that is going to happen in the UK over the next 2-3 weeks. Expect our levels to reach the same levels as Italy. In fact, expect them to be 20-30% higher. This is because the UK government was too slow to lock down and did it in stages when, based on the epidemiology, we should have shut down totally on Monday 16th, when the paper I reference was published, or within 2 days of it to allow for planning.

Up until now COVID-19 has been spreading exponentially (1 person has it and passes it to 2-3 people, who pass it to 4 people, who pass it to 8… 16… 32… 64… 128… 256… 512… 1024). This can be seen in the way the number of cases has doubled every 3-4 days; deaths are now following the same pattern.

The two graphs to the right show the number of cases in the UK to the 20th March, and the number of deaths in Italy to the 20th. They look like the same graph because, in a sense, they are: this is how something grows exponentially when the growth rate is the same, and both cases and deaths are driven by the same thing.

(These graphs are from Worldometers – I use this site as I think the Johns Hopkins site has more incorrect information on it.)

COVID-19 takes on average 5.1 days to show symptoms from when you catch it (this can be up to 2 weeks – with all these averages there will be some cases which take two or three times as long). It takes less time, 4.6 days on average, from when you catch it to when you can spread it, so you can spread the disease before you get ill. And some people do not get ill (or are only very mildly ill) yet spread it, like “Typhoid Mary”. If you are going to be ill enough to need hospitalisation, it takes about 5 days from first symptoms to deteriorate to that point. At that point you will be admitted to hospital, tested, and will join the number of confirmed cases. If you are going to die (I know, this sounds really callous), that takes another few days – the report does not spell it out, but going on the figures used for time spent in intensive care in the model, about a week.

Add it all together and someone who dies of COVID-19 today caught it 15-20 days ago on average, so the spike will be delayed that much.

Yesterday, 23rd March, almost total lock-down in the UK was announced. Cases and deaths will rise for 20 more days in the UK. Exponentially. To Italy levels, maybe 20-30% higher. Then they will plateau for a few days and drop quickly, depending on how well people respect the social distancing or are forced to. I am expecting over 9,000 will die in this first spike, with a peak number of deaths between 750 and 900 in one day. Sadly my predictions so far have all been correct or a little too optimistic.

That is the reality and that is why we are seeing the actions of our government that have never been seen outside World Wars before.

Three choices – or is it four?

To summarise the above, there were 3 choices available to the UK (and all other countries):

  1. Let COVID-19 burn through the population in 3 months. It would kill 2-3% of the population as the NHS collapsed and also anyone who needed medical treatment during that time would probably not get it. During the 3 months lots of people would have “bad ‘flu”. 80%  of survivors would be resistant to COVID-19 for now.
  2. Mitigate the impact by the measures implemented in stages during mid-March, reduce the impact a little and stretch the curve a little, and have 1.5-2.5% of the population die over 4 months. 70% of survivors {my guess!} would be resistant to COVID-19 for now.
  3. Suppress COVID-19, 10,000 dead and everyone in lock-down until “something changes”, which could be 18 months or more.  A tiny percent, maybe 5% {my guess} resistant to COVID-19.

The UK government chose option 3, after considering 2 for a while (and thus increasing the death count by, hmmm, 3,000 in that first spike).

The “something changes” in option 3 is that scientists create a vaccine for SARS-CoV-2, the virus underlying COVID-19, or we get a quick and reliable immunity test that allows those who have survived the disease to move about unrestricted. See further down in this post. Most of us stay in lock-down until “something changes”.

But this Imperial College paper has a solution 4:

  4. Suppress, then manage the outbreak in cycles:
    1. Suppress.
    2. Let the known bubble of cases come and deal with it.
    3. Once it has passed, relax (not remove!) the suppression rules to let business and normal life start up again.
    4. Monitor the number of COVID-19 cases coming into ICU.
    5. When it hits a threshold, go back to total lockdown and deal with the next bubble.
    6. Repeat.

It is a clever idea. No one wants to stay at home until a vaccine is created in 18 months. Economically, total lock-down until we have a vaccine would be a disaster. So varying the lock-down based on NHS demand indicators would allow some relief from the restrictions. But not back to normal.

Option 4 comes at a cost: more people will die each time you relax the lock-down, depending on what is allowed. Much of the rest of the paper details this plan and, based on the figures stated at the top of the report for how many people will abide by the rules, the different isolation strategies, and the key triggers (how many new COVID-19 ICU cases in a week) for increasing isolation levels, gives death rates varying from 8,700 to 120,000. This also takes into account a range of R values (how easily it spreads naturally), as there is still some uncertainty about this.

The paper makes one thing clear – we would need to maintain the isolation levels for suppression for 2 years – their cautious estimate of how long it will be until we have a widely available vaccine.

The best case is deaths creep up (after the initial surge we can no longer avoid) with very strong lockdown only relaxed at very low levels of ICU cases and deaths. I personally doubt very strongly that enough people will abide by the rules for long and, as people start ignoring them, others will feel “why should I play by the rules when they don’t”.

I do not have anything like the understanding of human nature needed to predict how people are going to react, so I won’t. But the figures being bandied around a few days ago, of keeping UK deaths to 8,000 or less, seem utter fantasy to me.

The “The hammer and the dance” paper…

Some of you may have come across “The hammer and the dance”, which is based on a paper by Tomas Pueyo on “Medium”, a home for science papers that have not been verified by anyone. I would not normally pay much attention to things published there, but several people have mentioned the paper or even linked to it. If you recognise the term, you will probably recognise the “dance” part as choice 4 above.

Context is paramount

Lots of numbers are being thrown about, but to understand the true impact of COVID-19 those numbers need to be interpreted in light of some general background.

Let’s start with the base rate of mortality. In the UK there were 541,589 deaths in 2018. That gives 9.3 deaths per 1,000 residents (see the Office for National Statistics article for this figure). Over the year that is about 1,483 deaths a day, from all causes. People keep insisting on comparing COVID-19 to influenza. I’ve struggled to get a definitive number of deaths due to influenza in the UK, but it seems to be between 8,000 and 17,000 a year. Let’s take 17,000 as a top estimate; that is about 46 a day.

(You may wonder why it is hard to say how many people die of influenza. Well, influenza kills people who are already seriously ill and likely to die anyway, and I believe not every death attributed to influenza is actually confirmed by testing to be influenza.)

Our key figures for the UK are about 1,483 deaths a day from all causes and 46 a day from influenza.

On the 21st March, 56 people in the UK died of COVID-19. That is more than influenza, and about 4% of the daily mortality rate. Bad, but nothing that significant. In Italy, 793 people died of COVID-19 on 21st March (and it looks like that might be the peak). Our UK figures for known diagnoses and deaths are following the Italian pattern very closely (for very good scientific reasons), just 2 weeks behind – 15 days to be more precise. In 15 days the death rate for COVID-19 is likely to be very similar to Italy’s, so, despite my hunch that the UK peak will be higher, let’s use Italy’s peak number:

  • 50% of the total death rate for everything in the UK.
  • And 17 times the death rate by ‘flu.

So COVID-19 is incredibly serious, but it could have been worse. It looks like, for a period at least, it will increase each country’s daily death rate by 50% and maybe more. But it is not killing a large percentage of the population.

I’ve seen some scare stories about this disease sending us back to the dark ages as it kills half the population of the world. Rubbish. It might stop the world population growing for a year.

Why will social distancing last 18 months?

No one is naturally immune to COVID-19 until they have had it. Let’s assume that once you have had it you are immune for several years, as you are with many other viral diseases (Influenza A is a special case, as it changes so fast and in a way that reduces the effectiveness of both vaccines and immunity via exposure).

We could let COVID-19 spread naturally or at least in a contained way – but it will overwhelm our health services as discussed, and 1-3% of us would die.

The other way is to create a vaccine, which gives immunity or partial immunity without having the disease (or maybe a very mild version of it). Vaccination works, it rid us of smallpox totally and, until the loony anti-vaxxer movement got going, it was vastly reducing measles, rubella and many other diseases.

But creating a vaccine that works is hard. Lots of biomedical scientists are working on it, and we might get lucky and someone comes up with a very effective vaccine that can be created in bulk, but even if we are lucky we are still talking months. (There is at least one early trial running – but that absolutely does not mean it will be available next month!)

Any vaccine has to be tested, proven effective, and shown not to cause harm itself.

All of this is why specialists in the field all say “18 months”. It’s a guess based on science and experience. It could take longer, it could be only 12 months, and it might be that an initial vaccine is only as effective as the yearly flu vaccine (the flu vaccine generally protects 40-60% of people – see this Oxford University paper).

We can test whether people currently have COVID-19; the test is accurate and relatively cheap. It checks for the RNA of the virus, an established diagnostic practice. Production of the test is being massively increased and improved, and we need that so we can better track the disease, accurately identify who has it, and put them in isolation. In the short term, wider testing will help a lot, and those countries that have gone in for huge testing efforts (South Korea and Singapore are examples) have done well in containing COVID-19.

The other tool we really need is a test for immunity, which usually means testing for the antibodies to a disease. Again, these tests take time to devise. If we could identify those who have had the disease (but were not tested) and are now immune, they would not need to isolate themselves, and a small but growing part of our population could return to normal. But we have no idea when such a test will be ready, how accurate it will be, how cheap it will be to run, etc.

Finally, scientists need to work out if immunity to COVID-19 is long-lasting, for how long, and if the immunity is strong or weak. We just do not know yet.

Until we have a vaccine (ideally), or the immunity test (it would really help) we have to suppress COVID-19 via social distancing etc.

Basically we are sleeping with a tiger. Best not wake her.

Disclaimer

All of what I put here is based on what is said by experts, scientists, epidemiologists. I’m just pulling some of it together. As I said in the previous blog, I am not an expert in any of this. I’ll make it clear when something is my opinion. I also want to highlight that I only look at sources that I feel are backed by good science. The only information I take from the government is official statistics on cases & deaths. I’m heartened that our government is now taking the spread and impact of COVID-19 more seriously but I remain angry that the experts told them what was coming weeks ago and they were slow to act, putting business concerns before lives.

Any mistakes in this blog post are mine. There are bound to be a couple.

I would love to hear about sources of information you feel are good. I had several excellent sources pointed out to me after my last post, including being corrected on a couple of counts – which I am very happy about.

However, I will probably ignore anything based on rumour or anecdote. Ginger & Garlic are not going to boost your immune system and protect you, quinine is almost certainly not a magic protector. If you have a peer reviewed article in a reputable journal or the support of a respected epidemiologist to back those opinions, then let me know.