Oakies Blog Aggregator

Why I don’t want my presentations recorded!

I was on Twitter a couple of days ago and I mentioned my preference not to be recorded when I’m presenting. That sparked a few questions, so I said I would write a blog post about it. Here it is.

This is a bit of a stream of consciousness, so forgive me if I ramble.

The impact on me!

The primary reason I don’t like being recorded is it has a big impact on me.

I’ve said many times, presenting is not natural for me. I’m very nervous about doing it. I have to do a lot of preparation before an event to try to make it look casual, and almost conversational. It takes a rather large toll on me personally, invading every part of my life for weeks before it happens, and pretty much ruining the day(s) immediately before the event. In my head it’s going to be a complete disaster, and the public humiliation is going to be that much worse because I’m an Oracle ACE and Groundbreaker Ambassador, so I must clearly think I’m the shit, yet I can’t even present and don’t have a clue what I’m talking about. Classic impostor syndrome stuff.

That’s “normal” for me and conferences, which is why I nearly always get a post-conference crash, because of the relief it’s over. But it goes into overdrive if I know the session is going to be recorded, because in my head there will be a permanent record of my screw up.

I have been recorded before, but fortunately not on the sessions where I’ve screwed up… Yet… I don’t think… Recently I’ve decided that I will probably pull out of any event where I’m being recorded, as I can’t keep putting myself through that anymore.

There are other people that will happily fill the conference slot, so me not being there is no big deal.

Editorial control

When I write an article, I constantly go back and revisit things. If my opinion changes, if I learn something new, or if I just don’t like the way I explained something, I will rewrite it. I have full control of the content.

When I record a YouTube video I edit it, making sure it contains what I want it to contain. YouTube won’t let you do much in the way of editing a video once it’s posted, but you can make minor changes to the timeline. Even so, if something really annoyed me I could delete it, re-edit it and post it again. Yes I would lose all the views and comments, but ultimately I can do that if I want.

When a user group records a presentation, you no longer have any control of that content. If your opinion changes, or it contains some really dumb stuff, it is there for life. I know nothing is lost on the internet, but at least I should be able to control the “current version” of the content.

I very rarely write for other publications. I like to keep control of my content, so I can decide what to do with it. A lot of this is a throw-back to the previous point about my insecurities, but that’s how I feel about it, and why should I have to compromise about my content?

It’s my content!

Following on from the previous point, it is my content. I wrote it. I rehearsed it. I presented it. And most importantly, I wasn’t being paid to present it! Why should a user group now have control of that content?

Karen López (@datachick) recently posted a really interesting tweet.

“What would you think about an organization who held an event and you spoke at it for free. You signed an agreement to allow distribution to attendees, but they are now selling your content as part of a subscription that you are getting no compensation for?”

@datachick

I’m not saying this is what user groups are planning, but it’s certainly something some might try, now that times are getting harder than usual.

I’m sorry if this sounds really selfish, but I think I’m doing enough for the community and user groups, without giving them additional product to sell. I know a lot of user groups find finance difficult, but in the current online model the financial situation is very different. There aren’t any buildings to hire or people to feed.

The audience matters!

My presentation style varies depending on the audience.

If I present in the UK I tend to speak faster and swear a bit. Similar with Australia. When I present in other countries I tend to tone down my language, as some places are really uptight about expletives.

In some countries where English is a second or third language, I slow down a lot and remove some content from the session, because I know there will be a larger number of people who will struggle to keep up. Maybe I’ll miss out a couple of anecdotes, so I can speak more slowly. If there is live translation I have to go a lot slower.

I remember seeing one recording of me presenting with live translation and I sounded really odd, as I was having to present so slowly for the live translation to work. It was kind-of shocking for me to see it back, and I would prefer people not see that version of the talk, as it doesn’t represent me. It’s “adjusted me” to suit the circumstance.

Other things…

OK. Let’s assume other speakers are not self-obsessed control freaks like me for a second…

It’s possible some people would prefer to be selective about what gets recorded. For example, the first time I do a talk I really don’t know how it will turn out. That’s different to the 10th time I give the same talk. For a new talk I doubt I would feel happy about it being recorded, even if I were generally cool with the concept. I may feel better about recording a talk I have done a few times, having had time to adjust and improve it. I think of this like comedians, who go on tour and constantly change their material based on how it works with the audience. At the end of a tour they record their special, only using the best bits. Then it’s time to start preparing for the next tour. I suspect many comedians would be annoyed at being recorded on the first day of a tour. Same idea…

I think recording sessions could be off-putting for new speakers. When you are new to the game there is enough to worry about, without having to think about this too. Maybe other people aren’t as “sensitive” as me, but maybe they are.

I don’t like to be in pictures and videos. It’s just not my thing. I rarely put myself into my videos on YouTube. I’m sure there would be other speakers who would prefer to be judged by what they say, rather than how they look.

I used to be concerned that if someone recorded my session and put it on YouTube, nobody would come to my future sessions on the same subject. I actually don’t think this is a real problem. It seems the audience for blog posts, videos and conferences is still quite different. Yes, there is some crossover, but there is also a large group of people that gravitate to their preferred medium and stick with it.

But what about…

Look, I really do know what the counter arguments to this are.

  • Some people can’t get to your session because of an agenda clash, and they would like to watch it later.
  • This gives the user group members a resource they can look back at to remind themselves what you said.
  • This is a resource for existing user group members who couldn’t make it to the event.
  • For paid events, the attendees are paying money, so they have the right to have access to recordings. (but remember, the speakers are not being paid!)

I know all this and more. I am sorry if people don’t like my view on this. I really am, and I’m happy not to be selected to speak at an event. It really doesn’t bother me. Feel free to pick someone else that fits into your business model. That is fine by me. It really is.

Conclusion

Maybe I’m the only person that feels this way. Maybe other people feel the same, but don’t feel they have a loud enough voice to make a big deal out of it.

At the end of the day, it’s my content and I should have the right to decide if I’m happy about it being recorded or not. I believe conferences should make recording optional, and I’ll opt out. If people believe recording should be mandatory, that’s totally fine. It’s just unlikely I will be involved.

I’m sorry if you don’t like my opinion, but that’s how I feel at this point and it’s my choice. My attitude may change in future. It may not. Either way, it’s still my choice!

Cheers

Tim…

Update: This is not because of any recent conferences. Just thought I’d better add that in case someone thought it was. I’ve been asking events not to record me for a while now and it’s not been a drama. In a recent message for a conference later in the year I was asked to explicitly confirm my acceptance of recording and publishing rights, which is why I mentioned it on Twitter, which then prompted the discussion. Sorry to any recent events if you thought you were the catalyst for this. You weren’t. Love you!


Developer Live wrap up

The Developer Live event for database has concluded. Thank you to the (almost) 2000 people that attended my talk across the USA, Europe and APAC timezones! I very much appreciate you giving up your time to attend the session.

Whilst we were doing some Q&A I tried to add as many useful links into the chat as I could, to help people with onward learning on SQL performance and database performance in general. A few of you then asked if I could publish that chat so as not to lose that information, so here it is below.

From Me to Everyone:  04:06 PM
Hi everyone,

It’s Connor here. Chris Saxon and I are monitoring the Q&A channel, so fire off your questions and we’ll answer them right here. Anything we can’t answer, we’ll tackle at a later date.

Sessions are all recorded and will be made available in the next day or so.
Check back on
https://developer.oracle.com/developer-live/database for the recordings.

From Me to Everyone:  04:36 PM

If you want more details on PL/SQL, I had a chat with ACE Director Patrick Barel about PL/SQL, which you can check out here on my YouTube channel https://www.youtube.com/watch?v=7xcQtxsM7DE
There are more videos on parsing and avoiding the pain of it at my YouTube channel https://www.youtube.com/c/ConnorMcDonaldOracle

If you want a fun way of explaining array fetching to your development team, here’s a funny video on a real world example https://www.youtube.com/watch?v=-Mlxdn5osvs

If you want more info on some of these more sophisticated SQL statements, here’s a playlist on analytic SQL https://www.youtube.com/watch?v=0cjxYMxa1e4&list=PLJMaoEWvHwFIUwMrF4HLnRksF0H8DHGtt

See the full example of how smart the database optimizer is here https://www.youtube.com/watch?v=s1MDfejM0ys

How to use FORCE_MATCHING_SIGNATURE on V$SQL. Check out my blog post on this
https://connor-mcdonald.com/2016/05/30/sql-statements-using-literals/

cursor_sharing can also be used to improve DML (insert/update/delete) performance. Here’s an example https://connor-mcdonald.com/2019/05/17/hacking-together-faster-inserts/

More information on bind peeking https://www.youtube.com/watch?v=MnISfllmK74 and some of the problems it could cause in earlier versions https://www.youtube.com/watch?v=ynLF6S15J5M

And here is how adaptive cursor sharing improves things https://www.youtube.com/watch?v=m9sjq9TDuTw

If you DO find SQL complex, then here’s a great resource to get up to speed with database development using SQL https://asktom.oracle.com/databases-for-developers.htm

Once again, thanks for attending, and if you missed the session, here it is below

Expert Advice: How to Build an Accessible Education Website on WordPress.com

Learn the basics and best practices of building an accessible and inclusive website for your classroom, school, or class assignment. This is a free, one-hour live webinar open to all, but is especially geared toward educators, teachers, school webmasters, and students.

Date: Thursday, August 27, 2020
Time: 10:00 am PT | 12:00 pm CT | 1:00 pm ET | 17:00 UTC
Registration link: https://zoom.us/webinar/register/2715977718561/WN_RFyhYfGNTOikZxw4aAsMXA
Who’s invited: All are welcome, but this webinar is designed for stakeholders within education, including teachers, educators, school webmasters, students, and parents.

Melissa Silberstang and Fernando Medina are WordPress.com Happiness Engineers and accessibility advocates who have helped thousands of people build websites on WordPress.com. They’ll help you understand what makes a great, accessible website, and what customizations to look out for as you build.

During the last 10-15 minutes of the webinar, attendees will be able to ask questions during the live Q&A portion.

We know you’re busy, so if you can’t make the live event, you’ll be able to watch a recording of the webinar on our YouTube channel.

Live attendance is limited, so be sure to register early. We look forward to seeing you on the webinar!

Explain Rewrite

This is one of those notes that I thought I’d written years ago. It answers two questions:

  • what can I do with my materialized view?
  • why isn’t the optimizer using my materialized view for query rewrite?

I’ve actually supplied an example of code to address the first question as a throwaway comment in a blog that dealt with a completely different problem, but since the two questions above go together, and the two answers depend on the same package, I’m going to repeat the first answer.

The reason for writing this note now is that the question “why isn’t this query using my materialized view” came up on the Oracle Developer community forum a few days ago – and I couldn’t find the article that I thought I’d written.

Note: a couple of days after I started drafting this note Franck Pachot tweeted the links to a couple of extensive posts on materialized views that included everything I had to say (and more) about “what can my materialized view do”. Fortunately he didn’t get on to the second question – so I decided to publish the whole of my note anyway, as the two questions go well together.

The key feature is the dbms_mview package, and the two procedures (both overloaded) explain_mview() and explain_rewrite(). To quote the 12.2 “PL/SQL Supplied Packages” manual page, the first procedure:

“explains what is possible with a materialized view or potential [ed: my emphasis] materialized view”

note particularly that the materialized view doesn’t need to have been created before you run the package – the second procedure:

“explains why a query failed to rewrite or why the optimizer chose to rewrite a query with a particular materialized view or materialized views”.

Here’s the relevant extract from the SQL*Plus describe of the package –

PROCEDURE EXPLAIN_MVIEW
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 MV                             VARCHAR2                IN
 STMT_ID                        VARCHAR2                IN     DEFAULT

PROCEDURE EXPLAIN_MVIEW
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 MV                             CLOB                    IN
 STMT_ID                        VARCHAR2                IN     DEFAULT

PROCEDURE EXPLAIN_MVIEW
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 MV                             VARCHAR2                IN
 MSG_ARRAY                      EXPLAINMVARRAYTYPE      IN/OUT

PROCEDURE EXPLAIN_MVIEW
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 MV                             CLOB                    IN
 MSG_ARRAY                      EXPLAINMVARRAYTYPE      IN/OUT

PROCEDURE EXPLAIN_REWRITE
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 QUERY                          VARCHAR2                IN
 MV                             VARCHAR2                IN     DEFAULT
 STATEMENT_ID                   VARCHAR2                IN     DEFAULT

PROCEDURE EXPLAIN_REWRITE
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 QUERY                          CLOB                    IN
 MV                             VARCHAR2                IN     DEFAULT
 STATEMENT_ID                   VARCHAR2                IN     DEFAULT

PROCEDURE EXPLAIN_REWRITE
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 QUERY                          VARCHAR2                IN
 MV                             VARCHAR2                IN     DEFAULT
 MSG_ARRAY                      REWRITEARRAYTYPE        IN/OUT

PROCEDURE EXPLAIN_REWRITE
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 QUERY                          CLOB                    IN
 MV                             VARCHAR2                IN     DEFAULT
 MSG_ARRAY                      REWRITEARRAYTYPE        IN/OUT

As you can see there are 4 overloaded versions of explain_mview() and 4 of explain_rewrite(): both procedures take an SQL statement as input (parameter mv for explain_mview(), parameter query for explain_rewrite()) – which can be either a varchar2() or a CLOB – and both procedures supply an output which can be written to a table or to an “in/out” pl/sql array.

Two possible input options times two possible output options gives 4 overloaded versions of each procedure. In this note I’ll restrict myself to the first version of the two procedures – varchar2() input, writing to a pre-created table.

Here’s a simple demo script. Before we do anything else we need to call a couple of scripts in the $ORACLE_HOME/rdbms/admin directory to create the target tables. (If you want to do something a little clever you could tweak the scripts to create them as global temporary tables in the sys schema with a public synonym – just like the plan_table used in calls to explain plan.)

rem
rem     Script:         c_explain_mv.sql
rem     Author:         Jonathan Lewis
rem     Dated:          March 2002
rem

@$ORACLE_HOME/rdbms/admin/utlxmv.sql
@$ORACLE_HOME/rdbms/admin/utlxrw.sql

-- @$ORACLE_HOME/sqlplus/demo/demobld.sql

create materialized view dept_cost
refresh complete on demand
enable query rewrite
as
select 
        d.deptno,sum(e.sal) 
from 
        emp e,dept d
where 
        e.deptno = d.deptno
group by 
        d.deptno
;
 
set autotrace traceonly explain
 
select 
        d.deptno,sum(e.sal) 
from 
        emp e,dept d
where 
        e.deptno = d.deptno
and     d.deptno=10
group by 
        d.deptno
;

set autotrace off

/*

Execution Plan
----------------------------------------------------------
Plan hash value: 3262931184

------------------------------------------------------------------------------------------
| Id  | Operation                    | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |           |     1 |     7 |     2   (0)| 00:00:01 |
|*  1 |  MAT_VIEW REWRITE ACCESS FULL| DEPT_COST |     1 |     7 |     2   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("DEPT_COST"."DEPTNO"=10)

*/

My original script is rather old and in any recent version of Oracle the Scott tables are no longer to be found in demobld.sql, so I’ve added them at the end of this note.

After creating the emp and dept tables I’ve created a materialized view and then enabled autotrace to see if a suitable query will use the materialized view – and as you can see from the execution plan the materialized view can be used.

So let’s do a quick analysis of the view and the rewrite:


column audsid new_value m_audsid
select sys_Context('userenv','sessionid') audsid from dual;

begin
        dbms_mview.explain_mview(
--              mv      => 'DEPT_COST',
                mv      => q'{
                        create materialized view dept_cost
                        refresh complete on demand
                        enable query rewrite
                        as
                        select 
                                d.deptno,sum(e.sal) 
                        from 
                                emp e,dept d
                        where 
                                e.deptno = d.deptno
                        group by 
                                d.deptno
                }',
                stmt_id         => '&m_audsid'
        );
end;
/
 
set linesize 180

column cap_class noprint
column capability_name format a48
column related_text format a15
column short_msg format a90

break on cap_class skip 1
 
select
        substr(capability_name,1,3) cap_class,
        capability_name, possible, related_text, msgno, substr(msgtxt,1,88) short_msg
from
        mv_capabilities_table
where
        mvname       = 'DEPT_COST'
and     statement_id = '&m_audsid'
order by
        substr(capability_name,1,3), related_num, seq
;


I’ve captured the session’s audsid because the mv_capabilities_table table and the rewrite_table table both have a statement_id column and the audsid is a convenient value to use to identify the most recent data you’ve created if you’re sharing the table. Then I’ve called dbms_mview.explain_mview() indicating two possible strategies (with one commented out, of course).

I could pass in a materialized view name as the mv parameter, or I could pass in the text of a statement to create a materialized view (whether or not I have previously created it). As you can see, I’ve also used a substitution variable to pass in my audsid as the statement id.

After setting up a few SQL*Plus format options I’ve then queried some of the columns from the mv_capabilities_table table, with the following result:


CAPABILITY_NAME                                  P RELATED_TEXT    MSGNO SHORT_MSG
------------------------------------------------ - --------------- ----------------------------------------
PCT_TABLE                                        N EMP             2068 relation is not a partitioned table
PCT_TABLE_REWRITE                                N EMP             2068 relation is not a partitioned table
PCT_TABLE                                        N DEPT            2068 relation is not a partitioned table
PCT_TABLE_REWRITE                                N DEPT            2068 relation is not a partitioned table
PCT                                              N

REFRESH_FAST_AFTER_ONETAB_DML                    N SUM(E.SAL)      2143 SUM(expr) without COUNT(expr)
REFRESH_COMPLETE                                 Y
REFRESH_FAST                                     N
REFRESH_FAST_AFTER_INSERT                        N TEST_USER.EMP   2162 the detail table does not have a materialized view log
REFRESH_FAST_AFTER_INSERT                        N TEST_USER.DEPT  2162 the detail table does not have a materialized view log
REFRESH_FAST_AFTER_ONETAB_DML                    N                 2146 see the reason why REFRESH_FAST_AFTER_INSERT is disabled
REFRESH_FAST_AFTER_ONETAB_DML                    N                 2142 COUNT(*) is not present in the select list
REFRESH_FAST_AFTER_ONETAB_DML                    N                 2143 SUM(expr) without COUNT(expr)
REFRESH_FAST_AFTER_ANY_DML                       N                 2161 see the reason why REFRESH_FAST_AFTER_ONETAB_DML is disabled
REFRESH_FAST_PCT                                 N                 2157 PCT is not possible on any of the detail tables in the materialized view

REWRITE                                          Y
REWRITE_FULL_TEXT_MATCH                          Y
REWRITE_PARTIAL_TEXT_MATCH                       Y
REWRITE_GENERAL                                  Y
REWRITE_PCT                                      N                 2158 general rewrite is not possible or PCT is not possible on any of the detail tables


20 rows selected.

If you want the complete list of possible messages, the msgno is interpreted from $ORACLE_HOME/rdbms/mesg/qsmus.msg (which is shared with the explain_rewrite() procedure). Currently (19.3) it holds a slightly surprising 796 messages. A call to “oerr qsm nnnnn” will translate the number to the message text.
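
For example, translating the first message we’ll see from explain_rewrite() later in this note looks something like this (exact formatting varies by version):

$ oerr qsm 1150
01150, 00000, "query did not rewrite"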

In a similar vein we can call dbms_mview.explain_rewrite(). Before we do so I’m going to do something that will stop the rewrite from taking place, then call the procedure:

update emp set ename = 'JAMESON' where ename = 'JAMES';
commit;

begin
        dbms_mview.explain_rewrite ( 
                query => q'{
                        select 
                                d.deptno,sum(e.sal) 
                        from 
                                emp e,dept d
                        where 
                                e.deptno = d.deptno
                        and     d.deptno=10
                        group by 
                                d.deptno
                        }',
                mv => null,
                statement_id => '&m_audsid'
        );
end;
/

column message format a110

select
        sequence, message 
from 
        rewrite_table
where
        statement_id = '&m_audsid'
order by
        sequence
/

You’ll notice that there’s an input parameter of mv – there may be cases where the optimizer has a choice of which materialized view to use to rewrite a query, and you could supply the name of a materialized view for this parameter to find out why Oracle didn’t choose the materialized view you were expecting. (In this case mv has to supply a materialized view name, not the text to create a materialized view.)
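
As a sketch, asking specifically about DEPT_COST would just mean replacing the null with the materialized view name (the output shown below is still from the mv => null call above):

begin
        dbms_mview.explain_rewrite ( 
                query        => q'{
                        select 
                                d.deptno,sum(e.sal) 
                        from 
                                emp e,dept d
                        where 
                                e.deptno = d.deptno
                        and     d.deptno=10
                        group by 
                                d.deptno
                        }',
                mv           => 'DEPT_COST',
                statement_id => '&m_audsid'
        );
end;
/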

In my example I get the following results:

  SEQUENCE MESSAGE
---------- --------------------------------------------------------------------------------------------------------------
         1 QSM-01150: query did not rewrite
         2 QSM-01106: materialized view, DEPT_COST, is stale with respect to some partition(s) in the base table(s)
         3 QSM-01029: materialized view, DEPT_COST, is stale in ENFORCED integrity mode

3 rows selected.

The 2nd and 3rd messages are a bit of a clue – so let’s change the session’s query_rewrite_integrity parameter to stale_tolerated and re-execute the procedure; which gets us to:


  SEQUENCE MESSAGE
---------- --------------------------------------------------------------------------------------------------------------
         1 QSM-01151: query was rewritten
         2 QSM-01033: query rewritten with materialized view, DEPT_COST
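
For reference, the session change referred to above is simply the following, run before re-executing the explain_rewrite() call:

alter session set query_rewrite_integrity = stale_tolerated;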

Footnote:

You’ll notice that I’ve used quote-escaping in my inputs for the materialized view definition and query. This is a convenience that lets me easily cut and paste a statement into the scripts – or even use the @@script_name mechanism – writing a query (without a trailing slash, semi-colon, or any blank lines) in a separate file and then citing the script name between the opening and closing quote lines, e.g.

begin
        dbms_mview.explain_rewrite ( 
                query        => q'{
@@test_script
                                }',
                mv           => null,
                statement_id => '&m_audsid'
        );
end;
/

It’s worth noting that the calls to dbms_mview.explain_mview() and dbms_mview.explain_rewrite() don’t commit the rows they write to their target tables, so you ought to include a rollback; at the end of your script to avoid any confusion as you test multiple views and queries.

Footnote:

Here’s the text of the demobld.sql script as it was when I copied it in the dim and distant past. I’m not sure which version it came from – but I don’t think it’s the original v4/v5 release, which I’m fairly sure had only the emp and dept tables. (For reference, and for hash joins, there used to be a MOS note explaining that EMP was the big table and DEPT was the small table ;)


CREATE TABLE EMP (
        EMPNO           NUMBER(4) NOT NULL,
        ENAME           VARCHAR2(10),
        JOB             VARCHAR2(9),
        MGR             NUMBER(4),
        HIREDATE        DATE,
        SAL             NUMBER(7, 2),
        COMM            NUMBER(7, 2),
        DEPTNO          NUMBER(2)
);

insert into emp values
        (7369, 'SMITH',  'CLERK',     7902,
        to_date('17-DEC-1980', 'DD-MON-YYYY'),  800, NULL, 20);

insert into emp values
        (7499, 'ALLEN',  'SALESMAN',  7698,
        to_date('20-FEB-1981', 'DD-MON-YYYY'), 1600,  300, 30);

insert into emp values
        (7521, 'WARD',   'SALESMAN',  7698,
        to_date('22-FEB-1981', 'DD-MON-YYYY'), 1250,  500, 30);

insert into emp values
        (7566, 'JONES',  'MANAGER',   7839,
        to_date('2-APR-1981', 'DD-MON-YYYY'),  2975, NULL, 20);

insert into emp values
        (7654, 'MARTIN', 'SALESMAN',  7698,
        to_date('28-SEP-1981', 'DD-MON-YYYY'), 1250, 1400, 30);

insert into emp values
        (7698, 'BLAKE',  'MANAGER',   7839,
        to_date('1-MAY-1981', 'DD-MON-YYYY'),  2850, NULL, 30);

insert into emp values
        (7782, 'CLARK',  'MANAGER',   7839,
        to_date('9-JUN-1981', 'DD-MON-YYYY'),  2450, NULL, 10);

insert into emp values
        (7788, 'SCOTT',  'ANALYST',   7566,
        to_date('09-DEC-1982', 'DD-MON-YYYY'), 3000, NULL, 20);

insert into emp values
        (7839, 'KING',   'PRESIDENT', NULL,
        to_date('17-NOV-1981', 'DD-MON-YYYY'), 5000, NULL, 10);

insert into emp values
        (7844, 'TURNER', 'SALESMAN',  7698,
        to_date('8-SEP-1981', 'DD-MON-YYYY'),  1500,    0, 30);

insert into emp values
        (7876, 'ADAMS',  'CLERK',     7788,
        to_date('12-JAN-1983', 'DD-MON-YYYY'), 1100, NULL, 20);

insert into emp values
        (7900, 'JAMES',  'CLERK',     7698,
        to_date('3-DEC-1981', 'DD-MON-YYYY'),   950, NULL, 30);

insert into emp values
        (7902, 'FORD',   'ANALYST',   7566,
        to_date('3-DEC-1981', 'DD-MON-YYYY'),  3000, NULL, 20);

insert into emp values
        (7934, 'MILLER', 'CLERK',     7782,
        to_date('23-JAN-1982', 'DD-MON-YYYY'), 1300, NULL, 10);

CREATE TABLE DEPT(
        DEPTNO  NUMBER(2),
        DNAME   VARCHAR2(14),
        LOC     VARCHAR2(13) 
);

insert into dept values (10, 'ACCOUNTING', 'NEW YORK');
insert into dept values (20, 'RESEARCH',   'DALLAS');
insert into dept values (30, 'SALES',      'CHICAGO');
insert into dept values (40, 'OPERATIONS', 'BOSTON');

CREATE TABLE BONUS
        (
        ENAME VARCHAR2(10)      ,
        JOB VARCHAR2(9)  ,
        SAL NUMBER,
        COMM NUMBER
);

CREATE TABLE SALGRADE
      ( GRADE NUMBER,
        LOSAL NUMBER,
        HISAL NUMBER 
);

INSERT INTO SALGRADE VALUES (1,700,1200);
INSERT INTO SALGRADE VALUES (2,1201,1400);
INSERT INTO SALGRADE VALUES (3,1401,2000);
INSERT INTO SALGRADE VALUES (4,2001,3000);
INSERT INTO SALGRADE VALUES (5,3001,9999);
COMMIT;

 

JDBC & the Oracle Database: if you want Transparent Application Failover you need the OCI driver

This is the second article in the series of JDBC articles I’m about to publish. It covers an old technology that’s surprisingly often found in use: Transparent Application Failover (TAF). It’s a client side feature for clustered Oracle databases allowing sessions (and to some extent, select statements) to fail over to a healthy node from a crashed instance.

I would wager a bet that you probably don’t want to use Transparent Application Failover in (new) Java code. There are many better ways to write code these days. More posts to follow with my suggestions ;)

Well, then, why bother writing this post? Simple! There is a common misconception about the requirement: since Transparent Application Failover relies on the Oracle client libraries, you cannot use it with the thin driver. The little tool I have written demonstrates exactly that. And besides, I had the code more or less ready, so why not publish it?

Prerequisites for running the demo code

My Java code has been updated to work with Oracle 19c. I am also using an Oracle 19c RAC database as the back-end.

Preparing the client

Since I am going to use the Secure External Password Store again you need to prepare the client as per my previous article. The only difference this time is that I need a sqlnet.ora file in my client’s tns directory. Continuing the previous example I created the file in /home/martin/tns, and it contains the following information:

WALLET_LOCATION =
  (SOURCE =(METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = /home/martin/tns)
    )
  )

SQLNET.WALLET_OVERRIDE = TRUE  

When you are creating yours, make sure to update the path according to your wallet location.

Since I’m connecting to a RAC database I need to change the entry in tnsnames.ora as well. This requires the application specific service to be created and started, a bit of a chicken and egg problem. The setup of the database service is explained in the next section. Here is my tnsnames.ora entry:

taf_svc =
 (DESCRIPTION = 
  (ADDRESS = (PROTOCOL = tcp)(HOST = rac19pri-scan.example.com)(PORT = 1521))
  (CONNECT_DATA=
    (SERVICE_NAME = taf_svc) 
     (FAILOVER_MODE=(TYPE=select)(METHOD=basic)))
  )

Note that setting failover_mode in the tnsnames.ora entry isn’t the preferred way to set TAF properties. It’s better to do that at the service level, see below.

Preparing the database service

Oracle strongly discourages the use of the default service name except for DBA tasks. As I’m a good citizen I’ll create a separate service for my little TAF application.

You need to connect to the database server and use srvctl add service to create a service. I used the following properties:

[oracle@rac19pri1]$ srvctl add service -db NCDB -service taf_svc \
-role primary -policy automatic -clbgoal long \
-failovermethod basic -failovertype session \
-preferred "NCDB1,NCDB2"

You have to set at least the preferred nodes and the connect time load balancing goal. If you want to ensure anyone connecting to the TAF service actually makes use of it regardless of the tnsnames setting, you also need to set failovertype and failovermethod.
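
Starting and checking the service, using the same database and service names as above, is along these lines:

[oracle@rac19pri1]$ srvctl start service -db NCDB -service taf_svc
[oracle@rac19pri1]$ srvctl status service -db NCDB -service taf_svc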

Don’t forget to start the service after you’ve created it! Once the service is created and running, let’s try to use it to see if all TAF properties are available. To do so, I connected to taf_svc in my first session. I then checked the status after connecting as SYSTEM in a second session:

SQL> select inst_id, failover_method, failover_type, failed_over, service_name
  2  from gv$session where username = 'MARTIN'
  3  /

   INST_ID FAILOVER_M FAILOVER_TYPE FAI SERVICE_NAME
---------- ---------- ------------- --- ---------------
         1 BASIC      SESSION       NO  taf_svc

SQL> show user
USER is "SYSTEM"

Running the code

The complete code is available on github in my java-blogposts repository. After downloading it to your machine, change into the taf-demo-1 directory and trigger the compile target using mvn compile.
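
That boils down to something like the following (the path is an assumption – adjust it to wherever you downloaded the repository):

$ cd java-blogposts/taf-demo-1
$ mvn compile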

With the code built, you can run it easily on the command line. First off, try the thin driver.

JDBC Thin Driver

I used this command to start the execution using the thin driver:

java -cp /home/martin/java/libs/ojdbc10.jar:/home/martin/java/libs/oraclepki.jar:/home/martin/java/libs/osdt_cert.jar:/home/martin/java/libs/osdt_core.jar:target/taf-example-1-0.0.1-SNAPSHOT.jar de.martin.tafDemo.App thin

This should connect you to the database, but not with the desired effect.

About to start a demonstration using Transparent Application Failover
Driver Name: Oracle JDBC driver
Driver Version: 19.7.0.0.0
Connection established as MARTIN


inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver 

As you can easily spot, there isn’t any trace of TAF in the output. Not surprisingly, the code crashes as soon as instance 2 fails:

inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00049 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00049 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
SQLException while trying to get the session information: java.sql.SQLRecoverableException: No more data to read from socket
[martin@appserver taf-demo-1]$

There are potentially ways around that, but I have yet to see an application implement them. In other words, in most cases the following equation holds: instance crash = application crash.

JDBC OCI driver

Running the same code using the OCI driver should solve that problem. You will need an Oracle 19.7.0 client installation for this to work, and you have to set LD_LIBRARY_PATH as well as TNS_ADMIN in the shell:

$ export TNS_ADMIN=/home/martin/tns
$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path/to/client/installation

Once these are set, start the application:

$ java -cp /home/martin/java/libs/ojdbc10.jar:/home/martin/java/libs/oraclepki.jar:/home/martin/java/libs/osdt_cert.jar:/home/martin/java/libs/osdt_core.jar:target/taf-example-1-0.0.1-SNAPSHOT.jar de.martin.tafDemo.App oci
About to start a demonstration using Transparent Application Failover
Driver Name: Oracle JDBC driver
Driver Version: 19.7.0.0.0
Connection established as MARTIN


inst_id: 1 sid: 00035 failover_type: SELECT     failover_method: BASIC      failed_over: NO    module: TAF Demo action: oci driver
inst_id: 1 sid: 00035 failover_type: SELECT     failover_method: BASIC      failed_over: NO    module: TAF Demo action: oci driver
inst_id: 1 sid: 00035 failover_type: SELECT     failover_method: BASIC      failed_over: NO    module: TAF Demo action: oci driver
inst_id: 1 sid: 00035 failover_type: SELECT     failover_method: BASIC      failed_over: NO    module: TAF Demo action: oci driver
inst_id: 1 sid: 00035 failover_type: SELECT     failover_method: BASIC      failed_over: NO    module: TAF Demo action: oci driver
inst_id: 1 sid: 00035 failover_type: SELECT     failover_method: BASIC      failed_over: NO    module: TAF Demo action: oci driver
inst_id: 1 sid: 00035 failover_type: SELECT     failover_method: BASIC      failed_over: NO    module: TAF Demo action: oci driver
inst_id: 1 sid: 00035 failover_type: SELECT     failover_method: BASIC      failed_over: NO    module: TAF Demo action: oci driver
inst_id: 1 sid: 00035 failover_type: SELECT     failover_method: BASIC      failed_over: NO    module: TAF Demo action: oci driver
inst_id: 2 sid: 00275 failover_type: SELECT     failover_method: BASIC      failed_over: YES   module: java@appserver (TNS V1-V3) action: null
inst_id: 2 sid: 00275 failover_type: SELECT     failover_method: BASIC      failed_over: YES   module: java@appserver (TNS V1-V3) action: null
inst_id: 2 sid: 00275 failover_type: SELECT     failover_method: BASIC      failed_over: YES   module: java@appserver (TNS V1-V3) action: null
^C[martin@appserver taf-demo-1]$ 

You should notice a seamless transition from node 1 to node 2. As you can imagine this is the simplest example, but it should convey the message as intended. For more details about TAF and RAC, including the use of “select” failover and DML support, I suggest you have a look at Pro Oracle 11g RAC on Linux, chapter 11.

Summary

Contrary to what one might think, using TAF with the JDBC thin driver doesn’t protect a session from instance failure. The only way to protect a (fat client) session is to make use of the Oracle Call Interface.

Then again, TAF is a very mature solution and there might be better ways of working with RAC. Connection pools based on Oracle’s own Universal Connection Pool look like the way forward. Newer technologies, such as (Transparent) Application Continuity, are better suited to meet today’s requirements.

Video : Real-Time Statistics in Oracle Database 19c

In today’s video we’ll give a demonstration of Real-Time Statistics in Oracle Database 19c.

This video is based on the following article.

This is essentially a follow-on from the previous video and article.

The star of today’s video is Ludovico Caldara, who is rocking a rather “different” look. I’ll leave it to you to decide if it’s an improvement or not!

Faster DISTINCT operations in 19c

If I gave you a telephone book and asked you to tell me how many distinct street names are present in the book, then the most likely thing you would do is … wave your mobile phone at me and ask what a “telephone book” is Smile. But assuming you’re old enough to remember telephone books, you’ll probably tackle the task by sorting the entire set in street name order, then working your way down the list and incrementing your count of distinct street names every time the name changes.

In database terms, a little demo of that approach is as below



SQL> create table telephone_book
  2  as select 'Street'||trunc(dbms_random.value(1,30)) street
  3  from dual
  4  connect by level <= 100;

SQL> select street
  2  from   telephone_book
  3  order by 1;

STREET
----------------------------------------------
Street1             == 1st street
Street1
Street10            == 2nd street 
Street10
Street11            == 3rd street 
Street11
Street11
Street11
Street11
Street11
...
...
Street8
Street9             == 28th street
Street9
Street9
Street9

100 rows selected.

which results in our 28 distinct streets.

But what if we were not allowed to sort the data? Perhaps a more realistic question is – what if it was ridiculously prohibitive in terms of time and effort to sort the data? I don’t know about you, but I have better things to do with my weekend than sort the telephone book! Smile

From my demo above, I can see that the highest possible number of distinct streets I could have is 30. So rather than sort the data, I can create a simple 30-character string, in which each character represents whether the corresponding street has been encountered. My string would start off as follows (the hyphens added for clarity):

NNNNNNNNNN-NNNNNNNNNN-NNNNNNNNNN

and as I read the telephone book, I simply change the string for each street I encounter. If the street for the first row is “Street7”, I change my string to be:

NNNNNNYNNN-NNNNNNNNNN-NNNNNNNNNN

If the second row is “Street22” then the string becomes:

NNNNNNYNNN-NNNNNNNNNN-NYNNNNNNNN

and so on until I have read the entire table. At the end of the exercise, I just count the number of “Y” and that is the distinct count of streets. I never had to sort the data. Here’s a simple code demo of that process:


SQL> set serverout on
SQL> declare
  2    bits varchar2(32) := rpad('N',32,'N');
  3    street_no int;
  4  begin
  5    for i in ( select rownum idx, t.* from telephone_book t )
  6    loop
  7      street_no := to_number(ltrim(i.street,'Street'));
  8      bits := substr(bits,1,street_no-1)||'Y'||substr(bits,street_no+1);
  9      dbms_output.put_line('After row '||i.idx||' map='||bits);
 10    end loop;
 11    dbms_output.put_line('Final='||bits);
 12    dbms_output.put_line('ndv='||(length(bits)-length(replace(bits,'Y'))));
 13  end;
 14  /
After row 1 map=NNNNNNNNNNNNNNNNNNNNNYNNNNNNNNNN
After row 2 map=NNNNNNNNNNNNNNNNNNYNNYNNNNNNNNNN
After row 3 map=NNNNNNNNYNNNNNNNNNYNNYNNNNNNNNNN
After row 4 map=NNNNNNNNYNNNNYNNNNYNNYNNNNNNNNNN
...
...
After row 97 map=YYYYYYYYYYYYYYYYYNYYYYYYYYYYYNNN
After row 98 map=YYYYYYYYYYYYYYYYYNYYYYYYYYYYYNNN
After row 99 map=YYYYYYYYYYYYYYYYYNYYYYYYYYYYYNNN
After row 100 map=YYYYYYYYYYYYYYYYYNYYYYYYYYYYYNNN
Final=YYYYYYYYYYYYYYYYYNYYYYYYYYYYYNNN
ndv=28

PL/SQL procedure successfully completed.

and we can verify the result with a standard SQL query:


SQL> select count(distinct street)
  2  from   telephone_book;

COUNT(DISTINCTSTREET)
---------------------
                   28

Of course, such a simple solution masks a lot of complexity in implementing something like this for an arbitrary set of data.

  • How do we know the upper limit on the potential number of distinct rows?
  • How do we rapidly count the number of “Y” or “hits” once we have scanned the data?
  • Have we just shifted the problem to an enormous memory structure?

You’ll be reassured to know that a lot of thought has gone into tackling these issues, turning the simple demo above into a genuinely robust implementation within version 19c of the database. Many queries requiring distinct counts can now be satisfied without an expensive sort.

Here’s the video walking through exactly what we’ve done in 19c, and what some of the benefits are:

Oracle Autonomous JSON Database (AJD) : The Big Reveal


The Autonomous JSON Database (AJD) was announced during the Oracle Developer Live (#OracleDevLive) event last night. This was accompanied by a blog post announcement here.

I was on a briefing the night before where we were told about this announcement in advance. Later I found out the service had been live since Tuesday, but they were waiting for this event for the big reveal. As soon as I knew it was live I fired up an instance on my free tier account, but I had to wait for the announcement before I released the article. You can see what I tried here.

If it looks familiar, that’s because it is. The Autonomous JSON Database is essentially an Autonomous Transaction Processing (ATP) instance with some restrictions, which you get to run for less money. You can “convert” it to full ATP at the click of a button if you want to. Obviously, the price changes then.

If you are considering using the Autonomous JSON Database service you will need to learn more about SODA (Simple Oracle Document Access). I’ve written a few things about this over the years.

There are SODA APIs for a bunch of languages. You can do all of this on-prem using Oracle REST Data Services (ORDS), but it comes ready to go on AJD and ATP.
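
To give a flavour, here’s a sketch of the REST variant of SODA exposed through ORDS: one call creates a collection and another inserts a JSON document into it. The host, schema and credentials below are placeholders, not real endpoints.

# create a collection called "fruit"
curl -X PUT -u myuser:mypassword https://myhost/ords/myschema/soda/latest/fruit

# insert a JSON document into the collection
curl -X POST -u myuser:mypassword -H "Content-Type: application/json" \
     -d '{"name":"banana","quantity":42}' \
     https://myhost/ords/myschema/soda/latest/fruit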

So now it’s here, and it’s available on the Oracle Cloud Free Tier, what have you got to lose?

Cheers

Tim…


The Classic Editing Experience is Moving, Not Leaving

With the introduction of the Block editor, the WordPress.com Classic Editor was set for retirement at the beginning of June. We pushed that back a bit to make time for more changes that ease the transition to the Block editor — and now it’s time!

The WordPress editor empowers you to create pages and posts by layering multiple blocks on top of each other. It’s versatile, intuitive, and boasts exciting new features, including:

  • Over 100 content blocks available for publishing.
  • A growing collection of block patterns.
  • Dozens of beautiful built-in page templates.
  • Styles you can customize directly within the editor.

If you’d rather stick with the Classic editor experience — the one you used before we introduced the WordPress.com editor a few years ago — no worries. With the new and improved Classic block, you have the best of both editors: the flexibility and stability of the Block editor, and the Classic editor interface you know.

From August 11 on, all WordPress.com accounts will start to switch from the Classic editor to the new Block editor. It will happen in phases, and you’ll get an email to let you know to expect the change.

Here’s what you need to know if you’re a fan of the Classic editor experience.

Why the change?

There are exciting new features in the pipeline that require the new WordPress editor. It’s not technically possible to retrofit them into the older, Classic editor, and we want to make sure everyone can take advantage of them as they become available. With all WordPress.com users publishing with the Block editor, all WordPress.com users always have the latest and greatest.

Can I create simple blog posts the way I always have?

Yes, with the Classic block! It provides an editing experience that mimics the Classic editor — the same options and tools, in the same spot.

To use it, add a Classic block to your post or page, then add and edit both text and media right inside it.

Also ….

The Block editor has updates to bring in some of your favorite classic features, like a clean editing screen. The Block editor displays pop-up options and menus as you type — they give you lots of control, but you might not always want them visible over your content. Turn on Top toolbar mode to keep them pinned to the top of the screen. It’s a great way to experience the full flexibility of the block editor while still allowing distraction-free writing.

What about editing posts and pages already created in the Classic editor?

Many of you have lots of pages and posts already created and published with the Classic editor. Previously, editing them in the Block editor led to a lot of prompts asking you to convert the content to blocks. Now there’s a single “Convert to blocks” menu item to take care of it in one go.

(Screenshot: the “Convert to blocks” option in the block toolbar.)

You can use this button to upgrade your posts and pages to block-based content at your leisure.

Can I combine the Classic block with other blocks?

For the best editing experience, particularly if you use the mobile app to edit your posts, we recommend just having a single Classic block on each post or page.

But, moving everyone to the block editor gives you the best of both worlds. You can continue writing and editing some of your posts with the simple Classic interface — but when you want to experiment with more complex layouts and functionality you can create a new post and play with the power and flexibility of all the other blocks on offer. For example, have you ever wanted an easy way to show off your favorite podcast?


Look out for the email letting you know when to expect the Block editor switch! In the meantime, learn more about working with the Block editor and the Classic block.

WhatsApp: A New, Convenient Way for Your Customers to Contact You

The world is mobile, and your visitors and customers expect to be able to easily contact you using their mobile device. With WordPress.com’s new WhatsApp button, you can provide a one-click, secure way for people to open WhatsApp, with your phone number and a message pre-filled.

Insert the WhatsApp button with your phone number and a custom message pre-filled.

Adding the button is easy. In the block editor, create a new block and search for WhatsApp:

(Screenshot: searching for the WhatsApp block in the block editor.)

The WhatsApp button is available now to all WordPress.com sites on a Premium, Business, or eCommerce plan. You can upgrade your site to one of these plans, try it out for 30 days, and if you’re not satisfied with your upgrade we’ll grant you a full refund.

If you decide to cancel your paid plan after you’ve already accepted the free custom domain, the domain is yours to keep. We simply ask that you cover the costs for the domain registration.

We hope the WhatsApp button helps you connect with your customers and visitors in new ways. Give it a try today!