The Problem With Oracle : If a developer/user can’t do it, it doesn’t exist.

This post was inspired by two separate events that happened back to back.

Event 1

One of my colleagues sent me a list of cool features that were available in a cloud database service we may be purchasing. The vast majority of those features have been available in Oracle for over a decade, and we are currently licensed to use them. I would suggest the Oracle features were in fact the inspiration for their inclusion in this cloud database product.

I got a little on the defensive and explained this situation, and their reply was along the lines of, “Most of those features are DBA-focused, not user-focused”. That was also not 100% true, but I can understand where that assumption came from.

Now you could raise several arguments, including these.

  • The person in question should have known about this functionality.
  • I should have done a better job of promoting this functionality.
  • I could have done a better job of making the DBA-focused features available to a wider audience.

Regardless of all that, we are left in the position where Oracle and several other database engines are seen as DBA-focused tools, and not really inclusive of developers and users.

FYI: I think this cloud database product is probably the right choice for this specific use case, so this is not only about me being an Oracle fan-boy. It is my frustration at the reinvention of the wheel being touted as new. Happens all the time…

Level up your audit trigger game

A weekend audit

Often, if you see the words “weekend audit” in a non-technical sense, it means that awkward moment where you look at the state of your house/bedroom/bathroom/bookshelf/shed etc. and say to yourself, “Yup, it is time we had a weekend audit of all this crap.” :-)

But I had a different kind of weekend audit on the weekend just passed. I used some weekend time to write a generic audit facility for capturing changes to tables in your Oracle database. I was motivated to do this by a recent review of a very, very old AskTOM question from way back in 2000. Tom wrote a facility that wrote out column changes, but I was never really enamoured with the solution, in particular the multiplier effect on the number of calls you had to make, and the fact that the changes were all stored in generic VARCHAR2 columns. I recently updated the post due to a user request to handle timestamps and other data types, but I’ve never been a true fan of the code.

But as my parents always taught me – you can’t criticise unless you have a better proposal…so I built one! I think it is a better option for anyone wanting to capture changes via triggers.

Note – I’m also a fan of using Flashback Data Archive as a means of capturing audit changes (by combining it with the VERSIONS BETWEEN syntax). There’s a video at the bottom of this post showing you details on that. But I built my own using triggers because I wanted the flexibility to also satisfy the following (common) audit requirements.

  • A dedicated audit table for each table to be audited
  • A consolidated view of all audit actions undertaken
  • Not having to write a lot of code to implement auditing – it should be generated for me
  • API-level control over whether to capture inserts, and whether to capture both OLD/NEW images on update or just OLD (because the current table values always contain the NEW)
  • The ability to selectively disable/enable the trigger for a session (for example, for data maintenance) without impacting other sessions
  • Handling schema changes, such as new columns added to the source table

Here is my implementation, and you can get the code from GitHub.

The common metadata for all captured changes goes into a table called AUDIT_HEADER. I won’t describe each column here because they are pretty much self-explanatory, but also, in perhaps the last example you’ll see anywhere of someone using the database dictionary for what it is designed for, I have added COMMENT commands for all the tables and columns in the github repo. I always lament the sad state of affairs when people say “We don’t have a place to document our database schema…”. Er, um, yes you do… it’s called (drum roll for impact) the database!
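For example, here is a minimal sketch of the sort of thing that sits in the repo scripts (the exact comment text there will differ), plus how you would read it back out of the dictionary:

comment on table aud_util.audit_header is
  'Common metadata for each captured audit change';

comment on column aud_util.audit_header.dml is
  'DML type: I=insert, U=update, D=delete';

-- the comments are then queryable straight from the dictionary
select column_name, comments
from   dba_col_comments
where  owner = 'AUD_UTIL'
and    table_name = 'AUDIT_HEADER';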


SQL> desc AUD_UTIL.AUDIT_HEADER

 Name                          Null?    Type
 ----------------------------- -------- ------------------
 AUD$TSTAMP                    NOT NULL TIMESTAMP(6)
 AUD$ID                        NOT NULL NUMBER(18)
 TABLE_NAME                    NOT NULL VARCHAR2(30)
 DML                           NOT NULL VARCHAR2(1)
 DESCR                                  VARCHAR2(100)
 ACTION                                 VARCHAR2(32)
 CLIENT_ID                              VARCHAR2(64)
 HOST                                   VARCHAR2(64)
 MODULE                                 VARCHAR2(48)
 OS_USER                                VARCHAR2(32)

Every table for which you activate auditing will then have its own table, which is a logical child of the AUDIT_HEADER table. I say logical child because I am not explicitly nominating a foreign key here. Feel free to add it if you like, but by default I don’t, because no one should be doing DML against these tables except via the API that is automatically generated. It might be useful for the optimizer if you are heavily mining these tables.
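If you do want the relationship declared, a sketch might look like the following. This assumes AUDIT_HEADER is keyed on (AUD$TSTAMP, AUD$ID), matching the logical link described below; check the repo DDL before using it.

-- Optional: declare the logical parent/child relationship for the optimizer's benefit.
-- Assumption: AUDIT_HEADER has a primary/unique key on (AUD$TSTAMP, AUD$ID).
alter table aud_util.emp_scott
  add constraint emp_scott_hdr_fk
  foreign key ( aud$tstamp, aud$id )
  references aud_util.audit_header ( aud$tstamp, aud$id )
  rely disable novalidate;   -- or leave it enabled if you want full enforcement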

Each audit table is the source table name suffixed with the schema, because multiple schemas with potentially the same table names will have their audit captured into a single auditing schema. This is AUD_UTIL by default, but you can change it simply by editing the ‘schema’ substitution variable at the top of each script.
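For example, the top of the scripts contains something along these lines (a sketch; the real scripts are in the github repo):

-- SQL*Plus substitution variable that controls where all the audit objects live
define schema = aud_util

-- subsequent DDL in the scripts then references it, e.g.
prompt Installing audit objects into &&schema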


SQL> desc AUD_UTIL.EMP_SCOTT

 Name                          Null?    Type
 ----------------------------- -------- -----------------
 AUD$TSTAMP                    NOT NULL TIMESTAMP(6)
 AUD$ID                        NOT NULL NUMBER(18)
 AUD$IMAGE                     NOT NULL VARCHAR2(3)
 EMPNO                                  NUMBER(4)
 ENAME                                  VARCHAR2(10)
 JOB                                    VARCHAR2(9)
 MGR                                    NUMBER(4)
 HIREDATE                               DATE
 SAL                                    NUMBER(7,2)
 COMM                                   NUMBER(7,2)
 DEPTNO                                 NUMBER(2)

The child audit tables contain the same columns as the source table, but with three additional columns:

  • AUD$TSTAMP and AUD$ID, which form a logical link back to the parent AUDIT_HEADER record
  • AUD$IMAGE, which is “OLD” or “NEW”, aligning to the triggering :OLD/:NEW values

Sample Usage

Any API call that is “destructive”, namely one that could run DDL, has an “action” parameter that is either OUTPUT or EXECUTE. OUTPUT lets you see what would be done in a safe manner, without actually executing anything.

Every API call that modifies the auditing is captured in a MAINT_LOG table, so you can blame the appropriate people, er, keep a history of the auditing API calls you have made. But first, we need to let the code know which schemas we will allow to be audited. Only the schemas nominated in the table SCHEMA_LIST can have auditing enabled, and by default it is empty, so you’ll get an error on any attempt to use the API.


SQL> exec  aud_util.audit_util.generate_audit_table('SCOTT','EMP',p_action=>'OUTPUT');

Call to generate audit table for SCOTT.EMP

BEGIN aud_util.audit_util.generate_audit_table('SCOTT','EMP',p_action=>'OUTPUT'); END;

*
ERROR at line 1:
ORA-20378: You can only manage audit facilities for schemas listed in SCHEMA_LIST
ORA-06512: at "AUD_UTIL.AUDIT_UTIL", line 111
ORA-06512: at "AUD_UTIL.AUDIT_UTIL", line 480
ORA-06512: at "AUD_UTIL.AUDIT_UTIL", line 496
ORA-06512: at line 1


SQL> insert into aud_util.schema_list values ('SCOTT');

1 row created.

SQL> commit;

Commit complete.

Now that this is done, we can dig into the API a little more.

Creating an Audit Table

Note: This example does not actually do any work, because we have set p_action to OUTPUT. But we can see what work would have been performed by the call.


SQL> exec  aud_util.audit_util.generate_audit_table('SCOTT','EMP',p_action=>'OUTPUT');

Call to generate audit table for SCOTT.EMP

create table AUD_UTIL.EMP_SCOTT (
 aud$tstamp     timestamp   not null,
 aud$id         number(18)  not null,
 aud$image      varchar2(3) not null )
 partition by range ( aud$tstamp  )
 interval (numtoyminterval(1,'MONTH'))
 ( partition EMP_p202009 values less than ( to_timestamp('20201001','yyyymmdd') )
 ) pctfree 1 tablespace users

alter table AUD_UTIL.EMP_SCOTT
  add constraint EMP_SCOTT_PK primary key ( aud$tstamp, aud$id, aud$image)
  using index
    (create unique index AUD_UTIL.EMP_SCOTT_PK
     on AUD_UTIL.EMP_SCOTT ( aud$tstamp, aud$id, aud$image)
     local tablespace users)

alter table AUD_UTIL.EMP_SCOTT add EMPNO NUMBER(4,0)
alter table AUD_UTIL.EMP_SCOTT add ENAME VARCHAR2(10)
alter table AUD_UTIL.EMP_SCOTT add JOB VARCHAR2(9)
alter table AUD_UTIL.EMP_SCOTT add MGR NUMBER(4,0)
alter table AUD_UTIL.EMP_SCOTT add HIREDATE DATE
alter table AUD_UTIL.EMP_SCOTT add SAL NUMBER(7,2)
alter table AUD_UTIL.EMP_SCOTT add COMM NUMBER(7,2)
alter table AUD_UTIL.EMP_SCOTT add DEPTNO NUMBER(2,0)

PL/SQL procedure successfully completed.

We create an audit table which is partitioned by month (see later for details about partitioning), with an appropriate locally partitioned primary key index. No global indexes are used, because the expectation is that with partitioning you may ultimately want to purge audit data at the partition level in the future.
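For example, purging a month of audit history could then be a simple metadata operation along these lines (partition name as per the generated DDL above, or after the renaming routine described later):

-- Drop one month of audit data for SCOTT.EMP.
-- With only local indexes on the audit table, no global index maintenance is required.
alter table aud_util.emp_scott drop partition EMP_p202009;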

Creating an Audit Package

I’m “old school” and have always considered that any lengthy code that would go into a trigger should be placed into a package. So our insertion DML is wrapped up in a database package that our audit trigger will ultimately call.


SQL> exec  aud_util.audit_util.generate_audit_package('SCOTT','EMP',p_action=>'OUTPUT');

Call to generate audit package for SCOTT.EMP

create or replace
package AUD_UTIL.PKG_EMP_SCOTT is

 /***************************************************************/
 /* ATTENTION                                                   */
 /*                                                             */
 /* This package is automatically generated by audit generator  */
 /* utility.  Do not edit this package by hand as your changes  */
 /* will be lost if the package are re-generated.               */
 /***************************************************************/

  procedure audit_row(
     p_aud$tstamp                     timestamp
    ,p_aud$id                         number
    ,p_aud$image                      varchar2
    ,p_empno     number
    ,p_ename     varchar2
    ,p_job       varchar2
    ,p_mgr       number
    ,p_hiredate  date
    ,p_sal       number
    ,p_comm      number
    ,p_deptno    number
  );
end;
create or replace
package body AUD_UTIL.PKG_EMP_SCOTT is

 /***************************************************************/
 /* ATTENTION                                                   */
 /*                                                             */
 /* This package is automatically generated by audit generator  */
 /* utility.  Do not edit this package by hand as your changes  */
 /* will be lost if the package are re-generated.               */
 /***************************************************************/

  procedure audit_row(
     p_aud$tstamp                    timestamp
    ,p_aud$id                        number
    ,p_aud$image                     varchar2
    ,p_empno     number
    ,p_ename     varchar2
    ,p_job       varchar2
    ,p_mgr       number
    ,p_hiredate  date
    ,p_sal       number
    ,p_comm      number
    ,p_deptno    number
  ) is
begin

  insert into AUD_UTIL.EMP_SCOTT (
     aud$tstamp
    ,aud$id
    ,aud$image
    ,empno
    ,ename
    ,job
    ,mgr
    ,hiredate
    ,sal
    ,comm
    ,deptno
  ) values (
     p_aud$tstamp
    ,p_aud$id
    ,p_aud$image
    ,p_empno
    ,p_ename
    ,p_job
    ,p_mgr
    ,p_hiredate
    ,p_sal
    ,p_comm
    ,p_deptno
    );
end;

end;
grant execute on AUD_UTIL.PKG_EMP_SCOTT to SCOTT

PL/SQL procedure successfully completed.

You can see that the package is created in our audit schema, not in the table-owning schema. This is to improve security, and the use of a package keeps the code in the subsequent audit trigger nice and compact.

 

Creating an Audit Trigger

Finally, we need a trigger that calls our package to capture the information.


SQL> exec  aud_util.audit_util.generate_audit_trigger('SCOTT','EMP',p_action=>'OUTPUT');

Call to generate audit trigger for SCOTT.EMP

create or replace
trigger AUD_UTIL.AUD$EMP
after insert or update or delete on SCOTT.EMP
for each row
disable
declare
 l_dml       varchar2(1) := case when updating then 'U' when inserting then 'I' else 'D' end;
 l_tstamp    timestamp;
 l_id        number;
 l_descr     varchar2(100);
begin

 /***************************************************************/
 /* ATTENTION                                                   */
 /*                                                             */
 /* This package is automatically generated by audit generator  */
 /* utility.  Do not edit this package by hand as your changes  */
 /* will be lost if the package are re-generated.               */
 /***************************************************************/

 if aud_util.trigger_ctl.enabled('AUD$EMP') then
  l_descr :=
    case
      when updating
        then 'UPDATE'
      when inserting
        then 'INSERT'
      else
        'DELETE'
    end;

  aud_util.log_header('EMP',l_dml,l_descr,l_tstamp,l_id);

  if updating or
     deleting then
     aud_util.pkg_emp_scott.audit_row(
         p_aud$tstamp =>l_tstamp
        ,p_aud$id     =>l_id
        ,p_aud$image  =>'OLD'
        ,p_empno     =>:old.empno
        ,p_ename     =>:old.ename
        ,p_job       =>:old.job
        ,p_mgr       =>:old.mgr
        ,p_hiredate  =>:old.hiredate
        ,p_sal       =>:old.sal
        ,p_comm      =>:old.comm
        ,p_deptno    =>:old.deptno
        );
  end if;
  if inserting then
     aud_util.pkg_emp_scott.audit_row(
         p_aud$tstamp=>l_tstamp
        ,p_aud$id    =>l_id
        ,p_aud$image =>'NEW'
        ,p_empno     =>:new.empno
        ,p_ename     =>:new.ename
        ,p_job       =>:new.job
        ,p_mgr       =>:new.mgr
        ,p_hiredate  =>:new.hiredate
        ,p_sal       =>:new.sal
        ,p_comm      =>:new.comm
        ,p_deptno    =>:new.deptno
        );
  end if;
 end if;
end;
alter trigger AUD_UTIL.AUD$EMP enable

By default, even the trigger is created in the audit schema, not the owning schema. Some people prefer this, some people hate it. You can choose your preference by adjusting the ‘g_trigger_in_audit_schema’ global variable in the package body. See the tail of this post for other settings.

You can see that we are capturing inserts. I typically think this is overkill, because an inserted row is readily available in the table itself; it is only when someone updates or deletes the row that you typically want to capture an audit. The ‘g_inserts_audited’ setting (true|false) in the package body gives you control over this.

Also, people have differing opinions on whether the audit should capture both OLD and NEW values during an update, or just the OLD (because the new values are readily available in the table). Similarly, there is a setting ‘g_capture_new_updates’ (true|false) to give you control over this.

The trigger is always created in DISABLED mode to ensure that if it does not compile, it will not cause any damage. It is enabled afterwards, but you can control this with the p_enable_trigger parameter, which defaults to true.
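For example, to regenerate the trigger but leave it disabled until a later release step, something like this should do it (using the p_enable_trigger parameter mentioned above; the full signature is in the repo):

SQL> exec aud_util.audit_util.generate_audit_trigger('SCOTT','EMP',p_action=>'EXECUTE',p_enable_trigger=>false);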

Bringing it All Together

All of the above is really just to help explain what is going on under the covers. In normal operation, you only need a single call to generate all the audit infrastructure for a table.


SQL> exec  aud_util.audit_util.generate_audit_support('SCOTT','EMP',p_action=>'EXECUTE');

Call to generate audit table for SCOTT.EMP

create table AUD_UTIL.EMP_SCOTT (
 aud$tstamp     timestamp   not null,
 aud$id         number(18)  not null,
 aud$image      varchar2(3) not null )
 partition by range ( aud$tstamp  )
 interval (numtoyminterval(1,'MONTH'))
 ( partition EMP_p202009 values less than ( to_timestamp('20201001','yyyymmdd') )
 ) pctfree 1 tablespace users
alter table AUD_UTIL.EMP_SCOTT
  add constraint EMP_SCOTT_PK primary key ( aud$tstamp, aud$id, aud$image)
  using index
    (create unique index AUD_UTIL.EMP_SCOTT_PK
     on AUD_UTIL.EMP_SCOTT ( aud$tstamp, aud$id, aud$image)
     local tablespace users)
alter table AUD_UTIL.EMP_SCOTT add EMPNO NUMBER(4,0)
alter table AUD_UTIL.EMP_SCOTT add ENAME VARCHAR2(10)
alter table AUD_UTIL.EMP_SCOTT add JOB VARCHAR2(9)
alter table AUD_UTIL.EMP_SCOTT add MGR NUMBER(4,0)
alter table AUD_UTIL.EMP_SCOTT add HIREDATE DATE
alter table AUD_UTIL.EMP_SCOTT add SAL NUMBER(7,2)
alter table AUD_UTIL.EMP_SCOTT add COMM NUMBER(7,2)
alter table AUD_UTIL.EMP_SCOTT add DEPTNO NUMBER(2,0)

Call to generate audit package for SCOTT.EMP

create or replace
package AUD_UTIL.PKG_EMP_SCOTT is

 /***************************************************************/
 /* ATTENTION                                                   */
 /*                                                             */
 /* This package is automatically generated by audit generator  */
 /* utility.  Do not edit this package by hand as your changes  */
 /* will be lost if the package are re-generated.               */
 /***************************************************************/

  procedure audit_row(
     p_aud$tstamp                     timestamp
    ,p_aud$id                         number
    ,p_aud$image                      varchar2
    ,p_empno     number
    ,p_ename     varchar2
    ,p_job       varchar2
    ,p_mgr       number
    ,p_hiredate  date
    ,p_sal       number
    ,p_comm      number
    ,p_deptno    number
  );
end;
create or replace
package body AUD_UTIL.PKG_EMP_SCOTT is

 /***************************************************************/
 /* ATTENTION                                                   */
 /*                                                             */
 /* This package is automatically generated by audit generator  */
 /* utility.  Do not edit this package by hand as your changes  */
 /* will be lost if the package are re-generated.               */
 /***************************************************************/

  procedure audit_row(
     p_aud$tstamp                    timestamp
    ,p_aud$id                        number
    ,p_aud$image                     varchar2
    ,p_empno     number
    ,p_ename     varchar2
    ,p_job       varchar2
    ,p_mgr       number
    ,p_hiredate  date
    ,p_sal       number
    ,p_comm      number
    ,p_deptno    number
  ) is
begin

  insert into AUD_UTIL.EMP_SCOTT (
     aud$tstamp
    ,aud$id
    ,aud$image
    ,empno
    ,ename
    ,job
    ,mgr
    ,hiredate
    ,sal
    ,comm
    ,deptno
  ) values (
     p_aud$tstamp
    ,p_aud$id
    ,p_aud$image
    ,p_empno
    ,p_ename
    ,p_job
    ,p_mgr
    ,p_hiredate
    ,p_sal
    ,p_comm
    ,p_deptno
    );
end;

end;
grant execute on AUD_UTIL.PKG_EMP_SCOTT to SCOTT

Call to generate audit trigger for SCOTT.EMP

create or replace
trigger AUD_UTIL.AUD$EMP
after insert or update or delete on SCOTT.EMP
for each row
disable
declare
 l_dml       varchar2(1) := case when updating then 'U' when inserting then 'I' else 'D' end;
 l_tstamp    timestamp;
 l_id        number;
 l_descr     varchar2(100);
begin

 /***************************************************************/
 /* ATTENTION                                                   */
 /*                                                             */
 /* This package is automatically generated by audit generator  */
 /* utility.  Do not edit this package by hand as your changes  */
 /* will be lost if the package are re-generated.               */
 /***************************************************************/

 if aud_util.trigger_ctl.enabled('AUD$EMP') then
  l_descr :=
    case
      when updating
        then 'UPDATE'
      when inserting
        then 'INSERT'
      else
        'DELETE'
    end;

  aud_util.log_header('EMP',l_dml,l_descr,l_tstamp,l_id);

  if updating or
     deleting then
     aud_util.pkg_emp_scott.audit_row(
         p_aud$tstamp =>l_tstamp
        ,p_aud$id     =>l_id
        ,p_aud$image  =>'OLD'
        ,p_empno     =>:old.empno
        ,p_ename     =>:old.ename
        ,p_job       =>:old.job
        ,p_mgr       =>:old.mgr
        ,p_hiredate  =>:old.hiredate
        ,p_sal       =>:old.sal
        ,p_comm      =>:old.comm
        ,p_deptno    =>:old.deptno
        );
  end if;
  if inserting then
     aud_util.pkg_emp_scott.audit_row(
         p_aud$tstamp=>l_tstamp
        ,p_aud$id    =>l_id
        ,p_aud$image =>'NEW'
        ,p_empno     =>:new.empno
        ,p_ename     =>:new.ename
        ,p_job       =>:new.job
        ,p_mgr       =>:new.mgr
        ,p_hiredate  =>:new.hiredate
        ,p_sal       =>:new.sal
        ,p_comm      =>:new.comm
        ,p_deptno    =>:new.deptno
        );
  end if;
 end if;
end;
alter trigger AUD_UTIL.AUD$EMP enable

PL/SQL procedure successfully completed.

SQL>

Remember this call, because as you’ll see below, it should be the only call you ever need.

Schema Evolution

Contrary to popular opinion, it is pretty easy to change the structure of a table in a relational database. So what happens to our auditing when you add a column to the SCOTT.EMP table? By default, the auditing will continue without any issue but will not capture that new column. All you need to do is re-run the same audit API; it will work out what you have done and make the necessary adjustments.


SQL> alter table scott.emp add new_col number(10,2);

Table altered.

SQL> exec  aud_util.audit_util.generate_audit_support('SCOTT','EMP',p_action=>'EXECUTE');

Call to generate audit table for SCOTT.EMP

alter table AUD_UTIL.EMP_SCOTT add NEW_COL NUMBER(10,2)

Call to generate audit package for SCOTT.EMP
create or replace
package AUD_UTIL.PKG_EMP_SCOTT is

 /***************************************************************/
 /* ATTENTION                                                   */
 /*                                                             */
 /* This package is automatically generated by audit generator  */
 /* utility.  Do not edit this package by hand as your changes  */
 /* will be lost if the package are re-generated.               */
 /***************************************************************/

  procedure audit_row(
     p_aud$tstamp                     timestamp
    ,p_aud$id                         number
    ,p_aud$image                      varchar2
    ,p_empno     number
    ,p_ename     varchar2
    ,p_job       varchar2
    ,p_mgr       number
    ,p_hiredate  date
    ,p_sal       number
    ,p_comm      number
    ,p_deptno    number
    ,p_new_col   number
  );
end;
create or replace
package body AUD_UTIL.PKG_EMP_SCOTT is

 /***************************************************************/
 /* ATTENTION                                                   */
 /*                                                             */
 /* This package is automatically generated by audit generator  */
 /* utility.  Do not edit this package by hand as your changes  */
 /* will be lost if the package are re-generated.               */
 /***************************************************************/

  procedure audit_row(
     p_aud$tstamp                    timestamp
    ,p_aud$id                        number
    ,p_aud$image                     varchar2
    ,p_empno     number
    ,p_ename     varchar2
    ,p_job       varchar2
    ,p_mgr       number
    ,p_hiredate  date
    ,p_sal       number
    ,p_comm      number
    ,p_deptno    number
    ,p_new_col   number
  ) is
begin

  insert into AUD_UTIL.EMP_SCOTT (
     aud$tstamp
    ,aud$id
    ,aud$image
    ,empno
    ,ename
    ,job
    ,mgr
    ,hiredate
    ,sal
    ,comm
    ,deptno
    ,new_col
  ) values (
     p_aud$tstamp
    ,p_aud$id
    ,p_aud$image
    ,p_empno
    ,p_ename
    ,p_job
    ,p_mgr
    ,p_hiredate
    ,p_sal
    ,p_comm
    ,p_deptno
    ,p_new_col
    );
end;

end;
grant execute on AUD_UTIL.PKG_EMP_SCOTT to SCOTT

Call to generate audit trigger for SCOTT.EMP

create or replace
trigger AUD_UTIL.AUD$EMP
after insert or update or delete on SCOTT.EMP
for each row
disable
declare
 l_dml       varchar2(1) := case when updating then 'U' when inserting then 'I' else 'D' end;
 l_tstamp    timestamp;
 l_id        number;
 l_descr     varchar2(100);
begin

 /***************************************************************/
 /* ATTENTION                                                   */
 /*                                                             */
 /* This package is automatically generated by audit generator  */
 /* utility.  Do not edit this package by hand as your changes  */
 /* will be lost if the package are re-generated.               */
 /***************************************************************/

 if aud_util.trigger_ctl.enabled('AUD$EMP') then
  l_descr :=
    case
      when updating
        then 'UPDATE'
      when inserting
        then 'INSERT'
      else
        'DELETE'
    end;

  aud_util.log_header('EMP',l_dml,l_descr,l_tstamp,l_id);

  if updating or
     deleting then
     aud_util.pkg_emp_scott.audit_row(
         p_aud$tstamp =>l_tstamp
        ,p_aud$id     =>l_id
        ,p_aud$image  =>'OLD'
        ,p_empno     =>:old.empno
        ,p_ename     =>:old.ename
        ,p_job       =>:old.job
        ,p_mgr       =>:old.mgr
        ,p_hiredate  =>:old.hiredate
        ,p_sal       =>:old.sal
        ,p_comm      =>:old.comm
        ,p_deptno    =>:old.deptno
        ,p_new_col   =>:old.new_col
        );
  end if;
  if inserting then
     aud_util.pkg_emp_scott.audit_row(
         p_aud$tstamp=>l_tstamp
        ,p_aud$id    =>l_id
        ,p_aud$image =>'NEW'
        ,p_empno     =>:new.empno
        ,p_ename     =>:new.ename
        ,p_job       =>:new.job
        ,p_mgr       =>:new.mgr
        ,p_hiredate  =>:new.hiredate
        ,p_sal       =>:new.sal
        ,p_comm      =>:new.comm
        ,p_deptno    =>:new.deptno
        ,p_new_col   =>:new.new_col
        );
  end if;
 end if;
end;
alter trigger AUD_UTIL.AUD$EMP enable

PL/SQL procedure successfully completed.

If you drop a column, we don’t change the audit table, because presumably you still want a record of the changes that occurred whilst that column existed. Regenerating the audit with the same call again will adjust the package and trigger so they no longer reference the dropped column, and will leave the table untouched.


SQL> alter table scott.emp drop column new_col;

Table altered.

SQL> exec  aud_util.audit_util.generate_audit_support('SCOTT','EMP',p_action=>'EXECUTE');

Call to generate audit table for SCOTT.EMP

Call to generate audit package for SCOTT.EMP

create or replace
package AUD_UTIL.PKG_EMP_SCOTT is

 /***************************************************************/
 /* ATTENTION                                                   */
 /*                                                             */
 /* This package is automatically generated by audit generator  */
 /* utility.  Do not edit this package by hand as your changes  */
 /* will be lost if the package are re-generated.               */
 /***************************************************************/

  procedure audit_row(
     p_aud$tstamp                     timestamp
    ,p_aud$id                         number
    ,p_aud$image                      varchar2
    ,p_empno     number
    ,p_ename     varchar2
    ,p_job       varchar2
    ,p_mgr       number
    ,p_hiredate  date
    ,p_sal       number
    ,p_comm      number
    ,p_deptno    number
  );
end;
create or replace
package body AUD_UTIL.PKG_EMP_SCOTT is

 /***************************************************************/
 /* ATTENTION                                                   */
 /*                                                             */
 /* This package is automatically generated by audit generator  */
 /* utility.  Do not edit this package by hand as your changes  */
 /* will be lost if the package are re-generated.               */
 /***************************************************************/

  procedure audit_row(
     p_aud$tstamp                    timestamp
    ,p_aud$id                        number
    ,p_aud$image                     varchar2
    ,p_empno     number
    ,p_ename     varchar2
    ,p_job       varchar2
    ,p_mgr       number
    ,p_hiredate  date
    ,p_sal       number
    ,p_comm      number
    ,p_deptno    number
  ) is
begin

  insert into AUD_UTIL.EMP_SCOTT (
     aud$tstamp
    ,aud$id
    ,aud$image
    ,empno
    ,ename
    ,job
    ,mgr
    ,hiredate
    ,sal
    ,comm
    ,deptno
  ) values (
     p_aud$tstamp
    ,p_aud$id
    ,p_aud$image
    ,p_empno
    ,p_ename
    ,p_job
    ,p_mgr
    ,p_hiredate
    ,p_sal
    ,p_comm
    ,p_deptno
    );
end;

end;
grant execute on AUD_UTIL.PKG_EMP_SCOTT to SCOTT

Call to generate audit trigger for SCOTT.EMP

create or replace
trigger AUD_UTIL.AUD$EMP
after insert or update or delete on SCOTT.EMP
for each row
disable
declare
 l_dml       varchar2(1) := case when updating then 'U' when inserting then 'I' else 'D' end;
 l_tstamp    timestamp;
 l_id        number;
 l_descr     varchar2(100);
begin

 /***************************************************************/
 /* ATTENTION                                                   */
 /*                                                             */
 /* This package is automatically generated by audit generator  */
 /* utility.  Do not edit this package by hand as your changes  */
 /* will be lost if the package are re-generated.               */
 /***************************************************************/

 if aud_util.trigger_ctl.enabled('AUD$EMP') then
  l_descr :=
    case
      when updating
        then 'UPDATE'
      when inserting
        then 'INSERT'
      else
        'DELETE'
    end;

  aud_util.log_header('EMP',l_dml,l_descr,l_tstamp,l_id);

  if updating or
     deleting then
     aud_util.pkg_emp_scott.audit_row(
         p_aud$tstamp =>l_tstamp
        ,p_aud$id     =>l_id
        ,p_aud$image  =>'OLD'
        ,p_empno     =>:old.empno
        ,p_ename     =>:old.ename
        ,p_job       =>:old.job
        ,p_mgr       =>:old.mgr
        ,p_hiredate  =>:old.hiredate
        ,p_sal       =>:old.sal
        ,p_comm      =>:old.comm
        ,p_deptno    =>:old.deptno
        );
  end if;
  if inserting then
     aud_util.pkg_emp_scott.audit_row(
         p_aud$tstamp=>l_tstamp
        ,p_aud$id    =>l_id
        ,p_aud$image =>'NEW'
        ,p_empno     =>:new.empno
        ,p_ename     =>:new.ename
        ,p_job       =>:new.job
        ,p_mgr       =>:new.mgr
        ,p_hiredate  =>:new.hiredate
        ,p_sal       =>:new.sal
        ,p_comm      =>:new.comm
        ,p_deptno    =>:new.deptno
        );
  end if;
 end if;
end;
alter trigger AUD_UTIL.AUD$EMP enable

PL/SQL procedure successfully completed.

Similarly, the routine will “do its best” to align columns if you change them in the source table. For example, if the ENAME column got extended:


SQL> alter table scott.emp modify ename varchar2(40);

Table altered.

then we will try to make a similar change in the audit setup:

SQL> exec  aud_util.audit_util.generate_audit_support('SCOTT','EMP',p_action=>'EXECUTE');

Call to generate audit table for SCOTT.EMP

alter table AUD_UTIL.EMP_SCOTT modify ENAME VARCHAR2(40)

Call to generate audit package for SCOTT.EMP

create or replace
package AUD_UTIL.PKG_EMP_SCOTT is

 /***************************************************************/
 /* ATTENTION                                                   */
 /*                                                             */
 /* This package is automatically generated by audit generator  */
 /* utility.  Do not edit this package by hand as your changes  */
 /* will be lost if the package are re-generated.               */
 /***************************************************************/

  procedure audit_row(
     p_aud$tstamp                     timestamp
    ,p_aud$id                         number
    ,p_aud$image                      varchar2
    ,p_empno     number
    ,p_ename     varchar2
    ,p_job       varchar2
    ,p_mgr       number
    ,p_hiredate  date
    ,p_sal       number
    ,p_comm      number
    ,p_deptno    number
  );
end;
create or replace
package body AUD_UTIL.PKG_EMP_SCOTT is

 /***************************************************************/
 /* ATTENTION                                                   */
 /*                                                             */
 /* This package is automatically generated by audit generator  */
 /* utility.  Do not edit this package by hand as your changes  */
 /* will be lost if the package are re-generated.               */
 /***************************************************************/

  procedure audit_row(
     p_aud$tstamp                    timestamp
    ,p_aud$id                        number
    ,p_aud$image                     varchar2
    ,p_empno     number
    ,p_ename     varchar2
    ,p_job       varchar2
    ,p_mgr       number
    ,p_hiredate  date
    ,p_sal       number
    ,p_comm      number
    ,p_deptno    number
  ) is
begin

  insert into AUD_UTIL.EMP_SCOTT (
     aud$tstamp
    ,aud$id
    ,aud$image
    ,empno
    ,ename
    ,job
    ,mgr
    ,hiredate
    ,sal
    ,comm
    ,deptno
  ) values (
     p_aud$tstamp
    ,p_aud$id
    ,p_aud$image
    ,p_empno
    ,p_ename
    ,p_job
    ,p_mgr
    ,p_hiredate
    ,p_sal
    ,p_comm
    ,p_deptno
    );
end;

end;
grant execute on AUD_UTIL.PKG_EMP_SCOTT to SCOTT

Call to generate audit trigger for SCOTT.EMP

create or replace
trigger AUD_UTIL.AUD$EMP
after insert or update or delete on SCOTT.EMP
for each row
disable
declare
 l_dml       varchar2(1) := case when updating then 'U' when inserting then 'I' else 'D' end;
 l_tstamp    timestamp;
 l_id        number;
 l_descr     varchar2(100);
begin

 /***************************************************************/
 /* ATTENTION                                                   */
 /*                                                             */
 /* This package is automatically generated by audit generator  */
 /* utility.  Do not edit this package by hand as your changes  */
 /* will be lost if the package are re-generated.               */
 /***************************************************************/

 if aud_util.trigger_ctl.enabled('AUD$EMP') then
  l_descr :=
    case
      when updating
        then 'UPDATE'
      when inserting
        then 'INSERT'
      else
        'DELETE'
    end;

  aud_util.log_header('EMP',l_dml,l_descr,l_tstamp,l_id);

  if updating or
     deleting then
     aud_util.pkg_emp_scott.audit_row(
         p_aud$tstamp =>l_tstamp
        ,p_aud$id     =>l_id
        ,p_aud$image  =>'OLD'
        ,p_empno     =>:old.empno
        ,p_ename     =>:old.ename
        ,p_job       =>:old.job
        ,p_mgr       =>:old.mgr
        ,p_hiredate  =>:old.hiredate
        ,p_sal       =>:old.sal
        ,p_comm      =>:old.comm
        ,p_deptno    =>:old.deptno
        );
  end if;
  if inserting then
     aud_util.pkg_emp_scott.audit_row(
         p_aud$tstamp=>l_tstamp
        ,p_aud$id    =>l_id
        ,p_aud$image =>'NEW'
        ,p_empno     =>:new.empno
        ,p_ename     =>:new.ename
        ,p_job       =>:new.job
        ,p_mgr       =>:new.mgr
        ,p_hiredate  =>:new.hiredate
        ,p_sal       =>:new.sal
        ,p_comm      =>:new.comm
        ,p_deptno    =>:new.deptno
        );
  end if;
 end if;
end;
alter trigger AUD_UTIL.AUD$EMP enable

PL/SQL procedure successfully completed.

For some things we obviously can’t know what your intent was, for example a renamed column, but there is an API provided to let you handle that.


PROCEDURE rename_column(p_owner varchar2
                       ,p_table_name varchar2
                       ,p_old_columns varchar2
                       ,p_new_columns varchar2
                       ,p_action varchar2) IS

You pass in a comma-separated list of the old column names and the new column names, and the audit table columns will be renamed and the audit package and trigger regenerated.
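For example, something like this (the column names here are purely illustrative; run with OUTPUT first to preview the DDL):

SQL> exec aud_util.audit_util.rename_column(p_owner=>'SCOTT',p_table_name=>'EMP',p_old_columns=>'ENAME',p_new_columns=>'EMP_NAME',p_action=>'OUTPUT');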

Dropping Auditing for a Table

If you want to remove the auditing facilities for a table, simply call DROP_AUDIT_SUPPORT for the table.


SQL>  exec  aud_util.audit_util.drop_audit_support('SCOTT','EMP',p_action=>'EXECUTE');

Call to drop audit trigger for SCOTT.EMP
drop trigger AUD_UTIL.AUD$EMP

Call to drop audit package for SCOTT.EMP
drop package AUD_UTIL.PKG_EMP_SCOTT

Call to drop audit table for SCOTT.EMP
drop table AUD_UTIL.EMP_SCOTT

PL/SQL procedure successfully completed.

Clearly, if you have added auditing for a table, then dropping it is not something that should be taken lightly. For this reason, we check whether there are any rows in the audit table for the object. If there is any data, then by default we will drop the trigger and the package, but the table will be preserved.


SQL> exec  aud_util.audit_util.drop_audit_support('SCOTT','EMP',p_action=>'EXECUTE');

Call to drop audit trigger for SCOTT.EMP
drop trigger AUD_UTIL.AUD$EMP

Call to drop audit package for SCOTT.EMP
drop package AUD_UTIL.PKG_EMP_SCOTT

Call to drop audit table for SCOTT.EMP
BEGIN aud_util.audit_util.drop_audit_support('SCOTT','EMP',p_action=>'EXECUTE'); END;

*
ERROR at line 1:
ORA-20378: Rows found in AUD_UTIL.EMP_SCOTT.  Use FORCE option if you really want to drop this
ORA-06512: at "AUD_UTIL.AUDIT_UTIL", line 111
ORA-06512: at "AUD_UTIL.AUDIT_UTIL", line 909
ORA-06512: at "AUD_UTIL.AUDIT_UTIL", line 975
ORA-06512: at line 1

As the error message suggests, if you really want to erase that audit history, then add the P_FORCE parameter.


SQL> exec  aud_util.audit_util.drop_audit_support('SCOTT','EMP',p_action=>'EXECUTE',p_force=>true);
Call to drop audit trigger for SCOTT.EMP
drop trigger AUD_UTIL.AUD$EMP
Call to drop audit package for SCOTT.EMP
drop package AUD_UTIL.PKG_EMP_SCOTT
Call to drop audit table for SCOTT.EMP
drop table AUD_UTIL.EMP_SCOTT

PL/SQL procedure successfully completed.

But wait… there’s more!

The examples above are simplified to make comprehension easier. As many of us know, one of the concerns about audit triggers is that they turn a batch operation (i.e. a single DML statement that modifies lots of rows) into a row-by-row operation in terms of performance, because for every row touched we are “jumping out” to the audit code to log the changed row.

In reality, the audit generator does not do this. We take advantage of bulk binding to ensure that we minimise the performance overhead of the audit triggers. This is controlled by the setting g_bulk_bind, which defaults to true. Of course, if you know in advance that every operation your application performs is always a single-row modification, then

  • that’s a very sad application :-) , and
  • you should consider setting the bulk binding to false, because there are also some small overheads with bulk binding when you don’t need it.

Here is how the package and trigger really look when generated with bulk binding enabled:



SQL> exec  aud_util.audit_util.generate_audit_package('SCOTT','EMP',p_action=>'EXECUTE');

Call to generate audit package for SCOTT.EMP

create or replace
package AUD_UTIL.PKG_EMP_SCOTT is

 /***************************************************************/
 /* ATTENTION                                                   */
 /*                                                             */
 /* This package is automatically generated by audit generator  */
 /* utility.  Do not edit this package by hand as your changes  */
 /* will be lost if the package are re-generated.               */
 /***************************************************************/

  procedure bulk_init;
  procedure bulk_process;

  procedure audit_row(
     p_aud$tstamp                     timestamp
    ,p_aud$id                         number
    ,p_aud$image                      varchar2
    ,p_empno     number
    ,p_ename     varchar2
    ,p_job       varchar2
    ,p_mgr       number
    ,p_hiredate  date
    ,p_sal       number
    ,p_comm      number
    ,p_deptno    number
  );
end;

create or replace
package body AUD_UTIL.PKG_EMP_SCOTT is

 /***************************************************************/
 /* ATTENTION                                                   */
 /*                                                             */
 /* This package is automatically generated by audit generator  */
 /* utility.  Do not edit this package by hand as your changes  */
 /* will be lost if the package are re-generated.               */
 /***************************************************************/

    type t_audit_rows is table of AUD_UTIL.EMP_SCOTT%rowtype
      index by pls_integer;

    l_audrows t_audit_rows;

  procedure bulk_init is
  begin
    l_audrows.delete;
  end;

  procedure bulk_process is
  begin
    if l_audrows.count = 1 then
      insert into AUD_UTIL.EMP_SCOTT values l_audrows(1);
    else
      forall i in 1 .. l_audrows.count
        insert into AUD_UTIL.EMP_SCOTT values l_audrows(i);
    end if;
  end;

  procedure audit_row(
     p_aud$tstamp                    timestamp
    ,p_aud$id                        number
    ,p_aud$image                     varchar2
    ,p_empno     number
    ,p_ename     varchar2
    ,p_job       varchar2
    ,p_mgr       number
    ,p_hiredate  date
    ,p_sal       number
    ,p_comm      number
    ,p_deptno    number
  ) is
    l_idx pls_integer := l_audrows.count+1;
  begin

    l_audrows(l_idx).aud$tstamp := p_aud$tstamp;
    l_audrows(l_idx).aud$id     := p_aud$id;
    l_audrows(l_idx).aud$image  := p_aud$image;
    l_audrows(l_idx).empno      := p_empno;
    l_audrows(l_idx).ename      := p_ename;
    l_audrows(l_idx).job        := p_job;
    l_audrows(l_idx).mgr        := p_mgr;
    l_audrows(l_idx).hiredate   := p_hiredate;
    l_audrows(l_idx).sal        := p_sal;
    l_audrows(l_idx).comm       := p_comm;
    l_audrows(l_idx).deptno     := p_deptno;
  end;

end;

You can see that the package just retains the audit rows in an associative array, and the “bulk_process” routine, which is called by the trigger once the statement completes, inserts them all in a single bulk operation. Thus our trigger now becomes a compound trigger.



SQL> exec  aud_util.audit_util.generate_audit_trigger('SCOTT','EMP',p_action=>'EXECUTE');

Call to generate audit trigger for SCOTT.EMP

create or replace
trigger AUD_UTIL.AUD$EMP
for insert or update or delete on SCOTT.EMP
disable
compound trigger
 l_dml       varchar2(1) := case when updating then 'U' when inserting then 'I' else 'D' end;
 l_tstamp    timestamp;
 l_id        number;
 l_descr     varchar2(100);

 /***************************************************************/
 /* ATTENTION                                                   */
 /*                                                             */
 /* This package is automatically generated by audit generator  */
 /* utility.  Do not edit this package by hand as your changes  */
 /* will be lost if the package are re-generated.               */
 /***************************************************************/

before statement is
begin
 if aud_util.trigger_ctl.enabled('AUD$EMP') then
   aud_util.pkg_emp_scott.bulk_init;
 end if;
end before statement;

after each row is
begin
 if aud_util.trigger_ctl.enabled('AUD$EMP') then
  l_descr :=
    case
      when updating
        then 'UPDATE'
      when inserting
        then 'INSERT'
      else
        'DELETE'
    end;

  aud_util.audit_pkg.log_header_bulk('EMP',l_dml,l_descr,l_tstamp,l_id);

  if updating('EMPNO')    or
     updating('DEPTNO')   or
     updating('HIREDATE') or
     deleting then
     aud_util.pkg_emp_scott.audit_row(
         p_aud$tstamp =>l_tstamp
        ,p_aud$id     =>l_id
        ,p_aud$image  =>'OLD'
        ,p_empno     =>:old.empno
        ,p_ename     =>:old.ename
        ,p_job       =>:old.job
        ,p_mgr       =>:old.mgr
        ,p_hiredate  =>:old.hiredate
        ,p_sal       =>:old.sal
        ,p_comm      =>:old.comm
        ,p_deptno    =>:old.deptno
        );
  end if;
  if updating('EMPNO')    or
     updating('DEPTNO')   or
     updating('HIREDATE') or
     inserting then
     aud_util.pkg_emp_scott.audit_row(
         p_aud$tstamp=>l_tstamp
        ,p_aud$id    =>l_id
        ,p_aud$image =>'NEW'
        ,p_empno     =>:new.empno
        ,p_ename     =>:new.ename
        ,p_job       =>:new.job
        ,p_mgr       =>:new.mgr
        ,p_hiredate  =>:new.hiredate
        ,p_sal       =>:new.sal
        ,p_comm      =>:new.comm
        ,p_deptno    =>:new.deptno
        );
  end if;
 end if;
end after each row;

after statement is
begin
 if aud_util.trigger_ctl.enabled('AUD$EMP') then
   aud_util.pkg_emp_scott.bulk_process;
   aud_util.audit_pkg.bulk_process;
 end if;
end after statement;
end;
alter trigger AUD_UTIL.AUD$EMP enable

Miscellaneous

Interval Partitioning

Also provided is a routine


PROCEDURE partition_name_tidy_up(p_operation varchar2 default 'DEFAULT',
                                 p_action varchar2);

which is only relevant if you are using partitioning for your audit tables. Interval partitioning is used, and as such, system-generated partition names will be observed. This routine looks through the audit schema and renames those partitions to the form <table>_pYYYYMM (for example, EMP_p202009, as seen in the generated CREATE TABLE above).

The “p_operation” parameter allows this routine to be activated as a scheduler job (see the example calls after this list). The values for this parameter are:

  • DEFAULT – do the partition renaming work
  • DISABLE – disable the existing scheduler job
  • ENABLE – enable the existing scheduler job
  • UNSCHEDULE – drop the scheduler job
  • SCHEDULE – create a new scheduler job for 9am each day, which simply calls the same routine with the DEFAULT operation
  • CHECK – see if there is a job and create one if it is not there.
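For example, a one-off rename pass followed by scheduling the daily job might look like this (parameter names as per the signature above):

SQL> exec aud_util.audit_util.partition_name_tidy_up(p_operation=>'DEFAULT',p_action=>'EXECUTE');

SQL> exec aud_util.audit_util.partition_name_tidy_up(p_operation=>'SCHEDULE',p_action=>'EXECUTE');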

Selective Column Updates

There may be some columns in your source table for which an update does not make the change audit-worthy. For example, you might have a table of users, and you don’t care when people upload a new profile photo. You can nominate which columns are the ones “of interest” when generating audit support for the table:



SQL> exec  aud_util.audit_util.generate_audit_trigger('SCOTT','EMP',p_update_cols=>'EMPNO,DEPTNO,HIREDATE',p_action=>'OUTPUT');

Call to generate audit trigger for SCOTT.EMP

create or replace
trigger AUD_UTIL.AUD$EMP
after insert or update or delete on SCOTT.EMP
for each row
disable
declare
 l_dml       varchar2(1) := case when updating then 'U' when inserting then 'I' else 'D' end;
 l_tstamp    timestamp;
 l_id        number;
 l_descr     varchar2(100);
begin

 /***************************************************************/
 /* ATTENTION                                                   */
 /*                                                             */
 /* This package is automatically generated by audit generator  */
 /* utility.  Do not edit this package by hand as your changes  */
 /* will be lost if the package are re-generated.               */
 /***************************************************************/

 if aud_util.trigger_ctl.enabled('AUD$EMP') then
  l_descr :=
    case
      when updating
        then 'UPDATE'
      when inserting
        then 'INSERT'
      else
        'DELETE'
    end;

  aud_util.log_header_bulk('EMP',l_dml,l_descr,l_tstamp,l_id);

  if updating('EMPNO')    or
     updating('DEPTNO')   or
     updating('HIREDATE') or
     deleting then
     aud_util.pkg_emp_scott.audit_row(
         p_aud$tstamp =>l_tstamp
        ,p_aud$id     =>l_id
        ,p_aud$image  =>'OLD'
        ,p_empno     =>:old.empno
        ,p_ename     =>:old.ename
        ,p_job       =>:old.job
        ,p_mgr       =>:old.mgr
        ,p_hiredate  =>:old.hiredate
        ,p_sal       =>:old.sal
        ,p_comm      =>:old.comm
        ,p_deptno    =>:old.deptno
        );
  end if;
  if inserting then
     aud_util.pkg_emp_scott.audit_row(
         p_aud$tstamp=>l_tstamp
        ,p_aud$id    =>l_id
        ,p_aud$image =>'NEW'
        ,p_empno     =>:new.empno
        ,p_ename     =>:new.ename
        ,p_job       =>:new.job
        ,p_mgr       =>:new.mgr
        ,p_hiredate  =>:new.hiredate
        ,p_sal       =>:new.sal
        ,p_comm      =>:new.comm
        ,p_deptno    =>:new.deptno
        );
  end if;
 end if;
end;
alter trigger AUD_UTIL.AUD$EMP enable

When you create a trigger in this way, the nominated columns are stored in the AUDIT_UTIL_UPDATE_TRIG table, so that if you generate the trigger again, the same column specification is retained. If you need to reset the columns that are captured, you pass the string ‘NULL’ for the p_update_cols parameter (because a true null just means “use the existing column specification”).
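For example, to go back to auditing updates on all columns (note that it is the quoted string ‘NULL’, not an actual null):

SQL> exec aud_util.audit_util.generate_audit_trigger('SCOTT','EMP',p_update_cols=>'NULL',p_action=>'OUTPUT');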

Selective Row Audit

Along similar lines, you can nominate a WHEN clause for the trigger when generating the audit.



SQL> exec  aud_util.audit_util.generate_audit_trigger('SCOTT','EMP',p_when_clause=>'new.empno > 0',p_action=>'OUTPUT');
Call to generate audit trigger for SCOTT.EMP
create or replace
trigger AUD_UTIL.AUD$EMP
after insert or update or delete on SCOTT.EMP
for each row
disable
when (new.empno > 0)
declare
 l_dml       varchar2(1) := case when updating then 'U' when inserting then 'I' else 'D' end;
 l_tstamp    timestamp;
 l_id        number;
 l_descr     varchar2(100);
begin

 /***************************************************************/
 /* ATTENTION                                                   */
 /*                                                             */
 /* This package is automatically generated by audit generator  */
 /* utility.  Do not edit this package by hand as your changes  */
 /* will be lost if the package are re-generated.               */
 /***************************************************************/

 if aud_util.trigger_ctl.enabled('AUD$EMP') then
  l_descr :=
    case
      when updating
        then 'UPDATE'
      when inserting
        then 'INSERT'
      else
        'DELETE'
    end;

  aud_util.log_header('EMP',l_dml,l_descr,l_tstamp,l_id);

  if updating('EMPNO')    or
     updating('DEPTNO')   or
     deleting then
     aud_util.pkg_emp_scott.audit_row(
         p_aud$tstamp =>l_tstamp
        ,p_aud$id     =>l_id
        ,p_aud$image  =>'OLD'
        ,p_empno     =>:old.empno
        ,p_ename     =>:old.ename
        ,p_job       =>:old.job
        ,p_mgr       =>:old.mgr
        ,p_hiredate  =>:old.hiredate
        ,p_sal       =>:old.sal
        ,p_comm      =>:old.comm
        ,p_deptno    =>:old.deptno
        );
  end if;
  if inserting then
     aud_util.pkg_emp_scott.audit_row(
         p_aud$tstamp=>l_tstamp
        ,p_aud$id    =>l_id
        ,p_aud$image =>'NEW'
        ,p_empno     =>:new.empno
        ,p_ename     =>:new.ename
        ,p_job       =>:new.job
        ,p_mgr       =>:new.mgr
        ,p_hiredate  =>:new.hiredate
        ,p_sal       =>:new.sal
        ,p_comm      =>:new.comm
        ,p_deptno    =>:new.deptno
        );
  end if;
 end if;
end;
alter trigger AUD_UTIL.AUD$EMP enable

PL/SQL procedure successfully completed.

Like the update columns clause, the WHEN clause is stored in the AUDIT_UTIL_UPDATE_TRIG table, so that if you generate the trigger again, the same WHEN clause is retained. If you need to reset it, you pass the string ‘NULL’ for the p_when_clause parameter (because a true null just means “use the existing specification”).

Post Installation

Just done a large application deployment? If you are worried that you might have missed something pertaining to auditing, you can call POST_INSTALL, which will cycle through all tables for the schemas listed in the SCHEMA_LIST table and look for differences between each source table and its audit partner. For this one routine, as well as the standard OUTPUT and EXECUTE options for the p_action parameter, there is also a REPORT option, which will present a report of the differences found. This can be very useful as a validation check.
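For example (assuming POST_INSTALL follows the same p_action convention as the other routines):

SQL> exec aud_util.audit_util.post_install(p_action=>'REPORT');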

Accidents Happen

As I mentioned in the video below, sometimes you need to turn off a trigger temporarily in a session. Disabling the trigger is a drastic way to achieve this, because it breaks the application for any other session that is currently using that table. So the audit routines respect that need, and you can control the audit trigger facilities on a session-by-session basis using the TRIGGER_CTL package, which is loaded into the audit schema. Clearly, you might want to look at either not using this (if you want to force auditing ALL the time) or perhaps adding some sort of authentication to ensure people don’t go around selectively turning off the audit! But by default, it is not granted to anyone, so you’re pretty safe with the default.
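The only TRIGGER_CTL call visible in the generated triggers is the “enabled” check, so the exact procedure names for turning a trigger off and on for your session are in the package spec in the repo; the sketch below uses hypothetical names purely to show the shape of a data-maintenance session.

-- Hypothetical procedure names; check the TRIGGER_CTL spec in the github repo.
SQL> exec aud_util.trigger_ctl.disable_trigger('AUD$EMP');    -- assumption: stop auditing in this session only

SQL> update scott.emp set sal = sal * 1.1 where deptno = 10;  -- data maintenance, not audited in this session

SQL> exec aud_util.trigger_ctl.enable_trigger('AUD$EMP');     -- assumption: resume auditing for this session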

Complete list of Settings

g_aud_prefix
common prefix for all audit table names. Defaults to null

g_capture_new_updates
whether to log :NEW records as well as :OLD records. Defaults to false.

g_inserts_audited
whether to log inserts as well as updates and deletes. Defaults to false.

g_job_name
the scheduler job name for partition name tidy up. Defaults to AUDIT_PARTITION_NAME_TIDY_UP

g_log_level
The amount of logging we do. 1= dbms_output, 2=maint table, 3=both. Defaults to 3

g_logical_del_col
Sometimes an update is a delete from an application perspective, namely we set a column called (say) DELETED to ‘Y’. If you have that, you can nominate the column here and we will log such an update as a “logical delete” in the audit table metadata. Defaults to DELETED_IND.

g_table_grantees
If you want to allow access to the audit tables, set the list of schemas/roles to be granted that privilege in this associative array. Defaults to none.

g_trigger_in_audit_schema
Whether the trigger should be created in the table owning schema or the audit schema. Defaults to true (the audit schema).

g_bulk_bind
Whether the audit processing uses bulk binding or row-by-row processing for audit capture. Defaults to true.

Alternatives

My video on flashback data archive as an audit trail option

Photo by Harley-Davidson on Unsplash

Why Good Intentions + WIT + D&I = FAIL

In two hours I am honored to be the guest of the CSME Women’s Book Club event at Microsoft to discuss my book, “Crushing the IT Gender Bias.”  In celebration of this, I want to discuss the importance of Women in Technology (WIT) initiatives and the mistake many groups continue to make these days of consolidating them into Diversity and Inclusion (D&I) groups.

A few years ago, when Women in Technology (WIT) initiatives were the newest, coolest thing for businesses to be a part of, I was awarded an annual WIT award from the state’s tech organization.  I was quite humbled to receive this award and anxious to attend the awards ceremony with fellow recipients specializing in other recognized areas.  The event was a well-attended gala of over 1000 people.

The Change

The state organization’s WIT group was quite large at this time, and its scheduled activities were well received, involving all tiers of technical ability.  Some months after I’d received the award, it was announced that it would be the last year for the WIT award, as the focus had evolved to diversity and inclusion initiatives.  The change was made with incredibly good intentions.  Many of us were thrilled by the prospect of D&I finally receiving the representation, priority and much-needed attention it deserved.  We were also highly concerned that, instead of D&I getting its own group, as had been done for so many other areas, it was being folded into the WIT group.

In the next two months, I noticed that the dynamic of the WIT activities changed.  As the group started to “fold in” diversity and inclusion, the deeper technical women no longer attended.  I reached out to those women who were close to me in the WIT groups and discovered that most of them had left the original WIT group once it became a D&I group, joining Women Who Code, Girls Develop It and Girl Geek Dinners instead.  When I asked why, they highlighted the loss of focus on WIT challenges and began to refer to the state-led WIT group as the “happy/shiny people.”  This was due to a perception that boxes were simply being ticked off a checklist, the lack of deeper technical folks being involved, and a lack of discussion around solving the challenges these groups were facing in the industry.  As I had experienced something similar since the change in the state organization’s group, I ended up investing in the local Girl Geek Dinners as the organizer and partnered up to co-sponsor more technical events and discussions centered around WIT issues between us, Women Who Code and Girls Develop It.

2020 Numbers

As of 2020, women in technology make up less than 25% of the workforce, and there is a considerable caveat around that percentage, as it includes roles such as recruiters and project managers that may not actually perform technical duties.  Currently for GAFAM (Google, Apple, Facebook, Amazon and Microsoft) the WIT percentage sits at 25%.  The number for women in Microsoft's workforce is 20%, which always surprises me given the amount of focus we have on ensuring women in tech have an evolved working experience.  I've never felt more accepted in any position in my 25 years in technology than at this company.  If a company like Microsoft is still having a challenge after all the work they've done, then women in technology is not self-sustaining.  If we think we don't still need a focused effort in this area, we're setting ourselves up for failure.

Data shows that even though over half of college students are women, only 20% are headed for CS degrees.  Forty years ago, this number was 40%, and the loss of role models and the way we view technical careers are part of the reason.

The cultural challenges still exist.  Girls are still raised with huge limitations standing between them and a technical career.  71% of grown women in the industry still state they've worked in tech companies with a strong "bro culture".   I'm in a group of deeply technical individuals and we, just as with other engineering groups, consist on average of a 5:1 ratio of men to women.  Trying to "coat" the numbers by including less technical roles to soothe us is not the answer.  Giving the focus that each specialized group deserves is important to its success, otherwise your group or company only appears to be "checking the box".

The Fix

In the years since the state organization's new D&I initiative was announced, it didn't become the large win they'd hoped for, and they chose to learn from their mistake.  In the end, this group re-embraced WIT as an individual group within their organization.  They discovered that consolidating specialized groups with unique struggles isn't a formula for success and that each deserves focus.  When groups are consolidated, members are less likely to invest when they don't see investment in their own interests.  Now that they have re-engaged their WIT group, the events, conferences and such are once more well attended. I'm thrilled to see this renewed success in the WIT community, and the response to reversing the change appears to be quite positive.

I'm now seeing more organizations disbanding very successful WIT groups with strong involvement, only to roll them up into D&I groups.  As in the experience I've just shared, consolidating specialized groups into one becomes akin to choosing to do a "Black, Women and LGBTQ Lives Matter" protest.  Most won't be happy, and no one will be able to address the varied and important topics each group requires a priority on.

In regard to the D&I groups I've been part of, there was limited success in making the most of limited resources, and it alienated many important members of the originally focused groups. The consolidated topics left priorities unresolved in exchange for a broader range of coverage.  The organizations no longer appeared to be addressing the problems which were a priority to each focus group, and those members of the group who really drive change or increase involvement by others simply left.

My hope is that, by sharing this lesson learned by the state organization, it will help guide others not to make the same mistake down the road with other WIT groups.  Create new D&I groups. Create them with passion for diversity and lead them to take on the challenges uniquely faced by passionate folks who are focused on D&I.  Yes, there will be some overlap with WIT, but these two groups should not be consolidated.  Both are the future of technology, and consolidation will only deprive us all of success.

Have a great Monday, everyone!




RDBMS (vs. NoSQL) scales the algorithm before the hardware

By Franck Pachot

In The myth of NoSQL (vs. RDBMS) "joins dont scale" I explained that joins actually scale very well, with O(log N) access on the input table sizes thanks to B*Tree index access, and can even be bounded by hash partitioning with a local index, as in the DynamoDB single-table design. Jonathan Lewis added a comment that, given the names of the tables (USERS and ORDERS), we should expect an increasing number of rows returned by the join.

In this post I'll focus on exactly that: how does it scale when the index lookup has to read more and more rows? I'll still use DynamoDB for the NoSQL example, and this time I'll do the same in Oracle for the RDBMS example.

NoSQL: DynamoDB


aws dynamodb create-table \
 --attribute-definitions \
  AttributeName=MyKeyPart,AttributeType=S \
  AttributeName=MyKeySort,AttributeType=S \
 --key-schema \
  AttributeName=MyKeyPart,KeyType=HASH \
  AttributeName=MyKeySort,KeyType=RANGE \
 --billing-mode PROVISIONED \
 --provisioned-throughput ReadCapacityUnits=25,WriteCapacityUnits=25 \
 --table-name Demo

This creates a DEMO table with MyKeyPart as a hash key and MyKeySort as a sort key. I'm on the AWS Free Tier with limited capacity units, so don't pay attention to the elapsed times. I'll measure the efficiency from CloudWatch metrics and the consumed capacity units.


while aws --output json dynamodb describe-table --table-name Demo | grep '"TableStatus": "CREATING"' ; do sleep 1 ; done 2>/dev/null | awk '{printf "."}END{print}'

This is how I wait for the table creation to complete.

I'll store 1,999,000 items, distributed as 0+1+2+3+…+1999, to query them later. Each hash key value has a different number of rows.


import boto3, time, datetime
from botocore.config import Config
# adaptive retry mode backs off automatically when we hit the provisioned throughput
dynamodb = boto3.resource('dynamodb',config=Config(retries={'mode':'adaptive','total_max_attempts': 10}))
n=0 ; t1=time.time()
try:
 for k in range(0,2000):      # 2000 values for the hash key MyKeyPart
  for s in range(1,k+1):      # k items for this hash key, via the sort key MyKeySort
     r=dynamodb.Table('Demo').put_item(Item={'MyKeyPart':f"K-{k:08}",'MyKeySort':f"S-{s:08}",'seconds':int(time.time()-t1),'timestamp':datetime.datetime.now().isoformat()})
     time.sleep(0.05)         # throttle to stay under the provisioned write capacity
     n=n+1
except Exception as e:
 print(str(e))
t2=time.time()
print(f"Last: %s\n\n===> Total: %d seconds, %d keys %d items/second\n"%(r,(t2-t1),k,n/(t2-t1)))

The outer loop iterates 2000 times to generate 2000 values for MyKeyPart, so that when I read one value of MyKeyPart I read only one hash partition. The inner loop generates from zero up to 1999 values for the sort key per hash key. This generates 1,999,000 rows in total: K-00000001 has one item, K-00000002 has two items, K-00000042 has 42 items,… up to K-00001999 with 1999 items.
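As a quick arithmetic sanity check (plain SQL, nothing DynamoDB-specific), the total is the sum of the integers from 0 to 1999:

select 1999 * 2000 / 2 as total_items from dual;   -- 1999000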

I’ll query for each value of MyKeyPart. The query will read from one partition and return the items. The goal is to show how it scales with an increasing number of items.


# ${i#1} strips the leading "1" to produce key values K-00000000 to K-00001999
for i in {100000000..100001999}
do
aws dynamodb query \
 --key-condition-expression "MyKeyPart = :k" \
 --expression-attribute-values  '{":k":{"S":"'K-${i#1}'"}}' \
 --return-consumed-capacity TABLE \
 --return-consumed-capacity INDEXES \
 --select ALL_ATTRIBUTES \
 --table-name Demo
done

This is a simple Query with the partition key value from K-00000000 to K-00001999. I’m returning the consumed capacity.

For example, the last query for K-00001999 returned:


...
        },
        {
            "seconds": {
                "N": "117618"
            },
            "MyKeyPart": {
                "S": "K-00001999"
            },
            "MyKeySort": {
                "S": "S-00001999"
            },
            "timestamp": {
                "S": "2020-08-02T17:05:53.549532"
            }
        }
    ],
    "ScannedCount": 1999,
    "ConsumedCapacity": {
        "CapacityUnits": 20.5,
        "TableName": "Demo",
        "Table": {
            "CapacityUnits": 20.5
        }
    }
}

This query returned 1999 items using 20.5 RCU, which means about 49 items per 0.5 RCU. Let's do some math: an eventually consistent query reads 4KB with 0.5 RCU, and my items are about 90 bytes each (in DynamoDB the attribute names are stored with each item):

$ wc -c <<<"117618 K-00001999 S-00001999 2020-08-02T17:05:53.549532 seconds MyKeyPart MyKeySort timestamp"
94

So 49 items like this one is about 4KB… we are in the right ballpark.
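To make that ballpark explicit, here is the same arithmetic as a query (it can run anywhere; the numbers are simply the ones reported above):

select round(1999 / (20.5 / 0.5), 1)             as items_per_half_rcu, -- about 48.8 items per 0.5 RCU
       round(1999 / (20.5 / 0.5) * 94 / 1024, 1) as kb_per_half_rcu     -- about 4.5 KB, close to the 4KB read unit
from dual;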

Here are some CloudWatch statistics, which show that everything scales more or less linearly with the number of items retrieved:

[Image: CloudWatch metrics for the query workload]

The CloudWatch data, gathered every minute, is not very precise, and I benefit from some bursting capacity. Here is the graph I've made from the ScannedCount and CapacityUnits returned by the queries:

[Image: consumed capacity units vs. ScannedCount per query]

No doubt, that is the big advantage of NoSQL: simplicity. Each time you read 4KB with eventual consistency you consume 0.5 RCU. The more items you have to read, the more RCU. This is linear, and because the cost (in time and money) is proportional to the RCU, you can clearly see that it scales linearly. It increases in small steps because each 0.5 RCU holds many rows (49 on average).

This is how NoSQL scales: more work needs more resources, in a simple linear way: reading 500 items consumes 10x the resources needed to read 50 items. This is acceptable in the cloud because underlying resources are elastic, provisioned with auto-scaling and billed on-demand. But can we do better? Yes. For higher workloads, there may be faster access paths. With DynamoDB there’s only one: the RCU which depends on the 4KB reads. But SQL databases have multiple read paths and an optimizer to choose the best one for each query.

RDBMS SQL: Oracle Autonomous Database

For the SQL example, I’ve run a similar workload on the Autonomous Database in the Oracle Free Tier.


create table DEMO (
 MyKeyPart varchar2(10)
,MyKeySort varchar2(10)
,"seconds" number
,"timestamp" timestamp
,constraint DEMO_PK primary key(MyKeyPart,MyKeySort) using index local compress)
partition by hash(MyKeyPart) partitions 8;

This is a table very similar to the DynamoDB one: hash partition on MyKeyPart and local index on MyKeySort.


insert /*+ append */ into DEMO
select
 'K-'||to_char(k,'FM09999999') K,
 'S-'||to_char(s,'FM09999999') S,
 0 "seconds",
 current_timestamp "timestamp"
from
 (select rownum-1 k from xmltable('1 to 2000'))
,lateral  (select rownum s from xmltable('1 to 2000') where rownum<=k) x
order by k,s
/

This is similar to the Python loops I've used to fill the DynamoDB table. I use XMLTABLE as a row generator, and a lateral join as an inner loop. The select defines the rows and the insert loads them directly, without going through application loops.
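If you have not seen XMLTABLE used as a row generator before, here is the trick in isolation (a tiny sketch, independent of the DEMO table):

-- generates 5 rows numbered 0 to 4, the same pattern used for the K values above
select rownum - 1 as k from xmltable('1 to 5');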


SQL> select count(*) from DEMO;

    COUNT(*)
____________
   1,999,000

I have my 1999000 rows here.


commit;

When you are happy with your changes, don't forget to commit to make them visible and durable as a whole. This takes no time.

In order to do something similar to the DynamoDB queries, I’ve generated a command file like:


select * from DEMO where K='K-00000000';
select * from dbms_xplan.display_cursor(format=>'allstats last +cost');
select * from DEMO where K='K-00000001';
select * from dbms_xplan.display_cursor(format=>'allstats last +cost');
select * from DEMO where K='K-00000002';
select * from dbms_xplan.display_cursor(format=>'allstats last +cost');
...
select * from DEMO where K='K-00001998';
select * from dbms_xplan.display_cursor(format=>'allstats last +cost');
select * from DEMO where K='K-00001999';
select * from dbms_xplan.display_cursor(format=>'allstats last +cost');

For each one I queried the execution plan, which is much more detailed than the --return-consumed-capacity output.
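The command file itself does not have to be typed by hand; it can be spooled from a small generator query like the following sketch (the spool file name is just an assumption):

set pagesize 0 linesize 200 feedback off
spool queries.sql
select 'select * from DEMO where K=''K-'||to_char(level-1,'FM09999999')||''';'||chr(10)||
       'select * from dbms_xplan.display_cursor(format=>''allstats last +cost'');'
from dual connect by level <= 2000;
spool off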

I have built the following graph with the results from the execution plan. Rather than showing the optimizer estimated cost, I used the execution statistics about the buffer reads, as they are the most similar to the DynamoDB RCU. However, I have two kinds of execution plan: index access which reads 8KB blocks, and full table scan which is optimized with multiblock reads. In the following, I have normalized this metric with the ratio of multiblock reads to single block reads, in order to show both of them:

[Image: normalized buffer reads per query for the two execution plans]

  • The Plan hash value: 3867611502, in orange, is PARTITION HASH SINGLE + INDEX RANGE SCAN + TABLE ACCESS BY LOCAL INDEX ROWID which is very similar to the DynamoDB query. The cost is proportional to the number of rows returned.
  • The Plan hash value: 388464122, in blue, is PARTITION HASH SINGLE + TABLE FULL SCAN which is scanning a partition with multi-block I/O and direct-path read (and even storage index in this case as the Autonomous Database runs on Exadata). Thanks to those RDBMS optimizations, this access path is fast even when we don’t read all rows. In this example, I never read more than 1% of a partition (1999 rows from a total of 1999000 distributed to 8 partitions). What is awesome is that the cost here does not depend on the size of the result, but is constant: index access is faster for few rows but as soon as you read many, the cost is capped with full table scan.

Of course, the table can grow, and then a full table scan is more expensive. But we can also increase the number of partitions in order to keep the FULL TABLE SCAN within the response time requirements, because it actually reads only one partition. And the beauty is that, thanks to SQL, we don't have to change a single line of code. The RDBMS has an optimizer that estimates the number of rows to be retrieved and chooses the right access path. Here, when retrieving between 500 and 800 rows, the cost of both is similar and the optimizer chooses one or the other, probably because I'm touching partitions with a small difference in the distribution. Below a few hundred rows, the index access is clearly the best. Above a thousand, scanning the whole partition has a constant cost even when the result set increases.
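If you want to reproduce the comparison yourself on one key, optimizer hints are a quick way to force each access path (a sketch; the hints reference the table and index names from the DDL above, and the key column as it appears in the queries):

select /*+ index(DEMO DEMO_PK) */ * from DEMO where K='K-00001999';  -- force the index range scan
select /*+ full(DEMO) */ * from DEMO where K='K-00001999';           -- force the (partition) full scan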

Here are the execution plans I've built the graph from. This one is index access for a few rows (I picked the one for 42 rows):


DEMO@atp1_tp> select * from DEMO where K='K-00000042';

    MyKeyPart     MyKeySort    seconds                          timestamp
_____________ _____________ __________ __________________________________
K-00000042    S-00000001             0 02-AUG-20 06.27.46.757178000 PM
K-00000042    S-00000002             0 02-AUG-20 06.27.46.757178000 PM
K-00000042    S-00000003             0 02-AUG-20 06.27.46.757178000 PM
...
K-00000042    S-00000042             0 02-AUG-20 06.27.46.757178000 PM

42 rows selected.

DEMO@atp1_tp> select * from dbms_xplan.display_cursor(format=>'allstats last +cost');

                                                                                                                        PLAN_TABLE_OUTPUT
_________________________________________________________________________________________________________________________________________
SQL_ID  04mmtv49jk782, child number 0
-------------------------------------
select * from DEMO where K='K-00000042'

Plan hash value: 3867611502

--------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name    | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |
--------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |         |      1 |        |     6 (100)|     42 |00:00:00.01 |       5 |      1 |
|   1 |  PARTITION HASH SINGLE                     |         |      1 |     42 |     6   (0)|     42 |00:00:00.01 |       5 |      1 |
|   2 |   TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| DEMO    |      1 |     42 |     6   (0)|     42 |00:00:00.01 |       5 |      1 |
|*  3 |    INDEX RANGE SCAN                        | DEMO_PK |      1 |     42 |     3   (0)|     42 |00:00:00.01 |       3 |      0 |
--------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("K"='K-00000042')

3 buffers to read from the index and 2 additional buffers to read from the table (one of them was not in the cache and was a physical read). It is a single partition. The more rows in the result, the more buffers have to be read from the table. I have about one thousand rows per 8KB buffer here (column names are in the dictionary, not in each block, and I used the optimal datatypes, like TIMESTAMP, to store the timestamp).

Here I take the last query returning 1999 rows:


DEMO@atp1_tp> select * from DEMO where K='K-00001999';

            K             S    seconds                          timestamp
_____________ _____________ __________ __________________________________
K-00001999    S-00000001             0 02-AUG-20 06.27.46.757178000 PM
K-00001999    S-00000002             0 02-AUG-20 06.27.46.757178000 PM
...
1,999 rows selected.

DEMO@atp1_tp> select * from dbms_xplan.display_cursor(format=>'allstats last +cost');

                                                                                            PLAN_TABLE_OUTPUT
_____________________________________________________________________________________________________________
SQL_ID  g62zuthccp7fu, child number 0
-------------------------------------
select * from DEMO where K='K-00001999'

Plan hash value: 388464122

----------------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |      |      1 |        |    21 (100)|   1999 |00:00:00.01 |    1604 |
|   1 |  PARTITION HASH SINGLE     |      |      1 |   2181 |    21  (10)|   1999 |00:00:00.01 |    1604 |
|*  2 |   TABLE ACCESS STORAGE FULL| DEMO |      1 |   2181 |    21  (10)|   1999 |00:00:00.01 |    1604 |
----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - storage("K"='K-00001999')
       filter("K"='K-00001999')

One partition is fully scanned. The execution plan shows 1604 buffers, but there are many optimizations to read them faster, and this is why the final cost is not very high (cost=21 here means that reading those sequential blocks is estimated to take the same time as reading 21 random blocks). One major optimization is reading 128 blocks with one I/O call (1MB multiblock read), another is reading them while bypassing the shared buffers (direct-path read), and here there is even some offloading to STORAGE, where rows are filtered before reaching the database instance.
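Some back-of-the-envelope arithmetic shows why the cost stays so low despite the 1604 buffers (just the numbers from this run):

select ceil(1604 / 128) as approx_1mb_read_calls from dual;  -- about 13 multiblock I/O calls for the whole partition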

I deliberately didn't get into the details, like why the cost of the full table scan has some variations. This depends on the hash distribution and the optimizer statistics (I used dynamic sampling here). You can read more about the inflection point where a full table scan becomes better than index access in a previous post: https://blog.dbi-services.com/oracle-12c-adaptive-plan-inflexion-point/ as this also applies to joins, and scaling the algorithm can even happen after the SQL query compilation, at execution time, in some RDBMS (Oracle and SQL Server for example). As usual, the point is not to take the numbers from this small example, but to understand the behaviour: linear increase and then constant cost. NoSQL DynamoDB is optimized for key-value access. If you have queries reading many keys, you should stream the data into another database service. SQL databases like Oracle are optimized for the data, and you can run multiple use cases without changing your application code. You just need to define the data access structures (index, partitioning) and the optimizer will choose the right algorithm.

Update 08-AUG-2020

People working with NoSQL technologies are used to seeing numbers and benchmarks on huge data sizes. And I got a few comments like this one: Toy data size, perfect conditions (no joins, no concurrent reads or writes), no final comparison of actual speed. This is so misleading. This post is not here to sell a technology by throwing out numbers, but to understand how things work. NoSQL provides a simple API and scales with hardware. RDBMS provides complex algorithms to optimize before scaling out. To understand this, you need to observe how it works, which means reproducing the behaviour on a simple, isolated, small data set. That was the goal of the post. Now, you can extrapolate to huge data sets in order to decide whether you prefer to pay for coding and hardware, or for RDBMS software licenses. Nothing is free, but whatever you choose, both technologies can give you amazing results if you design and use them correctly.

The article RDBMS (vs. NoSQL) scales the algorithm before the hardware appeared first on Blog dbi services.

Video : Multitenant : Running Scripts Against Multiple Pluggable Databases Using catcon.pl

In today’s video we give a quick demonstration of using catcon.pl to run scripts against multiple pluggable databases (PDBs) in a container database (CDB).

The video is based on one section of this article.

You can find loads of information about living with CDBs and PDBs in the following articles and the YouTube playlist linked below.

The star of today's video is my long suffering wife Debra Lilley. Clearly suffering because of her social isolation, which of course means not seeing me.

A lesson from NoSQL (vs. RDBMS): listen to your users

By Franck Pachot

I have written a few blog posts about some NoSQL (vs. RDBMS) myths ("joins dont scale", "agility: adding attributes" and "simpler API to bound resources"). And I'll continue on other points that are claimed by some NoSQL vendors and are, in my opinion, misleading through a lack of knowledge of, and facts about, RDBMS databases. But here I'm sharing an opinion that goes the other way: SQL being user-friendly is now a myth.
Yes, that was the initial goal for SQL: design a relational language that would be more accessible to users without formal training in mathematics or computer programming (this is quoted from "Early History of SQL" by Donald D. Chamberlin, on the history behind "SEQUEL: A STRUCTURED ENGLISH QUERY LANGUAGE").

However, it seems that this language became too complex for doing simple things. What was designed to be an end-user language is now generated by software most of the time. Generated by BI tools for reporting and analytics. Or by an ORM framework for OLTP. And the SQL generated is often far from optimal (we have all seen many bad queries generated by Tableau, or by Hibernate, for example), not because the tool that generates it is bad, but because no tool can compensate for a lack of understanding of the data model.

Then, because the generated SQL was bad, people came up with the idea that SQL is slow. Rather than understanding why an index is not used in a BI query (example), or why an OLTP request doesn't scale (example), they went to Hadoop for their BI analytics (when you read too much, better read it faster) and to NoSQL for their OLTP (when you use the database as an ISAM row store, better optimize it for hashed key-value tables).

And then there are two reactions from database vendors. The legacy ones improve their RDBMS to cope with those bad queries (more transformations in the optimizer/query planner, adaptive optimization features,…).
[Image: DynamoDB simple API]
And the newcomers build something new for them (a limited get/set API in key-value stores, like the PutItem/GetItem/Query/Scan of DynamoDB). And each camp has its advocates. The RDBMS team tries to explain how to write SQL correctly (just look at the number of search hits for "bind variables" or "row-by-row" in http://asktom.oracle.com/). The NoSQL team claims that SQL is dead and explains how to build complex data models on a key-value store (see Rick Houlihan's single-table design https://youtu.be/HaEPXoXVf2k?t=2964).

Who wins? It depends

And then who wins? It depends on the user population. Those who built a complex ERP on Oracle, SQL Server, PostgreSQL,… will continue with SQL because they know that ACID, stored procedures, SQL joins and aggregations, and a logical structure independent of physical storage,… made their life easier (security, agility, performance). The oldest DBAs and developers already had this debate, Codasyl vs. relational (thinking, like David J. DeWitt and Michael Stonebraker, that it would be a major step backwards to say "No" to the relational view).

But architects in a modern development context (with very short release cycles, multiple frameworks and polyglot code, and large coding teams of full-stack devs rather than a few specific experts) tend to favour the NoSQL approach. Their developer teams already know procedural languages, objects, loops, HTTP calls, JSON,… and they can persist their objects into a NoSQL database without learning something new. Of course, there is something wrong in the idea that you don't have to learn anything when going from manipulating transient objects in memory to storing persistent and shared data. When data will be queried by multiple users, for years to come, and for new use cases, you need a specific design and implementation that you don't need for an application server that you can stop and start from scratch on multiple nodes.

Whatever architecture you choose, you will have to learn. It can be on ORM (for example Vlad Mihalcea on Hibernate https://vladmihalcea.com/tutorials/hibernate/), on NoSQL (for example Alex DeBrie on DynamoDB https://www.dynamodbbook.com/), as well as on SQL (like Markus Winand https://sql-performance-explained.com/). When you look at this content you will see that there are no shortcuts: you need to learn, understand and design. And while I'm referencing many nice people who share knowledge and tools, you should have a look at Lukas Eder's JDBC abstraction framework https://www.jooq.org/ which is a nice intermediate step between procedural code and the database query language. Because you may understand the power of SQL (and the flaws of top-down object-to-relational generators) but refuse to write queries as plain text character strings, and prefer to write them in a Java DSL.

Both approaches need some learning and require good design. Then why does NoSQL (or ORM before, or GraphQL now, or any new API that replaces or hides SQL) appear easier to developers? I think the reason is that the NoSQL vendors listen to their users better than the SQL vendors. Look at MongoDB marketing: they propose the exact API that application developers are looking for: insert and find items, from a data store, that are directly mapped to the Java objects. Yes, that is appealing and easily adopted. However, you cannot manipulate shared and persistent data in the same way as private, in-memory, transient objects, but the priority was the user API, ahead of consistency and reliability. The ORM answer was complex mapping (the "object-relational impedance mismatch"), which was finally too complex for generating optimal queries. MongoDB, listening to their users, just kept it simple: persistence and sharing are best effort only, not the priority: eventual consistency. This lack of features is actually sold as a feature: the users complain about transactions, normal forms,… so let's tell them they don't need them. It is interesting to read the new MongoDB CTO Mark Porter's propaganda in "Next Chapter in Delighting Customers":
Normalized data, mathematically pure or not, is agonizing for humans to program against; it's just not how we think. […] And while SQL might look pretty in an editor, from a programming point of view, it's close to the hardest way to query information you could think of. Mark Porter, who knows RDBMS very well, is adopting the MongoDB language: we hear you, you don't want to learn SQL, you want a simple API, we have it. And, on the opposite side, the RDBMS vendors, rather than listening and saying "yes" to this appetite for new cool stuff, are more authoritarian and say: "no, there's no compromise on consistency, you need to learn the relational model and SQL concepts because that is inescapable to build reliable and efficient databases".

I’ll throw a silly analogy here. Silly because most of my readers have a scientific approach, but… did you ever go to a Homeopathic Doctor? I’m not giving any opinion about Homeopathy cures here. But you may realize that Homeopathic Doctors spend a lot of time listening to you, to your symptoms, to your general health and mood, before giving any recommendation. That’s their strength, in my opinion. When you go to an allopathic specialist, you may feel that he gives you a solution before fully listening to your questions. Because he knows, because he is the specialist, because he has statistics on large population with the same symptoms. Similarly, I think this is where RDBMS and SQL experts missed the point. It goes beyond the DBA-Dev lack of communication. If developers think they need a PhD to understand SQL, that’s because the SQL advocates failed in their task and came with all their science, and maybe their ego, rather than listening to users.

Listen to the problems

Ok, sorry for this long introduction. I wanted to throw in some links and thoughts to get multiple ideas on the subject.

Here is where I got the idea for this blog post:

Felix Geisendörfer is someone I follow, so I know his high technical level. To his simple question (the order of execution of a function in a SELECT with an ORDER BY) I didn't just answer "you can't" but tried to explain the reason. And then, without realizing it, I was giving the kind of "answer" that I hate to see in technical forums, like "what you do is wrong", "your question is not valid", "you shouldn't do that"… My intention was to explain something larger than the question, but in the end, I didn't answer the question.

When people ask a question, they have a problem to solve and may not want to think about all the concepts behind it. I like to learn the database concepts because that is the foundation of my job. Taking the time to understand the concepts helps me answer hundreds of future questions. And, as a consultant, I need to explain the reasons. Because the recommendations I give to a customer are valid only within a specific context. If I don't give the "How" and "Why" with my recommendations, they will make no sense in the long term. But DBAs and SQL theoreticians should understand that developers have different concerns. They have a software release to deliver before midnight, and they have a problem to fix. They are looking for a solution, not for an academic course. This is why Stack Overflow is popular: you can copy-paste a solution that works immediately (at least one that worked for others). And this is why ORMs and NoSQL are appealing and popular: they provide a quick way to persist an object without going through relational theory and SQL syntax.

Listen to the users

I'm convinced that understanding the concepts is mandatory for the long term, and that ACID, SQL and relational databases are a huge evolution over eventual consistency, procedural languages and key-value hierarchical models. But with those technologies, we forgot to listen to the users. I explained how joins can scale by showing an execution plan. But each RDBMS has a different way to display the execution plan, and you need some background knowledge to understand it. A lot, actually: access paths, join methods, RDBMS-specific terms and metrics. If the execution plan were displayed as a sequence diagram or procedural pseudo-code, it would be immediately understandable by developers who are not database specialists. But that is not the case, and reading an execution plan gets more and more difficult.
NoSQL makes it simple:

It all depends on the point of view. I admit that RDBMS execution plans are not easy to read. But I don't find NoSQL easier. Here is an example of me trying to understand the metrics from a DynamoDB Scan and match them with CloudWatch metrics:
https://twitter.com/FranckPachot/status/1287773195031007234?s=20

If I stick to my point of view (as a SQL developer, database architect, DBA, consultant,…) I'm convinced that SQL is user-friendly. But when I listen to some developers, I realize that it is not. And that is not new: CRUD, ORM, NoSQL,… all those APIs were created because SQL is not easy. My point of view is also biased by the database engines I have been working with. A few years of DB2 at the beginning of my career. And 20 years mostly with Oracle Database. This commercial database is very powerful and has nothing to envy NoSQL for when it comes to scalability: RAC, hash partitioning, Parallel Query,… But when you look at the papers about MongoDB or DynamoDB, the comparisons are always with MySQL. I even tend to think that the NoSQL movement started as a "NoMySQL" rant at a time when MySQL had very limited features and people ignored the other databases. We need to listen to our users, and if we think that an RDBMS is still a solution for modern applications, we need to listen to the developers.

If we don't learn and take lessons from the past, we will keep making the same mistake. When CRUD APIs were developed, the SQL advocates answered with their science: CRUD is bad, we need to work with sets of rows. When Hibernate was adopted by the Java developers, the relational database administrators answered again: ORM is bad, you need to learn SQL. And the same happens now with NoSQL. But we need to open our eyes: developers need those simple APIs. Hibernate's authors listened to them, and Hibernate is popular. MongoDB listened to them and is popular. DynamoDB listened to them and is popular. And SQLAlchemy for Python developers. And GraphQL to federate from multiple sources. Yes, they lack a lot of features that we have in RDBMS, and they need the same level of learning and design, but the most important thing is that they offer the APIs that the users are looking for. Forty years ago, SQL was invented to match what the users wanted to do. Because users were, at that time, what we call today 'Data Scientists': they needed a simple, forgiving API for ad-hoc queries. However, it looks like SQL became too complex for current developers and missed the point with the integration with procedural languages: mapping Java objects to relational rowsets through SQL is not easy. And even if the SQL standard evolved, the RDBMS vendors forgot to listen to the developer experience. Look, even case-insensitivity is a problem for Java programmers:

Using the same naming conventions for procedural code and database objects is a valid requirement. The SQL standard has evolved for this (SQL-92 defines case-sensitive quoted identifiers) but, actually, only a few RDBMS took the effort to be compliant with it (just play with all the databases in this db<>fiddle). And even on those databases which implement the SQL evolution correctly (Oracle, DB2 and Firebird, paradoxically the oldest ones and the ones without 'SQL' in their name), using quoted identifiers will probably break some old-school DBA scripts which do not correctly handle case-sensitive identifiers.
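For illustration, this is the behaviour the paragraph above refers to, in Oracle syntax (a minimal sketch; other engines differ, as the db<>fiddle shows):

create table "OrderItems" ( "orderId" number );
select "orderId" from "OrderItems";   -- works: quoted identifiers keep their exact case
select orderId from "OrderItems";     -- fails with ORA-00904: the unquoted name is folded to ORDERID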

The lack of a simple API is not only about SQL requests to the database. In all RDBMS, understanding how an execution plan can scale requires a lot of knowledge. I'll go further on that in a later post. I'll continue to write about the NoSQL myths, but that is not sufficient to get developers adopting SQL again like their parents did 40 years ago. We need an easier API. Not for data scientists but for industrial coding factories. Developers should not have to learn normal forms; they should just think about business entities. They should not have to write SQL text strings in their Java code. They should see execution plans as sequence diagrams or procedural pseudo-code to understand the scalability.

That's what DBAs and RDBMS advocates should learn from NoSQL, because they didn't take the lesson with ORM: listen to your users, and improve the developer experience. Or we will end up, once again, with yet another attempt to abstract the relational model, rowset data manipulation, and stateful transaction consistency, one which can scale only with massive hardware and IT resources. I hope to see interesting discussions on this blog or on Twitter.

The article A lesson from NoSQL (vs. RDBMS): listen to your users appeared first on Blog dbi services.

How to speed up Data Vault Query Performance

Data Vault is just too slow?

You think Data Vault is a great model but you struggle with query performance?

Use Exasol as a Platform!

We have the remedy: Use our analytical database Exasol as a platform and query performance problems with Data Vault disappear.

What’s the problem with Data Vault?

The Data Vault model comes with an increased number of tables and joins. Other data warehouse platforms often struggle to deliver acceptable query performance with that.

Exasol solves that problem!

Exasol, on the other hand, is very good with joins, even across many tables. We use columnar storage and we always compress data. And our database is designed to do MPP on commodity hardware.

With these attributes, combining Data Vault with Exasol already delivered the best query performance for that model in earlier versions of Exasol. And for a very reasonable price, if I may add.

Even faster now with Exasol 7.0

Now with the release of Exasol 7.0, Data Vault performance got even better, thanks to the introduction of the HASHTYPE datatype for hash join columns and several improvements on JOIN operations.

Curious? Take action!

You want to see for yourself? Download our free Community Edition and run a test.

I bet we beat any other platform you have in place at the moment when it comes to query performance on a Data Vault model!

We’re also happy to do a PoC with your data and your queries.

As I like to say: don't believe it, test it!

JSON can be such a TREAT

What better way to start a new week than with a cheesy title to my blog post! Smile

But there is method to my unstructured madness today as we talk about the TREAT function, available from 18c for JSON data. As many of us will already know, it is possible to dive into JSON data and extract information from its components as if they were relational columns using the very cool dot notation.

In order to do so, we need to be a little generous to the database by letting it know that the contents of an unstructured column, such as a CLOB, are truly valid JSON data.

I will start with a table with a simple CLOB which does indeed contain valid JSON but I have not let the database know about that fact:



SQL> create table my_json_table
  2  (
  3      id number(10) not null constraint my_pk primary key,
  4      json_data clob
  5  );

Table created.

SQL>
SQL> insert into my_json_table (id, json_data) values (1,
  2    '{"fields": {"field1":{ "name": "field1","label": "My Field 1","value": "ABCD"},
  3      "field2":{ "name": "field2","label": "My Field 2","value": "XYZ"},
  4      "field3":{ "name": "field3","label": "My Field 3","value": "Y"},"field4":{ "name":
  5      "field4","label": "My Field 4","value": ""},"field5":{ "name":
  6      "field5","label": "My Field 5","value": ""},"field6":{ "name":
  7      "field6","label": "My Field 6","value": ""},"field7":{ "name":
  8      "field7","label": "My Field 7","value": ""},}}');

1 row created.

SQL>
SQL> insert into my_json_table (id, json_data) values (2,
  2    '{"fields": {"field1":{ "name": "field1","label": "My Field 1","value": ""},
  3      "field2":{ "name": "field2","label": "My Field 2","value": "XYZ"},
  4      "field3":{ "name": "field3","label": "My Field 3","value": "Y"},
  5      "field4":{ "name": "field4","label": "My Field 4","value": "QWERTY"},
  6      "field5":{ "name": "field5","label": "My Field 5","value": ""},
  7      "field6":{ "name": "field6","label": "My Field 6","value": ""},
  8      "field7":{ "name": "field7","label": "My Field 7","value": ""},}}');

1 row created.

The moment I try to use the dot notation I get an error because the database can’t really guarantee that this is JSON.



SQL> select * from my_json_table m where m.json_data.fields.field1.value='ABCD';
select * from my_json_table m where m.json_data.fields.field1.value='ABCD'
                                    *
ERROR at line 1:
ORA-00904: "M"."JSON_DATA"."FIELDS"."FIELD1"."VALUE": invalid identifier

The simple step of adding a CHECK constraint to ensure that only valid JSON is allowed means that all of the coolness of the dot notation becomes available to me:



SQL> alter table my_json_table modify json_data check (json_data is json);

Table altered.

SQL>
SQL> select * from my_json_table m where m.json_data.fields.field1.value='ABCD';

        ID JSON_DATA
---------- --------------------------------------------------------------------------------
         1 {"fields": {"field1":{ "name": "field1","label": "My Field 1","value": "ABCD"},

However, as I discovered when helping someone on Stack Overflow recently, sometimes the check constraint is not enough. Check constraints can only be applied to a table, but sometimes we want to expose our JSON data via a view.

Here is a trivial view that simply duplicates the data from my table and look what happens when I try to use the dot notation on that view:


SQL> create or replace view my_json_view as
  2  select id, json_data
  3  from my_json_table
  4  union all
  5  select id, json_data from my_json_table;

View created.

SQL>
SQL> select * from my_json_view j where j.json_data.fields.field1.value='ABCD';
select * from my_json_view j where j.json_data.fields.field1.value='ABCD'
                                   *
ERROR at line 1:
ORA-00904: "J"."JSON_DATA"."FIELDS"."FIELD1"."VALUE": invalid identifier

Even though both sides of the UNION ALL are valid JSON, my hypothesis is that the database does not want to incur the cost of examining all the components of the query (after all, we might have hundreds of UNION ALL elements) to see if every single column is covered by an IS JSON check constraint.

Enter the TREAT function! The TREAT function also comes in very handy for handling situations where there is no way to have a CHECK constraint at all.

By wrapping the view in another SELECT and including a TREAT function in that outer SELECT, this is all that is needed to let the database know that we can consume this data as JSON and hence get the advantages of our dot notation:


SQL> create or replace view my_json_view as
  2  select m.id,  treat(m.json_data as json) json_data
  3  from
  4  (
  5    select id, json_data
  6    from my_json_table
  7    union all
  8    select id, json_data from my_json_table
  9  ) m;

View created.

SQL>
SQL> select * from my_json_view j where j.json_data.fields.field1.value='ABCD';

        ID JSON_DATA
---------- --------------------------------------------------------------------------------
         1 {"fields": {"field1":{ "name": "field1","label": "My Field 1","value": "ABCD"},
         1 {"fields": {"field1":{ "name": "field1","label": "My Field 1","value": "ABCD"},

Check out the docs on the TREAT function, and if you ever get errors using that dot notation, then before you resort to falling back to the JSON_VALUE and JSON_QUERY functions, perhaps the TREAT treatment is all you need!
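For comparison, if you did fall back to JSON_VALUE, the equivalent filter on the base table would look something like this (a minimal sketch, and no IS JSON constraint is required for it):

select *
from   my_json_table m
where  json_value(m.json_data, '$.fields.field1.value') = 'ABCD';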

Footnote: We want your JSON usage to be as seamless as possible, so we’re constantly working to improve the JSON functionality. For that reason, we logged Bug 31678629 with the intent of removing the need for TREAT in this UNION ALL case, so if this is impacting you, keep an eye on that bug for progress.

Industry Experts and Pro Bloggers Come Together to Inspire, Teach, and Help You (and Your Site) Succeed

At the Official WordPress.com Growth Summit, the two-day virtual conference running twice from August 11-13, you’ll have access to about 50 speakers and presenters across 90+ breakout sessions and keynote conversations. The varied lineup will appeal to new and established bloggers; professionals in tech, media, and marketing; and anyone ready to build or expand their presence on the web.

Event highlights include a talk from Smitten Kitchen creator and cookbook author Deb Perelman; panels with Newspack founder Kinsey Wilson on the state of independent and local journalism amid a struggling media landscape and in the age of COVID-19; and sessions with founders and representatives from companies and organizations like Xbox, Google, Sandwich, African Queer Youth Initiative, Out in Tech, Looka, and more.

Wondering what to expect? Here are 10 entrepreneurs, founders, bloggers, and developers scheduled to speak at the summit.


Jason Snell


Jason is the founder and editor-in-chief of Six Colors, a site that covers Apple, technology, and the intersection of tech and culture. Previously, as the lead editor for Macworld, he covered every major Apple product release for more than a decade.

In a breakout session on content and community with web designer and A List Apart founder Jeffrey Zeldman, they’ll share blogging and podcasting tips, advice on growing your audience, and making money with paywalls, stores, advertising, and more. “Membership programs can build loyalty and provide the most enthusiastic portions of your audience with more of what they love,” says Jason. “I switched to WordPress in order to build a richer — and more marketable — membership program, and the results have been excellent.”

Tina Wells


“The 4Ps of marketing — product, place, promotion, and price — have changed dramatically,” says Tina Wells, the founder of Elevation Tribe, a community and lifestyle publication focused on helping women of color launch, grow, and lead their businesses. In her breakout session, Tina will talk about creating a marketing plan for beginners and small businesses. “We’ll go back to basics and discover how we can make the right changes to help your business not only survive, but thrive.”

A business strategist and passionate entrepreneur, Tina believes in the power of an effective online presence. “Your website is your first storefront, and it tells the story of your brand,” she says, noting that a strong visual identity and fresh design are critical. You can see both in action at Elevation Tribe and on Tina’s website.

Kristin Smith


“My authenticity is what allows me to share my brand with others. Being yourself — and transparent — is what people see and keeps them coming back. There’s only one of you, and that alone is enough,” says Kristin Smith, food blogger at Krisp X Kristin and podcast co-host of At the Bar.

In Kristin’s breakout session, you’ll find out how she turned her passion for cooking into a popular blog and podcast. She’ll share what she’s learned, giving you a set of actionable steps to inspire you to overcome your fears and get started on your website.

“Sometimes in the beginning it’s hard to see a finish line. But, keep your head down and keep working. Throw yourself into it, network with other likeminded people, and continue to learn,” says Kristin. “It all will pay off in the end.”

Paul Bakaus


“The web used to be the best platform for content creation, distribution, and consumption,” says Paul Bakaus, a senior staff development advocate at Google. “But today the walled garden — closed social apps and platforms — are drinking our milkshake. I say it’s time we stop letting them! We need to, as a community, work on making the web more visual, frictionless, and bite-sized. Web Stories are our approach to help with that, and we can’t wait for you to join us.”

In his breakout session, Telling Web Stories with WordPress, Paul will introduce you to Web Stories — a mobile-first tool that allows storytellers to create visual narratives with engaging animations and tappable interactions — and show how you can use them on your website with the Web Stories for WordPress plugin.

Anton Diaz


“We want to contribute to an awesome post-COVID-19 world,” says Anton Diaz, traveler, founder, and creator of Our Awesome Planet. “We’re helping food businesses to connect with foodies and travel destinations to engage with travelers.”

In his breakout session, Anton will share the principles that have guided his food and travel blog for 15 years. “There are core beliefs that have helped Our Awesome Planet stand out,” says Anton. “We make sure that all the food and travel experiences we feature are based on first-hand experience, grounded on our original vision: documenting the food and travel adventures of our family as our four sons — Aidan, Joshua, Raphael, and Yugi — have grown.”

Deb Perelman


Deb Perelman, the longtime food blogger at Smitten Kitchen, is a WordPress.com community favorite. What started as a food blog and side project in her tiny New York City kitchen has grown into one of the most popular food blogs on the internet, as well as a series of best-selling cookbooks.

“I just really, really enjoy blogging,” Deb said in an interview with WordPress.com several years ago. “I love having a place where I can share what I’m working on in an immediate way and have a conversation with people who are equally excited about it, and who encourage me to try more stuff that scares me in the kitchen.” At the conference, Deb will share her story and the journey of Smitten Kitchen, from start to present.

Kim Newton


Kim Newton, a global marketing executive with over 20 years of experience working with corporations and brands, is the creator of The Intentional Pause, a project that empowers women to follow their dreams using the power of pause. “I give every woman permission — yes permission — to just stop and think,” she writes on her website. “I want to help women to embrace pausing as a powerful way forward, with intention, to achieve their dreams.”

Kim has had many successes in consumer marketing, corporate strategy, and business development, and will share her insights on marketing and PR at the summit.

Chris Coyier


Chris Coyier, the co-founder of CodePen, is a front-end developer and designer. He’s also the creator of CSS-Tricks, a resource that’s all about building websites, mostly from a front-end perspective, and was built on WordPress since day one. “I’m a solo developer for the most part on CSS-Tricks. Just me over here. I don’t have the budget for a fancy development team. But I still want to feel powerful and productive. That’s one of the things that WordPress has given to me. I feel like I can build just about anything on WordPress, and do it in a way that doesn’t feel like a mountain of technical debt that I would struggle to maintain.”

In his breakout session — Putting WordPress to Work — Chris will take us behind the scenes at CSS-Tricks, sharing “just how powerful WordPress can be as a platform to run a publishing business on.”

Amy Chan


For Amy Chan, blogging provided the path to a publishing career. Amy is the founder of Renew Breakup Bootcamp, the world’s first breakup bootcamp, and the author of Breakup Bootcamp: The Science of Rewiring Your Heart. “Heartbreak is something that affects everyone, so people were able to connect quickly with the company’s offering,” says Amy.

In Amy’s breakout session — How I Accidentally Became a Thought Leader By Blogging — she shares her own story, and how you can transform your side gig into something bigger. To start, Amy says to just do it: “Stop hiding. Stop waiting. Stop perfecting. Perfection is procrastination in disguise,” she says. “Start the blog, launch the event, put your creation out in the world. Whatever it is, just get in a mindset of taking action. Create as a way of being. Launch it now and develop it later.”

Danica Kombol


Danica Kombol founded Everywhere Agency to help brands tell better stories through social media and influencer marketing. As CEO, she leads a team that works with brands to launch content-driven campaigns and to create meaningful conversations with followers in powerful, measurable ways.

Danica also launched Everywhere Society, a network of about 5,000 established influencers and bloggers, which powers the agency’s influencer campaigns and brand ambassadorships. Her session will cover blogging for purpose and profit.


Browse the agenda for all sessions, demos, and talks. Buy your ticket now for early bird pricing of $79, which expires after July 31!

Video : APEXExport : Export APEX Applications and Workspaces From the Command Line

In today’s video we’ll give a quick demonstration of using the APEX command-line export utility.

The video is based on this article, which includes more examples, including some for Windows.

The star of today's video is my daughter Heli "Hell-Squirel" Helskyaho. Make sure you check out the cloud forming a Pikachu tail above her head!