Oakies Blog Aggregator

Exadata Fast Data File Creation

This post is the result of some testing I performed on Exadata data file creation. You may know that Exadata offloading incorporates SQL optimisations as well as some infrastructure work. For quite some time Exadata has allowed the DBA to create data files a lot quicker than on traditional systems. This has been documented before, for example by @mpnsh here

The final comment on his blog entry was a remark that data file creation is quite fast, but that the same is not true for online redo logs. Especially in environments where you duplicate production to a lower-tier environment, you have to wait for the online redo logs (including all members across all threads) to be zeroed. This is no longer an issue with Fast Data File Creation. If your system is configured to use Write-Back Flash Cache (WBFC from now on) and you are on Exadata 11.2.3.3, you can benefit from super-fast file creation, including online redo logs. Here is an example, taken from a SQL trace:

TKPROF: Release 12.1.0.2.0 
...
Trace file: /u01/app/oracle/diag/rdbms/mbach/MBACH1/trace/MBACH1_ora_124411.trc
Sort options: default

********************************************************************************
count    = number of times OCI procedure was executed
cpu      = cpu time in seconds executing
elapsed  = elapsed time in seconds executing
disk     = number of physical reads of buffers from disk
query    = number of buffers gotten for consistent read
current  = number of buffers gotten in current mode (usually for update)
rows     = number of rows processed by the fetch or execute call
********************************************************************************

SQL ID: fvt6psf2t3cdz Plan Hash: 0

alter database add logfile thread 2 group 5 ('+DATA','+RECO') size 4096m


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.37       5.09          0          0          2           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.38       5.09          0          0          2           0

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  Disk file Mirror Read                           2        0.00          0.00
  control file sequential read                   23        0.00          0.01
  KSV master wait                                19        0.04          0.09
  ASM file metadata operation                     6        0.00          0.00
  CSS initialization                              1        0.00          0.00
  CSS operation: query                            6        0.00          0.00
  CSS operation: action                           2        0.00          0.00
  kfk: async disk IO                              2        0.00          0.00
  Disk file operations I/O                        3        0.00          0.00
  cell smart file creation                     3258        0.13          4.64
  log file single write                           2        0.00          0.00
  control file parallel write                    10        0.07          0.07
  log file sync                                   1        0.00          0.00
  SQL*Net message to client                       1        0.00          0.00
  SQL*Net message from client                     1       17.61         17.61
********************************************************************************

I have tried to replicate the command issued in Martin Nash’s blog post. If memory serves me right he was waiting 37 seconds for this command to complete. The new method takes about 5 seconds, not bad! Interestingly there is no new wait event: the one we know (cell smart file creation) is used to indicate the work that has been done. In the case of Fast Data File Creation only the metadata about the new file is persisted in the WBFC; the actual formatting has not happened by the time the prompt returns.
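For anyone wanting to reproduce the test, one way to capture a trace like the one above is extended SQL trace with wait events enabled, a minimal sketch using DBMS_MONITOR (the resulting trace file is then processed with tkprof as shown above):

SQL> -- enable extended SQL trace, including wait events, for this session
SQL> exec dbms_monitor.session_trace_enable(waits => true, binds => false)

SQL> alter database add logfile thread 2 group 5 ('+DATA','+RECO') size 4096m;

SQL> exec dbms_monitor.session_trace_disable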

Session Counters?

I have spent _a lot_ of time on Exadata-related session counters in 12.1.0.2 and earlier releases. I think they are fascinating, and since the wait events do not really show me what happened during the execution of the statement, I used the session counters instead. I am using Adrian Billington’s mystats tool for this:

SQL> @scripts/mystats start

SQL> alter database add logfile thread 2 group 5 ('+DATA','+RECO') size 4096m;

Database altered.

Elapsed: 00:00:04.87
SQL> @scripts/mystats stop t=1
...
STAT    cell flash cache read hits                                                      20
STAT    cell logical write IO requests                                                  14
STAT    cell overwrites in flash cache                                                  30
STAT    cell physical IO bytes eligible for predicate offload               17,179,869,184
STAT    cell physical IO bytes saved during optimized file creation         17,179,869,184
STAT    cell physical IO interconnect bytes                                      1,361,920
STAT    cell writes to flash cache                                                      30
...
STAT    physical write requests optimized                                               10
STAT    physical write total IO requests                                             4,126
STAT    physical write total bytes                                          17,180,197,888
STAT    physical write total bytes optimized                                       163,840
STAT    physical write total multi block requests                                    4,096

Let’s begin with the cell%-statistics. These indicate what happened on the storage layer. The command I executed was to create an online redo log group in thread 2, in disk groups DATA and RECO. Each file is approximately 4 GB in size.

SQL> select f.member, l.bytes/power(1024,2) m 
  2  from v$log l, v$logfile f
  3  where l.group# = f.group#
  4  and l.group# = 5;

MEMBER                                                                M
------------------------------------------------------------ ----------
+DATA/MBACH/ONLINELOG/group_5.454.879407723                        4096
+RECO/MBACH/ONLINELOG/group_5.1229.879407725                       4096

2 rows selected.

So why does the database report 17,179,869,184 bytes “saved during optimized file creation” and nearly the same figure again in “physical write total bytes”? The answer is ASM mirroring. In this system DATA and RECO are protected using ASM normal redundancy, doubling the writes.
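The arithmetic adds up: two members of 4,096 MB each amount to 8,589,934,592 bytes, and normal redundancy writes every extent twice, which gives exactly the 17,179,869,184 bytes reported as saved. The disk group redundancy can be confirmed with a query along these lines (a sketch against V$ASM_DISKGROUP; TYPE = NORMAL means two-way mirroring):

SQL> select name, type
  2  from v$asm_diskgroup
  3  where name in ('DATA','RECO');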

What I found amusing is that the exact number of bytes eventually written is “eligible for predicate offload”. Until quite recently I only associated Smart Scans with this statistic counter.

You can also see a few writes (and overwrites) to (Write Back) Flash Cache.

Parallel Execution

This is another little reference list I should have created some time ago. It covers a series of posts on interpreting parallel execution plans and understanding where the work happens.

I may add further links to this page in the future relating to other aspects of parallel execution.

 

I’m a Millionaire now!

At least regarding the hits on this Blog :-) Thank you all for visiting uhesse.com!

What happens to the Standby when you move a datafile on the Primary?

In 12c, we have introduced online datafile movement as a new feature. Does that impact an existing standby database? I got asked that yesterday during an online webinar. My answer was that I expected no impact at all on the standby database, since redo apply doesn’t care about the physical placement of the datafile on the primary. But I also added that this was just an educated guess, because I hadn’t tested it yet. Now I have:

You know, I like to practice what I preach: Don’t believe it, test it! :-)
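For reference, the 12c command under discussion looks like this, a minimal sketch with a hypothetical file name and target disk group:

SQL> alter database move datafile '/u01/app/oracle/oradata/prim/users01.dbf' to '+DATA';

The move happens online on the primary while the datafile stays available, which is why no impact on redo apply was expected.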

Tagged: 12c New Features, Data Guard

Moving a datafile in a Data Guard environment

Cost

I’ve just been checking “Cost Based Oracle – Fundamentals” (Apress 2005) to see what I said on a particular topic, and I couldn’t resist quoting the following from the opening page of Chapter 1:

One of the commonest questions about the CBO on the Internet is: “What does the cost represent?” This is usually followed by comments like: “According to explain plan the cost of doing a hash join for this query is seven million and the cost of a nested loop is forty-two – but the hash join completes in three seconds and the nested loop takes 14 hours.”

The answer is simple: the cost represents (and has always represented) the optimizer’s best estimate of the time it will take to execute the statement. But how can this be true when people can see oddities like the hash join / nested loop join example above? The answer can usually be found in that good old acronym GIGO: Garbage In, Garbage Out.

The CBO makes errors for six main reasons:

  • There are some inappropriate assumptions built into the cost model.
  • The relevant statistics about the data distribution are available, but misleading.
  • The relevant statistics about the data distribution are not available.
  • The performance characteristics of the hardware are not known.
  • The current workload is not known.
  • There are bugs in the code.

Still true – although there are more features and cunning bits where inappropriate assumptions and bugs can appear.
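For context, older releases of the Performance Tuning Guide spell this out as a formula: the cost is I/O and CPU time expressed in units of single-block read time (sreadtim, mreadtim and cpuspeed come from system statistics):

Cost = (#SRds * sreadtim + #MRds * mreadtim + #CPUCycles / cpuspeed) / sreadtim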

 

 

Heuristic Temp Table Transformation - 2

Some time ago I demonstrated the non-cost based decision for applying the temp table transformation when using CTEs (Common Table/Subquery Expressions). In this note I want to highlight another aspect of this behaviour. Consider the following code, creating a table with deliberately wide columns:


create table a
as
select
rownum as id
, rownum as id2
, rpad('x', 4000) as large_vc1
, rpad('x', 4000) as large_vc2
, rpad('x', 4000) as large_vc3
from
dual
connect by
level <= 1000
;

exec dbms_stats.gather_table_stats(null, 'a')

and this query and plans with and without the temp table transformation:


with cte
as
(
select /* inline */
id
, id2
, large_vc1
, large_vc2
, large_vc3
from
a
where
1 = 1

)
select
*
from
(
select id, count(*) from cte group by id
) a,
(
select id2, count(*) from cte group by id2
) b
where
a.id = b.id2
;

-- Plan with TEMP TABLE transformation
--------------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name                      | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                           |  1000 | 52000 |  1341   (1)| 00:00:01 |
|   1 |  TEMP TABLE TRANSFORMATION  |                           |       |       |            |          |
|   2 |   LOAD AS SELECT            | SYS_TEMP_0FD9D6609_26FA32 |       |       |            |          |
|   3 |    TABLE ACCESS FULL        | A                         |  1000 |    11M|   452   (0)| 00:00:01 |
|*  4 |   HASH JOIN                 |                           |  1000 | 52000 |   889   (1)| 00:00:01 |
|   5 |    VIEW                     |                           |  1000 | 26000 |   444   (1)| 00:00:01 |
|   6 |     HASH GROUP BY           |                           |  1000 |  4000 |   444   (1)| 00:00:01 |
|   7 |      VIEW                   |                           |  1000 |  4000 |   443   (0)| 00:00:01 |
|   8 |       TABLE ACCESS FULL     | SYS_TEMP_0FD9D6609_26FA32 |  1000 |    11M|   443   (0)| 00:00:01 |
|   9 |    VIEW                     |                           |  1000 | 26000 |   444   (1)| 00:00:01 |
|  10 |     HASH GROUP BY           |                           |  1000 |  4000 |   444   (1)| 00:00:01 |
|  11 |      VIEW                   |                           |  1000 |  4000 |   443   (0)| 00:00:01 |
|  12 |       TABLE ACCESS FULL     | SYS_TEMP_0FD9D6609_26FA32 |  1000 |    11M|   443   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------------

-- Plan with CTE inlined (turn the INLINE comment into a hint)
-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |  1000 | 52000 |   907   (1)| 00:00:01 |
|*  1 |  HASH JOIN           |      |  1000 | 52000 |   907   (1)| 00:00:01 |
|   2 |   VIEW               |      |  1000 | 26000 |   453   (1)| 00:00:01 |
|   3 |    HASH GROUP BY     |      |  1000 |  4000 |   453   (1)| 00:00:01 |
|   4 |     TABLE ACCESS FULL| A    |  1000 |  4000 |   452   (0)| 00:00:01 |
|   5 |   VIEW               |      |  1000 | 26000 |   453   (1)| 00:00:01 |
|   6 |    HASH GROUP BY     |      |  1000 |  4000 |   453   (1)| 00:00:01 |
|   7 |     TABLE ACCESS FULL| A    |  1000 |  4000 |   452   (0)| 00:00:01 |
-----------------------------------------------------------------------------

Looking at the query and the plan output, the following becomes obvious:

- The mere existence of a WHERE clause, even if it is just "WHERE 1 = 1", combined with referencing the CTE more than once triggers the transformation (nothing new, already demonstrated in the previous note mentioned above, as is the fact that the inlined CTE variant is cheaper in cost).

- There is a huge difference between the estimated size of the TEMP TABLE and the size of the row sources when the CTE is inlined.

The latter is particularly noteworthy: usually Oracle is pretty clever about optimizing the projection and uses only those columns that are actually required (this doesn't apply to the target expression of MERGE statements, by the way), which is reflected in the plan output for the inlined CTEs - the wide columns don't matter there because they aren't referenced, even though they are mentioned in the CTE. In the case of the temp table transformation, however, all columns / expressions mentioned in the CTE obviously get materialized, whether or not they are referenced when the CTE is used. So it would be nice if Oracle materialized only those columns / expressions actually used.

Now you might ask why anyone would mention columns and expressions in a CTE that don't get used afterwards: well, generic approaches sometimes lead to such constructs - imagine the CTE part is static, including all possible attributes, while the actual usage of the CTE can be customized by a client. In such cases, where only a small part of the available attributes is actually used, the temp table transformation can lead to a huge overhead in the size of the generated temp table. Preventing the transformation addresses this issue, but then the inlined CTE will have to be evaluated as many times as it is referenced - which might not be desirable either.
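For reference, the behaviour can be controlled explicitly with the (undocumented but well-known) INLINE and MATERIALIZE hints inside the CTE - a minimal sketch based on the query above:

-- prevent the temp table transformation: turn the comment into a hint
with cte as (
select /*+ inline */ id, id2, large_vc1, large_vc2, large_vc3
from a
where 1 = 1
)
select *
from (select id, count(*) from cte group by id) a,
     (select id2, count(*) from cte group by id2) b
where a.id = b.id2;

-- conversely, /*+ materialize */ forces the temp table transformation
-- even when the CTE is referenced only once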

Friday Philosophy – Know Your Audience

There are some things that are critical for businesses but can be hidden from, or of little concern to, those of us doing a technical job. One of those is knowing who your customers are. It is vital for businesses to know who is buying their products or services. Knowing who is not and never will buy their products is also important (don’t target the uninterested), and knowing who is not currently buying but might is often sold as the key to ever-growing market share and profit. But fundamentally, they need to know who the current customers are, so they can be looked after {I know, some businesses are shocking to current customers, never understood that}.

This should also be a concern to me.

Why? Well, I “sell” something. I don’t charge for it, but I put out my blogs and my tweets and my presentations. I’ve even stepped up to articles. So I am putting a product out there and I want people to use it. Any of us who blog, tweet, facebook or in some way communicate information are fundamentally trying to talk to people. It’s fine to just put stuff out there and see who comes, but if I am doing this in order to reach an audience, well, who is my audience?

I know who my audience is. I’m British. I live in the UK, 75% of my presentations are in the UK, 95% of my work has been in the UK. I drink tea as a hobby, queue as only the British know how, want my ale at room temperature and I am self-deprecating in my humour. At least, I’d like to think I am, but please forgive me if I fall short of your expectations.

My Audience is UK:

[Chart: Who comes looking from where]

My Audience is American.

Dang!

As you can see from the above, my reasonable assumption was wrong. Those are stats I pulled from my blog about visits by country for a recent period. Most of my audience is in the US. For this particular period the UK is my second highest audience and India is third, but I dug in a little more and at times my Indian audience is higher than my UK audience.

Other countries move up and down but the above graphic is representative – European countries, Canada, South America and Australia are all prominent areas for me, as is South Korea – a big technology country, so I should expect a reasonable showing from there. However, I’ll just let you know that last year (a different graph, I hasten to point out) I had only 1 visitor each from the Vatican, Vanuatu and Jersey (part of the UK!). I’m a bit gutted about Jersey, having worked there once, but the Vatican? Does the Pope need a VLDB?

I have noticed a spike of interest in a given month by a country if I go and present there, but it does not last for long.

What about my Tweet world? The below shows where my followers are from:

[Chart: Peeps wot Tweets]

It is nice that this graph emphasises that “others” outside the top 10 are a larger source of audience than any individual country, but it shows a similar pattern to my blog. I’m mostly talking to my American cousins, the home crowd and our friends over in India. I suppose if you think about the number of people working in IT (and, to a lesser extent, simply living) in countries across the globe, the numbers make a lot of sense. If I was doing this analysis on a database of the raw data I’d now be correlating for population size and trying to think of a proxy I could use for “IT aware”.

So now I know who my audience is. Does this mean I should alter the tone of my posts to be more American or International, or is the British flavour of my erudite utterances part of the appeal?

I have noticed one change in my output over the last year or so, as I have become more aware of the geographical spread of my audience. I tend to explain what I recognise as odd phrases (the above paragraph allowing) or UK-centric references a little more. And I try to allow for the fact that not everyone visiting my blog speaks English as a first language. But in the end, I have to use the only language I know. However, I don’t think I am good at noticing when I am using colloquial phrases or referencing UK-centric culture. I’ll try harder.

One thing I do resist is WordPress trying to auto-correct my spelling to US – despite the fact that the app knows I am in the UK. Maybe I should spend some time trying to see if I can force the use of a UK dictionary on it? I won’t accept corrections to US spelling because, damn it all chaps, English came from this island and I refuse to use a ‘Z’ where it does not belong or drop a ‘u’ where it damned well should be! And pants are underwear, not trousers, you foolish people.

There is another aspect of my blog posts that I find interesting, and it is not about where my audience is – it is about the longevity of posts. Technical posts have a longer shelf life. My top posts are about oddities of the Oracle RDBMS, constantly being found by Google when people are looking at problems. A couple of the highest hitters I put up in 2009 when almost no one came by to look. However, my “Friday Philosophies” hit higher in the popularity stakes when first published but, a month later, no one looks at them anymore. Stuff about user groups and soft skills falls between the two. Some of my early, non-technical posts just drifted into the desert with hardly any notice. Sadly, I think a couple of them are the best things I have ever said. Maybe I should republish them?

Re-Adding a Database to the AWR Warehouse Without Losing Historical Snapshots

There may be a reason that one needs to re-add a database to the AWR Warehouse.  This is a new opportunity for me to learn and offer assistance, but I’m working off the grid to figure out a solution.  Luckily, it’s just a matter of creating a new process from pre-existing code and processes.

The process requires both the AWR Warehouse UI and a SQLPlus connection.

1.  Disable the snapshots

2.  Extract the AWR for the Database

3.  Drop the database from the AWR Warehouse

4.  Add the database back into the AWR Warehouse with the retention time or any other configuration change.

5.  Update the configuration table in the source database to show the correct snapshots.

Disable Snapshot Uploads

Log into the AWR Warehouse console and locate the database that needs to be re-added:

[screenshot: AWR Warehouse console showing the list of source databases]

From the AWR Warehouse UI, click on the database and then choose to disable the snapshot uploads for the database in question.

[screenshot: disabling snapshot uploads for the selected database]

 

Extract AWR Data for the Database

From the AWR Warehouse host, log into the AWR Warehouse repository (database) and extract the AWR data.  This uses the formal AWR data extract procedure, found in the $ORACLE_HOME/rdbms/admin directory, and yes, it works with the AWR Warehouse.

~~~~~~~~~~~~~
AWR EXTRACT
~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~ This script will extract the AWR data for a range of snapshots ~
~ into a dump file. The script will prompt users for the ~
~ following information: ~
~ (1) database id ~
~ (2) snapshot range to extract ~
~ (3) name of directory object ~
~ (4) name of dump file ~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
SQL> @$ORACLE_HOME/rdbms/admin/awrextr.sql;

A list of databases will be displayed that you can choose from.  In the AWR Warehouse, all the source databases will be shown:

Databases in this Workload Repository schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   DB Id      DB Name      Host
------------ ------------ ------------
* 1027807260 AWRWHSE      AWRWHSE_HOST
     9352681 APPSB1       HOST1
  4212609618 AASDB        HOST2
  4212608695 AASDB        HOST3
so on and so forth....

Enter the DBID for the database from the list.  We’ll use the first AASDB from the list for our example:

Enter value for dbid: 4212607231
Using 4212607231 for Database ID
Specify the number of days of snapshots to choose from
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Entering the number of days (n) will result in the most recent
(n) days of snapshots being listed. Pressing <return> without
specifying a number lists all completed snapshots.
Enter value for num_days:
Listing all Completed Snapshots

As I didn’t enter a number in for the number of days, you can see I want ALL of the AWR data for this database.  Now the extract script will ask for snapshot information.  Again, we want all of it and the report will show us all of the snapshots.  If there are so many, that you can’t see the first one, use the AWR Warehouse UI to see the first and last snapshot for this step.

Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap: 1777
Begin Snapshot Id specified: 1777

Enter value for end_snap: 2303
End Snapshot Id specified: 2303
Specify the Directory Name
~~~~~~~~~~~~~~~~~~~~~~~~~~
Directory Name                 Directory Path
------------------------------ -----------------------------------
DATA_PUMP_DIR                  /u01/admin/AWRWHSE/dpdump/
OPATCH_INST_DIR                /u01/home/oracle/12.1.0.2/OPatch
OPATCH_LOG_DIR                 /u01/home/oracle/12.1.0.2/QOpatch
OPATCH_SCRIPT_DIR              /u01/home/oracle/12.1.0.2/QOpatch

Choose a Directory Name from the above list (case-sensitive).
Enter value for directory_name: DATA_PUMP_DIR
Using the dump directory: DATA_PUMP_DIR
Specify the Name of the Extract Dump File
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The prefix for the default dump file name is awrdat_1777_2303.
To use this name, press <return> to continue, otherwise enter
an alternative.
Enter value for file_name: awrw_AASDB
Using the dump file prefix: awrw_AASDB
End of AWR Extract
SQL> !ls -la /u01/admin/AWRWHSE/dpdump/awrw_AASDB.dmp
-rw-r----- 1 oracle oinstall 268779520 May 4 16:49 /u01/admin/AWRWHSE/dpdump/awrw_AASDB.dmp

Now that the extract is completed and the data is safe, it’s time to return to the AWR Warehouse console.

 

Remove the Database from the AWR Warehouse

Click on the database in the AWR Warehouse and choose to remove it from the repository.

[screenshot: removing the database from the AWR Warehouse]

 

Once the database has been removed, go through the proper steps to add it again to the AWR Warehouse.  Once added, disable the upload of snapshots.  This will prevent concurrency problems and other issues until we’re finished.

Reload the AWR Data

Log back into the AWR Warehouse repository database; it’s time to load the data back into the AWR Warehouse.

SQL> @$ORACLE_HOME/rdbms/admin/awrload.sql

Specify the Name of the Dump File to Load
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Please specify the prefix of the dump file (.dmp) to load: /u01/admin/AWRWHSE/dpdump/awrw_AASDB.dmp
Pressing <return> will result in the recommended default
tablespace (identified by *) being used.

Enter value for default_tablespace: MGMT_TABLESPACE

The space will be used at some point anyway (MGMT_TABLESPACE is expected to grow), so there is no concern here, and the temporary user utilized for the process will be dropped at the end.

Choose to use the standard TEMP tablespace and proceed forward past this step.  As the AWR objects are already built into the MGMT_TABLESPACE, there is no need to remap from the SYSAUX as the process in a standard database would require.

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "AWR_STAGE"."WRH$_SQL_PLAN"                 89.485 MB   329981 rows
. . imported "AWR_STAGE"."WRH$_SQLTEXT"                  192.34 MB    398276 rows
. . imported "AWR_STAGE"."WRH$_SYSMETRIC_SUMMARY"        232.12 KB    1129472 rows
.......
........
	Append Data for "AWR_STAGE".WRH$_SGASTAT.
INSERT /*+ APPEND */ INTO SYS.WRH$_SGASTAT (SNAP_ID, DBID, INSTANCE_NUMBER, NAME, POOL, BYTES) SELECT SNAP_ID,
3228342000, INSTANCE_NUMBER, NAME, POOL, BYTES FROM "AWR_STAGE".WRH$_SGASTAT WHERE DBID = :c_dbid

Update Metadata Table

Now that we’ve loaded the data back in, we need to update the metadata table to show that more than the initial data exists.

select * from dbsnmp.caw_load_metadata
where dbid=4212607231;

aasdb3  oracle_database  4212607231  1777  1778  27-APR-15  27-APR-15  3  1

Update the table to reflect the new data; there should be at most one upload that has been sent over. If there is more than one, you’ll need to change the update statement to target just the one row that shows the max(end_snap_id), as the AWR Warehouse loads from the lowest snap id to the most recent (see the sketch after the statements below).

select * from dbsnmp.caw_load_metadata
where dbid=4212607231;
Update dbsnmp.caw_load_metadata
set end_snap_id=2303
where dbid=4212607231
and dump_id=;
commit;
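If more than one metadata row has been created, a variant along these lines restricts the update to the most recent upload (a sketch, assuming dump_id increases with every upload):

update dbsnmp.caw_load_metadata
set end_snap_id=2303
where dbid=4212607231
and dump_id=(select max(dump_id)
             from dbsnmp.caw_load_metadata
             where dbid=4212607231);
commit;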

If there isn’t any data at all, then force one upload and then edit it.  This is easily done from the Console:

[screenshot: forcing a snapshot upload from the console]

 

Proceed back and update the row in the metadata table, then check the UI to verify that the beginning and end snapshots look correct.  You should now have all the AWR data loaded back into the AWR Warehouse! And that’s how to reload AWR data for a database that must be removed and re-added to the AWR Warehouse.

This is a new process for me, so please, let me know if you note anything amiss and I’ll be glad to research it!

 



Copyright © DBA Kevlar [Re-Adding a Database to the AWR Warehouse Without Losing Historical Snapshots], All Right Reserved. 2015.

I love Live Demos – how about you?

Tired of boring slide-shows? Join me for free to see Oracle core technology live in action!

Live demonstrations have always been a key part of my classes, because I consider them one of the best ways to teach.

This is your opportunity to have a glimpse into my classroom and watch a demo just as I have delivered it there.

Apparently, not many speakers are keen to do things live, so the term Demonar (Demonstration + Seminar) had to wait for me to invent it :-)

A positive effect on your attitude towards LVCs and Oracle University Streams with their live webinars is intended, since the setting and platform there are very similar.

Added a page about my LVC schedule

I often get asked by customers about my schedule so that they can book a class with me. This page now shows my scheduled Live Virtual Classes. I deliver most of my public classes in that format, and you can attend from all over the world :-)