Another annual DOAG conference has passed, and I can only say the very best about it: perfectly organized, a large and modern venue, an impressive list of well-known speakers, and over 2,100 attendees – wow!
My presentation Best of RMAN was scheduled in the first slot on the first day, so I was a bit concerned whether many people would attend that early. It turned out that the room got so full that the organizers asked me to deliver the same talk again the next day – which I happily did, again with a packed room :-)
Apart from speaking myself, I very much enjoyed seeing friends and colleagues again, as well as people I had known from social media but never met in person. Thank you all for your good company, guys – I may appear a bit quiet and distanced sometimes, but I really appreciate you and our little conversations. Personal highlight: the lady who approached me on the stairs, shook my hand, and told me that she knew me from an Oracle University class that she had liked so much :-)
There were many interesting presentations, just to mention some of those I attended myself:
Frits Hoogland with Oracle Exadata and database memory
In his very detailed talk, he emphasized the importance of using Huge Pages on Exadata and that MEMORY_TARGET aka Automatic Memory Management is not an option here.
Jonathan Lewis presented Five Hints for Optimising SQL
I’m always amazed how much this man knows and how well he is able to explain it to us mere mortals :-)
Lothar Flatz was presenting Anatomie eines SQL Befehls (how to do Oracle Performance Tuning with a scientific approach)
During his very entertaining talk, he quoted no less than seven Oakies (including himself), so that was quite good advertising for the OakTable Network :-)
Frank Schneede delivered Eine Datenbank kommt selten allein (DB Cloning on Exadata using sparse diskgroups)
while Ulrike Schwinn presented Oracle DB Memory Techniken für mehr Performance (an overview of the many different ways there are these days to deal with memory in Oracle)
Couldn’t really catch her because she was running out of the picture all the time :-)
Martin Berger also did an interesting talk: Wie misst Oracle die Verwendung von Database Features? (How to make sense of DBA_FEATURE_USAGE_STATISTICS)
I liked his presentation style with many live demonstrations very much!
My dear colleague Joel Goodman talked about Automatic Parallel Execution
Joel is what I call a ‘Standup Instructor’ – mention any database-related topic and he will be able to deliver an excellent 30-minute talk about it instantly :-)
A big THANK YOU to the many people from DOAG who helped to make that event take place again in such an impressive way! Hope to be there again next year.
You need to be careful when coding and using triggers when it comes to Data Pump (or anything that transposes triggers between schemas). A lot of people make assumptions about what will happen with their triggers, and often get a nasty shock when they see the reality. Let’s explore with an example.
I’m going to add three triggers to the standard EMP table in the SCOTT schema.
SQL> connect scott/tiger
SQL> create or replace
  2  trigger TRG1 before insert on scott.EMP
  3  for each row
  4  declare
  5    x int;
  6  begin
  7    select count(*) into x from scott.dept;
  8  end;
  9  /

Trigger created.

SQL> create or replace
  2  trigger TRG2 before insert on EMP
  3  for each row
  4  declare
  5    x int;
  6  begin
  7    select count(*) into x from scott.dept;
  8  end;
  9  /

Trigger created.

SQL> create or replace
  2  trigger TRG3 before insert on EMP
  3  for each row
  4  declare
  5    x int;
  6  begin
  7    select count(*) into x from dept;
  8  end;
  9  /

Trigger created.
Now we’ll unload the entire schema and copy it to a new schema called SCOTT2.
expdp directory=TEMP dumpfile=schema.dmp logfile=schema.log schemas=scott

Export: Release 12.1.0.2.0 - Production on Fri Nov 27 10:31:29 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Username: system/********

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01":  system/******** directory=TEMP dumpfile=schema.dmp logfile=schema.log schemas=scott
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 256 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
. . exported "SCOTT"."ABC"                5.898 KB       1 rows
. . exported "SCOTT"."DEPT"               6.031 KB       4 rows
. . exported "SCOTT"."EMP"                8.781 KB      14 rows
. . exported "SCOTT"."SALGRADE"           5.960 KB       5 rows
. . exported "SCOTT"."BONUS"                  0 KB       0 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
  C:\TEMP\SCHEMA.DMP
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Fri Nov 27 10:31:56 2015 elapsed 0 00:00:24

impdp directory=TEMP dumpfile=schema.dmp logfile=schemaimp.log schemas=scott remap_schema=scott:scott2

Import: Release 12.1.0.2.0 - Production on Fri Nov 27 10:31:57 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Username: system/********

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

Master table "SYSTEM"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_SCHEMA_01":  system/******** directory=TEMP dumpfile=schema.dmp logfile=schemaimp.log schemas=scott remap_schema=scott:scott2
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "SCOTT2"."ABC"               5.898 KB       1 rows
. . imported "SCOTT2"."DEPT"              6.031 KB       4 rows
. . imported "SCOTT2"."EMP"               8.781 KB      14 rows
. . imported "SCOTT2"."SALGRADE"          5.960 KB       5 rows
. . imported "SCOTT2"."BONUS"                 0 KB       0 rows
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
ORA-39082: Object type TRIGGER:"SCOTT2"."TRG1" created with compilation warnings
ORA-39082: Object type TRIGGER:"SCOTT2"."TRG2" created with compilation warnings
Job "SYSTEM"."SYS_IMPORT_SCHEMA_01" completed with 2 error(s) at Fri Nov 27 10:32:16 2015 elapsed 0 00:00:14

SQL> select trigger_name, status
  2  from   dba_triggers
  3  where  owner = 'SCOTT2';

TRIGGER_NAME                   STATUS
------------------------------ --------
TRG1                           ENABLED
TRG2                           ENABLED
TRG3                           ENABLED
So at this point, everything seems to be ok. But we should look more closely…
SQL> select object_name, status
  2  from   dba_objects
  3  where  owner = 'SCOTT2'
  4  and    object_name like 'TRG%';

OBJECT_NAME                              STATUS
---------------------------------------- -------
TRG3                                     VALID
TRG2                                     INVALID
TRG1                                     INVALID
Two of the triggers are invalid. And here’s where things really catch people out …
SQL> select trigger_name, table_owner, table_name
  2  from   dba_triggers
  3  where  owner = 'SCOTT2';

TRIGGER_NAME                   TABLE_OWNER          TABLE_NAME
------------------------------ -------------------- -----------
TRG1                           SCOTT                EMP
TRG2                           SCOTT2               EMP
TRG3                           SCOTT2               EMP
Ouch… we have a trigger owned by SCOTT2, but the triggering table is still SCOTT.EMP! Probably not what we intended.
This is deliberate, because people forget that triggers do not have to be in the same schema as the object they fire on. For example, I might have a schema called “SCOTT_TRIGGERS” where I define all of my triggers for tables owned by SCOTT. So even though an import might ask to remap a schema, we can’t really assume that every reference to “SCOTT” in either the trigger definition or the trigger body can simply be changed. We can only remap the schema (hence the name of the parameter). In the case above, TRG2 is invalid because the trigger body is trying to reference SCOTT.DEPT and does not have any permissions to do so.
So be careful when transposing (either by datapump or any other means) triggers between schemas.
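One way to catch this early after any cross-schema import is to look for triggers whose owner no longer matches the owner of the triggering table. This is just a sketch (adjust the schema name to suit your import):

```sql
-- Hypothetical post-import sanity check: list triggers in the target
-- schema that still fire on a table in a *different* schema.
select trigger_name, owner, table_owner, table_name
from   dba_triggers
where  owner = 'SCOTT2'
and    table_owner <> owner;
```

Any rows returned are candidates for dropping and recreating against the correct table, and a prompt to review the trigger bodies for hard-coded schema references.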
So the main functionality I need from the semver package I am writing is “satisfies”. This is the function that determines whether a certain version string pattern falls within a set of version ranges.
For example, I want to know if version 1.2.* is satisfied by the following ranges: <=1.2.3, or >1.2.0, or between 1.4.1 and 1.4.6. But before I get there I need some more supporting functions. In my last entry I implemented some of the basics, and now I have completed a few more.
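To give a taste of the kind of supporting function involved – this is purely my own sketch, not the actual package code – a plain major.minor.patch comparison in PL/SQL might look like this:

```sql
-- Sketch only: compares two simple x.y.z version strings numerically,
-- returning -1, 0 or 1. Real semver "satisfies" logic also has to
-- handle pre-release tags and wildcards, which need far more code.
create or replace function version_compare (p_a varchar2, p_b varchar2)
  return pls_integer
as
  l_a pls_integer;
  l_b pls_integer;
begin
  for i in 1 .. 3 loop
    l_a := to_number(regexp_substr(p_a, '\d+', 1, i));
    l_b := to_number(regexp_substr(p_b, '\d+', 1, i));
    if l_a < l_b then
      return -1;
    elsif l_a > l_b then
      return 1;
    end if;
  end loop;
  return 0;
end;
/
```

With a comparator like this in place, a range check such as <=1.2.3 reduces to `version_compare(v, '1.2.3') <= 0`.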
Another topic of conversation that came out of Bryn‘s session at Oracle Midlands related to PL/SQL code formatting. I’m not sure I agree 100% with his opinion, but it’s certainly making me think about my current stance.
When I said I was going to blog about a couple of the points he raised, Bryn emailed me some stuff, which I’m going to use here. So this is Bryn’s content, with some of my opinions thrown in. I will no doubt get told off for not doing the subject justice, like I did with the last post.
Bryn: Look at page 1 (below). This makes PL/SQL look so old-fashioned. I’m sure that this is part of what makes it unappealing to the modern youth. How easy do you find it to read its real meaning?
Me: With a few minor formatting changes, this is pretty much what I do. As a result, it looks really normal to me and I like it, but if I try to put myself into the head-space of a young person, it does look overly formal, and as Bryn said in his talk, we are essentially shouting all the keywords and using naming conventions that detract from the true meaning we are trying to convey. I don’t like to admit it to myself, but I kind-of see his point. It does look a little Pipe and Slippers.
Bryn: Now look at Page 4 (below). Identifiers are like proper nouns in English prose. They’re capitalized to let them stand out. The capitals in Object_ID (or DBMS_Output) are an aid to pronunciation.
Me: This is a slightly unfair comparison, since there is no syntax highlighting, which is on by default in all PL/SQL IDEs, but it does look less “old”. I understand Bryn’s point about the capitals potentially aiding pronunciation, but if I’m going to stop “shouting” the keywords, I actually prefer it all in lower case myself. I agree that identifiers still need “_” between words. Although camelCase is used by many languages, it doesn’t work so well in Oracle. It looks fine in your scripts, but when you are looking at dictionary views, much of the information is presented in uppercase, so “uniquenessConstraintMissing” becomes “UNIQUENESSCONSTRAINTMISSING”, which sucks.
Bryn: As promised, here’s my marked-up copy of the Gettysburg Address. It’s considered to be an exemplar of English prose, so having it mangled into an incomprehensible mess by the robotic application of “code formatting best practice” makes my point very vividly. … Feel free to use it. But do make sure that you credit the author, President Abraham Lincoln, and me for having had the idea to use it to make fun of the anally retentive code formatters who jointly conspire to make PL/SQL look so dated.
Me: Yeah. Point made!
I’ve tried to alter my code formatting in the past, which invariably meant I had to go back and reformat it a couple of days later when I couldn’t stand knowing it wasn’t “how I usually do things”. Having said that, if you read the last blog post, you would see I tried to stop shouting. It’s not what Bryn likes, but I think it’s a step in the right direction… Maybe…
Am I going to redo all my articles, blog posts and videos? No. Am I going to try and change? At the moment I’m thinking yes, but I reserve the right to change my mind.
Don’t forget to check out these whitepapers from Bryn.
Thanks for all the questions so far – there are plenty of useful topics coming out. At this rate I may have to pass some of these on to Tony Hasler’s round-table.
I’ve made a couple of comments in response, but generally I plan to avoid making any comments until after the panel.
Monday 7th Dec: 11:20 – 12:05
I’ve arranged a panel session on the Cost Based Optimizer for UKOUG Tech 2015, with Maria Colgan, Nigel Bayliss, and Chris Antognini joining me to answer (and maybe argue about) questions from the audience.
To keep things moving along, we aim to have a few questions available before the day and collect a few questions at the door as well as accepting questions from the floor. Martin Widlake and Neil Chandler will be acting as MCs, wielding the microphones, and collecting questions at the door.
So far we have a handful of questions – if you have a question you’d like to raise, add it to the comments below. Even if you can’t attend we may get to your question, and I may be able to write up a summary of questions and answers after the event.
Questions so far:
You tend to name PL/SQL parameters and variables in such a way as to prevent name clashes with table columns. Let’s cause an obvious name clash…
The table DUAL has a column called DUMMY with a single row with the value ‘X’.
SQL> select * from dual;

D
-
X

1 row selected.

SQL>
Let’s create a procedure with a parameter called “dummy”, to cause a name clash. I want to use the parameter in the WHERE clause. I hope you can see where this is going.
create or replace procedure my_proc (dummy varchar2) as
  amount pls_integer;
begin
  select count(*)
  into   amount
  from   dual
  where  dummy = dummy;
  dbms_output.put_line('amount=' || amount);
end my_proc;
/
Now if I run it passing a parameter value of ‘Y’ I get an “incorrect result”.
set serveroutput on
exec my_proc('Y');
amount=1

PL/SQL procedure successfully completed.

SQL>
Actually I’m getting a correct result, because the table column is being matched against itself and the parameter is not being used. To solve this, we might rename the parameter to “p_dummy”.
create or replace procedure my_proc (p_dummy varchar2) as
  amount pls_integer;
begin
  select count(*)
  into   amount
  from   dual
  where  dummy = p_dummy;
  dbms_output.put_line('amount=' || amount);
end my_proc;
/
This gives us the expected result.
set serveroutput on
exec my_proc('Y');
amount=0

PL/SQL procedure successfully completed.

SQL>
Nothing new so far. Now comes the bit I didn’t know.
Alternatively, I could have prefixed the references to the parameter with the procedure name.
create or replace procedure my_proc (dummy varchar2) as
  amount pls_integer;
begin
  select count(*)
  into   my_proc.amount
  from   dual a
  where  a.dummy = my_proc.dummy;
  dbms_output.put_line('amount=' || my_proc.amount);
end my_proc;
/

set serveroutput on
exec my_proc('Y');
amount=0

PL/SQL procedure successfully completed.

SQL>
This was just one of the points Bryn raised in Doing SQL from PL/SQL: Best and Worst Practices, which I’m sure I read, but obviously not well enough.
Why does this matter? Using a prefix like “p_” is an artificial way to avoid the situation. If someone were to add a column to the table called “p_dummy”, my code would be broken again. I’m relying on them knowing my naming standard and avoiding it when defining table columns.
By the way, this works for parameterised cursors and named anonymous blocks too.
set serveroutput on
begin
  <<my_block>>
  declare
    dummy varchar2(1) := 'Y';
    amount pls_integer;
  begin
    select count(*)
    into   my_block.amount
    from   dual a
    where  a.dummy = my_block.dummy;
    dbms_output.put_line('amount=' || my_block.amount);
  end my_block;
end;
/
amount=0

PL/SQL procedure successfully completed.

SQL>
You learn something new every day!
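For completeness, here is what the parameterised-cursor variant might look like. This is my own sketch following the same qualification rule, not one of Bryn’s examples:

```sql
set serveroutput on
declare
  -- The parameter "dummy" deliberately clashes with DUAL.DUMMY;
  -- qualifying it with the cursor name resolves the ambiguity.
  cursor c_count (dummy varchar2) is
    select count(*)
    from   dual a
    where  a.dummy = c_count.dummy;
  amount pls_integer;
begin
  open c_count('Y');
  fetch c_count into amount;
  close c_count;
  dbms_output.put_line('amount=' || amount);
end;
/
```

The same reasoning applies: `c_count.dummy` unambiguously means the cursor parameter, regardless of what columns the table happens to have.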
Analytic functions still bamboozle many SQL practitioners out there. So I’m building a suite of videos to walk people through some of the common questions we get asked that we need to solve with SQL, and look at how we can often solve them most simply using Analytic functions.
It will all be based on the KISS principle – Keep It Simply SQL, as it pertains to understanding the Analytic syntax.
You can find my introductory video here, with more to come as we solve problems simply with (analytic) SQL. There is also an “Analytics” playlist on my channel, which will grow each time I add a new video.
I hope you enjoy it – and feel free to comment or offer feedback.
So anyone doing software releases knows the pain of versioning the different releases correctly. One solution to that is SEMVER (http://semver.org/). Go to the website and read up on when to change the different parts of the version (x.y.z = major.minor.patch) and what the rules are for comparing different versions.
As a DBA, it’s awesome when you have the Partitioning option at your disposal. So many cool things suddenly become either possible or easier. For example, all of a sudden you can separate the data of different time ranges into different partitions. But hand in hand with that came the fear that one day we’d be getting nasty stares from either managers or customers, when the following happened:
SQL> create table T2
  2  ( x date,
  3    y int )
  4  partition by range ( x )
  5  (
  6    partition p1 values less than ( date '2015-06-01' ),
  7    partition p2 values less than ( date '2015-07-01' ),
  8    partition p3 values less than ( date '2015-08-01' ),
  9    partition p4 values less than ( date '2015-09-01' )
 10  )
 11  /

Table created.

SQL> insert into T2 values ( sysdate,1);
insert into T2 values ( sysdate,1)
            *
ERROR at line 1:
ORA-14400: inserted partition key does not map to any partition
It is the terrible moment when you realise the job you created to ensure that you had sufficient partitions either was not run, or had stopped working and you had missed it. Luckily, a lot of this was solved when INTERVAL partitioning arrived in Oracle Database 11g. All we needed then was a starting point, an interval (size), and at least one partition:
SQL> create table T1
  2  ( x date,
  3    y int )
  4  partition by range ( x )
  5  interval ( numtoyminterval (1,'MONTH') )
  6  (
  7    partition p1 values less than ( date '2015-06-01' )
  8  )
  9  /

Table created.

SQL> insert into T1
  2  select add_months( date '2015-01-01',rownum), rownum
  3  from   dual
  4  connect by level <= 10;

10 rows created.
SQL> insert into T1 values ( date '2016-01-01',99);
1 row created.
In the example above, I’ve inserted a few rows for 2015 and one row for 2016, and partitions were automatically created for me, as you can see by looking at the dictionary:
SQL> select partition_name, high_value
  2  from   user_tab_partitions
  3  where  table_name = 'T1';

PARTITION_NAME       HIGH_VALUE
-------------------- -----------------------------------------------------------------------------------
P1                   TO_DATE(' 2015-06-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
SYS_P517             TO_DATE(' 2015-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
SYS_P518             TO_DATE(' 2015-08-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
SYS_P519             TO_DATE(' 2015-09-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
SYS_P520             TO_DATE(' 2015-10-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
SYS_P521             TO_DATE(' 2015-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
SYS_P522             TO_DATE(' 2015-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
SYS_P523             TO_DATE(' 2016-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
That’s very cool, and a lot of people are now using the interval strategy as their default partitioning means. With Interval partitioning Oracle supports the concept of ‘holes’, where not all partitions have to be created consecutively. In our example we did not insert any values for December 2015, so we have not created a partition for it. However, the partition for January 2016 exists. If we ever insert data for December 2015, the missing partition will be created and the hole goes away; if we don’t then the hole stays forever, which does not matter either.
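You can watch a hole being filled by inserting a December row yourself. A sketch (partition names are system-generated, so yours will differ; note that the new partition survives even if you roll back the insert, so do this on a scratch copy if you want to follow the rest of the example with the hole intact):

```sql
-- Inserting into the "hole" makes Oracle create the missing
-- December 2015 interval partition on demand.
insert into T1 values ( date '2015-12-15', 42 );
commit;

select partition_name, high_value
from   user_tab_partitions
where  table_name = 'T1'
order  by partition_position;
```

A new SYS_P-named partition with a high value of 2016-01-01 should appear between the November and February boundaries.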
But eventually, people started to want to archive off their old data, either by switching partitions out, or dropping them. Now we are seeing the following “What is error ORA-14758?” come up on our question lists from time to time:
SQL> alter table T1 drop partition P1;
alter table T1 drop partition P1
*
ERROR at line 1:
ORA-14758: Last partition in the range section cannot be dropped
To explain what is happening, we need to look at the data dictionary for the type of each partition:
SQL> select partition_name, interval
  2  from   user_tab_partitions
  3  where  table_name = 'T1';

PARTITION_NAME       INTERVAL
-------------------- --------
P1                   NO
SYS_P517             YES
SYS_P518             YES
SYS_P519             YES
SYS_P520             YES
SYS_P521             YES
SYS_P522             YES
SYS_P523             YES
When we defined our table, we nominated a starting point and then a “size” (interval) for each partition (that’s why we required at least one partition at creation time). The error is telling us that we are trying to drop the starting point, and doing so would mean we no longer have a fixed point on which to base our intervals. Notice in the list of partitions above that the first partition is not an interval partition, and it’s the only one of that type, so we cannot drop it. While this is a nuisance that is addressed in a future release, you can easily address it today: we simply let the database know that the existing partitions can be fixed in place and need no longer be interval based. You basically evolve the interval partitions – whose upper and lower boundaries are derived mathematically from the starting point – into range partitions with fixed upper bounds.
SQL> alter table T1 set interval ( numtoyminterval (1,'MONTH') );

Table altered.

SQL> select partition_name, interval
  2  from   user_tab_partitions
  3  where  table_name = 'T1';

PARTITION_NAME       INT
-------------------- ---
P1                   NO
SYS_P517             NO
SYS_P518             NO
SYS_P519             NO
SYS_P520             NO
SYS_P521             NO
SYS_P522             NO
Simply by restating the interval, we have now marked all of the existing partitions as standard range partitions. So I can drop all (but one) of them and still have a starting point for my subsequent interval partitions.
SQL> alter table T1 drop partition P1;

Table altered.
There is one little subtlety to this method, though: unlike Interval partitions – which have an upper and a lower bound – range partitions only have an upper bound defined. The lower bound of a range partition is defined by the upper bound of the preceding partition. So by making an interval partition a range partition we are in effect removing the lower bound. In our example we had a non-existing interval partition for December 2015, which eventually would have been created if/when data for it was inserted.
But by marking all existing partitions as range partitions, the former interval partition for January 2016 becomes a range partition covering the range for both December 2015 and January 2016. It covers two months. This is intentional – the alternative design would have been to instantiate all missing partitions when converting interval partitions to range partitions, which could have led to the creation of potentially tens of thousands of unwanted partitions.
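If you want to see this for yourself, here is a sketch. The partition name SYS_P523 comes from the dictionary listing earlier and is system-generated, so it will differ on your system:

```sql
-- With the former interval partitions now range partitions, a
-- December 2015 row falls into the partition whose upper bound is
-- 2016-02-01, i.e. the one that previously covered January 2016 only.
insert into T1 values ( date '2015-12-15', 42 );

select count(*)
from   T1 partition ( SYS_P523 );
```

The count should include the December row, confirming that the converted partition now spans both months.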
Bryn very kindly stepped in for Tom Freyer who unfortunately couldn’t make it. Bryn’s presentation was called Why use PL/SQL?, which was based on his whitepaper of the same name. I’ve written about a number of the topics he raised, including a piece on My Utopian Development Environment, but his talk and his whitepaper put forward a much more consistent and compelling argument. Definitely worth checking out if you are a PL/SQL expert or newbie!
Along the way, he used a little piece of syntax I had never noticed before, which has been available for the whole 20+ years I’ve been using PL/SQL. I’ll write about that in a post tomorrow if I get a chance.
By the way, Bryn loves being called Brian, and he loves it when you say S.Q.L. and PL/S.Q.L. rather than Sequel and PL/Sequel.
Next up was Neil with a session called “Why has my plan changed?”. I’ve written about many of the issues discussed in this session, but Neil crystallised a whole array of points really well, and used live demos which I always like to see. Regardless of how well you think you know the subject, I would be surprised if you don’t walk away with a few new things, which you can later pretend you always knew.
Both guys will be presenting these sessions at UKOUG Tech15 if you want to catch up with them!
Thanks to the guys for coming to speak to us. Thanks to Bryn’s mom, for living in the UK, giving him an excuse to come to UKOUG Tech15 early and therefore be available to step in at short notice. Thanks as always to Red Stack Tech for sponsoring the event, allowing it to remain free, and to Mike for organising it!