Oakies Blog Aggregator

OT: Norway

Having had one of my best conference experiences ever this year in the company of many good Norwegians, this week's news saddened me. It must have come as a hell of a shock to the country as a whole, not just those directly involved. Their horror is difficult to imagine.

I really don't have adequate words, frankly.

At the same time, it inspired me to hear ...

"We must – and will – meet terror with more democracy, not less. We must not lock up Norwegian society. That would be piling tragedy upon tragedy."

My heart goes out to those involved and I look forward to going back to Norway next year ...

Who Manages the Exadata Machine?

For organizations that have just procured an Exadata machine, one of the big questions is bound to be about the group supporting it. Who should it be - the DBAs, Sys Admins, Network Admins, or some blend of multiple teams?

The conventional Oracle database system is a combination of multiple distinct components - servers, managed by system admins; storage units, managed by SAN admins; network components such as switches and routers, managed by network admins; and, of course, the database itself, managed by the DBAs. Exadata has all those components - servers, storage (as cell servers), InfiniBand network, Ethernet network, flash disks, the whole nine yards - but packaged inside a single physical frame representing a single logical unit: a typical engineered system. (For a description of the components inside the Exadata system, please see my 4-part article series on Oracle Technology Network.) None of these conventional technology groups possesses the skillset to manage all these components. That leads to a difficult but important decision - how should the organization assign the operational responsibilities?

Choices

Organizations have two choices for assigning administrative responsibilities.

  1. Distributed - Have the individual groups manage their respective components, e.g. Sys Admins managing the Linux servers, storage admins managing the storage cells, network admins managing the network components, and DBAs managing the database and the cluster.
  2. Consolidated - Create a specialized role - the Database Machine Administrator (DMA) - and have one of the existing groups expand its skillset to cover the unfamiliar areas.

Each option has its own pros and cons. Let's examine them and see if we can get the right fit for our specific case.

Distributed Management

Under this model each component of Exadata is managed as an independent entity by the group that traditionally manages that type of infrastructure. For instance, the system admins would manage the Linux OS, overseeing all aspects of it, from creating users to applying patches and RPMs. The storage and database would likewise be managed by their respective specialist teams.

The benefit of this approach is its seeming simplicity - components are managed by their respective specialists without any need for advanced training. The only training needed is for storage, where the Exadata Storage Server commands are new and specific to Exadata.
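
For illustration, a hedged sketch of what those commands look like (CellCLI is the command-line tool on the storage cells; the exact objects listed depend on your configuration):

$ cellcli -e list cell detail
$ cellcli -e list celldisk
$ cellcli -e list griddisk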

While this approach seems a no-brainer on the surface, it may not be so in reality. Exadata is not just something patched together from these components; it is an engineered system. There is a huge meaning behind that qualifier. These components are not designed to act alone; they are put together to make the entire structure a better database machine. And note the stress here - not an application server, not a fileserver, not a mail server, not a general-purpose server - but a database machine alone. This means the individual components - the compute nodes, the storage servers, the disks, the flashdisk cards and more - are tuned to achieve that overriding objective. Any incremental tuning of any specific component has to be within the framework of the entire frame; otherwise it may fail to produce the desired result, or worse, produce an undesirable one.

For instance, the disks where the database resides are attached to the storage cell servers, not the database compute nodes. The cell servers, or Cells, run Oracle Enterprise Linux, which is very similar to Red Hat Linux. Under this model of administration, the system admins are responsible for managing the operating system. Suppose a system admin looks at a host and decides it is under-tuned because the filesystem cache is very small. On a normal Linux system, that would be a correct observation; but in Exadata the database is in ASM, and a filesystem cache matters little. The Cells, on the other hand, need that memory to hold the Storage Indexes for the disk contents. Configuring a large filesystem cache not only does nothing to help the filesystem; it actually hurts performance by forcing the Storage Indexes to be paged.

This is just one example of how closely interrelated the components of an engineered system are. Assuming they are separate and assigning them to multiple groups with different skillsets may not work effectively.

Database Machine Administrator

This leads to the other approach - making a single group responsible for the entire frame, from storage to the database. A single group would understand the impact of changes in one component on the overall effectiveness of the rack, and would be in a better position to plan and manage. The single role that performs the management of Exadata is known as the Database Machine Administrator (DMA).

I can almost hear the questions firing off inside your brain. The most likely one is whether it is even possible for a single skillset to encompass storage, system, database and network.

Yes, it definitely is. Remember, the advantages of an engineered system do not stop at carefully coordinated individual components. Another advantage is the reduced number of controls in those components - there are fewer knobs to turn on each component of an Exadata system. Take, for instance, the operating system. There are two types of servers - the compute nodes and the cells. In the cells, the activity performed by a system admin is severely limited - almost to the point of being none. On the compute nodes, the activities are limited as well. The only allowable activities are setting up users, setting up email relays, possibly setting up an NFS mount, and a handful more. These can easily be done by a non-expert; one does not have to be a System Admin to manage these servers.
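
As a rough illustration of how mundane those compute-node tasks are, a hedged sketch (the user name, group, NFS server and paths here are entirely hypothetical):

# create an OS user (hypothetical name and group)
useradd -g dba oradev
# mount an NFS share for export dumps (hypothetical server and paths)
mount -t nfs nfshost:/export/dumps /mnt/dumps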

Consider storage, the other important component. Traditionally, storage administrators perform critical functions such as adding disks, carving out LUNs, and managing replication for DR. These functions are irrelevant in Exadata: the disks are preallocated, the LUNs are created at installation time, and there is no storage replication since DR is handled by Data Guard, which operates at the Oracle database level. One need not be a storage expert to perform the storage tasks in Exadata. Additionally, storage admins are experts in a specific brand of storage, e.g. EMC VMax or IBM XIV. The Exadata storage is different from all the other brands your storage admins may be managing. They would have to learn about it anyway; so why not have someone else, specifically the DMA, learn it?

Consider the network. In Exadata the network components are very limited, since they serve only the components inside the rack. This reduces the configuration flexibility compared to a general-purpose network. The special kind of hardware used in Exadata - InfiniBand - requires some special skills which the network ops folks would have to learn anyway. So why not the DMAs instead? Besides, Oracle already provides a lot of tools to manage this layer.

That leaves the most visible component - the database, which is, after all, the heart and soul of Exadata. This layer is amenable to a considerable degree of tuning, and depth of skill here is vital to managing Exadata effectively. Transferring the skills needed here to a non-DBA group or individual is difficult, if not impossible. This makes the DBA group the most natural choice for evolving into the DMA role after absorbing the other relevant skills. Those other skills need not be on par with those of the administrators of the respective components. For instance, the DMA does not need to be a full-scale Linux system admin, but just needs to know a few relevant concepts, commands and tools to perform the job well. Network management in Exadata requires a fraction of the skills expected from a network admin. Storage management on the cell servers is new to every group, so the DMA will find it as easy to pick up as anyone else, if not easier.

By understanding the available knobs on all the constituent components of Exadata, the DMA can be a more effective administrator of the Exadata system than a set of generally autonomous groups each managing its own piece. The advantages are particularly visible when troubleshooting or patching Exadata. Hence, I submit for your consideration a new role - the DMA (Database Machine Administrator) - for the management of Exadata. The role should have the following mix of skills:

  • 60% Database Administration
  • 20% Cell Administration
  • 15% Linux Administration
  • 5% Miscellaneous (InfiniBand, network, etc.)

I have written an article series on Oracle Technology Network - Linux for Oracle DBAs. This 5-part article series has all the commands and concepts the Oracle DBA should understand about Linux. I have also written a 4-part article series - Commanding Exadata - for DBAs to learn the 20% cell administration. With these two, you will have everything you need to be a DMA. Scroll down to the bottom of this page and click on "Collection of Some of My Very Popular Web Articles" to locate all these articles and more.

Summary

In this blog entry, I argued for creating a single role to manage the Exadata system instead of multiple groups managing individual parts. Here are the reasons in a nutshell:

  1. Exadata is an engineered system where all the components play collaboratively instead of as islands. Managing them separately may be ineffective and detrimental.
  2. The support groups for components such as systems, storage and databases are designed with a generic purpose in mind. Exadata is not generic; its management needs an unprecedented degree of coordination among the various groups, which may be new to the organization and perhaps difficult to implement.
  3. The needed skillsets are mostly database-centric; the other components have very little to manage.
  4. These other skills are easy to add to a DBA's skills, making the DBA the natural candidate for transitioning to the DMA role.

Best of luck in becoming a DMA and implementing Exadata.

Toolbar Buttons Revisited…

My 9-year-old nephew was making a PowerPoint slideshow today. I watched him regularly saving his slideshow using the menu and asked why he was not using the Save button, to which he replied, “Which one is it?”.

I looked at the toolbar and saw a button with a picture of a floppy disk. I don’t think he has ever seen a floppy disk in his life. I’m not surprised he didn’t associate this button with saving his slideshow.

I’ve seen articles suggesting that buttons with icons are not good for new users. This is especially true if the icons reference old technology they have never encountered.

Cheers

Tim…

date conversions

Dates are a surprising pain: they need to be easy to correlate across data sources and at the same time remain understandable to the human reader.

For a generic form we can take seconds since 1970; on UNIX:

$ date '+%s'
1311372938

but then how do we convert it back? There is a neat trick for that: “date -d @”

$ date -d @1311372938  +%H:%M:%S
22:15:38

so in scripts we can load the date into seconds, which can be subtracted to get deltas and at the same time easily converted to any date format (a quick delta sketch follows the example below):

date_secs=`date '+%s'`
date -d @$date_secs +%H:%M:%S
22:23:34
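
Since the values are plain integers, computing a delta is just arithmetic. A minimal sketch (the sleep stands in for whatever work is being timed):

start_secs=`date '+%s'`
sleep 2                                # placeholder for the work being timed
end_secs=`date '+%s'`
echo "elapsed: $(( end_secs - start_secs )) seconds"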


AWK

AWK takes a bit of a trick to convert seconds to human format:

# a complete program: run with "awk -f sec2hms.awk" (filename hypothetical)
BEGIN {
    date_secs = 1311372938
    # set up the UNIX command (awk concatenates adjacent strings and variables)
    d = "date -d @" date_secs " +%H:%M:%S"
    # run the command and read its output into "date_human_format"
    d | getline date_human_format
    # be sure to close the open descriptor
    close(d)
    print date_human_format
}
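
The same trick works as a one-liner, assuming the same epoch value and timezone as the examples above:

$ awk 'BEGIN { d = "date -d @1311372938 +%H:%M:%S"; d | getline t; close(d); print t }'
22:15:38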


Dtrace
DTrace reports nanoseconds since 1970 in its “walltimestamp” variable, so just lop off the nanoseconds and you can convert it to a date:

date_secs=walltimestamp/1000000000;
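
As a quick sanity check, a minimal sketch of printing epoch seconds straight from a DTrace one-liner (run as root):

# dtrace -qn 'BEGIN { printf("%d\n", walltimestamp / 1000000000); exit(0); }'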


Oracle

Oracle has the nice feature of returning days, including fractional days, when subtracting two dates, so we can subtract off 1-JAN-1970 and multiply by the number of seconds in a day (86400):

SELECT (SYSDATE - to_date('01-JAN-1970','DD-MON-YYYY')) * (86400) from dual;

But since Oracle can already subtract dates, there is little need for the conversion unless the data is being extracted for an external consumer.

If Oracle is reading data from an external source in seconds since 1970, we can just reverse the process:

select to_char((TO_DATE('01-JAN-1970','DD-MON-YYYY') + seconds_since_1970 / 86400 ),'DD-MON-YYYY') from dual;

For example:

SQL> select to_char((TO_DATE('01-JAN-1970','DD-MON-YYYY') +1311372938  / 86400 ),'DD-MON-YYYY') from dual;

TO_CHAR((TO
-----------
22-JUL-2011

SQL> SELECT (SYSDATE - to_date('01-JAN-1970','DD-MON-YYYY')) * (86400) from dual;

(SYSDATE-TO_DATE('01-JAN-1970','DD-MON-YYYY'))*(86400)
------------------------------------------------------
                                            1311424000


Oracle TIMESTAMP

One issue on Oracle, though: the TIMESTAMP datatype can be a bit frustrating to work with, but it can be cast into a DATE, for example:

select cast(begin_interval_time as date ) from DBA_HIST_SNAPSHOT;
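
From there the usual DATE arithmetic and formatting apply; a hedged sketch combining the cast with to_char:

select to_char(cast(begin_interval_time as date),'DD-MON-YYYY HH24:MI:SS') from DBA_HIST_SNAPSHOT;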


UPDATE: just found this link, which is pretty good at covering a wide range of languages and databases:

http://www.epochconverter.com/

though they didn’t include AWK :) – who uses AWK these days anyway?

Pythian Tools: Method R Profiler, MR Tools & MR Trace

Working with 100 talented database engineers is fun and there is lots going on — lots of exciting (and not so exciting) projects ongoing, a huge number of problems solved, mistakes made (and learned from), many unique (as well as routine) customer needs satisfied, many new (and old) methods applied, many good (and less so) tools [...]

Fastest £1,000 Server – back from supplier

At the risk of turning my blog into some sort of half-way-house tweet update thing (correct, I’ve never logged into Twitter), as a couple of people asked about the outcome with the broken £1,000 server, I’m happy to report it came back this week. The motherboard had died. I’d convinced myself it was the PSU when I trawled the net, as that seems to be one of those things most likely to die after having fired up in the first place, but no, the motherboard. I guess some solder “dried” or the PC pixies just don’t like me. A one-month turnaround is not very impressive…

They had another motherboard exactly the same in stock so I got a like-for-like swap. I was kind of hoping for a different one with more SATA3 and USB3 headers :-)

Now I’m trying to download the latest Oracle 11g for 64-bit Windows. I live out in the wilds of North Essex (for non-UK people, this is all of 62 kilometers north-northeast of London as the crow flies, so not exactly an obscure and remote part of the UK! For those who DO know the UK, it is nothing like “The Only Way is Essex” out here. We have trees, fields, wildlife and a lack of youth culture.) As such, my broadband connection is sloooow. The connection keeps breaking and I lose the download. *tsch*. I’m sure I had a download manager somewhere which got around these issues…


Which Linux do you pick for Oracle Installations?

There was an interesting thread on the OakTable mailing list the other day regarding the choice of Linux distros for Oracle installations. It was started by one member (the name has been withheld to protect the innocent :) ) who said,

“I cannot imagine (but want to understand) why anyone would pick RHEL5.6 for Oracle as opposed to the vastly superior OEL with the UEK.”

I must admit I’ve kinda forgotten that any distro apart from Oracle Linux (OL) exists as far as production installations of Oracle software are concerned.

Some of the reasons cited for people not to pick OL include:

  • The customer has a long relationship with Red Hat and doesn’t want to jump ship.
  • RHEL is the market leading enterprise Linux distro, so why switch to Oracle?
  • The customer doesn’t want to be too dependent on Oracle.
  • The customer has lots of non-Oracle servers running RHEL and doesn’t want a mix of RHEL and OL as it would complicate administration.
  • The customer uses some software that is certified against RHEL, but not OL.
  • The customer prefers Red Hat support over Oracle support. Wait. Red Hat and support in the same sentence. Give me a minute to stop laughing…
  • The customer is using VMware for Virtualization and the Unbreakable Enterprise Kernel (UEK) is not supported on VMware.

I guess every company and individual will have differing justifications for their choice of distro.

So why would you pick OL and Unbreakable Enterprise Kernel (UEK) for Oracle installations?

  • You can run it for free if you don’t want OS support. Using OL without support doesn’t affect the support status of the products (DB, App Servers etc.) running on top of it.
  • It’s what Oracle use to write the Linux version of the products.
  • It’s what Exadata uses.
  • Oracle products are now certified against OL + UEK before they are certified against the RHEL kernel.
  • UEK is typically a much more up-to-date version of the kernel than the one shipped by RHEL and includes all the patches vital for optimum Oracle performance (a quick way to check which kernel you’re running is sketched after this list).
  • Single vendor, so no finger pointing over support issues (from Google+ comment).
  • It is the only enterprise Linux distro that supports kernel patching without reboots, thanks to Oracle’s newly acquired Ksplice.
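
As referenced above, a minimal sketch of checking for UEK (UEK kernel release strings include “uek”; the version shown is illustrative):

$ uname -r
2.6.32-100.28.5.el6uek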

For more information you might want to read this whitepaper or watch this webcast.

If you are looking at things from a purely technical perspective, I guess you are going to pick OL and UEK. Of course, many of us don’t work in a world where technology is picked purely on its merits. :)

Cheers

Tim…

Update: Check out this post by Jay Weinshenker for a different angle on this issue.




Expert Oracle Exadata virtual conference

Today is the last day for getting the early bird rate!

http://blog.tanelpoder.com/seminar/expert-oracle-exadata-virtual-conference/

Also, our book will be out on Monday!

Oracle CPU time

There are three kinds of CPU information in the Oracle stats:

  1. Oracle CPU used
  2. System CPU used
  3. Oracle demand for CPU

Starting in 10g, Oracle records both the CPU used by the instance and the load on the system in v$sysmetric. This is awesome, as we can see how busy the system is and how much of the CPU Oracle is responsible for:

col metric_name for a25
col metric_unit for a25
select metric_name, value, metric_unit from v$sysmetric where metric_name like '%CPU%' and group_id=2;
METRIC_NAME                         VALUE METRIC_UNIT
------------------------------ ---------- ------------------------------
CPU Usage Per Sec              251.067016 CentiSeconds Per Second
CPU Usage Per Txn              5025.52477 CentiSeconds Per Txn
Host CPU Utilization (%)       11.6985845 % Busy/(Idle+Busy)
Database CPU Time Ratio        76.3291033 % Cpu/DB_Time

Now the question is how do we convert these to something useful? For me, I put them into the equivalent of AAS (average active sessions) and compare that to the core count:

   select 'CPU_ORA_CONSUMED'                                     CLASS,
                    round(value/100,3)                             AAS
             from v$sysmetric
             where metric_name='CPU Usage Per Sec'
               and group_id=2
          union
            select 'CPU_OS'                                        CLASS ,
                    round((prcnt.busy*parameter.cpu_count)/100,3)    AAS
            from
              ( select value busy from v$sysmetric
                where metric_name='Host CPU Utilization (%)'
                 and group_id=2 ) prcnt,
             ( select value cpu_count
                 from v$parameter
                where name='cpu_count' )  parameter;

CLASS                  AAS
---------------- ----------
CPU_ORA_CONSUMED       .002
CPU_OS                 .022

An AAS of 1 is equivalent to 100% of one core, so OS CPU usage here is about 2% of a core, and of that Oracle consumed 0.2% of a core.
Not a very active system (we can look at an active system later), but what I wanted to point out is that this query is missing an important statistic: the demand for CPU by Oracle. We can only add that, AFAIK, by joining in ASH:

   select 'CPU_ORA_CONSUMED'                                     CLASS,
                    round(value/100,3)                             AAS
             from v$sysmetric
             where metric_name='CPU Usage Per Sec'
               and group_id=2
          union
            select 'CPU_OS'                                         CLASS ,
                    round((prcnt.busy*parameter.cpu_count)/100,3)     AAS
            from
              ( select value busy from v$sysmetric
                 where metric_name='Host CPU Utilization (%)'
                   and group_id=2 ) prcnt,
              ( select value cpu_count from v$parameter
                 where name='cpu_count' )  parameter
          union
             select
               'CPU_ORA_DEMAND'                                            CLASS,
               nvl(round( sum(decode(session_state,'ON CPU',1,0))/60,2),0) AAS
             from v$active_session_history ash
             where SAMPLE_TIME > sysdate - (60/(24*60*60));

CLASS                   AAS
---------------- ----------
CPU_ORA_CONSUMED       .001
CPU_ORA_DEMAND          .02
CPU_OS                 .019

So the demand for CPU was higher than the amount consumed. Now, the demand figure comes from ASH, which is sampled, so its accuracy is weak; but over larger sample sets or on busier systems it’s pretty darn good. The demand alerts us to CPU starvation on a busy system.

I like to wrap all this up into a query with all the wait classes to see the overall load on Oracle, including CPU consumed by Oracle, CPU demanded by Oracle, and CPU used at the OS level:

select
                 decode(n.wait_class,'User I/O','User I/O',
                                     'Commit','Commit',
                                     'Wait')                               CLASS,
                 sum(round(m.time_waited/m.INTSIZE_CSEC,3))                AAS
           from  v$waitclassmetric  m,
                 v$system_wait_class n
           where m.wait_class_id=n.wait_class_id
             and n.wait_class != 'Idle'
           group by  decode(n.wait_class,'User I/O','User I/O', 'Commit','Commit', 'Wait')
          union
             select 'CPU_ORA_CONSUMED'                                     CLASS,
                    round(value/100,3)                                     AAS
             from v$sysmetric
             where metric_name='CPU Usage Per Sec'
               and group_id=2
          union
            select 'CPU_OS'                                                CLASS ,
                    round((prcnt.busy*parameter.cpu_count)/100,3)          AAS
            from
              ( select value busy from v$sysmetric where metric_name='Host CPU Utilization (%)' and group_id=2 ) prcnt,
              ( select value cpu_count from v$parameter where name='cpu_count' )  parameter
          union
             select
               'CPU_ORA_DEMAND'                                            CLASS,
               nvl(round( sum(decode(session_state,'ON CPU',1,0))/60,2),0) AAS
             from v$active_session_history ash
             where SAMPLE_TIME > sysdate - (60/(24*60*60));

CLASS                   AAS
---------------- ----------
CPU_ORA_CONSUMED       .002
CPU_ORA_DEMAND          .03
CPU_OS                 .023
Commit                    0
User I/O                  0
Wait                      0

Ideally I’d want the CPU stats to be subsets of each other, so that I could have a graphically stackable set of statistics.

Now, rolling it all together:
with AASSTAT as (
           select
                 decode(n.wait_class,'User I/O','User I/O',
                                     'Commit','Commit',
                                     'Wait')                               CLASS,
                 sum(round(m.time_waited/m.INTSIZE_CSEC,3))                AAS
           from  v$waitclassmetric  m,
                 v$system_wait_class n
           where m.wait_class_id=n.wait_class_id
             and n.wait_class != 'Idle'
           group by  decode(n.wait_class,'User I/O','User I/O', 'Commit','Commit', 'Wait')
          union
             select 'CPU_ORA_CONSUMED'                                     CLASS,
                    round(value/100,3)                                     AAS
             from v$sysmetric
             where metric_name='CPU Usage Per Sec'
               and group_id=2
          union
            select 'CPU_OS'                                                CLASS ,
                    round((prcnt.busy*parameter.cpu_count)/100,3)          AAS
            from
              ( select value busy from v$sysmetric where metric_name='Host CPU Utilization (%)' and group_id=2 ) prcnt,
              ( select value cpu_count from v$parameter where name='cpu_count' )  parameter
          union
             select
               'CPU_ORA_DEMAND'                                            CLASS,
               nvl(round( sum(decode(session_state,'ON CPU',1,0))/60,2),0) AAS
             from v$active_session_history ash
             where SAMPLE_TIME > sysdate - (60/(24*60*60))
)
select
       ( decode(sign(CPU_OS-CPU_ORA_CONSUMED), -1, 0, (CPU_OS - CPU_ORA_CONSUMED )) +
       CPU_ORA_CONSUMED +
        decode(sign(CPU_ORA_DEMAND-CPU_ORA_CONSUMED), -1, 0, (CPU_ORA_DEMAND - CPU_ORA_CONSUMED ))) CPU_TOTAL,
       decode(sign(CPU_OS-CPU_ORA_CONSUMED), -1, 0, (CPU_OS - CPU_ORA_CONSUMED )) CPU_OS,
       CPU_ORA_CONSUMED CPU_ORA,
       decode(sign(CPU_ORA_DEMAND-CPU_ORA_CONSUMED), -1, 0, (CPU_ORA_DEMAND - CPU_ORA_CONSUMED )) CPU_ORA_WAIT,
       COMMIT,
       READIO,
       WAIT
from (
select
       sum(decode(CLASS,'CPU_ORA_CONSUMED',AAS,0)) CPU_ORA_CONSUMED,
       sum(decode(CLASS,'CPU_ORA_DEMAND'  ,AAS,0)) CPU_ORA_DEMAND,
       sum(decode(CLASS,'CPU_OS'          ,AAS,0)) CPU_OS,
       sum(decode(CLASS,'Commit'          ,AAS,0)) COMMIT,
       sum(decode(CLASS,'User I/O'        ,AAS,0)) READIO,
       sum(decode(CLASS,'Wait'            ,AAS,0)) WAIT
from AASSTAT)
/

    CPU_OS    CPU_ORA CPU_ORA_WAIT     COMMIT     READIO       WAIT
---------- ---------- ------------ ---------- ---------- ----------
       .02       .002            0          0          0          0

Now let’s run up some load on a machine and database.
Take two databases, run up the CPU demand on both, and add some wait contention. The machine has 24 cores, so there is definitely a problem when CPU_TOTAL goes over 24. I’m running 14 sessions, each trying to burn a core, on two different databases. In the first few lines the test is ramping up:

SQL> /

 CPU_TOTAL     CPU_OS    CPU_ORA CPU_ORA_WAIT     COMMIT     READIO       WAIT
---------- ---------- ---------- ------------ ---------- ---------- ----------
    14.887       .387     13.753         .747          0          0       .023

SQL> /

 CPU_TOTAL     CPU_OS    CPU_ORA CPU_ORA_WAIT     COMMIT     READIO       WAIT
---------- ---------- ---------- ------------ ---------- ---------- ----------
    21.989      7.469     12.909        1.611          0          0       .044

SQL> /

 CPU_TOTAL     CPU_OS    CPU_ORA CPU_ORA_WAIT     COMMIT     READIO       WAIT
---------- ---------- ---------- ------------ ---------- ---------- ----------
    26.595     12.125     11.841        2.629          0          0       .025

SQL> /

 CPU_TOTAL     CPU_OS    CPU_ORA CPU_ORA_WAIT     COMMIT     READIO       WAIT
---------- ---------- ---------- ------------ ---------- ---------- ----------
    27.045     12.125     11.841        3.079          0          0       .025

Historically, CPU used by Oracle was derived from

v$sysstat.name=’CPU used by this session’

but this statistic had problems, as the value was only updated when a call ended. A call could be a 1-hour PL/SQL procedure, which would thus report zero CPU usage in the stats until it finished, at which point the CPU figure would spike off the scale.

ASH has always been the most stable way to gauge CPU demand, though Oracle has made improvements in gathering CPU statistics. I believe the time model gathers CPU every 5 seconds in 10g, and in 11g it’s possible that CPU stats are gathered every second.
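
For reference, a minimal sketch of querying the legacy statistic alongside its time-model counterpart (both views are standard in 10g and later):

-- the legacy statistic (updated only at call end, per the caveat above)
select value from v$sysstat where name = 'CPU used by this session';

-- the time model equivalent
select value from v$sys_time_model where stat_name = 'DB CPU';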

Friday Philosophy – PowerPoint Picture Pain

The title does not refer to the agony of sitting through a presentation with loads of pointless graphics and zip-all content. Rather, it encompasses the agony of trying to produce some decent graphical content for PowerPoint. Or Word, or any other software-based method of presenting information. I think we generally had better presentations when we had overhead projectors or flip charts, and I sometimes go “PowerPoint naked” when I present.

However, I do present quite often using the ubiquitous PowerPoint. I spend hours on my slides, trying to put good textual content into them both for the actual presentation and for anyone who downloads them later. I also spend ages trying to put together graphics, as we humans are visual creatures and a good {let’s pause for a second and stress the word good} picture or diagram can help enormously. I did a little animated graphic a few years back to show how DNA is unzipped and duplicated, allowing the near-perfect transmission of our DNA as cells divide. Modesty prevents me saying it was good {drat, I just said it}, but several people told me it really helped them understand something that text descriptions struggle to put across.

{Hmm, OK, I’ll see if I can get this to work. Try this how-dna-duplicates powerpoint show. It shows how the two strands of DNA are “matched”. A, C, T and G are the 4 letters of DNA, and each one is associated with its “pair” in the second strand: A to T, T to A, C to G and G to C. Split the pair into two strands and you can make each a perfect copy of the original by adding back the pairs. Understand? Oh, and there are very, very, very rare copy mistakes; in this case a C is matched to a T rather than an A. Go on, fire up the slide.}

The reality is, though, that most of my presentations have very average slides. When you present for the UK Oracle User Group, they are good enough to solicit feedback from the audience and feed it back to the presenter. You get scored on topic, content, style, slides, charisma, sex appeal… The lowest part of my score is always, Always, ALWAYS the slides. This started getting on my nerves, so one year I tried really hard on the slides. They were still my weakest point.

The thing is, I have never had a graphics team (let alone a whole marketing department) to help me out. Either I have worked for a company with no such thing, or I have been a lone consultant doing what I can in my spare time. It is especially frustrating as large companies that do have access to teams of graphic artists seem to present a large number of very pretty, very well-drawn slides with zip-all content, where the graphics add almost nothing to understanding (and we are back to the first line of this posting). In fact, I often find slides with lots of pretty icons representing parts of the technology stack, with chunky arrows sweeping across them showing flow, to be {cough} pointless to my understanding. One thing I have picked up from all those pretty pictures, though, is that a feeling of depth and texture is very effective in altering how a slide looks.

So, I decided for my posts on Index Organised Tables I would spend some time messing around with such things in PowerPoint. My Graphic of the Thread is:

I’m quite pleased with it, especially as in doing so I found out quite a bit about how to use the gradient fills, shadowing, 3D formatting and rotating of simple shapes. It is certainly better than my previous stock graphic for an index:

I think most people would agree it is an improvement. But is it two days’ worth of effort better? Yes, that is how long I spent messing around with the image options in PowerPoint to replace a flat, uninspiring green blob with three lines on it, representing an index, with a nicely shaped, pseudo-3D image carrying a tad more information. *sigh*.

I guess I can now convert other flat pictures made up of boxes, triangles and lines into something that looks a bit more like a professional diagram than something knocked up by a database geezer late one evening the day before the presentation. But if it does not help me produce diagrams that aid understanding rather than just presentation fluff, then, like those marketing slides, I’ve just wasted a lot of time.

I tell you what though, having spent so long on that diagram, I’m going to use it to death :-)