Oakies Blog Aggregator

Data Virtualization and Greener Data Centers

On the Saturday before the Oracle OpenWorld 2014 conference started, I had the added bonus of finding out that the Data Center Journal had published my article on how data virtualization leads to greener data centers.

So, rather than reprise the article here (which I’m tempted to do), please instead click here and give it a read!

JSON Support in Oracle Database 12c (12.1.0.2)

I spent a bit of time at OpenWorld looking at the JSON support in Oracle Database 12c. I started to write some stuff about it on the plane home and I spent the last two mornings finishing it off. You can see the results here.

I’ve tried to keep it light, since the documentation does a pretty good job at explaining all the variations of the syntax. I’ve also avoided trying to teach people about JSON itself. There is loads of stuff about that on the net already.

For the most part I think the JSON support looks pretty cool. During the process of writing the articles I did notice a few things that I thought might cause confusion.

  • Using dot notation to access JSON in SQL seems like a neat solution, but each reference results in a query transformation that may well leave you with a whole bunch of function calls littered around your SQL. The end result is probably not what you want. I think it is probably better to avoid it and write all the direct function calls yourself, so you know exactly what the optimizer will do.
  • Typically the query transformation of dot notation results in a JSON_QUERY function call, but the optimizer can substitute a JSON_VALUE call if there is an index that it can take advantage of. That can be a little confusing when you aren’t expecting it. Once again, it might be better to avoid dot notation so as not to confuse matters.
  • If you are careful, the indexing of JSON data is pretty straightforward, but if you aren’t aware of how the query transformations work, or you forget how very small changes in function parameters affect index usage, you can chase your tail trying to figure out why your indexes aren’t being used (see the sketch after this list).
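To make that last point concrete, here is a minimal sketch of the explicit approach (the table, index and login are made up purely for illustration; the syntax follows the 12.1.0.2 documentation). Broadly speaking, a function-based index on JSON_VALUE is only picked up when the query expression matches the index expression, including the path, the RETURNING clause and the error handling.

# Minimal sketch: hypothetical table, index and login; run from a shell via SQL*Plus.
sqlplus -s scott/tiger <<'SQL'
CREATE TABLE json_orders (
  id   NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  doc  CLOB CONSTRAINT orders_doc_is_json CHECK (doc IS JSON)
);

CREATE INDEX json_orders_custno_idx
  ON json_orders (JSON_VALUE(doc, '$.customer_number' RETURNING NUMBER));

-- Write the JSON_VALUE call yourself, matching the index expression exactly,
-- rather than relying on the dot notation query transformation.
SELECT id
FROM   json_orders
WHERE  JSON_VALUE(doc, '$.customer_number' RETURNING NUMBER) = 42;
SQL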

Until the REST APIs are released, the only way you can use this stuff is from the server side, so it’s not really something you can hand out to developers who are expecting to use just another document store. I had a play with the REST stuff during a hands-on lab at OpenWorld and it looked kind-of cool. When it’s released I’ll write an article about it and run it by some of the folks at work to see how they think it compares to other document databases…

Cheers

Tim…



In-memory pre-population speed

While presenting at Oaktable World 2014 in San Francisco, I discussed the in-memory pre-population speed and indicated that it takes about 30 minutes to 1 hour to load ~300GB of tables. Someone asked me “Why?”, and that was a fair question, so I profiled the in-memory pre-population at startup.

Profiling methods

I profiled all in-memory worker sessions using Tanel’s snapper script, and also profiled the processes at the OS level using the Linux perf tool with a 99Hz sample rate. As there was no other activity on the database server, it was okay to sample everything on the server. The snapper output will indicate where the time is spent; if the time is spent executing on CPU, then the perf report output will tell us which function call stacks were on CPU during those samples. Data from these two profiling methods will help us understand the root cause of the slowness.

  1. @snapper.sql out,gather=stw 600 10 "select sid from v$session where program like '%W00%'"
  2. perf tool: perf record -F 99 -u oracle -g sleep 3600

Discussion of the output

In the snapper output printed below, the session with sid=17 spent about 89% of its time executing on CPU. Essentially, most of the time was spent on CPU itself, not on any wait event, so the output of the ‘perf report’ command will be useful to understand which function calls are consuming the CPU cycles.

snapim2.txt:
   SID, USERNAME  , TYPE, STATISTIC                               ,         DELTA, HDELTA/SEC,    %TIME, GRAPH
     15, (W003)    , TIME, background IM prepopulation cpu time    ,     511933187,   850.39ms,    85.0%, |@@@@@@@@@ |
     17, (W009)    , TIME, background IM prepopulation cpu time    ,     537411293,   892.71ms,    89.3%, |@@@@@@@@@ |
...
--  End of snap 1, end=2014-10-01 08:48:29, seconds=602

In the output below, 17.6% of the time was spent on CPU and 85.4% of the time was spent on single block reads (there is about 2% over-counting here). This split makes sense, since if the process is waiting for I/O, its CPU usage will be minimal.

    184, (W006)    , TIME, background IM prepopulation elapsed time,     614868987,      1.02s,   102.1%, |@@@@@@@@@@|
    184, (W006)    , TIME, background cpu time                     ,     105862907,   175.85ms,    17.6%, |@@        |
    184, (W006)    , TIME, background IM prepopulation cpu time    ,     105878904,   175.88ms,    17.6%, |@@        |
    184, (W006)    , TIME, background elapsed time                 ,     614853823,      1.02s,   102.1%, |@@@@@@@@@@|
    184, (W006)    , WAIT, db file sequential read                 ,     514181321,   854.12ms,    85.4%, |@@@@@@@@@ |

Visualization

An hour after the database startup we have sufficient data to analyze. Using an R script and a bit of UNIX awk, we can format and visualize the data. I cheated and extracted just the relevant sections from the snapper output using egrep. BTW, snapim.txt is the spooled output of the snapper script. The first egrep command captures the ‘IM prepopulation cpu time’ lines in a file named snapim2.txt, and the second egrep command captures the percentage of waits for the ‘db file sequential read’ wait event.

# Egrep to grep only specific lines.
egrep   'STATISTIC|background IM prepopulation cpu time|End of snap' snapim.txt >snapim2.txt
egrep   'STATISTIC|db file sequential |End of snap' snapim.txt >snapim3.txt

Now we have just the bare minimum of data in these two files: snapim2.txt and snapim3.txt. It is easy to plot them as a scatter plot using an R script. As we are only interested in the Y axis, we will use sequence values for the X axis: the Y axis is the percentage of time spent and the X axis is simply the sample sequence.

# R script to read the csv files and plot them as scatter plots.
w1<-read.csv(file='D:/Riyaj/my tech docs/presentations/OTW2014/snapim2.txt')
plot (seq(1, length(w1$"X.TIME")), w1$"X.TIME", xlab="Sample_N_Time", ylab="IM Prepopulation CPU time")
w2<-read.csv('D:/Riyaj/my tech docs/presentations/OTW2014/snapim3.txt')
plot (seq(1,length(w2$"X.TIME")), w2$"X.TIME", xlab="Sample_N_TIME", ylab="db file sequential read")

The first picture plots the ‘background IM pre-population CPU time’ as a percentage of total time spent, with a snap interval of 600 seconds and 6 snaps in total. You can clearly see two groups: initially, the processes were spending their time executing on CPU; as time goes by, the processes spend less and less CPU time. As we grep’ed only the relevant lines from the snapper output, the timeline also runs from left to right.

[Figure: background IM pre-population CPU time (% of total time) across the samples]

The following picture plots ‘db file sequential read’ waits as a percentage of total time spent. Again, initially the waits for single block reads are minimal, and as time goes by more and more sessions are waiting on single block reads. These two statistics are strongly inversely correlated. Reviewing the output manually, where there are huge ‘db file sequential read’ waits I see that much of the time is accounted for by the ‘fetch continued row’ statistic, indicating a huge amount of row chaining (a query to check this yourself appears below the plot).

[Figure: db file sequential read waits (% of total time) across the samples]
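A hedged way to watch that row-chaining statistic for the in-memory worker sessions yourself is a quick query against v$sesstat. This is my own sketch, not part of the original profiling; the statistic appears as ‘table fetch continued row’ in v$statname, and the W00% filter matches the worker processes targeted by the snapper command above.

sqlplus -s / as sysdba <<'SQL'
-- Current value of the row-chaining statistic for the IM worker sessions.
SELECT s.sid, st.value
FROM   v$session s, v$sesstat st, v$statname n
WHERE  s.sid = st.sid
AND    st.statistic# = n.statistic#
AND    n.name = 'table fetch continued row'
AND    s.program LIKE '%W00%'
ORDER  BY st.value DESC;
SQL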

Analysis of perf output

Since we know that CPU usage was higher in the initial 30 minutes of pre-population, we need to identify the functions consuming the CPU. Again, a bit of awk and grep: essentially, the following code is the equivalent of “sum(percent) group by function call”, as the perf tool captures granular data.

# perf report --stdio > perf_report_inmem.txt
# grep oracle perf_report_inmem.txt | awk '{print $5" "$1 }'|sort|sed 's/%//'|grep -v '0.0' |awk '{sum[$1]+=$2} END {for (j in sum) print sum[j], j}' |sort -k1nr 
8.57 kdzu_basic_insert
6.92 kdzu_csb_compare_fast
6.22 kdzu_csb_node_bsearch
4.59 kdzu_dict_insert
4.45 BZ2_decompress
4.1 __intel_ssse3_rep_memcpy
2.39 kdzu_csb_search
2.38 kdzu_basic_minmax
2.14 kdzcbuffer_basic
2.02 kdzu_get_next_entry_from_basic
1.38 unRLE_obuf_to_output_FAST
1.24 skswrEqual
0.59 kdzu_dict_create_from_basic
0.55 _intel_fast_memcmp

The functions consuming the most CPU cycles are associated with the kdzu Oracle module; the kdzu code populates the in-memory column store using query high compression. You can also see 4.5% of the time spent in BZ2_decompress, a function associated with HCC archive high decompression. While I am positive that there may be some opportunities to tune these Oracle functions (yes, you guessed it right, Oracle developers have to do that), there isn’t a smoking-gun function; the time is spread across many function calls. Annotating a few functions further, I see that arithmetic instructions such as addl and sub are consuming time. I guess offloading the math processing might help, but I’m not sure.
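If you want to drill into a hot function at the instruction level yourself, a minimal sketch of the command (the symbol is one of the hot kdzu functions from the report above; it reads the perf.data file recorded earlier):

# Annotate one of the hot functions at instruction level.
perf annotate --stdio kdzu_basic_insert > annotate_kdzu_basic_insert.txt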

Summary

In summary, we looked deep into the performance of in-memory pre-population using various tools at our disposal. We can probably derive the following facts:

  1. In-memory population follows chained rows, so you should check for chained rows before converting tables to the in-memory column store; chained rows slow down the in-memory population dramatically (see the sketch after this list).
  2. CPU usage will be higher during pre-population, but no function calls stand out as an issue. Hopefully, Oracle development will profile their code to tune these functions further. Maybe some assembly language expert needs to review the instructions.
  3. If the tables are heavily compressed, you should expect CPU usage to decompress from HCC and recompress into the in-memory (MEMCOMPRESS) format.
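As a quick way to check point 1 before populating the column store, here is a sketch (my own, not from the original post). It assumes the standard CHAINED_ROWS table already exists (create it once with ?/rdbms/admin/utlchain.sql) and uses a hypothetical table name.

# Sketch: count chained/migrated rows for a candidate table.
sqlplus -s / as sysdba <<'SQL'
ANALYZE TABLE scott.big_table LIST CHAINED ROWS INTO chained_rows;
SELECT COUNT(*) AS chained_row_count FROM chained_rows;
SQL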

MobaXterm 7.3

With all that OpenWorld stuff going on I managed to miss the really big news. MobaXterm 7.3 was released. :)

This version includes a fix to Bash for the “shellshock” bug.

Downloads and changelog in the usual places.

Cheers

Tim…



Oracle In-Memory Column Store Internals – Part 1 – Which SIMD extensions are getting used?

This is the first entry in a series of random articles about some useful internals-to-know of the awesome Oracle Database In-Memory column store. I intend to write about Oracle’s IM stuff that’s not already covered somewhere else and also about some general CPU topics (that are well covered elsewhere, but not always so well known in the Oracle DBA/developer world).

Before going into further details, you might want to review the Part 0 of this series and also our recent Oracle Database In-Memory Option in Action presentation with some examples. And then read this doc by Intel if you want more info on how the SIMD registers and instructions get used.

There’s a lot of talk about the use of your CPUs’ SIMD vector processing capabilities in the Oracle in-memory module, so let’s start by checking whether it’s enabled in your database at all. We’ll look at Linux/Intel examples here.

The first generation of SIMD extensions in the Intel Pentium world was called MMX. It added 8 new MMn registers, 64 bits each. Over time the registers got widened, and more registers and new features were added. The later extensions were called Streaming SIMD Extensions (SSE, SSE2, SSSE3, SSE4.1, SSE4.2) and Advanced Vector Extensions (AVX and AVX2).

The currently available AVX2 extensions provide 16 x 256-bit YMMn registers, and AVX-512 in the upcoming Knights Landing microarchitecture (year 2015) will provide 32 x 512-bit ZMMn registers for vector processing.

So how do you check which extensions your CPU supports? On Linux, the “flags” column in /proc/cpuinfo easily provides this info.

Let’s check the Exadatas in our research lab:

Exadata V2:

$ grep "^model name" /proc/cpuinfo | sort | uniq
model name	: Intel(R) Xeon(R) CPU           E5540  @ 2.53GHz

$ grep ^flags /proc/cpuinfo | egrep "avx|sse|popcnt" | sed 's/ /\n/g' | egrep "avx|sse|popcnt" | sort | uniq
popcnt
sse
sse2
sse4_1
sse4_2
ssse3

So the highest SIMD extension support on this Exadata V2 is SSE4.2 (No AVX!)

Exadata X2:

$ grep "^model name" /proc/cpuinfo | sort | uniq
model name	: Intel(R) Xeon(R) CPU           X5670  @ 2.93GHz

$ grep ^flags /proc/cpuinfo | egrep "avx|sse|popcnt" | sed 's/ /\n/g' | egrep "avx|sse|popcnt" | sort | uniq
popcnt
sse
sse2
sse4_1
sse4_2
ssse3

Exadata X2 also has SSE4.2 but no AVX.

Exadata X3:

$ grep "^model name" /proc/cpuinfo | sort | uniq
model name	: Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz

$ grep ^flags /proc/cpuinfo | egrep "avx|sse|popcnt" | sed 's/ /\n/g' | egrep "avx|sse|popcnt" | sort | uniq
avx
popcnt
sse
sse2
sse4_1
sse4_2
ssse3

The Exadata X3 supports the newer AVX too.

My laptop (Macbook Pro late 2013):
The Exadata X4 has not yet arrived in our lab, so I’m using my laptop as an example of the latest available CPU generation with AVX2:

Update: Jason Arneil commented that the X4 does not have AVX2 capable CPUs (but the X5 will)

$ grep "^model name" /proc/cpuinfo | sort | uniq
model name	: Intel(R) Core(TM) i7-4960HQ CPU @ 2.60GHz

$ grep ^flags /proc/cpuinfo | egrep "avx|sse|popcnt" | sed 's/ /\n/g' | egrep "avx|sse|popcnt" | sort | uniq
avx
avx2
popcnt
sse
sse2
sse4_1
sse4_2
ssse3

The Core-i7 generation supports everything up to the current AVX2 extension set.

So, which extensions is Oracle actually using? Let’s check!

As Oracle needs to run different binary code on CPUs with different capabilities, some of the In-Memory Data (kdm) layer code has been duplicated into separate external libraries – and then gets dynamically loaded into Oracle executable address space as needed. You can run pmap on one of your Oracle server processes and grep for libshpk:

$ pmap 21401 | grep libshpk
00007f0368594000   1604K r-x--  /u01/app/oracle/product/12.1.0.2/dbhome_1/lib/libshpksse4212.so
00007f0368725000   2044K -----  /u01/app/oracle/product/12.1.0.2/dbhome_1/lib/libshpksse4212.so
00007f0368924000     72K rw---  /u01/app/oracle/product/12.1.0.2/dbhome_1/lib/libshpksse4212.so

My (educated) guess is that the “shpk” in libshpk above stands for oS-dependent High Performance [K]ompression. The “s” prefix normally means platform-dependent (OSD) code, and this low-level SIMD code sure is platform- and CPU-microarchitecture-dependent stuff.

Anyway, the above output from an Exadata X2 shows that SSE4.2 SIMD HPK libraries are used on this platform (and indeed, X2 CPUs do support SSE4.2, but not AVX).

Let’s list similar files from $ORACLE_HOME/lib:

$ cd $ORACLE_HOME/lib
$ ls -l libshpk*.so
-rw-r--r-- 1 oracle oinstall 1818445 Jul  7 04:16 libshpkavx12.so
-rw-r--r-- 1 oracle oinstall    8813 Jul  7 04:16 libshpkavx212.so
-rw-r--r-- 1 oracle oinstall 1863576 Jul  7 04:16 libshpksse4212.so

So, there are libraries for AVX and AVX2 in the lib directory too (the “12” suffix for all file names just means Oracle version 12). The AVX2 library is almost empty though (and the nm/objdump commands don’t show any Oracle functions in it, unlike in the other files).
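If you want to verify that on your own system, here is a rough sketch of one way to compare the libraries by counting their exported functions with nm (the counts will obviously vary by version and platform):

# Rough comparison: count exported functions (type "T") in each SIMD library.
cd $ORACLE_HOME/lib
for f in libshpksse4212.so libshpkavx12.so libshpkavx212.so; do
  printf "%-22s %s exported functions\n" "$f" "$(nm -D "$f" | awk '$2 == "T"' | wc -l)"
done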

Let’s run pmap on a process in my new laptop (which supports AVX and AVX2) to see if the AVX2 library gets used:

$ pmap 18969 | grep libshpk     
00007f85741b1000   1560K r-x-- libshpkavx12.so
00007f8574337000   2044K ----- libshpkavx12.so
00007f8574536000     72K rw--- libshpkavx12.so

Despite my new laptop supporting AVX2, only the AVX library is used (the AVX2 library is named libshpkavx212.so). So it looks like the AVX2 extensions are not used yet in this version (it’s the first Oracle 12.1.0.2 GA release without any patches). I’m sure this will be added soon, along with more features and bugfixes.

To be continued …

Oracle OpenWorld 2014 : Summary

OpenWorld 2014 was dominated by jet lag. Not that “special” type of Doug Burns jet lag, but the real stuff. As I mentioned in a previous post, having been ill in the 3 weeks leading up to OpenWorld, the jet lag hit me hard and I had nothing in reserve to get me through it. I’m now back in the UK and it is even worse. It’s 01:00 as I write this and I’ve been to sleep for about 3 hours. I’m now wide awake. It’s going to be a long day!

Apart from the jet lag, what was the overall message at OOW14?

Cloud

This one was pretty predictable. What broke away from the message of previous years was the Infrastructure as a Service (IaaS) message. In previous years Oracle said they were not interested in IaaS, as competing with the general cloud providers, like AWS, was not good business. As Mark Rittman put it, this is “a race to the bottom”. Instead, Oracle wanted to focus on Platform as a Service (PaaS) and Software as a Service (SaaS), where they are selling their technology stack and software respectively. This has much better margins and allows them to offer something that other cloud providers can’t really compete with in a price fight.

In reality any PaaS provider needs to also provide IaaS because applications do not work in isolation. It may be nice to have your Oracle database on the cloud, but what do you do with that 3rd party application that you would like to run in the same data centre as the database?

Oracle have come out with a statement that they will provide general purpose compute power and not be beaten on price by the likes of AWS. That sounds quite scary, but I think the reality is this will only be a small part of their cloud business. I would imagine most people moving to the Oracle Cloud will be doing so for the PaaS and SaaS offerings. The IaaS will only play a supporting role.

In more general terms, Oracle are planning on adding just about everything “as a Service” on their cloud. They’ve announced Database Backup, Documents, Big Data (Hadoop) and Node.js as a Service, which were new to me, along with all the usual stuff we either already had or expected…

Once everything is available, it will certainly make an impressive list. From a platform perspective it is not quite as diverse as AWS yet, but impressive nonetheless.

Big Data

On the whole, Oracle shied away from the normal, “You can do big data with the Oracle database!”, message they’ve been trying to promote over recent years. I think the world and their dog understand that “Big Data” and relational databases don’t really go hand-in-hand.

Instead, Oracle were pushing the Oracle Big Data SQL product. I started off pretty cynical about this, thinking it would just be a knock-off of Cloudera Impala, but it does seem to be something more. Big Data SQL allows you to create external tables over Hadoop and NoSQL data stores, so you can write SQL against them and process the data in your Oracle database. No need to learn any new query/programming tool. It also allows you to join differing data sources together.
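For a flavour of what that looks like, here is a rough sketch of an external table over a Hive table using the ORACLE_HIVE access driver. All object names are made up, I haven’t verified this against a Big Data SQL installation, and the exact access parameters may differ between releases, so treat it as an illustration only.

# Sketch only: hypothetical names; assumes a working Big Data SQL setup
# and an existing Hive table called logs.web_logs.
sqlplus -s scott/tiger <<'SQL'
CREATE TABLE web_logs_ext (
  log_time  TIMESTAMP,
  user_id   NUMBER,
  url       VARCHAR2(4000)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_HIVE
  DEFAULT DIRECTORY default_dir
  ACCESS PARAMETERS (com.oracle.bigdata.tablename=logs.web_logs)
);

-- Join the Hadoop-resident data with an ordinary Oracle table.
SELECT c.customer_name, COUNT(*)
FROM   web_logs_ext w
JOIN   customers c ON c.customer_id = w.user_id
GROUP  BY c.customer_name;
SQL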

Regardless of your views on big data, there are a lot of “data people” out there with SQL skills and, relatively speaking, almost nobody with MapReduce skills. That, and the fact that for the foreseeable future many companies will be churning through their MapReduce jobs to produce data to put into a relational database for reporting, means that integration between Hadoop, NoSQL and the RDBMS will be a key component. Oracle Big Data SQL seems to have hit this nail square on the head. If it weren’t so ridiculously expensive, it would be interesting to see the adoption rate!

JSON Support

This might seem like a minor feature on the surface, but I think it is a massive step forward for Oracle. The reality of the marketplace is that document stores are now seen as the preferred solution for some situations. Oracle will never compete with the likes of MongoDB (it’s webscale) on sheer performance, but how many people really need to hit those numbers? Last year my company were considering MongoDB/RavenDB for some HR projects. The main factor against this idea was the split of the “single point of truth” between Oracle and another database technology. If the JSON support in the Oracle database had been available, we would probably have used it.

The JSON support in the database seems pretty comprehensive to me. Once the REST APIs are available through Oracle REST Data Services (ORDS), it will be interesting to see how developers react to this.

APEX 5.0

It was rather disappointing to hear that APEX 5.0 is a long way off going to production. The logic for holding back is sound: it’s got to be bulletproof, especially the upgrade process, so it’s better to wait until it is sorted than to release early and get lost in a support nightmare. Even so, I wanted the pretties… :)

WebLogic

I didn’t listen to the formal announcements about WebLogic, so I’m not sure how much of what I heard is still under NDA from the ACE Director Briefing. For that reason I’ll keep my mouth shut, but suffice to say, there are things in the pipeline that will make my life much easier!

Database

The database side of things was relatively quiet. Two years ago we got, “This is what we will give you in 12c”. Last year we got, “This is what we have given you in 12c”. This year we got, “This is what we gave you last year in 12c”. :) We did of course get lots of In-Memory stuff, but we knew about that last year and it is now GA… :)

I guess some news was that we are 18-24 months away from 12cR2, so you will probably have to upgrade to 12cR1 if you want to retain support without paying any extra cash. The proposed release date for 12cR2 will be after the free 1 year extension to support runs out… When you consider the obligatory, “wait for the first patchset”, that could be a long time without support…

Everything Else

There were of course numerous things about Oracle Linux, Oracle VM, MySQL 5.7, Engineered Systems and a whole bunch of other stuff, but I guess if you follow those areas you already know…

Overall

As mentioned in a previous post, the take home message for me is that Oracle are working hard to be a cloud provider. As such, they have spotted obvious flaws in their own products. A big proportion of the new features in their infrastructure products seem to me like a direct result of them “eating their own dog food” while trying to become a cloud provider. I think this is good news for the future of Oracle products, even if you don’t care about the Oracle Cloud specifically.

Big thanks to the ACE Program and OTN for getting me to OOW14. It was great to meet up with my Oracle friends and Oracle family again. I’m looking forward to a jet lag free 10th anniversary OOW next year! :)

Cheers

Tim…



Openworld

Well, the annual spectacle of enormous proportions has come to a conclusion again. And that’s probably the first reason I’d recommend OpenWorld to anyone who works with Oracle and has never been to it. It’s a jaw-dropping moment just to see the scale of the event, and how impressively it’s organised in terms of facilities, lunches, registration and the like.

But onto the technical elements; here are my impressions of this year’s conference:

1) Big data, Big data, Big data, Big data

Somewhere in the conference I think there might have been some coverage of database, middleware and cloud :-) but it was dwarfed by Big Data (no pun intended). From an Australian perspective, I found this quite interesting, because whilst there is a groundswell of interest in the philosophies and tools relating to big data, I’d hardly say it’s taken the nation by storm. Maybe things will change in the coming years.

Having said that, one thing I was pleased to see (as an Oracle dinosaur) was a realisation that SQL still persists as the dominant query language, and hence the clamour by vendors to provide a good SQL interface layer on top of their respective big data products. The promise of running SQL over some, any or all data sources, whether they be an RDBMS or any other data structure, sounds very cool.

2) In-memory

Lots and lots of presentations on the new in-memory database option, and it’s very hard to find anything bad to say about the feature. It just looks like a wonderful feature, although you have to pay for all that wonderment :-) My only disappointment from the conference was that each session was done “standalone”, which meant the first 15 minutes of each precious 45-minute slot were the same 10 slides describing the fundamentals. I would have preferred a single intro session, with the other sessions then going more into the nitty gritty.

3) Passion matters

A number of presentations on topics close to my heart did little to inspire me, whereas others on topics that (I thought) had no interest to me were riveting, and it all comes down to the enthusiasm of the presenter for the technology they were talking about.

4) The app

Sadly….the OpenWorld app was a PR disaster for Oracle based on the tweets I saw about it (some of which were my own). I’m mentioning this not to unload a fresh bucket of invective in the public arena, but to encourage people to provide feedback to the conference organisers. I was as guilty as anyone else in terms of getting more and more emotive with my tweets as I got more and more frustrated with the app as the week went on. But I also tabulated my issues and sent them off to the organisers, trying to be as objective as possible as well as providing suggested fixes. I recommend you do the same.

All up, another great week, with lots of cool new things to explore and blog about :-) Now onto downloading all the slides…

Abstract Review Tips

Yes, this is for my RMOUG abstract reviewers, but it may help other conferences and user groups, too.  We have some incredible content at the RMOUG (Rocky Mountain Oracle User Group) Training Days conference, and it’s all due to a highly controlled, well thought-out process that has evolved over time to ensure that our abstract selection is as fair as possible and offers opportunities to new speakers as well.

I’m going to share my abstract scoring summary with the rest of the class to start, so you get an idea of how we score.  As a strong DBA who has done development and a bit of mobile development, I can go across a number of tracks to perform reviews, but you will definitely notice that I don’t review the following:

1.  Almost anything in the BI track.

2.  I skip over anything in the development or other tracks that I’m unsure of.

3.  I never score my own abstracts. (Yes, I submit mine to go through the abstract process, but then pull my high-scoring ones afterwards and retain them to add to the schedule if we have a last-minute cancellation.  Tim has done the same these last few years.)

 

You will also notice that I don’t just score whole numbers, but give myself more variation in the results by using more fine-grained scores.

I carefully inspect the “Review Summary by Rating” to verify that my percentages of scores are somewhere close to what we recommend at the top of the summary page (seen in blue at the top of the screenshot).

How to Review

Knowing what to look for and how to score it is important.  We are a tech conference, so if an abstract appears to be marketing anything, or isn’t sufficiently technical, it may receive a low score.

Our reviewers are asked to look for great content, variety in content, and new and interesting topics, and they will score an abstract down if it is incomplete or has grammatical/spelling errors.  This may seem harsh, but if someone can’t take the time to fill out an abstract properly, it’s difficult to believe quality time will be put into the presentation.

Why It’s Important

Our reviewers and review scores are crucial to our selection process.  This next week, after we close reviews on the 6th of October, we will pull all of the abstracts with their overall average scores.  We will arrange each track from highest to lowest score and then pick out the top-scoring abstracts for each track.

1.  The percentage of abstracts submitted for a track determines the percentage of the available session slots allotted to that track.

2.  We then look at the number of high-scoring abstracts per speaker across ALL tracks and remove their lowest-scoring ones until each speaker has a maximum of two (we only have 100 session slots available and we want everyone to have as much opportunity to speak as possible!).

3.  I then mark in the comments anyone I see as a new speaker.  We look at how the abstract scored and whether we can add them freely to the schedule or whether they may require mentoring from a senior speaker at the conference.  We add these new speakers to the openings made by step #2.

4.  We build out the schedule into tracks, so that no matter what your specialty area, you should always have something incredible to attend.

5.  Those highly scored abstracts I had to hold back because the speaker already had two accepted?  I retain those, along with mine and Tim’s, to use as fillers if there is a last-minute cancellation.

I’ve just finished my reviews and I’m incredibly impressed with the abstracts this year!  The quality of the abstracts and the variety of impressive topics can only mean that it’s going to be the BEST RMOUG Training Days EVER!!

 




OOW14 Session: The Art and Craft of Tracing

A big thanks to all those who braved attending my session on the final day of Oracle Open World 2014. I hope you enjoyed it and found it valuable.

You can download the slide deck as well as the scripts I mentioned here.

As always, I would love to hear from you.

OOW14 Session: RAC'fying Multitenant

Thank you for attending my session RAC'fying Multitenant at Oracle Open World 2014. You can download the slide deck here.

[Updated] Oct 4th, 2014: The article on multitenant I wrote for OTN is available here: http://www.oracle.com/technetwork/articles/database/multitenant-part1-pdbs-2193987.html. This article shows various commands I referenced in my session, e.g. point-in-time recovery of PDBs.

As always I would love to hear from you.