Day 2 was a lot more “networky”, so it was pretty tough. I got through all the labs and stuff worked, but if I’m honest I didn’t really have a clue what I was doing. Added to that, I won’t have the privileges to do most of the stuff we covered when I’m on the real kit, so I’m pretty much going to forget it all in a few days.
Once again, it’s testament to the course that a complete networking gumby like me was able to survive the day.
Day 3 has got some sections that are more relevant to me. I’ve been swimming, so now it’s Monster, Diet Coke and Coffee for breakfast, check out of the hotel, then head off to start day 3.
PS. I went to see Dawn of the Planet of the Apes last night.
Today I will mostly be saying, “DBA not kill DBA!”, and, “DBA not trust human!”
Today’s film was Dawn of the Planet of the Apes.
I really enjoyed the first film and this is a continuation of that story, set 10 years later.
The pacing on this film was pretty similar to the last one. If you are one of the people that found that too slow, you will have a similar experience this time. If, like me, you would prefer to watch the apes doing ape-stuff, rather than seeing them going ape-shit, then this will work for you. There are action scenes, but a lot of the movie time is them just interacting with each other and humans. It feels very “real” to me…
PS. My mom recently saw the previous film on TV and asked how many chimps they used for the main character when they were filming… She figured it was like Lassie, where they used about 12 dogs, each trained for different scenes in the film… I had to explain it was CGI, which goes to show the quality of this stuff…
As I suspected, I’m the only person on the course that doesn’t know what a network is. If I had not been tinkering with the reverse proxies over the last year I would have been pretty much lost.
The course itself is well structured and the teacher is good. The fact I’ve not flounced out in a huff is testament to that. The pattern will be quite familiar to anyone who has been on a hands-on course before. Discuss a topic with slides, then do a hands-on lab that works through that stuff.
It takes a while to get into the swing of things and I’ve proved to myself I’m incapable of reading other people’s instructions, so it’s a good job I usually write my own. Now that I’m starting to get used to the interface and the command line, I’m hoping today will be a bit easier.
From a brief discussion, what I need from the load balancers seems *drastically* different to most of the other people in the room. I think this is going to be quite a long and arduous road when I start having to apply some of this to real situations. I sense a lot of external consultancy…
It is interesting coming from a different background to the others in the room and seeing how we approach things from different angles, and with different emphasis. I’ll write a blog post about this when I’ve finished the course, because it’s been something that has been brewing in my mind for a while…
As you will probably already know, I followed the day with a visit to see Guardians of the Galaxy.
I’ve been swimming this morning and now I have to log on to work to fix some stuff before starting day 2 of the course.
Depending on your setup, you might have automatically updated anyway. If not, go on to your dashboard and give it a nudge.
I’ve just got back from seeing Guardians of the Galaxy.
Take note Sci-Fi movie makers! This is your competition! This is what you need to aim to outdo!
It’s very cool. It looks great. You quickly start to give a crap about the characters. It doesn’t take itself too seriously.
Favourite Character: Groot. I challenge anyone to come out of the film and not want to say, “I am Groot”, in response to any situation.
I didn’t really fancy it when I saw the trailers. A couple of people were talking about how good the reviews were, so I thought I would give it a go. I’m glad I did. It’s excellent!
The Flash Cache Mode still defaults to Write-Through on Exadata X4 because most customers are best served that way – not because Write-Back is buggy or unreliable. Chances are that Write-Back is not required, and we save Flash capacity that way. So when you see this
CellCLI> list cell attributes flashcachemode
         WriteThrough
it is likely for the best :-)
Let me explain: Write-Through means that write I/O coming from the database layer goes first to the spinning drives, where it is mirrored according to the redundancy of the diskgroup containing the file being written to. Afterwards, the cells may populate the Flash Cache if they think it will benefit subsequent reads, but no mirroring is required there. In case of hardware failure, the mirroring has already been done sufficiently on the spinning drives, as the picture shows:
That changes when the Flash Cache Mode is Write-Back: now writes go primarily to the flash cards, and popular objects may never get aged out to the spinning drives at all – or at least that age-out may happen significantly later – so the writes on flash must now be mirrored. Again, the redundancy of the diskgroup where the object in question is placed determines the number of mirrored writes. The two pictures assume normal redundancy. In other words: Write-Back reduces the usable capacity of the Flash Cache by at least half.
Only databases with performance issues caused by write I/O will benefit from Write-Back; the most likely symptom of that is a high number of free buffer waits wait events. And Flash Logging is done with both Write-Through and Write-Back. So there is a good reason to turn on the Write-Back Flash Cache Mode only on demand. I explained much the same during my current Oracle University Exadata class in Frankfurt, by the way :-)
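Should you actually need Write-Back, the switch has historically been a sequence along these lines. This is only a sketch of the documented procedure: it disrupts the cell, the exact steps vary with the storage server software version (later versions allow a rolling change without dropping the cache), so check the current documentation before touching real kit:

```
CellCLI> drop flashcache
CellCLI> alter cell shutdown services cellsrv
CellCLI> alter cell flashCacheMode = WriteBack
CellCLI> alter cell startup services cellsrv
CellCLI> create flashcache all
```

On a production system you would roll this out one cell at a time, waiting for ASM resync in between.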
This is Part I in a short series of posts dedicated to loading SLOB data. The SLOB loader is called setup.sh and it is, by default, a concurrent data loader. The SLOB configuration file parameter controlling the number of concurrent data loading threads is called LOAD_PARALLEL_DEGREE. In retrospect I should have named the parameter LOAD_CONCURRENT_DEGREE because unless Oracle Parallel Query is enabled there is no parallelism in the data loading procedure. But if LOAD_PARALLEL_DEGREE is assigned a value greater than 1 there is concurrent data loading.
Occasionally I hear of users having trouble combining Oracle Parallel Query with the concurrent SLOB loader. It is pretty easy to overburden a system when doing something like concurrent, parallel data loading – in the absence of tools like Database Resource Manager, I suppose. To that end, this series will show some examples of what to expect when performing SLOB data loading with various init.ora settings and combinations of parallel and concurrent data loading.
In this first example I’ll show loading with LOAD_PARALLEL_DEGREE set to 8. The scale is 524,288 SLOB rows, which maps to 524,288 data blocks because SLOB forces a single row per block. Please note, the only slob.conf parameters that affect data loading are LOAD_PARALLEL_DEGREE and SCALE. The following is a screen shot of the slob.conf file for this example:
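In case the screen shot doesn’t render for you, the two slob.conf lines that matter for this example would have read something like the following (values taken from the text above; every other parameter left at its default):

```shell
# slob.conf fragment - the only two parameters that affect data loading
SCALE=524288             # rows (= 8K blocks) per SLOB schema
LOAD_PARALLEL_DEGREE=8   # concurrent loader threads (not Parallel Query slaves)
```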
The next screen shot shows the very simple init.ora settings I used during the data loading test. This very basic initialization file results in default Oracle Parallel Query, therefore this example is a concurrent + parallel data load.
The next screen shot shows that I directed setup.sh to load 64 SLOB schemas into a tablespace called IOPS. Since SCALE is 524,288 this example loaded roughly 256GB (8192 * 524288 * 64) of data into the IOPS tablespace.
As reported by setup.sh the data loading completed in 1,539 seconds or a load rate of roughly 600GB/h. This loading rate by no means shows any intrinsic limit in the loader. In future posts in this series I’ll cover some tuning tips to improve data loading. The following screen shot shows the storage I/O rates in kilobytes during a portion of the load procedure. Please note, this is a 2s16c32t 115w Sandy Bridge Xeon based server. Any storage capable of I/O bursts of roughly 1.7GB/s (i.e., 2 active 8GFC Fibre Channel paths to any enterprise class array) can demonstrate this sort of SLOB data loading throughput.
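The arithmetic behind those figures is easy to check, using the block size, scale and schema count quoted above:

```python
# Sanity-check the data volume and load rate quoted in the post.
block_size = 8192   # bytes per SLOB data block
scale = 524288      # blocks (one row each) per schema
schemas = 64        # schemas loaded by setup.sh

total_bytes = block_size * scale * schemas
total_gib = total_bytes / 2**30
print(f"data loaded: {total_gib:.0f} GiB")        # 256 GiB

elapsed_s = 1539
rate_gib_per_h = total_gib / elapsed_s * 3600
print(f"load rate: {rate_gib_per_h:.0f} GiB/h")   # ~599 GiB/h
```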
After setup.sh completes it is good to count how many loader threads were able to successfully load the specified number of rows. As the example shows, I simply grep for the value of slob.conf->SCALE from cr_tab_and_load.out. Remember, SLOB in its current form loads a zeroth schema, so the return from such a word count (wc -l) should be one greater than the number of schemas setup.sh was directed to load.
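As a sketch of that check (the file name is from the post; here the loader output is simulated so the snippet is self-contained – 64 schemas plus the zeroth schema gives 65 matches):

```shell
SCALE=524288
SCHEMAS=64

# Simulate the per-schema row counts setup.sh writes to cr_tab_and_load.out
for i in $(seq 0 "$SCHEMAS"); do
  echo "$SCALE"
done > cr_tab_and_load.out

# The check from the post: one match per successfully loaded schema
grep -c "$SCALE" cr_tab_and_load.out    # prints 65 (SCHEMAS + 1)
```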
The next screen shot shows the required execution of the procedure.sql script. This procedure must be executed after any execution of setup.sh.
Finally, one can use the SLOB/misc/tsf.sql script to report the size of the tablespace used by setup.sh. As the following screenshot shows the IOPS tablespace ended up with a little over 270GB which can be accounted for by the size of the tables based on slob.conf, the number of schemas and a little overhead for indexes.
This installment in the series has shown expected screen output from a simple example of data loading. This example used default Oracle Parallel Query settings, a very simple init.ora and a concurrent loading degree of 8 (slob.conf->LOAD_PARALLEL_DEGREE) to load data at a rate of roughly 600GB/h.
Filed under: oracle
I’m on an F5 Load Balancer training course for the next 3 days.
I have no idea what to expect and to be honest, I really don’t think I should be here. With the exception of a bit of fiddling with Apache reverse proxies, I don’t really know anything about this stuff, so I’m not sure if this will go over my head or be intensely slow and boring…
If anything comes out of it worth blogging about I certainly will.
Chertsey is like a seaside town. It’s full of cafes, restaurants and odd little shops. When I was searching for a place to swim Google came up with loads of pool installation and maintenance companies, so I think it’s a pretty rich area. I found a local swimming pool, but I’ve had to remortgage my house to afford to swim there. I went this morning at 06:30 and it wasn’t too crowded. It’s unusual to find a private gym with a 25M pool. Most of them in the UK have tiny little things that you can’t swim in. It was a bit on the warm side, but then I guess you have to expect that when it’s not a training pool. Hopefully I won’t be too much of a slob by the time I get home.
I’m thinking I might do a cinema visit every night to play catch-up.
I didn’t intend to write another blog post yesterday evening at all, but found something that was worth sharing and got me excited… And when I started writing I intended it to be a short post, too.
If you have been digging around Oracle session performance counters a little you undoubtedly noticed how their number has increased with every release, and even with every patch set. Unfortunately I don’t have an 11.1 system (or earlier) at my disposal to test, but here is a comparison of how Oracle has instrumented the database. I have already ditched my 11.2.0.2 system as well, so no comparison there either :( This is Oracle on Linux.
In the following examples I am going to use a simple query to list the session statistics by their class. The decode statement is based on the official documentation set. There you find the definition of v$statname plus an explanation of the meaning of the class-column. Here is the script:
with stats as (
  select name,
         decode(class, 1, 'USER',
                       2, 'REDO',
                       4, 'ENQUEUE',
                       8, 'CACHE',
                      16, 'OS',
                      32, 'RAC',
                      64, 'SQL',
                     128, 'DEBUG',
                         'NA') as decoded_class
    from v$statname
)
select count(decoded_class), decoded_class
  from stats
 group by rollup(decoded_class)
 order by 1
/
11.2.0.3 is probably the most common 11g Release 2 version currently out there in the field. Or at least that’s my observation. According to MOS Doc ID 742060.1 it was released on 23 September 2011 (is that really that long ago?) and is already out of error correction support, by the way.
Executing the above-mentioned script gives me the following result:
COUNT(DECODED_CLASS) DECODED
-------------------- -------
                   9 ENQUEUE
                  16 OS
                  25 RAC
                  32 REDO
                  47 NA
                  93 SQL
                 107 USER
                 121 CACHE
                 188 DEBUG
                 638
So there are 638 of these counters. Let’s move on to 11.2.0.4.
Oracle 11.2.0.4 is interesting as it was released after 12.1.0.1. It is the terminal release for Oracle 11.2, and you should consider migrating to it as it is still in error correction support. The patch set came out on 28 August 2013. What about the session statistics?
COUNT(DECODED_CLASS) DECODED
-------------------- -------
                   9 ENQUEUE
                  16 OS
                  25 RAC
                  34 REDO
                  48 NA
                  96 SQL
                 117 USER
                 127 CACHE
                 207 DEBUG
                 679
A few more, all within the range you would expect.
Oracle 12.1.0.2 is fresh off the press, released just a few weeks ago. Unsurprisingly the number of session statistics has increased again. What did surprise me was the number of statistics now available for every session! Have a look at this:
COUNT(DECODED_CLASS) DECODED
-------------------- -------
                   9 ENQUEUE
                  16 OS
                  35 RAC
                  68 REDO
                  74 NA
                 130 SQL
                 130 USER
                 151 CACHE
                 565 DEBUG
                1178
That’s nearly double what you found for 11.2.0.4. Incredible, and hence this post. Comparing 11.2.0.4 with 12.1.0.2 you will notice where most of the new counters ended up.
The debug class (128) shows lots of statistics (including spare ones) for the in-memory option (IM):
SQL> select count(1), class from v$statname where name like 'IM%' group by class;

  COUNT(1)      CLASS
---------- ----------
       211        128
Happy troubleshooting! Reminds me to look into the IM-option in more detail.
SLOB can be obtained at this link: Click here.
This post is just a simple set of screenshots I recently took during a fresh SLOB deployment. There have been a tremendous number of SLOB downloads lately so I thought this might be a helpful addition to go along with the documentation. The examples I show herein are based on a 12.1.0.2 Oracle Database, but these principles apply equally to 12.1.0.1 and all Oracle Database 11g releases as well.
If you already have a tablespace to load SLOB schemas into please see the next step in the sequence.
Provided database connectivity works with ‘/ as sysdba’ this step is quite simple. All you have to do is tell setup.sh which tablespace to use and how many SLOB users (schemas) to load. The slob.conf file tells setup.sh how much data to load. This example is 16 SLOB schemas, each with 10,000 8K blocks of data. One thing to be careful of is the slob.conf->LOAD_PARALLEL_DEGREE parameter. The name is not exactly perfect, since it actually controls the concurrent degree of SLOB schema creation/loading. Underneath the concurrency there may be parallelism (Oracle Parallel Query), so consider setting this to a rather low value so as not to flood the system until you’ve practiced with setup.sh for a while.
After taking a quick look at cr_tab_and_load.out, as per setup.sh instruction, feel free to count the number of schemas. Remember, there is a “zero” user so setup.sh with 16 will have 17 SLOB schema users.
After setup.sh and counting user schemas please create the SLOB procedure in the USER1 schema.
This is an example of what happens if one misses the detail to create the semaphore wait kit as per the documentation. Not to worry, simply do what the output of runit.sh directs you to do.
The following is an example of a healthy runit.sh test.
Strictly speaking this is all optional if all you intend to do is test SLOB on your current host. However, if SLOB has been configured on a Windows, AIX, or Solaris box, this is how one tests SLOB. Testing these non-Linux platforms merely requires a small Linux box (e.g., a laptop or a VM running on the system you intend to test!) and SQL*Net.
We don’t care where the SLOB database service is. If you can reach it successfully with tnsping you are mostly there.
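A minimal tnsnames.ora entry for that looks something like the sketch below. The host, port and service name here are placeholders, not values from the post:

```
SLOB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = slob-host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = SLOB))
  )
```

If `tnsping SLOB` comes back OK, runit.sh can then use the same alias to reach the remote database.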
The following is an example of a successful runit.sh test over SQL*Net.
Please note, loading SLOB over SQL*Net has the same configuration requirements as what I’ve shown for data loading (i.e., running setup.sh). Consider the following screenshot which shows an example of loading SLOB via SQL*Net.
Finally, please see the next screenshot, which shows the slob.conf file that corresponds to the proof of loading SLOB via SQL*Net.
This short post shows the simple steps needed to deploy SLOB in both the simple Linux host-only scenario and via SQL*Net. Once a SLOB user gains the skills needed to load and use SLOB via SQL*Net, there are no barriers to testing SLOB databases running on any platform, including Windows, AIX and Solaris.