Oakies Blog Aggregator

Simple SQL – Finding the Next Operation

November 3, 2011 An interesting request came in from an ERP mailing list – how would you write a SQL statement that indicates the next operation in a manufacturing process? Sounds like an easy requirement. Let’s take a look at a graphical view of one example (the graphical view is created using a program that I wrote [...]
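
The excerpt ends there, but as a minimal sketch of one common way to approach the question (the table and column names below are assumptions for illustration, not the author’s actual solution), the analytic LEAD() function can return each operation’s successor within a work order’s routing:

-- Hypothetical routing table: one row per operation on a work order.
-- LEAD() picks up the next operation_seq within the same work order,
-- ordered by operation sequence; the last operation gets NULL.
SELECT wo.work_order_id
     , wo.operation_seq
     , wo.operation_desc
     , LEAD(wo.operation_seq) OVER
         (PARTITION BY wo.work_order_id ORDER BY wo.operation_seq) AS next_operation_seq
FROM   work_order_routing wo
ORDER  BY wo.work_order_id, wo.operation_seq;

Real routings complicate this with parallel and alternate operations, which is presumably where the “easy requirement” stops being easy.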

Movember

So it's that time of year again, when perfectly reasonable people decide to deliberately make fools of themselves for a good month and a good cause ....

It's Movember, the month formerly known as November, now dedicated to growing moustaches and raising awareness and funds for men's health; specifically prostate and testicular cancer. I'm donating my top lip to the cause for 30 days in an effort to help change the face of men's health. My Mo will spark conversations, and no doubt generate some laughs; all in the name of raising vital awareness and funds for cancers affecting men.

Why am I so passionate about men's health?
* 1 in 9 men will be diagnosed with prostate cancer in their lifetime
* This year 37,000 new cases of the disease will be diagnosed
* 1 in 2 men will be diagnosed with cancer in their lifetime
* Men are 26% less likely than women to go to the doctor (Not sure about this statistic - thanks Graham)
* Men never go to the bl**dy doctor! (That's a bit more like it)
* Too many of my friends have already been affected by some of these issues.

I'm asking you to support my Movember campaign by making a donation, either by:
* Donating online at: http://mobro.co/dougburns
* Going old school: write a cheque payable to 'Movember', reference my name and Registration Number 763450, and send it to: Movember Europe, PO Box 68600, London, EC1P 1EF

If you'd like to find out more about the type of work you'd be helping to fund by supporting Movember, take a look at the Programmes We Fund section on the Movember website: http://uk.movember.com/about

Thank you in advance for supporting my efforts, both last year and hopefully this year, to change the face of men's health. On a personal note, I understand that it will be difficult to match last year's terrific final total when so many friends in the community are signing up (such as Alex Gorbachev, James Morle, Dan Norris and Jacco Landlust, to name a few), but I'll give it a try anyway and, whatever happens, the money will all end up in the right place, so sponsor away!

Oh, but *please* don't ask me to go through the pantomime misery again! ;-)

Doug Burns

OTN APAC Tour: NZOUG Day 1 & 2

The evening before the NZOUG conference was a bit chaotic. There was still no resolution to the Qantas fiasco and I was starting to believe I would have to cancel my sessions in Perth and try to fly home from Auckland. I tried to switch my flights, but everything was sold out.

By the next morning the Qantas strike seemed to be over, but there were reports of delays and disruption, so I was still not sure if I would make it to Perth.

NZOUG Day 1:

I was just about to start my first session (Clonedb) when there was a fire alarm. Fortunately it was resolved pretty quickly and I was able to get things back on track. From there it was straight on to my second session (Edition-Based Redefinition), with no fire alarms this time. Both my sessions seemed to go down well. I got some good questions in each session, which is always cool, and some more later in the day and at the evening event.

I spent a little time chatting to some of the guys on the Quest stand and the guys from the DBVisit stand.

My first session of the day as an attendee was Graham Wood’s Exadata session. I saw this session for the first time a few years ago when it was known as the “Terabyte Hour”. With the hardware refreshes and a few software tweaks that have happened since then it now only takes about 18 minutes to complete the demo, so the name has changed. :) My comment to Graham at the end of the session was, “Much as I hate to admit it, it’s really impressive.”

That pretty much took me to the end of the presentations on the first day.

Chris Muir told me to check my emails as my flights had been sorted out. Sure enough, when I checked Lillian Buziak from the OTN team had contacted Oracle Travel and fixed everything for me. She is a total miracle worker. If I was younger and more attractive (and she wasn’t already married) she would be mine, oh yes, she would be mine…

In the evening we all went to the conference dinner. The theme was “Murder Mystery”, which involved a few poor souls getting selected to be made fools of in front of the rest of the audience. Unfortunately, I was one of the fools in question, along with Chris Muir, Debra Lilley, Bambi and a couple of guys. The compere and the ghost of the victim (my wife, murdered on her wedding day) led us through various “role playing scenarios” to determine who was the murderer. The final decision came down to me (the husband) and the chef, with the chef being voted the winner/loser/murderer. It was all very confusing, fun and embarrassing, all rolled into one. :)

The whole event seemed to go down really well with the audience, who had plenty of comments (and photos). I have a feeling this is going to haunt (no pun intended) me for a long time.

NZOUG Day 2:

This was cut very short for me. My new Perth flight was arranged for 14:10, so I had to leave the conference at 11:00. I still got time to chat to a few people, mostly about the previous evening’s events, and check out Ronald Bradford’s session on the top 9 issues people have with MySQL databases. I’ve been a casual user of MySQL for years, but never really spent much time looking at it in any depth. I learned quite a bit from this session. Maybe I’ll spend a little more time playing with it in future.

Assuming the rest of the conference carried on the way it started, I would say it was a big success.

Perth:

The flight to Perth was pretty straightforward and I got to the apartment with no dramas. I took a walk over to the conference venue, which is significantly further than I thought. It took me about 60 minutes to walk it at a pace, with no bag, in the (relatively speaking) cool evening air. I’m not sure what it will be like in the summer sun.

At 06:00 this morning I went out for a run along the river, then made myself look like a freak by doing sprints on the way back. Nothing like the sight of a fat sweaty bloke panting like a dog to turn heads. The locals were out in force, walking, running and doing boot camps on the banks of the river. Even at that time it was very sunny. I think it would be damn near lethal to try it at midday. The flies and mossies were out in force too. If nothing else, the swatting and endless tics you develop when they fly round your face and ears help you burn more calories. :)

Cheers

Tim…

Very Cool Yet Very Dense! Low Wattage HP Technology in Project Moonshot. By The Way, Large Node Counts Require A Symmetrical Architecture.

HP has partnered with Calxeda to produce early samples of a 4U chassis containing 288 system-on-chip (SoC) servers.

I care about this sort of systems offering because I espouse the symmetrical Massively Parallel Processing (MPP) computing paradigm.

Here’s a nice quote from The Register:

“…a half rack of Redstone machines and their external switches implementing 1,600 server nodes has 41 cables, burns 9.9 kilowatts, and costs $1.2m.

A more traditional x86-based cluster doing the same amount of work would only require 400 two-socket Xeon servers, but it would take up 10 racks of space, have 1,600 cables, burn 91 kilowatts, and cost $3.3m.”

Shared Nothing / Shared Everything?
Did you notice I mentioned symmetrical MPP in this post? EMC Greenplum Database is a symmetrical MPP software product. This means all code can and does run on all available CPU units. There is no arbitrary cut-off point where some CPUs must run certain code and other CPUs (in dedicated servers) can only run certain code. That would be an asymmetrical MPP, and it is impossible to handle data flow in a balanced manner if some CPUs must run some code and others cannot. Please allow me to quote myself:

The scalability of an MPP is solely related to whether it is symmetrical or asymmetrical.

So what about shared-disk? The scalability of an MPP has nothing to do with whether the disks are accessed via shared-disk or dedicated (non-shared) plumbing. Oracle Real Application Clusters scales DW/BI/Analytics workloads fantastically and it is a shared-disk architecture. However, coupling Real Application Clusters with Exadata Smart Scan is where the asymmetrical attributes are introduced.

The shared-disk versus shared-nothing argument is old, tired and irrelevant. Interestingly, I part ways with my colleagues here at EMC Greenplum on that matter. If you see literature that touts the shared-nothing element of Greenplum, please bear in mind that it is my personal assertion that shared-disk versus shared-nothing is not a scalability topic related to DW/BI/Analytics workloads. To put it another way, it is my personal campaign and I’ll be blogging soon on that matter. Oh, I forgot to mention that I’m right on the matter (smiley).

Little Things Doth Crabby Make?
I don’t want to draw attention to the lack of care for electrostatic discharge (ESD) in the handling of components in the following video, because I’m too excited at having finally seen things I’ve been anxiously anticipating for quite some time. So, no, I won’t make this an installment in the Little Things Doth Crabby Make series. HP most likely uses ESD wrist-bands when they are not producing a video (smiley).

The Disclaimer
Please take a gander at the upper right hand corner of this page. You’ll see the disclaimer that spells out the fact that these are my personal words and thoughts. I am not blogging about any EMC business in this post. I’m simply blogging about low-wattage general-purpose servers—something I’m very interested in.



Filed under: oracle

IOT Part 6 – Inserts and Updates Slowed Down (part A)

<..IOT1 – the basics
<….IOT2 – Examples and proofs
<……IOT3 – Significantly reducing IO
<……..IOT4 – Boosting Buffer Cache efficiency
<……….IOT5 – Primary Key Drawback
…………>IOT6(B) – OLTP Inserts

A negative impact of using Index Organized Tables is that inserts are, and updates can be, significantly slowed down. This post covers the former and the reasons why – and the need to always run tests on a suitable system. (I’m ignoring deletes for now – many systems never actually delete data, and I plan to cover IOTs and deletes later.)

Using an IOT can slow down inserts by something like 100% to 1000%. If inserting data into the table is only part of a load process, this might translate into a much smaller overall impact on the load, such as 25%. I’m going to highlight a few important factors contributing to this wide spread of impact below.

If you think about it for a moment, you can appreciate there is a performance impact on data creation and modification with IOTs. When you create a new record in a normal table it gets inserted at the end of the table (or perhaps in a block marked as having space). There is no juggling of other data.
With an IOT, the correct point in the index has to be found and the row has to be inserted at the right point. This takes more “work”. The inserting of the new record may also lead to an index block being split and the extra work this entails. Similar extra work has to be carried out if you make updates to data that causes the record to move within the IOT.
Remember, though, that an IOT is almost certainly replacing an index on the heap table which, unless you are removing indexes before loading data and recreating them afterwards, would have to be maintained when inserting into the heap table. So some of the “overhead” of the IOT would still occur for the heap table in maintaining the Primary Key index. Comparing inserts or updates between a heap table with no indexes and an IOT is not a fair test.

For most database applications data is generally written once, modified occasionally and read many times – so the impact an IOT has on insert/update is often acceptable. However, to make that judgement call you need to know:

  • what the update activity is on the data you are thinking of putting into an IOT
  • the magnitude of the impact on insert and update for your system
  • the ratio of read to write.

There is probably little point putting data into an IOT if you constantly update the primary key values (NB see IOT-5 as to why an IOT’s PK columns might not be parts of a true Primary Key) or populate previously empty columns or hardly ever read the data.

There is also no point in using an IOT if you cannot load the data fast enough to support the business need. I regularly encounter situations where people have tested the response of a system once populated but fail to test the performance of population.

Now to get down to the details. If you remember the previous posts in this thread (I know, it has been a while) then you will remember that I create three “tables” with the same columns. One is a normal heap table, one is an Index Organized Table and one is a partitioned Index Organized Table, partitioned into four monthly partitions. All tables have two indexes on them, the Primary Key index (which is the table in the case of the IOTs) and another, roughly similar index, pre-created on the table. I then populate the tables with one million records each.
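
For reference, here is a minimal sketch of the three objects. The column lists are abbreviated and the names are illustrative; the full creation scripts are in IOT-5, as noted at the end of this post.

-- Heap table; the primary key gives it a conventional PK index.
CREATE TABLE child_heap
( pare_id   NUMBER(10)    NOT NULL
, cre_date  DATE          NOT NULL
, vc_1      VARCHAR2(100)
, CONSTRAINT chp_pk PRIMARY KEY (pare_id, cre_date)
);

-- The same structure as an Index Organized Table:
-- the primary key index IS the table.
CREATE TABLE child_iot
( pare_id   NUMBER(10)    NOT NULL
, cre_date  DATE          NOT NULL
, vc_1      VARCHAR2(100)
, CONSTRAINT cio_pk PRIMARY KEY (pare_id, cre_date)
)
ORGANIZATION INDEX;

-- The partitioned IOT: four monthly range partitions on a PK column
-- (the month boundaries here are made up for illustration).
CREATE TABLE child_iot_p
( pare_id   NUMBER(10)    NOT NULL
, cre_date  DATE          NOT NULL
, vc_1      VARCHAR2(100)
, CONSTRAINT cip_pk PRIMARY KEY (pare_id, cre_date)
)
ORGANIZATION INDEX
PARTITION BY RANGE (cre_date)
( PARTITION p_m1 VALUES LESS THAN (TO_DATE('01-FEB-2011','DD-MON-YYYY'))
, PARTITION p_m2 VALUES LESS THAN (TO_DATE('01-MAR-2011','DD-MON-YYYY'))
, PARTITION p_m3 VALUES LESS THAN (TO_DATE('01-APR-2011','DD-MON-YYYY'))
, PARTITION p_m4 VALUES LESS THAN (TO_DATE('01-MAY-2011','DD-MON-YYYY'))
);

Each table would also get the second, “roughly similar” non-PK index mentioned above, e.g. CREATE INDEX chp_vc1 ON child_heap (vc_1).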

These are the times, in seconds, to create 1 million records in the HEAP and IOT tables:

                  Time in Seconds
Object type         Run_Normal
------------------  ----------
Normal Heap table        171.9  
IOT table               1483.8

This is the average of three runs to ensure the times were consistent. I am using Oracle V11.1 on a machine with an Intel Core 2 Duo T7500 at 2.2GHz, 2GB memory and a standard 250GB 5000RPM disk. The SGA is 256MB and Oracle has allocated around 100MB-120MB to the buffer cache.

We can see that inserting the 1 million rows into the IOT takes about 860% of the time it takes with a heap table. That is a significant impact on speed. We now know how large the impact of using an IOT is on inserts, and presumably it’s all to do with juggling the index blocks. Or do we?

This proof-of-concept (POC) on my laptop {which you can also run on your own machine at home} did not match a proof-of-concept I did for a client. That one was done on V10.2.0.3 on AIX, on a machine with 2 dual-core CPUs with hyper-threading (so 8 virtual cores), a 2GB SGA and approx 1.5GB buffer cache, with enterprise-level storage somewhere in the bowels of the server room. The results on that machine for creating a similar number of records were:

                  Time in Seconds
Object type         Run_Normal
------------------  ----------
Normal Heap table        152.0  
IOT table                205.9

In this case the IOT inserts required 135% of the time of the heap table. This was consistent with other tests I did with a more complex indexing strategy in place, where the IOT overhead was around 25-35%. I can’t go into much more detail as the information belongs to the client, but the data creation was more complex and so the actual inserts were only part of the process – this is how it normally is in real life. Even so, the difference in overhead between my local-machine POC and the client-hardware POC is significant, which highlights the impact your platform can have on your testing.

So where does that leave us? What is the true usual overhead? Below are my fuller results from the laptop POC.

                        Time in Seconds
Object type         Run_Normal    Run_quiet    Run_wrong_p
------------------  ----------    ---------    -----------
Normal Heap table        171.9        81.83         188.27  
IOT table               1483.8      1055.35        1442.82
Partitioned IOT          341.1       267.83         841.22 

Note that with the partitioned IOT the creation took 341 seconds; the performance ratio to a heap table is only 198%, much better than for the normal IOT. Hopefully you are wondering why!

I’m running this test on a Windows laptop and other things are going on. The timings for Run_Quiet are where I took steps to shut down all non-essential services and applications. This yielded a significant speed-up for all three object types, but the biggest impact was on the already-fastest heap table.

The final set of figures is for a “mistake”. I created the partitions wrongly, such that half the data went into one partition, the rest into another and a tiny fraction into a third, rather than being spread evenly over 4 partitions. You can see that the Heap and normal IOT times are very similar to the Run_Normal results (as you would expect, as these tests are the same) but for the partitioned IOT the time taken is halfway towards the IOT figure.

We need to dig a little further into what is going on to see where the effort is being spent, and it turns out to be very interesting. During my proof-of-concept on the laptop I grabbed the information from v$sesstat for the session before and after each object creation, so I could get the figures just for the loads. I then compared the stats between each object population; some of them are shown below {IOT_P means partitioned IOT}.
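
For anyone wanting to reproduce the approach, here is a rough sketch of the general shape of such a stat capture (my assumption for illustration; the actual grabbing script goes with the IOT-5 scripts): snapshot v$sesstat for your own session before and after the load, and diff the two.

-- Before the load: snapshot this session's statistics.
CREATE TABLE stats_before AS
SELECT sn.name, ss.value
FROM   v$sesstat ss, v$statname sn
WHERE  sn.statistic# = ss.statistic#
AND    ss.sid = SYS_CONTEXT('USERENV','SID');

-- ... run the object population here ...

-- After the load: snapshot again.
CREATE TABLE stats_after AS
SELECT sn.name, ss.value
FROM   v$sesstat ss, v$statname sn
WHERE  sn.statistic# = ss.statistic#
AND    ss.sid = SYS_CONTEXT('USERENV','SID');

-- The delta is the work done by the load itself.
SELECT a.name, a.value - b.value AS delta
FROM   stats_after a, stats_before b
WHERE  b.name = a.name
AND    a.value - b.value <> 0
ORDER  BY delta DESC;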

STAT_NAME                                  Heap           IOT        IOT_P
------------------------------------ ---------- -------------  -----------
CPU used by this session                  5,716         7,222        6,241
DB time                                  17,311       148,866       34,120
Heap Segment Array Inserts               25,538            10           10

branch node splits                           25            76           65
leaf node 90-10 splits                      752         1,463        1,466
leaf node splits                          8,127        24,870       28,841

consistent gets                          57,655       129,717      150,835
cleanout - number of ktugct calls        32,437        75,201       88,701
enqueue requests                         10,936        28,550       33,265

file io wait time                     4,652,146 1,395,970,993  225,511,491
session logical reads                 6,065,365     6,422,071    6,430,281
physical read IO requests                   123        81,458        3,068
physical read bytes                   2,097,152   668,491,776   25,133,056
user I/O wait time                          454       139,585       22,253
hot buffers moved to head of LRU         13,077       198,214       48,915
free buffer requested                    64,887       179,653      117,316

The first section shows that all three used similar amounts of CPU, the IOT and partitioned IOT being a little higher. Much of the CPU consumed was probably in generating the fake data. The DB Time, of course, pretty much matches the elapsed time, as the DB was doing little else.
It is interesting to see that the heap insert uses array inserts, which of course are not available to the IOT and IOT_P as the data has to be inserted in order. {I think Oracle inserts the data into the heap table as an array and then updates the indexes for all the entries in the array – and I am only getting this array processing because I create the data as an array from an “insert into … select” type load. But don’t hold me to any of that.}

In all three cases there are two indexes being maintained, but in the case of the IOT and IOT_P the primary key index holds the whole row. This means there is more information per key, fewer keys per block and thus more blocks to hold the same data {and more branch blocks to reference them all}. So more block splits will be needed. The second section shows this increase in branch node and leaf block splits: double the branch block splits and triple the leaf block splits. This is probably the extra work you would expect for an IOT. Why are there more leaf block splits for the partitioned IOT? The same volume of data ends up taking more blocks in the partitioned IOT – 200MB for the IOT_P in four partitions of 40-60MB, as opposed to a single 170MB for the IOT. The larger overall size of the partitioned IOT is just due to a small overhead incurred by using partitions, and also a touch of random fluctuation.

So for the IOT and IOT_P there is about three times the index-specific work being done, and a similar increase in related statistics such as enqueues – but not exactly three times, as it is not just index processing that contributes to these other statistics. However, the elapsed time is much more than three times as much. Also, the IOT_P is doing more index work than the IOT, but its elapsed time is less. Why?

The fourth section shows why. Look at the file io wait times. This is the total time spent waiting on IO {in millionths of a second} and it is significantly elevated for the IOT, and to a lesser degree for the IOT_P. Physical IO is generally responsible for the vast majority of time in any computer system where it has not been completely avoided.
Session logical reads are only slightly elevated, almost negligibly so, but the number of physical reads to support them increases from 123 for the heap table insert to 81,458 for the IOT and 3,068 for the IOT_P. A clue as to why comes from the hot buffers moved to head of LRU and free buffer requested statistics: there is a lot more activity in moving blocks around in the buffer cache for the IOT and IOT_P.

Basically, for the IOT, all the blocks in the primary key segment are constantly being updated but eventually they won’t all fit in the block buffer cache – remember I said the IOT is eventually 170MB and the buffer cache on my laptop is about 100MB – so they are flushed down to disk and then have to be read back when altered again. This is less of a problem for the IOT_P as only one partition is being worked on at a time (the IOT_P is partitioned on date and the data is created day by day) and so more of it (pretty much all) will stay in memory between alterations. The largest partition only grows to 60MB and so can be worked on in memory.
For the heap, the table is simply appended to and only the indexes have to be constantly updated and they are small enough to stay in the block buffer cache as they are worked on.
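
If you want to check this on your own system, a quick sketch of the comparison (these are standard dictionary views; object names aside, nothing here is specific to my tests):

-- How big are the segments being loaded into?
SELECT segment_name, segment_type, ROUND(bytes/1024/1024) AS size_mb
FROM   user_segments
ORDER  BY bytes DESC;

-- And how big is the buffer cache right now?
SELECT ROUND(bytes/1024/1024) AS buffer_cache_mb
FROM   v$sgastat
WHERE  name = 'buffer_cache';

If the segment (or the partition currently being loaded) is comfortably smaller than the buffer cache, the constant block revisiting stays in memory; if not, expect the physical IO pattern shown above.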

This is why, when I got my partitioning “wrong”, the load took so much longer: more physical IO was needed, as the larger partition would not fit into the cache while it was worked on. A quick check shows that logical reads and in fact almost all statistics were very similar, but 26,000 IO requests were made (compared to 81,458 for the IOT and 3,068 for the correct IOT_P).

Of course, I set my SGA size, and thus the buffer cache, to highlight the issue on my laptop, and I have to say even I was surprised by the magnitude of the impact. On the enterprise-level system I did my client’s proof of concept on, the impact on inserts was less: the buffer cache could hold the whole working set, I suspect the SAN had a considerable cache on it, there was ample CPU resource to cope with the added latching effort, and the time taken to actually create the data being inserted was a significant part of the workload, reducing the overall impact of the slowness caused by the IOT.

{Update: in this little update I increase my block buffer cache and show that physical IO plummets and the IOT insert performance increases dramatically.}

This demonstrates that a POC, especially one for what will become a real system, has to use a realistic volume on realistic hardware.
For my client’s POC, I still had to bear in mind the eventual size of the live working set and the probable size of the live block buffer cache, and make some educated guesses.

It also explains why my “run_quiet” timings showed a greater benefit for the heap table than for the IOT and IOT_P. A Windows machine has lots of pretty pointless things running that take up CPU and a bit of memory, but not really much IO. I reduced the CPU load, and that benefits activity that is not IO-bound, so it has more impact on the heap table load. Much of the time for the IOT and IOT_P is spent hammering the disk, and that just takes time.

So, in summary:

  • Using an IOT increases the index block splitting and, in turn, enqueues and general workload. The increase is in proportion to the size of the IOT compared to the size of the replaced PK index.
  • The performance degradation across the whole load process may well be less than 50%, but the only way to really find out is to test.
  • You may lose the array processing that can benefit a heap table load if you do the load via an intermediate table.
  • With an IOT you may run into issues with physical IO if the segment (or part of the segment) you are loading into cannot fit into the buffer cache. (This may be an important consideration for partitioning or ordering of the data loaded.)
  • If you do a proof of concept, do it on a system that is as similar to the real one as you can.
  • Just seeing the elapsed time difference between tests is sometimes not enough. You need to find out where that extra time is being spent.

I’ve thrown an awful lot at you in this one post, so I think I will stop there. I’ve not included the scripts to create the test tables here; they are in IOT-5 {lacking only the grabbing of the v$sesstat information}.

Flash Is Fast! Provisioning Flash For Oracle Database Redo Logging? EMC F.A.S.T. Is Flash And Fast But Leaves Redo Where It Belongs.

Guy Harrison has been blogging his findings regarding solid state disk testing/performance with Oracle. Guy’s tests and reports are very thorough. This is a good body of work. The links to follow are at the bottom of this post.

Before reading, however, please consider the following thoughts about solid state disk as pertaining to Oracle I/O performance and flash:

  1. Oracle Database Smart Flash Cache (DSFC) is flash storage, accessed via libC/libaio physical I/O, that “augments” the SGA. When using DSFC you will sustain DBWR writes even if your application only issues SELECT statements. Few people are aware of this fact. The real SGA buffer pool becomes the “L1” cache and DBWR spills clean blocks to the L2 (DSFC), where subsequent logical I/O (cache buffers chains hits) will actually require a physical read from flash. Sessions cannot access buffered data in the DSFC directly; blocks have to first be read from flash into the SGA before the session can get on with its business. I have never seen DSFC work effectively. I have, on the other hand, seen a whitepaper showing read-intensive “OLTP” serviced by an SGA that is much smaller than available RAM on the system but augmented by flash. However, the paper is really just showing that flash reads are better than reads from an under-configured hard disk farm. I’ll blog about that paper soon.
  2. Database Smart Flash Cache, really? Don’t augment DRAM until you’ve maxed out DRAM. Do I honestly aim to assert that augmenting nanosecond operations with millisecond operations is non-optimal? Yes, that is my assertion.
  3. DBWR spills to DSFC aren’t charged to sessions, but such activity does take system bandwidth.
  4. If you find a real-life workload that benefits from DSFC please let me know.
  5. Redo log physical I/O is well-suited to round, brown spinning disks. Just don’t melt the disks with DBWR random writes and then expect the same disks to favor the occasional large sequential writes issued by LGWR. Watch your log file parallel write (LFPW) wait events as a component of log file sync (LFS) and you’ll usually find that LFS is a CPU problem, not a LFPW problem. Read this popular post for more on that matter.
  6. Don’t expect much performance increase, in an Exadata environment, from the new “Exadata Smart Flash Log” feature. It may indeed smooth out LFPW service times, and that is a good thing, but the main benefit Smart Flash Log delivers to Exadata customers is relief from the HDD controller starvation (not to mention the imbalance of processors to spindles) that happens in Exadata when DBWR and LGWR are simultaneously demanding IOPS from the limited number of spindles in Exadata standard configurations. Remember, Exadata’s design center is bandwidth, not write IOPS. A full-rack X2 Exadata configuration can only sustain on the order of 25,000 mirrored random writes per second. The closer one pushes Exadata to that 25K IOPS figure, the more trouble LGWR has with log flushes. That is, Exadata Smart Flash Log (ESFL) is really just a way to flow LGWR traffic through different adaptors (PCI flash). In fact, simply plugging in another LSI HDD controller with a few SATA drives dedicated to redo streaming writes would do quite well, if not quite as well as ESFL. That is a topic for a different post.
  7. EMC Fully Automated Storage Tiering (FAST) does not muddle around with redo because redo does really well with spinning disks. Hard disk drives do just fine with large sequential writes.
  8. Oracle Database 11g Release 2 added the ability to specify the block size for a redo log. If you do feel compelled to flush redo to solid state, I recommend you crack open the documentation for the BLOCKSIZE syntax and add some 4K blocking-factor logs (a sketch follows this list). I made this point in the comment section of Guy’s blog. I’m not sure if his tests used a 4K block size or were flushing redo with 512-byte alignment (which flash really doesn’t favor).
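
As a minimal sketch of what point 8 describes (group numbers, paths and sizes here are made up for illustration; check the 11.2 documentation for the exact restrictions on BLOCKSIZE and device sector sizes):

-- Add redo log groups with a 4K blocking factor for flash devices (11.2+).
ALTER DATABASE ADD LOGFILE GROUP 11 ('/flash/redo11a.log') SIZE 1G BLOCKSIZE 4096;
ALTER DATABASE ADD LOGFILE GROUP 12 ('/flash/redo12a.log') SIZE 1G BLOCKSIZE 4096;

-- Confirm the blocking factor of each log group.
SELECT group#, blocksize, bytes/1024/1024 AS size_mb FROM v$log;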

And now references to Guy Harrison’s posts:

http://guyharrison.squarespace.com/blog/2011/10/27/using-flash-disk-for-redo-on-exadata.html

http://guyharrison.squarespace.com/ssdguide/04-evaluating-the-options-for-exploiting-ssd.html

Filed under: oracle

W-ASH : Web Enabled ASH

I’m excited about the ease of creating rich, web-enabled user applications given the state of technology now. JavaScript and jQuery have gone from being disdained as “not very serious” languages to moving into the limelight, front and center.

Here is a small example.

Download the following file:  W-ASH (web enabled ASH, file is wash.tar.gz )

Go to your Apache web server root, which in my case on Red Hat Linux is:

# cd /usr/local/apache2
# gzip -d wash.tar.gz
# tar xvf wash.tar
-rwxr-xr-x  21956  14:08:21 cash.sh
-rw-r--r--  30881  11:52:10 htdocs/ash.html
drwxr-xr-x      0  15:40:52 htdocs/js/
-rwxr-xr-x  10958  14:04:42 cgi-bin/json_ash.sh

(The directory htdocs/js has a number of files in it from Highcharts. I edited them out of the listing above to make the output cleaner.)

There are 3 basic files

  1. cash.sh – collects ASH-like data from Oracle into a flat file; it runs in a continual loop (a hedged sketch of the kind of sampling it does follows this list)
  2. ash.html – basic web page using Highcharts
  3. json_ash.sh – CGI script that reads the ASH-like data and gives it to the web page via JSON
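
The collector query itself ships inside cash.sh; purely as an illustration of the kind of ASH-style sampling it performs (the column choices here are my assumption, not the script’s actual text), each pass of the loop runs something of this general shape and appends the output to the flat file:

-- One sample of the active user sessions; 'ON CPU' when not waiting.
SELECT TO_CHAR(SYSDATE,'YYYY-MM-DD HH24:MI:SS') AS sample_time
     , s.sid
     , s.username
     , s.sql_id
     , CASE WHEN s.state = 'WAITING' THEN s.event ELSE 'ON CPU' END AS event
FROM   v$session s
WHERE  s.status = 'ACTIVE'
AND    s.type   = 'USER';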

Now you are almost ready to go. You just need to start the data collection with “cash.sh”  (collect ASH)

./cash.sh
Usage: usage    [sid] [port]

The script “cash.sh” requires only that “sqlplus” be in the path. It’s probably easiest to:

  • move/copy cash.sh to an ORACLE_HOME/bin
  • su oracle
  • kick it off as in:
nohup cash.sh system change_on_install 172.16.100.250 orcl &

The script “cash.sh” will create a directory under /tmp/MONITOR/day_of_the_week for each day of the week, clearing out any old files, so there is a maximum of 7 days of data. (To stop the collection, run “rm /tmp/MONITOR/clean/*end”.)

To view the data, go to your web server address and append “ash.html?q=machine:sid”.
For example, my web server is on 172.16.100.250.
The database I am monitoring is on host 172.16.100.250 with Oracle SID “orcl”:

http://172.16.100.250/ash.html?q=172.16.100.250:orcl

See video at: http://screencast.com/t/sZrFxZkTrmn

UltraEdit 2.3 for Mac Available…

UltraEdit 2.3 for Mac is now available. You can get the download here. The change log is here.

Previous releases have coincided with the Linux release. Let’s hope 2.3 for Linux will come soon!

Cheers

Tim…

I really liked this one...

I answer lots of questions about Oracle.  Many of them are ambiguous, unclear, not fully specified, or hard to follow.  It is like unraveling a puzzle sometimes (ok, most of the time...)

That is why I found this question to be really amusing - especially after reading the comments.  Lots of disagreement as to the meaning of the question!

So, what do you think the answer is? I'll post what I think after I see some feedback.  It is a very, very interesting question.

And it also points to why I think the answer to all questions is mostly "Why" or "It depends".  We usually need a lot more information to answer what might appear to be simple questions.

Apologies

.. both to my readers, and to those waiting for the latest (and late) book.

I’ve been so busy for the last three weeks that I’ve had virtually no time for any serious blogging, or even for answering existing comments. Apart from checking and returning the proofs for the book as fast as Apress sends them to me, I’ve also been busy travelling and doing “proper” work – and I now find that it’s been nearly three weeks since I last published anything.

In the interim my most interesting trip has been to Tokyo, where I gave a couple of presentations at the Insight Out conference. As Debra Lilley has already commented, the conference was very well organised, the audience attentive, and the hospitality of our hosts astounding. The English language presentations were subject to simultaneous translation into Japanese, and a fair proportion of the audience wore headphones so they could follow the translation. I’ve done the same type of thing at Italian and Spanish events, keeping an ear open for the whisper of the translation from the headphones of a member of the audience, but, as one of the translators explained to me, not only does it usually take more syllables to say something in Japanese than it does in English, but Japanese is a verb-final language, so you can’t start translating until the speaker reaches the verb at the end of the sentence. This makes it particularly important to speak slowly enough to allow the interpreter to keep up … and it’s surprisingly hard work listening to yourself to make sure you don’t speed up. (My admiration for people who do simultaneous translation is unbounded – and I fully appreciate the need for them to work in pairs and switch every 10 minutes.)

Normal service will be resumed as soon as possible – starting with a catch-up on the outstanding comments; in the meantime here is a cartoon about String Theory that made me smile a little while ago.