
Binding DBLINK to a Network Interface

Just thought I’d do a quick post on this one; it came out of a conversation a month or two ago.

We had a single-instance database running on a failover cluster (RHCS). A database link existed for a related database and the connection had to pass through a firewall. The problem was the firewall: it had a rule which only allowed connections from the VIP.

The database server has two IP addresses – a system IP and a VIP. Is there any way to bind the dblink to one specific interface? (Note: we would still like the system IP to be used for other traffic.)

I couldn’t think of a way for Oracle to do that. But we did find a workaround, of sorts (though not perfect) – by using the operating system route command.
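The workaround looks roughly like this, using the modern iproute2 equivalent of the route command. This is a sketch only – the addresses and interface name below are hypothetical, and on a real cluster you would need to make the route persistent:

```shell
# Hypothetical addresses: firewall-protected remote DB host 10.1.2.50,
# local VIP 192.168.10.20 on interface eth0.
# Add a host route so packets to the remote database leave with the
# VIP as their source address, while all other traffic still uses the
# system IP (the default route is untouched).
ip route add 10.1.2.50/32 dev eth0 src 192.168.10.20

# Verify which source address the kernel will now pick for that destination.
ip route get 10.1.2.50
```

In an RHCS failover cluster the route would have to be added and removed by the cluster scripts along with the VIP itself, since the VIP only exists on the active node.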

You can always learn something new.

It’s high time for this blog to come off hiatus. I really don’t know why I let it go so long; just preoccupied with work and extracurricular activities, I guess.

One of those activities was contributing two chapters to a new book from Apress, Pro Oracle SQL. Though it was only two chapters, it consumed a significant amount of time. Some folks seem able to bang out well-written prose and code with little apparent effort. I seem to labor over it more than most – at least it feels that way at times.

On to something new.  Not really new, but it was new to me the other day.  Or if it was not new, I had completely forgotten about it.

It has to do with the innocuous date formats used with to_date(). I ran into some unexpected behavior from to_date() while running one of the scripts used for the aforementioned book.
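I won’t spoil which quirk it was just yet, but as a flavour of how forgiving to_date() can be about format models (this is standard Oracle behaviour, not the specific example from the book scripts):

```sql
-- Oracle matches digits positionally, so the punctuation in the string
-- does not have to match the punctuation in the format model:
select to_date('2011-01-30', 'YYYY/MM/DD') from dual;   -- succeeds
select to_date('20110130',   'YYYY-MM-DD') from dual;   -- succeeds

-- The FX (format exact) modifier enforces a strict match; this one
-- raises ORA-01861: literal does not match format string.
select to_date('20110130', 'FXYYYY-MM-DD') from dual;
```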

Air France and CDG Airport, you suck!

My plane from Belgrade to Paris landed at Charles de Gaulle Airport on time. I immediately got the bus from terminal 2B to 2E. I ran to security. I didn’t bother repacking my bag or putting on my belt. I just ran to the boarding gate. The bus for the plane was still there but I was told boarding had closed and I would have to take the next flight. We are talking 30 seconds here! I was 5 paces from the bus taking people from the gate to the plane. I turned away from the boarding gate, said a long list of expletives, turned back and politely asked what I was supposed to do now. I was told to go upstairs to the transfer desk.

At the transfer desk I was told I had to take the next plane, scheduled for a little over 6 hours later. There was no extra charge for this change because Air France admitted it was their fault for issuing a ticket for “an impossible connection”. There would be no compensation though because my flights weren’t delayed. There were about 10 other people in exactly the same position. Our flights from various countries had arrived on time, but we had been given a connection that was impossible to make. If they had delayed closing the boarding gate by about 10 minutes we would have all got on.

To add insult to injury, the transfer desk couldn’t issue the new boarding passes immediately because the plane had missed its takeoff slot and was waiting for a new one. I was … very angry! If I were that blue geezer from Watchmen I would have disintegrated every inch of CDG airport.

CDG screwed up my luggage on the way out and Air France messed up my trip home. If I have my way I am never using Air France or this airport again!

Cheers

Tim…

Note. There have been several drafts of this post. The first might well have landed me in prison. The later ones required a Parental Advisory sticker. This one is fit for human consumption…

You can explain an invalid SQL statement

I’m in “nightmare weekend before presenting” mode. I’m up to my eyes at work (and have been for ages, thus the quiet blog) and my recent weekends have been full of normal {and abnormal} life.

As is the way, when up against it and putting together my proofs for wild claims, everything breaks subtly and makes my wild claims look a little, well, wild – even though they are real issues I’ve seen, worked through and fixed in the day job. *sigh*. It does not help when you come across little oddities you have never seen before and end up spending valuable time looking into them.

So here is one. I’m just putting together a very, very simple demo of how the number of rows the CBO expects to see drops off as you move outside the known range. Below you can see the statement I am using (I keep passing in different days of the month and watching the expected number of rows drop until I hit 1 expected row), but look at how it progresses to the last entry…

mdw11> select count(*) from date_test_flat where date_1=to_date('&day-02-2011','DD-MM-YYYY')
  2  /
Enter value for day: 01

Execution Plan
----------------------------------------------------------
Plan hash value: 247163334

-------------------------------------------------------------------------------------
| Id  | Operation          | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                |     1 |     8 |   215   (0)| 00:00:04 |
|   1 |  SORT AGGREGATE    |                |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DATE_TEST_FLAT |    16 |   128 |   215   (0)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("DATE_1"=TO_DATE(' 2011-02-01 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))

mdw11> /
Enter value for day: 15

Execution Plan
----------------------------------------------------------
Plan hash value: 247163334

-------------------------------------------------------------------------------------
| Id  | Operation          | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                |     1 |     8 |   215   (0)| 00:00:04 |
|   1 |  SORT AGGREGATE    |                |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DATE_TEST_FLAT |     2 |    16 |   215   (0)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("DATE_1"=TO_DATE(' 2011-02-15 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))

mdw11> /
Enter value for day: 21

Execution Plan
----------------------------------------------------------
Plan hash value: 247163334

-------------------------------------------------------------------------------------
| Id  | Operation          | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                |     1 |     8 |   215   (0)| 00:00:04 |
|   1 |  SORT AGGREGATE    |                |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DATE_TEST_FLAT |     1 |     8 |   215   (0)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("DATE_1"=TO_DATE(' 2011-02-21 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))

mdw11> /
Enter value for day: 30

Execution Plan
----------------------------------------------------------
Plan hash value: 247163334

-------------------------------------------------------------------------------------
| Id  | Operation          | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                |     1 |     8 |   215   (0)| 00:00:04 |
|   1 |  SORT AGGREGATE    |                |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DATE_TEST_FLAT |    99 |   792 |   215   (0)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("DATE_1"=TO_DATE('30-02-2011','DD-MM-YYYY'))

mdw11>

The expected number of rows drops, becomes 1 – and then shoots back up to 99 (which is the expected number within the known range, as I have 10,000 rows spread over 100 days). My immediate thought was “Wow! Maybe Oracle have put some odd fix in whereby, when you go well out of range, it reverts to expecting an average number of rows”. Nope. It is because I asked for the data for the 30th of February. And I did not get an error.

I think it is because I have set autotrace traceonly explain. This causes the SQL statement not to be executed {if it is just a select, not an insert, update or delete}. It seems the costing section of the CBO is not so good at spotting duff dates – and it then gets the costing wrong.
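For anyone wanting to play along, the test table could be knocked up along these lines (a hypothetical reconstruction, not my actual build script):

```sql
-- Hypothetical reconstruction of the test table: 10,000 rows spread
-- evenly over the 100 days up to 31 Jan 2011, i.e. 100 rows per day.
create table date_test_flat as
select date '2011-01-31' - mod(rownum, 100) as date_1,
       rownum                               as id
from   dual
connect by rownum <= 10000;

exec dbms_stats.gather_table_stats(user, 'DATE_TEST_FLAT')
```

With all February dates above the known maximum, the estimate should decay as you probe further past the end of January.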

I’ve spotted that the format of the filter also changes when the date is invalid. I really want to check that out – but I had better continue failing to write the presentation!

I know, it is pretty pointless knowing this, but it just amused me. Below is a quick continuation to show that if the statement is actually executed you get an error and no plan, and that utterly duff dates can be passed in.

mdw11> /
Enter value for day: 28

Execution Plan
----------------------------------------------------------
Plan hash value: 247163334

-------------------------------------------------------------------------------------
| Id  | Operation          | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                |     1 |     8 |   215   (0)| 00:00:04 |
|   1 |  SORT AGGREGATE    |                |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DATE_TEST_FLAT |     1 |     8 |   215   (0)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("DATE_1"=TO_DATE(' 2011-02-28 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))

mdw11> SET AUTOTRACE ON
mdw11> /
Enter value for day: 20
any key>

  COUNT(*)
----------
         0

1 row selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 247163334

-------------------------------------------------------------------------------------
| Id  | Operation          | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                |     1 |     8 |   215   (0)| 00:00:04 |
|   1 |  SORT AGGREGATE    |                |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DATE_TEST_FLAT |     1 |     8 |   215   (0)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("DATE_1"=TO_DATE(' 2011-02-20 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
        821  consistent gets
          0  physical reads
          0  redo size
        421  bytes sent via SQL*Net to client
        415  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

mdw11> /
Enter value for day: 30
select count(*) from date_test_flat where date_1=to_date('30-02-2011','DD-MM-YYYY')
                                                         *
ERROR at line 1:
ORA-01839: date not valid for month specified

mdw11> set autotrace traceonly explain
mdw11> /
Enter value for day: 30

Execution Plan
----------------------------------------------------------
Plan hash value: 247163334

-------------------------------------------------------------------------------------
| Id  | Operation          | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                |     1 |     8 |   215   (0)| 00:00:04 |
|   1 |  SORT AGGREGATE    |                |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DATE_TEST_FLAT |    99 |   792 |   215   (0)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("DATE_1"=TO_DATE('30-02-2011','DD-MM-YYYY'))

mdw11> /
Enter value for day: 45

Execution Plan
----------------------------------------------------------
Plan hash value: 247163334

-------------------------------------------------------------------------------------
| Id  | Operation          | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                |     1 |     8 |   215   (0)| 00:00:04 |
|   1 |  SORT AGGREGATE    |                |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DATE_TEST_FLAT |    99 |   792 |   215   (0)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("DATE_1"=TO_DATE('45-02-2011','DD-MM-YYYY'))

Index Join – 2

In an earlier article introducing the index join I raised a question that came up at the first ES2N virtual conference:

    “If you hint an index hash join, is there any way of telling Oracle the order in which it should use the indexes?”

Consider the following example:

create table indjoin
nologging
as
select
	rownum	id,
	rownum	val1,
	rownum	val2,
	rownum	val3,
	rpad('x',500) padding
from
	all_objects
where
	rownum <= 5000
;

/*

alter table indjoin
add constraint ij_pk primary key (id)

*/

create unique index ij_v1 on indjoin(id, val1);
create unique index ij_v2 on indjoin(id, val2);
create unique index ij_v3 on indjoin(id, val3);

-- gather statistics: without histograms

select
	/*+ index_join(ij) */
	count(*)
from
	indjoin	ij
where
	val1 between 100 and 200
and	val2 between 50 and 150
and	val3 between 250 and 550
;

The query plan for this query is (thanks to the hint) a three-way index hash join:

-----------------------------------------------------------------------------
| Id  | Operation                | Name             | Rows  | Bytes | Cost  |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |                  |     1 |    12 |    74 |
|   1 |  SORT AGGREGATE          |                  |     1 |    12 |       |
|*  2 |   VIEW                   | index$_join$_001 |     1 |    12 |    74 |
|*  3 |    HASH JOIN             |                  |       |       |       |
|*  4 |     HASH JOIN            |                  |       |       |       |
|*  5 |      INDEX FAST FULL SCAN| IJ_V1            |     1 |    12 |    18 |
|*  6 |      INDEX FAST FULL SCAN| IJ_V2            |     1 |    12 |    18 |
|*  7 |     INDEX FAST FULL SCAN | IJ_V3            |     1 |    12 |    18 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("VAL1">=100 AND "VAL1"<=200 AND "VAL2">=50 AND
              "VAL2"<=150 AND "VAL3">=250 AND "VAL3"<=550)
   3 - access(ROWID=ROWID)
   4 - access(ROWID=ROWID)
   5 - filter("VAL1"<=200 AND "VAL1">=100)
   6 - filter("VAL2"<=150 AND "VAL2">=50)
   7 - filter("VAL3"<=550 AND "VAL3">=250)

But what if you know the data better than Oracle, and know that the join order for the three indexes should be different – there are no extra direct hints you can add to the code to tell Oracle the best order for the hash join. (You might, of course, be able to make use of the cardinality() hint – or play around with the undocumented, hence unsupported, opt_estimate() or column_stats() or index_stats() hints, but I wouldn’t be keen to use such an indirect approach.)

But you CAN rewrite the query to get the same mechanism working under your control. The code looks more complex – but we often have to make a trade between clarity (simplicity) and speed in critical cases, so you may find some examples where the complexity is acceptable:

select
	count(*)
from
	(
	select
		/*+ no_merge */
		rowid
	from
		indjoin		ij
	where
		val1 between 100 and 200
	)	v1,
	(
	select
		/*+ no_merge */
		rowid
	from
		indjoin		ij
	where
		val2 between 50 and 150
	)	v2,
	(
	select
		/*+ no_merge */
		rowid
	from
		indjoin		ij
	where
		val3 between 250 and 550
	)	v3
where
	v2.rowid = v1.rowid
and	v3.rowid = v1.rowid
;

It’s another example of referencing a table twice (or three times) in a query because multiple references allow you to define a better execution path than a single reference. The execution plan we get from this query (running under 10.2.0.3) is as follows:


------------------------------------------------------------------
| Id  | Operation                | Name  | Rows  | Bytes | Cost  |
------------------------------------------------------------------
|   0 | SELECT STATEMENT         |       |     1 |    36 |    14 |
|   1 |  SORT AGGREGATE          |       |     1 |    36 |       |
|*  2 |   HASH JOIN              |       |   102 |  3672 |    14 |
|*  3 |    HASH JOIN             |       |   102 |  2448 |     9 |
|   4 |     VIEW                 |       |   102 |  1224 |     4 |
|*  5 |      INDEX FAST FULL SCAN| IJ_V1 |   102 |  1632 |     4 |
|   6 |     VIEW                 |       |   102 |  1224 |     4 |
|*  7 |      INDEX FAST FULL SCAN| IJ_V2 |   102 |  1632 |     4 |
|   8 |    VIEW                  |       |   302 |  3624 |     4 |
|*  9 |     INDEX FAST FULL SCAN | IJ_V3 |   302 |  4832 |     4 |
------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("V3".ROWID="V1".ROWID)
   3 - access("V2".ROWID="V1".ROWID)
   5 - filter("VAL1">=100 AND "VAL1"<=200)
   7 - filter("VAL2">=50 AND "VAL2"<=150)
   9 - filter("VAL3">=250 AND "VAL3"<=550)

By creating three explicit query blocks (which I’ve ring-fenced with no_merge hints), one for each index, I’ve made Oracle extract the same three sets of data that it was using in the index hash join. I’ve then left Oracle to join the three result sets – which it has done with hash joins. Interestingly this seems to have done a little less work than the original index join – the final complex filter action doesn’t appear in the manual rewrite.

Since I’ve now got a query that seems to be “just” a three table join, I can dictate the join order, guarantee the hash joins, and dictate which rowsources should be used as build rowsources, and which as probe. For example, let’s apply the following hints:

select
	/*+
		leading (v1 v3 v2)
		use_hash(v3) no_swap_join_inputs(v3)
		use_hash(v2) swap_join_inputs(v2)
	*/
	count(*)
from
        ....

The resulting plan is as follows:

------------------------------------------------------------------
| Id  | Operation                | Name  | Rows  | Bytes | Cost  |
------------------------------------------------------------------
|   0 | SELECT STATEMENT         |       |     1 |    36 |    14 |
|   1 |  SORT AGGREGATE          |       |     1 |    36 |       |
|*  2 |   HASH JOIN              |       |   102 |  3672 |    14 |
|   3 |    VIEW                  |       |   102 |  1224 |     4 |
|*  4 |     INDEX FAST FULL SCAN | IJ_V2 |   102 |  1632 |     4 |
|*  5 |    HASH JOIN             |       |   102 |  2448 |     9 |
|   6 |     VIEW                 |       |   102 |  1224 |     4 |
|*  7 |      INDEX FAST FULL SCAN| IJ_V1 |   102 |  1632 |     4 |
|   8 |     VIEW                 |       |   302 |  3624 |     4 |
|*  9 |      INDEX FAST FULL SCAN| IJ_V3 |   302 |  4832 |     4 |
------------------------------------------------------------------

The join order (you can check the trace file to confirm this) is: ij_v1, ij_v3, ij_v2 – but because of the swap_join_inputs(v2) hint the ij_v2 index appears first in the plan.
We build a hash table with ij_v2, then build a hash table with ij_v1 with which we probe (join) ij_v3.
We then use the result of joining ij_v1/ij_v3 to probe (join) ij_v2 – which means v2 really is the last object in the join order.

It may look complex – but all we’ve done is describe an index-join in detail, and that has allowed us to specify which indexes are joined when. I’ve already pointed out that the manual version appears to be slightly more efficient than the original. It’s also more powerful, and addresses a defect in the current implementation of the index join. But that’s a topic for another blog.

[Further reading on Index Joins]

Oracle 11g Streams Implementer’s Guide

In this post I would like to share with you my impressions of the book Oracle 11g Streams Implementer’s Guide by Ann L. R. McKinnell and Eric Yen. But first of all, I would like to warn you about two things. First, Packt Publishing, the publisher of this book, asked me a long time ago whether I was interested in reading the book and publishing my impressions on this blog. Since my response was affirmative, they sent me a copy. As a result, here we are… Second, I’m not a Streams expert. By reading this book I just wanted to review the status of the technology as of Oracle Database 11g Release 2.

Preface

Yes, let’s start from the beginning. The important information provided in this chapter, specifically in the “Who this book is for” section, is that the authors wrote the book for people already familiar with Streams. That, honestly, surprised me a bit, for two reasons. First, just before providing this information the authors spend a few pages introducing basic concepts about replication and distributed systems. Second, on the back cover you can read “This book is for users who have implemented, or are about to implement, a distributed database environment using Oracle Streams replication. The reader is expected to have an intermediate understanding of Oracle database technology”. Anyway, it’s not a real problem once you know it.

Chapter 1 – All the Pieces: The Parts of an Oracle 11g Streams Environment

As the title indicates, this chapter examines the different components of Streams and how they work together. It starts by describing the different architectures supported by Streams. Then the Streams processes (capture, propagate and apply), the queues used to transport changes, and the instantiation of the target objects are described. After that, details about the LCRs and the different SCNs used by Streams are given. The chapter ends with a short introduction to XStream. The structure is, in my opinion, suboptimal. Some topics are discussed several times, and others (the advanced features of Streams) are briefly introduced with references to the following chapters. It would have been much better if the authors had focused on the basic topics only. In addition, they sometimes give information that is not relevant at this point. A good example is the short description of XStream (which is one and a half pages long). One third of the space is used to list views that supply information about XStream. What’s the purpose of such a list in a section that should introduce a feature to the reader?

Chapter 2 – Plot Your Course: Design Considerations

This chapter provides not only a list of questions that require answers, but also a list of factors to be considered during the planning and design phase. While the information provided is good, I think the authors do not share enough of their experience in this area. For example, when they point out that additional hardware resources are required (this, I hope, is no surprise to anybody…), I would like to know what their experience is, based on a few examples covering different architectures. And, yes, I am fully aware that every system is different. But, in my opinion, a few examples would be very interesting for most readers.

Chapter 3 – Prepare the Rafts and Secure Your Gear: The pre-work before configuring Oracle 11g Streams

First of all I have to say that this chapter has the longest title I have ever seen in a book. It goes without saying that this is a fully irrelevant comment! The chapter starts by describing the importance of the network infrastructure in a Streams environment and which tools can be used to check that everything works fine. It continues by describing the relevant initialization parameters for a Streams environment, how to set the logging features, and how to set up the database links, users and tablespaces needed by Streams. The chapter ends by describing how to instantiate a target site. Even though the chapter contains plenty of useful information, in my opinion the authors, again, do not share all their knowledge in this area. For example, I really miss concrete advice on how to interpret the information provided by the different tools that are discussed.

Chapter 4 – Single-Source Configuration

This chapter describes how to set up a single-source Streams environment between two databases. It is basically composed of two parts. The first describes how to perform such a setup with Enterprise Manager. Simply put, there is a screenshot of every step with some comments describing its purpose. One thing that disturbed me here is that the figures are not numbered and, unfortunately, not always provided in the right sequence. As a result, it is not always obvious which figure the comments relate to. The second part provides the commands used to perform the same setup without Enterprise Manager. This is, in my opinion, the most interesting part. The reason is simple: I like to know which commands are needed to carry out such a setup.

Chapter 5 – N-Way Replication

While the previous chapter covered a single-source configuration, this one describes how to set up n-way replication. For that purpose, the commands used to set up a 2-way replication are provided and explained. While most of the content of the chapter is interesting, I think that not enough information about conflict resolution is provided. In fact, such an important topic deserves much more than three pages that only give general advice on how to tackle it.

Chapter 6 – Get Fancy with Streams Advanced Configurations

This chapter provides detailed information about the advanced features of Streams that are briefly introduced in chapter 1. The covered topics are synchronous capture, subsetting, tags, rules, downstream capture, change tables, heterogeneous configurations and XStream. The content is generally good and lots of examples are provided. Unfortunately, some matters should be explained in more detail. For example, on page 241 you can read, twice, the following sentence: “Ideally, the current user is the Oracle Streams Administrator”. Honestly, without more information, I don’t know how to interpret such a sentence. In my opinion, it is necessary to know why they give such advice and what happens when someone does not follow it.

Chapter 7 – Document What You Have and How It Is Working

The title of this chapter sets some expectations about its content. And, I have to admit, at first I was puzzled by what I read. Be careful, I’m not saying the content is bad; I was just expecting something else. Specifically, content building on the information about documentation described in Chapter 2. Instead, this chapter provides a kind of reference of ways to extract information about the Streams environment. For that purpose it describes several views/queries/scripts and the DBMS_STREAMS_ADVISOR_ADM and UTL_SPADV packages. What I missed in some situations are examples of the output generated by the described techniques. For example, I was disappointed not to see what kind of output the UTL_SPADV.SHOW_STATS procedure generates. I know, I can run it myself… But for that you need a computer and, honestly, I’m not used to reading books in front of a screen.

Chapter 8 – Dealing with the Ever Constant Tides of Changes

This chapter provides a lot of interesting information about change management and troubleshooting. I especially liked the part describing the 13 most likely failure points, where, for each of them, hints on how to assess the situation and fix the problem are given. What I particularly liked in this part is the “command-line” approach. If you are a GUI user, don’t worry: as in other chapters of the book, information about using Enterprise Manager is also provided. The chapter ends with a description of other tools, hints and tips for dealing with problematic situations.

Chapter 9 – Appendix and Glossary

The last chapter starts by providing some information about Oracle Streams Commander. Unfortunately, it does not describe the features the tool offers. Basically, the authors just point out that it exists and where it can be found. Some particularities about running Streams in a RAC environment are also given. This is, in my opinion, the only interesting part of this chapter. After mentioning that Oracle bought GoldenGate, the chapter ends with a useless glossary. I say useless because in it you find either terms that were already defined in a much better way in chapter 1, or initialization parameters that, honestly, do not have a place in a glossary. In any case, their descriptions can quickly be found in the previous chapters thanks to the index provided at the end of the book.

General Remarks

In the preface the authors introduce a series of conventions to be used throughout the book. Unfortunately, they are not at all consistent in applying them.
The book contains plenty of references to the Oracle documentation. This is a good thing. Unfortunately, too many of them are vague. For example, some are something like the following: “For more information refer to the Oracle Streams Concepts and Administration Guide and Oracle Streams Replication Administrator’s Guide”. In other cases, though, the section containing the information is explicitly given.
In my opinion the authors and the publisher didn’t invest enough time in perfecting the text. In fact, not only did I find about three dozen typos (and, let’s face it, I was not looking for them…), but the layout of some chapters could also be greatly improved.

Conclusion

Even though Oracle 11g Streams Implementer’s Guide is a useful book, the authors missed the opportunity to write a very good one. If you are looking for a book about the latest version of Streams now, buy it. If you can wait a few weeks, I would suggest also checking out Kirtikumar Deshpande’s Oracle Streams 11g Data Replication.

Updates a plenty…

Someone commented recently that I’ve not been writing much lately. That surprised me because I feel like I’m constantly writing. Looking back over the last few months of changes to my website, I can see where that opinion comes from. The majority of the stuff I’ve been writing recently is updates. Typically what happens is I answer questions by linking to articles on my site. This invariably forces me to read through my old stuff and cringe. It then goes on my to-do list.

Sometimes the changes are cosmetic. Sometimes they are total rewrites. Sometimes the rewrites are because of product enhancements, but other times they are because I now realise how little I understood when I wrote the initial article. As usual, the more you know, the more you realise you don’t know.

New versions of the database prompt new features articles. New OS releases often prompt a batch of installation articles. The “quiet” times between are often spent cleaning up the crap. :)

So just because you don’t see a bunch of articles on the front page marked as new, don’t assume I’m doing nothing. There is always a touch of light (website) housework going on… :)

Cheers

Tim…

What did you call me?

On Saturdays I finish yoga and go to my mate’s local pub for a crafty diet coke before going home. Some of the guys in the pub have got to know me a little from my visits. Now, whenever I go in they ask me what countries I have visited recently and then generally take the p*ss out of me, saying either that I must be rich because of all the travelling I do, or that I’m a layabout who doesn’t have a real job…

Last Saturday I went in and we had the usual bit of banter. I then sat down and soon after a lady came up to me and said she had overheard the guys talking and was interested to know what I actually did. Apparently they had said I was a lecturer and I was the 30th best in the world???

The lecturer bit is maybe understandable. Presenter, teacher, lecturer are all in the same ballpark to me. What I don’t understand is the 30th best in the world bit. I’ve never spoken to them directly about what I do, and I can’t think of anything I’ve ever said that could be interpreted as a “world ranking”. Anyway, I explained to the lady in question that I was in I.T., at which point she made a sharp exit. Nothing new there…

The moral of this story is, if someone asks you what you do, answer with a job title they understand like brick layer or electrician. It does nobody any good if you tell the truth, because they will only misunderstand, make something up and then exaggerate a bit on top of that. Next time someone asks I’m going to say I’m the 26th best gynaecologist in the world and see what comes back to me. Probably a washed up DBA with delusions of grandeur… :)

Cheers

Tim…

iOS 4.2 on my iPad…

I upgraded to iOS 4.2 the other day (the evening it came out in the UK). Steve Jobs reckons, “iOS 4.2 makes the iPad a completely new product”. Maybe I don’t use it the same as everyone else in Apple-land, but it seems just… kinda… no freaking different…

I really don’t understand how I can remain completely immune to the Apple marketing hype when I’m desperately trying to be assimilated. I keep buying their products and waiting for a Jobsian flash of magic but it’s just not happening. What have I got to do? Marry the guy?

Maybe it’s like enlightenment and the more you try the harder it is to reach. I shall meditate for a month with my iPad balanced on my head, my MacBook Pro on my lap and my iPod Shuffle up my ass. Maybe then I will understand all the heavenly glory that is Apple, or maybe I’ll just get a sore ass… :(

Cheers

Tim…

Belgrade, Serbia: PL/SQL Masterclass – Day 1

I’ve just completed day 1 of the PL/SQL Masterclass for Oracle University in Belgrade. So far so good…

Last night Ljiljana took me on an evening tour of the city. I got to see a little of New Belgrade, which is very modern with wide streets, big buildings and some shopping malls, as well as some of Old Belgrade, which has narrower streets, regular shops and mostly smaller buildings. There are a few big blocks there with a “communist feel” to them (her words, not mine). What was quite freaky is that some of the military/government buildings that were bombed during the NATO bombings in the late 90s are still not repaired. The amount of building that has happened in the last few years leads me to believe these may be left that way for a reason… I’m not big on commenting about politics, but it is very interesting hearing stories about that time from regular folks who were living here then. Always two sides to every story, etc.

Unfortunately the timing of my visit is such that I can only really see the city by night, so I’ve not taken any photos yet. I think I’m going to Google some and pretend I saw it all by daylight. :)

After the tour and some shopping we went out to a small traditional Serbian restaurant near Ljiljana’s parents’ house. Serbians are very big meat eaters, but they also have some really cool veggie dishes. I had lots of stuff I can’t spell or pronounce. Lots of cheese, and a dish made out of butter beans, peppers and onions that was awesome. I’m hoping to get the recipe from Ljiljana’s mom so I can try and make it at home. It’s quite similar to the food I’ve had in Bulgaria, which suits me just fine. :)

I didn’t sleep too well last night. I think I was still a little stressed out about the luggage situation. I did manage to buy some clothes and stuff yesterday so today wasn’t too bad, but you still feel a little off center. I got back to the hotel tonight and my luggage has arrived. Now I’ve got no excuse for looking scruffy tomorrow… Doh!

Let’s see what tomorrow brings. :)

Cheers

Tim…