April 2011

Things worth to mention and remember (III) - Parallel Execution Control 3

Continuing from the previous part of this series, I'll cover in this post some further basics about parallel execution control:

- Keep in mind that there are two classes of parallel hints: PARALLEL and PARALLEL_INDEX. One is about the costing of parallel full table scans / index fast full scans, the other about the costing of (driving) parallel index scans, which are only possible with partitioned indexes (PX PARTITION granules vs. PX BLOCK granules) - see the sketch below
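As a rough illustration only (the table SALES, the column SALE_DATE, the local index SALES_LOCAL_IDX and the degree of 4 are all made up for this sketch), the two hint classes might be used like this:

-- PARALLEL: affects the costing of parallel full table scans and
-- index fast full scans (PX BLOCK granules)
select /*+ parallel(s, 4) */
       count(*)
from   sales s;

-- PARALLEL_INDEX: affects the costing of (driving) parallel index scans,
-- which require a partitioned index (PX PARTITION granules)
select /*+ parallel_index(s, sales_local_idx, 4) index(s, sales_local_idx) */
       count(*)
from   sales s
where  s.sale_date >= date '2011-01-01';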

Envisioning NFS performance II

Another way to look at the latency times from Envisioning NFS performance is to look at the different layers of the stack we go through.

In the last post I instrumented latency data collection at the TCP layer on the NFS server, but there are a lot of other layers.

There are many layers to analyze. The NFS server is the easiest for me to look at, since at my work the NFS server is always running OpenSolaris, so I have access to dtrace to analyze the latencies. The latencies in the previous post were at the TCP level on the NFS server, but that is just one point in the whole stack.

My goal here is to make sure the NFS server is responding quickly and to detect whether there are any network or client issues. I have access to dtrace on the server, but the clients can be anything, so I have limited access to data from the clients.

Thus my first step is to instrument data collection on the NFS server to make sure it is responding quickly.

Upgrades

Here’s a link to a truly ambitious document on Metalink (if you’re allowed to log on):

Doc ID 421191.1: Complete checklist for manual upgrades of Oracle databases from any version to any version on any platform

(Actually it only starts at v6 – but I don’t think there are many systems still running v5 and earlier).

My Sister’s Funeral…

Yesterday was my sister’s funeral. The Easter holidays and the imminent royal wedding caused some serious delays, so rather than having the funeral within about a week of her death, it ended up being just short of three weeks. Far too long, but it was out of our control so you’ve just got to get on with it.

I was pretty nervous during the day. I had organized most of the stuff and I was just waiting for something to go spectacularly wrong. Fortunately it all went to plan, which was a relief. We are not what I would consider a religious family and I find it more than a little hypocritical that, like many Brits, we roll out religion for births, marriages and deaths, but it seems to be what people want at these times and who am I to argue? We ended up having a Church of England service at a crematorium. The Reverend was a really cool guy (he has a WordPress blog :) ) and did a cracking job. Everyone was really pleased with the way it went.

After the service we had a wake at a pub close to my sister’s house. My family’s response to most things is to talk crap and laugh at ourselves. Not surprisingly, we reverted to type pretty quickly, which was good to see. It turns out my cousin might be going to work at the funeral directors we used, which prompted comments like, “If you had got your arse in gear we could have got a discount on the funeral!” etc. Like I said, we talk crap and laugh at ourselves…

… and so ends another chapter…

Nearly forgot. We asked people to donate money to Cancer Research UK, rather than buy flowers. That went really well.

Cheers

Tim…




Two Things...

The first is an "AWR Formatter" written by a friend of mine, Tyler Muth. It's pretty cool - works as a Chrome plugin - and it makes an AWR report a little more 'friendly' to use. It creates hot links for many of the wait events (so you know what they mean) and it summarizes a lot of stuff - making the AWR report a lot more "interactive". Check it out and give him feedback on it if you have time.

Adding Features to PowerPoint Based Oracle Presentation Files

April 28, 2011 This blog article is not specific to Oracle Database, but I thought that I would share the concepts anyway.  If you are giving presentations and using PowerPoint, consider including detailed notes sections in the presentation.  What benefit do those notes sections serve?  Well, if you share the presentation files with the viewers [...]

Graphing Oracle v$sysstat

One thing I’ve failed to clearly document is that my tool ASHMON not only graphs ASH data but also graphs v$sysstat.

To access statistics in ASHMON, click on the “functions” menu and choose “stats”. (The other choices are “ash init”, which goes to the ASH data graph, and “waits”, which graphs data from v$session_event.)

Graphing v$sysstat can be overwhelming as the number of statistics is in the hundreds, so what does one look at? The important statistics are what I concentrate on now, but that list is another discussion and up for debate. On the other hand, when one wants to explore the stats over time, how can one do it in a way that is manageable?
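For readers who want to see the raw material behind those graphs, the data comes straight from v$sysstat. A minimal sketch of the kind of query such a tool might sample repeatedly (the statistic names below are just an illustrative selection, not ASHMON's actual list):

-- Sample a few commonly watched statistics; a graphing tool would run this
-- at a fixed interval and plot the deltas between successive samples.
select name, value
from   v$sysstat
where  name in ('user calls',
                'execute count',
                'user commits',
                'physical reads',
                'physical writes',
                'redo size',
                'parse count (total)');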

For manageability in ASHMON, I grouped the stats into 20 groups. Even 20 groups is too much to display at once, so none are shown by default; one has to click on a group's button to see it. One can click on as many groups as one wants. The first click shows the group and the second click hides it.

Inside a group it can be hard to see which line is which statistic, so passing the mouse over the stat highlights its line.

Also, a statistic can have high values which hide the activity in stats with lower values. To address this there are three options. To access them, right-click on the statistic to get a popup menu and choose

  • hide
  • log scale
  • resize

I typically use “hide” and get rid of any values that are too big or not of interest. Once they are gone, a right click in the legend brings back up the menu and I can pick

  • show

which will redisplay any hidden stats.

Another option is “log scale”, which makes it easier to see movement across different orders of magnitude. A final option is “resize”, which offers the option of changing the Y-axis scale, but I rarely use this as the graph autosizes to the maximum point.

Stats with “table”, “cursor” and “enqueue” selected (the groups selected are highlighted in red at the top)

A different set of stats selected, in this case session (ses), transaction (trnx), CPU, log file I/O (log_io) and physical I/O (phys_io).

The right click menu shown with log scale, resize, hide and show.

and the ASH screen

The “wait” screen, which I never use these days. This is the way I used to show load on the database 10 years ago, before ASH data. This screen could be really nice if I changed the centi-seconds per second to average wait time per event – maybe I’ll do that soon!

ORA-14404 / 14405 and Deferred Segment Creation

Have a look at the following SQL*Plus output snippet:

SQL> select count(*) from dba_segments where tablespace_name in ('TS_TO_DROP', 'TS_TO_NOT_DROP');

COUNT(*)
----------
0

SQL> drop tablespace ts_to_drop including contents and datafiles cascade constraints;
drop tablespace ts_to_drop including contents and datafiles cascade constraints
*
ERROR at line 1:
ORA-14404: partitioned table contains partitions in a different tablespace

How is it possible to get a message about an object that supposedly has partitions both in the tablespace to be dropped and in some other tablespace, when the tablespace to be dropped contains no segments at all?

Maybe a dictionary corruption, or a problem with the recyclebin?
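The post title hints at the answer: with deferred segment creation a partition can be defined in a tablespace without owning a segment there, so dba_segments shows nothing while the drop still fails. A minimal sketch that should reproduce the situation (all names and sizes are made up; it assumes 11.2 with OMF configured and a release that supports deferred segment creation for partitioned tables, i.e. 11.2.0.2 or later):

create tablespace ts_to_drop datafile size 10m;
create tablespace ts_to_not_drop datafile size 10m;

-- No segments are created yet, but the partition definitions already
-- reference both tablespaces
create table t_deferred (id number)
segment creation deferred
partition by range (id)
(
  partition p1 values less than (100) tablespace ts_to_drop,
  partition p2 values less than (200) tablespace ts_to_not_drop
);

-- dba_segments reports nothing ...
select count(*) from dba_segments
where  tablespace_name in ('TS_TO_DROP', 'TS_TO_NOT_DROP');

-- ... yet the drop still raises ORA-14404
drop tablespace ts_to_drop including contents and datafiles cascade constraints;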

Star Transformation – 2

After publishing my note about emulating star transformations when there was something blocking the optimizer’s ability to do them automatically, I ended up having an interesting email conversation with Mark Farnham about variations on the theme, concluding with a completely different (and possibly more intuitive) SQL statement to achieve the same aim.

You will recall that I started off with the following statement:

select
	ord.*
from
	products	prd,
	customers	cst,
	orders		ord
where
	prd.grp = 50
and	cst.grp = 50
and	ord.id_prd = prd.id
and	ord.id_cst = cst.id
;

Although this specific example is so simple that the optimizer could turn it into a star transformation automatically, I used it to demonstrate a “minimum work” version of the statement as follows (with the execution plan from 11.1.0.6):

select
	ord.*
from
	(
	select
		/*+
			leading(prd ord)
			use_nl(ord)
			no_merge
		*/
		ord.rowid 	prid
	from
		products	prd,
		orders		ord
	where
		prd.grp = 50
	and	ord.id_prd = prd.id
	)	prid,
	(
	select
		/*+
			leading(cst ord)
			use_nl(ord)
			no_merge
		*/
		ord.rowid 	crid
	from
		customers	cst,
		orders		ord
	where
		cst.grp = 50
	and	ord.id_cst = cst.id
	)	crid,
	orders	ord
where
	prid.prid = crid.crid
and	ord.rowid = prid.prid
;

--------------------------------------------------------------------------
| Id  | Operation                   | Name       | Rows  | Bytes | Cost  |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |            |    11 |  1650 |   570 |
|   1 |  NESTED LOOPS               |            |    11 |  1650 |   570 |
|*  2 |   HASH JOIN                 |            |    11 |   264 |   559 |
|   3 |    VIEW                     |            |  3361 | 40332 |   277 |
|   4 |     NESTED LOOPS            |            |  3361 | 87386 |   277 |
|*  5 |      TABLE ACCESS FULL      | CUSTOMERS  |    98 |   882 |    81 |
|*  6 |      INDEX RANGE SCAN       | ORD_CST_FK |    34 |   578 |     2 |
|   7 |    VIEW                     |            |  3390 | 40680 |   281 |
|   8 |     NESTED LOOPS            |            |  3390 | 88140 |   281 |
|*  9 |      TABLE ACCESS FULL      | PRODUCTS   |   100 |   900 |    81 |
|* 10 |      INDEX RANGE SCAN       | ORD_PRD_FK |    34 |   578 |     2 |
|  11 |   TABLE ACCESS BY USER ROWID| ORDERS     |     1 |   126 |     1 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("PRID"."PRID"="CRID"."CRID")
   5 - filter("CST"."GRP"=50)
   6 - access("ORD"."ID_CST"="CST"."ID")
   9 - filter("PRD"."GRP"=50)
  10 - access("ORD"."ID_PRD"="PRD"."ID")

Go back to the original SQL statement, though. The select list contains columns from just the orders table, and there’s a fairly well-known suggestion (or possibly guideline) that “tables that are not in the select list should not appear in the from clause, they should appear in subqueries in the where clause”. It’s not difficult to come up with examples where this strategy is a bad idea – but it’s often worth thinking through the consequences of trying to apply it. (Of course, you often find that after you’ve rewritten your SQL to match the strategy the optimizer transforms it back into the join that you had been avoiding.)

Since the products and customers tables don’t appear in the select list, can I find a way of rewriting my statement with subqueries? The answer is yes. Here’s the SQL, with the execution plan I got.

select
	ord.*
from
	orders ord
where
	ord.rowid in (
		select
			/*+
				no_use_hash_aggregation
			*/
			prid.prid
		from
			(
			select	/*+ no_merge */
				ord.rowid 	prid
			from
				products	prd,
				orders		ord
			where
				prd.grp = 50
			and	ord.id_prd = prd.id
			)	prid,
			(
			select	/*+ no_merge */
				ord.rowid 	crid
			from
				customers	cst,
				orders		ord
			where
				cst.grp = 50
			and	ord.id_cst = cst.id
			)	crid
		where
			prid.prid = crid.crid
	)
;

------------------------------------------------------------------------------------------
| Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |            |     1 |   138 |   591   (1)| 00:00:08 |
|   1 |  NESTED LOOPS               |            |     1 |   138 |   591   (1)| 00:00:08 |
|   2 |   VIEW                      | VW_NSO_1   |    11 |   132 |   589   (1)| 00:00:08 |
|   3 |    SORT UNIQUE              |            |     1 |   264 |            |          |
|*  4 |     HASH JOIN               |            |    11 |   264 |   589   (1)| 00:00:08 |
|   5 |      VIEW                   |            |  3314 | 39768 |   294   (1)| 00:00:04 |
|   6 |       NESTED LOOPS          |            |  3314 | 86164 |   294   (1)| 00:00:04 |
|*  7 |        TABLE ACCESS FULL    | PRODUCTS   |   100 |   900 |    94   (2)| 00:00:02 |
|*  8 |        INDEX RANGE SCAN     | ORD_PRD_FK |    33 |   561 |     2   (0)| 00:00:01 |
|   9 |      VIEW                   |            |  3314 | 39768 |   294   (1)| 00:00:04 |
|  10 |       NESTED LOOPS          |            |  3314 | 86164 |   294   (1)| 00:00:04 |
|* 11 |        TABLE ACCESS FULL    | CUSTOMERS  |   100 |   900 |    94   (2)| 00:00:02 |
|* 12 |        INDEX RANGE SCAN     | ORD_CST_FK |    33 |   561 |     2   (0)| 00:00:01 |
|  13 |   TABLE ACCESS BY USER ROWID| ORDERS     |     1 |   126 |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("PRID"."PRID"="CRID"."CRID")
   7 - filter("PRD"."GRP"=50)
   8 - access("ORD"."ID_PRD"="PRD"."ID")
  11 - filter("CST"."GRP"=50)
  12 - access("ORD"."ID_CST"="CST"."ID")

You’ll notice that this plan is remarkably similar to the plan I got from the joins with inline views – although the new plan has one extra “sort unique” operation at line 3. The optimizer has transformed my query by unnesting the subqueries and turning them into inline views – adding in a “distinct” operation because that’s what happens when you turn an “IN subquery” into a join when there isn’t a suitable uniqueness constraint on the join column.

There are two reasons for adopting this subquery approach. There’s the (highly subjective) argument that the code is a little easier to understand in this form, especially if you then want to join the orders table onwards to other tables. There’s also an objective argument relating to the “sort unique”. Note that I included the hint “no_use_hash_aggregation” to stop Oracle from using a “hash unique” operation at this point. By forcing Oracle to sort for uniqueness I know that the rowids will appear in (at least) an “extent-based” order rather than a completely random order. It’s possible that walking the fact table in physical order will be a little more efficient than accessing it in a completely randomised order. The former might find multiple rows in a block or benefit from some form of O/S or SAN read-ahead; the latter is more likely to turn one row into one random physical read.

One final thought – I said that you might want to take this type of approach for queries where the optimizer can’t do a star transformation automatically. There’s one very important case where this is always true – when you’re running Standard Edition, which doesn’t support bitmap indexes. So if you’re running SE and want to do star transformations – you now know how.

Footnote: I haven’t spent much time testing the limits of variations of this code although I did spend 30 minutes extending the example to use three dimension tables. If you think that this strategy might solve a production problem, think about using the sample code to create and test a better model of your production requirement before rewriting the production code – and if you go to production, check very carefully that the resulting code does produce the same results.

No Linux servers for Oracle Support…

I was just mailed a bug update and it included this text (spelling mistakes theirs, not mine).

Note customer is on Linux but could not find an available
11.2 Linux database to test on.  Reprocided problem on Solaris
confirming that there is some generic problem here.

Really?

And here’s me thinking that firing up a VM with any version of Linux & Oracle was quick and easy. Perhaps their VMs are running on Amazon, hence the lack of available systems. :)

Cheers

Tim…