Martin Widlake's Yet Another Oracle Blog

Oracle performance, Oracle statistics and VLDBs

How to Tune an Exadata

Q: How do you tune Exadata?
A: Look for long running queries that are not using Smart Scans and then fix them so they do.

We’ve worked on a bunch of Proofs of Concept (POCs) for customers over the last year or so. These usually involve loading a few terabytes of data and running a bunch of queries or some other workload against the data. Generally speaking, anything we have thrown at Exadata has blown the doors off of the platforms the applications were previously running on. But occasionally we run into a situation where the speed-up is just not what we’ve come to expect. Usually it’s because we’ve done something that has kept Exadata from doing what it does best – Smart Scans. While my lead-in is obviously a tongue-in-cheek oversimplification, it is basically true. Unfortunately, it’s not as easy as it sounds to determine whether a statement has been offloaded, because our main tool for looking at how a statement was executed (the execution plan) doesn’t tell us whether a Smart Scan was used or not. So in this post, my intent is to give you a couple of options for determining whether Smart Scans are happening. Here’s a quick example showing a basic execution plan:

 
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
 
 
INSTANCE_NAME    STARTUP_TIME      CURRENT_TIME         DAYS    SECONDS
---------------- ----------------- ----------------- ------- ----------
SANDBOX1         24-MAR-2011 16:19 25-MAR-2011 22:57    1.28     110283
 
SYS@SANDBOX> 
SYS@SANDBOX> set timing on
SYS@SANDBOX> @avgskew3
SYS@SANDBOX> select avg(pk_col) from kso.skew3
  2  where col1 > 0
  3  /
 
AVG(PK_COL)
-----------
 16093750.2
 
Elapsed: 00:00:34.80
SYS@SANDBOX> select sql_id, sql_text from v$sql
  2  where sql_text like 'select avg(pk_col) from kso.skew3 where col1 > 0';
 
SQL_ID        SQL_TEXT
------------- ----------------------------------------------------------------------
4p62g77m9myak select avg(pk_col) from kso.skew3 where col1 > 0
 
Elapsed: 00:00:00.14
SYS@SANDBOX> @dplan
Enter value for sql_id: 4p62g77m9myak
Enter value for child_no: 
 
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  4p62g77m9myak, child number 0
-------------------------------------
select avg(pk_col) from kso.skew3 where col1 > 0
 
Plan hash value: 2684249835
 
------------------------------------------------------------------------------------
| Id  | Operation                  | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |       |       |       |   535K(100)|          |
|   1 |  SORT AGGREGATE            |       |     1 |    11 |            |          |
|*  2 |   TABLE ACCESS STORAGE FULL| SKEW3 |   383M|  4028M|   535K  (1)| 01:47:02 |
------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - storage("COL1">0)
       filter("COL1">0)
 
 
20 rows selected.
 
Elapsed: 00:00:00.22

The storage line in the predicate section indicates that a Smart Scan is possible, but it doesn’t actually tell us that one occurred. So how can you tell? Well, there are several ways.

  1. You can Millsap it. (generate a 10046 trace)
  2. You can Poder it. (use Tanel Poder’s snapper script to check stats and wait events)
  3. You can Wolfgang it. (generate a 10053 trace) – well actually this doesn’t work since the optimizer doesn’t know whether a statement will do a Smart Scan or not.
  4. Or you can look in v$sql – I wrote a little script called fsx.sql (short for Find_Sql_eXadata.sql) to do that.

I think that tracing is the most foolproof way to verify a Smart Scan (just look for “cell smart table/index scan” wait events). But it can be a little cumbersome to generate a trace and then find it. (Note: Method R has a great tool that makes this easier called MR Trace, which is a plug-in for Oracle’s SQL Developer.) Tanel’s snapper script is an awesome tool that is very versatile – so it’s a very valid option as well. But both of these methods depend on catching the statement of interest while it is executing. They provide no way of looking back at statements that ran in the past. My fsx script is not nearly as comprehensive as either of these approaches, but it has the advantage that it looks at values stored in v$sql (which are also captured in AWR, by the way). This allows us to do analysis that is not limited to what is happening right now (i.e. we don’t have to catch the query while it’s running).
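For option 1, a minimal sketch of switching extended tracing on and off for the current session (standard 10046 event syntax; the wait-event name is what you grep for in the resulting trace file):

```sql
-- enable extended SQL trace (10046 level 8 includes wait events)
alter session set events '10046 trace name context forever, level 8';

-- ... run the statement of interest, then switch tracing off
alter session set events '10046 trace name context off';

-- in the trace file, offloaded scans show up as waits like:
--   WAIT #...: nam='cell smart table scan' ...
```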

So how does it work?

The v$sql view contains a column (IO_CELL_OFFLOAD_ELIGIBLE_BYTES) that tells us whether a child cursor was offloaded or not. Very simply, if the column contains a value greater than 0, the statement was processed with a Smart Scan. Here’s the basic trick:

   decode(IO_CELL_OFFLOAD_ELIGIBLE_BYTES,0,'No','Yes') Offload 

So basically the fsx script just does a decode on the column: if its value is 0 it returns ‘No’, otherwise it returns ‘Yes’. The script does a few other things too, like attempting to calculate the savings – in terms of reduced volume of data transferred across the InfiniBand fabric – that the Smart Scan was responsible for. I called the output column IO_SAVED_%, although it’s not really I/O; it’s bytes transferred between the storage layer and the database layer. There are some situations where the results of this calculation don’t seem to make much sense, by the way, but that’s too much to go into here. (There are more details in our upcoming Apress Exadata book if you’re so inclined.) Anyway, here’s a quick example of using the fsx script.
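The core of the idea looks roughly like the following query – a simplified sketch, not the full fsx.sql. The IO_SAVED_% calculation here assumes the 11.2 v$sql columns IO_CELL_OFFLOAD_ELIGIBLE_BYTES and IO_INTERCONNECT_BYTES:

```sql
-- simplified sketch of the fsx.sql approach (not the full script)
select	sql_id,
	child_number,
	decode(io_cell_offload_eligible_bytes, 0, 'No', 'Yes') offload,
	decode(io_cell_offload_eligible_bytes, 0, 0,
	       100 * (io_cell_offload_eligible_bytes - io_interconnect_bytes) /
	             io_cell_offload_eligible_bytes)           "IO_SAVED_%",
	substr(sql_text, 1, 70) sql_text
from	v$sql
where	sql_text like '&sql_text'
;
```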

SYS@SANDBOX> @fsx
Enter value for sql_text: select avg(pk_col) from kso.skew3%
Enter value for sql_id: 
 
SQL_ID         CHILD  PLAN_HASH  EXECS  AVG_ETIME AVG_PX OFFLOAD IO_SAVED_% SQL_TEXT
------------- ------ ---------- ------ ---------- ------ ------- ---------- ----------------------------------------------------------------------
4p62g77m9myak      0 2684249835      2      18.31      0 Yes          71.85 select avg(pk_col) from kso.skew3 where col1 > 0

So that’s pretty easy and straightforward. You can enter a bit of a SQL statement’s text, or a SQL_ID, or both to locate statements of interest in v$sql. The script’s output will tell you whether each one was offloaded or not. But what if we want to get a feel for how the whole system is running with regard to offloading? We could simply add a couple of additional where clauses to the fsx script, allowing us to limit the rows returned based on whether the statements were offloaded, and maybe add a filter on average execution time as well so we can look at just the long-running statements. Have a look at fsxo.sql, which does just that.

SYS@SANDBOX> @fsxo
Enter value for sql_text: 
Enter value for sql_id: 
Enter value for min_etime: 10
Enter value for offloaded: 
 
SQL_ID         CHILD  PLAN_HASH  EXECS  AVG_ETIME AVG_PX OFFLOAD IO_SAVED_% SQL_TEXT
------------- ------ ---------- ------ ---------- ------ ------- ---------- ----------------------------------------------------------------------
09m6t5qpgkywx      0 1885411402      1     116.79      0 No             .00 select /*+ bloom join 2  use_hash (skew temp_skew) */ a.col2, sum(a.co
1nfa7trushhpm      0 2684249835      2      15.87      0 Yes          74.60 select /*+ result_cache */ avg(pk_col) from kso.skew3 where col1 > 1
35q8ahgw2xhsp      0 3734762968      1      37.12      0 Yes          34.22 select /*+ bloom join  use_hash (skew temp_skew) */ a.col2, sum(a.col1
4p62g77m9myak      0 2684249835      2      18.31      0 Yes          71.85 select avg(pk_col) from kso.skew3 where col1 > 0
5zruc4v6y32f9      0          0      2     362.05      0 No             .00 DECLARE job BINARY_INTEGER := :job;  next_date TIMESTAMP WITH TIME ZON
b6usrg82hwsa3      0          0      2     305.30      0 No             .00 call dbms_stats.gather_database_stats_job_proc (  )
 
6 rows selected.
 
SYS@SANDBOX> @fsxo
Enter value for sql_text: 
Enter value for sql_id: 
Enter value for min_etime: 
Enter value for offloaded: YES
 
SQL_ID         CHILD  PLAN_HASH  EXECS  AVG_ETIME AVG_PX OFFLOAD IO_SAVED_% SQL_TEXT
------------- ------ ---------- ------ ---------- ------ ------- ---------- ----------------------------------------------------------------------
0qa98gcnnza7h      0  568322376      2       3.56      0 Yes        -905.77 select avg(pk_col) from kso.skew where col1 > 0
1nfa7trushhpm      0 2684249835      2      15.87      0 Yes          74.60 select /*+ result_cache */ avg(pk_col) from kso.skew3 where col1 > 1
266gctwscrnn2      0  568322376      3       1.06      0 Yes        -805.04 select /*+ result_cache */ avg(pk_col) from kso.skew where col1 > 1
2uzgbm8azqqv3      0 2974987230      2       1.56      0 Yes          71.79 select avg(pk_col) from kso.skew_encrypt where col1 > 0
35q8ahgw2xhsp      0 3734762968      1      37.12      0 Yes          34.22 select /*+ bloom join  use_hash (skew temp_skew) */ a.col2, sum(a.col1
4p62g77m9myak      0 2684249835      2      18.31      0 Yes          71.85 select avg(pk_col) from kso.skew3 where col1 > 0
d15cdr0zt3vtp      0   62424106      1        .31      0 Yes          99.96 SELECT TO_CHAR(current_timestamp AT TIME ZONE 'GMT', 'YYYY-MM-DD HH24:
 
7 rows selected.
 
SYS@SANDBOX> @fsxo
Enter value for sql_text: %skew%
Enter value for sql_id: 
Enter value for min_etime: 5
Enter value for offloaded: 
 
SQL_ID         CHILD  PLAN_HASH  EXECS  AVG_ETIME AVG_PX OFFLOAD IO_SAVED_% SQL_TEXT
------------- ------ ---------- ------ ---------- ------ ------- ---------- ----------------------------------------------------------------------
09m6t5qpgkywx      0 1885411402      1     116.79      0 No             .00 select /*+ bloom join 2  use_hash (skew temp_skew) */ a.col2, sum(a.co
1nfa7trushhpm      0 2684249835      2      15.87      0 Yes          74.60 select /*+ result_cache */ avg(pk_col) from kso.skew3 where col1 > 1
35q8ahgw2xhsp      0 3734762968      1      37.12      0 Yes          34.22 select /*+ bloom join  use_hash (skew temp_skew) */ a.col2, sum(a.col1
399m90n8jzpu6      0 1923773943      2       5.09      0 No             .00 select avg(pk_col) from kso.skew
4p62g77m9myak      0 2684249835      2      18.31      0 Yes          71.85 select avg(pk_col) from kso.skew3 where col1 > 0

So the idea is to be able to take a high level look at what’s being offloaded and what’s not. Obviously you can do the same thing with AWR data for a longer look back in history. But I’ll leave that as an exercise for the reader. Note that my lab system was recently bounced and so the number of statements is very small. On larger production systems you will probably want to limit yourself to longer running statements.

By the way, there is another tool that can be very helpful in determining whether a statement was offloaded: DBMS_SQLTUNE.REPORT_SQL_MONITOR. But I’ll have to save that for another post.
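As a teaser, a SQL Monitor report for a single statement can be pulled with something like this (TYPE => 'TEXT' keeps the output readable in SQL*Plus; the sql_id is the one from the earlier example):

```sql
-- pull a SQL Monitor report for one statement (requires the Tuning Pack)
set long 1000000 longchunksize 1000000 linesize 200
select	dbms_sqltune.report_sql_monitor(
		sql_id => '4p62g77m9myak',
		type   => 'TEXT'
	) report
from	dual;
```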

My Secret iPad App Addictions…

When using my iPad, I spend the vast majority of my time in a browser. I don’t have many apps and most of the ones I have I don’t use regularly, but I have a couple that have become a secret addiction…

We City – I love this app. Like all these “earn money and build stuff” apps, you don’t have to spend a lot of time on each visit, but you tend to come back several times a day. I guess I check in about 10 times a day, each time for about a minute. I’ve found myself taking my iPad round to friends’ houses so I can check in a couple of times while I’m there, under the guise of checking my emails. Truly pathetic, I know, but that’s addictions for you. :)

GarageBand – I’ve been hearing people rave about GarageBand for ages, but I always assumed it was some sort of Guitar Hero game, so I ignored it. It was all a bit random, but the other day I installed it to see what all the fuss was about, and it is totally awesome. I was in bands at university, but it’s been years since I’ve done anything musical. I pick up a guitar from time to time, but I can barely play anymore. Anyway, a few minutes with this app and you can start building up pretty cool-sounding tunes. You can play the instruments themselves, throw in some autoplay stuff for the instruments you don’t play, and even use some of the predefined loops. If you get the right connectors, you can even plug in real guitars, keyboards and mics. I don’t think you’ll be hearing me on the radio any time soon, but it’s great fun and well worth the £2.99 I paid for it. I hadn’t realized I already had GarageBand installed on my MacBook Pro. I guess I should take a look and see what that can do that the iPad app can’t, but to be honest, it’s more fun messing about with the iPad than sitting at a keyboard.

Cheers

Tim…




Ambiguity

“There is no space problem.”

If you saw this comment in the middle of a thread about some vaguely described Oracle problem, which of the following would you think was the intended meaning:

    There is a problem – we have no space.
    We do not have a problem with space.

Wouldn’t it make life so much easier to choose between:

    We are not seeing any Oracle errors.
    We are seeing Oracle error: “ORA-01653: unable to extend table X by N in tablespace Z”

(That’s just one of many possible space-related errors, of course.)

A Marathon...

If you don't see me updating asktom too much in the next two weeks - it'll be because:

Sequence Driven Primary Keys – Which is Better: Call NextVal in the Insert Statement or in a Row Level Trigger?

March 25, 2011 (Updated March 26, 2011) Occasionally, I see interesting questions being asked about Oracle Database, and it is nice when the person asking the question provides a test case – or at the very least the DDL and DML statements needed to recreate an environment that matches the question.  The question that was asked [...]

Integration of Editions with Services in 11.2.0.2…

There’s a neat new feature for editioning in 11.2.0.2 that allows you to associate an edition with a service. I’ve added it to the end of my Edition-Based Redefinition article here.

Cheers

Tim…




Segment Creation on Demand (Deferred Segment Creation) in Oracle 11.2.0.2…

It looks like 11.2.0.2 has addressed most of the shortfalls of segment creation on demand that were present in 11.2.0.1.

Cheers

Tim…




Small Tables

Here’s a note I’ve been meaning to research and write up for more than 18 months – ever since Dion Cho pinged a note I’d written about the effects of partitioning, because of a comment it made about the “2% small table threshold”.

It has long been an item of common knowledge that Oracle has a “small table threshold” that allows for special treatment of data segments that are smaller than two percent of the size of the buffer cache, viz:

    If a table is longer than the threshold then a full tablescan of the table will use only db_file_multiblock_read_count buffers at the end of the LRU (least recently used) chain to read the table, and (allowing a little inaccuracy for multi-user systems, pinning and so on) keeps recycling the same few buffers, thus protecting the bulk of the buffer cache from being wiped out by a single large tablescan. Such a tablescan would be recorded under the statistic “table scans (long tables)”.

    If a table is shorter than the threshold then it is read to the midpoint of the cache (just like any other block read) but – whether by accident or design – the touch count (x$bh.tch) is not set, and the table will fall off the LRU end of the buffer cache fairly promptly as other objects are read into the cache. Such a tablescan would be recorded under the statistic “table scans (short tables)”.

Then, in July 2009, Dion Cho decided to check this description before repeating it, and set about testing it on Oracle 10gR2 – producing some surprising results and adding another item to my to-do list. Since then I have wanted to check his conclusions, check whether the original description had ever been true and when (or if) it had changed.

As a simple starting point, of course, it was easy to check the description of the relevant (hidden) parameter to see when it changed:

8.1.7.4     _small_table_threshold        threshold level of table size for forget-bit enabled during scan
9.2.0.4     _small_table_threshold        threshold level of table size for direct reads
11.2.0.1    _small_table_threshold        lower threshold level of table size for direct reads

This suggests that the behaviour might have changed some time in 9i (9.2.0.4 happened to be the earliest 9i listing of x$ksppi I had on file) – so I clearly had at least three major versions to check.

The behaviour of the cache isn’t an easy thing to test, though, because there are a number of special cases to consider – in particular the results could be affected by the positioning of the “mid-point” marker (x$kcbwds.cold_hd) that separates the “cold” buffers from the “hot” buffers. By default the hot portion is 50% of the total cache (set by the hidden parameter _db_percent_hot_default), but on instance startup, or after a “flush buffer cache”, there are no used buffers, so the behaviour can show some anomalies.

So here’s the basic strategy:

    Start the instance
    Create a number of relatively small tables with no indexes
    Create a table large enough to come close to filling the cache, with an index to allow indexed access
    Collect stats on the tables – largest last.
    Query the large table through an index range scan to fill the cache
    Repeat a couple of times with at least 3 second pauses to allow for incrementing the touch count
    Check x$bh for buffer usage
    Run repeated tablescans for the smaller tables to see how many blocks end up in the cache at different sizes.

Here’s some sample code:

create table t_15400
pctfree 99
pctused 1
as
with generator as (
	select	--+ materialize
		rownum id
	from dual
	connect by
		rownum <= 10000
)
select
	rownum			id,
	lpad(rownum,10,'0')	small_vc,
	rpad('x',100)		padding
from
	generator	v1,
	generator	v2
where
	rownum <= 15400
;

create index t_15400_id on t_15400(id);

begin
	dbms_stats.gather_table_stats(
		ownname		 => user,
		tabname		 =>'T_15400',
		estimate_percent => 100,
		method_opt 	 => 'for all columns size 1'
	);
end;
/

select
	object_name, object_id, data_object_id
from
	user_objects
where
	object_name in  (
		'T_300',
		'T_770',
		'T_1540',
		'T_3750',
		'T_7700',
		'T_15400',
		'T_15400_ID'
	)
order by
	object_id
;

select
	/*+ index(t) */
	max(small_vc)
from
	t_15400 t
where
	id > 0
;

The extract shows the creation of just the last and largest table I created and collected statistics for – and it was the only one with an index. I chose the number of blocks (I’ve rigged one row per block) because I had set up a db_cache_size of 128MB on my 10.2.0.3 Oracle instance and this had given me 15,460 buffers.

As you can see from the query against user_objects, my test case included tables with 7,700 rows (50%), 3,750 rows (25%), 1,540 rows (10%), 770 rows (5%) and 300 rows (2%). (The numbers in brackets are the approximate sizes of the tables – all slightly undersized – relative to the number of buffers in the default cache.)

Here’s the query that I then ran against x$bh (connected as sys from another session) to see what was in the cache (the range of values needs to be adjusted to cover the range of object_id reported from user_objects):

select
	obj, tch, count(*)
from	x$bh
where
	obj between 77710 and 77720
group by
	obj, tch
order by
	count(*)
;

Typical results from 10.2.0.3

After executing the first index range scan of t_15400 to fill the cache three times:

       OBJ        TCH   COUNT(*)
---------- ---------- ----------
     75855          0          1
     75854          0          1
     75853          0          1
     75851          0          1
     75850          0          1
     75849          0          1
     75852          0          1
     75855          2          9    -- Index blocks, touch count incremented
     75855          1         18    -- Index blocks, touch count incremented
     75854          1      11521    -- Table blocks, touch count incremented

Then after three tablescans, at 4 second intervals, of the 7,700 block table:

       OBJ        TCH   COUNT(*)
---------- ---------- ----------
     75853          3          1    -- segment header of 7700 table, touch count incremented each time
     75855          0          1
     75854          0          1
     75852          0          1
     75849          0          1
     75850          0          1
     75851          0          1
     75855          2          9
     75855          1         10
     75853          0       3991    -- lots of blocks from 7700 table, no touch count increment
     75854          1       7538

Then repeating the tablescan of the 3,750 block table three times:


       OBJ        TCH   COUNT(*)
---------- ---------- ----------
     75853          3          1
     75855          0          1
     75854          0          1
     75851          0          1
     75852          3          1    -- segment header block, touch count incremented each time
     75849          0          1
     75850          0          1
     75855          2          9
     75855          1         10
     75853          0        240
     75852          0       3750    -- table completely cached - touch count not incremented
     75854          1       7538

Then repeating the tablescan of the 1,540 block table three times:

       OBJ        TCH   COUNT(*)
---------- ---------- ----------
     75853          3          1
     75855          0          1
     75854          0          1
     75851          3          1    -- segment header block, touch count incremented each time
     75849          0          1
     75850          0          1
     75852          3          1
     75855          2          9
     75855          1         10
     75853          0        149
     75851          2       1540    -- Table fully cached, touch count incremented but only to 2
     75852          0       2430
     75854          1       7538

Then executing the tablescan of the 770 block table three times:

       OBJ        TCH   COUNT(*)
---------- ---------- ----------
     75853          3          1
     75855          0          1
     75850          3          1    -- segment header block, touch count incremented each time
     75849          0          1
     75851          3          1
     75852          3          1
     75854          0          1
     75855          2          9
     75855          1         10
     75851          0         69
     75853          0        149
     75850          2        770    -- Table fully cached, touch count incremented but only to 2
     75851          2       1471
     75852          0       1642
     75854          1       7538

Finally executing the tablescan of the 300 block table three times:

       OBJ        TCH   COUNT(*)
---------- ---------- ----------
     75853          3          1
     75855          0          1
     75854          0          1
     75850          3          1
     75852          3          1
     75851          3          1
     75855          2          9
     75855          1         10
     75851          0         69
     75850          0        131
     75853          0        149
     75849          3        301	-- Table, and segment header, cached and touch count incremented 3 times
     75850          2        639
     75852          0       1342
     75851          2       1471
     75854          1       7538

This set of results on its own isn’t conclusive, of course, but the indications for 10.2.0.3 are:

    a) “Large” tablescans don’t increment the touch count – so avoiding promotion to the hot portion of the buffer cache
    b) There is a 25% boundary (ca. 3,750 blocks in this case) above which a tablescan will start to recycle the buffers it has used
    c) There is a 10% boundary (ca. 1,540 in this case) below which repeating a scan WILL increment the touch count
    d) There is a 2% boundary (ca. 300 in this case) below which tablescans will always increment the touch count.

I can’t state with any certainty where the used and recycled buffers might be, but since blocks from the 3750 tablescan removed the blocks from the 7700 tablescan, it’s possible that “large” tablescans do somehow go “to the bottom quarter” of the LRU.

There is also some benefit in checking the statistics “table scans (short)” and “table scans (long)” as the tests run. For the 2% (300 block) table I recorded 3 short tablescans; for the tables in the 2% to 10% range (770 and 1,540 blocks) I recorded one long and two short scans (which is consistent with the touch count increment of 2 – the first scan was expected to be long, but the 2nd and 3rd were deemed short based on some internal algorithm about the tables being fully cached); finally, for the tables above 10% I always got 3 long tablescans.

But as it says in the original note on small partitions – there are plenty of questions still to answer:

    I’ve cited 2%, 10%, and 25%, and only one of these is set by a parameter (_small_table_threshold is derived as 2% of the db_cache_size – in simple cases). Are the other figures derived, settable, or hard-coded?

    I’ve quoted the 2% as a fraction of the db_cache_size – but we have automatic SGA management in 10g, automatic memory management in 11g, and up to eight different cache sizing parameters in every version from 9i onwards. What figure is used as the basis for the 2%? Is that 2% of the blocks or 2% of the bytes? And if you have multiple block sizes, does each cache perhaps allow 2% of its own size?
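At least the current setting is easy to check (connected as SYS) – a sketch of the usual hidden-parameter query, joining x$ksppi to x$ksppsv on indx:

```sql
-- check the current value of _small_table_threshold (run as SYS)
select	n.ksppinm	parameter,
	v.ksppstvl	value,
	n.ksppdesc	description
from	x$ksppi		n,
	x$ksppsv	v
where	n.indx = v.indx
and	n.ksppinm = '_small_table_threshold'
;
```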

And then, in 11g, we have to worry about automatic direct path serial tablescans – and it would be easy to think that “_small_table_threshold” had been describing that feature since (at least) 9.2.0.4, if its description hadn’t changed slightly for 11.2!

So much to do, so little time — but at least you know that there’s something that needs careful investigation if you’re planning to do lots of tablescans.

Footnote: Having written some tests, it’s easy to change versions. Running on 8.1.7.4 and 9.2.0.8, with similar sized caches, I could see that the “traditional” description of the small table threshold was true – a short tablescan was anything less than 2% of the buffer cache, long tablescans were (in effect) done using just a window of db_file_multiblock_read_count buffers, and in both cases the touch count was never set (except for the segment header block).

Finding ORACLE_HOME

Once in a while I work on systems that are new to me, with no one around to explain the configuration. In these cases, I’ve often wanted a way to figure out definitively which running database is using which ORACLE_HOME, so it was fun to see Tanel Poder’s recent post on how to do this on Linux and Solaris. Here is a quick summary for Linux and Solaris, plus HPUX and AIX as well:

Solaris and Linux

       $ pgrep  -lf _pmon_
       12586 ora_pmon_ee10204

ORACLE_SID is ee10204

       $ pwdx 12586
       12586: /u01/oracle/10.2.0.4/ee1/dbs

ORACLE_HOME is /u01/oracle/10.2.0.4/ee1
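On Linux and Solaris the two steps can be rolled into a small loop over every pmon process – a sketch, assuming the pgrep/pwdx tools shown above are available:

```shell
#!/bin/sh
# sketch: print ORACLE_SID -> ORACLE_HOME for every running instance
# (Linux/Solaris; relies on pgrep and pwdx as shown above)
for pid in $(pgrep -f _pmon_); do
    sid=$(ps -p "$pid" -o args= | sed 's/^ora_pmon_//')
    dbs=$(pwdx "$pid" | awk '{print $2}')    # pmon's cwd is $ORACLE_HOME/dbs
    echo "$sid : ${dbs%/dbs}"                # strip the trailing /dbs
done
```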

HPUX

       $ ps -ef | grep pmon
       ora1024 25611     1  0  Mar 21  ?         0:24 ora_pmon_itanic10

ORACLE_SID is itanic10

       $ pfiles 25611 | grep  bin
       25611:                  /opt/ora1024/app/db_1/bin/oracle

ORACLE_HOME is /opt/ora1024/app/db_1

AIX

        $ ps -ef | grep pmon
      ora1024   262314        1   0   Mar 23      -  0:12 ora_pmon_mercury

ORACLE_SID is mercury

      $ ls -l /proc/262314/cwd
      lr-x------   2 ora1024  dba  0 Mar 23 19:31 cwd -> /data/opt/app/product/10.2.0.4/db_1/dbs/

ORACLE_HOME is /data/opt/app/product/10.2.0.4/db_1