Oakies Blog Aggregator

Masterclass – 1

A recent thread on the Oracle developer community database forum raised a fairly typical question with a little twist. The basic question is “why is this (very simple) query slow on one system when it’s much faster on another?” The little twist was that the original posting told us that “Streams Replication” was in place to replicate the data between the two systems.

To make life easy for remote trouble-shooters the poster had supplied (for each system) the output from SQL Monitor when running the query, the autotrace output (which shows the predicate section that SQL Monitor doesn’t report), and the session statistics for the query run, plus some statistics about the single table in the query, the index used in the plan, and the column on which that index was based.

Here, with a little cosmetic editing (and a query that has clearly been camouflaged by the OP), is the information supplied for the faster database, where the query took about 30 seconds to complete.


SELECT c1, c2, c3, c4, c5, c6, c7, c8..  
FROM TAB1  
WHERE STS IN ( 'A', 'B')  
AND cnt < '4'  
AND dt < sysdate  
and rownum <=1;  
  
Sql_monitor and stats from DB1  
******************************  
  
Global Information  
------------------------------  
 STS              :  DONE (ALL ROWS)             
 Instance ID         :  1                           
 Execution Started   :  08/17/2018 08:31:22         
 First Refresh Time  :  08/17/2018 08:31:22         
 Last Refresh Time   :  08/17/2018 08:31:53         
 Duration            :  31s                         
 Program             :  sqlplus.exe                 
 Fetch Calls         :  1                           
  
Global Stats  
===============================================================================  
| Elapsed |   Cpu   |    IO    | Concurrency | Fetch | Buffer | Read  | Read  |  
| Time(s) | Time(s) | Waits(s) |  Waits(s)   | Calls |  Gets  | Reqs  | Bytes |  
===============================================================================  
|      33 |    3.00 |       30 |        0.08 |     1 |   102K | 38571 | 301MB |  
===============================================================================  
  
SQL Plan Monitoring Details (Plan Hash Value=715774357)  
======================================================================================================================================================================================  
| Id |            Operation            |          Name           |  Rows   | Cost  |   Time    | Start  | Execs |   Rows   | Read  | Read  | Activity |       Activity Detail        |  
|    |                                 |                         | (Estim) |       | Active(s) | Active |       | (Actual) | Reqs  | Bytes |   (%)    |         (# samples)          |  
======================================================================================================================================================================================  
|  0 | SELECT STATEMENT                |                         |         |       |         1 |    +31 |     1 |        1 |       |       |          |                              |  
|  1 |   COUNT STOPKEY                 |                         |         |       |         1 |    +31 |     1 |        1 |       |       |          |                              |  
|  2 |    INLIST ITERATOR              |                         |         |       |         1 |    +31 |     1 |        1 |       |       |          |                              |  
|  3 |     TABLE ACCESS BY INDEX ROWID | TAB1                    |    114K | 33399 |        32 |     +0 |     2 |        1 | 38377 | 300MB |    96.77 | Cpu (1)                      |  
|    |                                 |                         |         |       |           |        |       |          |       |       |          | db file sequential read (16) |  
|    |                                 |                         |         |       |           |        |       |          |       |       |          | read by other session (13)   |  
|  4 |      INDEX RANGE SCAN           | TAB1_STS_IDX            |    115K |   723 |        30 |     +2 |     2 |     118K |   194 |   2MB |     3.23 | read by other session (1)    |  
======================================================================================================================================================================================  
  
---------------------------------------------------------------------------------------------------------  
| Id  | Operation                     | Name                    | Rows  | Bytes | Cost (%CPU)| Time     |  
---------------------------------------------------------------------------------------------------------  
|   0 | SELECT STATEMENT              |                         |     1 |  1847 | 33399   (1)| 00:03:14 |  
|*  1 |  COUNT STOPKEY                |                         |       |       |            |          |  
|   2 |   INLIST ITERATOR             |                         |       |       |            |          |  
|*  3 |    TABLE ACCESS BY INDEX ROWID| TAB1                    |   114K|   201M| 33399   (1)| 00:03:14 |  
|*  4 |     INDEX RANGE SCAN          | TAB1_STS_IDX            |   114K|       |   723   (1)| 00:00:05 |  
---------------------------------------------------------------------------------------------------------  
  
Predicate Information (identified by operation id):  
---------------------------------------------------  
   1 - filter(ROWNUM<=1)  
   3 - filter("cnt"<'4' AND "dt"

And here’s the equivalent information from the slower database where the query took more than 9 times as long (4 minutes 42 seconds) to complete.


Global Information  
------------------------------  
 STS              :  DONE (ALL ROWS)           
 Instance ID         :  1                         
 Execution Started   :  08/17/2018 08:21:47       
 First Refresh Time  :  08/17/2018 08:21:47       
 Last Refresh Time   :  08/17/2018 08:26:29       
 Duration            :  282s                      
 Module/Action       :  SQL*Plus/-                
 Program             :  sqlplus.exe               
 Fetch Calls         :  1                         
  
Global Stats  
================================================================  
| Elapsed |   Cpu   |    IO    | Fetch | Buffer | Read | Read  |  
| Time(s) | Time(s) | Waits(s) | Calls |  Gets  | Reqs | Bytes |  
================================================================  
|     287 |    8.76 |      278 |     1 |   110K | 110K | 858MB |  
================================================================  
  
SQL Plan Monitoring Details (Plan Hash Value=715774357)  
======================================================================================================================================================================================  
| Id |            Operation            |          Name           |  Rows   | Cost  |   Time    | Start  | Execs |   Rows   | Read | Read  | Activity |        Activity Detail        |  
|    |                                 |                         | (Estim) |       | Active(s) | Active |       | (Actual) | Reqs | Bytes |   (%)    |          (# samples)          |  
======================================================================================================================================================================================  
|  0 | SELECT STATEMENT                |                         |         |       |         1 |   +282 |     1 |        1 |      |       |          |                               |  
|  1 |   COUNT STOPKEY                 |                         |         |       |         1 |   +282 |     1 |        1 |      |       |          |                               |  
|  2 |    INLIST ITERATOR              |                         |         |       |         1 |   +282 |     1 |        1 |      |       |          |                               |  
|  3 |     TABLE ACCESS BY INDEX ROWID | TAB1                    |    142K | 40211 |       282 |     +1 |     2 |        1 | 109K | 854MB |   100.00 | db file sequential read (277) |  
|  4 |      INDEX RANGE SCAN           | TAB1_STS_IDX            |    142K |   892 |       280 |     +3 |     2 |     118K |  491 |   4MB |          |                               |  
======================================================================================================================================================================================  
  
Execution Plan (autotrace) 
---------------------------------------------------------------------------------------------------------  
| Id  | Operation                     | Name                    | Rows  | Bytes | Cost (%CPU)| Time     |  
---------------------------------------------------------------------------------------------------------  
|   0 | SELECT STATEMENT              |                         |     1 |  1847 | 40211   (1)| 00:08:03 |  
|*  1 |  COUNT STOPKEY                |                         |       |       |            |          |  
|   2 |   INLIST ITERATOR             |                         |       |       |            |          |  
|*  3 |    TABLE ACCESS BY INDEX ROWID| TAB1                    |   141K|   249M| 40211   (1)| 00:08:03 |  
|*  4 |     INDEX RANGE SCAN          | TAB1_STS_IDX            |   141K|       |   892   (1)| 00:00:11 |  
---------------------------------------------------------------------------------------------------------  
  
Predicate Information (identified by operation id):  
---------------------------------------------------  
   1 - filter(ROWNUM<=1)  
   3 - filter("cnt"<'4' AND "dt"

There are all sorts of noteworthy details in these two sets of information – some of the “how to see what’s in front of you” type, some of the “be careful, Oracle can deceive you” type. So I’m going to walk through the output picking up a number of background thoughts before commenting on the answer to the basic question.

We’ll start with the object statistics, then we’ll look at the SQL Monitor plan to see if we can determine where the extra time was spent, then we’ll try to work out what else the plan might be telling us about the code and data, then we’ll summarise my observations to make a claim about the difference in behaviour.

Object statistics

The table has 79M rows with an average row length of 1,847 bytes, using 22M blocks. With an 8KB block size and that average row size we would expect to see about 3 rows per block, and that’s fairly consistent with the value of rows / blocks.  We don’t know what the sample size was for this stats collection, but it might have been a “small” sample size rather than the 100% you would get from using auto_sample_size, so that might also explain some discrepancy between the two different views on the figures.
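
A rough cross-check of that arithmetic (the 10% pctfree and the 100 or so bytes of block overhead are assumptions, not figures supplied by the OP):

8,192 bytes * 0.9 - ~100 bytes overhead  ~= 7,270 usable bytes per block
7,270 / 1,847 bytes per row              ~= 3.9 rows per block (if every block were packed)
79,000,000 rows / 22,000,000 blocks      ~= 3.6 rows per block (what the stats report)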

We note that the secondary system reports a chain_cnt in excess of 500,000 rows. The only (unhacked) way that this figure could be set is through a call to the analyze command, and once the figure is there it won’t go away unless you use the analyze command again to delete statistics.  We don’t know the history of how and when the figure got there, so it doesn’t guarantee that there are any chained or migrated rows, nor does the zero in the table stats on the primary system guarantee that it doesn’t have any chained or migrated rows – all it tells us is that at some time someone used the wrong command to gather stats and there were some (less than 1%) migrated or chained rows in the table at the time. (The optimizer will use this figure in its arithmetic if it is set, by the way, so it may affect some of the cost calculations – but not by a huge amount.)

The column sts reports 5 distinct values, no nulls, and a density of 6.2e-9 which is roughly half of 1/79M: so we have a frequency histogram on the column (in the absence of a histogram the density would be 1/5, and it’s reasonable to assume that the number of buckets was either the default or set to something larger than 5).  We were told that the system was running 11.2.0.4 – so we have to be a little suspicious about the accuracy of this histogram since it will have been sampled with a very small sample if the stats collection had used auto_sample_size. (12c will use a specially optimized 100% sample for frequency and top-N histograms when using auto_sample_size)

The index on sts has a clustering_factor of around 22M, which is similar to the number of blocks in the table – and that’s not too surprising if there are only a very small number of distinct values in the column – especially when the presence of the histogram suggests that there’s a skew in the data distribution. (There’s more to come on that point.) The number of leaf blocks is about 500,000 (being lazy about arithmetic) – just as a side note this suggests the index is running in a fairly inefficient state (and probably hasn’t been created with the compress keyword).

Doing a rough estimate of the index arithmetic:  the avg_col_len for sts is 2, so the space required for each index entry will be 13 bytes (2 for the column, 7 for the rowid content, 2 for the row header, 2 for the row directory entry).  Take off the block overhead, and assume the index is running at a “typical” 70% space usage per leaf block, and you might expect 5,600 bytes used per leaf block for current index data; that works out to about 430 index entries per leaf block.  With 79M rows in the table that should lead to 79M/430 leaf blocks – i.e. roughly 184,000 leaf blocks, not 493,000 leaf blocks.  However it’s not unusual to see an index with extremely repetitive values operating at something like 50% utilisation, which would bring our estimate to about 310 index entries per leaf block and 255,000 leaf blocks – which is still off by a factor of nearly 2 compared to what we’ve actually got. Again, of course, we have to be a little bit cautious about these statistics – we don’t know the sample size, and Oracle uses a surprisingly small number of blocks to sample the stats for an index.
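
If you wanted to sanity-check that arithmetic against the data dictionary, a query like the following sketch would do it – the 5,600 / 4,000 byte figures and the 13-byte entry size are the assumptions from the paragraph above, and the (camouflaged) table and index names come from the OP’s output:

select
        i.leaf_blocks                            actual_leaf_blocks,
        round(t.num_rows / floor(5600 / 13))     estimate_at_70pct_usage,
        round(t.num_rows / floor(4000 / 13))     estimate_at_50pct_usage
from
        user_tables   t,
        user_indexes  i
where
        t.table_name  = 'TAB1'
and     i.table_name  = t.table_name
and     i.index_name  = 'TAB1_STS_IDX'
;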

Where’s the time?

The SQL Monitor gives us a very clear report of where most of the time went – almost all of it was spent in I/O waits, and almost all of the wait time was in the “table access by index rowid” operation in both cases; but the primary system did 38,377 read requests in that line of the plan while the secondary did 109,000. It is significant, though, that quite a lot (roughly 40%) of the ASH samples for that operation on the primary system were for “read by other session” rather than “db file sequential read”:  in other words some other session(s) were doing a lot of work to pull the data we wanted into the buffer cache at the same time. Apart from the fact that a wait for “read by other session” often means we spend less time waiting than if we’d had to do the read ourselves, the presence of this wait suggests that other sessions may be “pre-caching” data for us so that we end up having to read far fewer blocks than would otherwise be the case.

It’s important to note at the same time that the difference in Buffer Gets for the two systems was small – 102K vs. 110K – and the “Rows (actual)” was the same in both cases – 118K entries returned by the index range scan.  Both systems did similar amounts of “logical” work, to process similar amounts of data; the difference was the fraction of the work that required a buffer get to turn into a disc read or a “wait for other read”.

We might want to pick up a few more numbers to corroborate the view that the only significant difference was in the volume of data cached and not some more esoteric reason.  Some of the session statistics should help.


DB1:  table fetch by rowid                          117,519
DB2:  table fetch by rowid                          117,521

DB1:  undo change vector size                         4,432
DB2:  undo change vector size                         4,432

DB1:  redo size                                       5,536
DB2:  redo size                                       5,440

DB1:  session logical reads                         102,500
DB2:  session logical reads                         110,326

DB1:  no work - consistent read gets                102,368
DB2:  no work - consistent read gets                110,071

DB1:  table fetch continued row                       2,423
DB2:  table fetch continued row                       3,660

The number of rows fetched by rowid is virtually identical, and we have done (virtually) no work that generates undo or redo – such as delayed block cleanout. There are no statistics shown for “%undo records applied”, so we probably haven’t done very much work to get a read-consistent view of the data – though we can’t be sure whether the OP simply failed to copy that stat into the list supplied; but the similarity of “session logical reads” to “no work – consistent read gets” confirms the hypothesis that we didn’t do any (significant) work visiting undo blocks.

We do see a few percent increase in the number of buffer gets (“session logical reads”) – but this may reflect the fact that the actual pattern of data in one table is slightly different from the pattern in the other: thanks to ASSM the process id of the process that inserts a row into a table can affect (within a small range, usually) the block into which the row is inserted. At 102,000 / 110,000 buffer gets to visit 117,500 rows in the table we can see that there must be some table blocks that hold two (or more) rows that are identified as consecutive in the index – leading to some row visits being achieved through a buffer pin and without a fresh buffer get. You’ll note that this argument is consistent with the small variation in clustering_factor (not that we entirely trust those figures) for the two indexes – the system with the lower clustering_factor for the index has done fewer buffer gets to acquire the same number of rows from the table; by definition that means (assuming default setup) that there are more cases where “the next table row” is in the same block as the current row.

The final figure I’ve shown is “table fetch continued row”: according to the table stats (which we don’t necessarily trust completely) 500K out of 79M rows are chained/migrated, which is roughly 0.6%. We know that we’re visiting about 117K table rows, so we might expect (on average) roughly the same percentage migrated/chained viz: 0.6% of 117K = 743, so there’s a little anomaly there (or an error in our assumption about “average” behaviour).  It’s worth noting, though, that a “continued fetch” would have to do an extra buffer visit (and maybe an extra physical read).  You might wonder, of course, how there could be any chained or migrated rows when the average row length is 1,847 bytes, but in a follow-up post the OP did say there were 3 BLOB columns in the table, which can cause havoc with interpreting stats for all sorts of reasons. We don’t have any information about the table structure – particularly whether the columns in the query appear before or after the BLOB columns in the table definition – and we don’t know what processing takes place (for example, maybe the 3rd BLOB is only updated after the sts column has been changed to a value other than A or B, which would help to explain why we shouldn’t be using the 0.6% calculation above as a table-wide average), so we’re not in a position to say why any of the continued fetches appear; but there are several guesses we could make and they’re all easy to check.

Plan observations

If we examine the row estimates we see 114K for the faster plan and 141K for the slower plan (with a closely corresponding variation in cost). The difference in estimates simply tells us that the histogram gathering probably used a small sample size and was subject to a lot of variation. The scale of the estimates tells us that the A and B rows are probably rare – call it 125K out of 79M rows, about 0.16% of the total rows in the table – so it would not be surprising to see consecutive samples for the histogram producing significant variations in estimates.

The other interesting thing we can note in the SQL Monitor plan is that the Execs column for the index range scan / table access operations in both plans shows the value 2: this means that there are no “A” rows that match the other predicates: Oracle has run the “A” iteration to completion, then started the “B” iteration and found a row on the second iteration. Is this a coincidence, or should it always happen, or is it only fairly likely to happen? Is it possible to find times when there are no suitable “B” rows but plenty of suitable “A” rows? The final predicate in the query is “rownum <= 1” – so the query is picking one row with no explicit strategy for choosing a specific row when there are multiple appropriate rows. Does this mean that we could optimise the query by rewriting it as a “union all” that searched for B rows first and A rows second? We just don’t know enough about the processing.

In passing, we can’t get Oracle to search the B rows first by changing the order of the in-list.  If you have a predicate like “where sts in ({list of literal values})” the optimizer will sort the list to eliminate duplicates before rewriting the predicate as a list of disjuncts, and then (if the path uses an iterator) iterate through the list in the resulting order.
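
A quick way to check this for yourself is to look at the predicate section after reversing the order of the list – a sketch only, using the camouflaged names from the query:

explain plan for
select  *
from    tab1
where   sts in ('B', 'A')
and     cnt < '4'
and     dt < sysdate
and     rownum <= 1
;

select * from table(dbms_xplan.display(null, null, 'basic +predicate'));

-- Expect the access predicate to appear as access("STS"='A' OR "STS"='B') -
-- the list has been sorted, so the 'A' iteration still runs before the 'B'.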

In the absence of information about the way in which the data is processed we can only say that we need to avoid visiting the table so frequently. To do this we will need to add one or both of the columns from the other predicates to the index – this might double the size of the index, but eliminate 95% of the potential I/O.  For example, if we discover that A and B rows are initially created “into the future” and this query is looking for a row whose “time has come” so that it can be processed and changed to an X row (say), then there may only ever be a tiny number of rows where “sts = A and dt < sysdate”, and an index on (sts, dt) would be a perfect solution (especially if it were compressed on at least the first column).
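
In its simplest form (sticking with the camouflaged names, and compressing on the highly repetitive leading column) that would be something like:

create index tab1_sts_dt_idx on tab1(sts, dt) compress 1;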

The OP has declared a reluctance to add an index to the table – but there are two points to go with this indexing strategy. Since we know there’s a frequency histogram and the A and B rows appear to be rare values – what benefit is there in having an index that covers the other values (unless 2 of the remaining 3 are also rare)?  How about creating a function-based index that represents only the rare values and modifying this code to use that index – e.g.

create index t1_id on t1 (
        case sts when 'A' then sts when 'B' then sts end,
        case sts when 'A' then dt  when 'B' then dt  end
) compress 1
;

select  *
from    t1
where   case sts when 'A' then sts when 'B' then sts end in ('A','B')
and     case sts when 'A' then dt  when 'B' then dt  end < sysdate
and     cnt < '4'
and     rownum <= 1
/


You might be able to replace a huge index (79M rows worth) with this small one (120K rows worth) unless there’s too much other code in the system that has to be adjusted or the sts column is actually the target of a referential integrity constraint; at worst you could add this index knowing that it’s generally not going to consume much in the way of extra space or processing resources and is going to save you a lot of work for this query.

Summary

The execution plan from SQL Monitor points very strongly to the fast system benefiting from having a lot of the relevant data cached and constantly being reloaded into the cache by other sessions, while the slow system has to acquire almost all of its data by real physical reads. Most of the reads address the table, so engineering an index that is low-cost and (fairly) high precision is the most significant strategy for reducing the workload and time on the slow system.

The fact that all the potential A rows fail to match the full predicate set suggests that there MAY be some aspect of the processing that means it would be more efficient to check for B rows before checking for A rows.

Given the massive skew in the data distribution a function-based index that hides all the non-popular values (or even all the values that are not of interest to this query) may be the most cost-effective way of adding a very effective index to the table with minimal resource requirements.

And finally

It’s taken me more than 4 hours to write this note after spending about 10 minutes reading through the information supplied by the OP and identifying and cross-checking details. A small fraction of the 4+ hours was spent creating a little model to check something I had to say about in-lists, the rest of it was trying to write up a coherent description covering all the details.

That’s it for today, but I may have missed writing up a couple of points that I noticed as I read the OP’s posting; and I will want to do a little cosmetic work on this article and check grammar and spelling over the next couple of days.

Shortly after I posted this blog note the owner of the question reported the following as the distribution of values for the sts column:

 STS   COUNT(*)
---- ----------
   A          6
   E        126
   D        866
   C   80212368
   B     117631

Two things stand out about these figures – first, it’s an ideal example of a case where it would be nice to avoid having index entries for the 80 million ‘C’ rows. Depending on the coding and testing costs, the supportability of the application and the possible side effects, this could be done with a function-based index, or by introducing a virtual column that hides the ‘C’s behind a NULL, or by changing the code to use NULL instead of ‘C’.

Secondly – I made a comment about rewriting the code to query the B’s before the A’s. But we saw that Oracle worked through about 117,000 rows before returning a result: so the figures above tell us that it must have worked through almost all the B’s, and the handful of A’s was just a tiny little blip before it got to the B iteration – so there’s no point in making that change.

My suggestion for the function-based index above could be modified in two ways, of course – add two more “when”s to each “case” to capture the D and E rows, or take the opposite viewpoint and create an index on expressions like: “case sts when ‘C’ then to_char(null) else sts end”. The benefit of the latter approach is that you don’t have to modify the index definition (and rebuild the index) if a new legal value for sts appears.
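
As a sketch, the latter index might look like this – remember that the query (and any other code that should use the index) would have to be edited to use exactly the same expression:

create index t1_sts_not_c on t1 (
        case sts when 'C' then to_char(null) else sts end
);

-- matching predicate in the query, e.g.:
--      where case sts when 'C' then to_char(null) else sts end in ('A','B')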

MySQL 5.6 vs 5.7

A whitelist preview of Performance Insights has just started on RDS MySQL, and it gave me a chance to visually compare the load profiles of MySQL 5.6 and 5.7.

I first tried to use sysbench and ran into some curious anomalies.

The test I ran was

sysbench \
           --test=oltp \
           --oltp-table-size=10000000 \
           --oltp-test-mode=complex \
           --num-threads=10 \
           --max-time=0 \
           --max-requests=0 \
           --mysql-host=$host \
           --mysql-user=$user \
           --mysql-password=$password \
           --mysql-db=sysbench run

Load chart on MySQL 5.6 and 5.7

 

MySQL 5.6

I first ran on MySQL 5.6 and in Performance Insights the load looked like:

sysbench_56

 

This looks like a small load. The DB instance and the load-driving machine are both in the same AWS region (N. Virginia), i.e. not much latency. If there is a lot of latency between the machine driving the load and the database then it can often be hard to drive a high load. In this case latency was low, so it should be easy to drive a high load.

MySQL 5.7

I then ran the same load on 5.7 and it looked like:

sysbench_57

which is radically different. Whereas on 5.6 the host was mainly idle, since the load was well below the number of vCPUs, on 5.7 the load is above the number of vCPUs, indicating a bottleneck.

SQL on 5.6 vs 5.7

 

MySQL 5.6

Let’s look at the SQL showing up in Performance Insights. For MySQL 5.6 the list was quite limited and there was no “commit”.

sysbench_56_sql

MySQL 5.7

On 5.7 most of the load was on commit, and there were quite a few other SQL statements showing up:

sysbench_57_sql

Waits on 5.6 vs 5.7

Looking at the waits on 5.6 and 5.7 they are quite different as well.

MySQL 5.6

MySQL 5.6 spends most of its time on CPU and none on writes to stable storage.

sysbench_56_waits

MySQL 5.7

On MySQL 5.7 most of the time is spent waiting on writes to stable storage and relatively little on CPU.

 

sysbench_57_waits

Analysis

At first I started to wonder if there was something fundamentally different between 5.6 and 5.7.

The main difference I was seeing was on COMMIT and writes to stable storage, so rather than trying to debug sysbench, I decided to run a basic workload that should help me see if there was a fundamental difference between 5.6 and 5.7.

Update workload on MySQL 5.6 and 5.7

I ran a workload of 4 sessions, each updating a single-row table 10,000 times. In one case autocommit was on, and in the other it was off. With autocommit on, I’d expect a lot of waits on writes to stable storage, and with it off I’d expect much less.

It turns out the load looks almost exactly the same on 5.6 as on 5.7. On the left of each chart below there is a little spike a few seconds wide. That little spike is 40,000 updates by 4 users in a few seconds with hardly any wait, as autocommit is off. Then the load job sleeps for 10 seconds, and the large load in both charts is the 40,000 updates with autocommit on, where we wait on writes to stable storage as expected.
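
For reference, a harness along these lines would reproduce the shape of the test – a minimal sketch, not the exact script behind the numbers below; the connection settings and the single-row table “t” are placeholders:

#!/bin/bash
# 4 concurrent sessions, 10,000 single-row updates each.
# Run once with AUTOCOMMIT=1 and once with AUTOCOMMIT=0.
run_updates () {
  { echo "set autocommit = ${AUTOCOMMIT:-1};"
    for i in $(seq 1 10000); do
      echo "update t set val = val + 1 where id = 1;"
    done
    echo "commit;"
  } | mysql -h "$host" -u "$user" -p"$password" testdb
}

for s in 1 2 3 4; do run_updates & done
time wait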

MySQL 5.6

mysql_66_update

MySQL 5.7

mysql_57_update

Conclusion

Looks like something is seriously wrong with sysbench on MySQL 5.6. I’ll have to dig into this.

In the meantime it looks like MySQL 5.6 and 5.7 respond quite similarly. If anything MySQL 5.7 was a bit faster. Here are the actual timings for 40,000 updates by 4 concurrent users:

5.6
autocommit=0
real   0m5.285s
autocommit=1
real   3m32.095s
 
5.7
autocommit=0
real   0m5.186s
autocommit=1
real   2m57.236s

 

 

Automatic Table Reorganization in #Exasol


One of Exasol’s key features is the low maintenance effort it needs compared to many other database systems. Automatic Table Reorganization is an example of this approach:

SQL_EXA> open schema adam;
EXA: open schema adam;

Rows affected: 0

SQL_EXA> create or replace table t1 as
         select rownum as id from (select 1 from dual connect by level<=1e6); 
EXA: create or replace table t1 as...

Rows affected: 1000000

SQL_EXA> select table_name,table_row_count,delete_percentage from exa_user_tables;
EXA: select table_name,table_row_count,delete_percentage from exa_user_tabl...

TABLE_NAME  TABLE_ROW_COUNT       DELETE
----------- --------------------- ------
T1                        1000000    0.0

1 row in resultset.

Yes, we have CREATE OR REPLACE TABLE in Exasol.

Let’s talk about Techiquette

I’m typing this at about 38,000 feet in the air. I’m travelling home from the Oracle Latin America tour – it was a great event, but I’ll save the details for another blog post. In reality, I should be sleeping, but as luck would have it one of the lights on our plane is malfunctioning. So whilst the rest of the plane is sleeping happily in restful darkness, my row and the rows around it are bathed in blazing artificial light :(


Ah…the joys of travel. Anyway, I digress.

Before I boarded the flight, I did what most people do in the airport. I signed my life away to a deluge of advertising emails as one does in order to get 30 precious minutes of free wi-fi. I checked some emails, and had a glance through twitter when I came upon this one.

[screenshot of the tweet]

As you might expect, there’s plenty of social media activity about this tweet, much of it about the contradiction between the language and what you might expect to come out of the Office of the President of the United States. I’m not going to enter into that debate, or get into political discussion. Let’s face it – Donald Trump strikes me as more a Sybase man (Hey don’t flame me Sybase readers…these are just jokes ok?)

But the tweet made me think about the kind of language we sometimes see on AskTOM and on technical forums in general.

So let’s talk about etiquette on technical forums, or “Techiquette” as I’ve coined it.

We’re actually pretty lucky in technical circles with our discussion forums, because most of the conversation revolves around topics that can be backed up with cold hard facts. In those instances, even robust discussion stays civil, because it’s pretty hard to disagree with solid evidence when it’s presented.  A hypothetical example might be something like:

Forum Participant #1: “I need to have a table with 500 columns”
Forum Participant #2: “You cannot – Oracle is limited to 255 columns”
Forum Participant #3: “I disagree – here is an example DDL demonstrating 800 columns”

Participant #2 can’t really dispute the response, because the example is right there in black and white.

But here is where I think conversations can go off the rails and slide downhill into abuse and insult. It is when the content is both technical and emotional, for example:

Forum Participant #1: “I love using RMAN – it made our backup processes much easier”
Forum Participant #2: “RMAN sucks – it is too complex, anyone using it obviously has no clue about anything”
Forum Participant #1: “Dumb ass”

and away we go…The conversation goes from friendly to flame war.

The problem is – I don’t want my forums to be devoid of emotion, even when the topic is technical. Ever been to a presentation where the presenter does not show any joy or enthusiasm for the material? It’s like spending 45 minutes having root canal work on your teeth. We want people to be passionate about the things that interest them.

So how do we avoid forums descending into abuse whilst still encouraging emotion, and hence potentially emotionally charged content?

Here are my ground rules:

  • Expressing an affirmative or agreeing position? Positive emotions 100% allowed and encouraged.
  • Expressing a negative or disagreeing position? Express it as dispassionately as possible. Stick solely to the facts.

Simple mathematics tells us that if we can stick to the above, the average “positivity index” in a tech conversation will always be greater than or equal to zero.

I’m not claiming to be an angel here – I don’t always manage to follow my own guidelines as well as I should. But I’m always striving to improve, and hopefully if we can all follow them to the best of our ability, then in the main, we’ll have much more civil and interesting community conversations as a result.

“Let’s Talk Database” coming to Wellington, Auckland, Brisbane and Perth in September.

Due to popular demand, I’ve been asked by Oracle to again run some more “Let’s Talk Database” events in September. Dates and venues are as follows: Monday, 3 September – Wellington (Wellington Oracle Office): Registration Link. Tuesday, 4 September – Auckland (Auckland Oracle Office): Registration Link. Wednesday, 12 September – Brisbane (Brisbane Oracle Office): Registration Link. Thursday, 13 September – Perth (Perth Oracle Office): Registration Link. […]

The size of Oracle Home: from 9GB to 600MB

By Franck Pachot

.
This is research only and totally unsupported. When building docker images to run Oracle Database in a container, we try to get the smallest image possible. One way is to remove some subdirectories that we know will not be used. For example, the patch history is not used anymore once we have the required version. The dbca templates can be removed as soon as we have created the database… In this post I take the opposite approach: run some workload on a normal Oracle Home, and keep only the files that were used.

I have Oracle Database 18c installed in /u00/app/oracle/product/18EE and it takes 9GB on my host:

[oracle@vmreforatun01 ~]$ du --human-readable --max-depth=1 $ORACLE_HOME | sort -h | tail -10
 
352M /u00/app/oracle/product/18EE/jdk
383M /u00/app/oracle/product/18EE/javavm
423M /u00/app/oracle/product/18EE/inventory
437M /u00/app/oracle/product/18EE/assistants
605M /u00/app/oracle/product/18EE/md
630M /u00/app/oracle/product/18EE/bin
673M /u00/app/oracle/product/18EE/apex
1.4G /u00/app/oracle/product/18EE/.patch_storage
2.3G /u00/app/oracle/product/18EE/lib
9.4G /u00/app/oracle/product/18EE

Gigabytes of libraries (most of them used only to link the executables), hundreds of megabytes of binaries, templates for new databases, applied patches, old object files, options, tools, command line and graphical interfaces,… Do we need all that?

For a full installation in production, yes, for sure. The more we have, the better it is. When you have to connect at 2 a.m. because you are on-call and a critical alert wakes you up, then you will appreciate having all the tools on the server. Especially if you connect through a few security obstacles such as remote VPN, desktop, Wallix, tunnels, to finally get a high-latency tty with no copy-paste possibilities. With a full Oracle Home, you can face any issue. You have efficient command line interfaces (sqlplus and hopefully sqlcl) or graphical ones (SQL Developer, asmca,…). For severe problems, you can even re-link, apply or roll back patches, quickly create a new database to import something into it,…

But what if you just want to provide a small container where a database is running, with no additional administration support? Where you will never re-install the software, apply patches, re-create the database, or troubleshoot weird issues. Just have users connect through the listener port and never log in to the container. Then, most of these 9.4 GB are useless.

But how to know which files are useful or not?

If you can rely on the Linux ‘access time’ then you may look at the files accessed during the last few days – after any installation or database creation is done:

[oracle@vmreforatun01 ~]$ find $ORACLE_HOME -atime -1 -exec stat -L -c "%x %y %z %F %n" {} \; | sort

But this is not reliable. Access time depends on the file type, filesystem, mount options,… and is usually bypassed as much as possible because writing something just to log that you read something is not a very good idea.
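
A quick way to see what you are dealing with on the relevant mount point (relatime, the usual default, only updates atime in limited cases, and noatime not at all) – assuming /u00 is the mount point here:

[oracle@vmreforatun01 ~]$ findmnt -no FSTYPE,OPTIONS /u00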

Here, I’ll trace all system calls related to file names (strace -e trace=file). I’ll trace them from the start of the database, so I run strace on dbstart with the -f argument to trace across forks. Then, I’ll trace the listener, the instance processes and any user process created through the listener.

I pipe the output to an awk script which extracts the file names (which are enclosed in double quotes in the strace output). Basically, the awk is just setting the field separator to the double-quote character with -F'"' and printing the $2 token for each line. There are many single and double quotes here because of shell interpretation.

[oracle@vmreforatun01 ~]$ dbshut $ORACLE_HOME ; strace -fe trace=file -o "|awk -F'"'"'"' '"'{print $2}'"'" sh -xc "dbstart $ORACLE_HOME >&2" | grep "^$ORACLE_HOME" | sort -u > /tmp/files.txt &

Then I run some activity. I did this on our Oracle Tuning training workshop lab, when reviewing all exercises after upgrading the lab VM to 18c. This runs some usual SQL for application (we use Swingbench) and monitoring. The idea is to run through all features that you want to be available on the container you will build.

When I’m done, I dbshut (remember this is for a lab only – strace is not for production) and then the strace output gets deduplicated (sort -u) and written to files.txt in /tmp.

This file contains all files referenced by system calls. Surprisingly, there is one that is not captured here, the ldap messages file, but if I do not take it then the remote connections will fail with:

ORA-07445: exception encountered: core dump [gslumcCalloc()+41] [SIGSEGV] [ADDR:0x21520] [PC:0x60F92D9] [Address not mapped to object] []

I got it with a very empirical approach and will try to understand it later. For the moment, I just add it to the list:

[oracle@vmreforatun01 ~]$ ls $ORACLE_HOME/ldap/mesg/ldapus.msb >> /tmp/files.txt

I also add adrci and dbshut scripts as they are small and may be useful:

[oracle@vmreforatun01 ~]$ ls $ORACLE_HOME/bin/adrci $ORACLE_HOME/bin/dbshut >> /tmp/files.txt

From this list, I check those which are not directories, and tar all regular files and symbolic links into /tmp/smalloh.tar:

[oracle@vmreforatun01 ~]$ stat -c "%F %n" $(cat /tmp/files.txt) | awk '!/^directory/{print $3}' | tar -cvf /tmp/smalloh.tar --dereference --files-from=-

This is a 600M tar:

[oracle@vmreforatun01 ~]$ du -h /tmp/smalloh.tar
 
598M /tmp/smalloh.tar

Then I can remove my Oracle Home

[oracle@vmreforatun01 ~]$ cd $ORACLE_HOME/..
[oracle@vmreforatun01 product]$ rm -rf 18EE
[oracle@vmreforatun01 product]$ mkdir 18EE

and extract the files from my tar:

[oracle@vmreforatun01 /]$ tar -xf /tmp/smalloh.tar

I forgot that there are some setuid executables so I must be root to set them:

[oracle@vmreforatun01 /]$ ls -l $ORACLE_HOME/bin/oracle
-rwxr-x--x. 1 oracle oinstall 437157251 Aug 11 18:40 /u00/app/oracle/product/18EE/bin/oracle
[oracle@vmreforatun01 /]$ su
Password:
[root@vmreforatun01 /]# tar -xf /tmp/smalloh.tar
[root@vmreforatun01 /]# exit
[oracle@vmreforatun01 /]$ ls -l $ORACLE_HOME/bin/oracle
-rwsr-s--x. 1 oracle oinstall 437157251 Aug 11 18:40 /u00/app/oracle/product/18EE/bin/oracle

That’s a 600MB Oracle Home then. You can reduce it further by stripping the binaries:

[oracle@vmreforatun01 18EE]$ du -hs $ORACLE_HOME
599M /u00/app/oracle/product/18EE
[oracle@vmreforatun01 18EE]$ strip $ORACLE_HOME/bin/* $ORACLE_HOME/lib/*
[oracle@vmreforatun01 18EE]$ du -hs $ORACLE_HOME
570M /u00/app/oracle/product/18EE

but for only 30MB I really prefer to have all the symbols. As I’m doing something completely unsupported, I may have to do some troubleshooting.

Now I’m ready to start the database and the listener:

[oracle@vmreforatun01 18EE]$ dbstart $ORACLE_HOME
Processing Database instance "DB1": log file /u00/app/oracle/product/18EE/rdbms/log/startup.log

and I run some Swingbench workload to check that everything is fine:

[oracle@vmreforatun01 18EE]$ /home/oracle/swingbench/bin/charbench -cs //localhost:1521/APP -u soe -p soe -uc 10 -min 5 -max 20 -a -v
Author : Dominic Giles
Version : 2.5.0.932
 
Results will be written to results.xml.
 
Time Users TPM TPS
 
6:35:15 PM 0 0 0
...
6:35:44 PM 10 12 9
6:35:45 PM 10 16 4
6:35:46 PM 10 21 5
6:35:47 PM 10 31 10

The only errors in alert.log are about checking the patches at install:

QPI: OPATCH_INST_DIR not present:/u00/app/oracle/product/18EE/OPatch
Unable to obtain current patch information due to error: 20013, ORA-20013: DBMS_QOPATCH ran mostly in non install area
ORA-06512: at "SYS.DBMS_QOPATCH", line 767
ORA-06512: at "SYS.DBMS_QOPATCH", line 547
ORA-06512: at "SYS.DBMS_QOPATCH", line 2124

Most of those 600MB are in the server executable (bin/oracle) and client shared library (lib/libclntsh.so):

[oracle@vmreforatun01 ~]$ size -td /u00/app/oracle/product/18EE/bin/* /u00/app/oracle/product/18EE/lib/* | sort -n
 
text data bss dec hex filename
2423 780 48 3251 cb3 /u00/app/oracle/product/18EE/lib/libofs.so
4684 644 48 5376 1500 /u00/app/oracle/product/18EE/lib/libskgxn2.so
5301 732 48 6081 17c1 /u00/app/oracle/product/18EE/lib/libodm18.so
10806 2304 1144 14254 37ae /u00/app/oracle/product/18EE/bin/sqlplus
13993 2800 1136 17929 4609 /u00/app/oracle/product/18EE/bin/adrci
46456 3008 160 49624 c1d8 /u00/app/oracle/product/18EE/lib/libnque18.so
74314 4824 1248 80386 13a02 /u00/app/oracle/product/18EE/bin/oradism
86396 23968 1144 111508 1b394 /u00/app/oracle/product/18EE/bin/lsnrctl
115523 2196 48 117767 1cc07 /u00/app/oracle/product/18EE/lib/libocrutl18.so
144591 3032 160 147783 24147 /u00/app/oracle/product/18EE/lib/libdbcfg18.so
216972 2564 48 219584 359c0 /u00/app/oracle/product/18EE/lib/libclsra18.so
270692 13008 160 283860 454d4 /u00/app/oracle/product/18EE/lib/libskjcx18.so
321701 5024 352 327077 4fda5 /u00/app/oracle/product/18EE/lib/libons.so
373988 7096 9536 390620 5f5dc /u00/app/oracle/product/18EE/lib/libmql1.so
717398 23224 110088 850710 cfb16 /u00/app/oracle/product/18EE/bin/orabaseconfig
717398 23224 110088 850710 cfb16 /u00/app/oracle/product/18EE/bin/orabasehome
878351 36800 1144 916295 dfb47 /u00/app/oracle/product/18EE/bin/tnslsnr
928382 108920 512 1037814 fd5f6 /u00/app/oracle/product/18EE/lib/libcell18.so
940122 56176 2376 998674 f3d12 /u00/app/oracle/product/18EE/lib/libsqlplus.so
1118019 16156 48 1134223 114e8f /u00/app/oracle/product/18EE/lib/libocr18.so
1128954 5936 160 1135050 1151ca /u00/app/oracle/product/18EE/lib/libskgxp18.so
1376814 18548 48 1395410 154ad2 /u00/app/oracle/product/18EE/lib/libocrb18.so
1685576 130464 160 1816200 1bb688 /u00/app/oracle/product/18EE/lib/libasmclntsh18.so
2517125 16496 15584 2549205 26e5d5 /u00/app/oracle/product/18EE/lib/libipc1.so
3916867 86504 111912 4115283 3ecb53 /u00/app/oracle/product/18EE/lib/libclntshcore.so.18.1
4160241 26320 69264 4255825 40f051 /u00/app/oracle/product/18EE/lib/libmkl_rt.so
5120001 459984 7784 5587769 554339 /u00/app/oracle/product/18EE/lib/libnnz18.so
10822468 302312 21752 11146532 aa1524 /u00/app/oracle/product/18EE/lib/libhasgen18.so
11747579 135320 160 11883059 b55233 /u00/app/oracle/product/18EE/lib/libshpkavx218.so
61758209 2520896 134808 64413913 3d6e0d9 /u00/app/oracle/product/18EE/lib/libclntsh.so.18.1
376147897 3067672 602776 379818345 16a39169 /u00/app/oracle/product/18EE/bin/oracle
487369241 7106932 1203944 495680117 1d8b7a75 (TOTALS)

Of course, this is probably not sufficient, especially if you want to run APEX, OJVM, or Oracle Text. The method is there: run a workload that covers everything you need, and build the Oracle Home from the files used there. I used strace here, but auditd can also be a good idea. Ideally, this job will be done one day by Oracle itself in a supported way, so that we can build a core container for Oracle Database and add features as Dockerfile layers. This has been done to release Oracle XE 11g, which is only 300MB. However, Oracle XE 18c, announced for October, will probably be larger as it includes nearly all options.
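
If you prefer auditd to strace, the same idea can be expressed as a read/execute watch on the Oracle Home – a sketch only, which would need the same kind of validation as the strace approach:

auditctl -w /u00/app/oracle/product/18EE -p rx -k smalloh
# ... run the workload, then summarise the files that were touched:
ausearch -k smalloh --raw | aureport --file --summary -i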

 

The post The size of Oracle Home: from 9GB to 600MB appeared first on Blog dbi services.

Oracle Database on OpenShift

By Franck Pachot

.
In a previous post I described the setup of MiniShift on my laptop in order to run OpenShift for test purposes. I even pulled the Oracle Database image from the Docker Store. But the goal is to import it into OpenShift to deploy it from the Image Stream.

I start MiniShift on my laptop, specifying a larger disk (default is 20GB)

C:\Users\Franck>minishift start --disk-size 40g
-- Starting profile 'minishift'
-- Check if deprecated options are used ... OK
-- Checking if https://github.com is reachable ... OK
-- Checking if requested OpenShift version 'v3.9.0' is valid ... OK
-- Checking if requested OpenShift version 'v3.9.0' is supported ... OK
-- Checking if requested hypervisor 'virtualbox' is supported on this platform ... OK
-- Checking if VirtualBox is installed ... OK
-- Checking the ISO URL ... OK
-- Checking if provided oc flags are supported ... OK
-- Starting the OpenShift cluster using 'virtualbox' hypervisor ...
-- Minishift VM will be configured with ...
Memory: 2 GB
vCPUs : 2
Disk size: 40 GB
-- Starting Minishift VM .................................................................... OK
-- Checking for IP address ... OK
-- Checking for nameservers ... OK
-- Checking if external host is reachable from the Minishift VM ...
Pinging 8.8.8.8 ... OK
-- Checking HTTP connectivity from the VM ...
Retrieving http://minishift.io/index.html ... OK
-- Checking if persistent storage volume is mounted ... OK
-- Checking available disk space ... 1% used OK
Importing 'openshift/origin:v3.9.0' ............. OK
Importing 'openshift/origin-docker-registry:v3.9.0' ... OK
Importing 'openshift/origin-haproxy-router:v3.9.0' ...... OK
-- OpenShift cluster will be configured with ...
Version: v3.9.0
-- Copying oc binary from the OpenShift container image to VM ... OK
-- Starting OpenShift cluster ...........................................................
Using nsenter mounter for OpenShift volumes
Using public hostname IP 192.168.99.105 as the host IP
Using 192.168.99.105 as the server IP
Starting OpenShift using openshift/origin:v3.9.0 ...
OpenShift server started.
 
The server is accessible via web console at:
https://192.168.99.105:8443
 
You are logged in as:
User: developer
Password:
 
To login as administrator:
oc login -u system:admin

MiniShift starts a VirtualBox VM and gets an IP address from the VirtualBox DHCP – here 192.168.99.105.
I can access the console at https://192.168.99.105:8443 and log in as developer or admin, but for the moment I’m continuing on the command line.

At any moment I can log in to the VM running OpenShift with the minishift command. Here I’m checking the size of the disks:

C:\Users\Franck>minishift ssh
 
[docker@minishift ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/live-rw 9.8G 697M 9.0G 8% /
devtmpfs 974M 0 974M 0% /dev
tmpfs 1000M 0 1000M 0% /dev/shm
tmpfs 1000M 18M 983M 2% /run
tmpfs 1000M 0 1000M 0% /sys/fs/cgroup
/dev/sr0 344M 344M 0 100% /run/initramfs/live
/dev/sda1 39G 1.8G 37G 5% /mnt/sda1
tmpfs 200M 0 200M 0% /run/user/0
tmpfs 200M 0 200M 0% /run/user/1000

Build the Docker image

The goal is to run in OpenShift a container from an image that has been build somewhere else. In this example I’ll not build one but use one provided on the Docker store: the Oracle Database ‘slim’ image. For this example, I’ll use the minishift VM docker, just because it is there.

I have DockerTools installed on my laptop and just want to set the environment to connect to the docker server on the minishift VM. I can get the environment from minishift:

C:\Users\Franck>minishift docker-env
SET DOCKER_TLS_VERIFY=1
SET DOCKER_HOST=tcp://192.168.99.105:2376
SET DOCKER_CERT_PATH=C:\Users\Franck\.minishift\certs
REM Run this command to configure your shell:
REM @FOR /f "tokens=*" %i IN ('minishift docker-env') DO @call %i

Here is how to directly set the environment from it:

C:\Users\Franck>@FOR /f "tokens=*" %i IN ('minishift docker-env') DO @call %i

Now my docker commands will connect to this docker server. Here are the related info, minishift is already running several containers there for its own usage:

C:\Users\Franck>docker info
Containers: 9
Running: 7
Paused: 0
Stopped: 2
Images: 6
Server Version: 1.13.1
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: journald
Cgroup Driver: systemd
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log:
Swarm: inactive
Runtimes: docker-runc runc
Default Runtime: docker-runc
Init Binary: docker-init
containerd version: (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: e9c345b3f906d5dc5e8100b05ce37073a811c74a (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
seccomp
Profile: default
selinux
Kernel Version: 3.10.0-862.6.3.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.953GiB
Name: minishift
ID: U7IQ:TE3X:HSGK:3ES2:IO6G:A7VI:3KUU:YMBC:3ZIR:QYUL:EQUL:VFMS
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: pachot
Registry: https://index.docker.io/v1/
Labels:
provider=virtualbox
Experimental: false
Insecure Registries:
172.30.0.0/16
127.0.0.0/8
Live Restore Enabled: false

As I’ll use the Oracle Database image for this example, I need to log in to the Docker Store to prove that I accept the licensing conditions:

C:\Users\Franck>docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username:
Password:
Login Succeeded

I pull the image, which takes some time because ‘slim’ means 2GB with Oracle Database.

C:\Users\Franck>docker pull store/oracle/database-enterprise:12.2.0.1-slim
Trying to pull repository docker.io/store/oracle/database-enterprise ...
12.2.0.1-slim: Pulling from docker.io/store/oracle/database-enterprise
4ce27fe12c04: Pull complete
9d3556e8e792: Pull complete
fc60a1a28025: Pull complete
0c32e4ed872e: Pull complete
be0a1f1e8dfd: Pull complete
Digest: sha256:dbd87ae4cc3425dea7ba3d3f34e062cbd0afa89aed2c3f3d47ceb5213cc0359a
Status: Downloaded newer image for docker.io/store/oracle/database-enterprise:12.2.0.1-slim

Here is the image:

C:\Users\Franck>docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
openshift/origin-web-console v3.9.0 aa12a2fc57f7 7 weeks ago 495MB
openshift/origin-docker-registry v3.9.0 0530b896b578 7 weeks ago 465MB
openshift/origin-haproxy-router v3.9.0 6b85d7aec983 7 weeks ago 1.28GB
openshift/origin-deployer v3.9.0 39ee47797d2e 7 weeks ago 1.26GB
openshift/origin v3.9.0 12a3f005312b 7 weeks ago 1.26GB
openshift/origin-pod v3.9.0 6e08365fbba9 7 weeks ago 223MB
store/oracle/database-enterprise 12.2.0.1-slim 27c9559d36ec 12 months ago 2.08GB

My minishift VM disk has increased by 2GB:

C:\Users\Franck>minishift ssh -- df -Th /mnt/sda1
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 xfs 39G 3.9G 35G 11% /mnt/sda1

Push the image to OpenShift registry

OpenShift has its own integrated container registry, from which Docker images are visible to Image Streams.
Here is the address of the registry:

C:\Users\Franck>minishift openshift registry
172.30.1.1:5000

I’ll run some OpenShift commands and the path to the minishift cache for ‘oc’ can be set with:

C:\Users\Franck>minishift oc-env
SET PATH=C:\Users\Franck\.minishift\cache\oc\v3.9.0\windows;%PATH%
REM Run this command to configure your shell:
REM @FOR /f "tokens=*" %i IN ('minishift oc-env') DO @call %i
 
C:\Users\Franck>@FOR /f "tokens=*" %i IN ('minishift oc-env') DO @call %i

I am still connected as developer to OpenShift:

C:\Users\Franck>oc whoami
developer

and I get the login token:

C:\Users\Franck>oc whoami -t
lde5zRPHjkDyaXU9ninZ6zX50cVu3liNBjQVinJdwFc

I use this token to log in to the OpenShift registry with docker, in order to be able to push the image:

C:\Users\Franck>docker login -u developer -p lde5zRPHjkDyaXU9ninZ6zX50cVu3liNBjQVinJdwFc 172.30.1.1:5000
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Login Succeeded

I create a new project to import the image to:

C:\Users\Franck>oc new-project oracle --display-name=Oracle
Now using project "oracle" on server "https://192.168.99.105:8443".
 
You can add applications to this project with the 'new-app' command. For example, try:
 
oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git
 
to build a new example application in Ruby.

This can also be done from the GUI. Here is the project on the right:
CaptureOpenShiftProject

I tag the image with the name of the registry (172.30.1.1:5000) and the name of the project (oracle) and add an image name, so that the full name is: 172.30.1.1:5000/oracle/ora122slim

C:\Users\Franck>docker tag store/oracle/database-enterprise:12.2.0.1-slim 172.30.1.1:5000/oracle/ora122slim

We can see this tagged image

C:\Users\Franck>docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
openshift/origin-web-console v3.9.0 aa12a2fc57f7 7 weeks ago 495MB
openshift/origin-docker-registry v3.9.0 0530b896b578 7 weeks ago 465MB
openshift/origin-haproxy-router v3.9.0 6b85d7aec983 7 weeks ago 1.28GB
openshift/origin-deployer v3.9.0 39ee47797d2e 7 weeks ago 1.26GB
openshift/origin v3.9.0 12a3f005312b 7 weeks ago 1.26GB
openshift/origin-pod v3.9.0 6e08365fbba9 7 weeks ago 223MB
172.30.1.1:5000/oracle/ora122slim latest 27c9559d36ec 12 months ago 2.08GB
store/oracle/database-enterprise 12.2.0.1-slim 27c9559d36ec 12 months ago 2.08GB

Note that it is the same IMAGE ID and doesn’t take more space:

C:\Users\Franck>minishift ssh -- df -Th /mnt/sda1
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 xfs 39G 3.9G 35G 11% /mnt/sda1

Then I’m finally ready to push the image to the OpenShift docker registry:

C:\Users\Franck>docker push 172.30.1.1:5000/oracle/ora122slim
The push refers to a repository [172.30.1.1:5000/oracle/ora122slim]
066e811424fb: Pushed
99d7f2451a1a: Pushed
a2c532d8cc36: Pushed
49c80855196a: Pushed
40c24f62a02f: Pushed
latest: digest: sha256:25b0ec7cc3987f86b1e754fc214e7f06761c57bc11910d4be87b0d42ee12d254 size: 1372

This is a copy, and takes an additional 2GB:

C:\Users\Franck>minishift ssh -- df -Th /mnt/sda1
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 xfs 39G 5.4G 33G 14% /mnt/sda1

Deploy the image

Finally, I can deploy the image as it is visible in the GUI:
CaptureOpenShiftImport

I choose to deploy from the command line:

C:\Users\Franck>oc new-app --image-stream=ora122slim --name=ora122slimdeployment
--> Found image 27c9559 (12 months old) in image stream "oracle/ora122slim" under tag "latest" for "ora122slim"
 
* This image will be deployed in deployment config "ora122slimdeployment"
* Ports 1521/tcp, 5500/tcp will be load balanced by service "ora122slimdeployment"
* Other containers can access this service through the hostname "ora122slimdeployment"
* This image declares volumes and will default to use non-persistent, host-local storage.
You can add persistent volumes later by running 'volume dc/ora122slimdeployment --add ...'

--> Creating resources ...
imagestreamtag "ora122slimdeployment:latest" created
deploymentconfig "ora122slimdeployment" created
service "ora122slimdeployment" created
--> Success
Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
'oc expose svc/ora122slimdeployment'
Run 'oc status' to view your app.

CaptureOpenShiftDeploy

I expose the service:

C:\Users\Franck>oc expose service ora122slimdeployment
route "ora122slimdeployment" exposed

/bin/bash: /home/oracle/setup/dockerInit.sh: Permission denied

Here is one little thing to change. From the POD terminal, I can see the following error:
CaptureOpenShiftCrash

The same can be read from command line:

C:\Users\Franck>oc status
In project Oracle (oracle) on server https://192.168.99.105:8443
 
http://ora122slimdeployment-oracle.192.168.99.105.nip.io to pod port 1521-tcp (svc/ora122slimdeployment)
dc/ora122slimdeployment deploys istag/ora122slim:latest
deployment #1 deployed 7 minutes ago - 0/1 pods (warning: 6 restarts)
 
Errors:
* pod/ora122slimdeployment-1-86prl is crash-looping
 
1 error, 2 infos identified, use 'oc status -v' to see details.
 
C:\Users\Franck>oc logs ora122slimdeployment-1-86prl -c ora122slimdeployment
/bin/bash: /home/oracle/setup/dockerInit.sh: Permission denied

This is because, by default and for security reasons, OpenShift runs the container with a random user id, but the files are executable only by the oracle user:

sh-4.2$ ls -l /home/oracle/setup/dockerInit.sh
-rwxr-xr--. 1 oracle oinstall 2165 Aug 17 2017 /home/oracle/setup/dockerInit.sh
sh-4.2$

The solution is quite simple: allow the container to run with its own user id:

C:\Users\Franck>minishift addon apply anyuid
-- Applying addon 'anyuid':.
Add-on 'anyuid' changed the default security context constraints to allow pods to run as any user.
Per default OpenShift runs containers using an arbitrarily assigned user ID.
Refer to https://docs.openshift.org/latest/architecture/additional_concepts/autho... and
https://docs.openshift.org/latest/creating_images/guidelines.html#opensh... for more information.
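
On a full OpenShift cluster, where this minishift add-on does not exist, a similar effect could presumably be obtained by granting the anyuid security context constraint to the project's default service account as a cluster administrator. This is a hedged alternative, not something run in this test:

C:\Users\Franck>oc adm policy add-scc-to-user anyuid -z default -n oracle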

Then the restart of the POD will go further:
CaptureOpenShiftOracle

This Oracle Database from the Docker Store is not really an image of an installed Oracle Database, but just a tar of the Oracle Home and database files that have to be untarred when the container starts.
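
Since this extraction can take several minutes, one way to follow the progress (a sketch I'm adding here, not output from this test) is to stream the deployment's pod logs:

C:\Users\Franck>oc logs -f dc/ora122slimdeployment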

Now, in addition to the image size I have an additional 2GB layer for the container:

C:\Users\Franck>minishift ssh -- df -Th /mnt/sda1
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 xfs 39G 11G 28G 28% /mnt/sda1
 
C:\Users\Franck>docker system df
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 7 6 3.568GB 1.261GB (35%)
Containers 17 9 1.895GB 58.87kB (0%)
Local Volumes 0 0 0B 0B
Build Cache 0B 0B

Of course there is more to customize. The minishift VM should have more memory, and so should the container for the Oracle Database. We probably also want to add an external volume and expose ports outside of the minishift VM.
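
As an illustration, here is a minimal sketch of those customizations. The memory size, resource limit, volume name, claim size and mount path are assumptions, not values from this test, and the VM memory setting only takes effect when the minishift VM is (re)created:

C:\Users\Franck>minishift start --memory=8GB
C:\Users\Franck>oc set resources dc/ora122slimdeployment --limits=memory=3Gi
C:\Users\Franck>oc volume dc/ora122slimdeployment --add --name=oradata --type=persistentVolumeClaim --claim-size=20Gi --mount-path=/ORCL
C:\Users\Franck>oc port-forward ora122slimdeployment-1-86prl 1521:1521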

 

The article Oracle Database on OpenShift first appeared on Blog dbi services.

Learn Linux With Docker and Microsoft Container Image

I just finished teaching an 8-hour pre-con at SQL Saturday Indianapolis on Essential Linux for the DBA.  The focus of the class was on what the Microsoft DBA would need to know as more SQL Server environments begin to run on Linux.

The obstacles and demands they face will be a bit different from those of the Oracle crowd, as they may not have Linux admin or Unix expertise onsite for support and will have to fend for themselves more than I did when starting out on Linux.

Some folks asked me why I chose Docker with the SQL Server image to teach the class, so I wanted to demonstrate why this, rather than VMware or other options, was my final choice.

Easy Download

Docker is available for Mac and Windows with a simple installation using the defaults.

  1. Download the correct installation for your OS type.
  2. Run the installer and keep all the defaults, choosing Linux containers, not Windows containers.
  3. Reboot Windows workstations. Done.

Incredibly Simple MSSQL Container Install

Microsoft has done a great job of creating a very small (maybe a bit too small, but we’ll get into that later…) image that can be used to create a running Linux container with SQL Server.  This gives the student a great opportunity to simulate much of what it would be like to work on a real Linux server.
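
If you want to pull the image ahead of time (docker run will otherwise pull it automatically on first use), a minimal sketch:

docker pull microsoft/mssql-server-linux:2017-latest
docker images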

There are a few tips I’ll give you when working with Docker and containers.

  1.  Don’t work from PowerShell or other advanced interfaces; Command Prompt is your friend.  You’re in the Linux world now, and the command line is your first and best choice for working with it.
  2. Remember the password complexity requirements.  If you use too simple a password, your container will shut down immediately after starting.  I automatically choose combinations of letters, numbers and special characters, making sure never to lead with a special character.  If I were to use the password “t3st1ngn0w”, the container would fail, but “t3st1ngn0w!” would be successful.
  3. Always choose 1433 for your port designation.  If you look at the logs, the container will redirect to 1433 no matter what, so don’t think you can get fancy.  It would take a lot of steps to change this and I haven’t seen it done successfully yet.  This is still in its infancy at this time.
  4. Because it does want to use 1433, make sure you don’t have an Express or Developer edition on your PC already running on that port (or anything else listening on it).  Even though SQL Server is in a container, SSMS still recognizes it as “localhost”, so if something else already owns the port, connections will continue to fail.
  5. If you experience an error on the create command, you must run a remove command, because a partial creation will already have registered the container metadata (docker rm <container name>); see the sketch after this list.
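
Here is a hedged sketch of that status-check and clean-up flow, assuming a hypothetical container name of sql2017 (use whatever you passed to --name):

docker ps -a
docker logs sql2017
docker rm sql2017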

The Creation

After the Docker installation is done, the command to create the container would look similar to the following, replacing anything inside the <> with your choice:

docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=<password>" -p 1403:1423 --name <container name> -d microsoft/mssql-server-linux:2017-latest

If I run this on my pc now, I’d see the following:

docker1.png

Running the “docker ps -a” command is something you will come to rely on to capture the status of the containers at any given time.  It tells me the what, when, who and where of what I just created.
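
To check that the instance inside the container really is up, you can open a Linux shell or a sqlcmd session directly in it. A minimal sketch, again assuming the hypothetical container name sql2017 and the sample password from tip 2:

docker exec -it sql2017 bash
docker exec -it sql2017 /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "t3st1ngn0w!"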

Also note the strange port numbers I set on my container.  We’ll get back to that in a later post, but do note them, if you haven’t already…

ORACLE_HOME with symbolic link and postupgrade_fixups

By Franck Pachot

.
Here is a quick post you may find through Google if you get the following error when running postupgrade_fixups.sql after an upgrade:

ERROR - Cannot open the preupgrade_messages.properties file from the directory object preupgrade_dir
DECLARE
*
ERROR at line 1:
ORA-29283: invalid file operation
ORA-06512: at "SYS.DBMS_PREUP", line 3300
ORA-06512: at "SYS.UTL_FILE", line 536
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 41
ORA-06512: at "SYS.UTL_FILE", line 478
ORA-06512: at "SYS.DBMS_PREUP", line 3260
ORA-06512: at "SYS.DBMS_PREUP", line 9739
ORA-06512: at line 11


Before upgrading a database with dbupgrade, you run, on the current version of your database, the preupgrade.jar from the new version (and you should probably download the latest one from MOS). This generates a script to run before the upgrade and one to run after the upgrade. Those scripts are generated under $ORACLE_BASE/cfgtoollogs/<SID>/preupgrade, where you find something like this:

drwxr-xr-x. 3 oracle oinstall 4096 Aug 11 19:36 ..
drwxr-xr-x. 3 oracle oinstall 4096 Aug 11 19:36 oracle
drwxr-xr-x. 3 oracle oinstall 4096 Aug 11 19:36 upgrade
-rw-r--r--. 1 oracle oinstall 14846 Aug 11 20:19 dbms_registry_extended.sql
-rw-r--r--. 1 oracle oinstall 7963 Aug 11 20:19 preupgrade_driver.sql
-rw-r--r--. 1 oracle oinstall 422048 Aug 11 20:19 preupgrade_package.sql
-rw-r--r--. 1 oracle oinstall 14383 Aug 11 20:19 parameters.properties
-rw-r--r--. 1 oracle oinstall 83854 Aug 11 20:19 preupgrade_messages.properties
-rw-r--r--. 1 oracle oinstall 50172 Aug 11 20:19 components.properties
-rw-r--r--. 1 oracle oinstall 2 Aug 11 20:19 checksBuffer.tmp
-rw-r--r--. 1 oracle oinstall 6492 Aug 11 20:20 preupgrade_fixups.sql
-rw-r--r--. 1 oracle oinstall 7575 Aug 11 20:20 postupgrade_fixups.sql
-rw-r--r--. 1 oracle oinstall 5587 Aug 11 20:20 preupgrade.log

Everything is straightforward.

oracle@vmreforatun01:/u00/app/oracle/product/ [DB2] java -jar /u00/app/oracle/product/18EE/rdbms/admin/preupgrade.jar
...
==================
PREUPGRADE SUMMARY
==================
/oracle/u00/app/oracle/cfgtoollogs/DB2/preupgrade/preupgrade.log
/oracle/u00/app/oracle/cfgtoollogs/DB2/preupgrade/preupgrade_fixups.sql
/oracle/u00/app/oracle/cfgtoollogs/DB2/preupgrade/postupgrade_fixups.sql
 
Execute fixup scripts as indicated below:
 
Before upgrade log into the database and execute the preupgrade fixups
@/oracle/u00/app/oracle/cfgtoollogs/DB2/preupgrade/preupgrade_fixups.sql
 
After the upgrade:
 
Log into the database and execute the postupgrade fixups
@/oracle/u00/app/oracle/cfgtoollogs/DB2/preupgrade/postupgrade_fixups.sql
 
Preupgrade complete: 2018-08-11T19:37:29
oracle@vmreforatun01:/u00/app/oracle/product/ [DB2]

For a database we have in a lab for our workshops, which I upgraded to 18c, I ran the postupgrade fixups script after the upgrade but got the error mentioned above about a UTL_FILE invalid file operation in the preupgrade_dir directory. I looked at the script. The postupgrade_fixups.sql script creates a directory object on $ORACLE_HOME/rdbms/admin and calls preupgrade_package.sql, which reads preupgrade_messages.properties.

This is a bit confusing because the same file also exists in the cfgtoollogs preupgrade subdirectory, but my directory object looks good:

SQL> select directory_name,directory_path from dba_directories where directory_name='PREUPGRADE_DIR';
 
DIRECTORY_NAME
--------------------------------------------------------------------------------------------------------------------------------
DIRECTORY_PATH
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
PREUPGRADE_DIR
/u00/app/oracle/product/18SE/rdbms/admin

So, as the “ORA-29283: invalid file operation” is not very detailed, I traced all the system calls on files (strace -fye trace=file) when running sqlplus and got this:

[pid 29974] clock_gettime(CLOCK_MONOTONIC, {17811, 723389136}) = 0
[pid 29974] stat("/u00/app/oracle/product/18SE/rdbms/admin/preupgrade_messages.properties", {st_mode=S_IFREG|0644, st_size=83854, ...}) = 0
[pid 29974] stat("/u00/app/oracle/product/18SE/rdbms/admin/", {st_mode=S_IFDIR|0755, st_size=65536, ...}) = 0
[pid 29974] lstat("/u00", {st_mode=S_IFLNK|0777, st_size=11, ...}) = 0
[pid 29974] readlink("/u00", "/oracle/u00", 4095) = 11
[pid 29974] lstat("/oracle", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
[pid 29974] lstat("/oracle/u00", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
[pid 29974] lstat("/oracle/u00/app", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
[pid 29974] lstat("/oracle/u00/app/oracle", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
[pid 29974] lstat("/oracle/u00/app/oracle/product", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
[pid 29974] lstat("/oracle/u00/app/oracle/product/18SE", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
[pid 29974] lstat("/oracle/u00/app/oracle/product/18SE/rdbms", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
[pid 29974] lstat("/oracle/u00/app/oracle/product/18SE/rdbms/admin", {st_mode=S_IFDIR|0755, st_size=65536, ...}) = 0
[pid 29974] clock_gettime(CLOCK_MONOTONIC, {17811, 724514469}) = 0

Then I realized that the ORACLE_HOME is under a symbolic link. For whatever reason, in this environment ORACLE_BASE is physically /oracle/u00/app/oracle, but there is a /u00 link to /oracle/u00 and this shorter path was used to set the environment variables. Since 11g, and for security reasons, UTL_FILE does not accept directory paths that go through a symbolic link. And we can see in the strace output above that it was detected (readlink).

So, a quick workaround here is to change postupgrade_fixups.sql to set the physical path instead of the one read from ORACLE_HOME by dbms_system.get_env.

However, if you can restart the instance, it is better to set the ORACLE_HOME to the physical path. Symbolic links in the ORACLE_HOME may be misleading. Remember that the ORACLE_HOME text string is part of the instance identification, combined with ORACLE_SID, so having different values, even when they resolve to the same path, will bring a lot of problems. Do not forget to change it everywhere (shell environment, listener.ora) so that you are sure nobody will use a different one when starting the database.
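
A minimal sketch of that change, assuming the physical path seen in the strace above and a simple single-instance environment (listener.ora and any scripts referencing the old path have to be adjusted as well):

export ORACLE_HOME=$(readlink -f /u00/app/oracle/product/18SE)
export PATH=$ORACLE_HOME/bin:$PATH
sqlplus / as sysdba <<EOF
shutdown immediate
startup
EOF
lsnrctl stop ; lsnrctl start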

 

The article ORACLE_HOME with symbolic link and postupgrade_fixups first appeared on Blog dbi services.

Scripts for Batch-Processing using the Data Dictionary in #Exasol


If you want to batch-process over a number of objects in Exasol, scripts that work with the Data Dictionary might do the trick. Let’s say I want to drop all of my tables that start with the letter P:

 

SQL_EXA> open schema fred;
EXA: open schema fred;

Rows affected: 0

SQL_EXA> create or replace script droptables
          as
          res=query([[select * from exa_user_tables where table_name like 'P%']])
          for i=1, #res
          do
           query([[drop table ::t]],{t=res[i].TABLE_NAME})
          end
        /
EXA: create or replace script droptables...

Rows affected: 0

SQL_EXA> select table_name from exa_user_tables;
EXA: select table_name from exa_user_tables;

TABLE_NAME
--------------------------------------------------
P9_TAB
P8_TAB

2 rows in resultset.

Amazingly enough, DROP TABLE is an operation that can be rolled back in Exasol. So, as a safeguard, I can disable autocommit here.

SQL_EXA> set autocommit off;
SQL_EXA> execute script droptables;
EXA: execute script droptables;

Rows affected: 0

SQL_EXA> select table_name from exa_user_tables;
EXA: select table_name from exa_user_tables;

TABLE_NAME
--------------------------------------------------

0 rows in resultset.

SQL_EXA> rollback;
EXA: rollback;

Rows affected: 0

SQL_EXA> select table_name from exa_user_tables;
EXA: select table_name from exa_user_tables;

TABLE_NAME
---------------------------------------------------
P9_TAB
P8_TAB

2 rows in resultset.