Oakies Blog Aggregator

ASSM Truncate.

Here’s one that started off with a tweet from Kevin Closson, heading towards a finish that shows some interesting effects when you truncate large objects that are using ASSM. To demonstrate the problem I’ve set up a tablespace using system allocation of extents and automatic segment space management (ASSM).  It’s the ASSM that causes the problem, but it requires a mixture of circumstances to create a little surprise.


create
	tablespace test_8k_auto_assm
	datafile	-- OMF
	SIZE 1030M
	autoextend off
	blocksize 8k
	extent management local
	autoallocate
	segment space management auto
;

create table t1 (v1 varchar2(100)) pctfree 99 tablespace test_8k_auto_assm storage(initial 1G);

insert into t1 select user from dual;
commit;

alter system flush buffer_cache;

truncate table t1;

I’ve created a table with an initial definition of 1GB, which means that (in a clean tablespace) the autoallocate option will jump straight to extents of 64MB, with 256 table blocks mapped per bitmap block for a total of 32 bitmap blocks in each 64MB extent. Since I’m running this on 11.2.0.4 and haven’t included “segment creation immediate” in the definition I won’t actually see any extents until I insert the first row.

So here’s the big question – when I truncate this table (using the given command) how much work will Oracle have to do?

Exchanging notes over twitter (140 characters at a time) and working from a model of the initial state, it took a little time to come to understand what was (probably) happening and then produce this silly example – but here’s the output from a snapshot of v$session_event for the session across the truncate:


Event                                             Waits   Time_outs           Csec    Avg Csec    Max Csec
-----                                             -----   ---------           ----    --------    --------
local write wait                                    490           0          83.26        .170          13
enq: RO - fast object reuse                           2           0         104.90      52.451         105
db file sequential read                              47           0           0.05        .001           0
db file parallel read                                 8           0           0.90        .112           0
SQL*Net message to client                            10           0           0.00        .000           0
SQL*Net message from client                          10           0           0.67        .067         153
events in waitclass Other                             2           0           0.04        .018         109

The statistic I want to highlight is the number recorded against “local write wait”: truncating a table of one row we wait for 490 blocks to be written! We also have 8 “db file parallel read”  waits which, according to a 10046 trace file, were reading hundreds of blocks. (I think the most significant time in this test – the RO enqueue wait – may have been waiting for the database writer to complete the work needed for an object checkpoint, but I’m not sure of that.)
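Figures like these can be captured with something along the following lines (an illustrative sketch – the working table and the diff query are not the exact harness I used):

```sql
-- Illustrative sketch: snapshot v$session_event for this session,
-- run the truncate, then report the differences.
create table event_snap as
select  event, total_waits, time_waited_micro
from    v$session_event
where   sid = sys_context('userenv','sid');

truncate table t1;

select  ses.event,
        ses.total_waits - snp.total_waits                     waits,
        (ses.time_waited_micro - snp.time_waited_micro)/10000 csec
from    v$session_event ses, event_snap snp
where   ses.sid   = sys_context('userenv','sid')
and     snp.event = ses.event
and     ses.total_waits > snp.total_waits
order by csec desc;
```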

The blocks written were the space management bitmap blocks for the extent(s) that remained after the truncate – even the ones that referenced extents above the high water mark for the table. Since we had set the table’s initial storage to 1GB, we had a lot of bitmap blocks. At 32 per extent and 16 extents (64MB * 16 = 1GB) we might actually expect something closer to 512 blocks, but in fact Oracle had formatted the last extent with only 8 space management blocks, and the first extent had an extra 2 to cater for the level 2 bitmap block and the segment header block, giving: 32 * 15 + 8 + 2 = 490.
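As a quick sanity check of that arithmetic from SQL*Plus:

```sql
-- 15 full 64MB extents at 32 bitmap blocks each, plus 8 in the partly
-- formatted last extent, plus the L2 bitmap block and segment header
-- in the first extent:
select 15 * 32 + 8 + 2 as space_mgmt_blocks from dual;

SPACE_MGMT_BLOCKS
-----------------
              490
```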

As you may have seen above, the impact on the test that Kevin was doing was quite dramatic – he had set the initial storage to 128GB (lots of bitmap blocks), partitioned the table (more bitmap blocks) and was running RAC (so the reads were running into waits for global cache grants).

I had assumed that this type of behaviour happened only with the “reuse storage” option of the truncate command: and I hadn’t noticed before that it also appeared even if you didn’t reuse storage – but that’s probably because the effect applies only to the bit you keep, which may typically mean a relatively small first extent. It’s possible, then, that in most cases this is an effect that isn’t going to be particularly visible in production systems – but if it is, can you work around it? Fortunately another tweeter asked the question “What happens if you ‘drop all storage’?” Here’s the result from adding that clause to my test case:


Event                                             Waits   Time_outs           Csec    Avg Csec    Max Csec
-----                                             -----   ---------           ----    --------    --------
enq: RO - fast object reuse                           1           0           0.08        .079           0
log file sync                                         1           0           0.03        .031           0
db file sequential read                              51           0           0.06        .001           0
SQL*Net message to client                            10           0           0.00        .000           0
SQL*Net message from client                          10           0           0.56        .056         123
events in waitclass Other                             3           0           0.87        .289         186


Looking good – if you don’t keep any extents you don’t need to make sure that their bitmaps are clean. (The “db file sequential read” waits are almost all about the data dictionary, following on from my “flush buffer cache”).
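For reference, the second test simply added the clause to the truncate (this is documented syntax – check the SQL Language Reference for your version):

```sql
truncate table t1 drop all storage;
```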

Footnote 1: the same effect appears in 12.1.0.2
Footnote 2: it’s interesting to note that the RO enqueue wait time seems to parallel the local write wait time: perhaps a hint that there’s some double counting going on. (To be investigated, one day).

Openworld bound..

I’ll be in Openworld again this year, and have a couple of speaking slots…

I’ll talk about new features for developers in 12c on Sunday, Sep 28, 2:30 Moscone South 303

https://oracleus.activeevents.com/2014/connect/sessionDetail.ww?SESSION_ID=2244

and of course, at the awesome Oaktable world..

http://www.oraclerealworld.com/oaktable-world/agenda/

Drop in, learn some cool things, or just pop up and say Hello!

Farnham for US Senate: #IMPOVERISH_DRUG_LORDS

The most effective way to put money and power in the hands of criminals is to make something popular illegal.

What do I mean? I mean we should have learned this from the Prohibition. We should have learned this from the great failure of the war on drugs.

IF you really want less addiction and use of drugs, you are far better off to stop having them be illegal.

What?

Okay, there will always be a baseline use of drug of various kinds for recreational purposes. Do you want it to be more or less?

When you make something illegal, you create a profit incentive for criminals to smuggle and distribute whatever it is. Because they profit from illegal consumption it is guaranteed they will market the illegal thing to make it more popular.

This is what funds the drug cartels. This is what makes drugs expensive. This is what prevents legal investment in the creation of recreational drugs that are safer and less addictive.

IF you really want less addiction and use of drugs, you had better get behind eliminating the profit motive to push drugs.

It really is that simple. NO illegal drugs. Cocaine will be about the price of aspirin. Am I saying you should go have some? NO. What I’m saying is that there will be no profit in pushing. If there is no profit, there will be no marketing. If there is no profit there will be no drug lords and they will not have funds to arm their killers.

So take the power out of the hands of the drug lords and reduce addiction: Just make it ALL legal.

#IMPOVERISH_DRUG_LORDS and empty our jails. Fifty years is just too long to let this idiocy continue.

 

The #SAFE_DEAL: On the road to Equal Taxation with a baseline of the poverty level

Twenty-two years ago, when I previously ran for US Senate and began writing about reviving the idea of full support of persons legally in the country to the poverty level, we were in much better overall financial shape. An actual maximum tax would have been possible with some budget discipline. Eventually and philosophically, I think that is still the goal. If everyone made enough money to pay an equal share of the country’s budget and still be above the poverty level, that is what we should do. The philosophy is clear: One Person, One Vote; One Person, One Tax.

But of course we live in a world where many make close to zero and an arguable number somewhere around 50 million make less than the poverty level. Currently federal welfare spending is about $514 BILLION dollars. Unfortunately only about half of what we spend reaches recipients as cash. We have a lot of programs, we force a lot of people to grovel before civil servants to get money, and we try to control what welfare recipients are allowed to spend money on. That is all very expensive. Even if we say it would cost the IRS $14 BILLION dollars a year to distribute economic aid directly as a cash negative income tax payment, we would still have $500 BILLION dollars to give to those 50 million. If we group those 50 million as households of 4 (which is a pretty good weighted average and close enough for first order estimates), then we have 12.5 million households under the poverty level. Most of them don’t actually make zero income, but even if we pretended they did, 500 billion divided by 12.5 million is $40,000. The benchmark “poverty level” for a household of 4 is under $24,000. So there is something very wrong with our welfare system.

Now, with a bit of editing to current numbers and acknowledging that a maximum equal tax has become a future goal rather than a near term reality, here are my thoughts (only a few of them original) that have not changed much in twenty-two years:

One Citizen, One Vote; One Citizen, One Tax

What the heck is an Equality Tax, and why the heck should You support it? (Eventually, as a goal) 

Let’s change the main tax basis altogether:  Take the budget, subtract off the items you plan to pay for with taxes on easily metered use (such as gasoline taxes to pay for roadwork or a direct fee to businesses hiring non-Citizens legally in the country), and call that the net budget to be raised. Now take the number of Citizens aged 23 to 62, and divide. There is your equal tax. (Why only people aged 23 to 62? Well I think that forty years is enough to pay tax. Any younger, and you start making it hard to get a good start on adult life. Any older, and I start getting uncomfortable about putting death and taxes too close together. I know I can’t avoid either, but I’d prefer not to have them at the same time.)

Okay, so there is your equal tax. Well, almost. The problem is, some Citizens make no money at all or earn less than the poverty level, and I don’t want to tax any Citizen into poverty. While I disagree vehemently with success penalty taxes that attempt to transfer income from the top earners to others (including many well above the poverty line), I find support to the poverty line to be qualitatively different and I believe it is a legitimate public policy. In fact you can make a strong case that by supporting Citizens universally to the poverty level you eliminate justification for crimes of necessity (my children were hungry, what else could I do?) and probably you save money over all. Besides that, we are a great country and we just should not have Citizens living in poverty. Of course this means that you’re going to have to add the required funds to raise Citizens to the poverty level to the net budget, but the result is that every Citizen above the poverty level pays the same tax and shares the burden equally. No citizen is left below the poverty line, so instead of arguing about each possible social program, such as subsidized health care insurance, we just debate raising or lowering the poverty level and let individual families make their own decisions on which things to buy. Since it is pretty easy to get a head count and an estimate of poverty level support costs, the cost to each Citizen of each government program becomes transparent and easy to calculate. Suppose there are 100 million taxpayers. Then a billion dollar program is going to cost ten bucks each. Simple! You can directly assess whether a given program is worth it to you and so advise your representatives.

There is one more slight complication for dealing with the Citizens that are in the age range being taxed who earn more than the poverty level but not enough to reasonably pay the full equal tax. These Citizens should not be faced with a penalty for earning more. In fact we want them to earn more so they can strive toward paying the equal share instead of being subsidized. So let them keep seventy-five cents of each dollar they earn up to the point where they are paying the full equal tax. If you make more than four times the poverty level plus the equal tax, then you can just pay the equal tax and be done with it. At that level you are off the dole, you have paid your equal tax, and you are free to use your prosperity for yourself and the charities of your choice. 

So, in summary, Citizens start at a minimum of the poverty level. Citizens get to keep 75 cents of every dollar they earn until they have defrayed the poverty level subsidy and paid the full equal tax. Once the net tax payment reaches the full equal tax, you are done paying that year and you get to decide what to do with the rest. 

What else happens? Well, for starters, there is no need for a minimum wage. Instead of marginal small business owners being squeezed between working themselves to death and hiring a Citizen at the minimum wage, they hire at the wage the work is worth to the business. The business owner wins, because it makes economic sense. The government wins, because the Citizen hired at a value based wage otherwise didn’t have a job and the taxed Citizens were paying the whole tab to the poverty level. And the hired Citizen wins, because the Citizen below the full equal tax gets to keep 75 cents on the dollar of what is earned so it does not cut into the poverty level support the Citizen receives.

This is just one example. The list of things that just fall into place with almost no effort is nearly endless. How about the third rail? Well guess what – if Citizens get a base line of the poverty level then you really don’t need social security, do you? Of course we have to pay off our moral obligation to everyone who thought they were paying into a retirement plan (when in fact they were paying benefits to current recipients). So if the social security payment you are due tops the poverty level, then you should continue to get that excess. But we should stop collecting payroll taxes for two reasons: First, payroll taxes distort the flat tax above the intended rate and make cash under the table arrangements profitable. Second, collecting payroll taxes creates the impression that the government is saving your money for your retirement and your healthcare.

So what should your benefit be if you’re somewhere in the middle when the funding source is no longer a payroll tax? Well we can argue a bit about the calculation, but currently you are required to earn 40 quarters (10 years) to qualify for normal retirement benefits and in a working life of 40 years you would earn 160 quarters. So one way is to divide the number of quarters you have earned by 160 and pay you that quotient (with a ceiling of 1) times the benefit you would otherwise have received. That way people over, say, 60 won’t be caught by surprise and people who have paid in for a few years expecting a benefit have plenty of warning but don’t lose everything.

Another huge advantage of the baseline safety net of the poverty level for persons legally in the country is that it creates a price advantage for employers to hire persons legally here in the recorded economy instead of paying persons who are not here legally cash under the table. The net effect is that we reduce the illegal immigration magnet. With this price advantage in effect, it once again becomes more profitable to hire Citizens than persons here illegally. This is the way to get immigration reform by economic force. With the diminished demand for workers it will no longer be profitable to engage in sophisticated human smuggling operations. This, combined with ending the war on drugs (a separate lengthy post in its own right), will return the upper hand to our border patrol without exorbitant capital expense. Tunnels and armed parties just won’t be worth it.

Equality taxation, where all of us who have enough share the burden equally, results in a simple and elegant framework where everything else just makes sense. There is nothing particularly magic about the 25 cents on the dollar flat tax in the range where you have income but not enough income to be unsubsidized. Any number in the range where effort to earn the next dollar is incentivized will do, but 25 cents makes the readily understood word formula “four times the poverty level plus four times the equal tax” true as an easy to express formula of the income level you need to pay your equal share. At which point the government should not have to be told how much you actually make, since you are not asking for any subsidy. Equality taxation can be implemented with or without ending corporate taxation, which would also be a smart move both for efficiency and to get corporations out of the business of deciding social charitable priorities in the course of tax avoidance.

Unfortunately, as I noted in the lead paragraph, capping at the top Equality Tax is an eventual goal. Currently we have too big a national debt. We’ve been digging a hole of obligations since 1935 and it will take a while to whittle that down. In the short term, not only is it unlikely we will be able to cap taxes at the theoretical Equality Tax, if we intend to start paying off the debt we might need to go a little bit “progressive” on the top end. By a little bit, I mean no more than 35 percent. Why? Because that is about the rate above which the rich will fight back to keep more of their earnings!

The fact that some folks have figured out a way to earn a lot of money brings forth a lot of greed and jealousy on the part of some of our biggest political airbags. “The income gap is horrible and unacceptable!” they scream. Well folks, they are worried about the #WRONG_GAP. The #CORRECT_GAP for the government to concern itself with is the gap between the poorest and the poverty level. Persecution of the wealthy not only fails to close the #CORRECT_GAP, it causes many of the wealthy to tighten their grip on the money they have earned. This does not help the poor. When we shift to a predictable flat tax rate that is reasonable (at most 35%) the wealthy tend to react in charity beyond their taxes. And they both have a right to choose and choose better than the government.

Of course it is far better if we can keep the rate down to a flat 25%.

Using current figures with a little rounding, that means a family of four with zero income will get $24,000. Not pleasant perhaps, but not in poverty. Earn a dollar and this family’s new net is $24,000.75 and the subsidy is down to $23,999.75. Please notice that the net income always goes up for a dollar earned and the subsidy always goes down (and eventually becomes a net tax.) This is why this plan is sometimes called the “negative income tax.” So, using these current figures, when your earned income reaches $96,000, your subsidy reaches zero. Above that, you actually start paying net taxes, but only at the rate you have been paying all along. Perhaps above $200,000 we might raise the rate a bit if we cannot balance the budget over the business cycle. But even at the 25% rate, someone earning $200,000 will pay $26,000 net in taxes. (Being the $50,000 in taxes on earned income minus the $24,000 baseline).

Oracle IO wait events: db file sequential read

(the details are investigated and specific to Oracle’s database implementation on Linux x86_64)

Exadata IO: This event is not used with Exadata storage, ‘cell single block physical read’ is used instead.
Parameters:
p1: file#
p2: block#
p3: blocks

Despite p3 listing the number of blocks, I have never seen a db file sequential read event read more than one block. Of course this could change in a newer release.
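If you want to check the parameter names (and the wait class) for yourself, they are visible in v$event_name:

```sql
select  name, parameter1, parameter2, parameter3, wait_class
from    v$event_name
where   name = 'db file sequential read';
```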

Implementation:
One of the important things to realise here is that, regardless of the asynchronous IO settings (disk_asynch_io, filesystemio_options), Oracle always uses a pread() system call – synchronous IO – for reading blocks covered by this event. If you consider what the purpose of fetching the single block is in most cases (fetching a single database block whose contents are necessary in order to continue processing), it should become apparent that issuing a synchronous IO call makes sense. This is also the reason the V$IOSTAT* views list both SMALL_READ_REQS, SMALL_SYNC_READ_REQS and SMALL_READ_SERVICETIME, SMALL_SYNC_READ_LATENCY: to make a distinction between sync (pread()) reads and non-sync (thus asynchronous) calls, which use the io_submit()-io_getevents() call combination.
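A quick way to see that distinction is to query v$iostat_file, which carries both the total and the synchronous-only counters:

```sql
-- Compare total small reads with the synchronous (pread()) subset per file type.
select  filetype_name,
        small_read_reqs,
        small_sync_read_reqs,
        small_read_servicetime,
        small_sync_read_latency
from    v$iostat_file
order by small_read_reqs desc;
```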

IO done under the event ‘db file sequential read’ means a single block is read into the buffer cache in the SGA via the system call pread(). Regardless of physical IO speed, this wait always is recorded, in other words: there is a strict relation between the event and the physical IO. Just to be complete: if a block needed is already in the Oracle database buffer cache, no wait event is triggered and the block is read. This is called a logical IO. When the wait event ‘db file sequential read’ is shown, both a physical and a logical IO are executed.

This event means a block was not found in the database buffer cache. It does not mean the block is really read from a physical disk. If direct IO (DIO) is not used, the block could very well be coming from the Linux filesystem cache. (On a filesystem, DIO is off when filesystemio_options is set to ‘none’ or ‘async’; ASM, alias “Oracle managed raw devices”, is inherently direct path IO, except when the ASM “disks” are on a filesystem – as when ASM is used with NFS (!) – in which case filesystemio_options is obeyed.) In fact, without DIO a phenomenon known as ‘double buffering’ takes place: the IO doesn’t go to the visible disk devices directly, but takes a mandatory step in between at the kernel level, which means the data is put in the Linux filesystem cache too. It should be obvious that this extra work comes at the cost of extra CPU cycles, and is in almost any case unnecessary.

If you take a step back you should realise this event should occur for only a limited number of blocks during execution. Because of the inherent single block IO nature of db file sequential read, every physical read (when it needs to read from a physical disk device) pays the IO latency penalty. Even with solid state disk devices, which have an inherently lower latency because there are no rotating parts and disk heads to be moved, chopping an operation up into tiny parts when a full table scan or fast full index scan could be used means a lot of CPU time is spent where the work could be done more efficiently.

The time spent on ‘db file sequential read’ quite accurately times single block IO. This means there should be a direct relationship between ‘db file sequential read’ timings and counts and operating system measured IO statistics (iostat, sar and more).
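An easy cross-check on the database side is the latency histogram for the event, which should line up with the service times reported by iostat or sar:

```sql
select  wait_time_milli, wait_count
from    v$event_histogram
where   event = 'db file sequential read'
order by wait_time_milli;
```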

Tagged: internals, IO, latency, linux, oracle, performance

FARNHAM for US Senate Campaign 4 Hashtags, 3 Key concepts

One of my friends noticed that I am trying to push forward campaign finance reform by trying to win without a TV media budget, and he asked what my hash tag was. DOH! I should have created hash tags long ago. Okay, there is still time to go viral. I’ve registered four hash tag definitions with hashtag.org, and I’ll explain them here, first with just the snapshot definitions I could get to stick without the site complaining (“Stop! You’re posting too fast”), and then with the full story as individual blog posts. Hang in there. This is important.

Oh, and my top priority is establishing the #SAFE_DEAL.

#SAFE_DEAL

A system where the government makes sure all persons legally resident start with a base minimum income of at least the poverty level and get to retain at least 65 cents on each dollar they earn. See rsiz.wordpress.com for the details.

#WRONG_GAP and #CORRECT_GAP

When you hear an airbag politician decry the income gap between the poorest and the richest, they are talking about the wrong_gap. The correct_gap for the government to be concerned about is the gap between the poorest and the poverty level. See rsiz.wordpress.com for more details.

#IMPOVERISH_DRUG_LORDS

The best way to put money and power in the hands of criminals is to make something popular illegal. We should have learned this from the prohibition. We are now 40 years into the “War On Drugs” and we have thus created DRUG LORDS and DRUG LORD CARTELS more powerful than most small countries and powerful enough to destabilize regions of Mexico. If you want to take money and power away from the DRUG LORDS and if you want to end the profitability of marketing drugs so we will have less addiction, make it all legal, no matter how vile the drug. See rsiz.wordpress.com for more details.

WordPress 4.0

WordPress 4.0 “Benny” has been released. The download is in the usual place, but you are probably going to upgrade by clicking the auto-upgrade button.

I did 4 upgrades and all went fine. I can’t say things look that different after it, but you’ve got to keep on top of these upgrades I guess.

Cheers

Tim…

 


WordPress 4.0 was first posted on September 4, 2014 at 10:30 pm.

RESULT_CACHE Hint in 12.1.0.2

Every new Oracle Database patch introduces not only documented features, but also undocumented ones. In this short post I would like to provide information about an enhancement of the RESULT_CACHE hint.

According to the documentation, the RESULT_CACHE hint has no parameters. But, as of 12.1.0.2, at least the SNAPSHOT parameter is supported. Its purpose is to specify how much time (in seconds) after its creation a result cache entry expires. In other words, you can make a specific entry expire even though the objects it is based on don’t change.

The following example illustrates this:

SQL> set time on

18:14:47 SQL> execute dbms_result_cache.flush

18:14:47 SQL> SELECT /*+ result_cache(snapshot=10) */ count(*) FROM scott.emp;

  COUNT(*)
----------
        14

18:14:47 SQL> SELECT status
18:14:47   2  FROM v$result_cache_objects
18:14:47   3  WHERE type = 'Result'
18:14:47   4  AND creator_uid = sys_context('userenv','current_userid');

STATUS
---------
Published

18:14:47 SQL> execute dbms_lock.sleep(10)

18:14:57 SQL> SELECT status
18:14:57   2  FROM v$result_cache_objects
18:14:57   3  WHERE type = 'Result'
18:14:57   4  AND creator_uid = sys_context('userenv','current_userid');

STATUS
---------
Expired

18:14:57 SQL> SELECT status
18:14:57   2  FROM v$result_cache_objects
18:14:57   3  WHERE type = 'Result'
18:14:57   4  AND creator_uid = sys_context('userenv','current_userid');

STATUS
---------
Invalid

Group By Bug

This just in from OTN Database Forum – a surprising little bug with “group by elimination” exclusive to 12c.


alter session set nls_date_format='dd-Mon-yyyy hh24:mi:ss';

select
       /* optimizer_features_enable('12.1.0.1')*/
       trunc (ts,'DD') ts1, sum(fieldb) fieldb
from (
  select
        ts, max(fieldb) fieldb
  from (
  select trunc(sysdate) - 1/24 ts, 1 fieldb from dual
  union all
  select trunc(sysdate) - 2/24 ts, 2 fieldb from dual
  union all
  select trunc(sysdate) - 3/24 ts, 3 fieldb from dual
  union all
  select trunc(sysdate) - 4/24 ts, 4 fieldb from dual
  union all
  select trunc(sysdate) - 5/24 ts, 5 fieldb from dual
  )
  group by ts
)
group by trunc (ts,'DD')
/

You might expect to get one row as the answer – but this is the result I got, with the execution plan pulled from memory:


TS1                      FIELDB
-------------------- ----------
03-Sep-2014 00:00:00          1
03-Sep-2014 00:00:00          5
03-Sep-2014 00:00:00          4
03-Sep-2014 00:00:00          2
03-Sep-2014 00:00:00          3

-------------------------------------------------------------------------
| Id  | Operation        | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT |      |       |       |    11 (100)|          |
|   1 |  HASH GROUP BY   |      |     5 |    60 |    11  (10)| 00:00:01 |
|   2 |   VIEW           |      |     5 |    60 |    10   (0)| 00:00:01 |
|   3 |    UNION-ALL     |      |       |       |            |          |
|   4 |     FAST DUAL    |      |     1 |       |     2   (0)| 00:00:01 |
|   5 |     FAST DUAL    |      |     1 |       |     2   (0)| 00:00:01 |
|   6 |     FAST DUAL    |      |     1 |       |     2   (0)| 00:00:01 |
|   7 |     FAST DUAL    |      |     1 |       |     2   (0)| 00:00:01 |
|   8 |     FAST DUAL    |      |     1 |       |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------

You’ll notice that I’ve got an “optimizer_features_enable()” comment in the code: if I change it into a hint I get the following (correct) result and plan:


TS1                      FIELDB
-------------------- ----------
03-Sep-2014 00:00:00         15

-------------------------------------------------------------------------
| Id  | Operation        | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT |      |       |       |    12 (100)|          |
|   1 |  HASH GROUP BY   |      |     5 |    60 |    12  (17)| 00:00:01 |
|   2 |   VIEW           |      |     5 |    60 |    11  (10)| 00:00:01 |
|   3 |    HASH GROUP BY |      |     5 |    60 |    11  (10)| 00:00:01 |
|   4 |     VIEW         |      |     5 |    60 |    10   (0)| 00:00:01 |
|   5 |      UNION-ALL   |      |       |       |            |          |
|   6 |       FAST DUAL  |      |     1 |       |     2   (0)| 00:00:01 |
|   7 |       FAST DUAL  |      |     1 |       |     2   (0)| 00:00:01 |
|   8 |       FAST DUAL  |      |     1 |       |     2   (0)| 00:00:01 |
|   9 |       FAST DUAL  |      |     1 |       |     2   (0)| 00:00:01 |
|  10 |       FAST DUAL  |      |     1 |       |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------

Somehow 12.1.0.2 has managed to get confused by the combination of “group by ts” and “group by trunc(ts,’DD’)” and has performed “group-by elimination” when it shouldn’t have. If you use the ‘outline’ option for dbms_xplan.display_cursor() you’ll find that the bad result reports the hint elim_groupby(@sel$1), which leads to an alternative solution to hinting the optimizer_features level. Start the code like this:


select
       /*+ qb_name(main) no_elim_groupby(@main) */
       trunc (ts,'DD') ts1, sum(fieldb) fieldb
from  ...

The (no_)elim_groupby is a hint that appeared in v$sql_hints only in 12.1.0.2.
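You can verify that from the view itself:

```sql
select  name, inverse, version
from    v$sql_hints
where   name like '%ELIM_GROUPBY%';
```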

EM12c and Windows OS Failover Cluster – Keep it Simple

This is going to be a “KISS” (Keep It Simple, Silly) post.  I was offered a second Windows cluster to test on when my original one was required for QA.  It was a gracious offer, but I also found out why a cluster should be kept simple, and this cluster was anything but simple.  Many times issues are created that we never see coming until we go through the pain, and EM12c on this cluster has been both “educational” and painful.

The failover cluster is a Windows 2008 server with three Client Access Points (CAPs).  Each CAP has a drive assigned to it.  The drives are 20GB each, so they aren’t TOO large when we are talking about an EM12c environment built with an OMR (Oracle Management Repository) also residing on the same host.

It may get confusing for some folks fast, so I’ll try to build a legend to help everyone keep it all straight.

The cluster is cluster-vip.us.oracle.com.

The nodes in the cluster are node1.us.oracle.com and node2.us.oracle.com.

The CAPs are cluster-vip1.us.oracle.com, cluster-vip2.us.oracle.com and cluster-vip3.us.oracle.com.  Each has one shared drive allocated to it that can fail over:

  • F:\ = cluster-vip1.us.oracle.com
  • G:\ = cluster-vip2.us.oracle.com
  • H:\ = cluster-vip3.us.oracle.com

This should start sounding like an Abbott and Costello skit pretty quick, so I’ll try to explain as much as I can and hope that each reader has some knowledge of Windows clustering… :)

Each failover cluster CAP will appear like this in the cluster manager:

[Image: cluster_f1]

Due to the size of each shared volume, we’re going to install the database on cluster-vip3 (H:\ volume) and the EM12c console and repository on cluster-vip2 (G:\).  Cluster-vip1 (F:\) will be left available for database growth.

As our first step for installing on a Windows cluster, we need to set the %ORACLE_HOSTNAME% (or, for Unix/Linux, $ORACLE_HOSTNAME) environment variable.  For the database, it’s going to be cluster-vip3.us.oracle.com, as that is the hostname associated with the volume (H:\) we wish to install on.  (Anyone already seeing the dark road we are starting down? :) )
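For illustration, here is a minimal sketch of setting the variable.  On Windows, setx (run as Administrator) persists it machine-wide so the installer picks it up after a restart; the export form is the Unix/Linux equivalent for the current session.  The hostname is the CAP name from above.

```shell
# Windows (cmd, as Administrator) - persists the variable machine-wide:
#   setx ORACLE_HOSTNAME cluster-vip3.us.oracle.com /M
# Unix/Linux equivalent, for the current session only:
export ORACLE_HOSTNAME=cluster-vip3.us.oracle.com
echo "$ORACLE_HOSTNAME"
```

Note that setx does not affect the current command prompt; you need a new session (or, in this cluster scenario, a node restart) before the value is visible.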

After the installation of the Oracle binaries (software) and the database is complete, you need to verify that the listener is configured correctly and that the database can be connected to via TNS.  (“sqlplus / as sysdba” won’t cut it; EM12c must be able to connect with “sys/system/sysman@”.)

This will most likely require a manual edit of the tnsnames.ora file to replace the hostname in the connection string with the IP Address:

EMREP =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = <IP address of cluster-vip3>)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = EMREP)
    )
  )

LISTENER_EMREP =
  (ADDRESS = (PROTOCOL = TCP)(HOST = <IP address of cluster-vip3>)(PORT = 1521))

Now you might say, “But they are all on the same host!”  The cluster, however, doesn’t see it that way, due to the way the CAPs have been configured.  Each drive belongs to a different CAP and must be identified by its own hostname or IP address.

Now here’s where the fun starts: you need to set the %ORACLE_HOSTNAME% environment variable to cluster-vip2.us.oracle.com for the EM12c installation.  Remember, our EM12c is going to reside on the G:\ volume of shared storage.

So we now set our environment variable, restart the cluster, bring the cluster services all back online on node1, bring up the database and listener, and then start the EM12c installation.  The installer will auto-populate the node1 name, but you need to change it to the %ORACLE_HOSTNAME% environment variable, which will now be set to cluster-vip2.us.oracle.com:

[Image: cluster_f5]

It will verify the new ORACLE_HOSTNAME (unless it matches the %ORACLE_HOSTNAME% environment variable, it will not succeed!), and the installation then asks for information about the database you want to use for your repository, which is actually identified as being on a different VIP:

[Image: cluster_f6]

There is a common step that will ask you to uninstall the dbcontrol schema so the EM12c SYSMAN schema can be installed, even though you may have created your repository database (OMR) without dbcontrol.

The command looks like this:

/bin/emca -deconfig dbcontrol db -repos drop -SYS_PWD -SYSMAN_PWD
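As a rough template (the ORACLE_HOME prefix and the password values, elided above, are placeholders you would supply yourself), the command takes this shape:

```
REM <ORACLE_HOME> and the two password values are placeholders, not literal values
<ORACLE_HOME>\bin\emca -deconfig dbcontrol db -repos drop -SYS_PWD <sys_password> -SYSMAN_PWD <sysman_password>
```

If you omit the password arguments, emca will prompt for them interactively.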

If you attempt to run this with ORACLE_HOSTNAME set to the correct address for the G:\ drive (cluster-vip2.us.oracle.com), while the database is installed on the H:\ drive (cluster-vip3.us.oracle.com), the following error will be seen:

[Image: cluster_f2]

It will act as if it has run the removal, but if you log back in, the SYSMAN schema is still intact and no removal has been performed.  The trick here is to copy or save off the command so it can be run again from the command line, and to cancel the EM12c installation.

Why?  This step has to be performed with %ORACLE_HOSTNAME% set back to the database VIP (cluster-vip3.us.oracle.com) instead of the EM12c VIP (cluster-vip2.us.oracle.com).

[Image: cluster_f3]

There are a number of steps to complete this:

  1. Cycle node1 of the cluster to set the environment variable.
  2. Bring the cluster services that would have failed over to node2 back to node1.
  3. Bring the database and listener services back up.
  4. Start a command prompt and run the command to remove dbcontrol.
  5. Set the environment variable back to cluster-vip2 for the EM12c installation.
  6. Cycle node1 to put the environment variable back in place.
  7. Bring the cluster services back over to node1 (there are three services, one for each drive, not just one with this configuration).
  8. Bring the database and listener back up (again!).
  9. Restart the EM12c installation.

Yeah, I wasn’t too thrilled, either!  I was now able to run through the installation without being hindered by the dbcontrol step that is automatically hit, even though I chose NOT to install dbcontrol when I created the database… :)  The rest of the installation to node2 can be completed and failover testing can begin!

So, the moral of the story: don’t create three VIPs for three shared storage drives on one failover cluster.  It’s just going to annoy you, your installation, and even more so me, if you come and ask me for assistance… :)

[Image: cluster_f4]

Copyright © DBA Kevlar [EM12c and Windows OS Failover Cluster- Keep it Simple], All Right Reserved. 2014.