Oakies Blog Aggregator

Oracle Database 18c and 19c on Fedora 31


Fedora 31 was released a couple of weeks ago and I’ve done some Oracle installations on it.

Just a warning to explain why this is a bad idea for anything real.

With that out of the way, here are the articles.

Not much has changed since Fedora 30 as far as these installations go.

I’ve pushed some stuff up to GitHub for Fedora 31.

So now you know how to do it, please don’t.

Announcing Obfuscated Column Data Loading with SLOB

On November 18, 2019 I announced availability of SLOB 2.5.2. It is primarily a small bug fix release but also has an undocumented new feature. I want to say a few words about the feature in this post and will update SLOB with full support (e.g., documentation and slob.conf parameter support) in SLOB 2.5.3.  Some SLOB users might want this new undocumented feature as soon as possible–thus this blog post.

The new feature allows SLOB users to load unique data in SLOB tables (single table in Single Schema Model and multiple tables in the Multiple Schema Model).  Default SLOB data consists of simple, repeated character strings because column data is not really that important in the SLOB Method. However, some users find that their storage compression features (e.g., enterprise-class storage arrays such as those by Dell/EMC and Netapp) achieve artificially high compression ratios with default SLOB data. For certain testing purposes, it is less desirable to test with artificially high compression ratios. Starting with SLOB 2.5.2, users can avoid this level of compression by setting OBFUSCATE_COLUMNS to TRUE in the shell environment prior to executing the SLOB data loading script (setup.sh). Again, this feature will be documented in SLOB 2.5.3.

Overview of Loading Obfuscated Data

The following is a set of screen shots that will help me explain the OBFUSCATE_COLUMNS feature. The best way to describe the feature is to show it in use. The runtime of SLOB does not change at all. The change only affects data loading. In Figure 1 I show how one can set the OBFUSCATE_COLUMNS shell environment variable to TRUE and load data as normal with the setup.sh script.


Figure 1
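
Figure 1 is a screen shot, so here is a minimal sketch of the equivalent commands, assuming a default SLOB working directory; the tablespace name (IOPS) and the schema count are placeholders for whatever you normally pass to setup.sh.

# Sketch only: switch on the undocumented obfuscation feature, then load as normal.
# IOPS and the schema count of 1 are placeholders for your own tablespace name
# and Single/Multiple Schema Model settings.
cd ~/SLOB                       # assumed SLOB home directory
export OBFUSCATE_COLUMNS=TRUE   # the new, undocumented switch in SLOB 2.5.2
sh ./setup.sh IOPS 1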

In Figure 2 I show how the data loader filled the Single Schema Model table with two distinct strings. As explained in the documentation, the slob.conf->LOAD_PARALLEL_DEGREE parameter controls how many concurrent data loading streams will be used to load the data. In Figure 2 I had the default number of concurrent data loading streams. With the OBFUSCATE_COLUMNS feature, each data loading stream will load uniquely generated obfuscated strings. To show a sample of the data, I randomly chose the 13th ordinal column and queried distinct values. As explained in the documentation, the SLOB Method loads only a single row per block and each row has 19 128-character VARCHAR2 columns. Obfuscated data is loaded into each column. Note: each block loaded by a data loading stream will have the same block row content–so your compression ratio might vary based on the chunk size the compressor operates on.


Figure 2
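
The query behind Figure 2 isn’t reproduced as text in this feed, so here is a sketch of the idea: find the name of the 13th ordinal column and count its distinct values. The user1/user1 credentials and the CF1 table name are assumptions based on SLOB defaults and may not match your kit.

# Sketch only: sample the obfuscated data in the Single Schema Model table.
# user1/user1 and CF1 are assumed SLOB defaults -- adjust for your configuration.
sqlplus -s user1/user1 <<'EOF'
column column_name new_value col13
select column_name
from   user_tab_columns
where  table_name = 'CF1'
and    column_id  = 13;

select count(distinct &col13) distinct_values from cf1;
EOF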

In Figure 3 I show how setting slob.conf->LOAD_PARALLEL_DEGREE to 64 results in 64 distinct obfuscated strings being loaded.


Figure 3
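
And to reproduce the Figure 3 case, raise the loader concurrency before re-running the load; this sketch assumes slob.conf keeps its usual NAME=value shell format and, again, the tablespace name is a placeholder.

# Sketch only: 64 loader streams should give 64 distinct obfuscated strings.
sed -i 's/^LOAD_PARALLEL_DEGREE=.*/LOAD_PARALLEL_DEGREE=64/' slob.conf
export OBFUSCATE_COLUMNS=TRUE
sh ./setup.sh IOPS 1            # IOPS is a placeholder tablespace name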

Summary

This post offers a preview of an undocumented SLOB feature. Maybe some users will post their experiences with before and after compression levels and how that changes their testing.

Oracle Security Training Manuals for Sale

We have one set of Manuals for the recent training we held here in York and one from 2018. These can be bought as individual books as follows: This manual is from the York class in October 2019 and can....[Read More]

Posted by Pete On 19/11/19 At 03:05 PM

Exadata storage indexes

We had a question on AskTOM inquiring about how to handle the issue of only 8 storage indexes being possible on an Exadata engineered system. If you are unfamiliar with what a storage index is, they are part of the suite of features often referred to as the “secret sauce” that can improve query performance on Exadata systems by holding more metadata about the data that is stored on disk. You can get an introduction to the concept of storage indexes here.

Well, the good news is that the limit of 8 has been lifted to 24 in recent versions of the Exadata software. When I replied to the questioner, they demanded proof :), which I was glad to see (even though it means more work for me), because one of my philosophies for AskTOM is that people should never just trust what they read on a forum (AskTOM or otherwise).

There is no dictionary view or performance view that is a direct path into the storage indexes, that is, you cannot see what is “in” them at any given point in time. However, there is a statistic “cell physical IO bytes saved by storage index” which gives cumulative information about how useful a storage index was, so whenever that statistic increments after a query execution, we know that some savings were made.

To keep the source code down to a manageable size, I’ll do the proof with just 12 columns, but you can easily ramp this up to 24 columns and you will see the same results. First I’ll create a table with 12 columns of nicely clustered data, which will make them obvious candidates for storage indexes as queries occur on them.



SQL> create table t as
  2  select
  3    trunc(rownum/1000) c1,
  4    trunc(rownum/1000) c2,
  5    trunc(rownum/1000) c3,
  6    trunc(rownum/1000) c4,
  7    trunc(rownum/1000) c5,
  8    trunc(rownum/1000) c6,
  9    trunc(rownum/1000) c7,
 10    trunc(rownum/1000) c8,
 11    trunc(rownum/1000) c9,
 12    trunc(rownum/1000) c10,
 13    trunc(rownum/1000) c11,
 14    trunc(rownum/1000) c12,
 15    rpad(rownum,500) data
 16  from
 17    ( select 1 from dual connect by level < 5000 ),
 18    ( select 1 from dual connect by level < 5000 );

Table created.

With an initial query,  you can see that there are no storage index benefits in play yet.



SQL> select name, value
  2  from v$mystat s, v$statname n
  3  where s.statistic# = n.statistic#
  4  and name like '%storage index%';

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
cell physical IO bytes saved by storage index                             0
cell XT granule IO bytes saved by storage index                           0

So to ensure that my storage indexes are working at all,  I’ll run a series of queries against column C1 and check before and after values for the “cell physical IO bytes saved by storage index” statistic.



SQL> declare
  2    x varchar2(1000);
  3  begin
  4  for i in 1 .. 20
  5  loop
  6    execute immediate 'select max(data) from t where c1 < '||(i*10+1000) into x;
  7  end loop;
  8  end;
  9  /

PL/SQL procedure successfully completed.

SQL> select name, value
  2  from v$mystat s, v$statname n
  3  where s.statistic# = n.statistic#
  4  and name like '%storage index%';

NAME                                                                        VALUE
---------------------------------------------------------------- ----------------
cell physical IO bytes saved by storage index                        316601630720
cell XT granule IO bytes saved by storage index                                 0

At this point we know that storage indexes are available and in use on this database. Because the creation and usage of storage indexes is not in the hands of the developer, but a decision made by the Exadata software layer, I’ll now throw a selection of queries at the database on columns C1 to C12 with varying ranges to “let the database know” that these columns are important to me, and hence that storage indexes are going to be of benefit.



SQL> declare
  2    x varchar2(1000);
  3  begin
  4  for c in 1 .. 12 loop
  5    for i in 1 .. 20
  6    loop
  7      execute immediate 'select max(data) from t where c'||c||' < '||(i*10+1000) into x;
  8    end loop;
  9  end loop;
 10  end;
 11  /

PL/SQL procedure successfully completed.

SQL>
SQL> select name, value
  2  from v$mystat s, v$statname n
  3  where s.statistic# = n.statistic#
  4  and name like '%storage index%';

NAME                                                                           VALUE
---------------------------------------------------------------- -------------------
cell physical IO bytes saved by storage index                          3950481989632
cell XT granule IO bytes saved by storage index                                    0

Now all is in readiness to prove that we have storage indexes for all 12 columns. I can now issue a simple query against each of the 12 columns C1 to C12, and each time I would expect to see the “cell physical IO bytes saved by storage index” statistic increment. Once I have got to column C12, I’ll loop back around to C1 just to guard against the scenario that perhaps storage indexes are being thrown away as I move to other column queries.



SQL> select max(data) from t where c1 < 2000;

MAX(DATA)
-------------------------------
999999

SQL> select name, value
  2  from v$mystat s, v$statname n
  3  where s.statistic# = n.statistic#
  4  and name like '%storage index%';

NAME                                                                           VALUE
---------------------------------------------------------------- -------------------
cell physical IO bytes saved by storage index                          3965752631296
cell XT granule IO bytes saved by storage index                                    0

SQL>
SQL> select max(data) from t where c2 < 2000;

MAX(DATA)
-------------------------------
999999

SQL>
SQL> select name, value
  2  from v$mystat s, v$statname n
  3  where s.statistic# = n.statistic#
  4  and name like '%storage index%';

NAME                                                                           VALUE
---------------------------------------------------------------- -------------------
cell physical IO bytes saved by storage index                          3981023272960
cell XT granule IO bytes saved by storage index                                    0

SQL>
SQL> select max(data) from t where c3 < 2000;

MAX(DATA)
-------------------------------
999999

SQL>
SQL> select name, value
  2  from v$mystat s, v$statname n
  3  where s.statistic# = n.statistic#
  4  and name like '%storage index%';

NAME                                                                           VALUE
---------------------------------------------------------------- -------------------
cell physical IO bytes saved by storage index                          3996293914624
cell XT granule IO bytes saved by storage index                                    0

SQL>
SQL> select max(data) from t where c4 < 2000;

MAX(DATA)
-------------------------------
999999

SQL>
SQL> select name, value
  2  from v$mystat s, v$statname n
  3  where s.statistic# = n.statistic#
  4  and name like '%storage index%';

NAME                                                                           VALUE
---------------------------------------------------------------- -------------------
cell physical IO bytes saved by storage index                          4011564556288
cell XT granule IO bytes saved by storage index                                    0

SQL>
SQL> select max(data) from t where c5 < 2000;

MAX(DATA)
-------------------------------
999999

SQL>
SQL> select name, value
  2  from v$mystat s, v$statname n
  3  where s.statistic# = n.statistic#
  4  and name like '%storage index%';

NAME                                                                           VALUE
---------------------------------------------------------------- -------------------
cell physical IO bytes saved by storage index                          4026835197952
cell XT granule IO bytes saved by storage index                                    0

SQL>
SQL> select max(data) from t where c6 < 2000;

MAX(DATA)
-------------------------------
999999

SQL>
SQL> select name, value
  2  from v$mystat s, v$statname n
  3  where s.statistic# = n.statistic#
  4  and name like '%storage index%';

NAME                                                                           VALUE
---------------------------------------------------------------- -------------------
cell physical IO bytes saved by storage index                          4042105839616
cell XT granule IO bytes saved by storage index                                    0

SQL>
SQL> select max(data) from t where c7 < 2000;

MAX(DATA)
-------------------------------
999999

SQL>
SQL> select name, value
  2  from v$mystat s, v$statname n
  3  where s.statistic# = n.statistic#
  4  and name like '%storage index%';

NAME                                                                           VALUE
---------------------------------------------------------------- -------------------
cell physical IO bytes saved by storage index                          4057376481280
cell XT granule IO bytes saved by storage index                                    0

SQL>
SQL> select max(data) from t where c8 < 2000;

MAX(DATA)
-------------------------------
999999

SQL>
SQL> select name, value
  2  from v$mystat s, v$statname n
  3  where s.statistic# = n.statistic#
  4  and name like '%storage index%';

NAME                                                                           VALUE
---------------------------------------------------------------- -------------------
cell physical IO bytes saved by storage index                          4072647122944
cell XT granule IO bytes saved by storage index                                    0

SQL>
SQL> select max(data) from t where c9 < 2000;

MAX(DATA)
-------------------------------
999999


SQL>
SQL> select name, value
  2  from v$mystat s, v$statname n
  3  where s.statistic# = n.statistic#
  4  and name like '%storage index%';

NAME                                                                           VALUE
---------------------------------------------------------------- -------------------
cell physical IO bytes saved by storage index                          4087917764608
cell XT granule IO bytes saved by storage index                                    0

SQL>
SQL> select max(data) from t where c10 < 2000;

MAX(DATA)
-------------------------------
999999

SQL>
SQL> select name, value
  2  from v$mystat s, v$statname n
  3  where s.statistic# = n.statistic#
  4  and name like '%storage index%';

NAME                                                                           VALUE
---------------------------------------------------------------- -------------------
cell physical IO bytes saved by storage index                          4103188406272
cell XT granule IO bytes saved by storage index                                    0

SQL>
SQL> select max(data) from t where c11 < 2000;

MAX(DATA)
-------------------------------
999999

SQL>
SQL> select name, value
  2  from v$mystat s, v$statname n
  3  where s.statistic# = n.statistic#
  4  and name like '%storage index%';

NAME                                                                           VALUE
---------------------------------------------------------------- -------------------
cell physical IO bytes saved by storage index                          4118459047936
cell XT granule IO bytes saved by storage index                                    0

SQL>
SQL> select max(data) from t where c12 < 2000;

MAX(DATA)
-------------------------------
999999

SQL>
SQL> select name, value
  2  from v$mystat s, v$statname n
  3  where s.statistic# = n.statistic#
  4  and name like '%storage index%';

NAME                                                                           VALUE
---------------------------------------------------------------- -------------------
cell physical IO bytes saved by storage index                          4133729689600
cell XT granule IO bytes saved by storage index                                    0

SQL>
SQL> select max(data) from t where c1 < 2000;

MAX(DATA)
-------------------------------
999999

SQL>
SQL> select name, value
  2  from v$mystat s, v$statname n
  3  where s.statistic# = n.statistic#
  4  and name like '%storage index%';

NAME                                                                           VALUE
---------------------------------------------------------------- -------------------
cell physical IO bytes saved by storage index                          4149000331264
cell XT granule IO bytes saved by storage index                                    0

As you can see, I got storage index benefit from all 12 queries, thus proving that the limit of 8 storage indexes has indeed been lifted. It now goes up to 24 columns… trust me :)

Announcing SLOB 2.5.2

SLOB 2.5.2 is available for download from Github. Please visit the SLOB Resources Page for the download URLs.

SLOB 2.5.2 is a bug fix release.

Controlling The Presentation Monster (Preparing to Present)

As I covered before, nerves before a presentation are not a bad thing. In fact, many excellent presenters/performers recognise that those butterflies of anxiety are necessary to make your presentation really sing. But you need to control the Presentation Monster. You want to take it down from lion-taming to annoyed-but-fundamentally-not-evil-cat-training.


Presentation Monster Gonna Get You

Embrace the Emotion

As the linked-to post above describes, nerves before a performance (and presenting is a kind of performance) are normal. So the first thing to do is accept not only that you will be nervous/anxious/really wanting the toilet very badly, but that if you weren’t, your talk would probably be poor.

Just accepting that, and knowing that the people you see presenting apparently in an island of calm are mostly faking it, helps. If they can fake it, so can you. Some of the below will help you turn down the anxiety dial or, if there is a need, even turn it up a little to get you buzzing.

Practice, practice…. practice.

I know it sounds obvious, but this is so true. You need to run through your presentation several times and in the right way. And people often don’t do it well.

When I prepare a new presentation, once it is written, I want to run through it from start to finish, in real time, 3 times. This is where most people go wrong and they make one of the following mistakes:

  • They will spot a problem on a slide, for example some text is garbled or an image is too small, and they stop to fix it, then continue the practice session. Well, you just stopped the flow of it all and any timings you take will be broken. Don’t do this – if you are still correcting big mistakes then your talk is not ready for the practising step; small mistakes you can go back to once you finish.
  • As each slide flicks up they go “yeah, OK, I know what I am going to say” – and move on. Don’t. Say it. Imagine the audience, talk to them, include any anecdotes or side points you plan (or suddenly think of), and speak slowly. It is way better to be faster for the real thing than slower as most presentations lead up to a Big Point or the Most Interesting Bits at the end, and if you run out of time…
  • They never time it. How do you know this takes 45 minutes unless you have done it in 45 minutes?

Practice any anecdotes or jokes. Ideally you want them to sound spontaneous but, like sincerity, you can fake spontaneity.

Video : Oracle REST Data Services (ORDS) : OAuth Authorization Code

In today’s video we look at the OAuth Authorization Code flow for Oracle REST Data Services.

This goes together with a previous video about first-party authentication here.

Both videos are based on parts of this article.

There are loads of other ORDS articles here.

The star of today’s video is Atul Kumar, who has done a bunch of videos on his YouTube channel.

Cheers

Tim…



Parse Time

This is a note I started drafting in October 2012. It’s a case study from an optimizer (10053) trace file someone emailed to me, and it describes some of the high-level steps I went through to see if I could pinpoint what the optimizer was doing that fooled it into spending a huge amount of time optimising a statement that ultimately executed very quickly.

Unfortunately I never finished my notes and I can no longer find the trace file that the article was based on, so I don’t really know what I was planning to say to complete the last observation I had recorded.

I was prompted a couple of days ago to publish the notes so far because I was reminded in a conversation with members of the Oak Table Network about an article that Franck Pachot wrote a couple of years ago. In 12c Oracle Corp. introduced a time-reporting mechanism for the optimizer trace. If some optimisation step takes “too long” (1 second, by default) then the optimizer will write a “TIMER:” line into the trace file telling you what the operation was, how long it took to complete and how much CPU time it used. The default for “too long” can be adjusted by setting a “fix control”. This makes it a lot easier to find out where the time went if you see a very long parse time.

But let’s get back to the original trace file and drafted blog note. It started with a question on OTN and an extract from a tkprof output to back up a nasty  performance issue.

=============================================================================================

 

What do you do about a parse time of 46 seconds ? That was the question that came up on OTN a few days ago – and here’s the tkprof output to demonstrate it.

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1     46.27      46.53          0          5          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.33       0.63        129      30331          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4     46.60      47.17        129      30336          0           1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 144  
Number of plan statistics captured: 1
 
Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT AGGREGATE (cr=30331 pr=129 pw=0 time=637272 us)
       863        863        863   VIEW  VM_NWVW_1 (cr=30331 pr=129 pw=0 time=637378 us cost=1331 size=10 card=1)
       ... and lots more lines of plan

According to tkprof, it takes 46 seconds – virtually all CPU time – to optimise this statement, then 0.63 seconds to run it. You might spot that this is 11gR2 (in fact it’s 11.2.0.3) from the fact that the second line of the “Row Source Operation” includes a report of the estimated cost of the query, which is only 1,331.

Things were actually worse than they seem at first sight; when we saw more of the tkprof output the following also showed up:

SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE 
  NO_PARALLEL(SAMPLESUB) opt_param('parallel_execution_enabled', 'false') 
  NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),:"SYS_B_00"), 
  NVL(SUM(C2),:"SYS_B_01") 
FROM
 (SELECT /*+ IGNORE_WHERE_CLAUSE NO_PARALLEL("VAL_000002") FULL("VAL_000002") 
  NO_PARALLEL_INDEX("VAL_000002") */ :"SYS_B_02" AS C1, 
  CASE WHEN
    ...
  END AS C2 FROM "BISWEBB"."RECORDTEXTVALUE" 
  SAMPLE BLOCK (:"SYS_B_21" , :"SYS_B_22") SEED (:"SYS_B_23") "VAL_000002" 
  WHERE ... 
 ) SAMPLESUB
 
 
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        5      0.00       0.00          0          0          0           0
Execute      5      0.00       0.00          0          0          0           0
Fetch        5     21.41      24.14      11108      37331          0           5
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       15     21.41      24.15      11108      37331          0           5
 
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 144     (recursive depth: 1)
Number of plan statistics captured: 3
 
Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT AGGREGATE (cr=7466 pr=3703 pw=0 time=5230126 us)
   3137126    3137126    3137126   PARTITION HASH ALL PARTITION: 1 128 (cr=7466 pr=3703 pw=0 time=2547843 us cost=18758 size=131597088 card=3133264)
   3137126    3137126    3137126    TABLE ACCESS SAMPLE RECORDTEXTVALUE PARTITION: 1 128 (cr=7466 pr=3703 pw=0 time=2372509 us cost=18758 size=131597088 card=3133264)

This piece of SQL executed five times as the query was optimised, adding a further 24 seconds elapsed time and 21 CPU seconds which, surprisingly, weren’t included in the headline 46 seconds. The total time spent in optimising the statement was around 70 seconds, of which about 68 seconds were spent on (or waiting for) the CPU.

This is unusual – I don’t often see SQL statements taking more than a few seconds to parse – not since 8i, and not without complex partition views – and I certainly don’t expect to see a low cost query in 11.2.0.3 taking anything like 70 (or even 46) seconds to optimise.

The OP had enabled the 10046 and the 10053 traces at the same time – and since the parse time was sufficiently unusual I asked him to email me the raw trace file – all 200MB of it.

Since it’s not easy to process 200MB of trace the first thing to do is extract a few headline details, and I thought you might be interested to hear about some of the methods I use on the rare occasions when I decide to look at a 10053.

My aim is to investigate a very long parse time and the tkprof output had already shown me that there were a lot of tables in the query, so I had the feeling that the problem would relate to the amount of work done testing possible join orders; I’ve also noticed that the dynamic sampling code ran five times – so I’m expecting to see some critical stage of the optimisation run 5 times (although I don’t know why it should).

Step 1: Use grep (or find if you’re on Windows) to do a quick check for the number of join orders considered. I’m just searching for the text “Join order[” appearing at the start of a line and then counting how many times I find it:

[jonathan@linux01 big_trace]$ grep "^Join order\[" orcl_ora_25306.trc  | wc -l
6266

That’s 6,266 join orders considered – let’s take a slightly closer look:

[jonathan@linux01 big_trace]$ grep -n "^Join order\[" orcl_ora_25306.trc >temp.txt
[jonathan@linux01 big_trace]$ tail -2 temp.txt
4458394:Join order[581]:  RECORDTYPEMEMBER[RTM]#9  RECORD_[VAL_000049]#13  ...... from$_subquery$_008[TBL_000020]#2
4458825:Join order[1]:  VM_NWVW_1[VM_NWVW_1]#0

The line of dots represents another 11 tables (or similar objects) in the join order. But there are only 581 join orders (apparently) before the last one in the file (which is a single view transformation). I’ve used the “-n” option with grep, so if I wanted to look at the right bit of the file I could tail the last few thousand lines, but my machine is happy to use vi on a 200MB file, and a quick search (backwards) through the file finds the number 581 in the following text (which does not appear in all versions of the trace file):

Number of join permutations tried: 581

So a quick grep for “join permutations” might be a good idea. (In the absence of this line I’d have got to the same result by directing the earlier grep for “^Join order\[“ to a file and playing around with the contents of the file.)

[jonathan@linux01 big_trace]$ grep -n "join permutations" orcl_ora_25306.trc
11495:Number of join permutations tried: 2
11849:Number of join permutations tried: 1
12439:Number of join permutations tried: 2
13826:Number of join permutations tried: 2
14180:Number of join permutations tried: 1
14552:Number of join permutations tried: 2
15938:Number of join permutations tried: 2
16292:Number of join permutations tried: 1
16665:Number of join permutations tried: 2
18141:Number of join permutations tried: 2
18550:Number of join permutations tried: 2
18959:Number of join permutations tried: 2
622799:Number of join permutations tried: 374
624183:Number of join permutations tried: 2
624592:Number of join permutations tried: 2
624919:Number of join permutations tried: 1
625211:Number of join permutations tried: 2
1759817:Number of join permutations tried: 673
1760302:Number of join permutations tried: 1
1760593:Number of join permutations tried: 2
1760910:Number of join permutations tried: 1
1761202:Number of join permutations tried: 2
2750475:Number of join permutations tried: 674
2751325:Number of join permutations tried: 2
2751642:Number of join permutations tried: 1
2751933:Number of join permutations tried: 2
2752250:Number of join permutations tried: 1
2752542:Number of join permutations tried: 2
3586276:Number of join permutations tried: 571
3587133:Number of join permutations tried: 2
3587461:Number of join permutations tried: 1
3587755:Number of join permutations tried: 2
3588079:Number of join permutations tried: 1
3588374:Number of join permutations tried: 2
4458608:Number of join permutations tried: 581
4458832:Number of join permutations tried: 1

The key thing we see here is that there are five sections of long searches, and a few very small searches. Examination of the small search lists shows that they relate to some inline views which simply join a couple of tables. For each of the long searches we can see that the first join order in each set is for 14 “tables”. This is where the work is going. But if you add up the number of permutations in the long searches you get a total of 2,873, which is a long way off the 6,266 that we found with our grep for “^Join order[“ – so where do the extra join orders come from ? Let’s take a closer look at the file where we dumped all the Join order lines – the last 10 lines look like this:

4452004:Join order[577]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4452086:Join order[577]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4453254:Join order[578]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4453382:Join order[578]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4454573:Join order[579]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4454655:Join order[579]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4455823:Join order[580]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4455905:Join order[580]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4457051:Join order[581]:  RECORDTYPEMEMBER[RTM]#9  RECORD_[VAL_000049]#13  ...
4458394:Join order[581]:  RECORDTYPEMEMBER[RTM]#9  RECORD_[VAL_000049]#13  ...
4458825:Join order[1]:  VM_NWVW_1[VM_NWVW_1]#0

Every single join order seems to have appeared twice, and doubling the counts we got for the sum of the permutations gets us close to the total we got for the join order search. Again, we could zoom in a little closer, does the text near the start of the two occurrences of join order 581 give us any clues ? We see the following just before the second one:

****** Recost for ORDER BY (using join row order) *******

The optimizer has tried to find a way of eliminating some of the cost by letting the table join order affect the order of the final output. Let’s do another grep to see how many join orders have been recosted:

[jonathan@linux01 big_trace]$ grep "Recost for ORDER BY" orcl_ora_25306.trc | sort | uniq -c
    452 ****** Recost for ORDER BY (using index) ************
   2896 ****** Recost for ORDER BY (using join row order) *******

So we’ve done a huge amount of recosting. Let’s check the arithmetic: 452 + 2,896 + 2,873 = 6,221, which is remarkably close to the 6,266 we needed (and we have ignored a few dozen join orders that were needed for the inline views, and the final error is too small for me to worry about).

We can conclude, therefore, that we did a huge amount of work costing a 14 table join a little over 6,000 times. It’s possible, of course, that we discarded lots of join orders very early on in the cost stage, so we could count the number of times we see a “Now joining” message – to complete a single pass on a 14 table join the optimizer will have to report “Now joining” 13 times.

[jonathan@linux01 big_trace]$ grep -n "Now joining" orcl_ora_25306.trc | wc -l
43989

Since the message appeared 44,000 times from 6,200 join orders we have an average of 7 steps evaluated per join order. Because of the way that the optimizer takes short-cuts I think this is a fairly strong clue that most of the join order calculations actually completed, or get very close to completing, over the whole 14 tables. (The optimizer remembers “partial results” from previous join order calculations, so doesn’t have to do 13 “Now joining” steps on every single join order.)

We still need to know why the optimizer tried so hard before supplying a plan – so let’s look for the “Best so far” lines, which the trace file reports each time the optimizer finds a better plan than the previous best. Here’s an example of what we’re looking for:

       Cost: 206984.61  Degree: 1  Resp: 206984.61  Card: 0.00 Bytes: 632
***********************
Best so far:  Table#: 0  cost: 56.9744  card: 1.0000  bytes: 30
              Table#: 3  cost: 59.9853  card: 0.0000  bytes: 83
              Table#: 6  cost: 60.9869  card: 0.0000  bytes: 151
              Table#:10  cost: 61.9909  card: 0.0000  bytes: 185
              Table#: 5  cost: 62.9928  card: 0.0000  bytes: 253
              Table#: 2  cost: 65.0004  card: 0.0000  bytes: 306
              Table#: 1  cost: 122.4741  card: 0.0000  bytes: 336
              Table#: 8  cost: 123.4760  card: 0.0000  bytes: 387
              Table#: 4  cost: 125.4836  card: 0.0000  bytes: 440
              Table#: 7  cost: 343.2625  card: 0.0000  bytes: 470
              Table#: 9  cost: 345.2659  card: 0.0000  bytes: 530
              Table#:11  cost: 206981.5979  card: 0.0000  bytes: 564
              Table#:12  cost: 206982.6017  card: 0.0000  bytes: 598
              Table#:13  cost: 206984.6055  card: 0.0000  bytes: 632
***********************

As you can see, we get a list of the tables (identified by their position in the first join order examined) with details of accumulated cost. But just above this tabular display there’s a repeat of the cost that we end up with. So let’s write, and apply, a little awk script to find all the “Best so far” lines and then print the line two above. Here’s a suitable script, followed by a call to use it:

{
        # m2 holds the line from two passes ago, so when the current line is a
        # "Best so far" line we print its line number followed by the "Cost:"
        # line that appeared two lines above it in the trace file.
        if (index($0,"Best so far") != 0) {print NR m2}
        m2 = m1; m1 = $0;
}

awk -f cost.awk orcl_ora_25306.trc >temp.txt

There was a bit of a mess in the output – there are a couple of special cases (relating, in our trace file, to the inline views and the appearance of a “group by placement”) that cause irregular patterns to appear, but the script was effective for the critical 14 table join. And looking through the list of costs for the various permutations we find that almost all the options show a cost of about 206,000 – except for the last few in two of the five “permutation sets” that suddenly drop to costs of around 1,500 and 1,300. The very high starting cost explains why the optimizer was prepared to spend so much time trying to find a good path and why it kept working so hard until the cost dropped very sharply.

Side bar: I have an old note from OCIS (the precursor, or the precursor of the precursor, of MOS) that the optimizer will stop searching when the number of join orders tested * the number of “non-single-row” tables (according to the single table access path) * 0.3 is greater than the best cost so far.  I even have a test script (run against 8.1.7.4, dated September 2002) that seems to demonstrate the formula.  The formula may be terribly out of date by now and the rules of exactly how and when it applies may have changed – the model didn’t seem to work when I ran it against 19.3 – but the principle probably still holds true.

At this point we might decide that we ought to look at the initial join order and at the join order where the cost dropped dramatically, and try to work out why Oracle picked such a bad starting join order, and what it was about the better join order that the optimizer had missed. This might allow us to recognise some error in the statistics for either the “bad” starting order or the “good” starting order and allow us to solve the problem by (e.g.) creating a column group or gathering some specific statistics. We might simply decide that we’ll take a good join order and pass it to the optimizer through a /*+ leading() */ hint, or simply take the entire outline and attach it to the query through a faked SQL Profile (or embedded set of hints).

However, for the purposes of this exercise (and because sometimes you have to find a strategic solution rather than a “single statement” solution) I’m going to carry on working through mechanisms for dissecting the trace file without looking too closely at any of the fine detail.

The final “high-level” target I picked was to pin down why there were 5 sets of join orders. I had noticed something particular about the execution plan supplied – it showed several occurrences of the operation “VIEW PUSHED PREDICATE” so I wondered if this might be relevant. So I did a quick check near the start of the main body of the trace file for anything that might be a clue, and found the following just after the “QUERY BLOCK SIGNATURE”.

QUERY BLOCK SIGNATURE
---------------------
  signature(): NULL
***********************************
Cost-Based Join Predicate Push-down
***********************************
JPPD: Checking validity of push-down in query block SEL$6E5D879B (#4)
JPPD:   Checking validity of push-down from query block SEL$6E5D879B (#4) to query block SEL$C20BB4FE (#6)
Check Basic Validity for Non-Union View for query block SEL$C20BB4FE (#6)
JPPD:     JPPD bypassed: View has non-standard group by.
JPPD:   No valid views found to push predicate into.
JPPD: Checking validity of push-down in query block SEL$799AD133 (#3)
JPPD:   Checking validity of push-down from query block SEL$799AD133 (#3) to query block SEL$EFE55ECA (#7)
Check Basic Validity for Non-Union View for query block SEL$EFE55ECA (#7)
JPPD:     JPPD bypassed: View has non-standard group by.
JPPD:   No valid views found to push predicate into.
JPPD: Checking validity of push-down in query block SEL$C2AA4F6A (#2)
JPPD:   Checking validity of push-down from query block SEL$C2AA4F6A (#2) to query block SEL$799AD133 (#3)
Check Basic Validity for Non-Union View for query block SEL$799AD133 (#3)
JPPD:     Passed validity checks
JPPD:   Checking validity of push-down from query block SEL$C2AA4F6A (#2) to query block SEL$6E5D879B (#4)
Check Basic Validity for Non-Union View for query block SEL$6E5D879B (#4)
JPPD:     Passed validity checks
JPPD:   Checking validity of push-down from query block SEL$C2AA4F6A (#2) to query block SEL$FC56C448 (#5)
Check Basic Validity for Non-Union View for query block SEL$FC56C448 (#5)
JPPD:     Passed validity checks
JPPD: JPPD:   Pushdown from query block SEL$C2AA4F6A (#2) passed validity checks.
Join-Predicate push-down on query block SEL$C2AA4F6A (#2)
JPPD: Using search type: linear
JPPD: Considering join predicate push-down
JPPD: Starting iteration 1, state space = (3,4,5) : (0,0,0)

As you can see we are doing cost-based join-predicate pushdown, and there are three targets which are valid for the operation. Notice the line that says “using search type: linear”, and the suggestive “starting iteration 1” – let’s look for more lines with “Starting iteration”

[jonathan@linux01 big_trace]$ grep -n "Starting iteration" orcl_ora_25306.trc
9934:GBP: Starting iteration 1, state space = (20,21) : (0,0)
11529:GBP: Starting iteration 2, state space = (20,21) : (0,C)
11562:GBP: Starting iteration 3, state space = (20,21) : (F,0)
12479:GBP: Starting iteration 4, state space = (20,21) : (F,C)
12517:GBP: Starting iteration 1, state space = (18,19) : (0,0)
13860:GBP: Starting iteration 2, state space = (18,19) : (0,C)
13893:GBP: Starting iteration 3, state space = (18,19) : (F,0)
14587:GBP: Starting iteration 4, state space = (18,19) : (F,C)
14628:GBP: Starting iteration 1, state space = (16,17) : (0,0)
15972:GBP: Starting iteration 2, state space = (16,17) : (0,C)
16005:GBP: Starting iteration 3, state space = (16,17) : (F,0)
16700:GBP: Starting iteration 4, state space = (16,17) : (F,C)
16877:JPPD: Starting iteration 1, state space = (3,4,5) : (0,0,0)
622904:JPPD: Starting iteration 2, state space = (3,4,5) : (1,0,0)
1759914:JPPD: Starting iteration 3, state space = (3,4,5) : (1,1,0)
2750592:JPPD: Starting iteration 4, state space = (3,4,5) : (1,1,1)

There are four iterations for state space (3,4,5) – and look at the huge gaps between their “Starting iteration” lines. In fact, let’s go a little closer and combine their starting lines with the lines above where I searched for “Number of join permutations tried:”


16877:JPPD: Starting iteration 1, state space = (3,4,5) : (0,0,0)
622799:Number of join permutations tried: 374

622904:JPPD: Starting iteration 2, state space = (3,4,5) : (1,0,0)
1759817:Number of join permutations tried: 673

1759914:JPPD: Starting iteration 3, state space = (3,4,5) : (1,1,0)
2750475:Number of join permutations tried: 674

2750592:JPPD: Starting iteration 4, state space = (3,4,5) : (1,1,1)
3586276:Number of join permutations tried: 571

4458608:Number of join permutations tried: 581

At this point my notes end and I don’t know where I was going with the investigation. I know that I suggested to the OP that the cost-based join predicate pushdown was having a huge impact on the optimization time and suggested he experiment with disabling the feature. (Parse time dropped dramatically, but query run-time went through the roof – so that proved a point, but wasn’t a useful strategy). I don’t know, however, what the fifth long series of permutations was for, so if I could find the trace file one of the things I’d do next would be to look at the detail a few lines before line 4,458,608 to see what triggered that part of the re-optimization. I’d also want to know whether the final execution plan came from the fifth series and could be reached without involving all the join predicate pushdown work, or whether it was a plan that was only going to appear after the optimizer had worked through all 4 iterations.

The final plan did involve all 3 pushed predicates (which looks like it might have been from iteration 4), so it might have been possible to find a generic strategy for forcing unconditional predicate pushing without doing all the expensive intermediate work.

Version 12c and beyond

That was then, and this is now. And something completely different might have appeared in 12c (or 19c) – but the one thing that is particularly helpful is that you can bet that every iteration of the JPPD state spaces would have produced a “TIMER:” line in the trace file, making it very easy to run grep -n “TIMER:” (or -nT as I recently discovered) against the trace file to pinpoint the issue very quickly.

Here’s an example from my “killer_parse.sql” query after setting “_fix_control”=’16923858:4′ (1e4 microseconds = 1/100th of a second) in an instance of 19c:


$ grep -nT TIMER or19_ora_21051.trc

16426  :TIMER:      bitmap access paths cpu: 0.104006 sec elapsed: 0.105076 sec
252758 :TIMER:     costing general plans cpu: 0.040666 sec elapsed: 0.040471 sec
309460 :TIMER:      bitmap access paths cpu: 0.079509 sec elapsed: 0.079074 sec
312584 :TIMER: CBQT OR expansion SEL$765CDFAA cpu: 10.474142 sec elapsed: 10.508788 sec
313974 :TIMER: Complex View Merging SEL$765CDFAA cpu: 1.475173 sec elapsed: 1.475418 sec
315716 :TIMER: Table Expansion SEL$765CDFAA cpu: 0.046262 sec elapsed: 0.046647 sec
316036 :TIMER: Star Transformation SEL$765CDFAA cpu: 0.029077 sec elapsed: 0.026912 sec
318207 :TIMER: Type Checking after CBQT SEL$765CDFAA cpu: 0.220506 sec elapsed: 0.219273 sec
318208 :TIMER: Cost-Based Transformations (Overall) SEL$765CDFAA cpu: 13.632516 sec elapsed: 13.666360 sec
328948 :TIMER:      bitmap access paths cpu: 0.093973 sec elapsed: 0.095008 sec
632935 :TIMER: Access Path Analysis (Final) SEL$765CDFAA cpu: 7.703016 sec elapsed: 7.755957 sec
633092 :TIMER: SQL Optimization (Overall) SEL$765CDFAA cpu: 21.539010 sec elapsed: 21.632012 sec

The closing 21.63 seconds (line 633092) is largely 7.7559 seconds (632,935) plus 13.666 seconds (line 318208) Cost-Based Transformation time, and that 13.666 seconds is mostly the 1.475 seconds (line 313,974) plus 10.508 seconds (line 312,584) for CBQT OR expansion – so let’s try disabling OR expansion (alter session set “_no_or_expansion”=true;) and try again:


$ grep -nT TIMER or19_ora_22205.trc
14884  :TIMER:      bitmap access paths cpu: 0.062453 sec elapsed: 0.064501 sec
15228  :TIMER: Access Path Analysis (Final) SEL$1 cpu: 0.256751 sec elapsed: 0.262467 sec
15234  :TIMER: SQL Optimization (Overall) SEL$1 cpu: 0.264099 sec elapsed: 0.268183 sec

Not only was optimisation faster, the runtime was quicker too.
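
If you want to try the same kind of check yourself, here is a rough sketch of the session setup rather than a definitive recipe: the connect string, script name and diag path are placeholders, and the fix-control and event settings are simply the ones quoted above.

# Sketch only: produce a 10053 trace with 1/100th-second TIMER granularity,
# then look for the expensive optimisation steps. The credentials, script
# name and trace directory below are placeholders for your own environment.
sqlplus your_user/your_password <<'EOF'
alter session set tracefile_identifier = 'parse_time';
alter session set "_fix_control" = '16923858:4';   -- TIMER threshold = 1e4 microseconds
alter session set events '10053 trace name context forever, level 1';
@killer_parse.sql
alter session set events '10053 trace name context off';
EOF

grep -nT TIMER /u01/app/oracle/diag/rdbms/orcl/orcl/trace/*parse_time*.trc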

Warning – it’s not always that easy.

 

Library Cache Stats

In response to a comment that one of my notes references a call to a package “snap_libcache”, I’ve posted this version of the SQL that can be run by SYS to create the package, with a public synonym, and privileges granted to public to execute it. The package doesn’t report the DLM (RAC) related activity, and is suitable only for 11g onwards (older versions require a massive decode of an index value to convert indx numbers into names).

rem
rem     Script:         snap_11_libcache.sql
rem     Author:         Jonathan Lewis
rem     Dated:          March 2001 (updated for 11g)
rem     Purpose:        Package to get snapshot start and delta of library cache stats
rem
rem     Notes
rem             Lots of changes needed by 11.2.x.x where x$kglst holds
rem             two types - TYPE (107) and NAMESPACE (84) - but no
rem             longer needs a complex decode.
rem
rem             Has to be run by SYS to create the package
rem
rem     Usage:
rem             set serveroutput on size 1000000 format wrapped
rem             set linesize 144
rem             set trimspool on
rem             execute snap_libcache.start_snap
rem             -- do something
rem             execute snap_libcache.end_snap
rem

create or replace package snap_libcache as
procedure start_snap;
procedure end_snap;
end;
/

create or replace package body snap_libcache as

cursor c1 is
select
        indx,
        kglsttyp        lib_type,
        kglstdsc        name,
        kglstget        gets,
        kglstght        get_hits,
        kglstpin        pins,
        kglstpht        pin_hits,
        kglstrld        reloads,
        kglstinv        invalidations,
        kglstlrq        dlm_lock_requests,
        kglstprq        dlm_pin_requests,
--      kglstprl        dlm_pin_releases,
--      kglstirq        dlm_invalidation_requests,
        kglstmiv        dlm_invalidations
from
        x$kglst
;

type w_type1 is table of c1%rowtype index by binary_integer;
w_list1         w_type1;
w_empty_list    w_type1;

m_start_time    date;
m_start_flag    char(1);
m_end_time      date;

procedure start_snap is
begin

        m_start_time := sysdate;
        m_start_flag := 'U';
        w_list1      := w_empty_list;

        for r in c1 loop
                w_list1(r.indx).gets          := r.gets;
                w_list1(r.indx).get_hits      := r.get_hits;
                w_list1(r.indx).pins          := r.pins;
                w_list1(r.indx).pin_hits      := r.pin_hits;
                w_list1(r.indx).reloads       := r.reloads;
                w_list1(r.indx).invalidations := r.invalidations;
        end loop;

end start_snap;

procedure end_snap is
begin

        m_end_time := sysdate;

        dbms_output.put_line('---------------------------------');
        dbms_output.put_line('Library Cache - ' ||
                to_char(m_end_time,'dd-Mon hh24:mi:ss')
        );

        if m_start_flag = 'U' then
                dbms_output.put_line('Interval:-  ' ||
                        trunc(86400 * (m_end_time - m_start_time)) ||
                        ' seconds'
                );
        else
                dbms_output.put_line('Since Startup:- ' ||
                        to_char(m_start_time,'dd-Mon hh24:mi:ss')
                );
        end if;

        dbms_output.put_line('---------------------------------');

        dbms_output.put_line(
                rpad('Type',10) ||
                rpad('Description',41) ||
                lpad('Gets',12) ||
                lpad('Hits',12) ||
                lpad('Ratio',6) ||
                lpad('Pins',12) ||
                lpad('Hits',12) ||
                lpad('Ratio',6) ||
                lpad('Invalidations',14) ||
                lpad('Reloads',10)
        );

        dbms_output.put_line(
                rpad('-----',10) ||
                rpad('-----',41) ||
                lpad('----',12) ||
                lpad('----',12) ||
                lpad('-----',6) ||
                lpad('----',12) ||
                lpad('----',12) ||
                lpad('-----',6) ||
                lpad('-------------',14) ||
                lpad('------',10)
        );

        for r in c1 loop
                if (not w_list1.exists(r.indx)) then
                        w_list1(r.indx).gets          := 0;
                        w_list1(r.indx).get_hits      := 0;
                        w_list1(r.indx).pins          := 0;
                        w_list1(r.indx).pin_hits      := 0;
                        w_list1(r.indx).invalidations := 0;
                        w_list1(r.indx).reloads       := 0;
                end if;

                if (
                           (w_list1(r.indx).gets          != r.gets)
                        or (w_list1(r.indx).get_hits      != r.get_hits)
                        or (w_list1(r.indx).pins          != r.pins)
                        or (w_list1(r.indx).pin_hits      != r.pin_hits)
                        or (w_list1(r.indx).invalidations != r.invalidations)
                        or (w_list1(r.indx).reloads       != r.reloads)
                ) then

                        dbms_output.put(rpad(substr(r.lib_type,1,10),10));
                        dbms_output.put(rpad(substr(r.name,1,41),41));
                        dbms_output.put(to_char(
                                r.gets - w_list1(r.indx).gets,
                                '999,999,990')
                        );
                        dbms_output.put(to_char(
                                r.get_hits - w_list1(r.indx).get_hits,
                                '999,999,990'));
                        dbms_output.put(to_char(
                                (r.get_hits - w_list1(r.indx).get_hits)/
                                greatest(
                                        r.gets - w_list1(r.indx).gets,
                                        1
                                ),
                                '999.0'));
                        dbms_output.put(to_char(
                                r.pins - w_list1(r.indx).pins,
                                '999,999,990')
                        );
                        dbms_output.put(to_char(
                                r.pin_hits - w_list1(r.indx).pin_hits,
                                '999,999,990'));
                        dbms_output.put(to_char(
                                (r.pin_hits - w_list1(r.indx).pin_hits)/
                                greatest(
                                        r.pins - w_list1(r.indx).pins,
                                        1
                                ),
                                '999.0'));
                        dbms_output.put(to_char(
                                r.invalidations - w_list1(r.indx).invalidations,
                                '9,999,999,990')
                        );
                        dbms_output.put(to_char(
                                r.reloads - w_list1(r.indx).reloads,
                                '9,999,990')
                        );
                        dbms_output.new_line;
                end if;

        end loop;

end end_snap;

begin
        select
                startup_time, 'S'
        into
                m_start_time, m_start_flag
        from
                v$instance;

end snap_libcache;
/

drop public synonym snap_libcache;
create public synonym snap_libcache for snap_libcache;
grant execute on snap_libcache to public;

You’ll note that there are two classes of data, “namespace” and “type”. The dynamic view v$librarycache reports only the namespace rows.
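
If you want a quick look at those two classes before installing the package, a query like the following sketch (run as SYS, using the same x$kglst columns that the package’s cursor uses) will show how many rows fall into each class:

# Sketch only: count x$kglst rows per class (TYPE vs NAMESPACE).
# Must be run as SYS; kglsttyp is the column the package aliases as lib_type.
sqlplus -s / as sysdba <<'EOF'
select kglsttyp class, count(*) row_count
from   x$kglst
group  by kglsttyp;
EOF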

Friday Philosophy – Jerks, Plain & Simple

A couple of weeks ago I saw a post on social media by someone who had just encountered a jerk. You know, someone who is an arrogant, bullying, self-important cockwomble (*).

The offended party had tweeted a link to the thread where the abuse had happened and I went and took a look. It’s not really that important in what manner the jerk was being a jerk, though in this case they were asking for free help that the original poster was giving, and when they did not get as much help as they felt entitled to, they became abusive to the person helping. Sadly, nothing new there.

But what did make me pause was that the original tweet that linked to this tale of the toss-pot ended with the line “Welcome to my world as a woman in I.T.” And I paused – but not for the obvious reason.

I went back and checked the thread to make sure I was right, that I had not missed anything. I hadn’t, not that I could see on the thread anyway.

At no point in this woman’s dealings with this jerk had anything been said about sex, gender, male, female, anything. This person asking for help was undoubtedly not a nice person, the speed with which they swapped from “please help me” through “You have to do more for me” and then on to a tirade of abuse was rapid. And it was nasty abuse – but none of it was sexist.

The woman had made a point that they received this sort of treatment because of their gender – but there was no sign of gender being part of this at all.  And that bothered me. It bothered me for three reasons.

  • I keep coming across people who immediately assume that if someone treats them badly, is offensive, does not like them, or is in any way a jerk towards them, it is because of whatever minority group they are part of. Sometimes sadly that is the case. However, in others it is not – but the offended person has stopped being able to discern that difference. At times I’ve suffered from bullying and being ignored in my own little way but I realized that I was being over sensitive and sometimes I was being ignored just because the person ignoring me ignored pretty much everyone, or was derogatory to everyone. It was nothing to do with my height.
  • Maybe because of that first point, where any issue is perceived as being due to an “..ism”, some people have developed an attitude that all people not in their minority group are against them. For example, I regularly come across the attitude of “all men are….”. I’ve been told to my face that all men think that women are not as good at programming as men. Well, you’re wrong. Not all men think like that. I told the person in question I did not hold that opinion and the reply was something like “well you’re about the only exception then!”. They were determined to hold a point of view in which it was not that there are still some sexist men about – but that all men were sexist, and rabidly so. That’s pretty annoying and it risks making people not want to help fight your corner.
  • I’ve had people say to me “I can’t say anything about X doing this wrong as I’ll only get accused of …ism” – and it’s occasionally been a worry for me. This can lead to a backlash where people get away with poor work or bad behaviour as no one wants to be labelled with an “…ism”.

What worries me about this “cry wolf” scenario and the attitude of “they are all going to be out to get you” is that it actually perpetuates the very thing we need to stand against. When I come across someone who is constantly, always citing all their interpersonal problems as being due to the minority they associate themselves with, I confess I think to myself “well, perhaps I’ll be a little wary of you, you seem to have issues here”. It’s like a chap I met who was adamant that every boss he had ever had was “a moron”. He’d had a lot of bosses and he could not accept that maybe, just maybe, at times perhaps the boss was not the problem.

Don’t get me wrong, we absolutely should challenge behaviour that treats a group of people as inferior, just for being part of that group. We should not condone any “..ism”. We should all ask ourselves if we are being unconsciously prejudiced and, if so, work to stop that. But we should be wary of citing bigotry as a knee-jerk reaction or falling into the error of challenging sexism, racism etc with exactly the same attitude but just from the other side of the fence.

And I think part of this is recognising that sometimes, often, jerks are just jerks. There is no …ism. Let’s just call them out for being jerks. And if there is an …ism, we call them out for being absolute, utter jerks.

 

(*) cockwomble is a term that seems to be becoming more universally recognised. It just means someone who is a bit of an idiot, someone you don’t want to spend any more time with than you need. A Womble is a creature from UK kids TV in the 70’s and 80’s. They are made-up animals that wander around Wimbledon Common (a large park in London), tidying it up and making things out of the rubbish they find. Sue made this cockwomble out of a beany-bag womble and a beany-bag cockerel.