Oakies Blog Aggregator

Hash Joins

This is a follow-up to a note that’s 10 years old, prompted by a thread on the Oracle Developer Community forum asking about the order of operation in an execution plan, and following up with a question about PGA memory use that I had left open in the original note.

The original note pointed out that the serial execution plan for a 4 table join that used hash joins for every join and had a leading() hint dictating a single join order could still have 8 distinct execution plans (which I then corrected to 4, because half of them were eliminated by an odd little inconsistency of the leading() hint).

The source of the number 8 was the fact that when you hint a hash join with /*+ use_hash(next_alias) */ the optimizer will consider a /*+ swap_join_inputs(next_alias) */ – in other words, it will choose which of the two inputs should be the “build” table and which the “probe” table, even though the human eye will probably assume that next_alias was clearly intended as the probe table. (The “little inconsistency” in the leading() hint is that it blocks the option for a swap for the first – and only the first – join unless it’s explicitly hinted.)
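As a minimal illustration (a hypothetical two-table query, with table and column names assumed just for the sketch), this is how the pair of hints is usually written: with no_swap_join_inputs(t2) the optimizer is told to keep t1 as the build table and t2 as the probe table, while swap_join_inputs(t2) would reverse the roles.

select
        /*+
                leading(t1 t2)
                use_hash(t2)
                no_swap_join_inputs(t2)
        */
        count(*)
from    t1, t2
where   t2.id = t1.id
;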

In the article I showed all 8 possible plans, and described how the two plans at the extreme ends of my list would operate at run-time. I’ve reproduced these two plans below, preceded by the hints that produced them:

leading(t1 t2 t3 t4)
use_hash(t2) no_swap_join_inputs(t2)
use_hash(t3) no_swap_join_inputs(t3)
use_hash(t4) no_swap_join_inputs(t4)
 
--------------------------------------------------------------
| Id  | Operation             | Name | Rows  | Bytes | Cost  |
--------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |     1 |    24 |    11 |
|   1 |  SORT AGGREGATE       |      |     1 |    24 |       |
|*  2 |   HASH JOIN           |      |    70 |  1680 |    11 |
|*  3 |    HASH JOIN          |      |    70 |  1260 |     8 |
|*  4 |     HASH JOIN         |      |    70 |   840 |     5 |
|   5 |      TABLE ACCESS FULL| T1   |    70 |   420 |     2 |
|   6 |      TABLE ACCESS FULL| T2   |    70 |   420 |     2 |
|   7 |     TABLE ACCESS FULL | T3   |    70 |   420 |     2 |
|   8 |    TABLE ACCESS FULL  | T4   |    70 |   420 |     2 |
--------------------------------------------------------------

leading(t1 t2 t3 t4)
use_hash(t2)    swap_join_inputs(t2)
use_hash(t3)    swap_join_inputs(t3)
use_hash(t4)    swap_join_inputs(t4)
 
--------------------------------------------------------------
| Id  | Operation             | Name | Rows  | Bytes | Cost  |
--------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |     1 |    24 |    11 |
|   1 |  SORT AGGREGATE       |      |     1 |    24 |       |
|*  2 |   HASH JOIN           |      |    70 |  1680 |    11 |
|   3 |    TABLE ACCESS FULL  | T4   |    70 |   420 |     2 |
|*  4 |    HASH JOIN          |      |    70 |  1260 |     8 |
|   5 |     TABLE ACCESS FULL | T3   |    70 |   420 |     2 |
|*  6 |     HASH JOIN         |      |    70 |   840 |     5 |
|   7 |      TABLE ACCESS FULL| T2   |    70 |   420 |     2 |
|   8 |      TABLE ACCESS FULL| T1   |    70 |   420 |     2 |
--------------------------------------------------------------

Side note: one of the comments on the original post raised the point that (e.g.) the second of the plans above could have been generated by the join order represented by the hint leading(t2 t1 t3 t4), and that’s absolutely correct. In fact, if you see the plan for a 4 table join consisting of nothing but 3 hash joins you cannot be certain what join order the optimizer was examining when it produced that plan unless you look at the CBO trace file (or check to see if it’s been explicitly and completely hinted in the code).
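For reference, here is a sketch of one way to get that CBO (10053) trace for a single statement – the tracefile identifier and the statement itself are just placeholders for whatever you are investigating:

alter session set tracefile_identifier = 'hash_join_order';
alter session set events '10053 trace name context forever, level 1';

-- parsing the statement (e.g. via explain plan) writes the optimizer trace
explain plan for
select  /*+ leading(t1 t2 t3 t4) use_hash(t2) use_hash(t3) use_hash(t4) */
        count(*)
from    t1, t2, t3, t4
where   t2.id = t1.id
and     t3.id = t2.id
and     t4.id = t3.id
;

alter session set events '10053 trace name context off';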

Having set the scene, we come to the question that prompted this note. The question related to the second plan above, and said:

“However, this one is the plan that takes the most memory in the PGA because it builds 3 hash tables before joining any table, right?”

The quick answer to the question is: “Not necessarily.”

In the original article I had pointed out that plans following the pattern of the first plan above with N tables and N – 1 joins would have at most two build tables in memory at any one moment while the bottom plan would create N – 1 build tables in memory before any join results could be created. This does rather invite the inference that the bottom plan has to be the one that is going to use most PGA memory, but what I had said in the original article was (emphasis added):

“Notice: the number of in-memory hash (build) tables we have in the first of the 8 plans at any instant after the first join starts is two and (no matter how many tables are involved in this pattern) the number of in-memory hash tables will always be two. The actual size of the two hash tables is a little unpredictable and, as a very crude guideline, you might expect the size to grow as more tables are joined into the result set.”

As a thought experiment, consider 3 small dimension tables and one big fact table. If Oracle were to create in-memory hash tables from the three dimension tables and then start scanning the fact table (following the pattern of the second plan above with t1 in the role of the fact table) probing each of the dimension tables in turn, it could deliver the first result row very quickly without requiring more memory to store intermediate results.

Conversely if Oracle were to create a tiny in-memory hash table from the first dimension and probe it with the fact table (following the pattern of the first plan above with t2 in the role of the fact table) Oracle would then have to build a very large in-memory hash table before probing it with the second dimension table, and as that second join takes place it would be generating a new result set that would become the next big in-memory hash table.

In this thought experiment we would probably find that the optimizer did the right thing without prompting and constructed three tiny in-memory hash tables – but it’s not always so clear-cut, and even the “right” decision can result in very large intermediate build tables (and if those intermediate build tables spill to disc in an Exadata environment the change in performance can be huge).

To finish off, here’s a script to turn the thought experiment into a concrete example (by careful, but deliberately silly, hinting).

rem
rem     Script:         c_treblehash_3.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Dec 2020
rem     Purpose:        
rem
rem     Last tested 
rem             19.3.0.0
rem 

create table t1 
as
select
        rownum          id,
        to_char(rownum) small_vc,
        rpad('x',100)   padding
from
        all_objects
where
        rownum <= 70
;

alter table t1 
        add constraint t1_pk primary key(id)
;

create table t2
as
select
        rownum          id,
        to_char(rownum) small_vc,
        rpad('x',100)   padding
from
        all_objects
where
        rownum <= 70
;

alter table t2
        add constraint t2_pk primary key(id)
;

create table t3
as
select
        rownum          id,
        to_char(rownum) small_vc,
        rpad('x',100)   padding
from
        all_objects
where
        rownum <= 70
;

alter table t3
        add constraint t3_pk primary key(id)
;

create table t4
nologging
as
select
        t1.id                   id1,
        t2.id                   id2,
        t3.id                   id3,
        rpad(rownum,10)         small_vc,
        rpad('x',100)           padding
from
        t1, t2, t3
;

connect test_user/test
set linesize 156
set serveroutput off

select
        /*+ 
                leading(t4 t1 t2 t3)
                full(t4) 
                use_hash(t1) full(t1) swap_join_inputs(t1)
                use_hash(t2) full(t2) swap_join_inputs(t2)
                use_hash(t3) full(t3) swap_join_inputs(t3) 
        */
        count(t1.small_vc),
        count(t2.small_vc),
        count(t3.small_vc),
        count(t4.small_vc)
from
        t4,     
        t1,     
        t2,     
        t3
where
        t1.id = t4.id1
and     t2.id = t4.id2
and     t3.id = t4.id3
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

column pga_max_mem format 99,999,999

select pga_max_mem from v$process where addr = (
                select paddr from v$session where sid = (
                        select sid from V$mystat where rownum = 1
                )
        )
;

connect test_user/test
set linesize 156
set serveroutput off

select
        /*+ 
                leading(t4 t1 t2 t3)
                full(t4) 
                use_hash(t1) full(t1) no_swap_join_inputs(t1)
                use_hash(t2) full(t2) no_swap_join_inputs(t2)
                use_hash(t3) full(t3) no_swap_join_inputs(t3) 
        */
        count(t1.small_vc),
        count(t2.small_vc),
        count(t3.small_vc),
        count(t4.small_vc)
from
        t4,     
        t1,     
        t2,     
        t3
where
        t1.id = t4.id1
and     t2.id = t4.id2
and     t3.id = t4.id3
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

column pga_max_mem format 99,999,999

select pga_max_mem from v$process where addr = (
                select paddr from v$session where sid = (
                        select sid from V$mystat where rownum = 1
                )
        )
;

All I’ve done is create three small “dimension” tables of 70 rows each, then created a table which is their Cartesian join, producing a “fact” table of 343,000 rows. Then I’ve written a simple query to join the three dimension tables to the fact table.

I’ve used swap_join_inputs() for all the joins in one version of the query, and no_swap_join_inputs() in the other version; and I’ve reconnected to the database before running each query to make it easier to see the different impact on the PGA of the two plans.

I’ve used dbms_xplan.display_cursor() to pull the execution plans from memory after running the queries, and since I’ve not set statistics_level to all, or added the hint /*+ gather_plan_statistics */ to the queries, the only rowsource execution statistics I’ll get are the PGA usage figures: estimates and actuals.
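If you do want the full rowsource execution statistics (A-Rows, buffers, and so on) rather than just the memory figures, a variation on the first test query would look like the sketch below – the only change is the extra hint, with the same call to display_cursor() afterwards:

select
        /*+
                gather_plan_statistics
                leading(t4 t1 t2 t3)
                full(t4)
                use_hash(t1) full(t1) swap_join_inputs(t1)
                use_hash(t2) full(t2) swap_join_inputs(t2)
                use_hash(t3) full(t3) swap_join_inputs(t3)
        */
        count(t1.small_vc),
        count(t2.small_vc),
        count(t3.small_vc),
        count(t4.small_vc)
from
        t4, t1, t2, t3
where
        t1.id = t4.id1
and     t2.id = t4.id2
and     t3.id = t4.id3
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));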

The final step of each test reports the maximum PGA memory usage the session reached in the course of the test.

Here are the two plans, each followed by the PGA maximum memory size.

--------------------------------------------------------------------------
| Id  | Operation             | Name | E-Rows |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |        |       |       |          |
|   1 |  SORT AGGREGATE       |      |      1 |       |       |          |
|*  2 |   HASH JOIN           |      |    343K|  2171K|  2171K| 1684K (0)|
|   3 |    TABLE ACCESS FULL  | T3   |     70 |       |       |          |
|*  4 |    HASH JOIN          |      |    343K|  2171K|  2171K| 1681K (0)|
|   5 |     TABLE ACCESS FULL | T2   |     70 |       |       |          |
|*  6 |     HASH JOIN         |      |    343K|  2171K|  2171K| 1711K (0)|
|   7 |      TABLE ACCESS FULL| T1   |     70 |       |       |          |
|   8 |      TABLE ACCESS FULL| T4   |    343K|       |       |          |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T3"."ID"="T4"."ID3")
   4 - access("T2"."ID"="T4"."ID2")
   6 - access("T1"."ID"="T4"."ID1")

PGA_MAX_MEM
-----------
 13,859,925

--------------------------------------------------------------------------
| Id  | Operation             | Name | E-Rows |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |        |       |       |          |
|   1 |  SORT AGGREGATE       |      |      1 |       |       |          |
|*  2 |   HASH JOIN           |      |    343K|    23M|  3727K|   29M (0)|
|*  3 |    HASH JOIN          |      |    343K|    23M|  3727K|   29M (0)|
|*  4 |     HASH JOIN         |      |    343K|    23M|  3667K|   29M (0)|
|   5 |      TABLE ACCESS FULL| T4   |    343K|       |       |          |
|   6 |      TABLE ACCESS FULL| T1   |     70 |       |       |          |
|   7 |     TABLE ACCESS FULL | T2   |     70 |       |       |          |
|   8 |    TABLE ACCESS FULL  | T3   |     70 |       |       |          |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T3"."ID"="T4"."ID3")
   3 - access("T2"."ID"="T4"."ID2")
   4 - access("T1"."ID"="T4"."ID1")

PGA_MAX_MEM
-----------
 52,984,917

As you can see, each of the hash joins in the first plan required roughly 1.7MB of memory. All three hash tables would have been in existence at the same time, giving a total of about 5.1MB of memory for the query. The session’s maximum PGA usage shows up as 13MB, of which 5MB was my basic “startup” PGA, leaving 3MB “unaccounted”.

In comparison, each of the hash joins in the second plan required roughly 29MB, although only two of the hash tables would have been in existence at any one moment. That’s still an allocation of 58MB for the same basic 4 table join. In fact things aren’t quite as bad as they seem in this case, since the maximum PGA allocated was only about 52MB (again with 5MB of startup PGA). The apparent contradiction may be due to the way that Oracle allocates PGA in increasing chunks – the 29MB reported may have been the result of the session doing something like: “I’ve reached 23MB, my next allocation will be 4MB – oops, I only needed another 128KB”.
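If you want a little more detail about where the session’s PGA has gone, a quick sketch against v$process_memory (using the same v$session/v$process navigation as the script above) breaks the current process allocation down by category:

select
        pm.category, pm.allocated, pm.used, pm.max_allocated
from
        v$process_memory pm
where
        pm.pid = (
                select  pr.pid
                from    v$process pr
                where   pr.addr = (
                        select  paddr
                        from    v$session
                        where   sid = (select sid from v$mystat where rownum = 1)
                )
        )
order by
        pm.category
;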

As a final check of activity, I’ve also run a couple of queries against V$sql_workarea – a convenient memory structure you can examine to get some “recent history” of queries that have been using large amounts of memory, or spilling to disc. In this case I’ve queried the structure by sql_id for the two queries, reporting just a little detail about the last execution and memory usage.

SQL> select operation_id, operation_type, last_memory_used, last_execution  from V$sql_workarea where sql_id = 'dcc01q28gcbmy';

OPERATION_ID OPERATION_TYPE                   LAST_MEMORY_USED LAST_EXECUTION
------------ -------------------------------- ---------------- ----------------------------------------
           2 HASH-JOIN                                 1724416 OPTIMAL
           4 HASH-JOIN                                 1721344 OPTIMAL
           6 HASH-JOIN                                 1752064 OPTIMAL


SQL> select operation_id, operation_type, last_memory_used, last_execution  from V$sql_workarea where sql_id = 'b52uwjz07fwhk';

OPERATION_ID OPERATION_TYPE                   LAST_MEMORY_USED LAST_EXECUTION
------------ -------------------------------- ---------------- ----------------------------------------
           2 HASH-JOIN                                30930944 OPTIMAL
           3 HASH-JOIN                                30945280 OPTIMAL
           4 HASH-JOIN                                30650368 OPTIMAL

As you can see, the view reports each plan operation (with id and type) that needed memory for an SQL workarea – and the numbers confirm the 1.7MB and 29MB reported by the execution plans. (Which is not surprising since it’s exactly these figures that are used to supply the details in the plans.)

You need to be a little cautious with this view in older versions of Oracle – it used to do a “full tablescan” of the library cache, which resulted in a lot of library cache latch activity and could cause some contention if you ran it frequently, but it’s a very useful view for finding queries that are doing unexpectedly large sorts or hash joins, and one you might query occasionally if you see any space management threats in the temporary tablespace.
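If you want to catch workareas while the queries are still running (rather than after the event) there is a related view, v$sql_workarea_active; a minimal sketch of the sort of query you might run is:

select
        sid, sql_id, operation_type, operation_id,
        actual_mem_used, max_mem_used, tempseg_size
from
        v$sql_workarea_active
order by
        actual_mem_used desc
;

A non-null tempseg_size is the sign that a workarea has spilled to disc (i.e. a one-pass or multi-pass operation).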

Elevate your IT career with free certifications from #Exasol!


Exasol is the fastest analytics database. It is rapidly spreading worldwide, so Exasol knowledge is in high demand. You can get it for free from us here: training.exasol.com

We provide free online learning courses to cope with our rapid growth – these scale better than instructor-led training. Our courses are short and to-the-point: Exasol doesn’t need extensive training because it’s easy to work with. It delivers outstanding performance with very low maintenance. Our database is designed to take care of itself to a large degree. That’s why each of our few courses can be completed within one day.

Exasol certification exams cost 150 Euro per attempt. We have now decided to grant one free attempt per course. For you, that means you can get fully Exasol certified for free – if you prepare well for the exams.

Good luck with that, we love to see you succeed!

Aurora Serverless v2 (preview) – CPU

By Franck Pachot

.
This follows my previous post https://blog.dbi-services.com/aurora-serverless-v2-ram/ which you should read before this one. I was looking at the auto-scaling of RAM and it is now time to look at CPU utilization.

I have created an Aurora Serverless v2 database (please don’t forget it is the beta preview) with auto-scaling from 4 ACU to 32 ACU. I was looking at a table scan to show how the buffer pool is dynamically resized with auto-scaling. Here I’ll start running the same cpu() procedure in one, then two, then three… concurrent sessions to show auto-scaling and the related metrics.

Here is the global workload in number of queries per second (I have installed PMM on AWS in a previous post so let’s use it):

[Screenshot: PMM dashboard showing queries per second]
And the summary of what I’ve run, with the auto-scaled capacity and the CPU utilization measured:


10:38 1 session  running,  6 ACU , 14% CPU usage
10:54 2 sessions running, 11 ACUs, 26% CPU usage
11:09 3 sessions running, 16 ACUs, 39% CPU usage
11:25 4 sessions running, 21 ACUs, 50% CPU usage
11:40 5 sessions running, 26 ACUs, 63% CPU usage
11:56 6 sessions running, 31 ACUs, 75% CPU usage
12:12 7 sessions running, 32 ACUs, 89% CPU usage
12:27 8 sessions running, 32 ACUs, 97% CPU usage

The timestamps show when I started one more session running on CPU, so that we can match them with the metrics from CloudWatch. From there, it looks like the Aurora database engine is running on an 8 vCPU machine and the increase in ACU did not dynamically change the number of OS threads that the “CPU Utilization” metric is based on.

Here are the details from CloudWatch:
[Screenshot: CloudWatch metrics – Serverless Capacity Units, CPU Utilization, Engine Uptime, DB Connections]
The metrics are:

  • Serverless Capacity Units on the top-left: the auto-scaled ACU from 4 to 32 (in the preview), with a granularity of 0.5
  • CPU Utilization on the top-right: the sessions running on CPU as a percentage of the available threads
  • Engine Uptime on the bottom-left: there was no restart during those runs
  • DB Connections on the bottom-right: I had 4 idle sessions before starting, so subtract 4 and you have the number of sessions running

With 8 sessions in CPU, I’ve saturated the CPU and, as we reached 100%, my guess is that those are 8 cores, not hyperthreaded. As this is 32 ACUs, this would mean that an ACU is 1/4th of a core, but…

Here is the same metric I displayed from PMM, but this time from CloudWatch, to look again at how the workload scales:
[Screenshot: CloudWatch queries-per-second graph]

If ACUs were proportional to the OS cores, I would expect linear performance, which is not the case. One session runs at 1.25M queries per second on 6 ACUs. Two sessions are at 1.8M queries per second on 11 ACUs. Three sessions at 2.5M queries/s on 16 ACUs. So the math is not so simple. Does this mean that 16 ACU does not offer the same throughput as two times 8 ACU? Are we on burstable instances for small ACU? And, with 8 vCPU and 64 GB, does that mean that when I start a serverless database with a 32 ACU maximum it runs on a db.r5.2xlarge, whatever the actual ACU it scales to? Is the VM simply provisioned on the maximum ACU and CPU-limited by cgroups or similar?

I’ve done another test, this time fixing the min and max ACU to 16. So, maybe, this is similar to provisioning a db.r5.xlarge.
And I modified my cpu() procedure to stop after 10 million loops:


delimiter $$
drop procedure if exists cpu;
create procedure cpu()
begin
 declare i int default 0;
 while i < 1e7  do
  set i = i + 1;
 end while;
end$$
delimiter ;

For comparison, 1 million loops takes 50 seconds on dbfiddle, and you can test it on other platforms where you have an idea of the CPU speed.

I’ve run a loop that connects, runs this procedure, displays the elapsed time, and loops again:


Dec 07 18:41:45 real    0m24.271s
Dec 07 18:42:10 real    0m25.031s
Dec 07 18:42:35 real    0m25.146s
Dec 07 18:43:00 real    0m24.817s
Dec 07 18:43:24 real    0m23.868s
Dec 07 18:43:48 real    0m24.180s
Dec 07 18:44:12 real    0m23.758s
Dec 07 18:44:36 real    0m24.532s
Dec 07 18:45:00 real    0m23.651s
Dec 07 18:45:23 real    0m23.540s
Dec 07 18:45:47 real    0m23.813s
Dec 07 18:46:11 real    0m24.295s
Dec 07 18:46:35 real    0m23.525s

This is one session, and CPU usage is 26% here (this is why I think that my 16 ACU serverless database runs on a 4 vCPU server).


Dec 07 18:46:59 real    0m24.013s
Dec 07 18:47:23 real    0m24.318s
Dec 07 18:47:47 real    0m23.845s
Dec 07 18:48:11 real    0m24.066s
Dec 07 18:48:35 real    0m23.903s
Dec 07 18:49:00 real    0m24.842s
Dec 07 18:49:24 real    0m24.173s
Dec 07 18:49:49 real    0m24.557s
Dec 07 18:50:13 real    0m24.684s
Dec 07 18:50:38 real    0m24.860s
Dec 07 18:51:03 real    0m24.988s

This is two sessions (I’m displaying the time for one only) and CPU usage is 50%, which confirms my guess: I’m using half of the CPU resources. And the response time per session is still the same as when only one session was running.


Dec 07 18:51:28 real    0m24.714s
Dec 07 18:51:53 real    0m24.802s
Dec 07 18:52:18 real    0m24.936s
Dec 07 18:52:42 real    0m24.371s
Dec 07 18:53:06 real    0m24.161s
Dec 07 18:53:31 real    0m24.543s
Dec 07 18:53:55 real    0m24.316s
Dec 07 18:54:20 real    0m25.183s

I am now running 3 sessions there and the response time is still similar (I am at 75% CPU usage, so obviously I have more than 2 cores here – with no hyperthreading – or I would have seen some performance penalty when running more threads than cores).


Dec 07 18:54:46 real    0m25.937s
Dec 07 18:55:11 real    0m25.063s
Dec 07 18:55:36 real    0m24.400s
Dec 07 18:56:01 real    0m25.223s
Dec 07 18:56:27 real    0m25.791s
Dec 07 18:57:17 real    0m24.798s
Dec 07 18:57:42 real    0m25.385s
Dec 07 18:58:07 real    0m24.561s

This was with 4 sessions in total. The CPU is near 100% busy and the response time is still ok, which confirms I have 4 cores available to run that.


Dec 07 18:58:36 real    0m28.562s
Dec 07 18:59:06 real    0m30.618s
Dec 07 18:59:36 real    0m30.002s
Dec 07 19:00:07 real    0m30.921s
Dec 07 19:00:39 real    0m31.931s
Dec 07 19:01:11 real    0m32.233s
Dec 07 19:01:43 real    0m32.138s
Dec 07 19:02:13 real    0m29.676s
Dec 07 19:02:44 real    0m30.483s

One more session here. Now the CPU is at 100% and the processes have to wait 1/5th of their time in the run queue, as there are only 4 threads available. That’s the additional 20% we can see in the response time.

I’m not starting more processes, but increasing the capacity now, setting the maximum ACU to 24, which then enables auto-scaling:


...
Dec 07 19:08:02 real    0m33.176s
Dec 07 19:08:34 real    0m32.346s
Dec 07 19:09:01 real    0m26.912s
Dec 07 19:09:25 real    0m24.319s
Dec 07 19:09:35 real    0m10.174s
Dec 07 19:09:37 real    0m1.704s
Dec 07 19:09:39 real    0m1.952s
Dec 07 19:09:41 real    0m1.600s
Dec 07 19:09:42 real    0m1.487s
Dec 07 19:10:07 real    0m24.453s
Dec 07 19:10:32 real    0m25.794s
Dec 07 19:10:57 real    0m24.917s
...
Dec 07 19:19:48 real    0m25.939s
Dec 07 19:20:13 real    0m25.716s
Dec 07 19:20:40 real    0m26.589s
Dec 07 19:21:06 real    0m26.341s
Dec 07 19:21:34 real    0m27.255s

At 19:00 I increased the maximum ACU to 24 and let it auto-scale. The engine restarted at 19:09:30 and I got some errors until 19:21, when I reached the optimal response time again. I have 5 sessions running on a machine sized for 24 ACUs, which I think means 6 OS threads, and then I expect 5/6 = 83% CPU utilization if all my hypotheses are right. Here are the CloudWatch metrics:
[Screenshot: CloudWatch metrics after raising the maximum ACU to 24]
Yes, it seems we reached this 83% after some fluctuations. Those irregularities may be the consequence of my scripts running loops of long procedures. When the engine restarted (visible in “Engine Uptime”), I was disconnected for a while (visible in “DB Connections”), then the load decreased (visible in “CPU Utilization”), and then the available resources were scaled down (visible in “Serverless Capacity Units”).

The correspondence between ACU and RAM is documented (visible when defining the min/max, and reported in my previous post), and the instance types for provisioned Aurora give the correspondence between RAM and vCPU (which confirms what I’ve seen here: 16 ACU = 32 GB = 4 vCPU, like a db.r5.xlarge): https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Concepts.DBInstanceClass.html#aurora-db-instance-classes

Please remember, all those are guesses, as very little information is disclosed about how it works internally. And this is a beta preview: many things will be different when it goes GA. The goal of this blog is only to show that a little understanding of how it works is useful when deciding between provisioned and serverless, thinking about side effects, and interpreting the CloudWatch metrics. And we don’t need huge workloads for this investigation: learn on small labs and validate it on the real stuff.

The post Aurora Serverless v2 (preview) – CPU appeared first on Blog dbi services.

Aurora Serverless v2 (preview) – RAM

By Franck Pachot

.
What is Aurora Serverless? That’s the RDS Aurora name for auto-scaling: instead of provisioning an instance size (from the burstable db.t3.small with 2 vCPUs and 2 GB RAM to db.r5.16xlarge with 64 vCPUs and 512 GB RAM) you define a range in terms of ACU (Aurora Capacity Units). ACU is about CPU + RAM. This blog post will focus on RAM.

Aurora Serverless v1

In Serverless v1 the ACU goes from 1 (2 GB RAM) to 256 (488 GB RAM) and the granularity is in powers of two: each scale-up doubles the instance. You can also opt for a minimum capacity of 0, where the instance is stopped when unused (no connections for 5 minutes), but then you accept that the first connection after that takes a few minutes while the instance starts up again to the minimum capacity defined. Scaling happens on measured metrics: CPU (scale up when >70%, down when <30%), the number of connections (as a percentage of the maximum), and available RAM (though this is actually how the maximum number of connections is calculated from the instance RAM – I don’t think the RAM threshold considers the usage of the shared buffer pool in Serverless v1). Aurora tries to scale up as soon as one threshold is reached, but scale-down requires both CPU and connections to be below their thresholds, and cannot happen within 15 minutes of a scale-up (the cooldown period). And, as scaling in Serverless v1 means stopping the database and starting it in another VM, it tries to do it outside of active connections and may timeout or force (your choice).

Serverless v1 will still be available. And there are currently some features that are not available in v2, like PostgreSQL compatibility or the Data API. But they will come, and I suppose v2 will replace v1 one day.

Aurora Serverless v2 (preview)

This new service has a finer grain of auto-scaling. With server virtualization, it is possible to increase the number of vCPUs on a VM without a restart, and since MySQL 5.7.5 the buffer pool can be resized online. This gives a finer grain for scaling up and down (an announced granularity of 0.5 ACU), without waiting. The preview goes from 4 ACU (8GB) to 32 (64GB), but the plan is for the minimum to go as low as 0.5 ACU and the maximum up to 256 ACU. Then you will probably not opt for stopping the instance (to avoid cold-start latency) but keep it low at 0.5 ACU, so that the database is immediately available when a connection comes. And the granularity is by additions of 0.5 ACU rather than doubling the instance. So, even if the ACU is more expensive in v2, you probably consume less. And the scale-down doesn’t have to wait 15 minutes to cut the capacity by half, as it can be decreased progressively online. Of course, having the instance restarted is still a possibility if there’s no vCPU available on the shared host, but that should not happen often.

Here is an example where I created an 8GB demo table:


--------------
create procedure demo(n int)
begin
 declare i int default 0;
 create table demo (id int not null primary key auto_increment, n int,x varchar(1000));
 insert into demo(n,x) values (1,lpad('x',898,'x'));
 while i < n do
  set i = i + 1;
  insert into demo(n,x) select n,x from demo;
  commit;
 end while;
end
--------------

--------------
call demo(23)
--------------

VM movement

During this creation I experienced multiple scale-downs (after the instance creation) and scale-ups (during the table row ingestion), and you can see that in this case the VM was probably moved to another server and had to restart. The “Engine Uptime” in CloudWatch testifies to the restarts, and “Serverless Database Capacity” is the ACU (capacity units):
[Screenshot: CloudWatch “Engine Uptime” and “Serverless Database Capacity” graphs]

During those VM movements, I got this kind of error:


ERROR 2013 (HY000) at line 29: Lost connection to MySQL server during query

You just have to retry in this case. If you can’t, then you will set some minimum/maximum ACU, or maybe go to a provisioned database.


--------------
analyze table demo
--------------
Table           Op       Msg_type   Msg_text
-----------     -------  --------   --------
franck.demo     analyze  status     OK

--------------
select round(data_length/1024/1024) TABLE_MB,round(index_length/1024/1024) INDEX_MB,table_rows,table_name,table_type,engine from information_schema.tables where table_schema='franck'
--------------

TABLE_MB  INDEX_MB        table_rows      table_name      table_type      engine
--------  --------        ----------      ----------      ----------      ------
    8200         0           7831566            demo      BASE TABLE      InnoDB

Here I checked the size of my table: about 8GB.

buffer pool

I mentioned vCPU, but what about RAM? The VM memory can also be resized online, but there’s a difference. With CPU, if you scale down too early, it can scale back up immediately and you get back to the previous performance. But when you do that with RAM, you have evicted some data from the caches, and it will not be back until the first sessions warm it up again. So Serverless v2 has to look at the InnoDB LRU (Least Recently Used) buffers to estimate the risk of dropping them. I mention InnoDB because, for the moment, Aurora Serverless v2 offers the MySQL compatibility only.
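As a side note, the online buffer pool resizing this relies on is a plain MySQL 5.7.5+ feature (the Aurora-specific decision logic is not visible to us); a minimal sketch of what it looks like from a client session:

-- current buffer pool size, in bytes
select @@innodb_buffer_pool_size;

-- resize online; the change is applied in chunks of innodb_buffer_pool_chunk_size
set global innodb_buffer_pool_size = 4 * 1024 * 1024 * 1024;

-- follow the progress of the resize operation
show status like 'InnoDB_buffer_pool_resize_status';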

On my DEMO table I’ve run the following continuously:


use franck;
set profiling = 1;
select count(*) from demo where x='$(date) $RANDOM';
show profiles;

I’ve run that in a loop, so one session continuously active reading 8GB (the predicate does not filter anything and is there just to run a different query each time as I want to show the effect on the buffer pool and not the query cache).

Then, from 18:00 to 18:23 approximately I have run another session:


use franck;
delimiter $$
drop procedure if exists cpu;
create procedure cpu()
begin
 declare i int default 0;
 while 1  do
  set i = i + 1;
 end while;
end$$
delimiter ;
call cpu();

Please, don’t judge me on my MySQL procedural code.

Device name persistence in the cloud: OCI

Device name persistence is an important concept for everyone deploying the Oracle database. In this little series I’ll show how you can achieve device name persistence with Oracle Cloud Infrastructure (OCI) and block storage. I am hoping to share future parts for Azure and AWS.

In the example I’m going to prepare a cloud VM for the installation of Oracle Grid Infrastructure 19.9.0. To do so I have created a number of block devices in addition to the boot volume:

  • One block volume to contain the Oracle binaries
  • Two block volumes to be used as +DATA
  • Two more block volumes for +RECO

This is going to be a playground environment, so the block volume size is unrealistically small. You will certainly need larger block devices for a production environment. Additionally, there is most likely a cost associated with creating these resources – be careful!

Block devices

The following block devices have been created previously, and are waiting to be attached to the VM:

cloudshell:~ (eu-frankfurt-1)$ oci bv volume list -c $C \
 --query "data [?contains(\"display-name\", 'dbinst1')].{AD:\"availability-domain\",name:\"display-name\"}" \
 --output table
+--------------------------+--------------+
| AD                       | name         |
+--------------------------+--------------+
| IHsr:EU-FRANKFURT-1-AD-3 | dbinst1-bv01 |
| IHsr:EU-FRANKFURT-1-AD-3 | dbinst1-bv02 |
| IHsr:EU-FRANKFURT-1-AD-3 | dbinst1-bv03 |
| IHsr:EU-FRANKFURT-1-AD-3 | dbinst1-bv04 |
| IHsr:EU-FRANKFURT-1-AD-3 | dbinst1-bv05 |
+--------------------------+--------------+ 

These now need to be attached to my VM, called dbinst1. You may have guessed ;)

Block device attachment

Once the block devices are created, they need to be attached to the VM. There are many ways to do so, but since I’m using a script in Cloud Shell I went with the OCI Command Line Interface (CLI). For example:

oci compute volume-attachment attach-paravirtualized-volume \
--instance-id ocid1.instance.oc1.eu-frankfurt-1.a...a \
--volume-id ocid1.volume.oc1.eu-frankfurt-1.a...q \
--device "/dev/oracleoci/oraclevdf" 

This command attached the 5th block volume to the VM as /dev/oracleoci/oraclevdf. I have other volumes attached as /dev/oracleoci/oraclevd[a-e] already. Note that I opted to add the block volumes using the paravirtualised option. This is fine for my playground VM, where I don’t really expect or need the last bit of I/O performance. If you need performance, you need to go with the iSCSI attachment type.

Block device use

And this is all there is to it: the para-virtualised block devices are immediately visible on dbinst1:

[opc@dbinst1 ~]$ lsscsi
[2:0:0:1]    disk    ORACLE   BlockVolume      1.0   /dev/sde 
[2:0:0:2]    disk    ORACLE   BlockVolume      1.0   /dev/sda 
[2:0:0:3]    disk    ORACLE   BlockVolume      1.0   /dev/sdb 
[2:0:0:4]    disk    ORACLE   BlockVolume      1.0   /dev/sdd 
[2:0:0:5]    disk    ORACLE   BlockVolume      1.0   /dev/sdc 
[2:0:0:6]    disk    ORACLE   BlockVolume      1.0   /dev/sdf 
[opc@dbinst1 ~]$  

The only thing to be aware of is that you shouldn’t use the native block device. Instead, use the device name you assigned when attaching the block device:

[opc@dbinst1 ~]$ ls -l /dev/oracleoci/*
lrwxrwxrwx. 1 root root 6 Nov 24 06:38 /dev/oracleoci/oraclevda -> ../sde
lrwxrwxrwx. 1 root root 7 Nov 24 06:38 /dev/oracleoci/oraclevda1 -> ../sde1
lrwxrwxrwx. 1 root root 7 Nov 24 06:38 /dev/oracleoci/oraclevda2 -> ../sde2
lrwxrwxrwx. 1 root root 7 Nov 24 06:38 /dev/oracleoci/oraclevda3 -> ../sde3
lrwxrwxrwx. 1 root root 6 Nov 24 06:38 /dev/oracleoci/oraclevdb -> ../sda
lrwxrwxrwx. 1 root root 6 Nov 24 06:38 /dev/oracleoci/oraclevdc -> ../sdb
lrwxrwxrwx. 1 root root 6 Nov 24 06:38 /dev/oracleoci/oraclevdd -> ../sdd
lrwxrwxrwx. 1 root root 6 Nov 24 06:38 /dev/oracleoci/oraclevde -> ../sdc
lrwxrwxrwx. 1 root root 6 Nov 24 07:18 /dev/oracleoci/oraclevdf -> ../sdf
[opc@dbinst1 ~]$  

My Ansible playbooks reference /dev/oracleoci/oraclevd*, and that way ensure device name persistence across reboots. Happy automating!
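As an illustration of why this matters for the Grid Infrastructure installation, here is a minimal sketch (the device list and disk group name are assumptions for this playground, and the devices would still need the usual ownership and permission setup) of pointing ASM at the persistent names rather than the volatile /dev/sd* names:

-- run as SYSASM on the ASM instance
alter system set asm_diskstring = '/dev/oracleoci/oraclevd[b-f]' scope=both;

-- hypothetical disk group built on the persistent device names
create diskgroup data external redundancy
        disk '/dev/oracleoci/oraclevdc', '/dev/oracleoci/oraclevdd';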

Video : Read-Only Partitions and Subpartitions in Oracle 12.2 Onward

In today’s video we’ll discuss read-only partitions and subpartitions, introduced in Oracle 12.2.

This is based on the following article.

You might find these useful too.

The star of today’s video is Chris Saxon of AskTom fame. You can check out his YouTube channel here.

Cheers

Tim…

The post Video : Read-Only Partitions and Subpartitions in Oracle 12.2 Onward first appeared on The ORACLE-BASE Blog.



The invisible restore point

I regularly create and drop tablespaces on my databases here at home, mainly because they are obviously not used in a “Production-like” manner. Typically I’m tinkering with tablespaces to help answer customer questions on AskTOM about space allocation, reclaiming space and the like, so tablespaces come and go frequently. (Sidebar: If you’re regularly dropping/creating tablespaces on your Production database, and it’s not for the purposes of transportable tablespaces, then please drop me a line with why, because most of the time it’s not a great strategy.)

This morning I went to drop a tablespace I had used to answer a question and this popped up.

SQL> drop tablespace demo including contents and datafiles;
drop tablespace demo including contents and datafiles
*
ERROR at line 1:
ORA-38881: Cannot drop tablespace DEMO on primary database due to guaranteed restore points.

Seemingly no problem there – the error message is pretty clear as to what the cause is. Casting my memory back to the weekend, I recalled some other AskTOM questions about flashback and restore points, so presumably I had left one hanging around. However, things started to get interesting when I went to clean up the mess

SQL> select name from v$restore_point;

NAME
------------------------------------------
TESTING

But then… when I went to drop that restore point, apparently it didn’t exist.

SQL> drop restore point testing;
drop restore point testing
*
ERROR at line 1:
ORA-38780: Restore point 'TESTING' does not exist.

Savvy readers will probably already know the solution to this mystery. Flashback and restore points, for lack of a better term, can “span the divide” between pluggable and container databases. If the “administrator of all administrators” wants to control flashback at the root container level, then all pluggables will need to abide by any such directives. In a multi-tenant database, transfer of authority is always arbitrated from the root down to the pluggables and not in the other direction (otherwise of course, as admins we would all be bumping up our cpu_count in our pluggables to grab a bigger slice of the server pie).

In my case, I am the master of my database domain, so it was just a case of flipping over to the root and dropping the restore point from there.

SQL> conn / as sysdba
Connected.
SQL> drop restore point testing;

Restore point dropped.

and then my tablespace can be dropped as expected

SQL> alter session set container = pdb1;

Session altered.

SQL> drop tablespace demo including contents and datafiles;

Tablespace dropped.

TL;DR: If you can’t see a restore point but evidence suggests it is present, then it is probably in the container root.
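A quick way to confirm that is sketched below: from the root, v$restore_point includes the CON_ID column (12c onwards), so you can see which container each restore point belongs to.

select  con_id, name, guarantee_flashback_database, time
from    v$restore_point
order by
        con_id, time
;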

pic credit: https://pixabay.com/users/sergeitokmakov-3426571

Database announcements at re:Invent 2020

By Franck Pachot

.
This year is not very nice for conferences, as everything is virtual and we miss the most important part: meeting and sharing with people. But AWS re:Invent is actually a great experience. As an AWS Data Hero, I received an Oculus Quest 2 to teleport to the virtual Neon City, where we can meet and have fun in Virtual Reality (but with incredibly real-life chatting):

There are 3 important new launches announced around databases: Babelfish for Aurora, Aurora Serverless v2 and AWS Glue Elastic Views, but let’s start with a recap of the pre-re:Invent new features from this year.

We have more regions, even one planned in Switzerland. And also more cloud-at-customer solutions, like RDS on Outposts in addition to RDS on VMware. We had new versions: PostgreSQL 12, MariaDB 10.5, SQL Server 2019 (SQL Server even came with SSRS). And also recent Release Updates for Oracle (July 2020).
About new features from 2020, we can export RDS snapshots to S3 in Parquet format. We can share AD with RDS from multiple VPCs, and we have connection pooling in RDS Proxy (session-state aware). SQL Server supports parallel backups. Oracle supports backups to other regions. RDS can use Always On for SQL Server read replicas. And Oracle does not need the Active Data Guard option when the replica is not used for read workloads. And talking about licenses, there’s License Manager for Oracle to help manage them. There’s also the new Graviton2 processors for RDS PostgreSQL and MySQL.

All that was about relational databases; there are also new features in NoSQL databases, like DynamoDB export to S3 and PartiQL queries. But let’s now go to the new launches.

AWS Glue Elastic Views

I mentioned that we can query the NoSQL DynamoDB tables with a SQL-like API, PartiQL. Now those PartiQL queries can do more: continuous queries to propagate data and changes, like materialized views. This event sourcing is based on CDC (not Streams). It propagates changes in near real-time (asynchronously, and it can be throttled by the target capacity) and to multiple destinations: Elasticsearch for search, S3 for a data lake, Redshift for analytics. A nice serverless solution for CQRS: DynamoDB for ingest and OLTP, and propagation to purpose-built services for the queries that cannot be done in the NoSQL operational database. This is serverless: billed per second of compute and volume of storage.

Currently, those materialized views support only selection and projection, but hopefully, in the future, they will be able to maintain aggregations with GROUP BY. As I’m not a fan of writing procedural code to process data, I really like materialized views for replication, rather than triggers and lambdas.

Aurora Serverless v2

You don’t want to pre-plan the capacity but have your database server scale up, out, and down according to the load? That’s serverless. You don’t provision servers, but capacity units: Aurora Capacity Units (ACU). Rather than multiplying the capacity when needed by changing the instance size, the new Aurora Serverless v2 elasticity has a granularity of 0.5 ACU: you start by provisioning 0.5 ACU (not zero, because you don’t want to wait seconds on the first start after being idle). When compared with v1 (which is still available), the starting capacity is lower, the increment is finer, and the scale-down happens in minutes rather than after a 15-minute cooldown. And it has all the Aurora features: Multi-AZ, Global Database, Proxy… Basically, this relies on the ability to add vCPU and memory online, and to reduce them (this includes shrinking the buffer pool according to the LRU). This means scaling up and down as long as it is possible (it depends on the neighbors’ activity in the same VM). It can scale out as well within the compute fleet and move to another VM if needed, but the goal is to be able to scale up in place most of the time.

Releasing idle CPU is easy, but knowing how much RAM can be released without significantly increasing I/O and response time is probably more challenging. Anyway, we can expect min/max controls on it. The goal is not to replace capacity planning, but to be more elastic with unplanned workloads.

You have the choice to migrate to v2, but look at the price. The ACU is more expensive, but given the elasticity, you probably save a lot (start lower, increase by smaller steps, decrease sooner).

Babelfish

This is the most revolutionary in my opinion. We want polyglot databases, and not only for the choice of language or API in new developments. Many databases run applications, like ERPs, which are tied to a specific commercial database. And companies want to get out of this vendor lock-in, but migration of those applications is impossible. They use specific behaviour, or code, in the database, and they do it for a reason: the agility and performance of processing data within the database. The business logic is tied to data for consistency and performance, in stored procedures. There are many attempts to translate the code, but this works only partially. And that’s not sufficient for enterprise software: rewriting is easy, but testing… who will sign the UAT validation that the business code, working for years in one database engine, has been rewritten to show the same behaviour?

This is different when there is no application change at all, and that’s the idea of Babelfish, starting with SQL Server compatibility in Aurora. Given the powerful extensibility of PostgreSQL, AWS has built extensions to understand T-SQL and specific SQL Server datatype behaviour. They also added endpoints that understand the MS SQL network protocol. And then it can run the applications written for SQL Server without any change besides the connection to the new endpoint. Of course, this is not easy. Each application may have specificities and need new extensions to be implemented. And for this reason, AWS decided to open source this compatibility layer. Who will contribute? Look at an ISV that has an ERP running on SQL Server. They can invest in developing the compatibility with Babelfish, and then propose to their customers to move out of the commercial database, to PostgreSQL. Of course, the goal of AWS is to get them to Aurora, providing the high availability and scalability that big companies may require. But Babelfish’s target is PostgreSQL, the community one.

About the target, Aurora comes with two flavors, using the upper layer from MySQL or PostgreSQL. PostgreSQL was chosen as it is probably the most compatible with commercial databases, and provides easy extensibility in procedural language, datatypes and extensions. About the source, it is SQL Server for the moment (a commercial reply to the licensing policy they have set for their cloud competitors) but I’m sure Oracle will come one day. Probably not 100% compatible, given the complexity of it, but the goal of an ISV is to provide 100% compatibility for one application. And, once compatibility is there, the database is also accessible with the native PostgreSQL API for further developments.

I’m looking forward to seeing how this open-source project will get contributions. Aurora has a bad reputation in the PostgreSQL community: taking the community code, making money with it, and not giving back its optimizations. But Babelfish can really extend the popularity of this reliable open-source database. Contributions are not only extensions for code compatibility: I can expect a lot of contributions of test cases and documentation.

I’ve seen a demo about T-SQL and the Money datatype. This is nice, but it is a single-user test case. I’ll test concurrency as soon as I have the preview. Isolation of transactions, and read and write consistency in multi-user workloads, are very different in PostgreSQL and SQL Server. And a test case for compatibility acceptance is not easy.

You can expect more technical insights on this blog, as soon as I have access to the preview. For the moment, let me share some pictures about the Oculus Quest 2 I got from the AWS Heroes program, and the Neon City place where we meet:
[Photos: the Oculus Quest 2 from the AWS Heroes program and the virtual Neon City]

I forgot to mention the io2 Block Express which will be very interesting for database bandwidth with 4GB/s (and 256K IOPS if you really need this):

https://twitter.com/FranckPachot/status/1333859697363988480?s=20

and the EC2 R5b instance:

Recommended Reads for International Day of Disabled Persons

WordPress.com, as my colleague Anne recently wrote, continues to be a space for people to tell their personal stories and amplify their voices. Today, International Day of Disabled Persons, we’d like to highlight a few perspectives and thoughtful reads to raise awareness of the myriad experiences of disabled people.

This reading list is merely a starting point — be sure to explore more posts tagged with “disability” in the WordPress.com Reader, for example. We hope it introduces you to writers and disability rights advocates whose work you may not be familiar with.


“How to Properly Celebrate a Civil Rights Law During a Pandemic in Which Its Subjects Were Left to Die” at Crutches and Spice

Imani Barbarin at Crutches and Spice writes about life, current events, entertainment, and politics from the perspective of a Black woman with cerebral palsy. Read her reflections on the death of actor Chadwick Boseman, or the anniversary of the Americans With Disabilities Act (which turned 30 this year), excerpted below.

Prior to the pandemic, disabled people were told that the accessibility we needed was cost-prohibitive and unlikely to be implemented only to watch as the institutions that barred our inclusion make those tools available now that nondisabled people needed them. We called for polling places and voting procedures to be made accessible only to watch as politicians shut down polling places in predominantly black neighborhoods. We begged for businesses to be inclusive and accessible to disabled customers only for accessibility to be pitted against small businesses and workers’ rights.

And now, unironically, they celebrate.

They celebrate not weighed down by their own words calculating the amount of acceptable death it would take to reopen the economy. They post our pictures celebrating their own “diversity and inclusion” without confronting the fact they only became accessible because of a pandemic and as they loudly push to reopen, they amplify our voices for now with no plan to continue to include the disability community as businesses start to reopen.

I’m angry.

But I am also filled with love and gratitude for my community.

#ADA30InColor at Disability Visibility Project

Founded by Alice Wong, The Disability Visibility Project is a community focused on creating and sharing disability media and culture. You’ll find a range of content, including oral histories, guest blog posts, and a podcast hosted by Wong and featuring conversations with disabled people.

If you’re not sure where to start, dive into the 13 posts in the #ADA30InColor series — it includes essays on the past, present, and future of disability rights and justice by disabled BIPOC writers. Here are excerpts from two pieces.

More than anything, however, it was my blindness that allowed me to experience perhaps the biggest impact of this transition. Being able to attend a “regular” school as opposed to the school for the blind and take classes with sighted peers every day, becoming friends with classmates who have different types of disabilities, having Braille placards by every classroom door at a school not intended solely for only blind students, meeting blind adults with various jobs — ranging from chemist to statistician to lawyer — was my new reality. Even as a teenager, I knew it was a great privilege to be in this new reality — America, where there were laws in place to protect the rights of disabled people to live, study, play, and work alongside the nondisabled. At the same time, this reality began to feel like a multi-layered burden as I began to form and understand different elements of who I am: a disabled, 1.5 generation Korean-American immigrant. 

“Building Bridges as a Disabled Korean Immigrant” by Miso Kwak

Even with medical documentation on file, disabled BIPOC face added suspicion, resistance, and stigma from instructors, particularly for invisible disabilities. We are also stereotyped in racially coded ways as unreasonable, aggressive, and “angry” when we self-advocate. We are especially heavily policed in graduate and professional programs, and this is apparent in our representation — while 26 percent of adults in the US have a disability, only 12 percent of post-baccalaureate students are students with disabilities. This is even lower among some ethnicities — only 6 percent of post-baccalaureate Asian American students have a disability.  

“The Burden and Consequences of Self-Advocacy for Disabled BIPOC” by Aparna R.

“My Favorite Wheelchair Dances” at Alizabeth Worley

Alizabeth Worley is a writer and artist with moderate chronic fatigue syndrome. She writes about topics like health and interabled marriage (her husband has cerebral palsy). In a recent post, Alizabeth compiles YouTube clips of beautiful and inspiring wheelchair dances, some of which are from Infinite Flow, an inclusive dance company. Here’s one of the dances she includes in her list, featuring Julius Jun Obero and Rhea Marquez.


“The Intersection of Queerness and Disability” at Autistic Science Person

Ira, the writer at Autistic Science Person, explores the parallels between queerness and disability, and the way other people make assumptions about their body.

I often put down Female for medical appointments even if there’s a Nonbinary option, as I don’t want to “confuse” them. It’s just easier for everyone, I think. I worry about backlash I would receive, or the confused looks I would get if I put down Nonbinary. I think about people tiptoeing around my gender. I can’t deal with even more self-advocacy in a medical visit as an autistic person, so it’s just not worth it, I think. I’m reminded of the time I carried folding crutches to my unrelated medical appointment. Both the staff and doctor asked me why I brought crutches when I was “walking normally.” I had to explain that I needed them on my walk back for my foot pain. Both explaining my disability and explaining my gender — explaining the assumptions around my body is exhausting.

No matter what, people will make assumptions. Both ableism and cisnormativity are baked into our brains and our society. The things people have to do to accommodate us and acknowledge us involves unlearning their preconceptions. Society really doesn’t want us to do that. This is why there is so much defensiveness for both providing accommodations and acknowledging someone’s gender, pronouns, and name. People don’t want to do that work. They don’t want to be confronted with structural changes, the issue of gender norms, and the problems that disabled people face every day. They just want to go on with their lives because it’s easier to them. It’s easier for them to ignore our identities.

“The Last Halloween, The First Halloween” at Help Codi Heal

“The first Halloween my daughter could walk was the last Halloween that I could,” writes Codi Darnell, the blogger at Help Codi Heal. In a post about her fifth Halloween in a wheelchair, Codi reflects on change, pain, and the firsts and lasts in her life.

It was all automatic — all done without realizing the ways these simple acts of motherhood were deeply engrained in my identity. All done with zero understanding that something so simple could be snatched away — and how painful it would be when it was.

Because a year later I would not hold her hand up the stairs or scoop her up and onto my hip. I wouldn’t stand beside her at the door or see her face light up when — in her big two-year-old voice — she managed all three words “trick-or-treat”. A year later, I would understand the fragility of our being and know intimately the pain of things taken away. But I would still be there. 

“Even If You Can’t See It: Invisible Disability and Neurodiversity” at Kenyon Review

At Kenyon Review, author Sejal A. Shah writes a personal essay on neurodiversity, depression, academia, and the writing life.

Maybe things would have turned out differently had I requested accommodations, had I known about the Americans with Disabilities Act (ADA, 1990), had I understood my “situation,” as my aunt calls it, counted as a disability. The ADA law was amended in 2008 to include bipolar disorder. I began my job in 2005 and finished in 2011. It would have been helpful to know about the law and my rights under it.

I didn’t know the laws then; I didn’t know them until writing this essay. I looked normal; I passed. Would my career have turned out differently had I been willing to come out (for that’s what it felt like, an emergence into a world that might not accept me)? I was certain the stigma of having a major mood disorder would have hurt me professionally. Even had I disclosed my disorder, HR and my supervisors may not have agreed to modifications in my work responsibilities. I would still have needed to advocate for myself — would still have needed the energy to provide documentation and persist. For years, I had been ashamed, alarmed, and exhausted from trying to keep my head above water.

“The Outside Looking In” at Project Me

Project Me is the blog of Hannah Rose Higdon, a Deaf Lakota woman who grew up on the Cheyenne River Sioux Reservation. In “The Outside Looking In,” Higdon offers a glimpse into her experience as a child who was born hard of hearing, and whose family had very little access to the support she needed. (Higdon is now profoundly Deaf.)

I look up as my uncle talks to me. I nod. I smile. And I pretend I know just exactly what is going on. The truth is I have no clue what he’s saying or why he’s laughing, but I laugh too and mimic his facial expressions. I would never want to draw any more attention to myself than necessary. You see, I might only be 5 years old, but I know just how important it is to pretend.

“How to Center Disability in the Tech Response to COVID-19” at Brookings TechStream

Organizer, attorney, and disability justice advocate Lydia X.Z. Brown calls on the tech industry to carefully consider how policy affects marginalized communities, looking at algorithmic modeling in hospitals, contact tracing and surveillance, and web inaccessibility.

For disabled people who are also queer, trans, or people of color, the deployment of algorithmic modeling increases the risk of compounded medical discrimination. All marginalized communities have long histories and ongoing legacies of surviving involuntary medical experimentation, coercive treatment, invasive and irreversible procedures, and lower quality of care — often justified by harmful beliefs about the ability to feel pain and quality of life. These health care disparities are exacerbated for people who experience multiple forms of marginalization.

Spoonie Authors Network

The Spoonie Authors Network features work from authors and writers about how they manage their disabilities or chronic illnesses and conditions. Managed by Cait Gordon and Dianna Gunn, the community site also publishes resources and produces a podcast. Explore posts in the Featured Author or Internalized Ableism categories, like the piece below, to sample some of the writing.

When my neurologist suggested that I get a parking pass, I turned it down.

“I’d rather that go to someone more deserving,” I said. “There are people out there who are far more disabled than I am. Let the pass go to one of them.”

“You have difficulty walking. What would happen if it was icy or there were other difficult walking conditions?” she said kindly. “This is for your safety.”

I nodded and accepted the parking pass, even though I felt it made me look weak. I wasn’t disabled enough to warrant a parking pass. I can walk. I didn’t need it, I told myself.

“Not Disabled Enough” by Jamieson Wolf


More recommended sites:

Note on header image: Six disabled people of color smile and pose in front of a concrete wall. Five people stand in the back, with the Black woman in the center holding up a chalkboard sign that reads, “disabled and HERE.” A South Asian person in a wheelchair sits in front. Photo by Chona Kasinger | Disabled and Here (CC BY 4.0)

TCPS Connection With an Oracle Instant Client

All of our products (PFCLScan, PFCLCode, PFCLObfuscate and http://www.petefinnigan.com/products/pfclforensics.htm) can use an Oracle instant client, or even a full client, to connect to the target database(s). It is of course simpler to use an instant client if....[Read More]

Posted by PFCLScan On 27/11/20 At 03:56 PM