Oakies Blog Aggregator

Oracle 19c Automatic Indexing: Indexing Partitioned Tables Part I (Conversation Piece)

In this little series, I’m going to discuss how Automatic Indexing works in relation to Partitioning. I’ve discussed Indexing and Partitioning many times previously, and how Oracle has various options when indexing a partitioned table: a Non-Partitioned Index, a Globally Partitioned Index, or a Locally Partitioned Index. So the question is: how does Automatic Indexing handle scenarios with partitioned […]

A New Way to Publish Your Blog Posts Simultaneously as Twitter Threads

Publishing WordPress content on Twitter just got a whole lot easier. You can already bring single Tweets or threads, also known as Tweetstorms, over from Twitter to WordPress, and now we’re bringing the process full circle. With just two extra clicks, you can transform your entire blog post into a Twitter thread. By publishing your quality content on Twitter, you can open new lines of engagement and conversation.

When you share a WordPress post as a Twitter thread, nothing is left behind: Text, images, video, and embeds will be inserted into the thread right where they’re supposed to be.

We know that Twitter threads work best without breaks and other quirks. That’s why, in building this feature, we paid special attention to formatting. If a paragraph is too long for a single Tweet, for instance, it will automatically be split into multiple Tweets. And rather than squishing as many words as possible into the first Tweet and letting the rest spill to the second one, the break will come at the end of a sentence. Also, if you have a list block in your post, it will be formatted as a list on Twitter.

To give you extra control, while you’re writing a post, we’ll show you where Tweet splits will happen. That way, you can shape how your post will appear on Twitter as you write.

How to publish a blog post as a Twitter thread


  1. At any time while you’re working on a post, you can click on the Jetpack icon that’s located on the far right of the header menu at the top of the page.
  2. If you don’t already have your Twitter account connected to your website, click “Connect an account” to allow WordPress to publish content on your Twitter feed. You can add multiple Twitter handles if you’ll be Tweeting from more than one account. You only need to connect each account once.
  3. Make sure the right Twitter handle is selected, write a custom message, and then choose whether you want to share a single link to your blog post or all of the post’s content as a thread.
  4. Hit publish! Your blog post and the Tweet or thread will be shared simultaneously. Be sure you’ve selected your Twitter account when you publish, as this is the only time you’ll be able to share your blog post as a Twitter thread.

Sharing your full blog posts on Twitter is a great way to amplify your content, increase engagement, and build an audience for your work. You most likely have a number of followers on your WordPress blog who aren’t following you on Twitter, and vice versa. This feature allows you to tap into both groups.

Connect your Twitter account to your WordPress site today, and start publishing to both platforms at the same time. Get creative, and have fun. We’re excited to see how you use this first-of-its-kind tool.

YCSB (NoSQL benchmark) on Oracle Database

By Franck Pachot

The NoSQL technologies emerged for Big Data workloads where eventual consistency is acceptable and scaling out to multiple server nodes is an easy answer to increase throughput. With the rise of cloud services, those key-value document datastores started to be used by web-scale companies for some transactional processing workloads as well. These solutions can provide high performance and easy partitioning thanks to the very simple API of NoSQL. Each table or collection has only one key, the primary key, which is used for hash partitioning, sharding, and indexing. The API is a simple get/put by primary key, or a scan on a range. There’s no referential integrity and no join, so each query accesses only one shard, which makes each call fast and predictable. Only limited consistency is provided, so that each call does not have to wait for cross-node latency. There are no row-set operations like in SQL: you scale by adding more threads rather than having more work done by each thread.

Even though the API is very simple, there is no standard API, and each technology provides a different interface to the application. In order to compare performance, a benchmark application was developed by the research division of Yahoo in 2010 (all explained in this paper: https://www2.cs.duke.edu/courses/fall13/cps296.4/838-CloudPapers/ycsb.pdf). This open source YCSB (Yahoo Cloud Serving Benchmark), in addition to providing a core workload generator, includes extensible clients for many NoSQL databases. Look at the current clients:


[opc@al YCSB]$ grep -r "ycsb.DB" . | awk -F/ '{print $2}' | sort -u | paste -sd,
accumulo1.9,aerospike,arangodb,asynchbase,azurecosmos,azuretablestorage,cassandra,cloudspanner,couchbase,couchbase2,crail,elasticsearch,elasticsearch5,geode,googlebigtable,googledatastore,griddb,hbase1,hbase2,hypertable,ignite,infinispan,jdbc,kudu,maprjsondb,memcached,mongodb,nosqldb,orientdb,rados,redis,rest,s3,solr,solr6,solr7,voldemort,voltdb

In the list of clients, you can see that it goes beyond the usual NoSQL datastores. There’s a JDBC client storing the attributes in a relational table. I have added support for FETCH FIRST n ROWS ONLY so that you can run it on Oracle Database and any SQL:2008 compatible RDBMS.
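
For illustration, here is a hedged sketch of the shape of the scan query this makes possible (the bind names are mine for readability; the client uses positional bind variables):

select * from USERTABLE
where YCSB_KEY >= :startkey
order by YCSB_KEY
fetch first :scanlength rows only;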

Here is how to download the latest release of YCSB:


cd /var/tmp
release="$(curl https://github.com/brianfrankcooper/YCSB | awk '/[/]download[/]/{print $NF}' )"
curl --location "$release" | tar -zxvf -

However, my Pull Request to support FETCH FIRST n ROWS ONLY has been merged but is not yet included in the release, so it is better to compile from source if you want to use YCSB on Oracle Database.


sudo yum-config-manager --add-repo http://repos.fedorapeople.org/repos/dchen/apache-maven/epel-apache-maven...
sudo yum-config-manager --enable epel-apache-maven
sudo yum install -y git apache-maven

This installs Git and Maven if you don’t already have them (I’m running Autonomous Linux on the Oracle Free Tier as it gives me an always-free environment with an Oracle Database). Of course you can run it anywhere, but remember that, by the nature of the NoSQL API, the network latency between the application and the database quickly becomes a bottleneck.

I compile YCSB:


(
git clone https://github.com/brianfrankcooper/YCSB.git
cd YCSB
mvn clean package
mvn -pl site.ycsb:jdbc-binding -am package -DskipTests dependency:build-classpath -DincludeScope=compile -Dmdep.outputFilterFile=true
) > mvn.log

I keep the log in “mvn.log” and I’ll use it later to get the classpath.

YCSB will connect with JDBC but I’m using the OCI client here (OCI is the Oracle Call Interface). I’ll also use sqlplus to create the tables. Here is how to download them:


cd /var/tmp
wget https://download.oracle.com/otn_software/linux/instantclient/oracle-inst...
wget https://download.oracle.com/otn_software/linux/instantclient/oracle-inst...
sudo yum localinstall -y oracle-instantclient-basic-linuxx64.rpm oracle-instantclient-sqlplus-linuxx64.rpm

In this example, I’ll connect to my free Oracle Autonomous Database for which I’ve downloaded the wallet into my TNS_ADMIN directory.

Here is my quick and dirty way to set the ORACLE_HOME from an installed Instant Client with sqlplus. Updated (see comments): I get the important directories for future use:


# 'get ?"' makes sqlplus raise SP2-0160, which prints the expanded ?/ORACLE_HOME path between double quotes
MY_ORACLE_HOME=$(sqlplus /nolog <<<'get ?"' | awk -F'"' '/SP2-0160/{print $(NF-2)}')
MY_TNS_ADMIN=$MY_ORACLE_HOME/network/admin
MY_LD_LIBRARY_PATH=$MY_ORACLE_HOME
MY_CLASSPATH="$MY_ORACLE_HOME/ojdbc8.jar"

You may choose to use the JDBC thin driver instead, and even run this from the Cloud Shell. For the OCI thick JDBC driver, I need to define the CLASSPATH for the .jar and LD_LIBRARY_PATH for the .so.

If you don’t want to create a database, you can use the one I’ve shared for the SQL101 presentation. Here is my wallet:


wget https://objectstorage.us-ashburn-1.oraclecloud.com/n/idhoprxq7wun/b/pub/...
sudo unzip -od $MY_TNS_ADMIN Wallet_sql101.zip

Please be considerate if you use my database; you are not alone there. It is easy to open an Oracle Cloud trial that will leave you with a free Oracle Database for life.

Here is my configuration file providing the connection settings, using the YCSB jdbc-binding property names. If you used my SQL101 wallet this will connect to my database with the DEMO user. I’ll remove it only if someone does nasty things. It is limited to a very small storage quota, so this is only to check how it works; it is not a real-scale benchmark.


cat > /tmp/ycsb.database.properties <<'CAT'
# jdbc-binding connection properties (values here match the SQL101 demo service)
db.driver=oracle.jdbc.OracleDriver
db.url=jdbc:oracle:oci:@sql101_tp
db.user=SQL101DEMO
db.passwd=**P455w0rd**
CAT

I’m using the OCI driver here, as I have the Instant Client installed. Of course you can use the thin driver, and even add it as a maven dependency. If you do this, please document it in comments.

YCSB workload is defined by a few properties:


cat > /tmp/ycsb.workload.properties <<'CAT'
threadcount=20
fieldcount=1
fieldlength=42
recordcount=100000
operationcount=10000
workload=site.ycsb.workloads.CoreWorkload
readallfields=true
readproportion=0.3
updateproportion=0.2
scanproportion=0.1
insertproportion=0.4
requestdistribution=zipfian
minscanlength=1
maxscanlength=1000
CAT

All this is documented in https://github.com/brianfrankcooper/YCSB/wiki/Core-Properties

Here is my sqlplus script to create the table used by YCSB:


cat > /tmp/ycsb.create.sql <<'SQL'
set pagesize 1000 long 1000000 echo on
whenever sqlerror exit failure
exec for i in (select * from user_tables where table_name='USERTABLE') loop execute immediate 'drop table USERTABLE'; end loop;
purge recyclebin;
create table USERTABLE (
  YCSB_KEY varchar2(25) primary key,
  FIELD0 &&1
)
&&2
/
select dbms_metadata.get_ddl('TABLE','USERTABLE') from dual;
exit
SQL

It takes the datatype as the first parameter, and the table attributes as the second. I use this to compare various physical data model alternatives. For a scalable key-value document store you will probably partition by hash on the key, with the primary key index being local, maybe even with everything stored as an Index Organized Table. Any RDBMS has specific features to scale a key-value workload. NoSQL databases usually restrict their implementation to those physical representations, but they didn’t invent new algorithms for access by primary key: those have been in RDBMSs for a long time. But, as some RDBMSs like Oracle Database are also optimized to handle complex queries, the default table attributes may not be the best suited for NoSQL-like access.
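
For illustration, here is a hedged sketch of that scaled-out physical design (this is not what I run below on the space-limited SQL101 database): a hash-partitioned Index Organized Table on the primary key.

create table USERTABLE (
  YCSB_KEY varchar2(25) primary key,
  FIELD0   varchar2(42)
)
organization index
partition by hash (YCSB_KEY) partitions 8
/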

When running on my SQL101 you are limited in space, so let’s keep it simple:


sqlplus SQL101DEMO/"**P455w0rd**"@sql101_tp @ /tmp/ycsb.create.sql "varchar2(42)" "organization index nologging --partition by hash(YCSB_KEY) partitions 8"

Now I have a USERTABLE in the DEMO schema.

I put all environment variables in setenv.sh, which is called by ycsb.sh. The classpath extraction from mvn.log below assumes the default output format of dependency:build-classpath:


cat > /var/tmp/YCSB/bin/setenv.sh <<CAT
# OCI libraries for the thick driver, and the classpath captured from the maven output
export LD_LIBRARY_PATH=$MY_LD_LIBRARY_PATH
export CLASSPATH="$MY_CLASSPATH:$(awk -F'classpath=' '/classpath=/{print $2}' /var/tmp/YCSB/mvn.log | tail -1)"
CAT

The LD_LIBRARY_PATH is required to find libocijdbc19 as I’m using the OCI driver. You can also put it in /etc/ld.so.conf.d/oci.conf if you prefer. The CLASSPATH includes the one from the maven output, plus the ojdbc8.jar.

I am now ready to load some data:


/var/tmp/YCSB/bin/ycsb.sh load jdbc -P /tmp/ycsb.workload.properties -P /tmp/ycsb.database.properties

This reads /tmp/ycsb.workload.properties for the number of records to create.
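
As a quick sanity check after the load, the row count should match the recordcount property:

select count(*) from USERTABLE;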

Here is how to run the workload defined in /tmp/ycsb.workload.properties:


/var/tmp/YCSB/bin/ycsb.sh run jdbc -P /tmp/ycsb.workload.properties -P /tmp/ycsb.database.properties | tee run.log

I’m showing below the Performance Hub from when I ran it on my SQL101 database with 20 client threads.

[Screenshot: Performance Hub during the YCSB run on the SQL101 database]
I’m running a very light workload here as it is a small test database. But the important message is that you cannot run a benchmark without looking at the database metrics, to be sure that the workload, and the bottleneck, is what you expect to push to the limits.

Here is how to quickly parse the log to see the result:

awk -F, '/^\[(READ|SCAN|INSERT|UPDATE|DELETE)\].*95thPercentile/{printf "%10.3f %-10s ",$3/1e6,$1}' run.log | sort | paste -s

     0.355 [SCAN]          0.000 [INSERT]        0.009 [UPDATE]        0.003 [READ]

Measuring the percentiles, the 95th percentile here, is interesting for NoSQL-like workloads. The goal is to scale to millions of users and get an acceptable response time for most of the interactions. And NoSQL databases are usually used to “web-scale” onto nodes distributed over the internet, where some latency outliers may happen. Looking at percentiles helps to focus on the SLA while accepting those outliers.

This “jdbc” client declares all attributes as columns in the table. Of course, an RDBMS works on structured data in order to ease the application code, which can then rely on a schema, but SQL includes DDL with all the agility needed to add columns later. In addition to traditional relational tables, all the major RDBMSs can also be used to store documents. And some even have an optimized datatype to store a JSON document in binary format (JSONB in PostgreSQL, OSON in Oracle). Oracle even provides a NoSQL-like API for it: SODA. I hope that Oracle will contribute to the YCSB project by adding a client for SODA.
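
As an aside, here is a hedged sketch (the table name and shape are mine, not part of YCSB) of how the same records could be stored as JSON documents in a relational table:

create table USERTABLE_JSON (
  YCSB_KEY varchar2(25) primary key,
  DOC      blob check (DOC is json)   -- one JSON document per key
);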

The NoSQL APIs are similar but may have some implementation differences and, for sure, different calls. YCSB implements read, scan, update, insert and delete, and here is how they map to some popular NoSQL databases (the SQL equivalents used by the jdbc client are sketched after this list):

  • YCSB read is equivalent to
    MongoDB: find().first() with “_id”, DynamoDB: getItem(), Cassandra: SELECT without WHERE, HBase: get(), SODA: getOne()
  • YCSB scan is equivalent to
    MongoDB: find().sort().limit(), DynamoDB: scan(), Cassandra: SELECT with WHERE and LIMIT, HBase: scan(), SODA: find()
  • YCSB update is equivalent to
    MongoDB: updateOne(), DynamoDB: updateItem(), Cassandra: UPDATE, HBase: put(), SODA: replaceOne()
  • YCSB insert is equivalent to
    MongoDB: insertOne(), DynamoDB: putItem(), Cassandra: INSERT, HBase: put(), SODA: insert()
  • YCSB delete is equivalent to
    MongoDB: deleteOne(), DynamoDB: deleteItem(), Cassandra: DELETE, HBase: delete(), SODA: remove()
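
And here, as a hedged sketch (statement shapes inferred from the jdbc client’s behaviour, with bind variables shown as ?), is how the same five operations land on the USERTABLE created above:

select * from USERTABLE where YCSB_KEY = ?;                  -- read
select * from USERTABLE where YCSB_KEY >= ?
  order by YCSB_KEY fetch first ? rows only;                 -- scan
update USERTABLE set FIELD0 = ? where YCSB_KEY = ?;          -- update
insert into USERTABLE (YCSB_KEY, FIELD0) values (?, ?);      -- insert
delete from USERTABLE where YCSB_KEY = ?;                    -- delete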

With the evolution of RDBMSs into converged databases, it is useful to compare the performance of key-value workloads with the JDBC SQL API and the NoSQL put/get APIs. YCSB can be used and extended for that. Among the most advanced converged databases:

  • On PostgreSQL you can use the JDBC client, or the “postgrenosql” client that is already there in the master branch
  • On YugaByteDB you can do the same because it is compatible with PostgreSQL through the YSQL API. YugaByteDB also provides a Cassandra-like NoSQL API (YCQL), and here is how to run YCSB on it: https://docs.yugabyte.com/latest/benchmark/ycsb-ycql/
  • On Oracle Database, now that I have added the support for FETCH FIRST, the jdbc client can be used on a relational table. I hope to see a SODA one soon, as SODA (Simple Oracle Document Access) is a very advanced NoSQL API that combines a key-value document store with RDBMS consistency and reliability. Oracle has published the results but not the code.

The article YCSB (NoSQL benchmark) on Oracle Database appeared first on Blog dbi services.

The weirdest reason to avoid SELECT *

A quick Google or Bing search will find no limit to the number of articles on databases telling you that using “SELECT *” is a terrible, terrible thing to do. Your code will be more fragile. It is bad for your data dictionary. You’ll end up with conflicts when you join; the list goes on. You can find a more reasoned argument from Markus Winand, but ultimately, for the majority of the time, the potential drawbacks outweigh any convenience benefits of using SELECT *.

(I say “majority” because PL/SQL is one of the few languages that can insulate you from a lot of the risks with its %ROWTYPE syntax and the VALUES / SET ROW clauses for DML. Yet another reason why PL/SQL is so cool.)
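
As a quick aside, here is a minimal sketch of that insulation, assuming a table T like the one created below: the %ROWTYPE record and the row-level VALUES clause keep working even if columns are added to T later.

declare
  l_row t%rowtype;
begin
  select * into l_row from t where rownum = 1;  -- fetch a whole row into the record
  l_row.child := l_row.child + 1;
  insert into t values l_row;                   -- row-level VALUES clause
end;
/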

But here’s probably the weirdest reason you’ll ever see for why you might want to steer clear of SELECT *. I’ll start with a simple table T, and I’ve named the columns CHILD and PARENT because I’m going to query the table using the recursive subquery factoring feature:



SQL> create table t (child,parent ) as select 2,1 from dual;

Table created.

With only a single row, the results from the recursive query are not particularly exciting


SQL> with recursive (chd,par) as (
  2    select child,parent
  3    from t
  4    union all
  5    select t.child, t.parent
  6    from t, recursive
  7    where recursive.chd = t.parent
  8  )
  9  select *
 10  from recursive;

       CHD        PAR
---------- ----------
         2          1

but the key thing to note here are lines 2 and 5. As you would expect from any UNION ALL query, the number of columns in the first part of the UNION ALL must match the number of columns in the second part.

On line 5, I’m querying columns CHILD and PARENT from the table T, and savvy readers will have already spotted that this is all of the columns from T, in the same order as the definition of the table. Hence let’s see what happens when I go against conventional wisdom and replace this with a SELECT *.



SQL> with recursive (chd,par) as (
  2    select child,parent
  3    from t
  4    union all
  5    select t.*
  6    from t, recursive
  7    where recursive.chd = t.parent
  8  )
  9  select *
 10  from recursive;
  select t.*
  *
ERROR at line 5:
ORA-01789: query block has incorrect number of result columns

To understand why this is the case, we need to take a careful look at the documentation for recursive subquery factoring, which states:

“The number of column aliases following WITH query_name and the number of columns in the SELECT lists of the anchor and recursive query blocks must be the same.”

The ORA-01789 error suggests this check is being done very early in the syntax processing, before expansion of the asterisk into columns; hence there is only a single item in the second part of the UNION ALL. Further weight is added to this hypothesis of an early check by putting gibberish in the SQL.



SQL> with recursive (chd,par) as (
  2    select child,parent
  3    from t
  4    union all
  5    select blahblahblah.*
  6    from t, recursive
  7    where recursive.chd = t.parent
  8  )
  9  select *
 10  from recursive;
  select blahblahblah.*
  *
ERROR at line 5:
ORA-01789: query block has incorrect number of result columns

Even with the reference to the non-existent “blahblahblah”, the first error we get is not about the invalid reference but about the number of columns. The statement fails the first syntax check before getting anywhere near the data dictionary for further validation.

So when it comes to recursive subquery factoring, make sure you list those columns out loud and proud!

Video : Temporal Validity in Oracle Database 12c Onward

In today’s video we discuss how Temporal Validity can make querying of effective date ranges simpler in Oracle 12c and beyond.

The video is based on this article.

The syntax looks similar to Flashback Query and Flashback Version Query.
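
For reference, here is a minimal sketch of the Temporal Validity syntax covered in the video; the table and column names are mine, for illustration only.

create table emp_history (
  empno     number,
  ename     varchar2(30),
  start_ts  timestamp,
  end_ts    timestamp,
  period for valid_time (start_ts, end_ts)
);

select *
from   emp_history
as of period for valid_time timestamp '2020-01-01 00:00:00';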

The star of today’s video is a knitted creation by the wife, Debra Lilley. I’m not entirely sure what it is. Maybe a sea squirt? I’m sure I’m going to get into trouble for not knowing, as I’m sure it’s a perfect representation…

Interval Oddity

Interval partitioning is a popular strategy for partitioning date-based data. It’s an enhanced variant of range partitioning that allows you to define a starting partition and an interval that should be used to derive the high values for all subsequent partitions. Oracle doesn’t even have to create intervening partitions if you insert data that goes far beyond the current partition: it automatically creates exactly the right partition (with the correct high_value and correctly inferred lower boundary) for the incoming data, and behaves as if the intervening partitions will become available when they’re needed at some later point in time. So there is no need for DBAs to work out actual partition high_values, no need to ensure that all the partitions you need out into the future have been pre-created, and no more batch processes crashing with Oracle error ORA-14400: inserted partition key does not map to any partition.

But there’s a surprising difference between traditional range partitioning and the newer interval partitioning that will increase CPU usage in some cases and may (though I haven’t yet investigated this in sufficient detail to create an example) lead to variations in execution plans.

To demonstrate the difference I’m going to create two tables with the same structure and content then run the same query against them and show you the resulting execution plans. Here’s the code to create and populate the tables:

rem
rem     Script:         interval_or_range.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Oct 2020
rem
rem     Last tested 
rem             19.3.0.0
rem


create table t_interval(
        order_date      date not null,
        order_id        number(10,0) not null,
        small_vc        varchar2(10),
        padding         varchar2(100)
)
partition by range (order_date) 
interval (numtoyminterval(1,'MONTH'))
subpartition by hash (order_id) subpartitions 4
        (
                partition start_p1 values less than (to_date('01-Jan-2020','dd-mon-yyyy'))
        )
;

create table t_range(
        order_date      date not null,
        order_id        number(10,0) not null,
        small_vc        varchar2(10),
        padding         varchar2(100)
)
partition by range (order_date) 
subpartition by hash (order_id) subpartitions 4
        (
                partition start_p1 values less than (to_date('01-Jan-2020','dd-mon-yyyy')),
                partition start_p2 values less than (to_date('01-Feb-2020','dd-mon-yyyy')),
                partition start_p3 values less than (to_date('01-Mar-2020','dd-mon-yyyy')),
                partition start_p4 values less than (to_date('01-Apr-2020','dd-mon-yyyy')),
                partition start_p5 values less than (to_date('01-May-2020','dd-mon-yyyy')),
                partition start_p6 values less than (to_date('01-Jun-2020','dd-mon-yyyy')),
                partition start_p7 values less than (to_date('01-Jul-2020','dd-mon-yyyy')),
                partition start_p8 values less than (to_date('01-Aug-2020','dd-mon-yyyy')),
                partition start_p9 values less than (to_date('01-Sep-2020','dd-mon-yyyy'))
        )
;

insert into t_range select
        to_date('01-Jan-2020','dd-mon-yyyy') + rownum,
        rownum,
        lpad(rownum,10,'0'),
        lpad('x',100,'x')
from
        all_objects
where
        rownum <= 240 -- > comment to avoid wordpress format issue  
;

insert into t_interval select
        to_date('01-Jan-2020','dd-mon-yyyy') + rownum,
        rownum,
        lpad(rownum,10,'0'),
        lpad('x',100,'x')
from
        all_objects
where
        rownum <= 240 -- > comment to avoid wordpress format issue
;

commit;

begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T_RANGE',
                method_opt  => 'for all columns size 1'
        );

        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T_INTERVAL',
                method_opt  => 'for all columns size 1'
        );
end;
/

I’ve used composite partitioned tables in this example but the same anomaly appears with simple partitioning – which you can test by commenting out the “subpartition by ..” lines, or with the sketch below. The two tables have partitions defined to hold a month’s data. I’ve inserted a few rows into most of the partitions, and ensured that I haven’t accidentally attempted to insert data that falls outside the legal range of the table with the predefined partitions.
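
For convenience, here is a hedged sketch of the simple (non-composite) interval variant; the range equivalent is derived the same way by listing the partitions explicitly.

create table t_interval_simple(
        order_date      date not null,
        order_id        number(10,0) not null,
        small_vc        varchar2(10),
        padding         varchar2(100)
)
partition by range (order_date)
interval (numtoyminterval(1,'MONTH'))
        (
                partition start_p1 values less than (to_date('01-Jan-2020','dd-mon-yyyy'))
        )
;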

To show that the tables are nominally identical here’s the list of partitions with their high values and number of rows:

column table_name       format a15
column partition_name   format a15
column high_value       format a80

break on table_name skip 1

select
        table_name, partition_name, num_rows, high_value
from
        user_tab_partitions
where
        table_name in ('T_INTERVAL','T_RANGE')
order by
        table_name, partition_name
;


TABLE_NAME      PARTITION_NAME    NUM_ROWS HIGH_VALUE
--------------- --------------- ---------- --------------------------------------------------------------------------------
T_INTERVAL      START_P1                 0 TO_DATE(' 2020-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                SYS_P10722              30 TO_DATE(' 2020-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                SYS_P10723              29 TO_DATE(' 2020-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                SYS_P10724              31 TO_DATE(' 2020-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                SYS_P10725              30 TO_DATE(' 2020-05-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                SYS_P10726              31 TO_DATE(' 2020-06-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                SYS_P10727              30 TO_DATE(' 2020-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                SYS_P10728              31 TO_DATE(' 2020-08-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                SYS_P10729              28 TO_DATE(' 2020-09-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA

T_RANGE         START_P1                 0 TO_DATE(' 2020-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                START_P2                30 TO_DATE(' 2020-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                START_P3                29 TO_DATE(' 2020-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                START_P4                31 TO_DATE(' 2020-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                START_P5                30 TO_DATE(' 2020-05-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                START_P6                31 TO_DATE(' 2020-06-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                START_P7                30 TO_DATE(' 2020-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                START_P8                31 TO_DATE(' 2020-08-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
                START_P9                28 TO_DATE(' 2020-09-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA


18 rows selected.

With this setup we can now run a simple query against the two tables using a where clause that is an exact match for the partition definition for the July data:

set serveroutput off

select 
        count(*) 
from 
        t_range 
where   order_date >= date '2020-07-01'  -- to_date('01-Jul-2020','dd-mon-yyyy')
and     order_date <  date '2020-08-01'  -- to_date('01-Aug-2020','dd-mon-yyyy')
/

select * from table(dbms_xplan.display_cursor);

select 
        count(*) 
from 
        t_interval 
where   order_date >= date '2020-07-01'  -- to_date('01-Jul-2020','dd-mon-yyyy')
and     order_date <  date '2020-08-01'  -- to_date('01-Aug-2020','dd-mon-yyyy')
/

select * from table(dbms_xplan.display_cursor);

I’ve left in two versions of the date predicates – the “ANSI” style, and a strictly formatted “to_date()” style that is still the approach used most commonly in Oracle systems. The effect on the execution plans is the same for both variations of the date declaration, and here are the two execution plans – reported from an instance of 19.3.0.0:

---------------------------------------------------------------------------------------------------
| Id  | Operation               | Name    | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |         |       |       |   130 (100)|          |       |       |
|   1 |  SORT AGGREGATE         |         |     1 |     8 |            |          |       |       |
|   2 |   PARTITION RANGE SINGLE|         |    31 |   248 |   130   (3)| 00:00:01 |     8 |     8 |
|   3 |    TABLE ACCESS FULL    | T_RANGE |    31 |   248 |   130   (3)| 00:00:01 |     8 |     8 |
---------------------------------------------------------------------------------------------------


------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name       | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |            |       |       |   130 (100)|          |       |       |
|   1 |  SORT AGGREGATE         |            |     1 |     8 |            |          |       |       |
|   2 |   PARTITION RANGE SINGLE|            |    31 |   248 |   130   (3)| 00:00:01 |     8 |     8 |
|*  3 |    TABLE ACCESS FULL    | T_INTERVAL |    31 |   248 |   130   (3)| 00:00:01 |     8 |     8 |
------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter(("ORDER_DATE">=TO_DATE(' 2020-07-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "ORDER_DATE"<TO_DATE(' 2020-08-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')))

It’s quite possible that you won’t notice the difference between these plans, even though I’ve put them directly one below the other; and it’s very likely that most people wouldn’t think about the difference if they didn’t have the two plans available at the same time to compare.

If you can’t spot the difference it’s because I’ve deliberately arranged them in a way that might fool you into not looking carefully enough.

If you’ve spotted the difference it’s probably because you’ve examined the Predicate Information section. There isn’t one for the t_range example but there is one for the t_interval example – and it was a little sneaky of me to make it easy for you to assume that the one set of predicates I showed was common to the two plans.

Since the plans have been pulled from memory (v$sql_plan) it seems likely that they are truthful, and the query based on the interval partitioning really is checking every row in the partition against two date values. In my example that’s not going to make much difference to performance, but in a data warehouse with millions of rows per partition there’s scope for a noticeable increase in CPU between the two queries – especially since the block reads are likely to be direct path, bypassing much of the usual (CPU-intensive) buffer cache activity.

Interestingly, when I checked the 10053 trace files for the two queries the CPU cost for the pair was identical. Normally you expect to see at least a little CPU cost (even if it’s only tens of units out of millions) for checking a predicate. This raises the question: does the optimizer allow a cost for the t_range table for an event that isn’t going to happen, or is it reporting an event that isn’t going to happen for the t_interval table? [Edit: easy enough to check by doing the test on a simple heap table that clones the data from that one partition, and checking the calculated cost with and without the predicate]
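
Here is a hedged sketch of that check, along the lines suggested in the edit above: clone the July data into a heap table and compare the calculated cost of the counts with and without the (always-true) date predicate.

create table t_heap as
select * from t_interval partition for (date '2020-07-15');

execute dbms_stats.gather_table_stats(null,'t_heap',method_opt=>'for all columns size 1')

explain plan for
select count(*) from t_heap;

explain plan for
select count(*) from t_heap
where   order_date >= date '2020-07-01'
and     order_date <  date '2020-08-01';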

There’s a further ramification to this anomaly, relating to the question on the Oracle-L list server that prompted the investigation. What happens on Exadata where the predicates can be offloaded to storage?

The original question wasn’t actually about the appearance (or not) of the predicates; it was about an unexpected cardinality estimate for a query involving two equi-partitioned tables, so this note has gone completely off-topic from the question. But another strange detail about the predicates showed up when I suggested the creation of a column group on the join columns: a (redundant) predicate that had been offloaded to storage stopped being offloaded. Here, from the original posting with the original tables, are the two different Predicate Information sections that appeared – the first without the column group, the second after the column group had been created and its stats collected:

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("ORDER_DETL"."ORDR_RQST_NUMBER"="ORDER_REQST"."ORDR_RQST_NUMBER" AND
              "ORDER_DETL"."ORDR_RQST_DATE"="ORDER_REQST"."ORDR_RQST_DATE")
   6 - storage("ORDER_REQST"."ORDR_RQST_DATE">=TO_DATE(' 2020-07-01
00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "ORDER_REQST"."ORDR_RQST_DATE"<=TO_DATE(' 2020-07-31
00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
       filter("ORDER_REQST"."ORDR_RQST_DATE">=TO_DATE(' 2020-07-01
00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "ORDER_REQST"."ORDR_RQST_DATE"<=TO_DATE(' 2020-07-31
00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
   8 - storage("ORDER_DETL"."ORDR_RQST_DATE">=TO_DATE(' 2020-07-01
00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "ORDER_DETL"."ORDR_RQST_DATE"<=TO_DATE(' 2020-07-31
00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
       filter("ORDER_DETL"."ORDR_RQST_DATE">=TO_DATE(' 2020-07-01
00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "ORDER_DETL"."ORDR_RQST_DATE"<=TO_DATE(' 2020-07-31
00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("ORDER_DETL"."ORDR_RQST_NUMBER"="ORDER_REQST"."ORDR_RQST_NUMBER" AND
              "ORDER_DETL"."ORDR_RQST_DATE"="ORDER_REQST"."ORDR_RQST_DATE")
   6 - storage("ORDER_REQST"."ORDR_RQST_DATE">=TO_DATE(' 2020-07-01
00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "ORDER_REQST"."ORDR_RQST_DATE"<=TO_DATE(' 2020-07-31
00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
       filter("ORDER_REQST"."ORDR_RQST_DATE">=TO_DATE(' 2020-07-01
00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "ORDER_REQST"."ORDR_RQST_DATE"<=TO_DATE(' 2020-07-31
00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
   8 - filter("ORDER_DETL"."ORDR_RQST_DATE">=TO_DATE(' 2020-07-01
00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "ORDER_DETL"."ORDR_RQST_DATE"<=TO_DATE(' 2020-07-31
00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

Notice how the storage() predicate that appears at operation 8 in the first set has disappeared from the second even though operation 6 manages to retain its storage() predicate throughout.

I’ve modelled a further example of odd behaviour using two pairs of tables – one pair using range/hash partitioning, the other interval/hash partitioning. I won’t reproduce it here, but the problem of redundant predicates appearing and then disappearing gets even stranger.

I haven’t yet produced an example where the unexpected predicate behaviour has affected the costs or cardinalities – but I’ve only spent a couple of hours playing around with well-formed examples: it’s possible that in badly formed examples (e.g. with statistical inconsistencies) the side effect could mean that two notionally identical queries produce different numbers and different plans because they end up with different predicates in the final transformed query.

Footnote

Following an email from David Kurtz, it occurred to me that I should have made it clear that the disappearance of predicates on the partition key is expected behaviour when the predicates are clearly synchronised with the partition boundaries. The behaviour for the interval partitioning is the oddity; the behaviour for the “normal” range partitioning is the standard.

Video Conference Equipment Breakdown


Buying equipment can be really confusing because the reviews are very inconsistent, with a mix of 5 star and 1 star ratings. Also, a lot depends on your experience level. If you are a beginner, simplicity is probably your number one priority. As people get more experienced, they often want greater levels of control (not me). So I thought I would give a breakdown of what I use at the moment, just in case it helps anyone who is a bit confused.

I’ve given links to Amazon where possible.

Microphone Stuff

I tried several USB microphones before I got to this setup. I think this was the cheapest I’ve used and it seems to be the most consistent for my voice. I’m sure I could get better quality with an XLR mic and a mixer, but I really don’t need the extra hassle. I just plug this in and start.

I use a wind screen and a pop filter, yet I still get plosive noises all the time. I used to use a separate adjustable pop filter, but I was constantly readjusting it, which drove me mad, so I changed to one attached to the mic.

Backdrop (Green Screen)

Every frame and backdrop I looked at has extremely varied reviews. Most of the frames were described as flimsy, so I spent more and got something that was really solid, which is height adjustable and can extend from 4 feet (121 cm) to 10 feet (305 cm) in width. The green screen is thick, and is so wide that I have it folded double at the 6 foot width I’m currently using.

  • 3.6Mx2.8M Heavy Duty Backdrop Studio Support System Kit Tripod Adjustable W/Bag : (amazon.co.uk)
  • Neewer 6-Pack Set Heavy Duty Muslin Spring Clamps : (amazon.co.uk)
  • Neewer 9 x 13 feet/2.8 x 4 meters Photography Background Photo Video Studio Fabric Backdrop Background Screen (Green) : (amazon.com) (amazon.co.uk)

It’s early days, but I’m really happy with the result.

I only bought the following brick background because there was a delay with the above green screen, but then the green screen came the next day.

  • Allenjoy White Brick Wall with Gray Wooden Floor Photography Background : (amazon.co.uk)

Lighting

I tried using normal lighting with the green screen and it was OK, but there were green screen artefacts, especially at the creases on the green screen. As a result I decided to get some lights. As soon as these were on and pointing to the green screen, not me, all the artefacts disappeared, even on the heavy creases. These are the lights I got.

WebCam

I checked out the reviews and this webcam seemed to have a good mix of price and features. There are loads to choose from. You could of course use a proper camera if that is your thing, but it’s far too much work for me, so a webcam is fine.

Nothing to complain about here. The quality is fine. It would be even better if I had some reasonable lighting.

Inline Hint

If you’ve ever used subquery factoring (“with” subqueries, or common table expressions (CTEs) as they are often called) then you’re probably aware of the (undocumented) hints /*+ materialize */, which forces Oracle to create a local temporary table to hold the result of the subquery for subsequent use, and /*+ inline */, which forces the optimizer to copy the text of the subquery into the body of the query before starting the optimisation phase.

There’s a small, but important, enhancement to these hints that appeared in Oracle 18. Like so many other hints in Oracle they can now have a query block name as a “parameter”, so you can use them at the top level of your query. Here’s some code to demonstrate:

rem
rem     Script:         inline_hint.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Oct 2020
rem     Purpose:        
rem
rem     Last tested 
rem             19.3.0.0
rem             18.3.0.0
rem             12.2.0.1  -- hints don't have any effect
rem

create table t1
as
select  *
from    all_objects
where   rownum <= 10000  -- > comment to avoid wordpress format issue
/

create index t1_i1 on t1(object_id);

create table t2
as
select  *
from    t1
/

create index t2_i1 on t2(object_id);

spool inline_hint.lst


explain plan for
with v1 as (
        select 
                /*+ 
                        qb_name(cte) 
                */ 
                object_id, object_type, object_name 
                from t2 gtt1
                where object_id is not null
)
select
        /*+
                qb_name(main)
                inline(@cte)
        */
        t1.object_id,
        t1.object_name
from
        t1
where
        exists (
                select
                        null
                from
                        v1      v1a
                where
                        v1a.object_id = t1.object_id
                and     v1a.object_type = 'TABLE'
        )
and     exists (
                select
                        null
                from
                        v1      v1b
                where
                        v1b.object_id = t1.object_id
                and     v1b.object_name like 'WRI%'
        )
and
        t1.object_id between 100 and 200
/

select * from table(dbms_xplan.display(format=>'alias'));

explain plan for
with v1 as (
        select 
                /*+ 
                        qb_name(cte) 
                */ 
                object_id, object_type, object_name 
                from t2 gtt1
                where object_id is not null
)
select
        /*+
                qb_name(main)
                materialize(@cte)
        */
        t1.object_id,
        t1.object_name
from
        t1
where
        exists (
                select
                        null
                from
                        v1      v1a
                where
                        v1a.object_id = t1.object_id
                and     v1a.object_type = 'TABLE'
        )
and
        t1.object_id between 100 and 200
/

select * from table(dbms_xplan.display(format=>'alias'));

The first of these two queries uses the factored subquery twice so, by default, it will create a “cursor duration memory” temporary table to hold the results of the subquery and then use that temporary table twice in the execution plan.

Conversely the second query uses the factored subquery just once, so the optimizer’s default action will be to copy the text into the body of the main query and optimize the whole thing as a single query block.

To reverse the default behaviour in versions of Oracle up to 12.2.0.1 (though later patch sets may include the 18c enhancement) you could add the /*+ inline */ or /*+ materialize */ hints respectively to the factored subqueries; but in my demonstration you can see that I’ve given the factored subquery a query block name and added the relevant hint to the main query block, passing in the query block name of the factored subquery – hence /*+ inline(@cte) */ and /*+ materialize(@cte) */.

Here – from 19.3 – are the resulting execution plans (with some cosmetic editing) – first the plan with the inline() hint.

------------------------------------------------------------------------------------------------
| Id  | Operation                              | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                       |       |     1 |    63 |     9  (12)| 00:00:01 |
|   1 |  NESTED LOOPS SEMI                     |       |     1 |    63 |     9  (12)| 00:00:01 |
|   2 |   NESTED LOOPS                         |       |     1 |    50 |     7  (15)| 00:00:01 |
|   3 |    SORT UNIQUE                         |       |     1 |    25 |     4   (0)| 00:00:01 |
|*  4 |     TABLE ACCESS BY INDEX ROWID BATCHED| T2    |     1 |    25 |     4   (0)| 00:00:01 |
|*  5 |      INDEX RANGE SCAN                  | T2_I1 |    48 |       |     2   (0)| 00:00:01 |
|   6 |    TABLE ACCESS BY INDEX ROWID BATCHED | T1    |     1 |    25 |     2   (0)| 00:00:01 |
|*  7 |     INDEX RANGE SCAN                   | T1_I1 |     1 |       |     1   (0)| 00:00:01 |
|*  8 |   TABLE ACCESS BY INDEX ROWID BATCHED  | T2    |     1 |    13 |     2   (0)| 00:00:01 |
|*  9 |    INDEX RANGE SCAN                    | T2_I1 |     1 |       |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$06B48120
   4 - SEL$06B48120 / GTT1@CTE
   5 - SEL$06B48120 / GTT1@CTE
   6 - SEL$06B48120 / T1@MAIN
   7 - SEL$06B48120 / T1@MAIN
   8 - SEL$06B48120 / GTT1@CTE
   9 - SEL$06B48120 / GTT1@CTE

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter("OBJECT_NAME" LIKE 'WRI%')
   5 - access("OBJECT_ID">=100 AND "OBJECT_ID"<=200)
   7 - access("OBJECT_ID"="T1"."OBJECT_ID")
       filter("T1"."OBJECT_ID"<=200 AND "T1"."OBJECT_ID">=100)
   8 - filter("OBJECT_TYPE"='TABLE')
   9 - access("OBJECT_ID"="T1"."OBJECT_ID")
       filter("OBJECT_ID"<=200 AND "OBJECT_ID">=100)

As you can see, Oracle has copied the subquery text into the main body of the query and then optimized the whole thing to produce a three-table join. One of the subqueries has been unnested into an aggregate view (operations 3, 4 and 5); the other has been transformed into a semi-join.

In passing you’ll also notice that the optimizer has used transitive closure to add the range predicate on t1 to both occurrences of the t2 table.

And here’s the plan for the query with the single use of the subquery and materialize() hint:

-----------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |                            |    48 |  2448 |    39   (8)| 00:00:01 |
|   1 |  TEMP TABLE TRANSFORMATION               |                            |       |       |            |          |
|   2 |   LOAD AS SELECT (CURSOR DURATION MEMORY)| SYS_TEMP_0FD9D6611_F53A566 |       |       |            |          |
|   3 |    TABLE ACCESS FULL                     | T2                         | 10000 |   322K|    27   (8)| 00:00:01 |
|*  4 |   HASH JOIN SEMI                         |                            |    48 |  2448 |    13  (16)| 00:00:01 |
|   5 |    TABLE ACCESS BY INDEX ROWID BATCHED   | T1                         |    48 |  1200 |     4   (0)| 00:00:01 |
|*  6 |     INDEX RANGE SCAN                     | T1_I1                      |    48 |       |     2   (0)| 00:00:01 |
|*  7 |    VIEW                                  |                            | 10000 |   253K|     8  (13)| 00:00:01 |
|   8 |     TABLE ACCESS FULL                    | SYS_TEMP_0FD9D6611_F53A566 | 10000 |   322K|     8  (13)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$A3F38ADC
   2 - CTE
   3 - CTE          / GTT1@CTE
   5 - SEL$A3F38ADC / T1@MAIN
   6 - SEL$A3F38ADC / T1@MAIN
   7 - SEL$AA28F105 / V1A@SEL$1
   8 - SEL$AA28F105 / T1@SEL$AA28F105

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("V1A"."OBJECT_ID"="T1"."OBJECT_ID")
   6 - access("T1"."OBJECT_ID">=100 AND "T1"."OBJECT_ID"<=200)
   7 - filter("V1A"."OBJECT_TYPE"='TABLE' AND "V1A"."OBJECT_ID">=100 AND "V1A"."OBJECT_ID"<=200)

In this plan the optimizer has created an in-memory temporary table and then used it in the existence subquery – which it has then transformed into a semi-join, so we have a query block with the name SEL$A3F38ADC; but we also see that the query block CTE still exists, labelling the operations that Oracle used to populate the temporary table.

It is an interesting (and irritating) detail that when we look at object aliases we see (operation 8) that Oracle has given the temporary table the alias of t1 – which is just a little confusing since I actually have a table called t1!

Next Steps

Being able to nominate a query block for the inline() and materialize() hints may be of great help in some cases (there’s a recent example on the Oracle Developer Forum (may need a MOS login) where it might make a huge difference to the performance of a particular query without requiring a rewrite of the SQL).

But there are a couple of details to investigate. First, I had a query block name built into my factored subquery – what happens if the author of the SQL didn’t include a query block name?

Before I’d added the inline() hint and query block names in the first example above this is what the plan looked like:

-----------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |                            |    48 |  6240 |    48  (11)| 00:00:01 |
|   1 |  TEMP TABLE TRANSFORMATION               |                            |       |       |            |          |
|   2 |   LOAD AS SELECT (CURSOR DURATION MEMORY)| SYS_TEMP_0FD9D6612_F53A566 |       |       |            |          |
|   3 |    TABLE ACCESS FULL                     | T2                         | 10000 |   322K|    27   (8)| 00:00:01 |
|*  4 |   HASH JOIN SEMI                         |                            |    48 |  6240 |    21  (15)| 00:00:01 |
|*  5 |    HASH JOIN SEMI                        |                            |    48 |  4992 |    13  (16)| 00:00:01 |
|   6 |     TABLE ACCESS BY INDEX ROWID BATCHED  | T1                         |    48 |  1200 |     4   (0)| 00:00:01 |
|*  7 |      INDEX RANGE SCAN                    | T1_I1                      |    48 |       |     2   (0)| 00:00:01 |
|*  8 |     VIEW                                 |                            | 10000 |   771K|     8  (13)| 00:00:01 |
|   9 |      TABLE ACCESS FULL                   | SYS_TEMP_0FD9D6612_F53A566 | 10000 |   322K|     8  (13)| 00:00:01 |
|* 10 |    VIEW                                  |                            | 10000 |   253K|     8  (13)| 00:00:01 |
|  11 |     TABLE ACCESS FULL                    | SYS_TEMP_0FD9D6612_F53A566 | 10000 |   322K|     8  (13)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$A317D234
   2 - SEL$1
   3 - SEL$1        / GTT1@SEL$1
   6 - SEL$A317D234 / T1@SEL$2
   7 - SEL$A317D234 / T1@SEL$2
   8 - SEL$D67CB2D2 / V1B@SEL$4
   9 - SEL$D67CB2D2 / T1@SEL$D67CB2D2
  10 - SEL$D67CB2D3 / V1A@SEL$3
  11 - SEL$D67CB2D3 / T1@SEL$D67CB2D3

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("V1A"."OBJECT_ID"="T1"."OBJECT_ID")
   5 - access("V1B"."OBJECT_ID"="T1"."OBJECT_ID")
   7 - access("T1"."OBJECT_ID">=100 AND "T1"."OBJECT_ID"<=200)
   8 - filter("V1B"."OBJECT_NAME" LIKE 'WRI%' AND "V1B"."OBJECT_ID">=100 AND "V1B"."OBJECT_ID"<=200)
  10 - filter("V1A"."OBJECT_TYPE"='TABLE' AND "V1A"."OBJECT_ID">=100 AND "V1A"."OBJECT_ID"<=200)

As you can see, the factored subquery (operations 2 and 3) has the query block name of sel$1 and the main query (operations 6 and 7, where the real t1 is used) has the query block name sel$2. So, without giving the subquery a name, I could have used the hint /*+ inline(@sel$1) */ in the main query block.

This takes us on to the second point that needs investigation. If you’ve looked at the example on the Oracle Developer Forum you will have seen that there’s an SQL statement that references a stored view and the factored subquery of interest is defined in the view. This means we might be able to edit the query that calls the view to include a hint referencing the query block inside the view – but then what do we do if we can’t edit the main query itself?

To be investigated (1) – would the inline() hint with nominated query block work if the factored subquery was inside a stored view that we were using in our query?

To be investigated (2) – if (1) works, could we achieve the same result by using an SQL Patch to attach the hint to the main query text without editing the main query?
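
For reference, here is a hedged sketch of what (2) might look like through the dbms_sqldiag interface that became public in 12.2; the sql_id below is a placeholder, not a real statement.

declare
        patch_name varchar2(64);
begin
        patch_name := dbms_sqldiag.create_sql_patch(
                sql_id    => 'dummy0sqlid00',       -- placeholder: sql_id of the main query
                hint_text => 'INLINE(@SEL$1)',
                name      => 'inline_cte_patch'
        );
end;
/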

Update (Oct 2020)

It turns out that I discovered this enhancement a few months ago while doing some experimentation with recursive subquery factoring.

Open Broadcaster Software (OBS) Studio : First Time Using Green Screen


I’m not sure if you know this, but a lot of people are using video conferencing, and not all video conference tools allow you to use virtual backgrounds.

The New Jetpack Search Add-On

As your website grows, it becomes challenging for visitors to find exactly what they need. Being prolific and publishing lots of posts and pages is a net positive for engaging and growing your audience, but it also means visitors may be more likely to find content that isn’t relevant to their interests.

This is where Jetpack Search comes into the picture: It turns your website into a place where visitors can search, refine their results, and find the specific content and products that interest them. 

Built by the same team that created WordPress.com, Jetpack Search is a powerful search solution backed by Elasticsearch (an open source search platform), Jetpack Stats, and our globally distributed data centers — the very same ones that keep your WordPress.com website up, running, and secure day in and day out. Jetpack Search uses multiple technologies to build a modern search experience that improves website engagement. You can now add Jetpack Search to your WordPress.com site with just a few clicks.

What makes Jetpack Search better than regular search?

A modern search experience combines great result rankings with a fast and intuitive user interface. Most WordPress themes can’t do this yet, because the default WordPress search function offers only basic options to show the most relevant content to viewers.

Jetpack Search combines modern ranking algorithms with your traffic stats so that the best results will float to the top and engage your visitors. Our search algorithms work well with numerous languages and provide custom handling for 29 of the most widely used languages in the world. A large percentage of searches contain typos or spelling errors, but Jetpack Search instantly corrects the search query when there are no matches — so if a potential customer is looking for, say, collectables, they will still see the most popular collectibles you offer on your online store. Any changes you make to your site will be reflected in your search results within a few minutes.

Great algorithms are not enough, though. Jetpack Search displays results instantly, as your readers type into the search box. It does its magic equally well on both mobile devices and desktop computers, and in any theme — even those without widget areas. As the user refines their search terms or filters the results, Jetpack Search continues to update the results instantly. The user’s search terms will be highlighted in the results, which they will be able to narrow down further with displayed filters. 

[Demo: instant search, spelling correction, and filtering on the TwentyTwenty theme]

All of these benefits come together to provide a compelling and beautiful user experience that will encourage people to stay on your site longer.

Get Started with Jetpack Search

We wanted to make Jetpack Search accessible for website owners of all stripes, from those who publish a personal blog to owners of thriving online stores. So we’re offering this new add-on at a price that takes into account how many searchable items — posts, pages, products, and custom post types — your site contains.


Record Limit        Price Per Month
100                 $5
1,000               $10
10,000              $25
100,000             $60
1,000,000           $200
Over one million    $200 per million

All purchases are fully refundable for 30 days. Even better, when you add Jetpack Search to your site with an annual subscription, you get two months free — so you only pay for 10 months every year. When your purchase renews after a year, the price will automatically adjust based on the number of items in your search index.

We think you’ll love Jetpack Search — but more importantly, your site visitors will love it too.

Follow the link below to purchase your Jetpack Search subscription. Once you buy it, Jetpack Search will automatically be enabled for your site. You can then customize the colors, filtering, and other details if you’d like — though the out-of-the-box settings work great, too.

Get Jetpack Search

Read about how we made Jetpack Search