Oakies Blog Aggregator

WhatsApp: A New, Convenient Way for Your Customers to Contact You

The world is mobile, and your visitors and customers expect to be able to easily contact you using their mobile device. With WordPress.com’s new WhatsApp button, you can provide a one-click, secure way for people to open WhatsApp, with your phone number and a message pre-filled.

Insert the WhatsApp button with your phone number and a custom message pre-filled.

Adding the button is easy. In the block editor, create a new block and search for WhatsApp:

The WhatsApp block in the block editor.
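Behind the scenes, click-to-chat buttons like this one typically rely on WhatsApp's documented wa.me link format: https://wa.me/<number>?text=<url-encoded message>. For example, a link such as https://wa.me/15551234567?text=Hi%20there (hypothetical number) opens a chat with that number and a pre-filled "Hi there".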

The WhatsApp button is available now to all WordPress.com sites on a Premium, Business, or eCommerce plan. You can upgrade your site to one of these plans, try it out for 30 days, and if you’re not satisfied with your upgrade we’ll grant you a full refund.

If you decide to cancel your paid plan after you’ve already accepted the free custom domain, the domain is yours to keep. We simply ask that you cover the costs for the domain registration.

We hope the WhatsApp button helps you connect with your customers and visitors in new ways. Give it a try today!

Oracle ADB: rename the service_name connect_data

By Franck Pachot

.
Since Aug. 4, 2020, we have the possibility to rename an Autonomous Database (ATP, ADW or AJD – the latest, JSON, database) on shared Exadata infrastructure (what was called ‘serverless’ last year, and which is a PDB in a public CDB). As the PDB name is internal, we reference the ADB by its database name, which is actually part of the service name.

I have an ATP database that I created in the Oracle Cloud Free Tier a few months ago.
I have downloaded the region and instance wallets to be used by client connections:


SQL> host grep _high /var/tmp/Wallet_DB202005052234_instance/tnsnames.ora

db202005052234_high = (description= (retry_count=20)(retry_delay=3)(address=(protocol=tcps)(port=1522)(host=adb.eu-frankfurt-1.oraclecloud.com))(connect_data=(service_name=jgp1nyc204pdpjc_db202005052234_high.atp.oraclecloud.com))(security=(ssl_server_cert_dn="CN=adwc.eucom-central-1.oraclecloud.com,OU=Oracle BMCS FRANKFURT,O=Oracle Corporation,L=Redwood City,ST=California,C=US")))

This is the instance wallet, which references only this database (db202005052234).


SQL> host grep _high /var/tmp/Wallet_DB202005052234_region/tnsnames.ora

db202005052234_high = (description= (retry_count=20)(retry_delay=3)(address=(protocol=tcps)(port=1522)(host=adb.eu-frankfurt-1.oraclecloud.com))(connect_data=(service_name=jgp1nyc204pdpjc_db202005052234_high.atp.oraclecloud.com))(security=(ssl_server_cert_dn="CN=adwc.eucom-central-1.oraclecloud.com,OU=Oracle BMCS FRANKFURT,O=Oracle Corporation,L=Redwood City,ST=California,C=US")))
db202003061855_high = (description= (retry_count=20)(retry_delay=3)(address=(protocol=tcps)(port=1522)(host=adb.eu-frankfurt-1.oraclecloud.com))(connect_data=(service_name=jgp1nyc204pdpjc_db202003061855_high.adwc.oraclecloud.com))(security=(ssl_server_cert_dn="CN=adwc.eucom-central-1.oraclecloud.com,OU=Oracle BMCS FRANKFURT,O=Oracle Corporation,L=Redwood City,ST=California,C=US")))

This one also contains the service for my other database in the same region.

I connect using this wallet:


SQL> connect admin/"TheAnswer:=42"@DB202005052234_tp?TNS_ADMIN=/var/tmp/Wallet_DB202005052234_instance
Connected.

SQL> select name,network_name,creation_date,pdb from v$services;

                                                          NAME                                                   NETWORK_NAME          CREATION_DATE                               PDB
______________________________________________________________ ______________________________________________________________ ______________________ _________________________________
JGP1NYC204PDPJC_DB202005052234_high.atp.oraclecloud.com        JGP1NYC204PDPJC_DB202005052234_high.atp.oraclecloud.com        2019-05-17 20:53:03    JGP1NYC204PDPJC_DB202005052234
JGP1NYC204PDPJC_DB202005052234_tpurgent.atp.oraclecloud.com    JGP1NYC204PDPJC_DB202005052234_tpurgent.atp.oraclecloud.com    2019-05-17 20:53:03    JGP1NYC204PDPJC_DB202005052234
JGP1NYC204PDPJC_DB202005052234_low.atp.oraclecloud.com         JGP1NYC204PDPJC_DB202005052234_low.atp.oraclecloud.com         2019-05-17 20:53:03    JGP1NYC204PDPJC_DB202005052234
JGP1NYC204PDPJC_DB202005052234_tp.atp.oraclecloud.com          JGP1NYC204PDPJC_DB202005052234_tp.atp.oraclecloud.com          2019-05-17 20:53:03    JGP1NYC204PDPJC_DB202005052234
jgp1nyc204pdpjc_db202005052234                                 jgp1nyc204pdpjc_db202005052234                                 2020-08-13 09:02:02    JGP1NYC204PDPJC_DB202005052234
JGP1NYC204PDPJC_DB202005052234_medium.atp.oraclecloud.com      JGP1NYC204PDPJC_DB202005052234_medium.atp.oraclecloud.com      2019-05-17 20:53:03    JGP1NYC204PDPJC_DB202005052234

Here are all the registered services: LOW/MEDIUM/HIGH/TP/TPURGENT for my connections, plus the one named after the PDB.

Now from the Cloud Console I rename the database:


You can see that the “display name” (DB 202008131439) didn’t change, but the “Database name” has been renamed (from “DB202008131439” to “FRANCK”).


SQL> select name,network_name,creation_date,pdb from v$services;

Error starting at line : 1 in command -
select name,network_name,creation_date,pdb from v$services
Error at Command Line : 1 Column : 1
Error report -
SQL Error: No more data to read from socket
SQL>

My connection has been terminated; I need to connect again.


SQL> connect admin/"TheAnswer:=42"@DB202005052234_tp?TNS_ADMIN=/var/tmp/Wallet_DB202005052234_instance
Aug 13, 2020 10:13:41 AM oracle.net.resolver.EZConnectResolver parseExtendedProperties
SEVERE: Extended settings parsing failed.
java.lang.RuntimeException: Unable to parse url "/var/tmp/Wallet_DB202005052234_instance:1521/DB202005052234_tp?TNS_ADMIN"
        at oracle.net.resolver.EZConnectResolver.parseExtendedProperties(EZConnectResolver.java:408)
        at oracle.net.resolver.EZConnectResolver.parseExtendedSettings(EZConnectResolver.java:366)
        at oracle.net.resolver.EZConnectResolver.parse(EZConnectResolver.java:171)
        at oracle.net.resolver.EZConnectResolver.<init>(EZConnectResolver.java:130)
        at oracle.net.resolver.EZConnectResolver.newInstance(EZConnectResolver.java:139)
        at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:669)
        at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:562)
        at java.sql.DriverManager.getConnection(DriverManager.java:664)
        at java.sql.DriverManager.getConnection(DriverManager.java:208)
        at oracle.dbtools.raptor.newscriptrunner.SQLPLUS.connect(SQLPLUS.java:5324)
        at oracle.dbtools.raptor.newscriptrunner.SQLPLUS.logConnectionURL(SQLPLUS.java:5418)
        at oracle.dbtools.raptor.newscriptrunner.SQLPLUS.logConnectionURL(SQLPLUS.java:5342)
        at oracle.dbtools.raptor.newscriptrunner.SQLPLUS.getConnection(SQLPLUS.java:5154)
        at oracle.dbtools.raptor.newscriptrunner.SQLPLUS.runConnect(SQLPLUS.java:2414)
        at oracle.dbtools.raptor.newscriptrunner.SQLPLUS.run(SQLPLUS.java:220)
        at oracle.dbtools.raptor.newscriptrunner.ScriptRunner.runSQLPLUS(ScriptRunner.java:425)
        at oracle.dbtools.raptor.newscriptrunner.ScriptRunner.run(ScriptRunner.java:262)
        at oracle.dbtools.raptor.newscriptrunner.ScriptExecutor.run(ScriptExecutor.java:344)
        at oracle.dbtools.raptor.newscriptrunner.ScriptExecutor.run(ScriptExecutor.java:227)
        at oracle.dbtools.raptor.scriptrunner.cmdline.SqlCli.process(SqlCli.java:410)
        at oracle.dbtools.raptor.scriptrunner.cmdline.SqlCli.processLine(SqlCli.java:421)
        at oracle.dbtools.raptor.scriptrunner.cmdline.SqlCli.startSQLPlus(SqlCli.java:1179)
        at oracle.dbtools.raptor.scriptrunner.cmdline.SqlCli.main(SqlCli.java:502)

  USER          = admin
  URL           = jdbc:oracle:thin:@DB202005052234_tp?TNS_ADMIN=/var/tmp/Wallet_DB202005052234_instance
  Error Message = Listener refused the connection with the following error:
ORA-12514, TNS:listener does not currently know of service requested in connect descriptor
  USER          = admin
  URL           = jdbc:oracle:thin:@DB202005052234_tp?TNS_ADMIN=/var/tmp/Wallet_DB202005052234_instance:1521/DB202005052234_tp?TNS_ADMIN=/var/tmp/Wallet_DB202005052234_instance
  Error Message = IO Error: Invalid connection string format, a valid format is: "host:port:sid"

Warning: You are no longer connected to ORACLE.
SQL>

The service is not known, which makes sense because the rename of the database is actually a rename of the services.

Oracle documentation says that we have to download the wallet again after a database rename. But that’s not very agile. Let’s rename the service in the tnsnames.ora instead:


SQL> host sed -ie s/_db202005052234/_franck/g /var/tmp/Wallet_DB202005052234_instance/tnsnames.ora

This changes only the SERVICE_NAME in the CONNECT_DATA, and not the tnsnames.ora entry name (thanks to the leading underscore in the pattern), so I can keep using the same connection string.
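To double-check, a quick grep sketch (output abbreviated here) should now show the renamed service in the connect data, while the entry name is untouched:

SQL> host grep _high /var/tmp/Wallet_DB202005052234_instance/tnsnames.ora

db202005052234_high = (description= ... (connect_data=(service_name=jgp1nyc204pdpjc_franck_high.atp.oraclecloud.com)) ... )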


SQL> connect admin/"TheAnswer:=42"@DB202005052234_tp?TNS_ADMIN=/var/tmp/Wallet_DB202005052234_instance

SQL> select name,network_name,creation_date,pdb from v$services;

                                                  NAME                                           NETWORK_NAME          CREATION_DATE                       PDB
______________________________________________________ ______________________________________________________ ______________________ _________________________
JGP1NYC204PDPJC_FRANCK_high.atp.oraclecloud.com        JGP1NYC204PDPJC_FRANCK_high.atp.oraclecloud.com        2019-05-17 20:53:03    JGP1NYC204PDPJC_FRANCK
JGP1NYC204PDPJC_FRANCK_tp.atp.oraclecloud.com          JGP1NYC204PDPJC_FRANCK_tp.atp.oraclecloud.com          2019-05-17 20:53:03    JGP1NYC204PDPJC_FRANCK
JGP1NYC204PDPJC_FRANCK_medium.atp.oraclecloud.com      JGP1NYC204PDPJC_FRANCK_medium.atp.oraclecloud.com      2019-05-17 20:53:03    JGP1NYC204PDPJC_FRANCK
jgp1nyc204pdpjc_franck                                 jgp1nyc204pdpjc_franck                                 2020-08-13 10:05:58    JGP1NYC204PDPJC_FRANCK
JGP1NYC204PDPJC_FRANCK_low.atp.oraclecloud.com         JGP1NYC204PDPJC_FRANCK_low.atp.oraclecloud.com         2019-05-17 20:53:03    JGP1NYC204PDPJC_FRANCK
JGP1NYC204PDPJC_FRANCK_tpurgent.atp.oraclecloud.com    JGP1NYC204PDPJC_FRANCK_tpurgent.atp.oraclecloud.com    2019-05-17 20:53:03    JGP1NYC204PDPJC_FRANCK

Using the new SERVICE_NAME is sufficient. And as you can see above, some autonomous magic remains: the new services still show the old creation date.

Note that you should follow the documentation: download the wallet again and change your connection string. There is probably a reason behind this. But, autonomous or not, I like to understand what I do, and I don’t see any reason to change everything when renaming a service.

The article Oracle ADB: rename the service_name connect_data first appeared on Blog dbi services.

Amazon DynamoDB: the cost of indexes

By Franck Pachot

.
This is common to any data structure, in an RDBMS as well as in NoSQL: indexes accelerate reads but slow down writes. This post explains the consequences of adding indexes in DynamoDB.

Secondary Indexes

What we call an index in DynamoDB is different from an index in an RDBMS, but they have the same goal: store your data with some redundancy so that it is physically partitioned, sorted, and clustered differently from the table, in order to optimize the performance of specific access patterns. The redundancy can be full (covering indexes), so that there is no need to look at the table at all, or partial (only the key values, enough to access the table efficiently). The indexes are maintained automatically: when the table is updated, the index entries are maintained by the database engine. This can be synchronous, or asynchronous if eventual consistency is accepted. The major difference is that a relational database separates the logical and physical implementation (Codd Rule 8: Physical Data Independence) for better agility: there is no change to make in the application code to access the data through one index or another. An RDBMS has an optimizer (query planner) that selects the best access path for the query predicates. That was the topic of the previous post. But following the NoSQL spirit, AWS DynamoDB delegates this responsibility to the application code: an index is used only when you explicitly query it.

Because DynamoDB tables are physically organized by the primary key (hash partitioning, with a local index when a sort key is defined), this KeySchema can be considered the primary index. Any additional index is then a secondary index: local, prefixed by the same hash key as the table, or global, prefixed by a different hash key.

Table with no indexes


aws dynamodb create-table --table-name Demo \
 --billing-mode PROVISIONED --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \
 --attribute-definitions AttributeName=P,AttributeType=S AttributeName=S,AttributeType=S \
 --key-schema AttributeName=P,KeyType=HASH AttributeName=S,KeyType=RANGE

I have created a HASH/RANGE table, perfect for access with a single value for the attribute P, the partition key, and a single value or a range for S, the sort key.
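As an aside, here is a sketch of a query for this access pattern (the values are made up, and I alias the attribute names in expression-attribute-names in case they clash with DynamoDB reserved words). Note that with string attributes the BETWEEN comparison is lexicographic:

aws dynamodb query --table-name Demo \
 --key-condition-expression '#p = :p AND #s BETWEEN :lo AND :hi' \
 --expression-attribute-names '{"#p":"P","#s":"S"}' \
 --expression-attribute-values '{":p":{"S":"4707"},":lo":{"S":"20000"},":hi":{"S":"29999"}}'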


{
    "TableDescription": {
        "TableArn": "arn:aws:dynamodb:eu-central-1:802756008554:table/Demo",
        "AttributeDefinitions": [
            {
                "AttributeName": "P",
                "AttributeType": "S"
            },
            {
                "AttributeName": "S",
                "AttributeType": "S"
            }
        ],
        "ProvisionedThroughput": {
            "NumberOfDecreasesToday": 0,
            "WriteCapacityUnits": 5,
            "ReadCapacityUnits": 5
        },
        "TableSizeBytes": 0,
        "TableName": "Demo",
        "TableStatus": "CREATING",
        "TableId": "b2a97f98-611d-451d-99ee-c3aab1129b30",
        "KeySchema": [
            {
                "KeyType": "HASH",
                "AttributeName": "P"
            },
            {
                "KeyType": "RANGE",
                "AttributeName": "S"
            }
        ],
        "ItemCount": 0,
        "CreationDateTime": 1597052214.276
    }
}

This is the output of the create-table command. I use a small provisioned capacity in my blog posts so that you can run the tests on the AWS Free Tier without risk. I look at the metrics for better understanding, rather than at the response time, which depends on many other factors (network latency, bursting, throttling,…). But of course, you will see the same behaviour on larger data sets.

In this table, I’ll insert items by batch from the following JSON (values are randomly generated for each call):


{
 "Demo": [
  {"PutRequest":{"Item":{"P":{"S":"4707"},"S":{"S":"23535"},"A0":{"S":"18781"},"A01":{"S":"10065"},"A02":{"S":"2614"},"A03":{"S":"7777"},"A04":{"S":"19950"},"A05":{"S":"30864"},"A06":{"S":"24176"},"A07":{"S":"22257"},"A08":{"S":"11549"},"A09":{"S":"28368"},"A10":{"S":"29095"},"A11":{"S":"23060"},"A12":{"S":"3321"},"A13":{"S":"30588"},"A14":{"S":"16039"},"A15":{"S":"31388"},"A16":{"S":"21811"},"A17":{"S":"10593"},"A18":{"S":"18914"},"A19":{"S":"23120"},"A20":{"S":"25238"}}}},
  {"PutRequest":{"Item":{"P":{"S":"4106"},"S":{"S":"15829"},"A0":{"S":"28144"},"A01":{"S":"9051"},"A02":{"S":"26834"},"A03":{"S":"1614"},"A04":{"S":"6458"},"A05":{"S":"1721"},"A06":{"S":"8022"},"A07":{"S":"49"},"A08":{"S":"23158"},"A09":{"S":"6588"},"A10":{"S":"17560"},"A11":{"S":"4330"},"A12":{"S":"17578"},"A13":{"S":"8548"},"A14":{"S":"57"},"A15":{"S":"27601"},"A16":{"S":"8766"},"A17":{"S":"24400"},"A18":{"S":"18881"},"A19":{"S":"28418"},"A20":{"S":"14915"}}}},
  ...
  {"PutRequest":{"Item":{"P":{"S":"27274"},"S":{"S":"8548"},"A0":{"S":"11557"},"A01":{"S":"28758"},"A02":{"S":"17212"},"A03":{"S":"17658"},"A04":{"S":"10456"},"A05":{"S":"8488"},"A06":{"S":"28852"},"A07":{"S":"22763"},"A08":{"S":"21667"},"A09":{"S":"15240"},"A10":{"S":"12092"},"A11":{"S":"25045"},"A12":{"S":"9156"},"A13":{"S":"27596"},"A14":{"S":"27305"},"A15":{"S":"22214"},"A16":{"S":"13384"},"A17":{"S":"12300"},"A18":{"S":"12913"},"A19":{"S":"20121"},"A20":{"S":"20224"}}}}
 ]
}

In addition to the primary key, each item has attributes A0 to A20. I put 25 items per call (that’s the maximum for a DynamoDB batch write), and my goal is to have many attributes that I can index later. A sketch of how such a file could be generated follows.
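The values were generated randomly for each call; here is a minimal bash sketch of how such a file could be produced (my own hypothetical generator, not necessarily the one used for the post):

# write 25 PutRequest items with random numeric strings to batch-write.json
{
echo '{"Demo":['
for i in $(seq 1 25); do
  # comma between items, none before the first
  [ $i -gt 1 ] && echo ','
  printf '{"PutRequest":{"Item":{"P":{"S":"%s"},"S":{"S":"%s"},"A0":{"S":"%s"}' $RANDOM $RANDOM $RANDOM
  # attributes A01..A20
  for a in $(seq -w 1 20); do printf ',"A%s":{"S":"%s"}' $a $RANDOM; done
  printf '}}}'
done
echo ']}'
} > batch-write.json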


aws dynamodb batch-write-item --request-items file://batch-write.json \
 --return-consumed-capacity INDEXES --return-item-collection-metrics SIZE

This is the simple call for this batch insert, returning the consumed capacity on table and indexes:


aws dynamodb batch-write-item --request-items file://batch-write139.json --return-consumed-capacity INDEXES --return-item-collection-metrics SIZE
...
{
    "UnprocessedItems": {},
    "ItemCollectionMetrics": {},
    "ConsumedCapacity": [
        {
            "CapacityUnits": 25.0,
            "TableName": "Demo",
            "Table": {
                "CapacityUnits": 25.0
            }
        }
    ]
}

25 Write Capacity Units for 25 items: each item smaller than 1KB consumes 1 WCU. My items are on average 170 bytes here, so each fits in 1 WCU. And batching doesn’t help there: the calls are batched on the network only, but the items all go to different places, and each requires its own WCU. There is nothing like preparing full blocks with many items (like RDBMS direct-path inserts, fast load, or insert append…). DynamoDB is there to scale small transactions by scattering data through multiple partitions.
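To put numbers on this, the documented rule for standard writes is WCU = ceil(item size / 1KB). So a hypothetical 2.5KB item would consume ceil(2.5) = 3 WCU per put, and with 5 local indexes projecting ALL (as in the next section) roughly 3 more WCU per index entry, i.e. around 18 WCU for a single item.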

Table with 5 local indexes

Here is the same create statement but with 5 Local Secondary Indexes declared:


aws dynamodb create-table --table-name Demo --billing-mode PROVISIONED --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \
 --attribute-definitions AttributeName=P,AttributeType=S AttributeName=S,AttributeType=S \
  AttributeName=A01,AttributeType=S AttributeName=A02,AttributeType=S AttributeName=A03,AttributeType=S AttributeName=A04,AttributeType=S AttributeName=A05,AttributeType=S \
 --key-schema AttributeName=P,KeyType=HASH AttributeName=S,KeyType=RANGE \
 --local-secondary-indexes \
  'IndexName=LSI01,KeySchema=[{AttributeName=P,KeyType=HASH},{AttributeName=A01,KeyType=RANGE}],Projection={ProjectionType=ALL}' \
  'IndexName=LSI02,KeySchema=[{AttributeName=P,KeyType=HASH},{AttributeName=A02,KeyType=RANGE}],Projection={ProjectionType=ALL}' \
  'IndexName=LSI03,KeySchema=[{AttributeName=P,KeyType=HASH},{AttributeName=A03,KeyType=RANGE}],Projection={ProjectionType=ALL}' \
  'IndexName=LSI04,KeySchema=[{AttributeName=P,KeyType=HASH},{AttributeName=A04,KeyType=RANGE}],Projection={ProjectionType=ALL}' \
  'IndexName=LSI05,KeySchema=[{AttributeName=P,KeyType=HASH},{AttributeName=A05,KeyType=RANGE}],Projection={ProjectionType=ALL}'

This recreates the table, adding the definitions for 5 local indexes on the same partition key but with different sort keys. I had to add attribute definitions for the new key attributes, as I reference them in the index definitions.


...
        },
        "TableSizeBytes": 0,
        "TableName": "Demo",
        "TableStatus": "CREATING",
        "TableId": "84fc745b-66c5-4c75-bcf4-7686b2daeacb",
        "KeySchema": [
            {
                "KeyType": "HASH",
                "AttributeName": "P"
            },
            {
                "KeyType": "RANGE",
                "AttributeName": "S"
            }
        ],
        "ItemCount": 0,
        "CreationDateTime": 1597054018.546
    }
}

The hash partition size in DynamoDB is fixed at 10GB, and because local indexes are stored within each partition, the total size of an item plus all its index entries cannot exceed this limit. Here, I’m far from the limit, which will often be the case: if your key-value store is a document store, you will not project the whole document into all local indexes. Then use the KEYS_ONLY projection type rather than the ALL one I used here, as in the sketch below. And anyway, 5 local indexes is the maximum you can create in DynamoDB.
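For instance, a KEYS_ONLY variant of one of the index definitions would look like this (same syntax as above, only the projection changes):

  'IndexName=LSI01,KeySchema=[{AttributeName=P,KeyType=HASH},{AttributeName=A01,KeyType=RANGE}],Projection={ProjectionType=KEYS_ONLY}'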


aws dynamodb batch-write-item --request-items file://batch-write139.json --return-consumed-capacity INDEXES --return-item-collection-metrics SIZE
...
   "ConsumedCapacity": [
        {
            "CapacityUnits": 150.0,
            "TableName": "Demo",
            "LocalSecondaryIndexes": {
                "LSI01": {
                    "CapacityUnits": 25.0
                },
                "LSI03": {
                    "CapacityUnits": 25.0
                },
                "LSI02": {
                    "CapacityUnits": 25.0
                },
                "LSI05": {
                    "CapacityUnits": 25.0
                },
                "LSI04": {
                    "CapacityUnits": 25.0
                }
            },
            "Table": {
                "CapacityUnits": 25.0
            }
        }
    ]
}

Here we are: 150 WCU in total. The same 25 WCU as before for the 25 items put into the table, and each local index accounts for an additional 25 WCU. Note that in a few tests I’ve seen 26 WCU instead of 25 for some indexes, and I don’t really know why.

Table with 20 global indexes

Now, the same table without any local indexes, but with the maximum number of global indexes we can have: 20 Global Secondary Indexes (GSI).


aws dynamodb create-table --table-name Demo --billing-mode PROVISIONED --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \
 --attribute-definitions AttributeName=P,AttributeType=S AttributeName=S,AttributeType=S \
AttributeName=A01,AttributeType=S AttributeName=A02,AttributeType=S AttributeName=A03,AttributeType=S AttributeName=A04,AttributeType=S AttributeName=A05,AttributeType=S AttributeName=A06,AttributeType=S AttributeName=A07,AttributeType=S AttributeName=A08,AttributeType=S AttributeName=A09,AttributeType=S AttributeName=A10,AttributeType=S AttributeName=A11,AttributeType=S AttributeName=A12,AttributeType=S AttributeName=A13,AttributeType=S AttributeName=A14,AttributeType=S AttributeName=A15,AttributeType=S AttributeName=A16,AttributeType=S AttributeName=A17,AttributeType=S AttributeName=A18,AttributeType=S AttributeName=A19,AttributeType=S AttributeName=A20,AttributeType=S \
 --key-schema AttributeName=P,KeyType=HASH AttributeName=S,KeyType=RANGE \
 --global-secondary-indexes \
  'IndexName=GSI01,KeySchema=[{AttributeName=A01,KeyType=HASH},{AttributeName=S,KeyType=RANGE}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}' \
  'IndexName=GSI02,KeySchema=[{AttributeName=A02,KeyType=HASH},{AttributeName=S,KeyType=RANGE}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}' \
  'IndexName=GSI03,KeySchema=[{AttributeName=A03,KeyType=HASH},{AttributeName=S,KeyType=RANGE}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}' \
  'IndexName=GSI04,KeySchema=[{AttributeName=A04,KeyType=HASH},{AttributeName=S,KeyType=RANGE}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}' \
  'IndexName=GSI05,KeySchema=[{AttributeName=A05,KeyType=HASH},{AttributeName=S,KeyType=RANGE}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}' \
  'IndexName=GSI06,KeySchema=[{AttributeName=A06,KeyType=HASH},{AttributeName=S,KeyType=RANGE}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}' \
  'IndexName=GSI07,KeySchema=[{AttributeName=A07,KeyType=HASH},{AttributeName=S,KeyType=RANGE}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}' \
  'IndexName=GSI08,KeySchema=[{AttributeName=A08,KeyType=HASH},{AttributeName=S,KeyType=RANGE}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}' \
  'IndexName=GSI09,KeySchema=[{AttributeName=A09,KeyType=HASH},{AttributeName=S,KeyType=RANGE}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}' \
  'IndexName=GSI10,KeySchema=[{AttributeName=A10,KeyType=HASH},{AttributeName=S,KeyType=RANGE}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}' \
  'IndexName=GSI11,KeySchema=[{AttributeName=A11,KeyType=HASH},{AttributeName=S,KeyType=RANGE}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}' \
  'IndexName=GSI12,KeySchema=[{AttributeName=A12,KeyType=HASH},{AttributeName=S,KeyType=RANGE}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}' \
  'IndexName=GSI13,KeySchema=[{AttributeName=A13,KeyType=HASH},{AttributeName=S,KeyType=RANGE}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}' \
  'IndexName=GSI14,KeySchema=[{AttributeName=A14,KeyType=HASH},{AttributeName=S,KeyType=RANGE}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}' \
  'IndexName=GSI15,KeySchema=[{AttributeName=A15,KeyType=HASH},{AttributeName=S,KeyType=RANGE}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}' \
  'IndexName=GSI16,KeySchema=[{AttributeName=A16,KeyType=HASH},{AttributeName=S,KeyType=RANGE}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}' \
  'IndexName=GSI17,KeySchema=[{AttributeName=A17,KeyType=HASH},{AttributeName=S,KeyType=RANGE}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}' \
  'IndexName=GSI18,KeySchema=[{AttributeName=A18,KeyType=HASH},{AttributeName=S,KeyType=RANGE}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}' \
  'IndexName=GSI19,KeySchema=[{AttributeName=A19,KeyType=HASH},{AttributeName=S,KeyType=RANGE}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}' \
  'IndexName=GSI20,KeySchema=[{AttributeName=A20,KeyType=HASH},{AttributeName=S,KeyType=RANGE}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}'

This takes much longer to create because global indexes are actually like separate tables, maintained asynchronously (reads from them are only eventually consistent).


aws dynamodb batch-write-item --request-items file://batch-write139.json --return-consumed-capacity INDEXES --return-item-collection-metrics SIZE
...
{
    "UnprocessedItems": {},
    "ItemCollectionMetrics": {},
    "ConsumedCapacity": [
        {
            "CapacityUnits": 525.0,
            "GlobalSecondaryIndexes": {
                "GSI06": {
                    "CapacityUnits": 25.0
                },
                "GSI07": {
                    "CapacityUnits": 25.0
                },
                "GSI05": {
                    "CapacityUnits": 25.0
                },
 ...
                "GSI08": {
                    "CapacityUnits": 25.0
                },
                "GSI03": {
                    "CapacityUnits": 25.0
                }
            },
            "TableName": "Demo",
            "Table": {
                "CapacityUnits": 25.0
            }
        }
    ]
}

The cost is the same as with local indexes: one capacity unit per item per index in addition to the table.
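To read through one of these indexes, you name it explicitly. A hypothetical sketch (remember that a GSI cannot serve strongly consistent reads, so there is no --consistent-read here):

aws dynamodb query --table-name Demo --index-name GSI01 \
 --key-condition-expression '#a = :v' \
 --expression-attribute-names '{"#a":"A01"}' \
 --expression-attribute-values '{":v":{"S":"9051"}}' \
 --return-consumed-capacity INDEXES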

So this post is simply there to draw your attention to the fact that adding indexes slows down writes, in NoSQL as in any database. In DynamoDB this is measured in Write Capacity Units, and you can get the whole detail (how many WCU for the table, for the LSIs, and for the GSIs) with “ReturnConsumedCapacity”. What is important is that this capacity can scale: you will probably not see a difference in response time, except of course if you go beyond the provisioned capacity. And then you can increase it (at a cost, of course), as sketched below. How does it scale? Because DynamoDB only allows us to do things that scale. Maintaining global indexes requires cross-node synchronization in a distributed database, and that cannot scale, so DynamoDB does it asynchronously (reads on a GSI are eventually consistent), and the number of GSIs is limited to 20. Maintaining local indexes does not involve cross-partition latency, so they are maintained synchronously; but to limit the overhead, you can create at most 5 LSIs. Within those limits, local and global indexes are useful to keep item access fast (see the previous posts on covering GSIs and LSIs).
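Increasing the provisioned capacity is an online operation; a sketch with arbitrary values:

aws dynamodb update-table --table-name Demo \
 --provisioned-throughput ReadCapacityUnits=10,WriteCapacityUnits=10

and, for a global index, which has its own throughput:

aws dynamodb update-table --table-name Demo \
 --global-secondary-index-updates 'Update={IndexName=GSI01,ProvisionedThroughput={ReadCapacityUnits=2,WriteCapacityUnits=2}}'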

The article Amazon DynamoDB: the cost of indexes first appeared on Blog dbi services.

Manage Posts and Pages with the Block Editor Sidebar

We wanted to make it easier to manage posts and pages when you’re working within the Block Editor. With the Block Editor Sidebar you can view, create, edit, and manage posts and pages right from the Block Editor!  

Wait, another sidebar?

Have you ever finished editing something and then needed to make a quick edit to an existing post, or start a new one? Previously this meant navigating to your dashboard and to the posts list.

The Block Editor Sidebar allows you to view and access your site’s posts and pages — whether they’re drafts, published, or scheduled — and create new ones right from the editor.

Using the sidebar

The sidebar is available in the Block Editor when editing both posts and pages. For now, when editing a post you’ll only be able to see recent posts, and create new posts. Similarly, if you’re editing a page you’ll see a list of your recent pages and a link to create a new page.

The following steps refer to posts, but you can use the same steps to edit pages.

Open an existing post or create a new one to open the editor.

Select a post from the posts list

To view the sidebar, click on the WordPress.com logo at the top left of the editor.

Click on the WordPress.com logo

To return to your posts list, select the View Posts link.

Block Editor Sidebar - view all posts

To load an existing post, select the post title in the list of recent posts. Unless it’s already been published, the post’s status will appear next to the title, for example “Draft” or “Scheduled.”

Block Editor Sidebar - select a post to edit

To create a new post, click on the Add new post link.

Block Editor Sidebar - add a new post

To dismiss the sidebar, click anywhere outside the sidebar or press the “Escape” key.

What’s next?

Now that we have the Block Editor Sidebar in place, we’ll extend its capabilities with more tools and shortcuts to help you manage your site’s content.

Do you have ideas for site-level features you’d like to see in the Block Editor Sidebar? Or suggestions about how we can improve this feature? Please let us know in the comments!

Starting Your Own Podcast on WordPress.com

I am very inspired by creative podcasts. Entertaining guests and the unscripted nature of the conversations make me feel like I am right there, chatting about world affairs or the deep societal transformations of the connected age. I decided to start my own show with a group of friends  — loosely about blogging, but delving into all sorts of things.

It took three hours, with a total cost of $5/month, including our own domain name. In this post, I’ll share my journey of launching a brand-new podcast on WordPress.com, and later promoting it on iTunes and Spotify.

Why start a podcast?

Podcasting has been hailed as “the new blogging.” According to Edison Research, 51% of Americans have listened to a podcast, and hosts have created over one million podcasts on iTunes Podcasts.

Starting a podcast comes with a lot of perks:

  • You can connect with your audience on a much deeper level, thanks to your voice and the unscripted nature of the conversation.
  • It’s a fantastic excuse to reach out and meet interesting people.
  • Podcasting pairs really well with other projects — blogging, writing, or even a corporate ”day job.”

A group of Polish bloggers and I had been dreaming of a foray into podcasting for a while. We created the Placebo podcast in hopes of meeting interesting people and, frankly, having fun.

The tagline in Polish reads: “Your dose of absolutely nothing. Confirmed clinical efficacy.”

How does it all work?

You may have listened to a podcast on iTunes, Spotify, or another app — but did you know the content you’re enjoying doesn’t originate there? The beating heart of every podcast is its RSS feed: a particular content format that services like iTunes or Spotify can process and display in their apps, on Alexa devices, and elsewhere.

When you upload an audio file to your blog (for example, an .mp3) and hit publish, the new post (including the file) shows up in your RSS feed. From there, iTunes, Spotify, Pocket Casts, or another podcast app downloads the file to play it to your listeners.

WordPress.com also has built-in podcasting tools to make that job easier. To upload audio files, you’ll need a paid plan — any plan, including the Personal plan at $5/month, will do.

What do you need to start?

The name

The name of your podcast helps listeners find you in their favorite podcasting app. With Placebo, our main concern was making it memorable; our secondary goal, humor. (Which is related — funny is memorable!) We came up with “Placebo — a podcast with a confirmed clinical efficacy.”

Other memorable name examples include therapist Esther Perel’s podcast, Where Should We Begin?, Matt Mullenweg’s podcast about remote work, Distributed, and Joe Rogan’s The Joe Rogan Experience. If you are still stuck, try our recently launched business name generator to help you brainstorm.

Recording

We didn’t want to get expensive gear; it’s better to start simple with the equipment you already have. The Placebo podcast is recorded over Zoom, with video-call headsets. I recommend recording at least three episodes before you publish, so your listeners get a better sense of your style.

The Distributed podcast has a great walkthrough of the various recording options you can try.

Cover art

In addition to your name, there are other ways to stand out from all the other podcasts and voices: strong cover art and a succinct, solid summary. (You can add all three on WordPress.com in My Sites → Settings → Writing → Podcasting.) 

The cover art should be easy to recognize. Since our podcast is named Placebo, we decided on a satirical medical vibe. We found a nice graphic, and after a few tweaks, tada!

Placebo podcast cover image

Pro tip: Cover art should be 1400×1400 px, so be sure to use an image that’s big enough.

The summary

When writing your summary on iTunes, you must distill the description of your podcast content to 250 characters or less. For Placebo, we wanted to give listeners a taste and encourage them to give us a try:

Placebo is a podcast with confirmed clinical efficacy. Artur, Agnes, and Michael chat about building your online home, technology, culture, business ideas, and being the best citizen of the Internet you can be. More on placebo.pro

We also included the link to our site, where listeners can learn more.

Here is where you upload your cover art and summary on WordPress.com.

Light audio editing

Some podcasts (like Distributed) are edited to equalize audio levels or condense the episode. We opted to publish the raw audio; I only attach the pre-recorded intro music in GarageBand. I delve into the details of audio editing on my blog, Deliberate Internet.

Before I upload a new episode to iTunes Podcasts, I add the relevant ID3 tags in the iTunes player (where you edit, rather than publish). ID3 tags are used by the iPhone Podcasts app to display your cover image. Import each of your recorded files into the iTunes player and select “Information.” There, you can tweak the image, category, and title of the audio file.


After you edit the information in the iTunes player (or wherever you edit your podcast), upload your cover art and click “OK”; your episode will appear in the iTunes Podcasts library on your computer. You can find the file in your Home Directory → Music → iTunes → Podcasts.

Your audio files are now ready for distribution! That’s where WordPress.com comes in.

Uploading to WordPress.com

Create a separate WordPress.com post for each episode of your podcast. This gives you a unique link to share with your audience, a way for them to listen to your episode without the app, and a place to add extra notes and links.

To publish a podcast post:

  1. Create a new post. Add any extra notes or info you want to share with the podcast.
  2. Add the Audio block and upload your podcast file.
  3. Give the post the Podcast category.
  4. Publish the post.

This guide on the WordPress.com Support Site has more details.

Time to go live

You have your domain and a site for your podcast. You’ve recorded and uploaded a few episodes. Now it’s time to share your podcast with the world!

Your podcast is available on your WordPress.com site, but the majority of people consume podcasts through an app — iTunes podcasts, Spotify, Google Play, etc. Fortunately, they all work by checking your RSS feed. Once you submit your RSS feed to those services, new episodes will appear there automatically.
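Before submitting, it’s worth checking that the feed is reachable and serves XML. A quick sketch from the command line (the URL here is hypothetical; copy your real RSS URL from the podcasting settings page):

curl -s https://example.wordpress.com/category/podcast/feed/ | head -20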

Spotify 

  1. Copy the RSS URL from the WordPress.com podcasting settings page
  2. Go to Spotify for Podcasters
  3. Submit your RSS feed
  4. Wait two to three days for your podcast to appear

iTunes

  1. Copy the RSS URL from the WordPress.com podcasting settings page
  2. Go to Podcasts Connect
  3. Submit your RSS feed
  4. Wait about a week for them to review and approve your podcast

When the podcast is reviewed, approved, and appears in the Spotify and iTunes libraries, add those links to your site. That way, you can promote your nifty WordPress.com custom domain, and visitors can choose their preferred podcast consuming technology. We put these buttons right up at the top:

You can download these buttons here:

The lucky number seven

Recording new podcast episodes is plenty of fun, but it can sometimes feel like work. Podcasts with seven or more episodes are likely to continue growing. The Placebo podcast crew has pledged to record at least eight.

And don’t forget to enjoy it! The conversations with interesting guests are why we do it.

I’ve published the full version of “How to Start a Podcast” on my personal website. There, I’ve expanded on my process to record episodes over Zoom, and how I later edited them in GarageBand. Dive deeper if you’re interested!


Cover Photo by The Teens Network Daytime Show Studios on Pexels.com

Clickjacked!

In the language of the millennials I should start this post with “TIL”, but for the rest of us out there that are “not so down with the cool kids”, the expression is “Today I learned”.

Anyway… today I learned about a security risk called clickjacking. You can read the full history and details of it here, but the TL;DR version is:

Someone can mess with CSS and the like to make it look like your website is being displayed, but in reality, a click on it will invoke an action on a different site.

For example, you might want to buy a new phone on your favourite online auction website, and when you press the “Buy” button, unbeknown to you, someone has overlaid an invisible version of their own website on top of the auction page, and hitting “Buy” is in reality the “Send My Credit Card Details To A Nasty Person” button on the invisible website.  One click and your details have been hijacked, hence the expression.

Because most clickjacking attacks involve rendering, in part or in full, the true version of an innocent website before augmenting it with something malicious, HTML IFRAMEs are a common mechanism. Hence, allowing your site to be referenced from within an IFRAME creates the vulnerability for it to be used to potentially clickjack someone. If you want to see whether your site can be referenced from within an IFRAME, an easy way is to craft some HTML that does precisely that. (The snippet below is a reconstruction of the original test page, with a placeholder URL for the site you want to check.)




<html>
  <head>
    <title>Am I vulnerable?</title>
  </head>
  <body>
    <!-- replace the src with the site you want to test -->
    <iframe src="https://your-site.example.com" width="1000" height="800"></iframe>
    <p>Do you see the site? Hopefully not</p>
  </body>
</html>
A kind AskTOM reader pointed out to us that in one of our recent AskTOM deployments we had introduced a regression: we had reset our browser header settings, and thus allowed AskTOM to be used within an IFRAME. We’d hate anyone to be using AskTOM and get hacked as a result, so we’ve quickly fixed it. If you’re using APEX, you can check your browser header security settings via Shared Components, as per the GIF below; but of course, this applies to any website you look after.

Checking the browser security headers via APEX Shared Components
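Outside of APEX, you can also inspect the response headers directly from the command line; a quick sketch (hypothetical URL, and the exact headers depend on your configuration):

curl -sI https://your-site.example.com | grep -i -E 'x-frame-options|content-security-policy'

You’d hope to see something like X-Frame-Options: SAMEORIGIN, or a Content-Security-Policy with a frame-ancestors directive.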

Stay secure!

This is just

This is just a quick blog entry to direct readers to an Amazon Web Services blog post regarding Oracle Licensing options when deploying Oracle Database in AWS.

Licensing Options for Oracle Database Deployments in Amazon Web Services

 

Oracle 19c Automatic Indexing: Poor Data Clustering With Autonomous Databases Part III (Star)

In Part I we looked at a scenario where an index was deemed to be too inefficient for Automatic Indexing to create a VALID index, because of the poor clustering of data within the table. In Part II we improved the data clustering but the previous SQLs could still not generate a new Automatic Index […]

Oracle 19c Automatic Indexing: Poor Data Clustering With Autonomous Databases Part II (Wild Is The Wind)

  In my previous post, I discussed a scenario in which Oracle Automatic Indexing refused to create a VALID index, because the resultant index was too inefficient to access the necessary rows due to the poor clustering of data within the table. If the performance of such an SQL were critical for business requirements, there […]

Video : Online Statistics Gathering for Bulk Loads

Today’s video gives a demonstration of Online Statistics Gathering for Bulk Loads.

This is based on this article.

The star of today’s video is my sister-in-law Maria Colgan, otherwise known as SQLMaria. You may know her by her former names of “the in-memory lady” and “the optimizer lady”. Maria sent me three different clips, with one looking like she’s running for president. I’m sure these will appear in future videos…