Oakies Blog Aggregator

12cR2 needs to connect with password for Cross-PDB DML

In a previous post, I explained that Cross-PDB DML, executing an update/delete/insert with the CONTAINERS() clause, seems to be implemented with implicit database links. Connecting through a database link requires a password and this blog post is about an error you may encounter: ORA-01017: invalid username/password; logon denied

This blog post also explains a consequence of this implementation: the big inconsistency of the CONTAINERS() function. Because the implementation is completely different for queries (select) and for insert/delete/update, you may end up writing to and reading from different schemas.

We do not need an Application Container for Cross-PDB DML, and we don't even need metadata-linked tables – just tables with the same columns. Here I have a DEMO table, which is just a copy of DUAL, created in CDB$ROOT and in PDB1 (CON_ID=3), owned by SYS.
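A minimal sketch of that setup follows (the exact DDL is my assumption and is not taken from the post – any two tables with the same columns would do):

-- in CDB$ROOT, as SYS
create table DEMO as select * from dual;

-- in PDB1 (CON_ID=3), as SYS
alter session set container = PDB1;
create table DEMO as select * from dual;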

Implicit database link

I’m connecting to CDB$ROOT with user, password and service name:

SQL> connect sys/oracle@//localhost/CDB1A as sysdba
Connected.

I insert a row into the DEMO table in the PDB1, which is CON_ID=3:

SQL> insert into containers(DEMO) (con_id,dummy) values (3,'Y');
1 row created.

This works in 12.2, is documented, and is an alternative to switching to the container, as sketched below.
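For comparison, the container-switching alternative would look something like this (a sketch, not taken from the original demo):

SQL> alter session set container = PDB1;
Session altered.
SQL> insert into DEMO (dummy) values ('Y');
1 row created.
SQL> commit;
Commit complete.
SQL> alter session set container = CDB$ROOT;
Session altered.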

But now, let's try to do the same when connecting with '/ as sysdba':

SQL> connect / as sysdba
Connected.
SQL> insert into containers(DEMO) (con_id,dummy) values (3,'Y');
 
insert into containers(DEMO) (con_id,dummy) values (3,'Y')
*
ERROR at line 1:
ORA-01017: invalid username/password; logon denied
ORA-02063: preceding line from PDB1

The first message mentions an invalid user/password, and the second one mentions a database link having the same name as the container.
As I described in the previous post, CONTAINERS() opens an implicit database link when modifying another container. But a database link requires a connection, and no user/password has been provided for it. It seems that Oracle tries to connect with the same user and password as the ones provided to connect to the root.

Then, I provide the user/password but with local connection (no service name):


SQL> connect sys/oracle as sysdba
Connected.
SQL> insert into containers(DEMO) (con_id,dummy) values (3,'Y');
insert into containers(DEMO) (con_id,dummy) values (3,'Y')
*
ERROR at line 1:
ORA-01017: invalid username/password; logon denied

There is no mention of a database link here, but the connection still fails. It seems that the session needs our connection string (including the service name) to work out how to connect to the PDB.

Explicit database link

There is an alternative. You can create the database link explicitly and it will then be used by CONTAINERS(), since it holds all the information required: user, password and service. But the risk is that you define this database link to connect to another user.

Here I also have a DEMO table created in the SCOTT schema:

SQL> create database link PDB1 connect to scott identified by tiger using '//localhost/PDB1';
Database link created.
 
SQL> select * from DEMO@PDB1;
 
D
-
X

From the root I insert with CONTAINERS() without mentioning the schema:

SQL> insert into containers(DEMO) (con_id,dummy) values (3,'S');
1 row created.

I get no errors here (I'm still connected / as sysdba) because I have a database link with the same name as the one it tries to use implicitly. So it works without any error or warning. But my database link does not connect to the same schema (SYS); it connects to SCOTT. And because a DEMO table with the same columns was there, the row was actually inserted into the SCOTT schema:

SQL> select * from DEMO@PDB1;
 
D
-
X
S
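As an additional check – a sketch I'm adding here, not part of the original demo – you could query V$DBLINK from the root session right after the insert to confirm that an outbound database link connection was opened by the session:

SQL> select db_link, logged_on, in_transaction from v$dblink;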

The big problem here is that a select through the same CONTAINERS() function uses a different mechanism: it does not use the database link but switches the session to the other container, in the same schema. So the row inserted through INSERT INTO CONTAINERS() is not displayed by SELECT FROM CONTAINERS():
SQL> select * from containers(DEMO);
 
D CON_ID
- ----------
X 1
X 3
Y 3

So what?

I don’t know if the first problem (invalid user/password) will be qualified as a bug but I hope the second one will. Cross-PDB DML will be an important component of Application Containers, and having a completely different implementation for SELECT and for INSERT/UPDATE/DELETE may be a source of problems. In my opinion, both should use container switch within the same session, but that means that a transaction should be able to write in multiple containers, which is not possible currently.

 

The post 12cR2 needs to connect with password for Cross-PDB DML appeared first on Blog dbi services.

Virtualizing Big Data in the Cloud

style="display:inline-block;width:320px;height:100px"
data-ad-client="ca-pub-5103295461547706"
data-ad-slot="5182487270">

I’m in sunny Phoenix this week at the Data Platforms 2017 Conference and looking forward to a break in the heat when I return to Colorado this evening.

As this event is about big data, I expected to present on how big data could benefit from virtualization, but was surprised to find that I didn't have a lot of luck finding customers utilizing us for this reason (yet).  As I've discussed in previous presentations, I was aware of what a “Swiss army knife” virtualization is, resolving numerous issues across a myriad of environments, yet often going unrecognized.

The Use Case

To find my use case, I went out to the web and found a great article, “The Case for Flat Files in Big Data Projects“, by Time.com interactive graphics editor Chris Wilson.  The article discusses data created as part of the ACA and used for another Time article, “How Much Money Does Your Doctor Get From Medical Companies“.  The data behind the interactive graphs is publicly available from cms.gov, and Chris discusses the challenges it created and how they overcame them.


Upon reading it, there was a clear explanation of why Chris's team did what they did to consume the data in a way that complemented their skill set.  It will resonate with anyone who works in an IT shop and has ended up with technical choices they're left to justify later on.  While observing conversations at the conference this week, I lost count of how many attendees weren't purely a “Hadoop” shop or a “Hive” shop; everyone had a group of varied solutions that resulted in their environment, and if you don't have a given tool today, don't count it out – Pig, Python, Kafka or others could show up tomorrow.

This results in a more open and accepting technical landscape, in which I, a “legacy data store” technologist, was welcome.  When I explained my upcoming talk and my experience, no one turned up their nose at any technology I admitted to having an interest in.

Along with the use case, the data itself was also available online.  As part of the policies in the ACA, the cms.gov site (The Centers for Medicare and Medicaid Services) gives you access to all of this valuable data, which can offer incredible insight into these programs.  The Time article focuses only on the payments to doctors from medical companies, but the data that is collected – separated out by area and then zipped up by year – is quite extensive and, as noted in a third article, as anticipated as the data was, it was also cumbersome and difficult to use.

The Requirements

I proceeded to take this use case and imagine it as part of an agile environment, with this data as part of the pipeline to produce revenue by providing information to the consumer.  What would be required and how could virtualization not only enhance what Chris Wilson’s team had built, but how could the entire pipeline benefit from Delphix’s “swiss army knife” approach?

  1. I can't assume this is the main data store. These flat files are a supplement to legacy data stores.
  2. There would be a standard development environment – development and testing would need their own environments, not just a production copy of these files, applications, etc.
  3. If it's providing data to a consumer, and data is in perpetual motion in the age of the internet, an agile development method would need to be in place: a short development cycle with many small, scrum-like development groups from different departments working on tasks.
  4. Automation and seamless deployment would reduce human intervention and resource demands, and lead to more successful deployments.

Solution

There were four areas I focused on to solve or eliminate bottlenecks that I either experienced or foresaw an organization experiencing with this data as part of their environment.

  1. Eliminate the need for multiple copies of the files and the slow, manual process of propagating them to targets for development, test, etc. by using Delphix's vFile option; this would include any applications or other non-relational tiers in the scenario.
  2. Eliminate copies and refreshes of the legacy data stores that the big data environment depends on, and create VDBs for all development, test and reporting.
  3. Protect all non-production environments by masking non-production databases and flat files.
  4. Containerize environments for easy deployment, delivery, testing and cloud migrations.

vFiles

Each of the files was just over 500MB compressed and 15-18GB uncompressed.  Each took over 4 minutes to transfer to a host, and the copies could add up to considerable space.

I used Delphix vFiles to virtualize the files.  This means that there is a single “gold copy” of the files at the Delphix engine, and an NFS mount “projects” the files onto each target, providing unique copies for as many development, test and reporting environments as needed.


Fig. 1 – Creating vFiles from the dSource that the flat files are sourced on.

If a refresh is required, the parent is refreshed and an automated refresh of all the “children” can be performed.  Changes can be made at the child level and, if catastrophic, Delphix allows for easy recovery, providing data version control, not just code version control, throughout the development and testing process.


Fig. 2 – Demonstration of a target vFile, showing the NFS mount and the files available (created in less than 10 seconds), and how easily it can be disabled, proving it to be a “projection” of the files.

It's a pretty cool feature and one that is very valuable in the big data arena.  I heard countless stories of how often, due to lack of storage, data scientists and developers were testing against subsets of data, only to find once in production that their code wouldn't complete, or failed, when run against the full data.  Having the FULL files without taking up more space for multiple environments would be incredibly beneficial and would shorten development cycles.

Virtualize

Most big data shops are still dependent on legacy data stores.  These are legendary roadblocks due to their size, complexity and refresh demands.  I proposed that those be virtualized so that each developer could have a copy with instant refresh and no extra storage demands, again easing development deadline pressures and allowing full access to the data for development success.

Protect

Most people know we mask relational databases, but did you know we have Agile Data Masking for flat files?  If these files are going to be pushed to non-production systems – especially with as much as we're now hearing in the US about the EU's GDPR (General Data Protection Regulation) – shouldn't we mask outside of the database too?

What kind of files can be masked?

  • Multi-record
  • CSV
  • XML
  • Word
  • Excel
  • PowerPoint
  • Unstructured
  • EDI

That's a pretty considerable and cool list.  The ability to mask data in flat files is a HUGE benefit to big data folks.  Many of them were looking at file security only at the permissions and encryption level, so the ability to render the data benign to risk is a fantastic improvement.

Containerize

The last step is a simple recognition that big data is complex and consists of a ton of moving parts.  It should be expected that much of it is home-built or open source, consisting of legacy data stores, flat files, applications and other dependent tiers.


Fig. 3- A Container, created on-prem, then moved to the cloud and to as many environments as required for the development cycle to meet the business needs.

Delphix has the ability to create templates of our sources (aka dSources), which is nothing more than creating a container.  In my use case enhancement, I took all of these legacy data stores, applications (including any Ajax code) and flat files, and created a template from them for simple refreshes and deployments via Jenkins, Chef jobs or other DevOps automation.  The ability to then take these templates and deploy them to the cloud would make a migration from on-prem to the cloud, or from one cloud vendor to another, a simpler process.


Fig. 4 – A look at the full scenario: Delphix engines masking files and databases, creating containers and deploying it all on-prem and to the cloud.

The end story is that this use case could be any big data shop or start-up in the world today.  So many of these companies are hindered by data, and Delphix virtualization could easily let their data move at the speed of business.

I want to thank Data Platforms 2017 and all the people who were so receptive of my talk.  If you’d like access to the slide deck, it’s been uploaded to Slideshare. I had a great time in Phoenix and hope I can come back soon!

 

 



Copyright © DBA Kevlar [Virtualizing Big Data in the Cloud], All Right Reserved. 2017.

The post Virtualizing Big Data in the Cloud appeared first on DBA Kevlar.

Interview with PeopleSoft Administrator Podcast: Cost-Based Optimizer Statistics in PeopleSoft

I recently recorded another interview with Dan Iverson and Kyle Benson for the PeopleSoft Administrator Podcast, this time about management of Cost-Based Optimizer Statistics in PeopleSoft systems.

(19 May 2017) #81 - Database Statistics

You can listen to the podcast on psadmin.io, or subscribe with your favourite podcast player, or in iTunes.

Oracle Security Training

Yesterday I made a short video to talk about my two day class " How to Perform a Security audit of an Oracle database " and added the video to YouTube. This class is going to be delivered at a....[Read More]

Posted by Pete On 26/05/17 At 09:39 AM

Parallelism

Headline – if you don’t want to read the note – the /*+ parallel(N) */ hint doesn’t mean a query will use parallel execution, even if there are enough parallel execution server processes to make it possible. The parallel(N) hint tells the optimizer to consider the cost of using parallel execution for each path that it examines, but ultimately the optimizer will still take the lowest cost path (bar the odd few special cases) and that path could turn out to be a serial path.

The likelihood of parallelism appearing for a given query changes across versions of Oracle so you can be fooled into thinking you’re seeing bugs as you test new versions but it’s (almost certainly) the same old rule being applied in different circumstances. Here’s an example – which I’ll start off on 11.2.0.4:


create table t1
segment creation immediate
nologging
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        rownum                          id,
        lpad(rownum,10,'0')             v1,
        lpad('x',100,'x')               padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6
;

create index t1_i1 on t1(id);

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                method_opt       => 'for all columns size 1'
        );
end;
/

set autotrace traceonly explain

select
        count(v1)
from    t1
where   id = 10
;

select
        /*+ parallel(4) */
        count(v1)
from    t1
where   id = 10
;

select
        /*+ parallel(4) full(t1) */
        count(v1)
from    t1
where   id = 10
;

set autotrace off

I haven't declared the index to be unique, but it clearly could be; and it's obvious that with 1M rows and about 120MB of table a parallel full scan is probably a bad idea for acquiring one row (even if you're running Exadata!). So what do we get for the three plans – I'll skip the predicate sections – when we want to collect one row?


Base plan - unhinted
--------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |     1 |    16 |     4   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE              |       |     1 |    16 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1    |     1 |    16 |     4   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | T1_I1 |     1 |       |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Hinted parallel(4)
--------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |     1 |    16 |     4   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE              |       |     1 |    16 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1    |     1 |    16 |     4   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | T1_I1 |     1 |       |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Hinted parallel(4) and full(t1)
----------------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |          |     1 |    16 |   606   (2)| 00:00:02 |        |      |            |
|   1 |  SORT AGGREGATE        |          |     1 |    16 |            |          |        |      |            |
|   2 |   PX COORDINATOR       |          |       |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM) | :TQ10000 |     1 |    16 |            |          |  Q1,00 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE     |          |     1 |    16 |            |          |  Q1,00 | PCWP |            |
|   5 |      PX BLOCK ITERATOR |          |     1 |    16 |   606   (2)| 00:00:02 |  Q1,00 | PCWC |            |
|*  6 |       TABLE ACCESS FULL| T1       |     1 |    16 |   606   (2)| 00:00:02 |  Q1,00 | PCWP |            |
----------------------------------------------------------------------------------------------------------------

In 11.2.0.4 the optimizer did consider the parallel hint when it appeared on its own – but it has compared the parallel(4) cost of 606 with the serial index cost of 16 and chosen the indexed access path. This is not a case of ignoring the hint, it’s an example of being fooled if you don’t know how the hint is really supposed to work.

But here’s an interesting change that appeared in 12.2 – this time just the plan with the parallel(4) hint on its own:


---------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |          |     1 |    16 |     4   (0)| 00:00:01 |        |      |            |
|   1 |  SORT AGGREGATE                         |          |     1 |    16 |            |          |        |      |            |
|   2 |   PX COORDINATOR                        |          |       |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM)                  | :TQ10001 |     1 |    16 |            |          |  Q1,01 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE                      |          |     1 |    16 |            |          |  Q1,01 | PCWP |            |
|   5 |      TABLE ACCESS BY INDEX ROWID BATCHED| T1       |     1 |    16 |     4   (0)| 00:00:01 |  Q1,01 | PCWP |            |
|   6 |       PX RECEIVE                        |          |     1 |       |     3   (0)| 00:00:01 |  Q1,01 | PCWP |            |
|   7 |        PX SEND HASH (BLOCK ADDRESS)     | :TQ10000 |     1 |       |     3   (0)| 00:00:01 |  Q1,00 | S->P | HASH (BLOCK|
|   8 |         PX SELECTOR                     |          |       |       |            |          |  Q1,00 | SCWC |            |
|*  9 |          INDEX RANGE SCAN               | T1_I1    |     1 |       |     3   (0)| 00:00:01 |  Q1,00 | SCWP |            |
---------------------------------------------------------------------------------------------------------------------------------

You get a parallel execution plan – although it starts with a serial index range scan, operated under the new (12c) PX SELECTOR operator, which allocates a serial operation to one of the parallel execution slaves rather than running it through the query coordinator (QC). The serial range scan does a hash distribution (hashed by block address of the rowids it finds, to avoid collisions between parallel execution slaves as they do their table accesses) with the net effect that the cost of the index range scan drops from 16 to 4.

This is just one cute little trick that makes it worth looking at the upgrade to 12c – this new path is likely to be of benefit to people who had to create global (as opposed to globally partitioned) indexes on partitioned tables.

This note was prompted by a recent twitter comment by Timur Akhmadeev followed in short order by an OTN posting that added further confusion to the problem by running Siebel – which is just one of several 3rd party products that love to configure optimizer parameters with non-standard values like: optimizer_index_cost_adj = 1, or optimizer_mode = first_rows_10. (At the last update I’ve seen on the thread, there seemed to be some other reason why parallelism was being blocked.)

Footnote

In a follow-up tweet, Timur directed me to the 11.2 SQL Language Reference manual – specifically the section on the Parallel hint – asking if this was an example of a documentation bug.

The trouble with the manuals is that sometimes they are obviously wrong, sometimes they are wrong but it's not obvious they are wrong, sometimes they omit important information, and sometimes they are badly written – most specifically, the writing can be ambiguous.

Here's an extract we could consider:

For PARALLEL, if you specify integer, then that degree of parallelism will be used for the statement.

But my example above shows a “parallel({integer})” hint where we didn’t use that degree of parallelism for the statement.

However the next two sentences read as follows:

If you omit integer, then the database computes the degree of parallelism. All the access paths that can use parallelism will use the specified or computed degree of parallelism.

So what if the optimizer uses the degree of parallelism while calculating the lowest cost plan and ends up with a serial plan? How comfortable would you feel saying that Oracle has “used the degree of parallelism for the statement”? Or would you say that the first sentence means Oracle isn't allowed to use a serial plan even if it finds one when doing the arithmetic with the appropriate degree of parallelism?

My call is that this is one of those ambiguous cases – the manual should say something more like:

For PARALLEL, if you specify integer, then that degree of parallelism will be used by the optimizer while calculating the best execution plan for the statement.

Even then I'm not sure that that's a complete statement of how the hint works, because when you have a full set of system statistics, or have used the dbms_resource_manager.calibrate_io mechanism to tell Oracle about the I/O capacity of the system, the optimizer may do some arithmetic that says something like: “the hint says degree 64, but the stats say the maximum effective degree will be 38, so I'll calculate using 38”. (This type of thing happens with the older usage of the parallel hint with manual parallelism – I haven't examined what happens with an automatic policy and the newer option for the hint.)
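For reference, here is a sketch (not from the original note) of invoking the I/O calibration mentioned above; the number of disks and the target latency are purely illustrative:

set serveroutput on
declare
        max_iops        pls_integer;
        max_mbps        pls_integer;
        actual_latency  pls_integer;
begin
        -- 4 physical disks and a 10ms maximum latency are assumptions for this sketch
        dbms_resource_manager.calibrate_io(
                num_physical_disks => 4,
                max_latency        => 10,
                max_iops           => max_iops,
                max_mbps           => max_mbps,
                actual_latency     => actual_latency
        );
        dbms_output.put_line('max_iops       : ' || max_iops);
        dbms_output.put_line('max_mbps       : ' || max_mbps);
        dbms_output.put_line('actual_latency : ' || actual_latency);
end;
/

-- the stored results can then be checked in dba_rsrc_io_calibrate
select * from dba_rsrc_io_calibrate;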

 

Joining Accenture Enkitec Group

I have accepted an offer by the Accenture Enkitec Group to join them starting next month. That unit is a kind of ‘special force’ inside the large Accenture Corporation with particular expertise in Oracle Database and Oracle Engineered Systems technology.

Although I feel quite a bit sad to leave Oracle after all those years, the opportunity to work together with all these bright people – you can see some of them here – outshines that largely.

O7_DICTIONARY_ACCESSIBILITY and UTL_FILE_DIR in Oracle 12c release 2

I was not in the beta program for Oracle database 12c release 2 but when I was discussing security changes in the new release with some people who were in the beta they told me that O7_DICTIONARY_ACCESSIBILITY and utl_file_dir parameters....[Read More]

Posted by Pete On 23/05/17 At 04:17 PM

255 Again!

There are so many things that can go wrong when you start using tables with more than 255 columns – here’s one I discovered partly because I was thinking about a client requirement, partly because I had a vague memory of a change in behaviour in 12c and Stefan Koehler pointed me to a blog note by Sayan Malakshinov when I asked the Oak Table if anyone remembered seeing the relevant note. Enough of the roundabout route, I’m going to start with a bit of code to create a table, stick a row in it, then update that row:

rem
rem     Script: wide_table_4.sql
rem     Author: Jonathan Lewis
rem     Dated:  May 2017
rem
rem     Last tested
rem             12.2.0.1
rem             12.1.0.2
rem             11.2.0.4
rem

set pagesize 0
set feedback off

spool temp.sql

prompt create table t1(

select
        'col' || to_char(rownum,'fm0000') || '  varchar2(10),'
from
        all_objects
where   rownum <= 320
;

prompt col0321 varchar2(10)
prompt )
prompt /

spool off

@temp

set pagesize 40
set feedback on

insert into t1 (col0010, col0280) values ('0010','0280');
commit;

update t1 set col0320 ='0320';
commit;

column file_no  new_value m_file_no
column block_no new_value m_block_no

select
        dbms_rowid.rowid_relative_fno(rowid)    file_no,
        dbms_rowid.rowid_block_number(rowid)    block_no,
        dbms_rowid.rowid_row_number(rowid)      row_no
from
        t1
;

alter system flush buffer_cache;
alter system dump datafile &m_file_no block &m_block_no;


So I’ve written one of those horrible scripts that write a script and then run it. The script creates a table with 320 columns and inserts a row that populates columns 10 and 280. That gets me two row pieces, one consisting of the 255 columns from columns 26 to 280 that goes in as row piece 0, the other consisting of the first 25 columns that goes in as row piece 1; the remaining 40 columns are not populated so Oracle “forgets” about them (“trailing nulls take no space”). The script then updates the row by setting column 320 to a non-null value.

For convenience I’ve then generated the file and block number (and row number, just to show its head piece went in as row 1 rather than row 0) of the row and done a symbolic block dump. The question is: what am I going to see in that block dump ?

Answers (part 1)

Here’s an extract from the block dump – though I’ve cut out a lot of lines reporting the NULL columns:


ntab=1
nrow=2
frre=-1
fsbo=0x16
fseo=0x1e54
avsp=0x1e3e
tosp=0x1f13
0xe:pti[0]      nrow=2  offs=0
0x12:pri[0]     offs=0x1e7a
0x14:pri[1]     offs=0x1e54
block_row_dump:
tab 0, row 0, @0x1e7a
tl: 49 fb: -------- lb: 0x2  cc: 40
nrid:  0x014000a7.0
col  0: *NULL*
col  1: *NULL*
col  2: *NULL*
...
col 37: *NULL*
col 38: *NULL*
col 39: *NULL*
tab 0, row 1, @0x1e54
tl: 38 fb: --H-F--- lb: 0x2  cc: 25
nrid:  0x014000a3.0
col  0: *NULL*
col  1: *NULL*
col  2: *NULL*
...
col 22: *NULL*
col 23: *NULL*
col 24: *NULL*
end_of_block_dump

The block holds two row pieces, and the piece stored as “row 1” is the starting row piece (the H in the flag byte (fb) tells us this). This row piece consists of 25 columns. The next row piece (identified by nrid:) is row zero in block 0x014000a3 – that's block 163 of file 5 – which is the same block as the first row piece. When we look at row zero we see that it holds 40 columns, all null; it's pointing to a third row piece at row zero in block 0x014000a7 (file 5, block 167), and as corroboration we can also see that its flag byte has no bits set, which tells us that this is just a boring “somewhere in the middle” piece. It looks like we have to follow the pointer to find the last 255 columns of the table, so let's take a look at the dump of file 5 block 167:


fsbo=0x14
fseo=0x1e76
avsp=0x1e62
tosp=0x1e62
0xe:pti[0]      nrow=1  offs=0
0x12:pri[0]     offs=0x1e76
block_row_dump:
tab 0, row 0, @0x1e76
tl: 266 fb: -----L-- lb: 0x1  cc: 255
col  0: *NULL*
col  1: *NULL*
col  2: *NULL*
...
col 251: *NULL*
col 252: *NULL*
col 253: *NULL*
col 254: [ 4]  30 33 32 30
end_of_block_dump

Take note of the L in the flag byte – that tells us that we’re looking at the last row piece of a multi-piece row. It’s that last 255 columns we were looking for. The mechanics have worked as follows

  • On the simple insert Oracle split the used 280 columns into (25, 255)
  • On the update we grew the used column count from 280 to 320, adding 40 columns. Oracle extended the 255-column row piece to 295 columns, then split it (40, 255), leaving 40 columns in the original block and migrating the 255 to a new block.

So a row that could have been two pieces in one block is now three pieces spread over two blocks; and there's worse to come. Go back to the original block dump and check the used space. A good first approximation would be to check the “tl:” (total length) value for each row piece – this gives you 49 + 38 bytes; add on a couple of hundred for the general block overhead and stuff like the transaction table and you find you've used less than 300 bytes in the block. But I've got a little procedure (I published this version of it some time ago) to check for free and used space – and this is what it said about the (ASSM) segment that holds this table:


Unformatted                   :           44 /          360,448
Freespace 1 (  0 -  25% free) :            0 /                0
Freespace 2 ( 25 -  50% free) :            0 /                0
Freespace 3 ( 50 -  75% free) :            0 /                0
Freespace 4 ( 75 - 100% free) :           15 /          122,880
Full                          :            1 /            8,192

Take particular note of the “Full” block at the end of the report – that’s the block where we’ve used up rather less than 300 bytes. In fact if you look again at the first block dump you’ll see the avsp (available space) and tosp (total space) figures of 0x1e3e and 0x1f13 bytes (7,742 and 7,955 bytes). There’s loads of space in the block – but the block is marked in the bitmap space management map as full. That’s really bad news.
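The published procedure isn't reproduced here, but as a rough sketch a report like the one above can be built on dbms_space.space_usage for an ASSM segment; the segment owner and name below are just those used in this demo:

set serveroutput on
declare
        unf     number;  unf_b   number;
        fs1     number;  fs1_b   number;
        fs2     number;  fs2_b   number;
        fs3     number;  fs3_b   number;
        fs4     number;  fs4_b   number;
        full_bl number;  full_by number;
begin
        dbms_space.space_usage(
                segment_owner      => user,
                segment_name       => 'T1',
                segment_type       => 'TABLE',
                unformatted_blocks => unf,     unformatted_bytes => unf_b,
                fs1_blocks         => fs1,     fs1_bytes         => fs1_b,
                fs2_blocks         => fs2,     fs2_bytes         => fs2_b,
                fs3_blocks         => fs3,     fs3_bytes         => fs3_b,
                fs4_blocks         => fs4,     fs4_bytes         => fs4_b,
                full_blocks        => full_bl, full_bytes        => full_by
        );
        dbms_output.put_line('Unformatted                   : ' || unf     || ' / ' || unf_b);
        dbms_output.put_line('Freespace 1 (  0 -  25% free) : ' || fs1     || ' / ' || fs1_b);
        dbms_output.put_line('Freespace 2 ( 25 -  50% free) : ' || fs2     || ' / ' || fs2_b);
        dbms_output.put_line('Freespace 3 ( 50 -  75% free) : ' || fs3     || ' / ' || fs3_b);
        dbms_output.put_line('Freespace 4 ( 75 - 100% free) : ' || fs4     || ' / ' || fs4_b);
        dbms_output.put_line('Full                          : ' || full_bl || ' / ' || full_by);
end;
/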

And there’s still more to come – but it will have to wait a little longer.

Oracle SuperCluster M7 SLOB LIO Tests vs Intel Xeon E5-2699 V4

Here are some SLOB LIO figures from a DB zone configured with 16 threads running on an Oracle SuperCluster M7 hardware. For comparison I've also included numbers from an Intel Xeon E5-2699 V4 CPU.

It makes sense to mention that this is not exactly a fair comparison – a single SPARC M7 core has 8 threads associated with it, so my zone is able to utilize a total of two SPARC M7 cores (16 threads total with 8 threads per core). The E5-2699 V4 is currently Intel's top-of-the-line CPU in terms of core count, with 22 cores. So we're comparing two SPARC M7 cores vs 16 E5-2699 cores. It does, however, help answer the question: if you're running on a certain number of Intel cores, what kind of performance can you expect when you move over to a heavily threaded M7 and transfer your workload "as is"?

Below are the results:



The first thing worth mentioning is that the M7 still exhibits a large deficit when it comes to single-threaded performance – a single SPARC M7 thread delivers about 60% of the performance of an E5-2699 V4 core (which is not even a top-bin CPU when it comes to frequency). Throughput is a different story – two M7 cores are almost able to match four E5-2699 V4 cores thanks to the heavily threaded design.







Where in the World is Geek Goth Girl- Week 21

Happy National Goth Day!  Although I consider myself “Goth Lite”, it's a national holiday in my little world and I'm pondering what books I'll write after I run out of tech titles.  Needless to say, I've settled on the title “Staying Geeky and Goth After 50” and chosen the following for the cover art:

I’m considering adding a black choker to Ma Goth and maybe some facial piercings to Pa- we’ll see.

As May winds down, I look forward to a vacation in Paris the beginning of June with my favorite person, Tim Gorman.  As much as we travel together, we rarely get to see much of each other while at an event and are both looking forward to some downtime to just spend together.

After last week's Data Summit 2017, DBTA did a wonderful write-up on my session, but as I'm my own worst critic, it let me know I needed to do a better job of relaying my message to the community: the article didn't include one mention of the importance of virtualization in removing that pesky bottleneck of data from cloud migrations.  I greatly appreciate the opportunity to speak at this awesome event and shall promise to do better next time…