Oakies Blog Aggregator

Pi Day, March 14

Geeks around the world will be celebrating Pi Day, a shameless excuse to put the month before the day like the Americans do, so we can talk mathematics on “3.14” day.

So what better way to show how cool SQL can be than with some expressions that approach Pi via infinite series?

All we really need to know by way of background is the CONNECT BY LEVEL trick to generate arbitrary sequences of rows, e.g.


SQL> select level from dual connect by level <= 5;

     LEVEL
----------
         1
         2
         3
         4
         5
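
The same trick works with any expression of LEVEL, which is how the series below index their terms. For example, the odd denominators 1, 3, 5, 7, 9 that we will use in a moment:

select level*2-1 odd_denominator
from dual
connect by level <= 5;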

Now we can get some approximations to Pi using some of the standard SQL functions, SQRT and POWER:



SQL> select sqrt(8*sum(1/power((level*2-1),2))) pi
  2  from dual
  3  connect by level <= 100000
  4  order by 1 desc;

        PI
----------
3.14158947

1 row selected.

SQL>
SQL>
SQL> select sqrt(6*sum(1/(level*level))) pi
  2  from dual
  3  connect by level <= 100000
  4  order by 1 desc;

        PI
----------
 3.1415831

1 row selected.

SQL>
SQL> select 8*sum(1/((4*(level-1)+1)*(4*(level-1)+3))) pi
  2  from dual
  3  connect by level <= 100000;

        PI
----------
3.14158765

1 row selected.

SQL>
SQL>
SQL> select
  2   4*sum(
  3     power(-1,level-1)/(level*2-1)*
  4        ( 12*power(1/18,level*2-1)+
  5           8*power(1/57,level*2-1)-
  6           5*power(1/239,level*2-1))) pi
  7  from dual
  8  connect by level <= 100;

        PI
----------
3.14159265

1 row selected.

SQL>
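
For comparison, here is a minimal sketch of the plain Gregory-Leibniz series, Pi = 4*(1 - 1/3 + 1/5 - 1/7 + ...). It converges far more slowly than the Machin-like formula above; even at 100,000 terms it is only accurate to about five digits, much like the first two series:

select 4*sum(power(-1,level-1)/(level*2-1)) pi
from dual
connect by level <= 100000;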

Some methods to calculate Pi need factorials, but there isn’t a native SQL function for that. In 12c, though, that’s no problem: we can define PL/SQL functions on the fly, directly inside our SQL statement!


SQL>
SQL> with
  2    function factorial(n int) return int is
  3      f int := 1;
  4    begin
  5      for i in 1 .. n loop
  6        f := f * i;
  7      end loop;
  8      return f;
  9    end;
 10  select  2*sum(
 11    power(2,level-1)*power(factorial(level-1),2)/factorial(2*level-1)
 12    ) pi
 13  from dual
 14  connect by level <= 17;
 15  /

        PI
----------
 3.1415864

1 row selected.

But what if you’re not on 12c yet? Well, you should be! You can check out why it’s time to upgrade from Maria and myself chatting about it over coffee.

But whilst you’re on 11g, there are still plenty of cool SQL options for playing with Pi. Here we can use recursive SQL to generate Pi, based on the formula:

 

[Viète's formula: 2/Pi = (√2/2) · (√(2+√2)/2) · (√(2+√(2+√2))/2) · …]



SQL> with term(numerator,product,seq) as
  2  ( select sqrt(2) numerator, sqrt(2)/2 product , 1 seq from dual
  3    union all
  4    select sqrt(2+numerator), sqrt(2+numerator)*product/2 , seq+1
  5    from   term, dual
  6    where  term.seq <= 16
  7  )
  8  select 2/product pi
  9  from term
 10  where seq = 16;

        PI
----------
3.14159265

1 row selected.

 

SQL…still the most awesome language out there!

Default Password Hashes for 11g Oracle Database

I often get Oracle Security related questions sent randomly to my inbox, occasionally on social media, and less often on this site's forum. I get questions this way on average around 4 times per week. I....[Read More]

Posted by Pete On 14/03/17 At 06:16 PM

Little things worth knowing: Executing RDA on RAC

Result! I have finally been able to gather a complete RDA (Oracle Remote Diagnostic Agent) output on my 2 node RAC system. After consulting the relevant documentation on MOS, which is spread over at least 42 Doc IDs, I found it not to be very helpful, to the degree that some of what I read was actually wrong or contradictory. I put together a short note, primarily to myself, to remind me of the process. I hope you find it useful, too.

The RDA version I used for this post is 8.14.x, downloaded from MOS on March 4th 2017. My RAC nodes are based on Oracle Linux 7.3/UEK 4.

Starting the data collection

RDA is one of the standard tools I use, and I have previously blogged about it. RDA started off as a simple-to-use tool. After having used it for some years I started to run into issues with the interactive configuration, which simply took too long to complete. As soon as I learned about profiles, I fell in love with them: RDA profiles only prompt you for information relevant to the product you want to collect data for.

Running RDA on a RAC system is similar to single instance, and you automatically use profiles, which is a nice touch. It appears to me that RAC data collection is triggered via SSH on the remote nodes, with the results being transferred to the local node.

I believe there are two methods of data gathering on RAC: one for Grid Infrastructure, and another one for the RDBMS layer. Depending on your settings of ORACLE_HOME and ORACLE_SID different parts of the stack are analysed. In this post I was primarily interested in running RDA for the RDBMS part of my cluster.

I started off by downloading the current RDA version to the system I want to run it on. This is a 2-node RAC, with nodes named rac12pri1 and rac12pri2. Both are based on Oracle Linux 7.3 with the most current UEK4 at the time of writing. My RDBMS homes are 12.1.0.2 patched with the January 2017 proactive bundle patch + OJVM.

The first caveat looms right at the beginning of the entire process: I didn’t find it stated explicitly in the documentation on MOS, but it seems that you need to deploy RDA on all nodes in the cluster, and in the exact same location, before you start the data collection. I use /home/oracle/rda for that purpose. Make sure you have plenty of space in the location you choose; sometimes /home doesn’t provide enough for larger, more active systems.

During my first unsuccessful attempts I didn’t deploy RDA on all nodes before starting the data collection, only to see it not gather any information from the remote node. This is somewhat confusing, because the output (when collecting data) states this:

NOD002: Installing RDA software

NOD002 is short for the second node in my cluster.

Also, when configuring the data gathering process, you will see a prompt like this one:

Enter the absolute path of the directory where RDA will be installed on the remote nodes.

To a casual observer like me these messages suggest that RDA is actually installed on the remote nodes as part of the data collection process, but this is only partially done. Comparing directory sizes between remote and local node revealed /home/oracle/rda to be greater than 200MB locally, while on the remote nodes it was only 37MB in size. Something seems to be missing…

Once deployed, you can change to the RDA directory and prepare for the data collection. You actually execute RDA on the first node only; the rest of the work is done programmatically. The MOS note seems to be correct this time. Here is an example of my configuration session:

[oracle@rac12pri1 rda]$ ./rda.pl -vXRda start CLOUD -pRac_Assessment
Creating collection "output" ...
        - Trying to identify the domain ... (can take time)
 Inside DFT scenario, define the input and profile targets
 Inside DFT scenario, check Oracle home or Middleware presence
        - RDA:DCbegin ...
------------------------------------------------------------------------------
RDA.BEGIN: Initializes the Data Collection
------------------------------------------------------------------------------
Enter the Oracle home to be used for data analysis
Press Return to accept the default (/u01/app/oracle/product/12.1.0.2/dbhome_1)
>

        - RDA:DCconfig ...
------------------------------------------------------------------------------
RDA.CONFIG: Collects Key Configuration Information
------------------------------------------------------------------------------
        - RDA:DCocm ...
------------------------------------------------------------------------------
RDA.OCM: Set up the Configuration Manager Interface
------------------------------------------------------------------------------
        - RDA:DCstatus ...
------------------------------------------------------------------------------
RDA.STATUS: Produces the Remote Data Collection Reports
------------------------------------------------------------------------------
        - RDA:DCload ...
------------------------------------------------------------------------------
RDA.LOAD: Produces the External Collection Reports
------------------------------------------------------------------------------
        - RDA:DCfilter ...
------------------------------------------------------------------------------
RDA.FILTER: Controls Report Content Filtering
------------------------------------------------------------------------------
        - RDA:DCend ...
------------------------------------------------------------------------------
RDA.END: Finalizes the Data Collection
------------------------------------------------------------------------------
In CRS start scenario, getting nodes using /u01/app/12.1.0.2/grid/bin/olsnodes
2>/dev/null
------------------------------------------------------------------------------
Requesting common cluster information
------------------------------------------------------------------------------
Enter the absolute path of the directory where RDA will be installed on the
remote nodes.
Press Return to accept the default (/home/oracle/rda)
>

Do you want RDA to use an alternative login to execute remote requests (Y/N)?
Press Return to accept the default (N)
>

In CRS start scenario, getting local node information
In CRS start scenario, pinging node rac12pri2
------------------------------------------------------------------------------
Requesting information for node rac12pri1
------------------------------------------------------------------------------
Enter the Oracle home to be analyzed on the node rac12pri1
Press Return to accept the default (/u01/app/oracle/product/12.1.0.2/dbhome_1)
>

Enter the Oracle SID to be analyzed on the node rac12pri1
Press Return to accept the default (NCDBA1)
>

------------------------------------------------------------------------------
Requesting information for node rac12pri2
------------------------------------------------------------------------------
Enter the Oracle home to be analyzed on the node rac12pri2
Press Return to accept the default (/u01/app/oracle/product/12.1.0.2/dbhome_1)
>

Enter the Oracle SID to be analyzed on the node rac12pri2
> NCDBA2

        - RDA:DCremote ...
------------------------------------------------------------------------------
RAC Setup Summary
------------------------------------------------------------------------------
Nodes:
. NOD001  rac12pri1/NCDBA1
. NOD002  rac12pri2/NCDBA2
2 nodes found

So RDA understands the RAC scenario, and it gathers data for each node that is part of the cluster as reported by the olsnodes tool. There was nothing really surprising about the prompts, except that I haven’t found a way to analyse more than one database, or ASM and a database together.

Be careful to enter the correct ORACLE_SID for your (remote) RAC nodes. RDA picks up your ORACLE_HOME and ORACLE_SID if they are set.

Optionally verify the correct number of nodes was detected

I am only mentioning this for the sake of completeness; the RAC Setup Summary you saw in the previous step was more than adequate for me. If you really want to find out about the number of nodes you are about to collect information from, be careful: MOS Doc ID 359395.1 is wrong, as the command ./rda.pl -vC RDA.REMOTE list will not only list the nodes, it will also start the data collection. Use this one instead, which I found in MOS Doc ID 1682909.1:

[oracle@rac12pri1 rda]$ ./rda.pl -XRemote list
Defined nodes:
  NOD001  rac12pri1  NCDBA1
  NOD002  rac12pri2  NCDBA2
[oracle@rac12pri1 rda]$

It merely lists the nodes, without actually starting to do any work.

Initiate data collection

When you are happy with the output and configuration, start collecting data. Here is an example of my session:

[oracle@rac12pri1 rda]$ ./rda.pl -v -e TRC/TRACE=1
Collecting diagnostic data ...
------------------------------------------------------------------------------
RDA Data Collection Started 04-Mar-2017 16:10:15
------------------------------------------------------------------------------
Processing RDA.BEGIN module ...
 Inside BEGIN module, testing the RDA engine code build
 Inside BEGIN module, testing the report directory
 Inside BEGIN module, testing the module targets
 Inside BEGIN module, launching parallel executions
Processing RDA.CONFIG module ...
 Inside CONFIG module, listing Oracle homes
 Inside CONFIG module, getting Oracle home inventory (can take time)
Processing RDA.REMOTE module ...
NOD001: Detecting storage type
NOD002: Detecting storage type
NOD001: Running RDA command
NOD002: Installing RDA software
NOD002: Running RDA command
NOD002: Transfering report package
NOD001: Transfering report package
Processing RDA.END module ...
 Inside END module, gathering system information
 Inside END module, getting CPU information (linux)
 Inside END module, getting memory information (linux)
 Inside END module, producing the file catalog
 Inside END module, producing target overview
 Inside END module, waiting for parallel execution completion
 Inside END module, producing setting overview
------------------------------------------------------------------------------
RDA Data Collection Ended 04-Mar-2017 16:17:44
------------------------------------------------------------------------------
Generating the reports ...
        - collect/RDA_CONFIG_homes.txt ...
        - collect/RDA_CONFIG_oh_inv.txt ...
        - collect/RDA_END_system.txt ...
        - collect/RDA_END_files.txt ...
        - collect/RDA_END_target.txt ...
        - collect/RDA_END_report.txt ...
        - Report index ...
Packaging collection results ...

[...skipping a lot of text...]

  You can review the reports by transferring the /home/oracle/rda/output
  directory structure to a location where you have web-browser access.
Then, point your browser at this file to display the reports:
    RDA__start.htm

[...skipping a lot of text...]

It is crucially important to see these lines:

Processing RDA.REMOTE module ...
NOD001: Detecting storage type
NOD002: Detecting storage type
NOD001: Running RDA command
NOD002: Installing RDA software
NOD002: Running RDA command
NOD002: Transfering report package
NOD001: Transfering report package

In my first attempts, when I didn’t deploy RDA on all nodes myself, the lines “NOD002: Running RDA command” and “NOD002: Transfering report package” weren’t shown. Unsurprisingly no data was gathered on the remote nodes.

Viewing the output

At the end of the data collection you should see a *.tar.gz file per node. In my 2 node cluster setup, there are two:

[oracle@rac12pri1 rda]$ ls output/remote/
RDA_nod001_output.tar.gz  RDA_nod002_output.tar.gz

You can view these in your browser after extracting them to a temporary location; start with the file named RDA__start.htm, which presents the various parts of the report.

Further reading: Testing user equivalence prior to gathering data

While trying to troubleshoot my remote data gathering problems (I suspected a problem with SSH at first) I noticed that RDA offers test modules as well (see MOS Doc ID 330760.1). What’s missing from the Doc ID is an example of how to invoke the SSH test module; or rather, the command in the RAC specific instructions seems not to work. But it isn’t too hard to figure out the proper call to the RDA executable. The following worked for me:

[oracle@rac12pri1 rda]$ ./rda.pl -T ssh
Processing remote operation test module ...

Command Availability:
  rcp          (Not found)
  remsh        (Not found)
  rsh          (Not found)
  scp          /bin/scp
  ssh          /bin/ssh
  ssh-add      /bin/ssh-add
  ssh-agent    /bin/ssh-agent

Related files:
  /etc/ssh/moduli
  /etc/ssh/ssh_config
  /etc/ssh/ssh_host_ecdsa_key
  /etc/ssh/ssh_host_ecdsa_key.pub
  /etc/ssh/ssh_host_ed25519_key
  /etc/ssh/ssh_host_ed25519_key.pub
  /etc/ssh/ssh_host_rsa_key
  /etc/ssh/ssh_host_rsa_key.pub
  /etc/ssh/sshd_config
  /home/oracle/.ssh/authorized_keys
  /home/oracle/.ssh/config
  /home/oracle/.ssh/config.backup
  /home/oracle/.ssh/id_rsa
  /home/oracle/.ssh/id_rsa.pub
  /home/oracle/.ssh/known_hosts

Check if an authentication agent has been started
  SSH_AGENT_PID=1
  SSH_AUTH_SOCK=1
Agent identities:
  1024 some:colums:separated:by:colons:indicating:the:ID
  /home/oracle/.ssh/id_rsa (RSA)

Driver Availability:
    da Available
  jsch Available
   ssh Available
   rsh -

Settings modified by the remote operation initialization:
  REMOTE.F_SCP_COMMAND=/bin/scp
  REMOTE.F_SSH_COMMAND=/bin/ssh
  REMOTE.T_SCP_OPTIONS=-BCpq -o ConnectTimeout=30
  REMOTE.T_SSH_OPTIONS=-Cnq -o ConnectTimeout=30

------------------------------------------------------------------------------
Test a remote command
------------------------------------------------------------------------------
Enter the remote host name
Press Return to accept the default (localhost)
> rac12pri2

Do you want RDA to use an alternative login to execute remote requests (Y/N)?
Press Return to accept the default (N)
>

Check remote command execution with best driver
Exit code: 0

Check remote command execution using DA
Exit code: 0

Check remote command execution using JSCH
Exit code: 0

Check remote command execution using SSH
Exit code: 0

Check remote command execution using SSH (no timeout)
Exit code: 0
[oracle@rac12pri1 rda]$

Looking at the above output led me to believe that there isn’t an SSH-related problem with my cluster.

Generating randomized incremental values

In the previous entries, I explained the basics of testdata_ninja and then how to create foreign key relations between your test data and existing tables. This time I will show how you can create data that is random but always increasing, either to create time patterns or to create unique id columns for primary keys. I will extend the format that I have already used in the previous entries, so the code will be familiar. If we look at the output of the test_days generator we have created, it currently looks like this:


SQL> select * from table(tdg_test_days.test_days);

NAME                                               BIRTHDAY            MY_PK_REF
-------------------------------------------------- ------------------ ----------
Arianna Turner                                     02-MAR-93 00:24:58          1
Sofie Dam                                          17-MAY-91 17:21:09          1
Camila King                                        27-OCT-62 18:16:56         10
Malik Henriksen                                    24-NOV-65 06:23:51         10
.... [more rows] ....

There is no column with guaranteed unique values, so I will create a new column whose data generator is an "incrementor". Like all the other data fields, it is the third field of the column definition that defines how the data is generated. So far the format looks like this:


name#varchar2(50)#person_random.r_name@birthday#date#person_random.r_birthday@my_pk_ref#number#£t_people¤person_num_pk¤simple¤5

So let us add a column at the beginning that increments randomly but is unique nonetheless:


people_pk#number#^numiterate~1~3¤5

and here is the explanation of the fields, which are separated by the "~" character:

  • incrementor type (numiterate for numbers and datiterate for dates)
  • start from (number or date to start increment from)
  • increment range (for numbers: min and max separated by the "¤" character; for dates: the interval type (seconds, minutes, etc.) plus min and max, separated by the "¤" character, for example: seconds¤1¤5)
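
So, purely for illustration, a hypothetical date column built the same way might look like this (the column name and start-date format are my guesses; this post only shows the numiterate variant):

created#date#^datiterate~01-JAN-2017~seconds¤1¤5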

So if we now take a look at the full format in the code, it would look like this:


declare
  the_format varchar2(4000) := 'people_pk#number#^numiterate~1~3¤5@name#varchar2(50)#person_random.r_name@birthday#date#person_random.r_birthday@my_pk_ref#number#£t_people¤person_num_pk¤simple¤5';
begin
  testdata_ninja.generator_create('test_days', the_format);
end;
/

And it would create a result like the following:


1	Sigurd Andreasen	20-NOV-63 12:33:49	1
6	Ella Madsen	30-JAN-80 07:09:39	1
9	Alexandra White	16-JUL-53 16:32:07	1
13	Benjamin Holst	08-APR-88 13:39:38	1
17	Lea Jepsen	24-MAY-88 17:21:01	1
21	Eli Morgan	07-AUG-86 15:54:27	10
26	Oskar Bruun	20-JUL-69 22:40:56	10
31	Jens Friis	26-APR-63 16:32:21	10
35	Camila Hall	05-DEC-78 23:30:19	10
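
A quick way to convince yourself that the people_pk column really is unique is to compare the total and distinct row counts; a minimal sketch against the tdg_test_days generator created above:

select count(*) total_rows
     , count(distinct people_pk) distinct_pks
from table(tdg_test_days.test_days);

If the incrementor works as described, the two counts will be equal.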

Swingbench with the AWS Trial

Swingbench is one of the best choices for putting an easy load on a database. I wanted to use it against the SH sample schema I loaded into my Oracle source database, and I haven’t used Swingbench outside of the command line in quite a while, (my databases seem to always come with a load on them!) so it was time to update my Swingbench skills and catch up with the user interface. Thanks to Dominic Giles for keeping the download, features and documentation so well maintained.
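
Since everything here hinges on the SH sample schema being in place, a quick sanity check before configuring Swingbench can't hurt; a minimal sketch (num_rows is only populated once statistics have been gathered):

select table_name, num_rows
from all_tables
where owner = 'SH'
order by table_name;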

After granting the application rights to run on my MacBook Pro, I was impressed by the clean and complete interface. I wanted to connect it to my AWS instance and, as we often discuss, the move to the cloud is a simpler change than most DBAs first consider.

When first accessed, Swingbench will prompt you to choose which pre-configured workload you’d like to utilize. I had already set up the Sales History schema in my AWS Trial source database, so I chose Sales History and then had to perform a few simple configurations to get it to run.

Username: sh

Password: 

Connect String: <host>:<port>:<SID>

Proceed down to the tab for Environment Variables and add the following:

ORACLE_HOME <your Oracle home path>

I chose the default 16 connections to start out, but you can add more if you’d like. You can also configure stats collection, and snapshot collection before and after the workload.

[Screenshot: Swingbench configuration showing the connection settings and environment variables]

I set my autoconnect to true, but by default the load will not start until you hit the green arrow button. The load will then execute the workload with the number of connections requested until you hit the red stop button. You should see the users logged in at the bottom right and in the events window:

[Screenshot: Swingbench events window with the connected users shown at the bottom right]

In the next post we’ll discuss what you’ll see when running Swingbench against a source database, the Delphix Engine host and subsequent refreshes to a VDB (virtual database). We’ll also discuss other tools that can give you visibility into optimization opportunities in the cloud.

 

 



Copyright © DBA Kevlar [Swingbench with the AWS Trial], All Right Reserved. 2017.

The post Swingbench with the AWS Trial appeared first on DBA Kevlar.

Test data and foreign keys

In my last entry on testdata_ninja I showed the basics on how to create your own custom test data generators.

Once you have created the simple ones, you probably want to extend them a little bit and create test tables that have relations. So you want to create random data, but random data that can actually join to another table. The way to do this is with a special data generator construct in the testdata_ninja format. If you read the old entry, you can see that the testdata_ninja format is a series of columns separated by the "@" sign. Each column has at least 3 parts (divided by the "#" character), where the third is the data generator.

In the previous example I wrote about, we used functions from the random_ninja package. You can, however, reference existing tables instead. First we want to create a master table that we can reference in our new generator. We will use the basic data generators that are already part of the testdata_ninja package to create this:


create table t_people as select * from table(testdata_generator.people(100));

This statement will create a table with a hundred rows in it. This table is what we will use to create our other test data table. One of the columns is called person_num_pk, and that is the column with the relation to our other table. The way we define that is with a special character as the first character in our generator field. In my last entry I defined the test table with the following format:


name#varchar2(50)#person_random.r_name@birthday#date#person_random.r_birthday

So let us say that I want to add another column here that links to the primary key in the table I just created. To do that I simply add a third column definition like the following:


name#varchar2(50)#person_random.r_name@birthday#date#person_random.r_birthday@my_pk_ref#number#£t_people¤person_num_pk¤simple¤5

Notice the "£" sign. This is what marks this generator as a reference to another table rather than a function. The fields in the reference generator (divided with the "¤" character) are the following:

  • table name that we are referencing (t_people)
  • column in that table we are referencing (person_num_pk)
  • distribution rule (simple or range)
  • distribution count: if the rule is simple, a single number, and every reference will have this many rows in the new generator; if the rule is range, you specify a min and max, such as 2,5, and every reference will have between 2 and 5 rows in the new generator
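
For illustration, a hypothetical version of the reference column using the range rule might look like this (the exact range syntax is my assumption based on the description above; this post only demonstrates the simple rule):

my_pk_ref#number#£t_people¤person_num_pk¤range¤2,5

The example below sticks with the simple rule.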

So to create this new table we would simply run:


declare
  the_format varchar2(4000) := 'name#varchar2(50)#person_random.r_name@birthday#date#person_random.r_birthday@my_pk_ref#number#£t_people¤person_num_pk¤simple¤5';
begin
  testdata_ninja.generator_create('test_days', the_format);
end;
/

and we can now select from the generator and see that each value in the my_pk_ref column is a real reference back to the t_people table, and that there are 5 "child" rows per row from the t_people table:


select
  a.first_name
  , b.birthday
from
  t_people a
  , (select name, birthday, my_pk_ref from table(tdg_test_days.test_days)) b
where
  a.person_num_pk = b.my_pk_ref;

FIRST_NAME                                                                                           BIRTHDAY         
---------------------------------------------------------------------------------------------------- ------------------
Parker                                                                                               14-SEP-64 13:35:56
Parker                                                                                               18-APR-62 05:02:35
Parker                                                                                               05-SEP-93 12:35:50
Parker                                                                                               16-AUG-82 21:19:14
Parker                                                                                               11-MAR-85 15:51:54
Bryson                                                                                               03-SEP-84 19:56:20
Bryson                                                                                               08-JUL-96 14:33:07
Bryson                                                                                               30-DEC-62 06:28:59
.......... [more rows]

Google Cloud Platform instances and Oracle Database

When it comes to choosing a cloud instance to run Oracle Database, you want to be able to run your workload on the minimum number of CPU cores. This is why, in a previous post, I measured how many logical reads per second can be achieved with a SLOB workload on AWS, which is often the first cloud considered, and I will probably do the same on Azure in the future. I did the same on the Oracle Cloud, which is the only one where Oracle makes it easy to run and license the Oracle Database.

Today I’ve seen that the Google Cloud offers a 1-year trial (1 year instead of 3 months) with $300 of free credits, where you can be sure that your credit card will not be debited before you accept it. Of course I wanted to play with it, and I tested the IaaS instance types with the same method: SLOB.

Licensing No-Go

I said ‘play with it’ because you cannot consider Google Cloud as a platform for Oracle Database. Not because of the Google Cloud: it is in the top 3 cloud vendors with AWS and Azure. But just because Oracle doesn’t want you to:

  • It is not an ‘authorized cloud’ where Oracle accepts the virtual cores as a metric
  • It is not running on a hypervisor for which Oracle accepts the virtual cores as a metric
  • It is not the Oracle Cloud where Oracle accepts to count the virtual cores, and even apply the core factor

So, the sad truth is that if you want to run an Oracle Database on the Google Cloud, you may have to pay Oracle licenses to cover the whole physical infrastructure of the Google data center… This is clearly a No-Go for processor-metric licenses. You may think about NUP+ licensing, where the metric is not the processors but the number of users. This was possible for Standard Edition in 11g (and for 12.1.0.1, but this is out of support now). But with 12.1.0.2 they changed the Standard Edition rules and, even when the metric is the number of users, you have to count the number of servers. This is again a No-Go for a public cloud environment.

So let’s play, in the hope that one day the rules will change. For the moment, they think that this strategy will push the current Oracle Database users to the Oracle Cloud. They may realize one day that it increases the counterflow of users going to open source databases to get away from those arbitrary licensing rules.

Machine types

There are 3 types of processors available for Europe (datacenter in Brussels): Sandy Bridge, Ivy Bridge and Haswell. The regions are detailed in the regions-zones documentation and there is more detail in machine_types.

Ivy Bridge

Here I chose europe-west1-c and created a VM running RHEL7 on 4 vCPUs, Ivy Bridge:

[Screenshot: VM creation with 4 vCPUs on the Ivy Bridge platform]

Note that the price is the same for all kinds of processors within the same region.

SLOB

Here is cached SLOB (UPDATE_PCT=0 RUN_TIME=300 SCALE=80M WORK_UNIT=64), starting with one session:

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 1.0 30.3 0.00 4.74
DB CPU(s): 1.0 29.4 0.00 4.59
Redo size (bytes): 14,254.9 433,831.2
Logical read (blocks): 674,052.6 20,513,983.3

This is 674 kLIOPS per vCPU.

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 2.0 25.0 0.00 7.24
DB CPU(s): 2.0 24.6 0.00 7.11
Logical read (blocks): 1,377,553.4 17,267,173.0

This is 689 kLIOPS per vCPU. With two sessions, we are running on two cores.

Now, let’s see if those 4 vCPUs are threads or cores.

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 3.0 1.6 0.00 12.10
DB CPU(s): 2.9 1.6 0.00 11.93
Logical read (blocks): 1,493,775.4 815,084.0

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 4.0 54.6 0.00 13.80
DB CPU(s): 4.0 54.0 0.00 13.64
Logical read (blocks): 1,780,041.3 24,329,604.1

This is 498 kLIOPS per vCPU with 3 threads and 445 kLIOPS per vCPU with 4 threads. We are obviously running on 2 hyper-threaded cores, where hyper-threading gives only about 30% additional LIOPS.
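
(The arithmetic behind those figures, from the Logical read lines above: 1,493,775.4 / 3 ≈ 498k per second, and 1,780,041.3 / 4 ≈ 445k per second.)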

When going further, the performance will decrease:

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 5.0 48.5 0.00 15.81
DB CPU(s): 3.0 29.0 0.00 9.46
Logical read (blocks): 1,460,029.2 14,233,024.3

It is interesting to see that when running 5 sessions on 4 vCPUs, only 3 threads are actually used on average.

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 6.0 72.1 0.00 16.24
DB CPU(s): 3.0 36.5 0.00 8.22
Logical read (blocks): 1,458,749.6 17,651,628.4

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 7.0 72.5 0.00 20.42
DB CPU(s): 3.0 31.5 0.00 8.86
Logical read (blocks): 1,445,423.3 15,073,622.1

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 8.0 82.9 0.00 18.93
DB CPU(s): 3.0 31.4 0.00 7.17
Logical read (blocks): 1,436,355.2 14,986,038.9

Those 8 runs are visible on the CPU usage graph from the VM dashboard, where maximum CPU usage is reached when running 4 sessions on those 4 threads.

[Screenshot: VM dashboard CPU usage graph covering the 8 runs]

Here is the CPU description as shown by lscpu and /proc/cpuinfo, where we can see that we are virtualized with KVM:

[Screenshot: lscpu and /proc/cpuinfo output showing KVM virtualization]

For tty output I usually prefer to paste the text rather than a screenshot, but I wanted to show the ssh screen that you get in your browser with a simple click. This is very convenient. This really is easy and fast provisioning.

Haswell

Just to compare, here is the same run on the Haswell machine type.

[Screenshot: VM creation with the Haswell machine type]

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 1.0 25.4 0.00 5.53
DB CPU(s): 1.0 24.4 0.00 5.32
Logical read (blocks): 598,274.2 15,275,586.5
 
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 2.0 27.3 0.00 8.46
DB CPU(s): 2.0 26.8 0.00 8.29
Logical read (blocks): 1,155,681.9 15,814,666.6
 
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 3.0 34.7 0.00 12.34
DB CPU(s): 3.0 34.2 0.00 12.18
Logical read (blocks): 1,300,659.1 15,052,978.1
 
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 4.0 54.6 0.00 13.80
DB CPU(s): 3.9 53.8 0.00 13.61
Logical read (blocks): 1,541,843.0 21,098,158.6

Sandy Bridge

And here is the same run on the Sandy Bridge machine type.

[Screenshot: VM creation with the Sandy Bridge machine type]


Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 1.0 25.2 0.00 2.16
DB CPU(s): 1.0 25.0 0.00 2.14
Logical read (blocks): 668,393.8 16,935,651.8
 
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 2.0 23.1 0.00 11.13
DB CPU(s): 2.0 22.6 0.00 10.86
Logical read (blocks): 1,204,487.9 13,938,751.4
 
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 3.0 37.5 0.00 13.65
DB CPU(s): 3.0 36.9 0.00 13.43
Logical read (blocks): 1,383,602.0 17,334,975.8
 
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 4.0 50.0 0.00 16.23
DB CPU(s): 3.9 49.3 0.00 15.98
Logical read (blocks): 1,656,216.2 20,782,477.0

So what?

Whether you like it or not, you cannot realistically run Oracle Database on the Google Cloud, because the Oracle licensing rules are set to make life difficult for anyone running Oracle Database outside of the Oracle Cloud. As for performance, it is still the Oracle IaaS that shows the best results for this test. But of course, there are lots of other points to consider. You don’t run only the database; the application should be located in the same data center.

The Google Cloud Platform looks good and I’ll probably use my Google Cloud trial to test Spanner, and maybe the new PostgreSQL service. I love Oracle Database technology. But customers going to a public cloud will tend to migrate to database systems which can run on any virtualized environment without certification, support and licensing risks.

 

The article Google Cloud Platform instances and Oracle Database appeared first on Blog dbi services.

Creating real test data - Fast and Efficiently

When I built RANDOM_NINJA I already knew one of the things I wanted to use that library for. Building good, valid and life-like test data always has been, and still is, a really big problem. The base for good test data is good and valid-looking random data. Without that, most tests are not really valid: data can be clustered wrongly, indexes act strangely because the data does not mimic real life, and relations and constraints are hard to define or validate on data created using most available randomizing packages. That is why it was important to me that RANDOM_NINJA would be able to create random data from as many different domains as possible. As of today it can create more than 185 different data points across as many as 14 different data domains.

But having good random data is only half of it. You still need something that can define and create those tables. You also need something that can maintain the relations between those test tables, and make sure that foreign key distributions are realistic as well. So I created TESTDATA_NINJA. This package has some generic generators to create simple tables of people, populations which are true to UN demographics data, CDR records and credit card transactions. The real power of this package is the custom generation procedure. This procedure can parse a string representation of what your data looks like, and from that create pipelined functions that can produce thousands of test rows extremely fast and efficiently.

The testdata_ninja.generator_create procedure takes 2 arguments: the name of the new generator and the format of the test data. Below is a short description of what the format looks like.

Manually Adding Sales History, (SH) Schema to 11.2.0.4

Most people know I like to do things the hard way…

12cR2: TNS_ADMIN in env.ora

The network files (sqlnet.ora, tnsnames.ora, listener.ora) are read by default from ORACLE_HOME/network/admin, but you may have several Oracle installations and want only one place for those files. Then you can use the TNS_ADMIN environment variable. But are you sure that it is always set when starting the database? The listener? The client? They must be consistent (see https://blog.dbi-services.com/oracle-12cr2-plsql-new-feature-tnsping-from-the-database/). So what do we do for the cases where TNS_ADMIN is not set? We define symbolic links from the ORACLE_HOME to the common location. It would be better to just change the default location, and this is what can be done in 12.2 with env.ora.
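
As an aside, the post linked above uses the 12.2 DBMS_TNS package; here is a quick sketch to see what the database itself resolves for a given name, assuming you have execute privilege on the package:

select dbms_tns.resolve_tnsname('localhost') resolved from dual;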

By default, the $ORACLE_HOME/env.ora is empty: there are only comments.

If you run any Oracle 12.2 client, the $ORACLE_HOME/env.ora will be read. If nothing is set there, then the default $ORACLE_HOME/network/admin location is read.

$ORACLE_HOME/network/admin

[oracle@VM104 tmp]$ strace -e trace=file tnsping localhost 2>&1 | grep -E "[.]ora"
lstat("/u01/app/oracle/product/12.2.0/dbhome_1/env.ora", {st_mode=S_IFREG|0644, st_size=852, ...}) = 0
open("/u01/app/oracle/product/12.2.0/dbhome_1/env.ora", O_RDONLY) = 3
access("/u01/app/oracle/product/12.2.0/dbhome_1/network/admin/sqlnet.ora", F_OK) = 0
open("/u01/app/oracle/product/12.2.0/dbhome_1/network/admin/sqlnet.ora", O_RDONLY) = 3
access("/u01/app/oracle/product/12.2.0/dbhome_1/network/admin/sqlnet.ora", F_OK) = 0
open("/u01/app/oracle/product/12.2.0/dbhome_1/network/admin/sqlnet.ora", O_RDONLY) = 3
access("/home/oracle/.tnsnames.ora", F_OK) = -1 ENOENT (No such file or directory)
access("/etc/tnsnames.ora", F_OK) = -1 ENOENT (No such file or directory)
access("/u01/app/oracle/product/12.2.0/dbhome_1/network/admin/tnsnames.ora", F_OK) = -1 ENOENT (No such file or directory)

Here I have a sqlnet.ora but no tnsnames.ora, so the next locations searched are ~/.tnsnames.ora and /etc/tnsnames.ora.

TNS_ADMIN environment variable

If I set the environment variable TNS_ADMIN to /tmp, then:

[oracle@VM104 tmp]$ TNS_ADMIN=/tmp strace -e trace=file tnsping localhost 2>&1 | grep -E "[.]ora"
lstat("/u01/app/oracle/product/12.2.0/dbhome_1/env.ora", {st_mode=S_IFREG|0644, st_size=852, ...}) = 0
open("/u01/app/oracle/product/12.2.0/dbhome_1/env.ora", O_RDONLY) = 3
access("/tmp/sqlnet.ora", F_OK) = 0
open("/tmp/sqlnet.ora", O_RDONLY) = 3
access("/tmp/sqlnet.ora", F_OK) = 0
open("/tmp/sqlnet.ora", O_RDONLY) = 3
access("/home/oracle/.tnsnames.ora", F_OK) = -1 ENOENT (No such file or directory)
access("/tmp/tnsnames.ora", F_OK) = -1 ENOENT (No such file or directory)
access("/etc/tnsnames.ora", F_OK) = -1 ENOENT (No such file or directory)
access("/u01/app/oracle/product/12.2.0/dbhome_1/network/admin/tnsnames.ora", F_OK) = -1 ENOENT (No such file or directory)

The directory defined in TNS_ADMIN is searched first.

TNS_ADMIN in env.ora

I have added TNS_ADMIN=/tmp to the env.ora:

[oracle@VM104 tmp]$ tail -3 $ORACLE_HOME/env.ora
# Default: $ORACLE_HOME/network/admin
#
TNS_ADMIN=/tmp

When I run tnsping without setting any environment variable, I get exactly the same as before:


[oracle@VM104 tmp]$ strace -e trace=file tnsping localhost 2>&1 | grep -E "[.]ora"
lstat("/u01/app/oracle/product/12.2.0/dbhome_1/env.ora", {st_mode=S_IFREG|0644, st_size=867, ...}) = 0
open("/u01/app/oracle/product/12.2.0/dbhome_1/env.ora", O_RDONLY) = 3
access("/tmp/sqlnet.ora", F_OK) = 0
open("/tmp/sqlnet.ora", O_RDONLY) = 3
access("/tmp/sqlnet.ora", F_OK) = 0
open("/tmp/sqlnet.ora", O_RDONLY) = 3
access("/home/oracle/.tnsnames.ora", F_OK) = -1 ENOENT (No such file or directory)
access("/tmp/tnsnames.ora", F_OK) = -1 ENOENT (No such file or directory)
access("/etc/tnsnames.ora", F_OK) = -1 ENOENT (No such file or directory)
access("/u01/app/oracle/product/12.2.0/dbhome_1/network/admin/tnsnames.ora", F_OK) = -1 ENOENT (No such file or directory)

The good thing about it is that the setting is centralized for all binaries running from this ORACLE_HOME.

Both

However, the setting in the environment has priority over the one in env.ora:

[oracle@VM104 tmp]$ TNS_ADMIN=/var/tmp strace -e trace=file tnsping localhost 2>&1 | grep -E "[.]ora"
lstat("/u01/app/oracle/product/12.2.0/dbhome_1/env.ora", {st_mode=S_IFREG|0644, st_size=867, ...}) = 0
open("/u01/app/oracle/product/12.2.0/dbhome_1/env.ora", O_RDONLY) = 3
access("/var/tmp/sqlnet.ora", F_OK) = 0
open("/var/tmp/sqlnet.ora", O_RDONLY) = 3
access("/var/tmp/sqlnet.ora", F_OK) = 0
open("/var/tmp/sqlnet.ora", O_RDONLY) = 3
access("/home/oracle/.tnsnames.ora", F_OK) = -1 ENOENT (No such file or directory)
access("/var/tmp/tnsnames.ora", F_OK) = -1 ENOENT (No such file or directory)
access("/etc/tnsnames.ora", F_OK) = -1 ENOENT (No such file or directory)
access("/u01/app/oracle/product/12.2.0/dbhome_1/network/admin/tnsnames.ora", F_OK) = -1 ENOENT (No such file or directory)

So the recommendation, if you want to use the env.ora, is not to set TNS_ADMIN, especially when starting the listener or the database, to be sure that the same environment is always used. Final note: I’ve not seen this in the documentation, so if you rely on it for a critical environment, it is better to validate with support.

 

The article 12cR2: TNS_ADMIN in env.ora appeared first on Blog dbi services.