Oakies Blog Aggregator

Remote syslog from Linux and Solaris

Auditing operations with Oracle Database is very easy. The default configuration, where SYSDBA operations go to ‘audit_file_dest’ (the ‘adump’ directory) and other operations go to the database may be sufficient to log what is done but is definitely not a correct security audit method as both destinations can have their audit trail deleted by the DBA. If you want to secure your environment by auditing the most privileged accounts, you need to send the audit trail to another server.
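As a quick reminder, you can check where your instance currently sends its audit trail from SQL*Plus. A minimal sketch (the output obviously depends on your configuration, so it is omitted here):

SQL> -- audit_trail, audit_file_dest and audit_syslog_level control the audit destinations
SQL> show parameter audit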

This is easy as well, and here is a short demo involving Linux and Solaris as the audited environments. I've created these 3 compute services in the Oracle Cloud.

So, I have an Ubuntu service where I’ll run the Oracle Database (XE 11g) and the hostname is ‘ubuntu’

root@ubuntu:~# grep PRETTY /etc/os-release
PRETTY_NAME="Ubuntu 16.04.4 LTS"

I have a Solaris service which will also run Oracle, and the hostname is 'd17872'

root@d17872:~# cat /etc/release
Oracle Solaris 11.3 X86
Copyright (c) 1983, 2016, Oracle and/or its affiliates. All rights reserved.
Assembled 03 August 2016

I have an Oracle Enterprise Linux service which will be my audit server, collecting syslog messages from remote hosts. The hostname is 'b5e501' and the IP address in the PaaS network is 10.29.235.150

[root@b5e501 ~]# grep PRETTY /etc/os-release
PRETTY_NAME="Oracle Linux Server 7.5"

Testing local syslog

I start by making sure that syslog works correctly on my audit server:

[root@b5e501 ~]# jobs
[1]+ Running tail -f /var/log/messages &
[root@b5e501 ~]#
[root@b5e501 ~]# logger -p local1.info "hello from $HOSTNAME"
[root@b5e501 ~]# Jun 20 08:28:35 b5e501 bitnami: hello from b5e501

Remote setting

On the audit server, I uncomment the lines that enable syslog reception over TCP and UDP on port 514:

[root@b5e501 ~]# grep -iE "TCP|UDP" /etc/rsyslog.conf
# Provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514
# Provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 514
# Remote Logging (we use TCP for reliable delivery)

I restart the syslog service:

[root@b5e501 ~]# systemctl restart rsyslog
Jun 20 08:36:47 b5e501 systemd: Stopping System Logging Service...
Jun 20 08:36:47 b5e501 rsyslogd: [origin software="rsyslogd" swVersion="8.24.0" x-pid="2769" x-info="http://www.rsyslog.com"] exiting on signal 15.
Jun 20 08:36:47 b5e501 systemd: Starting System Logging Service...
Jun 20 08:36:47 b5e501 rsyslogd: [origin software="rsyslogd" swVersion="8.24.0" x-pid="2786" x-info="http://www.rsyslog.com"] start
Jun 20 08:36:47 b5e501 systemd: Started System Logging Service.

I tail the /var/log/messages (which is my default destination for “*.info;mail.none;authpriv.none;cron.none”)

[root@b5e501 ~]# tail -f /var/log/messages &
[root@b5e501 ~]# jobs
[1]+ Running tail -f /var/log/messages &
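For reference, the rule that routes those messages to /var/log/messages is the stock selector quoted above. On a default Oracle Linux 7 install it typically looks like this in /etc/rsyslog.conf (a sketch; check your own file):

# default catch-all rule in /etc/rsyslog.conf
*.info;mail.none;authpriv.none;cron.none                /var/log/messages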

I test with local1.info and check that the message is tailed even when logger sends it through the network:

[root@b5e501 ~]# logger -n localhost -P 514 -p local1.info "hello from $HOSTNAME"
Jun 20 09:18:07 localhost bitnami: hello from b5e501

That’s perfect.

Now I can test the same from my Ubuntu host to ensure that the firewall settings allow TCP and UDP traffic on port 514:


root@ubuntu:/tmp/Disk1# logger --udp -n 10.29.235.150 -P 514 -p local1.info "hello from $HOSTNAME in UDP"
root@ubuntu:/tmp/Disk1# logger --tcp -n 10.29.235.150 -P 514 -p local1.info "hello from $HOSTNAME in TCP"

Here are the correct messages received:

Jun 20 09:24:46 ubuntu bitnami hello from ubuntu in UDP
Jun 20 09:24:54 ubuntu bitnami hello from ubuntu in TCP

Destination setting for the audit

As I don't want all messages to end up in /var/log/messages, I now set up, on the audit server, a dedicated file for the 'local1' facility and 'info' level, which I'll use as my Oracle Database audit destination:

[root@b5e501 ~]# touch "/var/log/audit.log"
[root@b5e501 ~]# echo "local1.info /var/log/audit.log" >> /etc/rsyslog.conf
[root@b5e501 ~]# systemctl restart rsyslog

After testing the same two ‘logger’ commands from the remote host I check the entries in my new file:

[root@b5e501 ~]# cat /var/log/audit.log
Jun 20 09:55:09 ubuntu bitnami hello from ubuntu in UDP
Jun 20 09:55:16 ubuntu bitnami hello from ubuntu in TCP

Remote logging

Now that I have validated that remote syslog is working, I set up automatic forwarding of syslog messages on my Ubuntu box to send all 'local1.info' messages to the audit server:

root@ubuntu:/tmp/Disk1# echo "local1.info @10.29.235.150:514" >> /etc/rsyslog.conf
root@ubuntu:/tmp/Disk1# systemctl restart rsyslog

This, with a single '@', forwards over UDP. You can double the '@' to forward over TCP.
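For example, if I wanted reliable delivery instead, the same forwarding rule with a doubled '@' would read like this in /etc/rsyslog.conf (a sketch; same audit server and port):

# forward local1.info over TCP instead of UDP (note the doubled '@')
local1.info @@10.29.235.150:514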

Here I check with a local logger call (no mention of the syslog host here):

root@ubuntu:/tmp/Disk1# logger -p local1.info "hello from $HOSTNAME with forwarding"

and I verify that the message is logged on the audit server in /var/log/audit.log:

[root@b5e501 ~]# tail -1 /var/log/audit.log
Jun 20 12:00:25 ubuntu bitnami: hello from ubuntu with forwarding

Repeated messages

Note that when testing, you may want to add "$(date)" to your message in order to see it immediately, because syslog holds back repeated messages to avoid flooding. This:

root@ubuntu:/tmp/Disk1# logger -p local1.info "Always the same message"
root@ubuntu:/tmp/Disk1# logger -p local1.info "Always the same message"
root@ubuntu:/tmp/Disk1# logger -p local1.info "Always the same message"
root@ubuntu:/tmp/Disk1# logger -p local1.info "Always the same message"
root@ubuntu:/tmp/Disk1# logger -p local1.info "Always the same message"
root@ubuntu:/tmp/Disk1# logger -p local1.info "Always the same message"
root@ubuntu:/tmp/Disk1# logger -p local1.info "Then another one"

is logged as this:

Jun 20 12:43:12 ubuntu bitnami: message repeated 5 times: [ Always the same message]
Jun 20 12:43:29 ubuntu bitnami: Then another one
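If you want every test message to show up immediately while experimenting, making each message unique with "$(date)", as mentioned above, is enough (a sketch):

root@ubuntu:/tmp/Disk1# logger -p local1.info "test message sent at $(date)"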

I hope that one day Oracle will implement this idea for messages flooding the alert.log ;)

Oracle Instance

The last step is to get my Oracle instance to send audit messages to the local syslog with facility.level 'local1.info', so that they are automatically forwarded to my audit server. I have to set audit_syslog_level to 'local1.info' and audit_trail to 'OS':

oracle@ubuntu:~$ sqlplus / as sysdba
 
SQL*Plus: Release 11.2.0.2.0 Production on Wed Jun 20 11:48:00 2018
 
Copyright (c) 1982, 2011, Oracle. All rights reserved.
 
Connected to:
Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production
 
SQL> alter system set audit_syslog_level='local1.info' scope=spfile;
 
System altered.
 
SQL> alter system set audit_trail='OS' scope=spfile;
 
System altered.
 
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
 
Total System Global Area 1068937216 bytes
Fixed Size 2233344 bytes
Variable Size 616565760 bytes
Database Buffers 444596224 bytes
Redo Buffers 5541888 bytes
Database mounted.
Database opened.

It is very easy to check that it works, as the SYSDBA connections and the STARTUP are automatically audited. Here is what I can see in /var/log/audit.log on my audit server:

[root@b5e501 ~]# tail -f /var/log/audit.log
Jun 20 11:55:47 ubuntu Oracle Audit[27066]: LENGTH : '155' ACTION :[7] 'STARTUP' DATABASE USER:[1] '/' PRIVILEGE :[4] 'NONE' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[13] 'Not Available' STATUS:[1] '0' DBID:[0] ''
Jun 20 11:55:47 ubuntu Oracle Audit[27239]: LENGTH : '148' ACTION :[7] 'CONNECT' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSDBA' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[5] 'pts/0' STATUS:[1] '0' DBID:[0] ''
Jun 20 11:55:51 ubuntu Oracle Audit[27419]: LENGTH : '159' ACTION :[7] 'CONNECT' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSDBA' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[5] 'pts/0' STATUS:[1] '0' DBID:[10] '2860420539'

On the database server, no new audit files have been written to the adump directory since this startup:

oracle@ubuntu:~/admin/XE/adump$ /bin/ls -alrt
total 84
drwxr-x--- 6 oracle dba 4096 Jun 20 11:42 ..
-rw-r----- 1 oracle dba 699 Jun 20 11:44 xe_ora_26487_1.aud
-rw-r----- 1 oracle dba 694 Jun 20 11:44 xe_ora_26515_1.aud
-rw-r----- 1 oracle dba 694 Jun 20 11:44 xe_ora_26519_1.aud
-rw-r----- 1 oracle dba 694 Jun 20 11:44 xe_ora_26523_1.aud
drwxr-x--- 2 oracle dba 4096 Jun 20 11:48 .
-rw-r----- 1 oracle dba 896 Jun 20 11:48 xe_ora_26574_1.aud

Solaris

I have also started a Solaris service:

opc@d17872:~$ pfexec su -
Password: solaris_opc
su: Password for user 'root' has expired
New Password: Cl0udP01nts
Re-enter new Password: Cl0udP01nts
su: password successfully changed for root
Oracle Corporation SunOS 5.11 11.3 June 2017
You have new mail.
root@d17872:~#

Here, I add the forwarding to /etc/syslog.conf (tab is a required separator which cannot be replaced with spaces) and restart the syslog service:

root@d17872:~# echo "local1.info\t@10.29.235.150" >> /etc/syslog.conf
root@d17872:~# svcadm restart system-log

Then I log a message locally:

root@d17872:~# logger -p local1.info "hello from $HOSTNAME with forwarding"

Here is the message received on the audit server:

[root@b5e501 ~]# tail -f /var/log/audit.log
Jun 20 05:27:51 d17872.compute-a511644.oraclecloud.internal opc: [ID 702911 local1.info] hello from d17872 with forwarding

Here on Solaris I have the old 'syslog' daemon, which has no syntax to change the UDP port. The default port is defined in /etc/services, and it is the one I've configured my audit server to listen on:

root@d17872:~# grep 514 /etc/services
shell 514/tcp cmd # no passwords used
syslog 514/udp

If you want more features, you can install syslog-ng or rsyslog on Solaris.
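With rsyslog installed on Solaris, for example, the forwarding rule can carry a non-default port just as on Linux. A sketch (port 10514 is only an example, and the rsyslog package name and availability depend on your Solaris repository):

# /etc/rsyslog.conf on Solaris, forwarding local1.info over UDP to a non-default port
local1.info @10.29.235.150:10514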

 

The article Remote syslog from Linux and Solaris first appeared on Blog dbi services.

The death of UTL_FILE

In a previous post I covered a technique to improve the performance of UTL_FILE, but concluded the post with a teaser: “you probably don’t need to use UTL_FILE ever again”.


Time for me to back that statement up with some concrete evidence.

UTL_FILE can read and write files. This blog post will cover the writing functionality of UTL_FILE and why I think you probably don’t need UTL_FILE for this. I’ll come back to UTL_FILE to read files in a future post.

There are two possibilities when it comes to writing a file:

  • The file is being requested by a client program and the results should be stored on a client machine. In that case, you do not use UTL_FILE anyway – you use the tools available on the client, for example, SQL Developer, SQLcl, SQL*Plus or a myriad of other tools, OR
  • the file is to be written by the database server to a location on the database server itself, or to a location that is accessible to the database server. This is where UTL_FILE has been used, the main motivations being that it can be part of existing database-resident PL/SQL code, and does not require clients to have direct SQL or OS level access to the database server.

So here is my assertion – most of the time, UTL_FILE is used to write out a file because we can't make use of an easier tool like those mentioned above directly on the database server. After all, who wants to write code like:


SQL> declare
  2    f utl_file.file_type;
  3    line varchar2(255);
  4    s timestamp;
  5  begin
  6    f := utl_file.fopen('TMP','demo.dat','w');
  7    s := systimestamp;
  8    for i in ( select * from t )
  9    loop
 10      line := i.object_id||','||i.object_name||','||i.object_type||','||i.data_object_id;
 11      utl_file.put_line(f,line);
 12    end loop;
 13    dbms_output.put_line(systimestamp-s);
 14    utl_file.fclose_all;
 15  end;
 16  /

PL/SQL procedure successfully completed.

when, most of the time, if I was doing this in a tool it would simply be:


SQL> set markup csv on
SQL> spool c:\tmp\demo.dat
SQL> select * from t;
SQL> spool off

But in recent versions of the database, you can do exactly this! The database scheduler has been enhanced to be able to run SQL*Plus style scripts directly out of the database, without needing to give OS access to the database server or to the SQL*Plus executable. In the previous post I unloaded 40 million rows to a file in CSV format, and the responsibility for formatting the data into CSV fell to me – I had to do all of the "heavy lifting". Using the scheduler and the SQL_SCRIPT job type, it is as simple as writing a SQL*Plus script and submitting it as a job.


SQL> declare
  2    l_script   VARCHAR2(32767) :=
  3  'conn /@db
  4  set markup csv on
  5  set arraysize 500
  6  set pages 0
  7  set lines 200
  8  set trimspool on
  9  spool c:\tmp\demo_sched.dat
 10  select * from t;
 11  spool off';
 12  begin
 13    dbms_scheduler.create_job(
 14      job_name        => 'UNLOAD_DATA',
 15      job_type        => 'SQL_SCRIPT',
 16      job_action      => l_script,
 17      credential_name => 'MY_ACCOUNT',
 18      enabled         => true
 19    );
 20  end;
 21  /

PL/SQL procedure successfully completed.

You will want to ensure that you have some controls over the usage of this feature, and over the credentials the scripts run under. Also, in my example I've got a connection wallet set up so that I do not have to code any passwords into the connection string for my scheduler job. But suddenly it has become easy to get access to the scripting tools we are used to on our own client machines, and utilize them on the database server.
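For reference, the MY_ACCOUNT credential referenced in the job can be created up front with DBMS_CREDENTIAL. A sketch only; the username and password here are placeholders for an OS account on the database server:

SQL> begin
  2    dbms_credential.create_credential(
  3      credential_name => 'MY_ACCOUNT',
  4      username        => 'oracle',
  5      password        => 'os_password_here'  -- placeholder, not a real password
  6    );
  7  end;
  8  /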

“Slow down there cowboy…” I hear you exclaim. “…That might be fine for simple SELECT * scripts, but my UTL_FILE procedures have a lot of complex logic to construct the file data”.

Even if you are using UTL_FILE because you are performing some complicated algorithms to generate the data that will be written to file, you can still utilize this scheduler facility. After your complicated logic is completed, you will typically end up with a line of data that needs to be written to a file. And if you have a line of data, then it is trivial to port that procedure to a pipelined function. And once you have a pipelined function, we have a simple query mechanism that can be used to spool the output. For example, if my original procedure is:


SQL> create or replace
  2  procedure my_procedure is
  3  begin
  4    for each_row in ( [some set] )
  5    loop
  6    ...
  7    ... my complex logic
  8    ...
  9
 10    ...
 11    ... writing each line to file with UTL_FILE
 12    ...
 13    end loop;
 14  end;
 15  /

then we change none of the logic – we only need to replace all of that messy UTL_FILE code with a simple PIPE ROW call, which allows querying that function as if it were a source of rows, and then spool it to a file in the usual way.


SQL> create or replace
  2  function my_function return sys.odcivarchar2list pipelined is
  3  begin
  4    for each_row in ( [some set] )
  5    loop
  6    ...
  7    ... my complex logic
  8    ...
  9
 10    ...
 11    ... pipe row ( the_line );
 12    ...
 13    end loop;
 14    return;
 15  end;
 16  /

SQL> spool c:\tmp\demo.dat
SQL> select * from my_function();
SQL> spool off

So next time you’re cranking out some cumbersome UTL_FILE code to write a file, take a moment to see if the scheduler can look after some of the work for you. I’ll talk about (not) using UTL_FILE to read files in a future post.

When WHEN went faster

Yeah…try saying that blog post title 10 times in a row as fast as you can :)

But since we're talking about doing things fast, this is just a quick post about a conversation I had on Twitter yesterday about the WHEN clause in a trigger.

 


That is an easy benchmark to whip up – I just need a couple of tables, each with a simple trigger, differing only in their usage of the WHEN clause. Here is my setup:


SQL> create table t1 ( x int );

Table created.

SQL> create table t2 ( x int );

Table created.

SQL> create or replace
  2  trigger TRG1
  3  before insert on t1
  4  for each row
  5  begin
  6    if sys_context('USERENV','SID') = 0 then
  7       :new.x := 0;
  8    end if;
  9  end;
 10  /

Trigger created.

SQL> create or replace
  2  trigger TRG2
  3  before insert on t2
  4  for each row
  5  when ( sys_context('USERENV','SID') = 0 )
  6  begin
  7     :new.x := 0;
  8  end;
  9  /

Trigger created.

I'm using an always-false condition so the trigger body will never actually do anything; only the WHEN clause or the IF test is evaluated. Now I'm ready to slam a bunch of inserts into each table and measure the performance.


SQL> set timing on
SQL> insert into t1
  2  select 1 from dual
  3  connect by level <= 1000000;

1000000 rows created.

Elapsed: 00:00:02.52

SQL> insert into t2
  2  select 1 from dual
  3  connect by level <= 1000000;

1000000 rows created.

Elapsed: 00:00:00.41

There is an immediate urge here to go screaming from the rooftops that WHEN is the all-conquering hero of this benchmark. And yes, it is indeed faster, but make sure you keep the results in perspective. We gained back about 2 seconds for 1 million inserts. For me, once I'm inserting millions of rows into a table, I start to think not about trigger performance but about whether this is the kind of table I want to have triggers on at all.

Power BI and the Speed(ier) Desktop

I can be an extremely impatient person about anything I think should be faster.

I'm diving in deep with Power BI, and for most of the early lessons the data sources used are Excel and…ahem…Microsoft Access. I don't know a DBA alive who enjoys working with Access. It's great for the common user to have a database application, but we can barely use the words "Access" and "Database" in the same sentence. In my heart, I will always be a performance person, and working with Power BI Desktop and Microsoft Access is enough to make you want to throw your PC out the window, especially when talking data sets of 10 million rows or more. I felt there had to be at least some way to help speed up the performance of Power BI when using this combination.

Now, in a real-life scenario, the first recommendation would be to filter the data set down so that it doesn't put so much pressure on Power BI Desktop and result in poor performance. I was offered some great links that covered that, along with other best practices, and I'll link them here, as the Power BI community offered up some AWESOME responses to my quest for answers:

Melissa Coates has a great Check List for Finalizing a Power BI Data Model post

Meagan Longoria sent me the Power BI Performance Best Practices link for Microsoft, which is filled with great information that everyone who works with Power BI should know.

Ginger Grant tweeted out the Marco Russo webinar, My Power BI Report is Slow.

As for me, I had the added challenge of working with the edX lessons, where not much of the above is an option: successful completion of the labs relies on entering the correct counts from the results after working through the required data sets. If you filtered the data or optimized the data model, the counts would be off and you WOULD FAIL.

What's a girl to do to get through this without pulling her hair out or feeling that the quality of the experience was impacted? I can't be the only one who felt this way, and I know how users react when these types of situations happen. I'm a DBA, and in the database world, no matter who the performance culprit is, the database is guilty until proven innocent. In this new world, it is Power BI's credibility that takes the hit for new users who are just starting to learn about this powerful tool, when the goal is to empower the user.

I searched Google for some best practices, but most of them centered on how to model the data more effectively rather than on working with the products. It demonstrates why performance specialists from all areas are essential to creating solutions and why NOTHING should be off the table.

OK, so if I'm working from scratch, this is the time to test out my own ideas, and if I fall flat on my face, so be it.

Juicing up UTL_FILE

Think about your rubbish bin for a second. Because, clearly, this is going to be an oh-so-obvious metaphor leading into UTL_FILE, right? OK, maybe a little explanation is needed. I have a basket next to my desk into which I throw any waste paper. It is where I throw my stupid ideas and broken dreams :)


Melancholy aside, once it is full I’ll take it out and empty it into the main recycling bin that is collected once a fortnight from my street front.


Metaphor make sense now? Still No? OK, let me bring it around to the database. When you make changes to data in the database, most of the time these changes are only made to data blocks in memory. Obviously we have to keep a permanent record of the changes in the redo log to ensure that database recovery is possible, but the changed data blocks themselves are only dumped out to disk on an intermittent basis. That way, the process of changing data in the database is nice and efficient.

That is similar to what I do with the waste paper basket. I don't take each piece of paper out to the street front recycling bin; I wait until the waste paper basket is full and then just do the one trip. It is more efficient. (This is hopefully your light bulb moment on my metaphor skills :) )

So back to UTL_FILE. You can take the same approach when you need to unload some data using UTL_FILE. There is an inherent overhead with every call you make to write out a line of output to a flat file using UTL_FILE. Hence, if you can minimize the number of calls you make, you’ll see some benefits.

Here’s an example of that in action. I’m building a simple CSV file based on a table called T which contains approximately 40 million rows.  Here is my first cut at the solution which I’d wager most people have written in their IT lives. We simply loop through each record and write it out to the file:


SQL> set serverout on
SQL> declare
  2    f utl_file.file_type;
  3    line varchar2(255);
  4    s timestamp;
  5  begin
  6    f := utl_file.fopen('TMP','demo.dat','w');
  7    s := systimestamp;
  8    for i in ( select * from t )
  9    loop
 10      line := i.object_id||','||i.object_name||','||i.object_type||','||i.data_object_id;
 11      utl_file.put_line(f,line);
 12    end loop;
 13    dbms_output.put_line(systimestamp-s);
 14    utl_file.fclose_all;
 15  end;
 16  /
+000000000 00:04:07.368000000

PL/SQL procedure successfully completed.

I'd contend that four minutes for 40 million records is pretty good, but maybe we want to go faster. Based on my metaphor, it might not be immediately apparent how I could perform fewer UTL_FILE calls. Surely I need to call it once per line of data written? But don't forget that a "line" in a file is merely the data terminated with an end-of-line pattern specific to your platform. It could be ASCII 10 or an ASCII 13,10 pair, but either way, it is really just an extension of the data you need to write to the file. A "line" is merely our interpretation of that data, where that pattern means move to the next line.

So rather than call UTL_FILE for each line, I’ll build up multiple lines and then write them out with a single UTL_FILE.PUT_LINE call. I’m choosing a cap of around 300 lines to ensure that my concatenation does not exceed the maximum size of a VARCHAR2, but as a result I should cut down on the number of UTL_FILE calls by a factor of 300.


SQL> set serverout on
SQL> declare
  2    f utl_file.file_type;
  3    line varchar2(255);
  4    big_line varchar2(32767);
  5    s timestamp;
  6    cnt pls_integer := 0;
  7  begin
  8    f := utl_file.fopen('TMP','demo2.dat','w',32767);
  9    s := systimestamp;
 10    for i in ( select * from t )
 11    loop
 12      line := i.object_id||','||i.object_name||','||i.object_type||','||i.data_object_id;
 13      cnt := cnt + 1;
 14      if cnt < 300 then
 15         big_line := big_line || line || chr(10);
 16      else
 17         utl_file.put_line(f,big_line||line);
 18         cnt := 0;
 19         big_line := null;
 20      end if;
 21    end loop;
 22    utl_file.put_line(f,big_line);
 23    dbms_output.put_line(systimestamp-s);
 24    utl_file.fclose_all;
 25  end;
 26  /
+000000000 00:02:23.297000000

PL/SQL procedure successfully completed.

That's pretty cool. We reclaimed around 100 seconds of elapsed time just by reducing the number of UTL_FILE.PUT_LINE calls. In this particular instance, that's about 40%, but obviously your mileage may vary based on a myriad of factors – so make sure you do your own benchmarking on your own systems.

As you can see, with just a couple of lines of extra code, we can really make UTL_FILE sing from a performance perspective. Having said that, in an upcoming blog post I’ll make a bold assertion – that you probably don’t need to use UTL_FILE ever again!  Stay tuned for that one.

ADWC new OCI interface

A few things have changed about the Autonomous Data Warehouse Cloud service recently, and I've found the communication not so clear, so here is a short post about what I had to do to start the service again. The service has always been in the OCI data centers but was managed with the classic management interface. It has recently been migrated to the new interface.
Note that ADWC here is the name I've given to my service. It seems that the Autonomous Data Warehouse Cloud Service is now referred to by the acronym ADW.

The service itself did not have any outage; the migration concerns only the interface. However, once the migration is done, you cannot use the old interface. I went to the old interface with the URL I had bookmarked, tried to start the service, and got a 'last activity START_SERVICE failed' error message with no additional detail.

You can forget the old bookmark (such as https://psm-tenant.console.oraclecloud.com/psmui/faces/paasRunner.jspx?serviceType=ADWC) and you now have to use the new one (such as https://console.us-ashburn-1.oraclecloud.com/a/db/adws/ocid1.autonomousdwdatabase.oc1.iad.al-long-IAD-identifier)

So I logged in to the console at https://console.us-ashburn-1.oraclecloud.com (my service is in the Ashburn-1 region). There I provided the tenant name (which was the cloud account in the old interface), which can also be provided in the URL as https://console.us-ashburn-1.oraclecloud.com/?tenant=tenant. I selected oracleidentitycloudservice as the 'identity provider', entered my username and password, and I am on the OCI console.

From the top-left menu, I can go to Autonomous Data Warehouse. I see nothing until I choose the compartment in the 'list scope'. The ADWC service I had created with the old interface is in the 'tenant (root)' compartment. Here I can start the service.

The previous PSM command line interface cannot be used anymore. We need to install the OCI CLI:

$ bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/...)"

You will need the Tenancy ID (Tenancy OCID: ocid1.tenancy.oc1..aaaaaaaa… which you find at the bottom of each page in the console) and the User ID (User OCID: ocid1.user.oc1..aaaaaaa… which you find in the 'Users' menu). All those OCIDs are documented at https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm
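Once the CLI is installed, 'oci setup config' prompts you for those OCIDs and generates the API signing key. Starting the service then looks something like this; a sketch only, as the exact subcommand and option names depend on your CLI version (check 'oci db --help'):

$ oci setup config
$ oci db autonomous-data-warehouse start \
    --autonomous-data-warehouse-id <OCID of the ADWC database, copied from the console>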

If you used the REST API, note that the endpoints change completely. You will have to POST to something like:

/20160918/autonomousDataWarehouses/ocid1.autonomousdwdatabase.oc1.iad.abuwcljrb.../actions/start

where the OCID is the database one that you can copy from the console.

 

The article ADWC new OCI interface first appeared on Blog dbi services.

The Priority 600 Pinion Gearbox Bike

My friends at Priority Bicycles have released a new model bicycle with a German-made gearbox that is sealed against the elements and is centered on the bike for optimal weight distribution. While it's the gearbox that is the marquee feature on this new bike, it is the wide tires that fire my soul. I want to share where those tires can take you.

Full disclosure: I am a friend of Priority Bicycles, and was in the loop on the design and testing of the Priority 600 All Road bicycle. I am not a disinterested party.

Pavement is easy. Priority has specified WTB Horizon road-plus tires on the bike. These are 47 mm wide tires that you can run at low pressures for comfort over rough surfaces. Priority did not skimp here! You are not getting a cheap OEM version of the Horizon tires. Priority spec'd the full-on, high thread count enthusiast version that is tubeless ready, and mounted on tubeless rims. 

Priority 600 Pavement.jpg

 

 

 

Rough. Smooth. Doesn't matter. You can't go wrong with the Horizon tires on pavement.

Gravel is the sweet spot. I can't say enough about how much fun it is to head out on the Horizon tires and just follow my nose down whatever interesting road -- paved or unpaved -- presents itself. Run the tires at 30 psi. Adjust up or down to suit your taste. These tires feel planted and secure, and comfortable!

Priority 600 Gravel.jpg

The Horizon tires are planted and secure on gravel

Doubletrack's good too! Because even better than gravel roads are the endless miles of leftover logging doubletrack and ATV trails in the Hiawatha National Forest. One of my favorite rides for exercise is a mix of gravel road to ATV trail to pavement, and back to the parking lot. The WTB Horizon tires take all those surfaces in stride.

Priority 600 Big Gravel.jpg

Doubletrack? Big Gravel? It's all good!

Singletrack's possible. Just make it smooth singletrack. I don't recommend slamming into rocks and roots, and for sure don't be hitting jumps and grabbing air. But go easy, and the Horizon tires have let me push the envelope to include easy singletrack into my weekend afternoon rides.

Priority 600 Singletrack.jpg

Smooth Singletrack? Sweet Solitude.

Boardwalks and other unusual surfaces are easy, with the wide tires providing stability and grip. I don't often get the opportunity to ride long distances on boards, but last summer on vacation with my wife I had fun circumnavigating a lake on a trail that included over two miles of boardwalk. 

Does it get any better than this?

And now for the crazy stuff! What follows is not recommended(!), but I'm an enthusiast and have pushed the bike -- willing to risk that either the bike or myself might break in the process -- beyond its intended design parameters. This next photo shows the first prototype of the Priority 600 All Road just as I'm entering a rough patch of mountain bike trail in Manhattan's Highbridge Park. 

Priority 600 On The Rocks.jpg

The Priority 600 prototype handled this rough section. Skill and experience are needed! Surfaces like these are outside the design parameters and bring risk of damage and personal injury.

I love the bike! I've ridden the WTB Horizon tires for over a year now. I love where they can take me on my Priority bicycles. I love how the wide tires help me link together random trails and roads, bringing me to scenic places and helping me connect with nature and recoup from a tough day in the office. The Pinion gearbox on the Priority 600 makes the experience even better by centering the weight on the bike and adds tremendous gearing range for getting up (and down!) steep terrain. 

Priority 600 Smiling.jpg

The Priority Smile

p.s., Ryan Van Duzer's video review of the Priority 600 is excellent.

The simplest things….can be risky

Java and Oracle expert Lukas Eder tweeted yesterday about a potential optimization that could be done when reviewing database SQL code.


This looks to be a logical thing to do: rather than running two separate UPDATE statements against the same table, combine them into one. Why scan the table T twice to perform an update when the same job could be done in a single pass? The benefits seem obvious:

  • less I/O work
  • less time the data is spent locked
  • less risk of an error between the two operations

so don’t get me wrong – the consolidation is going to be a good thing in the majority of cases

And therein lies the rub – the "majority" of cases is not the same as "all" cases, and that is why I don't think a tool should ever automatically perform this change. I'd be cool with a tool making a recommendation, but let's see why you cannot just assume that the consolidation is correct.

Here's our table with a single row and a single business rule implemented with a check constraint:



SQL> create table t ( a int, b int );

Table created.

SQL> alter table t add constraint chk check ( a < b ) ;

Table altered.

SQL> insert into t values (1,2);

1 row created.

SQL> commit;

Commit complete.

Now I’ll implement the application in the original “unoptimized” way:


SQL> update t set a = a + 1;
update t set a = a + 1
*
ERROR at line 1:
ORA-02290: check constraint (MCDONAC.CHK) violated


SQL> update t set b = b*2;

1 row updated.

You can see that the first update failed – it violated the check constraint. Of course, it is not definitively clear whether this should be the case based on the business requirements, because I haven't elaborated on whether these two updates should be two transactions or a single transaction. The correctness is not really the point I'm trying to make here; the point is that if I now choose to consolidate the updates, I end up with different application behaviour.

I'll roll back the change above and repeat the experiment using the consolidated update:


SQL> roll;
Rollback complete.
SQL> update t set a = a + 1, b = b*2;

1 row updated.

This time the update completes successfully. If a tool had automatically done this, then I would get different behaviour in my application. That might be a good thing… it might not be. I could eliminate the difference by making the constraint deferrable, but we're starting to depart even further from the existing implementation of the application code, which means more scrutiny and more regression testing.
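For completeness, the deferrable variant of the constraint would look something like this (a sketch; with the check deferred to commit time, the two separate updates from the first test would both succeed, just like the consolidated one, but whether that matches the business rule is exactly the question that needs answering):

SQL> alter table t drop constraint chk;

Table altered.

SQL> alter table t add constraint chk check ( a < b ) deferrable initially deferred;

Table altered.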

So by all means, explore opportunities to improve the performance of your SQL by re-arranging it, consolidating it, and aiming to get more done with less work. But be careful that you do not unknowingly change the way your application works when you do so.

 

Oracle Can Generate 6 Password Hashes When a User is Added or Password Changed in 12.1.0.2 and Above

In a 12.2.0.2 database it's possible that Oracle generates 6 different password hashes for one password for one user under certain circumstances when a password is changed or created (i.e. the user is created). I will lay out the 6 different ones first....[Read More]
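If you want to look at what was actually generated for an account, the stored verifiers are visible in SYS.USER$. A minimal sketch (the user name DEMO is just a placeholder; as a rough guide, PASSWORD holds the old DES-based hash when it is generated, while SPARE4 holds the newer S:/H:/T: verifiers):

SQL> -- DEMO is a placeholder user; PASSWORD and SPARE4 hold the stored verifiers
SQL> select name, password, spare4
  2  from   sys.user$
  3  where  name = 'DEMO';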

Posted by Pete On 13/06/18 At 09:02 PM

#Exasol’s annual conference in Berlin #XP18

We have had our annual conference Xperience in Berlin again – it was an amazing event with a packed agenda full of customer success stories and technical news. This year at the Umspannwerk.

Umspannwerk Berlin

My assumption is that we will need a larger location next year because of our constant growth – it was a bit tight already this time.