Oakies Blog Aggregator

When WHEN went faster

Yeah…try saying that blog post title 10 times in a row as fast as you can :)

But since we’re talking about doing things fast, this is just a quick post about a conversation I had on Twitter yesterday about the WHEN clause in a trigger.

 

(The tweet asked whether putting a condition in a trigger’s WHEN clause is faster than testing the same condition with an IF inside the trigger body.)

That is an easy benchmark to whip up – I just need a couple of tables, each with a simple trigger, the two differing only in their usage of the WHEN clause. Here is my setup:


SQL> create table t1 ( x int );

Table created.

SQL> create table t2 ( x int );

Table created.

SQL> create or replace
  2  trigger TRG1
  3  before insert on t1
  4  for each row
  5  begin
  6    if sys_context('USERENV','SID') = 0 then
  7       :new.x := 0;
  8    end if;
  9  end;
 10  /

Trigger created.

SQL> create or replace
  2  trigger TRG2
  3  before insert on t2
  4  for each row
  5  when ( sys_context('USERENV','SID') = 0 )
  6  begin
  7     :new.x := 0;
  8  end;
  9  /

Trigger created.

I’m using an always-false condition so that the real work of the trigger is never performed – all that ever runs is the WHEN clause on one table and the IF test on the other. Now I’m ready to slam a bunch of inserts into each table and measure the performance.


SQL> set timing on
SQL> insert into t1
  2  select 1 from dual
  3  connect by level <= 1000000;

1000000 rows created.

Elapsed: 00:00:02.52

SQL> insert into t2
  2  select 1 from dual
  3  connect by level <= 1000000;

1000000 rows created.

Elapsed: 00:00:00.41

There is an immediate urge here to go screaming from the rooftops that WHEN is the all-conquering hero of this benchmark. And yes, it is indeed faster – when the condition is false, the WHEN clause means the PL/SQL trigger body is never invoked at all, so the per-row context switch to PL/SQL disappears – but make sure you keep the results in perspective. We gained back about 2 seconds for 1 million inserts. For me, once I’m inserting millions of rows into a table I’m starting to think not about trigger performance but about whether this is the kind of table I want to be having triggers on at all.
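If you want to put even that faster number in perspective, an easy extension of the benchmark – my own addition here, not part of the original test – is to load a third, trigger-free table as a baseline:


SQL> create table t0 ( x int );

Table created.

SQL> insert into t0
  2  select 1 from dual
  3  connect by level <= 1000000;

1000000 rows created.

Whatever elapsed time you see for that insert on your system, the gap between it and the WHEN-clause figure is the true cost of having the trigger there at all.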

Power BI and the Speed(ier) Desktop

I can be an extremely impatient person about anything I think should be faster.

I’m diving in deep with Power BI and for most of the early lessons, the data sources used are Excel and…ahem…Microsoft Access.  I don’t know a DBA alive who enjoys working with Access.  It’s great for the common user to have a database application, but we can barely use the words “Access” and “database” in the same sentence.  In my heart, I will always be a performance person, and working with Power BI Desktop against Microsoft Access is enough to make you want to throw your PC out the window, especially when talking data sets of 10 million rows or more.  I felt there had to be at least some way to help speed up the performance of Power BI when using this combination.

Now in a real-life scenario, the first recommendation would be to filter the data set down so that it doesn’t put so much pressure on the desktop, which is what leads to the poor performance.  I was offered some great links covering that, along with other best practices, and I’ll share them here, as the Power BI community offered up some AWESOME responses to my quest for answers:

Melissa Coates has a great Check List for Finalizing a Power BI Data Model post

Meagan Longoria sent me the Power BI Performance Best Practices link from Microsoft, which is filled with great information that everyone who works with Power BI should know.

Ginger Grant tweeted out the Marco Russo webinar,  My Power BI Report is Slow.

As for me, I had the added challenge of working with the edX lessons, where not much of the above is an option: successful completion of the labs relies on entering the correct row counts after doing the lab work with the required data sets. If you filtered the data or optimized the data model, the counts would be off and you WOULD FAIL.

What’s a girl to do to get through this without pulling her hair out, and without feeling that the quality of the experience was impacted?  I can’t be the only one who felt this way, and I know how users react when these types of situations happen.  I’m a DBA, and in the database world, no matter who the performance culprit is, the database is guilty until proven innocent.  In this new world, it is Power BI’s credibility that takes the hit with new users who are just starting to learn about this powerful tool, when the goal is to empower the user.

I searched Google for some best practices, but most of what I found centered on how to model the data more effectively rather than on how to work with the products themselves. It demonstrates why performance specialists from all areas are essential to creating solutions and why NOTHING should be off the table.

OK, so if I’m working from scratch, this is the time to test out my own ideas, and if I fall flat on my face, so be it.

Juicing up UTL_FILE

Think about your rubbish bin for a second. Because clearly this is going to be an oh-so-obvious metaphor leading into UTL_FILE, right?  OK, maybe a little explanation is needed. I have a basket next to my desk into which I throw any waste paper. It is where I throw my stupid ideas and broken dreams :)


Melancholy aside, once it is full I’ll take it out and empty it into the main recycling bin that is collected once a fortnight from my street front.


Metaphor make sense now? Still No? OK, let me bring it around to the database. When you make changes to data in the database, most of the time these changes are only made to data blocks in memory. Obviously we have to keep a permanent record of the changes in the redo log to ensure that database recovery is possible, but the changed data blocks themselves are only dumped out to disk on an intermittent basis. That way, the process of changing data in the database is nice and efficient.

That is similar to what I do with the waste paper basket. I don’t take each piece of paper out to the street front recycling bin; I wait until the waste paper basket is full and then just do the one trip. It is more efficient. (This is hopefully your light bulb moment on my metaphor skills Smile)

So back to UTL_FILE. You can take the same approach when you need to unload some data using UTL_FILE. There is an inherent overhead with every call you make to write out a line of output to a flat file using UTL_FILE. Hence, if you can minimize the number of calls you make, you’ll see some benefits.

Here’s an example of that in action. I’m building a simple CSV file based on a table called T, which contains approximately 40 million rows.  Here is my first cut at the solution, one which I’d wager most people have written at some point in their IT lives – we simply loop through each record and write it out to the file:


SQL> set serverout on
SQL> declare
  2    f utl_file.file_type;
  3    line varchar2(255);
  4    s timestamp;
  5  begin
  6    f := utl_file.fopen('TMP','demo.dat','w');
  7    s := systimestamp;
  8    for i in ( select * from t )
  9    loop
 10      line := i.object_id||','||i.object_name||','||i.object_type||','||i.data_object_id;
 11      utl_file.put_line(f,line);
 12    end loop;
 13    dbms_output.put_line(systimestamp-s);
 14    utl_file.fclose_all;
 15  end;
 16  /
+000000000 00:04:07.368000000

PL/SQL procedure successfully completed.

I’d contend that four minutes for 40 million records is pretty good, but maybe we want to go faster.  Based on my metaphor it might not be immediately apparent how I could make fewer UTL_FILE calls – surely I need to call it once per line of data written? But don’t forget that a “line” in a file is merely the data terminated with an end-of-line pattern specific to your platform. It could be ASCII 10 or an ASCII 13,10 pair, but either way it is really just an extension of the data you need to write to the file. A “line” is merely our interpretation of that data, where that pattern means “move to the next line”.

So rather than call UTL_FILE for each line, I’ll build up multiple lines and then write them out with a single UTL_FILE.PUT_LINE call. I’m choosing a cap of around 300 lines to ensure that my concatenation does not exceed the maximum size of a VARCHAR2, but as a result I should cut down on the number of UTL_FILE calls by a factor of 300.


SQL> set serverout on
SQL> declare
  2    f utl_file.file_type;
  3    line varchar2(255);
  4    big_line varchar2(32767);
  5    s timestamp;
  6    cnt pls_integer := 0;
  7  begin
  8    f := utl_file.fopen('TMP','demo2.dat','w',32767);
  9    s := systimestamp;
 10    for i in ( select * from t )
 11    loop
 12      line := i.object_id||','||i.object_name||','||i.object_type||','||i.data_object_id;
 13      cnt := cnt + 1;
 14      if cnt < 300 then
 15         big_line := big_line || line || chr(10);
 16      else
 17         utl_file.put_line(f,big_line||line);
 18         cnt := 0;
 19         big_line := null;
 20      end if;
 21    end loop;
 22    utl_file.put_line(f,big_line);
 23    dbms_output.put_line(systimestamp-s);
 24    utl_file.fclose_all;
 25  end;
 26  /
+000000000 00:02:23.297000000

PL/SQL procedure successfully completed.

That’s pretty cool. We reclaimed around 100 seconds of elapsed time just by reducing the number of UTL_FILE.PUT_LINE calls. In this particular instance that’s about 40%, but obviously your mileage may vary based on a myriad of factors – so make sure you do your own benchmarking on your own systems.
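If you want a starting point for that benchmarking, here is a sketch – my addition, not from the original post – that wraps the batched writer above in a procedure so that different batch sizes can be timed against the same table T and directory TMP:


create or replace procedure unload_t( p_batch in pls_integer ) as
  -- sketch of a parameterised version of the batched writer above,
  -- so that different batch sizes can be timed on your own system
  f        utl_file.file_type;
  line     varchar2(255);
  big_line varchar2(32767);
  cnt      pls_integer := 0;
  s        timestamp   := systimestamp;
begin
  f := utl_file.fopen('TMP','demo_'||p_batch||'.dat','w',32767);
  for i in ( select * from t )
  loop
    -- build one CSV line, exactly as in the examples above
    line := i.object_id||','||i.object_name||','||i.object_type||','||i.data_object_id;
    cnt  := cnt + 1;
    if cnt < p_batch then
       big_line := big_line || line || chr(10);
    else
       -- flush a batch of p_batch lines with a single UTL_FILE call
       utl_file.put_line(f,big_line||line);
       cnt      := 0;
       big_line := null;
    end if;
  end loop;
  -- flush any leftover lines (trim the trailing newline the loop added)
  if big_line is not null then
     utl_file.put_line(f,rtrim(big_line,chr(10)));
  end if;
  utl_file.fclose(f);
  dbms_output.put_line('batch size '||p_batch);
  dbms_output.put_line(systimestamp-s);
end;
/

Then a few calls along the lines of exec unload_t(100), exec unload_t(300) and exec unload_t(500) give a quick feel for where the sweet spot is on your system – just remember to keep each concatenation comfortably under the 32767-byte VARCHAR2 limit.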

As you can see, with just a couple of lines of extra code, we can really make UTL_FILE sing from a performance perspective. Having said that, in an upcoming blog post I’ll make a bold assertion – that you probably don’t need to use UTL_FILE ever again!  Stay tuned for that one.
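(As an aside of my own – and not necessarily what that upcoming post will cover – one alternative worth knowing about is that from 12.2 onwards SQL*Plus can generate CSV for you, spooling it to a file on the client rather than via UTL_FILE on the server:


SQL> set markup csv on
SQL> set feedback off
SQL> spool demo3.dat
SQL> select object_id, object_name, object_type, data_object_id from t;
SQL> spool off

The obvious difference is where the file ends up, but for a lot of extract jobs that is exactly what you want.)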

ADWC new OCI interface

A few things have changed about the Autonomous Data Warehouse Cloud service recently, and I’ve found the communication around them not so clear, so here is a short post about what I had to do to start my service again. The service has always been in the OCI data centers, but it was managed with the classic management interface. It has recently been migrated to the new interface.
Note that ADWC here is the name I gave to my service. It seems that the Autonomous Data Warehouse Cloud service is now referred to by the acronym ADW.

The service itself did not have any outage; the migration concerns only the interface. However, once the migration is done, you cannot use the old interface any more. I went to the old interface with the URL I had bookmarked, tried to start the service, and got a ‘last activity START_SERVICE failed’ error message without any additional detail.

You can forget the old bookmark (such as https://psm-tenant.console.oraclecloud.com/psmui/faces/paasRunner.jspx?serviceType=ADWC) and you now have to use the new one (such as https://console.us-ashburn-1.oraclecloud.com/a/db/adws/ocid1.autonomousdwdatabase.oc1.iad.al-long-IAD-identifier)

So I logged in to the console at https://console.us-ashburn-1.oraclecloud.com (my service is in the Ashburn-1 region). There I provided the tenant name (what was called the cloud account in the old interface), which can also be provided in the URL as https://console.us-ashburn-1.oraclecloud.com/?tenant=tenant. I selected oracleidentitycloudservice as the ‘identity provider’, entered my username and password, and I was on the OCI console.

From the top-left menu, I can go to Autonomous Data Warehouse. I see nothing until I choose the compartment in the ‘list scope’. The ADWC service I had created in the old interface is in the ‘tenant (root)’ compartment. Here I can start the service.

The previous PSM command line interface cannot be used anymore. We need to install the OCI CLI:

$ bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/...)"

You will need the Tenancy ID (the Tenancy OCID, ocid1.tenancy.oc1..aaaaaaaa…, which you find at the bottom of each page in the console) and the User ID (the User OCID, ocid1.user.oc1..aaaaaaa…, which you find in the ‘Users’ menu). All those OCIDs are documented in https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm
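For completeness, here is roughly what the CLI side looks like once it is installed. This is only a sketch of my understanding – the subcommand and parameter names may differ between OCI CLI versions, so check oci --help on your installation:

$ oci setup config
# interactive: prompts for the tenancy OCID, user OCID and region,
# and generates the API signing key pair described in the link above

$ oci db autonomous-data-warehouse start \
      --autonomous-data-warehouse-id ocid1.autonomousdwdatabase.oc1.iad.al-long-IAD-identifier

The OCID passed to the start command is the database one, the same that appears in the new console URL.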

If you used the REST API, note that the endpoints change completely. You will have to POST to something like:

/20160918/autonomousDataWarehouses/ocid1.autonomousdwdatabase.oc1.iad.abuwcljrb.../actions/start

where the OCID is the database one that you can copy from the console. Note that, as for all OCI REST calls, the request has to be signed with your API key (see the apisigningkey documentation linked above).

 

The post ADWC new OCI interface appeared first on Blog dbi services.

The Priority 600 Pinion Gearbox Bike

My friends at Priority Bicycles have released a new model bicycle with a German-made gearbox that is sealed against the elements and is centered on the bike for optimal weight distribution. While it's the gearbox that is the marquee feature on this new bike, it is the wide tires that fire my soul. I want to share where those tires can take you.

Full disclosure: I am a friend of Priority Bicycles, and was in the loop on the design and testing of the Priority 600 All Road bicycle. I am not a disinterested party.

Pavement is easy. Priority has specified WTB Horizon road-plus tires on the bike. These are 47 mm wide tires that you can run at low pressures for comfort over rough surfaces. Priority did not skimp here! You are not getting a cheap OEM version of the Horizon tires. Priority spec'd the full-on, high thread count enthusiast version that is tubeless ready, and mounted on tubeless rims. 


Rough. Smooth. Doesn't matter. You can't go wrong with the Horizon tires on pavement.

Gravel is the sweet spot. I can't say enough about how much fun it is to head out on the Horizon tires and just follow my nose down whatever interesting road -- paved or unpaved -- presents itself. Run the tires at 30 psi. Adjust up or down to suit your taste. These tires feel planted and secure, and comfortable!


The Horizon tires are planted and secure on gravel

Doubletrack's good too! Because even better than gravel roads are the endless miles of leftover logging doubletrack and ATV trails in the Hiawatha National Forest. One of my favorite rides for exercise is a mix of gravel road to ATV trail to pavement, and back to the parking lot. The WTB Horizon tires take all those surfaces in stride.


Doubletrack? Big Gravel? It's all good!

Singletrack's possible. Just make it smooth singletrack. I don't recommend slamming into rocks and roots, and for sure don't be hitting jumps and grabbing air. But go easy, and the Horizon tires have let me push the envelope to include easy singletrack in my weekend afternoon rides.


Smooth Singletrack? Sweet Solitude.

Boardwalks and other unusual surfaces are easy, with the wide tires providing stability and grip. I don't often get the opportunity to ride long distances on boards, but last summer on vacation with my wife I had fun circumnavigating a lake on a trail that included over two miles of boardwalk. 

Does it get any better than this?

And now for the crazy stuff! What follows is not recommended(!), but I'm an enthusiast and have pushed the bike -- willing to risk that either the bike or myself might break in the process -- beyond its intended design parameters. This next photo shows the first prototype of the Priority 600 All Road just as I'm entering a rough patch of mountain bike trail in Manhattan's Highbridge Park. 


The Priority 600 prototype handled this rough section. Skill and experience are needed! Surfaces like these are outside the design parameters and bring risk of damage and personal injury.

I love the bike! I've ridden the WTB Horizon tires for over a year now. I love where they can take me on my Priority bicycles. I love how the wide tires help me link together random trails and roads, bringing me to scenic places and helping me connect with nature and recoup from a tough day in the office. The Pinion gearbox on the Priority 600 makes the experience even better by centering the weight on the bike and adding tremendous gearing range for getting up (and down!) steep terrain.


The Priority Smile

p.s., Ryan Van Duzer's video review of the Priority 600 is excellent.

The simplest things….can be risky

Java and Oracle expert Lukas Eder tweeted yesterday about a potential optimization that could be done when reviewing database SQL code.

(The tweet suggested that two consecutive UPDATE statements against the same table could be consolidated into a single UPDATE.)

This looks to be a logical thing to do.  Why scan the table T twice to perform an update, when the same job could be done in a single pass?  The benefits seem obvious:

  • less I/O work
  • less time the data spends locked
  • less risk of an error between the two operations

So don’t get me wrong – the consolidation is going to be a good thing in the majority of cases.

And therein lies the rub – the “majority” of cases is not the same as “all” cases, and that is why I don’t think a tool should ever automatically perform this change. I’d be cool with a tool making a recommendation, but let’s see why you cannot just assume that the consolidation is correct.

Here’s our table with a single row and a single business rule implemented with a check constraint:



SQL> create table t ( a int, b int );

Table created.

SQL> alter table t add constraint chk check ( a < b ) ;

Table altered.

SQL> insert into t values (1,2);

1 row created.

SQL> commit;

Commit complete.

Now I’ll implement the application in the original “unoptimized” way:


SQL> update t set a = a + 1;
update t set a = a + 1
*
ERROR at line 1:
ORA-02290: check constraint (MCDONAC.CHK) violated


SQL> update t set b = b*2;

1 row updated.

You can see that the first update failed – taking the row from (1,2) to (2,2) violates the a < b check constraint. Of course, it is not definitively clear whether this should be the case based on the business requirements, because I haven’t elaborated on whether these two updates should be two transactions or a single transaction. Correctness is not really the point I’m trying to make here; the point is that if I now choose to consolidate the updates, I end up with a different application behaviour.

I’ll roll back the change above, and repeat the experiment using the consolidated update:


SQL> roll;
Rollback complete.
SQL> update t set a = a + 1, b = b*2;

1 row updated.

This time the update completes successfully, because the row goes straight from (1,2) to (2,4) and the check is never violated along the way. If a tool had automatically done this, then I would get a different behaviour in my application. That might be a good thing; it might not be. I could eliminate the difference by making the constraint deferrable, but then we’re starting to depart even further from the existing implementation of the application code, which means more scrutiny and more regression testing.
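As a sketch of that deferrable option – my addition, not something the original experiment ran – the constraint could be recreated so that it is only checked at commit time, which lets the two-statement version pass through the temporarily invalid state:


SQL> roll;
Rollback complete.
SQL> alter table t drop constraint chk;

Table altered.

SQL> alter table t add constraint chk check ( a < b ) deferrable initially deferred;

Table altered.

SQL> update t set a = a + 1;

1 row updated.

SQL> update t set b = b*2;

1 row updated.

SQL> commit;

Commit complete.

The intermediate (2,2) state no longer matters because the check is only evaluated at commit, by which time the row is (2,4). But as noted above, that is a change to both the schema and the transaction semantics – exactly the kind of thing that needs extra scrutiny and regression testing.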

So by all means, explore opportunities to improve the performance of your SQL by re-arranging it, consolidating it, and aiming to get more done with less work. But be careful that you do not unknowingly change the way your application works when you do so.

 

Oracle Can Generate 6 Password Hashes When a User is Added or Password Changed in 12.1.0.2 and Above

In a 12.2.0.2 database it's possible that Oracle generates 6 different password hashes for one password for one user under certain circumstances when a password is changed or created (i.e. the user is created). I will lay out the 6 different ones first....[Read More]
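(As an aside of my own, and not part of Pete's post: if you want to see which verifiers were actually stored for a given user, the usual place to look is SYS.USER$ – for example:

select name, password, spare4
from   sys.user$
where  name = 'SCOTT';

The PASSWORD column holds the old 10G verifier when it is generated, and SPARE4 holds the newer S: and T: style verifiers.)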

Posted by Pete On 13/06/18 At 09:02 PM

#Exasol’s annual conference in Berlin #XP18

We have had our annual conference Xperience in Berlin again – it was an amazing event with a packed agenda full of customer success stories and technical news. This year it took place at the Umspannwerk.

Umspannwerk Berlin

My assumption is that we will need a larger location next year because of our constant growth – it was a bit tight already this time.

VMware Hands On Labs

Introduction

In a previous post, I covered the VMware Experts Program. When I was attending the Program, one of the things Don Sullivan mentioned was the VMware Hands On Labs (HOL). I hadn’t had a chance to explore those before, so I was interested to see what they were like.

The Good

Quite honestly, these are among the best hands on lab environments I have played around with. If I wanted to test a feature of Oracle software in my previous life, most of the time I built myself a virtual machine environment using Oracle’s VirtualBox, installed all the relevant software and started from there. Of course, much of that could be snapshotted to save repeating the exercise, but it was largely restricted to a single VM. If I wanted to test something like Oracle’s Real Application Clusters technology, I built myself a small two node cluster out of two separate VMs and went from there. It really didn’t give me the feel of a real-world environment.

Enter the VMware HOL. Obviously, to get a real-world environment to test things like vMotion migration of VMs takes much more in the way of resources than my poor old laptop could handle, even if it does have 32 GB of RAM. The VMware HOL environment solves that issue for you. It takes only a small amount of time to actually crank up an environment for each lab you want to do, and each lab comes complete with online instructions integrated with the lab, as well as a separate PDF file and HTML version of the lab if you want to use that.

The labs are largely standalone, but do reference other labs where more details can be found. For things that may take some time to execute, you generally find they have included an interactive simulation where you click your way around and type in a few things, but the actual steps are simply simulated to save time.

Overall, as I mentioned above, the VMware HOLs are pretty darn good!

The Bad

Frankly, there’s not much to say here. If I were to nitpick a few minor points, they’d be these:

  • It would be nice to have a recommended path through the labs documented outside the labs themselves. There are quite a few labs there, and it’s a bit hard to determine which ones should be done in which order. In the labs themselves, there are sometimes suggestions to do other labs for more information, but that’s about it.
  • Sometimes the interactive simulation may not take input correctly (you sometimes need to hit a key twice – which can be more than a tad confusing when you’re entering passwords and don’t realize one got missed!). However, after a while you realize you can type any active key (i.e. not Shift) and it will think you’re typing what it expects, so you can just keep typing “jjjjjjjjjjjjj…” until the field populates with the right number of characters.
