Oakies Blog Aggregator

ASH Analytics- Activity Focused on Session Identifiers, Part III

We’re going to continue on to session identifiers in the ASH Analytics Activity view.  You can use the links to view Part I and Part II to catch up on the ASH Analytics fun!  Knowing just how many ways you can break down ASH data is very helpful when you are chasing an issue and need specific data to resolve it.  We’ve already taken a deeper look into SQL and resource consumption, so now on to the session identifiers.

Session identifiers in ASH Analytics cover the following:

[Image: ash_0324_main]

A session identifier provides distinct information about the session or sessions.  As in the previous blog posts, all the data for each graph comes from the same timeline, so the differences between the views can be compared more readily.

Instance

The data is broken down by total sessions per instance, which is extremely helpful when working in a RAC environment.  The graph will show whether the database resource consumption is “weighted” toward one instance or another, whether there are unusual sessions on one node, and so on.

[Image: ash_0324_instance]

As you can see in our example, it’s pretty much 50/50 between the two instances, with a couple of minimal spikes from sessions that we can drill down into on the right, or by SQL_ID on the left.  Notice that you can see the percentage of SQL that is being executed on each of the instances.
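If you want to sanity-check what the graph shows outside of EM, a rough equivalent against the raw ASH data might look like the sketch below. Note the assumptions: gv$active_session_history requires Diagnostics Pack licensing, and the 30-minute window is an arbitrary choice.

-- ASH samples per instance over the last 30 minutes
-- (each sampled row approximates one second of active time)
select   inst_id, count(*) as ash_samples
from     gv$active_session_history
where    sample_time > sysdate - interval '30' minute
group by inst_id
order by inst_id;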

Services

Your services are the connections into the database that each session is using.  The database environment we are using for our example has only one distinct “user” service, which is shown in hot pink.  If you have more, you can gauge the amount of resources consumed per user service deployed to each application, user group, host, etc.

[Image: ash_0324_services]

As we’ve seen in the other activity graphs, we can easily drill down by SQL_ID on the left or instance/SID/Serial# on the right.

User Session

The activity in this graph is quite heavy.  We can identify the user sessions that are individually consuming the highest amount of resources.  The rest, which can’t be isolated and graphed out individually, are grouped at the bottom and marked as “Other”.  This view comes in handy when you have one session whose overall consumption is out of the norm.

[Image: ash_0324_sessions]

Notice the mix of sessions that are involved in each of the SQL_IDs on the left.  This tells us that most of the users are accessing the database either via an application or through packaged code.  If you were to see one session dominating a top SQL_ID, that might be a reason to quickly investigate for an issue.  The “Other” category may look interesting, but remember: if any one of those sessions or SQL_IDs were a significant consumer of resources, it would have its own identified section in the graph.  Don’t fall into the rabbit hole… :)
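For reference, this “top sessions” view can be approximated straight from ASH as well. A sketch, with an arbitrary one-hour window:

-- Top 10 sessions by ASH sample count over the last hour
select *
from  (
        select   session_id, session_serial#, count(*) as samples
        from     v$active_session_history
        where    sample_time > sysdate - interval '1' hour
        group by session_id, session_serial#
        order by samples desc
      )
where rownum <= 10;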

Parallel Processes

There isn’t a lot I get to show you for activity here.  The environment I’m using for my example just isn’t set up to use or need much parallelism.  I have all of two parallel processes, and you can see one process running in parallel on the first instance and another on the second instance, identified by color.

[Image: ash_0324_parallel]

Notice that the bottom sections show our parallel processes on both the left and the right side.  You have the option to start filtering by more information, the same as what you see at the top, but this is secondary filtering.  A simple change on the right could gather a different set of data for the same processes, such as wait event or resource consumption.

User ID

As you can see, two primary User IDs are accessing the environment.  We already know that it’s numerous sessions, but this could signify an application ID that is used for login, with the user login security housed at the application or OS level.  We can see that the primary usage is from the User ID with the magenta graphed section.

[Image: ash_0324_user_id]

The bottom section would show the same as the top, just broken down by percentage.  What we can do instead is break it down by a different type of data, such as wait event or even objects, as I’ve shown here:

[Image: ash_0324_break_it_dwn]
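If you’d rather pull the User ID breakdown directly from ASH, something like this works. It’s only a sketch: the outer join to dba_users and the one-hour window are my own choices.

-- ASH samples per database user over the last hour
select   u.username, count(*) as samples
from     v$active_session_history h
         left join dba_users u on u.user_id = h.user_id
where    h.sample_time > sysdate - interval '1' hour
group by u.username
order by samples desc;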

Program

For the Program activity graph, I’ve done something a bit more with the bottom section.  As you can see, the programs are displayed across the graph, and for this one the OMS accounts for the majority of the usage, which is expected (this is an OMS environment… :)).  Now look at the bottom section:

[Image: ash_0324_program]

The default would have been to show the programs by percentage, but I’ve updated both sections to display SQL information instead.  In the left section I’ve requested the SQL plan operations and the programs they belong to; in the right section I’m displaying the SQL_IDs and the programs they are sourced from.
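The Program breakdown is just as easy to approximate from raw ASH when you aren’t in front of EM. A minimal sketch:

-- ASH samples per client program over the last hour
select   program, count(*) as samples
from     v$active_session_history
where    sample_time > sysdate - interval '1' hour
group by program
order by samples desc;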

Session Type

The last one is Session Type.  This simply distinguishes background vs. foreground processes.  The bottom section already displays data by SQL_ID on the left and SID on the right.  You’re welcome to mix up this data and filter any way you need to, but this is the default for this activity view.

[Image: ash_0324_session_type]

Keeping to the areas we’ve already discussed in previous posts, I’ll switch up the left side and break it down further by wait event.  On the right, I’m just starting to choose a new filter; notice that I’m simply clicking on the drop-down menu and picking the event type I want to use for my new filter.

[Image: ash_0324_break_it_dwn_2]
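For completeness, the foreground/background split is a one-liner against ASH (same caveats as the earlier sketches):

-- Foreground vs. background activity in ASH samples
select   session_type, count(*) as samples
from     v$active_session_history
group by session_type;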

Using the activity graphs in ASH Analytics this way grants the DBA access to various types of diagnostic and analysis information that can be used to troubleshoot an issue or dig deeper into a performance tuning challenge when it comes to session information.

 




Oracle Midlands : Event #2 Summary

The second Oracle Midlands event took place last night.

The most important part of the evening was the opportunity to win an Oracle Press teddy bear.

[Image: orabear]

Oh yeah, there were a couple of talks too…

I was planning to arrive early and go for a quick drink with my dad, but the Birmingham traffic put that plan to rest. Instead we just chatted a bit before the event started.

First up was my dad (Graham Wood), who dropped by on his way to the UKOUG Real World Performance Day in London, speaking about “Advanced ASH Architecture and Usage”. I’ve seen Graham doing this presentation a few times, but each time something new jumps out at me. If Graham came back next week and did the same talk, I would still go and watch it. Every year that goes by I find myself using AWR reports and SQL Trace less and less because I can get the information I need out of ASH.

After saying an emotional farewell to Graham, I drowned my sorrows in vegetable samosas and chatted to people, including Martin Widlake, who I assume was lost since he’s not from round these parts… :)

Next up was Nikolay Manchev, who drove up from London to speak about “Using Clusterware 11g to protect single-instance databases”. This was essentially using Oracle Clusterware to mimic RAC One-Node without having to buy a RAC One-Node license. Almost the whole of this presentation was live demonstration. I love doing demos, but I don’t think I am brave enough to do live demos of Oracle Clusterware and failover etc. Too much to go wrong on a little laptop. Nikolay laughs in the face of danger and jumps right into that stuff. :) He had a couple of little hiccups with the projector not playing well with his laptop, but he got through the demo and his cold failover of the database, listener and VIP worked just fine. Neat! :)

After the event was over, a few of us wandered over to a nearby pub and sat chatting for a couple of hours more. :)

All in all a really cool event! Here come the thank you messages.

  • Thanks to Mike for actually organising this event. I think you are doing a fantastic job!
  • Thank you to the speakers for coming along to the event.
  • Thanks to those kind folks at Red Gate Software, whose sponsorship allowed this to be a free event.
  • Thanks to Oracle Press for the raffle prizes of bears, books and t-shirts. I really wanted a bear, but I didn’t win.
  • Thanks to the attendees for coming, some from very far afield. Please keep coming and please keep the word-of-mouth about these events going. Local user groups like this live or die based on your support!

The next event should be on May 20th, with Christian Antognini doing both sessions. Christian is the author of probably my favourite technical book, Troubleshooting Oracle Performance, so I’m really looking forward to this. You never know, you might get to win the second edition of this book, which must be about to arrive… :) Keep an eye on the Oracle Midlands website for registration information.

Cheers

Tim…

 



C14 OakTable World Las Vegas

If you haven’t yet made the decision to attend COLLABORATE 14 – IOUG Forum in Las Vegas taking place on 7-11 April, 2014 at the Venetian Hotel, this might just help you to make the call. You know you want to be there.

OakTable Network will be holding its OakTable World for the very first time during the COLLABORATE conference. While it’s a little bit last minute, IOUG was able to provide a room for us to use for the whole day, and we at OakTable quickly put a schedule together. The agenda was selected by the OakTable speakers based on the topics they are really passionate about. As history shows, this is generally what you also want to hear about.

The history of OakTable World goes back to the underground event started by Mogens Nørgaard during Oracle OpenWorld somewhere around 2007-2009. After several years of success and growing popularity, the event became known as OakTable World, ran during OOW12 (link) and OOW13 (link), and was a huge success. Last year we also ran OTWUK13 around the UKOUG TEBS 13 conference in Manchester. Needless to say, it was a success.

Thanks to the many good companies that sponsored the events all those years — you know who you are. This year, the sponsor is really IOUG, who managed to find a room for us. I’ll probably gather a few of the usual suspects to print special t-shirts again, so stay tuned for that update — I wanted to get the info out there ASAP so that people could plan on attending.

The up to date schedule is in Google Calendar. You can also sign up to the calendar feed using XML or iCal feeds.

[Embedded schedule: see the Google Calendar feed linked above.]

Note that we have two hours of TED-style lightning talks, with or without slides. Each talk is 10-15 minutes in which the speaker shares an innovative perspective relevant to the database industry. The talks are a mix of technical and non-technical topics. This format was first attempted at the UKOUG conference in 2011 and 2012, I believe, and the talks were very popular and well attended.

You will hear about specific technologies or methods that are under-utilized, about provocative approaches to the database tasks we have at hand, about disruptive trends, and about totally non-technical things that will make you a better IT professional — stories of someone’s life or professional career. It’s usually entertaining and thought-provoking. I strongly recommend them.

As you can see, the speakers for the rest of the day are stellar. Frits Hoogland, of Enkitec Europe fame, comes to C14 all the way from Holland, and he will start the day by getting very, very deep inside Oracle execution code. Frits will demonstrate live use of Linux GDB debugger techniques to diagnose and troubleshoot very complex scenarios where you really need to understand the internals of certain database operations. You really need your brain fresh for this session, so scheduling it first thing in the morning is the only reasonable time to reduce the risk of your brain being fried in the first 15 minutes. Hardcore folks will love this.

Next goes Kyle Hailey with the Agile Data story, teaching us about modern techniques for making Oracle database deployment, cloning and the like as easy as deploying a Linux RPM package with YUM. Moved to 3pm: Graham Wood from Oracle will be presenting Unreal World Performance use cases, themed after the popular Real World Performance Tour (but without the two other amigos), with a fresh look at bad practices in managing connections to Oracle databases — time for a refresher on those very popular RWP YouTube videos.

After lunch is when the Lightning Talks will be running, along with a very cool session by Jeff Needham of Scalabilities, where you can learn about disruptive trends in modern data warehousing and what customers are doing today (and more will be doing tomorrow) to scale their data processing in a cost-effective way: how vendors respond to Big Data technologies disrupting their traditional products, and how customers handle it (note that I’m trying hard to be politically correct here — come, it will be very refreshing). By the way, Jeff will give away copies of his book (Disruptive Possibilities) during a Lightning Talk as well.

Unfortunately Jeff won’t be able to be here, though we might still get some of his books. However, we have Gwen Shapira presenting on loading Twitter data into Hadoop and analyzing it, with a complete live on-the-spot demo!

A late Karl Arao addition is also on the schedule!

Here is the schedule for the Lightning Talks. It might change slightly between the two slots (and we still have one slot to be confirmed):

Lightning Talk Part I:

  • Kyle Hailey – “Importance of Data Visualization”
  • Jeff Needham – “Hadoop Myth-Busting (plus books giveaway)”
  • Tim Gorman – “Eagles and Seagulls” about being an independent IT consultant
  • Kellyn Pot’vin – “Finding Your Path”

Lightning Talk Part II:

  • Jonathan Gennick – “Seven Ways toward Better Writing”
  • Kellyn Pot’vin – “Snapclone with DBaaS in EM12c”
  • Graham Wood – How Hardware Timings Affect Your Oracle Database Performance
  • TBC

Details for the lightning talks are in the event entries in the calendar if you open each event. Remember that there might be some last-minute changes, so I recommend simply signing up for the calendar feed on the mobile device you carry with you at the conference.

Also, the OakTable folks and others will be hanging around that room for the whole day, so come over. I think it would be a good experiment to reintroduce a couple of things that Mogens had back in the old days:

  • FIVE-MINUTES-OF-FAME – Any attendee can get up and present a technical solution (5-minute time limit) to the group for admiration.
  • FIVE-MINUTES-OF-HELP – Any attendee can get up and pose a technical problem (5-minute time limit) to the group for help.

Well, that’s the news. If you are excited — post here in the comments that you are coming, and share this on Twitter, Facebook, LinkedIn and what not. Looking forward to seeing as many of you there as possible. As Tim Gorman said — “more fun than a barrel of drunken DBAs, is the un-conference formerly known as ‘Oracle Closed World’, now known as ‘Oak Table World’.” And this year we are in Vegas!

VirtualBox 4.3.10

VirtualBox 4.3.10 has been released. The downloads and changelog are in the usual places.

At the time of writing, the link to the Windows version seems to be broken, but the Mac, Oracle Linux and Fedora versions are there.

Happy upgrading!

Cheers

Tim…

Update: Probably best to wait a while before downloading this new version. This version is currently on its 3rd build since I first downloaded it, and now the download links are broken. :(

 



Diagnostics

Here’s a little test you might want to try. Examine the following script, and decide what sort of symptoms you would see in the AWR report.


create global temporary table gtt1(n1 number);

execute dbms_workload_repository.create_snapshot;

insert into gtt1 values(1);
truncate table gtt1;

-- repeat insert/truncate for a total of 100 cycles

execute dbms_workload_repository.create_snapshot;

-- generate an AWR report across the interval.
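If you want to run the test yourself, the repeated insert/truncate can be scripted as a simple PL/SQL loop. A minimal sketch (truncate is DDL, so inside PL/SQL it has to go through execute immediate):

begin
	for i in 1..100 loop
		insert into gtt1 values(1);
		execute immediate 'truncate table gtt1';
	end loop;
end;
/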

I don’t need anyone to tell me their results – but if your predictions and the actual results match then you can give yourself a pat on the head.
You might also like to enable SQL trace for all the inserts/truncate to see if that shows you anything interesting.

This is one of the simpler scripts of the 3,500 I have on my laptop that help me interpret the symptoms I see in client systems.


Min/Max

One of my most-repeated observations about trouble-shooting Oracle is that things break when you start combining features. Here’s an example that demonstrates the point.

It’s possible to create “descending” indexes – or indexes with descending columns, as I prefer to call them, and there’s a special “min/max range scan” optimizer operation for a particular kind of index usage – demonstrated in the following code fragment (running under 11.2.0.4, and reporting the rowsource execution statistics):


create table t1(
	a number not null,
	b number not null,
	c number not null,
	padding varchar2(100)
);

insert into t1
select
	mod(object_id +   1245,1001),
	mod(object_id +   4545,1111),
	mod(object_id + 774545,  13),
	rpad('x',100,'x')
from
	all_objects
where
	rownum<=10000
;

commit;

create index t1_i1 on t1(b, c, a);

begin
	dbms_stats.gather_table_stats(
		ownname		 => user,
		tabname		 =>'T1',
		method_opt	 => 'for all columns size 1'
	);
end;
/

alter session set statistics_level = all;

select
	max(a)
from	t1
where	b=1
and	c=1
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |      1 |        |      1 |00:00:00.01 |       2 |
|   1 |  SORT AGGREGATE              |       |      1 |      1 |      1 |00:00:00.01 |       2 |
|   2 |   FIRST ROW                  |       |      1 |      1 |      1 |00:00:00.01 |       2 |
|*  3 |    INDEX RANGE SCAN (MIN/MAX)| T1_I1 |      1 |      1 |      1 |00:00:00.01 |       2 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("B"=1 AND "C"=1)

Note how the optimizer is aware that it can find a path aiming for one specific index entry (FIRST ROW), using the (min/max) option on the index.

So what happens when we change the index:


drop index t1_i1;
create index t1_i1 on t1(b, c, a desc);

execute dbms_stats.gather_table_stats(user,'t1',method_opt=>'for all columns size 1')

select
	max(a)
from	t1
where	b=1
and	c=1
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

-------------------------------------------------------------------------------------
| Id  | Operation         | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |       |      1 |        |      1 |00:00:00.01 |       2 |
|   1 |  SORT AGGREGATE   |       |      1 |      1 |      1 |00:00:00.01 |       2 |
|*  2 |   INDEX RANGE SCAN| T1_I1 |      1 |      1 |      1 |00:00:00.01 |       2 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("B"=1 AND "C"=1)

We’ve changed the index so that the final column is descending and, although the optimizer is smart enough to determine that the query can still be satisfied without visiting the table, it can no longer use the min/max optimization; instead it does a range scan of the section of the index matching the where clause, using the normal aggregate operation to find max(a).

In this tiny example the difference in the work load is barely perceptible – but there will be cases where the change in plan will make a difference in performance. As ever, when taking advantage of a feature that looks useful you have to try to imagine all the possible cases for the feature that might appear in your application and test them to see whether they introduce an unexpected (and possibly unacceptable) overhead.

Footnote:

There is a workaround in this case – not that I would suggest using it in a production system. If you remember that descending columns are implemented through a function-based index using the sys_op_descend() function, you can write code like this:

select
	utl_raw.cast_to_number(hextoraw(sys_op_undescend(MIN(sys_op_descend(a)))))	a
from
	t1
where
	b = 1
and	c = 1
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |      1 |        |      1 |00:00:00.01 |       2 |
|   1 |  SORT AGGREGATE              |       |      1 |      1 |      1 |00:00:00.01 |       2 |
|   2 |   FIRST ROW                  |       |      1 |      1 |      1 |00:00:00.01 |       2 |
|*  3 |    INDEX RANGE SCAN (MIN/MAX)| T1_I1 |      1 |      1 |      1 |00:00:00.01 |       2 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("B"=1 AND "C"=1)

These results came from an instance of 11.2.0.4, but the limitation is still present in 12.1.0.1.


Java 8 Released

Java 8 released! Download it here... Oracle continues to beat Sun’s track record for actually updating Java on a regular basis. Java 8 brings many exciting new features and many miscellaneous improvements including:
• Java tippy-toes into the world of “Functional” programming with the addition of Lambda expressions; not quite a “Closure”, but Lambdas allow the definition of stand-alone methods tied to functional interfaces such as those in the java.util.function package; functional methods are perfect for one-time code needs
• Annotations may be used anywhere a Type is used; not just at Type declaration
• New java.util.Stream API allows functional style operations on streams of elements in collections providing the ability to perform bulk operations such as map-reduce
• HashMap improvements
• Compact profiles to allow Java to function in low-resource environments
• Numerous security upgrades
• Enhancements to JavaFX including HTML5 and CSS improvements; added UI controls; support for ARM platforms
• New Time & Date capabilities like timezones and durations via java.time and java.time.XXX packages
• Removal of JDBC-ODBC bridge and upgrade to JDBC 4.2
• Upgrade of Java DB to version 10
• and more…

I’ll include a specific blog post on using Lambda expressions in the next couple of months; in the meantime, visit Oracle’s website for more information.

Oracle SQL*Net Wait Events

 

Introduction

Unfortunately, what Oracle calls “Network Waits” have little to do with the network and almost exclusively to do with the time it takes to pack messages for the network before they are sent.
Client = you: the tool, SQL*Plus, the application.
The other side (not the client) = the shadow process communicating with the client.

Of the three waits, only “more data from client” is possibly related to network issues, and even that’s not clear; the other two are simply the time it takes to pack a message before sending it.

SQL*Net message to client - time to pack a message (no network time included); possibly tune SDU
SQL*Net more data from client - possible network issues; possibly tune SDU
SQL*Net more data to client - time to pack a message (no network time included); possibly tune SDU

 

The same events exist for database links, where the local shadow process plays the role of the client and the remote database plays the role of the shadow process:

 

SQL*Net message to dblink
SQL*Net more data from dblink - possible network issues; possibly tune SDU
SQL*Net more data to dblink

 

SQL*Net Wait Events

 

SQL*Net message from client

An idle event: waiting for work from the client. Includes network transmission times for messages coming from the shadow. Typically indicative of client “think time” or “processing time”.

Example from Egor Starostin, http://oracledba.ru, from a 10046 trace:
    =====================
   PARSING IN CURSOR #1 len=43 dep=0 uid=0 oct=3 lid=0 tim=1304096237
    hv=2707617103 ad=’89a03e18′
    select * from all_objects where rownum < 20
    END OF STMT
    PARSE #1:c=0,e=143,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=1304096209
    EXEC #1:c=0,e=744,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=1304097036
    WAIT #1: nam=’SQL*Net message to client’ ela= 3 driver id=1650815232
    #bytes=1 p3=0 obj#=-1 tim=1304097096
    FETCH #1:c=10000,e=6903,p=0,cr=9,cu=0,mis=0,r=1,dep=0,og=1,tim=1304104057
1->WAIT #1: nam=’SQL*Net message from client‘ ela= 721 driver
    id=1650815232 #bytes=1 p3=0 obj#=-1 tim=1304104865        # [non-idle]
    WAIT #1: nam=’SQL*Net message to client’ ela= 1 driver id=1650815232
    #bytes=1 p3=0 obj#=-1 tim=1304105319
    FETCH #1:c=0,e=627,p=0,cr=21,cu=0,mis=0,r=15,dep=0,og=1,tim=1304105524
2->WAIT #1: nam=’SQL*Net message from client‘ ela= 253 driver
    id=1650815232 #bytes=1 p3=0 obj#=-1 tim=1304105818        # [non-idle]
    WAIT #1: nam=’SQL*Net message to client’ ela= 1 driver id=1650815232
    #bytes=1 p3=0 obj#=-1 tim=1304105867
    FETCH #1:c=0,e=63,p=0,cr=6,cu=0,mis=0,r=3,dep=0,og=1,tim=1304105900
3->WAIT #1: nam=’SQL*Net message from client‘ ela= 1960753 driver
    id=1650815232 #bytes=1 p3=0 obj#=-1 tim=1306066946 # [idle]
    =====================
    PARSING IN CURSOR #1 len=21 dep=0 uid=0 oct=3 lid=0 tim=1306069444
    hv=2200891488 ad=’89913b50′
    select user from dual
    END OF STMT
    PARSE #1:c=0,e=60,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=1306069440
    …
The first two “SQL*Net message from client” waits are in the middle of cursor processing and are considered non-idle waits.
The third “SQL*Net message from client” is between cursors and is considered an idle event, i.e. we are waiting for the next command from the client.

 

SQL*Net message to client

Time it takes to pack a message to be sent to the client
Doesn’t include network timing
see Tanel Poder’s analysis of SQL*Net message to client

 

SQL*Net more data to client

Same as SQL*Net message to client except this is for data that spans SDU packets.

Wait represents the time it takes to pack data.
Doesn’t include network timing

 

SQL*Net more data from client

The only SQL*Net wait that can indicate a possible NETWORK problem.
The client is sending data to the shadow process that spans packets (think large data inserts, possibly large code blocks, large SQL statements).
The shadow process waits for the next packet.
Can indicate network latency.
Can indicate a problem with the client tool.
Here is an example with ASHMON where the application server died mid-stream on inserts. The shadow processes were left waiting for completion of the message. You can see the regular load on the database on the left; then, just past the middle, the load crashes and all that’s left is waits on “SQL*Net more data from client”.

Possibly set SDU=32768 as well as setting RECV_BUF_SIZE and SEND_BUF_SIZE to 65536.

 

SQL*Net break/reset to client

Error in a SQL statement

Control-C
Usually highlights an error in the application
Example:
       CREATE TABLE T1 (C1 NUMBER);
       ALTER TABLE T1 ADD
            (CONSTRAINT T1_CHECK1 CHECK (C1 IN ('J','N')));
       ALTER SESSION SET EVENTS
            '10046 TRACE NAME CONTEXT FOREVER, LEVEL 12';
       INSERT INTO T1 VALUES (1);
Trace File
       PARSING IN CURSOR #2 len=25 dep=0 uid=0 oct=2 lid=0 tim=5009300581224 hv=981683409 ad='8e6a7c10'
       INSERT INTO T1 VALUES (1)
       END OF STMT
       PARSE #2:c=0,e=2770,p=0,cr=2,cu=0,mis=1,r=0,dep=0,og=1,tim=5009300581220
       BINDS #2:
       EXEC #2:c=0,e=128,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=5009300581418
       ERROR #2:err=1722 tim=512952379
       WAIT #2: nam='SQL*Net break/reset to client' ela= 31 driver id=1650815232 break?=1 p3=0 obj#=-1 tim=5009300581549
       WAIT #2: nam='SQL*Net break/reset to client' ela= 92 driver id=1650815232 break?=0 p3=0 obj#=-1 tim=5009300581662
Unfortunately Oracle doesn’t give much information for debugging unless you are tracing. If you don’t trace, the SQL won’t be captured because, from Oracle’s point of view, the problem statement isn’t an acceptable SQL statement, so there is no SQL ID to track down.

 

DBLINK SQL*Net Waits

These waits are the dblink equivalents of the client-side waits above:
SQL*Net message to dblink
SQL*Net more data from dblink
SQL*Net more data to dblink
SQL*Net break/reset to dblink

 

Analysis and Tuning

There isn’t much to do on the Oracle side for tuning. You can try optimizing the SDU, SEND_BUF_SIZE and RECV_BUF_SIZE.
For actual information on network speeds you will have to use something like:
  • ping
  • tnsping
  • network sniffer
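From inside the database you can at least see how much time has been spent on these waits since instance startup by querying the system-level wait statistics. A sketch (time_waited_micro is available in v$system_event from 10g onwards):

-- Cumulative SQL*Net wait time since instance startup
select   event, total_waits,
         round(time_waited_micro / 1e6, 1) as seconds_waited
from     v$system_event
where    event like 'SQL*Net%'
order by time_waited_micro desc;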

 

SDU

The default SDU can be set in sqlnet.ora.
If it’s not set, the default is 2048.
The max is 32768.
The default, or the value in sqlnet.ora, can be overridden in tnsnames.ora and listener.ora. The client and server negotiate the size, agreeing on the smaller of the two settings.
(TDU – Transmission Data Unit – see note 44694.1. The TDU parameter has been deprecated in Oracle Net v8.0 and beyond and is ignored. It is only mentioned here for backward compatibility.)
tnsnames.ora
      V10G = (DESCRIPTION =
      #ff0000;">(SDU=32768)
      (ADDRESS = (PROTOCOL = TCP)(HOST = fuji)(PORT = 1522))
      (CONNECT_DATA =
      (SERVER = DEDICATED) (SERVICE_NAME = v10g)
) )
listener.ora
       SID_LIST_LISTENER =
       (SID_LIST =
       (SID_DESC =
       #ff0000;">(SDU=32768)
       (SID_NAME = v10g)
       (ORACLE_HOME = /export/home/oracle10)
))

Tracing

sqlnet.ora

       trace_level_client=16
       trace_directory_client=/tmp
       trace_file_client=client.trc
       trace_unique_client = true
       trace_level_server=16
       trace_directory_server=/tmp
       trace_file_server=server.trc

client.trc

       client_3582.trc:[12-JAN-2008 11:37:39:237] nsconneg: vsn=313, gbl=0xa01, sdu=32768, tdu=32767

more from Jonathan Lewis at http://www.jlcomp.demon.co.uk/sdu.html

 

RECV_BUF_SIZE and SEND_BUF_SIZE

The recommended size for these buffers (from Oracle’s docs) is at least
Network bandwidth * roundtrip = buffer min size
For example, if the network bandwidth is 100 Mb/s and the round-trip time (from ping) is 5 ms, then:
           100,000,000 bits   1 byte   5 seconds
           ---------------- x ------ x --------- = 62,500 bytes
            1 second          8 bits     1000
tnsnames.ora
           V10G = (DESCRIPTION =
           #ff0000;">(SEND_BUF_SIZE=65536)
           #ff0000;">(RECV_BUF_SIZE=65536)
           (ADDRESS = (PROTOCOL = TCP)(HOST = fuji)(PORT = 1522))
           (CONNECT_DATA =
           (SERVER = DEDICATED) (SERVICE_NAME = v10g)
           ) )
 
listener.ora
           SID_LIST_LISTENER =
           (SID_LIST =
           (SID_DESC =
           #ff0000;">(SEND_BUF_SIZE=65536)
           #ff0000;">(RECV_BUF_SIZE=65536)
           (SID_NAME = v10g)
           (ORACLE_HOME = /export/home/oracle10)
           ))
 
sqlnet.ora
          RECV_BUF_SIZE=65536
          SEND_BUF_SIZE=65536