Oakies Blog Aggregator

Physical IO on Linux

I have posted a fair amount of material on how Oracle generates IOs, and especially large IOs, meaning more than one Oracle block, so > 8KB. This is typically what happens when the Oracle database executes a row source which does a full segment scan. Let's start off with a quiz: what do you think is the maximum IO size the Oracle engine is capable of requesting from the operating system (the IO size as seen at the SCI (system call interface) layer)? If you made up your answer, remember it, and read on!

The real intention of this blogpost is to describe what is going on in the Oracle database kernel, but also what is being done in the Linux kernel. Being an Oracle DBA specialised in performance means you have to understand what the operating system does. It is often of the utmost importance to understand how an IO ends up as a request at the NAS or SAN head, so you understand what a storage admin is talking about.

Many people (including myself in the past) would state that the maximum IO size on Linux is 1MB. For the Linux 2.6 kernel and higher this statement is incorrect because there is no such thing as a single maximum IO size on Linux. There used to be one in the Linux 2.4 era, which was set with the maxphys parameter, but that time is long gone. In order to find out what it is now, let’s test and see!

First let’s get a Linux system and a big table!
The system I got is a VMware Fusion VM, running Linux 3.8.13-44.1.5.el6uek.x86_64 (UEK3) on Oracle Linux 6u6. On top of that I am using the Oracle database and grid infrastructure version 12.1.0.2. This system uses udev for providing disk access (as opposed to asmlib).
The redundancy mode of ASM is external, although for reading (which is what I will be covering) this doesn't matter.

Now let's get a normal database session, and use a combination of SQL trace with waits (10046/8) and strace to see how the Oracle database interfaces with the kernel. Please mind I've prepared a heap table with no indexes on it, so a count(*) on it will always result in a full table scan. Also, the buffer cache is sized small enough (or the table is created large enough, depending on how you look at it) to have the session decide to do a direct path read instead of a buffered read. If you don't know what that means: please search this blog for direct path reads, or even better, download my presentation 'about multiblock reads'.

The direct path read decision is visible via the ‘direct path read’ wait event. If you get a full table scan operation and see ‘db file scattered read’ waits, you are doing a buffered scan.

In most cases, you will get a maximum value of 1MB if possible, which seems to support the generally assumed 1MB maximum operating system induced IO size. Why? Well, because you probably set the DB_FILE_MULTIBLOCK_READ_COUNT parameter to 128, which means you have explicitly told the Oracle process not to do IOs larger than 1MB (128*8192). In this blogpost, I show that Oracle can request IOs bigger than 1MB.
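To see what limit your own settings imply, here is a quick sketch against the standard v$parameter view (the product of the block size and the multiblock read count is the largest IO size, in bytes, that Oracle will ask for):

select p1.value * p2.value as max_multiblock_io_bytes
from   v$parameter p1, v$parameter p2
where  p1.name = 'db_block_size'
and    p2.name = 'db_file_multiblock_read_count';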

In the blogpost series on extra huge database IOs, I show that Oracle can do huge (1MB+) IOs, but the physical request size (what Oracle actually requests at the SCI layer, visible with the pread/pwrite/io_submit/io_getevents functions) still is 1MB. This limit is imposed by the physical storage structure the database uses with ASM, called an allocation unit (often called 'AU'). The default size of an allocation unit is 1MB. The allocation unit size can be seen in both the database and the ASM instance with the following query:

SYS@+ASM AS SYSASM> select name, allocation_unit_size from v$asm_diskgroup;

NAME			       ALLOCATION_UNIT_SIZE
------------------------------ --------------------
DATA					    1048576

How about doing an unbuffered read on a filesystem? I've created a database on an (XFS, but this doesn't matter AFAIK) filesystem, and tried to find the maximum value for DB_FILE_MULTIBLOCK_READ_COUNT. I did this by setting DB_FILE_MULTIBLOCK_READ_COUNT to 10000 (ten thousand), and then bouncing the database to see what the number had become. In my case, the value became 4096. I think this is the limit for Oracle 12.1.0.2 on Linux x86_64, but I would love to hear if you have gotten different results:

I set 10000:

SYS@fv12102 AS SYSDBA> select name, value from v$spparameter where name like 'db_file_multiblock%';

NAME						   VALUE
-------------------------------------------------- ----------------------------------------------------------------------
db_file_multiblock_read_count			   10000

But Oracle limits this to 4096:

SYS@fv12102 AS SYSDBA> select name, value from v$parameter where name like 'db_file_multiblock%';

NAME						   VALUE
-------------------------------------------------- ----------------------------------------------------------------------
db_file_multiblock_read_count			   4096

Okay. Let's start our investigation at that point: a database which is set up with DB_FILE_MULTIBLOCK_READ_COUNT set to 4096, or 32MB (with a block size of 8KB), and a table with extents large enough to accommodate huge (32MB) IOs.

Fire up a regular database session, and enable SQL trace at level 8:

$ sqlplus ts/ts@//localhost/v11204
...
SQL> alter session set events 'sql_trace level 8';

Now start another session as root on the database server, and find the PID of the server process of the sqlplus session we just created above. Issue strace with the verbose write setting:

# strace -e write=all -e all -p PID
Process PID attached - interrupt to quit
read(14,

Okay, we are set up and ready to go, but there is one additional thing: the way direct path reads work, with fast IO they would probably produce very few waits. One way to get the waits back is to limit the IO capabilities of the process. Doing so is documented in this article.
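As a rough sketch of that approach, assuming the cgroup v1 blkio controller is available (the device major:minor numbers 8:16, the 1MB/s limit and PID are placeholders for illustration):

# create a cgroup and throttle reads on device 8:16 to 1MB/s
mkdir /sys/fs/cgroup/blkio/throttled
echo "8:16 1048576" > /sys/fs/cgroup/blkio/throttled/blkio.throttle.read_bps_device
# move the Oracle server process into the cgroup
echo PID > /sys/fs/cgroup/blkio/throttled/tasks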

Now issue the full table scan on a large table in sqlplus while strace is attached:

SQL> select count(*) from bigtab;

Now take a peek at the strace output!
The output first shows IOs as we expect:

io_getevents(139717184229376, 1, 128, {{0x7f126dd3d780, 0x7f126dd3d780, 33554432, 0}}, {600, 0}) = 1
times(NULL)                             = 431386800
write(7, "\n*** 2014-11-24 13:09:28.028\n", 29) = 29
 | 00000  0a 2a 2a 2a 20 32 30 31  34 2d 31 31 2d 32 34 20  .*** 201 4-11-24  |
 | 00010  31 33 3a 30 39 3a 32 38  2e 30 32 38 0a           13:09:28 .028.    |
lseek(7, 0, SEEK_CUR)                   = 31181
write(7, "WAIT #139717129509840: nam='dire"..., 130) = 130
 | 00000  57 41 49 54 20 23 31 33  39 37 31 37 31 32 39 35  WAIT #13 97171295 |
 | 00010  30 39 38 34 30 3a 20 6e  61 6d 3d 27 64 69 72 65  09840: n am='dire |
 | 00020  63 74 20 70 61 74 68 20  72 65 61 64 27 20 65 6c  ct path  read' el |
 | 00030  61 3d 20 33 39 30 37 33  30 20 66 69 6c 65 20 6e  a= 39073 0 file n |
 | 00040  75 6d 62 65 72 3d 34 20  66 69 72 73 74 20 64 62  umber=4  first db |
 | 00050  61 3d 37 34 31 33 37 36  20 62 6c 6f 63 6b 20 63  a=741376  block c |
 | 00060  6e 74 3d 34 30 39 36 20  6f 62 6a 23 3d 32 30 34  nt=4096  obj#=204 |
 | 00070  37 34 20 74 69 6d 3d 31  39 32 30 30 37 31 30 31  74 tim=1 92007101 |
 | 00080  39 39                                             99                |

What is visible here is first the reap of an IO request (with asynchronous IO on Linux this is typically the io_getevents() call). Take a close look at the arguments of the io_getevents() call (signature taken from the manpage of io_getevents):

int io_getevents(aio_context_t ctx_id, long min_nr, long nr, struct io_event *events, struct timespec *timeout);

And then focus on the struct io_event:

struct io_event {
         __u64           data;           /* the data field from the iocb */
         __u64           obj;            /* what iocb this event came from */
         __s64           res;            /* result code for this event */
         __s64           res2;           /* secondary result */
};

The above description is taken from the annotated Linux kernel source, as available here: http://lxr.free-electrons.com/source/include/uapi/linux/aio_abi.h#L58 I use this site for navigating the Linux kernel source. What is important is that the third field (io_event.res) contains the size of the IO request. Having learned this, now look again at the io_getevents() call. The size of the IO reaped above is 33554432, which is 33554432/1024/1024 = 32MB. Yes, that's a single IO of 32MB! Also, this is consistent with the wait line a little lower:

 | 00050  61 3d 37 34 31 33 37 36  20 62 6c 6f 63 6b 20 63  a=741376  block c |
 | 00060  6e 74 3d 34 30 39 36 20  6f 62 6a 23 3d 32 30 34  nt=4096  obj#=204 |

Block count (4096) * block size (8192) = 33554432 bytes = 32MB
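As an aside, here is a minimal, self-contained C sketch of where io_event.res comes from, using the raw Linux AIO system calls (the file name is a placeholder and error handling is omitted for brevity; this is an illustration, not Oracle's code):

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/aio_abi.h>

int main(void)
{
        aio_context_t ctx = 0;
        struct iocb cb;
        struct iocb *cbs[1] = { &cb };
        struct io_event ev;
        /* O_DIRECT requires an aligned buffer */
        char buf[4096] __attribute__((aligned(4096)));
        int fd = open("/tmp/testfile", O_RDONLY | O_DIRECT); /* placeholder file */

        syscall(SYS_io_setup, 128, &ctx);                /* create an AIO context */
        memset(&cb, 0, sizeof(cb));
        cb.aio_lio_opcode = IOCB_CMD_PREAD;              /* asynchronous read */
        cb.aio_fildes     = fd;
        cb.aio_buf        = (unsigned long)buf;
        cb.aio_nbytes     = sizeof(buf);
        syscall(SYS_io_submit, ctx, 1, cbs);             /* submit the IO */
        syscall(SYS_io_getevents, ctx, 1, 1, &ev, NULL); /* reap the completion */
        printf("io_event.res (IO size or error) = %lld\n", (long long)ev.res);
        syscall(SYS_io_destroy, ctx);
        close(fd);
        return 0;
}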

So, whatever you thought was possible: the correct answer, on my operating system (Linux x86_64) with Oracle 12.1.0.2, is 32MB. It turned out the big IOs in the ASM case were limited by the allocation unit size of 1MB.

The next thing I wondered is how this matches the maximum IO size of the disk devices as seen by the operating system. You can request 32MB, but a normal SCSI disk doesn't do 32MB IOs. Of course, in my case the SCSI disk really is a VMware virtual disk device.

Let's keep the 32MB IO in mind, and now dive from the top layer, the SCI (system call interface) where an IO enters the kernel, down to the bottom of the kernel from an IO perspective: the block device. The block device settings are found in /sys/block/DEVICE/queue. The maximum IO size the device is capable of is found in max_hw_sectors_kb. This is in kilobytes, and read only (can't change hardware, right?). In my case this is:

[root@bigmachine queue]# cat max_hw_sectors_kb
4096

My disk supports a maximum IO size of 4MB! But this is not what is used; the actual setting is in max_sectors_kb:

[root@bigmachine queue]# cat max_sectors_kb
512

That’s half a megabyte!
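The active limit can be changed at runtime. As a sketch (the value must not exceed max_hw_sectors_kb, and test the effect on your own workload; this is not a general tuning recommendation):

[root@bigmachine queue]# echo 4096 > max_sectors_kb
[root@bigmachine queue]# cat max_sectors_kb
4096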

So… we got (up to) 32MB sized IO requests coming in, and a device that is set to 512KB IOs. This means that somewhere between the SCI and the device, there is a mechanism to split the request into chunks of the device's maximum IO size, and, once those IOs are done, to gather the results back into the original request.

There are a couple of layers in the Linux kernel through which the call travels (including common functions):

-SCI/system call interface: system_call, sys_io_submit…. (io_submit, do_io_submit, io_submit_one; these seem to be in the VFS layer)
-VFS/virtual filesystem: aio_run_iocb, do_aio_read, xfs_file_read_iter, generic_file_read_iter, xfs_vm_direct_IO, bio_*, kiocb_batch_refill
-Block layer: blk_finish_plug, blk_flush_plug_list, queue_unplugged, __blk_run_queue, blk_run_queue
-SCSI layer: scsi_*
-Device driver: mptspi_qcmd, mptscsih_qcmd, mpt_put_msg_frame

(Note: there seems to be consensus that the above-mentioned layers exist, although different sources use different wording and a different number of layers. Also, there doesn't seem to be a very clear description of what is done by which layer, or of what typically defines a kernel layer. For some functions it is clear which layer they belong to (for example aio_run_iocb in Linux/fs/aio.c, blk_finish_plug in Linux/block/blk-core.c, etc.); for some layers, like the SCI layer, there doesn't seem to be a clear definition based on where the functions live. Also please mind the SCSI layer is implemented as a driver, just like the actual device driver for the hardware. This is very understandable, but makes it a bit harder to see it in a layered way.)

System Call Interface (SCI)
The request enters kernel space via the SCI. The function of the SCI is to switch a process into kernel mode so it can perform a kernel task, like (but not limited to) doing IO. The system call implementation on Linux makes use of a wrapper function in glibc, which executes the system call on behalf of the user's request. The reason for mentioning this is that sometimes the glibc wrapper "hides" the real system call, for example when calling the semtimedop() function:

(gdb) break semtimedop
Breakpoint 1 at 0x3bb38eb090: file ../sysdeps/unix/syscall-template.S, line 82.
(gdb) c
Continuing.

Breakpoint 1, semtimedop () at ../sysdeps/unix/syscall-template.S:82
82	T_PSEUDO (SYSCALL_SYMBOL, SYSCALL_NAME, SYSCALL_NARGS)

Above is a gdb (GNU debugger) session attached to an Oracle background process which I know sleeps in the system call semtimedop() when idle. A breakpoint is set on the semtimedop function, and the execution of the attached process is resumed. It then breaks on the function, showing the source code at which the break happened. Instead of showing the actual semtimedop function, it shows the pseudo function in glibc which wraps this system call. This hides the arguments of the semtimedop() call. My current workaround is to read the CPU registers which "carry" the arguments (RDI, RSI, RDX, RCX, R8, R9 for the first 6 arguments in most cases).
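For example, the first three arguments can be read at the breakpoint like this (a sketch; the values shown will of course differ per call):

(gdb) info registers rdi rsi rdx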

Virtual File System (VFS)
The next layer is the virtual filesystem. Here we see functions specific to asynchronous or synchronous IO, to direct IO or not, and also actual filesystem-specific functions (in my case xfs; when ext4 is used, you will see specific functions for that. I highly recommend XFS!). This layer also uses a structure called 'request_queue', which keeps track of the actual IO requests for a block device. Each individual request is a struct 'request', which contains one or more structs called 'bio' which contain a description of the request, which points to a structure called 'bio_vec', which points to pages for storing the disk request contents. This is all set up and created in kernel memory by the user process in system mode. It's my assumption that the properties of the disk device (= the maximum advertised IO size) are taken into account when the VFS filesystem implementation creates the requests and all necessary structs and memory areas. Please mind it's important that enough memory is available to set up the necessary structures, and enough CPU to make this happen. Also, some of the crucial structures for doing IO (request, bio, bio_vec) seem to be set up in this layer. An IO can't be done without a memory area for the IO request to hold the data to send to the device (in the case of a write), or a memory area to hold the data fetched from the device (in the case of a read).
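As a rough illustration, heavily simplified versions of two of these structures (not the literal kernel definitions, which differ per kernel version) relate to each other like this:

/* simplified sketch, not the actual kernel source */
struct bio_vec {
        struct page     *bv_page;   /* page holding (part of) the data */
        unsigned int     bv_len;    /* length of this segment in bytes */
        unsigned int     bv_offset; /* offset within the page */
};

struct bio {
        sector_t         bi_sector; /* device address in 512-byte sectors */
        struct bio      *bi_next;   /* chain of bios in a request */
        struct bio_vec  *bi_io_vec; /* the vector of data segments */
        unsigned short   bi_vcnt;   /* how many bio_vecs */
        /* ... many more fields ... */
};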

The funny thing is that when you use ASM (the simple version 11.2 ASM with a local ASM instance and local disk devices), you will still see some functions of the VFS layer, because you use a disk device which is opened using the local filesystem. Examples of these functions are: aio_run_iocb, do_aio_read.

Block Layer
The next layer is the block layer. Here the request queue is handled, and IO scheduling is done. Oracle advises the deadline scheduler in all cases. The scheduler works by plugging a request queue, much like the plug in your bathtub, letting requests collect in the queue. Having multiple requests in a queue means they can be optimised by reordering them, and by merging adjacent requests up to the device's advertised maximum IO size. Once a request's timeout expires, or the requesting process finishes submitting IO, the queue is unplugged.

SCSI layer
The SCSI layer is responsible for communicating with SCSI devices to do IOs.

Device driver
The device driver layer is the layer that truly physically communicates with a device, and implements the device specific communication. In my case the functions start with mpt, which is the driver for LSI PCI adapters.

To see how the IO flows through the block layer, there is a tool called blktrace. Actually this is a mini-suite of tools consisting of blktrace (tracing the IO requests through the block layer), blkparse (parsing the output of blktrace to make it human readable), btrace (a script combining blktrace and blkparse) and btt (a blktrace output post-processing tool), among others.

In order to use blktrace, the debug file system of the Linux kernel needs to be mounted. Here is how that is done:

# mount -t debugfs debugfs /sys/kernel/debug

If the kernel debugfs is not mounted, you get the following message:

[root@bigmachine ~]# btrace /dev/oracleasm/disk1
Invalid debug path /sys/kernel/debug: 0/Success

I use blktrace in this article for looking at the IO requests to understand what is going on. The workflow for this use of blktrace is:
– create a trace file of the block flow using blktrace
– make the trace file human readable via blkparse or analyse via btt (block trace times)

Actually, you can parse the output of blktrace directly via blkparse using 'blktrace -d DEVICE -o - | blkparse -i -'. To make that even simpler, the script 'btrace' exists to do exactly that.

Here's how that looks (depending on the number of processes using the device, the output can be huge; this is only a snippet):

[root@bigmachine ~]# btrace /dev/oracleasm/disk1
...
  8,16   0       57     0.260669503  2421  Q  WS 4088 + 8 [asm_gmon_+asm]
  8,16   0       58     0.260672502  2421  G  WS 4088 + 8 [asm_gmon_+asm]
  8,16   0       59     0.260673231  2421  P   N [asm_gmon_+asm]
  8,16   0       60     0.260674895  2421  I  WS 4088 + 8 [asm_gmon_+asm]
  8,16   0       61     0.260675745  2421  U   N [asm_gmon_+asm] 1
  8,16   0       62     0.260677119  2421  D  WS 4088 + 8 [asm_gmon_+asm]
  8,16   0       63     0.260882884     0  C  WS 4088 + 8 [0]
...

What is shown here, is the typical flow of an IO in the block layer:
Q – Queue. A request starts off sending a notification on the intent to queue at the given location.
G – Get request. A struct request is allocated.
P – Plug. When the block device queue is empty, the queue is plugged in order to receive further IOs and have the ability to optimise (merge and/or reorder) them before the data is sent to the device.
I – Insert. A request is sent to the IO scheduler for addition to the internal queue and later service by the driver. The request is fully allocated at this time.
U – Unplug. The start of sending requests to the driver.
D – Driver. A request has been sent to the driver and removed from the queue.
C – Complete. A previously issued request to the driver has been completed.

The main point here is that you can truly see how the IO requests flow through the block layer and are issued to the storage device; in other words, you can see how the block layer receives the IOs, and what exactly is submitted to the driver as the request for the physical storage layer.

This is a microscopic view of the disk IOs. In most cases, when you want to gain information on block layer IO processing, another view on it is provided by processing blktrace output with btt. This is an example output of btt:

First capture IO events using blktrace:

[root@bigmachine ~]# blktrace -w 60 -d /dev/oracleasm/disk1 -o - | blkparse -d sdb.blkparse -i -

In this example I captured IOs for 60 seconds. You can exclude '-w 60', and press interrupt (ctrl-c) when you think enough IO has been recorded. This produces a binary file 'sdb.blkparse', which can be used as input for btt.
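A minimal btt invocation then looks like this (-i names the input file produced above):

[root@bigmachine ~]# btt -i sdb.blkparse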

This is the first part, the flow through the block layer until IO completion:

==================== All Devices ====================

            ALL           MIN           AVG           MAX           N
--------------- ------------- ------------- ------------- -----------

Q2Q               0.000000001   0.239795347   3.002829973         238
Q2G               0.000000001   0.159337842   3.011192142         264
G2I               0.000000679   0.000001724   0.000011618         264
I2D               0.000000764   0.000007633   0.000153436         264
D2C               0.000000001   0.103328167   3.012509148         233
Q2C               0.000000001   0.270961298   3.012516496         233

Note: times are in seconds.
Q2Q – Time between IO requests.
Q2G – Time it takes for a request struct to be allocated.
G2I – Time it takes for the request to be inserted into the device's queue.
I2D – Time spent in the device queue waiting to be issued to the driver.
D2C – Time spent between issuing to the driver and completion of the request. This includes the controller and storage. This is the same figure as the 'svctm' column of iostat -x.
Q2C – Total time spent in the block layer plus the physical IO. This is the same figure as the 'await' column of iostat -x.
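If you want to cross-check these figures, iostat can show the corresponding columns (a sketch; sdb is a placeholder device name and 1 is the reporting interval in seconds):

[root@bigmachine ~]# iostat -x sdb 1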

The second part is the device overhead section:

==================== Device Overhead ====================

       DEV |       Q2G       G2I       Q2M       I2D       D2C
---------- | --------- --------- --------- --------- ---------
 (  8, 16) |  66.6284%   0.0007%   0.0000%   0.0032%  38.1339%
---------- | --------- --------- --------- --------- ---------
   Overall |  66.6284%   0.0007%   0.0000%   0.0032%  38.1339%

This is partly the same as the IO flow table above, but expressed as a percentage of where the total IO time is spent.
Q2G – Request struct allocation.
G2I – Insertion into the device queue.
Q2M – Total time until merge.
I2D – Time spent in the queue until dispatch to the driver.
D2C – Time spent doing the IO after submitting the request to the driver.

Tagged: blkparse, blktrace, btt, internals, IO, IO size, linux, oracle

Line Numbers

One of the presentations I went to at the DOAG conference earlier this month was called "PL/SQL Tuning, finding the perf. bottleneck with hierarchical profiler" by Radu Parvu from Finland. If you do a lot of PL/SQL programming and haven't noticed the dbms_hprof package yet, make sure you take a good look at it.

A peripheral question that came up at the end of the session asked about problems with line numbers in pl/sql procedures; why, when you get a run-time error, does the reported line number sometimes look wrong, and how do you find the right line. I can answer (or give at least one reason for) the first part, but not the second part; Julian Dontcheff had an answer for the second bit, but unfortunately I failed to take a note of it.

Here’s the SQL to create, run and list a very simple (and silly) procedure.


define m_value = 3
set timing on

create or replace procedure silly
as
        m_n number;
begin
        for i in 1..1000000 loop
                m_n := exp(sqrt(ln(&m_value)));
        end loop;
end;
/

execute silly

list

Here’s the output I got from running this, and the thing I want you to note is the time it takes to run, and the line number reported for the assignment:


Procedure created.

Elapsed: 00:00:00.01

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
  1  create or replace procedure silly
  2  as
  3          m_n number;
  4  begin
  5          for i in 1..1000000 loop
  6                  m_n := exp(sqrt(ln(&m_value)));
  7          end loop;
  8* end;

It seems to take my instance 0.00 seconds to perform 1,000,000 evaluations of the exponent of the square root of the natural logarithm of 3 at line 6 of the code. But let's make two changes; first, let's try that with the value -1 (which is a little careless if you know your logarithms).


Procedure created.

Elapsed: 00:00:00.02
BEGIN silly; END;

*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error
ORA-06512: at "TEST_USER.SILLY", line 5
ORA-06512: at line 1

Notice that the 6502 error is reported at line 5, not line 6.

Now let’s go back to the value 3, but start the script with the command: alter session set plsql_optimize_level = 0; (the default level is 2, the range is 0 to 3):


Session altered.

Elapsed: 00:00:00.00

Procedure created.

Elapsed: 00:00:00.01
BEGIN silly; END;

*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error
ORA-06512: at "TEST_USER.SILLY", line 6
ORA-06512: at line 1

Reducing the pl/sql optimisation level to zero (or one) results in the error message reporting the problem at line 6 – which matches our original code. So let's check the effect of running the code at level zero with a valid number for the input.


Session altered.

Elapsed: 00:00:00.00

Procedure created.

Elapsed: 00:00:00.02

PL/SQL procedure successfully completed.

Elapsed: 00:00:24.56


The run time for our pl/sql call has gone up from 0.00 seconds to 24.56 seconds.

Like all good compilers, the pl/sql compiler recognised (at level 2) that we were assigning a constant inside a loop, so it had effectively rewritten our code to move the assignment outside the loop (effectively swapping lines 6 and 5), storing the change in the "object" level code, but not in the database copy of the source. The benefit we get (from the default settings) is a saving of 25 seconds of CPU; the (small) penalty we pay is that the lines reported for run-time errors aren't always going to identify the text that caused the problem.
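If you want to check which optimisation level a stored unit was compiled with, the data dictionary records it; a quick sketch using the standard user_plsql_object_settings view:

select name, type, plsql_optimize_level
from   user_plsql_object_settings
where  name = 'SILLY';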

HOWTO: Create a Structured XMLIndex, using the In-Memory Column Store

In Oracle database version 12.1.0.2, Oracle introduced the Oracle In-Memory Database option. It is possible…

Agents Management Via EM12c Release 4 Console

I’m going to take a break from the OMS/OMR Performance pages and jump over to the Agents page in release 12.1.0.4.  You can access this page from the Setup menu drop down in EMCC:

[image: agent10]

Main Agent Management Page

The Agents Management page will default to show you all agents and looks very similar to the All Targets page when filtered by a target type, but this page focuses on Agents, something the All Targets page just doesn’t do well no matter how you filter.

[image: agent1]

A few things are going to stand out when you come to the Agents Management page-

  • The names of all the agents deployed.
  • The status of the agents, (which often is the first place you should look when you have an issue with a target.)
  • Counts of how many targets it monitors and if there are any broken targets.  (I haven’t taken into consideration how this count may be affected by composite targets, still looking into this…)
  • Status, incident and when the last successful upload from the agent occurred.

At the top of the graph are actions that can be used to manage the agents from the list.

  • Block and Unblock
  • Start up and Shut down
  • Restart
  • Secure and Unsecure
  • Properties
  • Query by Example
  • Detach

Depending on the current status of the agent, different actions will be available. To choose an agent to perform an action on, you must move your cursor to the box at the very left of the agent, or to a spot on the agent line that does not have a link. If a link exists, it's going to take you to the corresponding page (i.e. Agent Home page, Incident Manager, etc.).

[image: agent2]

One of the cool things is you can perform actions on multiple agents by highlighting more than one agent in the list, as you see above. There are a couple of ways to do this:

  1. Bring your cursor to the left side box of the agent name, then left-click, hold, and move your cursor down to the last agent you wish to perform an action on.
  2. The standard way of choosing multiple lines in office programs: hold down the ctrl key and left-click on the individual agents, or…
  3. Left-click on the first, hold down the shift key, and then click on the last one to select a succession of agents from the list.

Once you have chosen the agents you are interested in performing an action on, click on what you wish to do from the menu above the graph and follow through with any required information to complete the action.

Agents with a Down Status

Being able to quickly see which agents are down is valuable; way too often I've seen administrators become accustomed to not having a 100% green status and give up on correcting the issue. Focusing on your agents first is a solid path to resolving this.

[image: agent6]

You can quickly choose an agent and attempt to start it (I recommend doing this action one at a time, so you can see if anything is amiss, troubleshoot, and proceed). Once the agent is started, expect some "noise" in the form of EM notifications as Cloud Control gets a handle on what it's been missing while the agent's been down.

Agent Unreachable

An unreachable status has always bothered me. It's not just down: the OMS can't reach it due to some issue, and you should expect some type of troubleshooting to correct the problem. Luckily, you can first hover your cursor over the status, and the Agent Management page will show you the high-level reason behind the agent being in an unreachable status. The one we've highlighted below shows that it can't write to the file system, so we are already way ahead on what will need to be corrected for this agent to return to an up (green) status.

[image: agent4]

Agents that are blocked are also shown clearly in the status section; keep in mind that these appear twice, as they are also shown under their own filter in the Blocked view, which we discuss more in depth below. For the rest of the agents, you will need to double-click on the agent name and investigate the logs to see what has caused the issue. If the data isn't up to date enough to explain the issue, then the best course of action is to follow my instructions in this post to correct the problems.

Agent Blocked

[image: agent3]

Blocked agents are quite straightforward, and if you hover your cursor over the Blocked Reason, the full description and how to correct it will be shown. I had to laugh when it said to have the EM administrator do this, but for many, the separation of duties is clear and this message will be very helpful. For those of you that do need to perform this yourself, you can resync the agent by following this blog post from Adeesh Fulay.

Misconfigured Agents

Now I don’t have any misconfigured agents to show you in our environment:

[image: agent11]

What I can tell you is that the Misconfigured status, like Agent Unreachable, will offer you high-level information if you hover over the status section for the agent in question. As this is a misconfigured agent, the best advice I can offer is to go to the host the agent resides on and attempt the following command:

emctl status agent -details

This should give you information on what the agent believes the OMS upload URL/port is, where the agent believes the AGENT_HOME is, etc. It's a good place to start, but gather information from the emctl.log, the emctl*.out and other logs to find out what is causing the issue.
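A couple of other emctl subcommands are commonly useful at this point (assuming a standard EM12c agent installation, run from the agent's bin directory):

emctl pingOMS        # verify the agent can reach the OMS
emctl upload agent   # force an upload attempt; the error text often reveals the cause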

Advanced Performance Data for Agents

The default view is not the only data that can be displayed. Note that there are a number of columns that can be added to or removed from the display, allowing you to answer any number of questions regarding EM12c agents.

[image: agent7]

As you can see in the next screenshot, you can even change the Agents Management page to view performance data, which can help you if you are experiencing backlog or agent resource usage issues.  I’ve changed my view to display the following:

  • Agent Name, Incident information
  • Backlog
  • CPU Usage
  • Restarts

[image: agent12]

I can now sort by backlog data to display which agents have a backlog of collections to upload, and view the higher CPU usage (though as you can see in this graph, it's pretty light, considering… :)). This gives me an opportunity to optimize these agents' collections, upload intervals and the data they're collecting, to give them better performance.

Locating All Targets for an Agent

Last but not least, at the bottom left of the Agents Management page you'll see a link ([image: agent13]) that will take you to the search page, where you can view which agents support which targets in Cloud Control.

[image: agent9]

By typing in a filter for a target type (or you can choose to leave this blank), you can then type in the name of an agent, or a partial agent name, and all targets and agents matching that naming convention will be listed. You will see each target and the agent that supports it.

As unimportant as some may think the Agents Management page is to the Manage Cloud Control offering in the console, I know how often the agent is the first place one should look for a solution to upload issues, incidents and collection errors. I hope administrators will come to appreciate this valuable console page and how much time it can save them when issues arise.

 

 







Copyright © DBA Kevlar [Agents Management Via EM12c Release 4 Console], All Right Reserved. 2014.

Lunchtime quiz

There was a question on OTN a few days ago asking the following question:

Here’s a query that ran okay on 11g, but crashed with Oracle error “ORA-01843: not a valid month” after upgrade to 12c; why ?

The generically correct answer, of course, is that the OP had been lucky (or unlucky, depending on your point of view) on 11g – and I’ll explain that answer in another blog posting.

That isn't the point of this posting, though. This posting is a test of observation and deduction. One of the respondents in the thread had conveniently supplied a little bit of SQL that I copied and fiddled about with to demonstrate a point regarding CPU costing, but as I did so I thought I'd show you the following and ask a simple question.


drop table T;

Create Table T
As
with
periods as (
                  Select 'January' period, 1 cal  From Dual
        union all Select 'February' period , 2 cal From Dual
        union all Select 'March' period , 3 cal From Dual
        union all Select 'April' period , 4 cal From Dual
        union all Select 'May'  period, 5 cal From Dual
        union all Select 'June' period, 6 cal From Dual
        union all Select 'July' period, 7 cal From Dual
        union all Select 'August' period, 8 cal From Dual
        union all Select 'September' period, 9 cal  From Dual
        union all Select 'October' period, 10 cal From Dual
        union all Select 'November' period, 11 cal From Dual
        Union All Select 'December' Period, 12 Cal From Dual
        Union All Select '13 Series' Period, Null Cal  From Dual
)
Select  Period,Cal
from periods;

prompt  ==================================
prompt  When we invoke below SQL it works.
prompt  ==================================

set autotrace on explain

select *
from    (
        select
                Period,
                Cal,
                to_date(Period || ', ' || 2014,'Month, YYYY') col1 ,
                to_date('November, 2014','Month, YYYY') col2
        From  T
        Where  Cal > 0
        )
;

prompt  ================================================
prompt  But when we add comparison operations , it fails
prompt  ================================================

select *
from    (
        select
                Period,
                Cal,
                to_date(Period || ', ' || 2014,'Month, YYYY')   col1,
                to_date('November, 2014','Month, YYYY')         col2
        From  T
        Where  Cal > 0
        )
where
        col1 >= col2
;

set autotrace off



All I’ve done is create a table then run and generate the execution plans for two queries – with a comment that if you try to run one query it will succeed but if you try to run the other it will fail (and raise ORA-01843). As far as the original supplier was concerned, both queries succeeded in 11g and the failure of the second one appeared only in 12c. In fact, for reasons that I won’t discuss here, it is POSSIBLE for the failure to appear in 11g as well, though not necessarily with this exact data set.

Here’s the COMPLETE output I got from running the code above on an 11.2.0.4 instance:



Table dropped.


Table created.

==================================
When we invoke below SQL it works.
==================================

PERIOD           CAL COL1      COL2
--------- ---------- --------- ---------
January            1 01-JAN-14 01-NOV-14
February           2 01-FEB-14 01-NOV-14
March              3 01-MAR-14 01-NOV-14
April              4 01-APR-14 01-NOV-14
May                5 01-MAY-14 01-NOV-14
June               6 01-JUN-14 01-NOV-14
July               7 01-JUL-14 01-NOV-14
August             8 01-AUG-14 01-NOV-14
September          9 01-SEP-14 01-NOV-14
October           10 01-OCT-14 01-NOV-14
November          11 01-NOV-14 01-NOV-14
December          12 01-DEC-14 01-NOV-14

12 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |    12 |   228 |     2   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T    |    12 |   228 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("CAL">0)

Note
-----
   - dynamic sampling used for this statement (level=2)

================================================
But when we add comparison operations , it fails
================================================

PERIOD           CAL COL1      COL2
--------- ---------- --------- ---------
November          11 01-NOV-14 01-NOV-14
December          12 01-DEC-14 01-NOV-14

2 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |    19 |     2   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T    |     1 |    19 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("CAL">0 AND TO_DATE("PERIOD"||', '||'2014','Month,
              YYYY')>=TO_DATE(' 2014-11-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))


So this is the question. What's the anomaly in this output?

Bonus question: What's the explanation for the anomaly?

The User Group Tour

Those of you that have been around Oracle for some time may already be aware of the various OTN tours. These tours bring well known speakers to some of the smaller (relatively speaking) regions that often don’t get to see some of the big name speakers, simply because of audience size. Over the past couple of years, I’ve been involved in the OTN APAC tour, and recently returned from the New Zealand leg of the tour for this year. I presented two workshops – one on Database Lifecycle Management and one on DBaaS – as well as standing in for a sick presenter and covering Snap Clone in Enterprise Manager 12c. For me, the best value for the conference was (as it so often is) the time spent interacting both with customers and other speakers / exhibitors. It was great to catch up with so many people I haven’t seen for a long time, both from within Oracle and outside of it.

In many ways, the New Zealand leg of the OTN APAC Tour is the start of my very own user group tour. Over the next few months, I’ll be presenting at UKOUG, RMOUG and possibly Collaborate 15 and Kscope 15. Should be a fun time, provided my back and knee hold out. I was on crutches at the NZOUG conference and will be for UKOUG as well.

UKOUG Tech14

This will be my first time at the UKOUG conference. I hear it’s one of the better user group conferences to get to. Of course, in my case getting there is going to be the problem since I’ll still be on crutches. I’m flying from Canberra to Manchester with two stops:

  • Canberra to Sydney – the shortest leg, just to prepare me for the next leg. :) That’s only a 55 minute flight, so most of the time will be getting to and from cruising height. I suspect this one will be a Dash 8, which means I’ll have to check my laptop bag as premium luggage (taken from and delivered to me at the steps into the plane) and have to carry on the laptop itself. All these things are relatively easy when you’re not on crutches, but adding the crutches adds a whole new dimension of complexity!
  • Sydney to Dubai – after a 2 hour layover in Sydney, this is the real ugly one – 15 hours, 28 minutes estimated travel time. UGH! :( At least this time I can carry the laptop bag on. I had to get a new one so I could carry it on my back since wheeling a bag around isn’t the easiest of options with crutches! :)
  • Dubai to Manchester – after a 3 hour 40 minute layover in Dubai, a short 7 hours, 32 minutes – well, short compared to the leg before it anyway!

After landing in Manchester, I’ll be met by a good friend who’s volunteered to drive me to Liverpool. I wonder what state I’ll be in by that time!

Once I’m at the conference, I’ll kick things off with Super Sunday, probably attending most of the OakTable stream. I’m looking forward to meeting a few of my colleagues on the Oak Table from Europe that I haven’t met as yet, and of course there’s the ACE Dinner that night that the organizers have kindly invited me to. During the Tech14 conference itself, I’ll be helping out with Enterprise Manager Round Table at 2:30 on Monday, as well as presenting on Snap Clone on Tuesday at 3:00. I’m also visiting Barclays on the Thursday to discuss EM and how it can help them in their environment, before the long haul home again.

RMOUG Training Days

The RMOUG Training Days conference is one of my personal favourites, and to me one of the best grass roots user group conferences out there. I've presented at it multiple times, and have always enjoyed meeting up with my good friends on the organizing committee. This time round, I have two hands-on workshops (one on Delivering Pluggable Database as a Service (PDBaaS) with Enterprise Manager and the other on Delivering Schema as a Service with Enterprise Manager), and I am doing the Snap Clone presentation as well.

The two workshops are really designed to show you Database as a Service (DBaaS) in two different versions of the Oracle database. Schema as a Service works with releases prior to Oracle Database 12c, and as we all know, there are a lot of people out there who haven’t made the switch to DB12c as yet. Of course, there can be multiple reasons for that, but if you can, I really do suggest you make the move to DB12c and then look at using PDBaaS instead of Schema as a Service. It’s a much more straightforward implementation at the application level, since it addresses the namespace collision issue you need to deal with in Schema as a Service (see my posts here and here on Schema as a Service if what I just said is double dutch to you. ;) )

Once the conference is over, I’m looking forward to at least seeing some snow close up. I doubt I’ll be of much use for skiing or similar with my back and knee problems, but I can always hope to see Carol Dacko disappear from sight as she attempts a snow angel again! :)

Collaborate 15

This is another conference that I haven't attended before, which I'm hoping to get to this time round. At the time I'm writing this, they haven't announced acceptances for the conference, so I may or may not get there. Again, it's a conference I've been wanting to get to but just haven't had the opportunity as yet. Hopefully this year I'll get there. The only downside for me is that it's in Las Vegas, a city I visited plenty of times for work when I was based in the US. It's one of those places I wanted to visit to tick off my bucket list, but it really doesn't hold a lot of attraction for me now I've been there. I'm not a gambler so the gaming side doesn't interest me at all, but the shows are usually worth investigating. :)

Kscope 15

Like Collaborate 15, this is another conference I haven’t been to. I submitted a couple of papers for it as some of my colleagues said this was a great conference to get to, so we’ll see what comes of those!

So there you have it – a bunch of travel coming up to present at a variety of user group conferences. That’s one of the parts of my job as a database architect in the DBaaS group within EM that really makes me happy. I love talking with customers about what EM and DBaaS in particular can do to help them, so if you see me hobbling around on crutches at any of these conferences, come up and say hi!

Conferences and the Etiquette of Meeting New People

One of the reasons I like user group conferences is meeting new people in my “area” – these people at the conference not only like technology and probably some of the same technology I do but, as they are at a conference, are probably inclined to learn stuff, meet people and maybe share ideas. I’m not actually very good face-to-face socially with people I do not know, so I like to tilt things in my favour if I can!

But sometimes you have odd experiences.

I was at a conference a couple of years back that was not in the UK. I was loving it, as I was meeting some people I knew electronically but had never met, and others whom I had only met when they visited UK conferences – but to most people I had no real connection, so I was a bit quiet (yes, loud-mouthed, always-comments-in-talks me was quiet), especially as it was the first day. I was late for the lunch event, I don't remember why, but it was a little unusual in being a sit-down meal, and there were to be some meet-the-expert tables; I wanted to be able to identify some people I knew by name but had never met. The big signs on tables would be a bit of a help there.

As I came in I saw half the tables were full and most of my friends were already on a full table. I wandered over to a half-full table and as I sat down I said “Hello, I’m Martin Widlake, How you doing?” or whatever to the half-dozen guys there. They all looked at me. A couple nodded or said “hi” but one said “We’ve got other friends turning up soon”. Hmm, that was code for “sod off”, I think.

I turned on the full English Accent so they could tell I was not from around those parts. “That’s nice – always good to have friends when you are at a conference….especially one where you don’t know many people?”. Some smiled, they all understood my mild reprimand. Mr Friendly who had mentioned all his other friends did not smile though. After this opening “chat” no one else really said anything more to me.

The Starter turned up and the guys all spoke to each other – and ignored me. Some other lone gun wandered up and asked me if he could sit next to me – “Sure, feel free – I’m Martin Widlake, I’m from the UK”. He introduced himself and sat down. Mr Friendly piped up “There are more people joining us at this table, I’m not sure there is gonna be room”. Some of his already-present friends had the decency to look a bit apologetic and I simply said “Well, it’s pretty full on all the tables now – and he’s got his starter” as the waitress put down a plate. And I pointedly started talking to the new chap.

Main turns up and so do a couple of late members for Mr Friendly’s group, who sat down at the remaining spare seats. “I told you” he said “you might have to move”.

I’m losing my patience a bit now. “Well they can sit somewhere else I’m sure, seeing as they are late.”

Mr Friendly is getting angry “I warned you when you sat down – when the others turn up, you’ll move”.

“I won’t move, this is my seat. I’m staying here”.

“Who the hell do you think you are?” demands Mr Friendly. Oh, thank you, I think to myself. I’m so going to enjoy this….

“Well, I did introduce myself when I arrived and….” pointing to the large sign with ‘Martin Widlake’ on it above the table “there is a large reminder of my name just there”. I paused a couple of seconds before adding “So this is my seat and my table and I’m kind of obliged to stay here, whether you want to talk to me or not”.

Maybe I could have handled the situation better from the start and stressed that the reason why I was staying was I was supposed to sit at my table. But I was smarting a little from the fact that no one apparently wanted to come to my table and talk to me. Maybe not surprising, as I don’t think I had done a presentation at the conference at that point – but my massive ego was already bruised.

So what about the etiquette of meeting people at conferences? It was just a title for a story I felt like telling…

However, there are a couple of things I do want to mention about the etiquette of meeting people at conferences. If you do not know many people there – just talk to people. People you don't know. Just make a few observational or open comments, nothing too direct – "This queue for the coffee is a bit long", "Have you seen any good/bad presentations yet?", "What do you think about ANSI join syntax?" (OK, that last one is a bad suggestion). Most people will respond, and those that do not are probably just very nervous – more nervous than you! – and almost no one will be like Mr Friendly above. And if they are like Mr Friendly, there are a few hundred other people you can go and try the odd comment on to see if you get a response.

At the social events you can see dozens and dozens of people just at the side or wandering around, not speaking to anyone. If you are one of them, few people are likely to come up to you and start a conversation (I’ve tried approaching the odd lone person but I stopped when I got Seriously Stalked at one of the UKOUG conferences). But if you go talk to other people, most of us will respond. And if someone does respond – Don’t stalk them!!! – have a conversation and then, having found it is possible, go and try some other people. The next day, if you see the people who responded last night, go and have a small chat again. But Don’t stalk them!!!.

Finally, talk to the presenters. We are actually the easy targets and not the hard ones. Those of us who present tend to be attention seekers so we are happy for you to come up and chat. And if you pretend you liked our talks we will certainly warm to you, so it’s an easy opening. However, it’s not like we are pop-stars or TV celebrities, we are just average people and you can come and chat to us (actually, I feel the same about pop-stars and TV celebrities, I don’t get easily star-struck but I know a lot of people do, even over Oracle Names).

But Don’t stalk them!!!.

And if someone insists on joining you at a table that has a name above it – listen carefully when they introduce themselves…

Why Women Should Consider a Smartwatch

I keep having this conversation over and over, in retail stores, restaurants and other establishments- women coming up to me and asking, “Can you tell me about that watch you have on?”

It made me realize a few things:

1.  It can be really intimidating to ask questions of the sales staff in most stores.

2.  The benefits I notice I get from the smartwatch are not always the same as for men.

3.  Seeing a woman wearing one lets other women know it's something they may want to consider.

So, even though I'm hoping everyone will benefit from this post, I really am hoping to make up for the lack of marketing of smartwatches to women by listing some of the benefits of owning one.

So let's start with the model that I currently possess: the Moto360.

It has a round face and comes in both a black model and silver with a leather band. I have the brushed aluminum with leather band, and it's my second very successful pairing of a smartwatch with an Android smartphone. Mine is also available to pair with an Apple iPhone, so I am happy to say that I recommend it highly for both phone OS options.

[image: moto-360-hands-on-630, photo: http://egadget.com]

My introduction to the smartwatch world was a Christmas present from Tim last year (on my request): the Sony Smartwatch2.

[image: sw2]

I started out with this paired to my Samsung S4, then migrated it to my Samsung Note3, which I still have at this time. Pairing these watches is as simple as a software download from the Play Store (or the iOS App Store for those with iPhones…) and then using the Bluetooth on your phone to connect to the watch.

Differences that Caused Me to Upgrade

1.  The Sony Smartwatch2 used an Android-based OS that was compatible with my phone, unlike the Moto360, which uses the newer AndroidWear OS. This means that instead of having to find apps that are compatible with my new watch, everything on my smartphone is automatically available at some level of notifications with AndroidWear, and I can choose what to exclude.

2.  The Moto360 has a smaller face and it's round. This means it doesn't get caught on EVERYTHING when I'm putting a sweater on or taking a coat off, and even when reaching my hand into my purse, it's less likely to get hung up.

Top Features of a Smartwatch

1. Solid notifications, as our smartphone is rarely on our person, but in our purse. Women's clothes commonly do not have functional pockets that can hold a smartphone. The pairing of a smartphone and smartwatch is pure genius for simplifying our lives and never missing out on important notifications.

2.  Size – our wrists are often much smaller than men's, and we need something that doesn't look like a monstrosity on our arm. I have a tiny, tiny wrist (my ring size is 4 1/2, so keep that in mind as you look at the photos), yet the watch doesn't stand out as ridiculous and the wrist band fits comfortably.

3.  Watch faces:

  • The ability to change the face of the watch to something less masculine if we want! :)  There are a TON of watch faces out there, but I chose to use my own background with the standard offering.

[image: 2014-11-24 16.42.32]

4.  Vibration notifications that we can control:

  • The ability to exclude applications and silence any notifications that don’t provide value.
  • The ability to answer, (at least via bluetooth headset) or send calls to voicemail after noting the callerID info.
  • Ability to quickly reply to text messages with pre-recorded list of responses vs. getting our phone out of our purse.

5.  Social Media

  • Ability to like Facebook posts.
  • Option to favorite or native retweet Twitter posts.
  • Simple options to interact with other social media platforms.

[image: 2014-11-24 16.43.44]

6.  Remote camera

  • Our hands are commonly smaller, making it difficult to click the shutter button on the camera screen when taking selfies and other pictures.  Having the option to do this on our watch is helpful.

7.  Health Monitoring

  • Any of those cool Fitbit features you've been eyeing? Yeah, there is a version out for AndroidWear and you can just download it. The sensors are already on the Moto360 to perform these functions.
  • Check your pulse
  • Create goals for yourself and check status during day on your progress.

[images: 2014-11-24 16.48.23, 2014-11-24 16.48.12]

What else can the watch do?

  • Let you know when an item has shipped (very helpful right now, with Christmas shopping upon us!)

[image: 2014-11-24 16.42.58]

  • Check the weather with a few swipes on your wrist before the kids head out, to ensure both you and they bundle up (or otherwise):

[image: 2014-11-24 16.42.44]

The Watch is Easy To Interact With

Turning your wrist to face you will light the face, unless you have ambient mode set to on (not the default, and it does use more battery life).

Swiping the face up and down will access different AndroidWear “cards”, which are active notifications you can view.

Swiping right, you can see different options available to you on the watch.

A quick tap on the screen face will bring up "Ok Google", with options that are voice activated, or you can scroll down to choose one.

  • Send a text
  • Send an email
  • Set a reminder
  • Play music
  • Set an alarm
  • Show your current status for activity goals.

Application Enhancements

If you do purchase one, I would highly recommend a few apps that made my experience much more beneficial:

Connect-  Moto360 app to tell you specific data about your Moto360 watch, including battery levels and health monitor profiles.

Wear App Manager-  Tells you a bit more about the app and allows you to manage just the Android Wear apps separately.

Wear Mini Launcher-  Extra management options for AndroidWear users, including the ability to hide apps, (good for password apps) and global shutoff for notifications.

1Weather-  Android app that has a great weather interface and notifies on multiple locations more accurately than some I’ve seen.  Very important when you live somewhere like Colorado where the weather changes every 5 minutes… :)

Wear Aware-  Vibration notification if your watch and phone become separated.  It’s also free, where some of the others want $$.

Wear Mail-  One mail app to offer advanced email notifications.  I like it as it seems to notify faster than my other email clients, so I just exclude those in notifications and use Wear Mail instead.

Silliest Feature I Love

I know this is going to be funny, but I love that while charging, it works as a clock. The blue-lit display isn't intrusive and doesn't disturb me, but if I wake up at night, it's easy to look over and see the time. I don't really use an alarm (I wake up about 15 minutes before any alarm, which is very annoying… :)), so my smartphone does the trick and I haven't owned an alarm clock for years.

[image: moto-360-charging]

 







Copyright © DBA Kevlar [Why Women Should Consider a Smartwatch], All Right Reserved. 2014.

Retrieving Bind Values from SQL Monitor in EM12c Release 4

I know others may want to know how to do this, and I had challenges until Gagan Chawla, a teammate from Oracle, was kind enough to point out how to still get to this hidden little gold nugget, so I'm posting it here for others!

Up until database plug-in 12.1.0.5, while on the SQL Monitor SQL ID details page, you could click a button called View Report and quickly view a large amount of valuable data about a SQL statement that had executed. One of the sections in this report was binds, which listed what values were being passed for the bind variables.

[image: binds1]

If you are investigating a performance issue for the execution of a SQL statement, having bind values can give you a significant advantage. They can tell you:

  1. Is the value outside the min/max value of an existing histogram?
  2. Do statistics lean towards another value being more prevalent?
  3. Is the value passed in not in the correct format?
  4. Does the value searched for cause an impact because it differs from the known values and/or the expected counts are off?

There are a number of other reasons, but having this data easily accessible at your fingertips is very beneficial to the person troubleshooting.
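As an aside, outside the console you can often pull captured bind values for a statement directly from the database (a sketch using the standard v$sql_bind_capture view; replace the sql_id placeholder with the statement you are investigating):

select child_number, name, position, datatype_string, value_string
from   v$sql_bind_capture
where  sql_id = '&sql_id'
order  by child_number, position;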

Since the database plug-in update, the feature is no longer where it once was. From SQL Monitoring, Monitored SQL Executions, if you were to click a SQL ID of interest, you would then go to the SQL Details page.

[image: binds2]

There is a new report called “SQL Details Active Report“, but it doesn’t contain the bind values data.  This report is still very, very valuable:

[image: binds3]

It shows all the above data, along with a wait event vs. all resource usage graph at the bottom of the report. You can save or mail the report and all its relevant data. It would still be nice to have the previous report with the bind values that was once available from the details page, and you can still get to it; you just need to make a few more clicks.

Go back to the main Monitored SQL Executions page and locate the SQL that you are interested in:

[image: binds4]

Bring your cursor to the status column for that SQL ID and double-click. This will take you to the Monitored SQL Executions SQL Detail page, and on the right-hand side you will see the View Report button.

[image: binds5]

This button will bring you to the previous SQL ID Details report, which includes the bind data. Another thing to remember is that you must be viewing a database that supports the feature, which means Oracle 11.2 or higher.

 

 







Copyright © DBA Kevlar [Retrieving Bind Values from SQL Monitor in EM12c Release 4], All Right Reserved. 2014.

12.1.0.2 Introduction to Zone Maps Part III (Little By Little)

I've previously discussed the new Zone Map database feature and how zone maps work in a similar manner to Exadata Storage Indexes. Just like Storage Indexes (and conventional indexes for that matter), they work best when the data is well clustered in relation to the Zone Map or index. By having the data in the table […]