Oakies Blog Aggregator

Have Dog, Will Travel

So I went on a business trip to visit my company headquarters this last week and I took my dog with me.  Yes, you heard that right, my dog.

My 10 lb Chihuahua/Basenji mix, Esme, is actually a service dog, which means she can go almost anywhere with me.  She’s gone through a good amount of training to do so and honestly, she travels better than most adults.  Although she’s had all the training, the fact can’t be ignored that she was severely abused as a puppy and still suffers some residual effects of it.  The task she performs is at night, for a sleep disorder, but since I travel, it’s important that she travels and is socialized as part of her training.  She’s in that odd category of service animal that doesn’t fit most people’s idea of what a service dog is.

She’s an adorable animal, and as loving as she is with me, it can be difficult for other people to understand that it’s not acceptable to come into her space and try to pet her, or that as friendly as she may seem, she still isn’t too keen on new people.  Her connection to me is part of her job and it’s necessary for her to perform her job, but it also means she’ll be more protective of me if you come into my space.  I’ve had to get a “Do Not Pet” label for her Service Dog harness due to this.

The How

Now, with that said, I’ve been asked how best to travel with a dog.  I’ve never found it that difficult to travel with Esme, a small backpack and my 20-inch suitcase, but there are some hard-and-fast rules and tips to know that can make the best of the trip.

  1. Dogs travel rather inexpensively on flights.  Most airlines have become quite pet-friendly, but the animal must remain in a small kennel that will fit under the seat in front of your own unless it’s an ESA (Emotional Support Animal) or service animal.
  2. Train your dog.  If you plan on traveling with them, train them not to bark and travel with them in a car extensively before air travel to get them accustomed to sitting still and being content with the travel arrangements.
  3. Make them as comfortable as possible.
    1. Get a thundershirt or similar to help ease the stress of the travel.
    2. Use a pheromone spray that will help relax the dog.  You spray it on a blanket, or on the thundershirt, which has a special patch for it.
    3. If the dog is a small service or ESA dog, consider a pouch.  I have people ask me all the time where I got Esme’s Cozy and Dozy Sling.  She looks like a little wallaby in it, but she even sleeps in it while I work: typing on my computer, attending meetings.  People are just amazed that she’s completely content to snuggle up and snooze in it.  It also has pockets for my cell phone when I’m getting on the plane, a short leash with a hook to secure her, and a couple of zip pockets to keep necessities in.
    4. Collapsible food and water bowl.  Often made of silicone, they collapse flat and fit in a zipper pocket.  I keep a collapsible water bottle with me and often give her water in her bowl from it as we travel.
    5. Take your dog’s customary food with you in a Tupperware container, along with their favorite toys.  Esme’s is a Nylabone that she’s happy to chew on when we relax for the evening.
    6. And take a roll of dog poop bags.  Yeah, you’re going to need those and often at the most inopportune times.
  4. Know where the relief areas are at the departure and arrival airports.  Most airports now have them and it’s important to know, so look online.
  5. Make sure they are up to date on all their vaccinations and have all their tags present on their collar.
  6. If they are an ESA or Service dog, you should have a letter or documentation showing this.
  7. Make sure to call the airline BEFORE checking in for your flight so they can add your pet to your ticket.  Then check in; otherwise they may have to “uncheck” you to add the pet information.
  8. Book with a pet-friendly hotel, like aLoft, and call beforehand to find out what accommodations the hotel has for pets.  I’ve had ones that just accepted pets and ones that are truly pet-friendly, and there’s a big difference.  Ones that are pet-friendly will often have quiet yards with receptacles for pet refuse, walking areas, and rooms set aside with easy access to the outdoors for early morning outings.  aLoft actually has dog beds, bowls and pet yards.  When trying to make a pet comfortable in new surroundings, this can make a huge difference.

The day of a trip, I feed her a little earlier, ensure she has at least two hours afterwards to relieve herself, and then give her only a little bit of water until the flight.  I won’t feed her again until we’re at the hotel, which gives her a reason to be happy to be in the new place, too.  This will keep her from getting ill if there’s turbulence, etc.

I’m sure there are other things I do to help make Esme a good traveler, but these are some of the tips I can offer to make it easier for everyone involved.  I love that people are astounded at what a great traveling companion she is.  I’d do anything to take away the abuse she went through as a puppy before I rescued her at 5 months old, but she’s got it pretty good now.  She’s earned it and she earns her keep.


Getting started…adding an account to use

If you’ve read my previous post about getting started with the Oracle database, then hopefully you now have your very own database installed and running, and you have explored a little with the sample schemas using SQL Developer.  Perhaps now you want to venture out into your own database development, and for that, you will want to create your own user account and create your own tables.  Here’s another video which will guide you through the process.

The video shows the process of creating a user via SQL Developer. If you are a command line addict, then here are the commands you will need to run to achieve the same. You can use SQL*Plus or SQLcl to run these commands. If you’ve never done that, simply:

  • open up a terminal or MS-DOS command window,
  • type sqlplus and hit Enter,
  • type SYSTEM for the username and hit Enter,
  • type the password you specified when you created the database as per my previous blog post.

You will see a prompt “SQL>”.  Enter the following commands to create your first user account.


SQL> create user DEVELOPER identified by MyCaseSensitivePassword;

User created.

SQL> grant connect, resource to DEVELOPER;

Grant succeeded.

SQL> alter user DEVELOPER quota unlimited on USERS;

User altered.

And you are now good to go. Fire up SQL Developer as shown in the video, and you are ready to connect as your new user account as shown at this moment in the video.
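If you would rather do a quick smoke test from the command line instead, something like this should confirm the new account works (the table name and data here are just illustrative):


SQL> connect DEVELOPER/MyCaseSensitivePassword
Connected.

SQL> create table my_first_table ( id number, note varchar2(50) );

Table created.

SQL> insert into my_first_table values ( 1, 'Hello, Oracle' );

1 row created.

SQL> commit;

Commit complete.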

Remember – we’re doing this to learn and explore Oracle. It is very easy to see all of the functionality available to you, and to leap in and start building a fully-fledged application.  But don’t forget – for that, you need more than just a play-thing at home. You need backups; you need security; you need scalability; you need redundancy and failover.  So if you’re ready to get serious, you probably need to think about using a cloud account or similar.

Enjoy your database!

Is NFS on ZFS slowing you down?

If you think so, check out the shell script "ioh.sh" on GitHub at https://github.com/khailey/ioh.

Introduction and Goals

The goal of ioh.sh is to measure both the throughput and latency of the different code layers when using NFS mounts on a ZFS appliance. The ZFS appliance code layers inspected with the script are I/O from the disks, ZFS layer and the NFS layer. For each of these layers the script measures the throughput, latency and average I/O size. Some of the layers are further broken down into other layers. For example NFS writes are broken down into data sync, file sync and non-sync operations and NFS reads are broken down into cached data reads and reads that have to go to disk.

The three primary questions ioh is used to answer are:

  • Is I/O latency from the I/O subsystem to ZFS appliance sufficiently fast?
  • Is NFS latency from ZFS appliance to the consumer sufficiently fast?
  • Is ZFS adding unusual latency?

One: If the latency from the I/O subsystem is not adequate, then look into supplying a better-performing I/O subsystem for the ZFS appliance. For example, if the goal is 3 ms write times per 1K redo write but the underlying I/O subsystem is taking 6 ms, then it will be impossible for the ZFS appliance to meet those expectations.

Two: If the latency for the NFS response from the ZFS appliance is adequate and yet the NFS client reports latencies that are much slower (more than 2 ms slower), then one should look instead at problems in the NIC, network or NFS client host; see this example of network tracing: http://datavirtualizer.com/tcp-trace-analysis-for-nfs/

Three: If the I/O latency is sufficiently fast but ZFS latency is slow, then this could indicate a problem in the ZFS layer.

The answer to the question "what is adequate I/O latency" depends. In general a single random 8 KB block read on Oracle is expected to take 3-12 ms on average, thus a typical latency is around 7.5 ms. NOTE: when measuring I/O latency on the source system, it’s important to use a tool like "iostat" that will show the actual I/Os to the subsystem. The I/O measured by the Oracle database will include both I/Os satisfied from the host file system cache as well as the I/O subsystem, unless the database is running with direct I/O.
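For example, on the Solaris-based systems these scripts target, something like the following shows per-device latency directly (the 5-second interval is illustrative; flags differ on Linux):


$ iostat -xn 5

In the -xn output, the asvc_t column is the average active service time per device in milliseconds, which is the number to compare against Oracle’s reported I/O times.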

The ioh tool can also give insight into other useful information such as

  • Are IOPs getting near the supported IOPs of the underlying I/O subsystem?
  • Is NFS throughput getting near the maximum bandwidth of the NIC?

For example, if the NIC is 1GbE then the maximum bandwidth is about 115MB/s, and generally 100MB/s is a good rule of thumb for the max. If throughput is consistently near the NIC maximum, then demand is probably going above the maximum and thus increasing latency.

$ ioh.sh -h

usage: ./ioh.sh options

collects I/O related dtrace information into file "ioh.out"
and displays the results

OPTIONS:
  -h              Show this message
  -t  seconds     runtime in seconds, defaults to forever
  -c  seconds     cycle time ie time between collections, defaults to 1 second
  -f  filename    change the output file name [defaults to ioh.out]
  -p              parse the data from output file only,  don't run collection
  -d  display     optional extra data to show: [hist|histsz|histszw|topfile|all]
                    hist    - latency histogram for NFS,ZFS and IO both reads and writes
                    histsz  - latency histogram by size for NFS reads
                    histszw - latency histogram by size for NFS writes
                    topfile - top files accessed by NFS
                    all     - all the above options
                  example
                    ioh.sh -d "histsz topfile"

There are two optional environment variables: CWIDTH, the histogram column width, and PAD, the character placed between columns in the histograms (null by default).
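For example, something like this should widen the histogram columns and separate them with a pipe character (the values are illustrative; sudo passes VAR=value settings through to the command if its policy allows):


$ sudo CWIDTH=8 PAD="|" ./ioh.sh -d hist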

Running

$ sudo ./ioh.sh 

This outputs to the screen and puts the raw output into the default file name "ioh.out.[date]". The default output file name can be changed with the "-f filename" option. The raw output can later be formatted with:

$ ./ioh.sh -p  ioh.out.2012_10_30_10:49:27

By default it will look for "ioh.out". If the raw data is in a different file, the name can be specified with "-f filename".

The output looks like

date: 1335282287 , 24/3/2012 15:44:47
TCP out:  8.107 MB/s, in:  5.239 MB/s, retrans:        MB/s  ip discards:
----------------
            |       MB/s|    avg_ms| avg_sz_kb|     count
------------|-----------|----------|----------|--------------------
R |      io:|     0.005 |    24.01 |    4.899 |        1
R |     zfs:|     7.916 |     0.05 |    7.947 |     1020
C |   nfs_c:|           |          |          |        .
R |     nfs:|     7.916 |     0.09 |    8.017 |     1011
- 
W |      io:|     9.921 |    11.26 |   32.562 |      312
W | zfssync:|     5.246 |    19.81 |   11.405 |      471
W |     zfs:|     0.001 |     0.05 |    0.199 |        3
W |     nfs:|           |          |          |        .
W |nfssyncD:|     5.215 |    19.94 |   11.410 |      468
W |nfssyncF:|     0.031 |    11.48 |   16.000 |        2

The sections are broken down into

  • Header with date and TCP throughput
  • Reads
  • Writes

Reads and Writes are are further broken down into

  • io
  • zfs
  • nfs

For writes, the non-stable storage writes are separated from the writes to stable storage, which are marked as "sync" writes. For NFS, the sync writes are further broken down into "data" and "file" sync writes.

examples:

The following will refresh the display every 10 seconds and display an extra four sections of data

$ sudo ./ioh.sh -c 10 -d "hist histsz histszw topfile"   

date: 1335282287 , 24/3/2012 15:44:47
TCP out:  8.107 MB/s, in:  5.239 MB/s, retrans:        MB/s  ip discards:
----------------
            |       MB/s|    avg_ms| avg_sz_kb|     count
------------|-----------|----------|----------|--------------------
R |      io:|     0.005 |    24.01 |    4.899 |        1
R |     zfs:|     7.916 |     0.05 |    7.947 |     1020
R |     nfs:|     7.916 |     0.09 |    8.017 |     1011
- 
W |      io:|     9.921 |    11.26 |   32.562 |      312
W | zfssync:|     5.246 |    19.81 |   11.405 |      471
W |     zfs:|     0.001 |     0.05 |    0.199 |        3
W |     nfs:|           |          |          |        .
W |nfssyncD:|     5.215 |    19.94 |   11.410 |      468
W |nfssyncF:|     0.031 |    11.48 |   16.000 |        2
---- histograms  -------
    area r_w   32u   64u   .1m   .2m   .5m    1m    2m    4m    8m   16m   33m   65m    .1s   .3s   .5s    1s    2s    2s+
R        io      .     .     .     .     .     .     .     .     .     1     3     1
R       zfs   4743   287    44    16     4     3     .     .     .     1     2     2
R       nfs      .  2913  2028    89    17     3     .     .     .     1     2     2
-
W        io      .     .     .    58   249   236    50    63   161   381   261    84    20     1
W       zfs      3    12     2
W   zfssync      .     .     .     .    26   162   258   129   228   562   636   250    75    29
W       nfs      .     .     .     .    12   164   265   134   222   567   637   250    75    29
--- NFS latency by size ---------
    ms   size_kb
R   0.1    8     .  2909  2023    87    17     3     .     .     .     1     2     2
R   0.1   16     .     4     5     2
-
W   5.0    4     .     .     .     .     8    49    10     3     4    11     4     2     1
W  21.4    8     .     .     .     .     4    55   196    99   152   414   490   199    60    23
W  18.3   16     .     .     .     .     .    34    29    25    43    91    84    28     8     5
W  16.1   32     .     .     .     .     .    19    16     7    14    38    36    15     3
W  19.3   64     .     .     .     .     .     6    11     .     9    11    19     6     2     1
W  20.4  128     .     .     .     .     .     1     3     .     .     2     4     .     1
---- top files ----
   MB/s                  IP  PORT filename
W  0.01MB/s  172.16.103.196 52482 /domain0/group0/vdb17/datafile/home/oracle/oradata/swingb/control01.ora
W  0.02MB/s   172.16.100.81 21763 /domain0/group0/vdb16/datafile/export/home/oradata/sol/control01.ora
W  0.57MB/s   172.16.100.81 21763 /domain0/group0/vdb16/datafile/export/home/oradata/sol/undo.dbf
W  0.70MB/s   172.16.100.81 21763 /domain0/group0/vdb16/datafile/export/home/oradata/sol/redo3.log
W  3.93MB/s   172.16.100.81 21763 /domain0/group0/vdb16/datafile/export/home/opt/app/10.2.0.4/db_1/dbs/soe.dbf
-
R  0.01MB/s  172.16.100.102 39938 /domain0/group0/vdb12/datafile/home/oracle/oradata/kyle/control01.ora
R  0.01MB/s  172.16.103.196 52482 /domain0/group0/vdb17/datafile/home/oracle/oradata/swingb/control01.ora
R  0.02MB/s   172.16.100.81 21763 /domain0/group0/vdb16/datafile/export/home/oradata/sol/control01.ora
R  0.05MB/s   172.16.100.81 21763 /domain0/group0/vdb16/datafile/export/home/oradata/sol/undo.dbf
R  7.84MB/s   172.16.100.81 21763 /domain0/group0/vdb16/datafile/export/home/opt/app/10.2.0.4/db_1/dbs/soe.dbf
IOPs         313

Sections

The first line is the date. The second line is TCP MB per second out, in, and retransmitted; the last value is "ip discards".

The three parts are all related and form a drill-down, starting with coarse-grained data at the top and moving to finer-grained data at the bottom.

  • averages – default
  • histograms [hist]
  • histograms by size reads [histsz] writes [histszw] for NFS

The first section is a quick overview.

The second section breaks out the latency into a histogram so one can get an indication of the amount of I/O from memory (i.e. those in microsecond ranges) as well as how far out the outliers are (do the outliers on the VDBs match up to the outliers seen on the ZFS appliance?).

The third section differentiates between the latency of single-block random reads (typically the 8K size) and the latency of multi-block sequential reads (32K and higher). The differentiation is important when comparing to Oracle stats, which are grouped into single-block random reads (called "db file sequential read") and sequential multi-block reads (called "db file scattered read").

The final section, top files read and written [topfile], is sort of a sanity check: there are periods when there is supposed to be little to no NFS I/O and yet there is some, so the top files section tells which file and which host the NFS I/O is coming from.

The last line after all the sections is the total IOPs for reads plus writes. (Note these IOPs could get converted to higher values at the storage layer if using RAID5, which will cause each write to become two reads plus two writes.)

The first section shows up by default; the other sections require command line arguments.

To see just the first section, which is the default, run ioh.sh without any arguments:

sudo ./ioh.sh

To show non-default sections, add them to the command line

sudo ./ioh.sh -d "hist histsz histszw topfile"

A shortcut for all sections is “all”

sudo ./ioh.sh  -d all

Collecting in the background

nohup sudo ./ioh.sh -c 60 -t 86400 &

This runs the collection for 1 day (86400 seconds), collecting every 60 seconds, and puts the raw output into the default file name "ioh.out". The default output file name can be changed with the "-f filename" option.

1. Averages:

This displays I/O, ZFS and NFS data for both reads and writes. The data is grouped to help correlate these different layers. The first line is the date in epoch format.

columns

  • MB/s – MB transferred a second
  • avg_ms – average operation time
  • avg_sz_kb – average operation size in kb
  • count – number of operations

example

.
             |      MB/s|    avg_ms| avg_sz_kb|     count
 ------------|----------|----------|----------|--------------------
 R |      io:|  through | average  |  average |      count of operations
 R |     zfs:|  put     |latency   | I/O size | 
 R |     nfs:|          |millisec  | in KB    |   
 - 
 W |      io:|          |          |          |
 W | zfssync:|          |          |          |                                         
 W |     zfs:|          |          |          |                                         
 W |     nfs:|          |          |          |                                         
 W |nfssyncD:|          |          |          |                                         
 W |nfssyncF:|          |          |          |                                         

For writes

  • zfssync – these are synchronous writes. These should mainly be Oracle redo writes.
  • nfs – unstable storage writes
  • nfssyncD – data sync writes
  • nfssyncF – file sync writes

DTrace probes used

  • io:::start/done check for read or write
  • nfs:::op-read-start/op-read-done , nfs:::op-write-start/op-write-done
  • zfs_read:entry/return, zfs_write:entry/return
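As a rough standalone illustration of what the zfs probe pair above measures (a sketch, not part of ioh.sh; assumes root access and DTrace on the appliance), the following quantizes zfs_read latency in microseconds:


$ sudo dtrace -n '
  zfs_read:entry  { self->ts = timestamp; }
  zfs_read:return /self->ts/ {
    @["zfs read latency (us)"] = quantize((timestamp - self->ts) / 1000);
    self->ts = 0;
  }'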

2. Histograms

Latency distribution for I/O, ZFS and NFS, for both reads and writes. These distributions are not normalized by time, i.e. if ioh.d outputs once a second then these counts will be equal to the counts in the first section. If ioh.d outputs every 10 seconds, then these values will be 10x higher.

3. Histograms by size for reads and writes

The first column is the average latency for the I/O size on this line. The second column is the size; each line includes this size and every size below it, down to the previous bucket. The goal here is to show the sizes of I/Os and the different latency for different sizes. For an Oracle database with an 8K block size, 8K reads will tend to be random, whereas higher read sizes will be multiblock requests and represent sequential reads. It’s common to see the 8K reads running slower than the larger reads.

4. Top files

This shows the top 5 files for reads and writes. The first column is R or W, the second is MB/s, then IP, then port, then filename.

Examples and Usage

Idle system

The first thing to look at is the MB in and out, which answers:

  • “how busy is the system?”
  • “is NFS throughput approaching the limits of the NIC?”

In the following example, there is less than 50 KB/s of total NFS throughput (in plus out), thus the system isn’t doing much, and there must be no database activity other than the regular maintenance processes that are always running on a database. To confirm this, one can look at the top files at the bottom and see that the only activity is on the control files, which are read and written to as part of database system maintenance. Otherwise there is no activity to speak of, so there is no reason to look at I/O latency in this case. Additionally, the majority of what little I/O there is comes in 16K sizes, which is typical of control file activity, whereas the default database data block activity is in 8K sizes. Most read I/O is coming from the ZFS appliance cache, as it takes only 64 microseconds.

date: 1335282646 , 24/3/2012 15:50:46
TCP  out:  0.016 MB/s, in:  0.030 MB/s, retrans:        MB/s  ip discards:
----------------
            |       MB/s|    avg_ms| avg_sz_kb|     count
------------|-----------|----------|----------|--------------------
R |      io:|           |          |          |        .
R |     zfs:|     0.016 |     0.01 |    1.298 |       13
R |     nfs:|     0.016 |     0.10 |   16.000 |        1
- 
W |      io:|     0.365 |     4.59 |    9.590 |       39
W | zfssync:|     0.031 |    14.49 |   16.000 |        2
W |     zfs:|     0.001 |     0.07 |    0.199 |        3
W |     nfs:|           |          |          |        .
W |nfssyncD:|     0.003 |          |          |        .
W |nfssyncF:|     0.028 |    14.33 |   14.400 |        2
---- histograms  -------
    area r_w   32u   64u   .1m   .2m   .5m    1m    2m    4m    8m   16m   33m   65m    .1s   .3s   .5s    .5s+
R        io      .
R       zfs     60     5
R       nfs      .     .     5
-
W        io      .     .     .    20    43    60    11    11     8    28    17     1
W       zfs      2     8     5     2
W   zfssync      .     .     .     .     .     .     2     .     2     5     1     1
W       nfs      .     .     .     .     .     .     2     .     2     5     1     1
--- NFS latency by size ---------
    ms   size_kb
R   0.1   16     .     .     5
-
W          8     .     .     .     .     .     .     .     .     .     1     .     1
W  16.0   16     .     .     .     .     .     .     2     .     2     4     1
---- top files ----
   MB/s                  IP  PORT filename
W  0.00MB/s   172.16.100.81 21763 /domain0/group0/vdb16/datafile/export/home/oradata/sol/control01.ora
W  0.01MB/s  172.16.100.102 39938 /domain0/group0/vdb12/datafile/home/oracle/oradata/kyle/control01.ora
W  0.01MB/s  172.16.103.133 59394 /domain0/group0/vdb13/datafile/home/oracle/oradata/kyle/control01.ora
W  0.01MB/s   172.16.100.69 39682 /domain0/group0/vdb14/datafile/home/oracle/oradata/kyle/control01.ora
W  0.01MB/s  172.16.103.196 52482 /domain0/group0/vdb17/datafile/home/oracle/oradata/swingb/control01.ora
-
R  0.00MB/s   172.16.100.81 21763 /domain0/group0/vdb16/datafile/export/home/oradata/sol/control01.ora
R  0.01MB/s  172.16.100.102 39938 /domain0/group0/vdb12/datafile/home/oracle/oradata/kyle/control01.ora
R  0.01MB/s  172.16.103.196 52482 /domain0/group0/vdb17/datafile/home/oracle/oradata/swingb/control01.ora
IOPs          39

Active system

Below is an example of an active system. Looking at TCP bytes in and out, there is a fair bit of traffic: 3MB/s out and 2MB/s in. These rates are a long way from saturating 1GbE, but there is activity going on.

Reads: All reads are coming out of the cache. How do we know? For one, the average latency is 0.07 ms, or 70 microseconds. Does this 70 us include slower reads that might be off disk? Looking at the histogram, one can see that the slowest ZFS I/O is under 100 us, and looking just above at the I/O histogram there are no I/Os at all, thus all the I/O is coming from cache.

Writes: Writes are pretty slow. Oracle redo writes on good systems are typically 3 ms or less for small redo. Unfortunately most of the I/O is coming from datafile writes, so it’s difficult to tell what the redo write times are (maybe it would be worth enhancing ioh.d to show average latency by file). Typically the redo does "nfssyncD" writes, and datafile writes are simply unstable storage "nfs" writes that get synced at a later date. This particular database is using the Oracle parameter "filesystemio_options=setall", which implements direct I/O. Direct I/O can work without sync writes, but the implementation depends on the OS. This OS implementation, OpenSolaris, causes all direct I/O writes to be sync writes.

date: 1335284387 , 24/3/2012 16:19:47
TCP out:  3.469 MB/s, in:  2.185 MB/s, retrans:        MB/s  ip discards:
----------------
            |       MB/s|    avg_ms| avg_sz_kb|     count
------------|-----------|----------|----------|--------------------
R |      io:|           |          |          |        .
R |     zfs:|     3.387 |     0.03 |    7.793 |      445
R |     nfs:|     3.384 |     0.07 |    8.022 |      432
- 
W |      io:|     4.821 |    12.08 |   24.198 |      204
W | zfssync:|     1.935 |    38.50 |   11.385 |      174
W |     zfs:|     0.001 |     0.06 |    0.199 |        3
W |     nfs:|           |          |          |        .
W |nfssyncD:|     1.906 |    39.06 |   11.416 |      171
W |nfssyncF:|     0.028 |    14.59 |   14.400 |        2
---- histograms  -------
    area r_w   64u   .1m   .2m   .5m    1m    2m    4m    8m   16m   33m   65m    .1s   .3s   .3s+
R        io      .
R       zfs   2185    34     5     .     1
R       nfs    903  1201    47     8     1
-
W        io      .     .    19   142   143    46    42   108   240   212    57    12     1
W       zfs     13     3     1
W   zfssync      .     .     .     .    10     6     .    21    60   384   287    86    16
W       nfs      .     .     .     .    10     5     .    21    60   384   287    86    16
--- NFS latency by size ---------
    ms   size_kb
R   0.1    8   900  1199    47     7     1
R   0.1   16     3     2     .     1
-
W  17.7    4     .     .     .     .     3     1     .     2     5     3     3
W  41.1    8     .     .     .     .     3     .     .    13    35   292   231    76    13
W  34.0   16     .     .     .     .     3     3     .     4    13    61    30     8     2
W  39.0   32     .     .     .     .     .     1     .     .     2    16    14     2     1
W  28.3   64     .     .     .     .     1     .     .     .     2     9     8
W  26.2  128     .     .     .     .     .     .     .     2     3     2     1
W        256     .     .     .     .     .     .     .     .     .     1
---- top files ----
   MB/s             IP  PORT filename
R  0.01   172.16.100.81 21763 /domain0/group0/vdb16/datafile/export/home/oradata/sol/control01.ora
R  0.01  172.16.103.196 52482 /domain0/group0/vdb17/datafile/home/oracle/oradata/swingb/control01.ora
R  0.02   172.16.100.81 21763 /domain0/group0/vdb16/datafile/export/home/oradata/sol/system.dbf
R  0.02   172.16.100.81 21763 /domain0/group0/vdb16/datafile/export/home/oradata/sol/undo.dbf
R  3.33   172.16.100.81 21763 /domain0/group0/vdb16/datafile/export/home/opt/app/product/dbs/soe.dbf 
-
W  0.01   172.16.100.81 21763 /domain0/group0/vdb16/datafile/export/home/oradata/sol/control01.ora
W  0.01  172.16.103.196 52482 /domain0/group0/vdb17/datafile/home/oracle/oradata/swingb/control01.ora
W  0.15   172.16.100.81 21763 /domain0/group0/vdb16/datafile/export/home/oradata/sol/undo.dbf
W  0.30   172.16.100.81 21763 /domain0/group0/vdb16/datafile/export/home/oradata/sol/redo1.log
W  1.46   172.16.100.81 21763 /domain0/group0/vdb16/datafile/export/home/opt/app/product/dbs/soe.dbf 
IOPs         204

ZFS read layer problem

          |      MB/s|    avg_ms|  avg_sz_kb
----------|----------|----------|-----------
R |   io :|    88.480|      4.60|     17.648 
R |  zfs :|    19.740|      8.51|     12.689 
R |  nfs :|    16.562|     22.67|     30.394 

In this case the ZFS I/O at 19 MB/s is higher than NFS at 16 MB/s. Now that could be because something is accessing the file system locally on the ZFS appliance, or because ZFS is doing read-ahead, so there are possible explanations, but it’s interesting. Second, subsystem I/O at 88 MB/s is much greater than ZFS I/O at 19 MB/s. Again that is notable. It could be because there is a scrub going on (to check for a scrub, run "zpool status"; to turn off a scrub, run "zpool scrub -s domain0", though the scrub has to be run at some point). Both are interesting observations.

Now for the more interesting parts. The NFS response time of 22 ms is almost 3x the average ZFS response time of 8 ms. On the other hand, the average size of an NFS I/O is 2.5x the average ZFS I/O size, so that might be understandable. The hard part to understand is that the ZFS latency of 8 ms is twice the latency of the subsystem I/O at 4 ms, yet the average size of the I/O subsystem reads is bigger than the average ZFS read. This doesn’t make any sense.

In this case, to home in on the data a bit, it would be worth turning off the scrub if one was running, and seeing what the stats are, to eliminate a factor that could be muddying the waters.

But in this case, even without a scrub going, the ZFS latency was 2-3x slower than the I/O subsystem latency.

It turns out ZFS wasn’t caching and spending a lot of time trying to keep the ARC clean.

ZFS write layer problem

           |       MB/s|    avg_ms| avg_sz_kb|     count
-----------|-----------|----------|----------|--------------------
W |     io:|     10.921|     23.26|    32.562|       380
W |    zfs:|    127.001|     37.95|     0.199|      8141
W |    nfs:|     0.000 |     0.00 |    0.000 |        0

NFS is 0 MB/s because this traffic was HTTP. The current version of ioh would show the TCP MB/s. This version also mixed zfs sync and non-sync writes into one bucket, but much of the ZFS writes have to be non-sync, because the write rate is 127 MB/s whereas the I/O subsystem writes are only 10 MB/s; thus at least 117 MB/s is not sync, and if those writes are not sync they are just memory writes and should be blindingly fast, but they aren’t. The average latency for the ZFS writes is 37 ms. Even more shockingly, the average size is only 0.199K, whereas the I/O subsystem writes 32K in 23 ms. The case here was that, because of disk errors, the ZFS layer was self-throttling way too much. This was a bug.

Those pesky LONG columns

There was a time, many moons ago, when CLOB, BLOB and BFILE did not exist as data types. So if you had anything longer than a few kilobytes of data to store, you had to use a LONG or a LONG RAW.  But those data types came with all sorts of restrictions and frustrations, and we all embraced the improvements that the LOB data types brought in Oracle 8.  But of course, we carry a lot of that historical “baggage” in the data dictionary.


SQL> select owner, table_name, column_name
  2  from   dba_tab_cols
  3  where  data_type = 'LONG'
  4  and    substr(table_name,1,4) in ('DBA_','ALL_','USER')
  5  order by 1,2, 3;

OWNER                          TABLE_NAME                     COLUMN_NAME
------------------------------ ------------------------------ -------------------------
SYS                            ALL_ARGUMENTS                  DEFAULT_VALUE
SYS                            ALL_CLUSTER_HASH_EXPRESSIONS   HASH_EXPRESSION
SYS                            ALL_CONSTRAINTS                SEARCH_CONDITION
SYS                            ALL_IND_EXPRESSIONS            COLUMN_EXPRESSION
SYS                            ALL_IND_PARTITIONS             HIGH_VALUE
SYS                            ALL_IND_SUBPARTITIONS          HIGH_VALUE
SYS                            ALL_MVIEWS                     QUERY
SYS                            ALL_MVIEW_AGGREGATES           MEASURE
SYS                            ALL_MVIEW_ANALYSIS             QUERY
SYS                            ALL_NESTED_TABLE_COLS          DATA_DEFAULT
SYS                            ALL_REGISTERED_MVIEWS          QUERY_TXT
SYS                            ALL_REGISTERED_SNAPSHOTS       QUERY_TXT
SYS                            ALL_SNAPSHOTS                  QUERY
SYS                            ALL_SQLSET_PLANS               OTHER
SYS                            ALL_SUBPARTITION_TEMPLATES     HIGH_BOUND
SYS                            ALL_SUMMARIES                  QUERY
SYS                            ALL_SUMMARY_AGGREGATES         MEASURE
SYS                            ALL_TAB_COLS                   DATA_DEFAULT
SYS                            ALL_TAB_COLS_V$                DATA_DEFAULT
SYS                            ALL_TAB_COLUMNS                DATA_DEFAULT
SYS                            ALL_TAB_PARTITIONS             HIGH_VALUE
SYS                            ALL_TAB_SUBPARTITIONS          HIGH_VALUE
SYS                            ALL_TRIGGERS                   TRIGGER_BODY
SYS                            ALL_VIEWS                      TEXT
SYS                            ALL_VIEWS_AE                   TEXT
SYS                            ALL_ZONEMAPS                   QUERY
SYS                            ALL_ZONEMAP_MEASURES           MEASURE
SYS                            DBA_ADVISOR_SQLPLANS           OTHER
SYS                            DBA_ARGUMENTS                  DEFAULT_VALUE
SYS                            DBA_CLUSTER_HASH_EXPRESSIONS   HASH_EXPRESSION
SYS                            DBA_CONSTRAINTS                SEARCH_CONDITION
SYS                            DBA_IM_EXPRESSIONS             SQL_EXPRESSION
SYS                            DBA_IND_EXPRESSIONS            COLUMN_EXPRESSION
SYS                            DBA_IND_PARTITIONS             HIGH_VALUE
SYS                            DBA_IND_SUBPARTITIONS          HIGH_VALUE
SYS                            DBA_MVIEWS                     QUERY
SYS                            DBA_MVIEW_AGGREGATES           MEASURE
SYS                            DBA_MVIEW_ANALYSIS             QUERY
SYS                            DBA_NESTED_TABLE_COLS          DATA_DEFAULT
SYS                            DBA_OUTLINES                   SQL_TEXT
SYS                            DBA_REGISTERED_MVIEWS          QUERY_TXT
SYS                            DBA_REGISTERED_SNAPSHOTS       QUERY_TXT
SYS                            DBA_SNAPSHOTS                  QUERY
SYS                            DBA_SQLSET_PLANS               OTHER
SYS                            DBA_SQLTUNE_PLANS              OTHER
SYS                            DBA_SUBPARTITION_TEMPLATES     HIGH_BOUND
SYS                            DBA_SUMMARIES                  QUERY
SYS                            DBA_SUMMARY_AGGREGATES         MEASURE
SYS                            DBA_TAB_COLS                   DATA_DEFAULT
SYS                            DBA_TAB_COLS_V$                DATA_DEFAULT
SYS                            DBA_TAB_COLUMNS                DATA_DEFAULT
SYS                            DBA_TAB_PARTITIONS             HIGH_VALUE
SYS                            DBA_TAB_SUBPARTITIONS          HIGH_VALUE
SYS                            DBA_TRIGGERS                   TRIGGER_BODY
SYS                            DBA_VIEWS                      TEXT
SYS                            DBA_VIEWS_AE                   TEXT
SYS                            DBA_ZONEMAPS                   QUERY
SYS                            DBA_ZONEMAP_MEASURES           MEASURE
SYS                            USER_ADVISOR_SQLPLANS          OTHER
SYS                            USER_ARGUMENTS                 DEFAULT_VALUE
SYS                            USER_CLUSTER_HASH_EXPRESSIONS  HASH_EXPRESSION
SYS                            USER_CONSTRAINTS               SEARCH_CONDITION
SYS                            USER_IM_EXPRESSIONS            SQL_EXPRESSION
SYS                            USER_IND_EXPRESSIONS           COLUMN_EXPRESSION
SYS                            USER_IND_PARTITIONS            HIGH_VALUE
SYS                            USER_IND_SUBPARTITIONS         HIGH_VALUE
SYS                            USER_MVIEWS                    QUERY
SYS                            USER_MVIEW_AGGREGATES          MEASURE
SYS                            USER_MVIEW_ANALYSIS            QUERY
SYS                            USER_NESTED_TABLE_COLS         DATA_DEFAULT
SYS                            USER_OUTLINES                  SQL_TEXT
SYS                            USER_REGISTERED_MVIEWS         QUERY_TXT
SYS                            USER_REGISTERED_SNAPSHOTS      QUERY_TXT
SYS                            USER_SNAPSHOTS                 QUERY
SYS                            USER_SQLSET_PLANS              OTHER
SYS                            USER_SQLTUNE_PLANS             OTHER
SYS                            USER_SUBPARTITION_TEMPLATES    HIGH_BOUND
SYS                            USER_SUMMARIES                 QUERY
SYS                            USER_SUMMARY_AGGREGATES        MEASURE
SYS                            USER_TAB_COLS                  DATA_DEFAULT
SYS                            USER_TAB_COLS_V$               DATA_DEFAULT
SYS                            USER_TAB_COLUMNS               DATA_DEFAULT
SYS                            USER_TAB_PARTITIONS            HIGH_VALUE
SYS                            USER_TAB_SUBPARTITIONS         HIGH_VALUE
SYS                            USER_TRIGGERS                  TRIGGER_BODY
SYS                            USER_VIEWS                     TEXT
SYS                            USER_VIEWS_AE                  TEXT
SYS                            USER_ZONEMAPS                  QUERY
SYS                            USER_ZONEMAP_MEASURES          MEASURE
WMSYS                          ALL_WM_IND_EXPRESSIONS         COLUMN_EXPRESSION
WMSYS                          USER_WM_IND_EXPRESSIONS        COLUMN_EXPRESSION

So when we want to do some data mining against the dictionary, the following error is a common and frustrating one!


SQL> select *
  2  from   dba_views
  3  where  text like '%mystring%';
where  text like '%mystring%'
       *
ERROR at line 3:
ORA-00932: inconsistent datatypes: expected CHAR got LONG

I often hear the comment “Why not, in the next version, just change all those LONGs to CLOBs?”.  I imagine it would be possible to do so, but just pause for a second and think of the amount of regression testing that would need to occur, not just inside the database group in the Oracle organization, but for every customer that has ever coded up a mechanism in PL/SQL, or C, or Java, or any other language that is potentially using a LONG column in one of their queries.  I don’t know for sure, but I suspect that is the reason why we have added new columns to the dictionary rather than modified existing ones.  For example, if you look at DBA_VIEWS, you can see the addition of a column TEXT_VC, which is a VARCHAR2 equivalent of the pre-existing TEXT column, which is a LONG.


SQL> desc DBA_VIEWS
 Name                                Null?    Type
 ----------------------------------- -------- ----------------
 OWNER                               NOT NULL VARCHAR2(128)
 VIEW_NAME                           NOT NULL VARCHAR2(128)
 TEXT_LENGTH                                  NUMBER
 TEXT                                         LONG
 TEXT_VC                                      VARCHAR2(4000)
 TYPE_TEXT_LENGTH                             NUMBER
 TYPE_TEXT                                    VARCHAR2(4000)
 OID_TEXT_LENGTH                              NUMBER
 OID_TEXT                                     VARCHAR2(4000)
 VIEW_TYPE_OWNER                              VARCHAR2(128)
 VIEW_TYPE                                    VARCHAR2(128)
 SUPERVIEW_NAME                               VARCHAR2(128)
 EDITIONING_VIEW                              VARCHAR2(1)
 READ_ONLY                                    VARCHAR2(1)
 CONTAINER_DATA                               VARCHAR2(1)
 BEQUEATH                                     VARCHAR2(12)
 ORIGIN_CON_ID                                NUMBER
 DEFAULT_COLLATION                            VARCHAR2(100)
 CONTAINERS_DEFAULT                           VARCHAR2(3)
 CONTAINER_MAP                                VARCHAR2(3)
 EXTENDED_DATA_LINK                           VARCHAR2(3)
 EXTENDED_DATA_LINK_MAP                       VARCHAR2(3)

TEXT_VC is a nice touch, but it would have been nice to have that as a CLOB column to get access to the entire DDL for the view should it exceed 4000 characters.  I can’t solve that problem, but I thought perhaps I can give you a workable compromise.  In 8i, the TO_LOB function was implemented to allow customers to migrate from LONG to CLOB with minimal fuss.  I can take advantage of that to provide a CLOB-based copy of DBA_VIEWS:


SQL> create table system.t  as
  2  select
  3     owner
  4    ,view_name
  5    ,text_length
  6    ,to_lob(text) text
  7    ,text_vc
  8    ,type_text_length
  9    ,type_text
 10    ,oid_text_length
 11    ,oid_text
 12    ,view_type_owner
 13    ,view_type
 14    ,superview_name
 15    ,editioning_view
 16    ,read_only
 17    ,container_data
 18    ,bequeath
 19    ,origin_con_id
 20    ,default_collation
 21    ,containers_default
 22    ,container_map
 23    ,extended_data_link
 24    ,extended_data_link_map
 25  from dba_views;

Table created.

SQL>
SQL> create index system.t_ix on system.t ( owner, view_name );

Index created.

That is all well and good, but of course, the moment I perform maintenance on any view in the system, that table will be “stale”. That still might be a perfectly reasonable solution for you – you just refresh the table contents (say) once per day, or when you do deployments of schema changes into your database.  But perhaps we can do a little better than that.  Using a DDL event trigger, we can capture changes on views and adjust our copy accordingly.


SQL> create or replace
  2  trigger sys.view$clob$handler
  3  after create or alter or drop or rename
  4  on database
  5  when ( ora_dict_obj_type = 'VIEW' )
  6  declare
  7    l_obj_name  varchar2(128) := ora_dict_obj_name;
  8    l_obj_owner varchar2(128) := ora_dict_obj_owner;
  9
 10    l_text   varchar2(1000);
 11    sql_text ora_name_list_t;
 12    l_idx    pls_integer;
 13  begin
 14    lock table system.t in exclusive mode;
 15
 16    --
 17    -- remove the existing row for the view
 18    --
 19    delete from system.t where owner = l_obj_owner and view_name = l_obj_name;
 20
 21    --
 22    -- if it is a rename event, we will try to derive the new name
 23    -- from the sql statement by looking for a trailing "TO"
 24    --
 25    if ora_sysevent in ('RENAME') then
 26      l_idx := ora_sql_txt(sql_text);
 27      for i in 1 .. l_idx
 28      loop
 29          l_text := l_text || sql_text(i);
 30      end loop;
 31      l_idx := instr(lower(l_text),' to ');
 32      if l_idx = 0 then
 33        raise_application_error(-20000,'Could not find appropriate rename content');
 34      end if;
 35      --
 36      -- The SQL has a trailing chr(0) which we need to remove
 37      --
 38      l_text := rtrim(ltrim(substr(l_text,l_idx+4)),' '||chr(0));
 39      --
 40      -- The object name in quotes mean we preserve the case specified, otherwise
 41      -- we will normalize to upper
 42      --
 43      if l_text like '"%"' then
 44        l_obj_name := rtrim(ltrim(l_text,'"'),'"');
 45      else
 46        l_obj_name := upper(l_text);
 47      end if;
 48    end if;
 49
 50    --
 51    -- Now we insert the updated definition for the view, or
 52    -- perhaps its new name
 53    --
 54    if ora_sysevent in ('CREATE','ALTER','RENAME') then
 55      insert into system.t
 56      select
 57         owner
 58        ,view_name
 59        ,text_length
 60        ,to_lob(text) text
 61        ,text_vc
 62        ,type_text_length
 63        ,type_text
 64        ,oid_text_length
 65        ,oid_text
 66        ,view_type_owner
 67        ,view_type
 68        ,superview_name
 69        ,editioning_view
 70        ,read_only
 71        ,container_data
 72        ,bequeath
 73        ,origin_con_id
 74        ,default_collation
 75        ,containers_default
 76        ,container_map
 77        ,extended_data_link
 78        ,extended_data_link_map
 79      from dba_views
 80      where owner = l_obj_owner
 81      and   view_name = l_obj_name;
 82    end if;
 83
 84  end;
 85  /

Trigger created.

Obviously, for more serious usage you’ll be choosing a better table name and not using SYS, but this is just a demo on my laptop.  Unlike a DML trigger, where we have access to “new” and “old” images of the data, for a RENAME command I had to probe the SQL text to try to derive the new name. So let’s now perform some view DDL and see how our tracking trigger accommodates the changes.


SQL> create view view1 as select * from ALL_objects;

View created.

SQL> create view view2 as select * from ALL_objects;

View created.

SQL> create view view3 as select * from view2;

View created.

SQL> drop view view1;

View dropped.

SQL> rename view2 to view2a;

Table renamed.

SQL>
SQL>
SQL> select count(*) from system.t;

  COUNT(*)
----------
      7347

SQL> select count(*) from dba_views;

  COUNT(*)
----------
      7347

SQL> select owner, view_name from dba_views
  2  minus
  3  select owner, view_name from system.t;

no rows selected

SQL>
SQL> select owner, view_name from system.t
  2  minus
  3  select owner, view_name from dba_views;

no rows selected

And there we have it – our own custom version of DBA_VIEWS where the TEXT is now exposed as a CLOB column.  So now, mining that column for information is as easy as a simple predicate


SQL> select owner, view_name
  2  from   system.t
  3  where  text like 'selec%ALL_objects';

OWNER                          VIEW_NAME
------------------------------ -------------------
MCDONAC                        VIEW2A

gc buffer busy

I had to write this post because I can never remember which way round Oracle named the two versions of gc  buffer busy when it split them. There are two scenarios to cover when my session wants my instance to acquire a global cache lock on a block and some other session is already trying to acquire that lock (or is holding it in an incompatible fashion):

  • The other session is in my instance
  • The other session is in a remote instance

One of these cases is reported as “gc buffer busy acquire”, the other as a “gc buffer busy release” – and I always have to check which is which. I think I usually get it right first time when I see it, but I always manage to convince myself that I might have got it wrong and end up searching the internet for Riyaj Shamsudeen’s blog posting about it.

The “release” is waiting for another instance to surrender the lock to my instance; the “acquire” is waiting for another session in my instance to finish acquiring the lock from the other  instance.
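A quick way to compare the two on your own system is to query the system-level wait statistics, along these lines (a sketch; in RAC, GV$SYSTEM_EVENT shows the picture per instance):


SQL> select inst_id, event, total_waits, time_waited_micro
  2  from   gv$system_event
  3  where  event like 'gc buffer busy%'
  4  order by inst_id, event;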

I decided to jot down this note so I didn’t have to keep searching for Riyaj’s post, and also because a little problem on OTN at the moment showed a couple of AWR reports with an unlikely combination of acquire waits (180,000,000) and release waits (2,000).

If you’re wondering why this looks odd – if I’m waiting for an acquire someone else in my instance must be waiting for a release.  Obviously many sessions could be waiting for one release, and if acquirers time out very rapidly (though they’re not reported as doing so) then the ratio could get very high – but 90,000 acquires per release doesn’t look right.

 

Video: Oracle X$TRACE, Wait Event Internals and Background Process Communication

I have uploaded the video of my Secret Hacking Session: Oracle X$TRACE, Wait Event Internals and Background Process Communication to my Oracle performance & troubleshooting YouTube channel.

The slides are on SlideShare.

Enjoy!

 

NB! I am running one more Advanced Oracle Troubleshooting training in 2018! You can attend the live online training and can download the personal video recordings too. Part 1 starts on 29th January 2018 - sign up here!

Linux for the SQL Server DBA, Part III

Yes, there’s a Part III!

You’re going to have to start scripting, and now that you know a few VI/VIM commands, let’s talk about beginning to script with shell.

In our example below, we’re going to start a new project called “test”.  For this, the requirements are:

  1. create a new directory called testdir off of the /u01/scripts directory
  2. create a new file called test.sh inside the testdir directory
  3. ensure it uses bash as our chosen shell
# cd /u01/scripts
# mkdir testdir
# cd testdir
# touch test.sh
# ls
 test.sh
# pwd
 /u01/scripts/testdir

As you can see, I’ve now fulfilled the first two steps of the requirements.  Now I need to edit the new file I created and add the shell to be used.

# vim test.sh
i
#!/bin/bash
:wq

What I did in the above steps was to:

  1. open the file I created with the touch command in VIM.
  2. hit “i” to insert
  3. Add the dialect of shell scripting that we want to use.  By adding the “#!” before the path to the correct shell program, we tell the system to execute the script with /bin/bash.
  4. hit “:wq” to write and quit the file, returning to the prompt.

This is the very beginning of scripting with shell.  We’re not going to go any further with this; instead, we’ll clean up (aka remove) what we just did, and see how to do that properly.  From the /u01/scripts/testdir path:

# rm test.sh
# ls
# cd ..
# rmdir testdir

For our next example, let’s say there are more files than the test.sh file in the directory:

# ls
test.sh test1.log test1.sh test2.log test2.sh

Now the rmdir (remove directory) command won’t work.  You could use the following command at the /u01/scripts path to remove the testdir directory and ALL the files within it:

# rm -rf ./testdir

This is a destructive command, removing testdir and all files below, but what if you were really supposed to keep those other files or forgot to put in a directory name?

# rm -rf .

Well, that’s not what you wanted to do….  It’s important to always double-check the command before you hit the Enter key, and also to consider scripting destructive commands that you have to run on a regular basis, to deter catastrophic mistakes; a sketch of one approach follows below.
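For example, a small hypothetical wrapper like this one refuses the obviously dangerous cases and shows what is about to be deleted before doing anything (the script name and checks are illustrative):

#!/bin/bash
# safe_rm.sh - illustrative wrapper around a destructive rm -rf
dir="$1"

# refuse empty, current and root directories outright
if [ -z "$dir" ] || [ "$dir" = "." ] || [ "$dir" = "/" ]; then
  echo "Refusing to remove '$dir'" >&2
  exit 1
fi

# show what is about to be removed, then ask for confirmation
echo "About to remove $dir and everything below it:"
ls "$dir"
read -r -p "Proceed? (y/n) " answer
[ "$answer" = "y" ] && rm -rf "$dir"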

You can find Part I and Part II for this blog series at the links.  Enjoy!




Explain Plan format

The DBMS_XPLAN format accepts a lot of options, not all of which are documented. Here is a small recap of the available information.

The minimum that is displayed is the Plan Line Id, the Operation, and the Object Name. You can add columns and/or sections with options, such as ‘rows’, optionally starting with a ‘+’ like ‘+rows’. Some options group several additional pieces of information, such as ‘typical’, which is also the default, or ‘basic’, ‘all’, ‘advanced’. You can choose one of them and remove some columns with ‘-’, such as ‘typical -rows -bytes -cost -plan_hash -predicate -remote -parallel -partition -note’. Finally, from a cursor executed with plan statistics, you can show all execution statistics with ‘allstats’, and the last execution statistics with ‘allstats last’. Subsets of ‘allstats’ are ‘rowstats’, ‘memstats’, ‘iostats’, ‘buffstats’.

Of course, the column/section is displayed only if the information is present.

This blog post shows what is displayed by which option, as of 12cR2, probably with some missing combinations.
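As a reminder of where these format strings go, here are two typical calls (a sketch: DBMS_XPLAN.DISPLAY formats the last EXPLAIN PLAN, and DISPLAY_CURSOR with two NULL arguments targets the last statement executed in the session; for ‘allstats’ the statement must have been run with plan statistics, e.g. the gather_plan_statistics hint):


SQL> explain plan for select * from emp join dept using (deptno);

Explained.

SQL> select * from table(dbms_xplan.display(format=>'typical +outline'));

SQL> select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));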

+plan_hash, or BASIC


PLAN_TABLE_OUTPUT
-----------------
Plan hash value: 1338588353

Plan hash value: is displayed by ‘basic +plan_hash’ or ‘typical’ or ‘all’ or ‘advanced’

+rows +bytes +cost +partition +parallel, or TYPICAL


-----------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop | TQ/Ins |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 287 | 19516 | 5 (20)| 00:00:01 | | | | | |
| 1 | PX COORDINATOR | | | | | | | | | | |
| 2 | PX SEND QC (ORDER) | :TQ10002 | 287 | 19516 | 5 (20)| 00:00:01 | | | Q1,02 | P->S | QC (ORDER) |
| 3 | SORT ORDER BY | | 287 | 19516 | 5 (20)| 00:00:01 | | | Q1,02 | PCWP | |
| 4 | PX RECEIVE | | 287 | 19516 | 4 (0)| 00:00:01 | | | Q1,02 | PCWP | |
| 5 | PX SEND RANGE | :TQ10001 | 287 | 19516 | 4 (0)| 00:00:01 | | | Q1,01 | P->P | RANGE |
|* 6 | HASH JOIN | | 287 | 19516 | 4 (0)| 00:00:01 | | | Q1,01 | PCWP | |
| 7 | PX BLOCK ITERATOR | | 14 | 532 | 2 (0)| 00:00:01 | 1 | 1 | Q1,01 | PCWC | |
| 8 | TABLE ACCESS FULL | EMP | 14 | 532 | 2 (0)| 00:00:01 | 1 | 1 | Q1,01 | PCWP | |
| 9 | BUFFER SORT | | | | | | | | Q1,01 | PCWC | |
| 10 | PX RECEIVE | | 82 | 2460 | 2 (0)| 00:00:01 | | | Q1,01 | PCWP | |
| 11 | PX SEND BROADCAST| :TQ10000 | 82 | 2460 | 2 (0)| 00:00:01 | | | | S->P | BROADCAST |
| 12 | REMOTE | DEPT | 82 | 2460 | 2 (0)| 00:00:01 | | | LOOPB~ | R->S | |
-----------------------------------------------------------------------------------------------------------------------------------

Rows or E-Rows: is displayed by ‘basic +rows’ or ‘typical’ or ‘all’ or ‘advanced’
Bytes or E-Bytes: is displayed by ‘basic +bytes’ or ‘typical’ or ‘all’ or ‘advanced’
Cost: is displayed by ‘basic +cost’ or ‘typical’ or ‘all’ or ‘advanced’
TmpSpc or E-Temp: is displayed by ‘basic +bytes’ or ‘typical’ or ‘all’ or ‘advanced’
Time or E-Time: is displayed by ‘typical’ or ‘all’ or ‘advanced’
Pstart/Pstop: is displayed by ‘basic +partition’ or ‘typical’ or ‘all’ or ‘advanced’
TQ/Ins, IN-OUT, PQ Distrib: is displayed by ‘basic +parallel’ or ‘typical’ or ‘all’ or ‘advanced’

The ‘A-’ and ‘E-’ prefixes are used when displaying execution statistics, to differentiate estimations from actual numbers.

+alias


Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
 
1 - SEL$58A6D7F6
8 - SEL$58A6D7F6 / EMP@SEL$1
12 - SEL$58A6D7F6 / DEPT@SEL$1

Query Block Name / Object Alias: is displayed by ‘basic +alias’ or ‘typical +alias’ or ‘all’ or ‘advanced’

+outline


Outline Data
-------------
 
/*+
BEGIN_OUTLINE_DATA
PQ_DISTRIBUTE(@"SEL$58A6D7F6" "DEPT"@"SEL$1" NONE BROADCAST)
USE_HASH(@"SEL$58A6D7F6" "DEPT"@"SEL$1")
LEADING(@"SEL$58A6D7F6" "EMP"@"SEL$1" "DEPT"@"SEL$1")
FULL(@"SEL$58A6D7F6" "DEPT"@"SEL$1")
FULL(@"SEL$58A6D7F6" "EMP"@"SEL$1")
OUTLINE(@"SEL$1")
OUTLINE(@"SEL$2")
MERGE(@"SEL$1" >"SEL$2")
OUTLINE_LEAF(@"SEL$58A6D7F6")
ALL_ROWS
DB_VERSION('12.2.0.1')
OPTIMIZER_FEATURES_ENABLE('12.2.0.1')
IGNORE_OPTIM_EMBEDDED_HINTS
END_OUTLINE_DATA
*/

Outline Data: is displayed by ‘basic +outline’ or ‘typical +outline’ or ‘all +outline’ or ‘advanced’

+peeked_binds


Peeked Binds (identified by position):
--------------------------------------
 
1 - :X (VARCHAR2(30), CSID=873): 'x'

Peeked Binds: is displayed by ‘basic +peeked_binds’ or ‘typical +peeked_binds’ or ‘all +peeked_binds’ or ‘advanced’

+predicate


Predicate Information (identified by operation id):
---------------------------------------------------
 
6 - access("EMP"."DEPTNO"="DEPT"."DEPTNO")

Predicate Information: is displayed by ‘basic +predicate’ or ‘typical’ or ‘all’ or ‘advanced’

+projection


Column Projection Information (identified by operation id):
-----------------------------------------------------------
 
1 - INTERNAL_FUNCTION("DEPT"."DEPTNO")[22], "EMP"."EMPNO"[NUMBER,22], "EMP"."ENAME"[VARCHAR2,10],
"EMP"."JOB"[VARCHAR2,9], "EMP"."MGR"[NUMBER,22], "EMP"."HIREDATE"[DATE,7], "EMP"."SAL"[NUMBER,22],
"EMP"."COMM"[NUMBER,22], "DEPT"."DNAME"[VARCHAR2,14], "DEPT"."LOC"[VARCHAR2,13]
2 - (#keys=0) INTERNAL_FUNCTION("DEPT"."DEPTNO")[22], "EMP"."EMPNO"[NUMBER,22], "EMP"."ENAME"[VARCHAR2,10],
"EMP"."JOB"[VARCHAR2,9], "EMP"."MGR"[NUMBER,22], "EMP"."HIREDATE"[DATE,7], "EMP"."SAL"[NUMBER,22],
"EMP"."COMM"[NUMBER,22], "DEPT"."DNAME"[VARCHAR2,14], "DEPT"."LOC"[VARCHAR2,13]
3 - (#keys=1) INTERNAL_FUNCTION("DEPT"."DEPTNO")[22], "EMP"."EMPNO"[NUMBER,22], "EMP"."ENAME"[VARCHAR2,10],
"EMP"."JOB"[VARCHAR2,9], "EMP"."MGR"[NUMBER,22], "EMP"."HIREDATE"[DATE,7], "EMP"."SAL"[NUMBER,22],
"EMP"."COMM"[NUMBER,22], "DEPT"."DNAME"[VARCHAR2,14], "DEPT"."LOC"[VARCHAR2,13]
4 - INTERNAL_FUNCTION("DEPT"."DEPTNO")[22], "EMP"."EMPNO"[NUMBER,22], "EMP"."ENAME"[VARCHAR2,10],
"EMP"."JOB"[VARCHAR2,9], "EMP"."MGR"[NUMBER,22], "EMP"."HIREDATE"[DATE,7], "EMP"."SAL"[NUMBER,22],
"EMP"."COMM"[NUMBER,22], "DEPT"."DNAME"[VARCHAR2,14], "DEPT"."LOC"[VARCHAR2,13]

Column Projection Information: is displayed by ‘basic +projection’ or ‘typical +projection’ or ‘all’ or ‘advanced’

+remote


Remote SQL Information (identified by operation id):
----------------------------------------------------
 
12 - SELECT "DEPTNO","DNAME","LOC" FROM "DEPT" "DEPT" (accessing 'LOOPBACK' )

Remote SQL Information: is displayed by ‘basic +remote’ or ‘typical’ or ‘all’ or ‘advanced’

+metrics


Sql Plan Directive information:
-------------------------------
 
Used directive ids:
9695481911885124390

Sql Plan Directive information: is displayed by ‘+metrics’

+note

The Note section can show information about SQL Profiles, SQL Patch, SQL Plan Baseline, Outlines, Dynamic Sampling, Degree of Parallelism, Parallel Query, Parallel DML, Create Index Size, Cardinality Feedback, Rely Constraints used for transformation, Sub-Optimal XML, Adaptive Plan, GTT private statistics,…


Note
-----
- Degree of Parallelism is 2 because of table property
- dynamic statistics used: dynamic sampling (level=2)
- 1 Sql Plan Directive used for this statement
- this is an adaptive plan (rows marked '-' are inactive)

Note: is displayed by ‘basic +note’ or ‘typical’ or ‘all’ or ‘advanced’

+adaptive


---------------------------------------------------------------------------------------
| Id | Operation | Name |Starts|E-Rows| A-Rows|
---------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 0 |
| 1 | HASH UNIQUE | | 1 | 1 | 0 |
| * 2 | HASH JOIN SEMI | | 1 | 1 | 0 |
|- 3 | NESTED LOOPS SEMI | | 1 | 1 | 7 |
|- 4 | STATISTICS COLLECTOR | | 1 | | 7 |
| * 5 | TABLE ACCESS FULL | DEPARTMENTS | 1 | 1 | 7 |
|- * 6 | TABLE ACCESS BY INDEX ROWID BATCHED| EMPLOYEES | 0 | 1 | 0 |
|- * 7 | INDEX RANGE SCAN | EMP_DEP_IX | 0 | 10 | 0 |
| * 8 | TABLE ACCESS FULL | EMPLOYEES | 1 | 1 | 1 |
---------------------------------------------------------------------------------------

Inactive branches of adaptive plan: is displayed by ‘+adaptive’

+report


Reoptimized plan:
-----------------
This cursor is marked for automatic reoptimization. The plan that is
expected to be chosen on the next execution is displayed below.

Reoptimized plan: is displayed by ‘+report’

ALLSTATS


---------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | O/1/M |
---------------------------------------------------------------------------------------------------------------------------

Starts: is displayed by ‘basic +rowstats’, ‘basic +allstats’
A-Rows: is displayed by ‘basic +rowstats’, ‘basic +allstats’
A-Time: is displayed by ‘typical +rowstats’, ‘basic +allstats’
Buffers, Reads, Writes: is displayed by ‘basic +buffstats’, ‘basic +iostats’, ‘basic +allstats’
OMem, 1Mem, O/1/M, Used-Mem: is displayed by ‘basic +memstats’, ‘basic +allstats’
Max-Tmp, Used-Tmp: is displayed by ‘basic +memstats’, ‘typical +allstats’

With summed stats, O/1/M and Max-Tmp are used for the headers. With last stats, Used-Mem and Used-Tmp.

 


Identity columns in 12c … just a sequence ?

This question came to me over Twitter, so I thought I’d whip out a quick post on it.

 


Yes, we do implement the IDENTITY column via a sequence.  (Digression – I think this is a smart thing to do.  After all, we’ve had sequences for 20 years, so we know how they work, they are mature, tested, and rock solid, so why invent something new?)

So Peter’s question was – why not just do what we’ve always done and create a sequence and assign it as the default.  And yes, there is nothing wrong with doing that – it will work just fine.  But there are a couple of subtle differences between that and using the IDENTITY syntax.
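For reference, the do-it-yourself approach Peter is describing would look something like this (a sketch; note that using a sequence as a column default also requires 12c):


SQL> create sequence t_seq;

Sequence created.

SQL> create table t ( x int default t_seq.nextval );

Table created.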

1) The sequence is bound to the table, so when I drop the table, the sequence disappears as well


SQL> create table t ( x int generated as identity);

Table created.

SQL> select object_name from user_objects where created > sysdate - 60/86400;

OBJECT_NAME
----------------------------------------
T
ISEQ$$_195276

SQL> drop table t purge;

Table dropped.

SQL> select object_name from user_objects where created > sysdate - 60/86400;

no rows selected

2) You can lock down the column


SQL> create table t ( x int generated always as identity);

Table created.

SQL> insert into t values ( 12 ) ;
insert into t values ( 12 )
*
ERROR at line 1:
ORA-32795: cannot insert into a generated always identity column
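If you like that protection but occasionally need to supply an explicit value, there is a middle ground (a sketch):


SQL> create table t ( x int generated by default on null as identity);

Table created.

SQL> insert into t values ( 12 );

1 row created.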

Personally, I don’t have a preference.  The most important thing to me was that sequences can be used as a default value to avoid triggers on every single table.

DOAG 2017 and IT Tage 2017 Presentation Material

First of all, Happy New Year to everyone, and I want to thank the numerous people who attended my sessions "Calamities With Cardinalities" at DOAG 2017 and IT Tage 2017 last November and December.

You can find the slide deck here at slideshare.net.

Stay tuned for more publications this year - at least I have some interesting stuff upcoming.