Oakies Blog Aggregator

A brief history of time^H^H Oracle session statistics

I didn’t intend to write another blog post yesterday evening at all, but I found something that was worth sharing and got me excited… And when I started writing, I intended it to be a short post, too.

If you have been digging around Oracle session performance counters a little, you have undoubtedly noticed how their number has increased with every release, and even with every patch set. Unfortunately I don’t have an 11.1 system (or earlier) at my disposal to test, but here is a comparison of how Oracle has instrumented the database across recent releases. I have already ditched my 12.1.0.1 system as well, so no comparison there either :( This is Oracle on Linux.

The script

In the following examples I am going to use a simple query to list the session statistics by their class. The decode statement is based on the official documentation set, where you will find the definition of v$statname plus an explanation of the meaning of the class column. Here is the script:

with stats as (
        select name, decode(class,
                1, 'USER',
                2, 'REDO',
                4, 'ENQUEUE',
                8, 'CACHE',
                16, 'OS',
                32, 'RAC',
                64, 'SQL',
                128, 'DEBUG',
                'NA'
        ) as decoded_class from v$statname
)
select count(decoded_class), decoded_class
 from stats
 group by rollup(decoded_class)
 order by 1
/

Oracle 11.2.0.3

11.2.0.3 is probably the most common 11g Release 2 version currently out there in the field – or at least that’s my observation. According to MOS Doc ID 742060.1, 11.2.0.3 was released on 23 September 2011 (is that really that long ago?) and is, by the way, already out of error correction support.

Executing the above-mentioned script gives me the following result:

COUNT(DECODED_CLASS) DECODED
-------------------- -------
                   9 ENQUEUE
                  16 OS
                  25 RAC
                  32 REDO
                  47 NA
                  93 SQL
                 107 USER
                 121 CACHE
                 188 DEBUG
                 638

So there are 638 of these counters. Let’s move on to 11.2.0.4.

Oracle 11.2.0.4

Oracle 11.2.0.4 is interesting as it has been released after 12.1.0.1. It is the terminal release for Oracle 11.2, and you should consider migrating to it as it is in error correction support. The patch set came out on 28 August 2013. What about the session statistics?

COUNT(DECODED_CLASS) DECODED
-------------------- -------
                   9 ENQUEUE
                  16 OS
                  25 RAC
                  34 REDO
                  48 NA
                  96 SQL
                 117 USER
                 127 CACHE
                 207 DEBUG
                 679

A few more, all within what can be expected.

Oracle 12.1.0.2

Oracle 12.1.0.2 is fresh off the press, released just a few weeks ago. Unsurprisingly the number of session statistics has been increased again. What did surprise me was the number of statistics now available for every session! Have a look at this:

COUNT(DECODED_CLASS) DECODED
-------------------- -------
                   9 ENQUEUE
                  16 OS
                  35 RAC
                  68 REDO
                  74 NA
                 130 SQL
                 130 USER
                 151 CACHE
                 565 DEBUG
                1178

That’s nearly double what you found for 11.2.0.3 – incredible, and hence this post. Comparing 11.2.0.4 with 12.1.0.2 you will notice:

  • the same number of enqueue stats
  • the same number of OS stats
  • 10 additional RAC stats
  • twice the number of redo-related statistics
  • quite a few more unclassified ones (26)
  • 34 more SQL-related stats
  • 13 more in the user class
  • 24 additional stats in the cache class
  • and a whopping 298 (!) more in the debug class
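If you want to see exactly which counters are new rather than just how many, a name-by-name comparison is easy once the two instances can see each other. Here is a minimal sketch, assuming a database link from the 12.1.0.2 instance to the 11.2.0.4 one (the link name lab11204 is invented for illustration):

-- statistics present in this 12.1.0.2 instance but not in 11.2.0.4
select name
  from v$statname
minus
select name
  from v$statname@lab11204
order by 1;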

The debug class (128) shows lots of statistics (including spare ones) for the in-memory option (IM):

SQL> select count(1), class from v$statname where name like 'IM%' group by class;

  COUNT(1)      CLASS
---------- ----------
       211        128

Happy troubleshooting! This reminds me to look into the IM option in more detail.
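If you want to watch any of these counters for your own session, a quick join between v$mystat and v$statname does the job. A minimal sketch, run as the session you are interested in:

-- show non-zero IM-related counters for the current session
select sn.name, ms.value
  from v$mystat ms
  join v$statname sn on sn.statistic# = ms.statistic#
 where sn.name like 'IM%'
   and ms.value > 0
 order by ms.value desc;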

SLOB Deployment – A Picture Tutorial.

SLOB can be obtained at this link: Click here.

This post is just a simple set of screenshots I recently took during a fresh SLOB deployment. There have been a tremendous number of SLOB downloads lately, so I thought this might be a helpful addition to go along with the documentation. The examples I show herein are based on a 12.1.0.2 Oracle Database, but these principles apply equally to 12.1.0.1 and all Oracle Database 11g releases.

Synopsis

  1. Create a tablespace for SLOB.
  2. Run setup.sh
  3. Verify user schemas
  4. Create The SLOB procedure In The USER1 Schema
  5. Execute runit.sh. An Example Of Wait Kit Failure and Remedy
  6. Execute runit.sh Successfully
  7. Using SLOB With SQL*Net
    1. Test SQL*Net Configuration
    2. Execute runit.sh With SQL*Net
  8. More About Testing Non-Linux Platforms

 

Create a Tablespace for SLOB

If you already have a tablespace to load the SLOB schemas into, please move on to the next step in the sequence.

SLOB-deploy-1

Run setup.sh

Provided database connectivity works with ‘/ as sysdba’ this step is quite simple. All you have to do is tell setup.sh which tablespace to use and how many SLOB users (schemas) to load. The slob.conf file tells setup.sh how much data to load. This example is 16 SLOB schemas, each with 10,000 8K blocks of data. One thing to be careful of is the slob.conf->LOAD_PARALLEL_DEGREE parameter. The name is not exactly perfect, since it actually controls the degree of concurrency of SLOB schema creation/loading. Underneath that concurrency there may also be parallelism (Oracle Parallel Query), so consider setting this to a rather low value so as not to flood the system until you’ve practiced with setup.sh for a while. A sketch of the matching settings follows below.
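To make that concrete, here is a hypothetical slob.conf fragment and setup.sh invocation matching the example in the screenshot. The tablespace name IOPS is an assumption for illustration, and parameter names may differ slightly between SLOB versions – check the slob.conf shipped with your kit:

# slob.conf fragment (illustrative values)
SCALE=10000              # 8K blocks loaded per SLOB schema
LOAD_PARALLEL_DEGREE=2   # concurrent schema creation/loading; keep it low at first

# load 16 SLOB schemas into tablespace IOPS
$ sh ./setup.sh IOPS 16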

 

SLOB-deploy-2

Verify Users’ Schemas

After taking a quick look at cr_tab_and_load.out, as setup.sh instructs, feel free to count the number of schemas. Remember, there is a “zero” user, so running setup.sh with 16 users leaves you with 17 SLOB schemas. A counting query is sketched below.
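One quick way to verify, assuming the default USERn naming and that no unrelated accounts match the pattern:

select count(*)
  from dba_users
 where username like 'USER%';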

SLOB-deploy-3

Create The SLOB Procedure In The USER1 Schema

After running setup.sh and counting the user schemas, please create the SLOB procedure in the USER1 schema, as sketched below.
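The screenshot shows this step; in essence it boils down to running the procedure script from the SLOB kit as USER1, along these lines (the password and exact script name depend on your kit – treat this as a sketch):

$ sqlplus user1/user1 @procedure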

SLOB-deploy-4

Execute runit.sh. An Example Of Wait Kit Failure and Remedy

This is an example of what happens if one misses the step of creating the semaphore wait kit as per the documentation. Not to worry, simply do what the output of runit.sh directs you to do.

SLOB-deploy-5

Execute runit.sh Successfully

The following is an example of a healthy runit.sh test.

SLOB-deploy-6

Using SLOB With SQL*Net

Strictly speaking this is all optional if all you intend to do is test SLOB on your current host. However, if SLOB has been configured against a Windows, AIX, or Solaris box, this is how one tests it. Testing these non-Linux platforms merely requires a small Linux box (e.g., a laptop, or a VM running on the system you intend to test!) and SQL*Net.

Test SQL*Net Configuration

We don’t care where the SLOB database service is. If you can reach it successfully with tnsping you are mostly there.
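For example (the TNS alias SLOB is invented here; use whatever your tnsnames.ora defines):

$ tnsping SLOB
$ sqlplus user1/user1@SLOB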

SLOB-deploy-7

Execute runit.sh With SQL*Net

The following is an example of a successful runit.sh test over SQL*Net.

SLOB-deploy-8

More About Testing Non-Linux Platforms

Please note, loading SLOB over SQL*Net has the same configuration requirements as what I’ve shown for data loading (i.e., running setup.sh). Consider the following screenshot which shows an example of loading SLOB via SQL*Net.

SLOB-deploy-9

Finally, please see the next screenshot, which shows the slob.conf file that corresponds to the proof of loading SLOB via SQL*Net.

SLOB-deploy-10

 

Summary

This short post shows the simple steps needed to deploy SLOB, both in the simple Linux host-only scenario and via SQL*Net. Once a SLOB user gains the skills needed to load and use SLOB via SQL*Net, there are no barriers to testing SLOB databases running on any platform, including Windows, AIX and Solaris.



Metering and Chargeback

In the past few posts, I’ve covered setting up PDBaaS, using the Self Service portal with PDBaaS, setting up Schema as a Service, and using the Self Service Portal with Schema as a Service, all of these using Enterprise Manager Cloud Control 12c release 12.1.0.4. Now I want to move on to an area where you start to get more back from all of this work – metering and chargeback.

Metering is something that Enterprise Manager has done since its very first release. It’s a measurement of some form of resource – obviously in the case of Enterprise Manager it’s a measurement of how much computing resource such as CPU, I/O, memory, storage etc. has been used by an object. If I think way back to the very first release of Enterprise Manager I ever saw – the 0.76 release, whenever that was! – the thing that comes to mind most is its remarkably pretty tablespace map, which showed you diagrammatically just where every block of an object sat in a particular tablespace. Remarkably pretty, as I said – but virtually useless, because all you could do was look at the pretty colours!

Clearly, metering has come a long, long way since that time, and if you have had Enterprise Manager up and running for some time you now have at your fingertips metrics on so many different things that you may be lost trying to work out what you can do with it all. Well, that’s where Chargeback comes into play. In simple terms, chargeback is (as the name implies) an accounting tool. In Enterprise Manager terms, it has three main functions:

  1. It provides a way of aggregating the enormous amount of metrics data that Enterprise Manager has been collecting.
  2. It provides reports to the consumers of those resources showing how much of them they have used.
  3. If you have set it up to do so, it provides a way for the IT department to charge those consumers for the resources they have used.

Let me expand on that last point a little further. Within the Chargeback application, the cloud administrator can set specific charges for specific resources. As an example, you might decide to charge $1 a month per gigabyte of memory used for a database. Those charges can be transferred to some form of billing application such as Oracle’s “Self-Service E-Billing” application and end up being charged as a real cost to the end user. However, my experience so far has been that few people are actually using it to charge a cost to the end user. There are two reasons for that:

  • Firstly, most people are still not in the mindset of paying for computing power in the same way as other utilities i.e. paying for the amount of computing power that is actually consumed, as we do with our gas, electricity and phone bills.
  • Secondly, and as a direct extension (I believe anyway) of the first reason, most people are still not capable of deciding just how much to charge for a “unit” (whatever that might be) of computing power. In fact, I have seen arguments over just what to charge for a “unit” of computing power last much longer than any meetings held to decide to actually implement chargeback!

The end result of this is that I have most often seen customers choose to implement SHOWback rather than CHARGEback. Showback is in many ways very similar to chargeback. It’s the ability to provide reports to end users that show how much computing resource they have used, AND to show them how much it would have cost the end users if the IT department had indeed decided to actually charge for it. In some ways this is just as beneficial to the IT department as it allows them to have a much better grasp on what they need to know for budgeting purposes, and it avoids the endless arguments about whether end users are being charged too much. :)

Terminology

OK, let’s talk about some of the new terminology you need to understand before we implement chargeback (from now on, I will use the term “chargeback” to cover both “chargeback” and “showback” for simplicity’s sake, and because the application is actually called “Chargeback” in the Enterprise Manager Cloud Control product).

Chargeback Entities

The first concept you need to understand is that of a chargeback entity. In Enterprise Manager terms, a target typically uses some form of resource, and the Chargeback application calculates the cost of that resource usage. In releases prior to Enterprise Manager 12.1.0.4, the Chargeback application collected configuration information and metrics for a subset of Enterprise Manager targets. In the 12.1.0.4 release, you can use EMCLI verbs to add Chargeback support for Enterprise Manager target types that have no out-of-the-box Chargeback support. These chargeback targets, both out-of-the-box and custom types, are collectively known as “entities”.

Charge Plans

A charge plan is what Enterprise Manager uses to associate the resources being charged for and the rates at which they are charged. There are two types of charge plans available:

  • Universal Charge Plan – The universal charge plan contains the rates for CPU, storage and memory. Despite the name, it isn’t truly universal, because it doesn’t apply to all entity types. For example, it is not applicable to J2EE applications.
  • Extended Charge Plans – The Universal Charge Plan is an obvious starting point, but there are many situations where entity-specific charges are required. Let’s say you have a lot of people who understand Linux, but there is a new environment being added to your data centre that requires Windows knowledge. If you had to pay a contractor to come in to look after that environment because it was outside your knowledge zone, it would be fair to charge usage of the Windows environment at a higher rate. As another example, let’s say your standard environments did not use Real Application Clusters (RAC), and a new environment has come in that requires the high availability you can get from a RAC environment. RAC is, of course, a database option that you need to pay an additional license fee for, so that should be charged at a higher rate. An extended charge plan can be used to meet these sorts of requirements, as it provides greater flexibility to Chargeback administrators. Extended charge plans allow you to:
    • Setup specific charges for specific entities
    • Define rates based on configuration and usage
    • Assign a flat rate independent of configuration or usage
    • Override the rates set for the universal plan

    An out of the box extended plan is provided that you can use as a basis for creating your own extended plans. This plan defines charges based on machine sizes for the Oracle VM Guest entity.

Cost Centres

Obviously, when charges for resource usage are implemented, these charges must be assigned to something. In the Chargeback application, the costs are assigned to a cost centre. Cost centres are typically organized in a hierarchy and may correspond to different parts of an organization — for example, Sales, Development, HR, and so forth – or they may correspond to different customers – for example, where you are a hosting company and host multiple customer environments. In either case, cost centres are defined as a hierarchy within the Chargeback application. You can also import cost centres that have been implemented in your LDAP server, if you want to use those.

Reports

The main benefit you get from using Chargeback is the vast amount of information it puts at your fingertips. This information can be reported on by administrators in a variety of formats available via BI Publisher, including pie charts and bar graphs, as well as drilling down to charges based on a specific cost centre, entity type, or resource. You can also make use of trending reports over time and can use this to aid you in your IT budget planning. Outside the Chargeback application itself, Self-Service users can view chargeback information related to the resources they have used within the Self Service Portal.

What’s Next?

So now you have an understanding of the capabilities of the Chargeback application in the Enterprise Manager product suite. The next step, of course, is to set it up. I’ll cover that in another blog post, so stay tuned for that!

More Oracle Multitenant Changes (12.1.0.2)

When I wrote about the remote cloning of PDBs, I said I would probably be changing some existing articles. Here’s a change I’ve done already.

There are also new articles.

I’m sure there will be some more little pieces coming out in the next few days…

As I mentioned before, the multitenant option is rounding out nicely in this release.

Cheers

Tim…


A first look at RAC 12c (part I)

I have recently upgraded my RAC 12.1.0.1.3 system to RAC 12.1.0.2, including the RDBMS installation. Currently I am updating my skills with information relevant to what I would normally have called 12c Release 2 (which would also answer the question: when is 12c Release 2 coming out?). Then I realised I hadn’t posted a “first look at RAC 12c” post yet – so here it comes.

There are a few things that caught my eye which aren’t specifically mentioned in the new features guide. First of all, RAC 12c does a few really cool things. Have a look at the srvctl command output:

[oracle@rac12node1 ~]$ srvctl
Usage: srvctl <command> <object> [<options>]
    commands: enable|disable|export|import|start|stop|relocate|status|add|remove|modify|getenv|setenv|unsetenv|
       config|convert|update|upgrade|downgrade|predict
    objects: database|instance|service|nodeapps|vip|network|asm|diskgroup|listener|srvpool|server|scan|scan_listener|
        oc4j|home|filesystem|gns|cvu|havip|exportfs|rhpserver|rhpclient|mgmtdb|mgmtlsnr|volume|mountfs
For detailed help on each command and object and its options use:
  srvctl <command> -help [-compatible] or
  srvctl <command> <object> -help [-compatible]
[oracle@rac12node1 ~]$

Quite a few more than with 11.2.0.3:

[oracle@rac112node1 ~]$ srvctl
Usage: srvctl <command> <object> [<options>]
 commands: enable|disable|start|stop|relocate|status|add|remove|modify|getenv|setenv|unsetenv|config|convert|upgrade
 objects: database|instance|service|nodeapps|vip|network|asm|diskgroup|listener|srvpool|server|scan|scan_listener|oc4j|home|filesystem|gns|cvu
For detailed help on each command and object and its options use:
 srvctl <command> -h or
 srvctl <command> <object> -h

I will detail the meaning of some of these later in this post or another one to follow.

Evaluation and Prediction

When you are working with policy-managed databases, RAC 12c already gave you a “what if” option in the form of the -eval flag. If, for example, you wanted to grow your server pool from 2 to 3 nodes:

[oracle@rac12node1 ~]$ srvctl modify srvpool -serverpool pool1 -max 3 -eval -verbose
Database two will be started on node rac12node3
Server rac12node3 will be moved from pool Free to pool pool1
[oracle@rac12node1 ~]$

Now you can predict the consequences of a resource failure as well:

[oracle@rac12node1 ~]$ srvctl predict -h

The SRVCTL predict command evaluates the consequences of resource failure.

Usage: srvctl predict database -db <db_unique_name> [-verbose]
Usage: srvctl predict service -db <db_unique_name> -service <service_name> [-verbose]
Usage: srvctl predict asm [-node <node_name>] [-verbose]
Usage: srvctl predict diskgroup -diskgroup <diskgroup_name> [-verbose]
Usage: srvctl predict filesystem -device <volume_device> [-verbose]
Usage: srvctl predict vip -vip <vip_name> [-verbose]
Usage: srvctl predict network [-netnum <network_number>] [-verbose]
Usage: srvctl predict listener -listener <listener_name> [-verbose]
Usage: srvctl predict scan -scannumber <scan_ordinal_number> [-netnum <network_number>] [-verbose]
Usage: srvctl predict scan_listener -scannumber <scan_ordinal_number> [-netnum <network_number>] [-verbose]
Usage: srvctl predict oc4j [-verbose]

So what would happen if a disk group failed?

[oracle@rac12node1 ~]$ srvctl predict diskgroup -diskgroup DATA -verbose
Resource ora.DATA.dg will be stopped
Resource ora.DATA.ORAHOMEVOL.advm will be stopped
[oracle@rac12node1 ~]$

What it doesn’t seem to do at this stage is assess cascading follow-on problems. If +DATA went down, it would pretty much drag the whole cluster down with it, too.

Status

Interestingly you can see a lot more detail with 12.1.0.2 than previously. Here is an example of a policy-managed RAC One Node database:

[oracle@rac12node1 ~]$ srvctl config database -d RONNCDB
Database unique name: RONNCDB
Database name: RONNCDB
Oracle home: /u01/app/oracle/product/12.1.0.2/dbhome_1
Oracle user: oracle
Spfile: +DATA/RONNCDB/PARAMETERFILE/spfile.319.854718651
Password file: +DATA/RONNCDB/PASSWORD/pwdronncdb.290.854718263
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: ronpool1
Disk Groups: RECO,DATA
Mount point paths:
Services: NCDB
Type: RACOneNode
Online relocation timeout: 30
Instance name prefix: RONNCDB
Candidate servers:
OSDBA group: dba
OSOPER group:
Database instances:
Database is policy managed

Did you spot the OSDBA and OSOPER group mappings in the output? Since 12.1.0.1, DBCA by default creates the password file and server parameter file in ASM.

You can get a lot more status information in 12.1.0.2 than previously, especially when compared to 11.2:

[oracle@rac12node1 ~]$ srvctl status -h

The SRVCTL status command displays the current state of the object.

Usage: srvctl status database {-db <db_unique_name> [-serverpool <pool_name>] | -serverpool <pool_name> | -thisversion | -thishome} [-force] [-verbose]
Usage: srvctl status instance -db <db_unique_name> {-node <node_name> | -instance <inst_name_list>} [-force] [-verbose]
Usage: srvctl status service {-db <db_unique_name> [-service "<service_name_list>"] | -serverpool <pool_name> [-db <db_unique_name>]} [-force] [-verbose]
Usage: srvctl status nodeapps [-node <node_name>]
Usage: srvctl status vip {-node <node_name> | -vip <vip_name>} [-verbose]
Usage: srvctl status listener [-listener <lsnr_name>] [-node <node_name>] [-verbose]
Usage: srvctl status asm [-proxy] [-node <node_name>] [-detail] [-verbose]
Usage: srvctl status scan [[-netnum <network_number>] [-scannumber <scan_ordinal_number>] | -all] [-verbose]
Usage: srvctl status scan_listener [[-netnum <network_number>] [-scannumber <scan_ordinal_number>] | -all] [-verbose]
Usage: srvctl status srvpool [-serverpool <pool_name>] [-detail]
Usage: srvctl status server -servers "<server_list>" [-detail]
Usage: srvctl status oc4j [-node <node_name>] [-verbose]
Usage: srvctl status rhpserver
Usage: srvctl status rhpclient
Usage: srvctl status home -oraclehome <oracle_home> -statefile <state_file> -node <node_name>
Usage: srvctl status filesystem [-device <volume_device>] [-verbose]
Usage: srvctl status volume [-device <volume_device>] [-volume <volume_name>] [-diskgroup <group_name>] [-node <node_list> | -all]
Usage: srvctl status diskgroup -diskgroup <diskgroup_name> [-node "<node_list>"] [-detail] [-verbose]
Usage: srvctl status cvu [-node <node_name>]
Usage: srvctl status gns [-node <node_name>] [-verbose]
Usage: srvctl status mgmtdb [-verbose]
Usage: srvctl status mgmtlsnr [-verbose]
Usage: srvctl status exportfs [-name <expfs_name> | -id <havip_id>]
Usage: srvctl status havip [-id <id>]
Usage: srvctl status mountfs -name <mountfs_name>
For detailed help on each command and object and its options use:
  srvctl <command> <object> -help [-compatible]

RAC 12.1.0.2 adds a few nifty little flags to srvctl status database: -thisversion and -thishome. They work really well where you have multiple versions of Oracle on the same machine (think consolidation):

[oracle@rac12node1 ~]$ srvctl status database -thisversion
Database unique name: RONNCDB
Instance RONNCDB_1 is running on node rac12node4
Online relocation: INACTIVE

Database unique name: TWO
Instance TWO_1 is running on node rac12node1
Instance TWO_2 is running on node rac12node2

Verbose!

Some commands are actually more verbose when you specify the -verbose flag:

[oracle@rac12node1 ~]$ srvctl status database -d RONNCDB -verbose
Instance RONNCDB_1 is running on node rac12node4 with online services NCDB. Instance status: Open.
Online relocation: INACTIVE
[oracle@rac12node1 ~]$ srvctl status database -d RONNCDB
Instance RONNCDB_1 is running on node rac12node4
Online relocation: INACTIVE
[oracle@rac12node1 ~]$

But that’s not new in 12.1.0.2 I believe.

Interesting changes for database logging

The database itself will also tell you more about memory allocation:

**********************************************************************
Dump of system resources acquired for SHARED GLOBAL AREA (SGA)
 Per process system memlock (soft) limit = 64K
Thu Jul 31 13:34:58 2014
 Expected per process system memlock (soft) limit to lock
 SHARED GLOBAL AREA (SGA) into memory: 1538M
Thu Jul 31 13:34:58 2014
 Available system pagesizes:
  4K, 2048K
 Supported system pagesize(s):
  PAGESIZE  AVAILABLE_PAGES  EXPECTED_PAGES  ALLOCATED_PAGES  ERROR(s)
        4K       Configured               3          393219        NONE
     2048K                0             769               0        NONE

RECOMMENDATION:
 1. For optimal performance, configure system with expected number
 of pages for every supported system pagesize prior to the next
 instance restart operation.
 2. Increase per process memlock (soft) limit to at least 1538MB
 to lock 100% of SHARED GLOBAL AREA (SGA) pages into physical memory

As you can see, I am not using large pages here at all, which I did for demonstration purposes only. I don’t see any reason not to use large pages on a 64-bit system these days; a sketch of the relevant settings follows below. I’m curious to see whether the AIX port supports all the AIX page sizes here.
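For reference, following the recommendation in the log output above on Linux would come down to something like this. This is a minimal sketch based purely on the numbers the instance printed – 769 huge pages of 2048K covering the 1538M SGA, i.e. 1574912 KB of memlock – and the oracle user name is an assumption for illustration:

# /etc/sysctl.conf: pre-allocate enough 2M huge pages for the SGA
vm.nr_hugepages = 769

# /etc/security/limits.conf: allow the oracle user to lock the SGA (values in KB)
oracle soft memlock 1574912
oracle hard memlock 1574912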

End of part I

This has already turned into a longer post than I expected it to be when I started writing. I think I’ll continue the series in a couple of weeks when I find the time.

Viewing Figures

The day has just started in Singapore – though it’s just coming up to midnight back home – and the view counter has reached 4,000,009, despite the fact that I’ve not been able to contribute much to the community for the last couple of months. Despite the temporary dearth of writing, it’s time for a little review to see what’s been popular and how things have changed in the 10 months it took to accumulate the last 500,000 views. Here are some of the latest WordPress stats.

All time ratings

Post                   Views    First published
AWR / Statspack        80,997   Updated from time to time
NOT IN                 51,673   February 2007
Cartesian Merge Join   39,265   December 2006
dbms_xplan in 10g      37,725   November 2006
Shrink Tablespace      31,184   November 2006

Ratings over the last year

Post                Views    First published
AWR / Statspack     13,905   Updated from time to time
AWR Reports          9,494   February 2011
Shrink Tablespace    8,402   February 2010
Lock Modes           8,221   June 2010
NOT IN               6,388   February 2007

The figures for the previous half million views are very similar for most of the top 5, although “Analysing Statspack (1)” has been pushed from 5th place to 6th in the all-time greats, and “Lock Modes” has swapped places with “NOT IN” in the annual ratings. As the annual WordPress summary says: “… your posts have staying power, why not write more about …”.

The number of followers has gone up from about 2,500 to just over 3,900 but, as I said last time, I suspect that there’s a lot of double counting related to twitter.

 

Analogy – 2

I suggested a little while ago that thinking about the new in-memory columnar store as a variation on the principle of bitmap indexes was quite a good idea. I’ve had a couple of emails since then asking me to expand on the idea because “it’s wrong” – I will follow that one up as soon as I can, but in the meantime here’s another angle for connecting old technology with new technology:

It is a feature of in-memory column storage that the default strategy is to store all columns in memory. But it’s quite likely that you’ve got some tables where a subset of the columns is frequently accessed and other columns are rarely accessed, and it might seem a waste of resources to keep all the columns in memory just for the few occasional queries. So the feature allows you to de-select columns with the “no inmemory({list of columns})” option – a sketch follows below. It’s also possible to use different degrees of compression for different columns, of course, which adds another dimension to design and planning – but that’s a thought for another day.
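As a minimal illustration of the syntax (the table and column names here are invented):

alter table orders
        inmemory memcompress for query low (order_id, status)
        no inmemory (notes, internal_flags);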

So where else do you see an example of being selective about where you put columns? Index Organized Tables (IOTs) – where you can choose to put popular columns in the index (IOT_TOP) segment and the rest in the overflow segment, knowing that this can give you good performance for critical queries, but less desirable performance for the less important or less frequent queries. IOTs allow you to specify the (typically short) list of columns you want “in” – a sketch follows – and it might be quite nice if the same were true for the in-memory option; I can imagine cases where I would want to include a small set of columns and exclude a very large number of them (for reasons that bring me back to the bitmap index analogy).
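For comparison, here is a minimal IOT sketch (names invented) that keeps the popular columns in the index segment and pushes the rest to overflow:

create table orders_iot (
        order_id        number primary key,
        status          varchar2(10),
        notes           varchar2(4000)
)
organization index
        including status        -- columns up to and including status stay in the index segment
        overflow tablespace users;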

 

Bedford Thumbnails in Squarespace

Squarespace's Bedford template is wonderful in its bold use of thumbnail images to headline individual blog posts. I love that aspect of the template. The images add character and a splash of fun to my otherwise text-heavy posts. 

Oddly enough, the Bedford template does not show thumbnail images when you view the main page for a blog. Figure 1 shows the default view of my Database and Tech blog. All you see is boring text. But not to worry! Those images are one line of CSS away.


Figure 1. Default listing of blog posts in the Bedford template


The Bedford template does generate the HTML for display of thumbnail images. It's just that it's switched off. Because of that foresight on the template designer's part, you can switch on the display of thumbnails with the following single line of CSS added through the Custom CSS editor:

.excerpt-thumb {display: block !important;}

Figure 2 shows the underlying HTML structure. You can see for yourself by right-clicking in the area of a post title or an excerpt, and selecting Inspect Element from the fly-out menu. Navigate your page structure in the left pane. Find and expand the div named excerpt-thumb. Within that div will be an <a> tag that is a hyperlink, and inside that hyperlink is an <img> tag with a reference to the thumbnail image.
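In outline, the markup looks something like the following (the class name comes from the template; the attribute values are invented placeholders):

<div class="excerpt-thumb">
  <a href="/blog/your-post">
    <img src="/path/to/thumbnail.jpg" alt="">
  </a>
</div>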

Figure 2. The underlying HTML structure in the listing of blog post excerpts


The right-hand pane shows the excerpt-thumb div's display property set to none. Displaying the div as a block element causes the images to appear as you see in Figure 3. Simple and done!

Figure 3. Default location of thumbnail images once their display is enabled


You can add even more impact by making the images span the entire width of the content column like in the Marquee template. Do that by adding three more property declarations to give the following rule set:

.excerpt-thumb {
  display: block !important; 
  float: none !important;
  width: auto !important;
  margin-right: 0 !important;
}

Figures 4 through 6 step through the additional three declarations and their effects. Removing the float prevents the text from wrapping around the image (Figure 4). Setting the width to auto allows the image to take up the entire column width (Figure 5). Finally, and this one is easy to miss at first, there is a right margin that is specified in the template to prevent text from wrapping too close when the image is floated left, and that margin should be removed. Figure 6 shows the final result, with each blog post's thumbnail image stretching across the entire column. 

Figure 4. After float: none to remove the float


Figure 5. After width: auto to allow the image to expand


Figure 6. The final version after adding margin-right: 0 


Well-chosen thumbnail images provide instant impact and help to grab reader attention and focus it toward your posts. Bedford does a nice job in presenting those images at the top of each post, but you might as well have them in your summary listings too. Now you have not one, but two ways of doing that. Use the one-line rule to have images appear at the left of your excerpts, and the longer rule set to have them span across the entire column width.


Just getting started with CSS? See my post on CSS and Squarespace: Getting Started for help on navigating the HTML structure of your pages and finding the correct elements to target. Also check out my book below. It teaches CSS from the ground up with a specific focus on Squarespace. It's the fastest way to get up to speed on CSS and begin to have fun customizing the look and feel of your own Squarespace site.



Learn CSS for Squarespace


Learn CSS for Squarespace is a short, seven-chapter introduction to working with CSS on the Squarespace platform. The book is delivered as a mobi file that is readable from the Kindle Fire HDX or similar devices. A complimentary sample chapter is available. Try before you buy.


How to ask a question: The optician edition

I’ve just returned from a rather awkward and unpleasant visit to the optician…

Let me start by saying this is the same optician I’ve used for the last four years. I don’t think we would ever be capable of being friends, but I don’t have to like someone to “work with them”. That’s what being professional is all about. It’s only once a year, so before now I’ve never felt the need to go elsewhere. That has probably changed now.

Issue 1:

  • Optician: Look at the black spots. Do they look blacker with or without the lens?
  • Me: Neither. They look the same.
  • Optician: That is impossible.
  • Me: With the lens the spots look bigger and more defined, but the “intensity of the blackness” is the same. Do you mean which look clearer, or which actually look “more black”?
  • Optician: Stop overcomplicating it. Do they look more black with or without the lens?
  • Me: Neither. Like I said, the “level of blackness” is the same, but the clarity is different. What are you asking for, the “blackness” or the clarity?
  • Optician: Can you just answer my question?
  • Me: I can’t answer the question unless I understand the question.

Issue 2:

  • Optician: Look at the two sets of stripes. Which look clearer?
  • Me: Where should I be looking? If I look at the vertical stripes, they look very clear and the horizontal stripes look blurred. If I look at the horizontal stripes, they look clear and the vertical stripes look blurred. If I try to look between the two, they look equally clear/blurred. What do you want me to do?
  • Optician: Please just answer my question.

Issue 3:

  • Optician: Look at the “+” symbol. Do the lines above and below line up in the centre of the “+”?
  • Me: The top one does, but the bottom one is kind-of jumping between being in line and being slightly out of line. Also, the bottom one sometimes goes very pale.
  • Optician: Please don’t give extra information. Just answer my question. Do they line up?
  • Me: If the bottom line is moving between sometimes lining up and sometimes not, what answer should I give?
  • Optician: Do they line up or don’t they?
  • Me: Sometimes.

Issue 4:

  • Optician: Can you see the improvement from using this lens when reading 5pt font at this distance?
  • Me: Yes.
  • Optician: That means you should probably consider having your glasses adjusted to allow for that.
  • Me: OK, but I never read something that small at that distance, so does it really matter?
  • Optician: Well, I’m not able to test you for 10 hours straight at your normal resolution to see if it is giving you eye strain.
  • Me: I’m looking at a monitor pretty much from the time I wake up to the time I go to bed. If this were an issue, would I feel like I had eye strain?
  • Optician: I’m telling you it’s an issue.
  • Me: I understand that. I’m just trying to get a handle on whether my current reading habits are affected by this, or whether it’s only an issue if I want to do something I never do.
  • Optician: Well, you will probably just change the resolution of your screen to counter it.
  • Me: Well, I’ve not changed the resolution of my screen and I am not feeling any noticeable eye strain, so do you think it’s actually an issue?
  • Optician: I’ve just shown you it is an issue. You said the print looked clearer.
  • Me: Yes, but only when I do something I never do. My point is: is this affecting my “normal” life, or are we trying to fix a problem that has never manifested itself and probably never will?

Maybe someone will read this and think I was being a complete jerk, but I was genuinely unable to understand some of the things I was asked to do. What’s more, when I asked for clarification it was not forthcoming.

On my way home I was thinking how similar this situation was to things that happen in the IT world. People are generally really bad at asking questions (see here) and very quick to complain when they don’t get the answer to the question they think they have asked…

Cheers

Tim…

PS. On a brighter note, swimming went well this morning. I’ve started to incorporate sprints into my sessions.



Visualizing AWR data using python

In my earlier post, I talked about how Tableau can be used to visualize the data. In some cases, I find it useful to query the AWR base tables directly from Python and graph the results quickly using the matplotlib package. Since Python is preinstalled on almost all computers, this method should be useful for almost everyone. Of course, you may not have all the necessary packages installed on your computer; install any missing ones before running the script. And if you improve the script, please send it to me – I will share it in this blog entry.

The script is available as a zip file: plotdb.py

Usage:

Script usage is straightforward. Unzip the zip file and you will have a .py script in the current directory. Execute the script (after adjusting its permissions) using the format described below:

# To graph an event for the past 60 days, for inst_id=1, connecting to PROD with username system:
./plotdb.py -d PROD -u system -n 'latch free' -t e -i 1
# To graph a statistic for the past 60 days, for inst_id=2, connecting to PROD:
./plotdb.py -d PROD -u system -n 'physical reads' -t s -i 2

A typical graph from the above script is:

physical_reads
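If you would rather see the core idea than read the full script, the sketch below shows the general approach: pull a statistic’s history from dba_hist_sysstat keyed by snapshot time and plot the per-snapshot deltas. It assumes the cx_Oracle and matplotlib packages; the credentials and TNS alias are placeholders, and plotdb.py itself may differ in the details.

import cx_Oracle
import matplotlib.pyplot as plt

# Connect as a user that can read the AWR (dba_hist_*) views.
# User, password and TNS alias are placeholders.
conn = cx_Oracle.connect("system", "manager", "PROD")

sql = """
select s.end_interval_time, t.value
  from dba_hist_sysstat t
  join dba_hist_snapshot s
    on  s.snap_id = t.snap_id
    and s.dbid = t.dbid
    and s.instance_number = t.instance_number
 where t.stat_name = :name
   and t.instance_number = :inst
 order by s.end_interval_time"""

cur = conn.cursor()
cur.execute(sql, name="physical reads", inst=1)
times, values = zip(*cur.fetchall())

# dba_hist_sysstat holds cumulative counters, so plot per-snapshot deltas.
# (An instance restart would show up as a negative delta; ignored here.)
deltas = [b - a for a, b in zip(values, values[1:])]

plt.plot(times[1:], deltas)
plt.title("physical reads per snapshot (inst 1)")
plt.xlabel("snapshot end time")
plt.ylabel("delta")
plt.show()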