I have been working with Flex ASM for a little while and so far I like the feature. In a nutshell, Flex ASM removes the requirement to have one (and only one!) ASM instance per cluster node. Instead you get three ASM instances by default, no more and no less, regardless of cluster size.
Databases, which are in fact “ASM clients” (as shown in v$asm_client), connect either locally or remotely, depending on where the ASM instance is placed relative to the database instance.
Sometimes it’s the little differences that make something really cool, and I was wondering why this hasn’t made it into the Oracle dictionary before.
Have you ever asked yourself which of the 30 or so accounts in the database are maintained by Oracle, in other words which are Oracle-internal and should be left alone? I did so on many occasions, especially when it comes to options I do not regularly see in the database. DBA_USERS lists all accounts in the database, user-managed as well as Oracle-managed. Below is the definition of the 11g view:
I’m always a great fan of trying new things, and Oracle made this one a little easier a while ago. In my early Linux days (SuSE Linux 6.0 was my first distribution, still on kernel 2.0.36 but “2.2 ready”) compiling the kernel was more manageable than it is today. I even remember “make zImage” instead of the now default “make bzImage”, and “make menuconfig” before that :) There was less complexity with hardware, and the initrd was more of a “nice to have” than a requirement with those PATA drives in my system.
This is a small note describing how Oracle implemented the situation covered by the db file parallel read wait event. This event happens when Oracle knows it must read multiple blocks that are not adjacent (thus scattered across different files and locations) and cannot continue processing with the result of a single block. In other words: it is used when Oracle cannot process something after reading just one block (otherwise Oracle reads a single block, visible as the wait event ‘db file sequential read’).
This is how it shows up if you enable SQL trace:
This is part 2 of a series of blog posts about huge Oracle database IOs.
If you landed on this blogpost and did not read part 1, please read part 1 here.
In part 1 I showed how the database IOs of a full table scan can be bigger than 1MB, by increasing the db_file_multiblock_read_count parameter beyond the number of Oracle blocks that fit in 1MB. These bigger IOs only happen with direct path reads, not with buffered multiblock reads.
But how much bigger can these IOs be? In part 1 I showed Oracle IOs of 1020 blocks. Is that the limit? Table T2 in part 1 had a maximum extent size of 1024 blocks, which means that 1020 blocks is the biggest IO possible from that table.
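To put the 1020-block figure in perspective, here is a quick back-of-the-envelope conversion, assuming the common 8kB block size (adjust BLOCK_SIZE if your database uses a different one):

```shell
BLOCK_SIZE=8192              # db_block_size in bytes (assumption: 8kB)
BLOCKS=1020                  # largest single IO observed in part 1
IO_BYTES=$(( BLOCKS * BLOCK_SIZE ))
# 1020 blocks of 8kB is just short of 8MB - already 8x the old 1MB "limit"
echo "$BLOCKS blocks = $IO_BYTES bytes = $(( IO_BYTES / 1024 )) kB"
```

So even this table already produced single IO requests of roughly 8MB, which is why a table with larger extents is needed to probe for the real ceiling.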
For the sake of this investigation I created a much bigger table to get larger extents:
It’s been a while since I presented the first incarnation of my ‘about multiblock reads’ presentation. When I did this at the UKOUG TEBS conference in Birmingham in 2011, Christian Antognini chaired the session. Afterwards Christian showed me that it’s possible to set the parameter ‘db_file_multiblock_read_count’ higher than 1MB/db_block_size (which is 128 if your blocksize is 8kB), and that you can benefit from it if your hardware is up to it. In fact, Christian showed me AWR reports (they could also have been statspack reports, I’m not sure) which showed the benefit.
My understanding of the parameter db_file_multiblock_read_count at the time was:
The maximum value is the operating system’s maximum I/O size expressed as Oracle blocks ((max I/O size)/DB_BLOCK_SIZE). If you set this parameter to a value greater than the maximum, Oracle uses the maximum.
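As a sketch, that documented formula works out like this for a 1MB maximum I/O size and an 8kB block size (both values are assumptions; check what your own platform and database use):

```shell
MAX_IO=1048576        # OS maximum I/O size in bytes (assumption: 1MB)
DB_BLOCK_SIZE=8192    # Oracle block size in bytes (assumption: 8kB)
# Documented cap on db_file_multiblock_read_count: (max I/O size)/DB_BLOCK_SIZE
MBRC=$(( MAX_IO / DB_BLOCK_SIZE ))
echo "maximum db_file_multiblock_read_count: $MBRC"
```

This is where the well-known value of 128 comes from, and it is exactly the ceiling that the direct path read behaviour described above appears to sidestep.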
This post is just a reminder to myself how to install the VMware (Fusion) tools in a freshly installed OL6u4 system. To be honest, it isn’t entirely “fresh”: I’ve ‘yum update’d the installation and already installed the 11gR2 preinstall package (yum install oracle-rdbms-server-11gR2-preinstall).
After the yum update and the yum install of the Oracle database preinstall package everything is updated but not yet active: the new kernel is installed but not running. This means you need to reboot your VM first.
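A quick way to see whether the reboot is still pending is to compare the running kernel with the newest installed one; this is a sketch assuming an RPM-based system like OL6:

```shell
# The kernel currently executing - this only changes after a reboot
running=$(uname -r)
echo "running kernel: $running"
# On an RPM-based system, the newest installed kernel can be listed with:
#   rpm -q --last kernel | head -1
# If that version differs from $running, reboot before going any further.
```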
2. Install the kernel headers
Now that the whole system is installed and set up, we can *almost* install the vmware tools. However, the vmware tools need the kernel headers. If you look for the kernel headers, you *think* you see them:
You can’t possibly have avoided all the buzz around the multi-tenancy option by now, so it is probably in order to give you a bit more information about it beyond the marketing slides Oracle has shown you so far.
IMPORTANT: This article is a technical article and leaves the pros and cons of the multi-tenancy option to one side. It does the same with the decision to make the CDB a cost option. Nor does it advise you on licensing, so make sure you are appropriately licensed to use the features demonstrated in this series.
Since it is nearly impossible to give an overview of Pluggable and Container Databases in one article, this is going to be a series of multiple articles. In this part you can read about creating PDBs and where to look for information about them.
This post is really a quick note to myself to remind me how to bump up the maximum I/O size on Linux. I have been benchmarking a bit lately, and increasing the maximum size of an I/O request from 512kB to 1024kB looked like something worth doing. Especially since it’s also done in Exadata :)
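The knob in question is max_sectors_kb under sysfs. This is a minimal sketch; the device name sdb is hypothetical, the write needs root, and the value cannot exceed the hardware limit in max_hw_sectors_kb:

```shell
DEV=sdb               # hypothetical device name - adjust to your disk
NEW_KB=1024           # desired maximum I/O request size in kB
# For cross-checking: the same limit expressed in 512-byte sectors
SECTORS=$(( NEW_KB * 1024 / 512 ))
echo "$NEW_KB kB = $SECTORS sectors of 512 bytes"
Q=/sys/block/$DEV/queue
# Only attempt the change if the device exists and we may write to it (root)
if [ -w "$Q/max_sectors_kb" ]; then
  echo "$NEW_KB" > "$Q/max_sectors_kb"
  cat "$Q/max_sectors_kb"
fi
```

Note that this setting does not survive a reboot, so it would have to be reapplied via an init script or udev rule if you want it permanently.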
So why would it matter? It matters most for Oracle DSS systems, actually. Why? Take ORION, for example: although it performs mindless I/O, as explained by @flashdba and @kevinclosson, it at least gives me a quick way to check the size of a typical I/O request. Let me demonstrate: