You can hardly have avoided the buzz around the multi-tenancy option by now, so it is probably in order to give you a bit more information about it beyond the marketing slides Oracle has shown you so far.
IMPORTANT: This is a technical article and leaves the pros and cons of the multi-tenancy option to one side. It does the same with the decision to make the CDB a cost option. And it doesn’t advise you on licensing either, so make sure you are appropriately licensed to make use of the features demonstrated in the series.
Since it is nearly impossible to give an overview of Pluggable and Container Databases in one article, this is going to be a series of multiple articles. In this part you can read about creating PDBs and where to look for information about them.
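To give a flavour of what is to come, here is a sketch of creating a PDB from the seed and checking on it. The PDB name, admin user and file name convert paths below are invented for illustration and will differ in your environment:

```sql
-- Create a new PDB by cloning the seed; paths are illustrative
CREATE PLUGGABLE DATABASE pdb1
  ADMIN USER pdbadmin IDENTIFIED BY secret
  FILE_NAME_CONVERT = ('/u01/oradata/CDB1/pdbseed/',
                       '/u01/oradata/CDB1/pdb1/');

-- A freshly created PDB starts in MOUNTED state
ALTER PLUGGABLE DATABASE pdb1 OPEN;

-- One of the first places to look for information about PDBs
SELECT con_id, name, open_mode FROM v$pdbs;
```

The series will cover the various creation options and dictionary views in more detail.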
This post is really a quick note to myself to remind me how to bump up the maximum I/O size on Linux. I have been benchmarking a bit lately, and increasing the maximum size of an I/O request from 512KB to 1024KB looked like something worth doing. Especially since it’s also done in Exadata :)
So why would it matter? It matters most for Oracle DSS systems, actually. Why? Take ORION for example: although it’s using mindless I/O, as explained by @flashdba and @kevinclosson, at least it gives me a quick way to test the size of a typical I/O request. Let me demonstrate:
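The limit itself lives in sysfs. A minimal sketch of inspecting and raising it follows; the device names are examples, and the change does not persist across reboots:

```shell
# Show the current and hardware maximum I/O request size (in KB)
# for every block device on the system
for q in /sys/block/*/queue; do
  dev=${q%/queue}; dev=${dev##*/}
  echo "$dev: max_sectors_kb=$(cat "$q/max_sectors_kb")" \
       "max_hw_sectors_kb=$(cat "$q/max_hw_sectors_kb")"
done

# Raise the limit to 1024KB for one device (needs root; the value
# cannot exceed max_hw_sectors_kb, and sda is just an example):
# echo 1024 > /sys/block/sda/queue/max_sectors_kb
```

To make the setting survive a reboot it would have to go into a boot-time script or udev rule.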
Oracle 12c certainly has some great features, but for a performance guy like myself the performance monitoring features are particularly interesting. There are three new v$ views that track anomalies in the I/O path. The idea is to provide more information about really poorly performing I/O: requests that last more than 500ms.
Two of these views are going to be useful to monitor when performance issues occur. I can already see the SQL scripts to monitor this activity starting to pile up. But there is one little extra view that dives even further into the I/O stack using DTrace.
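As a hedged sketch, the views in question are v$io_outlier, v$lgwrio_outlier and the DTrace-backed v$kernel_io_outlier; the column list below is assumed from the 12c documentation and worth verifying against your release:

```sql
-- I/Os that took longer than 500ms, most recent first
SELECT timestamp, io_size, io_latency, function_name
FROM   v$io_outlier
ORDER  BY timestamp DESC;

-- The same idea, restricted to log writer I/O
SELECT * FROM v$lgwrio_outlier;

-- The extra view that uses DTrace to break the latency down
-- inside the kernel I/O stack (populated on Solaris)
SELECT * FROM v$kernel_io_outlier;
```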
This morning I was quite surprised to see one of my lab servers suffering from a high load average. To be fair I haven’t looked at it for a while, it’s just one of these demo systems ticking along … However this was an interesting case!
High load averages are usually first felt when you get sluggish response from the terminal. I got very sluggish response but attributed it to the SSH connection. Nope, it turned out that the other VMs were a lot more responsive, and I know that they are on the same host.
A few options exist in Linux to see what is going on. The uptime command is a pretty good first indicator.
```shell
# uptime
 09:14:05 up 28 days, 11:29,  1 user,  load average: 19.13, 19.15, 19.13
```
OK, so this has been going on for a little while (hey, remember it’s my lab server!). But what’s contributing? The next stop was top for me. Here’s the abridged output:
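Independent of top, the raw numbers can be pulled from /proc/loadavg and compared with the CPU count in a quick script; a load persistently above the core count means work is queuing up somewhere:

```shell
# Compare the 1-minute load average with the number of CPUs
cpus=$(nproc)
load1=$(cut -d' ' -f1 /proc/loadavg)
echo "1-min load: $load1, CPUs: $cpus"
# Flag the box if the load exceeds the core count
awk -v l="$load1" -v c="$cpus" \
  'BEGIN { print (l > c) ? "system is overloaded" : "load looks normal" }'
```

On this server, with a load average of 19 on far fewer cores, the verdict was obvious.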
Lately I have been drawn into a fair number of discussions about I/O characteristics while helping customers run benchmarks. I have been working with a mix of developers, DBAs, sysadmins, and storage admins. As I have learned, every group has their own perspective, certainly when it comes to I/O and performance.
As part of pulling back the covers, I came up with a simple little tool to show IOPS at the cell level.
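The cell-level tool itself is Exadata-specific, but the same idea, sampling completed I/Os over an interval, can be sketched on any Linux host from /proc/diskstats (this is an illustrative stand-in, not the tool from the post):

```shell
# Per-device IOPS over a 1-second interval; fields 4 and 8 of
# /proc/diskstats are reads and writes completed since boot
t1=$(mktemp); t2=$(mktemp)
awk '{print $3, $4, $8}' /proc/diskstats > "$t1"
sleep 1
awk '{print $3, $4, $8}' /proc/diskstats > "$t2"
# Subtract the two snapshots to get the per-second rate
paste "$t1" "$t2" | awk '{ printf "%-12s r/s=%d w/s=%d\n", $1, $5-$2, $6-$3 }'
rm -f "$t1" "$t2"
```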
I just realised this week that I haven’t really detailed anything about policy managed RAC databases. I remembered having done some research about server pools way back when 11.2 came out. I promised to spend some time looking at the new type of database that comes with server pools, policy managed databases, but somehow didn’t get around to doing it. Since I’m lazy I’ll refer to these databases as PMDs from now on, as it saves a fair bit of typing.
So how are PMDs different from Administrator Managed Databases?
First of all, you can have PMDs with RAC only, i.e. in a multi-instance active/active configuration. Before 11.2, RAC required you to tie an Oracle instance to a cluster node. This is why you see instance prefixes in a RAC spfile. Here is an example from my lab 11.2 cluster:
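A typical set of instance-prefixed entries looks like this (a sketch with invented database and instance names, not the actual lab output):

```
PROD1.instance_number=1
PROD2.instance_number=2
PROD1.thread=1
PROD2.thread=2
PROD1.undo_tablespace='UNDOTBS1'
PROD2.undo_tablespace='UNDOTBS2'
```

With policy managed databases these hard-wired instance-to-node mappings go away, which is exactly what makes PMDs interesting.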
This post has been a long time coming, but recently I have started working on some SPARC SuperCluster POCs with customers and I am getting re-acquainted with my old friends Solaris and SPARC.
If you are a Linux performance guy you have likely heard of HugePages. Huge pages are used to increase the performance of machines with large amounts of memory by requiring fewer TLB (Translation Lookaside Buffer) entries to map it. I am not going to go into the details of TLBs, but every modern chip supports multiple memory page sizes.
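A quick way to see the configured huge page pool and page size on a Linux box:

```shell
# Huge page pool statistics; Hugepagesize is typically 2048 kB
# on x86_64 unless 1GB pages have been configured
grep -i '^huge' /proc/meminfo
# Number of huge pages currently reserved by the kernel
cat /proc/sys/vm/nr_hugepages
```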
Do nothing – it is the DEFAULT with Oracle running on Solaris.
In my last post about large pages in 11.2 I promised a little more background information on how large pages and NUMA are related.
Background and some history about processor architecture
Large pages in Linux are a really interesting topic for me, as I really like Linux and trying to understand how it works. Large pages can be very beneficial for systems with large SGAs, and even more so for those with a large SGA and lots of user sessions connected.
I have previously written about the benefits and usage of large pages in Linux here:
So now, as you may know, there is a change to the init.ora parameter “use_large_pages” in 11.2.0.3. The parameter can take these values: TRUE (the default), FALSE, ONLY, and, new in 11.2.0.3, AUTO.
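Whichever value you pick, the operating system still has to have enough huge pages configured. A rough back-of-the-envelope calculation, assuming 2MB huge pages and an example SGA size:

```shell
# Rough vm.nr_hugepages estimate for a given SGA size; check
# Hugepagesize in /proc/meminfo rather than assuming 2048 kB
sga_gb=8   # example SGA size in GB
page_kb=$(awk '/^Hugepagesize/ {print $2}' /proc/meminfo)
echo "A ${sga_gb} GB SGA needs $(( sga_gb * 1024 * 1024 / page_kb )) huge pages of ${page_kb} kB each"
```

In practice you would size the pool slightly larger than the SGA to leave headroom, and Oracle’s own sizing script is the safer route for production.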