How Many ASM Disks Per Disk Group And Adding vs. Resizing ASM Disks In An All-Flash Array Environment

I recently posted a 4-part blog series making the case that, in an All-Flash Array environment (e.g., XtremIO), database and systems administrators should opt for simplicity when configuring and managing Oracle Automatic Storage Management (ASM).

The series starts with Part I, which aims to convince readers that modern systems attached to All-Flash Array technology can perform large amounts of low-latency physical I/O without vast numbers of host LUNs. Traditional storage environments mandate large numbers of deep I/O queues because high-latency I/O requests remain “in-flight” longer; the longer an I/O request takes to complete, the longer other requests wait in the queue. This is not the case with low-latency I/O. Please consider Part I a required primer.
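To make the queue-depth point concrete, here is a minimal sketch of how one might observe per-LUN queue depth and in-flight I/O on a Linux host. The device names are hypothetical; substitute the LUNs backing your own ASM disks.

# Per-LUN queue depth as seen by the Linux SCSI layer (device name hypothetical):
cat /sys/block/sdb/device/queue_depth

# Watch in-flight I/O per LUN. With low-latency flash, the queue-size column
# (avgqu-sz, or aqu-sz in newer sysstat releases) stays small even under heavy
# load, because requests complete quickly instead of lingering in the queue:
iostat -xm 5 /dev/sdb /dev/sdc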

Part II adds more detail to what was offered in Part I. It takes a very granular look at the effects of varying host LUN count (and thus aggregate I/O queue depth) alongside varying numbers of Oracle Database sessions executing zero-think-time transactions.

Part III begins the topic of resizing ASM disks when additional ASM disk group capacity is needed. Parts I and II are prerequisite reading because one might imagine that a few very large ASM disks cannot offer appropriate physical I/O performance. That is, if you don’t believe a small number of host LUNs can deliver the necessary I/O performance, you will be less inclined to simply resize the ASM disks you already have when extra space is needed.

Everything we know in IT has a shelf life. With All-Flash Array storage such as XtremIO, it is much less invasive, much faster, and much simpler to increase ASM disk group capacity by resizing the existing ASM disks than by adding more of them.
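To illustrate just how non-invasive the resize path can be, here is a minimal sketch of the flow on a Linux host. The disk group name (DATA), device names, and multipath alias are assumptions for illustration; the series parts walk through real examples.

# 1. Grow the volumes on the array side (array UI or CLI).

# 2. Have the host re-read the new LUN size (repeat per device; names hypothetical):
echo 1 > /sys/block/sdb/device/rescan

# If device-mapper multipath is in use, refresh the map as well:
multipathd -k'resize map mpatha'

# 3. From SQL*Plus connected to the ASM instance as SYSASM, grow every disk
#    in the group to its newly reported size in one statement:
SQL> ALTER DISKGROUP DATA RESIZE ALL;

Because every disk grows proportionally, the rebalance work is minimal compared with what adding new disks to the group triggers, which is the crux of the argument above.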

Part IV continues the ASM disk resizing topic by showing an example in a Real Application Clusters environment.
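In the Real Application Clusters case the shape of the procedure is the same; the detail worth keeping in mind is that the OS-level rescan must be performed on every node so each instance sees the new LUN size, while the ASM resize itself is issued only once. A sketch, again with hypothetical device and disk group names:

# Run on *each* cluster node (device names hypothetical):
echo 1 > /sys/block/sdb/device/rescan

# Then, from any one ASM instance, a single statement grows the disk group clusterwide:
SQL> ALTER DISKGROUP DATA RESIZE ALL;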
