To me, ‘cloud computing’ is renting a compute resource to perform a task. In order to use that compute resource, you need to instruct it to do something, which is typically done over the network. If the task of the compute resource is to act as an application server, as a client, or as both — as in the case of an application server that uses an Oracle database — then the network latency between the database client and the database server is a critical property.
How do we know where latency comes from when there is a disparity between the I/O latency reported by the I/O subsystem and the latency reported on the client box requesting the I/O?
For example, if I have an Oracle database requesting I/O, and Oracle says an 8 KB request takes 50 ms, yet the I/O storage subsystem says 8 KB I/Os are taking 1 ms on average, then where do the extra 49 ms come from?
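The client-side half of that comparison can be reproduced outside Oracle. The sketch below, a hypothetical example (file path, file size, and iteration count are placeholders, not anything from Oracle), times 8 KB `pread()` calls — the same system call Oracle issues for single-block reads — to show what "latency as seen by the client box" means:

```python
import os
import statistics
import time

# Placeholder test file and geometry -- in a real investigation you
# would point this at a file on the NFS mount Oracle is using.
PATH = "/tmp/pread_latency_test"
BLOCK = 8192          # 8 KB, Oracle's typical block size
NBLOCKS = 128

# Create a small test file to read from.
with open(PATH, "wb") as f:
    f.write(os.urandom(BLOCK * NBLOCKS))

fd = os.open(PATH, os.O_RDONLY)
latencies_ms = []
for i in range(NBLOCKS):
    t0 = time.perf_counter()
    buf = os.pread(fd, BLOCK, i * BLOCK)   # read one 8 KB block
    latencies_ms.append((time.perf_counter() - t0) * 1000)
os.close(fd)
os.unlink(PATH)

# Client-observed latency: everything between issuing the syscall
# and getting the data back, including every intervening layer.
print(f"avg {statistics.mean(latencies_ms):.3f} ms, "
      f"max {max(latencies_ms):.3f} ms")
```

If numbers like these on the client disagree with what the storage array reports, the difference has to live in the layers in between.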
When the I/O subsystem is connected to Oracle via NFS, there are many layers that could be adding the extra latency: Oracle’s pread call, the kernel VFS and NFS client, the RPC/TCP stack, the network itself, and the NFS server, all before the request ever reaches the storage.
Where does the difference in latency come from between the NFS server’s timing and Oracle’s timing of pread?