IOPS Alone Can’t Slay the Noisy Neighbor

In the most recent post in our high-performance (r)evolution mini-series, I reviewed what we consider the real measures of storage performance and the importance of looking beyond IOPS-based vanity metrics when evaluating a high-performance storage architecture. I want to build on that conversation by discussing an all-too-familiar “friend” of anyone attempting to run performance-sensitive applications on multi-tenant cloud infrastructure: the Noisy Neighbor.

The Noisy Neighbor is the guy who ruins the party for everyone else. In cloud storage terms, the Noisy Neighbor is the application or volume that consumes a disproportionate share of the available IOPS at everyone else’s expense. Unable to isolate or predict the Noisy Neighbor’s behavior, service providers can’t guarantee performance to any of their cloud-based customers. Unable to get predictable performance from their cloud service provider, most customers simply don’t trust it with any of their business-critical or performance-sensitive applications. This trickle-down effect keeps enterprises from fully embracing the cloud and forces cloud service providers to leave a massive amount of potential revenue on the table (and off the cloud).

For a cloud service provider, the initial reaction to the Noisy Neighbor is to throw more storage performance (i.e., IOPS) at the problem so that the offender is drowned out by a sea of IOPS. Those IOPS can be obtained in a number of ways: an SSD appliance, a dedicated SAN, dedicated physical server infrastructure, short-stroking drives, or underutilizing disk systems to keep adequate performance in reserve. Unfortunately, none of these is a sustainable solution, for two reasons: 1) in the hyper-competitive cloud market, where efficiency is paramount, cloud providers cannot afford the underutilization inherent in these approaches; and 2) throwing gross performance at the Noisy Neighbor doesn’t solve the real problem, which is the need for predictable, consistent performance.

Regardless of the IOPS available, a lack of control over how that performance is provisioned exposes all tenants to an unknown and unacceptable level of performance variance. To be usable at all, IOPS must be accompanied by quality-of-service controls that govern the provisioning and enforcement of performance, so that each application receives the allocation it needs to run effectively in the cloud. It’s important to note that priority-based QoS isn’t enough – “high,” “medium,” or “low” levels of relative performance don’t actually guarantee IOPS or give customers a realistic view of what performance to expect at any given time. To ensure efficiency, these controls must be granular enough to let service providers independently dial in performance to the unique needs of each volume or application.
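To make the idea of per-volume performance enforcement concrete, here is a minimal token-bucket sketch. This is purely illustrative – the `VolumeQoS` class, its parameters, and the admission logic are hypothetical assumptions, not any actual product API. The point it demonstrates: when each volume has its own independent bucket, a burst from one tenant can only drain that tenant’s allocation, never a neighbor’s.

```python
import time

class VolumeQoS:
    """Hypothetical per-volume token bucket enforcing a hard IOPS allocation.

    Each volume gets its own bucket, so one tenant's burst cannot
    consume the IOPS provisioned to any other volume.
    """

    def __init__(self, max_iops, burst_iops=None):
        self.max_iops = max_iops                # sustained refill rate (tokens/sec)
        self.capacity = burst_iops or max_iops  # short bursts allowed up to this size
        self.tokens = float(self.capacity)
        self.last = time.monotonic()

    def try_io(self, n=1):
        """Admit n I/O operations if tokens are available; otherwise defer."""
        now = time.monotonic()
        # Refill at the sustained rate, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.max_iops)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False

# Independent buckets: the noisy volume's burst drains only its own bucket.
noisy = VolumeQoS(max_iops=500)
quiet = VolumeQoS(max_iops=200)
admitted = sum(noisy.try_io() for _ in range(10_000))
# The noisy volume is capped near its 500-token burst size, while the
# quiet volume's allocation remains entirely untouched.
```

Contrast this with priority-based QoS: a “high” priority label only says who wins when contention happens; a per-volume bucket like the one above states, in absolute terms, what each tenant is guaranteed to get.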

So while a performance-centric approach may pose as the quick fix to slay the Noisy Neighbor, don’t stop there. We didn’t. In a multi-tenant environment, full-throttle performance will only get you so far when hosting performance-sensitive applications. By combining a high-performance architecture with fine-grained quality-of-service controls, you can set and maintain hard SLAs around storage performance more efficiently and more profitably than ever before. Starting on 11/13/12 you will be able to do just that. Get Ready.

-Dave Wright, Founder & CEO


