Not All QoS Is Created Equal

Quality-of-Service (QoS) features exist in everything from network devices to hypervisors to storage. When multiple tenants share a limited resource, QoS provides some level of control over how that resource is shared and prevents the noisiest neighbor from disrupting everyone else.

In networking, QoS is an important part of allowing real-time protocols such as VoIP to share links with other, less latency-sensitive traffic. Hypervisors provide both hard and soft QoS by controlling access to many resources, including CPU, memory, and network. QoS in storage is less common, but is now available on many high-end arrays. However, most approaches to storage QoS are “soft” – that is, based on simple prioritization of volumes rather than hard guarantees around performance. Soft QoS features are effective only as long as the scope of the problem stays small. In enterprise environments, where an administrator has visibility across a global portfolio of applications and performance fluctuations are not penalized as heavily, prioritization can conceivably be managed with this simpler approach.

However, in a large-scale cloud environment, these soft QoS implementations come up short. When multiple tenants share storage, the concept of priority is ineffective. Unlike the enterprise storage admin, the CSP is not afforded the luxury of application-level visibility, so it does little good to assign a priority level to a set of applications it has no control over. From the customer’s perspective, priority is a relative ranking that offers no real clarity on absolute performance. If a customer has a priority of 10 and everyone else is at 5, they may have twice the priority, but only at the expense of all the other tenants on that system. Moreover, even if performance is good today, there is no guarantee it will stay that way. The priority level may be controlled, but the performance delivered to a given level is still “best effort” in the context of all the other workloads on the system. This creates an unpredictable environment for cloud service providers and, more importantly, their customers.

At SolidFire, one of our founding premises was that solving the performance challenges of cloud service providers required a completely different approach to Quality of Service. SolidFire has architected hard QoS controls into the system, defined in terms that actually mean something to a customer: IOPS and MB/s. Each volume is configured with minimum, maximum, and burst settings for IOPS and bandwidth. The minimum provides a performance guarantee, independent of what other applications or tenants on the system are doing. The maximum and burst settings control the allocation of performance and deliver consistent performance to tenants. For the cloud provider, SolidFire QoS enables SLAs around exact performance metrics and complete control over the customer’s experience. For cloud consumers, clear expectations around storage performance provide confidence and stability. With guaranteed performance, IT administrators can finally deploy their tier 1 applications in the cloud with confidence.
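To make the min/max/burst model concrete, here is a minimal sketch of how such an allocator could work. This is an illustrative toy, not SolidFire's actual implementation: the `VolumeQoS` class and `allocate` function are hypothetical, and the policy shown (every volume gets its minimum first, then headroom is split in proportion to the minimums, capped at each maximum) is just one plausible way to honor hard floors and ceilings.

```python
# Hypothetical sketch of min/max IOPS allocation across volumes sharing
# one system. NOT SolidFire's real algorithm -- an assumption for illustration.
from dataclasses import dataclass

@dataclass
class VolumeQoS:
    name: str
    min_iops: int    # guaranteed floor, honored regardless of other tenants
    max_iops: int    # sustained ceiling
    burst_iops: int  # short-term ceiling (credit accrual not modeled here)

def allocate(volumes, system_iops):
    """Grant every volume its guaranteed minimum first, then split the
    remaining capacity proportionally to the minimums, capping each
    volume at its configured maximum."""
    alloc = {v.name: v.min_iops for v in volumes}
    remaining = system_iops - sum(alloc.values())
    # A hard guarantee only holds if the system never oversubscribes minimums.
    assert remaining >= 0, "system oversubscribed below guaranteed minimums"
    total_min = sum(v.min_iops for v in volumes)
    for v in volumes:
        share = remaining * v.min_iops // total_min
        alloc[v.name] = min(v.min_iops + share, v.max_iops)
    return alloc

volumes = [
    VolumeQoS("db",  min_iops=1000, max_iops=2000, burst_iops=4000),
    VolumeQoS("web", min_iops=500,  max_iops=1500, burst_iops=3000),
]
print(allocate(volumes, system_iops=3000))  # {'db': 2000, 'web': 1000}
```

The key property the sketch demonstrates is the hard floor: no matter how the headroom is divided, `db` can never be allocated fewer than 1000 IOPS, which is what distinguishes this model from relative, best-effort prioritization. A real implementation would also redistribute capacity left stranded by the max caps and track burst credits over time.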

-Adam Carter, Director of Product Management


