Take Total Control

Requirement #5 for guaranteed Quality of Service (QoS): fine-grain QoS control

As you know from reading our Quality of Service (QoS) Benchmark blog series, guaranteeing QoS takes more than simply having a QoS feature. Without an architecture designed around all-SSD media, scale-out capacity, RAID-less data protection, and balanced data distribution, any discussion of QoS is really just lip service. Another key requirement for guaranteeing Quality of Service is a fine-grain QoS model that describes performance in all situations.

Contrast fine-grain control with today’s rudimentary approaches to QoS, such as rate limiting and prioritization. These features provide only a limited amount of control and can’t deliver predictable performance in all situations.

The trouble with having no control
For example, basic rate limiting, which sets a cap on the IOPS or bandwidth an application can consume, ignores the fact that most storage workloads are prone to performance bursts. Database checkpoints, table scans, page cache flushes, file copies, and other operations tend to occur suddenly, demanding a sharp, short-lived increase in performance from the system. A hard cap simply means that when an application actually does need to do IO, it is quickly throttled. Latency then spikes and the storage seems painfully slow, even though the application isn’t doing much IO overall.
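To make the difference concrete, here is a minimal, illustrative sketch (not SolidFire code, and the class and variable names are invented for this example) contrasting a hard IOPS cap with a credit-based limiter that banks unused capacity while a volume is quiet and spends it during short bursts:

```python
# Illustrative sketch only: a hard IOPS cap vs. a credit-based limiter
# that allows short bursts above the sustained ceiling.

class HardCap:
    """Throttle any interval in which the workload exceeds the cap."""
    def __init__(self, max_iops):
        self.max_iops = max_iops

    def allowed(self, requested_iops):
        # Anything above the cap is delayed, even if the volume was idle before.
        return min(requested_iops, self.max_iops)


class BurstCredit:
    """Bank unused IOPS while quiet, spend them during short bursts."""
    def __init__(self, max_iops, burst_iops):
        self.max_iops = max_iops          # sustained ceiling
        self.burst_iops = burst_iops      # short-term ceiling
        self.credits = 0                  # unused capacity banked while quiet

    def allowed(self, requested_iops):
        ceiling = min(self.burst_iops, self.max_iops + self.credits)
        granted = min(requested_iops, ceiling)
        # Bank leftover capacity (up to burst - max) or spend credits on the burst.
        self.credits = min(self.burst_iops - self.max_iops,
                           self.credits + self.max_iops - granted)
        return granted


# A checkpoint-style burst: quiet, quiet, then a sudden 3,000-IOPS spike.
hard, burst = HardCap(1000), BurstCredit(1000, 3000)
for demand in [200, 200, 3000, 3000, 200]:
    print(demand, hard.allowed(demand), burst.allowed(demand))
```

With the hard cap, the spike is immediately clamped to 1,000 IOPS and latency balloons; with burst credits, the idle periods beforehand let the volume absorb most of the spike before settling back to its sustained rate.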

Prioritization assigns a relative priority to each workload, yet it suffers with bursty applications in a similar way. High-priority workloads may be able to burst easily by stealing resources from lower-priority ones, but moderate- and low-priority workloads may not be able to burst at all. Worse, these lower-priority workloads are constantly impacted by the bursting of high-priority workloads.

Failure and over-provisioned situations also present challenges for coarse-grained QoS. Rate limiting provides no guarantee if the system can’t even deliver the configured limit because it is overtaxed or suffering from performance-impacting failures. While prioritization can minimize the impact of failures for some applications, it still can’t tell you ahead of time how much impact there will be, and applications in the lower tiers will likely see absolutely horrendous performance.

SolidFire enables the control you’ve been looking for
SolidFire’s QoS controls are built around a robust model for configuring QoS for an individual volume. The model takes into account bursty workloads, changing performance requirements, different IO patterns, and the possibility of over-provisioning. Whether an application is allocated a lot of performance or a little, the amount of performance it gets in any situation is never in doubt. Cloud operators finally have the confidence to guarantee QoS and write firm SLAs against performance. Only an architecture built with a fine-grained Quality of Service model can support these types of guarantees.
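As a rough sketch of what a fine-grain, per-volume model looks like (assuming per-volume minimum, maximum, and burst IOPS settings; the names and the allocation policy below are illustrative, not the SolidFire API), the key property is that every volume keeps its guaranteed floor no matter how contended the system becomes:

```python
# Hypothetical sketch of a per-volume QoS model with minimum, maximum,
# and burst IOPS settings. Names and policy are illustrative only.

from dataclasses import dataclass

@dataclass
class VolumeQoS:
    name: str
    min_iops: int    # guaranteed floor, even under contention or failure
    max_iops: int    # sustained ceiling under normal conditions
    burst_iops: int  # short-term ceiling for bursty operations

def allocate(volumes, cluster_iops):
    """Give every volume its guaranteed minimum first, then share what's
    left in proportion to each minimum (one simple policy of many)."""
    grants = {v.name: v.min_iops for v in volumes}
    spare = cluster_iops - sum(grants.values())
    total_min = sum(v.min_iops for v in volumes)
    for v in volumes:
        extra = spare * v.min_iops // total_min
        grants[v.name] = min(v.max_iops, v.min_iops + extra)
    return grants

volumes = [
    VolumeQoS("db",   min_iops=5000, max_iops=15000, burst_iops=20000),
    VolumeQoS("web",  min_iops=1000, max_iops=5000,  burst_iops=8000),
    VolumeQoS("logs", min_iops=500,  max_iops=2000,  burst_iops=4000),
]

# Even when the cluster is overtaxed, every volume still gets its minimum.
print(allocate(volumes, cluster_iops=10000))
```

Because each volume carries an explicit floor, ceiling, and burst allowance, an operator can state ahead of time exactly how much performance a tenant will see in the best case, the normal case, and the worst case.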

Stay tuned to this blog as we discuss the other critical architecture requirements for guaranteed QoS, and join us for our upcoming webinar with WHIR to learn more:

Unlocking the Secret to QoS in the Cloud: The 6 Requirements of Your Storage Architecture
Web Host Industry Review Webinar with SolidFire
Tuesday, April 2, 2:00pm EST

Register now

-Dave Wright, Founder & CEO
