Capacity vs. Performance Tiering

In our end-of-year blog we reviewed a number of the unique storage challenges that infrastructure service providers face in building and operating a large-scale, and profitable, cloud offering. A clear understanding of these issues provides a more constructive lens through which to evaluate the viability of a storage solution in a high-performance, cloud-scale setting. This approach is particularly useful for understanding the basis of SolidFire’s thoughts on the merits of “automated” storage tiering in a large-scale cloud.

As promised, we kick off the first of three blogs on this topic below. If you happened to miss our initial thoughts on the subject, you can go back and read them here. We look forward to your feedback as we go.

Within the enterprise, storage tiering has become a popular vendor solution for improving performance for a subset of applications. With tiering, the performance gain is achieved by retrofitting a disk-based array with an SSD tier and some intelligent fetching and data-placement algorithms. Tiered storage systems are most effective when an IT manager has direct visibility into the usage profiles of the applications that reside on the system. This allows the IT manager to size each tier appropriately, continually ensuring there is enough room in “fast disk” to accommodate demand; data that is no longer in demand is moved back to slower disk. Overall, this is a reactive, human-centric model that requires constant monitoring and adjustment to keep each storage tier rightsized for the access patterns of different volumes across the data set. The continuous promotion and demotion of data between tiers also comes at the cost of endurance, due to the excess wear it puts on the flash media.
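To make the tiering model concrete, here is a minimal, hypothetical sketch of the kind of promote/demote loop a tiered array might run. The tier names, thresholds, and heat-tracking scheme are illustrative assumptions for this post, not SolidFire’s or any particular vendor’s actual algorithm.

```python
# Hypothetical illustration of a tiered array's promote/demote loop.
# Capacities and thresholds are made-up values for clarity only.

from dataclasses import dataclass

@dataclass
class Extent:
    id: str
    size_gb: float
    recent_iops: float   # access "heat" measured over the last sampling window
    tier: str            # "ssd" or "sata"

SSD_CAPACITY_GB = 10_000         # the "fast disk" size the admin has to guess
PROMOTE_IOPS_THRESHOLD = 500     # hot enough to justify SSD space
DEMOTE_IOPS_THRESHOLD = 50       # cold enough to fall back to SATA

def rebalance(extents: list[Extent]) -> None:
    """One pass of the reactive tiering loop: demote cold data,
    then promote the hottest data that still fits in the SSD tier."""
    for ext in extents:
        if ext.tier == "ssd" and ext.recent_iops < DEMOTE_IOPS_THRESHOLD:
            ext.tier = "sata"                     # demotion frees SSD space

    ssd_used = sum(e.size_gb for e in extents if e.tier == "ssd")
    candidates = sorted(
        (e for e in extents
         if e.tier == "sata" and e.recent_iops > PROMOTE_IOPS_THRESHOLD),
        key=lambda e: e.recent_iops, reverse=True)

    for ext in candidates:
        if ssd_used + ext.size_gb > SSD_CAPACITY_GB:
            break                                 # fast tier is full; hot data waits
        ext.tier = "ssd"                          # promotion = extra write to flash
        ssd_used += ext.size_gb
```

Note that every promotion is an additional write to the flash tier, and the loop only reacts after access patterns have already shifted, which is exactly the reactive, wear-inducing behavior described above.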

When operating a large-scale public cloud environment, customer applications and their associated usage patterns are largely unknown to the service provider. How do you most effectively allocate tiers of storage without ongoing visibility into the access requirements of a particular application? How big should the SSD tier be? How much SATA capacity should be used? When should data be promoted or demoted between tiers? Might a better question be: how many IOPS need to be available within the storage system? Unfortunately, for cloud service providers facing unpredictable demand patterns across a large number of tenants, trying to spec out a system tier by tier is effectively impossible.

From SolidFire’s perspective, the best way to manage performance in a multi-tenant cloud environment is to approach the problem from the demand side of the equation (i.e., application performance) rather than the supply side (i.e., storage capacity). Proactive performance management based on the IOPS an application demands is a far more efficient way to allocate storage resources than trying to guess the right quantity and capacity of each tier within the system. Armed with fine-grained performance controls, storage performance management no longer needs to be a complex, reactive, and resource-intensive exercise. By leveraging a system that can assign and guarantee IOPS on a volume-by-volume basis, all of the guesswork around right-sizing for application performance is eliminated.
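As a rough illustration of what per-volume IOPS controls look like, here is a hypothetical sketch of a token-bucket limiter enforcing a minimum/maximum IOPS setting on each volume. The class, field names, and QoS numbers are assumptions made for this example, not SolidFire’s actual QoS API or default values.

```python
# Hypothetical sketch of per-volume IOPS controls in a multi-tenant system.
# Volume names and QoS numbers are illustrative, not a real configuration.

import time

class VolumeQoS:
    """Token-bucket limiter that caps a volume at max_iops while the
    cluster scheduler reserves min_iops of capacity for it."""

    def __init__(self, name: str, min_iops: int, max_iops: int):
        self.name = name
        self.min_iops = min_iops          # guaranteed floor, reserved up front
        self.max_iops = max_iops          # ceiling enforced by the bucket
        self.tokens = float(max_iops)
        self.last_refill = time.monotonic()

    def admit_io(self) -> bool:
        """Return True if one I/O may proceed now, False if it must queue."""
        now = time.monotonic()
        self.tokens = min(self.max_iops,
                          self.tokens + (now - self.last_refill) * self.max_iops)
        self.last_refill = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# Each tenant volume gets an explicit performance allocation, so there is
# no "fast tier" to size against a guess about which data will be hot.
volumes = [
    VolumeQoS("tenant-a-db",  min_iops=5000, max_iops=15000),
    VolumeQoS("tenant-b-web", min_iops=500,  max_iops=2000),
]
```

The key point is that performance is provisioned explicitly per volume, so the provider sizes the cluster against the sum of guaranteed IOPS rather than against a prediction of which data will land in which tier.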

For a quick graphical depiction of how SolidFire brings this concept to life, check out our 90-second video on Performance Virtualization.

-Dave Wright, Founder & CEO
