Inefficiency & Unpredictability…A Service Provider's Worst Enemy

In our first two posts on storage tiering we talked through the difference between capacity-centric and performance-centric approaches and exposed some of the hidden costs of an automated tiering implementation. Closing out this mini-series, I want to touch on a few other deficiencies inherent in an automated tiering solution.

Within a storage infrastructure it is IOPS, not capacity, that are the most expensive and limited resource. In a tiered architecture, SSDs are inserted into the equation to improve the balance between IOPS and capacity. However, while an SSD tier may reduce performance issues for well-placed data, the usage of this expensive tier remains inefficient. The inefficiency stems from a lack of granularity in the data movement of a tiered system: a sub-LUN tiering system moves data in chunks anywhere from 32MB to 1GB in size, so even when only a small fraction of a chunk is actually hot, the entire chunk is promoted and a lot of cold data comes along with it. This overhead forces sub-optimal utilization of the premium SSD capacity.
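To make that arithmetic concrete, here is a small Python sketch. The chunk sizes and the assumption that only 4MB of each promoted chunk is hot are hypothetical inputs chosen for illustration, not any particular array's behavior:

```python
# Illustrative sketch (not any vendor's actual algorithm): estimate how much
# cold data rides along onto SSD when a sub-LUN tiering engine promotes whole chunks.

def promoted_breakdown(hot_bytes_per_chunk: int, chunk_size: int, chunks_promoted: int):
    """Return (hot, cold) bytes landing on SSD when whole chunks are promoted."""
    hot = hot_bytes_per_chunk * chunks_promoted
    cold = (chunk_size - hot_bytes_per_chunk) * chunks_promoted
    return hot, cold

MB = 1024 * 1024
for chunk_size in (32 * MB, 256 * MB, 1024 * MB):   # 32MB .. 1GB chunk sizes
    # Hypothetical assumption: only 4MB of each promoted chunk is actually hot.
    hot, cold = promoted_breakdown(4 * MB, chunk_size, chunks_promoted=100)
    efficiency = hot / (hot + cold)
    print(f"chunk={chunk_size // MB:>5}MB  hot fraction of promoted SSD bytes: {efficiency:.1%}")
```

Under these assumptions, even the finest-grained chunk size leaves most of the promoted SSD bytes holding cold data, and the coarser chunk sizes are dramatically worse.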

Another potential problem area with tiering, specifically in a multi-tenant environment, is IO density – that is, how IO is distributed across a range of disk space. Applications whose IOs are concentrated within close proximity to each other (IO dense) will gain greater benefit from sub-LUN tiering than those whose IOs are spread more evenly over the entire logical block address space (IO sparse). Because tiering mechanisms measure data usage at the chunk level, an application that has more hits within a small number of chunks is more likely to be promoted than one that spreads the same number of IOPS across more chunks. From an array performance perspective this approach is reasonable, as you get more performance within the same resource footprint. However, in a multi-tenant setting with data distributed across many distinct applications, this leads to serious problems with fairness and performance consistency across workloads.
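A toy simulation makes the fairness problem easier to see. The chunk counts, IO counts, and promotion-slot limit below are made up for illustration; the point is only that equal IOPS do not translate into equal promotion when heat is counted per chunk:

```python
# Illustrative sketch of chunk-level heat counting (hypothetical, not a real
# array's algorithm): two tenants issue the same number of IOs, but the dense
# tenant concentrates them in a few chunks and so wins the promotion slots.
from collections import Counter
import random

random.seed(0)
CHUNKS_PER_VOLUME = 1000
IOS_PER_TENANT = 10_000
PROMOTION_SLOTS = 20          # the SSD tier holds only 20 chunks in this toy model

heat = Counter()
for _ in range(IOS_PER_TENANT):
    # "Dense" tenant: every IO lands in one of just 10 chunks.
    heat[("dense", random.randrange(10))] += 1
    # "Sparse" tenant: the same number of IOs spread across all 1000 chunks.
    heat[("sparse", random.randrange(CHUNKS_PER_VOLUME))] += 1

promoted = set(chunk for chunk, _ in heat.most_common(PROMOTION_SLOTS))

def ssd_hit_rate(tenant: str) -> float:
    """Fraction of the tenant's IOs that would now be served from SSD."""
    served = sum(n for key, n in heat.items() if key[0] == tenant and key in promoted)
    total = sum(n for key, n in heat.items() if key[0] == tenant)
    return served / total

print(f"dense tenant SSD hit rate:  {ssd_hit_rate('dense'):.0%}")   # near 100%
print(f"sparse tenant SSD hit rate: {ssd_hit_rate('sparse'):.0%}")  # only a few percent
```

Both tenants pay for the same array and drive the same load, yet one gets nearly all of its IO served from flash while the other gets almost none.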

We originally discussed the performance implications of tiering in July of last year. In a multi-tenant setting, this exposure to performance variability is magnified. Customers are continually exposed to the risk that the promotion of another customer's hot data will result in the demotion of their own. The order-of-magnitude difference in latency and IOPS between tiers makes it practically impossible for a service provider to guarantee performance to an individual application (or tenant) under these conditions.

In recognition of the deficiencies of a tiered architecture, SolidFire sought a better way. Our Performance Virtualization technology decouples the tight binding between storage performance and capacity, resulting in a far more precise allocation of IOPS and capacity on a volume-by-volume basis, regardless of issues such as IO density. Instead of best-guess efforts as to the size and tiers of media required to meet customer performance requirements, a service provider can now dial in IOPS and capacity individually at the volume level, drawing from cluster-wide, independent pools of capacity and performance. These allocations can also be dynamically adjusted over time as application requirements change. All things considered, Performance Virtualization is a far more efficient way to address IOPS scarcity, without exposing customers to the inefficiency and unpredictable performance inherent in an automated tiering architecture.
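As a rough illustration of what decoupled provisioning looks like from an operator's point of view, here is a minimal sketch. The class names, fields, and QoS semantics are invented for this example and are not SolidFire's actual API:

```python
# Minimal sketch, assuming capacity and guaranteed IOPS are drawn from
# independent cluster-wide pools. All names and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class VolumeSpec:
    name: str
    capacity_gb: int      # drawn from the cluster-wide capacity pool
    min_iops: int         # guaranteed floor, independent of capacity
    max_iops: int         # ceiling the tenant may burst to

class Cluster:
    def __init__(self, capacity_gb: int, iops: int):
        self.free_capacity_gb = capacity_gb
        self.free_iops = iops
        self.volumes = {}

    def create_volume(self, spec: VolumeSpec):
        # Capacity and performance are allocated independently, so a small,
        # IOPS-hungry volume does not strand capacity (and vice versa).
        if spec.capacity_gb > self.free_capacity_gb or spec.min_iops > self.free_iops:
            raise RuntimeError("insufficient capacity or guaranteed IOPS")
        self.free_capacity_gb -= spec.capacity_gb
        self.free_iops -= spec.min_iops
        self.volumes[spec.name] = spec

    def modify_qos(self, name: str, min_iops: int, max_iops: int):
        # QoS settings can be adjusted on the fly as requirements change.
        vol = self.volumes[name]
        delta = min_iops - vol.min_iops
        if delta > self.free_iops:
            raise RuntimeError("insufficient guaranteed IOPS in the cluster pool")
        self.free_iops -= delta
        vol.min_iops, vol.max_iops = min_iops, max_iops

cluster = Cluster(capacity_gb=100_000, iops=500_000)
cluster.create_volume(VolumeSpec("tenant-a-db", capacity_gb=500, min_iops=15_000, max_iops=25_000))
cluster.modify_qos("tenant-a-db", min_iops=30_000, max_iops=50_000)
```

The design point the sketch tries to capture is that neither resource is inferred from the other: a volume's guaranteed IOPS do not depend on how big it is, on how dense its IO pattern happens to be, or on what any other tenant is doing.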

-Dave Wright, Founder & CEO
