The best automated tiering is no tiering at all

Recently, Hitachi and EMC have gotten into a blog fight about whose automated tiering technology is better. But are they asking the wrong question? Is tiering even the right solution to storage performance problems to begin with?

To be sure, on the surface the concept of automated tiering sounds great – your hot data goes on fast SSD storage, while seldom-accessed data sits on cheap spinning disk. The problem with this type of optimization is that it is performed from a global perspective across the entire storage system: the array optimizes I/O load across all of the data it controls. If you have only a single application, or perhaps a small handful of applications, running on your storage array, this global optimization probably works pretty well.
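
To make “global” concrete, here is a toy sketch of how such an engine might behave (purely illustrative; it is not any vendor’s actual algorithm): every extent in the array is ranked by recent access count, and the hottest slice wins the SSD, regardless of which application or tenant it belongs to.

```python
# Illustrative sketch of global, array-wide tiering (not any vendor's real algorithm).
from dataclasses import dataclass

@dataclass
class Extent:
    volume: str       # which application/tenant the extent belongs to
    recent_ios: int   # access count over the last sampling window

def place_extents(extents, ssd_slots):
    """Globally rank extents by heat; the hottest go to SSD, the rest to spinning disk."""
    ranked = sorted(extents, key=lambda e: e.recent_ios, reverse=True)
    return {"ssd": ranked[:ssd_slots], "hdd": ranked[ssd_slots:]}

extents = [
    Extent("tenant-a-db", 900), Extent("tenant-b-web", 850),
    Extent("tenant-a-logs", 40), Extent("tenant-c-db", 920),
]
placement = place_extents(extents, ssd_slots=2)
# tenant-a-db made the SSD cut today; if another tenant's workload heats up tomorrow,
# it can silently drop to HDD even though nothing about tenant-a's own workload changed.
print([e.volume for e in placement["ssd"]])
```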

That global optimization is a good fit for a traditional enterprise SAN deployment, but consider a cloud environment with hundreds or thousands of virtual machines and applications, where new applications come and go all the time. From the perspective of an individual application, storage performance can be radically unpredictable. One day the array may decide the application’s data is “hot” and serve it out of SSD with < 1ms response times; the next day another hot application may come online, and suddenly response times jump to 10ms as the data gets pushed out to a SAS or SATA tier. This kind of unpredictability is especially problematic in a multi-tenant service provider environment, where customers aren’t aware of each other and any dramatic change in performance is likely to trigger a support call. It’s not the storage array’s fault – it’s still trying to globally optimize. But try telling that to a customer whose website or database is suddenly slow.
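
To put that swing in concrete terms, here is a rough back-of-the-envelope calculation; the latency figures are the illustrative ones from the scenario above, not measurements:

```python
# Rough illustration: the same workload, the same volume, a different tier.
SSD_LATENCY_MS = 0.8     # "hot" data served from flash (< 1ms)
SATA_LATENCY_MS = 10.0   # the same data after demotion to a spinning-disk tier

def serial_iops_ceiling(latency_ms, outstanding_requests=1):
    """Upper bound on IOPS for a workload issuing requests one at a time."""
    return outstanding_requests * 1000.0 / latency_ms

print(f"On SSD:  ~{serial_iops_ceiling(SSD_LATENCY_MS):,.0f} IOPS per thread")
print(f"On SATA: ~{serial_iops_ceiling(SATA_LATENCY_MS):,.0f} IOPS per thread")
# Roughly 1,250 vs. 100 IOPS: an order-of-magnitude swing the tenant never asked for.
```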

So if tiering isn’t the answer, what is? Simply putting all the data on SSDs helps, but on its own it doesn’t take advantage of the fact that some applications and data really are more active than others, and there is an understandable hesitance to “waste” SSD performance on less active data. At SolidFire, we think the answer to these issues is performance virtualization. SolidFire’s unique performance virtualization technology decouples capacity from performance and allows service providers and their customers to dial in the exact performance required for each application. Have a lot of data that doesn’t need much performance? No problem, those IOPS aren’t wasted – they’re simply available for other, more demanding applications on the storage cluster. Either way, you get the performance you expect day after day, without any surprises, and if you need more or less, you can change it instantly. Skip the data tiering stopgap and get an all solid-state solution that can optimize performance across thousands of applications.
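
For a sense of what “dialing in” performance looks like in practice, here is a minimal sketch of provisioning per-volume QoS through a JSON-RPC-style management API. The endpoint path, method name, and the minIOPS/maxIOPS/burstIOPS field names are assumptions for illustration, not a documented API reference.

```python
# A minimal sketch of setting a volume's performance independently of its capacity.
# Endpoint, method name, and QoS field names are illustrative assumptions.
import json
import urllib.request

def set_volume_qos(cluster_ip, volume_id, min_iops, max_iops, burst_iops):
    """Request a guaranteed IOPS floor, a steady-state cap, and a burst allowance."""
    payload = {
        "method": "ModifyVolume",          # assumed JSON-RPC method name
        "params": {
            "volumeID": volume_id,
            "qos": {
                "minIOPS": min_iops,       # performance the application can always count on
                "maxIOPS": max_iops,       # steady-state ceiling
                "burstIOPS": burst_iops,   # short-term headroom for spikes
            },
        },
        "id": 1,
    }
    req = urllib.request.Request(
        f"https://{cluster_ip}/json-rpc/7.0",   # assumed endpoint
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    # Authentication and TLS verification are omitted for brevity.
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example: a modest database volume that needs predictable, not maximal, performance.
# set_volume_qos("192.0.2.10", volume_id=42, min_iops=1000, max_iops=5000, burst_iops=8000)
```

The point of the sketch is the shape of the request: performance becomes a first-class, per-volume setting that can be changed on the fly, rather than an emergent property of wherever a tiering engine happened to place the data this week.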

-Dave Wright, Founder & CEO
