The SolidFire Blog
Storage Notes for the Next Generation Data Center

SANs don’t do justice to SSD


So what would happen if you took the HDDs in your SAN and replaced them with the latest SSDs? Don’t faster disks equal faster storage? Unfortunately, it is not that straightforward. Traditional SAN architectures dramatically complicate the use of SSDs because both the hardware and the software were designed around spinning media – not flash.

Today there is an ever-widening gap between compute and storage IO. Large multi-core servers packed with memory can deliver enormous IO demand over extremely fast networks, while traditional storage systems have languished with high latency and poor IO. Compute technology has been outpacing SAN and disk performance for years, and at this point traditional SANs are consuming more than their fair share of the IT budget trying to keep up. So why doesn’t swapping HDDs for SSDs fix this problem on its own? The answer lies in the storage controllers and the storage operating system. Within traditional storage architectures these aged components do more harm than good to SSDs, and they are unable to take advantage of the benefits SSDs offer.

Controller IO Bottleneck
Traditional storage controllers were designed to manage thousands to tens of thousands of IOPS, not the hundreds of thousands to millions of IOPS that SSDs are capable of delivering.  Current controllers simply can’t keep up.
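To make the mismatch concrete, here is a back-of-envelope sketch in Python. All of the figures are assumptions chosen for illustration (a hypothetical 50K IOPS controller ceiling and 75K IOPS per drive), not measurements of any particular array:

```python
# Hypothetical numbers showing how a legacy controller caps the IOPS
# a shelf of SSDs could otherwise deliver.

CONTROLLER_MAX_IOPS = 50_000    # assumed ceiling for a traditional dual-controller SAN
SSD_RANDOM_READ_IOPS = 75_000   # assumed per-drive capability for a SAS/SATA SSD
DRIVES_PER_SHELF = 24

raw_capability = SSD_RANDOM_READ_IOPS * DRIVES_PER_SHELF
delivered = min(raw_capability, CONTROLLER_MAX_IOPS)

print(f"Raw SSD capability:           {raw_capability:,} IOPS")
print(f"Delivered through controller: {delivered:,} IOPS")
print(f"Media utilization:            {delivered / raw_capability:.1%}")
```

With these assumed numbers, 1.8 million IOPS of raw flash capability is throttled to 50,000 at the controller – under 3% of what the media can do.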

Traditional SAN architectures are not designed to maintain the integrity of SSDs
Data layout architectures that optimize for the deficiencies of spindle physics are ineffective with SSDs. Write patterns and redundancy mechanisms such as RAID cause write amplification that puts unnecessary load on SSDs. These algorithms accelerate the wear of the flash media and have fed the myth that SSDs are inferior to HDDs and wear out quickly. So for the record: it is legacy storage architectures, and how they manage SSDs, that limit SSD adoption and life cycle. Today’s SSD duty cycles can be on par with HDDs, and they are getting better.
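The RAID contribution to write amplification follows from the textbook small-write penalties, sketched below. The multipliers are the classic per-scheme figures; the daily write load is an assumption, and this is all before the SSD’s own internal garbage-collection amplification, which the extra churn only makes worse:

```python
# Extra media writes that classic RAID schemes add to every small
# random host write (textbook small-write penalties; real arrays vary).

RAID_WRITE_MULTIPLIER = {
    "RAID-10": 2,  # every write is mirrored to a second drive
    "RAID-5":  2,  # write new data + write new parity (after 2 reads)
    "RAID-6":  3,  # write new data + write 2 parity blocks
}

host_writes_tb_per_day = 1.0  # assumed daily host write load

for level, factor in RAID_WRITE_MULTIPLIER.items():
    media_writes = host_writes_tb_per_day * factor
    print(f"{level}: {host_writes_tb_per_day} TB of host writes -> "
          f"{media_writes} TB hitting the flash each day")
```

Under RAID-6, every terabyte the host writes becomes three terabytes of flash wear before the drive’s own internal bookkeeping even starts.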

Limited Deployment of SSD
Within traditional SANs, SSDs are predominantly deployed as either a cache or a small storage tier. SSDs used in these modes receive heavy write traffic and churn, which places tremendous wear on the drives. To compensate, most manufacturers require the exclusive use of the most expensive, wear-resistant SSDs, which drives up solution cost. Think of the cost and wear implications if you deployed SSDs across an entire legacy SAN architecture… not a pretty picture.
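The endurance math behind that wear is simple to work through. In the sketch below, every figure (an 800 GB drive, a 3 drive-writes-per-day rating, 6 TB/day of cache churn) is an assumption for illustration, using the standard capacity × DWPD × days formula for rated terabytes written:

```python
# Rough endurance math (assumed figures) for an SSD used as a cache
# tier, where churn concentrates the array's write traffic on few drives.

drive_capacity_tb = 0.8        # assumed 800 GB drive
rated_dwpd = 3                 # assumed drive-writes-per-day endurance rating
warranty_years = 5

rated_tbw = drive_capacity_tb * rated_dwpd * 365 * warranty_years

cache_write_tb_per_day = 6.0   # assumed churn through a small cache tier
lifetime_years = rated_tbw / (cache_write_tb_per_day * 365)

print(f"Rated endurance: {rated_tbw:,.0f} TB written")
print(f"At {cache_write_tb_per_day} TB/day of cache churn, "
      f"the drive wears out in ~{lifetime_years:.1f} years")
```

Even a high-endurance drive rated for five years of normal use burns through its rated writes in about two years at that churn rate – which is exactly why vendors mandate the priciest flash for cache and tiering roles.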

The solution to leveraging SSDs in an intelligent and cost-effective manner is a new storage architecture: one built from the ground up around SSD technology, sizing cache, bandwidth, and processing power to match the IOPS SSDs provide while extending their endurance. It requires an architecture designed to take advantage of SSDs’ unique properties in a way that makes a scalable, all-SSD storage solution cost effective – today.

-Adam Carter, Director of Product Management


About Adam Carter

Adam Carter is an expert in storage virtualization and a 10-year veteran of scale-out IP storage systems. Adam has led product management at LeftHand Networks, HP, and VMware, bringing revolutionary products to market. Adam pioneered the industry's first Virtual Storage Appliance (VSA) at LeftHand Networks and helped establish VMware's VSA certification category. Adam brings deep product knowledge and broad experience in the storage ecosystem to SolidFire and heads up all aspects of product development.
