The Not-So-Hidden Costs of Retrofitting a Plane Mid-Flight
Tuesday, August 21, 2012 posted by Adam Carter
In enterprise IT it is common for vendors to retrofit older architectures to address new markets. With the increasing acceptance of SSDs as a viable medium in the storage hierarchy, we are running across this kind of "solution" with increasing frequency. Unwilling to cannibalize their bread-and-butter revenue streams, storage vendors slot SSDs behind controllers designed for the performance, throughput, and read/write characteristics of hard disk drives. The result is a suboptimal product severely handicapped by the constraints of the legacy architecture. A recent example of this dynamic is on display for all to see in HP's announcement of an all-flash version of its P10000 (formerly 3Par) storage array. From the limited details provided, here are some of the most revealing shortcomings of this "old solutions for new problems" approach:
- Controller Bottleneck. Constrained by the legacy controller design, the P10000 cannot exploit all of the raw performance the SSDs provide. Once the controller's IOPS ceiling is reached, each incremental drive adds only capacity. So while additional drives may improve the $/GB story, the $/IOP metric heads in the wrong direction.
- Capacity Limitation. The current maximum configuration of the all-SSD version of the P10000 is 512 drives. Using 200GB SLC SSDs, the maximum raw capacity of this design is 102.4TB. In comparison, our recently announced SF6010 cluster starts at 120TB of effective capacity and scales to 2PB. It is also worth noting that a 2PB SolidFire cluster would require half the rack space of a maxed-out 102.4TB P10000 configuration.
- System Utilization. Due to the controller constraints described above, the system's drive chassis can only be 40% utilized before maxing out the available IOPS. This leaves 60% of the system empty, wasting valuable real estate.
- Drive Utilization. Based on the IOPS figures HP recently published as the SSD equivalent of its disk-based SPC benchmark, the net yield per SSD is in the range of 750-880 IOPS. This is well below the drives' actual specifications, implying considerable underutilization of expensive SLC media.
- Rack Utilization. A P10000 array maxed out with SSDs (512 drives in total) would require five racks of equipment despite a 40% utilization rate per rack. Achieving the same IOPS from a SolidFire 3010 would require a 10-node cluster and 95% less data center real estate (only 10U).
- Power Draw. The required five racks of HP equipment equate to a power draw of 13,295 watts, which yields 33.9 IOPS/Watt. In comparison, a 10-node SolidFire cluster under full load draws approximately 3,000 watts, or 166.7 IOPS/Watt.
- Performance Variability. To offset the cost of expensive SLC SSDs, HP offers "dynamic" tiering software it calls Adaptive Optimization. However, moving data between tiers is a reactive process that tends to demote the wrong data to lower tiers, which can expose customers to dramatic performance variability. We have covered this topic extensively in prior blog posts here, here, and here.
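The capacity, drive-utilization, and power figures in the list above all follow from simple arithmetic. As a sanity check, here is a short Python sketch using the numbers quoted in this post; the ~450K IOPS SPC result is taken as 450,212 and the SolidFire figure is rounded to 500K, so small rounding differences from the published numbers are expected:

```python
# Worked arithmetic behind the comparison figures in this post.
# Inputs are the numbers stated above; 450_212 and 500_000 are
# assumed round-offs of the "~450K" and "11% more IOPS" claims.

DRIVES = 512                  # max SSDs in the all-flash P10000
DRIVE_GB = 200                # 200GB SLC SSDs
HP_IOPS = 450_212             # ~450K IOPS SPC benchmark result
HP_WATTS = 13_295             # five racks of HP equipment
SF_IOPS = 500_000             # roughly 11% more than HP's result
SF_WATTS = 3_000              # 10-node cluster under full load

raw_tb = DRIVES * DRIVE_GB / 1000       # raw capacity in TB
iops_per_drive = HP_IOPS / DRIVES       # net IOPS yield per SSD
hp_iops_per_watt = HP_IOPS / HP_WATTS
sf_iops_per_watt = SF_IOPS / SF_WATTS

print(f"Raw capacity:   {raw_tb:.1f} TB")          # 102.4 TB
print(f"IOPS per drive: {iops_per_drive:.0f}")     # 879, inside the 750-880 range
print(f"HP IOPS/Watt:   {hp_iops_per_watt:.1f}")   # 33.9
print(f"SF IOPS/Watt:   {sf_iops_per_watt:.1f}")   # 166.7
```

Each line of output lands on (or within rounding distance of) the corresponding figure cited in the bullets above.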
So how do these specs compare to the clean-slate approach we have taken here at SolidFire? In a recent article based on the SPC benchmark comparison, HP stated that the $/IOP of its all-flash array was $1.98. At the roughly 450K IOPS benchmark result, this equates to a system cost of $891,421. In comparison, at current list pricing for a 10-node cluster, SolidFire can deliver 11% more IOPS and 170% more capacity for 33% lower cost, 95% less real estate, and 77% less power draw. In the cloud market, where infrastructure cost and efficiency are survival mandates, these deltas translate into meaningful competitive differentiation. If these numbers matter to you and your business, let us walk you through the benefits of a purpose-built design in more detail.
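For a rough sense of what those relative claims imply in absolute terms, the implied SolidFire $/IOP can be derived from HP's published figure. This is a back-of-the-envelope sketch, not a price quote; exact cluster pricing is not stated here:

```python
# Implied SolidFire $/IOP from the relative claims in this post:
# 33% lower cost and 11% more IOPS than HP's benchmark result.

HP_COST_PER_IOP = 1.98
HP_IOPS = 450_212                     # ~450K IOPS ($891,421 / $1.98)
hp_cost = HP_COST_PER_IOP * HP_IOPS   # ~ $891,420

sf_cost = hp_cost * (1 - 0.33)        # 33% lower cost
sf_iops = HP_IOPS * 1.11              # 11% more IOPS
sf_cost_per_iop = sf_cost / sf_iops

print(f"Implied SolidFire $/IOP: ${sf_cost_per_iop:.2f}")  # $1.20
```

Note the result depends only on the two ratios: $1.98 x (1 - 0.33) / 1.11 ~ $1.20 per IOP.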
-Adam Carter, Director of Product Management