vExpert Event - Hypervisor & Storage QoS; Two great tastes that taste great together
Thursday, April 10, 2014 posted by Aaron Delp, Cloud Solutions Architect

This is just a quick note to my fellow vExperts out there.

We have a vExpert-only event coming up on April 17th at 12:00 EST (9:00am PST). SolidFire recognizes that vExperts love VMware virtualization and are always looking to learn new concepts. About a year ago SolidFire started talking about how we could tackle the guaranteed performance problem in a unique way through the combination of hypervisor and storage array technology. If you are looking to provide guaranteed performance to hundreds or even thousands of tenants or applications, you want to attend this event.

If you are a VMware vExpert and this sounds interesting to you, please register (vExpert access to the VMware forums required). Adam will be presenting and I’ll be helping to answer questions along the way.

Title: Hypervisor & Storage QoS; Two great tastes that taste great together
Presenter: Adam Carter, Director of Product Management, SolidFire
Abstract: Implementing a storage QoS mechanism at the hypervisor, without similar enforcement at the storage level, does not completely address the challenges imposed in a multi-application infrastructure. In this deep dive with the vExpert community, the SolidFire team will discuss the key attributes of their scale-out block storage architecture that allow administrators to achieve volume-level QoS regardless of operating condition across their shared storage infrastructure. As part of this discussion, SolidFire will preview ongoing development efforts intended to create a tighter API-based integration between the hypervisor and storage systems through advanced SIOC integration and the VVOLs API program.

As a token of our appreciation for investing your time to learn more about SolidFire and provide feedback on our latest VMware integrations, all vExperts who register and attend the session will receive a free Google Chromecast Streaming Media Player.

-Aaron Delp, VMware vExpert & Cloud Solutions Architect


Scale-Out Architecture, Quality of Service, and Polyglot Persistence - a story of databases
Tuesday, April 8, 2014 posted by Tyler Hannan, Sr. Marketing Strategist

Today, April 8, 2014, we announced a series of best practices for deployment of Oracle & Microsoft SQL Server. This is an extension of the work that SolidFire has previously performed with MongoDB. The press release – which was entitled SolidFire Widens Reach In Enterprise Database Market with Oracle and Microsoft SQL Server Solutions, Oracle Gold Partnership – includes a quote from Jeff Wright (Database Application Engineer) that is quite compelling. A section of it, in particular, resonates strongly with me.

“SolidFire guarantees performance from tens to hundreds of databases within a shared infrastructure, facilitating significant storage consolidation, improving query response times and shrinking backup windows.” - Jeff Wright, Database Application Engineer, SolidFire

In 2011, Martin Fowler popularized a term – Polyglot Persistence – that radically impacted my approach to database design and usage. The notion is fairly simple: any enterprise will be using a variety of different data storage technologies for different kinds of data.

Enterprise IT, particularly in the area of data persistence, has undergone a massive transformation in thinking. I may deploy an OLTP system when consistency and transactional control are business requirements, but utilize a “NoSQL” system when my data model is inherently non-relational, can be easily denormalized, or development patterns dictate. In fact, many enterprises utilize different persistence solutions in the context of the same application.

Early in my career, and during all of my database design & implementation courses, there was an inherent assumption that a “database” was not just a software solution but a monolithic set of infrastructure components including, in many cases, compute, network, and storage. Polyglot persistence that maintains the inherent assumption of monolithic infrastructure creates almost as many problems as it solves.

This is the beauty of the SolidFire storage system. Shared, scale-out storage infrastructure enables your model of polyglot persistence without investing in one-off, per-persistence-model infrastructure solutions. Combining this with SolidFire’s granular, guaranteed-performance Quality of Service controls ensures that this shared infrastructure is composable and tunable for the business, application, and development requirements faced by the enterprise IT department.
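To make the per-workload tuning idea concrete, here is a minimal Python sketch of assigning different QoS envelopes to database volumes through a JSON-RPC style storage API. The method name ("ModifyVolume") and QoS fields (minIOPS/maxIOPS/burstIOPS) follow SolidFire's published conventions, but treat every identifier and number here as an illustrative assumption rather than a reference.

```python
# Hypothetical sketch: per-database QoS on a shared scale-out array.
# All method/field names and IOPS values are assumptions for illustration.

def qos_request(volume_id, min_iops, max_iops, burst_iops):
    """Build a JSON-RPC request body that pins a volume's QoS envelope."""
    return {
        "method": "ModifyVolume",
        "params": {
            "volumeID": volume_id,
            "qos": {
                "minIOPS": min_iops,      # floor: guaranteed even under contention
                "maxIOPS": max_iops,      # sustained ceiling
                "burstIOPS": burst_iops,  # short-term burst allowance
            },
        },
    }

# An OLTP volume gets a high guaranteed floor; a denormalized NoSQL
# volume tolerates a lower floor with more burst headroom.
oltp_req = qos_request(101, min_iops=5000, max_iops=15000, burst_iops=20000)
nosql_req = qos_request(102, min_iops=1000, max_iops=8000, burst_iops=15000)

# To apply, each body would be POSTed to the cluster's API endpoint,
# e.g. https://<cluster>/json-rpc/<version> (not executed here).
```

The point of the sketch is that both persistence models share one storage cluster while each volume carries its own performance contract.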

It is often no longer wise to invest heavily in a single database solution. Rather, forward-looking enterprises - those designing and implementing their Next Generation Data Centers - utilize database solutions that are tailored to access patterns, data models, and development preferences. The best solution for this polyglot persistence model, with shared storage infrastructure, is SolidFire.

-Tyler Hannan, Sr. Marketing Strategist 

Settling in at the Intersection of Flash & Cloud
Monday, April 7, 2014 posted by Aaron Delp, Cloud Architect

Today I’m proud to announce I’m joining SolidFire! In my new role I will be helping our customers and development teams create the best cloud solutions by integrating SolidFire with VMware, OpenStack, and CloudStack products. As somebody who has been around both traditional Enterprise virtualization and cloud computing for a number of years I wanted to share a few trends that I see in the industry and why I believe SolidFire is such a great fit.

Simply put, cloud service providers (or an Enterprise that runs its IT like a service provider) are demanding more from their storage than what is offered by traditional scale-up array architectures. Even within the all-flash array market, features such as inline deduplication/compression and encryption at rest are commodity features. It’s time to start thinking bigger. Let’s look at a few examples.

The Move to Scale-Out Architecture – I think this graph says it best. As more applications and tenants are added to an array, scale-up architectures (even All Flash) hit a wall when they bump up against performance and/or capacity thresholds. True Scale-Out designs, especially in a shared storage infrastructure where consistent performance matters, shouldn’t force their customers to choose between Best-Effort-as-a-Service or Just-Throw-More-IOPS-At-It-as-a-Service.

SolidFire has an underlying architecture that is unique in the market today. Features such as Helix Data Protection, Guaranteed QoS, Performance Virtualization (decoupling capacity and performance from the hardware), and Balanced Load Distribution provide a solid foundation for scalably supporting thousands of applications from a single cluster and management interface.

It’s About More Than Going Fast – With the release of Element OS 6, the first thing I noticed was that the latest version didn’t focus on Flash technology at all. Been there, done that. The focus was on solving common issues around backup and disaster recovery that we all face in data center operations today. What good is an array if you can’t replicate the data somewhere? What good is an array if you can’t easily perform backup and restore operations? SolidFire has changed the All Flash game by offering array-to-array replication at no cost AND the ability to back up and restore from S3- and Swift-based object storage. How cool is that?!

Integration with Industry Leading IaaS products – SolidFire is the only all-flash storage array that offers plug-ins to OpenStack, VMware, and CloudStack. We have dedicated development teams that not only contribute code but also provide thought leadership (the OpenStack Cinder PTL and CloudStack PMC members are SolidFire employees) into how to better integrate block storage into IaaS projects and products. Want to provision storage AND assign QoS to a volume from within the IaaS interface? We can do that.

Operations Scalability Through Automation – Every management feature in a SolidFire array can be accessed through our REST API or web GUI (built entirely on the Element OS API, of course). This means easy integration into just about any existing operations workflow, allowing greater flexibility and scalability while consuming less time and fewer resources.
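As a rough illustration of that workflow integration, here is a minimal Python sketch that builds Element-style JSON-RPC request bodies for two common management actions. The endpoint path, method names, and parameters are assumptions for illustration and should be checked against the actual Element OS API reference.

```python
# Minimal sketch of driving a management workflow through a JSON-RPC style
# API. Endpoint path, method names, and fields are illustrative assumptions.
import json

class ElementClient:
    def __init__(self, mvip, version="6.0"):
        # Management virtual IP of the cluster plus API version in the path.
        self.url = f"https://{mvip}/json-rpc/{version}"

    def build(self, method, **params):
        # Every management feature maps to one method + params body,
        # which is what makes workflow integration straightforward.
        return {"method": method, "params": params}

client = ElementClient("cluster-mvip.example.com")
create = client.build("CreateVolume", name="ops-vol-01",
                      accountID=1, totalSize=1 * 1024**4, enable512e=True)
snap = client.build("CreateSnapshot", volumeID=42, name="pre-upgrade")

# A real workflow would POST each body with an HTTP client and check the
# "result"/"error" keys in the JSON response (not executed here).
print(json.dumps(create, indent=2))
```

Because every action is just a JSON body over HTTPS, the same pattern drops into configuration management tools, ticket-driven runbooks, or home-grown portals.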

In summary, as the IT industry shifts from the classic monolithic and static operational models into the new era of dynamic, agile workflows of cloud computing, it’s time to look beyond traditional storage hardware architectures and consider products that are built from the ground up for the next generation of applications and operations.

If any of this interests you I will be at the following industry events over the next few weeks and would love to talk more about all the creative ways we are helping customers.

- Aaron Delp, Cloud Architect

Solid Storage Progress with CloudPlatform 4.3
Wednesday, March 26, 2014 posted by Mike Tutkowski, Apache CloudStack Project Management Committee & Sr. CloudStack Developer, SolidFire

Since getting involved in the Apache CloudStack community with the 4.2 release, we have focused on driving critical enhancements to the CloudStack storage framework. With Citrix’s GA release of CloudPlatform 4.3 today, I wanted to take a moment below to point out some of the key storage-related features in this release. Highlights of today’s announcement include:

  • Extended Hypervisor Coverage: With the addition of KVM as a supported hypervisor for dynamic data disk provisioning, CloudStack now supports dynamic provisioning with all three of the major hypervisors used in CloudStack deployments: XenServer, ESX and KVM.
  • Support for hypervisor snapshots: With this new feature in 4.3, when an administrator dynamically provisions a data disk in CloudPlatform, they can also allocate storage capacity to accommodate hypervisor snapshots. Additional benefits can be realized from this feature when using a storage system like SolidFire that can thinly provision the snapshot storage capacity.
  • Virtual Desktop Support: Through the native integration of the XenDesktop 7.5 release (which we blogged about here) with CloudPlatform 4.3, customers can now deploy, flex and manage their XenDesktop virtual desktop infrastructure from within CloudPlatform. Leveraging SolidFire’s Citrix Ready storage system in this environment, customers can confidently support the storage demands of a virtual desktop infrastructure in a multi-tenant or multi-application environment.

To learn more about the new 4.3 functionality and what cool things are planned for 4.4 (hint: we are working on some major advances in the CloudStack storage framework), come hang with us at the CloudStack Collaboration Conference in Denver, April 9-11th. I will also be presenting at the conference on Friday morning: Key Design Considerations For Your Cloud Storage. I look forward to seeing everyone at the event.

 - Mike Tutkowski, Apache CloudStack Project Management Committee & Sr. CloudStack Developer, SolidFire

A New Element for the Next Generation Data Center
Thursday, March 13, 2014 posted by Dave Wright, CEO & Founder

Today SolidFire is announcing Element OS 6 (Carbon), our next major system update that further increases SolidFire’s lead as the most feature complete all-flash array on the market today. Only SolidFire combines the 5 key elements needed for true storage agility in Next Generation enterprise and service provider data centers: scale-out architecture, guaranteed application performance, automated management, extreme high availability, and full in-line efficiency. The Carbon release builds on this core architecture with both highly requested features as well as some groundbreaking new capabilities, including:

  • Native 16Gb Fibre Channel connectivity in an active/active scale-out configuration, which can be used simultaneously with our existing 10Gbit iSCSI support
  • Real-time remote replication for long distance data protection and disaster recovery, with one of the most flexible replication topologies of any storage system and integrated compression and deduplication
  • Mixed-node cluster support, allowing customers to mix any current-generation SolidFire nodes in a single cluster, along with future systems, giving customers immediate access to the latest flash technologies and completely eliminating the forklift storage system upgrade
  • Integrated backup and restore to secondary storage, including any object storage that supports the S3 or Swift APIs, with direct data transfer and support for efficient incremental backups

Carbon is a free upgrade to existing customers with all the new functionality, including replication, included at no additional charge.

I’d like to add a few more words on the importance of each of these features for our customers.

Although many see converged Ethernet networks as the future single fabric for the datacenter, Fibre Channel has a significant install base and is going to be a long-term fixture in many data centers. SolidFire is uniquely positioned to help customers both leverage their existing Fibre Channel investments and transition to next-generation Ethernet based storage networks over time, with an architecture that can scale out with either (or both) while supporting our complete feature set including guaranteed Quality of Service, replication, and in-line data reduction.

While remote replication has long been standard on high-end disk arrays, its absence is particularly notable on the all-flash array side, where no other ground-up all-flash architecture currently offers native remote replication, instead relying on external software or hardware replication appliances. Highly efficient remote replication is a key requirement for many business critical applications, and SolidFire now enables those applications on a scale-out all flash architecture with guaranteed performance.

Scale-up disk and flash architectures have long had a critical failing: They lock you into a set of technology for a 3-5 year period, with no ability to leverage the latest and greatest hardware improvements short of replacing your actual controllers. At the end of that period, when it is time to replace your hardware, you are forced to plan and execute a complex data migration effort which often involves application downtime or a performance impact while moving data. SolidFire’s scale-out architecture and new mixed-node support completely eliminate this headache. Customers buy only the capacity and performance they need to start, and can scale their systems “just in time” as needed. Furthermore, they aren’t locked into the hardware they started with - they can integrate the latest and greatest SolidFire nodes into their cluster and take advantage of the continued performance and capacity increases in flash. As old hardware is amortized out, it can transparently be removed from the system and recycled internally or externally. No other flash system offers this level of storage agility.

Backup is one of the most critical aspects of any storage strategy, but one that is most often bolted on or overlooked completely when it comes to primary storage. While host-based backup agents and stand-alone backup servers can work with any storage system (including SolidFire), the next generation datacenter calls for new backup approaches that are more transparent, more scalable, more efficient, and less expensive. Our new integrated backup and restore feature, fully supported through our REST-based API, allows customers to easily push and pull compressed backups directly between SolidFire and other secondary storage systems for cost effective and scalable data protection.
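As a sketch of how such a push-to-object-storage backup might be driven through the API: the request below follows the bulk-volume-transfer pattern SolidFire has documented for this feature, but every method and field name here should be treated as an assumption and verified against the API reference before use.

```python
# Hedged sketch: ask the cluster to push a volume backup directly to an
# S3 bucket. Method and parameter names are assumptions for illustration.

def backup_to_s3_request(volume_id, bucket, prefix, access_key, secret_key):
    """Build a request body for a cluster-driven backup of one volume to S3."""
    return {
        "method": "StartBulkVolumeRead",
        "params": {
            "volumeID": volume_id,
            "format": "native",          # cluster-native format, supports incrementals
            "script": "bv_internal.py",  # built-in transfer helper (assumed name)
            "scriptParameters": {
                "write": {
                    "endpoint": "s3",    # "swift" would target Swift instead
                    "bucket": bucket,
                    "prefix": prefix,    # where this volume's objects land
                    "awsAccessKeyID": access_key,
                    "awsSecretAccessKey": secret_key,
                    "format": "native",
                },
            },
        },
    }

req = backup_to_s3_request(7, "dr-backups", "vol7/2014-03-13",
                           "EXAMPLEKEY", "EXAMPLESECRET")
# POSTing req to the cluster's API endpoint would return a job handle to
# poll for progress; a restore reverses the direction (not executed here).
```

The data moves directly between the array and the object store, with no backup server in the path, which is what makes the approach cheap to scale.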

SolidFire’s software-based flash architecture provides unique advantages for rapid innovation over hardware-centric approaches to flash. Not only can we rapidly adopt the latest flash advances from multiple flash vendors, but we can spend our engineering efforts building deeply value added capabilities on top. Our Carbon release further proves this by rapidly closing feature gaps with legacy disk based systems while pushing the bar even higher for all flash systems.

And we’re just getting started.

 -Dave Wright, CEO & Founder

OpenStack & Enterprise Forum - The Conversation is Just Beginning
Thursday, February 6, 2014 posted by Tyler Hannan, Sr. Marketing Strategist

On January 29th, SolidFire hosted the very first OpenStack: Breaking into the Enterprise Forum, which zeroed in on the debate over the opportunity for OpenStack within the enterprise. This event, moderated by Lydia Leong, Research VP at Gartner, brought together over 350 participants from the OpenStack community, including representation from the OpenStack Foundation, SolidFire, Nebula, Internap, eBay, PayPal, Solinea, HP, and PARC. In addition to in-person attendance, the event was streamed live and cultivated a vigorous discussion, with over 2.4M impressions via the Twitter hashtag #OEForum. You can find all of the presentations and conversations archived on the SiliconAngle YouTube Channel. The 2-minute video below highlights the perspectives and opinions of both speakers and attendees after the event:

As discussed in Debating the Opportunity for OpenStack in the Enterprise last week, there is no shortage of conversation about the state of OpenStack.

Setting the stage for the Forum was Jay Prassl, VP of Marketing for SolidFire.

“At the root of this event is conversation, vigorous debate and dialogue - Not only about the challenges, but also the opportunity that is presented by OpenStack moving from early adopters into the enterprise.”

The sessions that followed included a discussion on the state of OpenStack, stories of OpenStack deployments by early web-scale and cloud provider adopters, and ended with a discussion of the innovations that are driving OpenStack adoption. Each session is worth reviewing in its entirety. Over the upcoming weeks, the SolidFire Blog will be highlighting particularly compelling pieces of content and providing our perspective on the challenges and opportunities in greater detail.

As vendors, integrators, users, and participants within the OpenStack ecosystem, our shared opportunity lies in the continued broadening of the OpenStack use case. And as enterprises continue to embrace the shift from delivering siloed IT infrastructure to delivering IT as a service, we expect OpenStack to be a key enabler of this transition.

The conversation about OpenStack in the enterprise is just beginning…

Have thoughts on what is required for OpenStack to become viable within today’s enterprise? Sound off here in the comments section, or use the #OEForum hashtag on Twitter, and we will keep the conversation going.

 -Tyler Hannan, Sr. Marketing Strategist

XenDesktop 7.5: The Worlds of Cloud & VDI Collide
Tuesday, January 28, 2014 posted by Dave Cahill

Update on 2/13/14:  Today we have announced our Citrix Ready® verification for XenDesktop and have published a SolidFire & XenDesktop Reference Architecture, a SolidFire & XenDesktop Whiteboard Video, and a SolidFire & XenDesktop Datasheet. We are committed to building the highest quality storage experience for our customers’ virtual desktop infrastructure.

At SolidFire we have invested significant time and resources over the past few years to deliver an industry-leading Citrix CloudPlatform integration for our service provider and enterprise customers. With the recent CloudPlatform 4.2 release we were one of the first companies to deliver a plug-in architecture allowing native provisioning and dynamic adjustment of storage quality-of-service directly within the CloudPlatform interface.

As we have moved up the stack to more workload specific testing and validation, one of the most obvious places to start was VDI. Storage remains an unsolved problem in VDI environments. As I have written about in the past, traditional storage systems lack the adaptability, performance and scalability to keep pace with the unpredictable demands of VDI environments as they grow from proof-of-concept to production. These storage challenges are further magnified in a multi-tenant/multi-application environment. This is where SolidFire comes in.

SolidFire’s scale-out block storage system, with granular quality-of-service controls, is uniquely suited to harness the mixed and unpredictable workload profiles that exist in VDI environments. These workloads would normally require a dedicated storage system. However, SolidFire’s ability to guarantee storage performance, dynamically adjust storage resources on the fly without hardware reconfiguration, and scale linearly and non-disruptively translates to significant user experience and cost benefits within a shared storage infrastructure.

Given our existing relationship with the CloudPlatform team at Citrix, it was natural to extend our reach to include the qualification and testing of XenDesktop. Over the last few months we have validated the SolidFire solution as Citrix Ready for XenDesktop (stay tuned for more detailed results from our joint testing in the coming weeks). But we haven’t been the only ones hard at work. While we have validated SolidFire with CloudPlatform and XenDesktop individually, Citrix has been working to create a tighter alignment between these two products. XenDesktop 7.5 is the first deliverable from this effort, allowing customers to seamlessly leverage Citrix CloudPlatform powered by Apache CloudStack and XenDesktop together in the same environment. Through the integration work completed in 7.5, customers now can deploy, flex and manage their XenDesktop virtual desktop infrastructure from within CloudPlatform.

For SolidFire, Citrix’s work with the 7.5 release brings the worlds of cloud management and desktop virtualization together. Individually, we have delivered validated integrations with Citrix’s industry leading offerings in each segment. Merged together, customers can now harness the power of both XenDesktop and CloudPlatform from within a single infrastructure. With SolidFire at the basis of this infrastructure, customers benefit from the CloudPlatform integration and XenDesktop validation to ensure they can confidently support the storage demands of a virtual desktop infrastructure in a multi-tenant or multi-application environment.

- Dave Cahill

The Battle That Lies Ahead
Friday, January 24, 2014 posted by Dave Cahill

“While we are several years into this new product investment cycle, I now feel that several companies have reached or are about to reach a level of maturity where they can truly deliver a next-generation, enterprise-ready storage array based on flash. 2014 is going to be the year where we see some incredible marketplace momentum in the next-generation/flash storage marketplace. Enterprises are ready, requirements have been set, battle lines have been drawn and, as discussed, the stakes are extremely high.”

- Eric Kaplan, CTO of Ahead, “A Storage Industry Inflection Point, S**t’s about to get real in storage”

In his recently published piece “A Storage Industry Inflection Point, S**t’s about to get real in storage”, Eric Kaplan, CTO of Ahead, takes a quick-hitting look at the implications of the inflection point in the storage industry that has resulted from the ongoing flash storage movement. Based in Chicago, Ahead is a leading systems integrator delivering innovative technology services and solutions for enterprise data center customers. The key premise of Eric’s piece is one that resonates very strongly with us here at SolidFire. Simply put, the retrofitting of complex legacy storage systems to incorporate flash is insufficient in the face of current market dynamics, including virtualization, analytics, data growth and the “do more with less” imperative. Adding flash helps performance, but performance is not the only problem.

Outlining the key attributes of the next-generation enterprise storage array, Eric highlights the need for simplicity, deep management-layer integrations with software like VMware and OpenStack, rich data services, predictable performance, broad protocol support and robust API-based automation. Perhaps most interesting in this list is the notable absence of performance. Great performance is just table stakes at this point. The impending storage battle that will lay waste to the legacy data center won’t be won by raw performance. Instead it will be the rich features and functions above the flash layer, tailored for specific use cases or workloads, that will drive significant capital and operational cost savings for customers and long-term sustainable value for companies in this space. Flash is a potent weapon in this battle, but one that every vendor wields. Relying on it as a differentiator is a short-lived strategy. We couldn’t agree more.

- Dave Cahill

Debating the Opportunity for OpenStack in the Enterprise
Monday, January 20, 2014 posted by Dave Cahill

There is no shortage of debate about the current state of OpenStack. Some of the more compelling commentary from the Hong Kong Design Summit has come from Alessandro Perilli, Geoff Arnold, and Michael Cote. The most constructive dialogues on OpenStack are those that can evaluate the strengths and weaknesses of the project while examining what is needed to appeal to a broader customer set and accelerate adoption through 2014 and beyond.

Some of the world’s largest cloud infrastructures today are built on a combination of open source software, commodity hardware, and a bunch of PhDs to make it all work. But those unable to afford the cost, complexity, or extra brain power on staff want their PhDs built into the infrastructure, not standing next to it. This is the void that OpenStack and its ecosystem are looking to fill.

While holding great promise, OpenStack today is still a complex system made up of many disparate projects and services that can make deployment difficult and costly. The appliance-ization of the OpenStack platform into a more consumable ‘plug-and-play’ form factor is critical to fostering broader adoption in segments of the market that lack the budget, skillset or time to do it all themselves. This is where the contributions and innovations of the vendor ecosystem supporting OpenStack are critical.

The opportunity for OpenStack in the enterprise is significant. Large-scale enterprises like Comcast, Bloomberg and Best Buy are already using it in some capacity today. However, there is a much larger audience waiting on the sideline attempting to understand what OpenStack means for them across their compute, networking and storage infrastructure. 

This is why we are hosting the upcoming forum, OpenStack: Breaking Into The Enterprise, in Mountain View, CA on January 29th. This event is about bringing together industry thought leaders, vendors, and users within the OpenStack ecosystem. The Q&A-style agenda includes conversations around the current state of OpenStack adoption, how it is being used today, what gaps still need to be closed, and what innovative vendors are doing to close them.

Constructively surfacing and debating the key challenges and opportunities for OpenStack in the enterprise is extremely important to push this project and community forward. If you can’t make it to Mountain View, we’re also broadcasting the event live through our media partner, SiliconANGLE/theCUBE (register for the live stream). We hope you’re able to join us on January 29th, in person or virtually, and participate in the conversation!

- Dave Cahill

Heading into 2014: Prepared for Liftoff
Tuesday, December 31, 2013 posted by Dave Wright

Since announcing general availability of SolidFire’s all-flash storage systems in November of 2012, this past year has been about validating the product in market, establishing strong early customer momentum, scaling our operations, support, partnerships and routes to market, and continuing to innovate with new products and features. Before we close the books on this year and charge into 2014, I want to take a moment to reflect on the last 12 months. Here are some of our most noteworthy accomplishments over this period:

Looking through this list, I am incredibly proud of how our team has delivered to customers a world-class storage offering unmatched in scale or functionality by either startup or incumbent vendors. The all-flash storage market is noisy and crowded. 2014 will be the year that we see the dust settle and real winners emerge. Winning value propositions will be defined by architectures that are enabled by flash rather than defined by it. Market leaders will deliver a broad and innovative feature set that goes beyond just performance and delivers on the scale, agility, efficiency, and reliability expectations of enterprise storage. In this context, SolidFire is better positioned than any emerging storage company in the market.

If you think you are up for the challenge of helping us build on our success in ‘13, then take a look at our careers page. We are hiring the best talent in the world to help us achieve our goal of becoming the storage company at the core of enterprise data centers and public clouds worldwide. Happy New Year, and we look forward to seeing you in 2014.

 - Dave Wright, CEO/Founder 

What did we learn from the eleven providers that we announced last week?
Monday, December 16, 2013 posted by Stuart Oliver

Over the last week we announced eleven new service providers that have chosen SolidFire’s all-flash storage array as their primary block storage offering. What is really interesting is how diverse this service provider mix is, yet how similar their ideas of the next generation data center are. We announced some smaller, regional service providers along with some very large global service providers, and they all chose SolidFire for similar reasons. With the cost of flash dropping, they can now build their new clouds and cloud services on an all-flash architecture at costs significantly lower than spinning disk, and with superior features and capabilities.

Interestingly enough there were five recurring themes that appeared throughout all of their unique storage purchasing situations:

  1. Common vision of the next generation data center.  Each service provider that we announced this week shared a common vision of the storage functionality required within their next generation clouds to meet the growing needs of their customers.

  2. Issues with legacy storage architectures. Each provider was experiencing the exact same headaches with their legacy, controller-based SANs. They didn’t like the costs or the limited integration/automation capabilities, and they all felt that their next generation cloud requirements could not be architected using legacy storage systems.

  3. Required Flash. Each of these 11 providers required that their new storage architecture be all-flash, capable of easily scaling out to accommodate growth, and capable of handling a broad spectrum of application workloads. In the end the requirement boiled down to one system for all applications and workloads.

  4. Broad mix of infrastructure management platforms. Another interesting observation was that there is a very broad mix of platforms that providers are using to manage these infrastructures - VMware, OpenStack, CloudStack, as well as home-grown management frameworks.

Despite the commonality of the requirements and how these next generation cloud providers are approaching their storage infrastructure, each one has uniquely architected their services around their target customers. As an example, Codero Hosting (Austin, TX) recently launched their Elastic SSD Cloud Block Storage (ECBS) that is completely integrated into their control panel and deeply integrated into their backend storage fabric making it easily available to all customers whether cloud, managed or dedicated.  Another example of differentiation is Clearview (Dallas, TX). Clearview is using SolidFire to offer performance storage as a service to its colocation and managed customers along with SolidFire being the primary performance storage behind its private cloud configurations.  The final example of differentiation is GetCloudServices (Fort Pierce, FL).  GetCloudServices is primarily focused on delivering desktop as a service and uses SolidFire to deliver consistent and predictable performance to virtual desktop users provisioned by its enterprise customers.

While all of the announced service providers deliver unique solutions to their customers there was one final recurring theme that appeared across the group.

  5. Building the Business Case. The final recurring theme was that all but one of these service providers participated in the Fueled by SolidFire go-to-market program, where we took the time to meet and discuss their unique service offerings, business goals, and long-term strategic objectives. Once we understood the business drivers, we were able to very accurately demonstrate the projected solution ROI and anticipated profit margins of their proposed solutions, and how they should and should not price and position their proposed offerings. The financial validation that they could profitably build their own unique solution on an all-flash system made the decision to move forward with SolidFire an easy one that addressed both financial and technology requirements at the same time.

This is an exciting time for both service providers and SolidFire, and as we move into 2014 we look forward to supporting the new solutions and services that our service provider partners will bring to the table, Fueled by SolidFire's unique capabilities.

Bring on the customers and the applications!

Stuart Oliver, Sr. Service Provider Program Manager

Scale-Out vs. Scale-Up posted by Dave Wright

George Crump at Storage Swiss recently wrote a blog comparing the merits of scale-up versus scale-out architectures in all-flash array designs. The article starts with the assertion that scale-out systems are more expensive to build, implement, and maintain. However, with significant innovations in scale-out architecture in the past few years, combined with rapid data growth in enterprise datacenters, that view is now seriously outdated. In fact, the cost and complexity argument has flipped for most environments and now significantly favors scale-out.

The Cost of Scaling

For small capacity (<50TB) environments that don’t need to scale over time, fixed controller-based systems likely can be cheaper and simpler. But as soon as you add the element of scale, the element of growth, or the element of long-term TCO, a good scale-out architecture will be significantly less expensive and simpler. With a scale-up architecture, you either need to move to a faster controller for more performance, or add entirely new storage systems. Both of these options have significant costs associated with them. Moving to a faster controller usually involves data migration and you are left with an unused controller. Adding new storage systems exposes customers to data migration as well as the burden of more and more islands of storage to manage. A good scale-out architecture allows you to scale UP or DOWN by adding and removing nodes with no data migration and no increase in management burden. Storage systems can be sized based on the logical, physical, and application requirements of a datacenter, rather than arbitrary vendor-specified configurations.

Scaling Performance & Capacity

Due to the significant performance available in an all-flash array, most systems will run out of capacity before performance. However, in planning for growth it’s important to retain the flexibility to be able to scale both. In a scale-up design you are limited to the performance of the controller design. Regardless of how much capacity you have left, if you need to add performance you are stuck buying another controller pair. Getting an ideal balance of capacity and performance is nearly impossible. In contrast, a scale-out architecture can offer a wide range of capacity and performance points, particularly if it has the ability to mix heterogeneous nodes with different capacity and performance levels. Relying on Moore’s law and yearly controller upgrades may be a good strategy for a flash vendor, but it’s a substantial inconvenience for customers who want to leverage their investments for the maximum amount of time.
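
To make the contrast concrete, here is a minimal sketch of how a heterogeneous scale-out cluster lets capacity and performance grow together, one node at a time. The node specs are hypothetical round numbers for illustration, not actual SolidFire model figures:

```python
# Illustrative sketch of heterogeneous scale-out sizing. The node specs
# below are hypothetical round numbers, not actual SolidFire models.
NODE_TYPES = {
    "small": {"raw_tb": 3.0, "iops": 50_000},
    "large": {"raw_tb": 9.6, "iops": 75_000},
}

def cluster_totals(nodes):
    """Sum raw capacity (TB) and IOPS across a mixed node list."""
    raw = sum(NODE_TYPES[n]["raw_tb"] for n in nodes)
    iops = sum(NODE_TYPES[n]["iops"] for n in nodes)
    return raw, iops

# Start with five small nodes, then grow with three denser ones:
# no controller swap, no data migration, just more nodes.
cluster = ["small"] * 5
print(cluster_totals(cluster))        # (15.0, 250000)

cluster += ["large"] * 3
raw, iops = cluster_totals(cluster)
print(round(raw, 1), iops)            # 43.8 475000
```

Because newer node types can simply be added to the mix, the capacity/performance ratio of the pool can be re-balanced at any point rather than fixed at purchase time.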

Consistent Performance

Scale-up designs suffer from the well-known problem that as capacity is added, the controller's performance is spread over more data and applications. Even if the system is not fully "maxed out" to start, early applications are left with fewer controller resources (CPU, cache, network bandwidth) as more applications are added. Scale-up designs are particularly prone to noisy-neighbor problems, where a small number of applications can monopolize all controller resources.

By contrast, scale-out designs add more CPU, network connectivity, and memory with each node, ensuring that performance doesn’t degrade as more capacity is added. Of course obtaining linear performance growth in larger clusters is a difficult engineering problem. SolidFire’s architecture is uniquely designed to avoid performance loss as the cluster scales, and our Guaranteed QoS eliminates noisy neighbors and adds fine-grained performance control.

Data Protection

Controller-based shared-disk systems utilize redundant components for HA (redundant controllers, power supplies, etc.), but despite claims of no single point of failure, they generally share a weak point in the shared disk shelf. That disk shelf, and the backplane within it, represents a key point of failure without full redundancy. Historically, high-end disk-based systems used dual-ported FC or SAS drives to allow independent backplane connections to each drive, reducing (if not completely eliminating) this single point of failure. Flash arrays that use SATA-based flash drives can't do that. A shared-nothing scale-out system, with no disks shared between shelves, doesn't have this limitation and can truly offer no single point of failure as a result. In addition, a good scale-out architecture can self-heal without requiring "extra" redundant components, removing the fire drills associated with storage component failures. At small scale these differences may not matter much: disk shelf failures are fairly rare, and a 4-hour data outage for parts replacement won't kill most customers. But at large scale, and in environments where 5+ 9's of availability are needed, shared-disk flash systems represent an added risk.
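
As a toy illustration of the shared-nothing idea (this is not SolidFire's actual Double Helix implementation, just the core principle), the sketch below places two copies of each block on two different nodes and checks that no single node failure can lose data:

```python
import itertools

# Toy sketch of shared-nothing replica placement. This is NOT SolidFire's
# actual Double Helix algorithm, just the core idea: two copies of every
# block live on two different nodes, so no shared shelf or single node
# failure can lose data.

def place_replicas(block_ids, nodes):
    """Assign a (primary, secondary) node pair to each block, round-robin."""
    pairs = itertools.cycle(itertools.permutations(nodes, 2))
    return {b: next(pairs) for b in block_ids}

def survives_node_loss(placement, failed_node):
    """True if every block still has a replica on a surviving node."""
    return all(any(n != failed_node for n in pair)
               for pair in placement.values())

nodes = ["n1", "n2", "n3", "n4"]
placement = place_replicas(range(100), nodes)
assert all(survives_node_loss(placement, n) for n in nodes)
```

The key property is that replica pairs never share hardware, so a rebuild after failure is a matter of re-copying the surviving replica to any other node, not waiting on a controller or shelf replacement.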

End-of-Life Upgrades

Moving from one controller based storage system to a new generation after a 3-5 year service cycle is the bane of storage administrators’ existence. It can often take 6 months or more of planning, testing, and execution to complete, along with application downtime. As environments get larger and larger, the cost and time required continues to increase. With scale-out architectures that allow mixing of hardware generations, hardware upgrades become a trivial process. Simply add the new nodes to the cluster, and remove the old ones. No data migration, no downtime. Put the old nodes in a lab, resell or recycle them, and get back to productive work. The ability to mix generations also means that you can add in “new” storage nodes that offer higher capacity and performance (and lower cost) as you grow, rather than being stuck with old technology for 5 years.

There are places in the market for both scale-up and scale-out flash systems, but the balance is shifting rapidly toward scale-out as historical disadvantages are architected out and the more agile enterprise datacenter runs into the fundamental disadvantages of scale-up. This shift is a key reason why nearly every new storage architecture from a major vendor in the last 10 years has been scale-out (XIV, 3PAR, LeftHand, EqualLogic, XtremIO, Atmos, Isilon, etc.). The early rush of scale-up flash architectures is an aberration: a reflection of startups looking for fast time to market, rather than a market shift back to the antiquated storage paradigm of 20 years ago.

- Dave Wright, CEO/Founder 

Expect more from your storage support
Thursday, December 12, 2013 posted by Kelly Boeckman

Selecting and deploying the right storage solution for your business can be transformative. With the proper storage, you can deploy new applications and capabilities faster, increase performance and predictability and provide a more agile and scalable infrastructure. The success of your new IT solution is dependent not only on the technology itself, but how quickly and seamlessly you deploy it and how well your ongoing support needs are met.

Today, we are announcing SolidFire Active Support, a dramatically different approach to data storage support: proactive, fast, seamless and additive to your business and bottom line.

Support is the safety net under your IT solution, but it encompasses more than just break-fix or patch updates. Proper capacity and performance management, non-disruptive growth, and routine system triage can help drive investment value. Too often support is reactive, slow, and frustrating with endemic finger pointing. Your IT solution is your business, and you need it performing at its best.

Our support begins with a proven, well-defined and well-documented installation service offering that is provided with every array we sell. SolidFire will install, configure and enable your SolidFire cluster, facilitating a successful deployment.

Once you’re up and running, you can count on Active Support to continuously monitor and diagnose your systems - data is received and analyzed every 10 seconds. Traditional phone home services typically react to algorithms that look for exceptions or major faults only, which may or may not generate support tickets. Active Support takes a different approach. We analyze all manner of data, putting issues in context so the proper support can be delivered. Active Support ensures clusters are maintained and operated at the highest possible level of availability and performance.

  • Proactive Philosophy - We continuously monitor your systems and proactively alert you when a problem is present. Often we’ll alert you to a system issue before you were even aware of it.

  • Secure Assist - Our support engineers can remotely and securely log in to systems to provide hands-on, real-time support.

  • 24x7x365 Worldwide Availability - Active Support is global: international offerings are available, with 4-hour break-fix service.

  • Expert Support Engineers - All calls and cases are handled by tier-three support engineers who can resolve your issues or answer your questions the first time, every time.

  • Active Cloud Monitoring - You are granted access to our custom Active Cloud Monitoring tool. Active Cloud Monitoring runs real-time diagnostics and analysis at the system and volume level, displaying historical data and trending analysis. Active Cloud Monitoring data is stored and accessible for up to five years.

 “Solidfire support is by far the best vendor support that [we] deal with day in and day out.  We never have to wait for hours for an actual person to follow up on an issue.  Most of the time it is Solidfire that calls us and tells us to take an action to avoid a potential issue.  All I can say is thank you and keep up the great work.” - Hetal Patel, Datapipe

To learn more about SolidFire Active Support, please visit our website, or contact us for more information.

- Kelly Boeckman, Sr. Product Marketing Manager

Codero Quiets the Noisy Neighbor with Next Generation Flash Storage
Wednesday, December 11, 2013 posted by Dave Wright

The following is a guest post from David Wright, CEO of SolidFire, posted on the Codero Blog.

After my previous company, Jungle Disk, was acquired by Rackspace in 2008, I had the pleasure of working with Emil Sayegh and Chandler Vaughn for several years. While at Rackspace, Chandler, Emil, and I had the opportunity to regularly share ideas, thoughts, and predictions on the evolution of the technology industry.  Given our shared history and our efforts to push innovation, I am really excited about what they are doing at Codero, and look forward to collaborating with them again.

Statistics show that today only about 10% of true enterprise applications run in public clouds, while the other 90% remain either inside the enterprise walls or on physical systems in colocated or managed hosting environments. Why don't more enterprises (large or small) take advantage of the cloud value proposition? In many cases there is one pain point that needs to be addressed: the storage systems that "the cloud" runs on are unpredictable, underpowered, and deliver inconsistent application performance. That's why Codero has implemented SolidFire's all-flash SSD storage to make predictable performance in the cloud a reality for all of your enterprise applications.

So why is storage a problem?

Let's take a step back a few years and look at how IT storage needs have evolved. Traditional legacy storage systems were designed to handle small numbers of internal enterprise application workloads. IT admins or storage admins knew the performance attributes of every application running and could implement an appropriately configured system to handle the job. Then came "the cloud," which offered attractive pricing to enterprise IT departments but often came with challenging performance problems due to demanding multi-tenant workloads.

The Noisy Neighbor arrives

As more virtual machines or server instances were provisioned in the cloud, both customers and service providers started to see an increase in support calls related to system performance. Eventually the root cause was determined to be the Noisy Neighbor effect due to the heavy and variable I/O loads running on the underlying shared SAN storage. It’s no surprise that applications would seem fast one minute and slow another—at times, thousands of similar applications could all be contending for performance resources from within a single, underpowered storage system.

To illustrate, the image on the left below shows the Noisy Neighbor effect on a legacy SAN, the image on the right shows how SolidFire’s all flash storage system removes the Noisy Neighbor effect and allows Codero to deliver a predictable and consistent application hosting environment to its enterprise customers via the Codero Elastic Block Storage (CEBS) solution.

[Figure: Traditional multi-tenant performance on a legacy SAN. An individual tenant's traffic spike impacts other applications, making the platform unsuitable for performance-sensitive apps.]
[Figure: SolidFire QoS in practice, enabled on Codero's Elastic Block Storage. Fine-grained tiers of performance are created, application performance is isolated, and performance SLAs are enforced.]


This is where the adaptability of a next generation storage system is critical. This is where SolidFire comes in.

At SolidFire we designed our all-flash storage architecture so that storage performance and capacity are provisionable resources that can scale up or down based on the needs of each and every application. With the ability to allocate and dynamically adjust IOPS and GBs at a very granular level, CEBS can now provision storage resources to meet both the performance and capacity needs of every application being used by the customer without ever experiencing the Noisy Neighbor effect.

This capability means that you can run all of your applications in the cloud no matter how many IOPS your application may need. Whether it's a critical SQL database that may need 1000 IOPS, a VDI (virtual desktop) environment that needs 3000 IOPS, or a simple web server that needs 100 IOPS, CEBS gives you the performance you need without affecting, or being affected by, any other customer on the system. SolidFire's fully HA, all-flash storage architecture has enabled Codero to deliver a predictable, high-performance, and consistent hosting experience to its enterprise customers.
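
A rough sketch of how per-volume QoS admission control can work in the spirit described above (illustrative logic and numbers only, not SolidFire's actual implementation):

```python
# Illustrative per-volume QoS admission control (sketch only, not
# SolidFire's implementation; the cluster capability is a made-up number).
CLUSTER_IOPS = 200_000

volumes = {}  # volume name -> {"min": guaranteed IOPS, "max": ceiling}

def provision(name, min_iops, max_iops):
    """Admit a volume only if its guaranteed minimum can still be honored."""
    committed = sum(v["min"] for v in volumes.values())
    if committed + min_iops > CLUSTER_IOPS:
        raise RuntimeError("cannot guarantee %d IOPS; only %d uncommitted"
                           % (min_iops, CLUSTER_IOPS - committed))
    volumes[name] = {"min": min_iops, "max": max_iops}

provision("sql-db", 1000, 5000)      # critical SQL database
provision("vdi-pool", 3000, 10000)   # virtual desktop environment
provision("web", 100, 500)           # simple web server
print(sum(v["min"] for v in volumes.values()))  # prints 4100
```

Because the sum of guaranteed minimums is never allowed to exceed what the cluster can deliver, a noisy neighbor can burst only into spare headroom, never into another tenant's guaranteed floor.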

We invite you to check out CEBS, a high-performance, all-flash SSD storage-based cloud Fueled by SolidFire. Learn how you can run the other 90% of your enterprise applications more cost-effectively while delivering an even better application experience to your end users.

It is always great to work with people who you know, trust, and respect. SolidFire and Codero are committed to bringing the most innovative, best-of-breed products to the market. Incredible innovation occurs when you bring together two of the top thought leaders in the industry. Here’s to reigniting great friendships and moving forward with progressive partnerships.


Welcome to the Flash Party, EMC
Tuesday, November 12, 2013 posted by Dave Wright

EMC is launching their first new array architecture in over a decade this week with the XtremIO all-flash array. Prior to launch, following a long industry tradition, EMC is equipping their sales teams with "kill sheets" against other flash vendors. Typically these documents live in the murky world between "creative" positioning, FUD, and outright untruth, but seldom are they publicly available, as EMC's SolidFire kill sheet is. (ed: Document Removed) Since it's out there, it provides an interesting opportunity to compare and contrast EMC's approach to flash storage, and the corresponding product strengths and weaknesses.

On the surface, there are some clear parallels between the XtremIO and SolidFire architecture. Both utilize scale-out approaches that allow performance and capacity to scale. Each system utilizes in-line deduplication with a content-based placement algorithm to distribute data across the cluster. However, beyond the surface similarities, there are significant architectural differences that will dictate both where and why customers would use XtremIO.

EMC's approach to scale-out is a series of traditional dual-controller / shared-disk 6U "bricks" linked together over an InfiniBand backplane. SolidFire utilizes shared-nothing scale-out 1U nodes linked together over standard 10GbE. EMC has optimized around high performance at low capacity points, while SolidFire has focused on high density for both capacity and performance while also being cost-competitive with disk. Each approach has its tradeoffs. Let's look at some of EMC's positioning points:

  • Ask SolidFire about the failure rates of consumer grade MLC flash memory and why they are not using eMLC for enterprise SSD resilience. 
    The irony of this argument is palpable. SolidFire, and every other ground-up flash architecture on the market, uses standard MLC flash. Like others, SolidFire has an architecture that minimizes any wear concerns, and we warranty the drives for five years. Real-world failure rates are dramatically lower than for spinning disk. The real question is: Why is EMC forced to use significantly more expensive eMLC on a supposedly flash-optimized architecture?
  • XtremIO’s content aware architecture delivers up to 250K IOPS per node with <1 ms latency performance during real world workloads. SolidFire claims best case 50K IOPS per node with <2 ms latency performance.
    Besides being factually incorrect, as SolidFire offers up to 75K IOPS per node and <1 ms latency, it’s really not reasonable to compare a 6U / dual controller / 25 drive X-brick to a single SolidFire node. In the same 6U of rack space, SolidFire provides a unified pool of 450K IOPS. Furthermore, XtremIO’s 250K IOPS is based on a 100% read workload, rather than the more real-world mixed workload that SolidFire (and most other vendors) uses.
  • XtremIO has a no single point of failure (NSPOF) design with all drives accessible to active/active storage processors. SolidFire nodes have a single controller so a failure disables the entire node and up to 20% of capacity.
    SolidFire has an NSPOF design. Any component in the system - including entire nodes - can fail, and not only will SolidFire continue to run with guaranteed Quality of Service, it self-heals and rebalances data among the remaining nodes. By comparison, if XtremIO loses a controller, it may continue to run but loses significant performance (because the active/active pair shares the load). More importantly, the system is then non-redundant until the failed controller is replaced. If a shared disk shelf fails, the complete XtremIO system goes down, as there is no redundancy beyond RAID within the disk shelf. The same goes for any failure that disables an entire X-brick. In reality, SolidFire's HA design goes far beyond traditional dual-controller models by removing problem points such as shared disk and eliminating the storage-failure fire drill with automatic self-healing and guaranteed QoS.
  • Ask SolidFire about their < 50% raw capacity per node… XtremIO raw capacity is more than 70% due to an efficient data protection algorithm with no mirroring.
    SolidFire utilizes a unique data protection scheme called Double Helix based on a patent-pending distributed replication architecture. Double Helix protects against both drive as well as node-level failure. Because we don’t need to do parity reads during drive failure, rebuilds complete faster and with minimal performance impact. XtremIO utilizes a modified RAID6 to protect data within a single disk shelf only. There is no protection between nodes, and the system requires expensive dual-controllers for EACH disk shelf to provide redundancy. The small trade-off we make in capacity is actually more than paid off by not requiring a second controller and separate disk shelf hardware.
  • Ask SolidFire to compare their 1.5kW energy efficiency for 5 nodes to XtremIO. A single X-Brick is half the power footprint at 750 watts.
    Glad they asked! SolidFire's 5U / 5-node SF9010 footprint provides up to 48TB (raw) and 173TB (effective) capacity. A single 6U X-brick provides 10TB (raw) and 7TB (usable) capacity. The power/capacity efficiency isn't even close: SolidFire requires less than half the power of XtremIO for similar capacity, not to mention a tiny fraction of the rack space. Even SolidFire's smallest SF3010 node has better power and capacity density than XtremIO.
  • Ask SolidFire to show scalability to large node counts as all their customer wins to date are relatively small.
    Good to know EMC has such a clear view of our business! In reality, we have customers in production today with 20-node clusters, and some who regularly run 40-node clusters internally. By comparison, at launch XtremIO will support a maximum of four X-bricks, though EMC has previously mentioned scaling to eight in the future. Whether this is an architectural limit or simply an attempt to protect the positioning of VNX/VMAX today isn't clear, but what is clear is that SolidFire has the only production-proven scale-out flash system with a full feature set, including data reduction, in the industry today. Even our smallest clusters are larger than the largest XtremIO system available at GA.
  • Ask SolidFire about their flexibility to start with a configuration smaller than 5 nodes. XtremIO starts with a single HA X-Brick for an attractive entry point.
    SolidFire’s starting footprint uses less rack space than a single X-brick, while offering significantly more effective capacity. A single X-brick certainly has less capacity, and will likely have a lower starting price, but fundamentally this illustrates the different use case that EMC is focused on. For EMC, XtremIO is a point solution for individual applications and use cases that simply can’t get enough performance from disk-based or hybrid systems. SolidFire was designed to fully compete as a complete disk replacement for multi-application and multi-tenant storage environments from 60TB to more than 3PB.

We could go on all day like this, but I think we’ve shown that while SolidFire has certainly made different design decisions in some areas (like shared-nothing vs. dual-controllers), there are strong advantages to the SolidFire approach in the use cases and environments we focus on. The SolidFire system leads the industry in flash density and cost-effectiveness, with a range of functionality such as Guaranteed QoS that can’t be found in any storage system - disk or flash based.

Finally, since EMC was nice enough to lay out some of their "questions" publicly, here are a few to ask them:

  • Can you expand (or shrink) an existing cluster?
    Initial reports are that you can’t actually scale-out (or scale-down) an XtremIO cluster, or at least not without data migration. SolidFire provides a completely dynamic scale-out (and scale-down) capability with automatic data and IO balancing.
  • Why does each X-brick require a UPS?
    It appears that XtremIO commits writes in DRAM, rather than using an NVRAM device or power-protected RAID controller. The system requires a full UPS to protect against data loss on power failure. Few storage systems have ever taken this approach, as it carries inherent risks of data loss or corruption for situations like software faults or UPS failure.
  • What’s the performance impact of a controller failure?
    XtremIO utilizes a traditional active/active controller design, which means that loss of a controller translates to loss of half the IO ports and controller CPU. Usually this means a significant performance impact as well. By comparison, SolidFire’s shared-nothing model means that loss of a node only loses the capacity and performance proportional to what failed (for example, 5% in a 20-node system).
  • How much will it cost?
    Of course that’s the question everyone wants to know! EMC has a tricky job positioning XtremIO against its existing disk, hybrid, and even all-flash VNX/VMAX portfolio. Lacking compression, XtremIO is at a 2X or more effective capacity penalty compared to systems like SolidFire that provide in-line de-duplication, compression, and thin provisioning. The use of expensive “extras” such as eMLC, dual-controllers, separate disk shelves, and IB switches will increase the cost as well. Early reports have XtremIO at $8/GB or more, while SolidFire is now solidly under $3/GB for effective capacity. Cost certainly isn’t everything, but it further separates the use cases for something like XtremIO and SolidFire.

Hopefully we’ll see their response!

-Dave Wright, Founder & CEO

The cure for what ails VDI is more than just IOPS
Thursday, November 7, 2013 posted by Dave Cahill

With penetration rates of the professional PC market estimated between 2% to 2.5%, VDI has struggled to achieve broader market acceptance across enterprises and service providers. In searching for where to place blame for the failure of many VDI projects, storage has always been painted as a prime suspect. Traditional storage systems lack the adaptability, performance and scalability to keep pace with the unpredictable demands of VDI environments as they grow from proof-of-concept to production. The resulting capex, opex and user experience impact of the various workarounds deployed to try and circumvent these storage deficiencies effectively destroys the ROI of any VDI project.

Enter SSD drives, and the resulting performance gains, and now everyone believes they have discovered the cure for what ails all failed VDI projects. But beyond the IOPS boost, are things really that much better? As we have said many times before, lots of IOPS are great, but if you can't effectively control this performance, you can't ensure a positive user experience for each virtual desktop or for any other application running on that system. Are you still stuck with a dedicated storage platform for VDI, without any mechanism for isolating workloads from each other? To accommodate desktop growth, are you forced to purchase unnecessarily large increments of storage? If desktop profiles change over time, are you required to perform a complete hardware reconfiguration?

While capitalizing on the movement to flash as much as the next guy, SolidFire's value proposition in a VDI deployment spans far beyond IOPS. SolidFire's scale-out block storage system, with granular quality-of-service controls, is uniquely suited to harness the mixed and unpredictable workload profiles that exist in VDI environments. SolidFire's ability to guarantee storage performance, dynamically adjust storage resources on the fly without hardware reconfiguration, and scale linearly and non-disruptively translates to significant user experience and infrastructure cost benefits throughout the lifecycle of a VDI deployment.

Today we are increasing our external presence in the VDI market with the availability of our initial VMware View Reference Architecture and our VDI solutions page. If you are struggling to find a storage solution for your VDI needs we think we have a compelling answer. At less than $50/desktop, >1000 desktops per node and a 0.52 View Planner score, SolidFire effectively changes the economics of the storage infrastructure needed to ensure a successful VDI buildout from start to scale.

To learn more about SolidFire's solution for VDI, please see our VDI Solutions Page or contact us for more information.

-Dave Cahill, Director of Strategic Alliances

From Now to Cloud, Don't Go IT Alone
Tuesday, October 29, 2013 posted by Dave Cahill

The amazing agility, flexibility, and power of cloud computing has CIOs everywhere trying to figure out how to get cloud functionality and economics inside their own data centers. Beyond basic infrastructure improvements, many are also looking to run their IT departments like internal service providers. This shift represents a fundamental transformation in how IT services are delivered, and with it comes a myriad of practical pitfalls and vendor-centric traps along the way. But this journey from the current state of IT infrastructure to the next generation data center need not be navigated alone. That is where our Cloud Builders come in.

Today we are announcing our Cloud Builders Channel Partner Program. These partners are a select group of systems integrators and resellers with proven expertise in the areas of storage, virtualization and cloud infrastructure. Focused on vibrant and growing technology ecosystems including VMware, OpenStack and CloudStack, our Cloud Builders Partners specialize in the planning, building, implementation, and management of customers’ most strategic IT initiatives. Cloud Builders are helping customers transform their data centers to achieve the properties of adaptability, scalability, flexibility and automation that have come to define the next generation data center.

Having recognized the importance of these trusted advisors to our customers’ success, we worked hard to assemble the foundational pieces of a channel partner program meant to embrace and invest in our Cloud Builders. We could not be more excited to work with some of the strongest systems integrators, consultants and VARs across these vibrant and growing ecosystems. These partners recognize the deficiencies of legacy storage when held accountable to the properties of proper cloud design including adaptability, automation, efficiency and scale. Merging SolidFire’s innovative all-flash architecture with the expertise of our Cloud Builders partners yields a powerful combination for customers looking to accelerate the transformation of their IT infrastructure.

For more information on our Cloud Builder Partner Program, visit our website. To be connected with one of our Cloud Builders partners, please contact us.

-Dave Cahill, Director of Strategic Alliances

Production Databases in Your Cloud
Thursday, October 24, 2013 posted by Chris Merz

In Ops and DBA circles, it has been common and accepted wisdom for nearly a decade that databases are best run on dedicated, bare-metal hardware. While multi-tenant cloud infrastructure has been wonderful for application servers and utility arrays, the heart of the system, the database, has been considered off-limits. There have been a variety of very good and valid reasons for this mindset, including persistence, latency, and IOPS shortcomings that are show-stoppers for database performance. Recent forays into cloud database hosting have focused on in-memory deployments and complex "hybrid cloud" implementations that are essentially networking band-aids masking the real issue.

But today, things are changing. Compute and memory have been solved problems for a while now, but what about storage? How do we ensure that a virtual server has the IOPS, low latency, and persistence that are absolutely essential for any serious production database implementation? What can be done to ensure that shared storage resources avoid the contention issues that might impact individual production instances?

What if you could effectively virtualize the storage layer for database applications, creating a stable, safe, and durable platform to grow any database solution? Imagine if you could dynamically control the IOPS throughput for every single data volume in your system, guaranteed, in real time. Now think about how useful it could be to grow your disks in seconds, with a single API call or using a slider-bar in a user interface. Consider the implications of knowing that your storage benefited from inline compression and de-duplication, with no impact to throughput or latency. What if you never had to buy another SAN head unit, ever? What if you could scale-out your distributed storage layer by simply adding more 1U nodes to a rack?

These things are completely possible and available today to any public or private cloud implementation team. It is possible to entirely remove the kludges and workarounds necessarily employed to optimize database operations in cloud environments, while retaining top-level performance for IOPS-sensitive applications, at scale. Using a well-developed API interface, you can build Puppet and Chef patterns to completely virtualize your platform. It's a dream that many Operations teams have chased for years.
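As an illustration of the kind of API-driven automation described above, the sketch below builds a JSON-RPC request that provisions a volume with QoS limits attached. This is a hedged example only: the endpoint URL, `CreateVolume` parameter names, and QoS field names follow the JSON-RPC style SolidFire describes, but their exact shapes here are assumptions for illustration, not documented API.

```python
import json
import urllib.request

# Illustrative sketch only: method and field names below are assumptions
# modeled on a JSON-RPC storage API, not a documented interface.

def build_create_volume_request(name, account_id, size_gb,
                                min_iops, max_iops, burst_iops):
    """Build a JSON-RPC payload that provisions a volume with QoS limits."""
    return {
        "method": "CreateVolume",
        "params": {
            "name": name,
            "accountID": account_id,
            "totalSize": size_gb * 1024 ** 3,  # API assumed to take bytes
            "qos": {
                "minIOPS": min_iops,     # guaranteed performance floor
                "maxIOPS": max_iops,     # sustained ceiling
                "burstIOPS": burst_iops, # short-term burst allowance
            },
        },
        "id": 1,
    }

def send(cluster_address, payload):
    """POST the request to the cluster's JSON-RPC endpoint (hypothetical URL)."""
    req = urllib.request.Request(
        f"https://{cluster_address}/json-rpc/7.0",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    return json.load(urllib.request.urlopen(req))

# A Puppet or Chef provider would wrap a call like this one:
payload = build_create_volume_request("mysql-data", 17, 100, 1000, 3000, 5000)
print(payload["params"]["qos"]["minIOPS"])  # → 1000
```

A configuration-management resource type would simply compare desired state against the cluster's current volumes and issue calls like this to converge them.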

The importance of these efficiencies and operational cost savings cannot be overstated. IT budgets have flattened dramatically in recent years, and it's no longer acceptable to over-provision and under-utilize in order to squeeze performance out of square-peg/round-hole hardware and software solutions. You do not have to sacrifice disk space for spindles. It is no longer necessary to undertake step-wise capacity planning dictated by hardware limitations in order to scale. You can plan out your cores/memory/storage/IOPS matrices in detail, with far fewer variables than ever before. The result is a much smoother, more linear growth curve where costs remain in line with real capacity.

The time is now to challenge conventional thinking. No longer is running your databases on dedicated hardware your only option. If solving the longstanding database/storage challenges outlined above appeals to you, you really should consider the game-changing capabilities of a true next-generation, scale-out data storage solution like SolidFire. We think you will be pleasantly surprised by what you find...and we are only getting started!

Today we are kicking off an expanded presence in the database market with the announcement of our MongoDB partnership. To hear more about how SolidFire will completely change your impression of storage for MongoDB register for our webinar on November 6th. You can also read about our MongoDB benefits, best practices and more on our Database Solution page.

-Chris Merz, Senior Database Application Engineer

Verizon’s Journey To A Proper Cloud; Is this the only way?
Tuesday, October 8, 2013 posted by Dave Cahill

Almost three years after their Terremark acquisition, and a few “enterprise” clouds later, Verizon has announced the Verizon Cloud and Verizon Cloud Storage (block and object) services. This new platform is the result of significant internal investments.

Breaking away from their prior VMware ESX and vCenter-based cloud offering, the Verizon Cloud is built on the Xen hypervisor, x86 servers, Arista switches, multi-tenant all-SSD storage, and an internally designed provisioning engine with dynamic resource isolation. Commenting on the opportunity that spurred the new design, Chris Drumgoole, SVP of Global Operations at Verizon Terremark, said:

"The scale of cloud computing is going to change dramatically over the next several years, and any weaknesses in the cloud will be exposed very quickly, such as the ability to scale to tens of millions of machines. The ability to get predictable performance, scalability and reliability, these things are not mature on the cloud today, but they need to be in a world where all the apps live in the cloud."

Despite targeting the developer crowd with this offering, Verizon also ensured compatibility with its existing VMware-based cloud offering. And while some may argue that Verizon has regrettably built its way into a two cloud world, if you dig a little deeper you quickly get the impression that this is not the long-term plan. Verizon has embraced the idea of a single composable platform with resources that can be granularly tuned up or down to meet the needs of a wide-ranging application set. Talking about the functionality of the Verizon Cloud offering, Gartner's Lydia Leong remarks:

"It’s intended to provide fine-grained performance controls for the compute, network, and storage resource elements. It is also built to allow the user to select fault domains, allowing strong control of resource placement (such as “these two VMs cannot sit on the same compute hardware”); within a fault domain, workloads can be rebalanced in case of hardware failure, thus offering the kind of high availability that’s often touted in VMware-based clouds (including Terremark’s previous offerings). It is also intended to allow dynamic isolation of compute, storage, and networking components, allowing the creation of private clouds within a shared pool of hardware capacity."

So Verizon looks to have themselves a competitive cloud offering with the architectural flexibility to dynamically tune the levels of infrastructure resilience and performance to the needs of the application. But at what cost? Two years, greater than $1.4B in acquisitions, and undoubtedly tens of millions in internal investment have allowed them to assemble the team and technologies to build this cloud. In order to deliver this functionality Verizon chose to tackle some very difficult engineering problems in house. Talking about the depth of the engineering undertaking, John Considine was quoted by The Register as saying:

"Where we run into a lot of problems is when you talk about independent lifecycles for all of those components. We really had to build what we consider an integrated system to make it work. We control everything from the manufacture of the disk drives down to the firmware on the storage cards; we find it has to be tightly integrated."

Is this really the learning curve and depth of engineering investment required to deliver a true cloud? The ROI on both the initial investment and the ongoing support of a proprietary infrastructure stack requires a business that will generate billions in revenue over its life. Very few public or private clouds will ever achieve that scale. Fortunately, for those that can't afford to make these types of acquisitions or internal engineering investments, there are other options. At SolidFire we are focused on helping our customers realize the complete range of benefits of cloud computing, including scalability, performance, reliability, and consistency, utilizing a storage platform built from the ground up for large clouds. Combined with our partnerships around OpenStack, CloudStack, and fabric-based networking, SolidFire can help any service provider or enterprise deliver the full promise of the cloud.

-Dave Cahill, Director of Strategic Alliances

Best Effort-as-a-Service Just Won’t Cut It
Monday, October 7, 2013 posted by Dave Cahill

During the recent Hosting & Cloud Transformation Summit in Las Vegas we hosted our third QoS Summit roundtable, this time with Tim Stammers from 451 Research. We were joined by representatives from both service providers and large enterprise IT departments and debated the opportunity and challenges of delivering storage Quality of Service to their end users. The most interesting discussion of the evening was about the complexity of turning raw IOPS into a service that could easily be consumed by end customers and internal business units.

When it comes to discussing the merits of a storage system, IOPS get the most airplay, but IOPS alone don’t directly translate to benefits for end customers. It is not just raw performance, but rather the ability to deliver guaranteed performance, that can be clearly delivered to end users. Without the ability to guarantee a certain level of performance, all you are providing is “best effort”. The line of the evening came from one of the attendees, who summed up what you end up with in that case: “without quality of service all you are is a ‘best effort provider.’”

For any CxO looking to host production-sensitive applications in a shared infrastructure, either in the cloud or on premises, best effort simply won’t cut it. But delivering storage QoS is much easier said than done. In addition to the inability of legacy infrastructure to deliver guaranteed performance, service providers are challenged to properly package the concept of performance to end users.

End users care about applications running smoothly and how many IOPS it takes to make that happen is a foreign concept to most. As a result, exposing a bunch of IOPS options to end users is often not the answer because it causes confusion and unnecessary complexity. The more sensible starting point echoed by our QoS Summit attendees is to expose simple tiered services with different levels of IOPS bundled in. Over time these services can evolve to more granular offerings as customers gain a greater understanding of different performance levels/tiers needed for different applications.

Unfortunately, the ability to present a spectrum of services from a single storage infrastructure has not been possible previously. So what often results are basic block storage services delivered from one platform and advanced higher performance offerings from another. From an admin perspective, an application starts on a lower performance tier and movement to a higher performance offering requires all sorts of manual administration and data migration. Along the way none of this performance is actually guaranteed, it is best-effort. This hassle removes any sort of agility on the part of the service provider to confidently respond to the evolving needs of their customers. This is where the adaptability of a storage system is critical. This is where SolidFire comes in.

At SolidFire we have designed our architecture such that storage performance and capacity are composable resources that an administrator can scale up or down based on the needs of the application. With the ability to allocate and dynamically adjust IOPS and GBs at a very granular level, administrators can provision basic storage tiers to start and simply adjust them over time as the needs of the applications and end users become more apparent. With SolidFire this adjustment is just a simple API call: no downtime, no performance unpredictability, no data migration required. We believe this kind of flexibility is imperative for service providers and enterprises looking to transform IOPS into a recognizable benefit for their end users. Those around the table at our QoS Summit seemed to agree!
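The tiered-service model discussed above can be sketched as a single QoS change per volume. This is a hedged, illustrative sketch: the tier values are invented for the example, and the `ModifyVolume` field names are assumptions modeled on a JSON-RPC style API rather than published settings.

```python
# Illustrative sketch: moving a volume between performance tiers with one
# API call. Tier numbers and field names are assumptions for the example.

TIERS = {
    "bronze": {"minIOPS": 200,  "maxIOPS": 1000,  "burstIOPS": 2000},
    "silver": {"minIOPS": 1000, "maxIOPS": 5000,  "burstIOPS": 8000},
    "gold":   {"minIOPS": 5000, "maxIOPS": 15000, "burstIOPS": 20000},
}

def retier_volume(volume_id, tier):
    """Build the JSON-RPC payload that re-tiers a live volume in place:
    no downtime, no data migration, just new QoS settings."""
    return {
        "method": "ModifyVolume",
        "params": {"volumeID": volume_id, "qos": TIERS[tier]},
        "id": 1,
    }

# Promote volume 42 from whatever tier it is on to "gold":
upgrade = retier_volume(42, "gold")
print(upgrade["params"]["qos"]["minIOPS"])  # → 5000
```

The point of the sketch is the shape of the operation: because performance is provisioned separately from capacity, a tier change is a metadata update on the volume rather than a migration between storage platforms.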

For more detail on our presence at this year’s HCTS event you can watch our CEO, Dave Wright during his panel discussion, Profiting from Cloud Storage in an Era of Software-Defined Everything. The slides from the panel can be found here, while the video from the discussion is available below.

HCTS Video

-Dave Cahill, Director of Strategic Alliances

CloudPlatform 4.2: A Giant Leap Forward for Block Storage in Apache CloudStack
Tuesday, September 24, 2013 posted by Dave Cahill

In conversations with customers and prospects building out public and private clouds, you can't help but encounter Apache CloudStack as one of the primary cloud orchestration options under consideration. Looking to develop a meaningful integration with this strategic cloud building block, we dove head first into the CloudStack ecosystem about 9 months back at the Collab developer conference in Las Vegas. Our initial goal was to quickly navigate through the community to determine the current state of storage features/functions within the platform. At the same time, we were keen to understand the best ways for our storage system to integrate with, and exploit advanced storage system functionality natively through CloudStack. Fast forward to today with Citrix's release of CloudPlatform 4.2, powered by Apache CloudStack, and we are extremely excited about how far the platform has come, especially as it relates to storage.

So how did we get to this point? Back when we were still getting oriented within the Apache CloudStack ecosystem, our initial conversations exposed to us a very generic storage framework that interfaced with storage in a static manner. Moreover, there was no direct way to integrate a storage system into the management platform. Instead, administrators had to go through multiple infrastructure-configuration steps so that CloudStack could use the storage. As a one-time event, this might be tolerated; however, in cloud environments, users expect resource flexibility. Consequently, these ongoing configuration changes amounted to a significant management burden.

Recognizing these shortcomings, combined with real customer feedback, the Apache CloudStack community tackled these challenges head on in version 4.2 with a complete redesign of the primary storage framework. By converting to a pluggable architecture, vendors could create a simple driver for CloudStack to interact with their storage system directly. In addition to streamlining basic storage tasks like creating and deleting storage volumes, vendors can also expose their own unique capabilities to CloudStack. With Citrix's announcement today, this pluggable framework has been upstreamed in CloudPlatform 4.2. In the SolidFire context, this means service providers and enterprises can now provision and manage SolidFire storage directly through CloudPlatform. In addition to the configuration flexibility that this integration affords customers, another key benefit of this new release is the ability to exploit advanced storage functionality like QoS natively within CloudPlatform. This is a significant advancement compared to prior versions that required manual workarounds.

The arrival of CloudPlatform 4.2 is a major step forward for Citrix, Apache CloudStack and SolidFire. With this release SolidFire is the first storage vendor in the market to deliver a storage plug-in for CloudStack. More detail on our integration, including reference architectures and configuration guides for SolidFire, CloudStack and CloudPlatform, can be found on our solutions page. For customers looking to deploy SolidFire in a CloudPlatform environment, the integration is now significantly more straightforward and dynamic. We hope you like what you see. We look forward to hearing your feedback on not only 4.2 but also the additional features/functions we should develop for future releases.

-Dave Cahill, Director of Strategic Alliances

Distributed Storage; Picking The Right Tool For The Job
Monday, September 16, 2013 posted by John Griffith

The distributed storage myth that CloudScaling's Randy Bias alludes to in his recent whitepaper is primarily perpetuated by marketing departments eager to reach too far outside the original use case of their system architecture. The myth that Bias is referring to is the idea that a single pooled storage system can capably consolidate all three tiers of storage. While you can certainly have distributed storage architectures for each tier, the expectation that a single architecture can capably span all tiers will likely end badly. In this respect we agree with Randy's position that one size doesn't fit all and you need to choose the right tool for the job.

We're big fans of open source at SolidFire, both using and contributing to many projects including OpenStack and CloudStack. However, any discussion about distributed storage solutions for the cloud should include commercial options as well. In the case of cloud storage for performance-sensitive applications, the options provided by open source as well as legacy storage vendors are significantly lacking.

There are very few production-quality distributed storage systems available today. Popular open source storage solutions like Ceph and Gluster were architected for capacity-optimized storage use cases such as file servers and unstructured data. When it comes to performance-optimized workloads, however, these solutions were simply not built with this use case in mind. To help identify, as Randy puts it, "the right tool for the job", we have created a list of key considerations for anyone evaluating performance-optimized distributed storage for cloud infrastructures:

  • Consistent Performance - Tier 1 applications generally expect consistent latency and throughput from storage systems. Achieving this in a multi-tenant legacy storage system is challenging enough, but in a complex distributed system it becomes an even larger problem.
  • Performance Control - Without the ability to provision performance separately from capacity and dial in performance allocations for each application, a storage system will quickly be dominated by the "noisiest neighbor", starving more critical applications of resources.
  • Data Reduction - By definition, Tier 1 storage is going to utilize faster, more expensive media - either fast disk or preferably SSDs. Inline deduplication and compression, without impacting performance, are critical for making the system cost effective and achieving maximum density in a cloud environment.
  • Manageability - APIs are an often overlooked component of block storage in cloud environments. A robust API that lends itself to automation of all aspects of the storage system is imperative to achieving the promised operational benefits of cloud.
  • Professional Testing & Support - Tier 1 applications are called mission critical for a reason. Ensuring the storage hardware and software you use is thoroughly tested and supported helps minimize the downtime and errors encountered when these platforms are deployed in production environments.
  • Qualified Hardware - Consuming storage in an appliance form factor has real, measurable benefits. Vendors bear the burden of ongoing qualification of the hardware and software while providing a single resource for support without finger pointing. Firmware bugs in commodity storage controllers and drives are a very real problem, and system vendors are in the best position to identify and correct or work-around these issues. Why resource an effort so far outside of your core competence when your vendor will aggressively ride the hardware cost curve for you?
  • Flash Aware - With cost declining at a rapid pace, flash is now appropriate for a large percentage of Tier 1 use cases, particularly when combined with data reduction. However, plugging SSDs into a storage system designed around disk is a recipe for problems. Disk-based architectures can't deliver the maximum IOPS from flash, while wear and endurance are real concerns due to write amplification. Only native flash storage architectures can deliver both the performance and endurance required for Tier 1 applications.

In crafting this list we decided not to tackle the most commonly assumed traits of distributed storage systems: availability and scalability. These traits should be viewed as table stakes in any tier of storage, but still thoroughly vetted. Instead, we focused on some of the key attributes unique to Tier 1 storage that are seldom delivered by capacity-optimized systems. After reading through the list it is clear that certain tools, while good for other things, simply weren't intended for performance-optimized use cases in your cloud storage infrastructure.

-John Griffith, Senior Software Engineer & Cinder PTL

Are we destined for a “two cloud world”?
Tuesday, September 10, 2013 posted by Dave Cahill

Lydia Leong from Gartner wrote a great blog post recently about her belief that the cloud service provider market should not evolve to a "world of two clouds". What she is referring to here is the idea that providers are heading down the path of building different clouds to accommodate enterprise and "cloud-native" workloads respectively. Lydia's stance is that this is the wrong approach:

I do not believe in a "world of two clouds", where there are cloud IaaS offerings that are targeted at enterprise workloads, and there are cloud IaaS offerings that are targeted at cloud-native workloads - broadly, different clouds for applications designed with the assumption of infrastructure resilience, versus applications designed with the assumption that resilience must reside at the application layer.

From our view of the world at SolidFire, the underlying problem here is that service providers are saddled with legacy hardware infrastructure that doesn't allow them to simultaneously serve two masters: IT operations and developers. Lacking a dynamic and resilient infrastructure that can accommodate both "cloud-native" and legacy enterprise applications, service providers end up with multiple clouds. While different clouds for each use case would seem to address the problem tactically, this probably isn't what customers want either. Lydia writes:

There's no need to build two clouds; in fact, customers actively do not want two different clouds, since nobody really wants to shift between different clouds as you go through an application's lifecycle, or for different tiers of an app, some of which might need greater infrastructure resilience and guaranteed performance.

Instead of building different clouds or waiting for enterprise applications to be rewritten for cloud, service providers need to focus on designing their clouds with the architectural flexibility to dynamically tune the levels of infrastructure resilience (i.e. availability) and performance to the needs of the application. While "cloud-native" applications build in this resilience from the outset, we can't possibly expect applications on a broader scale to account for the limitations of the underlying infrastructure. Instead, cloud infrastructure needs to be dynamic and fluid enough to respond to the disparate needs of different applications. To avoid a two cloud world, this burden of resilience and performance consistency needs to fall on the infrastructure as much as, if not more than, the applications.

But lacking this flexibility in their infrastructure today, service providers are stuck with no choice but to head down the two cloud path. According to Lydia, this is the path a lot of service providers have already chosen:

Now, there are tons of service providers out there building to that world of two clouds - often rooted in the belief that IT operations will want one thing, and developers another, and they should build something totally different for both. This is almost certainly a losing strategy.

At SolidFire we are focused on delivering our service provider customers the architectural flexibility they need at the storage layer to mold their infrastructure to the needs of the application. This allows them to appease both the operations and developer camps without having to build two clouds in the process. With solutions like SolidFire helping service providers avoid this multi-cloud approach, it is hard to argue with Lydia's view that a single cloud is the more optimal path. She closes her blog with this exact sentiment:

Winning providers will satisfy both needs within a single cloud, offering architectural flexibility that allows developers to decide whether or not they want to build for application resiliency or infrastructure resiliency.

-Dave Cahill, Director of Strategic Alliances

VMworld 2013: Above the Crowd (Literally & Figuratively)
Wednesday, September 4, 2013 posted by Dave Cahill

With the VMworld exhibit hall overflowing with flash vendors, there is no question that the flash storage movement has a lot of buzz. But flash by itself is a means to an end, and as the core component of competitive differentiation it is likely unsustainable. Tactically, however, there is a small window of opportunity to continue to differentiate with an IOPS-centric story. This probably helps explain why so much money is being spent at conferences like VMworld to stand out from the sea of vendors pitching slightly different versions of the same thing.

At SolidFire, flash is certainly a part of what we do, but it barely scratches the surface of the value our system delivers to customers. The more enduring theme that we have thrown all our weight behind is the architecture of a storage system designed to embrace the movement from siloed to shared IT infrastructure. The conversations we had last week at VMworld continue to reinforce our belief that customers are focused on this very same trend.

The SolidFire Lounge was across the street from the conference on the 31st floor of the W Hotel... above the crowd, literally. The venue provided an opportunity to break away from the conference madness and engage customers and prospects in more strategic conversations and deep technical demonstrations. Few of these discussions revolved around flash; instead they focused largely on the following topics:

  • The challenges of consolidating disparate applications onto a highly available shared storage platform
  • Eliminating resource contention and complex workarounds to deliver consistent and guaranteed application performance
  • How to entirely remove the complexities and inefficiencies that come with traditional storage provisioning to drive maximum virtual machine density in minimum footprint
  • Exploring the depth of management stack integrations (e.g. VMware, OpenStack, CloudStack) that form the foundation of customers' most strategic IT initiatives.

As virtualized environments scale, and as cloud design principles increasingly work their way into enterprise IT environments, the need for more scalable, automated and economical storage platforms is too pressing to ignore. Our interactions continue to confirm corporate IT's movement toward shared infrastructure, and that the challenges ahead are not going away anytime soon.

Watch this video to get a glimpse into the SolidFire Lounge in the W Hotel:

VMWorld Reel
By the way, if you missed us at VMworld you can still sign up for a demo of our VMware VVOLs, OpenStack or CloudStack integrations. Alternatively, you can try to catch us at the Hosting & Cloud Transformation Summit in late September in Las Vegas.

-Dave Cahill, Director of Strategic Alliances

More Fuel for the Fire
Thursday, July 25, 2013 posted by Dave Wright

July here in the states is known for fireworks, and at SolidFire we have just set off a whole bunch of them. We lit the fuse a few weeks ago with the announcement that Colt, one of the largest service providers in Europe, has standardized on SolidFire for multi-tenant primary storage across multiple tiers of performance and capacity. SolidFire's ability to guarantee performance to thousands of applications simultaneously, along with our industry leading scale-out and automation capabilities, made it the clear choice over both incumbent disk architectures and other flash storage systems. Our radically disruptive architecture is quickly being recognized as the de facto standard for block storage in public and private clouds around the world. And we are not resting on this early success.

Today we have taken the wraps off the SF9010, the largest and fastest all-SSD storage system on the market. Just how big is it? A base cluster configuration of 5-nodes provides over 173TB of effective capacity, larger than most all-flash storage systems on the market. A modest 30 node cluster offers over 1PB of capacity, while a fully-scaled 100 node system tops out at over 3.4PB of effective capacity - more than an EMC VMAX 40K filled with 3TB spinning disks. The performance density is equally impressive, with over 3 million IOPS in a single 40U rack and up to 7.5 million IOPS in a 100 node cluster. These scale and density metrics put SolidFire in a league of its own. But here's where it really gets interesting. All that performance and capacity, the simplicity and efficiency of a true scale-out design and the only Guaranteed QoS on the market today is now available at less than $3/GB effective capacity. That's not only significantly below every other flash based system on the market today, it's less than most performance disk and hybrid systems as well.
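The scale figures above are internally consistent and can be sanity-checked from the per-node values they imply. The sketch below does that arithmetic; note these are the post's own marketing numbers (effective capacity assumes the stated inline data reduction), not independent measurements, and the per-node derivation is our assumption.

```python
# Back-of-the-envelope check of the SF9010 scaling figures quoted above.
# Per-node values are derived from the post: 173TB effective across a
# 5-node base cluster, and 7.5M IOPS across a 100-node cluster of 1U nodes.

TB_PER_NODE = 173 / 5            # ~34.6TB effective capacity per node
IOPS_PER_NODE = 7_500_000 / 100  # 75K IOPS per node

def cluster(nodes):
    """Effective capacity (TB) and IOPS for a cluster of the given size."""
    return {"tb": nodes * TB_PER_NODE, "iops": nodes * IOPS_PER_NODE}

print(round(cluster(30)["tb"]))   # 30 nodes: "over 1PB" (1038 TB)
print(round(cluster(100)["tb"]))  # 100 nodes: "over 3.4PB" (3460 TB)
print(int(cluster(40)["iops"]))   # 40 x 1U nodes (one 40U rack): 3,000,000 IOPS
```

Each printed value lines up with the corresponding claim in the paragraph above, which is a useful property of a true scale-out design: capacity and performance grow linearly with node count.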

Along with the SF9010 introduction we are also releasing SolidFire Element 5, Boron, our latest software update that adds VMware VAAI and VASA support, full encryption-at-rest without performance impact, and more detailed per-volume and per-tenant performance reporting. SolidFire's SF3010, SF6010, and now SF9010 give customers the option to start small and scale capacity and performance non-disruptively and in line with increasing application demand.

Last, but certainly not least, I am very pleased to announce the addition of Samsung as a strategic partner and investor. Samsung Ventures is leading our $31 million Series C funding round. Samsung's latest 960GB MLC datacenter-class drives are at the core of the SF9010, providing the raw performance and capacity we need to set new industry benchmarks, while leveraging the latest flash fabrication processes to drive down cost. This combination allows SolidFire to deliver what pundits claimed was years away: all-solid-state storage below the cost of disk. With our new funding, new customers, and new system and software, SolidFire's summer is off to a spectacular start.

-Dave Wright, Founder & CEO

Changing the thinking on storage for MongoDB; one conversation at a time
Thursday, July 18, 2013 posted by Chris Merz

My recent trip to the MongoDB conference in NYC provided me with the opportunity to take the temperature of the community with regard to current trends in storage solutions and strategies being employed by practitioners. One theme that emerged time and time again was that of poor public cloud disk performance, and the necessity to employ workarounds such as 'hybrid cloud', EBS striping, and shard-to-RAM-only.

I started out the day listening to members of a major global investment bank discuss the challenges in provisioning private cloud 'shapes' that work well for MongoDB. In their case, they are married to commodity hardware, but have had to make a lot of compromises and introduce resource inefficiencies to match storage to the IOPS needed to run Mongo (and databases in general) successfully. The ratio of storage to cores to memory was of particular concern to them when planning out their cloud, and this is a common theme I've heard elsewhere. One size just doesn't fit all.

This same investment bank stated, as fact, that you are going to overprovision something, no matter what, when attempting to build a cloud suitable for database usage. While this may be true when using legacy storage systems this is simply not true in a cloud backed by a QoS architecture like SolidFire. Unfortunately we see this mindset prevailing in NoSQL conversations today more often than not. We clearly have some work to do.

In another interesting presentation, a MongoDB service provider discussed how they provision dedicated hardware for running MongoDB. This configuration integrates via cross-connect into a public cloud. This is a perfect example of what some are touting as The Hybrid Cloud. But in reality, it is a very complex workaround involving high-speed network cross-connects from dedicated datacenters that are physically adjacent to large cloud provider locations. The main reason for absorbing the complexity came down to IOPS. Specifically, the potential for getting 'stuck' in sharded clusters by running out of physical IOPS, to the point where the sharding process breaks down, was deemed the biggest threat to smooth operations. This IOPS starvation would effectively kill the cluster. The inability to dynamically provision more IOPS from their existing storage platform was the biggest driver of this problem.

In a panic, with most storage systems you simply can't add more IOPS. Sure, you can throw RAM at the problem, but at the end of the day I/O always matters. Almost all of the caveats and concerns raised revolved around the limitations of static IOPS availability. The recommended solution: leave 25% of disk capacity free at all times to avoid getting the sharded cluster wedged. They also recommended avoiding VMs due to I/O concerns. The former recommendation amounts to a lot of wasted overhead that can be alleviated by dynamic volume growth options. The latter again comes down to using the right QoS-centric storage architecture. With SolidFire, this story would have a very different ending.
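As a rough illustration of the difference between static headroom and dynamic growth, consider the following sketch. The function names and thresholds here are purely hypothetical, not any vendor's API; the point is that capacity reserved "just in case" sits idle, while capacity grown on demand does not:

```python
def static_reserve(capacity_gb, reserve_pct=25):
    """Usable capacity when a fixed percentage must stay free at all times."""
    return capacity_gb * (100 - reserve_pct) / 100

def plan_growth(used_gb, size_gb, grow_trigger_pct=90, grow_step_gb=100):
    """Return the new volume size if utilization crosses the trigger,
    otherwise the current size. Capacity is added only when needed,
    rather than idling as permanent headroom."""
    if used_gb / size_gb * 100 >= grow_trigger_pct:
        return size_gb + grow_step_gb
    return size_gb

# A 1 TB volume with a static 25% reserve gives up 250 GB from day one...
print(static_reserve(1000))                     # 750.0 GB usable
# ...while dynamic growth keeps the full 1000 GB usable and only
# expands the volume once it crosses 90% full.
print(plan_growth(used_gb=920, size_gb=1000))   # 1100
print(plan_growth(used_gb=500, size_gb=1000))   # 1000
```

The static approach pays the 25% tax whether or not the cluster ever approaches its limits; the dynamic approach defers that cost until utilization actually demands it.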

In another presentation, another leading service provider compared their internal public cloud offerings against their bare-metal (SSD and 15k) systems. Suffice it to say, the results were unimpressive. The SSD offerings showed only slight improvement over the 15k offerings. As expected, latency variance was much lower on dedicated hardware than in the public cloud. The presenter attributed the shortcomings to shared storage resources, multi-tenant woes, inconsistent performance, the failure of bursting technology, and a lack of real I/O. In their summary, public cloud was a non-starter.

Overall, it was a very insightful conference. SolidFire has a very compelling solution at the right time for this community. They may not know it yet, but this ecosystem and its expanding user base will benefit greatly from next-generation storage options that can deliver the best of both worlds: dedicated performance and shared storage economics. We look forward to proving this out in the coming weeks and months.

-Chris Merz, Senior Database Application Engineer

Colt sets the bar and delivers shared storage service with guaranteed performance
Tuesday, July 16, 2013 posted by Dave Wright

Colt Technology Services recently announced their decision to deliver a broad set of shared storage services fueled by SolidFire's all-SSD storage technology.  Standardizing on SolidFire was not a decision taken lightly. The team at Colt reviewed over 20 different storage vendors, several of which had multi-year and multi-million dollar relationships with the company.

Colt embarked on this project with a vision of offering the flexibility of five different tiers of storage within their shared service offering. One approach they considered was utilizing a different storage platform for each level of performance. Alternatively, they could have chosen a tiered platform that allowed for migration between disk types. The market is certainly full of solutions like these. However, Colt had additional requirements that made it necessary to evaluate other solutions. Rather than fall in line with what everyone else was doing, Colt wanted to differentiate their enterprise cloud infrastructure. They wanted to deliver a set of scalable storage services with fine-grained performance control that could be guaranteed to each of their customers - and they needed to do it at massive scale.

This is where SolidFire stood above the crowd. Only SolidFire gave Colt all-SSD performance combined with the fine-grained Quality of Service (QoS) control that allowed them to support Tiers 0, 1, 2, and 3 - all from a single storage platform. In addition, the team at Colt placed immense value on the ability to integrate SolidFire system functionality into their acclaimed customer management portal. Use of the SolidFire REST-based API simplifies storage provisioning for both Colt and their enterprise customers. Customers can shift data workloads between storage service tiers instantly and without data migration - while billing for the changes happens automatically.
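To make that instant, migration-free tier shift concrete, here is a hypothetical sketch of what such an API request could look like. The method name, QoS fields, and tier numbers below are illustrative assumptions loosely modeled on a JSON-style volume-modify call, not a transcription of the actual SolidFire API:

```python
# Hypothetical QoS envelopes per service tier (illustrative numbers only).
TIERS = {
    "tier1": {"minIOPS": 2000, "maxIOPS": 8000, "burstIOPS": 15000},
    "tier2": {"minIOPS": 500,  "maxIOPS": 2000, "burstIOPS": 4000},
    "tier3": {"minIOPS": 100,  "maxIOPS": 500,  "burstIOPS": 1000},
}

def build_tier_change(volume_id, tier):
    """Build a request that re-tiers a volume by rewriting its QoS
    settings. No data is copied or migrated; only the volume's
    performance envelope changes."""
    return {
        "method": "ModifyVolume",
        "params": {"volumeID": volume_id, "qos": TIERS[tier]},
    }

req = build_tier_change(volume_id=42, tier="tier1")
# The request would then be POSTed to the array's management endpoint,
# e.g. requests.post(cluster_url, json=req, auth=(user, password)).
print(req["params"]["qos"]["minIOPS"])  # 2000
```

Because the tier change is just a new QoS envelope on the same volume, no blocks move, and the operation can complete - and be billed - immediately.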

Colt clearly sees disruptive value in being able to offer services that competitors like Telefonica, Rackspace, SoftLayer, and others can't deliver. But more importantly, Colt understands that the ability to deliver consistent application performance will allow them to increase the number and type of enterprise applications that can be hosted within their cloud infrastructure. By combining the performance characteristics of dedicated storage with the economic and efficiency benefits of shared infrastructure, both Colt and its customers gain more performant and economical storage services.

So where to from here? Colt is rolling out SolidFire across seven of their European datacenters and will phase out legacy storage systems as they age.  SolidFire will also form the foundation upon which Colt will build value added services such as SaaS, PaaS and VDI as they continue to stay ahead of the competition and meet increasing customer demand for enterprise services.

Read the complete case study and watch the video at

-Dave Wright, Founder & CEO

Bridging The Gap
Monday, July 15, 2013 posted by Dave Cahill

The following is an excerpt from the recently published SolidFire whitepaper, Beyond Test & Development: What virtualization can teach us about the next phase of cloud computing. This is the sixth and final post in a six-part series published each Monday. The whitepaper in its entirety can be found here.

While the x86 server virtualization market evolution has presented us with an intriguing template for what could lie ahead in the cloud computing market, it is not yet concluded that things will materialize in a similar fashion. Certainly the early parallels between the test and development phases of these markets are too compelling to ignore. But continued innovation needs to occur across the cloud ecosystem to yield similar growth rates versus what was experienced in the production era of server virtualization.
In a recent whitepaper, Citrix accurately captured the current state of the market, reminding service providers of the gap that must be bridged between early adopter use cases and the production application opportunity that lies in front of us:

"To fully seize the cloud computing opportunity, service providers must be prepared for shifts in industry adoption and application readiness as the market matures. While startups and Web 2.0 use cases constituted much of the earliest adoption of cloud computing, recent growth in IaaS has been dominated by enterprise adoption of cloud services for a wide variety of production applications. This trend is expected to continue. Whereas Internet companies, the earliest users of cloud computing, were aggressive in their adoption of new technologies and trends-including mobile and social applications, REST-based web services and NoSQL databases-the vast majority of enterprise applications do not yet embrace cloud era architectural principles. Rather, these traditional workloads, such as SAP ERP, Oracle database apps and Microsoft® Exchange, are based on n-tier application architectures that predate the cloud."

The unspoken reality underneath Citrix's application-centric conclusions is that most of the cloud infrastructure isn't suited to address the requirements of enterprise (i.e. precloud) applications. A possible solution to this problem is to rewrite legacy applications to embrace "cloud era architectural principles." Unfortunately, enterprises often find it difficult to justify the time and investment behind this undertaking. Even if they did, this type of movement would take years to reach any sort of critical mass, certainly longer than the market is willing to wait. Consequently, expediting the migration of these applications from on-premise to cloud is a burden that should be placed more on the supporting application infrastructure (i.e. the underlying hardware and software that support these workloads) rather than the applications themselves.
Of course, buying into this scenario requires two tightly coupled assumptions:

  1. The capabilities of the underlying application infrastructure can evolve faster than the pace at which these applications could be rewritten.
  2. Cloud service providers will deliver an infrastructure that strikes a balance between the performance, resource isolation, and availability requirements that are the lifeblood of enterprise applications, and the multi-tenancy and commodity infrastructure traits of a profitable cloud infrastructure.

Up for grabs

IT departments are increasingly subjecting applications to "cloud first" scrutiny, similar to the "virtualization first" mandate that helped drive the majority of incremental workloads to virtualized environments. With this type of adoption stimulant in place, the opportunity for cloud providers to help facilitate the production era of cloud computing is there for the taking. For the market to have any chance of realizing its full potential, service providers must be able to confidently, and economically, address the more stringent demands of mission- and business-critical applications. In the server virtualization market, VMware answered the bell. In the cloud computing market, the time is now for service providers to do the same.

-Dave Cahill, Director of Strategic Alliances

Inside The Firewall
Monday, July 8, 2013 posted by Dave Cahill

The following is an excerpt from the recently published SolidFire whitepaper, Beyond Test & Development: What virtualization can teach us about the next phase of cloud computing. This is the fifth post in a six-part series that will be published each Monday. The whitepaper in its entirety can be found here.

The x86 virtualization movement began and succeeded almost entirely within on-premise data centers that favored quick enterprise adoption. In contrast, public cloud adoption has gained success outside of the enterprise firewall through successful delivery of low-cost, on-demand compute and storage resources. There are many stories of application owners testing and deploying applications outside of the corporate firewall due to the nimbleness of on-demand cloud infrastructure.
But for many CIOs, data privacy, data residency, and compliance requirements will ensure that certain enterprise applications remain on premise for some time. This doesn't mean the infrastructure hosting these applications will stay the same. In fact, due to the transparent nature of public cloud computing pricing, the cat is already out of the bag. Public cloud pricing transparency has set an aggressive standard that all IT departments will be benchmarked against, whether they like it or not. Today's IT departments that depend on costly and complex legacy IT infrastructure will most certainly struggle to keep up.
In the most public use case of enterprise cloud computing to date, Bechtel CIO Geir Ramleth took matters into his own hands. Unwilling to settle for the cost burden imposed by legacy infrastructure, and lacking suitable SaaS options, Ramleth challenged his IT department to benchmark against best-in-class cloud computing services.
Summarizing how he and his team came to this decision at Bechtel, he remarked:

"We operate as a service provider to a set of customers that are our own [construction] projects. Until we can find business applications and SaaS models for our industry, we will have to do it ourselves, but we would like to operate with the same thinking and operating models as [SaaS providers] do."

Unwilling to relinquish all of their most sensitive applications to the public cloud today, CIOs everywhere will want to figure out how to get cloud-like economics inside their firewalls. The most forward-thinking of the bunch, despite the enterprise logos on their front doors, are running their IT departments like service providers. As part of this movement, for those applications not suited for public cloud, IT executives will undoubtedly look to incorporate cloud design principles into their on-premise IT strategies.

In our next post, slated for July 15th, we will conclude this blog series by looking at some of the gaps that must be bridged for the market to advance beyond early adopter use cases and capture the production application opportunity that lies in front of us.

-Dave Cahill, Director of Strategic Alliances

Amazon & The Enterprise
Monday, July 1, 2013 posted by Dave Cahill

The following is an excerpt from the recently published SolidFire whitepaper, Beyond Test & Development: What virtualization can teach us about the next phase of cloud computing. This is the fourth post in a six-part series that will be published each Monday. The whitepaper in its entirety can be found here.

Beyond the opportunity at stake, perhaps the more interesting question is whether or not Amazon can maintain its stronghold on the market through the transition to the production era. VMware was able to nimbly expand its reach from predominantly test and development to production. Can AWS successfully navigate this transition point in the cloud infrastructure market with the same elegance?

As the undisputed leader of the test and development era, AWS is starting from a position of strength as we enter the next phase of the cloud market growth. The bad news for competing service providers is that Amazon is churning out new features and services weekly to accommodate the requirements of increasingly more demanding workloads. The introductions of DynamoDB, Provisioned IOPS, Glacier, and high-capacity instances throughout the last year are all examples of this effort. The good news for other cloud providers is that enterprises are less likely to relinquish all of their data, computing and storage to one single vendor in the same manner they allowed VMware to virtualize all of its x86 workloads on premise. Risk aversion and varying IT environment complexity will dictate a more diverse range of cloud providers servicing each enterprise.

But cloud providers are not competing only with Amazon and each other for the right to host these workloads. In fact, probably the biggest threat to public cloud adoption is the incumbent provider: on-premise IT. Despite the increasing "cloud first" mandate proliferating across IT departments, cloud providers must prove capable of catering to the performance, reliability, and privacy demands of business- and mission-critical applications. If enterprises lack the confidence to entrust cloud providers with their most performance-sensitive workloads, then these applications will remain on-premise.

In our next post in the series, to be released July 8th, we will continue to examine some of the key challenges that Amazon and other service providers must address to ensure the cloud computing market can continue on its current growth trajectory.

-Dave Cahill, Director of Strategic Alliances

Be Prepared: Next Up, QoS-as-a-Service
Monday, June 24, 2013 posted by Antonio Piraino, ScienceLogic

Last Tuesday evening, we co-hosted the QoS Summit during HostingCon with ScienceLogic and Structure Research. Over twenty cloud provider executives attended this roundtable to discuss key challenges and opportunities to deliver guaranteed performance in the cloud. Below is an excellent recap written by Antonio Piraino, CTO of ScienceLogic. Please see the original blog post here.

- Jay Prassl, SolidFire

As the cloud enters its next stage of growth, most of us are not asking why and how we should run applications in the cloud. Now, we are looking to optimize and guarantee performance to meet service obligations and give the best possible service.

What has become increasingly clear over the past few years is that this cloud phenomenon, especially as it pertains to the oncoming wave of enterprise consumers, is ultimately dominated by IOPS as a proxy for Quality of Service (QoS) expectations in the cloud. This prompted us to join SolidFire and Structure Research in co-hosting a first-of-its-kind QoS Summit with a select group of hosting executives, in the midst of the annual HostingCon event being held in Austin, Texas. The discussion was a lively one, but there was consensus around the need for a cost-effective alternative to traditional and somewhat limited storage mechanisms, and for the ability to control, visualize, and manage the IOPS associated with the dynamic workloads attributable to cloud services.


Is your noisy neighbor keeping you awake at night?

The noisy neighbor problem stems from the fact that the cloud in general is built on shared resources: bandwidth, operating systems, CPU, memory, and storage. A number of technologies already help service providers gain control over the network (e.g. when MPLS was brought to market), and the partitioning of CPU and RAM is relatively simple with containerized/virtual technologies. (Memory virtualization helped in overcoming physical memory limitations.) However, the disk subsystem is one that remains extremely difficult to partition. That means that invariably, as client workloads grow more demanding, some virtual machines on the physical host consume very large amounts of disk I/O, resulting in very poor performance for their neighboring virtual machines.

While bigger arrays are helping, the neighbors are getting noisier. There is a need, therefore, to control the allocation of storage and the associated QoS per workload. Service-level expectations are coming, and while SMBs haven't yet defined their IOPS requirements, large enterprises are showing savvy in expecting performance from their multi-tenant service provider on par with their traditional DAS solutions. The power of the modern solutions on offer from SSD providers such as Intel and hot startup SolidFire is in knowing that you can guarantee a certain number of IOPS on each volume, pairing that guarantee with the compute resources the elastic cloud platform allocates to each workload.
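A guaranteed minimum is only meaningful if the system refuses to promise more than it can deliver. The toy admission-control model below is entirely illustrative - real arrays implement this bookkeeping internally - but it shows why per-volume guarantees solve the noisy neighbor problem where best-effort sharing cannot:

```python
class QoSScheduler:
    """Toy admission-control model for per-volume IOPS guarantees.
    A minimum can be promised only while the sum of all minimums
    stays within the IOPS the cluster can actually deliver."""

    def __init__(self, cluster_iops):
        self.cluster_iops = cluster_iops
        self.guarantees = {}  # volume name -> guaranteed minimum IOPS

    def reserve(self, volume, min_iops):
        committed = sum(self.guarantees.values())
        if committed + min_iops > self.cluster_iops:
            # Refusing new guarantees is what keeps existing ones real.
            return False
        self.guarantees[volume] = min_iops
        return True

sched = QoSScheduler(cluster_iops=50000)
print(sched.reserve("db-prod", 20000))    # True
print(sched.reserve("analytics", 25000))  # True
print(sched.reserve("noisy-app", 10000))  # False: would overcommit
```

The noisy neighbor can still burst into whatever headroom exists, but it can never eat into another tenant's reserved floor.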


So how does one go about managing and displaying QoS to your constituents? That's where tools like the ScienceLogic PowerPack (read: an app-store-esque app), recently created to instrument against the SolidFire API, become useful for visualizing those now-controllable IOPS.

At the same HostingCon event, I moderated a session with a variety of hosters that had kicked the tires on SolidFire, including Codero, Crucial Cloud Hosting, and SoftLayer. The draw behind a company like SolidFire became obvious given the similarities in their cloud offerings. These companies had effectively started out their cloud offerings on bare metal servers backed by a SAN. The next evolution was to offer cloud services on local storage to reduce the impact on the SAN. Ultimately they were seeking local-disk performance, since disk I/O is where their customers' first problems bubbled to the top.

But a single-server or bare metal model is no longer a cloud offering, and the alternatives that involve striping across numerous resources to achieve dedicated IOPS quickly become expensive. Moving away from the traditional allocations of spindles and mechanics to technologies that can de-dupe, compress, and allocate volumes on the fly is not only impressive but necessary. Add to this the idea that usable space on a cluster can actually be higher than the raw space on the drives (strange, but true, once deduplication and compression do their work), and suddenly modern technologies like SolidFire's become an affordable way to offer QoS as a service.


Guest blog by Antonio Piraino, ScienceLogic

Entering the production era of cloud computing
Wednesday, June 26, 2013 posted by Dave Cahill

The following is an excerpt from the recently published SolidFire whitepaper, Beyond Test & Development: What virtualization can teach us about the next phase of cloud computing. This is the third post in a six-part series that will be published each Monday. The whitepaper in its entirety can be found here.

Seven years into cloud market adoption, Amazon Web Services finds itself in a position similar to VMware's after its first seven years in the x86 virtualization market. The next phase of growth in cloud computing will depend on the ability of cloud service providers to engineer a VMware-like market transition from primarily test and development workloads to production applications. However, no single service provider, not even AWS, can bridge this transition alone. The commitment and innovation across the cloud ecosystem must at least parallel that demonstrated by the ISV and hardware vendor community in support of VMware.

Vendors are already stepping up to support the cloud's advancement by building in features to support multi-tenancy and self-service via automation, and leveraging new architectures to drive increased scale. Similarly, Amazon continues to add services to support production environments, including Route 53, VPCs, Direct Connect, and traditional relational database certifications. If the revenue growth realized by VMware from successfully navigating their transition is any indication, the potential returns to be had from investing in the production era of cloud computing will more than offset the upfront investments.

Figure 4 plots VMware's actual license revenue growth over the ten-year period from '02 (YR1) to '12 (YR11). This data effectively captures both the test and development and production phases of the server virtualization market. Overlaid against this data set, with time-scale normalized at Year Zero, is the early cloud computing growth trajectory in its first seven years (using AWS S3 objects as the proxy).

Missing from this graph, as the history is still to be written, is what happens in the next phase of cloud computing. AWS recently announced that, only one-third of the way through 2013, the total of S3 objects stored had eclipsed the two trillion mark. If this pace continues, cloud adoption will show a significantly accelerated growth trajectory relative to the same period of x86 virtualization adoption by the end of 2013. Against this historical context, cloud computing's initial adoption trajectory has been very impressive, especially considering we are now just scratching the surface of the opportunity for production applications.


Where to from here?

If the adoption profile witnessed in the x86 virtualization era is even remotely close to what we are now seeing in the cloud market, then the production era of cloud is upon us. The implication here is that real market growth in the future will be dependent upon the continued shift of both green-field and legacy production applications away from infrastructure silos of the past to the shared multi-tenant infrastructure profile that now defines cloud. However, to run properly in a cloud infrastructure, these production applications come with an entirely different set of infrastructure dependencies and service-level expectations compared to test and development and backup/archiving workloads. Concerns regarding application availability, performance variability, and scale all take on a completely different complexion when the discussion revolves around production applications.

In our next post in the series, slated for July 1st, we will touch on some of the different challenges faced by service providers as they navigate the transition to the production era of cloud computing.

-Dave Cahill, Director of Strategic Alliances


The Test and Development Era In The Cloud
posted by Dave Cahill

The following is an excerpt from the recently published SolidFire whitepaper, Beyond Test & Development: What virtualization can teach us about the next phase of cloud computing. This is the second post in a six-part series that will be published each Monday. The whitepaper in its entirety can be found here.

 In November 2012, almost seven years into the adoption of its Simple Storage Service (S3), AWS announced that greater than 1.3 trillion objects had been stored in the S3 service. The incredible growth over this period, as shown in Figure 2, was driven primarily by the use of S3 as the data repository for test and development, backup, and archive workloads.


Given AWS' initial market dominance, the number of objects stored in their S3 repository serves as the most accurate proxy for growth and adoption in this early phase of the cloud market. In the same vein, the x86 virtualization market was initially dominated by VMware's success in test and development. Consequently, VMware's software license revenue through this period from '02-'08 serves as the most accurate measure of early x86 server virtualization market growth. Overlaying the growth trajectories (See Figure 3) from the initial ramp of the cloud and virtualization platforms in these early years shows a remarkably similar early adoption profile. Notably in this comparison graph, the cloud growth proxy (S3 objects) actually eclipsed the x86 virtualization growth proxy (VMware License Revenue) at the end of Year Seven.


Of course the x86 virtualization market has since played out well beyond what is captured in Figure 3. Leveraging its leadership position in the early phase of the market, VMware proceeded to capture the opportunity for virtualizing production workloads in the datacenter as well. By 2010, according to IDC, more new server applications were deployed as virtual machines than on individual physical servers[5]. Meanwhile, in the cloud computing market, we are just now embarking on the production era with market growth, as reflected in Figure 3, appearing to accelerate even faster than what we witnessed with x86 virtualization.

In our next post in the series, timed for 6/24, we will look at the potential growth trajectory for cloud computing in the production era relative to what we witnessed from x86 virtualization as it navigated a similar transition.

  -Dave Cahill, Director of Strategic Alliances

Back to the Future
Monday, June 10, 2013 posted by Dave Cahill

The following is an excerpt from the recently published SolidFire whitepaper, Beyond Test & Development: What virtualization can teach us about the next phase of cloud computing. This is the first post in a six-part series that will be published every Monday. The whitepaper in its entirety can be found here.

The current cloud revolution shares a number of parallels with the virtualization movement that transformed the enterprise datacenter in the last decade. At the core of both tectonic shifts are clear platform disruptions spurred by Amazon Web Services (AWS) and VMware, respectively. Similar to VMware's early lead in the x86 virtualization market, AWS has shown early dominance in the public cloud market. At the heart of both movements, the predominant early adopter use case has been test and development environments.

Recognizing the similarities in the initial adoption profile across these two markets caused us to take a closer look at the evolution of the server virtualization market to see what else we might learn. It turns out that a deeper examination of how this market evolved can help provide greater comfort and appreciation for the real market growth potential for cloud computing.

The Virtualization movement

Since its arrival in the early 2000s, x86 server virtualization has become one of the most disruptive technologies in the history of the datacenter. In its early days, the ripest candidates for virtualization in the enterprise were test and development environments. In fact, according to an IDC report, about 70% of all x86 virtualization deployments in 2003 were related to software testing and development. However, as the technology and its surrounding ecosystem matured, performance improved, quality of service was established, security concerns were abated, and production workloads were increasingly hosted in virtualized environments. By 2008, a Gartner Group poll indicated 73% of respondents were using x86 virtualization for mission-critical applications in production. While the overall penetration rate for virtual servers was still low, these proof points suggest that consolidating production applications onto virtualized infrastructure was on its way to becoming mainstream.[4]

From test and development to production in the virtualized datacenter

The deployment of VMware's software in test and development environments was the primary catalyst for the company's growth in its formative years. However, by the end of 2008, another growth driver had emerged. Spurred by enterprise customers migrating production workloads to virtualized environments, VMware license revenue continued its rapid upswing for more than three years (see Figure 1 below). The only revenue growth retrenchment during this run stemmed from the financial crisis in late 2008.


A key to VMware's growth during this period was the ability to overcome some of the biggest obstacles to virtualizing more than the most basic production applications. In a May 2011 report, IDC cited performance and application owner resistance as two of the biggest hurdles that had to be removed before business- and mission-critical application virtualization was more widely accepted.

Key innovations across different tools and enabling technologies, as well as general market maturity, were integral to driving broader acceptance of virtualization for performance sensitive workloads. Fast forward to today and these same obstacles are again key points of resistance preventing enterprises from a broader embrace of public and private cloud infrastructure.

In our next post in the series, slated for 6/17, we will examine the adoption profiles witnessed in the test and development phases of both the x86 virtualization and cloud computing markets. Specifically, we will look at the remarkable similarity in their early growth profiles and what this could mean for the next phase of cloud computing.

-Dave Cahill, Director of Strategic Alliances

Into the ViPR’s Nest
Tuesday, May 7, 2013 posted by Dave Wright

Yesterday EMC sent out a Very Impressive Press Release about their ViPR "software defined storage" project. After navigating through the buzzword bingo, the proposed benefit appears to be a form of storage virtualization where a software-based control plane sits in front of heterogeneous storage, simplifying basic management and provisioning. Unlike previous storage virtualization approaches, the ViPR controller doesn't sit in the data path, it simply configures storage arrays via their proprietary protocols while providing another proprietary (but REST-based) API on top.

Reading their description I couldn't help but think how similar this sounds to the Cinder project, which we helped launch as part of the OpenStack community more than a year ago. Cinder provides a simple (and open) API for managing pools of heterogeneous storage systems. Individual vendors can write open-source plugins for their storage systems, and there are more than a dozen available today. By comparison, when ViPR launches later this year it will support only EMC and possibly NetApp arrays.
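For readers unfamiliar with Cinder's plugin model, the sketch below shows its general shape: a vendor driver implements a small set of lifecycle methods that Cinder's common (and open) API dispatches to. This is a simplified, in-memory stand-in rather than the real cinder.volume.driver base class, and the method names shown are a subset chosen for illustration:

```python
# Schematic of the Cinder plugin contract. A real driver would translate
# each call into its array's management API; this toy version just
# tracks volumes in a dict so the control flow is visible.
class ToyVolumeDriver:
    def __init__(self):
        self._volumes = {}

    def create_volume(self, volume):
        # Real drivers provision a LUN/volume on the backend here.
        self._volumes[volume["name"]] = {"size_gb": volume["size"]}

    def extend_volume(self, volume, new_size_gb):
        self._volumes[volume["name"]]["size_gb"] = new_size_gb

    def delete_volume(self, volume):
        self._volumes.pop(volume["name"], None)

driver = ToyVolumeDriver()
driver.create_volume({"name": "vol-1", "size": 10})
driver.extend_volume({"name": "vol-1"}, 20)
print(driver._volumes["vol-1"]["size_gb"])  # 20
```

Because the interface is open, any vendor can ship a plugin and any cloud operator can swap backends without rewriting their orchestration layer - exactly the lock-in escape hatch a proprietary control plane forecloses.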

To be sure, some of EMC's plans for ViPR go beyond what is in Cinder today, but then again, ViPR isn't available today either. It's disappointing that, rather than contribute that functionality to the broader storage community, they are attempting to create a new layer of lock-in in the orchestration stack. EMC's idea of OpenStack integration for ViPR is to simply make it another abstraction layer under Cinder. As a corporate sponsor of OpenStack, I would have hoped EMC understood that supporting open source projects is about far more than marketing dollars.

In the end, I believe EMC has realized that the rise of cloud orchestration is a threat to their dominance at the storage systems level. Open source storage virtualization software like Cinder makes it easy for customers to move their cloud workloads to the best storage platform over time. Linux had a very similar effect in leveling the playing field for x86 servers against proprietary Unix systems.

This won't be the only vendor announcement this year claiming to be the first true software defined storage product. But software defined storage (SDS) is not a single-vendor product. We have already set the record straight on this topic. In a market like cloud that is clearly embracing faster innovation and increased openness, the last thing anyone needs is another proprietary layer of lock-in. This is especially true considering there are at least two viable open source options today in OpenStack and CloudStack.

Anyone considering ViPR as the solution to their storage system lock-in problem will quickly find that middleware lock-in isn't really any better. If you like some of the promises you see in the ViPR release but would prefer an open alternative that is available today, take a look at Cinder and join us in contributing to Cinder and OpenStack.

-Dave Wright, Founder & CEO

OpenStack Summit Recap: Mindshare Achieved, Market Share Must Follow
Thursday, April 25, 2013, posted by Dave Cahill

With ~2800 people in attendance at the OpenStack Summit in Portland last week it is obvious that OpenStack has more than caught the attention of the IT community. With keynotes and presentations from the likes of BestBuy, Bloomberg, Comcast, HubSpot, PayPal, NSA, CERN and others, the focal point of the event and community is slowly migrating away from vendors and towards end-users and deployers. These early proof points, along with the considerable marketing momentum, have earned OpenStack a promotion from the kiddie section of cloud management platforms to a seat at the adult table. No longer a cute science project, OpenStack is now receiving consideration as a viable cloud management platform across both greenfield and rip and replace deployments.

While this increased attention and hype is useful to increase developer contributions and drive end user awareness, it also ups the ante on execution across the community and vendor ecosystems. With mindshare clearly established, to maintain a seat at the table OpenStack must now backfill this momentum with real market share. Delivering on this market share requires more operational proof points to validate the technology and crystallize the use cases in the eyes of CTOs, CIOs, and IT executives.

As evidenced by our announcements leading into the summit, here at SolidFire we are laser-focused on operationalizing our OpenStack story in the form of advanced technology development, cross-ecosystem integrations, detailed reference architectures, and continued customer use cases. I discussed our OpenStack efforts in more detail at the Summit during an appearance on SiliconANGLE's theCUBE.

So where to from here? In the coming months leading up to the Havana release in the fall, you will see continued iterations and enhancements to our existing OpenStack Block Storage (Cinder) configuration best practices, reference architectures, and implementation guides. To drive increased familiarity and ease of deployment, and to ensure users take full advantage of advanced functionality like QoS, you will also see us roll out an informative series of 'how to' videos, starting with this one: Configuring OpenStack Block Storage with SolidFire.

At SolidFire we are all in, investing in the building blocks our customers rely upon to underpin their cloud infrastructure. OpenStack has quickly established itself as a very viable option in this conversation, and our early investments in this community are starting to pay off. For customers seeking guaranteed performance, high availability, and scale for their cloud infrastructure, there is no better block storage option. We look forward to delivering additional proof points behind this message in Hong Kong.

-Dave Cahill, Director of Strategic Alliances

Separating from the Pack
Tuesday, April 16, 2013, posted by Dave Cahill

The interest and involvement in the OpenStack project is indicative of the demand for more scalable and economical solutions for managing large-scale compute, networking, and storage infrastructure environments. A byproduct of this incredible interest is a lot of noise resulting from vendors announcing their "support" for the project, but OpenStack is flush with sponsorships and top-down corporate support at this point. In fact, probably the greatest risk to the project's success and longevity is becoming too top heavy, morphing into more of a standards body and less of an open-source community.

Politics aside, as we head into the Grizzly Summit, and closing in on the project's 3rd year of existence, OpenStack is growing up before our eyes. It is a powerful assembly of software innovation across all of the different ongoing projects. But to really bring all this promise to bear, potential deployers need to see what is possible with OpenStack. Harnessing all this momentum into more operational proof-points is imperative. This means advanced feature development, cross-ecosystem integrations, reference architectures/best practices, and real-world deployments.

At SolidFire we define contribution to OpenStack a bit differently than other vendors: not by board seats or sponsorships, but by meaningful contributions to the project itself. We are committed to OpenStack far beyond a basic plug-in integration. After another six months of hard work under our belts since Folsom and the arrival of Cinder, we have announced the latest fruits of our labor, including:

  • Advanced feature development. SolidFire continues to deliver the industry's most comprehensive support for the Cinder block storage service. Enhanced features supported by our OpenStack Driver in the Grizzly release include boot from volumes, QoS settings via volume types, and multi-backend support allowing for the seamless addition of a SolidFire cluster into an existing Cinder environment.
  • Cross-ecosystem integrations. To help ease deployment and management of customers' OpenStack-based infrastructure we have announced partnerships with leading OpenStack distributions including both Nebula One and Rackspace's Alamo Private Cloud software.
  • Reference architectures & best practices. To minimize friction when deploying SolidFire within an OpenStack-based infrastructure, we have published three different documents including a SolidFire/Cinder configuration guide, reference architecture and a SolidFire/Rackspace Private Cloud Implementation guide.
  • Real world deployments. We have a number of customers using SolidFire as the block storage within their OpenStack cloud infrastructure. One that we can talk about today, Brinkster, is quoted in our recent press release. The level of activity and conversations in this space is amazing and we hope to share more of these names with you in the very near future.

From early on at SolidFire we have invested in continued project and community support. Now with some strong customer wins and go-to-market partnerships under our belt, SolidFire is gaining increasing momentum as the block storage of choice for OpenStack. For customers seeking guaranteed performance, high availability, and scale for their OpenStack-based cloud infrastructure, there is no better option.

If you plan to attend the upcoming summit this week in Portland, OR, please drop by our booth. To schedule a meeting or demo while at the show, email or tweet us (@solidfire) ahead of time.

-Dave Cahill, Director of Strategic Alliances

Life as a hosting provider: Why Crucial went looking for storage QoS
Guest blog by Ijan Kruizinga, Sales & Marketing Director, Crucial Cloud Hosting

Storage QoS (Quality of Service) is important to Crucial because we want our customers to remain competitive. The world has become flat, and it's harder and harder to find a competitive edge. By delivering guaranteed, predictable storage performance to our customers, they are now able to drive further efficiencies from their business, allowing them to scale without large increases in investment.

Crucial exists to help businesses succeed online, and the rapid growth of our cloud services over the past few years challenged our ability to deliver on that purpose. Traditional SAN technologies weren't designed to scale for the cloud, and performance silos were becoming an issue for us and, more importantly, limiting our customers' ability to make the transition to the cloud. As more and more of our customers moved to our cloud offerings, we soon realized that existing storage solutions were years behind the requirements of businesses, and that spinning disks had finally met their demise: they couldn't deliver the performance and efficiencies needed in today's high-performance, customer-driven hosting environments.

Another challenge we faced was cost creep. Pricing for traditional SAN solutions was built around enterprises and was never designed for today's cloud hosting environments. We needed our storage to scale to petabytes--not just terabytes--and on a cost-per-GB model instead of a cost-per-shelf model.

By using traditional SAN solutions we were limited in our ability not only to service our existing customers, but also to attract new ones. We were forced to develop hybrid solutions for customers that needed scalable, larger environments to power anything from busy websites to online retail stores to I/O-heavy databases. Our solutions were made up of mixed virtual servers and dedicated servers, which did deliver on our customers' needs but still posed a number of challenges. The challenge we found with hybrid environments is that they are usually complex, not as flexible or scalable, and don't deliver high availability unless a considerable investment in time and money is made by the customer as well as our team.

At Crucial we believe that the web should be easy and reliable. Our customers demand performance with simplicity and ease-of-use, so we went looking for a solution that would essentially kill the need to use dedicated servers in hosting environments. We found the performance and the competitive edge in SolidFire.

Learn more about how Crucial is being Fueled by SolidFire.

-Ijan Kruizinga, Sales & Marketing Director, Crucial Cloud Hosting

Life as a hosting provider: How Elastx checked off all the boxes on their storage wishlist
Thursday, April 4, 2013, guest blog by Joakim Öhman, CEO, Elastx

I have been working with data center and storage solutions for 18 years, and one of the things at the top of my wishlist has been a storage system that could keep up with the rest of the infrastructure. While servers and networks increased in capacity and performance, the good ol' hard drives just increased in capacity and NOT performance. Yes, a lot of alternative solutions have been developed over the years to compensate for the lack of disk performance, but none that really solved the core problem. Most solutions use cache to remove hot spots, but you can't cache it all. And even then you still could not get predictable performance, especially not in a multi-tenant environment.

90% of all the performance issues that arose for us were infrastructure-related and were caused by storage. Even if the system performed okay when we ran IOPS and throughput tests, the system still felt slow and single transactions did not perform as expected. It all comes down to latency. A high-performance hard drive has an average latency of 4ms, and a general rule is that a storage system should have a latency of less than 10ms to be considered okay in performance. The problem is when the storage system is heavily loaded and another server (the noisy neighbor) is using the same disk: your IO is queued, and latency can climb to 20-30ms. SSDs are great as they do not have the latency issues that traditional hard drives do, but the problem is the cost.
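A rough back-of-envelope sketch illustrates the queueing effect described above (the numbers are hypothetical, for illustration only):

```python
# Back-of-envelope queueing sketch (hypothetical numbers): on a shared
# spinning disk, a request's latency is roughly the disk's per-IO service
# time multiplied by the number of requests queued ahead of it, plus one
# for the request itself.

def observed_latency_ms(service_time_ms: float, queued_ahead: int) -> float:
    """Estimate latency for one IO behind `queued_ahead` earlier requests."""
    return service_time_ms * (queued_ahead + 1)

# An idle 4 ms disk: latency equals its service time.
print(observed_latency_ms(4.0, 0))   # 4 ms -- fine on its own

# A noisy neighbor has queued 6 IOs ahead of yours on the same disk.
print(observed_latency_ms(4.0, 6))   # 28 ms -- well past the ~10 ms target
```

The point is not the exact numbers but the shape: latency degrades with queue depth you do not control, which is why per-disk sharing in a multi-tenant environment makes performance unpredictable.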

The way we deliver IT infrastructure (IaaS) and platform (PaaS) is changing rapidly. In the legacy service model, machines are treated like pets: you give them names, raise them, and care for them, and when they get ill, you nurse them back to health. In the new cloud service model, machines are treated like cattle and given numbers instead of names: they are all identical, and when one gets ill, you shoot it and get another one. You work with parallel servers and scale out for redundancy and scalability. This has proven to be the best way to run servers and, in my experience, the best way to build storage systems as well.

In 2001 I founded 24 Solutions, which offers traditional managed services but with a focus on compliance and high availability. Over the last couple of years we saw a huge demand from our customers for a more elastic and automated environment that could integrate with their business and DevOps processes. So, at the end of 2012 I founded Elastx, which offers only cloud-based PaaS and consultancy services. I then had the privilege of building a completely new infrastructure designed from the beginning for multi-tenant cloud computing. When searching for and evaluating products for the platform, storage was the area where we really hoped to find a better solution.

This was our storage system wish list:

  • SSD (or similar) only
  • Some dedupe or similar technology to make it affordable
  • Scale-out design where we would get more IOPS per node
  • Fully redundant where we should be able to lose a node without downtime
  • Integration with OpenStack

We did a lot of searching and found a number of potential products. SolidFire was the only system that checked all the boxes! One feature that SolidFire had that we did not even have in the list of requirements was true Quality of Service (QoS).

The SolidFire deduplication and compression made the system affordable even though it is SSD-only. With an SSD-only system we can get predictable performance and always deliver high performance and low latency. During our performance tests we saw at most 2ms latency under maximum IOPS load. During normal load we achieved sub-1ms latency, which is 10-20 times better than a typical hard-disk-based storage system. With the linear scalability of a scale-out design, we can be certain not to run into storage bottlenecks as we grow in capacity. We also see SolidFire's contribution to OpenStack as a big advantage, securing future integration with the OpenStack platform.

Learn more about how Elastx is being Fueled by SolidFire™ .

-Joakim Öhman, CEO, Elastx

Hypervisor-based QoS: Helps with the symptoms, but by itself it's not the cure
Tuesday, April 2, 2013, posted by Dave Cahill

If you have been following our recent stream of blogs and announcements, we have been giving a lot of airtime to the subject of storage Quality of Service (QoS). In a timely post on this subject, VMware's @FrankDenneman recently wrote a blog to solicit feedback on a concept they are calling "Storage-level Reservations." If you haven't read the blog yet, I would encourage you to do so. Also make sure to fill out the survey at the end to help VMware with their research.

In the post Frank summarizes the key challenges imposed by running multiple tenants on a shared storage infrastructure:

In a relatively closed environment such as the compute layer it's fairly easy to guarantee a minimum level of resource availability, but when it comes to a shared storage platform new challenges arise. The hypervisor owns the computes resource and distributes it to the virtual machine it's hosting. In a shared storage environment we are dealing with multiple layers of infrastructure, each susceptible to congestion and contention. And then there is the possibility of multiple external storage resource consumers such as non-virtualized workloads using the same array impacting the availability of resources and the control of distributing the resources.

        - Frank Denneman, Would you be interested in Storage-level reservations? 3/26/13

While it would be fantastic to solve this problem solely from a hypervisor perspective, the reality is that the hypervisor has very little control or visibility of the underlying storage system resources. A cloud infrastructure of any size demands a more coordinated approach across both host and storage resources. Some of the key issues to consider with a hypervisor-centric approach in front of traditional storage include:

  • Lack of IOPS control. While the hypervisor can throttle IOPS, it has no control over the total IOPS pool available. With no governance from the underlying storage system, there is no way for a hypervisor to truly guarantee a minimum IOPS level. In this scenario the hypervisor will always be at the mercy of the storage device.
  • Performance degradation. Without visibility into back-end storage resource utilization, there is no way for the hypervisor to know what resources remain available to it on a persistent basis. As storage system utilization increases performance degradation becomes a real concern. With a larger pool of virtual resources contending for the same pool of resources, the lack of any sort of storage system layer isolation effectively creates an IOPS free-for-all. The resulting performance variability is a non-starter for a multi-tenant infrastructure hosting performance sensitive applications.
  • Forced overprovisioning. Absent the ability to granularly carve up storage system performance and provision it out to each virtual machine, the only way to ensure a large enough IOPS pool for these VMs is to extensively overprovision your storage. Unfortunately, there is no better way to blow the economics of your shared storage environment than by being forced to deploy 3x as many systems at 1/3rd the utilization rates.
  • Lacking coordination. While throttling IOPS usage to VMs is a basic form of storage QoS, this solution is more of an indictment of the deficiencies of existing storage systems than an ideal solution to the problems posed in a multi-tenant infrastructure. True QoS is delivered through end-to-end coordination and orchestration between the host and the underlying storage system to ensure each virtual machine has the resources it needs to properly support the application.
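
To make the first point concrete, here is a toy sketch (hypothetical numbers, not any vendor's actual behavior) of why hypervisor-side caps alone cannot guarantee a minimum:

```python
# Toy illustration (hypothetical numbers): a hypervisor can cap each VM's
# IOPS, but if the array's total IOPS pool shrinks (for example during a
# rebuild), the caps say nothing about what each VM actually receives.

def delivered_iops(array_pool: int, vm_caps: list[int]) -> list[int]:
    """Split the array's available IOPS across VMs, honoring per-VM caps.
    When the pool can't cover every cap, the shortfall is shared pro rata:
    no VM is guaranteed any minimum."""
    demand = sum(vm_caps)
    if demand <= array_pool:
        return vm_caps[:]  # everyone gets up to their cap
    scale = array_pool / demand
    return [int(cap * scale) for cap in vm_caps]

# Healthy array: a 100k IOPS pool easily covers three capped VMs.
print(delivered_iops(100_000, [20_000, 20_000, 10_000]))

# Degraded array: the pool halves, and every VM silently loses performance
# regardless of what the hypervisor "guaranteed".
print(delivered_iops(25_000, [20_000, 20_000, 10_000]))
```

The caps bound the maximum each VM can consume, but the floor is set entirely by the storage system, which is exactly the piece the hypervisor cannot see or control.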

Implementing a storage QoS mechanism like storage reservations at the hypervisor layer, without similar enforcement capability at the storage system level, does little to address the core challenges imposed by these multi-tenant environments. With VMware and others working to improve controls at the hypervisor layer, now is the time to demand more from your storage vendors to deliver on their side of this equation. The good news is there is no need to wait: options are already available today, and over time, API-based integration between hypervisors and storage systems, such as that provided by projects like OpenStack Cinder and VMware VVols, will provide a much more holistic approach to managing storage Quality of Service than what can be obtained from a hypervisor alone.

-Dave Cahill, Director of Strategic Alliances

Manage Performance and Capacity Separately
Wednesday, March 27, 2013, posted by Dave Wright

Requirement #6 for guaranteed Quality of Service (QoS): performance virtualization

Performance Virtualization
Delivering Guaranteed Quality of Service in a cloud environment is key to unlocking the true potential of cloud to host business critical applications. However, doing so requires an architecture designed for quality of service, not simply bolt-on features. The final architectural requirement for guaranteeing QoS is the ability for the system to virtualize performance separate from capacity, allowing a user to dial in the exact amount of performance and capacity required from separate pools.

All modern storage systems virtualize the underlying raw capacity of their disks, creating an opaque pool of space from which individual volumes are carved. However the performance of those individual volumes is a second-order effect, determined by a number of variables such as the number of disks the volume is spread across, the speed of those disks, the RAID-level used, how many other applications share the same disks, and the controller resources available to service IO.

Traditional capacity virtualization does not suffice
Historically this approach has prevented storage systems from delivering any specific level of performance. "More" or "less" performance could be obtained by placing a volume on faster or slower disks or by relocating adjacent applications that may be causing impact. However, this is a manual and error-prone process. In a cloud environment, where both the scale and the dynamic nature prevent manual management of individual volumes, this approach just isn't possible. Worst of all, significant raw capacity is often wasted as sets of disks get maxed out from a performance standpoint well before all their capacity is used.

Finally, performance can be managed independent of capacity
SolidFire's performance virtualization removes all this complexity, creating separate pools of capacity and performance from which individual volumes can be provisioned. Performance becomes a first-class citizen, and management is as simple as specifying the performance requirements for an application rather than manually placing data and trying to adjust later.

Furthermore, SolidFire performance virtualization allows performance for an individual volume to be changed over time - simply increased or decreased as application workloads change or as requirements become more clear. SolidFire's ability to dynamically adjust performance gives service providers the complete flexibility to deliver customers the exact performance they need, precisely when they need it.

Separating performance from capacity has the added benefit of providing a consistent way to view the current load on the system, both in terms of the capacity and performance that is actually used. Ensuring that the system doesn't become unexpectedly overloaded is now as simple as reading a gas gauge rather than reading tea leaves. SolidFire's ability to separate performance from capacity in our architecture is the last essential part of guaranteeing QoS. Without it, you're left with a manual process full of guessing games and resulting in poor overall efficiency.
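
As a rough sketch of the idea (illustrative only, not SolidFire's actual API), provisioning against separate capacity and performance pools might look like:

```python
# Sketch of performance virtualization (hypothetical API and numbers):
# capacity and performance are tracked as two independent pools, and each
# volume draws exactly what it needs from each. Performance can later be
# dialed up or down without touching capacity.

class PerfVirtualizedCluster:
    def __init__(self, capacity_gb: int, iops_pool: int):
        self.free_gb = capacity_gb
        self.free_iops = iops_pool
        self.volumes: dict[str, dict[str, int]] = {}

    def provision(self, name: str, size_gb: int, iops: int) -> None:
        if size_gb > self.free_gb or iops > self.free_iops:
            raise RuntimeError("pool exhausted -- the gas gauge says add nodes")
        self.free_gb -= size_gb
        self.free_iops -= iops
        self.volumes[name] = {"gb": size_gb, "iops": iops}

    def resize_iops(self, name: str, new_iops: int) -> None:
        # Adjust performance over time as workloads change.
        delta = new_iops - self.volumes[name]["iops"]
        if delta > self.free_iops:
            raise RuntimeError("not enough performance headroom")
        self.free_iops -= delta
        self.volumes[name]["iops"] = new_iops

cluster = PerfVirtualizedCluster(capacity_gb=100_000, iops_pool=500_000)
cluster.provision("db01", size_gb=200, iops=50_000)     # small but fast
cluster.provision("archive", size_gb=50_000, iops=500)  # big but slow
cluster.resize_iops("db01", 80_000)                     # dial performance up
print(cluster.free_gb, cluster.free_iops)  # remaining pools read like a gauge
```

Note how the "gas gauge" falls out of the design: remaining capacity and remaining performance are each a single number, rather than an inference from disk layouts and RAID sets.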

If you'd like to learn more about Quality of Service in cloud computing, join our upcoming webinar with WHIR:

Unlocking the Secret to QoS in the Cloud: The 6 Requirements of Your Storage Architecture
Web Host Industry Review Webinar with SolidFire
Tuesday, April 2, 2:00pm EST

Register now

-Dave Wright, Founder & CEO

Take Total Control
Thursday, March 21, 2013, posted by Dave Wright

Requirement #5 for guaranteed Quality of Service (QoS): fine-grain QoS control

Fine Grain QoS Control
As you know from reading our Quality of Service (QoS) Benchmark blog series, guaranteeing QoS takes more than simply having a QoS feature. Without an architecture built from a design that includes all-SSD, scale-out architecture, RAID-less data protection, and balanced data distribution, any discussion of QoS is really just lip service. Another key requirement for guaranteeing Quality of Service is a fine-grain QoS model that describes performance in all situations.

Contrast fine-grain control against today's rudimentary approaches to QoS, such as rate limiting and prioritization. These features provide only a limited amount of control and don't enable specific performance in all situations.

The trouble with having no control
For example, basic rate limiting, which sets a cap on the IOPS or bandwidth an application consumes, doesn't take into account the fact that most storage workloads are prone to performance bursts. Database checkpoints, table scans, page cache flushes, file copies, and other operations tend to occur suddenly, requiring a sharp increase in the amount of performance needed from the system. Setting a hard cap simply means that when an application actually does need to do IO, it is quickly throttled. Latency then spikes and the storage seems painfully slow, even though the application isn't doing that much IO overall.

Prioritization assigns labels to each workload, yet similarly suffers with bursty applications. While high priority workloads may be able to easily burst by stealing resources from lower priority ones, moderate or low priority workloads may not be able to burst at all. Worse, these lower priority workloads are constantly being impacted by the bursting of high priority workloads.

Failure and over-provisioned situations also present challenges for coarse-grained QoS. Rate limiting doesn't provide any guarantees if the system can't even deliver at the configured limit when it is overtaxed or suffering from performance-impacting failures. While prioritization can minimize the impact of failures for some applications, it still can't tell you ahead of time how much impact there will be, and the applications in the lower tiers will likely see absolutely horrendous performance.

SolidFire enables the control you've been looking for
SolidFire's QoS controls are built around a robust model for configuring QoS for an individual volume. The model takes into account bursty workloads, changing performance requirements, different IO patterns, and the possibility of over-provisioning. Whether an application is allocated a lot of performance or a little, the amount of performance it gets in any situation is never in doubt. Cloud operators finally have the confidence to guarantee QoS and write firm SLAs against performance. Only an architecture built with a fine-grained Quality of Service model can support these types of guarantees.
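
To illustrate the flavor of such a model, here is a sketch of a burst-credit control (the policy and all numbers are hypothetical, not SolidFire's actual implementation): a volume accrues credit while running below its sustained rate and spends it during a spike, so checkpoint-style bursts complete quickly instead of being hard-capped.

```python
# Sketch of a fine-grain QoS policy (illustrative, not SolidFire's actual
# algorithm): each volume has a sustained max IOPS rate plus a burst ceiling.
# Credits accrue during quiet seconds and are spent to exceed max briefly.

def run_volume(max_iops: int, burst_iops: int, credit_cap: int,
               demand_per_sec: list[int]) -> list[int]:
    """Return the IOPS actually served each second under a burst-credit policy."""
    credits = 0
    served = []
    for demand in demand_per_sec:
        # May exceed max_iops up to burst_iops, limited by accrued credits.
        allowed = max_iops + min(credits, burst_iops - max_iops)
        got = min(demand, allowed)
        # Accrue credit when under max, spend it when bursting over max.
        credits = min(credit_cap, credits + max_iops - got)
        served.append(got)
    return served

# Quiet for 3 s, then a 4000-IOPS checkpoint burst against a 1000-IOPS max:
# the volume bursts to 3000 IOPS, then tapers as credits run out.
print(run_volume(max_iops=1000, burst_iops=3000, credit_cap=4000,
                 demand_per_sec=[100, 100, 100, 4000, 4000]))
```

Compare this with a hard cap at 1000 IOPS, which would stretch the same burst out over several seconds of throttled, high-latency IO.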

Stay tuned to this blog as we discuss the other critical architecture requirements required for guaranteed QoS, and join us on our upcoming webinar with WHIR to learn more:

Unlocking the Secret to QoS in the Cloud: The 6 Requirements of Your Storage Architecture
Web Host Industry Review Webinar with SolidFire
Tuesday, April 2, 2:00pm EST

Register now

-Dave Wright, Founder & CEO

Get Back in Balance
Tuesday, March 19, 2013, posted by Dave Wright

Requirement #4 for guaranteed Quality of Service (QoS): balanced load distribution

Balanced Load
Guaranteeing performance to thousands of applications at the same time is a daunting challenge, but it's essential for anyone wanting to host performance-sensitive applications in a cloud environment. However, delivering true Quality of Service (QoS) requires an architecture specifically designed for the task. As we've shown, true QoS starts with an all-SSD platform, a scale-out architecture, and RAID-less data protection. The fourth architecture requirement for guaranteed QoS is a balanced load distribution across all the disks in the system.

Most block storage architectures use very basic algorithms to lay out provisioned space. Data is striped across a set of disks in a RAID set, or possibly across multiple RAID sets in a storage pool. For systems that support thin provisioning, the placement may be done via smaller chunks or extents rather than the entire volume at once. Typically, however, at least several hundred megabytes of data will be striped together.

Once data is placed on a disk, it is seldom moved (except possibly in tiering systems to move to a new tier). Even when a drive fails, all its data is simply restored onto a spare. When new drive shelves are added they are typically used for new data only - not to rebalance the load from existing volumes. Wide striping is one attempt to deal with this imbalance, by simply spreading a single volume across many disks. But as we've discussed before, when combined with spinning disk, wide striping just increases the number of applications that are affected when a hotspot or failure does occur.

Unbalanced loads cause unbalanced performance
The result of this static data placement is uneven load distribution between storage pools, RAID sets, and individual disks. When the storage pools have different capacity or different types of drives (e.g. SATA, SAS, or SSD) the difference can be even more acute. Some drives and RAID sets will get maxed out while others are relatively idle. Managing data placement to effectively balance IO load as well as capacity distribution is left to the storage administrator, often working with Microsoft Excel spreadsheets to try and figure out the best location for any particular volume.

Not only does this manual management model not scale to cloud environments, it just isn't viable when storage administrators have little or no visibility into the underlying applications, or when application owners cannot see the underlying infrastructure. The unbalanced distribution of load also makes it impossible for the storage system itself to make any guarantees about performance. If the system can't even balance the IO load it has, how can it guarantee QoS to an individual application as that load changes over time?

SolidFire restores the balance
SolidFire's unique approach to data placement distributes individual 4K blocks of data throughout the storage cluster to evenly balance both capacity and performance. Data is distributed based on content rather than location, which avoids hotspots caused by problematic application behavior such as heavy access to a small range of LBAs. Furthermore, as capacity is added (or removed) from the system, data is automatically redistributed in the background across all the storage capacity. Rather than ending up with a system that has traffic jams in older neighborhoods while the suburbs are mostly empty, SolidFire creates perfect balance as the system scales.

This even distribution of data and IO load across the system allows SolidFire to deliver predictable performance regardless of the IO behavior of an individual application. As load on the system increases, it happens predictably and consistently. And as new capacity and performance is added, the SolidFire system gives a predictable amount of additional performance. This balanced load distribution continues to stay balanced over time, an essential aspect of delivering consistent performance day after day. You just can't guarantee QoS without it.
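
A small sketch shows why content-based placement stays balanced even under a narrow, hot LBA range (the hash choice and node count here are illustrative, not SolidFire's actual algorithm):

```python
# Sketch of content-based placement (illustrative): each 4K block's home is
# derived from a hash of its content rather than its logical address, so
# even a workload hammering a narrow LBA range spreads across all nodes.

import hashlib
from collections import Counter

def place_block(data: bytes, n_nodes: int) -> int:
    """Pick a node from the block's content hash, not its logical address."""
    digest = hashlib.sha256(data).digest()
    return int.from_bytes(digest[:8], "big") % n_nodes

# Simulate 40,000 distinct 4K-block writes (stand-ins for block contents)
# landing on a 10-node cluster.
nodes = Counter(place_block(i.to_bytes(8, "big"), 10) for i in range(40_000))
share = [count / 40_000 for _, count in sorted(nodes.items())]
print(min(share), max(share))  # every node stays near a 10% share
```

Because placement ignores the logical address entirely, a tenant pounding one small LBA range generates the same even spread as a tenant streaming sequentially, which is a precondition for the predictable scaling described above.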

Stay tuned to this blog as we discuss the other critical architecture requirements required for guaranteed QoS, and join us on our upcoming webinar with WHIR to learn more:

Unlocking the Secret to QoS in the Cloud: The 6 Requirements of Your Storage Architecture
Web Host Industry Review Webinar with SolidFire
Tuesday, April 2, 2:00pm EST

Register now

-Dave Wright, Founder & CEO

Say Farewell to RAID Storage
Thursday, March 14, 2013, posted by Dave Wright

Requirement #3 for guaranteed Quality of Service (QoS): RAID-less data protection

RAID-less
Ensuring Quality of Service (QoS) is an essential part of hosting business-critical applications in a cloud. But QoS just isn't possible on legacy storage architectures. As we've been discussing in this QoS Benchmark blog series, guaranteeing true QoS requires an architecture built for it from the beginning, starting with all-SSD and scale-out architectures. Now let's explore the third requirement to deliver guaranteed performance: data protection that doesn't rely on standard RAID.

The invention of RAID 30+ years ago was a major advance in data protection, allowing "inexpensive" disks to store redundant copies of data, rebuilding onto a new disk when a failure occurred. RAID has advanced over the years with multiple approaches and parity schemes to try and maintain relevance as disk capacities have increased dramatically. Some form of RAID is used on virtually all enterprise storage systems today. However, the problems with traditional RAID can no longer be glossed over, particularly when you want a storage architecture that can guarantee performance even when failures occur.

The problem with RAID
When it comes to QoS, RAID causes a significant performance penalty when a disk fails, often 50% or more. This penalty occurs because a failure causes a 2-5X increase in IO load to the remaining disks. In a simple RAID10 setup, a mirrored disk now has to serve double the IO load, plus the additional load of a full disk read to rebuild into a spare. The impact is even greater for parity-based schemes like RAID5 and RAID6, where a read that would have hit a single disk now has to hit every disk in the RAID set to rebuild the original data - in addition to the load from reading every disk to rebuild into a spare.

The performance impact of RAID rebuilds is compounded by the long rebuild times incurred by multi-terabyte drives. Since traditional RAID rebuilds entirely onto a new spare drive, the rebuild is bottlenecked by the write speed of that single drive, combined with the read bottleneck of the few other drives in the RAID set. Rebuild times of 24 hours or more are now common, and the performance impact is felt the entire time.

How can you possibly meet a performance SLA when a single disk failure can lead to hours or days of degraded performance? In a cloud environment, telling the customer "the RAID array is rebuilding from a failure" is little comfort. The only option available for service providers is to dramatically under-provision the performance of the system and hope that the impact of RAID rebuilds goes unnoticed.

Introducing SolidFire Helix™ data protection
SolidFire's Helix data protection is a post-RAID distributed replication algorithm. It spreads redundant copies of each disk's data throughout all the other disks in the cluster rather than across a limited RAID set. Data is distributed in such a way that when a disk fails, the IO load it was serving spreads out evenly among every remaining disk in the system, with each disk only needing to handle a few percent more IO - not double or triple its previous load, as with RAID. Furthermore, data is rebuilt in parallel into the free space on all remaining disks rather than onto a dedicated spare drive. Each drive in the system simply needs to share 1-2% of its data with its peers, allowing for rebuilds in a matter of seconds or minutes rather than hours or days.
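A back-of-the-envelope model makes the contrast concrete (illustrative arithmetic only, using the simplified load assumptions described in the post):

```python
def surviving_disk_load(cluster_disks: int, scheme: str) -> float:
    """Relative IO load on the most-affected surviving disk after one
    disk failure (1.0 = normal load). A deliberately simple model."""
    if scheme == "raid10":
        # The mirror partner must serve its own IO plus the failed disk's.
        return 2.0
    if scheme == "distributed":
        # The failed disk's load spreads evenly over every remaining disk.
        return 1.0 + 1.0 / (cluster_disks - 1)
    raise ValueError(f"unknown scheme: {scheme}")

print(surviving_disk_load(100, "raid10"))       # 2.0: the mirror's load doubles
print(surviving_disk_load(100, "distributed"))  # ~1.01: about 1% extra per disk
```

The bigger the cluster, the smaller the per-disk impact of a failure under distributed replication, whereas the RAID10 mirror partner doubles its load no matter how large the system is.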

The combination of even load redistribution and rapid rebuilds allows SolidFire to continue to guarantee performance even when failures occur, something that just isn't possible with traditional RAID.

Stay tuned to this blog as we discuss the other critical architectural requirements for guaranteed QoS, and join us on our upcoming webinar with WHIR to learn more:

Unlocking the Secret to QoS in the Cloud: The 6 Requirements of Your Storage Architecture
Web Host Industry Review Webinar with SolidFire
Tuesday, April 2, 2:00pm EST

Register now

-Dave Wright, Founder & CEO

Scale Out, Not Up
Tuesday, March 12, 2013 posted by Dave Wright

Requirement #2 for guaranteed Quality of Service (QoS): a true scale-out architecture

Welcome to the third blog in the SolidFire Benchmark QoS series, where we've been explaining how guaranteeing Quality of Service (QoS) isn't a feature that can be bolted on to a storage system. It requires an architecture built for it from the ground up, starting with an all-SSD platform. Now let's discuss a second requirement: a true scale-out architecture.

Traditional storage architectures follow a scale-up model, where a controller (or pair of controllers) are attached to a set of disk shelves. More capacity can be added by simply adding shelves, but controller resources can only be upgraded by moving to the next "larger" controller (often with a data migration). Once you've maxed out the biggest controller, the only option is to deploy more storage systems, increasing the management burden and operational costs.

Tipping the scales not in your favor
This scale-up model poses significant challenges to guaranteeing consistent performance to individual applications. As more disk shelves and applications are added to the system, contention for controller resources increases, degrading performance as the system scales. While adding disk spindles is typically seen as increasing system performance, many storage architectures only place new volumes on the added disks, or require manual migration. Mixing disks with varying capacities and performance characteristics (such as SATA and SSD) makes it even more difficult to predict how much performance will be gained, particularly when the controller itself can quickly become the bottleneck.

Scaling out is the only way to go
By comparison, a true scale-out architecture such as SolidFire's adds controller resources and storage capacity together. Each time capacity is increased and more applications are added, a consistent amount of performance is added as well. The SolidFire architecture ensures that the added performance is available to any volume in the system, not just new data. This is critical both for the administrator's capacity planning and for the storage system itself: if the system can't predict how much performance it has now or will have in the future, it can't possibly offer any kind of guaranteed Quality of Service.
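The difference can be sketched with a toy capacity-planning model (all IOPS figures are invented for illustration):

```python
def scale_up_iops(shelves: int, iops_per_shelf: int = 50_000,
                  controller_limit: int = 200_000) -> int:
    """Scale-up: each shelf adds spindles, but total IOPS is capped by
    the fixed controller pair, so gains flatten out."""
    return min(shelves * iops_per_shelf, controller_limit)

def scale_out_iops(nodes: int, iops_per_node: int = 50_000) -> int:
    """Scale-out: every node adds controller resources alongside media,
    so performance grows linearly and predictably with capacity."""
    return nodes * iops_per_node

for n in (2, 4, 8):
    print(f"{n} units: scale-up {scale_up_iops(n):,} IOPS, "
          f"scale-out {scale_out_iops(n):,} IOPS")
```

In the scale-up model the eighth shelf adds capacity but zero performance; in the scale-out model the eighth node adds exactly as much performance as the first, which is what makes the system's future performance predictable.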

Stay tuned to this blog as we discuss the four other critical architectural requirements for guaranteed QoS, and join us on our upcoming webinar with WHIR to learn more:

Unlocking the Secret to QoS in the Cloud: The 6 Requirements of Your Storage Architecture
Web Host Industry Review Webinar with SolidFire
Tuesday, April 2, 2:00pm EST

Register now

-Dave Wright, Founder & CEO

Step Away From The Spinning Media
Wednesday, March 6, 2013 posted by Dave Wright

Requirement #1 for guaranteed Quality of Service (QoS): An All-SSD Architecture

Anyone deploying either a large public or private cloud infrastructure is faced with the same issue: how to deal with inconsistent and unpredictable application performance. As we discussed earlier, overcoming this problem requires an architecture built from the ground up to guarantee Quality of Service (QoS) for many simultaneous applications.

The first requirement for achieving this level of performance is moving from spinning media to an all-SSD architecture. Only an all-SSD architecture allows you to deliver consistent latency for every IO.

At first, this idea might seem like overkill. If you don't actually need the performance of SSD storage, why can't you guarantee performance using spinning disk? Or even a hybrid disk and SSD approach?

Fundamentally, it comes down to simple physics. A spinning disk can only serve a single IO at a time, and any seek between IOs adds significant latency. In cloud environments where multiple applications or virtual machines share disks, the unpredictable queue of IO to the single head can easily result in an order of magnitude or more of variance in latency, from 5 ms with no contention to 50 ms or more on a busy disk.

The solutions are part of the problem
Modern storage systems attempt to overcome this fundamental physical bottleneck in a number of ways including caching (in DRAM and flash), tiering, and wide striping.

Caching is the easiest way to reduce contention for a spinning disk. The hottest data is kept in large DRAM or flash-based caches, which can offload a significant amount of IO from the disks. Indeed, this is why large DRAM caches are standard on every modern disk-based storage system. But while caching can certainly increase the overall throughput of the spinning disk system, it causes highly variable latency.

Data in DRAM or flash cache can be served in under 1 ms, while cache misses served from disk will take 10-100 ms. That's up to two orders of magnitude for an individual IO. Clearly the overall performance of an individual application is going to be strongly influenced by how cache-friendly it is, how large the cache is, and how many other applications are sharing it. In a dynamic cloud environment, that last criterion is changing constantly. All told, it's impossible to predict, much less guarantee, the performance of any individual application in a system based on caching.
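The arithmetic behind this variability is simple. A rough model, with assumed latencies (0.5 ms for a cache hit, 20 ms for a disk-bound miss):

```python
def expected_latency_ms(hit_rate: float, cache_ms: float = 0.5,
                        disk_ms: float = 20.0) -> float:
    """Average IO latency of a cache-fronted spinning-disk system:
    a weighted blend of cache hits and disk-bound misses."""
    return hit_rate * cache_ms + (1 - hit_rate) * disk_ms

# A modest drop in cache friendliness (say, a new tenant evicting your
# working set) swings average latency by more than 5x:
print(expected_latency_ms(0.95))  # ~1.48 ms
print(expected_latency_ms(0.60))  # ~8.3 ms
```

Since the hit rate depends on every other tenant sharing the cache, no individual application can count on any particular point on this curve.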

Tiering is another approach to overcoming the physical limits of spinning disk, but it suffers from many of the same problems as caching. Principally, tiered systems move "hot" and "cold" data between different storage media in an attempt to give popular applications more performance. But as we've discussed before, this approach suffers from the same unpredictability problems as caching.

Wide striping data for a volume across many spinning disks doesn't solve the problem either. While this approach can help balance IO load across the system, many more applications are now sharing each individual disk. A backlog at any disk can cause a performance issue, and a single noisy neighbor can ruin the party for everyone.

All-SSD is the only way to go
All-SSD architectures have significant advantages when it comes to being able to guarantee QoS. The lack of a moving head means latency is consistent no matter how many applications demand IOs, regardless of whether the IOs are sequential or random. Compared to the single-IO bottleneck of disk, SSDs have eight to 16 channels to serve IOs in parallel, and each IO is completed quickly. So even at a high queue depth, the variance in latency for an individual IO is low. All-SSD architectures often do away with DRAM caching altogether. Modern host operating systems and databases do extensive DRAM caching already, and the low latency of flash means that hitting the SSD is often nearly as fast as serving from a storage-system DRAM cache anyway. The net result in a well-designed system is consistent latency for every IO, a strong requirement for delivering guaranteed performance.

An all-SSD architecture is just the starting point for guaranteed QoS, however. Even a fast flash storage system can have noisy neighbors, degraded performance from failures, or unbalanced performance. Stay tuned to this blog as we discuss the five other critical architectural requirements for guaranteed QoS, and join us on our upcoming webinar with WHIR to learn more:

Unlocking the Secret to QoS in the Cloud: The 6 Requirements of Your Storage Architecture
Web Host Industry Review Webinar with SolidFire
Tuesday, April 2, 2:00pm EST

Register now

-Dave Wright, Founder & CEO

Announcing Upcoming Webinars
Monday, March 4, 2013 posted by SolidFire

We have several webinars coming up in the next month and wanted to share the details with you. Be sure to click the links below for more information and to register.

Host Performance-Sensitive Applications in Your Cloud with Confidence

Citrix Ready Webinar with SolidFire
Wednesday, March 13, 1:00pm EST

Learn how SolidFire storage used with Citrix CloudPlatform powered by Apache CloudStack can help you deliver a cloud infrastructure with the performance, quality-of-service and automation required to confidently, and economically, host mission and business critical applications.

Register now

Learn more about Citrix and SolidFire 

Unlocking the Secret to QoS in the Cloud: The 6 Requirements of Your Storage Architecture

Web Host Industry Review Webinar with SolidFire
Tuesday, April 2, 2:00pm EST

There's lots of buzz around Quality of Service (QoS) these days, and also lots of questions. In this webinar, we'll discuss why QoS is the key to delivering performance for enterprise applications in the cloud and the 6 architectural requirements needed to guarantee it.

Register now

Learn more about storage QoS

My CloudPlatform Cloud goes SSD with SolidFire!!
Thursday, February 28, 2013 posted by Guest Blog: Tim Mackey, Citrix CloudPlatform & XenServer Evangelist

*** This guest blog was simultaneously cross-posted on The Citrix Blog and can be found here ***

Last year some of you followed, with great interest, the physical migration of my CloudPlatform demo cloud on the XenServer Facebook page.  Some even commented on how cool the storage I was using looked.  Unfortunately, as anyone who has had to deal with datacenter hardware knows all too well, servers which are running might not start back up if powered down, and this is no less true for storage controllers.  As it turned out, one of the controllers in my storage array failed, and it proved just a little bit harder to get it replaced than I had anticipated, so off I went to find a suitable replacement.  Before we go too far down my decision process, it's probably a good idea to review the two most common storage options in the cloud, and why you might want to choose one over another.

Local Storage
Local storage is by far the simplest of the choices; after all, most servers come with at least one disk, and you usually have the option to add in several more.  Typically local storage is used in an effort to control storage costs, and with decent shared storage starting in the tens of thousands of dollars, there is the potential for some savings.  Well, up until you understand IO, that is.  All spinning disks have spindles, and the amount of random IO you can get out of a spinning disk is a function of its rotational speed and the number of spindles it has.  If you are the sole user of the disk, the number of spindles doesn't matter too much, but as soon as you have multiple users (aka VMs), things can slow down quickly.  Of course SSD is always an option, but with enterprise SSD costing 5-10 times what the same capacity 15k SAS drive does, SSD for local storage isn't really a cost leader.  More importantly, local storage also historically came with an implicit limitation: VMs can't readily migrate between hosts.  Thankfully, the latest versions of both vSphere and XenServer effectively address this problem.

Shared Storage
In server virtualization, shared storage is typically used to allow for more effective host utilization.  If you need to start a new VM, there is no real way to predict which host in a cluster might have the free capacity, but with shared storage the host selection process can be disconnected from the storage management problem.  This is really good because anchoring the storage to a shared storage solution allows for more advanced functionality like automatically restarting VMs if the hardware should fail.  Regardless of whether you use file (NFS) or block (iSCSI) based storage, the IO available to you is a function of the number of disks, their speed and how efficient the storage array is at handling those IO requests.  The problem with traditional shared storage is that controllers don't understand the type of IO they are being asked to deliver.  To them, a database query and a starting VM are pretty much the same, and that leads to a serious problem in the cloud.

How I Arrived at SolidFire
When you look at the state of the world in storage arrays, the core trend today is greater and greater IOPS.  This is wonderful for the storage guys, but organizations are actually over-buying IOPS based on predictions of peak IO requirements.  In the world of IaaS, this is made worse by the lack of control over the IO demands of each cloud tenant.  Effectively, without careful storage design, the IO usage of one account could leave a second account IO starved.  SSDs offer a ton of IO, but that still doesn't solve the core problem of IO control.  Enter the guys from SolidFire.  Yes, the SolidFire storage solution is SSD based, which is cool.  Yes, it offers a ton of IOPS capacity, but it goes one level further.  With SolidFire, you actually specify the IOPS you need on a per-LUN/volume basis and associate it with an account.  This allows some pretty granular controls, but more importantly allows you to clearly establish an SLA on the storage side, and to ensure that if someone attempts to abuse the array, the impact on other tenants is easily manageable.
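For a flavor of what per-volume QoS settings look like, here is a hypothetical JSON-RPC request in the style of SolidFire's Element API (method and field names are approximations for illustration; consult the actual API documentation before relying on them):

```python
import json

# Hypothetical volume-creation payload with per-volume QoS settings.
request = {
    "method": "CreateVolume",
    "params": {
        "name": "tenant42-db",
        "accountID": 42,
        "totalSize": 100 * 1024**3,    # 100 GiB
        "qos": {
            "minIOPS": 1000,    # floor the volume is guaranteed
            "maxIOPS": 5000,    # sustained ceiling
            "burstIOPS": 8000,  # short-term burst allowance
        },
    },
    "id": 1,
}
print(json.dumps(request, indent=2))
```

The key idea is the min/max/burst triple attached to the volume itself: the SLA lives in the storage system, per LUN, rather than being an emergent property of whoever happens to share the spindles.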

As cool as that is, it's still not the full story.  I'm pretty well known for being a XenServer guy, and I'll freely admit that one of the bigger challenges I've had over the years has been thick provisioning on block based storage.  It's a pretty long story, but suffice it to say that if you want thin provisioning in XenServer your choices are local storage and NFS, or to choose storage based on StorageLink.  Now I have nothing against NFS, and honestly do use it for some of my storage in the demo cloud, but I definitely prefer iSCSI when it comes to storage management.  Here's where the SolidFire solution really got my attention.  Under the covers, they natively perform thin provisioning, data deduplication, and compression on each of the blocks, across LUNs.  In real life this means that despite the fact that I've requested a 20GB disk from the cluster, I am likely to be using far less than that, and while XenServer thinks it has the full 20GB, the cluster knows better.  Since I'm running a cloud, there is a ton of commonality between my templates, and deduplication is a wonderful addition.
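A rough model of the kind of space savings Tim describes (the written fraction, dedup ratio, and compression ratio below are hypothetical numbers, not measured results):

```python
def physical_usage_gb(logical_gb: float, written_fraction: float,
                      dedup_ratio: float, compression_ratio: float) -> float:
    """Approximate physical footprint after thin provisioning (only
    written blocks consume space), deduplication, and compression."""
    return logical_gb * written_fraction / (dedup_ratio * compression_ratio)

# 100 cloned 20GB template VMs, 50% of blocks written, 4:1 dedup
# (heavy template commonality), 1.5:1 compression:
print(physical_usage_gb(100 * 20, 0.5, 4.0, 1.5))  # ~167 GB physical
```

Under these assumptions, 2 TB of provisioned disks consumes well under a tenth of that physically, which is why the cluster "knows better" than the 20GB the hypervisor thinks it has.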

Here's the final, and arguably key point.  Since I was replacing storage, I could have taken the easy route and just got the current version of my existing array.  Nice, simple, and drop it right in.  Instead, I chose to look at exactly how my cloud was being used, and see if there wasn't a better solution in the market.  My key pain points were controlling IO utilization based on unknown workloads for the next several years, and being able to ensure that I wasn't going to run out of storage capacity any time soon.  SolidFire delivered on these, and that's why my cloud is now happily running on SSD.

SolidFire is a Citrix Ready partner in cloud solutions, and if you'd like to learn more about the solution, the Citrix Ready folks are hosting a joint webinar with SolidFire on March 13th, 2013 that anyone is invited to.  Click here to register.

-Tim Mackey, Citrix CloudPlatform & XenServer Evangelist

Quality of Service is Not a Feature... It’s an Architecture
Tuesday, February 26, 2013 posted by Dave Wright

SolidFire's unique ability to guarantee performance to thousands of applications at once has garnered praise from analysts and customers alike. Given the compelling advantages for performance isolation and guaranteed QoS, it's no wonder that other storage vendors are adding QoS features to their products. We recently discussed some of the simplistic approaches to QoS offered by other storage systems, however the ability to guarantee performance is not as simple as adding a new bullet point to a lengthy feature list.

Being able to guarantee performance in all situations - including failure scenarios, system overload, variable workloads, and elastic demand - requires an architecture built from the ground up specifically to guarantee Quality of Service. Trying to bolt Quality of Service onto an architecture that was never designed to deliver performance guarantees is like strapping a jet engine to a VW Beetle. The wheels will come off just when you get up to speed.

Solving performance from the core
SolidFire gets to the root of the performance problem with a new storage architecture that overcomes every predictability challenge through six core architectural requirements. Together, these six requirements enable true storage Quality of Service and establish the benchmark for guaranteeing performance within a multi-tenant infrastructure.

  • All-SSD Architecture
    • Enables delivery of consistent latency for every IO
  • True Scale-out Architecture
    • Linear, predictable performance gains as system scales
  • RAID-less Data Protection
    • Predictable performance in any failure condition
  • Balanced Load Distribution
    • Eliminate hot spots that create unpredictable IO latency
  • Fine Grain QoS Control
    • Completely eliminate noisy neighbors, and guarantee volume performance
  • Performance Virtualization
    • Control performance independent of capacity and on demand 

Over the next few weeks we'll dive into why you need each of these six architectural requirements to deliver guaranteed QoS. We'll also talk about why, no matter what the feature list says, traditional storage architectures just aren't up to the task, because Quality of Service isn't a feature - it's an architecture. 

-Dave Wright, Founder & CEO

Guaranteed Quality of Service: Its True Power and What It Means to a Cloud Service Provider
Tuesday, February 19, 2013 posted by Guest Blog: Julian Box, CEO and Co-Founder, Calligo

Having designed and implemented cloud infrastructure for over six years, for both Virtustream and now Calligo, I have long held the ability to guarantee storage Quality of Service (QoS) across all resources of a virtual datacentre as an important and personal design goal.

Until recently, cloud service providers have been building Infrastructure as a Service (IaaS) offerings mainly using technology that was never envisaged for use in a multi-tenanted environment. Achieving a consistent level of QoS across a multi-tenanted platform has been very complicated to deliver and difficult to maintain.

The reason why this, for me, is so important is that true cloud is about true agility, allowing clients to flex their utilization of resources as and when they need it - either automatically or at the touch of a button. Prior to my discovery of SolidFire, enabling dynamic storage provisioning (or guaranteeing the throughput of a SAN) was very expensive, incredibly difficult, and cumbersome.  SolidFire's critical QoS functionality enabled me to meet the exact performance requirements of my customers, and allowed me to react to changes in their requirements instantly.

Currently, mainstream IaaS providers struggle to fully guarantee workload performance within a multi-tenanted platform. Some specialist providers do guarantee it, but they must do so using large dedicated pools of storage, which is hardly efficient.

Until SolidFire, no one had a true on-demand architecture that covered I/O bandwidth and disk I/O while breaking the link to capacity within their offerings.  Most cloud providers offer some sort of guarantee around CPU and memory but start to struggle to guarantee I/O bandwidth and disk I/O. Add in the need to be able to control I/O on a server-by-server basis on the fly, and the traditional storage vendors' offerings struggle to deliver in multi-tenanted environments.

Having the ability to control bandwidth and disk I/O for thousands of applications is an extremely powerful tool. Coupled with the ability to adapt to changes on the fly, it allows my service offerings to meet the exact demands of my customers and gets them very close to true utility computing with performance guarantees. The dynamic, volume-level QoS functionality encapsulated in SolidFire's products is why this new breed of storage technology is so important.

Another key consideration in multi-tenanted platforms is workload behavior: in an enterprise, workloads normally have a rhythm to their peaks and troughs, but within a cloud environment this rhythm doesn't exist. Instead, it's replaced with a randomness that is massively impacted by "noisy neighbour" syndrome.

Until now, these attributes have only been dealt with by isolating the workloads on near dedicated hardware. This in effect forces providers into creating dedicated areas for these applications which is more akin to a managed service than true cloud, and considerably more costly to manage and maintain.

With SolidFire's guaranteed QoS functionality in place, we can create Service Level Agreements (SLAs) that truly meet our clients' requirements from an infrastructure perspective and, more importantly, tailor them on an application-by-application basis.

The key to SolidFire's technology isn't just that it solves problems that have, in my opinion, caused delays in cloud adoption, but that the solution is "simple." It is simple to deploy in relation to other technologies in this area, and most importantly it is simple to use, operate, and maintain.

From a client perspective, there is a clear desire to move more important and performance-sensitive applications to the cloud, yet with cloud providers unable to manage QoS levels these needs will largely remain unmet. It is my opinion that service providers that embrace these and other new technologies that have been designed specifically for use in the cloud will have the most success - and really allow the cloud to reach its full potential.

About Julian Box, CEO and Co-Founder, Calligo
Julian has over 25 years of experience helping organisations streamline operations through the innovative application of technology, including nearly a decade of delivering dynamic and agile virtualisation and cloud solutions.

Prior to Calligo, he founded and was Managing Director of VirtualizeIT Limited, a provider of virtualisation technology, including server, storage, and network virtualisation. From the time of its inception in 1995, VirtualizeIT won several UK & EMEA industry awards in recognition of its ability to deliver specialised consultancy services and complex virtualisation projects.

In 2008 Julian co-founded Virtustream Inc., a venture-capital backed Enterprise Cloud Service provider where as CTO he led the design and implementation of their industry-leading private multi-tenanted Infrastructure as a Service offering.

-Julian Box, CEO and Co-Founder, Calligo

Can your legacy SAN deliver Quality of Service (QoS)? Is popcorn a vegetable?
Tuesday, February 12, 2013 posted by Dave Wright

Think about it. If corn is a vegetable, why isn't popcorn? Likewise, if storage performance can be guaranteed, why can't any storage architecture do it?

It's a hard truth to face: legacy storage systems are simply not designed to handle the demands of multi-tenant cloud environments. More specifically, the few systems that claim storage Quality of Service (QoS) - or want to claim it on their roadmap - are really just "bolting it on" as an afterthought. And these "bolted on" methods of achieving QoS have unfortunate side effects.

Before we dive in further, let's first discuss why you should care about true storage QoS as a cloud service provider. Hosting business-critical applications in the cloud represents a large revenue growth opportunity for cloud service providers. But until storage performance is predictable and guaranteed, you won't be able to programmatically attract this type of business from your enterprise customers. Is there a solution? Yes, and the answer is storage QoS architected from the ground up with guaranteed performance in mind.

Let's take a closer look at some of the "bolt-on" methods that legacy systems use to try to perform something they can market as "QoS."


Prioritization

How it works - Prioritization defines applications simply as "more" or "less" important in relation to one another. This is often done in canned, well-described tiers such as "mission critical," "moderate," and "low."

Why it doesn't really offer QoS - While prioritization can indeed give some apps higher relative performance than others, it doesn't actually tell you what performance to expect from any given tier. It certainly can't guarantee performance, particularly if the problematic "noisy neighbor" sits at the same priority level. So for starters, there is no ability to guarantee that any one application will get the performance it needs. What's more, there is no way for a tenant to understand what their priority designation means in relation to the other priorities on the same system. It means nothing to tell a tenant they are prioritized as "moderate" unless they know how moderate compares to the other categorizations, and what system resources are dedicated to that particular tier. In addition, priority-based QoS can often make a "noisy neighbor" LOUDER: a higher-priority tenant is allowed more resources with which to turn up the volume.
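A small model shows why a priority tier is meaningless in absolute terms (the weights and IOPS totals are invented for illustration):

```python
def tier_iops(total_iops: int, weights: dict) -> dict:
    """Share-based prioritization: each tier gets IOPS in proportion to
    its weight. The absolute number a tier receives depends on total
    system capability and on who else is on the box - nothing is
    guaranteed to any one tenant."""
    total_weight = sum(weights.values())
    return {tier: total_iops * w // total_weight for tier, w in weights.items()}

# Same "moderate" designation, very different outcomes as neighbors change:
print(tier_iops(100_000, {"critical": 4, "moderate": 2, "low": 1}))
print(tier_iops(100_000, {"critical": 4, "critical2": 4, "moderate": 2}))
```

In the first mix "moderate" receives over 28,000 IOPS; add one more high-priority tenant and the same label is worth 20,000. The tenant's designation never changed, only their neighbors did.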

Rate limiting

How it works - Rate limiting attempts to deal with performance requirements by setting a hard limit on an application's rate of IO or bandwidth. Customers that pay for a higher service will get a higher limit.

Why it doesn't really offer QoS - Rate limiting can help quiet noisy neighbors, but does so only by "limiting" the amount of performance that an application has access to. This one-sided approach does nothing to guarantee that the set performance limit can actually be attained. Rate limiting is all about protecting the storage system rather than delivering true QoS to the applications. In addition, firm rate limits set on high performance or bursty applications can inject significant undesired latency.
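A minimal token-bucket sketch makes the one-sidedness clear: the limiter can only deny IO, never guarantee it (an illustrative sketch, not any vendor's implementation):

```python
import time

class RateLimiter:
    """Token-bucket IOPS cap: refills tokens at a fixed rate and spends
    one per IO. It can only throttle; it cannot make the backend deliver
    the limit, and denied IOs simply wait (added latency for bursts)."""

    def __init__(self, iops_limit: int):
        self.capacity = iops_limit
        self.tokens = float(iops_limit)
        self.rate = iops_limit          # tokens refilled per second
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                    # IO must be delayed or retried

limiter = RateLimiter(iops_limit=500)
granted = sum(limiter.allow() for _ in range(1000))
print(granted)  # only about half of 1000 back-to-back IOs pass immediately
```

Note what is missing: nothing in this mechanism reserves performance for the tenant. If the array is saturated by neighbors, a tenant "limited" to 500 IOPS may receive far fewer, which is exactly why a ceiling is not an SLA.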

Dedicated storage

How it works - IT managers attempt to deliver predictable performance by dedicating specific disks or drives to a particular application, isolating it from other applications or noisy neighbors.  

Why it doesn't really offer QoS - Dedicating storage to an application goes a long way toward eliminating "noisy neighbors," yet even dedicated infrastructure cannot guarantee a level of performance. A component failure in one of these storage islands can have a massive impact on application performance as system bandwidth and IO are redirected to recovering from the failure. Despite the dedication of resources, this approach still falls short in its ability to guarantee performance at any level. 

Tiered storage

How it works - Multiple tiers of different storage media (SSD, 15K rpm HDD, 7.2K rpm HDD) are combined to deliver different tiers of performance and capacity. Application performance is determined by the type of media the application resides on. In an effort to optimize application performance, predictive algorithms are layered over the system that try to predict, based on historical performance information, which data is "hot" and should be kept on SSD vs. "cold" and kept on HDD.

Why it doesn't really offer QoS - Tiering is the worst of all the "bolted on" solutions designed for delivering predictable performance. Quite simply, it is unable to deliver any level of storage QoS. Tiering actually amplifies "noisy neighbors" because they appear hot and are promoted to higher-performing (and scarcer) SSDs, displacing other volumes onto lower-performing, cold disks. Performance for every tenant varies wildly as the algorithms move data between media. No tenant knows what to expect of their IO, as they neither control the tiering algorithm nor have any insight into its effect on other tenants. Some tiering solutions try to offer QoS by pinning a particular application's data into a specific tier, but this is essentially dedicated storage (discussed above) at an even higher cost than usual.
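A naive promotion heuristic illustrates the amplification effect (a deliberately simplistic sketch; real tiering engines are more elaborate but share the same incentive):

```python
def promote_to_ssd(io_counts: dict, ssd_slots: int) -> list:
    """Naive tiering heuristic: promote the volumes with the most recent
    IO into scarce SSD slots. A noisy neighbor always looks 'hot', so it
    crowds steadier tenants down onto spinning disk."""
    ranked = sorted(io_counts, key=io_counts.get, reverse=True)
    return ranked[:ssd_slots]

# Recent IO per volume: the noisy tenant dominates the hot list.
io_counts = {"noisy": 90_000, "db": 8_000, "web": 5_000, "batch": 2_000}
print(promote_to_ssd(io_counts, ssd_slots=2))  # ['noisy', 'db']
```

The heuristic faithfully rewards whoever generates the most IO, which is precisely the tenant you would want to contain; the quiet, well-behaved volumes are the ones demoted.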

Stay tuned to our blog to learn more about storage QoS, and how a scale-out storage architecture designed from the ground up to deliver and guarantee consistent performance to thousands of volumes simultaneously is the ideal foundation for performance SLAs in a multi-tenant cloud environment.

 -Adam Carter, Director of Product Management

A Closer Look at Cloud Challenges: Noisy Neighbors
Tuesday, February 5, 2013 posted by SolidFire

Have you ever taken a look at the cost impact of "noisy neighbors" on your cloud? They're not only ruining performance for all your cloud tenants - they're also affecting your bottom line. The only way to truly guarantee performance is by gaining independent control of performance and capacity - so you can guarantee storage Quality of Service (QoS).

Noisy Neighbor


Report from Cloud Expo Europe: What is the role of performance in the cloud’s future?
Tuesday, February 5, 2013 posted by Guest Blog: Simon Robinson, Research Vice President, 451 Research

Hot on the heels of Cloud Expo Europe last week, I was invited to participate in a SolidFire-sponsored dinner at London's Soho Hotel with executives from a dozen UK-based service providers including Calligo and ShapeBlue.

The intention was to facilitate an open and honest discussion around the kind of things that are keeping service providers awake at night: what are the biggest opportunities in the market around 'cloud,' especially when it comes to running more mission-critical applications; what are some of the barriers, and; what can be done to help service providers differentiate and compete in an increasingly cut-throat market?

The evening overall was a roaring success - not just because the food and wine were excellent (though that certainly helped) - but because the conversation flowed with ease, and everyone around the table actively participated; indeed, no-one was backwards in coming forwards on some of the more contentious issues.

In my introductory remarks, I highlighted some of our recent research findings that suggest end-user organizations are interested in moving more performance-sensitive workloads to the cloud, but they need help in getting there. I also compared the current state of the cloud market to the Cambrian explosion: the period in the Earth's history when the number and variety of new species accelerated at an unprecedented rate. Any visitor to Cloud Expo could see this for themselves; the sheer number and variety of organizations offering some kind of enterprise cloud service or cloud-enabling technology certainly speaks to the extent of the opportunity. But it also underscores that the low signal-to-noise ratio in the cloud ecosystem can make this a very confusing space for end users.

What follows are my takeaways on what I thought were some of the most actively debated, and interesting, themes of the evening.

Defining the opportunity
There is still no agreement among service providers on what constitutes a 'cloud;' less still on whether this really matters or not. Cloud is still mostly marketing hype, and whilst the emergence of consumer clouds such as Apple's iCloud and Dropbox has helped to popularize the notion, this isn't always helpful for providers looking to sell 'enterprise-grade' cloud services.

Persuading end users to buy into the notion of cloud can still be tough
Expectations for cloud SLAs (in terms of availability) are often unrealistically high - buyers often ask for double or even triple site redundancy, but are rarely willing to pay for it. This is partly because there is still a strong 'server-hugging' mentality among IT managers who may feel threatened by cloud-based alternatives. There is a strong feeling that, despite the amount of hype cloud-based models have attracted, many IT organizations just don't understand the value they can derive by offloading some or all of the IT burden to a third party.

Current methods of expressing performance and meeting service levels need overhauling
Users often don't understand the factors that impact service levels such as availability and performance. Often user 'interference' is the culprit, and dialing-in extra performance is difficult with traditional storage technologies. More widespread use of API-based provisioning will help.

Storage remains a key bottleneck
Though not the only one, it's certainly keeping more performance-centric applications from moving to the cloud. Traditional storage is also complex and expensive, facts that often get in the way of developing flexible services for customers.

Customers still think of storage performance in terms of capacity rather than IOPS
This is tied to the fact that traditional storage systems have historically needed more disks added to address performance. Hence, customers are often confused about why 'enterprise' storage seems so expensive relative to the cost of buying a hard drive from a retailer.
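A back-of-envelope sketch makes the spindle economics concrete. The figures here are rough rules of thumb assumed for illustration (roughly 180 IOPS per 15K-rpm drive, 600GB per drive), not measurements:

```python
# Rough spindle math: with mechanical disks you buy drives to hit an IOPS
# target, and the capacity comes along whether you need it or not.
IOPS_PER_15K_HDD = 180        # rough rule of thumb, illustrative
DRIVE_CAPACITY_GB = 600       # illustrative drive size

target_iops = 10_000
drives_needed = -(-target_iops // IOPS_PER_15K_HDD)   # ceiling division
stranded_capacity_gb = drives_needed * DRIVE_CAPACITY_GB

print(drives_needed)          # 56 drives just to reach the IOPS target
print(stranded_capacity_gb)   # 33,600 GB of capacity purchased along the way
```

With numbers like these, it is no surprise that customers conflate capacity with performance: the invoice line item is always drives.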

Service providers will succeed by differentiating themselves through IT services that enable business transformation
Although there seems to be a 'race to the bottom' as cloud infrastructure commodifies, this is a dangerous game for service providers to play. Pricing cannot be totally ignored, however, and providers need to be in the same ballpark as the commodity cloud providers.

Users are rarely interested in pay-as-you-go pricing
They overwhelmingly prefer to pay up-front, but with the knowledge that their experience -- and costs -- will be predictable and stable, and that there is an option to dial-up or dial down resources if required (and lots of debate over whether on-demand 'bursting' is actually viable or not).

Lots of interest in liability insurance
Insurance (eg PLI/PII in the UK and E&O in the USA) may (or may not) be impacted by the cloud, and there is interest in how service providers may be able to take advantage of this. Still early days here, but some insurance companies are starting to assess the risk profiles of different 'clouds' based on their performance, availability, etc.

Underlying hardware is a commodity
Users rarely ask about the server networking or storage hardware. However, service providers still care A LOT - the mantra is still 'you get what you pay for' and there's still perceived value in certain brands. The hypervisor is similarly commodifying, though there are real religious allegiances here as well.

The notion of the 'software-defined' datacenter is popular
Software-defined networking is already leading some providers to radically reduce their network infrastructure costs, and there is a belief that software-defined storage will follow, having a similar effect.

From my perspective, the evening helped highlight the role that smart, opinionated, and passionate service providers are playing in driving the IT industry forward. I'm looking forward to continuing the conversation at other SolidFire and industry events. One such event is The 451 Group's European Hosting and Cloud Transformation Summit , taking place in London on April 9-10. Hope to see you there!

(Note: This is a guest post by Simon Robinson, Research Vice President, 451 Research)

Laying The GroundworkMonday, December 17, 2012 posted by Dave Wright

Over the past year SolidFire has been putting the pieces in place to build not only the next great storage company, but the first storage company focused on performance storage for large-scale cloud infrastructure. Our rallying point to achieve this objective is a laser focus on helping our customers 'Advance the way the world uses the cloud.' To have any chance at making this vision a reality requires us to lay down the groundwork today.

Looking back years from now, the foundational underpinnings of our success will not be limited to one part of the organization. They are company-wide, spanning engineering, operations, marketing, alliances, sales, finance and human resources. Across each of these teams we have made some significant moves in the past year to prepare for what lies ahead.

Before we charge ahead into 2013 I would like to take the opportunity to reflect on 2012 and some of the company and industry milestones that have put us in the position we are today.

2012 has been an incredible year for SolidFire. Reading through this list makes me tremendously proud of what our team has accomplished over the last 12 months. No question we still have a lot of work to do, but I could not be more excited about the team we have assembled, the product and culture we have built, and the opportunity in front of us as we move into 2013.

Happy holidays and we look forward to connecting again in the New Year.

-Dave Wright, Founder & CEO

Setting the Record Straight on Software-Defined StorageWednesday, November 21, 2012 posted by Dave Wright

Thanks to VMware's recent $1.26 billion purchase of Software-Defined-Networking (SDN) leader Nicira, and their new marketing push on the Software-Defined-Data-Center, everyone is running around trying to attach themselves to Software-Defined-Anything (SDx). This is as true for the storage market as it is any other segment of the technology ecosystem. It is a safe bet that there are a lot of storage companies, both old and new, scurrying around trying to figure out how to maneuver "Software-Defined" into their messaging.

This whole SDx concept is built on the idea that all virtualized data center resources (e.g. server, storage, networking, security) can be defined in software. These resources are then abstracted into a higher-level control plane where they are dynamically provisioned out in support of different applications and/or services. This is called Software-Defined because we are at least two layers removed from the physical hardware at this point, and all management, orchestration and provisioning of these services has to be done in software.

As it relates to storage, Software-Defined-Storage (SDS) is enabled by lower-level storage systems abstracting their physical resources into software in as dynamic, flexible and granular a manner as possible. These virtualized storage resources are then presented up to a control plane as "software-defined" services. The consumption and manipulation of these storage services is done through an orchestration layer like VMware, CloudStack or OpenStack. The quality and breadth of these services are highly dependent on virtualization and automation capabilities of the underlying hardware. More precisely, the control plane's effectiveness is dependent on the virtualized resources it is presented from the layers below it. Without the granular abstraction of physical storage resources, and APIs to define, flex and apply policy to these resources dynamically, the control plane is limited in the services it can provision out to virtual machines or applications.

As you can see from the description above, SDS is a combination of virtualization, abstraction and control. A storage system by itself is not SDS. Storage is a supporting element for anyone looking to manage their infrastructure within the "Software-Defined" framework. There will be a lot of vendors trying to muddy the waters between Software-Only storage and Software-Defined Storage. No matter what anyone tries to tell you, they are not the same thing. Software-Only storage still requires hardware. The fact that it is sold as software-only is more of a go-to-market and packaging decision than a technology decision. Meanwhile, SDS is a higher-level framework for the orchestration, provisioning and consumption of storage.

In a storage system properly architected to support SDS, all of the management of system resources is done through software. These resources are then presented up to the control plane, in a fine-grained fashion, via REST APIs. These APIs enable the control plane to more precisely provision storage services to the unique needs of the applications running above it. The APIs are effectively relinquishing the management of these resources to the control plane to carve them up and flex as required. This is the way it should be. This communication layer is essential to supporting Software-Defined-Storage.

In the year ahead a lot of vendors will be quick to claim they are "software-defined-storage". However, software-defined storage is NOT a storage system concept. No single product, system or platform makes up SDS, but that won't prevent a lot of people from telling you otherwise. To quickly get to the signal in this forthcoming SDS marketing storm, here are a few more questions to ask:

  • When your vendor claims to be Software-Defined-Storage, ask them how they virtualize the underlying hardware and present it up to the control plane.

  • Ask them if they can abstract and provision not only storage capacity but also performance.
  • When they claim they can, ask if it is possible to make an API call to the system for a 100GB volume with 1000 IOPS. Then ask if they can dynamically adjust this policy on the fly through software.
  • Ask them if they have a complete API that allows automation of all storage services so that higher-level orchestration layers can fully exploit the benefits of SDS.
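As a sketch of what passing that litmus test might look like, here are hypothetical JSON-RPC payloads for provisioning a volume by capacity and performance, then adjusting the policy in flight. The method and field names are illustrative assumptions, not any specific vendor's published API:

```python
# Hypothetical JSON-RPC payloads: provision a volume by capacity AND
# performance, then change the QoS policy with no data migration.

def create_volume_request(name, size_gb, min_iops, max_iops):
    """Build a (hypothetical) create call that carries a QoS policy."""
    return {
        "method": "CreateVolume",
        "params": {
            "name": name,
            "sizeGB": size_gb,
            "qos": {"minIOPS": min_iops, "maxIOPS": max_iops},
        },
    }

def modify_qos_request(volume_id, min_iops, max_iops):
    """Adjust the policy on the fly -- a software change, not a migration."""
    return {
        "method": "ModifyVolume",
        "params": {
            "volumeID": volume_id,
            "qos": {"minIOPS": min_iops, "maxIOPS": max_iops},
        },
    }

# The 100GB / 1000 IOPS volume from the question above:
req = create_volume_request("tenant-a-vol1", 100, min_iops=1000, max_iops=2000)
```

The point of the exercise is the shape of the request: if performance cannot be expressed as a first-class, per-volume parameter alongside capacity, the control plane has nothing to orchestrate.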

The "Software-Defined" movement has the chance to be a major leap forward for how infrastructure resources are provisioned, managed and automated. But a lot of pieces of the infrastructure need to come together to make the vision of a Software-Defined-Data-Center anything close to reality. As it relates to storage, in the coming year don't be fooled by vendors' quick claims of Software-Defined-Storage. Using the questions above, dig beyond the marketing smokescreen to understand what those claims really mean. You might be surprised at what you actually find.

-Dave Wright, Founder & CEO

The (R)EVOLUTION Is HereTuesday, November 13, 2012 posted by Dave Wright

"Public cloud services are simultaneously cannibalizing and stimulating demand for external IT services spending, according to Gartner, Inc. Infrastructure as a service (IaaS) adoption - the most basic and fundamental form of cloud computing service - has expanded beyond development and test use cases."

- Gartner Group, Press Release, 11/01/12

Just last week I was at the Next Generation Storage Symposium delivering a presentation about the undeniable shifts in the computing landscape that are transforming IT as we know it. The combined forces of mobile and cloud are rapidly becoming the solution for an increasing percentage of IT needs.

From a cloud perspective we are still in the early days, but market data confirms the inevitable move to public and private cloud infrastructures. On the supply side of the equation, pure-play cloud providers and managed hosters are responding. The 451 Group recently released a report on the IaaS market predicting 49% annual growth through the year 2015.

On the demand side, Gartner recently released a survey of almost 600 organizations globally regarding customers' use of the cloud for production applications.

"A recent Gartner survey found that 19 percent of organizations are using cloud computing for most of production computing, and 20 percent of organizations are using storage as a service for all, or most, storage requirements."

19% penetration for production applications in the cloud is a great start, but it means there is another 81% to go. How long is it going to take to get there? To continually expand the spectrum of applications that can be hosted in a cloud environment requires game changing innovations at all layers of the cloud infrastructure. Until now storage has been a real laggard. Constrained to the options available from existing storage vendors, this advancement of cloud computing could take decades.

We are not willing to wait that long. With the announcement today of general availability for SolidFire's all-SSD cloud-scale storage system, cloud providers can break free from those legacy storage systems. Purpose-built to guarantee performance to thousands of applications simultaneously, SolidFire has set the bar for what a high-performance storage system built for cloud computing should look like. Our early customers, including ViaWest, Databarracks, Calligo and CloudSigma seem to agree. Each is using the SolidFire platform as a springboard to drive more and more production applications to the cloud.

The (r)evolution is here, and it's growing by the day. If you're building a large public or private cloud, come talk to SolidFire about how we're advancing the way the world uses the cloud.

-Dave Wright, Founder & CEO

VM Density: The Key to Unlocking Higher Profits in the CloudThursday, November 8, 2012 posted by Dave Wright

In our last two blog posts we have talked about the importance of a high-performance storage architecture and fine-grain quality of service controls in a multi-tenant cloud infrastructure. While mildly interesting individually, it is the unique combination of performance and control within a single platform that is powerful. How does this functionality translate into real business value for a cloud service provider (CSP)? The answer is Virtual Machine (VM) density.

In traditional storage terms, density is a measure of the amount of capacity or IOPS packed into as small a footprint as possible (e.g. 1U). But thinking about density as purely a capacity or performance concept isn't useful for service providers hosting production applications. In an environment that requires predictable performance, capacity density is a meaningless metric if your system lacks the performance (IOPS) necessary to access those volumes. The result is often a severely underutilized system, run that way to ensure the provisioned capacity has access to the performance it needs. Not exactly the most efficient approach.

Similarly, in a multi-tenant environment IOPS density alone is insufficient. Without any sort of control or governance over this bucket of performance there is no way for a cloud provider to guarantee performance to any of the applications running on that infrastructure. Consequently, customers can't entrust cloud providers with their performance sensitive applications.

For a service provider to properly monetize a dense IOPS footprint requires the ability to provision and guarantee those IOPS to each and every virtual machine. The virtual machine is the smallest unit of consumption in a cloud infrastructure. For a cloud infrastructure hosting performance-sensitive applications, recklessly packing virtual machines onto a platform without the ability to deliver predictable performance to each is a recipe for disaster. Likewise, forced under-provisioning of a storage system to ensure each VM gets the resources it needs is a recipe for going out of business.
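The trade-off above can be made concrete with a toy calculation: a system hosts only as many VMs as both its capacity and its guaranteeable IOPS allow, so the scarcer resource sets the density. All numbers below are illustrative assumptions:

```python
# VM density is bounded by whichever resource runs out first:
# capacity or guaranteeable IOPS.
def vm_density(system_capacity_gb, system_iops, vm_capacity_gb, vm_guaranteed_iops):
    by_capacity = system_capacity_gb // vm_capacity_gb
    by_iops = system_iops // vm_guaranteed_iops
    return min(by_capacity, by_iops)

# A capacity-dense but IOPS-poor system strands most of its capacity:
low = vm_density(100_000, 20_000, vm_capacity_gb=100, vm_guaranteed_iops=500)
print(low)    # 40 VMs -- IOPS-bound; 96% of capacity sits idle

# The same capacity with a performance-dense system hosts far more VMs:
high = vm_density(100_000, 500_000, vm_capacity_gb=100, vm_guaranteed_iops=500)
print(high)   # 1000 VMs -- now capacity-bound
```

This is why capacity density or IOPS density alone is the wrong metric: the revenue-bearing unit is the VM, and density in VMs is a min() over both resources.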

The ability for a cloud service provider to deliver profits is directly related to their ability to confidently host the largest number of virtual machines in the smallest storage footprint possible. This is VM density. This is the key to unlocking higher profits from your cloud infrastructure. Unfortunately, you can't get there with storage systems available on the market today. Six days from now that all changes.

-Dave Wright, Founder & CEO

IOPS Alone Can’t Slay the Noisy NeighborThursday, November 1, 2012 posted by Dave Wright

In the most recent post from our high-performance (r)evolution mini-series I reviewed what we consider to be the real measures of storage performance, and the importance of looking beyond IOPS-based vanity metrics when evaluating a high-performance storage architecture.  I want to build on this conversation by discussing an all too familiar "friend" to anyone attempting to run performance sensitive applications in a multi-tenant cloud infrastructure: The Noisy Neighbor.

The Noisy Neighbor is the guy that ruins the party for everyone else. In cloud storage terms, the Noisy Neighbor is the application or volume that consumes a disproportionate amount of available IOPS at the expense of everyone else. Unable to isolate or predict the behavior of the Noisy Neighbor, service providers can't guarantee performance to any of their cloud based customers. Unable to get predictable performance from their cloud services provider, most customers simply don't trust them with any of their business critical or performance sensitive applications. This trickle down effect impairs the ability of enterprises to fully embrace the cloud while forcing cloud service providers to leave a massive amount of potential revenue on the table (and off the cloud).

For a cloud services provider, the initial reaction to the Noisy Neighbor is to throw more storage performance (i.e. IOPS) at the problem so that the offender is drowned out by a sea of IOPS. These IOPS could be obtained in a number of different ways, including an SSD appliance, a dedicated SAN, dedicated physical server infrastructure, short-stroking drives, or underutilizing disk systems to ensure adequate available performance. Unfortunately, these are not sustainable solutions for two reasons: 1) in the hyper-competitive cloud market, where efficiency is paramount, cloud providers cannot afford the underutilization inherent in these approaches; and 2) simply throwing gross performance at the Noisy Neighbor does not solve the real problem: the need for predictable and consistent performance.

Regardless of the IOPS available, the lack of control around how this performance is provisioned exposes all tenants to an unknown and unacceptable level of performance variance. To ensure any degree of usability, IOPS must be accompanied by quality-of-service controls that govern the provisioning and enforcement of performance, ensuring each application receives the allocation it needs to run effectively in the cloud. It's important to note that priority-based QoS isn't enough - "high," "medium," or "low" levels of relative performance don't do anything to actually guarantee IOPS or give customers a realistic view of what performance to expect at any given time. To ensure efficiency, these controls must be granular enough to allow service providers to independently dial in performance to the unique needs of each volume or application.
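A minimal sketch of what hard per-volume enforcement means, as opposed to priority-based QoS, is a token bucket refilled once per second. This is an illustration of the concept only, not SolidFire's implementation:

```python
# Token-bucket sketch of a hard per-volume IOPS cap. Unlike "high/medium/low"
# priorities, the limit is an absolute number the tenant can plan around.
class VolumeQoS:
    def __init__(self, max_iops):
        self.max_iops = max_iops   # hard cap, expressed in I/Os per second
        self.tokens = max_iops

    def tick(self):
        """Refill the bucket; called once per second."""
        self.tokens = self.max_iops

    def admit(self, io_count):
        """Admit up to io_count I/Os this second; the excess is throttled."""
        admitted = min(io_count, self.tokens)
        self.tokens -= admitted
        return admitted

noisy = VolumeQoS(max_iops=1_000)
# A tenant bursting 50,000 I/Os in one second is clamped to its allocation,
# so its neighbors' provisioned IOPS remain untouched:
assert noisy.admit(50_000) == 1_000
```

A real scheduler would also enforce a minimum (the guarantee side) and allow controlled bursting, but even this sketch shows the difference: the noisy neighbor's ceiling is absolute, not relative to whoever else is on the box.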

So while a performance-centric approach may pose as the quick fix to slay the noisy neighbor, don't stop there. We didn't. In a multi-tenant environment, when looking to host performance sensitive applications you can only get so far on full throttle performance. By combining a high-performance architecture with fine grain quality-of-service controls you can set and maintain hard SLAs around storage performance more efficiently and more profitably than ever before. Starting on 11/13/12 you will be able to do just that. Get Ready.

-Dave Wright, Founder & CEO

Performance And Profits Should Not Be Mutually ExclusiveThursday, October 25, 2012 posted by Dave Wright

In my last post I discussed the need for new innovations to bridge the gap between today's cloud service offerings and the infrastructure required to bring all applications into the cloud. Most enterprises lack confidence that a multi-tenant cloud can provide the predictable, consistent storage performance and high availability their production applications demand. Meanwhile, service providers who are trying to bridge this gap with legacy storage solutions are struggling with the seemingly inverse relationship between performance and profits. It doesn't have to be this way.

At SolidFire we think the recipe for successfully increasing both performance and profit contains three key ingredients: a high performance storage architecture, fine grain QoS controls, and virtual machine (VM) density. Combining these three components together into a single platform has powerful, and profitable, implications for cloud service providers. Over the next few weeks I will dive deeper into each of these ingredients. First let's tackle the importance of a high-performance architecture.

Performance to cloud service providers isn't just some IOPS-based vanity metric. For those running IT as a profit center the real measure of performance is how well the underlying architecture endures under normal operating conditions, including:

  • Deduplication, compression and thin provisioning processes
  • Adding and removing capacity to a system
  • Linear scaling of capacity and performance
  • Adjusting per volume or tenant quality-of-service settings
  • Recovery from failure conditions

When evaluating a high-performance storage architecture you have to be careful. While raw IOPS are important, on their own they are not a viable business model for service providers. You can spend a lot of money on storage performance and still present an unpredictable environment to your customers' performance-sensitive applications.

At SolidFire the ability to scale our all-flash design to 5 million IOPS is just the starting point. Equally important as raw performance is the ability to guarantee it at an individual volume level; offering consistent and predictable performance regardless of system activity or condition.

Hosting high performance applications represents a massive growth opportunity for cloud providers. However, without the right balance of performance and predictability, this opportunity remains out of reach. SolidFire helps close the gap for cloud providers, making high-performance applications in the cloud a profitable reality.  Learn how on 11/13/12. Get Ready.

-Dave Wright, Founder & CEO

It's time to join the high-performance (r)evolution! Get Ready!Tuesday, October 16, 2012 posted by Dave Wright

So what does enterprise IT need to confidently run more production applications in a cloud environment? Rodney Rogers (@rjrogers87) Chairman & CEO of Virtustream recently blogged on the six key attributes required to run SAP in the cloud. Rodney's first attribute nails the key storage challenge that must be overcome:

Application-level SLAs without giving up the economics of multi-tenant: If you deploy an architecture that can control my compute and storage IOPS within your multi-tenant stack, you can give me application-level latency guarantees. With this I can then trust production instances of SAP, and thus my revenue generating systems, on your cloud.

               - The Enterprise: I'm Not Sexy and I Know It, TechCrunch 10/06/12

Advancing the cloud as a viable medium for more than bursty, performance-insensitive applications is part evolution and part revolution. The evolution side requires enterprises to become increasingly comfortable with hosting their business critical applications in the cloud. But this evolution is stuck in neutral without revolutionary new technology that bridges the gap between today's cloud and the infrastructure required to bring all applications into the cloud. For the cloud to evolve, enterprises need a greater degree of confidence that production applications will have access to predictable and consistent performance. Up till now that level of confidence and performance guarantee hasn't existed. But that is all about to change.

Starting on 11.13.12, cloud providers worldwide will have access to storage technology from SolidFire that will completely transform customers' expectations for what is possible in a cloud infrastructure. SolidFire's all-SSD storage platform delivers the unique combination of performance, control and VM density required for service providers to confidently and profitably host their customers' most performance-sensitive workloads.

On 11.13.12 service providers will have a choice.  Continue to struggle with differentiation in a hyper-competitive market, or innovate their way to higher profits by delivering guaranteed performance to business critical applications. SolidFire's ability to combine the best of dedicated all-SSD performance with true multi-tenant economics is nothing short of revolutionary.  And yes, we can help you do it all below the cost of HDD systems.

The high-performance (r)evolution is fast approaching, and we want you to join us!  Evolving your cloud to accommodate high-performance applications is easier than you think.  Stay tuned and our team will show how...

-Dave Wright, Founder & CEO

Celebrating CinderFriday, September 28, 2012 posted by John Griffith

Welcome Cinder!!! Six months ago at the OpenStack Folsom Design Summit there were multiple sessions focused on the idea of separating block storage out of Nova. Block storage is an integral component of a cloud infrastructure. Accelerating the advancement of the block storage service within OpenStack required greater focus, awareness and contribution. In short, it needed its own project. Thanks to a lot of hard work from over 50 contributors, Cinder (aka OpenStack Block Storage) is now a reality.

The birth of a new core OpenStack project is a significant accomplishment. It has been an incredible experience watching this project come to life with everything ranging from the creation in Launchpad, to git repos, Gerrit and Jenkins infrastructure, Devstack, Tempest, etc.  A lot of hard work from a lot of people working together made this happen in a very short period of time.

Once all the pieces were in place for Cinder to live on its own, the fun began! Job one was to extract nova-volumes from Nova. This included significant modifications to Nova just to make the extraction possible. Along the way it was imperative to maintain compatibility and NOT impact existing volume APIs. In parallel with the Nova work, the process of porting Nova-Volume code into Cinder was moving full speed ahead. Essentially the first month after the Folsom Summit was dedicated to these efforts.

Once we had an independent service, endpoint mappings and a new Cinder client we were ready to roll. While compatibility was a clear priority in this release we also strived for quality improvements and minimally invasive feature additions. Some of the key block storage enhancements included in the Folsom release include:

  • A Nova-Volume compatible Block Storage Service
  • Updates to all of the back-end storage drivers
  • NFS as block storage support
  • Improved Boot From Volume with ability to specify image at volume creation
  • Ability to create an Image From Volume
  • Persistent iSCSI targets
  • Resuming interrupted volume operations in case of service shutdown

Along with all the work in Cinder, we also ported every bug fix and feature to Nova-Volume. Why, you ask? With Nova-Volume being deprecated, we wanted to minimize confusion and make the migration process as easy and painless as possible. We also built tools to migrate your existing Nova-Volume database tables to your new Cinder nodes. The migration tools are included in the cinder-manage utility, which can be used to migrate your DB as well as your persistent iSCSI target files. One thing to keep in mind: to make this migration as smooth as possible, you must upgrade your existing Nova-Volume install to Folsom before performing the migration to Cinder.

It's an exciting time for Cinder and we are just getting started. There is significant potential for Block Storage in OpenStack. Now that we have the initial release of Cinder stable and ready for use, we can turn our focus to Grizzly. I look forward to discussing new feature additions and improvements we are working on at the OpenStack Summit in two weeks in San Diego. See you there!

-John Griffith, Senior Software Engineer & Cinder PTL

Assembling Cloud Building BlocksWednesday, September 12, 2012 posted by Dave Cahill

A cloud computing service is typically measured by the quality and breadth of functionality it delivers rather than the technology it is built on. However, these services are made possible by a vast mesh of hardware, software and middleware. The relationship between these parts is critical to enable a cost-effective, reliable, and automated cloud.

Time to market is a huge competitive differentiator for cloud service providers and deeper integrations across key cloud building blocks are critical to accelerating the realization of the full promise of cloud computing.  At SolidFire we are actively driving integrations with multiple partners across the entire spectrum of cloud computing to allow our customers to quickly deploy new services fueled by SolidFire.

Today we have announced our initial partner ecosystem, including partnerships and integrations with Arista Networks, Citrix Systems (Citrix Ready), Canonical, OnApp, OpenStack, Tier 3 and VMware (VMware Ready). As we continue to expand this ecosystem, we are focused on partnerships that deliver out-of-the-box value with relevant, and validated, joint solutions that maximize adoption and enable customers' cloud services to move into production faster. It is difficult today to have a cloud conversation without talking about one or more of these partners as a core component of the solution.

While we're proud of our initial efforts and all that we have achieved in a short period of time, we are only scratching the surface. Moving forward you can expect SolidFire to expand the depth and breadth of our ecosystem to ensure strategic alignment with the technology and services that are integral to the planning, building and running of cloud infrastructures. In the meantime, if you are building a cloud and you feel there are additional vendors or technologies we should include in our ecosystem, please let us know.

-Dave Cahill, Director of Strategic Alliances

The Not So Hidden Costs Of Retrofitting A Plane Mid-Flight
Tuesday, August 21, 2012 posted by Adam Carter

In enterprise IT it is common for vendors to retrofit older architectures to address new markets. With the increased acceptance of SSDs as a viable medium in the storage hierarchy, we are running across these kinds of "solutions" with increased frequency. Unwilling to cannibalize their bread-and-butter revenue streams, storage vendors slot SSDs behind controllers designed for the performance, throughput and read/write characteristics of hard disk drives. The result is a suboptimal solution severely handicapped by the constraints of the legacy architecture. A recent example of this dynamic is on display for all to see with HP's announcement of an all-flash version of its P10000 (formerly 3PAR) storage array. From the limited details provided, here are some of the most revealing shortcomings of this "old solutions for new problems" approach:

  • Controller Bottleneck. Constrained by the legacy controller design, the P10000 can't exploit all of the raw performance provided by the SSDs. Once the controller IOPS are maxed out, each incremental drive only adds capacity. So while additional drives may improve the $/GB story, the $/IOP metric heads in the wrong direction.
  • Capacity Limitation. The current maximum configuration of the all-SSD version of the P10000 is 512 drives. Using 200GB SLC SSDs, the maximum raw capacity of this design is 102.4TB. In comparison, our recently announced SF6010 cluster starts at 120TB of effective capacity and scales to 2PB. It is also worth noting that a 2PB SolidFire cluster would require half the rack space of a maxed-out 102.4TB P10000 configuration.
  • System Utilization. Due to the controller constraints alluded to above, the system's drive chassis can only be 40% populated before maxing out the available IOPS. This leaves 60% of the system empty, wasting very valuable real estate.
  • Drive Utilization. Based on the IOPS figures HP recently produced to demonstrate the SSD equivalent of its disk-based SPC benchmark, the net IOPS per SSD is in the range of 750-880. This yield is well below the actual SSD specifications, implying considerable underutilization of expensive SLC media.
  • Rack Utilization. A P10000 array maxed out with SSDs (totaling 512 drives) would require five racks of equipment despite a 40% utilization rate per rack. Achieving the same IOPS capacity from a SolidFire SF3010 would require a 10-node cluster and 95% less data center real estate (only 10U).
  • Power Draw. The required five racks of HP equipment equate to a power draw of 13,295 Watts, or 33.9 IOPS/Watt. In comparison, a 10-node SolidFire cluster under full load draws approximately 3,000 Watts, or 166.7 IOPS/Watt.
  • Performance Variability. To offset the expensive SLC SSDs, HP has "dynamic" tiering software that it refers to as Adaptive Optimization. However, moving data between tiers is a reactive process that tends to demote the wrong data to lower tiers, which can expose a customer to dramatic performance variability. We have given this topic extensive coverage in prior blogs here, here and here.

So how do these specs compare to the clean-slate approach that we have taken here at SolidFire? In a recent article based on the SPC benchmark comparison, HP stated that the $/IOP of its all-flash array was $1.98. Based on the 450k IOPS benchmark, this equates to a system cost of $891,421. In comparison, at current list pricing for a 10-node cluster, SolidFire can deliver 11% more IOPS and 170% more capacity for 33% lower cost, 95% less real estate and 77% less power draw. In the cloud market, where infrastructure cost and efficiency are survival mandates, these deltas equate to meaningful competitive differentiation. If these numbers matter to you and your business, let us walk you through the benefits of a purpose-built design in more detail.
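The deltas quoted above can be sanity-checked with simple arithmetic. A quick sketch, treating HP's ~450k IOPS SPC result, the $1.98/IOP figure, and the quoted wattages as inputs (the SolidFire IOPS figure simply follows from "11% more IOPS"):

```python
# Inputs quoted in the post above.
hp_iops = 450_212            # HP P10000 SPC benchmark result (~450k IOPS)
hp_cost_per_iop = 1.98       # HP's stated $/IOP
hp_watts = 13_295            # power draw of five racks of HP equipment

sf_iops = hp_iops * 1.11     # "11% more IOPS" from a 10-node SolidFire cluster
sf_watts = 3_000             # approximate full-load draw of that cluster

print(f"HP system cost: ${hp_iops * hp_cost_per_iop:,.0f}")  # ~$891,420
print(f"HP IOPS/Watt:   {hp_iops / hp_watts:.1f}")           # ~33.9
print(f"SF IOPS/Watt:   {sf_iops / sf_watts:.1f}")           # ~166.6
```

The IOPS/Watt gap (roughly 5x) falls directly out of the quoted wattages; no assumptions beyond the post's own numbers are needed.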

-Adam Carter, Director of Product Management

Cloud With Confidence
Wednesday, August 8, 2012 posted by Dave Cahill

With more than 1 trillion objects stored in S3, Amazon has clearly led the way in the first wave of enterprise cloud adoption. Amazon and others have proven that the cloud is viable for bursty and less performance-sensitive applications (e.g. test/dev), along with startups looking to ramp IT without heavy capex. The rapid adoption of cloud computing for these use cases continues to prove they are well served by this type of infrastructure.

This market adoption profile is a close parallel to the early years of server virtualization technology. The initial use case for market leader VMware was very similar to what we are seeing in the cloud. However, the larger market opportunity was always about getting users comfortable enough with server virtualization to run more demanding applications. With the introduction and maturation of the vCenter suite, VMware enticed users to migrate more performance-sensitive applications to virtual servers. Fast forward to today, where running production applications in virtualized environments is commonplace.

Apply this trajectory to cloud adoption and the question becomes "what is needed for enterprise IT departments to confidently run more traditional applications in the cloud?" The answer: a greater degree of confidence that these applications will have access to predictable and consistent performance.

Cloud infrastructures today are not well suited to meet the performance and consistency requirements of most database-backed applications. This helps to explain why they are still, for the most part, run on-premise on a dedicated SAN. AWS' James Hamilton summed up the unique challenge presented by I/O-intensive workloads in his recent blog:

"The key observation is that these random I/O-intensive workloads need to have IOPS available whenever they are needed. When a database runs slowly, the entire application runs poorly. Best effort is not enough and competing for resources with other workloads doesn't work. When high I/O rates are needed, they are needed immediately and must be there reliably."

-James Hamilton, EBS Provisioned IOPS & Optimized Instance Types 8/01/12

In recognition of customers' interest in hosting I/O-intensive applications, AWS recently followed its announcement of high I/O EC2 instances by introducing Provisioned IOPS for EBS. This service allows a customer to specify I/O rates for specific volumes inside EBS. Up to 1,000 IOPS can be allocated per volume, with the ability to stripe up to 10 volumes together per account to compose a 10,000 IOPS virtual volume.

This provisioned IOPS concept is very similar to the IOPS QoS controls enabled by our performance virtualization technology. It was a year ago this week that Amazon's Hamilton blogged about the merits of SolidFire's approach to provisioning performance:

This system can support workloads that need dead reliable, never changing I/O requirements. It can also support dead reliable average case with rare excursions above (e.g. during a database checkpoint). It's also easy to support workloads that soak up resources left over after satisfying the most demanding workloads without impacting other users. Overall, a nice simple and very flexible solution to a very difficult problem.

- James Hamilton, SolidFire: Cloud Operators Become a Market, 8/01/11

So what do we think of Amazon's Provisioned IOPS announcement? Instilling confidence in IT departments to deploy more of their application footprint in the cloud is going to require a lot more than just our evangelism efforts. In that respect, we couldn't have chosen a better ally to help drive the next phase of cloud market growth.

For cloud and hosting providers, today's announcement from Amazon has again raised the stakes. For better or worse Amazon is the benchmark against which all others need to carve out their own unique niche. Success will be dependent on the ability to maintain a differentiated offering across multiple dimensions including cost, quality and breadth of service offerings.

-Dave Cahill, Director of Strategic Alliances

The Train is Leaving the Station
Friday, July 20, 2012 posted by Dave Wright

"Magnetic disks are rapidly starting to exhibit tape-like properties and with modern workloads being increasingly random, they are becoming less and less suitable as a storage system."

-Werner Vogels, All Things Distributed, 7/19/12

Yesterday morning Amazon announced high I/O EC2 instances designed to run low-latency, I/O-intensive applications. Building on its initial foray into SSD-based storage, this new service allows a customer to provision dedicated SSDs from local storage for each instance. Designed only for EC2 instances, the use case is limited to ephemeral storage (i.e. no EBS equivalent). From a cost perspective, Amazon is charging $3.10 per instance-hour, compared to $1.30 for the closest disk-based equivalent. Based on a 30-day month, this rolls up to roughly $2,200 per month, or $0.65/GB/month. In its current form the service is delivered in only one package (2TB of local storage, visible as 2 x 1TB volumes), lacking the ability to tailor storage performance or capacity to workloads that don't fit this profile.
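The two cost figures reconcile with quick arithmetic. In my reading (an assumption on my part, not Amazon's published math), the $0.65/GB/month is the hourly premium over the disk-based equivalent spread across the ~2TB of local SSD:

```python
hours = 24 * 30                    # 30-day month, as in the post
ssd_rate, disk_rate = 3.10, 1.30   # $ per instance-hour
local_gb = 2_000                   # 2 x 1TB local SSD volumes

monthly_total = ssd_rate * hours   # total instance cost for the month
ssd_premium_per_gb = (ssd_rate - disk_rate) * hours / local_gb

print(round(monthly_total))          # 2232, i.e. roughly $2,200/month
print(round(ssd_premium_per_gb, 2))  # 0.65 $/GB/month
```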

Most SSD-based cloud offerings today have taken a very similar approach to what Amazon announced: local SSDs in a box. All the typical tradeoffs of local versus shared storage still apply: no sharing or multi-tenancy, no high availability, and no ability to move data between instances. For some workloads, like MongoDB or Cassandra, this solution should suffice. For traditional enterprise applications, however, the lack of reliability remains a major sticking point. Moreover, achieving the price points required to attract a broader range of applications will require a multi-tenant infrastructure.

Harping further on the limitations of the initial offering misses the broader implications of this announcement. The performance inconsistency that exists in most cloud environments is the most frequently cited barrier to broader adoption of I/O-intensive applications in the cloud. Amazon's broader adoption of SSDs in its cloud portfolio is an extremely important milestone for cloud computing and a strong directional indicator of where the market is going. In short order, we fully expect more cloud providers to follow in Amazon's footsteps. The good news for now is that Amazon has left the door open for others to come along with a differentiated value proposition. However, when it comes to SSDs in the cloud, the train is clearly leaving the station...all aboard!

-Dave Wright, Founder & CEO

Optimize your cloud on your terms, not ours
Wednesday, June 20, 2012 posted by Adam Carter

The need for infrastructure scale within our cloud service provider customers is a major influence on our roadmap. Our recent scale blog provides a view into what we consider to be the defining characteristics of a storage system designed to operate under the constraints of scale. For cloud providers, infrastructure scale and business scale are inextricably linked. To successfully scale their businesses, cloud providers need more flexibility from their storage infrastructure to accommodate the growing needs of a diverse range of applications or tenants.

Traditional storage systems can scale capacity with additional disk shelves, but getting more performance requires changing storage media, forklift controller upgrades, or short-stroking drives and wasting capacity. By comparison, SolidFire's true scale-out architecture increases performance and capacity linearly and creates a single pool of capacity and performance for all tenants. While our scale-out architecture has always allowed service providers to easily scale performance and capacity, that growth came with a fixed ratio of capacity to performance. Our long term goal is to allow service providers to add precisely the ratio of capacity and performance that they require for maximum efficiency based on the needs of their customers.

Today we are taking the next step on that roadmap by introducing our new high-density storage node, the SF6010. You can see the detailed specs of the new system here. Compared to our existing SF3010 model, the SF6010 is simply a bigger pool of capacity for providers to provision out to their customers. Cloud service providers can now create a dense multi-tenant cloud environment backed by up to 2.4 petabytes of capacity and 5 million IOPS, all within a single system.
Assuming an average of 100GB of capacity per virtual machine, the SF6010 could house 240 virtual machines per rack unit (RU). At 240 VMs per node, the SF6010 can deliver 200 sustained random IOPS to each VM, roughly a 20x performance advantage over spinning disk. More importantly, our unique performance virtualization technology allows some of those VMs to use thousands of IOPS while others consume a few dozen, without regard to capacity.
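These density figures reduce to per-node arithmetic. A small sketch, assuming each SF6010 node occupies 1RU and that a VM sharing spinning disk sees on the order of 10 random IOPS (the per-VM disk budget is my assumption for the 20x comparison):

```python
vms_per_node = 240       # VMs per 1U SF6010 node, per the post
gb_per_vm = 100          # assumed average VM capacity
iops_per_vm = 200        # sustained random IOPS delivered to each VM

node_capacity_gb = vms_per_node * gb_per_vm   # effective capacity per node
node_iops = vms_per_node * iops_per_vm        # sustained IOPS per node
disk_iops_per_vm = 10    # assumption: per-VM budget on shared 7.2k-rpm disk

print(node_capacity_gb)                 # 24000 GB (24 TB) per rack unit
print(node_iops)                        # 48000 IOPS per rack unit
print(iops_per_vm // disk_iops_per_vm)  # 20x advantage over spinning disk
```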

The SF6010 announcement is just the beginning for us. Our architecture is designed to let us continually stay out in front of price declines and density increases in storage media, driving down the cost of high-performance storage in the cloud while increasing the flexibility of cloud providers to accommodate the varying performance and capacity needs of a wider set of applications. To get more details on this announcement, look for us this week at our booth at the Structure 2012 conference. @jungledave, @dcahill8 and @jprassl will all be at the event.

-Adam Carter, Director of Product Management

Built To Scale...Easier Said Than Done
Monday, May 21, 2012 posted by Dave Wright

Scalability is often marketed as a feature of a storage system. But scale is not a checkbox feature, nor is it a single number like capacity. Scale is a set of constraints that operate across every metric and feature of a system. Within large cloud environments, all parts of the infrastructure are expected to operate against this backdrop of scale. In two recent posts we touched briefly on the magnitude of the challenges presented by scale and why EMC spent $430 million to acquire it. However, given that scale is a critical consideration in any cloud infrastructure build-out, we wanted to discuss more deeply how we solve its challenges.

As it relates to storage, two of the most critical dimensions of scale in a cloud environment are performance and capacity. Using traditional storage systems, optimizing for either one of these resources almost always comes at the expense of the other. The best visual depiction of this dilemma can be seen in this graphic. Flash-based designs today are IOPS-rich but lack the capacity, high availability and/or shared characteristics required to scale to the broader demands of a large-scale cloud environment. Meanwhile, hard disk-based systems have plenty of capacity but lack the IOPS needed to service the full capacity footprint adequately. Unfortunately, a storage infrastructure containing lots of underutilized disk is unsustainable from both a cost and management perspective.

Properly architecting for scale in a multi-tenant cloud environment requires a system design that is able to manage the mixed workload profile inherent to this environment. Unlike an on-premise architecture that has a more controlled binding between application and storage, the economics of cloud are predicated on a shared infrastructure across many applications. Rather than optimizing the underlying storage for a single application, a cloud infrastructure must be able to accommodate the unique performance and capacity requirements of thousands of applications. Modern hypervisors provide this level of flexibility for compute resources today. It is about time storage caught up.

So what are the defining characteristics of a storage system designed to operate under the constraints of scale? Here are some of the design objectives we have based our system around:

  • Performance and capacity balance - Rather than force a sub-optimal tradeoff at the system level (i.e. performance or capacity), we designed an architecture with a more balanced blend of performance and capacity. Armed with our performance virtualization technology, service providers can carve up this system to serve the unique needs of many different applications across a wide mix of performance and capacity requirements. This more granular level of provisioning is a far more efficient method of allocating storage resources than traditional system-centric alternatives that force a capacity-or-performance decision upfront on every application.
  • Incremental growth - The recurring nature of the service provider business model necessitated an incremental approach to scale. Each node added to a SolidFire cluster adds equal parts performance and capacity to the global pool. With a balanced, linearly scalable resource pool at its disposal, a cluster can easily span environments both small and large. Traditional controller-based architectures require a large investment up front for redundant controllers, and while adding disk shelves can increase capacity, in many architectures the performance benefit is limited or a complex reconfiguration is required.
  • Dynamic change - Capacity and performance allocations within the cluster need to be dynamic and non-disruptive to account for the only two constants in the cloud: growth and change. This requirement applies at both the node and volume level. Node additions to a SolidFire cluster are done non-disruptively, with data rebalanced across the newly added footprint. Performance QoS settings for individual volumes can be adjusted in real time through the SolidFire REST-based APIs.
  • Single management domain - As a storage environment scales, it is critically important that the management burden does not scale with it. The clustered nature of the SolidFire architecture ensures a single management domain as the cluster grows. Alternative architectures often require additional points of management for each new storage system. Even worse, scale limitations often prevent vendors from addressing such a broad range of capacity and performance requirements within the same product family, and the complexity resulting from multiple points of management across multiple product families can have crippling effects at scale. Multiple clusters can be set up in different fault domains or availability zones as required, but the key decision about how much scale to place in each domain is made by the customer, not dictated by the storage system.
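To make the dynamic-change point concrete, here is a sketch of what a live volume QoS adjustment looks like as an API request. The method and parameter names follow SolidFire's JSON-RPC style ModifyVolume call as I understand it; the volume ID and IOPS values are illustrative, and the endpoint and credentials are left out:

```python
import json

def modify_volume_qos(volume_id, min_iops, max_iops, burst_iops):
    """Build a JSON-RPC body that retunes a live volume's QoS settings."""
    return {
        "method": "ModifyVolume",
        "params": {
            "volumeID": volume_id,
            "qos": {
                "minIOPS": min_iops,      # guaranteed performance floor
                "maxIOPS": max_iops,      # sustained ceiling
                "burstIOPS": burst_iops,  # short-term burst ceiling
            },
        },
        "id": 1,
    }

# POSTing this body to the cluster's API endpoint (with auth) would apply the
# new QoS in real time, with no unmount, migration, or downtime for the volume.
body = modify_volume_qos(volume_id=42, min_iops=500, max_iops=2_000, burst_iops=4_000)
print(json.dumps(body, indent=2))
```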

The scale challenges of cloud environments mandated different design choices for us at SolidFire compared to solutions intended for more traditional enterprise use cases. Delivering such a balanced pool of performance and capacity within a single management domain is unique in the storage industry today. Layering our performance virtualization technology into the architecture allows service providers to flexibly host a much broader range of application requirements from start to scale. Consequently, I would urge anyone building a scale-out cloud infrastructure to at least consider the above criteria as a starting point for any discussion around scale.

-Dave Wright, Founder & CEO

Distributed Systems Challenges Demand Different Skill-set
Monday, May 14, 2012 posted by Dave Wright

SolidFire's unique approach to scale-out all-SSD storage for cloud environments involves different engineering challenges than those confronted by traditional storage systems. Rather than focusing on ASICs, buses, and RAID firmware, SolidFire is solving difficult distributed systems problems dealing with scale, latency, reliability, and quality of service. The magnitude of this challenge requires us to continually add new talent with experience in this area. We've added more than a dozen great people to the team so far this year. One recent hire I'd like to highlight is our new Vice President of Engineering, Dan Berg.

In addition to his skills as a leader and manager for our engineering team, Dan has a long history of building the type of complex distributed systems that SolidFire delivers. After a 15-year career at Sun Microsystems, which concluded as VP of Systems Engineering and Distinguished Engineer, Dan served as CTO of Skype in Europe. At Skype, Dan helped grow the engineering team and significantly broaden its product offerings while increasing platform scale and stability. Following his return to Colorado from Europe, Dan most recently ran R&D for Avaya in the US.

While a P2P VoIP platform like Skype may seem very different from a primary storage system, it represents exactly the type of scale-out, fault-tolerant distributed system at the core of true cloud architectures like SolidFire's. Cloud computing is changing not just how IT is deployed, but fundamentally how the underlying infrastructure is built.

I'm pleased to welcome Dan Berg as well as all our other recent hires to the team. If you're excited about the work SolidFire is doing to advance the way the world is using the cloud, I'd encourage you to bookmark our Careers Page and check it regularly!

-Dave Wright, Founder & CEO

Big Players Make Big Plays
Thursday, May 10, 2012 posted by Dave Wright

EMC has made a big play with its announced acquisition of XtremIO for a reported $430 million. In acquiring the scale-out all-flash storage system vendor, EMC has made another aggressive bet in an emerging growth market. When a growth opportunity justifies making a bet, EMC is best in class at getting it done. But to assume EMC spent $430 million simply to double down on its investment in flash is shortsighted.

This deal is not just about flash. This deal is about scale. I suspect EMC's early entry into the flash market was an invaluable learning experience for understanding the opportunities and challenges posed by flash. Somewhere along the way, they realized that building flash into an architecture is one thing, but building a true scale-out flash system is a whole different challenge. It is not a challenge to be solved with traditional storage controller technologies designed in the hard disk era.

Scale imposes an entirely different set of constraints on a system and its underlying media. Delivering consistent performance at scale, delivering efficiency and data reduction at scale, automating management at scale...each of these challenges is hard enough on its own. Solving them with a completely different media at the base of the design requires an architectural rethink.

The timing is interesting here. As it pertains to the flash market, acquisitions at this stage of the game come much earlier than the storage industry traditionally likes to place its chips. However, the urgency with which EMC chose to strike is indicative of the market demand for more than just bolt-on solutions backed by go-to-market heft.

If this deal were just about flash, EMC had a number of options at its disposal, including staying the course with its evolving portfolio of flash solutions while the market matured. However, the transformative nature of flash necessitated a different approach. Realizing these challenges, EMC made a rich bet, but one that will eventually seem small compared to the opportunities created by scale-out flash storage.

-Dave Wright, Founder & CEO

Our Thoughts Off The Recent Solid State Storage Symposium
Tuesday, May 1, 2012 posted by Jay Prassl

Last week our Founder and CEO Dave Wright attended Tech Field Day's Solid State Storage Symposium (SSSS) in San Jose. At the event he joined a number of other companies from across the flash storage ecosystem for a day full of lively discussions on the best use cases, implementation types and future directions for flash technology.

Dave kicked off the day with a presentation on SolidFire's vision for the future of flash storage that set the tone for the event. My one-line takeaway from his presentation was this: "Sure, flash is fast, but what good is all that performance without control?" In his talk he expands the argument to include efficiency and scale. The net of all this is that flash is a means to an end, but without complementary innovations across quality of service, efficiency and automation, the end market is never going to be as big as some industry analysts are predicting.

At SolidFire we believe that our technology and approach to the market are fundamentally advancing the way the world uses the cloud. In his SSSS presentation I think you will find that Dave paints a clear and compelling picture of where flash is headed and what companies like SolidFire are doing to bring this vision to life. You can find the full presentation from the event on SlideShare, along with the video posted on Vimeo.

I would also encourage you to check out the panel sessions from the event as well. You will surely find some useful insights across a number of key trends that are shaping the future of solid state.

Hats off to Stephen Foskett and the fantastic moderators he brought on board for the day. The content and discussion on these panels are much richer than what you would find at a run-of-the-mill trade show.

-Jay Prassl, VP of Marketing

Why OpenStack Matters
Monday, April 9, 2012 posted by Dave Cahill

OpenStack matters because choice matters. For markets, and the innovation within them, to thrive, consumers must have platform choices. Multiple platform options help accommodate the varying requirements, skill sets and risk profiles of different customers. In the cloud context, platform options help service providers right-size cost and quality of service to the unique needs of a subset of customers. Competition between multiple platforms forces all the players to be better (in this context, Citrix's recent release of CloudStack to the Apache Software Foundation might turn out to be one of the best things to ever happen to OpenStack).

Despite the fragmentation that competition creates early on, market forces will whittle down the number of platform choices over time. Technology history has taught us that platform markets can sustain only a few dominant players, often including one proprietary and one open source alternative. The operating system wars that started 20+ years ago are the most frequently cited evidence of this dynamic: the fragmented, proprietary Unix variants eventually lost out to Linux and Windows as the open source and proprietary standards, respectively. Server virtualization has seen a similar trajectory, with VMware and Xen leading a race that is still underway. Most recently, iOS and Android have created a competitive and rapidly evolving mobile operating system market.

Fast forward to today, and history is repeating itself in the cloud "operating system" market. VMware's proprietary stack has become the clear commercial leader. Meanwhile, an emerging group of open source platforms is vying to become the "Linux" of the cloud data center. Only time will tell how this plays out, but OpenStack has as good a shot as any to become this de facto standard. With the stakes so clear, the question isn't why invest in OpenStack, but rather why wouldn't you?

Despite the magnitude of the opportunity, let's not lose sight of the fact that it is still early days. July of this year marks only the two-year anniversary of the OpenStack effort. In just six short months since the last release, OpenStack has made some big strides. Of course, challenges persist, but there are more than 150 companies and 2,500+ developers working on the problem.

Coinciding with the Essex code release last week, the OpenStack Conference & Design Summit will be held April 16-21 in California. At SolidFire, we have been working hard since the last summit and are proud of our achievements over this period. We will be very active participants throughout the week of the conference. If you are attending, make sure to stop by our booth or come see our panel, "OpenStack & Block Storage...Where to from here?", on Thursday at 1 p.m. PST. We will also be hosting a party with Cloudscaling and RightScale on Monday night. Building off the Mirantis reception earlier in the evening, come hang out with three of the most innovative companies in the cloud ecosystem at 111 Minna Gallery in downtown San Francisco. Details and registration for the party are posted here.

-Dave Cahill, Director of Strategic Alliances


Bringing SSDs to the Cloud (at scale)
Wednesday, March 21, 2012 posted by Dave Wright

At the Cloud Connect Performance Summit back in February, I presented on "Increasing Storage Performance in a Multi-Tenant Cloud". The way the schedule fell out, I took the stage after Adrian Cockcroft from Netflix. Coincidentally, I had borrowed a few quotes from Adrian's prior blogging on the subject to help bring to life the biggest roadblocks to achieving great storage performance in a multi-tenant cloud. In my talk I called out three key problem areas: the capacity vs. IOPS imbalance, handling multi-tenancy, and performance consistency. My discussion centered on the limitations of legacy solutions and how flash storage, if leveraged correctly, can help remedy current cloud performance woes.

Many thanks to Adrian, who continues to be a great straight man for the biggest challenges we are tackling here at SolidFire. In a recent Q&A with ZDNet UK's Jack Clark, Adrian shared some perspectives that we commonly hear from cloud service providers and their customers:

  • "The thing I've been publicly asking for has been better IO in the cloud. Obviously I want SSDs in there. We've been asking cloud vendors to do that for a while."
  • "The instances available from AWS have similar CPU, memory and network capacity to instances available for private datacentre use, but are currently much more limited for disk I/O."
  • "The hard thing to do in the cloud is to do high-performance IO [input-output], but that is starting to change as third-party vendors are figuring out ways of connecting high-performance IO externally, and we've worked around it with our [Cassandra] data store architecture."

Probably the most interesting answer came in response to a question about why it took Amazon so long to roll out an SSD-based offering (referring to DynamoDB). Cockcroft remarked:

"It's purely scale for them. For Amazon to do something they have to do it on a scale that's really mind-boggling. If you think about deploying an infrastructure service with a new type of hardware - if they got it wrong, they can't turn it back out and do it again differently. So they have to over-engineer what they do."

The key point here is that performance (through SSDs) was only part of the problem Amazon had to address. In fact, the bigger challenge for them to overcome was scale. Scale is what differentiates true clouds from small virtualized environments. Everything has to be designed to scale, which imposes a very different set of design considerations and constraints on an architecture. SSD or not, you can't escape this reality. At SolidFire, scale is what we do best. There are many options for high-performance storage these days, but only SolidFire is designed for cloud scale. In doing so, we are enabling service providers to focus on offering a differentiated portfolio of high-performance cloud services and advancing the way we all use the cloud.

-Dave Wright, Founder & CEO

Sorting through the noise (and the bottlenecks)Tuesday, February 28, 2012 posted by Dave Cahill

The current flash-based storage landscape is filled with many vendors proposing to address different niches of the market with their respective solutions. With flash as the common ground, some of the more easily identifiable differentiators are in areas like host interface, form factor, media support and data protection schemes. The design choices for these specifications are heavily influenced by each vendor's target workload and/or customer set. Of course, there are strengths and weaknesses to every approach. There are bottlenecks to be minimized or altogether avoided if possible. If all goes according to plan a vendor's target market will play to more of its strengths than weaknesses.

At SolidFire we have taken direct aim at solving the challenges encountered in delivering high performance storage for large-scale multi-tenant cloud environments. For this customer set the objective is not about delivering massive amounts of performance to a single application at any cost. Instead, these providers are focused on cost effectively delivering consistent performance to thousands of applications at the same time. This use case has shaped many of our early design choices at SolidFire. We believe the most efficient way to achieve the right price/performance balance at scale is through a shared storage architecture.

In the case of shared storage, regardless of how fast the storage system can deliver I/O, there will always be the issue of network latency. Fusion-io has eliminated the network latency issue altogether with its server-resident PCIe-based designs. This design works well for DAS topologies serving massive IOPS to extremely performance-hungry applications. However, for the service provider use case referenced above, the price/performance and availability story of server-resident flash misses the mark.

So if network latency is unavoidable, what is the best approach? How do you optimize the storage stack to maximize IOPS and minimize latency to deliver consistent performance to thousands of applications? Sparing you a buzzword-infused tongue twister that distills our approach into as few words as possible (think "RAID-less All-SSD Scale-Out Storage System"), we have instead outlined some of the key enabling features of our design in a more digestible format below:

  • An All-SSD system is the only way to confidently deliver predictable performance across a large number of tenants and applications in a large-scale cloud infrastructure. A tiered approach may suffice in a controlled setting with a few applications. However, the resource intensity and performance variability encountered in larger QoS-sensitive environments make tiering an unsustainable option.
  • Scale-out can mean lots of different things. For SolidFire this means no monolithic storage controllers. It also means a fully distributed design with IO and capacity load evenly balanced across every node in the cluster. At the media layer, data still has to traverse the SAS bus, but ten drives per node are working in tandem to deliver more than enough aggregate performance. Thinking through alternative design choices here, it is important not to lose sight of the fact that any latency encountered at this layer of the stack is an order of magnitude less than what is encountered at the network layer.
  • RAID-less means exactly what you think: no RAID. More than any controller bottleneck, RAID is the biggest performance drag in the storage stack. By rethinking the data protection algorithm you cure a lot of what ails storage system performance today. At SolidFire we have done just that, implementing a replication-based redundancy algorithm where data is distributed throughout the cluster. The result is a significant improvement in write performance and drastically faster rebuilds from failure without performance impact.
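The replication-based placement idea can be sketched with a toy helper. This is our own illustration of the general technique, not SolidFire's actual algorithm: each block's copies land on distinct nodes chosen deterministically from a hash of the block ID, so data (and rebuild traffic after a failure) spreads evenly across the whole cluster instead of hammering a single RAID group.

```python
import hashlib

def place_replicas(block_id: str, nodes: list[str], copies: int = 2) -> list[str]:
    """Pick `copies` distinct nodes for a block by hashing its ID.

    A deterministic hash spreads blocks (and their replicas) evenly
    across the cluster, so no single node becomes a rebuild hotspot.
    """
    start = int(hashlib.sha256(block_id.encode()).hexdigest(), 16) % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(copies)]

nodes = [f"node-{i}" for i in range(5)]
replicas = place_replicas("block-42", nodes)
# Two copies of the block, always on two different nodes.
```

When a node fails, every surviving node holds some of the lost blocks' peers, so all of them can participate in the rebuild in parallel rather than funneling through one spare.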

Sure our storage system does a heck of a lot more than these three things. You can read all about the software innovations embedded in our Element OS on our site. But these three concepts we highlight above are critically important design choices that we made early on. They are foundational components of our architecture that make the rest of the story possible. They are also three fairly tangible concepts to help you differentiate one vendor from the next in the flash-based storage market. Good luck, it's noisy out there!

-Dave Cahill, Director of Strategic Alliances

Extending The Storage Disruption CycleThursday, January 26, 2012 posted by Dave Cahill

"There comes a time when a storage company needs to define itself by what it does for customers and not by the machinery it uses to do so."

Chris Mellor, "How to tell if your biz will do a Kodak", The Register

 The Register's Chris Mellor penned a great article the other day reflecting on the continuous cycles of innovation and disruption that have come to characterize the storage media industry. He uses Kodak to paint the picture of an incumbent getting capsized by a media transition. He goes on to cite other examples across tape and optical media where incumbents failed to manage the transition to the next generation media.

As the storage industry has transitioned through different media types there have always been opportunistic stopgap innovations that have bridged the gap from one generation to the next. Virtual Tape Library (VTL) technology is a great example of an innovation serving as a transitional bridge between the tape and disk eras. Once applications were written with the capability to natively interface with disk, deduplication and compression drove down solution costs, quickly making disk an effective bulk storage medium. Once financially viable, the floodgates were opened and tape was relegated to deep archive. Similarly, today we are seeing flash-based caching and tiering technologies forming a similar transitional bridge while the $/GB economics of flash fully converge with, and eventually eclipse, disk.

So with history as a guide for how this plays out, why will the disk to flash media transition be any different than the ones before it? Well, I suspect this cloud thing might have something to do with it.

In the enterprise IT sector, systems always seem to consume features over time. At its core, the cloud is a massive infrastructure system that, when used properly, is an extension of existing IT. However, cloud infrastructures will increasingly chip away at the incumbent IT footprint by rapidly incorporating new innovations into their architectures. These enabling innovations allow cloud providers to continually expand their portfolio of cloud services. Over time the IT use cases applicable to this medium naturally expand as applications and interfaces catch up, performance improves and the economic value proposition can no longer be ignored.

So what does this mean? From our perspective, the cloud adds a third leg to the innovation sequence we have witnessed in the past. New component level technologies will continue to enable new architectures. But where it gets interesting is when these new architectures drive the performance and economics to enable new cloud services.

In storage, the media innovations that Mellor refers to, and their related price/performance value proposition, are a powerful enabling force behind new storage architectures. Applied to traditional IT cost centers these architectures are interesting; applied to profit-driven cloud services they are game changing. Amazon's recently announced DynamoDB service is an early instantiation of this extended innovation sequence, where component-level technologies (SSD) enable new architectures that drive new services. Fortunately for end-customers, the economics of flash are only getting better from here. Now it is up to the storage industry to innovate on top of this medium, delivering next generation systems that can extend the reach of cloud-hosted services to an even wider range of application workloads.

-Dave Cahill, Director of Strategic Alliances

Inefficiency & Unpredictability...A Service Provider's Worst EnemyTuesday, January 24, 2012 posted by Dave Wright

In our first two posts on storage tiering we talked through the difference between capacity-centric vs. performance-centric approaches and also exposed some of the hidden costs of an automated tiering implementation. Closing out this mini-series I wanted to touch on a few other deficiencies inherent to an automated tiering solution.

Within a storage infrastructure it is IOPS, not capacity, that are the most expensive and limited resource. In a tiered architecture, SSDs are inserted into the equation to try and improve the balance between IOPS and capacity. However, while an SSD tier may reduce performance issues for well-placed data, the usage of this expensive tier remains inefficient. This inefficiency stems from a lack of granularity in the data movement of a tiered system. If a sub-LUN tiering system needs to move hot data chunks anywhere from 32MB to 1GB, it will likely promote a lot of cold data in the process. This overhead forces sub-optimal utilization of the premium SSD capacity.

Another potential problem area with tiering, specifically in a multi-tenant environment, is IO density - that is, how IO is distributed across a range of disk space. Applications whose IOs are concentrated within close proximity to each other (IO dense) will gain greater benefit from sub-LUN tiering than those whose IOs are spread more evenly over the entire logical block address space (IO sparse). Because tiering mechanisms measure data usage at the chunk level, an application that has more hits within a small number of chunks is more likely to be promoted than an application that spreads the same number of IOPS across more chunks. From an array performance perspective this approach is reasonable, as you get more performance within the same resource footprint. However, in a multi-tenant setting with data distributed across many distinct applications, this leads to serious problems with fairness and performance consistency across workloads.
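The fairness problem above can be shown with a toy heat-map, the basic mechanism behind most chunk-level tiering engines (a sketch of the general idea, not any vendor's implementation):

```python
from collections import Counter

def hottest_chunks(accesses: list[str], top_n: int) -> list[str]:
    """Rank chunks by hit count; a tiering engine promotes the top N."""
    return [chunk for chunk, _ in Counter(accesses).most_common(top_n)]

# Two tenants each issue 1,000 IOs, but tenant A concentrates them in
# 2 chunks (IO dense) while tenant B spreads them over 100 (IO sparse).
tenant_a = [f"A:{i % 2}" for i in range(1000)]
tenant_b = [f"B:{i % 100}" for i in range(1000)]

promoted = hottest_chunks(tenant_a + tenant_b, top_n=2)
# Both promotion slots go to tenant A's chunks (500 hits each vs. 10),
# even though both tenants generated identical total IOPS.
```

Tenant B paid for the same workload but is systematically locked out of the SSD tier, which is exactly the fairness failure described above.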

We originally discussed the performance implications of tiering in July of last year. In a multi-tenant setting this performance variability exposure is magnified. Customers are continually exposed to the risk that the promotion of another customer's hot data will result in the demotion of their own. The order of magnitude difference in latencies and IOPS between the different tiers makes it practically impossible for a service provider to guarantee performance to an individual application (or tenant) under these conditions.

In recognition of the deficiencies of a tiered architecture, SolidFire sought a better way. Our Performance Virtualization technology decouples the tight binding between the storage performance and capacity, resulting in a far more precise allocation of IOPS and capacity on a volume by volume basis regardless of issues such as IO density. Instead of best guess efforts as to the size and tiers of media required to meet customer performance requirements, a service provider can now dial-in IOPS and capacity individually at the volume-level from cluster-wide independent pools of capacity and performance. These allocations can also be dynamically adjusted over time as application requirements change. All things considered, Performance Virtualization is a far more efficient way to address IOPS scarcity, without exposing customers to the inefficiency and unpredictable performance inherent in an automated tiering architecture.

-Dave Wright, Founder & CEO

Amazon launches DynamoDB...We like what we see!Wednesday, January 18, 2012 posted by Dave Wright

Amazon launched a new service today: DynamoDB. It's a scalable NoSQL database service that will run in the AWS cloud. It is akin to a hosted version of Cassandra or MongoDB with unlimited scalability. The most notable section of Werner Vogels' blog announcing the new service is worth repeating:

Cloud-based systems have invented solutions to ensure fairness and present their customers with uniform performance, so that no burst load from any customer should adversely impact others. This is a great approach and makes for many happy customers, but often does not give a single customer the ability to ask for higher throughput if they need it.

As satisfied as engineers can be with the simplicity of cloud-based solutions, they would love to specify the request throughput they need and let the system reconfigure itself to meet their requirements. Without this ability, engineers often have to carefully manage caching systems to ensure they can achieve low-latency and predictable performance as their workloads scale. This introduces complexity that takes away some of the simplicity of using cloud-based solutions.

The number of applications that need this type of performance predictability is increasing: online gaming, social graphs applications, online advertising, and real-time analytics to name a few. AWS customers are building increasingly sophisticated applications that could benefit from a database that can give them fast, predictable performance that exactly matches their needs.

Looking under the covers a bit further here there are two really interesting enabling components of the DynamoDB service that deserve highlighting:

  1. All-SSD - The service is deployed using 100% SSDs to provide consistent high performance at very large scale. This is notable in that it is AWS' first use of SSDs in their cloud architecture.
  2. Guaranteed Throughput - The DynamoDB service includes a concept called "Provisioned Throughput". This is essentially a guaranteed QoS model, where a customer purchases reserved throughput (measured in queries per second) rather than paying for the actual queries run. Applied to a storage service, this would be akin to paying based on guaranteed IOPS. By contrast, Amazon EBS's current pricing model is based on actual IO operations with no guaranteed throughput or latency.
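A provisioned-throughput guarantee is commonly enforced with a token bucket: the tenant's purchased rate refills the bucket, and requests that find it empty are throttled. The sketch below illustrates the general pattern, not Amazon's actual implementation; the class and parameter names are ours.

```python
import time

class ProvisionedThroughput:
    """Token-bucket limiter: a tenant buys a steady ops/sec rate and may
    burst up to `burst` saved-up tokens (illustrative sketch only)."""

    def __init__(self, ops_per_sec: float, burst: float):
        self.rate, self.burst = ops_per_sec, burst
        self.tokens, self.last = burst, time.monotonic()

    def try_request(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False  # throttle: tenant exceeded their provisioned rate

bucket = ProvisionedThroughput(ops_per_sec=100, burst=10)
```

The same shape works whether the unit is queries per second (DynamoDB) or IOPS on a block volume, which is why the post draws the analogy between the two.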

Amazon DynamoDB is a strong endorsement of several of SolidFire's key principles. The first is that the cloud needs solid-state drives (SSDs) to adequately support the evolving performance demands of multi-tenant storage. The second is the idea that as more of these performance-sensitive applications make their way to the cloud, there is a clear requirement for guaranteed QoS controls that can dynamically support performance requirements at a much more granular level. Finally, and building off the first two, is the validation that when armed with the enabling architecture to confidently and economically deliver performance-based services, service providers can stand up cloud service offerings based on committed performance.

Amazon is a great indicator on the pulse and direction of the industry. The broader implications here for running performance sensitive applications in a cloud environment are intriguing to think about. Here at SolidFire, the continued innovations around the enabling architectures required to make this a reality are what get us really excited.

-Dave Wright, Founder & CEO

The Diseconomies of TieringTuesday, January 17, 2012 posted by Dave Wright

In the initial post of our series on tiering we covered the merits of a proactive performance-driven approach to tiering relative to the more traditional capacity-centric discussions. Today we take a closer look at some of the less obvious cost implications of "automated" tiering. On the surface, the promise of tiering looks like a clear win - SSD performance with spinning disk capacity and cost. However, the true economics of this type of solution are not nearly as compelling as some vendors would lead you to believe. Considered in the context of the unique burdens faced by cloud service providers, the proposed value proposition is even less appealing.

To start with, the "SSD performance" part of the catchy tagline above must be caveated by the fact that this only proves to be the case if the data is actually residing in the SSD tier. Easier said than done. The ability to guarantee SSD performance in a tiered architecture requires a substantial SSD tier and/or extremely accurate data placement algorithms. Rightsizing the former skews the proposed economics of a tiered solution substantially, while the latter has been long on promise but short on delivery for at least three generations of marketing executives. Before the industry marketed this functionality as Automated Tiering it was known as Information Lifecycle Management (ILM), and a few years before that it was Hierarchical Storage Management (HSM). Regardless of what you call it, tiering has always been impaired by the inability to accurately predict and automate the movement of data between tiers. In the context of cloud environments, the significant scale requirements and extremely low application-level visibility make solving this challenge even more difficult.

It's also important to consider the flash media requirements of a tiered solution. The write patterns in the flash layer of a tiered architecture require a higher grade of flash to withstand the impact of write amplification and churn. Vendors are forced to use the most expensive SLC flash to ensure adequate media endurance. The cost impact of even modest amounts of SLC flash destroys the economic advantage of a tiered architecture relative to an all-MLC design. In many examples we've seen, the "combined" $/GB of a storage solution that incorporates SLC flash, 15k SAS and SATA is actually higher than an all-flash MLC solution with similar raw capacity. Importantly, this price advantage for MLC over tiered storage is achieved before factoring in the favorable impact of compression and deduplication for the all-flash solution, making the flash design even more compelling.
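The blended $/GB of a mixed configuration is straightforward to compute. The helper below shows the arithmetic only; the prices and capacities are hypothetical placeholders, not quotes for any real product:

```python
def blended_cost_per_gb(tiers: list[tuple[float, float]]) -> float:
    """Effective $/GB of a mixed box; tiers = [(gb, price_per_gb), ...]."""
    total_gb = sum(gb for gb, _ in tiers)
    return sum(gb * price for gb, price in tiers) / total_gb

# Hypothetical tiered box: SLC flash, 15k SAS, and SATA (prices illustrative).
tiered = blended_cost_per_gb([(5_000, 40.0), (20_000, 3.0), (75_000, 0.5)])

# Hypothetical all-MLC box at $8/GB raw; a 4x effective-capacity gain from
# inline dedupe and compression drops the effective rate to $2/GB.
mlc_effective = 8.0 / 4
```

Run the comparison with your own vendor quotes and data-reduction ratios; the point of the exercise is that the small, expensive SLC tier dominates the blend far more than its capacity share suggests.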

Tiering also hurts capacity utilization and controller performance. In order to ensure data is in the right place at the right time it is constantly being promoted and demoted between the flash and disk tiers. A certain capacity buffer is needed to accommodate this movement. There is also a controller processing cost to keep up with all this activity. Most legacy systems have limited CPU and controller memory relative to their overall capacity, making the overhead of tiered storage processing one more burden for them to manage. Even complex tiering requires only a fraction of the processing power and memory needed for in-line data reduction features like compression and deduplication, which is why those features are seldom found on legacy primary storage controllers. A recent article from TechWorld references a Forrester Research report by Andrew Reichman (@ReichmanIT) that expands on the data management burden of a tiered storage topology.

The issues outlined above are just a few examples of the hidden costs embedded in an "automated" tiering solution. In some cases these deficiencies may be acceptable in smaller IT environments. However, in a large-scale multi-tenant cloud infrastructure the capital and management costs of these shortcomings are magnified. The hyper-competitive nature of the service provider business model necessitates a more efficient approach.

-Dave Wright, Founder & CEO

Capacity vs. Performance TieringTuesday, January 10, 2012 posted by Dave Wright

In our end of year blog we reviewed a number of the unique storage challenges that infrastructure service providers face in building and operating a large-scale, and profitable, cloud offering. A clear understanding of these issues provides a more constructive lens through which to assess the viability of a storage solution within a high-performance cloud-scale setting. This approach is particularly useful for understanding the basis of SolidFire's thoughts on the merits of "automated" storage tiering in a large scale cloud.

As promised, we kick off our first of three blogs on this topic below. If you happened to miss our initial thoughts on this subject you can go back and read them here and here. We look forward to your feedback as we go.

Within the enterprise, storage tiering has become a popular vendor solution to improve performance for a subset of applications. With tiering the performance gain is achieved by retrofitting a disk-based array with an SSD tier and some intelligent fetching/data placement algorithms. Tiered storage systems are most effective when an IT manager has direct visibility into the usage profiles of the applications that reside on the system.  This allows the IT manager to size each tier appropriately, continually ensuring there is enough room in "fast disk" to accommodate demand. When data is not in demand it is then moved to slower speed disk. Overall, this is both a reactive and human-centric model that requires constant monitoring and adjustments to ensure each storage tier is rightsized to accommodate the access patterns of different volumes across the data set. The continuous promotion and demotion of data to the different tiers also comes at the cost of endurance due to excess wear on the flash media.

When operating a large scale public cloud environment, customer applications and their associated usage patterns are largely unknown to the service provider. How do you most effectively allocate tiers of storage without ongoing visibility into the access requirements of a particular application? How big should the SSD tier be? How much SATA capacity should be used? When should data be promoted or demoted between tiers? Might a better question be: how many IOPS need to be available within the storage system? Unfortunately, for cloud service providers with unpredictable demand patterns across a large number of tenants, trying to spec out a system in this manner is impossible.

From SolidFire's perspective, the best way to manage performance in a multi-tenant cloud environment is to approach this problem from the demand side of the equation (i.e. application performance) as opposed to the supply side (i.e. storage capacity).  Proactive performance management based on IOPS demanded by the application offers a far more efficient approach to allocating storage resources, rather than trying to guess the right quantity and capacity of each tier within the system. Armed with fine-grain performance controls, storage performance management should no longer be a complex, reactive and resource intensive experience. By leveraging a system that can assign and guarantee IOPS on a volume by volume basis, all of the guesswork around right sizing for application performance is eliminated. 
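The decoupling of performance from capacity can be modeled as two independent cluster-wide pools that volumes draw from. This is an illustrative sketch of the concept; the class and method names are ours, not SolidFire's actual API:

```python
class PerformancePool:
    """Allocate guaranteed IOPS and capacity independently, per volume,
    from cluster-wide pools (conceptual sketch, not a real product API)."""

    def __init__(self, total_iops: int, total_gb: int):
        self.free_iops, self.free_gb = total_iops, total_gb
        self.volumes: dict[str, dict[str, int]] = {}

    def create_volume(self, name: str, iops: int, gb: int) -> bool:
        if iops > self.free_iops or gb > self.free_gb:
            return False  # pool exhausted: the guarantee cannot be honored
        self.free_iops -= iops
        self.free_gb -= gb
        self.volumes[name] = {"iops": iops, "gb": gb}
        return True

    def resize_iops(self, name: str, iops: int) -> bool:
        """Dial performance up or down without touching capacity."""
        delta = iops - self.volumes[name]["iops"]
        if delta > self.free_iops:
            return False
        self.free_iops -= delta
        self.volumes[name]["iops"] = iops
        return True

pool = PerformancePool(total_iops=100_000, total_gb=1_000)
pool.create_volume("tenant-a", iops=5_000, gb=100)
pool.resize_iops("tenant-a", iops=8_000)  # performance changes, capacity doesn't
```

The key property is admission control: a volume is created only if its guarantee fits in the remaining pool, so sold performance is never oversubscribed by accident.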

For a quick graphical depiction of how SolidFire brings this concept to life, check out our 90 second video on  Performance Virtualization.

-Dave Wright, Founder & CEO

Looking Back Before We Charge ForwardTuesday, December 13, 2011 posted by Dave Wright

2011 was a foundational year for us here at SolidFire. Emerging from stealth mode at Structure in June, to a great VMworld panel and TechFieldDay appearance in September, and more recently announcing our new financing round in late-October, we have been hard at work. The best part is we are just getting started. Beyond enhancing our product, building our team and spreading the word, we have spent countless hours with cloud service providers (CSPs) listening to the challenges they encounter when attempting to deploy profitable high-performance cloud-storage infrastructures.  Today we are solving these problems with a select group of early access customers, and we look forward to making the SolidFire system broadly available in 2012.  We don't think that cloud computing will ever be the same.

Conversations with CSPs throughout the past year have continued to reinforce our belief that this customer segment is unique in its scale, business model and the solutions that it requires. The driving force behind this conclusion is a set of four important qualifiers that clearly differentiate their IT environment and resulting storage system requirements from that of the traditional enterprise. These factors are:

  • Ability to Provide Predictable Performance
  • Massive Scale
  • Multi-Tenancy
  • Lack of Application-level visibility

Individually, each of these factors imposes unique pressures on the IT environment. Taken together, they demand an entirely new approach. Deeply understanding the architectural implications of these collective burdens provides a more constructive lens from which to assess the viability of one solution versus another in a cloud-scale setting.

A frequently debated topic that highlights the importance of evaluating customer requirements from a more holistic viewpoint is that of automated storage tiering. We originally blogged on the subject back in July and our thinking has continued to evolve since then. I would like to introduce a blog series in which we cover our view on tiering at length.

Talk to any IT manager about how they are keeping up with performance demands and you will increasingly hear talk about resorting to unpredictable and resource intensive band-aids like tiering. In a controlled single system environment the dynamic tiering of data between SSD and SATA drives would seem to make sense. Unfortunately, the economics of these more tactical approaches, while viable in smaller topologies, start to break down under the burdens imposed in a cloud environment.

Once you introduce the elements of multi-tenancy, multiple applications, and the need to scale across multiple systems, a tiered approach exposes CSP customers to "all or nothing" performance disparity. Working around the unpredictable nature of this setup requires human intervention, eroding the proposed cost benefits of the automated tiering value proposition. When evaluated against the criteria above, the shortcomings of an automated tiering approach become very clear. Cloud service providers are forced to seek out alternative solutions that are better aligned with both the performance controls required in a multi-tenant cloud environment, and the efficiency mandated by the hyper-competitive nature of the cloud services market.

There is little debate that Quality of Service (QoS) is a key competitive differentiator for CSPs.  Consequently, they cannot afford to gamble with the performance variability inherent to a tiering or cache-based solution. The manual intervention required to tune and optimize these architectures on an ongoing basis is the antithesis of a profitable cloud-scale management model.  Coming out of the holiday break we will further explore storage tiering in even greater detail.  We will look at the differences between capacity and performance tiering, dive into the true economics of tiered solutions, and hash out the merits of local versus global deduplication. As always, please provide your feedback here on our blog.  We look forward to the conversation.

Happy and safe holidays to everyone and we look forward to seeing you in 2012.

-Dave Wright, Founder & CEO

SolidFire adds fuel to all-SSD storage solution with $25M in fundingMonday, October 31, 2011 posted by Jay Prassl

Over the last few months we have been writing about a number of topics surrounding SolidFire's all-SSD storage technology. It is important for us that we strike a balance between being informational about SolidFire and educational about how some of the most successful cloud service providers in the world are thinking about SSD technology and how guaranteed QoS is impacting their business. Here are SoftLayer and Virtustream discussing their thoughts on the use of solid-state technology in their clouds.

Currently there are a number of world-class cloud service providers evaluating over 500TB of SolidFire's all-SSD storage technology. They are evaluating the solution technically, but also from a business perspective. For every customer we work with, SolidFire technology is radically changing their business. These IaaS providers are now able to invite new mission-critical and performance-sensitive applications into their cloud, and build new revenue streams and customer value around guaranteed performance. There is a very good reason that 3Par, EMC, and NetApp customers have all joined our Early Access program.

No other storage technology in the world has SolidFire's capability to combine revolutionary performance management, in-line storage efficiency, and full system automation.

There are many service providers out there simply making do with what has been available in the market. If you are reading this blog, you are probably one of them. You, and each of our early access customers, all feel the same way: current technology can't get me to where I want to go.

SolidFire can. SolidFire can bring your cloud to the next level and add to your bottom line in a way that no 3Par or NetApp system ever could. Think deduplicating a single volume is interesting? How about deduplicating your entire data store across thousands of customers? SolidFire offers not just incremental change, but rather a massive leap forward in how storage systems really SHOULD be built. Why wait for your current vendor to drag themselves up to date?

To help get SolidFire technology in front of every cloud service provider, today we announced the closing of our $25 million Series B funding round, bringing our total funding to $37 million. We will be investing in our sales and marketing teams to broaden our reach, and will be accelerating our technical development as well.

SolidFire is on a fantastic roll and we want to give you the chance to learn about our technology.  We have a webinar coming up on November 17th and want to urge you to carve out an hour to spend with us. We will be talking about: How Performance Virtualization Enables New Storage Services in the Cloud.

If your cloud is held back by complex, expensive storage systems and you would like to know more about our solution, attend our webinar or Talk with Us!

-Jay Prassl, VP of Marketing

The Challenges of Cloud Service Providers-Part Three - RecapTuesday, October 18, 2011 posted by Dave Wright

[Embedded video: recap panel]

To wrap up our VMworld video series hosted by Silicon Angle TV, I sat down with Virtustream's Matt Theurer and SoftLayer's Duke Skarda to discuss, as a group, some of the challenges faced by cloud service providers. This conversation focuses largely on the barriers that these two companies face with traditional storage systems in the cloud, and the opportunities that flash storage presents.

For both companies, the use of all-SSD based technologies is changing the way they think about storage, and how they approach resolving the gap between server and storage performance. Matt discusses how SSD technology has inverted the capacity/performance imbalance that has existed for many years, and how capacity will soon be the limiting factor within cloud storage architectures - a much easier metric to manage. Duke explains how block storage is a fundamental building block of cloud infrastructure, and traditionally the most problematic part to deal with.

I also got a bit of airtime to talk about the history of SolidFire and how my experience at Rackspace, and evaluating how traditional storage is used within the cloud, both helped me shape the technology of SolidFire and the market focus of the company.  It is important to keep in mind that SSDs do not constitute a different approach to storage.  SSDs are just part of the system.  How that system is architected, the functionality designed around the SSDs, and deep knowledge of your customer and their key feature-set, are all required when delivering a next generation storage solution. 

Many thanks to Matt and Duke for sharing their views on performance storage in the cloud, and to Silicon Angle TV for hosting us!

-Dave Wright, Founder & CEO 

The Challenges of Cloud Service Providers-Part Two - VirtustreamWednesday, October 12, 2011 posted by Jay Prassl

virtustream thumbnail

At VMworld earlier this fall, Matt Theurer, SVP of Solutions Architecture, and Rodney Rodgers, Chairman and CEO of Virtustream, took some time to sit down with the folks from SiliconAngle TV to discuss some of the challenges that cloud service providers are facing. During their discussion they talked about the specifics of their business and their focus on enabling high-performance applications like SAP within their shared infrastructure. Key to their success in this space has been their ability to carve up compute, networking, and IOPS and bundle them into what they call an "infrastructure unit" (IU). Customers can combine as many IUs as needed to meet their requirements, and this enables Virtustream to provide some of the most comprehensive SLAs in the industry.

Matt takes the conversation a bit deeper discussing some of the more granular performance challenges posed by traditional spinning media.  He discusses how the ability to guarantee storage performance would allow them to be even more exacting in their SLAs and raise their overall efficiency.  At SolidFire, one of our primary goals is to enable cloud service providers to allocate storage performance as easily as they allocate storage capacity; and to do so for thousands of volumes within a shared infrastructure.  This capability allows companies like Virtustream to wrap SLAs around exact performance metrics and maintain customer performance expectations regardless of the activity within the system.

-Jay Prassl, VP of Sales & Marketing

The Challenges of Cloud Service Providers - Part One - SoftlayerTuesday, October 4, 2011 posted by Adam Carter

softlayer video thumbnail

Nathan Day and Duke Skarda of SoftLayer were kind enough to talk with the guys from Silicon Angle on theCube at this year's VMworld. During their discussion they touched on a major challenge that many cloud service providers are dealing with today: storage performance. One of the points brought up was fine-grained control over Quality of Service. They referred to "per volume" or "per account" control as storage nirvana. At SolidFire, our architecture was designed from the ground up with this in mind. The ability to guarantee consistent QoS to thousands of applications and thousands of customers is how SolidFire is making storage nirvana a reality.

-Adam Carter, Director of Product Management

Cloud: The Triumph of Automation Over AdministrationThursday, September 29, 2011 posted by Dave Wright

What makes cloud computing a better way to manage an IT infrastructure? Is it the flexibility? The cost savings? The improvements in utilization and efficiency? More importantly, what is the fundamental difference between cloud computing and traditional IT management that enables these benefits? In a word, the answer is automation.

Automation is what allows cloud infrastructures to expand rapidly without a corresponding increase in administrator headcount. Automation is what allows business teams and developers to deploy and scale applications in the cloud in minutes instead of weeks. Without automation, it would be impossible to keep web-scale applications such as Twitter, Facebook, and eBay running, and the cost of managing these web-scale environments would quickly spiral out of control.

Web-scale companies such as Google and cloud service providers such as Amazon and Rackspace have long realized that automation is the key to running their data centers and their business. But when it comes to storage, automation has remained elusive. Storage vendors selling products to IT administrators have created systems that are administration-centric, with fantastically complex tools that require months of training and certification to master. In a cloud setting, these tools don't scale because there simply aren't enough humans to manage them. For years, storage vendors have treated automation as an afterthought, something outside the norm, which has forced cloud providers to perform unnatural acts like scripting vendor-specific CLIs.

At SolidFire, we think automation is the only way to scale the cloud and is our first and most important focus when it comes to management. We have built into the core of our system a comprehensive REST-based API that allows service providers to automate any aspect of the system - from deployment to provisioning, to security, reporting, and billing. Failure mitigation, adding capacity, performance management, backup and restore capabilities are all included as well. Our web-based and CLI tools are simply wrappers around the API, providing a good way to get started on your way to a fully automated storage environment.
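To make this concrete, here is a minimal sketch of what driving a provisioning API from an automation script could look like. The method name, field names, and workflow here are invented for illustration; a real integration would follow the actual API reference.

```python
import json

def make_create_volume_request(name, size_gb, account_id):
    """Build the body of a hypothetical volume-provisioning API call.

    The "CreateVolume" method and its field names are illustrative
    assumptions, not a documented interface.
    """
    return {
        "method": "CreateVolume",
        "params": {
            "name": name,
            "accountID": account_id,
            "totalSize": size_gb * 1024 ** 3,  # size in bytes
        },
    }

# A provisioning script would serialize this and POST it to the
# cluster's management endpoint, then act on the response:
body = json.dumps(make_create_volume_request("tenant-42-vol0", 100, 7))
```

Because the web UI and CLI are wrappers around the same API, anything an administrator can do interactively can be scripted this way.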

While our API makes it easy to integrate SolidFire into home-grown cloud management stacks, we are taking things a step further by creating pre-built integrations with cloud stacks such as VMware vCloud Director and OpenStack. Our product management and development teams will be at the OpenStack Conference next week as part of that effort. If you are attending the conference ping us on twitter (@Solidfire) or swing by our booth.

One final point: these days, everyone claims to have APIs, and automation is fast becoming a buzzword across the industry.  If a system enables you to automate just one or two aspects of storage management, this is not cloud-scale automation.

The next time a storage vendor talks about automation, try this simple test: ask them if you can take their system out of the box, put it in a rack with power and a network connection, and have APIs handle the rest. That means you walk away, and no one ever logs into the box as long as it is on your network. That is the true essence of cloud-scale automation. If you'd like to see how a storage API can accomplish that, see us at the OpenStack conference or feel free to contact us.

-Dave Wright, Founder & CEO

Get Rid of the GuessworkWednesday, September 14, 2011 posted by Adam Carter

Building a cloud infrastructure requires careful planning and technologies that give you confidence. Balancing system performance, efficiency, and cost is important for competing in the cloud market, and is required for building a profitable Block Storage as a Service offering. If you are using educated guesses or assumptions about performance and efficiency as you build out your service, you are opening yourself up to risk.

So why talk about this now? Because SolidFire has invested in a tool that eliminates the guesswork often used to plan for thin provisioning, compression, and deduplication. We have talked earlier about efficiency and how these storage technologies, in concert with SSDs, can be huge game changers in the cloud. Historically it has been difficult to design these advantages into an infrastructure without a debilitating performance impact. Trying to understand how the storage system itself affects efficiency complicates this even further. Each cloud service provider is different, and efficiency results can vary based on the data stored and the type of thin provisioning, compression, and deduplication a vendor offers.

It's hard to have any rule of thumb for how much thin provisioning, compression, or deduplication can save. You can imagine that thousands of virtual desktops would compress and dedupe like crazy, but exactly how much? Are you sure they will save as much as you hope? You also don't always know the details of how a particular vendor's feature works, or how it might behave when combined with other features. Does dedupe span multiple volumes? How do I account for dedupe in a multi-tenant environment? What segment size does it scan?

SolidFire's answer to these questions is to stop guessing.

SolidFire developed a command line utility with the ability to look at a specific set of data and say exactly how much SolidFire capacity would be required to store that exact data. eScanner evaluates block devices, files, file trees, or vmdk files and tells you exactly how much of that data is real, how much it would compress, and how much it would deduplicate on a SolidFire storage system. The same utility is also capable of aggregating multiple data sets so you can see how much more effective deduplication gets as you put more data on a SolidFire system. It's refreshing to be able to deliver clear and direct answers about efficiency, and to set expectations about data reduction rates based on real data.
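As a toy illustration of the kind of analysis eScanner performs (this is not its actual algorithm, and the 4 KiB block size is an assumption), one can hash fixed-size blocks to find duplicates and then compress the unique blocks:

```python
import hashlib
import zlib

def estimate_reduction(data: bytes, block_size: int = 4096):
    """Toy estimate of dedupe + compression savings on a byte buffer.

    Hash each fixed-size block to find duplicates, keep one copy of
    each unique block, then compress the unique blocks.
    """
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    unique = {hashlib.sha256(b).digest(): b for b in blocks}
    compressed = sum(len(zlib.compress(b)) for b in unique.values())
    return {
        "raw_bytes": len(data),
        "after_dedupe": sum(len(b) for b in unique.values()),
        "after_compression": compressed,
    }

# 100 identical 4 KiB blocks dedupe down to one block, which then
# compresses further because it is highly repetitive:
result = estimate_reduction(b"A" * 4096 * 100)
```

On real data sets the interesting part is exactly what this sketch cannot tell you: how your blocks dedupe against each other across volumes and tenants, and how a specific system's features interact, which is why measuring with the real tool beats guessing.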

I'm excited about our recent release of the eScanner utility and really interested to see what feedback we get about the different data sets users scan. I encourage you to download eScanner, run it as widely as you can, and see how much more efficiently your data could be stored within a SolidFire system.

-Adam Carter, Director of Product Management

Designed for Solid StateMonday, August 22, 2011 posted by Dave Wright

For the past 20 years, network storage systems have been designed around spinning disk, with the form factor, performance characteristics, and reliability profile of the HDD dominating every architectural and design decision that was made. Many of these systems with 10-20 year old architectures are now bolting on solid state disk, which comes with a number of tradeoffs that Adam previously discussed. However, today, rather than focus on the problems with traditional architectures and SSDs, I want to focus on the advantages of a system designed from the ground up for solid state.

We did just that here at SolidFire. We built one of the first general-purpose storage systems that has been designed exclusively for solid state storage.  Our use of solid state did not simply influence small decisions about data layout and I/O scheduling, but rather drove the entire architecture of the system. We completely rethought how a storage system could function if you were to remove disk from the picture, and ended up with a storage architecture that has very little resemblance to a traditional SAN.

From the outside, the system may look similar to other scale-out storage systems, with nodes and drives and iSCSI networking, but underneath the covers is something so different, it could never be built with spinning disk.

This fresh approach gives our customers tremendous benefits, such as increased performance, "hard" fine-grained quality-of-service guarantees, in-line deduplication, and reduced SSD wear. These technology advantages are enabling cloud service providers to invite mission-critical, performance-sensitive applications into a cloud infrastructure with greater confidence.

-Dave Wright, Founder & CEO

Not All QoS Is Created EqualTuesday, August 9, 2011 posted by Adam Carter

Quality-of-Service (QoS) features exist in everything from network devices, to hypervisors, to storage. When multiple tenants share a limited resource, QoS helps provide some level of control over how that resource is shared and prevents the noisiest neighbor from disrupting everyone.

In networking, QoS is an important part of allowing real-time protocols such as VoIP to share links with other less latency sensitive traffic. Hypervisors provide both hard and soft QoS by controlling access to many resources including CPU, memory, and network. QoS in storage is less common, but is now available on many high-end arrays. However, most approaches to storage QoS are "soft" - that is, based on simple prioritization of volumes, rather than hard guarantees around performance. Soft QoS features are effective only as long as the scope of the problem is small enough. In enterprise environments where an administrator has visibility across a global portfolio of applications, and performance fluctuations are not penalized as heavily, it is conceivable that prioritization can be managed with this more simplistic approach.

However, in a large-scale cloud environment, these soft QoS implementations come up short. When multiple tenants share storage, the concept of priority is ineffective. Unlike the enterprise storage admin, the CSP isn't afforded the luxury of application-level visibility, so it does little good to assign a priority level to a set of applications they have no control over. From the customer perspective, priority is a relative ranking lacking any real clarity on absolute performance. If a customer has a priority of 10 and everyone else is at 5, they may have twice the priority, but it will come at the expense of all the other tenants on that system. Moreover, even if performance is good, there is no guarantee it will stay that way. While the priority level may be controlled, the performance delivered to a particular level is still "best effort" in the context of all the other workloads on the system. This creates an unpredictable environment for both cloud service providers and, more importantly, their customers.

At SolidFire, one of our founding premises was that solving the performance challenges for cloud service providers required a completely different approach to Quality of Service. SolidFire has architected hard QoS controls into the system that are defined in terms that actually mean something to a customer: IOPS and MB/s. Each volume is configured with minimum, maximum, and burst settings for IOPS and bandwidth. The minimum provides a performance guarantee, independent of what other applications or tenants on the system are doing. The maximum and burst settings control the allocation of performance and deliver consistent performance to tenants. For the cloud provider, SolidFire QoS enables SLAs around exact performance metrics and complete control over the customer's experience. For cloud consumers, clear expectations around storage performance provide confidence and stability. With guaranteed performance, IT administrators can finally deploy their tier 1 applications in the cloud with confidence.
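As a rough sketch of how maximum and burst controls can interact, here is a simple credit model invented for illustration; it is not SolidFire's actual implementation, and the numbers are assumptions:

```python
class VolumeQoS:
    """Sketch of per-volume max/burst IOPS control via burst credits."""

    def __init__(self, min_iops, max_iops, burst_iops):
        self.min_iops = min_iops      # guaranteed floor (enforced cluster-wide)
        self.max_iops = max_iops      # sustained ceiling
        self.burst_iops = burst_iops  # short-term ceiling
        self.credits = 0              # accrued headroom, in IO operations
        self.max_credits = burst_iops * 10  # cap: roughly 10 s of bursting

    def allowed_iops(self, demanded_iops):
        """IOPS granted for the next one-second interval."""
        if demanded_iops <= self.max_iops:
            # Running below max accrues credits for later bursting
            self.credits = min(self.max_credits,
                               self.credits + (self.max_iops - demanded_iops))
            return demanded_iops
        # Spend credits to burst above max, never beyond burst_iops
        grant = min(demanded_iops, self.burst_iops,
                    self.max_iops + self.credits)
        self.credits -= grant - self.max_iops
        return grant
```

The minimum is the interesting part operationally: it is a floor the cluster must be able to honor for every volume simultaneously (not modeled in this sketch), which is what turns QoS settings into something an SLA can be written against.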

-Adam Carter, Director of Product Management

Server-Side Caching: A complex stopgap making up for deficiencies in current array designsWednesday, July 27, 2011 posted by Dave Wright

Chris Mellor from The Register recently wrote an informative article ("Why should storage arrays manage server flash?") that outlines the merits of server-side caching. You can read the entire article here. At SolidFire, we take a slightly different viewpoint on the subject and wanted to take the opportunity to expand on it here.

There are certainly advantages to server-side SSD caching. Most notably, it reduces load on storage arrays that are being taxed far beyond what they were originally designed for. However, in the long run I think we'll see server-side SSD caching as nothing but a complex stopgap making up for deficiencies in current array designs.

If you look at "why" it's claimed server-side cache is necessary, it basically boils down to:

-The array can't handle all the IO load from the servers, particularly when flash is used with advanced features like dedupe

-The reduction in latency from a local flash cache

The first is a clear indication that current array designs aren't going to scale to cloud workloads and all (or mostly all) solid-state levels of performance. Scale-out architectures are going to be required to deliver the controller performance needed to really benefit from flash.

The second is based on the assumption that the network or network stack itself is responsible for the 5-10ms of latency that he's reporting. The reality is that a 10G or FC storage network and network stack will introduce well under 1ms of latency; the bulk of the latency comes from the controller and the media. Fix the controller issues and put in all-SSD media, and suddenly network storage doesn't seem so "slow". Architectures designed for SSD like TMS, Violin, and SolidFire have proven this. Local flash, particularly PCI-attached, will still be lower latency, but that microsecond-level performance is really only needed for a small number of applications.
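A back-of-the-envelope latency budget makes the point. The component numbers below are illustrative assumptions in the spirit of the argument above, not measurements:

```python
# Illustrative latency budget (ms) for a networked read.
network_ms = 0.3          # 10GbE hop + network stack: well under 1 ms
hdd_controller_ms = 2.0   # controller queuing/processing on a busy array
hdd_media_ms = 6.0        # seek + rotational delay on spinning disk
ssd_controller_ms = 0.3   # a controller designed for flash
ssd_media_ms = 0.2        # NAND read

hdd_total = network_ms + hdd_controller_ms + hdd_media_ms   # ~8.3 ms
ssd_total = network_ms + ssd_controller_ms + ssd_media_ms   # ~0.8 ms

# On the HDD side, the network is only a few percent of the total
network_share_hdd = network_ms / hdd_total
```

Under these assumptions the network is a rounding error in the HDD case; a server-side cache attacks the smallest term in the sum, while fixing the controller and media attacks the largest.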

EMC and NetApp have huge investments in their current architectures, and are going to try every trick they can to keep them relevant as flash becomes more and more dominant in primary storage, but eventually architectures designed for flash from the start will win out.

-Dave Wright, Founder & CEO  

The best automated tiering is no tiering at allWednesday, July 13, 2011 posted by Dave Wright

Recently Hitachi and EMC have gotten in a blog fight about whose automated tiering technology is better. But are they asking the wrong question? Is tiering even the right solution to storage performance problems to begin with?

To be sure, on the surface the concept of automated tiering sounds great - your hot data goes in fast SSD storage, while seldom accessed data is on cheap spinning disk.  The problem with this type of optimization is that it is executed from a global perspective across the entire storage system.  The storage array optimizes the IO load across all of the data it controls. If you only have a single application, or perhaps a small number of applications running on your storage array, this global optimization probably works pretty well.

That's a good fit for a traditional enterprise SAN deployment, but consider a cloud environment where you have hundreds or thousands of virtual machines and applications, with new applications coming and going all the time. From the perspective of an individual application, storage performance can be radically unpredictable. One day, the array may decide the application's data is "hot" and serve it out of SSD with < 1ms response time, the next day another hot application may come online and suddenly response times jump to 10ms as the data gets pushed out to a SAS or SATA tier. This type of unpredictability is especially problematic in a multi-tenant service provider environment, where customers aren't aware of each other and any radical change in performance is likely to trigger a support call. It's not the storage array's fault - it's still trying to globally optimize, but try telling that to a customer whose website or database is suddenly slow.

So if tiering isn't the answer, what is? Simply putting all the data on SSDs helps, but doesn't take advantage of the fact that some applications and data are actually more active than others, and there is a hesitance to "waste" SSD performance on less active data. At SolidFire, we think the answer to these issues is performance virtualization. SolidFire's unique performance virtualization technology de-couples capacity from performance and allows service providers and their customers to dial in the exact performance required for each application. Have a lot of data that doesn't need much performance? No problem, those IOPS aren't wasted - they're simply available for other more demanding applications on the storage cluster. Either way, you get the performance you expect day after day, without any surprises, and if you need more or less, you can change it instantly. Skip the data tiering stopgap and get an all solid-state solution that can optimize performance across thousands of applications.

-Dave Wright, Founder & CEO

SolidFire Launch at Gigaom StructureTuesday, July 5, 2011 posted by Jay Prassl

If you have been following us here at SolidFire over the last few months, you know that we officially came out of stealth-mode at the Gigaom Structure conference in San Francisco in June. Structure provided us a great launch-pad to begin talking publicly about our company, our technology, and the availability of our early access program.

You can watch a playback of our Founder and CEO, Dave Wright, participating on a panel titled Cloud Storage: Moving Beyond Backup.

We have had fantastic press and analyst coverage over the past couple of weeks; numerous editors, bloggers, and analysts covered the SolidFire solution in their articles, blog posts, and tweets, and you can find it all in the News section of our website. We also refreshed our website to enable customers to begin to dive deeply into our technology and understand the impact that a SolidFire solution can have on their primary storage portfolio. Make sure you take a few minutes to check out our videos to learn about the benefits of SolidFire performance, efficiency, and automated management, each in 90 seconds or less.

As a company, we are growing rapidly here in Boulder, Colorado, and it is great to be part of Boulder's thriving storage and start-up community.  We have new people joining the team every week, and new customers joining our early access program almost daily.  It is an exciting time for us at SolidFire, as well as for our early access customers who are now able to offer scalable primary block storage to thousands of servers.

For Dave, Adam, and me, Structure was a great opportunity to finally let loose our enthusiasm for the work that we are doing at SolidFire. When you are passionate about what you do, stealth-mode can be a drag, and all of us are glad to be getting on with the business of helping cloud providers enable high-performance and high-efficiency primary block storage within the cloud.

To stay informed on what is new with SolidFire and to keep up with our growing community, sign up for our mailing list or follow us on Twitter @solidfire.

-Jay Prassl, VP of Sales & Marketing

Time for SolutionsMonday, June 20, 2011 posted by Dave Wright

Over the past few weeks, we've talked about the challenges of providing high-performance primary storage at cloud scale, challenges like performance, efficiency, and management. Today, I'm excited to stop talking about problems, and start talking about SolidFire's solutions.

This week SolidFire is taking the wraps off our all-SSD storage technology and announcing our early access program. On our new website you'll find our product described in detail for the first time, including the SolidFire Element™ operating system and the SF3010™ storage node.

After reading more about SolidFire, you may still wonder:

  • Is it really possible to build an all-SSD storage system that scales to petabytes of capacity and millions of IOPS?
  • Can that performance and scalability be achieved at a price per gigabyte similar to disk?
  • Can you really guarantee IOPS to thousands of individual volumes and put an SLA around storage performance?
  • Can efficiency technologies like deduplication and compression actually be built to run in real-time without affecting performance?

We certainly think so, having built the first storage system that can do all this and more. If you'd like to find out how, come talk with us and see what SolidFire can do for you.

-Dave Wright, Founder & CEO

Challenges of Block Storage as a Service at Cloud Scale Part 3 - ManagementMonday, June 13, 2011 posted by Dave Wright

So far in our series of blog posts on the challenges of deploying Block Storage as a Service, we have discussed why it's difficult to get both good performance and high efficiency with primary storage in the cloud. Because of these issues, what service providers are often left with is a sprawling, underutilized storage infrastructure that is extremely difficult to manage at scale.

Where does this management challenge come from? It's primarily caused by a disconnect between how traditional enterprise storage systems have been managed and how cloud providers want to build and manage their infrastructure. In the enterprise, expensive and complex storage equipment is looked after by experienced storage administrators. Given the cost of the equipment and the value of the data being stored, having a well-trained human configuring and managing the storage on a daily basis makes a lot of sense. Traditional storage companies have built their management systems around the demands of storage administrators, and, as a result, have created complex and feature-filled administration tools.

The problem is that this model doesn't scale. For a large-scale cloud, where you are growing quickly and deploying new storage on a weekly or even daily basis, and adding customers 24 hours a day, hiring an army of storage administrators to setup, configure, provision, manage, and troubleshoot that storage is not a viable option. The efficiencies of the cloud are not based on armies of administrators; they are based on efficient management through automation. Service providers don't want to administer their storage; they want to automate it.

An illustration I like to use is to compare deployment of compute capacity to storage. Most cloud providers are extremely efficient at deploying new compute capacity. Automated server configuration, deployment, and management tools allow new racks of servers to be plugged into the network and immediately added to the pool of available compute capacity.  All of these activities are accomplished without an administrator ever logging in or configuring a single thing. How can you do that with storage today? Setting up new storage arrays is a complex and time-consuming process. Provisioning new storage or adding capacity is something that has to be done carefully to avoid disruption and ensure security and data isolation are preserved. Automated alerting and reporting are primarily done through proprietary vendor tools or complex integrations. Any automation capabilities or APIs tend to be afterthoughts and cover only a small portion of the system's functionality.

What service providers are really looking for is a storage system that was designed with automation in mind from the start, with APIs that are comprehensive yet easy to integrate, and with management capabilities that can be consumed by a machine just as easily as by a human. Only then is storage really ready for cloud scale.

Performance, efficiency, and management are just three of the challenges facing cloud providers who want to deploy primary block storage at scale. SolidFire was built from the ground up to address these challenges and many others. Soon, we will be telling you just how we do that. I can't wait!

-Dave Wright, Founder & CEO


Challenges of Block Storage as a Service at Cloud Scale Part 2 - EfficiencyFriday, June 10, 2011 posted by Dave Wright

In Part 1 of our series on the challenges service providers face delivering block storage as a service, we discussed the challenge of performance. Today, we're going to talk about a second challenge service providers face: storage efficiency. The ratio between how much storage capacity a service provider buys and how much they are able to sell is a critical driver of bottom-line profitability, yet obtaining high utilization rates is a constant struggle.

Part of the reason for this inefficiency goes back to our discussion of the imbalance between storage capacity and performance. In an effort to provide performance that is as consistent as possible, service providers are commonly forced to deploy far more capacity (spindles) than they are able to use in order to provide the right number of IOPS. All that wasted capacity continues to consume space, power, and cooling, and drags down the profitability of the capacity that is sold.

Another challenge to high utilization rates is how service providers plan for growth and deploy new capacity. While most storage systems are designed to allow for capacity expansion through disk shelves, in practice many service providers deploy storage "fully configured" from day one. Reasons for deploying full storage configurations include better pricing from the vendor, reduced risk and complexity of adding new capacity while operating, and the fact that much of the cost of the storage system is in the controller and software, which must be purchased up front. Whatever the reason, the result is the same: low utilization rates during early deployment, which brings down overall utilization and reduces profitability.

Over the past few years, efficiency technologies that allow you to store more data in less space, like compression and deduplication, have started to appear in primary storage systems. While on the surface these features should be a huge boon to service providers looking to increase their efficiency, in reality they are seldom used.  Again, it comes down to the balance between performance and capacity. These efficiency features often incur a significant performance penalty while providing space that can't actually be used. In fact, many service providers don't even use thin provisioning, a storage feature that has been standard for years. Why? Because it makes capacity planning more difficult, and they get better performance by fat provisioning volumes up front.

What service providers really want is storage that is designed and balanced to run at consistently high utilization rates, and can be grown incrementally over time, so that it can be profitable from day one.

-Dave Wright, Founder & CEO



Challenges of Block Storage as a Service at Cloud Scale Part 1 - PerformanceMonday, June 6, 2011 posted by Dave Wright

For service providers who want to offer Block Storage as a Service as part of their cloud compute offering, a number of challenges exist. At SolidFire, we're focused on solving the biggest problems that service providers encounter when trying to build scalable, reliable, and profitable network-based primary storage. In this first post of a three-part blog series discussing these problems, we will address the challenge of performance.

Over the past 20 years, a huge performance imbalance has been created between processing power, which has doubled every 18-24 months under Moore's law, and storage performance, which has barely improved at all due to the physical limitations of spinning disk.  Meanwhile, storage capacity has exploded by a factor of 10,000 over that time. The result is that while capacity is plentiful and cheap, storage performance (measured in IOPS) is expensive.

For a service provider looking to sell block-based primary storage as a service, that imbalance makes it difficult to sell storage on a per-gigabyte model, which is how it is most commonly sold today. Customers who may buy only 50 or 60 GB of space for their application still expect reasonable performance - but when that customer is put on the same set of disks with dozens of others, their "fair share" of IOPS doesn't amount to very much. Even worse, performance will vary considerably based on how many other apps are on the same disk and how active those apps are at any given time. The result is poor, unpredictable performance, and unhappy customers. Today, service providers offering Block Storage via enterprise storage arrays typically deal with this challenge by using lots of fast, expensive FC and SAS disk, and utilizing only a fraction of the available capacity (a technique known as under-provisioning or short-stroking). Even with this approach, it's difficult or impossible for providers to guarantee customers any particular level of performance, short of putting them on their own dedicated disks and eliminating much of the benefit of an efficient multi-tenant cloud.
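The arithmetic behind that "fair share" is stark. The drive count, per-drive IOPS, and tenant count below are illustrative assumptions, not figures from the post:

```python
def fair_share_iops(disks, iops_per_disk, tenants):
    """Back-of-the-envelope per-tenant IOPS when tenants share spindles."""
    return disks * iops_per_disk / tenants

# A shelf of 14 15K SAS drives (~180 IOPS each) shared by 50 tenants
# leaves each tenant roughly 50 IOPS -- and only on average; any one
# noisy neighbor can consume far more than their share at any moment.
share = fair_share_iops(14, 180, 50)
```

The average is the best case: the actual IOPS any tenant sees at a given moment depends entirely on what everyone else on those spindles is doing.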

So what about flash? Doesn't it solve the performance problem? Today that's true only in part. As we previously discussed, most enterprise storage today makes only limited use of flash as a cache or tier for hot data, and overall array performance is often limited by the controller. While cache and tiering technology does a good job of "globally optimizing" array performance by putting the hottest data in low-latency flash, it can actually end up causing more headaches for service providers by making storage performance even more unpredictable for customers. From the perspective of an individual customer, their data may be blazing fast one minute as it is served from flash, and slow as a dog the next as it gets bumped to disk because another customer was "hotter." At this point, expect a support call. Inconsistent, unexplainable performance is one of the biggest complaints about block storage in the cloud today, and automated tiering and caching just make it worse. All service providers want is an endless amount of storage performance that can be carved up and sold in predictable, profitable chunks. Is that too much to ask?

-Dave Wright, Founder & CEO

Just How Expensive is Flash?
Thursday, May 26, 2011 posted by Dave Wright

For most people there are two common associations with SSDs: expense and performance. The performance side is hard to argue - SSDs can be 100X faster than spinning disk on many workloads. But what about cost?

By historical standards, solid state storage is now amazingly inexpensive. The myth that flash is expensive is propagated by enterprise storage vendors selling flash modules for $25-$50/GB or more. The performance benefits of flash allow customers to justify the ridiculous price, but also limit their use of flash to only the most critical, most performance-sensitive applications.

The reality of the situation is that flash chip prices have been on a dramatic decline over the last 5 years, dropping in price by 50% or more a year as demand increases and process sizes shrink. Spot prices for MLC flash are now around $1-$1.25/GB, and high-capacity MLC SSDs can now be had for under $2/GB. Of course, most enterprise storage vendors aren't using MLC; the limitations of their architectures often don't allow it. However, an architecture designed from the ground up around SSDs, balancing use of SLC and MLC technologies, is a different story.

In comparison to enterprise SATA drives, which sell for $0.15/GB, $1/GB for flash may still seem expensive. For applications where capacity is the only concern, that's certainly true, and it will continue to be the case for many years. However, for primary storage applications - those that require even a modest number of IOPS - a comparison to SATA drives does not make much sense. A better comparison for flash is 10K and 15K SAS drives, or even complex tiered solutions that use SSD, SAS, and SATA. With 15K SAS drives at $1/GB or more, flash is not far from closing the gap.
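The $/GB vs. $/IOPS distinction is easy to illustrate with a quick sketch. The prices follow the 2011 figures quoted above; the per-drive capacities and IOPS numbers are illustrative assumptions:

```python
# Illustrative $/GB vs $/IOPS comparison. Prices per GB follow the post;
# capacities and random-IOPS figures are assumptions for the era.
drives = {
    #            ($/GB, usable GB, random IOPS)
    "SATA 7.2K": (0.15, 1000,   100),
    "SAS 15K":   (1.00,  300,   180),
    "MLC SSD":   (1.25,  240, 20000),
}

for name, (price_per_gb, capacity_gb, iops) in drives.items():
    drive_cost = price_per_gb * capacity_gb
    print(f"{name}: ${drive_cost:.0f}/drive, "
          f"${price_per_gb:.2f}/GB, ${drive_cost / iops:.3f}/IOPS")
```

Measured per gigabyte, SATA wins by a wide margin; measured per IOPS, the SSD is roughly two orders of magnitude cheaper than either spinning option under these assumptions.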

For SolidFire however, the really exciting part is what happens when you combine the falling price of solid state storage with efficiency technology that dramatically increases effective capacity and requires a fraction of the power, cooling, and space of spinning disk. Suddenly spinning rust doesn't look so cheap after all, does it?
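A small sketch of that combination. The raw flash price follows the spot figure quoted above; the combined efficiency factor (deduplication, compression, thin provisioning) is a hypothetical assumption, not a SolidFire specification:

```python
# Illustrative effective $/GB after efficiency features.
# The efficiency factor is an assumption for the sketch.
raw_price_per_gb = 1.25    # MLC flash spot price quoted in the post
efficiency_factor = 4.0    # assumed combined dedupe + compression + thin provisioning
effective_price = raw_price_per_gb / efficiency_factor
print(f"Effective flash cost: ${effective_price:.2f}/GB")
```

Under that assumed 4x factor, effective flash cost lands near SATA's $0.15/GB - before counting the power, cooling, and rack-space savings.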

-Dave Wright, Founder & CEO

SANs don't do justice to SSD
Tuesday, May 10, 2011 posted by Adam Carter

So what would happen if you took the HDDs in your SAN and replaced them with the latest SSDs? Don't faster disks mean faster storage? Unfortunately, it is not that straightforward. Traditional SAN architectures dramatically complicate the use of SSDs because both the hardware and the software were designed around spinning storage media - not SSDs.

Today there is an ever-widening gap between compute and storage IO. Large multi-core servers packed with memory can deliver a high number of IOPS to extremely fast networks, while traditional storage systems have languished with high latency and poor IO. Compute technology has been outpacing SAN and disk performance for years, and at this point traditional SANs are consuming more than their fair share of the IT budget just trying to keep up. So why doesn't simply swapping SSDs in for HDDs fix the problem? The answer lies within the storage controllers and the storage operating system. Within traditional storage architectures these aged components do more harm than good to SSDs and are unable to take advantage of their benefits.

Controller IO Bottleneck
Traditional storage controllers were designed to manage thousands to tens of thousands of IOPS, not the hundreds of thousands to millions of IOPS that SSDs are capable of delivering.  Current controllers simply can't keep up.

Traditional SAN architectures are not designed to maintain the integrity of SSDs
Data layout architectures that optimize around the deficiencies of spindle physics are ineffective with SSDs. Write patterns and redundancy mechanisms such as RAID cause write amplification that puts unnecessary load on SSDs. These algorithms accelerate the wear of SSD media and have lent credence to the myth that SSDs are inferior to HDDs and wear out quickly. So, for the record: it is legacy storage architectures and how they manage SSDs that limit SSD use and life cycle. Today's SSD duty cycles can be on par with HDDs, and they are getting better.
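One concrete example of RAID-induced write amplification is the classic RAID-5 small-write penalty: each small host write triggers a read of the old data and old parity plus a write of the new data and new parity, so two media writes land on the drives for every one the host issued. The sketch below uses that standard behavior; the workload size is an arbitrary assumption:

```python
# Illustrative RAID-5 small-write amplification. The 2x media-write factor
# is the standard RAID-5 read-modify-write behavior (new data + new parity);
# the workload size is an arbitrary assumption.
host_writes = 1_000_000             # small random writes issued by the host
MEDIA_WRITES_PER_HOST_WRITE = 2     # RAID-5: new data block + new parity block
media_writes = host_writes * MEDIA_WRITES_PER_HOST_WRITE
print(f"{host_writes} host writes -> {media_writes} media writes "
      f"({MEDIA_WRITES_PER_HOST_WRITE}x amplification, before any "
      f"flash-internal write amplification)")
```

On HDDs this penalty costs only performance; on SSDs, every amplified write also consumes a slice of the media's finite program/erase budget, which is how layouts tuned for spindles end up wearing flash prematurely.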

Limited Deployment of SSD
Within traditional SANs, SSDs are predominantly used as either a cache or a small storage tier. SSDs used in these modes receive enormous write traffic and churn, which places tremendous wear on the drives. To compensate, most manufacturers require the exclusive use of the most expensive, most wear-resistant SSDs, which drives up solution cost. Think of the cost and wear implications if you deployed SSDs across an entire legacy SAN architecture... not a pretty picture.

The solution to leveraging SSDs in an intelligent and cost-effective manner is a new storage architecture: one built from the ground up around SSD technology that sizes cache, bandwidth, and processing power to match the IOPS that SSDs provide while extending their endurance. It requires an architecture designed to take advantage of SSDs' unique properties in a way that makes a scalable all-SSD storage solution cost effective - today.

-Adam Carter, Director of Product Management

Welcome to SolidFire
Monday, May 9, 2011 posted by Dave Wright

Primary storage for cloud computing requires a new solution.
That simple truth is one of the biggest lessons I learned from my time at Rackspace.

Cloud computing amplifies many of the shortcomings of today's enterprise storage systems. Demands around scalability, performance, efficiency, availability, and automation increase dramatically when you move from an enterprise environment supporting a few dozen applications to a cloud that is the backbone of thousands.

Since its inception, SolidFire has been focused on solving one critical problem: how to provide high-performance primary storage to thousands of applications in a cloud computing environment. Today we are starting to take the covers off the amazing technology we've built to address this challenge - technology that has the potential to revolutionize not just the cloud computing world, but the entire landscape of primary storage.

Behind the technology is an equally amazing team: storage industry veterans from LeftHand Networks, IBM, and HP; distributed computing wizards from Cornell and Georgia Tech; and virtualization and cloud computing experience from VMware and Rackspace.
We're fortunate to be backed by a team of investors from Valhalla, Novak Biddle, and NEA with both the long-term vision and the deep pockets necessary to help us build a world-class company.
Over the coming weeks and months, I look forward to sharing more of our vision, our team, our product, and the groundbreaking technology that makes it all possible.

-Dave Wright, Founder & CEO