Be Prepared: Next Up, QoS-as-a-Service


Last Tuesday evening, we co-hosted the QoS Summit during HostingCon with ScienceLogic and Structure Research. Over twenty cloud provider executives attended this roundtable to discuss key challenges and opportunities to deliver guaranteed performance in the cloud. Below is an excellent recap written by Antonio Piraino, CTO of ScienceLogic. Please see the original blog post here.

– Jay Prassl, SolidFire

As the cloud enters its next stage of growth, most of us are no longer asking why or how we should run applications in the cloud. Instead, we are looking to optimize and guarantee performance, both to meet service obligations and to deliver the best possible service.

What has become increasingly clear over the past few years is that this cloud phenomenon, especially as it pertains to the oncoming wave of enterprise consumers, is ultimately dominated by IOPS as a proxy for Quality of Service (QoS) expectations in the cloud. This prompted us to join SolidFire and Structure Research in co-hosting a first-of-its-kind QoS Summit with a select group of hosting executives, in the midst of the annual HostingCon event held in Austin, Texas. The discussion was a lively one, but there was consensus around two needs: a cost-effective alternative to traditional and somewhat limited storage mechanisms, and the ability to control, visualize, and manage the IOPS associated with the dynamic workloads that cloud services generate.


Is your noisy neighbor keeping you awake at night?

The noisy neighbor problem stems from the fact that the cloud is built on shared resources: bandwidth, operating systems, CPU, memory, and storage. A number of technologies already help service providers gain control over the network (e.g. MPLS), and containerized and virtual technologies handle the relatively simple partitioning of CPU and RAM. (Memory virtualization helped overcome physical memory limitations.) The disk subsystem, however, remains extremely difficult to partition. That means that, as client workloads grow more demanding, some virtual machines on a physical host invariably consume very large amounts of disk I/O, and their neighboring virtual machines suffer very poor performance as a result.

While bigger arrays are helping, the neighbors are getting noisier. There is a need, therefore, to control the allocation of storage and the associated QoS per workload. Service-level expectations are coming: SMBs have not yet defined their IOPS requirements, but large enterprises are already savvy about the performance they expect from a multi-tenant service provider, which must be in line with their traditional DAS solutions. The power of the modern solutions on offer from SSD providers such as Intel and hot startup SolidFire is in knowing that you can guarantee a certain number of IOPS on each volume, and pair that guarantee with the elastic cloud platform, in parallel with the compute resources allocated to each workload.
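To make that concrete, here is a minimal sketch of how a provider might set a per-volume IOPS guarantee through SolidFire's JSON-RPC management API. The endpoint, credentials, and volume ID are placeholders, and while the ModifyVolume method and qos parameter names follow the Element API, verify them against your cluster's API version:

```python
import requests

# Placeholder cluster management endpoint and credentials.
MVIP = "https://cluster.example.com/json-rpc/7.0"
AUTH = ("admin", "password")

def set_volume_qos(volume_id, min_iops, max_iops, burst_iops):
    """Pin a volume to a guaranteed IOPS band via the Element API."""
    payload = {
        "method": "ModifyVolume",
        "params": {
            "volumeID": volume_id,
            "qos": {
                "minIOPS": min_iops,     # floor the volume is guaranteed
                "maxIOPS": max_iops,     # sustained ceiling
                "burstIOPS": burst_iops, # short-term ceiling for spikes
            },
        },
        "id": 1,
    }
    # verify=False because self-signed certs are common on cluster MVIPs.
    resp = requests.post(MVIP, json=payload, auth=AUTH, verify=False)
    resp.raise_for_status()
    return resp.json()

# Example: guarantee a tenant's database volume 1,000 IOPS,
# cap it at 5,000 sustained, and allow bursts to 8,000.
print(set_volume_qos(42, 1000, 5000, 8000))
```

The min/max/burst triple is what turns shared flash into a QoS product: the minimum is the guarantee you can sell, the maximum contains the noisy neighbor, and the burst absorbs short spikes without breaking anyone else's floor.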

Use-Cases

So how does one go about managing and displaying QoS to your constituents? That's where something like the ScienceLogic PowerPack (read: an app-store-esque app) recently created to instrument against the SolidFire API becomes useful: it visualizes those now-controllable IOPS.
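As a rough illustration of what that kind of instrumentation does under the hood, the sketch below polls per-volume performance counters from the same JSON-RPC API. The ListVolumeStatsByVolume method and actualIOPS field reflect the Element API, but treat the exact names as assumptions to confirm against your cluster:

```python
import time
import requests

MVIP = "https://cluster.example.com/json-rpc/7.0"  # placeholder endpoint
AUTH = ("admin", "password")                        # placeholder credentials

def poll_volume_iops(samples=6, interval=10):
    """Print observed IOPS per volume at a fixed interval --
    roughly the data a monitoring app would graph over time."""
    payload = {"method": "ListVolumeStatsByVolume", "params": {}, "id": 1}
    for _ in range(samples):
        resp = requests.post(MVIP, json=payload, auth=AUTH, verify=False)
        resp.raise_for_status()
        for s in resp.json()["result"]["volumeStats"]:
            # actualIOPS is the observed rate; a dashboard would plot it
            # against the volume's configured min/max QoS band.
            print(f"volume {s['volumeID']}: {s['actualIOPS']} IOPS")
        time.sleep(interval)

poll_volume_iops()
```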

At the same HostingCon event, I moderated a session with a variety of hosters that had kicked the tires on SolidFire in particular, including Codero, Crucial Cloud Hosting, and SoftLayer. The draw of a company like SolidFire became obvious, given the similarities in their cloud offerings. These companies had effectively started out with cloud offerings built on bare metal servers backed by a SAN. The next evolution was to offer cloud services on local storage to reduce the load on the SAN. Ultimately they were seeking local-disk performance, since disk I/O is where their customers' first problems bubbled to the top.

But a single-server or bare metal model is no longer a cloud offering, and the alternatives that stripe across numerous resources to dedicate IOPS quickly become expensive. Moving away from traditional allocations of spindles and mechanics to technologies that can dedupe, compress, and allocate volumes on the fly is not only impressive, but going to become necessary. Add to this the idea that usable space on a cluster can actually be higher than the usable space on its drives (strange, but true), and suddenly the price of modern technologies like SolidFire's is quickly becoming an affordable way to offer that QoS as a Service.
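To put numbers on the "strange, but true" point, here is a back-of-the-envelope calculation with hypothetical efficiency ratios (real figures depend entirely on the workload):

```python
# Hypothetical efficiency figures; real ratios vary by workload.
raw_tb = 100          # raw SSD capacity in the cluster
dedup_ratio = 1.7     # duplicate blocks are stored only once
compression_ratio = 1.5

effective_tb = raw_tb * dedup_ratio * compression_ratio
print(f"Provisionable capacity: {effective_tb:.0f} TB "
      f"from {raw_tb} TB of raw flash")  # -> 255 TB from 100 TB
```

Because deduplication and compression happen inline before data lands on flash, the cluster can honestly provision more usable space than its drives hold, which is what shifts the economics away from striping raw spindles.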


Guest blog by Antonio Piraino, ScienceLogic
