Production Databases in Your Cloud

In Ops and DBA circles, it has been accepted wisdom for nearly a decade that databases are best run on dedicated, bare-metal hardware. While multi-tenant cloud infrastructure has been wonderful for application servers and utility arrays, the heart of the system, the database, has been considered off-limits. There have been valid reasons for this mindset, including persistence, latency, and IOPS shortcomings that are show-stoppers for database performance. Recent forays into cloud database hosting have focused on in-memory deployments and complex "hybrid cloud" implementations that are essentially networking band-aids masking the real issue.

But today, things are changing. Compute and memory have been solved problems for a while now, but what about storage? How do we ensure that a virtual server has the IOPS, low latency, and persistence that are essential for any serious production database implementation? What can be done to ensure that shared storage resources avoid the contention issues that might impact individual production instances?

What if you could effectively virtualize the storage layer for database applications, creating a stable, safe, and durable platform to grow any database solution? Imagine if you could dynamically control the IOPS throughput for every single data volume in your system, guaranteed, in real time. Now think about how useful it could be to grow your disks in seconds, with a single API call or using a slider-bar in a user interface. Consider the implications of knowing that your storage benefited from inline compression and de-duplication, with no impact to throughput or latency. What if you never had to buy another SAN head unit, ever? What if you could scale-out your distributed storage layer by simply adding more 1U nodes to a rack?
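As a concrete illustration of what "controlling IOPS with a single API call" could look like, here is a minimal sketch of building a JSON-RPC request that pins min/max/burst IOPS on one volume. The endpoint, method name, and parameter names here are illustrative assumptions, not a definitive API; consult the vendor's API reference for the real interface.

```python
import json

# Hypothetical JSON-RPC endpoint -- replace with your cluster's actual address.
API_ENDPOINT = "https://storage-cluster.example.com/json-rpc"


def build_qos_request(volume_id, min_iops, max_iops, burst_iops):
    """Build a JSON-RPC payload that sets QoS limits on a single volume.

    Method and parameter names ("ModifyVolume", "qos", etc.) are assumed
    for illustration only.
    """
    return {
        "method": "ModifyVolume",
        "params": {
            "volumeID": volume_id,
            "qos": {
                "minIOPS": min_iops,      # guaranteed floor for this volume
                "maxIOPS": max_iops,      # sustained ceiling
                "burstIOPS": burst_iops,  # short-term burst allowance
            },
        },
        "id": 1,
    }


payload = build_qos_request(volume_id=42, min_iops=1000,
                            max_iops=5000, burst_iops=8000)
print(json.dumps(payload, indent=2))
# To apply it, POST the payload to API_ENDPOINT with an authenticated
# HTTP client, e.g. requests.post(API_ENDPOINT, json=payload, auth=creds).
```

Because the request is a plain JSON document, the same call can be driven from a UI slider, a cron job, or a monitoring alert handler without any change to the storage layer itself.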

These things are completely possible and available today to any public or private cloud implementation team. It is possible to entirely remove the kludges and workarounds traditionally employed to optimize database operations in cloud environments, while retaining top-tier performance for IOPS-sensitive applications, at scale. Using a well-developed API, you can build Puppet and Chef patterns to completely virtualize your platform. It's a goal that many Operations teams have chased for years.
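To make the Puppet/Chef angle concrete, here is a hedged sketch of a provisioning step that a configuration-management run could shell out to: creating a volume with a size and QoS profile in one call. As above, the method and parameter names are hypothetical assumptions standing in for whatever the real API exposes.

```python
import json

# Hypothetical endpoint -- a Puppet exec resource or Chef execute block
# would invoke a script like this during node provisioning.
API_ENDPOINT = "https://storage-cluster.example.com/json-rpc"


def build_create_volume_request(name, account_id, size_gb, min_iops, max_iops):
    """Build a JSON-RPC payload that provisions a database volume.

    "CreateVolume" and its parameters are assumed names for illustration.
    """
    return {
        "method": "CreateVolume",
        "params": {
            "name": name,
            "accountID": account_id,
            "totalSize": size_gb * 1024 ** 3,  # size expressed in bytes
            "qos": {"minIOPS": min_iops, "maxIOPS": max_iops},
        },
        "id": 1,
    }


payload = build_create_volume_request("mongo-data-01", account_id=7,
                                      size_gb=500, min_iops=2000, max_iops=6000)
print(json.dumps(payload, indent=2))
```

Wrapping storage provisioning in an idempotent script like this is what lets the storage layer participate in the same automated lifecycle as the application servers.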

The importance of these efficiencies and operational cost savings cannot be overstated. IT budgets have flattened dramatically in recent years, and it's no longer acceptable to over-provision and under-utilize in order to squeeze performance out of square-peg/round-hole hardware and software solutions. You do not have to sacrifice disk space for spindles. It is no longer necessary to undertake step-wise projection planning around hardware limitations in order to scale. You can plan out your cores/memory/storage/IOPS matrices in detail, with far fewer variables than ever before. The result is a much smoother, more linear growth curve where costs remain in line with real capacity.

The time is now to challenge conventional thinking. No longer must you accept dedicated hardware as the only option for running your databases. If solving the longstanding database/storage challenges outlined above appeals to you, you should consider the game-changing capabilities of a true next-generation, scale-out data storage solution like SolidFire. We think you will be pleasantly surprised by what you find…and we are only getting started!

Today we are kicking off an expanded presence in the database market with the announcement of our MongoDB partnership. To hear more about how SolidFire will completely change your impression of storage for MongoDB, register for our webinar on November 6th. You can also read about our MongoDB benefits, best practices, and more on our Database Solution page.

-Chris Merz, Senior Database Application Engineer


