Gary Hutchins
Director of Solutions Architecture

Here we are, well into the second half of 2015, and Hyper-converged has quickly risen to the top of the IT infrastructure ‘buzzword’ charts. If vendors aren’t talking about it, customers are asking about it, and every day someone new enters the space (or re-brands or pivots to take advantage of the market). The last time I counted, 20+ players were positioning ‘Hyper-converged’ solutions.

We like Hyper-converged and offer solutions from several different vendors – what’s not to like about the concept of CPU, storage and memory all bundled into one easy-to-consume, scale-out package that is simple to deploy and expand?

What we don’t like is all the players touting that Hyper-converged is the answer for ‘any and all workloads’ (and especially ‘any mix of workloads’).

As with any new technology (or new application of technology), we all have to take a step back and focus on the real infrastructure requirements and business use-case for a given situation.

How much CPU, storage (both capacity and performance), memory and network connectivity does my environment or application need? Does it scale in a fixed ratio, or do I need to scale memory, CPU, storage IO and storage capacity independently? Are we building a cluster dedicated to a single application, or are we trying to consolidate a variety of mixed workloads?

Remember, most Hyper-converged solutions today prioritize simplicity over flexibility and force you to scale resources you may not need in order to add the ones you do; so you may be overpaying if you’re just looking to scale one component. Sometimes that is a perfectly reasonable trade-off, sometimes it’s not.
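To make that trade-off concrete, here is a minimal sizing sketch in Python. All of the node specs and workload numbers are hypothetical, invented purely for illustration; the point is simply that when every resource scales in lockstep, the most demanding dimension sets the node count and the rest sits underutilized.

```python
import math

# Hypothetical fixed-ratio node specs (illustrative numbers only)
NODE = {"cpu_cores": 20, "ram_gb": 256, "capacity_tb": 8.0, "iops": 25_000}

# Hypothetical capacity-heavy workload requirement
WORKLOAD = {"cpu_cores": 40, "ram_gb": 512, "capacity_tb": 96.0, "iops": 30_000}

# Every resource scales together, so the most demanding one drives the purchase
nodes = max(math.ceil(WORKLOAD[r] / NODE[r]) for r in NODE)
print(f"Nodes required: {nodes}")

# Show how much of each resource you bought versus how much you actually use
for r in NODE:
    bought = NODE[r] * nodes
    print(f"{r:12s}: bought {bought:>9,.0f}, used {100 * WORKLOAD[r] / bought:5.1f}%")
```

With these made-up numbers, storage capacity drives the cluster to 12 nodes while CPU, memory and IOPS all land below 20% utilization, which is exactly the kind of mismatch worth checking before you buy.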

We see a wide variety of applications, each with different resource-scaling ratios, and those ratios can change over time. The resource requirements of general-purpose, mixed-workload clusters can be even more difficult to predict.

Some enterprise workloads are easier to profile, with more predictable and linearly scaling resource requirements…VDI being a good example. This is the reason VDI has remained the ‘low-hanging fruit’ for most of the Hyper-converged players. VDI also typically has a smaller and more predictable ‘working set’ profile, allowing it to function better with the hybrid storage subsystems in most of these systems than other applications might. Lastly, VDI is one of the few enterprise services today that often runs in its own dedicated cluster. Combine all of this and VDI often stands as a pretty good use case for Hyper-converged, though even here there can be exceptions depending on the environment.
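As a quick back-of-the-envelope illustration of why the working set matters on hybrid storage, here is the kind of arithmetic worth running up front; all of the figures below are hypothetical and will vary widely by environment:

```python
# Back-of-the-envelope VDI working-set check (all figures hypothetical)
desktops_per_node = 100
working_set_gb_per_desktop = 3.0   # VDI working sets tend to be small and steady
flash_tier_gb_per_node = 800       # usable SSD cache/tier in a hybrid node

aggregate_ws_gb = desktops_per_node * working_set_gb_per_desktop
fits = aggregate_ws_gb <= flash_tier_gb_per_node
print(f"Aggregate working set: {aggregate_ws_gb:.0f} GB per node "
      f"({'fits within' if fits else 'exceeds'} the {flash_tier_gb_per_node} GB flash tier)")
```

An application with a larger or less predictable working set could easily overrun that same flash tier and fall back to the slower disk layer, which is where hybrid systems tend to struggle.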

You can certainly size and configure HC clusters for other enterprise applications, but the question then becomes whether it really makes sense to deploy, manage and scale separate clusters for different applications. Sometimes it will, often it won’t.

So don’t get me wrong: I think Hyper-converged solutions are great, and we all believe things are quickly headed toward a ‘Converged / Software-Defined’ world. HC is a key component of our offerings and solution architecture toolkit. We just believe the decision is not always as simple as the marketing sheets suggest, and we recommend that IT organizations consider their specific use-cases and application requirements…as we all should with any technology.

And if your requirements ARE a good fit for one of the current Hyper-converged platforms, we’ll absolutely be the first to recommend it.