
The hyper-converged hype isn't going away soon

Hyper-converged products have shown their value and the vendor competition is just starting to heat up.

While hyper-converged infrastructure may be a young concept, the stage is quickly crowding with would-be lead actors. This is a technology concept that is going to stick.

As vendors began to collaborate on data center designs, the need quickly emerged to detail the elements of those designs so that customers could navigate the complicated mess of interoperability and hardware/software compatibility in these solutions. The result was the reference architecture: a sort of blueprint for combining these technologies into a single solution.

In 2009, VMware, Cisco and EMC took the reference architecture a step further, creating the Virtual Computing Environment (VCE) coalition. Through VCE, the entire stack of compute, storage and networking was bundled in a converged architecture, called a Vblock. Though this was, in essence, a reference architecture, it introduced new management practices that allowed a customer to treat the Vblock as an entire data center in a box. It would arrive pre-configured and ready to go. It would also be managed and maintained as a single product, with all components updated and patched at once instead of each element being treated as its own point of management. This meant that a Vblock environment was not only fully compliant with each vendor's interoperability requirements on the day it was delivered, but that it would remain compliant throughout the lifespan of the Vblock.

The concept of converged architecture took hold, and VCE has been very successful in maturing and adapting the Vblock over the last six years. However, many customers were still looking for a smaller, more granular solution. They wanted to be able to start smaller than a rack and scale out over time. In 2011, Nutanix began shipping its hyper-converged product. Not only were the technologies in the stack compliant (as with a reference architecture) and delivered as a single product (like a converged architecture), but Nutanix was delivering its product in a footprint as small as 2U of rack space. This solution bundled compute hardware, virtual storage and virtual networking with a VMware hypervisor to shrink a converged solution into an incredibly small footprint. Nutanix's software, the Nutanix Distributed File System, pools the local drives from each of the servers to create a virtual storage array with enterprise-class features. This shook the market and started a new trend.

Not long after Nutanix hit the market, SimpliVity followed with its own hyper-converged product. While Nutanix initially packaged its software on Super Micro hardware, and SimpliVity initially released on Dell hardware, both have since adjusted their hardware bases. Nutanix has formed an OEM partnership with Dell, and SimpliVity has established an OEM partnership with Cisco. While I like what SimpliVity and Cisco are working on, I liked Nutanix better on its original Super Micro hardware platform. That said, neither of these hardware changes has significantly altered the core offerings from Nutanix and SimpliVity. Both continue to gain momentum and drive demand for hyper-converged infrastructure products.

Not to be outdone, VMware has stepped into the hyper-converged market space. However, it is playing the role of "enabler" rather than competitor. VMware developed its EVO:RAIL platform, and then opened the door for hardware vendors to OEM the product. In less than a year, EMC, NetApp, Fujitsu, Hitachi Data Systems, Dell, HP and Super Micro have all signed on to sell EVO:RAIL products. That is a very impressive list of OEM partners. While this is a new foray into the compute space for EMC and NetApp, EVO:RAIL will also serve as a direct competitor to solutions that have been built on Dell and Super Micro platforms. I am sure that will lead to some interesting theatrics in the future.

In the meantime, you may be left wondering why your IT group should consider hyper-converged infrastructure technologies for future data center strategies. These solutions promise to lower the barrier to entry for new data centers and to ease the burden of growth. These are two areas that provide immediate value to most organizations.

Do hyper-converged products solve all of your IT woes? No. But they can make life easier and address pesky capital expense issues that are often tied to ongoing support of more complex hybrid solutions. While I am not ready to rip and replace existing deployments in favor of hyper-converged products, I am looking for ways to begin testing them. I am still not convinced that, just because I "can" add storage capacity every time I add compute power, I "should." Sometimes I only need to grow one or the other, and few hyper-converged infrastructure products allow that. However, if the cost savings are great enough, maybe that imbalance in supply versus demand is not an issue.

I believe that hyper-converged technologies will prove to be a pivot point -- a paradigm shift -- that will serve as a catalyst to change how technology products are designed. If you are not at least considering them in your long-term IT strategy, you should be.

Next Steps

How hyper-convergence tackles server and storage strain

How the hyper-converged market stacks up

Top-notch storage sends hyper-converged system interest soaring
