Understanding why hyper-converged platforms are getting so much attention requires a brief jaunt through recent history. The original mainframe approach was replaced by distributed computing -- a setup consisting of separate servers, storage and networks. During this transition, distinct vendor groups grew up around each area, such as Dell and Hewlett Packard Enterprise for servers, EMC and NetApp for storage, and Cisco and Juniper for networks.
Organizations bought racks and built their own platforms, combining the best components from each area. This approach had its problems, mainly regarding personnel. The multiple platforms required skilled employees who understood the interplay among the various components, and how to provision, monitor and maintain an overall platform. Many times, however, these platforms failed in their main aim: to support the business.
A few attempts were made to streamline the creation of IT platforms. Blades and bricks made life easier: certain platform functions were combined into a specialized chassis, which reduced the skills required to put everything together in an optimized manner. Blade computing, though, didn't take hold in a significant way. Having to buy a specific chassis that required regular upgrades created the perception that vendors had much too tight a grip on their customers.
The emergence of the hyper model
In hopes of solving the problems of the past, vendors, such as DataCore Software, Nutanix, SimpliVity and VMware, created the hyper-converged platform. This model preconfigures compute, storage and networking -- along with virtualization -- to provide an optimized system that can be up and running in a short period of time. Incumbent vendors, such as Dell with its FX2, IBM with its PureFlex, and Hewlett Packard Enterprise with its ConvergedSystem offerings, have also jumped on the bandwagon.
In essence, we are seeing a distributed computing version of the mainframe -- along with many of the benefits and problems that accompany this type of model.
The benefits are obvious. Hyper-converged vendors have engineered the system so that it operates at optimum levels. In addition to the pre-engineered hardware, hyper-converged vendors include a preconfigured software stack, ranging from a relatively simple hypervisor, operating system and systems management environment to a full-blown elastic private cloud, with intelligent workload management software. Purchasers need to have a reasonable understanding of how they will be using the system to ensure they choose the right approach.
Whether an organization chooses a workload-specific system or a flexible cloud platform, life is a lot easier once the hardware is delivered. As everything is consolidated into one system, the buying organization's IT department has far less to do when provisioning the system. Most of the time, IT simply needs to unpack it, plug it in, input a few variables -- IP address, domain name system settings and so on -- and then start installing applications.
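Those "few variables" are worth getting right before provisioning starts, since a typo in an IP address or DNS entry is far cheaper to catch up front than after applications are installed. The sketch below shows one way to sanity-check such first-boot settings; the field names are illustrative assumptions, not any vendor's actual schema.

```python
import ipaddress

def validate_initial_settings(settings: dict) -> list:
    """Check the handful of first-boot variables a hyper-converged
    appliance typically asks for. Field names here are hypothetical."""
    errors = []
    # Management IP and gateway must parse as valid IP addresses.
    for field in ("management_ip", "gateway"):
        try:
            ipaddress.ip_address(settings[field])
        except (KeyError, ValueError):
            errors.append(f"{field}: missing or not a valid IP address")
    # Every listed DNS server must also be a valid IP address.
    for dns in settings.get("dns_servers", []):
        try:
            ipaddress.ip_address(dns)
        except ValueError:
            errors.append(f"dns_servers: {dns!r} is not a valid IP address")
    if not settings.get("domain"):
        errors.append("domain: required")
    return errors

# A typo in the gateway address is caught before provisioning begins.
problems = validate_initial_settings({
    "management_ip": "10.0.0.10",
    "gateway": "10.0.0.256",        # invalid octet
    "dns_servers": ["10.0.0.2"],
    "domain": "corp.example",
})
```

Standing the check in front of the vendor's setup wizard, however it is invoked, keeps a single bad octet from stalling an otherwise turnkey rollout.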
Server, network and storage interactions are all managed directly by the software intelligence the vendor builds into the system. And this is where one of the battles will be fought: Which vendors can build the best intelligence into their hyper-converged platforms?
With all of these benefits, and just a slight nagging worry over how intelligent the software is, it all sounds great, doesn't it?
What are the drawbacks to hyper-converged infrastructure?
Hyper-converged infrastructure may be wonderful for some organizations, but there are downsides.
Most hyper-converged platforms are built on commodity components -- standard, mainly Intel, CPUs; SATA or Serial-Attached SCSI disk drives; and standard network connectors. However, a lot of the internal connectivity is proprietary, which leads back to the same issues found with blade computing and its need for a specialized chassis. If the internal bus structure of a hyper-converged system requires all extra components to be purchased from the original vendor, then you are beholden to that vendor. This may go further: the system may use less-commodity storage, such as PCIe or mSATA flash modules, and it may include offload processors, such as GPUs. Some will see these as red flags.
But this arrangement may not be as worrisome as it seems. As long as the hyper-converged system is standardized on the outside -- that is, it speaks TCP/IP over Ethernet and has no external dependencies on that vendor's specific equipment -- then using proprietary internal technology won't be an issue.
In fact, there is a hidden benefit in having vendors add their own touches to these systems. If all hyper-converged platforms had to be based on standard components, with standard connections and standard firmware, all systems would be the same. By allowing for innovation at the internal hardware level, Vendor A's hyper-converged system may meet your distinct requirements far better than Vendor B's.
This doesn't alleviate all concerns, though, as expansion issues may persist.
Some hyper-converged platforms are sized for a specific environment. If your organization wants to use a hyper-converged system as a complete private cloud platform, then you need to ensure it has the capability to share its resources in an elastic manner and that those resources can be easily multiplied, as required.
Some systems have the requisite space to expand while using the same vendor's equipment. Others require that the buyer purchase another system and, essentially, cluster the two together. Some systems allow companies to easily add other vendors' equipment in areas such as network-attached storage or storage area networks; others struggle to make use of external resources in an optimized manner. Many systems require that if one type of resource is added, more of the other resources must be added at the same time. It might make sense to look for systems that have built-in systems management and the capability to manage external third-party equipment.
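The coupled-resource point is easy to see with a little arithmetic. When a platform only grows in whole nodes, a purely storage-driven expansion still pulls compute along with it. The sketch below uses assumed node specifications (the figures are illustrative, not any vendor's actual sizing) to show how much surplus compute a storage target implies.

```python
import math

# Hypothetical node specification -- real figures vary by vendor and model.
NODE_TB = 20          # usable storage per node, in TB
NODE_CORES = 32       # CPU cores per node

def nodes_for_storage(extra_tb: float):
    """Return (nodes needed, CPU cores acquired along the way)
    when growth is driven purely by storage demand."""
    nodes = math.ceil(extra_tb / NODE_TB)
    return nodes, nodes * NODE_CORES

# Needing 50 TB more storage forces buying 3 whole nodes -- and 96 cores,
# whether or not the workload needs any additional compute.
nodes, cores = nodes_for_storage(50)
```

Running the same calculation against each candidate vendor's node sizes is a quick way to compare how much stranded capacity a storage-led or compute-led growth plan would create.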
Overall, it makes sense to look at hyper-converged systems, as they allow an IT department to react faster to its organization's needs. Buyers need to make sure that any system is fit for purpose and that it has the flexibility to grow as needs change.