Which approach is better: hyper-converged software or rack servers?

IT clusters are undergoing a rapid transformation, and infrastructures are changing to reflect that fact. Hyper-converged software, in particular, seems like the way of the future.

The traditional IT cluster consists of racks with servers, storage appliances and top-of-rack switches. This model is going through a major metamorphosis today, driven by the fact that servers and storage appliances are beginning to look indistinguishable.

This convergence is a result of solid-state drives (SSDs), which deliver so much bandwidth that just a few drives saturate any current controller and its associated network connections. Just think of an SSD delivering 10 GB per second (GBps) and you see the dilemma: twelve of those SSDs need a CPU with enormous I/O connectivity to keep up with the 120 GBps they deliver in aggregate. That spells the end of the large disk boxes we used in the RAID era.
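
To make that arithmetic concrete, here is a quick back-of-the-envelope sketch in Python. The per-drive figure is the 10 GBps example above, and the link math ignores protocol overhead:

```python
# Back-of-the-envelope check of the bandwidth argument above. The
# per-drive figure is the article's 10 GBps example; protocol overhead
# is ignored for simplicity.
SSD_BANDWIDTH_GBPS = 10        # GB/s per drive, from the example above
DRIVE_BAYS = 12                # typical rack server drive-bay count

aggregate_gbps = SSD_BANDWIDTH_GBPS * DRIVE_BAYS
print(f"Aggregate SSD bandwidth: {aggregate_gbps} GBps")       # 120 GBps

# One 10 GbE link moves roughly 10 / 8 = 1.25 GB/s of payload.
link_gbps = 10 / 8
print(f"10 GbE links needed to match the drives: {aggregate_gbps / link_gbps:.0f}")
```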

Consider the fact that typical rack servers contain 12 drive bays, and the conclusion that a single box can do both the server job and the storage job begins to make sense. There's one more piece to the puzzle, though. Most of these high-connectivity CPUs have more cores than a storage appliance needs, since moving data around isn't particularly compute-intensive. Why not use the spare cores to run applications? The result is the hyper-converged node, which does both jobs.
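
As a loose illustration of that core split, the sketch below reserves a handful of cores for the storage stack and hands the rest to applications. The counts are assumptions, and real HCI stacks pin cores through the hypervisor or OS scheduler rather than in application code:

```python
import os

# Illustrative split of a node's cores between the storage stack and
# application work; the 4-core storage reservation is an assumption.
total_cores = os.cpu_count() or 24
storage_cores = 4                  # moving data is not compute-intensive
app_cores = total_cores - storage_cores

print(f"{storage_cores} cores for storage services, "
      f"{app_cores} cores free for VMs and containers")
```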

Hyper-convergence allows a step forward in software architectures, too. Instead of the traditional storage area network, with host initiators and networked storage targets, we have a peer-to-peer access mechanism that shares all storage on a hyper-converged node with the other nodes in the cluster and allows local access.
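
The sketch below illustrates that peer-to-peer pooling idea in Python. Every name in it is invented for illustration rather than drawn from any vendor's API; the point is simply that all drives join one cluster-wide namespace and local reads skip the network:

```python
# Minimal sketch of peer-to-peer storage pooling: every node's drives
# join one cluster-wide namespace, and a read served by the caller's
# own node skips the network. All names here are invented, not any
# vendor's actual API.

class HCINode:
    def __init__(self, name):
        self.name = name
        self.local_blocks = {}     # block_id -> data on this node's SSDs

class HCICluster:
    def __init__(self, nodes):
        self.nodes = list(nodes)

    def locate(self, block_id):
        """Find the node holding a block (naive linear search)."""
        for node in self.nodes:
            if block_id in node.local_blocks:
                return node
        raise KeyError(block_id)

    def read(self, caller, block_id):
        owner = self.locate(block_id)
        path = "local" if owner is caller else "remote"   # remote = network hop
        return owner.local_blocks[block_id], path

# Two nodes pooling their storage into one namespace:
a, b = HCINode("node-a"), HCINode("node-b")
a.local_blocks["blk-1"] = b"payload"
cluster = HCICluster([a, b])
print(cluster.read(a, "blk-1")[1])   # local  -- no network traffic
print(cluster.read(b, "blk-1")[1])   # remote -- crosses the cluster fabric
```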

The argument for hyper-convergence

So, which approach is better? Hyper-converged software addresses issues such as pooling and tiering all the storage in the cluster, making deployment easier than the discrete structure of local disk and tiered network storage that the rack approach uses. The use of VMs or containers for storage software also makes resourcing workloads more agile and responsive.

Merging the server and storage functions allows orchestration software to deploy applications where their data is stored, reducing latency and network loads dramatically; the rack approach is stuck moving data to the app server. Containers, with their low startup times and low memory overhead, can really take advantage of having data ready to go. As we move to automated cloud orchestration, hyper-converged software should be better able to exploit this data-centricity and support more containers or VMs for a given physical and cost footprint.
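
To see what locality-aware placement might look like, here is a hedged Python sketch. The scoring rule, most local blocks wins with the least-loaded node as the fallback, is an assumption standing in for whatever a real orchestrator implements:

```python
# Hedged sketch of data-centric placement: schedule a container on the
# node that already holds the most of its dataset, falling back to the
# least-loaded node. The scoring rule is an assumption standing in for
# whatever a real orchestrator does.

def place_container(dataset_blocks, block_locations, node_load):
    tally = {}
    for block in dataset_blocks:
        node = block_locations.get(block)
        if node is not None:
            tally[node] = tally.get(node, 0) + 1
    if tally:
        # Most local blocks wins; ties go to the less-loaded node.
        return max(tally, key=lambda n: (tally[n], -node_load[n]))
    return min(node_load, key=node_load.get)   # no locality info

locations = {"b1": "node-a", "b2": "node-a", "b3": "node-b"}
load = {"node-a": 3, "node-b": 1, "node-c": 0}
print(place_container(["b1", "b2", "b3"], locations, load))   # node-a
```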

Disadvantages of hyper-converged infrastructure

Are there disadvantages to hyper-converged infrastructure (HCI) compared to rack servers? Overall performance has been a recurring question over the roughly two years that HCI has been around. Much of this stems from poor orchestration products that ignore the need to place apps near their data, which results in too much network traffic. Another issue is that little planning goes into app-to-app communication, which adds still more traffic.

The rack server has the advantage of being well characterized, so it provides a sort of safety blanket when buying more units. On the surface, the traditional approach allows compute, networking and storage to grow independently, while there is little flexibility in the HCI node configurations vendors offer. This, however, is an artifact of the newness of hyper-converged software rather than anything technologically intrinsic. We can expect configurations to loosen up considerably over the next year, and one major vendor, SuperMicro, is already very flexible.

The performance question also highlights the industry tendency to under-connect servers, with one 10 Gigabit Ethernet (GbE) link where you really need four. This leads us to look at the network far more carefully than in the past and to conclude that Remote Direct Memory Access over Ethernet, with at least 40 GbE links, makes good economic sense for HCI clusters. The resulting performance boosts reduce the number of nodes needed, so the net cost of the extra networking is a wash or better, while jobs generally run much faster.
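
A quick comparison makes the under-connection point concrete. The per-drive figure below is an assumed ballpark for a 2017-era NVMe SSD, not a measured result:

```python
# Why one 10 GbE link is outmatched: compare common link rates with the
# throughput of a single NVMe SSD. The ~3.2 GBps drive figure is an
# assumed ballpark for a 2017-era drive, not a benchmark result.

nvme_drive_gbit = 3.2 * 8      # one drive's throughput in Gbit/s (~25.6)

for name, rate_gbit in [("1x 10 GbE", 10), ("4x 10 GbE", 40),
                        ("1x 40 GbE", 40), ("2x 40 GbE", 80)]:
    drives = rate_gbit / nvme_drive_gbit
    print(f"{name}: {rate_gbit} Gbit/s ~= {drives:.1f} NVMe drives")
```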

The cost of hyper-convergence vs. rack servers

We can answer the question of pricing in two parts. First, hyper-converged software allows the combination of servers and storage into a single box, which should lower prices. More importantly, the products are really software packages that are platform agnostic to the extent that the iron meets the definition of commercial off-the-shelf (COTS) x64 hardware.

While the hyper-converged software vendors focus sales channels today on OEM relationships with large box makers, there are already signs that new software-focused competitors are entering the market. This will make it possible to unbundle software from hardware, and should rapidly lead to white box and/or low-cost hardware, with considerable savings for the IT shop.

You might ask where the network sits in all of this. We are entering an era of software-defined networking where unbundled code running in the HCI nodes supports cheap switch hardware. This lowers networking costs dramatically, which is good if we implement the wider network connections discussed above.

Is the future hyper-converged?

With the rapid pace of technology in the server and storage spaces, we will soon surpass today's benchmarks. Evolution brings the challenge of disparate platforms over time, but a good software offering should cope with mixing and matching COTS platform boxes and different generations of SSDs, so IT shops can avoid vendor lock-in with some care. This will be important as the box business becomes a race to the bottom on prices over the next five years.

If we look out over that five-year horizon, there will be a sea change in server design. We are moving toward fabric-centric servers, with memory, CPU, graphics processing unit and SSD all able to move data directly over the same fast fabric. This will move throughput dramatically upward in the hyper-converged server cluster, with significant performance and cost benefits compared to the traditional rack server/array model.

Next Steps

Hyper-converged storage is ready for the SDDC

Is HCI vendor lock-in a threat to innovation?

Predicting the future of server design

This was last published in April 2017
