Hyper-converged infrastructure integrates servers and storage into one product supported by a single vendor. As disk drive technology has evolved with faster speeds and greater capacity, local storage pools are emerging as an alternative to expensive arrays. With local drive speeds and capacities still accelerating, admins should examine the benefits of hyper-converged servers and weigh them against conventional options.
Disk drives have evolved
Traditionally -- and this goes back to the days when disk drives resembled washing machines -- large storage configurations were kept separate from the server farms they serviced. This approach commonly used a storage area network, in which multiple servers shared the same storage.
Even so, servers still contained local disk drives to accelerate program loading. Later, as disk drives shrank to 2.5 inches, it became more common for manufacturers to sell servers with multiple drives.
Capacity is increasing
The increased speed of solid-state drives (SSDs) now makes inline data compression and deduplication practical. For many commercial workloads, this means a fivefold or better increase in effective capacity, with a potential of more than 3.5 petabytes in a 2U form factor. This new potential for local storage capacity gives administrators an alternative to centralized shared storage appliances. Hyper-converged infrastructure (HCI) uses proprietary software to pool and share storage from multiple server nodes.
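The arithmetic behind that 2U figure can be sketched as follows. The drive count, drive size and 5:1 reduction ratio below are illustrative assumptions, not vendor specifications, but they show how a dense NVMe chassis reaches multi-petabyte effective capacity.

```python
# Illustrative only: estimate effective capacity of a dense 2U SSD server.
# Drive count, drive size and reduction ratio are assumptions, not vendor specs.
def effective_capacity_tb(drives, drive_tb, reduction_ratio):
    """Raw pool size multiplied by the compression/dedupe reduction ratio."""
    return drives * drive_tb * reduction_ratio

# e.g., 24 x 30.72 TB SSDs with a 5:1 data reduction ratio
raw_tb = 24 * 30.72                          # roughly 737 TB of raw flash
eff_tb = effective_capacity_tb(24, 30.72, 5)
print(f"raw: {raw_tb:.0f} TB, effective: {eff_tb / 1000:.1f} PB")
```

With these assumed numbers, roughly 737 TB of raw flash becomes about 3.7 PB of effective capacity, consistent with the "more than 3.5 petabytes in 2U" figure above.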
Hyper-converged servers and DDA
The advent of nonvolatile memory express (NVMe) as the preferred protocol for SSD primary storage has considerably changed the HCI story. Extended over a fabric, NVMe can use remote direct memory access (RDMA) to speed up transfers while dramatically lowering system overhead. One recent innovation from Excelero runs NVMe over Ethernet, which enables direct drive access systems, where any node can use RDMA to connect to any drive.
Direct drive access parallelizes transfers and removes the bottleneck of funneling data through the server engine in the HCI node. The resulting reduction in latency and increase in bandwidth are large enough that this approach will likely become a fixture of future storage systems.
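The latency argument can be made concrete with a toy model. The hop and CPU-overhead figures below are assumptions for illustration, not benchmarks: the point is only that relaying a read through the owning node's server engine adds both an extra network hop and CPU handling time, while a direct RDMA read avoids both.

```python
# Toy latency model (assumed figures, not benchmarks): compare a read that is
# relayed through an HCI node's server engine with a direct RDMA read.
def read_latency_us(network_hops, hop_latency_us, cpu_overhead_us):
    """Total latency = network hop time plus any server-side CPU handling."""
    return network_hops * hop_latency_us + cpu_overhead_us

# Relayed path: client -> owning node's CPU -> drive (two hops plus CPU work)
relayed = read_latency_us(network_hops=2, hop_latency_us=10, cpu_overhead_us=30)

# Direct drive access: client reaches the drive via RDMA (one hop, no relay)
direct = read_latency_us(network_hops=1, hop_latency_us=10, cpu_overhead_us=0)

print(f"relayed: {relayed} us, direct: {direct} us")
```

Under these assumed numbers the relayed path is several times slower, and because every client can reach every drive directly, transfers also spread across drives in parallel instead of queuing behind one node's CPU.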
The future of HCI
HCI is still a new concept, and as a result, some of the more mature configurations come from major system vendors as preconfigured nodes. We can, however, expect software suppliers such as Nutanix to make their code directly available, and this will open up do-it-yourself software integration on the platform of your choice.
HCI nodes are standard commercial off-the-shelf boxes, which means hardware prices are a fraction of those of traditional proprietary arrays. The drives are standard devices with no lock-in features, and they are inexpensive when purchased through distributors; even the major vendors are bowing to this pricing reality.
Software-defined storage needs
Software-defined storage (SDS) virtualizes storage. The data services commonly found in storage arrays are migrating to discrete packages within VMs or containers, allowing much more flexibility in sourcing and configuring code chains.
SDS aligns well with the next three years of hyper-converged servers, because it can take advantage of the local drives in each node for speed. The move to NVMe over Ethernet could, however, fragment the storage market by making it possible to split drives back out of the nodes into simplified shared storage pools.
It remains to be seen whether this will happen or whether HCI will satisfy our needs for separate scaling well enough. Overall, there's a strong case for hyper-converged servers going forward: they should prove much less expensive and deliver higher throughput than traditional approaches, which has so far translated into growing unit sales.