
Big servers are the better container host platform

In an environment with a large number of containers, big servers better meet the needs of users because of the impact of containers on CPU bandwidth, storage I/O and networking.

The containers approach opens up a new level of instance support in any given server. By sharing the OS, tools and other binaries of the container host, the memory space needed for an instance shrinks enormously compared to hypervisor-based virtualization. Most current estimates give us 3x the number of containers versus VMs in any given server, and the concept of containerized microservices will extend this packing ratio even more in the future.
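As a rough illustration of that packing ratio, here is a back-of-the-envelope sketch in Python; the host memory size and per-instance footprints are assumptions chosen only to show the arithmetic, not measured figures.

    # Illustrative instance-density estimate for a single host.
    # All figures below are assumptions, not benchmarks.
    HOST_RAM_GB = 512              # assumed memory pool of a big server
    VM_FOOTPRINT_GB = 4.0          # assumed guest OS + app footprint per VM
    CONTAINER_FOOTPRINT_GB = 1.25  # assumed app-only footprint; the OS is shared

    vm_count = int(HOST_RAM_GB / VM_FOOTPRINT_GB)                # 128
    container_count = int(HOST_RAM_GB / CONTAINER_FOOTPRINT_GB)  # 409

    print(f"VMs per host:        {vm_count}")
    print(f"Containers per host: {container_count}")  # roughly 3x the VM count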

Impact of containers on servers

In looking at the impact of containers on the server platform, we need to understand that, while memory efficiency is the driving force behind the density increase, other resources are challenged to keep up. Container overhead is typically lower than hypervisor overhead, so each instance runs at close to native bare-metal speed; combined with the higher instance count, this increases the loading on many elements of the system and brings the effective load to around 5x that of hypervisor-based virtualization.

This load impact begins in the CPU caches, where the increased instance count tends to make a cache of any given size less efficient. Sharing app-level images can help by limiting cache churn, but this is more of a prospective approach than a reality today. The implication is that larger caches help solve this problem, which is the first pointer in the direction of using big servers as the container host.

Storage I/O and networking

Storage is evolving at a phenomenal rate. Solid-state drives (SSDs) have already boosted speed, and storage is now getting another major boost from nonvolatile dual-inline memory modules (NVDIMMs), which offer at least a further 4x improvement in access speed.

NVDIMMs make sense in big servers, where the number of DIMM slots and memory buses is high, allowing NVDIMMs to coexist in meaningful numbers with much faster dynamic RAM (DRAM) DIMMs. The evolution of the software infrastructure around NVDIMMs is headed toward byte addressability, and NVDIMMs can be used as a DRAM extender with sophisticated caching to create effective DRAM spaces in the 8 TB range, or even more. Again, this boosts the instance count by a big factor and adds to resource stress.
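A quick sketch of how that capacity adds up; the slot counts and module sizes are assumptions for illustration and will vary by platform.

    # Effective memory space from a DRAM + NVDIMM mix; slot counts and module
    # capacities are assumptions, not a specific product spec.
    DRAM_SLOTS, NVDIMM_SLOTS = 16, 32            # assumed split across 48 DIMM slots
    DRAM_MODULE_GB, NVDIMM_MODULE_GB = 64, 256   # assumed module capacities

    dram_gb = DRAM_SLOTS * DRAM_MODULE_GB        # fast tier, used as cache
    nvdimm_gb = NVDIMM_SLOTS * NVDIMM_MODULE_GB  # large byte-addressable tier
    effective_tb = (dram_gb + nvdimm_gb) / 1024

    print(f"DRAM {dram_gb} GB + NVDIMM {nvdimm_gb} GB = ~{effective_tb:.0f} TB effective space")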


At this point, we must consider using only NVM Express (NVMe) SSDs as local primary storage and dropping Serial-Attached SCSI (SAS) and Serial Advanced Technology Attachment (SATA) from consideration. While all new servers will support NVMe drives within a few months, small servers have too few Peripheral Component Interconnect Express (PCIe) 3.0 lanes to drive a set of, say, six to 12 drives. This is a result of server chipset choices -- single-CPU servers have fewer connections.
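The lane arithmetic behind that point, as a minimal sketch; it assumes each NVMe SSD sits on a PCIe x4 link, which is typical.

    # PCIe lane budget for local NVMe drives; drive counts follow the example above.
    LANES_PER_NVME_DRIVE = 4   # NVMe SSDs commonly attach over a PCIe 3.0 x4 link

    for drives in (6, 12):
        print(f"{drives} NVMe drives need {drives * LANES_PER_NVME_DRIVE} PCIe lanes")
    # 6 drives -> 24 lanes, 12 drives -> 48 lanes, before any lanes are spent on
    # NICs or GPUs; that is a budget a single-CPU server's chipset struggles to supply.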

Handling storage with NVMe reduces CPU overhead by a major factor and is, in fact, currently the only way to realize the full bandwidth of top-end drives. NVMe has the added advantage of being able to deliver data directly to a container, even with large container instance counts, since the protocol supports roughly 64,000 I/O queues at any point in time.
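A minimal sketch of why that queue count matters at container scale; the per-host container count is an assumption carried over from the density sketch above.

    # NVMe I/O queues available per container on a dense host.
    NVME_QUEUES = 64_000        # queue count cited above
    CONTAINERS_PER_HOST = 400   # assumed dense container host

    print(f"~{NVME_QUEUES // CONTAINERS_PER_HOST} queues per container")  # ~160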

PCIe affects networking, too. Most servers carry two 10 Gigabit Ethernet (GbE) ports mounted on the motherboard, and these are effectively "free." We're moving on from this connectivity in two dimensions. First, the industry is driving toward 25 GbE connectivity, replacing 10 GbE at a good pace. This is an important consideration when building a new server cluster (a rough bandwidth sketch follows the next paragraph).

Second, for storage, remote direct-memory access (RDMA) is now perceived as the best approach, with lower CPU overhead and much lower latency. Combining NVMe and RDMA over Ethernet is the mainstream NVMe-over-Fabrics approach, and this allows much more efficient sharing of data in a hyperconverged or networked storage environment.
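To put the 25 GbE point above in perspective, here is a short per-container bandwidth sketch; the container count and port count are assumptions, not a specific configuration.

    # Average network bandwidth available per container at high density.
    CONTAINERS_PER_HOST = 400   # assumed dense container host
    PORTS = 2                   # assumed dual-port NIC on the motherboard

    for nic_gbps in (10, 25):
        total_gbps = nic_gbps * PORTS
        per_container_mbps = total_gbps * 1000 / CONTAINERS_PER_HOST
        print(f"{PORTS} x {nic_gbps} GbE: {total_gbps} Gbps total, "
              f"~{per_container_mbps:.0f} Mbps per container on average")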

The current market

Intel is still in the process of rolling out RDMA on its server chipsets and appears to be focused on big servers first. The memory, storage and networking issues described above all suggest that the server of choice for a container host should be a big, well-featured engine. The one downside is, of course, that these big servers are expensive compared to commodity servers, so one has to complete a cost-benefit analysis that also weighs the upside of a smaller server count and easier deployment.
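One way to frame that cost-benefit decision is cost per container. The prices and densities below are hypothetical placeholders; plug in your own quotes and measured densities before drawing any conclusion.

    # Cost-per-container comparison; every number here is a hypothetical placeholder.
    servers = {
        "2U commodity server": {"price_usd": 8_000,  "containers": 150},
        "4U big server":       {"price_usd": 30_000, "containers": 700},
    }

    for name, spec in servers.items():
        print(f"{name}: ${spec['price_usd'] / spec['containers']:.0f} per container")
    # Facility, power, licensing and operations costs per chassis shift the
    # balance further and belong in a real analysis.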

One other factor pushing for big servers is the expanding role of large instances and GPU-boosted VMs. The first needs larger resource pools, while GPUs work better in large servers. If your roadmap includes those needs, the cost-benefit analysis should reflect that.

Looking ahead to bigger servers

Future industry directions will reflect both the needs of users and the capabilities of new technologies as they evolve an answer to the question "Are big servers the better container host?" Next-generation servers will add a new cache layer of very fast DRAM, connected by serial links, that creates as much as 32 GB of cache per CPU. These designs will also speed up the memory bus by a good factor and, at the same time, reduce system power.

Overall, big servers look like the better platform for the large number of containers we will see in tomorrow's data centers. VMs, on the other hand, use much more memory, which keeps VM counts per server low; the available storage, networking and CPU bandwidth per VM is accordingly larger, so large server configurations with heavy I/O are generally unnecessary for VM workloads.

It's true that we can pack more VMs into a larger memory space, but economics point to 1U or 2U servers, rather than 4U engines, as the cost-effective option for VMs in many cases.

