With very low power, low cost and flexible architectures, designs based on the 64-bit ARM core have been hyped as the answer to Intel's quasi-monopoly in the server market. Names like HP's Moonshot tell it all. Now that the hype is settling down and some real products are being delivered, it's a good time to separate the myth from the reality.
The idea of using a low-power chip for servers stems from a fundamental break in the IT industry. Many jobs could be run on a single bottom-end server, so one school of thought was that farms of very cheap server boxes would get the job done. Meanwhile, virtualization introduced a way to break up the computing power of an expensive server into virtual machines.
These two approaches have been on a collision course ever since. The problem of managing the resource pool for agile deployments and for recovering failures is essentially the same with either approach. The result is that orchestration tools can cope with either approach, which leaves an economic issue and some technical questions.
Which is cheaper? Today's markup and feature pricing may change radically as technology, and especially competition, evolves.
Hypervised Intel-architecture servers look to have the edge when it comes to total cost of ownership (TCO) today. A server supporting 128 entry-level instances is probably a very high-volume, half-wide 1U box with one or two low-end (15-45W) Xeon processors. That beats out the microserver -- based on the current dedicated-server approach -- with plenty of room to spare.
But that comparison assumes ARM-based microservers that don't support VMs, a situation that is rapidly changing. The ARM 64-bit offerings are becoming sophisticated products. At the volume end of the market, we have Qualcomm, with 24-core products being sampled, while monsters with 100 cores are available.
These, unlike the microserver, are aimed at the scale-out server market as a direct challenge to the Intel CPU. Simply put, a 24-core ARM and a 12-core Intel make for a real race. The ARM designs also have the storage, networking and memory interfaces needed for the big league.
The ARM challenge only works if hypervisors support the architecture, and it's no surprise that we are very close to having KVM and other hypervisors running smoothly on ARM processors. However, it's possible that hypervisors won't matter -- they are in great danger of being supplanted by the much more efficient container virtualization approach.
With containers already running on ARM processors, and with container approaches for both ARM and Intel architecture, the playing field appears to be changing. ARM vendors will certainly be able to compete with the Intel designs in terms of cost per workload, bringing the question of lower power and smaller footprint to the table. An HP Moonshot box -- or more likely Quanta's equivalent -- with 24-core ARM processors becomes very interesting for the small-instance end of the cloud market, as an example.
Let's assume there are 60 processor boards in the unit, each with a 24-core ARM running two VMs per core. That's the equivalent of roughly 30 of those half-wide 1U servers, but in a 4U rack space. And that may be pessimistic: ARM processors may handle as many as eight containers per core, which really tips things over -- roughly a rack's worth of today's servers in a 4U form factor.
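The density arithmetic above can be sketched out explicitly. The constants below are this article's working assumptions (60 boards, 24 cores per board, two VMs or eight containers per core, and the 128-instance half-wide 1U Xeon box from earlier), not measured figures:

```python
# Back-of-the-envelope density math for the 4U ARM chassis scenario.
# All constants are the article's working assumptions, not benchmarks.

BOARDS_PER_4U = 60            # ARM processor boards in one 4U chassis
CORES_PER_BOARD = 24          # 24-core ARM SoC per board
VMS_PER_CORE = 2              # conservative: two VMs per core
CONTAINERS_PER_CORE = 8       # optimistic: eight containers per core
INSTANCES_PER_XEON_1U = 128   # entry-level instances per half-wide 1U Xeon box

def equivalent_xeon_boxes(instances_per_core: int) -> float:
    """How many 128-instance Xeon boxes one 4U ARM chassis replaces."""
    total_instances = BOARDS_PER_4U * CORES_PER_BOARD * instances_per_core
    return total_instances / INSTANCES_PER_XEON_1U

print(equivalent_xeon_boxes(VMS_PER_CORE))         # 2,880 VMs -> 22.5 boxes
print(equivalent_xeon_boxes(CONTAINERS_PER_CORE))  # 11,520 containers -> 90.0 boxes
```

At two VMs per core the chassis hosts 2,880 instances, on the order of the servers estimated above; at eight containers per core it hosts 11,520 instances, displacing about 90 half-wide boxes. Since those boxes fit two per 1U, that is roughly 45U -- a full rack's worth of servers in 4U, matching the closing claim.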
Of course, Intel will increase density with containers as well, but this makes for a good race based on economics. If Intel just continues with business as usual, then ARM processors will likely win this battle.
My guess is that Intel is seriously concerned about all of this, especially with AMD being a member of the ARM camp. We will see some major architectural innovation as a result. Expect Intel to announce products based on its version of Hybrid Memory Cube (HMC). This will boost the capacity and bandwidth of DRAM enormously and make the motherboard much smaller, while consuming less than half the power. Intel is also talking up a persistent memory that will likely sit on the HMC architecture and run much faster than flash or solid-state drives.
This makes for a race again, but ARM's supporters can also access HMC-class performance -- perhaps via the High Bandwidth Memory (HBM) spec, which is different enough from HMC to cause years of duplicated effort and frustration in the industry -- and build System-in-Package products, too. They are probably behind Intel and Micron, which started HMC and have seemed ahead of the game all along, but that lead may not be enough to give Intel the win.
It is still too early to call, but one thing is certain: We should see more performance for a lower cost as we move through the next five years.