The Open Compute Project began in 2011 as a way to share some of the no-frills, ultra-low-cost designs being deployed by major cloud users and providers. Facebook initiated the project, but in reality, Google, Amazon Web Services and Azure had already dived into this type of gear back in 2006, and in the intervening years had tuned their logistics pipelines and design approaches to achieve incredible savings.
Open Compute is intended as a way for us mere mortals to share the bounty and bring new standards to how hardware is built. With clouds and virtual clusters sharing common platform needs, all of the IT space would eventually benefit from the Open Compute approach. In many ways, this was to be the hardware answer to open source code.
The designs released into the Open Compute Project (OCP) reflect the eclectic nature of the cloud provider base. All are free of embellishments -- extra connectors are stripped off, metal is basic and just enough is provided for specific tasks. Designs in the project are typically picked up by several vendors, including Chinese original design manufacturers (ODMs) and traditional players.
In many ways, though, OCP is frustrating. From the perspective of someone who has designed servers for the cloud for years, it seems that OCP should be more of a buying philosophy than a somewhat disjointed set of obsolescent designs from cloud providers. The central issue is buying commercial off-the-shelf (COTS) components as cheaply as possible, while using only those elements needed for the job.
There is no special magic that cloud providers add to these components to make cloud servers. These are mostly standard Intel motherboards, built to one of a dozen Intel reference designs by a board vendor, with some complement of drives. The motherboards provide Ethernet connections from the chipset, and adding a complement of memory and a power supply rounds out the system.
The result, as we've seen, is that a lot of server designs fit the OCP model, especially as the ODMs start selling white box units in volume. Does the OCP label actually add value beyond the mindset that inexpensive is good?
Assessing the value of the OCP
The value of the OCP worldview lies more in the infrastructure in which it lives than in the server itself. Best practices are moving us away from servers with redundant power supplies and removable drive caddies. One key to the Open Compute Project is standardization of power systems, right down to connector positioning and input AC voltages, which allows servers to intermix over time in the same racks. Another key is the acceptance that crash cart maintenance is obsolete and that hardware repair is old hat, too. There are simply too many systems, so it's cheaper to overprovision the hardware a bit than to fix every failure.
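The overprovision-versus-repair tradeoff above comes down to simple arithmetic. Here is a minimal sketch of that comparison; every number in it is a hypothetical assumption for illustration, not a figure from this article:

```python
# Illustrative comparison: repairing every failed server vs. simply
# overprovisioning the fleet and leaving failed units dark.
# All numbers below are hypothetical assumptions for this sketch.

fleet_size = 1000           # servers needed to carry the workload
annual_failure_rate = 0.05  # assume 5% of units fail per year
cost_per_repair = 600       # assumed parts + technician time per incident
cost_per_server = 1500      # assumed price of one extra no-frills server

failures_per_year = fleet_size * annual_failure_rate  # 50 units/year

# Strategy 1: dispatch a technician for every failure.
repair_cost = failures_per_year * cost_per_repair

# Strategy 2: buy ~5% spare capacity up front, skip repairs, and
# amortize the extra hardware over an assumed 3-year service life.
spare_servers = failures_per_year
overprovision_cost = spare_servers * cost_per_server / 3

print(f"repair every failure:  ${repair_cost:,.0f}/year")
print(f"overprovision instead: ${overprovision_cost:,.0f}/year")
```

With these assumed inputs, overprovisioning wins; the point is that once the fleet is large, the break-even depends only on failure rate, repair cost and the price of a spare stripped-down server, and cheap COTS hardware pushes the balance away from repair.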
If you embrace that philosophy, the issue becomes selecting a server sufficient to meet your needs -- 2 CPUs, 128 GB of DRAM and a single hard drive, for example -- and finding the cheapest vendor. You won't get the Google price, but you won't be gouged by an ODM, and the savings over traditional servers should be substantial, especially when you buy your DRAM and drives from a master distributor such as Arrow. Traditional vendors are also getting in the loop with OCP-compatible servers and storage, but watch for higher pricing.
How the Open Compute Project should be used
This leaves us with the question of fitness for use. No matter which path you follow, there's an obligation to right-size the hardware to the tasks at hand. If you do that well, and have staff who understand the issue, any COTS-based server from a reputable vendor will run hypervisors and standard operating systems properly. That's how standardized COTS has become. Most of the ODMs ship millions of units annually to cloud providers, and there is no tolerance among these customers for a mistake in compatibility. Companies like SuperMicro and Quanta ship high-quality gear.
This level of standardization allows you to select from a wider range of products than OCP, but the key is to understand what makes the approaches that cloud providers use work for them. OCP comes in handy here, as it is a valuable tool for learning about what matters. There are outside resources to tap into for configurations and integration, too.
The Open Compute Project is a starter kit for how to save money on hardware and maximize agility as technology evolves. At some point, you will graduate to a more sophisticated knowledge base and worldview, buying hardware more a la carte. Wherever you are in the process, these low-cost servers should meet the need for virtual clusters, though there might be configuration certification issues with some hypervisor vendors. Cloud solutions such as OpenStack and Ceph are much less sensitive to platform certification questions.