The argument of blade servers vs. rack servers is as old as the data center itself. Each platform has advantages regarding density, power, cooling and availability, and those points will be debated for many years to come. Many of those arguments were based on traditional server operating systems and applications. In today's world, the focus is less on the application-to-hardware relationship and more on which hardware platform the hypervisor runs on. Today, many refer to hardware as a commodity and argue that the real driver for the data center is the software. Does this mean we can finally end the debate between blade servers and rack servers forever? Well, not really. In fact, it makes the argument even more important.
Not all is lost, and with a few guidelines, it can be an easy road to navigate. One of the core principles in deciding between blade servers, rack servers or both is that virtualization is a consolidation technology. This sounds like virtualization 101 and something we all should know, but it is the (sometimes forgotten) critical foundation when designing infrastructure. We already know that blades are a consolidation technology, so if we use them to host VMs, we now have both hardware and software consolidation. If we go a step further and add Docker or Citrix on top of that virtual environment, the data center begins to look like a set of Russian nesting dolls.
Planning for failure
While this provides a compact infrastructure system, one of the key pieces in designing your infrastructure is to design for failure. Consolidation and failure in the same sentence is something few administrators ever want to hear, and this is where many of the questions between blade servers vs. rack servers come into play.
Several years ago, I witnessed the partial failure of a core switch in a data center that flooded multiple downstream switches, causing several to go offline. Several rack switches and two blade center switches went into a fault state and needed to be reset. While the rack switches were spread across the data center, the blade switches happened to be in the same enclosure. Since the rack servers had dual connections to separate switches, they never lost connectivity. However, the affected blade enclosure dropped offline and took a very large Citrix environment with it.
While that was a bad day, it could have been much worse if the blade enclosure had been running a hypervisor. Instead of 16 servers, it could have been several hundred. Was this an isolated incident? Yes, but the hardware consolidation danger still exists. Just like software consolidation, we have to learn to work with and mitigate those risks. Blades and virtualization have too much value in power and consolidation savings to dismiss, so instead we need to ensure we design with them in mind and step away from the design process for traditional rack and application servers.
It can be a lot easier to place rack servers in different racks supported by different switches to help mitigate possible hardware failures. This approach, combined with high availability (HA) rules, can greatly improve resilience. While HA rules still work within blade centers to prevent critical VMs from running on the same blade, oftentimes those blades are still in the same enclosure. You could argue that rack servers depend on the rest of the data center infrastructure, just as blade servers depend on an enclosure, and while this is true, let's not forget that the blade enclosure also depends on the data center. In reality, the enclosure is another layer of that Russian doll model. One way to address this concern is to deploy your virtual farms across multiple blade enclosures, spreading out the risk so a single enclosure failure cannot bring down the entire virtual farm.
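To make the enclosure-spreading idea concrete, here is a minimal sketch, not tied to any vendor API, that places a farm's hypervisor hosts round-robin across enclosures and computes the worst-case fraction of the farm lost to a single enclosure failure. The host and enclosure names are hypothetical.

```python
# Illustrative sketch: spread a virtual farm's hypervisor hosts across
# blade enclosures so that one enclosure failure takes down only a
# fraction of the farm. Names below are made up for the example.

def place_hosts(hosts, enclosures):
    """Assign each host to an enclosure round-robin."""
    placement = {e: [] for e in enclosures}
    for i, host in enumerate(hosts):
        placement[enclosures[i % len(enclosures)]].append(host)
    return placement

def worst_case_loss(placement, total_hosts):
    """Fraction of the farm lost if the worst single enclosure fails."""
    return max(len(v) for v in placement.values()) / total_hosts

hosts = [f"esx{n:02d}" for n in range(1, 9)]  # 8 hypervisor hosts
enclosures = ["enclosure-A", "enclosure-B", "enclosure-C", "enclosure-D"]
placement = place_hosts(hosts, enclosures)
print(worst_case_loss(placement, len(hosts)))  # 0.25: at most 2 of 8 hosts lost
```

With eight hosts across four enclosures, losing an enclosure costs a quarter of the farm instead of all of it; the same hosts in one enclosure would give a worst-case loss of 1.0.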
Are blades pulling ahead?
On the surface, this seems like the solution to the blade question, but some concerns still exist. The enclosures themselves are expensive pieces of hardware. While the enclosure frame might be “passive,” the dual controllers, networking and fiber uplinks can add considerable costs to an enclosure. Having several of these partially filled can result in a large infrastructure cost per server. Of course, it is possible to fill the enclosure with non-virtualized servers, but if they don’t use the fiber switches or higher speed networking components you needed for the hypervisor hosts, have you really made the best use of the blade enclosure?
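The cost concern is easy to quantify. The sketch below amortizes a fixed enclosure cost (chassis, controllers, networking, fiber uplinks) over the blades actually installed; all dollar figures are hypothetical, chosen only to show the shape of the curve.

```python
# Rough cost-per-server model (all figures hypothetical) showing why a
# partially filled enclosure drives up infrastructure cost per blade.

def cost_per_blade(enclosure_cost, blade_cost, blades_installed):
    """Total cost per server, amortizing the enclosure over its blades."""
    return blade_cost + enclosure_cost / blades_installed

ENCLOSURE = 40_000  # assumed: chassis, dual controllers, switching, uplinks
BLADE = 8_000       # assumed: per-blade hardware cost

for n in (4, 8, 16):
    print(n, cost_per_blade(ENCLOSURE, BLADE, n))
# 4 blades  -> 18,000 per server
# 8 blades  -> 13,000 per server
# 16 blades -> 10,500 per server
```

The fixed enclosure overhead is the whole story: a quarter-full enclosure nearly doubles the effective cost per server compared with a full one.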
On the surface, it would seem blades are not the right fit for virtualization. However, what happens if you reduce the number of blades per enclosure? Traditionally, blade centers were large and high density, but today’s blade centers can have a density ranging from four to 16 physical servers. While the high density still exists, a midsize or large company has the flexibility to reduce the number of blades and cost per enclosure. This allows businesses to take advantage of the benefits blades offer, while mitigating some of the concerns about density and the traditional fear of having "all of your eggs in one basket."
The power and space savings are not as good when compared to the higher density enclosures, but this approach provides a balance point for organizations that want the benefit of blades without the high risk. Since the size of the enclosure is scaled down, the costs associated with management and external connections are also reduced, making the smaller blade enclosure a more cost-effective solution. Of course, this does not mean that blades have won and rack servers are going away anytime soon. However, if blade manufacturers continue thinking more about business needs and risks instead of just speed and size, the future might be based on blades.
Brian Kirsch asks:
Do you prefer blade servers or rackmount servers for virtualization?