Many IT shops want to know whether blade servers or rack-mount servers make more sense for virtualized environments. As is often the case, there's no one-size-fits-all answer to this question. But there are certain considerations that can help you decide which form factor makes sense for a virtualized architecture. In this tip, we discuss the pros and cons of rack and blade servers so you can base your purchase on your most important priorities -- whether that's reducing servers' physical footprint, minimizing server power consumption, or improving server performance.
But the truth is, whichever form factor you choose, some basic purchasing principles still apply. So in this tip, we also outline how to determine whether your purchased hardware is compatible with your existing virtualization software and how hardware compatibility lists (HCLs) can help.
Hardware selection: The case for blades
The decision whether to use blades or rack-mount servers to host virtual infrastructure is contingent on several factors. But in many cases, it's dictated by the type of physical server you already use in your data center.
Blades and racks each offer advantages and disadvantages. Blade servers provide better rack density and a smaller footprint than do traditional servers. Early blade systems posed limitations, such as having one or two single-core CPUs, two network interface cards (NICs), limited internal storage and no support for Fibre Channel storage. Virtual hosts, however, often require several NICs, two storage adapters for maximum reliability and large amounts of memory to support the virtual machines (VMs) running on them.
Thankfully, over the past few years, blade servers have evolved and now offer hardware options comparable to rack-mount servers, such as support for up to 16 NICs, four quad-core processors and multiple Fibre Channel or iSCSI host bus adapters (HBAs). Some other reasons to consider blade servers include the following:
- If your data center has limited space, blade servers are a great choice because you can increase rack density. Compared with traditional servers, you can typically install up to 50% more servers in a standard 42U rack.
- Blade servers consume less power than do traditional servers because they are more energy efficient and require less cooling. The amount of power that blades consume is dependent on how full the rack is. A fully loaded blade chassis will consume far less power than an equivalent number of traditional servers. If the chassis isn't full, it will still consume less power than traditional servers, but the disparity won't be as great.
- They plug into a chassis with a single connector, which makes cabling neater and eliminates the mess of cables that you encounter with traditional servers.
- Blade servers are great if you want to boot your virtual hosts from the storage area network (SAN). When booting from the SAN, no internal disk is required because the host performs a Preboot Execution Environment (PXE) startup over the network, then connects to the SAN disk where all of its files are located to continue the boot process.
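The density and power claims above can be sketched with rough arithmetic. All figures in this sketch (chassis height, blades per chassis, rack units) are illustrative assumptions for a generic blade system, not specifications for any particular product:

```python
# Rough comparison of rack density for blades vs. 1U rack-mount servers.
# All numbers below are illustrative assumptions, not vendor specs.

RACK_U = 42  # standard 42U rack

# Assumed: a 10U blade chassis holding 16 half-height blades.
CHASSIS_U, BLADES_PER_CHASSIS = 10, 16
blade_servers_per_rack = (RACK_U // CHASSIS_U) * BLADES_PER_CHASSIS  # 4 chassis

# Assumed: 1U rack-mount servers fill the rack directly.
rack_servers_per_rack = RACK_U

density_gain = blade_servers_per_rack / rack_servers_per_rack - 1
print(f"Blades per rack: {blade_servers_per_rack}")
print(f"Rack-mounts per rack: {rack_servers_per_rack}")
print(f"Density gain: {density_gain:.0%}")
```

Under these assumed numbers, blades yield roughly 50% more servers per rack, in line with the figure cited above; your actual gain depends on chassis height and blade form factor.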
Hardware selection: The case for rack-mount servers
Although blades provide certain benefits, traditional servers are also viable for virtual hosts. Some of the advantages of using traditional servers include:
- There are more expansion slots available for network and storage adapters with traditional servers. Some of the larger rack-mount servers (of more than 5U) have up to seven I/O expansion slots; blade servers tend to have a limited number of expansion slots for storage or network needs. Traditional servers are a solid choice if you need a large number of NICs or storage controllers in your virtual host for load balancing and fault tolerance, or if you need them to connect to several networks.
- Traditional servers have a greater internal capacity for local disk storage, whereas blade servers typically have a limited amount (zero to four drives). If you want to run a lot of your VMs on local storage, traditional servers are a better choice because they have more bays for internal disks. To be fair though, some blade systems have separate storage blades that can increase the amount of local storage that a blade can use.
- While most blade servers support up to four processor sockets, traditional servers can support eight or more CPU sockets. This works well if you plan on using a smaller number of powerful servers for your virtual hosts.
- Traditional servers can be installed without any additional infrastructure components. Conversely, once a blade chassis is full you need to buy another chassis even if you only need one more server, which can get pricey.
- The process of installing and managing traditional servers is often less complicated than with blade servers. Blade servers can be more complex to install, cable, power and configure, but once you have experience, installation becomes a less significant factor in the decision-making process.
- Traditional servers have serial, parallel and USB I/O ports for connecting external storage devices and optical drives. These can also be employed for hardware dongles that are used for licensing software. In addition, with traditional servers you can install a tape backup device inside your host. Blade servers compensate with virtual I/O devices that can be managed through embedded hardware management interfaces, and network-based USB devices can stand in for the missing local USB ports.
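One way to weigh the trade-offs in the two lists above is to compare a planned host's I/O and storage needs against what each form factor can typically hold. The capability limits below are rough illustrative figures loosely based on this article, not the specifications of any real server model:

```python
# Illustrative per-form-factor limits, loosely based on this article's
# figures; real limits vary widely by vendor and model.
CAPABILITIES = {
    "blade":      {"nics": 16, "expansion_slots": 2, "drive_bays": 4,  "cpu_sockets": 4},
    "rack-mount": {"nics": 16, "expansion_slots": 7, "drive_bays": 16, "cpu_sockets": 8},
}

def fits(form_factor, requirements):
    """Return True if every requirement fits within the form factor's limits."""
    caps = CAPABILITIES[form_factor]
    return all(requirements.get(key, 0) <= limit for key, limit in caps.items())

# Example: a host that needs many local disks rules out a typical blade.
host = {"nics": 8, "expansion_slots": 3, "drive_bays": 10, "cpu_sockets": 2}
print([ff for ff in CAPABILITIES if fits(ff, host)])
```

With this hypothetical host, only the rack-mount profile fits, because the blade profile is exceeded on both expansion slots and drive bays.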
Both blades and traditional servers are solid options for virtual hosts. Be sure to weigh the pros and cons of each before deciding which works best for your environment. Often, the choice between blades and traditional servers comes down to a combination of personal preference and what type of server is already in use in your data center.
Supported hardware with virtualization software
As virtualization vendors release new versions of their technologies, they typically support only newer hardware. For example, one user employed an older SAN that was supported by an old version of his virtualization technology. When he upgraded his environment to a newer release, he discovered that the new release didn't support his SAN. When he contacted support, he was told that he would have to either downgrade to the previous version or use a supported storage device.
Most virtualization software on the market supports a solid selection of server hardware from all the major server vendors (e.g., Dell, IBM and HP). VMware ESX and Citrix XenServer have specific HCLs that detail the brands and models they support. One of the reasons for these lists is that products come with a limited set of hardware device drivers (e.g., storage/network controllers) built-in, and adding additional drivers is not supported. The HCLs for both VMware ESX and Citrix XenServer are available online, but remember that these lists are updated periodically so check them before performing any upgrades.
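The HCL check described above can be folded into a pre-upgrade script. This is a minimal sketch; the product versions and server models in the table are hypothetical placeholders, and the vendor's current published HCL is always the authoritative source:

```python
# Minimal sketch of an HCL lookup before an upgrade. The entries below
# are hypothetical placeholders, not real HCL data -- always consult the
# vendor's current, published compatibility list.
HCL = {
    "vmware-esx-4.0": {"ProLiant DL380 G6", "PowerEdge R710"},
    "xenserver-5.5":  {"ProLiant DL380 G6"},
}

def is_supported(product, server_model):
    """Check whether a server model appears on a product's HCL."""
    return server_model in HCL.get(product, set())

print(is_supported("vmware-esx-4.0", "PowerEdge R710"))
print(is_supported("xenserver-5.5", "PowerEdge R710"))
```

Because the lists are updated periodically, a script like this is only as good as its last refresh from the vendor's online HCL.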
Unlike VMware and Citrix, Microsoft Hyper-V doesn't have a specific hardware support list. Instead, Hyper-V supports any hardware that's supported by the underlying Windows Server 2008 operating system. That is as long as it has an x64 (64-bit) processor with hardware-assisted virtualization; specifically either Intel VT or AMD-V. There are a few other minimum hardware requirements for Hyper-V that are available online from Microsoft.
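On a Linux host you can verify the hardware-assisted virtualization requirement by looking for the `vmx` flag (Intel VT) or the `svm` flag (AMD-V) in `/proc/cpuinfo`. A minimal sketch that parses that file's contents:

```python
# Detect hardware-assisted virtualization support from CPU feature flags.
# In Linux's /proc/cpuinfo, "vmx" indicates Intel VT and "svm" AMD-V.

def hw_virt_support(cpuinfo_text):
    """Return 'Intel VT', 'AMD-V', or None based on the flags lines."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:
                return "Intel VT"
            if "svm" in flags:
                return "AMD-V"
    return None

# Example with a trimmed flags line from an Intel CPU:
sample = "flags\t\t: fpu vme msr pae vmx ssse3 lm"
print(hw_virt_support(sample))  # Intel VT
```

Note that the flag only shows the CPU is capable; the feature must also be enabled in the system BIOS for Hyper-V to install.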
In addition, some vendors give users the option of an integrated hypervisor, such as VMware's ESXi or Citrix XenServer, pre-installed on internal flash storage. This results in a faster boot of the hypervisor and eliminates the need for local storage on the server.
Some hardware vendors offer both hardware and virtualization software support, which eliminates having to deal with two support vendors. For example, when you purchase a server from HP, you can include the installable or full version of XenServer or ESX/ESXi, which also includes technical support from HP. This can be especially advantageous when a hardware problem causes a software error, such as an ESX server's purple screen of death caused by defective memory.
In part two of this series, we'll explore the considerations in choosing CPU and memory for a virtual host, as well as choosing I/O adapters for networking and storage.
Eric Siebert is a 25-year IT veteran who specializes in Windows and VMware system administration. He is a guru-status moderator on the VMware community VMTN forums and maintains VMware-land.com, a VI3 information site. He is also the author of the upcoming book VI3 Implementation and Administration, which is due out in June 2009 from Pearson Publishing. Siebert is also a regular on VMware's weekly VMTN Roundtable podcast.