- Rick Vanover, Contributor
This tip explains the major categories of server hardware for virtual environments, offers some perspective on where server hardware is headed, and covers what admins should purchase going forward.

Processor hardware offerings
Keeping up with processor configurations, models and options is challenging. Administrators frequently focus on getting servers with the fastest processors and the highest core counts available. Virtualization administrators also try to stay with a current processor type to maintain compatibility with technologies such as VMware's VMotion. The good news is that in September 2008, VMware announced that its processor licenses will cover up to six-core processors.
Currently, the Intel Xeon 7400 series has a six-core offering, and, later in 2009, the AMD Istanbul processor line will be released with six-core support. Available product roadmaps indicate that the core count in processors increases with time. As an extreme example, Intel recently demoed an 80-core processor at a trade show, but it's a long way off from becoming your next mainstream server.
With the number of cores increasing and many vendors licensing products by the processor, many system architects may be at a crossroads, where they have to settle for fewer sockets to save costs. This applies not only to virtualization platforms but also to management tools that are priced by the socket. The savings can fund an "incremental host": by reducing the processor count on server purchases, the per-processor licensing costs you avoid can pay for an additional host.
Another factor is whether an organization wants a large number of smaller hosts (from a socket perspective) or a small number of larger hosts. An example is a cluster of 10 two-socket servers versus a cluster of five four-socket servers. The per-processor software licensing costs would be the same, but the hardware costs would vary once the memory configuration and accessories are calculated.
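The trade-off above can be sketched with a quick calculation. All prices below are illustrative assumptions, not vendor quotes; the point is that with equal socket counts, licensing is identical and the difference comes entirely from hardware:

```python
# Hypothetical cost comparison: 10 two-socket hosts vs. 5 four-socket hosts.
# Every dollar figure here is an illustrative assumption.

LICENSE_PER_SOCKET = 3000  # assumed per-processor (socket) software license

def cluster_cost(hosts, sockets_per_host, base_chassis_cost, ram_cost_per_host):
    """Total cluster cost: per-socket licensing plus per-host hardware."""
    licensing = hosts * sockets_per_host * LICENSE_PER_SOCKET
    hardware = hosts * (base_chassis_cost + ram_cost_per_host)
    return licensing + hardware

# Both clusters expose 20 sockets, so licensing is the same (60,000)...
small_hosts = cluster_cost(hosts=10, sockets_per_host=2,
                           base_chassis_cost=5000, ram_cost_per_host=4000)
large_hosts = cluster_cost(hosts=5, sockets_per_host=4,
                           base_chassis_cost=12000, ram_cost_per_host=10000)

# ...but hardware differs: more chassis in one case, denser (and pricier)
# memory per host in the other.
print(small_hosts)  # 60000 licensing + 90000 hardware = 150000
print(large_hosts)  # 60000 licensing + 110000 hardware = 170000
```

Swapping in your actual quotes for chassis, memory and licensing makes the "incremental host" math concrete for your environment.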
The obvious use case for blade-based virtualization is where space is at a premium. Blades are a good platform for virtual host deployment because they can live in the same chassis as traditional systems. Further, blades can share network and storage connectivity, which reduces the cabling and port footprint on the I/O side.
Blade systems are on par with other server offerings, including products with six-core processing capability. The HP ProLiant BL680c server blade is available with four-socket, six-core processing and up to 128 GB of RAM. Other blade platforms can exceed that RAM count: in the ProLiant c-Class series, the BL2x220c blade server allows up to 256 GB of RAM per blade.

Memory planning is a critical step
For most virtual environments, memory is the tightest constraint in implementation. The current server hardware landscape offers options for large amounts of RAM: the ProLiant BL2x220c blade can have a maximum of 256 GB, while some rack server models offer 512 GB or more in their maximum memory configuration. As you price systems, you'll find that these high amounts of RAM can potentially double the cost of a server system, so plan the number of processors you need and a RAM configuration that works for your virtual environment.
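A minimal sizing sketch shows why RAM dominates host planning when each gigabyte provisioned to a guest must exist physically on the host. The guest mix and overhead figures below are hypothetical assumptions, not measured values:

```python
# Hypothetical host RAM sizing with no memory overcommit:
# every GB provisioned to a guest consumes a GB on the host.
# Guest counts, sizes and overhead figures are illustrative assumptions.

guests = {
    "web":  (8, 2),   # name: (number of VMs, GB of RAM each)
    "db":   (2, 8),
    "util": (10, 1),
}

HYPERVISOR_OVERHEAD_GB = 2.0  # assumed RAM reserved for the hypervisor
PER_VM_OVERHEAD_GB = 0.25     # assumed per-VM memory overhead

def host_ram_needed(guest_mix):
    """Physical RAM a host needs to run the given guest mix one-to-one."""
    total = HYPERVISOR_OVERHEAD_GB
    for count, gb_each in guest_mix.values():
        total += count * (gb_each + PER_VM_OVERHEAD_GB)
    return total

print(host_ram_needed(guests))  # 2 + 8*2.25 + 2*8.25 + 10*1.25 = 49.0 GB
```

With overcommit-capable platforms the physical requirement can drop below this one-to-one total, which is exactly why the consolidation ratio depends on the hypervisor you choose.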
Hyper-V and XenServer environments have no memory overcommit functionality, which means there is a direct one-to-one ratio between the RAM provisioned to a guest and its cost to the host. The open source Xen offering supports memory overcommit, so it's logical to assume the feature will make its way to the mainstream product. When provisioning systems, the target consolidation ratio will vary based on this factor, so plan accordingly.

Hardware considerations for storage and networking
Provisioning host connectivity for networking and storage is one of the pressing challenges for administrators. Luckily, server hardware is catching up to admins' needs. The Dell PowerEdge R900 rack server, for example, has been modified to meet the needs of the virtualization host. This high-functionality four-socket model offers a choice of four- or six-core processors. The system evolved from the PowerEdge 6850 server; a key change is that it now has four built-in Gigabit Ethernet interfaces. That is a great start for a virtual implementation, because most servers have only two built-in interfaces.
As time passes, you may need to add network interface cards (NICs) to the server system. This need stems from the practice of separating traffic roles on virtualized systems. In general, management traffic, migration traffic and guest traffic should be on separate interfaces, and each of these ports should have a redundant partner on the host in case an interface or cable fails. Many organizations extend this practice to the data center switching: a virtualization host's multiple interfaces are spread across multiple switches (assuming the same network is available to those switches) to survive a switch failure. Having additional ports also lets guest traffic be prioritized and aggregated across two or more interfaces.
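The separation-and-redundancy guideline above can be turned into a quick port count. The role names and redundancy factor here are illustrative assumptions, not a prescribed design:

```python
# Rough NIC port planning for a virtualization host: one interface per
# traffic role, each with a redundant partner (ideally on a separate switch).
# Role names and the redundancy factor are illustrative assumptions.

ROLES = ["management", "migration", "guest"]
REDUNDANCY = 2  # each role gets a failover port

def ports_needed(roles, redundancy=REDUNDANCY, extra_guest_uplinks=0):
    """Minimum host NIC ports: one redundant set per role, plus any
    additional uplinks for aggregating guest traffic."""
    return len(roles) * redundancy + extra_guest_uplinks

print(ports_needed(ROLES))                         # 3 roles * 2 = 6 ports
print(ports_needed(ROLES, extra_guest_uplinks=2))  # 8 with guest aggregation
```

Even a server with four built-in Gigabit interfaces, like the R900 mentioned above, falls short of this count, which is why add-in NICs remain a common line item on virtualization host orders.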
Today, the decision is whether to deploy 10 Gb Ethernet interfaces. All the major server vendors have options in this area, but many data centers are not yet prepared to provide 10 Gb ports to servers. For new deployments, it's worth purchasing this class of equipment in anticipation of 10 Gb Ethernet support, at least for virtual machine migration or guest operating system traffic.

Integrating ESXi with server hardware
As the virtualization market matures, vendors have slowly realized the benefits of connecting hardware to virtualization management. An example is the ability to download VMware ESXi with native HP Insight Management agents included. This is significant because the lack of installable agents has been a limitation to adopting ESXi for many organizations. HP, as well as other vendors, has embraced ESXi on new server purchases by offering the integrated hypervisor. The installable edition works for most organizations, but the cost of local disk space and an array controller can be avoided by using the integrated hypervisor.

Validate the configuration of VMware environments
For VMware environments, always check the VMware compatibility guides to see whether your hardware selection is supported. The guides cover processor models, storage systems, NICs, host bus adapters and the matrix of VMware products supported for each. This is especially helpful when existing equipment will be integrated with newer storage hardware and software.
One example of this is that some storage systems have full support for VMware ESX 3.0.3, but not for ESX 3.5 and ESXi. This is likely because of Storage VMotion functionality support. The moral is to ensure that your hardware is compatible to avoid a surprise ending to your equipment installation.
In addition to software configurations and compatibility parameters from VMware and other vendors, get a handle on the direction of the server hardware market. The best free assistance you can get is a product roadmap from your hardware vendor. These are usually provided under nondisclosure agreements and are critical to your planning process; they allow you to make the best decision for your virtual environment at the time you make it.
Rick Vanover (MCITP, MCTS, MCSA) is a systems administrator for Safelite AutoGlass in Columbus, Ohio. Vanover has more than 12 years of IT experience and focuses on virtualization, Windows-based server administration and system hardware.