- Blades are too expensive
- Blades require too much power and cooling
- Blades are not as powerful as rack servers, and so not a good platform for virtualization
- Blades have problems in virtual environments since they boot from a SAN
- Virtualizing on blades is a problem due to I/O limitations
- PC blades are virtual PCs on blades
I'll tackle these myths one at a time over my next two columns, and then drill down further in subsequent weeks. If you have more myths to add, I'd like to hear them.
Myth #1 – Blades are too expensive
Because blade systems require an upfront purchase of a blade chassis, one misconception is that blades are a more expensive solution than rack servers. If you are implementing only one or two servers in a single location, that is true. If you are implementing four, five or more servers, however, the total cost per server can actually be lower because of the shared components within the blade chassis. Calculating the numbers on hardware alone (without counting other savings such as power, cabling, and management), the cost of a blade server can be lower than that of a comparable rack server. One blade customer did a very basic comparison using Dell blades, calculating the cost per server as the blade cost plus 1/10 of the chassis cost. Comparing against Dell 1U servers configured for the same level of redundancy (power supplies, NICs, and HBAs), he estimated the blades saved him 20% over comparable rack servers. These numbers vary by vendor and configuration, but the bottom line is that unless the chassis is mostly empty, blades cost less.
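The customer's back-of-the-envelope math can be sketched as follows. All prices here are made-up placeholders chosen to reproduce the 20% figure cited above, not actual Dell (or any vendor's) pricing; the point is the amortization method, not the numbers.

```python
# Hypothetical illustration of the per-server cost comparison described
# above: amortize the chassis cost across ten blades, then compare with
# a 1U rack server configured for equivalent redundancy.
# All prices below are illustrative assumptions, not real vendor pricing.

def blade_cost_per_server(blade_price, chassis_price, blades_per_chassis):
    """Per-server cost with the chassis amortized across its blade slots."""
    return blade_price + chassis_price / blades_per_chassis

blade = blade_cost_per_server(blade_price=3200,
                              chassis_price=8000,
                              blades_per_chassis=10)  # 3200 + 800 = 4000
rack_1u = 5000  # comparable 1U server with redundant PSUs, NICs, HBAs

savings = (rack_1u - blade) / rack_1u
print(f"blade per-server cost: ${blade:,.0f}")   # $4,000
print(f"savings vs. rack:      {savings:.0%}")   # 20% with these numbers
```

Note that the amortized chassis share shrinks as the chassis fills up, which is exactly why the economics favor blades only past a handful of servers.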
Myth #2 – Blades require too much power and cooling
This myth appears in various forms. Many data centers today are running into power and cooling limitations as they try to add servers. Some have insufficient power coming into the data center, some are out of room on their UPS or battery backup systems, and some have A/C that is already starting to fail on hot days. (Some have all of the above.)
As a result, users are looking for answers to these issues as they add servers, and some believe blades will make the problem worse. In reality, on a one-to-one, server-to-server basis, a blade server will generally use less power and generate less heat than a comparable rack server. According to HP, an HP BladeSystem c-Class blade server uses 40% less power and cooling than its rack counterpart. Other vendors cite different percentages, but all state that their blade servers use less than their rack servers.
This is due in large part to the advances in thermal technology and improved efficiencies implemented in the current generation of blade systems. Early blade systems generally used more power and cooling, even with slower processors and less memory. Today's blades have more efficient power supplies and better overall thermal design. Many also include sophisticated software to help manage heat and automate responses to problems, including powering down components when the temperature gets too high.
There is, however, a real power and cooling issue with blades: density. Even though one blade uses less power than one rack server, blade systems are designed for high density, so the number of blades per footprint can be substantially higher than for rack servers. (This density is, in fact, one of their biggest advantages in space savings.) It is therefore important to look at power and cooling from an overall data center planning perspective and to plan appropriately. This is an area where experienced professionals can help, and I strongly recommend working with your blade system vendor, UPS vendor, reseller/integrator and/or others who specialize in this area.
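The density trade-off above is easy to see with some rough arithmetic. Every figure below (per-server wattage, blades per chassis, chassis size) is an illustrative assumption, not a measurement from any specific product.

```python
# Hypothetical density math behind the caution above: even if each blade
# draws less power than a 1U server, the higher servers-per-rack density
# can raise total power per rack. All figures are illustrative assumptions.

blade_watts, rack_1u_watts = 300, 400       # assumed per-server draw
blades_per_chassis, chassis_units = 16, 10  # e.g., 16 blades in a 10U chassis
rack_units = 42                             # standard full-height rack

blades_per_rack = (rack_units // chassis_units) * blades_per_chassis  # 4 chassis -> 64
servers_1u_per_rack = rack_units                                      # 42 x 1U servers

blade_rack_watts = blades_per_rack * blade_watts          # 64 * 300 = 19,200 W
rack_1u_rack_watts = servers_1u_per_rack * rack_1u_watts  # 42 * 400 = 16,800 W

print(f"blade rack: {blade_rack_watts:,} W for {blades_per_rack} servers")
print(f"1U rack:    {rack_1u_rack_watts:,} W for {servers_1u_per_rack} servers")
# Per server, blades win (300 W vs. 400 W); per rack, the blade rack
# draws more, which is why floor-level power and cooling planning matters.
```

With these assumed numbers, the blade rack hosts roughly 50% more servers but draws about 14% more power per rack, so the per-rack power and cooling budget, not the per-server draw, becomes the constraint.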
Myth #3 – Blades are not as powerful, and so not a good platform for virtualization
When blades were first introduced, they were designed mostly as low-power Web servers in a denser form factor, built with single, low-speed CPUs. Today, blades are available with all the same options as rack servers: multiple CPUs, multicore processors and lots of memory. In processor and memory configuration options, they are now functional clones of rack servers.
If you are going through consolidation planning, you have to decide which platform to consolidate your virtual servers onto going forward. Depending on your tech refresh cycle, you may choose to consolidate onto your most powerful existing servers. Depending on your virtualization software, you may need servers with the hardware-assisted virtualization capabilities of Intel VT or AMD Pacifica; Xen and the Linux products incorporating it, for example, require them. If you have the option of choosing new server hardware, blades today offer the same CPUs, socket counts and memory as rack servers, including chips with virtualization assist.
In addition, some blade systems are leading the way in other areas of virtualization. Egenera set out to address blades and virtualization together, and its architecture takes an entirely virtualized approach. HP now offers a blade option called Virtual Connect, which virtualizes IP addresses and NICs and simplifies ongoing configuration management. Hitachi recently introduced a blade system with the hypervisor embedded at the firmware level, offering performance benefits for virtualized environments.
I'll tackle myths four, five and six in part two.
About the author: Barb Goldworm is president and chief analyst of Focus Consulting, a research, analyst and consulting firm focused on systems and storage. She has spent 30 years in the computer industry in various management and industry analyst positions with IBM, Novell, StorageTek, Enterprise Management Associates, and multiple successful startups. She currently chairs the Blade Server Summit conference.
She is the author of Blade Servers and Virtualization: Transforming Enterprise Computing While Cutting Costs, published by Wiley.