Myth #4 – Blades have problems in virtual environments since they boot from a SAN
This is an interesting misconception for several reasons. First, almost all blade systems offer local on-blade disk storage (Egenera being the exception) and do not require boot from SAN. In fact, the majority of the installed base of blades is still configured with local storage and boots locally, even though most are SAN connected (more than 75% of VMware's customers are SAN attached). In addition, some blade systems offer the option of configuring storage blades to make additional direct-attached disk storage available to the blades in the chassis (e.g., HP now offers over 1 TB of storage through a storage blade).
Second, the ability to run diskless blades that boot from a SAN is a step forward in many ways, offering significant benefits in ease of provisioning and manageability. Egenera pioneered this approach with its diskless, stateless blades and has led the way for the other blade vendors, who now also offer the option to configure their blades diskless. Although the installed base overall is still mostly running with local storage, there is a clear trend toward a significant number going diskless.
Third, for those users who are running diskless, the only problems I have heard of with VMware stem from configuration difficulties and mistakes, often in figuring out the proper LUN masking. Once the configuration issues are corrected, there seem to be no long-term problems; in fact, users seem quite satisfied with the benefits of stateless blades. I have heard of good success both with ESX booting from the SAN and with guest operating systems booting from SAN.
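To make the LUN masking pitfall concrete, the following is a minimal conceptual sketch (not any vendor's actual management interface; all names and WWNs are hypothetical). A storage array presents each LUN only to the initiator WWNs listed for it, so a blade whose WWN was never added to the mask cannot see its boot LUN, and its SAN boot fails:

```python
# Hypothetical masking table: LUN -> set of initiator WWNs allowed to see it.
masking_table = {
    "lun0_boot_blade1": {"50:01:43:80:01:aa:bb:01"},
    "lun1_boot_blade2": {"50:01:43:80:01:aa:bb:02"},
    "lun2_shared_vmfs": {"50:01:43:80:01:aa:bb:01",
                         "50:01:43:80:01:aa:bb:02"},
}

def visible_luns(initiator_wwn, table):
    """Return the LUNs the array presents to a given initiator WWN."""
    return sorted(lun for lun, wwns in table.items() if initiator_wwn in wwns)

# Blade 1 sees its own boot LUN plus the shared VMFS datastore.
print(visible_luns("50:01:43:80:01:aa:bb:01", masking_table))

# A blade whose WWN is missing from the mask sees nothing -- the kind of
# configuration mistake that shows up as a failed boot from SAN.
print(visible_luns("50:01:43:80:01:aa:bb:99", masking_table))
```

Once the missing WWN is added to the masking table (and the matching Fibre Channel zoning is in place), the blade finds its boot LUN and the "problem with SAN boot" disappears.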
Myth #5 – Virtualizing on blades is a problem due to I/O limitations
Since virtualization allows multiple virtual servers to run on each blade, and each virtual server brings additional I/O requirements, one blade myth is that virtualizing on blades will exceed the I/O capabilities either per blade or per chassis. Although early blade systems were limited to two NICs, current offerings have increased these limits to four, six or eight per blade, depending on the vendor (e.g., IBM and HP both support up to eight Ethernet NICs, though in different configurations; IBM requires a sidecar for eight NICs). Some blade systems (e.g., Sun and Hitachi) allow standard off-the-shelf PCI Express cards and/or modules to be installed in the chassis, giving additional flexibility in I/O configurations. Changes in VMware ESX have also changed the way I/O is handled, making additional NICs less of an issue.
Some users are also afraid of hitting an aggregate limit either with NICs or Fibre Channel HBAs. In fact, even though most blade systems now support 4 Gb Fibre Channel, IBM now supports 10 GbE, and InfiniBand is available on a number of platforms, most users have not yet taken advantage of these high-speed technologies and still have not hit an I/O bottleneck. Of course, I/O is extremely workload dependent: it would be possible to intentionally configure a workload mix that could become I/O bound, but it's also possible to avoid it.
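A quick back-of-the-envelope calculation shows why most consolidated workloads stay well inside a blade's aggregate bandwidth. The figures below are illustrative assumptions, not vendor specifications:

```python
def io_headroom(num_vms, mbps_per_vm, num_nics, nic_gbps=1.0):
    """Return (demand_mbps, capacity_mbps, fits) for one blade.

    Assumes a simple additive model: each VM contributes an average
    network load, and NIC bandwidth pools across all ports.
    """
    demand = num_vms * mbps_per_vm
    capacity = num_nics * nic_gbps * 1000  # convert Gbps to Mbps
    return demand, capacity, demand <= capacity

# Ten VMs averaging 100 Mbps each on a blade with four 1 GbE NICs:
# 1000 Mbps of demand against 4000 Mbps of capacity -- ample headroom.
print(io_headroom(10, 100, 4))
```

Only an unusually I/O-heavy mix (or a deliberately pathological one, as noted above) pushes demand past capacity, and adding NICs or moving to 10 GbE restores the headroom.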
Myth #6 – PC blades are virtual PCs running on blades
As people begin to look at blades and virtualization technologies and where they fit, many people confuse PC blades and virtual PCs running on blades. In fact, HP's Consolidated Client Infrastructure offering was delivered using PC blades (a completely different architecture than the HP BladeSystem blade servers), while IBM's initial Virtual Hosted Client Infrastructure (which has now evolved into their Virtual Client offering) was delivered as VMware and Citrix Presentation Server running on IBM BladeCenter.
Though it may seem confusing, delivering desktop alternatives is a great fit for blades and virtualization. PC blades use a blade form factor similar to server blades and offer benefits for certain situations. Virtual machines, running on back-end server blades and brokered out to users, can offer additional advantages that many users are looking to as they rethink their future desktop strategy. As user requirements grow and change, it's easy to add more blades to the back end and let the virtualization software handle the rest. Bundled solutions in this area are appearing from numerous virtualization and blade system vendors, including VMware, Xen, Citrix, IBM, NEC, HP and others.
The real bottom line
As I talk with users who have successfully implemented blades and virtualization technologies, I often hear words similar to those spoken by one user last week: "We implemented VMware exclusively on blades and plan to continue. The bottom line reason? Cost."
As I hear how prevalent some of these myths and misconceptions are, one additional message is clear: as you make decisions on consolidation, server virtualization, virtual desktops, and server platforms, choose your partners carefully. Be sure to work with resellers and integrators who are current on both blades and virtualization, not ones relying on old data and early product horror stories.
About the author: Barb Goldworm is president and chief analyst of Focus Consulting, a research, analyst and consulting firm focused on systems and storage. She has spent 30 years in the computer industry in various management and industry analyst positions with IBM, Novell, StorageTek, Enterprise Management Associates, and multiple successful startups. She currently chairs the Blade Server Summit conference and is the author of Blade Servers and Virtualization: Transforming Enterprise Computing While Cutting Costs, published by Wiley.