Sure, early blade server products ran too hot, cost too much and lacked configuration flexibility; but that's old news, says analyst Barb Goldworm. Today's new breed of blades may not be perfect, she says in this interview, but they run cool and are cost-effective when teamed with virtualization.
In this interview, Goldworm addresses the reasons why IT managers say they're not using blades, details advancements that have made blades better, and gives tips on using blades and virtualization together. Goldworm is president and chief analyst of Focus Consulting, a research, analysis and consulting firm focused on systems, software and storage, and author of a new book, Blades and Virtualization: Transforming Enterprise Computing While Cutting Costs, published by Wiley. She is chairperson of the 2007 Server Blade Summit: Blades and Virtualization, which runs from May 1-3 in Anaheim, Calif.
SearchServerVirtualization.com: Why do you think blade servers are a good platform for virtualization?
Barb Goldworm: Blades and virtualization address many of the same issues -- consolidating to save space, reducing time to provision new servers, improving manageability, improving utilization of resources -- so implementing them together as a hardware/software combination can provide double the benefits with a single implementation effort.
Now that blades are available with all the same options as rack servers (which wasn't true originally), virtualizing on blades gives the same configuration options, plus the additional benefits of blades. Examples of these additional benefits include modular components; shared power, cooling, management and networking components; built-in remote, out-of-band management; pre-wiring; and, of course, high density.
What did you hear from IT managers who are using blades when you were researching your book?
Goldworm: Those who had implemented virtualization on blades consistently told us that combining the two gave them much more for their money. One user, quoted in the book, advocated deploying server virtualization on blades whenever possible. He said:
"The configuration becomes so much more manageable and redundant, and that boosts efficiency even more. Why? Because blades not only take up half the space of traditional 1U servers, but they also share vital resources such as switches and interface cards, allowing for more simplified, centralized management."
In our recent survey, the majority of IT managers responding said they have not bought and, in 2007, will not buy blade servers. Let's discuss their reasons for not buying blades, starting with the high cost of a blade chassis.
Goldworm: This [reason] is only valid for implementations with small numbers of servers. Although a blade chassis does mean an up-front investment in the chassis before paying for the individual blades, when you amortize the cost of the chassis and the shared components across all the blades, the cost per server is comparable or even less.
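Goldworm's amortization point can be made concrete with a quick back-of-the-envelope sketch. All prices below are made-up illustrative assumptions, not vendor figures; the shape of the curve, not the dollar amounts, is the point: the chassis premium only hurts at low blade counts.

```python
# Hypothetical cost comparison: a blade chassis amortized across its
# blades vs. standalone 1U rack servers. Prices are illustrative only.

CHASSIS_COST = 5000        # up-front chassis (shared power, cooling, switches)
BLADE_COST = 2500          # per individual blade
RACK_SERVER_COST = 3500    # comparable standalone 1U rack server

def per_server_cost_blades(n_blades: int) -> float:
    """Chassis cost spread across n blades, plus the blade itself."""
    return CHASSIS_COST / n_blades + BLADE_COST

for n in (2, 4, 8, 14):
    blade = per_server_cost_blades(n)
    print(f"{n:2d} blades: ${blade:7.2f}/server vs ${RACK_SERVER_COST}/rack server")
```

With these sample numbers, a half-empty chassis (2 blades) costs more per server than rack equivalents, but the per-server cost drops below the rack price once the chassis is reasonably populated, matching the observation that the objection "is only valid for implementations with small numbers of servers."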
What's your view on another barrier respondents listed: the vendor lock-in that comes with a chassis?
Goldworm: There is a lock-in issue, in that blades from one vendor's chassis don't fit in another vendor's chassis. However, there is nothing to stop users from having chassis from more than one vendor. In fact, some users choose to do so to avoid having a sole source vendor issue, just as they do with rack servers.
In general, what we see is that users tend to standardize on one or two server vendors for most of their server needs. Then when they move to blades, they choose the blade vendors based on their preferred server vendors; e.g., if they've standardized on HP, IBM or Dell servers, they move to HP, IBM or Dell blades. It's also important to realize that all the blade chassis support multiple configurations of blades. You are not tied to populating an entire chassis with identical processors, identical memory, or identical storage configurations, thus creating a great deal of flexibility.
In our survey, a huge majority believe that blades' heat issue hasn't been resolved. In fact, a few keep their chassis half-loaded to reduce heat issues. Is overheating just an issue with servers built prior to 2007, or does it persist today?
Goldworm: In the earlier days of blades, cooling was a big issue, and many users ran half loaded. The past year has seen significant improvement in power and cooling efficiencies and management. In some data centers, cooling may still be an issue, but in many data centers there are lots of things that can be done to improve cooling and allow blades to be easily incorporated. In addition, chip, blade and power/cooling vendors are still working on this issue, with improvements continuing to come.
Is there still a problem with blades' lack of flexibility in peripherals?
Goldworm: There's probably some confusion here as well. In early blade systems, there were several areas of limited flexibility with I/O. First, the embedded switches in the chassis were limited to certain vendors and certain features from those vendors. Now the switch options have expanded both in the vendors supported and in the features available on the switches. In addition, if you don't want to use embedded switches, you can always use the pass-through options to connect to external switches, just like you would with rack servers.
The other area that caused concern early on -- and still does for those relying on old information -- is the myth that blades only allow 2 NICs per blade, which would be particularly problematic for virtualizing on blades. Fortunately, this limit is no longer valid, and most blades now go up to 6 or 8 NICs per blade (depending on the vendor).
For those less-frequent applications requiring specific non-standard cards, some cards are not available in a blade form factor, and those applications would not run on most blades. However, some blade vendors (e.g., Sun and Hitachi) support standard PCI Express cards as add-ins to their blade chassis.
How do blades stand up to racks in ease of management?
Goldworm: Blades offer significant improvements in ease of management due to their architecture. Blades were designed from the outset for remote lights-out management. Even if the OS is down, you still have remote management capabilities to every component in the chassis, without doing any special wiring (it's all pre-wired). The chassis includes redundant management modules which are automatically connected to everything. In addition, if there is a failure, even a non-technical person can read the lights, pop out a failed blade and pop a new blade in.
Initial deployment is simplified by the pre-wiring, and ongoing cable management is far easier for the same reason. Blade chassis wiring always looks as though it was done by someone extremely fastidious.
In addition, some of the new virtual I/O capabilities offered in blades this year add to the ongoing management benefits by simplifying configuration changes. For example, HP's Virtual Connect for the HP BladeSystem abstracts the physical I/O connections from the components within the network, allowing changes to be made within the chassis without having to reconfigure everything outside the chassis.
From your answers, it seems that many objections to using blades are based on users' negative experiences with first-generation blades.
Goldworm: Yes. Since many of the concerns I hear from folks are based on old data, I encourage people to get the most up-to-date information they can before making their next round of strategic decisions. While neither blades nor virtualization is right for every IT shop or every application, both offer significant advantages in both real dollars and ongoing soft costs.