Embracing virtualization is easier said than done, particularly if you're talking about migrating your whole production
environment and not just creating a virtual lab for minor testing and development scenarios. So what's behind an enterprise virtualization project and what issues do IT managers face during implementation?
In this series, we'll identify the phases of a virtualization adoption, including:
- Identification of candidates
- Capacity planning
- ROI calculation
- P2V migration
- Enterprise management
- Resources management
- Disaster recovery
- Infrastructure automation (provisioning)
- Resources monitoring and reporting
If today's virtualization market seems crowded, we'll discover that the opposite is true. The market is actually still in its infancy, and several areas have yet to be adequately addressed. In this installment, we'll examine the tricky aspects of choosing which servers should be virtualized.
The very first phase of a company-wide virtualization project is to identify which physical servers are to be virtualized. This operation can be much more difficult than you'd think. For a company that lacks an efficient enterprise management policy, it may actually be the most time-consuming.
So, one of the steps here is to take an inventory of the whole data center. A second, equally critical step is to take a complete performance measurement of the whole server population, storing this crucial data for the capacity-planning phase. This step is often overlooked because IT management usually has a general idea of which servers are the least demanding of resources and believes this notion is sufficient for planning purposes.
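To make the measurement step concrete, here is a minimal, Linux-only sketch of sampling system CPU utilization from /proc/stat. It is illustrative, not a complete inventory tool: a real measurement pass would also record memory, disk and network counters across every server and store them for the capacity-planning phase.

```python
# Minimal sketch: system CPU utilization sampled from /proc/stat (Linux only).
# The interval is illustrative; real inventory runs sample over days or weeks.
import time

def read_cpu_ticks():
    """Return (busy, total) jiffy counts from the aggregate 'cpu' line."""
    with open("/proc/stat") as f:
        fields = [int(v) for v in f.readline().split()[1:]]
    idle = fields[3] + fields[4]  # idle + iowait
    total = sum(fields)
    return total - idle, total

def cpu_percent(interval=0.5):
    """Sample system-wide CPU utilization over `interval` seconds."""
    busy1, total1 = read_cpu_ticks()
    time.sleep(interval)
    busy2, total2 = read_cpu_ticks()
    delta_total = total2 - total1
    if delta_total == 0:
        return 0.0
    return 100.0 * (busy2 - busy1) / delta_total

print(round(cpu_percent(), 1))
```

The same loop, run on a schedule and written to a central store, produces the raw samples that the later averaging and peak analysis depends on.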
But sometimes a benchmarking analysis reveals unexpected bottlenecks, caused either by underlying problems or simply by poor estimation of server workloads. In the first case, the best course is to pause the project immediately and resolve the bottleneck. Moving a badly performing server into a virtual environment can seriously affect the whole infrastructure and make subsequent troubleshooting enormously difficult.
A precise calculation of performance averages and peaks is also fundamental for the next phase of our virtualization adoption: during capacity planning, this data will be needed to consolidate complementary roles on the same host machine.
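The averages-and-peaks calculation can be sketched as follows. The sample list is invented data, and the 95th-percentile "sustained peak" is one common convention, not the only valid one:

```python
# Hedged sketch: reducing collected utilization samples to the average and
# peak figures that feed capacity planning. Sample values are invented.
from statistics import mean

def summarize(samples):
    """Return average, absolute peak, and a 95th-percentile sustained peak."""
    ordered = sorted(samples)
    p95_index = max(0, int(round(0.95 * len(ordered))) - 1)
    return {
        "average": mean(ordered),
        "peak": ordered[-1],
        "p95": ordered[p95_index],
    }

cpu_samples = [12, 15, 11, 85, 14, 13, 90, 16, 12, 14]  # % utilization
print(summarize(cpu_samples))
```

The gap between the average and the peak is exactly what makes consolidation decisions hard: two servers with low averages but coinciding peaks are poor candidates for sharing a host.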
Choosing virtualization candidates
After collecting performance measurements, we have to identify good candidates for virtualization among the inventoried population.
Contrary to what some customers think (and some consultants claim), not every physical server can or should be virtualized today. Three factors are critical in deciding which services can go into a virtual machine: virtualization overhead, dependency on highly specific hardware, and product support.
Future improvements in virtualization technology will mitigate overhead more and more, but for now it's something we still have to take seriously.
I/O workload is a critical sticking point in virtualization adoption, and servers that rely heavily on data exchange cannot be migrated so easily.
Databases and mail servers are particularly hard to move into virtual infrastructures. In both cases, virtualization adds overhead to the I/O stream in a way that significantly affects performance, sometimes to the point that migration is discouraged.
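One practical way to quantify that I/O overhead is to run the same micro-benchmark on the physical server and on a trial virtual machine and compare the numbers. Below is a minimal sequential-write sketch; the sizes are deliberately small and illustrative, and a real comparison would also exercise random I/O and the actual application workload:

```python
# Illustrative sequential-write micro-benchmark for physical-vs-virtual
# comparison. fsync keeps the page cache from hiding the real device cost.
import os
import tempfile
import time

def write_throughput_mb_s(size_mb=16, block_kb=256):
    """Write size_mb of zeros in block_kb chunks; return MB/s including fsync."""
    block = b"\0" * (block_kb * 1024)
    blocks = size_mb * 1024 // block_kb
    with tempfile.NamedTemporaryFile() as f:
        start = time.perf_counter()
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())
        elapsed = time.perf_counter() - start
    return size_mb / elapsed

print(round(write_throughput_mb_s(), 1))
```

If the virtual machine's figure is a large fraction below the physical one, that gap is the overhead the database or mail workload would have to absorb.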
But there is no general rule for these or other server types; it really depends on the workload. In some case studies, customers could virtualize without any particular effort; in others, the migration succeeded only when the virtual machines received double the expected resources.
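That "double the expected resources" experience suggests a simple sizing sketch: scale the measured sustained utilization by an overhead factor before allocating virtual resources. The 2.0 default below mirrors the worst case mentioned above and is an assumption to be tuned per workload, not a fixed rule:

```python
# Illustrative sizing sketch: measured utilization -> vCPU allocation.
# The overhead_factor is an assumption (worst case from the text), not a rule.
import math

def size_vcpus(physical_cores, p95_utilization_pct, overhead_factor=2.0):
    """Estimate vCPUs: cores used at the 95th percentile, scaled for overhead."""
    cores_used = physical_cores * p95_utilization_pct / 100.0
    return max(1, math.ceil(cores_used * overhead_factor))

# A 4-core server sustaining 30% utilization, with 2x headroom:
print(size_vcpus(physical_cores=4, p95_utilization_pct=30))
```

The same pattern applies to memory and I/O bandwidth; the point is that allocation is derived from measurement plus an explicit overhead assumption, not from the physical server's nominal specification.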
The second sticking point relates to special hardware on which the production servers depend. At the moment, virtualization products can virtualize the standard range of ports, including legacy serial and parallel ports, but vendors still cannot virtualize new hardware components on demand.
A telling example is the modern, powerful video adapters required by game development and CAD/CAM applications, which are among the most contested unsupported hardware today.
The third sticking point in confirming a server as a virtualization candidate is product support.
The market has only been ready for modern server virtualization for the last two years, and vendors have been very slow to support their products inside virtual machines.
It's easy to understand why: too many factors in a virtual infrastructure can affect performance -- so many that application behavior can be severely influenced by something the vendor's support staff cannot control or even know is present.
Microsoft itself, while offering a virtualization solution, has been reluctant to support its products inside its own Virtual Server, and as of today many Windows Server technologies are still unsupported.
So, even when a server seems a good candidate for virtualization, the final word rests with the vendors of the applications running on it -- at least if you want to count on vendor support! Although every virtualization provider has its own, usually undisclosed, list of supporting vendors, it's always best to query your application's vendor directly to confirm support. Going virtual without support is risky and inadvisable, even after a long period of testing.
Assistance from recognition tools
From a product point of view, the market still offers few alternatives. The problem of recognizing candidate servers can be approached by four kinds of specialists: hardware vendors, operating system vendors, application vendors and neutral virtualization specialists.
Hardware vendors like IBM and HP provide big iron for virtualization and offer outsourcing services. They usually have internal technologies for recognizing virtualization candidates. In rare cases, these tools are even available for customer use, such as the IBM Consolidation Discovery and Analysis Tool (CDAT).
Operating system vendors do not currently provide tools for virtualization, but that is about to change. All of them, from Microsoft and Sun to Novell and Red Hat, are going to implement hypervisors in their platforms and will have to offer tools that accelerate virtualization adoption.
Microsoft announced at the May WinHEC 2006 conference that it would offer a new product called Virtual Machine Manager, which addresses these needs and more.
Application vendors hardly ever offer specific tools for virtualization, even though they are in the best position to do so. The best move to expect from them would be an application profiling tool, featuring a database of average values, to be used in performance comparisons between physical and virtual testing environments.
The best concrete solution currently available for customers comes from the fourth category, the neutral virtualization specialists.
Among them, the most widely known is probably PlateSpin, with its PowerRecon 2.0, which offers a complete and flexible solution for inventorying and benchmarking physical machines in the data center, eventually passing the data to physical-to-virtual migration tools, which we'll cover in the fourth phase of our virtualization adoption series.
In the next part, we'll address the delicate phase of capacity planning, on which the success or failure of the whole project depends.
About the author: Alessandro Perilli, a self-described server virtualization evangelist, launched his influential virtualization.info blog in 2003. He is an IT security and virtualization analyst, book author, conference speaker and corporate trainer. Microsoft has named him a Most Valuable Professional for security technologies. His certifications include Certified Information Systems Security Professional (CISSP); Microsoft Certified Trainer (MCT); Microsoft Certified System Engineer w/ Security competency (MCSES); CompTIA Linux+; Check Point Certified Security Instructor (CCSI); Check Point Certified System Expert+ (CCSE+); Cisco Certified Network Associate (CCNA); Citrix Metaframe XP Certified Administrator (CCA); and others.