Gone are the days when VMware virtualization was a pure consolidation play. Increasingly, IT managers bitten by the virtualization bug now host their most performance-intensive applications on ESX, arguing that the improved availability and manageability justify the cost and performance overhead of a VMware license.
According to Adrian James, the infrastructure and operations manager at the university, the project was a wash financially. "We probably spent about the same as we would have if we had deployed on physical boxes, but we have a much better service in terms of the functionality we have," he said.

Upgrading Exchange was part of a larger infrastructure overhaul that implemented storage virtualization and IP telephony, upgraded networking infrastructure and replaced desktop devices. As part of that project, James and his team also brought in VMware to improve server utilization, consolidating 280-plus servers down to 30 ESX hosts.

Upgrade brings new game plan
But virtualizing Exchange was not part of the original game plan. At first, the idea was simply to upgrade to Exchange 2007 running on 15 new physical hosts. But James and his team quickly realized that virtualizing Exchange would improve hardware independence (i.e., freedom from device driver incompatibilities) and simplify both patching and disaster recovery. "We weren't doing it for consolidation but for all the other nice features that virtualization has," he said. In September 2007, James' team began a pilot, and the group was "pleasantly surprised" by the performance it saw. The team migrated users at a rate of 5,000 per night and completed the upgrade by the end of the month.
The new Exchange environment consisted of eight virtual machines -- four mailbox servers, one client access server and three hub transport servers -- running on an eight-node ESX cluster of four-way dual-core HP c-Class blades, seven fewer hosts than the team had originally planned on. That's because the team decided not to run Microsoft Cluster Continuous Replication (CCR), an Exchange high-availability feature that uses asynchronous log shipping to maintain a second, standby copy of the database for failover purposes. By implementing VMware High Availability instead, "we felt that the restart times in the event of a failure were sufficiently low that there wasn't a compelling reason to introduce cluster replication," James said.
But James didn't skimp on hardware resources. "I've seen people try and shoehorn huge applications onto what is an underpowered server to start with, and then they say that virtualization didn't work for them," he said. "Virtualization is not some magic technology that makes it so you can get something out of nothing."

To that end, each Exchange server in the university's environment is assigned four virtual CPUs (vCPUs) and 16 GB of RAM to start, which can be "adjusted to find the sweet spot of what our performance really is," James said. Ultimately, James' goal is to achieve a 2:1 consolidation ratio on his high-utilization ESX nodes, but not at the expense of performance. "With high-utilization applications, do not overcommit resources; give them all the resources they require," he said.
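James' sizing rule can be sketched as a simple capacity check. This is a hypothetical illustration, not VMware tooling: the VM sizing (4 vCPUs, 16 GB) comes from the article, while the host RAM figure and the strict no-overcommit test are assumptions for the sake of the example.

```python
# Hypothetical capacity check illustrating the "do not overcommit" advice.
# VM sizing mirrors the article (4 vCPUs / 16 GB per Exchange server);
# the host's 32 GB of RAM is an assumed figure.

def is_overcommitted(host_cores, host_ram_gb, vms):
    """Return True if the VMs placed on a host exceed its physical resources.

    vms: list of (vcpus, ram_gb) tuples, one per virtual machine.
    """
    total_vcpus = sum(vcpus for vcpus, _ in vms)
    total_ram = sum(ram for _, ram in vms)
    return total_vcpus > host_cores or total_ram > host_ram_gb

# A four-way dual-core blade exposes 8 physical cores; assume 32 GB of RAM.
host_cores, host_ram_gb = 8, 32

# Two Exchange VMs at the article's sizing -- exactly the 2:1 ratio James targets.
exchange_vms = [(4, 16), (4, 16)]
print(is_overcommitted(host_cores, host_ram_gb, exchange_vms))  # False: fits

# A third VM would push both CPU and memory past the physical limits.
print(is_overcommitted(host_cores, host_ram_gb, exchange_vms + [(4, 16)]))  # True
```

In practice ESX can overcommit both CPU and memory safely for lighter workloads; the point of the strict check here is that, per James, high-utilization applications should stay at or below the host's physical resources.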
Another commonly cited roadblock to virtualizing Exchange is Microsoft's lack of official support for Exchange running on VMware, but -- knock wood -- Microsoft has been gracious about responding to the university's issues. "We haven't had any problems at all," James said. If they had, however, "we were prepared to replicate the problem on a physical server."

Chris Wolf, an analyst at Midvale, Utah-based research firm Burton Group who specializes in virtualization, said he sees more and more organizations virtualizing high-performance production applications like Exchange, Oracle and high-I/O imaging servers. "Last year it was 25% of our clients; this year it's more like 50% -- you just need good hardware to do it. Provided adequate resources, though, there are not a lot of barriers to virtualizing those applications today."