Virtualization performance benchmarks needed ASAP, vendors say

Big players in the virtualization world griped about the absence of performance benchmarks for virtual machines on CIO Talk Radio yesterday and discussed some of the issues surrounding virtualization standards.

Guests on the show included Simon Crosby, Chief Technology Officer of the Virtualization and Management Division of Citrix; Tom Bishop, Chief Technology Officer of BMC Software; Dr. Tim Marsland, Sun Fellow and Chief Technology Officer for the Software Organization at Sun Microsystems Inc.; and Brian Stevens, Chief Technology Officer and Vice President of Engineering at Red Hat.

The glaring omission in this lineup: VMware, Inc.

The panelists on CIO Talk Radio didn’t mention VMware by name, but did complain that some companies aren’t being open with their performance data, preventing the virtualization industry from publishing comparative performance figures.

VMware’s licensing agreement for ESX allows users to conduct internal performance testing and benchmarking studies, and allows those users (but not unauthorized third parties) to publish or publicly disseminate the data, provided that VMware has reviewed and approved the methodology, assumptions and other parameters of the study.

Users who have published benchmark data, as senior systems engineer Mark Foster did on his blog, have had to unpublish their results because of VMware’s stipulations.

VMware introduced its own free benchmarking tool, VMmark, last year for certain applications.

Meanwhile, the SPEC Virtualization Committee has been working to create standard benchmarks for VMs. The committee’s goals are to deliver a benchmark that will model server consolidation of commonly virtualized systems such as application servers, web servers and file servers; provide a means to compare server performance while running a number of VMs; and produce a benchmark designed to scale across a wide range of systems.
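The SPEC goals above imply a single aggregate score that compares servers running many VMs at once. One common way to build such a score is a geometric mean of per-workload throughputs normalized against a baseline, so that no single workload dominates the result. A toy sketch of that idea follows; the workload names and numbers are purely illustrative and not drawn from any real benchmark:

```python
import math

# Hypothetical throughputs (requests/sec) measured on a host running
# several VMs concurrently, and a reference baseline for each workload.
# All names and values here are illustrative assumptions.
measured = {"app_server": 840.0, "web_server": 1320.0, "file_server": 610.0}
baseline = {"app_server": 1000.0, "web_server": 1500.0, "file_server": 800.0}

def consolidation_score(measured, baseline):
    """Geometric mean of normalized throughputs across workloads."""
    ratios = [measured[w] / baseline[w] for w in measured]
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

print(f"consolidation score: {consolidation_score(measured, baseline):.3f}")
```

A geometric rather than arithmetic mean is the usual choice for benchmark aggregates because it treats a 2x gain on one workload and a 2x loss on another as a wash.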

SPEC expects these benchmarks to be available by the end of this year, but the timeline is not set in stone, according to SPEC’s website.

Sun’s Marsland said benchmarking progress has been slow because there isn’t an easy way to define a workload, and a large number of benchmarks are required.

“We are talking about a virtual computer, with lots of aspects that need to be benchmarked,” Marsland said. “Every component that gets virtualized needs to be benchmarked.”

Having an open, standardized way of benchmarking is expected to push virtualization further into the mainstream because it will eliminate false perceptions about performance, panelists said. For instance, “there is the thought that I/O-intensive workloads cannot be virtualized, and the absence of benchmarks prevents us from proving otherwise. It is important for us to have good benchmarks out there,” one panelist on the show said.

Though users look at benchmarks, this type of data is most useful to vendors and OEMs who can use the performance standards to improve the technology, and of course, market their products.

“More open scrutiny of performance results will help us to improve as an industry overall,” Bishop said. “There are ways to measure performance in non-virtual environments, and people are adapting those techniques to get the most out of their virtualized environments.”

In terms of application performance in virtual environments, the issues differ depending on the data center infrastructure. The network, the servers and the storage all affect performance, said Stevens of Red Hat.

“The areas that have to progress are around I/O. Intel and AMD are improving around page tables, and we will see improvements around I/O adapters soon,” Stevens said.

Another problem with virtualization? There are support challenges. If an application running in a VM starts acting wacky, the application vendor may not support it, Crosby said.

Licensing and support in virtual environments have been a major gripe with Oracle, for example, which does not support running its applications on VMware.

“It is a reasonable concern…right now there is irrational market-based control. Some folks are abstaining from supporting certain apps [in virtual environments]. As customers demand support, things will hopefully get rational, by next year I hope,” Crosby said.

Join the conversation



The most basic benchmark standard should be defined first: physical versus virtual. Second is the need to evaluate multiple systems running as virtual machines, with performance comparable to the physical systems and an ROI for the virtual environment. It is easy to be misled by isolated benchmarks generated in a virtual vacuum.
An omission from the list of currently available virtualization benchmarks is ‘vConsolidate’. Note: this benchmark has been available through Intel for nearly a year; its anniversary is coming April 17! As one of the people spending his days supporting vConsolidate, I get first-hand information on how current benchmarks (vConsolidate, VMmark, and the future SPEC benchmark) still have work to do before they fully characterize the needs associated with virtualization. The most frequent complaint is that customers and engineering teams alike ask for more ways to characterize and benchmark their specific environments.

vConsolidate was designed to fulfill, and continues to fulfill, the basic needs of virtualization benchmarking. It supports multiple VM vendors, multiple hardware platforms, and multiple ‘profiles’ of test configurations that accommodate different usage models. But it doesn’t support all usage models, just as no other single benchmark currently does. There is more work to be done.

For some example reports with vConsolidate benchmark detail and results, check out the Virtualization Benchmarks section on this site. For general happenings about virtualization and just about anything server related, check out our ‘The Server Room’ community, or a lighter version of similar content.