Managing performance is a key task in systems administration. With virtualization (where multiple independent operating systems are competing for system resources), measuring and monitoring real-world performance of your applications is even more important.
But this is easier said than done. In this article, I'll cover some details and approaches for monitoring the performance of your physical and virtual machines. The goal is to make better decisions in matters of virtualization performance and load distribution.
Monitoring real-world performance
When you are measuring performance, keep in mind that the best predictions come from real-world activity. For example, if you're planning to move a production line-of-business application from a physical server to a virtual one, it's best to have a performance profile that reflects production load as closely as possible. Or, if you're developing a new application, expected workload information can be instrumental in making better decisions. All too often, systems administrators take a trial-by-fire approach, saying, "If there are performance problems, we'll address them when users complain." That raises the question: How can you collect realistic performance data?
IT organizations that have invested in centralized performance monitoring tools will be able to easily collect CPU, memory, disk, network and other performance statistics. Generally, this information is stored in a central repository and reports can be generated on demand.
An alternative is the manual approach. Because most operating systems provide methods for capturing performance statistics over time, all that's required is to set up those tools to collect the relevant information. For example, Figure 1 shows options for tracking server performance over time using the Windows System Monitor tool. Key performance statistics such as CPU, memory, disk and network utilization can be collected over time and analyzed. You'll want to pay close attention to both the peaks and the average values.
Figure 1: Capturing performance data using the Windows System Monitor tool.
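As a rough sketch of this collect-then-analyze workflow, the following Python snippet simulates periodic counter sampling and then reports the peak and average values. The `sample_cpu_percent` function here is a hypothetical stand-in for a real counter read (such as a System Monitor log or a WMI query); the numbers are simulated, not measured.

```python
import random
import statistics

def sample_cpu_percent():
    """Hypothetical stand-in for reading a real performance counter;
    returns a simulated CPU utilization sample in percent."""
    return random.uniform(5, 95)

# Collect one sample per interval across the monitoring window.
samples = [sample_cpu_percent() for _ in range(60)]

# Both numbers matter: sizing to the average alone risks
# saturation during peak bursts.
peak = max(samples)
average = statistics.mean(samples)

print(f"peak: {peak:.1f}%  average: {average:.1f}%")
```

In practice you would replace the simulated sampler with whatever counter source your monitoring tool exposes, but the peak-versus-average analysis stays the same.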
If you're planning to migrate an existing application to a virtual environment, you can monitor its current performance prior to the move. But what if you are considering deploying a new application? That's where stress testing comes in.
Some applications include performance-testing functionality as part of the code base. For those that don't, numerous load-testing tools are available on the market. They range from free or cheap utilities to full enterprise performance-testing suites. For example, Microsoft provides its Application Center Test (ACT) utility to test the performance of Web applications and report on a number of useful metrics.
You can predict approximate performance by running the application within a virtual machine (VM) and measuring response times for common operations based on a variety of different loads. Your goals are first to ensure that the expected workload will be supported, and second, to make sure that no unforeseen stability or performance problems arise.
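One simple way to approximate this kind of measurement is to time a representative operation at several concurrency levels and compare the latencies. In the sketch below, `common_operation` is a hypothetical placeholder for whatever your application actually does (a request handler, a database query, and so on); the load levels are likewise illustrative.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

def common_operation():
    """Hypothetical stand-in for an application operation being timed."""
    return sum(i * i for i in range(10_000))

def measure_response_times(concurrency, requests):
    """Run `requests` operations with `concurrency` workers and
    return per-operation latencies in milliseconds."""
    latencies = []

    def timed_call():
        start = time.perf_counter()
        common_operation()
        latencies.append((time.perf_counter() - start) * 1000)

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        for _ in range(requests):
            pool.submit(timed_call)
    return latencies  # pool shutdown waits for all submitted calls

for load in (1, 4, 16):  # a variety of simulated load levels
    lat = measure_response_times(load, 50)
    print(f"concurrency {load:>2}: mean {mean(lat):.2f} ms, max {max(lat):.2f} ms")
```

Running the same harness against the application on physical hardware and inside a VM gives you a direct, workload-specific comparison of response times under load.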
It's no secret that virtualization solutions present some level of overhead that reduces the performance of virtual machines. The additional load is based on the cost of context-switching and redirecting requests through a virtualization layer. Unfortunately, it's very difficult to provide a single number or formula for predicting how well an application will perform in a virtual environment.
That's where synthetic performance benchmarks can help. The operative word here is synthetic, meaning that the tests will not provide real-world usage information. Instead, they will give you information on the maximum performance of the hardware given a pre-defined workload. One example of a benchmark suite is SiSoftware Sandra 2007 (a free version is available from SiSoftware); many other tools are available from third-party vendors. It's important to choose one tool and use it consistently rather than switching among multiple tools, because results from different products (and often, different versions) cannot be accurately compared.
Figure 2: Viewing results from a physical disk benchmark performed with SiSoftware's Sandra 2007.
The general approach is to run similar tests on both physical hardware and within virtual machines. If the tests are run on the same or similar hardware configurations, they can be reliably compared. Table 1 provides an example of typical benchmark results that might be obtained by testing a single operating system and application in physical versus virtual environments. A quantitative comparison of the capabilities of each subsystem can help determine the amount of "virtualization platform overhead" that can be expected.
Table 1: Comparing VM and physical machine performance statistics.
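To illustrate how such a comparison works, here's a small sketch that computes per-subsystem virtualization overhead from a pair of benchmark runs. The subsystem names and scores below are invented for illustration, not measured results or values from Table 1.

```python
# Hypothetical benchmark scores (higher is better) for the same
# workload run on physical hardware and inside a VM.
physical = {"cpu": 1000.0, "memory": 800.0, "disk": 400.0, "network": 600.0}
virtual  = {"cpu":  920.0, "memory": 740.0, "disk": 300.0, "network": 540.0}

def overhead_percent(phys, virt):
    """Virtualization overhead per subsystem: how much lower the VM
    scored relative to the physical baseline, as a percentage."""
    return {k: (phys[k] - virt[k]) / phys[k] * 100 for k in phys}

for subsystem, pct in overhead_percent(physical, virtual).items():
    print(f"{subsystem:8s} overhead: {pct:5.1f}%")
```

With these invented numbers, the disk subsystem shows the largest overhead, which is a common real-world pattern for I/O-heavy workloads, but your own measurements are the only reliable guide.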
Distributing VM load
Portability is one of the main benefits of virtualization. It's usually fairly simple to move a VM from one host server to another. Ideally, once you profile your physical and virtual machines, you'll be able to determine general resource requirements. Based on these details, you can mix and match VMs on host computers to obtain the best performance out of your physical hardware.
Table 2 shows high-level requirements for some hypothetical VM workloads. Ideally, the load will be distributed. For example, those VMs that have high CPU requirements can be placed on the same physical host as those that are disk-intensive. The end result is a more efficient allocation of VMs based on the needs of each workload.
Table 2: Comparing high-level information about various virtual machine workloads.
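The mix-and-match logic can be sketched as a simple first-fit placement: sort VMs by their largest resource demand and assign each to the first host with headroom on every resource. The VM names, resource figures and two-host setup below are hypothetical, not taken from Table 2.

```python
# Illustrative per-VM resource profiles, as percent of one host's capacity.
vms = {
    "web":   {"cpu": 60, "disk": 10},   # CPU-intensive
    "db":    {"cpu": 20, "disk": 70},   # disk-intensive
    "batch": {"cpu": 50, "disk": 20},
    "files": {"cpu": 10, "disk": 60},
}

def place_vms(vms, hosts=2, capacity=100):
    """Greedy first-fit: place each VM (largest demand first) on the
    first host that still has room on every resource."""
    loads = [{"cpu": 0, "disk": 0} for _ in range(hosts)]
    placement = {}
    for name, need in sorted(vms.items(), key=lambda kv: -max(kv[1].values())):
        for host, load in enumerate(loads):
            if all(load[r] + need[r] <= capacity for r in need):
                for r in need:
                    load[r] += need[r]
                placement[name] = host
                break
    return placement

print(place_vms(vms))
```

With these numbers, the CPU-heavy "web" VM ends up sharing a host with the disk-heavy "db" VM, which is exactly the complementary pairing described above. Real consolidation tools use far more sophisticated models, but the principle is the same.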
Making better virtualization decisions
Overall, a little bit of performance testing can go a long way toward ensuring that your VMs will work properly in a virtual environment. By combining data from real-world performance tests with stress-testing results and synthetic benchmarks, you can get a good idea of how to best allocate your VMs.