IT professionals often use sophisticated tools to monitor and report virtual server performance, ensuring that the appropriate computing resources are provisioned to each workload and verifying that each system is operating within established parameters. But a properly running hardware platform is no guarantee that the workloads on that platform are delivering an adequate level of service to users. As businesses gain greater appreciation for the importance of each application, the focus is slowly shifting from systems management to service management.
The importance of application performance management
Application performance management is an emerging data center discipline designed to ensure that workloads are delivering an appropriate level of performance to end users and to assist IT professionals in diagnosing the causes of workload performance issues.
Application performance management (sometimes called business service management or monitoring) is based on the realization that system hardware performance is relatively easy to monitor, but hardware performance does not always translate to workload performance. The server may have the proper resources, yet workloads may still experience bottlenecks, conflict with one another or otherwise underperform.
The result is a poor user experience, which can lead to diminished productivity and work quality, lost sales opportunities or unnecessary support calls.
Application performance management helps IT professionals understand the behaviors of each enterprise workload and the way those workloads interact across data center servers, storage and networking infrastructures.
Pinpointing problems in a virtual data center
Before virtualization, it was relatively simple to troubleshoot application problems on the corresponding server. In most cases, admins could resolve the problem by reconfiguring, upgrading or patching the server.
Unfortunately, virtualization introduced an entirely new layer of complexity to application performance. It is certainly beneficial to improve a server's utilization by running multiple workloads, but shared hardware resources can produce unforeseen consequences that affect workload performance in unexpected ways.
As an example, suppose a database server and media server share the same physical host server. There are ample resources on the server to handle the requirements of both workloads, and under normal use patterns, both workloads deliver adequate performance. Now suppose that from time to time, users report poor performance in database queries. IT technicians will typically act on those complaints by inspecting the database server VM, expecting to find a configuration change or resource shortage. But the database server checks out. The only glitch appears to be heavy local disk activity during periods of slow database performance, but the disk activity is not related to the database VM. However, another technician realizes that the company just posted video of a new product line, and additional investigation shows that the media server VM was servicing heavy requests for streaming video during periods of poor database response.
In this example, the performance problems users experienced on the database server VM were actually caused by spikes in activity from the co-located media server VM. Thus, the performance of one VM can have an adverse effect on another local VM. Virtualization can complicate root cause troubleshooting because VMs can easily be migrated (and resources adjusted) without regard for other workloads on a particular system. To troubleshoot such problems effectively, IT professionals need business service management tools that can identify the physical locations of VMs and the applications running within each VM.
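The cross-VM detective work in the scenario above amounts to correlating one VM's slow response times with another VM's resource activity. As a minimal sketch of that idea, the snippet below computes a Pearson correlation between two invented metric series (the function name, variable names and sample values are all hypothetical; real APM tools pull these metrics from the hypervisor):

```python
# Sketch: correlate a database VM's query latency with a co-resident
# media VM's disk activity to surface cross-VM interference.
# All metric names and sample values below are invented for illustration.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Per-interval samples collected on the shared host (hypothetical):
db_query_ms = [40, 42, 180, 41, 175, 39, 190, 43]        # database VM latency
media_disk_iops = [90, 100, 950, 110, 900, 95, 980, 105]  # media VM disk I/O

r = pearson(db_query_ms, media_disk_iops)
if r > 0.8:
    print(f"Strong correlation (r={r:.2f}): media VM disk activity "
          "tracks database slowdowns -- investigate the shared host.")
```

A high coefficient does not prove causation, but it points the technician at the right neighbor VM far faster than inspecting the database VM in isolation.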
Diagnostic capabilities for application performance management
Examples of workload performance monitoring tools include ManageEngine's Applications Manager, Dell's Foglight, BMC Software's Application Performance Management and APM from IBM. But, regardless of the product choice, the next generation of virtual machine (VM) performance monitoring and management tools must offer an intelligent and holistic view of the virtualized environment that reaches to endpoint devices.
For example, tools must allow IT staff to see the entire virtual infrastructure overlaid on the physical systems. Tools must also track the computing resources each VM uses against automatic performance baselines and flag deviations before they affect users. This combination of features allows tools to correlate cause-and-effect behaviors between multiple workloads to allow for better root cause problem analysis. It's a major challenge, but it will emerge as an important step in data center development.
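Baseline tracking of this kind is, at its simplest, a statistical comparison of each new reading against recent history. The sketch below shows one common approach, flagging any sample that falls outside the mean plus or minus three standard deviations of a recent window (the function, threshold and CPU samples are illustrative assumptions, not any vendor's actual method):

```python
# Sketch: flag a VM metric sample that deviates from its automatic
# baseline, here defined as mean +/- 3 standard deviations of recent
# history. The sample data is invented for illustration.
from statistics import mean, stdev

def out_of_baseline(history, sample, n_sigma=3.0):
    """True if sample falls outside mean +/- n_sigma * stdev of history."""
    mu = mean(history)
    sigma = stdev(history)
    return abs(sample - mu) > n_sigma * sigma

cpu_history = [22, 25, 24, 23, 26, 24, 25, 23]  # recent CPU % samples

for reading in (27, 60):
    if out_of_baseline(cpu_history, reading):
        print(f"ALERT: CPU at {reading}% deviates from baseline")
```

Commercial tools refine this with per-hour and per-day baselines so that, say, a nightly backup spike is learned as normal rather than alerted on, but the underlying comparison is the same.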
From a value perspective, better tools with root cause analytical capabilities can potentially pay for themselves by avoiding unnecessary expenses. For example, an IT technician unable to examine the behavior relationships between VMs can waste considerable time trying to correct workload problems by migrating VMs, upgrading servers, replacing servers or reallocating resources. While those tactics might alleviate the immediate problem, they do not address the underlying cause or prevent subsequent problems.
The goal of business IT is to provide services to employees, partners and customers of the business. Ensuring that every workload is available and providing adequate user performance will be a vital part of tomorrow’s data center management. The proper tools not only prevent user problems before they start, but can also speed troubleshooting when VM interactions cause unexpected problems for workloads that are otherwise configured and operating properly. Application performance monitoring tools are available, but the features and functionality are still evolving to provide better insight and decision-making information to IT professionals.
Stephen J. Bigelow asks:
What problems have you encountered with application performance in a virtual environment?