Once you select your virtualization management tools, you'll need to install and configure them effectively for your own environment.
The actual requirements and processes can vary dramatically, but there are some installation and management best practices that can help mitigate problems.
Installation and management best practices
First, consider the availability of the virtualization management tools in your environment. Remember that virtualization management tools will generally need to be installed -- often as their own servers -- so consider what happens to your monitoring and management capability in the event of an outage. You may be able to deploy the tools as virtual machines (VMs) or on physical servers, so understand the tradeoffs.
For example, you might choose to deploy the virtualization management tools on physical servers to avoid congesting the virtual servers or to ensure access to the tools if trouble arises. Use the tools to help keep core services available, said Scott Gorcester, president of Moose Logic, an IT solutions provider in Bothell, Wash. At the same time, treat the tools as core services that need corresponding levels of availability, he said.
Second, test the virtualization management tools in a lab environment to evaluate them properly and ensure the necessary level of compatibility for your environment. Remember, though, that lab testing isn't always a fully accurate reflection of the actual environment, so a deployment should be introduced in phases where it can be thoroughly tested and tuned in actual use. Once the tools are proven on noncritical systems, you can systematically expand the deployment to other VMs.
Limit your virtualization management tools
Similarly, make the effort to limit the number of virtualization management tools in your environment and keep them as centrally managed as possible. More tools mean more costs, possible patching issues and compatibility problems.
"Try to keep your environment as light on tools as possible," said Bob Plankers, technology consultant and blogger for The Lone Sysadmin. "Every third-party thing you add on to an environment adds complexity."
Standardizing on a single tool or limited toolset will streamline patches and upgrades. Also, remember that some third-party vendors may lag behind in their updates, possibly disrupting monitoring and management activities when a change takes place in the environment.
Finally, you don't need to use every possible feature. Start with the features and functionality that are most essential, and phase in additional features as you gain experience or as business needs change.
"A lot of product suites offer many modules and features, but not all will necessarily be useful or required at first," said Pierre Dorion, data center practice director at Long View Systems Inc., an IT services company in Denver.
Once a monitoring or management tool is deployed, it typically requires little tuning or configuration other than policy definitions or data set selection. In most cases, you will need to minimize the flood of data that monitoring tools can provide.
"It's not that I'm tuning the tool -- I'm tuning the way I use the tool," Gorcester said. "We can get too much information, and suddenly the information that I really care about is hard to find."
The future of virtualization management tools
There is a wide assortment of monitoring and management tools to choose from. Some cover narrow niches, but most harness the wealth of system data normally collected by the hypervisor.
The value of niche virtualization management tools, however, may be waning as better APIs and development tools appear from vendors, allowing organizations to write unique tools that are tailored for their own environments right from the start.
Integration is another important area for development, so expect future monitoring and management tools to integrate with other infrastructure elements like storage and network components. The goal here is to see and control more of the infrastructure through a single dashboard.
Administrators should expect to see more automation features, too, in the future. Automation may assign more resources to a virtualized application or migrate an application to another server when certain utilization thresholds are crossed. This kind of behavior will reduce direct human interaction and rein in costs while making the data center far more adaptive to change.
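The decision logic behind that kind of threshold-driven automation can be sketched in a few lines. This is a simplified illustration under invented assumptions -- the threshold values, action names and per-VM utilization figures are all hypothetical, not drawn from any real product's policy engine:

```python
# Hedged sketch of threshold-driven automation: decide what to do with a VM
# based on its utilization. Thresholds and action names are illustrative only;
# a real system would pull live metrics from the hypervisor's monitoring API.

def plan_action(cpu_pct, mem_pct, high=85.0, critical=95.0):
    """Return an automation decision for one VM."""
    if cpu_pct >= critical or mem_pct >= critical:
        return "migrate"        # move the VM to a less loaded host
    if cpu_pct >= high or mem_pct >= high:
        return "add_resources"  # grant more vCPU/RAM on the current host
    return "none"               # within normal operating range

# Hypothetical utilization samples: VM name -> (cpu %, memory %).
vms = {
    "vm-app01": (97.0, 60.0),
    "vm-db01": (88.0, 70.0),
    "vm-web01": (40.0, 35.0),
}

for name, (cpu, mem) in vms.items():
    print(name, "->", plan_action(cpu, mem))
```

In practice a policy engine would also debounce transient spikes (for example, requiring a threshold to hold for several consecutive samples) before migrating anything, since a live migration is itself a load on the infrastructure.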
Automation should also extend to the user community, allowing employees, customers and partners to set up and provision their own environments or applications, which would minimize the load on administrators who can then focus on more important data center tasks.
About the author
Stephen J. Bigelow, a senior technology writer in the Data Center and Virtualization Group at TechTarget Inc., has more than 15 years of technical writing experience in the PC/technology industry. He holds a bachelor of science in electrical engineering, along with CompTIA A+ and Network+ certifications, and has written hundreds of articles and more than 15 feature books on computer troubleshooting, including Bigelow's PC Hardware Desk Reference and Bigelow's PC Hardware Annoyances. Contact him at firstname.lastname@example.org.