
Virtual environments are different from physical environments

The greater automation provided by virtualization has to be weighed against additional challenges, such as greater density and complexity. Learn how to contend with these hurdles in this sample chapter from "The Shortcut Guide to Selecting the Right Virtualization Solution."

This tip is excerpted from "Best practices in implementing virtualization," Chapter 3 of The Shortcut Guide to Selecting the Right Virtualization Solution, written by Greg Shields and published by Realtimepublishers.com. You can read the entire e-book for free at the link above.

Virtual environments are different from physical environments

Virtual environments require a different level of care than physical environments do. Many organizations move to virtual environments for their easier management and their potential for greater availability and uptime. What is less readily understood is that virtual environments add risks of their own, risks that must be managed separately from those of purely physical environments. These risks align with three broader characteristics: greater density, greater complexity and greater automation.

Greater Density

First and foremost, most organizations incorporate virtualization into their networks out of a driving need to consolidate physical machines. This consolidation reduces the total number of physical machines in the environment while allowing the same number of network services to operate as before. One concern with the resulting increase in density is that more individual services now depend on the same piece of physical hardware.

Physical hardware fails in much the same way regardless of what runs on it. This is the case whether a single OS instance is installed directly on the physical hardware or a virtualization solution is used to consolidate multiple machine instances. With virtualization, the difference is that a host failure can impact more than one service. An increase in density due to consolidation means a corresponding increase in outage exposure when a host failure occurs.

Consider an all-physical situation in which a single network service occupies a single physical server. In this example, the loss of a single server means the loss of a single service. Next consider an all-virtual situation in which 10 network services—each on an individual virtual machine—are all housed on a virtual host. The loss of a virtual host can mean the loss of 10 network services. 
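To make that exposure concrete, here is a back-of-the-envelope sketch in Python. The failure rate and machine counts are illustrative assumptions, not figures from the chapter:

```python
# Illustrative only: compare service-outage exposure per host failure.
# Assumes a 2% annual chance of hardware failure per physical host.
ANNUAL_HOST_FAILURE_RATE = 0.02  # assumed, for illustration

# All-physical: 10 services on 10 hosts, one service per host.
physical_hosts = 10
services_per_physical_host = 1
expected_physical = (physical_hosts * ANNUAL_HOST_FAILURE_RATE
                     * services_per_physical_host)

# All-virtual: the same 10 services consolidated onto 1 host.
virtual_hosts = 1
services_per_virtual_host = 10
expected_virtual = (virtual_hosts * ANNUAL_HOST_FAILURE_RATE
                    * services_per_virtual_host)

print(f"Expected service outages/year, all-physical: {expected_physical:.2f}")
print(f"Expected service outages/year, all-virtual:  {expected_virtual:.2f}")
# Both come out to 0.20: the expected number of affected services is the
# same, but in the virtual case a single failure takes down all 10 at once,
# so the size of any one bad day is far larger.
```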


When purchasing hardware to support the greater density that comes with virtualization and consolidation, take special care to acquire server hardware that is resilient to physical failure. That resiliency may take the form of physical redundancy or higher-quality server-class components, which are usually found in the high-end server equipment needed to support virtualization.

For example, the Mean Time Between Failures (MTBF) rating for a hard drive is the same no matter what type of data the drive stores. A drive's effective life does shorten, however, when the disk is used more heavily. In virtual environments, the disk demands of multiple simultaneous virtual machines mean more disk activity, and that increased activity can reduce the overall effective life of the disk. Because of this heavier use, server redundancy features are critically important within the virtualization environment.

Most high-end server-class equipment is also equipped with on-board hardware management and notification capabilities that can alert a central network management system when failure, or even pre-failure, events occur. In many cases, these capabilities are already part of the hardware but go unused. Enabling them helps ensure better service resiliency in the environment. In the case of pre-failure warnings, the notification gives administrators time to relocate virtual machines to healthy equipment before the failure occurs.
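As a sketch of how such a pre-failure warning might be wired to an automated response, consider the following. The `hypervisor_api` module and its `list_vms` and `migrate` calls are hypothetical placeholders, not any vendor's actual interface; a real deployment would use the virtualization solution's published API:

```python
# Sketch: evacuate a host when its hardware agent reports a pre-failure
# event. hypervisor_api and all of its calls are hypothetical placeholders.
import hypervisor_api


def evacuate_host(failing_host: str, healthy_hosts: list[str]) -> None:
    """Move every VM off a host that has raised a pre-failure warning."""
    vms = hypervisor_api.list_vms(host=failing_host)
    for i, vm in enumerate(vms):
        # Spread the displaced VMs round-robin across the healthy hosts
        # so no single target absorbs the whole load.
        target = healthy_hosts[i % len(healthy_hosts)]
        hypervisor_api.migrate(vm, destination=target)
        print(f"Migrated {vm} from {failing_host} to {target}")


# Example: triggered by the management system's pre-failure alert.
evacuate_host("host-03", ["host-01", "host-02"])
```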

Greater Complexity

Many all-physical computing environments do not follow the practice of service isolation, often because of financial constraints. An IT organization that practices service isolation uses a single computer to support a single network service; adding a new network service means adding a new server. Effective service isolation means that the loss of a server results in the loss of only a single service. It also simplifies troubleshooting by making it easier to tie a problem to a particular server.

Service isolation is operationally expensive in physical environments, especially for organizations with limited funding; the move to virtualization lowers many of these cost barriers. Creating a new server and its associated service becomes as easy as a copy and paste, so replicating services takes little time and effort, and software licensing tends to be financially friendly toward virtualization-hosted OSs.

There is one problem. An unintended consequence of this ease can be a perfect storm of server and service expansion. An environment that moves to virtualization can experience massive bloat in its total server count even as the number of physical machines stays the same.

This increase in total OS instances tends to increase total environment complexity. More instances mean more machines to patch, more applications to manage and monitor, and more services to maintain. Some virtualization solutions, such as those based on OS virtualization, provide native tools to help manage the virtual environment's OSs and their installed applications; others do not natively support these features. The management of an increasingly complex environment must be handled carefully so that diseconomies of scale do not appear.

Additionally, the ease of creating new virtual machines introduces new security issues into the environment. The horizontal growth in the total number of machines can drive the creation of machines that are only rarely used. With some types of virtualization architectures, these rarely used computers pose an additional risk to the operating environment because of how their configuration is maintained.

Let's think for a minute about this problem. If a virtual machine is created, used for a period of time and then shelved for future use, that machine is no longer continuously operational. Typical systems and patch management tools often lack the capability to power on the machine, patch it and power it back down. Because these machines sit powered off through the typical patch cycle, they can "miss" critical patches, which increases the risk that an exploit targeting an already-fixed vulnerability can still infiltrate them when they return to service.
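One workaround is a scheduled maintenance window that briefly wakes shelved machines for patching. Below is a minimal sketch of that power-on, patch, power-off cycle, again against the same hypothetical `hypervisor_api` placeholder (the `run_patch_cycle` call stands in for whatever patch management tool the environment actually uses):

```python
# Sketch: wake shelved VMs, patch them and shut them back down.
# hypervisor_api and all of its calls are hypothetical placeholders.
import hypervisor_api


def patch_dormant_vms(host: str) -> None:
    """Bring powered-off VMs up to date during a maintenance window."""
    for vm in hypervisor_api.list_vms(host=host, powered_on=False):
        hypervisor_api.power_on(vm)
        try:
            # Hand the now-running guest to the patch management tool.
            hypervisor_api.run_patch_cycle(vm)
        finally:
            # Return the machine to its shelved state even if patching
            # fails, so a bad run doesn't leave stray VMs consuming resources.
            hypervisor_api.power_off(vm)


patch_dormant_vms("host-01")
```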

In the case of OS virtualization, the files of a powered-off virtual computer can remain linked to those on the virtual host. That virtual host typically stays powered on, which means its configuration is known and up to date. Its resident virtual machines, even those that are powered off, are therefore more likely to power on with a correct and fully patched configuration.

The problems of powered-down equipment are only now being recognized. These problems are not specific to virtual machines, but the nature of virtual machines exacerbates them. Products that help close this security hole are only now becoming available.

Greater Automation

One very positive characteristic of virtualized environments is their capacity for enhanced levels of automation. This automation is more extensible than what has traditionally been possible with physical hardware: whereas individual OSs already include rich tools for programmatically modifying and managing configurations, the tools for managing physical hardware have traditionally been highly device-specific.

With virtual environments, machine-specific automation tasks such as powering on, powering off, backup and bare-metal restoration work the same way for every virtual machine. Using different physical hardware does not require different management tools to support this automation.

In addition, most virtualization solutions expose scripting and programmatic control through published APIs, which adds management flexibility. These APIs are built into the virtualization framework and can be used no matter where the virtual machines are housed. A good scripter or coder can use these APIs to easily create custom interfaces.

For example, if you need to create scripts for mass rebooting, backing up at the virtual machine level or even mass restoration, the process is much easier than with physical machines alone.
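A mass reboot might look like the sketch below. As before, `hypervisor_api` is a hypothetical stand-in for a real solution's published API, and the batch size and settle time are assumed values:

```python
# Sketch: reboot every VM in the environment in small batches so that
# dependent services are never all down at once. hypervisor_api is a
# hypothetical placeholder for a virtualization solution's published API.
import time

import hypervisor_api

BATCH_SIZE = 5
SETTLE_SECONDS = 120  # assumed time for a batch to come back up

vms = hypervisor_api.list_vms()  # every VM, regardless of physical host
for start in range(0, len(vms), BATCH_SIZE):
    batch = vms[start:start + BATCH_SIZE]
    for vm in batch:
        hypervisor_api.reboot(vm)
    time.sleep(SETTLE_SECONDS)  # let the batch settle before the next one

print(f"Rebooted {len(vms)} virtual machines")
```

The same loop structure works for VM-level backup or mass restoration; only the API call inside the batch changes, which is exactly the uniformity the chapter describes.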

We'll talk more about some potential automation benefits later on in this chapter.


Best practices in implementing virtualization 
  Introduction
  Virtual environments are different from physical environments
  Potential usage scenarios and best practices
  Obtaining maximum return on virtualization 
  Best practices in systems automation
 

About the author: Greg Shields is an independent writer, speaker and IT consultant based in Denver. With more than 10 years of experience in information technology, Greg has developed extensive experience in systems administration, engineering and architecture, specializing in Microsoft, Citrix and VMware technologies. He is a contributing editor for both Redmond magazine and Microsoft Certified Professional magazine, authoring two regular columns along with numerous feature articles, webcasts and white papers. He is also a highly sought-after instructor and speaker, teaching system and network troubleshooting curricula for TechMentor Events, a twice-annual IT conference, and producing computer-based training curriculum for CBT Nuggets on numerous topics. Greg is a triple Microsoft Certified Systems Engineer (MCSE) with security specialization and a Certified Citrix Enterprise Administrator (CCEA). He is also the leader of the Realtime Windows Server Community.

This was last published in May 2008
