
Scenarios for implementing virtualization

Several implementation scenarios demonstrate best practices for virtualization.

This tip is excerpted from "Best practices in implementing virtualization," Chapter 3 of The Shortcut Guide to Selecting the Right Virtualization Solution, written by Greg Shields and published by Realtimepublishers.com. You can read the entire e-book for free at the link above.

Potential usage scenarios and best practices

Depending on the virtualization solution chosen, several usage scenarios fit well within that solution's architecture. In this section, we'll take a look at five potential ways of using virtualization to gain an accelerated return over physical machines alone: Infrastructure Service Consolidation, Disaster Recovery & Business Continuity, Dynamic Workload Management, Code Development & Testing, and Virtual Desktop Infrastructure. For each of these five scenarios, we'll also talk about best practices associated with their use.

Infrastructure service consolidation
As we discussed in Chapter 1, the average metric for server utilization is around 5% across all industries. This means that servers are, on average, performing useful work 5% of the time. For the other 95% of their operational life cycle, they aren't adding any value to the business. This problem is particularly relevant for machines labeled "infrastructure servers." These typically low-use servers—such as DNS servers, Active Directory (AD) domain controllers, patch management servers, and reporting servers—are necessary for the operation of the environment. They typically cannot collocate their services with others, and they traditionally have the lowest usage of all servers in an environment.

Infrastructure servers can be the lowest hanging fruit for virtualization.

Unlike mission-critical services, such as databases, or services with complex configurations, such as industry-specific software, the services on these servers are typically well understood. The movement of these services to virtualization is usually supported by their vendors. Their migration can be accomplished through a physical-to-virtual (P2V) conversion as easily as through a complete rebuild. They are typically redundant, so the loss of a single instance will not impact the environment as a whole.


Any of the virtualization architectures we've discussed in this guide can readily support these types of infrastructure services. As low-resource services, they typically enjoy an excellent consolidation ratio. Most importantly, they typically do not involve large numbers of third-party applications whose introduction into a virtualization environment may violate support agreements. Infrastructure services are often an easy win for virtualization-based consolidation.
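To make the utilization argument concrete, here is a minimal back-of-the-envelope sketch in Python. The 60% headroom ceiling and the single-host capacity figure are illustrative assumptions, not numbers from the chapter; only the roughly 5% average utilization comes from the text.

```python
# Rough consolidation estimate for low-use infrastructure servers.
# The headroom ceiling is an assumption for illustration only.

HOST_CAPACITY = 1.0     # one physical host, normalized to 100% of its capacity
TARGET_CEILING = 0.60   # assumed planning ceiling: keep average host load under 60%
AVG_UTILIZATION = 0.05  # the ~5% average utilization cited above

# How many similarly loaded infrastructure servers could share one host?
candidates_per_host = int((HOST_CAPACITY * TARGET_CEILING) / AVG_UTILIZATION)
print(f"Rough consolidation ratio: {candidates_per_host}:1")  # prints 12:1
```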

Disaster recovery and business continuity
Our conversation on infrastructure services dovetails perfectly into a discussion on the twin topics of Disaster Recovery and Business Continuity. Both are parts of the same whole—that of ensuring service continuance during and after a disruptive event. But because Disaster Recovery and Business Continuity are subtly different, the types of solutions needed to address them differ as well.

Business Continuity is typically associated with the need for the continuance of a service after the loss of a single service or service element. This involves adding compensating mechanisms to help protect against the situation in which a single server or service goes down. Conversely, Disaster Recovery usually involves the loss of an entire location or data center, often due to a natural or manmade disaster. As you can see, a disaster recovery event involves many more impacted systems than the single instance of a business continuity event. Thus, different tools and techniques are needed to prepare for a disaster recovery event.

As noted, business continuity events typically involve the loss of a single service or service element. When that service goes down, the business is incapable of performing a critical operation. The standard solution for these sorts of events is to provide a redundant server that will fulfill the needs of the business once the primary server goes down. Traditionally, this "clustering" approach was costly to implement and challenging to maintain. Similar hardware and configurations were, and are, critical for clustering solutions so that the cluster can reliably move services from node to node. With virtualization, the need for similar hardware is lessened somewhat. From the virtual machine's perspective, the hardware presented by each host in the virtualization environment is a mirror of the others. Individual virtual hosts can relocate virtual server instances between them as necessary in preparation for a failure event, which further enhances the uptime capabilities of the redundant services. Many virtualization architectures support this "hot migration" capability, moving virtual servers from host to host without a loss of service. All virtualization architectures support "cold migration," moving virtual servers from host to host after the virtual machine has been powered off.
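As a concrete illustration of a hot migration, here is a minimal sketch using the libvirt Python bindings as a representative API; the chapter's own examples use VMware and Parallels products, and the host URIs, VM name, and shared-storage assumption here are purely illustrative.

```python
# A minimal hot-migration sketch using libvirt's Python bindings.
# Assumes shared storage and libvirtd reachable on both hosts over SSH.
import libvirt

SRC_URI = "qemu+ssh://host-a.example.com/system"  # illustrative source host
DST_URI = "qemu+ssh://host-b.example.com/system"  # illustrative destination host
VM_NAME = "infra-dns01"                           # illustrative VM name

src = libvirt.open(SRC_URI)
dst = libvirt.open(DST_URI)
dom = src.lookupByName(VM_NAME)

# VIR_MIGRATE_LIVE keeps the guest running during the transfer; omitting it
# would require the guest to be suspended, closer to a cold migration.
flags = libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PERSIST_DEST
dom.migrate(dst, flags, None, None, 0)

src.close()
dst.close()
```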

The time-to-restore for an individual failed server can be quite different based on the type of virtualization architecture chosen. As an example, with OS Virtualization, the startup process for a failed system can be significantly faster because the host server is typically already in a running state. Resources are already online and ready for use, which speeds the booting process. Contrast this with Hardware Virtualization, where individual machines must complete a full boot cycle after a failure, along with all the associated resource spin-up necessary to complete the boot process.

In the situation of disaster recovery, backups are critical. But backups are only one piece of the puzzle. A disaster event can be catastrophic to a business' livelihood if services are not brought back online within a very short period of time. Mere tape-based backups may not be capable of bringing services back online in a timely enough fashion to support a business' requirements. This is often due to the sheer number of individual files that make up a single computer, the loss or corruption of which—as part of the backup—can compromise the successful restoration of that server. For most organizations, after a disaster, speed is of the essence.



Figure 3.1: With data replication from primary site to backup site, virtual machine files can be replicated elsewhere. A major benefit with virtualization is that a 1:1 ratio of primary to backup machines is not necessarily required.

One solution for speeding this process is to use a tool that replicates the virtual machine backups to an alternative site in real or near-real time. That backup arrives at the backup site in a way that is easily restorable to a replacement host. As Figure 3.1 illustrates, the virtualization hosts in the primary site can automatically transfer their backups over the network to a data store at a backup site. Virtual machines at the alternative site can be rapidly provisioned from this backup data to replacement hardware, thereby greatly speeding the return-to-operations of necessary network services.
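Below is a minimal sketch of the replication step Figure 3.1 describes, using rsync over SSH as a stand-in for the continuous, product-specific replication such tools provide; the directory paths and backup-site hostname are illustrative assumptions.

```python
# Push virtual machine backup files from the primary site to a backup-site
# data store. rsync over SSH stands in for a product's replication engine.
import subprocess

BACKUP_DIR = "/var/backups/vm-exports/"                    # illustrative local path
REPLICA_TARGET = "dr-store.example.com:/srv/vm-replicas/"  # illustrative remote data store

def replicate_backups() -> None:
    """Ship the latest VM backup files to the backup site."""
    subprocess.run(
        ["rsync", "-az", "--partial", BACKUP_DIR, REPLICA_TARGET],
        check=True,  # raise if the transfer fails so it can be retried or alerted on
    )

if __name__ == "__main__":
    replicate_backups()
```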

Enhancing this solution even more is the nature of post-disaster operations. In many industries, fewer users are using network services during a disaster, so a 1:1 ratio of virtualization hosts may not be necessary at the backup site. More virtual machines can be consolidated onto fewer hosts because their aggregate resource needs are lower. Fewer hosts at the backup site mean a lower cost to support that backup site.

Since the beginning of data centers, the cost-effective ability to provide a full disaster recovery facility has been elusive. Early attempts involved the creation of a fully redundant set of physical servers at the alternative site. Configuration changes at the primary site also needed to be made at the redundant site, and human errors over time could misalign configurations, causing the backup site to no longer mirror the production site. Many virtualization solutions include, either natively or as a third-party add-on, the capability to offload backup files to a remote site for rapid re-provisioning after a disaster event. This software-based approach is significantly more cost-effective than previous solutions. When considering the right virtualization solution for your environment, look for one that can support your real or potential needs for recovery in the case of a disaster.

Dynamic workload management
Related to the need for consolidation is a key desire to squeeze more useful processor cycles out of expensive server hardware. This is particularly useful in situations in which the funding or space to support additional hardware does not exist. Dynamic workload management is the virtualization concept that individual virtual machines and their workloads can be spread across multiple host systems. Depending on the virtualization product chosen, these migrations can occur automatically or manually, and with or without an outage to the hosted system. Other products allow for the highly granular assignment of physical resources to individual virtual machines that can scale to fill all the resources that make up the host system. Some require a service outage to support resource changes and some do not.

To support dynamic workload management, hardware virtualization products such as VMware Virtual Infrastructure use a capability called VMotion. VMotion is a process whereby the ownership and processing of virtual machines are transferred from one physical host to another without a corresponding loss of service. The virtual machine need not be powered off to support this capability. Adding this capability to a virtualization solution means that VMware-hosted environments can load balance virtual machines as necessary across multiple hosts. This ensures that resources are best distributed to the servers that need them and that no single physical host is overloaded.
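The decision logic behind that kind of load balancing can be sketched in a few lines. The inventory data, threshold, and migrate() helper below are hypothetical stand-ins; a real implementation would read utilization from, and issue migrations through, the platform's own API.

```python
# A hypothetical load-balancing pass: move one VM from the busiest host to the
# least busy host when the load gap between them is large enough to matter.

hosts = {
    "host-a": {"cpu_load": 0.85, "vms": {"web01": 0.30, "db01": 0.40, "dns01": 0.15}},
    "host-b": {"cpu_load": 0.35, "vms": {"app01": 0.35}},
    "host-c": {"cpu_load": 0.20, "vms": {"mon01": 0.20}},
}

def migrate(vm: str, src: str, dst: str) -> None:
    # Placeholder for a platform-specific hot-migration call.
    print(f"migrating {vm}: {src} -> {dst}")

busiest = max(hosts, key=lambda h: hosts[h]["cpu_load"])
idlest = min(hosts, key=lambda h: hosts[h]["cpu_load"])

# Only act when the imbalance exceeds an (assumed) 30-point threshold.
if hosts[busiest]["cpu_load"] - hosts[idlest]["cpu_load"] > 0.30:
    # Move the smallest VM off the busiest host to limit disruption.
    vm = min(hosts[busiest]["vms"], key=hosts[busiest]["vms"].get)
    migrate(vm, busiest, idlest)
```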

With regard to the virtual machine-specific assignment of resources, VMware Virtual Infrastructure can supply varying levels of resources such as disk, RAM, and processor power to virtual machines. For many of these changes, a reboot of the virtual machine is required in order for that virtual machine to recognize and use the new resources.

OS virtualization tools such as Parallels Virtuozzo Containers have similar tools for "hot migration" of virtual machines from host to host. Depending on the software chosen as well as the virtual machine OS, this process may or may not require a short outage of the system, typically just the time to restart the virtual environment, which is brief because the underlying OS is already running. Because OS Virtualization operates at a different layer than hypervisor-centric Hardware Virtualization, the process for migrating machines is slightly different and does not require a SAN or dedicated storage. However, the end result is the same: the capability to load balance resources across multiple physical hosts.

A critical point to recognize here is the ability with these advanced features to further abstract computers from the hardware they reside upon. Once virtual machine migration capabilities associated with dynamic workload management are introduced into a virtualization environment, the administrator no longer needs to consider each host as an individual computer chassis. Rather, the host can now be considered a "set of resources" of which any virtual machine can make use. This further abstraction of resources makes possible the best and most efficient use of available resources.

Another feature of OS virtualization tools such as Parallels Virtuozzo Containers is the ability to dynamically alter the resources assigned to virtual machines on the fly. Because OS virtualization tools do not use emulated driver sets, they are not limited to predetermined, code-limited quantities of resources such as the number of processors, the amount of hard drive space, or the amount of RAM. Virtual machines hosted on OS Virtualization products can make use of any and all resources that make up the physical host, with no limit on maximum size or use.

Combining automation with the ability to dynamically adjust resources on the fly allows machines to be given just the amount of resources necessary to perform their function. No longer do machines need to be "over spec'ed" with resources they may not even use.
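A minimal sketch of that right-sizing idea follows. The container inventory, the 25% headroom figure, and the set_memory_limit() helper are all hypothetical; the point is only that limits can be recalculated from observed demand and applied without a reboot on OS virtualization platforms.

```python
# Hypothetical "right-sizing" pass: give each container its observed peak
# memory usage plus a little headroom, applied live rather than at reboot.

def set_memory_limit(container: str, limit_mb: int) -> None:
    # Placeholder for the platform-specific call that applies the new limit.
    print(f"{container}: memory limit -> {limit_mb} MB")

containers = {
    "dns01": {"observed_peak_mb": 310},      # illustrative monitoring data
    "report01": {"observed_peak_mb": 1800},
}

HEADROOM = 1.25  # assumed 25% buffer above the observed peak

for name, stats in containers.items():
    new_limit = int(stats["observed_peak_mb"] * HEADROOM)
    set_memory_limit(name, new_limit)
```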

Code development and testing
Virtualization grants some very specific benefits for code development and testing environments as well. These types of environments are typified by high turnover rates, meaning a repeated need to rebuild environments to support new tests or new code versions. Often, due to the typical code development process, multiple simultaneous environments are necessary to support overlapping development, unit test, qualification test, and staging environments. Depending on the testing schedule, multiple environments for each of these stages may be required. In an all-physical situation, these multiple environments can be prohibitively expensive to purchase and maintain.

Virtualization and its management tools provide an easy interface for speeding the re-creation of these environments. The "snapshotting" feature of many virtualization tools freezes a machine's configuration and makes it trivial to roll the configuration back to that snapshot after a test is complete. A test can be run rapidly and repeatedly, with the only requirement being that the tester reverts to the snapshot between each run. The reversion process is simple and automatic and significantly reduces the amount of time necessary to reset the environment in preparation for another activity.
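Here is a minimal sketch of that snapshot-and-revert loop, again using the libvirt Python bindings as a representative API rather than the products named in this chapter; the VM name, snapshot name, and run_test() function are illustrative assumptions.

```python
# Snapshot once, then run a test and revert to the snapshot between runs.
import libvirt

def run_test(iteration: int) -> None:
    # Placeholder for whatever actually exercises the guest under test.
    print(f"running test #{iteration}")

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("qa-web01")  # illustrative test VM

# Freeze the known-good configuration once.
dom.snapshotCreateXML("<domainsnapshot><name>baseline</name></domainsnapshot>", 0)
baseline = dom.snapshotLookupByName("baseline", 0)

for i in range(3):
    run_test(i)
    # Roll the machine back to the frozen configuration between tests.
    dom.revertToSnapshot(baseline, 0)

conn.close()
```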

Moreover, most virtualization solutions provide the capability to rapidly deploy new servers. Depending on the solution chosen, this rapid deployment can support standing up templatized machines alone, or templatized machines along with the necessary software packages.

Tools such as VMware Virtual Infrastructure support the creation and subsequent deployment of virtual machine templates. These templates can be augmented with automation that adds the deployed machines to the necessary network locations and Windows Active Directory (AD) domains.

Other tools such as Parallels Virtuozzo Containers also support machine templates. As Virtuozzo Containers is an OS virtualization tool, it also supports the addition of software packages to deployed templates as necessary. This native addition of software packaging further aids in the rapid provisioning of necessary servers by speeding the resolution of custom requirements. This makes Virtuozzo Containers a great tool to provision large quantities of servers for stress testing environments.

Another valuable benefit here is the reduction in the number of templates required overall. This is possible by removing individual software packages from the core image and packaging them separately. A "blank" template can be created and later augmented with software packages.
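The provisioning pattern described above can be sketched as follows. The deploy_from_template() and add_package() helpers, role names, and package choices are hypothetical stand-ins for a platform's provisioning API, shown only to illustrate how a single blank template plus separately packaged software can produce many differently configured servers.

```python
# Hypothetical template-plus-packages provisioning: one blank template,
# different software packages layered on per role.

def deploy_from_template(template: str, name: str) -> str:
    # Placeholder for a platform-specific "deploy from template" call.
    print(f"deploying {name} from template '{template}'")
    return name

def add_package(machine: str, package: str) -> None:
    # Placeholder for a platform-specific package-installation call.
    print(f"{machine}: installing package '{package}'")

ROLES = {
    "web": ["httpd"],
    "db": ["postgresql"],
}

# Stand up a small stress-test farm: same blank template, different packages.
for role, packages in ROLES.items():
    for i in range(1, 4):
        vm = deploy_from_template("blank-template", f"{role}{i:02d}")
        for pkg in packages:
            add_package(vm, pkg)
```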

Virtual Desktop Infrastructure
Our final usage scenario involves the removal of the physical desktop altogether. The management and maintenance of individual desktops in an organization can be one of the most expensive operational costs for an IT department. When technicians need to visit desktops individually to solve problems, the environment may not be serviced as quickly as necessary due to schedule conflicts or limited technician resources.

The centralization of desktops was first pioneered through remote application tools such as Microsoft Terminal Services and Citrix Presentation Server. These tools, still commonly used today, provide an excellent mechanism for aggregating users and their applications onto server-based hardware. However, in some situations these tools cannot provide the necessary level of user separation. Some applications may not function properly on multi-session servers such as Microsoft Terminal Services or Citrix Presentation Server. Most importantly, users may want or need their own individual desktop that is not shared with other users. In any of these situations, it may be necessary to "host" the user's desktop as a virtual machine.

The process of hosting a desktop is, in concept, relatively simple. The desktop is virtualized and hosted on a virtualization host. Users access their desktops through a network interface (for Microsoft Windows environments, this is most commonly Microsoft's RDP protocol or Citrix's ICA protocol). Quickly depreciating desktop hardware can be replaced with longer-depreciating thin client equipment. Users gain the ability to use their desktops from anywhere with a secured network connection.
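As a small operational example of the connection path described above, the sketch below checks that a hosted desktop's RDP endpoint is reachable before a thin client is pointed at it; the hostname is an illustrative assumption, while 3389 is the standard RDP port.

```python
# Check that a hosted desktop's RDP endpoint answers before handing it to a user.
import socket

def rdp_reachable(host: str, port: int = 3389, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the RDP port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(rdp_reachable("vdi-desktop-042.example.com"))  # illustrative hostname
```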

All virtualization solutions provide the capability of virtualizing desktops. Some also add management components and easy user interfaces for connecting to those desktops. One important difference between the various architectures, however, involves the horizontal scaling of resources. With paravirtualization and Hardware Virtualization tools such as Citrix XenSource and VMware Virtual Infrastructure, all files and other resources that make up an individual desktop machine must be replicated for each machine to be hosted. Thus, if an example desktop consumes 20GB of disk space, hosting 100 desktops will require at least 2TB of online storage to support the environment.

A major benefit associated with the architecture of OS virtualization and tools such as Parallels Virtuozzo Containers is that individual virtual machine files can be shared across multiple virtual machines. Consider the case in which 20GB of disk space is required per desktop to support its hosting, but only 1GB is information that differs between desktops. If we attempt to virtualize 100 desktops, the total disk space necessary is only about 120GB, a difference of more than an order of magnitude. This is made up of the 20GB that is shared amongst the virtual machines plus the gigabyte of differing data for each of the 100 desktops. With the cost for storage increasing geometrically as data size increases, this reduction in required storage space can be a major savings in overall cost.
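The arithmetic behind that comparison is simple enough to check directly; the figures below mirror the example in the text.

```python
# Storage footprint: full per-desktop copies versus a shared base image.
desktops = 100
per_desktop_gb = 20   # full footprint of one desktop image
unique_gb = 1         # data that actually differs between desktops

full_copy_total = desktops * per_desktop_gb            # 2000 GB, roughly 2 TB
shared_total = per_desktop_gb + desktops * unique_gb   # 120 GB

print(f"Full copies: {full_copy_total} GB; shared base plus deltas: {shared_total} GB")
```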

Another benefit with OS Virtualization is one discussed in our previous chapter. Hosting desktops does not eliminate the need to manage their security profile and configuration. With OS Virtualization, patching all hosted desktops means patching only the host. This reduces the total number of potential open points on a network from a security perspective and eases their management burden.




Best practices in implementing virtualization

  Introduction
  Virtual environments are different than physical environments
  Potential usage scenarios and best practices
  Obtaining maximum return on virtualization
  Best practices in systems automation

About the author: Greg Shields is an independent writer, speaker and IT consultant based in Denver. With more than 10 years of experience in information technology, Greg has developed extensive experience in systems administration, engineering and architecture, specializing in Microsoft, Citrix and VMware technologies. He is a contributing editor for both Redmond magazine and Microsoft Certified Professional magazine, authoring two regular columns along with numerous feature articles, webcasts and white papers. He is also a highly sought-after instructor and speaker, teaching system and network troubleshooting curricula for TechMentor Events, a twice-annual IT conference, and producing computer-based training curriculum for CBT Nuggets on numerous topics. Greg is a triple Microsoft Certified Systems Engineer (MCSE) with security specialization and a Certified Citrix Enterprise Administrator (CCEA). He is also the leader of the Realtime Windows Server Community.
