As IT professionals and developers, we always need access to complex technologies -- directory services, email services, databases, Web servers and so on -- for a variety of reasons, but mostly for testing and development. To do this, we run laboratories -- IT labs that include all of the technologies required to support our efforts. In addition, many organizations provide and manage their own training centers, which are similar in nature to these testing and development laboratories. Unfortunately, these environments are rarely treated the same way.
Training environments often have up-to-date technologies because they need to present trainees with everything they'll use in production. Development environments must also have the most current technologies because developers need the latest and greatest to produce new code. Test environments, however, are often built from leftover computers that belong on a scrap heap. In the end, organizations are left with several official and unofficial environments jumbled together with no particular structure, just to get systems up and running.
In an ideal situation, organizations would connect these systems in a unified infrastructure that supports all services. But that is often difficult to do. Development projects, in particular, have their own budgets for hardware acquisitions. These projects can contain multiple environment layers -- for unit testing and for functional, integrated, staging and pilot or production testing -- and each layer can have its own hardware. Yet each system typically runs at a very low utilization ratio and sits idle most of the time.
Try to set up two development projects on the same machines and you'll quickly learn that their conflicting configurations make it impractical. Many training classrooms also house far-from-ideal setups: sometimes each classroom uses a different system, and classrooms often rely on complex procedures to reset each system after every class so that all students start at the beginning of the training process.
At a time when most CIOs state that their goal is a flexible and efficient infrastructure, these complexities cannot continue. Unfortunately, IT infrastructures, as well as mentalities, are slow to change. Worse yet, IT budgets are largely consumed by fixed expenses even as business needs keep changing.
Despite these constraints, many businesses have moved forward with system consolidation projects to increase resource utilization and decrease costs and physical server footprints. Those organizations realize the gains through server virtualization projects -- consolidating physical servers into virtual machines (VMs) running on host servers.
While organizations tend to move forward with these projects for production environments, few realize how useful and successful these physical consolidation projects can be for testing, training, development and other volatile IT environments. This article highlights some of the benefits of consolidating your test and development environments through virtualization, including cost savings, accelerated time to deployment and system consistency throughout your environment.
Understanding virtualization benefits
Using physical consolidation practices for volatile IT environments makes sense, especially at a time when companies are tightening their collective belts because of economic hardships. If production servers run at only 10% to 15% utilization, then other environments likely run even lower. In most cases, volatile IT environments, such as test or development systems, see high utilization only during stress testing, when you push a service or application to its limits to see how it performs.
In such cases, systems can run at upward of 50% utilization. Stress testing, however, occurs only at specific times during the staging or development process and can be scheduled. The rest of the time, machines tend to run at less than 5% utilization.
Virtualization is the means to physical server consolidation, and it offers these types of environments the following additional benefits:
- You can virtualize more than 95% of the servers. Only specific machines with custom hardware dependencies will defy virtualization and require a dedicated physical system.
- Server virtualization is designed to support maximum resource utilization. Some virtualization platforms are designed to share CPU capabilities and memory between VMs.
- You can reduce the cost of these environments because several available virtualization technologies are free.
- Server virtualization reduces costs because fewer physical machines are required to support multiple environments.
- The same physical machines can support several different volatile IT environments. You can save VMs as templates and deploy them as needed to build almost any environment.
- Virtual machine deployment times are significantly shorter than physical machine deployments, which saves resources and shortens deployment timelines and overall project schedules.
- When systems administrators build the source VMs or templates used to generate each environment, they can guarantee the consistency of each environment. Too often, developers build their own systems and take shortcuts because of time constraints. Then, when machines do not match production configurations, inconsistent test results may occur.
- VMs are simply a set of files in a folder, making them easier to back up and restore. Virtual environments use considerably less physical space than conventional laboratories. Users can connect to VMs in any environment through their own workstation using standard network connections.
- VMs support snapshots -- point-in-time images of a virtual machine's state that can be taken whether the machine is running or not. Each time users perform a task and want to save the results, they can take a snapshot. If a subsequent task doesn't work, they can simply return to the earlier snapshot.
This saves enormous amounts of time because the system doesn't have to be rebuilt (Figure 1). Snapshots also support training environments: you only need to return to the snapshot to recover the original state of any training machine.
- VMs are completely isolated from one another because of the host server's network virtualization capabilities. In addition, users will not know they share host server resources because their environments will be completely contained.
- Virtualization technologies allow self-service VM provisioning and management, which frees systems administrators to perform other vital tasks on the network.
- Administrators can control how many resources are available to a VM. You can throttle certain resources to ensure that VMs don't consume all of the physical host's resources. This ensures that performance levels remain acceptable for each user sharing physical resources.
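Several of the benefits above -- template-based deployment, snapshots and resource throttling -- map directly onto hypervisor command lines. The following is a minimal sketch, assuming a VirtualBox host (one of the free virtualization technologies mentioned above); the VM name "test-web01", the template name and the snapshot names are all hypothetical, and the DRY=echo prefix makes this a dry run that prints the commands instead of executing them:

```shell
#!/bin/sh
# Sketch only: VM, template and snapshot names are hypothetical examples.
# DRY=echo prints each command; set DRY= to run against a real VirtualBox host.
DRY=echo
VM="test-web01"

# Deploy a fresh environment from a saved template VM.
$DRY VBoxManage clonevm "template-w2k8" --name "$VM" --register

# Throttle the VM so it cannot consume the whole host:
# 2 GB of RAM and at most 50% of a CPU.
$DRY VBoxManage modifyvm "$VM" --memory 2048 --cpuexecutioncap 50

# Take a point-in-time snapshot before a risky task...
$DRY VBoxManage snapshot "$VM" take "before-stress-test"

# ...and roll back to it if the task leaves the system unusable.
$DRY VBoxManage snapshot "$VM" restore "before-stress-test"
```

Equivalent commands exist for other hypervisors (virsh on a Linux/KVM host, for instance); the point is that an entire lab environment can be rebuilt or reset from a short script rather than a manual rebuild.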
Organizations that want to save costs, maintain consistency and reduce their carbon footprint should turn to virtualization when building laboratories.
Danielle Ruest and Nelson Ruest are IT experts focused on continuous service availability and infrastructure optimization. They are authors of multiple books, including Virtualization: A Beginner's Guide and Windows Server 2008, The Complete Reference for McGraw-Hill Osborne. Contact them at [email protected]