Server colocation can be appealing in virtual environments, because virtualization provides logical isolation between virtual machines (VMs). But does it provide enough insulation between production applications and testing and development workloads?
Some experts argue that testing and development workloads should never run on the same virtualization hosts and clusters as production applications, even when resource-sharing software like VMware Distributed Resource Scheduler (DRS) is in place. DRS helps guarantee CPU resources, but it does not guarantee network or disk I/O resources, so contention for those resources can still harm production VMs. Some of these experts also cite security concerns about server colocation.
Others take a softer stance. They say that colocated servers, when properly configured, can be a safe and more cost-effective use of virtual infrastructure.
In this face-off, Edward Haletky argues against server colocation, and Rick Vanover highlights the benefits of server colocation.
Edward Haletky: Segment testing and development from production to prevent problems
Rick Vanover: Server colocation is safe, cost-effective
Segment testing and development from production to prevent problems
By Edward Haletky, Contributor
As the old saying goes, the best way to test something is 12 inches to the foot. In other words, never scale things down just because you can; testing and development should duplicate your production environment as closely as possible. This is also true for virtual environments.
When you share your production virtualization host clusters with your testing and development VMs, you do two things:
- Scale your testing and development environment to something less than your full production environment (remember, 12 inches to the foot).
- Affect your production environment because testing and development use resources in the form of CPU, memory, disk I/O and perhaps network I/O.
Resource pools can be used to cap testing and development usage, but they are often misconfigured to allow a pool to borrow from its parent -- i.e., the virtualization cluster itself. So when the development process inevitably takes all the resources it can, the production environment is adversely affected. In effect, the testing and development environment can create a "denial of service" against your production environment. Furthermore, resource pools do not yet apply to network and disk, so testing and development VMs can also affect production network VLANs and production VMs with their disk I/O.
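The borrowing problem described above can be illustrated with a toy model. This sketch is a simplified illustration of the idea, not VMware's actual admission-control algorithm: a child pool whose reservation is marked expandable keeps claiming unreserved capacity from the parent cluster until a production workload is refused.

```python
# Toy model of resource-pool reservation borrowing -- a simplified sketch
# for illustration only, not VMware's actual admission-control logic.

class ResourcePool:
    def __init__(self, name, reservation, expandable, parent=None):
        self.name = name
        self.reservation = reservation  # MHz guaranteed to this pool
        self.expandable = expandable    # may borrow from parent's unreserved capacity
        self.parent = parent
        self.allocated = 0              # MHz currently claimed by this pool's VMs

    def unreserved(self):
        return self.reservation - self.allocated

    def claim(self, mhz):
        """Try to reserve `mhz` for a powering-on VM; borrow upward if allowed."""
        if mhz <= self.unreserved():
            self.allocated += mhz
            return True
        if self.expandable and self.parent and \
                self.parent.claim(mhz - max(self.unreserved(), 0)):
            self.allocated += mhz
            return True
        return False

cluster = ResourcePool("cluster", reservation=10000, expandable=False)
dev = ResourcePool("test-dev", reservation=2000, expandable=True, parent=cluster)

# Dev VMs keep powering on; after exhausting dev's own 2000 MHz, each
# claim borrows from the cluster's unreserved capacity.
for _ in range(12):
    dev.claim(1000)

# The cluster is now fully reserved, so a production VM that needs a
# guaranteed 2000 MHz cannot power on -- the "denial of service" effect.
prod = ResourcePool("production", reservation=0, expandable=True, parent=cluster)
print(cluster.unreserved())   # 0
print(prod.claim(2000))       # False
```

With the expandable flag disabled on the dev pool, the third and later claims would fail instead, and production's capacity would remain untouched.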
Server colocation has security implications as well. A testing and development environment often has different security requirements than a production environment does. In some cases, the testing and development environment is managed by a different group of people than the production environment. The test environment administrators may be developers who want or need more access than they would normally have within a production environment, including virtualization host access. Because it is difficult to manage two distinct sets of authentications and authorizations on the same infrastructure, testing and development should have a separate environment.
Segmenting testing and development onto its own cluster leads to a cleaner, more manageable and more controllable set of virtualization hosts. Production will not suffer resource-utilization bleed-through from testing and development, or vice versa.
"Testing and development" is often synonymous with "experimental." Do not let experimental VMs affect your production environment.
About the author:
Edward L. Haletky is the author of VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers. He recently left Hewlett-Packard Co., where he worked on the virtualization, Linux and high-performance computing teams. Haletky owns AstroArch Consulting Inc. and is a champion and moderator for the VMware Communities Forums.
Server colocation is safe, cost-effective
By Rick Vanover
Virtualization architects face many decisions in the journey to building a correct solution for their environments. One of the important choices regards server colocation: Should production workloads run on the same virtual infrastructure as nonproduction workloads, such as testing and development?
In most situations, server colocation is the right way to architect your virtualization environment. And in many respects, you are probably already colocating production and nonproduction workloads on a shared infrastructure.
The clearest example is shared storage, specifically Fibre Channel SAN storage. Most shared-storage installations have not fundamentally changed their provisioning practices to separate production from nonproduction, whether the storage serves physical or virtual workloads. While tiered storage may be in use, the storage infrastructure in many cases provides storage to both production and nonproduction environments.
Another clear example of server colocation is the network architecture for most data centers. The typical data center switch, either as a core switch or an end-of-rack switch, is usually configured to provide trunks or VLANs to ports across many different networks.
Cost is the key factor when deciding on server colocation. If full separation of production from nonproduction virtual workloads is required at every level, virtualization can become cost-prohibitive once new storage and networking infrastructure is factored in. Furthermore, most current virtualization products provide resource separation through object-based access permissions and granular workload-shaping rules.
Rules can even be made to keep production and nonproduction VMs separated at the hypervisor level, avoiding the need to invest in and provision another virtual infrastructure for testing and development workloads.
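The hypervisor-level separation rules mentioned above (for example, DRS-style VM-to-host affinity rules) amount to a placement filter. This toy checker is an illustration of the concept, not any vendor's scheduler: a new VM may only land on hosts whose existing VMs are all in the same tier.

```python
# Toy placement filter -- a sketch of an anti-affinity rule that keeps
# production and nonproduction VMs on disjoint hosts. Host and tier names
# are hypothetical; this is not VMware DRS itself.

def allowed_hosts(vm_tier, hosts):
    """Return hosts whose current VMs all belong to the same tier as the new VM."""
    return [name for name, tiers in hosts.items()
            if all(t == vm_tier for t in tiers)]

# Hypothetical cluster state: host name -> tiers of VMs already placed there.
hosts = {
    "esx01": ["prod", "prod"],
    "esx02": ["nonprod", "nonprod"],
    "esx03": [],                       # empty host: valid for either tier
}

print(allowed_hosts("prod", hosts))     # ['esx01', 'esx03']
print(allowed_hosts("nonprod", hosts))  # ['esx02', 'esx03']
```

Because the rule is enforced at placement time within one shared cluster, it delivers the host-level separation that would otherwise require buying and provisioning a second virtual infrastructure.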
Every environment and requirement set is different. The key to determining whether server colocation is the correct way to proceed is to engage and inform the decision-makers in your organization, including security officers and application owners.
About the author:
Rick Vanover, VCP, MCITP, MCTS, MCSA, is an IT infrastructure manager at Alliance Data in Columbus, Ohio. He is an IT veteran specializing in virtualization, server hardware, operating system support and technology management.
This was first published in February 2010