
Networking and availability considerations with virtual servers

Giving some servers priority over others is important for high availability in a virtual environment, as are network redundancy and, where necessary, isolation.

Determining how server virtualization will affect your network administration and automation procedures is not a simple task. Whether you're designing an enterprise-wide virtualization deployment or limiting adoption to select systems, a different approach is required from almost every perspective of infrastructure management. In this tip, I explore some issues frequently encountered in managing network resources and automation strategies in virtualized server environments.

Network redundancy and isolation is critical

General system design and implementation architecture has long called for redundancy in network connections and dedicated or isolated interfaces. For virtual systems, a slightly different approach is required for successful implementations.

Historically, network redundancy in a server meant having two network interfaces available and a piece of software to team the interfaces or provide a load-balancing configuration. For an implementation within the virtual space, this technique can be modified and expanded in a number of ways to provide the best availability and performance for the systems being hosted. For example, consider an environment that has six virtual local area networks (VLANs) that will be connected to the virtual environment. Of these six VLANs, one has the most intense traffic load by far.

One strategy for hosting virtual systems in this scenario would be to design the virtual host system with connectivity that has two interfaces directly and exclusively connected to the busy VLAN and two interfaces connected to the other five VLANs. For connectivity to multiple networks, the use of IEEE 802.1Q frames or VLAN tagging permits access to many VLANs through a single connection. For redundant design in virtual implementations, the better practice would be to have at least two interfaces made available for each class of traffic.
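The redundancy guideline above can be expressed as a quick check. This is a minimal sketch with hypothetical VLAN names and interface names (they are not from the article): it simply verifies that every class of traffic has at least two physical interfaces behind it.

```python
# Hypothetical sketch: verify each traffic class has a redundant uplink pair.
# VLAN/NIC names below are illustrative assumptions, not from the article.
uplinks = {
    "busy-vlan":    ["vmnic0", "vmnic1"],   # dedicated pair for the heavy VLAN
    "tagged-trunk": ["vmnic2", "vmnic3"],   # 802.1Q trunk carrying the other five VLANs
}

def non_redundant(mapping):
    """Return the traffic classes that lack at least two physical interfaces."""
    return [name for name, nics in mapping.items() if len(nics) < 2]

print(non_redundant(uplinks))  # an empty list means every class is redundant
```

A check like this is trivial, but running it against an inventory export catches the single-NIC traffic class that tends to creep in as hosts are added.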

VLAN tagging is a relatively new practice for the connection between a server and a switch, though networking administrators have long used it routinely when interconnecting switches. The shift likely occurred because most virtualization platforms introduce the concept of the virtual switch, which effectively turns the host's uplink into a switch-to-switch trunk and enables VLAN tagging for individual virtual machines.

Certain network roles are best suited to an isolated connection. This includes management interfaces and any services that are not related to guest virtual machines. One good example is the VMkernel interface for VMware ESX. When used for dynamic migrations, the VMkernel interface consumes a lot of network bandwidth transferring memory and processor state to another host system. If a number of virtual machines need to be migrated at one time, the isolated interface is a good design element because that traffic is kept away from the virtual machines. This holds only if the physical network switches provide matching isolation, however.
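To see why the migration interface deserves dedicated bandwidth, a back-of-the-envelope estimate helps. The numbers below (memory sizes, link speed, link efficiency) are illustrative assumptions, not figures from the article:

```python
# Rough estimate of how long a live migration's memory copy occupies the link.
# All numbers here are illustrative assumptions.
BYTES_PER_GBIT = 10**9 / 8            # bytes per second on a 1 Gb link

def migration_seconds(memory_gb, link_gbit=1.0, efficiency=0.7):
    """Approximate time to copy a VM's memory state over the VMkernel link."""
    memory_bytes = memory_gb * 10**9
    return memory_bytes / (link_gbit * BYTES_PER_GBIT * efficiency)

# Four 4 GB virtual machines migrating back-to-back over one 1 Gb link:
total = 4 * migration_seconds(4)
print(round(total))                   # roughly three minutes of saturated link
```

Minutes of a saturated gigabit link is exactly the kind of burst you do not want sharing an interface with guest traffic.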

The popularity of virtualization and the corresponding networking requirements are now being reflected in server hardware. For example, the Dell PowerEdge R900 has four embedded network interfaces, giving virtual installations more connectivity options. This increased functionality is welcome, as most virtualization applications require the additional connectivity.

Adaptive automation practices for high availability

Virtualization, in a unique way, has provided administrators with a level of availability and automation that may not have been available in a non-virtualized environment. This is especially true of the non-critical application server. In the physical server world, it would be a tough sell to provide high availability to a single application that lacks the business requirements to justify the costs. However, virtualized environments can be architected in a highly available fashion from inception. This is enabled by shared storage, networking availability and a virtual host environment configured for the technologies that provide the desired functionality.

In the virtual world, one of the biggest challenges an administrator will face is resource management. For implementations involving a large number of virtual machines, the importance of planning resource requirements cannot be stressed enough.

The current big players for enterprise virtualization, VMware ESX 3 and Citrix XenServer Enterprise 4, both offer management options that can control the resources virtual machines are allowed to access. These can be configured to automate how the virtual environment handles the situation where a series of virtual machines has consumed most of the resources on a particular host system. When another host system has more available resources, automatically migrating virtual machines to that more capable host will best manage resources for most environments.
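The rebalancing rule described above can be sketched in a few lines. This is a hypothetical illustration of the decision logic only, not either vendor's implementation; host names, capacities and the threshold are assumptions:

```python
# Hypothetical sketch of the rebalancing decision described above: when a
# host's CPU consumption crosses a threshold, pick the host with the most
# headroom as a migration target. All names and numbers are illustrative.
hosts = {
    "esx-a": {"capacity_ghz": 16.0, "used_ghz": 14.5},
    "esx-b": {"capacity_ghz": 16.0, "used_ghz": 6.0},
}

def pick_target(hosts, overloaded, threshold=0.85):
    """Return the best migration target, or None if no action is needed."""
    src = hosts[overloaded]
    if src["used_ghz"] / src["capacity_ghz"] <= threshold:
        return None
    headroom = {name: h["capacity_ghz"] - h["used_ghz"]
                for name, h in hosts.items() if name != overloaded}
    return max(headroom, key=headroom.get)

print(pick_target(hosts, "esx-a"))  # -> esx-b
```

The platform products layer far more policy on top of this (affinity rules, priorities, admission control), but the core decision is the same headroom comparison.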

A four-tiered approach

The other side of resource management is planning what resources will be required for the virtual environment. A strategy that has worked well in my environment is a four-tiered approach, with bronze, silver, gold and platinum classifications. These can be further broken down into categories such as live or development instances and, if needed, collections for certain business groups. To maximize the efficient use of virtual host resources in this model, the platinum level is handled on a case-by-case basis. This may be a situation where the virtual system has nearly the same virtual hardware provisioning as a newly purchased general-purpose server, but the system is placed in the virtual environment for the high availability and improved management tools.
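One way to make the tiering concrete is to encode each tier as a resource policy. The share and reservation values below are illustrative assumptions of my own, not figures prescribed by the model:

```python
# One possible encoding of the four-tier model. The share and reservation
# values are illustrative assumptions, not prescribed figures.
TIERS = {
    "bronze":   {"cpu_shares": 500,  "memory_reservation_pct": 0},
    "silver":   {"cpu_shares": 1000, "memory_reservation_pct": 25},
    "gold":     {"cpu_shares": 2000, "memory_reservation_pct": 50},
    "platinum": {"cpu_shares": 4000, "memory_reservation_pct": 100},
}

def provision(vm_name, tier, environment="live"):
    """Return the resource policy a VM of this tier and environment gets."""
    policy = dict(TIERS[tier])
    if environment == "development":
        policy["cpu_shares"] //= 2   # development instances yield to live ones
    policy["vm"] = vm_name
    return policy

print(provision("app01", "gold", "development"))
```

Keeping the policy in one table like this makes the live/development split and any business-group variations easy to audit.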

For the virtual environment to function as expected, there are a number of forward-looking factors to consider. While we cannot predict the future and the computing needs of the organization exactly, we can very specifically identify what functionality and levels of service would be provided by the virtual environment.

One of the most valuable features of virtual environments for the IT administrator is that host hardware can be taken offline as long as there are adequate resources among the remaining hosts to sustain the load of the virtual machines. To facilitate this, an N+1 model for host systems is generally used. In this scenario, the environment can withstand the loss of one host while service levels continue to be met.

A host system can go offline for a number of reasons, including a rolling upgrade of virtualization software, adding or testing connectivity to a new network, diagnosing an underlying hardware issue or adding connectivity to a new storage system. All of these tasks are more safely performed when no virtual machines are assigned to the host system. In my experience as a virtualization administrator, I find that frequently verifying the N+1 architecture is beneficial: all virtual machines are confirmed as migrate-ready, the process stays familiar to our staff and we know it works. The N+1 model has worked well for me at around 10 virtual host systems, but larger implementations may need a different approach.
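The N+1 verification itself reduces to a capacity comparison. This is a minimal sketch assuming CPU is the binding resource and using illustrative host sizes; a real check would also cover memory, storage and network:

```python
# Minimal N+1 capacity check. Host sizes and loads are illustrative; a real
# verification would also consider memory, storage paths and networking.
def n_plus_one_ok(host_capacities_ghz, total_vm_load_ghz):
    """True if the cluster can lose its largest host and still carry the load."""
    survivors = sum(host_capacities_ghz) - max(host_capacities_ghz)
    return total_vm_load_ghz <= survivors

cluster = [16.0] * 10                  # ten identical hosts, 16 GHz each
print(n_plus_one_ok(cluster, 130.0))   # nine survivors give 144 GHz -> True
print(n_plus_one_ok(cluster, 150.0))   # exceeds 144 GHz -> False
```

Running a check like this on a schedule, alongside the periodic live-migration drills described above, keeps the N+1 promise honest as virtual machines accumulate.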

Virtualization planning: Never too much

For a virtual implementation, planning is the most important step. Plan and then plan some more. For networking and automation configuration, the key is to define the expectations for the technology and set the parameters within which the virtual environment is to perform. If you need a service level agreement (SLA), you'll find advice in this tip on how to proceed with a formal SLA. Without proper planning, the virtual environment can quickly become an unmanageable problem rather than a powerful, enabling environment.

Rick Vanover is an MCSA-certified system administrator for Belron US in Columbus, Ohio. Rick has been working with information technology for over 10 years and with virtualization technologies for over seven years.

This was last published in February 2008
