
Making servers fail over to a different name space

A reader wanted to know some details about using VMware to have servers fail over to a different name space, such as onto a different VLAN in a different building. Expert Andrew Kutz replied that in short, the answer is yes, but it requires some fancy maneuvering which VMware does not yet support.

Using VMware, I'm told you can make the servers automatically fail over to another physical machine if it detects a server error. Can this work over a different name space (such as failing over to a physical box on a different VLAN at a different building)?

Thank you for the question! The process you are referring to is known as VMware High Availability (HA), one of the new features of VMware Infrastructure 3 (VI3). As you were told, VMware HA enables virtual machines (VMs) running on ESX 3 hosts in a VMware HA enabled cluster to fail over automatically to other ESX 3 hosts in the same cluster if the original ESX 3 host fails or becomes isolated.

So the question then becomes, "Can a VMware HA enabled cluster contain ESX 3 hosts in separate buildings and/or separate VLANs?"

The VMware cluster prerequisites state that "In general, DRS and HA work best if the virtual machines meet VMotion requirements..." (http://pubs.vmware.com/vi3/resmgmt/vc_create_cluster.7.2.html). Of the VMotion requirements, there are two that present a problem for separate buildings and separate VLANs. VMotion requires that participating hosts use shared storage -- typically this is a storage area network (SAN) attached to the hosts by means of a fibre connection. VMotion also requires a private gigabit ethernet migration network between all participating hosts (http://pubs.vmware.com/vi3/resmgmt/vc_create_cluster.7.4.html).

First I will examine the shared storage requirement. If two ESX 3 host servers are in separate buildings, it is cumbersome at best to connect them to the same SAN. This could be accomplished with a long-distance fibre connection, but doing so negates the purpose of separating the two ESX 3 host servers in the first place: if the building that contains the first ESX 3 host server and the SAN loses connectivity or power, the remote ESX 3 host server is effectively useless since it cannot access the data on the SAN. The solution, then, is for each ESX 3 host server to be connected to a separate, local SAN.

However, the VMotion requirement clearly states that all participating hosts must use shared storage -- that is, have access to the same data -- which seems to rule out separate, local SANs. This requirement can nevertheless be fulfilled with an EMC product called MirrorView. MirrorView/Synchronous keeps two SANs in separate locations synchronized in real time, so even though the two ESX 3 host servers are connected to separate SANs, both servers have access to the same data. That leaves the networking requirement.
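To make the synchronous replication idea concrete, here is a rough sketch (invented names; not EMC MirrorView code) of the key property: a write is acknowledged only after both the local and the remote array have committed it, so both sites always hold identical data.

```python
# Hypothetical sketch of synchronous mirroring semantics -- NOT MirrorView
# code. The point: the write returns only after BOTH arrays commit, so the
# two sites never diverge.

class Array:
    """A trivial stand-in for a storage array: block number -> data."""
    def __init__(self):
        self.blocks = {}

    def commit(self, block, data):
        self.blocks[block] = data
        return True  # a real array could fail or time out here


class SyncMirror:
    def __init__(self, local, remote):
        self.local = local
        self.remote = remote

    def write(self, block, data):
        # Acknowledge the write only if both commits succeed.
        ok = self.local.commit(block, data) and self.remote.commit(block, data)
        if not ok:
            raise IOError("replication failed; write not acknowledged")
        return ok


site_a, site_b = Array(), Array()
mirror = SyncMirror(site_a, site_b)
mirror.write(7, b"vm-disk-data")
print(site_a.blocks == site_b.blocks)  # True: both sites see the same data
```

In this model, if the remote building lost power, the write would fail rather than silently diverge; that trade-off (latency and availability for consistency) is exactly what distinguishes MirrorView/Synchronous from asynchronous replication.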

VMware reports that VMotion requires that all participating hosts be connected to a private gigabit migration network. While this is a very strong suggestion, it is not a requirement. There are, however, two very good technical reasons for VMware stating that VMotion, and subsequently HA, requires a private gigabit network.

The first reason relates to how VMotion works. VMotion is the process by which the entire state of a running VM is moved to a new host over the wire. The VMotion process could potentially fail or, worse, result in a corrupted VM on the target host if any of the packets containing the state of the running VM were corrupted or did not arrive properly at their destination. Packet loss and corruption typically occur due to heavy network loads, limited bandwidth, header corruption, and hardware problems. This is why VMware requires a private gigabit network for VMotion: it needs the bandwidth and the network segregation to function properly. This is not to say VMotion cannot succeed across VLANs on a public network -- it can -- but the chances of success are diminished greatly because of all the unknown factors.
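To illustrate the corruption concern, here is a hedged sketch (invented names; not the actual VMotion wire protocol) of a receiver verifying each chunk of transferred state with a checksum and aborting the migration on a mismatch, rather than producing a corrupted VM on the target host.

```python
import zlib

# Hypothetical sketch, NOT the real VMotion protocol: each chunk of VM
# state travels with a CRC32 checksum, and the receiver aborts the
# migration if any chunk fails verification.

def send_chunks(state, chunk_size=4):
    """Split VM state into (checksum, chunk) pairs for transmission."""
    return [(zlib.crc32(state[i:i + chunk_size]), state[i:i + chunk_size])
            for i in range(0, len(state), chunk_size)]

def receive_chunks(frames):
    """Reassemble state, refusing any chunk whose checksum does not match."""
    state = b""
    for checksum, chunk in frames:
        if zlib.crc32(chunk) != checksum:
            raise ValueError("corrupt chunk; aborting migration")
        state += chunk
    return state

frames = send_chunks(b"running-vm-memory-image")
assert receive_chunks(frames) == b"running-vm-memory-image"

# Flip one byte in transit: the receiver detects the damage instead of
# silently starting a corrupted VM on the target host.
checksum, chunk = frames[0]
frames[0] = (checksum, b"X" + chunk[1:])
try:
    receive_chunks(frames)
except ValueError as err:
    print(err)  # corrupt chunk; aborting migration
```

On a congested public network, retransmissions triggered by exactly this kind of failure are what stretch out or abort a migration; a quiet gigabit segment keeps them rare.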

The second reason relates to HA itself. VMware HA actually covers two scenarios. The first is a failed host -- in this case, the other participating hosts should be ready to assume control of the failed host's VMs. The second is a host that has not failed but has been isolated from the cluster due to loss of network connectivity -- in this case, the isolated host must recognize that it is isolated and shut down its running VMs so that the other cluster members can assume control of them. Each of these scenarios depends on reliable network connectivity to determine when other hosts are down or when a host is isolated, and a private, or rather dedicated, gigabit network is conducive to this reliability. Like VMotion, HA can succeed on a public network, but the chances of success are lessened by the variables involved.
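The isolation case can be sketched as a heartbeat check (the names and threshold below are invented for illustration; the real VMware HA agent is more involved): a host that has missed several consecutive heartbeats from its peers concludes it is isolated and releases its VMs so the surviving hosts can restart them.

```python
# Hypothetical sketch of heartbeat-based isolation detection -- not the
# actual VMware HA agent. A host that misses ISOLATION_THRESHOLD
# consecutive heartbeats declares itself isolated and powers off its VMs
# so the rest of the cluster can take them over.

ISOLATION_THRESHOLD = 3  # consecutive missed heartbeats before acting

def is_isolated(heartbeat_log):
    """heartbeat_log: list of booleans, True = heartbeat received that tick."""
    recent = heartbeat_log[-ISOLATION_THRESHOLD:]
    return len(recent) == ISOLATION_THRESHOLD and not any(recent)

def isolation_response(heartbeat_log, running_vms):
    """Return (vms_kept_running, vms_released_to_the_cluster)."""
    if is_isolated(heartbeat_log):
        # Shut down local VMs so surviving hosts can assume control of them.
        return [], list(running_vms)
    return list(running_vms), []

still_running, released = isolation_response(
    [True, True, False, False, False], ["web01", "db01"])
print(released)  # ['web01', 'db01'] -- handed off to the cluster
```

Notice how sensitive this logic is to network jitter: on a congested shared network, a few delayed heartbeats could make a perfectly healthy host shut down its VMs, which is precisely why a dedicated network makes HA's decisions trustworthy.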

While a private and dedicated gigabit network is one way to provide fat, exclusive bandwidth, it is not the only way. You could easily set up a fast network devoid of congesting traffic across building and VLAN boundaries with a dedicated ethernet-to-fibre-to-ethernet link between the buildings and a static route between the VLANs. Although this is not exactly what VMware requires, it should work.
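As a small illustration of the routing piece (addresses invented for the sketch), the static route is only needed when a peer host sits on the other building's VLAN subnet; Python's standard `ipaddress` module makes the check easy to see:

```python
import ipaddress

# Hypothetical addressing for the two buildings (invented for this sketch):
# the migration VLAN in building A is 10.0.10.0/24, and in building B it is
# 10.0.20.0/24. Traffic needs the static route only when the peer lies
# outside the local subnet.

vlan_a = ipaddress.ip_network("10.0.10.0/24")  # building A migration VLAN
vlan_b = ipaddress.ip_network("10.0.20.0/24")  # building B migration VLAN

def needs_static_route(peer_ip, local_net):
    """True when the peer is outside the local subnet, so packets must
    traverse the inter-building link via the static route."""
    return ipaddress.ip_address(peer_ip) not in local_net

print(needs_static_route("10.0.10.6", vlan_a))  # False: same VLAN, direct
print(needs_static_route("10.0.20.5", vlan_a))  # True: crosses the fibre link
```

The dedicated ethernet-to-fibre-to-ethernet link then carries exactly that routed traffic, keeping the migration path free of unrelated congestion even though it spans two subnets.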

So, the answer to the question is technically yes, what you want to do is possible with a little eclectic maneuvering. However, VMware does not currently support the configuration required to make it happen, and I am not aware of any plans on their part to do so in the near future.

This was last published in July 2006
