Microsoft introduced PVLANs in Hyper-V 3.0 to address management and scaling issues associated with VLAN traffic isolation. In addition, PVLANs will simplify virtual networking as more workloads migrate to and from multi-tenant cloud environments.
For organizations adopting server virtualization, isolating certain types of network traffic is a big challenge – especially in high-security, heavily regulated or multi-tenant environments. For instance, if you were hosting virtual machines (VMs) for both Coca-Cola and Pepsi on the same Hyper-V host, you wouldn’t want either company intercepting the other’s traffic.
Traditionally, virtual local area networks (VLANs) have been used to isolate network traffic, but they don’t scale well and are nearly impossible to manage in large, multi-tenant environments, because trunking must be configured across multiple, interconnected switches. Trunking becomes even more complex if you create parallel VLANs.
To overcome these issues, Hyper-V 3.0 will support private virtual local area networks (PVLANs), which are an extension of the VLAN standard. PVLANs divide the VLAN into multiple broadcast domains, guaranteeing isolation for each broadcast domain. In effect, a PVLAN allows you to set up VLANs inside a VLAN, which is important in multi-tenant environments or infrastructures that use multiple, dedicated network segments.
PVLANs work by assigning virtual machines multiple IP addresses. This removes many of the networking challenges in multi-tenant environments, and allows for an almost complete abstraction between the physical and virtual networks. In other words, the virtual network can take any shape, regardless of the underlying physical infrastructure.
PVLANs and multiple IP addresses
In Hyper-V 3.0, you can assign identical computer names and IP addresses to VMs on a common host, as long as the VMs participate in separate PVLANs. In fact, every virtual machine participating in a PVLAN has at least two IP addresses, the Customer Address and the Provider Address. This configuration allows VMs to have overlapping IP addresses on the same host.
The Customer Address is the address that the customer assigns to a virtual machine, based on its own network infrastructure. To understand how the Customer Address works, think of PVLANs in terms of a multi-tenant environment. Imagine that you move one of your servers to a public or private cloud. You would want the server to keep the same IP address, so you wouldn’t have to modify your DNS records or worry about unanticipated side effects of changing an IP address, such as network routing errors or DNS records pointing to the wrong addresses. After the server migrates to the cloud, its new host will undoubtedly exist on a different subnet. But with PVLANs and a Customer Address, the recently migrated server can retain its former IP address.
The Provider Address is a unique IP address that the host assigns to VMs participating in the PVLAN. As such, the Provider Address is visible on the host’s physical network but not to the virtual machines. In contrast, the Customer Address is visible to the virtual machines but not on the host’s physical network. To put it simply, PVLANs virtualize the IP addresses, giving each VM two IP addresses: one it uses on the physical network and one it uses on the virtual network. The two addresses make full network abstraction possible without major infrastructure modifications.
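One way to picture the two-address scheme is as a lookup table the host maintains, keyed by tenant. The sketch below is purely illustrative (the tenant names, addresses, and function name are all invented for demonstration), but it shows why two VMs can hold the same Customer Address without conflict: the host resolves each one to a distinct Provider Address.

```python
# Illustrative sketch of the Customer Address / Provider Address split.
# All tenant names and addresses here are invented for demonstration.

# The host keys each mapping by (tenant, customer_address), so two
# tenants can reuse the same Customer Address without a conflict.
address_map = {
    ("coca-cola", "192.168.1.10"): "10.0.0.5",  # Provider Address on the physical network
    ("pepsi",     "192.168.1.10"): "10.0.0.6",  # same CA, different tenant -> different PA
}

def provider_address(tenant, customer_address):
    """Resolve the physical (Provider) address the host uses for a VM."""
    return address_map[(tenant, customer_address)]

print(provider_address("coca-cola", "192.168.1.10"))  # 10.0.0.5
print(provider_address("pepsi", "192.168.1.10"))      # 10.0.0.6
```

The VMs themselves only ever see their Customer Addresses; the Provider Addresses exist solely so the host can move traffic across the physical network.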
Performing IP address virtualization
PVLANs are a multi-vendor standard, so there are two IP address virtualization methods: IP rewrite and Generic Routing Encapsulation. The first technique rewrites the Customer Address before any packets are placed onto the physical network. The rewrite occurs at the switch level without the need for administrative intervention, and it offers better overall performance.
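Conceptually, the IP rewrite amounts to swapping the source address field in the outgoing IPv4 header and fixing up the header checksum. The sketch below is a simplified illustration of that idea in Python (the addresses are invented, and a real switch does this in hardware, not per-packet in software):

```python
import struct

def ipv4_checksum(header: bytes) -> int:
    """One's-complement checksum over a header whose checksum field is zeroed."""
    if len(header) % 2:
        header += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(header) // 2), header))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def rewrite_source(packet: bytes, new_src: str) -> bytes:
    """Swap the IPv4 source address (header bytes 12-15) for the
    Provider Address and recompute the header checksum."""
    hdr = bytearray(packet[:20])
    hdr[12:16] = bytes(int(octet) for octet in new_src.split("."))
    hdr[10:12] = b"\x00\x00"  # zero the checksum before recomputing it
    hdr[10:12] = struct.pack("!H", ipv4_checksum(bytes(hdr)))
    return bytes(hdr) + packet[20:]

# Minimal 20-byte IPv4 header: source 192.168.1.10, destination 192.168.1.20
sample = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 20, 0, 0, 64, 6, 0,
                     bytes([192, 168, 1, 10]), bytes([192, 168, 1, 20]))
rewritten = rewrite_source(sample, "10.0.0.5")
```

Because only one header field changes, the rewrite adds almost no overhead, which is why this technique performs better than encapsulation.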
Generic Routing Encapsulation encapsulates a packet generated by the virtual machine inside a packet generated by the host before it is placed onto the physical network. The idea of packet encapsulation is not new; Windows has used the technique for other purposes. For instance, the 6to4 mechanism encapsulates IPv6 packets inside IPv4 packets before the data is sent across an IPv4 network.
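As a rough sketch of the encapsulation idea (not Hyper-V's actual on-the-wire format), the host prepends an outer IPv4 header, addressed with Provider Addresses, plus a small GRE header to the VM's original packet. The addresses and helper name below are invented for illustration:

```python
import struct

GRE_PROTO_IPV4 = 0x0800  # EtherType-style value carried in the GRE protocol field

def gre_encapsulate(inner_packet: bytes, outer_src: str, outer_dst: str) -> bytes:
    """Wrap a VM-generated packet in a minimal outer IPv4 + GRE header.
    Simplified sketch: no checksums, GRE keys, or fragmentation handling."""
    gre_header = struct.pack("!HH", 0, GRE_PROTO_IPV4)  # flags/version = 0
    total_len = 20 + len(gre_header) + len(inner_packet)
    outer_ip = struct.pack(
        "!BBHHHBBH4s4s",
        0x45, 0, total_len, 0, 0, 64, 47, 0,  # IP protocol 47 = GRE
        bytes(int(octet) for octet in outer_src.split(".")),
        bytes(int(octet) for octet in outer_dst.split(".")),
    )
    return outer_ip + gre_header + inner_packet

inner = b"\x45" + b"\x00" * 19  # stand-in for the VM's original IPv4 packet
frame = gre_encapsulate(inner, "10.0.0.5", "10.0.0.9")
```

The physical network only ever routes on the outer (Provider Address) header; the inner packet, with its Customer Addresses, travels untouched until the destination host strips the wrapper off.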
Activating network traffic isolation in Hyper-V
The process for implementing Hyper-V 3.0 PVLANs will differ from how VLANs are configured. Presently, VLAN configuration is part of a virtual machine’s configuration. In the Hyper-V Manager, open the VM’s Settings page and select the virtual network adapter. Next, check the box to enable virtual LAN identification. Then, assign a VLAN ID to the virtual machine.
In contrast, PVLAN configuration is not part of a VM’s configuration; it is policy-based. The VLAN settings described above still exist in Hyper-V 3.0, but PVLAN policies are managed separately, outside of the Hyper-V Manager. Because Hyper-V 3.0 is still in the pre-beta stage, that management interface does not yet exist.
As of now, Windows Server 8 and Hyper-V 3.0 are still in pre-beta. Regardless, there shouldn’t be any major changes to how you implement PVLANs, because they are a multi-vendor standard rather than a proprietary Microsoft technology. As such, PVLANs represent a manageable, scalable alternative to using VLANs in multi-tenant environments.