Virtualization management tools can help reduce server power usage

Emerging virtualization tools that automatically migrate workloads to optimize data center resources can help reduce server power usage.

Server power usage has become a major data center expense, and organizations have long sought alternatives for mitigating energy costs. Improving server hardware has been one approach, but the introduction of virtualization made the largest impact. By allowing one physical server to do the work of many, a business can buy, deploy and power far fewer physical systems -- shaving operating expenses for both servers and cooling.

Virtualization can reduce power demands

It's important to note that virtualization alone does not directly affect server power usage. Power savings is an indirect and controllable side effect of virtualization.

One of the most noteworthy benefits of virtualization is improved use of computing resources. Consider that a traditional physical server deployment would only use a fraction of the underlying hardware, and the remaining CPU cycles, memory, I/O and other resources would essentially be wasted. Virtualization abstracts the workloads from the underlying hardware and allows multiple workloads to run on the same server using a larger portion of the system's available resources.

This means an organization can operate the same number of workloads on far fewer physical servers -- a concept called server consolidation. Far fewer servers can mean substantial reductions in server energy demands and monthly savings on the data center's power bills.
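To put rough numbers on that idea, the back-of-the-envelope Python sketch below compares annual power costs before and after consolidation. Every figure in it -- server count, consolidation ratio, wattage, PUE and electricity rate -- is an assumption chosen for illustration, not data from this article.

```python
# Back-of-the-envelope consolidation savings; all figures are assumptions.
servers_before = 100        # physical servers prior to virtualization (assumed)
consolidation_ratio = 10    # workloads hosted per virtualized server (assumed)
servers_after = servers_before // consolidation_ratio

watts_per_server = 400      # assumed average draw per physical server
pue = 1.8                   # assumed power usage effectiveness (adds cooling overhead)
usd_per_kwh = 0.10          # assumed electricity rate

def annual_power_cost(server_count: int) -> float:
    kwh = server_count * watts_per_server / 1000 * 24 * 365 * pue
    return kwh * usd_per_kwh

savings = annual_power_cost(servers_before) - annual_power_cost(servers_after)
print(f"Estimated annual power savings: ${savings:,.0f}")
```

With these assumed figures, the gap works out to roughly $57,000 per year.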

However, virtualization does not deliver predictable energy savings for every data center because administrators wield a great deal of control over consolidation. For example, it is possible that one server may operate two or three workloads, while another server supports 10, 20 or more workloads. The actual amount of consolidation will vary based on the overall computing capacity of the server, workload demands and the desired consolidation level.

For example, an older server with fewer available computing resources might only operate a handful of workloads, while a newer enterprise-class server could potentially support dozens of workloads. In addition, administrators may choose to leave some computing resources unused to allow workloads to fail over from other servers if the need arises.

So virtualization can drive energy savings, but the amount of savings can vary radically depending on the particular data center gear, IT staff and business motivations.

Automatically reduce server power use

Virtualization allows administrators to make choices that dramatically affect consolidation levels, and it's common to see administrators rebalance workload distribution across servers to optimize performance. It is also common to see noncritical or lightly used workloads reorganized onto even fewer systems.

In an ideal virtualized environment, idle or lightly used workloads could be migrated to highly consolidated servers. This migration could potentially free other data center servers, which administrators could power down to save energy, then power up again as computing demands increase.

The tools to perform this kind of dynamic consolidation, however, are still fairly new. One example of emerging automation is the Distributed Power Management (DPM) feature in VMware's Distributed Resource Scheduler (DRS). DRS can already allocate resources and migrate workloads within a DRS cluster, but DPM adds the ability to optimize power consumption by consolidating workloads onto fewer servers within the DRS cluster -- powering down the unneeded servers until computing demands increase.
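For readers who want a concrete picture of how such a feature is switched on, here is a minimal, hypothetical sketch using the open source pyVmomi Python SDK for the vSphere API. The vCenter address, credentials and cluster name are placeholders, and property names can differ across vSphere versions, so treat this as an outline rather than a drop-in script.

```python
# Hypothetical sketch: enable DPM on an existing DRS cluster via pyVmomi.
# The host, credentials and cluster name below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab use only; skips certificate checks
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

# Locate the DRS cluster by name using a container view.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "ProdCluster")

# Build a reconfiguration spec that turns DPM on in automated mode, letting
# vCenter consolidate workloads and power idle hosts down and back up again.
spec = vim.cluster.ConfigSpecEx()
spec.dpmConfig = vim.cluster.DpmConfigInfo(enabled=True,
                                           defaultDpmBehavior="automated")
cluster.ReconfigureComputeResource_Task(spec, modify=True)

Disconnect(si)
```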

Tools like DPM are still somewhat narrow in scope -- they operate only within a single DRS cluster -- but they underscore the idea of automated power management in the data center, and future tools will likely expand on this concept to add more features and functionality.

What server capabilities support virtualized power management?

The exact server specifications will depend on the particular hypervisor and virtualization power management tool that you adopt. However, the server will require features that enable it to receive power commands from the LAN. At a minimum, expect the motherboard and network interface card (NIC) used for workload migration to support Wake on LAN (WOL) functionality.

In actual practice, most current server motherboards and PCIe NIC adapters support WOL, but implementations are not always uniform; there may be unsupported commands or bugs that cause problems. Even when a server and NIC are advertised with WOL support, it's important to test the hardware setup in a lab before deploying it to production.
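One quick lab test is simply to send a standard WOL "magic packet" to the NIC of a powered-down server and confirm that the server wakes. The short Python sketch below builds that packet (six 0xFF bytes followed by the target MAC address repeated 16 times) and sends it as a UDP broadcast; the MAC address shown is hypothetical.

```python
import socket

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send a Wake on LAN magic packet: 6 bytes of 0xFF followed by the
    target MAC address repeated 16 times, sent as a UDP broadcast."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, (broadcast, port))

# Example: wake the server whose NIC has this (hypothetical) MAC address.
send_wol("00:1a:2b:3c:4d:5e")
```

Because the packet is a plain broadcast, it normally stays within the sender's IP subnet -- a point that matters for the network design discussed below.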

On the server side, admins may need to configure WOL (or its sub-settings) through the server's BIOS, or a firmware update may be required to fix bugs or smooth compatibility issues. Don't just assume that a server's WOL will function properly with a tool like vSphere DPM. If the hypervisor does not detect WOL capability in the NIC, it may not even offer a power-off option for that host. This may require a new or updated NIC driver, or a NIC upgrade, to enable full WOL support.

The network setup can also affect virtualized power management. For example, a NIC that only supports WOL at 10/100 Mbps might need to auto-negotiate a slower speed with a Gigabit Ethernet switch port. This doesn't always work properly, which can cause the NIC to drop off the network and the server to fail to wake again. The way that WOL packets are encapsulated can also make a difference. For example, a WOL packet encapsulated in a UDP broadcast might be dropped by a router when it originates from a different subnet, so all of the servers that migrate workloads among themselves should sit on the same IP subnet rather than being separated by routers.

These are only a few of the potential problems that you might encounter when using automated workload migration and power management tools in a virtualized environment. Again, there is no substitute for actual testing and limited proof-of-principle deployments.

It's clear that the underlying virtualization platform -- the hypervisor -- has no direct influence on server power usage. But the server consolidation that results from virtualization has become a game-changing technology for organizations of almost any size. And the intelligence of modern virtualization infrastructures is increasing, allowing businesses to shift consolidation levels as computing demands change and maximize energy savings through automated tools.

This was first published in August 2013
