
How to realize server virtualization energy savings

Virtualizing servers is one of the most effective steps to reduce energy consumption, especially when the hypervisor and VM placement are taken into account.

One of the most important benefits of virtualization is better hardware resource utilization; with fewer physical servers, organizations use less energy to deliver workloads and cool the underlying equipment. Careful decision-making can help IT administrators realize even more server virtualization energy savings.

In 2014, researchers demonstrated how virtualization reduces both energy consumption and cooling requirements. In their report, "Implementation of Server Virtualization to Build Energy Efficient Data Centers," a small group of researchers described the effect of applying virtualization to a data center with 500 servers.

Prior to virtualizing the systems, the servers' utilization rate averaged around 10%, with each machine consuming about 100 watts of power, for a total of 50,000 watts. The researchers broke workloads into three categories based on application type, and then implemented virtualization to support those workloads.

With this approach, they reduced the number of servers to 96, with each running an average of five VMs. Utilization rose to an average of 30%, and per-server power consumption rose to 275 watts. Despite the higher draw per machine, cutting the fleet to 96 servers brought total consumption down to 26,400 watts, a reduction of 23,600 watts, or roughly 47%.
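
The arithmetic behind those figures is straightforward. A quick back-of-the-envelope calculation in Python, using only the numbers reported in the study, shows where the savings come from:

    # Back-of-the-envelope math using the figures reported in the 2014 study
    before_servers = 500
    before_watts_per_server = 100     # ~10% average utilization
    after_servers = 96
    after_watts_per_server = 275      # ~30% utilization, about five VMs per host

    before_total = before_servers * before_watts_per_server  # 50,000 W
    after_total = after_servers * after_watts_per_server     # 26,400 W
    savings = before_total - after_total                      # 23,600 W, about 47%

    print(f"Before: {before_total:,} W  After: {after_total:,} W  Saved: {savings:,} W")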

The results of this study aren't an anomaly; other research has produced similar findings. One of the benefits of virtualization is that it leads to better resource utilization, which means fewer physical servers and, in turn, less energy to run the machines and keep them cool.

The hypervisor difference

Research into server virtualization energy savings has also extended into more specific areas. For example, experiments suggest that the hypervisor that virtualizes the workloads also plays a role in energy consumption. This doesn't necessarily mean one hypervisor is always better than another. In fact, several factors contribute to a hypervisor's energy consumption, such as the type of workload, the servers on which the hypervisors are installed and the hypervisor itself.

In their 2017 report "Energy efficiency comparison of hypervisors," researchers describe the extensive experiments they performed on four leading hypervisors: KVM, VMware ESXi, Citrix XenServer and Microsoft Hyper-V. For each hypervisor, they ran four workloads (very light, light, fair and very heavy) on four different server platforms (HP DL380 G6, Intel S2600GZ, Lenovo RD450 and APM X-C1).

Each experiment was specific to a hypervisor, workload and server platform. For every experiment, the researchers determined the real-time power consumption, the operation's duration and the total energy consumed to run that operation.

The researchers concluded that hypervisors exhibit different power and energy consumption when running the same workload on the same server. More importantly, no single hypervisor proved to be the most, or the least, energy efficient across all workloads and platforms.

The researchers also concluded that lower power consumption doesn't always translate to server virtualization energy savings. Under certain workloads, a hypervisor might consume less power than other hypervisors, but if the workload takes longer to run, the total energy consumption could be much higher.
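
The distinction comes down to energy being power multiplied by time. A short sketch, using hypothetical figures rather than numbers from the report, shows how a lower-power hypervisor can still use more energy if the workload runs longer:

    # Hypothetical figures for illustration only; not taken from the cited report.
    # Energy (watt-hours) = average power (watts) x run time (hours).
    runs = {
        "Hypervisor A": {"avg_power_w": 220, "runtime_h": 2.0},  # lower power, slower
        "Hypervisor B": {"avg_power_w": 260, "runtime_h": 1.5},  # higher power, faster
    }

    for name, run in runs.items():
        energy_wh = run["avg_power_w"] * run["runtime_h"]
        print(f"{name}: {run['avg_power_w']} W for {run['runtime_h']} h = {energy_wh:.0f} Wh")
    # Hypervisor A: 440 Wh, Hypervisor B: 390 Wh. The lower-power option uses more energy.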

Power consumption, completion time and energy consumption depend on the specific workload and platform. When choosing a hypervisor, IT teams should keep in mind the type of workloads they run and the platforms on which those workloads run. If possible, they should test potential hypervisors based on their anticipated workloads.

The VM difference

VM placement and its effect on energy consumption have also received a fair amount of attention, with much of the focus on how placement algorithms can increase server virtualization energy savings. The idea is that intelligent software strategically controls where VMs run across the available cluster, minimizing energy consumption without hurting performance.
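
The algorithms proposed in the research below are sophisticated meta-heuristics, but the underlying goal can be illustrated with a much simpler sketch: pack VMs onto as few hosts as possible so that idle hosts can be powered down. The first-fit-decreasing heuristic here is a minimal illustration, not any of the published algorithms, and the host capacities and VM demands are hypothetical:

    # Minimal first-fit-decreasing placement sketch. The cited papers use far more
    # sophisticated meta-heuristics; all numbers here are hypothetical.
    def place_vms(vm_demands, host_capacity):
        """Pack VM CPU demands onto as few equally sized hosts as possible."""
        hosts = []        # remaining capacity of each powered-on host
        placement = {}
        for vm, demand in sorted(vm_demands.items(), key=lambda kv: kv[1], reverse=True):
            for i, free in enumerate(hosts):
                if demand <= free:        # reuse an already-active host
                    hosts[i] -= demand
                    placement[vm] = i
                    break
            else:                         # no room anywhere: power on another host
                hosts.append(host_capacity - demand)
                placement[vm] = len(hosts) - 1
        return placement, len(hosts)

    demands = {"vm1": 30, "vm2": 45, "vm3": 20, "vm4": 55, "vm5": 25}  # % of one host's CPU
    placement, active_hosts = place_vms(demands, host_capacity=100)
    print(placement, "active hosts:", active_hosts)  # five VMs fit on two hosts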

For example, in the 2016 report "Energy-efficient virtual machine placement using enhanced firefly algorithm," researchers proposed two ways to modify the firefly algorithm to address VM placement in a cloud-based data center. The firefly algorithm is a meta-heuristic optimization procedure modeled on firefly behavior. The researchers compared their modified algorithms to existing algorithms for mapping VMs to physical machines and found they could reduce energy consumption by as much as 12%.

In 2017, another group of researchers published the report "Energy-Efficient Many-Objective Virtual Machine Placement Optimization in a Cloud Computing Environment," which also proposed an algorithm for server virtualization energy savings through intelligent VM placement. Their algorithm is based on the knee point-driven evolutionary algorithm, a high-performance procedure for addressing many-objective problems. When the researchers compared their algorithm to several others, they saw a reduction in energy consumption between 1% and 28%.

In February 2018, a group of researchers published the report "An Energy Efficient Ant Colony System for Virtual Machine Placement in Cloud Computing." In this case, the proposed algorithm is based on the ant colony system (ACS) algorithm, another meta-heuristic optimization procedure, this one modeled on the foraging behavior of ants. In modifying the ACS algorithm, they also incorporated order exchange and migration local search techniques. The combination resulted in a 6% decrease in power consumption compared to several existing algorithms.

Although much of the research on VM placement is focused on cloud computing, there's also been a fair amount on VM placement in the more traditional data center. In either case, the important point is that an intelligent approach to VM placement results in server virtualization energy savings.
