This article can also be found in the Premium Editorial Download "Virtual Data Center: The data center of the future."
Business units and application owners have long regarded power as free, much as it's easy to regard disk space, memory and CPU as "free." But power consumption is a silent cost at the heart of the data center.
Server virtualization has allowed many businesses to do more with less, or to grow their business without having to also grow their physical server footprint. In part the drive toward server consolidation was intended to reduce power and cooling costs—but some efficiencies have yet to be fully exploited.
Many virtualization platforms support hypervisor clustering for availability and performance; VMware, for example, offers its High Availability and Distributed Resource Scheduler technologies. However, anecdotally, few customers have adopted their sister technology, Distributed Power Management (DPM). DPM analyzes the load on the cluster and, based on the administrator's high-availability settings, shuts down servers that aren't needed, holding them in a standby power state until compute demands rise again. You'd think this functionality would be a great fit for the new vision of a flexible, elastic and on-demand cloud computing framework. That hasn't happened. Why?
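To make the idea concrete, here is a minimal sketch, in Python, of the kind of decision DPM automates. This is purely illustrative, not VMware's actual algorithm: the host names, the per-host capacity figure and the utilization target are all hypothetical, and a real implementation would also migrate workloads before powering anything down.

```python
def hosts_to_standby(loads, capacity_per_host, target_util=0.6, min_hosts=2):
    """Illustrative sketch of a DPM-style decision (not VMware's algorithm).

    loads: dict mapping host name -> current load on that host.
    capacity_per_host: assumed identical capacity per host (hypothetical).
    target_util: keep remaining hosts at or below this utilization.
    min_hosts: minimum hosts kept online for availability.
    Returns the list of hosts that could be placed in standby.
    """
    total_load = sum(loads.values())
    # Consider the least-loaded hosts first: cheapest to evacuate.
    candidates = sorted(loads, key=loads.get)
    standby = []
    online = len(loads)
    for host in candidates:
        if online - 1 < min_hosts:
            break  # honor the availability floor
        # Would the surviving hosts stay under the utilization target
        # after absorbing this host's workload?
        remaining_capacity = (online - 1) * capacity_per_host
        if total_load <= target_util * remaining_capacity:
            standby.append(host)
            online -= 1
    return standby

# Example: four hosts, 100 units of capacity each, 100 units of total load.
# The two least-loaded hosts can go to standby; the availability floor
# (min_hosts=2) then stops further consolidation.
print(hosts_to_standby({"esx1": 10, "esx2": 15, "esx3": 40, "esx4": 35}, 100))
```

The availability constraint (`min_hosts`) mirrors the role the administrator's high-availability settings play in the real feature: consolidation stops before the cluster loses its ability to tolerate a host failure.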
Many IT administrators are still ultraconservative when it comes to power, believing that if a server is working fine, it shouldn't be powered off. Some admins don't even have access to the power switch: if servers are hosted in a colocation facility, there's no incentive to reduce power, because hosting providers rarely reward customers with lower monthly bills for using less power than was allotted. From the colocation provider's perspective, offering bills that fluctuate with power consumption would impose an accounting overhead it would rather avoid. Others note that their monitoring systems are not DPM-aware, so hosts put into standby mode would trigger all kinds of alarms and alerts.
At the opposite end of the spectrum, very large corporations generally stand to benefit most from a DPM-like feature. However, if their power utilization is relatively static around the clock, there may be little to gain. That's also partly because each physical server's power draw is quite small compared with the spinning disks in a storage array, so shutting down a server to save power doesn't always translate into large cost savings.
In the long term, the industry may find greater power savings by moving away from spinning-drive storage systems. All-solid-state storage arrays, combined with data deduplication and compression, are matching conventional arrays on capacity and price while offering excellent I/O at reduced power. And with the rise of virtual desktops and remote application delivery, organizations are looking to replace power-hungry PCs with fanless thin terminals and zero clients. These clients generally have no moving parts, which does a great deal to reduce power and cooling demands.
Over the next decade, increased power efficiency will be a strong narrative as the era of "cheap energy" comes to an end. The cost of electricity is only set to increase, and national governments are beginning to pass legislation that makes energy efficiency a requirement (e.g., the U.K. government's CRC Energy Efficiency Scheme). It's time for data centers to catch on and start saving power where they can.
About the Author
Mike Laverick is a professional instructor with 17 years of experience in technologies such as Novell, Windows and Citrix. He is also the author of the virtualization website and blog RTFM Education, where he publishes free guides and utilities for VMware users.
This was first published in April 2012