Proper resource utilization can optimize performance

Optimizing performance on a virtual server can maximize efficiency and savings. Microsoft System Center is one tool that can automate and adjust settings as workload needs change.

As IT budgets and staffing levels continue to tighten, organizations are watching operational costs more closely than ever. Business and technology administrators are making the most of virtualization in order to maximize efficiency and savings. This typically involves an array of virtualization-aware tools, such as Microsoft System Center, that can automate and dynamically adjust the computing environment as workload needs change.

Relying on software tools to optimize power and resources dynamically

Virtualization benefits the data center by increasing the utilization of computing resources that might otherwise be wasted. For example, a traditional physical server with a single workload might use only 10% to 15% of the server's processor cycles or memory space -- essentially wasting the remaining 85% to 90%. By applying a virtualization layer to the server, multiple virtual machines (VMs) can reside on the same server and each consume a portion of the available physical resources. It's not uncommon for a virtualized server to host 10, 15, 20 or more VMs (depending on each one's resource demands). Thus, the same amount of computing work can be performed with far fewer servers, reducing the cost and space demands of physical systems along with power and cooling requirements.
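To make the arithmetic concrete, here is a minimal Python sketch of the consolidation math; it is not tied to System Center or any other product, and the 80% utilization ceiling is an assumed planning figure, not a product default.

```python
# Back-of-the-envelope consolidation estimate. It ignores hypervisor overhead,
# resource overcommitment and peak-versus-average effects, all of which shift
# the real numbers; it only illustrates the utilization math described above.

def vms_per_host(per_vm_utilization: float, ceiling: float = 0.80) -> int:
    """How many VMs at a given average utilization fit under the ceiling."""
    return int(ceiling // per_vm_utilization)

for u in (0.10, 0.15):
    print(f"{u:.0%} average per VM -> roughly {vms_per_host(u)} VMs before hitting 80%")
```

Because CPU and memory are routinely overcommitted in practice, the effective ratio is often higher, which is how a single host ends up carrying 10, 15 or 20 VMs.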

The principal challenge with virtualization is that resource utilization is not constant throughout the day, month or year. Many workloads experience fluctuations in resource needs according to changes in the number of users, the type of tasks required at the time and so on. For example, a corporation might provide an important business application to its employees, but if they use the application only from 8 a.m. to 5 p.m., the workload is idle (and unneeded) the rest of the day. Another example might be a payroll application that processes payroll data only one or two days each month. These circumstances also represent wasted computing resources within the virtualized data center, and organizations can reduce this waste by adjusting resources and migrating workloads as usage patterns change.

Consider the example of the important business application. If it were possible to reduce the resources allocated to the idle VM, more resources would be made available for other workloads that might need them -- or the idle workload could be migrated to (or parked on) a highly consolidated server where it could handle a low volume of work during off hours, then re-migrated and readjusted in preparation for the new day. The payroll workload might even be shut down and saved to a storage area network, or SAN, until it's needed for another payroll cycle. All these tactics further conserve server resources and make the most of existing computing capacity.
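The choice among those tactics can be written down as a simple policy. The sketch below is a hypothetical Python illustration, not a System Center feature; the workload attributes, thresholds and action strings are all assumptions chosen to mirror the two scenarios above.

```python
from dataclasses import dataclass

@dataclass
class WorkloadProfile:
    name: str
    active_hours_per_day: float    # e.g., 9 for an 8 a.m. to 5 p.m. application
    active_days_per_month: float   # e.g., 2 for a monthly payroll run

def off_hours_action(profile: WorkloadProfile) -> str:
    """Pick a resource-saving tactic for a workload outside its busy window.
    Thresholds are illustrative, not product defaults."""
    if profile.active_days_per_month <= 2:
        return "shut down and save the VM to shared storage until the next cycle"
    if profile.active_hours_per_day <= 12:
        return "shrink the VM's allocation or park it on a consolidated host overnight"
    return "leave the VM where it is; it stays busy most of the time"

print(off_hours_action(WorkloadProfile("line-of-business app", 9, 22)))
print(off_hours_action(WorkloadProfile("payroll batch", 8, 2)))
```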

It is certainly possible to adjust the resources provided to each VM -- or consolidate less-used workloads to secondary servers (or park them entirely) until they're needed again -- but those processes have typically required manual intervention from IT administrators. It's impractical for any administrator or staff to constantly assess resource utilization and adjust resources or migrate VMs on the fly.

However, a new generation of software tools is emerging to automate some of these resource optimization tasks. One example is Microsoft System Center, which can recommend VM migration when resource demands pass preset levels -- migrating the VM (often automatically) to another server that is better equipped to handle the workload's demands. System Center also provides power optimization features that can automatically power down or power up nodes within a server group in response to computing activity. For example, suppose server A is running at 20% processor utilization and server B is running at 30% processor utilization. Server A can move its workloads to server B and then power down (server B would then be running at roughly 50% processor utilization, assuming comparable hardware). While server A is powered down, it consumes almost no energy, yielding additional savings for the enterprise.
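The server A/server B example reduces to a simple load check. The Python sketch below illustrates only the underlying arithmetic, under the assumption that both hosts have comparable capacity and an assumed 80% ceiling; it is not how System Center implements power optimization.

```python
def can_consolidate(util_a: float, util_b: float, ceiling: float = 0.80) -> bool:
    """True if host A's load can move to host B without pushing B past the ceiling.
    Assumes hosts of comparable capacity; real tools also weigh memory, I/O, etc."""
    return util_a + util_b <= ceiling

util_a, util_b = 0.20, 0.30
if can_consolidate(util_a, util_b):
    print(f"Move A's workloads to B ({util_a + util_b:.0%} combined) and power A down")
else:
    print("Keep both hosts powered on")
```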

Decisions involved with configuring a feature like dynamic optimization

Although the software tools used to automate and optimize the resource utilization and power use within a virtualized data center are constantly improving, they are far from being intuitive or adaptive. Software tools have no knowledge of (and no practical way to determine) how your data center and its workloads should behave under every possible condition. Resource and power optimization features will require administrators to supply a selection of parameters that allow the software to make recommendations or automatic decisions.

First, administrators must decide whether the software tools will migrate workloads or control server power manually or automatically. In manual mode, the software will make recommendations but await administrator approval before it migrates workloads or powers servers up or down. In either mode, the software must also know the resource utilization thresholds (including processor, memory, disk space, disk I/O and network utilization) needed to make recommendations or take action.
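Those decisions (the operating mode and the thresholds that trigger action) can be captured in a small configuration structure. The sketch below is a generic Python illustration; the field names and default values are assumptions, not System Center settings.

```python
from dataclasses import dataclass

@dataclass
class OptimizationConfig:
    mode: str = "manual"             # "manual": recommend only; "automatic": act
    cpu_threshold: float = 0.75      # fraction of host capacity considered stressed
    memory_threshold: float = 0.80
    disk_space_threshold: float = 0.85
    disk_io_threshold: float = 0.70
    network_threshold: float = 0.70

def host_needs_attention(cfg, cpu, mem, disk_space, disk_io, net) -> bool:
    """Flag a host whose utilization crosses any configured threshold."""
    return (cpu > cfg.cpu_threshold or mem > cfg.memory_threshold
            or disk_space > cfg.disk_space_threshold
            or disk_io > cfg.disk_io_threshold or net > cfg.network_threshold)

cfg = OptimizationConfig(mode="manual")
print(host_needs_attention(cfg, cpu=0.85, mem=0.60, disk_space=0.50, disk_io=0.40, net=0.30))
```

In manual mode, a flagged host would only produce a recommendation; in automatic mode the tool would act on it.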

Tools also need to know how aggressively to handle optimizations. System Center provides a basic slider that allows administrators to select low, medium or high aggressiveness, which reflects the amount of improvement required to justify an optimization: less aggressive settings demand a larger improvement before recommending (or triggering) a workload migration, while more aggressive settings act on smaller gains. Administrators must also decide how often to look for optimizations. Many organizations will evaluate optimizations every 10 or 15 minutes, reflecting the changeable nature of modern data centers.
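One way to think about the aggressiveness slider is as a minimum improvement margin that a proposed migration must clear, evaluated on a fixed interval. The mapping below is purely illustrative: the margins and the 10-minute interval are assumed values, not System Center's internal ones.

```python
# Hypothetical mapping from aggressiveness level to the minimum utilization
# improvement a migration must deliver before it is recommended. A less
# aggressive setting demands a bigger gain; a more aggressive one acts on
# smaller gains. All values are illustrative.
REQUIRED_IMPROVEMENT = {"low": 0.20, "medium": 0.10, "high": 0.05}
EVALUATION_INTERVAL_MINUTES = 10    # how often to look for optimizations

def worth_migrating(current_imbalance: float, post_move_imbalance: float,
                    aggressiveness: str = "medium") -> bool:
    """Recommend a migration only if it reduces load imbalance by enough."""
    gain = current_imbalance - post_move_imbalance
    return gain >= REQUIRED_IMPROVEMENT[aggressiveness]

print(worth_migrating(0.30, 0.22, "low"))    # False: 8-point gain, 20 required
print(worth_migrating(0.30, 0.22, "high"))   # True: 8-point gain, 5 required
```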

If tools also allow server power optimizations, administrators can decide whether to enable or disable the feature, and select a schedule for running power optimizations (usually during off hours, such as overnight or on weekends). Outside that schedule (during peak weekday hours, for example), power optimization is typically disallowed, and all servers are powered up and rebalanced to handle the day's computing. Power optimization is usually configured so that unexpected increases in activity will still allow servers to power on to meet additional computing needs.
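The scheduling decision amounts to a window check plus an override for unexpected load. The Python sketch below is a generic illustration; the overnight/weekend window and the 70% busy threshold are assumed values, and powering hosts back on to meet demand would bypass a check like this.

```python
from datetime import datetime

def power_down_allowed(now: datetime, cluster_utilization: float) -> bool:
    """Allow power-down consolidation only in the off-hours window, and never
    when the cluster is already busy. Window and threshold are illustrative;
    powering hosts ON for unexpected demand is not gated by this check."""
    off_hours = now.weekday() >= 5 or now.hour >= 22 or now.hour < 6
    cluster_is_busy = cluster_utilization > 0.70
    return off_hours and not cluster_is_busy

# A quiet Saturday night qualifies; a busy weekday morning does not.
print(power_down_allowed(datetime(2014, 8, 23, 23, 0), 0.35))   # True
print(power_down_allowed(datetime(2014, 8, 20, 9, 30), 0.65))   # False
```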

Using System Center to optimize virtualized systems on demand

There is no question that interest in data center and virtualization automation is growing -- automation frees administrators from the daily drudgery of "keeping the shop open" and allows IT professionals to focus on more strategic projects that yield long-term, tangible benefits to the business. However, automation requires sound decision making based on a comprehensive set of rules that cover every possible scenario. It's almost impossible to achieve that kind of coverage in an environment that is always changing, so there is always a danger of poor results or unforeseen consequences with automation.

When adopting optimization tools, one option is to disable automatic behavior and rely on manual approval before changes take place. For example, a tool like System Center lets you select a cluster and choose the "Optimize Hosts" option, which generates a set of suggested optimizations and lets you decide which ones to allow. This is also a good opportunity to adjust resource thresholds and aggressiveness settings while closely watching how the recommendations change in response to different parameters. It's important to document changes and results so that administrators can roll back the parameters if necessary.
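In manual mode, the workflow boils down to generating recommendations, reviewing them, and applying only those an administrator approves. The loop below is a generic Python sketch of that pattern; it does not call System Center's actual interfaces, and the recommendation structure is an assumption for illustration.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    vm: str
    source_host: str
    target_host: str
    expected_gain: float    # projected utilization improvement

def review_and_apply(recommendations, apply_fn):
    """Show each suggested migration and apply only those the admin approves."""
    for rec in recommendations:
        prompt = (f"Move {rec.vm}: {rec.source_host} -> {rec.target_host} "
                  f"(expected gain {rec.expected_gain:.0%}). Apply? [y/N] ")
        if input(prompt).strip().lower() == "y":
            apply_fn(rec)            # hand off to the management tool
        else:
            print(f"Skipped {rec.vm}; logged for later review.")
```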

Once administrators become comfortable with the tool's decision making (especially after setting new thresholds), it's a simple matter to enable automatic optimizations and allow the tool to handle changes in response to schedules and thresholds. So, the conservative approach is to start optimizations in manual mode, then switch to automatic mode when the tool's behaviors are tested and well understood.

Virtualization has radically improved the utilization and efficiency of data center resources, but even a virtualized environment can be optimized further. Optimizations include workload balancing -- migrating VMs to servers with the best available resources -- and powering down unneeded servers during off hours to further conserve energy and lower operational costs. Tools are emerging to tackle these optimizations both manually and automatically, but they should be approached with careful configuration and a thorough assessment of their behavior before they are allowed to act automatically.

This was first published in August 2014
