
Handling server virtualization's shortcomings

Virtualization may be the glamorous new starlet on technology's red carpet, but don't be wowed too easily. Miss Virtualization has some drawbacks. In this article, contributor Rick Ellson explains them in detail, from the high cost of downtime and training to hardware requirements and the lack of standards that go hand-in-hand with any new technology.

Rick Ellson

Server virtualization has rosy prospects today because it offers such benefits as decreasing the operational costs of servers, saving space, conserving power and increasing utilization. But before you pick this rose, check the size of its thorns.

While much has been written about the technical wonders of server virtualization, there has been far less discussion of the business implications of the technology. Virtualization, like any other technological advancement, needs to be evaluated on how well it meets the needs of the organization and how well it lives up to its promises.

Cost of downtime

For every minute an application is unavailable, money is lost. A lightly used application's downtime costs may be negligible, but the price paid for downtime on a busy enterprise resource planning (ERP) or e-commerce application could be very high.


Prior to virtualization, there was basically a one-to-one ratio of applications to physical servers. So, if a server had a catastrophic failure, only one application was affected, and the cost to the organization was limited to that one application.

With virtualization, the ratio is now 10, 20 or even 30 applications to one server. Should a virtualized server suffer a catastrophic failure, the cost to the organization increases dramatically.

So, don't even think about virtualizing a server without having a contingency plan. Naturally, companies have to figure the cost of that plan into their virtualization project.
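To put rough numbers on that risk, here is a back-of-the-envelope sketch comparing downtime exposure before and after consolidation. Every application name and per-minute figure is a hypothetical placeholder, not data from any real environment; substitute your own.

```python
# Hypothetical per-minute downtime costs for three applications.
cost_per_minute = {
    "ERP": 500,
    "e-commerce": 800,
    "intranet wiki": 5,
}

outage_minutes = 60  # assume a one-hour hardware failure

# Before virtualization: one application per physical server, so a single
# server failure takes down only one application.
worst_single_app = max(cost_per_minute.values()) * outage_minutes

# After virtualization: all three applications share one host, so a host
# failure takes down every guest at once.
consolidated_loss = sum(cost_per_minute.values()) * outage_minutes

print(f"Worst case, one app per server: ${worst_single_app:,}")
print(f"All apps on one virtual host:   ${consolidated_loss:,}")
```

Even with these made-up figures, the pattern holds: consolidation turns the cost of one server failure into the sum of every guest's downtime cost, which is why the contingency plan belongs in the project budget from day one.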

Hardware requirements

In the distributed computing model, many organizations have used inexpensive servers for lightly used applications. These servers usually do not have high-availability (HA) components. For example, if a computer needs two power supplies to function, then to be considered HA it must have at least three; likewise, if a redundant array of independent disks (RAID) configuration needs a minimum of seven drives to function, then to be HA it would need at least eight. Most computers do not have HA components because of the cost and complexity of those additions.

Virtualized servers host multiple applications, and -- in most cases -- they must have HA components to reduce the chance of a catastrophic failure.

While all server manufacturers offer models with high-availability components, those models cost substantially more than servers without them. Therefore, an organization must factor those costs into the total cost of virtualization.

Cost of training and support

Regardless of which virtualization technology you deploy, there are support costs that were not present in a non-virtualized environment. For example, you may have to send your staff for virtualization deployment and management training, or employ a third party to implement and support the systems. Add that to the usual costs of annual software support.

While support costs are reduced in the area of hardware maintenance -- as there are fewer physical servers -- the applications and operating systems add layers of complexity and may actually cost you more to support in a virtualized environment. Before, you had just the operating system (OS) and the application to deploy and manage on one server. Now you still have the OS and the application, plus the added layer of the virtualization software.

An organization must take into consideration training, support and management costs as they have a bearing on the total cost of virtualization.


Finite resources

One of the primary reasons virtualization is becoming mainstream is that many servers have been underutilized, particularly given the processing power available even in an entry-level server. However, regardless of their configuration, all servers have finite resources -- such as random access memory (RAM), processors and network interface cards (NICs) -- which, in a virtualized environment, are shared among all the "guests." Therefore, should one of those resources be overutilized, all the other applications may be affected. As a result, organizations have to understand the requirements of each guest: unlike a traditional environment, where underutilization was the issue, the issue now can be overutilization.

Sure, you may be able to run 50 applications on one server using virtualization, but all of those applications are competing for the same finite resources. So be sure to evaluate the performance implications of running more than one application on a particular server; performance can degrade when any one resource runs short, as the sketch below illustrates.
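As a rough illustration, the sketch below sums each guest's requirements against a single host's capacity and flags any resource that would be overcommitted. The host specifications and guest figures are hypothetical placeholders chosen only to show the shape of the check.

```python
# Hypothetical host capacity and guest requirements; replace with your own.
host = {"ram_gb": 256, "cpu_cores": 32, "nic_gbps": 20}

guests = [
    {"name": "erp-db",    "ram_gb": 128, "cpu_cores": 8, "nic_gbps": 4},
    {"name": "web-01",    "ram_gb": 16,  "cpu_cores": 4, "nic_gbps": 2},
    {"name": "web-02",    "ram_gb": 16,  "cpu_cores": 4, "nic_gbps": 2},
    {"name": "file-srv",  "ram_gb": 32,  "cpu_cores": 4, "nic_gbps": 6},
    {"name": "reporting", "ram_gb": 96,  "cpu_cores": 8, "nic_gbps": 4},
]

# For each finite resource, compare total guest demand with host capacity.
for resource, capacity in host.items():
    demand = sum(g[resource] for g in guests)
    status = "OK" if demand <= capacity else "OVERCOMMITTED"
    print(f"{resource:9}  demand {demand:3} / capacity {capacity:3}  {status}")
```

In this hypothetical case the memory is overcommitted even though CPU and network look fine, and it is exactly that one scarce resource that would drag every guest on the host down.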

Lack of standards

Mainframes have been doing virtualization for many years, and standards have been established there. Virtualization is a relatively new concept in the client-server world, however, and no standard has yet been established.

Vendors such as VMware, Microsoft and XenSource are competing in the virtualization marketplace, and their products are not built around a common standard. Therefore, as the products evolve, the marketplace may pick a "winner" and a "loser." (Remember Beta versus VHS, or Windows versus OS/2?) As a result, companies may invest in a certain technology only to find later that they picked the loser, and they may have to make additional investments to replace their obsolete, unsupported virtualization technology.

Conclusions

Given all of the above, should an organization implement virtualization?

Maybe. Virtualization is here to stay, and every organization should determine whether it makes sense for its environment. However, organizations must understand that virtualization is a tool that can yield such benefits as increased server utilization, reduced power consumption and so on. Those benefits won't come cheaply, though, so factor in the issues I've mentioned above to calculate the true cost of a virtualization implementation. Be sure to do a cost comparison between a traditional and a virtualized environment.
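A minimal sketch of such a comparison follows. Every line item and dollar figure is a hypothetical placeholder meant only to show the shape of the calculation; plug in real hardware quotes, license fees and labor rates for your own environment.

```python
apps = 20  # number of applications to be hosted

# Hypothetical three-year costs for a traditional one-app-per-server estate.
traditional = {
    "commodity servers": apps * 3_000,
    "power and cooling": apps * 1_200,
    "hardware maintenance": apps * 600,
}

# Hypothetical three-year costs for a consolidated, highly available setup.
virtualized = {
    "HA hosts (redundant PSU, RAID)": 2 * 25_000,
    "hypervisor licenses and support": 15_000,
    "staff training and consultants": 10_000,
    "power and cooling": 2 * 4_000,
}

print(f"Traditional: ${sum(traditional.values()):,}")
print(f"Virtualized: ${sum(virtualized.values()):,}")
```

The point of the exercise is not the totals themselves but making sure the virtualized column carries every cost this article has covered -- HA hardware, training, support and the contingency plan -- before you declare the project a savings.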

Just because server virtualization is hot right now doesn't mean there's always a business case to be made for it. As always in IT, a reality check is in order.

About the author: Rick Ellson is a 25-year veteran of the technology industry. Since 1999, Rick has been an independent consultant who focuses on project management, compliance and infrastructure. He can be reached at ellson@shaw.ca.
