
Is a 100% virtualized environment possible?

Organizations that have virtualized their environments often virtualize only a portion of their servers, leaving some running on standalone physical hardware. Is a 100% virtualized environment possible? Certainly: almost all workloads can be virtualized. But there are good arguments against virtualizing your environment completely.

I recently wrote about an experience I had with a complete data center power failure. The problems stemmed from all the DNS servers being virtualized: until the host servers and storage-area network were back online, no DNS was available, which made it difficult for anything in the environment to function properly. Having a DNS server and Active Directory domain controller running on a physical server would have been a great benefit in that situation.

Additionally, many organizations are leery of virtualizing too many servers because they want to avoid the risk of a single host outage taking down many virtual machines at once. This risk can be partially offset by the high availability features available in many virtualization products. Likewise, if a virtual environment relies on a single shared storage device and that device suffers a major failure, it can take down every virtual machine residing on that storage. This risk, too, can be partially offset by a well-architected SAN environment with multiple switches and host bus adapters, so that multiple paths to the SAN are available.
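The consolidation risk described above can be quantified with a rough back-of-the-envelope calculation. The sketch below is illustrative only; the VM counts, host counts and even-spread assumption are hypothetical, not figures from this article:

```python
def blast_radius(total_vms: int, hosts: int) -> int:
    """Worst-case number of VMs affected by a single host failure,
    assuming VMs are spread as evenly as possible across hosts
    (a simplifying assumption; real placement varies)."""
    # Ceiling division: the most heavily loaded host carries the extras.
    return -(-total_vms // hosts)

# Hypothetical example: 60 workloads consolidated onto 6 hosts,
# versus the same 60 workloads each on its own physical server.
vms_down_virtual = blast_radius(60, 6)   # a single host outage
vms_down_physical = 1                    # a single physical server outage

print(vms_down_virtual)   # 10 workloads down at once
print(vms_down_physical)  # 1 workload down
```

This is the trade-off HA features and SAN multipathing are meant to soften: consolidation multiplies the impact of any single hardware failure by the consolidation ratio.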

Another reason you may not want to virtualize your whole environment is that many software vendors do not fully support running their applications on virtual machines and may consequently require you to reproduce a problem on a physical system. Because of this, it is a good idea to keep a few physical servers running applications that may be affected by these policies. For example, if you have multiple Oracle, SQL or Active Directory servers, consider leaving one or two of them on physical hardware.

Finally, consider leaving a few physical servers for applications with non-virtualization-friendly licensing, for hardware requirements that are difficult to virtualize (licensing dongles, fax boards, etc.), or for servers with extremely high I/O requirements.

So is a 100% virtualized environment possible? Yes, but is it advisable? In most cases it is not. The cost savings typically seen from implementing virtualization increase the more of an environment is virtualized, but you may want to stop at around 90% and leave a few physical servers for the reasons mentioned above.

Join the conversation


Eric - great observations for those who believe that virtualization is the be-all and end-all. There are absolutely situations where virtualization applies -- including broad scale-out instances (think: web farm). What this situation implies, however, is that there will be "management silos" whereby virtualized apps and physical apps are under different management/control. I hope the industry realizes that (as a result of your observation) we are inevitably headed in the direction of new management tools. The tools will have to manage *both* virtualized and physical app instances. And to make matters even more interesting, the tools will have to manage multiple vendors' VMs (VMware, Citrix, MSFT, etc.). A few virtualization vendors have announced support for 3rd-party VMs (e.g. MSFT) and other vendors have support for mixed physical/virtual environments (e.g. ...). Methinks this is the "next big thing" for IT Ops...
I have done several virtualization projects, and would never recommend 100%. I always recommend that DNS/WINS/DHCP not be virtual (backup OK, primary not). For the most part, everything works great, especially when the VM/VS server is properly configured to support the needed sessions. I have built complete systems using MOSS 2k7, SQL 2k5 and Exchange 2k7 and never had any issues with performance. I am working on a new project to do the same thing under Hyper-V with all the same systems, and see how they perform.
You are right, it is not a good idea to virtualize everything. Sometimes it is a good idea just to have several standalone servers with good disk capacity; you never know when your SAN is going to break.
Not to sound demeaning, but you can't have a 100% virtualized environment: you still need some sort of physical servers to run the virtual machines on. Chicken or the egg?
OK. Great article, but I have to pick on one thing. Can everything be virtualised? Your answer was yes. But I disagree. For example, you cannot virtualise any server with physical dongles attached to it. Think about it: if the VM fails over through HA to another host where the dongle is not physically attached... what would happen? OK, technically it is possible to have multiple dongles made available on each host, but who would do that? Also, about not virtualising the servers with high IO, this is not a very big issue, as you can still virtualise these with dedicated storage (separate LUNs) made available to them (for dedicated disk IO) and Gigabit Ethernet (for NW throughput), and ensure you set up correct affinity rules so that VMs with similar high I/O do not fail over together onto a single host. But I can also understand the traditional support guys insisting on leaving these on physical servers simply because they do not want to take a chance. Furthermore, one clear problem I see with virtualisation (regardless of the SAN multipath access provided through redundant HBAs) is that, per the normal industry practice of storing 10 - 15 VMs' data on a single datastore (which most often is a single LUN), you are risking the loss of more data if that RAID fails. By that I mean before, you had say 10 servers with each server having its own RAID, where if a failure occurred you only lost the data of that server; but now, if the RAID of the LUN fails beyond available recovery, you risk losing 10-15 servers' data, which is a bigger risk. Now of course you can mitigate this by introducing something really expensive like RAID 10, where the extra costs would perhaps outweigh the cost savings of virtualisation, or introduce more LUNs (= more datastores) with fewer servers per datastore, which will now present an admin nightmare. So this is a tricky one to tackle.
Anyway, it's a great article, and I am currently in the middle of implementing a VI3 project where I am taking all these facts into account when designing the final list of virtualisation candidates. My plan basically is to AT LEAST leave the following alone (on physical):
* Domain controller with the PDC emulation service running (so that this can be the primary source to sync domain time from, even for the ESX hosts)
* Any server with specialist hardware (i.e. dongles) attached to it
* SQL Cluster which hosts the Virtual Centre database
* Oracle Cluster which requires a per-processor license (would be too expensive otherwise)
* Things like Sun and UNIX boxes
Thanks for the comments Chanaka. There are alternatives for using dongles with ESX hosts. For example, with USB dongles, Digi makes a device called AnywhereUSB that works with ESX servers and provides IP-based connections to USB devices. So any ESX host/VM can connect to the dongle through a network connection rather than have the dongle physically tied to one server.
Has anyone tried the Digi device? Dongles have been an issue for me for a long time now. I honestly don't feel like blowing $350.00 if the device does not work :( I know you guys feel my frustration.
I bought the Digi device, and it worked out awesome. I virtualized about 5 servers thanks to this brilliant piece of tech. Thanks again for the info!