For modern enterprise IT, failure is not an option, so IT teams have worked hard to make individual x86 servers highly available. While virtualization platforms extend this reliability by making virtual machines highly available and fault tolerant, many applications come with fault-tolerant features built in. Using application-level high availability can cut costs.
Implementing traditional enterprise-level HA
To achieve enterprise-level availability, IT teams store virtual machines (VMs) on expensive shared storage; in fact, many customers dedicate a storage area network (SAN) solely to VMs. Storing VMs on the SAN allows the platform to restart them automatically if a host fails, which makes the VMs more highly available than physical machines.
Avoiding VM downtime during physical host maintenance is the other hallmark of enterprise-level availability. Live migration lets you move VMs from one host to another so you can service the physical host without downtime. Both live migration and automated restart require shared storage, which accounts for a significant share of the capital cost of implementing virtualization. These features may also require a more expensive license for the virtualization platform. Much of enterprise IT considers these features critical requirements, but applications that provide their own high availability could change that perception.
Applications bring high-availability features to the virtualization table
Many vendors now design applications to run on many small VMs rather than one or two large VMs. Web server and terminal server farms are classic examples: none of the VMs contain persistent data, and users can access the application as long as enough small VMs remain available to share the load. Some NoSQL databases and Hadoop clusters are also flexible enough to run on a collection of small VMs. In these cases, the application itself manages the loss of nodes and reallocates workloads, rendering individual nodes disposable.
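The disposable-node pattern can be sketched in a few lines. This is a hypothetical illustration, not any vendor's implementation: a farm of stateless nodes keeps serving requests as long as some nodes survive, so no VM-level restart or shared storage is needed for the workers.

```python
import random

class WebFarm:
    """Hypothetical sketch of a scale-out application: work is spread
    across many small, stateless nodes, so any one node is disposable."""

    def __init__(self, nodes):
        self.healthy = set(nodes)

    def mark_failed(self, node):
        # The application notices a dead node and simply stops routing
        # requests to it; no hypervisor-level HA feature is involved.
        self.healthy.discard(node)

    def route(self):
        # Requests go to any surviving node; service continues as long
        # as enough nodes remain to carry the load.
        if not self.healthy:
            raise RuntimeError("no healthy nodes left")
        return random.choice(sorted(self.healthy))

farm = WebFarm(["web01", "web02", "web03"])
farm.mark_failed("web02")            # simulate a host failure
assert farm.route() in {"web01", "web03"}
```

The point of the sketch is that availability lives in the application's routing logic, not in the infrastructure underneath each node.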
Given that individual VMs are disposable, you do not need shared storage. Instead, you can store VMs on disks inside the virtualization host. Local disk capacity in hosts costs less than shared storage and can even cut storage costs in half. Because these applications automatically balance workloads and manage VMs, you may not need features like automated VM restart, which could eliminate the need for some virtualization management software. You might also be able to use a less-expensive virtualization platform license. Building a farm with the free edition of your virtualization platform would further reduce the capital cost of implementing high availability.
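A back-of-the-envelope calculation shows where the savings come from. The per-gigabyte prices below are illustrative assumptions, not vendor quotes; the arithmetic simply shows that if local disk costs half as much per gigabyte as SAN capacity, the farm's storage bill halves.

```python
# Rough comparison of shared (SAN) vs server-local storage for a VM farm.
# Both per-GB prices are assumed figures for illustration only.
SAN_COST_PER_GB = 5.00     # assumed cost of shared SAN capacity
LOCAL_COST_PER_GB = 2.50   # assumed cost of server-local disk capacity

vm_count = 40              # hypothetical farm size
gb_per_vm = 60             # hypothetical disk allocation per VM
total_gb = vm_count * gb_per_vm

san_cost = total_gb * SAN_COST_PER_GB
local_cost = total_gb * LOCAL_COST_PER_GB

print(f"SAN:   ${san_cost:,.0f}")    # $12,000
print(f"Local: ${local_cost:,.0f}")  # $6,000
```

With these assumed prices, local disk comes in at half the cost of shared storage, before counting any licensing savings.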
Even applications with built-in HA often include elements that run as single VM instances with persistent data, and those VMs must still be made highly available. For example, the NameNode and JobTracker roles within Hadoop still require shared storage and automated restart following a host failure.
Not all applications scale out, so you will still need an enterprise-level virtualization platform to house some VMs, but you may not need to provide enterprise-level high availability for every VM. For applications that provide their own high availability by scaling out, a less highly available virtualization platform may be more practical.
Alastair Cooke asks:
Which of these applications do you use in your environment?