
Considerations for deploying apps to containers vs. VMs

Using containers doesn't make sense in all instances, despite their many benefits. Before incorporating them in your environment, consider the specific apps you want to deploy.

Although container technology has many benefits, containers aren't suitable in all cases. Rather than blindly jump on the container bandwagon, go slowly and decide whether to use containers vs. VMs on a case-by-case basis.

Some apps can't move to containers because they aren't written to update a microservices service registry with their IP addresses. Other apps can't scale out with containers because of the way they're designed. Storage issues are another roadblock to be aware of when you decide whether to deploy apps to containers.

Container benefits

Container advocates put forth a vision of rapid scaling and improved server efficiency, and they argue that this should drive down cloud subscription fees. Plus, proponents say Docker makes installation easy, because it abstracts implementation details to the degree that an app can run in a container without modification. You can see how easy it is to download and run preconfigured software from the Docker Hub Registry: it takes just seconds to get a database or other server up, though applications that require configuration take more time.
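For example, here is a minimal sketch of pulling a database from Docker Hub, assuming Docker is installed; the container name, password and version tag are illustrative values:

# Pull the official MySQL image from the Docker Hub Registry and start it.
# The name, password and version tag here are example values.
docker run --detach --name example-db \
    --env MYSQL_ROOT_PASSWORD=example-password \
    --publish 3306:3306 mysql:5.7

# Confirm the database container is running.
docker ps --filter name=example-db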


Also, you can't scale out every application simply by adding another copy of it in a container. For example, it wouldn't make sense to run two back-end message servers unless you also add a load balancer, which means adding another piece. And why would you need two databases when they're already designed to scale in other ways? In other words, you can't just install a second instance of Oracle to get higher throughput; you have to use a different approach to scale up.

A benefit of containers vs. VMs is the ability to scale up using less hardware, and advocates say you can speed up code delivery as well. But those glowing assessments sometimes gloss over implementation issues.

The Apache Mesos project puts it this way on the GitHub site for Marathon, its container orchestration framework: "Clients simply connect to the well-known defined service port and do not need to know the implementation details of discovery. This approach is sufficient if all apps are launched through Marathon."

To illustrate, a Java Database Connectivity (JDBC) connection isn't a messaging layer protocol; it's a persistent connection with hardwired IP addresses. A reverse proxy server is also hardwired to a target IP. If you change your JDBC client or reverse proxy server, you have to open firewalls and routes in addition to changing XML and other configuration files. So, the app isn't as portable as containers are supposed to make it.
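To make that concrete, here is a rough sketch of what moving a database behind a JDBC data source can involve on a Linux host; the file path, IP addresses, port and service name are hypothetical:

# Update the hardwired IP address in the JDBC data source definition.
sed -i 's/192.168.1.20/192.168.1.45/g' /opt/example-app/conf/datasource.xml

# Open the database port through the host firewall so the new route works.
sudo firewall-cmd --permanent --add-port=1521/tcp
sudo firewall-cmd --reload

# Restart the application so it picks up the new connection details.
sudo systemctl restart example-app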

And then there's the change to how administrators work with containers, which can be challenging until you get used to it. When you deploy apps in containers, you can no longer connect to their file systems or use Secure Shell (SSH) to log in. Instead, you have to use Docker commands with Bash to push instructions, set environment variables and so on. All of this will slow down system administrators who are unfamiliar with container management.
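A few of the Docker equivalents look something like the commands below; the container and image names are placeholders:

# Open a shell inside a running container instead of logging in over SSH.
docker exec -it example-app bash

# Set an environment variable at launch rather than editing a file in place.
docker run --detach --env APP_MODE=production example-image

# Read the container's logs without touching its file system.
docker logs example-app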

Service discovery

There are plenty of VM orchestration platforms. There are currently only a few options for Docker containers -- Docker Swarm, Kubernetes and Marathon -- but a handful of commercial products are coming to the market to sort out this administrative problem.

Orchestration products are important because they provide the service discovery that load balancers need. Marathon, for example, can update the HAProxy (High Availability Proxy) configuration without changing the web server configuration.
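Under the hood, a configuration script can pull the list of running tasks from Marathon's REST API and regenerate the HAProxy configuration from it. A rough sketch, with the host name and port as placeholders:

# Ask Marathon which tasks are running and which host:port each one listens on.
curl http://marathon-host:8080/v2/tasks

# A bridge script -- such as the haproxy-marathon-bridge example that ships
# with Marathon -- turns that task list into an updated HAProxy configuration.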

A microservices architecture requires a service registry, and that only works with apps that are programmed to update their status in the registry. Apache ZooKeeper, used by Hadoop and other distributed applications, is an example of such a registry.
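As a rough sketch of what registration looks like, a registry-aware app creates an ephemeral node under an agreed-upon path; the ZooKeeper command-line client can do the same thing by hand (the paths and address here are illustrative):

# Connect to the ZooKeeper ensemble with the bundled command-line client.
zkCli.sh -server zookeeper-host:2181

# Inside the client, register an instance as an ephemeral node; it disappears
# automatically when the app's session ends.
create -e /services/example-app/instance-1 10.0.0.5:8080

# Consumers discover the live instances by listing the parent path.
ls /services/example-app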

But most enterprise apps aren't written to use service registries. Yet they can be, if the software they run on is compatible, as is the case with a registry-aware database server.

So, if your app isn't programmed to read the microservices registry, it can't take advantage of service discovery. That's one factor to consider in your decision to use containers vs. VMs.

Changing IP addresses

One issue with the monolithic VM-based application is that it stores IP addresses and ports in configuration files and host files. If the VM is deployed elsewhere or the database is moved, you would have to change these files.

Docker makes it easier to move containers, add more containers and change configuration. Each time you launch a Docker container, Docker assigns it a new IP address. To fix any IP address dependencies, the --link option adds a hostname entry inside the container that points to the app it needs to connect to. For example, WordPress needs MySQL, so you would run a command along the following lines to connect one to the other -- that is, to set their configuration accordingly:

docker run --detach --name <wordpress-container> --link <mysql-container>:mysql wordpress
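In full, that sequence might look something like the following, using the official MySQL and WordPress images; the names and password are example values:

# Start the MySQL container first, giving it a name the link can reference.
docker run --detach --name example-mysql \
    --env MYSQL_ROOT_PASSWORD=example-password mysql:5.7

# Start WordPress and link it to the database. Docker writes a host entry for
# "mysql" into the WordPress container, so no IP address is hardcoded.
docker run --detach --name example-wordpress \
    --link example-mysql:mysql --publish 8080:80 wordpress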

Container security

Much is made of the perceived lack of security in containers. This is because they share memory and process space with other containers, while VMs do not. One of the most common hacking techniques is to corrupt memory through a buffer overflow, spraying memory with shellcode that lets hackers gain access to the OS. With containers, a memory hack could affect all services running on the server. A VM provides an additional level of isolation, so a memory hack could only affect apps on that VM and not other services on the server.

Finally, programmers have long known the value of separating business, database and GUI logic into different components for reasons of simplicity and speed. But you can't divide a Java Archive (JAR) file into pieces and copy a small piece onto each container without writing interfaces between subroutines and functions, thus creating additional work. Instead, you could add more JBoss servers to boost throughput, but that offers a marginal and nonlinear increase in power.

But the concepts needed to deploy apps to a collection of autonomous containers are already in place. Apps are already spread across the architecture in a way that mimics containerization isolation. You can divide apps into logical units by writing pieces in different languages, such as Node.js for both client and server-side logic, and using JAR files to access basic business functions.

So, you could conclude that containerization works more at the infrastructure layer -- e.g., JBoss -- than at the application layer -- e.g., your shopping cart application. For now, focus on the architecture rather than the code and look at other use cases to see where containers can best be applied.

Next Steps

Test your knowledge of containers and VMs

Embrace container and VM integration

Choose the best place to run containers

This was last published in May 2017
