

Deploy Docker Swarm and other containers with OpenStack Magnum

OpenStack Magnum is used to deploy and keep track of container clusters built on orchestration engines such as Docker Swarm, Google Kubernetes and Apache Mesos, but OpenStack has other useful container tools as well.

Containers have grown in popularity because they are easier to deploy and much smaller than hypervisor-based VMs. Plus, it's easy to think of a container as a single object dedicated to a specific task. The three most common container orchestration engines are Docker Swarm, Google Kubernetes and Apache Mesos.

With a container, a user can roll out a complete working application, or an important piece of one, in something that's only a few megabytes in size. And containers start up in seconds, as opposed to a hypervisor-based VM, which can take minutes.

The container runs on top of the host OS, but its image also bundles a minimal set of OS files -- the libraries and utilities the application needs. That's necessary because, no matter what's installed, the application needs an OS environment to run in. But, unlike a hypervisor, a container shares the host's kernel and carries no extra overhead, such as hardware emulation.

Containers work on Linux and now have additional support from Windows Server 2016.

OpenStack and Containers

One common use for containers is that you can download a complete working system and start using it right away. For example, do you need to get an Ubuntu VM or Apache Spark environment up and running? Simply install Docker and download whatever self-contained image you need from the public repository of Docker images.
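As a minimal sketch of that workflow -- assuming Docker is installed and the daemon is running -- pulling and running a public image looks like this:

```shell
# Pull the official Ubuntu image from Docker Hub.
docker pull ubuntu

# Run a throwaway container from it; --rm removes the container on exit.
docker run --rm ubuntu cat /etc/os-release
```

The same pattern applies to any image in the public repository, such as a prebuilt Apache Spark environment.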

If you had 50 programmers working in your IT shop and they were each running containers, then you would quickly lose track of what is running where, as there is no central tracking mechanism. That could lead to orphaned containers using up resources -- and running up your monthly cloud computing bill. OpenStack Magnum can solve that problem.

OpenStack Magnum deploys containers and keeps track of them. But, it does more than that. It allows you to abstract your application, spawn a specific number of containers to handle a specific load and then shut those down when they aren't needed.

OpenStack Magnum

OpenStack Magnum is fairly simple to use. For example, to deploy containers there are just three steps:

  1. Use OpenStack Magnum to define a Bay model -- the template that specifies which orchestration engine (Google Kubernetes, Docker Swarm or Apache Mesos) the cluster will run and which resources it will use.
  2. Create a Bay from that model. A Docker Swarm Bay, for example, enables clustering, i.e., running containers across multiple machines.
  3. Run native Docker commands against the Bay to pull the image from OpenStack Glance and launch the Docker container.
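The steps above can be sketched with the Magnum CLI of that era. Names such as swarm-model and my-keypair are illustrative, and later Magnum releases renamed Bays to clusters, so the exact commands vary by release:

```shell
# 1. Define a Bay model: the orchestration engine (COE), image and flavor.
magnum baymodel-create --name swarm-model \
    --image-id fedora-atomic-latest \
    --keypair-id my-keypair \
    --external-network-id public \
    --flavor-id m1.small \
    --coe swarm

# 2. Create a two-node Docker Swarm Bay from that model.
magnum bay-create --name swarm-bay --baymodel swarm-model --node-count 2

# 3. Once the Bay is ready, point the Docker client at it
#    and run containers as usual.
docker run -d --name web nginx
```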

You can use the OpenStack Glance program to store Docker images just as you would hypervisor images. For example, you could download MySQL and then save it in Glance with these two steps:

docker pull mysql

docker save mysql | glance image-create --is-public=True --container-format=docker --disk-format=raw --name mysql

The logical grouping of containers in OpenStack Magnum is called a Bay. A Bay is a collection of nodes -- OpenStack instances -- created by the OpenStack orchestration tool, Heat. Heat defines all the resources a cloud application needs in a template; the running collection of those resources is called a stack. Heat's template model follows the conventions established by AWS CloudFormation.
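For illustration, a minimal Heat template defining a one-server stack might look like the following. The resource name, image and keypair are placeholders, and real Magnum Bays use far larger templates:

```yaml
heat_template_version: 2015-04-30
description: Minimal sketch of a Heat stack with a single server

resources:
  app_server:
    type: OS::Nova::Server
    properties:
      image: fedora-atomic-latest   # placeholder image name
      flavor: m1.small
      key_name: my-keypair          # placeholder keypair
```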

[Figure: Docker Swarm abstracts host resources into a shared pool for Docker containers.]

Other OpenStack Magnum Tools

Beyond OpenStack Magnum, there are two other useful container-related OpenStack projects: Kolla and Murano.

Kolla is a product that lets you deploy OpenStack itself in containers. There are a lot of reasons for doing so, including reducing the complexity of installing OpenStack. One obvious use is to give developers an OpenStack environment to work with. Kolla uses Ansible to do this.
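A rough sketch of an all-in-one deployment with Kolla's Ansible tooling follows. The inventory name and sequence reflect the kolla-ansible workflow, but exact steps and configuration files vary by release:

```shell
# Install the deployment tool and prepare the target host.
pip install kolla-ansible
kolla-ansible -i all-in-one bootstrap-servers

# Sanity-check the host, then deploy OpenStack services as containers.
kolla-ansible -i all-in-one prechecks
kolla-ansible -i all-in-one deploy
```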

Murano lets you catalogue container images and add them to the Horizon dashboard. In addition to your own collection, you can use public catalogues like the OpenStack Community App Catalog, Google Container Registry and the Docker Hub/Registry. This greatly simplifies using and deploying Docker Swarm, Google Kubernetes and Apache Mesos applications by putting them in the Horizon web-based graphical interface.

