Over the past several years, container technology has exploded onto the tech scene. It seems like every week brings new announcements around the technology, and that steady stream of news speaks volumes about the impact of containers. We've only just begun to uncover what the adoption of containers will mean for the enterprise data center. For example, when a container VM is deployed to host a containerized application, it can ultimately lead to VM consolidation. In theory, this should also reduce the number of resources a virtualization environment needs, but in practice, that's rarely the case.
One of the significant benefits of containerization is the ability to add or remove containers based on workload demand. That unpredictability makes capacity planning very difficult for infrastructure teams, especially when you want predictable application performance but face an unknown peak. More often than not, this means your virtualization farm needs more physical hosts than day-to-day operations require; you end up provisioning extra hosts to accommodate a best guess at peak demand. On top of managing the physical infrastructure, you also have to monitor how much CPU and memory each individual container VM is using. For a technology that aims to be more efficient, containerization can create real challenges for virtualization administrators.
Azure Container Instances and Kubernetes
Container orchestration tools, such as Kubernetes, can help balance the workload across individual container VMs, but planning for bursts wasn't something you could easily do -- that is, until Microsoft announced its new Azure Container Instances offering. This public cloud offering from Microsoft can help manage peak container workloads for your on-premises virtualization environment. Let's examine what exactly Azure Container Instances is and what it isn't.
Azure Container Instances differs from Amazon EC2 Container Service and Google Container Engine in that it's very simple. Azure Container Instances lets you spin up a single container with your choice of CPU and memory, and usage is billed per second. It doesn't have the robust options those other tools do, but that doesn't mean we can't use it to solve our burst problem. For that, we need to turn to Microsoft's new open source Kubernetes connector software. The connector allows on-premises Kubernetes deployments to select Azure Container Instances as another deployment location. In other words, when developers deploy a container, they'll be able to choose between an on-premises container VM and Azure Container Instances.
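To illustrate how simple the service is, a single container can be created with a few Azure CLI commands. This is a sketch, not a full walkthrough; the resource group name, container name, image, and region below are placeholder values:

```shell
# Create a resource group to hold the container instance
# (group name and region are placeholders)
az group create --name demo-rg --location eastus

# Launch a single container with an explicit CPU and memory allocation;
# billing is per second while the container runs
az container create \
    --resource-group demo-rg \
    --name demo-container \
    --image nginx \
    --cpu 1 \
    --memory 1.5

# Inspect the container's provisioning state
az container show --resource-group demo-rg --name demo-container
```

There's no cluster to build and no hosts to size in advance; you declare the CPU and memory one container needs, and you're billed only while it runs.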
On-premises VMs are typically cheaper for your base, predictable workloads, so it makes sense to use them for steady-state containers. However, when you need to burst quickly, the connection to Azure Container Instances is a huge benefit if you run Kubernetes as your container orchestration engine. Rather than spend limited financial resources on VM hosts needed only to serve peak workloads, you can use this simple offering from Microsoft to better manage your capacity. The pairing of Azure Container Instances and Kubernetes provides agility and is a unique offering that can help when managing virtualization in a container environment.
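From the Kubernetes side, that burst path shows up as an ordinary scheduling decision: the connector registers Azure Container Instances as a virtual node, and a pod is steered to it with a node selector and toleration. A rough sketch follows -- the labels and toleration key reflect the connector's conventions, but treat the specific values as assumptions to verify against your own deployment:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: burst-worker            # placeholder name
spec:
  containers:
  - name: worker
    image: nginx                # placeholder image
    resources:
      requests:
        cpu: "1"
        memory: 1Gi
  # Steer the pod to the virtual node the ACI connector registers,
  # instead of an on-premises container VM
  nodeSelector:
    type: virtual-kubelet
  tolerations:
  - key: virtual-kubelet.io/provider
    operator: Exists
```

Steady-state deployments simply omit the node selector and toleration, so they continue to land on the on-premises hosts; only the pods you explicitly mark for burst run in Azure Container Instances.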