Published: 17 Nov 2016
Containers are the hottest software idea in IT. By sharing the common parts of a VM image -- the operating system, management tools and even applications -- containers cut the memory footprint of each instance by a large factor, while saving the network bandwidth spent loading many copies of essentially the same code.
These are not trivial savings. Early estimates of containers supporting three to five times the number of instances that traditional hypervisor-based approaches can manage are proving true. In some cases, such as the virtual desktop infrastructure market, results are even better. Notably, containers can be created and deployed in a fraction of the time it takes for a VM to be made.
The economics of containers are substantially better than those of hypervisor virtualization, but containers are a young technology, and the ecosystem has yet to absorb the -- sometimes painful -- lessons learned from hypervisor virtualization. While many organizations are working with containers at some level, most would admit to serious fears when it comes to keeping them secure.
Security risks still exist, but the worries of operations professionals are beginning to soften as the market matures. In a 2015 survey by container data management vendor ClusterHQ, 61% of respondents cited security concerns as a moderate or major barrier to adoption. In the company's 2016 survey, the number of respondents citing security as a concern fell to 11%.
The most critical issue is multi-tenancy protection. Hypervisors have been around for well over a decade and, more importantly, have gone through several CPU lifecycles. Intel and Advanced Micro Devices have added hardware features to prevent cross-memory hacks under hypervisors.
These features protect systems with no local storage drives, but the advent of local instance stores used to accelerate apps meant that deleted data -- especially on solid-state drives, where blocks are not immediately erased -- could be exposed across tenants. Hypervisor vendors rose to the occasion and now flag blocks as unwritten: if an instance tries to read a block it hasn't yet written, the hypervisor returns all zeros and hides any data remaining in that block.
Without these safeguards, hypervisors would be unsafe, and any tenant could read data belonging to other instances. Sharing a single operating system image across all the containers on a server nullifies the hardware memory-barrier protection, and the storage issue is caught up in the immaturity of container development.
These two problems can be mitigated by running the containers inside a VM. This protects the containers in one VM from a cross-memory exploit of another VM, while the hypervisor provides the needed storage protection. All the major clouds, including Azure, and all the major hypervisors now support containers running this way.
The layers of protection can come at a cost, though. During a scale expansion, the VM may have to be created before the containers can be built, and these technologies operate on different timescales: container deployment is measured in milliseconds, VM builds in seconds. Even with these constraints, VM-based containers are a viable approach and by far the most common method of deployment. There has been considerable work toward lightweight hypervisor deployments. For instance, Intel Clear Containers is a hypervisor built for containers. Among other things, it uses kernel same-page merging to securely share memory pages among VMs and reduce the memory footprint. VMware also supports containers, which -- given its dominance in virtualization -- matters for operational confidence in many shops.
User access controls
Beyond cross-tenancy exploits, containers carry privilege escalation risks, where an app that gains root access can take control of the host. Another problem is a denial-of-service (DoS) attack -- or even a bug-driven issue -- in which a single container grabs all of a server's resources. These problems are much easier to create in container environments. Docker, for instance, does not isolate user namespaces by default, so root inside a container maps to root on the host -- something that would never be the case on a hypervisor-based system.
To secure containers and mitigate escalation attacks, run them as ordinary users rather than root. In Docker, this means adding the -u flag to the run command. Removing SUID flags from binaries in the image bolsters this fix. Isolating namespaces between containers keeps rogue apps from taking over the server's storage space. Control groups can be used to set resource limits and to stop DoS attacks that suck up server resources.
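As a sketch of these controls with Docker, the command below combines them in one invocation; the image name, user ID and limit values are illustrative, and the flags assume a reasonably recent Docker engine:

```shell
# Run as an ordinary user (UID:GID 1000:1000) rather than root,
# block SUID/SGID privilege escalation, drop all Linux capabilities,
# and apply cgroup limits to blunt DoS-style resource grabs.
docker run -d \
  -u 1000:1000 \
  --security-opt no-new-privileges \
  --cap-drop ALL \
  --memory 512m \
  --cpu-shares 512 \
  --pids-limit 100 \
  myorg/webapp:1.4
```

The memory and PID caps are enforced by control groups, so a runaway or compromised container hits its own ceiling instead of starving its neighbors.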
Another major protection from attack, especially in the private cloud, is to use trusted public repositories for images. Today, almost all mashups use code from many public repository sources to build out an app. This saves enormous development time and cost, so it is an essential practice in a world of tight IT budgets. Still, horror stories abound. Even "high-class" repositories can propagate malware, and there are recent cases of such code remaining hidden in popular libraries for years.
Code from trusted repositories is still vulnerable to virus penetration, and image control is a critical problem in any environment today, not just containers. Use trusted repositories that support image signatures, and use those signatures to validate images both when loading them into your library and later when launching them in a container. Services exist for signature validation, and proper use of them will limit your exposure to malware. Docker Hub and Quay are two trusted public container registries.
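With Docker, one concrete way to enforce signature validation is Docker Content Trust; a minimal sketch, where the repository name is illustrative:

```shell
# Enforce signature verification for pull, push, build and run.
export DOCKER_CONTENT_TRUST=1

# The pull now succeeds only if the tag carries a valid signature
# published to the registry's Notary server; unsigned images are
# rejected instead of silently loaded.
docker pull myorg/webapp:2.1
```

Because the setting is an environment variable, it can be baked into build servers and orchestration hosts so that unsigned images never enter the pipeline.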
Another problem that is not particular to containers, but is far more serious in the microservice environments typically used with them, is that users expect control over the app mashups they run. This makes repository control a bit like herding cats. Forced user-level validation of both source identity and signatures is critical for a stable, secure environment. The Docker security benchmark on GitHub is a utility that checks for many of the known security problems. Building your own validated image library for users to access may be the ultimate embodiment of this approach, but coders are hard to discipline, and a lack of agility on the librarians' part will almost guarantee that the library is bypassed. Any repository has to have very tight security, with limited access for obtaining images from third-party repositories and no write access for the user base. To facilitate image library management, you can use Docker's registry server or CoreOS Enterprise Registry.
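The Docker security benchmark mentioned above can be run directly from its GitHub repository; a sketch, assuming a Linux host with Docker installed:

```shell
# Fetch and run the Docker security benchmark, which audits the
# host and daemon configuration against dozens of known best
# practices and reports each check as PASS, WARN or INFO.
git clone https://github.com/docker/docker-bench-security.git
cd docker-bench-security
sudo sh docker-bench-security.sh
```

Running it after every daemon upgrade or host change catches configuration drift before it becomes an exposure.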
Validation and encryption
Version control of the applications and operating systems in an image is a related vulnerability area. Again, this is not solely a container question, but the very rapid evolution of containers -- and Docker's tendency to tear down code structures and replace them with each new release -- demands strong discipline. Misaligned versions often open up an attack surface.
Image scanner tools are available to automate image and file validation: Docker has Nautilus, and CoreOS offers Clair. The question of encrypting images at rest or in motion is still somewhat unsettled. Generally, the more vulnerable files are encrypted, the more protection there is against malware. For images, encryption should protect against virus or Trojan attacks on the image code and, coupled with signature scanning and validated image lists, should keep malware at bay. Here, containers have a distinct advantage over hypervisors: with far fewer image files flying around, the encryption and decryption load on servers is much lower.
The container daemon is another point of vulnerability. It is the process that manages containers and, if compromised, can access anything in the system. Limiting access is the first step in securing the daemon. Encrypting transfers is essential if the daemon is exposed to the network, and running a minimal Linux configuration with only limited administrative tools reduces the attack surface.
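For Docker, encrypting daemon traffic means enabling TLS on the daemon socket; a sketch, assuming a CA and server certificates have already been generated (the file paths and host name are illustrative):

```shell
# Start the daemon so it accepts only TLS connections from clients
# whose certificates are signed by the trusted CA.
dockerd \
  --tlsverify \
  --tlscacert=/etc/docker/ca.pem \
  --tlscert=/etc/docker/server-cert.pem \
  --tlskey=/etc/docker/server-key.pem \
  -H tcp://0.0.0.0:2376 \
  -H unix:///var/run/docker.sock

# Clients must then present their own signed certificate:
docker --tlsverify \
  --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
  -H tcp://daemon-host:2376 info
```

With --tlsverify set, an attacker who can reach port 2376 but holds no signed certificate gets a refused connection rather than root-equivalent control of the host.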
With all of the above, we have the basics of creating secure containers and building their images. Protecting container stacks while they run is still a work in progress. There is a good deal of startup activity in monitoring, which provides a first step in controlling what is typically a volatile instance mix. cAdvisor is a good open source tool for monitoring containers, and Docker offers the stats command. On their own, these tools guarantee data overload, so their output should feed into a suitable analytics package such as Splunk or Sumo Logic's Docker Log Analysis App. By establishing a baseline of normal operations, traces of abnormal access due to malware can be spotted and remediated.
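cAdvisor runs as a container itself; the invocation below follows the project's documented quick start, mounting the host paths it monitors mostly read-only:

```shell
# Launch cAdvisor to expose per-container CPU, memory, network and
# filesystem metrics on port 8080.
docker run -d \
  --name=cadvisor \
  -p 8080:8080 \
  -v /:/rootfs:ro \
  -v /var/run:/var/run:rw \
  -v /sys:/sys:ro \
  -v /var/lib/docker/:/var/lib/docker:ro \
  google/cadvisor:latest

# Docker's built-in snapshot of the same resource data:
docker stats --no-stream
```

Either output can then be shipped to the analytics layer, where the baseline of normal behavior is established.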
Containers have evolved a long way in just a few years. A secure environment does require strong discipline, but it's notable that the container community is leading in areas such as image management.
We can expect hardware support for containers to arrive in one or two generations of CPUs, matching capabilities available for hypervisors today. When that happens, we can expect a move to simplified bare-metal container deployments. There will be further challenges, such as the incorporation of software-defined infrastructure into the containers ecosystem. But containers are on an equal footing with VMs from a security perspective and way ahead on agility and speed of deployment.