
Revolutionize the data center with microservices and containers

Software-defined infrastructure, microservices and containers are changing the way data centers are built and operated, resulting in an efficient data center that is easier to use.

Some of the hottest topics in IT today are software-defined infrastructure, microservices and containers. These technologies have enabled a major reshaping of the way data centers are built and operated, and they are also changing the landscape for performance, resilience and ease of use. We are rapidly moving away from the traditional rigid data center structure to one that's agile, responsive and even an instigator of rapid resource reallocation.

Software-defined infrastructure is a simple concept. Take, for example, the control software that defines where data resides in storage or sets up a virtual LAN (VLAN), and move that code into VMs as a set of microservices. These microservices can be spun up and down as needed. The underlying bare-bones storage or switches stay very simple, while standard API structures allow microservices from numerous vendors to talk to any given type of gear. In reality, implementation is a bit tougher and is still a work in progress.
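To make the split concrete, here is a minimal Python sketch of the idea: the "smarts" (a placement policy) live in a small, replaceable control service, while the underlying storage nodes only execute simple primitives. All class and node names here are hypothetical illustrations, not any vendor's API.

```python
class StorageNode:
    """Bare-bones gear: stores volumes, makes no placement decisions."""
    def __init__(self, name):
        self.name = name
        self.volumes = {}          # volume id -> size in GB

    def create_volume(self, vol_id, size_gb):
        self.volumes[vol_id] = size_gb


class PlacementService:
    """Control-plane microservice: decides *where* data resides.

    Instances of this class can be spun up or torn down as needed
    without touching the storage nodes themselves.
    """
    def __init__(self, nodes):
        self.nodes = nodes

    def provision(self, vol_id, size_gb):
        # Trivial hypothetical policy: pick the least-loaded node.
        target = min(self.nodes, key=lambda n: sum(n.volumes.values()))
        target.create_volume(vol_id, size_gb)
        return target.name


nodes = [StorageNode("rack1-sn1"), StorageNode("rack1-sn2")]
svc = PlacementService(nodes)
placed_on = svc.provision("vol-001", 100)
```

Because the policy lives entirely in `PlacementService`, swapping in a different vendor's placement logic means replacing one small service, not reconfiguring the storage hardware.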

Microservices can be chained together to achieve a specific result, though we are still developing standards to make this chaining a solid reality. In operation, an application may request a service and, finding none available, trigger the spawning of a new copy.
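The request-then-spawn pattern described above can be sketched in a few lines of Python. The registry and spawner below are hypothetical stand-ins for whatever orchestration layer actually manages the instances.

```python
class ServiceRegistry:
    """Hypothetical registry: hands out service instances, spawning on a miss."""
    def __init__(self, spawner):
        self.instances = {}        # service name -> list of live instances
        self.spawner = spawner     # callable that creates a new instance

    def acquire(self, service_name):
        pool = self.instances.setdefault(service_name, [])
        if not pool:
            # No copy available: trigger the spawning of a new one.
            pool.append(self.spawner(service_name))
        return pool[-1]


spawn_count = 0

def spawn(name):
    """Stand-in for launching a real container or VM instance."""
    global spawn_count
    spawn_count += 1
    return f"{name}-instance-{spawn_count}"


registry = ServiceRegistry(spawn)
first = registry.acquire("compression")    # no copy exists, so one is spawned
second = registry.acquire("compression")   # the existing copy is reused
```

The key property is that the calling application never cares whether a copy already existed; the registry absorbs that decision.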

Hypervisors versus containers


In an ideal world, the spawned microservice is instantly available. In reality, creating a VM with a hypervisor, loading it with the microservice image on top of an OS and then starting all of it takes minutes. That's nearly an eternity in computer time.

Containers provide an answer to the agility problem. Since containers run on top of an existing host OS, there's no need to load and boot an OS image. That makes a container much smaller in memory; more importantly, a containerized microservice can be up and running in a few milliseconds.

That time difference is crucial for agility. The time it takes to service a particular task in a storage or network microservice may be only a few seconds, so the ratio of overhead and wait time to payload would suffer drastically if it were necessary to wait for a hypervisor.
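A back-of-the-envelope calculation shows why the ratio matters. The figures below are hypothetical round numbers chosen to match the scales in the text (minutes for a VM boot, milliseconds for a container start, seconds for the task), not measured benchmarks.

```python
# Hypothetical figures, matching the orders of magnitude discussed above.
task_seconds = 5.0               # useful work done by the microservice
vm_boot_seconds = 120.0          # minutes-scale hypervisor VM startup
container_start_seconds = 0.005  # milliseconds-scale container startup

# Overhead-to-payload ratios: startup time divided by useful work time.
vm_overhead_ratio = vm_boot_seconds / task_seconds
container_overhead_ratio = container_start_seconds / task_seconds
```

With these numbers, the VM spends 24 times longer booting than working, while the container's startup is a rounding error, about a tenth of a percent of the task time.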

Memory footprint matters, too. Hyper-converged systems share compute resources between applications and storage. Again, it's a payload question: Containers carry a few megabytes of overhead, while hypervisor instances can need as much as a gigabyte. With containers, storage microservices won't take up much space in the servers.

Microservices and containers pose challenges

The rapid creation and destruction of microservices in containers poses a challenge for networking. Often, these microservices need to connect to remote microservices and/or to actual storage devices. To be compatible with the cluster LAN, the storage interconnect has to be Ethernet. Fibre Channel is too hierarchical and, in any case, lacks software-defined infrastructure capabilities. An all-Ethernet platform is simply easier to manage.

One challenge in any software-defined infrastructure system is the publishing of available microservices, a task for orchestration tools to handle. Consider the case where multiple compression algorithms are available: The calling app has to identify which of those algorithms is suitable and then where matching services are available. In the short term, this tends to be somewhat inflexible, though orchestration tools continue to evolve to make the process more flexible.
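The compression example can be sketched as a simple capability lookup. The catalog below, with its service names and endpoints, is entirely hypothetical; in practice the orchestration layer would maintain and query this data.

```python
# Hypothetical catalog of published microservices, keyed by capability.
catalog = [
    {"service": "compress-a", "algorithm": "lz4",  "endpoint": "10.0.0.11:8080"},
    {"service": "compress-b", "algorithm": "zstd", "endpoint": "10.0.0.12:8080"},
    {"service": "compress-c", "algorithm": "zstd", "endpoint": "10.0.0.13:8080"},
]

def find_services(catalog, algorithm):
    """Return the endpoints of every published service offering `algorithm`."""
    return [entry["endpoint"] for entry in catalog
            if entry["algorithm"] == algorithm]

# A calling app that has decided zstd is the suitable algorithm
# then asks where such services are running.
zstd_endpoints = find_services(catalog, "zstd")
```

The inflexibility the text mentions shows up here as the fixed schema: Any new capability the app wants to filter on has to already be a field the catalog publishes.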

Security is just as much an issue with microservices and containers as it is with conventional applications. One could argue that microservices' access to VLAN structures and to storage, in fact, makes their security a critical item in any deployment. Containers now have mechanisms that can make them as secure against intra-tenant hacking as hypervisors, but the sheer number of containers, and the speed at which they spawn and vanish, puts a premium on source control and signature verification for any microservice code that runs.
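A minimal sketch of that verification step, assuming a trusted manifest of known-good image digests: Before any microservice image is run, its digest is checked against the manifest. Real deployments would use cryptographic signatures and a proper trust chain rather than the bare hash table shown here; all names and data are illustrative.

```python
import hashlib

# Hypothetical manifest mapping image names to trusted SHA-256 digests.
trusted_manifest = {
    "compress-svc:1.2": hashlib.sha256(b"compress-svc-1.2-image").hexdigest(),
}

def is_trusted(image_name, image_bytes):
    """Allow an image to run only if its digest matches the manifest."""
    expected = trusted_manifest.get(image_name)
    actual = hashlib.sha256(image_bytes).hexdigest()
    return expected is not None and actual == expected

ok = is_trusted("compress-svc:1.2", b"compress-svc-1.2-image")
tampered = is_trusted("compress-svc:1.2", b"compress-svc-1.2-image-TAMPERED")
```

Because containers spawn and vanish so quickly, this check has to be automatic and fast enough to sit in the launch path, which is why digest comparison, rather than manual review, is the practical gate.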

Software-defined infrastructure, microservices and containers are about to revolutionize the data center, both independently and acting in concert. The result should be IT setups that are much easier to use, are run by tenant users rather than painstakingly configured by central IT and overall are much more agile and more efficient in resource utilization.

Next Steps

Learn about software-defined applications and infrastructure

Consider container networking software

Evaluate different cloud provider container services

This was last published in November 2016






How do you think containers and microservices will continue to affect the data center?
I see great potential for microservices to be the logical decoupling of major task operations and to form an assembly alphabet. The big bugaboo is the insertion of hijacking tools into that alphabet to corrupt final constructs. The flexibility of today, I expect, will yield to a rigid structural construction framework that reduces the ability to insert malicious routines.