In today’s world of ever-evolving technology, the next big thing is always right around the corner. In fact, new technologies arrive at such a rapid pace that it can be overwhelming to decide what to adopt next and, more importantly, whether you even need it.
Many of these technologies center on virtualization and how to extend the virtual platform even further. Before extending that platform, though, we have to make sure everyone has virtualized in the first place. Yes, a few folks still have not made the leap, and that move is overdue. Any notion that virtualization was simply a fad should have faded by now, and the remaining "server huggers" need to come to grips with the fact that virtualization is here to stay. With that said, once we have virtualized most of our infrastructure, we can look at some of the technologies that can support and extend our virtualized environment.
Containers -- One of the latest innovations is the container. Containers are not a new form of virtualization but a layer on top of the virtualization platform. A container layer bundles an application engine, packaging infrastructure and runtime libraries, allowing the application to run anywhere. The goal here is portability, which should eliminate the classic "it works on my machine" argument. If this sounds familiar, it should: Java and the .NET Framework made the same write-once, run-anywhere promise. For years, applications requiring specific versions of Java have been notorious for forcing multiple Java versions to be installed across multiple web browsers. The .NET Framework does not have the sheer number of versions Java does, but it too tends to require older versions for certain software.
Docker is one of the more popular container platforms today. Whether it can avoid the same fragmentation that plagued Java and .NET remains to be seen; the premise is ideal, but the execution of write-once, run-anywhere has been flawed in the past. So if you’re a software company, should you make the jump to containers? Some experimentation with Docker is warranted, but jumping in with both feet might be premature given that track record.
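To make the portability idea concrete, here is a minimal, hypothetical Dockerfile for a small Python web service. The file name `app.py`, the base image tag and the port are all illustrative assumptions, not anything from a specific project; the point is only that the runtime and its libraries travel with the application.

```dockerfile
# The base image pins the runtime, so the app sees the same
# interpreter and libraries on a laptop, a VM or a cloud host.
FROM python:3.11-slim

WORKDIR /app

# Dependencies are packaged into the image itself,
# not installed separately on each target machine.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY app.py .

# The same image runs unchanged wherever a Docker engine exists:
#   docker build -t my-service .
#   docker run -p 8000:8000 my-service
CMD ["python", "app.py"]
```

Because the runtime and libraries ride inside the image, the "works on my machine" problem shrinks to a single question: is a compatible container engine present on the target host?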
Software-defined networking (SDN) -- In the physical world, computers must be connected to the network to get anything done. Core switches, routers and firewalls provide connectivity and security today and will continue to do so as our infrastructure grows. As more servers have been virtualized, the need for physical ports has decreased while the number of virtual ports has increased. The same infrastructure once designed to support hundreds of physical machines now supports a few dozen hosts running everything virtualized. Software-defined networking is the next logical step in the virtualization journey.
SDN comes with a lot of hype and resistance, though. Cisco network administrators are a dedicated group, and many of them grew up in the industry with the concept of physical separation, something virtualization fundamentally works against. SDN promises to virtualize the hardware aspects of the network, including routers and firewalls. It also goes a step beyond simple virtualization by enabling micro-segmentation: think of it as deploying a customized firewall or router for every virtual machine you deploy, greatly increasing security beyond the standard perimeter.
Cisco is lagging in this effort at the moment, with VMware’s NSX leading the charge. Many people doubted virtualization would take hold, and even fewer thought it would change the data center the way it has. Look for NSX to take the same path without slowing down. While you might not need SDN today, you will likely need it in the near future, so as you upgrade or buy new networking gear, do it with an SDN focus.
Hybrid cloud -- So much has been said about the cloud in general that it is increasingly difficult to figure out what the term means and whether it is truly needed. Clouds come in three varieties: public, private and hybrid. For companies looking to move to the cloud, it's important to determine why. When asked what the organization hopes to gain from the effort, the answers are often complex and vague. The private cloud has been troublesome from the start: end users provisioning their own resources tends to run against most business models, which expect resources to be requested and approved. Public clouds work well if a business is willing to entrust everything to someone else, which is not an appropriate scenario for many companies.
That leaves the hybrid approach: extending what you have internally to an external resource during those times when you need additional scalability. The goal of the hybrid cloud is the best of both worlds, internal security and control combined with the ability to burst to an external resource. For the organization, that is truly an ideal situation. The hybrid cloud's success will depend on ensuring it doesn't bring the worst of both worlds along with the best.
For many companies, this will be a watch-and-see, fingers-crossed situation. It has the potential to be ideal for the business, depending on the complexity, security and cost of an external offering that connects to your internal resources.
All of these technologies will have a place in the modern data center at some point; determining when they are right for you is the key decision. Implementing a technology before the business needs it wastes capital, while waiting until the technology has fully matured can put you behind your competitors and leave you trying to catch up. Ignoring the buzzwords and hype while seeing the value to your business is the key to adopting a technology on a schedule that is right for you and your customers.