

Are software-defined data centers the next frontier of virtualization?

The hype surrounding software-defined data centers has reached a fever pitch, elevating businesses' expectations and raising new concerns.

The marketing hype around software-defined data centers is spreading well beyond IT circles and shaping opinions. Still, broader awareness isn't necessarily the same thing as broad acceptance.

The term software-defined describes an environment in which basic IT functions -- processing, storage, networking and the like -- are placed in a virtual environment. That environment is then augmented so that other programs can adjust its behavior as needed.

Virtualizing all elements of an IT infrastructure -- networking, storage, CPU and security -- is a big step. In the software-defined data center (SDDC), an entire infrastructure's deployment, provisioning, configuration and operation functions are abstracted from hardware and implemented through software.

All of this raises expectations: Users are supposed to see greater agility, better performance and improved operational efficiency. Businesses running software-defined data centers should be able to reduce their administrative costs and bolster IT security.

The hope is that an IT team would be able to monitor the operations of each application and workload, and then learn the optimal settings for everything, such as how much memory, storage, processing and network bandwidth should be made available. The idea that all of these functions could then adjust themselves to those settings is enticing.
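As a sketch of that idea, a hypothetical monitor could record each workload's observed usage and derive recommended settings from it. Everything below -- the class, the resource names and the simple peak-plus-headroom policy -- is an illustrative assumption, not any vendor's API.

```python
# Illustrative sketch only: record per-workload resource usage and
# recommend settings as the observed peak plus some headroom.
from dataclasses import dataclass, field


@dataclass
class WorkloadMonitor:
    # Observed usage samples per resource, e.g. {"memory_gb": [3.1, 4.0]}
    samples: dict = field(default_factory=dict)

    def record(self, resource: str, usage: float) -> None:
        """Store one usage observation for a resource."""
        self.samples.setdefault(resource, []).append(usage)

    def recommend(self, headroom: float = 1.25) -> dict:
        """Recommend a setting per resource: observed peak times headroom."""
        return {res: max(vals) * headroom for res, vals in self.samples.items()}


monitor = WorkloadMonitor()
for usage in (3.1, 4.0, 3.6):
    monitor.record("memory_gb", usage)
monitor.record("vcpus", 2.0)

print(monitor.recommend())  # {'memory_gb': 5.0, 'vcpus': 2.5}
```

In a real SDDC, the "adjust themselves" step would feed such recommendations back to the virtualization layer automatically; here they are simply printed.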

The concepts folded into the SDDC include the ability for applications, their components, and the underlying networks and storage to operate differently from workload to workload, at various times of day, or even by geography. Multiple instances of these functions can run on a single system, isolated from one another. It also means that additional instances of these functions can be spun up to address increased demand, or shut down when demand lessens.
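The spin-up/spin-down behavior amounts to a scaling policy. A minimal, hypothetical version might size the instance count to demand, bounded by a floor and a ceiling; the capacity figure and limits below are made-up illustrations, not values from any SDDC product.

```python
# Hedged sketch of an elastic scaling policy: one instance per unit of
# per-instance capacity, clamped between a minimum and maximum count.
import math


def desired_instances(requests_per_sec: float,
                      per_instance_capacity: float = 100.0,
                      min_instances: int = 1,
                      max_instances: int = 10) -> int:
    """Return how many isolated instances of a function to run."""
    needed = math.ceil(requests_per_sec / per_instance_capacity)
    return max(min_instances, min(max_instances, needed))


print(desired_instances(250))   # demand rises: 3 instances
print(desired_instances(40))    # demand lessens: scale back to 1
print(desired_instances(5000))  # capped at the ceiling of 10
```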

Functions can be moved automatically or manually from system to system, or even from data center to data center, to improve overall performance, avoid slowdowns, or address failures of systems, networks or storage components. This includes moving workloads between data centers a business owns and operates and those run by a cloud services provider.
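The decision behind such a move can be sketched as a simple placement rule: if a workload's current host has failed or is overloaded, relocate it to the least-loaded healthy host, wherever that host lives -- including a cloud provider's site. The host names, sites and the 85% threshold below are purely illustrative assumptions.

```python
# Illustrative placement sketch: pick a migration target for a workload
# whose current host is failed or overloaded.
from dataclasses import dataclass


@dataclass
class Host:
    name: str
    site: str          # e.g. "on-prem-east" or "cloud-provider"
    load: float        # utilization from 0.0 to 1.0
    healthy: bool = True


def pick_migration_target(current: Host, fleet: list, threshold: float = 0.85):
    """Return a better host if the current one is failed or overloaded, else None."""
    if current.healthy and current.load < threshold:
        return None  # no move needed
    candidates = [h for h in fleet if h.healthy and h is not current]
    return min(candidates, key=lambda h: h.load, default=None)


fleet = [
    Host("a1", "on-prem-east", load=0.95),
    Host("b1", "on-prem-west", load=0.40),
    Host("c1", "cloud-provider", load=0.20),
]
target = pick_migration_target(fleet[0], fleet)
print(target.name)  # a1 is overloaded, so move to the least-loaded host, c1
```

A production scheduler would also weigh network distance, licensing and data gravity before crossing a data center boundary; this sketch deliberately ignores all of that.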

Do software-defined data centers really work?

Just about every supplier of systems, virtualization technology, and monitoring and management software has claimed its products will either create or work in a software-defined computing environment. If a business is willing to select a single supplier and work within the environment created by that supplier, this concept can work and live up to its promises.

Not surprisingly, a number of obstacles stand in the way of mass adoption and deployment of software-defined concepts.

A key issue is the lack of broadly accepted international standards for how the technologies in software-defined data centers should work. Each supplier has its own approach and may address only a specific portion of the environment: some focus solely on the network, while others focus on storage. Another group concentrates on virtual environments contained within VMs, and an emerging group targets both containers and VMs.

Another important consideration is the complexity of these environments. There will be many moving parts, requiring businesses to engage the talents and expertise of a number of different specialists.

If the software-defined environment is based on the products of multiple vendors, the types and depth of expertise required grows dramatically.

Most organizations already face the challenges created by having silos of expertise. Moving to a multivendor, software-defined computing environment often exacerbates this sort of problem. Would it be better to continue the practice of having groups focused on each element of a computing technology, such as database, application frameworks, operating systems, VM software, container software, networks and storage? Or would it be better having multifunction groups housed in each business unit?

Some businesses are happy with independent functional groups. Others make one group responsible for each workload.

Exercise SDDC preparedness

Moving in the software-defined direction bolsters the concept that an organization's IT assets will be seen as a pool of resources, and that this pool can be used as needed by any and all workloads. This would, on the surface, seem to simplify IT planning and execution. As with many things, though, this depends on how a company is organized.

Having one group responsible for planning all IT purchases and operation is likely to be workable for small or medium-sized companies, but not for larger organizations.

It seems clear that suppliers intend to pull businesses into a software-defined world, regardless of whether those organizations really want to go there. The best course of action, then, is to factor it into your organization's plans. Look at the available technology and consider an architecture that would support today's needs as well as foreseeable future ones. Then develop a plan to move sanely into this software-defined future.

That approach is far better than shifting today's islands of computing into an incompatible software-defined infrastructure without an overarching plan.

Next Steps

VMware builds the foundation for SDDC

What does "software-defined" really mean?

HPE realizes its vision for an SDDC
