Software-defined technologies are still evolving and should see vast improvements in the years ahead, affecting servers, network, storage -- and perhaps even the data center as a whole. But the case for software-defined technologies remains a work in progress, and there are serious drawbacks that need to be addressed. IT education, interoperability testing and comprehensive proof-of-principle projects are vital to successful deployment in the data center.
What are software-defined technology's limitations?
Although software-defined technologies offer a great deal of promise, there are also a few notable issues that potential adopters should consider.
First, consider the potential impact of latency on a software-defined technology. Remember that server virtualization became efficient largely through the introduction of processor extensions (such as Intel VT-x and AMD-V) dedicated to virtualization support. Before the broad adoption of these instruction set extensions, most servers could support only a handful of virtual machines -- and latency was a serious issue. Adding a software layer to network and storage functions will inevitably add overhead that can affect latency-sensitive workloads.
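The per-operation cost of an extra software layer is easy to demonstrate with a micro-benchmark. The sketch below is purely illustrative -- the two functions are stand-ins, not real I/O paths -- but it shows how even one additional level of indirection adds measurable overhead per call, which compounds at the request rates storage and network stacks handle:

```python
import time

def direct_op():
    # Stand-in for an operation on a direct hardware path (no-op here).
    pass

def layered_op():
    # The same operation routed through one extra software layer,
    # simulated with an additional function call.
    direct_op()

def measure(fn, n=1_000_000):
    # Average wall-clock time per call, in seconds.
    start = time.perf_counter()
    for _ in range(n):
        fn()
    return (time.perf_counter() - start) / n

direct = measure(direct_op)
layered = measure(layered_op)
print(f"direct: {direct * 1e9:.1f} ns/op, layered: {layered * 1e9:.1f} ns/op")
```

Real software-defined stacks add far more than a function call per operation, so profiling latency-sensitive workloads before and after deployment is the only reliable gauge.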
Provisioning can be problematic -- especially when automated or left to end-user decisions. Some workloads are extremely sensitive to memory, CPU and storage allocation, so consider what happens when the application doesn't get enough memory or the allocated storage space is exhausted. IT must be prepared to recognize and address a wide range of possible workload performance problems with software-defined infrastructures.
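One of the simpler provisioning failures to catch early is storage exhaustion. This minimal sketch (the path and threshold are assumptions for illustration) uses Python's standard library to flag a volume that is approaching capacity, the kind of check IT can automate before a workload runs out of allocated space:

```python
import shutil

def check_capacity(path="/", warn_pct=85.0):
    # Return the percentage of the volume in use and warn when it
    # crosses the threshold -- a common failure mode when provisioning
    # is automated or left to end-user decisions.
    usage = shutil.disk_usage(path)
    used_pct = usage.used / usage.total * 100
    if used_pct >= warn_pct:
        print(f"WARNING: {path} is {used_pct:.1f}% full")
    return used_pct

pct = check_capacity("/")
print(f"/ is {pct:.1f}% used")
```

Equivalent checks for memory and CPU pressure round out a basic early-warning net for software-defined infrastructures.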
A software-defined technology can introduce a measure of vendor dependence. For example, a software-defined networking deployment may employ NSX, which is backed by VMware, while Cisco sponsors its Open Network Environment platform. This puts businesses at the mercy of vendor product roadmaps and interoperability matrices. A move to embrace open standards can help, but vendor lock-in should always be a concern for any software-defined initiative.
Deploying a software-defined technology is one thing; managing it can be quite another, so any management platform must use common APIs that provide a full range of capabilities. For example, VMware provides storage APIs such as vSphere APIs for Array Integration (VAAI) and vSphere APIs for Storage Awareness (VASA). But is the management tool capable of defining or setting specific services on a virtual disk? Can it support quality of service for virtual machines or disks, or automatically instantiate storage objects during VM provisioning? Deploying a software-defined technology is no guarantee that it will support every feature or capability you're looking for.
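Before building automation on a management platform, it pays to probe which capabilities the platform actually advertises rather than assuming feature parity. The sketch below is a hypothetical capability check -- the client class, its feature names and its advertised set are all invented for illustration, not a real SDK -- showing the pattern of testing for a feature before depending on it:

```python
# Hypothetical management client -- all names here are illustrative
# assumptions, not part of any real vendor SDK.
class StorageMgmtClient:
    # Capabilities this (imaginary) platform advertises.
    SUPPORTED = {"array_offload", "storage_profiles"}

    def supports(self, feature: str) -> bool:
        # Probe for a capability before automation relies on it.
        return feature in self.SUPPORTED

client = StorageMgmtClient()
wanted = ("storage_profiles", "per_vm_qos", "auto_instantiate_on_provision")
for feature in wanted:
    status = "available" if client.supports(feature) else "NOT available"
    print(f"{feature}: {status}")
```

A proof-of-principle project should run exactly this kind of checklist against the real management APIs before the technology goes into production.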
And finally, there is a cross-section of more familiar concerns: software-defined scalability, adequate insight into the underlying physical environment, support for multiple hypervisors, security and (so often overlooked) support for disaster recovery, backup, snapshots and other data protection schemes. All of these factors will have a profound impact on the data center and the business.