As IT transitions into the role of a service provider, the traditional practice of manually provisioning compute, storage and networking resources is under increasing pressure to keep pace. It's no longer enough to submit a service ticket and wait days for IT to set up a new virtual machine or carve out a virtual private network. Users expect prompt provisioning -- or even the ability to provision resources themselves. To meet these demands for flexible and efficient provisioning, data centers are exploring a diverse array of emerging software-based technologies that manage VMs, storage, networks and even entire data centers. Let's take a closer look at software-defined technologies and see what's required to deploy them successfully.
What does "software-defined" really mean? What criteria make something "software-defined?"
Any "software-defined" technology is really about resource abstraction and provisioning, which is the key principle of virtualization.
Virtualization allows computing resources to be abstracted from the underlying physical hardware. Once physical resources are abstracted into virtual resources, software tools can reallocate those virtualized resources to operating systems and applications -- or change previously provisioned allocations -- on the fly, without ever touching the hardware setup or configuration.
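The idea above can be sketched in a few lines of code. This is a minimal, hypothetical illustration -- the `ResourcePool` class and its methods are invented for this example, not any real product's API -- showing how an abstraction layer lets software grow or shrink a virtual allocation on the fly while the physical capacity underneath stays fixed.

```python
class ResourcePool:
    """Toy abstraction layer over a fixed pool of physical capacity."""

    def __init__(self, total_gb):
        self.total_gb = total_gb   # physical capacity: fixed in hardware
        self.allocations = {}      # virtual allocations: fluid, software-managed

    def available_gb(self):
        return self.total_gb - sum(self.allocations.values())

    def allocate(self, name, gb):
        # Provision a new virtual slice out of the physical pool.
        if gb > self.available_gb():
            raise ValueError("insufficient physical capacity")
        self.allocations[name] = gb

    def resize(self, name, gb):
        # Change a previously provisioned allocation without
        # touching the underlying hardware.
        delta = gb - self.allocations[name]
        if delta > self.available_gb():
            raise ValueError("insufficient physical capacity")
        self.allocations[name] = gb


pool = ResourcePool(total_gb=1000)
pool.allocate("app-volume", 200)
pool.resize("app-volume", 300)   # grown on the fly, purely in software
```

The hardware (the `total_gb` figure) never changes; only the software-defined view of it does.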
Just consider an everyday disk drive. File system software abstracts the disk's tracks and sectors and carves the total disk capacity into one or more logical drives, which are logically isolated from each other and presented to the operating system. We don't usually call a file system virtualization software or a "software-defined disk drive," but the principle of resource abstraction is almost identical.
A more recent example is server virtualization. A hypervisor such as Microsoft's Hyper-V, VMware's vSphere or Citrix's XenServer abstracts the server's physical computing resources (such as CPU clock cycles and memory space) into virtual resources. Administrators can then provision those virtualized computing resources to create virtual machines (VMs). We could just as easily refer to VMs as "software-defined servers."
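That provisioning step can be sketched the same way. Again, this is a hypothetical illustration rather than any hypervisor's real API: the `Hypervisor` class below simply carves a host's physical CPU and memory into VM allocations and refuses requests that would overcommit the hardware.

```python
class Hypervisor:
    """Toy hypervisor: carves physical CPU/memory into VM allocations."""

    def __init__(self, cpus, mem_gb):
        self.free_cpus = cpus      # physical cores left to provision
        self.free_mem_gb = mem_gb  # physical memory left to provision
        self.vms = {}

    def create_vm(self, name, cpus, mem_gb):
        # Provision a "software-defined server" from the physical pool.
        if cpus > self.free_cpus or mem_gb > self.free_mem_gb:
            raise RuntimeError("host resources exhausted")
        self.free_cpus -= cpus
        self.free_mem_gb -= mem_gb
        self.vms[name] = {"cpus": cpus, "mem_gb": mem_gb}


host = Hypervisor(cpus=16, mem_gb=64)
host.create_vm("web01", cpus=4, mem_gb=8)
host.create_vm("db01", cpus=8, mem_gb=32)
```

A real hypervisor adds scheduling, isolation and overcommit policies on top, but the core bookkeeping -- physical capacity in, virtual machines out -- is the same.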
Ultimately, the "software" part of any software-defined technology provides the abstraction layer along with the graphical or command-line user interface needed to allocate, monitor and manage those abstracted resources. Application programming interfaces may also be provided to support third-party software products or functional plug-ins. If the abstraction layer fails due to a bug or malware, any virtualized resources or provisioning may be compromised.