In this age of virtualization and cloud computing, it's a simple assumption that servers will be virtualized. Indeed, a physical application server is now a foreign and outdated concept to many. This thought process has naturally spread to other areas of IT, including both the network and storage aspects of the data center.
Software-defined storage and software-defined networking are poised to continue this journey to what has been called a software-defined infrastructure or data center. Software-defined storage, in particular, has a special place in the future of virtualization and may well be a key tool in future-proofing an infrastructure.
Not all data is created equal
Data in general is a unique challenge for a software-defined infrastructure. Data is critical to our jobs, our companies and even our home lives. We continue to create it at a staggering rate and often hold on to it for years beyond what is needed -- just in case. Our inability to let go of old data, coupled with our ability to create new data, has led to an enormous amount of stale data in our infrastructure. While estimates vary, an average of 70% of data within a company is regarded as safe to delete. The reasons that information is kept around include legal concerns, those just-in-case scenarios or a simple lack of time to verify that specific pieces aren't in fact needed.
This reluctance to eliminate data adds to the unique challenge of server or network virtualization. Each of those technologies is typically active, or "in use," when virtualized. Even when a resource is lightly used, it can still be shared among the other virtual machines.
Storage is different. It is more static in nature. A 100 GB VM is simply that: a 100 GB chunk of data, a large portion of which is likely stale or unused.
At first, storage for virtualization was an expensive solution. Any shared storage technology from the storage area network (SAN) to network-attached storage (NAS) carried a high price tag, so getting the best use of this resource was paramount.
One of the first responses was to place certain types of data in disk tiers. This could be challenging, though, as different types of data were often intertwined with one another. Since this approach offered limited success and required a lot of manual intervention, another fix was needed. Automatic tiering was the next step. This method looks at the data as blocks and separates it based on need. While this worked well, it came with higher costs and increased complexity. New tactics were required, and VMware, owned by traditional storage vendor EMC, was one of the first to introduce software-defined storage to an IT community that needed it.
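The block-based separation that automatic tiering performs can be sketched as a simple promotion policy. The class, block names and threshold below are illustrative assumptions for this article, not any vendor's actual implementation:

```python
from collections import Counter

class AutoTieringPool:
    """Toy block-level auto-tiering: hot blocks are promoted to the fast tier."""

    def __init__(self, promote_threshold=3):
        self.promote_threshold = promote_threshold
        self.access_counts = Counter()
        self.fast_tier = set()   # e.g., SSD
        self.slow_tier = set()   # e.g., spinning disk

    def add_block(self, block_id):
        # New blocks land on the capacity tier by default.
        self.slow_tier.add(block_id)

    def read(self, block_id):
        self.access_counts[block_id] += 1
        # Promote a block once its access count crosses the threshold.
        if (block_id in self.slow_tier
                and self.access_counts[block_id] >= self.promote_threshold):
            self.slow_tier.discard(block_id)
            self.fast_tier.add(block_id)
        return "fast" if block_id in self.fast_tier else "slow"

pool = AutoTieringPool(promote_threshold=3)
pool.add_block("vm1-block0")
pool.add_block("vm1-block1")

for _ in range(3):
    tier = pool.read("vm1-block0")   # frequently read block

print(tier)                          # fast -- promoted after 3 reads
print(pool.read("vm1-block1"))       # slow -- only one access
```

Even in this toy form, the trade-off the article describes is visible: the policy needs per-block bookkeeping, which is where the added cost and complexity of real automatic tiering comes from.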
Storage options for a software-defined infrastructure
Traditional large storage frames still exist, and these SAN and NAS frames have a deep history with virtualization. Sales remain strong for a number of reasons, one of them being performance. Unlike several of the other options, the traditional frame can deliver more consistent levels of performance. Enterprise applications that need guaranteed IOPS still fall into the world of dedicated storage frames and networks.
However, you can still apply a layer of software-defined storage onto an existing frame. Bringing some of the services normally associated with the hardware controllers into the hypervisor layer on older storage frames can help to breathe life into older hardware.
With newer SAN or NAS storage frames, adding an additional software layer most likely will not have much of a dramatic benefit.
An organization looking to consolidate older storage resources and gain additional features and functions needs to keep in mind that the base storage underneath has not changed. Software caching and pre-fetching can improve a system's effectiveness a bit, but performance and capacity have not fundamentally changed. This may become a concern with higher I/O loads.
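The limit described above can be shown with simple weighted-average arithmetic. The latency figures below are illustrative assumptions, not benchmarks of any product:

```python
def effective_read_latency_ms(hit_ratio, cache_ms=0.2, disk_ms=8.0):
    """Weighted average read latency for a software cache in front of slower disk."""
    return hit_ratio * cache_ms + (1 - hit_ratio) * disk_ms

# A warm cache helps a great deal...
print(round(effective_read_latency_ms(0.90), 2))  # 0.98
# ...but under a heavy, cache-unfriendly I/O load the benefit fades,
print(round(effective_read_latency_ms(0.30), 2))  # 5.66
# and the raw capacity of the underlying frame is unchanged either way.
```

This is why a software layer over an aging frame helps most with read-heavy, repetitive workloads and least with the high, random I/O loads the article flags as a concern.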
With the larger storage networks and frames, the cost of storage per VM is normally quite high. As organizations look to virtualize everything, a common concern is physical servers that use large amounts of local storage. With the higher-cost disks in a SAN or NAS, the cost per gigabyte can climb enough that virtualization would no longer be economical -- unless it was possible to use local storage.
Some of these concerns gave rise to converged infrastructure and local storage options. While such alternatives will not be the end of the traditional frame, these new technologies have pushed the larger storage frames to become more innovative and cost-effective. To compete with the newer storage options on the market, the older frames will need to continue to offer more features and cost savings.
With the introduction of VMware's vSAN, organizations can now use local server storage in place of a shared storage solution. The design is based on a combination of solid-state drives (SSDs), Serial Advanced Technology Attachment (SATA) drives and high-speed networking. Storage is pooled across the hosts, and SSD caching combined with RAID configurations protects the data and provides the required performance. The scale and performance of vSAN and other local storage products continue to increase, and those tools need dedicated 10 Gb networking to be truly effective.
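One practical consequence of pooling local storage with RAID-style protection is the capacity overhead of keeping redundant copies. The sketch below assumes simple two-way mirroring across hosts (as in a vSAN failures-to-tolerate setting of 1); the figures are illustrative, not vendor sizing guidance:

```python
def usable_capacity_tb(hosts, capacity_per_host_tb, copies=2):
    """Rough usable capacity of a mirrored local-storage pool.

    With two-way mirroring, every object is stored twice, so usable
    capacity is roughly half of the raw pooled capacity.
    """
    raw = hosts * capacity_per_host_tb
    return raw / copies

# Four hosts contributing 10 TB of local capacity each, mirrored:
print(usable_capacity_tb(4, 10))  # 20.0 TB usable from 40 TB raw
```

This overhead is part of why the lower per-drive cost of local storage does not translate one-for-one into cheaper usable capacity.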
While a local storage option with 10 Gb networking can be cost-effective thanks to lower-cost drives, its ability to scale up effectively is in question. Though vendor specs tout impressive numbers, it is difficult to find evidence of large-scale deployments. Part of this could be a concern that the local storage approach uses a traditional server-class hardware platform for a purpose it wasn't designed for. Many external storage frames and converged infrastructure platforms are much faster and designed for the load.
While vSAN with local storage is a viable option, several questions remain. Will these products be able to truly scale to enterprise-class performance and reliability? Using local storage also limits your choice of hardware platform. What if your environment uses blades? Adding local storage to blades is limited at best, simply because of the form factor. Software-defined storage on local storage is possible, but its role appears best suited to a data center in transition to a converged infrastructure.
The downsides to a software-defined infrastructure