In a virtual infrastructure, storage performance depends on one thing: space. You can reduce virtualization storage costs and boost performance by using tiered storage, and you'll save precious space with data deduplication and storage consolidation.
Tiering storage to prioritize data
Storage administrators have long understood that not all data is created equal and that the value of data changes over time -- especially with storage for virtualization. Keeping all your data on expensive shared storage such as Fibre Channel can waste money and hurt performance, but tiered storage brings cost benefits and flexibility.
By implementing several different classes or “tiers” of storage, the less critical or less frequently accessed data can be migrated to slower, larger disks. Tiered storage has become another mainstay of storage technology in virtual environments.
As with thin provisioning, the principal benefit of tiered storage for virtualization is cost. It also enhances performance by distributing storage access across multiple storage systems. The fast and expensive storage at the top tier, or Tier 1, is reserved for mission-critical applications that demand top storage performance. Less critical data—or data that has aged out of the top tier—can be moved to secondary storage, or Tier 2, such as serial-attached SCSI (SAS) disk arrays that offer more storage capacity at a lower cost. As data ages further, it can be moved further down the cost/performance curve to Tier 3: high-volume, low-cost SATA disk arrays that serve as archival storage.
The main drawback of tiered storage for virtualization is the insight it demands. An organization needs to establish the relative value of its data, develop the rules and policies that govern the handling of that data and then implement the mechanisms required to move data according to those rules. Tiered storage for virtualization therefore relies heavily on intelligent automation tools that are available as software but are increasingly integrated into storage systems themselves.
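To make the idea of rules-driven tiering concrete, here is a minimal sketch of an age-based placement policy. The tier names, age thresholds and function are all invented for illustration; real products weigh access frequency, I/O patterns and business value, not just age.

```python
from datetime import datetime, timedelta

# Hypothetical tiering rules: each entry pairs a tier with the maximum
# data age it accepts. Names and thresholds are illustrative only.
TIER_RULES = [
    ("tier1_fc", timedelta(days=30)),    # mission-critical, recently accessed
    ("tier2_sas", timedelta(days=180)),  # aged out of the top tier
    ("tier3_sata", timedelta.max),       # archival, high-volume SATA
]

def assign_tier(last_access: datetime, now: datetime) -> str:
    """Place data on the cheapest tier its age allows."""
    age = now - last_access
    for tier, max_age in TIER_RULES:
        if age <= max_age:
            return tier
    return TIER_RULES[-1][0]  # fall through to the archival tier
```

An automation tool would run a policy like this on a schedule and migrate any data whose assigned tier no longer matches where it currently sits.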
“The key thing that’s changing is the element of automation that’s coming into the process,” said Mark Peters, senior analyst at Enterprise Strategy Group in Milford, Mass. For example, a modern storage subsystem can analyze the most active data and place that data on the fastest drives with little, if any, administrative interaction.
It’s important to understand that tiered storage does not reduce an organization’s total storage consumption, but savings can take place at the high end of storage for virtualization. As older or less used data is moved to lower tiers, the demand for fast high-performance storage is eased, and this can have a dramatic impact on high-end storage purchases.
Save space with data deduplication
In addition to adding tiered storage and prioritizing data needs, you can save storage space by using data deduplication. Much of the data that is stored within an organization contains duplicate information. There may be multiple copies of the same file, but it is also possible to identify redundant blocks and even more granular data elements. Data deduplication identifies and removes redundant file data, saving only a single instance of any data element to disk.
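The block-level variant described above can be sketched in a few lines: split the data into fixed-size blocks, fingerprint each block with a hash, and store each unique block only once. This is a simplified illustration; the function names are invented, and real products typically add variable-length chunking so that an insertion does not shift every subsequent block's alignment.

```python
import hashlib

def dedupe_blocks(data: bytes, block_size: int = 4096):
    """Store one copy of each unique fixed-size block.

    Returns (store, recipe): `store` maps a SHA-256 digest to the block's
    bytes, and `recipe` lists the digests needed to rebuild the original data.
    """
    store, recipe = {}, []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)   # keep only the first copy seen
        recipe.append(digest)
    return store, recipe

def rebuild(store, recipe) -> bytes:
    """Reassemble the original data from the stored unique blocks."""
    return b"".join(store[d] for d in recipe)
```

Fifteen identical virtual desktop images would produce fifteen entries in the recipe for each shared block, but only one stored copy, which is where the dramatic savings in virtual environments come from.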
Data deduplication can potentially yield massive reductions in the storage requirements of virtualization. It's not unusual to see savings ratios of 25:1 or 30:1 in backups, which usually contain a lot of redundant data, said Ray Lucchesi, president and founder of Silverton Consulting Inc. in Broomfield, Colo. Data deduplication is also moving from a secondary storage technology to the forefront of primary storage in subsystems from NetApp, EMC and others.
“Deduplication is one of those capabilities that ultimately most enterprise storage systems will come out with,” said Lucchesi, adding that deduplication in primary storage allows data reductions ranging anywhere from 45% to 50%, depending on the actual data being deduplicated.
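The two ways of quoting savings above are simple arithmetic transformations of each other: an N:1 ratio saves 1 - 1/N of the space, so the 25:1 backup figure means roughly 96% saved, while the 45% to 50% primary-storage reduction corresponds to a more modest ratio of about 1.8:1 to 2:1. A quick conversion, with hypothetical helper names:

```python
def reduction_from_ratio(ratio: float) -> float:
    """Fraction of space saved by an N:1 dedup ratio (e.g. 25:1 -> ~0.96)."""
    return 1 - 1 / ratio

def ratio_from_reduction(saved: float) -> float:
    """N:1 dedup ratio implied by a fractional saving (e.g. 0.5 -> 2:1)."""
    return 1 / (1 - saved)
```

This is why backup dedup ratios always sound far more impressive than primary-storage ones: backups contain many near-identical copies, while primary data carries less repetition.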
For large organizations, the promise of slashing storage needs almost in half is a potential savings that cannot be overlooked—especially in virtual environments where there may be hundreds or even thousands of virtual machines or desktop images sharing identical operating systems, drivers and other redundant content.
One of the remaining challenges for data deduplication is the processing requirement—especially for deduplicating primary storage data in real time. Processing overhead was not a big concern for secondary storage such as backups or archive appliances, but deduplicating primary data on the fly can stress some storage systems. Experts recommend thorough performance testing before adopting data deduplication in primary storage.
Save space with storage consolidation
A significant amount of virtualization storage can be wasted with direct-attached storage (DAS) on individual servers. The need to reduce virtualization storage costs is driving the push toward storage consolidation—replacing islands of underused DAS with some form of central shared storage, such as network-attached storage (NAS) or a storage area network (SAN). NAS and SANs are easier to provision and manage than DAS, allowing better storage use and potentially better performance. The introduction of unified, consolidated storage supports NAS and SAN on the same storage platform.
“Moving away from DAS to a more centralized shared storage environment is a good idea,” said Pierre Dorion, data center practice director at Long View Systems in Denver, but added that organizations must be big enough for the storage consolidation to make sense.
Shared storage really is a necessity in virtual environments, where VMs load from and back up to shared storage resources, and it also supports live migration. But storage consolidation benefits larger shops even when virtualization is not yet deployed. Consolidated storage on a shared platform is also easier to back up and protect with high-availability technologies such as RAID and snapshots.
Stephen J. Bigelow, a senior technology editor in the Data Center and Virtualization Media Group at TechTarget Inc., has more than 20 years of technical writing experience in the PC/technology industry. He holds a bachelor of science in electrical engineering, along with CompTIA A+, Network+, Security+ and Server+ certifications and has written hundreds of articles and more than 15 feature books on computer troubleshooting, including Bigelow’s PC Hardware Desk Reference and Bigelow’s PC Hardware Annoyances. Contact him at email@example.com.