Storage virtualization on a SAN is a concept rather than a technology, and because many vendors are targeting this hot new market, there are several different approaches to storage virtualization out there. If you're a data center pro looking to virtualize storage, you need to decide which approach to take.
There are three basic methods of virtualizing storage, characterized by where the virtualization is done: on the host, on the storage array or in the SAN itself.
Products that offer host-based virtualization, such as Storage Foundation for Windows from Veritas/Symantec or the StorageWorks XP12000 from Hewlett-Packard, handle virtualization at the server level. Storage virtualization requires considerable computing power, however, and because host-based virtualization must compete with other host functions for that power, performance may take a hit with this approach.
SAN-based virtualization puts the virtualization function in the network itself, either as a stand-alone appliance or as a function of a switch. The 'virtualizer,' whether appliance or switch function, usually appears as a host to the storage controller and as a storage controller to the host. EMC's Invista network storage virtualization software runs on SAN switches and directors, such as EMC's Connectrix series of switches. Other companies provide a solution that combines hardware and software, as StoreAge Networking Technologies does with its Storage Virtualization Manager (SVM) SAN appliance.
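The core job of any such virtualizer is address translation: hosts see one contiguous virtual LUN, while the virtualizer maps each virtual block onto physical extents that may live on different arrays. The sketch below is purely illustrative (the array names, extent layout and class are hypothetical, not any vendor's implementation) and shows that translation in miniature:

```python
# Illustrative sketch only -- not any vendor's product. A SAN-based
# "virtualizer" presents one virtual LUN to hosts while mapping its
# block ranges onto extents spread across several physical arrays.

from dataclasses import dataclass

@dataclass
class Extent:
    array: str       # backing storage array (hypothetical name)
    start_lba: int   # first physical block of the extent
    length: int      # number of blocks in the extent

class VirtualLun:
    """Maps a contiguous virtual block space onto physical extents."""

    def __init__(self, extents):
        self.extents = extents

    def resolve(self, virtual_lba):
        """Translate a virtual block address to (array, physical LBA)."""
        offset = virtual_lba
        for ext in self.extents:
            if offset < ext.length:
                return ext.array, ext.start_lba + offset
            offset -= ext.length
        raise ValueError("virtual LBA beyond end of LUN")

# One 1,000-block virtual LUN concatenated from two arrays:
lun = VirtualLun([
    Extent(array="array-A", start_lba=1000, length=500),
    Extent(array="array-B", start_lba=0, length=500),
])

print(lun.resolve(10))    # lands in the first extent, on array-A
print(lun.resolve(600))   # spills past block 500, onto array-B
```

Every host I/O passes through a lookup like `resolve()`, which is why the throughput of the appliance or switch doing the translation matters so much.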
When it comes to SAN-based storage virtualization, throughput is a key factor. Whether the virtualization is in hardware or software, it must be able to handle the load without compromising existing SAN performance. In the case of a software-based approach, this may require upgrading switches or directors. Or, in the instance of a hardware-based approach, it may require multiple appliances.
Controller-based virtualization, offered by systems such as Hitachi Data Systems' TagmaStore Universal Storage Platform, puts virtualization in the storage controller, either as a separate appliance or built into the array. Since controller-based virtualization is intimately connected to the storage arrays, controller-based products generally do an excellent job of working with the storage, especially in the event of errors or write failures. This matters because while routine communication between controllers and storage arrays is highly standardized, behavior when something goes wrong is not.
The major drawback to controller-based virtualization is vendor lock-in. In fact, in most cases you're not just locked into a vendor; you're locked into a particular product line, since storage controllers generally work with only one product line. Another disadvantage is that controllers, by their nature, have the narrowest view of the SAN -- essentially, they see only the storage array.
Software vendors, such as Veritas, tend to prefer host-based virtualization. Vendors who specialize in storage, such as Hitachi Data Systems, prefer controller-based virtualization. A company like Hewlett-Packard offers virtualization products in all three categories.
For an administrator contemplating a move to storage virtualization, the decision is a complex one. The virtualization approach a particular vendor takes is only one factor to consider; vendor loyalty and, above all, the specifics of each proposal carry more weight.
ABOUT THE AUTHOR: Rick Cook has been writing about mass storage since the days when the term meant an 80K floppy disk. The computers he learned on used ferrite cores and magnetic drums. He specializes in writing about issues related to storage and storage management.
This article originally appeared on SearchWinSystems.com.