This article can also be found in the Premium Editorial Download "Virtual Data Center: The promise and perils of server virtualization."
Moving to a virtual infrastructure has a major impact on storage and storage networking architecture. But the effects can vary greatly, particularly for organizations in different phases of storage networking adoption.
Early server virtualization implementations relied on storage area networks (SANs) and particularly Fibre Channel (FC) SANs to create the shared storage necessary for key availability functions. But today, storage choices have broadened. Now VMware also supports virtual machines (VMs) on both iSCSI SANs (often called IP SANs) and Network File System-based network-attached storage (or NFS/NAS), so users have additional options in terms of storage architectures. And that’s good news for those just starting virtualization projects; they can reap the benefits of virtualization without being forced into the cost and complexity of FC storage.
Understanding what is required to implement specific functions, along with the considerations and ramifications of those decisions, can make the difference between a successful move to a virtual environment and a troubled one. With the rise of x86 server virtualization, several misconceptions have emerged concerning both storage networking and virtual servers. Before we discuss recommendations for the best storage architecture for your virtual environment, it's worth examining some of these myths.
Common storage myths
Myth No. 1: Server virtualization requires a SAN.
It doesn't. Virtual machines can run on direct-attached storage, and VMware also supports NAS. Shared storage of some kind is required only for key availability features such as VMotion, and that shared storage can be an FC SAN, an iSCSI SAN or NAS.
Myth No. 2: SAN means Fibre Channel, NAS means IP.
In the early days of storage networking, all SANs were Fibre Channel. Today storage area networks can also be based on IP using the iSCSI protocol. To understand what defines a SAN (either Fibre Channel or iSCSI), think of it as a virtual SCSI cable that has a network inserted between the server and the storage. The application is still doing SCSI commands (i.e., reads and writes). But rather than executing over a local SCSI cable to a logical unit number (LUN) on a direct-attached storage device (owned by the server), the commands are sent across a network to a LUN on a storage device that can be shared by other devices on the network.
Fibre Channel SANs use the Fibre Channel Protocol, which is SCSI over Fibre Channel. They require an FC host bus adapter (HBA) in the server that connects to an FC fabric and then to an FC storage adapter in the storage device.
With iSCSI, an application still does SCSI reads and writes, which are then converted to the iSCSI protocol and sent down through the TCP/IP stack to a network interface card and across an Ethernet network to a specified IP address of an iSCSI storage device. VMware now supports both hardware and software iSCSI initiators (i.e., in the server making a read or write request) and targets (i.e., the storage device).
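To see how little is involved on the initiator side, here is what connecting to an iSCSI target looks like on an ordinary Linux host using the open-iscsi tools. The array address and target name below are placeholders, and the exact steps vary by initiator:

```shell
# Ask the array at 10.0.0.5 (hypothetical address) which targets it offers
iscsiadm -m discovery -t sendtargets -p 10.0.0.5:3260

# Log in to a discovered target; its LUN then appears to the host
# as an ordinary local SCSI block device (e.g., /dev/sdb)
iscsiadm -m node -T iqn.2008-01.com.example:storage.lun1 \
    -p 10.0.0.5:3260 --login
```

From that point on, the operating system partitions and formats the LUN exactly as if it were direct-attached storage; the network in the middle is invisible to the application.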
Because iSCSI SAN adoption has only recently gained traction, and because VMware support for iSCSI is also in its infancy, the majority of VMware production hosts today are attached to Fibre Channel SANs. Both analysts and VMware have estimated that 70% to 90% of VMware implementations currently use FC SANs.
As server virtualization adoption grows, particularly among midsized and smaller companies that have not implemented Fibre Channel SANs, iSCSI implementations are increasing. According to Network Appliance Inc., a leading supplier of enterprise-class NAS and iSCSI arrays, approximately 5,000 of its customers use VMware technology with either iSCSI or NAS. The bottom line: You can have a SAN with IP and iSCSI instead of Fibre Channel.
Myth No. 3: SANs are a way to share data.
SANs are a way to share storage, not data or files. Both FC and iSCSI SANs use block-based storage, so the SCSI reads and writes at a block level are routed over a network to a shared storage device on the network. The server still does all file-level handling and locking locally.
Servers have access only to their own specific LUNs on the storage device. SANs are designed to enable servers to share storage devices, not to share files. If multiple servers have to share files, you need network-attached storage. With NAS, when multiple servers attempt to access the same file at the same time, NAS locking mechanisms prevent file corruption.
Think of a SAN as having a network inserted between the server's file system management function and the storage device. Think of NAS as inserting the network between the server and the file system management (along with the storage). In other words, with a SAN, all file management is done locally, with block access to the storage; with NAS, all file management is done on the NAS box, which provides file-level access to the server.
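To make the distinction concrete, the kind of file locking a NAS device provides can be illustrated with POSIX advisory locks. This is only an analogy, not how an NFS filer is implemented; it shows why a second writer is held off while the first holds a lock on a shared file:

```python
import fcntl
import tempfile

# Scratch file standing in for a file on a shared NAS export
path = tempfile.NamedTemporaryFile(delete=False).name

# First "server" opens the file and takes an exclusive lock
writer1 = open(path, "w")
fcntl.flock(writer1, fcntl.LOCK_EX)

# Second "server" tries to take the same lock without blocking;
# the attempt fails because the first lock is still held
writer2 = open(path, "w")
try:
    fcntl.flock(writer2, fcntl.LOCK_EX | fcntl.LOCK_NB)
    second_lock_acquired = True
except BlockingIOError:
    second_lock_acquired = False

print(second_lock_acquired)  # False: the lock prevents a concurrent writer

# Once the first lock is released, the second writer can proceed
fcntl.flock(writer1, fcntl.LOCK_UN)
fcntl.flock(writer2, fcntl.LOCK_EX | fcntl.LOCK_NB)
```

With a SAN, by contrast, there is no such arbiter in the storage path: each server sees a raw block device and manages its own file system, which is why SANs share storage devices rather than files.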
Factors in considering storage for virtual environments
Once you’ve dispensed with common misconceptions about storage for virtual environments, here are the major factors to consider when choosing among storage options:
- Performance
- Complexity and cost
- Application requirements
- Impact on backup and disaster recovery
1. Performance: For quite some time, Fibre Channel SAN was considered the performance storage solution. But now companies like Dell Inc.'s EqualLogic and LeftHand Networks offer support for 10 Gigabit Ethernet (GbE) in their iSCSI SANs, which are supported by VMware ESX 3.5. So the FC performance argument has lost steam.
Many users run iSCSI SANs on both 1 and 10 GbE with more than sufficient performance. In general, iSCSI is likely to give you a higher performance profile than NAS, although some customers have gotten better performance on NAS, depending on workload, configuration and so on. Most storage experts agree that the performance data is conflicting and that no single answer fits all scenarios.
Network transport is not the only determinant of storage performance. If you require performance features that you can get only in a specific model of a high-end storage array, then that may well determine your choice. During the early and middle stages of deployment, however, many of the applications you virtualize won’t require this level of performance. As always, it’s important to measure and characterize an application’s behavior and seek out other users who have virtualized similar workloads.
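When it comes to characterizing a workload, even a crude measurement beats guessing. The sketch below times sequential reads from a file, which could live on a candidate datastore; the file name and sizes are placeholders, and real characterization should also cover random I/O and write patterns:

```python
import os
import time

def measure_read_throughput(path, block_size=64 * 1024):
    """Sequentially read a file and return throughput in MB/s."""
    total = 0
    start = time.monotonic()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.monotonic() - start
    return (total / (1024 * 1024)) / max(elapsed, 1e-9)

# Create a small scratch file for demonstration; in practice, point
# this at a file residing on the storage you are evaluating
with open("scratch.bin", "wb") as f:
    f.write(os.urandom(4 * 1024 * 1024))  # 4 MB of test data

print(f"{measure_read_throughput('scratch.bin'):.1f} MB/s")
```

Numbers like these, gathered on both the current platform and the candidate one under a representative load, turn the FC-versus-iSCSI-versus-NAS debate into a comparison you can actually settle for your own environment.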
2. Complexity and cost: If you have already implemented FC, you’re all too familiar with the complexities it brings (new protocol, new switches, HBAs, new media, and new management issues such as FC zoning) as well as the additional cost. For those who’ve successfully gotten over the Fibre Channel hump, you have a strong SAN platform that you can leverage with virtual servers. If you haven’t gotten over the hump, iSCSI or NAS will both be simpler and less costly.
3. Application requirements: When deciding between SAN and NAS, the considerations are nothing new. For large amounts of non-file-based data, such as SQL Server and Exchange databases, SANs are still the preferred storage platform for virtual servers, whether you go with FC or iSCSI. If you need file management and file sharing across servers, that's what NAS was designed for and what it does well from physical or virtual servers. If you're already using NAS and it works well, leave it alone. If it ain't broke, don't fix it.
But virtualization creates its own requirements. If, for example, VMotion is a strategic requirement for you, you have to rule out DAS. If you need to cluster Windows VMs using Microsoft Cluster Server, then FC SAN is currently your only option.
4. Backups: Moving to a virtual environment means that some of what used to reside on physical DAS is now a Virtual Machine Disk Format (VMDK) file within a Virtual Machine File System (VMFS) data store. LUNs that used to reside on FC SANs are still available but are now mapped through Raw Device Mapping (RDM). IP storage remains the same, either iSCSI SANs or NAS, sitting on the IP network. Backing up this environment presents challenges and involves several options, which may need to be combined to offer the greatest flexibility and fastest restore options.
(a) File-level/agent-based backup: You can run backup software agents in each VM and continue backups as before. But this method addresses only file-level backup and restore. If, for example, a disk failure affects the VMFS files (including VMX and VMDK files), no backups are available to restore the VM itself. In addition, with multiple virtual servers running on one physical box, the overhead of the agent-based approach, and its effect on the VMs sharing that box, is less than optimal.
(b) VMFS snapshot backups: VMware allows you to snapshot VMs and back up the entire VM by backing up its VMDK and configuration files. Through VMware Consolidated Backup (VCB), a backup server can mount a VMFS LUN and read it using a special driver, then issue a command to VMware to create a snapshot of the VM on the LUN. The snapshot is then copied to local storage so that it can be backed up. VCB now supports both SANs (FC and iSCSI) and NAS. (Note that if you use VCB to back up VMs on an FC SAN, it must run from a physical server to read the disk directly. If backing up iSCSI, it can run from within a virtual server.) VCB gives you a full backup of a VM to send to tape or use for disaster recovery, but it is not streamlined for file-level backup and restore.
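As a rough sketch of what a VCB full-VM export looks like from the backup proxy, assuming VCB is installed there; the host name, credentials, VM name and destination path below are all placeholders, and the exact options depend on your VCB version:

```shell
# Export a snapshot-based full image of a VM to the proxy's disk,
# where conventional backup software can then pick it up
# (host, credentials, VM name and path are placeholders)
vcbMounter -h esx01.example.com -u backupuser -p secret -a name:mail-vm01 -r /backups/mail-vm01 -t fullvm
```

The exported directory contains the VM's disk and configuration files, which is exactly the unit you need for a bare-VM restore or for shipping offsite for disaster recovery.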
(c) SAN storage array snapshots: To use the built-in snapshot capabilities of FC storage arrays, you must use RDM. Although the disk still appears to the virtual machine as a SCSI disk (the VM does not have access to the HBA), RDM enables array functions like snapshots and cloning.
Whatever flavor of storage best meets your needs, server virtualization and network storage are the way of the future. Workloads, performance needs, backup architecture and many other factors help determine the optimal storage architecture for your IT shop. If that sounds like analysis paralysis, here is a simple decision tree based on your current storage situation:
Existing SAN. If your organization has already implemented Fibre Channel, the next move is easy. VMFS on an FC SAN is an excellent choice for a virtual data store, with proven performance. If you have implemented an iSCSI SAN, VMFS on iSCSI is a good choice as well, particularly if you're moving to 10 GbE.
No SAN, but on the verge. If you have been evaluating the overall benefits of a SAN—eliminating islands of unequally utilized storage tied to servers—but FC has been too daunting, the move to virtual servers is the perfect time to implement an iSCSI SAN. You gain the benefits of SANs for both servers and storage.
No SAN, no interest. If even iSCSI is too daunting, using NFS as the data store platform is an option for small environments that don't face major performance issues (and even this point is debated by some). In the hierarchy of network storage, iSCSI is easier to manage than an FC SAN, but NAS is the easiest. If you currently run NAS, implementing VMware data stores on NAS is an easy way to get started. Depending on your consolidation ratios, workloads and NAS vendor, NAS may deliver what you need. As you move forward, monitor VM and disk performance, and compare it with iSCSI for your environment and workloads.

Virtualization and storage technologies like iSCSI have emerged at a time when there's an unprecedented amount of knowledge sharing going on via the Internet. Find out what those who have already deployed these technologies have done; check out VMware user groups and discussion forums, and engage professional help if you need it. There's a lot to learn, but even more to gain.
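For a sense of how low the barrier is, here is what attaching an NFS export as a VMware data store looks like from the ESX 3.x service console. The filer name, export path and data store label are placeholders:

```shell
# Mount an NFS export as a VMware data store named "nas_datastore"
# (server name and export path are placeholders)
esxcfg-nas -a -o filer01.example.com -s /vol/vmstore nas_datastore

# List configured NAS data stores to confirm the mount
esxcfg-nas -l
```

One command per host, no HBAs, no zoning: that simplicity is exactly why NAS is the natural on-ramp for shops with no SAN and no appetite for one.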
About the Author
Barb Goldworm is the president and chief analyst at Boulder, Colo.-based Focus Consulting, a research, analyst and consulting firm that specializes in systems, software and storage. She recently published the book Blade Servers and Virtualization: Transforming Enterprise Computing While Cutting Costs. To view Barb's upcoming Focus Research Series on desktop and application virtualization, visit www.focusonsystems.com.
This was first published in March 2008