SAN versus NAS and iSCSI versus NFS are long-running debates, similar to Mac versus Windows. Many enterprises believe they need an expensive Fibre Channel SAN for enterprise-grade storage performance and reliability. In reality, your vSphere infrastructure functions just as well on NFS or iSCSI storage, but the configuration procedures differ between the two protocols.
The block-based vs. file-based storage protocol debate
Whether you run a Windows server, a Linux server or a VMware vSphere server, most servers need access to shared storage. With vSphere, the virtual machines (VMs) running in a High Availability/Distributed Resource Scheduler (HA/DRS) cluster must reside on shared storage, so that if one server goes down, another server can access them.
VMware vSphere has an extensive list of compatible shared storage protocols, and its advanced features work with Fibre Channel, iSCSI and NFS storage. Fibre Channel and iSCSI are block-based storage protocols that deliver one storage block at a time to the server and create a storage area network (SAN). NFS, by contrast, is a file-based protocol, similar to Windows' Server Message Block (SMB) protocol: it shares files rather than entire disk LUNs and creates network-attached storage (NAS). So which protocol should you use?
The SAN vs. NAS debate
Fibre Channel, unlike iSCSI, requires its own storage network, built on Fibre Channel switches, and offers throughput of 4 Gbps, 8 Gbps or 16 Gbps, speeds that are difficult to match with multiple bonded 1 Gbps Ethernet connections.
However, with dedicated Ethernet switches and virtual LANs exclusively for iSCSI traffic, as well as bonded Ethernet connections, iSCSI offers comparable performance and reliability at a fraction of the cost of Fibre Channel.
The same can be said for NFS when you couple that protocol with the proper network configuration. Almost all servers can act as NFS NAS servers, making NFS cheap and easy to set up. NFS also offers a few technical advantages.
NFS and iSCSI have gradually replaced Fibre Channel as the go-to storage options in most data centers. Admins and storage vendors agree that iSCSI and NFS can offer comparable performance depending on the configuration of the storage systems in use.
Connecting vSphere to an iSCSI SAN
In a vSphere environment, connecting to an iSCSI SAN takes more work than connecting to an NFS NAS.
To demonstrate, I'll connect a vSphere host to my Drobo B800i, an iSCSI-only SAN. Then I'll connect the same host to my Synology DS211+, which offers NFS, iSCSI and other storage protocols. This comparison gives you a good indication of how to administer connections to each storage option.
First, you must enable the iSCSI initiator for each ESXi host on the Configuration tab, under Storage Adapters > Properties. (See Figure 1.)
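The same step can also be scripted from the ESXi command line; a minimal sketch using esxcli, assuming you are working with the host's software iSCSI initiator:

```shell
# Enable the software iSCSI initiator on the host -- the CLI equivalent
# of ticking "Enabled" in the initiator's Properties dialog.
esxcli iscsi software set --enabled=true

# Confirm the initiator is now active.
esxcli iscsi software get
```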
Next, you need to tell the host how to discover the iSCSI LUNs. In this example, I use static discovery by entering the IP address of the iSCSI SAN in the static discovery tab.
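Static discovery can likewise be configured with esxcli; a sketch in which the adapter name (vmhba33), target IP and IQN are placeholder assumptions to be replaced with your own values:

```shell
# Point the software iSCSI adapter at the SAN using static discovery.
# vmhba33, 192.168.1.50 and the IQN are examples -- substitute the
# adapter name and target details for your own environment.
esxcli iscsi adapter discovery statictarget add \
    --adapter=vmhba33 \
    --address=192.168.1.50:3260 \
    --name=iqn.2005-06.com.example:storage.lun1

# List the configured static targets to verify the entry.
esxcli iscsi adapter discovery statictarget list --adapter=vmhba33
```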
Once you enable the iSCSI initiator and the host discovers the iSCSI SAN, you'll be asked whether you want to rescan for new LUNs. As you can see in Figure 2, the host discovered a new iSCSI LUN.
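The rescan prompt has a command-line equivalent as well; a sketch with esxcli:

```shell
# Rescan all storage adapters so the host picks up the new iSCSI LUN
# (the CLI equivalent of answering "yes" to the rescan prompt).
esxcli storage core adapter rescan --all

# List attached SCSI devices; the newly discovered LUN should appear.
esxcli storage core device list
```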
An iSCSI LUN that is already formatted with VMware's VMFS file system is automatically added as available storage; new, unformatted LUNs must first be formatted with VMFS in the storage configuration section.
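From the shell, formatting and verification look roughly like the following sketch. The device identifier and datastore label are placeholders, and vmkfstools assumes a partition already exists on the LUN (the Add Storage wizard handles partitioning for you):

```shell
# Format partition 1 of the LUN with VMFS-5 and give the datastore a
# label. The naa identifier below is a placeholder for your device.
vmkfstools --createfs vmfs5 --setfsname iSCSI-DS1 \
    /vmfs/devices/disks/naa.600000000000000000000001:1

# Confirm the new VMFS datastore is mounted on the host.
esxcli storage filesystem list
```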
Connecting vSphere to an NFS NAS
To add NFS storage, go to the ESXi host configuration tab under Storage and click Add Storage, then click on Network File System. (See Figure 3.)
You will need to provide the host name of the NFS NAS, the name of the NFS share and a name for the new NFS data store that you are creating.
Within seconds you will be able to create VMs in the NFS share.
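The wizard steps above can also be condensed into two esxcli commands; the host name, share path and datastore name here are example values to replace with your own:

```shell
# Mount the NFS export as a datastore -- the CLI equivalent of the
# Add Storage > Network File System wizard.
esxcli storage nfs add \
    --host=nas01.example.com \
    --share=/volume1/vmstore \
    --volume-name=NFS-DS1

# Verify the new NFS datastore is mounted.
esxcli storage nfs list
```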
Connecting vSphere hosts to either an iSCSI SAN or an NFS NAS delivers comparable performance, subject to the underlying network, the array configuration and the number of disk spindles. Though considered a lesser option in the past, the pendulum has swung toward NFS for shared virtual infrastructure storage because of its comparable performance, ease of configuration and low cost.