
Selecting storage hardware for a virtual deployment

Storage hardware is a critical component of a host server in a virtual infrastructure. As a result, knowing your requirements and workloads is critical. An expert explores what you should consider when selecting network and storage adapters, and storage types for your VMs.

In the first two parts of this series, we covered some of the choices that you need to make when choosing hardware for virtual servers. Now in part three, the final portion of this series, we cover choosing network and storage adapters, as well as selecting a storage type for your virtual machines (VMs).


Selecting a network adapter
Network interface cards (NICs) are an important component of any virtualization deployment. The number of NICs that you need depends on several factors, such as how many VMs you run, their network workloads, how much redundancy you want, your virtual local area network (VLAN) configurations and whether you use network-based storage. A virtual host needs a minimum of two NICs, and an average host typically uses four to six. Let's break down the factors that influence the number of NICs that you need:

  • VMs and network workloads. In general, the more VMs you have on a host, the more NICs you'll want. The network workload of these VMs is the biggest influence, though: VMs with light workloads need fewer NICs, and heavier workloads need more. As a rule, you'll probably experience other resource bottlenecks before the network becomes an issue on virtual hosts.
  • Redundancy. It's important to have physical NIC redundancy in your virtual switches so that if a single NIC fails, your VMs do not lose network connectivity.
  • VLANs. Your virtual switch configuration, VM placement and the number of VLANs needed are factors as well. By using VLAN tagging, which allows you to use multiple VLANs on a single NIC, you need fewer NICs. If you don't use VLAN tagging, you need a virtual switch and NIC for each VLAN to which your host connects. Also, if you plan on connecting your host to a demilitarized zone (DMZ) network, you should use separate virtual switches and NICs to keep the DMZ isolated from your internal network.
  • Network-based storage. If you plan on using network-based storage, such as a network file system (NFS) or iSCSI, with virtual hosts, you should have at least two network interface cards dedicated to it.
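As a rough illustration, the factors above can be tallied into a minimum NIC count for a host. This is a sketch with illustrative weights, not vendor guidance; the function name and defaults are assumptions of my own.

```python
# Sketch: tally the minimum physical NIC count for a virtual host from the
# factors above. The per-factor counts are illustrative assumptions.

def min_nics(vm_switch_nics=2, untagged_vlans=0, dmz=False, ip_storage=False):
    """Estimate physical NICs needed for one virtual host.

    vm_switch_nics: NICs on the main VM virtual switch (2 gives redundancy)
    untagged_vlans: extra VLANs without VLAN tagging (one NIC/switch each)
    dmz: a DMZ network gets its own isolated NIC and virtual switch
    ip_storage: NFS/iSCSI storage gets at least two dedicated NICs
    """
    total = vm_switch_nics
    total += untagged_vlans          # no tagging: one NIC per extra VLAN
    total += 1 if dmz else 0         # keep DMZ traffic physically separate
    total += 2 if ip_storage else 0  # dedicated, redundant storage NICs
    return total

# A typical host: redundant VM switch, VLAN tagging in use, iSCSI storage
print(min_nics(ip_storage=True))  # 4
```

A host with two untagged VLANs, a DMZ connection and iSCSI storage would come out at seven NICs under these assumptions, which matches the rule of thumb that real hosts land above the two-NIC minimum.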

It's possible to get four NICs on a single adapter card, so adding NICs to your hosts is easy even for servers with limited peripheral component interconnect (PCI) slots. When selecting NICs for your host, the other decision is the NIC brand and model. Some virtual hosts support only specific NIC brands and models, so confirm that the virtualization software you use supports the NICs you buy. VMware and Citrix Systems have published I/O adapter compatibility guides to which you can refer. Meanwhile, Microsoft Hyper-V supports any NIC that's supported by Windows Server 2008.

Adopting a storage adapter
Next, select storage adapters to connect to your storage devices. There are several types of adapters, including local storage adapters like SCSI, serial-attached SCSI (SAS) and Serial Advanced Technology Attachment (SATA), Fibre Channel, and iSCSI host bus adapters (HBAs). Whichever storage adapters you use, make sure that your virtualization software supports them. Just like NICs, you should check the I/O adapter compatibility guides for this. When you choose storage adapters for a virtual host, keep the following in mind:

  • For local storage, it's best to use adapters that have large read-and-write caches on them, especially if you plan on exclusively using local disk on your ESX hosts. In addition, having a battery-backed write cache (BBWC) on your array controller improves performance and reliability. BBWCs add memory that is used to cache disk writes and, in case of power failure, also have a battery backup to protect data that hasn't been written to disk.
  • Your infrastructure should typically house two Fibre Channel or iSCSI adapters, because they provide two paths to your storage device and, as a result, maximum reliability. Server manufacturers such as Hewlett-Packard Co. and IBM often re-brand Fibre Channel and iSCSI adapters (e.g., QLogic, Emulex) as their own models, so consider this in terms of compatibility with virtualization software. Fibre Channel adapter speeds vary from 1 Gbps to 8 Gbps; currently, 4 Gbps is the most popular speed in data centers. All the components in a Fibre Channel network must support the adapter speed you choose; this includes the Fibre Channel HBA, the Fibre Channel switch and the Fibre Channel controller on the storage device.
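The speed-matching point can be stated simply: a Fibre Channel path effectively runs at the speed of its slowest component, so a faster HBA buys nothing behind a slower switch. A minimal sketch (my own framing, not from the article):

```python
# Illustrative: the effective speed of a Fibre Channel path is capped by the
# slowest component between the host and the storage device.

def effective_fc_speed(hba_gbps, switch_gbps, controller_gbps):
    """Return the effective path speed in Gbps."""
    return min(hba_gbps, switch_gbps, controller_gbps)

# An 8 Gbps HBA behind a 4 Gbps switch still yields a 4 Gbps path
print(effective_fc_speed(8, 4, 4))  # 4
```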
Committing to a disk storage device
Finally, you need to choose a disk storage device for your virtual host. The two factors that influence the type of storage that you choose are cost and I/O requirements. Your budget plays a large part in determining which storage option you choose; for heavier workloads, disk storage is pricey. The disk I/O requirements for the applications that you run are also a critical factor.

But no matter which storage option you purchase, you need to choose hard drives. Most SCSI hard drives are available in two speeds, 10,000 rpm (10K) and 15,000 rpm (15K). The speed attached to each hard drive indicates how fast the hard drive's platter spins, which is otherwise known as its rotational speed. The faster the drive platter spins, the faster data can be read and written, which reduces overall latency.

Even if a drive platter spins faster, though, the head actuator that moves across the drive to access data does not move faster. A drive that spins 50% faster does not, therefore, deliver 50% better overall performance. The typical performance increase for a 15K drive over a 10K drive is about 30%, which translates into more IOPS (I/O operations per second) and lower average access times.
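That roughly 30% figure can be reproduced with a back-of-the-envelope IOPS estimate: a random I/O costs about one average seek plus half a rotation. The seek times below are typical published values I am assuming, not figures from the article.

```python
# Back-of-the-envelope IOPS estimate for a single spinning disk.
# A random I/O ~= one average seek + half a platter rotation.

def est_iops(rpm, avg_seek_ms):
    rotational_latency_ms = 60000.0 / rpm / 2  # half a revolution, in ms
    return 1000.0 / (avg_seek_ms + rotational_latency_ms)

# Assumed typical seek times: ~4.6 ms for 10K drives, ~3.6 ms for 15K drives
iops_10k = est_iops(10000, 4.6)  # ~131 IOPS (3.0 ms rotational latency)
iops_15k = est_iops(15000, 3.6)  # ~179 IOPS (2.0 ms rotational latency)
print(round(iops_15k / iops_10k - 1, 2))  # ~0.36
```

The estimate lands in the same ballpark as the article's ~30% improvement: the platter spins 50% faster, but because seek time shrinks much less, the overall gain is smaller.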

When choosing between 10K and 15K drives, there are two factors: whether you run applications with heavy disk utilization that could benefit from the extra speed of 15K drives, and whether you can afford the more expensive drives. The only downside to 15K drives is the additional expense over 10K drives, so if you plan on running disk I/O-intensive applications on your VMs, you should consider them.

When choosing a storage option, the final choice is whether you will have a combination of storage types, or just one. The options here are local disk storage or shared storage types, such as iSCSI, NFS and Fibre Channel storage area networks (SANs). In most cases, shared storage is desirable because it's required for certain advanced features, such as VMware's VMotion, to work. Let's examine the advantages and disadvantages of each one.

Local disk storage
Local disk storage is relatively inexpensive and is beneficial for virtual hosts; even if you plan on running your VMs on shared storage, having local disks gives you extra options and flexibility. Unless you boot VMs from a SAN, you should consider getting at least two local disks that use RAID on a virtual host. The advantages of using local storage include the following:
  • It is low cost compared with shared storage.
  • Local storage can be used for test and development VMs to prevent these VMs from taking up space on expensive shared storage.
  • It can back up VMs that are located on shared storage and can store virtual swap files and snapshots.
  • It can be converted to shared storage by using the virtual SANs on the market, such as LeftHand Networks' Virtualization SAN.

The disadvantages of using local storage are the following:

  • It cannot be used for advanced features (such as VMware's VMotion) that require shared storage.
  • It is not available to other ESX hosts; only the local ESX host can access it.

Fibre Channel SAN storage
Fibre Channel SAN storage uses fiber-optic cables to connect HBAs to SANs through special Fibre Channel switches. Fibre Channel networks normally have multiple paths from the host servers to the storage devices, which include multiple HBAs, switches and controllers. Fibre Channel is a popular storage choice for virtual hosts in larger environments because of its speed, security and reliability.

If you already have a Fibre Channel SAN in your environment, using it with your virtual hosts makes sense. Expanding an existing SAN is much easier and far cheaper than implementing a new SAN. If you plan on having several high disk I/O VMs running on your virtual hosts, then you should consider using SAN storage to achieve maximum performance. Ultimately, cost is the factor that determines if you use SAN storage or choose a less expensive alternative. The advantages of using Fibre Channel storage are the following:

  • It offers solid performance and secure storage.
  • It lets you boot your virtual host directly from the SAN instead of a local disk.
  • It provides block-level storage.

The disadvantages of using Fibre Channel storage include the following:

  • It is the most expensive storage option to implement in your environment.
  • It can be complex to deploy and manage.

iSCSI storage
iSCSI storage works by using a client called an initiator to send SCSI commands over a LAN to SCSI devices called targets, which are located on a remote storage device. iSCSI uses traditional networking components and the TCP/IP protocol and does not require the special cables and switches that Fibre Channel storage does. iSCSI is considered a type of SAN storage because it writes data at the block level rather than using NFS' file-level method.

iSCSI initiators can be software or hardware based. An initiator is the client that replaces the traditional SCSI adapter that servers use to access SCSI storage. Software initiators use device drivers built into the host operating system that employ existing network adapters and protocols to write to remote SCSI devices. This can result in additional CPU and network overhead on the host server. Software initiators are a cheaper solution than hardware initiators and work well with blade servers that have limited expansion slots. Unfortunately, though, they can't be used to boot your virtual host.

Hardware initiators use a dedicated iSCSI HBA, which combines a network adapter, TCP/IP offload engine (TOE) and SCSI adapter into one device and improves the I/O performance of the host server. Hardware initiators also consume fewer host resources, such as CPU, and you can use them to boot your virtual host.

iSCSI is a solid alternative to Fibre Channel storage because it's cheaper to implement and provides comparable performance, especially over a 10 Gbps Ethernet connection. The main disadvantages of iSCSI storage are the additional CPU overhead of software initiators and its dependence on a network infrastructure that is typically less robust than a dedicated Fibre Channel fabric. Both issues can be mitigated by using hardware initiators and isolating iSCSI traffic from other network traffic.

The advantages of using iSCSI storage are the following:

  • It is cheaper to implement than Fibre Channel storage.
  • It provides the option of using software or hardware initiators.
  • It provides block-level storage.
  • Speed and performance increase with 10 Gbps Ethernet.

The disadvantages of using iSCSI storage are the following:

  • Your virtualization software may not support jumbo frames.
  • There is CPU overhead when using a software initiator.

Network attached storage
Network-attached storage (NAS) uses the NFS protocol to enable virtual hosts to mount partitions on a remote file system and access them as if they were local disks. NAS has performance characteristics similar to software iSCSI, but performance depends on the speed of the network connection between the host and remote storage and on the type of NAS device to which you connect. A dedicated NAS appliance provides better performance than a Linux or Windows server running NFS services. Compared with iSCSI and Fibre Channel SAN storage, NAS has some disadvantages, mainly in the features that it supports. Yet NAS is a viable alternative for virtual hosts. If you choose NAS, you should also consider using a dedicated NAS device, such as those from NetApp.

Advantages of using NAS/NFS storage include the following:

  • It poses no substantial performance drop-off compared with iSCSI.
  • It's the cheapest shared-storage option.
  • It can use existing infrastructure components.
  • It features no single-disk I/O queue; performance depends on the speed of the network connection and of the disk array.
  • It has the smallest storage footprint of all options because by default it uses thin-provisioned disks.

Disadvantages of NAS/NFS storage include the following:

  • You cannot use it to boot virtual hosts.
  • It increases CPU overhead on virtual hosts.

Conclusion
Storage is the most important component of the decision on which hardware to use for host servers. Because storage relies on mechanical devices, it's often the first resource bottleneck on host servers, so choosing a proper storage option is important to ensure a successful virtualization project. The bottom line is to understand your requirements and workloads before selecting storage for virtual hosts.

This series of tips on selecting hardware for virtual hosts has delved into the many choices you'll face when building your virtual hosts. Having the proper host hardware for virtual machines and their workloads is critical to the success of any virtualization deployment, and understanding the hardware options and components that make up a host is central to making proper purchases for your virtual environment.

Eric Siebert is a 25-year IT veteran who specializes in Windows and VMware system administration. He is a guru-status moderator on the VMware community VMTN forums and maintains VMware-land.com, a VI3 information site. He is also the author of the upcoming book VI3 Implementation and Administration, which is due out in June 2009 from Pearson Publishing. Siebert is also a regular on VMware's weekly VMTN Roundtable podcast.

This was last published in May 2009
