LUN configuration best practices to boost virtual machine performance

LUN configuration plays a key role in tweaking VM performance. With the right hardware, disk type and RAID level, VM storage helps run more VMs per host.

Advanced virtual machine (VM) storage options can improve performance, but their benefits will go only so far if your physical logical unit numbers (LUNs) are not configured with best practices in mind.

Only when a LUN configuration meets the needs of your VM workloads can you significantly improve virtual machine performance. When it comes to LUN configuration, hardware choices, I/O optimization and VM placement are all important considerations.

Hardware and LUN configuration

The hardware on which you house your LUNs can make all the difference in VM performance. To avoid an overtaxed disk subsystem, choose hardware with resource levels similar to those of your host systems. It does no good to design a cluster of servers with two six-core processors and 128 GB of RAM and attach it to an iSCSI Serial ATA (SATA) storage area network (SAN) over a 1 Gb link. That arrangement can create a storage bottleneck at either the transport or disk-latency level.

As you set up the LUN configuration, correctly sizing your disk subsystem is the key to ensuring acceptable performance. Going cheaper on one component may save you money up front, but if a resulting bottleneck reduces overall VM storage capacity or stability, it could ultimately cost you much more.
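One way to sanity-check the sizing argument above is to compare aggregate VM storage traffic against what the shared link can actually deliver. The sketch below is illustrative only: the 80% usable-bandwidth factor and the per-VM demand figures are assumptions, not measurements from any particular environment.

```python
# Rough bottleneck check for a shared storage link.
# The 0.8 efficiency factor and per-VM traffic figures are assumptions.

def link_is_bottleneck(link_gbps, vm_count, mbps_per_vm):
    """Return True if aggregate VM throughput demand exceeds usable link bandwidth."""
    usable_mbps = link_gbps * 1000 * 0.8   # assume ~80% of line rate is achievable
    demand_mbps = vm_count * mbps_per_vm
    return demand_mbps > usable_mbps

# A 1 Gb iSCSI link serving 40 VMs that each average 30 Mbit/s of storage traffic:
print(link_is_bottleneck(1, 40, 30))   # prints: True (1,200 Mbit/s demand vs ~800 usable)
```

Run the same check with a 10 Gb link and the bottleneck disappears, which is the point of matching transport speed to host capacity.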

Disk type

To improve virtual machine performance, choose disk types for VM storage based on workload. Lower-speed, lower-duty-cycle, higher-latency drives such as SATA/FATA may be good for development environments. These drives usually range from 7,200 RPM to 10,000 RPM. For production workloads, or those with low-latency needs, various SCSI/SAS alternatives offer a good balance of VM performance, cost and resiliency. These drives range from 10,000 RPM to 15,000 RPM.
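The latency gap between those RPM classes follows directly from rotational speed: on average, a request waits half a revolution before the data passes under the head. A quick calculation makes the difference concrete:

```python
def avg_rotational_latency_ms(rpm):
    """Average rotational latency: half a revolution, in milliseconds."""
    return (60_000 / rpm) / 2

# Compare the drive classes mentioned above:
for rpm in (7_200, 10_000, 15_000):
    print(f"{rpm} RPM: {avg_rotational_latency_ms(rpm):.2f} ms")
```

A 15,000 RPM drive averages 2 ms of rotational latency versus about 4.2 ms for a 7,200 RPM drive, before seek time is even counted, which is why the faster classes suit latency-sensitive production workloads.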

Solid-state drives are also a realistic option. For most workloads, they may be overkill both technically and financially, but they provide very low-latency I/O response.

I/O optimization

To ensure a stable and consistent I/O response, maximize the number of VM storage disks available. You can maximize the disk count in your LUN configuration whether you use local disks or SAN-based (iSCSI or Fibre Channel) disks. This strategy enables you to spread disk reads and writes across multiple disks at once, which reduces the strain on any single drive and improves both throughput and response times. Controller and transport speeds affect VM performance, but maximizing the number of disks allows for faster reads and better handling of resource-intensive writes.
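The benefit of spreading I/O across more spindles can be approximated with simple multiplication. The per-disk IOPS figures below are common ballpark values for the drive classes discussed earlier, not vendor specifications:

```python
def aggregate_iops(disk_count, iops_per_disk):
    """Back-of-the-envelope aggregate IOPS for a striped set of identical disks."""
    return disk_count * iops_per_disk

# Assumed ballpark figures: ~175 IOPS for a 15K SAS drive, ~80 IOPS for 7.2K SATA.
print(aggregate_iops(8, 175))   # prints: 1400
print(aggregate_iops(8, 80))    # prints: 640
```

Doubling the spindle count roughly doubles the random-I/O ceiling, which is why a LUN backed by many small disks often outperforms one backed by a few large ones of the same total capacity.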

RAID level

The RAID level you choose for your LUN configuration can further optimize VM performance. But there's a cost-vs.-functionality component to consider. RAID 0+1 and 1+0 will give you the best virtual machine performance but will come at a higher cost, because mirroring leaves only 50% of the allocated disk capacity usable.

RAID 5 will give you more gigabytes per dollar, but it requires parity data to be written across the drives. On large SANs, any VM performance drawback will often go unnoticed because of powerful controllers and large cache sizes. But on less-powerful SANs or local VM storage, this RAID 5 overhead can create a bottleneck.
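The capacity and write-overhead trade-off between those two RAID levels can be sketched with standard textbook figures. The eight-disk, 2 TB drive size below is an assumption chosen for illustration:

```python
# Usable capacity and write penalty for common RAID levels.
# Figures are the standard textbook values; disk sizes are illustrative.

def usable_tb(level, disk_count, disk_tb):
    """Usable capacity of an array, given its RAID level."""
    if level == "raid10":
        return disk_count * disk_tb / 2     # mirroring: 50% of raw capacity
    if level == "raid5":
        return (disk_count - 1) * disk_tb   # one disk's worth of parity
    raise ValueError(f"unknown level: {level}")

# Back-end I/Os generated per front-end write (the classic write penalty):
WRITE_PENALTY = {"raid10": 2, "raid5": 4}

print(usable_tb("raid10", 8, 2))   # prints: 8.0  (TB usable from 16 TB raw)
print(usable_tb("raid5", 8, 2))    # prints: 14   (TB usable from 16 TB raw)
```

RAID 5 yields far more usable space from the same disks, but each random write costs four back-end I/Os instead of two, which is exactly the overhead that large controller caches mask on big SANs.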

Still, on many modern SANs, you can change RAID levels for a particular LUN configuration. This capability is a great fallback if you’ve over-spec’d or under-spec’d the performance levels your VMs require.


Transport

Whether the connectivity between host servers and LUNs is local, iSCSI or Fibre Channel, it can create resource contention. The specific protocol determines how quickly data can travel between the host and the disk subsystem. Fibre Channel and iSCSI are the most common transports in virtual infrastructures, but even within these designations there are different classes, such as 1/10 Gb iSCSI and 4/8 Gb Fibre Channel.

Thin provisioning

Thin provisioning technologies do not necessarily increase virtual machine performance, but they allow for more efficient use of SANs, because only the data actually written to each LUN counts toward total utilization. This method treats total disk space as a pool that's available to all LUNs, allowing for greater space utilization on the SAN. With greater utilization comes greater cost savings.
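The accounting difference is easy to model: a thick pool charges each LUN its full provisioned size, while a thin pool charges only what has been written. The LUN sizes below are hypothetical:

```python
def thin_pool_used_tb(luns):
    """With thin provisioning, only written data counts against the pool,
    regardless of each LUN's provisioned size."""
    return sum(written for _provisioned, written in luns)

# Hypothetical LUNs as (provisioned_tb, written_tb) pairs:
luns = [(2.0, 0.6), (2.0, 0.4), (4.0, 1.0)]
print(thin_pool_used_tb(luns))   # prints: 2.0 (TB consumed vs 8.0 TB provisioned)
```

Here the SAN holds 8 TB of provisioned LUNs in 2 TB of physical space, which is where the cost savings come from, at the price of having to monitor the pool so it never actually fills.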

Block-level deduplication

Block-level deduplication is still an emerging technology among most mainstream SAN vendors. Again, this technology does not improve virtual machine performance through the LUN configuration, but it does allow each unique block of data to be stored only once on physical disk. That means large virtual infrastructures can save many terabytes of capacity because of similarities in VM workloads and the amount of blank space inherent in fixed-size virtual hard disks.
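The store-once principle can be sketched as keeping one physical copy per unique block fingerprint. This is a simplified model, not any vendor's implementation; the block hashes below stand in for the checksums a real array would compute:

```python
def dedup_physical_blocks(block_hashes):
    """Block-level dedup stores each unique block once; identical blocks
    (same fingerprint) across VMs share a single physical copy."""
    return len(set(block_hashes))

# Hypothetical block fingerprints from three similar VM disks,
# where "a", "b", "c" are OS blocks shared across the VMs:
blocks = ["a", "b", "c", "a", "b", "c", "a", "b", "d"]
print(dedup_physical_blocks(blocks), "of", len(blocks), "blocks stored")
```

With VMs cloned from the same template, most operating-system blocks are identical, so the ratio of stored to logical blocks can be dramatic across a large infrastructure.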

So what does this all mean? For optimum VM performance and cost savings, use a healthy combination of the previously mentioned options. Using the best possible resources and LUN configuration is ideal, but it’s not practical or necessary for the majority of virtual infrastructures. 

The second part of this tip will offer guidelines on how many VMs you should put on a LUN, based on these LUN configuration options and the size of your infrastructure.
