Internal vs. external guest virtual machine storage

What's the best option for guest virtual machine storage: internal VM storage with Virtual Machine Disk Format (VMDK) files, or external VM storage with iSCSI? An expert weighs the pros and cons of both methods.

Scott Lowe

Numerous articles have been written about the various storage protocols that can be used with VMware to store the files that make up a virtual machine (VM). Little, however, has been published about storage within the VM: what the options are for providing storage to guest VMs, and which option is best in a given situation. It is almost guaranteed that the operating system (OS) instance for a particular guest VM will be found within a Virtual Machine Disk Format (VMDK) file stored on a supported data store, but that's not necessarily the case for the data managed by that guest VM. Users have options in how that data will be stored and accessed by the guest VM. In this tip, I'd like to explore some of the options for providing additional storage to guest VMs and when each option is most applicable.

There are essentially two options for providing storage to guest VMs. Users can either provide internal storage by provisioning additional VMDKs at the VMware ESX layer and attaching them to the guest VMs, or provide external storage by configuring the guest VMs to use software iSCSI and attach directly to an iSCSI-based array.

Each of these options has its own advantages and disadvantages, and those trade-offs drive the decision of when to use one approach over the other. First, let's explore each option in a bit more detail.

Pros and cons of internal VM storage
The first approach involves the creation of an additional virtual disk for the guest VM. The additional virtual disk is presented to the guest VM as another SCSI device, but outside the virtualization layer it exists as a single file known as a VMDK file. Although the guest VM sees this additional virtual disk as a SCSI device, VMware ESX may actually access the underlying storage via local SCSI disks, a Fibre Channel or iSCSI storage area network (SAN), or the Network File System (NFS). I refer to this approach as adding internal storage to the guest VM because the guest does not see or know about any of the various hardware components or protocols required to access the storage. Rather, all of that is managed by the virtualization layer, so the guest VM perceives it as local SCSI storage.
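As a concrete illustration, an additional virtual disk could be created from the ESX service console with vmkfstools and then attached to the guest through the VI Client's Edit Settings dialog. This is a minimal sketch; the datastore path, VM name, size and disk format below are illustrative assumptions, not values from this article:

    # Create a 20 GB thin-provisioned virtual disk on an existing VMFS
    # datastore (path, name and size are illustrative)
    vmkfstools -c 20G -d thin /vmfs/volumes/datastore1/appvm01/appvm01_data.vmdk

Once attached, the guest simply sees a new, blank SCSI disk that can be partitioned and formatted like any local disk.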

Pros and cons of external VM storage
The second approach involves configuring the guest VM to use a software iSCSI initiator and connect to an iSCSI-based array. As the guest must be specifically configured for iSCSI and connected to the array, I call this external storage. Many operating systems, Windows included, have free software iSCSI initiators available, so this is an increasingly common configuration. In this configuration, the hypervisor is not at all involved in any translation or management of the storage. The iSCSI traffic is simply another form of VM traffic that moves across the network interface cards (NICs) and vSwitches designated for VM traffic.
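As a sketch of what this looks like in practice, a Linux guest running the open-iscsi initiator might discover and log in to the array with the commands below; the portal address and target IQN are illustrative assumptions:

    # Discover the targets presented by the array (portal IP is illustrative)
    iscsiadm -m discovery -t sendtargets -p 192.168.50.10

    # Log in to a discovered target (IQN is illustrative)
    iscsiadm -m node -T iqn.1992-08.com.example:array.lun0 -p 192.168.50.10 --login

After login, the LUN appears inside the guest as an ordinary SCSI disk, entirely outside the hypervisor's view.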

With that information in mind, users can probably begin to pick out some of the advantages and disadvantages of each of these approaches:

  • Configuring a VM to use internal storage means that the hypervisor manages access to that storage. That, in turn, means that all the features of the VMware ESX hypervisor -- things like snapshots or Storage VMotion -- are available.

  • Conversely, configuring a VM to use external storage means that features such as VMware snapshots and Storage VMotion are not available.

  • VMs configured to use external storage can communicate directly with the storage array, and thus may be able to leverage advanced array functionality. For example, some storage vendors offer software that allows users to resize logical unit numbers (LUNs) on the storage array while also resizing the guest OS partitions.

  • VMs configured with internal storage leverage the existing storage connectivity provided by the VMware ESX hypervisor, such as Fibre Channel or NFS. Nothing additional is required.

  • VMs configured with external storage via software iSCSI may require additional NICs and/or additional vSwitches to accommodate the traffic this configuration generates (see the sketch after this list).
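As a rough sketch of that last point, a dedicated vSwitch and port group for guest iSCSI traffic could be created from the ESX service console as shown below; the vSwitch, uplink NIC and port group names are illustrative assumptions:

    # Create a vSwitch dedicated to guest iSCSI traffic (names illustrative)
    esxcfg-vswitch -a vSwitch2

    # Link a spare physical NIC to the new vSwitch as its uplink
    esxcfg-vswitch -L vmnic2 vSwitch2

    # Add a port group for the guests' iSCSI-facing virtual NICs
    esxcfg-vswitch -A Guest-iSCSI vSwitch2

Each guest that uses external storage would then get a second virtual NIC connected to the Guest-iSCSI port group, keeping iSCSI traffic off the general-purpose VM network.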

When should each approach be used?
When should one approach be used over the other? The answer depends upon many different factors, but its roots lie in the advantages and disadvantages described above. If a user wants or needs to use VMware-specific functionality like VMware snapshots or Storage VMotion, then it's necessary to provision additional virtual disks -- that is, to use internal storage.

If, on the other hand, the application within the guest VM needs direct connectivity to the storage array, external storage is almost always required. Users wanting to take advantage of advanced integration between the guest OS and the storage array for applications like Exchange or SQL Server (think along the lines of NetApp's SnapManager or perhaps EMC's Replication Manager) will also need to configure the guest VM to use external storage. It is possible to use raw device mappings in this case, as described in Storage options for virtual machines: Raw device mappings, VMFS.
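For reference, a physical-compatibility raw device mapping is created as a small mapping file that points at the raw LUN; a minimal sketch from the ESX service console follows, with the device path and mapping file location as illustrative assumptions:

    # Map a raw SAN LUN into a VM as a physical-compatibility RDM
    # (device path and mapping file name are illustrative)
    vmkfstools -z /vmfs/devices/disks/vmhba1:0:0:0 \
        /vmfs/volumes/datastore1/appvm01/appvm01_rdm.vmdk

The resulting .vmdk mapping file is then attached to the guest like any other virtual disk, while I/O passes through to the raw LUN.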

Another instance in which external storage may be more applicable than internal storage is when OS instances with existing SAN storage are being converted from physical to virtual via a P2V operation. Users or organizations may prefer to leave the existing SAN storage in place rather than convert it to VMDKs, simply reconnecting to it as external storage after the conversion.

Both of these options -- configuring the guest VM with additional virtual SCSI disks or configuring the guest VM to use external storage -- are valid, and many organizations will end up using both of them to address specific business needs during the course of their virtualization implementation.

ABOUT THE AUTHOR: Scott Lowe is a senior engineer for ePlus Technology Inc. He has a broad range of experience, specializing in enterprise technologies such as storage area networks, server virtualization, directory services and interoperability. Previously he was President and CTO of Mercurion Systems, an IT consulting firm, and CTO of iO Systems.
 

This was first published in December 2008
