Support for huge VMDK files requires ESXi 5.5 or later along with Virtual Machine File System (VMFS) 5. The guest operating system (such as Windows Server 2012 R2) must also support huge virtual disks. However, there are several additional considerations that IT professionals must evaluate before moving beyond 2 TB VMDK file sizes.
First, consider the file system. If storage is provided through a storage array using network file systems, the 64 TB maximum is still available, but the maximum VMDK capacity is limited by the actual file system in use. For example, the ext3 file system might only support volumes up to 16 TB. In addition, huge VMDK files may experience issues with flash caching, because vSphere's Flash Read Cache only supports disk sizes up to 16 TB.
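The interaction of these limits can be expressed as a simple minimum. The sketch below is purely illustrative (the helper name and parameters are assumptions, not a VMware API); the figures mirror the examples cited above.

```python
# Illustrative sketch: the effective ceiling on a VMDK is the smallest of
# the platform maximum, the backing file system's volume limit, and
# (if flash read cache is in play) the Flash Read Cache limit.
# All figures in TB, taken from the examples in this article.

VSPHERE_MAX_TB = 64   # maximum discussed for array-backed NFS datastores
VFRC_MAX_TB = 16      # vSphere Flash Read Cache disk-size limit

def effective_max_vmdk_tb(fs_max_tb, uses_flash_cache=False):
    """Return the smallest applicable size ceiling for a VMDK, in TB."""
    limits = [VSPHERE_MAX_TB, fs_max_tb]
    if uses_flash_cache:
        limits.append(VFRC_MAX_TB)
    return min(limits)

# An ext3-backed datastore caps the VMDK at 16 TB despite the 64 TB platform limit:
print(effective_max_vmdk_tb(fs_max_tb=16))                           # 16
# Flash read cache imposes the same 16 TB cap even on a roomier file system:
print(effective_max_vmdk_tb(fs_max_tb=64, uses_flash_cache=True))    # 16
```

In practice, planning around the most restrictive layer in the stack -- rather than the headline 64 TB figure -- avoids unpleasant surprises at provisioning time.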
Even creating huge VMDK files might take some finesse. For example, extending an existing VMDK beyond 2 TB requires the virtual machine to be powered off first, and administrators may need to use the vSphere Web Client to manage, create or extend huge VMDK files. The net result is disruption to the workload's availability during the extension process.
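A simple pre-flight check captures that constraint. This is a hypothetical helper for illustration, not a VMware tool: it assumes only the rule stated above, that growing a disk past the 2 TB line cannot happen while the VM is running.

```python
# Illustrative pre-flight check (hypothetical helper, not a VMware API):
# extending a VMDK past 2 TB requires the VM to be powered off first.

TWO_TB_GB = 2 * 1024  # 2 TB expressed in GB

def can_extend_online(current_gb, target_gb):
    """Return True if the extension can happen while the VM stays powered on."""
    if target_gb <= current_gb:
        raise ValueError("target size must be larger than current size")
    # Crossing (or landing past) the 2 TB line forces an offline extension.
    return target_gb <= TWO_TB_GB

print(can_extend_online(1024, 1536))  # True  -- stays under 2 TB
print(can_extend_online(1024, 4096))  # False -- VM must be powered off first
```

Running a check like this before scheduling the change makes the maintenance-window requirement explicit to whoever owns the workload.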
Booting can be an issue with huge VMDK files under a GUID Partition Table (GPT) because traditional system BIOS relies on legacy master boot record disk preparation, which is not present on GPT disks. To boot from a huge GPT-based VMDK file, the underlying server hardware will need a later-version firmware designed around the Unified Extensible Firmware Interface (UEFI). Many current servers do support UEFI, but it's important to verify this with the server vendor and test the system for huge VMDK support before committing to a massive hardware refresh.
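On the VM side, the firmware choice shows up as the `firmware` setting in the machine's .vmx configuration file; when the key is absent, the VM defaults to legacy BIOS. The minimal parser below is an illustrative sketch, not a VMware utility.

```python
# Illustrative sketch: check whether a VM's .vmx configuration selects EFI
# firmware, which a GPT-partitioned boot disk requires. Legacy BIOS is the
# default when the firmware key is absent.

def boots_with_uefi(vmx_text):
    """Return True if the .vmx text sets firmware = "efi"."""
    for line in vmx_text.splitlines():
        key, _, value = line.partition("=")
        if key.strip() == "firmware":
            return value.strip().strip('"').lower() == "efi"
    return False  # no firmware key: the VM falls back to legacy BIOS

print(boots_with_uefi('firmware = "efi"'))   # True
print(boots_with_uefi('memsize = "4096"'))   # False -- legacy BIOS default
```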
Beyond booting issues, a variety of other features may not be supported when deploying huge VMDK files, such as Fault Tolerance, certain storage controllers and virtual storage area network capabilities. Systems that depend on features like Fault Tolerance may face an unacceptable level of risk when using extended VMDK files. Also remember that a single huge disk volume does not offer the same potential performance as a drive group of multiple spindles brought to bear on a storage task. This may be a problem for some performance-sensitive workloads -- though the additional performance of solid-state disks or I/O accelerator devices can help to mitigate the issue.
The move beyond 2 TB disk volumes is just one more inevitable step in a data center's unending technological march forward, but the adoption of larger VMDK files is hardly automatic. Environments need the underlying hardware, hypervisors, operating and file systems, and other elements to support huge volumes -- and many of those elements are still evolving or absent from current infrastructures. IT professionals will need to exercise great care in proof-of-principle testing and verification to determine the compatibility and performance implications of large VMDK files before rolling out this technology across the enterprise.