Hypervisors perform reliably and well on almost any suitable server platform, but proper hypervisor installation
isn't always easy. Improper installation can cause unexpected errors that demand time-consuming troubleshooting, or even impair system performance and spawn erratic behavior. Beyond the common considerations of server processing and memory resources, proper hypervisor installation is also influenced by factors as varied as firmware features, storage configuration choices, and even installation options. Let's consider some of the installation issues ESXi users must address.
How is ESXi affected by UEFI BIOS? Are there any problems to contend with?
The Unified Extensible Firmware Interface (UEFI) is the modern successor to the traditional BIOS, built on the Extensible Firmware Interface (EFI) originally sponsored by Intel in the mid-1990s. One of the many improvements promised in UEFI is a more flexible boot process that embraces a broader variety of devices. Where traditional BIOS focused on booting from local disk devices, UEFI expanded boot compatibility to optical drives (such as CD or DVD) and USB devices.
The problem is that while vSphere 5.5 can boot ESXi hosts from these devices under UEFI firmware, it cannot boot ESXi hosts over the network or through VMware Auto Deploy -- those options still require conventional BIOS. This seemingly minor incompatibility can cause boot problems if you change the firmware mode after the hypervisor is installed. For example, if you install ESXi 5.5 and then change from legacy BIOS to UEFI firmware, the host server may not boot and will produce an error message.
Over the short term, restoring the firmware mode or rolling back a BIOS upgrade may correct the boot problem. However, it's an important reminder that firmware plays a critical role in the interface between an operating system and the underlying hardware, and incompatibilities do exist. IT professionals should always test firmware upgrades in a lab setting that simulates real behaviors before rolling firmware upgrades out to a production floor.
How critical is storage when planning an ESXi upgrade? Can ESXi boot from USB devices or disk volumes larger than 2 TB?
Storage concerns aren't overwhelming, but it's worth ensuring adequate capacity is available before starting an ESXi installation or upgrade. Although ESXi 5.5 only requires a boot device of at least 1 GB, the installation process also demands a scratch partition of at least 4 GB. In practice, this means the ESXi installation will need at least 5.2 GB on the local disk, SAN or iSCSI LUN. One exception applies to the SAN and Auto Deploy boot options, where the scratch space for multiple ESXi hosts can be consolidated onto a single LUN.
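The sizing figures above (a 1 GB boot device, a 4 GB scratch partition, 5.2 GB total) lend themselves to a quick pre-install sanity check. The sketch below is ours, not a VMware tool; the thresholds come from the article and the function name is an assumption:

```python
# Minimal pre-install sanity check based on the sizing figures above.
# Thresholds reflect the vSphere 5.5 requirements discussed in the text.

BOOT_GB = 1.0      # minimum boot device size
SCRATCH_GB = 4.0   # scratch partition
TOTAL_GB = 5.2     # minimum total on the local disk, SAN or iSCSI LUN

def enough_space(device_gb: float, shared_scratch: bool = False) -> bool:
    """Return True if a target device can hold an ESXi 5.5 install.

    shared_scratch models the SAN/Auto Deploy exception, where scratch
    space for multiple hosts is consolidated onto a separate LUN.
    """
    needed = BOOT_GB if shared_scratch else TOTAL_GB
    return device_gb >= needed

print(enough_space(4.0))                       # False: no room for scratch
print(enough_space(4.0, shared_scratch=True))  # True: scratch lives elsewhere
```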
If there isn't enough scratch space on the assigned LUN, the installer will try to create the space on another disk or on a RAM disk in memory. Fortunately, you can reconfigure scratch space through the vSphere Client, but remember to release any scratch storage allocated to a RAM disk -- memory left allocated to scratch space serves no purpose after installation, so that memory would simply be wasted.
ESXi 5.5 can be booted from a LUN larger than 2 TB, but the system firmware (and any expansion card firmware related to disk storage) must also support volumes larger than 2 TB. In practice, such huge boot volumes are rare, because storage dedicated to the boot device limits what remains for the VMs the system can host. In an age where consolidation is often the goal, most deployments tend toward smaller LUNs. When huge volumes are required, it's important to test ESXi installation and performance in a lab environment before attempting installations or upgrades in actual production environments.
ESXi 5.5 can also be installed onto a USB flash drive or another non-volatile storage device such as an SD card, but the device should be at least 16 GB. The reason for this recommendation is wear leveling: since many non-volatile memory device designs can tolerate only a finite number of write cycles before failing, wear-leveling algorithms spread new writes across the entire storage device before looping back to the beginning to overwrite data. This is very different from conventional disks, which can overwrite any previously written magnetic clusters without any worry of "wearing out" the disk. Starting with an oversized flash device therefore provides extra space for wear-leveling purposes. However, scratch areas are not written to non-volatile memory, so plan for scratch space on a local disk or a RAM disk.
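The wear-leveling behavior described above can be illustrated with a toy model -- a conceptual sketch, not how any real flash controller works. Writes advance round-robin across all blocks, so erase counts stay even, and a larger device spreads the same write load across more blocks:

```python
# Toy illustration of wear leveling: writes rotate across every block
# before any block is overwritten, so erase counts stay even. Real
# flash controllers are far more sophisticated; this only shows why a
# larger device spreads the same write load more thinly.

class WearLeveledDevice:
    def __init__(self, num_blocks: int):
        self.erase_counts = [0] * num_blocks
        self.next_block = 0

    def write(self):
        # Round-robin: advance to the next block on every write.
        self.erase_counts[self.next_block] += 1
        self.next_block = (self.next_block + 1) % len(self.erase_counts)

small = WearLeveledDevice(num_blocks=4)
large = WearLeveledDevice(num_blocks=16)
for _ in range(1600):
    small.write()
    large.write()

print(max(small.erase_counts))  # 400 erases per block
print(max(large.erase_counts))  # 100 -- 4x the blocks, 1/4 the wear
```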
How do I pick the best installation option for ESXi?
vSphere offers several ways to install ESXi hosts onto servers: interactive, scripted, Auto Deploy and a CLI-based option. One option isn't inherently better than another; the right choice depends on the scope of your deployment and the time and resources available.
For example, the interactive installation is your conventional first-person installation wizard. An interactive installer boots from the network, CD, DVD or USB device, and then walks IT technicians through the prompts required to set up or define an installation. The installer then creates and formats partitions and installs the ESXi boot image. This approach generally requires the most time and attention from IT staff, so it's best for one-off or small deployments of just a few systems.
Scripted installations employ a pre-defined list of configuration settings (the "script"). As long as the installation script and installer are accessible through disk, network, CD/DVD, USB or other acceptable media, a huge number of installations can proceed with almost no direct operator intervention. However, using the same script will result in identical installations, so scripting is often best used to deploy a large number of identical ESXi hosts.
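As a sketch of what such a script contains, the snippet below assembles a minimal kickstart file. The directives used (vmaccepteula, install, rootpw, network, reboot) are standard ESXi scripted-install options; the generator function, the placeholder password and the NIC default are our own illustrative assumptions:

```python
# Sketch: generate a minimal ESXi kickstart (ks.cfg) from a few settings.
# The directives are standard ESXi scripted-install options; the root
# password and NIC name below are placeholders, not recommendations.

def make_kickstart(root_password: str, nic: str = "vmnic0") -> str:
    lines = [
        "vmaccepteula",                          # accept the EULA
        "install --firstdisk --overwritevmfs",   # install to first disk
        f"rootpw {root_password}",
        f"network --bootproto=dhcp --device={nic}",
        "reboot",
    ]
    return "\n".join(lines) + "\n"

print(make_kickstart("ChangeMe!23"))
```

Every host installed from the same generated file ends up identical, which is exactly why scripting suits large uniform deployments.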
vSphere Auto Deploy is similar to scripting, providing a wizard that enables technicians to define precise ESXi configurations and profiles for hundreds of physical hosts. Auto Deploy is primarily a network boot tool, and the content is provided from an Auto Deploy server (rather than storing the content to be installed on each individual host system). This provides a powerful and versatile installation platform, which is best suited for large enterprises that rely on multiple ESXi configurations or images.
And finally, ESXi images can be customized through vSphere ESXi Image Builder, a PowerShell (PowerCLI) command set. In most cases, this command-line option is used to create ESXi image updates or patches -- maintenance activities. The new image can be deployed by burning it to DVD for distribution, or through an automated distribution mechanism like Auto Deploy. Again, this approach is best for enterprise users that must retain careful control over patch or distribution versions.
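Conceptually, an image profile is a named collection of software packages (VIBs), and building a patched image means producing a new profile in which newer package versions supersede older ones. The sketch below models that idea only; it is not the Image Builder API, and the package names and build numbers are hypothetical examples:

```python
# Conceptual model of image-profile patching: a profile maps package
# names to versions, and applying a patch replaces any older versions.
# This illustrates the idea only -- it is not the Image Builder API,
# and the names/versions below are hypothetical.

def apply_patch(profile: dict, patch: dict) -> dict:
    """Return a new profile with patched package versions merged in."""
    updated = dict(profile)
    updated.update(patch)   # newer versions supersede older ones
    return updated

base = {"esx-base": "5.5.0-0001", "net-e1000": "5.5.0-0001"}
patch = {"esx-base": "5.5.0-0002"}   # hypothetical patch bundle

patched = apply_patch(base, patch)
print(patched["esx-base"])    # 5.5.0-0002
print(patched["net-e1000"])   # 5.5.0-0001 -- driver left unchanged
```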
ESXi installations and upgrades often proceed without a hiccup -- but problems can occur -- so organizations should always invest the time and effort needed to test installations in advance in an offline lab or limited production environment that will not interfere with actual production activities. This allows IT professionals to refine installation/upgrade processes, assess the impact of any changes, establish benchmarks for comparison, and resolve any potential errors or oversights without causing unexpected disruption or downtime for the data center.