An embedded system imposes significant limits on available resources and performance. Adding a software layer, such as a hypervisor, risks increasing stress on the system, so admins must ensure an embedded hypervisor has a small code base, as well as high bandwidth and low latency.
Several considerations are key to successfully implementing an embedded hypervisor: storage, performance and scheduling. Overlooking them risks processor latency and resource contention.
Ensure adequate storage and memory
Embedded systems are typically limited by the amount of storage and memory available. This means the hypervisor code base must be relatively small, highly efficient and extremely reliable. The smallest possible footprint reduces the resources needed to operate the hypervisor.
By eliminating superfluous features and easing code bloat, a minimized hypervisor footprint runs more quickly, with far fewer attack vectors. Without a minimized code base, the hypervisor is more prone to failures that force restarts, which might not even be possible on an embedded system. In most cases, the embedded hypervisor supports Type 1 -- or bare-metal -- virtualization and can manage several VMs.
Avoid system latency
The second crucial factor for embedded hypervisor implementation is performance. Embedded systems often run real-world tasks with extremely demanding performance requirements, such as the extraction of high-dimensional data. Adding an embedded hypervisor can, in theory, increase latency and reduce the system's performance, so the embedded hypervisor must guarantee minimal latency.
Communication between embedded system components can further complicate performance. The embedded hypervisor must ensure high-bandwidth, low-latency communication between system components, such as the processor and the emulator, and provide configurable security characteristics that encapsulate and encrypt communication between those components. This is critical because of the real-time demands of the embedded system and the close interaction of its hardware components.
Use scheduling to support the system processor
Careful scheduling is generally the final consideration before implementing an embedded hypervisor. In an embedded system, a processor can only work on one task at a time. When multiple components, such as a hypervisor and VMs, compete for service, the processor must divide its time among the various processes.
The way a processor, its I/O and the rest of the system handle this time management is generally called scheduling. The embedded hypervisor must include a granular and reliable scheduler to support real-time, high-performance systems.
Additional considerations for hypervisor implementation include licensing costs, feature roadmaps and update frequency. Updates affect embedded system reliability, because admins must translate each update to the system's code base -- usually firmware -- which imposes a deliberate update process.
Admins must update the firmware chip in the same way they would update a computer's BIOS firmware. And admins must test and validate every new update extensively before they implement upgrades.
Related Q&A from Stephen J. Bigelow