The best way for IT administrators to obtain, deploy and update AWS Firecracker is to start with the proper software setup -- such as an x86_64 bare-metal instance with KVM access -- and enough resources to keep their workloads reliable.
Each of the steps needed to build and run AWS Firecracker is documented -- along with examples -- on GitHub. Because AWS Firecracker relies on Linux software, admins should also consider the security setup of the host Linux OS to ensure the integrity of their multitenant environments.
Obtain and deploy Firecracker
AWS Firecracker was originally developed internally at Amazon, but was recently released to the open source community under the permissive Apache License version 2.0, which lets admins freely use, modify and distribute the software -- and distribute derivative works under other license terms if desired.
The easiest way to get started with AWS Firecracker is to download the latest binaries from GitHub; version 0.17.0 or later is recommended. Admins can also watch the release notes for bug fixes and any new features that AWS adds. AWS Firecracker can be deployed on an x86_64 bare-metal instance, such as an i3.metal instance in the AWS cloud, running a Linux distribution such as Ubuntu 18.04 with access to KVM and the Linux 4.14 kernel.
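As a sketch, the download step might look like the following. The release version and asset name shown are examples taken from the v0.17.0 era; check the project's GitHub releases page for the current naming scheme before copying these commands.

```shell
# Fetch a Firecracker release binary from GitHub.
# The version and asset name below are illustrative examples.
release=v0.17.0
curl -fsSL -o firecracker \
  "https://github.com/firecracker-microvm/firecracker/releases/download/${release}/firecracker-${release}"
chmod +x firecracker

# Firecracker requires read/write access to /dev/kvm on the host.
if [ ! -r /dev/kvm ] || [ ! -w /dev/kvm ]; then
    echo "KVM access is required; check permissions on /dev/kvm" >&2
fi
```

The KVM check matters on shared hosts: without access to `/dev/kvm`, the binary downloads and runs but cannot launch any microVM.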
Because AWS Firecracker is statically linked against musl -- a lightweight, POSIX-compliant implementation of the standard C library and extensions -- admins can build the binaries and run the resulting executable directly within the instance. In production, AWS Firecracker generally runs inside a Linux execution jail set up through the companion jailer binary, and it's important to refer to the latest GitHub documentation for current details on configuring the Linux environment for performance and security.
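The GitHub documentation covers the jailer invocation in detail; as a minimal sketch, a jailed launch might resemble the command below. The ID, uid/gid and chroot base directory are all illustrative placeholders, and the exact flag set has varied between releases, so verify against the docs for the version in use.

```shell
# Launch Firecracker inside the jailer sandbox.
# All values below (id, uid, gid, chroot base) are illustrative examples;
# flag names should be checked against the jailer docs for your release.
sudo ./jailer \
  --id my-microvm \
  --exec-file ./firecracker \
  --uid 1000 \
  --gid 1000 \
  --chroot-base-dir /srv/jailer
```

The jailer drops privileges to the given uid/gid and chroots the Firecracker process, which is what makes the multitenant security posture described above practical.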
Generally speaking, AWS Firecracker runs as its own process; once it starts, it's configured and controlled externally through a RESTful API exposed over a Unix domain socket. From another shell, admins can then use the API to define a Linux microVM, pointing it at an uncompressed Linux kernel image and a root file system (rootfs) image as necessary.
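A sketch of that API interaction follows, using curl's `--unix-socket` option. The socket path and the kernel and rootfs file names are examples; substitute the paths used on the host.

```shell
# Socket path is an example; Firecracker listens wherever it was told to.
sock=/tmp/firecracker.socket

# Point the microVM at an uncompressed kernel image.
curl --unix-socket "$sock" -X PUT 'http://localhost/boot-source' \
  -H 'Content-Type: application/json' \
  -d '{
        "kernel_image_path": "./hello-vmlinux.bin",
        "boot_args": "console=ttyS0 reboot=k panic=1 pci=off"
      }'

# Attach a root file system image as the root drive.
curl --unix-socket "$sock" -X PUT 'http://localhost/drives/rootfs' \
  -H 'Content-Type: application/json' \
  -d '{
        "drive_id": "rootfs",
        "path_on_host": "./hello-rootfs.ext4",
        "is_root_device": true,
        "is_read_only": false
      }'
```

Each PUT is idempotent configuration: nothing boots until a start action is issued, so the calls can be made in any order before launch.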
Once that's done, admins set the guest kernel and rootfs -- and, if desired, the number of vCPUs and the memory size -- before starting up the newly configured guest machine. Admins can then log in to the new guest machine. When the guest is no longer needed, admins can shut it down by issuing a reboot command inside the guest.
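The final steps above can be sketched with two more API calls against the same socket. Again, the socket path and the vCPU and memory values are examples, not requirements.

```shell
sock=/tmp/firecracker.socket

# Optionally size the guest before boot (values are examples).
curl --unix-socket "$sock" -X PUT 'http://localhost/machine-config' \
  -H 'Content-Type: application/json' \
  -d '{"vcpu_count": 2, "mem_size_mib": 1024}'

# Boot the configured microVM.
curl --unix-socket "$sock" -X PUT 'http://localhost/actions' \
  -H 'Content-Type: application/json' \
  -d '{"action_type": "InstanceStart"}'

# To stop the guest later, run `reboot` from a shell inside the guest;
# with no BIOS to return to, the reboot shuts the microVM down.
```

The machine-config call must happen before `InstanceStart`; once the instance is running, the sizing is fixed for the life of the microVM.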