Installing Red Hat Enterprise Linux 5 (RHEL5) and the Xen hypervisor lays the foundation for getting virtualization to work in your Linux environment. Installing guest virtual machines is where the rubber meets the road, and that's the trip described in this tip.
In part one of this series on virtualization in RHEL5, I gave an overview of, and installation tips for, RHEL5's new Xen virtualization technology, including paravirtualization and the hypervisor. In that installment, I explained how the hypervisor interacts with guest operating systems (OSes) that have been installed with specially modified kernels designed to cooperate with the hypervisor. It's an easy process: simply check a box during installation, and the hypervisor is installed and configured to be brought up during the boot process.
Without guest virtual machines installed, we haven't really gotten anywhere. Fortunately, Red Hat has made the guest OS installation process easy, too; but it has some twists and turns along the way.
In RHEL5, Red Hat provides both a command line interface (CLI) and a graphical interface to its Xen virtualization capability. If you are just getting started with Xen, the graphical interface -- called Virtual Machine Manager -- is by far the easier way to go.
The Virtual Machine Manager (VMM) is accessed through the System item in the main menu. When you bring up "Virtual Machine Manager," the initial dialog box asks whether to connect to a local Xen host or a remote Xen hypervisor (see Image 1). We choose the local Xen host, since we want to manage guest virtual machines on the local server.
Once we click on the local Xen host, the Virtual Machine Manager interface comes up; guest installation may be done through this interface (see Image 2). In Image 2, we see that Domain-0 -- the privileged guest, meaning the guest that controls access to the actual underlying hardware resources -- is up and running, using 1GB of memory at this time. (At the far right is a graphics bar indicating how much of total memory this represents.) You'll also see its status (running) and the fact that it is using a bit over 2% of total CPU.
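The same information the VMM window displays is available from the command line via `xm list`. A minimal sketch, guarded so it is harmless on a machine without the Xen tools installed:

```shell
# On the Xen host (dom0), list running domains; Domain-0 should always appear.
# Guarded so the snippet does nothing harmful where the Xen tools are absent.
if command -v xm >/dev/null 2>&1; then
    xm list    # columns: Name, ID, Mem(MiB), VCPUs, State, Time(s)
else
    echo "xm not found: run this on the Xen host"
fi
```

On our test host, the Domain-0 row corresponds to the 1GB/2%-CPU entry shown in Image 2.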
Getting started on installing a new guest virtual machine requires only clicking on the File menu item and then on the first entry, labeled "New machine." Another way to get started is to click on the "New" button in the lower part of the Virtual Machine Manager's main window.
The new machine installation wizard then starts (see Image 3). This is a general splash screen describing the process that is about to happen: you will choose a name for the system; decide whether it will be paravirtualized or fully virtualized (more on that in a moment); indicate where the installation files may be found and where the new guest's file system will be located; and set the memory and CPU parameters.
Upon clicking "Forward" on the splash screen, the initial dialog box comes up. It asks you to name your new guest virtual machine. In our case, we rather unimaginatively called the guest VM "RHELguest1" (see Image 4). Clicking "Forward" brings us to the next screen, which asks us what type of virtualization we wish to use. This is a bit tricky and bears further discussion.
Xen's approach to virtualization enables high performance by modifying guest virtual machines so that they can cooperate with the underlying hypervisor. This modification takes the form of changing portions of the virtual machine's operating system kernel so that calls are made to the hypervisor rather than to the physical hardware. However, a new generation of chips available from AMD and Intel provide some hardware enhancements that allow unmodified guest OSes to interact with the Xen hypervisor.
Obviously, avoiding the need for specialized versions of guest OSes is a boon, particularly with regard to supporting Microsoft Windows as guests; this is because the kernel modifications previously mentioned require access to the source code of the guest OS so that the low-level changes can be made. While this is possible for Linux and other open source operating systems, it's clearly out of the question for Microsoft Windows. However, removing the requirement for modified OSes is also a benefit for Linux, since it simplifies the process of installing Linux guests. There is no need to determine whether you've got a paravirtualized version of the OS or not, and the standard OS can be installed and run on top of the Xen hypervisor.
Determining whether your machine is capable of supporting so-called fully-virtualized guests requires some detective work.
- For AMD, it's relatively straightforward. If the machine requires DDR2 memory, the chips are virtualization-enabled (this is called AMD-V) and can support unmodified virtualized guests.
- For Intel, it's more difficult to know if the server can support unmodified guests. This is because even if the chip has Intel's hardware virtualization capability (which Intel calls VT), the BIOS on the motherboard may not support it. So there's no absolute way to know whether an Intel VT-enabled machine will support full virtualization without checking into the specific combination of the chip and the motherboard. Fortunately, you can get more information in this XenSource wiki.
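One quick check worth running before you start: the CPU flags in /proc/cpuinfo reveal whether the processor itself advertises hardware virtualization (though, as noted above for Intel, the BIOS may still disable it). A minimal sketch:

```shell
# "vmx" is Intel VT; "svm" is AMD-V. Presence of either flag means the CPU
# can support fully-virtualized guests -- but on Intel, check the BIOS too.
if grep -qE '(vmx|svm)' /proc/cpuinfo; then
    echo "hardware virtualization flag present"
else
    echo "no vmx/svm flag: only paravirtualized guests will be offered"
fi
```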
If your machine is capable of supporting fully-virtualized guest VMs, the RHEL 5 install wizard will offer both paravirtualized and fully-virtualized installation options. If, on the other hand, your machine does not have the capability of supporting fully-virtualized guests, the wizard will only offer the option of installing paravirtualized guests. As you can see from Image 5, our test machine is capable of supporting both types of guests.
This isn't quite the end of the matter, though. You have more things to take into consideration when making the decision about whether to go with paravirtualized or fully virtualized guests.
The first is the fact that the RHEL 5 wizard can only install paravirtualized guests that have been preconfigured with the proper kernel arrangements. Today the distros supported by virt-manager are RHEL 5, RHEL4u5, and Fedora Core. Red Hat is working on supporting non-Red Hat distros for paravirtualized installation, but this isn't available now. That means you cannot install a standard Linux distro via the wizard and then update the kernel with the proper modifications to be Xen-enabled. So, be sure about the status of your distro's configuration before attempting to install it as a guest OS.
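A related quick check: Red Hat's Xen-aware kernels can be recognized by their version string. A sketch, assuming the convention that Xen-enabled Red Hat kernels carry "xen" in their kernel release string:

```shell
# Red Hat's Xen-enabled kernels include "xen" in the kernel release string.
if uname -r | grep -q xen; then
    echo "running a Xen-aware kernel"
else
    echo "standard kernel: not booted under the Xen hypervisor"
fi
```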
The final consideration to keep in mind when deciding to go with paravirtualized or fully-virtualized guests is the fact that full virtualization (which depends on the underlying chip and motherboard combination, remember) is accomplished through the use of the QEMU emulation package for device access. So, using full virtualization is a tradeoff: you get to use standard OS versions, rather than the kernel modified versions required by paravirtualization; but you pay a performance penalty, since there is an additional layer of software sitting in the overall virtualization stack.
For our example, we decided to go with installing a paravirtualized RHEL 5 guest (see Image 5).
The next step in the process is indicating where the installation files may be found. It is a peculiarity of the paravirtualized/fully virtualized arrangement that fully virtualized guests may be installed from an installation CD or DVD, while paravirtualized guests must be installed from an installation tree located on the host (i.e., privileged guest) or on the network. I have been told that this will be changed in a future update from Red Hat that offers support for installation of both types of guests from the CD/DVD drive. That will certainly be more convenient.
In our example, we chose to install from an installation tree located on the host, accessed via FTP (note that you must use the IP address of the host; at this point you have a virtual machine partially built, and it "sees" the host as a separate machine, which must be accessed through its own IP address). If you look at Image 6, you can see that we used the ftp method with a privately assigned address.
After providing the location of the installation files, you must provide the location of the guest's own file system. While you can use a partition on the local drive (and, indeed, can use a partition on a NAS or SAN), we chose to use a local file, as can be seen in Image 7. Note that Red Hat recommends locating the file system in the default /var/lib/xen/images directory. We chose to name the storage location RHELguest1.dsk. The dsk suffix is arbitrary but recommended by Red Hat. You also set the size of the guest file system in this dialog box; the default is 2 GB, but we bumped that up to 5 GB. Obviously, if you were planning a production system, you would select a more appropriate size, and might choose to use remote storage rather than DAS.
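The wizard creates this backing file for you, but it is worth knowing that it is just an ordinary (sparse) file. If you ever need to prepare one by hand, a sketch like the following works; the /tmp path here is only for illustration -- Red Hat's recommended location is /var/lib/xen/images:

```shell
# Create a 5 GB sparse file to back a guest's file system.
# (Illustrative path; in practice use /var/lib/xen/images/RHELguest1.dsk.)
IMG=/tmp/RHELguest1.dsk
dd if=/dev/zero of="$IMG" bs=1M count=0 seek=5120 2>/dev/null
ls -lh "$IMG"   # apparent size is 5.0G; actual disk use starts near zero
```

Because the file is sparse, blocks are only allocated as the guest actually writes data.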
The next step in creating your guest virtual machine is defining how much memory the guest should have available, as well as how many virtual CPUs it should run on. With Xen, system memory should not be over-allocated; that is, the sum of the privileged guest's and all other guests' memory should not exceed actual physical memory. In fact, Xen will not allow you to overcommit physical memory: if there is not enough physical memory available when a guest VM starts up, the startup will fail with an out-of-memory error.
On the other hand, Xen is quite happy to allow you to define more virtual CPUs than actually exist on the physical box. Doing so probably won't offer very good performance, but it does allow you to test multiprocessor software on a more hardware-constrained server. For our selections, we chose 500 MB for system memory and one virtual CPU, as can be seen in Image 8. Please note that both of these can be adjusted post-installation and, indeed, while the guest is actually running.
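Both runtime adjustments can also be made from the command line on the host. A sketch, guarded so it is harmless off the Xen host; the 640 MB and two-vCPU values are just examples:

```shell
GUEST=RHELguest1
# Adjust a *running* guest's resources from dom0 using the xm tool.
if command -v xm >/dev/null 2>&1; then
    xm mem-set  "$GUEST" 640   # grow the guest's memory to 640 MB
    xm vcpu-set "$GUEST" 2     # give the guest a second virtual CPU
else
    echo "xm not found: run this on the Xen host"
fi
```

Remember the overcommit rule above: mem-set will only succeed if physical memory can cover the new total.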
With those steps complete, we are ready to actually begin installation of our guest OS. The confirmation dialog box is displayed for us to begin by pressing the "Forward" button, as shown in Image 9. Once the installation begins, we see a status box as shown in Image 10. For our 5 GB installation with the options we chose, total guest creation time was around 5 minutes.
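For readers who prefer the CLI mentioned at the outset, the whole sequence of wizard choices maps onto a single virt-install command. A hedged sketch -- the flag spellings are from the RHEL5-era tool and may differ slightly by version, and the FTP address is a placeholder; we only print the command here rather than run it:

```shell
# Paravirtualized install equivalent to our wizard choices (printed, not run).
# -p = paravirt, -n = name, -r = RAM in MB, -s = disk size in GB,
# -l = location of the installation tree.
CMD='virt-install -p -n RHELguest1 -r 500 \
  -f /var/lib/xen/images/RHELguest1.dsk -s 5 --vcpus=1 \
  -l ftp://192.0.2.10/rhel5'
echo "$CMD"
```

Run the printed command on the Xen host itself, substituting your host's real IP address for the placeholder.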
Following these steps, the actual installation of the guest OS files begins, as can be seen in Image 11. At this point, the installation is just like a standard installation of the OS. There is, however, one tricky point in the process. After you go through all the standard screens asking for your time zone, etc., you are presented with a final standard installation screen asking you to begin the final install (see Image 12). At that point, the console window you've been working in disappears! You then need to go to the host OS shell and execute "xm create guestname"; in our case, the entire command was "xm create RHELguest1".
While disconcerting, there is some logic here. The guest install is complete, and xm create (xm is the main command line program used by Xen for managing the hypervisor) then instantiates the installed guest. Overall, however, this is a bit confusing and, for a novice, non-intuitive, and something to be watched out for.
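The handful of xm subcommands you will reach for most often after an install can be sketched as follows (guarded, as before, so the snippet is harmless off the Xen host):

```shell
GUEST=RHELguest1
if command -v xm >/dev/null 2>&1; then
    xm create  "$GUEST"   # instantiate the installed guest from its config
    xm list               # confirm the guest shows up alongside Domain-0
    xm console "$GUEST"   # attach a text console to the running guest
else
    echo "xm not found: run this on the Xen host"
fi
```

Note that xm console attaches interactively, so run it from a terminal rather than a script.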
Once that command is issued, your new guest is up and running, as shown in Image 13. In this screenshot, we can see that RHELguest1 is running, using 500 MB of memory (as we allocated during the installation process) and is ready to work. If you double-click on the guest listing in the VMM, it will bring up a console window for that guest, in which you can run programs, start a shell, and, in general, do anything with the guest you could do if it were running natively on the hardware. At this point, you are ready to begin loading software on the guest and doing real work, while managing your Xen system through the VMM as well as the command line on the host system.
So, in conclusion, it's a good thing that Red Hat simplified the process of installing guest OSes on the hypervisor. Believe me, if you've tried to install guest OSes by hand, you'll know it is no trivial task.
About the author: Bernard Golden is CEO of Navica Inc., a systems integration firm specializing in open source software, based in San Ramon, Calif. He founded Navica in 2001, after two decades of experience in starting and building successful IT-related organizations. He has previously served as a Venture Partner for an international venture fund and has held senior executive positions in a number of private and public software companies, including Informix, Uniplex Software, and Deploy Solutions.