Managing hardware in a Xen environment doesn't stop after telling a virtual machine what PCI devices it can use.
In a paravirtualized environment, it is also possible to change memory and CPU allocations dynamically, which helps you maximize virtual machine performance. In this article, you'll learn how.
When your physical server boots, all memory is by default allocated to Dom0. When other virtual machines are started, they take memory from that pool. If a virtual machine runs in full virtualization mode, the hypervisor has no way to talk to the virtualized kernel, so you can't change its current memory allocation. If, however, the virtual machine is paravirtualized, the Xen hypervisor can change its memory allocation dynamically. When doing this, you should always make sure that a given minimum amount of memory stays available for Dom0, because you don't want it to run out of memory. I recommend setting this minimum to 512 MB.
To make this initial memory reservation for Dom0, you need to add a boot option for the Xen hypervisor. The option to use is dom0_mem=. Consider the following example GRUB configuration:
title XEN
    root (hd0,0)
    kernel /xen.gz
    module /vmlinuz-2.6.16.60-0.14-xen root=/dev/system/root vga=0x314 resume=/dev/system/swap splash=silent showopts
    module /initrd-2.6.16.60-0.14-xen
In this configuration, add the dom0_mem option to the kernel line that loads the hypervisor (xen.gz), since it is an option for the Xen hypervisor itself. The result may look as in the following:
title XEN
    root (hd0,0)
    kernel /xen.gz dom0_mem=512M
    module /vmlinuz-2.6.16.60-0.14-xen root=/dev/system/root vga=0x314 resume=/dev/system/swap splash=silent showopts
    module /initrd-2.6.16.60-0.14-xen
Now that you have fixed the amount of memory that will always be available for Dom0, you can manage the memory allocation for your virtual machines. When a virtual machine starts, it normally takes its assigned memory from the memory available to Dom0. Once allocated, Dom0 never gets that memory back, not even when all virtual machines are stopped. Especially for that reason, it is important to reserve a minimum amount of memory that will always remain assigned to Dom0.
To change the memory allocation for virtual machines, there are two xm commands that you can use:

xm mem-set: use this to change the current memory allocation for a virtual machine.

xm mem-max: use this to limit the maximum amount of memory that a virtual machine is allowed to use. Be aware, however, that the new maximum setting is applied only after the machine has rebooted.
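As an illustration, assuming a running paravirtualized domain with ID 1 (the ID and the memory sizes here are example values), the two commands might be used as follows:

```shell
# Shrink the current memory allocation of domain 1 to 512 MB immediately
xm mem-set 1 512

# Allow domain 1 to use at most 1024 MB; takes effect after the domain reboots
xm mem-max 1 1024
```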
After changing the memory assignment, always use the xm list command to check whether the change worked as intended:
Listing 1: Use xm list to check memory assignments on a regular basis
lin:~ # xm list
As with memory, you can also manage the CPUs that are assigned to a virtual machine. If your virtual machine uses paravirtualization, you can even change CPU assignments dynamically. When assigning CPUs to a virtual machine, you are not bound to the number of CPUs physically installed in your server. You can go beyond that if you like, but be aware that doing so yields no performance gain at all. One capability that is very useful is the possibility to pin a virtual machine to a given physical CPU, which can greatly improve the performance of your virtual machines. Apart from that, you can tune the CPU run queue to give one virtual machine more priority on a CPU than another.
All runnable virtual CPUs (VCPUs) are managed by a local run queue on each physical CPU. This queue is sorted by VCPU priority, and within the queue every VCPU gets its fair share of CPU resources. A VCPU's priority can be in one of two states: it is over if it has consumed more CPU resources than it would normally be allowed to, and it is under if it has not yet reached that value. If a VCPU currently has the status under, it always comes first when the scheduler next decides which VCPU to service. This even works across physical CPUs: if the scheduler doesn't see a VCPU with the status under on its current CPU, it looks at the other CPUs, and if it finds such a VCPU there, it services it immediately. In this way, all VCPUs normally get their fair share of CPU resources.
As an administrator, you can manage the priority that a VCPU gets by manipulating the weight and cap values. The weight parameter assigns the amount of CPU cycles that a domain receives. Weight is relative: a VCPU with a weight of 128 receives twice as many CPU cycles as a VCPU with a weight of 64. Use this parameter to determine which VCPU should get more, and which less, attention. The second parameter, cap, defines as a percentage the maximum amount of CPU cycles that a domain may receive. This is an absolute value: if it is set to 100, the VCPU may consume 100% of the cycles available on one physical CPU; if you set it to 50, the VCPU can never consume more than half of those cycles.
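To illustrate how relative weights translate into CPU shares (this is a back-of-the-envelope sketch, not part of the xm toolset; the domain names are made up), the arithmetic can be expressed as follows:

```python
def cpu_shares(weights):
    """Return each domain's relative share of CPU time from credit-scheduler weights.

    Weight is relative: a domain with weight 128 gets twice the share
    of a domain with weight 64 when both are competing for CPU time.
    """
    total = sum(weights.values())
    return {domain: weight / total for domain, weight in weights.items()}

# Two competing domains: domA (weight 128) gets twice the share of domB (weight 64)
shares = cpu_shares({"domA": 128, "domB": 64})
```

Note that this only models the relative weight; the cap parameter would additionally clamp a domain's share to an absolute percentage of physical CPU cycles.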
In the following example command, the weight of the domain with ID 3 is set to 128, and the domain is allowed to use all CPU cycles on two physical CPUs (a cap of 200 percent):
xm sched-credit -d 3 -w 128 -c 200
Another important task with regard to virtual CPUs is CPU allocation. By default, there is no fixed relation between a virtual CPU and a physical CPU. To improve performance, such a relation can be established easily. The major benefit of this "pinning" of a VCPU to a physical CPU is that you prevent the VCPU from floating around. Without pinning, the scheduler determines which physical CPU services a virtual CPU, and if one physical CPU is busy, the virtual CPU can float to another core. In terms of performance, this is quite an expensive operation. Therefore it is a good idea to pin virtual CPUs to physical CPUs at all times.
To pin a VCPU, first use the xm list command to see what your current configuration looks like. Next, use xm vcpu-list on the domain for which you want to see CPU details. The result of this command looks as follows:
lin:~ # xm vcpu-list 2
Name    ID  VCPU  CPU  State  Time(s)  CPU Affinity
oes2l    2     0    1  r--     3693.8  any cpu
This command shows that the domain with ID 2 currently uses one VCPU with the ID 0, which is currently running on physical CPU 1 but with no affinity set ("any cpu"). To make sure that it stays on that CPU, you can now use the following command:
xm vcpu-pin 2 0 1
If you next use the xm vcpu-list command again, you'll see that the CPU Affinity has now changed from "any cpu" to CPU 1.
Notice that this setting is not stored anywhere. That means that after a reboot of the virtual machine, you have to apply it again.
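If you want the pinning to survive a reboot, one approach is to set it in the domain's configuration file instead, using the cpus option. Assuming an xm-style configuration file for the example domain above (the file path is illustrative), the relevant line might look like this:

```
# /etc/xen/vm/oes2l (example path)
# Restrict this domain's VCPUs to physical CPU 1
cpus = "1"
```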
The last thing you can do to manage CPUs is change the number of CPUs that is assigned to a virtual machine. You can do this from Virtual Machine Manager, or with the xm vcpu-set command. For example, to change the number of VCPUs allocated to domain 1 to 4, use:
xm vcpu-set 1 4
When using this command, you will notice that it doesn't always work. This is because the operating system in the virtual machine has to support dynamically changing the number of CPUs as well. Therefore, it makes more sense to change the number of VCPUs that a machine can use in its configuration file, so that you are sure the changed setting is also persistent across a reboot.
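In an xm-style domain configuration file, that persistent setting might look as follows (the value shown is just an example):

```
# Number of virtual CPUs assigned to this domain at boot
vcpus = 4
```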
Changing memory and CPU allocations is an important task for performance optimization of virtual machines, and in this article you have seen why and how. You have also learned how to modify the priority of a virtual machine on a physical CPU.
About the author: Sander van Vugt is an author and independent technical trainer, specializing in Linux since 1994. Vugt is also a technical consultant for high-availability (HA) clustering and performance optimization, as well as an expert on SLED 10 administration.