Using LXC to create virtual containers in SUSE Enterprise Linux

Creating virtual containers with LXC may trump hypervisor-based virtualization -- especially if you want to run multiple instances of the same OS.

If all your virtual machines use the same operating system (OS), container-based virtualization can be more efficient than a conventional hypervisor. Linux Containers for SUSE Linux Enterprise allows a host to run virtual containers on top of the host kernel, conserving resources and saving IT shops money.

Think of Linux Containers (LXC) as change root (chroot) technology combined with kernel control groups (cgroups). Chroot gives each container an isolated file system environment on the LXC host, while cgroups ensure that every container has dedicated resources and controlled access to them.
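You can see both building blocks in isolation with a few shell commands. This is only an illustrative sketch, not part of the LXC setup itself: it assumes root privileges and a mounted memory cgroup hierarchy (for example under /sys/fs/cgroup), and the group name "demo" is made up.

```shell
# chroot: run a shell whose file system root is a container's rootfs
chroot /var/lib/lxc/vsuse0/rootfs/ /bin/bash

# cgroups: cap the memory available to a group of processes
mkdir /sys/fs/cgroup/memory/demo                                   # hypothetical group
echo 268435456 > /sys/fs/cgroup/memory/demo/memory.limit_in_bytes  # 256 MB cap
echo $$ > /sys/fs/cgroup/memory/demo/tasks                         # add the current shell
```

LXC wires these same mechanisms together for you, which is why a container feels like a lightweight VM rather than a full hypervisor guest.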

Once you've mastered installing and configuring LXC hosts and virtual containers, you'll be better equipped to manage resources in the SUSE Linux environment.

Preparing the LXC SUSE host

To prepare the LXC SUSE host, install the required utilities using the following command:

zypper install -y lxc bridge-utils yast-lxc

In order for the virtual containers to connect to your network, you'll need to set up a network bridge, which lets you grant access to multiple containers on the same Ethernet interface. Start the installation and configuration tool, YaST (Yet another Setup Tool), select Network Devices and then click Network Settings. This gives you an overview of existing network interfaces.


Next, select your regular Ethernet card and click Delete to remove its current configuration. Click Add to add a new device, choose the Bridge Device type and click Next to create the bridge interface.

Before writing the configuration, you need to select the physical device you want to connect to the bridge, such as the eth0 interface. You must also choose how you want to set the IP address. Both Dynamic Host Configuration Protocol and fixed IP addresses will suffice, but be sure to disable your firewall first.
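If you prefer to check the result by hand, the bridge configuration YaST writes ends up in /etc/sysconfig/network. A minimal sketch of what the two files might look like follows; the interface names and DHCP choice are examples, not values taken from this article:

```
# /etc/sysconfig/network/ifcfg-br0 -- the bridge, which holds the IP configuration
BOOTPROTO='dhcp'
BRIDGE='yes'
BRIDGE_PORTS='eth0'
STARTMODE='auto'

# /etc/sysconfig/network/ifcfg-eth0 -- the physical card, enslaved to the bridge
BOOTPROTO='none'
STARTMODE='auto'
```

Note that the IP configuration belongs on br0, not on eth0; the physical card only forwards traffic for the bridge.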

Lastly, make sure the cgroup service starts when your host boots. Run the following commands to start and enable this service:

/etc/init.d/boot.cgroup start
insserv boot.cgroup

Installing virtual containers in SUSE Linux Enterprise

After you've prepared the host, you can install virtual containers. SUSE is developing a graphical module to manage LXC from its YaST configuration tool. However, until that module is ready, you'll have to manually set up LXC.

Create a configuration file and edit it to reflect the following:

lxc.utsname = vsuse0
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.hwaddr = 00:30:6E:A1:B2:C3
lxc.network.ipv4 =
lxc.network.name = eth0

Save the file as lxc_vsuse0.conf. Note that vsuse0 in the file name matches the name you used in the lxc.utsname parameter.

Next, create the container using the following command:

lxc-create -t openSUSE -f lxc_vsuse0.conf -n vsuse0

In the command, the name of the container (vsuse0) must match the name you use in the lxc.utsname parameter of the configuration file.

Once you install the virtual container, you can chroot into its file system using the following command:

chroot /var/lib/lxc/vsuse0/rootfs/

Next, use the passwd root command to set the root password in the container, and create a user with non-root privileges for daily operational tasks. Use the exit command to leave the chroot environment. The VM is now ready to use.
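Put together, the post-install steps look like the following sketch. The user name admin is only an example; passwd and useradd run inside the chroot, so they affect the container's files, not the host's.

```shell
chroot /var/lib/lxc/vsuse0/rootfs/   # enter the container's file system
passwd root                          # set the container's root password
useradd -m admin                     # example non-root user for daily tasks
passwd admin                         # give that user a password
exit                                 # leave the chroot environment
```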

Use the lxc-start command to start an LXC VM. For instance, lxc-start -n vsuse0 would start the VM from the previous example. Then you can use lxc-console -n vsuse0 to open a console session into the VM and configure it to provide the required functionality. To stop a container, use the lxc-stop -n vsuse0 command; if you also want to remove the container entirely, follow up with lxc-destroy -n vsuse0, which deletes it permanently.
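The full life cycle of the example container, from start to removal, can be summarized as follows (lxc-destroy deletes the container's files, so run it only when you no longer need the container):

```shell
lxc-start -n vsuse0       # boot the container
lxc-console -n vsuse0     # attach a console; Ctrl-a q typically detaches
lxc-stop -n vsuse0        # shut the container down
lxc-destroy -n vsuse0     # delete the container permanently
```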

This was last published in September 2012

Will you opt for LXC over hypervisor-based virtualization?
It depends on the needs, but there is a question: how does LXC preserve userspace isolation? This is paramount to avoid damage from malicious artifacts.
Hypervisor-based has best performance on stress activities
lower overhead, easier to manage, no license hassle.
LXC is a very lightweight tool for virtual systems. Why go the whole nine yards when you can do it better and more efficiently with LXC? The only pity is that it is not pushed by the community as much as proprietary solutions are. LXC can run as many containers as memory permits; you can only run a few full-blown VMs on the same resources.

Also, LXC has power of cgroups. So
1. isolation
2. resource division
are covered
LXC = lightweight ... LXC wins hands down for cgroup support as well
It makes sense if one is running a "same-OS" pool of servers. It is obviously more efficient than any flavor of hypervisor. True less flexibility, but it is suitable for same-OS server pools scenarios.