Getting acquainted with libvirt
Libvirt is a set of API libraries that sits between the end user and the hypervisor. The hypervisor can be built using any virtualization technology that libvirt supports. At the time of writing, libvirt supports the following hypervisors:
- The KVM/QEMU Linux hypervisor
- The Xen hypervisor on Linux and Solaris hosts
- The LXC Linux container system
- The OpenVZ Linux container system
- The User Mode Linux paravirtualized kernel
- The VirtualBox hypervisor
- The VMware ESX and GSX hypervisors
- The VMware Workstation and Player hypervisors
- The Microsoft Hyper-V hypervisor
- The IBM PowerVM hypervisor
- The Parallels hypervisor
- The Bhyve hypervisor
libvirt acts as a transparent layer that takes commands from users, modifies them based on the underlying virtualization technology, and then executes them on the hypervisor. This means that if you know how to use libvirt-based management tools, you should be able to manage any of the preceding hypervisors without learning each of them individually. You can select any virtualization management technology; they all use libvirt as their backend infrastructure management layer, even though the frontend tools look different, for example, oVirt, Red Hat Enterprise Virtualization (RHEV), OpenStack, Eucalyptus, and so on. This book is all about KVM, libvirt, and their associated tools.
The following figure summarizes how everything is connected:
Libvirt takes care of the storage, networking, and virtual hardware requirements needed to start a virtual machine, along with VM lifecycle management.
Here's how easy it is to start a VM using libvirt. Here, we are starting a VM named TestVM using virsh:
# virsh start TestVM
Note
virsh is the frontend command line that interacts with the libvirt service, and virt-manager is its GUI frontend. You will learn more about these tools later on in the book.
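If you want to confirm that the VM actually came up, you can list the running domains. This is standard virsh usage; the Id value shown here is only illustrative:
# virsh list
 Id    Name      State
 ----------------------
 2     TestVM    running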
In the backend, you can see that libvirt initiated the qemu process with a bunch of options:
# qemu-system-x86_64 -machine accel=kvm -name TestVM -S -machine pc-i440fx-1.6,accel=kvm,usb=off -m 4000 -realtime mlock=off -smp 2,sockets=2,cores=1,threads=1 -uuid 39ac4786-1eca-1092-034c-edb6f93d291c -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/TestVM.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/dev/vms/TestVM,if=none,id=drive-virtio-disk0,format=raw,cache=none,aio=native -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2 -netdev tap,fd=27,id=hostnet0,vhost=on,vhostfd=28 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:a5:cd:61,bus=pci.0,addr=0x3,bootindex=1 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -device usb-tablet,id=input0 -vnc 127.0.0.1:2 -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
Note
While introducing libvirt, we have deliberately avoided mentioning many of its features. This is done to make the concept clearer and to focus on the key functions of libvirt. As you progress through the chapters, you will be introduced to those features.
Now, you are familiar with the key components required to use KVM-based virtualization. Before we learn how to set up the environment, we should take a look at the system requirements.
Host system requirements
A virtual machine needs a certain amount of CPU, memory, and storage to be assigned to it. This means that the number of virtual machines you are planning to run on that particular host decides the hardware requirements for the KVM hypervisor.
Let's start with the minimum requirements to start two simple virtual machines on KVM with 768 MB of RAM each:
- An Intel or AMD 64-bit CPU with the virtualization extension: VT-x for Intel and AMD-V for AMD.
- 2 GB RAM.
- 8 GB of free disk space on the KVM hypervisor after Linux OS installation.
- 100 Mbps network.
Note
For the examples in the book, we are using Fedora 21. However, you are free to use any Linux distribution (Ubuntu, Debian, CentOS, and so on) that has KVM and libvirt support. We assume that you have already installed a Fedora 21 system, or another Linux distribution, with all the basic configurations, including networking.
Determining the right system requirements for your environment
This is a very important stage and we need to get this right. Having the right system configuration is the key to getting native-like performance from the virtual machines. Let us start with the CPU.
Physical CPU
An Intel or AMD 64-bit CPU with the virtualization extension: VT-x for Intel and AMD-V for AMD.
To determine whether your CPU supports the virtualization extension, you can check for the following flags:
# grep --color -Ew 'svm|vmx|lm' /proc/cpuinfo
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt lahf_lm ida dtherm tpr_shadow vnmi flexpriority ept vpid
The svm flag means that the CPU has AMD-V, the vmx flag means that the CPU has VT-x, and lm indicates 64-bit support.
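As an alternative to grepping /proc/cpuinfo, the lscpu command reports the virtualization extension directly. On a VT-x capable Intel host, the output should look something like the following:
# lscpu | grep -i virtualization
Virtualization:        VT-x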
If your CPU supports a virtualization extension, then your system is probably ready to host KVM virtual machines. You will also notice that the appropriate KVM modules get loaded automatically with no additional configuration. To verify whether the modules are loaded or not, use the following command:
# lsmod | grep kvm
kvm_intel             148081  9
kvm                   461126  1 kvm_intel
If the system is AMD, you will see kvm_amd instead of kvm_intel.
If you do not see the preceding CPU flags, or the KVM modules are not loaded, but you are sure that the system supports virtualization extensions, then try the following troubleshooting steps:
- Reboot the system and go to the BIOS.
- Go to advanced options for CPU. Enable Intel Virtualization Technology or Virtualization Extensions. For AMD, it should be enabled by default. The exact words might be different depending on your BIOS.
- Restart the machine.
- You should now see the KVM modules loaded. If you still do not see them as loaded, then try loading them manually.
# modprobe kvm_intel
or
# modprobe kvm_amd
(modprobe loads the parent kvm module automatically as a dependency.)
- If you are able to load them manually but they still don't work, then it is time to involve your hardware vendor or double-check the processor details on the respective Intel or AMD product pages.
In addition to the virtualization extension, you may need to enable Intel VT-d or AMD IOMMU (AMD-Vi) in the BIOS. These are required for direct PCI device assignment to virtual machines, for example, to assign a physical Network Interface Card (NIC) from the hypervisor to a virtual machine; we will cover this in more detail in the upcoming chapters.
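One quick way to check whether the kernel detected and enabled the IOMMU is to search the boot messages. Note that on Intel systems you may also have to add intel_iommu=on to the kernel command line:
# dmesg | grep -i -e DMAR -e IOMMU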
CPU cores
If you are planning to run server-class virtual machines, then one core per vCPU is recommended. When counting cores, do not count hyperthreaded cores on Intel CPUs, just the actual cores. Of course, you can overcommit by allocating more vCPUs than the available physical cores, but it comes with a performance penalty.
If you are planning to run desktop-class or less CPU-intensive virtual machines, then you can safely overcommit the CPU, since performance takes a back seat here and the priority shifts to VM density per hypervisor rather than performance.
Note
Overcommitting means assigning more virtual resources than the physical resources available.
There is no crystal clear definition of how many VMs you can run on a hypervisor. It all depends upon the type of workload inside the VMs and how much performance degradation you can afford. If all the VMs run CPU intensive tasks, then overcommitting vCPUs is a bad idea.
Tip
Use the lscpu command to see your CPU topology.
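For example, the following filters out the topology-related fields. On a single-socket machine with two hyperthreaded cores, the output would look something like this:
# lscpu | grep -E '^(CPU\(s\)|Thread|Core|Socket)'
CPU(s):                4
Thread(s) per core:    2
Core(s) per socket:    2
Socket(s):             1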
Physical memory
A simple rule of thumb you can use to decide how much memory you need for the physical node is to add up all the memory you plan to assign to virtual machines and add an additional 2 GB of RAM for the hypervisor itself to use.
This is the expected configuration if you are planning to run memory intensive workloads.
Similar to the CPU, KVM also supports memory overcommitting. This means that you can assign more memory to the VMs than the hypervisor actually has, with the risk of running out of memory. Usually this type of allocation is done for desktop class virtual machines or test virtual machines.
You can use the following formulas to find how much RAM will be available to the VMs:
- For systems with memory up to 64 GB:
RAM - 2 GB = Amount of RAM available to VMs in GB
- For systems with memory above 64 GB:
RAM - (2 GB + 0.5 * (RAM / 64)) = Amount of RAM available to VMs in GB
That is, we reserve an extra 0.5 GB for every 64 GB of RAM in the hypervisor, plus a mandatory 2 GB. Use this formula to get a rough idea of how much memory is available for the virtual machines. In some workloads, you may not need more than 5 GB of RAM for the hypervisor, even if the formula suggests reserving 10 GB for the hypervisor software on a system with 1 TB of RAM.
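As a quick sanity check, here is a minimal shell sketch of the formula above; the variable names are ours, not from any tool:
RAM_GB=256                                              # total physical RAM in GB (example value)
RESERVE_GB=$(echo "2 + 0.5 * ($RAM_GB / 64)" | bc -l)   # hypervisor reserve per the formula
AVAILABLE_GB=$(echo "$RAM_GB - $RESERVE_GB" | bc -l)
printf "Reserve: %.1f GB, available to VMs: %.1f GB\n" "$RESERVE_GB" "$AVAILABLE_GB"
For RAM_GB=256, this prints a 4.0 GB reserve, leaving 252.0 GB for the virtual machines.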
Storage
When considering storage space for the hypervisor, you need to factor in the space required for the OS installation, swap, and virtual machine disk usage.
How much swap space is recommended?
Determining the ideal amount of SWAP space needed is a bit complicated. If you are not planning to do any memory overcommit, then you can use the following suggestion for an oVirt Node, which is a dedicated KVM hypervisor for running the VMs only:
- 2 GB of swap space for systems with 4 GB of RAM or less
- 4 GB of swap space for systems with between 4 GB and 16 GB of RAM
- 8 GB of swap space for systems with between 16 GB and 64 GB of RAM
- 16 GB of swap space for systems with between 64 GB and 256 GB of RAM
If you are planning to do memory overcommitting, you will need to add additional swap space. If the overcommit ratio is 0.5 (that is, 50% more than the available physical RAM), then you need to use the following formula to determine the swap space:
(RAM x 0.5) + swap for OS = swap space required for overcommitting
For example, if your system has 32 GB of RAM and you are planning to use a 0.5 overcommit ratio, then the swap space required is (32 * 0.5) + 8 = 24 GB.
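The same arithmetic is easy to script. Here is a minimal sketch using the values from the example above; the variable names are ours:
RAM_GB=32; OS_SWAP_GB=8; RATIO=0.5
echo "Swap required: $(echo "$RAM_GB * $RATIO + $OS_SWAP_GB" | bc) GB"
This prints 24.0 GB for the 32 GB example. You can then compare the result against the swap currently configured on the system using swapon --show or free -h.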
A virtual disk can be stored as a file in local file system storage (ext3, ext4, xfs, and so on) or in shared file storage (NFS, GlusterFS, and so on). A virtual disk can also be created from block devices, such as LVM, a locally partitioned disk, an iSCSI disk, Fibre Channel, FCoE, and so on. In short, you should be able to attach any block device that the hypervisor sees to a VM. As you will have guessed by now, the space required is decided by how much disk space the VMs, and the applications installed in them, will need. In storage, you can also overcommit, similar to what we explained for CPU and memory, but it is not recommended for virtual machines that perform heavy I/O operations. An overcommitted virtual disk is called a thin provisioned disk.
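For example, a thin provisioned virtual disk can be created as a qcow2 file with qemu-img; the file starts small and grows on demand up to its virtual size. The path and size below are only examples:
# qemu-img create -f qcow2 /var/lib/libvirt/images/TestVM-data.qcow2 20G
# qemu-img info /var/lib/libvirt/images/TestVM-data.qcow2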
Further explanation about CPU, memory, and storage overcommitting will be given in the later chapters that cover virtual machines performance tuning.
Network
One NIC with a bandwidth of at least 1 Gbps is recommended for smooth network operation, but again, it depends on how you configure your virtual network infrastructure and on how the network requirements vary according to various scenarios.
We suggest binding multiple network interfaces together into a single channel using Linux bonding technology and building the virtual machine network infrastructure on top of it. This helps increase bandwidth and provides redundancy; an example of creating a bond follows the note below.
Note
There are several bonding modes, but not all of them are supported for building virtual network infrastructure. Mode 1 (active-backup), Mode 2 (balance-xor), Mode 4 (802.3ad/LACP), and Mode 5 (balance-tlb) are the only supported bonding modes; the remaining modes are not suitable. Mode 1 and Mode 4 are highly recommended and stable.
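As an illustration, the following creates an active-backup (Mode 1) bond using nmcli, on distributions that use NetworkManager. The interface names em1 and em2 are examples and will differ on your system:
# nmcli con add type bond con-name bond0 ifname bond0 mode active-backup
# nmcli con add type bond-slave ifname em1 master bond0
# nmcli con add type bond-slave ifname em2 master bond0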
Setting up the environment
This section guides you through the process of installing the virtualization packages, starting the libvirt service, and validating that the system is ready to host virtual machines using KVM virtualization technology.
Note
We assume that you have a Fedora 21 system ready with a graphical user interface loaded and Internet connectivity to access the default Fedora yum repository through which the required KVM virtualization packages can be downloaded. We also assume that the Virtualization Technology (VT) feature is enabled in your server's BIOS.
To verify whether the default yum repository is enabled on your system, use the yum repolist command. This command lists the yum repositories defined on the system. Look for a repository named Fedora 21 - x86_64 in the output; it is where you will find access to all the KVM virtualization packages.
Installing virtualization packages
This is the first step to converting your Fedora 21 server or workstation system into a virtualization host. Actually, this is a very easy thing to do. As root, you just have to execute the yum install <packages> command, where <packages> is a space-separated list of package names.
The minimum required packages for setting up a virtualization environment on a Fedora 21 system are libvirt, qemu-kvm, and virt-manager.
So you should use the following yum command:
# yum install qemu-kvm libvirt virt-install virt-manager -y
Many dependent packages are installed along with the preceding packages, but you do not need to worry about what they are or remember their names; the yum command will automatically detect the dependencies and resolve them for you.
The yum groupinstall method can also be used to install the necessary and optional packages required for setting up the KVM virtualization environment:
# yum groupinstall "virtualization" -y
It will install the guestfs-browser, libguestfs-tools, python-libguestfs, and virt-top packages along with the core components, such as libvirt and qemu-kvm.
Here is the output of yum groupinfo "virtualization" for your reference:
# yum groupinfo "virtualization"
Group: Virtualization
 Group-Id: virtualization
 Description: These packages provide a virtualization environment.
 Mandatory Packages:
   +virt-install
 Default Packages:
   libvirt-daemon-config-network
   libvirt-daemon-kvm
   qemu-kvm
   +virt-manager
   +virt-viewer
 Optional Packages:
   guestfs-browser
   libguestfs-tools
   python-libguestfs
   virt-top
For the time being, we would suggest that you install just the core packages using the yum install command to avoid any confusion. In later chapters, the optional utilities available for KVM virtualization are thoroughly explained with examples and installation steps.
Starting the libvirt service
After installing the KVM virtualization packages, the first thing that you should do is start the libvirt service. As soon as you start the libvirt service, it exposes a rich Application Programming Interface (API) to interact with the qemu-kvm binary. Clients such as virsh and virt-manager, among others, use this API to talk to qemu-kvm for virtual machine life cycle management. To enable and start the service, run the following command:
# systemctl enable libvirtd && systemctl start libvirtd
Tip
Use the libvirtd --version command to find out the libvirt version in use.
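You can also quickly confirm that the service is up and will come back after a reboot; both commands should report as shown:
# systemctl is-active libvirtd
active
# systemctl is-enabled libvirtd
enabled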
Validate and understand your system's virt capabilities
Before creating virtual machines, it's very important to validate the system to make sure that it meets all the prerequisites to be a KVM virtualization host, and to understand its virt capabilities.
Knowing this information will help you plan the number of virtual machines, and their configuration, that can be hosted on the system. There are two important commands that help in validating a system configuration for KVM. Let's start with virt-host-validate:
- virt-host-validate: Executing this command as the root user performs sanity checks on KVM capabilities to validate that the host is configured in a suitable way to run the libvirt hypervisor drivers using KVM virtualization. For example:
TestSys1 has all the necessary packages required for KVM virtualization but lacks hardware virtualization support. In this case, it will print out the following:
[root@TestSys1 ~]# virt-host-validate
  QEMU: Checking for hardware virtualization : WARN (Only emulated CPUs are available, performance will be significantly limited)
  QEMU: Checking for device /dev/vhost-net : PASS
  QEMU: Checking for device /dev/net/tun : PASS
  LXC: Checking for Linux >= 2.6.26 : PASS
- This output clearly shows that hardware virtualization is not enabled on the system and only "qemu" support is present, which is very slow compared to qemu-kvm.
It is the hardware virtualization support that gives KVM (qemu-kvm) virtual machines direct access to the physical CPU, helping them reach near-native performance. Hardware support is not present in standalone qemu.
Now, let's see what other parameters are checked by the virt-host-validate command when it's executed to validate a system for KVM virtualization (a quick existence check follows the list):
- /dev/kvm: The KVM drivers create a /dev/kvm character device on the host to facilitate direct hardware access for virtual machines. Not having this device means that the VMs won't be able to access the physical hardware, even though virtualization is enabled in the BIOS, and this will reduce the VMs' performance significantly.
- /dev/vhost-net: The vhost-net driver creates a /dev/vhost-net character device on the host. This character device serves as the interface for configuring the vhost-net instance. Not having this device significantly reduces the virtual machine's network performance.
- /dev/net/tun: This is another character special device used for creating tun/tap devices to facilitate network connectivity for a virtual machine. The tun/tap device will be explained in detail in future chapters. For now, just understand that having this character device is important for KVM virtualization to work properly.
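You can verify that these character devices exist with a simple ls; details such as the major and minor numbers will vary from system to system:
# ls -l /dev/kvm /dev/vhost-net /dev/net/tun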
Always ensure that virt-host-validate passes all the sanity checks before creating virtual machines on the system. You will see the following output on a system where it validates all the parameters:
[root@kvmHOST ~]# virt-host-validate
  QEMU: Checking for hardware virtualization : PASS
  QEMU: Checking for device /dev/kvm : PASS
  QEMU: Checking for device /dev/vhost-net : PASS
  QEMU: Checking for device /dev/net/tun : PASS
  LXC: Checking for Linux >= 2.6.26 : PASS
The second command is virsh. virsh (virtualization shell) is a command-line interface for managing VMs and the hypervisor on a Linux system. It uses the libvirt management API and operates as an alternative to the graphical virt-manager and the web-based kimchi project. The virsh commands are segregated under various classifications. The following are some important classifications of virsh commands, with a few illustrative invocations shown after the list:
- Guest management commands (for example, start and shutdown)
- Guest monitoring commands (for example, dommemstat and cpu-stats)
- Host and hypervisor commands (for example, capabilities and nodeinfo)
- Virtual networking commands (for example, net-list and net-define)
- Storage management commands (for example, pool-list and pool-define)
- Snapshot commands (for example, snapshot-create-as)
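To give a feel for these categories, here are a few standard invocations; TestVM is the example domain from earlier in the chapter:
# virsh list --all
# virsh dominfo TestVM
# virsh nodeinfo
# virsh net-list
# virsh pool-list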
Tip
To learn more about virsh, we recommend that you read the man page of virsh. virsh is a very well-documented command. Run # man virsh to access its man page.
The reason why we introduced the virsh command in this chapter is that virsh can display a lot of information about the host's capabilities, such as the host CPU topology, the memory available for virtual machines, and so on. Let's take a look at the output of the virsh nodeinfo command, which gives us the physical node's system resource information:
# virsh nodeinfo
CPU model:           x86_64
CPU(s):              4
CPU frequency:       2534 MHz
CPU socket(s):       1
Core(s) per socket:  2
Thread(s) per core:  2
NUMA cell(s):        1
Memory size:         7967796 KiB
Note
You must be root to run virsh commands.
In the virsh nodeinfo output, you can see the system hardware architecture, the CPU topology, the memory size, and so on. Obviously, the same information can be gathered using standard Linux commands, but you would have to run multiple commands. You can use this information to decide whether or not a host is suitable for your virtual machines, in the sense of hardware resources.
Another important command is virsh domcapabilities. The virsh domcapabilities command displays an XML document describing the capabilities of qemu-kvm with respect to the host and the libvirt version. Knowing the emulator's capabilities is very useful; it will help you determine the type of virtual disks you can use with the virtual machines, the maximum number of vCPUs that can be assigned, and so on.
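For example, the maximum number of vCPUs is reported in a vcpu element near the top of the XML. Here is a quick way to pull it out; the value shown is illustrative and depends on your machine type and qemu version:
# virsh domcapabilities | grep -i 'vcpu max'
  <vcpu max='255'/>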