Using libvirt/KVM with Vagrant¶
netlab uses Vagrant with the vagrant-libvirt plugin to start virtual machines in a libvirt/KVM environment.
To use the libvirt/KVM environment on a Linux bare-metal server or in a Linux VM:

* If you're using Ubuntu, execute `netlab install libvirt` to install KVM, libvirt, Vagrant, and the vagrant-libvirt plugin. You'll have to install the software manually on other Linux distributions.
* Create the lab topology file; a minimal example follows this list. *libvirt* is the default virtualization provider and does not have to be specified in the topology file.
* Start the lab with `netlab up`.
You MUST use **netlab up** to start the lab to ensure the virtual machines get correct management IP addresses – **netlab up** creates the *vagrant-libvirt* management network with a predefined IP address range and DHCP bindings.
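For example, this minimal topology file (a sketch; the node names are illustrative) starts two Cumulus VX virtual machines connected with a single link:

```yaml
# topology.yml -- a minimal sketch; node names are illustrative
defaults.device: cumulus    # default device type for all nodes
nodes: [ r1, r2 ]           # two lab devices
links: [ r1-r2 ]            # a single point-to-point link
```

Save it as `topology.yml` and run `netlab up` in the same directory to start the lab.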
We tested netlab with Vagrant version 2.3.4 and vagrant-libvirt plugin version 0.11.2. These are also the versions installed by the **netlab install libvirt** command.
Vagrant starts virtual machines from prepackaged VM images called boxes. While it’s possible to download some network device images from Vagrant Cloud, you’ll have to build most of the boxes you’d want to use in your lab.
You have to use the following box names when building the boxes:

| Virtual network device | Vagrant box name |
|---------------------------|-------------------|
| Cisco IOS XR | cisco/iosxr |
| Cisco CSR 1000v | cisco/csr1000v |
| Cisco Nexus 9300v | cisco/nexus9300v |
| Juniper vPTX (vJunos EVO) | juniper/vptx |
| Juniper vSRX 3.0 | juniper/vsrx3 |
| Mikrotik RouterOS 6 | mikrotik/chr |
| Mikrotik RouterOS 7 | mikrotik/chr7 |
The following Vagrant boxes are automatically downloaded from Vagrant Cloud when you’re using them for the first time in your lab topology:
| Virtual network device | Vagrant box name |
|------------------------|-------------------|
| Cumulus VX 5.0 (NVUE) | CumulusCommunity/cumulus-vx:5.0.1 |
Even if a new box version is available from Vagrant Cloud, Vagrant only outputs a warning to let you know an update is available; it does not download updated boxes automatically because boxes can be relatively large (see Vagrant box versioning for details). You can ignore the warning or update the box with `vagrant box update`. We recommend that you periodically download the updated boxes for the devices you use.
Building Your Own Boxes¶
Modifying VM Settings¶
The following node parameters influence the VM configuration created by vagrant-libvirt (see the example after this list):

* **cpu** – number of virtual CPUs allocated to the VM
* **memory** – VM memory (in MB)
* **libvirt.nic_model_type** – VM NIC model (example: *e1000*). The default netlab settings usually work fine.
* **libvirt.nic_adapter_count** – maximum number of VM NICs (default: 8)
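For example, this hypothetical node definition (a sketch; all values are illustrative) gives **r1** two vCPUs, 4 GB of memory, and room for up to 12 NICs:

```yaml
# A sketch of per-node VM settings; all values are illustrative
nodes:
  r1:
    device: cumulus
    cpu: 2                      # two virtual CPUs
    memory: 4096                # 4 GB of VM memory
    libvirt:
      nic_adapter_count: 12     # allow up to 12 VM NICs
```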
Replacing Vagrant Boxes¶
If you want to rebuild and install a Vagrant box with the same version number, you have to remove the old box manually. You also have to delete the corresponding volume (disk image) from the libvirt storage pool (the vagrant-libvirt plugin installs new boxes but does not clean up the old ones).
To delete an old version of a Vagrant box, use a procedure similar to the one described below:

1. Use `vagrant box list` to list the installed boxes.
2. Use `vagrant box remove <box-name> --box-version=<box-version>` to delete the Vagrant box[1].
3. Use `virsh vol-list --pool default`[2] to list the installed Vagrant box volumes.
4. Find the relevant volume name, for example `cisco-VAGRANTSLASH-iosxr_vagrant_box_image_7.4.2_box.img` for an IOS XR 7.4.2 image.
5. Delete the volume with `virsh vol-delete --pool default <volume-name>`.
The new Vagrant box will be copied into the libvirt storage pool the next time you use the affected device in your lab.
Libvirt Networking¶

netlab uses libvirt networks and P2P UDP tunnels to implement topology links:
* **P2P UDP tunnels** are used for links with two nodes and link **type** set to **p2p** (the default behavior for links with two nodes). P2P tunnels are transparent; you can run any layer-2 control-plane protocol (including LACP) over them.
* **libvirt networks** are used for all other links. They are automatically created and deleted by the `vagrant up` and `vagrant down` commands executed by **netlab up** and **netlab down**. **netlab up** sets the `group_fwd_mask` of all Vagrant-created Linux bridges to 0x4000 to enable LLDP passthrough.
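For example, in this hypothetical topology (a sketch) the **r1-r2** link is implemented with a P2P UDP tunnel, while the three-node link becomes a libvirt network (a Linux bridge):

```yaml
# A sketch: link implementation depends on the number of attached nodes
defaults.device: cumulus
nodes: [ r1, r2, r3 ]
links:
- r1-r2               # two nodes ==> P2P UDP tunnel
- [ r1, r2, r3 ]      # three nodes ==> libvirt network (Linux bridge)
```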
Connecting to the Outside World¶
Lab networks are created as private, very isolated libvirt networks without a DHCP server. If you want to have a lab network connected to the outside world, set the link **libvirt.public** attribute[3], and (if needed) specify the Ethernet interface used to reach the outside world with the link **libvirt.uplink** attribute[4].
Example: use the following topology to connect your lab to the outside world through **r1** on a Linux server that uses *enp86s0* as the name of the Ethernet interface:
```yaml
defaults.device: cumulus
nodes: [ r1,r2 ]
links:
- r1-r2
- r1:
  libvirt:
    public: True
    uplink: enp86s0
```
Using Existing Libvirt Networks¶
To attach lab devices to existing libvirt virtual networks:

* Set the link **bridge** attribute to the name of an existing network.
* Set the link **libvirt.permanent** attribute to *True* to tell the vagrant-libvirt plugin that it should not destroy the network on shutdown.
You can use this functionality to attach lab devices to public networks or networks extended with VXLAN transport.
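For example, the following link definition (a sketch; the network name *extnet* is hypothetical) attaches **r1** to an existing libvirt network and keeps that network around after the lab shuts down:

```yaml
# A sketch: attach r1 to the preexisting libvirt network 'extnet'
links:
- r1:
  bridge: extnet             # name of the existing libvirt network
  libvirt.permanent: True    # do not destroy the network on shutdown
```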
The **vagrant destroy** command will crash if it tries to destroy an existing non-persistent libvirt network, stopping the **netlab down** procedure. Rerun **netlab down** to complete the lab shutdown/cleanup process.
Libvirt Management Network¶
The vagrant-libvirt plugin uses a dedicated libvirt network to connect the VM management interfaces to the host TCP/IP stack. The **netlab up** command creates that network before executing **vagrant up** to ensure the network contains the desired DHCP mappings. The management network is automatically deleted when you execute **netlab down** (recommended) or **vagrant destroy**.
You can change the parameters of the management network in the **addressing.mgmt** pool (see the example after this list):

* **ipv4** – the IPv4 prefix of the management network (default: 192.168.121.0/24)
* **_network** – the libvirt network name (default: *vagrant-libvirt*)
* **_bridge** – the name of the underlying Linux bridge (default: *libvirt-mgmt*)
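For example, this snippet (a sketch; the prefix and network name are arbitrary) moves the management network to a different IPv4 prefix and renames the libvirt network:

```yaml
# A sketch: override management network parameters
# (prefix and network name are arbitrary examples)
addressing:
  mgmt:
    ipv4: 192.168.200.0/24
    _network: netlab_mgmt
```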
Starting Virtual Machines in Batches¶
The vagrant-libvirt plugin tries to start all the virtual machines specified in `Vagrantfile` in parallel. The resulting strain on CPU resources might cause VM boot failures in very large topologies. As a workaround, you can configure the *libvirt* virtualization provider to execute a series of `vagrant up` commands that start the virtual machines in smaller batches:
* Configure the batch size with the **defaults.providers.libvirt.batch_size** parameter (an integer between 1 and 50).
* If needed, configure the idle interval between batches with the **defaults.providers.libvirt.batch_interval** parameter (between 1 and 1000 seconds).
```yaml
provider: libvirt
defaults.device: cumulus
defaults.providers.libvirt.batch_size: 2
defaults.providers.libvirt.batch_interval: 10

nodes: [ a,b,c,x,z ]
module: [ ospf ]
links: [ a-x, a-z, b-x, b-z, c-x, c-z ]
```
Please note that the **batch_size** in this example is set artificially low, so that this pretty small topology generates three batches. A realistic **batch_size** depends on your hardware resources (CPU, memory) and VM type.
The virtual machines are batched based on their order in the **nodes** list/dictionary. You might want to adjust the node order to group virtual machines with long start times (example: Cisco Nexus OS or Juniper vSRX) into as few batches as possible, as shown in the sketch below.
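For example, with **batch_size** set to 2, this hypothetical node order (node names are illustrative) starts the two slow-booting Nexus 9300v VMs together in the first batch:

```yaml
# A sketch: slow-booting VMs grouped into the first batch
defaults.providers.libvirt.batch_size: 2
nodes:
  nxos1: { device: nxos }    # slow-booting VM, batch 1
  nxos2: { device: nxos }    # slow-booting VM, batch 1
  r1: { device: frr }        # fast-booting VM, batch 2
  r2: { device: frr }        # fast-booting VM, batch 2
```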
[1] You don't have to specify the box version unless you created multiple versions of the same box.

[2] A libvirt environment created with the **netlab install libvirt** installation script uses the *default* storage pool. A custom installation might use a different storage pool name.

[3] The default value of the **libvirt.public** attribute is *bridge*, which creates a *macvtap* interface for every node connected to the link.

[4] Use **ip addr** or **ifconfig** to find the interface name. Ubuntu 22.04, for example, uses weird interface names based on the underlying NIC type.