
Using Containerlab with netlab

Containerlab is a Linux-based container orchestration system focused on creating virtual network topologies. To use it with netlab, install containerlab (for example, with the netlab install containerlab command) and select the clab provider in your lab topology.
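For example, a minimal topology that starts two FRR containers connected with a single link could look like this (device and node names are purely illustrative):

provider: clab
defaults.device: frr

nodes: [ r1, r2 ]
links: [ r1-r2 ]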

Supported Versions

We tested netlab with containerlab version 0.41.2. That’s also the version installed by the netlab install containerlab command.

The minimum supported containerlab version is 0.37.1 (released on 2023-02-27) – that version introduced changes to the location of generated certificate files.

If needed, use sudo containerlab version upgrade to upgrade to the latest containerlab version.
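For example, you can check the installed containerlab version and then upgrade it (the output format varies between releases):

containerlab version
sudo containerlab version upgrade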

Container Images

The lab topology file created by the netlab up or netlab create command uses these container images (use netlab show images to display the actual system settings):

Virtual network device    Container image
Arista cEOS               ceos:4.26.4M
Cumulus VX                networkop/cx:4.4.0
Cumulus VX with NVUE      networkop/cx:5.0.1
Dell OS10                 vrnetlab/vr-ftosv
FRR                       frrouting/frr:v8.4.0
Juniper vMX               vrnetlab/vr-vmx:18.2R1.9
Juniper vSRX              vrnetlab/vr-vsrx:23.1R1.8
Mikrotik RouterOS 7       vrnetlab/vr-routeros:7.6
Nokia SR Linux            ghcr.io/nokia/srlinux:latest
Nokia SR OS               vrnetlab/vr-sros:latest
VyOS                      ghcr.io/sysoleg/vyos-container
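If a node should use a different container image, set the image node attribute in the lab topology (the same attribute is used in the gnmic example later in this document); here is a hypothetical sketch pinning a node to a specific FRR release:

provider: clab
defaults.device: frr

nodes:
  r1:
    image: frrouting/frr:v8.4.0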

You can also use vrnetlab to build VM-in-container images for Cisco CSR 1000v, Nexus 9300v and IOS XR, OpenWRT, Mikrotik RouterOS, Arista vEOS, Juniper vMX and vQFX, and a few other devices.

Warning

You might have to change the default loopback address pool when using vrnetlab images. See Using vrnetlab Containers for details.

LAN Bridges

For multi-access network topologies, the netlab up command automatically creates additional standard Linux bridges.

You might want to use Open vSwitch bridges instead of standard Linux bridges (OVS interferes less with layer-2 protocols). After installing OVS, set defaults.providers.clab.bridge_type to ovs-bridge, for example:

defaults.device: cumulus

provider: clab
defaults.providers.clab.bridge_type: ovs-bridge

module: [ ospf ]
nodes: [ s1, s2, s3 ]
links: [ s1-s2, s2-s3 ]
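If Open vSwitch is not yet installed on the Linux host, install it with your distribution's package manager before starting the lab; on Debian or Ubuntu that would typically be (assuming the standard package name):

sudo apt-get install openvswitch-switch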

Container Runtime Support

Containerlab supports multiple container runtimes besides the default docker. The runtime to use can be configured globally or per node, for example:

provider: clab
defaults.providers.clab.runtime: podman
nodes:
  s1:
    clab.runtime: ignite

Using File Binds

You can use clab.binds to map host files into the container file system, for example:

nodes:
- name: gnmic
  device: linux
  image: ghcr.io/openconfig/gnmic:latest
  clab:
    binds:
      gnmic.yaml: '/app/gnmic.yaml:ro'
      '/var/run/docker.sock': '/var/run/docker.sock'

Tip

You don’t have to worry about dots in filenames: netlab knows that the keys of the clab.binds and clab.config_templates dictionaries are filenames and does not expand them into hierarchical dictionaries.

Generating and Binding Custom Configuration Files

In addition to binding pre-existing files, netlab can also generate custom config files on the fly based on Jinja2 templates. For example, this is used internally to create the list of daemons for the frr container image:

frr:
  clab:
    image: frrouting/frr:v8.3.1
    mtu: 1500
    node:
      kind: linux
      config_templates:
        daemons: /etc/frr/daemons

netlab tries to locate the templates in the current directory, in a subdirectory with the name of the device, and in the templates/provider/clab/<device> system directory. The .j2 suffix is always appended to the template name.

For example, the daemons template used in the above example could be ./daemons.j2, ./frr/daemons.j2 or <netsim_moddir>/templates/provider/clab/frr/daemons.j2; the result gets mapped to /etc/frr/daemons within the container file system.

You can use the clab.config_templates node attribute to add your own container configuration files [1], for example:

provider: clab

nodes:
  t1:
    device: linux
    clab:
      config_templates:
        some_daemon: /etc/some_daemon.cf

Faced with the above lab topology, netlab creates clab_files/t1/some_daemon from some_daemon.j2 (the template could be either in the current directory or in the linux subdirectory) and maps it to /etc/some_daemon.cf within the container file system.

Jinja2 Filters Available in Custom Configuration Files

The custom configuration files are generated within netlab and can therefore use standard Jinja2 filters. If you have Ansible installed as a Python package [2], netlab tries to import the ipaddr family of filters, making filters like ipv4, ipv6 or ipaddr available in custom configuration file templates.

Warning

Ansible developers love to restructure stuff and move it into different directories. This functionality works with two implementations of the ipaddr filters (tested on Ansible 2.10 and Ansible 7.4 / Ansible Core 2.14) but might break in the future – we’re effectively playing whack-a-mole with Ansible developers.
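As an illustration, a custom configuration template could combine node data with those filters. This is only a sketch: it assumes node attributes such as name and loopback.ipv4 are available to the template, which depends on your topology:

{# some_daemon.j2 – hypothetical template #}
hostname {{ name }}
{% if loopback is defined and loopback.ipv4 is defined %}
router-id {{ loopback.ipv4|ipaddr('address') }}
{% endif %}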

Using Other Containerlab Node Parameters

Default netlab settings support these additional containerlab parameters:

  • clab.type to set node type (used by Nokia SR OS and Nokia SR Linux)

  • clab.env to set container environment (used by Arista EOS to set Ethernet interface names)

  • clab.ports to map container ports to host ports

  • clab.cmd to execute a command in a container.

String values (for example, the command to execute specified in clab.cmd) are put into single quotes when written into the clab.yml containerlab configuration file – make sure you’re not using single quotes in your command line.
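For example, a hypothetical Linux node that publishes a container port and runs a simple command could look like this; the image name and command are purely illustrative, and the command deliberately avoids single quotes:

provider: clab

nodes:
  probe:
    device: linux
    image: python:3.12-alpine
    clab:
      ports: [ '8080:80' ]             # host port 8080 -> container port 80
      cmd: python3 -m http.server 80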

To add other containerlab attributes to the clab.yml configuration file, modify defaults.providers.clab.node_config_attributes settings, for example:

provider: clab
defaults.providers.clab.node_config_attributes: [ ports, env, user ]

Containerlab Management Network

containerlab creates a dedicated Docker network to connect the container management interfaces to the host TCP/IP stack. You can change the parameters of the management network in the addressing.mgmt pool:

  • ipv4: The IPv4 prefix used for the management network (default: 192.168.121.0/24)

  • _network: The Docker network name (default: netlab_mgmt)

  • _bridge: The name of the underlying Linux bridge (default: unspecified, created by Docker)
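For example, a hypothetical topology snippet that changes the management network prefix and names the underlying Linux bridge could look like this (the prefix and bridge name are purely illustrative):

provider: clab

addressing:
  mgmt:
    ipv4: 192.168.200.0/24
    _network: netlab_mgmt
    _bridge: clab_mgmt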

Deploying Linux Containers

The initial configuration process (netlab initial) does not rely on commands executed within Linux containers:

  • The /etc/hosts file is generated during the netlab create process from the templates/provider/clab/frr/hosts.j2 template (see Generating and Binding Custom Configuration Files).

  • Interface IP addresses and static routes to the in-lab default gateway are configured with ip commands executed on the Linux host but within the container network namespace.

  • Static default route points to the management interface.

You can therefore use any container image as a Linux node.
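For example, you could run a generic Alpine Linux container as an extra lab host. This is only a sketch: the image is an arbitrary choice, and the cmd keeps the container running because the image has no long-lived default process:

provider: clab

nodes:
  h1:
    device: linux
    image: alpine:latest
    clab:
      cmd: tail -f /dev/null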

Using vrnetlab Containers

vrnetlab is an open-source project that packages network device virtual machines into containers. The architecture of the packaged container requires an internal network, and it seems that vrnetlab (or the fork used by containerlab) uses IPv4 prefix 10.0.0.0/24 on that network, which clashes with the netlab loopback address pool.

If you’re experiencing connectivity problems or initial configuration failures with vrnetlab-based containers, add the following parameters to the lab configuration file to change the netlab loopback addressing pool:

addressing:
  loopback:
    ipv4: 10.255.0.0/24
  router_id:
    ipv4: 10.255.0.0/24

[1] As the global provider parameters aren’t copied into node parameters, use groups to specify the same set of configuration templates for multiple devices.

[2] Installing Ansible with Homebrew or into a separate virtual environment won’t work – netlab has to be able to import Ansible modules.