Installing the tools:

  • Clone the netsim-tools GitHub repository (or the netsim-examples repository, which includes the netsim-tools repository as a submodule).

  • If needed, select the desired release with git checkout release-tag. Use git tag to get the list of release tags.

  • Within the netsim-tools directory, install the PyYAML, Jinja2, netaddr and python-box Python libraries with pip3 install -r requirements.txt.

  • Optional: install Ansible or use the ipSpace network automation container image. The tools were tested with Ansible 2.9 and 2.10.
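The steps above can be sketched as a shell session. This is an illustrative sequence, assuming the repository lives at github.com/ipspace/netsim-tools; the release tag is a placeholder you should replace with one listed by git tag:

```shell
# Clone the repository and enter it
git clone https://github.com/ipspace/netsim-tools.git
cd netsim-tools

# Optional: list available releases and check one out
git tag
git checkout <release-tag>    # placeholder: use a tag printed by 'git tag'

# Install the required Python libraries
pip3 install -r requirements.txt
```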

Alternatively, you could use the install.libvirt Ansible playbook to install Vagrant, the libvirt Vagrant plugin, netsim-tools, and all their dependencies on Ubuntu (tested on an Ubuntu 20.04 virtual machine):

$ wget
$ ansible-playbook install.libvirt

The playbook:

  • Installs all software packages required to use the libvirt Vagrant backend (including the vagrant-libvirt plugin)

  • Adds the current user to the libvirt group

  • Configures the vagrant-libvirt network

  • Clones the netsim-tools repository into /opt/netsim-tools and makes that directory writeable by the current user

  • Instantiates a new Python virtual environment in /opt/netsim-tools and installs the Python dependencies into it.

  • Installs Ansible collections for supported network devices (IOS, NXOS, EOS, Junos)

Building Your Lab

The current version of the config-generate tool contains templates needed to create a Vagrant topology containing Cisco IOSv, Cisco CSR 1000v, Cisco Nexus 9300v, and Arista vEOS devices with the vagrant-libvirt or virtualbox provider. The tool also supports container-based network operating systems powered by containerlab.

The Vagrant templates were tested on Ubuntu 20.04; if your environment needs specific adjustments please submit a pull request.


  • The only device tested with VirtualBox is Nexus 9300v (more details)

  • The only device tested with Containerlab is Arista cEOS

Building a libvirt-based Lab

If you’re new to libvirt, read the Using Libvirt Provider with Vagrant blog post by Brad Searle.

The vagrant-libvirt network needs static DHCP bindings to ensure the network devices get consistent management IP addresses. See creating vagrant-libvirt network for details.

The Vagrantfile templates for individual network devices were derived from the build your own Vagrant box for vagrant-libvirt environment blog posts.

Use the vagrant mutate plugin to migrate the Cisco Nexus 9300v virtualbox image into a libvirt image:

  • Download the .box file from the vendor web site;

  • Install the virtualbox box with the vagrant box add filename --name boxname command (use cisco/nexus9300v for Nexus 9300v)

  • Mutate the virtualbox image into a libvirt image with the vagrant mutate boxname libvirt command.

  • Remove the virtualbox box with the vagrant box remove boxname --provider virtualbox command.
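Putting the four steps together, the migration might look like the session below (nexus9300v.box is a hypothetical file name for the box downloaded from the vendor web site):

```shell
# Install the vagrant-mutate plugin if you don't have it yet
vagrant plugin install vagrant-mutate

# Add the downloaded VirtualBox box under the expected name
vagrant box add nexus9300v.box --name cisco/nexus9300v

# Convert the VirtualBox box into a libvirt box
vagrant mutate cisco/nexus9300v libvirt

# Remove the now-redundant VirtualBox copy
vagrant box remove cisco/nexus9300v --provider virtualbox
```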


  • Nexus 9300v takes almost exactly a minute after SSH becomes available on the management interface to “boot” the Ethernet linecard (module 1) and activate Ethernet interfaces. Leave it alone for a while even when Vagrant tells you the VM is ready.

  • If you’re experiencing high CPU utilization with Cisco CSR, set halt_poll_ns to zero.

  • For more Vagrant details, watch the Network Simulation Tools part of Network Automation Tools webinar.
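To set halt_poll_ns to zero, write to the kvm kernel module parameter on the host. The change takes effect immediately but does not survive a reboot; persist it with a modprobe options file if needed:

```shell
# Disable KVM halt polling (reduces idle CPU burn of the CSR VM)
echo 0 | sudo tee /sys/module/kvm/parameters/halt_poll_ns

# Verify the new value
cat /sys/module/kvm/parameters/halt_poll_ns
```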

Creating vagrant-libvirt Network

The Vagrant libvirt provider connects the management interfaces of managed VMs to the vagrant-libvirt virtual network. Vagrant can figure out the device IP address based on dynamic DHCP mappings; netsim-tools can’t. To make the Ansible inventory created by the create-topology tool work, your virtual network MUST include static DHCP bindings that map the MAC addresses used by create-topology into the expected IP addresses.

The easiest way to create the management network is to use the XML file netsim/templates/provider/libvirt/vagrant-libvirt.xml:

  • If needed, delete the existing vagrant-libvirt network with virsh net-destroy vagrant-libvirt

  • Create the management network with virsh net-create path/vagrant-libvirt.xml
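Assuming the repository was cloned into /opt/netsim-tools (the directory used by the install.libvirt playbook), the two steps might look like this; net-undefine is included because a persistent network definition would otherwise block net-create:

```shell
# Remove the existing vagrant-libvirt network (skip if it doesn't exist)
virsh net-destroy vagrant-libvirt
virsh net-undefine vagrant-libvirt

# Create the management network with static DHCP bindings
virsh net-create /opt/netsim-tools/netsim/templates/provider/libvirt/vagrant-libvirt.xml
```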

You could also add DHCP bindings to your existing vagrant-libvirt network; here’s the XML definition:

  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr1' stp='on' delay='0'/>
  <mac address='52:54:00:d8:3f:0d'/>
  <ip address='' netmask=''>
    <dhcp>
      <range start='' end=''/>
      <host mac='08:4F:A9:00:00:01' ip=''/>
      <host mac='08:4F:A9:00:00:02' ip=''/>
      <host mac='08:4F:A9:00:00:03' ip=''/>
      <host mac='08:4F:A9:00:00:04' ip=''/>
      <host mac='08:4F:A9:00:00:05' ip=''/>
      <host mac='08:4F:A9:00:00:06' ip=''/>
      <host mac='08:4F:A9:00:00:07' ip=''/>
      <host mac='08:4F:A9:00:00:08' ip=''/>
      <host mac='08:4F:A9:00:00:09' ip=''/>
      <host mac='08:4F:A9:00:00:10' ip=''/>
      <host mac='08:4F:A9:00:00:11' ip=''/>
    </dhcp>
  </ip>

Notes on Arista EOS vagrant-libvirt Box

I used the recipe published by Brad Searle and modified it slightly to make it work flawlessly with EOS 4.25.0. After applying Brad’s initial configuration (do not configure his event handlers), execute these commands to generate a PKI key and certificate:

security pki key generate rsa 2048 default
security pki certificate generate self-signed default key default param common-name Arista

After generating the PKI certificate, add these configuration commands to enable NETCONF and RESTCONF:

management api http-commands
 no shutdown
management api netconf
 transport ssh default
management api restconf
 transport https default
  ssl profile default
  port 6040
management security
 ssl profile default
  certificate default key default

Finally, remove the custom shell from the vagrant user with

no user vagrant shell

Vagrant will be totally confused when it sees something that looks like a Linux box but doesn’t behave like one, so add these commands to the Vagrantfile (already included in the eos-domain.j2 template):

config.ssh.shell = "bash"
config.vm.guest = :freebsd

Notes on Juniper vSRX Vagrantfile template

The Vagrant template for vSRX uses the default_prefix libvirt parameter to set the domain (VM) name and uses the VM name to set the libvirt vCPU quota.

The template has been tested with Vagrant version 2.2.14. Some earlier versions of Vagrant generated VM names using a slightly different algorithm (the underscore between default_prefix and the VM name was added automatically) and might thus generate an unexpected VM name. To fix that problem, remove parts of the vsrx-domain.j2 template:

  • Remove domain.default_prefix parameter (default value should generate the expected VM name) or

  • Remove the whole CPU-limiting logic (trading CPU cycles for simplicity)

Notes on Arrcus ArcOS vagrant-libvirt Box and Ansible Collections

Please reach out to your Arrcus Customer Engineering Representative or Contact Arrcus for access to the Arrcus Vagrant Box file and ArcOS Ansible collections.