Interested in LXD and LXC?  Check out our new LXC/LXD Deep Dive course here at Linux Academy!  We cover topics including installation, launching containers, persistent storage, networking, and even cover some fascinating use cases to make LXC useful and relevant to you right now!

We’ve also created this LXC-LXD Cheat Sheet to help you get started with LXD right away!


LXD is a really fun and easy way to jump into containers, especially if you have some experience with virtual machines.  LXD is designed to create machine containers, which strongly resemble virtual machines, so trying out new distributions or testing application deployments is easy and – dare I say it – fun.  LXD 2.0 brought myriad new features to the platform, but a few tasks remain adorably unfledged.  A single LXD node, for instance, is easy to initialize, and its containers come up on their own private network with the default settings.  Once a second node is added, a major limitation becomes obvious: each node has its own private network for its containers, so without some networking jujitsu, containers on different nodes will never be able to communicate with one another.

Getting all those containers on the same layer 2 network, regardless of which host each one resides on, is what this post is all about.  

A Modest Proposal

There are probably as many ways to solve this problem as there are network engineers.  I sought a solution that would be simple to explain, could be implemented on many disparate network architectures, and would offer the same features as single-node LXD.  It needed to retain name resolution and network address translation across nodes, without the complication of segmenting the container network.  After rolling up my sleeves and getting most of this working, I started to realize that my initial solution was neither simple nor easy to explain, which is probably why I could not find much guidance on the internet for getting this done.  Then I realized that I didn’t need to manually recreate the infrastructure that lxd init gives me on the initial node – I just needed to extend its reach.

My solution involves first initializing LXD on a master node using the default settings (which has the added benefit of keeping an existing LXD node, with all its images, containers, and so forth, intact if necessary).  Next, a secondary node is brought up and a GRE tunnel is built from a generic bridge on the new node to the lxdbr0 on the existing node.  Finally, LXD is initialized on the second node using the new bridge instead of the built-in lxdbr0.  Network address translation works, container hostname resolution works, and containers retain their IP addresses even after being transferred from one host to another.

Making It Work

If you’ve already got an LXD node up and running, you can skip the initial part of these instructions.  You will still need to make a few changes on your primary node, but you shouldn’t need to reinstall or reinitialize LXD.  The environment consists of two Ubuntu 16.04 hosts, each on a network connected to the internet.  They need not be on the same network, but they do need to be able to access one another through a GRE tunnel, so some firewalls or security rules may thwart your attempts, though I didn’t have any problems in my testing in my home lab or on Linux Academy’s Cloud Servers.  “Alpha” is the host that will be acting as the master server.  This is the one that might have an LXD installation up and running already.  “Bravo” is the host that we’ll be adding to our cluster.  I’m assuming LXD has not been initialized there before.  Before beginning, you should note the IP addresses of Alpha and Bravo.  In the instructions, I’ll refer to these IP addresses as if they were set as environment variables $ALPHA_IP and $BRAVO_IP.
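
For example, you could export those variables on each host before you begin (the addresses below are placeholders; substitute the real addresses of your two machines):

    # Set these in your shell on both hosts before running the commands that follow
    export ALPHA_IP=192.0.2.10   # address of Alpha (placeholder)
    export BRAVO_IP=192.0.2.20   # address of Bravo (placeholder)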

    1. Install some tools on Alpha.  We’re going to be building a new tunnel, so we should make certain the proper bridge utilities are installed:
      sudo apt-get install bridge-utils
    2. (Optional) If LXD isn’t initialized on Alpha, go ahead and do that now.  Let LXD handle the creation of the lxdbr0, as we’re going to use the services it manages for the entire cluster.  Accepting the defaults is fine for our purposes if you’re unsure of precisely what you want.
      sudo lxd init
    3. On Alpha, we need to start building our GRE tunnel.  This is done on each end by defining the link, plugging the link into the appropriate bridge, and then bringing the link up:
      sudo ip link add contgre type gretap remote $BRAVO_IP local $ALPHA_IP ttl 255
      sudo brctl addif lxdbr0 contgre
      sudo ip link set contgre up
    4. On Bravo, we need to complete the link.  First, we’ll create a new bridge for the containers residing on Bravo to use; next, we’ll set up the link in a manner similar to how we set it up on Alpha.  Once these steps are completed, it’s as though we have two network switches connected with an ethernet cable: one switch on each host for local containers, with network services like NAT, DHCP, and DNS running on Alpha and managed by LXD.
      sudo apt-get install bridge-utils
      sudo brctl addbr multibr0
      sudo ip link add contgre type gretap remote $ALPHA_IP local $BRAVO_IP ttl 255
      sudo brctl addif multibr0 contgre
      sudo ip link set contgre up
    5. On Bravo, we are now ready to run lxd init.  Answer most of the questions with the defaults or whatever you need for your LXD cluster.  The important questions come when the screen turns pink: when the installer asks about networking, don’t let it create a new bridge; instead, tell it to use the existing bridge we just created, multibr0.  If you want to manage Bravo remotely (as in the next step), also make LXD available over the network and set a trust password.

[Screenshots: the lxd init networking prompts on Bravo]

  6. Once Bravo is configured, if you made it available over the net, you can add it as a remote on Alpha so you can control Bravo from a single host:
    lxc remote add bravo $BRAVO_IP --password=password_you_chose
  7. Things are all set up, and the rest of these steps (plus the quick connectivity check after this list) will just allow you to test and exercise multi-node LXD from the Alpha host.
    lxc launch images:alpine/3.5 test1
    lxc launch images:alpine/3.5 bravo:test2
    lxc list
    lxc list bravo:
    lxc stop bravo:test2
    lxc move bravo:test2 local:
    lxc start test2
    lxc list
    lxc list bravo:
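
If everything is wired up correctly, containers on both hosts now sit on the same layer 2 segment, pull their addresses from the DHCP/DNS service LXD runs on Alpha’s lxdbr0, and can reach each other across the tunnel.  A quick sanity check from Alpha (the container names match the test1/test2 instances launched above; your addresses will differ):

    # Confirm the GRE tunnel is plugged into the bridge on Alpha
    brctl show lxdbr0

    # Ping one container from the other; if plain hostnames don't resolve
    # in your setup, fall back to the IP address shown by `lxc list`
    lxc exec test1 -- ping -c 3 test2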

Hopefully, this guide has given you some insight into how to make LXD a bit more usable in larger lab environments.  For more information about LXD, I invite you to take a look at my Linux Academy course LXC/LXD Deep Dive, where we explore how to run LXD in your environment and examine many potential use cases.


2 responses to “Multiple LXD Hosts Can Share a Discrete Layer-2 Container-Only Network”

  1. brian mullan says:


    I’ve used LXC & LXD for a long time now and was glad to see your LinuxAcademy LXD tutorials. I also notified the LXD/LXC community about it via the lxc-users mail alias and also on the

    I did want to let you know of a very cool L2 full-mesh auto-learning VPN that works very well with LXD/LXC.
    It’s called PeerVPN:

    PeerVPN is super easy to set up on each LXD host, and being full-mesh and auto-learning, each LXD host only needs to be configured to communicate with at least one existing VPN mesh LXD host; it will then “auto-learn” all of the other LXD hosts within a few seconds.

    A basic config file is only about 6-7 lines and can more or less be copied from LXD host to LXD host; change one line entry, restart PeerVPN on that host, and it will join the VPN mesh.  Unlike some VPN mesh solutions, PeerVPN was designed so traffic doesn’t have to traverse some “central” VPN node, so there is no single point of failure.
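
    For a sense of what that looks like, a minimal peervpn.conf is roughly the following (option names are from the PeerVPN documentation; the port, network name, PSK, and initpeers address are placeholders to change per deployment, and initpeers only needs to point at one existing mesh member):

    port 7000
    networkname LXDMESH
    psk ChangeThisSharedSecret
    enabletunneling yes
    interface peervpn0
    initpeers 203.0.113.5 7000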

    The PeerVPN tunnel interface on each LXD host just needs to be attached to that host’s lxdbr0 (or whatever its bridge is called) using something like:

    example: $ sudo ip link set dev peervpn0 master lxcbr0

    Anyway I thought you’d get a kick out of trying it. I have LXD nodes on AWS, Digital Ocean and here that all are on the same L2 VPN Mesh network. Matter of fact you can configure them to all use a single DHCP for the container IP address source.

    I have a wordpress post that is a couple years old now and I’m sorry to say I’ve not updated it yet but this will give you an idea of PeerVPN setup & use:

    PeerVPN was designed a couple years ago by a French Computer Science PHD and he did a great job.

    PeerVPN features include:

    Ethernet tunneling support using TAP devices.
    IPv6 support.
    Full mesh network topology.
    No NAT reconfiguration necessary, only one node with a public address is required.
    Shared strong key encryption and authentication support.

    Basic Configuration is documented here by the PeerVPN author:

  2. Greg W. says:

    Hi Chad,

    I’m going through your LXC/LXD Deep Dive course on Linux Academy.  A quick suggestion to make the GRE and bridge configuration persistent… I’m using a different name for the tunnel and bridge, and different IPs, but you get the idea.

    Host Alpha (/etc/network/interfaces.d/lxdtunnel)

    auto lxdtunnel
    iface lxdtunnel inet manual
    pre-up ip link add lxdtunnel type gretap remote <bravo-ip> local <alpha-ip> ttl 255
    pre-up brctl addif lxdbr0 lxdtunnel
    post-down brctl delif lxdbr0 lxdtunnel
    post-down ip link del lxdtunnel
    post-down modprobe -r ip_gre

    Host Bravo (/etc/network/interfaces.d/combined)

    auto multibr0
    iface multibr0 inet manual
    bridge_ports none
    bridge_stp off

    auto lxdtunnel
    iface lxdtunnel inet manual
    pre-up ip link add lxdtunnel type gretap remote <alpha-ip> local <bravo-ip> ttl 255
    pre-up brctl addif multibr0 lxdtunnel
    post-down brctl delif multibr0 lxdtunnel
    post-down ip link del lxdtunnel
    post-down modprobe -r ip_gre
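
    With those files in place, the interfaces come up automatically at boot; to activate them right away without rebooting (interface names follow the example above), something like this should work:

    sudo ifup lxdtunnel                         # on Alpha
    sudo ifup multibr0 && sudo ifup lxdtunnel   # on Bravo: bridge first, then tunnel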
