How to set up LXD containers that communicate over the LAN

Asked by Adil · Jan 24, 2017 · Viewed 7.5k times

I have a set of servers wired up to a LAN. I am able to install and work with LXD containers on a machine, but for the life of me I can't get the containers visible on the network. I have attempted to follow these URLs, to no avail:

My servers are set up as follows:

  1. eth0 - Hardware NIC connected to the Internet
  2. eth1 - Hardware NIC connected to the LAN

If I try to set up a bridge on the eth1 device via lxdbr0, the containers are not visible on the LAN. If I try to set up a bridged br0 device manually, bridged to eth1 and using DHCP, the device fails to start.

My /etc/network/interfaces looks like this:

iface lo inet loopback

# The primary network interface
iface eth0 inet static
    address x.x.x.x
    netmask 255.255.255.224
    gateway x.x.x.x

iface eth1 inet static
    address 192.168.0.61/23

iface br0 inet dhcp
    bridge_ports eth1
    bridge-ifaces eth1
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0

auto lo eth0 eth1 br0

Is it possible to create containers that are visible on the LAN and can connect to the internet?

LXD v2.7 on Ubuntu 16.04

Answer

Answered by Joshua Schaeffer · Mar 29, 2017

Yes, this is very possible. I haven't played around with all the new networking features introduced in LXD 2.3, so I can't speak to any of that, but it looks like you want a pretty simple network layout, so those features may not even come into play. I do something somewhat similar to your layout. I have 4 NICs in all my servers. The first two I put in a bond on my management network, and the second two I put in a LAG (another bond) and use for all LXD traffic. I have multiple VLANs, so my LAG is set up as a trunk port, and I create a VLAN device for each VLAN I want to be able to connect to. I then put those VLAN devices into a bridge that the container actually uses.

Take away all the bonding and raw VLAN devices and you have essentially the same setup: one NIC for management of the LXD host, one bridge for LXD container traffic. I don't use the default lxdbr0 device, but all the concepts should be the same.

A simple example

First, define the NIC or NICs that will be part of your bridge. In your case it looks like you are just using one NIC (eth1). Set the NIC to manual and do not assign an address to it.

auto eth1
iface eth1 inet manual

Next, define your bridge. Again, I would not define an IP address here; I prefer to assign my containers' IPs inside the containers. Set the bridge to manual as well. When the container starts, it will bring up the device.

auto br0
iface br0 inet manual
  bridge_ports eth1
  bridge_stp off
  bridge_fd 0
  bridge_maxwait 0
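
If you are using ifupdown (the default on Ubuntu 16.04), something along these lines should bring the new devices up without a reboot. This is only a sketch: it assumes the stanzas above are already in /etc/network/interfaces, that eth1 is not still up with its old static address, and that the bridge-utils package is installed (it provides the bridge_* stanzas).

sudo ifup eth1
sudo ifup br0
brctl show br0    # eth1 should now be listed as a port of br0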

Now all you have to do is use this bridge in your container's profile.

lxduser@lxdhost:~$ lxc profile show default
name: default
config: {}
description: ""
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
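
If you would rather not hand-edit the YAML, the same device can be added from the CLI. A sketch, assuming the bridge is named br0 and the device should show up as eth0 inside the container:

lxc profile device add default eth0 nic nictype=bridged parent=br0
lxc profile show default    # confirm the device was added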

Now start your container and edit its network configuration. For example, on Debian-based systems you would edit /etc/network/interfaces (in the container); on Red Hat-based systems you would edit /etc/sysconfig/network-scripts/ifcfg-eth0 (in the container). Here is the Debian example.

auto eth0
iface eth0 inet dhcp
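
For the Red Hat case mentioned above, a minimal ifcfg-eth0 along these lines should do the same thing (a sketch; the exact keys vary a bit between releases):

DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes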

As long as DHCP is working on the network that eth1 (on your LXD host) is plugged into, the container should get an address and be routable on that network. For internet access, eth1 has to be plugged into a subnet that has internet access; this does not depend on the container.
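
A quick way to verify from the host, assuming the container is named c1 (substitute your own container name):

lxc list c1                       # should show an IPv4 address from your LAN's DHCP range
lxc exec c1 -- ip addr show eth0  # the address as seen inside the container
lxc exec c1 -- ping -c 3 8.8.8.8  # internet reachability from the container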

More complex networking

If you want to put containers on different VLANs, want fault tolerance on your host, or both, then this requires a bit more setup. I use the configuration file below on my LXD host.

############################
# PHYSICAL NETWORK DEVICES #
############################

# Management network interface
auto enp2s0f0
iface enp2s0f0 inet static
    address 10.1.31.36/24
    gateway 10.1.31.1
    dns-nameservers 10.1.30.2 10.1.30.3 75.75.75.75
    dns-search harmonywave.com

#iface enp2s0f0 inet6 dhcp

# Second network interface
auto enp2s0f1
iface enp2s0f1 inet manual

# LXD slave interface (1)
auto enp3s0f0
iface enp3s0f0 inet manual
    bond-master bond1

# LXD slave interface (2)
auto enp3s0f1
iface enp3s0f1 inet manual
    bond-master bond1

##########################
# BONDED NETWORK DEVICES #
##########################

# Bond network device
auto bond1
iface bond1 inet manual
    bond-mode 4
    bond-miimon 100
    bond-lacp-rate 1
    bond-slaves enp3s0f0 enp3s0f1
    bond-downdelay 400
    bond-updelay 800

####################
# RAW VLAN DEVICES #
####################

# Tagged traffic on bond1 for VLAN 10
iface bond1.10 inet manual
    vlan-raw-device bond1

# Tagged traffic on bond1 for VLAN 20
iface bond1.20 inet manual
    vlan-raw-device bond1

# Tagged traffic on bond1 for VLAN 30
iface bond1.30 inet manual
    vlan-raw-device bond1

# Tagged traffic on bond1 for VLAN 31
iface bond1.31 inet manual
    vlan-raw-device bond1

# Tagged traffic on bond1 for VLAN 42
iface bond1.42 inet manual
    vlan-raw-device bond1

# Tagged traffic on bond1 for VLAN 50
iface bond1.50 inet manual
    vlan-raw-device bond1

# Tagged traffic on bond1 for VLAN 90
iface bond1.90 inet manual
    vlan-raw-device bond1

##########################
# BRIDGE NETWORK DEVICES #
##########################

# Bridged interface for VLAN 10
auto br0-10
iface br0-10 inet manual
    bridge_ports bond1.10
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0    

# Bridged interface for VLAN 20
auto br0-20
iface br0-20 inet manual
    bridge_ports bond1.20
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0    

# Bridged interface for VLAN 30
auto br0-30
iface br0-30 inet manual
    bridge_ports bond1.30
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0

# Bridged interface for VLAN 31
auto br0-31
iface br0-31 inet manual
    bridge_ports bond1.31
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0

# Bridged interface for VLAN 42
auto br0-42
iface br0-42 inet manual
    bridge_ports bond1.42
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0

# Bridged interface for VLAN 50
auto br0-50
iface br0-50 inet manual
    bridge_ports bond1.50
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0

# Bridged interface for VLAN 90
auto br0-90
iface br0-90 inet manual
    bridge_ports bond1.90
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0

Let's break this down. First I define the physical NICs; that's the PHYSICAL NETWORK DEVICES section. Nothing is different between mine and yours for this first NIC (eth0 for you, enp2s0f0 for me): I just define it statically and give it an address on my management network. The third and fourth NICs I use for container traffic. I wanted to use LACP in a LAG, so I defined those devices as manual and made them slaves of "bond1".

Next I define my bond devices; that's the BONDED NETWORK DEVICES section. In this case there is just the one bond, for container traffic. Again, I set it to manual and define the bond mode as 4 (LACP). I could have just as easily set up a different type of bond (active-passive, active-active, etc.).

Next, because the third and fourth NICs are physically connected to a trunk port on the switch, I have to use 802.1Q tagging so that traffic is actually tagged. I create a raw VLAN device for each VLAN that a container could possibly be in, naming it with a .XX suffix where "XX" is the VLAN ID. This naming isn't strictly necessary anymore; I just do it for easy identification. Each device is tagged with the "vlan-raw-device" stanza. This is the RAW VLAN DEVICES section.
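
One thing worth checking on Ubuntu 16.04 is that the helpers for bonding and VLAN tagging are installed, since the bond-* and vlan-raw-device stanzas depend on them. As far as I know that amounts to something like:

sudo apt-get install ifenslave vlan bridge-utils
sudo modprobe bonding
sudo modprobe 8021q
echo bonding | sudo tee -a /etc/modules    # load the modules on every boot
echo 8021q | sudo tee -a /etc/modules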

Finally, in the BRIDGE NETWORK DEVICES section I create a bridge for each VLAN device. This is what the container will actually use. Again, I set these to manual and do not define an IP address; that is defined inside the container.

Now all I have to do is attach the bridge for the VLAN I want to a container. For simplicity, and to avoid configuring every container individually, I just create a profile for each bridge/VLAN. For example, here is my profile for VLAN 31.

lxduser@lxdhost:~$ lxc profile show 31_vlan_int_server
name: 31_vlan_int_server
config: {}
description: ""
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0-31
    type: nic

Then I just assign this profile to any container I want on VLAN 31. At that point I can either set up the container's /etc/network/interfaces file with DHCP (if DHCP is enabled on that VLAN) or give it a static IP address that is part of that VLAN.
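
As a concrete sketch (the container name, image, and addresses below are made-up examples), launching a container onto VLAN 31 and giving it a static address might look like this. Note that the eth0 device in 31_vlan_int_server overrides any eth0 device defined in the default profile, since later profiles win for devices with the same name.

lxc launch ubuntu:16.04 web01 -p default -p 31_vlan_int_server

# then, inside the container, /etc/network/interfaces:
auto eth0
iface eth0 inet static
    address 10.1.31.50/24
    gateway 10.1.31.1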

The entire network layout looks something like this.

[Diagram: LXD multi-VLAN setup]

The container uses a veth device inside the bridge (which LXD creates). The host adds the raw VLAN device to the bridge as well, and both sit on top of the actual bond device, which in turn uses the physical NICs.
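
You can see that membership on the host with bridge-utils; for the VLAN 31 bridge, for example:

brctl show br0-31    # the interfaces column should list bond1.31 plus the container's veth device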

Concerning issues with connectivity

Finally, concerning your issue of not being able to connect to the internet: make sure that the bridge your containers are using has internet access. I like to take a step-by-step troubleshooting approach.

  1. Make sure that eth1 has internet access. Temporarily remove any bond and bridge configuration and use eth1 directly on the host. Change it from manual to static and give it an IP address.
    • On the host can you ping the NIC's IP address?
      • Yes? NIC is working, continue troubleshooting. No? NIC is not set up properly; everything else will fail.
    • On the host can you ping another host on the same subnet?
      • Yes? Switching is setup correctly, continue troubleshooting. No? Issue connecting to the network. Possible static routing issue on the box itself, possible issue with the switch.
    • On the host can you ping a machine on a different subnet (try ping 8.8.8.8)?
      • Yes? Routing is working properly, continue troubleshooting. No? Possible gateway misconfiguration on the host. Possible issue with the router. Possible issue with static routing on the box (check gateway of last resort).
    • Can you resolve a DNS address (try ping www.google.com)?
      • Yes? DNS resolution is setup correctly, continue troubleshooting. No? Issue with DNS resolution. Check /etc/resolv.conf or resolvconf setup.
  2. If everything is working on eth1 when it is setup statically then you know that NIC is functioning properly. The next step would be to change eth1 back to manual, remove the static IP from it, and recreate your bridge. Now assign the static IP address on the bridge itself and repeat all the sub-steps from step 1.

If everything works successfully then you should be able to connect with your container (after setting your bridge back to manual and removing the static IP). If not, then get an address on the container (static or DHCP) and repeat the sub-steps from step 1.
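
For reference, the checks from step 1 map to commands like these on the host (the addresses are only examples; substitute your own):

ping -c 3 192.168.0.61     # the NIC's own IP address
ping -c 3 192.168.0.1      # another host on the same subnet (e.g. the gateway)
ping -c 3 8.8.8.8          # a machine on a different subnet (tests routing)
ping -c 3 www.google.com   # tests DNS resolution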