How to identify orphaned veth interfaces and how to delete them?

Sild · Aug 13, 2015 · Viewed 19.4k times

When I start any container with docker run, I get a new veth interface. After deleting the container, the veth interface that was linked with it should be removed. However, sometimes this fails (often when the container started with errors):

root@hostname /home # ifconfig | grep veth | wc -l
53
root@hostname /home # docker run -d -P  axibase/atsd -name axibase-atsd-
28381035d1ae2800dea51474c4dee9525f56c2347b1583f56131d8a23451a84e
Error response from daemon: Cannot start container 28381035d1ae2800dea51474c4dee9525f56c2347b1583f56131d8a23451a84e: iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 33359 -j DNAT --to-destination 172.17.2.136:8883 ! -i docker0: iptables: No chain/target/match by that name.
 (exit status 1)
root@hostname /home # ifconfig | grep veth | wc -l
55
root@hostname /home # docker rm -f 2838
2838
root@hostname /home # ifconfig | grep veth | wc -l
55

How can I identify which interfaces are linked to existing containers, and how can I remove the extra interfaces that were linked to removed containers?

This doesn't work (even as root):

ifconfig veth55d245e down
brctl delbr veth55d245e
can't delete bridge veth55d245e: Operation not permitted

At the moment I identify extra interfaces by transmitted traffic (if there is no activity, it's an extra interface).
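Roughly like this, by reading the byte counters under /sys/class/net (the loop is just an illustration of the check):

for v in /sys/class/net/veth*; do echo "$(basename $v): $(cat $v/statistics/tx_bytes) bytes tx"; done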

UPDATE

root@hostname ~ # uname -a
Linux hostname 3.13.0-53-generic #89-Ubuntu SMP Wed May 20 10:34:39 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

root@hostname ~ # docker info
Containers: 10
Images: 273
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 502
 Dirperm1 Supported: false
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.13.0-53-generic
Operating System: Ubuntu 14.04.2 LTS
CPUs: 8
Total Memory: 47.16 GiB
Name: hostname
ID: 3SQM:44OG:77HJ:GBAU:2OWZ:C5CN:UWDV:JHRZ:LM7L:FJUN:AGUQ:HFAL
WARNING: No swap limit support

root@hostname ~ # docker version
Client version: 1.7.1
Client API version: 1.19
Go version (client): go1.4.2
Git commit (client): 786b29d
OS/Arch (client): linux/amd64
Server version: 1.7.1
Server API version: 1.19
Go version (server): go1.4.2
Git commit (server): 786b29d
OS/Arch (server): linux/amd64

Answer

larsks · Aug 13, 2015

There are three problems here:

  1. Starting a single container should not increase the count of veth interfaces on your system by 2, because when Docker creates a veth pair, one end of the pair is isolated in the container namespace and is not visible from the host (see the quick check after this list).

  2. It looks like you're not able to start a container:

    Error response from daemon: Cannot start container ...
    
  3. Docker should be cleaning up the veth interfaces automatically.
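As a quick sanity check (using a hypothetical busybox container; on a healthy host the second count should be exactly one higher than the first):

# ip -o link | grep -c veth
# docker run -d busybox sleep 300
# ip -o link | grep -c veth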

These facts make me suspect that there is something fundamentally wrong in your environment. Can you update your question with details about what distribution you're using, which kernel version, and which Docker version?

How can I identify which interfaces are linked to existing containers, and how can I remove the extra interfaces that were linked to removed containers?

With respect to manually deleting veth interfaces: A veth interface isn't a bridge, so of course you can't delete one with brctl.

To delete a veth interface:

# ip link delete <ifname>
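For example, to remove one of the interfaces from your listing:

# ip link delete veth55d245e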

Detecting "idle" interfaces is a thornier problem, because if you just look at traffic you're liable to accidentally delete something that was still in use but that just wasn't seeing much activity.

I think what you would actually want to look for are veth interfaces whose peer is also visible in the global network namespace. You can find the peer of a veth interface by asking ethtool for its peer_ifindex, and then it would be a simple matter of seeing if that interface is visible, and then deleting one or the other (deleting a veth interface will also remove its peer).
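Here is a rough sketch of that approach (it assumes ethtool is installed; treat it as a starting point rather than a turnkey cleanup script):

#!/bin/sh
# Flag veth interfaces whose peer index also appears in the host
# namespace: neither end belongs to a container, so the pair is
# likely orphaned.
for iface in $(ip -o link | awk -F': ' '/veth/ {print $2}' | cut -d@ -f1); do
    peer=$(ethtool -S "$iface" 2>/dev/null | awk '/peer_ifindex/ {print $2}')
    if [ -n "$peer" ] && ip -o link | grep -q "^$peer: "; then
        echo "candidate for deletion: $iface (peer ifindex $peer)"
    fi
done

Note that interface indexes are per-namespace, so a peer_ifindex that happens to collide with an unrelated host interface's index can produce a false positive; double-check a candidate before deleting it.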