Does a docker container have its own TCP/IP stack?

Manuel Durando · Jul 7, 2014 · Viewed 8.8k times

I'm trying to understand what happens under the hood when a network packet arrives on the wire connected to the host machine and is destined for an application inside a Docker container.

If it were a classic VM, I know that a packet arriving on the host would be handed by the hypervisor (say, VMware or VirtualBox) to the virtual NIC of the VM, travel up through the TCP/IP stack of the guest OS, and finally reach the application.

In the case of Docker, I know that a packet arriving at the host machine is forwarded from the host's network interface to the docker0 bridge, which is connected to a veth pair whose other end appears as the virtual interface eth0 inside the container. But what happens after that? Since all Docker containers use the host kernel, is it correct to assume that the packet is processed by the TCP/IP stack of the host kernel? If so, how?
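For what it's worth, I can see the bridge-and-veth topology from the host side. Here is a minimal sketch (Linux only, and it assumes the default docker0 bridge; custom Docker networks use other bridge names) that lists the host-side veth ports enslaved to the bridge and, for each, the ifindex of its peer, which is the eth0 inside some container's namespace:

```python
import os

BRIDGE = "docker0"  # default Docker bridge name; adjust for custom networks

# /sys/class/net/<bridge>/brif/ lists the interfaces enslaved to the bridge
ports_dir = f"/sys/class/net/{BRIDGE}/brif"
try:
    ports = os.listdir(ports_dir)
except FileNotFoundError:
    raise SystemExit(f"no bridge named {BRIDGE} on this host")

for port in sorted(ports):
    # for a veth device, 'iflink' holds the ifindex of its peer interface --
    # here, the eth0 that shows up inside a container's network namespace
    with open(f"/sys/class/net/{port}/ifindex") as f:
        ifindex = f.read().strip()
    with open(f"/sys/class/net/{port}/iflink") as f:
        peer = f.read().strip()
    print(f"{port}: host ifindex={ifindex}, peer ifindex={peer}")
```

So the veth pairs are clearly visible from the host, which is what makes me suspect the host kernel is doing all the processing.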

I would really like to read a detailed explanation (or a link to a resource, if you know one) of what's really happening under the hood. I've already read this page carefully, but it doesn't cover everything.

Thanks in advance for your reply.

Answer

TvE · Oct 24, 2014

The network stack, as in "the code", is definitely not in the container: it's in the kernel, of which there is only one, shared by the host and all containers (you already knew this). What each container has is its own separate network namespace, which means it has its own network interfaces and routing tables.
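You can observe the separation directly. Each process's network namespace shows up as an inode under /proc/&lt;pid&gt;/ns/net; two processes share a stack instance exactly when those inodes match. Here's a minimal sketch (Linux only, run as root; the container name "web" is just a placeholder for one of your running containers) that compares the host's namespace with a container's:

```python
import os
import subprocess

CONTAINER = "web"  # hypothetical container name; substitute your own

# ask Docker for the container's init PID as seen from the host
pid = subprocess.check_output(
    ["docker", "inspect", "--format", "{{.State.Pid}}", CONTAINER],
    text=True,
).strip()

# each network namespace is identified by the inode of /proc/<pid>/ns/net
host_ns = os.stat("/proc/self/ns/net").st_ino
container_ns = os.stat(f"/proc/{pid}/ns/net").st_ino

print(f"host netns inode:      {host_ns}")
print(f"container netns inode: {container_ns}")
print("separate namespaces" if host_ns != container_ns else "same namespace")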

Here's a brief article introducing the notion with some examples: http://blog.scottlowe.org/2013/09/04/introducing-linux-network-namespaces/ and I found this article helpful too: http://containerops.org/2013/11/19/lxc-networking/
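```

Different inodes mean the container gets its own interfaces, routes, and iptables rules, while every packet is still processed by the one shared kernel's TCP/IP code, just in the context of the container's namespace.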

I hope this gives you enough pointers to dig deeper.