
When I try to create an overlay network with Docker I get the following error:

docker network create --driver overlay somenetwork
Error response from daemon: failed to parse pool request for address space "GlobalDefault" pool "" subpool "": cannot find address space GlobalDefault (most likely the backing datastore is not configured)

I checked my Boot2Docker image: it is using sysvinit and not systemd, so this shouldn't be a problem, and the kernel version also seems to be good:

uname -r

Is it possible that this is a misuse of the overlay network concept, in that I try to run it on only one host? Maybe this causes that strange error?

I think it was a mistake to execute the network creation command against the locally running Docker daemon; I should have run it against my swarm manager instead. In that case the error message is different:

docker -H tcp://0.0.0.0:3375 network create --driver overlay network
Error response from daemon: No healthy node available in the cluster

When I check the status of the swarm cluster, there are indeed no nodes. Maybe the original problem is that my swarm join command was not fully correct?

docker run -d swarm join consul://127.0.0.
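For reference, the message in parentheses points at the usual cause: the pre-swarm-mode overlay driver only works when every daemon is backed by a shared key-value store and the network is created through a classic Swarm manager. The sketch below only illustrates that wiring; consul_ip, node_ip, manager_ip and the port numbers are placeholders, not values taken from this setup.

# Each Docker daemon needs the shared key-value store configured (daemon startup
# flags; without them "network create" fails with the datastore error above):
docker daemon --cluster-store=consul://<consul_ip>:8500 --cluster-advertise=<node_ip>:2376

# Each node joins the classic Swarm cluster, advertising its own address;
# if no node has joined, the manager answers "No healthy node available in the cluster":
docker run -d swarm join --advertise=<node_ip>:2375 consul://<consul_ip>:8500

# The overlay network is then created through the Swarm manager endpoint:
docker run -d -p 3375:2375 swarm manage consul://<consul_ip>:8500
docker -H tcp://<manager_ip>:3375 network create --driver overlay somenetwork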
With Docker Multi-Host networking, you can create virtual networks and add containers to them.

docker network create --driver overlay test_overlay
docker network ls

Let us jump to the /var/run/docker/netns area and see if everything is as expected. Well, we don't see any 'footprint' of our newly created network yet. This can be explained by the fact that network namespaces only get created when we start attaching containers to the network. While we cannot attach a stand-alone container to an overlay network, we can create a service and attach it to this network. Let's do so and then check the path to the network namespace again.

docker service create --name test_service --network test_overlay redis:latest

So, we see that there are three new entries (highlighted) in the path to the network namespace. The overlay network namespace will always have a notation like 1-xxxx. We can also list out the networks and find the matching ID. Let us inspect the entries one by one, starting with the overlay network namespace.

overns=/var/run/docker/netns/1-gc3lpbk65s
nsenter --net=$overns ip link show

overns=/var/run/docker/netns/3e2a28118c7a
nsenter --net=$overns ip link show

overns=/var/run/docker/netns/lb_gc3lpbk65
nsenter --net=$overns ip link show

Okay, before we move ahead, let us try to understand where we are. Bridge networks cater to a single host, while overlay networks are meant for multiple hosts. We created an overlay network named test_overlay and then created a service called test_service, attaching it to our test_overlay network. We can also safely assume that a container is created as the service comes up. Creating an overlay network followed by a service attached to it thus generates a few entries in the location /var/run/docker/netns, and we see there are three entries generated here. The entry in the notation 1-xxxx is the overlay network namespace. The entry which has no special characters in it is the container. There is a third entry as well, lb_gc3lpbk65. This can easily be checked out as follows: if we look carefully at the interfaces in the screenshots above, we will see that the container with the container_id 76ea37e58494 has its interface plugged into the test_overlay network namespace. The container has another interface as well, and this one is plugged into docker_gwbridge, which can easily be seen from the host. Now, if we look inside the third entry, i.e. lb_gc3lpbk65, we will see that it also has an interface, with the other end hooked into the overlay network namespace.

Thus, in this section we looked in some detail at the structure of the overlay network namespace and, at a high level, at how containers communicate with each other. In the next section we go deeper into the details of how exactly container-to-container communication happens over the overlay network, especially how VXLAN is used under the hood.
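As a quick, illustrative check of the wiring described above, and a preview of the VXLAN plumbing the next section digs into, the devices can be inspected directly. This is only a sketch: the namespace name 1-gc3lpbk65s is the one from this example and will differ on your host, and it assumes the standard nsenter, ip and bridge utilities are available there.

# The name after "1-" should match the beginning of the overlay network's ID:
docker network ls --no-trunc --filter driver=overlay
ls /var/run/docker/netns

overns=/var/run/docker/netns/1-gc3lpbk65s

# Detailed view of the devices inside the overlay namespace: a bridge, the veth
# ends coming from the containers, and a vxlan device whose "vxlan id" is the
# VNI used on the wire.
nsenter --net=$overns ip -d link show

# Forwarding database entries map remote container MACs to the VTEP (host) IPs
# they sit behind.
nsenter --net=$overns bridge fdb show

# From the host, the container's second interface shows up as a veth attached
# to docker_gwbridge.
ip link show master docker_gwbridge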
