Using the Real Network with Docker

28 Mar 2016
Overlay networks are all the rage with Docker, but I generally prefer to just use the network the host is sitting on, rather than deal with NATing in and out of an overlay network. I just rebuilt my general utility box at home and set it up this way, so I figured I'd document it in case anyone else finds it useful, or better, can point out how to improve it! This is the same approach Stefan Schimanski described for Ubuntu. It's super simple and works well.
To start with, we need to pick an IP block to give to containers on the host. As my home network uses a (mostly) flat 192.168.0.0/16 network, I picked out 192.168.3.0/24 for containers on this particular host. The host in question is running vanilla CentOS 7 and has two NICs. I left the first NIC (eno1) alone, as that is what I was using to ssh in and muck about. The second one I wanted to put in a bridge interface for Docker.
So, first, I created a bridge, br0. This can be done lots of ways. Personally I used nmtui to make the bridge, then reconfigured it by editing the config file in /etc/. Probably didn't need nmtui in there, but I haven't done networking beyond "enable DHCP" in a Red Hat derivative since RH6 (the 1999 edition, not RHEL6).
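If you prefer the command line, the same bridge can probably be created non-interactively with nmcli; this is an untested sketch that mirrors the names and addresses in the config below, so check it against your NetworkManager version before trusting it:

```shell
# Create the bridge connection with STP off (mirrors the ifcfg file)
nmcli con add type bridge ifname br0 con-name bridge-br0 stp no

# Give it the same static addressing as the ifcfg file
nmcli con mod bridge-br0 ipv4.method manual \
    ipv4.addresses 192.168.3.0/16 \
    ipv4.gateway 192.168.1.1 \
    ipv4.dns 192.168.1.1
```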
```
# /etc/sysconfig/network-scripts/ifcfg-bridge-br0
DEVICE=br0
TYPE=Bridge
BOOTPROTO=none
ONBOOT=yes
STP=no
IPADDR=192.168.3.0
PREFIX=16
GATEWAY=192.168.1.1
DNS1=192.168.1.1
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
NAME=bridge-br0
UUID=97dcd4e2-0fdc-2301-8ffc-f0f60c835659
```
The bridge is configured to use the 192.168.0.0/16 network (well, looking at it, 192.168.3.0/16, but the mask wipes out the 3), with the same details (gateway, DNS, etc.). This is exactly the network as relayed by DHCP, though statically configured. It might be possible to configure a bridge via DHCP, but I have no idea how.
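Untested on this box, but the ifcfg format should also accept DHCP on a Bridge device; a sketch, if you want to try it:

```shell
# /etc/sysconfig/network-scripts/ifcfg-bridge-br0 (DHCP variant, untested)
DEVICE=br0
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes
STP=no
```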
The next step is to add the second NIC (eno2) to the bridge:
```
# /etc/sysconfig/network-scripts/ifcfg-eno2
TYPE=Ethernet
BOOTPROTO=none
BRIDGE=br0
NAME=eno2
UUID=e4e99e09-aa93-4d64-ab0d-5e2180e19c58
DEVICE=eno2
ONBOOT=yes
```
Note that this doesn’t have an IP of its own. I don’t want it to do anything but relay packets through to the containers, and vice versa.
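Once both interfaces are up, it's worth checking that eno2 actually joined the bridge. Something along these lines should show it (iproute2 commands; exact output varies by version):

```shell
# List interfaces enslaved to br0
ip link show master br0

# The bridge utility's view of the same thing
bridge link show
```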
We then tell Docker to use the br0 bridge and allocate IPs from 192.168.3.0/24. This is done via --bridge=br0 --fixed-cidr=192.168.3.0/24 when starting the Docker daemon:
```
# /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target docker.socket
Requires=docker.socket

[Service]
Type=notify
ExecStart=/usr/bin/docker daemon --bridge=br0 --fixed-cidr=192.168.3.0/24 -H fd://
MountFlags=slave
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=0

[Install]
WantedBy=multi-user.target
```
Docker kindly looks at the bridge it is putting virtual NICs in to get the network settings, so this Just Works. Nice touch picking that up from the bridge, Docker!
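To sanity-check it, start a throwaway container and ask Docker what address it picked; the image and container name here are just examples:

```shell
# Start a container, then print the IP Docker assigned from 192.168.3.0/24
docker run -d --name iptest busybox sleep 600
docker inspect -f '{{ .NetworkSettings.IPAddress }}' iptest
```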
Finally, we need to remember to enable IP forwarding:

```shell
echo "1" > /proc/sys/net/ipv4/ip_forward
```
and to make it persistent, I added a file:

```
# /etc/sysctl.d/60-ip_forward.conf
net.ipv4.ip_forward=1
```
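The drop-in file is only read at boot, so to apply it immediately (instead of echoing into /proc again) you can have sysctl reload everything, then confirm the setting took:

```shell
# Load all files under /etc/sysctl.d/ (and friends) right away
sysctl --system

# Verify the current value
sysctl net.ipv4.ip_forward
```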
Et voilà, Docker is now handing out IPs on the local network! If I were setting this up for a datacenter, or in a VPC, I'd give more thought to how big a block to give each container host. The full /24 feels generous. Look at your expected container density and go from there.
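As a rough sizing aid, the usable-address math is easy to script. This sketch assumes you lose one address each for the network, the broadcast, and the bridge itself:

```shell
# Usable container addresses for a given prefix length
prefix=24
total=$(( 1 << (32 - prefix) ))   # 256 addresses in a /24
usable=$(( total - 3 ))           # minus network, broadcast, and br0 itself
echo "A /$prefix leaves $usable addresses for containers"
```

For a /24 that works out to 253 containers per host, which is far more than this box will ever run.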