
Mastering Container Networking: From Docker Links to Kubernetes Concepts

Scaling Docker: The Networking Nightmare & The Kubernetes Promise

Let’s be honest: Docker is revolutionizing how we ship code, but the networking model is currently a disaster for anyone trying to run a serious production cluster. If you have been manually linking containers with --link or wrestling with the ambassador pattern, you know the pain. I have spent the last three nights debugging iptables NAT rules just to get a Redis slave to talk to its master across two different physical hosts.

Google recently dropped the source code for Kubernetes (K8s), and while it is currently in pre-alpha (v0.x), it proposes a networking model that actually makes sense: a flat address space where every pod (a group of one or more tightly coupled containers) gets its own IP address. No more port mapping hell. No more random high-numbered ports.

But you can't just deploy Kubernetes in production yet—it is too raw. However, you can and should implement its networking principles today using stable Linux tools. In this deep dive, we are going to build a cross-host container network manually using Open vSwitch (OVS) and GRE tunnels on Ubuntu 14.04 LTS.

Pro Tip: This tutorial requires kernel-level modification capabilities (loading openvswitch and gre modules). Most budget VPS providers using OpenVZ will block this. You need true virtualization. We use CoolVDS KVM instances for all our infrastructure because they provide raw access to the Linux kernel, essential for advanced networking.

The Problem: Docker's Default Bridge

By default, Docker creates a docker0 bridge. It picks a subnet (usually 172.17.0.0/16) and assigns IPs to containers from that pool. This works fine on a single laptop.
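
You can check this on any stock install; the exact address varies by Docker version, but it is almost always somewhere in 172.17.0.0/16:

# On any host running the Docker daemon
ip addr show docker0
# Typically reports an address such as 172.17.42.1/16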

The moment you add a second server, you have a collision. Both Docker daemons will likely default to the same subnet. Even if you change the subnet, Host A has no idea how to route packets to the container IP on Host B. The packets hit the wire and die because private container IPs are not routable on the public internet.

The Solution: The Overlay Network

To mimic the Kubernetes model, we need:

  1. Unique subnets for every host (e.g., Host A gets 172.17.1.0/24, Host B gets 172.17.2.0/24).
  2. A way to encapsulate traffic between hosts (GRE or VxLAN).
  3. A bridge that connects the Docker daemon to this tunnel.

Step 1: Preparing the Network Bridge

First, we need to install Open vSwitch. A standard Linux bridge would work here, but OVS gives us finer-grained control and a much cleaner path to tunneling and automation later on.

sudo apt-get update
sudo apt-get install openvswitch-switch
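
Before going further, confirm the kernel side is actually in place; this is exactly where container-based VPS platforms like OpenVZ fall flat. A quick sanity check (module name as shipped with the stock Ubuntu 14.04 kernel):

# The package normally loads the module for you
lsmod | grep openvswitch

# If nothing shows up, try loading it manually
sudo modprobe openvswitch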

Now, let’s create a bridge named obr0. Do this on both your CoolVDS nodes.

sudo ovs-vsctl add-br obr0

Step 2: Configuring the Tunnel

This is where the magic happens. We need to connect the bridge on Node A to its counterpart on Node B, and we will do it with a GRE tunnel. It’s older than VxLAN, but in 2014, it’s rock solid and supported by everything.

On Node A (assume IP 192.168.1.10):

sudo ovs-vsctl add-port obr0 gre0 -- set interface gre0 type=gre options:remote_ip=192.168.1.11

On Node B (assume IP 192.168.1.11):

sudo ovs-vsctl add-port obr0 gre0 -- set interface gre0 type=gre options:remote_ip=192.168.1.10
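
At this point ovs-vsctl should list the tunnel port on each bridge. The UUIDs and exact layout will differ on your system, but you want to see gre0 with the remote_ip you configured:

sudo ovs-vsctl show
# Expect something along these lines on Node A:
#   Bridge obr0
#       Port "gre0"
#           Interface "gre0"
#               type: gre
#               options: {remote_ip="192.168.1.11"}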

If you are hosting in Norway, checking the latency between your nodes is critical. Every round trip inside the tunnel inherits that latency, so once it creeps past 2-3ms this tunneling approach will noticeably hurt chatty workloads like database replication. We rely on CoolVDS because their datacenter peering in Oslo keeps internal latency well under a millisecond.

Step 3: Configuring Docker to Use the Bridge

Now we need to tell Docker to stop using its default bridge and use our OVS bridge instead. We also need to assign specific subnets to avoid overlap.
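
One gotcha before touching the config: Docker's -b flag expects the bridge to already exist and carry an IP address, and containers inherit that network with the bridge address as their default gateway. A minimal sketch, assuming the addressing used in this article (the /16 mask keeps the whole container range on-link, so the layer-2 tunnel, not host routing, carries cross-host traffic):

# On Node A
sudo ip addr add 172.17.1.1/16 dev obr0
sudo ip link set obr0 up

# On Node B
sudo ip addr add 172.17.2.1/16 dev obr0
sudo ip link set obr0 up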

Edit your Docker config (/etc/default/docker on Ubuntu 14.04):

# On Node A
DOCKER_OPTS="-b=obr0 --fixed-cidr=172.17.1.0/24"

# On Node B
DOCKER_OPTS="-b=obr0 --fixed-cidr=172.17.2.0/24"
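
Also keep the MTU in mind. GRE encapsulation eats roughly 24 bytes of the physical NIC's 1500-byte MTU, so full-size container frames can be silently dropped while small pings sail through. A conservative workaround, assuming a standard 1500-byte underlay (tune the value for your own network), is to lower the container MTU in the same file:

# Example for Node A; combine with the options above
DOCKER_OPTS="-b=obr0 --fixed-cidr=172.17.1.0/24 --mtu=1462"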

Restart the Docker daemon:

sudo service docker restart

Step 4: Routing and Verification

Because we are bridging layer 2 over GRE, ARP requests traverse the tunnel and the bridge forwards frames directly between hosts, so with this setup no static route is strictly required. If you ever skip OVS and route at layer 3 instead (plain per-host /24s, IP forwarding enabled on both hosts), the host OS does need to know where the other subnet lives, and you would add a static route. On Node A that would look like:

sudo ip route add 172.17.2.0/24 via 192.168.1.11 dev eth0

With the OVS setup we just built, skip the route and verify connectivity directly:

# On Node A, run a container
docker run -it ubuntu:14.04 /bin/bash
root@containerA:/# ip addr show
# You should see an IP from Node A's range, e.g. 172.17.1.x

# Ping a container on Node B (e.g., 172.17.2.1)
root@containerA:/# ping 172.17.2.1
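
If the ping hangs, the quickest way to tell whether traffic is actually entering the tunnel is to watch the physical interface for encapsulated packets; GRE is IP protocol 47:

# On either node, while the ping is running
sudo tcpdump -ni eth0 ip proto 47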

Performance & Compliance Implications

Running overlay networks introduces overhead. The encapsulation/decapsulation process consumes CPU cycles. On a standard VPS with "shared" CPU cores, you will see CPU Steal spikes during high network load. This is why we advocate for Dedicated Core instances or high-performance KVM setups like those at CoolVDS.
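
You can spot steal with vmstat; the last column (st) shows the percentage of CPU time the hypervisor took away from your guest:

# Sample once per second, five times, during a network-heavy test
vmstat 1 5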

Additionally, for our Norwegian clients dealing with Datatilsynet requirements, remember that traffic traversing these tunnels is unencrypted by default. If you are piping sensitive customer data between nodes over the public internet, you must use IPsec inside the GRE tunnel or switch to OpenVPN, though that comes with a significant performance penalty.

Comparison: Networking Approaches

Method         Pros                           Cons                            Best For
Docker Links   Easy, built-in                 Single host only, fragile       Dev environments
Port Mapping   Works everywhere               Port conflicts, security risk   Public-facing apps
OVS / Tunnels  Scalable, "Kubernetes-style"   Complex config, MTU issues      Production clusters

The Future is Kubernetes, The Reality is Here

Kubernetes is going to change how we manage this. It automates everything we just did manually. But until v1.0 arrives and stabilizes, knowing how to manipulate ovs-vsctl and understanding the flow of packets through the Linux kernel is what separates a script kiddie from a Systems Architect.

Don't let your infrastructure be the bottleneck. Whether you are experimenting with the Kubernetes alpha or building a solid Docker cluster today, you need a foundation that doesn't limit your kernel access.

Ready to build your cluster? Deploy a high-performance KVM instance on CoolVDS in under 55 seconds and get full root control.