The "Flat IP" Promise vs. Reality
So, Google finally dropped Kubernetes 1.0 in July. If you are like me, you immediately tried to spin up a cluster on your development servers, expecting it to work like Docker Compose. You were wrong.
The Kubernetes networking model imposes a strict requirement: all containers can communicate with all other containers without NAT. On GCE or AWS, this is handled by the cloud provider's SDN. But here in the real world of bare metal and VPS hosting, we don't have magical VPC route tables. We have a flat network interface and a headache.
If you try to route traffic between Pods across different nodes without an overlay, you hit a wall. I spent last weekend debugging packet drops between two nodes in a cluster. The culprit wasn't iptables—it was the fact that my provider's switch had no idea how to route the 10.244.0.0/16 subnet I defined for my Pods.
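If you want to see the failure for yourself, a two-minute check on a node tells the whole story. The Pod IP below is hypothetical, but the pattern is the same anywhere:

# From node A, ask the kernel how it would reach a Pod IP that lives on node B
ip route get 10.244.1.5
# If the answer points at your default gateway rather than node B's address,
# the provider's network gets the packet and has no idea what to do with it.
traceroute -n 10.244.1.5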
The Overlay Fix: Flannel to the Rescue
For those of us not running on Google's infrastructure, Flannel (by the CoreOS team) is currently the most pragmatic solution. It creates an overlay network that encapsulates Pod-to-Pod IP packets (TCP, UDP, whatever your containers speak) inside UDP datagrams, allowing them to traverse the physical network between your hosts.
However, getting Flannel to play nice with etcd version 2.0 requires precise configuration. One wrong key in your JSON, and flanneld will fail to allocate subnets. Here is the configuration I am currently running in production on my CoolVDS instances:
etcdctl set /coreos.com/network/config '{"Network":"10.1.0.0/16", "Backend": {"Type": "vxlan"}}'
Note the backend type: vxlan. If you leave the Backend section out, Flannel defaults to its udp mode, which does the encapsulation in userspace. Do not do this. Userspace encapsulation is a CPU killer: every packet bounces between kernel and userspace, thrashing context switches and destroying your throughput.
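For reference, here is roughly how the pieces fit together on each node once that key is in etcd. The etcd address is a placeholder from my lab setup; the subnet.env file and the --bip/--mtu flags are standard flannel and Docker 1.8 behaviour, but double-check the paths on your distro:

# Point flanneld at your etcd cluster (etcd 2.0 listens on 2379 by default)
flanneld --etcd-endpoints=http://10.0.0.10:2379 &
# flanneld writes its subnet lease to /run/flannel/subnet.env, something like:
#   FLANNEL_SUBNET=10.1.52.1/24
#   FLANNEL_MTU=1450
source /run/flannel/subnet.env
# Restart the Docker daemon so containers get addresses from the flannel subnet
# and an MTU that leaves room for the VXLAN header
docker daemon --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}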
The "Steal Time" Trap
This brings me to a critical point about hosting infrastructure. Overlay networks like VXLAN rely heavily on the kernel to process packets efficiently.
If you are running Kubernetes on a cheap "container-based" VPS (like OpenVZ or LXC), you are sharing a kernel with hundreds of other customers. You often cannot load the necessary vxlan kernel modules, or worse, your "guaranteed" CPU cycles are stolen by a noisy neighbor running a Minecraft server.
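The module problem, at least, is easy to test before you sign a contract. On a trial instance, try:

# Will the kernel give you VXLAN? On OpenVZ/LXC this typically fails outright.
modprobe vxlan && lsmod | grep vxlan
# VXLAN support landed in mainline around kernel 3.7, so check what you are on
uname -r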
Pro Tip: Always check your CPU steal time with top. If %st is above 5.0, your overlay network latency will spike unpredictably. This is why we standardized on KVM virtualization at CoolVDS: you get your own kernel, so you can load the modules you need, and your CPU allocation is actually yours.
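If you would rather not sit in an interactive top session, the same numbers are available non-interactively (both tools ship with procps on any mainstream distro):

# One-shot snapshot: look for the "st" value at the end of the Cpu(s) line
top -bn1 | grep "Cpu(s)"
# Or sample it for ten seconds; steal is the last column (st)
vmstat 1 10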
Etcd Needs IOPS, Not Just Promises
Your Kubernetes cluster state lives in etcd. This key-value store is extremely sensitive to disk latency: every write is fsynced to the Raft log before it is acknowledged. If fsync takes too long, the leader starts missing heartbeats, followers trigger a new election, and your API server goes down with the resulting churn.
I recently benchmarked a standard SATA SSD VPS against a CoolVDS NVMe instance. The difference in etcd write latency was nearly 10x. With the sheer volume of state changes in a K8s 1.0 cluster (especially if you use replication controllers heavily), standard SSDs are becoming the bottleneck. If you want a stable cluster, you need NVMe.
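You can get a rough read on a disk before trusting it with etcd. This is my quick-and-dirty heuristic, not an official etcd benchmark; point the test file at whatever disk etcd's data directory will live on (the path below is just an example):

# 1000 small writes, each forced to stable storage before the next,
# which is roughly the pattern of etcd's write-ahead log
dd if=/dev/zero of=/var/lib/etcd/fsync-test bs=2k count=1000 oflag=dsync
rm /var/lib/etcd/fsync-test
# If this takes several seconds, you are averaging multiple milliseconds per
# fsync, and that latency feeds straight into Raft heartbeats and elections.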
Data Sovereignty in Oslo
There is also the elephant in the room: data privacy. With the "Safe Harbor" agreement looking increasingly shaky and scrutiny from the Datatilsynet (Norwegian Data Protection Authority) intensifying, relying on US-based cloud storage is a risk many of the CTOs we work with are no longer willing to take.
Hosting your Kubernetes nodes in Oslo doesn't just lower latency to the NIX (Norwegian Internet Exchange)—it ensures your customer data stays within Norwegian legal jurisdiction, compliant with the Personopplysningsloven. Latency from Oslo to Trondheim is under 15ms on our network; routing that traffic through Frankfurt adds 30ms of unnecessary lag.
Final Thoughts
Kubernetes 1.0 is powerful, but it exposes every weakness in your infrastructure. If your network is slow, your disks are high-latency, or your kernel is shared, K8s will fail. Stop fighting the infrastructure.
Deploy a KVM-based, NVMe-powered instance on CoolVDS today. We support the custom kernel modules you need for Flannel and Docker 1.8 right out of the box.