Kubernetes vs. Docker Swarm: A Reality Check for Norwegian DevOps in 2018
Let’s be honest. The hype cycle in our industry has reached fever pitch this year. If you aren't deploying a microservices architecture on Kubernetes, you're supposedly living in the Stone Age. I’ve sat in meetings in Oslo where CTOs demand a full K8s cluster for a monolithic Magento shop that receives 5,000 hits a day. It’s madness.
I have spent the last six months migrating legacy infrastructure to containerized environments. I've seen the CrashLoopBackOff errors haunt my dreams. I've debugged overlay networks at 3 AM. The truth is, orchestration is not a silver bullet. It is a trade-off between operational complexity and deployment velocity.
Today, we are going to look at the two main contenders fighting for your terminal in late 2018: Docker Swarm and Kubernetes. We will look at this through the lens of a Norwegian engineer who cares about stability, GDPR compliance, and raw performance.
The Pragmatist's Choice: Docker Swarm
Docker Swarm is currently the most underrated piece of software in the ecosystem. Since it was integrated directly into the Docker engine in version 1.12, it has offered a native clustering experience that just works. For 80% of the teams I consult for, Swarm is actually the better choice.
Why Swarm?
It respects the KISS principle. You don't need to install a separate binary. You don't need to manage a separate etcd cluster; Swarm managers carry their own built-in Raft store. The learning curve is practically flat if you know Docker Compose.
Here is how you start a cluster. It takes literally seconds:
# On the manager node
docker swarm init --advertise-addr 192.168.10.5
# Output gives you the join token instantly
# docker swarm join --token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2e7c 192.168.10.5:2377
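After each worker runs its join command, a one-line sanity check from the manager confirms the cluster is healthy:

# Back on the manager
docker node ls    # every node should report STATUS "Ready"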
Deploying a stack is equally trivial using a standard docker-compose.yml file (v3 format):
version: '3'
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 5
      restart_policy:
        condition: on-failure
      resources:
        limits:
          cpus: "0.5"
          memory: 512M
    ports:
      - "80:80"
    networks:
      - webnet
networks:
  webnet:
Run docker stack deploy -c docker-compose.yml myapp and you have a load-balanced, replicated service. Swarm's ingress routing mesh publishes port 80 on every node and balances across the replicas automatically. If a node dies, Swarm reschedules. Simple.
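Day-two operations are just as terse. A quick sketch, assuming the stack name myapp from the command above (stack services are named <stack>_<service>):

docker service ls                                    # replicas, images, ports at a glance
docker service scale myapp_web=10                    # scale the web tier
docker service update --image nginx:1.15-alpine myapp_web   # rolling update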
The Downside of Swarm
It lacks the rich ecosystem of Kubernetes. Helm charts don't exist here. Secrets management is basic. If you need complex stateful workloads or granular Role-Based Access Control (RBAC), you will hit a wall. But for stateless web tiers? It's unbeatable for speed.
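Basic does not mean absent, though. Swarm ships a secrets primitive that covers the simple cases; a minimal sketch (the names are made up, and I'm leaning on the official postgres image honoring the _FILE variant of its env vars):

# The secret is encrypted at rest in the Raft log
echo "changeme" | docker secret create db_password -
# Inside the container it appears as a file under /run/secrets/
docker service create --name db \
  --secret db_password \
  -e POSTGRES_PASSWORD_FILE=/run/secrets/db_password \
  postgres:10-alpine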
The Heavy Artillery: Kubernetes (v1.12)
Kubernetes (K8s) has won the mindshare war. With version 1.12 released just months ago, we are seeing better stability, but the complexity tax is high. K8s is not a platform; it is a platform for building platforms.
When you deploy K8s, you aren't just managing containers. You are managing:
- etcd: The brain of the cluster. Extremely sensitive to disk latency.
- CNI (Container Network Interface): Flannel, Calico, Weave? Pick your poison.
- Ingress Controllers: Traefik, Nginx, HAProxy.
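To make the contrast with Swarm's one-liner concrete, here is roughly what bootstrapping a v1.12 control plane with kubeadm looks like. A sketch, assuming Flannel as the CNI (the pod CIDR is the one Flannel's default manifest expects; the manifest URL is correct as of this writing):

# On the control-plane node
kubeadm init --pod-network-cidr=10.244.0.0/16
# Nodes stay NotReady until a CNI plugin is installed, e.g. Flannel:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# kubeadm prints a 'kubeadm join --token ...' command for each worker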
The "Etcd" Bottleneck
This is where most self-hosted clusters fail. etcd uses the Raft consensus algorithm and needs low-latency fsync operations to persist cluster state. Run it on slow storage and heartbeats get delayed, leader elections start churning, and the control plane eventually falls over.
Pro Tip: Never run etcd on standard HDD or shared storage with high latency. We benchmarked this extensively. If fsync latency exceeds 10ms consistently, the leader election starts flapping. This is why we enforce NVMe storage on all CoolVDS instances. You need that high I/O throughput to keep the cluster consensus stable.
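You can measure this before committing a node to etcd duty. The fio recipe below approximates etcd's write pattern: small sequential writes with an fdatasync after each one (the directory and sizes are illustrative). Watch the fsync/fdatasync latency percentiles in the output; the 99th percentile should stay under 10ms.

mkdir -p /var/lib/etcd-bench
fio --rw=write --ioengine=sync --fdatasync=1 \
  --directory=/var/lib/etcd-bench \
  --size=22m --bs=2300 --name=etcd-fsync-test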
Here is a snippet of a Kubernetes Deployment. Notice the verbosity compared to Swarm:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.4
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
To actually expose this, you need a Service, and likely an Ingress resource. It’s verbose, but it gives you total control.
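For completeness, a minimal Service to sit in front of that Deployment (ClusterIP, the default type; an Ingress on top is left as an exercise):

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80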
Infrastructure Matters: The KVM Factor
Whether you choose Swarm or K8s, there is an underlying truth that gets ignored: Containers are just processes sharing a kernel.
In 2018, many cheap VPS providers in Europe are still pushing OpenVZ or LXC virtualization. This is dangerous for container orchestration. If you try to run Docker inside an OpenVZ container, you are running containers inside a container. Kernel modules required for overlay networks (like `vxlan`) are often missing or restricted.
You need full hardware virtualization. At CoolVDS, we rely strictly on KVM (Kernel-based Virtual Machine). This gives your Docker host its own dedicated kernel. You can tune sysctl parameters without begging support for permission. You can load the specific kernel modules your CNI plugin requires.
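A quick sanity check before you install anything; if these commands fail on your VPS, no amount of YAML will save you:

# Overlay networks (Swarm and most K8s CNIs) need vxlan
modprobe vxlan && echo "vxlan: OK"
# kube-proxy in IPVS mode needs ip_vs
modprobe ip_vs && echo "ip_vs: OK"
lsmod | grep -E 'vxlan|ip_vs'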
Tuning Linux for Containers
If you are setting up a host for high-density containers, default Linux settings are insufficient. Add these to your /etc/sysctl.conf to avoid connection tracking tables filling up under load:
# Increase connection tracking max
net.netfilter.nf_conntrack_max = 131072
# Enable forwarding for overlay networks
net.ipv4.ip_forward = 1
# Increase map count for Elasticsearch/logging containers
vm.max_map_count = 262144
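Apply them without a reboot via sysctl -p. If the host will run Kubernetes, the kubeadm preflight checks also expect bridged traffic to traverse iptables:

sysctl -p
# Needed by kube-proxy and most CNI plugins
modprobe br_netfilter
sysctl -w net.bridge.bridge-nf-call-iptables=1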
The GDPR & Latency Angle
We are now six months post-GDPR implementation (May 2018). The legal landscape has shifted. If you are serving Norwegian customers, storing data on US-controlled clouds adds a layer of legal complexity regarding data processors.
Furthermore, physics is undefeated. If your users are in Oslo or Bergen, hosting in Frankfurt adds 20-30ms of round-trip time. Hosting in the US adds 100ms+. By hosting on a VPS in Norway, you keep data under local jurisdiction (keeping Datatilsynet happy) and drop that latency to near zero thanks to peering at NIX (Norwegian Internet Exchange).
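Don't take latency figures on faith; measure from where your users actually sit. The hostnames below are placeholders:

# Round-trip time, 10 samples
ping -c 10 frankfurt-host.example.com
# Per-hop view, handy for spotting bad peering
mtr --report --report-cycles 10 oslo-host.example.com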
Verdict: Which one to pick?
- Choose Docker Swarm if: You have a team of 1-5 developers, you want to move fast, and you don't need complex stateful orchestration. It allows you to focus on the code, not the cluster.
- Choose Kubernetes if: You are building a large-scale microservices platform, you need a rich ecosystem (monitoring, tracing, service mesh), and you have the budget for dedicated Ops engineers.
Whatever you choose, remember that orchestration adds overhead. The CPU cycles spent on the API server and the overlay network are cycles not serving your customers. Ensure your underlying infrastructure has the headroom to handle it. High-frequency CPUs and NVMe storage aren't luxuries in this game; they are requirements.
Ready to build your cluster? Don't let IO wait times kill your API server performance. Deploy a KVM-based, NVMe-powered instance on CoolVDS today and experience what sub-millisecond latency feels like.