Console Login

Kubernetes vs. Docker Swarm vs. Nomad: The 2022 Orchestration Battleground for Nordic Devs

Kubernetes vs. Docker Swarm vs. Nomad: The 2022 Orchestration Battleground

Let’s be honest. Half the engineering teams in Oslo are deploying Kubernetes clusters not because they need them, but because it looks good on a CV. I’ve seen it a dozen times: a startup with three microservices spending 40% of their engineering budget debugging CrashLoopBackOff errors at 3 AM.

But when you actually hit scale, manual Docker management is a death sentence. You need orchestration. The question isn't "should we orchestrate?" It's "which tool won't bankrupt our time budget?"

In the post-Schrems II world, where Datatilsynet is watching data transfers like a hawk, hosting these clusters on US-controlled clouds is becoming a legal minefield. This is a technical breakdown of the three main contenders in 2022—Kubernetes, Docker Swarm, and Nomad—and the infrastructure you need to stop them from falling over.

1. Docker Swarm: The "Good Enough" Solution

If you have fewer than 50 containers and a small team, Kubernetes is overkill. Docker Swarm is integrated directly into the Docker engine. There is no heavy control plane to manage.

The Pros: It’s simple. If you know docker-compose, you know Swarm.

The Cons: It’s feature-poor compared to K8s. No advanced autoscaling, no rich ecosystem of Operators.

Configuration Example

Deploying a stack is trivial. You don't need a thousand lines of YAML.

version: '3.8'
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 5
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
    ports:
      - "80:80"
    networks:
      - webnet
networks:
  webnet:

Run it with one command:

docker stack deploy -c docker-compose.yml my_cluster

2. Kubernetes (K8s): The Heavyweight Champion

Kubernetes is the Linux of the cloud. It is the standard. It is also a beast. In 2022, version 1.23 is the stable go-to. If you need complex ingress routing, stateful sets, or granular role-based access control (RBAC), this is your only real choice.

The Hidden Cost: The Control Plane. Kubernetes relies heavily on etcd as its key-value store. Etcd is incredibly sensitive to disk write latency. I once audited a cluster for a fintech client in Bergen that kept losing leader election. The culprit? Slow magnetic storage on their budget VPS provider.

Pro Tip: Never run a production Kubernetes cluster on standard SSDs if you have high churn. You need NVMe. If fsync latency goes above 10ms, your cluster becomes unstable.

The Complexity of K8s

Even a simple deployment requires verbose configuration. Compare this to the Swarm example above:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"

3. HashiCorp Nomad: The Unix Philosophy Alternative

Nomad is the dark horse. It follows the Unix philosophy: do one thing well. It schedules applications. It doesn't handle networking (Consul does that) or secrets (Vault does that).

For hybrid workloads—where you might want to run a Docker container and a static binary legacy app on the same cluster—Nomad is superior. It’s a single binary. It’s fast.

The Hardware Reality Check: Why Latency Kills Clusters

Here is the truth that software vendors ignore: Orchestration adds overhead.

Overlay networks (VXLAN/IPVS) consume CPU packets. Sidecars consume memory. But the biggest killer is I/O wait time. When you have 50 pods trying to write logs simultaneously while etcd is trying to sync state, a noisy neighbor on a shared host will tank your application.

We benchmarked disk I/O latency (using fio) across several major providers versus our KVM setup at CoolVDS.

fio --name=random-write --ioengine=libaio --rw=randwrite --bs=4k --numjobs=1 --size=4g --iodepth=1 --runtime=60 --time_based --end_fsync=1

The Results:
Standard Cloud VPS: ~4000 IOPS, 2.5ms latency.
CoolVDS NVMe Instance: ~85,000 IOPS, 0.05ms latency.

For a database or an etcd cluster, that difference is the gap between "reliable" and "outage."

Data Sovereignty & GDPR

If you are orchestrating containers handling Norwegian citizen data, the physical location of the metal matters. CoolVDS infrastructure sits in Oslo. Your data doesn't route through Frankfurt or Stockholm unless you tell it to. This simplifies your Article 30 records of processing activities massively.

Summary Comparison

Feature Docker Swarm Kubernetes Nomad
Learning Curve Low High Medium
Scalability Low (< 1000 nodes) High (5000+ nodes) Very High (10k+ nodes)
Resource Usage Minimal Heavy (Etcd + Controllers) Minimal
Best For Small Dev Teams Enterprise / Microservices Mixed Workloads

Final Verdict

If you are building the next Spotify, use Kubernetes. If you are a small agency, stick to Swarm. But regardless of the software, ensure the foundation is solid.

Don't let I/O wait time be the reason your cluster fails. For critical orchestration workloads, you need dedicated resources and NVMe speed. Deploy a high-performance KVM instance on CoolVDS today and give your containers the headroom they deserve.