Kubernetes vs. Docker Swarm vs. Nomad: The 2021 Orchestration Reality Check

Let’s get one thing straight: You probably don't need Google-scale infrastructure. I see it every week—teams of three developers trying to maintain a high-availability Kubernetes cluster for a monolithic PHP application that gets 500 hits a day. It’s engineering suicide.

As we close out 2021, the container orchestration landscape has settled, but the complexity hasn't. We are living in the wake of the Schrems II ruling, which has turned cloud architecture in Europe from a technical challenge into a legal nightmare. If you are hosting personal data of Norwegian citizens on US-controlled clouds, you aren't just risking latency; you are risking fines from Datatilsynet.

I’ve spent the last decade debugging race conditions and watching clusters implode at 3 AM. Today, we are going to look at the three main contenders—Kubernetes, Docker Swarm, and HashiCorp Nomad—through the lens of a Norwegian Systems Architect who cares about uptime, legal compliance, and raw I/O performance.

1. Kubernetes (K8s): The De Facto Standard (with a Cost)

Kubernetes won the war. That's indisputable. With v1.22 released recently, the ecosystem is mature. But K8s is not a tool; it is a distributed operating system. It requires a dedicated platform team.

The Pain Point: Latency in etcd. If your underlying storage is slow, your cluster state becomes unstable. I’ve seen control planes crash simply because the VPS provider used spinning rust (HDD) or throttled IOPS on their "standard" SSDs. Kubernetes is chatty. It demands low latency.
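
If you suspect this is biting you, ask etcd directly how long commits are taking (the certificate paths below are kubeadm defaults; adjust them for your setup):

# "took = ..." in the output is your commit latency under the current load
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  endpoint health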

Configuration Reality Check

If you are running K8s, you need to tune the kubelet so it doesn't kill your critical pods during resource contention. Here is an often-overlooked snippet in /var/lib/kubelet/config.yaml that defines eviction thresholds:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
evictionSoft:
  memory.available: "200Mi"
evictionSoftGracePeriod:
  memory.available: "1m30s"
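
The kubelet only reads this file at startup, so restart it afterwards (for example, systemctl restart kubelet on systemd hosts) for the new thresholds to take effect.
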
Pro Tip: In late 2021, the removal of the extensions/v1beta1 Ingress API in v1.22 is breaking legacy manifests everywhere. Update your manifests to networking.k8s.io/v1 now, or your deployments will fail on newer clusters.
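
For reference, a minimal manifest against the new API looks like this (the name, host, and ingress class are placeholders for your own values):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.no
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80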

2. Docker Swarm: The "Dead" Tech That Won't Die

Every year pundits say Swarm is dead, and every year I see lean startups deploying it successfully. Why? Because a docker-compose.yml file is something a junior dev can understand in 15 minutes. K8s manifests require a PhD in YAML indentation.

Swarm is integrated into the Docker engine. It creates a mesh network automatically. If your goal is to deploy a stateless microservice stack across three nodes in Oslo, Swarm is arguably the most efficient path to production.

The Simplicity Argument

Deploying a stack in Swarm is ridiculously simple compared to the Helm charts typically required for K8s:

# Initialize the swarm manager
docker swarm init --advertise-addr 192.168.1.10

# Deploy the stack
docker stack deploy -c docker-compose.yml production_stack
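
For context, the docker-compose.yml in that command can be as small as this sketch (service name and image are placeholders; the deploy block is the part Swarm acts on):

version: "3.8"
services:
  web:
    image: nginx:1.21-alpine
    ports:
      - "80:80"
    deploy:
      replicas: 3
      restart_policy:
        condition: on-failure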

However, Swarm struggles with stateful workloads. Persistent volumes are a headache compared to K8s' CSI (Container Storage Interface) drivers. If you need complex database orchestration, avoid Swarm.

3. HashiCorp Nomad: The Unix Philosophy

Nomad is the dark horse. It follows the Unix philosophy: do one thing and do it well. Unlike K8s, which tries to own networking, storage, and compute all at once, Nomad just handles scheduling. It integrates seamlessly with Consul for networking and Vault for secrets.

I recently migrated a high-traffic media processing workload to Nomad because K8s overhead was consuming 15% of our compute resources. Nomad runs as a single binary. It’s lightweight and rock-solid. A basic batch job looks like this:

job "video-transcode" {
  datacenters = ["no-osl-1"]
  type = "batch"

  # This example assumes a parameterized job: each dispatch supplies the
  # input file via -meta input=<path>, read below as NOMAD_META_input.
  parameterized {
    meta_required = ["input"]
  }

  group "transcoder" {
    count = 5
    task "ffmpeg" {
      driver = "docker"
      config {
        image = "jrottenberg/ffmpeg:4.4-ubuntu"
        args = [
          "-i", "${NOMAD_META_input}",
          "/local/output.mp4"
        ]
      }
      resources {
        cpu    = 500 # MHz
        memory = 1024 # MB
      }
    }
  }
}
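
Registering the job and kicking off a single transcode then looks like this (the file name and input path are placeholders):

# Register the parameterized batch job
nomad job run video-transcode.nomad

# Dispatch one run; -meta supplies the value read as NOMAD_META_input
nomad job dispatch -meta input=/data/raw/episode-01.mkv video-transcode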

The Infrastructure Layer: Where Orchestrators Die

You can choose the perfect orchestrator, but if your network latency is trash, your distributed system will fail. This is especially true for the Nordic market. Routing traffic from a user in Trondheim to a server in Frankfurt adds 30-40ms of round-trip time. In a microservices architecture where one request hits five internal services, that latency compounds: if those hops cover similar distances, five sequential calls at ~35ms each is roughly 175ms of pure network wait before any application code runs.

Latency Matters:
Running a ping from a server in Oslo to a standard cloud provider in Amsterdam usually returns ~18ms. Running it to a local endpoint in Norway (like NIX-connected data centers) returns ~2ms.
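
You can verify this from any shell; the hostnames below are placeholders for whichever endpoints you actually care about:

# Compare round-trip times from your Oslo node
ping -c 20 ams-endpoint.example.net   # typically ~18 ms to Amsterdam
ping -c 20 nix-endpoint.example.no    # typically ~2 ms over local NIX peering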

The Storage Bottleneck

All three orchestrators rely heavily on consensus protocols (Raft). These protocols require fast disk writes (fsync) to persist the cluster state. This is where "cheap" VPS providers fail you. They oversell storage I/O, leading to high iowait times.

# Check your disk latency. If await is > 10ms, your etcd cluster is suffering.
iostat -xz 1 10
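
iostat only shows averages under whatever load happens to be running. To measure fsync latency the way etcd experiences it, a synthetic fio run is more telling (the directory is a scratch path on the disk you plan to use for etcd):

# Small writes with an fdatasync after each one, roughly matching etcd's WAL pattern
mkdir -p /var/lib/etcd-bench
fio --name=etcd-fsync-test --directory=/var/lib/etcd-bench \
    --rw=write --ioengine=sync --fdatasync=1 --size=22m --bs=2300
# Check the fdatasync percentiles in the output: a 99th percentile
# above ~10ms means etcd will struggle on this disk.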

At CoolVDS, we use pure NVMe storage arrays passed through via KVM. We don't throttle IOPS, because we know that 500ms of write latency on etcd is enough to trigger spurious leader elections and take your entire K8s control plane down with it.

The GDPR & Schrems II Verdict

This is the elephant in the room for 2021. The invalidation of the Privacy Shield means that relying on US-owned hyperscalers (AWS, Azure, GCP) for processing Norwegian personal data is legally risky. No orchestrator solves that for you; what matters is where the data physically lives and who legally controls the infrastructure underneath. With that caveat on the table, here is how the three compare:

Feature             Kubernetes          Docker Swarm      Nomad
Learning Curve      Steep               Low               Medium
State Management    Excellent (etcd)    Weak              Good (via Consul)
Resource Overhead   High                Low               Very Low
Best For            Enterprise Scale    Small Teams       Hybrid Workloads

Conclusion: Architecture is About Trade-offs

If you are building the next Spotify, use Kubernetes. If you are a team of five building a SaaS product, Docker Swarm or Nomad will let you sleep at night. But regardless of the software, the hardware dictates your reliability.

Don't put a Ferrari engine in a Go-Kart. Ensure your orchestration layer sits on infrastructure that offers local peering, verified data sovereignty, and NVMe I/O performance.

Ready to test your cluster performance? Spin up a high-performance KVM instance in Oslo on CoolVDS today and see what single-digit latency feels like.