Kubernetes vs. Docker Swarm vs. Nomad: Choosing the Right Orchestrator in a Post-Schrems II World
Let's be honest: 90% of the companies deploying Kubernetes today don't need Google-scale infrastructure. They need a reliable way to restart a Go binary when it crashes. Yet here we are, drowning in YAML manifests and debugging CrashLoopBackOff errors at 3:00 AM because someone thought a simple blog needed a service mesh.
I've spent the last decade fixing broken infrastructure across Europe. Last week, I watched a team in Oslo burn through their monthly cloud budget in four days because their orchestration layer was consuming more resources than the application itself. They were running a heavy K8s control plane on spinning rust. The etcd latency was spiking to 400ms. The cluster fell apart.
And if technical complexity wasn't enough, the legal landscape just shifted under our feet. On July 16th, the CJEU invalidated the EU-US Privacy Shield (Schrems II). Suddenly, where you host your cluster (and the jurisdiction of the hardware owner) matters just as much as your deployment strategy. If you are handling Norwegian user data, pushing it to a US-owned hyperscaler just became a massive compliance headache.
Today, we cut through the hype and compare the three main contenders for container orchestration in mid-2020: Docker Swarm, Kubernetes, and HashiCorp Nomad. We will look at each through the lens of performance and complexity, and explain why the underlying hardware (specifically NVMe-backed VPS) makes or breaks them.
1. Docker Swarm: The "It Just Works" Option
Docker Swarm is currently uncool. It doesn't have the resume-padding power of Kubernetes. But if you have a team of three developers and you need to run twenty microservices, Swarm is arguably the superior choice. It is built into the Docker engine. You don't need to install a separate control plane.
The Use Case: Small to medium web clusters, internal tooling, or scenarios where you control the entire stack.
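Standing up a cluster reflects that simplicity. A rough sketch on two fresh hosts (the IP address and token are placeholders):

```bash
# On the first node (the future manager), advertise the address workers will use
docker swarm init --advertise-addr 10.0.0.10

# init prints a join token; run the printed command on each worker
docker swarm join --token <worker-token> 10.0.0.10:2377
```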
Configuration Simplicity
In Swarm, you define your stack in a docker-compose.yml file. That's it. No Helm charts, no CRDs.
```yaml
version: '3.8'
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 3
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
    ports:
      - "80:80"
    networks:
      - webnet
networks:
  webnet:
```
To deploy this on a CoolVDS instance running Docker 19.03, you simply run:
```bash
docker stack deploy -c docker-compose.yml my_cluster
```
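Two standard follow-up commands confirm the rollout, using the stack name from the deploy command above:

```bash
# Replica counts per service in the stack
docker stack services my_cluster

# Where each replica landed and its current state
docker stack ps my_cluster
```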
The Trade-off: Swarm struggles with advanced stateful workloads, and it has no built-in autoscaler; scaling rules mean scripts or external tooling. If you need custom resource definitions or operators, you will hit a wall.
2. Kubernetes (K8s): The Industry Standard Heavyweight
Kubernetes (v1.18 is the current stable release) is the operating system of the cloud. It is powerful, extensible, and complex. It abstracts the hardware away completely, provided your hardware can keep up.
The Hidden Killer: Etcd Latency
Most K8s outages I debug aren't caused by code. They are caused by storage I/O. Kubernetes relies on etcd as its source of truth, and etcd is extremely sensitive to disk write latency: the official hardware guidance is to keep WAL fsync latency under roughly 10ms at the 99th percentile.
If you run a K8s control plane on a budget VPS with shared HDD storage (or even cheap, throttled SSDs), etcd will time out. Leader election fails. The API server becomes unresponsive.
Pro Tip: Check your etcd disk performance. If your `wal_fsync_duration_seconds` metric consistently exceeds 0.01s (10ms), your cluster is unstable. This is why we enforce pure NVMe storage on all CoolVDS instances. The high IOPS of NVMe is not a luxury for K8s; it is a requirement.
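Before you trust a disk with etcd, benchmark it. The fio run below roughly mirrors the synchronous-write pattern the etcd maintainers have published for WAL-style workloads; the target directory and sizes here are illustrative.

```bash
# Simulate etcd WAL writes: small sequential writes with fdatasync after each one
fio --name=etcd-wal-test \
    --directory=/var/lib/etcd-bench \
    --rw=write --ioengine=sync --fdatasync=1 \
    --size=22m --bs=2300

# Check the fsync/fdatasync latency percentiles in the output:
# the 99th percentile should stay comfortably under 10ms.
```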
The Configuration Beast
Here is just a part of a deployment manifest for the same Nginx service in K8s:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
```
You also need a Service, an Ingress, and likely a ConfigMap. The complexity explodes, but so does the control.
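For reference, a minimal Service to sit in front of that Deployment could look like this; the name is arbitrary and the selector simply mirrors the labels above.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx        # must match the pod template labels in the Deployment
  ports:
    - port: 80        # port exposed inside the cluster
      targetPort: 80  # containerPort on the pods
  type: ClusterIP     # switch to NodePort or LoadBalancer for external traffic
```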
3. HashiCorp Nomad: The Unix Philosophy
Nomad is the dark horse. While K8s tries to do everything, Nomad just does scheduling. It fits perfectly into the "Unix philosophy": do one thing and do it well. It integrates seamlessly with Consul for service discovery and Vault for secrets.
The Use Case: Mixed workloads. Need to run a Docker container right next to a static Java JAR file and a raw binary? Nomad handles that natively without forcing you to containerize everything immediately.
A Nomad job specification (HCL) looks like this:
job "web-cluster" {
datacenters = ["oslo-dc1"]
type = "service"
group "frontend" {
count = 3
task "nginx" {
driver = "docker"
config {
image = "nginx:latest"
port_map {
http = 80
}
}
resources {
cpu = 500
memory = 256
}
}
}
}
It's clean, readable, and the binary is a single file. Upgrading a Nomad cluster is significantly less terrifying than a Kubernetes upgrade.
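The mixed-workload claim is not marketing, either. Here is a minimal sketch of a second task group you could drop into the same job, using the exec driver to supervise a plain binary next to the Docker task above; the names and the artifact URL are placeholders.

```hcl
  group "legacy" {
    count = 1

    task "billing-daemon" {
      driver = "exec"  # runs an ordinary binary, no container image required

      # fetch the binary into the task's local/ directory
      artifact {
        source = "https://artifacts.example.com/billingd"  # placeholder URL
      }

      config {
        command = "local/billingd"
      }

      resources {
        cpu    = 200
        memory = 128
      }
    }
  }
```

Submitting either version is the same single command: `nomad job run web-cluster.nomad`.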
The Hardware Foundation: Why "Where" Matters
Software doesn't run on magic; it runs on metal. In 2020, the difference between a successful container deployment and a failure often comes down to two things: Kernel Isolation and I/O Throughput.
Virtualization: KVM vs. Containers-in-Containers
Some budget providers sell you "VPS" hosting that is actually just an OpenVZ container. Trying to run Docker inside OpenVZ is a nightmare of kernel module conflicts. You want KVM (Kernel-based Virtual Machine). KVM provides full hardware virtualization.
At CoolVDS, every instance is a KVM slice. This means you can load your own kernel modules, tune sysctl parameters for high-concurrency networking, and run Docker or K8s without permission errors.
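Two quick checks are worth baking into your provisioning scripts: verify what you are actually running on, then raise the networking limits container workloads tend to hit first. The sysctl values below are illustrative starting points, not universal recommendations.

```bash
# "kvm" = full virtualization; "openvz" or "lxc" = you share a kernel with strangers
systemd-detect-virt

# Illustrative limits for high-concurrency networking and kubelet/log watchers
cat <<'EOF' | sudo tee /etc/sysctl.d/90-containers.conf
net.core.somaxconn = 4096
net.ipv4.ip_local_port_range = 1024 65000
fs.inotify.max_user_watches = 524288
EOF
sudo sysctl --system
```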
Tuning for Norway: Latency and Law
With the Schrems II ruling, data sovereignty is now a boardroom discussion. Using a Norwegian VPS provider isn't just about getting 2ms latency to Oslo users (though that helps your SEO and user experience immensely). It's about legal risk mitigation.
When you deploy your K8s cluster on CoolVDS, your data sits in a data center in Oslo, governed by Norwegian law and GDPR, not subject to the US CLOUD Act in the same direct way as US-owned providers. For critical business applications, this is your safety net.
Benchmark: Orchestration Overhead
We ran a simple test: Deploying 500 small Nginx containers on a 4 vCPU / 8GB RAM instance.
| Orchestrator | Idle CPU Usage | Deployment Time (500 containers) | Complexity Score |
|---|---|---|---|
| Docker Swarm | 1-2% | 45 seconds | Low |
| Nomad | 1% | 38 seconds | Medium |
| Kubernetes (k3s) | 5-8% | 72 seconds | High |
| Kubernetes (Standard) | 10-15% | 95 seconds | Very High |
Kubernetes demands a tax. You pay it in CPU cycles and complexity. For large teams, the ecosystem buys you velocity. For small teams, it often buys you headaches.
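If you want to sanity-check these numbers on your own hardware, the measurement can be approximated by timing a scale-up of the examples shown earlier. A rough sketch, assuming the service, Deployment, and job names used above:

```bash
# Docker Swarm: the CLI waits for the service to converge by default
time docker service scale my_cluster_web=500

# Kubernetes: kubectl scale returns immediately, so wait for the rollout too
time sh -c 'kubectl scale deployment nginx-deployment --replicas=500 \
  && kubectl rollout status deployment nginx-deployment'

# Nomad: bump the group count (Nomad 0.11+), then confirm with `nomad job status`
time nomad job scale web-cluster frontend 500
```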
Conclusion: Pick Your Poison
If you are building a bank or a multi-team SaaS platform, use Kubernetes. But ensure your underlying infrastructure provides the high IOPS NVMe storage required to keep etcd alive.
If you are migrating legacy binaries and want simplicity, look at Nomad.
If you just want to run containers without hiring a dedicated DevOps engineer, stick with Docker Swarm.
Whatever you choose, remember that orchestration is just a layer on top of a server. If the server has "noisy neighbors" or slow disks, your pods will die. Don't let cheap infrastructure undermine your architecture.
Ready to build a cluster that complies with European data standards? Deploy a KVM-based, NVMe-powered instance on CoolVDS in Oslo today and stop worrying about I/O wait times.