Kubernetes vs. Docker Swarm vs. Nomad: The 2022 Orchestration Reality Check
I am tired of seeing "Kubernetes" on the resume of every junior developer who ran minikube once. Let’s have an honest conversation. In the last three months, I've migrated two major Norwegian e-commerce platforms off AWS and onto local infrastructure. Why? Because the Schrems II ruling has made relying on US-owned clouds a legal minefield for GDPR compliance, and frankly, because the latency penalties for a user in Oslo hitting a data center in Frankfurt are unacceptable when you're pushing high-frequency trading data or real-time inventory updates.
But infrastructure is only layer zero. The real war is fought at the orchestration layer. As of early 2022, the dust has largely settled, leaving us with three contenders: the omnipresent Kubernetes (k8s), the lingering Docker Swarm, and the pragmatic HashiCorp Nomad. Choosing the wrong one will burn your budget on operational complexity before you even deploy your first pod.
The Hardware Truth: It’s All About etcd Latency
Before we dissect the software, you need to understand the hardware bottleneck. Regardless of which orchestrator you pick, they all rely on a consensus algorithm (usually Raft) to maintain state. In Kubernetes, this is handled by etcd.
If your underlying storage has high latency, etcd cannot fsync its write-ahead log in time and heartbeats slip past their deadlines. The cluster assumes the leader is dead. It triggers a re-election. Your API server locks up. Your pods stop scheduling. This is why "cheap" VPS providers fail at orchestration. They put you on spinning rust or SATA SSDs shared with fifty other noisy neighbors. You need raw NVMe throughput.
Here is a quick sanity check I run on every new CoolVDS node before I even think about kubeadm init. We need to verify that fsync latency is negligible.
# Benchmark disk fsync latency for etcd performance
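# NOTE: fio >= 3.5 is needed to report fdatasync percentiles,
# and the target directory must exist first: mkdir -p test-data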
fio --rw=write --ioengine=sync --fdatasync=1 \
--directory=test-data --size=22m --bs=2300 \
--name=mytest
If the 99th percentile fdatasync duration is above 10ms, your cluster will be unstable. On our CoolVDS NVMe instances, I consistently see this under 2ms. That is the difference between a self-healing cluster and a pager going off at 3 AM.
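Once the cluster is up, you can watch the same signal from etcd itself. A quick sketch, assuming a kubeadm-built control plane, where etcd exposes Prometheus metrics on localhost port 2381 by default:
# p99 of etcd's WAL fsync histogram should sit well under 10ms
curl -s http://127.0.0.1:2381/metrics | grep etcd_disk_wal_fsync_duration_seconds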
1. Docker Swarm: The "Good Enough" Solution
Docker Swarm isn't dead, despite what the CNCF marketing machine wants you to believe. For teams of fewer than 10 engineers, Swarm is often superior because it lacks the cognitive overhead of k8s. You don't need a dedicated DevOps engineer just to manage the control plane.
However, it has limitations in 2022. The networking model is flatter, and there is no native autoscaling at all; nothing comparable to the k8s Horizontal Pod Autoscaler (HPA). But if you just need to keep five microservices alive, look how simple a stack deployment is:
# docker-compose.yml for Swarm
version: "3.9"
services:
  web:
    image: nginx:1.21-alpine
    deploy:
      replicas: 5
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
    ports:
      - "80:80"
    networks:
      - webnet
networks:
  webnet:
Deploying this takes seconds: docker stack deploy -c docker-compose.yml myapp. No controller manifests, no Helm charts. One file, one command. If you are hosting internal tools or staging environments on a single CoolVDS instance, Swarm is efficient. It uses less CPU for overhead, leaving more cycles for your actual application.
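For reference, bootstrapping a fresh single-node swarm and shipping the stack above is three commands; the advertise address is a placeholder for your node's IP:
# Initialize the swarm and deploy the stack
docker swarm init --advertise-addr <node-ip>
docker stack deploy -c docker-compose.yml myapp
docker service ls   # expect myapp_web at 5/5 replicas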
2. Kubernetes (k8s): The Standard for a Reason
Kubernetes 1.23 dropped recently, and with Dockershim deprecated and slated for removal in 1.24, the ecosystem is shifting toward containerd. This is good for performance but adds another layer of learning for those used to the Docker CLI.
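Not sure what your nodes are running today? The runtime is right there in kubectl:
# Check the CONTAINER-RUNTIME column (docker:// vs. containerd://)
kubectl get nodes -o wide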
Kubernetes is necessary when you need:
- Granular access control (RBAC); see the sketch after this list.
- Complex ingress routing (e.g., NGINX Ingress Controller or Traefik).
- StatefulSets with persistent volume claims (PVC).
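On the RBAC point, here is a minimal sketch that pins a CI account to a single namespace. The names (staging, deployer, ci-bot) are placeholders, not a prescription:
# Namespace-scoped role for a hypothetical CI user
kubectl create namespace staging
kubectl -n staging create role deployer \
  --verb=get,list,create,update --resource=deployments,pods
kubectl -n staging create rolebinding deployer-binding \
  --role=deployer --user=ci-bot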
The pain point in Norway is often data residency. Using a Managed Kubernetes service from a US provider technically exposes you to the CLOUD Act. Building your own cluster on top of a Norwegian VPS provider like CoolVDS ensures the data stays within the jurisdiction of Datatilsynet.
However, you must tune the kernel. Default Linux settings are not designed for the thousands of iptables rules kube-proxy generates.
Pro Tip: When setting up a cluster on CoolVDS, always tune your sysctl settings to handle high connection tracking loads, or your Service networking will silently drop packets.
# /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
# Increase connection tracking table size
net.netfilter.nf_conntrack_max = 131072
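Note that the bridge-nf-call switches only exist once the br_netfilter module is loaded, so load it before applying:
# Load the bridge netfilter module, persist it, apply, and verify
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/k8s.conf
sysctl --system
sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max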
3. HashiCorp Nomad: The Dark Horse
Nomad is what I use when I need to mix containerized workloads with legacy Java binaries that haven't been containerized yet. Unlike k8s, which is containers-only, Nomad can schedule almost anything: Java JARs via the java driver, QEMU VMs, or plain binaries via the exec driver.
It is a single binary. It is incredibly lightweight. If you are running a high-performance compute cluster for data processing, Nomad's scheduler throughput is hard to beat; HashiCorp's C2M challenge scheduled two million containers with it. It integrates seamlessly with Consul for service discovery.
Here is a job specification for Nomad. Notice how it defines resources explicitly. This allows you to bin-pack your VDS instances tightly, saving money on monthly hosting bills.
job "api" {
datacenters = ["oslo-dc1"]
type = "service"
group "web" {
count = 3
network {
port "http" {
to = 8080
}
}
task "server" {
driver = "docker"
config {
image = "my-registry/api:v2"
ports = ["http"]
}
resources {
cpu = 500 # 500 MHz
memory = 256 # 256 MB
}
}
}
}
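Save it as api.nomad and running it is painless:
# Validate, submit, and watch the allocations
nomad job validate api.nomad
nomad job run api.nomad
nomad job status api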
The Network Layer: Latency to NIX (Norwegian Internet Exchange)
Orchestration manages the application, but it doesn't solve network physics. If your cluster is hosted in London but your users are in Trondheim, you are adding 20-30 ms of round-trip time (RTT) to every request. For a complex microservices architecture where a single user action might trigger ten sequential service calls, that latency compounds: ten chained round trips at 25 ms each is 250 ms of pure transit before any useful work happens.
| Infrastructure Location | Avg Latency to Oslo | Compliance Risk |
|---|---|---|
| US East (Virginia) | ~95 ms | High (Schrems II) |
| Germany (Frankfurt) | ~25 ms | Medium |
| CoolVDS (Oslo/Norway) | < 2 ms | None |
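Don't take my table on faith; measure from where your users actually sit. Replace the hostname with your own ingress endpoint:
# Round-trip time and per-hop loss to your cluster
ping -c 20 -q cluster.example.com
mtr --report --report-cycles 20 cluster.example.com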
When we deployed a K3s (lightweight Kubernetes) cluster on CoolVDS for a client last month, we saw API response times drop by 40% compared to their previous setup in Frankfurt. This wasn't code optimization; it was just physics.
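K3s is also the fastest way to reproduce that test yourself; a single-server install is one command (it is a pipe-to-shell, so read the script first if that bothers you):
# Install a single-node K3s server and confirm it is Ready
curl -sfL https://get.k3s.io | sh -
k3s kubectl get nodes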
Conclusion: Pick Your Poison
If you are a small team wanting to move fast, use Docker Swarm. If you are a large enterprise requiring service meshes and complex policy enforcement, bite the bullet and build Kubernetes properly—but build it on hardware that can support it. If you have mixed workloads, look at Nomad.
Whatever you choose, remember that an orchestrator is only as stable as the node it runs on. Don't let IO wait times kill your leader election. We built CoolVDS with pure NVMe storage and unshared CPU resources specifically to handle these workloads.
Ready to stress test? Spin up a 4-CPU, 8GB RAM instance on CoolVDS and run the fio benchmark yourself. If it doesn't beat your current provider, I’ll be surprised.