Orchestration Wars: Kubernetes vs. Swarm vs. Nomad in a Post-Schrems II World
If you have been paying attention to the Linux ecosystem this week, you are probably tired. First, the CentOS 8 announcement on December 8th effectively killed the stable downstream release we all relied on. Now you are staring at your infrastructure roadmap for 2021 wondering what else is going to break.
It is a fitting end to 2020. Trust is expensive.
In the container orchestration space, the noise is even louder. Kubernetes (K8s) is the de facto standard, yet I still see teams of three developers burning weeks trying to debug a CrashLoopBackOff caused by a misconfigured CNI plugin. Is it worth it? Sometimes. But not always.
Today, we are stripping away the marketing fluff. We will look at three orchestrators—Kubernetes (v1.19), Docker Swarm, and HashiCorp Nomad (v1.0 beta/0.12)—through the lens of a System Architect operating in Northern Europe. We care about latency to NIX (Norwegian Internet Exchange), raw I/O throughput for etcd, and legal compliance following the Schrems II ruling.
The Elephant: Kubernetes (K8s)
Kubernetes is an operating system for your cluster. It is powerful, extensible, and notoriously resource-hungry. The control plane alone requires significant compute. If you are running a massive microservices architecture, K8s is inevitable. But it demands fast storage.
The Storage Bottleneck: Kubernetes relies heavily on etcd for state. If your disk write latency spikes, etcd misses heartbeats, leader elections churn, and the API server starts timing out. I have seen this happen on budget VPS providers using spinning rust or shared SATA SSDs.
Pro Tip: On CoolVDS NVMe instances, check your disk latency with ioping before deploying K8s. You need sub-millisecond seek times.
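A minimal sketch of that check, assuming etcd will live under /var/lib/etcd (adjust the path to your layout). The fio invocation mirrors the fdatasync benchmark the etcd documentation recommends:

```bash
# Measure request latency against the future etcd data directory
# (-D forces direct I/O to bypass the page cache)
ioping -c 10 -D /var/lib/etcd

# Benchmark fdatasync latency the way the etcd docs suggest;
# the 99th percentile should stay below ~10ms
fio --name=etcd-bench --directory=/var/lib/etcd --rw=write \
    --ioengine=sync --fdatasync=1 --size=22m --bs=2300
```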
Here is a standard deployment. Notice the verbosity required just to get a simple Nginx pod with a ConfigMap:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nordic-web
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19-alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: config-volume
          mountPath: /etc/nginx/conf.d
      volumes:
      - name: config-volume
        configMap:
          name: nginx-conf
```
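One more gotcha: the Deployment mounts a ConfigMap named nginx-conf that you must create yourself before the pods can start. A minimal sketch, assuming a local default.conf containing your server block (the manifest filename is whatever you saved it as):

```bash
# Create the ConfigMap the Deployment references
kubectl create configmap nginx-conf --from-file=default.conf

# Apply the manifest and wait for all three replicas to come up
kubectl apply -f nordic-web.yaml
kubectl rollout status deployment/nordic-web
```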
The complexity is front-loaded. You pay the tax in YAML maintenance and upgrade anxiety. However, the ecosystem (Helm, Operators, Prometheus) is unmatched.
The Pragmatist: Docker Swarm
"Swarm is dead," they say. Mirantis acquired Docker Enterprise last year, and the future seemed uncertain. Yet, for 80% of the use cases I see in Oslo and Bergen, Swarm is still superior in TCO (Total Cost of Ownership).
Why? Because it is baked into the Docker engine. There is no separate binary to install. No complex PKI generation for the control plane (it rotates its own certificates). You can go from a fresh CentOS 7 server to a running cluster in 30 seconds.
Here is the setup on a CoolVDS node:
```bash
# On the Manager node (10.0.0.1)
[root@oslo-mgr ~]# docker swarm init --advertise-addr 10.0.0.1

# Output gives you the join token instantly
Swarm initialized: current node (dxn1...) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-49wj... 10.0.0.1:2377
```
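Run that join command on each worker, then confirm the topology from the manager (the worker-1 hostname below is illustrative):

```bash
# Every node should report Ready; the manager shows "Leader"
docker node ls

# Before maintenance, drain a worker so Swarm reschedules its tasks
docker node update --availability drain worker-1
```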
And the stack file (docker-compose.yml) is human-readable:
```yaml
version: '3.8'
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 2
      placement:
        constraints:
          - node.role == worker
    ports:
      - "80:80"
    networks:
      - webnet
networks:
  webnet:
```
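Deploying it is one command; a quick sketch (the stack name web is arbitrary):

```bash
# Deploy the stack and verify both replicas converged
docker stack deploy -c docker-compose.yml web
docker service ls

# Inspect task placement for the "web" service in the "web" stack
docker service ps web_web
```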
If you are a team of five managing a monolithic Magento store or a few Node.js services, K8s is over-engineering. Swarm just works.
The Unix Philosophy: HashiCorp Nomad
Nomad is the interesting middle ground. It is a single binary. It schedules containers, but it can also run non-containerized workloads directly: Java JARs via the java driver, static Go binaries via the exec driver. This is huge for the legacy systems still running in the Nordic financial sector.
Nomad separates the scheduler from the networking (Consul) and storage (Vault/CSI). This decoupling aligns with the Unix philosophy: do one thing well.
A Nomad job specification looks like this:
job "docs" {
datacenters = ["oslo-dc1"]
group "web" {
count = 3
task "server" {
driver = "docker"
config {
image = "nginx:latest"
port_map {
http = 80
}
}
resources {
network {
mbits = 10
port "http" {}
}
}
}
}
}
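Submitting it follows the same single-binary ethos; a sketch assuming the spec is saved as docs.nomad:

```bash
# Validate the job spec locally, submit it, then check
# where the scheduler placed the three allocations
nomad job validate docs.nomad
nomad job run docs.nomad
nomad job status docs
```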
Performance & The Infrastructure Layer
The orchestrator is software. It cannot fix hardware limitations. In 2020, running databases or stateful containers on network-attached block storage with high latency is professional suicide. This is doubly true for Norway, where data sovereignty is now a legal minefield.
The Schrems II Factor
Since the CJEU invalidated the Privacy Shield in July, sending personal data to US-owned cloud providers is legally risky for European entities. If your K8s cluster logs IP addresses to a bucket owned by a US hyper-scaler, are you compliant? The Datatilsynet (Norwegian Data Protection Authority) is watching closely.
This is where hosting on local infrastructure like CoolVDS becomes a strategic advantage. Data stays in the jurisdiction. But beyond compliance, there is the physics of latency.
We tested network throughput between two CoolVDS instances using iperf3 to simulate inter-pod traffic:
```
[root@worker-1 ~]# iperf3 -c 10.0.0.5
Connecting to host 10.0.0.5, port 5201
[  5] local 10.0.0.6 port 44218 connected to 10.0.0.5 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  1.12 GBytes  9.61 Gbits/sec    0   1.21 MBytes
[  5]   1.00-2.00   sec  1.10 GBytes  9.42 Gbits/sec    0   1.21 MBytes
```
Almost 10Gbps internal throughput. This is what you need for a healthy overlay network (Flannel, Calico, or Weave). If your host network is congested, your Kubernetes liveness probes will time out, causing cascading restarts.
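If you suspect network-induced probe failures rather than genuinely sick containers, the event stream is the first place to look; kubelet records failed probes under the Unhealthy reason:

```bash
# List recent probe failures across all namespaces
kubectl get events --all-namespaces --field-selector reason=Unhealthy
```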
Comparison Matrix
| Feature | Kubernetes | Docker Swarm | Nomad |
|---|---|---|---|
| Complexity | High | Low | Medium |
| Maintenance | Heavy (etcd, CA certs) | Zero-touch | Low (Single binary) |
| Scalability | 5000+ nodes | ~100 nodes sweet spot | 10k+ nodes |
| Nordic Latency | Depends on Ingress | Routing Mesh (add ~2ms) | Host networking available |
Conclusion: Choose Your Weapon
If you are building the next Spotify, use Kubernetes. The ecosystem is mandatory for that scale. Just ensure you are running on dedicated resources where CPU stealing won't throttle your control plane.
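You can spot a noisy host from inside the guest: the steal column (st) shows CPU cycles the hypervisor withheld from your VM:

```bash
# Sample CPU stats five times, one second apart; a sustained
# non-zero "st" column means another tenant is eating your cycles
vmstat 1 5
```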
If you are a nimble dev shop in Oslo needing to ship reliable apps fast, Docker Swarm is not dead—it is your best friend. It is stable, predictable, and cheap to run.
If you need to mix legacy binaries with containers and want the speed of Go, look at Nomad.
Regardless of the choice, the foundation matters. Virtualization overhead can kill orchestration performance. We use KVM at CoolVDS because it provides the kernel-level isolation necessary for container security without the "noisy neighbor" effect of container-based VPS.
Don't let IOwait kill your cluster. Deploy a KVM-based, NVMe-powered instance on CoolVDS today and see what 0.5ms latency does for your API response times.