Orchestration Wars 2021: Kubernetes vs. Swarm vs. Nomad for Norwegian Infrastructure
Let's be honest: most of you don't need Kubernetes. I've walked into too many meetings in Oslo where a startup CTO pitches a 3-node K8s cluster to host a static WordPress site and a Node.js API. It's overkill. It's expensive. And when etcd starts choking on disk latency because you cheaped out on storage, your site goes down.
As of late 2021, the container orchestration landscape has matured, but the complexity has skyrocketed. We are seeing a distinct split in the Nordic hosting market. On one side, enterprise teams are wrestling with Kubernetes (K8s) complexity. On the other, pragmatic teams are clinging to Docker Swarm or migrating to HashiCorp's Nomad.
This isn't a marketing brochure. This is a technical breakdown of what actually works when you need low latency, high availability, and, crucially for those of us operating under Datatilsynet's watch, GDPR compliance.
The Kubernetes Juggernaut: v1.22 and the Cost of Power
Kubernetes is the standard. With the release of v1.22 in August 2021, we saw the removal of several deprecated beta APIs. It's stable, but it is hungry. A K8s control plane is not a "set it and forget it" component. It requires constant monitoring.
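On v1.22 you can pull an itemized health report straight from the API server's aggregated readiness endpoint. A quick check, assuming you have kubectl access to the cluster:

```bash
# Lists each control plane readiness check (etcd, informers, shutdown hooks) with its status
kubectl get --raw='/readyz?verbose'
```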
The biggest killer of K8s performance isn't CPU; it's I/O latency. The heart of Kubernetes is etcd, a key-value store that needs 99th-percentile fsync latency under 10ms (ideally under 2ms). Run it on standard HDDs or shared SATA SSDs with noisy neighbors, and etcd will miss heartbeats, churn through leader elections, and destabilize your entire control plane.
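Don't guess; ask etcd. It exports Prometheus metrics, and the WAL fsync histogram tells you exactly how your disks are coping. A sketch assuming a kubeadm-style control plane with client certs in the default locations (adjust paths and the endpoint for your setup):

```bash
# p99 of etcd_disk_wal_fsync_duration_seconds should stay well under 10ms
curl -s --cacert /etc/kubernetes/pki/etcd/ca.crt \
     --cert /etc/kubernetes/pki/etcd/healthcheck-client.crt \
     --key /etc/kubernetes/pki/etcd/healthcheck-client.key \
     https://127.0.0.1:2379/metrics | grep wal_fsync_duration
```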
Here is a production-grade Deployment manifest. Notice the resource requests and limits. Without them, a memory leak in one pod can exhaust node memory and invite the kernel OOM killer to take out everything else running there.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: production-api
  namespace: backend
  labels:
    app: nordic-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nordic-api
  template:
    metadata:
      labels:
        app: nordic-api
    spec:
      containers:
        - name: api-container
          image: registry.coolvds.com/backend:v2.1.4
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: "512Mi"
              cpu: "250m"
            limits:
              memory: "1Gi"
              cpu: "500m"
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 3
            periodSeconds: 3
```
Pro Tip: Never deploy a workload to K8s without defining `resources.limits`. On a shared VPS environment, this is neighborly. On CoolVDS dedicated KVM slices, it ensures your high-priority ingress controller doesn't get starved by a rogue worker process.
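If you can't trust every team to remember their limits, enforce defaults at the namespace level with a LimitRange. A minimal sketch; the values here are illustrative starting points, not recommendations:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: backend
spec:
  limits:
    - type: Container
      defaultRequest:   # injected when a container declares no request
        memory: "256Mi"
        cpu: "100m"
      default:          # injected when a container declares no limit
        memory: "512Mi"
        cpu: "250m"
```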
The Storage Bottleneck
To verify whether your underlying storage can handle an etcd cluster, use fio. This is the exact benchmark we run when provisioning new NVMe storage nodes for our CoolVDS fleet.
```bash
fio --rw=write --ioengine=sync --fdatasync=1 --directory=test-data --size=22m --bs=2300 --name=mytest
```
If the 99th percentile fdatasync latency is above 10ms, do not run K8s there. You need genuine NVMe storage.
Docker Swarm: The "Dead" Tech That Won't Die
Docker Swarm mode is technically in "maintenance mode" compared to K8s, but for small-to-medium teams it remains superior in TCO (Total Cost of Ownership). You don't need a dedicated DevOps engineer to manage a Swarm cluster. You just need a `docker-compose.yml` file.
If you are running a shop with fewer than 50 microservices, Swarm is likely all you need. It handles overlay networking and secrets management out of the box, without the verbose YAML hell of K8s. A full stack definition fits in a single file:
version: "3.8"
services:
web:
image: nginx:alpine
deploy:
replicas: 2
update_config:
parallelism: 1
delay: 10s
restart_policy:
condition: on-failure
ports:
- "80:80"
networks:
- webnet
visualizer:
image: dockersamples/visualizer:stable
ports:
- "8080:8080"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
deploy:
placement:
constraints: [node.role == manager]
networks:
- webnet
networks:
webnet:
Deploying this is a single command:
```bash
docker stack deploy -c docker-compose.yml prod_stack
```
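Secrets are just as low-ceremony. Swarm encrypts them in its Raft log and mounts them into containers as in-memory files; a quick sketch with illustrative names:

```bash
# Create a secret from stdin
printf 'S3cr3tP4ss' | docker secret create db_password -

# Attach it to a service; it appears inside the container at /run/secrets/db_password
docker service create --name db --secret db_password postgres:13
```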
The downside? It lacks the rich ecosystem of Helm charts and Operators. But if you value sleep over complexity, it's a strong contender.
HashiCorp Nomad: The Unix Philosophy Choice
Nomad is the middle ground. It schedules applications, not just containers. You can run a Docker container alongside a raw Java JAR file and a static binary on the same node. For legacy modernization projects in the Nordics (where we see a lot of old Java monoliths), Nomad is brilliant.
It uses HCL (HashiCorp Configuration Language), which is arguably more readable than YAML.
job "docs" {
datacenters = ["dc1"]
group "example" {
count = 3
task "server" {
driver = "docker"
config {
image = "hashicorp/http-echo"
args = [
"-listen", ":5678",
"-text", "hello world",
]
}
resources {
network {
mbits = 10
port "http" {
static = 5678
}
}
}
}
}
}
To run it:
```bash
nomad job run example.nomad
```
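And because Nomad schedules applications rather than just containers, running that old Java monolith is a driver swap, not a rewrite. A hedged sketch using the `java` task driver; the artifact URL and sizing below are placeholders:

```hcl
task "legacy-billing" {
  driver = "java"

  # Fetched into the task's local/ directory before startup
  artifact {
    source = "https://artifacts.internal.example/billing-monolith.jar"
  }

  config {
    jar_path    = "local/billing-monolith.jar"
    jvm_options = ["-Xms256m", "-Xmx1g"]
  }

  resources {
    cpu    = 500  # MHz
    memory = 1024 # MB
  }
}
```

Once a job is running, `nomad job status docs` shows you where every allocation landed.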
The Compliance Trap: Schrems II and Hosting
In July 2020, the CJEU invalidated the Privacy Shield framework (Schrems II). By late 2021, the implications are hitting home. If you host your Kubernetes cluster with a US hyperscaler, even in their "Frankfurt" region, you are legally exposed: the US CLOUD Act can compel the American parent company to hand over data regardless of where it physically sits.
For Norwegian businesses, data sovereignty is no longer optional. It's a risk management necessity. This is where the infrastructure layer matters more than the orchestrator.
Comparison: Choosing Your Fighter
| Feature | Kubernetes | Docker Swarm | Nomad |
|---|---|---|---|
| Learning Curve | Steep | Low | Medium |
| Resource Overhead | High (needs 2GB+ just for control plane) | Minimal | Very Low (binary is ~50MB) |
| State Management | Complex (etcd) | Built-in (Raft) | Built-in (Raft); Consul optional |
| Best For | Enterprise, Complex Microservices | Small Teams, Simple Stacks | Mixed Workloads (Legacy + Docker) |
The Infrastructure Reality Check
Regardless of whether you choose K8s, Swarm, or Nomad, your orchestrator assumes it owns the hardware. In a virtualized environment, "steal time" (CPU steal) is the enemy: the hypervisor makes your VM wait for physical CPU cycles because another neighbor is busy.
Check your steal time right now:
```bash
top -b -n 1 | grep "Cpu(s)"
```
Look at the st value at the end. If it's consistently above 0.5%, your host is oversold. This causes random latency spikes in API responses that no amount of code optimization will fix.
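A single top snapshot can lie; steal arrives in bursts. Watch it over a minute instead, using the `st` column in vmstat's per-second output:

```bash
# One sample per second for 60 seconds; the last column (st) is steal time
vmstat 1 60
```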
This is why at CoolVDS, we don't play the overselling game. Our KVM instances are pinned to physical cores where possible, and we strictly limit neighbor noise. When you deploy a K8s worker node on our NVMe plans, you get the raw IOPS you paid for. Low latency isn't a luxury; for modern orchestration, it's a requirement.
Final Verdict
If you are building a massive platform with 50+ engineers, use Kubernetes. But ensure your underlying VPS provider offers true low-latency storage and DDoS protection, or your control plane will destabilize under load.
If you are a lean team in Oslo wanting to ship code fast, stick to Docker Swarm or look at Nomad. Complexity is technical debt.
Whatever you choose, ensure your data stays safe and your I/O stays fast. Don't let slow infrastructure kill your deployment.
Need a compliant, high-performance foundation for your cluster? Deploy a KVM instance on CoolVDS today and experience the difference raw NVMe power makes.