Kubernetes vs. Docker Swarm vs. Nomad: The 2023 Orchestration Battleground for Norwegian Ops
I still remember the silence in the Slack channel. It was 3:42 AM, and our primary production cluster in Frankfurt had just decided that etcd latency was too high, triggering a leader-election storm. Pods were being evicted. The load balancer was confused. We were down. The root cause? Not the config, but the underlying "cloud" instances suffering from massive CPU steal due to noisy neighbors.
If you are running infrastructure in 2023, you know that container orchestration is no longer a "nice to have." It is the standard. But choosing the right tool—and more importantly, the right metal underneath it—is the difference between sleeping through the night and explaining downtime to your CTO. Whether you are serving high-frequency traffic in Oslo or managing data compliance under GDPR/Schrems II, the orchestrator is only as good as the node it runs on.
The Heavyweight: Kubernetes (K8s)
Let's be honest. Kubernetes has won the war. With version 1.28 recently dropped, it is the default operating system of the cloud. But it is also a resource vampire. Running a control plane requires serious compute. If you are deploying K8s, you aren't just deploying an app; you are deploying a platform.
When to use it: You have a team of at least three DevOps engineers, microservices architecture, and complex scaling requirements.
The Reality Check: The K8s control plane is brutal on standard HDDs. etcd fsyncs every write to cluster state, so it needs consistently low write latency. If you aren't running on NVMe storage, you are bottlenecking your cluster before you even deploy a pod. Here is the standard tuning we drop into /etc/sysctl.d/ on CoolVDS nodes to handle high-traffic ingress controllers:
# /etc/sysctl.d/k8s-tuning.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
# Increase connection tracking for high load
net.netfilter.nf_conntrack_max = 131072
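These settings won't take effect until the br_netfilter module is loaded and sysctl re-reads its config. A minimal sketch, assuming the file above was saved under /etc/sysctl.d/:
# Load the bridge netfilter module now, and on every boot
sudo modprobe br_netfilter
echo "br_netfilter" | sudo tee /etc/modules-load.d/k8s.conf
# Re-read every file in /etc/sysctl.d/
sudo sysctl --system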
And in the kubelet configuration, we always reserve compute for system daemons so that OOM pressure never kills critical components:
# /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
systemReserved:
  cpu: "500m"
  memory: "512Mi"
kubeReserved:
  cpu: "500m"
  memory: "512Mi"
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
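To confirm the reservations registered, compare Capacity against Allocatable on the node; Allocatable should be roughly Capacity minus the reservations above (the node name here is hypothetical):
# Allocatable = Capacity - systemReserved - kubeReserved - eviction threshold
kubectl describe node worker-1 | grep -A 7 "Allocatable:"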
The Pragmatic Choice: Docker Swarm
Swarm is not dead. In fact, for 80% of small-to-medium businesses in Norway, it is arguably superior to K8s because it lacks the crushing complexity. You don't need Helm charts or CRDs. You need a docker-compose.yml file and five minutes.
When to use it: You have a monolith or a small set of services. You want to move from single-server Docker to a cluster without hiring a consultant.
Initializing a Swarm cluster is laughably simple compared to kubeadm:
# On the Manager Node
docker swarm init --advertise-addr 192.168.1.10
# Output gives you the join token immediately
# docker swarm join --token SWMTKN-1-49nj1cmql0l... 192.168.1.10:2377
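From there, deployment is one command against an ordinary Compose file. A minimal sketch (file name, service, and image are illustrative):
# docker-stack.yml
version: "3.8"
services:
  web:
    image: nginx:1.25-alpine
    ports:
      - "80:80"
    deploy:
      replicas: 3
Run `docker stack deploy -c docker-stack.yml web` on the manager, and Swarm spreads the three replicas across the cluster.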
Pro Tip: Swarm's overlay network encryption uses IPSec. On cheap VPS providers with older CPUs (lacking AES-NI instructions), this encryption tanks throughput. CoolVDS infrastructure uses modern CPUs where AES-NI is standard, making encrypted overlay networks virtually free in terms of performance overhead.
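Both halves of that claim are easy to verify yourself: check the CPU flag, then create an encrypted overlay from a manager node (the network name is arbitrary):
# Prints "aes" if the CPU exposes AES-NI; empty output means software crypto
grep -m1 -o aes /proc/cpuinfo
# Create an IPSec-encrypted overlay network for your services
docker network create --driver overlay --opt encrypted secure-net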
The Hipster Alternative: HashiCorp Nomad
Nomad is the UNIX philosophy applied to orchestration: do one thing and do it well. It schedules workloads. It doesn't care whether that workload is a Docker container, a Java JAR, a raw executable, or a full QEMU virtual machine. It integrates seamlessly with Consul and Vault.
When to use it: You have a hybrid environment (legacy binaries + containers) or you despise YAML manifest hell.
A Nomad job specification is readable by humans, not just parsers:
job "web-cache" {
datacenters = ["oslo-dc1"]
type = "service"
group "cache" {
count = 3
network {
port "redis" {
to = 6379
}
}
task "redis" {
driver = "docker"
config {
image = "redis:7.0-alpine"
ports = ["redis"]
}
resources {
cpu = 500
memory = 256
}
}
}
}
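Running and inspecting it is equally terse, assuming the spec above is saved as web-cache.nomad:
# Submit the job, then watch the allocation placement
nomad job run web-cache.nomad
nomad job status web-cache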
Infrastructure: The Silent Killer of Orchestration
Here is the uncomfortable truth: You can tune your scheduler all day, but if your underlying VPS steals CPU cycles or throttles I/O, your cluster will fail. In Norway, latency matters. Routing traffic from Oslo to a datacenter in the US adds 100ms+ of latency. Traffic to a generic European cloud often hairpins through Stockholm or Copenhagen before it ever reaches you.
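Don't take a provider's word on routing; trace the path yourself. A quick sketch against a hypothetical Norwegian endpoint, where hops outside Norway are the red flag:
# 10-cycle path report from your node
mtr --report --report-cycles 10 api.example.no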
We built CoolVDS on KVM (Kernel-based Virtual Machine) because it offers strict isolation. Unlike container-based virtualization (like LXC/OpenVZ) used by budget hosts, KVM ensures that when you allocate 4 vCPUs to a Kubernetes worker node, those cycles are yours. This is critical for the scheduler, which assumes the resources it sees on a node actually exist.
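CPU steal is just as measurable. On any Linux node, watch the st column; a value persistently above a few percent means a noisy neighbor is eating into the cycles your scheduler thinks it has:
# Sample CPU stats 5 times at 2-second intervals; "st" = stolen cycles
vmstat 2 5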
Comparison: Resource Overhead
| Feature | Kubernetes | Docker Swarm | Nomad |
|---|---|---|---|
| Idle Memory (Manager) | ~1.5GB+ | ~100MB | ~60MB |
| Setup Time | Hours/Days | Minutes | Minutes |
| State Store | etcd (Heavy I/O) | Raft (Built-in) | Raft (Built-in) |
| Minimum Viable Node | 2 vCPU / 4GB RAM | 1 vCPU / 1GB RAM | 1 vCPU / 512MB RAM |
Data Sovereignty and Latency
For Norwegian businesses, the Datatilsynet (Data Protection Authority) is watching. Storing customer data on US-controlled clouds is legally complex post-Schrems II. Hosting on CoolVDS, physically located in the region, simplifies your GDPR compliance posture. Furthermore, low latency to NIX (Norwegian Internet Exchange) ensures that your API responses feel instantaneous to local users.
Whether you choose K8s for power, Swarm for simplicity, or Nomad for flexibility, do not handicap them with slow hardware. A container orchestrator requires high IOPS for logging, state management, and image pulling. Standard SSDs often choke during a "stampede" (when all containers restart simultaneously). NVMe storage handles these spikes without sweating.
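Before trusting a node with etcd, you can approximate its fsync-per-write pattern with fio. This sketch borrows the commonly cited etcd disk check (directory and sizes are illustrative); etcd's documentation suggests a 99th-percentile fdatasync under roughly 10ms:
# Small sequential writes with an fdatasync after each, like etcd's WAL
fio --name=etcd-check --directory=/var/lib/etcd-bench \
    --rw=write --ioengine=sync --fdatasync=1 --bs=2300 --size=22m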
Final thought: Don't let your infrastructure be the bottleneck. Deploy a K8s worker node or a Swarm manager on a CoolVDS NVMe instance today and watch your `Pending` pods turn to `Running` in record time.