Kubernetes vs. Docker Swarm: A 2023 Reality Check for Norwegian DevOps
It was 3:00 AM in Oslo, and the latency on our primary ingress controller had just spiked from 15ms to 400ms. We weren't under a DDoS attack. We hadn't deployed new code. The culprit? Noisy neighbors on a generic public cloud stealing CPU cycles from our control plane. That night taught me a lesson that no certification course covers: your orchestrator is only as reliable as the metal it runs on.
In 2023, the pressure on Norwegian engineering teams is twofold: deliver high-availability services that compete globally while navigating the strict data sovereignty requirements of Datatilsynet and GDPR (post-Schrems II). Whether you are running a fintech stack requiring milliseconds at NIX (Norwegian Internet Exchange) or a simple CMS, choosing the right orchestration tool isn't just about features—it's about operational sanity.
The Latency & Compliance Trap
Before we look at the YAML, let's talk about the infrastructure. If you are hosting customer data for Norwegian entities, keeping that data within national borders isn't just a "nice to have"; it is often a legal necessity. Furthermore, physics still applies. Routing traffic from Oslo to a data center in Frankfurt and back adds unavoidable latency.
Pro Tip: When benchmarking your orchestration layer, always check steal time (the 'st' column in top, sourced from /proc/stat). If you consistently see anything above 0.0, your VPS provider is overselling their CPU. The resulting jitter disrupts etcd leader elections, leading to cluster instability.
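You can sample steal time directly without top. A minimal sketch, assuming a Linux guest; on the aggregate "cpu" line of /proc/stat the fields are user, nice, system, idle, iowait, irq, softirq, steal:

```shell
# Steal time is the 8th value on the aggregate "cpu" line of /proc/stat,
# which is awk field $9 because $1 is the literal "cpu" label.
steal_jiffies() {
  awk '/^cpu /{print $9}' /proc/stat
}

# Sample one second apart; any growth means the hypervisor withheld CPU.
s1=$(steal_jiffies)
sleep 1
s2=$(steal_jiffies)
echo "steal delta: $((s2 - s1)) jiffies"
```

On healthy dedicated-resource hardware the delta should sit at zero even under load.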
Docker Swarm: The "Just Works" Solution
Don't believe the Reddit threads claiming Swarm is dead. For teams of fewer than 10 engineers, Docker Swarm remains the most efficient path to production. It has far fewer moving parts than Kubernetes (no separate CNI plugin, no external etcd to babysit), which means simpler debugging and often faster packet processing.
Initialization is instantaneous. You don't need a PhD in networking to set this up:
docker swarm init --advertise-addr 192.168.1.10
Here is a real-world example of a Swarm stack optimized for a CoolVDS instance. Note the resource limits—crucial for preventing a single container from starving the host.
version: '3.8'
services:
  nginx:
    image: nginx:1.23-alpine
    ports:
      - "80:80"
      - "443:443"
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
      resources:
        limits:
          cpus: '0.50'
          memory: 512M
        reservations:
          cpus: '0.25'
          memory: 256M
    networks:
      - webnet
networks:
  webnet:
    driver: overlay
    driver_opts:
      encrypted: "true"
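Assuming the stack above is saved as stack.yml and deployed under the hypothetical stack name "web", rollout looks like this (Swarm prefixes service names with the stack name, hence web_nginx):

```shell
# Deploy the stack; Swarm creates or updates services idempotently
docker stack deploy -c stack.yml web

# Watch the rolling update honour parallelism: 1 / delay: 10s
docker service ps web_nginx

# Scale up without editing the YAML
docker service scale web_nginx=5
```

Re-running `docker stack deploy` with a changed image tag triggers the same one-at-a-time rolling update defined in update_config.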
The Trade-off: Swarm struggles with stateful workloads. If you need the equivalent of complex persistent volume claims (PVCs) — volumes that follow a workload as it reschedules across nodes — Swarm's CSI support is limited compared to K8s.
Kubernetes (K8s): The Standard for a Reason
Kubernetes (v1.26 as of writing) is the heavy lifter. It is necessary when you have microservices that need sophisticated service discovery, autoscaling based on custom metrics, or granular role-based access control (RBAC).
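To illustrate the autoscaling machinery only K8s gives you, here is a minimal HorizontalPodAutoscaler in the autoscaling/v2 API. This is an illustrative sketch — the "api" Deployment and the 70% target are hypothetical, not part of any deployment discussed in this article:

```yaml
# Hypothetical HPA scaling a Deployment named "api" on CPU utilization
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

The same `metrics` list accepts Pods and External metric types, which is where custom-metric autoscaling comes in.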
However, K8s is resource-hungry. A base control plane can easily consume 2GB of RAM just to exist. This is where underlying hardware quality becomes critical. Running K8s on slow spinning disks is a death sentence because etcd requires very low fsync latency.
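Before trusting a node with etcd, sanity-check its fsync behaviour. This is a crude probe, not a benchmark — etcd's own guidance is to use fio and targets a 99th-percentile WAL fsync under roughly 10ms:

```shell
# Write 100 sectors with a sync after each write (oflag=dsync) and time it.
# dd reports throughput on stderr; slow fsync shows up as low KB/s here.
dd if=/dev/zero of=./fsync-probe bs=512 count=100 oflag=dsync 2>&1 | tail -n 1
rm -f ./fsync-probe
```

Run it in the directory where etcd's data dir will live; a network-backed or oversold disk is immediately obvious.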
To prepare a node for K8s, you must disable swap and tune the kernel. I run this script on every CoolVDS node before bootstrapping:
#!/bin/bash
# Disable swap explicitly (the kubelet refuses to start while swap is active)
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab
# Load kernel modules required for container networking
cat <<EOF >/etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
modprobe -a overlay br_netfilter
# Let iptables see bridged traffic and allow packet forwarding
sysctl -w net.bridge.bridge-nf-call-iptables=1 net.ipv4.ip_forward=1
Once tuned, bootstrapping a cluster with kubeadm allows us to define the pod network specifically to avoid collisions with our VPS private network:
kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.10.0.5
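If you want to verify the pod CIDR really is disjoint from the node network before running init, the subnet math fits in pure shell. A sketch using the same addresses as the command above:

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer
ip2int() {
  old_ifs=$IFS; IFS=.
  set -- $1
  IFS=$old_ifs
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# in_cidr ADDRESS NETWORK/PREFIX -> exit 0 if the address is inside the block
in_cidr() {
  ip=$(ip2int "$1")
  net=$(ip2int "${2%/*}")
  mask=$(( (0xFFFFFFFF << (32 - ${2#*/})) & 0xFFFFFFFF ))
  [ $(( ip & mask )) -eq $(( net & mask )) ]
}

if in_cidr 10.10.0.5 10.244.0.0/16; then
  echo "COLLISION: pick a different pod CIDR"
else
  echo "ok: node address is outside the pod CIDR"
fi
```

An overlapping pod CIDR produces routing black holes that are miserable to diagnose after the fact, so thirty seconds here is cheap insurance.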
Storage Performance Is the Bottleneck
In a recent deployment for a Norwegian logistics firm, we faced timeouts with our Postgres pods. The database wasn't the issue; the storage I/O was. We migrated the worker nodes to CoolVDS NVMe instances. The difference was immediate.
We verified the disk scheduler to ensure it was optimized for NVMe:
cat /sys/block/vda/queue/scheduler
If you see [mq-deadline] or [none], you are in good shape. If you see a heavyweight scheduler like bfq (or cfq on pre-5.0 kernels), you are throttling your database unnecessarily.
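To audit every block device at once rather than guessing the device name (vda on virtio, nvme0n1 with NVMe passthrough), a small sketch:

```shell
# Print the active scheduler (shown in brackets) for each block device
for q in /sys/block/*/queue/scheduler; do
  dev=${q#/sys/block/}
  dev=${dev%%/*}
  printf '%s: %s\n' "$dev" "$(cat "$q")"
done
```

Writing a scheduler name into the same sysfs file switches it at runtime, but make the change persistent via a udev rule or kernel parameter.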
HashiCorp Nomad: The Dark Horse
If K8s feels like killing a fly with a sledgehammer, look at Nomad. It is a single binary. It schedules containers, Java JARs, and even raw executables. For a hybrid setup where you might have some legacy code running directly on the OS and some in Docker, Nomad is unbeatable.
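For flavour, a complete Nomad job spec fits in one small file. An illustrative sketch — "dc1" is Nomad's default datacenter name, and the job itself is hypothetical:

```hcl
# Minimal Nomad job: two Docker-driven nginx instances
job "web" {
  datacenters = ["dc1"]

  group "app" {
    count = 2

    task "nginx" {
      driver = "docker"

      config {
        image = "nginx:1.23-alpine"
      }

      resources {
        cpu    = 500 # MHz
        memory = 256 # MB
      }
    }
  }
}
```

Swap `driver = "docker"` for `exec` or `java` and the same scheduler handles your legacy binaries — that is the hybrid story Swarm and K8s cannot match.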
Comparison: Choosing Your Weapon
| Feature | Docker Swarm | Kubernetes | Nomad |
|---|---|---|---|
| Learning Curve | Low (Hours) | High (Months) | Medium (Days) |
| Resource Overhead | Very Low | High | Low |
| Stateful Support | Basic | Excellent | Good |
| Minimum Node Count | 1 | 3 (recommended) | 1 |
Why Infrastructure is the Hidden Variable
You can spend weeks optimizing your Helm charts, but if your virtualization layer adds 20% overhead, you are burning money. This is particularly relevant in Norway, where energy costs and hosting premiums are considerations.
We use KVM virtualization on CoolVDS because it provides strict resource isolation. Unlike container-based virtualization (like OpenVZ or LXC) used by budget providers, KVM ensures that your kernel is your kernel. When you define a memory limit in Kubernetes, you need to know the physical RAM is actually there.
Check your disk latency with ioping before deploying a cluster:
ioping -c 10 .
On a standard SATA VPS, you might see 5-10ms. On CoolVDS NVMe, we consistently clock under 0.5ms. For a high-traffic K8s API server, that difference defines whether your cluster scales or stalls.
Final Verdict
If you are building a complex, microservices-oriented platform requiring strict separation of concerns and massive scale, Kubernetes is the unavoidable choice. Just ensure you back it with high-performance storage.
If you need to deploy a standard web stack (Nginx, PHP/Python, Redis) and want it running today with built-in high availability, Docker Swarm is still the king of ROI.
Whatever you choose, remember that the orchestrator controls the containers, but the VPS controls the reality. Don't let slow I/O kill your SEO or your uptime.
Ready to build a cluster that actually performs? Deploy a low-latency NVMe instance on CoolVDS in 55 seconds and test the difference yourself.