Kubernetes vs. Docker Swarm in 2019: The Nordic DevOps Reality Check
Let’s be honest for a second. If I see one more medium-sized e-commerce shop trying to deploy a massive Kubernetes cluster for a simple monolithic Magento stack, I might actually scream. It is September 2019, and the industry has convinced itself that if you aren't running K8s, you aren't doing DevOps. That is nonsense.
I have spent the last six months migrating workloads between Oslo and Frankfurt, and the lesson is always the same: Latency is the killer, not the orchestrator. Whether you choose the massive complexity of Kubernetes or the sleek simplicity of Docker Swarm, your cluster is only as stable as the I/O throughput of the underlying disk.
Today, we aren't just comparing features. We are looking at the operational reality of running containers on VPS infrastructure in Norway, where data residency (thanks to Datatilsynet) and millisecond latency actually matter.
The Pragmatic Choice: Docker Swarm Mode
In 2019, Docker Swarm is still the fastest way to go from "it runs on my laptop" to "it runs in production." It is integrated directly into the Docker engine (since 1.12). There is no extra binary to install, no massive control plane overhead.
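Standing up a cluster is genuinely a two-minute job. Here is a minimal sketch; the IP below is a placeholder for your manager node's private address:

```bash
# On the first node: promote it to a swarm manager
docker swarm init --advertise-addr 10.0.0.10

# The init command prints a ready-made join command with a token.
# Run that printed "docker swarm join --token ..." on each worker node.
```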
If you are running a team of three developers managing five microservices, Swarm is your friend. The declarative nature of the docker-compose.yml file (version 3.7 is the current standard) makes it incredibly readable.
Deploying a Stack in Seconds
Here is how simple it is to get a replicated Nginx service running with a custom config. No Pods, no ReplicaSets, just services.
version: "3.7"
services:
web:
image: nginx:1.17-alpine
deploy:
replicas: 3
update_config:
parallelism: 2
delay: 10s
restart_policy:
condition: on-failure
ports:
- "80:80"
networks:
- webnet
networks:
webnet:To deploy this on a CoolVDS cluster, you literally run:
```bash
docker stack deploy -c docker-compose.yml my_stack
```

The Trade-off: Swarm lacks the rich ecosystem of Kubernetes. You don't have Helm charts. You don't have the operator pattern. But do you need custom operators for a WordPress site? Probably not.
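Day-two operations stay just as terse. A few commands I reach for constantly (the service name follows Swarm's stack_service naming from the example above; the image tag is just an illustration):

```bash
# See replica counts and rollout state for every service
docker service ls

# Scale the web service from 3 to 5 replicas on the fly
docker service scale my_stack_web=5

# Roll out a new image, respecting the update_config policy above
docker service update --image nginx:1.17.3-alpine my_stack_web
```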
The Heavy Hitter: Kubernetes (v1.15)
Kubernetes (K8s) won the war. We know this. With v1.15 released this summer, the core is stable and mature. However, K8s is not a deployment tool; it is a framework for building deployment platforms. It assumes you have a dedicated ops team.
The complexity is front-loaded. You have to manage etcd, the API server, the scheduler, and the controller manager. And that brings us to the most critical failure point I see in 2019: Etcd latency.
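Don't take my word for it; measure before you build. The fio recipe below follows the pattern the etcd maintainers recommend for benchmarking write-ahead-log performance (the scratch directory is my own choice, point it at the disk your masters will actually use). If the 99th-percentile fdatasync latency comes back above roughly 10ms, etcd will suffer:

```bash
# Small sequential writes with an fdatasync after each one,
# mimicking etcd's write-ahead log. Check the fdatasync percentiles.
mkdir -p /var/lib/etcd-bench
fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=/var/lib/etcd-bench --size=22m --bs=2300 --name=etcd-wal-test
```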
Pro Tip: Etcd is extremely sensitive to disk write latency. If fsync takes too long, the cluster can become unstable or lose leader elections. This is why spinning rust (HDD) is dead for K8s masters. You need NVMe.

The Configuration Beast
Compare the Swarm config above to a basic K8s deployment. It is verbose.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.17
          ports:
            - containerPort: 80
          livenessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 30
            periodSeconds: 10
```

And you haven't even defined the Service or Ingress yet. That's two more YAML files.
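For the curious, here is the first of those two files: a minimal Service that exposes the Deployment inside the cluster. The name is my own choice; the selector has to match the pod labels above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx       # must match the Deployment's pod template labels
  ports:
    - port: 80       # port the Service listens on inside the cluster
      targetPort: 80 # container port traffic is forwarded to
  type: ClusterIP    # internal only; an Ingress still goes on top
```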
Infrastructure: The Invisible Bottleneck
Here is the war story. Last month, a client was running a K8s cluster on a budget provider and hitting intermittent CrashLoopBackOff errors on their database pods. The logs showed nothing. The application code hadn't changed.
We dug into the system metrics. The issue was CPU Steal and I/O Wait. Their "VPS" was hosted on noisy, over-sold hardware. The neighbors were eating the CPU cycles required for the liveness probes to respond in time. K8s thought the app was dead and killed it. Rinse and repeat.
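If you suspect the same problem on your own nodes, you can confirm it in seconds with vmstat (from procps) and iostat (from the sysstat package):

```bash
# "st" is CPU time stolen by the hypervisor for other guests.
# Sustained steal above a few percent means oversold hardware.
vmstat 1 5

# %steal and %iowait per sample, plus per-device await in ms.
# High await on the volume backing etcd or your DB is the smoking gun.
iostat -x 1 3
```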
We migrated the cluster to CoolVDS. Why? Because KVM virtualization provides better isolation than the container-based virtualization (like OpenVZ) used by budget hosts. But more importantly, the NVMe storage on CoolVDS meant etcd latency dropped from 40ms to under 2ms.
Optimizing the Kernel for High-Load Networking
Whether you use Swarm or K8s, if you are pushing traffic in Norway, you need to tune the Linux kernel. Defaults in Ubuntu 18.04 are conservative. Add this to your /etc/sysctl.conf on your worker nodes to handle high connection counts:
```ini
# Increase the system-wide file descriptor limit
fs.file-max = 2097152
# Widen the range of ephemeral ports
net.ipv4.ip_local_port_range = 1024 65535
# Reuse TIME-WAIT sockets for new outbound connections
net.ipv4.tcp_tw_reuse = 1
# Increase the TCP SYN backlog
net.ipv4.tcp_max_syn_backlog = 4096
# Protect against SYN floods
net.ipv4.tcp_syncookies = 1
```

Apply it with sysctl -p. This prevents your cluster from choking during traffic spikes, something we see often during Black Friday sales here in the Nordics.
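Worth a quick sanity check afterwards; sysctl happily takes multiple keys in one call:

```bash
# Reload /etc/sysctl.conf and print each value as it is applied
sudo sysctl -p

# Query individual keys later to confirm they are still in force
sysctl fs.file-max net.ipv4.tcp_max_syn_backlog net.ipv4.tcp_tw_reuse
```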
The Verdict: What to run in 2019?
| Feature | Docker Swarm | Kubernetes |
|---|---|---|
| Learning Curve | Low (Hours) | High (Weeks) |
| Maintenance | Minimal | High (Needs Ops team) |
| Scalability | Good (< 2000 nodes) | Massive (5000+ nodes) |
| Storage | Simple Volumes | CSI / PVC / PV Complexity |
| Ideal Use Case | Small/Medium Dev Teams | Enterprise / Multi-cloud |
Why Location Matters
Finally, let's talk about GDPR. We are over a year into the regulation now. Storing customer data on US-managed servers is becoming a legal headache for Norwegian companies. Hosting your orchestration layer on CoolVDS in Norway ensures your data stays within the correct jurisdiction, satisfying the Datatilsynet requirements.
If you need raw simplicity, go Swarm. If you need the industry standard, go Kubernetes. But do not run either on slow disks. The orchestrator cannot save you from bad I/O.
Ready to build a cluster that actually stays up? Deploy a high-performance NVMe KVM instance on CoolVDS today and see the latency drop for yourself.