Kubernetes vs. Docker Swarm in Late 2019: Stop Over-Engineering Your Stack
Let's be honest. Half of you reading this don't need Kubernetes. You think you do because every Medium article and KubeCon talk tells you that if you aren't deploying microservices with a service mesh, you're a dinosaur. But I've seen the other side. I've seen startups burn three months of runway trying to debug Ingress controllers when a simple `docker-compose up` would have sufficed.
It is November 2019. Helm 3 just dropped last week (finally killing Tiller, thank the gods), and the Mirantis acquisition of Docker Enterprise has everyone panicking about Swarm's future. If you are building infrastructure in Europe, specifically here in the Nordics, you have a decision to make. Do you chase the Google-scale dream, or do you build something that actually stays up when you sleep?
The Latency Lie: Why Your "Global" Cluster Fails
Before we talk tools, let's talk physics. If your users are in Oslo or Bergen, hosting your control plane in Virginia is negligence. I recently audited a setup where the client complained about API timeouts. Their K8s masters were in Frankfurt, workers in Oslo, and the database in a cheap container wrapper that throttled IOPS.
Latency kills distributed systems. Specifically, etcd, the brain of Kubernetes, is incredibly sensitive to disk write latency. If `fsync` takes too long, etcd falls behind on its write-ahead log, heartbeats slip, and your cluster loses quorum. It doesn't matter how fancy your YAML is if your underlying disk I/O is garbage.
Pro Tip: Before you even install `kubeadm`, test your disk. If your VPS provider is overselling storage, your cluster will implode under load. Use `fio` to verify you are actually getting the NVMe speeds you paid for.
The Benchmark of Truth
Run this on your current node. If your 99th-percentile fsync latency is over 10ms (the ceiling etcd's hardware guidance recommends), move your workload immediately.
```bash
# 2019 Standard FIO Test for Etcd Performance:
# small (2300-byte) writes with fdatasync after every write.
# Create the test-data directory on the disk under test first.
fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=test-data --size=22m --bs=2300 \
    --name=mytest
```
On a standard CoolVDS NVMe instance, we typically see fsync latencies in the microseconds, not milliseconds. This is why we insist on KVM virtualization; we don't let a neighbor's heavy database steal your I/O cycles.
The Contenders: K8s vs. Swarm
1. Docker Swarm: The "It Just Works" Choice
Despite the FUD around the Docker Enterprise sale, Swarm is not dead. For teams of 2-5 developers, it is superior. Why? Because you already know the API. If you can write a Compose file, you can orchestrate a cluster.
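Standing up a Swarm is a two-command affair. A minimal sketch (the address and token below are placeholders; `docker swarm init` prints the real join command for you):

```bash
# On the manager: advertise the address workers should connect to
docker swarm init --advertise-addr 10.0.0.1

# On each worker: paste the join command that init printed
# (token and address here are placeholders)
docker swarm join --token SWMTKN-1-xxxx 10.0.0.1:2377
```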
The Config:
```yaml
version: '3.7'
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 5
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
    ports:
      - "80:80"
    networks:
      - webnet
networks:
  webnet:
```
Deploying this takes seconds: `docker stack deploy -c docker-compose.yml myapp`. No CNI plugins to configure, no RBAC headaches. It handles overlay networking and load balancing out of the box.
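Once the stack is up, two commands tell you everything about the rollout (stack name `myapp` from the deploy above):

```bash
# Replica counts per service in the stack
docker stack services myapp

# Where each replica landed and its current state
docker service ps myapp_web
```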
2. Kubernetes: The Bazooka
Kubernetes (currently v1.16, with v1.17 due in December) is the standard. It won the war. But it requires a dedicated ops person. You need to manage the CNI (Flannel? Calico? Weave?), the storage classes, the ingress, and the certificates.
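For a sense of scale, here is roughly what the minimal bootstrap looks like on v1.16 with kubeadm and Flannel. Treat it as a sketch: the pod CIDR has to match Flannel's manifest, and the manifest URL is the one Flannel's documentation pointed to at the time.

```bash
# Initialize the control plane; 10.244.0.0/16 is Flannel's default pod CIDR
kubeadm init --pod-network-cidr=10.244.0.0/16

# Install the Flannel CNI (manifest URL as published in Flannel's docs at the time)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```

And that only gets you a bare cluster; ingress, storage, and certificates are still on you.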
However, if you need persistent stateful workloads, K8s is better equipped, provided you configure the StorageClass correctly. Here is how you map a high-performance local NVMe path to a PV in K8s, bypassing slow network storage:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1-coolvds-oslo
```
Note the `nodeAffinity`. We are pinning the data to a specific CoolVDS node in Oslo because we trust the local NVMe storage more than a networked filesystem for database workloads.
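To actually consume that PV, a workload claims it through a PersistentVolumeClaim against the same StorageClass. A minimal sketch (the claim name, pod name, and image are illustrative, not from a real deployment):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim              # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 100Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: pg                      # illustrative name
spec:
  containers:
    - name: postgres
      image: postgres:11
      volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-claim
```

Because the StorageClass uses `WaitForFirstConsumer`, the claim stays Pending until the pod is scheduled, and the scheduler honors the PV's `nodeAffinity`, so the pod lands on the Oslo node next to its data.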
The Privacy Angle: GDPR and Datatilsynet
We are a year and a half into GDPR. The Privacy Shield framework is currently holding the bridge between Europe and the US, but let's be realistic: reliance on US-based cloud giants is becoming a legal minefield. Keeping data within Norwegian borders isn't just about patriotism; it's about compliance.
When you deploy a cluster on CoolVDS, your data sits in a datacenter subject to Norwegian law. There is no murky routing through a POP in London or Frankfurt unless you configure it that way.
The Verdict
| Feature | Docker Swarm | Kubernetes |
|---|---|---|
| Learning Curve | Low (Hours) | High (Weeks/Months) |
| Maintenance | Low | High (Upgrades are scary) |
| Flexibility | Limited | Infinite |
| Ideal Hardware | Any VPS | High IOPS, Stable CPU |
Choose Docker Swarm if: You are a small team, you value simplicity, and you just need to run stateless web apps behind a load balancer.
Choose Kubernetes if: You have a dedicated DevOps engineer, you need complex autoscaling, or you are running a multi-tenant environment requiring strict RBAC.
But regardless of your choice, orchestration is not magic. It multiplies the flaws of your infrastructure. If your VPS has "noisy neighbors" stealing CPU cycles, your K8s liveness probes will fail, pods will restart, and you will have downtime.
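If you cannot control your neighbors, at least budget for jitter in your probes. A hedged example (the path, port, and thresholds are illustrative starting points, not gospel):

```yaml
livenessProbe:
  httpGet:
    path: /healthz            # illustrative endpoint
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 10
  timeoutSeconds: 5           # give a briefly CPU-starved node a chance to answer
  failureThreshold: 3         # three consecutive misses before a restart
```

But tuning probes only masks the problem; the real fix is hardware that does not steal cycles in the first place.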
We built CoolVDS on pure KVM with NVMe storage specifically to solve the "etcd latency" problem. We don't oversell our CPU, meaning your orchestrator gets the cycles it expects, every millisecond.
Don't let slow I/O kill your SEO. Spin up a KVM instance on CoolVDS today and see what `fio` output is supposed to look like.