Kubernetes vs. Docker Swarm in 2020: A Norwegian Sysadmin’s Reality Check
Let’s be honest with ourselves. We have reached peak container hype. As we step into January 2020, every CTO in Oslo seems to believe that unless their simple WordPress blog is running on a high-availability Kubernetes cluster spread across three availability zones, they are failing at digital transformation. This is nonsense.
I have spent the last decade fixing broken infrastructures across the Nordics, from Stavanger oil firms to Oslo fintech startups. The most common cause of outages I see today isn't hardware failure; it is unnecessary complexity. We are replacing monolithic applications with monolithic complexity.
Today, we are going to look at the state of container orchestration as it actually stands right now. We will compare the industry standard (Kubernetes) against the pragmatic choice (Docker Swarm) and the niche contender (HashiCorp Nomad). More importantly, we will discuss the one thing most tutorials ignore: the underlying hardware requirements. You can’t run a race car on a gravel road, and you can’t run a stable K8s cluster on cheap, over-committed HDD storage.
The State of the Union: January 2020
In late 2019, Mirantis acquired Docker Enterprise. This sent a shiver down the spine of many Swarm loyalists. Is Swarm dead? No. Is Kubernetes the inevitable future? Yes. But "inevitable" doesn't mean "required today."
1. Kubernetes (K8s): The Heavyweight Champion
Kubernetes version 1.17 was just released. It is robust, extensible, and backed by Google, Microsoft, and everyone in between. It is also an absolute beast to manage if you don't know what you are doing.
The Hidden Cost: etcd Latency
Here is a war story for you. A client recently came to me complaining that their K8s API server was timing out randomly. They were hosted on a generic budget VPS provider in Germany. I checked the logs and saw this terrifying message repeatedly:
etcdserver: failed to send out heartbeat on time (exceeded the 100ms timeout for 1.5s)
Kubernetes relies on etcd for its state. etcd is extremely sensitive to disk write latency (fsync). If your disk cannot write the state fast enough, the cluster leader election fails, and your nodes flap. The budget VPS provider was using standard SSDs with noisy neighbors stealing IOPS.
We migrated the control plane to CoolVDS instances backed by enterprise NVMe. The result? Fsync latency dropped to sub-2ms. The timeouts vanished. If you are building a K8s cluster, do not cheap out on storage IOPS.
Pro Tip: Before deploying K8s, run fio to test your disk latency. If your 99th percentile fsync latency is above 10ms, do not run etcd there.
2. Docker Swarm: The "Just Works" Solution
If you have a team of three developers and you just want to deploy a Node.js app with a Redis backend, Kubernetes is overkill. Docker Swarm is built into the Docker engine. You likely already have it.
Setting up a Swarm cluster takes literally two commands:
```bash
# On the manager node
docker swarm init --advertise-addr 10.10.0.5

# On the worker node
docker swarm join --token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2e7c 10.10.0.5:2377
```
That is it. You have a cluster. No CNI plugin selection, no RBAC headaches, no Helm charts.
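Deploying onto the cluster is equally terse. A hypothetical stack file for the Node.js-plus-Redis scenario above might look like this (image names and ports are placeholders, not a prescribed layout); you would deploy it with `docker stack deploy -c docker-compose.yml myapp`:

```yaml
version: "3.7"
services:
  app:
    image: registry.example.com/myapp:1.0   # placeholder image
    ports:
      - "80:3000"
    deploy:
      replicas: 3
      restart_policy:
        condition: on-failure
  redis:
    image: redis:5-alpine
    deploy:
      replicas: 1
```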
However, Swarm struggles with advanced stateful workloads and complex ingress routing compared to the Kubernetes ecosystem (Ingress Controllers, Cert-Manager). If you need complex autoscaling based on Prometheus metrics, Swarm falls short in 2020.
Technical Showdown: Ingress and Networking
One of the biggest pain points in 2020 is getting traffic into the cluster.
Kubernetes Approach: You typically use an Ingress Controller like NGINX. You define an Ingress resource:
```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: production-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: api.coolvds-client.no
    http:
      paths:
      - path: /
        backend:
          serviceName: backend-service
          servicePort: 80
```
Docker Swarm Approach: Swarm uses the "Routing Mesh." You publish a port, and it listens on every node in the cluster. While simple, it introduces latency as traffic hops between nodes to find the container. On a high-latency network, this kills performance.
This is where infrastructure location becomes critical. If your nodes are spread across different datacenters with 20ms latency between them, the Swarm routing mesh will make your application feel sluggish. Hosting your nodes in a single, high-performance location like the CoolVDS Oslo zone ensures that node-to-node latency is negligible (often <1ms).
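You can sanity-check node-to-node latency yourself before blaming the routing mesh. One rough proxy, sketched below in Python, is the TCP connect time between two nodes; the host and port in the usage note are placeholders for one of your own Swarm nodes.

```python
import socket
import time

def tcp_rtt(host, port, samples=5):
    """Return the median TCP connect time in milliseconds,
    a rough proxy for node-to-node network latency."""
    times = []
    for _ in range(samples):
        t0 = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass  # close immediately; we only time the handshake
        times.append((time.perf_counter() - t0) * 1000)
    times.sort()
    return times[len(times) // 2]
```

For example, `tcp_rtt("10.10.0.6", 2377)` against a fellow manager node should come back well under 1 ms inside a single zone; tens of milliseconds means your mesh traffic is crossing datacenters.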
The Norwegian Context: GDPR and The CLOUD Act
We cannot talk about hosting in 2020 without addressing the elephant in the room: Data Sovereignty.
Since the US CLOUD Act was passed in 2018, Norwegian businesses have been nervous. Using US-owned hyperscalers (AWS, Azure, GCP) technically places your data under US jurisdiction, regardless of where the server is physically located. For companies dealing with sensitive Norwegian citizen data (health, finance), this is a risk compliance officers are increasingly unwilling to take.
By utilizing a local provider like CoolVDS, you ensure that:
- Data Residency: The data sits physically in Norway.
- Legal Entity: The contract is with a European entity, reducing CLOUD Act exposure.
- Latency: Ping times to NIX (Norwegian Internet Exchange) are minimal.
Benchmarking Latency: Oslo vs. Frankfurt
We ran a simple ping test from a fiber connection in downtown Oslo.
| Target | Location | Latency (ms) |
|---|---|---|
| CoolVDS VPS | Oslo, Norway | 2 ms |
| AWS eu-central-1 | Frankfurt, Germany | 28 ms |
| DigitalOcean | Amsterdam, NL | 24 ms |
For a standard web page, 26ms doesn't matter. But for a microservices architecture where a single user request triggers 50 internal service calls? That latency compounds. 26ms becomes 1.3 seconds of pure network overhead. Low latency is not a luxury; it is an architectural requirement.
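The compounding above is simple arithmetic, assuming the worst case where all 50 internal calls happen sequentially:

```python
calls_per_request = 50          # internal service calls per user request
extra_latency_ms = 28 - 2      # Frankfurt vs. Oslo, from the table above

# Worst case: every call waits for the previous one to finish
overhead_s = calls_per_request * extra_latency_ms / 1000
print(f"added network overhead per request: {overhead_s} s")  # 1.3 s
```

Parallelizing calls softens this, but any sequential chain in the request path pays the full per-hop penalty.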
Recommendation: What should you choose?
Choose Docker Swarm if:
- Your team is small (1-5 devs).
- You don't need complex autoscaling.
- You want to go from "zero to deployed" in an afternoon.
Choose Kubernetes if:
- You are building a cloud-native platform intended to last 5+ years.
- You need the rich ecosystem (Helm, Prometheus, Istio).
- You have a dedicated ops person.
Regardless of your choice, respect the hardware.
Container orchestration layers add CPU overhead (context switching) and rely heavily on network throughput. Running K8s on a shared, oversold VPS is a recipe for disaster. You need dedicated resources.
At CoolVDS, we don't oversell our CPU cores, and we use pure NVMe storage arrays. This means when your Kubernetes scheduler decides to move 50 containers to a new node, the I/O subsystem doesn't choke. We provide the raw power; you bring the orchestration.
Don't let slow I/O kill your cluster's stability. Deploy a high-performance test instance on CoolVDS today and see the difference real hardware makes.