Docker Swarm vs. Kubernetes: Orchestrating Chaos without Melting Your Servers
It is 2016, and if you are still manually SCP-ing tarballs to your production servers, you are doing it wrong. The container revolution is not coming; it is here. Docker has fundamentally changed how we package applications, shifting the friction from "development" to "operations." But here is the dirty secret nobody in the Valley tells you: running one container is easy. Running five hundred is a nightmare.
As a sysadmin managing infrastructure across Oslo and Stavanger, I have seen the same story play out a dozen times this year. A dev team builds a beautiful microservices architecture on their MacBooks, pushes it to a staging environment, and it collapses under load because nobody thought about service discovery, scheduling, or the underlying I/O constraints of the host nodes.
Today, we are looking at the two heavyweights fighting for the crown of container orchestration: Kubernetes and Docker Swarm. We will look at this through the lens of a Norwegian business needing stability, compliance (Datatilsynet is watching), and raw performance.
The Contender: Docker Swarm
Docker Swarm is the native clustering for Docker. It turns a pool of Docker hosts into a single, virtual Docker host. The appeal is obvious: if you know the Docker API, you know Swarm.
In a recent project for a media streaming client in Bergen, we needed to scale out quickly without retraining the entire dev team. Swarm was the logical first choice. It uses the standard Docker CLI, which means your tools (Compose, Dokku, Jenkins) just work.
Setting up a Swarm Cluster
Here is how simple it is to get a token and join nodes. You do not need a PhD in distributed systems to run this:
# Create a cluster token
docker run --rm swarm create
# Returns: 6856663c640366665766668cc
# Join a CoolVDS node to the swarm (Run on the node)
docker run -d swarm join --addr=10.0.0.1:2375 token://6856663c640366665766668cc
# Manage the swarm (Run on the manager)
docker run -d -p 2376:2375 swarm manage token://6856663c640366665766668cc
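Once the manager is up, point your regular Docker client at it to confirm the nodes have joined (assuming the manager published port 2376 as above; substitute your manager's IP):
# List cluster-wide info, including every joined node
docker -H tcp://<manager-ip>:2376 info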
The Pros: Simplicity. It is lightweight. For smaller teams or those strictly adhering to the "one container per service" philosophy, it is fantastic.
The Cons: It is less mature than Kubernetes when it comes to complex failure states. If your etcd or consul backend gets out of sync due to network latency, your cluster state can get messy.
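One way to reduce that risk is to skip the hosted token service and run Swarm against a discovery backend you control. A sketch, assuming a Consul agent already reachable at 10.0.0.5:8500 (newer Swarm releases use --advertise where older ones used --addr):
# Join a node using a self-hosted Consul backend instead of token://
docker run -d swarm join --advertise=10.0.0.1:2375 consul://10.0.0.5:8500/swarm
# Run the manager against the same backend
docker run -d -p 2376:2375 swarm manage consul://10.0.0.5:8500/swarm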
The Heavyweight: Kubernetes (K8s)
Google has been running containers for a decade (Borg), and Kubernetes is the open-source fruit of that labor. Version 1.2 was just released, bringing significant scaling improvements and a new web dashboard. But let's be honest: Kubernetes is a beast.
It introduces entirely new concepts: Pods, ReplicationControllers, Services, and Kubelets. It is not just about running containers; it is about managing the desired state of your infrastructure.
Defining a ReplicationController
Unlike Swarm's imperative commands, K8s is declarative. You tell it what you want, and it makes it happen. Here is a standard configuration we use for high-availability Nginx frontends:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-frontend
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.9
        ports:
        - containerPort: 80
This guarantees that 3 copies of Nginx are always running. If a node dies (hardware failure, kernel panic), Kubernetes automatically reschedules its pods to a healthy node. This self-healing capability is why enterprises are flocking to it.
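A ReplicationController keeps the pods alive, but pods have ephemeral IPs, so in practice you pair it with a Service that gives clients one stable address and load-balances across the replicas. A minimal sketch (the Service name here is illustrative):
apiVersion: v1
kind: Service
metadata:
  name: nginx-frontend-svc
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80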
The Invisible Bottleneck: Your Infrastructure
Here is the critical piece of the puzzle that most orchestration tutorials ignore: orchestrators are resource hogs.
I recently debugged a Kubernetes cluster that was timing out on API calls. The culprit wasn't Go code; it was I/O wait. The client was hosting their master nodes on a cheap, oversold VPS provider using spinning rust (HDD) and OpenVZ virtualization.
OpenVZ shares the kernel with the host. When you try to run Docker (which relies on cgroups and namespaces) inside an OpenVZ container that already has restricted access to cgroups, you are asking for trouble. We call this "Inception-style" virtualization, and it is a performance killer.
Pro Tip: Always run container orchestrators on KVM or hardware virtualization. You need your own kernel to properly manage the overlay networks (like flannel or weave) and block devices.
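Not sure what your current provider actually sold you? A quick sanity check from inside the guest (the first command assumes a systemd-based distro):
# Prints the virtualization technology: kvm, openvz, xen, none, ...
systemd-detect-virt
# OpenVZ containers expose this file; KVM guests do not
cat /proc/user_beancounters 2>/dev/null && echo "OpenVZ detected"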
The CoolVDS Advantage
At CoolVDS, we strictly use KVM virtualization. This means your Docker daemon talks directly to your kernel, not a shared one. Furthermore, container images are heavy on disk I/O. Pulling a 500MB image to 10 nodes simultaneously will choke a standard SATA drive.
We use NVMe storage arrays. The difference in latency is palpable. When you type kubectl rolling-update, you want it to happen now, not in 30 seconds.
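For reference, the rolling update against the ReplicationController defined above looks like this (the target image tag is illustrative):
# Replace the nginx-frontend pods one at a time with a newer image
kubectl rolling-update nginx-frontend --image=nginx:1.10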
Benchmarking Etcd Latency
Kubernetes relies on etcd for state. If disk latency spikes, etcd misses heartbeats and triggers leader-election storms. Here is a quick way to check whether your current disk can handle the write load required for a stable cluster, using fio:
fio --rw=write --ioengine=sync --fdatasync=1 --directory=test-data --size=22m --bs=2300 --name=mytest
On a standard budget VPS, you might see 50-100 IOPS. On CoolVDS NVMe instances, we consistently deliver over 10,000 IOPS. That is the difference between a cluster that heals itself and a cluster that eats itself.
Compliance and The Norwegian Context
We cannot ignore the legal side. With the invalidation of Safe Harbor last year (Schrems I), relying on US-hosted managed container services is legally gray. Storing customer data on servers physically located in Oslo or nearby European hubs is the safest bet for compliance with Norwegian privacy laws.
When you deploy on CoolVDS, you know exactly where your bits are. You get the low latency to NIX (Norwegian Internet Exchange), ensuring your local users get snappy load times, and you keep the Datatilsynet happy.
Conclusion: Which one to choose?
Choose Docker Swarm if:
- You have a small team (1-5 devs).
- You want to leverage existing docker-compose.yml files (see the sketch after this list).
- You need to deploy today.
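To make that second point concrete, here is the kind of minimal docker-compose.yml that carries over unchanged; the service name and image are illustrative, and pointing Compose at the Swarm manager is all it takes:
# docker-compose.yml -- a minimal, illustrative stack
version: '2'
services:
  web:
    image: nginx:1.9
    ports:
      - "80:80"
# Then run it against the manager instead of the local daemon:
# DOCKER_HOST=tcp://<manager-ip>:2376 docker-compose up -d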
Choose Kubernetes if:
- You have a dedicated DevOps engineer.
- You need auto-scaling and complex service discovery (see the example after this list).
- You are building a platform that needs to survive partial infrastructure failure.
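As a taste of that auto-scaling: Kubernetes 1.2 can scale a ReplicationController based on observed CPU usage. A sketch against the nginx-frontend RC from earlier (the thresholds are illustrative):
# Keep between 3 and 10 replicas, targeting 80% CPU utilization
kubectl autoscale rc nginx-frontend --min=3 --max=10 --cpu-percent=80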
Regardless of your choice, the software is only as good as the metal it runs on. Don't let "steal time" and noisy neighbors ruin your orchestration. Build your cluster on dedicated KVM resources.
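Not sure whether you are already paying that tax? The st column in vmstat reports the percentage of CPU cycles the hypervisor handed to other tenants:
# Sample CPU statistics 5 times at 1-second intervals; a non-zero "st" column means stolen cycles
vmstat 1 5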
Ready to orchestrate? Deploy a high-performance KVM instance on CoolVDS in under 55 seconds and stop fighting with I/O wait.