Kubernetes vs. Docker Swarm in 2017: Orchestration for the Paranoid Sysadmin
Let’s be honest: running docker run in a loop inside a bash script is not orchestration. It’s a resignation letter waiting to happen. If you are managing more than five containers in production without an orchestrator, you are playing Russian roulette with your uptime. But here we are in mid-2017, and the landscape is fragmented. Google is pushing Kubernetes hard, Docker has baked Swarm right into the engine, and Mesos is still the terrifyingly complex beast in the corner.
As a sysadmin operating out of Oslo, I look at infrastructure through a specific lens: latency, reliability, and data sovereignty. We have the Datatilsynet breathing down our necks about where user data lives, and we have users who expect sub-20ms load times. A bloated orchestrator on slow iron is useless. I've spent the last month migrating a high-traffic Magento cluster from bare metal to containers, and I’m going to share exactly what broke, what worked, and why the underlying hardware matters more than the YAML you write.
The Contenders: Swarm Mode vs. Kubernetes 1.6
Two years ago, this wasn't even a conversation. But today, the choice usually boils down to the native simplicity of Docker Swarm or the declarative power of Kubernetes.
Docker Swarm: The "It Just Works" Option
Since Docker 1.12, Swarm mode has been embedded in the engine itself. There is no external KV store to run (goodbye, Consul requirement) and no complex certificate generation for basic setups: nodes get mutual TLS automatically, and manager state is replicated with the Raft consensus algorithm. For a team of three developers, this is often enough.
# initialize a swarm manager on your primary node
docker swarm init --advertise-addr 192.168.1.10

# deploy a replicated service with rolling, zero-downtime updates
docker service create \
  --name frontend \
  --replicas 3 \
  --publish 80:80 \
  --update-delay 10s \
  nginx:alpine
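Adding workers is just as painless. The init command prints a join token; here is the usual two-step (the token and address below are placeholders from your own init output):

# on the manager: print the join command for workers
docker swarm join-token worker

# on each worker node, paste what the manager printed
docker swarm join --token SWMTKN-1-xxxx 192.168.1.10:2377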
The beauty here is the overlay network: it works right out of the box, spanning every node in the swarm. However, Swarm starts to show cracks when you need complex state management or cron-style job scheduling, although recent releases are closing that gap.
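A minimal sketch of that out-of-the-box experience (myorg/api is a stand-in image name):

# create a multi-host overlay network and attach a service to it;
# tasks on any node can then reach each other by service name via the built-in DNS
docker network create -d overlay app-net
docker service create --name api --network app-net --replicas 2 myorg/api:latest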
Kubernetes: The Google Standard
Kubernetes (K8s) is the 800-pound gorilla. Version 1.6 landed in March, moving the default datastore to etcd v3 and pushing the supported cluster size to 5,000 nodes. But the learning curve is a vertical wall. You aren't just managing containers; you are managing Pods, Deployments, Services, Ingress controllers, and ConfigMaps.
Consider a simple Nginx deployment in K8s. It's not a one-liner. It's a manifesto.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.10
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: nginx-storage
      volumes:
      - name: nginx-storage
        hostPath:
          path: /data/www/html
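Assuming you save that as nginx-deployment.yaml (the filename is mine), applying it and watching the rollout looks like this:

# create or update the deployment from the manifest
kubectl apply -f nginx-deployment.yaml

# watch the rolling update until all 3 replicas are available, then inspect placement
kubectl rollout status deployment/nginx-deployment
kubectl get pods -l app=nginx -o wide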
The Storage Headache: Persistence in 2017
This is where most tutorials lie to you. They show you stateless web servers. But in the real world, we have databases. We have user uploads. Containers are ephemeral; data cannot be.
In Docker Swarm, volume management across nodes is still clumsy unless you are using a volume plugin like Flocker or REX-Ray. If a container dies on Node A and respawns on Node B, the data it wrote to Node A's local disk does not follow it.
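With a plugin installed, the mount looks roughly like this; rexray is the driver name REX-Ray registers, dbdata is a volume name I made up, and the password is demo-only:

# a plugin-backed named volume can be reattached when the task is rescheduled to another node
docker service create \
  --name mysql \
  --replicas 1 \
  --env MYSQL_ROOT_PASSWORD=changeme \
  --mount type=volume,source=dbdata,target=/var/lib/mysql,volume-driver=rexray \
  mysql:5.7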
Kubernetes handles this better with StatefulSets (formerly PetSets), which give Pods stable identities and stable storage. But no amount of orchestration logic can paper over slow disks. I recently benchmarked a MySQL Galera cluster inside Kubernetes against a standard VM install: the container overhead was negligible, but the storage numbers were terrible because of poor disk performance on the host nodes.
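For reference, a minimal StatefulSet sketch for K8s 1.6 (apps/v1beta1) — simplified to plain MySQL rather than my full Galera setup; the fast-ssd StorageClass and the password are stand-ins you would replace:

# volumeClaimTemplates give each replica its own PersistentVolumeClaim that
# follows the stable pod identity (mysql-0, mysql-1, ...)
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql
  replicas: 3
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "changeme"         # demo only; use a Secret in production
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: fast-ssd    # assumption: your cluster defines this class
      resources:
        requests:
          storage: 20Gi
EOF

You would also need a headless Service named mysql so the pods get their stable DNS identities.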
Pro Tip: Never run a database in a container on spinning rust (HDD). The random I/O of container logs combined with database transactions will bring your iowait to 90%. Always audit your provider's storage backend.
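To audit it yourself, here is a rough sketch using iostat and fio (both assumed installed via sysstat and fio packages; point the directory at wherever your Docker data root lives):

# watch %iowait and per-device utilization while the cluster is under load
iostat -x 5 3

# rough 4k random-write test against the volume backing /var/lib/docker
fio --name=randwrite --directory=/var/lib/docker --size=1G \
    --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio \
    --direct=1 --runtime=60 --time_based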
The Infrastructure Reality Check
You can have the most perfectly architected Kubernetes cluster, but if your underlying VPS has "noisy neighbors" stealing CPU cycles, your kube-apiserver will time out. This is a massive issue in the budget hosting market.
When we provision infrastructure for orchestration at CoolVDS, we don't oversubscribe CPU cores. The Kubernetes scheduler assumes the capacity a node advertises is real. If the hypervisor lies to the kernel, the scheduler makes bad placement decisions, and the failures cascade.
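A quick sanity check of what the scheduler believes it has (worker-1 is a placeholder node name; the -A line counts are just enough context to show each section):

# compare what the node advertises with what is already promised to pods
kubectl describe node worker-1 | grep -A 4 "Allocatable"
kubectl describe node worker-1 | grep -A 6 "Allocated resources"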
Networking Latency and the NIX
For Norwegian businesses, the physical location of the master nodes matters. If your etcd members are spread across high-latency links, Raft heartbeats get missed, leader elections churn, and writes stall. If your worker nodes are in Frankfurt but your customers are in Trondheim, you are adding roughly 30ms of round-trip time (RTT) before the request even hits your application logic.
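Measure the links before you place etcd members. A quick sketch (the addresses are placeholders, and ETCDCTL_API=3 selects the v3 API):

# raw round-trip time between candidate etcd nodes
ping -c 10 10.0.0.2

# per-endpoint health, including how long a consensus proposal took
ETCDCTL_API=3 etcdctl --endpoints=https://10.0.0.2:2379 endpoint health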
Hosting locally in Norway means you are peering directly at NIX (Norwegian Internet Exchange). Lower latency doesn't just mean faster page loads; it means faster replication for your distributed databases.
Security: The Overlay Network Risk
By default, Docker's overlay network traffic is not encrypted. If you are processing sensitive data—and with the new privacy regulations looming in Europe, you should treat all data as sensitive—you need to enable encryption.
In Swarm mode, this is a flag. In Kubernetes, it’s a project. Here is how you enforce it in Swarm:
docker network create --opt encrypted --driver overlay my-secure-net
Keep in mind that encryption adds CPU overhead. This is another reason why modern processors (like the ones we deploy for our NVMe plans) are non-negotiable: you need AES-NI support to handle the crypto without choking your throughput.
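Checking for it from inside the guest takes one line:

# an "aes" flag in /proc/cpuinfo means the hypervisor passes AES-NI through
grep -m1 -o 'aes' /proc/cpuinfo || echo "no AES-NI: expect heavy encryption overhead"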
Configuration Management Integration
Even with orchestration, you still need to manage configuration drift. In 2017, we are seeing a shift from Chef/Puppet managing servers to Ansible managing clusters.
Here is a snippet of how I automate the bootstrapping of a worker node before it joins the cluster. This ensures the kernel is tuned for container workloads (disabling swap is strongly recommended for kubelet, which does not account for it when managing memory).
- name: Disable Swap for K8s
  command: swapoff -a

- name: Remove Swap from fstab
  lineinfile:
    path: /etc/fstab
    regexp: 'swap'
    state: absent

- name: Enable IP forwarding
  sysctl:
    name: net.ipv4.ip_forward
    value: 1
    sysctl_set: yes
    state: present
    reload: yes
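Drop those tasks into a playbook and run it against fresh nodes before docker swarm join or kubeadm join; the file and host names here are mine:

# bootstrap a fresh worker, then let it join the cluster
ansible-playbook -i inventory/production bootstrap-worker.yml --limit worker-04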
Conclusion: Pick Your Poison
If you are a small team deploying a monolithic Rails or Django app, Docker Swarm is sufficient. It is robust enough in 2017 to handle production traffic without needing a dedicated Ops team.
If you are building microservices and need granular control over pod placement, resource quotas, and complex ingress rules, Kubernetes is the only serious choice. But be prepared to pay the "complexity tax."
Regardless of the orchestrator, the foundation is the same: raw compute and fast storage. Containers amplify I/O patterns. They turn sequential writes into random writes. Don't let slow I/O kill your SEO or your uptime. Deploy a test instance on CoolVDS today—our KVM stack is optimized specifically for the kernel-heavy demands of container orchestration.