Container Orchestration in 2016: Kubernetes vs. Docker Swarm for Nordic Infrastructures
I still remember the night the database cluster desynchronized. It was 3 AM on a Tuesday, and a standard traffic spike from a marketing campaign caused a cascade failure across our legacy OpenVZ containers. The kernel panicked. I panicked. The client certainly panicked.
That was 2014. Fast forward to late 2016, and we are supposedly in the "Golden Age" of containers. Docker is no longer just a developer tool; it is production reality. But now we face a new headache: Orchestration. Managing five containers is easy with a shell script. Managing five hundred across a distributed cluster? That is war.
Right now, the battle lines are drawn between the newly integrated Docker Swarm Mode (introduced in Docker 1.12) and the Google-born behemoth, Kubernetes (currently v1.4). If you are running infrastructure in Norway, balancing strict latency requirements with the impending GDPR regulations, picking the wrong horse here can cost you your weekends.
The Contenders
1. Docker Swarm (Swarm Mode)
Until this summer, Swarm was a standalone tool you ran as a separate container on top of the Engine. Now, it is baked into the Docker Engine itself. It is opinionated, secure by default (mutual TLS between nodes out of the box), and shockingly simple to set up. You don't need a PhD in distributed systems to get a cluster running.
2. Kubernetes (K8s)
Kubernetes is the industrial-grade option. It is verbose. It is complex. It introduces concepts like Pods, ReplicaSets, and Services that abstract away the container itself. But it handles failure like nothing else.
The Technical Face-Off
Let's look at the configuration. We will deploy a simple Nginx service.
Docker Swarm: The One-Liner
In Swarm, you initialize the manager and create a service. That's it.
```bash
# On the manager node
docker swarm init --advertise-addr 10.0.0.5

# Create an overlay network
docker network create --driver overlay my-net

# Deploy Nginx with 3 replicas
docker service create \
  --name my-web \
  --replicas 3 \
  --network my-net \
  --publish 80:80 \
  nginx:alpine
```
This is clean. It uses the standard Docker CLI we already know. For many Norwegian development teams dealing with tight deadlines, this low barrier to entry is attractive.
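Day-two operations stay just as terse. As a quick sketch (the join token below is a placeholder; `docker swarm join-token worker` prints the real command), here is how you add a worker and scale the service:

```bash
# On the manager: print the exact join command for workers
docker swarm join-token worker

# On each worker node, run the printed command (token shown is a placeholder)
docker swarm join --token SWMTKN-1-<token> 10.0.0.5:2377

# Back on the manager: scale out and watch tasks spread across nodes
docker service scale my-web=5
docker service ps my-web
```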
Kubernetes: The Configuration Beast
Kubernetes requires explicit definitions. In version 1.4, we use kubectl and YAML files. Here is the equivalent Deployment, plus a Service to expose it.
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.11-alpine
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer
```
You then apply this with:
```bash
kubectl create -f nginx-deployment.yaml
```
It is verbose. However, notice the granularity. You have total control over the spec. In a complex microservice architecture, this verbosity saves you when you need to debug networking policies or storage claims. One caveat for VPS deployments: `type: LoadBalancer` only provisions a real load balancer on clouds with a provider integration; on self-hosted nodes the external IP will sit in `Pending`, so use `NodePort` instead.
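For comparison with the Swarm one-liners, day-two operations against this Deployment look like the following sketch:

```bash
# Verify the Deployment and its Pods
kubectl get deployments
kubectl get pods -l app=nginx

# Scale from 3 to 5 replicas
kubectl scale deployment nginx-deployment --replicas=5

# Inspect how the Service is exposing the Pods
kubectl describe service nginx-service
```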
The Elephant in the Room: Storage Performance (IOPS)
Orchestrators schedule CPU and RAM. They do not magically fix slow disks. This is where most VPS providers fail you. I have seen Kubernetes clusters stall because `etcd` (the key-value store K8s relies on) couldn't write to disk fast enough due to high latency on shared storage.
Pro Tip: If etcd's 99th-percentile WAL fsync latency climbs above 10ms, your cluster stability will degrade: heartbeats get missed and leader elections start churning. You cannot run a reliable orchestration layer on cheap, oversold spinning rust.
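If you suspect your disks, don't guess; measure. Assuming etcd 3.x, which exposes Prometheus metrics on its client port, you can read the WAL fsync latency histogram directly (a sketch; adjust host and port to your topology):

```bash
# Scrape etcd's metrics endpoint and pull out the WAL fsync histogram.
# Growing high-latency buckets mean the disk is too slow for stable raft.
curl -s http://127.0.0.1:2379/metrics | grep wal_fsync_duration
```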
If you are hosting in Oslo or relying on the NIX (Norwegian Internet Exchange) for local traffic, the physical distance helps latency, but disk I/O is the bottleneck. This is why we build CoolVDS on pure NVMe storage. We don't just cache; the underlying storage is solid state.
Here is how you test if your current host is lying to you about "SSD speed". Run fio on your node:
```bash
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 \
    --name=test --filename=test --bs=4k --iodepth=64 --size=1G \
    --readwrite=randwrite --ramp_time=4
```
On a standard SATA SSD VPS, you might see 3,000-5,000 IOPS. On a CoolVDS NVMe instance, we consistently push numbers that make databases weep with joy. Remember that etcd fsyncs every write to its write-ahead log (WAL) before acknowledging it; when that fsync stalls, followers miss heartbeats and leadership churns. That is exactly the failure mode that wakes you up at 3 AM, and fast NVMe is how you avoid it.
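Raw random-write IOPS is only half the story for etcd, though. What it actually needs is low fdatasync latency on small sequential writes. Here is a sketch of a workload closer to its WAL pattern (the ~2.3 KB block size is an assumption approximating a typical WAL entry):

```bash
# Small sequential writes, each followed by fdatasync, like etcd's WAL.
# Watch the reported sync latency percentiles: p99 should stay under ~10ms.
fio --name=etcd-wal --ioengine=sync --rw=write \
    --bs=2300 --size=22m --fdatasync=1
```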
Network Latency and Data Sovereignty
With the invalidation of Safe Harbor last year and the new Privacy Shield framework trying to find its footing, data location is critical. Datatilsynet (The Norwegian Data Protection Authority) is watching. Hosting your cluster on US-controlled servers adds a layer of legal complexity.
By deploying your Swarm or K8s nodes on CoolVDS servers physically located in Norway/Europe, you reduce that risk profile. Plus, the ping time to Norwegian ISPs is negligible.
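Verify that yourself from a shell on your node; the target hostname below is a placeholder for whatever endpoint your users actually sit behind:

```bash
# Round-trip latency from your node to a Norwegian endpoint (placeholder host)
ping -c 10 your-users-isp.example.no
```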
The Verdict: Which one to choose?
| Feature | Docker Swarm (1.12+) | Kubernetes (1.4) |
|---|---|---|
| Setup Time | 5 Minutes | Hours (or days) |
| Complexity | Low | High |
| Scaling | Fast | Extremely Robust |
| Best For | Small-Med Teams, Speed | Enterprise, Complex Apps |
Choose Docker Swarm if: You have a team of 3 developers, you want to move fast, and your application is relatively standard. It works out of the box.
Choose Kubernetes if: You are Google, or you are aspiring to be. If you need complex ingress rules, distinct namespaces for different teams, and auto-scaling based on custom metrics, bite the bullet and learn K8s. Just use kubeadm to save some sanity.
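For what it's worth, kubeadm (alpha in 1.4, so expect the flags to move between releases) reduces the bootstrap to a sketch like this:

```bash
# On the master node
kubeadm init

# kubeadm init prints a join command with a generated token;
# run that printed command on each worker (token/IP are placeholders here)
kubeadm join --token=<token> <master-ip>
```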
Regardless of your choice: Do not run container orchestrators on weak virtualization. Containers share the kernel. If your provider uses old OpenVZ tech, you will hit limits. CoolVDS uses KVM (Kernel-based Virtual Machine) for full isolation and NVMe for the speed your containers demand.
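Not sure what your current provider actually runs underneath you? On a systemd-based distro, you can check from inside the guest:

```bash
# Prints the detected virtualization technology, e.g. "kvm" or "openvz"
systemd-detect-virt
```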
Stop fighting with `iowait`. Deploy your cluster on infrastructure that actually works.