The Container Wars: Choosing Your Weapon in 2017
I have spent the last 72 hours staring at journalctl logs, trying to figure out why a distributed etcd cluster decided to lose quorum during a minor network partition. If you are reading this, you likely know the pain. The shift to microservices is not just a trend; it is a fundamental restructuring of how we deploy software. But with great modularity comes great operational headache.
Right now, in May 2017, we are standing at a fork in the road. On one side, we have the heavyweight champion, Kubernetes (fresh off the v1.6 release). On the other, the integrated challenger, Docker Swarm Mode. I talk to CTOs from Oslo to Trondheim every week, and the question is always the same: "Do we really need the complexity of Google's Borg, or is Docker's native tooling enough?"
Let's cut through the marketing noise. We are going to look at architectural differences, performance on virtualized hardware, and why your choice of VPS provider in Norway might matter more than the orchestrator itself.
The Contender: Docker Swarm Mode
Since Docker 1.12, Swarm has been embedded directly into the engine. This was a massive shift. Before this, setting up a cluster required external discovery services (Consul, etcd) and a lot of glue. Now? It is frighteningly simple.
For small to medium teams who just want to deploy a stack without hiring a dedicated Site Reliability Engineer, Swarm is attractive. Its ingress routing mesh publishes a service's port on every node and load-balances requests across the replicas. It just works.
The "It Just Works" Workflow
To start a cluster on your primary node, you simply run:
docker swarm init --advertise-addr 192.168.10.5
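The init command prints a ready-made join command for you. On each worker, you run it against the manager's address (a sketch: the token below is a placeholder, and 2377 is Swarm's default cluster-management port):

```shell
# Run on each worker node; the real token is printed by `docker swarm init`
docker swarm join \
  --token SWMTKN-1-<paste-your-token-here> \
  192.168.10.5:2377
```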
That is it. The command prints a join token; run the corresponding join command on each worker and it becomes part of the cluster. No certificate authority management, no manual overlay network configuration, and all control-plane traffic is mutually TLS-encrypted out of the box. Here is how you deploy a replicated Nginx service across your CoolVDS instances:
docker service create \
  --name web_frontend \
  --replicas 3 \
  --publish 80:80 \
  --update-delay 10s \
  nginx:alpine
The beauty here is the --update-delay flag. It handles rolling updates natively. If you need to visualize this, Portainer is currently gaining traction as a UI management tool, but the CLI is often sufficient.
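To see that flag do its job, push a new image tag: Swarm replaces tasks one at a time, pausing 10 seconds between each. Scaling is equally terse (a sketch; the nginx:1.13-alpine tag is illustrative, and the service name follows the example above):

```shell
# Roll the service to a new image; --update-delay 10s paces the restarts
docker service update --image nginx:1.13-alpine web_frontend

# Scale out from 3 to 5 replicas
docker service scale web_frontend=5

# Watch the rollout converge
docker service ps web_frontend
```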
The Heavyweight: Kubernetes 1.6
Kubernetes (K8s) is a different beast entirely. With the release of 1.6 recently, we finally saw RBAC (Role-Based Access Control) move to beta. This is critical for larger organizations where you don't want junior devs deleting production namespaces.
However, K8s is verbose. It assumes you have a Google-sized problem. The learning curve is a vertical wall. You are dealing with Pods, Deployments, Services, Ingress Controllers, and Persistent Volume Claims.
The Configuration Overhead
While Swarm is imperative (commands), Kubernetes is strictly declarative (YAML). Here is the equivalent Nginx deployment in K8s 1.6 syntax:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web-frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: NodePort
You need to apply this with kubectl apply -f nginx-deployment.yaml. It is more code, but it is version-controllable. The real power of K8s in 2017 is the ecosystem. Tools like Helm are emerging to manage these massive YAML piles.
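Once applied, the usual sanity checks look like this (assuming the manifest above was saved as nginx-deployment.yaml):

```shell
# Create or update the objects declaratively
kubectl apply -f nginx-deployment.yaml

# Verify the three replicas are running
kubectl get pods -l app=web

# Find the NodePort the Service was assigned (30000-32767 by default)
kubectl get svc web-service
```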
The Infrastructure Reality Check: KVM vs. OpenVZ
Here is the secret nobody tells you: Container orchestration murders cheap VPS hosting.
I have seen deployments fail because the host was using OpenVZ (container-based virtualization). Docker assumes it owns the kernel. When you run Docker inside OpenVZ, you are running a container inside a container: you hit the ceiling on inotify watchers, you run into OverlayFS storage-driver incompatibilities, and you cannot tune kernel parameters like net.ipv4.ip_forward that cluster networking depends on.
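A quick way to tell whether your VPS will fight you: try to read and raise those parameters. On a KVM guest this works; on OpenVZ the writes typically fail or are capped by the host (the 524288 value below is a common choice, not a mandated one):

```shell
# Read the current limits via procfs
cat /proc/sys/fs/inotify/max_user_watches
cat /proc/sys/net/ipv4/ip_forward

# On KVM you can raise them; on OpenVZ this often errors out
sysctl -w net.ipv4.ip_forward=1
sysctl -w fs.inotify.max_user_watches=524288

# Persist across reboots
echo 'fs.inotify.max_user_watches=524288' >> /etc/sysctl.conf
```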
Why KVM is Mandatory
You need full hardware virtualization. At CoolVDS, we use KVM (Kernel-based Virtual Machine). This gives your Docker host its own dedicated kernel. It prevents the "noisy neighbor" effect where another customer's database consumes all the CPU cycles, causing your Swarm health checks to time out.
Pro Tip: If you are running stateful containers (like MySQL or Redis), check your storage driver. On CoolVDS, we provision NVMe storage. Containers generate massive amounts of random I/O operations. Spinning rust (HDD) or even standard SATA SSDs often bottleneck during image pulls or database commits.
To verify your I/O scheduler is optimized for our NVMe drives, run this on your node:
cat /sys/block/vda/queue/scheduler
Inside the KVM guest the virtio disk usually shows up as vda (substitute sda if yours presents as a SCSI device). You should see [noop] or none, letting the host handle the scheduling efficiently.
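To put a number on the storage question, a short random-write benchmark with fio exposes the gap between NVMe and spinning disks immediately (assumes fio is installed; the parameters are illustrative and the file size is kept small so it is safe to run on a live node):

```shell
# 4K random writes with direct I/O, bypassing the page cache
fio --name=randwrite \
    --ioengine=libaio \
    --rw=randwrite \
    --bs=4k \
    --size=256m \
    --direct=1 \
    --runtime=30 \
    --group_reporting
```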
Latency and Data Sovereignty in Norway
Mark the date: 25 May 2018. If you follow the news from Datatilsynet, you know the GDPR (General Data Protection Regulation) enforcement deadline is exactly one year away. The "Privacy Shield" framework is shaky at best.
Hosting your orchestration cluster outside of Europe is becoming a liability. By deploying on CoolVDS, your data resides in Tier III data centers with direct fiber lines to NIX (Norwegian Internet Exchange).
Network Latency Test
Containers are chatty. If your Swarm managers are in Frankfurt and your workers are in Oslo, the consensus protocol (Raft) will suffer. Low latency is not just about page load speed; it is about cluster stability.
Here is a basic connectivity check you should run between your nodes to ensure Raft stability:
# Check latency statistics
ping -c 100 -i 0.2 192.168.10.6 | tail -1
# Check for MTU issues: 1472 bytes of payload + 28 bytes of headers = a full 1500-byte frame
ping -s 1472 -M do 192.168.10.6
If you see packet loss or round-trip jitter above 50ms, expect heartbeats to time out and leader elections to flap until your cluster state falls apart. CoolVDS consistently delivers sub-5ms latency within the Nordic region, ensuring your etcd or Swarm managers remain in sync.
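One number worth memorizing while we are on MTU: Swarm's overlay networks ride on VXLAN, whose outer headers add roughly 50 bytes to every frame. That is why Docker defaults overlay interfaces to an MTU of 1450 on a standard 1500-byte link. A quick sketch of the arithmetic:

```shell
# VXLAN encapsulation: outer Ethernet/IP/UDP/VXLAN headers ~ 50 bytes
PHYS_MTU=1500
VXLAN_OVERHEAD=50
echo "Usable overlay MTU: $((PHYS_MTU - VXLAN_OVERHEAD))"
```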
Conclusion: Which One to Choose?
| Feature | Docker Swarm | Kubernetes 1.6 |
|---|---|---|
| Setup Difficulty | Easy (Built-in) | Hard (kubeadm/manual) |
| Load Balancing | Automatic Mesh | Requires Ingress/Service |
| Scalability | Good (1000+ nodes) | High (5000+ nodes) |
| Ideal User | Small/Med Dev Teams | Enterprise/Ops Teams |
If you need to deploy today with a team of three developers, choose Docker Swarm. It is robust enough for 90% of use cases in 2017.
If you are building a platform that needs to survive the next five years and requires complex routing, stateful sets, or granular security policies, bite the bullet and learn Kubernetes.
Regardless of your choice, the software is only as good as the iron it runs on. Don't let IO wait times kill your container performance. Deploy your cluster on CoolVDS KVM instances with NVMe storage and keep your latency low and your data compliant.
Ready to build your cluster? Spin up a high-performance KVM instance in Oslo in under 55 seconds.