Orchestration Wars 2018: Choosing Your Weapon Without Blowing Up Production
Let's be honest. If I hear one more startup CTO say they need a multi-region Kubernetes federation for a PHP app with 500 daily users, I'm going to scream. It is mid-2018. The hype train for containers has officially left the station and crashed into the wall of reality. We aren't just playing with docker run anymore; we are trying to manage hundreds of these things without waking up at 3:00 AM because the overlay network collapsed.
I have spent the last six months migrating legacy monoliths into microservices for clients across Oslo and Bergen. The conclusion? There is no silver bullet, but there is definitely a wrong caliber for the job. Whether you are looking at Kubernetes (K8s), Docker Swarm, or HashiCorp Nomad, the software is only half the equation. The other half is the metal it runs on.
The Hardware Reality: Why I/O Kills Containers
Before we argue about YAML indentation, we need to address the elephant in the data center: I/O wait. Containers are just isolated processes sharing a kernel. If you run a database inside a container on a host with spinning rust (HDD) or oversold, shared SATA SSDs, your orchestration tool doesn't matter. Your cluster will die.
I recently debugged a Kubernetes cluster where the API server kept timing out. The culprit wasn't network latency; it was etcd write latency. The underlying disk couldn't keep up with the state changes. This is why, for any serious production workload, we default to CoolVDS NVMe instances. When fdatasync calls hang, the cluster assumes the node is dead and starts a rescheduling storm. Fast storage isn't a luxury; it's a dependency.
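Don't take the disk vendor's word for it. The quickest vetting method I know is the fio fdatasync test (the size and block size below roughly mimic etcd's write-ahead log; the directory path is wherever your etcd data will live):

# Benchmark sequential writes with an fdatasync after every write, like etcd's WAL
fio --rw=write --ioengine=sync --fdatasync=1 --directory=/var/lib/etcd --size=22m --bs=2300 --name=etcd-wal-check

The rule of thumb floating around the etcd community is a 99th-percentile fdatasync latency under 10ms. Spinning disks rarely pass. NVMe passes with room to spare.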
Contender 1: Docker Swarm (The "Just Works" Option)
Docker Swarm is the most underrated tool of 2018. It is built into the Docker engine. It is simple. If you have a team of three developers and no dedicated Ops engineer, use Swarm. You don't need a PhD in distributed systems to set it up.
The Configuration
Initializing a swarm takes exactly one command:
docker swarm init --advertise-addr 10.0.0.5
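That command prints a join token. Adding workers is one more copy-paste per node (token shown as a placeholder; 2377 is Swarm's management port):

docker swarm join --token SWMTKN-1-<paste-your-token> 10.0.0.5:2377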
Deploying a stack is equally trivial using the docker-compose.yml (version 3) format we already know:
version: '3.3'
services:
  web:
    image: nginx:1.15-alpine
    deploy:
      replicas: 5
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
    ports:
      - "80:80"
    networks:
      - webnet
networks:
  webnet:
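Ship it with a stack name of your choosing (web here is just a label I picked) and watch the replicas come up:

docker stack deploy -c docker-compose.yml web
docker service ls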
The Trade-off: Swarm's ingress mesh can be buggy under high load, and it lacks the rich ecosystem of K8s. But for pure simplicity? It wins.
Contender 2: Kubernetes (The Heavyweight Champion)
Kubernetes 1.11 just dropped. It's stable, it's powerful, and it is an absolute beast to manage. K8s is not an orchestration tool; it is a framework for building orchestration tools. If you are subject to GDPR and need strict separation of duties, RBAC (Role-Based Access Control) in K8s is mandatory.
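To make that concrete, here is a minimal sketch of a namespaced read-only Role and its binding; the namespace and user identity are placeholders for illustration, not gospel:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: auditor-read-pods
  namespace: production
subjects:
- kind: User
  name: auditor@example.com   # placeholder identity
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io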
Pro Tip: Never run etcd on the same disk partition as your container logs (/var/lib/docker). Docker logs can fill up I/O bandwidth, causing etcd heartbeats to fail. On CoolVDS, we usually mount a separate block volume for the etcd data directory to ensure isolation.
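The mechanics take two minutes. Assuming the extra volume shows up as /dev/vdb (the device name varies by provider), it looks roughly like this:

# Format and mount a dedicated volume for the etcd data directory
mkfs.ext4 /dev/vdb
mkdir -p /var/lib/etcd
mount /dev/vdb /var/lib/etcd
# Make the mount survive reboots
echo '/dev/vdb  /var/lib/etcd  ext4  defaults,noatime  0 2' >> /etc/fstab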
Here is what a basic deployment looks like in the K8s world. Note the verbosity compared to Swarm:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
If you are deploying this in Norway, you need to think about latency. K8s is chatty. If your worker nodes are in Oslo and your master nodes are in Frankfurt, the milliseconds add up. Keep your control plane close to your data. CoolVDS offers low-latency local peering, which effectively eliminates control-plane lag.
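A crude sanity check before committing to a topology: measure the round trip between your candidate regions. etcd peers heartbeat every 100ms by default, and kubelet status updates and watch streams ride the same links, so wire latency comes straight out of that budget:

# RTT from a worker region to the control-plane region (IP is hypothetical)
ping -c 20 10.0.0.10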
Contender 3: HashiCorp Nomad (The Unix Philosophy)
Nomad is the outlier. It doesn't just run containers; it runs binaries, Java jars, and VMs. It ships as a single binary, so deployment is trivial. If you are already using Consul for service discovery and Vault for secrets, Nomad is a natural fit. It is arguably more stable than K8s for mixed workloads (legacy binaries + Docker).
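For flavor, here is a minimal sketch of a Nomad job file running the same nginx we deployed above, using the Docker driver (stanza layout matches the current 0.8-era syntax; resource units are MHz and MB):

job "web" {
  datacenters = ["dc1"]
  type        = "service"

  group "frontend" {
    count = 3

    task "nginx" {
      driver = "docker"

      config {
        image = "nginx:1.15-alpine"
        port_map {
          http = 80
        }
      }

      resources {
        cpu    = 500 # MHz
        memory = 128 # MB
        network {
          port "http" {}
        }
      }
    }
  }
}

Run it with nomad run web.nomad. One binary on the server, one job file. That's the whole pitch.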
The Verdict: It Comes Down to Infrastructure
Here is the brutal truth: software cannot fix bad infrastructure. You can have the most beautifully architected Kubernetes cluster, but if it's running on a noisy neighbor VPS with high CPU steal time, your latency will spike, and your customers will leave.
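You can see the theft directly from inside the guest. The st column in vmstat is CPU time the hypervisor handed to someone else while you were waiting:

# Watch the 'st' (steal) column; sustained values above a few percent are a red flag
vmstat 1 5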
In the Norwegian market, we have strict requirements. Datatilsynet (The Norwegian Data Protection Authority) is watching how we handle data. Moving data out of the country adds legal complexity under the new GDPR rules introduced in May. Hosting locally on VPS Norway infrastructure solves the compliance headache immediately.
Performance Tuning for 2018 Workloads
Regardless of which orchestrator you pick, you must tune the Linux kernel. Default settings are not enough for high-density containerization. Add this to your /etc/sysctl.conf:
# Increase connection tracking for heavy service mesh traffic
net.netfilter.nf_conntrack_max = 131072
# Allow more memory allocation for TCP buffers
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# Essential for Redis/Elasticsearch containers
vm.overcommit_memory = 1
Apply with sysctl -p.
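One gotcha: the net.netfilter.* keys only exist once the connection-tracking module is loaded, so on a fresh host sysctl -p can complain about an unknown key. Load the module first, then spot-check that the values actually stuck:

# Ensure conntrack is loaded before applying, then verify the result
modprobe nf_conntrack
sysctl -p
sysctl net.netfilter.nf_conntrack_max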
Conclusion
- Choose Docker Swarm if you have a small team and want to deploy today.
- Choose Kubernetes if you need complex autoscaling, RBAC, and have a dedicated Ops engineer.
- Choose Nomad if you are deep in the HashiCorp stack or need to run non-containerized binaries.
But whatever you choose, do not skimp on the foundation. Orchestration overhead requires CPU cycles and fast random I/O. We use CoolVDS because they provide KVM isolation (no overselling) and NVMe storage that actually keeps up with etcd and overlay networks. Don't let your infrastructure become the bottleneck.
Ready to build a cluster that doesn't fall over? Spin up a high-performance NVMe instance on CoolVDS in Oslo today and see the difference raw I/O makes.