The Container Orchestration Wars: Choosing Your Weapon in 2020
If you are still manually SSHing into servers to docker run your production applications, stop. It is 2020. The "works on my machine" excuse died years ago, and managing distributed systems requires tooling that respects the chaos of the real world. But as we stand here in March, the landscape is fragmented. Kubernetes has effectively won the mindshare war, yet Docker Swarm refuses to die, and HashiCorp's Nomad is quietly powering massive workloads without the fanfare.
I have spent the last three weeks migrating a high-traffic eCommerce cluster for a client in Oslo. We started with a mess of shell scripts and ended with a fully GitOps-driven pipeline. But the journey forced us to re-evaluate the "Standard Choice" (Kubernetes) against the alternatives. Here is the raw technical reality of container orchestration right now, specifically for teams operating within the European regulatory framework.
The Contenders
1. Kubernetes (The 800lb Gorilla)
With version 1.18 having just shipped, Kubernetes (K8s) is the undeniable standard. It is powerful, extensible, and backed by Google's engineering pedigree. However, it is also an operational beast. K8s does not just manage containers; it manages networking, storage, secrets, and configuration. It is, in effect, an operating system for the cloud.
The Pain Point: etcd latency. Kubernetes stores its cluster state in etcd. If your underlying storage cannot handle fsync operations fast enough, the API server times out, and your cluster effectively suffers a stroke. This is where cheap VPS providers fail you. If you aren't running on high-performance NVMe (like we standardize on at CoolVDS), your K8s control plane will be unstable.
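You can quantify this before committing a node to the control plane. A common smoke test (assuming `fio` is installed) mimics etcd's write-ahead-log pattern: small sequential writes, each followed by an fdatasync. etcd's guidance is that the 99th-percentile fdatasync latency should stay below roughly 10ms; the directory name below is illustrative.

```shell
# Simulate etcd's WAL pattern: small sequential writes, each
# followed by fdatasync. Run this against the disk you intend
# to use for /var/lib/etcd.
mkdir -p etcd-disk-test
fio --name=etcd-fsync-check \
    --directory=etcd-disk-test \
    --rw=write --ioengine=sync \
    --fdatasync=1 --size=22m --bs=2300
```

Look at the fsync/fdatasync latency percentiles in the output; a p99 above ~10ms means the disk is too slow for a stable control plane, no matter how much CPU you throw at it.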
2. Docker Swarm (The Pragmatist's Choice)
Despite the Mirantis acquisition of Docker Enterprise late last year, Swarm mode remains built into the Docker Engine. It is incredibly simple. You can spin up a cluster in roughly three commands.
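Those three commands, sketched out (the manager IP and join token are placeholders; the real token is printed by the init command):

```shell
# On the first node: initialize the cluster and become a manager
docker swarm init --advertise-addr <MANAGER-IP>
# On each worker: join using the token printed by the init command
docker swarm join --token <TOKEN> <MANAGER-IP>:2377
# Back on the manager: deploy a stack from a compose file
docker stack deploy -c docker-compose.yml mystack
```

That is the entire bootstrap. No external key-value store, no certificate ceremony; mutual TLS between nodes is set up automatically.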
3. HashiCorp Nomad (The UNIX Philosophy)
Nomad does one thing: scheduling. It doesn't care if you run Docker, raw binaries, or Java JARs. It integrates seamlessly with Consul and Vault, but it leaves networking largely up to you.
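For comparison, here is a minimal Nomad job for the same three-replica Nginx service, written against the current 0.10-era syntax (the `web.nomad` filename and `dc1` datacenter are illustrative, not canonical):

```shell
# Write a minimal Nomad job spec. Syntax matches Nomad ~0.10;
# the port_map/network stanzas are scheduled for rework in
# later releases.
cat > web.nomad <<'EOF'
job "web" {
  datacenters = ["dc1"]

  group "web" {
    count = 3

    task "nginx" {
      driver = "docker"

      config {
        image = "nginx:1.17"
        port_map {
          http = 80
        }
      }

      resources {
        network {
          port "http" {
            static = 80
          }
        }
      }
    }
  }
}
EOF
# Then submit it to the cluster:
#   nomad job run web.nomad
```

Note what is absent: no Service object, no CNI configuration. Nomad schedules the tasks; service discovery is Consul's job.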
Technical Showdown: Deployment Complexity
Let's look at how we deploy a simple Nginx service across these platforms. The difference in boilerplate is staggering.
The Docker Swarm Way
Swarm uses the familiar docker-compose.yml syntax. It is readable and concise.
version: '3.7'
services:
  web:
    image: nginx:latest
    deploy:
      replicas: 3
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
    ports:
      - "80:80"
    networks:
      - webnet
networks:
  webnet:
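Deploying that file (assuming it is saved as docker-compose.yml and you are on a manager node) is a one-liner; "web" here is an arbitrary stack name:

```shell
# Deploy the stack; services are prefixed with the stack name
docker stack deploy -c docker-compose.yml web
# Watch the three replicas converge across the cluster
docker service ps web_web
```

Rolling updates later are just `docker stack deploy` again with a changed image tag; the update_config stanza governs the rollout.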
The Kubernetes Way
Kubernetes requires significantly more verbosity. You need a Deployment and a Service at minimum. Note the selector matching logic.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.17
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer
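Applying both manifests (assuming they are saved together as nginx.yaml) and confirming the rollout:

```shell
# Create or update the Deployment and Service
kubectl apply -f nginx.yaml
# Block until all 3 replicas are available
kubectl rollout status deployment/nginx-deployment
# Confirm the Service got an external IP from your load balancer
kubectl get service nginx-service
```

The upside of all that verbosity: every field is declarative state the control loop will reconcile, so a dead pod is replaced without you noticing.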
Infrastructure Matters: The Hidden Bottleneck
Regardless of which orchestrator you choose, they all share one vulnerability: The Noisy Neighbor. In a shared hosting environment, if another tenant decides to mine cryptocurrency or compile a massive kernel, your CPU steal time increases. When CPU steal spikes, your orchestrator's scheduler lags. It thinks nodes are unhealthy when they are just slow.
Pro Tip: Always monitor your `iowait` and steal time. On Linux, you can verify this quickly:
sar -u 1 5
If `%iowait` consistently exceeds 5% or `%steal` is non-zero, your host is oversold. Move to a provider that guarantees resources.
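If sar is not installed, you can compute steal time straight from /proc/stat; field 9 of the aggregate "cpu" line counts steal ticks. This is a quick diagnostic sketch, not a monitoring solution:

```shell
# Sample aggregate CPU counters twice, one second apart.
# Fields 2..N of the "cpu" line are jiffies; field 9 is steal.
t1=$(awk '/^cpu /{print $9; exit}' /proc/stat)
tot1=$(awk '/^cpu /{s=0; for (i=2; i<=NF; i++) s+=$i; print s; exit}' /proc/stat)
sleep 1
t2=$(awk '/^cpu /{print $9; exit}' /proc/stat)
tot2=$(awk '/^cpu /{s=0; for (i=2; i<=NF; i++) s+=$i; print s; exit}' /proc/stat)
# Steal as a percentage of all CPU time over the interval
awk -v s="$((t2 - t1))" -v t="$((tot2 - tot1))" \
    'BEGIN { if (t > 0) printf "steal: %.2f%%\n", 100 * s / t }'
```

Run it during your peak traffic window, not at 3 AM when the neighbors are asleep too.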
This is why we built CoolVDS on KVM (Kernel-based Virtual Machine) with strict resource isolation. We do not oversell CPU cores. When you run a Kubernetes worker node on CoolVDS, you get the dedicated cycles you pay for. This is critical for latency-sensitive applications targeting the Norwegian market.
Networking & Datatilsynet Compliance
For those of us operating in Norway and the broader EU, data sovereignty is not just a buzzword; it is a legal minefield. With GDPR enforcement ramping up, knowing exactly where your data packets flow is essential.
Kubernetes offers sophisticated CNI (Container Network Interface) plugins like Calico or Cilium. These allow you to define NetworkPolicies that act as firewalls between pods. You can strictly deny traffic between your database and your public frontend except on specific ports.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
Applying this policy blocks all ingress traffic to every pod in the namespace. You then whitelist only what is necessary. This level of granular control is often required to satisfy strict audits. Swarm's overlay networks can be encrypted with a single flag (a nice touch), but lack this granular policy engine out of the box.
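On top of a deny-all policy, you add narrow allow rules. As an illustrative sketch (the `app: frontend` / `app: db` labels and port 5432 are assumptions for this example, not from a real manifest), a policy admitting only the frontend to the database might look like this:

```shell
# Write an allow rule that admits only frontend pods to the
# database pods on the PostgreSQL port; everything else in the
# namespace stays blocked by the deny-all policy.
cat > allow-frontend-to-db.yaml <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-db
spec:
  podSelector:
    matchLabels:
      app: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 5432
EOF
# Then apply it:
#   kubectl apply -f allow-frontend-to-db.yaml
```

For an auditor, a directory of these manifests is documentary evidence of exactly which flows are permitted.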
Performance Tuning for 2020 Hardware
If you are running a database inside a container (stateful workloads), you must tune the host Linux kernel. Default settings are often too conservative for modern NVMe drives.
Here is a snippet from a standard CoolVDS node configuration script we use to prepare for high-throughput container storage:
# Increase the number of incoming connections
sysctl -w net.core.somaxconn=1024
# Enable forwarding for container networking
sysctl -w net.ipv4.ip_forward=1
# Increase max map count for Elasticsearch/DB workloads
sysctl -w vm.max_map_count=262144
# Swappiness - keep RAM in RAM for K8s stability
sysctl -w vm.swappiness=1
Failing to set vm.max_map_count is the #1 reason Elasticsearch containers crash on startup. Failing to set ip_forward means your pods can't talk to the world.
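To verify what a node is actually running with, no root access is needed, since every sysctl value is exposed under /proc/sys:

```shell
# Read back the current values; the file paths mirror the
# dotted sysctl names one-to-one.
max_map=$(cat /proc/sys/vm/max_map_count)
fwd=$(cat /proc/sys/net/ipv4/ip_forward)
echo "vm.max_map_count=${max_map}"
echo "net.ipv4.ip_forward=${fwd}"
```

Remember that `sysctl -w` does not survive a reboot; persist the settings in /etc/sysctl.d/ as well.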
The Verdict
Choose Docker Swarm if: You have a team of 1-5 developers, you need to move fast, and you don't need complex autoscaling or custom CRDs.
Choose Nomad if: You have a mixed environment (legacy binaries + Docker) and appreciate the simplicity of a single binary.
Choose Kubernetes if: You are building for the long term, need a rich ecosystem (Helm, Prometheus, Istio), and can afford the learning curve.
But remember, an orchestrator is only as stable as the iron it runs on. A Kubernetes cluster on slow disks is a nightmare. A Swarm cluster on a laggy network is useless. At CoolVDS, we provide the raw, unadulterated performance—NVMe storage, 10Gbps uplinks, and pure KVM isolation—that these tools demand.
Don't let I/O wait kill your uptime. Deploy your test cluster on a CoolVDS instance today and experience the difference true dedicated resources make.