Kubernetes vs. Docker Swarm vs. Nomad: Surviving the Orchestration Wars in 2019

Let’s be honest: half of you are deploying Kubernetes just to put it on your CV. I’ve seen it happen in startups from Oslo to Trondheim. You take a perfectly functional monolith, slice it into forty microservices, wrap it in YAML, and then wonder why your latency just jumped 200ms and your cloud bill doubled.

I’ve spent the last six months migrating a high-traffic FinTech workload from bare metal to a containerized setup. We broke things. We exhausted conntrack tables. We learned that the difference between "cutting edge" and "bleeding edge" is whether it’s your blood on the floor at 3 AM.

Today, we aren't talking about trends. We are talking about survival. We are looking at the three major players available right now in early 2019: Kubernetes (1.13), Docker Swarm, and HashiCorp Nomad. We will look at complexity, overhead, and why your underlying VPS matters more than the orchestrator you choose.

The Contenders

Feature          Docker Swarm      Kubernetes (K8s)           Nomad
Learning Curve   Low (Built-in)    Vertical Wall              Moderate
State Store      Raft (Built-in)   etcd (External/Complex)    Raft (Built-in; pairs with Consul)
Minimum RAM      ~50 MB            ~1.5 GB (Control Plane)    ~100 MB

1. Docker Swarm: The "Good Enough" Hero

If you are running a shop with five developers and you don't have a dedicated Ops team, stop reading and use Swarm. It is integrated into Docker Engine 1.12+. There is no extra binary to install. It just works.

The beauty of Swarm is the Compose file format. You likely already have a docker-compose.yml for local development. Swarm lets you take that almost 1:1 into production.

The Setup

Here is how difficult it is to set up a Swarm cluster on three CoolVDS instances connected via private networking:

# On the Manager Node (CoolVDS-01)
docker swarm init --advertise-addr 10.10.0.1

# Output:
# docker swarm join --token SWMTKN-1-49nv... 10.10.0.1:2377

# On Worker Nodes (CoolVDS-02, CoolVDS-03)
docker swarm join --token SWMTKN-1-49nv... 10.10.0.1:2377

That is it. You have a cluster. Now, let’s deploy a replicated Nginx service with a constraint to ensure high availability.

version: '3.7'
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 6
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.role == worker
    ports:
      - "80:80"
    networks:
      - webnet

networks:
  webnet:
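
Save that as docker-compose.yml and push it to the cluster from the manager node. Swarm calls a deployed Compose file a "stack":

# Deploy (or update) the stack; Swarm prefixes services with the stack name
docker stack deploy -c docker-compose.yml web

# Verify the six replicas landed on the workers
docker service ls
docker service ps web_web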

The Catch: Swarm's ingress mesh can be buggy under high load. If you are pushing 10k RPS, you might see connections dropped if the IPVS routing table gets clogged. We mitigated this recently by bypassing the mesh and using mode: host for our HAProxy entry points.
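
For reference, here is roughly what that bypass looks like. The long-form port syntax (Compose file format 3.2+) publishes the port directly on each node instead of through the routing mesh:

    ports:
      - target: 80
        published: 80
        protocol: tcp
        mode: host

Note that with host mode only one replica per node can bind the port, which is exactly what you want for an edge proxy like HAProxy.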

2. Kubernetes: The Industrial Grade Sledgehammer

Kubernetes (K8s) is the standard. I won't deny that. But K8s v1.13 is a beast. You need to manage etcd, the API server, the scheduler, and the controller manager. If etcd suffers fsync latency because your host disk is slow, leader elections start timing out and the whole control plane stalls.

Pro Tip: Never run a Kubernetes control plane on standard HDD VPS hosting. The write latency requirements for etcd are strict. We use CoolVDS NVMe instances specifically because they guarantee the IOPS needed to keep the cluster heartbeat stable.
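
Don't take the disk's word for it; measure. This fio run (adapted from the etcd hardware guidance) mimics etcd's WAL write pattern (small sequential writes, fdatasync after each) and reports latency percentiles. The common rule of thumb is a 99th-percentile fdatasync latency under roughly 10ms:

# Point --directory at the disk that will hold /var/lib/etcd
fio --name=etcd-disk-check --directory=/mnt/etcd-test \
    --rw=write --ioengine=sync --fdatasync=1 --size=22m --bs=2300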

Here is a snippet of a Kubernetes Deployment. Notice the verbosity compared to Swarm. This is just for a simple stateless app.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-gateway
  labels:
    app: payment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: payment
  template:
    metadata:
      labels:
        app: payment
    spec:
      containers:
      - name: payment-api
        image: registry.coolvds.com/payment:v1.2
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 3
          periodSeconds: 3
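
Assuming you save the manifest as payment-gateway.yaml (the filename is my invention), rolling it out and checking on it is straightforward:

kubectl apply -f payment-gateway.yaml
kubectl rollout status deployment/payment-gateway
kubectl get pods -l app=payment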

Why do we use it? Granularity. Look at the resources block above. In a multi-tenant environment, strict CPU and memory limits stop a memory leak in one pod from killing its neighbour. That isolation also keeps data processing predictable, which matters for GDPR compliance (Datatilsynet is watching).

3. The Hardware Reality Check

No matter which orchestrator you pick, they all rely on the same Linux kernel features: cgroups and namespaces. But orchestrators add overhead. Networking overlays (flannel, Calico, Weave) encapsulate packets, and that encapsulation costs you CPU cycles.

We ran a benchmark last week comparing packet processing on bare metal vs. a virtualized K8s node. The bottleneck wasn't the CPU; it was the SoftIRQ handling on the network interface.
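
You can watch this happen on your own nodes. With the sysstat package installed, keep an eye on the %soft column under load; one core pinned near 100% while the rest sit idle means the NIC's interrupt handling is the choke point:

# Per-CPU utilisation, refreshed every second; %soft is SoftIRQ time
mpstat -P ALL 1

# Raw per-CPU SoftIRQ counters (look at the NET_RX row)
watch -n1 'cat /proc/softirqs'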

To fix this on your worker nodes, you need to tune sysctl.conf. Do not leave these at default values if you are running production containers in Norway where users expect sub-20ms latency.

# /etc/sysctl.conf

# Increase the maximum number of open file descriptors
fs.file-max = 2097152

# Increase the backlog for incoming connections
net.core.somaxconn = 65535

# Increase the range of ephemeral ports
net.ipv4.ip_local_port_range = 1024 65000

# Protect against SYN flood attacks
net.ipv4.tcp_syncookies = 1

# Enable TCP Fast Open (useful for reducing latency on mobile networks)
net.ipv4.tcp_fastopen = 3
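
# Raise the connection-tracking ceiling. We blew through the default
# during our migration; 262144 is a starting point, not a universal
# answer -- size it to your RAM and connection volume.
net.netfilter.nf_conntrack_max = 262144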

Apply these with sysctl -p. If you are on a cheap VPS provider that uses OpenVZ, you often cannot modify these kernel parameters because you are sharing the kernel with 50 other tenants. This is why CoolVDS uses KVM virtualization—you get your own kernel, your own control.
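
Not sure which hypervisor your current provider actually runs? On any systemd-based distro, one command tells you:

# Prints "kvm", "openvz", "none" (bare metal), etc.
systemd-detect-virt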

4. HashiCorp Nomad: The Unix Philosophy

Nomad is the underdog. It handles non-containerized workloads (Java JARs, raw binaries) just as well as Docker containers, and it integrates cleanly with Consul for service discovery. If you are already in the HashiCorp ecosystem, Nomad 0.9 (the beta just dropped!) is compelling.
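
To give you a feel for it, here is a minimal sketch of the payment service from the K8s example as a Nomad job (0.8-era HCL; the image and health-check path are carried over from above, everything else is illustrative):

job "payment-gateway" {
  datacenters = ["dc1"]
  type        = "service"

  group "api" {
    count = 3

    task "payment-api" {
      driver = "docker"

      config {
        image = "registry.coolvds.com/payment:v1.2"
        port_map {
          http = 8080
        }
      }

      resources {
        cpu    = 500 # MHz
        memory = 128 # MB
        network {
          port "http" {}
        }
      }

      # Registered in Consul, which gives you the HTTP health check for free
      service {
        name = "payment-api"
        port = "http"
        check {
          type     = "http"
          path     = "/healthz"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }
  }
}

Run it with nomad run payment.nomad. That is the whole deployment.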

It’s simpler than K8s but more robust than Swarm. However, the ecosystem is smaller. You will be writing your own scripts for things that K8s solves with Helm charts.

Verdict: What should you host in Norway?

If you are targeting the Norwegian market, data sovereignty is paramount. You need to know exactly where your bits live to satisfy GDPR. Public clouds often obscure this.

  • Use Docker Swarm if you are a small agency deploying web apps. It is fast, cheap, and easy.
  • Use Kubernetes if you have a team of 10+ devs and require complex autoscaling or service meshes like Istio (which is gaining traction this year).
  • Use Nomad if you have a mix of legacy binaries and modern containers.

Regardless of the software, the hardware must be solid. Orchestration is IO-intensive. Logs are written, images are pulled, databases are queried. If your underlying storage waits 10ms for a write, your API waits 10ms. On CoolVDS, our local NVMe storage ensures that wait time is negligible.

Stop fighting your infrastructure. Choose the orchestrator that fits your team size, not the one that looks best on LinkedIn, and put it on iron that can handle the load.

Ready to test your cluster? Deploy a high-performance KVM instance on CoolVDS in Oslo today and see the latency difference for yourself.