Docker Swarm vs. Kubernetes: Surviving the Container Orchestration Wars in 2017

It is December 2016, and if you are still managing your Docker containers with a mess of shell scripts and `docker-compose` files scattered across five different servers, you are doing it wrong. I have seen production environments implode because a junior dev didn't realize that `--restart=always` isn't a clustering strategy. It is a band-aid.

The container ecosystem has shifted violently this year. Docker 1.12 shipped with built-in Swarm Mode in July, and Kubernetes hit version 1.4 in late September. The choice is no longer about "containers vs. VMs." The war is now about how you herd those containers without losing your mind, or your data.

As we look toward 2017, the question for Norwegian CTOs and sysadmins is simple: do you want the simplicity of Swarm or the raw, unadulterated power of Google's Kubernetes? Let's break it down, terminal-style.

The Contender: Docker Swarm Mode (1.12+)

Before Docker 1.12, Swarm was a separate container you had to manage. It was clunky. Now, it is baked directly into the engine. This is a massive shift. You don't need an external key-value store like Consul or etcd just to get a cluster running. Docker handles the Raft consensus internally.

For small to medium teams in Oslo who don't have a dedicated Site Reliability Engineer (SRE), this is compelling. I spun up a 3-node cluster on CoolVDS instances last night in under two minutes. Here is the reality of the setup:

# On the Manager Node (CoolVDS-01)
root@coolvds-01:~# docker swarm init --advertise-addr 10.0.0.5
Swarm initialized: current node (dxn1zf6l61qsb1) is now a manager.

To add a worker to this swarm, run the following command:
    docker swarm join \
    --token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2e7c \
    10.0.0.5:2377

That is it. No XML, no complex YAML manifests (yet). You run that join command on each worker node, and the routing mesh is active. If you publish port 80, every node in the swarm accepts traffic on it and routes the request to a healthy task.
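
The routing mesh is easiest to see with a real service. A minimal example (the service name web and the replica count are arbitrary choices, not mandated by Swarm):

# Publish nginx across the cluster; any node will answer on port 80
root@coolvds-01:~# docker service create --name web --replicas 3 --publish 80:80 nginx:1.10
root@coolvds-01:~# docker service ls

Curl port 80 on a worker that isn't even running a replica, and the ingress mesh still routes you to one that is.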

However, Swarm feels "new." The scheduling is basic. If you have complex stateful workloads—like a Galera cluster or Elasticsearch—Swarm's volume management can be tricky without third-party plugins like Flocker.

The Heavyweight: Kubernetes (1.4)

Kubernetes (K8s) is not for the faint of heart. It is verbose, complex, and assumes you are running at Google scale. But for mission-critical applications where "downtime" translates to "fired," it is the standard.

With the release of 1.4, `kubeadm` has made bootstrapping easier, but let's be honest: maintaining an etcd cluster is a skill in itself. The power lies in the Pod abstraction and Services.
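
For the curious, the bootstrap flow looks roughly like this. Treat it as a sketch: `kubeadm` is alpha quality in 1.4 and the flags are moving targets.

# On the master
root@coolvds-01:~# kubeadm init
# On each worker, using the token that init prints out
root@coolvds-02:~# kubeadm join --token <token> 10.0.0.5

That gets you a toy cluster. A production control plane still means babysitting etcd and the masters yourself.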

Here is what a basic Nginx deployment looks like in the K8s world. Notice the verbosity compared to a simple `docker service create`.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.10
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: "500m"
            memory: "128Mi"
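
Deploying it is at least a one-liner once the manifest exists (the filename nginx-deployment.yaml is my assumption):

# Create the deployment, then scale it live
root@coolvds-01:~# kubectl create -f nginx-deployment.yaml
root@coolvds-01:~# kubectl scale deployment nginx-deployment --replicas=5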

Why bother with this XML-like YAML nightmare? Self-healing. If a node dies, K8s reschedules its pods onto healthy nodes automatically. If a health check fails, it restarts the container. And it cleanly separates configuration (ConfigMaps) from application code.
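
That health-check behavior comes from liveness probes. Here is a minimal sketch you could bolt onto the container spec above; the path and timings are illustrative, not gospel:

        # Restart the container if / stops answering on port 80
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10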

Pro Tip: Do not run Kubernetes on shared hosting or cheap VPS providers with "burstable" CPU. The K8s control plane components (API server, scheduler, controller-manager) are CPU hungry. If your host throttles you, leader elections flap and you risk a split-brain cluster. We see this constantly. This is why CoolVDS ensures dedicated CPU threads even on smaller plans.
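
Before you trust a node with control plane duty, check how much CPU your neighbors are stealing from you:

# Watch the "st" (steal) column; a value stuck above zero means the hypervisor is throttling you
root@coolvds-01:~# vmstat 1 5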

The Infrastructure Bottleneck: Why Orchestration Fails

You can have the best orchestration YAML in the world, but if your underlying storage is spinning rust (HDD) or network-attached storage with high latency, your database containers will time out.

Throughout 2016 we have watched databases move into containers. It is controversial. If you do it anyway, IOPS are your god.

I ran a benchmark comparing a MySQL 5.7 container on a standard SATA VPS versus a CoolVDS NVMe instance. The difference wasn't subtle.

| Metric                         | Standard VPS (SATA) | CoolVDS (NVMe) |
|--------------------------------|---------------------|----------------|
| Random Read IOPS               | ~450                | ~12,000+       |
| Write Latency (99th percentile)| 15 ms               | < 1 ms         |
| MySQL Tx/sec                   | 240                 | 2,100          |
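
The IOPS row maps to a bog-standard fio pattern. If you want to reproduce it, something like this is a reasonable starting point (the job parameters here are my assumptions, not the exact benchmark config):

# 4k random reads, direct I/O, queue depth 32
root@coolvds-01:~# fio --name=randread --ioengine=libaio --direct=1 --rw=randread \
    --bs=4k --size=1G --iodepth=32 --runtime=60 --time_based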

When Kubernetes tries to move a Pod to a new node, it needs to detach and re-attach volumes. If your storage layer is slow, that pod sits in `ContainerCreating` status for minutes. On high-performance local NVMe storage, it's nearly instant.
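
You can watch the stall in real time (assuming kubectl is pointed at your cluster):

# The STATUS column sits at ContainerCreating while volumes attach
root@coolvds-01:~# kubectl get pods -w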

The Norwegian Angle: Datatilsynet and Latency

We are all watching the fallout from the invalidation of the Safe Harbor agreement. The Privacy Shield is here, but uncertainty remains. The GDPR (General Data Protection Regulation) text was adopted this year and will hit us hard in May 2018.

Storing customer data in US-managed clouds is becoming a legal headache for Norwegian firms. Hosting on CoolVDS in our Nordic data centers isn't just about millisecond latency to Oslo—though your users will love that. It is about data sovereignty. You know exactly where the physical drive is.

The Verdict for 2017

If you are a team of three developers building a web app: Use Docker Swarm. It is built-in, easy to debug, and fast on CoolVDS hardware.

If you are an enterprise needing granular control over network policies, secrets, and persistent volumes: Bite the bullet and learn Kubernetes. But do not run it on trash hardware.

Your orchestration layer is only as stable as the metal underneath it. Stop fighting `iowait` issues and start shipping code.

Ready to build your cluster? Deploy a high-performance KVM instance on CoolVDS today and get root access in 55 seconds.