Kubernetes vs. Docker Swarm: Stop Over-Engineering Your Stack

It’s July 2020. If I see one more startup with three microservices trying to deploy a full highly-available Kubernetes cluster across three availability zones before they even have a paying customer, I’m going to lose it.

I’ve been in the trenches of systems administration for over 15 years, from the dark days of hand-rolled rsync scripts shuttling code between bare-metal boxes to the modern era of immutable infrastructure. Right now, the industry is suffering from a severe case of "Resume Driven Development." Everyone wants to run Kubernetes (K8s) because Google does it. But you are not Google.

In this analysis, I’m going to break down the current state of container orchestration as of mid-2020, comparing the two heavyweights: Kubernetes (v1.18) and Docker Swarm. We will judge them through the lens of performance, operational complexity, and—crucially for us here in the Nordics—data sovereignty.

The War Story: When Latency Kills the Control Plane

Last month, I was called in to rescue a project for a fintech client in Oslo. They had migrated a perfectly working monolithic application into 40 microservices running on a managed Kubernetes service hosted in Frankfurt. Performance tanked. APIs that used to respond in 50ms were taking 400ms.

The culprit wasn't the code. It was the network overlay and the underlying infrastructure. They were running on oversold virtual machines where "Steal Time" (CPU usage stolen by the hypervisor for other tenants) was hitting 15%. When your etcd cluster—the brain of Kubernetes—waits on disk I/O or CPU, the whole cluster stutters. This is why infrastructure matters more than the orchestrator itself.
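
Diagnosing this takes thirty seconds. The st column in vmstat reports the percentage of CPU time stolen by the hypervisor; on a healthy host it should sit near zero:

# Sample CPU stats once per second, five times.
# The rightmost 'st' column is hypervisor steal time, in percent.
vmstat 1 5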

Docker Swarm: The "Just Works" Solution

In 2020, it is fashionable to say Docker Swarm is dead since Mirantis acquired Docker Enterprise last year. This is nonsense. For 90% of small-to-medium businesses in Norway, Swarm is superior because it is simple. You don’t need a dedicated team of five DevOps engineers to manage the control plane.

Swarm is integrated directly into the Docker engine. If you can write a docker-compose.yml file, you can run a cluster.

Setting up a Swarm

It takes literally two commands. On your first CoolVDS node:

docker swarm init --advertise-addr 10.0.0.1

On the worker node:

docker swarm join --token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2e7c 10.0.0.1:2377
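
If you have lost that token, any manager node can reprint the full join command for you:

docker swarm join-token worker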

That is it. You have a cluster. Now, let’s look at a typical stack definition. This file defines a web service and a Redis cache, constrained to run on specific nodes.

version: '3.8'
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    deploy:
      replicas: 3
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.role == worker
    networks:
      - webnet

  redis:
    image: redis:5.0-alpine
    volumes:
      - redis-data:/data
    deploy:
      placement:
        constraints:
          - node.role == manager
    networks:
      - webnet

networks:
  webnet:
    driver: overlay
    attachable: true

volumes:
  redis-data:

Deploying this stack takes seconds. Overlay traffic can be IPsec-encrypted if you create the network with the encryption option, and service discovery is built in via internal DNS.
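
To see it in action (a sketch; I’m assuming the file above is saved as stack.yml and the stack is named web):

docker stack deploy -c stack.yml web
docker service ls           # watch the replicas converge
docker service ps web_web   # see which nodes the tasks landed on

For the encrypted variant, create the overlay network up front with --opt encrypted and mark it external in the stack file:

docker network create --driver overlay --opt encrypted --attachable webnet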

Kubernetes (v1.18): The Industrial Standard

Kubernetes is the operating system of the cloud. It is powerful, extensible, and complex. It uses a declarative model where you define the desired state, and the controllers work tirelessly to match the actual state to it.

However, K8s requires strict preparation of the underlying OS. You cannot just slap it on a cheap VPS. You need to disable swap, tune bridge networking, and configure your container runtime (likely Docker or containerd) to use the systemd cgroup driver.
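
For the runtime piece, the upstream Kubernetes documentation for this release recommends pointing Docker at the systemd cgroup driver so the kubelet and the runtime agree on who owns cgroups. A typical /etc/docker/daemon.json, based on that recommendation (tune the log size to taste):

{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}

Restart Docker afterwards (systemctl restart docker), or the kubelet will refuse to start over mismatched cgroup drivers.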

Preparing the Node

Before you even install kubeadm, you must ensure the Linux kernel allows iptables to see bridged traffic. This is a common failure point I see on default installations.

#!/bin/bash
# Standard node prep for K8s v1.18 on CentOS 7 / Ubuntu 18.04

# 1. Disable Swap (K8s requirement)
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab

# 2. Load required kernel modules
modprobe overlay
modprobe br_netfilter

# 3. Configure sysctl so iptables sees bridged traffic
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system
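
With the node prepped, bootstrapping the control plane is one kubeadm call. A sketch, assuming you will install Flannel as the CNI (hence the 10.244.0.0/16 pod network):

kubeadm init --pod-network-cidr=10.244.0.0/16

# Let your regular user talk to the cluster (run as that user)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config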

Once the cluster is up, defining the same Nginx service we did in Swarm requires significantly more boilerplate. This verbosity gives you control, but it also increases the surface area for configuration errors.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: "500m"
            memory: "128Mi"
          requests:
            cpu: "250m"
            memory: "64Mi"
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: ClusterIP
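
Save it as nginx.yaml (the filename is arbitrary), apply it, and verify:

kubectl apply -f nginx.yaml
kubectl get pods -l app=nginx   # all three replicas should reach Running
kubectl get svc nginx-service   # note the ClusterIP the cluster assigned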

The Hidden Bottleneck: Storage I/O

Whether you choose Swarm or Kubernetes, your stateful workloads (databases like MySQL, PostgreSQL, or Elasticsearch) will suffer if your underlying storage is slow. Containers are ephemeral, but data is forever.

Many VPS providers in Europe oversell their storage arrays. They put you on standard SSDs shared by 500 other users. When a neighbor runs a backup, your database latency spikes. This is "noisy neighbor" syndrome.

At CoolVDS, we use NVMe drives passed through with KVM virtio drivers. The difference in IOPS (Input/Output Operations Per Second) is staggering. Here is how you can test your current provider using fio. If you aren't getting at least 15k IOPS on random writes, your database will choke under load.

fio --name=random-write --ioengine=libaio --rw=randwrite --bs=4k --numjobs=1 --size=4g --iodepth=32 --runtime=60 --time_based --direct=1 --end_fsync=1

(The --direct=1 flag bypasses the page cache, so you measure the disk rather than RAM.)

Pro Tip: Always check the I/O scheduler in your guest. For NVMe-backed virtio disks under KVM, you usually want 'none' or 'noop', because the host handles the scheduling. Check it with: cat /sys/block/vda/queue/scheduler.
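
If the wrong scheduler is active, you can switch it on the fly (vda here assumes a virtio disk; the change does not survive a reboot without a udev rule or kernel boot parameter):

echo none > /sys/block/vda/queue/scheduler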

Data Sovereignty: The Norwegian Context

We are currently operating in a very uncertain legal climate regarding data transfers to the US. While the Privacy Shield is technically still in effect, scrutiny from the Datatilsynet (Norwegian Data Protection Authority) is increasing. Relying on US-owned cloud giants—even their European regions—exposes you to the CLOUD Act, which allows US law enforcement to demand data located abroad.

Running your own Kubernetes or Swarm cluster on CoolVDS infrastructure in Oslo ensures your data remains legally and physically in Norway. You own the encryption keys. You control the bits. This isn't just about performance; it's about compliance.

Comparison: Which one fits you?

Feature         | Docker Swarm            | Kubernetes (K8s)
----------------+-------------------------+-----------------------------------
Learning Curve  | Low (days)              | High (months)
Installation    | Native (built-in)       | Complex (kubeadm/Rancher)
Scalability     | Good (~1,000 nodes)     | Massive (5,000+ nodes)
Load Balancing  | Automatic internal mesh | Requires an Ingress controller
Storage         | Volumes & plugins       | CSI (Container Storage Interface)

Conclusion

If you are a team of 50 developers building a cloud-native platform that needs to scale to millions of users, use Kubernetes. The investment in complexity pays off in flexibility.

But if you are a typical Norwegian SMB or agency managing a few dozen web applications, Docker Swarm on high-performance KVM VPS is the pragmatic choice. It is stable, it is fast, and it doesn't require a dedicated team to keep the lights on.

Whichever orchestrator you choose, remember that software cannot fix hardware limitations. High CPU steal and slow disk I/O will kill your cluster stability. Start with a solid foundation.

Ready to build a cluster that doesn't sleep when you do? Deploy a high-frequency NVMe instance on CoolVDS today and see what single-digit latency to NIX looks like.