Kubernetes vs. Docker Swarm in 2020: Choosing the Right Orchestrator for Nordic Infrastructure

I recently watched a promising Oslo startup burn through 30% of their seed funding hiring consultants to debug a Kubernetes cluster. Their infrastructure? Three microservices and a Redis cache. This is madness. In the rush to adopt Google-scale tools, we often forget that we aren't Google. We are building robust systems that need to serve customers from Bergen to Trondheim with minimal latency and maximum uptime.

As of March 2020, the container orchestration landscape is stabilizing, but the choice between the juggernaut that is Kubernetes (K8s) and the pragmatic Docker Swarm is still the most common debate I hear in server rooms. Let's break this down technically, focusing on the realities of running these on European infrastructure.

The Beast: Kubernetes (v1.17)

Kubernetes has won the marketing war. With the recent release of v1.17, stability is better than ever, and Helm 3 finally getting rid of Tiller has made security teams breathe easier. However, K8s is not an "install and forget" solution. It is an operating system for your cluster.
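If you have not tried the new Helm yet, the workflow is now entirely client-side; a minimal sketch (the chart repository and release name here are just examples):

# Helm 3: no Tiller pod, no cluster-side component to secure.
# Release state is stored as Secrets in the release's namespace.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-web bitnami/nginx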

The Hidden Cost: Latency and Etcd

Kubernetes relies heavily on etcd for state management. Etcd is incredibly sensitive to disk write latency (fsync). If your underlying VPS storage chokes, your entire cluster enters a degraded state. I've seen K8s masters fail simply because the hosting provider oversold their HDD arrays.

This is where infrastructure choice becomes architectural. If you are running a self-managed K8s cluster, you cannot compromise on I/O. We run our heavy workloads on CoolVDS NVMe instances because the random write speeds (IOPS) keep etcd happy. If you are seeing leader election failures, check your disk latency first.

# Check disk sync latency (crucial for etcd)
# If the 99th percentile is > 10ms, your K8s cluster is at risk.
fio --rw=write --ioengine=sync --fdatasync=1 --directory=test-data --size=22m --bs=2300 --name=mytest
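When the job finishes, read the fdatasync latency percentiles fio prints; the 99th percentile is the number to hold against that 10ms budget. On a running cluster, etcd also reports its own view of disk health through Prometheus metrics (assuming your metrics endpoint is reachable without client certificates, which depends on your setup):

# WAL fsync timings, straight from etcd itself
curl -s http://127.0.0.1:2379/metrics | grep etcd_disk_wal_fsync_duration_seconds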

Configuration Complexity

Kubernetes requires verbosity. A simple Nginx deployment looks like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.9
        ports:
        - containerPort: 80
          name: http
        livenessProbe:
          httpGet:
            path: /
            port: http
          initialDelaySeconds: 30
          timeoutSeconds: 5

You then need a Service, an Ingress controller, and likely a Cert-Manager setup. It's powerful, but it's a lot of moving parts.
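For completeness, the Service that fronts that Deployment is at least short — a minimal ClusterIP sketch; external traffic still needs an Ingress on top of it:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx        # must match the pod labels above
  ports:
  - port: 80          # port exposed inside the cluster
    targetPort: http  # the named containerPort from the Deployment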

The Pragmatist: Docker Swarm

Despite the uncertainty following the Mirantis acquisition last year, Docker Swarm remains the superior choice for teams of fewer than 20 engineers. It is baked into the Docker Engine, so there is no separate control plane to install and no etcd to babysit.

Swarm shines in scenarios where you need to deploy fast and maintain low operational complexity. It respects the standard docker-compose.yml format (mostly), making the dev-to-prod pipeline trivial.

version: '3.8'
services:
  web:
    image: nginx:1.17.9
    deploy:
      replicas: 3
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
    ports:
      - "80:80"
    networks:
      - webnet

networks:
  webnet:

Run docker stack deploy -c docker-compose.yml mystack and you are done. No Helm charts, no RBAC nightmares.
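And "done" genuinely means a handful of commands on a fresh host (the IP and names below are placeholders):

# Turn a plain Docker host into a single-node swarm manager
docker swarm init --advertise-addr 10.0.0.1

# Deploy the stack, then inspect it
docker stack deploy -c docker-compose.yml mystack
docker service ls
docker service ps mystack_web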

The Infrastructure Layer: Where It All Breaks

Whether you choose Swarm or K8s, the orchestration layer cannot fix a bad network. In Norway, data sovereignty and latency are critical. Routing traffic through a datacenter in Frankfurt when your users are in Oslo adds unnecessary milliseconds. Worse, if your provider's "virtualization" is just a glorified container (like OpenVZ), you can run into kernel version conflicts when trying to use specific Docker features such as the overlay2 storage driver or overlay networking.

We use KVM (Kernel-based Virtual Machine) exclusively for orchestration nodes because it provides genuine hardware-level isolation. Container engines rely on cgroups and namespaces; you don't want a "noisy neighbor" on the host machine stealing CPU cycles and causing your pods to throttle.
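You can verify what a provider actually sold you from inside the guest; a quick check:

# Prints "kvm" on a real hypervisor, "openvz" or "lxc" on container-based "VPS" plans
systemd-detect-virt

# Kernel version matters too: overlay2 wants a modern kernel, and OpenVZ hosts often pin ancient ones
uname -r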

Pro Tip for Nordic Devs: If you are handling Norwegian citizen data, ensure your hosting provider complies with GDPR strictures. While we wait for clarity on international transfers (the privacy landscape is volatile right now), keeping data within national borders or the EEA on owned hardware—like CoolVDS servers in European datacenters—is the safest legal hedge.

Tuning for Performance

If you are deploying a high-traffic application, default settings will kill you. Here is a real-world tuning example for your sysctl.conf on the host nodes to handle high connection rates, typical in a microservices environment:

# /etc/sysctl.conf

# Increase the maximum number of open file descriptors
fs.file-max = 2097152

# Increase the connection tracking table size (critical for kube-proxy/docker-proxy)
net.netfilter.nf_conntrack_max = 131072

# Reuse sockets in TIME_WAIT state for new outbound connections
net.ipv4.tcp_tw_reuse = 1

# Increase the read/write buffer sizes for TCP
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216

Apply these with sysctl -p before you even install Docker. Note that the net.netfilter.* keys only exist once the conntrack module is loaded, so run modprobe nf_conntrack first on a fresh host. On a standard VPS, you can hit the nf_conntrack limit surprisingly fast during a DDoS attack or a traffic spike, and every connection over the limit means dropped packets.
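To see how close you are running to that ceiling in production, compare the live counter against the limit:

# Current tracked connections vs. the configured maximum
cat /proc/sys/net/netfilter/nf_conntrack_count
cat /proc/sys/net/netfilter/nf_conntrack_max

# The kernel logs this exact line when the table overflows
dmesg | grep "nf_conntrack: table full, dropping packet"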

Conclusion: Pick Your Poison

If you need auto-scaling based on custom metrics, complex sidecar patterns, or you are managing hybrid clouds, use Kubernetes. Just make sure you run it on infrastructure that can handle the IOPS—CoolVDS NVMe plans are specifically architected for this workload.

If you need to deploy a standard web stack with redundancy and zero downtime updates, use Docker Swarm. It is robust, simple, and gets out of your way.

Don't let orchestration envy slow down your shipping cycle. Clean code on a fast server beats complex architecture on a slow one every time.

Ready to build your cluster? Deploy a high-performance KVM instance in Oslo on CoolVDS in under 60 seconds.