Kubernetes vs. Docker Swarm in Late 2020: The Orchestration Reality Check for Norwegian Ops

Let's be brutally honest: most of you do not need Kubernetes. I've sat in too many meetings this year with CTOs in Oslo who want K8s solely because "Google uses it." Unless you are managing microservices at the scale of Equinor or Spotify, the operational overhead of a full Kubernetes control plane can effectively DDoS your own engineering team.

However, we are at a pivot point. With Mirantis acquiring Docker Enterprise late last year, Swarm’s future feels uncertain to many. Meanwhile, the Schrems II ruling from July has made relying on US-owned hyperscalers for sensitive European data a legal minefield. The Datatilsynet (Norwegian Data Protection Authority) is watching.

In this analysis, we strip away the marketing noise. We look at the raw technical trade-offs between Docker Swarm and Kubernetes v1.19, and why your underlying hardware—specifically storage I/O—matters more than your YAML files.

The Latency Trap: Why Containers Aren't Magic

Before we argue about orchestrators, we must address the infrastructure. Containers share the host kernel. This reduces overhead compared to VMs, but it introduces the "noisy neighbor" problem on CPU caches and I/O schedulers. If you deploy a database container next to a heavy log-processing container on shared storage, your wait times will spike.
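
You can watch this happen in real time with iostat from the sysstat package (device names will differ on your system):

# Extended device statistics, refreshed every 2 seconds
iostat -x 2
# Watch the await columns (average per-I/O latency in ms) and %util:
# both spike when a neighbor saturates the shared disk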

This is where the "Cloud" often lies to you. Standard VPS hosting often uses network-attached storage (Ceph/GlusterFS) over 1Gbps links. For static files, that's fine. For etcd (the brain of Kubernetes), it is a death sentence.

Pro Tip: Kubernetes' etcd requires extremely low latency for fsync operations. If disk write latency regularly exceeds 10ms, heartbeats and leader elections start timing out and the control plane becomes unstable. This is why we strictly provision local NVMe storage on CoolVDS instances rather than slower network storage. Using spinning rust (HDD) for K8s in 2020 is professional negligence.

Benchmarking Disk Latency for Orchestration

Before installing any orchestrator, run this fio command on your nodes to verify they can handle the state management load. In the output, look at the fdatasync latency percentiles, not just the raw IOPS.

# Mimic etcd's write pattern: small sequential writes, synced to disk after each one
fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=test-data --size=22m --bs=2300 \
    --name=mytest

If your 99th percentile fdatasync latency is above 10ms, do not deploy Kubernetes on that node. You need better hardware.

Docker Swarm: The "Good Enough" Solution?

For many teams in the Nordics running 5 to 50 services, Docker Swarm remains the most pragmatic choice in 2020. It is built into the Docker engine. There is no separate binary to install, no complex PKI infrastructure to manage manually.

The Pros:

  • Simplicity: You can spin up a cluster in three commands (see the sketch after this list).
  • Low Overhead: The memory footprint of a Swarm manager is a fraction of a K8s master.
  • Compose Compatibility: It uses the docker-compose.yml format developers already know.
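
To make "three commands" concrete, here is a minimal sketch of bootstrapping a cluster. The manager address 10.0.0.1 and the worker token are placeholders for your own values:

# On the first node: initialize the cluster and become a manager
docker swarm init --advertise-addr 10.0.0.1

# On each additional node: join with the token printed by "swarm init"
docker swarm join --token <worker-token> 10.0.0.1:2377

# Back on the manager: confirm every node shows up as Ready
docker node ls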

The Cons:

  • Networking limitations: The overlay network can be buggy under high churn.
  • Feature Stagnation: Since Mirantis took over Docker Enterprise, new Swarm features have been slow to arrive.

Deploying a Stack

Swarm's elegance is in its brevity. Here is a standard HA setup:

version: "3.8"
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 5
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
    ports:
      - "80:80"
    networks:
      - webnet
networks:
  webnet:

Command to deploy: docker stack deploy -c docker-compose.yml my_cluster
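
Once the stack is up, you can watch the replicas converge (my_cluster_web is the name Swarm derives from the stack and service above):

docker stack services my_cluster   # replica counts per service
docker service ps my_cluster_web   # placement and state of each task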

Kubernetes (K8s): The Industrial Standard

Kubernetes has won the war. With v1.19 released in August, stability is excellent. However, it demands a steep learning curve. You are not just managing containers; you are managing a software-defined datacenter.

The Critical Pain Point: Ingress & Networking

Unlike Swarm, K8s does not expose services to the outside world by default. You need an Ingress Controller (like Nginx or Traefik) and a CNI plugin (Calico, Flannel, or Cilium). In 2020, we see a shift toward Cilium due to its use of eBPF for performance, but Calico remains the safe, battle-tested default.
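
As a sketch of what that extra layer looks like, here is a minimal Ingress using the networking.k8s.io/v1 API that went GA in v1.19. The hostname is a placeholder, and it assumes a Service named production-api exists on port 8080 (fronting the Deployment shown below):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
spec:
  rules:
  - host: api.example.no         # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: production-api # assumes this Service exists
            port:
              number: 8080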

Resource Discipline

The most common cause of outages I see in K8s clusters is the lack of resource limits. Without them, a memory leak in one Java pod will trigger the Linux OOM killer, potentially taking down the kubelet or other critical system processes.

Configuration Best Practice:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: production-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: go-api
        image: registry.coolvds.com/backend:v2.4
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10
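
Assuming the manifest above is saved as deployment.yaml, roll it out and watch it converge:

kubectl apply -f deployment.yaml
kubectl rollout status deployment/production-api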

The Security Angle: Schrems II and GDPR

This is where the conversation shifts from technical to legal. In July 2020, the CJEU invalidated the Privacy Shield framework. If you are hosting personal data of Norwegian citizens on US-controlled clouds (AWS, Azure, GCP), you are now in a gray area regarding GDPR compliance.

Hosting on a Norwegian provider like CoolVDS is no longer just a latency decision; it is a compliance strategy. Our datacenters are in Oslo. The data stays under Norwegian jurisdiction. When you build your orchestration layer, you must ask: "Where do the physical disks actually reside?"

Performance Tuning: KVM vs. Bare Metal

There is a myth that you need bare metal for containers. That is false. You need hardware-assisted virtualization with no overprovisioning.

We utilize KVM (Kernel-based Virtual Machine) because it offers the necessary kernel isolation that containers lack, without the heavy overhead of older hypervisors. However, the host configuration is paramount. We enable nested virtualization and ensure CPU flags are passed through correctly so your containers can utilize instruction sets like AES-NI for SSL termination.
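
You can verify from inside the guest that these flags actually made it through. A quick sketch (systemd-detect-virt ships with any systemd-based distro):

# Hardware virtualization flags visible to the guest (>0 means nested virt works)
egrep -c '(vmx|svm)' /proc/cpuinfo

# AES-NI available for fast TLS termination?
grep -m1 -ow aes /proc/cpuinfo

# Which hypervisor does the VM report? Should print "kvm"
systemd-detect-virt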

Optimizing Kernel Parameters for High-Load Docker

Whether you use Swarm or K8s, the default Linux kernel settings are often too conservative for thousands of containers. Add these to your /etc/sysctl.conf:

# Increase connection tracking table size
net.netfilter.nf_conntrack_max = 131072

# Allow more pending connections
net.core.somaxconn = 65535

# Enable IP forwarding (Required for CNI plugins)
net.ipv4.ip_forward = 1

# Increase memory for TCP buffers
net.ipv4.tcp_rmem = 4096 87380 6291456
net.ipv4.tcp_wmem = 4096 16384 4194304

Apply with sysctl -p. Note that net.netfilter.nf_conntrack_max only exists once the nf_conntrack module is loaded, which normally happens as soon as Docker programs its iptables rules.
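
To confirm the kernel actually accepted the values, a quick check (standard sysctl invocations, nothing provider-specific):

sudo sysctl -p
sysctl net.core.somaxconn net.netfilter.nf_conntrack_max net.ipv4.ip_forward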

Conclusion: Make the Pragmatic Choice

If you are a team of three developers managing a Magento store and a few microservices, stay with Docker Swarm or even a simple docker-compose setup on a robust VM. The complexity of Kubernetes will slow you down more than it helps.

If you are building a scalable SaaS platform requiring strict zero-downtime deployments, RBAC, and complex routing, Kubernetes is the only path forward. But respect the hardware requirements. Do not run it on cheap, oversold VPS hosting where I/O wait times fluctuate wildly.

Stability starts at the metal. Ensure your foundation is solid, your storage is NVMe, and your data is legally secure.

Ready to build? Deploy a KVM-optimized instance in Oslo with pure NVMe storage in under 60 seconds at CoolVDS.