Kubernetes vs. Docker Swarm in 2021: Stop Over-Engineering Your Norwegian Stack

Let’s be honest: in late 2021, “Kubernetes” has become a synonym for “deploying applications.” It is the resume-driven development standard. But having spent the last six months migrating a client’s over-engineered microservices architecture back to a monolithic setup on raw NVMe VPS instances, I have a different take. Complexity is not a virtue. It is technical debt.

For DevOps teams operating out of Oslo or serving the Nordic market, the choice between Kubernetes (K8s) and Docker Swarm isn't just about features—it's about overhead, etcd latency, and the looming shadow of the Schrems II ruling. If you are routing traffic through US-owned cloud regions in Frankfurt while your customers are sitting in Bergen, you aren't just adding latency; you are inviting scrutiny from Datatilsynet.

The State of Orchestration: October 2021

While Google and the CNCF have declared Kubernetes the winner, Docker Swarm refuses to die. Why? Because `docker-compose.yml` is still the most intuitive way to describe a system. With Mirantis acquiring Docker Enterprise back in 2019, support has stabilized, and for teams smaller than 20 engineers, Swarm remains a viable, low-overhead contender.

Pro Tip: Don't underestimate the resource tax of K8s. A highly available control plane (3 masters) consumes significant RAM and CPU just to keep the API server and scheduler breathing. On a budget-constrained cluster, those are resources stolen from your actual application.
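
If you want to see that tax for yourself on an existing cluster, check the kube-system namespace (this assumes metrics-server is installed; without it, `kubectl top` has nothing to report):

# What the control-plane components alone are consuming
kubectl top pods -n kube-system --sort-by=memory

# Compare against total node capacity
kubectl top nodes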

1. Docker Swarm: The "It Just Works" Option

If you need to deploy a stack in under 5 minutes, Swarm is unbeaten. There are no Helm charts, no Operators, and no CRDs. You initiate a swarm, join nodes, and deploy. The latency between command execution and container creation is negligible.

Here is how quickly you can turn a fresh CoolVDS instance into a manager node:

docker swarm init --advertise-addr $(hostname -i)
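
Adding workers is just as terse. Grab the join token on the manager and paste the printed command onto each new node (the token and IP below are placeholders):

# On the manager: print the exact command workers need to run
docker swarm join-token worker

# On each worker, the output looks roughly like this:
# docker swarm join --token SWMTKN-1-<token> <manager-ip>:2377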

And to deploy a full stack? You likely already have the file. Swarm reads the standard Compose format natively.

version: '3.8'
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 5
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
    ports:
      - "80:80"
    networks:
      - webnet

networks:
  webnet:
    driver: overlay
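
Assuming the file above is saved as `docker-compose.yml`, deploying and verifying the whole stack takes a couple of commands:

# Deploy the Compose file as a stack named "web"
docker stack deploy -c docker-compose.yml web

# Watch the five replicas spread across the cluster
docker service ls
docker service ps web_web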

The beauty here is the routing mesh. Hit any node's IP on port 80 and Swarm routes the request to a healthy container, wherever it happens to be running. For small Norwegian e-commerce shops needing high availability without the cognitive load of K8s, this is often enough.
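
A quick way to convince yourself the mesh is working: curl the published port on every node, including ones that aren't running a replica, and you should get the same nginx response (the node IPs here are placeholders):

# Any swarm node answers on the published port, replica or not
curl -I http://10.0.0.11/
curl -I http://10.0.0.12/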

2. Kubernetes: The Power (and the Pain)

Kubernetes v1.22 dropped in August, bringing major API changes and finally removing a raft of long-deprecated beta APIs (the v1beta1 Ingress among them). It is robust, but it demands respect—specifically regarding storage I/O.

The heart of Kubernetes is etcd. Etcd is incredibly sensitive to disk write latency. If your underlying storage cannot guarantee fast fsync operations, your cluster leader elections will fail, and your API server will time out. This is where cheap, shared hosting falls apart.

The "Noisy Neighbor" Problem in K8s

On standard spinning rust (HDD) or throttled SSDs, etcd WAL (write-ahead log) sync times can spike above 100ms. When that happens, heartbeats are missed, the cluster churns through leader elections, and nodes get flagged as unhealthy even though the hardware is fine.

We run our control planes on CoolVDS NVMe instances because consistent I/O is non-negotiable for etcd. If you are running your own cluster, you must benchmark your storage before installing `kubeadm`. Use `fio` to verify you aren't getting throttled.

# A quick way to test if your VPS disk is fast enough for etcd
fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=. --size=22m --bs=2300 --name=etcd_perf

If the 99th percentile fdatasync latency is over 10ms, do not deploy Kubernetes there. You will regret it during peak traffic.
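
On a cluster that is already running, etcd exposes the same signal directly as a latency histogram. The endpoint and certificate paths below assume a stock kubeadm layout, so adjust them to match your install:

# WAL fsync latency histogram straight from etcd (kubeadm default cert paths assumed)
curl -s --cacert /etc/kubernetes/pki/etcd/ca.crt \
     --cert /etc/kubernetes/pki/etcd/server.crt \
     --key /etc/kubernetes/pki/etcd/server.key \
     https://127.0.0.1:2379/metrics | grep wal_fsync_duration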

Defining High-Performance Storage Classes

When you are hosting databases like Postgres or MySQL inside K8s, the default StorageClass usually isn't enough: you want the data on local NVMe, and you want volume binding to be topology-aware so the Pod lands next to its disk. Here is a production-ready StorageClass configuration we use for data-heavy workloads in 2021:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-nvme-fast
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
allowVolumeExpansion: true
parameters:
  fsType: ext4
  type: nvme

By setting volumeBindingMode: WaitForFirstConsumer, we force the Kubernetes scheduler to place the Pod on the specific node where the NVMe volume exists, rather than binding it prematurely. This is critical for maintaining the low latency we promise our clients.
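
Since kubernetes.io/no-provisioner means static provisioning, each NVMe volume still has to be declared by hand as a PersistentVolume pinned to its node. A minimal sketch (the path, size, and node name are placeholders):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pg-data-node1
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-nvme-fast
  local:
    path: /mnt/nvme0/postgres      # pre-formatted NVMe mount on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-1         # placeholder node name

A PVC that requests local-nvme-fast then stays Pending until the consuming Pod is scheduled, at which point both land on the same node together.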

Latency & Law: The Norwegian Context

Latency isn't just about disk speed; it's about physics. Signals in fiber travel at roughly two-thirds the speed of light, so distance translates directly into round-trip time (RTT). Oslo to Frankfurt is roughly 20-25ms; Oslo to a data center inside Oslo is under 2ms.
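
Don't take those numbers on faith; measure them from the VPS itself toward wherever your users and upstream APIs actually sit (the hostnames below are placeholders):

# Compare RTT toward a Frankfurt endpoint and an Oslo endpoint
ping -c 20 endpoint-frankfurt.example.com
ping -c 20 endpoint-oslo.example.com

# mtr shows where the milliseconds accumulate along the route
mtr --report --report-cycles 20 endpoint-frankfurt.example.com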

For real-time bidding apps or high-frequency transactional systems, that 20ms difference is an eternity. But in 2021, the bigger issue is legal. Following the Schrems II judgment last year, relying on US-owned cloud providers has become a compliance minefield for Norwegian companies handling sensitive personal data. Hosting on a VPS provider with strict Norwegian data residency (like CoolVDS) simplifies your GDPR compliance stance significantly.

Comparing the Options

| Feature            | Docker Swarm                 | Kubernetes (K8s)              |
|--------------------|------------------------------|-------------------------------|
| Learning Curve     | Low (hours)                  | High (months)                 |
| Minimum Resources  | ~512MB RAM                   | ~2GB RAM (control plane)      |
| Scaling Speed      | Fast                         | Moderate (complex scheduling) |
| Storage Complexity | Simple (bind mounts/volumes) | High (CSI, PVC, PV)           |

Ingress Tuning for Performance

Regardless of the orchestrator, your entry point is the bottleneck. In K8s, the Nginx Ingress Controller is standard. However, the defaults are tuned for compatibility, not speed. To handle DDoS attempts or high concurrency, you need to tune it both per Ingress via annotations and cluster-wide via the controller's ConfigMap.

Here are the per-Ingress annotations we inject to optimize buffer sizes and drop stale upstream connections fast (the client-side timeouts that blunt slow-loris attacks live in the ConfigMap, shown further down):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: production-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # Increase buffer to handle larger headers/payloads
    nginx.ingress.kubernetes.io/proxy-buffer-size: "16k"
    # Timeouts to drop stale connections fast
    nginx.ingress.kubernetes.io/proxy-read-timeout: "60"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "60"
    # Note: client IP preservation (use-forwarded-headers) is a cluster-wide
    # ConfigMap setting, not a per-Ingress annotation (see the ConfigMap below)
spec:
  rules:
  - host: app.coolvds-demo.no
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-service
            port:
              number: 80
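
The cluster-wide knobs, including use-forwarded-headers and the client timeouts that actually blunt slow-loris attacks, live in the ingress-nginx controller's ConfigMap. A sketch, assuming the stock install where that ConfigMap is named ingress-nginx-controller in the ingress-nginx namespace:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name and namespace assume the standard install
  namespace: ingress-nginx
data:
  # Preserve client IPs when a load balancer sits in front of the controller
  use-forwarded-headers: "true"
  # Drop clients that dribble headers or bodies (slow-loris)
  client-header-timeout: "10"
  client-body-timeout: "10"
  # Reuse connections instead of re-handshaking per request
  keep-alive-requests: "1000"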

Verdict: Choose Your Weapon Carefully

If you are building a massive microservices architecture with 50+ engineers, Kubernetes is the inevitable choice. But you must respect the hardware requirements. Do not try to run K8s on budget, oversold VPS instances. You need dedicated CPU cycles and, most importantly, NVMe storage to keep `etcd` happy.

However, for 80% of the projects I see in the Nordic market—web agencies, CMS hosting, internal tools—Docker Swarm on a robust Linux VPS is faster to build, cheaper to run, and easier to debug at 3 AM.

Infrastructure is not about collecting the shiniest tools. It is about uptime and latency. Whether you choose K8s or Swarm, ensure your foundation is solid. Don't let slow I/O kill your SEO.

Ready to test your cluster's performance? Deploy a high-performance NVMe instance on CoolVDS in 55 seconds and see the difference raw power makes.