Kubernetes vs. Docker Swarm in Late 2020: The Infrastructure Reality Check for Norwegian Ops

Let’s be honest. Most of you deploying Kubernetes today don't have Google-scale problems. You have resume-padding problems. I’ve spent the last six months migrating frantic dev teams away from over-engineered K8s clusters that were burning money, only to put them on a clean, solid Linux stack where they actually ship code.

But the landscape shifted violently this July. The CJEU's Schrems II ruling invalidated the EU-US Privacy Shield. Suddenly, dumping your customer data into a US-managed cluster isn't just lazy; it's a legal liability for Norwegian businesses answering to Datatilsynet. Data sovereignty is now a technical requirement, not a checkbox.

This brings us back to bare metal and reliable VPS infrastructure located physically in Norway (or at least the EEA). If you are building your own orchestration layer in late 2020, you have two pragmatic choices: the beast that is Kubernetes (v1.19) or the unkillable cockroach that is Docker Swarm.

The Latency Killer: Etcd vs. Your Disk

Kubernetes fails not because the code is bad, but because the underlying infrastructure lies to it. The heart of K8s is etcd, a key-value store that demands fsync latency consistently under 10ms. If you run a control plane on cheap, oversold VPS hosting where "neighbors" steal your I/O operations, your cluster will split-brain during a traffic spike.

I recently debugged a cluster crashing every night at 02:00. The culprit? A noisy neighbor on the host node running backups. On CoolVDS, we enforce strict KVM isolation and use NVMe storage specifically to prevent this. If you aren't on NVMe, don't run etcd.

Pro Tip: Before you even install kubeadm, benchmark the disk. If the 99th percentile fsync is over 10ms, abort. You are building a house on sand.

Here is the exact fio command I use to validate a node before adding it to a cluster:

fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=test-data --size=22m --bs=2300 \
    --name=mytest

If you see fsync/fdatasync/write latencies creeping up, your control plane is doomed.

Kubernetes v1.19: The "Production Ready" Config

If you have a team of at least three DevOps engineers, Kubernetes is justifiable. Version 1.19 (released in August 2020) finally stabilized many of the features we need. But the default configuration is dangerous.

When setting up a cluster on self-managed VPS instances (which you should, to avoid vendor lock-in and ensure GDPR compliance), you need to handle the networking manually. I prefer Calico for the CNI because BGP peering allows us to route service IPs directly if needed.
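
To make that concrete, here is a minimal sketch of advertising service IPs over BGP with Calico. The peer address (192.168.10.1), AS number (64512), and service CIDR (10.96.0.0/12, the kubeadm default) are assumptions; adjust them to your own router and cluster, and apply with calicoctl:

apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: upstream-router
spec:
  # Assumed upstream router; every node peers with it
  peerIP: 192.168.10.1
  asNumber: 64512
---
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  # Advertise the service ClusterIP range to the peer
  serviceClusterIPs:
  - cidr: 10.96.0.0/12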

Here is a snippet of a StorageClass optimized for high-performance local NVMe (the kind you get on CoolVDS slices). We use the WaitForFirstConsumer binding mode to ensure the pod lands on the node where the volume exists.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-nvme-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain

Combined with a Local Persistent Volume setup, this gives you raw disk speed for databases, bypassing network-attached storage latency. This is critical for PostgreSQL performance.
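
A matching PersistentVolume then pins the data to a specific node. This is a sketch with assumed values: the mount path /mnt/nvme/pg-data and the hostname worker-1 are placeholders for your own layout:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pg-data-worker-1
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-nvme-storage
  local:
    # Pre-formatted NVMe mount on the node (placeholder path)
    path: /mnt/nvme/pg-data
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker-1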

The Ingress Bottleneck

Don't rely on default Nginx Ingress settings. I see this error in /var/log/nginx/error.log constantly during load tests:

[error] 19#19: *10042 upstream sent too big header while reading response header from upstream

You need to tune your ConfigMap. Add this immediately:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  proxy-buffer-size: "16k"
  proxy-body-size: "50m"
  worker-processes: "4" 
  max-worker-connections: "10240"

Docker Swarm: The "Good Enough" Hero

If you are a single developer or a small startup in Oslo trying to launch before Christmas, Kubernetes is a trap. Docker Swarm is built into the Docker engine. It requires no extra binaries. It uses less RAM. It just works.

I run a Swarm cluster for a client processing 500 requests per second. The total cost? Three CoolVDS instances. The complexity? A single YAML file.

Initializing a Swarm is laughably simple compared to the kubeadm dance:

# On the Manager Node (CoolVDS Instance 1)
docker swarm init --advertise-addr 192.168.10.2

# On the Worker Nodes (CoolVDS Instances 2 & 3)
docker swarm join --token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3... 192.168.10.2:2377
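
Verify the topology from the manager; all three nodes should show up as Ready:

docker node ls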

With Swarm, you don't need a dedicated team to manage the orchestrator. You define a stack, and you deploy. Here is a production-ready docker-compose.yml snippet for a high-availability web service with resource limits (essential to prevent OOM kills):

version: "3.8"
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 5
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
      resources:
        limits:
          cpus: "0.50"
          memory: 512M
    ports:
      - "80:80"
    networks:
      - webnet

networks:
  webnet:

The Infrastructure Decision: Latency and Sovereignty

Whether you choose K8s or Swarm, your orchestrator acts as a multiplier for infrastructure quality. If your network latency fluctuates, K8s liveness probes fail, pods restart, and your site goes down.
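
One mitigation, sketched here with a placeholder /healthz endpoint and port, is to give probes enough headroom that a transient latency spike doesn't cascade into a restart storm:

livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 10
  # Tolerate slow responses and require several misses before a restart
  timeoutSeconds: 5
  failureThreshold: 3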

In the Nordic market, latency to the Norwegian Internet Exchange (NIX) is paramount. Hosting in Frankfurt or London adds 20-30ms round trip. Hosting in Norway drops that to 2-5ms for local users. That speed difference is massive when you have microservices talking to each other across nodes.

Security & Compliance (Schrems II Context)

Since the July 2020 ruling, using US-owned cloud providers for core data storage is a gray area at best. By utilizing CoolVDS, you are leveraging European infrastructure. We own our hardware. We don't lease it from a hyperscaler who might be compelled to hand over keys.

For your orchestration layer, this means:

  • Encryption at Rest: Use LUKS on your Linux partitions.
  • Network Policies: Deny all ingress traffic by default (see the sketch after this list).
  • Location: Ensure your nodes are physically in the target jurisdiction.
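
The deny-by-default sketch below assumes a namespace called production; with Calico (or any CNI that enforces NetworkPolicy), applying it blocks all ingress until you explicitly allow specific traffic:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  # Empty selector matches every pod in the namespace
  podSelector: {}
  policyTypes:
  - Ingress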

Verdict

If you need service meshes, complex canary deployments, and have a dedicated Ops team: Use Kubernetes v1.19. But run it on dedicated KVM slices, not shared hosting garbage. The CPU overhead of the Kubelet and API server requires guaranteed cycles.

If you want to sleep at night and just run a web app: Use Docker Swarm. It’s robust, it respects your RAM, and it’s fast.

Whatever you choose, the hardware dictates the stability. Don't let slow I/O kill your SEO or your uptime. Deploy a high-performance NVMe instance on CoolVDS today and see what sub-millisecond disk latency does for your database performance.