
Kubernetes vs. Docker Swarm vs. Nomad: The 2022 Orchestrator Showdown for Nordic Ops

The Orchestration Wars: K8s, Swarm, or Nomad?

It is 3:00 AM. Your pager is screaming because the etcd cluster just lost quorum, freezing the Kubernetes API and leaving your production rollout stuck halfway. If you have been in DevOps long enough, you know this pain. It usually isn't the software's fault; it's the infrastructure underneath it gasping for I/O.

In 2022, the container orchestration landscape has matured, but the complexity has exploded. We aren't just shipping Docker containers anymore; we are managing service meshes, persistent storage claims, and cross-zone replication. For teams targeting the Nordic market, the challenge is twofold: managing this complexity while adhering to strict data sovereignty laws (hello, Datatilsynet) and maintaining millisecond latency to end-users in Oslo.

I have built clusters ranging from 3-node Raspberry Pi swarms to 500-node Kubernetes federations. Here is the brutal truth about what you should run this year, and why the hardware you run it on matters more than the YAML you write.

1. Docker Swarm: The "Good Enough" Solution

Docker Swarm is not dead, despite what the Kubernetes maximalists scream on Twitter. For small teams deploying a standard LEMP stack or a microservices architecture with fewer than 20 services, Swarm is incredibly efficient. It is built into the Docker engine, meaning if you can run docker run, you can run a cluster.

However, Swarm shows its age in overlay networking performance and in its lack of advanced scheduling primitives. But for a quick deployment? It beats K8s on setup time by a factor of ten.

The Setup Reality

To initialize a swarm, you run one command. Compare this to the bootstrap process of K8s (even with kubeadm).

# On the manager node
docker swarm init --advertise-addr 192.168.10.5

# On the worker node
docker swarm join --token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2e7c 192.168.10.5:2377
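
If the join works, the manager should now see both hosts, with itself flagged as Leader under MANAGER STATUS:

docker node ls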

Then, you define your stack. This looks almost exactly like Docker Compose. This familiarity is Swarm's biggest asset.

version: "3.9"
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 5
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
    ports:
      - "80:80"
    networks:
      - webnet
networks:
  webnet:
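
Deploying the stack is one more command. The file name and stack name below are just placeholders for this example:

docker stack deploy -c stack.yml web
docker service ls   # watch REPLICAS converge to 5/5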

2. Kubernetes (K8s): The Industry Standard

By August 2022, Kubernetes (specifically v1.24/1.25) has effectively won the war. The removal of dockershim in v1.24 caused panic, but the migration to containerd or CRI-O has largely been smooth for those paying attention.
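
If you are not sure whether your nodes were still leaning on dockershim, kubectl will tell you: the CONTAINER-RUNTIME column should show containerd:// or cri-o://, not docker://.

kubectl get nodes -o wide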

Kubernetes is powerful, but it is resource-hungry. The control plane components (API server, Scheduler, Controller Manager, and specifically etcd) demand respect. Running a production K8s control plane on cheap, noisy-neighbor VPS hosting is a death sentence. I have seen etcd timeouts occur simply because the hypervisor stole CPU cycles or the disk I/O latency spiked above 10ms.
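
To get a feel for what the control plane is actually eating, metrics-server (assuming you have it installed) gives a quick, rough picture:

kubectl top nodes
kubectl top pods -n kube-system --sort-by=memory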

The Configuration Overhead

Unlike Swarm, you need to be explicit about everything. If you don't set resource requests and limits (and back them with a namespace-level ResourceQuota), one memory leak in a Java app will OOM-kill its neighbors. Here is what a responsible deployment looks like in 2022:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-api
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: go-api
        image: registry.coolvds.com/backend:v2.4.1
        resources:
          requests:
            memory: "128Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 15
          periodSeconds: 20

Pro Tip: Never deploy a workload to K8s without defining requests and limits. If you skip them, Kubernetes puts your pod in the BestEffort QoS class, and it will be the first to be evicted when the node comes under memory pressure.
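
If you want this enforced at the namespace level instead of trusting every team to remember, a ResourceQuota acts as a backstop. The numbers below are purely illustrative; size them to your own nodes:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: production-quota
  namespace: production
spec:
  hard:
    requests.cpu: "10"
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi

Pair it with a LimitRange if you also want sane defaults injected into pods that declare nothing at all.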

The Infrastructure Bottleneck: etcd and NVMe

Here is the critical technical detail most tutorials skip: Kubernetes relies entirely on etcd for state, and etcd relies entirely on disk write latency (fsync).

If your disk write latency exceeds 10ms, etcd starts throwing warnings. If it spikes much higher, heartbeats get delayed, leader elections churn, and the control plane effectively goes dark until the disk catches up. This is where "cheap" cloud hosting fails: those providers typically put you on network-attached storage or Ceph clusters that, while redundant, often have ugly tail latency.
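
You can watch this in real time. On a kubeadm-built control plane, etcd normally exposes plain-HTTP metrics on localhost port 2381 (a kubeadm default; adjust if your setup differs). The WAL fsync and backend commit histograms are the ones that matter:

# divide _sum by _count for a rough average; anything creeping toward 10ms is a red flag
curl -s http://127.0.0.1:2381/metrics | grep -E 'etcd_disk_(wal_fsync|backend_commit)_duration_seconds_(sum|count)'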

This is where CoolVDS has a distinct engineering advantage. We use local NVMe storage passed through KVM. When etcd calls fsync(), it hits the NVMe controller almost instantly. We aren't routing that traffic over a congested 10GbE network link to a SAN in a different rack.

Testing Your Storage for K8s

Before you install K8s on any VPS, run fio to ensure it can handle the load. Here is the benchmark we run on every CoolVDS node before commissioning:

mkdir -p test-data
fio --rw=write --ioengine=sync --fdatasync=1 --directory=test-data --size=22m --bs=2300 --name=mytest

If the 99th percentile fdatasync latency is over 10ms, do not put a K8s control plane on it.
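
Once the cluster is up, etcd can also benchmark itself. The certificate paths below assume a stock kubeadm layout under /etc/kubernetes/pki/etcd/; adjust them if yours differ. Note that check perf generates real write load, so do not point it at a cluster that is already struggling:

ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
  --key=/etc/kubernetes/pki/etcd/healthcheck-client.key \
  check perf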

3. Data Sovereignty and The "Schrems II" Effect

Technical architecture doesn't exist in a vacuum. Since the Schrems II ruling, transferring personal data of European citizens to US-controlled cloud providers has become a legal minefield. Even with the new frameworks being discussed in Brussels this year, the safest bet for Norwegian companies is to keep data on Norwegian soil.

Using a provider like CoolVDS, where the physical metal sits in Oslo, simplifies your GDPR compliance posture significantly. You know exactly where the bits are stored. There is no hidden replication to a data center in Virginia.

Conclusion: Choosing Your Path

If you are a solo dev or a team of three, stick to Docker Swarm or look at HashiCorp Nomad for a middle ground. The operational overhead of Kubernetes is real.

However, if you need the ecosystem—Helm charts, Operators, and serious scaling—Kubernetes is the only choice. Just remember that K8s is not magic; it is software that needs performant hardware. Don't let slow I/O kill your SEO or your uptime.

Ready to build a cluster that actually stays up? Deploy a high-performance VPS in Norway on CoolVDS. With our NVMe backing, your etcd latency will be the least of your worries.