Kubernetes vs. Docker Swarm vs. Nomad: A 2023 Infrastructure Survival Guide for Nordic DevOps

The "Resume-Driven Development" Trap: Choosing an Orchestrator in 2023

Let’s be honest. Half the engineering teams in Oslo are deploying Kubernetes (K8s) not because they need it, but because their Lead Dev wants to put it on their CV. I’ve seen it happen. A simple e-commerce startup running a monolithic Magento instance tries to split it into 20 microservices, wraps it in K8s, and suddenly they are burning 40% of their compute budget just on the control plane.

Complexity is the enemy of uptime. In the Nordic hosting market, where we pride ourselves on efficiency and stability, choosing the wrong abstraction layer can be fatal.

Today, strictly looking at the landscape as of March 2023, we are going to dissect the three contenders: Kubernetes, Docker Swarm, and HashiCorp Nomad. We will look at this through the lens of a System Architect who has to answer to the CTO when the latency from the NIX (Norwegian Internet Exchange) spikes.

The Latency Reality: It's All About etcd

Before we touch the tools, you need to understand the bottleneck. Distributed systems rely on consensus. Kubernetes stores cluster state in etcd, which is built on Raft. Docker Swarm embeds Raft directly in its manager nodes. Nomad relies on Raft as well.

These consensus protocols are incredibly sensitive to disk write latency (fsync). If your underlying storage is slow, your cluster falls apart. It doesn't matter how good your YAML configuration is.

Pro Tip: Never run a production cluster on standard HDDs or shared SATA SSDs with high contention. The 99th-percentile fsync latency must stay under 10ms (ideally under 2ms). This is why at CoolVDS we standardized on NVMe storage for all KVM instances. We got tired of debugging timeouts that were actually just slow disks.
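
Want to verify this yourself before trusting anyone's marketing? The fio benchmark commonly used for etcd disk validation is a reasonable sanity check. A minimal sketch; the test directory is just an example, so point it at the disk that will actually hold your Raft/etcd data:

# Rough fsync latency check (recent fio versions report fdatasync percentiles).
mkdir -p /var/lib/etcd-bench
fio --name=fsync-test --directory=/var/lib/etcd-bench \
    --rw=write --bs=2300 --size=22m \
    --ioengine=sync --fdatasync=1
# Look at the fsync/fdatasync percentiles in the output:
# the 99th percentile should stay well under 10ms.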

1. Docker Swarm: The "Good Enough" Hero

Docker Swarm is declared "dead" every year, yet it survives. Why? Because it is simple. If you have a team of two developers and you need high availability (HA) for a web app, K8s is overkill.

The Scenario: You have a standard Nginx + Python/Node.js stack. You need zero-downtime deployments.

The Config:
You don't need 50 files. You need one docker-compose.yml:

version: '3.8'
services:
  web:
    image: my-app:latest
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
    ports:
      - "80:8000"

Then, the magic command:

docker stack deploy -c docker-compose.yml production_stack
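
If the nodes are not yet in swarm mode, there is a one-time init step first. A minimal sketch, with a placeholder advertise address:

# One-time: make this node a Swarm manager (10.0.0.1 is a placeholder).
docker swarm init --advertise-addr 10.0.0.1

# Join workers using the token printed above, then watch the rolling update:
docker node ls
docker service ls
docker service ps production_stack_web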

The Verdict: Swarm is perfect for small to medium Norwegian agencies handling GDPR-compliant data where the infrastructure needs to stay within the country (Data Sovereignty) but the team lacks a dedicated Platform Engineer.

2. Kubernetes (K8s): The Industrial Standard

Kubernetes is not a deployment tool. It is a framework for building platforms. In 2023, version 1.26 is the stable gold standard.

The "War Story": Last winter, I audited a setup for a fintech client in Stavanger. They were suffering random pod evictions. The culprit? They were using default memory settings on a generic VPS provider. The Linux OOM (Out of Memory) killer was murdering their kubelet process because the node ran out of RAM.

If you run K8s, you must set resource limits. If you don't, noisy neighbors inside your own cluster will kill your critical services.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-gateway
spec:
  # The selector must match the pod template labels, or the apiserver rejects it.
  selector:
    matchLabels:
      app: payment-gateway
  template:
    metadata:
      labels:
        app: payment-gateway
    spec:
      containers:
      - name: processor
        image: payment-proc:v1.4
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"

The CoolVDS Factor: Running K8s requires significant overhead. A 4GB RAM VPS is the absolute minimum for a worker node, but 8GB is realistic. On CoolVDS, we see clients clustering our 8GB/4 vCPU instances using K3s or RKE2 (Rancher) to strip out the bloat of upstream Kubernetes.
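
For reference, bootstrapping a single-node K3s lab on a fresh instance is a one-liner using the project's official install script. A sketch for experimentation; pin a specific version for anything production-facing:

# Quick single-node K3s lab:
curl -sfL https://get.k3s.io | sh -

# Verify the node and the bundled system pods:
sudo k3s kubectl get nodes -o wide
sudo k3s kubectl get pods -n kube-system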

3. HashiCorp Nomad: The Unix Philosophy

Nomad is the sniper rifle compared to the K8s shotgun. It’s a single binary. It schedules applications. That’s it. It integrates with Consul for networking and Vault for secrets.

The beauty of Nomad in 2023 is that it can orchestrate non-containerized workloads. Have a legacy Java JAR that needs to run directly on the JVM without Docker? Nomad handles that natively with its java and exec task drivers.

The Configuration (HCL):

job "database-backup" {
  datacenters = ["oslo-dc1"]
  type = "batch"

  group "backup" {
    task "script" {
      driver = "exec"
      config {
        command = "/usr/local/bin/backup.sh"
      }
      resources {
        cpu    = 500
        memory = 256
      }
    }
  }
}
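
Submitting and inspecting the job is equally terse. A sketch, assuming the file above is saved as database-backup.nomad:

nomad job validate database-backup.nomad
nomad job run database-backup.nomad

# Check where the batch job ran and read the task output:
nomad job status database-backup
nomad alloc logs <alloc-id> script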

System Tuning for Orchestrators

Regardless of your choice, the default Linux kernel settings are often too conservative for container networking. If you are deploying on a CoolVDS instance today, check these sysctl values.

1. Increase File Watchers:
Containers love logs and monitoring agents. The default `inotify` limit is too low.

# Check current limit
sysctl fs.inotify.max_user_watches

# Set in /etc/sysctl.conf for permanence
fs.inotify.max_user_watches = 524288

2. Enable IP Forwarding:
Essential for overlay networks (Flannel, Calico, Weave).

net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
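
Note that the net.bridge.* key only exists once the br_netfilter kernel module is loaded, so load it before reloading sysctl. A sketch; the modules-load file name is just an example:

# net.bridge.* keys require the br_netfilter module:
sudo modprobe br_netfilter
echo "br_netfilter" | sudo tee /etc/modules-load.d/containers.conf

# Reload all sysctl files and confirm the values stuck:
sudo sysctl --system
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables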

The Verdict for 2023

Feature           | Docker Swarm           | Kubernetes                    | Nomad
Learning Curve    | Low (Hours)            | High (Months)                 | Medium (Days)
Maintenance       | Low                    | High (Needs dedicated staff)  | Low
State Management  | Basic                  | Advanced (StatefulSets)       | Flexible
Ideal Use Case    | Web Apps, Small Teams  | Enterprise Microservices      | Hybrid / Legacy Mix

Why Infrastructure Matters More Than the Tool

You can run Kubernetes on a Raspberry Pi, but you shouldn't run a bank on it. The integrity of your orchestrator depends entirely on the stability of the virtualization layer.

In Norway, data privacy laws (Datatilsynet requirements) mean you often cannot just dump data into a US-owned cloud bucket. You need local storage. But local storage must be fast.

At CoolVDS, we don't oversell our CPUs. When you request 4 vCPUs for your K8s control plane, you get the cycles you paid for. We utilize high-frequency RAM and enterprise NVMe drives specifically to prevent etcd leader election failures. When your orchestrator thinks a node is dead because the disk was too slow to respond, it triggers a "rescheduling storm," moving pods around unnecessarily. That causes downtime.

Final Recommendation:
Start with Docker Swarm if your team has fewer than five people. Move to Kubernetes only when you need Custom Resource Definitions (CRDs) or complex ingress routing. And whatever you choose, build it on rock-solid KVM foundations.

Need a sandbox to test your cluster? Spin up a high-performance NVMe VPS on CoolVDS in Oslo. Experience the difference low latency makes.