Kubernetes vs. K3s vs. Docker Swarm: A No-Nonsense Orchestration Guide for Norwegian Infrastructure

If I get one more PagerDuty alert at 3:00 AM because a kube-apiserver timed out waiting for disk I/O, I am going to throw a server out the window. It is 2025. We have solved the container runtime wars (Docker/containerd won), but the orchestration battle is still making engineers sweat.

Here is the reality: Most of you do not need a 50-node Kubernetes cluster. You think you do because you read a Medium article from a Netflix engineer. But running a full K8s control plane on cheap, shared storage is a suicide mission for your uptime. I’ve spent the last decade fixing broken clusters across Europe, and the pattern is always the same: over-complexity meeting under-provisioned infrastructure.

For Norwegian businesses, the stakes are higher. Latency to NIX (Norwegian Internet Exchange) matters. Compliance with Datatilsynet matters. You can't just slap everything into a US hyperscaler's Frankfurt region and hope the latency doesn't kill your connection to the BankID API.

The Contenders: K8s vs. K3s vs. Swarm

Let’s strip away the marketing fluff and look at the technical debt you are signing up for.

Orchestrator          | Use Case                                       | Resource Overhead                          | Storage Requirement
Kubernetes (Vanilla)  | Enterprise, complex microservices, multi-team  | High (control plane needs dedicated cores) | Critical (etcd requires NVMe, <10ms latency)
K3s                   | Edge, single-node, dev/test, small VDS         | Low (single binary <100MB)                 | Moderate (SQLite or etcd)
Docker Swarm          | Simple web apps, legacy setups                 | Negligible                                 | Low

1. The Heavyweight: Kubernetes (v1.32+)

Kubernetes is the standard. By May 2025, version 1.32 is the stable rock we rely on. But it is hungry. The control plane—specifically etcd—is extremely sensitive to disk latency. If `fsync` takes too long, heartbeats slip, leader elections churn, and the cluster loses quorum. I saw this happen last month with a client hosting a high-traffic Magento setup. They were on "standard SSDs" from a budget provider. Their etcd latency spiked to 45ms during a traffic surge.

The result? The API server stopped responding. Pods didn't reschedule. Downtime.

This is where infrastructure choice dictates architecture. On CoolVDS, we enforce NVMe storage with high IOPS ceilings specifically to prevent this. If you are running Vanilla K8s, you must verify your storage speed first.

The "etcd" Reality Check

Run this on your current VPS. If the 99th percentile is above 10ms, do not install Kubernetes.

# Install fio if you haven't
apt-get install -y fio

# Create a scratch directory on the disk you plan to give etcd
mkdir -p /var/lib/etcd-test

# Run the benchmark that mimics etcd write patterns
# (sequential 2300-byte writes with an fdatasync after every write)
fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=/var/lib/etcd-test --size=22m --bs=2300 \
    --name=etcd_benchmark
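
The number that matters is in the fdatasync section of the fio report: the sync latency percentiles, printed in microseconds. Assuming you saved the output to a file such as fio.log (the filename is just an example), something like this pulls out the relevant block:

# Show the fdatasync latency percentiles from a saved fio run
grep -A 10 "fdatasync" fio.log

The 10ms threshold above applies to the 99.00th percentile of that section.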

2. The Pragmatic Choice: K3s

For 80% of projects in Norway—agencies, SMB SaaS, internal tools—K3s is superior. It strips out the legacy in-tree cloud provider plugins and consolidates the control-plane processes into a single binary. It starts in about 30 seconds. I use it for production workloads that need the declarative power of Kubernetes manifests without the overhead.

K3s is particularly good when you have strict data sovereignty requirements. You can spin up a K3s cluster on three CoolVDS instances in Oslo, keep all data within Norwegian borders to satisfy GDPR/Schrems II, and manage it exactly like a massive cluster.
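
As a concrete illustration, a minimal three-node K3s control plane with embedded etcd looks roughly like this (the IP 10.0.0.1 and the token are placeholders for your first server node):

# Node 1: bootstrap the cluster with embedded etcd
curl -sfL https://get.k3s.io | sh -s - server --cluster-init

# Nodes 2 and 3: join as servers, using the token found in
# /var/lib/rancher/k3s/server/node-token on node 1
curl -sfL https://get.k3s.io | K3S_TOKEN=<token-from-node-1> sh -s - server --server https://10.0.0.1:6443

Three servers give etcd a quorum, so losing one VDS does not take down the control plane.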

Pro Tip: When using K3s on a public VPS, disable the bundled Traefik ingress controller if you plan to run Nginx or HAProxy instead. It saves RAM and prevents port conflicts.
# Installing K3s with the bundled Traefik disabled
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable=traefik" sh -
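
If Nginx is your ingress of choice, one common route (assuming Helm is already installed) is the community ingress-nginx chart:

# Install ingress-nginx into its own namespace
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace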

3. The Zombie: Docker Swarm

People keep saying Swarm is dead. Yet it refuses to die. Why? Because a `docker-compose.yml` file is easier to read than 50 lines of Kubernetes YAML. If you have a monolithic application and just need zero-downtime deployments, Swarm works. But be warned: the ecosystem has moved on. Helm charts and operators are Kubernetes concepts with no Swarm equivalent in 2025. You are on your own.
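
To be fair to Swarm, here is roughly what that simplicity looks like; a minimal sketch, assuming a single stateless web service and an already-initialised swarm (docker swarm init):

# docker-compose.yml
version: "3.8"
services:
  web:
    image: nginx:1.27
    ports:
      - "80:80"
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        order: start-first   # start the new task before stopping the old one

# Deploy or update the stack with rolling replacement of tasks
docker stack deploy -c docker-compose.yml demo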

War Story: The "Noisy Neighbor" Incident

In 2023, I was debugging a cluster for a logistics company in Bergen. Their pods were getting OOMKilled (Out Of Memory) randomly, even though monitoring showed 40% free RAM. We spent days blaming the Java Garbage Collector.

The real culprit? Noisy neighbors on a shared VPS host. The host CPU was getting stolen by another tenant running crypto miners. The "CPU Steal" metric was spiking, causing the Kubelet to miss its heartbeat checks, which the control plane interpreted as a node failure. It tried to reschedule pods, causing a storm of activity that maxed out the memory.

We migrated the workload to CoolVDS dedicated KVM slices. CPU Steal dropped to 0.0%. The cluster stabilized immediately. Container orchestration assumes you actually control the compute resources; if your hypervisor lies to you, your orchestrator fails.
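
If you suspect the same problem, steal time is visible from inside the guest with standard tools:

# The 'st' column is the percentage of time the hypervisor gave your vCPU to another tenant
vmstat 1 5

# With sysstat installed, %steal per core:
mpstat -P ALL 1 5

On a dedicated KVM slice, both values should sit at or near zero.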

Configuration Best Practices for 2025

Whether you choose K8s or K3s, you must define resource limits. Without them, one memory leak in a Node.js app brings down the whole node.

Enforcing Limits with ResourceQuotas

Don't trust developers to add limits. Enforce them at the namespace level.

apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
  namespace: production
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
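
One gotcha: once a ResourceQuota covers cpu and memory, any pod that does not declare requests and limits is rejected. The usual companion is a LimitRange that injects defaults; a minimal sketch (the values here are assumptions, tune them per workload):

apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: production
spec:
  limits:
  - type: Container
    default:              # becomes the limit when a container declares none
      cpu: 500m
      memory: 512Mi
    defaultRequest:       # becomes the request when a container declares none
      cpu: 250m
      memory: 256Mi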

Network Policies (The GDPR Shield)

In a multi-tenant environment, pods can talk to each other by default. This is a security nightmare. Use NetworkPolicies to isolate sensitive workloads, like your database.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-access
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: backend
    ports:
    - protocol: TCP
      port: 5432
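
Note that this policy only isolates pods labelled role: db. To lock down everything in a namespace and then whitelist traffic explicitly, the standard companion is a default-deny ingress policy:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}      # empty selector matches every pod in the namespace
  policyTypes:
  - Ingress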

Conclusion: Infrastructure is the Foundation

Kubernetes is not a magic wand. It is a complex engine that requires high-octane fuel: low-latency storage, clean CPU cycles, and reliable networking. If you try to run a Ferrari on swamp mud, it will get stuck.

For the Norwegian market, where data privacy and reliability are non-negotiable, you need to own your stack. Do not rely on opaque cloud abstractions. Get a Linux terminal, verify your IOPS, and configure your orchestrator correctly.

Ready to build a cluster that actually stays up? Deploy a high-performance KVM instance on CoolVDS today. With our NVMe storage and Oslo-optimized routing, your `etcd` will thank you.