Kubernetes vs. K3s vs. Docker Swarm: The 2024 Orchestration Reality Check for Norwegian DevOps

You Probably Don't Need Full-Blown Kubernetes

It’s 2024. Kubernetes (K8s) has won the container orchestration war. We know this. But let me be brutally honest: unless you are running Netflix-scale microservices or have a dedicated platform engineering team of five people, deploying a vanilla Kubernetes cluster from scratch is often a resume-driven decision, not a technical one.

I have seen too many Norwegian startups burn months of runway debugging CrashLoopBackOff errors and fighting with CNI plugins when they could have been shipping code. I've spent nights staring at etcd latency graphs because someone decided to run a three-node control plane on cheap HDD-backed instances.

Today, we aren't just looking at feature lists. We are looking at the operational reality of running containers on VPS infrastructure in Norway. We are comparing the heavy artillery (Kubernetes v1.30), the tactical rifle (K3s), and the reliable old shotgun (Docker Swarm).

The Latency Trap: Why Your Cluster Feels Slow

Before we touch the orchestration tools, we need to talk about the physics of distributed systems. All these orchestrators rely on a consensus algorithm (usually Raft) to maintain state. Kubernetes uses etcd. Docker Swarm uses its own Raft implementation.

Raft is extremely sensitive to disk write latency (fsync). If your underlying storage cannot acknowledge writes quickly enough, heartbeats miss their deadlines, leader elections churn, and the cluster stalls or loses quorum entirely. This is where "budget" VPS providers fail you: they oversell storage I/O.

Here is a basic fio command I run on every new CoolVDS instance before I even install Docker, just to ensure the NVMe storage is actually performing:

fio --name=etcd-bench --rw=write --ioengine=sync --fdatasync=1 --size=100m --bs=2300

If the 99th percentile fsync latency is above 10ms, do not install Kubernetes there. You will regret it. On our CoolVDS NVMe instances, we typically see latencies well under 2ms, which is why we can run production K8s clusters without the control plane catching fire.
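If fio is not installed, a cruder proxy (my own habit, not an official etcd benchmark) is dd with oflag=dsync, which forces a synchronous write after every 2300-byte block and thus roughly mimics etcd's WAL append pattern:

```shell
# Write 1000 x 2300-byte blocks, forcing a sync after each one.
dd if=/dev/zero of=./dsync-test bs=2300 count=1000 oflag=dsync
status=$?
# The throughput line dd prints on stderr is the number to watch;
# single-digit MB/s here usually means fsync latency etcd cannot live with.
rm -f ./dsync-test
```

It only gives you an average, not percentiles, so treat it as a smoke test and reach for fio before making a go/no-go decision.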

1. The Heavyweight: Kubernetes (v1.30)

Kubernetes is the standard. It has the ecosystem, the Helm charts, and the flexibility. But it is hungry. A bare minimum HA cluster requires three control plane nodes and at least two workers. That is five VPS instances before you deploy a single app.
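If you do go the kubeadm route, the HA bootstrap starts from a ClusterConfiguration. A minimal sketch (the endpoint DNS name is illustrative; the pod subnet matches Calico's default):

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.30.0
# A stable DNS name or load-balancer VIP in front of all three control planes
controlPlaneEndpoint: "k8s-api.my-cool-company.no:6443"
etcd:
  local:
    # Put this on your fastest NVMe volume; see the fsync discussion above
    dataDir: /var/lib/etcd
networking:
  podSubnet: "192.168.0.0/16"
```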

The Configuration Reality: Getting the networking right is usually the first hurdle. If you are setting up a cluster manually using kubeadm, you need to handle the CNI (Container Network Interface) carefully. Here is a snippet of a Calico configuration for a standard setup, explicitly setting the MTU to avoid packet fragmentation on virtual interfaces:

kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Typha is needed for scaling beyond 50 nodes
  typha_service_name: "none"
  # MTU = host interface MTU minus encapsulation overhead (VXLAN costs 50
  # bytes, IPIP 20, WireGuard 60); 1440 is a safe floor for a 1500-byte host MTU
  veth_mtu: "1440"

Verdict: Use vanilla K8s if you need strict compliance, granular RBAC for large teams, or specific CRDs (Custom Resource Definitions) for operators. But ensure your underlying nodes have dedicated CPU resources. CPU steal from a noisy neighbor on a shared VPS will cause liveness probes to time out, killing your pods unnecessarily.
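A quick way to spot CPU steal before blaming your probes is to sample the steal counter (field 9 of the aggregate cpu line in /proc/stat) twice and compare the deltas; a rough sketch:

```shell
# First sample: label, then user nice system idle iowait irq softirq steal.
read -r _ u1 n1 s1 i1 w1 q1 sq1 st1 _ < /proc/stat
t1=$((u1 + n1 + s1 + i1 + w1 + q1 + sq1 + st1))
sleep 1
# Second sample, one second later.
read -r _ u2 n2 s2 i2 w2 q2 sq2 st2 _ < /proc/stat
t2=$((u2 + n2 + s2 + i2 + w2 + q2 + sq2 + st2))
# Stolen jiffies as a share of all jiffies elapsed in the interval.
echo "steal: $((st2 - st1)) of $((t2 - t1)) jiffies"
```

Anything consistently above a few percent means the hypervisor is handing your cycles to someone else, and no amount of probe tuning will fix that.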

2. The Pragmatic Choice: K3s

For 90% of the projects I see in the Nordic region, K3s is the superior choice. It is a fully CNCF-certified Kubernetes distribution, but stripped of the bloat. It replaces etcd with SQLite (by default, though you can use etcd) and compiles everything into a single binary.

Why do I love K3s for VPS deployments? Memory footprint.

You can run a functional K3s master on a CoolVDS instance with 2GB RAM. Try that with vanilla K8s and watch the OOM killer murder your kube-apiserver. Here is how simple the deployment is compared to the kubeadm dance:

curl -sfL https://get.k3s.io | sh - 
# Check the node status immediately
sudo k3s kubectl get node

For a High Availability (HA) setup using an external DB (like a managed Postgres or a MariaDB cluster running on another CoolVDS instance), the config looks like this:

curl -sfL https://get.k3s.io | sh -s - server \
  --datastore-endpoint="postgres://k3s:super_secret_pass@10.0.0.5:5432/k3s_db" \
  --tls-san="k8s.my-cool-company.no" \
  --node-taint CriticalAddonsOnly=true:NoExecute
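The same flags can live in K3s's config file instead, which is easier to keep in version control than a shell one-liner. This mirrors the command above:

```yaml
# /etc/rancher/k3s/config.yaml -- equivalent to the CLI flags above
datastore-endpoint: "postgres://k3s:super_secret_pass@10.0.0.5:5432/k3s_db"
tls-san:
  - "k8s.my-cool-company.no"
node-taint:
  - "CriticalAddonsOnly=true:NoExecute"
```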

Verdict: This is the sweet spot for dev teams. You get the standard Kubernetes API, so all your Helm charts work, but you don't need a dedicated ops team to keep the lights on.

3. The Old Guard: Docker Swarm

Swarm is not dead. It is just "boring," and in infrastructure, boring is good. If your architecture is just Nginx + Python App + Redis, Swarm is incredibly efficient. It is built into the Docker engine you already have.

The docker-compose.yml file you use for development is 95% of the way to production. No translating to Deployments, Services, and Ingresses.

version: "3.8"
services:
  web:
    image: registry.coolvds.com/my-app:v2
    deploy:
      replicas: 4
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
    ports:
      - "80:8000"
    networks:
      - webnet

networks:
  webnet:
    driver: overlay

Verdict: Use Swarm if you have a small team and want to deploy today. However, be aware that the ecosystem is shrinking. You won't find many "Swarm Charts" for complex third-party tools.
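One Swarm feature worth knowing when you mix NVMe and standard nodes: placement constraints. A sketch, assuming you have labelled the node beforehand with docker node update --label-add disk=nvme (the image and label are illustrative):

```yaml
services:
  db:
    image: postgres:16
    deploy:
      placement:
        constraints:
          - node.role == worker
          - node.labels.disk == nvme
```

Deploy the stack with docker stack deploy -c docker-compose.yml myapp and Swarm will only schedule the service onto matching nodes.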

Pro Tip for Norwegian Compliance:
Regardless of the orchestrator, data sovereignty is critical under the GDPR and the Schrems II ruling. If you store personal data about Norwegian citizens, ensure your Persistent Volumes (PVs) are physically located in Norway. CoolVDS offers local data centers, so your data never inadvertently crosses the Atlantic. Configure your StorageClass with volumeBindingMode: WaitForFirstConsumer so the volume is provisioned on the node where the pod is actually scheduled.

The Storage Class Issue: Connecting State to Compute

The hardest part of container orchestration is persistence. Containers are ephemeral; data is not. In a cloud environment, you usually rely on the provider's CSI (Container Storage Interface) driver to provision block storage automatically.

When running on VPS infrastructure, you often need to handle this yourself or use a solution like Longhorn or Rook/Ceph. However, Ceph is a beast to manage.

For most setups on CoolVDS, I recommend using the Local Path Provisioner for high-performance databases where the pod is pinned to a specific node, or a lightweight NFS provisioner for shared assets (like CMS uploads). Here is a StorageClass definition for high-speed local NVMe storage:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-nvme
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain

Using WaitForFirstConsumer is critical here. It tells the scheduler: "Don't create the volume until you know which node the pod is going to run on." This prevents the common error where a pod tries to start on Node A, but the volume was created on Node B.
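To see the binding mode in action, a claim against the local-nvme class above stays Pending until a pod that mounts it is scheduled (the name and size here are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pg-data
spec:
  accessModes:
    - ReadWriteOnce
  # References the StorageClass defined above
  storageClassName: local-nvme
  resources:
    requests:
      storage: 20Gi
```

kubectl get pvc pg-data will report Pending until the consuming pod lands on a node; that is the WaitForFirstConsumer contract working as intended, not an error.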

Conclusion: Match the Tool to the Throughput

Don't pick Kubernetes because it's trendy. Pick it because you need the API. If you just need to run containers reliably, K3s on CoolVDS is likely your best price-to-performance ratio in 2024. You get the industry-standard API without the resource tax.

Remember, an orchestrator is only as stable as the hardware underneath it. Low latency to the Norwegian Internet Exchange (NIX) and stable NVMe I/O are non-negotiable for keeping etcd healthy.

Ready to build your cluster? Don't let slow I/O kill your consensus. Deploy a high-performance K3s node on CoolVDS in 55 seconds.