Orchestration Wars 2025: Kubernetes vs. K3s vs. Docker Swarm on Nordic Infrastructure
Let’s be honest: 90% of you reading this do not need a bare-metal Kubernetes cluster spanning three availability zones. I’ve audited enough infrastructure in Oslo and Bergen to know that "Resume Driven Development" is still plaguing our industry. You spin up a massive K8s control plane for a simple Django app, and then wonder why your cloud bill is higher than your monthly revenue.
But when you do need orchestration, the choice defines your operational misery index for the next two years. In 2025, the landscape has settled, but the trade-offs haven't changed. Latency matters. Data sovereignty (thank you, Schrems II & III) matters. And disk I/O—the silent killer of orchestrators—matters most of all.
I’m going to break down the technical reality of running container orchestration on Virtual Dedicated Servers (VDS) in the Nordic region. No marketing fluff. Just `sysctl` flags, IOPS, and hard truths.
The Hidden Bottleneck: Etcd and Disk Latency
Before we argue about K3s versus vanilla K8s, let's address the elephant in the server room: fsync latency.
I recently helped a fintech startup in Stavanger debug a "flaky" cluster. Their pods were randomly restarting. The logs showed leader election failures. They blamed the network. They were wrong.
The culprit was slow disk I/O on their budget VPS provider. Kubernetes relies on etcd, which is incredibly sensitive to disk write latency. If fsync takes too long, the heartbeats fail, and the cluster panics. If you are not running on NVMe storage, you are playing Russian Roulette with your uptime.
Here is how you actually test if your VPS can handle a production orchestrator. Run this fio command:
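# Block size and sync behaviour below mimic etcd's WAL write pattern:
# ~2.3 KB writes, each followed by fdatasync. The target directory must exist.
mkdir -p test-data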
fio --rw=write --ioengine=sync --fdatasync=1 --directory=test-data --size=22m --bs=2300 --name=mytest
If your 99th percentile fdatasync latency is above 10ms, do not install Kubernetes. You will regret it. This is why we default to NVMe storage on CoolVDS. When we provision a node, we expect sub-millisecond commit times. Anything else is negligence.
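Once a cluster is actually running, etcd publishes the same measurement itself. A minimal sketch, assuming the default kubeadm certificate paths under /etc/kubernetes/pki/etcd (K3s and custom installs keep their certs elsewhere):

# Pull etcd's own fsync latency histogram straight from its metrics endpoint.
sudo curl -s --cacert /etc/kubernetes/pki/etcd/ca.crt \
  --cert /etc/kubernetes/pki/etcd/server.crt \
  --key /etc/kubernetes/pki/etcd/server.key \
  https://127.0.0.1:2379/metrics | grep etcd_disk_wal_fsync_duration

The bulk of the etcd_disk_wal_fsync_duration_seconds observations should sit in buckets at or below 0.01 (10 ms). If they don't, you are back in leader-election territory.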
Option 1: K3s (The Pragmatic Choice)
For 80% of deployments in Norway targeting local businesses, K3s is the answer. It’s CNCF certified, but it strips out the legacy cloud provider bloat. It compiles to a single binary. It consumes roughly 512MB of RAM for the control plane, whereas vanilla K8s eats 2GB just waking up.
The Configuration
When deploying K3s on a CoolVDS instance running Ubuntu 24.04, don't just run the default curl pipe. You need to optimize for the Flannel CNI and disable the components you don't need (like the default Traefik if you plan to use Nginx).
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server \
--disable traefik \
--disable servicelb \
--flannel-backend=host-gw \
--write-kubeconfig-mode 644" sh -
Pro Tip: Notice the --flannel-backend=host-gw flag. If your VDS instances are on the same Layer 2 network (which CoolVDS supports via private networking), this mode routes packets via IP routes instead of VXLAN encapsulation. This saves you CPU cycles on packet encapsulation and reduces latency. Essential for high-traffic Norwegian news sites.
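Two quick sanity checks once the install settles, assuming the default K3s pod CIDR of 10.42.0.0/16:

# kubectl ships inside the k3s binary; the node should be Ready, and kube-system
# should contain no traefik or svclb pods.
sudo k3s kubectl get nodes -o wide
sudo k3s kubectl get pods -n kube-system

# With host-gw, pod subnets on peer nodes show up as plain kernel routes via the
# private network instead of a flannel.1 VXLAN device.
ip route | grep 10.42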
Option 2: Vanilla Kubernetes (The Enterprise Standard)
If you are managing strict compliance requirements under Datatilsynet or handling sensitive health data, you might need the full RBAC granularity and admission controllers of vanilla Kubernetes (v1.31+).
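That granularity is the selling point. As a rough illustration (the role name and namespace here are placeholders), you can scope access down to read-only on a single resource type in a single namespace instead of handing out cluster-admin:

# Generate a narrowly scoped Role and inspect it before applying.
kubectl create role invoice-reader \
  --verb=get,list \
  --resource=secrets \
  --namespace=billing \
  --dry-run=client -o yaml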
However, running this on a VPS requires strict kernel tuning. You must ensure your Linux kernel is optimized for high connection counts and packet forwarding.
Add this to your /etc/sysctl.d/k8s.conf:
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.ipv4.conf.all.forwarding = 1
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 16384
vm.swappiness = 0
vm.overcommit_memory = 1
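One caveat before you reboot and hope: the bridge settings are silently ignored unless the br_netfilter module is loaded, and nothing applies until the kernel reloads the config. The standard prerequisite dance looks like this:

# Load the modules now and on every boot, then apply the sysctl settings.
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
sudo sysctl --system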
Critical Warning: Never use swap with Kubernetes. It confuses the scheduler's resource accounting. On CoolVDS, we allow you to disable swap partition creation during the OS install phase. Do it.
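On a VM that already has swap, the fix is two commands (the sed pattern assumes a standard fstab entry; check the file before you trust it):

# Turn swap off for the running system and keep it off after reboots.
sudo swapoff -a
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab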
Option 3: Docker Swarm (The "Just Works" Alternative)
I hear the snickering. "Swarm is dead." Is it? For a simple 3-node cluster hosting a WordPress setup and a Redis cache, Swarm is infinitely easier to manage than K8s. There is no separate etcd to babysit (the Raft consensus store is built into the manager nodes), and the YAML is just docker-compose.
If your team consists of two developers and zero SysAdmins, use Swarm. You can init a swarm in 10 seconds:
docker swarm init --advertise-addr $(hostname -i)
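Joining the other two nodes and deploying the WordPress + Redis stack is equally terse. A minimal sketch, assuming your existing docker-compose.yml already describes both services (the stack name is arbitrary):

# On the manager: print the join command to run on each worker node.
docker swarm join-token worker

# Then deploy the compose file you already use in development as a stack.
docker stack deploy -c docker-compose.yml shop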
However, Swarm lacks the rich ecosystem of Helm charts and Operators. You are trading extensibility for convenience. If you need auto-scaling based on custom Prometheus metrics, Swarm will fight you. If you just need to keep a service up, it's a tank.
Network Latency and Geography
The physical location of your control plane matters. If your users are in Oslo, but your VPS is in Frankfurt, you are adding 20-30ms of round-trip time (RTT) to every request.
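Measure it rather than guessing. A quick sketch, with placeholder hostnames standing in for the two candidate locations:

# Compare median RTT from where your users and upstream APIs actually sit.
ping -c 20 node1.osl.example.net | tail -1
ping -c 20 node1.fra.example.net | tail -1

Run it from the office, from your CI runner, and from wherever your upstream APIs live; the worst of those numbers is the one your users feel.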
| Parameter | K3s | Vanilla K8s | Docker Swarm |
|---|---|---|---|
| Min. Control-Plane RAM | 512 MB | 2 GB+ | 128 MB |
| Control-Plane Datastore | SQLite / etcd | etcd (strict) | Raft (built-in) |
| Disk Latency Sensitivity | High | Critical | Moderate |
| Best For | Edge / VPS | Enterprise / Hybrid | Small Teams |
Security: The Isolation Factor
Containers provide process isolation, not kernel isolation. If a container escapes (e.g., via a Dirty COW-style kernel exploit or a runc vulnerability), it lands directly on the host kernel. This is why the underlying virtualization of your VPS provider is paramount.
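A thirty-second check on any candidate VPS tells you what kind of boundary you are actually paying for:

# "kvm" or "qemu" means hardware virtualisation with a kernel of your own;
# "lxc" or "openvz" means you share the host kernel with every noisy neighbour.
systemd-detect-virt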
We see