Kubernetes vs. Docker Swarm vs. K3s: The 2021 Orchestration Reality Check for Norwegian DevOps
Let’s be honest. Most of you reading this don’t need the complexity of full-blown Google Kubernetes Engine. You think you do because Hacker News tells you so, but your infrastructure bill tells a different story. I’ve spent the last six months migrating a client’s e-commerce platform from a bloated, overpriced managed cluster in Frankfurt back to bare KVM instances here in Oslo. The result? Latency dropped by 12ms, and the monthly burn rate was cut in half.
In August 2021, the container orchestration landscape is brutal. Docker Desktop just announced subscription changes, making everyone nervous. Kubernetes 1.22 just dropped with the removal of several beta APIs. The choice isn't just about features anymore; it's about stability, data sovereignty (thanks, Schrems II), and raw IOPS.
If you are building systems for the Nordic market, latency to the Norwegian Internet Exchange (NIX) and compliance with Datatilsynet aren't optional features. They are requirements. Let's break down the three contenders for your VPS infrastructure: The fading veteran (Swarm), the industry standard (K8s), and the agile newcomer (K3s).
1. Docker Swarm: Dead or Just Sleeping?
Three years ago, I would have argued for Swarm for any team under 10 people. It’s built into the Docker engine. It’s simple. You write a docker-compose.yml, run docker stack deploy, and go home.
But in 2021, Swarm is in maintenance mode. Mirantis bought Docker Enterprise, and while they promise support, the ecosystem momentum has shifted entirely to Kubernetes. However, Swarm still has one massive advantage: Velocity.
The Configuration Reality
Compare setting up a secret in Swarm versus K8s. In Swarm, it's trivial.
```yaml
# docker-compose.yml for Swarm
version: "3.8"

services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 3
    secrets:
      - site_cert

secrets:
  site_cert:
    external: true
```
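To wire that up end to end, the secret has to exist before you deploy. Assuming the certificate sits in a local PEM file (the path and stack name here are illustrative), it's two commands:

```bash
# Create the secret from a local file (path is an example)
docker secret create site_cert ./site_cert.pem

# Deploy; Swarm mounts the secret inside containers at /run/secrets/site_cert
docker stack deploy -c docker-compose.yml shop
```

No base64 encoding, no RBAC, no extra manifests. That is the velocity argument in a nutshell.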
Verdict: Use Swarm if you have a legacy monolithic app you are just containerizing now, or if you refuse to hire a dedicated Ops person. Just know that you are building on a technology with an expiration date.
2. Kubernetes (v1.22): The Heavyweight Champion
Kubernetes is no longer just software; it's the operating system of the cloud. With version 1.22, we see the removal of Ingress beta versions. If you haven't updated your manifests to networking.k8s.io/v1, your pipelines are about to break.
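For reference, here is a minimal Ingress under the surviving `networking.k8s.io/v1` schema. The hostname and backend service are placeholders; note that `pathType` is now mandatory:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nordic-api
spec:
  rules:
    - host: api.example.no        # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix      # required in v1
            backend:
              service:
                name: nordic-api  # assumes a Service with this name
                port:
                  number: 8080
```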
The problem with K8s isn't capability; it's the resource tax. A vanilla K8s control plane (API server, Scheduler, Controller Manager, etcd) eats RAM for breakfast. Running a highly available control plane on cheap, spinning-disk VPS instances is a suicide mission.
Pro Tip: The Etcd Bottleneck
etcd is sensitive to disk write latency. If fsync takes too long, your cluster heartbeat fails, and cascading failures begin. We benchmarked this. On standard SSDs, etcd hits leader-election timeouts under load. On CoolVDS NVMe instances, write latency stays consistently under 2ms, which is critical for cluster stability.
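You can check this yourself before trusting a disk with etcd. A quick fsync benchmark with fio, using parameters in the spirit of the upstream etcd disk guidance (directory and sizes are illustrative):

```bash
# Measure fdatasync latency on the disk that will hold etcd data
fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=/var/lib/etcd-bench --size=22m --bs=2300 --name=etcd-fsync
# Read the fsync/fdatasync percentiles in the output; if the p99
# creeps into double-digit milliseconds, expect heartbeat trouble.
```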
The Deployment Standard
Here is what a modern, v1.22 compatible deployment looks like. Note the API version changes.
```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nordic-api
  labels:
    app: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: go-api
          image: registry.coolvds.com/nordic-api:v1.4
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: "64Mi"
              cpu: "250m"
            limits:
              memory: "128Mi"
              cpu: "500m"
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
```
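Applying it and watching the rollout is standard kubectl:

```bash
kubectl apply -f deployment.yaml
kubectl rollout status deployment/nordic-api
```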
If you are deploying this in Norway, you need to consider Data Sovereignty. Using a US-managed Kubernetes service often involves control planes hosted outside of Norway, or telemetry data crossing borders. By bootstrapping K8s yourself using kubeadm on CoolVDS, you ensure 100% of the data—and the metadata—stays within Norwegian jurisdiction.
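A minimal bootstrap sketch, assuming a fresh instance with containerd and the kubeadm packages already installed, and a pod CIDR chosen to match Calico's default:

```bash
# Control-plane node: pod CIDR matches Calico's default range
sudo kubeadm init --pod-network-cidr=192.168.0.0/16

# Make kubectl usable for your user (paths as printed by kubeadm)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Worker nodes then run the "kubeadm join ..." command kubeadm printed
```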
3. K3s: The Pragmatic Compromise
Rancher’s K3s strips out the legacy cloud-provider add-ons, the storage drivers you’ll never use, and the general bloat. It ships as a single binary under 100MB. For a lean VPS environment, it is often the superior choice over upstream K8s.
It replaces etcd with SQLite by default, but for production, you can (and should) still use etcd. The real win here is memory footprint. You can run a K3s master on a 2GB RAM instance comfortably.
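Since v1.19, K3s can initialize an embedded etcd cluster at install time instead of SQLite. A sketch, reusing the token and IP conventions from the join example below:

```bash
# First server: start with embedded etcd rather than SQLite
curl -sfL https://get.k3s.io | sh -s - server --cluster-init

# Additional servers join the etcd quorum (token from the first node)
curl -sfL https://get.k3s.io | sh -s - server \
    --server https://10.0.0.5:6443 --token MYTOKEN
```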
Setting up K3s on a CoolVDS Instance
It’s ridiculously fast. I timed this yesterday: 45 seconds from login to active cluster.
```bash
# On the master node (CoolVDS Instance A)
curl -sfL https://get.k3s.io | sh -

# Get the token
cat /var/lib/rancher/k3s/server/node-token

# On the worker node (CoolVDS Instance B)
curl -sfL https://get.k3s.io | K3S_URL=https://10.0.0.5:6443 K3S_TOKEN=MYTOKEN sh -
```
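Then verify from the master that both nodes registered (K3s bundles its own kubectl):

```bash
sudo k3s kubectl get nodes -o wide
```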
The Hardware Truth: Why Virtualization Matters
Software orchestration is useless if the underlying hypervisor steals your CPU cycles. Many budget providers oversell their CPU cores. You think you have 4 vCPUs, but you’re fighting 20 other neighbors for time slices. In a containerized environment, this manifests as random latency spikes in your API—the kind that makes customers leave.
We see this constantly in benchmarks. Containers are lightweight processes; they spin up and down in milliseconds. They require a hypervisor that respects that agility.
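A quick sanity check on any new VPS: watch the steal column in vmstat. If it sits above zero while your own load is low, the host is oversold:

```bash
# Sample CPU stats once a second, five times; the last column ("st")
# is the percentage of time the hypervisor stole from this guest
vmstat 1 5
```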
| Feature | Kubernetes (K8s) | Docker Swarm | K3s |
|---|---|---|---|
| Learning Curve | Steep | Low | Medium |
| Resource Usage | High | Low | Very Low |
| Schrems II Safe? | Yes (Self-hosted) | Yes (Self-hosted) | Yes (Self-hosted) |
| Ideal Storage | NVMe (Required for etcd) | Standard SSD | Standard SSD/NVMe |
The Network Layer: CNI and Latency
Once you pick your orchestrator, you must pick a CNI (Container Network Interface). In 2021, Flannel is the easy choice, but Calico gives you network policies which are essential for security compliance.
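As a concrete example of why that matters, a namespace-wide default-deny ingress policy is about ten lines (the namespace name is a placeholder):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production    # placeholder; apply per namespace
spec:
  podSelector: {}           # selects every pod in the namespace
  policyTypes:
    - Ingress               # no ingress rules listed = all inbound denied
```

Flannel will accept this object and silently ignore it; Calico actually enforces it. That difference is what your auditor cares about.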
If you are routing traffic between nodes, standard generic cloud networking adds overhead. CoolVDS instances are connected via high-throughput localized switching. When pod A on Node 1 talks to pod B on Node 2, you want that packet to stay within the rack or the same datacenter hall in Oslo. We minimize the hops.
Here is a Calico configuration snippet to ensure IP autodetection works correctly on a VPS where you might have multiple interfaces (public/private):
```yaml
# calico.yaml snippet
- name: IP_AUTODETECTION_METHOD
  value: "interface=eth0"
```
Conclusion: Own Your Stack
Stop renting control planes from tech giants who don't care about Norwegian data laws. If you need scale, Kubernetes is the answer. If you need efficiency, K3s is the sharpest tool in the shed in 2021.
But remember: Kubernetes is a force multiplier for your hardware. If the hardware is slow, K8s just helps you fail faster. You need dedicated resources, low-latency NVMe storage, and a network that keeps your data inside Norway.
Don't let IO_WAIT kill your cluster. Spin up a CoolVDS NVMe instance today and deploy a K3s cluster that actually screams.