K8s vs. Swarm vs. K3s: Orchestration Reality Check for Nordic Ops
Let’s cut the marketing noise. If I see one more junior dev spinning up a full-blown HA Kubernetes cluster on t2.micro instances with network-attached storage located three countries away, I’m going to lose it. In the last six months, my team and I have audited over a dozen infrastructure setups across Oslo and Bergen. The pattern is always the same: over-engineering meets under-provisioning. You want the resilience of Google, but you are running on a budget that barely covers coffee. The reality of container orchestration in late 2023 is that while Kubernetes (K8s) has won the mindshare war, it is often the wrong tool for the job unless you have the underlying iron to support it. I am talking about raw IOPS, stable latency, and CPU cycles that aren't stolen by a noisy neighbor mining crypto on the same physical host. This article isn't a "Getting Started" guide; it is a survival guide for those of us responsible for keeping services green when the traffic spikes hit.
The Latency Killer: Etcd and Your Storage
Before we even argue about Swarm versus K8s, we need to address the elephant in the rack: etcd. If you are running Kubernetes, you are running etcd. This key-value store is the brain of your cluster, and it is notoriously sensitive to disk write latency. I once spent three days debugging a flapping cluster for a client in Trondheim. The API server kept timing out, pods were restarting randomly, and the logs were screaming about leader election failures. The culprit? They were hosting their control plane on budget VPS providers with shared HDD storage. Every time a neighbor did a backup, fsync latency spiked to 40ms. Etcd starts panicking if fsync takes longer than 10ms. If you are serious about orchestration, you need NVMe. Period. We use fio to validate every environment before deployment. If the random write speeds don't meet the threshold, we don't deploy K8s there.
Here is the exact benchmark command we use to disqualify sluggish hosts immediately:
mkdir -p test-data && fio --rw=write --ioengine=sync --fdatasync=1 --directory=test-data --size=22m --bs=2300 --name=mytest
On a proper CoolVDS NVMe instance, the 99th percentile latency sits comfortably below 2ms. On standard cloud block storage, I've seen it drift to 15-20ms. That difference is the difference between a self-healing cluster and a cascading failure on a Friday afternoon.
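If the cluster is already up, etcd will tell you the same story itself. Assuming you have exposed its metrics endpoint locally (for example with --listen-metrics-urls=http://127.0.0.1:2381 on the etcd static pod), a quick grep of the fsync histogram shows whether the disk keeps up:
curl -s http://127.0.0.1:2381/metrics | grep etcd_disk_wal_fsync_duration_seconds
If a meaningful share of samples land in buckets above 0.01 (10ms), that host has no business running a control plane.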
Option 1: Docker Swarm (The "Just Works" Choice)
Docker Swarm is not dead, despite what the CNCF landscape suggests. For teams of 2-5 developers managing a few microservices, Swarm is arguably superior because of its negligible overhead. You don't need a dedicated team just to manage the control plane. In 2023, Swarm is stable, boring, and fast. I recently migrated a logistics dashboard from a managed K8s service back to a Swarm cluster running on three bare-metal-style VPS nodes. The cost dropped by 60%, and the deployment time went from 4 minutes to 15 seconds. Swarm uses the standard Docker API, meaning your CI/CD pipelines probably don't even need to change. The beauty lies in the simplicity of the mesh networking. You initialize a manager, join workers, and you are done.
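For reference, the entire bootstrap looks like this (the IP is a placeholder for your manager's private address, and the join token is printed by the init command):
# On the first node, which becomes the manager
docker swarm init --advertise-addr 10.0.0.1
# On each worker, paste the token printed above
docker swarm join --token <worker-token> 10.0.0.1:2377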
Pro Tip: When using Swarm on public nodes, always encrypt the overlay network traffic. It adds slight CPU overhead, but with modern AES-NI instructions on CoolVDS CPUs, it's negligible.
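Creating that encrypted overlay is a one-liner; the overlay_net name is simply what the service example below expects:
docker network create --driver overlay --opt encrypted overlay_net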
To deploy a secure, replicated service on Swarm, you barely need a manifest file. A single command handles the ingress, load balancing, and replicas:
docker service create \
--name micro-api \
--replicas 3 \
--publish published=8080,target=80 \
--update-delay 10s \
--network overlay_net \
nginx:alpine
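Verifying the rollout is just as terse:
docker service ls
docker service ps micro-api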
However, Swarm lacks the rich ecosystem of Operators and CRDs. If you need complex stateful sets with automated backups or advanced autoscaling based on Prometheus metrics, Swarm will force you to write custom bash scripts, which is a technical debt trap.
Option 2: Kubernetes (The Heavyweight Champion)
Kubernetes version 1.28 has just dropped, bringing improved support for swap memory and sidecar containers. But with great power comes massive resource consumption. A vanilla kubeadm install, with its API server, controller-manager, scheduler, and etcd, routinely eats well over a gigabyte of RAM just to exist. On a 2GB VPS, K8s is suffocating before you deploy a single app. This is where infrastructure selection becomes critical. In Norway, data sovereignty is becoming a massive legal headache due to Schrems II. Hosting your K8s cluster on US-owned cloud infrastructure, even if the region is "Europe," is a risk many CTOs are no longer willing to take. By running K8s on local Norwegian infrastructure, like CoolVDS, you satisfy Datatilsynet requirements while gaining the latency benefits of being physically close to NIX (Norwegian Internet Exchange).
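If you want to see exactly where that memory goes, and you have metrics-server installed, sort the control plane pods by consumption; on a small node the API server and etcd almost always top the list:
kubectl top pods -n kube-system --sort-by=memory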
If you are deploying K8s manually, you need to tune the kubelet to prevent it from evicting pods too aggressively during traffic spikes. Here is a production-grade KubeletConfiguration snippet we use to ensure stability on high-performance VDS nodes:
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
evictionSoft:
  memory.available: "200Mi"
evictionSoftGracePeriod:
  memory.available: "1m"
systemReserved:
  memory: "500Mi"
  cpu: "500m"
kubeReserved:
  memory: "200Mi"
  cpu: "100m"
Configuring systemReserved is crucial. Without it, your pods will eat all available RAM, starving the SSH daemon and locking you out of your own server when you need to debug. I've been there, waiting for a console reboot while a client screams on Slack. Never again.
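After pointing the kubelet at this file with --config and restarting it, confirm that the reservations were actually carved out of the node's schedulable capacity (the node name is a placeholder):
kubectl describe node <node-name> | grep -A 6 "Allocatable"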
Option 3: K3s (The Efficiency King)
For most deployments that don't require multi-cloud federation, K3s from Rancher is the answer. It is a fully certified Kubernetes distribution stripped of the legacy in-tree cloud provider bloat, and it ships as a single binary. By default it swaps etcd for SQLite on single-server setups, with embedded etcd available at a much lighter footprint when you need HA. We have benchmarked K3s on CoolVDS 4GB instances, and it idles at around 500MB RAM, leaving plenty of room for your Java or Node.js applications. It treats the server resources with respect. K3s is particularly potent when combined with the local-path storage provisioner, which uses the host's high-speed local NVMe storage rather than slow network-attached block storage.
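Installation is a single command. The --disable traefik flag below is an optional tweak for teams that bring their own ingress controller; drop it if the bundled Traefik suits you:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik" sh -
# kubeconfig for kubectl access ends up here
sudo cat /etc/rancher/k3s/k3s.yaml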
Deploying a Stateful Workload on K3s
Here is how you define a StorageClass to leverage the local NVMe speed explicitly. This bypasses the network overhead entirely, giving your databases raw I/O performance.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fast-db-storage
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 10Gi
With this setup, your database writes hit the NVMe drive directly. On CoolVDS, this translates to database transaction speeds that rival dedicated bare metal servers, but with the flexibility of virtualization.
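Attaching that claim to a workload is the standard PVC dance. Here is a minimal sketch with an illustrative Postgres pod; the pod name, image, and password handling are placeholders, only the claimName matters:
apiVersion: v1
kind: Pod
metadata:
  name: fast-db
spec:
  containers:
    - name: postgres
      image: postgres:15-alpine
      env:
        - name: POSTGRES_PASSWORD
          value: "changeme"  # use a Secret for anything beyond a demo
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: fast-db-storage
Because the StorageClass uses WaitForFirstConsumer, the volume is only provisioned on the node where this pod actually lands, which is exactly what you want for local NVMe.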
The Network Layer: CNI Plugins
Another area where performance dies is the CNI (Container Network Interface). Flannel is the default for many, but it uses VXLAN encapsulation, which adds CPU overhead. For high-throughput environments, use Calico with BGP or Cilium (which uses eBPF). eBPF is revolutionary because it processes packets in a sandboxed kernel VM without walking the long iptables chains that kube-proxy builds up. If you are targeting the Norwegian market, every millisecond of latency counts. Routing traffic efficiently within the cluster ensures that the low latency provided by the Oslo location isn't wasted by inefficient internal packet shuffling.
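On K3s that means starting the server without the bundled Flannel and dropping Cilium in afterwards. A sketch of that sequence, assuming the Cilium CLI is already installed on the node:
# Start K3s without its default CNI and network policy controller
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--flannel-backend=none --disable-network-policy" sh -
# Install Cilium and wait for it to become healthy
cilium install
cilium status --wait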
To check your current CNI pod status and ensure no restarts are happening:
kubectl get pods -n kube-system -l k8s-app=cilium
Conclusion: Match the Orchestrator to the Infrastructure
There is no "best" orchestrator, but there is definitely a "worst" configuration: heavy K8s on weak I/O. If you need simple redundancy, stick to Docker Swarm. If you need the full cloud-native stack, use K3s or K8s, but ensure your underlying VDS provides the low latency and NVMe throughput required to keep etcd stable. In the Nordic market, where privacy laws like GDPR intersect with high user expectations for speed, owning your infrastructure on a provider like CoolVDS gives you the control that hyperscalers abstract away.
Don't let storage I/O be the bottleneck that wakes you up at 3 AM. Provision a high-performance NVMe instance in Oslo today and test your cluster's limits.